gskrawler
==============
gskrawler crawls your domain, scanning every page of your website and extracting page titles, descriptions, keywords, links, and more.
Requirements
============================
BeautifulSoup4
requests
urllib3 1.22
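Assuming the package is published on PyPI under the same name (the source distribution ``gskrawler-1.0.0.tar.gz`` suggests it is), it and the requirements above can typically be installed with pip::

```shell
# Install gskrawler from PyPI; pip resolves BeautifulSoup4,
# requests, and urllib3 alongside it.
pip install gskrawler
```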
Commands
============================
<head>
------------
gskrawler.head(url)
<title>
------------
gskrawler.title(url)
<body>
------------
gskrawler.body(url)
Response in HTML format
------------
gskrawler.html(url)
Links in a website
------------
gskrawler.links(url)
Class elements
------------
gskrawler.tagclass(url, tagname, classname)
ID elements
------------
gskrawler.tagid(url, tagname, idname)
Emails in a website
------------
gskrawler.emails(url)
Images in a website
------------
gskrawler.images(url)
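Since BeautifulSoup4 is a listed requirement, ``tagclass`` and ``tagid`` presumably wrap BeautifulSoup's element lookup. A minimal sketch of the equivalent logic (the sample HTML and the parsing step are assumptions for illustration, not gskrawler's actual implementation, and the network fetch is replaced by an inline document)::

```python
from bs4 import BeautifulSoup

# Sample HTML standing in for a page that gskrawler would fetch from `url`.
html = """
<html><body>
  <ul class="set"><li>alpha</li><li>beta</li></ul>
  <div id="main">content</div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Roughly what gskrawler.tagclass(url, 'ul', 'set') would return:
by_class = soup.find_all("ul", class_="set")

# Roughly what gskrawler.tagid(url, 'div', 'main') would return:
by_id = soup.find_all("div", id="main")

print(by_class[0].get_text(strip=True))  # concatenated text of the matched <ul>
print(by_id[0].get_text())
```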
----
Example Code
------------
Open a Python interpreter::

    >>> import gskrawler
    >>> gskrawler.emails('https://www.fisglobal.com/')
    >>> gskrawler.images('https://www.fisglobal.com/')
    >>> gskrawler.head('https://www.fisglobal.com/')
    >>> gskrawler.tagclass('https://www.naukri.com/', 'ul', 'set')
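The ``emails`` command presumably scans page text for address-like strings. A regex-based sketch of that idea (the pattern, the sample text, and the assumption that gskrawler works this way are all illustrative; a deliberately simple pattern is used here)::

```python
import re

# Sample page text standing in for fetched HTML.
text = "Contact us at info@example.com or support@example.org."

# A simple email pattern; real-world matching needs more care.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

emails = EMAIL_RE.findall(text)
print(emails)  # ['info@example.com', 'support@example.org']
```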