gskrawler
==============
gskrawler crawls your domain and scans every page of your website, extracting page titles, descriptions, keywords, links, and more.
Requirements
============================
- BeautifulSoup4
- requests
- urllib3 1.22
Commands
============================
<head>
------------
gskrawler.head(url)
<title>
------------
gskrawler.title(url)
<body>
------------
gskrawler.body(url)
Response in HTML format
-----------------------
gskrawler.html(url)
Links in a website
------------------
gskrawler.links(url)
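As a rough sketch of what a link extractor like ``gskrawler.links`` might do internally (the package lists BeautifulSoup4 and requests as requirements, so its real implementation likely differs; this stand-alone version uses only the standard library and parses a sample HTML string instead of fetching a live page):

```python
from html.parser import HTMLParser

# Hypothetical stand-in for gskrawler.links(url): collect every href
# found on anchor tags. Fetching the page is omitted so the sketch
# stays self-contained.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

sample = '<html><body><a href="/about">About</a><a href="https://example.com">Home</a></body></html>'
parser = LinkCollector()
parser.feed(sample)
print(parser.links)  # ['/about', 'https://example.com']
```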
Elements by class
-----------------
gskrawler.tagclass(url, tagname, classname)
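A hedged sketch of the kind of tag-plus-class filtering that ``gskrawler.tagclass(url, tagname, classname)`` performs (again using only the standard library on a sample HTML string; the package itself presumably delegates this to BeautifulSoup4):

```python
from html.parser import HTMLParser

# Hypothetical class filter: collect the text of every <tagname>
# element whose class attribute contains classname.
class ClassFilter(HTMLParser):
    def __init__(self, tagname, classname):
        super().__init__()
        self.tagname = tagname
        self.classname = classname
        self.inside = False   # True while within a matching element
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == self.tagname:
            classes = (dict(attrs).get("class") or "").split()
            if self.classname in classes:
                self.inside = True

    def handle_endtag(self, tag):
        if tag == self.tagname:
            self.inside = False

    def handle_data(self, data):
        if self.inside and data.strip():
            self.texts.append(data.strip())

sample = '<ul class="set"><li>one</li><li>two</li></ul><ul class="other"><li>x</li></ul>'
parser = ClassFilter("ul", "set")
parser.feed(sample)
print(parser.texts)  # ['one', 'two']
```

This mirrors the ``('https://www.naukri.com/', 'ul', 'set')`` call in the example code below, but against an inline document rather than the live site.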
Elements by id
--------------
gskrawler.tagid(url, tagname, idname)
Emails in a website
-------------------
gskrawler.emails(url)
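One plausible way an email extractor like ``gskrawler.emails`` could work is a regex scan over the fetched page text. A minimal sketch under that assumption (the fetch step is omitted; the regex and sample text are illustrative, not the package's actual pattern):

```python
import re

# Hypothetical email scan: pull email-like strings from page text.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

page_text = "Contact sales@example.com or support@example.org for help."
emails = sorted(set(EMAIL_RE.findall(page_text)))
print(emails)  # ['sales@example.com', 'support@example.org']
```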
Images in a website
-------------------
gskrawler.images(url)
----
Example Code
------------
Open a Python interpreter::

    >>> import gskrawler
    >>> gskrawler.emails('https://www.fisglobal.com/')
    >>> gskrawler.images('https://www.fisglobal.com/')
    >>> gskrawler.head('https://www.fisglobal.com/')
    >>> gskrawler.tagclass('https://www.naukri.com/', 'ul', 'set')
Download files
--------------
Source distribution: gskrawler-1.0.0.tar.gz (16.2 kB)
File details
------------
Details for the file gskrawler-1.0.0.tar.gz.

File metadata
-------------
- Download URL: gskrawler-1.0.0.tar.gz
- Upload date:
- Size: 16.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
-----------

| Algorithm | Hash digest |
|---|---|
| SHA256 | 0686faaf69bd5d18b8a643f528e434658c91840204a473878f05d7692da8d8dc |
| MD5 | 346f65562428ab34096fd54cc65daa1d |
| BLAKE2b-256 | 02c924fe4def0001451535a2390627b5d7e96f7a102d096aa130bd92da1ad713 |