Parse robots.txt files and find indexed URLs
robotsparser
Python library that parses robots.txt files
Functionalities
- Automatically discover all sitemap files
- Decompress gzipped sitemap files
- Fetch all URLs from sitemaps
Install
pip install robotsparser
Usage
from robotsparser.parser import Robotparser
robots_url = "https://www.example.com/robots.txt"
rb = Robotparser(url=robots_url, verbose=True)
# Initiate the crawl of sitemaps and indexed URLs. The sitemap_url_crawl_limit argument is optional
rb.read(fetch_sitemap_urls=True, sitemap_url_crawl_limit=5)
# Show information
rb.get_sitemap_indexes() # returns sitemap indexes
rb.get_sitemaps() # returns sitemaps
rb.get_urls() # returns a list of all urls
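Since get_urls() returns a list, the crawled URLs can be filtered or written out like any other Python list. A minimal sketch, assuming the entries are plain URL strings (the "/blog/" filter and the output file name below are arbitrary illustrations, not part of the library):

# Assumes get_urls() returns plain URL strings; the filter and file name are illustrative only
urls = rb.get_urls()
blog_urls = [u for u in urls if "/blog/" in u]
with open("crawled_urls.txt", "w") as f:
    f.write("\n".join(urls))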
Multiprocessing usage (crawl in the background)
Crawl in the background and write new entries to a file as they are discovered.
This is useful for sites whose sitemaps are heavily nested and take a long time to crawl.
from robotsparser.parser import Robotparser
import multiprocessing as mp
from sh import tail

if __name__ == '__main__':
    mp.freeze_support()
    robots_url = "https://www.example.com/robots.txt"
    entries_log_file = "./entries.log"
    rb = Robotparser(url=robots_url, verbose=False, sitemap_entries_file=entries_log_file)

    # Run the sitemap crawl in a separate process so the main process can follow the log
    sitemap_crawl_proc = mp.Process(target=rb.read, kwargs={'fetch_sitemap_urls': False})
    sitemap_crawl_proc.start()

    # Print new entries from the log file as they appear, until the crawl finishes
    for line in tail("-f", entries_log_file, _iter=True):
        print(line.replace("\n", ""))
        if not sitemap_crawl_proc.is_alive():
            break
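The sh package used above only works on Unix-like systems. If it is not available, the tail loop can be replaced with a plain Python polling loop; the sketch below is an assumption-based alternative that reuses the entries_log_file and sitemap_crawl_proc objects from the example above and is not part of the library's documented API.

# Hypothetical sh-free replacement for the tail loop above (use it in place of the
# `for line in tail(...)` block, inside the __main__ guard)
import time

def follow(path, proc, poll_interval=0.5):
    # Yield lines appended to `path` until the crawl process exits
    with open(path, "a+") as f:  # "a+" creates the file if the crawler has not written yet
        f.seek(0)
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            elif not proc.is_alive():
                break
            else:
                time.sleep(poll_interval)

for entry in follow(entries_log_file, sitemap_crawl_proc):
    print(entry)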