A robots.txt parser alternative to Python's robotparser module
Project description
Robotexclusionrulesparser is an alternative to the Python standard library module robotparser. It fetches and parses robots.txt files and can answer whether a given user agent is permitted to visit a given URL.
This module has several features that the standard library module robotparser does not, including the ability to decode non-ASCII robots.txt files, respect for Expires headers, and support for the Crawl-delay and Sitemap directives and for wildcard syntax in path names. A short usage sketch follows.
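The class and method names in the sketch below (RobotExclusionRulesParser, fetch, is_allowed, get_crawl_delay) are assumed from the module's public interface; ReadMe.html is the definitive reference.

```python
# A minimal usage sketch; names here are assumptions, not guaranteed API.
import robotexclusionrulesparser

parser = robotexclusionrulesparser.RobotExclusionRulesParser()

# fetch() downloads and parses robots.txt; the module respects the
# Expires header when deciding how long the parsed rules stay fresh.
parser.fetch("https://example.com/robots.txt")

user_agent = "MyCrawler/1.0"
if parser.is_allowed(user_agent, "/private/page.html"):
    print("Fetching is permitted for this user agent.")

# Crawl-delay is exposed per user agent (None if the directive is absent).
delay = parser.get_crawl_delay(user_agent)
```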
Complete documentation (including a comparison with the standard library module robotparser) is available in ReadMe.html.
Robotexclusionrulesparser is released under a BSD license.
Download files
Filename | Size | File type | Python version
---|---|---|---
robotexclusionrulesparser-1.7.1.tar.gz | 31.5 kB | Source | None
Hashes for robotexclusionrulesparser-1.7.1.tar.gz
Algorithm | Hash digest
---|---
SHA256 | d23aa14ae8145c13c95612d696736bad52a4bd0819ce8c9437ee745098fb8388
MD5 | f11ccefc9ec9397db8fc8e62b79c93ef
BLAKE2-256 | 399774634de03a0856160a8c2fa92f03cdf1827c3b1d3d42378d4b79119cd9fa
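The digests above can be used to check a downloaded archive before installing it. Here is a short sketch using Python's standard hashlib module and the published SHA256 digest:

```python
# Verify the downloaded tarball against the published SHA256 digest.
import hashlib

expected = "d23aa14ae8145c13c95612d696736bad52a4bd0819ce8c9437ee745098fb8388"
with open("robotexclusionrulesparser-1.7.1.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
assert actual == expected, "Checksum mismatch; do not install this file."
```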