Disallow indexing completely via robots.txt
This module implements a robots.txt file that prevents web crawlers (such as Google) from indexing the site's pages. It does not depend on the Website module.
Installation is as usual; no specific installation steps or configuration are required.
WARNING: this module must not be used together with the Website module, which provides its own robots.txt functionality.
No configuration is needed: once the module is installed, it serves a robots.txt file (e.g. http://example.org/robots.txt).
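To illustrate the effect, the sketch below assumes the served robots.txt uses the conventional "block everything" form (a blanket disallow for every user agent; the module's exact output may differ) and checks it with Python's standard-library robots.txt parser:

```python
from urllib import robotparser

# Assumed contents of the served robots.txt: the conventional rules
# that disallow all crawling for every user agent.
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Under these rules, no crawler may fetch any page.
print(rp.can_fetch("Googlebot", "http://example.org/"))       # False
print(rp.can_fetch("*", "http://example.org/any/page.html"))  # False
```

Well-behaved crawlers fetch /robots.txt before indexing and honor these rules, which is how the module disallows indexing site-wide.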
- Investigate possibilities for compatibility with the Website module, which provides its own robots.txt functionality.
Bugs are tracked on GitHub Issues. In case of trouble, please check there whether your issue has already been reported. If you spotted it first, help us smash it by providing detailed and welcome feedback.
Do not contact contributors directly about support or help with technical issues.
The development of this module has been financially supported by:
- Ventor, Xpansa Group (<https://ventor.tech/>)
This module is maintained by the OCA.
OCA, or the Odoo Community Association, is a nonprofit organization whose mission is to support the collaborative development of Odoo features and promote its widespread use.
This module is part of the OCA/web project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.