Generate Markdown documentation for spiders based on their docstrings.
## Usage example

```bash
pip install scrapy-spiderdocs
scrapy spiderdocs <module.name>
```
## Example project

See the `documented` project for an example.
```python
# -*- coding: utf-8 -*-
import scrapy


class ExampleSpider(scrapy.Spider):
    """Some text.
    Hi!

    ; Note
    Some note.

    ; Output
    {
        "1": 1
    }
    """

    name = 'example'
    allowed_domains = ('example.com',)
    start_urls = ('http://example.com/',)

    def parse(self, response):
        yield {
            'body_length': len(response.body)
        }


class ExampleSpider2(scrapy.Spider):
    """Some text.
    Hi!

    ; Info
    Some info.
    """

    name = 'example2'
    allowed_domains = ('example.com',)
    start_urls = ('http://example.com/',)

    def parse(self, response):
        yield {'success': True}
```
Settings:

```python
SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda name, content: '### {name}\n\n```json\n{content}\n```'.format(name=name, content=content),
    'info': lambda name, content: '{content}'.format(content=content)
}
```
Execute the command:

```bash
scrapy spiderdocs documented.spiders
```
Output:

````markdown
# documented.spiders spiders

## example2 [documented.spiders.example.ExampleSpider2]

Some info.

## example [documented.spiders.example.ExampleSpider]

### Note

Some note.

### Output

```json
{
    "1": 1
}
```
````
## Output options

### stdout

```bash
scrapy spiderdocs <module.name> > somefile.md
```

### -o (--output) option

```bash
scrapy spiderdocs <module.name> -o somefile.md
```
### Settings

```python
SPIDERDOCS_LOCATIONS = {
    'module.name': "somefile.md"
}
```

This setting is used if no module is specified:

```bash
scrapy spiderdocs
```
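For instance, a project documenting several spider modules might configure something like this (the module names and file paths below are hypothetical):

```python
# settings.py (illustrative sketch; module names and paths are hypothetical)
SPIDERDOCS_LOCATIONS = {
    'myproject.spiders.shop': 'docs/shop_spiders.md',
    'myproject.spiders.news': 'docs/news_spiders.md',
}
```

A bare `scrapy spiderdocs` would then write each module's documentation to its configured destination.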
## Docstring syntax

Use `;` to create sections. For example:

```
; Section 1
Some text ...

; Section 2
Some text ...
```

Use `; end` to close a section:

```
This text will not be added to the documentation.

; Section 1
Some text ...
; end

And this text also will be skipped.
```
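To make the behavior concrete, the section-splitting logic can be sketched roughly as follows (a minimal approximation for illustration, not the package's actual implementation):

```python
def split_sections(docstring):
    """Illustrative sketch: split a docstring into {section_name: content}.

    Text before the first "; Name" line and text after "; end" are skipped.
    This approximates the documented behavior; it is not the library code.
    """
    sections = {}
    current = None
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped.startswith(';'):
            name = stripped.lstrip(';').strip()
            # "; end" closes the current section without opening a new one
            current = None if name.lower() == 'end' else name
            if current is not None:
                sections.setdefault(current, [])
        elif current is not None:
            sections[current].append(line)
    return {name: '\n'.join(lines).strip() for name, lines in sections.items()}
```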
## Section processors

An example:

```python
SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda name, content: '### {name}\n\n```json\n{content}\n```'.format(name=name, content=content)
}
```

With this processor,

```
; Output
{
    "attr": "value"
}
```

will be translated into:

````markdown
### Output

```json
{
    "attr": "value"
}
```
````
## Scrapy settings

- `SPIDERDOCS_LOCATIONS`: `{<module>: <destination>}`, default: `{}`.
- `SPIDERDOCS_SECTION_PROCESSORS`: `{<section_name>: <function(name, content) -> str>}`, default: `{}`.

See the usage examples above.
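A processor is any callable that takes the section name and content and returns a string. As a further illustration, here is a hypothetical processor (the `note` section name and blockquote formatting are assumptions, not package defaults):

```python
# Hypothetical processor: render a "note" section as a Markdown blockquote.
SPIDERDOCS_SECTION_PROCESSORS = {
    'note': lambda name, content: '> **{name}:** {content}'.format(
        name=name.capitalize(), content=content.replace('\n', '\n> ')
    ),
}
```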
## Development

```bash
git clone git@github.com:nanvel/scrapy-spiderdocs.git
cd scrapy-spiderdocs
virtualenv .env --no-site-packages -p /usr/local/bin/python3
source .env/bin/activate
pip install scrapy
scrapy crawl example
scrapy spiderdocs documented.spiders
python -m unittest documented.tests
```
## TODO

- unit tests (if there is no docstring, …)