Generate Markdown documentation for spiders from their docstrings.
# Scrapy spiderdocs command
## Usage example
```bash
pip install git+https://github.com/nanvel/scrapy-spiderdocs.git
scrapy spiderdocs <module.name>
```
## Example project
See the `documented` project for an example.
```python
# -*- coding: utf-8 -*-
import scrapy


class ExampleSpider(scrapy.Spider):
    """Some text.
    Hi!

    ; Note

    Some note.

    ; output

    {
        "1": 1
    }
    """

    name = 'example'
    allowed_domains = ('example.com',)
    start_urls = ('http://example.com/',)

    def parse(self, response):
        yield {
            'body_length': len(response.body)
        }
```
Settings:
```python
SPIDERDOCS_LOCATIONS = {
    'documented.spiders.example': "docs/example.md"
}

SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda i: '```json\n{i}\n```'.format(i=i)
}
```
Execute the command:
```bash
scrapy spiderdocs documented.spiders
```
Output:
# documented.spiders spiders
## example [documented.spiders.example.ExampleSpider]
### Note
Some note.
### output
```json
{
    "1": 1
}
```
## Output options
### stdout
```bash
scrapy spiderdocs <module.name> > somefile.md
```
### `-o` (`--output`) option
```bash
scrapy spiderdocs <module.name> -o somefile.md
```
### Settings
```python
SPIDERDOCS_LOCATIONS = {
'module.name': "somefile.md"
}
```
The setting is used if no module is specified:
```bash
scrapy spiderdocs
```
## Docstring syntax
Use `;` to create sections. For example:
```text
; Section 1
Some text ...
; Section 2
Some text ...
```
Use `; end` to close a section:
```text
This text will not be added to the documentation.
; Section 1
Some text ...
; end
And this text also will be skipped.
```
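Under the hood, a `;`-delimited docstring can be split into named sections. A minimal sketch of such a parser (the helper name is made up; this is not the package's actual implementation):

```python
def parse_sections(docstring):
    """Split a docstring into {section name: section body} pairs.

    Lines before the first ';' marker and lines after '; end'
    are ignored, mirroring the syntax described above.
    """
    sections = {}
    current = None
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped.startswith(';'):
            name = stripped[1:].strip()
            # '; end' closes the current section; anything else opens one.
            current = None if name.lower() == 'end' else name
            if current is not None:
                sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: '\n'.join(body).strip() for name, body in sections.items()}
```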
### Section processors
An example:
```python
SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda i: '```json\n{i}\n```'.format(i=i)
}
```
Then the docstring section

```text
; Output

{
    "attr": "value"
}
```
will be translated into:
### Output
```json
{
    "attr": "value"
}
```
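Section processors are plain callables that take the section body as a string and return the text to emit. As a further illustration, a hypothetical processor that renders a section as a Markdown blockquote:

```python
def blockquote(body):
    # Prefix every line of the section body with '> '.
    return '\n'.join('> ' + line for line in body.splitlines())

SPIDERDOCS_SECTION_PROCESSORS = {
    'note': blockquote,  # applied to sections named "note"
}
```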
## Scrapy settings
- `SPIDERDOCS_LOCATIONS: {<module>: <destination>}`, default: `{}`.
- `SPIDERDOCS_SECTION_PROCESSORS: {<section_name>: <function>}`, default: `{}`.
See usage examples above.
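Putting both settings together, a project's `settings.py` might contain something like the following (the module and file names here are hypothetical):

```python
# settings.py (illustrative module and output paths)
SPIDERDOCS_LOCATIONS = {
    'myproject.spiders': 'docs/spiders.md',
}

SPIDERDOCS_SECTION_PROCESSORS = {
    # Wrap the body of any "output" section in a fenced JSON block.
    'output': lambda i: '```json\n{i}\n```'.format(i=i),
}
```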
## Development
```bash
git clone git@github.com:nanvel/scrapy-spiderdocs.git
cd scrapy-spiderdocs
virtualenv .env --no-site-packages -p /usr/local/bin/python3
source .env/bin/activate
pip install scrapy
scrapy crawl example
scrapy spiderdocs documented.spiders
python -m unittest documented.tests
```
## TODO
- unittests
## File details

Details for the file `scrapy-spiderdocs-0.0.2.tar.gz`.

### File metadata

- Filename: `scrapy-spiderdocs-0.0.2.tar.gz`
- Size: 5.4 kB
- Type: source distribution
- Uploaded using Trusted Publishing? No

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6d89ccbba08d85bb5810e1b6bc4fbba34fb25ad392043833d004c70ff7ba483c` |
| MD5 | `2d8a390a927b722eba0232b6dd968a5a` |
| BLAKE2b-256 | `2574a6e897fa514817e96ea7b9e2ac7c849b368d2e7752db44c697d2fcb5c822` |