Generate Markdown documentation for spiders from their docstrings.

Usage example

pip install git+https://github.com/nanvel/scrapy-spiderdocs.git
scrapy spiderdocs <module.name>

Example project

See the documented project for a complete example.

# -*- coding: utf-8 -*-
import scrapy


class ExampleSpider(scrapy.Spider):
    """Some text.
    Hi!

    ; Note

    Some note.

    ; output

    {
        "1": 1
    }
    """

    name = 'example'
    allowed_domains = ('example.com',)
    start_urls = ('http://example.com/',)

    def parse(self, response):
        yield {
            'body_length': len(response.body)
        }

Settings:

SPIDERDOCS_LOCATIONS = {
    'documented.spiders.example': "docs/example.md"
}
SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda i: '```json\n{i}\n```'.format(i=i)
}

Execute the command:

scrapy spiderdocs documented.spiders

Output:

# documented.spiders spiders

## example [documented.spiders.example.ExampleSpider]

### Note

Some note.


### output

```json
{
    "1": 1
}
```

Output options

stdout

scrapy spiderdocs <module.name> > somefile.md

-o (--output) option

scrapy spiderdocs <module.name> -o somefile.md

Settings

SPIDERDOCS_LOCATIONS = {
    'module.name': "somefile.md"
}

This setting is used when no module is specified:

scrapy spiderdocs

Docstring syntax

Use ; to create sections. For example:

; Section 1

Some text ...

; Section 2

Some text ...

Use ; end to close a section:

This text will not be added to the documentation.

; Section 1

Some text ...

; end

And this text also will be skipped.
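
For instance, the same syntax placed inside an actual spider docstring might look like this (a minimal sketch; the spider name is illustrative):

import scrapy


class NotesSpider(scrapy.Spider):
    """This text will not be added to the documentation.

    ; Section 1

    Some text ...

    ; end

    And this text also will be skipped.
    """

    name = 'notes'  # illustrative name, not part of the original example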

Section processors

An example:

SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda i: '```json\n{i}\n```'.format(i=i)
}

; Output

{
    "attr": "value"
}

will be translated into:

### Output

```json
{
    "attr": "value"
}
```
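
Judging by the example above, a processor is a callable that receives the section body as a string and returns the replacement text, so other transformations are possible too. A hedged sketch (the 'note' key and the blockquote formatting are illustrative, not part of the project):

def as_blockquote(body):
    # Prefix every line of the section body with '> ' (Markdown blockquote).
    return '\n'.join('> ' + line for line in body.splitlines())

SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda i: '```json\n{i}\n```'.format(i=i),
    'note': as_blockquote,  # illustrative custom processor
}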

Scrapy settings

SPIDERDOCS_LOCATIONS: {<module>: <destination>}, default: {}.

SPIDERDOCS_SECTION_PROCESSORS: {<section_name>: <function>}, default: {}.

See usage examples above.
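
Both settings live in the Scrapy project's settings module. A minimal settings.py sketch combining them (the module path and output file are illustrative):

# settings.py
SPIDERDOCS_LOCATIONS = {
    'myproject.spiders': 'docs/spiders.md',  # illustrative module and path
}

SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda i: '```json\n{i}\n```'.format(i=i),
}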

Development

git clone git@github.com:nanvel/scrapy-spiderdocs.git
cd scrapy-spiderdocs
virtualenv .env --no-site-packages -p /usr/local/bin/python3
source .env/bin/activate
pip install scrapy
scrapy crawl example
scrapy spiderdocs documented.spiders
python -m unittest documented.tests

TODO

  • unittests
