
Python interface to Scrapinghub Automatic Extraction API

Project description


Python client libraries for the Scrapinghub AutoExtract API. They allow you to extract product and article information from any website.

Both synchronous and asyncio wrappers are provided by this package.

License is BSD 3-clause.


Installation

pip install scrapinghub-autoextract

scrapinghub-autoextract requires Python 3.6+ for the CLI tool and for the asyncio API; the basic, synchronous API works with Python 3.5.


Usage

First, make sure you have an API key. To avoid passing it in the api_key argument with every call, you can set the SCRAPINGHUB_AUTOEXTRACT_KEY environment variable with the key.
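
For example, with the synchronous API described below, the key can be provided either way; a minimal sketch, assuming a placeholder key and an example.com URL:

from autoextract.sync import request_raw

query = [{'url': 'http://example.com', 'pageType': 'article'}]

# Option 1: pass the key explicitly with every call
results = request_raw(query, api_key='YOUR_API_KEY')

# Option 2: set SCRAPINGHUB_AUTOEXTRACT_KEY in the environment
# (e.g. in your shell) and omit the api_key argument
results = request_raw(query)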

Command-line interface

The most basic way to use the client is from the command line. First, create a file with URLs, one URL per line (e.g. urls.txt). Second, set the SCRAPINGHUB_AUTOEXTRACT_KEY environment variable with your AutoExtract API key (you can also pass the API key as the --api-key script argument).

Then run the script to get the results:

python -m autoextract urls.txt --page-type article > res.jl

Run python -m autoextract --help to get a description of all supported options.
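
If you prefer not to rely on the environment variable, the same run can pass the key on the command line instead; YOUR_API_KEY below is a placeholder:

python -m autoextract urls.txt --page-type article --api-key YOUR_API_KEY > res.jl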

Synchronous API

The synchronous API provides an easy way to try AutoExtract in a script. For production usage the asyncio API is strongly recommended.

You can send requests as described in the API docs:

from autoextract.sync import request_raw
query = [{'url': 'http://example.com', 'pageType': 'article'}]
results = request_raw(query)

Note that if there are several URLs in the query, results can be returned in arbitrary order.

There is also an autoextract.sync.request_batch helper, which accepts URLs and a page type, and ensures results are in the same order as the requested URLs:

from autoextract.sync import request_batch
urls = ['http://example.com/1', 'http://example.com/2']
results = request_batch(urls, page_type='article')


Currently request_batch is limited to 100 URLs at a time.
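
If you have more than 100 URLs, one option is to split them into chunks of at most 100 and call request_batch once per chunk; a minimal sketch (the helper and chunk size below are illustrative, not part of the library):

from autoextract.sync import request_batch

def batch_extract(urls, page_type='article', chunk_size=100):
    # request_batch accepts at most 100 URLs per call, so process
    # the list in chunks; order within each chunk is preserved
    results = []
    for i in range(0, len(urls), chunk_size):
        chunk = urls[i:i + chunk_size]
        results.extend(request_batch(chunk, page_type=page_type))
    return results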

asyncio API

Basic usage is similar to the sync API (request_raw), but an asyncio event loop is used:

from autoextract.aio import request_raw

async def foo():
    results1 = await request_raw(query)
    # ...
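
To actually run such a coroutine, the standard asyncio machinery can be used; a minimal sketch (the query value is a placeholder, as in the synchronous example):

import asyncio
from autoextract.aio import request_raw

async def foo():
    query = [{'url': 'http://example.com', 'pageType': 'article'}]
    return await request_raw(query)

results = asyncio.run(foo())  # Python 3.7+; on 3.6, run it on an event loop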

There is also a request_parallel function, which allows processing many URLs in parallel, using both batching and multiple connections:

import sys
from autoextract.aio import request_parallel, create_session, ApiError

async def foo():
    async with create_session() as session:
        res_iter = request_parallel(urls, page_type='article',
                                    n_conn=10, batch_size=3,
                                    session=session)
        for f in res_iter:
            try:
                batch_result = await f
                for res in batch_result:
                    ...  # do something with a result
            except ApiError as e:
                print(e, file=sys.stderr)

The request_parallel and request_raw functions handle throttling (HTTP 429 errors) and network errors, retrying a request in these cases.

The CLI implementation (in the autoextract/ package) can serve as a usage example.


Use tox to run tests with different Python versions:

tox

The command above also runs type checks; we use mypy.



Changes

0.1

Initial release.

Download files

Download the file for your platform.

Files for scrapinghub-autoextract, version 0.1:

Filename                                       Size     File type  Python version
scrapinghub_autoextract-0.1-py3-none-any.whl   12.0 kB  Wheel      py3
scrapinghub-autoextract-0.1.tar.gz             11.0 kB  Source     None
