
LLM integration for Scrapy


Scrapy-LLM

LLM integration for Scrapy as a downloader middleware. Extract any data from the web using your own predefined schema and your preferred language model.


Features

  • Extract data from web page text using a language model.
  • Define a schema for the extracted data using pydantic models.
  • Validate the extracted data against the defined schema.
  • Seamlessly integrate with any API compatible with the OpenAI API specification.
  • Use any language model deployed on an API compatible with the OpenAI API specification.

Installation

pip install scrapy-llm

Usage

# settings.py

# set the response model to use for extracting data to a pydantic model (required)
# or set it as an attribute on the spider class as response_model
LLM_RESPONSE_MODEL = 'scraper.models.ResponseModel'

# enable the middleware
DOWNLOADER_MIDDLEWARES = {
    'scrapy_llm.handler.LlmExtractorMiddleware': 543,
    # ... your other middlewares
}
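The dotted path in `LLM_RESPONSE_MODEL` must resolve to a pydantic model. A minimal sketch of what `scraper/models.py` might contain (the field names here are illustrative, not part of the library):

```python
# scraper/models.py — a hypothetical response model matching the
# LLM_RESPONSE_MODEL path above (field names are illustrative)
from typing import Optional

from pydantic import BaseModel


class ResponseModel(BaseModel):
    """Schema the language model fills in for each scraped page."""

    title: str
    summary: Optional[str] = None
```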

Then access the extracted data from the response object:

# spider.py
from typing import Any, Dict

def parse(self, response):
    extracted_data: Dict[str, Any] = response.request.meta.get('llm_extracted_data')
    ...

Examples

The examples directory contains a sample Scrapy project that uses the middleware to extract capacity data from university websites.

To run the example project, export your OpenAI API key as an environment variable, along with any other settings you want to change.

export OPENAI_API_KEY=<your-api-key>

Then run the example project with:

cd examples
scrapy crawl generic -a urls_file=urls.csv

Add more URLs to the urls.csv file to extract data from more websites.

Configuration

All aspects of the middleware can be configured in settings.py, except the API key, which should be set as the OPENAI_API_KEY environment variable in accordance with the OpenAI API documentation.

When using an API that does not require an API key, the OPENAI_API_KEY environment variable can be set to any value.
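For example, to point the middleware at a self-hosted, keyless OpenAI-compatible server (the URL and model name below are illustrative placeholders):

```python
# settings.py — using a local OpenAI-compatible deployment
LLM_API_BASE = 'http://localhost:8000/v1'
LLM_MODEL = 'llama-3-8b-instruct'

# The key is still read from the environment, but its value is ignored:
#   export OPENAI_API_KEY=dummy
```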

LLM_RESPONSE_MODEL

  • type: str
  • required: True

The response model to use for extracting data from the web page text.

LLM_RESPONSE_MODEL = 'scraper.models.ResponseModel'

This setting can also be set as an attribute on the spider class itself; in that case, assign the class directly rather than a string path to it.

import scrapy

from scraper.models import ResponseModel

class MySpider(scrapy.Spider):
    response_model = ResponseModel
    ...

LLM_UNWRAP_NESTED

  • type: bool
  • required: False
  • default: True

Whether to unwrap nested models in the extracted data.

LLM_UNWRAP_NESTED = True

For example, if the following models are used:

class ContactInfo(BaseModel):
    phone: str

class Person(BaseModel):
    name: str
    contact_info: ContactInfo

The extracted data will be unwrapped to:

{
    "name": "John Doe",
    "phone": "1234567890"
}

Without unwrapping, the data will be:

{
    "name": "John Doe",
    "contact_info": {
        "phone": "1234567890"
    }
}
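The unwrapping shown above amounts to a one-level flatten, which can be sketched as follows (an illustration of the effect, not the middleware's actual implementation):

```python
from typing import Any, Dict


def unwrap_nested(data: Dict[str, Any]) -> Dict[str, Any]:
    """Lift the fields of nested dicts up to the top level (one level deep)."""
    flat: Dict[str, Any] = {}
    for key, value in data.items():
        if isinstance(value, dict):
            flat.update(value)  # drop the wrapper key, keep its fields
        else:
            flat[key] = value
    return flat
```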

LLM_API_BASE

The base URL of the OpenAI-compatible API.

LLM_API_BASE = 'https://api.openai.com/v1'

LLM_MODEL

  • type: str
  • required: False
  • default: "gpt-4-turbo"

The language model to use for extracting data from the web page text.

LLM_MODEL = 'gpt-4-turbo'

LLM_MODEL_TEMPERATURE

  • type: float
  • required: False
  • default: 0.0001

The sampling temperature to use for the language model; the near-zero default keeps the extraction output as deterministic as possible.

LLM_MODEL_TEMPERATURE = 0.0001

LLM_SYSTEM_MESSAGE

  • type: str
  • required: False
  • default: You are a data extraction expert, your role is to extract data from the given text according to the provided schema. make sure your output is a valid JSON object.

The system message to use for the language model.

LLM_SYSTEM_MESSAGE = '...'

Under the hood

Under the hood, scrapy-llm utilizes two libraries to facilitate data extraction from web page text. The first library is Instructor, which uses pydantic to define a schema for the extracted data. This schema is then used to validate the extracted data and ensure that it conforms to the desired structure. By defining a schema for the extracted data, Instructor provides a clear and consistent way to organize and process the extracted information.
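In isolation, the pydantic validation that Instructor builds on looks like this (the `Person` model and values are illustrative):

```python
# Sketch of the pydantic validation Instructor relies on
# (the Person model and values are illustrative).
from pydantic import BaseModel, ValidationError


class Person(BaseModel):
    name: str
    age: int


# A well-formed payload validates, coercing compatible types along the way.
person = Person(name="Ada", age="36")  # the string "36" is coerced to int

# A payload missing a required field raises ValidationError.
try:
    Person(name="Ada")
except ValidationError:
    invalid = True
```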

The second library is LiteLLM, which enables seamless integration between Instructor and any API compatible with the OpenAI API specification. LiteLLM allows using any language model as long as it is deployed behind an OpenAI-compatible API. This flexibility makes it easy to switch between language models and experiment with different configurations to find the best model for a given task.

By combining the functionalities of Instructor and LiteLLM, scrapy-llm becomes a robust tool for extracting data from web page text. Whether it's scraping a single page or crawling an entire website, scrapy-llm offers a reliable and adaptable solution for all data extraction needs.

License

This project is licensed under the MIT License - see the LICENSE file for details.
