
Lightweight library for scraping websites with LLMs

Project description

📦 Parsera


Lightweight Python library for scraping websites with LLMs. You can test it on the Parsera website.

Why Parsera?

Because it's simple and lightweight, with minimal token use, which speeds up extraction and reduces costs.

Installation

pip install parsera
playwright install

Basic usage

If you want to use OpenAI, remember to set the OPENAI_API_KEY environment variable. You can do this from Python with:

import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"

Next, you can run a basic version that uses gpt-4o-mini:

from parsera import Parsera

url = "https://news.ycombinator.com/"
elements = {
    "Title": "News title",
    "Points": "Number of points",
    "Comments": "Number of comments",
}

scraper = Parsera()
result = scraper.run(url=url, elements=elements)

The result variable will contain a JSON list of records:

[
   {
      "Title":"Hacking the largest airline and hotel rewards platform (2023)",
      "Points":"104",
      "Comments":"24"
   },
    ...
]

There is also an async arun method available:

result = await scraper.arun(url=url, elements=elements)
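
Outside a notebook, arun has to be awaited inside an event loop. A minimal sketch using the standard asyncio module, reusing the url and elements defined above (the main wrapper function is illustrative):

import asyncio

from parsera import Parsera

async def main():
    scraper = Parsera()
    # Same extraction as run(), but without blocking the event loop
    return await scraper.arun(url=url, elements=elements)

result = asyncio.run(main())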

Using proxy

You can route traffic through a proxy server when calling the run method:

proxy_settings = {
    "server": "https://1.2.3.4:5678",
    "username": "<PROXY_USERNAME>",
    "password": "<PROXY_PASSWORD>",
}
result = scraper.run(url=url, elements=elements, proxy_settings=proxy_settings)
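
Note that proxy_settings mirrors the shape of Playwright's proxy option (server, username, password), since Parsera loads pages through Playwright.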

Run with custom model

You can instantiate Parsera with any chat model supported by LangChain, for example, to run the model from Azure:

import os

from langchain_openai import AzureChatOpenAI
from parsera import Parsera

llm = AzureChatOpenAI(
    azure_endpoint=os.getenv("AZURE_GPT_BASE_URL"),
    openai_api_version="2023-05-15",
    deployment_name=os.getenv("AZURE_GPT_DEPLOYMENT_NAME"),
    openai_api_key=os.getenv("AZURE_GPT_API_KEY"),
    openai_api_type="azure",
    temperature=0.0,
)

url = "https://news.ycombinator.com/"
elements = {
    "Title": "News title",
    "Points": "Number of points",
    "Comments": "Number of comments",
}
scraper = Parsera(model=llm)
result = scraper.run(url=url, elements=elements)

Run a local model with HuggingFace Transformers

Currently, we only support models that include a system token.

You should install Transformers with either PyTorch (recommended) or TensorFlow 2.0:

Transformers Installation Guide

Example:

from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
from parsera.engine.model import HuggingFaceModel
from parsera import Parsera

# Define the URL and elements to scrape
url = "https://news.ycombinator.com/"
elements = {
    "Title": "News title",
    "Points": "Number of points",
    "Comments": "Number of comments",
}

# Initialize model with transformers pipeline
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=5000)

# Initialize HuggingFaceModel
llm = HuggingFaceModel(pipeline=pipe)

# Scraper with HuggingFace model
scraper = Parsera(model=llm)
result = scraper.run(url=url, elements=elements)

Using different extractor types

By default, a tabular extractor is used, but you can also use the list or item extractors:

from parsera import Parsera

scraper = Parsera(extractor=Parsera.ExtractorType.LIST)
# or
scraper = Parsera(extractor=Parsera.ExtractorType.ITEM)

The tabular extractor is used to find rows of tabular data and has output of the form:

[
    {"name": "name1", "price": "100"},
    {"name": "name2", "price": "150"},
    {"name": "name3", "price": "300"},
]

The list extractor is used to find lists of different values and has output of the form:

{
    "name": ["name1", "name2", "name3"],
    "price": ["100", "150", "300"]
}

The item extractor is used to get singular items from a page like a title or price and has output of the form:

{
    "name": "name1",
    "price": "100"
}
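
For example, the list extractor applied to the Hacker News page from the basic example (a minimal sketch; the extracted values depend on the live page):

from parsera import Parsera

url = "https://news.ycombinator.com/"
elements = {
    "Title": "News title",
    "Points": "Number of points",
}

# The LIST extractor groups values per field instead of per row
scraper = Parsera(extractor=Parsera.ExtractorType.LIST)
result = scraper.run(url=url, elements=elements)
# result is of the form {"Title": [...], "Points": [...]}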

Running with Jupyter Notebook:

Either place this code at the beginning of your notebook:

import nest_asyncio
nest_asyncio.apply()

Or, instead of calling the run method, use the async arun method.
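
Jupyter cells already run inside an event loop, which is why top-level await works there. A minimal sketch reusing the url and elements from the basic example:

from parsera import Parsera

scraper = Parsera()
# In a notebook cell, arun can be awaited directly, no nest_asyncio needed
result = await scraper.arun(url=url, elements=elements)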

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

parsera-0.1.7.tar.gz (14.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

parsera-0.1.7-py3-none-any.whl (15.1 kB)

Uploaded Python 3

File details

Details for the file parsera-0.1.7.tar.gz.

File metadata

  • Download URL: parsera-0.1.7.tar.gz
  • Upload date:
  • Size: 14.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.2 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for parsera-0.1.7.tar.gz

  • SHA256: 8b3093d62a9bfca2a5a51e6361564ccfc679499b7e7b1c5aaa388ce200fb6657
  • MD5: 1e46309398213c8a3aba118e9a5ada58
  • BLAKE2b-256: 4f0be98b7a21d74b3e9c90aa26c281cfee4aaebc6b9fda22261ac497ec2043d0

See more details on using hashes here.
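
To verify a download yourself, you can compare its SHA256 against the digest above; a minimal sketch using Python's standard hashlib module (the file path assumes the archive is in the current directory):

import hashlib

# Compute the SHA256 of the downloaded archive and compare to the published digest
with open("parsera-0.1.7.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "8b3093d62a9bfca2a5a51e6361564ccfc679499b7e7b1c5aaa388ce200fb6657"
print(digest == expected)  # True if the file is intact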

File details

Details for the file parsera-0.1.7-py3-none-any.whl.

File metadata

  • Download URL: parsera-0.1.7-py3-none-any.whl
  • Upload date:
  • Size: 15.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.2 CPython/3.12.2 Darwin/23.4.0

File hashes

Hashes for parsera-0.1.7-py3-none-any.whl

  • SHA256: 568e5b57b56a316b964eef8573c90387c55c31845643fdf414f8e953940c4b03
  • MD5: 430e8fd1a5a4c3536b38706d20d54f01
  • BLAKE2b-256: b0768dfc55cf21afb2e3acd2fe8fd8e5a3eee00f7f2a2b3770cf1bb9ef929423

See more details on using hashes here.
