
ainfo


gather structured information from any website - ready for LLMs

Architecture

The project separates concerns into distinct modules:

  • fetching – obtain raw data from a source
  • parsing – transform raw data into a structured form
  • extraction – pull relevant information from the parsed data
  • output – handle presentation of the extracted results
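The same four stages can be sketched with plain standard-library stand-ins. Everything below is illustrative only: these are hypothetical functions demonstrating the separation of concerns, not ainfo's actual internals.

```python
# Illustrative fetch -> parse -> extract -> output pipeline.
# These stand-ins are NOT ainfo's real functions.
import re

def fetch(source: str) -> str:
    """Fetching: obtain raw data from a source (here, a canned HTML string)."""
    return "<html><body><h1>Hi</h1><a>a@b.com</a></body></html>"

def parse(raw: str) -> str:
    """Parsing: transform raw HTML into a structured form (here, plain text)."""
    return re.sub(r"<[^>]+>", " ", raw)

def extract(text: str) -> dict:
    """Extraction: pull relevant information from the parsed data."""
    return {"emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)}

def output(results: dict) -> str:
    """Output: present the extracted results."""
    return ", ".join(f"{k}={v}" for k, v in results.items())

print(output(extract(parse(fetch("https://example.com")))))
```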

Usage

Command line

Install the project and run the CLI against a URL:

pip install ainfo
ainfo run https://example.com

The command fetches the page, parses its content, and prints the page text. Specify one or more built-in extractors with --extract to pull additional information. For example, to collect contact details and hyperlinks:

ainfo run https://example.com --extract contacts --extract links

Available extractors include:

  • contacts – emails, phone numbers, addresses and social profiles
  • links – all hyperlinks on the page
  • headings – text of headings (h1–h6)

Use --json to emit machine-readable JSON instead of the default human-friendly format. The JSON keys mirror the selected extractors, with text included by default. Pass --no-text when you only need the extraction results. Retrieve the JSON schema for contact details with ainfo.output.json_schema.

For use within an existing asyncio application, the package exposes an async_fetch_data coroutine:

import asyncio
from ainfo import async_fetch_data

async def main():
    html = await async_fetch_data("https://example.com")
    print(html[:60])

asyncio.run(main())

To delegate information extraction or summarisation to an LLM, provide an OpenRouter API key via the OPENROUTER_API_KEY environment variable and pass --use-llm or --summarize:

export OPENROUTER_API_KEY=your_key
ainfo run https://example.com --use-llm --summarize

Summaries are generated in German by default. Override the language with --summary-language <LANG> on the CLI or by setting the AINFO_SUMMARY_LANGUAGE environment variable.

If the target site relies on client-side JavaScript, enable rendering with a headless browser:

ainfo run https://example.com --render-js

To crawl multiple pages starting from a URL and optionally run extractors on each page:

ainfo crawl https://example.com --depth 2 --extract contacts

The crawler visits pages breadth-first up to the specified depth and prints results for every page encountered. Pass --json to output the aggregated results as JSON instead.

Both commands accept --render-js to execute JavaScript before scraping, which uses Playwright. Installing the browser drivers may require running playwright install.

Utilities chunk_text and stream_chunks are available to break large pages into manageable pieces when sending content to LLMs.
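As an illustration of the idea (not ainfo's actual implementation, whose signatures may differ), fixed-size character chunking can be written as a small generator:

```python
from typing import Iterator

def chunks(text: str, size: int) -> Iterator[str]:
    # Yield consecutive slices of at most `size` characters.
    for start in range(0, len(text), size):
        yield text[start:start + size]

parts = list(chunks("abcdefghij", 4))
# parts == ["abcd", "efgh", "ij"]
```

Streaming chunks lazily like this avoids holding every piece in memory when a page is large and each chunk is handed to an LLM as soon as it is ready.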

Programmatic API

Most components can also be used directly from Python. Fetch and parse a page, then run the extractors yourself:

from ainfo import fetch_data, parse_data, extract_custom
from ainfo.extractors import AVAILABLE_EXTRACTORS

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")

# Contact details via built-in extractor
contacts = AVAILABLE_EXTRACTORS["contacts"](doc)

# All links
links = AVAILABLE_EXTRACTORS["links"](doc)

# Any additional data via regular expressions
extra = extract_custom(doc, {"prices": r"\$\d+(?:\.\d{2})?"})
print(contacts.emails, extra["prices"])

Serialise results with to_json or inspect the JSON schema with json_schema(ContactDetails).
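The price pattern used above is plain Python and can be sanity-checked on its own with the standard re module:

```python
import re

# Matches a dollar sign, digits, and an optional two-digit decimal part.
PRICE = r"\$\d+(?:\.\d{2})?"
sample = "Basic plan: $9.99, Pro plan: $49, Enterprise from $499.00"
print(re.findall(PRICE, sample))
```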

Custom extractors

Define your own extractor by writing a function that accepts a Document and registering it in ainfo.extractors.AVAILABLE_EXTRACTORS.

# my_extractors.py
from ainfo.models import Document
from ainfo.extraction import extract_custom
from ainfo.extractors import AVAILABLE_EXTRACTORS

def extract_prices(doc: Document) -> list[str]:
    data = extract_custom(doc, {"prices": r"\$\d+(?:\.\d{2})?"})
    return data.get("prices", [])

AVAILABLE_EXTRACTORS["prices"] = extract_prices

After importing my_extractors, your extractor becomes available on the command line:

ainfo run https://example.com --extract prices --no-text

LLM-based extraction

extract_custom can also delegate to a large language model. Supply an LLMService and a prompt describing the desired output:

from ainfo import fetch_data, parse_data
from ainfo.extraction import extract_custom
from ainfo.llm_service import LLMService

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")

with LLMService() as llm:
    data = extract_custom(
        doc,
        llm=llm,
        prompt="List all products with their prices as JSON under 'products'",
    )
print(data["products"])

Workflow examples

Save contact details to JSON

pip install ainfo
ainfo run https://example.com --json > contacts.json

Summarize a large page with chunk_text

from ainfo import fetch_data, parse_data, chunk_text
from some_llm import summarize  # pseudo-code

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")

parts = [summarize(chunk) for chunk in chunk_text(doc.text_content(), 1000)]
print(" ".join(parts))

Stream chunks on the fly

Fetch and chunk a page directly by URL or pass in raw text:

from ainfo import stream_chunks

for chunk in stream_chunks("https://example.com", size=1000):
    handle(chunk)  # send to LLM or other processor

Environment configuration

Copy .env.example to .env and fill in OPENROUTER_API_KEY, OPENROUTER_MODEL, and OPENROUTER_BASE_URL to enable LLM-powered features.
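With those variable names, a filled-in .env might look like this (all values are placeholders; substitute your own key and preferred model):

```shell
OPENROUTER_API_KEY=your_key_here
OPENROUTER_MODEL=provider/model-name
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
```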

n8n integration

A minimal FastAPI wrapper and accompanying Dockerfile live in the integration/ directory. Build the container and run the service:

docker build -f integration/Dockerfile -t ainfo-api .
docker run -p 8000:8000 -e OPENROUTER_API_KEY=your_key -e AINFO_API_KEY=choose_a_secret ainfo-api
# or use an env file
docker run -p 8000:8000 --env-file .env ainfo-api

The server exposes a /run endpoint that executes:

ainfo run <url> --use-llm --summarize --render-js --extract contacts --no-text --json

Pass an optional summary_language query parameter to control the summary language (default: German).

integration/api.py uses python-dotenv to load a .env file, so sensitive values such as OPENROUTER_API_KEY can be supplied via environment variables. Protect the endpoint by setting AINFO_API_KEY and include an X-API-Key header with that value on every request. This makes it easy to call ainfo from workflow tools like n8n.
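For example, a client request to the wrapper can be assembled with the standard library. The url query parameter name is an assumption here; only the /run path, the summary_language parameter, and the X-API-Key header are stated above:

```python
import urllib.parse
import urllib.request

def build_run_request(base: str, url: str, api_key: str,
                      summary_language: str = "") -> urllib.request.Request:
    # Assemble a GET request for the /run endpoint. The `url` query
    # parameter name is an assumption; X-API-Key is documented above.
    params = {"url": url}
    if summary_language:
        params["summary_language"] = summary_language
    full = f"{base}/run?{urllib.parse.urlencode(params)}"
    return urllib.request.Request(full, headers={"X-API-Key": api_key})

req = build_run_request("http://localhost:8000", "https://example.com",
                        "choose_a_secret", summary_language="en")
print(req.full_url)
# Send it with urllib.request.urlopen(req) once the container is running.
```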

Limitations

  • The built-in extract_information targets contact and social media details. Use extract_custom for other patterns or implement your own domain-specific extractors.

Download files

Download the file for your platform.

Source Distribution

ainfo-1.0.1.tar.gz (23.8 kB)

Uploaded Source

Built Distribution


ainfo-1.0.1-py3-none-any.whl (24.1 kB)

Uploaded Python 3

File details

Details for the file ainfo-1.0.1.tar.gz.

File metadata

  • Download URL: ainfo-1.0.1.tar.gz
  • Size: 23.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for ainfo-1.0.1.tar.gz

  • SHA256: aec978b99bd4efd1d6df21be8ed64dce22fd5dc2b4074ac437ef13a75b804b9d
  • MD5: eae85da50500d819f805dfaed71f4ab5
  • BLAKE2b-256: 8a681c8d7a771e923ac8b8af30fc7b51b49dc79b838ed97c7c07d5da02649206


Provenance

The following attestation bundles were made for ainfo-1.0.1.tar.gz:

Publisher: python-publish.yml on MisterXY89/ainfo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ainfo-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: ainfo-1.0.1-py3-none-any.whl
  • Size: 24.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for ainfo-1.0.1-py3-none-any.whl

  • SHA256: 07b3bf4de0818d6436afe2f6814ac964da419c761224e25935bfafbd9a732cbe
  • MD5: 240b14614a4d8f4557b5054059f8f0c9
  • BLAKE2b-256: 9c5b94ff723039908ee26f0f0e4444d3c82a88d9ab25279347a7c2b9776b657e


Provenance

The following attestation bundles were made for ainfo-1.0.1-py3-none-any.whl:

Publisher: python-publish.yml on MisterXY89/ainfo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
