# ainfo

Gather structured information from any website – ready for LLMs.

## Architecture
The project separates concerns into distinct modules:
- `fetching` – obtain raw data from a source
- `parsing` – transform raw data into a structured form
- `extraction` – pull relevant information from the parsed data
- `output` – handle presentation of the extracted results
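For orientation, here is a minimal sketch of how the four stages compose, using the public helpers documented in the sections below (the exact signature of `extract_information` is an assumption; see Limitations for its scope):

```python
from ainfo import fetch_data, parse_data, extract_information

html = fetch_data("https://example.com")           # fetching
doc = parse_data(html, url="https://example.com")  # parsing
contacts = extract_information(doc)                # extraction
print(contacts)                                    # output (here: just print)
```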
## Usage

### Command line
Install the project and run the CLI against a URL:

```bash
pip install ainfo
ainfo run https://example.com
```
The command fetches the page, parses its content and prints the page text.
Specify one or more built-in extractors with `--extract` to pull extra information. For example, to collect contact details and hyperlinks:

```bash
ainfo run https://example.com --extract contacts --extract links
```
Available extractors include:
- `contacts` – emails, phone numbers, addresses and social profiles
- `links` – all hyperlinks on the page
- `headings` – text of headings (h1–h6)
- `job_postings` – structured job advertisement details like position and location
Use `--json` to emit machine-readable JSON instead of the default human-friendly format. The JSON keys mirror the selected extractors, with `text` included by default. Pass `--no-text` when you only need the extraction results. Retrieve the JSON schema for contact details with `ainfo.output.json_schema`.
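Combining these flags, capturing just the contact details as machine-readable JSON looks like this:

```bash
ainfo run https://example.com --extract contacts --json --no-text > contacts.json
```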
For use within an existing asyncio application, the package exposes an `async_fetch_data` coroutine:
```python
import asyncio

from ainfo import async_fetch_data

async def main():
    html = await async_fetch_data("https://example.com")
    print(html[:60])

asyncio.run(main())
```
To delegate information extraction or summarisation to an LLM, provide an OpenRouter API key via the `OPENROUTER_API_KEY` environment variable and pass `--use-llm` or `--summarize`:

```bash
export OPENROUTER_API_KEY=your_key
ainfo run https://example.com --use-llm --summarize
```
Summaries are generated in German by default. Override the language with `--summary-language <LANG>` on the CLI or by setting the `AINFO_SUMMARY_LANGUAGE` environment variable.
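For example (assuming the language is given as a plain name such as `English`; the exact accepted values are not documented here):

```bash
ainfo run https://example.com --summarize --summary-language English

# or, equivalently, via the environment
export AINFO_SUMMARY_LANGUAGE=English
ainfo run https://example.com --summarize
```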
If the target site relies on client-side JavaScript, enable rendering with a headless browser:
```bash
ainfo run https://example.com --render-js
```
To crawl multiple pages starting from a URL and optionally run extractors on each page:
```bash
ainfo crawl https://example.com --depth 2 --extract contacts
```
The crawler visits pages breadth-first up to the specified depth and prints results for every page encountered. Pass `--json` to output the aggregated results as JSON instead.
Both commands accept `--render-js` to execute JavaScript before scraping; this uses Playwright, so installing the browser drivers may require running `playwright install`.
Utilities `chunk_text` and `stream_chunks` are available to break large pages into manageable pieces when sending content to LLMs.
### Programmatic API
Most components can also be used directly from Python. Fetch and parse a page, then run the extractors yourself:
```python
from ainfo.extractors import AVAILABLE_EXTRACTORS
from ainfo import fetch_data, parse_data, extract_information, extract_custom

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")

# Contact details via built-in extractor
contacts = AVAILABLE_EXTRACTORS["contacts"](doc)

# All links
links = AVAILABLE_EXTRACTORS["links"](doc)

# Any additional data via regular expressions
extra = extract_custom(doc, {"prices": r"\$\d+(?:\.\d{2})?"})

print(contacts.emails, extra["prices"])
```
Serialise results with `to_json` or inspect the JSON schema with `json_schema(ContactDetails)`.
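A sketch, continuing from the snippet above (the import locations are assumptions: the CLI docs place `json_schema` in `ainfo.output`, and `ContactDetails` is assumed to live in `ainfo.models` alongside `Document`):

```python
from ainfo.models import ContactDetails        # assumed location
from ainfo.output import to_json, json_schema  # assumed location of to_json

print(to_json(contacts))            # serialise an extraction result
print(json_schema(ContactDetails))  # inspect the expected JSON structure
```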
To crawl multiple pages of the same site and aggregate the results in code, use `extract_site`. Pages are fetched breadth-first, deduplicated using a content hash and restricted to the starting domain by default:
```python
from ainfo import extract_site

pages = extract_site("https://example.com", depth=2, include_text=True)
for url, data in pages.items():
    print(url, data["contacts"].emails)
```
## Custom extractors

Define your own extractor by writing a function that accepts a `Document` and registering it in `ainfo.extractors.AVAILABLE_EXTRACTORS`:
```python
# my_extractors.py
from ainfo.models import Document
from ainfo.extraction import extract_custom
from ainfo.extractors import AVAILABLE_EXTRACTORS

def extract_prices(doc: Document) -> list[str]:
    data = extract_custom(doc, {"prices": r"\$\d+(?:\.\d{2})?"})
    return data.get("prices", [])

AVAILABLE_EXTRACTORS["prices"] = extract_prices
```
After importing `my_extractors`, your extractor becomes available on the command line:

```bash
ainfo run https://example.com --extract prices --no-text
```
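The same registration works from Python; a minimal sketch, reusing the module above:

```python
import my_extractors  # noqa: F401 - importing registers the "prices" extractor

from ainfo import fetch_data, parse_data
from ainfo.extractors import AVAILABLE_EXTRACTORS

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")
print(AVAILABLE_EXTRACTORS["prices"](doc))
```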
## LLM-based extraction

`extract_custom` can also delegate to a large language model. Supply an `LLMService` and a prompt describing the desired output:
```python
from ainfo import fetch_data, parse_data
from ainfo.extraction import extract_custom
from ainfo.llm_service import LLMService

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")

with LLMService() as llm:
    data = extract_custom(
        doc,
        llm=llm,
        prompt="List all products with their prices as JSON under 'products'",
    )

print(data["products"])
```
## Workflow examples

### Save contact details to JSON

```bash
pip install ainfo
ainfo run https://example.com --json > contacts.json
```
### Summarize a large page with `chunk_text`

```python
from ainfo import fetch_data, parse_data, chunk_text
from some_llm import summarize  # pseudo-code

html = fetch_data("https://example.com")
doc = parse_data(html, url="https://example.com")
parts = [summarize(chunk) for chunk in chunk_text(doc.text_content(), 1000)]
print(" ".join(parts))
```
### Stream chunks on the fly

Fetch and chunk a page directly by URL or pass in raw text:

```python
from ainfo import stream_chunks

for chunk in stream_chunks("https://example.com", size=1000):
    handle(chunk)  # send to LLM or other processor
```
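The raw-text form would look like this (a sketch, assuming `stream_chunks` treats a non-URL first argument as text to chunk, as the sentence above suggests):

```python
from ainfo import stream_chunks

text = open("page.txt").read()
for chunk in stream_chunks(text, size=1000):
    handle(chunk)  # same downstream processing as above
```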
## Environment configuration

Copy `.env.example` to `.env` and fill in `OPENROUTER_API_KEY`, `OPENROUTER_MODEL`, and `OPENROUTER_BASE_URL` to enable LLM-powered features.
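The resulting `.env` might look like this (all values are placeholders; the base URL shown is OpenRouter's public API endpoint, and the model name is only an example):

```bash
# .env
OPENROUTER_API_KEY=your_key
OPENROUTER_MODEL=openrouter/auto
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
```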
## Development & Releases

For automated version bumping and releases, see `RELEASE.md`, which documents the `release.sh` script.
## n8n integration

A minimal FastAPI wrapper and accompanying Dockerfile live in the `integration/` directory. Build the container and run the service:

```bash
docker build -f integration/Dockerfile -t ainfo-api .
docker run -p 8000:8000 -e OPENROUTER_API_KEY=your_key -e AINFO_API_KEY=choose_a_secret ainfo-api

# or use an env file
docker run -p 8000:8000 --env-file .env ainfo-api
```
The server exposes a `/run` endpoint that executes:

```bash
ainfo run <url> --use-llm --summarize --render-js --extract contacts --no-text --json
```

Pass an optional `summary_language` query parameter to control the summary language (default: German).
`integration/api.py` uses python-dotenv to load a `.env` file, so sensitive values such as `OPENROUTER_API_KEY` can be supplied via environment variables. Protect the endpoint by setting `AINFO_API_KEY` and include an `X-API-Key` header with that value on every request. This makes it easy to call ainfo from workflow tools like n8n.
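A hypothetical call (assuming the target URL is passed as a `url` query parameter; check `integration/api.py` for the exact parameter name):

```bash
curl -H "X-API-Key: choose_a_secret" \
  "http://localhost:8000/run?url=https://example.com&summary_language=English"
```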
## Limitations

- The built-in `extract_information` targets contact and social media details. Use `extract_custom` for other patterns or implement your own domain-specific extractors.