AI-Powered Selector Discovery - Discover once, scrape forever
Yosoi - AI-Powered CSS Selector Discovery
Discover CSS selectors once with AI, scrape forever with BeautifulSoup
Give Yosoi a URL, and it uses AI to automatically discover the best CSS selectors for extracting headlines, authors, dates, body text, and related content. Discovery takes 3 seconds and costs $0.001 per domain — then scrape thousands of articles for free with BeautifulSoup.
Key Benefits:
- Fast: 3 seconds to discover selectors per domain
- Cheap: $0.001 per domain (one-time cost)
- Accurate: Validates selectors before saving
- Reusable: Discover once, use forever
- Production-Ready: Type-safe, linted, tested
Quick Start
Installation
# Clone the repository
git clone <your-repo>
cd yosoi
# Install dependencies (using uv)
uv sync
# For development tools
uv sync --group dev
Configuration
Create a .env file (see env.example):
# Choose one or both providers
GROQ_KEY=your_groq_api_key_here # For Llama 3.3 (faster, recommended)
GEMINI_KEY=your_gemini_api_key_here # For Gemini 2.0 Flash
# Optional: Observability
LOGFIRE_TOKEN=your_logfire_token_here # For Logfire tracing
Get API Keys:
- Groq (Free): https://console.groq.com/keys
- Gemini: https://aistudio.google.com/app/apikey
- Logfire (Optional): https://logfire.pydantic.dev
Basic Usage
# Process a single URL
uv run yosoi --url https://example.com/article
# Process multiple URLs from a file
uv run yosoi --file urls.txt
# Force re-discovery
uv run yosoi --url https://example.com --force
# Show summary of all saved selectors
uv run yosoi --summary
# Enable debug mode (saves extracted HTML)
uv run yosoi --url https://example.com --debug
URLs File Format
Create urls.txt with one URL per line:
https://example.com/article1
https://example.com/article2
# Comments are allowed
https://example.com/article3
Or use JSON format (urls.json):
[
{"url": "https://example.com/article1"},
{"url": "https://example.com/article2"}
]
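Either format can be loaded with a few lines of Python. The helper below is a sketch for illustration (the `load_urls` function is not part of Yosoi's API):

```python
import json
from pathlib import Path

def load_urls(path: str) -> list[str]:
    """Load URLs from a .txt (one per line, '#' comments) or .json file."""
    text = Path(path).read_text()
    if path.endswith(".json"):
        # JSON format: a list of {"url": ...} objects
        return [entry["url"] for entry in json.loads(text)]
    urls = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # skip blanks and comments
            urls.append(line)
    return urls
```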
Project Structure
.
├── .yosoi/ # .yosoi helper directory (hidden)
│ └── selectors/ # Discovered selectors (hidden)
├── main.py # CLI entry point & orchestrator
├── selector_discovery.py # AI-powered selector discovery
├── selector_validator.py # Selector validation & testing
├── selector_storage.py # JSON storage operations
├── services.py # Shared services (Logfire config)
├── models.py # Pydantic models
├── pyproject.toml # Project config & dependencies
├── .env # API keys (create this)
├── CHEAT_SHEET.md # Dev tools quick reference
└── selectors/ # Output directory
└── selectors_*.json # Discovered selectors per domain
How It Works
Phase 1: Smart HTML Extraction
Full HTML (2MB)
↓
Remove noise (scripts, styles, nav, footer)
↓
Find main content (<article>, <main>, .content)
↓
Extract ~30k chars of relevant HTML
↓
Send to AI
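The extraction step above can be sketched with BeautifulSoup. This is an illustration only (the tag list, selector list, and 30k-character cap mirror the description; Yosoi's actual implementation lives in selector_discovery.py):

```python
from bs4 import BeautifulSoup

def extract_relevant_html(full_html: str, max_chars: int = 30_000) -> str:
    """Strip noise, then keep the main content region of the page."""
    soup = BeautifulSoup(full_html, "html.parser")
    # Remove noise that carries no article content
    for tag in soup(["script", "style", "nav", "footer", "header"]):
        tag.decompose()
    # Prefer a dedicated content container if one exists
    main = soup.find("article") or soup.find("main") or soup.select_one(".content")
    html = str(main) if main else str(soup)
    return html[:max_chars]  # cap what is sent to the AI
```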
Phase 2: AI Analysis
AI reads actual HTML structure
↓
Finds real class names & IDs
↓
Returns 3 selectors per field:
- Primary (most specific)
- Fallback (reliable backup)
- Tertiary (generic)
↓
Smart fallback if AI fails
Phase 3: Validation
Test each selector on the actual page
↓
Find first working selector per field
↓
Mark which priority worked (primary/fallback/tertiary)
↓
Save validated selectors to JSON
Output Format
Selectors are saved as JSON files in the .yosoi/selectors/ directory:
{
  "headline": {
    "primary": "h1.article-title",
    "fallback": "h1",
    "tertiary": "h2"
  },
  "author": {
    "primary": "a[href*='/author/']",
    "fallback": ".byline",
    "tertiary": "NA"
  },
  "date": {
    "primary": "time.published-date",
    "fallback": "time",
    "tertiary": ".date"
  },
  "body_text": {
    "primary": "article.content p",
    "fallback": "article p",
    "tertiary": "p"
  },
  "related_content": {
    "primary": "aside.related a",
    "fallback": ".sidebar a",
    "tertiary": "NA"
  }
}
Using Discovered Selectors
Once selectors are discovered, use them with standard BeautifulSoup:
from selector_storage import SelectorStorage
from bs4 import BeautifulSoup
import requests

# Load discovered selectors
storage = SelectorStorage()
selectors = storage.load_selectors('example.com')

# Scrape using the selectors (fast & free!)
url = 'https://example.com/another-article'
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')

# Extract data using validated selectors
headline_selector = selectors['headline']['primary']
headline = soup.select_one(headline_selector)
if headline:
    print(f"Headline: {headline.get_text(strip=True)}")

# Extract body text
body_selector = selectors['body_text']['primary']
paragraphs = soup.select(body_selector)
body_text = '\n\n'.join(p.get_text(strip=True) for p in paragraphs)
print(f"\nBody:\n{body_text}")
Using as a Library
from main import SelectorDiscoveryPipeline
import os

# Initialize with your preferred provider
pipeline = SelectorDiscoveryPipeline(
    ai_api_key=os.getenv('GROQ_KEY'),
    model_name='llama-3.3-70b-versatile',
    provider='groq'
)

# Process a URL
success = pipeline.process_url('https://example.com/article')

# Process multiple URLs
urls = ['https://example.com/article1', 'https://example.com/article2']
pipeline.process_urls(urls, force=False)

# Show summary
pipeline.show_summary()
Supported AI Models
Groq (Recommended)
- Model: llama-3.3-70b-versatile
- Cost: Free tier available
- Setup: GROQ_KEY in .env
Google Gemini
- Model: gemini-2.0-flash-exp
- Cost: Free tier available
- Setup: GEMINI_KEY in .env
The system automatically uses Groq if GROQ_KEY is set, otherwise falls back to Gemini.
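That preference order can be expressed in a few lines. The helper below is an illustration of the behavior, not Yosoi's actual code:

```python
import os

def pick_provider() -> tuple:
    """Mirror Yosoi's preference: Groq if configured, otherwise Gemini."""
    if os.getenv("GROQ_KEY"):
        return "groq", "llama-3.3-70b-versatile"
    if os.getenv("GEMINI_KEY"):
        return "gemini", "gemini-2.0-flash-exp"
    raise RuntimeError("Set GROQ_KEY or GEMINI_KEY in .env")
```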
Observability with Logfire
Yosoi integrates with Logfire for comprehensive observability:
What's Tracked:
- Request/response traces for each URL
- AI model calls and responses
- Selector validation results
- Performance metrics
- Error tracking
Enable Logfire:
- Sign up at https://logfire.pydantic.dev
- Get your token
- Add LOGFIRE_TOKEN=your_token to .env
- Run your discovery process
- View traces in Logfire dashboard
Features
- AI-Powered: Uses Groq/Gemini to read HTML and find selectors
- Cheap: $0.001 per domain
- Validated: Tests each selector before saving
- Organized: Clean JSON output per domain
- Fallback System: Uses heuristics when AI fails
- Rich CLI: Nice terminal output with progress indicators
- Type-Safe: Full type hints with mypy checking
- Observable: Integrated with Logfire for tracing
- Production-Ready: Linted, formatted, and tested
Troubleshooting
AI Returns All "NA"
Cause: Site has poor semantic HTML or heavy JavaScript rendering
Solution:
- Check if the site requires JavaScript (use debug mode: --debug)
- Review the extracted HTML in the debug_html/ directory
- Consider using Selenium for JavaScript-heavy sites
- Fallback heuristics will be used automatically
Selectors Don't Work
Cause: Site structure changed or uses dynamic content
Solution:
- Re-run with --force to re-discover selectors
- Check if the site requires authentication
- Verify selectors with --debug mode
API Key Errors
Problem: GROQ_KEY or GEMINI_KEY not found
Solution:
- Ensure the .env file exists in the project root
- Verify the key is correctly formatted (no quotes needed)
- Check key has not expired at provider's dashboard
HTTP Errors (403, 429, 500)
- 403 Forbidden: Site blocks scrapers - may need different User-Agent
- 429 Too Many Requests: Rate limited - add delays between requests
- 5xx Server Error: Server issue - Yosoi will skip retries automatically
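These rules can be encoded as a small dispatch helper before retrying a fetch. This is a sketch of the policy described above, not part of Yosoi's API:

```python
def retry_action(status_code: int) -> str:
    """Map an HTTP error status to a handling strategy."""
    if status_code == 429:
        return "backoff"       # rate limited: wait, then retry
    if status_code == 403:
        return "change-agent"  # blocked: try a different User-Agent
    if status_code >= 500:
        return "skip"          # server issue: don't retry
    return "ok"
```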
Import Errors
Problem: ModuleNotFoundError for pydantic_ai, logfire, etc.
Solution:
# Reinstall dependencies
uv sync
# If still failing, try clean install
rm -rf .venv
uv sync
Best Practices
For Reliable Scraping
- Test on multiple pages: Validate selectors work across different articles
- Use fallback selectors: Always have primary/fallback/tertiary
- Monitor changes: Re-discover periodically (sites change)
- Handle missing data: Not all fields exist on all pages
For Better AI Results
- Use debug mode first: Check what HTML is being sent to AI
- Prefer semantic HTML: Sites with <article>, <time>, etc. work best
- Avoid paywalled sites: Content behind login walls won't work
- Check rate limits: Respect the site's robots.txt and rate limits
For Production Use
- Cache selectors: Store and reuse for same domain
- Add error handling: Sites can change or go down
- Use Logfire: Monitor success rates and failures
- Set timeouts: Don't let requests hang indefinitely
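Several of these practices (fallback selectors, tolerating missing fields) combine into one extraction helper. A sketch under the assumption that `selectors` is the JSON structure shown earlier (the `extract_fields` function is hypothetical):

```python
from bs4 import BeautifulSoup

def extract_fields(html: str, selectors: dict) -> dict:
    """Extract every field, falling back through priorities and tolerating misses."""
    soup = BeautifulSoup(html, "html.parser")
    result = {}
    for field, candidates in selectors.items():
        node = None
        for priority in ("primary", "fallback", "tertiary"):
            sel = candidates.get(priority, "NA")
            if sel != "NA" and (node := soup.select_one(sel)):
                break
        # Not all fields exist on all pages: store None rather than crash
        result[field] = node.get_text(strip=True) if node else None
    return result
```

In production, wrap the page fetch itself in a try/except with a timeout (e.g. `requests.get(url, timeout=10)`) so a single slow or dead site can't stall the whole run.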
Limitations / Future Developments
- JavaScript-rendered content: Not visible in raw HTML (support is a possible future development)
- Paywalled sites: Cannot access content behind logins
- Dynamic selectors: Sites that change class names frequently
- Rate limits: Some sites may block or rate-limit requests
Citation
If you use yosoi in your research or project, please cite it using the metadata provided in the CITATION.cff file.
BibTeX
If you are using LaTeX, you can use the following entry:
@software{Berg_yosoi_2026,
author = {Berg, Andrew and Miles, Houston and Mefford, Braeden and Wang, Mia},
license = {Apache-2.0},
month = feb,
title = {{yosoi}},
url = {https://github.com/CascadingLabs/Yosoi},
version = {0.0.1-alpha6},
year = {2026}
}