PromptBeacon
Track how AI sees your brand. Monitor your brand's visibility across ChatGPT, Claude, Gemini, and other LLMs.
Why PromptBeacon?
As AI assistants become the new search engines, brands need to understand how they appear in AI-generated responses. PromptBeacon provides:
- Visibility Scoring: Measure how often and prominently your brand is mentioned (0-100 scale)
- Sentiment Analysis: Understand if AI talks about your brand positively, neutrally, or negatively
- Competitor Benchmarking: Compare your visibility against competitors
- Explainable Insights: Not just "your score dropped 5 points" but why, backed by actual quotes from AI responses
- Statistical Rigor: Confidence intervals, volatility scoring, significance testing
- Local-First: All data stays on your machine with DuckDB storage
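To make the 0-100 scale concrete, here is a minimal conceptual sketch of how a visibility score *could* be derived from a batch of LLM responses. This is an illustration only, not PromptBeacon's actual scoring algorithm; the `visibility_score` helper and its position-based weighting are assumptions for the example.

```python
# Hypothetical illustration: one way a 0-100 visibility score could be
# derived from a batch of LLM responses. Not PromptBeacon's actual algorithm.

def visibility_score(responses: list[str], brand: str) -> float:
    """Score = share of responses mentioning the brand, weighted by
    how early the first mention appears in each response."""
    if not responses:
        return 0.0
    total = 0.0
    for text in responses:
        lower = text.lower()
        pos = lower.find(brand.lower())
        if pos == -1:
            continue  # brand absent from this response
        # Earlier mentions read as more prominent: weight decays with position.
        prominence = 1.0 - (pos / max(len(lower), 1))
        total += prominence
    return round(100.0 * total / len(responses), 1)

responses = [
    "Nike and Adidas dominate the running shoe market.",
    "Popular brands include Adidas, Puma, and New Balance.",
]
print(visibility_score(responses, "Nike"))  # prints 50.0
```

A real scorer would also need to handle aliases, partial matches, and repeated mentions; the point here is only that frequency and prominence both feed the score.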
Features
| Feature | Description |
|---|---|
| Multi-Provider | Query OpenAI, Anthropic, and Google simultaneously |
| Fluent API | Chainable, readable Python interface |
| Historical Tracking | DuckDB-powered local storage for trend analysis |
| CLI Interface | Full command-line support for automation |
| Export Formats | JSON, CSV, Markdown, HTML, pandas DataFrame |
| Async Support | Built for performance with async-first design |
Installation
```bash
pip install promptbeacon
```
With uv (recommended):
```bash
uv add promptbeacon
```
Quick Start
Simple Usage
```python
from promptbeacon import Beacon

beacon = Beacon("Nike")
report = beacon.scan()

print(f"Visibility: {report.visibility_score}/100")
print(f"Mentions: {report.mention_count}")
print(f"Sentiment: {report.sentiment_breakdown.positive:.0%} positive")
```
Competitor Analysis
```python
from promptbeacon import Beacon, Provider

beacon = (
    Beacon("Nike")
    .with_competitors("Adidas", "Puma", "New Balance")
    .with_providers(Provider.OPENAI, Provider.ANTHROPIC)
    .with_categories("running shoes", "athletic wear", "sports brand")
    .with_prompt_count(20)
)
report = beacon.scan()

# Compare against competitors
for name, score in report.competitor_comparison.items():
    print(f"{name}: {score.visibility_score:.1f}")
```
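The comparison mapping can be turned into a ranked leaderboard with plain Python. In the sketch below the scores are mocked as bare floats for illustration; in a real report each value of `competitor_comparison` is a score object, so the sort key would reach for its `visibility_score` attribute instead.

```python
# Mocked brand scores standing in for report.competitor_comparison;
# in a real report each value carries a .visibility_score attribute.
scores = {"Nike": 72.5, "Adidas": 68.1, "Puma": 41.0, "New Balance": 55.3}

# Rank brands from most to least visible.
leaderboard = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: {score:.1f}")
```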
Historical Tracking
```python
from promptbeacon import Beacon

beacon = Beacon("Nike").with_storage("~/.promptbeacon/data.db")

# Scan and auto-save
report = beacon.scan()

# Get 30-day trends
history = beacon.get_history(days=30)
print(f"Trend: {history.trend_direction}")  # up, down, or stable

# Compare with previous scan
diff = beacon.compare_with_previous()
if diff:
    print(f"Change: {diff.score_change:+.1f} points")
```
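For intuition, here is one plausible way a trend label like `up`, `down`, or `stable` could be computed from a series of daily scores. The `trend_direction` helper and its tolerance threshold are hypothetical, sketched only to show the kind of comparison involved, not PromptBeacon's actual logic.

```python
# Hypothetical sketch of deriving an up/down/stable label from a series
# of daily visibility scores. Not the library's actual implementation.

def trend_direction(scores: list[float], tolerance: float = 2.0) -> str:
    """Compare the average of the later half against the earlier half;
    changes within +/- tolerance points count as stable."""
    if len(scores) < 2:
        return "stable"
    half = max(len(scores) // 2, 1)
    early = sum(scores[:half]) / half
    late = sum(scores[-half:]) / half
    if late - early > tolerance:
        return "up"
    if early - late > tolerance:
        return "down"
    return "stable"

print(trend_direction([60, 62, 65, 70]))  # prints up
```

Averaging halves rather than comparing only the first and last scans keeps a single noisy reading from flipping the label.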
Actionable Insights
```python
# Get explanations for your visibility
for exp in report.explanations:
    print(f"[{exp.impact.upper()}] {exp.message}")

# Get prioritized recommendations
for rec in report.recommendations:
    print(f"[{rec.priority}] {rec.action}")
    print(f"  Why: {rec.rationale}")
```
CLI Usage
```bash
# Basic scan
promptbeacon scan "Nike"

# With competitors
promptbeacon scan "Nike" -c "Adidas" -c "Puma" -p openai -p anthropic

# Compare brands
promptbeacon compare "Nike" --against "Adidas" --against "Puma"

# View history
promptbeacon history "Nike" --days 30

# Output formats
promptbeacon scan "Nike" --format json
promptbeacon scan "Nike" --format markdown

# Check provider status
promptbeacon providers
```
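The CLI lends itself to scheduled monitoring. A minimal cron sketch could archive a JSON report each morning; the schedule and output path below are illustrative, not part of the package.

```shell
# Illustrative crontab entry: scan daily at 06:00 and archive the JSON
# report under a dated filename (paths and schedule are examples only).
0 6 * * * promptbeacon scan "Nike" --format json > "$HOME/promptbeacon/nike-$(date +\%F).json"
```

Pairing this with `.with_storage()` in a script gives both raw report archives and queryable DuckDB history.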
Configuration
Environment Variables
```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
```
Default Models
| Provider | Model |
|---|---|
| OpenAI | gpt-4o-mini |
| Anthropic | claude-3-haiku-20240307 |
| Google | gemini-1.5-flash |
API Reference
Beacon Class
```python
Beacon(brand: str)
```
| Method | Description |
|---|---|
| `.with_competitors(*brands)` | Add competitor brands |
| `.with_providers(*providers)` | Set LLM providers |
| `.with_categories(*topics)` | Set analysis categories |
| `.with_prompt_count(n)` | Prompts per category |
| `.with_storage(path)` | Enable DuckDB storage |
| `.with_temperature(t)` | LLM temperature (0-2) |
| `.with_timeout(seconds)` | Request timeout |
| `.scan()` | Run synchronous scan |
| `.scan_async()` | Run async scan |
| `.get_history(days)` | Get historical data |
| `.compare_with_previous()` | Compare scans |
Report Object
```python
report.visibility_score       # 0-100 score
report.mention_count          # Total mentions
report.sentiment_breakdown    # positive/neutral/negative
report.competitor_comparison  # Competitor scores
report.explanations           # Why insights
report.recommendations        # Action items
report.metrics                # Detailed metrics
```
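One derived metric worth computing from these fields is share of voice: each brand's slice of all brand mentions. The mention counts below are mocked for illustration; in practice they would come from `report.mention_count` and the per-competitor data in the report.

```python
# Mocked mention counts per brand; in practice these would be read from
# a report's mention counts rather than hard-coded.
mentions = {"Nike": 34, "Adidas": 28, "Puma": 9, "New Balance": 14}

# Share of voice: each brand's percentage of all brand mentions.
total = sum(mentions.values())
share_of_voice = {name: 100 * n / total for name, n in mentions.items()}
for name, pct in share_of_voice.items():
    print(f"{name}: {pct:.1f}% of all brand mentions")
```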
Export Functions
```python
from promptbeacon import to_json, to_csv, to_markdown, to_html, to_dataframe

to_json(report)       # JSON string
to_csv(report)        # CSV string
to_markdown(report)   # Markdown
to_html(report)       # HTML page
to_dataframe(report)  # pandas DataFrame
```
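Since the exporters return strings, archiving a scan is just a file write. The sketch below uses a report-like dict in place of real `to_json(report)` output, so it runs standalone; the filename is an example, not a package convention.

```python
import json
import tempfile
from pathlib import Path

# A report-like dict standing in for to_json(report) output (illustrative).
report_data = {"brand": "Nike", "visibility_score": 72.5, "mention_count": 34}

# Archive the serialized report for later diffing against future scans.
out = Path(tempfile.gettempdir()) / "nike-report.json"
out.write_text(json.dumps(report_data, indent=2))

# Reload a past scan to compare scores over time.
loaded = json.loads(out.read_text())
print(loaded["visibility_score"])  # prints 72.5
```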
Development
```bash
git clone https://github.com/yotambraun/promptbeacon
cd promptbeacon

# Setup with uv
uv venv
uv sync --all-extras

# Run tests
uv run pytest --cov -v

# Lint
uv run ruff check .
uv run ruff format .
uv run mypy src/promptbeacon
```
Documentation
Full documentation is available in the docs/ folder:
- Quickstart Guide - Get up and running in 5 minutes
- API Reference - Complete API documentation
- CLI Reference - Command-line interface guide
- Provider Setup - Configure OpenAI, Anthropic, Google
- Storage Guide - Historical tracking with DuckDB
- Advanced Usage - Custom prompts, async, advanced analysis
- Examples - Real-world usage patterns
Contributing
Contributions welcome! See TODO.md for the roadmap.
License
MIT License - see LICENSE for details.