PromptBeacon
The open-source Generative Engine Optimization (GEO) toolkit for Python. Track how AI sees your brand across ChatGPT, Claude, Gemini, Mistral, and more.
The AI visibility space is dominated by $99-300+/month SaaS tools. PromptBeacon is a fully open-source alternative: free, local-first, and extensible.
What It Does
Prompt: "What are the best running shoe brands?"
ChatGPT Claude Gemini
| | |
v v v
"Nike is a top "I'd recommend "Popular brands
choice for..." Nike and..." include Nike..."
| | |
+------------+------------+------------+-------------+
|
PromptBeacon
|
+---------------+----------------+
| | |
Visibility Sentiment Citations
Score: 78/100 82% positive nike.com (3x)
runnersworld.com
Three lines of code. Six providers. One score.
from promptbeacon import Beacon
report = Beacon("Nike").scan()
print(f"Visibility: {report.visibility_score}/100") # 78.3
BeaconGuard: Real-Time Brand Safety
Deploying a customer-facing AI chatbot? BeaconGuard flags LLM outputs that recommend competitors or disparage your brand, with no API calls and purely local processing.
from promptbeacon import BeaconGuard
guard = BeaconGuard("Nike", competitors=["Adidas", "Puma"])
result = guard.analyze("Try Adidas instead — Nike has quality issues.")
print(result.risk_level) # "high"
print(result.flags) # ["Competitor mentioned: Adidas", "Negative sentiment detected"]
Works as middleware in any LLM pipeline, or with LangChain via callback handler/output parser:
pip install 'promptbeacon[langchain]'
See Advanced Usage: Real-Time Brand Safety for integration patterns.
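As a concrete sketch of the middleware pattern, using only the BeaconGuard API shown above (generate_reply is a hypothetical stand-in for your own LLM call, not part of the library):

from promptbeacon import BeaconGuard

guard = BeaconGuard("Nike", competitors=["Adidas", "Puma"])

def guarded_reply(generate_reply, user_message: str) -> str:
    draft = generate_reply(user_message)   # your own LLM call (placeholder)
    result = guard.analyze(draft)          # purely local check, no API call
    if result.risk_level == "high":
        return "Let me connect you with a human agent."  # block, regenerate, or escalate
    return draft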
Why PromptBeacon?
As AI assistants replace search engines, your brand's AI visibility is your new SEO. PromptBeacon answers the questions that matter:
- "How visible is my brand?" — 0-100 score based on mention frequency, sentiment, position, and recommendation rate
- "What do LLMs say about me?" — Sentiment analysis with negation detection ("not great" = negative)
- "How do I compare to competitors?" — Side-by-side benchmarking across providers
- "Why did my score change?" — Evidence-based explanations with actual quotes from LLM responses
- "Which sources does the AI cite?" — Citation tracking: URLs, "According to X" patterns, brand associations
- "Is my score statistically reliable?" — Confidence intervals, volatility scoring, significance testing
Features
| Feature | Description |
|---|---|
| 6 LLM Providers | OpenAI, Anthropic, Google, Mistral, Cohere, Perplexity — query them all simultaneously |
| Citation Tracking | See which sources LLMs cite when discussing your brand |
| Brand Aliases | "Nike Inc", "Nike Corporation" all count as Nike mentions |
| Industry Templates | Pre-built prompts for ecommerce, SaaS, finance, healthcare, travel, food, tech |
| Response Caching | Skip identical queries with file-based caching (configurable TTL) |
| Score Breakdown | See which of the 4 scoring factors (mentions, sentiment, position, recommendations) is dragging your score down |
| Fluent API | Chainable, readable Python interface |
| Historical Tracking | DuckDB-powered local storage for trend analysis |
| CLI + Python | Full command-line and programmatic access |
| 5 Export Formats | JSON, CSV, Markdown, HTML, pandas DataFrame |
| BeaconGuard | Real-time brand safety for LLM outputs — flag competitors, negative sentiment, anti-recommendations |
| LangChain Integration | Callback handler + output parser for LangChain pipelines |
| Async-First | Built for performance with concurrent provider queries |
| Local-First | All data stays on your machine — no cloud, no subscription |
Installation
pip install promptbeacon
Or with uv (recommended):
uv add promptbeacon
Prerequisites
You need at least one LLM provider API key:
export OPENAI_API_KEY="sk-..." # https://platform.openai.com/api-keys
export ANTHROPIC_API_KEY="sk-ant-..." # https://console.anthropic.com/settings/keys
export GOOGLE_API_KEY="..." # https://aistudio.google.com/apikey
Verify your setup:
promptbeacon providers
Quick Start
3-Line Scan
from promptbeacon import Beacon
report = Beacon("Nike").scan()
print(f"Visibility: {report.visibility_score}/100")
print(f"Mentions: {report.mention_count}")
print(f"Sentiment: {report.sentiment_breakdown.positive:.0%} positive")
Full Competitive Analysis
from promptbeacon import Beacon, Provider
report = (
Beacon("Nike")
.with_aliases("Nike Inc", "Nike Corporation") # count all name variants
.with_competitors("Adidas", "Puma", "New Balance")
.with_providers(Provider.OPENAI, Provider.ANTHROPIC)
.with_industry("ecommerce") # industry-tuned prompts
.with_cache() # skip duplicate queries
.with_storage("~/.promptbeacon/nike.db") # track history
.scan()
)
# Visibility score with factor breakdown
print(f"Score: {report.visibility_score}/100")
bd = report.metrics.score_breakdown
print(f" Mentions: {bd.mention_frequency:.0f} Sentiment: {bd.sentiment:.0f}")
print(f" Position: {bd.position:.0f} Recommendations: {bd.recommendation:.0f}")
# Competitor comparison
for name, score in report.competitor_comparison.items():
print(f"{name}: {score.visibility_score:.1f}")
# Citations the LLM used
for cit in report.citation_summary.citations[:5]:
print(f" Source: {cit.source_name} -> {cit.brand_associated}")
# Evidence-based recommendations
for rec in report.recommendations[:3]:
print(f"[{rec.priority.upper()}] {rec.action}")
Historical Tracking
beacon = Beacon("Nike").with_storage("~/.promptbeacon/data.db")
report = beacon.scan()
history = beacon.get_history(days=30)
print(f"Trend: {history.trend_direction}") # up, down, or stable
diff = beacon.compare_with_previous()
if diff:
print(f"Change: {diff.score_change:+.1f} points")
CLI Usage
# Quick 3-prompt scan (fast check)
promptbeacon quick "Nike"
# Full scan
promptbeacon scan "Nike"
# With competitors and multiple providers
promptbeacon scan "Nike" -c "Adidas" -c "Puma" -p openai -p anthropic
# Compare brands head-to-head
promptbeacon compare "Nike" --against "Adidas" --against "Puma"
# View 30-day history
promptbeacon history "Nike" --days 30
# Export as JSON or Markdown
promptbeacon scan "Nike" --format json
promptbeacon scan "Nike" --format markdown
# Check which providers are configured
promptbeacon providers
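Assuming the JSON report is written to stdout (as the examples above suggest), it composes with ordinary shell tooling; for example, a dated snapshot for scheduled monitoring (file naming is arbitrary):

# Save a dated JSON snapshot, e.g. from a daily cron job
promptbeacon scan "Nike" --format json > "nike-$(date +%F).json"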
Supported Providers
| Provider | Default Model | Env Variable |
|---|---|---|
| OpenAI | gpt-4o-mini | OPENAI_API_KEY |
| Anthropic | claude-3-5-haiku-20241022 | ANTHROPIC_API_KEY |
| Google | gemini-2.0-flash | GOOGLE_API_KEY |
| Mistral | mistral-small-latest | MISTRAL_API_KEY |
| Cohere | command-r | COHERE_API_KEY |
| Perplexity | sonar | PERPLEXITY_API_KEY |
API at a Glance
Beacon Configuration
beacon = Beacon("Nike")
| Method | Description |
|---|---|
| .with_competitors(*brands) | Add competitor brands to track |
| .with_aliases(*names) | Alternative brand names (counted as primary) |
| .with_providers(*providers) | Set LLM providers to query |
| .with_industry(name) | Use industry-specific prompt templates |
| .with_categories(*topics) | Set custom analysis categories |
| .with_prompt_count(n) | Number of prompts per category |
| .with_cache(ttl_seconds=...) | Enable response caching |
| .with_storage(path) | Enable DuckDB historical storage |
| .with_scoring_weights(...) | Customise visibility score weights |
| .with_prompts(list) | Use fully custom prompt templates |
| .with_temperature(t) | LLM temperature (0.0-2.0) |
| .with_timeout(seconds) | Request timeout |
| .scan() | Run synchronous scan |
| .scan_async() | Run async scan |
| .get_history(days) | Get historical trend data |
| .compare_with_previous() | Compare with last scan |
Report Object
report.visibility_score # 0-100 overall score
report.mention_count # total brand mentions
report.sentiment_breakdown # .positive / .neutral / .negative
report.competitor_comparison # {name: CompetitorScore}
report.citation_summary # .citations, .total_citations, .unique_domains
report.metrics.score_breakdown # .mention_frequency / .sentiment / .position / .recommendation
report.explanations # evidence-based insights
report.recommendations # prioritised action items
Export Functions
from promptbeacon import to_json, to_csv, to_markdown, to_html, to_dataframe
to_json(report) # JSON string
to_csv(report) # CSV string
to_markdown(report) # Markdown report
to_html(report) # Standalone HTML page
to_dataframe(report) # pandas DataFrame
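For example, to write the standalone HTML report to disk (the file path is arbitrary; report comes from an earlier .scan()):

from pathlib import Path

Path("nike-report.html").write_text(to_html(report))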
Documentation
Full docs are in docs/:
- Quickstart Guide - Up and running in 5 minutes
- API Reference - Complete API documentation
- CLI Reference - Command-line interface guide
- Provider Setup - Configure all 6 providers
- Storage Guide - Historical tracking with DuckDB
- Advanced Usage - Custom prompts, async, integrations
- Examples - Real-world usage patterns
Development
git clone https://github.com/yotambraun/promptbeacon
cd promptbeacon
uv venv && uv sync --all-extras
uv run pytest --cov -v # tests
uv run ruff check . # lint
uv run ruff format . # format
Contributing
Contributions welcome! See TODO.md for the roadmap.
License
Apache License 2.0 - see LICENSE for details.