
LLM visibility monitoring for brands - track how your brand appears in AI-generated responses


PromptBeacon

The open-source Generative Engine Optimization (GEO) toolkit for Python. Track how AI sees your brand across ChatGPT, Claude, Gemini, Mistral, and more.



The AI visibility space is dominated by SaaS tools priced at $99-300+/month. PromptBeacon is an open-source alternative: free, local-first, and extensible.

What It Does

Prompt: "What are the best running shoe brands?"

     ChatGPT                    Claude                     Gemini
        |                         |                          |
        v                         v                          v
  "Nike is a top              "I'd recommend             "Popular brands
   choice for..."              Nike and..."               include Nike..."
        |                         |                          |
        +------------+------------+------------+-------------+
                     |
              PromptBeacon
                     |
     +---------------+----------------+
     |               |                |
  Visibility    Sentiment         Citations
  Score: 78/100  82% positive    nike.com (3x)
                                 runnersworld.com

Three lines of code. Six providers. One score.

from promptbeacon import Beacon

report = Beacon("Nike").scan()
print(f"Visibility: {report.visibility_score}/100")  # 78.3

Why PromptBeacon?

As AI assistants replace search engines, your brand's AI visibility is your new SEO. PromptBeacon answers the questions that matter:

  • "How visible is my brand?" — 0-100 score based on mention frequency, sentiment, position, and recommendation rate
  • "What do LLMs say about me?" — Sentiment analysis with negation detection ("not great" = negative)
  • "How do I compare to competitors?" — Side-by-side benchmarking across providers
  • "Why did my score change?" — Evidence-based explanations with actual quotes from LLM responses
  • "Which sources does the AI cite?" — Citation tracking: URLs, "According to X" patterns, brand associations
  • "Is my score statistically reliable?" — Confidence intervals, volatility scoring, significance testing
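For intuition, a composite 0-100 score can be built as a weighted blend of the four factors above. The sketch below is purely illustrative — the weights are made up and this is not PromptBeacon's actual formula (which is configurable via `.with_scoring_weights(...)`):

```python
# Illustrative only: blend four 0-100 factor scores into one visibility score.
# The weights here are hypothetical, not PromptBeacon's real defaults.
def blend_score(mention_frequency, sentiment, position, recommendation):
    weights = {
        "mention_frequency": 0.35,
        "sentiment": 0.25,
        "position": 0.20,
        "recommendation": 0.20,
    }
    factors = {
        "mention_frequency": mention_frequency,
        "sentiment": sentiment,
        "position": position,
        "recommendation": recommendation,
    }
    return sum(weights[k] * factors[k] for k in weights)

print(blend_score(80, 82, 70, 75))  # -> 77.5
```

Weighting mention frequency highest reflects the idea that being mentioned at all is the precondition for everything else; the real library exposes the weights so you can tune this trade-off.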

Features

  • 6 LLM Providers: OpenAI, Anthropic, Google, Mistral, Cohere, Perplexity — query them all simultaneously
  • Citation Tracking: see which sources LLMs cite when discussing your brand
  • Brand Aliases: "Nike Inc" and "Nike Corporation" both count as Nike mentions
  • Industry Templates: pre-built prompts for ecommerce, SaaS, finance, healthcare, travel, food, tech
  • Response Caching: skip identical queries with file-based caching (configurable TTL)
  • Score Breakdown: see which of the 4 scoring factors (mentions, sentiment, position, recommendations) drags your score down
  • Fluent API: chainable, readable Python interface
  • Historical Tracking: DuckDB-powered local storage for trend analysis
  • CLI + Python: full command-line and programmatic access
  • 5 Export Formats: JSON, CSV, Markdown, HTML, pandas DataFrame
  • Async-First: built for performance with concurrent provider queries
  • Local-First: all data stays on your machine — no cloud, no subscription
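The response cache works on a time-to-live principle: a cached answer is reused until its TTL expires, then re-fetched. Here is a minimal stdlib-only sketch of that idea; the `TTLCache` class below is hypothetical, and PromptBeacon's real cache is file-based and configured via `.with_cache(ttl_seconds=...)`:

```python
import time

# Hypothetical in-memory TTL cache illustrating the caching idea;
# PromptBeacon's actual cache persists responses to disk.
class TTLCache:
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # prompt -> (timestamp, response)

    def get(self, prompt):
        entry = self._store.get(prompt)
        if entry is None:
            return None
        stored_at, response = entry
        if time.time() - stored_at > self.ttl:  # entry expired
            del self._store[prompt]
            return None
        return response

    def set(self, prompt, response):
        self._store[prompt] = (time.time(), response)

cache = TTLCache(ttl_seconds=3600)
cache.set("best running shoes?", "Nike is a top choice...")
print(cache.get("best running shoes?"))  # cache hit within the TTL
```

Caching matters here because a scan fires many near-identical prompts at paid APIs; skipping duplicates within the TTL window cuts both cost and latency.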

Installation

pip install promptbeacon

Or with uv (recommended):

uv add promptbeacon

Prerequisites

You need at least one LLM provider API key:

export OPENAI_API_KEY="sk-..."          # https://platform.openai.com/api-keys
export ANTHROPIC_API_KEY="sk-ant-..."   # https://console.anthropic.com/settings/keys
export GOOGLE_API_KEY="..."             # https://aistudio.google.com/apikey

Verify your setup:

promptbeacon providers

Quick Start

3-Line Scan

from promptbeacon import Beacon

report = Beacon("Nike").scan()
print(f"Visibility: {report.visibility_score}/100")
print(f"Mentions: {report.mention_count}")
print(f"Sentiment: {report.sentiment_breakdown.positive:.0%} positive")
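Because the toolkit is async-first, `.scan_async()` (listed in the API table below) fans provider queries out concurrently rather than one at a time. The concurrency pattern can be sketched with stdlib asyncio and stand-in provider calls — the function names here are hypothetical, not PromptBeacon internals:

```python
import asyncio

# Stand-in for querying one provider; real code would call an LLM API.
async def query_provider(name, prompt):
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{name}: response to {prompt!r}"

async def scan(prompt, providers):
    # Fan out to all providers concurrently, as an async-first scan would.
    tasks = [query_provider(p, prompt) for p in providers]
    return await asyncio.gather(*tasks)

responses = asyncio.run(scan("best running shoe brands?",
                             ["openai", "anthropic", "google"]))
for r in responses:
    print(r)
```

With `asyncio.gather`, total wall-clock time is roughly that of the slowest provider rather than the sum of all of them — the reason concurrent querying pays off when scanning six providers at once.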

Full Competitive Analysis

from promptbeacon import Beacon, Provider

report = (
    Beacon("Nike")
    .with_aliases("Nike Inc", "Nike Corporation")       # count all name variants
    .with_competitors("Adidas", "Puma", "New Balance")
    .with_providers(Provider.OPENAI, Provider.ANTHROPIC)
    .with_industry("ecommerce")                          # industry-tuned prompts
    .with_cache()                                        # skip duplicate queries
    .with_storage("~/.promptbeacon/nike.db")             # track history
    .scan()
)

# Visibility score with factor breakdown
print(f"Score: {report.visibility_score}/100")
bd = report.metrics.score_breakdown
print(f"  Mentions: {bd.mention_frequency:.0f}  Sentiment: {bd.sentiment:.0f}")
print(f"  Position: {bd.position:.0f}  Recommendations: {bd.recommendation:.0f}")

# Competitor comparison
for name, score in report.competitor_comparison.items():
    print(f"{name}: {score.visibility_score:.1f}")

# Citations the LLM used
for cit in report.citation_summary.citations[:5]:
    print(f"  Source: {cit.source_name} -> {cit.brand_associated}")

# Evidence-based recommendations
for rec in report.recommendations[:3]:
    print(f"[{rec.priority.upper()}] {rec.action}")

Historical Tracking

beacon = Beacon("Nike").with_storage("~/.promptbeacon/data.db")
report = beacon.scan()

history = beacon.get_history(days=30)
print(f"Trend: {history.trend_direction}")  # up, down, or stable

diff = beacon.compare_with_previous()
if diff:
    print(f"Change: {diff.score_change:+.1f} points")
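A trend direction like the one `get_history` reports can be derived by comparing recent scores against earlier ones. The stdlib-only sketch below is an assumption about how such a classifier might work — the helper name and the 2-point threshold are illustrative, not PromptBeacon's implementation:

```python
def trend_direction(scores, threshold=2.0):
    """Classify a score series as 'up', 'down', or 'stable'.

    Compares the mean of the second half of the series against the
    first half; the 2-point threshold is an illustrative choice.
    """
    if len(scores) < 2:
        return "stable"
    mid = len(scores) // 2
    earlier = sum(scores[:mid]) / mid
    recent = sum(scores[mid:]) / (len(scores) - mid)
    delta = recent - earlier
    if delta > threshold:
        return "up"
    if delta < -threshold:
        return "down"
    return "stable"

print(trend_direction([70, 72, 71, 78, 80, 79]))  # -> up
```

A threshold matters because LLM responses are stochastic: small run-to-run score wobble should read as "stable", not as a real movement.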

CLI Usage

# Quick 3-prompt scan (fast check)
promptbeacon quick "Nike"

# Full scan
promptbeacon scan "Nike"

# With competitors and multiple providers
promptbeacon scan "Nike" -c "Adidas" -c "Puma" -p openai -p anthropic

# Compare brands head-to-head
promptbeacon compare "Nike" --against "Adidas" --against "Puma"

# View 30-day history
promptbeacon history "Nike" --days 30

# Export as JSON or Markdown
promptbeacon scan "Nike" --format json
promptbeacon scan "Nike" --format markdown

# Check which providers are configured
promptbeacon providers

Supported Providers

Provider     Default Model               Env Variable
OpenAI       gpt-4o-mini                 OPENAI_API_KEY
Anthropic    claude-3-5-haiku-20241022   ANTHROPIC_API_KEY
Google       gemini-2.0-flash            GOOGLE_API_KEY
Mistral      mistral-small-latest        MISTRAL_API_KEY
Cohere       command-r                   COHERE_API_KEY
Perplexity   sonar                       PERPLEXITY_API_KEY
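The check that `promptbeacon providers` performs can be approximated with the stdlib alone, using the environment variable names from the table above:

```python
import os

# Env-var names taken from the provider table above.
PROVIDER_ENV_VARS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Google": "GOOGLE_API_KEY",
    "Mistral": "MISTRAL_API_KEY",
    "Cohere": "COHERE_API_KEY",
    "Perplexity": "PERPLEXITY_API_KEY",
}

for provider, var in PROVIDER_ENV_VARS.items():
    status = "configured" if os.environ.get(var) else "missing"
    print(f"{provider:<11} {var:<20} {status}")
```

Only providers whose key is set can be queried, so running a check like this before a scan avoids surprises mid-run.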

API at a Glance

Beacon Configuration

beacon = Beacon("Nike")

Method                          Description
.with_competitors(*brands)      Add competitor brands to track
.with_aliases(*names)           Alternative brand names (counted as the primary brand)
.with_providers(*providers)     Set the LLM providers to query
.with_industry(name)            Use industry-specific prompt templates
.with_categories(*topics)       Set custom analysis categories
.with_prompt_count(n)           Number of prompts per category
.with_cache(ttl_seconds=...)    Enable response caching
.with_storage(path)             Enable DuckDB historical storage
.with_scoring_weights(...)      Customise visibility score weights
.with_prompts(list)             Use fully custom prompt templates
.with_temperature(t)            LLM temperature (0.0-2.0)
.with_timeout(seconds)          Request timeout
.scan()                         Run a synchronous scan
.scan_async()                   Run an asynchronous scan
.get_history(days)              Get historical trend data
.compare_with_previous()        Compare with the last scan

Report Object

report.visibility_score        # 0-100 overall score
report.mention_count           # total brand mentions
report.sentiment_breakdown     # .positive / .neutral / .negative
report.competitor_comparison   # {name: CompetitorScore}
report.citation_summary        # .citations, .total_citations, .unique_domains
report.metrics.score_breakdown # .mention_frequency / .sentiment / .position / .recommendation
report.explanations            # evidence-based insights
report.recommendations         # prioritised action items

Export Functions

from promptbeacon import to_json, to_csv, to_markdown, to_html, to_dataframe

to_json(report)       # JSON string
to_csv(report)        # CSV string
to_markdown(report)   # Markdown report
to_html(report)       # Standalone HTML page
to_dataframe(report)  # pandas DataFrame

Documentation

Full documentation lives in the docs/ directory of the repository.

Development

git clone https://github.com/yotambraun/promptbeacon
cd promptbeacon
uv venv && uv sync --all-extras

uv run pytest --cov -v        # tests
uv run ruff check .           # lint
uv run ruff format .          # format

Contributing

Contributions welcome! See TODO.md for the roadmap.

License

Apache License 2.0 - see LICENSE for details.

Acknowledgements

Built with LiteLLM, Pydantic, DuckDB, Typer, and Rich.
