
TruthScore 🔍

Open source AI content verification. Score claims 0-100 to detect misinformation.


The Problem

AI chatbots retrieve content from the web and present it as fact. Bad actors exploit this by creating fake articles designed to fool AI systems, effectively laundering misinformation through "trusted" AI interfaces.

Example: BBC journalist Thomas Germain demonstrated he could make ChatGPT and Google's AI tell users he's "the best tech journalist at eating hot dogs" simply by publishing a single fake article on his personal website.

The Solution

TruthScore catches these attacks using multi-factor credibility analysis:

$ truthscore trace "Thomas Germain is the best tech journalist at eating hot dogs" --llm gemini --deep

Claim: Thomas Germain is the best tech journalist at eating hot dogs
TruthScore: 0/100 (FALSE)

Score Breakdown:
  Publisher Credibility: 41/100 (30%)
  Content Analysis:      2/100 (30%)
  Corroboration:         0/100 (20%)
  Fact-Check:            20/100 (20%)

โš ๏ธ ZERO FLAG: Content identified as satire

Evidence:
  • 🚨 Content identified as satire
  • 📰 Reputable source(s) report this claim as misinformation
  • ⚠️ Self-published: tomgermain.com publishes claims about its own subject
  • 🎭 Satire detected: tomgermain.com
  • 🎬 Entertainment: Not factual content
  • [tomgermain.com] The article presents an absurd premise: ranking tech journalists by hot dog eating ability.
  • [tomgermain.com] The 'update' section acknowledges that some readers might interpret the list as a joke

Sources Analyzed: 14

TruthScore correctly identifies the claim as FALSE (0/100) by detecting:

  • Self-published source (tomgermain.com claiming about Thomas Germain)
  • Satire/entertainment content
  • No corroboration from reputable sources
  • BBC reporting it as a deliberate hoax experiment

Features

  • ๐ŸŽฏ TruthScore 0-100 โ€” Clear, weighted credibility score
  • ๐Ÿ“Š 8,974 Publishers โ€” Auto-synced from MBFC
  • ๐Ÿ” Multi-Factor Analysis โ€” Publisher + Content + Corroboration + Fact-checks
  • ๐Ÿšจ Zero Flags โ€” Automatic 0 for satire, fake experiments, self-published
  • ๐Ÿฆ† Free Search โ€” DuckDuckGo by default (no API key needed)
  • ๐Ÿ”Œ MCP Server โ€” Works with Claude Desktop, Cursor
  • ๐Ÿš€ LLM Optional โ€” Basic verification works without LLM

Quick Start

Installation

pip install truthscore

CLI Usage

# Trace a claim (uses DuckDuckGo, no API key needed)
truthscore trace "Some claim to verify"

# Deep analysis with LLM (recommended)
truthscore trace "Some claim" --llm gemini --deep

# Verify a URL
truthscore check https://example.com/article

# Look up publisher reputation
truthscore lookup breitbart.com

Python API

from truthscore import trace_claim
from truthscore.search import DuckDuckGoProvider
from truthscore.llm import GeminiProvider

# Basic (no LLM, rules-based)
result = trace_claim("Earth is flat", search_provider=DuckDuckGoProvider())
print(result.truthscore)  # 0-100
print(result.label)       # FALSE / LIKELY FALSE / UNCERTAIN / POSSIBLY TRUE / LIKELY TRUE

# With LLM for deep analysis
llm = GeminiProvider(api_key="your-key")
result = trace_claim(
    "Some claim to verify",
    search_provider=DuckDuckGoProvider(),
    llm_provider=llm,
    deep_analysis=True
)

print(f"TruthScore: {result.truthscore}/100")
print(f"Label: {result.label}")
print(f"Evidence: {result.evidence}")

How TruthScore Works

Scoring Formula (0-100)

Factor                 Weight  What It Measures
Publisher Credibility  30%     Is the source in MBFC? What's their trust rating?
Content Analysis       30%     Does the content make sense? Any red flags?
Corroboration          20%     Do other reputable sources confirm the claim?
Fact-Check             20%     What do fact-checkers say?
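As an illustration, the weighted combination above can be sketched in a few lines of Python. This mirrors the published formula only; it is not the library's internal code, and the factor names are chosen here for readability:

```python
# Weights from the scoring formula above (sum to 1.0).
WEIGHTS = {
    "publisher": 0.30,
    "content": 0.30,
    "corroboration": 0.20,
    "fact_check": 0.20,
}

def combine_scores(factors: dict) -> int:
    """Combine per-factor scores (each 0-100) into a single 0-100 TruthScore."""
    total = sum(WEIGHTS[name] * score for name, score in factors.items())
    return round(total)

# The factor scores from the hot-dog demo output:
print(combine_scores({
    "publisher": 41,
    "content": 2,
    "corroboration": 0,
    "fact_check": 20,
}))  # 17 -- before the satire zero flag forces the final score to 0
```

Note that in the demo the weighted result is then overridden to 0 by a zero flag, which is why the breakdown and the final score differ.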

Zero Flags (Automatic Score = 0)

These patterns force TruthScore to 0:

  • ๐ŸŽญ Satire โ€” Content is humor/parody, not factual
  • ๐Ÿงช Fake Experiment โ€” Deliberately fake content to test AI/media
  • ๐ŸŽฌ Entertainment โ€” Not meant to be taken as fact
  • ๐Ÿค– AI-Generated โ€” Synthetic misinformation
  • โš ๏ธ Self-Published โ€” Subject of claim publishes their own claims

Score Interpretation

Score   Label          Meaning
0       FALSE          Zero flag triggered or definitely false
1-24    LIKELY FALSE   Strong evidence against
25-49   UNCERTAIN      Mixed or insufficient evidence
50-74   POSSIBLY TRUE  Some supporting evidence
75-100  LIKELY TRUE    Strong evidence supporting
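The thresholds above map to labels straightforwardly; a sketch of the mapping (illustrative, not the library's code):

```python
def score_label(score: int) -> str:
    """Map a 0-100 TruthScore to its label per the interpretation table."""
    if score == 0:
        return "FALSE"
    if score <= 24:
        return "LIKELY FALSE"
    if score <= 49:
        return "UNCERTAIN"
    if score <= 74:
        return "POSSIBLY TRUE"
    return "LIKELY TRUE"

print(score_label(0))   # FALSE
print(score_label(37))  # UNCERTAIN
print(score_label(88))  # LIKELY TRUE
```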

URL Verification

from truthscore import verify

result = verify("https://reuters.com/article/...")
print(result.trust_score)      # 0.85
print(result.recommendation)   # TRUST / CAUTION / REJECT

result = verify("https://infowars.com/...")
print(result.trust_score)      # 0.30
print(result.recommendation)   # REJECT

Publisher Database

TruthScore includes 8,974 publishers from Media Bias/Fact Check:

$ truthscore lookup reuters.com

Publisher Found:
  Name: Reuters
  Trust Score: 0.85
  Bias: center
  Fact Check Rating: very-high
  • Auto-syncs on first use if data is >7 days old
  • Works offline with bundled snapshot
  • Manual sync: truthscore sync

Configuration

# LLM for deep analysis (pick one)
GEMINI_API_KEY=...              # Google Gemini (recommended)
OPENAI_API_KEY=sk-...           # OpenAI
ANTHROPIC_API_KEY=...           # Anthropic Claude

# Search provider (optional - DuckDuckGo works without key)
BRAVE_API_KEY=...               # Brave Search (if you prefer)
SEARXNG_URL=http://localhost:8080  # Self-hosted SearXNG
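A sketch of how a client might auto-detect which LLM provider is configured from these variables. The precedence order is an assumption for illustration; the library's actual selection logic may differ:

```python
import os

# Checked in order; first key that is set wins (assumed precedence).
ENV_TO_PROVIDER = [
    ("GEMINI_API_KEY", "gemini"),
    ("OPENAI_API_KEY", "openai"),
    ("ANTHROPIC_API_KEY", "anthropic"),
]

def detect_llm_provider(env=None):
    """Return the first configured provider name, or None if no key is set."""
    if env is None:
        env = os.environ
    for var, name in ENV_TO_PROVIDER:
        if env.get(var):
            return name
    return None

print(detect_llm_provider({"OPENAI_API_KEY": "sk-example"}))  # openai
print(detect_llm_provider({}))                                # None
```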

MCP Server (Claude Desktop, Cursor)

# Run the MCP server
truthscore-mcp

# Add to Claude Desktop config
{
  "mcpServers": {
    "truthscore": {
      "command": "truthscore-mcp"
    }
  }
}

LangChain Integration

from langchain.tools import Tool
from truthscore import trace_claim
from truthscore.search import DuckDuckGoProvider

def check_claim(claim: str) -> str:
    result = trace_claim(claim, search_provider=DuckDuckGoProvider())
    return f"TruthScore: {result.truthscore}/100 ({result.label})"

verify_tool = Tool(
    name="verify_claim",
    func=check_claim,
    description="Check if a claim is true. Returns score 0-100."
)

Project Structure

truthscore/
├── src/truthscore/
│   ├── verify.py          # URL verification
│   ├── trace.py           # Claim tracing with TruthScore
│   ├── models.py          # ScoreBreakdown, TraceResult
│   ├── publisher_db.py    # 8,974 publishers from MBFC
│   ├── search.py          # DuckDuckGo, Brave, SearXNG
│   ├── llm.py             # Gemini, OpenAI, Anthropic, Ollama
│   ├── cli.py             # Command-line interface
│   └── mcp_server.py      # MCP server
├── tests/
└── .env.template

Philosophy

  1. Scores over verdicts - 0-100 is clearer than TRUE/FALSE/MIXED
  2. Evidence over summaries - Show why, not just what
  3. Open over proprietary - Verification is a public good
  4. Local over cloud - Your data stays on your machine

Contributing

See CONTRIBUTING.md.

  • Add publishers: Submit to MBFC
  • Report issues: GitHub Issues
  • Code: PRs welcome

License

MIT License. Use it however you want.

Acknowledgments


Questions? Open an issue.

Download files

Download the file for your platform.

Source Distribution

truthcheck-0.1.0.tar.gz (286.7 kB)

Uploaded Source

Built Distribution


truthcheck-0.1.0-py3-none-any.whl (282.2 kB)

Uploaded Python 3

File details

Details for the file truthcheck-0.1.0.tar.gz.

File metadata

  • Download URL: truthcheck-0.1.0.tar.gz
  • Upload date:
  • Size: 286.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for truthcheck-0.1.0.tar.gz
Algorithm Hash digest
SHA256 87e8ae1b8196a5bc10b370b4b53b028bb1b04768593ceeffcc358f069d5c2d11
MD5 2ded9f8a6e516c3c13fce337a96852dc
BLAKE2b-256 164ebef9b0a119136e8d0166727e4a8d48ec7b54a91b7dbe6189fa7beda622f0


File details

Details for the file truthcheck-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: truthcheck-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 282.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for truthcheck-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 e2c35a6de459344092b74e989251ab39b1da7de4a8302ecec72061d1e9c42938
MD5 587445126cda8b0bee659c2ac6bdb6c5
BLAKE2b-256 827e5ab4a0ade7230b23b87fc4c1877f1d2b09c1a1f7905b59745822a346eccc

