
TrustLayer

You bring the AI. We bring the trust.

The universal trust layer for every AI tool you use. Verify outputs, track costs, compare models, and keep your data local — all from one open-source app that runs on your machine.

License: MIT · Python 3.11+ · Website

Website · Quick Start · CLI Reference · REST API


🚀 One Command to Get Started

pip install trustlayer-ai && trustlayer server

That's it. Runs locally. Auto-detects Ollama. No account needed. See full Quick Start below ↓


Demo

[See TrustLayer in action — demo GIF coming soon]


Why TrustLayer?

People don't trust AI. Not because it's incapable — but because:

  • You can't verify if an output is accurate or hallucinated
  • Your data goes to multiple cloud providers you don't control
  • New AI tools launch every day, far more than anyone can realistically evaluate
  • No single place to track what you're spending across all providers

TrustLayer wraps around all of them. You bring whatever AI you already trust. We add the trust layer on top.


Features

  • Universal Connector: plug in any AI (Ollama auto-detected, Claude, GPT-4, Gemini) through one interface.
  • Verification Engine: every output gets a 0–100 trust score, with hallucination and overconfidence flags.
  • Personal Learning: learns how you work across sessions, stored 100% locally.
  • Cost Tracker: real-time spending dashboard across all providers, with budget alerts.
  • Model Comparison: test your actual tasks across models side-by-side and build personal benchmarks.
  • Offline Knowledge Base: index your docs, PDFs, and code repos; works fully offline with Ollama.
  • No-Code Workflows: visual workflow builder for summarizing emails, auto-verifying outputs, and doc Q&A.
  • Adaptive Personality: honest for facts, creative for brainstorming; adapts automatically.
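The Verification Engine's 0–100 score comes with a coarse label you can branch on. As a hypothetical sketch (the cutoffs below are our assumptions, not TrustLayer's actual internal thresholds), a score-to-label mapping might look like:

```python
def trust_label(score: int) -> str:
    """Map a 0-100 trust score to a coarse label.

    The cutoff values here are illustrative assumptions, not
    TrustLayer's actual internal thresholds.
    """
    if not 0 <= score <= 100:
        raise ValueError("trust score must be between 0 and 100")
    if score >= 80:
        return "high"
    if score >= 50:
        return "medium"
    return "low"

# The examples elsewhere in this README (94/100 "HIGH", 87 "high")
# are consistent with an 80+ cutoff for the "high" band.
print(trust_label(94))  # → high
print(trust_label(87))  # → high
```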

Quick Start

# Install
pip install trustlayer-ai

# Start the server + web UI
trustlayer server
# → Auto-detects Ollama if running
# → Opens http://localhost:8000

That's it. Add API keys if you want cloud providers:

export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GOOGLE_API_KEY=...
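As a convenience sketch (ours, not part of TrustLayer), you can check which cloud providers your shell currently has keys for before starting the server. The env var names match the exports above; the mapping and helper are hypothetical:

```python
import os

# Env var names match the exports above; the mapping itself is our helper.
PROVIDERS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def available_cloud_providers(env=os.environ) -> list[str]:
    """Return the providers whose API key is set in the environment."""
    return [name for name, var in PROVIDERS.items() if env.get(var)]

print(available_cloud_providers({"OPENAI_API_KEY": "sk-..."}))  # → ['openai']
```

With no keys set, the list is empty and TrustLayer falls back to local Ollama.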

CLI Usage

# Verify any AI output
trustlayer verify "The earth is 4.5 billion years old."
# → Trust Score: 94/100 (HIGH) — No concerns

# Ask any connected AI
trustlayer ask "Summarize this codebase" --provider ollama --model llama3.2

# Compare multiple providers side-by-side on the same prompt
trustlayer compare "Write unit tests for this function"

# Check your spending across all providers
trustlayer costs

# Detect what AI tools are available on your machine
trustlayer detect

# Upload documents to your local knowledge base
trustlayer knowledge upload ./my-docs/

# Learn and track your session
trustlayer learn

REST API

# Verify content
curl -X POST http://localhost:8000/api/verify \
  -H "Content-Type: application/json" \
  -d '{"content": "AI output here"}'

# Response
{
  "trust_score": 87,
  "trust_label": "high",
  "summary": "This response is 87% trusted. 0 concern(s) flagged.",
  "issues": []
}

# Compare providers
curl -X POST http://localhost:8000/api/compare \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain quantum entanglement", "providers": ["ollama", "anthropic"]}'

# Check costs
curl http://localhost:8000/api/costs

# List connected providers
curl http://localhost:8000/api/connectors

Full interactive docs at http://localhost:8000/docs (Swagger UI) when the server is running.
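If you prefer Python over curl, a minimal client sketch using only the standard library might look like this (assuming the default localhost:8000 server and the /api/verify endpoint shown above; the helper names are ours, not part of TrustLayer):

```python
import json
import urllib.request

def verify(content: str, base_url: str = "http://localhost:8000") -> dict:
    """POST content to the /api/verify endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/api/verify",
        data=json.dumps({"content": content}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(result: dict) -> str:
    """Render a one-line summary from a verify response."""
    return f"{result['trust_score']}/100 ({result['trust_label']}), {len(result['issues'])} issue(s)"

# Works on the sample response shown in the docs above:
sample = {"trust_score": 87, "trust_label": "high",
          "summary": "This response is 87% trusted. 0 concern(s) flagged.",
          "issues": []}
print(summarize(sample))  # → 87/100 (high), 0 issue(s)
```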


Architecture

trustlayer/
├── backend/              # FastAPI backend (async SQLite)
│   ├── main.py           # Application entry point + lifespan
│   ├── config.py         # Configuration (env vars)
│   ├── database.py       # SQLite with SQLAlchemy async
│   ├── providers/        # AI provider adapters (Ollama, OpenAI-compat)
│   └── routers/          # 8 feature routers
│       ├── verify.py     # Verification engine + trust scoring
│       ├── compare.py    # Multi-provider comparison
│       ├── connectors.py # Provider detection & management
│       ├── costs.py      # Cost tracking + budget alerts
│       ├── knowledge.py  # Local knowledge base (RAG)
│       ├── learn.py      # Personal learning & session memory
│       ├── workflows.py  # No-code workflow builder
│       └── settings.py   # Runtime configuration
├── frontend/             # React + TypeScript + Tailwind CSS
│   └── src/pages/        # Dashboard, Verify, Compare, Costs, Knowledge,
│                         # Connectors, Workflows, Settings
├── cli/                  # Python CLI (Typer) with rich output
└── docs/                 # GitHub Pages website

All data stored in ~/.trustlayer/ — nothing leaves your machine unless you configure cloud providers.


Privacy & Local-First Design

  • No telemetry. No usage data sent anywhere.
  • No accounts. TrustLayer itself requires no sign-up.
  • No cloud sync. SQLite database lives at ~/.trustlayer/trustlayer.db.
  • Fully offline. Works completely without internet when using Ollama.
  • Your keys, your calls. API calls go directly from your machine to providers.

Development

git clone https://github.com/acunningham-ship-it/trustlayer
cd trustlayer

# Install from PyPI (if you just want to run it)
pip install trustlayer-ai

# Or install from source
pip install -r requirements.txt
uvicorn backend.main:app --reload
# → http://localhost:8000

# Frontend (React + Vite)
cd frontend && npm install && npm run dev
# → http://localhost:5173

# CLI
pip install -e .
trustlayer --help

# Tests
pytest tests/

Contributing

Issues and PRs are welcome. TrustLayer is MIT licensed — use it, fork it, build on it.


License

MIT — free to use, modify, and distribute.
