# TrustLayer

*You bring the AI. We bring the trust.*

The universal trust layer for every AI tool you use. Verify outputs, track costs, compare models, and keep your data local, all from one open-source app that runs on your machine.
## Why TrustLayer?

People don't trust AI. Not because it's incapable, but because:
- You can't verify if an output is accurate or hallucinated
- Your data goes to multiple cloud providers you don't control
- 100 new AI tools launch daily — impossible to evaluate all of them
- No single place to track what you're spending across all providers
TrustLayer wraps around all of them. You bring whatever AI you already trust. We add the trust layer on top.
## Features
| Feature | What it does |
|---|---|
| Universal Connector | Plug in any AI: Ollama (auto-detected), Claude, GPT-4, Gemini. One interface for all. |
| Verification Engine | Every output gets a trust score 0–100. Hallucination and overconfidence flags. |
| Personal Learning | Learns how you work across sessions. Stored 100% locally. |
| Cost Tracker | Real-time spending dashboard across all providers. Budget alerts. |
| Model Comparison | Test your actual tasks across models side-by-side. Personal benchmarks. |
| Offline Knowledge Base | Index your docs, PDFs, code repos. Works fully offline with Ollama. |
| No-Code Workflows | Visual workflow builder. Summarize emails, auto-verify, doc Q&A. |
| Adaptive Personality | Honest for facts. Creative for brainstorming. Adapts automatically. |
## Quick Start

```bash
# Install
pip install trustlayer

# Start the server + web UI
trustlayer server
# → Auto-detects Ollama if running
# → Opens http://localhost:8000
```
That's it. Add API keys if you want cloud providers:
```bash
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GOOGLE_API_KEY=...
```
## CLI Usage

```bash
# Verify any AI output
trustlayer verify "The earth is 4.5 billion years old."
# → Trust Score: 94/100 (HIGH) — No concerns

# Ask any connected AI
trustlayer ask "Summarize this codebase" --provider ollama --model llama3.2

# Compare multiple providers side-by-side on the same prompt
trustlayer compare "Write unit tests for this function"

# Check your spending across all providers
trustlayer costs

# Detect what AI tools are available on your machine
trustlayer detect

# Upload documents to your local knowledge base
trustlayer knowledge upload ./my-docs/

# Learn and track your session
trustlayer learn
```
## REST API

```bash
# Verify content
curl -X POST http://localhost:8000/api/verify \
  -H "Content-Type: application/json" \
  -d '{"content": "AI output here"}'
```

Response:

```json
{
  "trust_score": 87,
  "trust_label": "high",
  "summary": "This response is 87% trusted. 0 concern(s) flagged.",
  "issues": []
}
```
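A minimal Python sketch of consuming this response in a script; the `should_review` helper and the 70-point threshold are illustrative choices, not part of TrustLayer:

```python
# Shape of the /api/verify response, copied from the example above.
SAMPLE = {
    "trust_score": 87,
    "trust_label": "high",
    "summary": "This response is 87% trusted. 0 concern(s) flagged.",
    "issues": [],
}

def should_review(verification: dict, threshold: int = 70) -> bool:
    """Flag an AI output for human review when the trust score falls
    below the (illustrative) threshold, or when the verification
    engine raised any issues at all."""
    return verification["trust_score"] < threshold or bool(verification["issues"])
```

A gating check like this is one natural way to wire verification into a pipeline: pass outputs through automatically when the score is high and clean, and route everything else to a person.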
```bash
# Compare providers
curl -X POST http://localhost:8000/api/compare \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain quantum entanglement", "providers": ["ollama", "anthropic"]}'

# Check costs
curl http://localhost:8000/api/costs

# List connected providers
curl http://localhost:8000/api/connectors
```
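The same endpoints are easy to script without extra dependencies. A stdlib-only Python sketch of calling `/api/compare` (the request body matches the curl example above; the response schema is not documented here, so `compare` simply returns the parsed JSON as a dict):

```python
import json
from urllib import request

API = "http://localhost:8000"  # default server address from Quick Start

def build_compare_payload(prompt: str, providers: list[str]) -> bytes:
    """Build the JSON body for POST /api/compare."""
    return json.dumps({"prompt": prompt, "providers": providers}).encode()

def compare(prompt: str, providers: list[str]) -> dict:
    """Send one prompt to every listed provider and return the parsed
    comparison response from the local TrustLayer server."""
    req = request.Request(
        f"{API}/api/compare",
        data=build_compare_payload(prompt, providers),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```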
Full interactive docs are served at http://localhost:8000/docs (Swagger UI) while the server is running.
## Architecture

```
trustlayer/
├── backend/              # FastAPI backend (async SQLite)
│   ├── main.py           # Application entry point + lifespan
│   ├── config.py         # Configuration (env vars)
│   ├── database.py       # SQLite with SQLAlchemy async
│   ├── providers/        # AI provider adapters (Ollama, OpenAI-compat)
│   └── routers/          # 8 feature routers
│       ├── verify.py     # Verification engine + trust scoring
│       ├── compare.py    # Multi-provider comparison
│       ├── connectors.py # Provider detection & management
│       ├── costs.py      # Cost tracking + budget alerts
│       ├── knowledge.py  # Local knowledge base (RAG)
│       ├── learn.py      # Personal learning & session memory
│       ├── workflows.py  # No-code workflow builder
│       └── settings.py   # Runtime configuration
├── frontend/             # React + TypeScript + Tailwind CSS
│   └── src/pages/        # Dashboard, Verify, Compare, Costs, Knowledge,
│                         # Connectors, Workflows, Settings
├── cli/                  # Python CLI (Typer) with rich output
└── docs/                 # GitHub Pages website
```
All data stored in ~/.trustlayer/ — nothing leaves your machine unless you configure cloud providers.
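For orientation, an adapter in `backend/providers/` plausibly reduces to a small async interface like the sketch below. The class and method names here are hypothetical illustrations, not TrustLayer's actual API:

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Hypothetical minimal shape of a provider adapter; the real
    classes in backend/providers/ are not shown in this README."""

    name: str

    @abstractmethod
    async def complete(self, prompt: str, model: str) -> str:
        """Return the provider's completion for a prompt."""

class EchoAdapter(ProviderAdapter):
    """Toy adapter, defined here only to show the interface in use."""

    name = "echo"

    async def complete(self, prompt: str, model: str) -> str:
        return f"[{self.name}/{model}] {prompt}"
```

An interface in this style is what lets the comparison and cost-tracking routers treat Ollama and cloud providers uniformly: each adapter hides its provider's wire format behind the same call.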
## Privacy & Local-First Design

- **No telemetry.** No usage data is sent anywhere.
- **No accounts.** TrustLayer itself requires no sign-up.
- **No cloud sync.** The SQLite database lives at `~/.trustlayer/trustlayer.db`.
- **Fully offline.** Works completely without internet when using Ollama.
- **Your keys, your calls.** API calls go directly from your machine to providers.
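Because all state lives under one directory, backing up (or wiping) your data is a plain filesystem operation. A small illustrative helper; `backup` is not a TrustLayer command:

```python
import shutil
from pathlib import Path

# All TrustLayer state lives in one directory (see the list above).
DATA_DIR = Path.home() / ".trustlayer"

def backup(src: Path, dest: Path) -> Path:
    """Copy a TrustLayer data directory into dest and return the copy.
    Call as backup(DATA_DIR, Path("/my/backups")) for a real backup."""
    target = dest / src.name
    shutil.copytree(src, target)
    return target
```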
Development
git clone https://github.com/acunningham-ship-it/trustlayer
cd trustlayer
# Backend (FastAPI)
pip install -r requirements.txt
uvicorn backend.main:app --reload
# → http://localhost:8000
# Frontend (React + Vite)
cd frontend && npm install && npm run dev
# → http://localhost:5173
# CLI
pip install -e .
trustlayer --help
# Tests
pytest tests/
## Contributing
Issues and PRs are welcome. TrustLayer is MIT licensed — use it, fork it, build on it.
## License
MIT — free to use, modify, and distribute.
## File details: trustlayer_ai-0.1.0.tar.gz

- Size: 30.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `268bb9c13eb6ef90bf8b2f7282475a4f2fdd30d0c8dd95fda459ab657aecd17c` |
| MD5 | `1aaf95bc03000759c3fe0aa067ba8c42` |
| BLAKE2b-256 | `a2ed22ba39e46370af315b1ceca52b25d2956999060e6dfcf36a0074f66508ec` |
## File details: trustlayer_ai-0.1.0-py3-none-any.whl

- Size: 31.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4ac841e50e8ebed763194f7538ec07518dff5f5412b5ba1d8e3dfea1bdca282a` |
| MD5 | `74c7bb2c6cc7aa810f0d346efc62f8de` |
| BLAKE2b-256 | `80d5e7de07861dbee67cf54089928dec4bdbffa42e24a888a2aa4b3299d5c9cd` |