KameleonDB
Find the true color of your data.
The First Database Built for Agents to Own, Not Just Query
Most databases let agents query data that humans structured. KameleonDB goes further: agents own the entire data lifecycle, from schema design to data ingestion to continuous evolution. You provide the goals and policies; agents build and manage the database.
Built on PostgreSQL (JSONB) or SQLite (JSON1) with schema-as-data storage, agents can restructure information on the fly without migrations, DDL, or human intervention.
Philosophy: Agents as Data Engineers
In traditional databases, humans are the data engineers: they design schemas, write migrations, and structure data for agents to query.
KameleonDB makes agents the data engineers. Agents don't just consume data—they design the schema, ingest records, evolve structure, and reshape information as they reason about it. Humans shift from data architects to policy makers, defining what agents can do, not how to structure every field.
This is schema-on-reason: structure emerges from agent reasoning, not upfront human design. As agents learn more about the data, they adapt the schema to match their understanding.
First Principles:
- Radical Simplicity — Perfection achieved by removing, not adding
- Agent-First Design — APIs optimized for agent reasoning patterns
- Schema-on-Reason — Schema emerges from reasoning, not upfront design
- Provenance & Auditability — Every decision traceable
- Policy-Driven Governance — Autonomy bounded by declarative policies
- Security by Design — Zero-trust architecture
- Enterprise-Grade Reliability — ACID guarantees and multi-tenancy
See FIRST-PRINCIPLES.md for detailed explanations and AGENTS.md for the complete agent-native design philosophy.
Features
- Dynamic Schema: Create and modify entity fields at runtime without migrations
- Multi-Database: PostgreSQL (JSONB) and SQLite (JSON1) support
- Agent-First Design: Every operation is a tool for AI agents with JSON-serializable I/O
- Self-Describing: Agents can discover schema before querying
- Idempotent Operations: Safe for agents to call repeatedly
- Audit Trail: Track who made schema changes and why
- Zero-Lock Evolution: Schema changes are metadata-only, no table locks
Installation
# Core only (SQLite works out of the box)
pip install kameleondb
# With PostgreSQL support
pip install kameleondb[postgresql]
# With MCP server
pip install kameleondb[mcp]
# For development
pip install kameleondb[dev,postgresql]
# Everything
pip install kameleondb[all]
Database Requirements:
- SQLite: 3.9+ with JSON1 extension (included in Python stdlib)
- PostgreSQL: 12+ with JSONB support
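Because SQLite ships inside the Python standard library, the JSON1 requirement can be checked before installing anything. A minimal stdlib-only sketch (the variable names are ours, not part of KameleonDB):

```python
import sqlite3

# Quick check that the interpreter's bundled SQLite meets the
# requirements above: version 3.9+ and the JSON1 functions available.
conn = sqlite3.connect(":memory:")
(version,) = conn.execute("SELECT sqlite_version()").fetchone()
(valid,) = conn.execute("""SELECT json_valid('{"a": 1}')""").fetchone()
major, minor, *_ = (int(part) for part in version.split("."))
json1_ok = valid == 1  # json_valid() returns 1 for well-formed JSON
conn.close()
```

If `json1_ok` is false or the version is too old, your Python was built against an unusually old SQLite and the SQLite backend will not work.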
Quick Start
Option 1: MCP Server (Recommended for AI Agents)
The MCP (Model Context Protocol) server exposes KameleonDB as tools that AI agents can use directly.
Installation:
pip install kameleondb[mcp]
Start the MCP server:
# PostgreSQL
kameleondb-mcp --database postgresql://user:pass@localhost/kameleondb
# SQLite (for development)
kameleondb-mcp --database sqlite:///./kameleondb.db
Configure in Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"kameleondb": {
"command": "kameleondb-mcp",
"args": ["--database", "postgresql://localhost/kameleondb"]
}
}
}
Available MCP Tools:
- kameleondb_describe() - Discover database schema
- kameleondb_create_entity() - Create new entity types
- kameleondb_insert() - Add records
- kameleondb_execute_sql() - Query with LLM-generated SQL
- kameleondb_materialize_entity() - Optimize storage for performance
- ...and 20+ more tools
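Over MCP, each tool is invoked with a standard JSON-RPC `tools/call` request. A sketch of what a client might send for kameleondb_insert (the argument names `entity` and `record` are illustrative assumptions, not the documented parameter schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "kameleondb_insert",
    "arguments": {
      "entity": "Contact",
      "record": {"name": "Alice", "email": "alice@example.com"}
    }
  }
}
```

MCP clients such as Claude Desktop construct these requests for you; agents only see the tool names and parameter schemas.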
See MCP Documentation for client setup.
Option 2: Command-Line Interface
Installation:
pip install kameleondb
Initialize and create your first entity:
# Initialize database
kameleondb init
# Create an entity
kameleondb schema create Contact \
--field "name:string:required" \
--field "email:string:unique" \
--field "phone:string"
# Insert data (inline JSON)
kameleondb data insert Contact '{"name": "Alice", "email": "alice@example.com"}'
# Insert from file
kameleondb data insert Contact --from-file contact.json
# List records
kameleondb data list Contact
# Query with SQL
kameleondb query run "SELECT * FROM kdb_records WHERE entity_id='...' LIMIT 10"
JSON output for scripting:
kameleondb --json schema list | jq .
kameleondb --json data insert Contact '{"name":"Bob","email":"bob@example.com"}'
Available Commands:
- schema - Create, list, describe, modify entities
- data - Insert, get, update, delete, list records
- query - Execute and validate SQL
- storage - Materialize entities, check storage mode
- admin - Initialize, info, changelog
See kameleondb --help for full command reference.
Option 3: Python API Integration
For developers integrating KameleonDB into Python applications:
from kameleondb import KameleonDB
# Initialize with PostgreSQL
db = KameleonDB("postgresql://user:pass@localhost/kameleondb")
# Or use SQLite for development/testing
# db = KameleonDB("sqlite:///./kameleondb.db")
# Create an entity with fields
contacts = db.create_entity(
name="Contact",
fields=[
{"name": "first_name", "type": "string", "required": True},
{"name": "email", "type": "string", "unique": True},
],
created_by="my-agent",
if_not_exists=True, # Idempotent - safe to call multiple times
)
# Add a field later (with reasoning for audit)
contacts.add_field(
name="linkedin_url",
field_type="string",
created_by="enrichment-agent",
reason="Found LinkedIn profiles in documents",
if_not_exists=True,
)
# Insert data
contact_id = contacts.insert({
"first_name": "John",
"email": "john@example.com",
})
# Retrieve by ID
contact = contacts.find_by_id(contact_id)
print(contact) # {"id": "...", "first_name": "John", "email": "john@example.com", ...}
# For complex queries, use SQL generation via schema context
context = db.get_schema_context()
# Use context with an LLM to generate SQL, then:
# results = db.execute_sql("SELECT ... FROM kdb_records WHERE ...")
# Discover schema (agents call this first)
schema = db.describe()
print(schema)
# {
# "entities": {
# "Contact": {
# "fields": ["first_name", "email", "linkedin_url"],
# ...
# }
# }
# }
Tool Integration:
# Get all operations as tools for AI agents
tools = db.get_tools()
# Each tool has:
# - name: "kameleondb_create_entity"
# - description: Human-readable description
# - parameters: JSON Schema for inputs
# - function: Callable to execute
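Descriptors in this shape map directly onto common LLM tool-calling formats. A sketch of one such adapter, assuming only the four fields listed above (the OpenAI-style wrapper and the sample descriptor are ours, standing in for a real db.get_tools() result):

```python
# Adapt KameleonDB tool descriptors (name, description, parameters,
# function) to an OpenAI-style function-calling spec. The "function"
# callable stays client-side and is never sent to the model.

def to_openai_tools(tools):
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t["description"],
                "parameters": t["parameters"],  # already a JSON Schema
            },
        }
        for t in tools
    ]

# Hand-written stand-in for a descriptor from db.get_tools():
sample = [{
    "name": "kameleondb_describe",
    "description": "Discover the database schema.",
    "parameters": {"type": "object", "properties": {}},
    "function": lambda: {"entities": {}},
}]
specs = to_openai_tools(sample)
```

When the model picks a tool, look up the matching descriptor by name and invoke its `function` with the model-supplied arguments.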
See docs/ARCHITECTURE.md for technical details.
Agent Framework Integrations
OpenClaw Skill
KameleonDB is available as an OpenClaw skill for seamless integration with OpenClaw agents. OpenClaw is an open-source agent framework that bridges messaging apps to AI agents with 700+ community skills.
Installation:
# Install KameleonDB
pip install kameleondb
# Set database URL
export KAMELEONDB_URL="sqlite:///./kameleondb.db"
# Initialize
kameleondb admin init
# Copy skill to OpenClaw (or install from ClawHub)
cp -r openclaw-skill ~/.openclaw/skills/kameleondb
What OpenClaw Agents Can Do:
- 🧠 Remember information across conversations (contacts, tasks, notes)
- 🔗 Track entities and relationships without planning schemas upfront
- 📚 Build knowledge bases that evolve as they learn
- 🌐 Ingest external data (APIs, web scraping, CSVs)
- 📊 Query with SQL using schema context for LLM-generated queries
- ⚡ Self-optimize with performance hints and materialization
Key Features for Agents:
- Schema-on-Reason: Start storing data immediately, add fields as you discover them
- Agent Hints Pattern: Query results include optimization suggestions with exact commands
- Audit Trail: Every schema change records why the agent made it
- Zero Migrations: Old records don't break when adding new fields
The skill exposes the full CLI via the --json flag, optimized for agent consumption. See openclaw-skill/SKILL.md for usage examples and workflows.
Coming Soon: ClawHub listing for one-click installation 🦎
Development
# Clone the repository
git clone https://github.com/marcosnataqs/kameleondb.git
cd kameleondb
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run linting
ruff check src tests
mypy src/kameleondb
# Run pre-commit hooks
pre-commit install
pre-commit run --all-files
Roadmap
- v0.1: Core schema engine ✅
- v0.2: Relationships + Hybrid Storage + Query Intelligence ✅
- Relationship metadata (many-to-one, one-to-many, many-to-many)
- Schema context for SQL generation
- Query validation and execution
- SQLite support
- Hybrid storage (shared/dedicated modes)
- Storage migration (materialize/dematerialize)
- Query metrics and materialization suggestions
- v0.3: Relational queries + Many-to-many (planned)
- Cross-entity queries with JOINs
- Cascading operations
- Many-to-many junction tables
- v0.4: Natural language queries (planned)
- LLM-powered query generation
- Query caching and optimization
See docs/tasks/BACKLOG.md for detailed roadmap.
License
Apache 2.0 License - see LICENSE for details.