
A lightweight, agent-first semantic layer for AI agents


SLayer is a lightweight semantic layer that lets AI agents query data without writing SQL.

If you find SLayer useful, a ⭐ helps others discover it!


What is SLayer?

SLayer is a semantic layer that sits between your database and whatever consumes the data – AI agents, internal tools, dashboards, or scripts. You define your data models (or let SLayer auto-generate them from the schema), and query using a structured API of measures, dimensions, and filters instead of writing SQL directly.

SLayer compiles these queries into the correct SQL for your database, handling joins, aggregations, time-based calculations, and dialect differences so that consumers don't have to.
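
For example, the structured query below (written with the Python client shown later in this README) counts orders by status; the SQL in the comment is illustrative, since the exact output depends on the model and dialect:

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery, ColumnRef

client = SlayerClient(url="http://localhost:5143")

# "How many orders are in each status?"
query = SlayerQuery(
    model="orders",
    fields=[{"formula": "*:count"}],
    dimensions=[ColumnRef(name="status")],
)
df = client.query_df(query)

# Roughly what SLayer compiles this to (illustrative, dialect-dependent):
#   SELECT status, COUNT(*) FROM public.orders GROUP BY status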

SLayer is

  1. dynamic – models can be updated at any time and used immediately; aggregations are defined in queries, not models
  2. simple – query structure is intuitive and easily understood by LLMs and humans
  3. expressive – allows querying things like "month-on-month % increase in total revenue, compared to the previous year"
  4. embeddable – can be used as a standalone service or imported as a Python module with no extra server
  5. flexible – exposes several interfaces – MCP, REST API, CLI and Python, supports most popular DB dialects

Key features include automatic model ingestion, queries-as-models, and auto-applied filters; see the full documentation.

Why not just let agents write SQL? Several reasons: accuracy, consistency, interpretability, and more – see our blog post and dbt's benchmark analysis.

Quickstart

We recommend using uv, especially if you're not working inside a Python project.

To run the server:

uvx --from 'motley-slayer[all]' slayer serve

Or to add the MCP server:

claude mcp add slayer -- uvx --from 'motley-slayer[all]' slayer mcp

Then configure a datasource or ask your agent to help you do it.

Read more on how to get started with MCP, CLI, REST API, and Python in the docs.

Interfaces

REST API

# Query
curl -X POST http://localhost:5143/query \
  -H "Content-Type: application/json" \
  -d '{"model": "orders", "fields": [{"formula": "*:count"}], "dimensions": [{"name": "status"}]}'

# List models (returns name + description)
curl http://localhost:5143/models

# Get a single datasource (credentials masked)
curl http://localhost:5143/datasources/my_postgres
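
Any HTTP client works; here is a minimal Python sketch of the same /query call using the requests library (assuming a server on the default port):

import requests

# Same payload as the curl example above.
resp = requests.post(
    "http://localhost:5143/query",
    json={
        "model": "orders",
        "fields": [{"formula": "*:count"}],
        "dimensions": [{"name": "status"}],
    },
)
resp.raise_for_status()
print(resp.json())  # response shape is described in the docs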

See more in the docs.

MCP Server

SLayer supports two MCP transports: HTTP (served alongside the API) and stdio (serverless, spawned by the agent).

# 1. stdio-based, does not require a running server
claude mcp add slayer -- slayer mcp

# 2. HTTP-based (SSE), provided the SLayer server is already running
claude mcp add slayer-remote --transport sse --url http://localhost:5143/mcp/sse

Once a datasource is created, SLayer never exposes its credentials to consumers.

Both transports expose the same tools, allowing agents to inspect, create, and update datasources and models, and to run queries. More info in the docs.

Python Client

Useful for agents working in code-execution environments (e.g. AI data analytics), as well as for any Python app.

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery, ColumnRef

# Remote mode (connects to running server)
client = SlayerClient(url="http://localhost:5143")

# Or local mode (no server needed)
from slayer.storage.yaml_storage import YAMLStorage
client = SlayerClient(storage=YAMLStorage(base_dir="./my_models"))

# Query data
query = SlayerQuery(
    model="orders",
    fields=[{"formula": "*:count"}, {"formula": "revenue:sum"}],
    dimensions=[ColumnRef(name="status")],
    limit=10,
)
df = client.query_df(query)
print(df)

CLI

# Run a query directly from the terminal
slayer query '{"model": "orders", "fields": [{"formula": "*:count"}], "dimensions": [{"name": "status"}]}'

# Or from a file
slayer query @query.json --format json

These commands do not depend on a running server.

Models

By default, models are defined as YAML files. Add an optional description to help users and agents understand complex models:

name: orders
sql_table: public.orders
data_source: my_postgres
description: "Core orders table with revenue metrics"

dimensions:
  - name: id
    sql: id
    type: number
    primary_key: true
  - name: status
    sql: status
    type: string
  - name: created_at
    sql: created_at
    type: time

measures:
  - name: revenue
    sql: amount
  - name: quantity
    sql: qty

Fields

The fields parameter specifies which columns to return. Each field is a formula that pairs a measure with an aggregation (e.g. revenue:sum) and can apply functions on top:

{
  "model": "orders",
  "dimensions": ["status"],
  "time_dimensions": [{"dimension": "created_at", "granularity": "month"}],
  "fields": [
    {"formula": "*:count"},
    {"formula": "revenue:sum"},
    {"formula": "revenue:sum / *:count", "name": "aov", "label": "Average Order Value"},
    {"formula": "cumsum(revenue:sum)"},
    {"formula": "change_pct(revenue:sum)"},
    {"formula": "last(revenue:sum)", "name": "latest_rev"},
    {"formula": "time_shift(revenue:sum, -1, 'year')", "name": "rev_last_year"},
    {"formula": "time_shift(revenue:sum, -2)", "name": "rev_2_periods_ago"},
    {"formula": "lag(revenue:sum, 1)", "name": "rev_prev_row"},
    {"formula": "rank(revenue:sum)"},
    {"formula": "change(cumsum(revenue:sum))", "name": "cumsum_delta"}
  ]
}

Available functions: cumsum, time_shift, change, lag, and more – see docs. Formulas support arbitrary nesting — e.g., change(cumsum(revenue:sum)) or cumsum(revenue:sum) / *:count.
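
As a concrete sketch of the "month-on-month % increase in total revenue, compared to the previous year" example from the intro, the query below combines change_pct with time_shift; it assumes SlayerQuery accepts the same time_dimensions shape as the JSON API:

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery

client = SlayerClient(url="http://localhost:5143")

# Monthly % change in revenue, plus the same metric shifted back one year.
# Assumes SlayerQuery mirrors the JSON query schema shown above.
query = SlayerQuery(
    model="orders",
    time_dimensions=[{"dimension": "created_at", "granularity": "month"}],
    fields=[
        {"formula": "change_pct(revenue:sum)", "name": "mom_change"},
        {"formula": "time_shift(change_pct(revenue:sum), -1, 'year')", "name": "mom_change_prev_year"},
    ],
)
print(client.query_df(query))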

Filters

Filters use simple formula strings — no verbose JSON objects:

{
  "model": "orders",
  "fields": [{"formula": "*:count"}, {"formula": "revenue:sum"}],
  "filters": [
    "status == 'completed'",
    "amount > 100"
  ]
}

Filters support a variety of operators, composition, and pattern matching. Transforms and computed columns can also be used for filtering. See the docs for more.
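
In the Python client the same formula strings go straight into the query; a minimal sketch, assuming SlayerQuery exposes the same filters field as the JSON API:

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery

client = SlayerClient(url="http://localhost:5143")

# Count and revenue for completed orders over 100, reusing the filter
# formulas from the JSON example above.
query = SlayerQuery(
    model="orders",
    fields=[{"formula": "*:count"}, {"formula": "revenue:sum"}],
    filters=["status == 'completed'", "amount > 100"],
)
print(client.query_df(query))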

Auto-Ingestion

Connect to a database and generate models automatically. SLayer introspects the schema, detects foreign key relationships, and creates models with explicit join metadata.

For example, given tables orders → customers → regions (via FKs), the orders model will automatically include:

  • Joined dimensions: customers.name, regions.name, etc. (dotted syntax)
  • Count-distinct measures: customers.*:count_distinct, regions.*:count_distinct
  • Explicit joins — LEFT JOINs are constructed dynamically at query time

# Via CLI
slayer ingest --datasource my_postgres --schema public

# Via API
curl -X POST http://localhost:5143/ingest \
  -d '{"datasource": "my_postgres", "schema_name": "public"}'

Via MCP, agents can do this conversationally:

  1. create_datasource(name="mydb", type="postgres", host="localhost", database="app", username="user", password="pass")
  2. ingest_datasource_models(datasource_name="mydb", schema_name="public")
  3. models_summary(datasource_name="mydb")
  4. inspect_model(model_name="orders")
  5. query(...)
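
Once ingested, the joined fields are queryable like any other. A sketch, assuming an orders model generated as described above:

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery, ColumnRef

client = SlayerClient(url="http://localhost:5143")

# Order counts and distinct customers per region, via the auto-generated
# dotted join syntax; SLayer builds the LEFT JOINs at query time.
query = SlayerQuery(
    model="orders",
    fields=[{"formula": "*:count"}, {"formula": "customers.*:count_distinct"}],
    dimensions=[ColumnRef(name="regions.name")],
)
print(client.query_df(query))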

Datasource Setup

By default, datasources are configured as individual YAML files in the datasources/ directory:

# datasources/my_postgres.yaml
name: my_postgres
type: postgres
host: ${DB_HOST}
port: 5432
database: ${DB_NAME}
username: ${DB_USER}
password: ${DB_PASSWORD}

Environment variable references (${VAR}) are resolved at read time.
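
As an illustration of the substitution (a sketch of the behavior, not SLayer's actual code):

import os
import re

def resolve_env_refs(value: str) -> str:
    # Replace each ${VAR} with the value of the VAR environment variable.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["DB_HOST"] = "db.internal"
print(resolve_env_refs("host: ${DB_HOST}"))  # -> host: db.internal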

See more in the docs.

Storage Backends

SLayer ships with two storage backends:

  • YAMLStorage (default) — models and datasources as YAML files on disk. Great for version control.
  • SQLiteStorage — everything in a single SQLite file. Good for embedded use or when you don't want to manage files.

SLayer also makes it easy to implement your own storage backend, which is useful for features such as tenant isolation.

See the documentation page for storage backends for more.
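
For example, tenant isolation can be approximated by routing each tenant to its own storage. The helper below is hypothetical and only reuses the documented YAMLStorage; a production backend would implement SLayer's storage interface directly (see the storage-backends docs):

from pathlib import Path

from slayer.client.slayer_client import SlayerClient
from slayer.storage.yaml_storage import YAMLStorage

# Hypothetical per-tenant factory: one YAMLStorage rooted in each tenant's
# own directory, so tenants never see each other's models or datasources.
def storage_for_tenant(tenant_id: str, root: str = "./tenants") -> YAMLStorage:
    base = Path(root) / tenant_id
    base.mkdir(parents=True, exist_ok=True)
    return YAMLStorage(base_dir=str(base))

client = SlayerClient(storage=storage_for_tenant("acme"))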

Roadmap

  1. Dynamic joins
  2. Multi-stage queries
  3. Cross-model measures
  4. Aggregation at query time
  5. Smart output formatting (currency, percentages)
  6. Unpivoting
  7. Auto-propagating filters
  8. Asof joins
  9. Chart generation (eCharts)

Examples

The examples/ directory contains runnable examples that also serve as integration tests:

  • embedded – SQLite, no server needed
  • postgres – Docker Compose with Postgres + REST API
  • mysql – Docker Compose with MySQL + REST API
  • clickhouse – Docker Compose with ClickHouse + REST API

Tutorials

The docs/examples/ directory contains Jupyter notebooks that walk through SLayer's features step by step.

  • SQL vs DSL – How model SQL and query DSL stay cleanly separated
  • Auto-Ingestion – Schema introspection, FK graph discovery, automatic model generation
  • Time Operations – change, change_pct, time_shift, lag, lead, last — composable time transforms
  • Joins – Dot syntax, multi-hop joins, diamond join disambiguation
  • Joined Measures – Cross-model measures with sub-query isolation
  • Multistage Queries – Query chaining, queries-as-models, ModelExtension

Claude Code Skills

SLayer includes Claude Code skills in .claude/skills/ to help Claude understand the codebase:

  • slayer-overview — architecture, package structure, MCP tools list
  • slayer-query — how to construct queries with fields, dimensions, filters, time dimensions
  • slayer-models — model definitions, datasource configs, auto-ingestion, incremental editing

Known limitations

SLayer currently has no caching or pre-aggregation engine. If you need to serve many requests against large databases at sub-second latency, consider putting a caching layer or pre-aggregation engine in front of SLayer.

License

MIT — see LICENSE.

