
Owlbear

Feathers and claws for your data lake.

Owlbear is a Python client that bridges Athena and Trino to Polars DataFrames via PyArrow. A wise chimera — part Owl (Athena, goddess of wisdom), part Bear (Polars, the bear constellation). Query your data lake with SQL, get back fast, typed DataFrames — no serialization or ODBC overhead.

Features

  • Two backends: AthenaClient (AWS Athena via boto3) and TrinoClient (direct Trino connection)
  • Shared Presto-family type conversion — both backends produce identically typed Polars DataFrames
  • Parameterized queries for safe value binding
  • Athena result reuse for repeated queries
  • Pagination support for large result sets (Athena) and row limits (both)
  • Query cancellation and execution monitoring (Athena)
  • Built-in retry logic with exponential backoff (Athena)
  • MCP server for AI assistant integration (schema discovery + query execution)

Installation

# With Athena backend
pip install "owlbear[athena]"

# With Trino backend
pip install "owlbear[trino]"

# Both backends
pip install "owlbear[all]"

# MCP server (includes both backends)
pip install "owlbear[mcp]"

Prerequisites

  • Python 3.10+
  • Athena: AWS credentials configured (via AWS CLI, environment variables, or IAM roles) and an S3 bucket for query results
  • Trino: A running Trino cluster with network access

Quick Start

Athena

from owlbear import AthenaClient

client = AthenaClient(
    database="my_database",
    output_location="s3://my-bucket/athena-results/",
    region="us-east-1"
)

execution_id = client.query("SELECT * FROM orders LIMIT 5")
df = client.results(execution_id)
print(df)
shape: (5, 4)
┌─────────────┬────────────┬──────────────┬────────────┐
│ customer_id ┆ order_date ┆ order_amount ┆ status     │
│ ---         ┆ ---        ┆ ---          ┆ ---        │
│ i64         ┆ date       ┆ f64          ┆ str        │
╞═════════════╪════════════╪══════════════╪════════════╡
│ 1001        ┆ 2024-03-15 ┆ 249.99       ┆ shipped    │
│ 1002        ┆ 2024-03-15 ┆ 89.50        ┆ delivered  │
│ 1003        ┆ 2024-03-16 ┆ 1024.00      ┆ processing │
│ 1001        ┆ 2024-03-17 ┆ 54.25        ┆ shipped    │
│ 1004        ┆ 2024-03-17 ┆ 399.99       ┆ delivered  │
└─────────────┴────────────┴──────────────┴────────────┘

Trino

from owlbear import TrinoClient

client = TrinoClient(
    host="trino.example.com",
    port=443,
    user="analyst",
    catalog="hive",
    schema="default",
)

df = client.query("SELECT * FROM orders LIMIT 5")
print(df)
shape: (5, 4)
┌─────────────┬────────────┬──────────────┬────────────┐
│ customer_id ┆ order_date ┆ order_amount ┆ status     │
│ ---         ┆ ---        ┆ ---          ┆ ---        │
│ i64         ┆ date       ┆ f64          ┆ str        │
╞═════════════╪════════════╪══════════════╪════════════╡
│ 1001        ┆ 2024-03-15 ┆ 249.99       ┆ shipped    │
│ 1002        ┆ 2024-03-15 ┆ 89.50        ┆ delivered  │
│ 1003        ┆ 2024-03-16 ┆ 1024.00      ┆ processing │
│ 1001        ┆ 2024-03-17 ┆ 54.25        ┆ shipped    │
│ 1004        ┆ 2024-03-17 ┆ 399.99       ┆ delivered  │
└─────────────┴────────────┴──────────────┴────────────┘

Usage Examples

Parameterized Queries

# Athena — parameters are passed as strings
execution_id = client.query(
    "SELECT * FROM orders WHERE customer_id = ?",
    parameters=["1001"],
)
df = client.results(execution_id)

# Trino — parameters are passed as native Python values
df = trino_client.query(
    "SELECT * FROM orders WHERE customer_id = ?",
    parameters=[1001],
)
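The two backends bind parameters differently: Athena expects string values, while Trino takes native Python values. If you want to pass the same native values to either backend, a small helper can stringify them for Athena first (`to_athena_params` is a hypothetical convenience function, not part of the owlbear API):

```python
from datetime import date

def to_athena_params(values):
    """Render native Python values as the strings Athena binding expects.
    Hypothetical helper for illustration; not part of the owlbear API."""
    return [v.isoformat() if isinstance(v, date) else str(v) for v in values]

# to_athena_params([1001, date(2024, 3, 15)]) -> ["1001", "2024-03-15"]
```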

Result Reuse (Athena)

# Re-use cached results for up to 60 minutes
execution_id = client.query(
    "SELECT COUNT(*) FROM orders",
    result_reuse_max_age=60,
)

Asynchronous Query Execution

# Start query without waiting
execution_id = client.query(
    "SELECT * FROM large_table",
    wait_for_completion=False
)

# Check query status
query_info = client.get_query_info(execution_id)
print(f"Query status: {query_info['Status']['State']}")

# Wait for completion and get results when ready
client._wait_for_completion(execution_id)
df = client.results(execution_id)
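Rather than calling the private `_wait_for_completion` helper, the status check above can be wrapped in a public polling loop. This sketch assumes only the `get_query_info` response shape shown above; `wait_until_done` itself is hypothetical, not part of the owlbear API:

```python
import time

def wait_until_done(client, execution_id, poll_seconds=2.0, timeout=300.0):
    """Poll get_query_info until the query reaches a terminal state.
    Sketch only; assumes the Status/State shape shown above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = client.get_query_info(execution_id)["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"query {execution_id} did not finish within {timeout}s")
```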

Using Work Groups

execution_id = client.query(
    query="SELECT COUNT(*) FROM my_table",
    work_group="my-workgroup"
)
df = client.results(execution_id)

Using with Existing boto3 Session

import boto3
from owlbear import AthenaClient

session = boto3.Session(profile_name='my-profile')
client = AthenaClient.from_session(
    session=session,
    database="my_db",
    output_location="s3://my-bucket/results/"
)

Query Management

# List available work groups
work_groups = client.list_work_groups()

# Cancel a running query
client.cancel_query(execution_id)

# Get detailed query information
query_info = client.get_query_info(execution_id)
print(f"Execution time: {query_info['Statistics']['TotalExecutionTimeInMillis']}ms")
print(f"Data processed: {query_info['Statistics']['DataProcessedInBytes']} bytes")

MCP Server

Owlbear includes an MCP server so AI assistants can query your data lake directly. It exposes four tools: execute_query, list_databases, list_tables, and describe_table.

Install

pip install "owlbear[mcp]"

Environment Variables

Variable                   | Backend | Description                          | Default
---------------------------|---------|--------------------------------------|----------
OWLBEAR_BACKEND            | both    | athena or trino                      | athena
OWLBEAR_DATABASE           | both    | Default database/schema              | default
OWLBEAR_S3_OUTPUT_LOCATION | athena  | S3 path for query results (required) |
AWS_REGION                 | athena  | AWS region                           | us-east-1
AWS_PROFILE                | athena  | AWS profile (via boto3)              |
OWLBEAR_TRINO_HOST         | trino   | Trino hostname (required)            |
OWLBEAR_TRINO_PORT         | trino   | Trino port                           | 443
OWLBEAR_TRINO_USER         | trino   | Trino user                           |
OWLBEAR_TRINO_CATALOG      | trino   | Trino catalog                        |

Example .mcp.json (Athena)

{
  "mcpServers": {
    "owlbear": {
      "command": "owlbear-mcp",
      "env": {
        "OWLBEAR_DATABASE": "my_database",
        "OWLBEAR_S3_OUTPUT_LOCATION": "s3://my-bucket/athena-results/",
        "AWS_REGION": "us-east-1",
        "AWS_PROFILE": "my-profile"
      }
    }
  }
}

Example .mcp.json (Trino)

{
  "mcpServers": {
    "owlbear": {
      "command": "owlbear-mcp",
      "env": {
        "OWLBEAR_BACKEND": "trino",
        "OWLBEAR_TRINO_HOST": "trino.example.com",
        "OWLBEAR_TRINO_PORT": "443",
        "OWLBEAR_TRINO_USER": "analyst",
        "OWLBEAR_TRINO_CATALOG": "hive",
        "OWLBEAR_DATABASE": "default"
      }
    }
  }
}

Type Mapping

Owlbear automatically converts Presto/Trino/Athena SQL types to PyArrow (and then to Polars):

SQL Type                 | PyArrow Type
-------------------------|---------------------------
boolean                  | bool_()
tinyint                  | int8()
smallint                 | int16()
integer                  | int32()
bigint                   | int64()
real / float             | float32()
double                   | float64()
decimal(p,s)             | decimal128(p, s)
varchar / char / string  | string()
varbinary / binary       | binary()
date                     | date32()
timestamp                | timestamp("us")
timestamp with time zone | timestamp("us", tz="UTC")
time                     | time64("us")
interval day to second   | duration("us")
interval year to month   | month_day_nano_interval()
array<T>                 | list_(T)
map<K,V>                 | map_(K, V)

Nested types like array<array<integer>> and map<varchar,array<bigint>> are fully supported.
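To make the nesting concrete, here is an illustrative parser for Presto-style type strings. It is a sketch of the kind of recursion the mapping implies, not owlbear's actual implementation, and it handles angle-bracket nesting only (not `decimal(p,s)` precision arguments):

```python
def parse_presto_type(s: str):
    """Parse a Presto/Trino type string into a nested (name, args) tree.
    Illustrative sketch only; not the owlbear implementation."""
    s = s.strip()
    if "<" not in s:
        return (s, [])
    name, inner = s.split("<", 1)
    inner = inner.rsplit(">", 1)[0]
    # split on commas at nesting depth 0 only
    args, depth, start = [], 0, 0
    for i, ch in enumerate(inner):
        if ch == "<":
            depth += 1
        elif ch == ">":
            depth -= 1
        elif ch == "," and depth == 0:
            args.append(inner[start:i])
            start = i + 1
    args.append(inner[start:])
    return (name, [parse_presto_type(a) for a in args])

# parse_presto_type("map<varchar,array<bigint>>")
# -> ("map", [("varchar", []), ("array", [("bigint", [])])])
```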

Configuration

Environment Variables

export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=us-east-1

IAM Permissions

Your AWS credentials need the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:GetQueryResults",
                "athena:StopQueryExecution",
                "athena:ListWorkGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::your-athena-results-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "glue:GetDatabase",
                "glue:GetTable",
                "glue:GetPartitions"
            ],
            "Resource": "*"
        }
    ]
}

Testing

pytest tests/ -v

Development

git clone https://github.com/jdonaldson/owlbear.git
cd owlbear
pip install -e ".[dev]"

black .        # format
ruff check .   # lint
mypy src/      # type check

License

MIT License - see LICENSE file for details.

Contributing

  1. Fork the repository on GitHub
  2. Create a feature branch
  3. Make your changes with tests
  4. Ensure all tests pass and code is formatted
  5. Submit a pull request


Download files

Source Distribution

owlbear-0.4.1.tar.gz (1.9 MB)

Built Distribution

owlbear-0.4.1-py3-none-any.whl (17.3 kB)

File details

Details for the file owlbear-0.4.1.tar.gz.

File metadata

  • Download URL: owlbear-0.4.1.tar.gz
  • Size: 1.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for owlbear-0.4.1.tar.gz

Algorithm   | Hash digest
------------|------------
SHA256      | 5adb41c676509b0fcb6a4a8f1c576cfbf1352bcba03f317d7ff49f0656ccff1d
MD5         | 5a331c86b79ebeb186650340f4fe9e40
BLAKE2b-256 | f7683c1ecd3c9c392483b500d162f6775dad2eed96438d3651ef661a2fe95260

File details

Details for the file owlbear-0.4.1-py3-none-any.whl.

File metadata

  • Download URL: owlbear-0.4.1-py3-none-any.whl
  • Size: 17.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for owlbear-0.4.1-py3-none-any.whl

Algorithm   | Hash digest
------------|------------
SHA256      | 2ae454880857aa21a1b8686db2a5b054b4e500a4b0f4a9ceb8d6491adcac1080
MD5         | 8adb9ad3d98a29dfc55483308cecdd0f
BLAKE2b-256 | 94641f84735c77952e3057f18f8886629802cd64e48af732b50ee5e23ec56045
