PostgreSQL logger for OpenAI-style LLM API interactions
Trelis OpenAI Logger
Note: Only OpenAI-style APIs are currently supported; providers with incompatible APIs, such as Anthropic, are not. Google's Gemini can still be used, since it exposes OpenAI-compatible endpoints.
A simple and efficient logging system for LLM interactions using PostgreSQL. Automatically logs all OpenAI API calls including prompts, responses, token usage, and latency.
Quick Start
- Install the package:
uv pip install trelis-openai-logger
- Use in your code:
# Preferred: Import OpenAI directly from trelis_openai_logger
from trelis_openai_logger import OpenAI
# Initialize with database connection and optional OpenAI config
client = OpenAI(
    # Database connection (required)
    pg_dsn="postgresql://localhost/llm_logs",
    # OpenAI configuration (optional)
    # api_key="your-api-key",  # defaults to OPENAI_API_KEY env var
    # base_url="https://api.openai.com/v1",  # useful for OpenAI-compatible APIs
)
# Use normally - all calls are automatically logged
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "What's one and two?"}],
)
# Print response
print(response)
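The response object follows the usual OpenAI chat-completions shape; the fields pulled out below (model, message content, token usage) are among those the logger persists. A minimal sketch, using a plain dict as a stand-in for the response:

```python
def summarize_response(resp: dict) -> dict:
    """Pull out the fields of interest from an OpenAI-style
    chat-completions response (represented here as a plain dict)."""
    choice = resp["choices"][0]
    return {
        "model": resp["model"],
        "content": choice["message"]["content"],
        "total_tokens": resp["usage"]["total_tokens"],
    }

# Example with a trimmed-down response payload:
sample = {
    "model": "gpt-4.1-mini",
    "choices": [{"message": {"role": "assistant", "content": "Three."}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 2, "total_tokens": 14},
}
print(summarize_response(sample))
```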
Database Setup
Important: The database schema is currently maintained in two places:
- db/migrations/ directory - The primary source for local and managed database setups
- setup/create_droplet.sh - Contains an embedded copy for cloud-init droplet setup

When making schema changes, you must update both locations. This duplication exists because cloud-init requires the migration inline.
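A small script can help catch drift between the two copies. This is just a sketch: the comparison normalizes away comments and whitespace, and the example SQL strings are illustrative, not the real migration:

```python
import re

def normalize_sql(sql: str) -> str:
    """Strip SQL line comments and collapse whitespace so two copies
    of a migration can be compared for semantic drift."""
    no_comments = re.sub(r"--[^\n]*", "", sql)
    return " ".join(no_comments.split()).lower()

def schemas_match(a: str, b: str) -> bool:
    return normalize_sql(a) == normalize_sql(b)

# Example: same table definition, different comments and formatting
copy_a = "-- primary copy\nCREATE TABLE llm_traces (id uuid PRIMARY KEY);"
copy_b = "CREATE TABLE llm_traces\n  (id uuid PRIMARY KEY);  -- embedded copy"
print(schemas_match(copy_a, copy_b))  # True
```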
Before using the logger, you need a PostgreSQL database. First set the environment variables, then choose one of the three options below.
Environment Variables
Create a .env file in the project root:
# Create .env file
cat > .env << EOF
# Required for OpenAI API
OPENAI_API_KEY=your_openai_api_key
# Required for PostgreSQL
DB_PASSWORD=your_secure_password
EOF
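If you want to load these variables in Python without a dotenv dependency, a few stdlib lines suffice. A sketch that handles only plain KEY=value lines (no quoting or export syntax):

```python
import os

def load_env(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = "# Required for OpenAI API\nOPENAI_API_KEY=sk-test\nDB_PASSWORD=secret\n"
# Only set variables that are not already defined in the environment
for key, value in load_env(sample).items():
    os.environ.setdefault(key, value)
print(load_env(sample))
```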
Option 1: Local PostgreSQL
# Install PostgreSQL
brew install postgresql@15
brew services start postgresql@15
# Create database
createdb llm_logs
chmod +x setup/local_setup.sh
./setup/local_setup.sh
# Test connection
uv run test_local_db.py
# Delete the database (optional)
# dropdb llm_logs
Option 2: DigitalOcean Droplet (~$12/month)
> [!WARNING]
> This will incur charges on your DigitalOcean account.
Droplet Specifications:
- 2GB RAM
- 1 vCPU
- 50GB SSD
- Ubuntu 22.04
This configuration is optimized for PostgreSQL performance with:
- Sufficient memory for query caching
- Adequate disk space for log storage
- Cost-effective for development and small production loads
First, complete these prerequisites:
- Install DigitalOcean CLI:
# Install CLI
brew install doctl
# Authenticate (you'll need an API token from Digital Ocean dashboard)
doctl auth init
- Set up SSH key:
# Generate SSH key without password (if you don't have one)
ssh-keygen -t ed25519 -f ~/.ssh/do_llm_logger -N ""
# Add SSH key to Digital Ocean
doctl compute ssh-key import llm-logger --public-key-file ~/.ssh/do_llm_logger.pub
Then choose either automated or manual setup:
A. Automated Setup (Recommended)
# Run setup script
source .env
chmod +x setup/create_droplet.sh
./setup/create_droplet.sh
The script will:
- Create SSH key if needed
- Set up the droplet with PostgreSQL
- Configure remote access
- Test the connection
- Provide you with the connection string
B. Manual Setup
- Get the SSH key ID:
SSH_KEY_ID=$(doctl compute ssh-key list --format ID --no-header)
- Create droplet with PostgreSQL:
# Create cloud-init config
# (EOF is unquoted so ${DB_PASSWORD} from your .env expands into the file)
source .env
cat > cloud-init.yml << EOF
#cloud-config
package_update: true
packages: [postgresql, postgresql-contrib]
runcmd:
- systemctl start postgresql
- sudo -u postgres createdb llm_logs
- sudo -u postgres psql -c "ALTER USER postgres PASSWORD '${DB_PASSWORD}';"
- sudo -u postgres psql -c "ALTER SYSTEM SET listen_addresses TO '*';"
- echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/14/main/pg_hba.conf
- systemctl restart postgresql
EOF
# Create droplet
doctl compute droplet create \
--image ubuntu-22-04-x64 \
--size s-1vcpu-2gb \
--region lon1 \
--ssh-keys $SSH_KEY_ID \
--user-data-file cloud-init.yml \
llm-logger
# Get droplet IP
DROPLET_IP=$(doctl compute droplet list --format PublicIPv4 --no-header)
# Add to known hosts
ssh-keyscan -H $DROPLET_IP >> ~/.ssh/known_hosts
# Wait for setup to complete (~2 minutes)
sleep 120
# Copy and run migrations
scp setup/migrations/01_create_tables.sql root@${DROPLET_IP}:/tmp/
ssh root@${DROPLET_IP} 'sudo -u postgres psql -d llm_logs -f /tmp/01_create_tables.sql'
# Test connection and get your connection string
CONNECTION_STRING="postgresql://postgres:${DB_PASSWORD}@${DROPLET_IP}/llm_logs"
# Add to .env file
echo "DATABASE_URL='${CONNECTION_STRING}'" >> .env
# Now you can run the example
source .env
uv run example.py
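The connection string embeds the database password, so avoid echoing it into logs verbatim. A small stdlib helper to redact it first (a sketch; it only handles user:password@host style DSNs):

```python
from urllib.parse import urlsplit

def mask_dsn(dsn: str) -> str:
    """Replace the password component of a postgresql:// DSN with ***."""
    parts = urlsplit(dsn)
    if parts.password is None:
        return dsn  # nothing to hide
    netloc = parts.netloc.replace(":" + parts.password + "@", ":***@", 1)
    return dsn.replace(parts.netloc, netloc, 1)

print(mask_dsn("postgresql://postgres:s3cret@203.0.113.10/llm_logs"))
# postgresql://postgres:***@203.0.113.10/llm_logs
```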
Cleanup
When you're done with the droplet, you can clean up resources:
# Using the cleanup script (recommended)
./setup/cleanup_droplet.sh
# Or manually
doctl compute droplet list # Find your droplet ID
doctl compute droplet delete <droplet-id>
# Optionally delete the SSH key from Digital Ocean
doctl compute ssh-key list # Find your key ID
doctl compute ssh-key delete <key-id>
# Optionally delete the local SSH key
rm ~/.ssh/do_llm_logger ~/.ssh/do_llm_logger.pub
Option 3: DigitalOcean Managed Database (~$17/month)
For production use with automatic backups and scaling. The base price is $15/month plus storage costs ($0.215/GiB/month, with a 10 GiB minimum), which comes to roughly $17/month:
- First, create a managed database in DigitalOcean:
# Create a db-s-1vcpu-2gb cluster in London (2GB RAM recommended for better performance)
doctl databases create llm-logger-db \
--engine pg \
--region lon1 \
--size db-s-1vcpu-2gb \
--version 14
# The command will output the database ID, save it for the next step
# Example output: ID: 0c164d6a-4185-4e19-ad0e-06301c711f17
# Get the connection details using the database ID
doctl databases connection <database-id>
- Add the connection URL to your .env file:
# The connection URL from the previous command (use double quotes, no spaces around =)
DATABASE_URL="postgresql://doadmin:password@host:port/defaultdb?sslmode=require"
- Run the managed database setup script to run migrations:
source .env
chmod +x setup/managed_db_setup.sh
./setup/managed_db_setup.sh
This will:
- Install dbmate if not present
- Run migrations using dbmate on the database
- Test the connection:
source .env
uv run example.py
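Managed clusters require TLS, so it can be worth sanity-checking that DATABASE_URL actually carries sslmode=require before handing it to the logger. A stdlib sketch:

```python
from urllib.parse import urlsplit, parse_qs

def requires_ssl(dsn: str) -> bool:
    """True if the DSN's query string pins sslmode to require or stricter."""
    query = parse_qs(urlsplit(dsn).query)
    return query.get("sslmode", [""])[0] in ("require", "verify-ca", "verify-full")

print(requires_ssl("postgresql://doadmin:pw@host:25060/defaultdb?sslmode=require"))  # True
print(requires_ssl("postgresql://localhost/llm_logs"))  # False
```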
To clean up the managed database:
# Using the database ID from earlier
doctl databases list
doctl databases delete <database-id>
Logging Traces
The example.py script can be used to test any of the database configurations. It will:
- Test the database connection
- Create a test table
- Insert and query test data
- Make a test OpenAI API call with logging
Use it with any of the database options:
# Local PostgreSQL
DATABASE_URL='postgresql://localhost/llm_logs' uv run example.py
# DigitalOcean Droplet or Managed Database
# (assuming you have set DATABASE_URL in .env)
source .env
uv run example.py
The script will show detailed progress with checkmarks (✓) for successful steps or crosses (✗) for failures.
Or you can test manually:
from trelis_openai_logger import OpenAI
# Local PostgreSQL
client = OpenAI(
    api_key="your-api-key",
    pg_dsn="postgresql://localhost/llm_logs",
)
# Digital Ocean PostgreSQL
client = OpenAI(
    api_key="your-api-key",
    pg_dsn="postgresql://user:pass@host:port/database",
)
# Use as normal OpenAI client
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
Querying the Logs
Connect to your PostgreSQL database based on your setup:
# Local PostgreSQL
psql llm_logs
# DigitalOcean Droplet or Database
source .env
psql "$DATABASE_URL"
Useful queries for analyzing your logs:
-- View most recent conversation with all details
SELECT
    to_char(created_at, 'YYYY-MM-DD HH24:MI:SS') as time,
    model,
    input_messages,
    raw_response->'choices'->0->'message'->>'content' as assistant_response,
    CASE
        WHEN raw_response->'choices'->0->'message'->'tool_calls' IS NOT NULL
        THEN jsonb_pretty(raw_response->'choices'->0->'message'->'tool_calls')
    END as tool_calls,
    latency_ms,
    total_tokens
FROM llm_traces
ORDER BY created_at DESC
LIMIT 1;
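The same table can be analyzed from Python. Below is a sketch of a per-model token rollup; the driver call is left as a comment (any DB-API driver such as psycopg2 would work), and the only assumed column names are model and total_tokens, which appear in the query above:

```python
from collections import defaultdict

# With a live connection you might fetch rows like this:
#   cur.execute("SELECT model, total_tokens FROM llm_traces")
#   totals = tokens_by_model(cur.fetchall())

def tokens_by_model(rows):
    """Sum total_tokens per model over (model, total_tokens) row tuples."""
    totals = defaultdict(int)
    for model, total_tokens in rows:
        totals[model] += total_tokens or 0  # failed calls may log NULL usage
    return dict(totals)

sample_rows = [
    ("gpt-4.1-mini", 14),
    ("gpt-4.1-mini", 30),
    ("gpt-4.1", 55),
    ("gpt-4.1", None),
]
print(tokens_by_model(sample_rows))  # {'gpt-4.1-mini': 44, 'gpt-4.1': 55}
```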
You can also create a .psqlrc file in your home directory to improve the psql experience:
# Create or edit ~/.psqlrc
cat > ~/.psqlrc << 'EOF'
\set PROMPT1 '%[%033[1m%]%M %n@%/%R%[%033[0m%]%# '
\x auto
\set VERBOSITY verbose
\set HISTFILE ~/.psql_history- :DBNAME
\set HISTCONTROL ignoredups
\set COMP_KEYWORD_CASE upper
EOF
Database Schema
The system logs the following information for each interaction:
- Unique trace ID
- Timestamp
- Model used
- Endpoint called
- Full prompt/messages
- Complete response
- Latency
- Token usage
- Any errors that occurred
- Custom metadata
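Put together, a logged trace can be modeled roughly as the dataclass below. Field names like created_at, model, latency_ms, and total_tokens match the query shown earlier; the others are paraphrased from the list above, so treat the exact names as illustrative rather than the actual column names:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class LlmTrace:
    trace_id: str                     # unique trace ID
    created_at: datetime              # timestamp
    model: str                        # model used
    endpoint: str                     # endpoint called
    input_messages: list              # full prompt/messages
    raw_response: dict                # complete response
    latency_ms: float                 # latency
    total_tokens: Optional[int]       # token usage (may be NULL on failure)
    error: Optional[str] = None       # any error that occurred
    metadata: dict = field(default_factory=dict)  # custom metadata

trace = LlmTrace(
    trace_id="t-001",
    created_at=datetime(2024, 1, 1, 12, 0),
    model="gpt-4.1-mini",
    endpoint="/chat/completions",
    input_messages=[{"role": "user", "content": "Hello!"}],
    raw_response={"choices": []},
    latency_ms=412.5,
    total_tokens=21,
)
print(trace.model, trace.total_tokens)
```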
Publishing to PyPI
# 0) Clean out previous builds (avoids “file already exists” errors)
rm -rf dist/ # optional but safest
# 1) Bump the version in pyproject.toml
# (PEP 440 format, e.g. 0.4.2 → 0.4.3)
# 2) Build reproducible artifacts
uv build --no-sources # creates dist/*.whl and dist/*.tar.gz
# 3) Smoke-test the artifacts locally
uv pip install dist/trelis_openai_logger-*.whl && uv run python -c "import trelis_openai_logger"
# 4) Point uv at your PyPI credentials
export UV_PUBLISH_TOKEN=<TRELIS_PYPI_TOKEN> # **only** this one var is needed
# (uv picks up the username `__token__` automatically)
# 5) Publish
uv publish --publish-url https://upload.pypi.org/legacy/
Make sure you have:
- Updated all documentation
- Run tests successfully
- Have the Trelis organization PyPI token
File details
Details for the file trelis_openai_logger-1.0.0.tar.gz.
File metadata
- Download URL: trelis_openai_logger-1.0.0.tar.gz
- Size: 85.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2f84a1fbd378dc1d4005fd27bb10d7bbdcfe8e0640326ad39cb84cba96ad16b7 |
| MD5 | 9807633409e41cec3a5e13dee5853d6b |
| BLAKE2b-256 | 808797d86cd7b1222e3c4ed7f3c88f81b219f8c495c643fc97cdf59243ba609e |
File details
Details for the file trelis_openai_logger-1.0.0-py3-none-any.whl.
File metadata
- Download URL: trelis_openai_logger-1.0.0-py3-none-any.whl
- Size: 12.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dab6d0b42031c91e209561c3444f5d5dc2bbbaa113d9718b11ceb6d24b7ac18f |
| MD5 | 93f915a5bd6c5761b863a4aef5393aa6 |
| BLAKE2b-256 | 48e685800e623c7724146f6286179f09a1766b694d4785197705e57451616216 |