Osmosis

A Python library that monkey patches LLM client libraries to send all prompts and responses to the Osmosis API for logging and monitoring.

Supported Libraries

  • Anthropic: Logs all Claude API requests and responses (both sync and async clients)
  • OpenAI: Logs all OpenAI API requests and responses (supports v1 and v2 API versions, both sync and async clients)
  • LangChain: Currently supports prompt template logging (LLM and ChatModel support varies by LangChain version)

Installation


# Basic installation with minimal dependencies
pip install osmosis-ai

# Install with specific provider support
pip install "osmosis-ai[openai]"     # Only OpenAI support
pip install "osmosis-ai[anthropic]"  # Only Anthropic support

# Install with LangChain support
pip install "osmosis-ai[langchain]"         # Base LangChain support
pip install "osmosis-ai[langchain-openai]"  # LangChain + OpenAI support
pip install "osmosis-ai[langchain-anthropic]" # LangChain + Anthropic support

# Install with all dependencies
pip install "osmosis-ai[all]"

Or install from source:

git clone https://github.com/your-username/osmosis-sdk-python.git
cd osmosis-sdk-python
pip install -e .

For development, you can install all dependencies using:

pip install -r requirements.txt

Environment Setup

osmosis-ai requires an Osmosis API key to log LLM usage. Create a .env file in your project directory:

# Copy the sample .env file
cp .env.sample .env

# Edit the .env file with your API keys

Edit the .env file to add your API keys:

# Required for logging
OSMOSIS_API_KEY=your_osmosis_api_key_here

# Optional: Only needed if you're using these services
ANTHROPIC_API_KEY=your_anthropic_key_here
OPENAI_API_KEY=your_openai_key_here
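
If your application loads configuration from a .env file rather than exported shell variables, one common approach is python-dotenv. This is an assumption for illustration; the project does not state how .env values reach os.environ:

import os

from dotenv import load_dotenv  # assumes: pip install python-dotenv

load_dotenv()  # reads key=value pairs from ./.env into os.environ

assert os.environ.get("OSMOSIS_API_KEY"), "OSMOSIS_API_KEY missing from .env"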

Usage

First, import and initialize osmosis_ai with your Osmosis API key:

import os
import osmosis_ai

# Initialize with your OSMOSIS API key
osmosis_ai.init("your-osmosis-api-key")

# Or load from environment variable
osmosis_api_key = os.environ.get("OSMOSIS_API_KEY")
osmosis_ai.init(osmosis_api_key)

Once you import osmosis_ai and initialize it, the library automatically patches the supported LLM clients. You can then use your LLM clients normally, and all API calls will be logged to Osmosis:

Anthropic Example

# Import osmosis_ai first and initialize it
import os

import osmosis_ai
osmosis_ai.init(os.environ.get("OSMOSIS_API_KEY"))

# Then import and use Anthropic as normal
from anthropic import Anthropic

# Create and use the Anthropic client as usual - it's already patched
client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

# All API calls will now be logged to Osmosis automatically
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1000,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)

# Async client is also supported and automatically patched
from anthropic import AsyncAnthropic
import asyncio

async def call_claude_async():
    async_client = AsyncAnthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
    response = await async_client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1000,
        messages=[
            {"role": "user", "content": "Hello, async Claude!"}
        ]
    )
    return response

# All async API calls will be logged to Osmosis as well
asyncio.run(call_claude_async())

OpenAI Example

# Import osmosis_ai first and initialize it
import os

import osmosis_ai
osmosis_ai.init(os.environ.get("OSMOSIS_API_KEY"))

# Then import and use OpenAI as normal
from openai import OpenAI

# Create and use the OpenAI client as usual - it's already patched
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# All API calls will now be logged to Osmosis automatically
response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=150,
    messages=[
        {"role": "user", "content": "Hello, GPT!"}
    ]
)

# Async client is also supported and automatically patched
from openai import AsyncOpenAI
import asyncio

async def call_openai_async():
    async_client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    response = await async_client.chat.completions.create(
        model="gpt-4o-mini",
        max_tokens=150,
        messages=[
            {"role": "user", "content": "Hello, async GPT!"}
        ]
    )
    return response

# All async API calls will be logged to Osmosis as well
asyncio.run(call_openai_async())

LangChain Example

# Import osmosis_ai first and initialize it
import os

import osmosis_ai
osmosis_ai.init(os.environ.get("OSMOSIS_API_KEY"))

# Then use LangChain as normal
from langchain_core.prompts import PromptTemplate

# Use LangChain prompt templates as usual
template = PromptTemplate(
    input_variables=["topic"],
    template="Write a short paragraph about {topic}."
)

# Formatting the prompt will be logged to Osmosis automatically
formatted_prompt = template.format(topic="artificial intelligence")
print(f"Formatted prompt: {formatted_prompt}")

# Multiple prompt templates are also captured
template2 = PromptTemplate(
    input_variables=["name", "profession"],
    template="My name is {name} and I work as a {profession}."
)
formatted_prompt2 = template2.format(name="Alice", profession="data scientist")
print(f"Formatted prompt 2: {formatted_prompt2}")

Configuration

You can configure the library's behavior with the following module-level flag:

import osmosis_ai

# Disable logging to Osmosis (default: True)
osmosis_ai.enabled = False
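
For example, assuming the flag can be flipped at runtime (the flag itself is documented above; its exact timing semantics are not), you could suspend logging around calls you do not want recorded:

import os

import osmosis_ai

osmosis_ai.init(os.environ.get("OSMOSIS_API_KEY"))

osmosis_ai.enabled = False   # calls made while False are not sent to Osmosis
# ... make requests you want to keep out of the logs ...
osmosis_ai.enabled = True    # resume logging for subsequent calls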

How it Works

This library uses monkey patching to override the methods that LLM clients use to make API calls. When you import and initialize osmosis_ai, it patches the supported client libraries, intercepts each call, and sends the request parameters and response data to the Osmosis API for logging and monitoring; a minimal sketch of the pattern follows the list below.

The data sent to Osmosis includes:

  • Timestamp (UTC)
  • Request parameters
  • Response data
  • HTTP status code
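
To make the mechanism concrete, here is a minimal, self-contained sketch of method-level monkey patching. It uses a hypothetical FakeClient and is not osmosis-ai's actual implementation; real patching must also handle async clients, streaming, and errors:

import datetime
import functools

class FakeClient:
    """Hypothetical stand-in for an LLM client class."""
    def create(self, **params):
        return {"id": "resp-123", "output": "hello"}

def patch_create(cls, log):
    original = cls.create  # keep a reference to the unpatched method

    @functools.wraps(original)
    def wrapper(self, **params):
        response = original(self, **params)  # call through to the real method
        log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "request": params,
            "response": response,
        })
        return response

    cls.create = wrapper  # replace the method on the class

records = []
patch_create(FakeClient, records)
FakeClient().create(model="demo", messages=[{"role": "user", "content": "hi"}])
print(records[0]["timestamp"], records[0]["request"]["model"])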

License

MIT
