Call Context Lib
A Python context management library designed for LLM applications with LangChain callback integration. Manage execution context, metadata, and experiment logging seamlessly across your AI application stack using standard LangChain callback patterns.
Features
- Context Management: Track user sessions, turns, and metadata across function calls
- LangChain Integration: Native support for LangChain's BaseCallbackHandler pattern
- Async Support: Full support for async/await patterns and async generators
- Callback System: Execute callbacks on context completion with standard LangChain interface
- Metadata Handling: Store and retrieve metadata with support for multiple values per key
- Streaming Support: Built-in support for streaming responses with context preservation
- Type Safety: Fully typed with Python type hints
Installation
```bash
pip install call-context-lib
```
For development:
```bash
pip install "call-context-lib[dev]"
```
Quick Start
Basic Usage with LangChain Integration
```python
import asyncio

from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

async def main():
    # Create a context
    ctx = CallContext(user_id="user123", turn_id="turn456")

    # Set metadata
    ctx.set_meta("request_type", "chat")
    ctx.set_meta("model", "gpt-4")

    # Create a LangChain callback bound to the context
    callback = CallContextCallbackHandler(ctx)

    # Use it with a LangChain LLM
    llm = ChatOpenAI(model="gpt-4", callbacks=[callback])
    result = await llm.ainvoke([HumanMessage(content="Hello")])
    print(result.content)

    # Run the context's completion callbacks
    await ctx.on_complete()

asyncio.run(main())
```
Streaming Support with LangChain
```python
import asyncio

from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

async def stream_with_context(input_text: str):
    ctx = CallContext(user_id="user123", turn_id="turn456")
    ctx.set_meta("model", "gpt-4")

    # Create callback with context
    callback = CallContextCallbackHandler(ctx)

    # Stream with LangChain
    llm = ChatOpenAI(model="gpt-4", streaming=True, callbacks=[callback])
    async for chunk in llm.astream([HumanMessage(content=input_text)]):
        if chunk.content:
            yield chunk.content

    # Complete context callbacks
    await ctx.on_complete()

# Usage
async def main():
    async for token in stream_with_context("Tell me about Python"):
        print(token, end="")

asyncio.run(main())
```
Multiple Callback Pattern
```python
from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Custom experiment logging callback
class ExperimentLogger(BaseCallbackHandler):
    def __init__(self, ctx: CallContext):
        self.ctx = ctx

    def on_llm_end(self, response, **kwargs):
        print(f"Experiment completed for user {self.ctx.get_user_id()}")
        print(f"Model: {self.ctx.get_meta('model')}")

async def multi_callback_example():
    ctx = CallContext(user_id="user123", turn_id="turn456")
    ctx.set_meta("model", "gpt-4")

    # Combine multiple callbacks
    callbacks = [
        CallContextCallbackHandler(ctx),
        ExperimentLogger(ctx),
    ]

    llm = ChatOpenAI(model="gpt-4", callbacks=callbacks)
    result = await llm.ainvoke([HumanMessage(content="Hello")])

    await ctx.on_complete()
    return result
```
Multiple Values for Same Key
```python
ctx.set_meta("tag", "python")
ctx.set_meta("tag", "async")
ctx.set_meta("tag", "context")

# Get the most recent value
latest_tag = ctx.get_meta("tag")  # Returns "context"

# Get all values
all_tags = ctx.get_meta("tag", all_values=True)  # Returns ["python", "async", "context"]
```
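The multi-value behavior above can be sketched in plain Python. This is a minimal stand-in for illustration, not the library's actual implementation: each key maps to a list of values, and `get_meta` returns the newest one unless `all_values=True`.

```python
from typing import Any

class MetaStore:
    """Minimal sketch of multi-value metadata semantics."""

    def __init__(self):
        self._meta: dict[str, list[Any]] = {}

    def set_meta(self, key: str, value: Any) -> None:
        # Append rather than overwrite, so earlier values are preserved
        self._meta.setdefault(key, []).append(value)

    def get_meta(self, key: str, all_values: bool = False):
        values = self._meta.get(key, [])
        if all_values:
            return list(values)
        return values[-1] if values else None

store = MetaStore()
for tag in ("python", "async", "context"):
    store.set_meta("tag", tag)

print(store.get_meta("tag"))                   # most recent value
print(store.get_meta("tag", all_values=True))  # every value, oldest first
```

Returning `None` for a missing key is an assumption of this sketch; check the library's behavior if you depend on it.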
API Reference
CallContext
The main context class that manages execution state and metadata.
Constructor
```python
CallContext(user_id: str, turn_id: str, meta: dict = None, callbacks: list = None)
```
Methods
- `get_user_id() -> str`: Get the user ID
- `get_turn_id() -> str`: Get the turn ID
- `get_meta(key: str, all_values: bool = False) -> Any`: Get metadata value(s)
- `set_meta(key: str, value: Any) -> None`: Set a metadata value
- `set_error(error: Exception)`: Set error state
- `on_complete() -> None`: Execute all registered callbacks
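How `on_complete` ties together the constructor's `callbacks` argument and `set_error` can be illustrated with a self-contained toy model. This is a hypothetical simplification for explanation only, not the real class:

```python
import asyncio
import inspect

class MiniCallContext:
    """Toy model of the completion-callback lifecycle (not the real class)."""

    def __init__(self, user_id: str, turn_id: str, callbacks=None):
        self.user_id = user_id
        self.turn_id = turn_id
        self.error = None
        self._callbacks = list(callbacks or [])

    def set_error(self, error: Exception) -> None:
        self.error = error

    async def on_complete(self) -> None:
        # Run every registered callback, awaiting coroutines as needed
        for cb in self._callbacks:
            result = cb(self)
            if inspect.isawaitable(result):
                await result

events = []

async def log_completion(ctx):
    # A completion callback sees the finished context, including any error
    events.append((ctx.user_id, ctx.error))

ctx = MiniCallContext("user123", "turn456", callbacks=[log_completion])
try:
    raise ValueError("boom")
except ValueError as err:
    ctx.set_error(err)
asyncio.run(ctx.on_complete())
print(events)
```

The pattern to note: callbacks fire once, at the end, with the full context state, so error handling and logging stay out of the request path.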
CallContextCallbackHandler
LangChain BaseCallbackHandler integration for context management.
```python
from call_context_lib import CallContext, CallContextCallbackHandler
from langchain_openai import ChatOpenAI

# Create callback handler with context
ctx = CallContext(user_id="user123", turn_id="turn456")
callback = CallContextCallbackHandler(ctx)

# Use with any LangChain component
llm = ChatOpenAI(callbacks=[callback])
```
Callback Methods
- `on_llm_start(*args, **kwargs)`: Called when the LLM starts
- `on_llm_end(response, **kwargs)`: Called when the LLM completes
- `on_llm_error(error, **kwargs)`: Called when the LLM raises an error
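As an illustration of this handler pattern, here is a hypothetical standalone re-implementation (not the library's source, and without the `BaseCallbackHandler` base class) that mirrors the three lifecycle events into a metadata dict standing in for the context:

```python
class RecordingHandler:
    """Hypothetical sketch of a context-bound callback handler.

    A real handler would subclass langchain_core.callbacks.BaseCallbackHandler;
    the lifecycle methods are shown standalone here.
    """

    def __init__(self, meta: dict):
        self.meta = meta  # stands in for the CallContext metadata store

    def on_llm_start(self, *args, **kwargs):
        self.meta["llm_status"] = "started"

    def on_llm_end(self, response, **kwargs):
        self.meta["llm_status"] = "completed"
        self.meta["llm_response"] = response

    def on_llm_error(self, error, **kwargs):
        self.meta["llm_status"] = "error"
        self.meta["llm_error"] = repr(error)

meta = {}
handler = RecordingHandler(meta)
handler.on_llm_start()
handler.on_llm_end("Hello!")
print(meta["llm_status"])  # completed
```

The key names (`llm_status` and friends) are invented for this sketch; the actual handler's metadata layout may differ.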
Examples
The examples/ directory contains practical examples:
- FastAPI Integration: How to use the library with FastAPI streaming applications
- LangChain Callback Integration: Examples using CallContextCallbackHandler with LangChain LLMs
- Custom Experiment Logging: Implementing custom BaseCallbackHandler for experiment tracking
- Multiple Callback Patterns: Combining context callbacks with other LangChain callbacks
Running Examples
```bash
# Install dependencies
cd examples
uv sync

# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key"

# Run the FastAPI server
python -m uvicorn main:app --reload --port 8001
```
Example API Endpoints
- `POST /openai-stream-example`: Streaming LLM response with context
- `POST /openai-invoke-example`: Single LLM response with context
- `POST /llm-module-stream-example`: Custom LLM module streaming
- `POST /llm-module-invoke-example`: Custom LLM module invoke
Development
Setting up development environment
```bash
# Clone the repository
git clone https://github.com/jitokim/call-context-lib.git
cd call-context-lib

# Install development dependencies
make install-dev

# Run tests
make test

# Run linting
make lint

# Format code
make format
```
Available Make Commands
- `make install` - Install package
- `make install-dev` - Install with development dependencies
- `make test` - Run tests
- `make test-cov` - Run tests with coverage
- `make lint` - Run linting
- `make format` - Format code
- `make build` - Build package
- `make publish` - Publish to PyPI
- `make clean` - Clean build artifacts
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Changelog
See CHANGELOG.md for a list of changes and version history.
Support
If you encounter any problems or have questions, please open an issue on GitHub.
File details
Details for the file call_context_lib-0.2.6.tar.gz.
File metadata
- Download URL: call_context_lib-0.2.6.tar.gz
- Upload date:
- Size: 267.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8c4ff70e0ca64efda568aef20a617f7d10feaf83f77e0763ae96b34e01686e4e |
| MD5 | b7f424b8ac7685d7863904d568e4019e |
| BLAKE2b-256 | 8a3d65d4f9dd71ca20769698b2fa359a2e8d8bd156f25cfc9d73a87423ae69c4 |
File details
Details for the file call_context_lib-0.2.6-py3-none-any.whl.
File metadata
- Download URL: call_context_lib-0.2.6-py3-none-any.whl
- Upload date:
- Size: 8.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a56352ce0a714ab20e4be151396eca9a157654bc4ee038fcfac493a98a84e6b6 |
| MD5 | 74c6464f33b51b33254dd4fe545be63c |
| BLAKE2b-256 | dc1147886b29d26b814355438b1accc7f61b9db83bf3bc081c37a1d2ea01247a |