
Neatlogs Python SDK

A comprehensive LLM tracking system that automatically captures and logs all LLM API calls with detailed metrics.

Auto-instruments LLM calls, frameworks, and custom code with just 6 exports.

Python 3.12+ • License: MIT

Features

  • Auto-Instrumentation: Automatically tracks OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, Haystack, and more
  • Rich Metrics: Token usage, costs, latency, streaming metrics, and custom attributes
  • Multi-Provider: OpenAI, Anthropic, Google Gemini, Azure OpenAI, Cohere, Groq, Together and 20+ more
  • Simple API: Just 6 exports - init(), flush(), shutdown(), @span(), trace(), PromptTemplate
  • Zero Config: Works out-of-the-box with frameworks (LangChain, CrewAI, LlamaIndex, Haystack)
  • OpenTelemetry Native: Built on OpenTelemetry + OpenInference standards
  • Session-Aware: Track multi-turn conversations with automatic session grouping
  • Prompt Versioning: Track prompt templates, variables, and versions

Installation

pip install neatlogs
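
To verify the install, a quick import check is enough:

python -c "import neatlogs"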

Quick Start

1. Framework Code (Auto-Instrumented)

Just call init() and your framework code is automatically tracked:

from neatlogs import init, flush, shutdown
from langchain.chains import LLMChain
from openai import OpenAI

# Initialize (that's all you need!)
init(
    api_key="your-api-key",
    endpoint="https://api.neatlogs.com/v4/batch",
    instrumentations=["langchain", "openai"],
)

# Your code works normally - fully auto-instrumented!
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is AI?"}]
)

flush()
shutdown()
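
Streaming calls are instrumented the same way (see the streaming metrics listed under Features). A minimal sketch using the standard OpenAI streaming API with the client from the example above:

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain tokens in one sentence."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta; content can be None
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")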

2. Custom Code with @span()

Instrument your custom orchestration functions:

from neatlogs import init, span, trace, flush, shutdown, PromptTemplate

init(api_key="...", instrumentations=["openai"])

# vector_db and llm below stand in for your own retrieval and LLM clients

# Define prompt template for versioning
template = PromptTemplate([
    {"role": "user", "content": "{{question}}"}
])

@span(kind="RETRIEVER", name="vector_search")
def retrieve_docs(query: str):
    """Your custom retrieval logic."""
    return vector_db.search(query)

@span(kind="AGENT", role="Assistant", goal="Answer questions")
def answer_agent(question: str):
    """Agent with prompt tracking."""
    # Track prompt template inside the function that uses it
    with trace(prompt_template=template):
        messages = template.compile(question=question)
        response = llm.create(messages=messages)
        return response.choices[0].message.content

@span(kind="WORKFLOW", name="qa_workflow")
def qa_workflow(question: str):
    """Top-level workflow orchestration."""
    docs = retrieve_docs(question)   # retrieval span nests under the workflow
    answer = answer_agent(question)  # a full pipeline would pass docs along (see the RAG pattern below)
    return answer

# Just call your workflow - no extra wrapper needed!
result = qa_workflow("What is quantum computing?")

flush()
shutdown()

API Reference

Core Functions

init()

Initialize the SDK (call once at startup).

init(
    api_key="your-api-key",                    # Required: your Neatlogs API key
    endpoint="https://api.neatlogs.com/v4/batch",  # Optional: custom endpoint
    instrumentations=["openai", "langchain"],  # Auto-instrument frameworks
    metadata={"env": "production"}             # Optional: global metadata
)

Supported instrumentations: openai, anthropic, langchain, llama-index, crewai, haystack, google-genai, mcp
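
For example, enabling the Anthropic instrumentation follows the same pattern (a sketch using the standard Anthropic client; the model name is illustrative):

from neatlogs import init, flush, shutdown
import anthropic

init(api_key="your-neatlogs-key", instrumentations=["anthropic"])

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)

flush()
shutdown()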

flush()

Send all pending spans to the server.

flush()

shutdown()

Gracefully shutdown the SDK (call at app exit).

shutdown()

Custom Instrumentation

@span(kind=...)

The only decorator you need.

@span(
    kind="WORKFLOW",           # Required: WORKFLOW, AGENT, CHAIN, TOOL, RETRIEVER, EMBEDDING, MCP_TOOL
    name="custom_function",    # Optional: span name (defaults to function name)
    
    # Optional: add custom attributes
    role="Assistant",          # For AGENT: agent role
    goal="Answer questions",   # For AGENT: agent goal
    model="gpt-4o",            # For LLM/EMBEDDING: model name
    tool_name="search_api",    # For TOOL/MCP_TOOL: tool name
)
def my_function():
    pass

Supported span kinds:

  • WORKFLOW: Top-level orchestration workflows
  • AGENT: AI agents (agentic behavior)
  • CHAIN: Sequential or conditional chains
  • TOOL: Tool/function calls
  • RETRIEVER: Vector search, document retrieval
  • EMBEDDING: Embedding generation
  • MCP_TOOL: Model Context Protocol tools
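
For instance, an EMBEDDING span can carry the model attribute shown above (the embedding client here is a placeholder):

@span(kind="EMBEDDING", model="text-embedding-3-small")
def embed_texts(texts):
    return embedder.embed(texts)  # embedder stands in for your embedding client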

trace()

Context manager for:

  1. Prompt tracking - Track prompt templates inside LLM calls
  2. Session management - Group multi-turn conversations
  3. Grouping top-level operations - If you have multiple workflows in main()

# Use case 1: Track prompts
@span(kind="AGENT")
def answer_question(q: str):
    template = PromptTemplate([{"role": "user", "content": "{{q}}"}])
    with trace(prompt_template=template):
        messages = template.compile(q=q)
        response = llm.create(messages=messages)
        return response

# Use case 2: Group operations in main()
def main():
    with trace(name="batch_processor", session_id="batch-123"):
        workflow_1()
        workflow_2()

# Use case 3: Multi-turn session tracking
with trace(session_id="user-456", thread_id="conversation-1"):
    for message in conversation:
        agent_workflow(message)

PromptTemplate

Tracks prompt templates, variables and versions.

template = PromptTemplate([
    {"role": "system", "content": "You are a {{role}}."},
    {"role": "user", "content": "{{question}}"}
])

# Compile with variables
messages = template.compile(role="assistant", question="What is AI?")

# Use with trace() to log prompt
with trace(prompt_template=template):
    response = llm.create(messages=messages)

Supported Frameworks

Auto-Instrumented

  • LangChain - Chains, agents, tools, retrievers, LLMs
  • LlamaIndex - Queries, retrievals, agents, tools
  • CrewAI - Agents, tasks, crews, tools
  • Haystack - Pipelines, components, retrievers
  • OpenAI - Chat, completions, embeddings, streaming
  • Anthropic - Claude chat, streaming
  • Google GenAI - Gemini models
  • Cohere - Chat, embeddings
  • Model Context Protocol (MCP) - MCP tools and servers

Supported LLM Providers

OpenAI • Anthropic • Google Gemini • Azure OpenAI • Cohere • Groq • Together • Anyscale • Perplexity • Mistral • AWS Bedrock • Replicate • HuggingFace • Ollama • LiteLLM • and 20+ more


Common Patterns

Pattern 1: Pure Framework Code

from neatlogs import init, flush, shutdown
from langchain.chains import LLMChain

init(api_key="...", instrumentations=["langchain", "openai"])

# Your existing code - zero changes!
chain = LLMChain(...)
result = chain.run("query")

flush()
shutdown()

Pattern 2: Custom Workflow with Prompt Tracking

from neatlogs import init, span, trace, flush, shutdown, PromptTemplate

init(api_key="...", instrumentations=["openai"])

template = PromptTemplate([{"role": "user", "content": "{{q}}"}])

@span(kind="AGENT", role="QA Agent")
def answer_question(q: str):
    with trace(prompt_template=template):
        messages = template.compile(q=q)
        response = llm.create(messages=messages)
        return response

@span(kind="WORKFLOW")
def qa_workflow(q: str):
    return answer_question(q)

result = qa_workflow("What is AI?")
flush()
shutdown()

Pattern 3: RAG Pipeline

from neatlogs import init, span, trace, flush, shutdown, PromptTemplate

init(api_key="...", instrumentations=["openai"])

@span(kind="RETRIEVER", name="vector_search")
def retrieve_docs(query: str):
    return vector_db.search(query, top_k=5)

@span(kind="TOOL", tool_name="rerank")
def rerank_docs(docs, query: str):
    return reranker.rerank(docs, query)

@span(kind="AGENT", role="RAG Agent")
def generate_answer(query: str, docs):
    template = PromptTemplate([
        {"role": "user", "content": "Context: {{context}}\nQuestion: {{query}}"}
    ])
    with trace(prompt_template=template):
        context = "\n".join([d.content for d in docs])
        messages = template.compile(context=context, query=query)
        return llm.create(messages=messages)

@span(kind="WORKFLOW", name="rag_pipeline")
def rag_pipeline(query: str):
    docs = retrieve_docs(query)
    ranked = rerank_docs(docs, query)
    answer = generate_answer(query, ranked)
    return answer

result = rag_pipeline("What is quantum computing?")
flush()
shutdown()

Pattern 4: Multi-Turn Conversation

from neatlogs import init, span, trace, flush, shutdown

# Enable auto_session for automatic session management
init(api_key="...", instrumentations=["openai"], auto_session=True)

@span(kind="AGENT", role="Chat Assistant")
def chat(message: str, history: list):
    messages = history + [{"role": "user", "content": message}]
    return llm.create(messages=messages)

# Group entire conversation with session tracking
with trace(session_id="user-123", thread_id="chat-456"):
    history = []
    for user_message in conversation:
        reply = chat(user_message, history)
        # Record both turns so later calls see the full conversation
        history.append({"role": "user", "content": user_message})
        history.append({"role": "assistant", "content": reply})

flush()
shutdown()

Configuration

Environment Variables

NEATLOGS_API_KEY=your-api-key
NEATLOGS_ENDPOINT=https://api.neatlogs.com/v4/batch
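
If you prefer keeping credentials out of code, read these variables explicitly at startup. This sketch uses only the documented init() arguments; whether the SDK also reads the variables automatically when arguments are omitted is not covered here:

import os
from neatlogs import init

init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    endpoint=os.environ.get("NEATLOGS_ENDPOINT", "https://api.neatlogs.com/v4/batch"),
)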

Initialization Options

init(
    api_key="...",                                 # Required: API key
    endpoint="https://api.neatlogs.com/v4/batch",  # Optional: endpoint
    instrumentations=["openai", "langchain"],      # Optional: frameworks to auto-instrument
    workflow_name="my-workflow",                   # Optional: workflow name
)

Best Practices

  1. Use auto-instrumentation when possible - Just init() and you're done
  2. @span() for custom orchestration - Wrap your custom workflow, agent, and tool functions
  3. trace() for prompts - Track prompt templates inside functions that use LLMs
  4. trace() for sessions - Group multi-turn conversations with session_id and thread_id
  5. Always flush() and shutdown() - Ensure all spans are sent before exit (see the sketch below)
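
One way to guarantee the final flush() and shutdown() even when your app exits with an error is a try/finally around the entry point. This is a general Python pattern, not an SDK requirement:

from neatlogs import init, flush, shutdown

init(api_key="...", instrumentations=["openai"])

try:
    main()  # your application entry point
finally:
    flush()     # send any pending spans
    shutdown()  # release SDK resources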

Examples

See the /examples directory for 60+ comprehensive examples:

  • Framework examples (LangChain, CrewAI, LlamaIndex, Haystack)
  • Provider examples (OpenAI, Anthropic, Google, Cohere)
  • Pattern examples (RAG, agents, tools, streaming, async)
  • Guardrail integration examples

License

MIT License - see LICENSE file for details
