Integrate neoapi.ai LLM Analytics with your LLM pipelines.

Project description

NeoAPI SDK

The official Python SDK for integrating neoapi.ai LLM Analytics with your LLM pipelines. Track, analyze, and optimize your Language Model outputs with real-time analytics.

Installation

pip install neoapi-sdk

Quick Start Guide

First, set your API key as an environment variable:

export NEOAPI_API_KEY="your-api-key"
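
The key can also be passed directly to the client instead of being read from the environment (see Client Configuration below):

from neoapi import NeoApiClient

client = NeoApiClient(api_key="your-api-key")  # explicit key instead of NEOAPI_API_KEY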

Basic Usage

from neoapi import NeoApiClient, track_llm_output

# The context manager handles client lifecycle automatically
with NeoApiClient() as client:
    # Track both prompt and response
    @track_llm_output(
        client=client,
        prompt=lambda x: f"User query: {x}",  # Dynamic prompt tracking
        metadata={"model": "gpt-4", "temperature": 0.7}
    )
    def get_llm_response(prompt: str) -> str:
        # Your LLM logic here
        return "AI generated response"
    
    # Use your function normally
    response = get_llm_response("What is machine learning?")
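
The decorator does not change the function's return value; tracked outputs are queued and sent according to the client's batching settings (see Configuration Options below).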

Async Support

import asyncio
from neoapi import NeoApiClient, track_llm_output_async

async def main():
    client = NeoApiClient()
    await client.start_async()

    try:
        @track_llm_output_async(
            client=client,
            project="chatbot",
            need_analysis_response=True,  # Get analytics feedback
            prompt=lambda x: f"User query: {x}",  # Dynamic prompt tracking
            metadata={"model": "gpt-4", "session_id": "async-123"}
        )
        async def get_llm_response(prompt: str) -> str:
            # Your async LLM logic here
            await asyncio.sleep(0.1)  # Simulated API call
            return "Async AI response"
        
        response = await get_llm_response("Explain async programming")
    finally:
        await client.stop_async()

# Run your async code
asyncio.run(main())
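
The decorated coroutine behaves like any other, so concurrent requests can be fanned out with standard asyncio tooling. A minimal sketch, assuming the get_llm_response coroutine defined inside main() above:

# Inside main(), after defining get_llm_response:
questions = ["Explain coroutines", "Explain event loops"]
answers = await asyncio.gather(*(get_llm_response(q) for q in questions))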

OpenAI Integration Example

from openai import OpenAI
from neoapi import NeoApiClient, track_llm_output

def chat_with_gpt():
    openai_client = OpenAI()  # Uses OPENAI_API_KEY env variable
    
    with NeoApiClient() as neo_client:
        @track_llm_output(
            client=neo_client,
            project="gpt4_chat",
            need_analysis_response=True,  # Get quality metrics
            format_json_output=True,      # Pretty-print analytics
            prompt=lambda x: f"GPT prompt: {x}",  # Track OpenAI prompts
            metadata={
                "model": "gpt-4",
                "temperature": 0.7,
                "session_id": "openai-123"
            }
        )
        def ask_gpt(prompt: str) -> str:
            response = openai_client.chat.completions.create(
                messages=[{"role": "user", "content": prompt}],
                model="gpt-4"
            )
            return response.choices[0].message.content

        # Use the tracked function
        response = ask_gpt("What are the key principles of clean code?")
        print(response)  # Analytics will be logged automatically

Key Features

  • 🔄 Automatic Tracking: Decorator-based output monitoring
  • 📝 Prompt Tracking: Track both input prompts and output responses
  • ⚡ Async Support: Built for high-performance async applications
  • 🔍 Real-time Analytics: Get immediate feedback on output quality
  • 🛠 Flexible Integration: Works with any LLM provider (see the sketch after this list)
  • 🔧 Configurable: Extensive customization options
  • 🔐 Secure: Environment-based configuration
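
Because the decorator wraps any function that returns text, no provider-specific hooks are needed. A minimal sketch, where call_my_model is a hypothetical stand-in for any provider SDK or local model:

from neoapi import NeoApiClient, track_llm_output

def call_my_model(prompt: str) -> str:
    # Hypothetical placeholder for any provider or local model call
    return "model output"

with NeoApiClient() as client:
    @track_llm_output(
        client=client,
        project="any_provider_demo",  # illustrative name
        prompt=lambda x: x
    )
    def generate(prompt: str) -> str:
        return call_my_model(prompt)

    generate("Hello from a custom provider")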

Configuration Options

Environment Variables

# Required
export NEOAPI_API_KEY="your-api-key"

Client Configuration

client = NeoApiClient(
    # Basic settings
    api_key="your-api-key",      # Optional if env var is set
    check_frequency=1,           # Process every Nth output
    
    # Performance tuning
    batch_size=10,               # Outputs per batch
    flush_interval=5.0,          # Seconds between flushes
    max_retries=3,               # Retry attempts on failure
    timeout=10.0,                # Request timeout in seconds
    
    # Advanced options
    api_url="custom-url",        # Optional API endpoint
)
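
For example, a high-volume pipeline might sample outputs and batch more aggressively. A sketch using only the options listed above:

from neoapi import NeoApiClient

client = NeoApiClient(
    check_frequency=5,    # analyze every 5th output (sampling)
    batch_size=50,        # send larger batches
    flush_interval=10.0,  # flush less frequently
)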

Decorator Options

@track_llm_output(
    client=client,
    
    # Organization
    project="my_project",        # Project identifier
    group="experiment_a",        # Subgroup within project
    analysis_slug="v1.2",        # Version or analysis identifier
    
    # Analytics
    need_analysis_response=True, # Get quality metrics
    format_json_output=True,     # Pretty-print analytics
    
    # Prompt Tracking
    prompt="Static prompt",      # Static prompt text
    # OR
    prompt=lambda x: f"Dynamic: {x}",  # Dynamic prompt function
    
    # Custom data
    metadata={                   # Additional tracking info
        "model": "gpt-4",
        "temperature": 0.7,
        "session_id": "abc123"
    }
)
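
As the examples above show, a callable prompt receives the wrapped function's input, so lambda x: f"Dynamic: {x}" is called with the same string that was passed to the decorated function.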

Best Practices

  1. Use Context Managers: They handle client lifecycle automatically

    with NeoApiClient() as client:
        # Your code here
    
  2. Track Prompts: Include input prompts for better analysis

    @track_llm_output(
        client=client,
        prompt=lambda x: f"User: {x}"
    )
    
  3. Group Related Outputs: Use project and group parameters

    @track_llm_output(
        client=client,
        project="chatbot",
        group="user_support"
    )
    
  4. Add Relevant Metadata: Include context for better analysis

    @track_llm_output(
        client=client,
        metadata={
            "model": "gpt-4",
            "temperature": 0.7,
            "session_id": "abc123"
        }
    )
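
Putting these together, a minimal sketch that follows all four practices (project, group, and session names are illustrative):

from neoapi import NeoApiClient, track_llm_output

with NeoApiClient() as client:                  # 1. context manager
    @track_llm_output(
        client=client,
        project="support_bot",                  # 3. group related outputs
        group="billing_questions",
        prompt=lambda x: f"User: {x}",          # 2. track prompts
        metadata={                              # 4. add relevant metadata
            "model": "gpt-4",
            "session_id": "abc123"
        }
    )
    def answer(question: str) -> str:
        return "..."  # your LLM call here

    answer("Why was I charged twice?")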
    

License

Apache License 2.0 - See LICENSE for details

Download files

Source Distribution

neoapi-sdk-0.1.6.tar.gz (12.6 kB)

Built Distribution

neoapi_sdk-0.1.6-py3-none-any.whl (14.8 kB)

File details

Details for the file neoapi-sdk-0.1.6.tar.gz.

File metadata

  • Download URL: neoapi-sdk-0.1.6.tar.gz
  • Upload date:
  • Size: 12.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for neoapi-sdk-0.1.6.tar.gz

Algorithm    Hash digest
SHA256       b1d81fe56d2a45ee774e91275017ce80c691c0700b473965955a857038735853
MD5          e86bb523c761fcafdd84ad184e8c89e9
BLAKE2b-256  b26e4e43fb884c2adf27c41e5556bf83d00b29f3932585513c59b2f582310207


File details

Details for the file neoapi_sdk-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: neoapi_sdk-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 14.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for neoapi_sdk-0.1.6-py3-none-any.whl

Algorithm    Hash digest
SHA256       9c7c7c77eaa80394c438e91449195e98c03e9f5eef42baafb4d802e13c6e85d9
MD5          35020a1a2ec5174072e0f7c9aa617059
BLAKE2b-256  769c6224f1c92d3c77447436a413e6d14684a3be157b8d411ad397e59a4a0d45

