
NeoAPI SDK

The official Python SDK for integrating neoapi.ai LLM Analytics with your LLM pipelines. Track, analyze, and optimize your Language Model outputs with real-time analytics.

Installation

pip install neoapi-sdk

Quick Start Guide

First, set your API key as an environment variable:

export NEOAPI_API_KEY="your-api-key"

Basic Usage

from neoapi import NeoApiClientSync, track_llm_output

# The context manager handles client lifecycle automatically
with NeoApiClientSync() as client:
    # Decorate your LLM function to track its outputs
    @track_llm_output(client=client)
    def get_llm_response(prompt: str) -> str:
        # Your LLM logic here
        return "AI generated response"
    
    # Use your function normally
    response = get_llm_response("What is machine learning?")

Async Support

import asyncio
from neoapi import NeoApiClientAsync, track_llm_output

async def main():
    async with NeoApiClientAsync() as client:
        @track_llm_output(
            client=client,
            project="chatbot",
            need_analysis_response=True  # Get analytics feedback
        )
        async def get_llm_response(prompt: str) -> str:
            # Your async LLM logic here
            await asyncio.sleep(0.1)  # Simulated API call
            return "Async AI response"
        
        response = await get_llm_response("Explain async programming")

# Run your async code
asyncio.run(main())

OpenAI Integration Example

from openai import OpenAI
from neoapi import NeoApiClientSync, track_llm_output

def chat_with_gpt():
    openai_client = OpenAI()  # Uses OPENAI_API_KEY env variable
    
    with NeoApiClientSync() as neo_client:
        @track_llm_output(
            client=neo_client,
            project="gpt4_chat",
            need_analysis_response=True,  # Get quality metrics
            format_json_output=True       # Pretty-print analytics
        )
        def ask_gpt(prompt: str) -> str:
            response = openai_client.chat.completions.create(
                messages=[{"role": "user", "content": prompt}],
                model="gpt-4o-mini"
            )
            return response.choices[0].message.content

        # Use the tracked function
        response = ask_gpt("What are the key principles of clean code?")
        print(response)  # Analytics will be logged automatically

Key Features

  • 🔄 Automatic Tracking: Decorator-based output monitoring
  • ⚡ Async Support: Built for high-performance async applications
  • 🔍 Real-time Analytics: Get immediate feedback on output quality
  • 🛠 Flexible Integration: Works with any LLM provider
  • 🔧 Configurable: Extensive customization options
  • 🔐 Secure: Environment-based configuration
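
The decorator-based tracking pattern can be pictured with a small, self-contained sketch in plain Python. This is purely conceptual, not the SDK's internal implementation; `TrackingClient` and `track_output` are hypothetical stand-ins for `NeoApiClientSync` and `track_llm_output`:

```python
import functools

class TrackingClient:
    """Hypothetical stand-in for NeoApiClientSync: collects outputs in memory."""
    def __init__(self):
        self.records = []

    def track(self, project, output):
        self.records.append({"project": project, "output": output})

def track_output(client, project="default"):
    """Sketch of a decorator that records a function's return value."""
    def decorator(fn):
        @functools.wraps(fn)  # preserve the wrapped function's name and docstring
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            client.track(project, result)  # record the output after each call
            return result
        return wrapper
    return decorator

client = TrackingClient()

@track_output(client, project="demo")
def answer(prompt: str) -> str:
    return f"response to: {prompt}"

print(answer("hi"))         # the wrapped function behaves normally
print(len(client.records))  # one output was recorded
```

Because the decorator only wraps the call and forwards the return value, it works with any LLM provider: the tracked function's body is free to call OpenAI, a local model, or anything else.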

Configuration Options

Environment Variables

# Required
export NEOAPI_API_KEY="your-api-key"
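
The client resolves its key from this variable when none is passed explicitly. The resolution pattern looks roughly like the following plain-Python sketch (`resolve_api_key` is a hypothetical helper for illustration; the SDK's actual constructor logic may differ):

```python
import os

def resolve_api_key(explicit_key=None):
    """Prefer an explicitly passed key; fall back to the NEOAPI_API_KEY env var."""
    key = explicit_key or os.environ.get("NEOAPI_API_KEY")
    if not key:
        raise RuntimeError("Set NEOAPI_API_KEY or pass api_key explicitly")
    return key

os.environ["NEOAPI_API_KEY"] = "your-api-key"
print(resolve_api_key())       # falls back to the environment variable
print(resolve_api_key("cli"))  # an explicit key takes precedence
```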


Client Configuration

client = NeoApiClientSync(
    # Basic settings
    api_key="your-api-key",      # Optional if env var is set
    check_frequency=1,           # Process every Nth output (1 = every output)
    
    # Performance tuning
    batch_size=10,               # Outputs per batch
    flush_interval=5.0,          # Seconds between flushes
    max_retries=3,               # Retry attempts on failure
    
    # Advanced options
    api_url="custom-url",        # Optional API endpoint override
    max_batch_size=100,          # Maximum batch size
)
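
The `batch_size` and `flush_interval` parameters control how outputs are buffered before being sent. Their semantics can be pictured with a toy buffer (illustrative only; `Batcher` is a hypothetical model, and the SDK's real batching is internal):

```python
class Batcher:
    """Toy model of batch_size semantics: flush when the buffer fills."""
    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.buffer = []   # outputs waiting to be sent
        self.flushed = []  # batches that have been "sent"

    def add(self, output):
        self.buffer.append(output)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send whatever is buffered; also triggered every flush_interval seconds."""
        if self.buffer:
            self.flushed.append(list(self.buffer))
            self.buffer.clear()

b = Batcher(batch_size=3)
for i in range(7):
    b.add(f"output-{i}")
print(len(b.flushed))  # two full batches were sent
print(len(b.buffer))   # one output remains buffered until the next flush
```

In this model, a larger `batch_size` reduces request overhead at the cost of latency, while `flush_interval` bounds how long a partially filled batch can wait.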

Decorator Options

@track_llm_output(
    client=client,
    
    # Organization
    project="my_project",        # Project identifier
    group="experiment_a",        # Subgroup within project
    analysis_slug="v1.2",        # Version or analysis identifier
    
    # Analytics
    need_analysis_response=True, # Get quality metrics
    format_json_output=True,     # Pretty-print analytics
    
    # Custom data
    metadata={                   # Additional tracking info
        "model": "gpt-4",
        "temperature": 0.7,
        "user_id": "user123"
    },
    save_text=True              # Store output text
)

Best Practices

  1. Use Context Managers: They handle client lifecycle automatically

    with NeoApiClientSync() as client:
        # Your code here
    
  2. Group Related Outputs: Use project and group parameters

    @track_llm_output(client=client, project="chatbot", group="user_support")
    
  3. Add Relevant Metadata: Include context for better analysis

    @track_llm_output(
        client=client,
        metadata={"user_type": "premium", "session_id": "abc123"}
    )
    


License

Apache License 2.0 - See LICENSE for details
