Intelligent Research and Experimentation AI for LLM experimentation and production. ⚠️ This package is in BETA and under active development.

Intura-AI: Intelligent Research and Experimentation AI


intura-ai is a Python package designed to streamline Large Language Model (LLM) experimentation and production. It provides tools for logging LLM usage and managing experiment predictions, with seamless LangChain compatibility.

⚠️ Beta Status

IMPORTANT: Intura AI is currently in BETA and under active development. While we're working hard to ensure stability, you may encounter:

  • API changes without prior notice
  • Incomplete features
  • Bugs and performance issues

We welcome your feedback and contributions to help improve the library!

Getting Started with Intura

This guide will help you start experimenting with Large Language Models (LLMs) using Intura in under 5 minutes. We'll walk you through setting up your first experiment using either our SDK or the Intura Dashboard.

Quick Start Options

  • SDK-Based Approach:
    • Use the Intura AI SDK for programmatic experiment creation and management, offering flexibility and integration into your existing workflows.
  • Intura Dashboard:
    • Start experimenting immediately with our user-friendly Intura Dashboard. This option is perfect for quickly exploring Intura's capabilities without writing code.

Prerequisites

Before you begin, ensure you have:

  • Python 3.10 or Later:
    • Download and install Python from python.org/downloads
    • During installation, select the option to add Python to your system's PATH
    • This makes the python and pip commands available from any terminal, which the setup steps below rely on
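If you're not sure which interpreter your terminal resolves to, a quick check like the following confirms it meets the 3.10 minimum. This is our own helper for illustration, not part of the SDK:

```python
import sys

MINIMUM = (3, 10)  # minimum Python version supported by intura-ai

def meets_minimum(version_info=None, minimum=MINIMUM):
    """Return True when the given version (default: this interpreter) is new enough."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) >= minimum

# e.g. meets_minimum((3, 9)) is False, meets_minimum((3, 12)) is True
```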

Installation and Setup

Step 1: Install the Intura AI SDK

Open your terminal or command prompt and run:

pip install intura-ai

Step 2: Obtain Your Intura API Key

Your API key authenticates your access to the Intura platform. Store it securely, as it grants access to your Intura resources.
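A common way to keep the key out of source control is to read it from an environment variable and fail fast when it is missing. A minimal sketch (the helper name is ours, not an SDK function):

```python
import os

def load_intura_api_key(env_var="INTURA_API_KEY"):
    """Read the Intura API key from the environment, failing fast if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; export it before initializing the Intura client."
        )
    return key
```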

Step 3: Install Required LangChain Partners

Intura uses LangChain to integrate with different LLM providers. Install the package(s) for your preferred LLM provider(s):

# Install all supported LangChain partners
pip install intura-ai[all-langchain-partner]

# Or install specific partners
pip install intura-ai[openai]      # For OpenAI models
pip install intura-ai[anthropic]   # For Claude models
pip install intura-ai[google-genai] # For Gemini models
pip install intura-ai[deepseek]    # For DeepSeek models
pip install intura-ai[together]    # For Together.ai models

Step 4: Obtain LLM Provider API Keys

You'll need an API key from each LLM provider you plan to use in your experiments; obtain one from each provider's developer console and make it available to your environment.

Creating Your First Experiment

Now you can define an experiment to compare different LLMs or prompting strategies.

Step 1: Define Your Experiment

import os
from intura_ai.platform import DashboardPlatform
from intura_ai.platform.domain import ExperimentModel, ExperimentTreatmentModel

# Initialize the platform client with your Intura API key
client = DashboardPlatform(intura_api_key=os.environ.get("INTURA_API_KEY", "<INTURA_API_KEY>"))

# Create an experiment with multiple treatment variations
experiment_id = client.create_experiment(ExperimentModel(
    experiment_name="Motivation Messages Comparison",
    treatment_list=[
        # Treatment 1: Using Gemini model
        ExperimentTreatmentModel(
            treatment_model_name="gemini-1.5-flash",
            treatment_model_provider="Google",
            prompt="Act as a motivational coach providing inspiring daily messages"
        ),
        # Treatment 2: Using Claude model
        ExperimentTreatmentModel(
            treatment_model_name="claude-3-5-sonnet-20240620",
            treatment_model_provider="Anthropic",
            prompt="Act as a motivational coach providing inspiring daily messages"
        ),
    ]
))

print(f"Experiment created with ID: {experiment_id}")

In this example:

  • We create an experiment comparing two different LLMs (Gemini and Claude)
  • Both use the same prompt, allowing us to compare model performance
  • You could also test different prompts with the same model

Step 2: Run Your Experiment

After creating your experiment, you can run it and collect results:

import os
from intura_ai.experiments import ChatModelExperiment

# Set your LLM provider API keys as environment variables
os.environ["GOOGLE_API_KEY"] = "your_google_api_key"
os.environ["ANTHROPIC_API_KEY"] = "your_anthropic_api_key"
# Add keys for any other providers you're using

# Initialize the experiment client
chat_client = ChatModelExperiment(
    intura_api_key=os.environ.get("INTURA_API_KEY", "<INTURA_API_KEY>")
)

# Build the experiment chain with user-specific features
llm, prompts = chat_client.build(
    experiment_id=experiment_id,  # Use the ID from the experiment you created
    features={
        "user_id": "user123",     # User identifier
        "subscription_tier": "FREE",  # Can be used for segmentation
        "user_type": "FULL_TIME",
        "location": "US"          # Any custom features you want to track
    },
    messages=[{
        "role": "human",
        "content": "I'm feeling unmotivated today. Can you help me get back on track?"
    }]
)
chain = prompts | llm

# Invoke the experiment (Intura will automatically select one of your treatments)
response = chain.invoke({})
print(response)

Step 3: View and Analyze Results

After running your experiment with multiple users or queries, you can:

  1. Log into the Intura Dashboard
  2. Navigate to your experiment
  3. View performance metrics, including:
    • Response times
    • User segments and their interactions
    • Cost analysis
    • Response quality comparisons

Advanced Usage

Parameter Substitution in Prompts

You can create dynamic prompts using parameter substitution:

# Create an experiment with parameter placeholders
experiment_id = client.create_experiment(ExperimentModel(
    experiment_name="Personalized Motivation",
    treatment_list=[
        ExperimentTreatmentModel(
            treatment_model_name="gpt-4o",
            treatment_model_provider="OpenAI",
            prompt="Create a motivational message for a {user_occupation} who is feeling {mood}"
        ),
    ]
))

# When invoking, provide the parameters
response = chain.invoke({
    "user_occupation": "software developer",
    "mood": "stressed about deadlines"
})
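Conceptually, placeholders like `{user_occupation}` follow standard prompt-template semantics: each named slot is filled from the keyword arguments you pass at invoke time. A simplified sketch of the idea using Python's built-in `str.format` (not Intura's actual implementation):

```python
PROMPT = "Create a motivational message for a {user_occupation} who is feeling {mood}"

def render_prompt(template, **params):
    """Fill each {placeholder} in the template with the matching keyword argument."""
    return template.format(**params)

rendered = render_prompt(
    PROMPT,
    user_occupation="software developer",
    mood="stressed about deadlines",
)
```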

Multi-Turn Conversations

For multi-turn conversations, you can add to the messages array:

llm, prompts = chat_client.build(
    experiment_id=experiment_id,
    features={"user_id": "user123"},
    messages=[
        {"role": "human", "content": "Help me plan a healthy diet"},
        {"role": "assistant", "content": "I'd be happy to help you plan a healthy diet! What are your dietary preferences or restrictions?"},
        {"role": "human", "content": "I'm vegetarian and allergic to nuts"}
    ]
)
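If you're accumulating turns over a live session, it can help to keep the history in the same role/content shape that the `messages` argument expects. A small sketch (our own helper, not part of the SDK):

```python
def append_turn(history, role, content):
    """Append one turn to a conversation history in role/content form."""
    if role not in ("human", "assistant"):
        raise ValueError(f"unexpected role: {role}")
    history.append({"role": role, "content": content})
    return history

history = []
append_turn(history, "human", "Help me plan a healthy diet")
append_turn(history, "assistant", "What are your dietary preferences or restrictions?")
append_turn(history, "human", "I'm vegetarian and allergic to nuts")
# pass `history` as the messages= argument on the next build() call
```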

Troubleshooting

If you encounter issues:

  1. API Key Errors: Verify your Intura API key and LLM provider API keys are correct
  2. Installation Problems: Ensure you're using Python 3.10+ and have installed the correct LangChain partner packages
  3. Model Unavailability: Check that you have access to the specific models in your treatments
  4. Request Failures: Verify your internet connection and that the LLM providers' services are operational
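For transient request failures (item 4), a generic retry wrapper with exponential backoff around `chain.invoke` is often enough. This is a general-purpose sketch, not an Intura feature:

```python
import time

def invoke_with_retries(fn, *args, attempts=3, backoff=1.0, **kwargs):
    """Call fn, retrying on any exception with exponential backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == attempts:
                raise  # out of retries; surface the original error
            time.sleep(backoff * 2 ** (attempt - 1))

# usage (hypothetical): response = invoke_with_retries(chain.invoke, {})
```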

For further assistance, contact support at support@intura.co

Contributing

We welcome contributions to Intura-AI! Please feel free to:

  • Submit pull requests
  • Open issues for bug reports
  • Suggest feature enhancements
  • Improve documentation

See our Contributing Guidelines for more details.

License

This project is licensed under the MIT License - see the LICENSE file for details.
