
Intelligent Research and Experimentation AI for LLM experimentation and production.

Project description

Intura-AI: Intelligent Research and Experimentation AI


intura-ai is a Python package designed to streamline LLM experimentation and production. It provides tools for logging LLM usage and managing experiment predictions, with seamless LangChain compatibility.

Getting Started with Intura

Dive into the world of Intura and begin experimenting with Large Language Models (LLMs) in under 5 minutes. This guide will walk you through the essential steps to set up your first experiment, either through our SDK or directly via the Intura Dashboard.

Quick Start Options

  • SDK-Based Experiment Creation:
    • Utilize the Intura AI SDK for programmatic experiment creation and management, offering flexibility and integration into your existing workflows.
  • Intura Dashboard:
    • Jump straight into experimentation with our user-friendly dashboard, accessible at intura.dashboard. This option is perfect for quickly exploring Intura's capabilities.

Prerequisites

Before you begin, ensure you have the following:

  • Python 3.10 or Later:
    • Download and install Python from python.org/downloads. During installation, select the option to add Python to your system's PATH so the SDK works from any terminal.
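A quick way to confirm your interpreter meets this requirement before installing (the check_python helper below is illustrative, not part of the SDK):

```python
import sys

def check_python(min_version=(3, 10)):
    """Return True if the running interpreter meets the stated minimum version."""
    return sys.version_info[:2] >= min_version

if not check_python():
    print("Warning: intura-ai requires Python 3.10 or later")
```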

SDK Initialization and Setup

To leverage the Intura AI SDK, you'll need to install it and configure your environment.

  1. Install the Intura AI SDK:

    • Open your terminal or command prompt and execute the following command:
    pip install intura-ai
    
  2. Obtain Your API Key:

    • Your API key is essential for authenticating with the Intura platform. You can find it within the Intura Dashboard or by contacting admin@intura.co. Store this key securely, as it grants access to your Intura resources.
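To keep the key out of source code, read it from an environment variable and fail loudly when it is missing. A minimal sketch (load_intura_api_key is an illustrative helper, not an SDK function):

```python
import os

def load_intura_api_key(env_var="INTURA_API_KEY"):
    """Fetch the Intura API key from the environment, raising if it is unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; export it or pass intura_api_key explicitly"
        )
    return key
```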

Creating Your First Experiment

With the SDK installed and your API key ready, you can now define your experiment.

  1. Experiment Definition:

    • Use the DashboardPlatform class from the intura_ai.platform module to create and manage your experiment. You'll also need ExperimentModel and ExperimentTreatmentModel from intura_ai.platform.domain to define the experiment and its treatment variations.
    import os

    from intura_ai.platform import DashboardPlatform
    from intura_ai.platform.domain import ExperimentModel, ExperimentTreatmentModel
    
    client = DashboardPlatform(intura_api_key=os.environ.get("INTURA_API_KEY", "<INTURA_API_KEY>"))
    experiment_id = client.create_experiment(ExperimentModel(
        experiment_name="Example Experiment",
        treatment_list=[
            ExperimentTreatmentModel(
                treatment_model_name="gemini-1.5-flash",
                treatment_model_provider="Google",
                prompt="Act as personal assistant"
            ),
            ExperimentTreatmentModel(
                treatment_model_name="claude-3-5-sonnet-20240620",
                treatment_model_provider="Anthropic",
                prompt="Act as personal assistant"
            ),
        ]
    ))
    

Running Your Experiment

After defining your experiment, you can run it and analyze the results.

  1. Initialize the Experiment Client:

    • Use the ChatModelExperiment class to create a client that interacts with your experiment.
    import os
    from intura_ai.experiments import ChatModelExperiment
    
    client = ChatModelExperiment(
        intura_api_key=os.environ.get("INTURA_API_KEY", "<INTURA_API_KEY>")
    )
    
    choiced_model, model_config, chat_prompts = client.build(
        experiment_id=experiment_id,
        features={
            "user_id": "Rama12345",
            "membership": "FREE",
            "employment_type": "FULL_TIME",
            "feature_x": "your custom features"
        }
    )
    
  2. Craft the Final Prompt:

    • Add your final user prompt to the chat prompts list.
    chat_prompts.append(('human', 'give me motivation for today'))
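Each entry in chat_prompts is a (role, content) tuple, the shorthand LangChain accepts for chat messages. If you want to inspect or log the conversation before invoking the model, a small conversion helper can be sketched (to_messages is illustrative, not part of the SDK):

```python
def to_messages(chat_prompts):
    """Convert (role, content) tuples into plain dicts for logging or inspection."""
    return [{"role": role, "content": content} for role, content in chat_prompts]

# Example conversation in the same tuple format used above.
chat_prompts = [("system", "Act as personal assistant")]
chat_prompts.append(("human", "give me motivation for today"))
```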
    
  3. Set Up and Invoke the Model:

    • Initialize the chosen LLM model with its configuration and API key, then invoke it with the crafted prompts.
    import os
    
    # Set your LLM API keys as environment variables
    os.environ["GOOGLE_API_KEY"] = "xxx"
    os.environ["ANTHROPIC_API_KEY"] = "xxx"
    os.environ["DEEPSEEK_API_KEY"] = "xxx"
    os.environ["OPENAI_API_KEY"] = "xxx"
    os.environ["ANOTHER_LLM_KEY"] = "xxx"
    
    model = choiced_model(**model_config)
    
    # Or set LLM API keys as parameters
    # model = choiced_model(**model_config, api_key="<YOUR_LLM_API_KEY>")
    
    response = model.invoke(chat_prompts)
    print(response)
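Because invoke calls an external LLM API, production code usually wraps it with a simple retry. A minimal sketch with exponential backoff (invoke_with_retry is illustrative and assumes the model object raises ordinary exceptions on transient failures):

```python
import time

def invoke_with_retry(model, prompts, attempts=3, backoff=1.0):
    """Call model.invoke, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return model.invoke(prompts)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            time.sleep(backoff * (2 ** attempt))
```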
    

By following these steps, you can quickly set up and run your first experiment with Intura, exploring the power of LLMs and optimizing their performance for your specific needs.

Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues for bug reports or feature requests.

License

This project is licensed under the MIT License.

Download files

Download the file for your platform.

Source Distribution

intura_ai-0.0.3.16.tar.gz (13.6 kB)


Built Distribution


intura_ai-0.0.3.16-py3-none-any.whl (16.5 kB)


File details

Details for the file intura_ai-0.0.3.16.tar.gz.

File metadata

  • Download URL: intura_ai-0.0.3.16.tar.gz
  • Upload date:
  • Size: 13.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.16

File hashes

Hashes for intura_ai-0.0.3.16.tar.gz:

  • SHA256: 60d2ac14d279a5e8279d8d5d9861f81c82fa3407688b314e6ad32c1021f50089
  • MD5: 5fd224e019edfcc1e6ad2af599fed031
  • BLAKE2b-256: 9cb863b57d244c0bf0590ae1dc27a428403442a87e32b68217395f8617e1b213


File details

Details for the file intura_ai-0.0.3.16-py3-none-any.whl.

File metadata

  • Download URL: intura_ai-0.0.3.16-py3-none-any.whl
  • Upload date:
  • Size: 16.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.16

File hashes

Hashes for intura_ai-0.0.3.16-py3-none-any.whl:

  • SHA256: 9e22120362689f64221859fd86da6e83f2aa7ddfe9ddc396d65e96e7c34f96fd
  • MD5: 020a0d804199f79775967aa772d66749
  • BLAKE2b-256: ce5f8c5eb2f383bc93c10409969f04fb4edf97bb36cef1a2ad69cd7e8148e8c8

