Intura-AI: Intelligent Research and Experimentation AI
intura-ai is a Python package designed to streamline LLM experimentation and production. It provides tools for logging LLM usage and managing experiment predictions, with seamless LangChain compatibility.
Getting Started with Intura
Dive into the world of Intura and begin experimenting with Large Language Models (LLMs) in under 5 minutes. This guide will walk you through the essential steps to set up your first experiment, either through our SDK or directly via the Intura Dashboard.
Quick Start Options
- SDK-Based Experiment Creation: use the Intura AI SDK for programmatic experiment creation and management, offering flexibility and integration into your existing workflows.
- Intura Dashboard: jump straight into experimentation with our user-friendly dashboard, accessible at intura.dashboard. This option is perfect for quickly exploring Intura's capabilities.
Prerequisites
Before you begin, ensure you have the following:
- Python 3.10 or later: download and install Python from python.org/downloads. During installation, select the option to add Python to your system's PATH so the SDK and its command-line tooling work out of the box.
SDK Initialization and Setup
To leverage the Intura AI SDK, you'll need to install it and configure your environment.
- Install the Intura AI SDK: open your terminal or command prompt and execute the following command:

```shell
pip install intura-ai
```
- Obtain Your API Key: your API key authenticates you with the Intura platform. You can find it within the Intura Dashboard or by contacting admin@intura.co. Store this key securely, as it grants access to your Intura resources.
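Before any of the examples below will run, the key needs to be available in your environment. Here is a minimal sketch of loading it defensively; the `get_intura_api_key` helper is hypothetical, not part of the SDK:

```python
import os

# Hypothetical helper (not part of intura-ai): fail fast when the key is
# missing instead of sending unauthenticated requests later.
def get_intura_api_key() -> str:
    key = os.environ.get("INTURA_API_KEY")
    if not key:
        raise RuntimeError("INTURA_API_KEY is not set; export it before running.")
    return key
```
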
Creating Your First Experiment
With the SDK installed and your API key ready, you can now define your experiment.
- Experiment Definition: use the `DashboardPlatform` class from the `intura_ai.platform` module to manage your experiment. You'll also need to import `ExperimentModel` and `ExperimentTreatmentModel` from `intura_ai.platform.domain` to define your experiment and its variations.

```python
import os

from intura_ai.platform import DashboardPlatform
from intura_ai.platform.domain import ExperimentModel, ExperimentTreatmentModel

client = DashboardPlatform(intura_api_key=os.environ.get("INTURA_API_KEY", "<INTURA_API_KEY>"))
experiment_id = client.create_experiment(ExperimentModel(
    experiment_name="Example Experiment",
    treatment_list=[
        ExperimentTreatmentModel(
            treatment_model_name="gemini-1.5-flash",
            treatment_model_provider="Google",
            prompt="Act as personal assistant"
        ),
        ExperimentTreatmentModel(
            treatment_model_name="claude-3-5-sonnet-20240620",
            treatment_model_provider="Anthropic",
            prompt="Act as personal assistant"
        ),
    ]
))
```
Running Your Experiment
After defining your experiment, you can run it and analyze the results.
- Initialize the Experiment Client: use the `ChatModelExperiment` class to create a client that interacts with your experiment.

```python
import os

from intura_ai.experiments import ChatModelExperiment

client = ChatModelExperiment(
    intura_api_key=os.environ.get("INTURA_API_KEY", "<INTURA_API_KEY>")
)
choiced_model, model_config, chat_prompts = client.build(
    experiment_id=experiment_id,
    features={
        "user_id": "Rama12345",
        "membership": "FREE",
        "employment_type": "FULL_TIME",
        "feature_x": "your custom features"
    }
)
```
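The `features` dictionary drives treatment assignment. To build intuition for why a stable `user_id` matters, here is an illustrative hash-based bucketing sketch; this is not Intura's actual assignment algorithm, just a common pattern for deterministic A/B splits:

```python
import hashlib

def assign_treatment(user_id: str, treatments: list) -> str:
    # Deterministic: the same user_id always hashes to the same bucket,
    # so a given user sees a consistent model across sessions.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return treatments[int(digest, 16) % len(treatments)]

treatments = ["gemini-1.5-flash", "claude-3-5-sonnet-20240620"]
print(assign_treatment("Rama12345", treatments))
```
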
- Craft the Final Prompt: add your final user prompt to the chat prompts list.

```python
chat_prompts.append(('human', 'give me motivation for today'))
```
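The prompt list is assumed here to be a plain list of `(role, content)` tuples, the message format LangChain chat models accept directly, so you can inspect or extend it before invoking:

```python
# Assumed shape of the prompt list: (role, content) tuples, e.g. a system
# prompt from the experiment's treatment followed by the user's turn.
chat_prompts = [("system", "Act as personal assistant")]
chat_prompts.append(("human", "give me motivation for today"))

for role, content in chat_prompts:
    print(f"{role}: {content}")
```
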
- Set Up and Invoke the Model: initialize the chosen LLM with its configuration and API key, then invoke it with the crafted prompts.

```python
import os

# Set your LLM API keys as environment variables
os.environ["GOOGLE_API_KEY"] = "xxx"
os.environ["ANTHROPIC_API_KEY"] = "xxx"
os.environ["DEEPSEEK_API_KEY"] = "xxx"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["ANOTHER_LLM_KEY"] = "xxx"

model = choiced_model(**model_config)
# Or pass the LLM API key as a parameter:
# model = choiced_model(**model_config, api_key="<YOUR_LLM_API_KEY>")

response = model.invoke(chat_prompts)
print(response)
```
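LangChain-compatible chat models typically return a message object whose text lives on a `.content` attribute, so printing the raw response includes metadata. A defensive extraction sketch; `FakeMessage` is a stand-in for illustration, not a real SDK type:

```python
def extract_text(response) -> str:
    # LangChain chat models usually return an AIMessage-like object with
    # a .content attribute; fall back to str() for plain-string returns.
    return response.content if hasattr(response, "content") else str(response)

class FakeMessage:
    # Stand-in for an AIMessage, for illustration only.
    content = "Here is some motivation for your day!"

print(extract_text(FakeMessage()))
print(extract_text("plain string response"))
```
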
By following these steps, you can quickly set up and run your first experiment with Intura, exploring the power of LLMs and optimizing their performance for your specific needs.
Contributing
Contributions are welcome! Please feel free to submit pull requests or open issues for bug reports or feature requests.
License
This project is licensed under the MIT License.
File details
Details for the file intura_ai-0.0.3.15.tar.gz.
File metadata
- Download URL: intura_ai-0.0.3.15.tar.gz
- Upload date:
- Size: 13.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `e008308a5bb8019caf77a47e62fc0d1c232b8200a56071432c4a5ce5f4176e1d` |
| MD5 | `41454b5246b4928cdc5515b6c5ceb60c` |
| BLAKE2b-256 | `c2bb439796d9ee38335d303c45d55eb40e50123edecbc6369f86a77d26e30b1c` |
File details
Details for the file intura_ai-0.0.3.15-py3-none-any.whl.
File metadata
- Download URL: intura_ai-0.0.3.15-py3-none-any.whl
- Upload date:
- Size: 16.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `efb520c71121ad448b96d553f20f2422418ad07617ccdb6319afba086d2cc29b` |
| MD5 | `d4ddf3ee976f5b0b260fea95f30b14f5` |
| BLAKE2b-256 | `9aa56cc9c5194efd44c37ac88661374969a047e188b85e66758874e5e6b0ddd2` |