
Intelligent Research and Experimentation AI for LLM experimentation and production.

Project description

Intura-AI: Intelligent Research and Experimentation AI


intura-ai is a Python package designed to streamline LLM experimentation and production. It provides tools for logging LLM usage and managing experiment predictions, with seamless LangChain compatibility.

Dashboard: dashboard.intura.co

Features

  • Callbacks:
    • UsageTrackCallback: Log LLM usage details for analysis and monitoring.
  • Experiment Prediction:
    • ChatModelExperiment: Facilitates the selection and execution of LangChain models based on experiment configurations.
  • LangChain Compatibility:
    • Designed to integrate smoothly with LangChain workflows.

Installation

pip install intura-ai

Usage

Initialization

Before using intura-ai, you need to initialize the client with your API key.

import os
from intura_ai.client import intura_initialization

INTURA_API_KEY = "..."
intura_initialization(INTURA_API_KEY)

os.environ["GOOGLE_API_KEY"] = "..."
os.environ["ANTHROPIC_API_KEY"] = "..."
os.environ["DEEPSEEK_API_KEY"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
os.environ["XXX_API_KEY"] = "..."
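Hardcoding keys as above is fine for a quick test, but for anything shared it is safer to read them from the environment and fail fast when one is missing. A minimal stdlib-only helper (the helper name is our own, not part of intura-ai):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, raising if it is unset or empty."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```

Then `intura_initialization(require_env("INTURA_API_KEY"))` surfaces a clear error instead of an opaque authentication failure later.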

Experiment Prediction

Use ChatModelExperiment to fetch and execute pre-configured LangChain models.

from intura_ai.experiments import ChatModelExperiment

EXPERIMENT_ID = "..."
client = ChatModelExperiment(EXPERIMENT_ID)
llm, messages = client.build()
messages.append(('human', "Give me today's quote for programmers"))
llm.invoke(messages)

Usage Tracking Callback

Integrate UsageTrackCallback to log LLM usage during execution.

from intura_ai.callbacks import UsageTrackCallback
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.schema import HumanMessage

EXPERIMENT_ID = "..."
llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-pro",
    max_tokens=300,
    timeout=None,
    max_retries=2,
    callbacks=[
        UsageTrackCallback(EXPERIMENT_ID)
    ]
)

messages = [HumanMessage(content="What is the capital of France?")]
llm.invoke(messages)
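intura-ai does not document UsageTrackCallback's internals, but LangChain callback handlers follow a simple protocol: hooks fire when an LLM call starts and ends. The toy class below is our own stdlib-only illustration of the shape of such a usage tracker, not intura-ai's actual implementation:

```python
import time

class UsageLogger:
    """Toy illustration of a LangChain-style usage-tracking callback.

    Not intura-ai's implementation: it just records latency and
    prompt/response sizes the way a usage tracker might.
    """

    def __init__(self, experiment_id: str):
        self.experiment_id = experiment_id
        self.records = []
        self._start = None
        self._prompts = []

    def on_llm_start(self, prompts):
        # Called before the model is invoked; remember prompts and start time.
        self._start = time.monotonic()
        self._prompts = prompts

    def on_llm_end(self, response_text: str):
        # Called when the model returns; append one usage record.
        self.records.append({
            "experiment_id": self.experiment_id,
            "latency_s": time.monotonic() - self._start,
            "prompt_chars": sum(len(p) for p in self._prompts),
            "response_chars": len(response_text),
        })
```

A real handler would subclass `BaseCallbackHandler` from `langchain_core.callbacks` and ship records to the Intura dashboard; this sketch only shows what kind of data such a callback can capture.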

Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues for bug reports or feature requests.

License

This project is licensed under the MIT License.

Download files

Download the file for your platform.

Source Distribution

intura_ai-0.0.3.2.tar.gz (8.6 kB)


Built Distribution


intura_ai-0.0.3.2-py3-none-any.whl (9.5 kB)


File details

Details for the file intura_ai-0.0.3.2.tar.gz.

File metadata

  • Download URL: intura_ai-0.0.3.2.tar.gz
  • Upload date:
  • Size: 8.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.16

File hashes

Hashes for intura_ai-0.0.3.2.tar.gz

  • SHA256: 4159f61b3a1d6dd6a9e37c18df8fc089810ffb939241e695303b50e9b59bbeab
  • MD5: 2159a29203cd717dbee84a29b5fdffc9
  • BLAKE2b-256: 075fd9d5bb44d25868a641113365e29ecd5dac268f5b68d37dff6bc62ffd39ea
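The published digests can be checked locally after downloading the distribution. A minimal stdlib-only sketch (the file path is a placeholder for wherever you downloaded the archive):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published digest before installing, e.g.:
# sha256_of("intura_ai-0.0.3.2.tar.gz") should equal the SHA256 value above.
```

pip can enforce the same check automatically via its hash-checking mode (`--require-hashes` with pinned hashes in a requirements file).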


File details

Details for the file intura_ai-0.0.3.2-py3-none-any.whl.

File metadata

  • Download URL: intura_ai-0.0.3.2-py3-none-any.whl
  • Upload date:
  • Size: 9.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.16

File hashes

Hashes for intura_ai-0.0.3.2-py3-none-any.whl

  • SHA256: 61a5da3ab1805d5781aa621563c40a243993b209b17bcba113a4373cb586ab7b
  • MD5: 6745b2c9bb605821cd3c2c888470badf
  • BLAKE2b-256: f0603a5eba11ce3f3c6cdaa026cdf4ab4ef0ef8b367a2adec0085ee811beb0bd

