
A flexible LLM manager that shifts between multiple models


ThinkShift_LLM

ThinkShift_LLM is a flexible LLM manager that shifts between multiple language models, so AI-powered conversations keep running even when an individual model fails or is unavailable.

Features

  • Manages multiple LiteLLM clients
  • Shifts between models when errors occur
  • Supports both streaming and non-streaming completions
  • Detailed logging of all interactions
  • Round-robin client selection (sketched conceptually after this list)
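
Conceptually, the manager walks its client list in round-robin order and moves on to the next client whenever a call fails. The sketch below is illustrative pseudocode of that idea only, not ThinkShift_LLM's actual implementation; the round_robin_completion function and the per-client call method are hypothetical stand-ins.

# Illustrative sketch of round-robin selection with error-driven failover.
# round_robin_completion and client.call are hypothetical, not the library's API.
def round_robin_completion(clients, messages, start=0):
    errors = []
    for offset in range(len(clients)):
        client = clients[(start + offset) % len(clients)]
        try:
            # hand the request to the current client
            return client.call(messages)
        except Exception as exc:
            # on failure, record the error and shift to the next client
            errors.append((client, exc))
    raise RuntimeError(f"All clients failed: {errors}")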

Installation

You can install tShift_LLM using pip:

pip install tshift-llm

Usage

Here's a quick example of how to use tShift_LLM:

from tshift_llm import tShift_LLM, LiteLLMClient

# Configure one LiteLLMClient per model/provider you want in the rotation
clients = [
    LiteLLMClient("gpt-3.5-turbo", "your-openai-key"),
    LiteLLMClient("claude-2", "your-anthropic-key"),
    LiteLLMClient("command-nightly", "your-cohere-key")
]

# The manager rotates through the clients and shifts to the next one on errors
tshift_llm = tShift_LLM(clients)

response = tshift_llm.completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response.choices[0].message.content)
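
Streaming completions are listed among the features. Assuming the completion call mirrors LiteLLM's streaming interface (a stream=True argument yielding delta-style chunks), usage would look roughly like the following; the parameter name and chunk shape here are assumptions, so check the documentation for the exact API.

# Assumed streaming usage, modelled on LiteLLM's stream=True interface;
# verify the parameter name and chunk format against the documentation.
stream = tshift_llm.completion(
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")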

For more detailed usage instructions, please refer to the documentation.

License

This project is licensed under the MIT License - see the LICENSE file for details.
