
A Python package for interacting with the Unify API

Project description

Unify

We're on a mission to simplify the LLM landscape. Unify lets you:

  • 🔑 Use any LLM from any Provider: With a single interface, you can use all LLMs from all providers by simply changing one string. No need to manage several API keys or handle different input-output formats. Unify handles all of that for you!

  • 📊 Improve LLM Performance: Add your own custom tests and evals, and benchmark your own prompts across all models and providers. Compare quality, cost and speed, iterate on your system prompt until all test cases pass, and then deploy your app!

  • 🔀 Route to the Best LLM: Improve quality, cost and speed by routing to the perfect model and provider for each individual prompt.

Quickstart

Simply install the package:

pip install unifyai

Then sign up to get your API key, and you're ready to go! 🚀

import unify
client = unify.Unify("gpt-4o@openai", api_key=<your_key>)
client.generate("hello world!")

[!NOTE] We recommend using python-dotenv to add UNIFY_KEY="My API Key" to your .env file, avoiding the need to use the api_key argument as above. For the rest of the README, we will assume you set your key as an environment variable.
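
For example, a minimal setup sketch, assuming the client falls back to the UNIFY_KEY environment variable when no api_key argument is passed (as the note above describes):

from dotenv import load_dotenv
load_dotenv()  # loads UNIFY_KEY from your .env file into the environment

import unify
client = unify.Unify("gpt-4o@openai")  # no api_key argument needed
client.generate("hello world!")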

Listing Models, Providers and Endpoints

You can list all models, providers and endpoints (<model>@<provider> pairs) as follows:

models = unify.list_models()
providers = unify.list_providers()
endpoints = unify.list_endpoints()

You can also filter within these functions as follows:

import random
anthropic_models = unify.list_models("anthropic")
client.set_endpoint(random.choice(anthropic_models) + "@anthropic")

latest_llama3p1_providers = unify.list_providers("llama-3.1-405b-chat")
client.set_endpoint("llama-3.1-405b-chat@" + random.choice(latest_llama3p1_providers))

openai_endpoints = unify.list_endpoints("openai")
client.set_endpoint(random.choice(openai_endpoints))

mixtral8x7b_endpoints = unify.list_endpoints("mixtral-8x7b-instruct-v0.1")
client.set_endpoint(random.choice(mixtral8x7b_endpoints))
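
As an illustration, here is a sketch that combines listing with generation to compare the same prompt across a few endpoints by hand. It only uses the calls shown above; random.sample is simply used to pick a small subset:

import random
import unify

client = unify.Unify("gpt-4o@openai")
# try the same prompt on three randomly chosen OpenAI endpoints
for endpoint in random.sample(unify.list_endpoints("openai"), 3):
    client.set_endpoint(endpoint)
    print(endpoint, "->", client.generate("hello world!"))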

Changing Models, Providers and Endpoints

If you want to change the endpoint, model or provider, you can do so using the .set_endpoint, .set_model and .set_provider methods, respectively.

client.set_endpoint("mistral-7b-instruct-v0.3@deepinfra")
client.set_model("mistral-7b-instruct-v0.3")
client.set_provider("deepinfra")

Custom Prompting

You can influence the model's persona using the system_message argument in the .generate function:

response = client.generate(
    user_message="Hello Llama! Who was Isaac Newton?",
    system_message="You should always talk in rhymes",
)

If you'd like to send multiple messages using the .generate function, you should use the messages argument as follows:

messages = [
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"},
]
res = client.generate(messages=messages)
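
To keep the conversation going, you can append the reply and the next user message before calling .generate again. A minimal sketch, assuming .generate returns the assistant's reply as a string (as the Quickstart suggests):

# hypothetical follow-up turn, reusing the messages list from above
messages.append({"role": "assistant", "content": res})
messages.append({"role": "user", "content": "Who was the MVP?"})
res = client.generate(messages=messages)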

Asynchronous Usage

For optimal performance in handling multiple user requests simultaneously, such as in a chatbot application, processing them asynchronously is recommended. To use the AsyncUnify client, simply import AsyncUnify instead of Unify and use await with the .generate function.

import unify
import asyncio
async_client = unify.AsyncUnify("llama-3-8b-chat@fireworks-ai")
asyncio.run(async_client.generate("Hello Llama! Who was Isaac Newton?"))
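
Since the main benefit of the async client is concurrency, here is a sketch of answering several prompts at once with asyncio.gather, assuming .generate behaves as above:

import unify
import asyncio

async_client = unify.AsyncUnify("llama-3-8b-chat@fireworks-ai")

async def answer_all(questions):
    # issue all requests concurrently and collect the replies in order
    return await asyncio.gather(*(async_client.generate(q) for q in questions))

responses = asyncio.run(answer_all([
    "Who was Isaac Newton?",
    "Who was Ada Lovelace?",
]))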

Functionality-wise, the async and sync clients are identical.

Streaming Responses

You can enable streaming responses by setting stream=True in the .generate function.

import unify
client = unify.Unify("llama-3-8b-chat@fireworks-ai")
stream = client.generate("Hello Llama! Who was Isaac Newton?", stream=True)
for chunk in stream:
    print(chunk, end="")

It works in exactly the same way with Async clients.

import unify
import asyncio
async_client = unify.AsyncUnify("llama-3-8b-chat@fireworks-ai")

async def stream():
    async_stream = await async_client.generate("Hello Llama! Who was Isaac Newton?", stream=True)
    async for chunk in async_stream:
        print(chunk, end="")

asyncio.run(stream())

Dive Deeper

To learn more about our advanced API features, benchmarking, and LLM routing, check out our comprehensive docs!

Download files

Download the file for your platform.

Source Distribution

unifyai-0.9.4.tar.gz (33.5 kB)


Built Distribution

unifyai-0.9.4-py3-none-any.whl (43.7 kB)


File details

Details for the file unifyai-0.9.4.tar.gz.

File metadata

  • Download URL: unifyai-0.9.4.tar.gz
  • Size: 33.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.5 Linux/6.5.0-1025-azure

File hashes

Hashes for unifyai-0.9.4.tar.gz

  • SHA256: 3b12300dff14cef17234a332d6e48c7086c982de4242f075e1a22b42d7e672f5
  • MD5: 3b51fa628abc2ea27f908efe08f514da
  • BLAKE2b-256: d4dd4ba00f21219d7c7dbe2632b53c50b2765a3068572d0c658389fd55fa0233


File details

Details for the file unifyai-0.9.4-py3-none-any.whl.

File metadata

  • Download URL: unifyai-0.9.4-py3-none-any.whl
  • Size: 43.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.5 Linux/6.5.0-1025-azure

File hashes

Hashes for unifyai-0.9.4-py3-none-any.whl

  • SHA256: 91bd64cd9980f092c6a76cfa7844968fee765a2aea904683cc89ee7e9d6d9c5d
  • MD5: 21dd2c3025ba3c00da7a4d75cba43ef5
  • BLAKE2b-256: 5f6ca69f170316755c12b135f008ace2b464b6259debc70d70995f82187cf7cd
