A Python package for interacting with the Unify API

Project description

Unify

We're on a mission to simplify the LLM landscape. Unify lets you:

  • 🔑 Use any LLM from any Provider: With a single interface, you can use all LLMs from all providers by simply changing one string. No need to manage several API keys or handle different input-output formats. Unify handles all of that for you!

  • 📊 Improve LLM Performance: Add your own custom tests and evals, and benchmark your own prompts across all models and providers. Compare quality, cost and speed, and iterate on your system prompt until all test cases pass and you can deploy your app!

  • 🔀 Route to the Best LLM: Improve quality, cost and speed by routing to the perfect model and provider for each individual prompt.

Quickstart

Simply install the package:

pip install unifyai

Then sign up to get your API key, and you're ready to go! 🚀

import unify

client = unify.Unify("gpt-4o@openai", api_key="<your_key>")
response = client.generate("hello world!")
print(response)

[!NOTE] We recommend using python-dotenv to add UNIFY_KEY="My API Key" to your .env file, avoiding the need to use the api_key argument as above. For the rest of the README, we will assume you set your key as an environment variable.
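
For example, here is a minimal sketch of that setup, assuming a .env file in your working directory that contains UNIFY_KEY="My API Key":

from dotenv import load_dotenv  # pip install python-dotenv
import unify

load_dotenv()  # reads .env and exports UNIFY_KEY into the environment

# with UNIFY_KEY set, the api_key argument can be omitted
client = unify.Unify("gpt-4o@openai")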

Listing Models, Providers and Endpoints

You can list all models, providers and endpoints (<model>@<provider> pairs) as follows:

models = unify.utils.list_models()
providers = unify.utils.list_providers()
endpoints = unify.utils.list_endpoints()

You can also filter within these functions as follows:

import random
anthropic_models = unify.utils.list_models("anthropic")
client.set_endpoint(random.choice(anthropic_models) + "@anthropic")

latest_llama3p1_providers = unify.utils.list_providers("llama-3.1-405b-chat")
client.set_endpoint("llama-3.1-405b-chat@" + random.choice(latest_llama3p1_providers))

openai_endpoints = unify.utils.list_endpoints("openai")
client.set_endpoint(random.choice(openai_endpoints))

mixtral8x7b_endpoints = unify.utils.list_endpoints("mixtral-8x7b-instruct-v0.1")
client.set_endpoint(random.choice(mixtral8x7b_endpoints))

Changing Models, Providers and Endpoints

If you want to change the endpoint, model, or provider, you can do so using the .set_endpoint, .set_model, and .set_provider methods respectively.

client.set_endpoint("mistral-7b-instruct-v0.3@deepinfra")
client.set_model("mistral-7b-instruct-v0.3")
client.set_provider("deepinfra")

Custom Prompting

You can influence the model's persona using the system_prompt argument in the .generate function:

response = client.generate(
    user_prompt="Hello Llama! Who was Isaac Newton?",
    system_prompt="You should always talk in rhymes",
)

If you'd like to send multiple messages using the .generate function, you should use the messages argument as follows:

messages = [
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"},
]
res = client.generate(messages=messages)
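
Re-sending the growing messages list on each call is also how you carry a conversation across turns. Below is a minimal sketch of such a loop, assuming (as in the examples above) that .generate returns the assistant's reply as a string:

messages = []
for user_input in ["Who won the world series in 2020?", "Where was it played?"]:
    # append the user turn, send the full history, then append the reply
    messages.append({"role": "user", "content": user_input})
    reply = client.generate(messages=messages)
    messages.append({"role": "assistant", "content": reply})
    print(reply)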

Asynchronous Usage

For optimal performance in handling multiple user requests simultaneously, such as in a chatbot application, processing them asynchronously is recommended. To use the AsyncUnify client, simply import AsyncUnify instead of Unify and use await with the .generate function.

import unify
import asyncio

async_client = unify.AsyncUnify("llama-3-8b-chat@anyscale")

async def main():
    response = await async_client.generate("Hello Llama! Who was Isaac Newton?")
    print(response)

asyncio.run(main())

Functionality-wise, the Async and Sync clients are identical.
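
Since requests can now run concurrently, several prompts can be dispatched at once. Here's a minimal sketch using asyncio.gather (the prompts are just illustrative):

import unify
import asyncio

async_client = unify.AsyncUnify("llama-3-8b-chat@anyscale")

async def main():
    prompts = ["Who was Isaac Newton?", "Who was Ada Lovelace?"]
    # fire all requests concurrently and collect the responses in order
    responses = await asyncio.gather(*(async_client.generate(p) for p in prompts))
    for response in responses:
        print(response)

asyncio.run(main())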

Streaming Responses

You can enable streaming responses by setting stream=True in the .generate function.

import unify
client = unify.Unify("llama-3-8b-chat@anyscale")
stream = client.generate("Hello Llama! Who was Isaac Newton?", stream=True)
for chunk in stream:
    print(chunk, end="")

It works in exactly the same way with Async clients.

import unify
import asyncio
async_client = unify.AsyncUnify("llama-3-8b-chat@anyscale")

async def main():
    async_stream = await async_client.generate("Hello Llama! Who was Isaac Newton?", stream=True)
    async for chunk in async_stream:
        print(chunk, end="")

asyncio.run(main())

Dive Deeper

To learn more about advanced API features, benchmarking, and LLM routing, check out our comprehensive docs!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

unifyai-0.9.0.tar.gz (14.7 kB)

Built Distribution

unifyai-0.9.0-py3-none-any.whl (15.1 kB)

File details

Details for the file unifyai-0.9.0.tar.gz.

File metadata

  • Download URL: unifyai-0.9.0.tar.gz
  • Upload date:
  • Size: 14.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.4 Linux/6.5.0-1025-azure

File hashes

Hashes for unifyai-0.9.0.tar.gz

  • SHA256: 1fd1b9798ef1a110fd883f457cbf9309a7cc4ad316dac39bd0dce19b6a5a356c
  • MD5: 8282bda4ffb35ef7da44d42f9fd5f547
  • BLAKE2b-256: 59c5033d0dfac39b4e6bbb2f43a40c952a45def00a9be65e7229fdb4dffa4df8
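
These digests can be used to verify a download, for example with pip's hash-checking mode. A minimal sketch of a requirements.txt line pinned to this file's SHA256:

unifyai==0.9.0 --hash=sha256:1fd1b9798ef1a110fd883f457cbf9309a7cc4ad316dac39bd0dce19b6a5a356c

Installing with pip install --require-hashes -r requirements.txt will then fail if the downloaded file's hash does not match; note that hash-checking mode requires hashes for every dependency as well.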

File details

Details for the file unifyai-0.9.0-py3-none-any.whl.

File metadata

  • Download URL: unifyai-0.9.0-py3-none-any.whl
  • Upload date:
  • Size: 15.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.4 Linux/6.5.0-1025-azure

File hashes

Hashes for unifyai-0.9.0-py3-none-any.whl

  • SHA256: 1849d409c026eaa521f8b45b2623ac66332feefd4e4e0939b0b7be31623769da
  • MD5: 9c77721be566abd9d328b4462a07a14a
  • BLAKE2b-256: 9677e24efb30c4c6e6ebd72d664eccd676f7b261c2a13f677b1e817ce6e3c219
