
A Python package for interacting with the Unify API

Project description

Unify

We're on a mission to simplify the LLM landscape. Unify lets you:

  • 🔑 Use any LLM from any Provider: With a single interface, you can use all LLMs from all providers by simply changing one string. No need to manage several API keys or handle different input-output formats. Unify handles all of that for you!

  • 📊 Improve LLM Performance: Add your own custom tests and evals, and benchmark your own prompts across all models and providers. Compare quality, cost, and speed, then iterate on your system prompt until all test cases pass and you can deploy your app!

  • 🔀 Route to the Best LLM: Improve quality, cost and speed by routing to the perfect model and provider for each individual prompt.

Quickstart

Simply install the package:

pip install unifyai

Then sign up to get your API key, and you're ready to go! 🚀

import unify
client = unify.Unify("gpt-4o@openai", api_key="<your_key>")
client.generate("hello world!")

[!NOTE] We recommend using python-dotenv to add UNIFY_KEY="My API Key" to your .env file, avoiding the need to use the api_key argument as above. For the rest of the README, we will assume you set your key as an environment variable.
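With python-dotenv installed (pip install python-dotenv), loading the key is a one-liner. A minimal sketch, assuming a .env file in the working directory:

import unify
from dotenv import load_dotenv

load_dotenv()  # reads UNIFY_KEY="My API Key" from .env into the environment
client = unify.Unify("gpt-4o@openai")  # no api_key argument needed now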

Listing Models, Providers and Endpoints

You can list all models, providers and endpoints (<model>@<provider> pairs) as follows:

models = unify.list_models()
providers = unify.list_providers()
endpoints = unify.list_endpoints()
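Each of these returns a list of strings (endpoints in <model>@<provider> form), so you can inspect the results directly. For example:

# Endpoints are "<model>@<provider>" strings, e.g. "gpt-4o@openai"
print(len(models), "models,", len(providers), "providers,", len(endpoints), "endpoints")
print(endpoints[:3])  # peek at the first few endpoints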

You can also filter within these functions as follows:

import random
anthropic_models = unify.list_models("anthropic")
client.set_endpoint(random.choice(anthropic_models) + "@anthropic")

latest_llama3p1_providers = unify.list_providers("llama-3.1-405b-chat")
client.set_endpoint("llama-3.1-405b-chat@" + random.choice(latest_llama3p1_providers))

openai_endpoints = unify.list_endpoints("openai")
client.set_endpoint(random.choice(openai_endpoints))

mixtral8x7b_endpoints = unify.list_endpoints("mixtral-8x7b-instruct-v0.1")
client.set_endpoint(random.choice(mixtral8x7b_endpoints))

Changing Models, Providers and Endpoints

If you want to change the endpoint, model, or provider, you can do so using the .set_endpoint, .set_model, and .set_provider methods respectively.

client.set_endpoint("mistral-7b-instruct-v0.3@deepinfra")
client.set_model("mistral-7b-instruct-v0.3")
client.set_provider("deepinfra")

Custom Prompting

You can influence the model's persona using the system_message argument in the .generate function:

response = client.generate(
    user_message="Hello Llama! Who was Isaac Newton?",
    system_message="You should always talk in rhymes",
)

If you'd like to send multiple messages using the .generate function, you should use the messages argument as follows:

messages = [
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"},
]
res = client.generate(messages=messages)
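Since .generate returns the assistant's reply, you can grow a conversation turn by turn by appending each reply to messages before the next call. A minimal sketch, assuming the reply comes back as a plain string as in the examples above:

messages = [{"role": "user", "content": "Who won the world series in 2020?"}]
reply = client.generate(messages=messages)

# Append the assistant's reply plus the next user turn, then generate again
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Where was it played?"})
followup = client.generate(messages=messages)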

Asynchronous Usage

For optimal performance in handling multiple user requests simultaneously, such as in a chatbot application, processing them asynchronously is recommended. To use the AsyncUnify client, simply import AsyncUnify instead of Unify and use await with the .generate function.

import unify
import asyncio
async_client = unify.AsyncUnify("llama-3-8b-chat@anyscale")

async def main():
    response = await async_client.generate("Hello Llama! Who was Isaac Newton?")
    print(response)

asyncio.run(main())

Functionality-wise, the async and sync clients are identical.
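Where the async client really pays off is in issuing many requests concurrently. A minimal sketch using asyncio.gather (the extra prompts are purely illustrative):

import unify
import asyncio

async_client = unify.AsyncUnify("llama-3-8b-chat@anyscale")

async def main():
    prompts = ["Who was Isaac Newton?", "Who was Ada Lovelace?", "Who was Alan Turing?"]
    # Launch all generations at once rather than awaiting each in turn
    responses = await asyncio.gather(*(async_client.generate(p) for p in prompts))
    for prompt, response in zip(prompts, responses):
        print(f"{prompt}\n{response}\n")

asyncio.run(main())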

Streaming Responses

You can enable streaming responses by setting stream=True in the .generate function.

import unify
client = unify.Unify("llama-3-8b-chat@anyscale")
stream = client.generate("Hello Llama! Who was Isaac Newton?", stream=True)
for chunk in stream:
    print(chunk, end="")
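If you also want the complete text once the stream finishes, one option is to accumulate the chunks as they arrive; since each chunk is a string, joining them reconstructs the full response:

chunks = []
for chunk in client.generate("Hello Llama! Who was Isaac Newton?", stream=True):
    print(chunk, end="")   # display incrementally
    chunks.append(chunk)   # keep for later
full_response = "".join(chunks)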

It works in exactly the same way with Async clients.

import unify
import asyncio
async_client = unify.AsyncUnify("llama-3-8b-chat@anyscale")

async def main():
    async_stream = await async_client.generate("Hello Llama! Who was Isaac Newton?", stream=True)
    async for chunk in async_stream:
        print(chunk, end="")

asyncio.run(main())

Dive Deeper

To learn more about our more advanced API features, benchmarking, and LLM routing, go check out our comprehensive docs!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

unifyai-0.9.3.tar.gz (32.3 kB)

Uploaded Source

Built Distribution

unifyai-0.9.3-py3-none-any.whl (42.0 kB)

Uploaded Python 3

File details

Details for the file unifyai-0.9.3.tar.gz.

File metadata

  • Download URL: unifyai-0.9.3.tar.gz
  • Upload date:
  • Size: 32.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.5 Linux/6.5.0-1025-azure

File hashes

Hashes for unifyai-0.9.3.tar.gz
  • SHA256: ff9ed52a31f7d24994551f446e67883c0e7747be3ce9f161eb1907c8915f9ad5
  • MD5: 1d9c46fba8459482cc13b5464111015c
  • BLAKE2b-256: cfd26a365d9ca7232e53a9998611cec8ff72a8340cf2f0da64f7599f379450b2

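To check a downloaded file against the digests above, here is a minimal verification with Python's standard hashlib (filename and expected digest taken from this listing):

import hashlib

expected = "ff9ed52a31f7d24994551f446e67883c0e7747be3ce9f161eb1907c8915f9ad5"  # SHA256 from above
with open("unifyai-0.9.3.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
assert actual == expected, "SHA256 mismatch: file may be corrupted or tampered with"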

File details

Details for the file unifyai-0.9.3-py3-none-any.whl.

File metadata

  • Download URL: unifyai-0.9.3-py3-none-any.whl
  • Upload date:
  • Size: 42.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.5 Linux/6.5.0-1025-azure

File hashes

Hashes for unifyai-0.9.3-py3-none-any.whl
  • SHA256: e709ea9e23a7f84d533c627c0692b9e0df0cea8cf5768e8322a8fa1d9462dc22
  • MD5: 663318d18cddb8887baa6877c100552d
  • BLAKE2b-256: 96213e83d3c500fb8cf4f39d6c1ba1de06e0c8b1ffcc570de3e4e6b12dbfe7f5

