
A Python package for interacting with the Unify API

Project description

Unify Python API Library

The Unify Python package provides access to the Unify REST API, allowing you to query Large Language Models (LLMs) from any Python 3.7.1+ application. It includes synchronous and asynchronous clients with support for streaming responses.

Just like the REST API, you can:

  • 🔑 Use any endpoint with one key: Access all LLMs at any provider with just one Unify API Key.

  • 🚀 Route to the best endpoint: Each prompt is sent to the endpoint that will yield the best performance for your target metric, such as high throughput, low cost, or low latency. See the routing section to learn more about this!

Installation

You can use pip to install the package as follows:

pip install unifyai

Basic Usage

import os
from unify import Unify
unify = Unify(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@anyscale"
)
response = unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?")

Here, response is a string containing the model's output.

You can also pass the model and provider as separate arguments as shown below:

unify = Unify(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    model="llama-2-13b-chat",
    provider="anyscale"
)

You can influence the model's persona using the system_prompt argument in the .generate function:

response = unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?", system_prompt="You should always talk in rhymes")

If you want to change the endpoint, model, or provider, you can do so using the .set_endpoint, .set_model, and .set_provider methods, respectively.

unify.set_endpoint("mistral-7b-instruct-v0.1@deepinfra")
unify.set_model("mistral-7b-instruct-v0.1")
unify.set_provider("deepinfra")

Supported Models

The list of supported models and providers is available on the platform.

You can also get this information directly in Python using list_models(), list_providers() and list_endpoints().

models = unify.list_models()
providers = unify.list_providers("mistral-7b-instruct-v0.1")
endpoints = unify.list_endpoints("mistral-7b-instruct-v0.1")

API Key

You can get an API Key from the Unify console.

[!NOTE] You can provide an api_key keyword argument, but we recommend using python-dotenv to add UNIFY_KEY="My API Key" to your .env file so that your API Key is not stored in source control.
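
For example, a minimal sketch with python-dotenv, assuming UNIFY_KEY is defined in a local .env file:

import os

from dotenv import load_dotenv
from unify import Unify

load_dotenv()  # loads the variables from .env into os.environ
unify = Unify(
    # Picked up from the .env file loaded above.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@anyscale"
)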

Sending multiple messages

If you'd like to send multiple messages using the .generate function, you should use the messages argument as follows:

messages = [
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
]
res = unify.generate(messages=messages)
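
Building on this, a minimal sketch of a multi-turn exchange could keep the history itself, appending each reply before the next call. It reuses the unify client from above and assumes .generate returns the assistant's reply as a plain string, as described earlier:

messages = [{"role": "user", "content": "Who won the world series in 2020?"}]
reply = unify.generate(messages=messages)

# Keep the conversation going by appending the reply and the next question.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Where was it played?"})
follow_up = unify.generate(messages=messages)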

Asynchronous Usage

For optimal performance in handling multiple user requests simultaneously, such as in a chatbot application, processing them asynchronously is recommended. To use the AsyncUnify client, simply import AsyncUnify instead of Unify and use await with the .generate function.

from unify import AsyncUnify
import os
import asyncio

async_unify = AsyncUnify(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@anyscale"
)

async def main():
    response = await async_unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?")
    print(response)

asyncio.run(main())

Functionality-wise, the async and sync clients are identical.
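
Because the client is asynchronous, several prompts can also be dispatched concurrently, for example with asyncio.gather. A minimal sketch, assuming UNIFY_KEY is set in the environment:

import asyncio
from unify import AsyncUnify

async_unify = AsyncUnify(endpoint="llama-2-13b-chat@anyscale")

async def ask_all(prompts):
    # Fire off all requests at once and wait for every response.
    return await asyncio.gather(*(async_unify.generate(user_prompt=p) for p in prompts))

responses = asyncio.run(ask_all([
    "Who was Isaac Newton?",
    "Who was Marie Curie?",
]))
print(responses)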

Streaming Responses

You can enable streaming responses by setting stream=True in the .generate function.

import os
from unify import Unify
unify = Unify(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@anyscale"
)
stream = unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?", stream=True)
for chunk in stream:
    print(chunk, end="")
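
If you also want the full text once streaming finishes, you can accumulate the chunks as they arrive. A small sketch reusing the unify client above, assuming each chunk is a string as the loop above implies:

chunks = []
for chunk in unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?", stream=True):
    print(chunk, end="")  # show the text as it arrives
    chunks.append(chunk)
full_response = "".join(chunks)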

It works in exactly the same way with Async clients.

from unify import AsyncUnify
import os
import asyncio

async_unify = AsyncUnify(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@anyscale"
)

async def main():
    async_stream = await async_unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?", stream=True)
    async for chunk in async_stream:
        print(chunk, end="")

asyncio.run(main())

Get Current Credit Balance

You can use the .get_credit_balance method to get the credit balance for the authenticated account as follows:

credits = unify.get_credit_balance()

Dynamic Routing

As evidenced by our benchmarks, the optimal provider for each model varies by geographic location and time of day due to fluctuating API performance. With our dynamic routing, we automatically direct your requests to the "top-performing provider" at that moment. To enable this feature, simply replace your query's provider with one of the available routing modes. As an example, you can query the llama-2-13b-chat endpoint to get the provider with the lowest input cost as follows:

import os
from unify import Unify
unify = Unify(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@lowest-input-cost"
)
response = unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?")

You can see the provider chosen by printing the .provider attribute of the client:

print(unify.provider)

Dynamic routing works with both Synchronous and Asynchronous clients. For more information on Dynamic Routing, check our documentation.
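
For instance, a minimal sketch of the same routing mode with the asynchronous client, assuming UNIFY_KEY is set in the environment:

import asyncio
from unify import AsyncUnify

# The routing mode replaces the provider, exactly as in the synchronous example above.
async_unify = AsyncUnify(endpoint="llama-2-13b-chat@lowest-input-cost")

async def main():
    response = await async_unify.generate(user_prompt="Hello Llama! Who was Isaac Newton?")
    print(response)

asyncio.run(main())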

ChatBot Agent

Our ChatBot allows you to start an interactive chat session with any of our supported LLM endpoints with only a few lines of code:

import os
from unify import ChatBot

agent = ChatBot(
    # This is the default and optional to include.
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="llama-2-13b-chat@lowest-input-cost"
)
agent.run()

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

unifyai-0.8.2.tar.gz (14.3 kB)

Uploaded Source

Built Distribution

unifyai-0.8.2-py3-none-any.whl (14.5 kB)

Uploaded Python 3

File details

Details for the file unifyai-0.8.2.tar.gz.

File metadata

  • Download URL: unifyai-0.8.2.tar.gz
  • Upload date:
  • Size: 14.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.2 CPython/3.10.13 Linux/6.2.0-1019-azure

File hashes

Hashes for unifyai-0.8.2.tar.gz
  • SHA256: 48fb06dbc40c00388dc213b45df6686ebfe0adaf46187d74923b5f0628a7ae92
  • MD5: 23306652e73ef8fe3a6f6192612ae2ac
  • BLAKE2b-256: 8484bd2cebe8910153dc44f162fee1e10201ca51c0675093eaadd56dbf9127a8

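If you want to check a downloaded file against these values, a minimal sketch with Python's hashlib (the expected digest below is the SHA256 listed above):

import hashlib

# Compare the SHA256 of the downloaded sdist against the published digest.
expected = "48fb06dbc40c00388dc213b45df6686ebfe0adaf46187d74923b5f0628a7ae92"
with open("unifyai-0.8.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "hash mismatch")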

File details

Details for the file unifyai-0.8.2-py3-none-any.whl.

File metadata

  • Download URL: unifyai-0.8.2-py3-none-any.whl
  • Upload date:
  • Size: 14.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.2 CPython/3.10.13 Linux/6.2.0-1019-azure

File hashes

Hashes for unifyai-0.8.2-py3-none-any.whl
  • SHA256: 42814a3d3bd2e86aaa2d9a1c34db43f153d8bdfbbcd26a4e88a9b7fbeb0326c4
  • MD5: 922f392b1760fc44bd9467be56234f61
  • BLAKE2b-256: 079c29667ad52c06698a7be04955d4e6abc42680502278dea52f02083db81a78

