Python Client SDK Generated by Speakeasy.
Friendli Python SDK
Supercharge Generative AI Serving with Friendli 🚀
Token Setup
When using the Friendli Python SDK, you need to provide a Friendli Token for authentication and authorization. A Friendli Token serves as an alternative to signing in with an email and a password. You can generate a new Friendli Token through the Friendli Suite, on your "User settings" page, by following the steps below.
- Go to the Friendli Suite and sign in with your account.
- Click the profile icon at the top-right corner of the page.
- Click "User settings" menu.
- Go to the "Tokens" tab on the navigation bar.
- Create a new Friendli Token by clicking the "Create token" button.
- Copy the token and save it in a safe place. You will not be able to see this token again once the page is refreshed.
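The examples throughout this document read the token from an environment variable. A minimal sketch (the FRIENDLI_TOKEN variable name is just the convention used in this README, not something the SDK requires):

from friendli import Friendli
import os

# Export the token first, e.g. `export FRIENDLI_TOKEN=<your token>`,
# then let the client read it from the environment.
s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)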
Table of Contents
- SDK Installation
- SDK Example Usage
- Available Resources and Operations
- Server-sent event streaming
- Retries
- Error Handling
- Server Selection
- Custom HTTP Client
- Debugging
- IDE Support
SDK Installation
The SDK can be installed with either pip or poetry package managers.
PIP
PIP is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
pip install friendli
Poetry
Poetry is a modern tool that simplifies dependency management and package publishing by using a single pyproject.toml
file to handle project metadata and dependencies.
poetry add friendli
SDK Example Usage
Chat completions
Given a list of messages forming a conversation, the model generates a response.
# Synchronous Example
from friendli import Friendli
import os

s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = s.serverless.chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": "Hello!",
    },
], max_tokens=200)

if res is not None:
    # handle response
    pass
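Assuming the response object follows the OpenAI-compatible chat completion schema (a choices list of generated messages; this README does not spell out the field names), the generated text can be read like this:

if res is not None:
    # Assumption: OpenAI-compatible response layout.
    print(res.choices[0].message.content)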
The same SDK client can also be used to make asynchronous requests by importing asyncio.
# Asynchronous Example
import asyncio
from friendli import Friendli
import os

async def main():
    s = Friendli(
        token=os.getenv("FRIENDLI_TOKEN", ""),
    )

    res = await s.serverless.chat.complete_async(model="meta-llama-3.1-8b-instruct", messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Hello!",
        },
    ], max_tokens=200)

    if res is not None:
        # handle response
        pass

asyncio.run(main())
Tool assisted chat completions
Given a list of messages forming a conversation, the model generates a response. Additionally, the model can use built-in tools for tool calls, enabling more comprehensive and actionable responses.
# Synchronous Example
from friendli import Friendli
import os

s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = s.serverless.tool_assisted_chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": "What is 3 + 6?",
    },
], max_tokens=200, tools=[
    {
        "type": "math:calculator",
    },
])

if res is not None:
    # handle response
    pass
The same SDK client can also be used to make asynchronous requests by importing asyncio.
# Asynchronous Example
import asyncio
from friendli import Friendli
import os

async def main():
    s = Friendli(
        token=os.getenv("FRIENDLI_TOKEN", ""),
    )

    res = await s.serverless.tool_assisted_chat.complete_async(model="meta-llama-3.1-8b-instruct", messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is 3 + 6?",
        },
    ], max_tokens=200, tools=[
        {
            "type": "math:calculator",
        },
    ])

    if res is not None:
        # handle response
        pass

asyncio.run(main())
Available Resources and Operations
Available methods

dedicated

- dedicated.chat
- dedicated.completions
- dedicated.token
  - tokenization - Tokenization
  - detokenization - Detokenization

serverless

- serverless.chat
- serverless.completions
- serverless.token
  - tokenization - Tokenization
  - detokenization - Detokenization
- serverless.tool_assisted_chat
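The token resources above expose tokenization and detokenization operations. A minimal sketch of calling one of them, assuming the operation accepts the model name and a prompt (the parameter names here are guesses, not confirmed by this page; check the SDK docs for the real signatures):

from friendli import Friendli
import os

s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

# Hypothetical call shape for the Tokenization operation listed above.
res = s.serverless.token.tokenization(model="meta-llama-3.1-8b-instruct", prompt="Hello!")
if res is not None:
    # handle response (e.g. the token IDs for the prompt)
    pass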
Server-sent event streaming
Server-sent events are used to stream content from certain operations. These operations expose the stream as a Generator that can be consumed with a simple for loop. The loop terminates when the server has no more events to send and closes the underlying connection.
from friendli import Friendli
import os

s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = s.serverless.chat.stream(model="meta-llama-3.1-8b-instruct", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": "Hello!",
    },
], max_tokens=200)

if res is not None:
    for event in res:
        # handle event
        print(event, flush=True)
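Friendli's chat endpoints are OpenAI-compatible, so each event should be one completion chunk. A minimal sketch of accumulating the streamed text, assuming the chunks carry a choices[0].delta.content field as in that schema (the field names are an assumption, not confirmed by this page):

full_text = ""
for event in res:
    # Assumption: OpenAI-compatible chunk layout. delta.content can be
    # None on role-only or finish chunks, so guard before appending.
    delta = event.choices[0].delta.content
    if delta:
        full_text += delta
        print(delta, end="", flush=True)
print()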
Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a RetryConfig object to the call:
from friendli import Friendli
from friendli.utils import BackoffStrategy, RetryConfig
import os

s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = s.serverless.chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": "Hello!",
    },
], max_tokens=200,
    retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))

if res is not None:
    # handle response
    pass
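The positional BackoffStrategy arguments are compact but opaque. As an assumption based on Speakeasy conventions (not confirmed by this page), they map to the initial interval, maximum interval, exponent, and maximum elapsed time; written with keywords for readability:

from friendli.utils import BackoffStrategy, RetryConfig

# Hypothetical keyword form of the strategy above; the parameter names
# are an assumption, not confirmed by this README.
strategy = BackoffStrategy(
    initial_interval=1,    # delay before the first retry
    max_interval=50,       # cap on the delay between any two attempts
    exponent=1.1,          # growth factor applied to the delay per attempt
    max_elapsed_time=100,  # stop retrying once this much total time has passed
)
retry_config = RetryConfig("backoff", strategy, False)  # False: do not retry connection errors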
If you'd like to override the default retry strategy for all operations that support retries, you can use the retry_config optional parameter when initializing the SDK:
from friendli import Friendli
from friendli.utils import BackoffStrategy, RetryConfig
import os

s = Friendli(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = s.serverless.chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": "Hello!",
    },
], max_tokens=200)

if res is not None:
    # handle response
    pass
Error Handling
Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an exception.
By default, an API error will raise a models.SDKError exception, which has the following properties:
Property | Type | Description
---|---|---
.status_code | int | The HTTP status code
.message | str | The error message
.raw_response | httpx.Response | The raw HTTP response
.body | str | The response content
When custom error responses are specified for an operation, the SDK may also raise their associated exceptions. You can refer to the respective Errors tables in the SDK docs for more details on the possible exception types for each operation. For example, the complete_async method may raise the following exceptions:
Error Type | Status Code | Content Type
---|---|---
models.SDKError | 4XX, 5XX | */*
Example
from friendli import Friendli, models
import os

s = Friendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = None
try:
    res = s.serverless.chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Hello!",
        },
    ], max_tokens=200)

    if res is not None:
        # handle response
        pass
except models.SDKError as e:
    # handle exception
    raise e
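Since the exception exposes the properties listed above, you can branch on them. Reusing s and models from the example above (the 429 handling here is illustrative, not prescribed by the SDK):

try:
    res = s.serverless.chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
        {
            "role": "user",
            "content": "Hello!",
        },
    ], max_tokens=200)
except models.SDKError as e:
    if e.status_code == 429:
        # rate limited: back off before retrying
        print("Rate limited:", e.message)
    else:
        print("Request failed:", e.status_code, e.body)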
Server Selection
Override Server URL Per-Client
The default server can also be overridden globally by passing a URL to the server_url: str optional parameter when initializing the SDK client instance. For example:
from friendli import Friendli
import os

s = Friendli(
    server_url="https://api.friendli.ai",
    token=os.getenv("FRIENDLI_TOKEN", ""),
)

res = s.serverless.chat.complete(model="meta-llama-3.1-8b-instruct", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": "Hello!",
    },
], max_tokens=200)

if res is not None:
    # handle response
    pass
Custom HTTP Client
The Python SDK makes API calls using the httpx HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of HttpClient or AsyncHttpClient respectively. These are Protocols that ensure the client has the necessary methods to make API calls. This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of httpx.Client or httpx.AsyncClient directly.
For example, you could specify a header for every request that this SDK makes as follows:
from friendli import Friendli
import httpx
http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = Friendli(client=http_client)
or you could wrap the client with your own custom logic:
from friendli import Friendli
from friendli.httpclient import AsyncHttpClient
from typing import Any, Optional, Union
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        # Custom logic: attach a header to every outgoing request.
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = Friendli(async_client=CustomClient(httpx.AsyncClient()))
Debugging
You can set up your SDK to emit debug logs for SDK requests and responses.
You can pass your own logger class directly into your SDK.
from friendli import Friendli
import logging
logging.basicConfig(level=logging.DEBUG)
s = Friendli(debug_logger=logging.getLogger("friendli"))
You can also enable a default debug logger by setting the environment variable FRIENDLI_DEBUG to true.
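A minimal sketch, assuming the variable only needs to be present in the process environment before the client is created (setting it in the shell before launching works just as well):

import os
os.environ["FRIENDLI_DEBUG"] = "true"  # equivalent to `export FRIENDLI_DEBUG=true`

from friendli import Friendli

s = Friendli(token=os.getenv("FRIENDLI_TOKEN", ""))  # requests and responses are now logged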
IDE Support
PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.