Python Client SDK for the Livepeer AI API.

Livepeer AI Python Library

Welcome to the Livepeer AI Python library! This library offers seamless integration with the Livepeer AI API, enabling you to easily incorporate powerful AI capabilities into your Python applications.

SDK Installation

The SDK can be installed with either the pip or poetry package manager.

PIP

PIP is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.

pip install livepeer-ai

Poetry

Poetry is a modern tool that simplifies dependency management and package publishing by using a single pyproject.toml file to handle project metadata and dependencies.

poetry add livepeer-ai

IDE Support

PyCharm

Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.

SDK Example Usage

Example

# Synchronous Example
from livepeer_ai import LivepeerAI

s = LivepeerAI(
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.text_to_image(request={
    "prompt": "<value>",
})

if res.image_response is not None:
    # handle response
    pass

The same SDK client can also be used to make asynchronous requests by importing asyncio and calling the method's *_async variant.

# Asynchronous Example
import asyncio
from livepeer_ai import LivepeerAI

async def main():
    s = LivepeerAI(
        http_bearer="<YOUR_BEARER_TOKEN_HERE>",
    )
    res = await s.generate.text_to_image_async(request={
        "prompt": "<value>",
    })
    if res.image_response is not None:
        # handle response
        pass

asyncio.run(main())

Available Resources and Operations

Available methods

generate — the operations shown in this README include generate.text_to_image and generate.image_to_image; each method also has an *_async variant (for example, text_to_image_async).

File uploads

Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.

[!TIP]

For endpoints that handle file uploads, byte arrays can also be used. However, using streams is recommended for large files.

from livepeer_ai import LivepeerAI

s = LivepeerAI(
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.image_to_image(request={
    "prompt": "<value>",
    "image": {
        "file_name": "example.file",
        "content": open("example.file", "rb"),
    },
})

if res.image_response is not None:
    # handle response
    pass
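
As the tip above notes, a byte array can be passed instead of a stream for smaller files. A minimal sketch of that variant, reusing the same request shape as the example above:

from livepeer_ai import LivepeerAI

s = LivepeerAI(
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

# Read the whole file into memory; acceptable for small files,
# but prefer the stream form above for large ones.
with open("example.file", "rb") as f:
    file_bytes = f.read()

res = s.generate.image_to_image(request={
    "prompt": "<value>",
    "image": {
        "file_name": "example.file",
        "content": file_bytes,
    },
})

if res.image_response is not None:
    # handle response
    pass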

Retries

Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.

To change the default retry strategy for a single API call, simply provide a RetryConfig object to the call:

from livepeer_ai import LivepeerAI
from livepeer_ai.utils import BackoffStrategy, RetryConfig

s = LivepeerAI(
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.text_to_image(request={
    "prompt": "<value>",
},
    # "backoff" strategy; BackoffStrategy(initial_interval, max_interval,
    # exponent, max_elapsed_time), intervals in milliseconds; the final flag
    # controls whether connection errors are retried
    RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))

if res.image_response is not None:
    # handle response
    pass

If you'd like to override the default retry strategy for all operations that support retries, you can use the retry_config optional parameter when initializing the SDK:

from livepeer_ai import LivepeerAI
from livepeer_ai.utils import BackoffStrategy, RetryConfig

s = LivepeerAI(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.text_to_image(request={
    "prompt": "<value>",
})

if res.image_response is not None:
    # handle response
    pass

Error Handling

Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an error. Errors that are modeled in the underlying OpenAPI spec are raised as the corresponding typed error:

Error Object                Status Code    Content Type
models.HTTPError            400, 401, 500  application/json
models.HTTPValidationError  422            application/json
models.SDKError             4xx-5xx        */*

Example

from livepeer_ai import LivepeerAI, models

s = LivepeerAI(
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = None
try:
    res = s.generate.text_to_image(request={
        "prompt": "<value>",
    })

    if res.image_response is not None:
        # handle response
        pass

except models.HTTPError as e:
    # handle e.data: models.HTTPErrorData
    raise e
except models.HTTPValidationError as e:
    # handle e.data: models.HTTPValidationErrorData
    raise e
except models.SDKError as e:
    # handle exception
    raise e

Server Selection

Select Server by Index

You can override the default server globally by passing a server index to the server_idx: int optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. This table lists the indexes associated with the available servers:

#  Server                                     Variables
0  https://dream-gateway.livepeer.cloud       None
1  https://livepeer.studio/api/beta/generate  None

Example

from livepeer_ai import LivepeerAI

s = LivepeerAI(
    server_idx=1,
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.text_to_image(request={
    "prompt": "<value>",
})

if res.image_response is not None:
    # handle response
    pass

Override Server URL Per-Client

The default server can also be overridden globally by passing a URL to the server_url: str optional parameter when initializing the SDK client instance. For example:

from livepeer_ai import LivepeerAI

s = LivepeerAI(
    server_url="https://dream-gateway.livepeer.cloud",
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.text_to_image(request={
    "prompt": "<value>",
})

if res.image_response is not None:
    # handle response
    pass

Custom HTTP Client

The Python SDK makes API calls using the httpx HTTP library. To provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level settings, you can initialize the SDK client with your own HTTP client instance. Depending on whether you are using the sync or async version of the SDK, pass an instance of HttpClient or AsyncHttpClient respectively; these are Protocols that ensure the client has the methods needed to make API calls. This lets you wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can simply pass an instance of httpx.Client or httpx.AsyncClient directly.

For example, you could specify a header for every request that this SDK makes as follows:

from livepeer_ai import LivepeerAI
import httpx

http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = LivepeerAI(client=http_client)
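
The same approach covers the other low-level options mentioned above. For instance, a sketch that sets request timeouts on the underlying httpx client (the values here are illustrative, not SDK defaults):

from livepeer_ai import LivepeerAI
import httpx

# 10 s total per request, 5 s to establish a connection (illustrative values)
http_client = httpx.Client(timeout=httpx.Timeout(10.0, connect=5.0))
s = LivepeerAI(client=http_client)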

or you could wrap the client with your own custom logic:

from typing import Any, Optional, Union

from livepeer_ai import LivepeerAI
from livepeer_ai.httpclient import AsyncHttpClient
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = LivepeerAI(async_client=CustomClient(httpx.AsyncClient()))

Authentication

Per-Client Security Schemes

This SDK supports the following security scheme globally:

Name         Type  Scheme
http_bearer  http  HTTP Bearer

To authenticate with the API, the http_bearer parameter must be set when initializing the SDK client instance. For example:

from livepeer_ai import LivepeerAI

s = LivepeerAI(
    http_bearer="<YOUR_BEARER_TOKEN_HERE>",
)

res = s.generate.text_to_image(request={
    "prompt": "<value>",
})

if res.image_response is not None:
    # handle response
    pass

Debugging

You can set up the SDK to emit debug logs for its requests and responses by passing your own logger class directly into the SDK client:

from livepeer_ai import LivepeerAI
import logging

logging.basicConfig(level=logging.DEBUG)
s = LivepeerAI(debug_logger=logging.getLogger("livepeer_ai"))


Development

Maturity

This SDK is in alpha, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage to a specific package version. This way, you can install the same version each time without breaking changes unless you are intentionally looking for the latest version.
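
For example, you can pin an exact version at install time (0.4.1 shown here; substitute the release you have tested against):

pip install livepeer-ai==0.4.1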

Contributions

While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation. We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.

SDK Created by Speakeasy

