LumaAI Python API library

The LumaAI Python library provides convenient access to the LumaAI REST API from any Python 3.9+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

Documentation

The REST API documentation can be found on lumalabs.ai. The full API of this library can be found in api.md.

Installation

# install from PyPI
pip install lumaai

Usage

The full API of this library can be found in api.md.

import os
from lumaai import LumaAI

client = LumaAI(
    auth_token=os.environ.get("LUMAAI_API_KEY"),  # This is the default and can be omitted
)

generation = client.generations.create(
    model="ray-2",
    aspect_ratio="16:9",
    loop=False,
    prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
)
print(generation.id)

While you can provide an auth_token keyword argument, we recommend using python-dotenv to add LUMAAI_API_KEY="My Auth Token" to your .env file so that your auth token is not stored in source control.
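
For example, a minimal sketch using python-dotenv (the load_dotenv() call and the .env file location are assumptions about your project layout):

from dotenv import load_dotenv  # pip install python-dotenv

from lumaai import LumaAI

# Load LUMAAI_API_KEY from a local .env file into the process environment
# before the client is constructed.
load_dotenv()

client = LumaAI()  # reads LUMAAI_API_KEY from the environment by default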

Async usage

Simply import AsyncLumaAI instead of LumaAI and use await with each API call:

import os
import asyncio
from lumaai import AsyncLumaAI

client = AsyncLumaAI(
    auth_token=os.environ.get("LUMAAI_API_KEY"),  # This is the default and can be omitted
)


async def main() -> None:
    generation = await client.generations.create(
        model="ray-2",
        aspect_ratio="16:9",
        loop=False,
        prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
    )
    print(generation.id)


asyncio.run(main())

Functionality between the synchronous and asynchronous clients is otherwise identical.

With aiohttp

By default, the async client uses httpx for HTTP requests. However, for improved concurrency performance you may also use aiohttp as the HTTP backend.

You can enable this by installing aiohttp:

# install from PyPI
pip install lumaai[aiohttp]

Then you can enable it by instantiating the client with http_client=DefaultAioHttpClient():

import os
import asyncio
from lumaai import DefaultAioHttpClient
from lumaai import AsyncLumaAI


async def main() -> None:
    async with AsyncLumaAI(
        auth_token=os.environ.get("LUMAAI_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        generation = await client.generations.create(
            model="ray-2",
            aspect_ratio="16:9",
            loop=False,
            prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
        )
        print(generation.id)


asyncio.run(main())

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models which also provide helper methods for things like:

  • Serializing back into JSON, model.to_json()
  • Converting to a dictionary, model.to_dict()
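
For example, a minimal sketch of both helpers, reusing a generation like the one created above:

from lumaai import LumaAI

client = LumaAI()

generation = client.generations.create(
    model="ray-2",
    prompt="A teddy bear in sunglasses playing electric guitar",
)

# Serialize the Pydantic response model back into a JSON string...
print(generation.to_json())

# ...or convert it into a plain dictionary.
print(generation.to_dict())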

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.

Nested params

Nested parameters are dictionaries, typed using TypedDict, for example:

from lumaai import LumaAI

client = LumaAI()

generation = client.generations.create(
    model="ray-2",
    keyframes={
        "frame0": {
            "type": "image",
            "url": "https://example.com/image.jpg",
        },
        "frame1": {
            "id": "123e4567-e89b-12d3-a456-426614174000",
            "type": "generation",
        },
    },
)
print(generation.keyframes)

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of lumaai.APIConnectionError is raised.

When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of lumaai.APIStatusError is raised, containing status_code and response properties.

All errors inherit from lumaai.APIError.

import lumaai
from lumaai import LumaAI

client = LumaAI()

try:
    client.generations.create(
        model="ray-2",
        aspect_ratio="16:9",
        loop=False,
        prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
    )
except lumaai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except lumaai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except lumaai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)

Error codes are as follows:

Status Code  Error Type
400          BadRequestError
401          AuthenticationError
403          PermissionDeniedError
404          NotFoundError
422          UnprocessableEntityError
429          RateLimitError
>=500        InternalServerError
N/A          APIConnectionError

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

from lumaai import LumaAI

# Configure the default for all requests:
client = LumaAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).generations.create(
    model="ray-2",
    aspect_ratio="16:9",
    loop=False,
    prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
)

Timeouts

By default requests time out after 1 minute. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

import httpx

from lumaai import LumaAI

# Configure the default for all requests:
client = LumaAI(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = LumaAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).generations.create(
    model="ray-2",
    aspect_ratio="16:9",
    loop=False,
    prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
)

On timeout, an APITimeoutError is thrown.

Note that requests that time out are retried twice by default.

Advanced

Logging

We use the standard library logging module.

You can enable logging by setting the environment variable LUMAAI_LOG to info.

$ export LUMAAI_LOG=info

Or set it to debug for more verbose logging:
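
$ export LUMAAI_LOG=debug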

How to tell whether None means null or missing

In an API response, a field may be explicitly null, or missing entirely; in either case, its value is None in this library. You can differentiate the two cases with .model_fields_set:

if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')

Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing .with_raw_response. to any HTTP method call, e.g.,

from lumaai import LumaAI

client = LumaAI()
response = client.generations.with_raw_response.create(
    model="ray-2",
    aspect_ratio="16:9",
    loop=False,
    prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
)
print(response.headers.get('X-My-Header'))

generation = response.parse()  # get the object that `generations.create()` would have returned
print(generation.id)

These methods return an APIResponse object.

The async client returns an AsyncAPIResponse with the same structure, the only difference being awaitable methods for reading the response content.
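
For example, a minimal sketch of the async variant, assuming the awaitable reading methods described below also apply to .parse():

import asyncio

from lumaai import AsyncLumaAI


async def main() -> None:
    async with AsyncLumaAI() as client:
        response = await client.generations.with_raw_response.create(
            model="ray-2",
            prompt="A teddy bear in sunglasses playing electric guitar",
        )
        print(response.headers.get("X-My-Header"))

        # Reading methods on AsyncAPIResponse are awaitable.
        generation = await response.parse()
        print(generation.id)


asyncio.run(main())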

.with_streaming_response

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use .with_streaming_response instead, which requires a context manager and only reads the response body once you call .read(), .text(), .json(), .iter_bytes(), .iter_text(), .iter_lines() or .parse(). In the async client, these are async methods.

from lumaai import LumaAI

client = LumaAI()

with client.generations.with_streaming_response.create(
    model="ray-2",
    aspect_ratio="16:9",
    loop=False,
    prompt="A teddy bear in sunglasses playing electric guitar, dancing and headbanging in the jungle in front of a large beautiful waterfall",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)

The context manager is required so that the response will reliably be closed.

Making custom/undocumented requests

This library is typed for convenient access to the documented API.

If you need to access undocumented endpoints, params, or response properties, the library can still be used.

Undocumented endpoints

To make requests to undocumented endpoints, you can use client.get, client.post, and the other HTTP verb methods. Client-level options (such as retries) are respected when making these requests.

import httpx

from lumaai import LumaAI

client = LumaAI()

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))

Undocumented request params

If you want to explicitly send an extra param, you can do so with the extra_query, extra_body, and extra_headers request options.
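
For example, a minimal sketch; the header, query param, and body key below are placeholders for illustration, not real LumaAI parameters:

from lumaai import LumaAI

client = LumaAI()

generation = client.generations.create(
    model="ray-2",
    prompt="A teddy bear in sunglasses playing electric guitar",
    # Placeholder extras for illustration only:
    extra_headers={"X-My-Header": "value"},
    extra_query={"my_query_param": "value"},
    extra_body={"my_undocumented_param": True},
)
print(generation.id)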

Undocumented response properties

To access undocumented response properties, you can access the extra fields like response.unknown_prop. You can also get all the extra fields on the Pydantic model as a dict with response.model_extra.
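
For example, a minimal sketch, assuming the API returned a field that isn't declared on the response model (unknown_prop here is hypothetical):

from lumaai import LumaAI

client = LumaAI()

generation = client.generations.create(
    model="ray-2",
    prompt="A teddy bear in sunglasses playing electric guitar",
)

# Access a hypothetical undeclared field directly...
print(generation.unknown_prop)

# ...or inspect all extra fields on the Pydantic model as a dict.
print(generation.model_extra)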

Configuring the HTTP client

You can directly override the httpx client to customize it for your use case, including proxies, custom transports, and other advanced httpx functionality:

import httpx
from lumaai import LumaAI, DefaultHttpxClient

client = LumaAI(
    # Or use the `LUMAAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)

You can also customize the client on a per-request basis by using with_options():

client.with_options(http_client=DefaultHttpxClient(...))

Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the .close() method if desired, or with a context manager that closes when exiting.

from lumaai import LumaAI

with LumaAI() as client:
    # make requests here
    ...

# HTTP client is now closed
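
Or, a minimal sketch of the explicit .close() call mentioned above:

from lumaai import LumaAI

client = LumaAI()
try:
    ...  # make requests here
finally:
    client.close()  # explicitly release the underlying HTTP connections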

Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

  1. Changes that only affect static types, without breaking runtime behavior.
  2. Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
  3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

Determining the installed version

If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.

You can determine the version that is being used at runtime with:

import lumaai
print(lumaai.__version__)

Requirements

Python 3.9 or higher.

Contributing

See the contributing documentation.
