
Hume AI Python SDK

Integrate Hume APIs directly into your Python application



Migration Guide for Version 0.7.0 and Above

There were major breaking changes in version 0.7.0 of the SDK. If you are upgrading from an earlier version, please view the Migration Guide. That release deprecated several interfaces and moved them to the hume[legacy] package extra; the legacy extra was removed in 0.9.0, and 0.8.6 was the last version to include it.

Documentation

API reference documentation is available here.

Compatibility

The Hume Python SDK is compatible across several Python versions and operating systems.

The table below shows Python version and operating system compatibility by product:

Product                    Python Versions         Operating Systems
Empathic Voice Interface   3.9, 3.10, 3.11         macOS, Linux
Text-to-speech (TTS)       3.9, 3.10, 3.11, 3.12   macOS, Linux, Windows
Expression Measurement     3.9, 3.10, 3.11, 3.12   macOS, Linux, Windows
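Before importing the SDK, it can help to fail fast when the interpreter is older than the supported range. This guard is a generic sketch, not something the SDK itself provides:

```python
import sys

# Minimum supported interpreter per the compatibility table above.
MIN_VERSION = (3, 9)

def meets_minimum(version_info=sys.version_info, minimum=MIN_VERSION):
    """Return True if the interpreter meets the minimum supported version."""
    return tuple(version_info[:2]) >= minimum

if not meets_minimum():
    raise RuntimeError(
        f"hume requires Python {MIN_VERSION[0]}.{MIN_VERSION[1]} or newer"
    )
```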

Installation

pip install hume
# or
poetry add hume
# or
uv add hume

Usage

Instantiate and use the client with the following:

from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")
client.empathic_voice.configs.list_configs()

Async Client

The SDK also exports an async client so that you can make non-blocking calls to our API.

import asyncio

from hume.client import AsyncHumeClient

client = AsyncHumeClient(api_key="YOUR_API_KEY")

async def main() -> None:
    await client.empathic_voice.configs.list_configs()

asyncio.run(main())
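The payoff of the async client is that independent calls can run concurrently. The sketch below uses stand-in coroutines rather than real API calls to illustrate the asyncio.gather pattern:

```python
import asyncio

# Stand-ins for independent SDK calls (e.g. list_configs and list_tools);
# real code would await methods on AsyncHumeClient instead.
async def fetch_configs():
    await asyncio.sleep(0.01)  # simulated network latency
    return ["config-a", "config-b"]

async def fetch_tools():
    await asyncio.sleep(0.01)
    return ["tool-x"]

async def main():
    # asyncio.gather runs both "requests" concurrently, so total wall time
    # is roughly one round trip instead of two.
    return await asyncio.gather(fetch_configs(), fetch_tools())

configs, tools = asyncio.run(main())
```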

Writing Files

Writing files from an async stream of bytes can be tricky in Python! The aiofiles library simplifies this. For example, you can download your job artifacts like so:

import asyncio

import aiofiles

from hume import AsyncHumeClient

client = AsyncHumeClient()

async def main() -> None:
    async with aiofiles.open('artifacts.zip', mode='wb') as file:
        async for chunk in client.expression_measurement.batch.get_job_artifacts(id="my-job-id"):
            await file.write(chunk)

asyncio.run(main())
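If you would rather avoid the extra dependency, the same chunked-write pattern works with the standard library alone, at the cost of blocking writes. The async byte stream below is a stand-in for get_job_artifacts:

```python
import asyncio
import os
import tempfile

# Stand-in for an async stream of bytes such as get_job_artifacts(...).
async def fake_artifact_stream():
    for chunk in (b"PK\x03\x04", b"payload", b"trailer"):
        yield chunk

async def download(path):
    # A plain blocking open() is fine for small files; for large files,
    # aiofiles (or loop.run_in_executor) keeps the event loop responsive.
    with open(path, "wb") as file:
        async for chunk in fake_artifact_stream():
            file.write(chunk)

path = os.path.join(tempfile.mkdtemp(), "artifacts.zip")
asyncio.run(download(path))
```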

Namespaces

This SDK contains the APIs for empathic voice, tts, and expression measurement. Even if you do not plan on using more than one API to start, the SDK provides easy access in case you would like to use additional APIs in the future.

Each API is namespaced accordingly:

from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")

client.empathic_voice.         # APIs specific to Empathic Voice
client.tts.                    # APIs specific to Text-to-speech
client.expression_measurement. # APIs specific to Expression Measurement

Exception Handling

All errors thrown by the SDK will be subclasses of ApiError.

import hume.core

from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")

try:
    client.expression_measurement.batch.get_job_predictions(...)
except hume.core.ApiError as e:  # Handle all errors
    print(e.status_code)
    print(e.body)
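Since ApiError exposes status_code and body, you can branch on the status to decide how to recover. The sketch below uses a stand-in exception class so the example is self-contained; only the attribute names mirror the real ApiError:

```python
# Stand-in mirroring the status_code/body attributes of hume.core.ApiError;
# the real class comes from the SDK.
class FakeApiError(Exception):
    def __init__(self, status_code, body):
        super().__init__(f"{status_code}: {body}")
        self.status_code = status_code
        self.body = body

def classify(error):
    """Map an API error to a coarse handling strategy."""
    if error.status_code == 429:
        return "back off and retry"
    if 500 <= error.status_code < 600:
        return "retry with backoff"
    return "fix the request"

try:
    raise FakeApiError(429, {"message": "Too Many Requests"})
except FakeApiError as e:
    action = classify(e)
```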

Pagination

Paginated requests return a SyncPager or AsyncPager, which can be iterated like a generator over the underlying objects. For example, list_tools returns a pager over ReturnUserDefinedTool and handles the pagination behind the scenes:

from hume.client import HumeClient

client = HumeClient(api_key="YOUR_API_KEY")

for tool in client.empathic_voice.tools.list_tools():
    print(tool)

You can also iterate page by page:

for page in client.empathic_voice.tools.list_tools().iter_pages():
    print(page.items)

Or advance pages manually:

pager = client.empathic_voice.tools.list_tools()
# First page
print(pager.items)
# Second page
pager = pager.next_page()
print(pager.items)
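Under the hood a pager simply chains page fetches. A toy sketch (not the SDK's actual SyncPager implementation) makes the three iteration styles above concrete:

```python
# Toy pager: each page holds its items plus a way to fetch the next page.
class ToyPager:
    def __init__(self, items, fetch_next=None):
        self.items = items
        self._fetch_next = fetch_next

    def next_page(self):
        return self._fetch_next() if self._fetch_next else None

    def iter_pages(self):
        page = self
        while page is not None:
            yield page
            page = page.next_page()

    def __iter__(self):
        # Item-by-item iteration just flattens the pages.
        for page in self.iter_pages():
            yield from page.items

def list_tools():
    second = ToyPager(["tool-3"])
    return ToyPager(["tool-1", "tool-2"], fetch_next=lambda: second)

tools = list(list_tools())
```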

WebSockets

We expose a websocket client for interacting with the EVI API as well as Expression Measurement.

You can use these clients much as you would use the popular websockets library:

import asyncio
import os

from hume import AsyncHumeClient, StreamDataModels

client = AsyncHumeClient(api_key=os.getenv("HUME_API_KEY"))

async def main() -> None:
    async with client.expression_measurement.stream.connect(
        options={"config": StreamDataModels(...)}
    ) as hume_socket:
        print(await hume_socket.get_job_details())

asyncio.run(main())

The underlying connection object, hume_socket in this case, supports IntelliSense/autocomplete for the functions available on the socket.
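The async with usage works because connect returns an async context manager that closes the socket for you on exit, even when an error is raised. A minimal stand-in (not the SDK's implementation) shows the shape:

```python
import asyncio
from contextlib import asynccontextmanager

# Stand-in for client.expression_measurement.stream.connect(...): open the
# connection on entry, guarantee it is closed on exit.
@asynccontextmanager
async def connect(options=None):
    socket = {"open": True, "options": options or {}}
    try:
        yield socket
    finally:
        socket["open"] = False  # always closed, even on error

async def main():
    async with connect(options={"config": "stub"}) as sock:
        open_inside = sock["open"]
    return open_inside, sock["open"]

open_inside, open_after = asyncio.run(main())
```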

Advanced

Retries

The Hume SDK is instrumented with automatic retries and exponential backoff. A request is retried as long as it is deemed retriable and the number of retry attempts has not exceeded the configured limit.

A request is deemed retriable when any of the following HTTP status codes is returned:

  • 408 (Timeout)
  • 409 (Conflict)
  • 429 (Too Many Requests)
  • 5XX (Internal Server Errors)

Use the max_retries request option to configure this behavior.

from hume.client import HumeClient
from hume.core import RequestOptions

client = HumeClient(...)

# Override retries for a specific method
client.expression_measurement.batch.get_job_predictions(...,
    request_options=RequestOptions(max_retries=5)
)
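To make the policy above concrete, here is an illustrative retry predicate and backoff schedule. The base delay and cap are assumptions for the sketch, not the SDK's internal values:

```python
# Status codes the section above lists as retriable, plus the 5XX range.
RETRIABLE = {408, 409, 429}

def is_retriable(status_code):
    return status_code in RETRIABLE or 500 <= status_code < 600

def backoff_delays(max_retries, base=0.5, cap=8.0):
    """Seconds to wait before each retry: base * 2**attempt, capped."""
    return [min(base * (2 ** attempt), cap) for attempt in range(max_retries)]

delays = backoff_delays(5)
```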

Timeouts

By default, requests time out after 60 seconds. You can configure this with a timeout option at the client or request level.

from hume.client import HumeClient
from hume.core import RequestOptions

client = HumeClient(
    # All timeouts are 20 seconds
    timeout=20.0,
)

# Override timeout for a specific method
client.expression_measurement.batch.get_job_predictions(...,
    request_options=RequestOptions(timeout_in_seconds=20)
)
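The precedence between the two levels can be stated in plain Python; the function below is illustrative, not part of the SDK:

```python
DEFAULT_TIMEOUT = 60.0  # the SDK's default request timeout, in seconds

def effective_timeout(client_timeout=None, request_timeout=None):
    """A request-level timeout overrides the client-level one,
    which overrides the 60-second default."""
    if request_timeout is not None:
        return request_timeout
    if client_timeout is not None:
        return client_timeout
    return DEFAULT_TIMEOUT
```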

Custom HTTP client

You can override the httpx client to customize it for your use case. Common use cases include proxy support and custom transports.

import httpx

from hume.client import HumeClient

client = HumeClient(
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)

Contributing

While we value open-source contributions to this SDK, this library is generated programmatically.

Additions made directly to this library would have to be moved over to our generation code, otherwise they would be overwritten upon the next generated release. Feel free to open a PR as a proof of concept, but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss with us!

On the other hand, contributions to the README are always very welcome!
