Liminal Python SDK

The Liminal SDK for Python provides a clean, straightforward, asyncio-based interface for interacting with the Liminal API.

Installation

pip install liminal-sdk-python

Python Versions

liminal is currently supported on:

  • Python 3.11
  • Python 3.12

Quickstart

You can see several examples of how to use this API object via the examples folder in this repo.

Authentication

Via Auth Provider

Liminal supports the concept of authenticating via various auth providers. Currently, the following auth providers are supported:

  • Microsoft Entra ID

Microsoft Entra ID

Device Code Flow

This authentication process with Microsoft Entra ID involves an OAuth 2.0 Device Authorization Grant. This flow requires you to start your app, retrieve a device code from the logs produced by this SDK, and provide that code to Microsoft via a web browser. Once you complete the login process, the SDK will be authenticated for use with your Liminal instance.

To authenticate with this flow, you will need an Entra ID client and tenant ID:

  • Log into your Azure portal.
  • Navigate to Microsoft Entra ID.
  • Click on App registrations.
  • Either create a new app registration or select an existing one.
  • In the Overview of the registration, look for the Application (client) ID and Directory (tenant) ID values.

With a client ID and tenant ID, you can create a Liminal client object and authenticate it:

import asyncio

from liminal import Client
from liminal.auth.microsoft.device_code_flow import DeviceCodeFlowProvider


async def main() -> None:
    """Run!"""
    # Create an auth provider to authenticate the user:
    auth_provider = DeviceCodeFlowProvider("<TENANT_ID>", "<CLIENT_ID>")

    # Create the liminal SDK instance and authenticate it:
    liminal = await Client.authenticate_from_auth_provider(
        "https://api.my-tenant.liminal.ai", auth_provider
    )


asyncio.run(main())

In your application logs, you will see a message that looks like this:

INFO:liminal:To sign in, use a web browser to open the page
https://microsoft.com/devicelogin and enter the code XXXXXXXXX to authenticate.

Leaving your application running, open a browser at that URL and input the code as instructed. Once you successfully complete authentication via Entra ID, your Liminal client object will automatically authenticate with your Liminal API server.

Via Session ID

After the initial authentication with your auth provider, the Liminal client object will internally manage sessions to ensure the ongoing ability to communicate with your Liminal API server. The client object will automatically handle using the stored refresh token to request new access tokens as appropriate.

The Liminal client object exposes a session_id property containing this session identifier. Guard it carefully: anyone holding the session ID can authenticate with your Liminal API server.
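Because the session ID acts as a credential, you may want to persist it with restricted permissions between runs. A minimal sketch of one way to do that (the helper names and file path here are illustrative, not part of the SDK):

```python
import stat
from pathlib import Path


def save_session_id(session_id: str, path: Path) -> None:
    """Write the session ID to disk with owner-only (0o600) permissions."""
    path.write_text(session_id)
    path.chmod(stat.S_IRUSR | stat.S_IWUSR)


def load_session_id(path: Path) -> str:
    """Read a previously saved session ID."""
    return path.read_text().strip()


path = Path("liminal_session_id.txt")
save_session_id("my-session-id", path)
```

On a later run, load_session_id(path) returns the stored ID, ready to pass to Client.authenticate_from_session_id.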

Assuming you have a session ID, it is simple to create a new Liminal client using that ID:

import asyncio

from liminal import Client


async def main() -> None:
    """Run!"""
    # Create the client:
    liminal = await Client.authenticate_from_session_id(
        "https://api.my-tenant.liminal.ai", "my-session-id"
    )


asyncio.run(main())

Via Liminal-Provided Environment Token

Presuming you have received one from Liminal, you may also use an environment token to create an authenticated client object:

import asyncio

from liminal import Client


async def main() -> None:
    """Run!"""
    # Create the client:
    liminal = await Client.authenticate_from_token(
        "https://api.my-tenant.liminal.ai", "my-token"
    )


asyncio.run(main())

Endpoints

Getting Model Instances

Every LLM instance connected in the Liminal admin dashboard is referred to as a "model instance." The SDK provides several methods to interact with model instances:

# Get all available model instances:
model_instances = await liminal.llm.get_available_model_instances()
# >>> [ModelInstance(...), ModelInstance(...)]

# Get a specific model instance (if it exists):
model_instance = await liminal.llm.get_model_instance("My Model")
# >>> ModelInstance(...)

Managing Threads

Threads are conversations with an LLM instance:

# Get all available threads:
threads = await liminal.thread.get_available()
# >>> [Thread(...), Thread(...)]

# Get a specific thread by ID:
thread = await liminal.thread.get_by_id(123)
# >>> Thread(...)

# Some operations require a model instance:
model_instance = await liminal.llm.get_model_instance("My Model")

# Create a new thread:
thread = await liminal.thread.create(model_instance.id, "New Thread")
# >>> Thread(...)

Submitting Prompts

# Prompt operations require a model instance:
model_instance = await liminal.llm.get_model_instance("My Model")

# Prompt operations optionally take an existing thread:
thread = await liminal.thread.get_by_id(123)
# >>> Thread(...)

# Analyze a prompt for sensitive info:
findings = await liminal.prompt.analyze(model_instance.id, "Here is a sensitive prompt")
# >>> AnalyzeResponse(...)

# Cleanse input text by applying the policies defined in the Liminal admin
# dashboard. You can optionally provide existing analysis findings; if not
# provided, analyze is called automatically:
cleansed = await liminal.prompt.cleanse(
    model_instance.id,
    "Here is a sensitive prompt",
    findings=findings,
    thread_id=thread.id,
)
# >>> CleanseResponse(...)

# Submit a prompt to an LLM, cleansing it in the process (once again, providing optional
# findings), and receive the whole response:
response = await liminal.prompt.submit(
    model_instance.id,
    "Here is a sensitive prompt",
    findings=findings,
    thread_id=thread.id,
)
# >>> SubmitResponse(...)

# Submit a prompt, but this time, stream the response back chunk by chunk:
response = liminal.prompt.stream(
    model_instance.id,
    "Here is a sensitive prompt",
    findings=findings,
    thread_id=thread.id,
)
async for chunk in response:
    # Each chunk is a liminal.endpoints.prompt.models.StreamResponseChunk object:
    print(chunk.content)
    print(chunk.finish_reason)

# Rehydrate a response with sensitive data:
hydrated = await liminal.prompt.hydrate(
    model_instance.id, "Here is a response to rehydrate", thread_id=thread.id
)
# >>> HydrateResponse(...)

Connection Pooling

By default, the library creates a new connection to the Liminal API server with each coroutine. If you are calling a large number of coroutines (or merely want to squeeze out every second of runtime savings possible), an httpx AsyncClient can be used for connection pooling:

import asyncio

import httpx

from liminal import Client
from liminal.auth.microsoft.device_code_flow import DeviceCodeFlowProvider


async def main() -> None:
    """Run!"""
    # Create an auth provider to authenticate the user:
    auth_provider = DeviceCodeFlowProvider("<TENANT_ID>", "<CLIENT_ID>")

    # Create the liminal SDK instance with a shared HTTPX AsyncClient:
    async with httpx.AsyncClient() as client:
        liminal = await Client.authenticate_from_auth_provider(
            "<LIMINAL_API_SERVER_URL>", auth_provider, httpx_client=client
        )

        # Get to work!
        # ...


asyncio.run(main())

Check out the examples, the tests, and the source files themselves for method signatures and more examples.

Running Examples

You can see examples of how to use this SDK via the examples folder in this repo. Each example follows a similar invocation pattern, taking its inputs via environment variables; for example:

LIMINAL_API_SERVER_URL=https://api.DOMAIN.liminal.ai \
CLIENT_ID=xxxxxxxxxxxxxxxx \
TENANT_ID=xxxxxxxxxxxxxxxx \
MODEL_INSTANCE_NAME=model-instance-name \
python3 examples/quickstart_with_microsoft.py
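Inside an example script, that pattern amounts to reading the required settings from the environment and failing fast when one is missing. A sketch (the load_settings helper is illustrative, not part of the SDK; the stand-in values below would normally come from your shell):

```python
import os


def load_settings(*names: str) -> dict[str, str]:
    """Fetch required settings from the environment, failing fast if any are missing."""
    missing = [name for name in names if name not in os.environ]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in names}


# Stand-in values for illustration; in practice these come from your shell:
os.environ.update(
    {
        "LIMINAL_API_SERVER_URL": "https://api.DOMAIN.liminal.ai",
        "CLIENT_ID": "xxxxxxxxxxxxxxxx",
        "TENANT_ID": "xxxxxxxxxxxxxxxx",
    }
)

settings = load_settings("LIMINAL_API_SERVER_URL", "CLIENT_ID", "TENANT_ID")
```

Failing fast with a single error listing every missing variable is friendlier than raising a KeyError on the first one.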

Contributing

Thanks to all of our contributors so far!

  1. Check for open features/bugs or initiate a discussion on one.
  2. Fork the repository.
  3. (optional, but highly recommended) Create a virtual environment: python3 -m venv .venv
  4. (optional, but highly recommended) Enter the virtual environment: source ./.venv/bin/activate
  5. Install the dev environment: ./scripts/setup.sh
  6. Code your new feature or bug fix on a new branch.
  7. Write tests that cover your new functionality.
  8. Run tests and ensure 100% code coverage: poetry run pytest --cov liminal tests
  9. Update README.md with any new documentation.
  10. Submit a pull request!
