ALEA LLM Client

ALEA LLM client abstraction library for Python

This is a simple, two-dependency (httpx, pydantic) LLM client for OpenAI-style APIs, including:

  • OpenAI
  • Anthropic
  • VLLM

Supported Patterns

It provides the following patterns for all endpoints:

  • complete and complete_async -> str via ModelResponse
  • chat and chat_async -> str via ModelResponse
  • json and json_async -> dict via JSONModelResponse
  • pydantic and pydantic_async -> pydantic models
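As a quick illustration, a minimal chat call might look like the sketch below. This is a hedged example: it assumes an Anthropic key is configured as described under Authentication, and that the ModelResponse returned by chat exposes a .text attribute holding the reply string, as in the completion example further down.

from alea_llm_client import AnthropicModel

if __name__ == "__main__":
    model = AnthropicModel()

    messages = [
        {"role": "user", "content": "Say hello in one short sentence."},
    ]

    # chat() returns a ModelResponse; .text is assumed to hold the reply string
    print(model.chat(messages=messages).text)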

Default Caching

Result caching is enabled by default for all methods.

To disable caching, you can either:

  • set ignore_cache=True for each method call (complete, chat, json, pydantic)
  • set ignore_cache=True as a kwarg at model construction

Cached objects are stored in ~/.alea/cache/{provider}/{endpoint_model_hash}/{call_hash}.json in compressed .json.gz format. You can delete these files to clear the cache.
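For example, both options look like this (a sketch reusing the VLLMModel setup from the examples below):

from alea_llm_client import VLLMModel

# Option 1: disable caching for every call made through this instance
model = VLLMModel(
    endpoint="http://my.vllm.server:8000",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    ignore_cache=True,
)

# Option 2: disable caching for a single call only
print(model.complete(prompt="My name is ", ignore_cache=True).text)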

Authentication

Authentication is handled in the following priority order:

  • an api_key provided at model construction
  • a standard environment variable (e.g., ANTHROPIC_API_KEY or OPENAI_API_KEY)
  • a key stored in ~/.alea/keys/{provider} (e.g., openai, anthropic)
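As a sketch, the highest-priority option looks like this (the placeholder key value is illustrative; AnthropicModel is imported exactly as in the Pydantic example below):

from alea_llm_client import AnthropicModel

# An api_key passed at construction takes precedence over the
# ANTHROPIC_API_KEY environment variable and ~/.alea/keys/anthropic.
model = AnthropicModel(api_key="sk-ant-...")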

Streaming

Given the research focus of this library, streaming generation is not supported. However, if you prefer streaming, you can access the underlying httpx objects via .client and .async_client and stream responses yourself, as sketched below.
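The following is a rough sketch of that approach, assuming a vLLM server exposing an OpenAI-compatible /v1/chat/completions endpoint that emits server-sent events; the exact URL and payload depend on your server and are not part of this library's API.

import json
from alea_llm_client import VLLMModel

model = VLLMModel(
    endpoint="http://my.vllm.server:8000",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
)

# .client is a plain httpx.Client, so we can issue a streaming request ourselves
with model.client.stream(
    "POST",
    "http://my.vllm.server:8000/v1/chat/completions",
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Tell me a short story."}],
        "stream": True,
    },
) as response:
    for line in response.iter_lines():
        # Each SSE line looks like "data: {...}"; skip blanks and the [DONE] marker
        if line.startswith("data: ") and line != "data: [DONE]":
            chunk = json.loads(line[len("data: "):])
            print(chunk["choices"][0]["delta"].get("content", ""), end="")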

Installation

pip install alea-llm-client

Examples

Basic JSON Example

from alea_llm_client import VLLMModel

if __name__ == "__main__":
    model = VLLMModel(
        endpoint="http://my.vllm.server:8000",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct"
    )

    messages = [
        {
            "role": "user",
            "content": "Give me a JSON object with keys 'name' and 'age' for a person named Alice who is 30 years old.",
        },
    ]

    print(model.json(messages=messages, system="Respond in JSON.").data)

# Output: {'name': 'Alice', 'age': 30}

Basic Completion Example with KL3M

from alea_llm_client import VLLMModel

if __name__ == "__main__":
    model = VLLMModel(
        model="kl3m-1.7b", ignore_cache=True
    )

    prompt = "My name is "
    print(model.complete(prompt=prompt, temperature=0.5).text)

# Output: Dr. Hermann Kamenzi, and

Pydantic Example

from pydantic import BaseModel
from alea_llm_client import AnthropicModel
from alea_llm_client.llms.prompts.sections import format_prompt, format_instructions

class Person(BaseModel):
    name: str
    age: int

if __name__ == "__main__":
    model = AnthropicModel(ignore_cache=True)

    instructions = [
        "Provide one random record based on the SCHEMA below.",
    ]
    prompt = format_prompt(
        {
            "instructions": format_instructions(instructions),
            "schema": Person,
        }
    )

    person = model.pydantic(prompt, system="Respond in JSON.", pydantic_model=Person)
    print(person)

# Output: name='Olivia Chen' age=29

Design

Class Inheritance

classDiagram
    BaseAIModel <|-- OpenAICompatibleModel
    OpenAICompatibleModel <|-- AnthropicModel
    OpenAICompatibleModel <|-- OpenAIModel
    OpenAICompatibleModel <|-- VLLMModel

    class BaseAIModel {
        <<abstract>>
    }
    class OpenAICompatibleModel
    class AnthropicModel
    class OpenAIModel
    class VLLMModel

Example Call Flow

sequenceDiagram
    participant Client
    participant BaseAIModel
    participant OpenAICompatibleModel
    participant SpecificModel
    participant API

    Client->>BaseAIModel: json()
    BaseAIModel->>BaseAIModel: _retry_wrapper()
    BaseAIModel->>OpenAICompatibleModel: _json()
    OpenAICompatibleModel->>OpenAICompatibleModel: format()
    OpenAICompatibleModel->>OpenAICompatibleModel: _make_request()
    OpenAICompatibleModel->>API: HTTP POST
    API-->>OpenAICompatibleModel: Response
    OpenAICompatibleModel->>OpenAICompatibleModel: _handle_json_response()
    OpenAICompatibleModel-->>BaseAIModel: JSONModelResponse
    BaseAIModel-->>Client: JSONModelResponse

License

The ALEA LLM client is released under the MIT License. See the LICENSE file for details.

Support

If you encounter any issues or have questions about using the ALEA LLM client library, please open an issue on GitHub.

Learn More

To learn more about ALEA and its software and research projects like KL3M and leeky, visit the ALEA website.
