Swarmauri LLM LeptonAI

Integration package for calling Lepton AI's hosted language and image generation models from Swarmauri agents. Ships LLM and image-gen adapters with synchronous, streaming, and asynchronous workflows that match Swarmauri conventions.

Features

  • Chat completion support for Lepton AI models (e.g., llama3-8b, mixtral-8x7b) with automatic usage tracking.
  • Streaming and async token generation for latency-sensitive experiences.
  • SDXL-based image generation with convenience helpers to save or display returned bytes.
  • Single configuration surface for model name, base URL, and API key; reuse the same credential for both text and image endpoints.

Prerequisites

  • Python 3.10 or newer.
  • A Lepton AI API key stored outside source control (environment variables or secret stores recommended).
  • Network access to *.lepton.run endpoints; the openai Python client is installed automatically as a dependency.
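The key-handling advice above can be made concrete with a small helper that fails fast when the secret is missing. This is a local convenience sketch, not part of the package; the variable name LEPTON_API_KEY matches the examples below, and you can adapt the lookup to whatever secret store you use.

```python
import os

def require_env(var: str = "LEPTON_API_KEY") -> str:
    """Fetch a required secret from the environment, failing fast with a clear message."""
    value = os.environ.get(var)
    if not value:
        raise RuntimeError(
            f"{var} is not set; export it (or load it from your secret store) "
            "before constructing the model."
        )
    return value
```

Calling `require_env()` at startup surfaces a missing credential immediately, instead of as an opaque authentication error on the first request.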

Installation

# pip
pip install swarmauri_llm_leptonai

# poetry
poetry add swarmauri_llm_leptonai

# uv (pyproject-based projects)
uv add swarmauri_llm_leptonai

Quickstart: Chat Completions

import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage

api_key = os.environ["LEPTON_API_KEY"]

conversation = Conversation()
conversation.add_message(HumanMessage(content="Summarize Swarmauri in two sentences."))

model = LeptonAIModel(api_key=api_key, name="llama3-8b")
response = model.predict(conversation=conversation)

print(response.get_last().content)
print("Tokens used", response.get_last().usage.total_tokens)

Async and Streaming

import asyncio
import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage

async def ask_async(prompt: str) -> None:
    convo = Conversation()
    convo.add_message(HumanMessage(content=prompt))

    model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"], name="mixtral-8x7b")
    result = await model.apredict(conversation=convo)
    print(result.get_last().content)

def stream_story(prompt: str) -> None:
    convo = Conversation()
    convo.add_message(HumanMessage(content=prompt))

    model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"])
    for token in model.stream(conversation=convo):
        print(token, end="", flush=True)

# asyncio.run(ask_async("Draft a product announcement."))
# stream_story("Write a haiku about distributed agents.")

Generate Images with SDXL

import os
from pathlib import Path
from swarmauri_llm_leptonai import LeptonAIImgGenModel

img_model = LeptonAIImgGenModel(api_key=os.environ["LEPTON_API_KEY"], model_name="sdxl")

prompt = "A cyberpunk skyline at blue hour in watercolor style"
image_bytes = img_model.generate_image(prompt=prompt, width=768, height=512)

output = Path("leptonai_cyberpunk.png")
img_model.save_image(image_bytes, output.as_posix())

# Display in a notebook or desktop environment
# img_model.display_image(image_bytes)
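Since the model returns raw bytes, a quick sanity check before writing to disk can catch a failed generation (for example, an error body returned instead of an image). This sketch assumes PNG output, which the .png filename above suggests but does not guarantee; the 8-byte signature test is the standard way to recognize a PNG file.

```python
def looks_like_png(data: bytes) -> bool:
    """True if `data` begins with the standard 8-byte PNG file signature."""
    return data[:8] == b"\x89PNG\r\n\x1a\n"

# Guard the save step so an error payload is never written out as an image:
# if looks_like_png(image_bytes):
#     img_model.save_image(image_bytes, output.as_posix())
# else:
#     raise RuntimeError("Response does not look like a PNG image")
```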

Operational Tips

  • Models are invoked via https://<model>.lepton.run/api/v1/; updating name on LeptonAIModel switches endpoints without altering the client setup.
  • Streaming responses emit usage data at stream completion; consume the generator fully before inspecting conversation.get_last().usage.
  • Respect Lepton AI rate limits—add retries with exponential backoff or queue requests during traffic spikes.
  • Store API keys securely and rotate them regularly; avoid hard-coding credentials in notebooks or scripts.
  • Large image generations may take longer and consume more credits; adjust width, height, steps, and guidance_scale to balance quality versus latency.
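The backoff advice above can be sketched with the standard library alone. The helper and the stand-in function here are illustrative: in practice you would wrap `model.predict` and catch the specific rate-limit exception raised by the underlying openai client rather than a bare `Exception`.

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` with exponential backoff plus a little jitter.

    `call` is any zero-argument function that raises on failure;
    the last attempt's exception is re-raised if all retries fail.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay * 0.1))

# Stand-in for a rate-limited call that succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```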

Want to help?

If you want to contribute to swarmauri-sdk, read our contributing guidelines to get started.

Project details



Download files

Source distribution: swarmauri_llm_leptonai-0.9.3.dev4.tar.gz (10.2 kB)

  • SHA256: 19f08679db9d82b090a5cced87b547d1479857b3894f69f365ec9a16584e9204
  • MD5: c1f9a5296622295160c543f42e091005
  • BLAKE2b-256: 92cf048258ad462268244ba0ee268d1ba2e6d199967dfb9e03d2a977a4903d75

Built distribution: swarmauri_llm_leptonai-0.9.3.dev4-py3-none-any.whl (11.7 kB)

  • SHA256: 658ff7611048cdb00cf86922b07fda5f11565c20683311a6a1a6d73d9177d6ed
  • MD5: 4561b9f19c0ce4eaca510e5102a11cd9
  • BLAKE2b-256: 35c3e9272725156af20c73e936f92c350b7792bac6031a498b147b2fe48a7c24

Both files were uploaded via uv 0.10.3 from Ubuntu 24.04 in CI, without Trusted Publishing.
