

Swarmauri LLM LeptonAI

Integration package for calling Lepton AI's hosted language and image generation models from Swarmauri agents. Ships LLM and image-gen adapters with synchronous, streaming, and asynchronous workflows that match Swarmauri conventions.

Features

  • Chat completion support for Lepton AI models (e.g., llama3-8b, mixtral-8x7b) with automatic usage tracking.
  • Streaming and async token generation for latency-sensitive experiences.
  • SDXL-based image generation with convenience helpers to save or display returned bytes.
  • Single configuration surface for model name, base URL, and API key; reuse the same credential for both text and image endpoints.

Prerequisites

  • Python 3.10 or newer.
  • A Lepton AI API key stored outside source control (environment variables or secret stores recommended).
  • Network access to *.lepton.run endpoints; the openai Python client is installed automatically as a dependency.
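Since the adapters read the key at construction time, it can help to fail fast when the environment variable is missing. The helper below is a minimal sketch (not part of the package) that assumes the key is exported as LEPTON_API_KEY:

```python
import os

def require_api_key(var: str = "LEPTON_API_KEY") -> str:
    # Fail fast with a clear message instead of a KeyError deep inside the client.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it before constructing Lepton adapters."
        )
    return key
```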

Installation

# pip
pip install swarmauri_llm_leptonai

# poetry
poetry add swarmauri_llm_leptonai

# uv (pyproject-based projects)
uv add swarmauri_llm_leptonai

Quickstart: Chat Completions

import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage

api_key = os.environ["LEPTON_API_KEY"]

conversation = Conversation()
conversation.add_message(HumanMessage(content="Summarize Swarmauri in two sentences."))

model = LeptonAIModel(api_key=api_key, name="llama3-8b")
response = model.predict(conversation=conversation)

print(response.get_last().content)
print("Tokens used:", response.get_last().usage.total_tokens)
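The name you pass selects which hosted model the adapter talks to. Assuming the per-model endpoint pattern described under Operational Tips, the base URL the adapter targets can be sketched as (illustrative only; the adapter builds this internally):

```python
def lepton_base_url(model_name: str) -> str:
    # Mirrors the documented https://<model>.lepton.run/api/v1/ pattern.
    return f"https://{model_name}.lepton.run/api/v1/"
```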

Async and Streaming

import asyncio
import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage

async def ask_async(prompt: str) -> None:
    convo = Conversation()
    convo.add_message(HumanMessage(content=prompt))

    model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"], name="mixtral-8x7b")
    result = await model.apredict(conversation=convo)
    print(result.get_last().content)

def stream_story(prompt: str) -> None:
    convo = Conversation()
    convo.add_message(HumanMessage(content=prompt))

    model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"])
    for token in model.stream(conversation=convo):
        print(token, end="", flush=True)

# asyncio.run(ask_async("Draft a product announcement."))
# stream_story("Write a haiku about distributed agents.")
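When streaming, it is often useful to both echo tokens live and keep the assembled completion for later inspection. A small, package-agnostic sketch (works with any iterable of string tokens, such as the generator returned by stream):

```python
from typing import Iterable

def collect_stream(tokens: Iterable[str]) -> str:
    # Echo each token as it arrives and retain the full completion.
    parts: list[str] = []
    for tok in tokens:
        print(tok, end="", flush=True)
        parts.append(tok)
    print()  # final newline once the stream ends
    return "".join(parts)
```

Usage: full_text = collect_stream(model.stream(conversation=convo)).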

Generate Images with SDXL

import os
from pathlib import Path
from swarmauri_llm_leptonai import LeptonAIImgGenModel

img_model = LeptonAIImgGenModel(api_key=os.environ["LEPTON_API_KEY"], model_name="sdxl")

prompt = "A cyberpunk skyline at blue hour in watercolor style"
image_bytes = img_model.generate_image(prompt=prompt, width=768, height=512)

output = Path("leptonai_cyberpunk.png")
img_model.save_image(image_bytes, output.as_posix())

# Display in a notebook or desktop environment
# img_model.display_image(image_bytes)
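If you prefer not to rely on the save_image helper, the raw bytes returned by generate_image can be written with the standard library alone; a minimal sketch:

```python
from pathlib import Path

def save_image_bytes(data: bytes, path: str) -> Path:
    # Stdlib-only fallback: write the raw image bytes returned by generate_image.
    out = Path(path)
    out.write_bytes(data)
    return out
```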

Operational Tips

  • Models are invoked via https://<model>.lepton.run/api/v1/; updating the name attribute on LeptonAIModel switches endpoints without any other client changes.
  • Streaming responses emit usage data at stream completion; consume the generator fully before inspecting conversation.get_last().usage.
  • Respect Lepton AI rate limits—add retries with exponential backoff or queue requests during traffic spikes.
  • Store API keys securely and rotate them regularly; avoid hard-coding credentials in notebooks or scripts.
  • Large image generations may take longer and consume more credits; adjust width, height, steps, and guidance_scale to balance quality versus latency.
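The retry advice above can be sketched as a small wrapper with exponential backoff and jitter. This is a generic illustration, not part of the package; in production you might prefer a dedicated retry library and narrower exception handling:

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_backoff(call: Callable[[], T], retries: int = 3, base_delay: float = 0.5) -> T:
    # Retry transient failures (rate limits, timeouts) with exponential backoff + jitter.
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
    raise AssertionError("unreachable")
```

Usage: result = with_backoff(lambda: model.predict(conversation=convo)).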

Want to help?

If you want to contribute to swarmauri-sdk, read our contribution guidelines to get started.
