
Swarmauri LLM LeptonAI

Integration package for calling Lepton AI's hosted language and image-generation models from Swarmauri agents. It ships LLM and image-generation adapters with synchronous, streaming, and asynchronous workflows that follow Swarmauri conventions.

Features

  • Chat completion support for Lepton AI models (e.g., llama3-8b, mixtral-8x7b) with automatic usage tracking.
  • Streaming and async token generation for latency-sensitive experiences.
  • SDXL-based image generation with convenience helpers to save or display returned bytes.
  • Single configuration surface for model name, base URL, and API key; reuse the same credential for both text and image endpoints.

Prerequisites

  • Python 3.10 or newer.
  • A Lepton AI API key stored outside source control (environment variables or secret stores recommended).
  • Network access to *.lepton.run endpoints; the openai Python client is installed automatically as a dependency.
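A minimal sketch of loading the key safely, assuming the LEPTON_API_KEY environment variable name used throughout the examples below (the helper name is illustrative, not part of the package):

```python
import os


def load_lepton_key() -> str:
    """Read the Lepton AI API key from the environment, failing fast if absent."""
    key = os.environ.get("LEPTON_API_KEY")
    if not key:
        raise RuntimeError(
            "Set LEPTON_API_KEY in your environment or secret store; "
            "never hard-code API keys in source control."
        )
    return key
```

Failing fast with a clear message beats passing an empty string downstream, where it would surface later as an opaque authentication error from the API.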

Installation

# pip
pip install swarmauri_llm_leptonai

# poetry
poetry add swarmauri_llm_leptonai

# uv (pyproject-based projects)
uv add swarmauri_llm_leptonai

Quickstart: Chat Completions

import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage

api_key = os.environ["LEPTON_API_KEY"]

conversation = Conversation()
conversation.add_message(HumanMessage(content="Summarize Swarmauri in two sentences."))

model = LeptonAIModel(api_key=api_key, name="llama3-8b")
response = model.predict(conversation=conversation)

print(response.get_last().content)
print("Tokens used:", response.get_last().usage.total_tokens)

Async and Streaming

import asyncio
import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage

async def ask_async(prompt: str) -> None:
    convo = Conversation()
    convo.add_message(HumanMessage(content=prompt))

    model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"], name="mixtral-8x7b")
    result = await model.apredict(conversation=convo)
    print(result.get_last().content)

def stream_story(prompt: str) -> None:
    convo = Conversation()
    convo.add_message(HumanMessage(content=prompt))

    model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"])
    for token in model.stream(conversation=convo):
        print(token, end="", flush=True)

# asyncio.run(ask_async("Draft a product announcement."))
# stream_story("Write a haiku about distributed agents.")

Generate Images with SDXL

import os
from pathlib import Path
from swarmauri_llm_leptonai import LeptonAIImgGenModel

img_model = LeptonAIImgGenModel(api_key=os.environ["LEPTON_API_KEY"], model_name="sdxl")

prompt = "A cyberpunk skyline at blue hour in watercolor style"
image_bytes = img_model.generate_image(prompt=prompt, width=768, height=512)

output = Path("leptonai_cyberpunk.png")
img_model.save_image(image_bytes, output.as_posix())

# Display in a notebook or desktop environment
# img_model.display_image(image_bytes)

Operational Tips

  • Models are invoked via https://<model>.lepton.run/api/v1/; updating name on LeptonAIModel switches endpoints without altering the client setup.
  • Streaming responses emit usage data at stream completion; consume the generator fully before inspecting conversation.get_last().usage.
  • Respect Lepton AI rate limits—add retries with exponential backoff or queue requests during traffic spikes.
  • Store API keys securely and rotate them regularly; avoid hard-coding credentials in notebooks or scripts.
  • Large image generations may take longer and consume more credits; adjust width, height, steps, and guidance_scale to balance quality versus latency.
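The retry tip above can be sketched with a generic backoff helper. The wrapped call in the comment is illustrative (it assumes a model and conversation set up as in the Quickstart); the helper itself is library-agnostic:

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_backoff(
    fn: Callable[[], T],
    retries: int = 4,
    base_delay: float = 1.0,
    max_delay: float = 30.0,
) -> T:
    """Call fn(), retrying on exceptions with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the original error
            # Delay doubles each attempt, capped at max_delay, with small
            # random jitter so concurrent clients do not retry in lockstep.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay * 0.1))
    raise RuntimeError("unreachable")


# Example (hypothetical wiring; model/convo as in the Quickstart):
# response = call_with_backoff(lambda: model.predict(conversation=convo))
```

For production workloads, a dedicated library such as tenacity offers the same pattern with richer policies (retry only on specific exceptions, deadline budgets, logging hooks).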

Want to help?

If you want to contribute to swarmauri-sdk, read our contributing guidelines to get started.
