LOLM - Lightweight Orchestrated LLM Manager

PyPI version · Python 3.9+ · License: Apache 2.0 · Code style: black

Reusable LLM configuration and management package for Python projects.

Multi-provider support with automatic fallback, priority routing, and unified configuration.

✨ Features

  • 🔄 Multi-provider support - OpenRouter, Ollama, Groq, Together, LiteLLM
  • Automatic fallback - Seamless provider switching on failure
  • 🎯 Priority routing - Configure provider order by priority
  • 🔧 Unified config - .env, litellm_config.yaml, ~/.lolm/
  • 🖥️ CLI interface - Similar to reclapp llm

🚀 Installation

pip install lolm

Or with optional dependencies:

pip install lolm[full]      # All providers
pip install lolm[ollama]    # Ollama support
pip install lolm[litellm]   # LiteLLM support
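
Whether the optional backends are actually importable at runtime can be checked with the standard library alone. This is a generic sketch based on the extras above (the package names `ollama` and `litellm` are assumptions; lolm may ship its own detection):

```python
import importlib.util

# Optional backends the extras above would pull in (assumed package names).
OPTIONAL_BACKENDS = ["ollama", "litellm"]

def available_backends(candidates=OPTIONAL_BACKENDS):
    """Return the subset of optional backend packages that are importable."""
    return [name for name in candidates if importlib.util.find_spec(name) is not None]

print(available_backends())
```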

📖 Quick Start

CLI

# Show provider status
lolm status

# Set default provider
lolm set-provider openrouter

# Set model
lolm set-model openrouter nvidia/nemotron-3-nano-30b-a3b:free

# Manage API keys
lolm key set openrouter YOUR_API_KEY

# Test generation
lolm test

Python API

from lolm import get_client, LLMManager

# Simple usage
client = get_client()
response = client.generate("Explain this code")

# With specific provider
client = get_client(provider='openrouter')
response = client.generate("Hello!", system="You are helpful")

# With manager for fallback
manager = LLMManager()
manager.initialize()

response = manager.generate_with_fallback(
    "Generate code",
    providers=['openrouter', 'groq', 'ollama']
)
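
Conceptually, fallback means trying each provider in the order given and moving on when one raises. The following is an illustrative, self-contained sketch of that semantics, not lolm's actual implementation (the real manager also handles retries, timeouts, and provider initialization):

```python
class ProviderError(Exception):
    """Raised when every provider in the chain fails."""

def generate_with_fallback(prompt, providers, clients):
    """Try each provider in order; return the first successful response.

    `clients` maps a provider name to a callable taking the prompt.
    Illustrative only -- not lolm's internal logic.
    """
    errors = {}
    for name in providers:
        try:
            return clients[name](prompt)
        except Exception as exc:  # a real implementation would narrow this
            errors[name] = exc
    raise ProviderError(f"all providers failed: {errors}")

# Stub clients: the first always fails, the second succeeds.
clients = {
    "openrouter": lambda p: (_ for _ in ()).throw(RuntimeError("rate limited")),
    "groq": lambda p: f"groq says: {p}",
}
print(generate_with_fallback("hi", ["openrouter", "groq"], clients))
# -> groq says: hi
```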

⚙️ Configuration

Environment Variables (.env)

# API Keys
OPENROUTER_API_KEY=sk-or-v1-...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...

# Default provider
LLM_PROVIDER=auto

# Model overrides
OPENROUTER_MODEL=nvidia/nemotron-3-nano-30b-a3b:free
OLLAMA_MODEL=qwen2.5-coder:14b
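
These variables can be read with nothing more than `os.environ`; the defaults below mirror the names documented above. A minimal sketch, not lolm's actual loader (which also merges `.env` files and `~/.lolm/`):

```python
import os

def load_llm_config(env=os.environ):
    """Collect LLM settings from environment variables (illustrative)."""
    return {
        "provider": env.get("LLM_PROVIDER", "auto"),
        "api_keys": {
            name: env[var]
            for name, var in [
                ("openrouter", "OPENROUTER_API_KEY"),
                ("groq", "GROQ_API_KEY"),
                ("together", "TOGETHER_API_KEY"),
            ]
            if var in env
        },
        "models": {
            name: env[var]
            for name, var in [
                ("openrouter", "OPENROUTER_MODEL"),
                ("ollama", "OLLAMA_MODEL"),
            ]
            if var in env
        },
    }

cfg = load_llm_config({"OPENROUTER_API_KEY": "sk-or-v1-test"})
print(cfg["provider"], cfg["api_keys"])
# -> auto {'openrouter': 'sk-or-v1-test'}
```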

litellm_config.yaml

model_list:
  - model_name: code-analyzer
    litellm_params:
      model: ollama/qwen2.5-coder:7b
      api_base: http://localhost:11434
    priority: 10

router_settings:
  routing_strategy: simple-shuffle
  num_retries: 3
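
With several `model_list` entries, the `priority` field can order candidates; sorting highest-first yields a routing order. This is a sketch of how such a config might be interpreted (the second entry is hypothetical, and LiteLLM's own routing semantics may differ; check its documentation):

```python
# The same structure as litellm_config.yaml, as a Python dict.
config = {
    "model_list": [
        {"model_name": "code-analyzer",
         "litellm_params": {"model": "ollama/qwen2.5-coder:7b"},
         "priority": 10},
        {"model_name": "fallback-cloud",  # hypothetical second entry
         "litellm_params": {"model": "openrouter/auto"},
         "priority": 5},
    ],
}

def routing_order(cfg):
    """Return model names sorted by priority, highest first (missing -> 0)."""
    entries = sorted(cfg["model_list"],
                     key=lambda e: e.get("priority", 0),
                     reverse=True)
    return [e["model_name"] for e in entries]

print(routing_order(config))
# -> ['code-analyzer', 'fallback-cloud']
```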

🖥️ CLI Reference

| Command | Description |
| --- | --- |
| `lolm status` | Show provider status |
| `lolm set-provider PROVIDER` | Set default provider |
| `lolm set-model PROVIDER MODEL` | Set model for provider |
| `lolm key set PROVIDER KEY` | Set API key |
| `lolm key show` | Show configured keys |
| `lolm models [PROVIDER]` | List recommended models |
| `lolm test [--provider P]` | Test LLM generation |
| `lolm config show` | Show configuration |
| `lolm priority set-provider P N` | Set provider priority |
| `lolm priority set-mode MODE` | Set priority mode |

🔌 Supported Providers

| Provider | Type | Free Tier | Default Model |
| --- | --- | --- | --- |
| OpenRouter | Cloud | ✅ | `nvidia/nemotron-3-nano-30b-a3b:free` |
| Ollama | Local | ✅ | `qwen2.5-coder:14b` |
| Groq | Cloud | ✅ | `llama-3.1-70b-versatile` |
| Together | Cloud | - | `Qwen/Qwen2.5-Coder-32B-Instruct` |
| LiteLLM | Universal | - | `gpt-4` |
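
The defaults above amount to a provider-to-model mapping, with environment variables like `OPENROUTER_MODEL` acting as overrides. A small illustrative helper (not part of lolm's API) makes the resolution order explicit:

```python
# Default models per provider, taken from the table above.
DEFAULT_MODELS = {
    "openrouter": "nvidia/nemotron-3-nano-30b-a3b:free",
    "ollama": "qwen2.5-coder:14b",
    "groq": "llama-3.1-70b-versatile",
    "together": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "litellm": "gpt-4",
}

def resolve_model(provider, overrides=None):
    """Pick the model for a provider, preferring an explicit override.

    `overrides` plays the role of OPENROUTER_MODEL-style settings;
    an illustrative helper, not lolm's internals.
    """
    overrides = overrides or {}
    return overrides.get(provider, DEFAULT_MODELS[provider])

print(resolve_model("ollama"))
# -> qwen2.5-coder:14b
print(resolve_model("ollama", {"ollama": "qwen2.5-coder:7b"}))
# -> qwen2.5-coder:7b
```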

🧰 Monorepo (code2logic) workflow

If you use lolm inside the code2logic monorepo, you can manage all packages from the repository root:

make test-all
make build-subpackages
make publish-all

See: docs/19-monorepo-workflow.md.

🧪 Development

# Install dev dependencies
make install-dev

# Run tests
make test

# Format code
make format

# Lint
make lint

# Build package
make build

# Publish to PyPI
make publish

📄 License

Apache 2.0 License - see LICENSE for details.

