
LOLM - Lightweight Orchestrated LLM Manager

PyPI version · Python 3.9+ · License: Apache 2.0 · Code style: black

A reusable LLM configuration and management package for Python projects: multi-provider support with automatic fallback, priority routing, and unified configuration.

✨ Features

  • 🔄 Multi-provider support - OpenRouter, Ollama, Groq, Together, LiteLLM
  • ⚡ Automatic fallback - Seamless provider switching on failure
  • 🎯 Priority routing - Configure provider order by priority
  • 🔧 Unified config - .env, litellm_config.yaml, ~/.lolm/
  • 🖥️ CLI interface - Similar to reclapp llm

🚀 Installation

pip install lolm

Or with optional dependencies:

pip install lolm[full]      # All providers
pip install lolm[ollama]    # Ollama support
pip install lolm[litellm]   # LiteLLM support

📖 Quick Start

CLI

# Show provider status
lolm status

# Set default provider
lolm set-provider openrouter

# Set model
lolm set-model openrouter nvidia/nemotron-3-nano-30b-a3b:free

# Manage API keys
lolm key set openrouter YOUR_API_KEY

# Test generation
lolm test

Python API

from lolm import get_client, LLMManager

# Simple usage
client = get_client()
response = client.generate("Explain this code")

# With specific provider
client = get_client(provider='openrouter')
response = client.generate("Hello!", system="You are helpful")

# With manager for fallback
manager = LLMManager()
manager.initialize()

response = manager.generate_with_fallback(
    "Generate code",
    providers=['openrouter', 'groq', 'ollama']
)

⚙️ Configuration

Environment Variables (.env)

# API Keys
OPENROUTER_API_KEY=sk-or-v1-...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...

# Default provider
LLM_PROVIDER=auto

# Model overrides
OPENROUTER_MODEL=nvidia/nemotron-3-nano-30b-a3b:free
OLLAMA_MODEL=qwen2.5-coder:14b
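With `LLM_PROVIDER=auto`, a plausible selection rule is to prefer a cloud provider whose API key is present and otherwise fall back to local Ollama. The sketch below illustrates that rule; it is an assumption about how "auto" could work, not lolm's documented selection logic.

```python
# Hypothetical "auto" provider selection based on the environment variables above.
import os

def pick_provider(env=None):
    """Return an explicit provider, or infer one when LLM_PROVIDER is 'auto'."""
    env = os.environ if env is None else env
    choice = env.get("LLM_PROVIDER", "auto")
    if choice != "auto":
        return choice
    # Prefer cloud providers with a configured key, in a fixed order.
    for name, key_var in [("openrouter", "OPENROUTER_API_KEY"),
                          ("groq", "GROQ_API_KEY"),
                          ("together", "TOGETHER_API_KEY")]:
        if env.get(key_var):
            return name
    return "ollama"  # local fallback needs no key

print(pick_provider({"LLM_PROVIDER": "auto", "GROQ_API_KEY": "gsk_x"}))  # groq
```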

litellm_config.yaml

model_list:
  - model_name: code-analyzer
    litellm_params:
      model: ollama/qwen2.5-coder:7b
      api_base: http://localhost:11434
    priority: 10

router_settings:
  routing_strategy: simple-shuffle
  num_retries: 3
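Each `model_list` entry carries a `priority`; a router can sort entries by it before dispatching. The snippet below works over the parsed form of the YAML above (the second entry is a hypothetical addition to make the ordering visible) and is a sketch of the idea, not LiteLLM's actual router.

```python
# Sketch: ordering parsed model_list entries by descending priority.
config = {
    "model_list": [
        {"model_name": "code-analyzer",
         "litellm_params": {"model": "ollama/qwen2.5-coder:7b",
                            "api_base": "http://localhost:11434"},
         "priority": 10},
        # Hypothetical second entry, added only to illustrate the sort.
        {"model_name": "code-analyzer-cloud",
         "litellm_params": {"model": "groq/llama-3.1-70b-versatile"},
         "priority": 5},
    ],
    "router_settings": {"routing_strategy": "simple-shuffle", "num_retries": 3},
}

def ordered_models(cfg):
    """Return model identifiers sorted so higher-priority entries are tried first."""
    entries = sorted(cfg["model_list"], key=lambda e: e.get("priority", 0), reverse=True)
    return [e["litellm_params"]["model"] for e in entries]

print(ordered_models(config))  # ['ollama/qwen2.5-coder:7b', 'groq/llama-3.1-70b-versatile']
```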

🖥️ CLI Reference

| Command | Description |
|---|---|
| `lolm status` | Show provider status |
| `lolm set-provider PROVIDER` | Set default provider |
| `lolm set-model PROVIDER MODEL` | Set model for provider |
| `lolm key set PROVIDER KEY` | Set API key |
| `lolm key show` | Show configured keys |
| `lolm models [PROVIDER]` | List recommended models |
| `lolm test [--provider P]` | Test LLM generation |
| `lolm config show` | Show configuration |
| `lolm priority set-provider P N` | Set provider priority |
| `lolm priority set-mode MODE` | Set priority mode |

🔌 Supported Providers

| Provider | Type | Free Tier | Default Model |
|---|---|---|---|
| OpenRouter | Cloud | ✅ | `nvidia/nemotron-3-nano-30b-a3b:free` |
| Ollama | Local | ✅ | `qwen2.5-coder:14b` |
| Groq | Cloud | ✅ | `llama-3.1-70b-versatile` |
| Together | Cloud | - | `Qwen/Qwen2.5-Coder-32B-Instruct` |
| LiteLLM | Universal | - | `gpt-4` |
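The table above maps cleanly onto a small lookup helper. This is an illustrative sketch of how such a registry could look in application code, not a function exported by lolm.

```python
# Default-model registry mirroring the providers table (illustrative, not the lolm API).
DEFAULT_MODELS = {
    "openrouter": "nvidia/nemotron-3-nano-30b-a3b:free",
    "ollama": "qwen2.5-coder:14b",
    "groq": "llama-3.1-70b-versatile",
    "together": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "litellm": "gpt-4",
}

def default_model(provider):
    """Return the default model for a provider, or raise on an unknown name."""
    try:
        return DEFAULT_MODELS[provider]
    except KeyError:
        raise ValueError(f"unsupported provider: {provider}") from None

print(default_model("groq"))  # llama-3.1-70b-versatile
```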

🧰 Monorepo (code2logic) workflow

If you use lolm inside the code2logic monorepo, you can manage all packages from the repository root:

make test-all
make build-subpackages
make publish-all

See: docs/19-monorepo-workflow.md.

🧪 Development

# Install dev dependencies
make install-dev

# Run tests
make test

# Format code
make format

# Lint
make lint

# Build package
make build

# Publish to PyPI
make publish

📄 License

Apache 2.0 License - see LICENSE for details.



Download files

Download the file for your platform.

Source Distribution

lolm-0.1.6.tar.gz (4.7 kB)


Built Distribution


lolm-0.1.6-py3-none-any.whl (3.6 kB)


File details

Details for the file lolm-0.1.6.tar.gz.

File metadata

  • Download URL: lolm-0.1.6.tar.gz
  • Upload date:
  • Size: 4.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for lolm-0.1.6.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7053c609441c612bdcf0362ea07dfbbec01d82bb23425231951baa4a8704dbfb` |
| MD5 | `ed5bfd8c733813f9c96f5b3b79b8e7ca` |
| BLAKE2b-256 | `d07c619eefab8a0bb6ca52ce783501cb8e09e02343a8aa3a06f5691805b5abb1` |


File details

Details for the file lolm-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: lolm-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 3.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for lolm-0.1.6-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | `362ef214c8718fb7b9cc66ab3e0159b7df272715043ab10a0f7efcd70db5ae81` |
| MD5 | `d46310951483f01a33eb10730a678b11` |
| BLAKE2b-256 | `2952d0772725556c508821cd0678d66629d226c8540882cce5b00bca9a8793e3` |

