LOLM - Lightweight Orchestrated LLM Manager

Python 3.9+ · License: Apache 2.0 · Code style: black

Reusable LLM configuration and management package for Python projects.

Multi-provider support with automatic fallback, priority routing, and unified configuration.

✨ Features

  • 🔄 Multi-provider support - OpenRouter, Ollama, Groq, Together, LiteLLM
  • ⚡ Automatic fallback - Seamless provider switching on failure
  • 🎯 Priority routing - Configure provider order by priority
  • 🔧 Unified config - .env, litellm_config.yaml, ~/.lolm/
  • 🖥️ CLI interface - Similar to reclapp llm
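
The fallback and priority-routing behavior above can be sketched in plain Python. This is an illustrative sketch only: `flaky_provider`, `stub_provider`, and this `generate_with_fallback` are hypothetical stand-ins, not lolm's internal implementation.

```python
# Sketch of priority-ordered fallback, assuming each provider exposes a
# generate() callable. Names here are illustrative, not lolm's actual API.

def flaky_provider(prompt):
    # Stands in for a provider that is currently unreachable.
    raise ConnectionError("provider unreachable")

def stub_provider(prompt):
    # Stands in for a healthy lower-priority provider.
    return f"echo: {prompt}"

def generate_with_fallback(prompt, providers):
    """Try providers in priority order; return (name, result) from the first success."""
    errors = {}
    for name, generate in providers:
        try:
            return name, generate(prompt)
        except Exception as exc:  # a real client would catch narrower errors
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")

# Providers listed by descending priority; the first healthy one wins.
providers = [("openrouter", flaky_provider), ("groq", stub_provider)]
name, result = generate_with_fallback("Hello!", providers)
print(name, result)  # groq echo: Hello!
```

The real package exposes this pattern through `LLMManager.generate_with_fallback` (see Quick Start below).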

🚀 Installation

pip install lolm

Or with optional dependencies:

pip install lolm[full]      # All providers
pip install lolm[ollama]    # Ollama support
pip install lolm[litellm]   # LiteLLM support

📖 Quick Start

CLI

# Show provider status
lolm status

# Set default provider
lolm set-provider openrouter

# Set model
lolm set-model openrouter nvidia/nemotron-3-nano-30b-a3b:free

# Manage API keys
lolm key set openrouter YOUR_API_KEY

# Test generation
lolm test

Python API

from lolm import get_client, LLMManager

# Simple usage
client = get_client()
response = client.generate("Explain this code")

# With specific provider
client = get_client(provider='openrouter')
response = client.generate("Hello!", system="You are helpful")

# With manager for fallback
manager = LLMManager()
manager.initialize()

response = manager.generate_with_fallback(
    "Generate code",
    providers=['openrouter', 'groq', 'ollama']
)

⚙️ Configuration

Environment Variables (.env)

# API Keys
OPENROUTER_API_KEY=sk-or-v1-...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...

# Default provider
LLM_PROVIDER=auto

# Model overrides
OPENROUTER_MODEL=nvidia/nemotron-3-nano-30b-a3b:free
OLLAMA_MODEL=qwen2.5-coder:14b

litellm_config.yaml

model_list:
  - model_name: code-analyzer
    litellm_params:
      model: ollama/qwen2.5-coder:7b
      api_base: http://localhost:11434
    priority: 10

router_settings:
  routing_strategy: simple-shuffle
  num_retries: 3
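
The `priority` field can drive routing order. The sketch below sorts a parsed `model_list` by descending priority; the dict mirrors the YAML above (with a second, hypothetical `cloud-fallback` entry added for illustration), and `by_priority` is an assumption, not lolm's or LiteLLM's actual routing code.

```python
# Dict mirroring a parsed litellm_config.yaml; the second entry is a
# hypothetical addition so the sort has something to order.
config = {
    "model_list": [
        {"model_name": "code-analyzer",
         "litellm_params": {"model": "ollama/qwen2.5-coder:7b",
                            "api_base": "http://localhost:11434"},
         "priority": 10},
        {"model_name": "cloud-fallback",
         "litellm_params": {"model": "openrouter/nvidia/nemotron-3-nano-30b-a3b:free"},
         "priority": 5},
    ],
    "router_settings": {"routing_strategy": "simple-shuffle", "num_retries": 3},
}

def by_priority(cfg):
    """Return model names ordered highest-priority first (missing priority = 0)."""
    return [m["model_name"]
            for m in sorted(cfg["model_list"],
                            key=lambda m: m.get("priority", 0),
                            reverse=True)]

print(by_priority(config))  # ['code-analyzer', 'cloud-fallback']
```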

🖥️ CLI Reference

| Command | Description |
|---------|-------------|
| lolm status | Show provider status |
| lolm set-provider PROVIDER | Set default provider |
| lolm set-model PROVIDER MODEL | Set model for provider |
| lolm key set PROVIDER KEY | Set API key |
| lolm key show | Show configured keys |
| lolm models [PROVIDER] | List recommended models |
| lolm test [--provider P] | Test LLM generation |
| lolm config show | Show configuration |
| lolm priority set-provider P N | Set provider priority |
| lolm priority set-mode MODE | Set priority mode |

🔌 Supported Providers

| Provider | Type | Free Tier | Default Model |
|----------|------|-----------|---------------|
| OpenRouter | Cloud | ✅ | nvidia/nemotron-3-nano-30b-a3b:free |
| Ollama | Local | ✅ | qwen2.5-coder:14b |
| Groq | Cloud | ✅ | llama-3.1-70b-versatile |
| Together | Cloud | - | Qwen/Qwen2.5-Coder-32B-Instruct |
| LiteLLM | Universal | - | gpt-4 |

🧰 Monorepo (code2logic) workflow

If you use lolm inside the code2logic monorepo, you can manage all packages from the repository root:

make test-all
make build-subpackages
make publish-all

See: docs/19-monorepo-workflow.md.

🧪 Development

# Install dev dependencies
make install-dev

# Run tests
make test

# Format code
make format

# Lint
make lint

# Build package
make build

# Publish to PyPI
make publish

📄 License

Apache 2.0 License - see LICENSE for details.

Download files

Download the file for your platform.

Source Distribution

lolm-0.1.8.tar.gz (4.7 kB)

Built Distribution

lolm-0.1.8-py3-none-any.whl (3.6 kB)

File details

Details for the file lolm-0.1.8.tar.gz.

File metadata

  • Download URL: lolm-0.1.8.tar.gz
  • Size: 4.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for lolm-0.1.8.tar.gz
Algorithm Hash digest
SHA256 beaafc21ec9dc6b1476e5a92a4ebdeb6efa604414ee9f599ee4dae5ed042c90d
MD5 2cc577e4a680b0f0cbefff7e6a135cc9
BLAKE2b-256 0676514495df84ce9cc8261f34b1ee0422c0d242f20477c3fa74af974b31c9e7

File details

Details for the file lolm-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: lolm-0.1.8-py3-none-any.whl
  • Size: 3.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for lolm-0.1.8-py3-none-any.whl
Algorithm Hash digest
SHA256 bd5302b6f5665356bb17188b8217f8b1137aca413190d48e0c30f9f57c6174e3
MD5 bb162abbbfc75918979797337f759038
BLAKE2b-256 c06d234d31072c0005ab081e4e1384eca9a677a403abd8712d954d3d5bfcc360
