
dazllm 🚀

Simple, unified interface for all major LLMs

Stop juggling different APIs and libraries. dazllm gives you a clean, consistent way to chat with any LLM - from GPT-4 and Claude to local Ollama models.

Features

🎯 Unified API - Same interface for OpenAI, Anthropic, Google, and local models
🔧 Smart Model Selection - Choose by name, type, or let it auto-select
🔐 Secure Configuration - API keys stored safely in system keyring
📝 Structured Output - Get Pydantic models directly from LLM responses
🎨 Image Generation - Create images with DALL-E and more
💻 CLI & Python API - Use from command line or import in your code

Quick Start

Installation

pip install dazllm

Setup

Configure your API keys using keyring:

keyring set dazllm openai_api_key YOUR_OPENAI_KEY
keyring set dazllm anthropic_api_key YOUR_ANTHROPIC_KEY
keyring set dazllm google_api_key YOUR_GOOGLE_KEY
keyring set dazllm ollama_url http://localhost:11434

Check everything is working:

dazllm --check

Usage

Command Line

# Simple chat
dazllm chat "What's the capital of France?"

# Use specific model  
dazllm chat --model openai:gpt-4 "Explain quantum computing"

# Use model type (auto-selects best available)
dazllm chat --model-type paid_best "Write a poem"

# Use provider default
dazllm chat --model openai "Tell me about AI"

# Structured output
dazllm structured "List 3 colors" --schema '{"type":"array","items":{"type":"string"}}'

# Generate images
dazllm image "a red cat wearing a hat" cat.png

# From file
dazllm chat --file prompt.txt --output response.txt
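When scripting the CLI, the `--schema` argument for `dazllm structured` is easier to get right if you build the JSON Schema in code instead of hand-quoting it. A minimal sketch (the schema and flags mirror the examples above; invoking the command via `subprocess.run(cmd)` assumes dazllm is on your PATH):

```python
import json
import shlex

# Build the JSON Schema programmatically rather than hand-writing the
# quoted string; this matches the CLI example above.
schema = {"type": "array", "items": {"type": "string"}}

# Compose the full command as an argument list, ready for subprocess.run(cmd).
cmd = ["dazllm", "structured", "List 3 colors", "--schema", json.dumps(schema)]

# shlex.join shows the properly quoted shell equivalent.
print(shlex.join(cmd))
```

Passing an argument list (rather than a single shell string) avoids quoting bugs when prompts contain apostrophes or double quotes.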

Python API

from dazllm import Llm, ModelType
from pydantic import BaseModel

# Instance-based usage
llm = Llm("openai:gpt-4")
response = llm.chat("Hello!")

# Static/module-level usage
response = Llm.chat("Hello!", model="anthropic:claude-3-5-sonnet-20241022")
response = Llm.chat("Hello!", model_type=ModelType.PAID_BEST)

# Structured output with Pydantic
class ColorList(BaseModel):
    colors: list[str]

result = Llm.chat_structured("List 3 colors", ColorList)
print(result.colors)  # ['red', 'green', 'blue']

# Image generation
Llm.image("a sunset over mountains", "sunset.png")

# Conversation history
conversation = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "What's your name?"}
]
response = Llm.chat(conversation, model="ollama:mistral-small")
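Because conversations are plain lists of role/content dicts, long sessions can grow past a model's context window. A hypothetical helper (not part of dazllm) that keeps a rolling window of recent messages:

```python
# Hypothetical helper: keep only the most recent messages so a long-running
# conversation stays within the model's context limit.
def trim_conversation(conversation, max_messages=20):
    """Return the last max_messages entries of the conversation."""
    return conversation[-max_messages:]

conversation = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "What's your name?"},
]
trimmed = trim_conversation(conversation, max_messages=2)
print([m["role"] for m in trimmed])  # ['assistant', 'user']
```

A production version might trim on estimated token count rather than message count, but the list-of-dicts shape stays the same.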

Model Types

Instead of remembering model names, use semantic types:

  • local_small - ~1B parameter models (fast, basic)
  • local_medium - ~7B parameter models (good balance)
  • local_large - ~14B parameter models (best local quality)
  • paid_cheap - Cost-effective cloud models
  • paid_best - Highest quality cloud models
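To illustrate the idea (this is a sketch, not dazllm's actual selection logic), a semantic type can be thought of as a preference-ordered list of concrete models, with the first available one winning. The model names below are taken from the Model Format section:

```python
# Illustrative only: a possible mapping from semantic type to a
# preference-ordered fallback chain of concrete models.
FALLBACKS = {
    "paid_best": ["openai:gpt-4o", "anthropic:claude-3-5-sonnet-20241022"],
    "paid_cheap": ["openai:gpt-4o-mini", "anthropic:claude-3-haiku-20240307"],
    "local_medium": ["ollama:mistral-small", "ollama:llama3:8b"],
}

def resolve(model_type, available):
    """Return the first preferred model that is actually available."""
    for model in FALLBACKS.get(model_type, []):
        if model in available:
            return model
    raise ValueError(f"no available model for type {model_type!r}")

print(resolve("paid_cheap", {"anthropic:claude-3-haiku-20240307"}))
```

The benefit is that calling code says what quality/cost trade-off it wants and keeps working as providers come and go.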

Model Format

All models use the format provider:model:

  • OpenAI: openai:gpt-4o, openai:gpt-4o-mini, openai:dall-e-3
  • Anthropic: anthropic:claude-3-5-sonnet-20241022, anthropic:claude-3-haiku-20240307
  • Google: google:gemini-pro, google:gemini-flash
  • Ollama: ollama:mistral-small, ollama:llama3:8b, ollama:codellama:7b

You can also use just the provider name (e.g., openai) to use that provider's default model.
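One subtlety worth noting: Ollama model names can themselves contain a colon (e.g. ollama:llama3:8b), so a parser for this format should split only on the first colon. A minimal sketch of such a parser (not dazllm's internal code):

```python
# Parse the provider:model format. Split only on the FIRST colon, because
# Ollama model names like "llama3:8b" contain colons of their own.
def parse_model(spec):
    provider, sep, model = spec.partition(":")
    # A bare provider name means "use that provider's default model".
    return (provider, model if sep else None)

print(parse_model("ollama:llama3:8b"))  # ('ollama', 'llama3:8b')
print(parse_model("openai"))            # ('openai', None)
```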

Configuration

API keys are stored securely in your system keyring:

# Set API keys
keyring set dazllm openai_api_key YOUR_OPENAI_KEY
keyring set dazllm anthropic_api_key YOUR_ANTHROPIC_KEY
keyring set dazllm google_api_key YOUR_GOOGLE_KEY
keyring set dazllm ollama_url http://localhost:11434

# Set default model (optional)
keyring set dazllm default_model openai:gpt-4o

# Check what's configured
dazllm --check

Examples

Building a Chatbot

from dazllm import Llm

def chatbot():
    llm = Llm.model_named("openai:gpt-4o")
    conversation = []
    
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            break
            
        conversation.append({"role": "user", "content": user_input})
        response = llm.chat(conversation)
        conversation.append({"role": "assistant", "content": response})
        
        print(f"AI: {response}")

chatbot()
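Since the conversation is just a list of JSON-serializable dicts, persisting chat history between runs needs only the standard library. A hedged sketch you could bolt onto the chatbot above (the file name is arbitrary):

```python
import json
from pathlib import Path

def save_conversation(conversation, path):
    """Write the conversation (a list of role/content dicts) to disk as JSON."""
    Path(path).write_text(json.dumps(conversation, indent=2))

def load_conversation(path):
    """Load a previously saved conversation, or start fresh if none exists."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []
```

Call `load_conversation(...)` before the loop and `save_conversation(...)` after each turn to make the chatbot resumable.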

Data Extraction

from dazllm import Llm
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int
    city: str

class People(BaseModel):
    people: list[Person]

text = "John Doe, age 30, lives in New York. Jane Smith, age 25, lives in LA."

result = Llm.chat_structured(
    f"Extract people info from: {text}",
    People,
    model="openai:gpt-4o-mini"
)

for person in result.people:
    print(f"{person.name} is {person.age} years old and lives in {person.city}")
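Once extraction returns typed Pydantic objects, exporting them is straightforward: Pydantic models convert to plain dicts (via `.model_dump()` in Pydantic v2), after which the stdlib csv module does the rest. The rows below stand in for `result.people` after that conversion:

```python
import csv
import io

# Stand-ins for result.people after converting each Person to a dict
# (e.g. [p.model_dump() for p in result.people] with Pydantic v2).
people = [
    {"name": "John Doe", "age": 30, "city": "New York"},
    {"name": "Jane Smith", "age": 25, "city": "LA"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "age", "city"])
writer.writeheader()
writer.writerows(people)
print(buf.getvalue())
```

Swap `io.StringIO()` for an open file handle to write a real CSV to disk.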

Image Generation Pipeline

from dazllm import Llm

# Generate image description
description = Llm.chat(
    "Describe a serene mountain landscape in detail",
    model_type="paid_cheap"
)

# Generate the image
image_path = Llm.image(description, "mountain.png", width=1024, height=768)
print(f"Image saved to {image_path}")

Requirements

  • Python 3.8+
  • API keys for desired providers (OpenAI, Anthropic, Google)
  • Ollama installed for local models

License

MIT License

Contributing

Contributions welcome! Please see the GitHub repository for guidelines.


dazllm - Making LLMs accessible to everyone! 🚀
