
Langchain LLM Config

Yet another redundant Langchain abstraction: a comprehensive Python package for managing and using multiple LLM providers (OpenAI, VLLM, Gemini, Infinity) with a unified interface for both chat assistants and embeddings.


Features

  • 🤖 Multiple Chat Providers: Support for OpenAI, VLLM, and Gemini
  • 🔗 Multiple Embedding Providers: Support for OpenAI, VLLM, and Infinity
  • ⚙️ Unified Configuration: Single YAML configuration file for all providers
  • 🚀 Easy Setup: CLI tool for quick configuration initialization
  • 🔄 Easy Context Concatenation: Simple merging of context into chat prompts
  • 🔒 Environment Variables: Secure API key management
  • 📦 Self-Contained: No need to import specific paths
  • ⚡ Async Support: Full async/await support for all operations
  • 🌊 Streaming Chat: Real-time streaming responses for interactive experiences
  • 🛠️ Enhanced CLI: Environment setup and validation commands

Installation

Using pip

pip install langchain-llm-config

Using uv (recommended)

uv add langchain-llm-config

Development installation

git clone https://github.com/liux2/Langchain-LLM-Config.git
cd langchain-llm-config
uv sync --dev
uv run pip install -e .

Quick Start

1. Initialize Configuration

# Initialize config in current directory
llm-config init

# Or specify a custom location
llm-config init ~/.config/api.yaml

This creates an api.yaml file with all supported providers configured.

2. Set Up Environment Variables

# Set up environment variables and create .env file
llm-config setup-env

# Or with custom config path
llm-config setup-env --config-path ~/.config/.env

This creates a .env file with placeholders for your API keys.

3. Configure Your Providers

Edit the generated api.yaml file with your API keys and settings:

llm:
  openai:
    chat:
      api_base: "https://api.openai.com/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "gpt-3.5-turbo"
      temperature: 0.7
      max_tokens: 8192
    embeddings:
      api_base: "https://api.openai.com/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "text-embedding-ada-002"
  
  vllm:
    chat:
      api_base: "http://localhost:8000/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "meta-llama/Llama-2-7b-chat-hf"
      temperature: 0.6
  
  default:
    chat_provider: "openai"
    embedding_provider: "openai"

4. Set Environment Variables

Edit the .env file with your actual API keys:

OPENAI_API_KEY=your-openai-api-key
GEMINI_API_KEY=your-gemini-api-key
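Loading a `.env` file is usually handled by a tool such as python-dotenv, but the format is simple enough to illustrate with a stdlib-only sketch (a minimal parser for the plain `KEY=value` form shown above; `load_env_file` is a hypothetical helper, not part of this package):

```python
import os

def load_env_file(path: str) -> None:
    """Minimal .env loader: KEY=value lines, '#' comments, no quoting rules."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without an '=' separator
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over .env values
            os.environ.setdefault(key.strip(), value.strip())
```

In practice prefer python-dotenv, which also handles quoting and multiline values.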

5. Use in Your Code

Basic Usage (Synchronous)

from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field
from typing import List

# Define your response model
class ArticleAnalysis(BaseModel):
    summary: str = Field(..., description="Article summary")
    keywords: List[str] = Field(..., description="Key topics")
    sentiment: str = Field(..., description="Overall sentiment")

# Create an assistant
assistant = create_assistant(
    response_model=ArticleAnalysis,
    system_prompt="You are a helpful article analyzer.",
    provider="openai"  # or "vllm", "gemini"
)

# Use the assistant (synchronous)
result = assistant.ask("Analyze this article: ...")
print(result["summary"])

# Create an embedding provider
embedding_provider = create_embedding_provider(provider="openai")

# Get embeddings (synchronous)
texts = ["Hello world", "How are you?"]
embeddings = embedding_provider.embed_texts(texts)
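Assuming `embed_texts` returns one float vector per input text, the results can be compared directly; for example, cosine similarity with only the standard library (toy vectors are used so the snippet runs standalone):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With real embeddings you would pass embeddings[0] and embeddings[1].
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
```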

Advanced Usage (Asynchronous)

import asyncio

# assistant, embedding_provider, and texts come from the synchronous example above
async def main():
    # Use the assistant (asynchronous)
    result = await assistant.ask_async("Analyze this article: ...")
    print(result["summary"])

    # Get embeddings (asynchronous)
    embeddings = await embedding_provider.embed_texts_async(texts)

asyncio.run(main())

Streaming Chat

from langchain_llm_config import create_chat_streaming

# Create streaming chat assistant
streaming_chat = create_chat_streaming(
    provider="openai",
    system_prompt="You are a helpful assistant."
)

# Stream responses in real-time
async for chunk in streaming_chat.chat_stream("Tell me a story"):
    if chunk["type"] == "stream":
        print(chunk["content"], end="", flush=True)
    elif chunk["type"] == "final":
        print(f"\n\nProcessing time: {chunk['processing_time']:.2f}s")
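Given the chunk shapes above ("stream" chunks carrying content, then a "final" chunk), a small helper can accumulate the full response. The `fake_stream` generator below is a stand-in for `chat_stream` so the sketch runs without an API key:

```python
import asyncio

async def fake_stream():
    """Stand-in for streaming_chat.chat_stream(...), emitting the same chunk shapes."""
    for piece in ["Once ", "upon ", "a time."]:
        yield {"type": "stream", "content": piece}
    yield {"type": "final", "processing_time": 0.12}

async def collect(stream):
    """Accumulate 'stream' chunks into the full text; return it with the final chunk."""
    parts, final = [], None
    async for chunk in stream:
        if chunk["type"] == "stream":
            parts.append(chunk["content"])
        elif chunk["type"] == "final":
            final = chunk
    return "".join(parts), final

text, final = asyncio.run(collect(fake_stream()))
print(text)  # Once upon a time.
```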

Supported Providers

Chat Providers

Provider | Models                | Features
OpenAI   | GPT-3.5, GPT-4, etc.  | Streaming, function calling, structured output
VLLM     | Any HuggingFace model | Local deployment, high performance
Gemini   | Gemini Pro, etc.      | Google's latest models

Embedding Providers

Provider | Models                       | Features
OpenAI   | text-embedding-ada-002, etc. | High quality, reliable
VLLM     | BGE, sentence-transformers   | Local deployment
Infinity | Various embedding models     | Fast inference

CLI Commands

# Initialize a new configuration file
llm-config init [path]

# Set up environment variables and create .env file
llm-config setup-env [path] [--force]

# Validate existing configuration
llm-config validate [path]

# Show package information
llm-config info

Advanced Usage

Custom Configuration Path

from langchain_llm_config import create_assistant

assistant = create_assistant(
    response_model=MyModel,
    config_path="/path/to/custom/api.yaml"
)

Context-Aware Conversations

# Add context to your queries (inside an async function)
result = await assistant.ask_async(
    query="What are the main points?",
    context="This is a research paper about machine learning...",
    extra_system_prompt="Focus on technical details."
)
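Under the hood, this kind of context handling typically amounts to assembling a combined prompt. The `build_messages` helper below is a purely illustrative sketch of one plausible assembly, not this package's actual internals:

```python
def build_messages(system_prompt, query, context=None, extra_system_prompt=None):
    """Illustrative only: combine system prompt, optional extras, context, and query."""
    system = system_prompt
    if extra_system_prompt:
        system += "\n" + extra_system_prompt
    # Prepend the context to the user query when one is supplied
    user = query if context is None else f"Context:\n{context}\n\nQuestion: {query}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = build_messages(
    "You are a helpful assistant.",
    "What are the main points?",
    context="This is a research paper about machine learning...",
    extra_system_prompt="Focus on technical details.",
)
```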

Direct Provider Usage

from langchain_llm_config import VLLMAssistant, OpenAIEmbeddingProvider

# Use providers directly
vllm_assistant = VLLMAssistant(
    config={"api_base": "http://localhost:8000/v1", "model_name": "llama-2"},
    response_model=MyModel
)

openai_embeddings = OpenAIEmbeddingProvider(
    config={"api_key": "your-key", "model_name": "text-embedding-ada-002"}
)

Complete Example with Error Handling

import asyncio
from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field
from typing import List

class ChatResponse(BaseModel):
    message: str = Field(..., description="The assistant's response message")
    confidence: float = Field(..., description="Confidence score", ge=0.0, le=1.0)
    suggestions: List[str] = Field(default_factory=list, description="Follow-up questions")

async def main():
    try:
        # Create assistant
        assistant = create_assistant(
            response_model=ChatResponse,
            provider="openai",
            system_prompt="You are a helpful AI assistant."
        )
        
        # Chat conversation
        response = await assistant.ask_async("What is the capital of France?")
        print(f"Assistant: {response['message']}")
        print(f"Confidence: {response['confidence']:.2f}")
        
        # Create embedding provider
        embedding_provider = create_embedding_provider(provider="openai")
        
        # Get embeddings
        texts = ["Hello world", "How are you?"]
        embeddings = await embedding_provider.embed_texts_async(texts)
        print(f"Generated {len(embeddings)} embeddings")
        
    except Exception as e:
        print(f"Error: {e}")

# Run the example
asyncio.run(main())
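If transient failures (rate limits, timeouts) are a concern, the try/except above can be paired with a simple retry helper. `retry_async` below is a stdlib-only sketch, not part of this package:

```python
import asyncio

async def retry_async(coro_factory, attempts=3, base_delay=0.1):
    """Call coro_factory() up to `attempts` times with exponential backoff."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: re-raise the last error
            await asyncio.sleep(base_delay * (2 ** attempt))

# Example: a flaky coroutine that fails twice, then succeeds.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = asyncio.run(retry_async(flaky))
print(result)  # ok
```

In real use, `coro_factory` would be something like `lambda: assistant.ask_async(query)`.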

Configuration Reference

Environment Variables

The package supports environment variable substitution in configuration:

api_key: "${OPENAI_API_KEY}"  # Will be replaced with actual value
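One common way to implement this kind of `${VAR}` substitution looks like the sketch below (the general technique, not necessarily this package's internals; `expand_env` is a hypothetical helper):

```python
import os
import re

# Matches ${NAME} where NAME is a valid environment variable identifier
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env(value: str) -> str:
    """Replace every ${NAME} in `value` with os.environ[NAME]; raise if unset."""
    def repl(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _VAR.sub(repl, value)
```

Applied recursively over the loaded configuration, every string like `"${OPENAI_API_KEY}"` becomes the live key at load time, so secrets never need to be written into the YAML file itself.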

Configuration Structure

llm:
  provider_name:
    chat:
      api_base: "https://api.example.com/v1"
      api_key: "${API_KEY}"
      model_name: "model-name"
      temperature: 0.7
      max_tokens: 8192
      top_p: 1.0
      connect_timeout: 60
      read_timeout: 60
      model_kwargs: {}
      # ... other parameters
    embeddings:
      api_base: "https://api.example.com/v1"
      api_key: "${API_KEY}"
      model_name: "embedding-model"
      # ... other parameters
  default:
    chat_provider: "provider_name"
    embedding_provider: "provider_name"
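Once parsed (e.g. with PyYAML), this structure is just nested dicts. Resolving which chat section to use for a given provider might look like the following sketch, under the assumption that the parsed config mirrors the YAML above (`resolve_chat_config` is a hypothetical helper):

```python
def resolve_chat_config(config, provider=None):
    """Pick the chat section for `provider`, falling back to the default provider."""
    llm = config["llm"]
    name = provider or llm["default"]["chat_provider"]
    return llm[name]["chat"]

# Parsed equivalent of the YAML structure above (abbreviated).
config = {
    "llm": {
        "openai": {
            "chat": {"model_name": "gpt-3.5-turbo", "temperature": 0.7},
            "embeddings": {"model_name": "text-embedding-ada-002"},
        },
        "default": {"chat_provider": "openai", "embedding_provider": "openai"},
    }
}
print(resolve_chat_config(config)["model_name"])  # gpt-3.5-turbo
```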

Development

Running Tests

uv run pytest

Code Formatting

uv run black .
uv run isort .

Type Checking

uv run mypy .

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

License

MIT License - see LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

langchain_llm_config-0.1.3.tar.gz (1.0 MB)

Uploaded Source

Built Distribution


langchain_llm_config-0.1.3-py3-none-any.whl (805.5 kB)

Uploaded Python 3

File details

Details for the file langchain_llm_config-0.1.3.tar.gz.

File metadata

  • Download URL: langchain_llm_config-0.1.3.tar.gz
  • Upload date:
  • Size: 1.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for langchain_llm_config-0.1.3.tar.gz
  • SHA256: 67b4b94ca9a4173fa4c8d24bd2c42f8dca4a829ae26ed6b5db4a80121f8a2371
  • MD5: e9c9c8de0c1196ee69c9ff7f189825b1
  • BLAKE2b-256: 68b4f51d2899a52e814a149fefba31ed37868783de240e44566826b19f7c6fab


Provenance

The following attestation bundles were made for langchain_llm_config-0.1.3.tar.gz:

Publisher: python-publish.yml on liux2/Langchain-LLM-Config

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file langchain_llm_config-0.1.3-py3-none-any.whl.

File hashes

Hashes for langchain_llm_config-0.1.3-py3-none-any.whl
  • SHA256: bc39bc8a105a7e5940f0c0be10100e1ea64e2b767958da3a90c4dfd5c5aa31d9
  • MD5: f750891f79d5a18ff142509fa90d5318
  • BLAKE2b-256: 83e06518dc0eacde0a649d26f56d8d53871645d1634b7cd8dbef3e0974bf97b2


Provenance

The following attestation bundles were made for langchain_llm_config-0.1.3-py3-none-any.whl:

Publisher: python-publish.yml on liux2/Langchain-LLM-Config

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
