
Langchain LLM Config

Yet another redundant Langchain abstraction: a comprehensive Python package for managing and using multiple LLM providers (OpenAI, VLLM, Gemini, Infinity) through a unified interface for both chat assistants and embeddings.

Features

  • 🤖 Multiple Chat Providers: Support for OpenAI, VLLM, and Gemini
  • 🔗 Multiple Embedding Providers: Support for OpenAI, VLLM, and Infinity
  • ⚙️ Unified Configuration: Single YAML configuration file for all providers
  • 🚀 Easy Setup: CLI tool for quick configuration initialization
  • 🔄 Easy Context Concatenation: Simple mechanism for injecting context into chat queries
  • 🔒 Environment Variables: Secure API key management
  • 📦 Self-Contained: Everything importable from the package root, no deep module paths
  • ⚡ Async Support: Full async/await support for all operations
  • 🌊 Streaming Chat: Real-time streaming responses for interactive experiences
  • 🛠️ Enhanced CLI: Environment setup and validation commands

Installation

Using pip

pip install langchain-llm-config

Using uv (recommended)

uv add langchain-llm-config

Development installation

git clone https://github.com/liux2/Langchain-LLM-Config.git
cd Langchain-LLM-Config
uv sync --dev
uv run pip install -e .

Quick Start

1. Initialize Configuration

# Initialize config in current directory
llm-config init

# Or specify a custom location
llm-config init ~/.config/api.yaml

This creates an api.yaml file with all supported providers configured.

2. Set Up Environment Variables

# Set up environment variables and create .env file
llm-config setup-env

# Or with custom config path
llm-config setup-env --config-path ~/.config/.env

This creates a .env file with placeholders for your API keys.

3. Configure Your Providers

Edit the generated api.yaml file with your API keys and settings:

llm:
  openai:
    chat:
      api_base: "https://api.openai.com/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "gpt-3.5-turbo"
      temperature: 0.7
      max_tokens: 8192
    embeddings:
      api_base: "https://api.openai.com/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "text-embedding-ada-002"
  
  vllm:
    chat:
      api_base: "http://localhost:8000/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "meta-llama/Llama-2-7b-chat-hf"
      temperature: 0.6
  
  default:
    chat_provider: "openai"
    embedding_provider: "openai"
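
Because a default block is present, the factory functions can presumably be called without an explicit provider argument. The sketch below assumes that omitting provider falls back to llm.default; that fallback is an assumption based on the schema above, not a documented guarantee:

from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field

# Hypothetical response model, just for illustration
class Reply(BaseModel):
    message: str = Field(..., description="Assistant reply")

# Assumption: with no `provider` argument, llm.default decides which backend is used
assistant = create_assistant(response_model=Reply)
embedding_provider = create_embedding_provider()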

4. Set Environment Variables

Edit the .env file with your actual API keys:

OPENAI_API_KEY=your-openai-api-key
GEMINI_API_KEY=your-gemini-api-key

5. Use in Your Code

Basic Usage (Synchronous)

from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field
from typing import List

# Define your response model
class ArticleAnalysis(BaseModel):
    summary: str = Field(..., description="Article summary")
    keywords: List[str] = Field(..., description="Key topics")
    sentiment: str = Field(..., description="Overall sentiment")

# Create an assistant
assistant = create_assistant(
    response_model=ArticleAnalysis,
    system_prompt="You are a helpful article analyzer.",
    provider="openai"  # or "vllm", "gemini"
)

# Use the assistant (synchronous)
result = assistant.ask("Analyze this article: ...")
print(result["summary"])

# Create an embedding provider
embedding_provider = create_embedding_provider(provider="openai")

# Get embeddings (synchronous)
texts = ["Hello world", "How are you?"]
embeddings = embedding_provider.embed_texts(texts)

Advanced Usage (Asynchronous)

import asyncio

async def main() -> None:
    # Use the assistant (asynchronous); reuses `assistant` from the example above
    result = await assistant.ask_async("Analyze this article: ...")
    print(result["summary"])

    # Get embeddings (asynchronous); reuses `embedding_provider` and `texts`
    embeddings = await embedding_provider.embed_texts_async(texts)

asyncio.run(main())

Streaming Chat

import asyncio
from langchain_llm_config import create_chat_streaming

# Create streaming chat assistant
streaming_chat = create_chat_streaming(
    provider="openai",
    system_prompt="You are a helpful assistant."
)

async def stream_demo() -> None:
    # Stream responses in real-time
    async for chunk in streaming_chat.chat_stream("Tell me a story"):
        if chunk["type"] == "stream":
            print(chunk["content"], end="", flush=True)
        elif chunk["type"] == "final":
            print(f"\n\nProcessing time: {chunk['processing_time']:.2f}s")

asyncio.run(stream_demo())

Supported Providers

Chat Providers

| Provider | Models                | Features                                       |
|----------|-----------------------|------------------------------------------------|
| OpenAI   | GPT-3.5, GPT-4, etc.  | Streaming, function calling, structured output |
| VLLM     | Any HuggingFace model | Local deployment, high performance             |
| Gemini   | Gemini Pro, etc.      | Google's latest models                         |

Embedding Providers

| Provider | Models                       | Features               |
|----------|------------------------------|------------------------|
| OpenAI   | text-embedding-ada-002, etc. | High quality, reliable |
| VLLM     | BGE, sentence-transformers   | Local deployment       |
| Infinity | Various embedding models     | Fast inference         |
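
Switching providers only requires passing the matching configuration key. For example, assuming the provider names above correspond directly to keys in api.yaml:

from langchain_llm_config import create_assistant, create_embedding_provider

# Chat against a local VLLM server instead of OpenAI
vllm_assistant = create_assistant(
    response_model=ArticleAnalysis,  # the model defined in the Quick Start
    system_prompt="You are a helpful article analyzer.",
    provider="vllm",
)

# Embeddings served by an Infinity instance
infinity_embeddings = create_embedding_provider(provider="infinity")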

CLI Commands

# Initialize a new configuration file
llm-config init [path]

# Set up environment variables and create .env file
llm-config setup-env [path] [--force]

# Validate existing configuration
llm-config validate [path]

# Show package information
llm-config info

Advanced Usage

Custom Configuration Path

from langchain_llm_config import create_assistant

assistant = create_assistant(
    response_model=MyModel,
    config_path="/path/to/custom/api.yaml"
)

Context-Aware Conversations

# Add context to your queries (call from within an async function)
result = await assistant.ask_async(
    query="What are the main points?",
    context="This is a research paper about machine learning...",
    extra_system_prompt="Focus on technical details."
)

Direct Provider Usage

from langchain_llm_config import VLLMAssistant, OpenAIEmbeddingProvider

# Use providers directly
vllm_assistant = VLLMAssistant(
    config={"api_base": "http://localhost:8000/v1", "model_name": "llama-2"},
    response_model=MyModel
)

openai_embeddings = OpenAIEmbeddingProvider(
    config={"api_key": "your-key", "model_name": "text-embedding-ada-002"}
)

Complete Example with Error Handling

import asyncio
from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field
from typing import List

class ChatResponse(BaseModel):
    message: str = Field(..., description="The assistant's response message")
    confidence: float = Field(..., description="Confidence score", ge=0.0, le=1.0)
    suggestions: List[str] = Field(default_factory=list, description="Follow-up questions")

async def main():
    try:
        # Create assistant
        assistant = create_assistant(
            response_model=ChatResponse,
            provider="openai",
            system_prompt="You are a helpful AI assistant."
        )
        
        # Chat conversation
        response = await assistant.ask_async("What is the capital of France?")
        print(f"Assistant: {response['message']}")
        print(f"Confidence: {response['confidence']:.2f}")
        
        # Create embedding provider
        embedding_provider = create_embedding_provider(provider="openai")
        
        # Get embeddings
        texts = ["Hello world", "How are you?"]
        embeddings = await embedding_provider.embed_texts_async(texts)
        print(f"Generated {len(embeddings)} embeddings")
        
    except Exception as e:
        print(f"Error: {e}")

# Run the example
asyncio.run(main())

Configuration Reference

Environment Variables

The package supports environment variable substitution in configuration:

api_key: "${OPENAI_API_KEY}"  # Will be replaced with actual value
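
The exact substitution mechanism isn't documented here, but ${VAR} expansion can be pictured as a simple pattern replacement over the loaded YAML values. A minimal Python sketch of that idea (an illustration, not the package's actual implementation):

import os
import re

_VAR = re.compile(r"\$\{(\w+)\}")

def expand_env(value: str) -> str:
    """Replace ${NAME} with os.environ['NAME'], leaving unset variables as-is."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)

print(expand_env("${OPENAI_API_KEY}"))  # prints your key once the environment is set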

Configuration Structure

llm:
  provider_name:
    chat:
      api_base: "https://api.example.com/v1"
      api_key: "${API_KEY}"
      model_name: "model-name"
      temperature: 0.7
      max_tokens: 8192
      top_p: 1.0
      connect_timeout: 60
      read_timeout: 60
      model_kwargs: {}
      # ... other parameters
    embeddings:
      api_base: "https://api.example.com/v1"
      api_key: "${API_KEY}"
      model_name: "embedding-model"
      # ... other parameters
  default:
    chat_provider: "provider_name"
    embedding_provider: "provider_name"
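
Because the configuration is plain YAML, it is easy to inspect programmatically. For example, with PyYAML (an illustration of the structure above, not part of the package's API):

import yaml

# Load the configuration and read the default provider selections
with open("api.yaml") as f:
    config = yaml.safe_load(f)

defaults = config["llm"]["default"]
print(defaults["chat_provider"], defaults["embedding_provider"])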

Development

Running Tests

uv run pytest

Code Formatting

uv run black .
uv run isort .

Type Checking

uv run mypy .

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

License

MIT License - see LICENSE file for details.

Support

For bugs, questions, and feature requests, please open an issue on the GitHub repository: https://github.com/liux2/Langchain-LLM-Config
