
Litegen

Litegen is a lightweight Python wrapper for managing LLM interactions, supporting both local Ollama models and the OpenAI API. It provides a simple, unified interface for chat completions with built-in streaming.

Installation

pip install litegen

Features

  • 🚀 Simple unified interface for LLM interactions
  • 🤖 Support for both local Ollama models and OpenAI
  • 📡 Built-in streaming capabilities
  • 🛠 Function calling support
  • 🔄 Context management for conversations
  • 🎯 GPU support for enhanced performance

Quick Start

from litegen import completion, pp_completion

# Simple completion
response = completion(
    model="mistral",  # or any Ollama/OpenAI model
    prompt="What is the capital of France?"
)
print(response.choices[0].message.content)

# Streaming completion with pretty print
pp_completion(
    model="llama2",
    prompt="Write a short story about a robot",
    temperature=0.7
)

Advanced Usage

System Prompts and Context

response = completion(
    model="mistral",
    system_prompt="You are a helpful math tutor",
    prompt="Explain the Pythagorean theorem",
    context=[
        {"role": "user", "content": "Can you help me with math?"},
        {"role": "assistant", "content": "Of course! What would you like to know?"}
    ]
)
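The `context` list grows as the conversation proceeds. A minimal sketch of that pattern in plain Python (no litegen call; the helper name `extend_context` is hypothetical, but the message dicts have the same shape `completion()` accepts):

```python
def extend_context(context, user_msg, assistant_msg):
    """Append one user/assistant exchange to a conversation history.

    `context` is a list of {"role": ..., "content": ...} dicts,
    the same shape that completion() accepts.
    """
    context = list(context)  # copy so the caller's list is not mutated
    context.append({"role": "user", "content": user_msg})
    context.append({"role": "assistant", "content": assistant_msg})
    return context

history = []
history = extend_context(
    history,
    "Can you help me with math?",
    "Of course! What would you like to know?",
)
# history now holds both turns, ready to pass as context=history
```

Pass the accumulated list as `context=history` on the next `completion()` call to keep the conversation going.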

Function Calling

def get_weather(location: str, unit: str = "celsius"):
    """Get weather for a location"""
    pass

response = completion(
    model="gpt-3.5-turbo",
    prompt="What's the weather in Paris?",
    tools=[get_weather]
)
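Litegen accepts plain Python functions in `tools`. OpenAI-style function calling ultimately works on a JSON schema describing each function; the sketch below shows roughly how a signature could be turned into such a schema. This is an illustration under simplifying assumptions (every parameter typed as a string), not litegen's actual implementation:

```python
import inspect

def function_to_tool_schema(fn):
    """Build an OpenAI-style tool schema from a function signature (illustrative)."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": "string"}  # simplification: all params as strings
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required parameter
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

def get_weather(location: str, unit: str = "celsius"):
    """Get weather for a location"""
    pass

schema = function_to_tool_schema(get_weather)
# `location` has no default, so it lands in "required"; `unit` does not
```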

GPU Support

response = completion(
    model="mistral",
    prompt="Complex calculation task",
    gpu=True  # Enable GPU acceleration
)

Configuration

The client can be configured with custom settings:

from litegen import get_client

client = get_client(gpu=True)  # Enable GPU support
# Use client directly for more control

Environment Variables

  • OPENAI_API_KEY: Your OpenAI API key (optional, for OpenAI models)
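The key can be exported in your shell or set in-process before calling litegen. A small sketch (the `"sk-..."` value is a placeholder, not a real key):

```python
import os

# Set the key only if the environment doesn't already provide one.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # placeholder value

# Local Ollama models don't need the key, so its absence is not an error there.
api_key = os.environ.get("OPENAI_API_KEY")
```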

API Reference

completion(...)

Main function for chat completions.

completion(
    model: str,                    # Model name
    messages: Optional[Union[str, List[Dict[str, str]]]] = None,  # Raw message list or a prompt string
    system_prompt: str = "You are helpful Assistant",  # System prompt
    prompt: str = "",              # User prompt
    context: Optional[List[Dict[str, str]]] = None,  # Conversation history
    temperature: Optional[float] = None,  # Temperature for response randomness
    max_tokens: Optional[int] = None,  # Max tokens in response
    stream: bool = False,          # Enable streaming
    stop: Optional[List[str]] = None,  # Stop sequences
    tools: Optional[List] = None,  # Function calling tools
    gpu: bool = False,            # Enable GPU
    **kwargs                      # Additional parameters
)
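`system_prompt`/`prompt`/`context` and `messages` are two ways of supplying the same conversation. A plausible sketch of how the final message list gets assembled from the individual pieces (an assumption about litegen's internals, shown only to clarify the parameters):

```python
def build_messages(system_prompt, prompt, context=None):
    """Assemble an OpenAI-style message list (illustrative, not litegen's code)."""
    messages = [{"role": "system", "content": system_prompt}]
    if context:
        messages.extend(context)  # prior turns, in order
    messages.append({"role": "user", "content": prompt})  # current question last
    return messages

msgs = build_messages(
    "You are helpful Assistant",
    "Explain the Pythagorean theorem",
    context=[{"role": "user", "content": "Can you help me with math?"}],
)
# system message first, history in the middle, current prompt last
```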

pp_completion(...)

Streaming-enabled completion with pretty printing. Takes the same parameters as completion().

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License


Download files

Source Distribution

litegen-0.0.40.tar.gz (15.2 kB)

Built Distribution

litegen-0.0.40-py3-none-any.whl (17.6 kB)

File details

Details for the file litegen-0.0.40.tar.gz.

File metadata

  • Download URL: litegen-0.0.40.tar.gz
  • Upload date:
  • Size: 15.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.11.0rc1 Linux/6.8.0-51-generic

File hashes

Hashes for litegen-0.0.40.tar.gz:

  • SHA256: ae6cd79a5dfc29f71254c43a21cd25944b8c0a836f3390dfd254f57f9678360f
  • MD5: a4bb0a4ad174778c135b5e4ab7ebdaf6
  • BLAKE2b-256: fe6049b8bd613ea47719967c243b7b2f2ee8301051e6b03a0086554acee2acc9

File details

Details for the file litegen-0.0.40-py3-none-any.whl.

File metadata

  • Download URL: litegen-0.0.40-py3-none-any.whl
  • Upload date:
  • Size: 17.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.11.0rc1 Linux/6.8.0-51-generic

File hashes

Hashes for litegen-0.0.40-py3-none-any.whl
Algorithm Hash digest
SHA256 fe058a578c5d8503450b22639b2d0230b8f5dc18d31cf2afb5a91ab6d36d8c87
MD5 c7b01f95af9108e71d6c70351c2ff308
BLAKE2b-256 9654bc21d58c6f249208782df5ae7b5f8a415f2306824c99cad2cce080c9700a

See more details on using hashes here.
