
Project description

Litegen

Litegen is a lightweight Python wrapper for managing LLM interactions, supporting both local Ollama models and the OpenAI API. It provides a simple, unified interface for chat completions, with built-in streaming.

Installation

pip install litegen

Features

  • 🚀 Simple unified interface for LLM interactions
  • 🤖 Support for both local Ollama models and OpenAI
  • 📡 Built-in streaming capabilities
  • 🛠 Function calling support
  • 🔄 Context management for conversations
  • 🎯 GPU support for enhanced performance
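The context-management feature above boils down to carrying a list of role/content dicts between calls. A minimal sketch of accumulating that history (the `add_turn` helper is illustrative, not part of litegen; the `context` format matches the Advanced Usage section below):

```python
# Sketch: maintaining conversation history across turns.
# `context` is a list of {"role": ..., "content": ...} dicts, the format
# litegen's `context` parameter expects.

def add_turn(context, user_msg, assistant_msg):
    """Append one user/assistant exchange to a context list."""
    context.append({"role": "user", "content": user_msg})
    context.append({"role": "assistant", "content": assistant_msg})
    return context

context = []
add_turn(context, "Can you help me with math?",
         "Of course! What would you like to know?")

# A follow-up call can then reuse the accumulated history:
# response = completion(model="mistral", prompt="Explain fractions", context=context)
```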

Quick Start

from litegen import completion, pp_completion

# Simple completion
response = completion(
    model="mistral",  # or any Ollama/OpenAI model
    prompt="What is the capital of France?"
)
print(response.choices[0].message.content)

# Streaming completion with pretty print
pp_completion(
    model="llama2",
    prompt="Write a short story about a robot",
    temperature=0.7
)

Advanced Usage

System Prompts and Context

response = completion(
    model="mistral",
    system_prompt="You are a helpful math tutor",
    prompt="Explain the Pythagorean theorem",
    context=[
        {"role": "user", "content": "Can you help me with math?"},
        {"role": "assistant", "content": "Of course! What would you like to know?"}
    ]
)

Function Calling

def get_weather(location: str, unit: str = "celsius"):
    """Get weather for a location."""
    return f"Dummy weather for {location} in {unit}"  # stub body for the example

response = completion(
    model="gpt-3.5-turbo",
    prompt="What's the weather in Paris?",
    tools=[get_weather]
)
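If the response follows the OpenAI tool-calling format (a `message.tool_calls` list whose entries carry `.function.name` and `.function.arguments` as a JSON string — an assumption, not confirmed by the litegen docs), the requested function can be dispatched like this (the `dispatch` helper is illustrative):

```python
# Sketch: dispatching tool calls from a response message, assuming the
# OpenAI-style shape described above.
import json

def dispatch(message, tools):
    """Invoke the matching Python function for each tool call in a message."""
    registry = {fn.__name__: fn for fn in tools}
    results = []
    for call in message.tool_calls or []:
        fn = registry[call.function.name]
        args = json.loads(call.function.arguments)  # arguments arrive as JSON text
        results.append(fn(**args))
    return results

# Usage (hypothetical):
# dispatch(response.choices[0].message, [get_weather])
```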

GPU Support

response = completion(
    model="mistral",
    prompt="Complex calculation task",
    gpu=True  # Enable GPU acceleration
)

Configuration

The client can be configured with custom settings:

from litegen import get_client

client = get_client(gpu=True)  # Enable GPU support
# Use client directly for more control

Environment Variables

  • OPENAI_API_KEY: Your OpenAI API key (optional, for OpenAI models)
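The key can also be supplied from Python before calling litegen (the value below is a placeholder, not a real key):

```python
# Sketch: setting the OpenAI key programmatically, only needed when
# targeting OpenAI models. "sk-placeholder" stands in for a real key.
import os

os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")
```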

API Reference

completion(...)

Main function for chat completions.

completion(
    model: str,                    # Model name
    messages: Optional[List[Dict[str, str]]] | str = None,  # Raw messages or prompt string
    system_prompt: str = "You are helpful Assistant",  # System prompt
    prompt: str = "",              # User prompt
    context: Optional[List[Dict[str, str]]] = None,  # Conversation history
    temperature: Optional[float] = None,  # Temperature for response randomness
    max_tokens: Optional[int] = None,  # Max tokens in response
    stream: bool = False,          # Enable streaming
    stop: Optional[List[str]] = None,  # Stop sequences
    tools: Optional[List] = None,  # Function calling tools
    gpu: bool = False,            # Enable GPU
    **kwargs                      # Additional parameters
)
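Per the signature, you can pass either a raw `messages` list or a `prompt` string (optionally with `system_prompt` and `context`). A sketch of the messages list the convenience parameters presumably expand to, based on the standard chat format (the `build_messages` helper is illustrative, not part of litegen):

```python
# Sketch: the messages list that system_prompt/context/prompt likely
# expand to internally (an assumption based on the OpenAI chat format).

def build_messages(system_prompt, context, prompt):
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(context or [])                       # prior turns, if any
    messages.append({"role": "user", "content": prompt})  # current user turn
    return messages

msgs = build_messages("You are helpful Assistant", None, "Hi")
```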

pp_completion(...)

Streaming-enabled completion with pretty printing. Takes the same parameters as completion().
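When calling `completion(..., stream=True)` yourself, the chunks can be consumed like this, assuming they follow the OpenAI streaming convention (each chunk exposes `.choices[0].delta.content`; this shape is an assumption, and `collect_stream` is an illustrative helper):

```python
# Sketch: collecting text from a streamed response, assuming
# OpenAI-style chunks as described above.

def collect_stream(chunks):
    """Concatenate the text deltas from a stream of chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta is typically empty/None
            parts.append(delta)
    return "".join(parts)

# Usage (hypothetical):
# stream = completion(model="mistral", prompt="Tell me a joke", stream=True)
# print(collect_stream(stream))
```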

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License


Download files

Download the file for your platform.

Source Distribution

litegen-0.0.35.tar.gz (15.2 kB view details)

Uploaded Source

Built Distribution


litegen-0.0.35-py3-none-any.whl (17.6 kB view details)

Uploaded Python 3

File details

Details for the file litegen-0.0.35.tar.gz.

File metadata

  • Download URL: litegen-0.0.35.tar.gz
  • Upload date:
  • Size: 15.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.11.0rc1 Linux/6.8.0-51-generic

File hashes

Hashes for litegen-0.0.35.tar.gz

  • SHA256: 6d40eff94186539e554bdc2f216d57f6e1104c6497dc1f2b6f9a867844f29ba8
  • MD5: 1636645128ba04484c9aae8e8687786e
  • BLAKE2b-256: 3980adc4199e7099c5f9f45092ec99578915b78ac4c732400b80afde87520fff


File details

Details for the file litegen-0.0.35-py3-none-any.whl.

File metadata

  • Download URL: litegen-0.0.35-py3-none-any.whl
  • Upload date:
  • Size: 17.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.11.0rc1 Linux/6.8.0-51-generic

File hashes

Hashes for litegen-0.0.35-py3-none-any.whl

  • SHA256: c13f0e46f11af93ddcfe9455db5cb211e3bf231f62a9cb0b26fb448333fd1d33
  • MD5: 268057ce0bb47256d405f6317d903e5a
  • BLAKE2b-256: 26c6b4ff6e2d7f21c039a3bdd1ed676dad24c041f0961e847da5d8a4c4572108

