
LLMX: A library for LLM Text Generation

Project description

LLMX - An API for Chat Fine-Tuned Language Models


A simple Python package that provides a unified interface to several providers of chat fine-tuned LLMs (OpenAI, Azure OpenAI, PaLM, Cohere, and local Hugging Face models).

Note: llmx wraps multiple API providers, and its interface may change as those providers and the broader field of LLMs evolve.

There is nothing particularly special about this library, but it meets some requirements I had when I started building it (that other libraries did not cover at the time):

  • Unified Model Interface: Single interface to create LLM text generators with support for multiple LLM providers.
from llmx import llm

gen = llm(provider="openai")  # supports Azure OpenAI models too
gen = llm(provider="palm")  # Google PaLM models
gen = llm(provider="cohere")  # Cohere models
gen = llm(provider="hf", model="HuggingFaceH4/zephyr-7b-beta", device_map="auto")  # run a Hugging Face model locally
  • Unified Messaging Interface. Standardizes on the OpenAI ChatML message format and is designed for chat fine-tuned models. The prompt sent to a model is formatted as an array of objects, where each object has a role (system, user, or assistant) and content (see below). A single request is a list containing one message (e.g., write code to plot a cosine wave signal); a conversation is a list of messages (e.g., write code for x, then update the axis to y, etc.). The same format is used for all models.
messages = [
    {"role": "system", "content": "You are a helpful assistant that can explain concepts clearly to a 6 year old child."},
    {"role": "user", "content": "What is gravity?"}
]
  • Good Utils (e.g., caching): For example, caching responses for speed. The general policy is that the cache is used if the config (including the messages) is the same. If you want to force a new response, set use_cache=False in the generate call (see the sketch after the example output below).
response = gen.generate(messages=messages, config=TextGenerationConfig(n=1, use_cache=True))

The output looks like:

TextGenerationResponse(
  text=[Message(role='assistant', content="Gravity is like a magical force that pulls things towards each other. It's what keeps us on the ground and stops us from floating away into space. ... ")],
  config=TextGenerationConfig(n=1, temperature=0.1, max_tokens=8147, top_p=1.0, top_k=50, frequency_penalty=0.0, presence_penalty=0.0, provider='openai', model='gpt-4', stop=None),
  logprobs=[], usage={'prompt_tokens': 34, 'completion_tokens': 69, 'total_tokens': 103})
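
To force a fresh (non-cached) response for the same messages, set use_cache=False in the generate call. A minimal sketch, reusing the generator and config from above:

# bypass the cache and request a brand-new completion
response = gen.generate(messages=messages, config=TextGenerationConfig(n=1), use_cache=False)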

Are there other libraries that do things like this really well? Yes! I'd recommend looking at guidance, which does a lot more. Interested in optimized inference? Try something like vllm.

Installation

Install from PyPI. Please use Python 3.10 or higher.

pip install llmx

Install in development mode

git clone https://github.com/victordibia/llmx.git
cd llmx
pip install -e .

Note that you may want to use the latest version of pip to install this package: python3 -m pip install --upgrade pip

Usage

First, set your API keys for each service.

# for openai and cohere
export OPENAI_API_KEY=<your key>
export COHERE_API_KEY=<your key>

# for PALM via MakerSuite
export PALM_API_KEY=<your key>

# for PaLM (Vertex AI), setup a gcp project, and get a service account key file
export PALM_SERVICE_ACCOUNT_KEY_FILE=<path to your service account key file>
export PALM_PROJECT_ID=<your gcp project id>
export PALM_PROJECT_LOCATION=<your project location>

You can also set the default provider and the list of supported providers via a config file. Use the YAML format in the sample config.default.yml file and set the LLMX_CONFIG_PATH environment variable to the path of your config file.
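
For example (the path below is just a placeholder):

export LLMX_CONFIG_PATH=/path/to/config.default.yml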

from llmx import llm
from llmx.datamodel import TextGenerationConfig

messages = [
    {"role": "system", "content": "You are a helpful assistant that can explain concepts clearly to a 6 year old child."},
    {"role": "user", "content": "What is  gravity?"}
]

openai_gen = llm(provider="openai")
openai_config = TextGenerationConfig(model="gpt-4", max_tokens=50)
openai_response = openai_gen.generate(messages, config=openai_config, use_cache=True)
print(openai_response.text[0].content)
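
Because a conversation is just a list of messages (see the messaging interface above), you can continue it by appending the assistant reply and a follow-up user message. A minimal sketch reusing the objects above (the follow-up question is just an example):

# continue the conversation: append the assistant reply, then ask a follow-up question
messages.append({"role": "assistant", "content": openai_response.text[0].content})
messages.append({"role": "user", "content": "Why don't we float away into space?"})

followup_response = openai_gen.generate(messages, config=openai_config, use_cache=True)
print(followup_response.text[0].content)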

See the tutorial for more examples.

A Note on Using Local HuggingFace Models

While llmx can use the Hugging Face transformers library to run inference with local models, you might get more mileage from a well-optimized server endpoint like vllm or FastChat. The general idea is that these tools provide an OpenAI-compatible endpoint while also implementing optimizations such as dynamic batching and quantization to improve throughput. The general steps are:

  • Install vllm and set up an endpoint, e.g., on port 8000 (see the example command after the snippet below).
  • Use openai as your provider to access that endpoint.
from llmx import llm
hfgen_gen = llm(
    provider="openai",
    api_base="http://localhost:8000",
    api_key="EMPTY",
)
...
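
For the first step, the endpoint can typically be started like this (a sketch assuming a 2023-era vLLM install and the model used earlier; check the vLLM docs for the exact entrypoint and flags for your version):

pip install vllm
python -m vllm.entrypoints.openai.api_server --model HuggingFaceH4/zephyr-7b-beta --port 8000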

Current Work

Caveats

  • Prompting. llmx makes some assumptions about how prompts are constructed, e.g., how the chat message interface is assembled into a prompt for each model type. If your application or use case requires more control over the prompt, you may want to use a different library (or, ideally, query the models directly).
  • Inference Optimization. For hosted models (GPT-4, PaLM, Cohere, etc.), this library provides an excellent unified interface, as the hosted API already takes care of inference optimizations. However, if you are looking for a library optimized for inference with local models (e.g., Hugging Face), with features such as tensor parallelization and distributed inference, I'd recommend looking at vllm or tgi.

Citation

If you use this library in your work, please cite:

@software{victordibiallmx,
  author = {Victor Dibia},
  license = {MIT},
  month = {10},
  title = {LLMX - An API for Chat Fine-Tuned Language Models},
  url = {https://github.com/victordibia/llmx},
  year = {2023}
}

