LMdef

Python functions backed by language models.

Easily create Python functions backed by language models. Just define the signature and docstring and add the @lmdef decorator:

from lmfunctions import lmdef

@lmdef
def contextual_qa(context: str, query: str) -> str:
    """
    Answer the question using information from the context
    """

The resulting language function invokes a language model backend under the hood, but can be used just like a regular function:

context = """John is planning a vacation. He wants to visit a country with a rich history,
             delicious cuisine, and beautiful beaches. He also prefers places where English
             is commonly spoken."""
query = "Where should John go?"

contextual_qa(context, query)

# Based on the given context, ...
Language model backend

The default backend can be configured to invoke a remote API (such as OpenAI's GPT):

import lmfunctions as lmf

lmf.set_backend.litellm(model="gpt-4o")

or run a local model (for example via llama.cpp):

lmf.set_backend.llamacpp(model="hf://Qwen/Qwen2-0.5B-Instruct-GGUF/qwen2-0_5b-instruct-q4_k_m.gguf")

See all supported language model backends.

Add Constraints

Constraints on inputs and outputs can be enforced via type hints. For instance, a text classification task can be expressed as follows:

from typing import Literal

@lmdef
def sentiment(comment: str) -> Literal["negative","neutral","positive"]:
    """ Analyze the sentiment of the given comment """
sentiment("I feel under the weather today")
# <Output.negative: 'negative'>
Complex Constraints

Pydantic models or JSON schemas can be used to specify complex constraints and to inject additional information about fields, which helps guide the model:

from lmfunctions import lmdef
from pydantic import BaseModel, Field

class CityInfo(BaseModel):
    country: str
    population: float = Field(description="Population expressed in Millions")
    languages_spoken: list[str]

@lmdef
def city_info(input: str) -> CityInfo:
    """
    Returns information about the city
    """

city_info("Paris")
# CityInfo(country='France', population=2.16, languages_spoken=['French'])
Structured Data Generation

Generating structured data can be accomplished by simply defining a language function without input arguments:

from lmfunctions import lmdef
from pydantic import BaseModel

class Cocktail(BaseModel):
    name: str
    glass_type: str
    ingredients: list[str]
    instructions: list[str]

@lmdef
def cocktail() -> Cocktail:
    """Invent a new cocktail"""
cocktail()
# Cocktail(name='Sakura Sunset', glass_type='Coupe glass', ingredients=['1 1/2 oz Japanese whiskey', '1/2 oz cherry liqueur', ...
Serialization

Language functions can be serialized:

from lmfunctions import from_string, lmdef
from typing import Literal

@lmdef
def sentiment(comment: str) -> Literal["negative","neutral","positive"]:
    """ Analyze the sentiment of the given comment """

sentiment_yaml = sentiment.dumps(format='yaml')

and deserialized:

sentiment_deserialized = from_string(sentiment_yaml)
sentiment_deserialized("This is an excellent Python package")
# <Output.positive: 'positive'>

This makes it possible to store language functions in text files and load them dynamically from a remote artifact:

from lmfunctions import from_store
route = from_store("steerable/lmfunc/route")
route(origin="Seattle",destination="New York")
# FlightRoute(airports=['SEA', 'ORD', 'JFK'], cost_of_flight=350)
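Because the serialized form is plain text, a function can also be saved to and reloaded from a local file with ordinary file I/O plus from_string (a minimal sketch reusing sentiment_yaml from the example above; the file name is illustrative):

from pathlib import Path

from lmfunctions import from_string

# Write the YAML produced by sentiment.dumps(format='yaml') to disk
Path("sentiment.yaml").write_text(sentiment_yaml)

# Later, or in another process: read the file and rebuild the function
sentiment_loaded = from_string(Path("sentiment.yaml").read_text())
sentiment_loaded("This is an excellent Python package")
# <Output.positive: 'positive'>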
Observability

Event managers and callbacks make it possible to instrument all execution stages, gaining visibility into internal variables and metrics (see Event Manager below).
Additional examples

See this notebook.

Installation

  • Install at least one of the supported language model backends:

    • llama.cpp (CPU only):
    pip install llama-cpp-python
    
    • llama.cpp (GPU with CUDA support):
    CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
    
    • Other options to build llama.cpp are listed here

    • Transformers

    pip install transformers[torch]
    
    • Litellm (API-based language models)
    pip install litellm
    
  • Install the package with pip or Poetry

pip install lmdef
poetry add lmdef

Language Model Backend

The backends currently supported are llamacpp (llama.cpp), transformers (Hugging Face Transformers), and litellm (API-based models).

The default backend can be set using shortcuts. For example, the following sets llamacpp and retrieves a model from the Hugging Face Hub:

import lmfunctions as lmf

lmf.set_backend.llamacpp(model="hf://Qwen/Qwen2-1.5B-Instruct-GGUF/qwen2-1_5b-instruct-q4_k_m.gguf")

API providers such as OpenAI (GPT), Anthropic (Claude), Cohere, and many others can be accessed using the litellm backend. For example, to use OpenAI's GPT-4o API:

lmf.set_backend.litellm(model="gpt-4o")

This requires setting suitable API keys in the environment (in this case, an OpenAI API key, obtainable by creating an OpenAI account).
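For instance, litellm picks up the OpenAI key from the standard OPENAI_API_KEY environment variable, which can be exported in the shell or set from Python (a minimal sketch; the key value is a placeholder):

import os

# litellm reads provider credentials from environment variables
os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with a real key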

The default backend can be overridden when calling the language function:

from lmfunctions.lmbackend import LiteLLMBackend
gpt4omini = LiteLLMBackend(model="gpt-4o-mini")
contextual_qa(context, query, backend=gpt4omini)

To display information about the current language model backend settings:

lmf.default.backend.info()
# ...

Retry Policy

A retry policy specifies what to do when an exception occurs while executing the language function, for example when the language model fails to generate output in the desired format. Tenacity is used to implement the retry callbacks, with the RetryPolicy class wrapping some of tenacity's input arguments in a serializable format:

from lmfunctions import RetryPolicy

retrypolicy = RetryPolicy(stop_max_attempt=2, wait="fixed")
retrypolicy.info()

The default RetryPolicy can be modified as follows:

import lmfunctions as lmf

lmf.retrypolicy.stop_max_attempt = 10

Event Manager

Execution of a language function proceeds through several steps:

  • Call start
  • Prompt template render
  • Token or character processed
  • Retry in case of exceptions
  • Failure
  • Success in obtaining and parsing the output

Event Managers can be used to introduce callback handlers for each of these events.
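As a rough illustration (the registration interface is hypothetical, not the package's actual API; see the lmfunctions documentation for the real one), a handler for these events could simply log each stage:

# Hypothetical sketch of an event callback; the actual event manager
# registration API may differ from what is shown here.
def log_stage(event_name: str, **data) -> None:
    """Print each execution stage (call start, prompt render, token
    processed, retry, failure, success) together with its payload."""
    print(f"[{event_name}] {data}")

A handler like this would be registered with the event manager so that it fires at each of the stages listed above.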
