Python functions backed by language models

lmfunctions

Easily express language model tasks as Python functions. Just define the signature and docstring and add the @lmdef decorator:

from lmfunctions import lmdef

@lmdef
def qa(context: str, query: str) -> str:
    """
    Answer the question using information from the context
    """

Calling the function will invoke a language model under the hood:

context = """John started his first job right after graduating from college in 2005.
He spent five years working in that company before deciding to pursue a master's degree,
which took him two years to complete. After obtaining his master's degree, he worked
in various companies for another decade before landing his current job, which he has been in
for the past three years. John mentioned that he entered college at the typical age of 18"""

query = "How old is John?"

qa(context,query)

# Based on the given context, ...
Backends

The default backend can be configured to invoke a remote API (such as OpenAI's GPT):

import lmfunctions as lmf

lmf.set_backend.litellm(model="gpt-4o")

or a local model via llama.cpp or HF Transformers:

lmf.set_backend.llamacpp(model="hf://Qwen/Qwen2-0.5B-Instruct-GGUF/qwen2-0_5b-instruct-q4_k_m.gguf")
Tasks

Constraints on inputs and outputs can be enforced via type hints. For instance, a text classification task can be expressed as follows:

from typing import Literal

@lmdef
def sentiment(comment: str) -> Literal["negative","neutral","positive"]:
    """ Analyze the sentiment of the given comment """
sentiment("I feel under the weather today")
# <Output.negative: 'negative'>

Pydantic models or JSON schemas can be used to specify more complex constraints and inject information about the fields:

from lmfunctions import lmdef
from pydantic import BaseModel, Field

class CityInfo(BaseModel):
    country: str
    population: float = Field(description="Population expressed in Millions")
    languages_spoken: list[str]

@lmdef
def city_info(input: str) -> CityInfo:
    """
    Returns information about the city
    """

city_info("Paris")
# CityInfo(country='France', population=2.16, languages_spoken=['French'])

Generating structured data can be accomplished by simply defining a language function without input arguments:

from lmfunctions import lmdef
from pydantic import BaseModel

class Cocktail(BaseModel):
    name: str
    glass_type: str
    ingredients: list[str]
    instructions: list[str]

@lmdef
def cocktail() -> Cocktail:
    """Invent a new cocktail"""
cocktail()
# Cocktail(name='Sakura Sunset', glass_type='Coupe glass', ingredients=['1 1/2 oz Japanese whiskey' ...
Serialization

Language functions can be serialized

from lmfunctions import from_string, lmdef
from typing import Literal

@lmdef
def sentiment(comment: str) -> Literal["negative","neutral","positive"]:
    """ Analyze the sentiment of the given comment """

sentiment_yaml = sentiment.dumps(format='yaml')

and deserialized

sentiment_deserialized = from_string(sentiment_yaml)
sentiment_deserialized("This is an excellent Python package")
# <Output.positive: 'positive'>

This allows language functions to be stored in text files and loaded dynamically from remote artifacts:

from lmfunctions import from_store
route = from_store("steerable/lmfunc/route")
route(origin="Seattle",destination="New York")
# FlightRoute(airports=['SEA', 'ORD', 'JFK'], cost_of_flight=350)
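
Storing a function in a local text file works the same way. Here is a minimal sketch reusing the sentiment_yaml string from the Serialization section; only the dumps and from_string helpers shown above plus standard file I/O are assumed:

from pathlib import Path
from lmfunctions import from_string

# Write the YAML produced by sentiment.dumps(format='yaml') to a local file...
Path("sentiment.yaml").write_text(sentiment_yaml)

# ...and reload it later as a callable language function.
restored = from_string(Path("sentiment.yaml").read_text())
restored("This package keeps getting better")
# behaves like sentiment_deserialized above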
Observability

Event managers and callbacks make it possible to instrument every execution stage, gaining visibility into internal variables and metrics (see the Event Manager section below).

Installation

  • Requirements: Python>3.10

  • Install the package and at least one language model backend using pip, skipping the backends you don't need (a quick end-to-end check is shown after this list)

    pip install llama-cpp-python==0.2.83 # CPU-only build
    pip install transformers[torch] 
    pip install litellm
    pip install lmfunctions
    
  • If you have an NVIDIA GPU, you can build llama.cpp with CUDA support:

      CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python==0.2.83
      pip install transformers[torch] 
      pip install litellm
      pip install lmfunctions
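
A quick end-to-end check after installation, as a minimal sketch: it reuses the backend shortcut and @lmdef decorator shown elsewhere in this README, with the small Qwen2 model from the Backends section.

import lmfunctions as lmf
from lmfunctions import lmdef

# Point the default backend at a small local model served via llama.cpp
lmf.set_backend.llamacpp(model="hf://Qwen/Qwen2-0.5B-Instruct-GGUF/qwen2-0_5b-instruct-q4_k_m.gguf")

@lmdef
def greet(name: str) -> str:
    """Write a one-sentence greeting for the given name"""

print(greet("Ada"))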
    

Language Model Backend

The backends currently supported are llama.cpp, HF Transformers, and LiteLLM (for remote API providers).

The default language model can be set using shortcuts. For example, the following selects the llamacpp backend and retrieves a model from the Hugging Face Hub:

import lmfunctions as lmf

lmf.set_backend.llamacpp(model="hf://Qwen/Qwen2-1.5B-Instruct-GGUF/qwen2-1_5b-instruct-q4_k_m.gguf")

API providers such as OpenAI (GPT), Anthropic (Claude), Cohere, and many others can be accessed using the litellm backend. For example, to use OpenAI's GPT-4o API:

lmf.set_backend.litellm(model="gpt-4o")

The necessary API keys (in this case an OpenAI API key) need to be set as environment variables.
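
For example, assuming the key is read from the standard OPENAI_API_KEY environment variable used by OpenAI clients and LiteLLM:

import os

# Assumes the standard OPENAI_API_KEY variable; replace the placeholder with a real key
os.environ["OPENAI_API_KEY"] = "sk-..."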

The default backend can be overridden when calling the language function:

from lmfunctions.lmbackend import LiteLLMBackend
gpt_4o_mini = LiteLLMBackend(model="gpt-4o-mini")
qa(context,query,backend=gpt_4o_mini)

To display information about the current language model backend settings:

lmf.default.backend.info()
# ...

Retry Policy

A retry policy specifies what to do when an exception occurs while executing the language function, for example when the language model is unable to generate an output in the desired format. Tenacity is used to implement the retry callbacks, with the RetryPolicy class wrapping some of Tenacity's input arguments in a serializable format:

from lmfunctions import RetryPolicy

retrypolicy = RetryPolicy(stop_max_attempt= 2, wait="fixed")
retrypolicy.info()

The default RetryPolicy can be modified as follows:

import lmfunctions as lmf
lmf.retrypolicy.stop_max_attempt = 10

Event Manager

Execution of a language function proceeds through several steps:

  • Call start
  • Prompt template render
  • Token or character processed
  • Retry in case of exceptions
  • Failure
  • Success in obtaining and parsing the output

Event Managers can be used to introduce callback handlers for each of these events.
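
The exact registration API is not shown in this README, but the idea is the familiar callback-per-event pattern. A minimal, self-contained sketch of that pattern (illustrative only: the class and method names below are hypothetical, not the lmfunctions API) could look like this:

from collections import defaultdict
from typing import Callable

# Illustrative sketch of the callback-per-event pattern described above;
# the names are hypothetical, not the lmfunctions API.
class SimpleEventManager:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, **payload) -> None:
        for handler in self._handlers[event]:
            handler(**payload)

manager = SimpleEventManager()
manager.on("call_start", lambda **p: print("call started:", p))
manager.on("retry", lambda **p: print("retrying after exception:", p))
manager.emit("call_start", function="qa")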
