
Fast and easy wrapper around LLMs.

Project description

FastLLM

Fast and simple wrapper around LLMs. The package aims to be simple and precise, allowing fast prototyping of agents and applications around LLMs. At the moment the focus is on OpenAI's chat models.

Warning - this is an experimental package and subject to change. For features and plans see the roadmap.

Samples

Requires an OpenAI API key in the OPENAI_API_KEY environment variable or a .env file.

export OPENAI_API_KEY=...
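Since a missing key only surfaces at the first model call, it can help to fail fast at startup. A minimal sketch (`require_api_key` is a hypothetical helper, not part of fastllm):

```python
import os

def require_api_key(env=None) -> str:
    """Return OPENAI_API_KEY or raise a clear error if it is missing."""
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```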

Agents

from fastllm import Agent

find_cities = Agent("List {{ n }} cities comma separated in {{ country }}.")
cities = find_cities(n=3, country="Austria").split(",")

print(cities)
['Vienna', 'Salzburg', 'Graz']

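Prompt strings like the one above use Jinja2-style {{ var }} placeholders. As a rough stand-in (not fastllm's actual implementation), the substitution behaves roughly like this standard-library sketch:

```python
import re

def render(template: str, **values) -> str:
    """Minimal stand-in for Jinja2-style {{ var }} substitution."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(values[m.group(1)]),
        template,
    )

prompt = render("List {{ n }} cities comma separated in {{ country }}.",
                n=3, country="Austria")
print(prompt)  # List 3 cities comma separated in Austria.
```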
from fastllm import Agent, Message, Model, Prompt, Role

creative_name_finder = Agent(
    Message("You are an expert name finder.", Role.SYSTEM),
    Prompt("Find {{ n }} names.", temperature=2.0),
    Prompt("Print names comma separated, nothing else!"),
    model=Model(name="gpt-4"),
)

names = creative_name_finder(n=3).split(",")

print(names)
['Ethan Gallagher', ' Samantha Cheng', ' Max Thompson']

Functions

Functions can be added to Agents, Models, or Prompts, either as constructor arguments or via a decorator. The function's name, type hints, and docstring are inferred from the function and added to the model call.

from typing import Literal

from fastllm import Agent, Prompt

calculator_agent = Agent(
    Prompt("Calculate the result for task: {{ task }}"),
    Prompt("Only give the result number as result without anything else!"),
)

@calculator_agent.function
def calculator(a: float, b: float, operator: Literal["+", "-", "*", "/"]):
    """A basic calculator using various operators."""

    match operator:
        case "+":
            return a + b
        case "-":
            return a - b
        case "*":
            return a * b
        case "/":
            return a / b
        case _:
            raise ValueError(f"Unknown operator {operator}")


result = calculator_agent(task="give the final result for (11 + 14) * (6 - 2)")

print(result)
100

another_result = calculator_agent(
    task="If I have 114 apples and 3 elephants, how many apples will each elephant get?"
)

print(another_result)
38
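Behind the scenes, when the model's response indicates a function call, the registered function is invoked and its result is fed back to the LLM. A simplified round trip, with a hypothetical response payload shaped like OpenAI's (not fastllm's internals):

```python
import json

def calculator(a: float, b: float, operator: str) -> float:
    """Registered tool the model may request."""
    ops = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}
    return ops[operator]

# Pretend the LLM responded with a function_call payload like OpenAI's.
fake_response = {
    "function_call": {
        "name": "calculator",
        "arguments": json.dumps({"a": 25, "b": 4, "operator": "*"}),
    }
}

registry = {"calculator": calculator}

call = fake_response["function_call"]
args = json.loads(call["arguments"])
result = registry[call["name"]](**args)
print(result)  # 100 — this would be sent back to the LLM as a function result
```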

Roadmap

Features

  • Prompts using Jinja2 templates
  • LLM calling with backoff and retry
  • Ability to register functions to agents, models, and prompts using decorators
  • Functions can be registered on multiple levels (agent, model, prompt); a function call is only available on the level it was registered.
  • Conversation history: the Model class keeps track of the conversation history.
  • Function schemas are inferred from the Python function's type hints, docstring, and name
  • Function calling is handled by the Model class itself: if an LLM response indicates a function call, the Model class calls the function and returns the result back to the LLM
  • Function calling can result in an infinite loop if the LLM cannot provide a function name or arguments properly; this needs to be handled by the Model class.
  • Prompts with patterns using logit bias to guide LLM completion.
  • Ability to switch between models (e.g. GPT-3.5 and GPT-4) within one agent across different prompts.
  • Handling of multiple response messages from LLMs in a single call; at the moment only the first response is kept.
  • Support for non-chat LLMs (e.g. OpenAI's completion models).
  • Support for LLM APIs other than OpenAI's (e.g. Google).
  • Support for local LLMs (e.g. llama-1, llama-2, etc.)
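The schema-inference feature above can be illustrated independently of fastllm. A sketch (standard library only; `infer_schema` is a hypothetical helper, not the library's actual code) that derives a JSON-schema-like description from a function's name, docstring, and type hints:

```python
import inspect
import typing

def infer_schema(fn) -> dict:
    """Derive a minimal JSON-schema-like description from a function."""
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)
    properties = {}
    for name, hint in hints.items():
        if typing.get_origin(hint) is typing.Literal:
            # Literal["+", "-", ...] becomes an enum of allowed values.
            properties[name] = {"enum": list(typing.get_args(hint))}
        elif hint in (int, float):
            properties[name] = {"type": "number"}
        else:
            properties[name] = {"type": "string"}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": properties},
    }

def calculator(a: float, b: float, operator: typing.Literal["+", "-", "*", "/"]):
    """A basic calculator using various operators."""

schema = infer_schema(calculator)
print(schema["name"])                                            # calculator
print(schema["parameters"]["properties"]["operator"]["enum"])    # ['+', '-', '*', '/']
```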

Package

  • Basic package structure and functionality
  • Test cases and high test coverage
  • Tests against multiple Python versions
  • 100% test coverage (at the moment around 90%)
  • Better documentation, including a Read the Docs site
  • Better error handling and logging
  • Better samples using Jupyter notebooks
  • Setup of pre-commit hooks
  • CI using GitHub Actions
  • Release and versioning

Development

The project uses Poetry.

poetry install

Tests

poetry run pytest

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fastllm-0.1.2.tar.gz (13.2 kB)

Uploaded Source

Built Distribution

fastllm-0.1.2-py3-none-any.whl (12.1 kB)

Uploaded Python 3

File details

Details for the file fastllm-0.1.2.tar.gz.

File metadata

  • Download URL: fastllm-0.1.2.tar.gz
  • Size: 13.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.10.6 Linux/5.15.90.1-microsoft-standard-WSL2

File hashes

Hashes for fastllm-0.1.2.tar.gz
  • SHA256: 0302f208cb03085209a5b50c7596735a3ca86d05a0a3116566c1d74fc72c2a26
  • MD5: f4859a8c60f2ec13ae35fef6e949f551
  • BLAKE2b-256: 1240cbd13e164d719991f13528c22e0436b237bc40593b43606e7e80d271ef79


File details

Details for the file fastllm-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: fastllm-0.1.2-py3-none-any.whl
  • Size: 12.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.10.6 Linux/5.15.90.1-microsoft-standard-WSL2

File hashes

Hashes for fastllm-0.1.2-py3-none-any.whl
  • SHA256: 1609ea054f469843cf8a4049348a651a1ebd78564cf2c14a1dcc70820d448022
  • MD5: 858a396a4aab6612cfdfdb03040400cf
  • BLAKE2b-256: bc720810b2a1db8bb9b605e4823615ab4aabf4ced00b0c592c527537256e5558

