
A tool for generating function arguments and choosing what function to call with local LLMs

Project description

Local LLM function calling


Overview

The local-llm-function-calling project constrains the generation of Hugging Face text generation models by enforcing a JSON schema, and facilitates the formulation of prompts for function calls. It is similar to OpenAI's function calling feature but, unlike OpenAI, it actually enforces the schema.

The project provides a Generator class that lets users generate text while ensuring compliance with the provided prompt and JSON schema. By using the local-llm-function-calling library, users can conveniently control the output of text generation models. It uses my own quickly sketched json-schema-enforcer project as the enforcer.

Features

  • Constrains the generation of Hugging Face text generation models to follow a JSON schema.
  • Provides a mechanism for formulating prompts for function calls, enabling precise data extraction and formatting.
  • Simplifies the text generation process through a user-friendly Generator class.
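To give an intuition for what "constraining generation" means, here is a toy, stdlib-only sketch (not the library's actual implementation): at each step, the candidate tokens are filtered down to those whose continuation can still satisfy the constraint, and one allowed token is picked. The vocabulary, constraint, and greedy pick below are all illustrative assumptions.

```python
def constrained_generate(vocab, is_valid_prefix, max_steps=10):
    """Greedily build a string, only ever appending tokens that keep
    the output a valid prefix under the constraint."""
    out = ""
    for _ in range(max_steps):
        # Filter the vocabulary to tokens that keep the output valid
        allowed = [t for t in vocab if is_valid_prefix(out + t)]
        if not allowed:
            break  # nothing can legally extend the output; stop
        out += allowed[0]  # a real model would pick by probability
    return out

# Constraint: the output must be a prefix of this JSON string
target = '{"a": 1}'
result = constrained_generate(
    ["x", "{", '"a"', ": 1", "}"],
    lambda s: target.startswith(s),
)
print(result)  # '{"a": 1}' — the invalid token "x" is never chosen
```

The library applies the same idea with a real model's token probabilities and a JSON-schema validity check in place of the prefix test.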

Installation

To install the local-llm-function-calling library, use the following command:

pip install local-llm-function-calling

Usage

Here's a simple example demonstrating how to use local-llm-function-calling:

from local_llm_function_calling import Generator

# Define the functions the model can call
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                    "maxLength": 20,
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Initialize the generator with the Hugging Face model and our functions
generator = Generator.hf(functions, "gpt2")

# Generate text using a prompt
function_call = generator.generate("What is the weather like today in Brooklyn?")
print(function_call)
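To make concrete what the enforced schema guarantees, here is a hand-rolled check against a hypothetical function call (the actual output depends on the model; the JSON string below is an illustrative assumption, not real output):

```python
import json

# Hypothetical arguments a schema-constrained generation might produce
call = '{"location": "Brooklyn, NY", "unit": "celsius"}'
args = json.loads(call)  # enforced output is guaranteed to parse

# The same conditions the schema above encodes, spelled out by hand:
assert "location" in args                                    # "required"
assert len(args["location"]) <= 20                           # "maxLength": 20
assert args.get("unit") in (None, "celsius", "fahrenheit")   # "enum"
```

With OpenAI-style function calling these properties are merely requested; here they hold by construction, since tokens violating the schema are never sampled.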

Custom constraints

You don't have to use my prompting methods; you can craft your own prompts and your own constraints, and still benefit from the constrained generation:

from local_llm_function_calling import Constrainer
from local_llm_function_calling.model.huggingface import HuggingfaceModel

# Define your own constraint
# (you can also use local_llm_function_calling.JsonSchemaConstraint)
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # Must return (is_valid, is_complete)
    return (text.islower(), text.endswith("."))

# Create the constrainer
constrainer = Constrainer(HuggingfaceModel("gpt2"))

# Generate your text
generated = constrainer.generate("Prefix.\n", lowercase_sentence_constraint, max_len=10)
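To clarify the constraint protocol, the function receives the text generated so far and returns a `(is_valid, is_complete)` pair; as I read it, generation keeps extending the text while it stays valid and stops once it is complete (that stopping behavior is my assumption, inferred from the example above). The checks below exercise the constraint on its own, without the library:

```python
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # (is_valid, is_complete): valid while all-lowercase,
    # complete once the text ends with a period
    return (text.islower(), text.endswith("."))

assert lowercase_sentence_constraint("hello") == (True, False)   # keep going
assert lowercase_sentence_constraint("hello.") == (True, True)   # done
assert lowercase_sentence_constraint("Hello") == (False, False)  # rejected
```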

Extending and Customizing

To extend or customize the prompt structure, you can subclass the TextPrompter class. This allows you to modify the prompt generation process according to your specific requirements.

Download files

Source distribution: local_llm_function_calling-0.1.23.tar.gz (14.0 kB)

Built distribution: local_llm_function_calling-0.1.23-py3-none-any.whl (18.2 kB)
