FunctionLayer
FunctionLayer: A Python toolkit that enables AI agents to communicate with humans in tool-based and asynchronous workflows. By incorporating humans in the loop, agentic tools can be given access to much more powerful and meaningful tool calls and tasks.
Bring your LLM (OpenAI, Llama, Claude, etc.) and framework (LangChain, CrewAI, etc.) and start giving your AI agents safe access to the world.
Getting Started
To get started, check out Getting Started, watch the 2:30 Getting Started Video, or jump straight into one of the Examples:
- 🦜⛓️ LangChain
- 🚣 CrewAI
- 🦾 ControlFlow
- 🧠 Raw OpenAI Client
pip install functionlayer
or for the bleeding edge
pip install git+https://github.com/functionlayer/functionlayer
Set the FUNCTIONLAYER_API_TOKEN environment variable and wrap your AI function in require_approval():
from functionlayer import ApprovalMethod, FunctionLayer

fl = FunctionLayer(approval_method=ApprovalMethod.CLOUD)  # or ApprovalMethod.CLI

@fl.require_approval()
def send_email(to: str, subject: str, body: str):
    """Send an email to the customer"""
    ...

# made-up method -- use whatever framework you prefer
run_llm_task(
    prompt="Send an email welcoming the customer to the platform and encouraging them to invite a team member.",
    tools=[send_email],
    llm=OpenAI(model="gpt-4o"),
)
Then you can start managing LLM actions in Slack, email, or whatever channel you prefer:
Check out the FunctionLayer Docs and the Getting Started Guide for more information.
Why FunctionLayer?
Functions and tools are a key part of Agentic Workflows. They enable LLMs to interact meaningfully with the outside world and automate broad scopes of impactful work. Correct and accurate function calling is essential for AI agents that do meaningful things like book appointments, interact with customers, manage billing information, write+execute code, and more.
(Image from https://louis-dupont.medium.com/transforming-software-interactions-with-tool-calling-and-llms-dc39185247e9)
However, the most useful functions we can give to an LLM are also the most risky. We can all imagine the value of an AI Database Administrator that constantly tunes and refactors our SQL database, but most teams wouldn't give an LLM access to run arbitrary SQL statements against a production database (heck, we mostly don't even let humans do that). That is:
Even with state-of-the-art agentic reasoning and prompt routing, LLMs are not sufficiently reliable to be given access to high-stakes functions without human oversight
To better define what is meant by "high stakes", some examples:
- Low Stakes: Read Access to Public Data (e.g. search Wikipedia, access public APIs and datasets)
- Low Stakes: Communicate with Agent Author (e.g. an engineer might empower an agent to send them a private Slack message with updates on progress)
- Medium Stakes: Read Access to Private Data (e.g. read emails, access calendars, query a CRM)
- Medium Stakes: Communicate with Strict Rules (e.g. send based on a specific sequence of hard-coded email templates)
- High Stakes: Communicate on My Behalf or on Behalf of My Company (e.g. send emails, post to Slack, publish social/blog content)
- High Stakes: Write Access to Private Data (e.g. update CRM records, modify feature toggles, update billing information)
These high-stakes functions are the most valuable and promise the most impact in automating away human workflows. The sooner teams can get agents reliably and safely calling these tools, the sooner they can reap massive benefits.
FunctionLayer provides a set of tools to *deterministically* guarantee human oversight of high-stakes function calls. Even if the LLM makes a mistake or hallucinates, the approval requirement is baked into the tool/function itself, so the guarantee holds regardless.
Key Features
- Require Human Approval for Function Calls: the @fl.require_approval() decorator blocks specific function calls until a human has been consulted; upon denial, feedback is passed back to the LLM
- Human as Tool: the generic fl.human_as_tool() allows contacting a human for answers, advice, or feedback (see the sketch after this list)
- OmniChannel Contact: contact humans and collect responses across Slack, Email, Discord, and more
- Granular Routing: Route approvals to specific teams or individuals
- Bring your own LLM + Framework: because FunctionLayer is implemented at the tool layer, it supports any LLM and all major orchestration frameworks that support tool calling.
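As referenced in the list above, here is a minimal sketch of the Human as Tool pattern. That fl.human_as_tool() returns a plain callable (question in, answer out) which you can drop into your framework's tool list is an assumption here; consult the FunctionLayer docs for the current signature:

from functionlayer import FunctionLayer

fl = FunctionLayer()  # uses your configured approval/contact method

# assumption: human_as_tool() returns a callable that relays a question
# to a human over the configured channel and returns their reply
contact_human = fl.human_as_tool()

# use it directly, or pass it alongside other tools to your agent framework
answer = contact_human("What discount, if any, should we offer this customer?")
print(answer)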
Examples
You can try several real-life examples of FunctionLayer in the examples folder:
- 🦜⛓️ LangChain Math
- 🦜⛓️ LangChain Human As Tool
- 🚣 CrewAI Math
- 🦾 ControlFlow Math
- 🧠 Raw OpenAI Client
Roadmap
Feature | Status |
---|---|
Require Approval | ⚗️ Alpha |
Human as Tool | ⚗️ Alpha |
CLI Approvals | ⚗️ Alpha |
CLI Human as Tool | 🗓️ Planned |
Slack Approvals | ⚗️ Alpha |
LangChain Support | ⚗️ Alpha |
ControlFlow Support | ⚗️ Alpha |
CrewAI Support | ⚗️ Alpha |
Open Protocol for BYO server | 🗓️ Planned |
Composite Contact Channels | 🚧 Work in progress |
Discord Approvals | 🗓️ Planned |
Email Approvals | 🗓️ Planned |
LlamaIndex Support | 🗓️ Planned |
Haystack Support | 🗓️ Planned |
Contributing
FunctionLayer is open-source and we welcome contributions in the form of issues, documentation, pull requests, and more. See CONTRIBUTING.md for more details.
License
The FunctionLayer SDK in this repo is licensed under the Apache 2 License.