phidata
Function calling is all you need
✨ What is phidata?
Phidata is a toolkit for building AI Assistants using function calling.
Function calling enables LLMs to achieve tasks by calling functions and intelligently choosing their next step based on the response, just like how humans solve problems.
💡 How it works
- Step 1: Create an `Assistant`
- Step 2: Add Tools (functions), Knowledge (vectordb) and Storage (database)
- Step 3: Serve using Streamlit, FastApi or Django to build your AI application
👩‍💻 Getting Started
Installation
```shell
pip install -U phidata
```
Create an Assistant
- Create a file `assistant.py` and install openai using `pip install openai`

```python
from phi.assistant import Assistant

assistant = Assistant(description="You help people with their health and fitness goals.")
assistant.print_response("Share a quick healthy breakfast recipe.")
```
- Run the `Assistant`

```shell
python assistant.py
```
- Add the ability to search DuckDuckGo

```python
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(tools=[DuckDuckGo()], show_tool_calls=True)
assistant.print_response("What's happening in France?")
```
- Run the `Assistant`

```shell
pip install duckduckgo-search
python assistant.py
```
See more examples below, or check out the cookbook for many more.
⭐️ Use phidata to build
- Knowledge Assistants: Answer questions from documents (PDFs, text)
- Research Assistants: Perform research and summarize findings.
- Data Assistants: Analyze data by running SQL queries.
- Python Assistants: Perform tasks by running python code.
- Customer Assistants: Answer customer queries using product descriptions and purchase history.
- Marketing Assistants: Provide marketing insights, copywriting and content ideas.
- Travel Assistants: Help plan travel by researching destinations, flight and hotel prices.
- Meal Prep Assistants: Help plan meals by researching recipes and adding ingredients to shopping lists.
Demos
- PDF AI that summarizes and answers questions from PDFs.
- arXiv AI that answers questions about ArXiv papers using the ArXiv API.
- HackerNews AI that interacts with the HN API to summarize stories and users, find out what's trending, and summarize topics.
- Demo Streamlit App serving a PDF, Image and Website Assistant (password: admin)
- Demo FastApi serving a PDF Assistant.
Templates
After building an Assistant, serve it using Streamlit, FastApi or Django to build your AI application. Instead of wiring these tools manually, phidata provides pre-built templates for AI Apps that you can run locally or deploy to AWS with one command. Here's how they work:

- Create your AI App using a template: `phi ws create`
- Run your app locally: `phi ws up`
- Run your app on AWS: `phi ws up prd:aws`
Examples
Create an Assistant with a function call
- Create a file `hn_assistant.py` that can call a function to summarize the top stories on Hacker News

```python
import json

import httpx

from phi.assistant import Assistant


def get_top_hackernews_stories(num_stories: int = 10) -> str:
    """Use this function to get top stories from Hacker News.

    Args:
        num_stories (int): Number of stories to return. Defaults to 10.

    Returns:
        str: JSON string of top stories.
    """
    # Fetch top story IDs
    response = httpx.get('https://hacker-news.firebaseio.com/v0/topstories.json')
    story_ids = response.json()

    # Fetch story details
    stories = []
    for story_id in story_ids[:num_stories]:
        story_response = httpx.get(f'https://hacker-news.firebaseio.com/v0/item/{story_id}.json')
        story = story_response.json()
        # Drop the story text to keep the payload small
        story.pop("text", None)
        stories.append(story)
    return json.dumps(stories)


assistant = Assistant(tools=[get_top_hackernews_stories], show_tool_calls=True)
assistant.print_response("Summarize the top stories on hackernews?")
```
- Run the `hn_assistant.py` file

```shell
python hn_assistant.py
```
- See it work through the problem

```
Message:  Summarize the top stories on hackernews?

Response: (51.1s)
• Running: get_top_hackernews_stories(num_stories=5)

Here's a summary of the top stories on Hacker News:

1. Boeing Whistleblower: Max 9 Production Line Has "Enormous Volume of Defects"
   A whistleblower has revealed that Boeing's Max 9 production line is riddled
   with an "enormous volume of defects," with instances where bolts were not
   installed. The story has garnered attention with a score of 140.

2. Arno A. Penzias, 90, Dies; Nobel Physicist Confirmed Big Bang Theory
   Arno A. Penzias, a Nobel Prize-winning physicist known for his work that
   confirmed the Big Bang Theory, has passed away at the age of 90. His
   contributions to science have been significant, leading to discussions and
   tributes in the scientific community. The news has a score of 207.

3. Why the fuck are we templating YAML? (2019)
   This provocative article from 2019 questions the proliferation of YAML
   templating in software, sparking a larger conversation about the
   complexities and potential pitfalls of this practice. With a substantial
   score of 149, it remains a hot topic of debate.

4. Forging signed commits on GitHub
   Researchers have discovered a method for forging signed commits on GitHub,
   which is causing concern within the tech community about the implications
   for code security and integrity. The story has a current score of 94.

5. Qdrant, the Vector Search Database, raised $28M in a Series A round
   Qdrant, a company specializing in vector search databases, has successfully
   raised $28 million in a Series A funding round. This financial milestone
   indicates growing interest and confidence in their technology. The story
   has attracted attention with a score of 55.
```
Create an Assistant that can analyze data using SQL
The `DuckDbAssistant` can perform data analysis using SQL queries.

- Create a file `data_assistant.py` and install duckdb using `pip install duckdb`

```python
import json

from phi.assistant.duckdb import DuckDbAssistant

duckdb_assistant = DuckDbAssistant(
    semantic_model=json.dumps({
        "tables": [
            {
                "name": "movies",
                "description": "Contains information about movies from IMDB.",
                "path": "https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            }
        ]
    }),
)

duckdb_assistant.print_response("What is the average rating of movies? Show me the SQL.")
```
- Run the `data_assistant.py` file

```shell
python data_assistant.py
```
- See it work through the problem

```
INFO Running: SHOW TABLES
INFO Running: CREATE TABLE IF NOT EXISTS 'movies' AS SELECT * FROM
     'https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv'
INFO Running: DESCRIBE movies
INFO Running: SELECT AVG(Rating) AS average_rating FROM movies

Message:  What is the average rating of movies? Show me the SQL.

Response: (7.6s)
The average rating of movies in the dataset is 6.72.

Here is the SQL query used to calculate the average rating:

SELECT AVG(Rating) AS average_rating
FROM movies;
```
Create an Assistant that achieves tasks using python
The `PythonAssistant` can perform virtually any task using python code.

- Create a file `python_assistant.py` and install pandas using `pip install pandas`

```python
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_tool_calls=True,
)

python_assistant.print_response("What is the average rating of movies?")
```
- Run the `python_assistant.py` file

```shell
python python_assistant.py
```
- See it work through the problem

```
WARNING  PythonTools can run arbitrary code, please provide human supervision.
INFO     Saved: /Users/zu/ai/average_rating
INFO     Running /Users/zu/ai/average_rating

Message:  What is the average rating of movies?

Response: (4.1s)
• Running: save_to_file_and_run(file_name=average_rating, code=..., variable_to_return=average_rating)

The average rating of movies is approximately 6.72.
```
Generate pydantic models using an Assistant
One of our favorite features is generating structured data (i.e. a pydantic model) from sparse information, meaning we can use Assistants to return pydantic models and generate structured content that previously wasn't possible. In this example, our movie assistant generates an object of the `MovieScript` class.
- Create a file `movie_assistant.py`

```python
from typing import List

from pydantic import BaseModel, Field
from rich.pretty import pprint

from phi.assistant import Assistant


class MovieScript(BaseModel):
    setting: str = Field(..., description="Provide a nice setting for a blockbuster movie.")
    ending: str = Field(..., description="Ending of the movie. If not available, provide a happy ending.")
    genre: str = Field(..., description="Genre of the movie. If not available, select action, thriller or romantic comedy.")
    name: str = Field(..., description="Give a name to this movie")
    characters: List[str] = Field(..., description="Name of characters for this movie.")
    storyline: str = Field(..., description="3 sentence storyline for the movie. Make it exciting!")


movie_assistant = Assistant(
    description="You help people write movie ideas.",
    output_model=MovieScript,
)

pprint(movie_assistant.run("New York"))
```
- Run the `movie_assistant.py` file

```shell
python movie_assistant.py
```
- See how the assistant generates a structured output

```
MovieScript(
│   setting='A bustling and vibrant New York City',
│   ending='The protagonist saves the city and reconciles with their estranged family.',
│   genre='action',
│   name='City Pulse',
│   characters=['Alex Mercer', 'Nina Castillo', 'Detective Mike Johnson'],
│   storyline='In the heart of New York City, a former cop turned vigilante, Alex Mercer, teams up with a street-smart activist, Nina Castillo, to take down a corrupt political figure who threatens to destroy the city. As they navigate through the intricate web of power and deception, they uncover shocking truths that push them to the brink of their abilities. With time running out, they must race against the clock to save New York and confront their own demons.'
)
```
Create a PDF Assistant with Knowledge & Storage
- Knowledge Base: information that the Assistant can search to improve its responses. Uses a vector db.
- Storage: provides long term memory for Assistants. Uses a database.
Let's run `PgVector`, as it can provide both knowledge and storage for our Assistants.
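Before wiring up a real vector db, the retrieval idea can be sketched with a toy in-memory knowledge base. Everything here is an illustrative stand-in (bag-of-words counts instead of PgVector's learned embeddings, a two-document corpus): documents are embedded, the query is embedded the same way, and the best match is handed to the Assistant as context.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# A tiny "knowledge base" with precomputed embeddings
documents = [
    "Chicken tikka salad: grill marinated chicken and toss with greens.",
    "Lentil soup: simmer red lentils with onion, cumin and stock.",
]
index = [(doc, embed(doc)) for doc in documents]


def search_knowledge_base(query: str) -> str:
    """Return the document most similar to the query."""
    query_vector = embed(query)
    return max(index, key=lambda pair: cosine(query_vector, pair[1]))[0]


print(search_knowledge_base("how do I make chicken tikka salad"))
```

A real setup swaps `embed` for model embeddings and the list for PgVector, but the search the Assistant's tools perform has the same shape.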
- Install Docker Desktop to run PgVector in a container.
- Create a file `resources.py` with the following contents

```python
from phi.docker.app.postgres import PgVectorDb
from phi.docker.resources import DockerResources

# -*- PgVector running on port 5432:5432
vector_db = PgVectorDb(
    pg_user="ai",
    pg_password="ai",
    pg_database="ai",
    debug_mode=True,
)

# -*- DockerResources
dev_docker_resources = DockerResources(apps=[vector_db])
```
- Start `PgVector` using

```shell
phi start resources.py
```
- Create a file `pdf_assistant.py` and install libraries using `pip install pgvector pypdf psycopg sqlalchemy`

```python
import typer
from rich.prompt import Prompt
from typing import Optional, List

from phi.assistant import Assistant
from phi.storage.assistant.postgres import PgAssistantStorage
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector

from resources import vector_db

knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://www.family-action.org.uk/content/uploads/2019/07/meals-more-recipes.pdf"],
    vector_db=PgVector(
        collection="recipes",
        db_url=vector_db.get_db_connection_local(),
    ),
)

storage = PgAssistantStorage(
    table_name="recipe_assistant",
    db_url=vector_db.get_db_connection_local(),
)


def recipe_assistant(new: bool = False, user: str = "user"):
    run_id: Optional[str] = None

    if not new:
        existing_run_ids: List[str] = storage.get_all_run_ids(user)
        if len(existing_run_ids) > 0:
            run_id = existing_run_ids[0]

    assistant = Assistant(
        run_id=run_id,
        user_id=user,
        knowledge_base=knowledge_base,
        storage=storage,
        # use_tools=True adds functions to
        # search the knowledge base and chat history
        use_tools=True,
        show_tool_calls=True,
        # Uncomment the following line to use traditional RAG
        # add_references_to_prompt=True,
    )
    if run_id is None:
        run_id = assistant.run_id
        print(f"Started Run: {run_id}\n")
    else:
        print(f"Continuing Run: {run_id}\n")

    assistant.knowledge_base.load(recreate=False)
    while True:
        message = Prompt.ask(f"[bold] :sunglasses: {user} [/bold]")
        if message in ("exit", "bye"):
            break
        assistant.print_response(message)


if __name__ == "__main__":
    typer.run(recipe_assistant)
```
- Run the `pdf_assistant.py` file

```shell
python pdf_assistant.py
```
- Ask a question: `How do I make chicken tikka salad?`
- See how the Assistant searches the knowledge base and returns a response.

```
Started Run: d28478ea-75ed-4710-8191-22564ebfb140

INFO Loading knowledge base
INFO Reading: https://www.family-action.org.uk/content/uploads/2019/07/meals-more-recipes.pdf
INFO Loaded 82 documents to knowledge base

😎 user : How do I make chicken tikka salad?

Message:  How do I make chicken tikka salad?

Response: (7.2s)
• Running: search_knowledge_base(query=chicken tikka salad)

I found a recipe for Chicken Tikka Salad that serves 2. Here are the
ingredients and steps:

Ingredients:
...
```
- Message `bye` to exit, then start the app again and ask `What was my last message?` to see how the assistant maintains storage across sessions.
- Run the `pdf_assistant.py` file with the `--new` flag to start a new run.

```shell
python pdf_assistant.py --new
```
- Play around, then stop `PgVector` using

```shell
phi stop resources.py
```
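The resume-a-run behavior above can be sketched with a plain dict standing in for `PgAssistantStorage`. The `RunStore` class is a hypothetical illustration, not a phidata API: a run id is reused for a returning user unless `new=True`, so chat history survives restarts.

```python
import uuid
from typing import Dict, List


class RunStore:
    """Toy stand-in for Assistant storage: chat history keyed by run_id."""

    def __init__(self) -> None:
        self._runs: Dict[str, List[str]] = {}

    def get_all_run_ids(self, user: str) -> List[str]:
        return [run_id for run_id in self._runs if run_id.startswith(f"{user}-")]

    def start_or_resume(self, user: str, new: bool = False) -> str:
        existing = self.get_all_run_ids(user)
        if not new and existing:
            return existing[0]  # resume the latest run
        run_id = f"{user}-{uuid.uuid4()}"
        self._runs[run_id] = []
        return run_id

    def append(self, run_id: str, message: str) -> None:
        self._runs[run_id].append(message)

    def history(self, run_id: str) -> List[str]:
        return self._runs[run_id]


store = RunStore()
run_id = store.start_or_resume("user")
store.append(run_id, "How do I make chicken tikka salad?")

# "Restart the app": the same user resumes the same run, history intact
assert store.start_or_resume("user") == run_id
print(store.history(run_id))  # → ['How do I make chicken tikka salad?']
```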
Build an AI App using Streamlit, FastApi and PgVector
Phidata provides pre-built templates for AI Apps that you can use as a starting point. The general workflow is:
- Create your codebase using a template: `phi ws create`
- Run your app locally: `phi ws up dev:docker`
- Run your app on AWS: `phi ws up prd:aws`
Let's build an AI App using GPT-4 as the LLM, Streamlit as the chat interface, FastApi as the backend and PgVector for knowledge and storage. Read the full tutorial here.
Step 1: Create your codebase
Create your codebase using the `ai-app` template:

```shell
phi ws create -t ai-app -n ai-app
```

This will create a folder `ai-app` with a pre-built AI App that you can customize and make your own.
Step 2: Serve your App using Streamlit
Streamlit allows us to build micro front-ends and is extremely useful for building basic applications in pure python. Start the `app` group using:

```shell
phi ws up --group app
```

Press Enter to confirm, and allow a few minutes for the image to download.
PDF Assistant
- Open localhost:8501 to view Streamlit apps that you can customize and make your own.
- Click on PDF Assistant in the sidebar.
- Enter a username and wait for the knowledge base to load.
- Choose either the `RAG` or `Autonomous` Assistant type.
- Ask "How do I make chicken curry?"
- Upload PDFs and ask questions.
Step 3: Serve your App using FastApi
Streamlit is great for building micro front-ends, but any production application will be built using a front-end framework like next.js backed by a RestApi built using a framework like FastApi.

Your AI App comes ready-to-use with FastApi endpoints; start the `api` group using:

```shell
phi ws up --group api
```
Press Enter to confirm, and allow a few minutes for the image to download.
View API Endpoints

- Open localhost:8000/docs to view the API Endpoints.
- Load the knowledge base using `/v1/assitants/load-knowledge-base`
- Test the `/v1/assitants/chat` endpoint with `{"message": "How do I make chicken curry?"}`
- The Api comes pre-built with endpoints that you can integrate with your front-end.
Optional: Run Jupyterlab
A jupyter notebook is a must-have for AI development, and your `ai-app` comes with a notebook pre-installed with the required dependencies. Enable it by updating the `workspace/settings.py` file:
```python
...
ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_jupyter_enabled=True,
    ...
)
```
Start `jupyter` using:

```shell
phi ws up --group jupyter
```
Press Enter to confirm, and allow a few minutes for the image to download (only the first time). Verify container status and view logs on the Docker dashboard.
View Jupyterlab UI

- Open localhost:8888 to view the Jupyterlab UI. Password: admin
- Play around with cookbooks in the `notebooks` folder.
Step 4: Stop the workspace
Play around and stop the workspace using:

```shell
phi ws down
```
Step 5: Run your AI App on AWS
Read how to run your AI App on AWS.
Documentation
- You can find the full documentation here
- You can also chat with us on discord
- Or email us at help@phidata.com
Contributions
We're an open-source project and welcome contributions, please read the contributing guide for more information.
Request a feature
- If you have a feature request, please open an issue or make a pull request.
- If you have ideas on how we can improve, please create a discussion.
Roadmap
Our roadmap is available here. If you have a feature request, please open an issue/discussion.