phidata
AI Toolkit for Engineers
🧰 Phidata is a batteries-included toolkit for building products using LLMs
It solves the problem of building LLM applications by providing:
💻 Software layer
- Components for building LLM apps: RAG, Agents, Workflows
- Components for extending LLM apps: VectorDbs, Storage, Memory, Cache
- Components for monitoring LLM apps: Model Inputs/Outputs, Quality, Cost
- Components for improving LLM apps: Fine-tuning, RLHF
- Components for securing LLM apps: I/O Validation, Guardrails
📱 Application layer
- Tools for serving LLM apps: FastApi, Django, Streamlit
- Tools for serving LLM components: PgVector, Postgres, Redis
Infrastructure layer
- Infrastructure for running LLM apps locally: Docker
- Infrastructure for running LLM apps in production: AWS
- Best practices like testing, formatting, CI/CD, security and secret management.
Our goal is to integrate the 3 layers of software development using 1 toolkit and build production-grade LLM Apps.
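For a taste of the software layer, here is a minimal sketch of a phidata `Conversation`. It assumes the `phi.conversation` interface and an `OPENAI_API_KEY` in the environment; treat the exact names as illustrative rather than canonical:

```python
# Minimal sketch, assuming phidata's Conversation interface.
# Defaults to an OpenAI chat model and reads OPENAI_API_KEY from the environment.
from phi.conversation import Conversation

conversation = Conversation()
conversation.print_response("Share a 2 sentence horror story.")
```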
How it works
- Create your LLM app from a template using `phi ws create`
- Run your app locally using `phi ws up dev:docker`
- Run your app on AWS using `phi ws up prd:aws`
🎯 For more information:
- Read the documentation
- Read about phidata basics
- Chat with us on Discord
- Email us at help@phidata.com
👩‍💻 Quickstart: Build an LLM App 🧑‍💻
Let's build an LLM App with GPT-4, using PgVector for the knowledge base and storage. We'll serve the app using Streamlit and FastApi, running locally on Docker.
Install Docker Desktop before moving ahead.
Setup
Open the Terminal and create a python virtual environment:
```
python3 -m venv ~/.venvs/llmenv
source ~/.venvs/llmenv/bin/activate
```
Install phidata:
```
pip install phidata
```
Create your codebase
Create your codebase using the `llm-app` template, which is pre-configured with FastApi, Streamlit and PgVector. Use this codebase as a starting point for your LLM product.
```
phi ws create -t llm-app -n llm-app
```
This will create a folder named `llm-app` with the following structure:
```
llm-app
├── api                   # directory for FastApi routes
├── app                   # directory for Streamlit apps
├── db                    # directory for database components
├── llm                   # directory for LLM components
│   ├── conversations     # LLM conversations
│   ├── knowledge_base.py # LLM knowledge base
│   └── storage.py        # LLM storage
├── notebooks             # directory for Jupyter notebooks
├── Dockerfile            # Dockerfile for the application
├── pyproject.toml        # python project definition
├── requirements.txt      # python dependencies generated by pyproject.toml
├── scripts               # directory for helper scripts
├── utils                 # directory for shared utilities
└── workspace
    ├── dev_resources.py  # Dev resources running locally
    ├── prd_resources.py  # Production resources running on AWS
    ├── jupyter           # Jupyter notebook resources
    ├── secrets           # directory for storing secrets
    └── settings.py       # Phidata workspace settings
```
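For orientation, `llm/knowledge_base.py` is where the template wires a knowledge base to PgVector. Below is a hedged sketch of what such a file might contain; the class names, collection name, PDF URL and connection string are all assumptions, not the template's exact contents:

```python
# Hypothetical sketch of llm/knowledge_base.py.
# Class names and the db_url are assumptions about the template.
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector

pdf_knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/recipes.pdf"],  # hypothetical source PDF
    vector_db=PgVector(
        collection="pdf_documents",  # assumed collection name
        db_url="postgresql+psycopg2://ai:ai@localhost:5432/ai",  # assumed local PgVector URL
    ),
)
```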
Set OpenAI Key
Set the `OPENAI_API_KEY` environment variable. You can get one from OpenAI here.
```
export OPENAI_API_KEY=sk-***
```
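To sanity-check the variable from python before starting the app, a quick stdlib-only check works (the `sk-` prefix test is just a heuristic):

```python
import os

# Fail fast if the key was not exported in this shell session.
key = os.environ.get("OPENAI_API_KEY", "")
assert key.startswith("sk-"), "OPENAI_API_KEY is missing or malformed"
print("OPENAI_API_KEY is set")
```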
Serve your LLM App using Streamlit
Streamlit allows us to build micro front-ends for our LLM App and is extremely useful for building basic applications in pure python. Start the `app` group using:
```
phi ws up --group app
```
Press Enter to confirm, then allow a few minutes for the image to download (only the first time). Verify container status and view logs in the Docker dashboard.
Open localhost:8501 to view the Streamlit apps, which you can customize and make your own.
Chat with PDFs
- Click on Chat with PDFs in the sidebar
- Enter a username and wait for the knowledge base to load.
- Choose between `RAG` and `Autonomous` mode (see the sketch after this list).
- Ask "How do I make chicken tikka salad?"
- The Streamlit apps are defined in the `app` folder.
- The `Conversations` powering these apps are defined in the `llm/conversations` folder.
- The Streamlit application is defined in the `workspace/dev_resources.py` file.
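To make the two modes concrete, here is a hedged sketch of how a `Conversation` might be configured for each. The parameter names (`add_references_to_prompt`, `function_calls`) are best-effort assumptions about the phidata API, and `pdf_knowledge_base` is a hypothetical import from the template:

```python
# Sketch only: parameter names are assumptions, not verified phidata API.
from phi.conversation import Conversation
from llm.knowledge_base import pdf_knowledge_base  # hypothetical template import

# RAG mode: relevant knowledge-base chunks are injected into the prompt.
rag_conversation = Conversation(
    knowledge_base=pdf_knowledge_base,
    add_references_to_prompt=True,
)

# Autonomous mode: the LLM decides when to search the knowledge base,
# using function calling.
autonomous_conversation = Conversation(
    knowledge_base=pdf_knowledge_base,
    function_calls=True,
)
```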
Serve your LLM App using FastApi
Streamlit is great for building micro front-ends, but any production application will be built using a front-end framework like next.js, backed by a REST API built with a framework like FastApi.
Your LLM App comes pre-configured with FastApi. Start the `api` group using:
```
phi ws up --group api
```
Press Enter to confirm, then allow a few minutes for the image to download.
View API Endpoints
- Open localhost:8000/docs to view the API Endpoints.
- Test the `v1/pdf/conversation/chat` endpoint with `{"message": "how do I make chicken tikka salad"}` (see the client sketch after this list).
- Check out the `api/routes/pdf_routes.py` file for endpoints that you can integrate with your front-end or product.
- The FastApi application is defined in the `workspace/dev_resources.py` file.
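For a quick test outside the docs UI, here is a small client sketch using `requests`. The endpoint path and payload come from the list above; the response shape is an assumption that depends on the route implementation:

```python
import requests

# Call the chat endpoint exposed by the local FastApi container.
response = requests.post(
    "http://localhost:8000/v1/pdf/conversation/chat",
    json={"message": "how do I make chicken tikka salad"},
)
response.raise_for_status()
print(response.text)  # inspect the raw payload; shape depends on the route
```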
Delete local resources
Play around, then stop the workspace using:
```
phi ws down
```
Run your LLM App on AWS
Read how to run your LLM App on AWS here.
More information:
- Read the documentation
- Read about phidata basics
- Chat with us on Discord
- Email us at help@phidata.com