phidata

AI Toolkit for Engineers


🧰 Phidata is an all-in-one toolkit for building products using LLMs


It solves the problem of building LLM applications by providing:

💻 Software layer

  • Components for building LLM apps: RAG, Agents, Workflows
  • Components for extending LLM apps: VectorDbs, Storage, Memory, Cache
  • Components for monitoring LLM apps: Model Inputs/Outputs, Quality, Cost
  • Components for improving LLM apps: Fine-tuning, RLHF
  • Components for securing LLM apps: I/O Validation, Guardrails

📱 Application layer

  • Tools for serving LLM apps: FastApi, Django, Streamlit
  • Tools for serving LLM components: PgVector, Postgres, Redis

🌉 Infrastructure layer

  • Infrastructure for running LLM apps locally: Docker
  • Infrastructure for running LLM apps in production: AWS
  • Best practices like testing, formatting, CI/CD, security and secret management.

Our goal is to integrate these three layers of software development with one toolkit and help you build production-grade LLM Apps.

🚀 How it works

  • Create your LLM app from a template using phi ws create
  • Run your app locally using phi ws up dev:docker
  • Run your app on AWS using phi ws up prd:aws

๐Ÿ‘ฉโ€๐Ÿ’ป Quickstart: Build a LLM App ๐Ÿง‘โ€๐Ÿ’ป

Let's build an LLM App with GPT-4, using PgVector for the knowledge base and conversation storage. We'll serve the app using Streamlit and FastApi, running locally on Docker.

Install Docker Desktop before moving ahead.

Setup

Open the Terminal and create a Python virtual environment:

python3 -m venv ~/.venvs/llmenv
source ~/.venvs/llmenv/bin/activate

Install phidata

pip install phidata

Create your codebase

Create your codebase using the llm-app template that is pre-configured with FastApi, Streamlit and PgVector. Use this codebase as a starting point for your LLM product.

phi ws create -t llm-app -n llm-app

This will create a folder named llm-app with the following structure:

llm-app
├── api               # directory for FastApi routes
├── app               # directory for Streamlit apps
├── db                # directory for database components
├── llm               # directory for LLM components
│   ├── conversations       # LLM conversations
│   ├── knowledge_base.py   # LLM knowledge base
│   └── storage.py          # LLM storage
├── notebooks         # directory for Jupyter notebooks
├── Dockerfile        # Dockerfile for the application
├── pyproject.toml    # python project definition
├── requirements.txt  # python dependencies generated by pyproject.toml
├── scripts           # directory for helper scripts
├── utils             # directory for shared utilities
└── workspace
    ├── dev_resources.py  # Dev resources running locally
    ├── prd_resources.py  # Production resources running on AWS
    ├── jupyter           # Jupyter notebook resources
    ├── secrets           # directory for storing secrets
    └── settings.py       # Phidata workspace settings

Set OpenAI Key

Set the OPENAI_API_KEY environment variable. You can get one from OpenAI here.

export OPENAI_API_KEY=sk-***
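To double-check that the key is visible to your shell, you can run a quick sanity check (trivial, and not part of the template):

# confirm the key is exported in the current environment
import os

key = os.getenv("OPENAI_API_KEY", "")
print("OPENAI_API_KEY is set" if key.startswith("sk-") else "OPENAI_API_KEY is missing")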

Serve your LLM App using Streamlit

Streamlit allows us to build micro front-ends for our LLM App and is extremely useful for building basic applications in pure Python. Start the app group using:

phi ws up --group app

Press Enter to confirm, then allow a few minutes for the image to download (only the first time). Verify container status and view logs in the Docker Desktop dashboard.

Open localhost:8501 to view Streamlit apps that you can customize and make your own.
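
To give a sense of how little code a micro front-end needs, here is a minimal, hypothetical Streamlit sketch. It is illustrative only, not the template's code: the real apps live in the app folder and are wired to Conversations.

# hypothetical_app.py - a stripped-down Streamlit micro front-end
# (illustrative only; the template's apps in the app folder call
#  Conversations defined in llm/conversations)
import streamlit as st

st.title("Chat with your LLM App")

prompt = st.text_input("Ask a question")
if prompt:
    # the template would route this to a Conversation; we just echo here
    st.write(f"You asked: {prompt}")

Run it with streamlit run hypothetical_app.py.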

Chat with PDFs

  • Click on Chat with PDFs in the sidebar
  • Enter a username and wait for the knowledge base to load.
  • Choose between RAG or Autonomous mode.
  • Ask "How do I make chicken tikka salad?"
  • The Streamlit apps are defined in the app folder.
  • The Conversations powering these apps are defined in the llm/conversations folder (a sketch of what one might look like follows below).
  • The Streamlit application is defined in the workspace/dev_resources.py file.
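
For reference, here is a minimal sketch of a RAG Conversation, loosely based on phidata 2.0's Conversation API. The module paths, db_url, and PDF URL are assumptions; check llm/conversations in your generated codebase for the template's actual definitions.

from phi.conversation import Conversation
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector

# assumed connection string for the local PgVector container; yours may differ
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# hypothetical PDF source; the template ships with its own knowledge base
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/recipes.pdf"],
    vector_db=PgVector(collection="recipes", db_url=db_url),
)
knowledge_base.load(recreate=False)  # embed the PDFs into PgVector

conversation = Conversation(
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,  # RAG: add retrieved chunks to the prompt
)
conversation.print_response("How do I make chicken tikka salad?")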

Serve your LLM App using FastApi

Streamlit is great for building micro front-ends, but most production applications are built with a front-end framework like Next.js backed by a REST API built with a framework like FastApi.

Your LLM App comes pre-configured with FastApi; start the api group using:

phi ws up --group api

Press Enter to confirm, then allow a few minutes for the image to download.

View API Endpoints

  • Open localhost:8000/docs to view the API Endpoints.
  • Test the v1/pdf/conversation/chat endpoint with {"message": "how do I make chicken tikka salad"} (see the Python example after this list).
  • Check out the api/routes/pdf_routes.py file for endpoints that you can integrate with your front-end or product.
  • The FastApi application is defined in the workspace/dev_resources.py file.
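
For example, you can hit the chat endpoint from Python using the requests library. The URL and payload come from the steps above; the exact response schema depends on api/routes/pdf_routes.py.

import requests

# endpoint and payload as documented above; adjust if your routes differ
response = requests.post(
    "http://localhost:8000/v1/pdf/conversation/chat",
    json={"message": "how do I make chicken tikka salad"},
)
print(response.status_code)
print(response.text)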

Delete local resources

When you're done playing around, stop the workspace using:

phi ws down

Run your LLM App on AWS

Read how to run your LLM App on AWS here.
