phidata

AI Toolkit for Engineers

Build LLM applications using production-ready templates


Phidata is an everything-included AI toolkit that provides pre-built templates for LLM apps.

🚀 How it works

  • Create your LLM app using a template: phi ws create
  • Run your app locally: phi ws up dev:docker
  • Run your app on AWS: phi ws up prd:aws

For example, run a RAG Chatbot built with FastApi, Streamlit and PgVector in 2 commands:

phi ws create -t llm-app -n llm-app  # create the llm-app codebase
phi ws up                            # run the llm-app locally

It solves the problem of building LLM-powered products by providing:

💻 Software layer

  • Access to LLMs using a human-like Conversation interface (see the sketch after this list).
  • Components for building LLM apps: RAG, Agents, Tasks
  • Components for extending LLM apps: Knowledge Base, Storage, Memory, Cache
  • Components for monitoring LLM apps: Model Inputs/Outputs, Quality, Cost
  • Components for improving LLM apps: Fine-tuning, RLHF
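
For intuition, here is a minimal sketch of the Conversation interface, following the import path in the phidata 2.x docs; it assumes an OpenAI key is exported as OPENAI_API_KEY:

from phi.conversation import Conversation

# Create a Conversation; the defaults assume OPENAI_API_KEY is set in the environment
conversation = Conversation()

# Ask a question and print the response to the terminal
conversation.print_response("Share a 2 sentence summary of RAG.")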

📱 Application layer

  • Tools for running LLM apps: FastApi, Django, Streamlit
  • Tools for running LLM components: PgVector, Postgres, Redis

🌉 Infrastructure layer

  • Infrastructure for running LLM apps locally: Docker
  • Infrastructure for running LLM apps in production: AWS
  • Best practices like testing, formatting, CI/CD, security and secret management.

Phidata bridges the 3 layers of software development to deliver production-grade LLM Apps that you can run with 1 command.

👩‍💻 Example: Build a RAG LLM App 🧑‍💻

Let's build a RAG LLM App with GPT-4. We'll use PgVector for the Knowledge Base and Storage, and serve the app using Streamlit and FastApi. Read the full tutorial here.

Install Docker Desktop to run this app locally.

Setup

Open the Terminal and create an ai directory with a Python virtual environment.

mkdir ai && cd ai

python3 -m venv aienv
source aienv/bin/activate

Install phidata

pip install phidata

Create your codebase

Create your codebase using the llm-app template pre-configured with FastApi, Streamlit and PgVector. Use this codebase as a starting point for your LLM product.

phi ws create -t llm-app -n llm-app

This will create a folder named llm-app.


Serve your LLM App using Streamlit

Streamlit allows us to build micro front-ends for our LLM App and is extremely useful for building basic applications in pure Python (a minimal sketch follows below). Start the app group using:

phi ws up --group app

Press Enter to confirm and allow a few minutes for the image to download (only the first time). Verify container status and view logs on the Docker dashboard.
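
To give a flavor of what a Streamlit micro front-end looks like in pure Python, here is a minimal illustrative sketch; it is not the template's actual app, which ships inside the llm-app codebase:

import streamlit as st

# A toy chat page; the real app in the llm-app template is more complete
st.title("Chat with PDFs")
question = st.text_input("Ask a question")
if question:
    st.write(f"You asked: {question}")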

Chat with PDFs

  • Open localhost:8501 to view Streamlit apps that you can customize and make your own.
  • Click on Chat with PDFs in the sidebar.
  • Enter a username and wait for the knowledge base to load.
  • Choose the RAG Conversation type.
  • Ask "How do I make chicken curry?"
  • Upload PDFs and ask questions.

Serve your LLM App using FastApi

Streamlit is great for building micro front-ends, but any production application will be built using a front-end framework like Next.js backed by a REST API built with a framework like FastApi.

Your LLM App comes with ready-to-use FastApi endpoints; start the api group using:

phi ws up --group api

Press Enter to confirm and allow a few minutes for the image to download.

View API Endpoints

  • Open localhost:8000/docs to view the API Endpoints.
  • Load the knowledge base using /v1/pdf/conversation/load-knowledge-base.
  • Test the /v1/pdf/conversation/chat endpoint with {"message": "How do I make chicken curry?"} (see the sketch after this list).
  • The LLM API comes pre-built with endpoints that you can integrate with your front-end.
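
As a rough sketch, you can exercise these endpoints from Python with requests; the paths come from the list above, the base URL assumes the default local port, and the exact request/response schema is an assumption here, so check localhost:8000/docs for the source of truth:

import requests

BASE_URL = "http://localhost:8000"  # assumes the default local API port

# Load PDFs into the knowledge base (can take a minute on first run)
requests.post(f"{BASE_URL}/v1/pdf/conversation/load-knowledge-base")

# Ask a question against the loaded knowledge base
response = requests.post(
    f"{BASE_URL}/v1/pdf/conversation/chat",
    json={"message": "How do I make chicken curry?"},
)
print(response.json())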

Optional: Run JupyterLab

A Jupyter notebook is a must-have for AI development, and your llm-app comes with a notebook pre-installed with the required dependencies. Enable it by updating the workspace/settings.py file:

...
ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_jupyter_enabled=True,
    ...
)

Start Jupyter using:

phi ws up --group jupyter

Press Enter to confirm and allow a few minutes for the image to download (only the first time). Verify container status and view logs on the Docker dashboard.

View JupyterLab UI

  • Open localhost:8888 to view the JupyterLab UI. Password: admin
  • Play around with cookbooks in the notebooks folder.

Delete local resources

When you're done playing around, stop the workspace using:

phi ws down

Run your LLM App on AWS

Read how to run your LLM App on AWS here.
