
Llama Stack

Project description


Quick Start | Documentation | Colab Notebook | Discord

🚀 One-Line Installer 🚀

To try Llama Stack locally, run:

curl -LsSf https://github.com/llamastack/llama-stack/raw/main/scripts/install.sh | bash
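
Alternatively, since the server is published to PyPI (this package), you can install it with pip and manage it like any other Python dependency:

pip install llama-stack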

Overview

Llama Stack defines and standardizes the core building blocks that simplify AI application development. It provides a unified set of APIs with implementations from leading service providers. More specifically, it provides:

  • Unified API layer for Inference, RAG, Agents, Tools, Safety, and Evals (see the sketch after this list).
  • Plugin architecture to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
  • Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
  • Multiple developer interfaces like CLI and SDKs for Python, TypeScript, iOS, and Android.
  • Standalone applications as examples for how to build production-grade AI applications with Llama Stack.
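
To make the unified API concrete, here is a minimal sketch that calls the inference API of a locally running server through the llama-stack-client Python SDK. The port, the model identifier, and the exact method names are illustrative assumptions and can vary between client releases:

```python
# Minimal sketch: chat with a model through the unified Inference API.
# Assumes a Llama Stack server is running locally (8321 is the default
# port in recent releases) and that a model is registered with the server.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The same call works unchanged whether the server routes inference to
# Ollama, vLLM, Fireworks, or any other configured provider.
response = client.chat.completions.create(
    model="llama3.2:3b",  # hypothetical id; list yours with client.models.list()
    messages=[{"role": "user", "content": "Write a haiku about llamas."}],
)
print(response.choices[0].message.content)
```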

Llama Stack Benefits

  • Flexibility: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices.
  • Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
  • Robust Ecosystem: Llama Stack is integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.

For more information, see the Benefits of Llama Stack documentation.

API Providers

Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack; please check out the documentation for the full list. Each provider implements a subset of the Llama Stack APIs (Agents, Inference, VectorIO, Safety, Post Training, Eval, and DatasetIO).

| API Provider | Environments |
| --- | --- |
| Meta Reference | Single Node |
| SambaNova | Hosted |
| Cerebras | Hosted |
| Fireworks | Hosted |
| AWS Bedrock | Hosted |
| Together | Hosted |
| Groq | Hosted |
| Ollama | Single Node |
| TGI | Hosted/Single Node |
| NVIDIA NIM | Hosted/Single Node |
| ChromaDB | Hosted/Single Node |
| Milvus | Hosted/Single Node |
| Qdrant | Hosted/Single Node |
| Weaviate | Hosted/Single Node |
| SQLite-vec | Single Node |
| PG Vector | Single Node |
| PyTorch ExecuTorch | On-device iOS |
| vLLM | Single Node |
| OpenAI | Hosted |
| Anthropic | Hosted |
| Gemini | Hosted |
| WatsonX | Hosted |
| HuggingFace | Single Node |
| TorchTune | Single Node |
| NVIDIA NEMO | Hosted |
| NVIDIA | Hosted |

Note: Additional providers are available through external packages. See External Providers documentation.

Distributions

A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario. For example, you can begin with a local setup using Ollama and seamlessly transition to production with Fireworks, without changing your application code. Here are some of the distributions we support:

| Distribution | Llama Stack Docker | Start This Distribution |
| --- | --- | --- |
| Starter Distribution | llamastack/distribution-starter | Guide |
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
| PostgreSQL | llamastack/distribution-postgres-demo | |
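
As a hedged example, a distribution image can typically be started directly with Docker. The port mapping below assumes the server's default port of 8321 in recent releases; depending on the providers a distribution bundles, you may also need to pass API keys or backend URLs as environment variables:

docker run -it -p 8321:8321 llamastack/distribution-starter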

For full documentation on the Llama Stack distributions see the Distributions Overview page.

Documentation

Please check out our Documentation page for more details.

Llama Stack Client SDKs

Check out our client SDKs for connecting to a Llama Stack server in your preferred language.

| Language | Client SDK | Package |
| --- | --- | --- |
| Python | llama-stack-client-python | PyPI |
| Swift | llama-stack-client-swift | Swift Package Index |
| TypeScript | llama-stack-client-typescript | npm |
| Kotlin | llama-stack-client-kotlin | Maven |
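
For instance, connecting with the Python SDK and discovering what a running server exposes looks roughly like this (the base URL is an assumption; point it wherever your server is listening):

```python
# Minimal sketch: list the models a running Llama Stack server advertises.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Each registered model (for inference, embeddings, etc.) has an identifier
# you can pass to the inference APIs.
for model in client.models.list():
    print(model.identifier)
```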

You can find more example scripts that use the client SDKs to talk to a Llama Stack server in our llama-stack-apps repo.

Community

We hold regular community calls to discuss the latest developments and get feedback from the community.

🌟 GitHub Star History


✨ Contributors

Thanks to all of our amazing contributors!

Project details


Release history

This version

0.5.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_stack-0.5.0.tar.gz (16.0 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llama_stack-0.5.0-py3-none-any.whl (4.0 MB)

Uploaded Python 3

File details

Details for the file llama_stack-0.5.0.tar.gz.

File metadata

  • Download URL: llama_stack-0.5.0.tar.gz
  • Upload date:
  • Size: 16.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llama_stack-0.5.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | ec3b0455eaa5b24fa53c1b789c18ff8d14580109c7f4ccb64da069b25d0ccde9 |
| MD5 | 242afa93accc858715a7378d7cab0a89 |
| BLAKE2b-256 | 7179e54a4c242bec4656314bccbc65fccc9acac337f83299e12ae934f320c5a2 |

See more details on using hashes here.
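
As a practical example, you can pin this exact artifact in a requirements.txt and have pip verify the digest at install time; add the line below (the hash is the SHA256 from the table above) and install with pip install --require-hashes -r requirements.txt. Note that this mode requires a hash for every dependency as well:

llama_stack==0.5.0 --hash=sha256:ec3b0455eaa5b24fa53c1b789c18ff8d14580109c7f4ccb64da069b25d0ccde9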

Provenance

The following attestation bundles were made for llama_stack-0.5.0.tar.gz:

Publisher: pypi.yml on llamastack/llama-stack

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llama_stack-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: llama_stack-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 4.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llama_stack-0.5.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | a1bb2aacf61c293693e9d1986a173cd5088c3ee7269d9fd1f584bbca98e98f40 |
| MD5 | 207625eb3627cd296f9649072d33fdbe |
| BLAKE2b-256 | ebd44c02eef79b6bd8b4b6670a2f0386981cf81059ef79bbec4fd6e7433c7346 |

See more details on using hashes here.

Provenance

The following attestation bundles were made for llama_stack-0.5.0-py3-none-any.whl:

Publisher: pypi.yml on llamastack/llama-stack

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
