Llama-recipes is a companion project to the Llama models. Its goal is to provide examples for quickly getting started with fine-tuning for domain adaptation and for running inference on the fine-tuned models.

Project description

Llama Recipes: Examples to get started using the Llama models from Meta

The 'llama-recipes' repository is a companion to the Meta Llama models. We support the latest version, Llama 3.2 (Vision and Text), in this repository. This repository contains example scripts and notebooks to get started with the models in a variety of use cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples show how to run Llama locally, in the cloud, and on-prem.

[!TIP] Get started with Llama 3.2 using the new recipes in this repository.

[!NOTE] Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token <|image|> representing the input image for the multimodal models.

More details on the prompt templates for image reasoning, tool-calling and code interpreter can be found on the documentation website.
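For reference, a single user turn for the multimodal models might look like the following sketch, which combines the Llama 3.1 chat template with the new token. Treat the exact whitespace and the sample instruction as illustrative and check the documentation website for the authoritative format:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

<|image|>Describe this image in two sentences.<|eot_id|><|start_header_id|>assistant<|end_header_id|>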

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

Prerequisites

PyTorch Nightlies

If you want to use PyTorch nightlies instead of the stable release, go to this guide to retrieve the right --extra-index-url URL parameter for the pip install commands on your platform.
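For example, on Linux with CUDA 11.8 the nightly install typically looks like the line below; the index URL for your platform may differ, so verify it against the guide:

pip install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cu118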

Installing

Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.

[!NOTE] Ensure you use the correct CUDA version (from nvidia-smi) when installing the PyTorch wheels. Here we are using 11.8 as cu118. H100 GPUs work better with CUDA >12.0.
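A quick way to check the driver's CUDA version and install a matching stable wheel (cu118 is used here purely as an illustration):

nvidia-smi | grep "CUDA Version"   ## e.g. prints: CUDA Version: 11.8
pip install torch --index-url https://download.pytorch.org/whl/cu118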

Install with pip

pip install llama-recipes
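A quick sanity check that the install succeeded:

python -c "import llama_recipes; print('ok')"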

Install with optional dependencies

Llama-recipes offers the installation of optional packages. There are four optional dependency groups. To run the unit tests, install the required dependencies with:

pip install llama-recipes[tests]

For the vLLM example we need additional requirements that can be installed with:

pip install llama-recipes[vllm]

To use the sensitive topics safety checker, install with:

pip install llama-recipes[auditnlg]

Some recipes require langchain. To install the packages, follow the recipe description or install with:

pip install llama-recipes[langchain]

Optional dependencies can also be combined with [option1,option2].
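For example, to install the test and vLLM dependencies in one step:

pip install llama-recipes[tests,vllm]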

Install from source

To install from source, e.g. for development, use the following commands. We use hatchling as our build backend, which requires an up-to-date pip as well as the setuptools package.

git clone git@github.com:meta-llama/llama-recipes.git
cd llama-recipes
pip install -U pip setuptools
pip install -e .

For development and contributing to llama-recipes, please install all optional dependencies:

git clone git@github.com:meta-llama/llama-recipes.git
cd llama-recipes
pip install -U pip setuptools
pip install -e .[tests,auditnlg,vllm]
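With the tests extra installed, the unit tests can then be run from the repository root with pytest (the exact test path may vary by release):

python -m pytest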

Getting the Llama models

You can find Llama models on the Hugging Face hub here; models with hf in the name are already converted to Hugging Face checkpoints, so no further conversion is needed. The conversion step below is only needed for the original model weights from Meta, which are also hosted on the Hugging Face model hub.
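For example, an already-converted checkpoint can be fetched directly from the Hub with the huggingface_hub CLI; meta-llama/Llama-3.2-3B-Instruct is used here as an illustrative (gated) model ID, so request access and log in first:

huggingface-cli login
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct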

Model conversion to Hugging Face

If you have the model checkpoints downloaded from the Meta website, you can convert them to the Hugging Face format with:

## The conversion script ships in the Hugging Face Transformers source tree
pip freeze | grep transformers ## verify the installed version is 4.45.0 or higher

git clone git@github.com:huggingface/transformers.git
cd transformers
pip install protobuf
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
   --input_dir /path/to/downloaded/llama/weights --model_size 3B --output_dir /output/path
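A cheap way to verify the conversion succeeded is to load the resulting config and tokenizer back from the output directory (no download involved):

python -c "from transformers import AutoConfig, AutoTokenizer; AutoConfig.from_pretrained('/output/path'); AutoTokenizer.from_pretrained('/output/path')"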

Repository Organization

Most of the code dealing with Llama usage is organized across two main folders: recipes/ and src/.

recipes/

Contains examples organized in folders by topic:

| Subfolder | Description |
| --- | --- |
| quickstart | The "Hello World" of using Llama; start here if you are new to using Llama. |
| use_cases | Scripts showing common applications of Meta Llama 3 |
| 3p_integrations | Partner-owned folder showing common applications of Meta Llama 3 |
| responsible_ai | Scripts to use PurpleLlama for safeguarding model outputs |
| experimental | Meta Llama implementations of experimental LLM techniques |

src/

Contains modules which support the example recipes:

| Subfolder | Description |
| --- | --- |
| configs | Contains the configuration files for PEFT methods, FSDP, datasets, and Weights & Biases experiment tracking. |
| datasets | Contains individual scripts for each dataset to download and process it. |
| inference | Includes modules for inference with the fine-tuned models. |
| model_checkpointing | Contains FSDP checkpoint handlers. |
| policies | Contains FSDP scripts providing different policies, such as mixed precision, the transformer wrapping policy, and activation checkpointing, along with an any-precision optimizer (used for running FSDP in pure bf16 mode). |
| utils | Utility files: train_utils.py provides the training/eval loop and other training utilities; dataset_utils.py fetches preprocessed datasets; config_utils.py overrides the configs received from the CLI; fsdp_utils.py provides the FSDP wrapping policy for PEFT methods; memory_utils.py is a context manager to track memory stats in the train loop. |

Supported Features

The recipes and modules in this repository support the following features; an example invocation combining several of them follows the list:

- HF support for inference
- HF support for fine-tuning
- PEFT
- Deferred initialization (meta init)
- Low CPU mode for multi GPU
- Mixed precision
- Single node quantization
- Flash attention
- Activation checkpointing (FSDP)
- Hybrid Sharded Data Parallel (HSDP)
- Dataset packing & padding
- BF16 optimizer (pure BF16)
- Profiling & MFU tracking
- Gradient accumulation
- CPU offloading
- FSDP checkpoint conversion to HF for inference
- W&B experiment tracker
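As a sketch of several of these features working together, the fine-tuning entry point can be invoked as a module with PEFT/LoRA enabled; the flags follow the quickstart documentation and may change between releases, and the paths are placeholders:

python -m llama_recipes.finetuning --use_peft --peft_method lora \
   --model_name /path/to/model_folder --output_dir /path/to/save/PEFT/model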

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

License

See the License file for Meta Llama 3.2 here and Acceptable Use Policy here

See the License file for Meta Llama 3.1 here and Acceptable Use Policy here

See the License file for Meta Llama 3 here and Acceptable Use Policy here

See the License file for Meta Llama 2 here and Acceptable Use Policy here

Download files

Download the file for your platform.

Source Distribution

llama_recipes-0.0.4.post1.tar.gz (25.6 MB)

Built Distribution

llama_recipes-0.0.4.post1-py3-none-any.whl (1.0 MB)

File details

Details for the file llama_recipes-0.0.4.post1.tar.gz.

File metadata

  • Filename: llama_recipes-0.0.4.post1.tar.gz
  • Size: 25.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.14

File hashes

Hashes for llama_recipes-0.0.4.post1.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | a27b152f573a55dc53db5586ee9b50f9d0892d4d3d872896aeebfcc50326d83c |
| MD5 | e4600ac9868ae1ba7c87744d36b71cad |
| BLAKE2b-256 | 154e62adac846b531a8982ef0129e6baae8c39099c4fa647f716f9bf3c89a4a8 |
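To verify a download against the SHA256 digest above, you can, for example, fetch the sdist with pip and hash it locally:

pip download llama-recipes==0.0.4.post1 --no-deps --no-binary :all:
sha256sum llama_recipes-0.0.4.post1.tar.gz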


File details

Details for the file llama_recipes-0.0.4.post1-py3-none-any.whl.

File hashes

Hashes for llama_recipes-0.0.4.post1-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 924ea2b6d9d2b4cef6d9d767f2ecbe59d75623bf2c9904b51e007ebab80e0de9 |
| MD5 | f50503468240045a5e24e2f9488d8611 |
| BLAKE2b-256 | f20714ee96ad31847dac1a034e6ebb6d3dd9fcab0d2f5dc72e26636e1bfc090c |

