Project description

vibe-llama

vibe-llama is a set of tools that are designed to help developers build working and reliable applications with LlamaIndex, LlamaCloud Services and llama-index-workflows.

This command-line tool provides two main capabilities:

Context Injection: Add relevant LlamaIndex context as rules to any coding agent of your choice (think Cursor, Claude Code, GitHub Copilot etc.). You select a coding agent and the LlamaIndex services you're working with, and vibe-llama generates rule files that give your AI assistant up-to-date knowledge about APIs, best practices, and common patterns.

Once you've made your choice, vibe-llama will generate a rule file for your coding agent. For example, if you selected Cursor, a new rule will be added to .cursor/rules. Now, all of the context and instructions about your chosen LlamaIndex service will be available to your coding agent of choice.

Workflow Generation: An interactive CLI agent that helps you build document-processing workflows from scratch. Describe what you want in natural language, provide reference documents, and get complete workflow code with detailed explanations.
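For a sense of where Context Injection writes its output, the agent-to-rule-file mapping can be sketched as follows. Only the Cursor and Claude Code locations are taken from this README; the GitHub Copilot path is an assumption for illustration:

```python
from pathlib import Path

# Illustrative mapping only: the Cursor and Claude Code locations appear in
# this README; the GitHub Copilot path is an assumption, not vibe-llama's
# actual behavior.
RULE_FILE_LOCATIONS = {
    "Cursor": Path(".cursor/rules"),
    "Claude Code": Path("CLAUDE.md"),
    "GitHub Copilot": Path(".github/copilot-instructions.md"),
}

def rule_location(agent: str) -> Path:
    """Return where a rule file would land for the given coding agent."""
    if agent not in RULE_FILE_LOCATIONS:
        raise ValueError(f"No known rule location for agent: {agent!r}")
    return RULE_FILE_LOCATIONS[agent]
```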

Installation

User settings

You can install and run vibe-llama using uv:

uvx vibe-llama@latest --help

Or you can install it first with pip and then run it:

pip install vibe-llama
vibe-llama --help

Developer settings

Clone the GitHub repository:

git clone https://github.com/run-llama/vibe-llama
cd vibe-llama

Build and install the project:

uv build

For regular installation:

uv pip install dist/*.whl

For editable installation (development):

# Activate virtual environment first
uv venv
source .venv/bin/activate  # On Unix/macOS

# Then install in editable mode
uv pip install -e .

Usage

vibe-llama is a CLI command, and has the following subcommands:

starter

starter provides your coding agents with up-to-date documentation about LlamaIndex, LlamaCloud Services and llama-index-workflows, so that they can build reliable and working applications. You can launch a terminal user interface by running vibe-llama starter and select your desired coding agents and services there, or you can pass your agent (-a/--agent) and chosen service (-s/--service) directly on the command line.

Use the -v/--verbose flag (available in both the TUI and CLI) for verbose logging of the processes executed while the application runs.

Use the -w/--overwrite flag (CLI only) to overwrite local files with the ones downloaded by vibe-llama starter. In the TUI, you will be prompted to choose whether to overwrite existing files.

With starter, you can also launch a local MCP server (at http://127.0.0.1:8000/mcp) using the -m/--mcp flag. This server exposes a tool (get_relevant_context) that allows you to retrieve relevant documentation content based on a specific query. If you are interested in interacting with vibe-llama MCP programmatically, you can check the SDK guide.

Example usage

vibe-llama starter # Launch a TUI
vibe-llama starter -a 'GitHub Copilot' -s LlamaIndex -v # Select GitHub Copilot and LlamaIndex and enable verbose logging
vibe-llama starter -a 'Claude Code' -s llama-index-workflows -w # Select Claude Code and llama-index-workflows and overwrite the existing CLAUDE.md
vibe-llama starter --mcp # Launch an MCP server

docuflows

docuflows is a CLI agent that enables you to build and edit workflows oriented toward intelligent document processing (combining llama-index-workflows and LlamaCloud).

In order to use this command, you need to first set your OpenAI API key and your LlamaCloud API key as environment variables. Optionally, if you wish to use Anthropic LLMs, you should also set the Anthropic API key in your environment.

On macOS/Linux

export OPENAI_API_KEY="your-openai-api-key"
export LLAMA_CLOUD_API_KEY="your-llama-cloud-api-key"
# optionally, for Anthropic usage
export ANTHROPIC_API_KEY="your-anthropic-api-key"

On Windows (PowerShell)

$Env:OPENAI_API_KEY="your-openai-api-key"
$Env:LLAMA_CLOUD_API_KEY="your-llama-cloud-api-key"
# optionally, for Anthropic usage
$Env:ANTHROPIC_API_KEY="your-anthropic-api-key"

Once the needed API keys are set in your environment, running vibe-llama docuflows starts a terminal interface where you can interactively talk to the agent and create or edit document-centered workflows with its help.
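Before launching the agent, a small pre-flight check can confirm the required variables are set. This helper is illustrative and not part of vibe-llama:

```python
import os

# Keys this README says docuflows requires; ANTHROPIC_API_KEY is optional
# and only needed for Anthropic LLMs.
REQUIRED = ["OPENAI_API_KEY", "LLAMA_CLOUD_API_KEY"]

def missing_keys(env=os.environ):
    """Return the names of required keys that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_keys()
    if missing:
        raise SystemExit("Set these before running docuflows: " + ", ".join(missing))
```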

Example usage

vibe-llama docuflows

[!NOTE]

vibe-llama docuflows uses AGENTS.md as an instructions file (located under .vibe-llama/rules/). If you wish, you can directly create AGENTS.md with the starter command, by selecting vibe-llama docuflows as your agent. Alternatively, if an AGENTS.md is not present in your environment, vibe-llama docuflows will create one on the fly.

During an open session with docuflows, you will be prompted to configure your LlamaCloud settings (project and organization ID are required for this step), and then you will be able to create or edit workflows.

During the editing or generation process, you will be asked to provide reference files for your workflow (e.g. an invoice file if you are asking for an invoice-processing workflow), so make sure to prepare them.

Once the workflow generation/editing is finished, you will be able to save the code and the code-related explanation in a folder that will be created under generated_workflows/. In the folder you will find a workflow.py file, containing the code, and a runbook.md file, containing instructions and explanations related to the code.

scaffold

scaffold is a command that allows you to generate working examples of AI-powered workflows for a variety of use cases.

You can use it from the command line, passing the -u/--use_case flag to select the use case and the -p/--path flag to set where the example workflow will be stored (defaults to .vibe-llama/scaffold).

Alternatively, you can launch a terminal user interface by running vibe-llama scaffold.

Once you have chosen the use case to download and the path to save the code to, scaffold populates that path with a workflow.py (the actual workflow code), a README.md (explaining how to set up and run the workflow, as well as the workflow structure) and a pyproject.toml with the project details.
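A quick way to confirm a scaffold run completed is to check for those three files. This helper is illustrative and not part of vibe-llama:

```python
from pathlib import Path

# Files this README says `scaffold` writes to the target directory.
EXPECTED_FILES = ("workflow.py", "README.md", "pyproject.toml")

def missing_scaffold_files(path):
    """Return the expected scaffold files that are absent under `path`."""
    root = Path(path)
    return [name for name in EXPECTED_FILES if not (root / name).is_file()]
```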

Example usage

vibe-llama scaffold --use_case document_parsing --path examples/document_parsing_workflow/ # save the document parsing use case to examples/document_parsing_workflow/
vibe-llama scaffold # launch the terminal interface

[!NOTE]

You can find all the examples in the templates folder

SDK

vibe-llama also comes with a programmatic interface that you can call from your Python scripts.

VibeLlamaStarter

To replicate the starter command on the CLI and fetch all the needed instructions for your coding agents, you can use the following code:

from vibe_llama.sdk import VibeLlamaStarter

starter = VibeLlamaStarter(
    agents=["GitHub Copilot", "Cursor"],
    services=["LlamaIndex", "llama-index-workflows"],
)

# write_instructions is a coroutine, so await it inside an async function
# (or drive it with asyncio.run)
await starter.write_instructions(
    verbose=True, max_retries=20, retry_interval=0.7
)
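The max_retries and retry_interval parameters suggest a retry-with-delay loop around each documentation fetch. A generic sketch of that pattern follows; it is not vibe-llama's actual implementation, which is not shown here:

```python
import time

def fetch_with_retries(fetch, max_retries=20, retry_interval=0.7):
    """Call `fetch` until it succeeds, sleeping `retry_interval` seconds
    between attempts; re-raise the last error after `max_retries` failures.
    Assumes max_retries >= 1."""
    last_error = None
    for _ in range(max_retries):
        try:
            return fetch()
        except Exception as err:  # in practice, narrow this to network errors
            last_error = err
            time.sleep(retry_interval)
    raise last_error
```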

VibeLlamaMCPClient

[!NOTE]

To interact with the vibe-llama MCP server you can use any MCP client you like.

This class implements an MCP client for interacting directly, and in a well-integrated way, with the vibe-llama MCP server.

You can use it as follows:

from vibe_llama.sdk import VibeLlamaMCPClient

client = VibeLlamaMCPClient()

# the client methods are coroutines, so await them inside an async function
# list the available tools
await client.list_tools()

# retrieve specific documentation content
await client.retrieve_docs(query="Parsing pre-sets in LlamaParse")

# retrieve a certain number of matches
await client.retrieve_docs(query="Human in the loop", top_k=4)

# retrieve matches and parse the returned XML string
result = await client.retrieve_docs(
    query="Workflow Design Patterns", top_k=3, parse_xml=True
)
if "result" in result:
    print(result["result"])  # -> List of the top three matches for your query
else:
    print(result["error"])  # -> List of error messages
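With parse_xml=True the client parses the returned XML for you; if you work with the raw string instead, parsing it yourself is straightforward. The element names below (results/match) are hypothetical, so check the actual payload returned by retrieve_docs for the real schema:

```python
import xml.etree.ElementTree as ET

def parse_matches(xml_string):
    """Extract match texts from an XML results string.

    The element names used here (`results`/`match`) are hypothetical;
    inspect the real `retrieve_docs` payload for the actual schema."""
    root = ET.fromstring(xml_string)
    return [el.text or "" for el in root.iter("match")]
```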

VibeLlamaDocsRetriever

This class implements a retriever for vibe-llama documentation, leveraging BM25 (enhanced with stemming) for lightweight, on-disk indexing and retrieval.

You can use it as follows:

from vibe_llama.sdk import VibeLlamaDocsRetriever

retriever = VibeLlamaDocsRetriever()

# retrieve a maximum of 10 relevant documents pertaining to the query 'What is LlamaExtract?'
await retriever.retrieve(query="What is LlamaExtract?", top_k=10)
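BM25 ranks documents by query-term frequency, damped by term rarity and document length. A toy scorer, separate from vibe-llama's actual on-disk index and omitting the stemming step, can be sketched as:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against `query` with plain BM25 (no stemming)."""
    tokenized = [doc.lower().split() for doc in docs]
    avg_len = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(tokenized)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            # document frequency: how many docs contain the term at all
            df = sum(1 for d in tokenized if term in d)
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            freq = tf[term]
            # length normalization: longer docs need more occurrences to rank
            norm = freq + k1 * (1 - b + b * len(tokens) / avg_len)
            score += idf * freq * (k1 + 1) / norm
        scores.append(score)
    return scores
```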

VibeLlamaScaffold

VibeLlamaScaffold allows you to download human-curated, end-to-end workflow templates for various use cases.

You can use it as follows:

from vibe_llama.sdk import VibeLlamaScaffold

scaffolder = VibeLlamaScaffold(
    colored_output=True
)  # you can enable/disable colored output

await scaffolder.get_template(
    template_name="invoice_extraction",
    save_path="examples/invoice_extraction/",
)  # if you do not provide a `save_path`, it will default to `.vibe-llama/scaffold`

Contributing

We welcome contributions! Please read our Contributing Guide to get started.

License

This project is licensed under the MIT License.

Project details


Download files


Source Distribution

vibe_llama-0.4.1.tar.gz (287.8 kB)

Built Distribution

vibe_llama-0.4.1-py3-none-any.whl (93.1 kB)

File details

Details for the file vibe_llama-0.4.1.tar.gz.

File metadata

  • Download URL: vibe_llama-0.4.1.tar.gz
  • Size: 287.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.15

File hashes

Hashes for vibe_llama-0.4.1.tar.gz:

  • SHA256: 9a51e5a9f8d4380c214da7c81bdbf10dd0fd67399351b0311a37cd76b226c6c2
  • MD5: a0343da1de7ecc4bf6e45f68a43d373c
  • BLAKE2b-256: 59e676c9ebbfb0641a36cfac6aaac23ac7543d6e36e1437f9dbed50413514c37


File details

Details for the file vibe_llama-0.4.1-py3-none-any.whl.

File metadata

  • Download URL: vibe_llama-0.4.1-py3-none-any.whl
  • Size: 93.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.15

File hashes

Hashes for vibe_llama-0.4.1-py3-none-any.whl:

  • SHA256: 7d8008fb6fd00b40976058929892992ee265053e7cf0006deac0aa14ed7b730d
  • MD5: 68a7f8a5050de4b93ce8d4ed61d51cf0
  • BLAKE2b-256: 5c3542d3ce1ff7817c193ec9ce73c98c638a3cb7d6db99488a3b502031536fe6

