vibe-llama
vibe-llama is a set of tools that are designed to help developers build working and reliable applications with LlamaIndex, LlamaCloud Services and llama-index-workflows.
This command-line tool adds the relevant context, as rules, to the coding agent of your choice (Cursor, Claude Code, GitHub Copilot, etc.):
- You select a coding agent.
- You select a LlamaIndex service (LlamaCloud, the LlamaIndex framework, or the Workflows package).
Once you've made your choices, vibe-llama generates a rule file for your coding agent. For example, if you selected Cursor, a new rule is added to .cursor/rules. All of the context and instructions about your chosen LlamaIndex service are then available to your coding agent.
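The flow above can be pictured with a short sketch. This is only an illustration of the idea, not vibe-llama's actual implementation: of the paths below, only Cursor's .cursor/rules directory is documented here; the file name and the other agents' paths are assumptions.

```python
from pathlib import Path

# Hypothetical mapping of agent -> rule file location. Only the Cursor
# directory (.cursor/rules) comes from the text above; the rest are guesses.
AGENT_RULE_PATHS = {
    "Cursor": Path(".cursor/rules/llamaindex.mdc"),
    "Claude Code": Path("CLAUDE.md"),
    "GitHub Copilot": Path(".github/copilot-instructions.md"),
}

def write_rules(agent: str, service: str, instructions: str,
                root: Path = Path(".")) -> Path:
    """Write service-specific instructions to the agent's rule file."""
    target = root / AGENT_RULE_PATHS[agent]
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(f"# Rules for {service}\n\n{instructions}\n")
    return target
```

In practice vibe-llama handles this for you; the sketch just shows why the result is agent-specific.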
Installation
User settings
You can install and run vibe-llama using uv:
uvx vibe-llama@latest --help
Or you can install it first with pip and run it afterwards:
pip install vibe-llama
Developer settings
Clone the GitHub repository:
git clone https://github.com/run-llama/vibe-llama
cd vibe-llama
Build and install the project:
uv build
For regular installation:
uv pip install dist/*.whl
For editable installation (development):
# Activate virtual environment first
uv venv
source .venv/bin/activate # On Unix/macOS
# Then install in editable mode
uv pip install -e .
Usage
vibe-llama is a CLI tool with the following subcommands:
starter
starter provides your coding agents with up-to-date documentation about LlamaIndex, LlamaCloud Services and llama-index-workflows, so that they can build reliable and working applications. You can launch a terminal user interface by running vibe-llama starter and select your desired coding agents and services from there, or you can pass your agent (-a, --agent) and chosen service (-s, --service) directly on the command line.
Use the -v/--verbose flag (in either TUI or CLI mode) if you want verbose logging of what the application is doing while it runs.
Example usage
vibe-llama starter # launch a TUI
vibe-llama starter -a 'GitHub Copilot' -s LlamaIndex -v # Select GitHub Copilot and LlamaIndex and enable verbose logging
More commands coming soon! 🎉
Contributing
We welcome contributions! Please read our Contributing Guide to get started.
License
This project is licensed under the MIT License.
File details
Details for the file vibe_llama-0.1.0.post1.tar.gz.
File metadata
- Download URL: vibe_llama-0.1.0.post1.tar.gz
- Upload date:
- Size: 39.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5ed9777500870ea71f76a21c0b2bee64df5bba214a9234253024d5b2d7c2adb7 |
| MD5 | 2c3daf976f3dac6c1716e9527f8f4552 |
| BLAKE2b-256 | 8c348c09b923b7eb5c2eebd309343dac7651c0dc036c99d8ddd8fe8db8d67b54 |
File details
Details for the file vibe_llama-0.1.0.post1-py3-none-any.whl.
File metadata
- Download URL: vibe_llama-0.1.0.post1-py3-none-any.whl
- Upload date:
- Size: 8.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 23e1cfd5952ec1a7315e01c91cca0a9638421b59c500cbb99366258702ee3b44 |
| MD5 | 9a92df6ea53d4c61453ae1de2bfc56bc |
| BLAKE2b-256 | 93b583afb23885918dd6700fb0648a2911c0d03c1370ec42c72de3c1817a9d9e |
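If you download one of these files manually, you can check it against the published digests. A minimal sketch using Python's standard hashlib (the file name in the comment is the sdist listed above; substitute whichever file and digest you downloaded):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the hex SHA256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the SHA256 published in the table above, e.g.:
# sha256_of("vibe_llama-0.1.0.post1.tar.gz") == "<SHA256 from the table>"
```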