
CLI tool to run local vision models like stabilityai/sd-turbo

Project description

Vllama

Vllama is a powerful yet simple command-line tool designed to make running vision models (like Stable Diffusion) easy for everyone. Whether you have a powerful local GPU or need to offload the heavy lifting to the cloud (Kaggle), Vllama handles it seamlessly.

It also includes autonomous data preprocessing tools to help you clean and prepare your datasets for machine learning with a single command.

Features

  • 🚀 Run Locally: Generate high-quality images on your own machine using models like stabilityai/sd-turbo.
  • ☁️ Cloud Execution: Seamlessly offload generation to Kaggle GPUs if your local hardware is limited.
  • 💬 Interactive Mode: Keep the model loaded and generate multiple images in a chat-like session for faster results.
  • 🧹 Autonomous Data Cleaning: Automatically handle missing values, encode categories, scale features, and detect outliers in your datasets.
  • 📦 Model Management: Easily download, install, and manage different vision models.
  • 🔐 Secure & Private: Your credentials and data stay with you.

Installation

Vllama is published on PyPI, so you can install it with pip install vllama, or install it directly from the source:

git clone https://github.com/ManvithGopu13/Vllama.git
cd Vllama
pip install -r requirements.txt

Quick Start

1. Generate an image locally:

vllama run stabilityai/sd-turbo --prompt "A cyberpunk city at night"

2. Generate an image on Kaggle (requires Kaggle API credentials; see the Configuration Guide below):

vllama login --service kaggle --username YOUR_USER --key YOUR_KEY
vllama run stabilityai/sd-turbo --prompt "A cat in space" --service kaggle

3. Clean a dataset:

vllama data --path my_data.csv --target price
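
The vllama data command automates standard tabular preprocessing. As an illustration of the kind of pipeline it runs (this is a minimal sketch using pandas, not Vllama's actual code), the steps are: impute missing values, one-hot encode categorical columns, scale features, and split off a test set:

```python
# Hypothetical sketch of what `vllama data --path my_data.csv --target price`
# automates: impute, encode, scale, and split. Not Vllama's real implementation.
import pandas as pd

def preprocess(df: pd.DataFrame, target: str, test_size: float = 0.2):
    X = df.drop(columns=[target])
    y = df[target]

    # Impute: median for numeric columns, most frequent value otherwise.
    for col in X.columns:
        if pd.api.types.is_numeric_dtype(X[col]):
            X[col] = X[col].fillna(X[col].median())
        else:
            X[col] = X[col].fillna(X[col].mode().iloc[0])

    # One-hot encode categorical columns, then scale everything to
    # zero mean and unit variance (guarding against zero-variance columns).
    X = pd.get_dummies(X).astype(float)
    X = (X - X.mean()) / X.std().replace(0, 1)

    # Hold out the last `test_size` fraction of rows as the test split.
    cut = int(len(X) * (1 - test_size))
    return X.iloc[:cut], X.iloc[cut:], y.iloc[:cut], y.iloc[cut:]

df = pd.DataFrame({
    "size": [50, 60, None, 80, 90],
    "city": ["A", "B", "A", None, "B"],
    "price": [100, 120, 110, 150, 170],
})
X_train, X_test, y_train, y_test = preprocess(df, target="price")
print(len(X_train), len(X_test))  # 4 1
```

With the default test size of 0.2, four of the five example rows land in the training split and one in the test split, with no missing values remaining.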

Configuration Guide (Kaggle Setup)

To use Vllama with Kaggle, you need your API credentials.

  1. Go to your Kaggle Account Settings.
  2. Scroll down to the API section.
  3. Click Create New Token to download kaggle.json.

You can then configure Vllama in two ways:

Option A: Login Command (Recommended)

Run the following command to securely store your credentials:

vllama login --service kaggle --username <your_username> --key <your_api_key>

Option B: Manual Setup

Place your kaggle.json file in the default location:

  • Windows: C:\Users\<User>\.kaggle\kaggle.json
  • Linux/Mac: ~/.kaggle/kaggle.json
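
If you want to script Option B, kaggle.json is just a small JSON document containing your username and API key. The sketch below (with placeholder credentials) writes it to the default Linux/Mac location and restricts it to owner-only access, which the Kaggle client expects:

```python
# Write kaggle.json to the default location with owner-only permissions.
# The credential values here are placeholders; use your own.
import json
import os
from pathlib import Path

creds = {"username": "YOUR_USER", "key": "YOUR_KEY"}

kaggle_dir = Path.home() / ".kaggle"
kaggle_dir.mkdir(exist_ok=True)

cred_path = kaggle_dir / "kaggle.json"
cred_path.write_text(json.dumps(creds))
os.chmod(cred_path, 0o600)  # private: readable/writable by owner only

print(cred_path)
```

The chmod step matters: the Kaggle client warns if the credentials file is readable by other users.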

Command Reference

Here is the full list of commands you can use with Vllama:

Core Commands

  • vllama run <model>: Run a model. Add --prompt for a single image, or leave it out for interactive mode. Example: vllama run stabilityai/sd-turbo
  • vllama install <model>: Download and install a model for local use. Example: vllama install stabilityai/sd-turbo
  • vllama show models: List all available models you can use. Example: vllama show models
  • vllama stop: Stop the currently running model session to free up memory. Example: vllama stop

Remote & Cloud Commands

  • vllama login: Log in to a cloud service. Options: --service, --username, --key. Example: vllama login --service kaggle
  • vllama init gpu: Initialize a GPU session on a remote service. Example: vllama init gpu --service kaggle
  • vllama logout: Log out and remove stored credentials. Example: vllama logout

Data Tools

  • vllama data: Preprocess a dataset. Options: --path, --target, --test_size, --output_dir. Example: vllama data --path data.csv

Arguments Guide

  • --prompt, -p: The text description for the image you want to generate.
  • --service, -s: The cloud service to use (currently supports kaggle).
  • --output_dir, -o: Where to save the generated images or processed data.
  • --test_size, -t: The proportion of the dataset to include in the test split (e.g., 0.2 for 20%).
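
To make the --test_size fraction concrete: a value of 0.2 on a 100-row dataset reserves 20 rows for evaluation. A small illustrative sketch of the idea (not Vllama's implementation) that shuffles row indices and carves off the test fraction:

```python
# How a test_size fraction maps to train/test row counts.
import random

def split_indices(n_rows: int, test_size: float, seed: int = 0):
    """Shuffle row indices and reserve a test_size fraction for testing."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    n_test = int(n_rows * test_size)
    return idx[n_test:], idx[:n_test]  # (train indices, test indices)

train, test = split_indices(100, test_size=0.2)
print(len(train), len(test))  # 80 20
```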

Troubleshooting

  • CUDA out of memory: Your GPU doesn't have enough VRAM. Try running with --service kaggle to use cloud GPUs.
  • 403 Forbidden (Kaggle): Invalid or expired API credentials. Run vllama logout, then vllama login with new keys.
  • Model not found: The model name is incorrect or the model is not installed. Check the spelling or run vllama install <model>.
  • ImportError: Missing dependencies. Run pip install -r requirements.txt.

Security Best Practices

  • Protect your API Keys: Never share your kaggle.json or commit it to public repositories.
  • Review Prompts: When using remote execution, avoid sending sensitive personal information in prompts.
  • Check Permissions: Ensure your output directories have appropriate permissions if working on a shared machine.
  • Report Vulnerabilities: See SECURITY.md for how to report security issues.

Future Roadmap

We are actively working on these exciting new features:

  • Model Management: Commands to list installed models and remove old ones.
  • Progress Bars: Visual indicators for downloads and long-running tasks.
  • Configuration: Save your preferences (like default model) in a config file.
  • Batch Processing: Generate hundreds of images from a list of prompts in one go.
  • Advanced Editing: Image-to-Image generation and Inpainting support.
  • Web UI: A beautiful browser-based interface for those who prefer not to use the terminal.
  • Negative Prompts: Specify what you don't want in your images (e.g., "blurry", "low quality").

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details on how to get started.

License

This project is open source.

Download files

Download the file for your platform.

Source Distribution

  • vllama-0.3.2.tar.gz (32.3 kB)

Built Distribution

  • vllama-0.3.2-py3-none-any.whl (30.9 kB)

File details

Details for the file vllama-0.3.2.tar.gz.

File metadata

  • Download URL: vllama-0.3.2.tar.gz
  • Size: 32.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

  • SHA256: f3016fffd0a98962afc88149979537fb931f10ac741874f20e4c26f0eb3e7e85
  • MD5: dfbfeca0f1f33ffaf6e11cdc6121731e
  • BLAKE2b-256: 705396857358e9f938963bda1b82c95a8031b772363c10b1fb4fe19da4ecb439

File details

Details for the file vllama-0.3.2-py3-none-any.whl.

File metadata

  • Download URL: vllama-0.3.2-py3-none-any.whl
  • Size: 30.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

  • SHA256: c28a487501384fbd9ca981217de45b7283986eb8f703028c602a4c479a879b4a
  • MD5: 29aed68d87960601df097ce350b1c07e
  • BLAKE2b-256: 0906d3926eb4cfe5415c42876ca93ba89c9846ebd173c2ded41d4e70fa745a9d
