🌟 Llama Assistant 🌟

Your Local AI Assistant with Llama Models


A simple AI-powered assistant to help you with your daily tasks, powered by Llama 3.2. It can recognize your voice, process natural language, and perform various actions based on your commands: summarizing text, rephrasing sentences, answering questions, writing emails, and more.

This assistant can run offline on your local machine, and it respects your privacy by not sending any data to external servers.

[Screenshot]

TODO

  • Support other text models: Llama 3.x.
  • Support multimodal models: LLaVA, Llama 3.2 + Vision.
  • Add offline STT support: WhisperCPP.
  • Add wake word detection: "Hey Llama!".
  • Knowledge database.
  • Video interaction support.
  • Plugin system for extensibility.

Features

  • 🎙️ Voice recognition for hands-free interaction
  • 💬 Natural language processing with Llama 3.2
  • 🖼️ Image analysis capabilities (TODO)
  • ⚡ Global hotkey for quick access (Cmd+Shift+Space on macOS)
  • 🎨 Customizable UI with adjustable transparency

Note: This project is a work in progress, and new features are being added regularly.

Technologies Used

  • Python
  • Llama
  • SpeechRecognition
  • PyQt

Installation

Install from PyPI:

pip install llama-assistant
pip install pyaudio

Or install from source:

  1. Clone the repository:

    git clone https://github.com/vietanhdev/llama-assistant.git
    cd llama-assistant
    
  2. Install the required dependencies:

    pip install -r requirements.txt
    pip install pyaudio
    

Speed Hack for Apple Silicon (M1, M2, M3) users: 🔥🔥🔥

  • Install the Xcode command line tools:

    # check the path of your Xcode installation
    xcode-select -p
    
    # if Xcode is installed, this prints something like:
    # /Applications/Xcode-beta.app/Contents/Developer
    
    # if Xcode is missing, install it (this can take a while):
    xcode-select --install
  • Build llama-cpp-python with Metal support:

    pip uninstall llama-cpp-python -y
    CMAKE_ARGS="-DGGML_METAL=on" pip install -U llama-cpp-python --no-cache-dir
    pip install 'llama-cpp-python[server]'
    
    # you should now have llama-cpp-python v0.1.62 or newer installed, e.g.:
    # llama-cpp-python         0.1.68

Usage

Run the assistant using the following command:

llama-assistant

# Or run it as a Python module:
python -m llama_assistant.main

Use the global hotkey (default: Cmd+Shift+Space) to quickly access the assistant from anywhere on your system.

Configuration

The assistant's settings can be customized by editing the settings.json file located in your home directory: ~/llama_assistant/settings.json.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
