
NoLlama

A terminal-based interface for interacting with large language models (LLMs)

NoLlama is a terminal-based interface for interacting with large language models (LLMs) that you can't run locally on your laptop. Inspired by Ollama, NoLlama provides a streamlined experience for chatting with models like GPT-4o, GPT-4o-mini, Claude 3 Haiku, Mixtral, LLaMA 70B, and more, directly from your terminal.

While Ollama offers a neat interface for running local LLMs, the performance and capabilities of models small enough to run locally often fall short of these massive hosted models. NoLlama bridges this gap by letting you interact with those powerful models through a lightweight terminal UI, complete with colorful markdown rendering, multiple model choices, and efficient memory usage.

Features

  • Multiple Model Choices: Switch between various LLMs like GPT-4o, GPT-4o-mini, Mixtral, LLaMA 70B, Claude 3 Haiku, and more.
  • Neat Terminal UI: Enjoy a clean and intuitive interface for your interactions.
  • Colorful Markdown Rendering: Unlike Ollama, NoLlama supports rich text formatting in markdown.
  • Low Memory Usage: Efficient memory management makes it lightweight compared to using a browser for similar tasks.
  • Easy Model Switching: Simply type model in the chat to switch between models (see the command sketch after this list).
  • Clear Chat History: Type clear to clear the chat history.
  • Exit Prompt: Type q, quit, or exit to leave the chat.
  • Default Mode: NoLlama runs in standard mode by default—just type nollama in the terminal to start.
  • Experimental Feature: Enable live streaming of output with the --stream flag (unstable).
  • Anonymous and Private Usage: Use torsocks to route all traffic through the Tor network for privacy.
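
For orientation, the in-chat commands behave roughly like the sketch below. This is a hypothetical sketch, not NoLlama's actual source; pick_model and send_to_llm are stand-in names used only for illustration.

    # Hypothetical sketch of NoLlama's command handling (illustration only).
    history = []                            # conversation context

    while True:
        user_input = input(">>> ").strip()
        command = user_input.lower()
        if command in ("q", "quit", "exit"):
            break                           # leave the chat
        elif command == "clear":
            history = []                    # wipe the conversation history
        elif command == "model":
            pick_model()                    # stand-in: open the model menu
        else:
            send_to_llm(user_input)         # anything else is sent as a prompt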

Installation

  1. Download the Binary:

    Download the latest binary from the Releases page.

  2. Move the Binary to /usr/bin/:

    After downloading, move the binary to /usr/bin/ for easy access from anywhere in your terminal:

    sudo mv nollama /usr/bin/
    
  3. Run NoLlama:

    Start NoLlama from the terminal by simply typing:

    nollama
    

    This will start NoLlama in the default mode.

Building from Source

If you'd like to build NoLlama from source, follow these steps:

  1. Clone the Repository:

    git clone https://github.com/spignelon/nollama.git
    cd nollama
    
  2. Install Dependencies:

    Create a Python virtual environment, activate it, and install the required dependencies with pip:

    virtualenv .venv
    source .venv/bin/activate
    
    pip install -r requirements.txt
    
  3. Compile the Script (Optional):

    If you want to compile the script into a standalone executable, you can use PyInstaller:

    First, set version_check: bool = False in .venv/lib/python3.12/site-packages/g4f/debug.py (adjust the path to your Python version) to disable g4f's runtime version check.

    Then:

    pyinstaller --onefile --name=nollama --collect-all readchar nollama.py
    
  4. Move the Executable to /usr/bin/:

    After compilation, move the binary to /usr/bin/:

    sudo mv dist/nollama /usr/bin/nollama
    
  5. Run NoLlama:

    Start NoLlama by typing:

    nollama
    

Usage

  • Switch Models: Type model in the chat to choose a different LLM.

  • Clear Chat: Type clear to clear the chat history.

  • Exit: Type q, quit, or exit to leave the chat.

  • Default Mode: Run NoLlama without any flags for standard operation:

    nollama
    

Anonymous and Private Usage

For enhanced privacy and anonymity, you can use torsocks to route NoLlama's traffic through the Tor network, making your requests anonymous and far harder to trace back to you.

Step 1: Install Tor

Debian/Ubuntu:

sudo apt update
sudo apt install tor

Arch Linux:

sudo pacman -S tor

Fedora:

sudo dnf install tor

Step 2: Enable and Start Tor

After installation, you need to enable and start the Tor service:

sudo systemctl enable tor
sudo systemctl start tor

Step 3: Run NoLlama with Tor

Once Tor is running, you can use torsocks to run NoLlama anonymously:

torsocks nollama

This will ensure that all your interactions with NoLlama are routed through the Tor network, providing a layer of privacy and anonymity.
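
What torsocks does is force the process's TCP connections through Tor's SOCKS proxy, which listens on 127.0.0.1:9050 by default. For illustration, the same routing can be done by hand in Python with requests and PySocks (pip install requests[socks]); the check.torproject.org endpoint is one way to confirm traffic exits through Tor:

import requests

# Tor's SOCKS proxy listens on 127.0.0.1:9050 by default.
# "socks5h" (rather than "socks5") resolves DNS through Tor as well.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Ask the Tor Project whether this request arrived from a Tor exit node.
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=30)
print(resp.json())  # e.g. {"IsTor": true, "IP": "..."}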

Experimental Feature

  • Streaming Mode:

    NoLlama includes an experimental streaming mode that allows you to see responses as they are generated. This mode is currently unstable and may cause issues. To enable streaming, use the --stream flag:

    nollama --stream
    
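
    For context, this is the kind of incremental output g4f exposes when stream=True is passed. A minimal standalone sketch, assuming g4f's ChatCompletion API (the model name is chosen only as an example):

    import g4f

    # With stream=True, g4f yields the response as chunks instead of one string.
    response = g4f.ChatCompletion.create(
        model="gpt-4o",  # example model
        messages=[{"role": "user", "content": "Explain Tor in one sentence."}],
        stream=True,
    )

    for chunk in response:
        print(chunk, end="", flush=True)  # render tokens as they arrive
    print()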

Contribution

Contributions are welcome! If you have suggestions for new features or improvements, feel free to open an issue or submit a pull request.

Acknowledgments

  • g4f: Used for connecting to various LLMs.
  • Python Rich: Used for colorful markdown rendering and an improved terminal UI (see the example below).
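
For illustration, the kind of colorful markdown rendering Rich provides takes only a few lines:

from rich.console import Console
from rich.markdown import Markdown

console = Console()
# Rich renders headings, emphasis, and code spans with color in the terminal.
console.print(Markdown("# Hello\n\nSome **bold** text and `inline code`."))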

Disclaimer

NoLlama is not affiliated with Ollama. It is an independent project inspired by the idea of providing a neat terminal interface for interacting with language models, particularly those that are too large to run locally on typical consumer hardware or are not available for self-hosting.

License

This project is licensed under the GPL-3.0 License.
