
LLMocal: Open Source Local AI Client

LLMocal Demo

LLMocal is a professional-grade, open-source client for running large language models locally. Built on the principle that open source models deserve open source clients, LLMocal provides a complete local AI solution without vendor lock-in or proprietary restrictions.

🎯 Why Open Source? The AI ecosystem thrives when both models and clients are open. LLMocal ensures you have full control over your AI infrastructure: no subscriptions, no data collection, no proprietary dependencies.

Optimized for Apple Silicon (M1/M2/M3/M4) with cross-platform support for Linux and Windows.



🌟 Features

  • 100% Private & Offline: Your conversations never leave your machine. No APIs, no data collection.
  • High-Performance: Optimized for Apple Silicon, providing fast, streaming responses.
  • State-of-the-Art Models: Comes pre-configured with Mistral-7B-Instruct, a top-tier open-source model.
  • Easy to Use: A simple, clean, and intuitive command-line interface.
  • Customizable: Easily swap out models, adjust performance settings, and extend functionality.
  • Reproducible Setup: Uses uv for fast and reliable dependency management.
  • Cross-Platform: Works on macOS, Linux, and Windows (WSL recommended).

🚀 Quick Start

LLMocal can be used in two ways: as a pip-installable package for easy integration into your projects, or by cloning the repository for development.

Option 1: Install as a Python Package (Recommended)

Prerequisites: Python 3.11+

```bash
# Install with pip
pip install llmocal

# Or install with uv (faster)
uv add llmocal
```

Programmatic Usage

```python
import llmocal

# First-time setup - explicit model download
client = llmocal.LLMocal()
client.download_model()  # Downloads ~4.4GB model
client.setup()           # Load the model

# Chat with the AI
response = client.chat("Explain quantum computing in simple terms")
print(response)

# Or start an interactive session
client.start_interactive_chat()

# Alternative: auto-download if needed
client = llmocal.LLMocal()
client.setup(auto_download=True)  # Downloads if model doesn't exist

# Use a custom model
custom_client = llmocal.LLMocal(
    repo_id="TheBloke/CodeLlama-7B-Instruct-GGUF",
    filename="codellama-7b-instruct.Q4_K_M.gguf"
)
custom_client.download_model()  # Explicit download
custom_client.setup()           # Load the model
code_response = custom_client.chat("Write a Python function to sort a list")
print(code_response)
```

Advanced Programmatic Usage

```python
import llmocal
from llmocal import LLMocalConfig

# Advanced configuration
config = LLMocalConfig(
    n_ctx=8192,        # Larger context window
    n_threads=8,       # More CPU threads
    n_gpu_layers=35    # Use GPU acceleration (if available)
)

client = llmocal.LLMocal(config=config)
client.setup()

# Access lower-level components
engine = client.engine
model_manager = client.model_manager

# Direct model management
model_path = model_manager.get_model_path(
    "microsoft/DialoGPT-medium",
    "model.gguf"
)
```
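When raising `n_ctx`, it helps to estimate what the larger context window will cost in memory. The sketch below does a back-of-envelope KV-cache calculation using Mistral-7B-style dimensions (32 layers, 8 grouped-query KV heads of size 128, 16-bit cache entries); these numbers are illustrative assumptions, not values read from LLMocal itself:

```python
# Back-of-envelope KV-cache memory for a given n_ctx.
# Assumed Mistral-7B-style dimensions -- illustrative only.
N_LAYERS = 32        # transformer layers
N_KV_HEADS = 8       # grouped-query attention KV heads
HEAD_DIM = 128       # dimension per head
BYTES_PER_VALUE = 2  # 16-bit cache entries

def kv_cache_bytes(n_ctx: int) -> int:
    """Approximate KV-cache size in bytes for a context of n_ctx tokens."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE  # K + V
    return n_ctx * per_token

for n_ctx in (2048, 8192):
    print(f"n_ctx={n_ctx}: ~{kv_cache_bytes(n_ctx) / 2**30:.2f} GiB")
# n_ctx=2048: ~0.25 GiB
# n_ctx=8192: ~1.00 GiB
```

On this estimate, an 8192-token context adds roughly 1 GiB of RAM on top of the ~4.4 GB model weights, which is worth knowing before you also raise `n_gpu_layers`.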

Command Line Usage

After installation, you can use the llmocal command:

```bash
# Start interactive chat
llmocal chat

# Use a different model
llmocal chat --repo-id "TheBloke/Llama-2-7B-Chat-GGUF" --filename "llama-2-7b-chat.Q4_K_M.gguf"
```
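The `Q4_K_M` part of the filename is llama.cpp's quantization label: roughly 4 bits per weight, K-quant scheme, medium variant. If you script model selection, a small helper can pull that label out of a GGUF filename; this is a convenience sketch, not part of the llmocal API:

```python
import re

# Parse the llama.cpp quantization suffix (Q<bits>_<scheme>[_<size>])
# out of a GGUF filename. The naming convention comes from llama.cpp.
QUANT_RE = re.compile(r"\.(Q(\d+)_[A-Z0-9_]+)\.gguf$", re.IGNORECASE)

def quant_info(filename: str):
    """Return (quant_name, bits_per_weight), or None if no suffix is found."""
    m = QUANT_RE.search(filename)
    if not m:
        return None
    return m.group(1), int(m.group(2))

print(quant_info("llama-2-7b-chat.Q4_K_M.gguf"))           # ('Q4_K_M', 4)
print(quant_info("mistral-7b-instruct-v0.2.Q5_K_S.gguf"))  # ('Q5_K_S', 5)
```

Lower-bit quants are smaller and faster but slightly less accurate, which is why Q4_K_M is a common default for 7B models.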

Option 2: Development Setup

For development or if you want to modify the code:

Prerequisites: Git and Python 3.11+

```bash
# Clone the repository
git clone https://github.com/alexnicita/llmocal.git
cd llmocal

# Run the startup script
./scripts/start.sh
```

The first time you run start.sh, it will download the model (approx. 4.4 GB), so it may take some time.

💬 How to Use

Once the application is running, you'll be greeted by the AI assistant. Just type your questions and press Enter.

Sample Conversation

🤖 Welcome to LLMocal!

You are now chatting with an AI model running entirely on your machine.
- Model: TheBloke/Mistral-7B-Instruct-v0.2-GGUF
- Privacy: 100% offline and private. No data leaves your computer.

Type /exit or /quit to end the chat. Use /help for more commands.

You: Can you explain the concept of zero-knowledge proofs in simple terms?

AI:  Of course! Imagine you have a secret, like the password to a treasure chest, but you want to prove to a friend that you know the password without actually revealing it. A zero-knowledge proof is a cryptographic method that lets you do just that. You can convince your friend you have the secret key without them ever learning what it is. It's a fundamental concept in modern cryptography, enabling privacy and security in digital transactions.

You: Write a Python function to find the factorial of a number.

AI:  Certainly! Here is a simple and efficient Python function to calculate the factorial of a non-negative integer using recursion:

```python
def factorial(n):
    """
    Calculates the factorial of a non-negative integer.
    
    Args:
        n: The number to calculate the factorial of.
        
    Returns:
        The factorial of n.
    """
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers")
    elif n == 0:
        return 1
    else:
        return n * factorial(n - 1)

# Example usage:
print(f"The factorial of 5 is: {factorial(5)}")  # Output: 120
```

Special Commands

- `/exit` or `/quit`: Exit the chat application.
- `/help`: Display a list of available commands.
- `/model`: Show details about the currently loaded AI model.

🔧 Customization: Changing the Model

This project is designed to be model-agnostic. You can easily switch to any GGUF-compatible model from [Hugging Face](https://huggingface.co/models?search=gguf).

**To change the model, you can either:**

1.  **Use command-line arguments (easiest):**

    ```bash
    uv run python -m llmocal.cli chat --repo-id "TheBloke/Llama-2-7B-Chat-GGUF" --filename "llama-2-7b-chat.Q4_K_M.gguf"
    ```

2.  **Set environment variables:**

    ```bash
    export MODEL_REPO_ID="TheBloke/Llama-2-7B-Chat-GGUF"
    export MODEL_FILENAME="llama-2-7b-chat.Q4_K_M.gguf"
    ./scripts/start.sh
    ```

3.  **Edit the configuration:**

    Change the `DEFAULT_REPO_ID` and `DEFAULT_FILENAME` variables in `llmocal/core/config.py`.
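For reference, an environment-variable override like the one described above is typically implemented as a simple fallback chain. The snippet below is a hypothetical sketch of that pattern, not llmocal's actual `config.py`; in particular, the default filename shown is an assumption:

```python
import os

# Hypothetical sketch of the env-var override pattern the README describes.
# The repo ID matches the documented default model; the filename is assumed.
DEFAULT_REPO_ID = "TheBloke/Mistral-7B-Instruct-v0.2-GGUF"
DEFAULT_FILENAME = "mistral-7b-instruct-v0.2.Q4_K_M.gguf"

def resolve_model() -> tuple[str, str]:
    """Environment variables win over the built-in defaults."""
    repo_id = os.environ.get("MODEL_REPO_ID", DEFAULT_REPO_ID)
    filename = os.environ.get("MODEL_FILENAME", DEFAULT_FILENAME)
    return repo_id, filename
```

Because the environment is read at resolution time, `export MODEL_REPO_ID=...` before launching changes the model without touching any source files.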

🔬 Running Tests

A full suite of unit tests is included to ensure everything is working as expected. To run the tests:

```bash
uv run python -m tests.test_core
```

๐Ÿ› ๏ธ Project Structure

```
llmocal/
├── .github/workflows/ci.yml  # GitHub Actions CI/CD workflow
├── .gitignore                # Files to ignore for Git
├── LICENSE                   # MIT License
├── README.md                 # This file
├── pyproject.toml            # Project dependencies and metadata
├── scripts/
│   └── start.sh              # Easy startup script
├── llmocal/                  # Main package
│   ├── __init__.py           # Package initialization
│   ├── cli.py                # Command-line interface
│   ├── core/                 # Core functionality
│   │   ├── __init__.py       # Core module initialization
│   │   ├── config.py         # Configuration management
│   │   ├── engine.py         # AI engine and model loading
│   │   └── chat.py           # Chat interface
│   ├── models/               # Model management
│   │   ├── __init__.py       # Models module initialization
│   │   └── manager.py        # Model downloading and management
│   ├── api/                  # API server components
│   ├── ui/                   # User interface components
│   └── utils/                # Utility functions
├── tests/                    # Test suite
│   └── test_core.py          # Core functionality tests
├── docs/                     # Documentation
└── examples/                 # Usage examples
```

๐Ÿค Contributing

Contributions are welcome! Whether it's a bug fix, a new feature, or a documentation improvement, feel free to open a pull request. Make sure all tests pass before submitting.

📜 License

This project is licensed under the MIT License. See the LICENSE file for details.
