GBA Model Toolkit for MLX

Introduction

Welcome to the GreenBitAI (GBA) Model Toolkit for MLX! This Python package converts GreenBitAI's low-bit large language models (LLMs) to an MLX-compatible format and provides generation, model-loading, and other essential scripts tailored for GBA quantized models. Designed to ease the integration and deployment of GBA models within the MLX ecosystem, the toolkit enables efficient execution on a variety of platforms, with special optimizations for Apple devices for local inference and natural language generation.

Installation

To get started with this package, simply run:

pip install gbx-lm

Optional dependencies: gbx-lm supports various optional features that can be installed as needed:

# Install with LangChain integration
pip install gbx-lm[langchain]

# Install with support for MLX-LM models in FastAPI server
pip install gbx-lm[mlx-lm]

# Install with development tools (testing)
pip install gbx-lm[dev]

# Install all optional dependencies
pip install gbx-lm[all]

Each extension provides specific functionality:

  • langchain: Integration with LangChain for building AI applications
  • mlx-lm: Support for loading and serving MLX-LM community models
  • dev: Development and testing utilities
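After installing, you can check which optional extras are actually usable by probing for their modules. A minimal standard-library sketch; the module names checked here (`langchain_core`, `mlx_lm`) are assumptions about what each extra pulls in:

```python
import importlib.util

def extras_available(extras):
    """Map each extra's name to whether all of its modules can be imported."""
    return {
        name: all(importlib.util.find_spec(mod) is not None for mod in mods)
        for name, mods in extras.items()
    }

# The module names below are assumptions about what each extra installs.
print(extras_available({
    "langchain": ["langchain_core"],
    "mlx-lm": ["mlx_lm"],
}))
```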

Or clone the repository and install the required dependencies (for Python >= 3.9):

git clone https://github.com/GreenBitAI/gbx-lm.git
cd gbx-lm

via the requirements.txt file:

pip install -r requirements.txt

or via setup.py:

# Basic editable installation
pip install -e . -v

# Install editable mode plus specific optional dependencies
pip install -e ".[langchain]" -v
pip install -e ".[mlx-lm]" -v
pip install -e ".[dev]" -v

# Install all optional dependencies
pip install -e ".[all]" -v

Alternatively, you can use the prepared conda environment configuration:

conda env create -f environment.yml
conda activate gbai_mlx_lm

Usage

Generating Content

To generate natural language content using a converted model:

  • Example using terminal:
python -m gbx_lm.generate --model GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx --max-tokens 100 --prompt "calculate 4*8+1024="
  • Example code integration:
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0-mlx")

prompt = "What is the capital of France?"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
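The `apply_chat_template` call above wraps the raw prompt in the model's conversation format. Conceptually it behaves like the following simplified sketch (the real implementation renders a model-specific Jinja template and, by default, returns token ids rather than text; the role markers here are purely illustrative):

```python
def to_chat_prompt(messages, add_generation_prompt=True):
    # Simplified stand-in for tokenizer.apply_chat_template; the
    # <|role|> markers are illustrative, not any model's actual tokens.
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        parts.append("<|assistant|>\n")
    return "\n".join(parts)

print(to_chat_prompt([{"role": "user", "content": "What is the capital of France?"}]))
```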

Interactive Chat

python -m gbx_lm.chat --model GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0-mlx --max-tokens 100

Managing Local Models

You can use the following scripts to explore and delete local models stored in the Hugging Face cache.

# List local models
python -m gbx_lm.manage --scan

# Specify a `--pattern`:
python -m gbx_lm.manage --scan --pattern GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx

# To delete a model
python -m gbx_lm.manage --delete --pattern GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx
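The `manage` script operates on the Hugging Face cache on disk. For illustration, cached models can also be listed with only the standard library; this sketch assumes the documented cache layout in which each repo lives under a `models--<org>--<name>` directory:

```python
from pathlib import Path

def list_cached_models(cache_dir=None):
    """Return repo ids found in the Hugging Face hub cache directory."""
    cache = Path(cache_dir or Path.home() / ".cache" / "huggingface" / "hub")
    if not cache.is_dir():
        return []
    return sorted(
        # "models--GreenBitAI--demo" -> "GreenBitAI/demo"
        p.name.removeprefix("models--").replace("--", "/")
        for p in cache.iterdir()
        if p.is_dir() and p.name.startswith("models--")
    )

print(list_cached_models())
```

This only reads directory names; for deletion or size reporting, the `manage` script above (or the `huggingface_hub` library's cache utilities) is the safer route.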

FastAPI Model Server

A high-performance HTTP API for text generation with GreenBitAI's MLX models. Improvements over the original mlx-lm server.py:

  • Concurrent Processing: Handles multiple requests simultaneously
  • Enhanced Performance: Faster response times and better resource utilization
  • Robust Validation: Automatic request validation and error handling
  • Interactive Docs: Built-in Swagger UI for easy testing

Quick Start

  1. Run:
    python -m gbx_lm.fastapi_server --model GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx
    
  2. Use:
    # Chat
    curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" \
      -d '{"model": "default_model", "messages": [{"role": "user", "content": "Hello!"}]}'
    
    # Chat stream
    curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" \
      -d '{"model": "default_model", "messages": [{"role": "user", "content": "Hello!"}], "stream": true}'
    
  3. To enable support for MLX-LM community models:
    pip install gbx-lm[mlx-lm]
    
    Then you can use models from the mlx-community organization:
    python -m gbx_lm.fastapi_server --model mlx-community/Qwen3-4B-4bit
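The curl calls above can equally be made from Python. A minimal standard-library client sketch; the response shape (`choices[0].message.content`) is an assumption that the server follows the OpenAI chat-completions schema it mirrors:

```python
import json
import urllib.request

def build_chat_payload(messages, model="default_model", stream=False):
    # Mirrors the JSON bodies used in the curl examples.
    return {"model": model, "messages": messages, "stream": stream}

def chat(messages, base_url="http://localhost:8000", **kwargs):
    data = json.dumps(build_chat_payload(messages, **kwargs)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed OpenAI-style response schema.
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# chat([{"role": "user", "content": "Hello!"}])  # requires a running server
```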
    

Features

  • Chat and text completion endpoints
  • Streaming responses
  • Customizable generation parameters
  • Support for custom models and adapters

For API details, visit http://localhost:8000/docs after starting the server.

Note: Not recommended for production without additional security measures.

Converting Models

To convert one of GreenBitAI's low-bit LLMs to the MLX format, run:

python -m gbx_lm.gba2mlx --hf-path <input file path or a Hugging Face repo> --mlx-path <output file path> --hf-token <your huggingface token> --upload-repo <a Hugging Face repo name>

# Example
python -m gbx_lm.gba2mlx --hf-path GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0 --mlx-path Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx/ --hf-token <your huggingface token> --upload-repo GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx

Evaluating Models

To evaluate a model, run:

python -m gbx_lm.evaluate \
    --model gbx_model \
    --tasks winogrande boolq arc_challenge arc_easy hellaswag openbookqa piqa social_iqa

Requirements

  • Python >= 3.9
  • See setup.py for a complete list of dependencies

License

The original code was released under its respective license and copyrights.
