GBA Model Toolkit for MLX

Introduction

Welcome to the GreenBitAI (GBA) Model Toolkit for MLX! This Python package converts GreenBitAI's low-bit large language models (LLMs) to an MLX-compatible format and provides scripts for generation, model loading, and other essential tasks for GBA quantized models. It is designed to ease the integration and deployment of GBA models within the MLX ecosystem, enabling efficient execution on a variety of platforms, with particular focus on Apple devices for local inference and natural language content generation.

Installation

To get started with this package, simply run:

pip install gbx-lm

or clone the repository and install the required dependencies (for Python >= 3.9):

git clone https://github.com/GreenBitAI/gbx-lm.git
pip install -r requirements.txt

Alternatively, you can use the prepared conda environment configuration:

conda env create -f environment.yml
conda activate gbai_mlx_lm

Usage

Generating Content

To generate natural language content using a converted model:

python -m gbx_lm.generate --model <path to a converted model or a Hugging Face repo name>

# Example
python -m gbx_lm.generate --model GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx  --max-tokens 100 --prompt "calculate 4*8+1024=" --eos-token '<|im_end|>'
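
Besides the command line, you can drive generation from Python. The snippet below is a minimal sketch that assumes gbx_lm exposes load and generate helpers analogous to mlx-lm's Python API; check the package source for the exact signatures.

# Hypothetical Python usage, mirroring the CLI example above
from gbx_lm import load, generate

# Download (or reuse from the local Hugging Face cache) and load a converted model
model, tokenizer = load("GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx")

# Generate a short completion for the same prompt as the CLI example
text = generate(model, tokenizer, prompt="calculate 4*8+1024=", max_tokens=100)
print(text)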

Managing Local Models

You can use the following commands to explore and delete local models stored in the Hugging Face cache.

# List local models
python -m gbx_lm.manage --scan

# Specify a `--pattern`:
python -m gbx_lm.manage --scan --pattern GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx

# To delete a model
python -m gbx_lm.manage --delete --pattern GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx
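
Because the manage script works on the standard Hugging Face cache, you can also inspect that cache programmatically with the huggingface_hub library. A small sketch (the GreenBitAI/ filter is only an illustration):

# Scan the Hugging Face cache, similar to `gbx_lm.manage --scan`
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
for repo in cache_info.repos:
    if repo.repo_id.startswith("GreenBitAI/"):
        print(f"{repo.repo_id}: {repo.size_on_disk / 1e9:.2f} GB")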

FastAPI Model Server

A high-performance HTTP API for text generation with GreenBitAI's MLX models. Improvements over the original mlx-lm/server.py:

  • Concurrent Processing: Handles multiple requests simultaneously
  • Enhanced Performance: Faster response times and better resource utilization
  • Robust Validation: Automatic request validation and error handling
  • Interactive Docs: Built-in Swagger UI for easy testing

Quick Start

  1. Run:
    python -m gbx_lm.fastapi_server --model GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-4.0-mlx
    
  2. Use:
    # Chat
    curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" \
      -d '{"model": "default_model", "messages": [{"role": "user", "content": "Hello!"}]}'
    
    # Chat stream
    curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" \
      -d '{"model": "default_model", "messages": [{"role": "user", "content": "Hello!"}], "stream": true}'
    

Features

  • Chat and text completion endpoints
  • Streaming responses
  • Customizable generation parameters
  • Support for custom models and adapters

For API details, visit http://localhost:8000/docs after starting the server.

Note: Not recommended for production without additional security measures.

Converting Models

To convert one of GreenBitAI's low-bit LLMs to the MLX format, run:

python -m gbx_lm.gba2mlx --hf-path <input file path or a Hugging Face repo> --mlx-path <output file path> --hf-token <your huggingface token> --upload-repo <a Hugging Face repo name>

# Example
python -m gbx_lm.gba2mlx --hf-path GreenBitAI/yi-6b-chat-w4a16g128 --mlx-path yi-6b-chat-w4a16g128-mlx/ --hf-token <your huggingface token> --upload-repo GreenBitAI/yi-6b-chat-w4a16g128-mlx
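
After conversion, a quick way to sanity-check the result is to inspect the files written to --mlx-path. A small sketch, assuming the converter writes a config.json and safetensors weight shards into the output directory, as MLX conversions typically do:

# Inspect the converted model directory from the example above
import json
from pathlib import Path

mlx_path = Path("yi-6b-chat-w4a16g128-mlx")

config = json.loads((mlx_path / "config.json").read_text())
print("model_type:", config.get("model_type"))

shards = sorted(mlx_path.glob("*.safetensors"))
print(f"{len(shards)} weight shard(s):", [s.name for s in shards])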

Requirements

  • Python >= 3.9
  • See requirements.txt or environment.yml for a complete list of dependencies

Web Demo

We also provide a demo for deploying chat applications with FastChat and Gradio. By following its instructions, you can quickly build a local chat demo page.

License

The original code was released under its respective license and copyrights.

Download files

Download the file for your platform.

Source Distribution

gbx-lm-0.3.3.tar.gz (98.7 kB)

Built Distribution

gbx_lm-0.3.3-py3-none-any.whl (115.5 kB)

File details

Details for the file gbx-lm-0.3.3.tar.gz.

File metadata

  • Download URL: gbx-lm-0.3.3.tar.gz
  • Upload date:
  • Size: 98.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.18

File hashes

Hashes for gbx-lm-0.3.3.tar.gz

  • SHA256: 471e61693cc30fd5457aff8df7bbce3bd155e065d6f57094fca96b342da3f3c0
  • MD5: 0a1bfa2c3230ae0683053e665a804876
  • BLAKE2b-256: 3e3b3fd47e998202c05bf9d8ce8feba9d02fee29132102562b1fc4cec9bc5d6c

File details

Details for the file gbx_lm-0.3.3-py3-none-any.whl.

File metadata

  • Download URL: gbx_lm-0.3.3-py3-none-any.whl
  • Upload date:
  • Size: 115.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.18

File hashes

Hashes for gbx_lm-0.3.3-py3-none-any.whl

  • SHA256: c4f51ded16c3c965873d18096efa439402f5c55867db131e06c1760aa1acf8f6
  • MD5: 6c0f78e509d5cc9d65191685093c7c4f
  • BLAKE2b-256: 9f606ff3b407dde4adf1947e37e00b829921f1521f481db71efd2abc9b6226b9
