
AlienSky: an optimized inference engine for Qwen on Apple Silicon


AlienSkyQwen

10x longer conversations. Better accuracy. On your Mac. Works with Qwen3.6 and Qwen3.5 models.

AlienSkyQwen brings enterprise-grade inference optimization to your Mac. Built on the same AlienSky technology licensed to hyperscalers and inference providers, it reduces the memory needed for conversation history by up to 16x while improving reasoning accuracy by up to +7.8 percentage points — no model changes, no quality trade-offs.

AlienSky improves reasoning accuracy on every model tested. Perplexity improves or shows near-zero impact — the optimized model predicts text as well or better than baseline.

AlienSky.ai  |  @AlienSkyAI

KV Cache Memory — 16x Reduction

Reasoning Accuracy — AlienSky Improves Every Model

Perplexity — Minimal Impact, Often Improves

Quick Start

Step 1: Install Python 3.11 or newer

Open Terminal and check your Python version:

python3 --version

If it shows 3.10 or older, install a newer version:

brew install python@3.14

Don't have Homebrew? Install it first: https://brew.sh

Step 2: Create a virtual environment

python3 -m venv ~/.aliensky
source ~/.aliensky/bin/activate

You'll see (.aliensky) appear in your terminal prompt. This means the environment is active.

Note: Every time you open a new terminal window, you need to re-activate:

source ~/.aliensky/bin/activate

Step 3: Install AlienSkyQwen

pip install alienskyqwen

This will also install mlx and mlx-lm automatically.

Step 4: Run your first model

python3 -c "
from alienskyqwen import aliensky_load
import mlx_lm

model, tokenizer = aliensky_load('mlx-community/Qwen3.5-27B-4bit')
print(mlx_lm.generate(model, tokenizer, 'Explain quantum computing in simple terms', max_tokens=500))
"

The first run will download the model (~14 GB). Subsequent runs use the cached version.

Or run as an OpenAI-compatible server

alienskyqwen-serve --model mlx-community/Qwen3.5-27B-4bit --port 8080

Then connect any chat interface (Open WebUI, chatbot-ui, or curl) to http://localhost:8080/v1.
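Any OpenAI-compatible client will work. As a minimal sketch using only the standard library, the request below targets the `/chat/completions` endpoint that OpenAI-compatible servers conventionally expose; the exact model ID and defaults follow the Quick Start above:

```python
import json
from urllib import request

def build_chat_request(prompt,
                       base_url="http://localhost:8080/v1",
                       model="mlx-community/Qwen3.5-27B-4bit",
                       max_tokens=500):
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Explain quantum computing in simple terms")
# To actually send it, the server must be running:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```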

Optimization Profiles

Profile             | KV Compression | Best For
aliensky_v1_std     | 9.85x          | General use with dense models (default)
aliensky_v1_fast    | 16x            | Maximum memory savings, long contexts
aliensky_v1_quality | 6.56x          | Peak accuracy on dense models
aliensky_v1_balance | 10.67x         | MoE models (recommended for 35B-A3B, 122B)

model, tokenizer = aliensky_load("mlx-community/Qwen3.6-35B-A3B-4bit",
                                  profile="aliensky_v1_balance")

Supported Models

Model                        | Model Size (Q4) | Recommended Profile
Qwen3.6-35B-A3B              | ~18 GB          | aliensky_v1_balance
Qwen3.5-27B                  | ~14 GB          | aliensky_v1_std
Qwen3.5-27B-Claude-Distilled | ~15 GB          | aliensky_v1_std
Qwen3.5-122B-A10B            | ~68 GB          | aliensky_v1_balance
Qwen3.5-9B                   | ~5 GB           | aliensky_v1_std
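The pattern in the table is simple: MoE models (the "-A*B" variants) get `aliensky_v1_balance`, dense models get the default. A small helper can encode it; note this lookup is purely illustrative and not part of the alienskyqwen API:

```python
# Hypothetical helper (NOT part of the alienskyqwen API): map a HuggingFace
# model ID to the recommended profile from the supported-models table.
RECOMMENDED_PROFILES = {
    "Qwen3.6-35B-A3B": "aliensky_v1_balance",    # MoE
    "Qwen3.5-122B-A10B": "aliensky_v1_balance",  # MoE
    "Qwen3.5-27B-Claude-Distilled": "aliensky_v1_std",
    "Qwen3.5-27B": "aliensky_v1_std",
    "Qwen3.5-9B": "aliensky_v1_std",
}

def recommended_profile(model_id: str) -> str:
    """Return the table's recommended profile for a model ID."""
    for name, profile in RECOMMENDED_PROFILES.items():
        if name in model_id:
            return profile
    return "aliensky_v1_std"  # package default
```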

Results at a Glance

All benchmarks run on Mac Studio M3 Ultra (512 GB). Full methodology and per-context breakdowns available in BENCHMARKS.

Accuracy

Model                             | Profile    | ARC-Challenge | MMLU (5-shot) | HellaSwag | Perplexity (C4)
Qwen3.6-35B-A3B (Q4)              | v1_balance | +0.34pp       | +0.20pp       | -0.50pp   | +2.1%
Qwen3.5-27B (Q4)                  | v1_std     | +7.76pp       | -2.00pp       | -1.90pp   | -4.3%
Qwen3.5-27B-Claude-Distilled (Q4) | v1_quality | +2.22pp       |               |           | -3.5%
Qwen3.5-27B (8-bit)               | v1_quality | +2.82pp       | -1.00pp       | -1.30pp   | +0.5%
Qwen3.5-122B-A10B (Q4)            | v1_quality | +1.28pp       | -2.20pp       | -1.45pp   | +1.7%
Qwen3.5-9B (Q4)                   | v1_std     | +1.54pp       | -2.25pp       |           |

AlienSky improves ARC-Challenge reasoning accuracy on every model tested. Perplexity improves on both 27B variants (lower is better) — the optimized model predicts text more accurately than baseline. The 8-bit and MoE models show near-zero PPL impact.

Memory Savings (KV Cache)

Context Length | Baseline (27B) | AlienSky (v1_std, 9.85x) | AlienSky (v1_fast, 16x)
32K tokens     | 2.0 GB         | 208 MB                   | 128 MB
131K tokens    | 8.0 GB         | 832 MB                   | 512 MB
262K tokens    | 16.0 GB        | 1.6 GB                   | 1.0 GB
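The table's numbers follow from straight division. The baseline of 2.0 GB at 32K tokens implies roughly 64 KiB of KV cache per token for the 27B model (a value inferred from the table, not an official spec), which a few lines can check:

```python
def kv_cache_mb(tokens: int, compression: float = 1.0,
                bytes_per_token: int = 64 * 1024) -> float:
    """KV cache size in MB. 64 KiB/token is inferred from the table's
    2.0 GB @ 32K baseline for the 27B model (an assumption, not a spec)."""
    return tokens * bytes_per_token / compression / (1024 ** 2)

baseline = kv_cache_mb(32 * 1024)          # 2048.0 MB = 2.0 GB
std      = kv_cache_mb(32 * 1024, 9.85)    # ~208 MB with v1_std
fast     = kv_cache_mb(32 * 1024, 16)      # 128.0 MB with v1_fast
```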

MoE models (35B-A3B, 122B-A10B) achieve 10.67x savings with the v1_balance profile.

Decode Speed

Context    | Baseline    | AlienSky (v1_std) | Ratio
4K tokens  | 103.7 tok/s | 88.6 tok/s        | 0.85x
8K tokens  | 99.3 tok/s  | 85.8 tok/s        | 0.86x
32K tokens | 87.1 tok/s  | 72.0 tok/s        | 0.83x

On Apple Silicon, decode speed is bottlenecked by model weight loading, not the KV cache. AlienSky's overhead is minimal in practice.
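The Ratio column is just optimized throughput divided by baseline throughput, reproducible from the table's tok/s figures:

```python
# (baseline tok/s, AlienSky v1_std tok/s) per context length, from the table.
rows = {"4K": (103.7, 88.6), "8K": (99.3, 85.8), "32K": (87.1, 72.0)}

# Ratio = optimized / baseline, rounded to two decimals.
ratios = {ctx: round(opt / base, 2) for ctx, (base, opt) in rows.items()}
```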

How It Works

When an LLM generates text, it stores the entire conversation in a structure called the KV cache. This cache grows with every token and can consume gigabytes of RAM.

AlienSky optimizes this at multiple levels:

  1. Compresses conversation memory to a fraction of its original size
  2. Computes attention directly on the compressed form — no decompression needed, up to 5x faster attention at long contexts
  3. Runs on Apple Metal via a custom GPU kernel for maximum speed

The result: 10-16x less memory, faster attention, and in many cases better answers — the optimization acts as noise reduction for the model's internal state.

What this unlocks: With 16x less memory per conversation, you can run multiple conversations in parallel, keep several models loaded simultaneously, or handle book-length contexts that would otherwise exhaust your Mac's RAM.
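AlienSky's actual compression scheme is proprietary, but the general idea of shrinking a KV cache without decompressing it can be sketched with a toy: merge groups of adjacent key/value entries into a single representative vector (here, a simple mean-pool, which is NOT AlienSky's method):

```python
# Toy illustration only: mean-pooling adjacent KV entries to shrink a cache.
# AlienSky's real scheme is proprietary; this just shows the shape of the idea.
def mean_pool_kv(kv: list[list[float]], factor: int) -> list[list[float]]:
    """Merge every `factor` consecutive KV entries into their element-wise mean."""
    pooled = []
    for i in range(0, len(kv), factor):
        chunk = kv[i:i + factor]
        pooled.append([sum(col) / len(chunk) for col in zip(*chunk)])
    return pooled

cache = [[float(t)] * 4 for t in range(32)]  # 32 tokens, 4-dim entries
small = mean_pool_kv(cache, 8)               # 8x fewer entries to store
```

A real scheme must also keep attention scores accurate on the compressed form; the pooling above only demonstrates the memory bookkeeping, not that property.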

Requirements

  • Apple Silicon Mac (M1 or newer)
  • macOS 14.0+
  • Python 3.11-3.14
  • 16 GB+ unified memory (64 GB+ recommended for 27B models)

API Reference

aliensky_load(model, profile, data_path)

Load a model with AlienSky optimization.

  • model — HuggingFace model ID or local path
  • profile — Optimization profile (default: "aliensky_v1_std")
  • data_path — Path to AlienSky data file (auto-detected by default)

Returns (model, tokenizer) for use with mlx_lm.generate() or mlx_lm.server.

alienskyqwen-serve

alienskyqwen-serve [--model MODEL] [--profile PROFILE] [--port PORT] [--host HOST]

OpenAI-compatible API server. Default model: Qwen3.5-27B-4bit. Default port: 8080.

FAQ

Does AlienSky modify the model weights? No. It only optimizes how the model stores and retrieves conversation memory.

Why does reasoning accuracy improve? The compression acts as regularization, filtering noise in the model's internal representations. This effect is strongest on reasoning benchmarks and distilled models.

Can I use this with LM Studio? Not directly. Run alienskyqwen-serve and point your chat UI to http://localhost:8080/v1.

What about Llama, Gemma, or other models? AlienSky currently supports Qwen3.5 and Qwen3.6. Additional architectures are planned.

License

Free for personal use. Commercial use requires a license. No modifications permitted. See LICENSE for full terms.

Copyright (c) 2026 AlienSky LLC.

"Qwen" is a trademark of Alibaba Group. AlienSky LLC is not affiliated with, endorsed by, or sponsored by Alibaba Group.

