
Hybrid LLM runtime — minimal VRAM, always-on GPU prefill, optimised CPU inference

Project description

Krasis

Rust + PyO3 runtime for large mixture-of-experts (MoE) LLMs. Runs 350B+ parameter models on commodity hardware with full GPU prefill and efficient CPU decode.

You can contact me here, but please don't ask for help getting Krasis working. If a model or a particular hardware config doesn't work, try to narrow the problem down and then report an issue.

Krasis runs MoE LLMs fast on consumer-level hardware

Krasis can run MoE language models that are far too large to fit on a consumer GPU (multi-hundred-gigabyte models with 100-500+ billion parameters) on consumer or affordable server hardware you can actually buy without a second mortgage and your own personal power station.

Crucially, it runs these models at a usable speed.

Qwen3-Coder-Next / 856 tok/s prefill / 10.5 tok/s decode

For example, running Qwen3-Coder-Next (80B params, 146 GB BF16) on a single-CPU Epyc server (7742) with 2x Ada 2000 16GB, Krasis achieves 856 tokens/sec prefill and 10.5 tokens/sec decode.

How LLMs work

LLM operation consists of two key steps:

  1. Prefill (handling potentially large amounts of input coming into the model)
  2. Decode (handling the generation of text after processing the input data)

These are essentially the LLM reading (prefill) and writing (decode).

Prefill is best handled by GPUs (large amounts of highly parallel matrix multiplication), but on typical LLM runtimes it's not possible to do more than offload a small part of the large model onto the GPU.

The result is that you enter a simple chat prompt and it responds in a reasonable time, but if you hand it a file to read or try to work with it in an IDE, you wait minutes for it to even start generating text.

Krasis employs a different approach that utilises the GPU and system RAM more heavily, which results in much faster prefill. In practice this means the model generates text at a similar speed (faster in some cases, thanks to other optimisations), but you wait much less time for an answer and the model can read files much more quickly.
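
To make the split concrete, here is a toy Python sketch (purely illustrative, not Krasis's internals): prefill is one big batched pass over the whole prompt, while decode is a strictly sequential loop producing one token per step.

# Toy illustration of prefill vs decode (dummy model, not Krasis code)
def forward(tokens, cache=None):
    """Stand-in for a model forward pass: returns fake logits and an updated cache."""
    cache = (cache or []) + list(tokens)
    return [float((len(cache) * 7 + v) % 5) for v in range(3)], cache

def generate(prompt_tokens, max_new_tokens):
    # Prefill: one batched, highly parallel pass over the entire prompt (GPU-friendly)
    logits, cache = forward(prompt_tokens)
    out = []
    # Decode: one token at a time, each step depending on the previous one
    for _ in range(max_new_tokens):
        token = logits.index(max(logits))        # greedy "sampling" over a 3-token toy vocab
        out.append(token)
        logits, cache = forward([token], cache)
    return out

print(generate([1, 2, 3], 5))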

Krasis tradeoffs

In order to achieve these speeds, Krasis has a few requirements.

  • Krasis uses more system RAM than other runtimes: you may need roughly 2x the model weights' worth of system RAM (so a 100 GB model may need 200 GB of RAM), but this is almost always far easier to obtain than the equivalent VRAM. See the sizing sketch after this list.
  • Krasis must be given the BF16 safetensors model downloaded from [Hugging Face](https://huggingface.co/).
  • Krasis can build everything it needs from this model, or you can give it a second GGUF model (in addition to the BF16 safetensors model) to take advantage of more advanced quantisation (e.g. unsloth Q4_K models).
  • Krasis currently only works with NVIDIA GPUs.
  • Krasis may take some time on the first run as it does a lot of one-off work to optimise everything; major parts of this are cached, so later runs start much faster.
  • Krasis optimises models and caches the results in .krasis; these caches can be large, so you may need 3x the original model's disk space, or 4x if you provide a GGUF in addition to the BF16.
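
As a rough guide, this sketch applies the rules of thumb above (2x the BF16 size in RAM; 3x or 4x on disk). The numbers are estimates only and depend on the model and configuration:

# Rough sizing from the rules of thumb above (estimates, not exact requirements)
def estimate_requirements(bf16_size_gb, extra_gguf=False):
    ram_gb = 2 * bf16_size_gb                           # ~2x weights in system RAM
    disk_gb = (4 if extra_gguf else 3) * bf16_size_gb   # original model plus .krasis caches
    return ram_gb, disk_gb

print(estimate_requirements(148))        # Qwen3-Coder-Next: (296, 444)
print(estimate_requirements(438, True))  # Qwen3-235B-A22B with a GGUF: (876, 1752)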

Known Supported Models and Benchmark Speeds

Speeds reported for the following models were benchmarked on this hardware:

  • Epyc 7742
  • DDR4 2666 RAM (8x channels)
  • 2x RTX Ada 2000
| Model | Params | BF16 Size | Experts | Attention | Prefill | Decode |
| --- | --- | --- | --- | --- | --- | --- |
| Qwen3-Coder-Next | 80B | 148 GB | 512 routed, top-10 | Hybrid (36 linear + 12 GQA) | 812 tok/s | 10.5 tok/s |
| Qwen3-235B-A22B | 235B | 438 GB | 128 routed, top-8 | GQA | 198 tok/s | 1.65 tok/s |
| DeepSeek V2-Lite | 16B | 29 GB | 64 + 2 shared, top-6 | MLA | 2,400 tok/s | 5.8 tok/s |
| GLM-4.7 | 358B | 667 GB | 160 + 1 shared, top-8 | GQA (partial RoPE, bias) | untested | untested |

Quick Start

Install

# Install pipx if you don't have it
sudo apt install pipx   # Ubuntu/Debian
# or: pip install --user pipx

# Install Krasis
pipx install krasis
pipx ensurepath        # adds ~/.local/bin to PATH (restart terminal or source ~/.bashrc)

# Run setup — installs CUDA toolkit, PyTorch, FlashInfer, ninja
# (will prompt for your password when installing system packages)
krasis-setup

Download a model

# Install huggingface-cli if you don't have it
pip install huggingface-hub

# Download a model into ~/.krasis/models/
huggingface-cli download Qwen/Qwen3-Coder-Next \
    --local-dir ~/.krasis/models/Qwen3-Coder-Next

Run

krasis

That's it. The launcher walks you through model selection and configuration. First run takes longer as Krasis builds optimised weight caches.

WSL (Windows Subsystem for Linux)

Krasis works on WSL2. By default WSL only uses 50% of your system RAM, which is usually not enough for large models. Create or edit C:\Users\<YourUsername>\.wslconfig:

[wsl2]
memory=120GB

Adjust the value to leave ~8 GB for Windows. Then restart WSL from PowerShell:

wsl --shutdown

Then follow the install steps above inside WSL.

Alternative: pip in a venv

python3 -m venv ~/.krasis-env && source ~/.krasis-env/bin/activate
pip install krasis
krasis-setup

Alternative: from source

git clone https://github.com/brontoguana/krasis.git
cd krasis
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
krasis-setup
./krasis

Usage

Interactive Launcher

krasis

The launcher walks you through a TUI with four screens:

  1. Model selection — scans ~/.krasis/models/ for safetensors models, shows architecture, layer count, expert count, and estimated RAM
  2. CPU expert source — build INT4 or INT8 from the native model, or select an existing GGUF file
  3. GPU selection — multi-select your GPUs (Space to toggle, Enter to confirm)
  4. Configuration editor — tune all quantization and runtime options with a live VRAM budget display showing per-GPU memory usage and estimated context length

All settings are saved to ~/.krasis/config and reloaded on subsequent launches.

On the final screen you can choose to launch immediately or run a benchmark first.

Non-Interactive Launch

# Use saved config from last TUI session
krasis --non-interactive

# Override specific settings
krasis --non-interactive --model-path /path/to/model --num-gpus 2 --benchmark

Benchmark Suite

Run all model × config combinations automatically from a single config file. Edit benchmarks/benchmark_suite.toml to define which models and hardware configurations to test:

[[config]]
num_gpus = 1
gpu_expert_bits = 4
cpu_expert_bits = 4

[[config]]
num_gpus = 2
gpu_expert_bits = 4
cpu_expert_bits = 4

[[model]]
name = "DeepSeek-V2-Lite"

[[model]]
name = "Qwen3-235B-A22B"
gguf_name = "Qwen3-235B-A22B-GGUF"   # searched in ~/.krasis/models/ subdirs

Model name is the directory name under ~/.krasis/models/. Use gguf_name to pair a native model with a GGUF for CPU experts (filename searched in models dir), or gguf_path for an absolute path. Config fields include num_gpus, gpu_expert_bits, cpu_expert_bits, attention_quant, kv_dtype, and more — see the config file comments for the full list.

Run the suite:

krasis --benchmark-suite                           # uses benchmarks/benchmark_suite.toml
krasis --benchmark-suite /path/to/custom.toml      # custom config

Each combination runs as an isolated subprocess. Per-combo logs are saved to benchmarks/suite_logs/ and a markdown summary table is generated at the end.

For launcher flags, per-component quantization options, and direct server usage, see ADVANCED.md.

Chat Client

krasis-chat                          # auto-discovers running servers
krasis-chat --port 8012              # connect to specific port
krasis-chat --url http://host:8012   # connect to remote server
krasis-chat --temperature 0.3        # override sampling temperature

The chat client auto-discovers running Krasis servers via ~/.krasis/servers/. Commands: /new (clear history), /system PROMPT (change system prompt), /exit.

API

The server exposes an OpenAI-compatible API at http://localhost:8012/v1/chat/completions with SSE streaming, compatible with Cursor, OpenCode, and any OpenAI SDK client.
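
For example, any OpenAI SDK client can point at the server. A minimal sketch with the Python SDK (the model name below is illustrative; use whatever GET /v1/models reports):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8012/v1", api_key="unused")  # local server, no real key needed

stream = client.chat.completions.create(
    model="Qwen3-Coder-Next",   # illustrative; list actual names via GET /v1/models
    messages=[{"role": "user", "content": "Summarise what a mixture-of-experts model is."}],
    stream=True,                # SSE streaming
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)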

Additional endpoints:

  • GET /health — server status
  • GET /v1/models — list loaded models
  • POST /v1/timing — toggle instrumentation at runtime
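
A quick way to sanity-check a running server from Python (standard library only; the /v1/models response is assumed to follow the usual OpenAI list shape):

import json, urllib.request

base = "http://localhost:8012"
print(urllib.request.urlopen(f"{base}/health").read().decode())    # server status
models = json.load(urllib.request.urlopen(f"{base}/v1/models"))    # loaded models
print([m.get("id") for m in models.get("data", [])])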

License

AGPL-3.0


Download files

Download the file for your platform.

Source Distribution

  • krasis-0.1.28.tar.gz (575.5 kB, Source)

Built Distributions

  • krasis-0.1.28-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.5 MB, CPython 3.13, manylinux: glibc 2.17+ x86-64)
  • krasis-0.1.28-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.5 MB, CPython 3.12, manylinux: glibc 2.17+ x86-64)
  • krasis-0.1.28-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.5 MB, CPython 3.11, manylinux: glibc 2.17+ x86-64)
  • krasis-0.1.28-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.5 MB, CPython 3.10, manylinux: glibc 2.17+ x86-64)

File details

Details for the file krasis-0.1.28.tar.gz.

File metadata

  • Download URL: krasis-0.1.28.tar.gz
  • Upload date:
  • Size: 575.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for krasis-0.1.28.tar.gz
Algorithm Hash digest
SHA256 fa102946ea09647d852eeebaaa0bab79df8169398927e58b2db829a15d6c5347
MD5 dc4b4ef642f85fda012d013f6a41f682
BLAKE2b-256 751498c9b475018dc5a9d477b3d60160345f9f499c101e4b1488b05865a358da


File details

Details for the file krasis-0.1.28-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for krasis-0.1.28-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 248182e54347bcea3ed94853f6f5c03c5c1978ab6e857a23890be6505f65a4c8
MD5 829277a4421754f39204978662922995
BLAKE2b-256 8a1a6442da1e87d51ab3bf92fe2e57c0aab69c11cff1cfdfe69daa9faea14c53


File details

Details for the file krasis-0.1.28-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for krasis-0.1.28-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 8a2c4c8eddc7f3ee0b57cb55c7bd4a1fb417d35e874f35055f719ffa0e910135
MD5 0ab14888ac4dfa6bb3e7df6fa5a70f50
BLAKE2b-256 e4767a2526c3abe3b452f8842d67e02610ef1743ce43a94fc669e9e943414f1f


File details

Details for the file krasis-0.1.28-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for krasis-0.1.28-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 e52ce1981b3621c62ac972cc329246e3f4c4ef8e16d7644ba39c97f75f21dafc
MD5 2189567ad1ec94463ff8449a2df40e36
BLAKE2b-256 8531cab7ed230a20fd7ebcd5ad6355a1ff2a97349fdc6f7abd5191d470ae00cf


File details

Details for the file krasis-0.1.28-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for krasis-0.1.28-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 fdd896dea2c29690a4b1bab8d455f3963be780bc8b67469da3323bb165d19c65
MD5 18c1e8588b9faef279bce1d3fcf37dfc
BLAKE2b-256 070925debd05863b9176310afe8e578d47955bb2dffc69a90995d07c7756e0ee

