
MaxText is a simple, performant and scalable Jax LLM!


MaxText


NOTE: We recommend running MaxText with Python 3.12, as it is our primary supported version. Other Python versions may encounter compatibility issues.

MaxText is a high performance, highly scalable, open-source LLM library and reference implementation written in pure Python/JAX and targeting Google Cloud TPUs and GPUs for training.

MaxText provides a library of high performance models to choose from, including Gemma, Llama, DeepSeek, Qwen, and Mistral. For each of these models, MaxText supports pre-training (up to tens of thousands of chips) and scalable post-training with popular techniques such as Supervised Fine-Tuning (SFT) and reinforcement-learning methods like Group Relative Policy Optimization (GRPO) and Group Sequence Policy Optimization (GSPO).

MaxText achieves high Model FLOPs Utilization (MFU) and tokens/second from single host to very large clusters while staying simple and largely "optimization-free" thanks to the power of JAX and the XLA compiler.
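As a back-of-the-envelope illustration of what MFU measures (all numbers below are hypothetical, not measured MaxText results, and the 6·n_params FLOPs-per-token rule of thumb assumes a dense decoder with forward plus backward pass):

```python
def mfu(tokens_per_s, flops_per_token, peak_flops_per_s):
    """Model FLOPs Utilization: achieved model FLOPs/s over peak hardware FLOPs/s."""
    return tokens_per_s * flops_per_token / peak_flops_per_s

# Rough rule of thumb: ~6 * n_params FLOPs per trained token for a dense
# decoder (forward + backward). Model size, throughput, and peak hardware
# FLOPs/s below are made-up illustrative values.
n_params = 8e9
flops_per_token = 6 * n_params
print(round(mfu(9_000, flops_per_token, 900e12), 3))  # → 0.48
```

A higher tokens/second at the same model size and hardware peak translates directly into a higher MFU.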

MaxText is the launching point for ambitious LLM projects both in research and production. We encourage you to start by experimenting with MaxText out of the box and then fork and modify MaxText to meet your needs.

Check out our Read The Docs site or directly Get Started with your first MaxText run. If you’re interested in Diffusion models (Wan 2.1, Flux, etc), see the MaxDiffusion repository in our AI Hypercomputer GitHub organization.

Installation

See our installation guide to install MaxText with pip from PyPI.

Decoupled mode

See the Decoupled Mode Guide for running MaxText in decoupled mode, without any GCP dependencies.

🔥 Latest news 🔥

  • [March 6, 2026] New features from DeepSeek-AI are now supported: Conditional Memory via Scalable Lookup (Engram) and Manifold-Constrained Hyper-Connections (mHC). Try them out with our deepseek-custom starter config.
  • [March 5, 2026] New tpu-post-train target on PyPI. Use this installation option for running vllm_decode as well. See the MaxText installation instructions for more info.
  • [March 5, 2026] Qwen3-Next is now supported.
  • [February 27, 2026] New MaxText structure! MaxText has been restructured according to RESTRUCTURE.md. Please feel free to share your thoughts and feedback.
  • [December 22, 2025] Muon optimizer is now supported.
  • [December 10, 2025] DeepSeek V3.1 is now supported. Use the existing DeepSeek V3 671B configs and load a V3.1 checkpoint to use the model.
  • [December 9, 2025] New RL and SFT Notebook tutorials are available.
  • [December 4, 2025] The ReadTheDocs documentation site has been reorganized.
  • [December 3, 2025] Multi-host support for GSPO and GRPO is now available via new RL tutorials.
  • [November 20, 2025] A new guide, What is Post Training in MaxText?, is now available.
  • [November 6, 2025] Ironwood TPU co-designed AI stack announced. Read the blog post on its co-design with MaxText.
  • [October 29, 2025] Optimized models tiering documentation has been refreshed.
  • [October 12, 2025] Added Versioning. Check out our first set of release notes!
  • [October 10, 2025] Post-Training (SFT, RL) via Tunix is now available.
  • [September 26, 2025] Vocabulary tiling (PR) is now supported in MaxText! Adjust config num_vocab_tiling to unlock more efficient memory usage.
  • [September 24, 2025] The GPT-OSS family of models (20B, 120B) is now supported.
  • [September 15, 2025] MaxText is now available as a PyPI package. Users can now install maxtext through pip.
  • [September 5, 2025] MaxText has moved to an src layout as part of RESTRUCTURE.md. For existing environments, please run pip install -e . from MaxText root.
  • [August 13, 2025] The Qwen3 2507 MoE family of models is now supported: MoEs: 235B Thinking & 480B Coder as well as existing dense models: 0.6B, 4B, 8B, 14B, and 32B.
  • [July 27, 2025] Updated the TFLOPS/s calculation (PR) to account for causal attention, halving the attention FLOPs. Also accounted for the reduced attention FLOPs of sliding window and chunked attention in PR and PR. These changes impact large sequence configs, as explained in this doc.
  • [July 16, 2025] We will be restructuring the MaxText repository for improved organization and clarity. Please review the proposed structure and provide feedback.
  • [July 11, 2025] Multi-Token Prediction (MTP) training support! Adds an auxiliary loss based on predicting multiple future tokens, inspired by DeepSeek-V3 paper, to enhance training efficiency.
  • [June 25, 2025] DeepSeek R1-0528 variant is now supported.
  • [April 24, 2025] Llama 4 Maverick models are now supported.
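The causal-attention adjustment in the July 27, 2025 news item above can be sketched as follows (an illustrative formula, not MaxText's actual accounting code; the 4·b·s²·h·d count covers only the two attention matmuls):

```python
def attention_matmul_flops(batch, seq_len, heads, head_dim, causal=False):
    # QK^T and (attention weights) @ V each cost 2*batch*seq_len^2*heads*head_dim
    # multiply-accumulate FLOPs per layer, hence the factor of 4 in total.
    full = 4 * batch * seq_len**2 * heads * head_dim
    # Under a causal mask only ~half of the score matrix is ever computed,
    # so the counted FLOPs are divided in half.
    return full // 2 if causal else full

full = attention_matmul_flops(1, 4096, 32, 128)
causal = attention_matmul_flops(1, 4096, 32, 128, causal=True)
print(full // causal)  # → 2
```

Because attention FLOPs grow quadratically in sequence length, this halving matters most for large-sequence configs, as the news item notes.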

Use cases

MaxText provides a library of models and demonstrates how to perform pre-training or post-training with high performance and scale.

MaxText leverages JAX AI libraries and presents a cohesive and comprehensive demonstration of training at scale by using Flax (neural networks), Tunix (post-training), Orbax (checkpointing), Optax (optimization), and Grain (dataloading).

In addition to pure text-based LLMs, we also support multi-modal training with Gemma 3 and Llama 4 VLMs.

Pre-training

If you’re building models from scratch, MaxText can serve as a reference implementation for experimentation, ideation, and inspiration - just fork and modify MaxText to train your model, whether it’s a small dense model like Llama 8B, or a large MoE like DeepSeek-V3. Experiment with configs and model design to build the most efficient model on TPU or GPU.

MaxText provides opinionated implementations for how to achieve optimal performance across a wide variety of dimensions like sharding, quantization, and checkpointing.
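As a tiny illustration of the sharding dimension mentioned above, here is a hedged sketch using JAX's named-sharding API with a hypothetical one-axis "data" mesh (MaxText's actual sharding configs are more elaborate):

```python
import jax
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Lay out all local devices on a 1-D mesh axis named "data" and shard the
# batch dimension of an array across that axis.
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))
sharding = NamedSharding(mesh, P("data"))
batch = jax.device_put(np.zeros((8, 128)), sharding)
print(batch.shape)  # logical shape is unchanged: (8, 128)
```

The XLA compiler then handles the per-device computation and communication, which is what lets MaxText stay largely "optimization-free".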

Post-training

If you are post-training a model, whether it is proprietary or open source, MaxText provides a scalable framework using Tunix. For RL (like GRPO), we leverage vLLM for sampling and Pathways (soon) for multi-host.
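The "group relative" part of GRPO can be illustrated with a small NumPy sketch (a simplified formula, not Tunix's implementation): rewards for a group of sampled completions of the same prompt are normalized against each other.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    # Each completion's advantage is its reward standardized within the
    # group sampled for the same prompt (mean-centered, std-normalized).
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

print(np.round(grpo_advantages([1.0, 2.0, 3.0]), 3).tolist())  # → [-1.225, 0.0, 1.225]
```

Completions that beat the group average get positive advantages and are reinforced; below-average ones are penalized, with no separate value network required.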

Our goal is to provide a variety of models (dimension “a”) and techniques (dimension “b”), so you can easily explore (a) * (b) combinations and efficiently train the perfect model for your use case.

Check out our getting started guides.

Model library

MaxText aims to provide you with the best OSS models, whether as a reference implementation, or to post-train and then serve with vLLM.

Supported JAX models in MaxText

  • Google
    • Gemma 3 (4B, 12B, 27B)
    • Gemma 2 (2B, 9B, 27B)
    • Gemma 1 (2B, 7B)
  • Alibaba
    • Qwen 2.5 (7B, 14B)
    • Qwen 3 MoE 2507 (235B, 480B)
    • Qwen 3 MoE (30B, 235B)
    • Qwen 3 Dense (0.6B, 1.7B, 4B, 8B, 14B, 32B)
  • DeepSeek
    • DeepSeek V3.1 (671B)
    • DeepSeek V3 0324 (671B) & DeepSeek R1 0528 (671B)
    • DeepSeek V2 (16B, 236B)
  • Meta
    • Llama 4 Scout (109B) & Maverick (400B)
    • Llama 3.3 70B, 3.1 (8B, 70B, 405B), 3.0 (8B, 70B, 405B)
    • Llama 2 (7B, 13B, 70B)
  • OpenAI
    • GPT-OSS (20B, 120B)
    • GPT3 (52K, 6B, 22B, 175B)
  • Mistral
    • Mixtral (8x7B, 8x22B)
    • Mistral (7B)
  • Diffusion models: see the MaxDiffusion repository

Get involved

Please join our Discord Channel, and if you have feedback, you can file a feature request, documentation request, or bug report here.

License

Apache License 2.0

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution


maxtext-0.2.1-py3-none-any.whl (10.2 MB), uploaded for Python 3

File details

Details for the file maxtext-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: maxtext-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 10.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for maxtext-0.2.1-py3-none-any.whl:

  • SHA256: f2bcd4a84975f7376b97b3c64e1c25c16e98ad2318fc869716a61cda4e97e740
  • MD5: 5f031c0a9ae76b17790620b6ec7b299f
  • BLAKE2b-256: f64afe04656450d019871a804f50382a541f6cd380ef2c8382e3ae86d4b44bad
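To check a downloaded wheel against the published SHA256 digest, a small stdlib-only helper suffices (the file path below is hypothetical; the expected digest is the one listed above):

```python
import hashlib

EXPECTED_SHA256 = "f2bcd4a84975f7376b97b3c64e1c25c16e98ad2318fc869716a61cda4e97e740"

def sha256_of(path, chunk=1 << 20):
    """Stream the file in 1 MiB chunks so large wheels never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

# Example usage against a locally downloaded wheel:
# assert sha256_of("maxtext-0.2.1-py3-none-any.whl") == EXPECTED_SHA256
```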


Provenance

The following attestation bundles were made for maxtext-0.2.1-py3-none-any.whl:

Publisher: pypi_release.yml on AI-Hypercomputer/maxtext

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
