blox

A functional and lightweight neural network library for JAX.

blox is released under the MIT license and requires Python 3.11+ and JAX 0.8+.


blox unlocks the full potential of JAX by embracing its functional nature instead of fighting it.

Most JAX neural network libraries try to force object-oriented paradigms onto JAX to make it feel like PyTorch, usually by introducing implicit global state, hidden contexts, or clever magic that seems helpful at first but eventually results in unnecessary cognitive overhead and a steep learning curve.

blox takes the opposite approach. Instead of hiding JAX's functional approach, it leans into it, building a minimal abstraction layer on top. By stripping away the "magic", blox ensures explicit data flow and keeps your code transparent, free of side effects, and trivially compatible with JAX's powerful transformations.

🎯 Who is blox for?

blox is mainly designed for:

  • Students who want to learn JAX without having to learn a ton of additional abstractions. With blox, what you see is what you get: pure functions, explicit state, and no hidden magic. This makes it an excellent educational tool for understanding how neural networks actually work at the JAX level.

  • Practitioners who want full control over the execution stack. If you're tired of fighting frameworks that hide important details, blox gives you complete transparency while still providing the conveniences you need for building high-performance models.

⚡ Core Principles & Features

  • Native JAX compatibility: Works with all JAX transformations, including jax.jit, jax.grad, jax.vmap, jax.shard_map, jax.checkpoint, and others. No special wrappers or decorators are required.
  • Functional purity: Models are stateless transformations. Parameters are explicit arguments, never hidden in self or global registries.
  • Explicit data flow: Every function returns (outputs, params), making data dependencies crystal clear and eliminating side effects. You can trace the path of every single tensor just by reading the function signature.
  • Lazy initialization: Define your model structure abstractly, then run a single forward pass to materialize parameters automatically.
  • Structural RNG keys: Randomness is handled as part of the Params structure. Getting a new random key simply returns an updated Params object, ensuring deterministic reproducibility without the boilerplate of manually threading keys.
  • Interactive inspection: Debugging is easier when you can see your model. blox integrates with Treescope to let you interactively inspect your model's architecture, hierarchy, and parameter shapes.

📦 Installation

Since blox uses JAX, check out the JAX installation instructions for your specific hardware (CPU/GPU/TPU).

You will need Python 3.11 or later. Install blox from PyPI:

pip install jax-blox
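For example, on a machine with CUDA 12 you might install JAX's GPU build first (check the JAX installation docs for the current command, as the extras change between releases):

pip install -U "jax[cuda12]"
pip install jax-blox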

🚀 Quick Start

In blox, a module is just a structural container (__init__) and a set of pure mathematical functions (like __call__).

Define your layers

Notice the signature: params carries the state (weights + RNG), while inputs is your data.

import jax
import jax.numpy as jnp
import blox as bx

class CustomLinear(bx.Module):

  def __init__(
      self,
      graph: bx.Graph,
      output_size: int,
      rng: bx.Rng,
  ) -> None:
    super().__init__(graph)
    self.output_size = output_size
    self.rng = rng

  def __call__(
      self,
      params: bx.Params,
      inputs: jax.Array,
  ) -> tuple[jax.Array, bx.Params]:
    # Param initialization is lazy, which serves two important purposes:
    # 1. Avoids the need to specify input dimensions at construction.
    # 2. Prevents accidental allocation of params on device.
    kernel, params = self.get_param(
        params=params,
        name='kernel',
        shape=(inputs.shape[-1], self.output_size),
        init=jax.nn.initializers.glorot_uniform(),
        rng=self.rng,
    )
    bias, params = self.get_param(
        params=params,
        name='bias',
        shape=(self.output_size,),
        init=jax.nn.initializers.zeros,
        rng=self.rng,
    )
    return inputs @ kernel + bias, params
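
As a quick sanity check, the layer above can be exercised on its own. This is a minimal sketch that reuses only the API shown elsewhere in this README (Graph, Rng, Params, and the module call):

graph = bx.Graph('demo')
rng = bx.Rng(graph.child('rng'))
layer = CustomLinear(graph.child('dense'), output_size=4, rng=rng)

params = rng.seed(bx.Params(), seed=0)  # Empty container with a seeded Rng.
outputs, params = layer(params, jnp.ones((2, 3)))  # Lazy init happens here.
assert outputs.shape == (2, 4)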

Composition & Dependency Injection

Because blox modules are standard Python objects, composing them via dependency injection is intuitive.

Instead of hardcoding layers, you can inject them. The injected modules keep their original position in the hierarchy, while internal layers become children.

class CustomMLP(bx.Module):

  def __init__(
      self,
      graph: bx.Graph,
      hidden_size: int,
      # We can inject externally created modules...
      output_projection: bx.Module,
      rng: bx.Rng,
  ) -> None:
    super().__init__(graph)
    # ... or create new ones internally.
    self.hidden_proj = CustomLinear(graph.child('hidden'), hidden_size, rng=rng)
    self.output_projection = output_projection

  def __call__(
      self,
      params: bx.Params,
      inputs: jax.Array,
  ) -> tuple[jax.Array, bx.Params]:
    # Chain the functional transformations.
    hidden, params = self.hidden_proj(params, inputs)
    hidden = jax.nn.relu(hidden)
    return self.output_projection(params, hidden)

Initialization & Inspection

We cleanly separate the "Initialization phase" (traversing the graph to create parameters) from the "Runtime phase" (updating trainable and non-trainable parameters).

# Define the structure for wiring modules.
graph = bx.Graph('net')
rng = bx.Rng(graph.child('rng'))

# Create the output layer explicitly and use it to create our CustomMLP.
output_projection = CustomLinear(graph.child('linear'), output_size=1, rng=rng)
model = CustomMLP(
    graph.child('mlp'),
    hidden_size=32,
    output_projection=output_projection,
    rng=rng,
)

# Create dummy input data to infer shapes.
inputs = jnp.ones((1, 10))

# Initialize the parameters.
params = bx.Params()  # Create empty container to hold the full model state.
params = rng.seed(params, seed=42)  # Initialize the Rng's state.

# Run a forward pass to trigger lazy initialization and populate Params.
unused_outputs, params = model(params, inputs)

# Finalize to prevent accidentally adding new parameters later and to make the
# params.initialized property available for controlling execution flow.
params = params.finalized()

# Visualize the full graph and parameter structure.
bx.display(graph, params)

Output: Notice how linear and mlp are siblings in the graph, while hidden is nested inside mlp. The output_projection in mlp.__init__ shows a reference to linear's constructor.

net: Graph # Param: 387 (1.5 KB)
  linear=CustomLinear # Param: 33 (132 B)
    __init__=CustomLinear(output_size=1)
    kernel=Param[T](shape=(32, 1), dtype=float32, value=≈-0.048 ±0.21)
    bias=Param[T](shape=(1,), dtype=float32, value=0.0)
  mlp=CustomMLP # Param: 352 (1.4 KB)
    __init__=CustomMLP(hidden_size=32, output_projection=CustomLinear(output_size=1))
    hidden=CustomLinear # Param: 352 (1.4 KB)
      __init__=CustomLinear(output_size=32)
      kernel=Param[T](shape=(10, 32), dtype=float32, value=≈-0.0016 ±0.22)
      bias=Param[T](shape=(32,), dtype=float32, value=0.0)
  rng=Rng # Param: 2 (12 B)
    __init__=Rng()
    seed=Param[N](shape=(), dtype=key, metadata={'tag': 'rng_seed'})
    counter=Param[N](shape=(), dtype=uint32, metadata={'tag': 'rng_counter'}, value=2)

⚡ JIT Compilation

Since blox modules are pure functions with no hidden state, they work directly with jax.jit:

# Just wrap and call - no special decorators needed.
outputs, params = jax.jit(model)(params, inputs)

🎯 Training

The Params container holds everything: weights, RNG state, batch norm statistics, exponential moving averages (EMA), and more.

When training, we usually want to differentiate w.r.t. trainable parameters, such as weights, but still update non-trainable parameters like the RNG state. blox makes this partitioning explicit and simple.

@jax.jit(donate_argnames='params')
def train_step(params, inputs, targets):
  # Split params into two sets:
  # Trainable: weights, biases (we want gradients for these).
  # Non-trainable: Rng, batch stats, EMA (we just want the updated values).
  trainable, non_trainable = params.split()

  def loss_fn(t, nt):
    # Merge parameters to run the forward pass.
    predictions, new_params = model(t.merge(nt), inputs)

    # Calculate the loss.
    loss = jnp.mean((predictions - targets) ** 2)

    # Extract the updated non-trainable state to pass it out.
    _, new_non_trainable = new_params.split()
    return loss, new_non_trainable

  # Calculate gradients and capture the auxiliary state (non_trainable updates).
  grads, new_non_trainable = jax.grad(loss_fn, has_aux=True)(
      trainable, non_trainable
  )

  # Update the trainable weights using SGD.
  new_trainable = jax.tree.map(lambda w, g: w - 0.01 * g, trainable, grads)

  # Merge the updated weights with the updated non-trainable state.
  return new_trainable.merge(new_non_trainable)
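
A minimal training loop sketch using the step above (reusing the model, params, and inputs from the Quick Start; the targets here are dummy data):

targets = jnp.zeros((1, 1))
for step in range(100):
  params = train_step(params, inputs, targets)

Because params is donated to the compiled step, its buffers are reused in place; reassigning the returned value each iteration is required.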

🔀 Batching & Parallel RNG (vmap & shard_map)

JAX's jit handles RNG splitting automatically. However, when using explicit parallelization like jax.vmap or jax.shard_map, you want distinct behavior on each device or batch element (e.g. unique dropout masks or params per shard).

If you simply passed the same params (and thus the same RNG state) to every device, they would all produce identical random numbers. blox solves this by automatically "folding in" axes when using bx.Rng(..., auto_fold_in_axes=True). This keeps the base RNG state replicated (identical across devices) but folds in the device/batch index to generate unique keys per device.

# By default, Rng automatically folds in vmap/shard_map axes.
rng = bx.Rng(graph.child('rng'), auto_fold_in_axes=True)

# `dropout` stands for any module built with this rng that draws randomness,
# e.g. a dropout layer.
def apply_model(params, inputs):
  # No manual folding needed! Rng detects the 'batch' axis from vmap.
  outputs, params = dropout(params, inputs, is_training=True)
  return outputs, params

# Note that params (including the Rng state) are replicated across the batch.
batched_outputs, params = jax.vmap(
    apply_model,
    in_axes=(None, 0),
    out_axes=(0, None),
    axis_name='batch',
)(params, inputs)

Manual RNG Control

For full control over how RNG keys are generated (e.g. specialized folding or key manipulation), you can disable automatic folding.

# Disable automatic folding for manual control.
rng = bx.Rng(graph.child('rng'), auto_fold_in_axes=False)

def apply_model(params, inputs):
  # Manually get and manipulate the seed.
  original_seed = rng.get_seed(params)
  folded_seed = jax.random.fold_in(original_seed, jax.lax.axis_index('batch'))
  
  # Update params with the new seed.
  params = rng.seed(params, seed=folded_seed)
  
  # Forward pass uses the manual seed.
  outputs, params = model(params, inputs)
  
  # Restore the original seed before returning to keep state replicated.
  return outputs, rng.seed(params, seed=original_seed)
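
Note that jax.lax.axis_index('batch') only resolves when a named axis 'batch' is in scope, so apply_model must run under a transformation that provides one. A minimal sketch, mirroring the vmap call shown earlier:

outputs, params = jax.vmap(
    apply_model,
    in_axes=(None, 0),
    out_axes=(0, None),
    axis_name='batch',
)(params, inputs)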

📈 Scaling Up

To run models that don't fit on a single device, parameters must be created directly on their target devices. This requires specifying how parameters are sharded. You can do this manually by inspecting the parameter structure, or automatically by baking sharding metadata into the model definition.

Below we show how to specify sharding metadata per module and extract it without any FLOPs using jax.eval_shape.

from jax.sharding import NamedSharding, PartitionSpec as P

graph = bx.Graph('net')
rng = bx.Rng(graph.child('rng'))
linear = bx.Linear(
    graph.child('linear'),
    output_size=1024,
    rng=rng,
    kernel_metadata={'sharding': (None, 'model')},
    bias_metadata={'sharding': ('model',)},
)

# Define an initialization function.
def init(x):
  params = rng.seed(bx.Params(), seed=42)
  _, params = linear(params, x)
  return params.finalized()

# Abstract evaluation to get the Params structure (no memory allocation).
inputs = jnp.ones((4, 4))
abstract_params = jax.eval_shape(init, inputs)

# Create the sharding specification from metadata.
mesh = jax.make_mesh((4,), ('model',))

params_sharding = jax.tree.map(
    lambda p: NamedSharding(mesh, P(*p.sharding)),
    abstract_params,
    is_leaf=lambda x: isinstance(x, bx.Param)
)

# JIT-compile the init function with out_shardings.
# Params are created directly on the correct devices, with no memory overhead.
sharded_init = jax.jit(init, out_shardings=params_sharding)
sharded_params = sharded_init(inputs)

@jax.jit(in_shardings=(params_sharding, None), donate_argnames='params')
def forward(params, x):
  return linear(params, x)

out, new_params = forward(sharded_params, inputs)
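
As a quick check that placement worked, JAX can render the device layout of any 1D or 2D array. This uses plain jax.debug, not a blox API:

# `out` has shape (4, 1024) and should be split along the 'model' axis.
jax.debug.visualize_array_sharding(out)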

🔄 Recurrence & Scanning

blox provides bx.SequenceBase for general sequence models: a step-wise __call__ processes a single timestep, while apply processes a whole sequence. bx.RecurrenceBase is a subclass of bx.SequenceBase whose apply, by default, iteratively applies __call__ over the time axis. In simple terms, use bx.SequenceBase to implement a Transformer and bx.RecurrenceBase to implement a standard LSTM.

lstm = bx.LSTM(graph.child('lstm'), hidden_size=128, rng=rng)

# Initialize the LSTM state.
state, params = lstm.initial_state(params, inputs)

# Run an efficient compiled scan over a sequence of shape [Batch, Time, Features],
# e.g. inputs_sequence = jnp.ones((1, 20, 10)). Carry propagation is automatic.
(outputs, final_state), params = lstm.apply(
    params, inputs_sequence, prev_state=state
)

🧠 Under the Hood

blox is transparent by design. The abstraction is really just automated path handling to keep your code clean and your state pure.

  • The Graph: A lightweight object representing a location in the hierarchy (e.g. net -> mlp -> dense1). graph.child('name') appends to the path, ensuring every module has a unique address space (see the sketch after this list).
  • The Params: A flat, immutable dictionary holding all state, keyed by tuple paths (e.g. ('net', 'mlp', 'dense1', 'kernel')). It supports simple partitioning for gradients or custom metadata.
  • The Rng: The Rng's seed and counter live inside Params. When a module requests randomness, the Rng derives a unique, deterministic key from that state and returns an updated Params with the counter incremented. Modules can use a custom Rng module to decouple their randomness from the rest of the model; see Dropout as an example.
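
To make the path handling concrete, here is a small sketch using only Graph and child; the tuple paths in the comments follow the keying convention described above:

graph = bx.Graph('net')
dense1 = graph.child('mlp').child('dense1')
# A module constructed with `dense1` stores its parameters under tuple paths
# prefixed by ('net', 'mlp', 'dense1'), e.g. ('net', 'mlp', 'dense1', 'kernel').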

⚠️ Gotchas

Because blox is functional, methods on Params return new instances rather than mutating in place. You must always reassign:

# ✗ Wrong - result is discarded.
params.finalized()

# ✓ Correct - reassign to capture the new instance.
params = params.finalized()

The same applies to Rng accessor methods like seed():

# ✗ Wrong - params is not updated.
rng.seed(params, counter=0)

# ✓ Correct - capture the returned params.
params = rng.seed(params, counter=0)

Random keys generated under jax.vmap or jax.shard_map will be identical for each batch element/device unless you use auto_fold_in_axes=True (the default) on your Rng module. You must also pass axis_name to every vmap that wraps code drawing new random keys from an Rng. See the Batching & Parallel RNG section for more information.

⚖️ Why blox?

blox chooses clarity over brevity.

Most frameworks rely on implicit global state or thread-local contexts to hide parameters and RNG keys. While this makes simple scripts shorter, it creates a "black box" that is hard to debug and even harder to customize.

OOP-style Wrappers                 blox
out = layer(x)                     outputs, params = layer(params, inputs)
Implicit global state              Explicit state passing
Opaque variable scopes             Explicit bx.Graph paths
Custom vmap / jit / grad wrappers  Standard jax.vmap / jax.jit / jax.grad

By accepting slightly more verbose function signatures, you gain:

  1. Total transparency: You know exactly what data your function touches.
  2. JIT safety: No global state means no side-effect leaks or tracer errors.
  3. Maximum performance: Zero-overhead abstractions.

📄 License

MIT License. See LICENSE for details.
