Kernax
A JAX-based kernel library for Gaussian Processes with automatic differentiation, JIT compilation, and composable kernel operations.
⚠️ Project Status: Kernax is in early development. The API may change, and some features are still experimental.
Features
- Fast JIT-compiled computations using JAX's `@jit` decorator
- Automatic dimension handling for scalars, vectors, matrices, and batched operations
- NaN-aware computations for working with padded/masked data
- Composable kernels through operator overloading (`+`, `*`, `-`)
- Distinct hyperparameters per batch for efficient multi-task learning
- PyTree integration for seamless use with JAX transformations (`grad`, `vmap`, etc.)
Installation
Install from PyPI:

```bash
pip install kernax-ml
```

Or clone the repository for development:

```bash
git clone https://github.com/SimLej18/kernax-ml
cd kernax-ml
```

Requirements:

- Python >= 3.14
- JAX >= 0.8.0

Using Conda (recommended):

```bash
conda create -n kernax-ml python=3.14
conda activate kernax-ml
pip install -e .
```

Using pip:

```bash
pip install -e .
```
Quick Start
```python
import jax.numpy as jnp
from kernax import SEKernel, LinearKernel, DiagKernel, ExpKernel, BatchKernel, ARDKernel

# Create a simple Squared Exponential kernel
kernel = SEKernel(length_scale=1.0)

# Compute covariance between two points
x1 = jnp.array([1.0, 2.0])
x2 = jnp.array([1.5, 2.5])
cov = kernel(x1, x2)

# Compute covariance matrix for a set of points
X = jnp.array([[1.0], [2.0], [3.0]])
K = kernel(X, X)  # Returns a 3x3 covariance matrix

# Compose kernels using operators
composite_kernel = SEKernel(length_scale=1.0) + DiagKernel(ExpKernel(0.1))  # SE + noise

# Use BatchKernel for distinct hyperparameters per batch
base_kernel = SEKernel(length_scale=1.0)
batched_kernel = BatchKernel(base_kernel, batch_size=10, batch_in_axes=0, batch_over_inputs=True)

# Use ARDKernel for Automatic Relevance Determination
length_scales = jnp.array([1.0, 2.0, 0.5])  # Different length scale per dimension
ard_kernel = ARDKernel(SEKernel(length_scale=1.0), length_scales=length_scales)
```
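Because kernels are Equinox modules, they are PyTrees and compose directly with JAX transformations. The following is a minimal sketch (the `loss` function and the use of `eqx.filter_grad` are illustrative assumptions, not documented Kernax API) of differentiating with respect to a kernel's hyperparameters:

```python
import jax.numpy as jnp
import equinox as eqx
from kernax import SEKernel

def loss(kernel, X):
    # Scalar function of the covariance matrix; stands in for a real GP objective.
    K = kernel(X, X)
    return jnp.sum(K)

kernel = SEKernel(length_scale=1.0)
X = jnp.array([[1.0], [2.0], [3.0]])

# eqx.filter_grad differentiates w.r.t. every inexact-array leaf of `kernel`.
grads = eqx.filter_grad(loss)(kernel, X)
print(grads.length_scale)  # d(loss)/d(length_scale)
```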
Available Kernels
Base Kernels
- `SEKernel` (Squared Exponential, aka RBF or Gaussian)
  - Hyperparameters: `length_scale`
- `LinearKernel`
  - Hyperparameters: `variance_b`, `variance_v`, `offset_c`
- `MaternKernel` family: `Matern12Kernel` (ν=1/2, equivalent to Exponential), `Matern32Kernel` (ν=3/2), `Matern52Kernel` (ν=5/2)
  - Hyperparameters: `length_scale`
- `PeriodicKernel`
  - Hyperparameters: `length_scale`, `variance`, `period`
- `RationalQuadraticKernel`
  - Hyperparameters: `length_scale`, `variance`, `alpha`
- `ConstantKernel`
  - Hyperparameters: `value`
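For orientation, the snippet below instantiates each of these kernels with the hyperparameters listed above. The keyword-argument constructor style mirrors the Quick Start and is an assumption for the kernels not shown there:

```python
from kernax import (SEKernel, LinearKernel, Matern32Kernel,
                    PeriodicKernel, RationalQuadraticKernel, ConstantKernel)

se = SEKernel(length_scale=1.0)
lin = LinearKernel(variance_b=1.0, variance_v=0.5, offset_c=0.0)
mat = Matern32Kernel(length_scale=2.0)
per = PeriodicKernel(length_scale=1.0, variance=1.0, period=2.0)
rq = RationalQuadraticKernel(length_scale=1.0, variance=1.0, alpha=0.5)
const = ConstantKernel(value=3.0)
```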
Composite Kernels
- `SumKernel`: Adds two kernels (use `kernel1 + kernel2`)
- `ProductKernel`: Multiplies two kernels (use `kernel1 * kernel2`)
Wrapper Kernels
Transform or modify kernel behavior:
- `DiagKernel`: Returns the value only when the inputs are equal (creates diagonal matrices)
- `ExpKernel`: Applies the exponential to the kernel output
- `LogKernel`: Applies the logarithm to the kernel output
- `NegKernel`: Negates the kernel output (use `-kernel`)
- `BatchKernel`: Adds batch handling with distinct hyperparameters per batch
- `BlockKernel`: Constructs block covariance matrices for grouped data
- `ActiveDimsKernel`: Selects specific input dimensions before kernel computation
- `ARDKernel`: Applies Automatic Relevance Determination (a different length scale per dimension)
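Wrappers and composites combine freely. A minimal sketch, assuming the wrapper constructors take the wrapped kernel as their first argument (only `DiagKernel(ExpKernel(0.1))` appears verbatim in the Quick Start, and the `PeriodicKernel` keyword arguments are an assumption):

```python
from kernax import SEKernel, PeriodicKernel, DiagKernel, ExpKernel

se = SEKernel(length_scale=1.0)
per = PeriodicKernel(length_scale=1.0, variance=1.0, period=2.0)

sum_kernel = se + per                       # SumKernel
prod_kernel = se * per                      # ProductKernel
neg_kernel = -se                            # NegKernel
noisy_se = se + DiagKernel(ExpKernel(0.1))  # diagonal noise term, as in Quick Start
```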
Architecture
Kernax is built on Equinox, making kernels PyTorch-like modules with clean differentiation.
Each kernel uses a dual-class pattern (sketched below):

- Static class (e.g., `StaticSEKernel`): contains the JIT-compiled computation logic
- Instance class (e.g., `SEKernel`): extends `eqx.Module` and holds the hyperparameters
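A minimal sketch of this pattern, for illustration only: the real implementations live in the library and use Equinox's `filter_jit`, whereas this simplified version uses plain `jax.jit`.

```python
import jax
import jax.numpy as jnp
import equinox as eqx

class StaticSEKernel:
    """Static class: holds the pure, JIT-compiled computation logic."""

    @staticmethod
    @jax.jit
    def covariance(length_scale, x1, x2):
        sq_dist = jnp.sum((x1 - x2) ** 2)
        return jnp.exp(-0.5 * sq_dist / length_scale**2)

class SEKernel(eqx.Module):
    """Instance class: an eqx.Module whose hyperparameters are PyTree leaves."""
    length_scale: jax.Array

    def __init__(self, length_scale):
        # Automatic array conversion for hyperparameters.
        self.length_scale = jnp.asarray(length_scale)

    def __call__(self, x1, x2):
        return StaticSEKernel.covariance(self.length_scale, x1, x2)
```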
This design enables:
- Efficient JIT compilation with Equinox's `filter_jit`
- Automatic PyTree registration through `eqx.Module`
- Seamless integration with JAX transformations (`grad`, `vmap`, etc.)
- Clean hyperparameter management with automatic array conversion
See CLAUDE.md for detailed architecture documentation.
Benchmarks
Kernax is designed for performance. Preliminary benchmarks show:
- Scalar operations: ~13-15 μs per covariance computation
- Matrix operations (10k × 15k): ~674-855 ms
- Batched operations (50 batches, 100×150): ~2.35-6.37 ms
- Composite kernels: Minimal overhead compared to base kernels
See the benchmarks/ directory for detailed performance comparisons.
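The exact methodology lives in that directory; as a rough, hypothetical illustration, numbers like those above can be reproduced with a warm-up call (to exclude JIT compilation time) followed by timed, synchronized calls:

```python
import timeit
import jax.numpy as jnp
from kernax import SEKernel

kernel = SEKernel(length_scale=1.0)
X1 = jnp.linspace(0.0, 1.0, 10_000).reshape(-1, 1)
X2 = jnp.linspace(0.0, 1.0, 15_000).reshape(-1, 1)

# Warm-up triggers compilation; block_until_ready forces JAX's
# asynchronous dispatch to finish before the timer stops.
kernel(X1, X2).block_until_ready()
t = timeit.timeit(lambda: kernel(X1, X2).block_until_ready(), number=10) / 10
print(f"{t * 1e3:.1f} ms per 10k x 15k covariance matrix")
```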
Development Status
✅ Completed
- Core kernel implementations (SE, Linear, Matern, Periodic, etc.)
- Kernel composition via operators
- Automatic dimension handling
- NaN-aware computations
- Equinox Module integration
- BatchKernel wrapper for batched hyperparameters
- ARDKernel wrapper for Automatic Relevance Determination
- ActiveDimsKernel wrapper for dimension selection
🚧 In Progress / Planned
- Rewrite inheritance with StationaryKernel and IsotropicKernel base classes
- Add computation engines for special cases (diagonal-only, etc.)
- Comprehensive test suite covering all new features
- Expanded documentation and tutorials
- PyPI package distribution
- Benchmarks against other libraries (GPJax, TinyGP, etc.)
Contributing
This project is in early development. Contributions, bug reports, and feature requests are welcome!
Related Projects
Kernax is developed alongside MagmaClust, a clustering and Gaussian Process library.
License
MIT License - see LICENSE file for details.
Citation
[Citation information to be added]