torchcurves
A PyTorch module for differentiable parametric curves with learnable coefficients, such as a B-Spline curve with learnable control points.
Fully differentiable curve implementations that integrate seamlessly with PyTorch's autograd system. It streamlines use cases such as continuous numerical embeddings for embedding-based models (e.g. factorization machines [6] or transformers [2,3]), Kolmogorov-Arnold networks [1], or path planning in robotics.
Docs
- Documentation site.
- Example notebooks for you to try out.
Features
- Fully Differentiable: Custom autograd function ensures gradients flow properly through the curve evaluation.
- Batch Processing: Vectorized operations for efficient batch and multi-curve evaluation.
- Efficient numerics: Clenshaw recursion for polynomials, Cox-de Boor recursion for splines.
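To illustrate the idea behind Clenshaw's recursion, here is a minimal scalar sketch for evaluating a Legendre series $\sum_k c_k P_k(x)$. This is an illustration only, not the library's internal implementation:

```python
# Minimal scalar sketch of Clenshaw's recursion for a Legendre series
# sum_k c_k * P_k(x); illustration only, not the library's internals.
def clenshaw_legendre(coeffs, x):
    b1, b2 = 0.0, 0.0  # b_{k+1} and b_{k+2} of the backward recursion
    for k in range(len(coeffs) - 1, 0, -1):
        alpha = (2 * k + 1) / (k + 1) * x  # recurrence coefficient of P_{k+1}
        beta_next = -(k + 1) / (k + 2)     # beta_{k+1} of the recurrence
        b1, b2 = coeffs[k] + alpha * b1 + beta_next * b2, b1
    # S = c_0 * P_0 + x * b_1 + beta_1 * b_2, with P_0 = 1 and beta_1 = -1/2
    return coeffs[0] + x * b1 - 0.5 * b2

# P_2(x) = (3x^2 - 1) / 2, so with coefficients [0, 0, 1]:
print(clenshaw_legendre([0.0, 0.0, 1.0], 0.5))  # -0.125
```

Compared to evaluating each basis polynomial separately, Clenshaw's scheme needs a single backward sweep over the coefficients, which is both faster and numerically more stable.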
Installation
pip install torchcurves
uv add torchcurves
Use cases
There are examples in the examples directory showing how to build models using this library. Below are some simple code snippets that give a feel for its API.
Use case 1 - continuous embeddings
import torch
import torchcurves as tc
from torch import nn

class Net(nn.Module):
    def __init__(self, num_categorical, num_numerical, dim, num_knots=10):
        super().__init__()
        self.cat_emb = nn.Embedding(num_categorical, dim)
        self.num_emb = tc.BSplineCurve(num_numerical, dim, knots_config=num_knots)
        self.model_that_requires_embeddings = MySuperDuperModel()

    def forward(self, x_categorical, x_numerical):
        # concatenate categorical and numerical embeddings along the feature axis
        embeddings = torch.cat([self.cat_emb(x_categorical), self.num_emb(x_numerical)], dim=-2)
        return self.model_that_requires_embeddings(embeddings)
Use case 2 - monotone functions
Splines are monotone if their coefficient vectors are monotone. Want an increasing function? Just make sure the coefficients are increasing!
Here is a small example of a model for the probability of winning an auction, which has to be an increasing function of the bid, using a simple idea:
- An auction encoder encodes the auction into some vector $v$
- We transform $v$ into an increasing vector $c$
- The output is a spline function of the bid with coefficient vector $c$
import torch
from torch import nn
import torchcurves.functional as tcf

class AuctionWinModel(nn.Module):
    def __init__(self, num_auction_features, num_bid_coefficients):
        super().__init__()
        self.auction_encoder = make_auction_encoder(  # example - an MLP, a transformer, etc.
            input_features=num_auction_features,
            output_features=num_bid_coefficients,
        )
        self.spline_knots = nn.Buffer(tcf.uniform_augmented_knots(
            n_control_points=num_bid_coefficients,
            degree=3,
            k_min=0,
            k_max=1,
        ))

    def forward(self, auction_features, bids):
        # map auction features to increasing spline coefficients
        spline_coeffs = self._make_increasing(self.auction_encoder(auction_features))
        # map bids to [0, 1] using the arctan (or any other) normalization
        mapped_bid = tcf.arctan(bids)
        # evaluate the spline at the mapped bids, treating each
        # mini-batch sample as a separate curve
        return tcf.bspline_curves(
            mapped_bid.unsqueeze(0),      # 1 x B (B curves in 1 dimension)
            spline_coeffs.unsqueeze(-1),  # B x C x 1 (B curves with C coefs in 1 dimension)
            self.spline_knots,
            degree=3,
        )

    def _make_increasing(self, x):
        # transform a mini-batch of vectors to a mini-batch of increasing vectors
        initial = x[..., :1]
        increments = nn.functional.softplus(x[..., 1:])
        concatenated = torch.concat((initial, increments), dim=-1)
        return torch.cumsum(concatenated, dim=-1)
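As a standalone sanity check in plain PyTorch, independent of the model above, the softplus-plus-cumsum transform always yields strictly increasing vectors, since every increment after the first element is strictly positive:

```python
import torch
import torch.nn.functional as F

def make_increasing(x):
    # keep the first element, map the rest to strictly positive increments,
    # then cumulative-sum so the output increases along the last dimension
    initial = x[..., :1]
    increments = F.softplus(x[..., 1:])
    return torch.cumsum(torch.cat((initial, increments), dim=-1), dim=-1)

v = make_increasing(torch.randn(4, 8))
assert (v.diff(dim=-1) > 0).all()  # every row is strictly increasing
```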
Now we can train the model to predict the probability of winning auctions given auction features and bid:
import torch.nn.functional as F

for auction_features, bids, win_labels in train_loader:
    win_logits = model(auction_features, bids)
    loss = F.binary_cross_entropy_with_logits(  # or any loss we desire
        win_logits,
        win_labels,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Use case 3 - Kolmogorov-Arnold networks
A KAN [1] based on the B-Spline basis, along the lines of the original paper:
import torchcurves as tc
from torch import nn
input_dim = 2
intermediate_dim = 5
num_control_points = 10
kan = nn.Sequential(
    # layer 1
    tc.BSplineCurve(input_dim, intermediate_dim, knots_config=num_control_points),
    tc.Sum(dim=-2),
    # layer 2
    tc.BSplineCurve(intermediate_dim, intermediate_dim, knots_config=num_control_points),
    tc.Sum(dim=-2),
    # layer 3
    tc.BSplineCurve(intermediate_dim, 1, knots_config=num_control_points),
    tc.Sum(dim=-2),
)
Yes, we know the original KAN paper used a different curve parametrization (B-Spline + arcsinh), but the whole point of this repo is to show that KAN activations can be parametrized in arbitrary ways.
For example, here is a KAN based on Legendre polynomials of degree 5:
import torchcurves as tc
from torch import nn
input_dim = 2
intermediate_dim = 5
degree = 5
kan = nn.Sequential(
    # layer 1
    tc.LegendreCurve(input_dim, intermediate_dim, degree=degree),
    tc.Sum(dim=-2),
    # layer 2
    tc.LegendreCurve(intermediate_dim, intermediate_dim, degree=degree),
    tc.Sum(dim=-2),
    # layer 3
    tc.LegendreCurve(intermediate_dim, 1, degree=degree),
    tc.Sum(dim=-2),
)
Since KANs are the primary use case for the tc.Sum() layer, the dim=-2 argument can be omitted; it is provided here for clarity.
Advanced features
The curves provided here require their inputs to lie in a compact interval, typically [-1, 1]. Arbitrary inputs therefore need to be normalized to this interval. We provide several simple out-of-the-box normalization strategies, described below.
Rational scaling
This is the default strategy. It computes
$$x \mapsto \frac{x}{\sqrt{s^2 + x^2}},$$
and is based on the paper
Wang, Z.Q. and Guo, B.Y., 2004. Modified Legendre rational spectral method for the whole line. Journal of Computational Mathematics, pp.457-474.
In Python it looks like this:
tc.BSplineCurve(curve_dim, normalization_fn='rational', normalization_scale=s)
Arctan scaling
This strategy computes
$$x \mapsto \frac{2}{\pi} \arctan(x / s).$$
This kind of scaling function, up to constants, is the CDF of the Cauchy distribution. It is useful when our inputs are assumed to be heavy tailed.
In Python it looks like this:
tc.BSplineCurve(curve_dim, normalization_fn='arctan', normalization_scale=s)
Clamping
The inputs are simply clipped to $[-1, 1]$ after scaling, i.e.
$$x \mapsto \max(\min(1, x / s), -1).$$
In Python it looks like this:
tc.BSplineCurve(curve_dim, normalization_fn='clamp', normalization_scale=s)
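The three strategies above can be sketched in plain PyTorch as follows (illustration only; the library applies the chosen strategy internally):

```python
import math
import torch

s = 2.0  # normalization scale
x = torch.tensor([-10.0, 0.0, 0.5, 10.0])

rational = x / torch.sqrt(s**2 + x**2)         # x -> x / sqrt(s^2 + x^2)
arctan = (2 / math.pi) * torch.atan(x / s)     # x -> (2/pi) * arctan(x / s)
clamp = torch.clamp(x / s, min=-1.0, max=1.0)  # x -> max(min(1, x/s), -1)

# all three map into [-1, 1]; rational and arctan do so smoothly,
# while clamping saturates exactly at the boundary
for y in (rational, arctan, clamp):
    assert (y >= -1).all() and (y <= 1).all()
assert clamp[-1] == 1.0
```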
Custom normalization
Provide a custom function that maps its input to the designated range after scaling. Example:
import torch
from torch import Tensor

def erf_clamp(x: Tensor, scale: float = 1, out_min: float = -1, out_max: float = 1) -> Tensor:
    mapped = torch.special.erf(x / scale)
    return ((mapped + 1) * (out_max - out_min)) / 2 + out_min

tc.BSplineCurve(curve_dim, normalization_fn=erf_clamp, normalization_scale=s)
Example: B-Spline KAN with clamping
A KAN based on a clamped B-Spline basis with the default scale of $s=1$:
spline_kan = nn.Sequential(
    # layer 1
    tc.BSplineCurve(input_dim, intermediate_dim, knots_config=num_control_points, normalization_fn='clamp'),
    tc.Sum(),
    # layer 2
    tc.BSplineCurve(intermediate_dim, intermediate_dim, knots_config=num_control_points, normalization_fn='clamp'),
    tc.Sum(),
    # layer 3
    tc.BSplineCurve(intermediate_dim, 1, knots_config=num_control_points, normalization_fn='clamp'),
    tc.Sum(),
)
Legendre KAN with clamping
import torchcurves as tc
from torch import nn
input_dim = 2
intermediate_dim = 5
degree = 5
config = dict(degree=degree, normalization_fn="clamp")
kan = nn.Sequential(
    # layer 1
    tc.LegendreCurve(input_dim, intermediate_dim, **config),
    tc.Sum(),
    # layer 2
    tc.LegendreCurve(intermediate_dim, intermediate_dim, **config),
    tc.Sum(),
    # layer 3
    tc.LegendreCurve(intermediate_dim, 1, **config),
    tc.Sum(),
)
Development
Development Installation
Using uv (recommended):
# Clone the repository
git clone https://github.com/alexshtf/torchcurves.git
cd torchcurves
# Create virtual environment and install
uv venv
uv sync --all-groups
Running Tests
# Run all tests
uv run pytest
# Run with coverage
uv run pytest --cov=torchcurves
# Run specific test file
uv run pytest tests/test_bspline.py -v
Performance Benchmarks
This project includes opt-in performance benchmarks (forward and backward passes) using pytest-benchmark.
Location: benchmarks/
Run benchmarks:
# Run all benchmarks
uv run pytest benchmarks -q
# Or select only perf-marked tests if you mix them into tests/
uv run pytest -m perf -q
CUDA timing notes: We synchronize before/after timed regions for accurate GPU timings.
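The synchronization pattern looks roughly like this (a generic sketch, not the benchmark harness itself):

```python
import time
import torch

def timed(fn, *args):
    # synchronize before and after the timed region so pending GPU kernels
    # are fully accounted for; on CPU-only machines the calls are skipped
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.perf_counter() - start

y, elapsed = timed(torch.matmul, torch.randn(256, 256), torch.randn(256, 256))
assert elapsed > 0 and y.shape == (256, 256)
```

Without the synchronization, `time.perf_counter()` would only measure kernel launch overhead, because CUDA operations execute asynchronously.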
Compare runs and fail CI on regressions:
# Save a baseline
uv run pytest benchmarks --benchmark-save=legendre_baseline
# Compare current run to baseline (fail if mean slower by 10% or more)
uv run pytest benchmarks --benchmark-compare --benchmark-compare-fail=mean:10%
Export results:
uv run pytest benchmarks --benchmark-json=bench.json
Building the docs
# Build the HTML documentation
cd docs
make html
Citation
If you use this package in your research, please cite:
@software{torchcurves,
author = {Shtoff, Alex},
title = {torchcurves: Differentiable Parametric Curves in PyTorch},
year = {2025},
publisher = {GitHub},
url = {https://github.com/alexshtf/torchcurves}
}
References
[1]: Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y. Hou, Max Tegmark. "KAN: Kolmogorov–Arnold Networks." ICLR (2025).
[2]: Juergen Schmidhuber. "Learning to control fast-weight memories: An alternative to dynamic recurrent networks." Neural Computation, 4(1), pp.131-139. (1992)
[3]: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need." Advances in neural information processing systems 30 (2017).
[4]: Alex Shtoff, Elie Abboud, Rotem Stram, and Oren Somekh. "Function Basis Encoding of Numerical Features in Factorization Machines." Transactions on Machine Learning Research.
[5]: Rügamer, David. "Scalable Higher-Order Tensor Product Spline Models." In International Conference on Artificial Intelligence and Statistics, pp. 1-9. PMLR, 2024.
[6]: Steffen Rendle. "Factorization machines." In 2010 IEEE International conference on data mining, pp. 995-1000. IEEE, 2010.