A curated list of machine learning and deep learning helpers.
Project description
Core
Device Management
This section demonstrates how to easily select computation devices when using PyTorch. The homa helpers provide consistent interfaces for CPU, CUDA-enabled GPUs, and Apple Silicon MPS.
- `cpu()`: forces tensors or models onto the CPU.
- `cuda()`: moves tensors or models onto a CUDA GPU (if available); commonly used in high-performance training.
- `mps()`: uses Apple's Metal Performance Shaders backend on macOS.
- `get_device()`: automatically infers the best available device in the order CUDA → MPS → CPU.
```python
import torch

from homa import cpu, mps, cuda, get_device

# explicitly selecting devices
torch.tensor([1, 2, 3, 4, 5]).to(cpu())
torch.tensor([1, 2, 3, 4, 5]).to(cuda())
torch.tensor([1, 2, 3, 4, 5]).to(mps())

# automatic device selection
torch.tensor([1, 2, 3, 4, 5]).to(get_device())
```
This design mirrors common best practices in deep learning workflows, promoting device‑agnostic code.
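The fallback order itself is easy to illustrate. In the sketch below, the boolean flags stand in for the runtime checks (`torch.cuda.is_available()`, `torch.backends.mps.is_available()`) that a helper like `get_device` would consult; it is an illustration of the selection logic, not homa's actual implementation:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Mirror the CUDA -> MPS -> CPU preference order."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

print(pick_device(True, True))    # "cuda": a CUDA GPU wins when present
print(pick_device(False, True))   # "mps": otherwise Apple's MPS backend
print(pick_device(False, False))  # "cpu": the final fallback
```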
Loading Settings
homa.settings allows you to attach a settings.json file to your project and access its values directly in your code. This is useful for hyperparameters, configuration management, or experiment logging.
Example settings.json:
```json
{
    "epochs": 100,
    "learning_rate": 0.001
}
```
Loading settings in Python:
```python
from homa import settings

for epoch in range(settings("epochs")):
    pass
```
The helper reads and caches the JSON content, providing dictionary‑like access without requiring boilerplate file‑loading logic.
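The read-and-cache pattern takes only a few lines to implement. The sketch below is an illustration of the idea, not homa's actual implementation; it writes a sample `settings.json` first so it is runnable end to end:

```python
import json

# write a sample settings.json so the sketch is self-contained
with open("settings.json", "w") as fh:
    json.dump({"epochs": 100, "learning_rate": 0.001}, fh)

_cache = None

def settings(key, path="settings.json"):
    """Read the JSON file on first access, then serve values from a cache."""
    global _cache
    if _cache is None:
        with open(path) as fh:
            _cache = json.load(fh)
    return _cache[key]

print(settings("epochs"))         # 100
print(settings("learning_rate"))  # 0.001
```

Because the parsed dictionary is cached, repeated lookups (for example, one per training epoch) never touch the filesystem again.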
Vision
Resnet
homa.vision.Resnet implements a standard ResNet‑50 architecture, commonly used in image classification tasks. This class bundles the model, optimizer, and training loop helpers for fast prototyping.
You can train the model directly using a PyTorch DataLoader:
```python
from homa.vision import Resnet

# train_dataloader: a torch.utils.data.DataLoader yielding (inputs, labels)
model = Resnet(num_classes=10, lr=0.001)

for epoch in range(10):
    model.train(train_dataloader)
```
Alternatively, you may manually unpack the DataLoader and pass data batches yourself:
```python
from homa.vision import Resnet

model = Resnet(num_classes=10, lr=0.001)

for epoch in range(10):
    for x, y in train_dataloader:
        model.train(x, y)
```
This interface is influenced by modern PyTorch training utilities and mirrors patterns seen in high‑level frameworks while keeping full transparency over the training loop.
Loss Functions
Logit Normalization
LogitNorm is a modified cross‑entropy‑style loss that normalizes logits before computing the loss. This technique was introduced to improve calibration, robustness, and especially performance in ensembling scenarios where varied model outputs can lead to instability.
Typical benefits of LogitNorm include:
- more stable gradients
- improved probabilistic calibration
- robustness to logit scaling differences across models
```python
from homa.loss import LogitNorm

criterion = LogitNorm()
```
Logit normalization is related to works studying the effect of logit scaling on generalization and calibration in deep networks.
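The core idea is to L2-normalize the logit vector (optionally with a temperature) before applying the usual cross-entropy. The sketch below is a per-sample, pure-Python illustration of that idea, not homa's `LogitNorm` implementation; the temperature parameter `tau` and the epsilon guard are assumptions:

```python
import math

def logitnorm_cross_entropy(logits, target, tau=1.0, eps=1e-7):
    """Cross-entropy computed on L2-normalized, temperature-scaled logits."""
    norm = math.sqrt(sum(z * z for z in logits)) + eps
    scaled = [z / (tau * norm) for z in logits]
    # cross-entropy = log-sum-exp minus the target logit
    log_sum_exp = math.log(sum(math.exp(s) for s in scaled))
    return log_sum_exp - scaled[target]

# rescaling the logits by a constant leaves the loss (nearly) unchanged,
# which is the robustness-to-logit-scaling property noted above
a = logitnorm_cross_entropy([2.0, -1.0, 0.5], target=0)
b = logitnorm_cross_entropy([20.0, -10.0, 5.0], target=0)
print(abs(a - b) < 1e-6)  # True
```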
Activation Functions
The table below lists every module that subclasses ActivationFunction, summarizing the computation performed in its forward pass. Entries marked "Inherits" reuse the forward computation of the named parent class.
| ID | Activation | Formula (plain text) |
|---|---|---|
| 1 | ADA | f(x) = x if x ≥ 0, else x * exp(x) |
| 2 | AOAF | f(x) = max(0, x − b · a) + c · a |
| 3 | AReLU | f(x) = (1 + σ(b)) * max(0, x) + clamp(a, 0.01, 0.99) * min(0, x) |
| 4 | ASiLU | f(x) = arctan(x * σ(x)) |
| 5 | AbsLU | f(x) = x if x ≥ 0, else α * |x| |
| 6 | BaseDLReLU | f(x) = x if x ≥ 0, else a * bₜ * x |
| 7 | CaLU | f(x) = x * (arctan(x) / π + 0.5) |
| 8 | DLReLU | Inherits BaseDLReLU |
| 9 | DLU | f(x) = x if x ≥ 0, else x / (1 − x) |
| 10 | DPReLU | f(x) = a x if x ≥ 0, else b x |
| 11 | DRLU | f(x) = max(0, x − α) |
| 12 | DerivativeSiLU | f(x) = σ(x) * (1 + x (1 − σ(x))) |
| 13 | DiffELU | f(x) = x if x ≥ 0, else a * (x eˣ − b e^(b x)) |
| 14 | DoubleSiLU | f(x) = x / (1 + exp(−(−x / (1 + e^(−x))))) |
| 15 | DualLine | f(x) = a x + m if x ≥ 0, else b x + m |
| 16 | EANAF | f(x) = x · g(h(x)) |
| 17 | Elliot | f(x) = 0.5 + (0.5 x)/(1 + |x|) |
| 18 | ExponentialDLReLU | Inherits BaseDLReLU |
| 19 | ExponentialSwish | f(x) = exp(−x) · σ(x) |
| 20 | FReLU | f(x) = x + b if x ≥ 0, else b |
| 21 | FlattedTSwish | f(x) = max(0, x) · σ(x) + t |
| 22 | GeneralizedSwish | f(x) = x · σ(exp(−x)) |
| 23 | Gish | f(x) = x · ln(2 − exp(−exp(x))) |
| 24 | IpLU | f(x) = x if x ≥ 0, else x / (1 + |x|^α) |
| 25 | LaLU | f(x) = x · (1 − 0.5 e^(−x)) if x ≥ 0, else x · (0.5 e^x) |
| 26 | LeLeLU | f(x) = a x if x ≥ 0, else 0.01 a x |
| 27 | LogSigmoid | f(x) = ln(σ(x)) |
| 28 | Logish | f(x) = x · ln(1 + σ(x)) |
| 29 | MSiLU | f(x) = x σ(x) + 1/4 · e^(−x² − 1) |
| 30 | MaxSig | f(x) = max(x, σ(x)) |
| 31 | MinSin | f(x) = min(x, sin(x)) |
| 32 | NLReLU | f(x) = ln(1 + β · max(0, x)) |
| 33 | NReLU | f(x) = x + a if x ≥ 0, else 0 |
| 34 | NoisyReLU | Inherits NReLU |
| 35 | OAF | f(x) = max(0, x) + x · σ(x) |
| 36 | PERU | f(x) = a x if x ≥ 0, else a x · e^(b x) |
| 37 | PFLU | f(x) = x · 0.5 · (1 + x / √(1 + x²)) |
| 38 | PLAF | f(x) = x − δ if x ≥ 1; = −x − δ if x < −1; = |x|^d / d otherwise; δ = 1 − 1/d |
| 39 | Phish | f(x) = x · tanh(GELU(x)) |
| 40 | PiLU | f(x) = a x + c (1 − a) if x ≥ c, else b x + c (1 − b) |
| 41 | PoLU | f(x) = x if x ≥ 0, else (1 − x)^(−α) − 1 |
| 42 | PolyLU | f(x) = x if x ≥ 0, else 1/(1 − x) − 1 |
| 43 | REU | f(x) = x if x ≥ 0, else x · exp(x) |
| 44 | RReLU | f(x) = x if x ≥ 0, else x / a, where a ∈ [lower, upper] |
| 45 | RandomizedSlopedReLU | Inherits SlopedReLU |
| 46 | ReCU | Inherits RePU |
| 47 | RePU | f(x) = max(0, x^α) |
| 48 | ReQU | Inherits RePU |
| 49 | ReSP | f(x) = α x + ln 2 if x ≥ 0, else ln(1 + e^x) |
| 50 | ReSech | f(x) = x · sech(x) |
| 51 | SGELU | f(x) = α x · erf(x / √2) |
| 52 | SaRa | f(x) = x if x ≥ 0, else x / (1 + α e^(−β x)) |
| 53 | Serf | f(x) = x · erf(ln(1 + e^x)) |
| 54 | ShiLU | f(x) = a · max(0, x) + b |
| 55 | ShiftedReLU | f(x) = max(x, −1) |
| 56 | SiELU | f(x) = x · σ(2 √(2/π) · (x + 0.044715 x³)) |
| 57 | SigLU | f(x) = x if x ≥ 0, else (1 − e^(−2x)) / (1 + e^(−2x)) |
| 58 | SigmoidDerivative | f(x) = e^(−x) · σ(x)² |
| 59 | SinSig | f(x) = x · sin((π/2) · σ(x)) |
| 60 | SineReLU | f(x) = x if x ≥ 0, else ε · (sin x − cos x) |
| 61 | SlopedReLU | f(x) = α x if x ≥ 0, else 0 |
| 62 | Smish | f(x) = x · tanh(ln(1 + σ(x))) |
| 63 | SoftModulusQ | f(x) = x² (2 − |x|) if |x| ≤ 1, else |x| |
| 64 | SoftModulusT | f(x) = x · tanh(x / α) |
| 65 | SoftsignRReLU | f(x) = 1/(1 + x)² + x if x ≥ 0, else 1/(1 + x)² + a x |
| 66 | StarReLU | f(x) = a · (max(0, x))² + b |
| 67 | Suish | f(x) = max(x, x · exp(−|x|)) |
| 68 | TBSReLU | f(x) = x · tanh((1 − e^(−x)) / (1 + e^(−x))) |
| 69 | TSReLU | f(x) = x · tanh(σ(x)) |
| 70 | TSiLU | Let α = x / (1 + e^(−x)), then f(x) = (e^α − e^(−α)) / (e^α + e^(−α)) |
| 71 | TangentBipolarSigmoidReLU | Inherits TBSReLU |
| 72 | TangentSigmoidReLU | Inherits TSReLU |
| 73 | TanhExp | f(x) = x · tanh(e^x) |
| 74 | TeLU | f(x) = x · tanh(e^x) |
| 75 | ThLU | f(x) = x if x ≥ 0, else tanh(x / 2) |
| 76 | TripleStateSwish | Let a = σ(x), b = σ(x − α), c = σ(x − β); f(x) = x · a · (a + b + c) |
| 77 | mReLU | f(x) = min(max(0, 1 − x), max(0, 1 + x)) |
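Most entries in the table are one-line elementwise functions. As a concrete illustration, here are two of the simpler formulas (mReLU and ShiftedReLU) transcribed directly from the table into plain Python; these are sketches for scalar inputs, not homa's tensor implementations:

```python
def m_relu(x: float) -> float:
    """mReLU from the table: min(max(0, 1 - x), max(0, 1 + x))."""
    return min(max(0.0, 1.0 - x), max(0.0, 1.0 + x))

def shifted_relu(x: float) -> float:
    """ShiftedReLU from the table: max(x, -1)."""
    return max(x, -1.0)

print(m_relu(0.0))         # 1.0: the tent peaks at the origin
print(m_relu(0.5))         # 0.5
print(m_relu(2.0))         # 0.0: zero outside [-1, 1]
print(shifted_relu(-3.0))  # -1.0: negative inputs are floored at -1
```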
File details
Details for the file homa-0.3.313.tar.gz.
File metadata
- Download URL: homa-0.3.313.tar.gz
- Upload date:
- Size: 38.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4fe41fd0e0f0b8e4f1fa9046074c4678eb5dd55b9225f8f0ad75d5d48d099b9d |
| MD5 | bbdb141eb40bc64870113512930e1a83 |
| BLAKE2b-256 | 12b3f33fd537152fdbebf5bcf5dfaa738648343407e1afda1f70c322368c338f |
File details
Details for the file homa-0.3.313-py3-none-any.whl.
File metadata
- Download URL: homa-0.3.313-py3-none-any.whl
- Upload date:
- Size: 83.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 75d002996adb93147f915f55752c052d4af07e4b906f34c359c372e9caba7af7 |
| MD5 | 5417ebc5ae3e0a25f22f82a8bc4adcec |
| BLAKE2b-256 | d59870386c270a71e496657802c927cc548c111a08b66d72c0e0d25de6c8cd3a |