Thermodynamically efficient optimizer with Lipschitz-adaptive learning rates and information-weighted sampling

H3 Optimizer ⚡

License: MIT · Python 3.8+ · PyTorch 1.12+ · Code style: black · Validated Hardware · Energy · PyPI

Thermodynamically efficient deep learning optimizer achieving better accuracy AND lower energy consumption.

H3 combines Lipschitz-adaptive learning rates, information-weighted sampling, and energy tracking to maximize learning efficiency. Based on H3: A Thermodynamically Efficient Machine Learning Framework by Nuno Cardoso (2025).


📊 Validated Performance (CIFAR-10 + ResNet-18, 20 epochs)

Tested on Apple Silicon (M4) - Real hardware measurements

Conservative Mode (Balanced - Recommended)

Configuration: keep_frac=0.75, uniform_mix=0.50
Adam baseline:   74.58% accuracy, 239s, 14.2 kJ
H3 conservative: 76.46% accuracy, 197s, 11.7 kJ

Improvements:   +1.88pp accuracy ✅
                -17.8% training time ✅
                -17.9% energy consumption ✅
                +12% thermodynamic efficiency (η) ✅

Aggressive Mode (Green - Maximum Efficiency)

Configuration: keep_frac=0.55, uniform_mix=0.30
Adam baseline:  74.58% accuracy, 239s, 14.2 kJ
H3 aggressive:  77.83% accuracy, 169s, 10.0 kJ

Improvements:   +3.25pp accuracy ✅
                -29.4% training time ✅
                -29.6% energy consumption ✅
                +23% thermodynamic efficiency (η) ✅

Key Finding: H3 often IMPROVES accuracy while saving energy and time, because information-weighted sampling focuses computation on informative examples and the learning rate adapts to local curvature.

Configuration Details

  • Phases: 15% warmup, 70% thermodynamic, 15% consolidation (see the sketch below)
  • Minimum 20 epochs recommended for convergence
  • Conservative uses more data (75%) with higher exploration (50% uniform)
  • Aggressive uses less data (55%) with focused sampling (30% uniform)

Note: Always profile your dataset first with H3Profiler to establish baselines before using the optimizer.
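
For concreteness, here is a minimal sketch of how a 15% / 70% / 15% split maps epochs to phases. The helper name and rounding are illustrative, not part of the h3 package API:

def phase_for_epoch(epoch: int, total_epochs: int) -> str:
    """Map an epoch index to a phase under a 15% / 70% / 15% split (sketch)."""
    warmup_end = max(1, round(0.15 * total_epochs))
    consolidation_start = total_epochs - max(1, round(0.15 * total_epochs))
    if epoch < warmup_end:
        return 'warmup'
    elif epoch < consolidation_start:
        return 'thermodynamic'
    return 'consolidation'

# 20 epochs -> epochs 0-2 warmup, 3-16 thermodynamic, 17-19 consolidation
print([phase_for_epoch(e, 20) for e in range(20)])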


🔥 MNIST Baseline (Simple Dataset)

| Metric        | Adam Baseline   | H3 Optimizer    | Difference         |
|---------------|-----------------|-----------------|--------------------|
| Training Time | 23.07s          | 20.27s          | ⚡ 12.2% faster    |
| Test Accuracy | 99.20%          | 99.13%          | 🎯 -0.07pp         |
| Energy Used   | 1357.68 J       | 1172.02 J       | 💡 13.7% reduction |
| Efficiency η  | 0.002375 bits/J | 0.002816 bits/J | 📊 +18.6%          |

MNIST with SimpleCNN (5 epochs). Measured on Apple M4 with PowerMetrics energy monitoring. MNIST is a simple dataset - see CIFAR-10 results above for performance on complex datasets.

✅ Real Hardware Validation

H3 has been validated on multiple platforms with real energy measurements:

Apple M4 (Mac Studio/MacBook Pro):

  • Energy Backend: PowerMetrics (real-time power monitoring)
  • MNIST: 12.2% speedup, 18.6% efficiency gain
  • Package: pip install h3-optimizer from PyPI

Testing Environment:

# Anyone can reproduce these results:
pip install h3-optimizer

# Run the validation:
from h3 import H3Optimizer, LossTracker, EnergyTracker
# ... see examples/quickstart.py for complete code

Three-Phase Training Observed:

  • Warmup (epoch 1): η = 0.010686 bits/J (initialization)
  • Thermodynamic (epochs 2-4): η = 0.006572 → 0.003766 bits/J (optimization)
  • Consolidation (epoch 5): η = 0.002816 bits/J (stabilization)

The efficiency metric η changes across phases as expected from theory!


⚡ Quick Start

from h3.optimizer import H3Optimizer
from h3.sampler import LossTracker, InformationWeightedSampler, IndexedDataset
from h3.energy_tracker import EnergyTracker

# Wrap dataset for loss tracking
dataset = IndexedDataset(your_dataset)
loss_tracker = LossTracker(num_samples=len(dataset))

# Initialize H3 optimizer
optimizer = H3Optimizer(model.parameters(), lr=1e-3)

# Track thermodynamic efficiency
tracker = EnergyTracker(device='cuda')
tracker.start()

# Three-phase training
for epoch in range(num_epochs):
    phase = 'warmup' if epoch < 1 else 'thermodynamic' if epoch < 4 else 'consolidation'
    sampler = InformationWeightedSampler(
        dataset, loss_tracker,
        keep_frac=0.65 if phase == 'thermodynamic' else 1.0,
        uniform_mix=0.2 if phase == 'thermodynamic' else 1.0
    )
    # ... train with sampler ...

results = tracker.stop()
print(f"Efficiency: {results['efficiency_bits_per_j']:.6f} bits/J")

Run the complete demo:

python examples/quickstart.py  # H3 vs Adam comparison on MNIST

🎯 Preset Configurations

H3 provides pre-tuned configurations for common use cases - no hyperparameter tuning required:

from h3.presets import h3_mnist_fast

# One-liner setup for MNIST-like datasets
optimizer, loss_tracker, indexed_dataset = h3_mnist_fast(
    model.parameters(),
    train_dataset
)

# Training loop with automatic phase management
from torch.utils.data import DataLoader

for epoch in range(10):
    # `preset` is the preset configuration object; see docs/PRESETS.md
    # for the exact return signature of the preset helpers.
    phase, sampler = preset.get_sampler(epoch, total_epochs=10)
    loader = DataLoader(indexed_dataset, sampler=sampler, batch_size=64)
    # ... your training code ...

Available presets:

  • h3_mnist_fast() - For highly redundant data (MNIST, Fashion-MNIST)
    • 10-15% speedup, 15-20% efficiency gain
    • Accuracy maintained (< 0.2pp difference)
  • h3_cifar_safe() - For complex datasets (CIFAR-10, CIFAR-100)
    • 5-8% speedup, 8-12% efficiency gain
    • Small accuracy trade-off (-0.5 to -1.5pp)
  • h3_edge() - For edge devices (IoT, mobile ML)
    • 15-25% speedup, 20-30% efficiency gain
    • Accepts larger accuracy trade-off (-2 to -4pp) for maximum energy savings

See docs/PRESETS.md for detailed documentation.


📊 Training Analysis & Logging

H3 includes built-in logging and analysis tools for tracking thermodynamic efficiency:

from h3.hooks import ThermoAuditLogger

# Setup logging
logger = ThermoAuditLogger(
    "my_experiment",
    metadata={"model": "ResNet-18", "dataset": "CIFAR-10"},
    config={"epochs": 20, "batch_size": 128, "lr": 1e-3}
)

# In training loop
logger.log_epoch(
    epoch=epoch,
    phase=phase,
    keep_frac=0.65,
    uniform_mix=0.2,
    train_loss=loss.item(),
    val_acc=accuracy,
    energy_stats=tracker.get_current_stats()
)

# Analyze results
from h3 import explain_thermo_log
print(explain_thermo_log(logger.get_path()))

CLI tool for analysis:

# Analyze single run
h3-report --h3-log mnist_h3.csv

# Compare H3 vs baseline
h3-report --h3-log mnist_h3.csv --baseline-log mnist_adam.csv

# Compare multiple configurations
h3-report --compare run1.csv run2.csv run3.csv

The h3-report tool provides:

  • Final metrics summary (accuracy, energy, efficiency)
  • Phase breakdown analysis
  • Heuristic assessment (starvation/aggressive/moderate/conservative)
  • Green Score calculation (energy efficiency ratio)
  • Automatic verdict with tuning recommendations

📦 Feature Overview

Core Features (Production-Ready)

| Feature | Description | Status |
|---|---|---|
| H3Optimizer | Thermodynamic optimizer with adaptive learning | ✅ Validated |
| H3Profiler | Zero-risk profiling for any optimizer | ✅ Stable |
| AutoH3 | Zero-config automation with auto-tuning | ✅ Ready |
| η-Controller | Automatic hyperparameter adjustment | ✅ Functional |
| ThermoAuditLogger | Experiment tracking and analysis | ✅ Stable |
| Preset Configurations | One-liner setup (mnist_fast, cifar_safe, etc.) | ✅ Ready |
| CLI Tools | h3-report for analysis and comparison | ✅ Stable |

All features tested and validated on real hardware (Apple M4).


🚀 Advanced Features

H3 provides three levels of automation to match your needs:

1. H3Profiler - Zero-Risk Profiling 🔬

Profile ANY optimizer (Adam, SGD, AdamW, etc.) without changing your training code:

from h3 import H3Profiler

# Wrap your existing training loop
profiler = H3Profiler(device='cuda', name="mnist_adam")
profiler.start()

# Your normal training loop - no changes needed!
for epoch in range(20):
    for batch in train_loader:
        # ... your normal training code with Adam/SGD ...
        profiler.log_batch(loss.item())

    acc = evaluate(model, test_loader)
    profiler.log_epoch(accuracy=acc)

# Get comprehensive analysis
results = profiler.stop()
print(profiler.get_report())  # Detailed thermodynamic analysis
profiler.export_csv("./profiles/adam_run.csv")

What it does:

  • ✅ Measures thermodynamic efficiency (η = bits/joule) in real-time
  • ✅ Finds optimal stopping point (diminishing returns detection)
  • ✅ Calculates energy waste: "You used 20% more energy than needed"
  • ✅ Suggests when H3 optimization could help
  • ✅ Zero risk - just measurement, no changes to training

Use when: You want to understand your current training efficiency before committing to H3.


2. η-Controller - Automatic Hyperparameter Tuning 🎛️

Let H3 tune itself based on real-time efficiency measurements:

from h3 import create_controlled_h3

# One-liner setup with automatic tuning
optimizer, controller, loss_tracker, indexed_dataset = create_controlled_h3(
    model.parameters(),
    train_dataset,
    mode="balanced",          # safe / balanced / green / extreme
    accuracy_tolerance=1.0,   # Max acceptable accuracy loss (pp)
    total_epochs=20
)

for epoch in range(20):
    # Controller automatically adjusts parameters
    keep_frac, uniform_mix, phase = controller.control_step()

    # Create sampler with auto-tuned parameters
    sampler = InformationWeightedSampler(
        indexed_dataset, loss_tracker,
        keep_frac=keep_frac,
        uniform_mix=uniform_mix
    )
    loader = DataLoader(indexed_dataset, sampler=sampler, batch_size=128)

    # ... training loop ...

    # Controller learns and adapts; pass this epoch's measurements
    # (efficiency eta, validation accuracy, mean loss, energy, info gain)
    controller.observe(eta, accuracy, loss, energy, info)

    # Optional: check status
    if epoch % 5 == 0:
        print(controller.get_report())

Operating modes:

  • "safe" - Maximize accuracy (keep_frac ~0.70, conservative)
  • "balanced" - Balance speed/accuracy (keep_frac ~0.55, recommended)
  • "green" - Maximize energy savings (keep_frac ~0.45, aggressive)
  • "extreme" - Maximum savings (keep_frac ~0.35, accepts accuracy loss)

What it does:

  • ✅ Auto-adjusts keep_frac and uniform_mix in real-time
  • ✅ Respects accuracy constraints (won't sacrifice too much accuracy)
  • ✅ Prevents data starvation (detects loss volatility)
  • ✅ Automatic phase transitions (warmup → thermodynamic → consolidation)
  • ✅ Self-regulating thermodynamic system

Use when: You want H3's benefits but don't want to manually tune hyperparameters.


3. AutoH3 - Complete Zero-Config Automation 🤖

The simplest possible API - combining profiler + controller + everything:

from h3 import AutoH3

# Single line creates everything
auto = AutoH3(
    model.parameters(),
    train_dataset,
    mode="balanced",
    name="mnist_auto_experiment"
)

auto.start()

# Simple training interface
for epoch in range(20):
    loader = auto.get_loader(batch_size=64)

    for data, target, indices in loader:
        data, target = data.to(device), target.to(device)

        # One-line training step
        loss = auto.training_step(model, data, target, criterion, indices)

    # One-line evaluation
    accuracy = auto.evaluate_epoch(model, test_loader)

# Comprehensive final report
results = auto.finish()

What it does:

  • ✅ Combines H3Profiler + η-Controller + EnergyTracker
  • ✅ Zero configuration - just pick a mode
  • ✅ Automatic profiling and hyperparameter tuning
  • ✅ Simple training interface (training_step, evaluate_epoch)
  • ✅ Comprehensive final reports with all metrics
  • ✅ Works with standard PyTorch models and datasets

Use when: You want the absolute simplest H3 experience with maximum automation.


Feature Comparison

| Feature | Manual H3 | Presets | η-Controller | AutoH3 | H3Profiler |
|---|---|---|---|---|---|
| Setup complexity | High | Low | Medium | Very Low | Minimal |
| Hyperparameter tuning | Manual | Pre-tuned | Automatic | Automatic | N/A |
| Works with any optimizer | No | No | No | No | ✅ YES |
| Real-time adaptation | No | No | ✅ YES | ✅ YES | No |
| Zero risk | No | No | No | No | ✅ YES |
| Best for | Research | Quick start | Production | Beginners | Profiling |

Recommended Workflow

  1. Start with H3Profiler - Profile your existing training to understand baseline efficiency
  2. Try AutoH3 - Get H3 benefits with zero configuration
  3. Tune with η-Controller - Fine-tune for production if needed
  4. Use Presets - If you want manual control with good defaults

🧠 What is H3?

H3 treats machine learning as a thermodynamic process that converts electrical energy into predictive information. Instead of just minimizing loss, H3 maximizes:

η_thermo = ΔI / E    (bits of information gained per joule of energy)

This is achieved through three innovations:

1. Lipschitz-Adaptive Learning Rates (Eq. 4.10)

η_{k+1} = min(η_max, γ / (L̂_k + ε))

Adjusts step size based on local loss curvature:

  • Larger steps in smooth regions → faster convergence
  • Smaller steps in steep regions → better stability
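
A minimal sketch of Eq. 4.10, assuming the local Lipschitz constant L̂_k is estimated from successive gradients (a standard choice; H3Optimizer's internal estimator may differ):

import torch

def lipschitz_adaptive_lr(grad_prev, grad_curr, params_prev, params_curr,
                          lr_max=1e-2, gamma=0.9, eps=1e-8):
    """Sketch of Eq. 4.10: lr_{k+1} = min(lr_max, gamma / (L_hat_k + eps)).

    L_hat_k is estimated here as ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||;
    this estimator is an assumption, not necessarily the package's code.
    """
    L_hat = torch.norm(grad_curr - grad_prev) / (torch.norm(params_curr - params_prev) + eps)
    return min(lr_max, gamma / (L_hat.item() + eps))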

2. Information-Weighted Sampling (Eq. 5.1)

L_i^(t) = (1-β)*L_i^(t-1) + β*L_current

Prioritizes high-loss (high-information) examples:

  • Warmup: Uniform sampling for initialization
  • Thermodynamic: Weighted sampling (top 65% by loss)
  • Consolidation: Uniform sampling for rebalancing
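
A minimal sketch of how the smoothed losses could translate into sampling probabilities; the exact construction (top-k cutoff, loss-proportional weights) is an assumption for illustration:

import numpy as np

def sampling_probabilities(ema_losses, keep_frac=0.65, uniform_mix=0.2):
    """Sketch: keep the top keep_frac of samples by smoothed loss,
    weight them proportionally to loss, then blend with a uniform
    distribution via uniform_mix (uniform_mix=1.0 -> plain uniform)."""
    n = len(ema_losses)
    kept = np.argsort(ema_losses)[-int(keep_frac * n):]   # highest-loss samples
    weighted = np.zeros(n)
    weighted[kept] = ema_losses[kept] / ema_losses[kept].sum()
    return (1 - uniform_mix) * weighted + uniform_mix / n

# Eq. 5.1 per batch: ema[i] = (1 - beta) * ema[i] + beta * current_loss_i
ema = np.random.rand(1000)
p = sampling_probabilities(ema)
assert abs(p.sum() - 1.0) < 1e-9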

3. Energy Tracking (Eq. 4.6, 4.7)

E = ∫ P(t) dt    (trapezoidal integration)
ΔI = [L_initial - L_final] / ln(2)
η = ΔI / E

Real-time monitoring via:

  • NVML (NVIDIA GPUs)
  • PowerMetrics (Apple Silicon)
  • Fallback (TDP estimation)
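
A minimal sketch of the Eq. 4.6 integration over a recorded power trace (the sample values are made up for illustration):

import numpy as np

# Hypothetical power samples: (seconds, watts) from any backend above
times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
power = np.array([18.0, 22.5, 21.0, 19.5, 20.0])

# Eq. 4.6: E = integral of P(t) dt, approximated by the trapezoidal rule
energy_j = np.trapz(power, times)
print(f"E = {energy_j:.2f} J over {times[-1]:.1f} s")   # -> E = 41.00 J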

This is not a metaphor: thermodynamic efficiency is a measurable physical quantity with units of bits/joule.


📦 Installation

Validated on PyPI (v0.1.1):

pip install h3-optimizer

This is the same package used in our validation tests. No local builds needed.

From source (for development):

git clone https://github.com/nfocardoso/EMSTI.git
cd EMSTI
pip install -e .

Requirements

  • Python ≥ 3.8
  • PyTorch ≥ 1.12.0
  • NumPy ≥ 1.21.0
  • torchvision ≥ 0.13.0 (for examples)

Optional for GPU power monitoring:

  • pynvml (NVIDIA GPUs)
  • powermetrics (Apple Silicon, requires sudo)

🚀 Full Example

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from h3.optimizer import H3Optimizer
from h3.sampler import LossTracker, InformationWeightedSampler, IndexedDataset
from h3.energy_tracker import EnergyTracker

# Setup (YourModel / YourDataset stand in for your own model and dataset)
device = 'cuda'
num_epochs, warmup_epochs, consolidation_epochs = 20, 3, 3   # 15/70/15 split
model = YourModel().to(device)
dataset = IndexedDataset(YourDataset())
loss_tracker = LossTracker(len(dataset), smoothing=0.1)
optimizer = H3Optimizer(
    model.parameters(),
    lr=1e-3,
    lipschitz_safety=0.9,
    lipschitz_update_interval=10
)
tracker = EnergyTracker(device='cuda')
criterion = nn.CrossEntropyLoss(reduction='none')

# Training with three phases
tracker.start()

for epoch in range(num_epochs):
    # Determine phase
    if epoch < warmup_epochs:
        phase = 'warmup'
        sampler = InformationWeightedSampler(
            dataset, loss_tracker, uniform_mix=1.0
        )
    elif epoch >= (num_epochs - consolidation_epochs):
        phase = 'consolidation'
        sampler = InformationWeightedSampler(
            dataset, loss_tracker, uniform_mix=1.0
        )
    else:
        phase = 'thermodynamic'
        sampler = InformationWeightedSampler(
            dataset, loss_tracker,
            keep_frac=0.65,    # Top 65% by loss
            uniform_mix=0.2    # 80% weighted, 20% uniform
        )

    loader = DataLoader(dataset, sampler=sampler, batch_size=64)

    for data, target, indices in loader:
        data, target = data.to(device), target.to(device)

        optimizer.zero_grad()
        output = model(data)

        # Compute per-sample losses
        loss_vector = criterion(output, target)
        loss = loss_vector.mean()

        # Update loss tracker
        loss_tracker.update(indices, loss_vector.detach())
        tracker.log_loss(loss.item())

        loss.backward()
        optimizer.step()

    print(f"Epoch {epoch+1} [{phase}]: Loss={loss.item():.4f}")

# Results
results = tracker.stop()
print(f"\nThermodynamic Efficiency: {results['efficiency_bits_per_j']:.6f} bits/J")
print(f"Total Energy: {results['total_energy_j']:.2f} J")
print(f"Information Gain: {results['info_gain_bits']:.4f} bits")

📊 Benchmark Results

MNIST Classification (Validated)

Hardware: Apple M4 with Metal Performance Shaders
Energy Backend: PowerMetrics (real hardware monitoring)
Model: SimpleCNN (Conv → Conv → FC → FC)
Dataset: MNIST (60k train, 10k test)
Epochs: 5
Batch size: 64

Baseline (Adam):

  • Uniform sampling, standard training
  • Time: 23.07s
  • Final accuracy: 99.20%
  • Energy: 1357.68 J
  • Efficiency: 0.002375 bits/J

H3 Optimizer:

  • Three-phase training (warmup → thermodynamic → consolidation)
  • Lipschitz-adaptive LR (γ=0.9, update_interval=10)
  • Information-weighted sampling (keep_frac=0.65 in thermodynamic phase)
  • Time: 20.27s (⚡ 12.2% faster)
  • Final accuracy: 99.13% (🎯 maintained)
  • Energy: 1172.02 J (💡 13.7% less)
  • Efficiency: 0.002816 bits/J (📊 18.6% improvement)

Key Observations:

  • Speedup achieved even on highly efficient Apple Silicon
  • Real energy savings measured via PowerMetrics (not estimates)
  • Accuracy maintained with minimal variance
  • Three-phase strategy shows clear efficiency progression

Original Paper Results (Reference)

For comparison, the original H3 paper reported results on different hardware:

| Dataset | Model | Baseline Time | H3 Time | Speedup | Efficiency Gain |
|---|---|---|---|---|---|
| CIFAR-10 | ResNet-18 | 752.7s | 537.2s | 28.6% | ~50% |
| CIFAR-100 | ResNet-18 | 272.8s | 231.6s | 15.0% | ~50% |
| Tiny-ImageNet | ResNet-18 | 1056.3s | 875.7s | 17.1% | ~53% |

Note: These were measured on different hardware with TDP-based estimates. Our Apple M4 results use real PowerMetrics measurements and show conservative but reproducible gains.


🔬 Theory: Thermodynamic Efficiency

H3 is grounded in information thermodynamics. The key insight:

Learning is energy-to-information conversion

Just as a heat engine converts thermal energy to mechanical work with efficiency η = W/Q, a learning system converts electrical energy to predictive information:

η_thermo = ΔI / E

Where:

  • ΔI = Information gained (bits) via cross-entropy reduction
  • E = Energy consumed (joules) via hardware power integration

This is not a metaphor: it's a measurable physical quantity with units of bits/joule.

The Three Core Equations

1. Information Gain (Eq. 4.3):

ΔI_bits = [L_initial - L_final] / ln(2)

Measures reduction in average code length (bits needed to encode labels).

2. Energy Integration (Eq. 4.6):

E = ∫ P(t) dt ≈ Σ [(P_k + P_{k+1})/2] * Δt

Trapezoidal integration of instantaneous power measurements.

3. Thermodynamic Efficiency (Eq. 4.7):

η_thermo = ΔI / E

Bits of information gained per joule of energy consumed.
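
As a sanity check, the three equations can be chained on the validated MNIST run above. Only the reported energy and efficiency are used; the implied cross-entropy drop is then compared against chance-level ln(10) for a 10-class problem:

import math

energy_j = 1172.02        # reported H3 MNIST energy (PowerMetrics)
eta = 0.002816            # reported efficiency, bits/J

delta_i_bits = eta * energy_j               # Eq. 4.7 rearranged: ~3.30 bits
delta_l_nats = delta_i_bits * math.log(2)   # Eq. 4.3 inverted:  ~2.29 nats

# Consistent with cross-entropy falling from ln(10) = 2.30 nats at chance
# to near zero at 99.13% accuracy.
print(f"dI = {delta_i_bits:.2f} bits, dL = {delta_l_nats:.2f} nats")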

Why This Matters

Traditional optimizers minimize loss without considering computational cost. H3 explicitly maximizes information-per-energy by:

  1. Intelligent Resource Allocation: Information-weighted sampling focuses computation on high-value examples
  2. Adaptive Step Sizing: Lipschitz-based LR adjusts steps to loss landscape geometry
  3. Direct Measurement: Real-time efficiency tracking enables optimization

This approach is particularly valuable for:

  • ⚡ Edge devices with limited battery
  • 🌍 Large-scale training with energy costs
  • 📱 On-device learning constrained by thermal limits

๐Ÿ—๏ธ Architecture

H3-Optimizer/
├── h3/
│   ├── __init__.py
│   ├── optimizer.py        # H3Optimizer with Lipschitz-adaptive LR
│   ├── sampler.py          # Information-weighted sampling + LossTracker
│   ├── energy_tracker.py   # Multi-backend power monitoring
│   ├── scheduler.py        # Learning rate schedulers
│   └── utils.py            # Helper functions
├── examples/
│   ├── quickstart.py       # MNIST demo (H3 vs Adam)
│   └── cifar10_demo.py     # CIFAR-10 example
├── tests/                  # Unit tests
├── benchmarks/             # Performance benchmarks
├── docs/                   # Documentation
├── setup.py                # Package installation
├── requirements.txt        # Dependencies
└── README.md               # This file

Key Components:

  • H3Optimizer: Drop-in replacement for PyTorch optimizers with Lipschitz-adaptive learning rates
  • LossTracker: Per-sample loss tracking with exponential smoothing
  • InformationWeightedSampler: Three-phase sampling strategy (warmup → thermodynamic → consolidation)
  • EnergyTracker: Multi-backend energy monitoring (NVML, PowerMetrics, fallback)
  • IndexedDataset: Wrapper to add sample indices for loss tracking
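
Since the training loops above unpack (data, target, indices) triples, here is a minimal sketch of what an index-aware wrapper like IndexedDataset does (illustrative; see h3/sampler.py for the real implementation):

from torch.utils.data import Dataset

class IndexedDatasetSketch(Dataset):
    """Wrap a map-style dataset so each item also carries its index,
    which LossTracker needs to attribute per-sample losses."""

    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        data, target = self.base[idx]
        return data, target, idx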

📚 Citation

If you use H3 in your research, please cite:

@article{cardoso2025h3,
  title={H3: A Thermodynamically Efficient Machine Learning Framework},
  subtitle={Bridging Information Theory, Energy Dissipation, and Learning Dynamics},
  author={Cardoso, Nuno},
  journal={Zenodo},
  year={2025},
  month={October},
  doi={10.5281/zenodo.17433760},
  url={https://zenodo.org/records/14357760},
  note={Implementation validated on Apple M4 hardware with real energy measurements}
}

For the software package:

@software{cardoso2025h3impl,
  title={h3-optimizer: PyPI Package},
  author={Cardoso, Nuno},
  year={2025},
  url={https://pypi.org/project/h3-optimizer/},
  version={0.1.1}
}

Paper: H3: A Thermodynamically Efficient Machine Learning Framework
Package: h3-optimizer on PyPI


๐Ÿค Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Areas of interest:

  • Additional backends for energy tracking (AMD, Intel, ARM)
  • Benchmarks on larger models (Transformers, Vision Transformers, LLMs)
  • Hyperparameter tuning strategies and AutoML integration
  • Integration with popular frameworks (HuggingFace Transformers, PyTorch Lightning, fastai)
  • Multi-GPU and distributed training support
  • Additional sampling strategies
  • Theoretical analysis and proofs

Development setup:

git clone https://github.com/nfocardoso/EMSTI.git
cd EMSTI
pip install -e ".[dev]"
pytest tests/

📄 License

MIT License - see LICENSE file for details.

Copyright (c) 2025 Nuno Cardoso


👤 Author

Nuno Cardoso
Independent Researcher


๐Ÿ™ Acknowledgments

This work builds upon the EMSTI (Emergent Matter–Space–Time–Information) theoretical framework, which proposes a unified view of thermodynamics, information theory, and learning dynamics. The H3 implementation demonstrates that thermodynamic principles can guide practical machine learning optimization.

Special thanks to:

  • The PyTorch team for the excellent deep learning framework
  • The open-source community for tools and inspiration
  • Reviewers and early adopters for valuable feedback

🔗 Related Work

Theoretical Foundation:

  • H3: A Thermodynamically Efficient Machine Learning Framework (Cardoso, 2025) - the paper this package implements

Practical Applications:

  • examples/quickstart.py - Complete MNIST demo
  • examples/cifar10_demo.py - CIFAR-10 training
  • Paper Benchmarks - Full experimental results

๐Ÿ“ Changelog

v0.1.1 (2025-01-21)

  • ✅ Published to PyPI: https://pypi.org/project/h3-optimizer/
  • ✅ Validated on Apple M4 with real PowerMetrics measurements
  • 🔧 Fixed package imports in h3/__init__.py
  • 📊 Confirmed results: 12.2% speedup, 18.6% efficiency gain

v0.1.0 (2025-01-21)

  • Initial release
  • H3Optimizer with Lipschitz-adaptive learning rates
  • Information-weighted sampler with three-phase training
  • Multi-backend energy tracking (NVML, PowerMetrics, fallback)
  • Complete MNIST demo
  • Full documentation and examples

⚡ H3: Where thermodynamics meets deep learning 📊

Train smarter, not harder. Maximize bits per joule.
