H3 Optimizer ⚡
Thermodynamically efficient deep learning optimizer achieving better accuracy AND lower energy consumption.
H3 combines Lipschitz-adaptive learning rates, information-weighted sampling, and energy tracking to maximize learning efficiency. Based on H3: A Thermodynamically Efficient Machine Learning Framework by Nuno Cardoso (2025).
📊 Validated Performance (CIFAR-10 + ResNet-18, 20 epochs)
Tested on Apple Silicon (M4) - Real hardware measurements
Conservative Mode (Balanced - Recommended)
Configuration: keep_frac=0.75, uniform_mix=0.50
Adam baseline: 74.58% accuracy, 239s, 14.2 kJ
H3 conservative: 76.46% accuracy, 197s, 11.7 kJ
Improvements:
- +1.88pp accuracy ✅
- -17.8% training time ✅
- -17.9% energy consumption ✅
- +12% thermodynamic efficiency (η) ✅
Aggressive Mode (Green - Maximum Efficiency)
Configuration: keep_frac=0.55, uniform_mix=0.30
Adam baseline: 74.58% accuracy, 239s, 14.2 kJ
H3 aggressive: 77.83% accuracy, 169s, 10.0 kJ
Improvements:
- +3.25pp accuracy ✅
- -29.4% training time ✅
- -29.6% energy consumption ✅
- +23% thermodynamic efficiency (η) ✅
Key Finding: H3 often IMPROVES accuracy while saving energy and time, due to intelligent data selection focusing on informative examples and adaptive learning rates based on local curvature.
Configuration Details
- Phases: 15% warmup, 70% thermodynamic, 15% consolidation
- Minimum 20 epochs recommended for convergence
- Conservative uses more data (75%) with higher exploration (50% uniform)
- Aggressive uses less data (55%) with focused sampling (30% uniform)
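For concreteness, here is a minimal sketch (a hypothetical helper written for this README, not a package API) of how the 15/70/15 split maps epochs to phases on a 20-epoch run:

def phase_for_epoch(epoch, total_epochs, warmup_frac=0.15, consolidation_frac=0.15):
    # Illustrative 15/70/15 schedule; the package may compute boundaries differently
    warmup_end = round(warmup_frac * total_epochs)                                  # 3 for 20 epochs
    consolidation_start = total_epochs - round(consolidation_frac * total_epochs)   # 17 for 20 epochs
    if epoch < warmup_end:
        return 'warmup'
    if epoch >= consolidation_start:
        return 'consolidation'
    return 'thermodynamic'

# 20 epochs: epochs 0-2 warmup, 3-16 thermodynamic, 17-19 consolidation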
Note: Always profile your dataset first with H3Profiler to establish baselines before using the optimizer.
🔥 MNIST Baseline (Simple Dataset)
| Metric | Adam Baseline | H3 Optimizer | Difference |
|---|---|---|---|
| Training Time | 23.07s | 20.27s | ⚡ 12.2% faster |
| Test Accuracy | 99.20% | 99.13% | 🎯 -0.07pp |
| Energy Used | 1357.68 J | 1172.02 J | 💡 13.7% reduction |
| Efficiency η | 0.002375 bits/J | 0.002816 bits/J | 📈 +18.6% |
MNIST with SimpleCNN (5 epochs). Measured on Apple M4 with PowerMetrics energy monitoring. MNIST is a simple dataset - see CIFAR-10 results above for performance on complex datasets.
✅ Real Hardware Validation
H3 has been validated on multiple platforms with real energy measurements:
Apple M4 (Mac Studio/MacBook Pro):
- Energy Backend: PowerMetrics (real-time power monitoring)
- MNIST: 12.2% speedup, 18.6% efficiency gain
- Package: h3-optimizer on PyPI (pip install h3-optimizer)
Testing Environment:
# Anyone can reproduce these results:
pip install h3-optimizer
# Run the validation:
from h3 import H3Optimizer, LossTracker, EnergyTracker
# ... see examples/quickstart.py for complete code
Three-Phase Training Observed:
- Warmup (epoch 1): η = 0.010686 bits/J (initialization)
- Thermodynamic (epochs 2-4): η = 0.006572 → 0.003766 bits/J (optimization)
- Consolidation (epoch 5): η = 0.002816 bits/J (stabilization)
The efficiency metric η changes across phases as expected from theory!
⚡ Quick Start
from h3.optimizer import H3Optimizer
from h3.sampler import LossTracker, InformationWeightedSampler, IndexedDataset
from h3.energy_tracker import EnergyTracker
# Wrap dataset for loss tracking
dataset = IndexedDataset(your_dataset)
loss_tracker = LossTracker(num_samples=len(dataset))
# Initialize H3 optimizer
optimizer = H3Optimizer(model.parameters(), lr=1e-3)
# Track thermodynamic efficiency
tracker = EnergyTracker(device='cuda')
tracker.start()
# Three-phase training
for epoch in range(num_epochs):
    phase = 'warmup' if epoch < 1 else 'thermodynamic' if epoch < 4 else 'consolidation'
    sampler = InformationWeightedSampler(
        dataset, loss_tracker,
        keep_frac=0.65 if phase == 'thermodynamic' else 1.0,
        uniform_mix=0.2 if phase == 'thermodynamic' else 1.0
    )
    # ... train with sampler ...

results = tracker.stop()
print(f"Efficiency: {results['efficiency_bits_per_j']:.6f} bits/J")
Run the complete demo:
python examples/quickstart.py # H3 vs Adam comparison on MNIST
🎯 Preset Configurations
H3 provides pre-tuned configurations for common use cases - no hyperparameter tuning required:
from h3.presets import h3_mnist_fast
# One-liner setup for MNIST-like datasets
optimizer, loss_tracker, indexed_dataset = h3_mnist_fast(
    model.parameters(),
    train_dataset
)

# Training loop with automatic phase management
from torch.utils.data import DataLoader

for epoch in range(10):
    # NOTE: `preset` denotes the preset configuration object;
    # see docs/PRESETS.md for how it is obtained.
    phase, sampler = preset.get_sampler(epoch, total_epochs=10)
    loader = DataLoader(indexed_dataset, sampler=sampler, batch_size=64)
    # ... your training code ...
Available presets:
- h3_mnist_fast() - For highly redundant data (MNIST, Fashion-MNIST)
  - 10-15% speedup, 15-20% efficiency gain
  - Accuracy maintained (< 0.2pp difference)
- h3_cifar_safe() - For complex datasets (CIFAR-10, CIFAR-100)
  - 5-8% speedup, 8-12% efficiency gain
  - Small accuracy trade-off (-0.5 to -1.5pp)
- h3_edge() - For edge devices (IoT, mobile ML)
  - 15-25% speedup, 20-30% efficiency gain
  - Accepts larger accuracy trade-off (-2 to -4pp) for maximum energy savings
See docs/PRESETS.md for detailed documentation.
📊 Training Analysis & Logging
H3 includes built-in logging and analysis tools for tracking thermodynamic efficiency:
from h3.hooks import ThermoAuditLogger
# Setup logging
logger = ThermoAuditLogger(
    "my_experiment",
    metadata={"model": "ResNet-18", "dataset": "CIFAR-10"},
    config={"epochs": 20, "batch_size": 128, "lr": 1e-3}
)

# In training loop
logger.log_epoch(
    epoch=epoch,
    phase=phase,
    keep_frac=0.65,
    uniform_mix=0.2,
    train_loss=loss.item(),
    val_acc=accuracy,
    energy_stats=tracker.get_current_stats()
)
# Analyze results
from h3 import explain_thermo_log
print(explain_thermo_log(logger.get_path()))
CLI tool for analysis:
# Analyze single run
h3-report --h3-log mnist_h3.csv
# Compare H3 vs baseline
h3-report --h3-log mnist_h3.csv --baseline-log mnist_adam.csv
# Compare multiple configurations
h3-report --compare run1.csv run2.csv run3.csv
The h3-report tool provides:
- Final metrics summary (accuracy, energy, efficiency)
- Phase breakdown analysis
- Heuristic assessment (starvation/aggressive/moderate/conservative)
- Green Score calculation (energy efficiency ratio)
- Automatic verdict with tuning recommendations
📦 Feature Overview
Core Features (Production-Ready)
| Feature | Description | Status |
|---|---|---|
| H3Optimizer | Thermodynamic optimizer with adaptive learning | ✅ Validated |
| H3Profiler | Zero-risk profiling for any optimizer | ✅ Stable |
| AutoH3 | Zero-config automation with auto-tuning | ✅ Ready |
| η-Controller | Automatic hyperparameter adjustment | ✅ Functional |
| ThermoAuditLogger | Experiment tracking and analysis | ✅ Stable |
| Preset Configurations | One-liner setup (mnist_fast, cifar_safe, etc.) | ✅ Ready |
| CLI Tools | h3-report for analysis and comparison | ✅ Stable |
All features tested and validated on real hardware (Apple M4).
🚀 Advanced Features
H3 provides three levels of automation to match your needs:
1. H3Profiler - Zero-Risk Profiling 🔬
Profile ANY optimizer (Adam, SGD, AdamW, etc.) without changing your training code:
from h3 import H3Profiler
# Wrap your existing training loop
profiler = H3Profiler(device='cuda', name="mnist_adam")
profiler.start()
# Your normal training loop - no changes needed!
for epoch in range(20):
    for batch in train_loader:
        # ... your normal training code with Adam/SGD ...
        profiler.log_batch(loss.item())

    acc = evaluate(model, test_loader)
    profiler.log_epoch(accuracy=acc)
# Get comprehensive analysis
results = profiler.stop()
print(profiler.get_report()) # Detailed thermodynamic analysis
profiler.export_csv("./profiles/adam_run.csv")
What it does:
- ✅ Measures thermodynamic efficiency (η = bits/joule) in real-time
- ✅ Finds optimal stopping point (diminishing returns detection)
- ✅ Calculates energy waste: "You used 20% more energy than needed"
- ✅ Suggests when H3 optimization could help
- ✅ Zero risk - just measurement, no changes to training
Use when: You want to understand your current training efficiency before committing to H3.
2. η-Controller - Automatic Hyperparameter Tuning 🎛️
Let H3 tune itself based on real-time efficiency measurements:
from h3 import create_controlled_h3
from h3.sampler import InformationWeightedSampler
from torch.utils.data import DataLoader

# One-liner setup with automatic tuning
optimizer, controller, loss_tracker, indexed_dataset = create_controlled_h3(
    model.parameters(),
    train_dataset,
    mode="balanced",         # safe / balanced / green / extreme
    accuracy_tolerance=1.0,  # Max acceptable accuracy loss (pp)
    total_epochs=20
)

for epoch in range(20):
    # Controller automatically adjusts parameters
    keep_frac, uniform_mix, phase = controller.control_step()

    # Create sampler with auto-tuned parameters
    sampler = InformationWeightedSampler(
        indexed_dataset, loss_tracker,
        keep_frac=keep_frac,
        uniform_mix=uniform_mix
    )
    loader = DataLoader(indexed_dataset, sampler=sampler, batch_size=128)

    # ... training loop ...

    # Controller learns and adapts
    controller.observe(eta, accuracy, loss, energy, info)

    # Optional: check status
    if epoch % 5 == 0:
        print(controller.get_report())
Operating modes:
"safe"- Maximize accuracy (keep_frac ~0.70, conservative)"balanced"- Balance speed/accuracy (keep_frac ~0.55, recommended)"green"- Maximize energy savings (keep_frac ~0.45, aggressive)"extreme"- Maximum savings (keep_frac ~0.35, accepts accuracy loss)
What it does:
- ✅ Auto-adjusts keep_frac and uniform_mix in real-time
- ✅ Respects accuracy constraints (won't sacrifice too much accuracy)
- ✅ Prevents data starvation (detects loss volatility)
- ✅ Automatic phase transitions (warmup → thermodynamic → consolidation)
- ✅ Self-regulating thermodynamic system
Use when: You want H3's benefits but don't want to manually tune hyperparameters.
3. AutoH3 - Complete Zero-Config Automation 🤖
The simplest possible API - combining profiler + controller + everything:
from h3 import AutoH3
# Single line creates everything
auto = AutoH3(
    model.parameters(),
    train_dataset,
    mode="balanced",
    name="mnist_auto_experiment"
)
auto.start()

# Simple training interface
for epoch in range(20):
    loader = auto.get_loader(batch_size=64)
    for data, target, indices in loader:
        data, target = data.to(device), target.to(device)
        # One-line training step
        loss = auto.training_step(model, data, target, criterion, indices)

    # One-line evaluation
    accuracy = auto.evaluate_epoch(model, test_loader)

# Comprehensive final report
results = auto.finish()
What it does:
- ✅ Combines H3Profiler + η-Controller + EnergyTracker
- ✅ Zero configuration - just pick a mode
- ✅ Automatic profiling and hyperparameter tuning
- ✅ Simple training interface (training_step, evaluate_epoch)
- ✅ Comprehensive final reports with all metrics
- ✅ Works with standard PyTorch models and datasets
Use when: You want the absolute simplest H3 experience with maximum automation.
Feature Comparison
| Feature | Manual H3 | Presets | η-Controller | AutoH3 | H3Profiler |
|---|---|---|---|---|---|
| Setup complexity | High | Low | Medium | Very Low | Minimal |
| Hyperparameter tuning | Manual | Pre-tuned | Automatic | Automatic | N/A |
| Works with any optimizer | No | No | No | No | ✅ YES |
| Real-time adaptation | No | No | ✅ YES | ✅ YES | No |
| Zero risk | No | No | No | No | ✅ YES |
| Best for | Research | Quick start | Production | Beginners | Profiling |
Recommended Workflow
1. Start with H3Profiler - Profile your existing training to understand baseline efficiency
2. Try AutoH3 - Get H3 benefits with zero configuration
3. Tune with η-Controller - Fine-tune for production if needed
4. Use Presets - If you want manual control with good defaults
🧠 What is H3?
H3 treats machine learning as a thermodynamic process that converts electrical energy into predictive information. Instead of just minimizing loss, H3 maximizes:
η_thermo = ΔI / E (bits of information gained per joule of energy)
This is achieved through three innovations:
1. Lipschitz-Adaptive Learning Rates (Eq. 4.10)
η_k+1 = min(η_max, γ/(L̂_k + ε))
Adjusts step size based on local loss curvature:
- Larger steps in smooth regions → faster convergence
- Smaller steps in steep regions → better stability
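A minimal sketch of this rule, assuming the local Lipschitz constant is estimated from successive gradients (the finite-difference estimator and all names here are illustrative, not the package's internals):

import torch

def lipschitz_adaptive_lr(grad, prev_grad, theta, prev_theta,
                          lr_max=1e-2, gamma=0.9, eps=1e-8):
    # Assumed estimator: L_hat ~ ||g_k - g_{k-1}|| / ||theta_k - theta_{k-1}||
    L_hat = torch.norm(grad - prev_grad) / (torch.norm(theta - prev_theta) + eps)
    # Eq. 4.10: clip the step size by the local curvature estimate
    return min(lr_max, gamma / (L_hat.item() + eps))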
2. Information-Weighted Sampling (Eq. 5.1)
L_i^(t) = (1-β)*L_i^(t-1) + β*L_current
Prioritizes high-loss (high-information) examples:
- Warmup: Uniform sampling for initialization
- Thermodynamic: Weighted sampling (top 65% by loss)
- Consolidation: Uniform sampling for rebalancing
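A numpy sketch of the mechanism: the EMA update follows Eq. 5.1, while the keep_frac / uniform_mix behavior is inferred from the parameter names and may differ from the package's sampler:

import numpy as np

def update_smoothed_losses(ema_losses, indices, batch_losses, beta=0.1):
    # Eq. 5.1: L_i(t) = (1 - beta) * L_i(t-1) + beta * L_current
    ema_losses[indices] = (1 - beta) * ema_losses[indices] + beta * batch_losses
    return ema_losses

def draw_batch_indices(ema_losses, keep_frac=0.65, uniform_mix=0.2,
                       batch_size=64, rng=None):
    rng = rng or np.random.default_rng()
    n = len(ema_losses)
    pool = np.argsort(ema_losses)[-int(keep_frac * n):]  # top keep_frac by smoothed loss
    n_uniform = int(uniform_mix * batch_size)            # exploration share
    weighted = rng.choice(pool, size=batch_size - n_uniform)
    uniform = rng.integers(0, n, size=n_uniform)
    return np.concatenate([weighted, uniform])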
3. Energy Tracking (Eq. 4.6, 4.7)
E = ∫ P(t) dt (trapezoidal integration)
ΔI = [L_initial - L_final] / ln(2)
η = ΔI / E
Real-time monitoring via:
- NVML (NVIDIA GPUs)
- PowerMetrics (Apple Silicon)
- Fallback (TDP estimation)
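A backend-independent sketch of the two measurements (Eq. 4.6 for energy, Eq. 4.3 for information gain):

import math

def trapezoidal_energy(power_watts, dt_seconds):
    # E = ∫ P(t) dt, approximated by trapezoids over successive power samples
    return sum((p0 + p1) / 2 * dt_seconds
               for p0, p1 in zip(power_watts, power_watts[1:]))

def info_gain_bits(loss_initial, loss_final):
    # Cross-entropy losses are in nats; dividing by ln(2) converts to bits
    return (loss_initial - loss_final) / math.log(2)

# Example: power sampled at 1 Hz
# trapezoidal_energy([22.0, 25.5, 24.0], dt_seconds=1.0) -> 23.75 + 24.75 = 48.5 J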
This is not a metaphor: thermodynamic efficiency is a measurable physical quantity with units of bits/joule.
📦 Installation
Validated on PyPI (v0.1.1):
pip install h3-optimizer
This is the same package used in our validation tests. No local builds needed.
From source (for development):
git clone https://github.com/nfocardoso/EMSTI.git
cd EMSTI
pip install -e .
Requirements
- Python ≥ 3.8
- PyTorch ≥ 1.12.0
- NumPy ≥ 1.21.0
- torchvision ≥ 0.13.0 (for examples)
Optional for GPU power monitoring:
- pynvml (NVIDIA GPUs)
- powermetrics (Apple Silicon, requires sudo)
📖 Full Example
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from h3.optimizer import H3Optimizer
from h3.sampler import LossTracker, InformationWeightedSampler, IndexedDataset
from h3.energy_tracker import EnergyTracker
# Setup
model = YourModel()
dataset = IndexedDataset(YourDataset())
loss_tracker = LossTracker(len(dataset), smoothing=0.1)
optimizer = H3Optimizer(
    model.parameters(),
    lr=1e-3,
    lipschitz_safety=0.9,
    lipschitz_update_interval=10
)
tracker = EnergyTracker(device='cuda')
criterion = nn.CrossEntropyLoss(reduction='none')
# Training with three phases
tracker.start()
for epoch in range(num_epochs):
    # Determine phase
    if epoch < warmup_epochs:
        phase = 'warmup'
        sampler = InformationWeightedSampler(
            dataset, loss_tracker, uniform_mix=1.0
        )
    elif epoch >= (num_epochs - consolidation_epochs):
        phase = 'consolidation'
        sampler = InformationWeightedSampler(
            dataset, loss_tracker, uniform_mix=1.0
        )
    else:
        phase = 'thermodynamic'
        sampler = InformationWeightedSampler(
            dataset, loss_tracker,
            keep_frac=0.65,  # Top 65% by loss
            uniform_mix=0.2  # 80% weighted, 20% uniform
        )

    loader = DataLoader(dataset, sampler=sampler, batch_size=64)

    for data, target, indices in loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)

        # Compute per-sample losses
        loss_vector = criterion(output, target)
        loss = loss_vector.mean()

        # Update loss tracker
        loss_tracker.update(indices, loss_vector.detach())
        tracker.log_loss(loss.item())

        loss.backward()
        optimizer.step()

    print(f"Epoch {epoch+1} [{phase}]: Loss={loss.item():.4f}")
# Results
results = tracker.stop()
print(f"\nThermodynamic Efficiency: {results['efficiency_bits_per_j']:.6f} bits/J")
print(f"Total Energy: {results['total_energy_j']:.2f} J")
print(f"Information Gain: {results['info_gain_bits']:.4f} bits")
📈 Benchmark Results
MNIST Classification (Validated)
Hardware: Apple M4 with Metal Performance Shaders
Energy Backend: PowerMetrics (real hardware monitoring)
Model: SimpleCNN (Conv → Conv → FC → FC)
Dataset: MNIST (60k train, 10k test)
Epochs: 5
Batch size: 64
Baseline (Adam):
- Uniform sampling, standard training
- Time: 23.07s
- Final accuracy: 99.20%
- Energy: 1357.68 J
- Efficiency: 0.002375 bits/J
H3 Optimizer:
- Three-phase training (warmup → thermodynamic → consolidation)
- Lipschitz-adaptive LR (γ=0.9, update_interval=10)
- Information-weighted sampling (keep_frac=0.65 in thermodynamic phase)
- Time: 20.27s (⚡ 12.2% faster)
- Final accuracy: 99.13% (🎯 maintained)
- Energy: 1172.02 J (💡 13.7% less)
- Efficiency: 0.002816 bits/J (📈 18.6% improvement)
Key Observations:
- Speedup achieved even on highly efficient Apple Silicon
- Real energy savings measured via PowerMetrics (not estimates)
- Accuracy maintained with minimal variance
- Three-phase strategy shows clear efficiency progression
Original Paper Results (Reference)
For comparison, the original H3 paper reported results on different hardware:
| Dataset | Model | Baseline Time | H3 Time | Speedup | Efficiency Gain |
|---|---|---|---|---|---|
| CIFAR-10 | ResNet-18 | 752.7s | 537.2s | 28.6% | ~50% |
| CIFAR-100 | ResNet-18 | 272.8s | 231.6s | 15.0% | ~50% |
| Tiny-ImageNet | ResNet-18 | 1056.3s | 875.7s | 17.1% | ~53% |
Note: These were measured on different hardware with TDP-based estimates. Our Apple M4 results use real PowerMetrics measurements and show conservative but reproducible gains.
🔬 Theory: Thermodynamic Efficiency
H3 is grounded in information thermodynamics. The key insight:
Learning is energy-to-information conversion
Just as a heat engine converts thermal energy to mechanical work with efficiency η = W/Q, a learning system converts electrical energy to predictive information:
η_thermo = ΔI / E
Where:
- ΔI = Information gained (bits) via cross-entropy reduction
- E = Energy consumed (joules) via hardware power integration
This is not a metaphor: it's a measurable physical quantity with units of bits/joule.
The Three Core Equations
1. Information Gain (Eq. 4.3):
ΔI_bits = [L_initial - L_final] / ln(2)
Measures reduction in average code length (bits needed to encode labels).
2. Energy Integration (Eq. 4.6):
E = ∫ P(t) dt ≈ Σ [(P_k + P_{k+1})/2] * Δt
Trapezoidal integration of instantaneous power measurements.
3. Thermodynamic Efficiency (Eq. 4.7):
η_thermo = ΔI / E
Bits of information gained per joule of energy consumed.
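As a worked check against the MNIST numbers reported above: the H3 run gained ΔI = η · E = 0.002816 bits/J × 1172.02 J ≈ 3.30 bits, while the Adam baseline implies ΔI ≈ 0.002375 × 1357.68 ≈ 3.22 bits. Both runs extract roughly the same information, consistent with their near-identical final accuracy; H3 simply spends fewer joules to get there.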
Why This Matters
Traditional optimizers minimize loss without considering computational cost. H3 explicitly maximizes information-per-energy by:
- Intelligent Resource Allocation: Information-weighted sampling focuses computation on high-value examples
- Adaptive Step Sizing: Lipschitz-based LR adjusts steps to loss landscape geometry
- Direct Measurement: Real-time efficiency tracking enables optimization
This approach is particularly valuable for:
- ⚡ Edge devices with limited battery
- 🌍 Large-scale training with energy costs
- 📱 On-device learning constrained by thermal limits
🏗️ Architecture
H3-Optimizer/
├── h3/
│   ├── __init__.py
│   ├── optimizer.py          # H3Optimizer with Lipschitz-adaptive LR
│   ├── sampler.py            # Information-weighted sampling + LossTracker
│   ├── energy_tracker.py     # Multi-backend power monitoring
│   ├── scheduler.py          # Learning rate schedulers
│   └── utils.py              # Helper functions
├── examples/
│   ├── quickstart.py         # MNIST demo (H3 vs Adam)
│   └── cifar10_demo.py       # CIFAR-10 example
├── tests/                    # Unit tests
├── benchmarks/               # Performance benchmarks
├── docs/                     # Documentation
├── setup.py                  # Package installation
├── requirements.txt          # Dependencies
└── README.md                 # This file
Key Components:
- H3Optimizer: Drop-in replacement for PyTorch optimizers with Lipschitz-adaptive learning rates
- LossTracker: Per-sample loss tracking with exponential smoothing
- InformationWeightedSampler: Three-phase sampling strategy (warmup → thermodynamic → consolidation)
- EnergyTracker: Multi-backend energy monitoring (NVML, PowerMetrics, fallback)
- IndexedDataset: Wrapper to add sample indices for loss tracking
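To illustrate the last item, an IndexedDataset-style wrapper can be as small as the following (a sketch of the concept, not the package's exact implementation):

from torch.utils.data import Dataset

class IndexedDatasetSketch(Dataset):
    # Returns (data, target, index) so per-sample losses can be tracked
    def __init__(self, base_dataset):
        self.base = base_dataset

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        data, target = self.base[idx]
        return data, target, idx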
📚 Citation
If you use H3 in your research, please cite:
@article{cardoso2025h3,
  title={H3: A Thermodynamically Efficient Machine Learning Framework},
  subtitle={Bridging Information Theory, Energy Dissipation, and Learning Dynamics},
  author={Cardoso, Nuno},
  journal={Zenodo},
  year={2025},
  month={October},
  doi={10.5281/zenodo.17433760},
  url={https://zenodo.org/records/14357760},
  note={Implementation validated on Apple M4 hardware with real energy measurements}
}
For the software package:
@software{cardoso2025h3impl,
  title={h3-optimizer: PyPI Package},
  author={Cardoso, Nuno},
  year={2025},
  url={https://pypi.org/project/h3-optimizer/},
  version={0.1.1}
}
Paper: H3: A Thermodynamically Efficient Machine Learning Framework
Package: h3-optimizer on PyPI
🤝 Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Areas of interest:
- Additional backends for energy tracking (AMD, Intel, ARM)
- Benchmarks on larger models (Transformers, Vision Transformers, LLMs)
- Hyperparameter tuning strategies and AutoML integration
- Integration with popular frameworks (HuggingFace Transformers, PyTorch Lightning, fastai)
- Multi-GPU and distributed training support
- Additional sampling strategies
- Theoretical analysis and proofs
Development setup:
git clone https://github.com/nfocardoso/EMSTI.git
cd EMSTI
pip install -e ".[dev]"
pytest tests/
📄 License
MIT License - see LICENSE file for details.
Copyright (c) 2025 Nuno Cardoso
👤 Author
Nuno Cardoso Independent Researcher
- GitHub: @nfocardoso
- Paper: Zenodo
- Repository: EMSTI
🙏 Acknowledgments
This work builds upon the EMSTI (Emergent MatterโSpaceโTimeโInformation) theoretical framework, which proposes a unified view of thermodynamics, information theory, and learning dynamics. The H3 implementation demonstrates that thermodynamic principles can guide practical machine learning optimization.
Special thanks to:
- The PyTorch team for the excellent deep learning framework
- The open-source community for tools and inspiration
- Reviewers and early adopters for valuable feedback
🔗 Related Work
Theoretical Foundation:
- EMSTI Framework - Unified thermodynamic-information theory
- Information Thermodynamics - Foundation of efficiency metrics
Practical Applications:
- examples/quickstart.py - Complete MNIST demo
- examples/cifar10_demo.py - CIFAR-10 training
- Paper Benchmarks - Full experimental results
📝 Changelog
v0.1.1 (2025-01-21)
- ✅ Published to PyPI: https://pypi.org/project/h3-optimizer/
- ✅ Validated on Apple M4 with real PowerMetrics measurements
- 🔧 Fixed package imports in h3/__init__.py
- 📊 Confirmed results: 12.2% speedup, 18.6% efficiency gain
v0.1.0 (2025-01-21)
- Initial release
- H3Optimizer with Lipschitz-adaptive learning rates
- Information-weighted sampler with three-phase training
- Multi-backend energy tracking (NVML, PowerMetrics, fallback)
- Complete MNIST demo
- Full documentation and examples
⚡ H3: Where thermodynamics meets deep learning 🚀
Train smarter, not harder. Maximize bits per joule.