Python bindings to five lightweight C neural network libraries (Tinn, GENANN, FANN, nn1, KANN).
cynn
cynn is a thin Cython wrapper around a set of minimal, dependency-free neural network libraries written in C. It provides a zero-dependency Python library for learning about neural networks and for embedding lightweight models in applications where large machine-learning frameworks are impractical or unnecessary.
Overview
cynn provides Python bindings to five lightweight neural network libraries:
- Tinn – A tiny 3-layer neural network library.
- GENANN – A minimal multi-layer neural network library.
- FANN – Fast Artificial Neural Network library.
- nn1 – Convolutional Neural Network library.
- kann – Multi-layer perceptrons, convolutional neural networks, and recurrent neural networks (including LSTM and GRU).
The project uses Cython to create efficient Python bindings to these C implementations, enabling training and inference with minimal overhead and no dependencies.
When to Use cynn
- Learning neural network fundamentals - Simple C implementations make it easier to understand what's happening compared to opaque framework internals
- Embedded/resource-constrained environments - No required dependencies beyond Python and a C compiler (NumPy is optional)
- Simple prediction tasks - Small classifiers, basic regression, sequence modeling where a full ML stack is overkill
- Minimal footprint applications - Ship trained models without pulling in hundreds of MB of dependencies
- Parallel inference workloads - GIL-free execution enables true multithreading for batch predictions
When NOT to Use cynn
- Production ML at scale - Use PyTorch, TensorFlow, or JAX for GPU acceleration and ecosystem support
- Large datasets - No GPU parallelism; training is CPU-bound
- Complex architectures - Transformers, attention mechanisms, and modern architectures aren't available
- Pre-trained models - No ecosystem of pre-trained weights or model hubs
- Research requiring flexibility - Framework autograd is far more flexible for novel architectures
Choosing a Network
| Use Case | Recommended | Why |
|---|---|---|
| Learning basics | Tinn | Simplest API, single hidden layer, easy to understand |
| Simple classification/regression | Tinn, Genann | Lightweight, fast training |
| Deep feedforward networks | Genann, FANN | Multiple hidden layers, flexible architecture |
| Need momentum/learning rate tuning | FANN | Settable learning parameters |
| Image processing / CNNs | nn1, KANN | Convolutional layer support |
| Sequence modeling (text, time series) | KANN | LSTM, GRU, RNN with BPTT |
| Custom architectures | KANN | GraphBuilder API for computational graphs |
| Numerical precision critical | Genann, nn1 | Float64 precision |
| Memory constrained | Tinn, FANN | Float32, minimal overhead |
| Sparse/partially connected networks | FANN | Configurable connection rate |
Library Comparison
| Library | Architecture | Precision | Key Strength |
|---|---|---|---|
| Tinn | Fixed 3-layer (in, hidden, out) | float32 | Simplicity |
| Genann | Multi-layer MLP | float64 | Deep networks, precision |
| FANN | Flexible MLP | float32 | Learning parameters, sparse networks |
| nn1 | CNN (conv + dense layers) | float64 | Image/spatial data |
| KANN | MLP, CNN, LSTM, GRU, RNN | float32 | Recurrent networks, custom graphs |
Features
- Five network implementations:
  - TinnNetwork: Simple 3-layer architecture (input, hidden, output) using float32
  - GenannNetwork: Flexible multi-layer architecture with arbitrary depth using float64
  - FannNetwork: Flexible multi-layer architecture with settable learning parameters using float32
  - CNNNetwork: Layer-based convolutional neural network with input, conv, and fully-connected layers using float64
  - KannNeuralNetwork (KANN): Advanced neural networks including MLPs, LSTMs, GRUs, and RNNs using float32
- Backpropagation training with configurable learning rate
- Save/load trained models to disk
- Buffer protocol support - works with lists, tuples, array.array, NumPy arrays, etc.
- GIL-free execution - true multithreading support for parallel inference/training
- Fast C implementation with Python convenience
- Zero required dependencies (NumPy is optional)
Installation
Requirements
- Python >= 3.13
- uv (recommended) or pip
- CMake >= 3.15
- C compiler
Build from source
# Clone the repository
git clone https://github.com/shakfu/cynn
cd cynn
# Build and install to a local .venv (using uv)
make
Usage
Basic Example - TinnNetwork
from cynn.tinn import TinnNetwork
# Create a network: 2 inputs, 4 hidden neurons, 1 output
net = TinnNetwork(2, 4, 1)
# Make a prediction
inputs = [0.5, 0.3]
output = net.predict(inputs)
print(f"Prediction: {output}")
# Train the network
targets = [0.8]
learning_rate = 0.5
loss = net.train(inputs, targets, learning_rate)
print(f"Loss: {loss}")
Basic Example - GenannNetwork
from cynn.genann import GenannNetwork
# Create a network: 2 inputs, 2 hidden layers with 4 neurons each, 1 output
net = GenannNetwork(2, 2, 4, 1)
# Make a prediction
inputs = [0.5, 0.3]
output = net.predict(inputs)
print(f"Prediction: {output}")
# Train the network
targets = [0.8]
learning_rate = 0.1
loss = net.train(inputs, targets, learning_rate)
print(f"Loss: {loss}")
# GenannNetwork has additional features
print(f"Total weights: {net.total_weights}")
print(f"Total neurons: {net.total_neurons}")
# Create a copy of the network
net_copy = net.copy()
# Randomize weights
net.randomize()
Basic Example - CNNNetwork
from cynn.cnn import CNNNetwork
# Create a convolutional neural network
net = CNNNetwork()
net.create_input_layer(1, 28, 28) # 28x28 grayscale image input
net.add_conv_layer(8, 24, 24, kernel_size=5, stride=1) # 8 filters, 5x5 kernel
net.add_conv_layer(16, 12, 12, kernel_size=5, stride=2) # 16 filters, stride 2
net.add_full_layer(10) # 10 output classes
# Prepare input (flattened 28x28 image)
inputs = [0.5] * (28 * 28) # 784 values
targets = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] # one-hot encoded
# Train the network
error = net.train(inputs, targets, learning_rate=0.01)
print(f"Training error: {error}")
# Make predictions
outputs = net.predict(inputs)
predicted_class = outputs.index(max(outputs))
print(f"Predicted class: {predicted_class}")
# CNNNetwork has additional features
print(f"Network layers: {net.num_layers}")
print(f"Input shape: {net.input_shape}")
print(f"Output size: {net.output_size}")
# Access individual layers
for layer in net.layers:
print(f"Layer {layer.layer_id}: type={layer.layer_type}, shape={layer.shape}")
Basic Example - FannNetwork
from cynn.fann import FannNetwork
# Create a network: 2 inputs, two hidden layers (4 and 3 neurons), 1 output
net = FannNetwork([2, 4, 3, 1])
# Make a prediction
inputs = [0.5, 0.3]
output = net.predict(inputs)
print(f"Prediction: {output}")
# Adjust learning parameters
net.learning_rate = 0.7
net.learning_momentum = 0.1
# Train the network
targets = [0.8]
loss = net.train(inputs, targets)
print(f"Loss: {loss}")
# FannNetwork has additional features
print(f"Network layers: {net.layers}")
print(f"Total connections: {net.total_connections}")
print(f"Learning rate: {net.learning_rate}")
# Create a sparse network (50% connectivity)
sparse_net = FannNetwork([2, 8, 1], connection_rate=0.5)
# Create a copy of the network
net_copy = net.copy()
# Randomize weights to specific range
net.randomize_weights(-0.5, 0.5)
Basic Example - KannNeuralNetwork (KANN)
from cynn.kann import KannNeuralNetwork, COST_MSE, COST_MULTI_CROSS_ENTROPY
import array
# Create a multi-layer perceptron
net = KannNeuralNetwork.mlp(
input_size=4,
hidden_sizes=[16, 8], # Two hidden layers
output_size=3,
cost_type=COST_MULTI_CROSS_ENTROPY,
dropout=0.1
)
# Network properties
print(f"Input dimension: {net.input_dim}")
print(f"Output dimension: {net.output_dim}")
print(f"Number of trainable variables: {net.n_var}")
# Prepare data (KANN uses float32 typed memoryviews)
x_train = array.array('f', [0.1, 0.2, 0.3, 0.4] * 100) # 100 samples
y_train = array.array('f', [1.0, 0.0, 0.0] * 100) # One-hot labels
# Reshape for 2D memoryview (100 samples x 4 features)
# In practice, use numpy or Array2D helper
import numpy as np
x = np.array(x_train, dtype=np.float32).reshape(100, 4)
y = np.array(y_train, dtype=np.float32).reshape(100, 3)
# Train (returns number of epochs)
epochs = net.train(x, y, learning_rate=0.001, max_epochs=50)
print(f"Trained for {epochs} epochs")
# Single inference
inputs = array.array('f', [0.1, 0.2, 0.3, 0.4])
output = net.apply(inputs)
print(f"Prediction: {list(output)}")
# Save and load models
net.save("model.kann")
loaded = KannNeuralNetwork.load("model.kann")
KANN - LSTM for Sequence Modeling
from cynn.kann import KannNeuralNetwork, COST_MULTI_CROSS_ENTROPY
# Create an LSTM network for sequence modeling
lstm = KannNeuralNetwork.lstm(
input_size=128, # Vocabulary size (one-hot)
hidden_size=256, # LSTM hidden state size
output_size=128, # Output vocabulary size
cost_type=COST_MULTI_CROSS_ENTROPY
)
# Train on sequences (e.g., for text generation)
sequences = [
[10, 20, 30, 40, 50, 60], # Token sequences
[15, 25, 35, 45, 55, 65],
# ... more sequences
]
history = lstm.train_rnn(
sequences,
seq_length=32, # BPTT sequence length
vocab_size=128,
learning_rate=0.001,
max_epochs=100,
grad_clip=5.0, # Gradient clipping
verbose=1
)
print(f"Final loss: {history['loss'][-1]}")
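The integer token sequences above have to come from raw data somehow. One simple approach, shown here as a pure-Python sketch independent of the cynn API, is a character-level vocabulary:

```python
# Sketch: building integer token sequences from raw text with a
# character-level vocabulary (word/subword tokenizers work the same way).
text = "hello world"

# Map each distinct character to a stable integer id.
vocab = {ch: i for i, ch in enumerate(sorted(set(text)))}
tokens = [vocab[ch] for ch in text]

# Chop the token stream into fixed-length training sequences.
seq_length = 4
sequences = [tokens[i:i + seq_length]
             for i in range(0, len(tokens) - seq_length + 1, seq_length)]

print(len(vocab), sequences)
```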
KANN - Custom Network with GraphBuilder
from cynn.kann import GraphBuilder
# Build a custom architecture
builder = GraphBuilder()
# Define network graph
x = builder.input(10)
h = builder.dense(x, 32)
h = builder.relu(h)
h = builder.dropout(h, 0.2)
h = builder.dense(h, 16)
h = builder.tanh(h)
cost = builder.softmax_cross_entropy(h, 5)
# Create network from graph
net = builder.build(cost)
# Use like any other KANN network
print(f"Variables: {net.n_var}")
Batch Training
All network types support batch training for improved efficiency:
from cynn.genann import GenannNetwork
net = GenannNetwork(2, 1, 4, 1)
# Prepare batch data (XOR problem)
inputs_list = [
[0.0, 0.0],
[0.0, 1.0],
[1.0, 0.0],
[1.0, 1.0]
]
targets_list = [
[0.0],
[1.0],
[1.0],
[0.0]
]
# Train on entire batch with optional shuffling
stats = net.train_batch(inputs_list, targets_list, rate=0.1, shuffle=True)
print(f"Batch mean loss: {stats['mean_loss']}")
print(f"Batch total loss: {stats['total_loss']}")
print(f"Examples trained: {stats['count']}")
# Train for multiple epochs
for epoch in range(100):
stats = net.train_batch(inputs_list, targets_list, rate=0.1, shuffle=True)
if epoch % 10 == 0:
print(f"Epoch {epoch}: loss = {stats['mean_loss']:.4f}")
Evaluating Without Training
Use evaluate() to compute loss without updating weights (useful for validation):
from cynn.tinn import TinnNetwork
net = TinnNetwork(2, 4, 1)
# Training data
train_inputs = [0.5, 0.3]
train_targets = [0.8]
# Validation data
val_inputs = [0.4, 0.6]
val_targets = [0.7]
# Train on training data
train_loss = net.train(train_inputs, train_targets, rate=0.5)
print(f"Training loss: {train_loss}")
# Evaluate on validation data (no weight updates)
val_loss = net.evaluate(val_inputs, val_targets)
print(f"Validation loss: {val_loss}")
# Verify evaluate doesn't change weights
val_loss2 = net.evaluate(val_inputs, val_targets)
assert val_loss == val_loss2 # Should be identical
Training with Validation
Combine batch training with evaluation for train/validation splits:
from cynn.fann import FannNetwork
net = FannNetwork([2, 8, 1])
net.learning_rate = 0.5
# Split data into train/validation
train_inputs = [[0.0, 0.0], [0.0, 1.0]]
train_targets = [[0.0], [1.0]]
val_inputs = [[1.0, 0.0], [1.0, 1.0]]
val_targets = [[1.0], [0.0]]
for epoch in range(50):
# Train on training set
train_stats = net.train_batch(train_inputs, train_targets, shuffle=True)
# Evaluate on validation set (no weight updates)
val_losses = [net.evaluate(inp, tgt) for inp, tgt in zip(val_inputs, val_targets)]
val_loss = sum(val_losses) / len(val_losses)
if epoch % 10 == 0:
print(f"Epoch {epoch}: train_loss={train_stats['mean_loss']:.4f}, val_loss={val_loss:.4f}")
Context Manager Support
All network types support Python's context manager protocol (with statement) for cleaner code:
from cynn.tinn import TinnNetwork
from cynn.genann import GenannNetwork
from cynn.cnn import CNNNetwork
# Automatic resource management with context manager
with TinnNetwork(2, 4, 1) as net:
output = net.predict([0.5, 0.3])
loss = net.train([0.5, 0.3], [0.8], rate=0.5)
print(f"Loss: {loss}")
# Works with all network types
with GenannNetwork(2, 1, 4, 1) as net:
loss = net.train([0.5, 0.3], [0.8], rate=0.1)
# Useful for temporary networks
with CNNNetwork() as net:
net.create_input_layer(1, 4, 4)
net.add_full_layer(2)
result = net.predict([0.5] * 16)
# Networks remain usable after exiting context
net = TinnNetwork(2, 4, 1)
with net as network:
network.train([0.5, 0.3], [0.8], rate=0.5)
# Network 'net' is still valid and usable here
output = net.predict([0.5, 0.3])
Note: The context manager protocol ensures clean resource handling, but cynn networks already handle cleanup automatically via __dealloc__, so using with is optional and primarily for code clarity. This may be used for other purposes in the future, such as triggering graph drawing.
XOR Problem
from cynn.tinn import TinnNetwork, seed
import random
import time
# Seed random number generators
seed(int(time.time()))
random.seed(int(time.time()))
# XOR training data
xor_data = [
([0.0, 0.0], [0.0]),
([0.0, 1.0], [1.0]),
([1.0, 0.0], [1.0]),
([1.0, 1.0], [0.0]),
]
# Create network
net = TinnNetwork(2, 4, 1)
# Train with constant learning rate
rate = 0.5
for epoch in range(3000):
random.shuffle(xor_data)
total_error = 0.0
for inputs, targets in xor_data:
error = net.train(inputs, targets, rate)
total_error += error
avg_error = total_error / len(xor_data)
if epoch % 500 == 0:
print(f"Epoch {epoch}: avg error = {avg_error:.6f}")
# Test predictions
for inputs, expected in xor_data:
pred = net.predict(inputs)
result = "✓" if abs(pred[0] - expected[0]) < 0.3 else "✗"
print(f"{result} {inputs} -> {pred[0]:.4f} (expected {expected[0]})")
Example output:
% uv run python tests/examples/xor_problem.py
Epoch 0: avg error = 0.129680
Epoch 500: avg error = 0.127645
Epoch 1000: avg error = 0.123747
Epoch 1500: avg error = 0.029109
Epoch 2000: avg error = 0.008168
Epoch 2500: avg error = 0.004388
✓ [0.0, 0.0] -> 0.0893 (expected 0.0)
✓ [1.0, 0.0] -> 0.9285 (expected 1.0)
✓ [0.0, 1.0] -> 0.9284 (expected 1.0)
✓ [1.0, 1.0] -> 0.0721 (expected 0.0)
NumPy Support
The library supports any object implementing the buffer protocol, including NumPy arrays:
import numpy as np
from cynn.tinn import TinnNetwork
# Create network
net = TinnNetwork(2, 4, 1)
# Use NumPy arrays (float32 recommended, but float64 works too)
inputs = np.array([0.5, 0.3], dtype=np.float32)
targets = np.array([0.8], dtype=np.float32)
# Train with numpy arrays
loss = net.train(inputs, targets, 0.1)
# Predict with numpy arrays
prediction = net.predict(inputs)
# Batch processing
batch = np.array([
[0.1, 0.2],
[0.3, 0.4],
[0.5, 0.6],
], dtype=np.float32)
predictions = [net.predict(row) for row in batch]
Save and Load Models
from cynn.tinn import TinnNetwork
# Train a network
net = TinnNetwork(2, 4, 1)
# ... training code ...
# Save to disk
net.save("model.tinn")
# Load from disk
loaded_net = TinnNetwork.load("model.tinn")
# Use loaded network
prediction = loaded_net.predict([0.5, 0.3])
Examples
The tests/examples/ directory contains 9 documented examples demonstrating each network type with real tasks:
| Example | Network | Task | Dataset |
|---|---|---|---|
| tinn_xor.py | TinnNetwork | XOR classification | Inline |
| genann_iris.py | GenannNetwork | Iris classification | iris.csv |
| fann_regression.py | FannNetwork | Sine wave regression | sine_wave.csv |
| cnn_mnist.py | CNNNetwork | Digit classification | mnist_subset.csv |
| kann_mlp_iris.py | KannNeuralNetwork.mlp() | Iris classification | iris.csv |
| kann_lstm_sequence.py | KannNeuralNetwork.lstm() | Sequence prediction | sequences.csv |
| kann_gru_text.py | KannNeuralNetwork.gru() | Text modeling | shakespeare_tiny.txt |
| kann_rnn_timeseries.py | KannNeuralNetwork.mlp() | Time series | sine_wave.csv |
| kann_text_generation.py | KannNeuralNetwork.lstm() | Text generation | shakespeare_tiny.txt |
# Run any example
uv run python tests/examples/tinn_xor.py
# Run with options
uv run python tests/examples/kann_lstm_sequence.py --epochs 100 --hidden-size 64
# Run all examples with summary
uv run python tests/examples/run_all_examples.py
# Run at 50% training intensity (faster)
uv run python tests/examples/run_all_examples.py --ratio 0.5
# Run at 200% training intensity (slower)
uv run python tests/examples/run_all_examples.py --ratio 2.0
Datasets are stored in tests/data/. See tests/examples/README.md for detailed documentation.
Development
Building
# Standard build
make build
# CMake build (for debugging)
make cmake
# Clean build artifacts
make clean
Testing
# Run all tests
make test
# Run with verbose output
uv run pytest -v
# Run specific test file
uv run pytest tests/test_basic.py -v
Project Structure
cynn/
├── src/
│ └── cynn/
│ ├── __init__.py # Package entry (lazy imports)
│ ├── _common.pxi # Shared Cython code
│ ├── tinn.pyx # TinnNetwork wrapper
│ ├── genann.pyx # GenannNetwork wrapper
│ ├── fann.pyx # FannNetwork wrapper
│ ├── cnn.pyx # CNNNetwork, CNNLayer wrappers
│ ├── kann.pyx # KannNeuralNetwork, GraphBuilder, etc.
│ ├── tinn.pxd # Tinn C declarations
│ ├── genann.pxd # Genann C declarations
│ ├── ffann.pxd # FANN C declarations
│ ├── cnn.pxd # nn1 CNN C declarations
│ ├── kann.pxd # KANN C declarations
│ └── CMakeLists.txt # Build configuration
├── thirdparty/
│ ├── tinn/ # Vendored Tinn C library
│ ├── genann/ # Vendored Genann C library
│ ├── fann/ # Vendored FANN C library
│ ├── nn1/ # Vendored nn1 CNN C library
│ └── kann/ # Vendored KANN C library
├── tests/ # pytest test suite
├── CMakeLists.txt # Root CMake config
├── Makefile # Build shortcuts
└── pyproject.toml # Python package metadata
API Reference
seed()
def seed(seed_value: int = 0) -> None
Seed the C random number generator used for weight initialization. If seed_value is 0 (default), uses current time. Call this before creating networks for reproducible results.
TinnNetwork
class TinnNetwork:
def __init__(self, inputs: int, hidden: int, outputs: int)
Create a new 3-layer neural network (float32 precision).
Parameters:
- inputs: Number of input neurons
- hidden: Number of hidden layer neurons
- outputs: Number of output neurons
Properties:
- input_size: Number of inputs
- hidden_size: Number of hidden neurons
- output_size: Number of outputs
- shape: Tuple of (inputs, hidden, outputs)
Methods:
predict()
def predict(self, inputs: list[float]) -> list[float]
Make a prediction given input values.
train()
def train(self, inputs: list[float], targets: list[float], rate: float) -> float
Train the network on one example. Returns the mean squared error for this training step.
evaluate()
def evaluate(self, inputs: list[float], targets: list[float]) -> float
Compute loss without training. Returns mean squared error between prediction and targets.
train_batch()
def train_batch(
self,
inputs_list: list,
targets_list: list,
rate: float,
shuffle: bool = False
) -> dict[str, float]
Train on multiple examples in batch. Returns dict with keys: 'mean_loss', 'total_loss', 'count'.
save()
def save(self, path: str | bytes | os.PathLike) -> None
Save the network weights to a file.
load()
@classmethod
def load(cls, path: str | bytes | os.PathLike) -> TinnNetwork
Load a network from a file.
__enter__() / __exit__()
def __enter__(self) -> TinnNetwork
def __exit__(self, exc_type, exc_val, exc_tb) -> bool
Context manager protocol support. Enables use of with statement for cleaner code. The network handles cleanup automatically via __dealloc__, so context manager usage is optional.
GenannNetwork
class GenannNetwork:
def __init__(self, inputs: int, hidden_layers: int, hidden: int, outputs: int)
Create a new multi-layer neural network (float64 precision).
Parameters:
- inputs: Number of input neurons
- hidden_layers: Number of hidden layers
- hidden: Number of neurons per hidden layer
- outputs: Number of output neurons
Properties:
- input_size: Number of inputs
- hidden_layers: Number of hidden layers
- hidden_size: Number of neurons per hidden layer
- output_size: Number of outputs
- shape: Tuple of (inputs, hidden_layers, hidden, outputs)
- total_weights: Total number of weights in the network
- total_neurons: Total number of neurons plus inputs
Methods:
predict()
def predict(self, inputs: list[float]) -> list[float]
Make a prediction given input values.
train()
def train(self, inputs: list[float], targets: list[float], rate: float) -> float
Train the network on one example using backpropagation. Returns mean squared error.
evaluate()
def evaluate(self, inputs: list[float], targets: list[float]) -> float
Compute loss without training. Returns mean squared error between prediction and targets.
train_batch()
def train_batch(
self,
inputs_list: list,
targets_list: list,
rate: float,
shuffle: bool = False
) -> dict[str, float]
Train on multiple examples in batch. Returns dict with keys: 'mean_loss', 'total_loss', 'count'.
randomize()
def randomize(self) -> None
Randomize all network weights.
copy()
def copy(self) -> GenannNetwork
Create a deep copy of the network.
save()
def save(self, path: str | bytes | os.PathLike) -> None
Save the network weights to a file.
load()
@classmethod
def load(cls, path: str | bytes | os.PathLike) -> GenannNetwork
Load a network from a file.
__enter__() / __exit__()
def __enter__(self) -> GenannNetwork
def __exit__(self, exc_type, exc_val, exc_tb) -> bool
Context manager protocol support. Enables use of with statement for cleaner code. The network handles cleanup automatically via __dealloc__, so context manager usage is optional.
FannNetwork
class FannNetwork:
def __init__(self, layers: list[int] | None = None, connection_rate: float = 1.0)
Create a new multi-layer neural network (float32 precision) using the FANN library.
Parameters:
- layers: List of layer sizes [input, hidden1, ..., hiddenN, output]. Must have at least 2 layers.
- connection_rate: Connection density (0.0 to 1.0). 1.0 = fully connected, < 1.0 = sparse network.
Properties:
- input_size: Number of inputs
- output_size: Number of outputs
- total_neurons: Total number of neurons
- total_connections: Total number of connections
- num_layers: Number of layers
- layers: List of neuron counts for each layer
- learning_rate: Get or set the learning rate
- learning_momentum: Get or set the learning momentum
Methods:
predict()
def predict(self, inputs: list[float]) -> list[float]
Make a prediction given input values.
train()
def train(self, inputs: list[float], targets: list[float]) -> float
Train the network on one example using backpropagation. Uses current learning_rate and learning_momentum. Returns mean squared error.
evaluate()
def evaluate(self, inputs: list[float], targets: list[float]) -> float
Compute loss without training. Returns mean squared error between prediction and targets.
train_batch()
def train_batch(
self,
inputs_list: list,
targets_list: list,
shuffle: bool = False
) -> dict[str, float]
Train on multiple examples in batch. Uses current learning_rate and learning_momentum. Returns dict with keys: 'mean_loss', 'total_loss', 'count'.
randomize_weights()
def randomize_weights(self, min_weight: float = -0.1, max_weight: float = 0.1) -> None
Randomize all network weights to values in [min_weight, max_weight].
copy()
def copy(self) -> FannNetwork
Create a deep copy of the network.
save()
def save(self, path: str | bytes | os.PathLike) -> None
Save the network to a file (FANN text format).
load()
@classmethod
def load(cls, path: str | bytes | os.PathLike) -> FannNetwork
Load a network from a file.
__enter__() / __exit__()
def __enter__(self) -> FannNetwork
def __exit__(self, exc_type, exc_val, exc_tb) -> bool
Context manager protocol support. Enables use of with statement for cleaner code. The network handles cleanup automatically via __dealloc__, so context manager usage is optional.
CNNNetwork
class CNNNetwork:
def __init__(self)
Create a new convolutional neural network (float64 precision). Networks are built by adding layers sequentially.
Properties:
- input_shape: Tuple of (depth, width, height) for the input layer
- output_size: Number of output nodes in the final layer
- num_layers: Total number of layers in the network
- layers: List of CNNLayer wrappers
Methods:
create_input_layer()
def create_input_layer(self, depth: int, width: int, height: int) -> CNNLayer
Create an input layer. Must be called first when building a network.
add_conv_layer()
def add_conv_layer(
self,
depth: int,
width: int,
height: int,
kernel_size: int,
padding: int = 0,
stride: int = 1,
std: float = 0.1
) -> CNNLayer
Add a convolutional layer with specified output dimensions and convolution parameters.
add_full_layer()
def add_full_layer(self, num_nodes: int, std: float = 0.1) -> CNNLayer
Add a fully-connected layer.
predict()
def predict(self, inputs: list[float]) -> list[float]
Make a prediction. Input should be a flat array of size depth × width × height.
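For instance, a nested (depth, width, height) volume can be flattened into that layout with a pure-Python comprehension. This sketch is independent of cynn and assumes depth-major, then row-major ordering; verify against how the input layer was created:

```python
# Sketch: flattening a nested (depth, width, height) volume into the
# flat list predict() expects. Ordering assumed here: depth-major,
# then row-major within each plane.
depth, width, height = 1, 4, 4

volume = [[[0.5 for _ in range(height)] for _ in range(width)]
          for _ in range(depth)]

flat = [v for plane in volume for row in plane for v in row]
assert len(flat) == depth * width * height  # 16 values for a 1x4x4 input
```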
train()
def train(
self,
inputs: list[float],
targets: list[float],
learning_rate: float
) -> float
Train the network on one example. Returns mean squared error.
evaluate()
def evaluate(self, inputs: list[float], targets: list[float]) -> float
Compute loss without training. Returns mean squared error between prediction and targets.
train_batch()
def train_batch(
self,
inputs_list: list,
targets_list: list,
learning_rate: float,
shuffle: bool = False
) -> dict[str, float]
Train on multiple examples in batch. Returns dict with keys: 'mean_loss', 'total_loss', 'count'.
dump()
def dump(self) -> None
Print debug information about all layers to stdout.
__enter__() / __exit__()
def __enter__(self) -> CNNNetwork
def __exit__(self, exc_type, exc_val, exc_tb) -> bool
Context manager protocol support. Enables use of with statement for cleaner code. The network handles cleanup automatically via __dealloc__, so context manager usage is optional.
CNNLayer
class CNNLayer
Represents a single layer in a CNN. Created by CNNNetwork methods, not directly instantiated.
Properties:
- layer_id: Layer ID in the network
- shape: Tuple of (depth, width, height)
- depth, width, height: Individual dimensions
- num_nodes: Total nodes (depth × width × height)
- num_weights, num_biases: Weight and bias counts
- layer_type: String ('input', 'conv', or 'full')
- kernel_size, padding, stride: Conv layer parameters (raises ValueError for non-conv layers)
Methods:
get_outputs()
def get_outputs(self) -> list[float]
Get the output values of this layer.
KannNeuralNetwork (KANN)
class KannNeuralNetwork
Advanced neural network class supporting MLPs, LSTMs, GRUs, and simple RNNs (float32 precision). Based on the KANN (Klib Artificial Neural Network) library.
Factory Methods:
mlp()
@staticmethod
def mlp(
input_size: int,
hidden_sizes: list[int],
output_size: int,
cost_type: int = COST_MULTI_CROSS_ENTROPY,
dropout: float = 0.0
) -> KannNeuralNetwork
Create a multi-layer perceptron with arbitrary hidden layer configuration.
lstm()
@staticmethod
def lstm(
input_size: int,
hidden_size: int,
output_size: int,
cost_type: int = COST_MULTI_CROSS_ENTROPY,
rnn_flags: int = 0
) -> KannNeuralNetwork
Create an LSTM network for sequence modeling.
gru()
@staticmethod
def gru(
input_size: int,
hidden_size: int,
output_size: int,
cost_type: int = COST_MULTI_CROSS_ENTROPY,
rnn_flags: int = 0
) -> KannNeuralNetwork
Create a GRU network for sequence modeling.
rnn()
@staticmethod
def rnn(
input_size: int,
hidden_size: int,
output_size: int,
cost_type: int = COST_MULTI_CROSS_ENTROPY,
rnn_flags: int = 0
) -> KannNeuralNetwork
Create a simple RNN network.
load()
@staticmethod
def load(filename: str) -> KannNeuralNetwork
Load a network from a file.
Properties:
- n_nodes: Number of nodes in the computational graph
- input_dim: Input dimension
- output_dim: Output dimension
- n_var: Total number of trainable variables
- n_const: Total number of constants
Methods:
train()
def train(
self,
x: float[:, :],
y: float[:, :],
learning_rate: float = 0.001,
mini_batch_size: int = 64,
max_epochs: int = 100,
min_epochs: int = 0,
max_drop_streak: int = 10,
validation_fraction: float = 0.1
) -> int
Train the network using the built-in feedforward trainer (RMSprop optimizer with early stopping). Returns the number of epochs trained.
train_rnn()
def train_rnn(
self,
sequences: list,
seq_length: int,
vocab_size: int,
learning_rate: float = 0.001,
mini_batch_size: int = 32,
max_epochs: int = 100,
grad_clip: float = 10.0,
validation_fraction: float = 0.1,
verbose: int = 1
) -> dict
Train RNN/LSTM/GRU using backpropagation through time (BPTT). Returns dict with 'loss' and 'val_loss' history lists.
apply()
def apply(self, x: float[:]) -> array.array
Apply the network to a single input. Returns output as array.array('f', ...).
cost()
def cost(self, x: float[:, :], y: float[:, :]) -> float
Compute the cost over a dataset.
save()
def save(self, filename: str) -> None
Save the network to a file.
clone()
def clone(self, batch_size: int = 1) -> KannNeuralNetwork
Clone the network with a different batch size.
unroll()
def unroll(self, length: int) -> KannNeuralNetwork
Unroll an RNN for a specified number of time steps.
switch_mode()
def switch_mode(self, is_training: bool) -> None
Switch between training and inference mode.
close()
def close(self) -> None
Explicitly release resources.
__enter__() / __exit__()
Context manager protocol support.
GraphBuilder
class GraphBuilder
Low-level graph builder for creating custom network architectures.
Methods:
- input(size): Create an input layer
- dense(inp, output_size): Create a dense (fully connected) layer
- dropout(inp, rate): Create a dropout layer
- layernorm(inp): Create a layer normalization layer
- relu(inp), sigmoid(inp), tanh(inp), softmax(inp): Activation functions
- lstm(inp, hidden_size, flags), gru(inp, hidden_size, flags), rnn(inp, hidden_size, flags): Recurrent layers
- conv1d(inp, n_filters, kernel_size, stride, pad): 1D convolution
- conv2d(inp, n_filters, k_rows, k_cols, stride_r, stride_c, pad_r, pad_c): 2D convolution
- add(x, y), sub(x, y), mul(x, y), matmul(x, y): Arithmetic operations
- softmax_cross_entropy(inp, n_out): Softmax + cross-entropy cost
- sigmoid_cross_entropy(inp, n_out): Sigmoid + binary cross-entropy cost
- mse_layer(inp, n_out): MSE cost layer
- build(cost): Build the neural network from the cost node
DataSet
class DataSet
Wrapper for loading tabular data from TSV files.
Methods:
- load(filename): Load data from a TSV file
- get_row(index): Get a single row of data
- get_row_name(index), get_col_name(index): Get row/column names
- split_xy(label_cols): Split data into features and labels
- to_2d_array(): Convert to Array2D
Properties:
- n_rows, n_cols, n_groups, shape, row_names, col_names
KANN Helper Functions
kann_set_seed(seed: int) -> None
Set the random seed for reproducibility.
kann_set_verbose(level: int) -> None
Set verbosity level for KANN operations.
one_hot_encode(values: int[:], num_classes: int) -> list
One-hot encode an array of integer values.
softmax_sample(probs: float[:], temperature: float = 1.0) -> int
Sample from a probability distribution with temperature scaling.
prepare_sequence_data(sequences, seq_length: int, vocab_size: int) -> tuple
Prepare sequence data for RNN training.
KANN Constants
Cost Functions:
- COST_BINARY_CROSS_ENTROPY: Binary cross-entropy (sigmoid)
- COST_MULTI_CROSS_ENTROPY: Multi-class cross-entropy (softmax)
- COST_BINARY_CROSS_ENTROPY_NEG: Binary cross-entropy for tanh outputs in (-1, 1)
- COST_MSE: Mean squared error
Node Flags:
- KANN_FLAG_IN, KANN_FLAG_OUT, KANN_FLAG_TRUTH, KANN_FLAG_COST
RNN Flags:
- RNN_VAR_H0: Variable initial hidden states
- RNN_NORM: Layer normalization
Choosing Between Network Implementations
| Feature | TinnNetwork | GenannNetwork | FannNetwork | CNNNetwork | KannNeuralNetwork (KANN) |
|---|---|---|---|---|---|
| Precision | float32 | float64 | float32 | float64 | float32 |
| Architecture | Fixed 3-layer | Multi-layer | Flexible | Layer-based CNN | MLP/LSTM/GRU/RNN |
| Layer Spec | (in, hid, out) | (in, nlayers, hid, out) | [in, h1, h2, out] | Build API | Factory methods |
| Learning Rate | Per-train | Per-train | Settable property | Per-train | Per-train |
| Momentum | No | No | Yes | No | RMSprop built-in |
| Sparse Networks | No | No | Yes | No | No |
| Convolutional | No | No | No | Yes | Yes (via GraphBuilder) |
| Recurrent (RNN) | No | No | No | No | Yes (LSTM/GRU/RNN) |
| Returns Loss | Yes | Yes | Yes | Yes | Yes |
| Memory | Low | Medium | Low | High | Medium |
| NumPy Default | Converts | Native | Converts | Native | Converts |
Use TinnNetwork when:
- You need a simple 3-layer network
- Memory efficiency is important (float32 uses less memory)
- You want the training method to return loss values
- You prefer a simpler API with fixed architecture
Use GenannNetwork when:
- You need multiple hidden layers (deep networks)
- Higher precision is required (float64)
- You need to copy networks
- You want to randomize weights after creation
- You need to query total weights/neurons
- You prefer the constructor pattern:
GenannNetwork(inputs, hidden_layers, hidden_size, outputs)
Use FannNetwork when:
- You need flexible multi-layer architectures
- You want to control learning rate and momentum during training
- You need sparse networks (partial connectivity)
- You prefer list-based layer specification: FannNetwork([2, 4, 3, 1])
- You want settable learning parameters
- Memory efficiency is important (float32 uses less memory than float64)
Use CNNNetwork when:
- You need convolutional layers for image processing or spatial data
- Building custom CNN architectures (e.g., MNIST, CIFAR-10 style networks)
- You want fine-grained control over layer configuration (kernel size, stride, padding)
- You are working with 2D/3D structured input data
- You need to inspect individual layer properties and outputs
- You are implementing image classification, object detection, or computer vision tasks
- Higher precision is required (float64)
- You prefer a layer-by-layer building API over fixed architecture
Use KannNeuralNetwork (KANN) when:
- You need recurrent networks (LSTM, GRU, or simple RNN) for sequence modeling
- Building text generation, time series prediction, or language models
- You want built-in RMSprop optimizer with early stopping
- You need backpropagation through time (BPTT) support
- You want a computational graph approach with automatic differentiation
- Building custom architectures with the GraphBuilder API
- You need convolution layers combined with recurrent layers
- Training with gradient clipping for RNNs
- You want built-in train/validation splitting
Performance Considerations
- The C implementation is fast but operates on single examples (no batch processing)
- GIL-free execution: All computational operations (train, predict, network creation) release the Python GIL, enabling true parallel execution across multiple threads
- Thread-safe: Multiple threads can safely share the same network for predictions
- For production machine learning, consider TensorFlow, PyTorch, or JAX
- This library is ideal for:
- Learning neural network fundamentals
- Embedded systems with limited resources
- Simple prediction tasks
- Parallel inference workloads
- Environments where large ML frameworks aren't available
Multithreading Example
```python
from concurrent.futures import ThreadPoolExecutor
from cynn.tinn import TinnNetwork
import numpy as np

# Create a shared network
net = TinnNetwork(100, 50, 10)

def process_batch(batch_data):
    """Process a batch of inputs in parallel."""
    results = []
    for inputs in batch_data:
        pred = net.predict(inputs)
        results.append(pred)
    return results

# Prepare data batches
data = [np.random.rand(100).astype(np.float32) for _ in range(1000)]
batches = [data[i:i+250] for i in range(0, 1000, 250)]

# Process batches in parallel (GIL-free!)
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(process_batch, batches))
```
Credits
- Tinn - Original C neural network library by glouw
- GENANN - Minimal C neural network library by codeplea
- FANN - Fast Artificial Neural Network library by Steffen Nissen
- nn1 - Convolutional Neural Network in C by euske
- KANN - Klib Artificial Neural Network library by Attractive Chaos
- Built with Cython
- Build system uses scikit-build-core
License
See LICENSE file for details.