A simple neural network library built from scratch in Python, designed to make artificial neural networks easy to apply and understand.
PyNNet: A Pythonic Neural Network Library
Welcome to PyNNet! 👋
PyNNet is a beginner-friendly neural network library that helps you learn and understand deep learning from the ground up. Built in pure Python and NumPy, it provides a clean, intuitive API similar to popular frameworks while being transparent about what's happening under the hood.
🎯 Perfect for:
- Learning how neural networks work
- Experimenting with deep learning concepts
- Educational projects and assignments
- Small to medium-sized machine learning tasks
🚀 Why Choose PyNNet?
- Simple, Keras-like API that's easy to learn
- Clear, documented implementation you can understand
- Minimal dependencies (just NumPy!)
- Great for learning deep learning fundamentals
📚 Getting Started
Installation
Install PyNNet using pip:
pip install pynnet
Quick Start Guide
Building a neural network with PyNNet is as simple as 1-2-3:
1. Create a model:

from pynnet.network import Sequential

model = Sequential()
2. Add layers:

from pynnet.layers import Dense
from pynnet.activation import relu, sigmoid

# Hidden layer with ReLU activation
model.add(Dense(input_size=2, output_size=4, weight_init='he'))
model.add(relu)

# Output layer with sigmoid activation
model.add(Dense(input_size=4, output_size=1, weight_init='xavier'))
model.add(sigmoid)
3. Compile and train:

from pynnet.optimizer import Adam
from pynnet.loss import mse, mse_derivative

# Compile the model
model.compile(
    loss=mse,
    loss_derivative=mse_derivative,
    optimizer=Adam(learning_rate=0.01)
)

# Train the model
model.fit(x_train, y_train, epochs=1000, verbose=True)
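To see what those three steps actually compute, here is a hand-rolled NumPy forward pass for the same 2-4-1 architecture. This mirrors the underlying math only; it is an illustrative sketch, not PyNNet's internals.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# He init for the hidden layer, a Xavier-style scale for the output layer
W1 = rng.normal(0.0, np.sqrt(2.0 / 2), size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, np.sqrt(1.0 / 4), size=(4, 1))
b2 = np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)        # Dense(2 -> 4) followed by ReLU
    return sigmoid(hidden @ W2 + b2)  # Dense(4 -> 1) followed by sigmoid

x = np.array([[0.0, 1.0], [1.0, 1.0]])
out = forward(x)  # shape (2, 1), each value strictly between 0 and 1
```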
🎓 Learning by Example
We provide several example implementations to help you get started:
1. XOR Gate (examples/01_xor_example.py)
Learn how to create your first neural network by implementing the XOR logic gate. This is a perfect starting point for beginners!
# XOR Truth Table:
# Input Output
# 0 0 => 0
# 0 1 => 1
# 1 0 => 1
# 1 1 => 0
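The truth table above translates directly into NumPy training arrays (the variable names match the quick-start snippet but are illustrative):

```python
import numpy as np

# The XOR truth table as training data: one row per input pair.
x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_train = np.array([[0], [1], [1], [0]], dtype=float)
```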
2. Binary Classification (examples/02_binary_classification.py)
Learn how to classify points into two categories using a simple dataset of concentric circles. Great for understanding:
- Binary classification problems
- Using multiple layers
- Working with 2D input data
3. Regression (examples/03_regression.py)
Learn how to predict continuous values by fitting a sine wave. Demonstrates:
- Regression problems
- Using different activation functions
- Handling continuous output
- Data visualization
🛠 Features
Network Types
Sequential: Build networks by stacking layers one after another
Layers
Dense: Fully connected layer with customizable features:

- Weight Initialization:
  - 'he': Best for ReLU activation (default)
  - 'xavier': Best for tanh/sigmoid
  - 'lecun': For normalized inputs
  - 'identity': For deep networks
  - 'orthogonal': For better training
  - 'random': Simple random initialization
- Bias Initialization:
  - 'zeros': All zeros (default)
  - 'ones': All ones
  - 'random': Random values
  - 'constant': Custom value
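For intuition, here is a sketch of how several of the listed weight-initialization schemes are commonly defined. The exact formulas in PyNNet may differ, and 'identity' and the bias options are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_weights(fan_in, fan_out, method="he"):
    """Textbook forms of a few common initializers (illustrative sketch)."""
    if method == "he":        # variance 2/fan_in, suited to ReLU
        return rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out))
    if method == "xavier":    # variance 2/(fan_in + fan_out), suited to tanh/sigmoid
        return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), (fan_in, fan_out))
    if method == "lecun":     # variance 1/fan_in, for normalized inputs
        return rng.normal(0.0, np.sqrt(1.0 / fan_in), (fan_in, fan_out))
    if method == "orthogonal":  # orthonormal columns via QR decomposition
        a = rng.normal(size=(fan_in, fan_out))
        q, _ = np.linalg.qr(a if fan_in >= fan_out else a.T)
        return q if fan_in >= fan_out else q.T
    if method == "random":    # plain unit-variance Gaussian
        return rng.normal(0.0, 1.0, (fan_in, fan_out))
    raise ValueError(f"unknown method: {method}")
```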
Activation Functions
- relu: Rectified Linear Unit
- sigmoid: Sigmoid function (0 to 1)
- tanh: Hyperbolic tangent (-1 to 1)
- linear: No transformation (for regression)
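The standard math behind these activations (and two of the derivatives backpropagation needs) fits in a few lines; this is a reference sketch, not PyNNet's source:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_derivative(x):
    # gradient is 1 where the input was positive, 0 elsewhere
    return (np.asarray(x) > 0).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh(x):
    return np.tanh(x)

def linear(x):
    return x
```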
Optimizers
- SGD: Stochastic Gradient Descent
- Adam: Adaptive Moment Estimation
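The two update rules in their textbook forms, for a single scalar parameter (a sketch; PyNNet's implementation details, such as default hyperparameters, may differ):

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    # plain gradient descent: step against the gradient
    return w - lr * grad

class Adam:
    """Adaptive Moment Estimation with bias-corrected moment estimates."""
    def __init__(self, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = 0.0   # running mean of gradients
        self.v = 0.0   # running mean of squared gradients
        self.t = 0     # step counter

    def step(self, w, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        m_hat = self.m / (1 - self.beta1 ** self.t)  # bias correction
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```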
Loss Functions
- mse: Mean Squared Error (for regression)
- binary_cross_entropy: For binary classification
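Both losses have standard definitions, sketched below together with the MSE derivative that the compile step expects (illustrative; PyNNet's own signatures may vary):

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mse_derivative(y_true, y_pred):
    # gradient of the mean squared error w.r.t. the predictions
    return 2.0 * (y_pred - y_true) / y_true.size

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```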
💡 Tips for Success
Choosing Layer Sizes
- Input Layer: Must match your data's feature count
- Hidden Layers: Generally start with powers of 2 (e.g., 32, 64, 128)
- Output Layer:
- Binary classification: 1 unit with sigmoid
- Regression: 1 unit with linear activation
- Multi-class: One unit per class with softmax
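Note that softmax does not appear in the activation list above; if you need multi-class outputs, a numerically stable version is easy to write yourself (`softmax` here is your own helper, not a PyNNet function):

```python
import numpy as np

def softmax(x):
    # subtract the row max before exponentiating to avoid overflow
    shifted = x - np.max(x, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

probs = softmax(np.array([[2.0, 1.0, 0.1]]))  # each row sums to 1
```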
Picking Initialization Methods
- With ReLU: Use 'he' initialization
- With Sigmoid/Tanh: Use 'xavier' initialization
- Deep Networks: Try 'orthogonal' initialization
- When in Doubt: Start with 'he' initialization
Training Tips
- Start Small: Begin with a simple network and gradually add complexity
- Monitor Loss: Use verbose=True to watch training progress
- Learning Rate:
  - Start with 0.01
  - If loss is unstable: decrease it
  - If learning is slow: increase it
- Save Your Models: Use model.save_weights() to save progress
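The learning-rate advice can be seen on the simplest possible loss, f(w) = w², whose gradient is 2w: a small rate converges toward the minimum, while a too-large rate makes each step overshoot and the iterates blow up.

```python
def descend(lr, steps=50, w0=1.0):
    # gradient descent on f(w) = w**2, whose gradient is 2*w
    w = w0
    for _ in range(steps):
        w -= lr * 2.0 * w
    return w

w_small = descend(lr=0.01)  # each step multiplies w by 0.98: converges
w_large = descend(lr=1.5)   # each step multiplies w by -2: diverges
```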
Common Patterns
Binary Classification:
model = Sequential()
model.add(Dense(input_size=n_features, output_size=64, weight_init='he'))
model.add(relu)
model.add(Dense(input_size=64, output_size=1, weight_init='xavier'))
model.add(sigmoid)
Regression:
model = Sequential()
model.add(Dense(input_size=n_features, output_size=64, weight_init='he'))
model.add(relu)
model.add(Dense(input_size=64, output_size=1, weight_init='he'))
model.add(linear)
🚀 Advanced Features
Model Persistence
Save and load your trained models:
# Save model weights
model.save_weights('my_model.npz')
# Load model weights
model.load_weights('my_model.npz')
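The '.npz' extension suggests weights are stored with NumPy's archive format (an assumption, not verified against PyNNet's source); the round-trip can be sketched with plain NumPy:

```python
import os
import tempfile
import numpy as np

# Hypothetical weight dictionary, saved and restored via NumPy's .npz format.
weights = {"W1": np.random.randn(2, 4), "b1": np.zeros(4)}

path = os.path.join(tempfile.mkdtemp(), "my_model.npz")
np.savez(path, **weights)

with np.load(path) as data:
    restored = {name: data[name] for name in data.files}
```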
Custom Training Loop
Monitor and control the training process:
for epoch in range(n_epochs):
    loss = model.fit(X, y, epochs=1, verbose=False)
    if epoch % 100 == 0:
        predictions = model.predict(X_test)
        print(f"Epoch {epoch}: Loss = {loss:.4f}")
📂 Project Structure
pynnet/
├── layers/ # Neural network layer implementations
│ ├── base.py # Base layer class
│ └── dense.py # Dense layer implementation
├── activation.py # Activation functions
├── loss.py # Loss functions
├── network.py # Core neural network implementation
├── optimizer.py # Optimization algorithms
└── test.py # Unit tests
🤝 Getting Help
- Check the examples in the examples/ directory
- Review the docstrings in the code
- Create an issue on GitHub
- Send an email to the author
Requirements
- Python 3.7 or higher
- NumPy >= 1.20.0
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License - see the LICENSE file for details.
Author
- Zain Qamar - GitHub
- Email: zainqamarch@gmail.com
Acknowledgments
- Thanks to all contributors who help improve this library
- Special thanks to the NumPy community for providing the foundation for numerical computations
Citation
If you use PyNNet in your research, please cite it as:
@software{pynnet2025,
  author    = {Qamar, Zain},
  title     = {PyNNet: A Pythonic Neural Network Library},
  year      = {2025},
  publisher = {GitHub},
  url       = {https://github.com/prime-programmer-ar/pynnet_project.git}
}
File details
Details for the file pynnet-0.0.2.tar.gz.
File metadata
- Size: 19.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 06d9d9546ee8d25500bdc51a44698a4b069b71076cd5dae61bb6a4a744845f3d |
| MD5 | bab6ec9d9198b6da3e8c79d90b1f7b34 |
| BLAKE2b-256 | a982bc9a8fde9a0de132f370337c1cd77f235a291b2dd5bea7f3eb0867171566 |
File details
Details for the file pynnet-0.0.2-py3-none-any.whl.
File metadata
- Size: 16.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 01bfbf06eb862935cc8996a37c3ba30a8565923f605de11c67f4ae540751c6ba |
| MD5 | c19d4d9abd23399eaa011e99e4b3d7a5 |
| BLAKE2b-256 | 1e7e62d9fc3d600192ac2a735f101891d82aff46ebf398f9d0f3aa222ee0d20a |