PyDyNet: Neural Network (MLP, CNN, RNN, Transformer, ...) Implementation in NumPy with Autodiff
PyDyNet: NumPy-based Dynamic Deep Learning Framework
Chinese README: cnREADME.md
Towards Large Language Models
In the summer of 2025, I restarted development of PyDyNet after a two-year pause. PyDyNet now includes a pure-inference implementation of Llama3 (6-layer Transformer, vocab size 32000), inspired by the NumPy implementation and dataset available here. To run it, download the dataset into the llm/llama folder and execute:
>>> python -m llm.llama.infer
There was a boy named Timmy. He loved to play with hi toy and run around outside. One day, Timmy' mom asked him to help her with the laundry. Timmy didn't want to help because he wanted to play. But hi mom said, "Timmy, you need to help me. It' important to help out."
Timmy didn't want to help, but he knew he had to. So, he put on hi shoe and went outside to help hi mom. A they were folding the clothe, Timmy saw a big pile of laundry on the floor. He wanted to help, so he started to pick it up. But then, he accidentally knocked over a pile of clothe and they fell on him. Timmy wa okay, but he felt bad.
Hi mom saw what happened and said, "Timmy, you need to be more careful. You could have hurt yourself." Timmy felt bad and said sorry. Hi mom hugged him and said, "It' okay, accident happen. Let' clean up the laundry together." Timmy learned that it' important to be careful and help out when you need it.
Token count: 262, elapsed: 0.87s, 300 tokens/s
For parameter fine-tuning, run:
python -m llm.llama.finetune --text "A short domain sample for adaptation" --steps 30 --lr 1e-4 --trainable lm_head --save llm/llama/data/finetuned_params.npz
Then load the tuned weights for generation:
python -m llm.llama.infer --prompt "A short domain sample" --finetuned llm/llama/data/finetuned_params.npz
We also implemented a pure-inference version of CLIP, inspired by the NumPy implementation and dataset available at NPCLIP. To run it, copy the data folder of NPCLIP into the llm/clip folder and execute:
>>> python -m llm.clip.infer
Label probs: [0.000953 0.48176003 0.51728696]
For parameter fine-tuning on CLIP, run:
python -m llm.clip.finetune --image llm/clip/picture.png --labels "a fish,a dog,a cat" --target 2 --steps 20 --lr 1e-5 --trainable text_encoder.proj,image_encoder.proj --save llm/clip/data/finetuned_clip_params.npz
Then load tuned parameters for inference:
python -m llm.clip.infer --image llm/clip/picture.png --labels "a fish,a dog,a cat" --finetuned llm/clip/data/finetuned_clip_params.npz
Both commands score the example image (llm/clip/picture.png) against the query ["a fish", "a dog", "a cat"].
Overview
PyDyNet is a neural network framework implemented entirely in NumPy (with CuPy support since version 0.0.7, using the same API). Its syntax is inspired by PyTorch, and its structure is as follows:
graph LR
N(numpy/cupy.ndarray) --Backend--> A(Tensor) --> ds(Dataset) ---> Data(DataLoader) ---> Mission(Learning task)
A --Eager execution--> B(Basic operators:<br>add, exp, etc.)
B -.Autograd.-> A
B --> CO(Complex<br>operators) --> f(Function:<br>img2col, etc.) --> M(Basic Module:<br>Linear, etc.) --> CM(Advanced Module:<br>CNN, RNN, Transformer, etc.) --> Mission
A --> GD(Optimizer:<br>SGD, Adam, etc.) ---> LS(lr_scheduler:<br>StepLR, etc.) ---> Mission
Dashed lines indicate that users can disable automatic differentiation using no_grad.
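For example, pure inference can run inside such a context so that no computation graph is recorded. A minimal sketch, assuming PyDyNet exposes a PyTorch-style no_grad context manager at the top level (the exact import path is an assumption):

```python
import pydynet
from pydynet import Tensor

x = Tensor([1.0, 2.0, 3.0], requires_grad=True)

# Inside no_grad, forward computation skips recording the autograd graph,
# which saves memory and time during pure inference.
with pydynet.no_grad():
    y = x * 2.0 + 1.0  # y carries no gradient history here
```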
Install
Just
pip install pydynet
or
git clone https://github.com/Kaslanarian/PyDyNet
cd PyDyNet
python setup.py install
Example
Examples can be found in the examples/pydynet directory, with equivalent PyTorch implementations in examples/pytorch. To run an example, use:
python -m examples.pydynet.xxx
Automatic Differentiation
The example autodiff1d.py demonstrates automatic differentiation by performing gradient descent on a one-dimensional convex function:
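In essence, the script repeats a backward pass and a parameter update. A minimal sketch, assuming a PyTorch-like Tensor API (requires_grad, backward(), .grad, .data, and zero_grad() are assumptions here); examples/pydynet/autodiff1d.py is the authoritative version:

```python
from pydynet import Tensor

x = Tensor(3.0, requires_grad=True)
lr = 0.1
for _ in range(100):
    y = (x - 1.0) * (x - 1.0)  # convex objective, minimized at x = 1
    y.backward()               # autodiff writes dy/dx into x.grad
    x.data -= lr * x.grad      # plain gradient-descent update
    x.zero_grad()              # clear the gradient before the next step
print(x.data)                  # converges toward 1.0
```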
A multi-variable convex-function example is provided in autodiff2d.py.
MLP & LeNet
The example mlp_cnn.py trains an MLP and a LeNet to classify MNIST digits and reports training and testing accuracy for both.
Dropout & Batch Normalization
The example mlp_dropout_bn.py compares the performance of three networks on the fetch_olivetti_faces dataset (64×64-pixel face images); a sketch of the three variants follows the list:
- Three-layer MLP;
- Three-layer MLP with Dropout;
- Three-layer MLP with Batch Normalization.
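A minimal sketch of the three variants in one module, assuming PyTorch-like layer names (nn.Linear, nn.Dropout, nn.BatchNorm1d, and a functional relu are assumptions); examples/pydynet/mlp_dropout_bn.py contains the actual models:

```python
from pydynet import nn
import pydynet.nn.functional as F

class FaceMLP(nn.Module):
    """Three-layer MLP with optional Dropout or Batch Normalization."""
    def __init__(self, use_dropout=False, use_bn=False):
        super().__init__()
        self.fc1 = nn.Linear(64 * 64, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 40)  # fetch_olivetti_faces has 40 subjects
        self.dropout = nn.Dropout(0.5) if use_dropout else None
        self.bn = nn.BatchNorm1d(128) if use_bn else None

    def forward(self, x):
        x = self.fc1(x)
        if self.bn is not None:
            x = self.bn(x)       # normalize activations across the batch
        x = F.relu(x)
        if self.dropout is not None:
            x = self.dropout(x)  # randomly zero activations while training
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```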
Recurrent Neural Network (RNN)
The example ts_prediction.py demonstrates time series prediction using a GRU:
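A minimal sketch of such a forecaster, assuming PyDyNet mirrors PyTorch's recurrent-module interface (an nn.GRU returning per-step outputs is an assumption); see examples/pydynet/ts_prediction.py for the real model and training loop:

```python
from pydynet import nn

class GRUForecaster(nn.Module):
    """Predict the next value of a univariate series from a window."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (seq_len, batch, 1); the GRU yields a hidden state per step.
        out, _ = self.gru(x)
        return self.head(out[-1])  # forecast from the final hidden state
```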
Transformer
The example transformer.py shows how to train a text classification model using a Transformer.
Dataset (CoLA) link: https://nyu-mll.github.io/CoLA/cola_public_1.1.zip
CUDA Acceleration
PyDyNet supports GPU acceleration through CuPy: install CuPy, and the same PyDyNet code runs on the GPU with no API changes. We compared the NumPy (CPU) and CuPy (GPU) backends on an NVIDIA GeForce RTX 4090 (a usage sketch follows the table):
| Network structure | Dataset | CPU time (s) per epoch | GPU time (s) per epoch |
|---|---|---|---|
| 3-layer MLP | MNIST (60000×784) | 7.256±0.138 | 1.203±0.0181 |
| LeNet | MNIST (60000×784) | 239.664±2.108 | 2.841±0.026 |
| 1-layer Transformer (dim=512, head=4) | CoLA (8551×45×64) | 17.503±0.251 | 1.075±0.002 |
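A usage sketch, assuming the device handling mirrors PyTorch (device='cuda' on the Tensor constructor is an assumption; PyDyNet's actual spelling may differ):

```python
from pydynet import Tensor

# With CuPy installed, tensors can live on the GPU; every operator then
# dispatches to cupy.ndarray kernels instead of numpy.ndarray.
x = Tensor([1.0, 2.0, 3.0], device='cuda')
y = Tensor([4.0, 5.0, 6.0], device='cuda')
z = x + y  # computed on the GPU
```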