Yet another ML library!
rawML
rawML is a hobby project where neural networks are implemented from scratch using pure Python and NumPy, with a class-based structure to define layers, optimizers, and loss functions. The goal is to understand how neural networks work at a low level and to build a (somewhat) modular custom framework for implementing ML algorithms, without relying on high-level deep learning frameworks like TensorFlow or PyTorch. In a nutshell: my lite version of PyTorch/TF, powered by NumPy.
Overview of current implementations:
- Linear Layers: Fully connected layers with He initialization of weights.
- Activation Functions: ReLU activation implemented using NumPy.
- Custom Tensor Class: `jTensor`, an extension of NumPy's `ndarray`, supports storing gradients in the `.gd` attribute.
- Optimization: A Gradient Descent optimizer (`GDOptimizer`) is implemented with learning rate control.
- Loss Function: Mean Squared Error (MSE) loss is implemented to compute the loss during training.
- Model Class: The `CreateModel` class stitches all layers together, providing methods for forward passes and training with backpropagation. I plan on making it more customizable.
- Other basic functionality: General essential functions like `mean`, `min`, `max`, `std`, `rand`, `randn`, etc. are implemented in the rawML library. All are powered by NumPy.
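To give a flavor of what a fully connected layer with He initialization involves, here is a minimal illustrative sketch in plain NumPy. This is not rawML's actual code; the class and attribute names here are hypothetical.

```python
import numpy as np

# Illustrative sketch (not rawML's implementation): a fully connected
# layer whose weights use He initialization, i.e. samples from a
# normal distribution scaled by sqrt(2 / fan_in).
class LinearSketch:
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)
        self.b = np.zeros(n_out)

    def forward(self, x):
        # Affine transform: (batch, n_in) @ (n_in, n_out) + (n_out,)
        return x @ self.W + self.b

layer = LinearSketch(100, 20)
out = layer.forward(np.random.rand(16, 100))
print(out.shape)  # (16, 20)
```

He initialization keeps the variance of activations roughly constant through ReLU layers, which is why it pairs naturally with the ReLU activation listed above.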
Requirements
just numpy :)
```
pip install numpy
```
Usage Instructions
Just clone this repository. In the cloned folder, create your Python file and start using away! (The rawML folder is essential; copy it wherever you wish and import from it.)
Code Example
The demo.py file shows how one can define a rML model, define the loss function and optimizer, and stitch them together. Model training and inference examples are also shown.
```python
import rawML as rML
from rawML.layers import relu, linear
from rawML.optimizers import GDOptimizer
from rawML.losses import MSELoss
from sklearn.model_selection import train_test_split as tts

LayerList = [
    linear(100, 20),
    relu(),
    linear(20, 40)
]

opt = GDOptimizer(lr=1e-2)
loss = MSELoss()
model = rML.createModel(LayerList, opt, loss)

X = rML.rand((16, 100))
Y = rML.rand((16, 40))

# sklearn train-test-split works with jTensors
x_train, x_val, y_train, y_val = tts(X, Y, train_size=0.8)

# Training
model.train((x_train, y_train), (x_val, y_val), epochs=20, verbose_freq=5)

# Predicting
y = model(rML.randn([40, 100]))
print(y.shape)
```
- As jTensors inherit from (and behave very similarly to) NumPy arrays, they support attributes like `.shape`, and they can also be fed into scikit-learn's `train_test_split`.
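As a rough idea of how such a class can work, here is an illustrative sketch of extending `np.ndarray` with a gradient attribute, following NumPy's standard subclassing recipe. This is not jTensor's actual implementation; the class name `GradArray` is hypothetical.

```python
import numpy as np

# Illustrative sketch (not jTensor's actual code): an ndarray subclass
# that carries an extra attribute for storing gradients.
class GradArray(np.ndarray):
    def __new__(cls, input_array, gd=None):
        # View the input data as an instance of this subclass
        obj = np.asarray(input_array).view(cls)
        obj.gd = gd
        return obj

    def __array_finalize__(self, obj):
        # Called on views/copies; propagate the gradient attribute
        if obj is None:
            return
        self.gd = getattr(obj, 'gd', None)

t = GradArray(np.random.rand(4, 3))
t.gd = np.zeros_like(np.asarray(t))
print(t.shape, t.gd.shape)  # (4, 3) (4, 3)
```

Because such a subclass is still an `ndarray`, NumPy-aware libraries like scikit-learn can index and split it transparently, which is what makes the `train_test_split` call in the demo work.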
Future Implementations
This project is at a very early stage, and I aim to expand it further. I will add implementations of more optimizers and loss functions, along with other layers such as MaxPool2D and convolutional layers. There is currently no implementation of the concept of "batch size", which will be added very soon. An easier way to add custom metrics will also be implemented in the model.train() method, and verbose control will be added. Currently, the CreateModel class is restricted to sequential NNs, which I plan to change by implementing a more "functional" API, to enable more complex architectures like skip connections. A further aim is to implement the famed UNet architecture using rawML. I also plan to explore GPU acceleration possibilities by migrating from NumPy to CuPy.
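For readers unfamiliar with the "batch size" concept mentioned above, here is a minimal sketch of shuffled mini-batch iteration in NumPy. The function name is hypothetical and is not part of the rawML API.

```python
import numpy as np

# Illustrative sketch: splitting a dataset into shuffled mini-batches,
# the mechanism behind a "batch size" hyperparameter.
def iterate_minibatches(X, Y, batch_size):
    idx = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], Y[batch]

X = np.random.rand(16, 100)
Y = np.random.rand(16, 40)
batches = list(iterate_minibatches(X, Y, batch_size=4))
print(len(batches))  # 4
```

Each epoch then performs one gradient update per mini-batch rather than one update over the full dataset, which usually speeds up and stabilizes training.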
(PS, there exists an rML.about())
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file rawml-0.1.1.tar.gz.
File metadata
- Download URL: rawml-0.1.1.tar.gz
- Upload date:
- Size: 5.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `921e5170c61a6783394a99b9024e7ab5ed8b8d40a97bea4d3c85c4837d921274` |
| MD5 | `f4e82f33d240f5140095cf828d07aef2` |
| BLAKE2b-256 | `631b6ed0ed4d89075cc3f60c2b70edad49ac4f0eaf9daf6bb800c7e8551b2086` |
File details
Details for the file rawML-0.1.1-py3-none-any.whl.
File metadata
- Download URL: rawML-0.1.1-py3-none-any.whl
- Upload date:
- Size: 6.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `996ab329b04093b207ce867f4dff631cbd14146bbb3617ed4f5e4c071fbfe544` |
| MD5 | `09b9b479bdd2a24af5486728b886f15e` |
| BLAKE2b-256 | `550e95e2f4c291aac498336f2e1dd8bc37ec8805ca130f97238ad4edaf4d870e` |