A lightweight, fast machine learning framework.
Project description
EXtremely Rapid Neural Networks (xrnn) is a Python machine learning framework that exposes an easy-to-use interface for building neural networks as a series of layers, similar to Keras, while being as lightweight, fast, compatible and extendable as possible.
Table of Contents
- The advantages of this package over existing machine learning frameworks
- Installation
- Examples
- Features
- Building from Source
- Testing
- Current Design Limitations
- Project Status
- License
The advantages of this package over existing machine learning frameworks
- Works on any Python version from 3.6 (released in 2016) onward.
- Requires only one dependency, NumPy (which you most likely already have).
- Lightweight in terms of size.
- Very fast startup time, which was the main motivation behind developing this project: importing the package, building a network and starting training takes less than a second (compared to TensorFlow, for example, which can take more than 10 seconds).
- High performance, even on weak hardware: reached 90% validation accuracy on the MNIST dataset using a CNN on a 2-core 2.7 GHz CPU (i7-7500U) in 20 seconds.
- Memory efficient: uses ~25% less RAM than TensorFlow for a full CNN training/inference pipeline.
- Compatible: there's no OS-specific code (OS and hardware independent), so the package can be built and run on pretty much any platform that has Python >= 3.6 and any C compiler released in the last 20 years.
Installation
Run the following command:
pip install xrnn
- Pre-built distributions (wheels) are provided for pretty much every platform, so the installation should be quick and error-free.
- A source distribution is also available in case there isn't a wheel for the platform you're running on.
Examples
This example will show how to build a CNN for classification, add layers to it, train it on dummy data, validate it and use it for inference.
import numpy as np
# Create a dummy dataset, which contains 1000 images, where each image is 28 pixels in height and width and has 3 channels.
number_of_samples = 1000
height = 28
width = 28
channels = 3
number_of_classes = 9 # How many classes are in the dataset, e.g. cat, car, dog, etc.
x_dummy = np.random.random((number_of_samples, height, width, channels))
y_dummy = np.random.randint(number_of_classes, size=(number_of_samples, ))
# Build the network.
batch_size = 64 # How many samples are in each batch (slice) of the data.
epochs = 2 # How many full iterations over the dataset to train the network for.
from xrnn.model import Model # The neural network blueprint (houses the layers)
from xrnn.layers import Conv2D, BatchNormalization, Flatten, Dense, MaxPooling2D
from xrnn.activations import ReLU, Softmax
from xrnn.losses import CategoricalCrossentropy # This loss is used for classification problems.
from xrnn.optimizers import Adam
model = Model()
model.add(Conv2D(16, 3, 2, 'same'))  # 16 filters, a 3x3 kernel, a stride of 2 and 'same' padding.
model.add(ReLU())
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2, 'same'))  # A 2x2 pooling window with a stride of 2 and 'same' padding.
model.add(Flatten())
model.add(Dense(100))
model.add(ReLU())
model.add(Dense(number_of_classes)) # The output layer, has the same number of neurons as the number of unique classes in the dataset.
model.add(Softmax())
model.set(Adam(), CategoricalCrossentropy())  # Set the optimizer and the loss function.
model.train(x_dummy, y_dummy, epochs=epochs, batch_size=batch_size, validation_split=0.1) # Use 10% of the data for validation.
x_dummy_predict = np.random.random((batch_size, height, width, channels))
prediction = model.inference(x_dummy_predict) # Same as model.predict(x_dummy_predict).
# The model predicts on batches, so even a single sample is treated as a batch of one; that's why we take
# the first sample below.
prediction = prediction[0]
# The model returns a probability for each label, `np.argmax` returns the index with the largest probability.
label = np.argmax(prediction)
print(f"Prediction: {label} - Actual: {y_dummy[0]}.")
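For context, the validation_split=0.1 argument used above holds out a fraction of the data for validation. Here is a minimal NumPy-only sketch of how such a split can be computed; this is an illustration of the idea, not xrnn's actual implementation:

```python
import numpy as np

# Dummy data with the same shape as in the example above.
x = np.random.random((1000, 28, 28, 3))
y = np.random.randint(9, size=(1000,))

validation_split = 0.1
split_index = int(len(x) * (1 - validation_split))  # 900

# Hold out the last 10% of the samples for validation.
x_train, x_val = x[:split_index], x[split_index:]
y_train, y_val = y[:split_index], y[split_index:]

print(x_train.shape, x_val.shape)  # (900, 28, 28, 3) (100, 28, 28, 3)
```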
And that's it! You've built, trained and validated a convolutional neural network in just a few lines. Granted, the data is random,
so the model isn't going to learn anything, but this demonstrates how to use the package: just replace 'x_dummy' and 'y_dummy' with
actual data and watch the magic happen!
A complete example that demonstrates the above with actual data can be found in the example.py script bundled with the package.
It trains a CNN on the MNIST dataset; just import the script using from xrnn import example and run example.mnist_example().
Alternatively, you can run it from the command line using python -m xrnn.example.
Note that the script will download the MNIST dataset (~12 megabytes) and store it locally.
Features
- xrnn.layers: Implements Conv2D, Dropout, Dense, Max/AvgPool2D, Flatten and BatchNormalization layers.
- xrnn.optimizers: Implements Adam, SGD (with momentum support), RMSprop and Adagrad optimizers.
- xrnn.losses: Implements BinaryCrossentropy, CategoricalCrossentropy and MeanSquaredError (MSE) loss functions.
- xrnn.activations: Implements ReLU, LeakyReLU, Softmax, Sigmoid and Tanh activation functions.
- xrnn.models: Implements the Model class, which is similar to the Keras Sequential model and can be used to build, train, validate and use (run inference with) a neural network.
For more information on how to use each feature (like the Model class), look at that feature's docstring (for example, help(Conv2D.forward)).
Nonetheless, if you are acquainted with Keras, it's pretty easy to get started with this package because it has almost
the same interface; the only notable difference is that Keras' model.fit is equivalent to model.train in this package.
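As a point of reference for the loss functions listed above, here is a minimal NumPy sketch of softmax followed by categorical crossentropy. This is the standard textbook formulation, not necessarily xrnn's exact implementation:

```python
import numpy as np

def softmax(logits):
    # Subtract the per-row max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=-1, keepdims=True)

def categorical_crossentropy(probs, labels):
    # Mean negative log-probability assigned to the correct class.
    clipped = np.clip(probs, 1e-7, 1 - 1e-7)  # Avoid log(0).
    return -np.mean(np.log(clipped[np.arange(len(labels)), labels]))

logits = np.array([[2.0, 1.0, 0.1], [0.5, 2.5, 0.2]])
probs = softmax(logits)      # Each row sums to 1.
labels = np.array([0, 1])    # The correct class index for each sample.
loss = categorical_crossentropy(probs, labels)
print(loss)
```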
Building From Source
Running python -m build should suffice.
Tested Platforms | Tested Compilers | Tested Architectures |
---|---|---|
Windows Server 2022 | MSVC, GCC (MinGW), Clang | 64/32 bit |
Linux (Ubuntu 20.04) | GCC, Clang² | 64/32 bit |
macOS (Intel + ARM) | Clang³, GCC | 64 bit/ARM |
¹ The compiler used to build the package is in bold.
² You might encounter an omp.h file not found error; to fix this, install libomp using sudo apt install libomp-dev.
³ You might encounter an error indicating that omp.h couldn't be found; to fix this, install libomp using Homebrew: brew install libomp.
To set the compiler you want to use for compilation, change the value of compiler under [tool.xrnn] in pyproject.toml.
It can be a full path to the compiler executable or just its name (e.g. gcc) if it's on your PATH.
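For example, the relevant section of pyproject.toml might look like this (the compiler value shown is just an illustration; use whichever compiler you have installed):

```toml
[tool.xrnn]
compiler = "gcc"  # Or a full path, e.g. "C:/mingw64/bin/gcc.exe".
```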
Testing
To test the package, first install pytest if you don't already have it:
pip install pytest
Then run:
pytest PATH/TO/TESTS -p xrnn
Note that you need to install the package first if you built it from source.
Platform | Tested Python versions |
---|---|
Windows 64 bit | 3.6 - 3.12 |
Linux x86_64 | 3.6 - 3.12 |
MacOS x86_64 | 3.6 - 3.12 |
MacOS arm64 | 3.10 - 3.12 |
Windows 32 bit | 3.10 |
Linux i386 | 3.10 |
Linux arm64 | 3.10 |
Current Design Limitations
The current design philosophy prioritizes compatibility: being able to port/build this package on any OS or hardware. Only native Python/C code is used, with no dependence on any third-party libraries (except for NumPy). This is great for compatibility but not so much for performance, because it rules out optimized libraries like Eigen, Intel's oneDNN or CUDA, which in turn makes this machine learning framework impractical for large datasets and big models.
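To illustrate why this matters, here is a small benchmark sketch comparing a pure-Python dot product against NumPy's vectorized equivalent, which dispatches to compiled code. Optimized libraries like Eigen or oneDNN widen this gap even further for operations such as convolutions and matrix multiplies:

```python
import time
import numpy as np

n = 1_000_000
a = np.random.random(n)
b = np.random.random(n)

# Pure-Python dot product: one interpreted multiply-add per element.
start = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))
python_time = time.perf_counter() - start

# Vectorized dot product: a single call into compiled code.
start = time.perf_counter()
fast = a @ b
numpy_time = time.perf_counter() - start

print(f"Python loop: {python_time:.4f}s, NumPy: {numpy_time:.6f}s")
```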
Project Status
This project still has a long way to go, and I'm currently polishing its API. I might add the following features in the future:
- Add support for CUDA.
- Optimize CPU performance to match mature frameworks like PyTorch and TensorFlow.
- Add support for automatic differentiation to make building custom layers easier.
- Add more layers, mainly recurrent, attention and other convolution (transpose, separable) layers.
- Add support for multiple inputs/outputs to the layers and models.
All while keeping with the core vision of the project: making the package as easy to install, as compatible with all platforms and as extendable as possible.
License
This project is licensed under the MIT license.