Just another deep learning framework
Installation • Releases • Contributing • Features
MatterIx is a simple deep learning framework built to understand the fundamental concepts of autodiff, optimizers, and loss functions from first principles. It provides automatic differentiation (autodiff), optimizers, loss functions, and basic modules to create your own neural networks.
| Feature | Description | Function/Specs |
| ------- | ----------- | -------------- |
| Autodiff | Computes gradients for tensors | First-order derivatives |
| Loss functions | Provide a metric to evaluate the model or function | Mean squared error (MSE), Root mean squared error (RMSE) |
| Optimizers | Update the parameters of a model for a specific optimization problem | Stochastic gradient descent (SGD) |
| Activation functions | Decide whether a neuron should be activated by applying a non-linear transformation to a layer's output before it is passed to the next layer | Sigmoid, tanh, ReLU |
| Module | Serves as a base class to design your own neural networks | NIL |
The core value of MatterIx is that it is a distilled version of PyTorch, so it is easier to understand what is happening under the hood.
Installation
a. Install it from GitHub

```bash
# Install either with option 1 or option 2

# Option 1 (Preferred)
pip install git+https://github.com/SiddeshSambasivam/MatterIx.git#egg=MatterIx

# Option 2
git clone https://github.com/SiddeshSambasivam/MatterIx.git
python setup.py install
```
(or)
b. Install from PyPI
```bash
# Install directly from the PyPI repository
pip install --upgrade matterix
```
Features
1. Autodiff
Gradients are computed using reverse-mode autodiff. All computations are represented as a graph of tensors, with each tensor holding a reference to a function that can compute its local gradient. The partial derivative of each tensor is obtained once the entire graph has been traversed.

The fundamental idea behind autodiff is that it calculates the local derivative of each variable rather than its partial derivative. This makes traversing the computational graph simple and modular: the partial derivative of any variable with respect to the output can be calculated in just one traversal, with a complexity of O(n).
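To make this concrete, here is a minimal, self-contained sketch of how reverse-mode autodiff can be implemented for scalar values. This is a generic illustration of the technique, not MatterIx's actual internals; the `Scalar` class and all its fields are invented for this example:

```python
class Scalar:
    """Toy reverse-mode autodiff node: stores a value, an accumulated
    gradient, its parent nodes, and a closure that knows the local
    derivatives of the operation that produced it."""

    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self.parents = parents
        self._backward = lambda: None  # pushes this node's grad to its parents

    def __add__(self, other):
        out = Scalar(self.data + other.data, (self, other))

        def _backward():
            self.grad += out.grad   # local derivative: d(out)/d(self) = 1
            other.grad += out.grad  # local derivative: d(out)/d(other) = 1

        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Scalar(self.data * other.data, (self, other))

        def _backward():
            self.grad += other.data * out.grad  # d(out)/d(self) = other
            other.grad += self.data * out.grad  # d(out)/d(other) = self

        out._backward = _backward
        return out

    def backward(self):
        # Visit nodes in reverse topological order: one O(n) traversal
        # yields the partial derivative of the output w.r.t. every node.
        order, seen = [], set()

        def topo(node):
            if node not in seen:
                seen.add(node)
                for parent in node.parents:
                    topo(parent)
                order.append(node)

        topo(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backward()
```

Each operation records only its own local derivatives; `backward()` then chains them together with a single pass over the graph.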
The difference between a partial and a local derivative is the way each variable is treated in each equation. When calculating a partial derivative, the expression is fully expanded into its input variables: for example, given `c = a*b` and `d = a+b+c`, instead of using `c` we substitute `a*b`, giving `d = a+b+(a*b)`. When calculating a local derivative, on the other hand, each element in the expression is treated as a variable in its own right, so `d` is differentiated directly with respect to `a`, `b`, and `c`.
2. Loss functions
2.1 Mean squared error (MSE). Example:

```python
from matterix.functions import MSE

y_train = ...  # Actual/true values
y_pred = ...   # Model predictions

loss = MSE(y_train, y_pred)
```
2.2 Root mean squared error (RMSE). Example:

```python
from matterix.functions import RMSE

y_train = ...  # Actual/true values
y_pred = ...   # Model predictions

loss = RMSE(y_train, y_pred)
```
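For reference, these are the standard definitions of both losses over n samples (MatterIx's exact normalization may differ, e.g. via the `norm` flag used in the full example below):

```math
\mathrm{MSE}(y, \hat{y}) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2,
\qquad
\mathrm{RMSE}(y, \hat{y}) = \sqrt{\mathrm{MSE}(y, \hat{y})}
```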
3. Optimizers
3.1 Stochastic gradient descent
```python
from matterix.optimizer import SGD

optimizer = SGD(model, model.parameters(), lr=0.001)  # model, parameters to optimize, learning rate

# To set the gradients of the parameters to zero
optimizer.zero_grad()

# To update the parameters
optimizer.step()
```
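Conceptually, each `step()` applies the vanilla gradient-descent update, moving every parameter a small step against its gradient. The loop below is a sketch of that idea, not the library's actual implementation:

```python
def sgd_step(parameters, lr):
    # Illustrative only: w <- w - lr * dL/dw for each parameter
    for p in parameters:
        p.data = p.data - lr * p.grad
```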
4. Activation functions
Functions: sigmoid, tanh, relu.

All the activation functions are available from `matterix.functions`. Example:

```python
from matterix.functions import sigmoid
```
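Applied elementwise to a tensor, they look like the following. This usage sketch assumes all three functions accept the `Tensor` type used elsewhere in this README:

```python
from matterix import Tensor
from matterix.functions import sigmoid, tanh, relu

x = Tensor([-2.0, 0.0, 2.0], requires_grad=True)

a = sigmoid(x)  # squashes each value into (0, 1)
b = tanh(x)     # squashes each value into (-1, 1)
c = relu(x)     # zeroes out negative values
```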
5. Module
Module provides the necessary functions to design your own neural network. It has methods to set the gradients of all parameters to zero and to collect all the parameters of the network.

- Create a class which inherits from `nn.Module` to define your network
- Initialize your parameters
- Write a `forward` function
See the example below.
```python
from matterix import Tensor
import matterix.nn as nn

# To define a neural network, just inherit `Module` from `nn`
class SampleModel(nn.Module):
    def __init__(self) -> None:
        # Initialize your parameters
        self.w1 = Tensor.randn(5, requires_grad=True)
        self.w2 = Tensor.randn(14, requires_grad=True)
        ...

    def forward(self, x) -> Tensor:
        out_1 = x @ self.w1
        ...
        return output

model = SampleModel()

model.zero_grad()   # Sets the gradient of all the parameters to zero
model.parameters()  # Gets all the parameters
```
Example
The following is a simple example of a linear regression model:

```python
# Simple linear regression
from matterix import Tensor
from matterix.nn import Module, Linear
from matterix.optim import SGD
from matterix.loss import MSE

from tqdm import trange

x_data = Tensor.randn(100, 5)
coef = Tensor([1, 3, 2, 8, 6])
y_data = x_data @ coef + 5.0

class Model(Module):
    def __init__(self):
        # Linear is an abstraction for a linear layer, with weights and a bias initialised for the layer
        self.l1 = Linear(5)

    def forward(self, x) -> Tensor:
        output = self.l1(x)
        return output

model = Model()
optimizer = SGD(model, model.parameters(), lr=0.001)

epochs = 100

for epoch in (t := trange(epochs)):
    optimizer.zero_grad()
    y_pred = model(x_data)
    loss = MSE(y_data, y_pred, norm=False)
    loss.backward()
    optimizer.step()
    t.set_description("Epoch: %.0f Loss: %.10f" % (epoch, loss.data))
    t.refresh()

print(model.l1.w)  # Tensor([1.000003 3.00000593 1.99999385 7.99999544 6.0000062 ], shape=(5,))
print(model.l1.b)  # Tensor(5.000001599010044, shape=(1,))
```
Development setup
Install the necessary dependencies in a separate virtual environment:

```bash
# Create a virtual environment during development to avoid dependency issues
pip install -r requirements.txt

# Before submitting a PR, run the unit tests locally
pytest -v
```
Release history

1.0.1

- Used 1.0.0 for testing
- ADD: Tanh function, RMSE loss, `randn` and `randint`

0.1.1

- ADD: Optimizer: SGD
- ADD: Functions: ReLU
- ADD: Loss functions: RMSE, MSETensor
- ADD: Module: for defining neural networks
- FIX: Floating-point precision issue when calculating gradients

0.1.0

- First stable release
- ADD: Tensor, tensor operations, sigmoid function
- FIX: Inaccuracies with gradient computation
Contributing

1. Fork it
2. Create your feature branch: `git checkout -b feature/new_feature`
3. Commit your changes: `git commit -m 'add new feature'`
4. Push to the branch: `git push origin feature/new_feature`
5. Create a new pull request (PR)
Siddesh Sambasivam Suseela • @ssiddesh45 • plutocrat45@gmail.com
Distributed under the MIT license. See LICENSE for more information.