A deep learning framework implemented purely in NumPy, supporting both dynamic and static graphs, with GPU acceleration (via CuPy)
Project description
XShinnosuke: Deep Learning Framework
Description
XShinnosuke (XS for short) is a high-level neural network framework that supports both dynamic and static graphs, and has almost the same API as Keras and PyTorch, with slight differences. It is written in pure Python and designed to make experimentation fast.
Here are some features of XS:
- Based on CuPy (GPU version)/NumPy and native to Python.
- No dependency on any other third-party deep learning library.
- Keras- and PyTorch-style APIs, easy to get started with.
- Supports commonly used layers such as Dense, Conv2D, MaxPooling2D, LSTM, and SimpleRNN, as well as commonly used functions such as conv2d, max_pool2d, and relu.
- Sequential (as in PyTorch and Keras), Model (as in Keras), and Module (as in PyTorch) are all supported by XS.
- Training and inference are supported for both dynamic and static graphs.
- Autograd is supported.
XS is compatible with Python 3.x (3.7 is recommended). A C++ version is also available.
Getting started
Comparison with PyTorch and Keras
ResNet18 (5 epochs, batch size 32) | XS_static_graph (CPU) | XS_dynamic_graph (CPU) | PyTorch (CPU) | Keras (CPU)
---|---|---|---|---
Speed (ratio - seconds) | 1x - 65.05 | 0.98x - 66.33 | 2.67x - 24.39 | 1.8x - 35.97
Memory (ratio - GB) | 1x - 0.47 | 0.47x - 0.22 | 0.55x - 0.26 | 0.96x - 0.45

ResNet18 (5 epochs, batch size 32) | XS_static_graph (GPU) | XS_dynamic_graph (GPU) | PyTorch (GPU) | Keras (GPU)
---|---|---|---|---
Speed (ratio - seconds) | 1x - 9.64 | 1.02x - 9.45 | 3.47x - 2.78 | 1.07x - 9.04
Memory (ratio - GB) | 1x - 0.48 | 1.02x - 0.49 | 4.4x - 2.11 | 4.21x - 2.02
XS has the lowest memory usage!
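For reference, timings like these can be reproduced with a simple wall-clock measurement around fit(). The sketch below uses a toy Sequential model (introduced in the next section) rather than the ResNet18 measured in the tables:

import time
import numpy as np
from xs.nn.models import Sequential
from xs.layers import Dense

# a toy stand-in model; the tables above were measured with ResNet18
model = Sequential()
model.add(Dense(out_features=64, activation='relu', input_shape=(128, )))
model.add(Dense(out_features=10))
model.compile(loss='cross_entropy', optimizer='sgd')

trainX = np.random.randn(1024, 128)
trainy = np.random.randint(0, 10, (1024, ))

start = time.time()
model.fit(trainX, trainy, batch_size=32, epochs=5)
print(f'5 epochs took {time.time() - start:.2f} seconds')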
1. Static Graph
The core data structure of XS is a model, which provides a way to combine layers. There are two model types: Sequential (a linear stack of layers) and Functional (a graph of layers).
For the Sequential model:
from xs.nn.models import Sequential
model = Sequential()
Use .add() to connect layers:
from xs.layers import Dense
model.add(Dense(out_features=500, activation='relu', input_shape=(784, )))  # input_shape must be specified if this is the first layer of the model
model.add(Dense(out_features=10))
Once you have constructed your model, you should configure it with .compile() before training or inference:
model.compile(loss='cross_entropy', optimizer='sgd')
If your labels are one-hot encoded vectors/matrices, specify the loss as sparse_crossentropy; otherwise, use crossentropy instead.
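For example, if your labels start out as integer class indices, you can one-hot encode them with plain NumPy and then, following the convention above, compile with sparse_crossentropy (a sketch, not part of the official API):

import numpy as np
labels = np.array([3, 0, 9, 1])   # integer class indices
trainy = np.eye(10)[labels]       # one-hot rows, shape (4, 10)
model.compile(loss='sparse_crossentropy', optimizer='sgd')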
Use print(model) to see the details of the model:
***************************************************************************
Layer(type) Output Shape Param Connected to
###########################################################################
dense0 (Dense) (None, 500) 392500
---------------------------------------------------------------------------
dense1 (Dense) (None, 10) 5010 dense0
---------------------------------------------------------------------------
***************************************************************************
Total params: 397510
Trainable params: 397510
Non-trainable params: 0
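The parameter counts follow directly from weights plus biases: 784 × 500 + 500 = 392500 for dense0 and 500 × 10 + 10 = 5010 for dense1, giving 397510 in total.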
Start training your network with fit():
# trainX and trainy are ndarrays
history = model.fit(trainX, trainy, batch_size=128, epochs=5)
Once training is complete, you can save or load your model with save() / load(), respectively.
model.save(save_path)
model.load(model_path)
Evaluate your model's performance with evaluate():
# testX and testy are Cupy/Numpy ndarrays
accuracy, loss = model.evaluate(testX, testy, batch_size=128)
Run inference with predict():
predict = model.predict(testX)
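Assuming predict() returns an array of per-class scores with shape (num_samples, num_classes), class labels can be recovered with a plain NumPy argmax:

import numpy as np
predict = model.predict(testX)              # assumed shape: (num_samples, num_classes)
pred_labels = np.argmax(predict, axis=-1)   # index of the highest score per sample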
For the Functional model:
from xs.nn.models import Model
from xs.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
X_input = Input(input_shape=(1, 28, 28))  # (channels, height, width)
X = Conv2D(8, (2, 2), activation='relu')(X_input)
X = MaxPooling2D((2, 2))(X)
X = Flatten()(X)
X = Dense(10)(X)
model = Model(inputs=X_input, outputs=X)
model.compile(optimizer='sgd', loss='cross_entropy')
model.fit(trainX, trainy, batch_size=256, epochs=80)
Pass the input and output layers to Model(), then compile and fit the model as you would a Sequential model.
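Because a Functional model is a graph rather than a stack, branches can also be merged. Here is a hedged sketch using the Add layer listed under Supports below; the exact call convention for merge layers is an assumption:

from xs.nn.models import Model
from xs.layers import Input, Dense, Add

X_input = Input(input_shape=(16, ))
a = Dense(8, activation='relu')(X_input)   # first branch
b = Dense(8, activation='relu')(X_input)   # second branch
merged = Add()([a, b])                     # assumed signature: a list of branch outputs
out = Dense(10)(merged)
model = Model(inputs=X_input, outputs=out)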
2. Dynamic Graph
First, design your own network. Make sure it inherits from Module and overrides the __init__() and forward() methods:
from xs.nn.models import Module
from xs.layers import Conv2D, ReLU, Flatten, Dense
import xs.nn.functional as F

class MyNet(Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv2D(out_channels=8, kernel_size=3)  # no need to specify in_channels, which is simpler than in PyTorch
        self.relu = ReLU(inplace=True)
        self.flat = Flatten()
        self.fc = Dense(10)

    def forward(self, x, *args):
        x = self.conv1(x)
        x = self.relu(x)
        x = F.max_pool2d(x, kernel_size=2)
        x = self.flat(x)
        x = self.fc(x)
        return x
Then write the training/testing flow manually:
from xs.nn.optimizers import SGD
from xs.utils.data import DataSet, DataLoader
import xs.nn as nn
import numpy as np

# randomly generate data
X = np.random.randn(100, 3, 12, 12)
Y = np.random.randint(0, 10, (100, ))
# build the training dataloader
train_dataset = DataSet(X, Y)
train_loader = DataLoader(dataset=train_dataset, batch_size=10, shuffle=True)
# initialize the net
net = MyNet()
# specify the optimizer and criterion
optimizer = SGD(net.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
# start training
EPOCH = 5
for epoch in range(EPOCH):
    for x, y in train_loader:
        optimizer.zero_grad()
        out = net(x)
        loss = criterion(out, y)
        loss.backward()
        optimizer.step()
        train_acc = criterion.calc_acc(out, y)
        print(f'epoch -> {epoch}, train_acc: {train_acc}, train_loss: {loss.item()}')
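After training, the same objects can be reused for a quick check on held-out data. This sketch assumes the net accepts raw ndarrays, just like the DataLoader batches above:

# held-out data with the same shapes as the training set
testX = np.random.randn(20, 3, 12, 12)
testY = np.random.randint(0, 10, (20, ))
out = net(testX)
test_loss = criterion(out, testY)
test_acc = criterion.calc_acc(out, testY)
print(f'test_acc: {test_acc}, test_loss: {test_loss.item()}')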
Building an image classification model, a question-answering system, or any other model is just as convenient and fast.
Autograd
XS autograd supports basic operators such as +, -, *, /, and **, and some common functions: matmul(), mean(), sum(), log(), view(), etc.
from xs.nn import Tensor
a = Tensor(5, requires_grad=True)
b = Tensor(10, requires_grad=True)
c = Tensor(3, requires_grad=True)
x = (a + b) * c
y = x ** 2
print('x: ', x) # x: Variable(45.0, requires_grad=True, grad_fn=<MultiplyBackward>)
print('y: ', y) # y: Variable(2025.0, requires_grad=True, grad_fn=<PowBackward>)
x.retain_grad()
y.backward()
print('x grad:', x.grad) # x grad: 90.0
print('c grad:', c.grad) # c grad: 1350.0
print('b grad:', b.grad) # b grad: 270.0
print('a grad:', a.grad) # a grad: 270.0
The snippet above is a basic example of XS autograd.
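These gradients can be verified numerically with a finite-difference check in plain Python, independent of XS:

def f(a, b, c):
    # the same computation as above: y = ((a + b) * c) ** 2
    return ((a + b) * c) ** 2

eps = 1e-5
a, b, c = 5.0, 10.0, 3.0
# central differences approximate the partial derivatives of y
print((f(a + eps, b, c) - f(a - eps, b, c)) / (2 * eps))  # ~270.0, matches a.grad
print((f(a, b + eps, c) - f(a, b - eps, c)) / (2 * eps))  # ~270.0, matches b.grad
print((f(a, b, c + eps) - f(a, b, c - eps)) / (2 * eps))  # ~1350.0, matches c.grad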
Installation
Before installing XS, please install the following dependencies:
- Numpy
- Cupy (Optional)
Then you can install XS with pip:
$ pip install xshinnosuke
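Cupy is only needed for GPU acceleration; you can check whether it is installed and sees a GPU with plain Python:

try:
    import cupy  # optional GPU backend
    print('CuPy devices:', cupy.cuda.runtime.getDeviceCount())
except ImportError:
    print('CuPy not installed; XS will run on NumPy (CPU).')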
Supports
functional
- admm
- mm
- relu
- flatten
- conv2d
- max_pool2d
- avg_pool2d
- reshape
- sigmoid
- tanh
- softmax
- dropout2d
- batch_norm
- groupnorm2d
- layernorm
- pad_2d
- embedding
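These mirror their layer counterparts and can be applied directly to tensors, as F.max_pool2d is in the dynamic-graph example above. A sketch, assuming Tensor accepts a NumPy array:

import numpy as np
import xs.nn.functional as F
from xs.nn import Tensor

x = Tensor(np.random.randn(2, 3, 8, 8), requires_grad=True)  # assumes ndarray input
y = F.relu(x)                       # element-wise ReLU
y = F.max_pool2d(y, kernel_size=2)  # 2x2 max pooling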
Two basic classes:
- Layer:
- Dense
- Flatten
- Conv2D
- MaxPooling2D
- AvgPooling2D
- ChannelMaxPooling
- ChannelAvgPooling
- Activation
- Input
- Dropout
- BatchNormalization
- LayerNormalization
- GroupNormalization
- TimeDistributed
- SimpleRNN
- LSTM
- Embedding
- ZeroPadding2D
- Add
- Multiply
- Matmul
- Log
- Negative
- Exp
- Sum
- Abs
- Mean
- Pow
- Tensor:
- Parameter
Optimizers
- SGD
- Momentum
- RMSprop
- AdaGrad
- AdaDelta
- Adam
More to be implemented.
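Any of these should be a drop-in replacement in the dynamic-graph training loop above; for instance (assuming Adam lives alongside SGD and shares its constructor signature):

from xs.nn.optimizers import Adam  # assumed location, alongside SGD
optimizer = Adam(net.parameters(), lr=0.001)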
Objectives
- MSELoss
- MAELoss
- BCELoss
- SparseCrossEntropy
- CrossEntropyLoss
Activations
- ReLU
- Sigmoid
- Tanh
- Softmax
Initializations
- Zeros
- Ones
- Uniform
- LecunUniform
- GlorotUniform
- HeUniform
- Normal
- LecunNormal
- GlorotNormal
- HeNormal
- Orthogonal
Regularizers
Waiting for implementation.
Preprocess
- to_categorical (convert labels to one-hot vectors/matrices)
- pad_sequences (pad sequences to the same length)
Contact
- Email: eleven_1111@outlook.com
Download files
Download the file for your platform.
- Source distribution: xshinnosuke-0.1.8.tar.gz
- Built distribution: xshinnosuke-0.1.8-py3-none-any.whl
File details
Details for the file xshinnosuke-0.1.8.tar.gz.
File metadata
- Download URL: xshinnosuke-0.1.8.tar.gz
- Upload date:
- Size: 38.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.6.1 requests/2.25.1 setuptools/49.6.0.post20200814 requests-toolbelt/0.9.1 tqdm/4.55.1 CPython/3.7.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | ec4743709f90897550d6636068d458995929450f6956146f5d72218c422d869e
MD5 | b79fd142d8d588d94890ef936e2fe5ef
BLAKE2b-256 | d859e6461a32a5575e080942df50adee93d18157beb971a86e8d243858c0e6c8
File details
Details for the file xshinnosuke-0.1.8-py3-none-any.whl.
File metadata
- Download URL: xshinnosuke-0.1.8-py3-none-any.whl
- Upload date:
- Size: 44.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.6.1 requests/2.25.1 setuptools/49.6.0.post20200814 requests-toolbelt/0.9.1 tqdm/4.55.1 CPython/3.7.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 533a0ea794c654d9413835e681bcf462224e70609924abd71c00d685c9d5110b
MD5 | 24906c8c019196037c148821842beaf1
BLAKE2b-256 | 9a56befc9b5ee59268c7d61e1c62c33b43dd0360840e3b3d371692f0384d43be