Deep Learning for Everybody.

You have just found TensorHub, an open-source library that helps you develop and train ML models faster and more easily than ever before. TensorHub is a global collection of building blocks and ready-to-serve models: a wrapper library of deep learning models and neural "Lego" blocks designed to make deep learning more accessible and to accelerate ML research.
Use TensorHub if you need a deep learning library that offers:
- Reproducibility - Reproduce the results of existing pre-trained models (such as Google BERT and XLNet).
- Model modularity - TensorHub is divided into multiple components: ready-to-serve models, layers, neural blocks, etc. Numerous modules are implemented in each component, and a clear, robust interface lets users combine modules with as few restrictions as possible.
- Prototyping - Code less, build more. Use TensorHub to create fast prototypes with the help of modular blocks, custom layers, and custom activation support.
- Platform independence - Built on TensorFlow 2.0. Run your model on CPU, on a single GPU, or with a distributed training strategy (see the sketch below).
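To illustrate the last point, here is a minimal sketch of distributed training using plain TensorFlow 2.0; `tf.distribute.MirroredStrategy` is stock TensorFlow, not a TensorHub API, and the toy model is only for demonstration:

```python
import tensorflow as tf

# Mirror the model across all available GPUs (falls back to CPU if none are found).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Any Keras model built inside the scope is replicated across devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

# Training then proceeds exactly as on a single device:
# model.fit(x_train, y_train, epochs=5, batch_size=32)
```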
Getting started: 30 seconds to TensorHub
Here is a simple Sequential model built with TensorHub:

```python
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D

# Use TensorHub to accelerate your prototyping
from tensorhub.layers import InceptionV1  # Custom Inception layer
from tensorhub.models.image.classifiers import CNNClassifier, VGG16  # Cooked models
from tensorhub.utilities.activations import gelu, softmax  # Custom activations supported

## Initiate a sequential model
model = Sequential()

## Stacking layers is as easy as `.add()`
model.add(Conv2D(32, (3, 3), activation=gelu, input_shape=(100, 100, 3)))
model.add(Conv2D(32, (3, 3), activation=gelu))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

## Add a custom layer like any other standard layer
model.add(InceptionV1(32))
model.add(Flatten())
model.add(Dense(units=64, activation=gelu))
model.add(Dense(units=10, activation=softmax))

## Or use one of our pre-cooked models
model = VGG16(n_classes=10, num_nodes=64, img_height=100, img_width=100)

## Once your model looks good, configure its learning process with `.compile()`
model.compile(
    loss='categorical_crossentropy',
    optimizer='sgd',
    metrics=['accuracy']
)

## Alternatively, if you need to, you can further configure the compilation
model.compile(
    loss=tensorflow.keras.losses.categorical_crossentropy,
    optimizer=tensorflow.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    metrics=['acc']
)

## You can now iterate on your training data in batches
## (x_train and y_train are NumPy arrays)
model.fit(x_train, y_train, epochs=5, batch_size=32)

## Alternatively, you can feed batches to your model manually
model.train_on_batch(x_batch, y_batch)

## Evaluate your performance in one line
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)

## Or generate predictions on new data
classes = model.predict(x_test, batch_size=128)
```
Building a question answering system, an image classification model, a Neural Turing Machine, or any other model is just as fast. The ideas behind deep learning are simple, so why should their implementation be painful?
For a more in-depth tutorial about Keras, check out the guides at https://keras.io. More advanced examples will appear in the examples folder of this repository soon.
What's coming in V1.0
- Models
    - Image Classification (Supports Transfer Learning with ImageNet Weights)
        - SqueezeNet (Without Transfer Learning) *
    - RNN Model
    - LSTM Model
    - GRU Model
    - Neural Machine Translation *
        - Encoder-Decoder Sequence Translation Model
        - Translation with Attention
    - Text Generation *
        - RNN, LSTM, GRU Based Model
    - Named Entity Recognition *
        - RNN, LSTM, GRU Based Model
- Layers
    - Standard Layers
        - Inception V1 Layer
        - Inception V1 with Reduction Layer
        - Inception V2 Layer *
        - Inception V3 Layer *
    - Attention Layers (see the sketch after this list)
        - Bahdanau Attention
        - Luong Attention
        - Self-Attention *
- Utilities
    - Text
        - Custom Word and Character Tokenizer
        - Load Pre-trained Embeddings
        - Create Vocabulary Matrix
    - Image *
        - Image Augmentation
- Activations
    - Hard Sigmoid
- Trainer (Generic TF 2.0 train and validation pipelines) *

* Support coming soon
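Until the attention layers land, here is a minimal sketch of standard additive (Bahdanau) attention in plain TensorFlow 2.0; TensorHub's own layer and interface may differ from this:

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Additive (Bahdanau) attention over encoder outputs."""

    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects the decoder state
        self.W2 = tf.keras.layers.Dense(units)  # projects the encoder outputs
        self.V = tf.keras.layers.Dense(1)       # scores each encoder time step

    def call(self, query, values):
        # query: (batch, hidden) decoder state; values: (batch, time, hidden) encoder outputs
        query_with_time_axis = tf.expand_dims(query, 1)            # (batch, 1, hidden)
        score = self.V(tf.nn.tanh(
            self.W1(query_with_time_axis) + self.W2(values)))      # (batch, time, 1)
        attention_weights = tf.nn.softmax(score, axis=1)           # normalize over time
        context_vector = tf.reduce_sum(attention_weights * values, axis=1)  # (batch, hidden)
        return context_vector, attention_weights

# Hypothetical usage with a decoder state and encoder outputs:
# context, weights = BahdanauAttention(64)(decoder_state, encoder_outputs)
```

A Luong-style layer differs mainly in the scoring function (a dot product or bilinear form instead of the additive tanh score).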
Installation

Before installing TensorHub, please install its backend engine: TensorFlow (TensorFlow 2.0 is recommended).
You may also consider installing the following optional dependencies:
- cuDNN (Recommended if you plan on running Keras on GPU).
- HDF5 and h5py (Required if you plan on saving Keras models to disk).
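For example, with h5py installed, a model can be saved to and restored from a single HDF5 file using the standard Keras API; this minimal sketch uses a toy model for illustration:

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential, load_model

# A tiny model just to demonstrate the HDF5 round trip.
model = Sequential([Dense(10, activation='softmax', input_shape=(100,))])
model.compile(loss='categorical_crossentropy', optimizer='sgd')

# Persist architecture, weights, and optimizer state in one HDF5 file.
model.save('my_model.h5')

# Restore it later, ready for further training or inference.
model = load_model('my_model.h5')
```

If a model uses custom objects, such as TensorHub's gelu activation, pass them to load_model via its custom_objects argument.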
Then, you can install TensorHub itself:

```sh
sudo pip install tensorhub==1.0.0a3
```

If you are using a virtualenv, you may want to avoid using sudo:

```sh
pip install tensorhub==1.0.0a3
```
- Alternatively: install TensorHub from the GitHub source.

First, clone TensorHub using git:

```sh
git clone https://github.com/nityansuman/tensorhub.git
```

Then, cd to the TensorHub folder and run the install command:

```sh
cd tensorhub
sudo python setup.py install
```
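Either way, a quick smoke test confirms that the package is importable:

```sh
python -c "import tensorhub"
```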
We are eager to collaborate with you. Feel free to open an issue or send along a pull request. If you like the work, show your appreciation by "FORK", "STAR", or "SHARE".