neon is NErvana’s pythON based Deep Learning Framework! We have designed it with the following functionality in mind:
Features that are unique to neon include:
Basic information to get started is below. Please consult the full documentation for more information.
On Mac OS X or Linux, enter the following to download and install neon, then use it to train your first multi-layer perceptron or convolutional neural network in the examples below.
```bash
git clone https://github.com/NervanaSystems/neon.git
cd neon
sudo make install
```
The above will install neon system-wide. If you don’t have sufficient privileges or would prefer an isolated installation, see our virtualenv based install.
There are several examples built into neon in the examples directory to help you get started. The YAML format is plain text and can be edited to change various aspects of the model. See ANNOTATED_EXAMPLE.yaml for definitions and possible choices.
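To show what "plain text and easy to edit" means in practice, here is a hypothetical fragment in the spirit of these files. The keys and values below are illustrative assumptions only; the actual schema and tags are documented in ANNOTATED_EXAMPLE.yaml.

```yaml
# Hypothetical fragment -- real keys and tags are defined in
# ANNOTATED_EXAMPLE.yaml. Shown only to illustrate that model
# hyperparameters live in editable plain text.
model:
  type: mlp
  num_epochs: 10        # raise for a longer training run
  batch_size: 128
  learning_rate: 0.01   # adjust the learning rule here
```

Editing a value and re-running the neon command is enough to retrain with the new setting; no code changes are required.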
```bash
# for nervanagpu (requires Maxwell GPUs)
neon --gpu nervanagpu examples/convnet/i1k-alexnet-fp32.yaml

# for cudanet (works with Kepler or Maxwell GPUs)
neon --gpu cudanet examples/convnet/i1k-alexnet-fp32.yaml
```
```bash
neon --gpu nervanagpu examples/convnet/i1k-alexnet-fp16.yaml
```
- backends --- implementation of different hardware backends
- datasets --- support for common datasets: CIFAR-10, ImageNet, MNIST, etc.
- diagnostics --- hooks to measure timing and numeric ranges
- hyperopt --- hooks for hyperparameter optimization
- layers --- layer code
- models --- model code
- optimizers --- learning rules
- transforms --- activation & cost functions
- metrics --- performance evaluation metrics
The complete documentation for neon is available here. Some useful starting points are:
For any bugs or feature requests please:
The MOP is an abstraction layer for Nervana’s system software and hardware which includes the Nervana Engine, a custom distributed processor for deep learning.
The MOP consists of linear algebra and other operations required by deep learning. Some MOP operations are currently exposed in neon, while others, such as distributed primitives, will be exposed in later versions as well as in other forthcoming Nervana libraries.
Defining models in a MOP-compliant manner guarantees they will run on all provided backends. It also provides a way for existing Deep Learning frameworks such as theano, torch, and caffe to interface with the Nervana Engine.
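The backend-abstraction idea can be sketched as follows: a model written against a small operation interface runs unchanged on any backend that implements those operations. This is a minimal pure-Python illustration of the concept only; the class and method names are hypothetical and are not neon's or the MOP's actual API.

```python
import math


class Backend:
    """Minimal op set a backend must provide (hypothetical interface)."""

    def dot(self, a, b):
        raise NotImplementedError

    def add(self, a, b):
        raise NotImplementedError

    def logistic(self, a):
        raise NotImplementedError


class CPUBackend(Backend):
    """Pure-Python reference implementation over nested lists."""

    def dot(self, a, b):
        # matrix product of a (m x k) and b (k x n)
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

    def add(self, a, b):
        # elementwise sum of two equal-shaped matrices
        return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

    def logistic(self, a):
        # elementwise logistic sigmoid
        return [[1.0 / (1.0 + math.exp(-x)) for x in row] for row in a]


def linear_layer(be, x, w, bias):
    # Written only against the Backend interface, so the same model
    # code could run on any other backend implementing these ops.
    return be.logistic(be.add(be.dot(x, w), bias))


be = CPUBackend()
out = linear_layer(be, [[1.0, 2.0]], [[0.5], [-0.5]], [[0.0]])
```

Because `linear_layer` never touches a concrete array type, swapping `CPUBackend` for a GPU-backed implementation of the same interface would leave the model definition untouched, which is the portability guarantee described above.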
We have separate, upcoming efforts on the following fronts: