Intel's deep learning framework

Project description

neon is Intel’s reference deep learning framework, committed to best performance on all hardware and designed for ease of use and extensibility.

For fast iteration and model exploration, neon has the fastest performance among deep learning libraries (2x the speed of cuDNNv4; see benchmarks):

  * 2.5 s/macrobatch (3072 images) on AlexNet on a Titan X (full run on 1 GPU ~ 26 hrs)
  * Training VGG with 16-bit floating point on 1 Titan X takes ~10 days (original paper: 4 GPUs for 2-3 weeks)
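As a rough sanity check on these figures, the arithmetic below derives the implied AlexNet throughput and compares GPU-days for the VGG claim (a sketch; all numbers are taken directly from the bullets above):

```python
# Rough arithmetic behind the benchmark claims above.

MACROBATCH_IMAGES = 3072      # images per macrobatch (from the AlexNet claim)
SECONDS_PER_MACROBATCH = 2.5  # AlexNet on a Titan X

# Implied training throughput in images per second.
throughput = MACROBATCH_IMAGES / SECONDS_PER_MACROBATCH
print(f"{throughput:.1f} images/s")

# VGG: 1 GPU for ~10 days vs. the original paper's 4 GPUs for 2-3 weeks.
neon_gpu_days = 1 * 10
paper_gpu_days_low = 4 * 14   # 4 GPUs x 2 weeks
paper_gpu_days_high = 4 * 21  # 4 GPUs x 3 weeks
print(f"GPU-days: neon ~{neon_gpu_days}, "
      f"original paper {paper_gpu_days_low}-{paper_gpu_days_high}")
```

So the single-GPU AlexNet claim works out to roughly 1229 images/s, and the VGG claim amounts to about 10 GPU-days against 56-84 GPU-days in the original setup.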

We use neon internally at Intel Nervana to solve our customers’ problems across many domains. We are hiring across several roles. Apply here!

See the new features in our latest release. We want to highlight that neon v2.0.0+ has been optimized for much better performance on CPUs by enabling Intel Math Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL that is used by neon is provided free of charge and downloaded automatically as part of the neon installation.

Quick Install

On a Mac OS X or Linux machine, enter the following to download and install neon (conda users, see the guide), then use it to train your first multilayer perceptron. To force a Python 2 or Python 3 install, replace make below with either make python2 or make python3.

git clone https://github.com/NervanaSystems/neon.git
cd neon
make
. .venv/bin/activate

Starting after neon v2.2.0, the master branch of neon is updated weekly with work in progress toward the next release. For a stable release, check out a release tag (e.g., “git checkout v2.2.0”), or simply check out the “latest” tag to get the most recent stable release (i.e., “git checkout latest”).

As of version 2.4.0, pip install is re-enabled; neon can be installed under the package name nervananeon.

pip install nervananeon

Note that aeon must be installed separately. The latest release, v2.6.0, uses aeon v1.3.0.

Warning

Between neon v2.1.0 and v2.2.0, the aeon manifest file format changed. When updating from neon < v2.2.0, manifests must be recreated using the ingest scripts (in the examples folder) or updated using this script.

Use a script to run an example

python examples/mnist_mlp.py

Selecting a backend engine from the command line

If a compatible GPU resource is found on the system, the gpu backend is selected by default, so the above command is equivalent to:

python examples/mnist_mlp.py -b gpu

When no GPU is available, the optimized CPU (MKL) backend is selected by default as of neon v2.1.0, which makes the above command equivalent to:

python examples/mnist_mlp.py -b mkl

If you are interested in comparing the default mkl backend with the non-optimized CPU backend, use the following command:

python examples/mnist_mlp.py -b cpu

Use a yaml file to run an example

Alternatively, a yaml file may be used to run an example.

neon examples/mnist_mlp.yaml

To select a specific backend in a yaml file, add or modify a line that contains backend: mkl to enable mkl backend, or backend: cpu to enable cpu backend. The gpu backend is selected by default if a GPU is available.
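For example, a minimal sketch of such a file (only the backend line is the point here; the remaining keys are illustrative placeholders, not the exact schema of examples/mnist_mlp.yaml):

```yaml
# Run on the MKL-optimized CPU backend; use "cpu" for the
# non-optimized backend, or omit the line to prefer a GPU.
backend: mkl

# ... the rest of the configuration (model, cost, training
# parameters) follows as in the shipped example yaml files.
```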

Documentation

The complete documentation for neon is available here.

Support

For any bugs or feature requests please:

  1. Search the open and closed issues list to see if we’re already working on what you have uncovered.

  2. Check that your issue/request hasn’t already been addressed in our Frequently Asked Questions (FAQ) or neon-users Google group.

  3. File a new issue or submit a new pull request if you have some code you’d like to contribute.

For other questions and discussions, please post a message to the neon-users Google group.

License

We are releasing neon under an open source Apache 2.0 License. We welcome you to contact us with your use cases.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

nervananeon-2.6.0-py3-none-any.whl (78.5 MB)

Uploaded Python 3

nervananeon-2.6.0-py2-none-any.whl (78.5 MB)

Uploaded Python 2

File details

Details for the file nervananeon-2.6.0-py3-none-any.whl.

File metadata

File hashes

Hashes for nervananeon-2.6.0-py3-none-any.whl:

  SHA256: e21fc8e9d713f8b3b43096ecf7717a7efe28f962ce26475ee1b2b3960ea54645
  MD5: 5d6a35291c9c3475a537d55266e25f14
  BLAKE2b-256: c6714e8f2c1ff5318d21952a80b83d8eeb6bf48c7c3a7f41ceac97d6e8b8e537

See more details on using hashes here.
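A downloaded wheel can be verified against the published digest using Python's standard-library hashlib; a minimal sketch (the filename and expected digest would be the wheel name and SHA256 value listed above):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading it in chunks
    so that large wheels do not need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the digest published above, e.g.:
# expected = "e21fc8e9d713f8b3b43096ecf7717a7efe28f962ce26475ee1b2b3960ea54645"
# assert sha256_of_file("nervananeon-2.6.0-py3-none-any.whl") == expected
```

pip performs an equivalent check automatically when a hash is pinned in a requirements file.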

File details

Details for the file nervananeon-2.6.0-py2-none-any.whl.

File metadata

File hashes

Hashes for nervananeon-2.6.0-py2-none-any.whl:

  SHA256: a61f80d557430a548107cf3c0062855ff6c399949c0b03eb07c3bb31f97f858f
  MD5: 888477434d2aac5b235ccb3a11eab981
  BLAKE2b-256: 686a8dde6aa8ecef2ef7260d74c2cf75c0e6ed7d4262f43f2b62d579b5a778b5

See more details on using hashes here.
