
Deep neural networks without the learning cliff! A wrapper library compatible with scikit-learn.

Project description


scikit-neuralnetwork

Deep neural network implementation without the learning cliff! This library implements multi-layer perceptrons as a wrapper for the powerful Lasagne library, and is compatible with scikit-learn for a more user-friendly and Pythonic interface.

NOTE: This project is possible thanks to the nucl.ai Conference on July 18-20. Join us in Vienna!



Features

Thanks to the underlying Lasagne implementation, this library supports the following neural network features, which are exposed in an intuitive and well-documented API, as sketched below:

  • Activation Functions —
    • Nonlinear: Sigmoid, Tanh, Rectifier.

    • Linear: Linear, Gaussian, Softmax.

  • Layer Types — Convolution (greyscale and color, 2D), Dense (standard, 1D).

  • Learning Rules — sgd, momentum, nesterov, adadelta, adagrad, rmsprop.

  • Regularization — L1, L2 and dropout.

  • Dataset Formats — numpy.ndarray, scipy.sparse, coming soon: iterators.

If a feature you need is missing, consider opening a GitHub Issue with a detailed explanation about the use case and we’ll see what we can do.
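As a rough sketch (not taken from the official documentation) of how these options surface in the estimator constructors, the snippet below combines a nonlinear hidden layer, a momentum learning rule and L2 regularization. Keyword names such as learning_rule, learning_momentum and regularize are assumptions based on the features listed above and may differ in your installed version:

from sknn.mlp import Regressor, Layer

nn = Regressor(
    layers=[
        Layer("Tanh", units=64),   # nonlinear hidden layer
        Layer("Linear")],          # linear output layer
    learning_rule="momentum",      # assumed keyword; sgd, nesterov, adadelta, ... listed above
    learning_momentum=0.9,         # assumed keyword
    regularize="L2",               # assumed keyword; L1 and dropout are listed as alternatives
    learning_rate=0.01,
    n_iter=25)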

Installation

To download and set up the latest official release, install directly from PyPI:

> pip install scikit-neuralnetwork

This will also install Lasagne as a dependency. We recommend using a Python virtual environment.

You can then run the tests with nosetests -v sknn; further samples and benchmarks are available in the examples/ folder.

Getting Started

The library supports both regressors (to estimate continuous outputs from inputs) and classifiers (to predict labels from features). This is the sklearn-compatible API:

from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Rectifier", units=100),  # hidden layer with rectified linear units
        Layer("Softmax")],              # softmax output layer for class probabilities
    learning_rate=0.02,
    n_iter=10)
nn.fit(X_train, y_train)

y_valid = nn.predict(X_valid)

score = nn.score(X_test, y_test)
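The snippet above covers classification; regression follows the same pattern with continuous targets. A minimal sketch, assuming X_train and y_train contain continuous values:

from sknn.mlp import Regressor, Layer

nn = Regressor(
    layers=[
        Layer("Rectifier", units=100),  # nonlinear hidden layer
        Layer("Linear")],               # linear output for continuous targets
    learning_rate=0.02,
    n_iter=10)
nn.fit(X_train, y_train)

y_pred = nn.predict(X_valid)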

The generated documentation is available as a standalone page, where you can find more information about parameters, as well as examples in the User Guide.



Project details


Download files


Source Distribution

scikit-neuralnetwork-0.7.tar.gz (33.3 kB)


File details

Details for the file scikit-neuralnetwork-0.7.tar.gz.


File hashes

Hashes for scikit-neuralnetwork-0.7.tar.gz:

SHA256: 5a08a01759ece55fdd9a16a1227a7b8d7d103b97abe60320972c9f077a9a3eb8
MD5: 9af56af932cf3e12343e8d7ecf267997
BLAKE2b-256: dd37bfc84fc1b1bfc7364f564469cf76d90336accf16087dfd5a1bf589bd1dd9
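If you want to verify a downloaded archive against the SHA256 digest above, here is a minimal sketch using only the Python standard library (the file name is assumed to be the source distribution listed here):

import hashlib

# Expected SHA256 digest, copied from the hash list above.
EXPECTED = "5a08a01759ece55fdd9a16a1227a7b8d7d103b97abe60320972c9f077a9a3eb8"

with open("scikit-neuralnetwork-0.7.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "Mismatch: " + digest)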

