
Neural Pipeline Search helps deep learning experts find the best neural pipeline.


Neural Pipeline Search (NePS)


Welcome to NePS, a powerful and flexible Python library for hyperparameter optimization (HPO) and neural architecture search (NAS). Its primary goal is to make HPO and NAS usable for deep learners in practice.

NePS houses recently published as well as well-established algorithms that can all be run massively in parallel on distributed setups, along with tools to analyze and restart runs, all tailored to the needs of deep learning experts.

Take a look at our documentation for all the details on how to use NePS!

Key Features

In addition to the features offered by traditional HPO and NAS libraries, NePS stands out with:

  1. Hyperparameter Optimization (HPO) with Prior Knowledge and Cheap Proxies:

    NePS excels at efficiently tuning hyperparameters using algorithms that let users exploit their prior knowledge about the search space, building on insights from recently published research.
  2. Neural Architecture Search (NAS) with General Search Spaces:

    NePS is equipped to handle context-free-grammar search spaces, providing advanced capabilities for designing and optimizing architectures. This builds on insights from recently published research.
  3. Easy Parallelization and Design Tailored to DL:

    NePS simplifies parallelizing optimization tasks both on individual machines and in distributed computing environments. Since NePS is made for deep learners, all technical choices are made with DL in mind, and common DL tools such as TensorBoard are embraced.
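As a toy illustration of the prior-knowledge idea in feature 1, the sketch below shows a hypothetical prior-aware sampler in plain Python. This is not NePS's actual algorithm or API; the function name, the Gaussian-around-the-default strategy, and the `confidence` parameter are all illustrative stand-ins for the general technique of biasing sampling toward a user-supplied default:

```python
import random


def sample_with_prior(lower: float, upper: float, default: float,
                      confidence: float = 0.75) -> float:
    """Toy prior-aware sampler (illustrative only, not NePS's algorithm).

    With probability `confidence`, sample near the user's default value;
    otherwise explore the full range uniformly.
    """
    if random.random() < confidence:
        # Exploit the prior: Gaussian around the default, clipped to the bounds
        spread = (upper - lower) * 0.1
        return min(upper, max(lower, random.gauss(default, spread)))
    # Explore: uniform over the full range
    return random.uniform(lower, upper)
```

The higher the confidence, the more samples concentrate around the default, so a good prior speeds up the search while a bad one still leaves room for exploration.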

Installation

To install the latest release from PyPI, run:

pip install neural-pipeline-search

To get the latest version from GitHub, run:

pip install git+https://github.com/automl/neps.git

Basic Usage

Using neps always follows the same pattern:

  1. Define a run_pipeline function capable of evaluating different architectural and/or hyperparameter configurations for your problem.
  2. Define a search space named pipeline_space containing those parameters, e.g., via a dictionary.
  3. Call neps.run to optimize run_pipeline over pipeline_space.

In code, the usage pattern can look like this:

import neps
import logging


# 1. Define a function that accepts hyperparameters and computes the validation error
def run_pipeline(
    hyperparameter_a: float, hyperparameter_b: int, architecture_parameter: str
) -> float:
    # Create your model
    model = MyModel(architecture_parameter)

    # Train and evaluate the model with your training pipeline
    validation_error = train_and_eval(
        model, hyperparameter_a, hyperparameter_b
    )
    return validation_error


# 2. Define a search space of parameters; use the same parameter names as in run_pipeline
pipeline_space = dict(
    hyperparameter_a=neps.FloatParameter(
        lower=0.001, upper=0.1, log=True  # The search space is sampled in log space
    ),
    hyperparameter_b=neps.IntegerParameter(lower=1, upper=42),
    architecture_parameter=neps.CategoricalParameter(["option_a", "option_b"]),
)


# 3. Run the NePS optimization
logging.basicConfig(level=logging.INFO)
neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    root_directory="path/to/save/results",  # Replace with the actual path.
    max_evaluations_total=100,
)

Examples

Discover how NePS works through the examples in our documentation.

Contributing

Please see the documentation for contributors.

Citations

For pointers on citing the NePS package and papers, refer to our documentation on citations.

