
Platform for Neural Architecture Search

Archai accelerates your Neural Architecture Search (NAS) through fast, reproducible and modular research, enabling the generation of efficient deep networks for various applications.

Installation

Archai can be installed in several ways; for best results, we recommend using a virtual environment such as conda or pyenv.
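
For example, you can create and activate a fresh conda environment before installing (the environment name archai below is arbitrary):

conda create -n archai python=3.8
conda activate archai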

To install Archai via PyPI, run:

pip install archai

Archai requires Python 3.8+ and PyTorch 1.7.0+ to function properly.
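
As a quick sanity check, the following snippet (ours, not part of Archai) verifies both requirements once PyTorch is installed:

import sys

import torch

# Archai expects Python 3.8+ and PyTorch 1.7.0+
assert sys.version_info >= (3, 8), "Python 3.8 or newer is required"
print(torch.__version__)  # should print 1.7.0 or newer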

For further information, please consult the installation guide.

Quickstart

In this quickstart, we apply Archai to Natural Language Processing to find Pareto-optimal Transformer configurations according to a set of objectives.

Creating the Search Space

We start by importing the TransformerFlexSearchSpace class which represents the search space for the Transformer architecture:

from archai.discrete_search.search_spaces.nlp.transformer_flex.search_space import TransformerFlexSearchSpace

space = TransformerFlexSearchSpace("gpt2")
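
A quick way to sanity-check the space is to sample an architecture from it. This assumes the random_sample() method from Archai's DiscreteSearchSpace interface:

# Draw a random architecture from the search space
model = space.random_sample()
print(model.archid)  # identifier of the sampled architecture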

Defining Search Objectives

Next, we define the objectives we want to optimize. In this example, we use NonEmbeddingParamsProxy, TransformerFlexOnnxLatency, and TransformerFlexOnnxMemory. For each objective, higher_is_better sets the optimization direction, and the constraint range on non_embedding_params restricts the search to models with between 1M and 1B non-embedding parameters:

from archai.discrete_search.api.search_objectives import SearchObjectives
from archai.discrete_search.evaluators.nlp.parameters import NonEmbeddingParamsProxy
from archai.discrete_search.evaluators.nlp.transformer_flex_latency import TransformerFlexOnnxLatency
from archai.discrete_search.evaluators.nlp.transformer_flex_memory import TransformerFlexOnnxMemory

search_objectives = SearchObjectives()
search_objectives.add_objective(
    "non_embedding_params",
    NonEmbeddingParamsProxy(),
    higher_is_better=True,
    compute_intensive=False,
    constraint=(1e6, 1e9),
)
search_objectives.add_objective(
    "onnx_latency",
    TransformerFlexOnnxLatency(space),
    higher_is_better=False,
    compute_intensive=False,
)
search_objectives.add_objective(
    "onnx_memory",
    TransformerFlexOnnxMemory(space),
    higher_is_better=False,
    compute_intensive=False,
)
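
Each objective wraps a ModelEvaluator, so custom metrics can be plugged in the same way. The sketch below is illustrative only: the import paths and the exact evaluate() signature are assumptions and may differ across Archai versions.

from typing import Optional

from archai.discrete_search.api.archai_model import ArchaiModel
from archai.discrete_search.api.model_evaluator import ModelEvaluator

class TotalParamsProxy(ModelEvaluator):
    """Hypothetical evaluator that counts all trainable parameters."""

    def evaluate(self, model: ArchaiModel, budget: Optional[float] = None) -> float:
        # ArchaiModel is assumed to expose the underlying torch module as `model.arch`
        return float(sum(p.numel() for p in model.arch.parameters() if p.requires_grad))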

Initializing the Algorithm

We use the EvolutionParetoSearch algorithm to conduct the search:

from archai.discrete_search.algos.evolution_pareto import EvolutionParetoSearch

algo = EvolutionParetoSearch(
    space,
    search_objectives,
    None,  # dataset provider; not needed by these proxy objectives
    "tmp",  # output directory for search artifacts
    num_iters=5,
    init_num_models=10,
    seed=1234,
)

Performing the Search

Finally, we call the search() method to start the NAS process:

algo.search()

The algorithm will iterate through different network architectures, evaluate their performance based on the defined objectives, and ultimately produce a frontier of Pareto-optimal results.
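
Pareto-optimal here means that no other candidate is at least as good on every objective and strictly better on one. The following self-contained sketch (made-up numbers, not Archai's API) illustrates the kind of filtering involved:

# Candidates as (non_embedding_params, latency_s, memory_mb) tuples;
# params are maximized, while latency and memory are minimized.
candidates = [(120e6, 0.8, 900), (90e6, 0.5, 700), (80e6, 0.9, 1100)]

def dominates(a, b):
    # a dominates b when it is at least as good on every objective
    return a != b and a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]

# Keep only candidates that no other candidate dominates
pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
print(pareto)  # the third candidate is dominated by the first and drops out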

Tasks

To showcase Archai's capabilities, a set of end-to-end tasks is provided in the repository.

Documentation

The official documentation also provides a series of notebooks.

Support

If you have any questions, feedback, or suggestions about the Archai project or about open problems in Neural Architecture Search, please feel free to reach out; we look forward to hearing from you.

Team

Archai has been created and maintained by Shital Shah, Debadeepta Dey, Gustavo de Rosa, Caio Mendes, Piero Kauffmann, Chris Lovett, Allie Del Giorno, Mojan Javaheripi, and Ofer Dekel at Microsoft Research.

Contributions

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademark

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

License

This project is released under the MIT License. Please refer to the LICENSE file for more details.

