
Accelerate

Project description




Run your *raw* PyTorch training script on any kind of device

Easy to integrate

🤗 Accelerate was created for PyTorch users who like writing the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16.

🤗 Accelerate abstracts exactly and only the boilerplate code related to multi-GPUs/TPU/fp16 and leaves the rest of your code unchanged.

Here is an example:

Original training code (CPU or mono-GPU only):

import torch
import torch.nn.functional as F
from datasets import load_dataset

device = 'cpu'

model = torch.nn.Transformer().to(device)
optimizer = torch.optim.Adam(
    model.parameters()
)

dataset = load_dataset('my_dataset')
data = torch.utils.data.DataLoader(
    dataset
)

model.train()
for epoch in range(10):
    for source, targets in data:
        source = source.to(device)
        targets = targets.to(device)

        optimizer.zero_grad()

        output = model(source, targets)
        loss = F.cross_entropy(
            output, targets
        )

        loss.backward()

        optimizer.step()

With Accelerate (CPU/GPU/multi-GPUs/TPUs/fp16):

  import torch
  import torch.nn.functional as F
  from datasets import load_dataset

+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ device = accelerator.device

  model = torch.nn.Transformer().to(device)
  optimizer = torch.optim.Adam(
      model.parameters()
  )

  dataset = load_dataset('my_dataset')
  data = torch.utils.data.DataLoader(
      dataset
  )

+ model, optimizer, data = accelerator.prepare(
+     model, optimizer, data
+ )

  model.train()
  for epoch in range(10):
      for source, targets in data:
          source = source.to(device)
          targets = targets.to(device)

          optimizer.zero_grad()

          output = model(source, targets)
          loss = F.cross_entropy(
              output, targets
          )

+         accelerator.backward(loss)

          optimizer.step()

As you can see in this example, by adding just five lines of code to any standard PyTorch training script, you can now run it on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs), with or without mixed precision (fp16).

The same code can then run without modification on your local machine for debugging or on your training environment.

🤗 Accelerate can even handle the device placement for you (this requires a few more changes to your code, but it is safer in general), so you can simplify your training loop even further:

Original training code (CPU or mono-GPU only):

import torch
import torch.nn.functional as F
from datasets import load_dataset

device = 'cpu'

model = torch.nn.Transformer().to(device)
optimizer = torch.optim.Adam(
    model.parameters()
)

dataset = load_dataset('my_dataset')
data = torch.utils.data.DataLoader(
    dataset
)

model.train()
for epoch in range(10):
    for source, targets in data:
        source = source.to(device)
        targets = targets.to(device)

        optimizer.zero_grad()

        output = model(source, targets)
        loss = F.cross_entropy(
            output, targets
        )

        loss.backward()

        optimizer.step()

With Accelerate (CPU/GPU/multi-GPUs/TPUs/fp16):

  import torch
  import torch.nn.functional as F
  from datasets import load_dataset

+ from accelerate import Accelerator
+ accelerator = Accelerator()

+ model = torch.nn.Transformer()
  optimizer = torch.optim.Adam(
      model.parameters()
  )

  dataset = load_dataset('my_dataset')
  data = torch.utils.data.DataLoader(
      dataset
  )

+ model, optimizer, data = accelerator.prepare(
+     model, optimizer, data
+ )

  model.train()
  for epoch in range(10):
      for source, targets in data:
-         source = source.to(device)
-         targets = targets.to(device)

          optimizer.zero_grad()

          output = model(source, targets)
          loss = F.cross_entropy(
              output, targets
          )

+         accelerator.backward(loss)

          optimizer.step()

Launching script

🤗 Accelerate also provides a CLI tool that lets you quickly configure and test your training environment and then launch your scripts. No need to remember how to use torch.distributed.launch or to write a specific launcher for TPU training! On your machine(s), just run:

accelerate config

and answer the questions asked. This will generate a config file that will be used automatically to properly set the default options when doing

accelerate launch my_script.py --args_to_my_script

For instance, here is how you would run the GLUE example on the MRPC task (from the root of the repo):

accelerate launch examples/glue_example.py --task_name mrpc --model_name_or_path bert-base-cased
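
For comparison, on a single machine with several GPUs this replaces a manual launcher invocation of the kind mentioned above. Roughly (the flags and process count below are only illustrative and depend on your PyTorch version):

python -m torch.distributed.launch --nproc_per_node 8 --use_env my_script.py --args_to_my_script

becomes

accelerate launch my_script.py --args_to_my_script

with the number of processes, TPU usage, fp16, etc. taken from the config file generated by accelerate config.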

Why should I use 🤗 Accelerate?

You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to give up full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper, so you don't have to learn a new library; in fact, the whole API of 🤗 Accelerate is in one class, the Accelerator object.
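
To make that concrete, here is a minimal, self-contained sketch of the entire surface you interact with, restricted to what the examples above already use (accelerator.device, accelerator.prepare, accelerator.backward). The tiny linear model and random dataset below are placeholders for illustration only:

import torch
import torch.nn.functional as F
from accelerate import Accelerator

accelerator = Accelerator()      # picks up the CPU/GPU/multi-GPU/TPU setup you launched with
device = accelerator.device      # a torch.device, if you still want manual placement somewhere

# Placeholder model, optimizer and data; any PyTorch objects work here.
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = torch.utils.data.DataLoader(
    [(torch.randn(8), torch.tensor(1)) for _ in range(16)], batch_size=4
)

# One call wraps everything; the objects keep their usual PyTorch interface.
model, optimizer, data = accelerator.prepare(model, optimizer, data)

model.train()
for source, targets in data:
    optimizer.zero_grad()
    loss = F.cross_entropy(model(source), targets)
    accelerator.backward(loss)   # replaces loss.backward()
    optimizer.step()

Everything else in the loop is plain PyTorch, which is the point: there is no Trainer-style abstraction to learn.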

Why shouldn't I use 🤗 Accelerate?

You shouldn't use 🤗 Accelerate if you don't want to write a training loop yourself. There are plenty of high-level libraries above PyTorch that will offer you that; 🤗 Accelerate is not one of them.

Installation

This repository is tested on Python 3.6+ and PyTorch 1.4.0+.

You should install 🤗 Accelerate in a virtual environment. If you're unfamiliar with Python virtual environments, check out the user guide.

First, create a virtual environment with the version of Python you're going to use and activate it.
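
For example, with the built-in venv module (the environment name .env below is just a placeholder):

python -m venv .env
source .env/bin/activate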

Then, you will need to install PyTorch: refer to the official installation page for the specific install command for your platform. Once PyTorch is installed, 🤗 Accelerate can be installed using pip as follows:

pip install accelerate

Supported integrations

  • CPU only
  • single GPU
  • multi-GPU on one node (machine)
  • multi-GPU on several nodes (machines)
  • TPU
  • FP16 with native AMP (apex on the roadmap); see the sketch after this list for enabling mixed precision
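
For mixed precision, the most common route is to enable fp16 when answering the questions of accelerate config, in which case no code change is needed. If you want to force it in code instead, here is a minimal hedged sketch; it assumes this release exposes an fp16 flag on the Accelerator constructor (check the documentation of your installed version if it does not):

from accelerate import Accelerator

# Assumption: the installed release accepts fp16=True; otherwise enable
# mixed precision through `accelerate config` and leave the code unchanged.
accelerator = Accelerator(fp16=True)

Everything else (prepare, backward) stays exactly as in the examples above; accelerator.backward(loss) takes care of the gradient scaling that native AMP requires.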

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

accelerate-0.1.0.tar.gz (24.2 kB)

Uploaded Source

Built Distribution

accelerate-0.1.0-py3-none-any.whl (34.1 kB)

Uploaded Python 3

File details

Details for the file accelerate-0.1.0.tar.gz.

File metadata

  • Download URL: accelerate-0.1.0.tar.gz
  • Upload date:
  • Size: 24.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/47.1.0 requests-toolbelt/0.9.1 tqdm/4.49.0 CPython/3.7.9

File hashes

Hashes for accelerate-0.1.0.tar.gz
Algorithm Hash digest
SHA256 1f8e9f49305c56228794456a477fda8d66890cb68621e191d3c3da383f2540d0
MD5 859ec0ff296991401a784bd3a375e986
BLAKE2b-256 7faf9c18dc2e7903618738d008c0e5f3940d5289d24e36965ac7c0f047cc34a3

See more details on using hashes here.
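
If you want to verify a downloaded archive against the digests above before installing it, pip ships a small helper that prints a file's SHA256 (shown here on the source distribution as an example):

python -m pip hash accelerate-0.1.0.tar.gz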

File details

Details for the file accelerate-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: accelerate-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 34.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/47.1.0 requests-toolbelt/0.9.1 tqdm/4.49.0 CPython/3.7.9

File hashes

Hashes for accelerate-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 1b477eef26d4f6e116d508f638c8e0327c56cc0dc2e86789e8d79a47d36313d1
MD5 a3317fcc0e615f484638a310c15208ab
BLAKE2b-256 06b24399fbe5c1fc154384bc85e12befdf2f2ef31427b0d0f2e1916f7908ff27

See more details on using hashes here.
