
coinstac-dinunet

Distributed Neural Network implementation on COINSTAC.


pip install coinstac-dinunet

Install the supported PyTorch and torchvision binaries for your device/Docker ecosystem:

torch==1.5.1+cu92
torchvision==0.6.1+cu92
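
For example, the CUDA 9.2 wheels above can be installed from the PyTorch stable wheel index (a sketch; swap the +cu92 suffix for the build matching your CUDA version):

pip install torch==1.5.1+cu92 torchvision==0.6.1+cu92 -f https://download.pytorch.org/whl/torch_stable.html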

Highlights:

1. Handles multi-network/complex training schemes.
2. Automatic data splitting/k-fold cross validation.
3. Automatic model checkpointing.
4. GPU enabled local sites.
5. Customizable metrics (with automatic serialization between nodes) to work with any scheme.
...

[Figure: DINUNET, the pipeline for reducing gradients across sites.]

Full working examples

  1. FreeSurfer volumes classification.
  2. VBM 3D images classification.

General use case:

Imports

from coinstac_dinunet import COINNDataset, COINNTrainer, COINNRemote, COINNLocal
from coinstac_dinunet.metrics import COINNAverages, Prf1a

1. Define Data Loader

class MyDataset(COINNDataset):
    def __init__(self, **kw):
        super().__init__(**kw)
        self.labels = None

    def load_index(self, id, file):
        data_dir = self.path(id, 'data_dir') # data_dir comes from inputspecs.json
        ...
        self.indices.append([id, file])

    def __getitem__(self, ix):
        id, file = self.indices[ix]
        data_dir = self.path(id, 'data_dir') # data_dir comes from inputspecs.json
        label_dir = self.path(id, 'label_dir') # label_dir comes from inputspecs.json
        ...
        # Logic to load, transform single data item.
        ...
        return {'inputs': ..., 'labels': ...}
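
As a hypothetical, filled-in variant (the file layout, labels.csv format, and FreeSurferDataset name below are illustrative assumptions, not part of the library), a CSV-feature dataset could look like:

import os

import numpy as np
import torch


class FreeSurferDataset(COINNDataset):
    """Hypothetical dataset: one CSV file of features per subject."""

    def __init__(self, **kw):
        super().__init__(**kw)
        self.labels = None

    def load_index(self, id, file):
        if self.labels is None:
            label_dir = self.path(id, 'label_dir')  # label_dir comes from inputspecs.json
            # Assumed layout: labels.csv maps each file name to an integer class.
            with open(os.path.join(label_dir, 'labels.csv')) as f:
                self.labels = dict(
                    (name, int(lbl)) for name, lbl in
                    (line.strip().split(',') for line in f if line.strip()))
        self.indices.append([id, file])

    def __getitem__(self, ix):
        id, file = self.indices[ix]
        data_dir = self.path(id, 'data_dir')  # data_dir comes from inputspecs.json
        x = np.loadtxt(os.path.join(data_dir, file), delimiter=',')
        return {'inputs': torch.from_numpy(x).float(), 'labels': self.labels[file]}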

2. Define Trainer

import torch
import torch.nn.functional as F


class MyTrainer(COINNTrainer):
    def __init__(self, **kw):
        super().__init__(**kw)

    def _init_nn_model(self):
        # MYModel is a placeholder for your own torch.nn.Module.
        self.nn['model'] = MYModel(in_size=self.cache['input_size'], out_size=self.cache['num_class'])

    def iteration(self, batch):
        inputs = batch['inputs'].to(self.device['gpu']).float()
        labels = batch['labels'].to(self.device['gpu']).long()

        out = F.log_softmax(self.nn['model'](inputs), 1)
        loss = F.nll_loss(out, labels)
        _, predicted = torch.max(out, 1)
        score = self.new_metrics()
        score.add(predicted, labels)
        val = self.new_averages()
        val.add(loss.item(), len(inputs))
        return {'out': out, 'loss': loss, 'averages': val,
                'metrics': score, 'prediction': predicted}

3. Supply them to the local node in local.py

import json
import sys

if __name__ == "__main__":
    args = json.loads(sys.stdin.read())
    # COINNLocal, MyDataset, and MyTrainer come from the imports/definitions above.
    local = COINNLocal(cache=args['cache'], input=args['input'], state=args['state'])
    local.compute(MyDataset, MyTrainer)
    local.send()

4. Define the remote node in remote.py

import json
import sys

from coinstac_dinunet import COINNRemote
from coinstac_dinunet.metrics import COINNAverages, Prf1a


class MyRemote(COINNRemote):

    def _new_metrics(self):
        # Metric pooled across sites: precision, recall, F1, and accuracy.
        return Prf1a()

    def _new_averages(self):
        # Running averages (e.g., loss) pooled across sites.
        return COINNAverages()

    def _monitor_metric(self):
        # Monitor 'f1' and treat higher values as better.
        return 'f1', 'maximize'


if __name__ == "__main__":
    args = json.loads(sys.stdin.read())
    remote = MyRemote(cache=args['cache'], input=args['input'], state=args['state'])
    remote.compute()
    remote.send()

Define custom metrics
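
A rough sketch of one way a custom metric could look, assuming the add(value, n) accumulator behavior of COINNAverages used in the trainer above; the full base-class contract (reset, serialization hooks, etc.) is not shown on this page, so verify the pattern against the installed version:

from coinstac_dinunet.metrics import COINNAverages


class SensitivityMetric(COINNAverages):
    """Hypothetical metric: running sensitivity (recall) for the positive class."""

    def add(self, predicted, labels):
        # True positives and false negatives for label 1.
        tp = ((predicted == 1) & (labels == 1)).sum().item()
        fn = ((predicted != 1) & (labels == 1)).sum().item()
        if tp + fn > 0:
            # Accumulate per-batch recall, weighted by the number of positives.
            super().add(tp / (tp + fn), tp + fn)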

Default arguments:

  • task_name: str = None, Name of the task. [Required]
  • mode: str = None, e.g. train/test. [Required]
  • batch_size: int = 4
  • epochs: int = 21
  • learning_rate: float = 0.001
  • gpus: _List[int] = None, e.g. [0], [1], [0, 1], ...
  • pin_memory: bool = True if CUDA is available.
  • num_workers: int = 0
  • load_limit: int = float('inf'), Limit on the number of data items to load, for debugging purposes.
  • pretrained_path: str = None, Path to pretrained weights.
  • patience: int = 5, Patience for ending training early by monitoring validation scores.
  • load_sparse: bool = False, Load each data item in a separate loader, e.g. to reconstruct images from patches.
  • num_folds: int = None, Number of folds for k-fold cross validation.
  • split_ratio: _List[float] = (0.6, 0.2, 0.2), Train/validation/test split ratio. Mutually exclusive with num_folds.

Parameters passed directly to coinstac_dinunet.nodes.COINNLocal, and arguments passed through the inputspec, override these defaults in that order (a sketch follows).
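
For instance, defaults can be overridden at construction time (a minimal sketch; the keyword names mirror the defaults listed above, while the concrete values are illustrative only):

import json
import sys

from coinstac_dinunet import COINNLocal

if __name__ == "__main__":
    args = json.loads(sys.stdin.read())
    local = COINNLocal(
        cache=args['cache'], input=args['input'], state=args['state'],
        task_name='fs-classification',  # [Required] illustrative name
        mode='train',                   # [Required]
        batch_size=16, epochs=51, learning_rate=0.0005,
        gpus=[0], num_folds=5)
    local.compute(MyDataset, MyTrainer)
    local.send()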

Custom data splits can be provided in the path specified by split_dir in each site's respective inputspec file. This option is mutually exclusive with both num_folds and split_ratio.
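
The split-file schema is not documented on this page; conceptually, a custom split enumerates which data items belong to each phase. A hypothetical JSON split file (file names and structure are illustrative only):

{
  "train": ["subject_01.csv", "subject_02.csv"],
  "validation": ["subject_03.csv"],
  "test": ["subject_04.csv"]
}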

