SageMaker Training Toolkit
Train machine learning models within a Docker container using Amazon SageMaker.
:books: Background
Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models.
To train a model, you can include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process.
The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models. If you use a prebuilt SageMaker Docker image for training, this library may already be included.
For more information, see the Amazon SageMaker Developer Guide sections on using Docker containers for training.
:hammer_and_wrench: Installation
To install this library in your Docker image, add the following line to your Dockerfile:

```dockerfile
RUN pip3 install sagemaker-training
```
:computer: Usage
Create a Docker image and train a model
- Write a training script. (For example, this script named `train.py` uses TensorFlow.)

  ```python
  import tensorflow as tf

  mnist = tf.keras.datasets.mnist
  (x_train, y_train), (x_test, y_test) = mnist.load_data()
  x_train, x_test = x_train / 255.0, x_test / 255.0

  model = tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28, 28)),
      tf.keras.layers.Dense(128, activation='relu'),
      tf.keras.layers.Dropout(0.2),
      tf.keras.layers.Dense(10, activation='softmax')
  ])

  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

  model.fit(x_train, y_train, epochs=1)
  model.evaluate(x_test, y_test)
  ```
- Define a container with a Dockerfile that includes the training script and any dependencies.

  The training script must be located in the `/opt/ml/code` directory. The environment variable `SAGEMAKER_PROGRAM` defines which file inside the `/opt/ml/code` directory to use as the training entry point. When training starts, the interpreter executes the entry point defined by `SAGEMAKER_PROGRAM`. Python and shell scripts are both supported.

  ```dockerfile
  FROM tensorflow/tensorflow:2.1.0

  RUN pip3 install sagemaker-training

  # Copies the training script inside the container
  COPY train.py /opt/ml/code/train.py

  # Defines train.py as script entry point
  ENV SAGEMAKER_PROGRAM train.py
  ```
- Build and tag the Docker image.

  ```bash
  docker build -t tf-2.0 .
  ```
- Use the Docker image to start a training job using the SageMaker Python SDK.

  ```python
  from sagemaker.estimator import Estimator

  estimator = Estimator(image_name='tf-2.0',
                        role='SageMakerRole',
                        train_instance_count=1,
                        train_instance_type='local')

  estimator.fit()
  ```
To train a model using the image on SageMaker, push the image to ECR and start a SageMaker training job with the image URI.
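For example, here is a sketch of the same Estimator call pointed at an image that has already been pushed to Amazon ECR; the account ID, Region, repository, and instance type below are placeholders, not values from this guide:

```python
from sagemaker.estimator import Estimator

# Hypothetical ECR image URI; substitute your own account ID, Region, and repository.
image_uri = '123456789012.dkr.ecr.us-east-1.amazonaws.com/tf-2.0:latest'

estimator = Estimator(image_name=image_uri,
                      role='SageMakerRole',
                      train_instance_count=1,
                      train_instance_type='ml.m5.xlarge')

# Starts a SageMaker training job that runs the container on a managed ML instance.
estimator.fit()
```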
Pass arguments to the entry point using hyperparameters
Any hyperparameters provided by the training job will be passed to the entry point as script arguments. The SageMaker Python SDK uses this feature to pass special hyperparameters to the training job, including `sagemaker_program` and `sagemaker_submit_directory`. The complete list of SageMaker hyperparameters is available here.
- Implement an argument parser in the entry point script. For example, in a Python script:

  ```python
  import argparse

  if __name__ == '__main__':
      parser = argparse.ArgumentParser()
      parser.add_argument('--learning-rate', type=float, default=1.0)
      parser.add_argument('--batch-size', type=int, default=64)
      parser.add_argument('--communicator', type=str)
      parser.add_argument('--frequency', type=int, default=20)

      args = parser.parse_args()
      ...
  ```
- Start a training job with hyperparameters. (An equivalent SageMaker Python SDK call is sketched after this list.)

  ```json
  {"HyperParameters": {"batch-size": "256", "learning-rate": "0.0001", "communicator": "pure_nccl"}}
  ```
Read additional information using environment variables
An entry point often needs additional information that is not available in `hyperparameters`. The SageMaker Training Toolkit writes this information as environment variables that are available from within the script.
For example, this training job includes the channels `training` and `testing`:

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(entry_point='train.py', ...)

estimator.fit({'training': 's3://bucket/path/to/training/data',
               'testing': 's3://bucket/path/to/testing/data'})
```
The environment variables `SM_CHANNEL_TRAINING` and `SM_CHANNEL_TESTING` provide the paths to the channels:

```python
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    ...

    # reads the input channels training and testing from the environment variables
    parser.add_argument('--training', type=str, default=os.environ['SM_CHANNEL_TRAINING'])
    parser.add_argument('--testing', type=str, default=os.environ['SM_CHANNEL_TESTING'])

    args = parser.parse_args()
    ...
```
When training starts, the SageMaker Training Toolkit prints all available environment variables. See the environment variable reference for the full list of provided variables.
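As a sketch, other commonly provided variables can be read directly with `os.environ`; the exact set available depends on the training job configuration:

```python
import json
import os

# Directory where the trained model artifacts should be written.
model_dir = os.environ.get('SM_MODEL_DIR', '/opt/ml/model')

# Number of GPUs available on the training instance.
num_gpus = int(os.environ.get('SM_NUM_GPUS', '0'))

# All hyperparameters for the job, serialized as a JSON string.
hyperparameters = json.loads(os.environ.get('SM_HPS', '{}'))
```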
Get information about the container environment
To get information about the container environment, initialize an `Environment` object. `Environment` provides access to aspects of the environment relevant to training jobs, including hyperparameters, system characteristics, filesystem locations, environment variables, and configuration settings. It is a read-only snapshot of the container environment during training, and it doesn't contain any form of state.
```python
import os

import numpy as np
from tensorflow import keras
from tensorflow.keras.applications.resnet50 import ResNet50

from sagemaker_training import environment

env = environment.Environment()

# get the path of the channel 'training' from the inputdataconfig.json file
training_dir = env.channel_input_dirs['training']

# get the hyperparameter 'training_data_file' from hyperparameters.json
file_name = env.hyperparameters['training_data_file']

# get the folder where the model should be saved
model_dir = env.model_dir

data = np.load(os.path.join(training_dir, file_name))
x_train, y_train = data['features'], keras.utils.to_categorical(data['labels'])

model = ResNet50(weights='imagenet')
...
model.fit(x_train, y_train)

# save the model at the end of training
model.save(os.path.join(model_dir, 'saved_model'))
```
Execute the entry point
To execute the entry point, call `entry_point.run()`.
```python
from sagemaker_training import entry_point, environment

env = environment.Environment()

# read hyperparameters as script arguments
args = env.to_cmd_args()

# get the environment variables
env_vars = env.to_env_vars()

# execute the entry point
entry_point.run(env.module_dir,
                env.user_entry_point,
                args,
                env_vars)
```
If the entry point execution fails, `trainer.train()` will write the error message to `/opt/ml/output/failure`. Otherwise, it will write to the file `/opt/ml/success`.
:scroll: License
This library is licensed under the Apache 2.0 License. For more details, please take a look at the LICENSE file.
:handshake: Contributing
Contributions are welcome! Please read our contributing guidelines if you'd like to open an issue or submit a pull request.