
Amazon SageMaker-specific TensorFlow extensions.

Project description

SageMaker-specific extensions to TensorFlow, for Python 2.7 and 3.4-3.6 with TensorFlow versions 1.7-1.12. This package includes the PipeModeDataset class, which allows SageMaker Pipe Mode channels to be read using TensorFlow Datasets.

Install

You can install SageMaker TensorFlow with the following command:

pip install sagemaker-tensorflow

You can also install sagemaker-tensorflow for a specific version of TensorFlow. The following command will install sagemaker-tensorflow for TensorFlow 1.7:

pip install "sagemaker-tensorflow>=1.7,<1.8"

Build and install from source

The SageMaker TensorFlow build depends on the following:

  • cmake

  • tensorflow

  • curl-dev

To install cmake and tensorflow, run:

pip install cmake tensorflow

On Amazon Linux, curl-dev can be installed with:

yum install curl-dev

On Ubuntu, curl-dev can be installed with:

apt-get install libcurl4-openssl-dev

To build and install this package, run:

pip install .

in this directory.

To build in a SageMaker docker image, you can use the following RUN command in your Dockerfile:

RUN git clone https://github.com/aws/sagemaker-tensorflow-extensions.git && \
    cd sagemaker-tensorflow-extensions && \
    pip install . && \
    cd .. && \
    rm -rf sagemaker-tensorflow-extensions

Building for a specific TensorFlow version

Release branching is used to track different versions of TensorFlow. To build for a specific release of TensorFlow, check out the corresponding release branch before running pip install. For example, to build for TensorFlow 1.7, you can run the following command in your Dockerfile:

RUN git clone https://github.com/aws/sagemaker-tensorflow-extensions.git && \
    cd sagemaker-tensorflow-extensions && \
    git checkout 1.7 && \
    pip install . && \
    cd .. && \
    rm -rf sagemaker-tensorflow-extensions

Requirements

SageMaker TensorFlow extensions builds with Python 2.7 and 3.4-3.6 on Linux, with a TensorFlow version >= 1.7. Older versions of TensorFlow are not supported. Please make sure to check out the branch of sagemaker-tensorflow-extensions that matches your TensorFlow version.

SageMaker Pipe Mode

SageMaker Pipe Mode is a mechanism for providing S3 data to a training job via Linux fifos. Training programs can read from the fifo and get high-throughput data transfer from S3, without managing the S3 access in the program itself.

SageMaker Pipe Mode is enabled when a SageMaker training job is created. Multiple S3 datasets can be mapped to individual fifos, configured in the training request. Pipe Mode is covered in more detail in the SageMaker documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html#your-algorithms-training-algo-running-container-inputdataconfig
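
For example, here is a minimal sketch of the InputDataConfig portion of a low-level CreateTrainingJob request that maps two S3 datasets to two Pipe Mode channels (the channel names and S3 URIs are hypothetical):

# Each entry maps one S3 dataset to one named channel; with InputMode 'Pipe',
# each channel is presented to the training container as a fifo.
input_data_config = [
    {
        'ChannelName': 'training',
        'InputMode': 'Pipe',
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://my-bucket/train/',
                'S3DataDistributionType': 'FullyReplicated',
            }
        },
    },
    {
        'ChannelName': 'evaluation',
        'InputMode': 'Pipe',
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://my-bucket/eval/',
                'S3DataDistributionType': 'FullyReplicated',
            }
        },
    },
]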

Using the PipeModeDataset

The PipeModeDataset is a TensorFlow Dataset for reading SageMaker Pipe Mode channels. After installing this package, the PipeModeDataset can be imported from a module named sagemaker_tensorflow.

To construct a PipeModeDataset that reads TFRecord encoded records from a “training” channel, do the following:

from sagemaker_tensorflow import PipeModeDataset

ds = PipeModeDataset(channel='training', record_format='TFRecord')

A PipeModeDataset should be created for a SageMaker Pipe Mode channel. Each channel corresponds to a single S3 dataset, configured when the training job is created. You can create multiple PipeModeDataset instances over different channels to read from multiple S3 datasets in the same training job.
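
For example, assuming a training job configured with hypothetical 'training' and 'evaluation' channels, you could read both datasets in the same script:

from sagemaker_tensorflow import PipeModeDataset

# One PipeModeDataset per channel; each channel name must match a channel
# configured in the training request.
train_ds = PipeModeDataset(channel='training', record_format='TFRecord')
eval_ds = PipeModeDataset(channel='evaluation', record_format='TFRecord')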

A PipeModeDataset can read TFRecord, RecordIO, or text line records, selected via the record_format constructor argument, which accepts 'TFRecord', 'RecordIO', or 'TextLine'. RecordIO is the default.
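
The three constructor forms look like this (shown as alternatives; a given channel would use only one):

ds = PipeModeDataset(channel='training')                             # RecordIO (the default)
ds = PipeModeDataset(channel='training', record_format='TFRecord')   # TFRecord
ds = PipeModeDataset(channel='training', record_format='TextLine')   # newline-delimited text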

A PipeModeDataset is a regular TensorFlow Dataset and as such can be used in TensorFlow input processing pipelines, and in TensorFlow Estimator input_fn definitions. All Dataset operations are supported on PipeModeDataset. The following code snippet shows how to create a batching and parsing Dataset that reads data from a SageMaker Pipe Mode channel:

import tensorflow as tf

from sagemaker_tensorflow import PipeModeDataset

# Feature spec for the TFRecord-encoded Examples read from the channel.
features = {
    'data': tf.FixedLenFeature([], tf.string),
    'labels': tf.FixedLenFeature([], tf.int64),
}

def parse(record):
    parsed = tf.parse_single_example(record, features)
    return ({
        'data': tf.decode_raw(parsed['data'], tf.float64)
    }, parsed['labels'])

ds = PipeModeDataset(channel='training', record_format='TFRecord')
num_epochs = 20
ds = ds.repeat(num_epochs)
ds = ds.prefetch(10)
ds = ds.map(parse, num_parallel_calls=10)
ds = ds.batch(64)
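
Because a PipeModeDataset is a regular Dataset, the pipeline above can also be returned from an Estimator input_fn. A minimal sketch, reusing the parse function defined above:

def train_input_fn():
    """Returns batched, parsed records read from the 'training' channel."""
    ds = PipeModeDataset(channel='training', record_format='TFRecord')
    ds = ds.repeat(20)
    ds = ds.prefetch(10)
    ds = ds.map(parse, num_parallel_calls=10)
    return ds.batch(64)

estimator.train(input_fn=train_input_fn)  # 'estimator' is a hypothetical tf.estimator.Estimator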

Using the PipeModeDataset with the SageMaker Python SDK

The sagemaker_tensorflow module is available for TensorFlow scripts to import when launched on SageMaker via the SageMaker Python SDK. If you are using the SageMaker Python SDK TensorFlow Estimator to launch TensorFlow training on SageMaker, note that the default channel name is 'training' when just a single S3 URI is passed to fit.
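
For example, here is a sketch of launching a Pipe Mode training job with the SageMaker Python SDK (the entry point script, role, and S3 URI are placeholders, and the estimator arguments assume the SDK as of this release):

from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point='train.py',
                       role='MySageMakerRole',
                       train_instance_count=1,
                       train_instance_type='ml.c5.2xlarge',
                       framework_version='1.12',
                       input_mode='Pipe')  # read channels via Pipe Mode fifos

# A single S3 URI is mapped to the default channel name, 'training'.
estimator.fit('s3://my-bucket/train/')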

Using the PipeModeDataset with SageMaker Augmented Manifest Files

SageMaker Augmented Manifest Files provide a mechanism to associate metadata (such as labels) with binary data (such as images) for training. An Augmented Manifest File is a single JSON Lines file, stored as an object in S3. During training, SageMaker reads the data from an Augmented Manifest File and passes it to the running training job through a SageMaker Pipe Mode channel.

To learn more about preparing and using an Augmented Manifest File, please consult the SageMaker documentation on Augmented Manifest Files.

You can use the PipeModeDataset to read data from a Pipe Mode channel that is backed by an Augmented Manifest, by following these guidelines:

First, use a Dataset batch operation to combine successive records into a single tuple. Each attribute in an Augmented Manifest File record is queued into the Pipe Mode channel's fifo as a separate record, so batching lets you combine these successive per-attribute records into a single per-record tuple. In general, if your Augmented Manifest File contains n attributes, call batch(n) on your PipeModeDataset and then apply a simple combining function with map to turn each batch of per-attribute records into one tuple. For example, if your Augmented Manifest File contains 3 attributes, the following code sample reads Augmented Manifest records into a 3-tuple of string Tensors:

from sagemaker_tensorflow import PipeModeDataset

ds = PipeModeDataset("my_channel")

def combine(records):
    return (records[0], records[1], records[2])

ds = ds.batch(3)     # Batch each series of three attribute records together.
ds = ds.map(combine) # Convert each batch of three records into a single tuple with three Tensors.

# Perform other operations on the Dataset - e.g. subsequent batching, decoding
...

Second, pass "RecordIO" as the value for RecordWrapperType when you launch the SageMaker training job with an Augmented Manifest File. Doing this will cause SageMaker to wrap each per-attribute record in a RecordIO wrapper, enabling the PipeModeDataset to separate these records.
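
With the SageMaker Python SDK, a sketch of this configuration might look as follows (the manifest location, attribute names, and channel name are hypothetical, and the s3_input parameters assume the SDK as of this release):

from sagemaker.session import s3_input

train_input = s3_input(
    's3://my-bucket/train.manifest',
    s3_data_type='AugmentedManifestFile',
    attribute_names=['source-ref', 'class', 'metadata'],
    record_wrapping='RecordIO',  # wrap each per-attribute record in RecordIO
    input_mode='Pipe')

estimator.fit({'my_channel': train_input})  # 'estimator' as in the earlier sketch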

Third, ensure your PipeModeDataset splits records using RecordIO decoding in your training script. You can do this by simply constructing the PipeModeDataset with no record_format argument, as RecordIO is the default record wrapping type for the PipeModeDataset.

If you follow these steps, the PipeModeDataset will produce tuples of string Tensors that you can decode or process further (for example, by applying a JPEG decode if your data are images).
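
For example, a hypothetical decode step, assuming the first attribute of each 3-tuple is a JPEG-encoded image:

import tensorflow as tf

def decode(image_bytes, label, metadata):
    # Decode the raw JPEG bytes into an image Tensor; pass the rest through.
    image = tf.image.decode_jpeg(image_bytes, channels=3)
    return image, label, metadata

ds = ds.map(decode)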

Support

We’re here to help. Have a question? Please open a GitHub issue; we’d love to hear from you.

License

SageMaker TensorFlow is licensed under the Apache 2.0 License. It is copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. The license is available at: http://aws.amazon.com/apache2.0/
