Neuraxle-TensorFlow
TensorFlow steps, savers, and utilities for Neuraxle.
Neuraxle is a Machine Learning (ML) library for building neat pipelines, providing the right abstractions to ease research, development, and deployment of your ML applications.
Usage example
TensorFlow 1
Create a TensorFlow 1 model step by giving it a graph, an optimizer, and a loss function.
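The snippets below assume imports along these lines (a sketch; exact module paths may differ across Neuraxle versions):

import numpy as np
import tensorflow as tf

from neuraxle.base import ExecutionContext
from neuraxle.hyperparams.distributions import LogUniform
from neuraxle.hyperparams.space import HyperparameterSamples, HyperparameterSpace
from neuraxle_tensorflow.tensorflow_v1 import TensorflowV1ModelStep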
def create_graph(step: TensorflowV1ModelStep, context: ExecutionContext):
    tf.placeholder('float', name='data_inputs')
    tf.placeholder('float', name='expected_outputs')

    tf.Variable(np.random.rand(), name='weight')
    tf.Variable(np.random.rand(), name='bias')

    return tf.add(tf.multiply(step['data_inputs'], step['weight']), step['bias'])
"""
# Note: you can also return a tuple containing two elements : tensor for training (fit), tensor for inference (transform)
def create_graph(step: TensorflowV1ModelStep, context: ExecutionContext)
# ...
decoder_outputs_training = create_training_decoder(step, encoder_state, decoder_cell)
decoder_outputs_inference = create_inference_decoder(step, encoder_state, decoder_cell)
return decoder_outputs_training, decoder_outputs_inference
"""
# step['output'] is the tensor returned by create_graph; N_SAMPLES is the number of training examples.
def create_loss(step: TensorflowV1ModelStep, context: ExecutionContext):
    return tf.reduce_sum(tf.pow(step['output'] - step['expected_outputs'], 2)) / (2 * N_SAMPLES)

def create_optimizer(step: TensorflowV1ModelStep, context: ExecutionContext):
    return tf.train.GradientDescentOptimizer(step.hyperparams['learning_rate'])
model_step = TensorflowV1ModelStep(
    create_graph=create_graph,
    create_loss=create_loss,
    create_optimizer=create_optimizer,
    has_expected_outputs=True
).set_hyperparams(HyperparameterSamples({
    'learning_rate': 0.01
})).set_hyperparams_space(HyperparameterSpace({
    'learning_rate': LogUniform(0.0001, 0.01)
}))
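Once built, the step behaves like any other Neuraxle step. A hypothetical fit/transform round-trip on toy linear data might look like this (data_inputs and expected_outputs are illustrative):

data_inputs = np.random.rand(100).astype(np.float32)
expected_outputs = 2 * data_inputs + 1  # toy linear target

model_step = model_step.fit(data_inputs, expected_outputs)
predicted_outputs = model_step.transform(data_inputs)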
TensorFlow 2
Create a TensorFlow 2 model step by giving it a model, an optimizer, and a loss function.
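This example likewise assumes a few imports (a sketch; module paths may differ by version):

import os
import tensorflow as tf

from neuraxle_tensorflow.tensorflow_v2 import Tensorflow2ModelStep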
def create_model(step: Tensorflow2ModelStep, context: ExecutionContext):
    return LinearModel()

def create_optimizer(step: Tensorflow2ModelStep, context: ExecutionContext):
    return tf.keras.optimizers.Adam(0.1)

def create_loss(step: Tensorflow2ModelStep, expected_outputs, predicted_outputs):
    return tf.reduce_mean(tf.abs(predicted_outputs - expected_outputs))
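Here, LinearModel is user-defined rather than part of the library; a minimal sketch could be:

class LinearModel(tf.keras.Model):
    # A tiny linear regression model: y = w * x + b (illustrative only).
    def __init__(self):
        super().__init__()
        self.weight = tf.Variable(tf.random.normal([]))
        self.bias = tf.Variable(tf.zeros([]))

    def call(self, x):
        return self.weight * x + self.bias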
model_step = Tensorflow2ModelStep(
    create_model=create_model,
    create_optimizer=create_optimizer,
    create_loss=create_loss,
    tf_model_checkpoint_folder=os.path.join(tmpdir, 'tf_checkpoints')  # tmpdir: any writable directory
)
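As with the TensorFlow 1 step, a hypothetical fit/transform round-trip on toy data (names are illustrative):

data_inputs = np.random.rand(100).astype(np.float32)
expected_outputs = 3 * data_inputs - 2  # toy linear target

model_step, predicted_outputs = model_step.fit_transform(data_inputs, expected_outputs)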
Deep Learning Pipeline
The following example wires a TensorFlow 2 seq2seq model step into a full time-series pipeline. The variables sequence_length, input_dim, and output_dim are assumed to come from your dataset.
batch_size = 100
epochs = 3
validation_size = 0.15
max_plotted_validation_predictions = 10

seq2seq_pipeline_hyperparams = HyperparameterSamples({
    'hidden_dim': 100,
    'layers_stacked_count': 2,
    'lambda_loss_amount': 0.0003,
    'learning_rate': 0.006,
    'window_size_future': sequence_length,
    'output_dim': output_dim,
    'input_dim': input_dim
})
feature_0_metric = metric_3d_to_2d_wrapper(mean_squared_error)
metrics = {'mse': feature_0_metric}
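Here, metric_3d_to_2d_wrapper is a small helper from the example code, not a library function; one plausible implementation scores a 2D sklearn metric on feature 0 of 3D (batch, time, features) sequences:

from sklearn.metrics import mean_squared_error

def metric_3d_to_2d_wrapper(metric_2d):
    def metric(data_inputs, expected_outputs):
        # Keep only feature 0 so (batch, time, features) becomes (batch, time).
        return metric_2d(np.array(data_inputs)[..., 0], np.array(expected_outputs)[..., 0])
    return metric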
signal_prediction_pipeline = Pipeline([
    TrainOnly(DataShuffler()),
    WindowTimeSeries(),
    MeanStdNormalizer(),
    MiniBatchSequentialPipeline([
        Tensorflow2ModelStep(
            create_model=create_model,
            create_loss=create_loss,
            create_optimizer=create_optimizer,
            print_loss=True
        ).set_hyperparams(seq2seq_pipeline_hyperparams)
    ])
])
pipeline, outputs = signal_prediction_pipeline.fit_transform(data_inputs, expected_outputs)
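fit_transform returns the fitted pipeline along with its outputs, so the metrics dict defined above can score the predictions; an illustrative check:

print('MSE on feature 0:', metrics['mse'](outputs, expected_outputs))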