
Monotonic Dense Layer implemented in Keras

Project description

Monotonic Dense Layer

This Python library implements the Monotonic Dense Layer described in Davor Runje and Sharath M. Shankaranarayana, “Constrained Monotonic Neural Networks” [PDF].

If you use this library, please cite:

@inproceedings{runje2023,
  title={Constrained Monotonic Neural Networks},
  author={Davor Runje and Sharath M. Shankaranarayana},
  booktitle={Proceedings of the 40th {International Conference on Machine Learning}},
  year={2023}
}

This package contains an implementation of our Monotonic Dense Layer MonoDense (Constrained Monotonic Fully Connected Layer). Below is the figure from the paper for reference.

In the code, the variable monotonicity_indicator corresponds to t in the figure and parameters is_convex, is_concave and activation_weights are used to calculate the activation selector s as follows:

  • if is_convex or is_concave is True, then the activation selector s will be (units, 0, 0) or (0, units, 0), respectively.

  • if both is_convex and is_concave are False, then activation_weights represents the ratios between $\breve{s}$, $\hat{s}$ and $\tilde{s}$, respectively. E.g. if activation_weights = (2, 2, 1) and units = 10, then

$$ (\breve{s}, \hat{s}, \tilde{s}) = (4, 4, 2) $$
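
For illustration, the proportional split can be computed with a few lines of Python. This is a minimal sketch of the allocation, not the library's internal implementation; in particular, how any rounding remainder is assigned is an assumption here:

import numpy as np

def activation_split(units, activation_weights):
    # Allocate units to the three activation groups proportionally to the weights.
    w = np.asarray(activation_weights, dtype=float)
    counts = np.floor(w / w.sum() * units).astype(int)
    counts[0] += units - counts.sum()  # assumption: any rounding remainder goes to the first group
    return tuple(int(c) for c in counts)

print(activation_split(10, (2, 2, 1)))  # (4, 4, 2)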

[Figure: mono-dense-layer diagram from the paper]

Install

pip install mono-dense-keras

How to use

In this example, we’ll assume we have a simple dataset with three input values $x_1$, $x_2$ and $x_3$ sampled from the normal distribution, while the output value $y$ is calculated according to the following formula, before Gaussian noise is added:

$y = x_1^3 + \sin\left(\frac{x_2}{2 \pi}\right) + e^{-x_3}$

 x_1        x_2        x_3        y
 0.304717  -1.039984   0.750451   0.234541
 0.940565  -1.951035  -1.302180   4.199094
 0.127840  -0.316243  -0.016801   0.834086
-0.853044   0.879398   0.777792  -0.093359
 0.066031   1.127241   0.467509   0.780875
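
A minimal sketch for generating such a dataset (the noise scale and the seed are assumptions; the names x_train, y_train, x_val and y_val match the training code below, and 10,000 training samples is consistent with the 313 steps per epoch at batch size 32 shown later):

import numpy as np

def generate_data(n_samples, rng):
    # Inputs are sampled from the standard normal distribution.
    x = rng.normal(size=(n_samples, 3))
    # y = x_1^3 + sin(x_2 / (2*pi)) + exp(-x_3), plus Gaussian noise.
    y = x[:, 0] ** 3 + np.sin(x[:, 1] / (2 * np.pi)) + np.exp(-x[:, 2])
    y += rng.normal(scale=0.1, size=n_samples)  # noise scale is an assumption
    return x, y

rng = np.random.default_rng(42)  # illustrative seed
x_train, y_train = generate_data(10_000, rng)
x_val, y_val = generate_data(10_000, rng)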

Now, we’ll use the MonoDense layer instead of the Dense layer to build a simple monotonic network. By default, the MonoDense layer assumes its output is monotonically increasing with all inputs. This assumption always holds for all layers except possibly the first one. For the first layer, we use monotonicity_indicator to specify which input parameters are monotonic and whether they are monotonically increasing or decreasing:

  • set 1 for a monotonically increasing parameter,

  • set -1 for a monotonically decreasing parameter, and

  • set 0 otherwise.

In our case, the monotonicity_indicator is [1, 0, -1] because $y$ is:

  • monotonically increasing w.r.t. $x_1$ $\left(\frac{\partial y}{\partial x_1} = 3 x_1^2 \geq 0\right)$, and

  • monotonically decreasing w.r.t. $x_3$ $\left(\frac{\partial y}{\partial x_3} = -e^{-x_3} \leq 0\right)$.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input

from mono_dense_keras import MonoDense

model = Sequential()

model.add(Input(shape=(3,)))
# Only the first layer needs a monotonicity indicator for the raw inputs;
# subsequent layers are monotonically increasing in all inputs by default.
monotonicity_indicator = [1, 0, -1]
model.add(
    MonoDense(128, activation="elu", monotonicity_indicator=monotonicity_indicator)
)
model.add(MonoDense(128, activation="elu"))
model.add(MonoDense(1))

model.summary()
Model: "sequential_7"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 mono_dense_21 (MonoDense)   (None, 128)               512       
                                                                 
 mono_dense_22 (MonoDense)   (None, 128)               16512     
                                                                 
 mono_dense_23 (MonoDense)   (None, 1)                 129       
                                                                 
=================================================================
Total params: 17,153
Trainable params: 17,153
Non-trainable params: 0
_________________________________________________________________
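
The parameter counts match those of ordinary Dense layers: the first layer has $(3 + 1) \times 128 = 512$ weights and biases, the hidden layer $(128 + 1) \times 128 = 16{,}512$, and the output layer $128 + 1 = 129$, for a total of $17{,}153$.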

Now we can train the model as usual using Model.fit. The ExponentialDecay schedule below starts the learning rate at 0.01 and multiplies it by 0.9 every 10_000 // 32 = 312 steps, i.e. roughly once per epoch:

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

lr_schedule = ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=10_000 // 32,
    decay_rate=0.9,
)
optimizer = Adam(learning_rate=lr_schedule)
model.compile(optimizer=optimizer, loss="mse")

model.fit(
    x=x_train, y=y_train, batch_size=32, validation_data=(x_val, y_val), epochs=10
)
Epoch 1/10
313/313 [==============================] - 2s 5ms/step - loss: 9.6909 - val_loss: 6.3050
Epoch 2/10
313/313 [==============================] - 1s 4ms/step - loss: 4.1970 - val_loss: 2.0028
Epoch 3/10
313/313 [==============================] - 1s 4ms/step - loss: 1.7086 - val_loss: 1.0551
Epoch 4/10
313/313 [==============================] - 1s 4ms/step - loss: 0.9906 - val_loss: 0.5927
Epoch 5/10
313/313 [==============================] - 1s 4ms/step - loss: 0.6411 - val_loss: 0.1694
Epoch 6/10
313/313 [==============================] - 1s 4ms/step - loss: 0.6686 - val_loss: 1.7604
Epoch 7/10
313/313 [==============================] - 1s 4ms/step - loss: 0.6464 - val_loss: 0.1079
Epoch 8/10
313/313 [==============================] - 1s 4ms/step - loss: 0.4570 - val_loss: 0.1365
Epoch 9/10
313/313 [==============================] - 1s 4ms/step - loss: 0.2945 - val_loss: 0.0664
Epoch 10/10
313/313 [==============================] - 1s 4ms/step - loss: 0.2095 - val_loss: 0.0849

<keras.callbacks.History>
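
As a quick sanity check of the learned monotonicity (an illustrative test, not part of the library), we can sweep $x_1$ over a grid while holding the other inputs fixed; the predictions should be non-decreasing:

import numpy as np

# Sweep x_1 over a grid while x_2 and x_3 stay fixed at zero.
grid = np.linspace(-2, 2, 101)
x_sweep = np.zeros((101, 3))
x_sweep[:, 0] = grid
preds = model.predict(x_sweep, verbose=0).ravel()
assert np.all(np.diff(preds) >= 0), "model should be increasing in x_1"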

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

You are free to:

  • Share — copy and redistribute the material in any medium or format

  • Adapt — remix, transform, and build upon the material

The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • NonCommercial — You may not use the material for commercial purposes.

  • ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

  • No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mono-dense-keras-0.1.0.tar.gz (25.9 kB)

Uploaded Source

Built Distribution

mono_dense_keras-0.1.0-py3-none-any.whl (23.6 kB)

Uploaded Python 3

File details

Details for the file mono-dense-keras-0.1.0.tar.gz.

File metadata

  • Download URL: mono-dense-keras-0.1.0.tar.gz
  • Upload date:
  • Size: 25.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.10

File hashes

Hashes for mono-dense-keras-0.1.0.tar.gz:

  • SHA256: 09454b7af14f6133822882198675df80be04f368bbf183db29be2e4d42d070e1

  • MD5: 47d467835511ca8d3e5ec3a96f60ad95

  • BLAKE2b-256: eca43c71c3fd83ae7814c9eb199069f555ea230a9bfe3c42bd741cb56f125c24

See more details on using hashes here.
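
For example, the SHA256 digest above can be checked in Python after downloading the file (a minimal sketch):

import hashlib

# Compare the local file's SHA256 against the published digest.
with open("mono-dense-keras-0.1.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "09454b7af14f6133822882198675df80be04f368bbf183db29be2e4d42d070e1"
assert digest == expected, "hash mismatch: the download may be corrupted"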


File details

Details for the file mono_dense_keras-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for mono_dense_keras-0.1.0-py3-none-any.whl:

  • SHA256: 60d1c71ea018e86b04a1dbe7025c86f4d2b70b5ca95f267f8d16d58ce7af3ef4

  • MD5: 049303d657c3c205c78994cb80d5ea51

  • BLAKE2b-256: 84838535eacd37c2c9f6def63e42d8ae26bb01dcc53ef43f698ed57955df4f5f

See more details on using hashes here.

