
A library for outlier detection based on stacking with Keras


📌 EAS - Embedded Adaptive Stacking

EAS (Embedded Adaptive Stacking) is a library for pattern detection in time series using model stacking with LSTM, GRU, BiLSTM, and BiGRU.


🚀 About the Project

The library combines several recurrent architectures through model stacking to analyze time series and detect outliers.
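The core stacking idea can be sketched independently of the library's API. The NumPy-only toy below (all names are hypothetical, not part of adaptive-stacking-keras) stands in two simple predictors for the recurrent base learners and fits least-squares meta-weights over their stacked outputs:

```python
import numpy as np

# Toy series and two hypothetical "base model" predictions,
# stand-ins for the recurrent base learners in a stacking ensemble.
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 6, 200)) + 0.05 * rng.normal(size=200)
pred_a = y + 0.2 * rng.normal(size=200)  # noisy base model A
pred_b = 0.8 * y                         # biased base model B

# Meta-learner: least-squares weights over the stacked base predictions.
X = np.column_stack([pred_a, pred_b])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
stacked = X @ w

mse = lambda p: float(np.mean((p - y) ** 2))
print(mse(pred_a), mse(pred_b), mse(stacked))
```

Because each base model alone is one feasible weighting, the fitted combination can only match or beat the best single base model on the training data; that is the basic appeal of stacking.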

🔹 Key Features

Smart Stacking: Combines LSTM, GRU, BiLSTM, and BiGRU to improve predictions.
Dynamic Optimization: Includes the LossAdaptiveOptimizer (LORO), which automatically adjusts the learning rate.
Cost-Sensitive Loss Function: CustomLossWithRegression allows fine-tuning penalties for extreme events.
Results Visualization: Clear comparison between predictions and actual values.
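The cost-sensitive idea behind CustomLossWithRegression can be illustrated with a minimal sketch. The function below is an assumption for illustration only (the threshold and alpha parameters echo the variables printed in the training example but are not the library's actual signature): samples whose true value is extreme receive a heavier squared-error weight.

```python
import numpy as np

def cost_sensitive_mse(y_true, y_pred, threshold=2.0, alpha=5.0):
    """Squared error with an extra penalty (alpha) on samples whose
    true value exceeds the threshold in magnitude (extreme events)."""
    err = (y_true - y_pred) ** 2
    extreme = np.abs(y_true) > threshold
    weights = np.where(extreme, 1.0 + alpha, 1.0)
    return float(np.mean(weights * err))

y_true = np.array([0.1, 0.5, 3.0, -2.5])
y_pred = np.array([0.0, 0.4, 1.0, -1.0])
plain = cost_sensitive_mse(y_true, y_pred, alpha=0.0)
penalized = cost_sensitive_mse(y_true, y_pred, alpha=5.0)
print(plain, penalized)  # penalized > plain: errors on extremes cost more
```

Tuning alpha trades overall accuracy against accuracy on the rare, extreme events the README highlights.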


📀 Installation

To install the library directly from PyPI, use:

pip install adaptive-stacking-keras

Or install the latest version manually from the repository:

git clone https://github.com/your-username/adaptive-stacking-keras.git
cd adaptive-stacking-keras
pip install .

📀 How to Use

🔹 Usage Example

Here is a simple example of how to use the library to train a model with Stacking and dynamic optimization.

import tensorflow as tf
from adaptive_stacking_keras import (
    StackingModel,
    CustomLossWithRegression,
    LORO,
    plot_time_series_comparison,
)

# Creating the model with multiple hidden layers
hidden_dims = [64, 128]
model = StackingModel(input_dim=10, hidden_dims=hidden_dims, embedding_dim=32, output_dim=1)

# Creating LORO optimizer
optimizer = LORO(learning_rate=0.001)

# Generating synthetic data
tf.random.set_seed(42)
x_train = tf.random.normal((100, 10, 10))
y_train = tf.random.normal((100, 1))

# Training the model
for epoch in range(5):  # Only 5 epochs for demonstration
    with tf.GradientTape() as tape:
        y_pred, (threshold, alpha) = model(x_train)
        loss = CustomLossWithRegression(model.threshold_alpha_layer)(y_train, y_pred)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    print(f"Epoch [{epoch+1}/5] - Loss: {loss.numpy():.4f} - Threshold: {threshold.numpy():.4f} - Alpha: {alpha.numpy():.4f}")

# Testing with data
x_test = tf.random.normal((50, 10, 10))
y_test = tf.random.normal((50, 1))
y_pred, _ = model(x_test)

# Visualization of results
plot_time_series_comparison(y_test, y_pred, time_range=(10, 40), title="Model Results")
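LORO's actual update rule is not documented here; the following is a rough, hypothetical sketch of what loss-adaptive learning-rate control can look like (the class name, shrink/grow factors, and update logic are all assumptions, not LORO's API): shrink the rate when the loss worsens, grow it gently when it improves.

```python
class LossAdaptiveLR:
    """Hypothetical loss-adaptive learning-rate schedule (illustration only)."""

    def __init__(self, lr=1e-3, shrink=0.5, grow=1.05, min_lr=1e-6):
        self.lr, self.shrink, self.grow, self.min_lr = lr, shrink, grow, min_lr
        self.prev_loss = None

    def update(self, loss):
        """Shrink the rate when the loss worsens, grow it gently otherwise."""
        if self.prev_loss is not None:
            if loss > self.prev_loss:
                self.lr = max(self.lr * self.shrink, self.min_lr)
            else:
                self.lr = self.lr * self.grow
        self.prev_loss = loss
        return self.lr

sched = LossAdaptiveLR(lr=0.001)
for loss in [1.0, 0.8, 0.9, 0.7]:
    print(f"loss={loss:.1f} -> lr={sched.update(loss):.6f}")
```

In a training loop like the one above, `update(loss.numpy())` would be called once per epoch and the result assigned to the optimizer's learning rate.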

📝 Documentation

Check out the full documentation on the GitHub repository.

💎 Contributions

Contributions are welcome! To contribute:

  1. Fork this repository.
  2. Create a branch with your feature (git checkout -b my-feature).
  3. Commit your changes (git commit -m 'Adding new feature').
  4. Push to the repository (git push origin my-feature).
  5. Open a Pull Request.

🌟 License

This project is licensed under the MIT License - see the LICENSE file for more details.


💪 Built with dedication for developers and researchers!
