
Library for outlier detection in time series based on stacking with Keras

Project description

📌 EAS - Embedded Adaptive Stacking

EAS (Embedded Adaptive Stacking) is a library for pattern detection in time series using model stacking with LSTM, GRU, BiLSTM, and BiGRU.


🚀 About the Project

EAS combines recurrent neural network models (LSTM, GRU, BiLSTM, BiGRU) through stacking to analyze time series and detect outliers and extreme events.

🔹 Key Features

Smart Stacking: Combines LSTM, GRU, BiLSTM, and BiGRU to improve predictions.
Dynamic Optimization: Includes the LossAdaptiveOptimizer (LORO), which automatically adjusts the learning rate.
Cost-Sensitive Loss Function: CustomLossWithRegression allows fine-tuning penalties for extreme events.
Results Visualization: Clear comparison between predictions and actual values.
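To illustrate the cost-sensitive idea behind CustomLossWithRegression, here is a minimal NumPy sketch. The weighting scheme and the `threshold` and `alpha` parameters below are assumptions chosen for demonstration, not the library's actual implementation:

```python
import numpy as np

def cost_sensitive_mse(y_true, y_pred, threshold=2.0, alpha=4.0):
    """MSE where errors on extreme targets (|y_true| > threshold)
    are up-weighted by a factor of (1 + alpha).

    NOTE: illustrative only -- not the library's CustomLossWithRegression.
    """
    sq_err = (y_true - y_pred) ** 2
    # Extreme points get weight 1 + alpha; normal points get weight 1.
    weights = np.where(np.abs(y_true) > threshold, 1.0 + alpha, 1.0)
    return float(np.mean(weights * sq_err))

# Both predictions miss by 0.5, but the error on the extreme target
# (y = 3.0 > threshold) costs 5x more: (0.25 + 5 * 0.25) / 2 = 0.75.
y_true = np.array([0.5, 3.0])
y_pred = np.array([0.0, 2.5])
print(cost_sensitive_mse(y_true, y_pred))  # 0.75
```

Tuning `alpha` trades off accuracy on typical points against sensitivity to rare, extreme events, which is the core idea of a cost-sensitive regression loss.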


📀 Installation

To install the library directly from PyPI, use:

pip install adaptive-stacking-keras

Or, to install the latest version manually from the repository:

git clone https://github.com/your-username/adaptive-stacking-keras.git
cd adaptive-stacking-keras
pip install .

📀 How to Use

🔹 Usage Example

Here is a simple example of training a model with stacking and dynamic learning-rate optimization.

import tensorflow as tf
from adaptive_stacking_keras import (
    StackingModel,
    CustomLossWithRegression,
    LORO,
    plot_time_series_comparison,
)

# Creating the model with multiple hidden layers
hidden_dims = [64, 128]
model = StackingModel(input_dim=10, hidden_dims=hidden_dims, embedding_dim=32, output_dim=1)

# Creating LORO optimizer
optimizer = LORO(learning_rate=0.001)

# Generating synthetic data
tf.random.set_seed(42)
x_train = tf.random.normal((100, 10, 10))
y_train = tf.random.normal((100, 1))

# Training the model
for epoch in range(5):  # Only 5 epochs for demonstration
    with tf.GradientTape() as tape:
        y_pred, (threshold, alpha) = model(x_train)
        loss = CustomLossWithRegression(model.threshold_alpha_layer)(y_train, y_pred)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    print(f"Epoch [{epoch+1}/5] - Loss: {loss.numpy():.4f} - Threshold: {threshold.numpy():.4f} - Alpha: {alpha.numpy():.4f}")

# Testing with data
x_test = tf.random.normal((50, 10, 10))
y_test = tf.random.normal((50, 1))
y_pred, _ = model(x_test)

# Visualization of results
plot_time_series_comparison(y_test, y_pred, time_range=(10, 40), title="Model Results")
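Conceptually, the stacking step learns how much weight to give each base model's forecast. Here is a minimal NumPy sketch of that combination stage, using a least-squares meta-learner over hypothetical base-model predictions (the actual library embeds this inside Keras layers; the data and noise scales below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictions from four base models (stand-ins for
# LSTM, GRU, BiLSTM, BiGRU): each is the target plus noise of a
# different magnitude. Shape: (n_samples, 4).
y_true = rng.normal(size=200)
base_preds = np.stack(
    [y_true + rng.normal(scale=s, size=200) for s in (0.1, 0.2, 0.4, 0.8)],
    axis=1,
)

# Meta-learner: least-squares weights over the base predictions.
weights, *_ = np.linalg.lstsq(base_preds, y_true, rcond=None)
stacked = base_preds @ weights

# In-sample, the stacked forecast's MSE can never exceed the best
# single base model's, since each base model is itself a valid
# linear combination (weight 1 on one column, 0 elsewhere).
mse = lambda p: float(np.mean((p - y_true) ** 2))
print("base MSEs:", [round(mse(base_preds[:, i]), 3) for i in range(4)])
print("stacked MSE:", round(mse(stacked), 3))
```

In practice the meta-learner is trained on out-of-fold predictions to avoid overfitting, and a neural meta-model (as in EAS) can learn nonlinear combinations rather than fixed linear weights.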

📝 Documentation

Check out the full documentation on the GitHub repository.

💎 Contributions

Contributions are welcome! To contribute:

  1. Fork this repository.
  2. Create a branch with your feature (git checkout -b my-feature).
  3. Commit your changes (git commit -m 'Add new feature').
  4. Push to the repository (git push origin my-feature).
  5. Open a Pull Request.

🌟 License

This project is licensed under the MIT License - see the LICENSE file for more details.


💪 Built with dedication for developers and researchers!
