
A library for outlier detection in time series based on stacking with Keras


📌 EAS - Embedded Adaptive Stacking

EAS (Embedded Adaptive Stacking) is a library for pattern detection in time series using model stacking with LSTM, GRU, BiLSTM, and BiGRU.


🚀 About the Project

EAS stacks recurrent neural network models (LSTM, GRU, BiLSTM, and BiGRU) to analyze time series and detect patterns and outliers.

🔹 Key Features

Smart Stacking: Combines LSTM, GRU, BiLSTM, and BiGRU to improve predictions.
Dynamic Optimization: Includes the LossAdaptiveOptimizer (LORO), which automatically adjusts the learning rate.
Cost-Sensitive Loss Function: CustomLossWithRegression allows fine-tuning penalties for extreme events.
Results Visualization: Clear comparison between predictions and actual values.
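
To illustrate what a cost-sensitive loss for extreme events can look like, here is a minimal sketch in plain TensorFlow. This is NOT the library's CustomLossWithRegression (whose signature and internals differ); the function name, `threshold`, and `alpha` here are illustrative assumptions showing the general idea of up-weighting errors on extreme targets.

```python
import tensorflow as tf

# Hypothetical sketch, not the library's API: a cost-sensitive MSE that
# multiplies the squared error by `alpha` whenever the target magnitude
# exceeds `threshold`, so extreme events cost more to miss.
def cost_sensitive_mse(threshold=2.0, alpha=5.0):
    def loss(y_true, y_pred):
        sq_err = tf.square(y_true - y_pred)
        # Up-weight samples whose target is "extreme"
        weight = tf.where(tf.abs(y_true) > threshold, alpha, 1.0)
        return tf.reduce_mean(weight * sq_err)
    return loss

y_true = tf.constant([[0.5], [3.0]])  # second target is an "extreme" event
y_pred = tf.constant([[0.0], [2.0]])
print(float(cost_sensitive_mse()(y_true, y_pred)))  # (1*0.25 + 5*1.0) / 2 = 2.625
```

In the library itself, the threshold and alpha are learned during training rather than fixed, as the usage example below shows.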


📀 Installation

To install the library directly from PyPI, use:

pip install adaptive-stacking-keras

Or, to install the latest version manually from the repository:

git clone https://github.com/your-username/adaptive-stacking-keras.git
cd adaptive-stacking-keras
pip install .

📀 How to Use

🔹 Usage Example

Here is a simple example of training a model with stacking and dynamic optimization:

import tensorflow as tf
from adaptive_stacking_keras import (
    StackingModel,
    CustomLossWithRegression,
    LORO,
    plot_time_series_comparison,
)

# Creating the model with multiple hidden layers
hidden_dims = [64, 128]
model = StackingModel(input_dim=10, hidden_dims=hidden_dims, embedding_dim=32, output_dim=1)

# Creating LORO optimizer
optimizer = LORO(learning_rate=0.001)

# Generating synthetic data
tf.random.set_seed(42)
x_train = tf.random.normal((100, 10, 10))
y_train = tf.random.normal((100, 1))

# Training the model
loss_fn = CustomLossWithRegression(model.threshold_alpha_layer)  # build the loss once
for epoch in range(5):  # only 5 epochs for demonstration
    with tf.GradientTape() as tape:
        y_pred, (threshold, alpha) = model(x_train)
        loss = loss_fn(y_train, y_pred)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    print(f"Epoch [{epoch+1}/5] - Loss: {loss.numpy():.4f} - Threshold: {threshold.numpy():.4f} - Alpha: {alpha.numpy():.4f}")

# Testing with data
x_test = tf.random.normal((50, 10, 10))
y_test = tf.random.normal((50, 1))
y_pred, _ = model(x_test)

# Visualization of results
plot_time_series_comparison(y_test, y_pred, time_range=(10, 40), title="Model Results")
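
For readers curious what the stacked architecture might look like internally, the general pattern can be sketched with stock Keras layers: run the input through LSTM, GRU, BiLSTM, and BiGRU branches in parallel, then combine their outputs. This is an illustration of the stacking idea, not the library's actual StackingModel implementation; `build_stacking_sketch` and its dimensions are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal stacking sketch (hypothetical, for illustration only): four
# recurrent base learners over the same input, concatenated and fed to
# a simple dense meta-learner head.
def build_stacking_sketch(timesteps=10, features=10, units=32):
    inputs = layers.Input(shape=(timesteps, features))
    branches = [
        layers.LSTM(units)(inputs),                        # LSTM base learner
        layers.GRU(units)(inputs),                         # GRU base learner
        layers.Bidirectional(layers.LSTM(units))(inputs),  # BiLSTM base learner
        layers.Bidirectional(layers.GRU(units))(inputs),   # BiGRU base learner
    ]
    merged = layers.Concatenate()(branches)  # stack the base outputs
    output = layers.Dense(1)(merged)         # meta-learner head
    return models.Model(inputs, output)

sketch = build_stacking_sketch()
print(sketch.output_shape)  # (None, 1)
```

The library's StackingModel additionally learns the threshold and alpha parameters used by its cost-sensitive loss, which a plain Keras sketch like this omits.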

📝 Documentation

Check out the full documentation on the GitHub repository.

💎 Contributions

Contributions are welcome! To contribute:

  1. Fork this repository.
  2. Create a branch with your feature (git checkout -b my-feature).
  3. Commit your changes (git commit -m 'Add new feature').
  4. Push to the repository (git push origin my-feature).
  5. Open a Pull Request.

🌟 License

This project is licensed under the MIT License - see the LICENSE file for more details.


💪 Built with dedication for developers and researchers!

