Outlier detection library based on stacking with Keras

Project description

📌 EAS - Embedded Adaptive Stacking

EAS (Embedded Adaptive Stacking) is a library for pattern detection in time series using model stacking with LSTM, GRU, BiLSTM, and BiGRU.

🚀 About the Project

EAS applies model stacking built on recurrent neural networks to time series analysis, combining the complementary strengths of several recurrent architectures in a single predictor.

🔹 Key Features

Smart Stacking: Combines LSTM, GRU, BiLSTM, and BiGRU to improve predictions.
Dynamic Optimization: Includes the LossAdaptiveOptimizer (LORO), which automatically adjusts the learning rate.
Cost-Sensitive Loss Function: CustomLossWithRegression allows fine-tuning penalties for extreme events.
Results Visualization: Clear comparison between predictions and actual values.
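
To make the cost-sensitive idea concrete, here is a minimal sketch in plain TensorFlow of a loss that up-weights errors on extreme targets. This is an illustration of the general technique only; the function name, threshold, and weighting scheme below are assumptions, not the library's actual CustomLossWithRegression implementation.

```python
import tensorflow as tf

def cost_sensitive_mse(y_true, y_pred, extreme_threshold=2.0, extreme_weight=5.0):
    """Squared error with a heavier penalty on extreme events.

    Hypothetical sketch: samples whose true value exceeds
    `extreme_threshold` in absolute terms have their error
    up-weighted by `extreme_weight`.
    """
    err = tf.square(y_true - y_pred)
    weights = tf.where(tf.abs(y_true) > extreme_threshold, extreme_weight, 1.0)
    return tf.reduce_mean(weights * err)

# The same absolute error (0.5) costs 5x more on the extreme sample (3.0)
y_true = tf.constant([[0.5], [3.0]])
y_pred = tf.constant([[0.0], [2.5]])
loss = cost_sensitive_mse(y_true, y_pred)
```

Tuning the threshold and weight trades off overall accuracy against sensitivity to rare, extreme events.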


📀 Installation

To install the library directly from PyPI, use:

pip install adaptive-stacking-keras

Or, to install the latest version manually from the repository:

git clone https://github.com/your-username/adaptive-stacking-keras.git
cd adaptive-stacking-keras
pip install .

📀 How to Use

🔹 Usage Example

Here is a simple example of training a model with stacking and dynamic optimization.

import tensorflow as tf
from adaptive_stacking_keras import (
    StackingModel,
    CustomLossWithRegression,
    LORO,
    plot_time_series_comparison,
)

# Creating the model with multiple hidden layers
hidden_dims = [64, 128]
model = StackingModel(input_dim=10, hidden_dims=hidden_dims, embedding_dim=32, output_dim=1)

# Creating LORO optimizer
optimizer = LORO(learning_rate=0.001)

# Generating synthetic data
tf.random.set_seed(42)
x_train = tf.random.normal((100, 10, 10))
y_train = tf.random.normal((100, 1))

# Training the model
for epoch in range(5):  # Only 5 epochs for demonstration
    with tf.GradientTape() as tape:
        y_pred, (threshold, alpha) = model(x_train)
        loss = CustomLossWithRegression(model.threshold_alpha_layer)(y_train, y_pred)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    print(f"Epoch [{epoch+1}/5] - Loss: {loss.numpy():.4f} - Threshold: {threshold.numpy():.4f} - Alpha: {alpha.numpy():.4f}")

# Testing with data
x_test = tf.random.normal((50, 10, 10))
y_test = tf.random.normal((50, 1))
y_pred, _ = model(x_test)

# Visualization of results
plot_time_series_comparison(y_test, y_pred, time_range=(10, 40), title="Model Results")
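
For outlier detection, the predictions can be post-processed against a threshold. One possible interpretation (an assumption for illustration; EAS may apply its learned threshold differently) is to flag points whose absolute prediction residual exceeds the threshold:

```python
import numpy as np

def flag_outliers(y_true, y_pred, threshold):
    """Mark points whose absolute prediction residual exceeds `threshold`.

    Hypothetical post-processing sketch: a large residual means the
    observation deviates strongly from the model's expectation.
    """
    residuals = np.abs(np.asarray(y_true) - np.asarray(y_pred)).ravel()
    return residuals > threshold

# Only the third point deviates far enough from its prediction to be flagged
flags = flag_outliers([1.0, 2.0, 10.0], [1.1, 2.2, 3.0], threshold=1.0)
```

The learned threshold returned by the model during training could be passed here in place of the fixed value.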

📝 Documentation

Check out the full documentation on the GitHub repository.

💎 Contributions

Contributions are welcome! To contribute:

  1. Fork this repository.
  2. Create a branch with your feature (git checkout -b my-feature).
  3. Commit your changes (git commit -m 'Add new feature').
  4. Push to the repository (git push origin my-feature).
  5. Open a Pull Request.

🌟 License

This project is licensed under the MIT License - see the LICENSE file for more details.


💪 Built with dedication for developers and researchers!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

embedded_adaptive_stacking_keras-0.7.tar.gz (6.2 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

File details

Details for the file embedded_adaptive_stacking_keras-0.7.tar.gz.

File metadata

File hashes

Hashes for embedded_adaptive_stacking_keras-0.7.tar.gz
Algorithm Hash digest
SHA256 7a80d95b7b61aebe8147acbf2d011539421c2300c20309300ef18bee7fb8703a
MD5 5f5179db199090449c137f24d6095e73
BLAKE2b-256 357701d527101fb11f31a94f3366316f2369ffa5be22b79117e7faae241ad244

See more details on using hashes here.

File details

Details for the file embedded_adaptive_stacking_keras-0.7-py3-none-any.whl.

File metadata

File hashes

Hashes for embedded_adaptive_stacking_keras-0.7-py3-none-any.whl
Algorithm Hash digest
SHA256 34c5263234ed1181c08f4b74c7b1c64ef4d9f11fa759f1434225811590d4603b
MD5 3ee7ef25d29366684213f8d13f498332
BLAKE2b-256 17d4b6a2d3212f236de6c6d54389c02aede6be392a3628357024583e132145df

See more details on using hashes here.
