
Library for outlier detection based on stacking with Keras

Project description

📌 EAS - Embedded Adaptive Stacking

EAS (Embedded Adaptive Stacking) is a library for pattern detection in time series using model stacking with LSTM, GRU, BiLSTM, and BiGRU.


🚀 About the Project

EAS applies model stacking of recurrent neural networks (LSTM, GRU, BiLSTM, and BiGRU) to time series analysis and outlier detection.
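Stacking itself is a general ensemble technique: several base models each produce predictions, and a meta-learner learns how to combine them. As a minimal, library-independent sketch of that idea (using a NumPy least-squares meta-learner for illustration, not the library's StackingModel):

```python
import numpy as np

def stack_predictions(base_preds, y_true):
    """Generic stacking illustration (not the library's StackingModel):
    fit a least-squares meta-learner that combines the base models'
    predictions into a single output."""
    X = np.column_stack(base_preds)              # (n_samples, n_models)
    w, *_ = np.linalg.lstsq(X, y_true, rcond=None)
    return X @ w, w                              # combined prediction, weights
```

In the library, the base learners are the recurrent networks and the combination is learned end to end, but the division of labor is the same: base predictions in, one combined prediction out.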

🔹 Key Features

  - Smart Stacking: combines LSTM, GRU, BiLSTM, and BiGRU to improve predictions.
  - Dynamic Optimization: the LossAdaptiveOptimizer (LORO) automatically adjusts the learning rate.
  - Cost-Sensitive Loss Function: CustomLossWithRegression allows fine-tuning penalties for extreme events.
  - Results Visualization: clear comparison between predictions and actual values.
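The exact form of CustomLossWithRegression is defined by the library; as a rough illustration of the cost-sensitive idea (heavier penalties for extreme events), a simplified weighted MSE might look like this. The function name, the fixed `threshold`, and the `alpha` weighting rule here are assumptions for illustration only:

```python
import numpy as np

def cost_sensitive_mse(y_true, y_pred, threshold=2.0, alpha=5.0):
    """Illustrative cost-sensitive MSE: squared errors on 'extreme'
    targets (|y_true| above `threshold`) are weighted by `alpha`,
    so the model pays more for missing rare, large events."""
    err = (y_true - y_pred) ** 2
    weights = np.where(np.abs(y_true) > threshold, alpha, 1.0)
    return float(np.mean(weights * err))
```

In the library the threshold and alpha are learned parameters (see the training loop below, where both are printed each epoch) rather than fixed constants.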


📀 Installation

To install the library directly from PyPI, use:

pip install adaptive-stacking-keras

Or to manually install the latest version from the repository:

git clone https://github.com/your-username/adaptive-stacking-keras.git
cd adaptive-stacking-keras
pip install .

📀 How to Use

🔹 Usage Example

Here is a simple example of training a model with stacking and dynamic optimization.

import tensorflow as tf
from adaptive_stacking_keras import (
    StackingModel,
    CustomLossWithRegression,
    LORO,
    plot_time_series_comparison,
)

# Create the model with multiple hidden layers
hidden_dims = [64, 128]
model = StackingModel(input_dim=10, hidden_dims=hidden_dims, embedding_dim=32, output_dim=1)

# Create the LORO optimizer and the cost-sensitive loss
optimizer = LORO(learning_rate=0.001)
loss_fn = CustomLossWithRegression(model.threshold_alpha_layer)

# Generate synthetic data
tf.random.set_seed(42)
x_train = tf.random.normal((100, 10, 10))
y_train = tf.random.normal((100, 1))

# Train the model
for epoch in range(5):  # only 5 epochs for demonstration
    with tf.GradientTape() as tape:
        y_pred, (threshold, alpha) = model(x_train)
        loss = loss_fn(y_train, y_pred)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    print(f"Epoch [{epoch+1}/5] - Loss: {loss.numpy():.4f} - Threshold: {threshold.numpy():.4f} - Alpha: {alpha.numpy():.4f}")

# Evaluate on test data
x_test = tf.random.normal((50, 10, 10))
y_test = tf.random.normal((50, 1))
y_pred, _ = model(x_test)

# Visualize the results
plot_time_series_comparison(y_test, y_pred, time_range=(10, 40), title="Model Results")
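LORO's actual update rule lives in the library; the general idea of loss-adaptive learning rates can be sketched in plain Python. This is a toy schedule invented for illustration (the class name, factors, and rule are assumptions, not the library's implementation):

```python
class SimpleLossAdaptiveLR:
    """Toy illustration of loss-adaptive learning-rate scheduling:
    grow the step slightly while the loss keeps improving, and
    shrink it sharply when improvement stalls."""
    def __init__(self, lr=1e-3, up=1.05, down=0.5):
        self.lr, self.up, self.down = lr, up, down
        self.best = float("inf")

    def step(self, loss):
        if loss < self.best:      # loss improved: take bolder steps
            self.best = loss
            self.lr *= self.up
        else:                     # loss stalled: back off
            self.lr *= self.down
        return self.lr
```

A schedule of this shape reacts to the training signal itself rather than to a fixed epoch count, which is the property the LORO optimizer exposes through its automatic learning-rate adjustment.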

📝 Documentation

Check out the full documentation on the GitHub repository.

💎 Contributions

Contributions are welcome! To contribute:

  1. Fork this repository.
  2. Create a branch with your feature (git checkout -b my-feature).
  3. Commit your changes (git commit -m 'Add new feature').
  4. Push the branch (git push origin my-feature).
  5. Open a Pull Request.

🌟 License

This project is licensed under the MIT License - see the LICENSE file for more details.


💪 Built with dedication for developers and researchers!

Download files

Download the file for your platform.

Source Distribution

embedded_adaptive_stacking_keras-0.6.tar.gz (5.8 kB)


Built Distribution

embedded_adaptive_stacking_keras-0.6-py3-none-any.whl

File details

Details for the file embedded_adaptive_stacking_keras-0.6.tar.gz.

File metadata

File hashes

Hashes for embedded_adaptive_stacking_keras-0.6.tar.gz
SHA256: bf3fb4c9b52c97193e9bdd21ac5a94d0fd6ca80129b6cf6fe0cc3f80e7b38689
MD5: 3837807aff7bea233e8378e0959fad5a
BLAKE2b-256: 8d38b8e8d7c036c2ed3cc2bac922491207d1f0f993d98bdc4e704ae57ac11a59

See more details on using hashes here.

File details

Details for the file embedded_adaptive_stacking_keras-0.6-py3-none-any.whl.

File metadata

File hashes

Hashes for embedded_adaptive_stacking_keras-0.6-py3-none-any.whl
SHA256: 03c9d82b2c11d7f03b28bf79b0c12eebe91a12b76c596713824f633481c045ba
MD5: cef2723b17cbdd78613e2316291f6806
BLAKE2b-256: 2dc640c18e1b3ad483f2504a5db13c19f7cc6a89cda01c3dffff27e7fada895e

See more details on using hashes here.
