Prometheus Time Series Anomaly Detection with LSTM Autoencoder
This project implements a system for detecting anomalies in time series data collected from Prometheus. It uses an LSTM (Long Short-Term Memory) autoencoder model built with TensorFlow/Keras to learn normal patterns from your metrics and identify deviations. The system includes scripts for data collection, preprocessing, model training, data filtering, and real-time anomaly detection, exposing results via a Prometheus exporter.
Features
- Data Collection: Fetches time series data from a Prometheus instance for specified PromQL queries.
- Preprocessing: Handles missing values and normalizes/scales data for optimal model training.
- LSTM Autoencoder Training: Trains an LSTM autoencoder on the full preprocessed dataset.
- Data Filtering: A script to apply the trained Model A to filter out anomalous sequences from a dataset.
- Real-time Anomaly Detection: Continuously monitors new data and processes it with the trained model to detect anomalies.
- Prometheus Exporter Integration: Exposes key anomaly detection metrics (e.g., reconstruction error, anomaly flag, per-feature errors) that can be scraped by Prometheus and monitored with tools like Grafana.
- Configurable: All stages are highly configurable via a central config.yaml file.
Project Structure
.
├── config.yaml # Central configuration file for all scripts
├── data_collector.py # Script to collect historical data from Prometheus
├── preprocess_data.py # Script to preprocess the collected data
├── train_autoencoder.py # Script to train the LSTM autoencoder
├── filter_anomalous_data.py # Script to filter data using the trained model to separate normal/anomalous sequences
├── realtime_detector.py # Script for real-time anomaly detection and Prometheus exporter
├── Pipfile # Dependency declarations
├── Pipfile.lock # Locked versions of dependencies
└── README.md # This file
Prerequisites
- Python 3.12
- Pipenv for managing dependencies
- A running Prometheus instance (v2.x or later) that is scraping the metrics you want to analyze.
- (Optional) Exporters configured for your Prometheus to collect the desired metrics (e.g., node_exporter, windows_exporter).
Setup & Installation
- Clone the Repository:
  git clone <your-repository-url>
  cd <repository-name>
- Install Dependencies with Pipenv:
  pipenv install --dev
  After installation you can enter the environment using pipenv shell or run scripts with pipenv run.
- Prometheus Setup: Ensure your Prometheus server is running and accessible. The scripts will query this server based on the URL and PromQL queries defined in config.yaml. The example queries in config.yaml might use metrics from windows_exporter; adapt these to your own available metrics.
Configuration (config.yaml)
The config.yaml file is central to running this project. Key sections include:
- prometheus_url: URL of your Prometheus server.
- queries: Dictionary of PromQL queries with friendly aliases.
- data_settings: Parameters for data_collector.py (e.g., collection_period_hours, step, output_filename).
- preprocessing_settings: Parameters for preprocess_data.py (e.g., nan_fill_strategy, scaler_type, processed_output_filename, scaler_output_filename).
- training_settings: Parameters for train_autoencoder.py.
  - model_output_filename: Filename for Model A (trained on all data).
  - sequence_length, train_split_ratio, epochs, batch_size, learning_rate, early_stopping_patience: Standard training hyperparameters.
  - lstm_units_encoder1, etc.: LSTM autoencoder architecture definition.
- data_filtering_settings: Parameters for filter_anomalous_data.py.
  - normal_sequences_output_filename: Output file for sequences classified as normal by Model A.
  - anomalous_sequences_output_filename: Output file for sequences classified as anomalous by Model A.
- real_time_anomaly_detection: Parameters for realtime_detector.py.
  - query_interval_seconds: How often to fetch new data.
  - anomaly_threshold_mse: Crucial! The MSE threshold for declaring an anomaly. Tune this based on validation error histograms.
  - exporter_port: Port for the Prometheus exporter.
  - metrics_prefix: Prefix for exposed Prometheus metrics.
Before running any script, review and customize config.yaml thoroughly.
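For orientation, a minimal illustrative config.yaml sketch follows. The section and key names come from the descriptions above; all values are placeholders to adapt to your environment:

```yaml
prometheus_url: "http://localhost:9090"

queries:
  cpu_usage: 'rate(windows_cpu_time_total{mode!="idle"}[5m])'

data_settings:
  collection_period_hours: 168
  step: "60s"
  output_filename: "prometheus_metrics_data.parquet"

preprocessing_settings:
  nan_fill_strategy: "ffill"
  scaler_type: "minmax"
  processed_output_filename: "processed_metrics_data.parquet"
  scaler_output_filename: "fitted_scaler.joblib"

training_settings:
  train_on_filtered_sequences: false
  model_output_filename: "lstm_autoencoder_model_A.keras"
  sequence_length: 30
  train_split_ratio: 0.8
  epochs: 50
  batch_size: 32
  learning_rate: 0.001
  early_stopping_patience: 5
  lstm_units_encoder1: 64

data_filtering_settings:
  normal_sequences_output_filename: "normal_sequences.npy"
  anomalous_sequences_output_filename: "anomalous_sequences.npy"

real_time_anomaly_detection:
  query_interval_seconds: 60
  anomaly_threshold_mse: 0.05
  exporter_port: 8001
  metrics_prefix: "anomaly_detector"
```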
Usage / Workflow
The project follows a sequential workflow. Each stage can also be launched via the cli.py utility:
python cli.py collect     # collect data
python cli.py preprocess  # preprocess
python cli.py train       # train the model
python cli.py detect      # start the realtime detector
The sequential workflow remains as follows:
Step 1: Data Collection (data_collector.py)
Collect historical data from your Prometheus instance.
python data_collector.py
Output: Raw data Parquet file (e.g., prometheus_metrics_data.parquet).
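Under the hood, collection presumably calls Prometheus's /api/v1/query_range HTTP API. A minimal sketch of fetching one query and flattening the "matrix" result (the function names here are illustrative, not the script's actual API):

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

def fetch_range(prometheus_url, promql, start, end, step):
    """Call Prometheus's /api/v1/query_range endpoint and return the parsed JSON body."""
    params = urlencode({"query": promql, "start": start, "end": end, "step": step})
    with urlopen(f"{prometheus_url}/api/v1/query_range?{params}") as resp:
        return json.load(resp)

def flatten_matrix(payload):
    """Turn a query_range 'matrix' result into {series_key: [(timestamp, value), ...]}."""
    series = {}
    for result in payload["data"]["result"]:
        # The label set identifies the series; serialize it as a stable key.
        key = json.dumps(result["metric"], sort_keys=True)
        series[key] = [(float(ts), float(v)) for ts, v in result["values"]]
    return series
```

Prometheus returns sample values as strings, which is why `flatten_matrix` converts each one with `float()`.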
Step 2: Data Preprocessing (preprocess_data.py)
Preprocess the collected data (handles NaNs, scales features).
python preprocess_data.py
Outputs: Processed data Parquet file (e.g., processed_metrics_data.parquet) and a saved scaler (e.g., fitted_scaler.joblib).
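Conceptually, preprocessing forward-fills missing values and scales each feature into [0, 1]. A numpy-only sketch of the idea (the actual script likely uses pandas plus a scikit-learn scaler persisted with joblib, per the fitted_scaler.joblib output above):

```python
import numpy as np

def ffill(column):
    """Forward-fill NaNs in a 1-D array; leading NaNs are back-filled
    from the first valid value."""
    out = column.copy()
    last = np.nan
    for i, v in enumerate(out):
        if np.isnan(v):
            out[i] = last
        else:
            last = v
    valid = out[~np.isnan(out)]
    if valid.size:
        out[np.isnan(out)] = valid[0]
    return out

def minmax_scale(matrix):
    """Scale each column of a 2-D array into [0, 1]; constant columns map to 0."""
    lo, hi = matrix.min(axis=0), matrix.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    return (matrix - lo) / span
```

The scaler's min/max must be saved (as the script does via joblib) so that real-time data is scaled with the same parameters as the training data.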
Step 3: Train Initial Model - Model A (train_autoencoder.py)
Train the first LSTM autoencoder (Model A) on the full preprocessed dataset.
- In config.yaml (training_settings):
  - Set train_on_filtered_sequences: false.
  - Configure model_output_filename (e.g., lstm_autoencoder_model_A.keras).
python train_autoencoder.py
Outputs: Trained Model A (e.g., lstm_autoencoder_model_A.keras), training history plots. Use the reconstruction_error_histogram_...A.png to help determine anomaly_threshold_mse in config.yaml.
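Before training, the flat (timesteps, features) dataset has to be sliced into overlapping windows of length sequence_length, since the LSTM autoencoder consumes 3-D input. A sketch of that windowing step (the helper name is illustrative):

```python
import numpy as np

def make_sequences(data, sequence_length):
    """Slice a (timesteps, features) array into overlapping windows of
    shape (n_windows, sequence_length, features) for the LSTM autoencoder."""
    n = data.shape[0] - sequence_length + 1
    return np.stack([data[i:i + sequence_length] for i in range(n)])
```

With 10 timesteps and sequence_length 4, this yields 7 windows; the autoencoder is then trained to reconstruct each window from its compressed representation.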
Step 4: Filter Data (Optional) (filter_anomalous_data.py)
Use the trained Model A to classify sequences in your preprocessed dataset as "normal" or "anomalous".
- Ensure anomaly_threshold_mse (from the real_time_anomaly_detection section; used by this script as the threshold for Model A) is appropriately set in config.yaml.
- Configure output filenames in data_filtering_settings.
python filter_anomalous_data.py
Outputs: .npy files for normal sequences (e.g., normal_sequences.npy) and anomalous sequences.
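The filtering logic amounts to computing a per-sequence reconstruction MSE and splitting on the threshold. A sketch of that core step, assuming the reconstructions have already been produced by Model A (the function name is illustrative):

```python
import numpy as np

def split_by_reconstruction_error(sequences, reconstructions, threshold):
    """Compute per-sequence MSE between input windows and the autoencoder's
    reconstructions, then split the windows into normal and anomalous sets."""
    # Average the squared error over both the time and feature axes.
    mse = np.mean((sequences - reconstructions) ** 2, axis=(1, 2))
    normal = sequences[mse <= threshold]
    anomalous = sequences[mse > threshold]
    return normal, anomalous
```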
Step 5: Real-time Anomaly Detection (realtime_detector.py)
Run the real-time detector using the trained model.
- Ensure model_output_filename in training_settings points to your trained model.
- Ensure anomaly_threshold_mse in real_time_anomaly_detection is correctly set.
python realtime_detector.py
The detector starts a Prometheus exporter (e.g., on http://localhost:8001/metrics).
Step 6: Monitoring (Prometheus & Grafana)
Configure Prometheus to scrape the metrics endpoint from realtime_detector.py. Visualize metrics like:
- anomaly_detector_latest_reconstruction_error_mse
- anomaly_detector_is_anomaly_detected
- anomaly_detector_total_anomalies_count_total
- anomaly_detector_feature_reconstruction_error_mse{feature_name="your_alias"}
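Beyond dashboards, these metrics can feed Prometheus alerting. An illustrative alert rule (the metric name matches the list above; the duration and severity are placeholders to tune):

```yaml
groups:
  - name: lstm-anomaly-detector
    rules:
      - alert: TimeSeriesAnomalyDetected
        expr: anomaly_detector_is_anomaly_detected == 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "LSTM autoencoder flagged an anomaly"
          description: "Reconstruction error has exceeded the configured MSE threshold for 5 minutes."
```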
Interpreting Results
- Monitoring Metrics: Observe the is_anomaly_detected and latest_reconstruction_error_mse metrics in real time to evaluate detection behavior.
- Per-Feature Errors: When an anomaly is flagged, check the corresponding feature_reconstruction_error_mse metrics (and the logs of realtime_detector.py) to see which specific time series (features) are contributing most to the anomaly.
Customization & Extending
- Monitoring New Metrics: Add PromQL queries to config.yaml. Retrain models (all relevant steps) to include these.
- Tuning Anomaly Thresholds: The anomaly_threshold_mse value is critical. Adjust it based on model performance and desired sensitivity.
- Model Architecture: Modify the LSTM parameters in training_settings of config.yaml.
- Experimentation: Use the filter_anomalous_data.py script with different thresholds to generate various "cleaned" datasets if needed.
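One common way to pick anomaly_threshold_mse from the reconstruction error histogram mentioned above is to take a high percentile of the errors observed on (mostly normal) validation data. A sketch of that heuristic (the function name and default percentile are illustrative):

```python
import numpy as np

def threshold_from_errors(validation_errors, percentile=99.0):
    """Suggest an anomaly_threshold_mse as a high percentile of the
    reconstruction errors measured on validation data."""
    return float(np.percentile(validation_errors, percentile))
```

A higher percentile means fewer false positives but less sensitivity; inspect the histogram before committing to a value.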
Troubleshooting
- Python Dependencies: Ensure Pipfile/Pipfile.lock are in sync and run pipenv install if packages change.
- Prometheus Connection: Verify prometheus_url and query validity.
- Data Issues: Check for "No data found" errors; inspect PromQL queries and Prometheus scrape targets. Review nan_fill_strategy if NaNs persist.
- Model Training: If loss doesn't decrease, adjust the learning rate, batch size, or architecture. For overfitting, use EarlyStopping or consider more data/regularization.
- File Not Found: Double-check filenames in config.yaml against the actually generated files (models, scalers, datasets).
- Port in Use: If realtime_detector.py fails to start, the exporter_port might be occupied.
Contributing
Contributions are welcome! Please feel free to open an issue or submit a pull request.
License
This project is licensed under the MIT License.
Publishing to PyPI
This project includes a GitHub Actions workflow that builds and uploads the package to PyPI when a tag starting with v is pushed. Set the PYPI_API_TOKEN secret in your repository settings.