# VIEWS Evaluation 📊
Part of the VIEWS Platform ecosystem for large-scale conflict forecasting.
## 📚 Table of Contents
- Overview
- Role in the VIEWS Pipeline
- Features
- Installation
- Architecture
- Project Structure
- Contributing
- License
- Acknowledgements
## 🧠 Overview
The VIEWS Evaluation repository provides a standardized framework for assessing time-series forecasting models used in the VIEWS conflict prediction pipeline. It ensures consistent, robust, and interpretable evaluations through metrics tailored to conflict-related data, which often exhibit right-skewness and zero-inflation.
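RMSLE, one of the point metrics listed below, illustrates why log-scale metrics suit such data: errors are compared after a `log1p` transform, so zero counts stay well-defined and rare large outbreaks do not dominate the score. A minimal sketch (the toy numbers are illustrative, not VIEWS data):

```python
import numpy as np

def rmsle(actual, predicted) -> float:
    """Root Mean Squared Logarithmic Error.

    log1p keeps zero counts well-defined and compresses the long right
    tail typical of conflict-fatality counts, so a single large spike
    does not dominate the score the way it would under plain RMSE.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2)))

# Zero-inflated, right-skewed toy series: mostly zeros, one large spike.
print(round(rmsle([0, 0, 0, 2, 0, 150], [0, 1, 0, 3, 0, 90]), 4))
```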
## 🌍 Role in the VIEWS Pipeline

As the official evaluation component of the VIEWS ecosystem, VIEWS Evaluation safeguards forecasting accuracy and model robustness across the pipeline.
Pipeline Integration:
- Model Predictions →
- Evaluation Metrics Processing →
- Metrics Computation (via EvaluationManager) →
- Final Performance Reports
Integration with Other Repositories:
- views-pipeline-core: Supplies preprocessed data for evaluation.
- views-models: Provides trained models to be assessed.
- views-stepshifter: Evaluates time-shifted forecasting models.
- views-hydranet: Supports spatiotemporal deep learning model evaluations.
## ✨ Features
- Comprehensive Evaluation Framework: The `EvaluationManager` class provides structured methods to evaluate time-series predictions using point and uncertainty metrics.
- Multiple Evaluation Schemas:
  - Step-wise evaluation: groups predictions from all models by forecast step and evaluates each step.
  - Time-series-wise evaluation: evaluates each forecast time series individually.
  - Month-wise evaluation: groups and evaluates predictions at the monthly level.
- Support for Multiple Metrics:
  - Point evaluation metrics: RMSLE, CRPS, and Average Precision (Brier Score, Jeffreys Divergence, Pearson Correlation, Sinkhorn/Earth Mover's Distance & pEMDiv, and Variogram to be added).
  - Uncertainty evaluation metrics: CRPS (with more to be added in the future).
- Data Integrity Checks: Ensures input DataFrames conform to the expected structure for point and uncertainty evaluation before metrics are computed.
- Automatic Index Matching: Aligns actual and predicted values based on MultiIndex structures.
- Planned Enhancements:
  - Expanding metric calculations beyond RMSLE, CRPS, and AP.
  - New visualization tools for better interpretability of evaluation reports.
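The automatic index-matching step can be illustrated with plain pandas. The index levels and column names below (`month_id`, `country_id`, `ged_sb`, `prediction`) are illustrative assumptions in the VIEWS style, not the library's own API:

```python
import pandas as pd

# Hypothetical VIEWS-style actuals indexed by (month_id, country_id).
idx = pd.MultiIndex.from_product([[500, 501], [57, 70]],
                                 names=["month_id", "country_id"])
actuals = pd.DataFrame({"ged_sb": [0, 3, 1, 0]}, index=idx)

# Predictions arrive in a different row order and with one extra row.
preds = pd.DataFrame(
    {"prediction": [2.5, 0.1, 0.0, 4.0, 9.9]},
    index=pd.MultiIndex.from_tuples(
        [(501, 57), (500, 57), (500, 70), (501, 70), (502, 57)],
        names=["month_id", "country_id"],
    ),
)

# An inner join on the shared MultiIndex keeps only rows both frames
# cover, so metrics are computed on matched (actual, predicted) pairs.
matched = actuals.join(preds, how="inner").sort_index()
print(matched)
```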
## ⚙️ Installation

### Prerequisites

- Python >= 3.11

### From PyPI

```shell
pip install views_evaluation
```
## 🏗 Architecture
1. Evaluation Metrics Framework
- Handles forecasting evaluation across multiple models, levels of analysis, and forecasting windows.
- Converts model outputs into standardized evaluation reports.
2. Metrics Computation Pipeline
- Input: Predictions from models in standardized DataFrames.
- Processing: Calculation of relevant evaluation metrics.
- Output: Performance scores for comparison across models.
3. Error Handling & Standardization
- Ensures conformity to VIEWS evaluation standards.
- Warns about unrecognized or incorrectly formatted metrics.
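The input → processing → output path above can be sketched end to end for the month-wise schema. This is a minimal sketch with plain pandas, not the `EvaluationManager` API itself; the index levels and column names are assumptions:

```python
import numpy as np
import pandas as pd

# Hypothetical standardized input: an actuals column and a prediction
# column on a (month_id, country_id) MultiIndex.
idx = pd.MultiIndex.from_product([[500, 501], [57, 70, 133]],
                                 names=["month_id", "country_id"])
df = pd.DataFrame(
    {"actual": [0, 4, 0, 1, 0, 30],
     "prediction": [0.0, 2.0, 1.0, 0.0, 0.0, 22.0]},
    index=idx,
)

def rmsle(g: pd.DataFrame) -> float:
    # Log-scale squared error, averaged within one group of rows.
    return float(np.sqrt(np.mean(
        (np.log1p(g["prediction"]) - np.log1p(g["actual"])) ** 2)))

# Month-wise schema: group rows by forecast month, score each month,
# and collect the per-month scores into one report.
report = df.groupby(level="month_id").apply(rmsle).rename("RMSLE")
print(report)
```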
## 🗂 Project Structure

```text
views-evaluation/
├── README.md               # Documentation
├── .github/workflows/      # CI/CD pipelines
├── tests/                  # Unit tests
├── views_evaluation/       # Main source code
│   ├── evaluation/
│   │   └── metrics.py
│   └── __init__.py         # Package initialization
├── .gitignore              # Git ignore rules
├── pyproject.toml          # Poetry project file
└── poetry.lock             # Dependency lock file
```
## 🤝 Contributing
We welcome contributions! Please follow the VIEWS Contribution Guidelines.
## 📜 License

This project is licensed under the terms of the LICENSE file in the repository.
## 💬 Acknowledgements
Special thanks to the VIEWS MD&D Team for their collaboration and support.
## 📦 Project Details
### File details: views_evaluation-0.3.1.tar.gz

Metadata:
- Download URL: views_evaluation-0.3.1.tar.gz
- Size: 13.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.4 CPython/3.11.13 Linux/6.11.0-1018-azure

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4cbfd263b35cb52666d3018936025359def5ffeda9340315a60c22eff7b8fa6b` |
| MD5 | `d8288a3abe847e701269a7371589d492` |
| BLAKE2b-256 | `8e83f60ca068da70c5c0ba744b2f7d964a1f53432b67f99c720a4e8d220eafee` |
### File details: views_evaluation-0.3.1-py3-none-any.whl

Metadata:
- Download URL: views_evaluation-0.3.1-py3-none-any.whl
- Size: 13.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.4 CPython/3.11.13 Linux/6.11.0-1018-azure

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `001fe98de3c585410ecf3719aaf3826fb3a3cb801278207eda6a15441a0a2797` |
| MD5 | `19fa27a29ceff98d63b7c6c75a45a63e` |
| BLAKE2b-256 | `b904e4b0d9febc69ef860d1935de705c667d04100648ee0d7934fde3bb38ce1b` |