
VIEWS Evaluation 📊

Part of the VIEWS Platform ecosystem for large-scale conflict forecasting.

📚 Table of Contents

  1. Overview
  2. Role in the VIEWS Pipeline
  3. Features
  4. Installation
  5. Architecture
  6. Project Structure
  7. Contributing
  8. License
  9. Acknowledgements

🧠 Overview

The VIEWS Evaluation repository provides a standardized framework for assessing time-series forecasting models used in the VIEWS conflict prediction pipeline. It ensures consistent, robust, and interpretable evaluations through metrics tailored to conflict-related data, which often exhibit right-skewness and zero-inflation.
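Right-skew and zero-inflation are why log-based point metrics such as RMSLE are a natural fit here. A minimal illustration (synthetic data, plain NumPy; not the package's own implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Zero-inflated, right-skewed counts: mostly zeros, occasional large
# values (purely illustrative, not real VIEWS data).
actuals = np.where(rng.random(1000) < 0.8, 0.0,
                   rng.lognormal(mean=2.0, sigma=1.5, size=1000))
preds = actuals * rng.lognormal(mean=0.0, sigma=0.3, size=1000)

# RMSLE works on log1p-transformed values, so the many exact zeros
# contribute nothing and large outliers are compressed rather than
# dominating the score.
rmsle = np.sqrt(np.mean((np.log1p(preds) - np.log1p(actuals)) ** 2))
print(round(rmsle, 3))
```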


🌍 Role in the VIEWS Pipeline

As the official evaluation component of the VIEWS ecosystem, VIEWS Evaluation assesses forecasting accuracy and model robustness.

Pipeline Integration:

  1. Model Predictions
  2. Evaluation Metrics Processing
  3. Metrics Computation (via EvaluationManager)
  4. Final Performance Reports
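The flow above can be sketched with plain pandas. The index level and column names (`month_id`, `priogrid_gid`, `prediction`, `outcome`) are assumptions for illustration, not the package's actual schema, and the mean absolute error stands in for the metrics the package computes:

```python
import pandas as pd

# Step 1: model predictions in a standardized MultiIndex frame
# (level/column names assumed for illustration).
idx = pd.MultiIndex.from_product(
    [[490, 491, 492], [1001, 1002]], names=["month_id", "priogrid_gid"]
)
predictions = pd.DataFrame({"prediction": [0.0, 2.0, 1.0, 0.0, 3.0, 0.5]}, index=idx)
actuals = pd.DataFrame({"outcome": [0.0, 1.0, 2.0, 0.0, 4.0, 0.0]}, index=idx)

# Steps 2-4: align on the shared index, compute metrics, report scores.
mae = (predictions["prediction"] - actuals["outcome"]).abs().mean()
print(round(mae, 3))
```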

Integration with Other Repositories:


Features

  • Comprehensive Evaluation Framework: The EvaluationManager class provides structured methods to evaluate time series predictions based on point and uncertainty metrics.
  • Multiple Evaluation Schemas:
    • Step-wise evaluation: groups predictions by forecast step across all models and evaluates each step.
    • Time-series-wise evaluation: evaluates the predictions for each time series individually.
    • Month-wise evaluation: groups predictions by month and evaluates them at a monthly level.
  • Support for Multiple Metrics:
    • Point Evaluation Metrics: RMSLE, CRPS, and Average Precision (Brier Score, Jeffreys Divergence, Pearson Correlation, Sinkhorn/Earth-Mover Distance, pEMDiv, and Variogram to be added).
    • Uncertainty Evaluation Metrics: CRPS (more to be added in the future).
  • Data Integrity Checks: Ensures that input DataFrames conform to the expected structure for point and uncertainty evaluation before metrics are computed.
  • Automatic Index Matching: Aligns actual and predicted values based on MultiIndex structures.
  • Planned Enhancements:
    • Expanding metric calculations beyond RMSLE, CRPS, and AP.
    • New visualization tools for better interpretability of evaluation reports.
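Automatic index matching can be pictured with plain pandas: an inner join on the shared MultiIndex reorders rows to match and drops predictions that have no corresponding observation. Index names and values here are assumptions for illustration:

```python
import pandas as pd

# Actuals and predictions indexed differently (names assumed).
idx_a = pd.MultiIndex.from_tuples(
    [(490, 1), (490, 2), (491, 1)], names=["month_id", "country_id"]
)
idx_p = pd.MultiIndex.from_tuples(
    [(491, 1), (490, 2), (490, 1), (492, 9)], names=["month_id", "country_id"]
)
actual = pd.Series([1.0, 0.0, 2.0], index=idx_a, name="actual")
pred = pd.Series([2.5, 0.1, 0.9, 7.0], index=idx_p, name="pred")

# Inner join on the MultiIndex: rows are aligned, and the prediction
# without a matching observation, (492, 9), is dropped.
aligned = pd.concat([actual, pred], axis=1, join="inner")
print(len(aligned))  # 3
```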

⚙️ Installation

Prerequisites

  • Python >= 3.11

From PyPI

pip install views_evaluation

🏗 Architecture

1. Evaluation Metrics Framework

  • Handles forecasting evaluation across multiple models, levels of analysis, and forecasting windows.
  • Converts model outputs into standardized evaluation reports.

2. Metrics Computation Pipeline

  1. Input: Predictions from models in standardized DataFrames.
  2. Processing: Calculation of relevant evaluation metrics.
  3. Output: Performance scores for comparison across models.
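For the uncertainty side of step 2, CRPS for an ensemble of samples can be sketched with the textbook empirical formula CRPS = E|X − y| − ½·E|X − X′|. This is a stand-in for illustration; the package's own implementation may differ:

```python
import numpy as np

def crps_ensemble(samples, observation):
    """Empirical CRPS for one observation: E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - observation))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# A sharp, well-centred ensemble scores lower (better) than a biased one.
obs = 2.0
print(crps_ensemble([1.9, 2.0, 2.1], obs) < crps_ensemble([4.9, 5.0, 5.1], obs))
```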

3. Error Handling & Standardization

  • Ensures conformity to VIEWS evaluation standards.
  • Warns about unrecognized or incorrectly formatted metrics.
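A minimal sketch of such an integrity check, assuming hypothetical index level names (the package's actual checks and schema are not shown here):

```python
import warnings
import pandas as pd

EXPECTED_INDEX = ["month_id", "country_id"]  # assumed level names

def check_frame(df: pd.DataFrame) -> bool:
    """Stand-in integrity check: verifies the MultiIndex level names
    and warns, rather than failing, on unexpected input."""
    if list(df.index.names) != EXPECTED_INDEX:
        warnings.warn(f"unexpected index levels: {list(df.index.names)}")
        return False
    return True

good = pd.DataFrame(
    {"prediction": [0.1]},
    index=pd.MultiIndex.from_tuples([(490, 1)], names=EXPECTED_INDEX),
)
bad = pd.DataFrame({"prediction": [0.1]}, index=[0])
print(check_frame(good), check_frame(bad))
```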

🗂 Project Structure

views-evaluation/
├── README.md                    # Documentation
├── .github/workflows/           # CI/CD pipelines
├── tests/                       # Unit tests
├── views_evaluation/            # Main source code
│   ├── evaluation/
│   │   ├── metrics.py
│   ├── __init__.py              # Package initialization
├── .gitignore                   # Git ignore rules
├── pyproject.toml               # Poetry project file
├── poetry.lock                  # Dependency lock file

🤝 Contributing

We welcome contributions! Please follow the VIEWS Contribution Guidelines.


📜 License

This project is licensed under the terms described in the LICENSE file.


💬 Acknowledgements


Special thanks to the VIEWS MD&D Team for their collaboration and support.
