
VIEWS Evaluation 📊

Part of the VIEWS Platform ecosystem for large-scale conflict forecasting.

📚 Table of Contents

  1. Overview
  2. Role in the VIEWS Pipeline
  3. Features
  4. Installation
  5. Architecture
  6. Project Structure
  7. Contributing
  8. License
  9. Acknowledgements

🧠 Overview

The VIEWS Evaluation repository provides a standardized framework for assessing time-series forecasting models used in the VIEWS conflict prediction pipeline. It ensures consistent, robust, and interpretable evaluations through metrics tailored to conflict-related data, which often exhibit right-skewness and zero-inflation.
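Why a log-scale metric suits this kind of data can be seen in a small, self-contained sketch (the RMSLE implementation below is a generic textbook formula, not taken from the views_evaluation source; the example counts are invented for illustration):

```python
import numpy as np

def rmsle(actual, predicted):
    """Root Mean Squared Logarithmic Error: log1p damps the influence
    of rare, very large values in right-skewed count data."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2)))

# Zero-inflated, right-skewed counts: mostly zeros plus one large spike.
actual = [0, 0, 0, 0, 2, 0, 0, 850]
predicted = [0, 0, 1, 0, 3, 0, 0, 400]
score = rmsle(actual, predicted)
print(round(score, 3))
```

Because errors are compared on the log scale, being off by 450 on the spike and off by 1 on a near-zero cell contribute on comparable orders of magnitude, which a plain RMSE would not do.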


🌍 Role in the VIEWS Pipeline

As the official evaluation component of the VIEWS ecosystem, VIEWS Evaluation assesses forecasting accuracy and model robustness.

Pipeline Integration:

  1. Model Predictions
  2. Evaluation Metrics Processing
  3. Metrics Computation (via EvaluationManager)
  4. Final Performance Reports
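The four stages above can be sketched end to end with a minimal mock (only the class name EvaluationManager appears in this README; its constructor, `evaluate` method, and the column names below are illustrative assumptions, not the real API):

```python
import pandas as pd

class EvaluationManager:
    """Illustrative stand-in: maps prediction DataFrames to a report."""

    def __init__(self, metrics):
        # metrics: name -> callable(actual: Series, predicted: Series) -> float
        self.metrics = metrics

    def evaluate(self, df):
        # Compute every registered metric and return a performance report.
        return {name: float(fn(df["actual"], df["predicted"]))
                for name, fn in self.metrics.items()}

# 1. Model predictions arrive as a standardized DataFrame.
df = pd.DataFrame({"actual": [0, 3, 1], "predicted": [1, 2, 1]})
# 2.-3. Metrics are registered and computed via the manager.
manager = EvaluationManager({"mae": lambda a, p: (a - p).abs().mean()})
# 4. The final performance report is a plain mapping of scores.
print(manager.evaluate(df))
```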

Integration with Other Repositories:


Features

  • Comprehensive Evaluation Framework: The EvaluationManager class provides structured methods to evaluate time series predictions based on point and uncertainty metrics.
  • Multiple Evaluation Schemas:
    • Step-wise evaluation: groups predictions from all models by forecast step and evaluates each step.
    • Time-series-wise evaluation: evaluates each prediction time series separately.
    • Month-wise evaluation: groups and evaluates predictions at a monthly level.
  • Support for Multiple Metrics:
    • Point Evaluation Metrics: RMSLE, CRPS, and Average Precision (Brier Score, Jeffreys Divergence, Pearson Correlation, Sinkhorn/Earth Mover's Distance, pEMDiv, and Variogram to be added).
    • Uncertainty Evaluation Metrics: CRPS (with more to be added in the future).
  • Data Integrity Checks: Ensures that input DataFrames conform to the structures expected for point and uncertainty evaluation before metrics are computed.
  • Automatic Index Matching: Aligns actual and predicted values based on MultiIndex structures.
  • Planned Enhancements:
    • Expanding metric calculations beyond RMSLE, CRPS, and AP.
    • New visualization tools for better interpretability of evaluation reports.
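The three evaluation schemas differ only in how predictions are grouped before scoring. A pandas sketch makes the distinction concrete (the index level names `month_id` and `entity_id` and the `error` column are assumptions for illustration, not the package's actual schema):

```python
import pandas as pd

# Predictions indexed by (month_id, entity_id); 'step' marks the
# forecast horizon of each prediction.
df = pd.DataFrame(
    {"step": [1, 2, 1, 2], "error": [0.5, 1.0, 0.3, 0.8]},
    index=pd.MultiIndex.from_tuples(
        [(500, 1), (501, 1), (500, 2), (501, 2)],
        names=["month_id", "entity_id"],
    ),
)

step_wise = df.groupby("step")["error"].mean()        # per forecast step
series_wise = df.groupby("entity_id")["error"].mean() # per time series
month_wise = df.groupby("month_id")["error"].mean()   # per calendar month
print(step_wise.to_dict(), series_wise.to_dict(), month_wise.to_dict())
```

Grouping by an index level name (`entity_id`, `month_id`) works because the frame uses a MultiIndex, which is also what makes the automatic index matching between actuals and predictions possible.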

⚙️ Installation

Prerequisites

  • Python >= 3.11

From PyPI

pip install views_evaluation

🏗 Architecture

1. Evaluation Metrics Framework

  • Handles forecasting evaluation across multiple models, levels of analysis, and forecasting windows.
  • Converts model outputs into standardized evaluation reports.

2. Metrics Computation Pipeline

  1. Input: Predictions from models in standardized DataFrames.
  2. Processing: Calculation of relevant evaluation metrics.
  3. Output: Performance scores for comparison across models.
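The input/processing/output flow can be illustrated with one of the uncertainty metrics named above. Below is a generic sample-based CRPS estimator (a standard formulation, E|X − y| − ½·E|X − X′|, not the package's own implementation; model names and ensemble values are invented):

```python
import numpy as np

def crps_sample(samples, observation):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the forecast ensemble."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - observation))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return float(term1 - term2)

# Input: ensemble predictions per model. Processing: one score per model.
# Output: performance scores comparable across models.
scores = {
    "model_a": crps_sample([0, 1, 2, 3], observation=1.0),
    "model_b": crps_sample([5, 6, 7, 8], observation=1.0),
}
print(scores)
```

A lower CRPS is better: model_a's ensemble brackets the observation, while model_b's sits far from it, and the scores reflect that.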

3. Error Handling & Standardization

  • Ensures conformity to VIEWS evaluation standards.
  • Warns about unrecognized or incorrectly formatted metrics.
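A minimal sketch of what such a pre-flight integrity check might look like (the function, the required level names, and the failure messages are assumptions; the real checks in views_evaluation may differ):

```python
import pandas as pd

def check_prediction_frame(df, required=("month_id", "entity_id")):
    """Fail fast if a prediction DataFrame does not conform to the
    expected MultiIndex structure before any metric is computed."""
    if not isinstance(df.index, pd.MultiIndex):
        raise TypeError("predictions must use a MultiIndex")
    missing = [name for name in required if name not in (df.index.names or [])]
    if missing:
        raise ValueError(f"missing index levels: {missing}")
    if df.isna().any().any():
        raise ValueError("predictions contain NaN values")
    return True

idx = pd.MultiIndex.from_tuples([(500, 1)], names=["month_id", "entity_id"])
ok = check_prediction_frame(pd.DataFrame({"prediction": [0.2]}, index=idx))
print(ok)
```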

🗂 Project Structure

views-evaluation/
├── README.md                    # Documentation
├── .github/workflows/           # CI/CD pipelines
├── tests/                       # Unit tests
├── views_evaluation/            # Main source code
│   ├── evaluation/
│   │   ├── metrics.py
│   ├── __init__.py              # Package initialization
├── .gitignore                   # Git ignore rules
├── pyproject.toml               # Poetry project file
├── poetry.lock                  # Dependency lock file

🤝 Contributing

We welcome contributions! Please follow the VIEWS Contribution Guidelines.


📜 License

This project is licensed under the terms specified in the LICENSE file.


💬 Acknowledgements

Special thanks to the VIEWS MD&D Team for their collaboration and support.
