
VIEWS Evaluation 📊

Part of the VIEWS Platform ecosystem for large-scale conflict forecasting.

📚 Table of Contents

  1. Overview
  2. Role in the VIEWS Pipeline
  3. Features
  4. Installation
  5. Architecture
  6. Project Structure
  7. Contributing
  8. License
  9. Acknowledgements

🧠 Overview

The VIEWS Evaluation repository provides a standardized framework for assessing time-series forecasting models used in the VIEWS conflict prediction pipeline. It ensures consistent, robust, and interpretable evaluations through metrics tailored to conflict-related data, which often exhibit right-skewness and zero-inflation.
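For intuition, consider RMSLE, one of the point metrics used throughout the pipeline: by scoring errors on a log scale it damps the influence of the rare extreme values that right-skewed, zero-inflated conflict counts produce. A minimal standard-library sketch (not the package's own implementation):

```python
import math

def rmsle(actuals, predictions):
    """Root Mean Squared Logarithmic Error.

    log1p handles zero counts gracefully and compresses large values,
    so an error on a 1000-fatality spike weighs far less than it
    would on a linear scale.
    """
    if len(actuals) != len(predictions):
        raise ValueError("actuals and predictions must have equal length")
    sq_log_errors = [
        (math.log1p(p) - math.log1p(a)) ** 2
        for a, p in zip(actuals, predictions)
    ]
    return math.sqrt(sum(sq_log_errors) / len(sq_log_errors))

# A zero-inflated, right-skewed series: mostly zeros plus one spike.
print(round(rmsle([0, 0, 0, 0, 1000], [0, 0, 0, 0, 100]), 3))  # → 1.026
```

Despite being off by 900 on the spike, the log-scale error stays close to 1, which is the behavior that makes RMSLE suitable for such data.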


🌍 Role in the VIEWS Pipeline

As the official evaluation component of the VIEWS ecosystem, VIEWS Evaluation safeguards forecasting accuracy and model robustness.

Pipeline Integration:

  1. Model Predictions
  2. Evaluation Metrics Processing
  3. Metrics Computation (via EvaluationManager)
  4. Final Performance Reports

Features

  • Comprehensive Evaluation Framework: The EvaluationManager class provides structured methods to evaluate time series predictions based on point and uncertainty metrics.
  • Multiple Evaluation Schemas:
    • Step-wise evaluation: groups predictions from all models by forecast step and evaluates each step.
    • Time-series-wise evaluation: evaluates the predictions for each individual time series.
    • Month-wise evaluation: groups and evaluates predictions at the monthly level.
  • Support for Multiple Metrics:
    • Point Evaluation Metrics: RMSLE, CRPS, and Average Precision (Brier Score, Jeffreys Divergence, Pearson Correlation, Sinkhorn/Earth-mover Distance & pEMDiv, and Variogram to be added).
    • Uncertainty Evaluation Metrics: CRPS (with more to be added in the future).
  • Data Integrity Checks: Ensures that input DataFrames conform to the expected structures for point and uncertainty evaluation before metrics are computed.
  • Automatic Index Matching: Aligns actual and predicted values based on MultiIndex structures.
  • Planned Enhancements:
    • Expanding metric calculations beyond RMSLE, CRPS, and AP.
    • New visualization tools for better interpretability of evaluation reports.
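The three evaluation schemas differ only in how predictions are grouped before a metric is applied. A hedged illustration in plain Python — the package itself operates on pandas DataFrames with MultiIndex structures, so the flat record layout and the MAE metric here are purely illustrative:

```python
from collections import defaultdict

# Illustrative records: (month_id, step, actual, predicted).
records = [
    (500, 1, 2.0, 1.5),
    (500, 2, 0.0, 0.4),
    (501, 1, 3.0, 2.8),
    (501, 2, 1.0, 0.9),
]

def mae(pairs):
    """Mean absolute error over (actual, predicted) pairs."""
    return sum(abs(a - p) for a, p in pairs) / len(pairs)

def group_scores(records, key_index):
    """Group pairs by one index level (0 = month, 1 = step), score each group."""
    groups = defaultdict(list)
    for month, step, actual, pred in records:
        groups[(month, step)[key_index]].append((actual, pred))
    return {key: round(mae(pairs), 3) for key, pairs in groups.items()}

print(group_scores(records, key_index=1))  # step-wise: one score per forecast step
print(group_scores(records, key_index=0))  # month-wise: one score per month
```

Time-series-wise evaluation follows the same pattern with the grouping key set to the individual series identifier.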

⚙️ Installation

Prerequisites

  • Python >= 3.11

From PyPI

pip install views_evaluation

🏗 Architecture

1. Evaluation Metrics Framework

  • Handles forecasting evaluation across multiple models, levels of analysis, and forecasting windows.
  • Converts model outputs into standardized evaluation reports.

2. Metrics Computation Pipeline

  1. Input: Predictions from models in standardized DataFrames.
  2. Processing: Calculation of relevant evaluation metrics.
  3. Output: Performance scores for comparison across models.
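The three stages can be sketched end to end in plain Python; the function and metric names here are illustrative stand-ins, not the package API:

```python
# 1. Input: observed values plus per-model predictions in a common layout.
observed = [0.0, 2.0, 5.0, 1.0]
model_outputs = {
    "model_a": [0.5, 1.5, 4.0, 1.0],
    "model_b": [0.0, 3.0, 9.0, 0.0],
}

# 2. Processing: apply every registered metric to every model.
def mse(actual, pred):
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

metrics = {"mse": mse}

# 3. Output: one score per (model, metric), directly comparable across models.
report = {
    model: {name: fn(observed, preds) for name, fn in metrics.items()}
    for model, preds in model_outputs.items()
}
print(report)  # → {'model_a': {'mse': 0.375}, 'model_b': {'mse': 4.5}}
```

Keeping the metric registry separate from the loop is what lets new metrics slot in without touching the pipeline itself.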

3. Error Handling & Standardization

  • Ensures conformity to VIEWS evaluation standards.
  • Warns about unrecognized or incorrectly formatted metrics.
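The warning behavior can be pictured with a small validation helper; the supported-metric set and the function name are assumptions for illustration, not the package's actual interface:

```python
import warnings

SUPPORTED_METRICS = {"rmsle", "crps", "ap"}  # illustrative, not the real registry

def validate_metrics(requested):
    """Keep recognized metric names, warn about the rest instead of failing."""
    recognized = []
    for name in requested:
        if name.lower() in SUPPORTED_METRICS:
            recognized.append(name.lower())
        else:
            warnings.warn(f"Unrecognized metric {name!r}; skipping.")
    return recognized

print(validate_metrics(["RMSLE", "brier", "crps"]))  # → ['rmsle', 'crps']
```

Warning rather than raising lets a long evaluation run complete while still surfacing the misconfiguration.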

🗂 Project Structure

views-evaluation/
├── README.md                   # Documentation
├── .github/workflows/           # CI/CD pipelines
├── tests/                       # Unit tests
├── views_evaluation/            # Main source code
│   ├── evaluation/
│   │   ├── metrics.py
│   ├── __init__.py              # Package initialization
├── .gitignore                   # Git ignore rules
├── pyproject.toml               # Poetry project file
├── poetry.lock                  # Dependency lock file

🤝 Contributing

We welcome contributions! Please follow the VIEWS Contribution Guidelines.


📜 License

This project is licensed under the terms of the LICENSE file included in the repository.


💬 Acknowledgements

Special thanks to the VIEWS MD&D Team for their collaboration and support.

Download files

Download the file for your platform.

Source Distribution

views_evaluation-0.3.1.tar.gz (13.6 kB, source)

Built Distribution

views_evaluation-0.3.1-py3-none-any.whl (13.2 kB, Python 3)

File details

Details for the file views_evaluation-0.3.1.tar.gz.

File metadata

  • Download URL: views_evaluation-0.3.1.tar.gz
  • Upload date:
  • Size: 13.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.11.13 Linux/6.11.0-1018-azure

File hashes

Hashes for views_evaluation-0.3.1.tar.gz:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 4cbfd263b35cb52666d3018936025359def5ffeda9340315a60c22eff7b8fa6b |
| MD5 | d8288a3abe847e701269a7371589d492 |
| BLAKE2b-256 | 8e83f60ca068da70c5c0ba744b2f7d964a1f53432b67f99c720a4e8d220eafee |


File details

Details for the file views_evaluation-0.3.1-py3-none-any.whl.

File metadata

  • Download URL: views_evaluation-0.3.1-py3-none-any.whl
  • Upload date:
  • Size: 13.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.11.13 Linux/6.11.0-1018-azure

File hashes

Hashes for views_evaluation-0.3.1-py3-none-any.whl:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 001fe98de3c585410ecf3719aaf3826fb3a3cb801278207eda6a15441a0a2797 |
| MD5 | 19fa27a29ceff98d63b7c6c75a45a63e |
| BLAKE2b-256 | b904e4b0d9febc69ef860d1935de705c667d04100648ee0d7934fde3bb38ce1b |

