
VIEWS Evaluation 📊

Part of the VIEWS Platform ecosystem for large-scale conflict forecasting.

📚 Table of Contents

  1. Overview
  2. Role in the VIEWS Pipeline
  3. Features
  4. Installation
  5. Architecture
  6. Project Structure
  7. Contributing
  8. License
  9. Acknowledgements

🧠 Overview

The VIEWS Evaluation repository provides a standardized framework for assessing time-series forecasting models used in the VIEWS conflict prediction pipeline. It ensures consistent, robust, and interpretable evaluations through metrics tailored to conflict-related data, which often exhibit right-skewness and zero-inflation.


๐ŸŒ Role in the VIEWS Pipeline

As the official evaluation component of the VIEWS ecosystem, VIEWS Evaluation ensures forecasting accuracy and model robustness.

Pipeline Integration:

  1. Model Predictions →
  2. Evaluation Metrics Processing →
  3. Metrics Computation (via MetricsManager) →
  4. Final Performance Reports


✨ Features

1. EvaluationMetrics

A data class for managing and storing evaluation metrics for time-series forecasting models.

🔹 Key Capabilities:

  • Handles conflict-specific data distributions, including skewness and zero-inflation.
  • Three evaluation schemas:
    1. Time-series-wise: Evaluates long-term forecasting behavior.
    2. Step-wise: Assesses performance at each forecasting step.
    3. Month-wise: Measures forecast accuracy on a rolling monthly basis.
  • Transforms evaluation metrics into structured DataFrames for analysis.

📖 More details in the Evaluation Metrics Workshop Notes.


2. MetricsManager

A centralized evaluation engine for computing metrics on time-series forecasts.

🔹 Key Capabilities:

  • Customizable metric lists allow for flexible evaluation.
  • Ensures metric consistency by warning about unrecognized metrics.
  • Implements all three evaluation schemas (time-series, step-wise, month-wise).
  • Batch processing for multiple models and forecasting targets.

📖 More details in schema.MD.
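A minimal sketch of how such a manager might validate a customizable metric list and warn about unrecognized names. The class shape and the metric set are illustrative assumptions, not the actual implementation:

```python
import warnings

import numpy as np


class MetricsManager:
    """Hypothetical sketch: dispatch metric names to functions, warn on unknown ones."""

    _METRICS = {
        "RMSLE": lambda y, p: float(np.sqrt(np.mean((np.log1p(p) - np.log1p(y)) ** 2))),
        "MSE": lambda y, p: float(np.mean((p - y) ** 2)),
    }

    def __init__(self, metrics):
        self.metrics = []
        for name in metrics:
            if name in self._METRICS:
                self.metrics.append(name)
            else:
                # Unrecognized metrics are skipped with a warning, not an error
                warnings.warn(f"Unrecognized metric '{name}' will be skipped")

    def evaluate(self, y_true, y_pred):
        y, p = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
        return {name: self._METRICS[name](y, p) for name in self.metrics}


mgr = MetricsManager(["RMSLE", "not_a_metric"])   # warns about "not_a_metric"
scores = mgr.evaluate([0, 1, 4], [0, 1, 4])       # perfect forecast → RMSLE of 0
```

Warning rather than raising keeps batch evaluations of many models running even when one configuration lists a metric the engine does not know.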


3. Roadmap & Upcoming Features 🚧

✅ Planned Enhancements:

  • Multi-target evaluation (e.g., assessing multiple dependent variables simultaneously).
  • Expanding metric calculations beyond RMSLE, CRPS, and AP.
  • New visualization tools for better interpretability of evaluation reports.

โš™๏ธ Installation

Prerequisites

  • Python >= 3.11
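The package is published on PyPI, so the usual install should work (assuming the normalized distribution name `views-evaluation`):

```shell
# Install the released package from PyPI
pip install views-evaluation
```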

๐Ÿ— Architecture

1. Evaluation Metrics Framework

  • Handles forecasting evaluation across multiple models, levels of analysis, and forecasting windows.
  • Converts model outputs into standardized evaluation reports.

2. Metrics Computation Pipeline

  1. Input: Predictions from models in standardized DataFrames.
  2. Processing: Calculation of relevant evaluation metrics.
  3. Output: Performance scores for comparison across models.
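The three stages above might look like this in miniature. The column names, example values, and the month-wise RMSLE grouping are illustrative assumptions about the standardized DataFrame format:

```python
import numpy as np
import pandas as pd

# 1. Input: standardized long-format predictions, one row per unit-month
df = pd.DataFrame({
    "month_id": [500, 500, 501, 501],
    "y_true":   [0.0, 3.0, 1.0, 0.0],
    "y_pred":   [0.1, 2.5, 1.2, 0.0],
})


def rmsle(y_true, y_pred):
    # Root mean squared log error: log1p damps the influence of large counts,
    # which suits right-skewed, zero-inflated conflict data
    return float(np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2)))


# 2. Processing: month-wise schema, one score per forecasted month
month_scores = {m: rmsle(g["y_true"], g["y_pred"]) for m, g in df.groupby("month_id")}

# 3. Output: scores that can be compared across models
overall = rmsle(df["y_true"], df["y_pred"])
```

The same pattern extends to the time-series-wise and step-wise schemas by grouping on a series identifier or a forecast-step column instead of `month_id`.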

3. Error Handling & Standardization

  • Ensures conformity to VIEWS evaluation standards.
  • Warns about unrecognized or incorrectly formatted metrics.

🗂 Project Structure

views-evaluation/
├── README.md                   # Documentation
├── .github/workflows/          # CI/CD pipelines
├── tests/                      # Unit tests
├── views_evaluation/           # Main source code
│   ├── evaluation/
│   │   ├── metrics.py
│   ├── __init__.py             # Package initialization
├── .gitignore                  # Git ignore rules
├── pyproject.toml              # Poetry project file
├── poetry.lock                 # Dependency lock file

๐Ÿค Contributing

We welcome contributions! Please follow the VIEWS Contribution Guidelines.


📜 License

This project is licensed under the terms specified in the LICENSE file.


💬 Acknowledgements

Special thanks to the VIEWS MD&D Team for their collaboration and support.

