fao-analytics

Configuration-driven statistical calculations and aggregations for FAO (Food and Agriculture Organization of the United Nations) data, built on PySpark and validated with Pydantic.

Data sources

The package processes data from FAOSTAT -- the FAO corporate statistical database. Data can be loaded from:

  • Local files -- CSV, Parquet, or Delta format (see the sketch after this list)
  • SDMX API -- Connects to the FAO SDMX registry to retrieve dataflows with authoritative dimension ordering and attribute mappings (requires pysdmx)
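
Loading a Parquet or Delta file follows the same pattern as the CSV examples below: only the data_source block of the data mapping changes. A sketch; the "parquet" type string is an assumption modelled on the "csv" type shown in the dictionary example further down:

data_mapping = {
    "data_source": {"type": "parquet", "options": {}},  # assumed type string
    # dimensions and columns as in the CSV example below
}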

Each FAOSTAT domain (FDI, LC, OER, CS, BE, etc.) has its own configuration directory under configs/domains/ defining the data mapping, aggregation rules, calculation definitions, and group overrides.
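
For example, the FDI domain used throughout this README is configured by the files referenced in the examples below:

configs/
  domains/
    FDI/
      data_mapping_fdi.json    # data mapping (source, dimensions, columns)
      data_mapping_sdmx.json   # mapping for SDMX-formatted input
      aggregation.json         # aggregation rules
      calculations_fdi.json    # calculation definitions
  groups/
    base_groups.json           # shared base group definitions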

Features

  • fao_agg -- Geographic and dimensional aggregation engine
  • fao_calc -- Statistical indicator calculation engine (ratios, growth rates, transformations)
  • fao_common -- Shared data adapters (CSV, Parquet, Delta, SDMX) and configuration schemas

Installation

# From source (editable / development mode)
pip install -e .

# With SDMX support
pip install -e ".[sdmx]"

# With dev dependencies (pytest, coverage)
pip install -e ".[dev]"
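
To verify that all three packages are importable after installation:

# Verify the installation
python -c "import fao_agg, fao_calc, fao_common"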

Quick start

Configuration from file paths

from fao_agg import AggregationEngine
from fao_calc import CalculationEngine

# Aggregation -- load config from JSON files, data from a CSV
result = (
    AggregationEngine(
        data_mapping="configs/domains/FDI/data_mapping_fdi.json",
        aggregation_config="configs/domains/FDI/aggregation.json",
    )
    .load_data(path="data/domains/FDI/DataFDI.csv")
    .aggregate()
    .get_results()
)

# Calculation -- load calculation definitions from JSON, data from a CSV
result = (
    CalculationEngine(
        data_mapping="configs/domains/FDI/data_mapping_fdi.json",
        calculations="configs/domains/FDI/calculations_fdi.json",
    )
    .load_data(path="data/domains/FDI/DataFDI.csv")
    .calculate()
    .get_results()
)
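
The engines are built on PySpark, so, assuming get_results() returns a Spark DataFrame, the result can be inspected and saved with the usual DataFrame methods (the output path below is hypothetical):

result.show(5)        # preview the first rows
result.printSchema()  # inspect column names and types
result.write.mode("overwrite").parquet("output/fdi_aggregates")  # hypothetical path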

Configuration from dictionaries

from fao_agg import AggregationEngine

data_mapping = {
    "data_source": {
        "type": "csv",
        "options": {"header": "true", "inferSchema": "true"},
    },
    "dimensions": [
        {"name": "area",    "column": "Var1Code", "var_position": 1},
        {"name": "item",    "column": "Var2Code", "var_position": 2},
        {"name": "element", "column": "Var3Code", "var_position": 3},
        {"name": "year",    "column": "Var4Code", "var_position": 4},
    ],
    "columns": {
        "value": "Value",
        "flag": "Flag",
        "agg_flag_int": "AggFlagInt",
        "agg_flag_ext": "AggFlagExt",
    },
}

aggregation_config = {
    "iterations": [
        {
            "iteration": 1,
            "agg_dimensions": ["area"],
        }
    ],
    "base_groups": "configs/groups/base_groups.json",
}

result = (
    AggregationEngine(
        data_mapping=data_mapping,
        aggregation_config=aggregation_config,
    )
    .load_data(path="data/domains/FDI/DataFDI.csv")
    .aggregate()
    .get_results()
)
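
Because the configuration schemas are validated with Pydantic, a malformed dictionary should fail fast rather than producing wrong results. A minimal sketch, assuming validation happens at engine construction and surfaces as a pydantic.ValidationError:

from pydantic import ValidationError

from fao_agg import AggregationEngine

try:
    AggregationEngine(
        data_mapping={"data_source": {"type": "csv"}},  # dimensions/columns omitted
        aggregation_config=aggregation_config,
    )
except ValidationError as exc:
    print(exc)  # reports the missing or invalid fields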

Auto-generated configuration from SDMX

Rather than writing the data mapping by hand, you can have the SdmxDataAdapter build it automatically by querying the FAO SDMX registry for the dataflow schema:

from fao_agg import AggregationEngine
from fao_common.adapters.sdmx import SdmxDataAdapter
from fao_common.config.schema import SdmxDataSource

# Build the data mapping automatically from the SDMX registry
adapter = SdmxDataAdapter(
    SdmxDataSource(
        endpoint="https://private-fmr.aws.fao.org/sdmx/v2/",
        domain_code="FDI",
    )
)
data_mapping = adapter.build_data_mapping()

# Use the auto-generated mapping with the aggregation engine
result = (
    AggregationEngine(
        data_mapping=data_mapping,
        aggregation_config="configs/domains/FDI/aggregation.json",
    )
    .load_data()  # no path needed: the data source comes from the SDMX mapping
    .aggregate()
    .get_results()
)
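
The generated mapping can be saved and versioned alongside the hand-written configs. A sketch, assuming build_data_mapping() returns a JSON-serialisable dictionary (the output filename is hypothetical):

import json

# Persist the registry-derived mapping for reuse
with open("configs/domains/FDI/data_mapping_generated.json", "w") as f:
    json.dump(data_mapping, f, indent=2)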

SDMX configuration with a local SDMX CSV

If you have an SDMX-formatted CSV file and want the adapter to handle column mapping via the registry:

from fao_agg import AggregationEngine

result = (
    AggregationEngine(
        data_mapping="configs/domains/FDI/data_mapping_sdmx.json",
        aggregation_config="configs/domains/FDI/aggregation.json",
    )
    .load_data()  # data path is in the mapping config
    .aggregate()
    .get_results()
)

Testing

# Run all tests
pytest

# Run only the fao_agg unit tests
pytest tests/fao_agg/

# Run only integration tests
pytest -m integration

# Run the tests for a single domain
pytest tests/domains/test_fdi.py -v

See README_TESTING.md for detailed testing documentation.
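
The dev extras also install coverage; a minimal sketch using the standard coverage CLI:

# Measure test coverage across the suite
coverage run -m pytest
coverage report -m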

Project structure

src/
  fao_agg/        # Aggregation engine
  fao_calc/       # Calculation engine
  fao_common/     # Shared adapters, schemas, Spark utilities
configs/           # JSON configuration files per FAOSTAT domain
data/              # Sample/test data files (CSV)
tests/             # Unit and integration tests

Publishing to PyPI

# Install build tools
pip install build twine

# Build source distribution and wheel
python -m build

# Check the package
twine check dist/*

# Upload to Test PyPI first
twine upload --repository testpypi dist/*

# Upload to PyPI
twine upload dist/*
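
After uploading to Test PyPI, it is worth checking that the release installs cleanly before publishing for real. A sketch; dependencies are pulled from the real index, since they are not mirrored on Test PyPI:

# Smoke-test the Test PyPI release in a fresh environment
pip install --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ \
    non-sws-spark-calculations-engine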
