Package to create aggregated variables from CBS network data

Project description

netCBS

netCBS efficiently creates network-based measures using CBS POPNET network tables (e.g. family, colleagues, neighbors, schoolmates, housemates). For example: compute the average income of a person’s parents, or the average income of the parents of their classmates, using CBS network links.

Installation

pip install netcbs

Quick start

See the example notebook in the repository for an accessible walkthrough with examples.

The core function is transform(query, df_sample, df_agg, ...).

Inputs

  • df_sample: your “ego” sample. Must contain:

    • RINPERSOON (unique person identifier). Note: the corresponding RINPERSOONS code must be R.
  • df_agg: the table containing the variables you want to aggregate for the alters reached by the network traversal. Must contain:

    • RINPERSOON. Note: RINPERSOONS must be R.
    • all variables referenced in the query’s aggregation-variable list (e.g. Income, Age)

Query format

A query describes:

  1. Which variables to aggregate (first segment), and
  2. Which network hops to traverse (one or more context segments), ending in sample.

Format:

"[Var1, Var2, ...] -> ContextA[types] -> ContextB[types] -> ... -> sample"

  • The first segment must be in square brackets: "[Income]" or "[Income, Age]".
  • Each context is one of: Family, Colleagues, Neighbors, Schoolmates, Housemates.
  • Context type selector is either:
    • [all] (use all relationship codes valid for that context), or
    • [101,102,...] (explicit relationship codes)
  • The final segment must be the literal sample (lowercase).

Example:

query = "[Income, Age] -> Family[301] -> Schoolmates[all] -> sample"

This computes the aggregated Income and Age of the parents (relationship code 301) of the schoolmates of the people in the sample (df_sample).
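Read right-to-left, the hops apply outward from the sample. A rough illustration of how the segments decompose (plain string handling, not the package’s internal parser):

```python
query = "[Income, Age] -> Family[301] -> Schoolmates[all] -> sample"

# Split on the arrow separator and trim whitespace around each segment.
segments = [s.strip() for s in query.split("->")]

variables = segments[0].strip("[]").replace(" ", "").split(",")
contexts = segments[1:-1]   # network hops, traversed from sample outward
target = segments[-1]       # must be the literal "sample"

print(variables)  # ['Income', 'Age']
print(contexts)   # ['Family[301]', 'Schoolmates[all]']
print(target)     # 'sample'
```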

Usage

import polars as pl  
import netcbs

query = "[Income, Age] -> Family[301] -> Schoolmates[all] -> sample"

df_out = netcbs.transform(
    query=query,
    df_sample=df_sample,     # must contain: RINPERSOON
    df_agg=df_agg,           # must contain: RINPERSOON, Income, Age
    year=2021,
    format_file="parquet",   # "parquet" (recommended) or "csv"
    agg_funcs=("avg", "sum", "count"),  # DuckDB aggregate function names (strings)
    return_pandas=False, 
)

About agg_funcs (important)

agg_funcs must be a sequence of DuckDB aggregate function names as strings, e.g.:

  • "avg", "sum", "count", "min", "max" (and other DuckDB aggregates)

The output columns are named:

"_"

So with agg_funcs=("avg","sum") and "[Income, Age]", you get:

  • avg_Income, sum_Income, avg_Age, sum_Age
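The naming scheme can be reproduced with a simple comprehension (illustrative only):

```python
agg_funcs = ("avg", "sum")
variables = ["Income", "Age"]

# Output columns follow the "<agg_func>_<variable>" pattern,
# grouped by variable first, then by aggregate function.
out_cols = [f"{func}_{var}" for var in variables for func in agg_funcs]
print(out_cols)  # ['avg_Income', 'sum_Income', 'avg_Age', 'sum_Age']
```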

How it works

  1. Validate query
    validate_query() checks:

    • query structure
    • df_sample has RINPERSOON
    • df_agg has RINPERSOON and all requested aggregation variables
    • each context and relationship-type selector is valid
    • (optionally) referenced CBS files exist for the requested year
  2. Resolve network files
    For each hop, format_path() selects the latest available version of the CBS network file for the requested year.

    • For format_file="parquet", files are expected under a geconverteerde data subfolder.
    • For format_file="csv", files are read with read_csv_auto(..., delim=';').
  3. Traverse the network
    DuckDB reads each network file, filters by the requested relationship codes, and joins hop-by-hop from egos to alters.

  4. Aggregate
    DuckDB joins the final reached persons to df_agg and computes the requested aggregates, grouped by the original sample person.

  5. Join back to sample
    Results are left-joined back onto the sample so every sample person remains in the output (missing networks produce null aggregates).

Contributing

Please refer to the repository’s CONTRIBUTING guide for issues and pull requests.

License and citation

netCBS is published under the MIT license.
For academic citation: Garcia-Bernardo, J. (2024). netCBS: Package to efficiently create network measures using CBS networks in the RA. (v0.1). Zenodo. https://doi.org/10.5281/zenodo.13908121

Contact

Developed and maintained by the ODISSEI Social Data Science (SoDa) team.
Questions or suggestions: please open an issue or contact via the ODISSEI SoDa website.

Download files

Download the file for your platform.

Source Distribution

netcbs-0.3.tar.gz (40.4 MB)

Uploaded Source

Built Distribution

netcbs-0.3-py3-none-any.whl (11.8 kB)

Uploaded Python 3

File details

Details for the file netcbs-0.3.tar.gz.

File metadata

  • Download URL: netcbs-0.3.tar.gz
  • Upload date:
  • Size: 40.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for netcbs-0.3.tar.gz
Algorithm Hash digest
SHA256 c65f9ab7e05f3cc5747922521e0ab4d27c3f5d2ff168da0ae6d5ef4b553056ea
MD5 f86e482bac6b29eff78c6d5d71242202
BLAKE2b-256 2de87a5fc7f512a690576729f1ff46a4b896990b1c336f2adfa2f7ef83bbca65

File details

Details for the file netcbs-0.3-py3-none-any.whl.

File metadata

  • Download URL: netcbs-0.3-py3-none-any.whl
  • Upload date:
  • Size: 11.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for netcbs-0.3-py3-none-any.whl
Algorithm Hash digest
SHA256 22136efb598b523758178156ea102d4e48e1322e86936a50c112d35f52efa3e7
MD5 4666fa66585487aeac77d1663a1e4a03
BLAKE2b-256 4c8e77a2fdcfc8c316d0645af68b3471f8bcb5c43c29181c5c182b79cd40f1b2
