
Network Scale-Up Models for Aggregated Relational Data

Reason this release was yanked:

Incorrect build — package files missing

Project description

This package fits several different Network Scale-Up Models (NSUM) to Aggregated Relational Data (ARD). ARD represents survey responses to questions of the form: "How many X’s do you know?", where respondents report how many people they know in different subpopulations.

Specifically, if Nᵢ respondents are asked about Nₖ subpopulations, then the ARD is an Nᵢ times Nₖ matrix, where the (i, j) element represents how many people respondent i reports knowing in subpopulation j.
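As a toy illustration of this layout (hypothetical data, not tied to the package's API), an ARD matrix can be written down directly with numpy:

```python
import numpy as np

# Toy ARD matrix: 3 respondents (rows) asked about 4 subpopulations
# (columns).  Entry (i, j) is how many people respondent i reports
# knowing in subpopulation j.
ard = np.array([
    [2, 1, 3, 4],
    [1, 0, 2, 2],
    [0, 0, 0, 1],
])

print(ard.shape)  # (3, 4): 3 respondents, 4 subpopulations
```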

NSUM leverages these responses to estimate the unknown size of hard-to-reach populations.

In this package, we provide functions to estimate the unknown subpopulation sizes and accompanying parameters (e.g. degrees) using estimators from two papers:

Killworth, P. D., Johnsen, E. C., McCarty, C., Shelley, G. A., and Bernard, H. R. (1998) plug-in MLE

Killworth, P. D., McCarty, C., Bernard, H. R., Shelley, G. A., and Johnsen, E. C. (1998) MLE

Requirements

This package requires the following Python libraries:

  • numpy
  • pandas

PIMLE

The plug-in MLE (PIMLE) estimator from Killworth, P. D., Johnsen, E. C., McCarty, C., Shelley, G. A., and Bernard, H. R. (1998) is a two-stage estimator that first estimates each respondent's degree dᵢ by maximizing the following likelihood:

L(dᵢ; y, {Nₖ}) = ∏ₖ₌₁ᴸ C(dᵢ, yᵢₖ) × (Nₖ / N)^yᵢₖ × (1 − Nₖ / N)^(dᵢ − yᵢₖ),

where L is the number of subpopulations with known sizes Nₖ, yᵢₖ is the number of people respondent i reports knowing in subpopulation k, and C(dᵢ, yᵢₖ) is the binomial coefficient "dᵢ choose yᵢₖ". In the second stage, the model plugs the estimated dᵢ into the equation:

yᵢₖ / dᵢ = Nₖ / N

and solves for the unknown Nₖ for each respondent. These estimates are then averaged to obtain a single estimate of Nₖ.
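For intuition, the stage-1 likelihood can be evaluated directly for a single respondent. This is an illustrative sketch using only the standard library (the function name and inputs are not part of the package); the closed-form stage-1 formula that follows is, to a good approximation, the maximizer of this likelihood:

```python
import math

def log_likelihood(d_i, y_i, known_sizes, N):
    """Log of the stage-1 binomial likelihood L(d_i; y, {N_k}) for one
    respondent: a sum over the L known subpopulations of
    log C(d_i, y_ik) + y_ik*log(N_k/N) + (d_i - y_ik)*log(1 - N_k/N)."""
    ll = 0.0
    for y_ik, N_k in zip(y_i, known_sizes):
        p = N_k / N
        ll += (math.log(math.comb(d_i, y_ik))
               + y_ik * math.log(p)
               + (d_i - y_ik) * math.log(1.0 - p))
    return ll

# A respondent reporting y_i = (2, 1, 3) about subpopulations of sizes
# 1000, 500, 2000 in a population of N = 100000: degrees near
# N * 6 / 3500 ≈ 171 score higher than far-off candidates.
print(log_likelihood(171, [2, 1, 3], [1000, 500, 2000], N=100000))
```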

To summarize, Stage 1 estimates dᵢ using:

dᵢ = N × (∑ₖ₌₁ᴸ yᵢₖ) / (∑ₖ₌₁ᴸ Nₖ)

Stage 2 estimates the unknown subpopulation size Nₖ with:

N̂ₖᴾᴵᴹᴸᴱ = (N / n) × ∑ᵢ₌₁ⁿ (yᵢₖ / dᵢ)
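As an illustration only (this is not the package's implementation; the function name mirrors the killworth call shown below, and the toy numbers are made up), both stages can be written in a few lines of numpy:

```python
import numpy as np

def killworth_pimle(ard, known_sizes, known_ind, N):
    """Sketch of the PIMLE: ard is an (n, K) array of reported counts,
    known_ind gives the columns with known sizes, N is the total
    population size.  Returns estimates for the remaining columns."""
    ard = np.asarray(ard, dtype=float)

    # Stage 1: d_i = N * (sum_k y_ik) / (sum_k N_k) over the known columns.
    degrees = N * ard[:, known_ind].sum(axis=1) / np.sum(known_sizes)

    # Respondents with d_i = 0 would put a zero in the stage-2
    # denominator, so their responses are dropped.
    keep = degrees > 0

    # Stage 2: N_k = (N / n) * sum_i (y_ik / d_i) over the kept respondents.
    unknown_ind = [j for j in range(ard.shape[1]) if j not in known_ind]
    ratios = ard[np.ix_(keep, unknown_ind)] / degrees[keep, None]
    return N * ratios.mean(axis=0)

# Columns 0-2 have known sizes 1000, 500, 2000; column 3 is unknown.
est = killworth_pimle([[2, 1, 3, 4], [1, 0, 2, 2], [0, 0, 0, 1]],
                      known_sizes=[1000, 500, 2000],
                      known_ind=[0, 1, 2], N=100000)
print(est)  # [2333.33...]
```

The third respondent reports knowing no one in the known subpopulations, so that row is dropped before averaging, exactly the d̂ᵢ = 0 situation discussed below.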

Here is an example of this package creating an estimate using the PIMLE function:

pimle.est = killworth(ard, known_sizes = sizes[c(1, 2, 4)], known_ind = c(1, 2, 4), N = N, model = "PIMLE")

Note that the function will provide a warning saying that at least one dᵢ was 0. This occurs when a respondent does not report knowing anyone in any of the known subpopulations. This is an issue for the PIMLE, since a 0 value appears in the denominator of N̂ₖᴾᴵᴹᴸᴱ. Thus, we ignore the responses from respondents with dᵢ = 0.

MLE

Next, we analyze the data with the Killworth, P. D., McCarty, C., Bernard, H. R., Shelley, G. A., and Johnsen, E. C. (1998) MLE estimator. This is also a two-stage model, with an identical first stage, i.e.

d̂ᵢ = N × (∑ₖ₌₁ᴸ yᵢₖ) / (∑ₖ₌₁ᴸ Nₖ)

However, the second stage estimates Nₖ by maximizing the binomial likelihood with respect to Nₖ, fixing dᵢ at the estimated d̂ᵢ. Thus, the estimate for the unknown subpopulation size is given by

N̂ₖᴹᴸᴱ = N × (∑ᵢ₌₁ⁿ yᵢₖ) / (∑ᵢ₌₁ⁿ d̂ᵢ)
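As with the PIMLE, this can be sketched in numpy (an illustration of the two formulas above, not the package's code; the function name mirrors the killworth call shown below, and the toy numbers are made up):

```python
import numpy as np

def killworth_mle(ard, known_sizes, known_ind, N):
    """Sketch of the MLE estimator: same stage 1 as the PIMLE, but
    stage 2 pools all reports before dividing, so rows with d_i = 0
    need no special handling."""
    ard = np.asarray(ard, dtype=float)

    # Stage 1: identical degree estimates d_i = N * sum_k y_ik / sum_k N_k.
    degrees = N * ard[:, known_ind].sum(axis=1) / np.sum(known_sizes)

    # Stage 2: N_k = N * (sum_i y_ik) / (sum_i d_i) for each unknown column.
    unknown_ind = [j for j in range(ard.shape[1]) if j not in known_ind]
    return N * ard[:, unknown_ind].sum(axis=0) / degrees.sum()

# Toy data: columns 0-2 have known sizes 1000, 500, 2000; column 3 is unknown.
est = killworth_mle([[2, 1, 3, 4], [1, 0, 2, 2], [0, 0, 0, 1]],
                    known_sizes=[1000, 500, 2000],
                    known_ind=[0, 1, 2], N=100000)
print(est)  # [2722.22...]
```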

For example, the estimate can be obtained using:

mle.est = killworth(ard, known_sizes = sizes[c(1, 2, 4)], known_ind = c(1, 2, 4), N = N, model = "MLE")

Note that this function will not produce a warning for dᵢ = 0 values, since the denominator depends on the sum of the d̂ᵢ.

Download files


Source Distribution

networkscaleup-0.0.9.tar.gz (4.6 kB)

Uploaded Source

File details

Details for the file networkscaleup-0.0.9.tar.gz.

File metadata

  • Download URL: networkscaleup-0.0.9.tar.gz
  • Size: 4.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.4

File hashes

Hashes for networkscaleup-0.0.9.tar.gz
  • SHA256: 3215c3b96f6d415f5ceecc4db3bf5d73a777e271870420abbfac9436cd5f7168
  • MD5: 11189cabc0e6a9c2f9a1dd1deb9fd15f
  • BLAKE2b-256: 51cc8f1de8faf7dca1cd2574902c85d3c3fae301a4289df42a71cb0548cdc1a8

