
NAND-based author name disambiguation for SAO/NASA ADS publication metadata.

Project description

ads-and


ads-and is a Python package for author name disambiguation (AND) on SAO/NASA ADS records. Given publications and optionally references in ADS parquet format, it assigns stable author identifiers and writes disambiguated outputs.

The bundled model is a packaged and slightly refined version of NAND (Neural Author Name Disambiguator), described in Amado Olivo et al. 2025. NAND was trained and evaluated on LSPO, a large-scale physics and astronomy AND benchmark built from ~553k NASA/ADS publications linked to ORCID identities (~125k researchers). The model ships inside the package; no external bundle is required.

The bundled package was re-evaluated on the same LSPO benchmark under a reproducible five-seed protocol. Clustering performance on LSPO (with constraints enabled):

                                   F1        Precision   Recall
NAND (Amado Olivo et al. 2025)     95.93%    96.15%      96.21%
ads-and (this package)             97.02%    96.36%      97.70%

Python import path: author_name_disambiguation

Install

Use uv. Requires Python ≥ 3.11.

uv pip install ads-and

If you don't have a GPU, the optional ONNX extra provides faster CPU inference (still much slower than GPU):

uv pip install "ads-and[cpu_onnx]"

Optional Modal backend (requires a Modal account):

uv pip install "ads-and[modal]"

Usage

CLI

ads-and infer \
  --publications-path path/to/publications.parquet \
  --references-path path/to/references.parquet \
  --output-dir path/to/output-dir \
  --runtime auto

Add --json for a machine-readable run summary on stdout.

--runtime options: auto (GPU if CUDA is available, else CPU), gpu, cpu. Advanced infer flags such as --infer-stage, --dataset-id, and --modal-gpu are documented in docs/inference_workflow.md.

Modal uses the same command surface, with Modal acting as a managed remote GPU backend (requires a Modal account):

ads-and infer \
  --publications-path path/to/publications.parquet \
  --references-path path/to/references.parquet \
  --output-dir path/to/output-dir \
  --backend modal \
  --runtime gpu \
  --modal-gpu l4

The current repo Modal config is --backend modal --runtime gpu --modal-gpu l4, as shown above. The local client uploads the ADS parquet inputs, Modal runs the same bundled infer workflow remotely, and the finished outputs are copied back into output-dir. Rule of thumb for the L4 configuration: about $0.00085 and ~2.5 s per 1,000 ADS entries. Configure MODAL_TOKEN_ID and MODAL_TOKEN_SECRET in your environment or a repo-root .env before using --backend modal.
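The L4 rule of thumb above can be turned into a quick back-of-the-envelope estimate. This is a sketch, not billing logic: the per-1,000-entry figures are the approximate values quoted above, and the helper name is illustrative.

```python
def estimate_l4_run(n_entries: int) -> tuple[float, float]:
    """Rough L4 estimate: ~$0.00085 and ~2.5 s per 1,000 ADS entries."""
    cost_usd = n_entries / 1000 * 0.00085
    seconds = n_entries / 1000 * 2.5
    return cost_usd, seconds

# e.g. a 2M-entry corpus
cost, secs = estimate_l4_run(2_000_000)
print(f"~${cost:.2f}, ~{secs / 60:.0f} min")
```

For exact figures, use the cost lookup described below rather than this estimate.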

Exact Modal costs require a separate lookup against Modal's official billing:

ads-and cost --output-dir path/to/output-dir

This is a follow-up lookup after the run, once the billing window has closed.

Python

Local CPU/GPU:

from author_name_disambiguation import disambiguate_sources

result = disambiguate_sources(
    publications_path="path/to/publications.parquet",
    references_path="path/to/references.parquet",
    output_dir="path/to/output-dir",
    runtime="auto",
)

print(result.publications_disambiguated_path)
print(result.summary_path)

Modal:

from author_name_disambiguation import disambiguate_sources, resolve_modal_cost

modal_result = disambiguate_sources(
    publications_path="path/to/publications.parquet",
    references_path="path/to/references.parquet",
    output_dir="path/to/output-dir",
    backend="modal",
    runtime="gpu",
    modal_gpu="l4",
)

# later, after the billing interval closes
cost_result = resolve_modal_cost("path/to/output-dir")

Input schema

--publications-path is required. --references-path is optional.

Column                   Required                    Type                                         Example
Bibcode                  yes                         str                                          "2000MNRAS.319..168C"
Author                   yes                         list[str] or semicolon-delimited str         ["Cole, Shaun", "Lacey, Cedric G."]
Title_en or Title        no, strongly recommended    str                                          "Galaxy luminosity functions in..."
Abstract_en or Abstract  no, strongly recommended    str                                          "We model the galaxy population..."
Affiliation              no                          str (ADS format) or list[str] (per-author)   "AA(Durham Univ, Dept of Physics); AB(...)"
Year                     no                          int                                          2000

Records missing Bibcode or Author are skipped. Records missing both Title and Abstract will be processed but with meaningfully reduced disambiguation quality, since the model relies heavily on textual context to distinguish authors.

Output

All files are written under output_dir:

File                                  Contents
publications_disambiguated.parquet    input columns + AuthorUID, AuthorDisplayName
references_disambiguated.parquet      same, for references (only when references are provided)
source_author_assignments.parquet     row-level author-to-entity assignments
author_entities.parquet               inferred author entities
mention_clusters.parquet              mention-to-cluster mapping
summary.json                          high-level run summary
05_stage_metrics_infer_sources.json   diagnostic per-stage runtime and validation metrics
05_go_no_go_infer_sources.json        diagnostic run validation summary

The two disambiguated parquets preserve all input columns and append:

Column              Type        Example
AuthorUID           list[str]   ["ads_run::s.cole::1", "ads_run::c.lacey::0", "ads_run::c.baugh::0"]
AuthorDisplayName   list[str]   ["Cole, Shaun", "Lacey, C. G.", "Baugh, C. M."]

Both columns are parallel lists in the same order as the input Author column. Each UID is stable across runs for the same registry. Each author entity gets exactly one display name: the most frequently occurring form of their name in the data, which may be full or abbreviated depending on the entity. The same UID always carries the same display name string.
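Since AuthorUID and AuthorDisplayName are parallel to the Author list, pairing them per row is a simple zip. A sketch over one example row; the three-part registry::key::index shape is inferred from the example UIDs above:

```python
row = {
    "Author": ["Cole, Shaun", "Lacey, Cedric G.", "Baugh, C. M."],
    "AuthorUID": ["ads_run::s.cole::1", "ads_run::c.lacey::0", "ads_run::c.baugh::0"],
    "AuthorDisplayName": ["Cole, Shaun", "Lacey, C. G.", "Baugh, C. M."],
}

# Pair each input author name with its assigned entity UID and display name.
pairs = list(zip(row["Author"], row["AuthorUID"], row["AuthorDisplayName"]))

# Each UID decomposes as registry::normalized-key::index.
registry, key, index = pairs[0][1].split("::")
```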

Further Details

The fixed model ships inside the package. Repo-only research workflows require user-supplied LSPO raw data from the original source release; both parquet and HDF5 inputs are supported for LSPO preparation and evaluation.

Citation

Cite ads-and as software via CITATION.cff. Cite the original NAND paper if you discuss the underlying method or baseline:

Vicente Amado Olivo, Wolfgang Kerzendorf, Bangjing Lu, Joshua V. Shields, Andreas Flörs, and Nutan Chen (2025). Practical Author Name Disambiguation under Metadata Constraints: A Contrastive Learning Approach for Astronomy Literature. Publications of the Astronomical Society of the Pacific, 137(12), 124503. https://doi.org/10.1088/1538-3873/ae1e2d

and cite LSPO separately if you discuss the benchmark or dataset.

Amado Olivo, V. (2024). LSPO: A Large-Scale Physics ORCiD-Linked Dataset for Author Name Disambiguation (Version 1) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.11489161
