
AI-powered author disambiguation and works search agents for OpenAlex


Author Disambiguation Agent

Production-ready AI agent for disambiguating life sciences researchers and finding their OpenAlex author IDs and work IDs.

Current Version: 2.4.0 (Works Search Agent & Enhanced MCP Tools)

Features

  • Author Disambiguation: Find researchers' OpenAlex profiles using ORCID, name, and institution
  • Works Search Agent ✨ NEW: Find academic papers and extract author information from works
  • Email Discovery (Optional): Find current email addresses from institutional directories and publications
  • Claude Skills: Modular knowledge system with 6 expert skills for strategy and formatting
  • OpenAlex MCP Tools: 9 specialized tools for searching authors, works, and publications
  • Web Search & Fetch: Access institutional pages, academic profiles, and publication PDFs
  • Multi-source verification: OpenAlex, people.embo.org, institutional directories, ORCID
  • Structured outputs: JSON schema enforcement for both author and work results
  • Embedded MCP Pattern: Direct async tool calls without stdio overhead
  • Benchmark Infrastructure: Comprehensive evaluation framework with 11,332+ high-confidence ground truth matches

Architecture

┌─────────────────────────────────────────────────────────┐
│           Production Agent (Claude API)                 │
│                                                         │
│  ┌──────────────────────────────────────────────────┐   │
│  │  Async Agent Loop                                │   │
│  └────┬──────────────────────────┬──────────────────┘   │
│       │                          │                      │
│       ├──► web_search            ├──► OpenAlex MCP      │
│       │    (Native Tool)         │    (Embedded)        │
│       │                          │                      │
│       │                          └─────┬─────────────┐  │
│       │                                │             │  │
│       │                                │             │  │
│       │                          ┌─────▼─────────┐   │  │
│       │                          │ OpenAlexTools │   │  │
│       │                          │   (pyalex)    │   │  │
│       │                          └───────────────┘   │  │
└───────┴─────────────────────────────────────────────────┘

MCP Tools Available (9 total):
  Author Tools:
  • search_authors_by_name                  - Domain-aware name search
  • search_authors_by_orcid                 - ORCID lookup (most reliable)
  • search_authors_by_name_and_institution  - Filtered search including institution
  • get_author_details                      - Complete author profile
  • get_author_recent_works                 - Recent publications
  
  Works Tools (NEW):
  • search_works_by_title                   - Find papers by title
  • search_works_by_doi                     - Get work by DOI (most reliable)
  • search_works_by_title_and_author        - Combined title + author search
  • get_work_details                        - Complete work information

Claude Skills

The agent uses a modular knowledge system with expert skills located in src/.claude/skills/:

Available Skills

  1. author-disambiguation-strategy

    • Decision-making logic for finding and ranking author candidates
    • Search strategies (ORCID → name → name+institution)
    • Evidence evaluation and confidence assessment
    • Ranking algorithm with scoring system
    • EMBO membership considerations
  2. email-finder-strategy

    • Priority-ordered email search strategies
    • Institutional directory search patterns
    • Publication PDF extraction methods
    • Critical: Uses LAST (most recent) affiliation
    • Critical: Searches most recent RESEARCH ARTICLE where author is LAST author
  3. output-schema-formatter

    • Complete JSON schema specification
    • Field requirements and validation rules
    • Examples for all status types (success, ambiguous, not_found, error)
    • Email field documentation
  4. openalex-expert

    • OpenAlex API best practices
    • Query patterns and common gotchas
    • Domain classification knowledge
  5. works-search-strategy ✨ NEW

    • Search techniques for finding academic papers
    • Title normalization and matching rules
    • Author validation in work authorships
    • DOI fallback strategies
    • Quality assurance and confidence assessment
  6. verification-expert

    • Evidence quality evaluation
    • Confidence scoring frameworks
    • ORCID trust hierarchy

Benefits of Skills

  • Reduced prompt size: Knowledge is cached separately from workflow
  • Better caching: Skills are loaded once and reused
  • Modular updates: Update strategies without changing main prompt
  • Cost efficiency: ~30% reduction in API costs through prompt caching

Installation

Quick Start

Install from PyPI:

pip install author-disambiguation

As a Dependency in Another Project

Add to your pyproject.toml:

[project]
dependencies = [
    "author-disambiguation>=2.4.0",
]

Or requirements.txt:

author-disambiguation>=2.4.0

Install from GitHub (Development Version)

pip install git+https://github.com/source-data/claude-authors.git

For Development

git clone https://github.com/source-data/claude-authors.git
cd claude-authors
pip install -e .

With Optional Dependencies

# For running benchmarks
pip install "author-disambiguation[benchmarks]"

# For development and testing
pip install "author-disambiguation[dev]"

# Install everything
pip install "author-disambiguation[all]"

📖 See INSTALLATION.md for the detailed installation guide

Configuration

Environment Variables

Create a .env file or export variables:

ANTHROPIC_API_KEY=your-anthropic-api-key
OPENALEX_API_KEY=your-email@domain.org

Note: OPENALEX_API_KEY should be your email address for OpenAlex "polite pool" access (10 req/sec).

Usage

Programmatic Usage (Python API)

Import and use the disambiguation agent in your code:

import asyncio
from src import disambiguate_author

async def main():
    result = await disambiguate_author(
        first_name="Marie",
        last_name="Curie",
        institution="University of Paris"
    )
    
    if result['status'] == 'success':
        author = result['author_candidates'][0]['author']
        print(f"OpenAlex ID: {author['openalex_id']}")
        print(f"Name: {author['name']}")
        print(f"Institution: {author['institution']}")

asyncio.run(main())

📖 See API.md for complete API documentation and examples

CLI Usage

# Basic usage
author-disambiguate --first-name "Marie" --last-name "Curie"

# With ORCID (most reliable)
author-disambiguate --name "Albert Einstein" --orcid "0000-0001-2345-6789"

# With institution
author-disambiguate --first-name "John" --last-name "Smith" --institution "MIT"

# With research context
author-disambiguate --name "Jane Doe" --context "machine learning, AI, neural networks"

Works Search (NEW in v2.4.0)

Find academic papers and extract author information from works:

Python API

import asyncio
from src import search_work

async def main():
    result = await search_work(
        title="The state of OA: a large-scale analysis",
        author_last_name="Priem",
        year=2018
    )
    
    if result['status'] == 'success':
        work = result['work']
        print(f"Work ID: {work['openalex_id']}")
        print(f"DOI: {work['doi']}")
        print(f"Title: {work['title']}")
        print(f"Author OpenAlex ID: {work['author_openalex_id']}")
        print(f"Author ORCID: {work['author_orcid']}")
        print(f"Author found: {work['author_found_in_work']}")
        print(f"Confidence: {work['match_confidence']}")  # 'direct' or 'agent'

asyncio.run(main())

CLI Usage

# Search by title only
author-work-search --title "The state of OA"

# Search with author validation
author-work-search --title "The state of OA" --author-last "Priem"

# Search with year
author-work-search --title "The state of OA" --author-last "Priem" --year 2018

How It Works

  1. Direct PyAlex Search (fast, reliable):

    • Searches OpenAlex by title
    • Strict validation: normalized title must match
    • If author specified: validates author is in authorships list
    • Returns work ID, DOI, title, author OpenAlex ID, and ORCID
  2. Agent Fallback (when direct search fails):

    • Uses Claude with full OpenAlex MCP toolset
    • Can handle fuzzy matches, alternate titles, subtitle variations
    • Falls back to web search for DOI if needed
    • More flexible but slower
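The strict validation in step 1 can be sketched as follows (hypothetical helper names; the package's actual normalization and matching rules may differ):

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    cleaned = re.sub(r"[^\w\s]", " ", title.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def titles_match(query: str, candidate: str) -> bool:
    """Strict match: the normalized query must equal the candidate, or be a
    prefix of it (tolerating a subtitle appended after a colon)."""
    q, c = normalize_title(query), normalize_title(candidate)
    return q == c or c.startswith(q + " ")

def author_in_authorships(last_name: str, authorships: list[dict]) -> bool:
    """Verify the requested author appears in a work's authorships list."""
    target = last_name.lower()
    return any(
        target in a.get("author", {}).get("display_name", "").lower()
        for a in authorships
    )
```

The prefix rule is what lets a query like "The state of OA" accept the full indexed title with its subtitle, while still rejecting unrelated works.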

Returns structured JSON:

{
  "status": "success",
  "work": {
    "openalex_id": "https://openalex.org/W2741809807",
    "doi": "https://doi.org/10.7717/peerj.4375",
    "title": "The state of OA: a large-scale analysis...",
    "author_openalex_id": "https://openalex.org/A5023888391",
    "author_orcid": "https://orcid.org/0000-0001-6187-6610",
    "author_found_in_work": true,
    "match_confidence": "direct"
  }
}

Benchmark Data Preparation

To generate benchmark data from the EMBO candidates Excel file, use the included script to add OpenAlex author IDs:

# Install additional dependencies (if not already installed)
pip install -r requirements.txt

# Run the script to add author IDs
python scripts/add_author_ids.py

This script will:

  1. Read data/embo_membership_candidates_with_work_ids.xlsx
  2. Query OpenAlex API for each work ID to extract author information
  3. Match the specific EMBO candidate author by name from the work's author list
  4. Add columns for the matched author: OpenAlex ID, name, ORCID, institutions, position, corresponding author status, and match confidence score
  5. Save results to data/embo_membership_candidates_with_author_ids.xlsx

The script includes:

  • Fuzzy name matching with confidence scores
  • Progress bars and caching to avoid duplicate API calls
  • Error handling for missing or invalid work IDs
  • Match quality metrics (89.9% success rate, 11,332 high-confidence matches)
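A minimal sketch of fuzzy name matching with a confidence score, using Python's standard difflib (the script's actual scoring may differ):

```python
from difflib import SequenceMatcher

def name_match_score(candidate: str, target: str) -> float:
    """Case-insensitive similarity in [0, 1] between two author names."""
    return SequenceMatcher(None, candidate.lower(), target.lower()).ratio()

def best_author_match(target: str, authors: list[str], min_score: float = 0.8):
    """Return (name, score) for the best-matching author in a work's author
    list, or (None, 0.0) when nothing clears the confidence threshold."""
    scored = [(name, name_match_score(name, target)) for name in authors]
    best = max(scored, key=lambda pair: pair[1], default=(None, 0.0))
    return best if best[1] >= min_score else (None, 0.0)
```

The 0.8 threshold mirrors the "match score ≥ 0.8" cutoff used for ground truth in the benchmark below.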

Benchmark Evaluation

To evaluate the agent's performance on known EMBO candidates:

# Run benchmark with 10 samples
python scripts/run_benchmark.py -n 10

# Run with 100 samples
python scripts/run_benchmark.py -n 100

# Use all high-confidence matches (11,000+ authors)
python scripts/run_benchmark.py -n 11332

# Use stricter ground truth (perfect matches only)
python scripts/run_benchmark.py -n 100 --min-score 1.0

The benchmark:

  • Uses only high-confidence matches (match score ≥ 0.8) as ground truth
  • Provides up to 6 papers per author as context
  • Tests accuracy at ranks 1-5 (Top-1, Top-2, etc.)
  • Note: Email finding is NOT benchmarked (no ground truth emails available)
  • Generates detailed JSON report with:
    • Overall accuracy metrics
    • Individual test results
    • Analysis of failures
    • Performance statistics

Output is saved to output/benchmark_TIMESTAMP.json
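The rank 1-5 accuracy metrics amount to a top-k check over each test case's ranked candidates, roughly like this (illustrative field names, not the report's exact schema):

```python
def top_k_accuracy(results: list[dict], k: int) -> float:
    """Fraction of test cases whose ground-truth OpenAlex ID appears among
    the top-k ranked candidates returned by the agent."""
    if not results:
        return 0.0
    hits = sum(
        1
        for r in results
        if r["truth_id"] in (c["openalex_id"] for c in r["candidates"][:k])
    )
    return hits / len(results)
```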

Context-Level Benchmarks

To understand how much context improves disambiguation:

# Run benchmarks with different context levels (50 samples each)
python scripts/run_context_benchmarks.py --n-samples 50

This runs two benchmarks:

  1. No Context: Only name + affiliation (minimal information)
  2. Light Context: Name + affiliation + abstract excerpt (moderate information)

Results show the impact of additional context on accuracy. See BENCHMARK_GUIDE.md for details.

Basic Usage

# Using first and last name
python src/production_agent.py --first-name "Jerry" --last-name "Adams"

# Using full name
python src/production_agent.py --name "Jerry M. Adams"

With Institution

python src/production_agent.py --first-name "Jerry" --last-name "Adams" --institution "WEHI"

# Or with multiple affiliations
python src/production_agent.py --name "John Smith" --affiliation "Harvard" --affiliation "MIT"

With ORCID

python src/production_agent.py --first-name "Konrad" --last-name "Beyreuther" --orcid "0000-0002-3317-3069"

With Email Search

python src/production_agent.py --first-name "Yves" --last-name "Barde" --orcid "0000-0002-7627-461X" --find-email

Email Search Strategy:

  • Priority 1: Institutional directories (most reliable)
  • Priority 2: Personal/Lab websites
  • Priority 3: ORCID profiles
  • Priority 4: Google Scholar
  • Priority 5: ResearchGate/LinkedIn
  • Priority 6 (Last Resort): Extract from most recent research article as last author

Critical Requirements for Email Search:

  • Uses author's LAST (most recent) affiliation from OpenAlex
  • For publication fallback: Must be LAST AUTHOR in most recent RESEARCH ARTICLE
  • Not reviews, editorials, or other publication types
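The publication-fallback requirements can be sketched as a filter over an author's OpenAlex works (a simplification with assumed field names; it relies on OpenAlex tagging reviews and editorials with a `type` other than "article"):

```python
def pick_email_fallback_work(works: list[dict], author_id: str):
    """Return the most recent research article in which the given author is
    listed last, or None if no work qualifies."""
    candidates = [
        w for w in works
        if w.get("type") == "article"              # excludes reviews, editorials
        and w.get("authorships")
        and w["authorships"][-1].get("author", {}).get("id") == author_id
    ]
    return max(candidates, key=lambda w: w.get("publication_year", 0), default=None)
```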

With Context (Publications, Keywords, Topics)

# Provide research context (publications, keywords, research topics)
python src/production_agent.py --name "Researcher Name" --context "Known publications: Title 1, Title 2"

# Or with affiliation and context
python src/production_agent.py --first-name "John" --last-name "Smith" --affiliation "MIT" --context "machine learning, neural networks"

Output Format

The agent returns structured JSON with enforced schema validation. All responses use a unified schema regardless of status.

Key Features

  • Unified structure: Same schema for all status types (success, ambiguous, not_found, error)
  • Always uses author_candidates array: Even success cases return a single-element array
  • No confidence scores: Evidence and concerns provide better assessment than arbitrary confidence labels
  • Ranked results: Candidates ordered by evidence strength (rank 1 = strongest match)

Output Schema

{
  // Required fields (always present)
  "status": "success" | "ambiguous" | "not_found" | "error",

  "author_candidates": [{
    "rank": number,  // 1 = strongest match
    "author": {
      "openalex_id": string,
      "openalex_url": string,
      "name": string,
      "orcid": string | null,
      "institution": string | null,
      "works_count": number,
      "cited_by_count": number
    },
    "evidence": string[],  // Supporting evidence for this match
    "concerns": string[]    // Red flags or uncertainties (optional)
  }],  // Empty array for not_found/error

  "search_summary": {
    "embo_found": boolean,
    "orcid_source": string,
    "candidates_evaluated": number,
    "disambiguation_needed": boolean
  },

  "comments": string,  // Detailed process reasoning

  // Optional fields (context-dependent)
  "message"?: string,           // For error/not_found/ambiguous cases
  "error"?: string,             // Error message if status is "error"
  "possible_reasons"?: string[],  // For not_found cases
  "recommendation"?: string,    // Suggested next steps

  // Metadata (added by agent)
  "_metadata": {
    "iterations": number,
    "stats": {
      "input_tokens": number,
      "output_tokens": number,
      "web_searches": number,
      "openalex_calls": number
    },
    "researcher_name": string
  }
}
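A minimal consumer-side validator for this schema might look like the following (the package ships its own validation in src/schemas/; this sketch checks only a few of the invariants above):

```python
REQUIRED_FIELDS = {"status", "author_candidates", "search_summary", "comments"}
VALID_STATUSES = {"success", "ambiguous", "not_found", "error"}

def validate_result(result: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the result
    passes these basic checks."""
    problems = []
    missing = REQUIRED_FIELDS - result.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if result.get("status") not in VALID_STATUSES:
        problems.append(f"invalid status: {result.get('status')!r}")
    if result.get("status") in {"not_found", "error"} and result.get("author_candidates"):
        problems.append("author_candidates must be empty for not_found/error")
    return problems
```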

Success Case Example

Single unambiguous match (notice author_candidates is an array with one element):

{
  "status": "success",
  "author_candidates": [
    {
      "rank": 1,
      "author": {
        "openalex_id": "A5074091984",
        "openalex_url": "https://openalex.org/A5074091984",
        "name": "Yves-Alain Barde",
        "orcid": "0000-0002-7627-461X",
        "institution": "Cardiff University",
        "works_count": 177,
        "cited_by_count": 36058
      },
      "evidence": [
        "ORCID exact match (0000-0002-7627-461X)",
        "Recent publications on brain-derived neurotrophic factor",
        "Current affiliation: Cardiff University (2015-2025)",
        "Previous affiliations: Max Planck Society, University of Basel",
        "High impact researcher: h-index 84, 36,058 citations"
      ],
      "concerns": []
    }
  ],
  "search_summary": {
    "embo_found": false,
    "orcid_source": "User provided",
    "candidates_evaluated": 1,
    "disambiguation_needed": false
  },
  "comments": "Direct ORCID search returned single unambiguous result.",
  "_metadata": {
    "iterations": 3,
    "stats": {
      "input_tokens": 40957,
      "output_tokens": 752,
      "web_searches": 0,
      "openalex_calls": 2
    },
    "researcher_name": "Yves Barde"
  }
}

Ambiguous Case Example

Multiple candidates ranked by evidence strength:

{
  "status": "ambiguous",
  "author_candidates": [
    {
      "rank": 1,
      "author": {
        "openalex_id": "A123456",
        "openalex_url": "https://openalex.org/A123456",
        "name": "John Smith",
        "orcid": null,
        "institution": "MIT",
        "works_count": 50,
        "cited_by_count": 1000
      },
      "evidence": [
        "Institution match (MIT)",
        "1 publication match",
        "Research domain alignment (life sciences)"
      ],
      "concerns": [
        "Timeline slightly inconsistent",
        "No ORCID available"
      ]
    },
    {
      "rank": 2,
      "author": {
        "openalex_id": "A789012",
        "openalex_url": "https://openalex.org/A789012",
        "name": "J. Smith",
        "orcid": null,
        "institution": "Stanford",
        "works_count": 30,
        "cited_by_count": 500
      },
      "evidence": [
        "Name match",
        "Field proximity (biology)"
      ],
      "concerns": [
        "Institution mismatch (Stanford vs MIT)",
        "No publication matches"
      ]
    }
  ],
  "search_summary": {
    "embo_found": false,
    "orcid_source": "Unknown",
    "candidates_evaluated": 5,
    "disambiguation_needed": true
  },
  "message": "Multiple plausible candidates found. Ranked by evidence strength.",
  "recommendation": "Provide known publication titles or ORCID for disambiguation",
  "comments": "Multiple candidates with similar names in related fields",
  "_metadata": {
    "iterations": 8,
    "stats": {
      "input_tokens": 52000,
      "output_tokens": 950,
      "web_searches": 2,
      "openalex_calls": 6
    },
    "researcher_name": "John Smith"
  }
}

Not Found Case Example

{
  "status": "not_found",
  "author_candidates": [],
  "search_summary": {
    "embo_found": false,
    "orcid_source": "Unknown",
    "candidates_evaluated": 0,
    "disambiguation_needed": false
  },
  "message": "No matching author profile found in OpenAlex",
  "possible_reasons": [
    "Researcher not yet indexed in OpenAlex",
    "Name variation not captured",
    "Very early career (no publications)"
  ],
  "recommendation": "Verify researcher name spelling and try with publication titles",
  "comments": "Exhaustive search across all sources returned no matches...",
  "_metadata": {
    "iterations": 12,
    "stats": {
      "input_tokens": 68000,
      "output_tokens": 450,
      "web_searches": 4,
      "openalex_calls": 3
    },
    "researcher_name": "Unknown Researcher"
  }
}

Workflow

  1. Step 0: Quick OpenAlex check with available info (ORCID, name+institution, or name only)
  2. Step 1: EMBO member directory search (if found, use curated ORCID → skip to Step 5)
  3. Step 2: General web search for institutional profiles and ORCID
  4. Step 3: PubMed search for publications and affiliations
  5. Step 4: Build comprehensive researcher profile
  6. Step 5: OpenAlex verification with MCP tools
  7. Step 6: Return JSON-only results with ranked candidates and supporting evidence

MCP Server Tools

All search tools now include domain awareness and return the primary research domain for each author.

search_authors_by_name(name, per_page=200, preferred_domain=None)

Basic name search returning up to 200 results. Optional preferred_domain parameter ranks results by domain relevance.

  • Domains: "life_sciences", "health_sciences", "physical_sciences", "social_sciences"
  • Returns: Author profiles with primary_domain field and top_concepts

search_authors_by_orcid(orcid)

Most reliable search method when ORCID is available. Returns single author profile with domain information.

search_authors_by_name_and_institution(name, institution_name, per_page=200, preferred_domain=None)

Two-step filtered search: finds institution ID first, then searches authors affiliated with that institution. Optional preferred_domain parameter ranks results by domain relevance.

get_author_details(openalex_author_id)

Complete author profile including affiliations, research topics, h-index, publication counts by year.

get_author_recent_works(openalex_author_id, per_page=10)

Recent publications for identity verification, including journal, DOI, citations, and author affiliations.
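The `preferred_domain` re-ranking used by the search tools can be approximated with a stable sort (a sketch; the server's actual relevance ranking may be more nuanced):

```python
def rank_by_domain(authors: list[dict], preferred_domain=None) -> list[dict]:
    """Stable re-ranking: authors whose primary_domain matches the preferred
    domain move to the front; everyone else keeps their original order."""
    if preferred_domain is None:
        return list(authors)
    return sorted(
        authors,
        key=lambda a: a.get("primary_domain") != preferred_domain,  # False sorts first
    )
```

Because Python's sort is stable, OpenAlex's own relevance ordering is preserved within each group.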

Domain Classification

The MCP server automatically determines the primary research domain for each author based on their publication concepts:

  • Life Sciences: Biology, genetics, molecular biology, neuroscience, microbiology, ecology, immunology
  • Health Sciences: Medicine, clinical research, pharmacology, epidemiology, public health, oncology
  • Physical Sciences: Physics, chemistry, astronomy, materials science, quantum mechanics
  • Social Sciences: Economics, sociology, psychology, political science, education, linguistics

Domain ranking helps disambiguate authors with common names by prioritizing candidates in the expected research field.
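One plausible way to derive a primary domain from an author's top concepts, with keyword sets drawn from the lists above (the server's actual classifier may weight concepts differently):

```python
DOMAIN_KEYWORDS = {
    "life_sciences": {"biology", "genetics", "molecular biology", "neuroscience",
                      "microbiology", "ecology", "immunology"},
    "health_sciences": {"medicine", "pharmacology", "epidemiology",
                        "public health", "oncology"},
    "physical_sciences": {"physics", "chemistry", "astronomy",
                          "materials science", "quantum mechanics"},
    "social_sciences": {"economics", "sociology", "psychology",
                        "political science", "education", "linguistics"},
}

def classify_domain(concepts: list[str]):
    """Assign the domain whose keyword set overlaps most with the author's
    top concepts; None when no keyword matches at all."""
    lowered = {c.lower() for c in concepts}
    scores = {domain: len(keywords & lowered)
              for domain, keywords in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```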

Tests

Comprehensive test suite with 24 tests covering all OpenAlex tools functionality.

Running Tests

# Run all unit tests (excluding integration tests - fast, no API calls needed)
pytest tests/ -m "not integration"

# Run all tests including integration tests (requires OPENALEX_API_KEY)
pytest tests/

# Run with verbose output
pytest tests/ -v

# Run specific test file
pytest tests/test_openalex_tools.py -v

Project Structure

src/
├── production_agent.py              # Main async agent (MCP-enabled)
├── openalex_mcp/                    # MCP Server Module
│   ├── __init__.py
│   └── openalex_server.py           # FastMCP server (can run standalone)
├── schemas/
│   ├── __init__.py
│   └── disambiguation_result.py     # Unified output schema + validation
├── prompts/
│   ├── __init__.py
│   └── simplified_system_prompt_with_skills.py  

tests/
├── __init__.py
├── test_openalex_tools.py           # OpenAlex tools unit tests (22 tests)
├── test_disambiguation_schema.py    # Schema validation tests (23 tests)
└── test_mcp_server.py               # MCP server tests (12 tests)

scripts/
├── README.md                        # Scripts documentation
├── add_author_ids.py                # Extract OpenAlex author IDs from work IDs
├── clean_embo_members.py            # Clean EMBO members dataset
├── get_all_embo_members_openalex_ids.py  # Process all EMBO members
├── get_embo_members_openalex_ids.py      # Earlier version (tiered strategy)
├── run_benchmark.py                 # Main benchmark evaluation
└── run_context_benchmarks.py        # Context-level benchmarks

# Documentation
BENCHMARK_GUIDE.md                   # Complete benchmark documentation
EMBO_MEMBERS_GUIDE.md                # EMBO members processing guide
data/                                 # Ground truth data (Excel files excluded from git)
output/                               # Benchmark results (excluded from git)

MCP Architecture Details

This project implements the Embedded MCP Pattern, recommended by Anthropic for connecting agents to data sources like APIs and databases.

Why MCP?

  1. Clean Separation: MCP module (src/openalex_mcp/) separates data access from agent logic
  2. Testable: MCP tools can be tested independently of the agent
  3. Reusable: Tools can be extracted to standalone MCP server if needed
  4. Maintainable: Clear boundaries between concerns
  5. Performance: Direct async function calls (no stdio overhead)

Architecture Layers

Layer 1: Production Agent (src/production_agent.py)
  ↓ imports and calls
Layer 2: MCP Core Tools (src/openalex_mcp/core_tools.py)
  ↓ wraps with async
Layer 3: OpenAlex Tools (src/tools/openalex_tools.py)
  ↓ uses
Layer 4: PyAlex Library (pip package)
  ↓ calls
Layer 5: OpenAlex REST API
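The embedded pattern boils down to in-process async dispatch; the sketch below uses hypothetical names (`EmbeddedToolRegistry` is not part of the package) to show why no stdio transport is needed:

```python
import asyncio

class EmbeddedToolRegistry:
    """Tools are plain async functions called in-process; there is no stdio
    transport or subprocess between the agent loop and the 'server'."""

    def __init__(self):
        self._tools = {}

    def register(self, name):
        def decorator(fn):
            self._tools[name] = fn
            return fn
        return decorator

    async def call(self, name, **kwargs):
        # The agent loop dispatches the model's tool_use requests here directly.
        return await self._tools[name](**kwargs)

registry = EmbeddedToolRegistry()

@registry.register("search_authors_by_orcid")
async def search_authors_by_orcid(orcid: str) -> dict:
    # A real tool would call through to pyalex / the OpenAlex REST API here.
    return {"orcid": orcid, "results": []}

result = asyncio.run(registry.call("search_authors_by_orcid", orcid="0000-0001-2345-6789"))
```

Extracting the registry into a standalone MCP server would change only the transport, not the tool functions themselves.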

Changelog

Version 2.4.0 (2026-01-23) - Works Search Agent & Enhanced MCP

New Feature: Works Search Agent

  • Added search_work() function for finding academic papers
  • Hybrid approach: Direct PyAlex search + AI agent fallback
  • Strict validation: title matching & author verification
  • Returns work ID, DOI, title, author OpenAlex ID & ORCID
  • CLI entry point: author-work-search
  • 28/28 passing unit tests

Enhanced OpenAlex MCP Server:

  • Added 4 new works search tools:
    • search_works_by_title: Find papers by title
    • search_works_by_doi: Get work by DOI (most reliable)
    • search_works_by_title_and_author: Combined title + author search
    • get_work_details: Complete work information with full authorships
  • Total: 9 MCP tools (5 author + 4 works)

New Claude Skill:

  • works-search-strategy: Search techniques, validation rules, and quality assurance

Schema & Testing:

  • Added WORKS_SEARCH_SCHEMA for structured work outputs
  • Comprehensive test suite in tests/test_works_search_agent.py
  • Tests for normalization, matching, author finding, and schema validation

Documentation:

  • Updated README with works search examples
  • Added Python API and CLI usage for works search
  • Updated all tool counts and feature lists

Version 2.3.0 (2026-01-23) - Production-Ready Package

Major Release: Converted to pip-installable Python package

Package Structure:

  • Added pyproject.toml for modern Python packaging
  • Created LICENSE (MIT), MANIFEST.in, and package metadata
  • Updated src/__init__.py to expose disambiguate_author function
  • Package is now installable via pip from GitHub
  • Can be used as a dependency in external projects

Documentation:

  • Added API.md: Comprehensive API documentation with usage examples
  • Added INSTALLATION.md: Detailed installation guide
  • Updated README with package installation and programmatic usage

CLI Entry Points:

  • author-disambiguate: Main CLI for author disambiguation
  • author-benchmark: Run benchmark evaluations
  • author-context-benchmark: Run context-level benchmarks

Scripts Organization:

  • Moved all scripts to scripts/ folder with dedicated README
  • Added context-level benchmarks (run_context_benchmarks.py)
  • Added EMBO members processing (get_all_embo_members_openalex_ids.py)
  • Added data cleaning utility (clean_embo_members.py)

Extensibility:

  • Designed for future additional agents (e.g., works retrieval)
  • Clean API for programmatic usage from external modules
  • Production-ready for integration into other projects

Version 2.2.0 (2026-01-08) - Benchmark Infrastructure

Added comprehensive benchmark evaluation framework:

  • add_author_ids.py: Script to extract OpenAlex author IDs from work IDs with fuzzy name matching
  • run_benchmark.py: Automated benchmark evaluation with Top-1 through Top-5 accuracy metrics
  • BENCHMARK_GUIDE.md: Complete benchmark documentation
  • Ground truth data: 11,332+ high-confidence author matches from EMBO candidates
  • Backward-compatible schema handling for evaluation

Version 2.1.0 (2025-11-27) - MCP Architecture

Removed the standalone tool layer, fully integrating it into the MCP module

Version 2.0.0 (2025-11-27) - MCP Architecture

Major architectural refactoring to implement the embedded MCP pattern:

  • Embedded MCP architecture using FastMCP
  • Added Skills for OpenAlex expertise and for candidate analysis and evaluation

Benefits:

  • Clean separation between agent logic and data access
  • Testable tool layer with independent test suite
  • Can extract to standalone MCP server if needed
  • Better performance with async execution
  • Follows Anthropic's MCP best practices

Version 1.0.0 (2025-11-26) - Direct Tools Integration

Initial production release with direct tool integration.

Known Issues

  1. System Prompt Size: Large prompt (includes full OpenAlex API guide) causes high token usage
    • ~40K input tokens per request
    • Consider extracting guide to separate documentation or using prompt caching
  2. Domain Classification Accuracy: Occasional misclassification of research domains
    • Some life sciences researchers classified as social sciences
    • Evidence from publications usually clarifies the correct domain
