
Project description

ScrapeMed

Data Scraping for PubMed Central


Used by Duke University to power medical generative AI research.

⭐ Enables Pythonic, object-oriented access to a massive amount of research data. PMC constitutes over 14% of The Pile.

⭐ Natural-language Paper querying and Paper embedding, powered by LangChain and ChromaDB

⭐ Easy to integrate with pandas for data science workflows

Installation

Available on PyPI! Simply pip install scrapemed.

Feature List

  • Scraping API for PubMed Central (PMC) ✅
  • Data Validation ✅
  • Markup Language Cleaning ✅
  • Processes all PMC XML into Paper objects ✅
  • Dataset building functionality (paperSets) ✅
  • Semantic paper vectorization with ChromaDB
  • Natural language Paper querying ✅
  • Integration with pandas
  • paperSet visualization ✅
  • Direct Search for Papers by PMCID on PMC ✅
  • Advanced Term Search for Papers on PMC ✅
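As a rough sketch of the dataset-building and pandas-integration items above: the snippet below builds relational rows from paper-like objects using only the standard library. PaperLike, PaperSetLike, and this to_relational are hypothetical stand-ins for ScrapeMed's Paper/paperSet API, not its actual classes or signatures.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for ScrapeMed's Paper object (real Papers are
# parsed from PMC XML and carry many more attributes).
@dataclass
class PaperLike:
    pmcid: str
    title: str
    abstract: str

@dataclass
class PaperSetLike:
    papers: list = field(default_factory=list)

    def to_relational(self):
        # One row (dict) per paper -- a shape pandas.DataFrame() accepts directly.
        return [
            {"pmcid": p.pmcid, "title": p.title, "abstract": p.abstract}
            for p in self.papers
        ]

# Placeholder PMCIDs, not real articles.
pset = PaperSetLike([
    PaperLike("PMC0000001", "Example title A", "Abstract A..."),
    PaperLike("PMC0000002", "Example title B", "Abstract B..."),
])
rows = pset.to_relational()
print(rows[0]["pmcid"])  # PMC0000001
```

A list of dicts in this shape can be passed straight to pandas.DataFrame() when pandas is available, which is the general idea behind the pandas integration.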

Introduction

ScrapeMed is designed to make large-scale data science projects relying on PubMed Central (PMC) easy. The raw XML that can be downloaded from PMC is inconsistent and messy, and ScrapeMed aims to solve that problem at scale. ScrapeMed downloads, validates, cleans, and parses data from nearly all PMC articles into Paper objects, which can then be used to build datasets (paperSets) or investigated in detail for literature reviews.
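The download → validate → clean → parse pipeline described above can be sketched in miniature with the standard library. Note the hedges: the XML below is a heavily simplified stand-in for real PMC JATS markup, and clean_and_parse is a hypothetical helper written for illustration, not ScrapeMed's actual API.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the messy JATS XML that PMC serves
# (real articles are far larger and far less consistent).
RAW_XML = """
<article>
  <front><article-title>  An   Example   Paper </article-title></front>
  <body><sec><p>Results   were significant.</p></sec></body>
</article>
"""

def clean_and_parse(raw: str) -> dict:
    # Parse the markup, then normalize whitespace in every text node --
    # a tiny analogue of ScrapeMed's markup-language cleaning step.
    root = ET.fromstring(raw)
    title = " ".join(root.findtext("front/article-title", "").split())
    paragraphs = [" ".join("".join(p.itertext()).split()) for p in root.iter("p")]
    return {"title": title, "body": paragraphs}

paper = clean_and_parse(RAW_XML)
print(paper["title"])  # An Example Paper
```

The real library layers validation and reference handling on top of parsing like this; the sketch only shows why raw PMC XML needs a cleaning pass at all.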

Beyond the heavy lifting ScrapeMed performs behind the scenes to standardize data scraped from PMC, a number of features are included to make data science and literature-review work easier. A few are listed below:

  • Papers can be queried with natural language [.query()], or simply chunked and embedded for storage in a vector DB [.vectorize()]. Papers can also be easily converted to pandas Series [.to_relational()] for data science workflows.

  • paperSets can be visualized [.visualize()], or converted to pandas DataFrames [.to_relational()]. paperSets can be generated not only via a list of PMCIDs, but also via a search term using PMC advanced search [.from_search()].

  • Useful for advanced users: TextSections and TextParagraphs found within the .abstract and .body attributes of Paper objects contain not only text [.text], but also text with attached reference data [.text_with_refs]. Reference data includes tables, figures, and citations. These are processed into DataFrames and data dicts and can be found within the .ref_map attribute of a Paper object. Simply decode references based on their MHTML index: an MHTML tag of "MHTML::dataref::14" found in a TextSection of paper p corresponds to the table, figure, or citation at p.ref_map[14].
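The reference-decoding convention above can be mimicked in a few lines of stdlib Python. The text_with_refs string and ref_map list here are mock data, and decode_refs is a hypothetical helper illustrating the "MHTML::dataref::N" → ref_map[N] lookup, not ScrapeMed's implementation.

```python
import re

# Mock text-with-references content and ref_map (real Papers populate
# these from parsed PMC XML; entries may be tables, figures, or citations).
text_with_refs = "Accuracy improved MHTML::dataref::0 over baseline MHTML::dataref::1."
ref_map = [
    {"type": "table", "caption": "Model accuracy by split"},
    {"type": "citation", "text": "Smith et al., 2020"},
]

def decode_refs(text: str, refs: list) -> list:
    # Pull each MHTML index out of the text and look it up in ref_map.
    return [refs[int(i)] for i in re.findall(r"MHTML::dataref::(\d+)", text)]

decoded = decode_refs(text_with_refs, ref_map)
print(decoded[0]["type"])  # table
```

The key design point is that inline markers stay lightweight in the text while the heavyweight table/figure/citation data lives once in the ref_map.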

Documentation

The docs are hosted on Read The Docs!

Sponsorship

Package sponsored by Daceflow.ai!

If you'd like to sponsor a feature or donate to the project, reach out to me at danielfrees@g.ucla.edu.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scrapemed-1.0.8.tar.gz (46.2 kB)

Uploaded Source

Built Distribution

scrapemed-1.0.8-py3-none-any.whl (47.7 kB)

Uploaded Python 3

File details

Details for the file scrapemed-1.0.8.tar.gz.

File metadata

  • Download URL: scrapemed-1.0.8.tar.gz
  • Upload date:
  • Size: 46.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for scrapemed-1.0.8.tar.gz

  • SHA256: 3cb0efa0f3538e74096656c5476ee5fc41c8bec7558614be4c900923944b8808
  • MD5: ee1118b55827e63bf713350c8841b7c2
  • BLAKE2b-256: 1c695adc83517c4c719dbd2a83f853772c1eef5c549c0082a042d95c0c1bb7c6

See more details on using hashes here.

File details

Details for the file scrapemed-1.0.8-py3-none-any.whl.

File metadata

  • Download URL: scrapemed-1.0.8-py3-none-any.whl
  • Upload date:
  • Size: 47.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for scrapemed-1.0.8-py3-none-any.whl

  • SHA256: 76be3ed3b9b952fad52e2632dc614d5dad1bd49f9b33e05700b01bc9c0304a5b
  • MD5: 73cb9f1427781365611b15c1de05b35d
  • BLAKE2b-256: ddf4e2e92b7f59449f6057c337da769af1f9133b9a53a703b73b3fcfce0896a5

See more details on using hashes here.
