
Project description

ScrapeMed

Data Scraping for PubMed Central


Used by Duke University to power medical generative AI research.

⭐ Enables Pythonic, object-oriented access to a massive amount of research data. PMC constitutes over 14% of The Pile.

⭐ Natural language Paper querying and Paper embedding, powered by LangChain and ChromaDB.

Shoutout:

Package sponsored by Daceflow.ai!

Feature List

  • Scraping API for PubMed Central (PMC) ✅
  • Full Advanced Term Search for Papers on PMC ✅
  • Direct Search for Papers by PMCID on PMC ✅
  • Data Validation ✅
  • Markup Language Cleaning ✅
  • Process PMC XML into Paper objects ✅
  • Dataset building functionality (paperSets) ✅
  • Integration with pandas for easy use in data science applications ✅
  • Semantic paper vectorization with ChromaDB ✅
  • Natural language paper querying (see the sketch after this list) ✅
  • paperSet visualization ✅

Developer Usage

License: MIT

Feel free to fork and continue work on the ScrapeMed package; it is licensed under the MIT license to promote collaboration, extension, and inheritance.

Make sure to create a conda environment and install the necessary requirements before developing this package.

e.g.: $ conda create --name myenv --file requirements.txt

Add a .env file in your base scrapemed directory with PMC_EMAIL=youremail@example.com. This is necessary for several of the test scripts and may be useful for your development in general.

You will need to install clang++ for ChromaDB and Paper vectorization to work. You also need Python 3.11 installed and active in your dev environment.

Now an overview of the package structure:

Under examples you can find some example work using the scrapemed modules, which may provide some insight into usage possibilities.

Under examples/data you will find some example downloaded data (XML from PubMed Central). It is recommended that any data you download while working in the notebooks go here. Downloads will also land here by default when passing download=True to the scrapemed module functions that allow it.

Under scrapemed/tests you will find several Python scripts which can be run using pytest. If you also clone .github/workflows/test-scrapemed.yml, these tests will be run automatically on any PR or push to your GitHub repo. Under scrapemed/tests/testdata is some XML data crafted for the purpose of testing scrapemed; this data is necessary to run the testing scripts.

Each of the scrapemed Python modules has a docstring at the top describing its general purpose and usage. All functions should also have descriptive docstrings and descriptions of their inputs and outputs. Please contact me if any documentation is unclear. Full documentation is on its way.

Download files

Download the file for your platform.

Source Distribution

scrapemed-1.0.4.tar.gz (48.7 kB)


Built Distribution

scrapemed-1.0.4-py3-none-any.whl (52.5 kB)


File details

Details for the file scrapemed-1.0.4.tar.gz.

File metadata

  • Download URL: scrapemed-1.0.4.tar.gz
  • Upload date:
  • Size: 48.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for scrapemed-1.0.4.tar.gz

Algorithm     Hash digest
SHA256        a4e2ca918b5cf2faa284c3a66513ab32d15978ca2643d67e3f9185fc415f1a9c
MD5           bbe71b8758f75bf065135b19b6cd6782
BLAKE2b-256   0374ab0446269a517c27def753e45fb3b19bb4bcb62929d63d807ceb23584cdf


File details

Details for the file scrapemed-1.0.4-py3-none-any.whl.

File metadata

  • Download URL: scrapemed-1.0.4-py3-none-any.whl
  • Upload date:
  • Size: 52.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for scrapemed-1.0.4-py3-none-any.whl

Algorithm     Hash digest
SHA256        4433ec7a484b511ec3da0ca1e20aa4353446428c3ebccd37cb887a21353ff4ba
MD5           8cc7b21c0e71f204fa1967c05047b854
BLAKE2b-256   1502b94c27a69e84885ceacbb0d060a7f8e6098657ec92ed4f951d3c551d364c

