Project description

ScrapeMed

Data Scraping for PubMed Central

Used by Duke University to power medical generative AI research.

⭐ Enables Pythonic, object-oriented access to a massive amount of research data. PMC constitutes over 14% of The Pile.

⭐ Natural language Paper querying and Paper embedding, powered by LangChain and ChromaDB.

Shoutout:

Package sponsored by Daceflow.ai!

Feature List

  • Scraping API for PubMed Central (PMC) ✅
  • Full Advanced Term Search for Papers on PMC ✅
  • Direct Search for Papers by PMCID on PMC ✅ (see the example below)
  • Data Validation ✅
  • Markup Language Cleaning ✅
  • Process PMC XML into Paper objects ✅
  • Dataset building functionality (paperSets) ✅
  • Integration with pandas for easy use in data science applications ✅
  • Semantic paper vectorization with ChromaDB ✅
  • Natural language paper querying ✅
  • paperSet visualization ✅
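
Below is a minimal sketch of the direct-search and querying features above. The scrapemed.paper module path, the from_pmc constructor, and the query method are assumptions inferred from the feature list; check the module docstrings for the exact API. The PMCID is hypothetical.

    from scrapemed.paper import Paper  # module path assumed

    # Fetch and parse a single paper directly by PMCID (hypothetical PMCID shown).
    paper = Paper.from_pmc(pmcid=7067710, email="youremail@example.com")

    # Natural language querying over the vectorized Paper (method name assumed).
    print(paper.query("What was the primary outcome of this study?"))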

Developer Usage

License: MIT

Feel free to fork and continue work on the ScrapeMed package; it is licensed under the MIT License to promote collaboration, extension, and inheritance.

Make sure to create a conda environment and install the necessary requirements before developing this package.

e.g.: $ conda create --name myenv --file requirements.txt

Add a .env file in your base scrapemed directory with PMC_EMAIL=youremail@example.com. This is necessary for several of the test scripts and may be useful for your development in general.
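
For example, with a .env file containing PMC_EMAIL=youremail@example.com, a minimal sketch of reading it (python-dotenv is an assumption here for illustration; scrapemed may load the variable differently):

    import os
    from dotenv import load_dotenv  # python-dotenv, assumed for illustration

    load_dotenv()  # reads .env from the current working directory
    email = os.environ["PMC_EMAIL"]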

You will need clang++ installed for ChromaDB and Paper vectorization to work. You also need Python 3.11 installed and active in your dev environment.

Now an overview of the package structure:

Under examples you can find some example work using the scrapemed modules, which may provide some insight into usage possibilities.

Under examples/data you will find some example downloaded data (XML from PubMed Central). It is recommended that any data you download while working in the notebooks go here. Downloads will also go here by default when passing download=True to the scrapemed module functions that support it.
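
A sketch of the download flag, under the same API assumptions as the example above (the signature and parameter names may differ; see the module docstrings):

    from scrapemed.paper import Paper  # module path assumed

    # Passing download=True persists the retrieved XML (to examples/data by
    # default, per the note above). The PMCID is hypothetical.
    paper = Paper.from_pmc(
        pmcid=7067710,
        email="youremail@example.com",
        download=True,
    )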

Under scrapemed/tests you will find several Python scripts which can be run using pytest. If you also clone .github/workflows/test-scrapemed.yml, these tests will run automatically on any PR or push to your GitHub repo. Under scrapemed/tests/testdata are some XML data crafted for the purpose of testing scrapemed. This data is necessary to run the testing scripts.
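
e.g., from the repository root: $ pytest scrapemed/tests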

Each of the scrapemed python modules has a docstring at the top describing its general purpose and usage. All functions should also have descriptive docstrings and descriptions of input/output. Please contact me if any documentation is unclear. Full documentation is on its way.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scrapemed-1.0.5.tar.gz (48.7 kB)

Uploaded Source

Built Distribution

scrapemed-1.0.5-py3-none-any.whl (52.5 kB)

Uploaded Python 3

File details

Details for the file scrapemed-1.0.5.tar.gz.

File metadata

  • Download URL: scrapemed-1.0.5.tar.gz
  • Upload date:
  • Size: 48.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for scrapemed-1.0.5.tar.gz

  • SHA256: e4439b2c3c600742822a304aada057c3155cc35adbb949619cb4ed5729df2999
  • MD5: 2677fed16acfa7dfc8579c7870d13c5a
  • BLAKE2b-256: e87829294812a7c615d5bfc03f72d50eaa3b7169c35b7f6829959a457140b059

See more details on using hashes here.
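
To verify a download against the digests above, a quick sketch using Python's standard hashlib:

    import hashlib

    # Compute the SHA256 digest of the downloaded sdist and compare it to the
    # published value.
    with open("scrapemed-1.0.5.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    expected = "e4439b2c3c600742822a304aada057c3155cc35adbb949619cb4ed5729df2999"
    print("OK" if digest == expected else "MISMATCH")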

File details

Details for the file scrapemed-1.0.5-py3-none-any.whl.

File metadata

  • Download URL: scrapemed-1.0.5-py3-none-any.whl
  • Upload date:
  • Size: 52.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.18

File hashes

Hashes for scrapemed-1.0.5-py3-none-any.whl

  • SHA256: d6f959274e62987d004c28fe50b37c1d043176f0885abb6c185c86a650ea7ce6
  • MD5: 1e326042ef14d2f2f728c393be4793eb
  • BLAKE2b-256: f7b23183a11323821faaf1e9e37b6f80e1543ec8f183e0b85f213018d4b605f1

See more details on using hashes here.
