
Scrape biological data from websites



Web scrapers for interacting with remote biological databases programmatically from Python, with a local sqlite3 cache of fetched data to prevent excessive web traffic.

So far, implemented:

  • Uniprot, by Uniprot protein ID (e.g. `Q8BP71`)
  • PubMed, by PMID (e.g. `24213538`)
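The sqlite3 caching approach described above can be sketched with the standard library alone. The names below (`WebCache`, `fetch_fn`) are illustrative, not the package's actual internals:

```python
import sqlite3

class WebCache(object):
    """Sketch of a sqlite3-backed cache for fetched web data.
    'fetch_fn' stands in for the real HTTP request."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)"
        )

    def get(self, key, fetch_fn):
        row = self.db.execute(
            "SELECT value FROM cache WHERE key = ?", (key,)
        ).fetchone()
        if row is not None:       # cache hit: no web traffic at all
            return row[0]
        value = fetch_fn(key)     # cache miss: hit the remote database once
        self.db.execute(
            "INSERT INTO cache (key, value) VALUES (?, ?)", (key, value)
        )
        self.db.commit()
        return value
```

Repeated lookups of the same ID then cost one remote request, with every subsequent call served from the local database.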


## Installation

Supports Python 2.7.x and 3.x:

```
pip install bioscraping
```


Real unit tests are absent, but you can test basic functionality with `python test/`



```python
from bioscraping import PubMedClient

pubmed = PubMedClient()
```

`PubMedClient()` defaults to writing a file called `.bioscraping.pubmed.sqlite.db`. Use `PubMedClient(":memory:")` for in-memory data storage.


`pubmed.fetch(<PMID>)` returns text with the author and abstract for the given PMID.


```python
from bioscraping import UniprotClient

uniprot = UniprotClient()
```

`UniprotClient()` defaults to writing a file called `.bioscraping.uniprot.sqlite.db`. Use `UniprotClient(":memory:")` for in-memory data storage.

`uniprot.fetch(<Uniprot ID>)` returns a dictionary of data parsed from the XML record.
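The XML-to-dictionary step can be sketched roughly as follows. This is a simplified, hypothetical version: the real Uniprot XML is deeply nested, and the actual fields `UniprotClient` returns are not specified here:

```python
import xml.etree.ElementTree as ET

def xml_to_dict(xml_text):
    """Flatten a simple XML document into a dict of tag -> text.
    Illustrative only; real Uniprot records need recursive handling."""
    root = ET.fromstring(xml_text)
    return {child.tag: (child.text or "").strip() for child in root}
```

For example, a toy record like `<entry><accession>Q8BP71</accession></entry>` would flatten to `{"accession": "Q8BP71"}`.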

# Buyer beware

UniprotClient has a potential race condition: writes via `tempfile` need to be implemented before it is safe to use from concurrent processes (see TODO).
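One standard way to close such a race is to write to a temporary file in the same directory and atomically rename it into place. A sketch (Python 3 standard library; not the package's actual code):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write 'data' to 'path' atomically: write to a temp file in the
    same directory, then rename it into place.  os.replace is an atomic
    rename on POSIX, so concurrent readers never see a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as handle:
            handle.write(data)
        os.replace(tmp_path, path)   # atomic rename over the target
    except Exception:
        os.remove(tmp_path)
        raise
```

The temp file must live on the same filesystem as the target, which is why it is created in the target's own directory rather than in the system temp location.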

