
Download data from URLs quickly, with integrity


getm: Fast reads with integrity for data URLs

getm provides fast binary reads for HTTP URLs using multiprocessing and shared memory.

Data is downloaded in background processes and made available as references to shared memory. There are no buffer copies, but memory references must be released by the caller, which makes working with getm a bit different from typical Python IO streams, though still easy and fast. In the case of part iteration, memoryview objects are released for you.
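The release requirement follows Python's standard memoryview protocol. A minimal stdlib sketch, with an ordinary bytearray standing in for a shared-memory block:

```python
# getm hands back references into shared memory rather than copied buffers.
# Releasing them works like any Python memoryview, illustrated here with a
# plain bytearray in place of a shared-memory block:
buf = bytearray(b"downloaded bytes")
view = memoryview(buf)

data = bytes(view[:10])   # copy out what you need...
view.release()            # ...then release the reference

try:
    view[0]               # any access after release raises ValueError
    released = False
except ValueError:
    released = True
```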

Python API methods accept a parameter, concurrency, which controls the mode of operation of getm:

  1. Default concurrency == 1: Download data in a single background process, using a single HTTP request that is kept alive during the course of the download.
  2. concurrency > 1: Up to concurrency HTTP range requests will be made concurrently, each in a separate background process.
  3. concurrency == None: Data is read on the main process. In this mode, getm is a wrapper for requests.
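To make mode 2 concrete, here is a sketch of how a download of known size can be partitioned into the byte spans that concurrent HTTP range requests would fetch. part_ranges is an illustrative helper, not part of getm's API:

```python
def part_ranges(size: int, part_size: int):
    """Yield inclusive (start, end) byte spans covering `size` bytes,
    suitable for HTTP 'Range: bytes=start-end' headers."""
    for start in range(0, size, part_size):
        yield start, min(start + part_size, size) - 1

# A 10-byte object split into 4-byte parts:
ranges = list(part_ranges(10, 4))
```

With concurrency > 1, each such span would be fetched in its own background process and written into shared memory.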

Python API

import getm

# Readable stream:
with getm.urlopen(url) as fh:
    data = fh.read()
    data.release()

# Process data in parts:
for part in getm.iter_content(url, chunk_size=1024 * 1024):
    my_chunk_processor(part)  # placeholder for your own handling
    # Note that 'part.release()' is not needed in an iterator context


getm also provides a command line interface:

getm https://my-cool-url my-local-file


Tests

During tests, signed URLs are generated that point to data in S3 and GS buckets. Data is repopulated during each test. You must have credentials available to read and write to the test buckets, and to generate signed URLs.

Set the following environment variables to the GS and S3 test bucket names, respectively:


GCP Credentials

Generating signed URLs during tests requires service account credentials, which are made available to the test suite by setting the environment variable


AWS Credentials

Follow these instructions for configuring the AWS CLI.


Installation

pip install getm

Shared Memory Size Tests

Before release, tests should be performed on systems with various amounts of shared memory; 64M and 8G are good choices. Such testing is also highly encouraged during development work on getm's shared memory algorithms and configurations.

Shared memory can be resized on Ubuntu systems, and likely other Linux systems, with the bundled convenience script in dev_scripts/. Either sudo or root access is required:

sudo dev_scripts/ 64M
sudo dev_scripts/ 8G
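Assuming the shared-memory filesystem is mounted at /dev/shm (the Linux default), its current size can be checked from Python before running the tests:

```python
import shutil

def fs_total_bytes(path: str) -> int:
    # Total capacity of the filesystem containing `path`; for "/dev/shm"
    # this is the shared-memory size being tuned above.
    return shutil.disk_usage(path).total
```

On a system resized to 64M, fs_total_bytes("/dev/shm") should report roughly 64 * 1024 ** 2 bytes.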

shared_memory backport to Python 3.7

getm relies on the multiprocessing.shared_memory module, which was introduced in Python 3.8. Since a large portion of getm's audience still relies on Python 3.7, a C extension backport of shared_memory is included.

The backport adds significant complexity to getm's code base, requiring C/C++ knowledge to maintain, as well as knowledge of CPython internals. It will be removed when enough getm users have migrated to Python 3.8 or greater.
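For reference, the core pattern the backport reproduces (creating a named shared-memory block and attaching to it by name, as a second process would) looks like this on Python 3.8+:

```python
from multiprocessing import shared_memory

# Create a named block and write into it:
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"

    # A second handle attaches by name; in getm this happens
    # in a different process:
    other = shared_memory.SharedMemory(name=shm.name)
    data = bytes(other.buf[:5])
    other.close()
finally:
    shm.close()
    shm.unlink()  # free the block once all handles are closed
```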


Project home page: GitHub
Package distribution: PyPI


Please report bugs, issues, feature requests, etc. on GitHub.


getm was created by Brian Hannafious at the UCSC Genomics Institute.

Special thanks to Michael Baumann and Lon Blauvelt for critical input and testing.
