
Data visualization toolchain based on aggregating into a grid

Project description



Datashader is a data rasterization pipeline for automating the process of creating meaningful representations of large amounts of data. Datashader breaks the creation of images of data into three main steps:

  1. Projection

    Each record is projected into zero or more bins of a nominal plotting grid shape, based on a specified glyph.

  2. Aggregation

    Reductions are computed for each bin, compressing the potentially large dataset into a much smaller aggregate array.

  3. Transformation

    These aggregates are then further processed, eventually creating an image.

Using this very general pipeline, many interesting data visualizations can be created in a performant and scalable way. Datashader contains tools for easily creating these pipelines in a composable manner, using only a few lines of code. Datashader can be used on its own, but it is also designed to work as a pre-processing stage in a plotting library, allowing that library to work with much larger datasets than it would otherwise.


The best way to get started with Datashader is to install it together with our extensive set of examples, following the instructions in the examples README.

If all you need is datashader itself, without any of the files used in the examples, you can install it from the bokeh channel using the conda package manager:

conda install -c bokeh datashader

If you want to get the very latest unreleased changes to datashader (e.g. to edit the source code yourself), first install using conda as above to ensure the dependencies are installed, and you can then tell Python to use a git clone instead:

conda remove --force datashader
git clone https://github.com/bokeh/datashader.git
cd datashader
pip install -e .

Datashader is not currently available on PyPI, to avoid broken or low-performance installations that come from not keeping track of C/C++ binary dependencies such as LLVM (required by Numba).

To run the test suite, first install pytest (e.g. conda install pytest), then run py.test datashader in your datashader source directory.

Learning more

After working through the examples, you can find additional resources linked from the datashader documentation, including API documentation and papers and talks about the approach.


Example visualizations: USA census, NYC races, NYC taxi.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename                                 Size     File type  Python version  Upload date
datashader-0.6.6-py2.py3-none-any.whl    9.8 MB   Wheel      py2.py3         May 24, 2018
datashader-0.6.6.tar.gz                  19.9 MB  Source     None            May 31, 2018
