
The FMRIB UK Biobank data processing library



ukbparse has been superseded by funpack and will no longer be developed. See the funpack project for more information.

ukbparse is a Python library for pre-processing UK BioBank data.

ukbparse is developed at the Wellcome Centre for Integrative Neuroimaging (WIN@FMRIB), University of Oxford. ukbparse is in no way endorsed, sanctioned, or validated by the UK BioBank.

ukbparse comes bundled with metadata about the variables present in UK BioBank data sets. This metadata can be obtained from the UK BioBank online data showcase.


Installation

Install ukbparse via pip:

pip install ukbparse

Or from conda-forge:

conda install -c conda-forge ukbparse

Introductory notebook

The ukbparse_demo command will start a Jupyter Notebook which introduces the main features provided by ukbparse. To run it, you need to install a few additional dependencies:

pip install ukbparse[demo]

You can then start the demo by running ukbparse_demo.


The introductory notebook uses bash, so it is unlikely to work on Windows.


Usage

General usage is as follows:

ukbparse [options] output.tsv input1.tsv input2.tsv

You can get information on all of the options by typing ukbparse --help.

Options can be specified on the command line, and/or stored in a configuration file. For example, the options in the following command line:

ukbparse \
  --overwrite \
  --import_all \
  --log_file log.txt \
  --icd10_map_file icd_codes.tsv \
  --category 10 \
  --category 11 \
  output.tsv input1.tsv input2.tsv

Could be stored in a configuration file config.txt:

log_file       log.txt
icd10_map_file icd_codes.tsv
category       10
category       11

And then executed as follows:

ukbparse -cfg config.txt output.tsv input1.tsv input2.tsv


Customising

ukbparse contains a large number of built-in rules which have been specifically written to pre-process UK BioBank data variables. These rules are stored in the following files:

  • ukbparse/data/variables_*.tsv: Cleaning rules for individual variables
  • ukbparse/data/datacodings_*.tsv: Cleaning rules for data codings
  • ukbparse/data/types.tsv: Cleaning rules for specific types
  • ukbparse/data/processing.tsv: Processing steps

You can customise or replace these files as you see fit. You can also pass your own versions of these files to ukbparse via the --variable_file, --datacoding_file, --type_file and --processing_file command-line options respectively. ukbparse will load all variable and datacoding files, and merge them into a single table which contains the cleaning rules for each variable.

Finally, you can use the --no_builtins option to bypass all of the built-in cleaning and processing rules.
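
The merge behaviour described above can be sketched conceptually in Python: rules are keyed on variable ID, and entries from later files override entries from earlier ones. Note that the column names (ID, Clean) and rule values here are hypothetical stand-ins, not ukbparse's actual table format:

```python
import csv
import io

def load_rules(tsv_text):
    """Parse a TSV of per-variable cleaning rules into a dict keyed on ID."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter='\t')
    return {row['ID']: row for row in reader}

def merge_rules(*tables):
    """Merge rule tables; entries in later tables override earlier ones."""
    merged = {}
    for table in tables:
        merged.update(table)
    return merged

# Built-in rules, then a user-supplied file overriding the rule for variable 2.
builtin = load_rules('ID\tClean\n1\tremove_outliers\n2\tfill_missing\n')
custom  = load_rules('ID\tClean\n2\tkeep_raw\n')

rules = merge_rules(builtin, custom)
print(rules['1']['Clean'])  # rule from the built-in table
print(rules['2']['Clean'])  # overridden by the custom file
```

This mirrors the idea that all variable and datacoding files are loaded and combined into one table of cleaning rules per variable.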


Output

The main output of ukbparse is a plain-text tab-delimited[*] file which contains the input data, after cleaning and processing, potentially with some columns removed, and new columns added.

If you used the --non_numeric_file option, the main output file will only contain the numeric columns; non-numeric columns will be saved to a separate file.

You can use any tool of your choice to load this output file, such as Python, MATLAB, or Excel. It is also possible to pass the output back into ukbparse.
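
For example, the output can be read in Python with only the standard library (pandas' read_csv with sep='\t' works equally well). The file content and column names below are made-up stand-ins for a real ukbparse output file:

```python
import csv
import io

# A made-up stand-in for a tab-delimited ukbparse output file,
# with an 'eid' ID column and two hypothetical data columns.
tsv = 'eid\t31-0.0\t50-0.0\n1000001\t1\t172.5\n1000002\t0\t164.0\n'

# csv.DictReader yields one dict per row, mapping column name to value.
rows = list(csv.DictReader(io.StringIO(tsv), delimiter='\t'))

# Values are read as strings, so convert numeric columns explicitly.
heights = [float(row['50-0.0']) for row in rows]
print(len(rows), heights)
```

In a real script you would pass an open file handle to csv.DictReader instead of the inline io.StringIO buffer.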

[*] You can change the delimiter via the --tsv_sep / -ts option.

Loading output into MATLAB

If you are using MATLAB, you have several options for loading the ukbparse output. The best option is readtable, which will load column names, and will handle both non-numeric data and missing values. Use readtable like so:

data = readtable('out.tsv', 'FileType', 'text');

The readtable function returns a table object, which stores each column as a separate vector (or cell-array for non-numeric columns). If you are only interested in numeric columns, you can retrieve them as an array like this:

rawdata =  data(:, vartype('numeric')).Variables;

The readtable function will potentially rename the column names to ensure that they are valid MATLAB identifiers. You can retrieve the original names from the table object like so:

colnames        = data.Properties.VariableDescriptions;
colnames        = regexp(colnames, '''(.+)''', 'tokens', 'once');
empty           = cellfun(@isempty, colnames);
colnames(empty) = data.Properties.VariableNames(empty);
colnames        = vertcat(colnames{:});

If you have used the --description_file option, you can load in the descriptions for each column as follows:

descs = readtable('descriptions.tsv',       ...
                  'FileType', 'text',       ...
                  'Delimiter', '\t',        ...
                  'ReadVariableNames', false);
descs = [descs; {'eid', 'ID'}];
idxs  = cellfun(@(x) find(strcmp(descs.Var1, x)), colnames, ...
                'UniformOutput', false);
idxs  = cell2mat(idxs);
descs = descs.Var2(idxs);
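
The same lookup is straightforward in Python. As the MATLAB snippet above implies, the descriptions file is assumed to be a two-column, headerless TSV mapping column names to descriptions; the names and values below are made up for illustration:

```python
import csv
import io

# Made-up stand-in for a two-column descriptions file (name, description).
descs_tsv = '31-0.0\tSex\n50-0.0\tStanding height\n'

# Build a name -> description mapping.
descs = {name: desc for name, desc in
         csv.reader(io.StringIO(descs_tsv), delimiter='\t')}

# The ID column has no entry in the descriptions file, so add one manually.
descs['eid'] = 'ID'

colnames     = ['eid', '31-0.0', '50-0.0']
descriptions = [descs[c] for c in colnames]
print(descriptions)
```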


Tests

To run the test suite, you need to install some additional dependencies:

pip install ukbparse[test]

Then you can run the test suite using pytest:

pytest


Citing

If you would like to cite ukbparse, please refer to its Zenodo page.
