
Utilities to read BIOPAC AcqKnowledge files


Libraries for reading BIOPAC files


These utilities are for reading the files produced by BIOPAC's AcqKnowledge software. Much of the information is based on Application Note 156 from BIOPAC; however, newer file formats were decoded through the tireless efforts of John Ollinger and Nate Vack.

This library is mostly concerned with getting you the data, and less so with interpreting UI-related header values.

Status

As far as I know, this should read any AcqKnowledge file you throw at it. Windows, Mac, uncompressed, compressed, old, new... it should happily read 'em all. If you have trouble with a file, I'd love to get a copy and make bioread work with it.

Installation

We're up on PyPI, so installing should be as simple as:

pip install bioread

Some of the optional parts of bioread depend on external libraries. acq2hdf5 depends on h5py and acq2mat depends on scipy, but as neither of those are core parts of bioread (and can be hairy to get working on some systems), they aren't installed by default. To get them, do:

# Just h5py
pip install bioread[hdf5]
# Just scipy
pip install bioread[mat]
# The whole shebang
pip install bioread[all]

As of May 2020 (version 2), we now require Python 3.6 or later. Versions 1.0.4 and below should work with Python 2.7 and up.

API Usage:
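
In Python, reading a file looks roughly like this. Treat it as a sketch: the entry point (bioread.read_file) and the channel attributes shown reflect the library's reader interface, but the file name and the output values are invented for illustration.

```python
>>> import bioread
>>> data = bioread.read_file("myfile.acq")
>>> data.samples_per_second
1000.0
>>> [c.name for c in data.channels]
['CO2', 'O2']
>>> data.channels[0].data  # scaled channel values
```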

Command-line usage:

acq2hdf5

If you want to convert files out of AcqKnowledge, this is probably what you want to use -- Matlab can read HDF5 files out of the box, and there are libraries for R and other languages. This converts the file, storing channels as datasets with names like /channels/channel_0 and metadata in attributes. Event markers are stored in /event_markers/marker_X.

Convert an AcqKnowledge file to an HDF5 file.

Usage:
  acq2hdf5 [options] <acq_file> <hdf5_file>
  acq2hdf5 -h | --help
  acq2hdf5 --version

Options:
  --values-as=<type>    Save raw measurement values, stored as integers in the
                        base file, as either 'raw' or 'scaled'. If stored as
                        raw, you can convert to scaled using the scale and
                        offset attributes on the channel. If storing scaled
                        values, scale and offset will be 1 and 0.
                        [default: scaled]
  --compress=<method>   How to compress data. Options are gzip, lzf, none.
                        [default: gzip]
  --data-only           Only save data and required headers -- do not save
                        journal or marker information.
  -v, --verbose         Print extra messages for debugging.

Note that this does not need to read the entire dataset into memory, so if you have a 2 GB dataset, this will work great.

To get the values you see in AcqKnowledge, leave the --values-as option at its default ('scaled'). For faster performance, less memory usage, and smaller files, you can use 'raw' and convert the channel later (if you care) with the scale and offset attributes.
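
As a concrete sketch of that later conversion, assuming the usual linear form (scaled = raw * scale + offset); all numbers here are made up for illustration:

```python
# Convert raw integer samples to the values AcqKnowledge displays,
# using a channel's scale and offset attributes. The raw samples,
# scale, and offset below are invented; real channels carry their own.
raw = [512, 513, 515]   # raw integer samples from a channel dataset
scale = 0.5             # the channel's 'scale' attribute
offset = -10.0          # the channel's 'offset' attribute
scaled = [r * scale + offset for r in raw]
print(scaled)  # -> [246.0, 246.5, 247.5]
```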

Generally, gzip compression seems to work very well, but if you're making something really big you might want to use lzf (worse compression, much faster).

What you'll find in the file:

Root-level attributes:

  • file_revision The internal AcqKnowledge file version number
  • samples_per_second The base sampling rate of the file
  • byte_order The original file's byte ordering
  • journal The file's journal data.

Channel-level attributes:

  • scale The scale factor of raw data (for float-type data, will be 1)
  • offset The offset of raw data (for float-type data, will be 0)
  • frequency_divider The sampling rate divider for this channel
  • samples_per_second The channel's sampling rate
  • name The name of the channel
  • units The units for the channel
  • channel_number The display number for the channel (used in markers)

Markers

  • label A text label for the marker
  • type A description of this marker's type
  • type_code A short, 4-character code for type
  • global_sample_index The index, in units of the main sampling rate, of this marker
  • channel A hard link to the referred channel (only for non-global events)
  • channel_number The display number for the channel (only for non-global events)
  • channel_sample_index The index in the channel's data where this marker belongs (only for non-global events)
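
A small worked example of how these attributes presumably relate. The divider arithmetic is my reading of the attributes above, not something the file format guarantees, and all numbers are invented:

```python
# Relationship between the file's base rate, a channel's
# frequency_divider, and the two marker sample indexes.
base_rate = 1000.0          # root-level samples_per_second
frequency_divider = 4       # the channel's frequency_divider attribute
channel_rate = base_rate / frequency_divider
print(channel_rate)         # -> 250.0 (the channel's samples_per_second)

global_sample_index = 2000  # marker position at the base rate
channel_sample_index = global_sample_index // frequency_divider
print(channel_sample_index) # -> 500
```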

acq2mat

Note: I recommend acq2hdf5 for exporting to Matlab. This program is still around because hey: It works.

This program creates a Matlab (version 5) file from an AcqKnowledge file. On the back-end, it uses scipy.io.savemat. Channels are stored in a cell array named 'channels'.

Convert an AcqKnowledge file to a MATLAB file.

Usage:
  acq2mat [options] <acq_file> <mat_file>
  acq2mat -h | --help
  acq2mat --version

Options:
  -c, --compress  Save compressed Matlab file
  --data-only     Only save data and required header information -- do not
                  save event markers.

Note: scipy is required for this program.

If you've saved a file as myfile.mat, you can, in Matlab:

>> data = load('myfile.mat')

data =

              channels: {1x2 cell}
               markers: {1x3 cell}
               headers: [1x1 struct]
    samples_per_second: 1000

>> data.channels{1}

ans =

                 units: 'Percent'
     frequency_divider: 1
    samples_per_second: 1000
                  data: [1x10002 double]
                  name: 'CO2'

>> plot(data.channels{1}.data)

(Plots the data)

>> data.markers{1}

ans =

           style: 'apnd'
    sample_index: 0
           label: 'Segment 1'
         channel: Global

acq2txt

acq2txt will take the data in an AcqKnowledge file and write it to a tab-delimited text file. By default, all channels (plus a time index) will be written.

Write the data from an AcqKnowledge file channel to a text file.

Usage:
  acq2txt [options] <acq_file>
  acq2txt -h | --help
  acq2txt --version

Options:
  --version                    Show program's version number and exit.
  -h, --help                   Show this help message and exit.
  --channel-indexes=<indexes>  The indexes of the channels to extract.
                               Separate numbers with commas. Default is to
                               extract all channels.
  -o, --outfile=<file>         Write to a file instead of standard out.
  --missing-as=<val>           What value to write where a channel is not
                               sampled. [default: ]

The first column will always be time in seconds. Channel raw values are
converted with scale and offset into native units.
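
Since the output is plain tab-delimited text, it's easy to read back with the standard library. The rows below are invented; check your own acq2txt output for the exact layout:

```python
import csv
import io

# A made-up excerpt of acq2txt-style output: time in seconds first,
# then one column per channel, tab-delimited.
sample = "0.000\t5.1\t21.0\n0.001\t5.2\t20.9\n"
rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
times = [float(r[0]) for r in rows]
chan_0 = [float(r[1]) for r in rows]
print(times)   # -> [0.0, 0.001]
print(chan_0)  # -> [5.1, 5.2]
```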

acq_info

acq_info prints out some simple debugging information about an AcqKnowledge file. It'll do its best to print something out even for damaged files.

Print some information about an AcqKnowledge file.

Usage:
    acq_info [options] <acq_file>
    acq_info -h | --help
    acq_info --version

Options:
  -d, --debug  print lots of debugging data

Note: Using - for <acq_file> reads from stdin.

As noted in the usage instructions, acq_info will read from stdin, so if your files are gzipped, you can say:

zcat myfile.acq.gz | acq_info -

acq_markers

Prints all of the markers in an AcqKnowledge file in a tab-delimited format, either to stdout or to a specified file. Fields are:

filename time (s) label channel style

Print the event markers from an AcqKnowledge file.

Usage:
  acq_markers [options] <file>...
  acq_markers -h | --help
  acq_markers --version

Options:
  -o <file>     Write to a file instead of standard output.

Note that this one does not read from stdin; in this case, printing the markers from a large number of files was more important than feeding from zcat or something.
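
Given that tab-delimited output, a parsing sketch; the field names follow the list above, and the marker rows themselves are invented for illustration:

```python
# Parse acq_markers-style tab-delimited output into dicts.
fields = ["filename", "time_s", "label", "channel", "style"]
lines = [
    "myfile.acq\t0.000\tSegment 1\tGlobal\tapnd",
    "myfile.acq\t12.500\tStim On\tCO2\tapnd",
]
markers = [dict(zip(fields, line.split("\t"))) for line in lines]
print(markers[0]["label"])    # -> Segment 1
print(markers[1]["channel"])  # -> CO2
```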

Notes

I've tested all the various vintages of files I can think of and find, except very old (AcqKnowledge 2.x) files.

Also, the channel order I read is not the one displayed in the AcqKnowledge interface. Neither the order of the data nor any channel header value I can find seems to entirely control that. I'm gonna just assume it's not a very big deal.

File Format Documentation

While there's no substitute for code diving to see how things really work, I've written some quick documentation of the file format.

In addition, developer Mike Davison did a great job figuring out additional .acq file format information (far more than is implemented in bioread!); his contributions are in notes/acqknowledge_file_structure.pdf.

Credits

This code was pretty much all written by Nate Vack <njvack@wisc.edu>, with a lot of initial research done by John Ollinger.

Copyright & Disclaimers

bioread is distributed under the MIT license. For more details, see LICENSE.

BIOPAC and AcqKnowledge are trademarks of BIOPAC Systems, Inc. The authors of this software have no affiliation with BIOPAC Systems, Inc., and that company neither supports nor endorses this software package.

