Parsing and converting files from BioLogic's EC-Lab.

eclabfiles

This is a package to parse files from BioLogic's EC-Lab. The parsers build on Chris Kerr's galvani package and on the work of a previous civilian service member at Empa Lab 501, Jonas Krieger.

> pip install eclabfiles

Example Usage

parse

Parse the data as it is stored in the corresponding file. The method automatically determines the filetype and applies the respective parser.

>>> import eclabfiles as ecf
>>> ecf.parse("./mpt_files/test_01_OCV.mpt")

The returned data structure can look quite different depending on the filetype you read in, as the different filetypes store the same data in very different ways. See the Filetypes section below.
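The suffix-based dispatch that parse() performs can be sketched like this (a minimal illustration; the mapping and function name are mine, not the package's internals):

```python
from pathlib import Path

# Illustrative sketch of dispatching on the file suffix, as parse() is
# described to do; the parser labels here are placeholders.
SUFFIX_PARSERS = {".mpt": "text parser", ".mpr": "binary parser", ".mps": "settings parser"}

def pick_parser(path):
    suffix = Path(path).suffix.lower()
    if suffix not in SUFFIX_PARSERS:
        raise ValueError(f"Unsupported filetype: {suffix!r}")
    return SUFFIX_PARSERS[suffix]
```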

to_df

Parse the file and transform only the data part into a Pandas DataFrame.

>>> import eclabfiles as ecf
>>> ecf.to_df("./mpr_files/test_02_CP.mpr")

If the given file is an .mps settings file, the program tries to read the data from any .mpt and .mpr files present in the same folder. In that case a list of DataFrames is returned.
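Because the return type depends on the input, a small normalizing helper can be convenient (my own sketch, not part of the package):

```python
def as_list(result):
    """Normalize to_df() output: an .mps settings file yields a list of
    DataFrames, while a single data file yields one DataFrame."""
    return result if isinstance(result, list) else [result]
```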

to_csv

Parse the file and write the data part into a .csv file at the specified location.

>>> import eclabfiles as ecf
>>> ecf.to_csv("./mpt_files/test_03_PEIS.mpt", "./csv_files/test_PEIS.csv")

The csv_path parameter is optional. If omitted, the method writes a .csv file at the location of the input file.

If the file is a settings file, this method behaves like to_df() and writes multiple numbered .csv files.
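The default output location amounts to swapping the suffix, roughly like this (a sketch of the documented behaviour, not the package's actual code):

```python
from pathlib import Path

def default_csv_path(data_path):
    # Write the .csv next to the input file, mirroring the documented default.
    return Path(data_path).with_suffix(".csv")
```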

to_xlsx

Parse the file and write the data part into an Excel .xlsx file at the specified location.

>>> import eclabfiles as ecf
>>> ecf.to_xlsx("./experiment/test.mps")

The xlsx_path parameter is optional. If omitted, the method writes a .xlsx file at the location of the input file.

If the file is a settings file, this method writes multiple numbered sheets into the Excel file.

Filetypes

The file types that are implemented are:

  • .mpt: A text file generated when the user exports the raw .mpr file in text format.
  • .mpr: Raw data binary file, which contains the current parameter settings (refreshed at each modification) of the detailed diagram and cell characteristic windows.
  • .mps: Settings file, which contains all the parameters of the experiment.

From what I have seen, the .mpt files generally contain a few more data columns than the corresponding binary .mpr files.

The .mps files simply relate different techniques together and store no data, while the other files contain the measurements.

Structure of parsed .mpt files

{
    'header': {
        'technique',
        'settings',
        'params': {},
        'loops': {
            'n',
            'indexes': [],
        },
    },
    'datapoints': [{}],
}
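Navigating this structure could look as follows; the values are invented for illustration:

```python
# Hypothetical parse result following the .mpt layout sketched above.
parsed = {
    "header": {
        "technique": "OCV",
        "settings": ["EC-Lab ASCII FILE"],
        "params": {"tR": "0:10:0.0000"},
        "loops": {"n": 1, "indexes": [0]},
    },
    "datapoints": [{"time/s": 0.0, "Ewe/V": 3.2101}],
}

technique = parsed["header"]["technique"]
voltages = [point["Ewe/V"] for point in parsed["datapoints"]]
```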

Structure of parsed .mpr files

[
    {
        'header': {
            'short_name',
            'long_name',
            'length',
            'version',
            'date',
        },
        'data': {
            'technique',
            'comments',
            'active_material_mass',
            'at_x',
            'molecular_weight',
            'atomic_weight',
            'acquisition_start',
            'e_transferred',
            'electrode_material',
            'electrolyte',
            'electrode_area',
            'reference_electrode',
            'characteristic_mass',
            'battery_capacity',
            'battery_capacity_unit',
            'params': {},
        },
    },
    {
        'header': {
            'short_name',
            'long_name',
            'length',
            'version',
            'date',
        },
        'data': {
            'n_datapoints',
            'n_columns',
            'datapoints': [{}],
        },
    },
    {
        'header': {
            'short_name',
            'long_name',
            'length',
            'version',
            'date',
        },
        'data': {
            'ewe_ctrl_min',
            'ewe_ctrl_max',
            'ole_timestamp',
            'filename',
            'host',
            'address',
            'ec_lab_version',
            'server_version',
            'interpreter_version',
            'device_sn',
            'averaging_points',
        },
    },
    {
        'header': {
            'short_name',
            'long_name',
            'length',
            'version',
            'date',
        },
        'data': {
            'n_indexes',
            'indexes': [],
        },
    },
]
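A parsed .mpr file is thus a list of modules; picking out a module by its header could look like this (the short_name values are illustrative):

```python
# Hypothetical module list following the layout above: a settings module
# followed by a data module.
modules = [
    {"header": {"short_name": "VMP Set"}, "data": {"technique": "OCV"}},
    {"header": {"short_name": "VMP data"},
     "data": {"n_datapoints": 2, "n_columns": 3, "datapoints": [{}, {}]}},
]

def find_module(modules, short_name):
    """Return the first module whose header matches the given short name."""
    return next(m for m in modules if m["header"]["short_name"] == short_name)
```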

Structure of parsed .mps files

{
    'header': {
        'filename',
        'general_settings': [],
    },
    'techniques': [
        {
            'technique',
            'params',
            'data': {
                'mpr': [],
                'mpt': {},
            },
        },
    ],
}
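Iterating over the techniques of a parsed settings file could then look like this (values invented for illustration):

```python
# Hypothetical parse result following the .mps layout sketched above.
settings = {
    "header": {"filename": "test.mps", "general_settings": []},
    "techniques": [
        {"technique": "OCV", "params": {}, "data": {"mpr": [], "mpt": {}}},
        {"technique": "PEIS", "params": {}, "data": {"mpr": [], "mpt": {}}},
    ],
}

names = [t["technique"] for t in settings["techniques"]]
```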

Techniques

The techniques implemented are:

  • CA
  • CP
  • CV
  • GCPL
  • GEIS
  • LOOP
  • LSV
  • MB
  • OCV
  • PEIS
  • WAIT
  • ZIR (TODO for .mpr)

Notes on implementing further techniques

In the best case, you should have .mps, .mpr and .mpt files ready that contain the technique you would like to implement.

For the parsing of EC-Lab ASCII files (.mpt/.mps) you simply add a list of parameter names in technique_params.py, as they appear in these text files. If the technique has a variable number of parameters in these ASCII files, e.g. a modifiable number of 'Limits' or 'Records', define the technique as a dictionary containing head and tail, as with PEIS. Then also write a function that completes the technique parameters (compare construct_peis_params).
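As a sketch, a fixed-length parameter list and a head/tail definition for a variable-length technique might look like this (the parameter names are placeholders, not actual EC-Lab names):

```python
# Fixed-length technique: one flat list of parameter names.
ocv_params = ["tR (h:m:s)", "dER/dt (mV/h)", "record every dE", "record every dt"]

# Variable-length technique: a fixed 'head' plus a repeating 'tail',
# as described above for techniques with modifiable 'Limits'/'Records'.
variable_params = {
    "head": ["Mode", "Ei (V)"],
    "tail": ["limit type", "limit value"],
}
```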

Make sure to also add the list of technique parameters into the technique_params dictionary or to add a case for the technique in _parse_technique_params / _parse_techniques in the mpt.py / mps.py modules.

If you want to implement the technique in the .mpr file parser, you will need to define a corresponding NumPy dtype in the technique_params.py module. I would recommend getting a solid hex editor (e.g. Hexinator, Hex Editor Neo) to find the actual binary data type of each parameter.
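A technique dtype might be declared roughly like this (field names and layout are invented for illustration, not taken from any real file):

```python
import numpy as np

# Hypothetical dtype for a simple technique's parameter block: three
# 32-bit little-endian floats followed by an 8-bit selector, packed
# without padding (3 * 4 + 1 = 13 bytes per record).
ocv_dtype = np.dtype([
    ("tR", "<f4"),
    ("record_every_dE", "<f4"),
    ("record_every_dt", "<f4"),
    ("E_range", "|u1"),
])
```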

From the .mpr files I have seen, you will usually find the parameters at an offset of around 0x1845 from the start of the data section in the VMP settings module. Compare the parameter values in the binary data to the values in the corresponding ASCII files.

As a rule of thumb, floats are usually 32-bit little-endian (<f4), integers are often 8-bit (|u1) or 16-bit (<u2) wide, and units are stored as 8-bit integers. I have not yet gotten around to linking the integer values with their corresponding units.
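These type codes can be checked quickly by round-tripping a few bytes (an illustrative snippet, unrelated to any specific file):

```python
import struct
import numpy as np

# Pack a 32-bit little-endian float followed by an 8-bit unit code,
# then decode it with the matching NumPy dtype.
raw = struct.pack("<fB", 1.5, 3)
rec = np.frombuffer(raw, dtype=np.dtype([("value", "<f4"), ("unit", "|u1")]))
value, unit = float(rec["value"][0]), int(rec["unit"][0])
```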

Good luck!

