Package to simplify Micantis API usage

Micantis API Wrapper

A lightweight Python wrapper for interacting with the Micantis API plus some helpful utilities.
Built for ease of use, fast prototyping, and clean integration into data workflows.


🚀 Features

  • Authenticate and connect to the Micantis API service
  • Download and parse CSV, binary, and Parquet data into pandas DataFrames
  • Parquet support for efficient data storage with embedded metadata
  • Filter, search, and retrieve metadata
  • Utility functions to simplify common API tasks

⚠️ Important

This package is designed for authenticated Micantis customers only.
If you are not a Micantis customer, the API wrapper and utilities in this package will not work for you.

For more information on accessing the Micantis API, please contact us at info@micantis.io.


📦 Installation

pip install micantis

Optional: Parquet Support

For parquet file downloads and metadata extraction, install with parquet support:

pip install micantis[parquet]

Or install pyarrow separately:

pip install pyarrow
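Since parquet support is optional, code meant to run with or without it can check for pyarrow up front instead of failing mid-workflow. A minimal sketch (the HAS_PARQUET flag name is our own, not part of the package):

```python
import importlib.util

# True when the optional pyarrow dependency is installed.
HAS_PARQUET = importlib.util.find_spec("pyarrow") is not None

if not HAS_PARQUET:
    print("Parquet features unavailable - install with: pip install micantis[parquet]")
```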

💻 Examples

Import functions

import pandas as pd
from micantis import MicantisAPI

Initialize API

# Option 1 - login with username and password
service_url = 'your service url'
username = 'your username'
password = 'your password'

api = MicantisAPI(service_url=service_url, username=username, password=password)
# Option 2 - log in with Microsoft Entra ID
SERVICE = 'your service url'
CLIENT_ID = 'your client id'
AUTHORITY = 'https://login.microsoftonline.com/organizations'
SCOPES = ['your scopes']

api = MicantisAPI(service_url=SERVICE, client_id=CLIENT_ID, authority=AUTHORITY, scopes=SCOPES)

Authenticate API

api.authenticate()

Download Data Table Summary

Optional parameters

  • search: Search string (same syntax as the Micantis WebApp)
  • barcode: Search for a specific barcode
  • limit: Number of results to return (default: 500)
  • min_date: Only return results after this date
  • max_date: Only return results before this date
  • show_ignored: Include soft-deleted files (default: True)
search = "*NPD*"  # same search syntax as the Micantis WebApp
table = api.get_data_table(search=search, limit=10)
table

Download Binary Files

# Download single file

file_id = 'File ID obtained from data table, id column'
df = api.download_binary_file(file_id)
# Download many files using list of files from the table

file_id_list = table['id'].to_list()
data = []

for file_id in file_id_list:
    df = api.download_binary_file(file_id)
    data.append(df)

all_data = pd.concat(data)

Download CSV Files

# Download single file

file_id = 'File ID obtained from data table, id column'
df = api.download_csv_file(file_id)
# Download multiple files

id_list = table['id'].to_list()
data = []

for file_id in id_list:
    df = api.download_csv_file(file_id)
    data.append(df)

all_data = pd.concat(data)
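When concatenating many downloads, tagging each frame with its source id keeps rows traceable after the concat. A sketch using small stand-in DataFrames in place of api.download_csv_file results (the sample values are illustrative):

```python
import pandas as pd

# Stand-ins for DataFrames returned by api.download_csv_file(...)
downloads = {
    "id-001": pd.DataFrame({"voltage": [3.7, 3.8]}),
    "id-002": pd.DataFrame({"voltage": [3.6]}),
}

data = []
for file_id, df in downloads.items():
    df = df.assign(file_id=file_id)  # tag each row with its source file
    data.append(df)

all_data = pd.concat(data, ignore_index=True)
```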

Download Parquet Files

Download cycle tester data as Apache Parquet files for efficient analysis. Parquet files are smaller, faster, and include embedded metadata.

Optional parameters

  • cycle_ranges: Filter by cycle index (see examples below)
  • test_time_start: Filter by test time start (seconds from test start)
  • test_time_end: Filter by test time end (seconds from test start)
  • line_number_start: Filter by line number start
  • line_number_end: Filter by line number end
  • include_auxiliary_data: Include auxiliary channels like temperature (default: True)
  • output_path: Custom file path (default: uses cell_data_id as filename)
  • return_type: What to return - 'dataframe' (default), 'dict', 'path', or 'bytes'

Return Type Options

  • 'dataframe' (default): Saves file and returns pandas DataFrame - best for immediate analysis
  • 'dict': Saves file and returns dict with data, metadata, and cycle_summaries - best when you need metadata (requires pyarrow)
  • 'path': Saves file and returns path string - best for large files or batch processing
  • 'bytes': Returns raw bytes without saving - best for direct cloud uploads (Databricks, Azure Blob, S3)
# Download and get DataFrame (default)
file_id = 'File ID obtained from data table, id column'
df = api.download_parquet_file(file_id)
# Get data + metadata in one call
result = api.download_parquet_file(file_id, return_type='dict')

df = result['data']                    # Cycle test data
metadata = result['metadata']          # Cell metadata (name, barcode, timestamps, etc.)
cycle_summaries = result['cycle_summaries']  # Per-cycle summary statistics
# Save file and get path (memory efficient for large files)
path = api.download_parquet_file(file_id, return_type='path')

# Later, read when needed
df = pd.read_parquet(path)
# Get raw bytes for direct cloud upload (no local file)
parquet_bytes = api.download_parquet_file(file_id, return_type='bytes')

# Upload to Azure Blob Storage
blob_client.upload_blob(name='test_data.parquet', data=parquet_bytes)

# Or read directly into DataFrame
import io
df = pd.read_parquet(io.BytesIO(parquet_bytes))

Cycle Range Filtering

Filter data by specific cycles or cycle ranges using the cycle_ranges parameter.

# Download only cycles 1-10
df = api.download_parquet_file(
    file_id,
    cycle_ranges=[{"RangeStart": 1, "RangeEnd": 10}]
)
# Download last 5 cycles
df = api.download_parquet_file(
    file_id,
    cycle_ranges=[{
        "RangeStart": 5,
        "IsStartFromBack": True,
        "RangeEnd": 1,
        "IsEndFromBack": True
    }]
)
# Download specific cycles (1, 5, 10, 50)
df = api.download_parquet_file(
    file_id,
    cycle_ranges=[
        {"Single": 1},
        {"Single": 5},
        {"Single": 10},
        {"Single": 50}
    ]
)
# Download first hour of data
df = api.download_parquet_file(
    file_id,
    test_time_start=0,
    test_time_end=3600
)
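Because cycle_ranges is a plain list of dictionaries, it can be built programmatically rather than written out by hand. For example, converting a list of cycle indices into the {"Single": n} form shown above:

```python
# Cycles of interest, e.g. selected from an earlier analysis step
cycles = [1, 5, 10, 50]

# Build the cycle_ranges payload in the {"Single": n} form
cycle_ranges = [{"Single": c} for c in cycles]

# df = api.download_parquet_file(file_id, cycle_ranges=cycle_ranges)
```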

Extract Metadata from Parquet Files

Parquet files contain embedded metadata including cell info, timestamps, cycle counts, and per-cycle summaries. Extract this metadata using unpack_parquet() (requires pyarrow).

# From a saved file
result = api.unpack_parquet('file.parquet')

df = result['data']                    # Cycle test data
metadata = result['metadata']          # Cell metadata (name, barcode, timestamps, etc.)
cycle_summaries = result['cycle_summaries']  # Per-cycle summary statistics
# From bytes (no file needed)
parquet_bytes = api.download_parquet_file(file_id, return_type='bytes')
result = api.unpack_parquet(parquet_bytes)

df = result['data']
metadata = result['metadata']
cycle_summaries = result['cycle_summaries']
# Extract and save metadata as CSV files for easy viewing
result = api.unpack_parquet('file.parquet', save_metadata=True)

# Creates:
# - file_metadata.csv
# - file_cycle_summaries.csv
# Batch processing: Download multiple files without loading into memory
file_ids = table['id'].head(10).to_list()
paths = []

for file_id in file_ids:
    path = api.download_parquet_file(file_id, return_type='path')
    paths.append(path)

# Later, process files one at a time (memory efficient)
for path in paths:
    result = api.unpack_parquet(path)
    df = result['data']
    # Process df...

Cells Table

Download Cell ID Information

Retrieve a list of cell names and GUIDs from the Micantis database with flexible filtering options.

Optional parameters

  • search: Search string (same syntax as the Micantis WebApp)
  • barcode: Search for a specific barcode
  • limit: Number of results to return (default: 500)
  • min_date: Only return results after this date
  • max_date: Only return results before this date
  • show_ignored: Include soft-deleted files (default: True)
search = "*NPD*"
cells_df = api.get_cells_list(search=search)
cells_df.head()

Download Cell Metadata

Fetch per-cell metadata and return a clean, wide-format DataFrame.

Parameters:

  • cell_ids: List[str]
    List of cell test GUIDs (required)

  • metadata: List[str] (optional)
    List of metadata names (e.g., "OCV (V)") or IDs.
    If omitted, all non-image metadata will be returned by default.

  • return_images: bool (optional)
    If True, includes image metadata fields. Default is False.


📘 Examples

# Example 1: Get all non-image metadata for a list of cells
cell_ids = cells_df["id"].to_list()
cell_metadata_df = api.get_cell_metadata(cell_ids=cell_ids)
# Example 2: Get specific metadata fields by name
cell_metadata_df = api.get_cell_metadata(
    cell_ids=cell_ids,
    metadata=["Cell width", "Cell height"],
    return_images=False
)
# Merge cell metadata table with cell names to get clean dataframe
# Merge id with Cell Name (as last column)
id_to_name = dict(zip(cells_df['id'], cells_df['name']))
cell_metadata_df['cell_name'] = cell_metadata_df['id'].map(id_to_name)
cell_metadata_df.head()
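The dict-and-map join above can also be written with pandas merge, which makes missing ids easier to spot (unmatched rows get NaN). A sketch with small stand-in frames; the column names follow the tables above:

```python
import pandas as pd

# Stand-ins for cells_df and the metadata table returned by the API
cells_df = pd.DataFrame({"id": ["a", "b"], "name": ["Cell A", "Cell B"]})
cell_metadata_df = pd.DataFrame({"id": ["a", "b"], "Weight (g)": [98.7, 99.1]})

# Left-join the cell names onto the metadata table by id
merged = cell_metadata_df.merge(
    cells_df.rename(columns={"name": "cell_name"}),
    on="id",
    how="left",
)
```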

Specifications Table

Download Specifications List

Retrieve specifications with their associated user properties.

# Get all specifications with their user properties
specs_df = api.get_specifications_table()
specs_df.head()

Test Management

Download Test Requests List

Retrieve test request data with flexible date filtering.

Optional parameters

  • since: Date string in various formats (defaults to January 1, 2020 if not provided)
    • Full month names: "May 1, 2025", "January 15, 2024"
    • ISO format: "2025-05-01" (the short form "25-05-01" is also accepted)
# Get all test requests (defaults to since 2020-01-01)
test_requests = api.get_test_request_list()

# Get test requests since a specific date using month name
test_requests = api.get_test_request_list(since="May 1, 2024")

# Get test requests using ISO format
test_requests = api.get_test_request_list(since="2024-05-01")
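If the since date comes from elsewhere in your code, building the string from a date object avoids any ambiguity about format:

```python
from datetime import date

# Build an unambiguous ISO-format 'since' string from a date object
since = date(2024, 5, 1).isoformat()

# test_requests = api.get_test_request_list(since=since)
```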

Download Failed Test Requests

Retrieve only failed test requests with the same date filtering options.

# Get failed test requests since a specific date
failed_requests = api.get_failed_test_requests(since="January 1, 2024")
failed_requests.head()

Get Individual Test Request Details

Retrieve full details for a specific test request by ID.

New Feature: Multiple output format options for better data analysis!

Format Options

  • return_format='dict': Raw dictionary (default, backwards compatible)
  • return_format='dataframes': Returns 3 DataFrames - summary, tests, and status_log ⭐ Recommended
  • return_format='flat': Single-row DataFrame with basic info
# Option 1: Dictionary format (default, backwards compatible)
request_id = "your-test-request-guid"
test_details = api.get_test_request(request_id)

# Option 2: DataFrames format (recommended for analysis) ⭐
test_details = api.get_test_request(request_id, return_format='dataframes')
print(test_details['summary'])      # Basic request information
print(test_details['tests'])        # All requested tests
print(test_details['status_log'])   # Status change history

# Option 3: Flat DataFrame (best for combining multiple requests)
test_details = api.get_test_request(request_id, return_format='flat')

Batch Processing Multiple Requests

# Get summaries for multiple test requests
request_ids = test_requests['id'].head(10).to_list()

all_summaries = []
for req_id in request_ids:
    summary = api.get_test_request(req_id, return_format='flat')
    all_summaries.append(summary)

# Combine into single DataFrame
combined_df = pd.concat(all_summaries, ignore_index=True)
print(f"Retrieved {len(combined_df)} test requests")
combined_df.head()

Write Cell Metadata

Micantis lets you programmatically assign or update metadata for each cell using either:

  • the human-readable field name (e.g., "Technician", "Weight (g)")
  • or the internal propertyDefinitionId (UUID)

📘 Examples

# Example 1: Update metadata fields by name
changes = [
    {
        "id": "your-cell-test-guid-here",  # cell test GUID
        "field": "Technician",
        "value": "Mykela"
    },
    {
        "id": "your-cell-test-guid-here",
        "field": "Weight (g)",
        "value": 98.7
    }
]

api.write_cell_metadata(changes=changes)

# Verify the changes
api.get_cell_metadata(cell_ids=["your-cell-test-guid-here"], metadata=['Weight (g)', 'Technician'])
# Example 2: Update using propertyDefinitionId (advanced)
changes = [
    {
        "id": "your-cell-test-guid-here",
        "propertyDefinitionId": "your-property-definition-guid",
        "value": 98.7
    }
]

api.write_cell_metadata(changes=changes)

# Verify the changes
api.get_cell_metadata(cell_ids=["your-cell-test-guid-here"], metadata=['Weight (g)', 'Technician'])

Project details

Download files

Source Distribution: micantis-0.1.14.tar.gz (20.4 kB)
Built Distribution: micantis-0.1.14-py3-none-any.whl (17.3 kB)
Uploaded via twine/6.1.0 on CPython/3.9.20.

File hashes

micantis-0.1.14.tar.gz
  SHA256: d7442afbe4fced9ecdbc189a7af7934fd93918054737265920c560e7ab81d564
  MD5: eb1e3dcc9d997220c64d9c86baf1a5b1
  BLAKE2b-256: 92e2ca716bf5489329dee6af32b08c2a24f433ae0b551898c0cdb08f4424446e

micantis-0.1.14-py3-none-any.whl
  SHA256: d04eb949afad133c0d5c17b23da6d05fd047ade1ec4000ed8ec3392088e33a70
  MD5: edee5b3bf0a8e091285a3919263360a6
  BLAKE2b-256: 332cc79bb8cdc78c2161582d56b3c35414a1d1fc37b17759446700823aa08fd3
