Hephaes

ROS log processing and dataset conversion

Hephaes is a Python package for turning raw ROS/MCAP logs into standardized datasets with consistent schemas across runs. It helps you:

  • ingest ROS1 .bag and ROS2 .mcap logs
  • inspect topics, rates, and recording time ranges
  • synchronize asynchronous sensor streams onto a shared timeline (downsample or interpolate)
  • convert logs into wide dataset files such as Parquet and TFRecord
  • standardize dataset schemas with explicit topic-to-field mappings

Current Scope

The library is intentionally focused on the core dataset-prep path.

  • Input formats: ROS1 .bag, ROS2 .mcap
  • Input paths must be files, not bag directories
  • Output formats: one wide Parquet or TFRecord file per input log
  • Interface: Python library
  • Python: 3.11+

If you need the same dataset schema across different robots or recording setups, you can map multiple possible source topics to the same target field. The converter will use the first topic that exists in each log.
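The first-match behavior can be pictured with a small standalone sketch (plain Python illustrating the idea, not the library's internals; the topic names are examples):

```python
def resolve_field(candidate_topics, topics_in_log):
    """Return the first candidate topic present in the log, else None."""
    for topic in candidate_topics:
        if topic in topics_in_log:
            return topic
    return None

# A log from one robot records /sensors/front_cam instead of the
# canonical /camera/front/image_raw, so the fallback entry is selected.
log_topics = {"/sensors/front_cam", "/imu/data", "/cmd_vel"}
resolve_field(["/camera/front/image_raw", "/sensors/front_cam"], log_topics)
# → "/sensors/front_cam"
```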

Installation

Install from PyPI:

pip install hephaes

Install from source:

cd hephaes
python -m pip install .

For local development and tests:

cd hephaes
python -m pip install -e ".[dev]"

Quick Start

1. Profile a log

Use Profiler to inspect timing metadata and topic inventory before deciding how to map the log.

from hephaes import Profiler

profile = Profiler(["data/run_001.mcap"], max_workers=1).profile()[0]

print(profile.ros_version)
print(profile.duration_seconds)
print(profile.start_time_iso, profile.end_time_iso)
print([(topic.name, topic.message_type, topic.rate_hz) for topic in profile.topics])

2. Define a standardized schema

You can auto-generate a mapping from discovered topics:

from hephaes import build_mapping_template

mapping = build_mapping_template(profile.topics)
print(mapping.root)

Or define a stable schema explicitly. This is the main mechanism for dataset schema standardization.

from hephaes import build_mapping_template_from_json

mapping = build_mapping_template_from_json(
    profile.topics,
    {
        "front_camera": ["/camera/front/image_raw", "/sensors/front_cam"],
        "imu": ["/imu/data", "/sensors/imu"],
        "vehicle_twist": ["/cmd_vel", "/vehicle/twist"],
    },
    strict_unknown_topics=False,
)

In the example above, front_camera, imu, and vehicle_twist become the canonical dataset fields. Each field can list fallback source topics, which is useful when topic names vary across robots, fleets, or recording versions.

3. Convert logs into Parquet or TFRecord

Use Converter to write one dataset file per input log. Parquet remains the default.

from hephaes import Converter, ResampleConfig, TFRecordOutputConfig

converter = Converter(
    ["data/run_001.mcap"],
    mapping,
    output_dir="dataset/processed",
    output=TFRecordOutputConfig(),
    resample=ResampleConfig(freq_hz=10.0, method="interpolate"),
    robot_context={"robot_id": "alpha-01", "platform": "spot"},
    max_workers=1,
)

dataset_paths = converter.convert()
print(dataset_paths[0])
print(dataset_paths[0].with_suffix(".manifest.json"))

4. Stream the output rows

from hephaes import stream_tfrecord_rows

for row in stream_tfrecord_rows(dataset_paths[0]):
    print(row)
    break

5. Choose image payload contract mode

TFRecord defaults to image_payload_contract="bytes_v2", which writes image data fields as raw bytes features while keeping image metadata fields.

from hephaes import TFRecordOutputConfig

output = TFRecordOutputConfig(
    image_payload_contract="bytes_v2",  # default
)

For backwards-compatible reads/writes during migration windows, use legacy list-based behavior:

from hephaes import TFRecordOutputConfig

legacy_output = TFRecordOutputConfig(
    image_payload_contract="legacy_list_v1",
)

To migrate an existing loaded spec between modes:

from hephaes import load_conversion_spec, set_tfrecord_image_payload_contract

spec = load_conversion_spec("conversion-spec.yaml")
spec = set_tfrecord_image_payload_contract(spec, contract="bytes_v2")

Synchronization Modes

hephaes supports three practical ways to align asynchronous topics:

  • Preserve original timestamps (resample=None): writes rows at the union of observed message timestamps.
  • Downsample to a fixed rate (ResampleConfig(freq_hz=10.0, method="downsample")): buckets messages on a regular grid and keeps the latest payload seen in each bucket.
  • Interpolate to a fixed rate (ResampleConfig(freq_hz=10.0, method="interpolate")): builds a regular timestamp grid and linearly interpolates numeric JSON leaves between samples.

Interpolation is intended for numeric sensor payloads. Non-numeric leaves fall back to the earlier sample.
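As a rough illustration of the two resampling behaviors, here is a toy sketch on a single numeric stream (plain Python conveying the idea, not the library's implementation):

```python
def downsample(samples, freq_hz):
    """Keep the latest (t, value) pair seen in each 1/freq_hz bucket."""
    buckets = {}
    for t, v in samples:
        buckets[int(t * freq_hz)] = v  # later samples overwrite earlier ones
    return sorted((b / freq_hz, v) for b, v in buckets.items())

def interpolate(samples, grid):
    """Linearly interpolate numeric samples onto a regular timestamp grid."""
    out = []
    for g in grid:
        lo = max((s for s in samples if s[0] <= g), key=lambda s: s[0])
        hi = min((s for s in samples if s[0] >= g), key=lambda s: s[0])
        if lo[0] == hi[0]:
            out.append((g, lo[1]))
        else:
            frac = (g - lo[0]) / (hi[0] - lo[0])
            out.append((g, lo[1] + frac * (hi[1] - lo[1])))
    return out

samples = [(0.00, 0.0), (0.07, 7.0), (0.13, 13.0), (0.21, 21.0)]
downsample(samples, 10.0)  # → [(0.0, 7.0), (0.1, 13.0), (0.2, 21.0)]
interpolate(samples, [0.0, 0.1, 0.2])  # grid values ≈ 0.0, 10.0, 20.0
```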

For Parquet output, preserve/downsample modes store raw message bytes as base64-wrapped JSON strings, while interpolate stores normalized JSON payloads derived from deserialized messages. For TFRecord output, all modes deserialize messages and emit flattened typed features.

Output Format

Each input log becomes one dataset file named like:

episode_0001.parquet
episode_0002.parquet
episode_0003.tfrecord
episode_0001.manifest.json

The logical row schema is wide and simple:

timestamp_ns: int64
front_camera: string
imu: string
vehicle_twist: string
...

Notes:

  • timestamp_ns is always present.
  • Parquet keeps one nullable column per mapping target.
  • TFRecord expands each mapping target into flattened typed feature names such as imu__orientation__x.
  • Parquet stores each mapped field as a JSON string column.
  • Raw byte payloads are wrapped as base64-encoded JSON objects shaped like {"__bytes__": true, "encoding": "base64", "value": "..."}.
  • TFRecord stores flattened typed features derived from deserialized messages.
  • TFRecord uses float_list, int64_list, and bytes_list features, plus companion <field>__present flags for nulls.
  • Image-like payload bytes are written as raw bytes_list features alongside their metadata fields.
  • Each converted episode also gets a sidecar manifest at <episode_id>.manifest.json for indexing and provenance.
  • The manifest includes source metadata, temporal metadata, resolved mapping info, and optional user-supplied robot_context.
  • The labels and privacy sections are present by default; placeholder fields such as auto_tags, vlm_description, objects_detected, and anonymization_method remain null until those features are implemented.

This makes the output easy to stream, inspect, and hand off to downstream ETL, analysis, or ML pipelines while preserving source payload fidelity.
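Downstream code can unwrap the base64-wrapped payloads with nothing beyond the standard library. A minimal sketch, assuming only the wrapper shape described in the notes above (the cell contents here are made up):

```python
import base64
import json

def unwrap_payload(cell):
    """Decode a Parquet cell: plain JSON, or a base64-wrapped bytes object."""
    obj = json.loads(cell)
    if isinstance(obj, dict) and obj.get("__bytes__") is True:
        return base64.b64decode(obj["value"])  # raw message bytes
    return obj  # normalized JSON payload (interpolate mode)

# A hypothetical cell as it would appear in preserve/downsample mode:
cell = json.dumps({
    "__bytes__": True,
    "encoding": "base64",
    "value": base64.b64encode(b"\x01\x02\x03").decode("ascii"),
})
unwrap_payload(cell)  # → b'\x01\x02\x03'
```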

Direct Log Access

If you want to read logs directly instead of converting them immediately, use RosReader.

from hephaes import RosReader

with RosReader.open("data/run_001.bag") as reader:
    print(reader.topics)

    for message in reader.read_messages(topics=["/cmd_vel"]):
        print(message.timestamp, message.topic, message.data)
        break

    for topic, timestamp in reader.iter_message_headers(
        topics=["/camera/front/image_raw"],
        start_ns=1_700_000_000_000_000_000,
        stop_ns=1_700_000_000_500_000_000,
    ):
        print(topic, timestamp)
        break
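The header iterator is handy for cheap rate estimates without deserializing payloads. A small sketch of the arithmetic, applied to nanosecond timestamps like those yielded by iter_message_headers (the sample values are made up):

```python
def estimate_rate_hz(timestamps_ns):
    """Estimate message rate from a sorted list of nanosecond timestamps."""
    if len(timestamps_ns) < 2:
        return 0.0
    span_s = (timestamps_ns[-1] - timestamps_ns[0]) / 1e9
    return (len(timestamps_ns) - 1) / span_s

# Four headers spaced 100 ms apart → roughly 10 Hz.
stamps = [1_700_000_000_000_000_000 + i * 100_000_000 for i in range(4)]
estimate_rate_hz(stamps)  # ≈ 10.0
```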

Development

Run the test suite with:

cd hephaes
pytest

Build a wheel locally with:

cd hephaes
python -m build
