Interoperability Enabler

Project description

What is it?

The Interoperability Enabler (IE) component facilitates seamless integration and interaction among the various artefacts in the SEDIMARK ecosystem, including data, AI models, and service offerings.

Key Features

  • Data Formatter - Convert data from various formats into the SEDIMARK internal processing format (pandas DataFrames)
  • Data Quality Annotations - Add any kind of quality annotation to data inside pandas DataFrames
  • Data Mapper - Convert data from pandas DataFrames into NGSI-LD JSON
  • Data Extractor - Extract relevant data from a pandas DataFrame
  • Metadata Restorer - Restore metadata to a pandas DataFrame
  • Data Merger - Merge two DataFrames by matching column names

Installation

The source code can be found on GitHub at https://github.com/Sedimark/InteroperabilityEnabler.

To install the package, you can use pip:

pip install InteroperabilityEnabler

Quick Start Examples

Data Formatter (to convert the input data into a pandas DataFrame)

from InteroperabilityEnabler.utils.data_formatter import data_to_dataframe

FILE_PATH = "sample.jsonld"
df = data_to_dataframe(FILE_PATH)

The formatter recursively flattens nested dictionaries while preserving the key hierarchy in the resulting column names, so that arbitrarily nested input can be processed as a flat DataFrame.
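
To build intuition, here is a minimal standalone sketch of that kind of recursive flattening. It is not the library's actual implementation; the dot separator and helper name are assumptions for illustration only.

```python
import pandas as pd

def flatten(record, parent_key="", sep="."):
    """Recursively flatten a nested dict, joining keys with `sep`."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

record = {"id": "urn:ngsi-ld:Device:1",
          "temperature": {"type": "Property", "value": 21.5}}
df = pd.DataFrame([flatten(record)])
print(df.columns.tolist())
# ['id', 'temperature.type', 'temperature.value']
```

The nested `temperature` property becomes two flat columns whose names still encode the original hierarchy.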

Data Quality Annotations (to enrich pandas DataFrames by adding quality annotations)

Instance-level annotations:

from InteroperabilityEnabler.utils.annotation_dataset import add_quality_annotations_to_df

entity_type_annotation = "entity_type_value" # entity type for quality annotations
annotated_df = add_quality_annotations_to_df(
    df,
    entity_type = entity_type_annotation,
    assessed_attrs = None,
    # type = "new_type", # If there is no type in the input file, a new one can be created
    # context_value = [link1, link2] # If there is no @context in the input file, a new one can be created
)

Attribute-level annotations:

from InteroperabilityEnabler.utils.annotation_dataset import add_quality_annotations_to_df

entity_type_annotation = "entity_type_value" # entity type for quality annotations
assessed_attrs = ["attribute_name"]  # Base attribute name (metadata)
annotated_df = add_quality_annotations_to_df(
     df, entity_type = entity_type_annotation, assessed_attrs = assessed_attrs
)

Granular-level annotations:

from InteroperabilityEnabler.utils.annotation_dataset import add_quality_annotations_to_df

entity_type_annotation = "entity_type_value" # entity type for quality annotations
assessed_attrs = ["currentTripCount[0]"]  # Base attribute name (metadata) - with the index
annotated_df = add_quality_annotations_to_df(
   df, entity_type = entity_type_annotation, assessed_attrs = assessed_attrs
)

Data Mapper (to convert the DataFrame into NGSI-LD JSON format)

from InteroperabilityEnabler.utils.data_mapper import data_conversion, restore_ngsi_ld_structure

data = data_conversion(annotated_df)
data_restored = restore_ngsi_ld_structure(data) # to restore the original NGSI-LD structure
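
For intuition, the mapping back from flat columns to a nested NGSI-LD-style entity can be pictured with a plain-pandas sketch. The helper name and dot separator below are illustrative assumptions, not the library's API.

```python
import pandas as pd

def row_to_entity(row, sep="."):
    """Rebuild a nested dict from dot-separated column names."""
    entity = {}
    for col, value in row.items():
        parts = col.split(sep)
        node = entity
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return entity

df = pd.DataFrame([{"id": "urn:ngsi-ld:Device:1",
                    "temperature.type": "Property",
                    "temperature.value": 21.5}])
print(row_to_entity(df.iloc[0]))
# {'id': 'urn:ngsi-ld:Device:1', 'temperature': {'type': 'Property', 'value': 21.5}}
```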

Data Extractor (to extract and return specific columns from a pandas DataFrame)

from InteroperabilityEnabler.utils.extract_data import extract_columns

# Select columns by index
column_indices = [5, 7]

selected_df, selected_column_names = extract_columns(df, column_indices)

print("\nSelected DataFrame:")
print(selected_df)

print("\nSelected Column Names:")
print(selected_column_names)
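
A reasonable mental model for this selection, expressed in plain pandas (an assumption based on the signature above, not the function's exact implementation):

```python
import pandas as pd

df = pd.DataFrame({"id": ["a", "b"], "value": [1, 2], "unit": ["C", "C"]})

column_indices = [0, 1]
selected_df = df.iloc[:, column_indices]           # select columns by position
selected_column_names = list(selected_df.columns)  # keep the names for later restoration
print(selected_column_names)
# ['id', 'value']
```

Keeping `selected_column_names` around is what allows the Metadata Restorer step below to re-label headerless results later.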

Metadata Restorer (to restore column names to a pandas DataFrame)

import pandas as pd
from InteroperabilityEnabler.utils.add_metadata import add_metadata_to_predictions_from_dataframe

PREDICTED_DATA = "predicted_data.csv" # example - prediction results from an AI model
predicted_df = pd.read_csv(PREDICTED_DATA, header=None)
predicted_df = add_metadata_to_predictions_from_dataframe(
    predicted_df, selected_column_names
)
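
Conceptually, this step assigns previously saved column names back to a headerless prediction DataFrame. A plain-pandas sketch of that idea (the real helper may do more than this):

```python
import pandas as pd

# Predictions typically come back as headerless numeric output
predicted_df = pd.DataFrame([[0.1, 0.2], [0.3, 0.4]])
selected_column_names = ["temperature", "humidity"]  # saved earlier by the extractor

predicted_df.columns = selected_column_names  # restore the metadata
print(predicted_df.columns.tolist())
# ['temperature', 'humidity']
```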

Data Merger (to merge two DataFrames)

from InteroperabilityEnabler.utils.merge_data import merge_predicted_data

# To combine the original input data with the corresponding prediction results from an AI model
merged_df = merge_predicted_data(df, predicted_df)
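
One plausible reading of "matching column names" is alignment by name rather than by position, as `pandas.concat` does. The sketch below illustrates that behaviour with plain pandas; the exact semantics of merge_predicted_data may differ.

```python
import pandas as pd

df = pd.DataFrame({"timestamp": [1, 2], "temperature": [21.0, 22.0]})
# Prediction frame with the same columns, deliberately in a different order
predicted_df = pd.DataFrame({"temperature": [23.1], "timestamp": [3]})

# concat aligns rows on column names, not column position
merged_df = pd.concat([df, predicted_df], ignore_index=True)
print(merged_df)
```

Even though `predicted_df` lists its columns in a different order, the values land under the correct headers in the merged result.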

Acknowledgement

This software has been developed by Inria under the SEDIMARK (SEcure Decentralised Intelligent Data MARKetplace) project. SEDIMARK is funded by the European Union under the Horizon Europe framework programme [grant no. 101070074].
