
Interoperability Enabler

Project description

What is it?

The Interoperability Enabler (IE) component is designed to facilitate seamless integration and interaction among the various artefacts in the SEDIMARK ecosystem, including data, AI models, and service offerings.

Key Features

  • Data Formatter – Converts data from various formats into the SEDIMARK internal processing format (pandas DataFrames)
  • Data Quality Annotations – Enables adding quality annotations of any kind to data inside pandas DataFrames
  • Data Mapper – Converts data from pandas DataFrames into NGSI-LD JSON
  • Data Extractor – Extracts relevant data from a pandas DataFrame
  • Metadata Restorer – Restores metadata to a pandas DataFrame
  • Data Merger – Merges two DataFrames by matching column names

Installation

The source code can be found on GitHub at https://github.com/Sedimark/InteroperabilityEnabler.

To install the package, you can use pip:

pip install InteroperabilityEnabler

Quick Start Examples

Data Formatter (to convert the input data into a pandas DataFrame)

from InteroperabilityEnabler.utils.data_formatter import data_to_dataframe

FILE_PATH = "sample.jsonld"
df = data_to_dataframe(FILE_PATH)

The formatter recursively flattens dictionaries while preserving key hierarchies, so arbitrarily nested input structures map cleanly onto DataFrame columns.
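The exact key-joining convention is internal to the package, but the flattening idea can be sketched in plain Python (the `flatten` helper and the `.` separator below are illustrative assumptions, not the package's API):

```python
def flatten(record, parent_key="", sep="."):
    """Recursively flatten a nested dict, joining keys with `sep`."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse so the key hierarchy is preserved in the joined name.
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

nested = {"id": "urn:ngsi-ld:Device:001",
          "temperature": {"type": "Property", "value": 21.5}}
flat = flatten(nested)
# flat == {"id": "urn:ngsi-ld:Device:001",
#          "temperature.type": "Property",
#          "temperature.value": 21.5}
```

Each flattened key then becomes one DataFrame column, which is what makes nested NGSI-LD entities tabular.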

Data Quality Annotations (to enrich pandas DataFrames by adding quality annotations)

Instance-level annotations:

from InteroperabilityEnabler.utils.annotation_dataset import add_quality_annotations_to_df

entity_type_annotation = "entity_type_value" # entity type for quality annotations
annotated_df = add_quality_annotations_to_df(
    df,
    entity_type=entity_type_annotation,
    assessed_attrs=None,
    # type="new_type",  # if the input file has no type, a new one can be created
    # context_value=[link1, link2],  # if the input file has no @context, a new one can be created
)

Attribute-level annotations:

from InteroperabilityEnabler.utils.annotation_dataset import add_quality_annotations_to_df

entity_type_annotation = "entity_type_value" # entity type for quality annotations
assessed_attrs = ["attribute_name"]  # base attribute name (metadata)
annotated_df = add_quality_annotations_to_df(
    df, entity_type=entity_type_annotation, assessed_attrs=assessed_attrs
)

Granular-level annotations:

from InteroperabilityEnabler.utils.annotation_dataset import add_quality_annotations_to_df

entity_type_annotation = "entity_type_value" # entity type for quality annotations
assessed_attrs = ["currentTripCount[0]"]  # base attribute name (metadata), with the index
annotated_df = add_quality_annotations_to_df(
    df, entity_type=entity_type_annotation, assessed_attrs=assessed_attrs
)

Data Mapper (to convert the DataFrame into NGSI-LD JSON format)

from InteroperabilityEnabler.utils.data_mapper import data_conversion, restore_ngsi_ld_structure

data = data_conversion(annotated_df)
data_restored = restore_ngsi_ld_structure(data) # to restore the original NGSI-LD structure

Data Extractor (to extract and return specific columns from a pandas DataFrame)

from InteroperabilityEnabler.utils.extract_data import extract_columns

# Select columns by index
column_indices = [5, 7]

selected_df, selected_column_names = extract_columns(df, column_indices)

print("\nSelected DataFrame:")
print(selected_df)

print("\nSelected Column Names:")
print(selected_column_names)
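For reference, the selection above behaves like a positional column slice in plain pandas (a sketch of the observable behaviour, not `extract_columns`' actual implementation; the sample DataFrame is invented):

```python
import pandas as pd

# Illustrative stand-in for extract_columns: positional column selection.
df = pd.DataFrame({"id": ["d1"], "temperature": [21.5], "humidity": [55.0]})
column_indices = [0, 2]

selected_df = df.iloc[:, column_indices]           # keep only those columns
selected_column_names = list(selected_df.columns)  # ["id", "humidity"]
```

Keeping `selected_column_names` around matters: it is exactly what the Metadata Restorer needs later to re-label prediction output.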

Metadata Restorer (to restore column names to a pandas DataFrame)

import pandas as pd
from InteroperabilityEnabler.utils.add_metadata import add_metadata_to_predictions_from_dataframe

PREDICTED_DATA = "predicted_data.csv" # example - prediction results from an AI model
predicted_df = pd.read_csv(PREDICTED_DATA, header=None)
predicted_df = add_metadata_to_predictions_from_dataframe(
    predicted_df, selected_column_names
)
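In plain pandas terms, restoring the metadata amounts to reattaching the saved column names to the headerless prediction frame (a minimal sketch under that assumption; the helper above may do more, and the values here are invented):

```python
import pandas as pd

# Prediction output typically comes back as plain numbers with no header.
predicted_df = pd.DataFrame([[0.1, 0.9], [0.4, 0.6]])
selected_column_names = ["temperature", "humidity"]  # saved earlier by the extractor

# Reattach the original column names so the predictions are self-describing.
predicted_df.columns = selected_column_names
```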

Data Merger (merge two DataFrames)

from InteroperabilityEnabler.utils.merge_data import merge_predicted_data

# To combine the original input data with the corresponding prediction results from an AI model
merged_df = merge_predicted_data(df, predicted_df)
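One plausible plain-pandas reading of "merge by matching column names" is a concatenation in which columns with the same name are aligned automatically (a sketch under that assumption; `merge_predicted_data` may align rows differently, and the sample values are invented):

```python
import pandas as pd

# Original observations and model predictions share the same column names.
original = pd.DataFrame({"temperature": [21.5], "humidity": [55.0]})
predicted = pd.DataFrame({"temperature": [22.1], "humidity": [54.2]})

# Stack the frames; same-named columns line up automatically.
merged = pd.concat([original, predicted], ignore_index=True)
```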

Acknowledgement

This software has been developed by Inria under the SEDIMARK (SEcure Decentralised Intelligent Data MARKetplace) project. SEDIMARK is funded by the European Union under the Horizon Europe framework programme [grant no. 101070074].

Download files

Download the file for your platform.

Source Distribution

interoperabilityenabler-0.1.3.tar.gz (8.2 kB)

Uploaded Source

Built Distribution

interoperabilityenabler-0.1.3-py3-none-any.whl (10.4 kB)

Uploaded Python 3

File details

Details for the file interoperabilityenabler-0.1.3.tar.gz.

File metadata

  • Download URL: interoperabilityenabler-0.1.3.tar.gz
  • Size: 8.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.12

File hashes

Hashes for interoperabilityenabler-0.1.3.tar.gz
Algorithm Hash digest
SHA256 e230124dc7ac9008ed017f1b79f182e420cc330f5a99caa9e517adac7482f2dc
MD5 abd925b904465195636879cd53b40b3b
BLAKE2b-256 0ec5832388dda42a2380e566b3e464aba84e2427305f17c01c1af618938aa0a9

File details

Details for the file interoperabilityenabler-0.1.3-py3-none-any.whl.

File hashes

Hashes for interoperabilityenabler-0.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 6fa8782b03014594a41d450265db107afc4ff8facbb35408faf96cada724e33d
MD5 52fc280e3acd4e92f6324ecabd7c7346
BLAKE2b-256 6448355d920609840378066dfcd456b5dfc1458b03c8081f4aa945a11ee65e73
