CIM query utilities

CIMSPARQL: Query CIM data using SPARQL

This Python package provides functionality for reading CIM data from triple stores such as GraphDB, BlazeGraph, or RDF4J into Python memory as pandas DataFrames.

The package provides a set of predefined functions/queries for loading CIM data such as generator, demand, or branch data, and users can easily define their own queries.

Usage

Load data using predefined functions/queries

>>> from cimsparql.graphdb import ServiceConfig
>>> from cimsparql.model import get_single_client_model
>>> model = get_single_client_model(ServiceConfig(limit=3))
>>> ac_lines = model.ac_lines()
>>> print(ac_lines[['name', 'x', 'r', 'bch']])
         name       x       r       bch
0  <branch 1>  1.9900  0.8800  0.000010
1  <branch 2>  1.9900  0.8800  0.000010
2  <branch 3>  0.3514  0.1733  0.000198

In the example above, the client queries the repo "" on the default GraphDB server for AC line values.

Inspect/view predefined queries

See the SPARQL templates folder (cimsparql/sparql) for the queries used.

Load data using user specified queries

>>> from string import Template
>>> query = 'PREFIX cim:<${cim}>\nPREFIX rdf: <${rdf}>\nSELECT ?mrid where {?mrid rdf:type cim:ACLineSegment}'
>>> query_result = model.get_table_and_convert(model.template_to_query(Template(query)))
>>> print(query_result)
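The template mechanism is plain `string.Template` substitution: the `${cim}` and `${rdf}` placeholders are replaced with the corresponding namespace URLs before the query is sent. A minimal standalone sketch (the namespace URLs are copied from the prefixes listed below; in real use they come from the model):

```python
from string import Template

# Namespace URLs (illustrative; in practice taken from model.prefixes)
prefixes = {
    "cim": "http://iec.ch/TC57/2010/CIM-schema-cim15#",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

template = Template(
    "PREFIX cim:<${cim}>\n"
    "PREFIX rdf: <${rdf}>\n"
    "SELECT ?mrid where {?mrid rdf:type cim:ACLineSegment}"
)

# Substitute the namespace URLs to obtain a runnable SPARQL query
query = template.substitute(prefixes)
print(query)
```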

Prefix and namespace

The namespaces available for the current GraphDB client (`model` in the examples above), which can be used in queries (such as rdf and cim), can be found with

>>> print(model.prefixes)
{'wgs': 'http://www.w3.org/2003/01/geo/wgs84_pos#',
 'rdf': 'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
 'owl': 'http://www.w3.org/2002/07/owl#',
 'cim': 'http://iec.ch/TC57/2010/CIM-schema-cim15#',
 'gn': 'http://www.geonames.org/ontology#',
 'xsd': 'http://www.w3.org/2001/XMLSchema#',
 'rdfs': 'http://www.w3.org/2000/01/rdf-schema#',
 'SN': 'http://www.statnett.no/CIM-schema-cim15-extension#',
 'ALG': 'http://www.alstom.com/grid/CIM-schema-cim15-extension#'}
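A mapping like this can be turned into the PREFIX header of a hand-written query with a one-liner. A small sketch (using two of the namespaces from the output above):

```python
# Prefix -> namespace mapping, as returned by model.prefixes (subset shown)
prefixes = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "cim": "http://iec.ch/TC57/2010/CIM-schema-cim15#",
}

# Render one "PREFIX name: <url>" line per namespace
header = "\n".join(f"PREFIX {name}: <{url}>" for name, url in prefixes.items())
print(header)
```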

Running Tests Against Docker Databases

Tests can be run against an RDF4J database if a container with the correct image is available.

docker pull eclipse/rdf4j-workbench

Launch the container and specify the following environment variable

RDF4J_URL = "localhost:8080/rdf4j-server"

Note 1: The port number may differ depending on your local Docker configuration.

Note 2: You don't have to install RDF4J or BlazeGraph; tests requiring them are skipped when they are not available. They are in any case run in the CI pipeline on GitHub, where both are always available.
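The skip behaviour boils down to checking whether the environment variable above is set. An illustrative sketch (not cimsparql's actual test code; only the `RDF4J_URL` variable name comes from the documentation above):

```python
import os

# Read the RDF4J server URL from the environment; None means no local
# server is configured and RDF4J-dependent tests would be skipped
rdf4j_url = os.environ.get("RDF4J_URL")

if rdf4j_url is None:
    print("RDF4J not configured; skipping RDF4J-dependent tests")
else:
    print(f"Running tests against {rdf4j_url}")
```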

Test models

  1. micro: MicroGrid/Type1_T1/CGMES_v2.4.15_MicroGridTestConfiguration_T1_NL_Complete_v2
  2. small: See separate documentation

Rest APIs

CIMSPARQL mainly uses SPARQLWrapper to communicate with the databases, but certain operations are performed directly via REST calls. Since there are small differences between the APIs, you may have to specify which one you are using. This can be done when initializing the ServiceConfig class or by setting the SPARQL_REST_API environment variable. Currently, RDF4J and BlazeGraph are supported; if not given, RDF4J is the default.

export SPARQL_REST_API=RDF4J  # To use RDF4J
export SPARQL_REST_API=BLAZEGRAPH  # To use BlazeGraph
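The selection logic amounts to reading the variable and falling back to RDF4J. An illustrative sketch, not cimsparql's actual implementation (only the variable name and the two accepted values come from the documentation above):

```python
import os

# Pick the REST API flavour: environment variable first, RDF4J as default
api = os.environ.get("SPARQL_REST_API", "RDF4J").upper()

if api not in {"RDF4J", "BLAZEGRAPH"}:
    raise ValueError(f"Unsupported SPARQL_REST_API: {api}")
print(f"Using the {api} REST API")
```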

Contributing

Contributions are always welcome and encouraged! Whether it's reporting a bug, suggesting an enhancement, or submitting a pull request, your input helps improve the project.

Development

Dependencies are managed through uv; install them with uv sync.

It's recommended to install the pre-commit hooks so checks run automatically on every commit. After installing pre-commit itself, install the hooks with pre-commit install. Checks normally run only on modified files when committing, but you can run all checks on all files with pre-commit run --all-files.
