
A Python library to help with WITSML v1.4.1.1 and v2.0

Project description

Komle Plus

komle-plus is a fork of komle, a Python library for WITSML that uses PyXB-E to marshal/unmarshal XML files according to the schemas.

What is this fork for?

At work I use some closed-source tools that depend on komle, some of which I still use. This repository carries the patches needed for komle to work for me. The goal is to keep a published copy available on PyPI, along with an automated test pipeline.

Some of the features are:

  • WITSML data model bindings for schema v1.4.1.1 and v2.0
    • Note that only one version can be used in the same runtime, due to namespace collisions
  • WITSML to dict, for use in a pandas dataframe or json
  • Unit converter based on witsmlUnitDict
  • Soap client to request data from a witsml server, according to the webservice description
  • Validation that XML files conform to the WITSML schema
  • Support for the generated write schemas, to be used for WMLS_AddToStore
    • Note that write_bindings cannot be imported at the same time as read_bindings. See below for details.

Installation from GitHub

Clone the repository:

git clone https://github.com/HemersonRafael/komle-plus

Then, from the cloned repo, run:

make install

Installation from PyPI

pip install komle-plus

Getting started

from komle.bindings.v1411.read import witsml
from komle import utils as ku
import pandas as pd # Not part of komle setup

with open('log.xml', 'r') as log_file:
    # logs is a regular python object according to the witsml schema
    logs = witsml.CreateFromDocument(log_file.read())

# Print the witsml documentation for logs
print(logs._element().documentation())

# Print the schema location for logCurveInfo, nice to have for reference
print(logs.log[0].logCurveInfo[0]._element().xsdLocation().locationBase)

print([l.name for l in logs.log])

# Convert logdata to a dict
log = logs.log[0]

data_dict = ku.logdata_dict(log)

# Create a dataframe, if you have installed pandas
df_data = pd.DataFrame(data_dict)

# Do the same for the plural logCurveInfo element
df_curve = pd.DataFrame(ku.plural_dict(log.logCurveInfo))

witsml.CreateFromDocument works on any WITSML object, like trajectorys, mudLogs, tubulars, etc., and returns a Python representation according to the schema. Nodes are converted to their corresponding Python types and accessed like any other Python object. The exception is leaf nodes with attributes, where one must call value(), since primitive types in Python do not support custom attributes. For example, mdTop.value(), where mdTop also has the attribute mdTop.uom; see also examples/hello_witsml.py.

Usage of different schemas

The difference between the schemas is described in the WITSML documentation. In summary,

  • Read Schemas: [...] a copy of the normative files except that all choices, elements and attributes are optional. [...] these schema files must represent the XMLout response from the WITSML WMLS_GetFromStore method.
  • Write Schemas: [...] a copy of the normative files except that some unique identifier attributes have had their optionality changed. [...] these schema files must represent the XMLin input to the WITSML WMLS_AddToStore method.
  • Update Schemas (not currently supported): [...] a copy of the normative files with all elements and attributes optional except that all unique identifier attributes and uom attributes are mandatory. [...] these schema files must represent the XMLin input to the WITSML WMLS_UpdateInStore method.
  • Delete Schemas (not currently supported): [...] a copy of the normative files with all elements and attributes optional except for all object uids and parentage-pointers which are mandatory. [...] these schema files must represent the QueryIn input to the WITSML WMLS_DeleteFromStore method.

As a practical matter, any program that needs to work with both read and write (and update/delete) should import only the read bindings, since they have the fewest restrictions. The read bindings will also be valid for write/update/delete, as long as the mandatory elements and attributes are present. To validate against the write schema, a separate test program that imports only the write bindings must be used. (Note that the multiprocessing module cannot easily be used for this purpose, since child processes inherit the parent process's imports.)
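One way to run such a separate validation pass is to spawn a fresh interpreter that imports only the write bindings; the helper and child script below are an illustrative sketch, not part of komle:

```python
import subprocess
import sys
import textwrap

# Child script run in a fresh interpreter, so the parent process can
# keep using the read bindings without a namespace collision.
CHILD_SCRIPT = textwrap.dedent("""
    import sys
    from komle.bindings.v1411.write import witsml

    with open(sys.argv[1]) as f:
        witsml.CreateFromDocument(f.read())  # raises if not valid for write
""")

def validate_for_write(path: str) -> bool:
    """Return True if `path` parses under the write bindings."""
    result = subprocess.run(
        [sys.executable, "-c", CHILD_SCRIPT, path],
        capture_output=True,
    )
    return result.returncode == 0
```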


Download files


Source Distribution

komle-plus-0.3.1.tar.gz (3.4 MB)


Built Distribution

komle_plus-0.3.1-py3-none-any.whl (3.6 MB)


File details

Details for the file komle-plus-0.3.1.tar.gz.

File metadata

  • Download URL: komle-plus-0.3.1.tar.gz
  • Upload date:
  • Size: 3.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.9.12

File hashes

Hashes for komle-plus-0.3.1.tar.gz
Algorithm Hash digest
SHA256 d313f463a8da3960c7c0cf361f2c63f9ca4d4bbd8c30e6d19ca5bf202729e3b0
MD5 4f3f6248414939b3af257bcab072b3a2
BLAKE2b-256 72ef34294aab3052350b0485865def07a24d2ee2a63522921a3ea2a52d583601


File details

Details for the file komle_plus-0.3.1-py3-none-any.whl.

File metadata

  • Download URL: komle_plus-0.3.1-py3-none-any.whl
  • Upload date:
  • Size: 3.6 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.9.12

File hashes

Hashes for komle_plus-0.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 51f11c19fcc758b24d1aff29dfb4dfbefb0e9265d02e7e8e591e69aeb47b9369
MD5 ef6915f0c837042ba65d823e86fe4082
BLAKE2b-256 aefa10a8cbe871c9ff1f7ee252bf95df54599e4fa2aef2eee9a9f612f5191ca5

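A downloaded file can be checked against the published digests above with Python's standard hashlib; a small sketch:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published SHA256 for the sdist:
# sha256_of("komle-plus-0.3.1.tar.gz") should equal
# "d313f463a8da3960c7c0cf361f2c63f9ca4d4bbd8c30e6d19ca5bf202729e3b0"
```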
