A Python client for communicating with DSIS
DSIS Python Client
A Python SDK for the DSIS (DecisionSpace Integration Server) API Management system. Provides easy access to DSIS data through Equinor's Azure API Management gateway with built-in authentication and error handling.
Features
- Dual-Token Authentication: Handles both Azure AD and DSIS token acquisition automatically
- Easy Configuration: Simple dataclass-based configuration management
- Error Handling: Custom exceptions for different error scenarios
- Logging Support: Built-in logging for debugging and monitoring
- Type Hints: Full type annotations for better IDE support
- OData Support: Convenient methods for OData queries with full parameter support
- dsis-schemas Integration: Built-in support for model discovery, field inspection, and response deserialization
- Production Ready: Comprehensive error handling and validation
Installation
pip install dsis-client
Quick Start
Basic Usage
from dsis_client import DSISClient, DSISConfig, Environment, QueryBuilder
# Configure the client
config = DSISConfig(
environment=Environment.DEV,
tenant_id="your-tenant-id",
client_id="your-client-id",
client_secret="your-client-secret",
access_app_id="your-access-app-id",
dsis_username="your-username",
dsis_password="your-password",
subscription_key_dsauth="your-dsauth-key",
subscription_key_dsdata="your-dsdata-key",
dsis_site="",  # "dev" or "qa" for test, "prod" for production
)
# Create client
client = DSISClient(config)
# Build a query with model_name and model_version
query = QueryBuilder(
model_name="OW5000",
district_id="your-district-id",
project="your-project",
model_version="5000107", # optional, defaults to "5000107"
).schema("Well").select("well_name,well_uwi")
# Execute the query
for well in client.execute_query(query):
    print(well)
Advanced Usage
from dsis_client import DSISClient, DSISConfig, Environment, QueryBuilder
config = DSISConfig(
environment=Environment.DEV,
tenant_id="...",
client_id="...",
client_secret="...",
access_app_id="...",
dsis_username="...",
dsis_password="...",
subscription_key_dsauth="...",
subscription_key_dsdata="...",
dsis_site="",  # "dev" or "qa" for test, "prod" for production
)
client = DSISClient(config)
# Test connection
if client.test_connection():
    print("✓ Connected to DSIS API")
# Build queries with model_name and model_version on QueryBuilder
query = QueryBuilder(
model_name="OpenWorksCommonModel",
district_id="123",
project="wells",
).schema("Basin")
for basin in client.execute_query(query):
    print(basin)
# Query with field selection
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="wells",
).schema("Well").select("name,depth,status")
for well in client.execute_query(query):
    print(well)
# Query with filtering
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="wells",
).schema("Wellbore").filter("depth gt 1000")
for wellbore in client.execute_query(query):
    print(wellbore)
# Query with expand (related data)
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="wells",
).schema("WellLog").expand("logs,completions")
for log in client.execute_query(query):
    print(log)
# Refresh tokens if needed
client.refresh_authentication()
Working with dsis-schemas Models
The client provides built-in support for the dsis-schemas package, which provides Pydantic models for DSIS data structures.
Installation
# Basic installation (metadata/OData support only)
pip install dsis-client
# With protobuf support for bulk data decoding
pip install dsis-client dsis-schemas[protobuf]
Note: The DSIS API serves data in two formats:
- Metadata: Via OData (JSON) - entity properties, relationships, statistics
- Bulk Data: Via Protocol Buffers (binary) - large arrays like horizon z-values, log curves, seismic amplitudes
Install dsis-schemas[protobuf] to decode binary bulk data fields.
QueryBuilder: Build OData Queries
The QueryBuilder provides a fluent interface for building OData queries. QueryBuilder IS the query object - no need to call .build().
Basic Usage
from dsis_client import QueryBuilder, DSISClient, DSISConfig
# Create a query with model_name, district_id, and project
query = QueryBuilder(
model_name="OW5000",
district_id="OpenWorks_OW_SV4TSTA_SingleSource-OW_SV4TSTA",
project="SNORRE",
).schema("Well").select("name,depth")
# Execute the query with client
client = DSISClient(config)
for well in client.execute_query(query):
    print(well)
# Build a complex query with chaining
query = (
QueryBuilder(
model_name="OW5000",
district_id="123",
project="wells",
)
.schema("Well")
.select("name", "depth", "status")
.filter("depth gt 1000")
.expand("wellbores")
)
for well in client.execute_query(query):
    print(well)
# Reuse builder for multiple queries
builder = QueryBuilder(
model_name="OW5000",
district_id="123",
project="wells",
)
# Query 1
query1 = builder.schema("Well").select("name,depth")
wells = list(client.execute_query(query1))
# Query 2 (reset builder for new query)
query2 = builder.reset().schema("Basin").select("id,name")
basins = list(client.execute_query(query2))
Using Model Classes with Auto-Casting
For type-safe result casting, use model classes from dsis_model_sdk:
from dsis_client import QueryBuilder
from dsis_model_sdk.models.common import Well, Basin, Fault
# Use schema() with model class for type-safe casting
query = (
QueryBuilder(
model_name="OW5000",
district_id="123",
project="wells",
)
.schema(Basin)
.select("basin_name", "basin_id", "native_uid")
)
# Option 1: Auto-cast results with execute_query
for basin in client.execute_query(query, cast=True):
    print(f"Basin: {basin.basin_name}")  # Type-safe access with IDE autocomplete
# Option 2: Manual cast with client.cast_results()
all_items = list(client.execute_query(query))
basins = client.cast_results(all_items, Basin)
# Import models directly from dsis_model_sdk
from dsis_model_sdk.models.common import Well, Fault
from dsis_model_sdk.models.native import Well as WellNative
Get Model Information
# Get a model class by name
Well = client.get_model_by_name("Well")
Basin = client.get_model_by_name("Basin")
# Get model from native domain
WellNative = client.get_model_by_name("Well", domain="native")
# Get field information for a model
fields = client.get_model_fields("Well")
print(fields.keys()) # All available fields
Deserialize API Responses
# Get data from API
response = client.get_odata("123", "wells", data_table="Well")
# Deserialize to typed model
well = client.deserialize_response(response, "Well")
print(well.well_name) # Type-safe access with IDE support
print(well.depth) # Automatic validation
Available Models
Common models include: Well, Wellbore, WellLog, Basin, Horizon, Fault, Seismic2D, Seismic3D, and many more.
For a complete list, see the dsis-schemas documentation.
Working with Binary Bulk Data (Protobuf)
Some DSIS entities contain large binary data fields (e.g., horizon z-values, log curves, seismic amplitudes) that are served as Protocol Buffers. The dsis-schemas package provides decoders to work with this data.
Installation for Protobuf Support
# Install with protobuf support for bulk data decoding
pip install dsis-schemas[protobuf]
Supported Bulk Data Types
- Horizon 3D (HorizonData3D) - Interpreted surface z-values
- Log Curves (LogCurve) - Well log measurements vs depth/time
- Seismic 3D (SeismicDataSet3D) - 3D seismic amplitude volume
- Seismic 2D (SeismicDataSet2D) - 2D seismic trace data
- Tabular - Generic tabular structures
Example: Decoding Horizon Data
Option 1: Query metadata and binary data together (includes data in response)
import numpy as np
from dsis_model_sdk.models.common import HorizonData3D
from dsis_model_sdk.protobuf import decode_horizon_data
from dsis_model_sdk.utils.protobuf_decoders import horizon_to_numpy
# Step 1: Query for horizon metadata (including binary data field)
query = QueryBuilder(
model_name="OW5000",
district_id=district_id,
project=project,
).schema(HorizonData3D).select("horizon_name,horizon_mean,horizon_mean_unit,data,native_uid")
horizons = list(client.execute_query(query, cast=True, max_pages=1))
# Step 2: Cast to model and decode binary data field
horizon = horizons[0]
print(f"Horizon: {horizon.horizon_name}")
print(f"Mean depth: {horizon.horizon_mean} {horizon.horizon_mean_unit}")
# Step 3: Decode binary bulk data field
if horizon.data:
    decoded = decode_horizon_data(horizon.data)
    # Step 4: Convert to NumPy array for analysis
    array, metadata = horizon_to_numpy(decoded)
    print(f"Grid shape: {array.shape}")
    print(f"Data coverage: {(~np.isnan(array)).sum() / array.size * 100:.1f}%")
    # Use the data
    valid_data = array[~np.isnan(array)]
    print(f"Depth range: {np.min(valid_data):.2f} - {np.max(valid_data):.2f}")
Option 2: Fetch binary data separately (more efficient for large data)
import numpy as np
from dsis_model_sdk.models.common import HorizonData3D
from dsis_model_sdk.protobuf import decode_horizon_data
from dsis_model_sdk.utils.protobuf_decoders import horizon_to_numpy
# Step 1: Query for horizon metadata only (exclude large binary data field)
query = QueryBuilder(
model_name="OW5000",
district_id=district_id,
project=project,
).schema(HorizonData3D).select("horizon_name,horizon_mean,horizon_mean_unit,native_uid")
horizons = list(client.execute_query(query, cast=True))
# Step 2: Fetch binary data separately for specific horizon
horizon = horizons[0]
print(f"Horizon: {horizon.horizon_name}")
# Fetch binary data - pass entity object directly!
binary_data = client.get_bulk_data(
schema=HorizonData3D,
native_uid=horizon, # Pass entity object directly
query=query # Automatically extracts district_id and project
)
# Step 3: Decode binary bulk data
decoded = decode_horizon_data(binary_data)
# Step 4: Convert to NumPy array for analysis
array, metadata = horizon_to_numpy(decoded)
print(f"Grid shape: {array.shape}")
print(f"Data coverage: {(~np.isnan(array)).sum() / array.size * 100:.1f}%")
# Use the data
valid_data = array[~np.isnan(array)]
print(f"Depth range: {np.min(valid_data):.2f} - {np.max(valid_data):.2f}")
Example: Decoding Log Curve Data
from dsis_model_sdk.protobuf import decode_log_curves
from dsis_model_sdk.utils.protobuf_decoders import log_curve_to_dict
# Query for log curve metadata (exclude binary data for efficiency)
query = QueryBuilder(
model_name="OW5000",
district_id=district_id,
project=project,
).schema("LogCurve").select("log_curve_name,native_uid")
log_curves = list(client.execute_query(query, max_pages=1))
# Fetch binary data for specific log curve - pass entity object directly!
log_curve = log_curves[0]
binary_data = client.get_bulk_data(
schema="LogCurve",
native_uid=log_curve, # Pass entity object directly
query=query # Automatically extracts district_id and project
)
# Decode log curve binary data
decoded = decode_log_curves(binary_data)
print(f"Curve type: {'DEPTH' if decoded.curve_type == decoded.DEPTH else 'TIME'}")
print(f"Index range: {decoded.index.start_index} to {decoded.index.start_index + decoded.index.number_of_index * decoded.index.increment}")
# Convert to dict for easier access
data = log_curve_to_dict(decoded)
for curve_name, curve_data in data['curves'].items():
    print(f"Curve: {curve_name}")
    print(f"  Unit: {curve_data['unit']}")
    print(f"  Values: {len(curve_data['values'])} samples")
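Given the index fields printed above (start_index, increment, number_of_index), the depth/time axis of a log curve can be reconstructed as a regularly sampled array. A minimal sketch with made-up values standing in for the decoded index:

```python
import numpy as np

# Hypothetical values standing in for decoded.index fields
start_index, increment, number_of_index = 1000.0, 0.5, 4

# One axis entry per sample: start + n * increment
index_axis = start_index + increment * np.arange(number_of_index)
print(index_axis)  # [1000.  1000.5 1001.  1001.5]
```

Each curve's `values` array then lines up one-to-one with this axis.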
Example: Decoding Seismic Data
import numpy as np
from dsis_model_sdk.models.common import SeismicDataSet3D
from dsis_model_sdk.protobuf import decode_seismic_float_data
from dsis_model_sdk.utils.protobuf_decoders import seismic_3d_to_numpy
# Query for seismic dataset metadata (exclude binary data - it's very large!)
query = QueryBuilder(
model_name="OW5000",
district_id=district_id,
project=project,
).schema(SeismicDataSet3D).select("seismic_dataset_name,native_uid")
seismic_datasets = list(client.execute_query(query, cast=True))
# Fetch binary data separately for specific seismic dataset
seismic = seismic_datasets[0]
print(f"Fetching seismic data for: {seismic.seismic_dataset_name}")
# For large datasets, use streaming to avoid loading everything into memory at once
chunks = []
for chunk in client.get_bulk_data_stream(
    schema=SeismicDataSet3D,
    native_uid=seismic,  # Pass entity object directly
    query=query,  # Automatically extracts district_id and project
    chunk_size=10*1024*1024,  # 10MB chunks (DSIS recommended)
):
    chunks.append(chunk)
    print(f"Downloaded {len(chunk):,} bytes")
# Combine chunks and decode
binary_data = b''.join(chunks)
decoded = decode_seismic_float_data(binary_data)
array, metadata = seismic_3d_to_numpy(decoded)
print(f"Volume shape: {array.shape}") # (traces_i, traces_j, samples_k)
print(f"Memory size: {array.nbytes / 1024 / 1024:.2f} MB")
print(f"Amplitude range: {np.min(array):.2f} to {np.max(array):.2f}")
# Extract a single trace
trace = array[100, 100, :]
print(f"Trace samples: {len(trace)}")
Example: Decoding Surface Grid Data
Surface grids (e.g., SurfaceGrid in OpenWorksCommonModel) use the LGCStructure protobuf format, which is a generic tabular data structure from Landmark Graphics Corporation. Each grid is represented as a collection of elements (columns), where each element contains an array of values.
from io import BytesIO
from dsis_model_sdk.protobuf import decode_lgc_structure, LGCStructure_pb2
# Query for surface grid metadata
query = QueryBuilder(
model_name="OpenWorksCommonModel",
district_id=district_id,
project=project,
).schema("SurfaceGrid").select("native_uid,grid_name")
grids = list(client.execute_query(query, cast=True, max_pages=1))
# Fetch binary data for a specific grid
grid = grids[0]
print(f"Downloading grid: {grid.grid_name or grid.native_uid}")
# Build the endpoint URL (SurfaceGrid uses /$value suffix)
endpoint_path = f"{query.model_name}/{query.model_version}/{district_id}/{project}/SurfaceGrid('{grid.native_uid}')/$value"
full_url = f"{config.data_endpoint}/{endpoint_path}"
# Get the binary data
headers = client.auth.get_auth_headers()
headers["Accept"] = "application/json"
response = client._session.get(full_url, headers=headers)
data = response.content
print(f"Downloaded {len(data):,} bytes")
# LGCStructure data is length-prefixed with varint encoding
def read_varint(stream):
    """Read a varint length prefix from stream."""
    shift = 0
    result = 0
    while True:
        byte_data = stream.read(1)
        if not byte_data:
            return 0
        byte = byte_data[0]
        result |= (byte & 0x7F) << shift
        if not (byte & 0x80):
            return result
        shift += 7
# Parse the length-prefixed message
stream = BytesIO(data)
size = read_varint(stream)
message_data = stream.read(size)
# Decode the LGCStructure
lgc = decode_lgc_structure(message_data)
print(f"Structure name: {lgc.structName}")
print(f"Number of elements: {len(lgc.elements)}")
# Process grid elements (columns)
for i, el in enumerate(lgc.elements[:5]):  # Show first 5 elements
    data_type = LGCStructure_pb2.LGCStructure.LGCElement.DataType.Name(el.dataType)
    if el.dataType == LGCStructure_pb2.LGCStructure.LGCElement.DataType.FLOAT:
        values = el.data_float
    elif el.dataType == LGCStructure_pb2.LGCStructure.LGCElement.DataType.DOUBLE:
        values = el.data_double
    elif el.dataType == LGCStructure_pb2.LGCStructure.LGCElement.DataType.INT:
        values = el.data_int
    else:
        values = []
    print(f"Element {i}: '{el.elementName}', Type: {data_type}, Values: {len(values):,}")
# For a typical surface grid:
# - Each element represents a row or column in the grid
# - Values are typically FLOAT type representing Z-values (elevation/depth)
# - Missing/null values are often represented as -99999.0 or similar sentinel values
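To work with the sentinel null values mentioned above, a common approach is to convert them to NaN before analysis so NumPy statistics can skip them. A sketch assuming the -99999.0 convention (verify the actual sentinel for your data):

```python
import numpy as np

# Example z-values using the assumed -99999.0 null sentinel
z = np.array([12.5, -99999.0, 13.1, -99999.0, 14.0])

# Replace sentinels with NaN so nan-aware statistics ignore them
z_masked = np.where(z == -99999.0, np.nan, z)
print(np.nanmin(z_masked), np.nanmax(z_masked))  # 12.5 14.0
```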
Important Notes:
- Binary bulk data fields are typically large. Make sure to:
  - Select only the entities you need with appropriate filters
  - Consider memory constraints when working with seismic volumes
  - Use NumPy for efficient array operations
- The data field in models like HorizonData3D, LogCurve, and SeismicDataSet3D contains the binary protobuf data
- Always check that the data field exists before attempting to decode it
- API Endpoint Format: The binary data endpoint is /{Schema}('{native_uid}')/data (no /$value suffix)
- Accept Header: The API returns binary protobuf data with the Accept: application/json header (not application/octet-stream)
Configuration
Environment
The client supports three environments:
- Environment.DEV - Development environment
- Environment.QA - Quality Assurance environment
- Environment.PROD - Production environment
Configuration Parameters
| Parameter | Required | Default | Description |
|---|---|---|---|
| environment | Yes | - | Target environment (DEV, QA, or PROD) |
| tenant_id | Yes | - | Azure AD tenant ID |
| client_id | Yes | - | Azure AD client/application ID |
| client_secret | Yes | - | Azure AD client secret |
| access_app_id | Yes | - | Azure AD access application ID for token resource |
| dsis_username | Yes | - | DSIS username for authentication |
| dsis_password | Yes | - | DSIS password for authentication |
| subscription_key_dsauth | Yes | - | APIM subscription key for dsauth endpoint |
| subscription_key_dsdata | Yes | - | APIM subscription key for dsdata endpoint |
| dsis_site | No | "qa" | DSIS site header |
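Most of these parameters are secrets, so a common pattern is to load them from environment variables rather than hard-coding them. A minimal sketch using hypothetical variable names (adapt the names to your own deployment, then pass the values into DSISConfig):

```python
import os

# Hypothetical environment-variable names; map each entry into the
# DSISConfig field of the same name from the table above.
settings = {
    "tenant_id": os.environ.get("DSIS_TENANT_ID", ""),
    "client_id": os.environ.get("DSIS_CLIENT_ID", ""),
    "client_secret": os.environ.get("DSIS_CLIENT_SECRET", ""),
    "dsis_username": os.environ.get("DSIS_USERNAME", ""),
    "dsis_password": os.environ.get("DSIS_PASSWORD", ""),
    "dsis_site": os.environ.get("DSIS_SITE", "qa"),  # "qa" default per the table
}
print(sorted(settings))
```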
Error Handling
The client provides specific exception types for different error scenarios:
from dsis_client import (
DSISClient,
DSISConfig,
DSISAuthenticationError,
DSISAPIError,
DSISConfigurationError
)
try:
    client = DSISClient(config)
    data = client.get_odata("OW5000")
except DSISConfigurationError as e:
    print(f"Configuration error: {e}")
except DSISAuthenticationError as e:
    print(f"Authentication failed: {e}")
except DSISAPIError as e:
    print(f"API request failed: {e}")
Exception Types
- DSISException - Base exception for all DSIS client errors
- DSISConfigurationError - Raised when configuration is invalid or incomplete
- DSISAuthenticationError - Raised when authentication fails (Azure AD or DSIS token)
- DSISAPIError - Raised when an API request fails
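Because every client error derives from DSISException, a single broad handler covers them all. A stand-in sketch of the hierarchy to illustrate this (class names match the list above; the bodies and the fetch helper are purely illustrative):

```python
# Stand-in mirror of the exception hierarchy, for illustration only
class DSISException(Exception):
    """Base exception for all DSIS client errors."""

class DSISConfigurationError(DSISException): pass
class DSISAuthenticationError(DSISException): pass
class DSISAPIError(DSISException): pass

def fetch():
    # Simulate a failed API request
    raise DSISAPIError("API request failed")

try:
    fetch()
except DSISException as exc:  # catches any of the subclasses
    caught = type(exc).__name__
print(caught)  # DSISAPIError
```

Catch the specific subclasses when you need different recovery paths, and the base class when any failure should be handled the same way.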
Logging
The client includes built-in logging support. Enable debug logging to see detailed information:
import logging
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("dsis_client")
# Now use the client
client = DSISClient(config)
data = client.get_odata("OW5000")
API Methods
get(district_id=None, project=None, data_table=None, format_type="json", select=None, expand=None, filter=None, **extra_query)
Make a GET request to the DSIS OData API.
Constructs the OData endpoint URL following the pattern:
/<model_name>/<version>[/<district_id>][/<project>][/<data_table>]
All path segments are optional and can be omitted. The data_table parameter refers to specific data models from dsis-schemas (e.g., "Basin", "Well", "Wellbore", "WellLog", etc.).
Parameters:
- district_id: Optional district ID for the query
- project: Optional project name for the query
- data_table: Optional data table/model name (e.g., "Basin", "Well", "Wellbore"). If None, uses the configured model_name
- format_type: Response format (default: "json")
- select: OData $select parameter for field selection (comma-separated field names)
- expand: OData $expand parameter for related data (comma-separated related entities)
- filter: OData $filter parameter for filtering (OData filter expression)
- **extra_query: Additional OData query parameters
Returns: Dictionary containing the parsed API response
Example:
# Get using just model and version
data = client.get()
# Get Basin data for a district and project
data = client.get("123", "SNORRE", data_table="Basin")
# Get with field selection
data = client.get("123", "SNORRE", data_table="Well", select="name,depth,status")
# Get with filtering
data = client.get("123", "SNORRE", data_table="Well", filter="depth gt 1000")
# Get with expand (related data)
data = client.get("123", "SNORRE", data_table="Well", expand="logs,completions")
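The optional-segment path pattern described above can be sketched as a small helper; build_path is a hypothetical name for illustration, not part of the client API:

```python
def build_path(model_name, version, district_id=None, project=None, data_table=None):
    """Compose /<model_name>/<version>[/<district_id>][/<project>][/<data_table>]."""
    parts = [model_name, version]
    # Append only the optional segments that were actually provided
    parts += [seg for seg in (district_id, project, data_table) if seg]
    return "/" + "/".join(parts)

print(build_path("OW5000", "5000107"))                           # /OW5000/5000107
print(build_path("OW5000", "5000107", "123", "SNORRE", "Well"))  # /OW5000/5000107/123/SNORRE/Well
```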
get_odata(district_id=None, project=None, data_table=None, format_type="json", select=None, expand=None, filter=None, **extra_query)
Convenience method for retrieving OData. Delegates to get() method.
Parameters:
- district_id: Optional district ID for the query
- project: Optional project name for the query
- data_table: Optional data table/model name (e.g., "Basin", "Well", "Wellbore"). If None, uses the configured model_name
- format_type: Response format (default: "json")
- select: OData $select parameter for field selection (comma-separated field names)
- expand: OData $expand parameter for related data (comma-separated related entities)
- filter: OData $filter parameter for filtering (OData filter expression)
- **extra_query: Additional OData query parameters
Returns: Dictionary containing the parsed OData response
Example:
# Get using just model and version
data = client.get_odata()
# Get Basin data for a district and project
data = client.get_odata("123", "SNORRE", data_table="Basin")
# Get with field selection
data = client.get_odata("123", "SNORRE", data_table="Well", select="name,depth,status")
# Get with filtering
data = client.get_odata("123", "SNORRE", data_table="Well", filter="depth gt 1000")
# Get with expand
data = client.get_odata("123", "SNORRE", data_table="Well", expand="logs,completions")
execute_query(query, cast=False, max_pages=-1)
Execute a QueryBuilder query.
Parameters:
- query: QueryBuilder instance
- cast: If True and the query has a schema class, automatically cast results to model instances (default: False)
- max_pages: Maximum number of pages to fetch. -1 (default) fetches all pages
Returns:
- Generator that yields items from the result pages (or model instances if cast=True)
Raises:
- TypeError if query is not a QueryBuilder instance
- ValueError if cast=True but query has no schema class
Example:
from dsis_model_sdk.models.common import Basin
# Build query with QueryBuilder
query = QueryBuilder(model_name="OW5000", district_id="123", project="SNORRE").schema(Basin).select("basin_name,basin_id")
# Option 1: Iterate over results (memory efficient)
for basin in client.execute_query(query, cast=True):
    print(basin.basin_name)
# Option 2: Collect all results into a list
basins = list(client.execute_query(query, cast=True))
print(f"Total: {len(basins)} basins")
# Option 3: Fetch only first page
first_page = list(client.execute_query(query, cast=True, max_pages=1))
get_bulk_data(schema, native_uid, district_id=None, project=None, data_field="data")
Fetch binary bulk data (protobuf) for a specific entity.
The DSIS API serves large binary data fields (horizon z-values, log curves, seismic amplitudes) as Protocol Buffers via a special OData endpoint: /{schema}('{native_uid}')/{data_field}/$value
Parameters:
- schema: Schema name (e.g., "HorizonData3D", "LogCurve", "SeismicDataSet3D")
- native_uid: The native_uid of the entity
- district_id: Optional district ID (if required by API)
- project: Optional project name (if required by API)
- data_field: Name of the binary data field (default: "data")
Returns:
- Binary protobuf data as bytes
Raises:
- DSISAPIError if the API request fails
Example:
from dsis_model_sdk.protobuf import decode_horizon_data
# Fetch binary data for a specific horizon
binary_data = client.get_bulk_data(
schema="HorizonData3D",
native_uid="horizon_123",
district_id="123",
project="SNORRE"
)
# Decode the protobuf data
decoded = decode_horizon_data(binary_data)
get_model_by_name(model_name, domain="common")
Get a dsis-schemas model class by name.
Parameters:
- model_name: Name of the model (e.g., "Well", "Basin", "Wellbore")
- domain: Domain to search in - "common" or "native" (default: "common")
Returns: The model class if found, None otherwise
Raises: ImportError if dsis_schemas package is not installed
Example:
Well = client.get_model_by_name("Well")
WellNative = client.get_model_by_name("Well", domain="native")
get_model_fields(model_name, domain="common")
Get field information for a dsis-schemas model.
Parameters:
- model_name: Name of the model (e.g., "Well", "Basin")
- domain: Domain to search in - "common" or "native" (default: "common")
Returns: Dictionary of field names and their information
Raises: ImportError if dsis_schemas package is not installed
Example:
fields = client.get_model_fields("Well")
print(fields.keys()) # All available fields
deserialize_response(response, model_name, domain="common")
Deserialize API response to a dsis-schemas model instance.
Parameters:
- response: API response dictionary
- model_name: Name of the model to deserialize to (e.g., "Well", "Basin")
- domain: Domain to search in - "common" or "native" (default: "common")
Returns: Deserialized model instance
Raises: ImportError if dsis_schemas package is not installed, ValueError if deserialization fails
Example:
response = client.get_odata("123", "wells", data_table="Well")
well = client.deserialize_response(response, "Well")
print(well.well_name) # Type-safe access
QueryBuilder API
QueryBuilder(model_name, district_id, project, model_version="5000107")
Create a new query builder instance. QueryBuilder IS the query object - no need to call .build().
Parameters:
- model_name: DSIS model name (e.g., "OW5000" or "OpenWorksCommonModel") (required)
- district_id: District ID for the query (required)
- project: Project name for the query (required)
- model_version: Model version (default: "5000107")
Example:
# Create a query builder with required parameters
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="SNORRE",
)
# Chain methods to build the query
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="SNORRE",
).schema("Well").select("name,depth")
schema(schema)
Set the schema (data table) using a name or model class.
Parameters:
schema: Schema name (e.g., "Well", "Basin") or dsis_model_sdk model class
Returns: Self for chaining
Example:
# Using schema name
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="SNORRE",
).schema("Well")
# Using model class for type-safe casting
from dsis_model_sdk.models.common import Basin
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="SNORRE",
).schema(Basin)
select(*fields)
Add fields to the $select parameter.
Parameters:
*fields: Field names to select (can be comma-separated or individual)
Returns: Self for chaining
Example:
builder.select("name", "depth", "status")
builder.select("name,depth,status")
expand(*relations)
Add relations to the $expand parameter.
Parameters:
*relations: Relation names to expand (can be comma-separated or individual)
Returns: Self for chaining
Example:
builder.expand("wells", "horizons")
builder.expand("wells,horizons")
filter(filter_expr)
Set the $filter parameter.
Parameters:
filter_expr: OData filter expression (e.g., "depth gt 1000")
Returns: Self for chaining
Example:
builder.filter("depth gt 1000")
builder.filter("name eq 'Well-1'")
get_query_string()
Get the full OData query string for this query.
Returns: Full query string (e.g., "Well?$format=json&$select=name,depth")
Raises: ValueError if schema is not set
Example:
query = QueryBuilder(model_name="OW5000", district_id="123", project="SNORRE").schema("Well").select("name,depth")
print(query.get_query_string())
# Returns: "Well?$format=json&$select=name,depth"
reset()
Reset the builder to initial state (clears schema, select, expand, filter, format).
Note: Does not reset district_id or project set in constructor.
Returns: Self for chaining
Example:
builder = QueryBuilder(
model_name="OW5000",
district_id="123",
project="SNORRE",
)
builder.schema("Well").select("name")
builder.reset() # Clears schema and select, keeps model_name, district_id and project
builder.schema("Basin").select("id") # Reuse for new query
DSISClient Casting Methods
cast_results(results, schema_class)
Cast API response items to model instances.
Parameters:
- results: List of dictionaries from the API response (typically response["value"])
- schema_class: Pydantic model class to cast to (e.g., Basin, Well)
Returns: List of model instances
Raises: ValidationError if any result doesn't match schema
Example:
from dsis_model_sdk.models.common import Basin
query = QueryBuilder(
model_name="OW5000",
district_id="123",
project="SNORRE",
).schema(Basin).select("basin_name,basin_id")
all_items = list(client.execute_query(query))
basins = client.cast_results(all_items, Basin)
for basin in basins:
    print(f"Basin: {basin.basin_name}")
Result Casting with QueryBuilder
QueryBuilder supports automatic casting when used with model classes:
from dsis_client import QueryBuilder, DSISClient
from dsis_model_sdk.models.common import Basin
# Set schema with model class
query = QueryBuilder(model_name="OW5000", district_id="123", project="SNORRE").schema(Basin).select("basin_name,basin_id,native_uid")
# Option 1: Auto-cast with execute_query
for basin in client.execute_query(query, cast=True):
    print(f"Basin: {basin.basin_name}")  # Type-safe access with IDE autocomplete
# Option 2: Manual cast with client.cast_results()
all_items = list(client.execute_query(query))
basins = client.cast_results(all_items, Basin)
test_connection()
Test the connection to the DSIS API.
Returns: True if connection is successful, False otherwise
Example:
if client.test_connection():
    print("✓ Connected to DSIS API")
refresh_authentication()
Refresh both Azure AD and DSIS tokens.
Example:
client.refresh_authentication()
Contributing
License
This project is licensed under the terms of the MIT license.