# aiSSEMBLE Inference Common Test Utilities

Reusable MLServer test utilities for aiSSEMBLE Inference modules and examples.
## Overview

This module provides consolidated MLServer lifecycle management utilities that eliminate code duplication across test suites. It offers two primary usage patterns:

- **Simple mode**: for examples with static model directories
- **Dynamic mode**: for module tests with temporary config generation
## Installation

```toml
# As a test dependency in pyproject.toml
[dependency-groups]
test = [
    "aissemble-inference-common-test",
]

[tool.uv.sources]
aissemble-inference-common-test = { path = "../aissemble-inference-common-test", editable = true }
```
## Usage

### Simple Mode (Examples)

For tests that use pre-configured model directories:

```python
# tests/features/environment.py
from pathlib import Path

from aissemble_inference_common_test.behave_helpers import (
    setup_mlserver_simple,
    teardown_mlserver,
)


def before_all(context):
    example_dir = Path(__file__).parent.parent.parent
    models_dir = example_dir / "models"
    setup_mlserver_simple(context, models_dir=models_dir, port=8080)
    context.mlserver_fixture.start()


def after_all(context):
    teardown_mlserver(context)
```
### Dynamic Mode (Module Tests)

For tests that generate model configurations dynamically:

```python
# tests/features/environment.py
from aissemble_inference_common_test.behave_helpers import (
    setup_mlserver_dynamic,
    teardown_mlserver,
    start_mlserver_with_model,
)


def before_all(context):
    setup_mlserver_dynamic(context)


def after_scenario(context, scenario):
    if hasattr(context, "mlserver_fixture") and context.mlserver_fixture.process:
        context.mlserver_fixture.stop()


def after_all(context):
    teardown_mlserver(context)
```

Then, in your step definitions:

```python
start_mlserver_with_model(
    context,
    model_name="yolo",
    runtime="aissemble_inference_yolo.YOLORuntime",
    model="yolov8n.pt",
)
```
## Features

- **Process Management**: robust MLServer subprocess handling with zombie process detection
- **Health Checking**: automatic polling of the `/v2/health/ready` endpoint
- **Graceful Shutdown**: terminate with a timeout, then kill if needed (with safety timeouts)
- **Dynamic Port Allocation**: automatic free port discovery
- **Config Generation**: JSON settings and model-settings creation
- **Context Manager Support**: automatic cleanup via Python's `with` statement
- **Error Logging**: detailed warnings for cleanup and shutdown failures
- **Backward Compatible**: drop-in replacement for existing test code
## API Reference

### MLServerFixture

The main fixture class for MLServer lifecycle management.

#### Basic Usage

```python
from pathlib import Path

from aissemble_inference_common_test import MLServerFixture

# Simple mode
fixture = MLServerFixture.simple(port=8080, models_dir=Path("models"))
fixture.start()

# Dynamic mode
fixture = MLServerFixture.dynamic()
fixture.start_with_model(
    model_name="yolo",
    runtime="aissemble_inference_yolo.YOLORuntime",
    model="yolov8n.pt",
)

# Cleanup
fixture.stop()
fixture.cleanup()
```
#### Context Manager Usage (Recommended)

For automatic cleanup even when tests fail:

```python
from pathlib import Path

from aissemble_inference_common_test import MLServerFixture

# Simple mode with context manager
with MLServerFixture.simple(port=8080, models_dir=Path("models")) as fixture:
    fixture.start()
    # Run tests...
# Automatic cleanup on exit

# Dynamic mode with context manager
with MLServerFixture.dynamic() as fixture:
    fixture.start_with_model(
        model_name="yolo",
        runtime="aissemble_inference_yolo.YOLORuntime",
        model="yolov8n.pt",
    )
    # Run tests...
# Automatic stop() and cleanup() on exit, even if an exception occurs
```
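The guarantee above comes from Python's context manager protocol: `__exit__` runs whether the `with` body finishes normally or raises. A simplified sketch of how a fixture can provide this (the class `FixtureSketch` is illustrative, not the real `MLServerFixture`):

```python
class FixtureSketch:
    """Illustrative stand-in showing the context-manager cleanup pattern."""

    def __init__(self) -> None:
        self.running = False
        self.cleaned_up = False

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False

    def cleanup(self) -> None:
        self.cleaned_up = True

    def __enter__(self) -> "FixtureSketch":
        return self

    def __exit__(self, exc_type, exc, tb) -> bool:
        # Always stop and clean up, even when the with-body raised.
        self.stop()
        self.cleanup()
        return False  # do not swallow exceptions; let test failures propagate
```

Returning `False` from `__exit__` is deliberate: cleanup happens, but the original test failure still surfaces to the runner.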
### Behave Helpers

- `setup_mlserver_simple(context, models_dir, port)`: initialize a simple-mode fixture
- `setup_mlserver_dynamic(context)`: initialize a dynamic-mode fixture
- `start_mlserver_with_model(context, model_name, runtime, **params)`: start the server with a generated config
- `teardown_mlserver(context)`: clean up and shut down
## License

Apache 2.0