OpenGRIS Scaler Distribution Framework
Project description
OpenGRIS Scaler provides a simple, efficient, and reliable way to perform distributed computing using a centralized scheduler, with a stable and language-agnostic protocol for client and worker communications.
import math

from scaler import Client

with Client(address="tcp://127.0.0.1:2345") as client:
    # Compute a single task using `.submit()`
    future = client.submit(math.sqrt, 16)
    print(future.result())  # 4.0

    # Submit multiple tasks with `.map()` - works like Python's built-in map()
    results = client.map(math.sqrt, range(100))
    print(sum(results))  # 661.46

    # For functions with multiple arguments, use multiple iterables or `.starmap()`
    def add(x, y):
        return x + y

    client.map(add, [1, 2, 3], [10, 20, 30])          # [11, 22, 33]
    client.starmap(add, [(1, 10), (2, 20), (3, 30)])  # [11, 22, 33]
OpenGRIS Scaler is a suitable Dask replacement, offering significantly better scheduling performance for jobs with a large number of lightweight tasks, while also improving load balancing, messaging, and deadlock handling.
Features
- Distributed computing across multiple cores and multiple servers
- Python reference implementation, with language-agnostic messaging protocol built on top of Cap'n Proto and ZeroMQ
- Graph scheduling, which supports Dask-like graph computing, with optional GraphBLAS support for very large graph tasks
- Automated load balancing, which automatically balances load from busy workers to idle workers, ensuring uniform utilization across workers
- Automated task recovery from worker-related hardware, OS, or network failures
- Support for nested tasks, allowing tasks to submit new tasks
- top-like monitoring tools
- GUI monitoring tool
Installation
Scaler is available on PyPI and can be installed using any compatible package manager.
$ pip install opengris-scaler
# or with GraphBLAS, uvloop, and web GUI support
$ pip install opengris-scaler[graphblas,uvloop,gui]
# or simply
$ pip install opengris-scaler[all]
Quick Start
The official documentation is available at finos.github.io/opengris-scaler/.
Scaler has four main components:
- A scheduler, responsible for routing tasks to available computing resources.
- An object storage server that stores the task data objects (task arguments and task results).
- A set of workers that form a cluster. Workers are independent computing units, each capable of executing a single task at a time.
- Clients running inside applications, responsible for submitting tasks to the scheduler.
Please note that clients are cross-platform, supporting Windows and GNU/Linux, while the other components can only run on GNU/Linux.
Starting a local scheduler and cluster programmatically
A local scheduler and a local set of workers can be conveniently started using SchedulerClusterCombo:
from scaler import SchedulerClusterCombo
cluster = SchedulerClusterCombo(address="tcp://127.0.0.1:2345", n_workers=4)
...
cluster.shutdown()
This will start a scheduler with 4 workers on port 2345.
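Putting the two pieces together, a minimal end-to-end sketch (the address, port, and worker count are arbitrary example values):

from scaler import Client, SchedulerClusterCombo

def square(value: int) -> int:
    return value * value

# Start a local scheduler with 4 workers, then connect a client to it
cluster = SchedulerClusterCombo(address="tcp://127.0.0.1:2345", n_workers=4)
try:
    with Client(address="tcp://127.0.0.1:2345") as client:
        print(client.submit(square, 3).result())  # 9
finally:
    cluster.shutdown()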
Setting up a computing cluster from the CLI
The object storage server, scheduler and workers can also be started from the command line with
scaler_object_storage_server, scaler_scheduler and scaler_worker_manager.
First, start the object storage server and scheduler:
$ scaler_object_storage_server "tcp://127.0.0.1:2346"
$ scaler_scheduler "tcp://127.0.0.1:2345" --object-storage-address "tcp://127.0.0.1:2346"
[INFO]2025-06-06 13:13:15+0200: logging to ('/dev/stdout',)
[INFO]2025-06-06 13:13:15+0200: use event loop: builtin
[INFO]2025-06-06 13:13:15+0200: Scheduler: listen to scheduler address tcp://127.0.0.1:2345
[INFO]2025-06-06 13:13:15+0200: Scheduler: connect to object storage server tcp://127.0.0.1:2346
[INFO]2025-06-06 13:13:15+0200: Scheduler: listen to scheduler monitor address tcp://127.0.0.1:2347
...
Finally, start a set of workers that connect to the previously started scheduler:
$ scaler_worker_manager baremetal_native --worker-manager-id my-manager --mode fixed --max-task-concurrency 4 tcp://127.0.0.1:2345
...
Multiple worker managers can be connected to the same scheduler, providing distributed computation over multiple servers.
Passing -h lists the available options for the object storage server, scheduler, and worker manager executables:
$ scaler_object_storage_server -h
$ scaler_scheduler -h
$ scaler_worker_manager baremetal_native --help
All-in-one scaler entrypoint
The scaler command starts the full stack (object storage server, scheduler, and one or more worker managers) from a single TOML file,
with each component running in its own process. This is the simplest way to bring up a cluster from the CLI.
Create a stack.toml:
[object_storage_server]
bind_address = "tcp://127.0.0.1:2346"
[scheduler]
bind_address = "tcp://127.0.0.1:2345"
object_storage_address = "tcp://127.0.0.1:2346"
[[worker_manager]]
type = "baremetal_native"
worker_manager_id = "wm-1"
bind_address = "tcp://127.0.0.1:2345"
mode = "fixed"
max_task_concurrency = 4
Then start the entire stack with a single command:
$ scaler stack.toml
The [object_storage_server] section is required; bind_address must be set in
[object_storage_server], and [scheduler].object_storage_address should point to the same address.
Multiple worker managers can be defined using the [[worker_manager]] array-of-tables syntax, each with its own
type, concurrency settings, and logging configuration.
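Once the stack described in stack.toml is running, a client can connect to the address configured under [scheduler]; a minimal sketch:

from scaler import Client

# The address must match [scheduler].bind_address from stack.toml
with Client(address="tcp://127.0.0.1:2345") as client:
    print(client.submit(pow, 2, 10).result())  # 1024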
Submitting Python tasks using the Scaler client
Knowing the scheduler address, you can connect and submit tasks from a client in your Python code:
from scaler import Client

def square(value: int):
    return value * value

with Client(address="tcp://127.0.0.1:2345") as client:
    future = client.submit(square, 4)  # submits a single task
    print(future.result())             # 16
Client.submit() returns a standard Python future.
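Because these are ordinary future objects, several submissions can be gathered as they complete; a minimal sketch, assuming the futures are compatible with concurrent.futures.as_completed:

from concurrent.futures import as_completed

from scaler import Client

def square(value: int):
    return value * value

with Client(address="tcp://127.0.0.1:2345") as client:
    futures = [client.submit(square, i) for i in range(10)]
    # Results arrive as tasks finish, not necessarily in submission order
    for future in as_completed(futures):
        print(future.result())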
Graph computations
Scaler also supports graph tasks, for example:
from scaler import Client

def inc(i):
    return i + 1

def add(a, b):
    return a + b

def minus(a, b):
    return a - b

graph = {
    "a": 2,
    "b": 2,
    # the input to task c is the output of task a
    "c": (inc, "a"),         # c = a + 1 = 2 + 1 = 3
    "d": (add, "a", "b"),    # d = a + b = 2 + 2 = 4
    "e": (minus, "d", "c"),  # e = d - c = 4 - 3 = 1
}

with Client(address="tcp://127.0.0.1:2345") as client:
    result = client.get(graph, keys=["e"])
    print(result)  # {"e": 1}
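Several nodes of the same graph can be requested in one call by listing more keys; a short sketch reusing the graph above, assuming keys accepts multiple entries:

with Client(address="tcp://127.0.0.1:2345") as client:
    # Intermediate nodes "c" and "d" are returned alongside "e"
    results = client.get(graph, keys=["c", "d", "e"])
    print(results)  # {"c": 3, "d": 4, "e": 1}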
Configuring with TOML Files
While all Scaler components can be configured using command-line flags, using TOML files is the recommended approach for production or shareable setups. Configuration files make your setup explicit, easier to manage, and allow you to check your infrastructure's configuration into version control.
For convenience, you can define the settings for all components in a single, sectioned TOML file. Each component automatically loads its configuration from its corresponding section.
Core Concepts

- Usage: To use a configuration file, pass its path via the --config or -c flag:

  scaler_scheduler --config /path/to/your/example_config.toml

- Precedence: Settings are loaded in a specific order, with later sources overriding earlier ones. The hierarchy is:

  Command-Line Flags > TOML File Settings > Built-in Default Values

- Naming Convention: Keys in the TOML file must match the long-form command-line arguments, with hyphens (-) replaced by underscores (_).
  - For example, the flag --max-task-concurrency becomes the TOML key max_task_concurrency.
  - You can discover all available keys by running any command with the -h or --help flag.
Supported Components and Section Names
The following table maps each Scaler command to its corresponding section name in the TOML file.
| Command | TOML Section |
|---|---|
| scaler_scheduler | [scheduler] |
| scaler_object_storage_server | [object_storage_server] |
| scaler_gui | [gui] |
| scaler_top | [top] |
| scaler_worker_manager baremetal_native | [[worker_manager]] + type = "baremetal_native" |
| scaler_worker_manager symphony | [[worker_manager]] + type = "symphony" |
| scaler_worker_manager aws_raw_ecs | [[worker_manager]] + type = "aws_raw_ecs" |
| scaler_worker_manager aws_hpc | [[worker_manager]] + type = "aws_hpc" |
| scaler_worker_manager orb_aws_ec2 | [[worker_manager]] + type = "orb_aws_ec2" |
Practical Scenarios & Examples
Scenario 1: Unified Configuration File
Here is an example of a single example_config.toml file that configures multiple components using sections.
example_config.toml
# This is a unified configuration file for all Scaler components.
[scheduler]
bind_address = "tcp://127.0.0.1:6378"
object_storage_address = "tcp://127.0.0.1:6379"
monitor_address = "tcp://127.0.0.1:6380"
logging_level = "INFO"
logging_paths = ["/dev/stdout", "/var/log/scaler/scheduler.log"]
policy_engine_type = "simple"
policy_content = "allocate=even_load; scaling=vanilla"
[[worker_manager]]
type = "baremetal_native"
mode = "fixed"
max_task_concurrency = 8
worker_manager_id = "my-manager"
scheduler_address = "tcp://127.0.0.1:6378"
# Each worker manager has its own worker_config,
# so different managers (on different machines) can advertise
# different capabilities to the scheduler.
per_worker_capabilities = "linux,cpu=8"
task_timeout_seconds = 600
# Each worker manager has its own logging config,
# so different managers can write to different log files.
logging_level = "INFO"
logging_paths = ["/dev/stdout", "/var/log/scaler/worker.log"]
[object_storage_server]
bind_address = "tcp://127.0.0.1:6379"
[gui]
gui_address = "0.0.0.0:8081"
With this single file, starting your entire stack is simple and consistent:
scaler_object_storage_server tcp://127.0.0.1:6379 --config example_config.toml &
scaler_scheduler tcp://127.0.0.1:6378 --config example_config.toml &
scaler_worker_manager baremetal_native --config example_config.toml &
scaler_gui tcp://127.0.0.1:6380 --config example_config.toml &
Scenario 2: Overriding a Section's Setting
You can override any value from the TOML file by providing it as a command-line flag. For example, to use the example_config.toml file but test the cluster with 12 workers instead of 8:
# The --max-task-concurrency flag will take precedence over the [[worker_manager]] section
scaler_worker_manager baremetal_native --config example_config.toml --max-task-concurrency 12
The cluster will start with 12 workers, but all other settings (like task_timeout_seconds) will still be loaded from the
[[worker_manager]] section of example_config.toml.
Nested computations
Scaler allows tasks to submit new tasks while being executed. Scaler also supports recursive task calls.
from scaler import Client

def fibonacci(client: Client, n: int):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        a = client.submit(fibonacci, client, n - 1)
        b = client.submit(fibonacci, client, n - 2)
        return a.result() + b.result()

with Client(address="tcp://127.0.0.1:2345") as client:
    future = client.submit(fibonacci, client, 8)
    print(future.result())  # 21
Note: When creating a Client inside a task (nested client), the address parameter is optional. If omitted, the client automatically uses the scheduler address from the worker context. If provided, the specified address takes precedence.
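Building on the note above, a nested task can create its own client without hard-coding the scheduler address; a minimal sketch (nested_sum is a hypothetical helper):

from scaler import Client

def nested_sum(values):
    # Inside a worker, Client() with no address picks up the scheduler
    # address from the worker context, as described in the note above
    with Client() as client:
        futures = [client.submit(abs, v) for v in values]
        return sum(f.result() for f in futures)

with Client(address="tcp://127.0.0.1:2345") as client:
    print(client.submit(nested_sum, [-1, -2, 3]).result())  # 6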
Task Routing and Capability Management
Note: This feature is experimental and may change in future releases.
Scaler provides a task routing mechanism that lets you specify capability requirements for tasks and allocate them to workers that provide those capabilities.
Starting the Scheduler with the Capability Allocation Policy
The scheduler can be started with the experimental capability allocation policy by selecting it via the --policy-engine-type and
--policy-content arguments, as shown below:
$ scaler_scheduler tcp://127.0.0.1:2345 --object-storage-address tcp://127.0.0.1:2346 --policy-engine-type simple --policy-content "allocate=capability; scaling=capability"
Defining Worker Supported Capabilities
When starting a cluster of workers, you can define the capabilities available on each worker using the
--per-worker-capabilities/-pwc argument. This allows the scheduler to allocate tasks to workers based on the
capabilities they provide.
$ scaler_worker_manager baremetal_native --worker-manager-id my-manager --mode fixed --max-task-concurrency 4 --per-worker-capabilities "gpu,linux" tcp://127.0.0.1:2345
Specifying Capability Requirements for Tasks
When submitting tasks using the Scaler client, you can specify the capability requirements for each task using the
capabilities argument in the submit_verbose() and get() methods. This ensures that tasks are allocated to workers
supporting these capabilities.
from scaler import Client

with Client(address="tcp://127.0.0.1:2345") as client:
    future = client.submit_verbose(round, args=(3.15,), kwargs={}, capabilities={"gpu": -1})
    print(future.result())  # 3
The scheduler will route a task to a worker if task.capabilities.is_subset(worker.capabilities).
Integer values specified for capabilities (e.g., gpu=10) are currently ignored by the capability allocation policy.
This means that only the presence of a capability is considered, not its quantity. Support for capability quantity
tracking might be added in the future.
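To illustrate the routing rule above in plain Python (this is not Scaler's internal implementation): a task is routable to a worker when every capability it requires is advertised by that worker.

# Illustration of the subset rule; quantities are ignored, per the note above
def can_route(task_capabilities: dict, worker_capabilities: set) -> bool:
    return set(task_capabilities).issubset(worker_capabilities)

print(can_route({"gpu": -1}, {"gpu", "linux"}))                 # True
print(can_route({"gpu": -1, "windows": -1}, {"gpu", "linux"}))  # False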
IBM Spectrum Symphony integration
A Scaler scheduler can interface with IBM Spectrum Symphony to provide distributed computing across Symphony clusters.
$ scaler_worker_manager symphony --worker-manager-id my-manager tcp://127.0.0.1:2345 --service-name ScalerService --base-concurrency 4
This will start a Scaler worker that connects to the Scaler scheduler at tcp://127.0.0.1:2345 and uses the Symphony
service ScalerService to submit tasks.
Symphony service
A service must be deployed in Symphony to handle task submission.
Here is an example of a service that can be used:
import array

import cloudpickle
import soamapi

class Message(soamapi.Message):
    def __init__(self, payload: bytes = b""):
        self.__payload = payload

    def set_payload(self, payload: bytes):
        self.__payload = payload

    def get_payload(self) -> bytes:
        return self.__payload

    def on_serialize(self, stream):
        payload_array = array.array("b", self.get_payload())
        stream.write_byte_array(payload_array, 0, len(payload_array))

    def on_deserialize(self, stream):
        self.set_payload(stream.read_byte_array("b"))

class ServiceContainer(soamapi.ServiceContainer):
    def on_create_service(self, service_context):
        return

    def on_session_enter(self, session_context):
        return

    def on_invoke(self, task_context):
        # Unpack the submitted callable and its arguments, run it, and return the result
        input_message = Message()
        task_context.populate_task_input(input_message)
        fn, *args = cloudpickle.loads(input_message.get_payload())

        output_payload = cloudpickle.dumps(fn(*args))
        output_message = Message(output_payload)
        task_context.set_task_output(output_message)

    def on_session_leave(self):
        return

    def on_destroy_service(self):
        return
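For reference, on_invoke above unpacks a cloudpickle payload of the form (fn, arg1, ..., argN). A minimal sketch of that round-trip outside Symphony (for illustration only; this is not how Scaler packs tasks internally):

import cloudpickle

# Pack a callable and its arguments the way on_invoke expects to unpack them
payload = cloudpickle.dumps((pow, 2, 10))

fn, *args = cloudpickle.loads(payload)
print(fn(*args))  # 1024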
Nested tasks
Nested tasks originating from Symphony workers must be able to reach the Scaler scheduler. This might require modifications to the network configuration.
Nested tasks can also have unpredictable resource usage and runtimes, which can cause Symphony to prematurely kill tasks. It is recommended to be conservative when provisioning resources and limits, and monitor the cluster status closely for any abnormalities.
Base concurrency
Base concurrency is the maximum number of un-nested tasks that can be executed concurrently. It is possible to exceed this limit by submitting nested tasks, which carry a higher priority. Important: if your workload contains nested tasks, the base concurrency should be set to a value less than the number of cores available on the Symphony worker, or else deadlocks may occur.
A good heuristic for setting the base concurrency is to use the following formula:
base_concurrency = number_of_cores - deepest_nesting_level
where deepest_nesting_level is the deepest nesting level of any task in your workload. For instance, if a base task
calls a nested task that itself calls another nested task, the deepest nesting level is 2.
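As a quick sanity check of this heuristic, a short sketch (the core count and nesting depth are example values):

number_of_cores = 8        # example: 8 cores on the Symphony worker
deepest_nesting_level = 2  # base task -> nested task -> nested task

base_concurrency = number_of_cores - deepest_nesting_level
print(base_concurrency)    # 6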
ORB AWS EC2 integration
A Scaler scheduler can interface with ORB (Open Resource Broker) to dynamically provision and manage workers on AWS EC2 instances.
$ scaler_worker_manager orb_aws_ec2 tcp://127.0.0.1:2345 --image-id ami-0528819f94f4f5fa5
This will start an ORB AWS EC2 worker adapter that connects to the Scaler scheduler at tcp://127.0.0.1:2345. The scheduler can then request new workers from this adapter, which will be launched as EC2 instances.
The ORB AWS EC2 worker manager can also be included in a scaler all-in-one TOML config:
[scheduler]
bind_address = "tcp://127.0.0.1:2345"
object_storage_address = "tcp://127.0.0.1:2346"
[[worker_manager]]
type = "orb_aws_ec2"
scheduler_address = "tcp://127.0.0.1:2345"
image_id = "ami-0528819f94f4f5fa5"
instance_type = "t3.medium"
aws_region = "us-east-1"
Configuration
The ORB AWS EC2 adapter requires orb-py and boto3 to be installed. You can install them with:
$ pip install "opengris-scaler[orb]"
For more details on configuring ORB AWS EC2, including AWS credentials and instance templates, please refer to the ORB AWS EC2 Worker Adapter documentation.
Worker Manager usage
Note: This feature is experimental and may change in future releases.
Scaler provides a Worker Manager webhook interface to integrate with other job schedulers or resource managers. The Worker Manager allows external systems to request the creation and termination of Scaler workers dynamically.
Please check the OpenGRIS standard for more details on the Worker Manager specification here.
Starting the Native Worker Manager
Start a Native Worker Manager and connect it to the scheduler:
$ scaler_worker_manager baremetal_native --worker-manager-id my-manager tcp://127.0.0.1:2345
To check that the Worker Manager is working, you can bring up scaler_top to see workers spawning and terminating as
the task load changes.
Performance
uvloop
By default, Scaler uses Python's built-in asyncio event loop.
For better async performance, install uvloop (pip install uvloop) and pass uvloop as the --event-loop CLI argument,
or as the event_loop keyword argument when initializing the scheduler in Python code.
scaler_scheduler --event-loop uvloop tcp://127.0.0.1:2345 --object-storage-address tcp://127.0.0.1:2346
from scaler import SchedulerClusterCombo
scheduler = SchedulerClusterCombo(address="tcp://127.0.0.1:2345", event_loop="uvloop", n_workers=4)
Monitoring
From the CLI
Use scaler_top to connect to the scheduler's monitor address (printed by the scheduler on startup) to see
diagnostics/metrics information about the scheduler and its workers.
$ scaler_top tcp://127.0.0.1:2347
It will look similar to top, but provides information about the current Scaler setup:
scheduler | task_manager | scheduler_sent | scheduler_received
cpu 0.0% | unassigned 0 | ObjectResponse 24 | Heartbeat 183,109
rss 37.1 MiB | running 0 | TaskEcho 200,000 | ObjectRequest 24
| success 200,000 | Task 200,000 | Task 200,000
| failed 0 | TaskResult 200,000 | TaskResult 200,000
| canceled 0 | BalanceRequest 4 | BalanceResponse 4
--------------------------------------------------------------------------------------------------
Shortcuts: worker[n] cpu[c] rss[m] free[f] working[w] queued[q]
Total 10 worker(s)
worker agt_cpu agt_rss [cpu] rss free sent queued | object_id_to_tasks
W|Linux|15940|3c9409c0+ 0.0% 32.7m 0.0% 28.4m 1000 0 0 |
W|Linux|15946|d6450641+ 0.0% 30.7m 0.0% 28.2m 1000 0 0 |
W|Linux|15942|3ed56e89+ 0.0% 34.8m 0.0% 30.4m 1000 0 0 |
W|Linux|15944|6e7d5b99+ 0.0% 30.8m 0.0% 28.2m 1000 0 0 |
W|Linux|15945|33106447+ 0.0% 31.1m 0.0% 28.1m 1000 0 0 |
W|Linux|15937|b031ce9a+ 0.0% 31.0m 0.0% 30.3m 1000 0 0 |
W|Linux|15941|c4dcc2f3+ 0.0% 30.5m 0.0% 28.2m 1000 0 0 |
W|Linux|15939|e1ab4340+ 0.0% 31.0m 0.0% 28.1m 1000 0 0 |
W|Linux|15938|ed582770+ 0.0% 31.1m 0.0% 28.1m 1000 0 0 |
W|Linux|15943|a7fe8b5e+ 0.0% 30.7m 0.0% 28.3m 1000 0 0 |
- scheduler section shows scheduler resource usage
- task_manager section shows count for each task status
- scheduler_sent section shows count for each type of messages scheduler sent
- scheduler_received section shows count for each type of messages scheduler received
- function_id_to_tasks section shows task count for each function used
- worker section shows worker details; you can use the shortcuts to sort by a column, and the * in a column header shows which column is being used for sorting
  - agt_cpu/agt_rss means the cpu/memory usage of the worker agent
  - cpu/rss means the cpu/memory usage of the worker
  - free means the number of free task slots for this worker
  - sent means how many tasks the scheduler has sent to the worker
  - queued means how many tasks the worker has received and queued
From the web GUI
scaler_gui provides a web monitoring interface for Scaler.
$ scaler_gui tcp://127.0.0.1:2347 --web-port 8081
This will open a web server on port 8081.
Slides and presentations
We showcased Scaler at FOSDEM 2025. Check out the slides here.
Building from source
Using the Dev Container (Recommended)
The easiest way to build Scaler is by using the provided dev container. See the Dev Container Setup documentation for more details.
Building on GNU/Linux
To contribute to Scaler, you might need to manually build its C++ components.
These C++ components depend on the Boost, Cap'n Proto, and libuv libraries. If these libraries are not available on your system,
you can use the library_tool.sh script to download, compile, and install them (you might need sudo):
./scripts/library_tool.sh boost download
./scripts/library_tool.sh boost compile
./scripts/library_tool.sh boost install
./scripts/library_tool.sh capnp download
./scripts/library_tool.sh capnp compile
./scripts/library_tool.sh capnp install
./scripts/library_tool.sh libuv download
./scripts/library_tool.sh libuv compile
./scripts/library_tool.sh libuv install
After installing these dependencies, use the build.sh script to configure, build, and install Scaler's C++ components:
./scripts/build.sh
This script will create a build directory based on your operating system and architecture, and install the components
within the main source tree, as compiled Python modules. You can specify the compiler to use by setting the CC and
CXX environment variables.
Building on Windows
Building on Windows requires Visual Studio 17 2022. Similar to the previous section, you can use the
library_tool.ps1 script to download, compile, and install the dependencies (you might need to run as administrator):
./scripts/library_tool.ps1 boost download
./scripts/library_tool.ps1 boost compile
./scripts/library_tool.ps1 boost install
./scripts/library_tool.ps1 capnp download
./scripts/library_tool.ps1 capnp compile
./scripts/library_tool.ps1 capnp install
./scripts/library_tool.ps1 libuv download
./scripts/library_tool.ps1 libuv compile
./scripts/library_tool.ps1 libuv install
After installing these dependencies, if you are using Visual Studio for development, you can open the project folder
with it, select the windows-x64 preset, and build the project. Alternatively, run the following commands to configure,
build, and install Scaler's C++ components:
cmake --preset windows-x64
cmake --build --preset windows-x64 --config (Debug|Release)
cmake --install build_windows_x64 --config (Debug|Release)
The output will be similar to what is described in the previous section. We recommend using Visual Studio for development on Windows.
Building the Python wheel
Build the Python wheel for Scaler using cibuildwheel:
pip install build cibuildwheel
python -m cibuildwheel --output-dir wheelhouse
python -m build --sdist
Contributing
Your contributions are at the core of making this a true open source project. Any contributions you make are greatly appreciated.
We welcome you to:
- Fix typos or touch up documentation
- Share your opinions on existing issues
- Help expand and improve our library by opening a new issue
Please review the contribution guidelines to get started 👍.
NOTE: Commits and pull requests to FINOS repositories will only be accepted from those contributors with an active, executed Individual Contributor License Agreement (ICLA) with FINOS OR contributors who are covered under an existing and active Corporate Contribution License Agreement (CCLA) executed with FINOS. Commits from individuals not covered under an ICLA or CCLA will be flagged and blocked by the (EasyCLA) tool. Please note that some CCLAs require individuals/employees to be explicitly named on the CCLA.
Need an ICLA? Unsure if you are covered under an existing CCLA? Email help@finos.org
Code of Conduct
Please see the FINOS Community Code of Conduct.
License
Copyright 2023 Citigroup, Inc.
This project is distributed under the Apache-2.0 License. See
LICENSE for more information.
SPDX-License-Identifier: Apache-2.0
Contact
If you have a query or require support with this project, raise an issue. Otherwise, reach out to opensource@citi.com.