Flow SDK

GPU compute in seconds, not hours — Flow SDK provides seamless access to GPU infrastructure through a single, simple API, tailored for launching batch tasks and experiments.

Quick Start

Prerequisites: Get your API key at app.mlfoundry.com

1. Install and Configure

pip install flow-sdk
flow init  # One-time setup wizard

2. Run on GPU

import flow

# Your code launches on GPU in minutes
task = flow.run("python train.py", instance_type="a100")

That's it. Your local train.py and project files are automatically uploaded and run on an A100 GPU.

Note: Your code is uploaded, but you need to install dependencies. See Handling Dependencies.

Why Flow

Long-standing AI research labs have invested in building sophisticated infrastructure abstractions that let researchers focus on research rather than DevOps. DeepMind's XManager handles experiments from single GPUs to hundreds of hosts. Meta's submitit brings Python-native patterns to cluster computing. OpenAI's internal platform was designed to scale seamlessly from interactive notebooks to thousand-GPU training runs.

Flow brings these same capabilities to every AI developer. Like these internal tools, Flow provides:

  • Progressive disclosure - Simple tasks stay simple, complex workflows remain possible
  • Unified abstraction - One interface whether running locally or across cloud hardware
  • Fail-fast validation - Catch configuration errors before expensive compute starts
  • Experiment tracking - Built-in task history and reproducibility

The goal: democratize the infrastructure abstractions that enable breakthrough AI research.

Overview

Flow SDK provides a high-level interface for GPU workload submission across heterogeneous infrastructure. Our design philosophy emphasizes explicit behavior, progressive disclosure, and fail-fast validation.

┌─────────────┐       ┌──────────────┐       ┌─────────────────┐
│   Your Code │  -->  │   Flow SDK   │  -->  │   Cloud Infra   │
│ train.py    │       │ Unified API  │       │ FCP, ... others │
└─────────────┘       └──────────────┘       └─────────────────┘
     Local               client-side          cloud accelerators 

Core Capabilities

  • Unified API: Single interface across cloud providers (FCP, AWS, GCP, Azure)
  • Zero DevOps: Automatic instance provisioning, driver setup, and environment configuration
  • Cost Control: Built-in safeguards with max price and runtime limits
  • Persistent Storage: Volumes that persist across task lifecycles
  • Multi-Node: Native support for distributed training
  • Real-Time Monitoring: Log streaming, SSH access, and status tracking
  • Notebook Integration: Google Colab and Jupyter notebook support

Installation

pip install flow-sdk

Requirements:

  • Python 3.11+
  • Linux, macOS, or Windows
  • API key from ML Foundry

Authentication

Run the interactive setup wizard:

uv run flow init

This will:

  1. Prompt for your API key (get one at app.mlfoundry.com)
  2. Help you select a project
  3. Configure SSH keys (optional)
  4. Save settings for all Flow tools

Alternative methods:

Environment Variables

export FCP_API_KEY="fcp-..."
export FCP_PROJECT="my-project"

Manual Config File

# ~/.flow/config.yaml
api_key: fcp-...
project: my-project
region: us-central1-a

Verify Setup

uv run flow status
# Should show "No tasks found" if authenticated

Basic Usage

Python API

import flow
from flow import TaskConfig

# Simple GPU job - automatically uploads your local code
task = flow.run("python train.py", instance_type="a100")

# Wait for completion
task.wait()
print(task.logs())

# Full configuration
config = TaskConfig(
    name="distributed-training",
    instance_type="8xa100",  # 8x A100 GPUs
    command=["python", "-m", "torch.distributed.launch", 
             "--nproc_per_node=8", "train.py"],
    volumes=[{"size_gb": 100, "mount_path": "/data"}],
    max_price_per_hour=25.0,  # Cost protection
    max_run_time_hours=24.0   # Time limit
)
task = flow.run(config)

# Monitor execution
print(f"Status: {task.status}")
print(f"SSH: {task.ssh_command}")
print(f"Cost: {task.cost_per_hour}")

Code Upload

By default, Flow automatically uploads your current directory to the GPU instance:

# This uploads your local files and runs them on GPU
task = flow.run("python train.py", instance_type="a100")

# Disable code upload (use pre-built Docker image)
task = flow.run(
    "python /app/train.py",
    instance_type="a100",
    image="mycompany/training:latest",
    upload_code=False
)

Use a .flowignore file to exclude files from upload (same syntax as .gitignore), as in the example below.
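
For example (illustrative patterns only):

# .flowignore: same syntax as .gitignore
data/
checkpoints/
*.ckpt
.venv/
__pycache__/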

Handling Dependencies

Your code is uploaded, but dependencies need to be installed:

# Install from requirements.txt
task = flow.run(
    "pip install -r requirements.txt && python train.py",
    instance_type="a100"
)

# Using uv (recommended for speed)
task = flow.run(
    "uv pip install . && uv run python train.py",
    instance_type="a100"
)

# Pre-installed in Docker image (fastest)
task = flow.run(
    "python train.py",
    instance_type="a100",
    image="pytorch/pytorch:2.0.0-cuda11.8-cudnn8"  # PyTorch pre-installed
)

Command Line

# Submit tasks
uv run flow run "python train.py" --instance-type a100
uv run flow run config.yaml

# Monitor tasks
uv run flow status                    # List all tasks
uv run flow logs task-abc123 -f      # Stream logs
uv run flow ssh task-abc123          # SSH access

# Manage tasks
uv run flow cancel task-abc123       # Stop execution

YAML Configuration

# config.yaml
name: model-training
instance_type: 4xa100
command: python train.py --epochs 100
env:
  BATCH_SIZE: "256"
  LEARNING_RATE: "0.001"
volumes:
  - size_gb: 500
    mount_path: /data
    name: training-data
max_price_per_hour: 20.0
max_run_time_hours: 72.0
ssh_keys:
  - my-ssh-key

Guide for SLURM Users

Flow SDK provides a modern cloud-native alternative to SLURM while maintaining compatibility with existing workflows. This guide helps SLURM users transition to Flow.

Command Equivalents

SLURM Command            Flow Command             Description
sbatch job.sh            flow run job.yaml        Submit batch job
sbatch script.slurm      flow run script.slurm    Direct SLURM script support
squeue                   flow status              View job queue
scancel <job_id>         flow cancel <task_id>    Cancel job
scontrol show job <id>   flow info <task_id>      Show job details
sacct                    Not applicable           Flow tracks costs differently
sinfo                    Not applicable           Cloud resources are dynamic
srun                     flow ssh <task_id>       Interactive access

Log Access

# SLURM: View output files
cat slurm-12345.out

# Flow: Stream logs directly
uv run flow logs task-abc123
uv run flow logs task-abc123 --follow    # Like tail -f
uv run flow logs task-abc123 --stderr     # Error output

SLURM Script Compatibility

Flow can directly run existing SLURM scripts:

# Your existing SLURM script
uv run flow run job.slurm

# Behind the scenes, Flow parses #SBATCH directives:
#SBATCH --job-name=training
#SBATCH --nodes=2
#SBATCH --gpus=a100:4
#SBATCH --time=24:00:00
#SBATCH --mem=64G
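
Following the equivalences in this guide, those directives correspond to roughly this Flow YAML (a sketch; the adapter's exact output is not shown here):

# Approximate Flow equivalent of the #SBATCH directives above (sketch)
name: training
instance_type: 4xa100     # --gpus=a100:4
num_instances: 2          # --nodes=2
max_run_time_hours: 24.0  # --time=24:00:00
# --mem has no direct mapping; memory is fixed by the instance type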

Migration Examples

Basic GPU Job

SLURM:

#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --time=12:00:00
#SBATCH --mem=32G

module load cuda/11.8
python train.py

Flow (YAML):

name: train-model
instance_type: a100
command: python train.py
max_run_time_hours: 12.0

Flow (Python):

flow.run("python train.py", instance_type="a100", max_run_time_hours=12)

Multi-GPU Training

SLURM:

#!/bin/bash
#SBATCH --job-name=distributed
#SBATCH --nodes=4
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=8

srun python -m torch.distributed.launch train.py

Flow:

name: distributed
instance_type: 8xa100
num_instances: 4
command: |
  torchrun --nproc_per_node=8 --nnodes=4 \
    --node_rank=$FLOW_NODE_RANK \
    --master_addr=$FLOW_MAIN_IP \
    train.py

Array Jobs

SLURM:

#!/bin/bash
#SBATCH --array=1-10
#SBATCH --job-name=sweep

python experiment.py --task-id $SLURM_ARRAY_TASK_ID

Flow (using loop):

for i in range(1, 11):
    flow.run(f"python experiment.py --task-id {i}", 
             name=f"sweep-{i}", instance_type="a100")

Key Differences

  1. Resource Allocation: Flow uses instance types (e.g., a100, 4xa100) instead of partition/node specifications
  2. Cost Control: Built-in max_price_per_hour instead of account-based billing
  3. Storage: Cloud volumes (block storage) instead of shared filesystems
    • FCP platform supports both block storage and file shares
    • Flow SDK currently only creates block storage volumes (requires mounting/formatting)
    • File share support is planned for easier multi-node access
  4. Environment: Container-based instead of module system
  5. Scheduling: Cloud-native provisioning instead of queue-based scheduling

Environment Variables

When using the SLURM adapter (flow run script.slurm), Flow sets SLURM-compatible environment variables:

SLURM Variable        Set by SLURM Adapter         Flow Native Variable
SLURM_JOB_ID          ✓ (maps to $FLOW_TASK_ID)    FLOW_TASK_ID
SLURM_JOB_NAME                                     FLOW_TASK_NAME
SLURM_ARRAY_TASK_ID   ✓ (planned)                  -
SLURM_NTASKS                                       -
SLURM_CPUS_PER_TASK                                -
SLURM_NNODES                                       FLOW_NODE_COUNT
SLURM_JOB_PARTITION   ✓ (if set)                   -

For all Flow tasks (regardless of adapter), these variables are available:

  • FLOW_TASK_ID - Unique task identifier
  • FLOW_TASK_NAME - Task name from config
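
A task can read these at runtime, for example to tag logs or experiment-tracker runs (a minimal sketch using only os.environ):

import os

task_id = os.environ.get("FLOW_TASK_ID", "local")    # unique task identifier
task_name = os.environ.get("FLOW_TASK_NAME", "dev")  # task name from config
print(f"[{task_name}/{task_id}] starting training")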

Advanced Features

Module System → Container Images:

# SLURM: module load pytorch/2.0
# Flow equivalent:
image: pytorch/pytorch:2.0.0-cuda11.8-cudnn8

Dependency Management:

# SLURM: --dependency=afterok:12345
# Flow: Use task.wait() in Python or chain commands
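
For example (a sketch; preprocess.py and train.py stand in for your own scripts):

import flow

prep = flow.run("python preprocess.py", instance_type="a100")
prep.wait()  # block until preprocessing finishes, like --dependency=afterok
train = flow.run("python train.py", instance_type="a100")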

Output Formatting:

# Get SLURM-style output (coming soon)
uv run flow status --format=slurm

Future Compatibility

We're considering adding direct SLURM command aliases for easier migration:

  • flow sbatch → flow run
  • flow squeue → flow status
  • flow scancel → flow cancel

If you need specific SLURM features, please open an issue.

Instance Types

Type     GPUs      Total Memory
a100     1x A100   80GB
4xa100   4x A100   320GB
8xa100   8x A100   640GB
h100     8x H100   640GB

# Examples
flow.run("python train.py", instance_type="a100")     # Single GPU
flow.run("python train.py", instance_type="4xa100")   # Multi-GPU
flow.run("python train.py", instance_type="8xh100")   # Maximum performance

Task Management

Task Object

# Get task handle
task = flow.run(config)
# Or retrieve existing
task = flow.get_task("task-abc123")

# Properties
task.task_id          # Unique identifier
task.status           # Current state
task.ssh_command      # SSH connection string
task.cost_per_hour    # Current pricing
task.created_at       # Submission time

# Methods
task.wait(timeout=3600)       # Block until complete
task.refresh()                # Update status
task.cancel()                 # Terminate execution

Logging

# Get recent logs
logs = task.logs(tail=100)

# Stream in real-time
for line in task.logs(follow=True):
    if "loss:" in line:
        print(line)

SSH Access

# Interactive shell
task.ssh()

# Run command
task.ssh("nvidia-smi")
task.ssh("tail -f /workspace/train.log")

# Multi-node access
task.ssh(node=1)  # Connect to specific node

Extended Information

# Get task creator
user = task.get_user()
print(f"Created by: {user.username} ({user.email})")

# Get instance details
instances = task.get_instances()
for inst in instances:
    print(f"Node {inst.instance_id}:")
    print(f"  Public IP: {inst.public_ip}")
    print(f"  Private IP: {inst.private_ip}")
    print(f"  Status: {inst.status}")

Persistent Storage

Volume Management

# Create volume (currently creates block storage)
from flow import Flow

with Flow() as client:
    vol = client.create_volume(size_gb=1000, name="datasets")

# Use in task
config = TaskConfig(
    name="training",
    instance_type="a100",
    command="python train.py",
    volumes=[{
        "volume_id": vol.volume_id,
        "mount_path": "/data"
    }]
)

# Or reference by name
config.volumes = [{
    "name": "datasets",
    "mount_path": "/data"
}]

Note: Flow SDK currently creates block storage volumes, which need to be formatted on first use. The underlying FCP platform also supports file shares (pre-formatted, multi-node accessible), but this is not yet exposed in the SDK.

Docker Cache Optimization

# Mount at Docker directory for layer caching
volumes=[{
    "name": "docker-cache",
    "size_gb": 50,
    "mount_path": "/var/lib/docker"
}]

Zero-Import Remote Execution

Flow SDK's invoke() function lets you run Python functions on GPUs without modifying your code:

The Invoker Pattern

# train.py - Your existing code, no Flow imports needed
def train_model(data_path: str, epochs: int = 100):
    import torch
    model = torch.nn.Linear(10, 1)
    # ... training logic ...
    return {"accuracy": 0.95, "loss": 0.01}

# runner.py - Execute remotely on GPU
from flow import invoke

result = invoke(
    "train.py",              # Python file
    "train_model",           # Function name  
    args=["s3://data"],      # Arguments
    kwargs={"epochs": 200},  # Keyword arguments
    gpu="a100"               # GPU type
)
print(result)  # {"accuracy": 0.95, "loss": 0.01}

Why Use invoke()?

  • Zero contamination: Keep ML code pure Python
  • Easy testing: Run functions locally without changes
  • Flexible: Any function, any module
  • Type safe: JSON serialization ensures compatibility
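
Because train.py stays free of Flow imports, the same function can be exercised locally before paying for GPU time (plain Python; the sample arguments are illustrative):

from train import train_model

result = train_model("./sample_data", epochs=1)  # runs on your machine
print(result)  # {"accuracy": ..., "loss": ...}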

See the Invoker Pattern Guide for detailed documentation.

Decorator Pattern

Flow SDK provides a decorator-based API similar to popular serverless frameworks:

Basic Usage

from flow import FlowApp

app = FlowApp()

@app.function(gpu="a100")
def train_model(data_path: str, epochs: int = 100):
    import torch
    model = torch.nn.Linear(10, 1)
    # ... training logic ...
    return {"accuracy": 0.95, "loss": 0.01}

# Execute remotely on GPU
result = train_model.remote("s3://data.csv", epochs=50)

# Execute locally for testing
local_result = train_model("./local_data.csv")

Advanced Configuration

@app.function(
    gpu="h100:8",  # 8x H100 GPUs
    image="pytorch/pytorch:2.0.0",
    volumes={"/data": "training-data"},
    env={"WANDB_API_KEY": "..."}
)
def distributed_training(config_path: str):
    # Multi-GPU training code
    return {"status": "completed"}

# Async execution
task_id = distributed_training.spawn("config.yaml")
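
If the id returned by spawn() is a regular task id (an assumption, not confirmed here), you can reattach with the task API shown earlier:

import flow

task = flow.get_task(task_id)  # reattach to the spawned run by id
task.wait()
print(task.logs(tail=20))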

Module-Level Usage

from flow import function

# Use without creating an app instance
@function(gpu="a100")
def inference(text: str) -> dict:
    # Run inference
    return {"sentiment": "positive"}
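
Assuming module-level functions expose the same .remote interface as app-bound ones, remote invocation looks identical:

# Remote call (assumes .remote parity with app.function)
result = inference.remote("Flow makes GPUs easy")
print(result["sentiment"])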

The decorator pattern provides:

  • Clean syntax: Familiar to Flask/FastAPI users
  • Local testing: Call functions directly without infrastructure
  • Type safety: Full IDE support and type hints
  • Flexibility: Mix local and remote execution seamlessly

Data Mounting

Flow SDK provides seamless data access from S3 and volumes through the Flow client API:

Quick Start

# Mount S3 dataset
from flow import Flow

with Flow() as client:
    task = client.submit(
        "python train.py --data /data",
        gpu="a100",
        mounts="s3://my-bucket/datasets/imagenet"
    )

# Mount multiple sources
with Flow() as client:
    task = client.submit(
        "python train.py",
        gpu="a100:4",
        mounts={
            "/datasets": "s3://ml-bucket/imagenet",
            "/models": "volume://pretrained-models",  # Auto-creates if missing
            "/outputs": "volume://training-outputs"
        }
    )

Supported Sources

  • S3: Read-only access via s3fs (s3://bucket/path)

    • Requires AWS credentials in environment
    • Cached locally for performance
  • Volumes: Persistent read-write storage (volume://name)

    • Auto-creates with 100GB if not found
    • High-performance NVMe storage

Example: Training Pipeline

# Set AWS credentials (get_secret() is a placeholder for your secret manager)
import os
os.environ["AWS_ACCESS_KEY_ID"] = get_secret("aws_key")
os.environ["AWS_SECRET_ACCESS_KEY"] = get_secret("aws_secret")

# Submit training with data mounting
with Flow() as client:
    task = client.submit(
        """
        python train.py \\
            --data /datasets/train \\
            --validation /datasets/val \\
            --output /outputs
        """,
        gpu="a100:8",
        mounts={
            "/datasets": "s3://ml-datasets/imagenet",
            "/outputs": "volume://experiment-results"
        }
    )

See the Data Mounting Guide for detailed documentation.

Distributed Training

Single-Node Multi-GPU (Recommended)

config = TaskConfig(
    name="distributed-training",
    instance_type="8xa100",  # 8x A100 GPUs on single node
    command="torchrun --nproc_per_node=8 --standalone train.py"
)

Multi-Node Training

For multi-node training, explicitly set coordination environment variables:

config = TaskConfig(
    name="multi-node-training",
    instance_type="8xa100",
    num_instances=4,  # 32 GPUs total
    env={
        "FLOW_NODE_RANK": "0",  # Set per node: 0, 1, 2, 3
        "FLOW_NUM_NODES": "4",
        "FLOW_MAIN_IP": "10.0.0.1"  # IP of rank 0 node
    },
    command=[
        "torchrun",
        "--nproc_per_node=8",
        "--nnodes=4",
        "--node_rank=$FLOW_NODE_RANK",
        "--master_addr=$FLOW_MAIN_IP",
        "--master_port=29500",
        "train.py"
    ]
)

Advanced Features

Cost Optimization

# Use spot instances with price cap
config = TaskConfig(
    name="experiment",
    instance_type="a100",
    max_price_per_hour=5.0,  # Use spot if available
    max_run_time_hours=12.0  # Prevent runaway costs
)

Environment Setup

# Custom container
config.image = "pytorch/pytorch:2.0.0-cuda11.8-cudnn8"

# Environment variables
config.env = {
    "WANDB_API_KEY": "...",
    "HF_TOKEN": "...",
    "CUDA_VISIBLE_DEVICES": "0,1,2,3"
}

# Working directory
config.working_dir = "/workspace"

Data Access

# S3 integration
config = TaskConfig(
    name="s3-processing",
    instance_type="a100",
    command="python process.py",
    env={
        "AWS_ACCESS_KEY_ID": "...",
        "AWS_SECRET_ACCESS_KEY": "..."
    }
)

# Or use mounts parameter (simplified API)
with Flow() as client:
    task = client.submit(
        "python analyze.py",
        gpu="a100",
        mounts={
            "/input": "s3://my-bucket/data/",
            "/output": "volume://results"
        }
    )

Error Handling

Flow provides structured errors with recovery guidance:

from flow.errors import (
    FlowError,
    AuthenticationError,
    ResourceNotFoundError,
    ValidationError,
    QuotaExceededError
)

try:
    task = flow.run(config)
except ValidationError as e:
    print(f"Configuration error: {e.message}")
    for suggestion in e.suggestions:
        print(f"  - {suggestion}")
except QuotaExceededError as e:
    print(f"Quota exceeded: {e.message}")
    print("Suggestions:", e.suggestions)
except FlowError as e:
    print(f"Error: {e}")

Common Patterns

Interactive Development

Google Colab Integration

Connect Google Colab notebooks to Flow GPU instances:

# Launch GPU instance configured for Colab
uv run flow colab connect --instance-type a100 --hours 4

# You'll receive:
# 1. SSH tunnel command to run locally
# 2. Connection URL for Colab

Then in Google Colab:

  1. Go to Runtime → Connect to local runtime
  2. Paste the connection URL
  3. Click Connect

Your Colab notebook now runs on Flow GPU infrastructure!

Direct Jupyter Notebooks

Run Jupyter directly on Flow instances:

# Launch Jupyter server
config = TaskConfig(
    name="notebook",
    instance_type="a100",
    command="jupyter lab --ip=0.0.0.0 --no-browser",
    ports=[8888],
    max_run_time_hours=8.0
)
task = flow.run(config)
print(f"Access at: {task.endpoints['jupyter']}")

Checkpointing

# Resume training from checkpoint
config = TaskConfig(
    name="resume-training",
    instance_type="a100",
    command="python train.py --resume",
    volumes=[{
        "name": "checkpoints",
        "mount_path": "/checkpoints"
    }]
)

Experiment Sweep

# Run multiple experiments
for lr in [0.001, 0.01, 0.1]:
    config = TaskConfig(
        name=f"exp-lr-{lr}",
        instance_type="a100",
        command=f"python train.py --lr {lr}",
        env={"WANDB_RUN_NAME": f"lr_{lr}"}
    )
    flow.run(config)

Architecture

Flow SDK follows Domain-Driven Design with clear boundaries:

High-Level Overview

┌─────────────────────────────────────────────┐
│          User Interface Layer               │
│        (Python API, CLI, YAML)              │
├─────────────────────────────────────────────┤
│           Core Domain Layer                 │
│     (TaskConfig, Task, Volume models)       │
├─────────────────────────────────────────────┤
│        Provider Abstraction Layer           │
│         (IProvider Protocol)                │
├─────────────────────────────────────────────┤
│        Provider Implementations             │
│     (FCP, AWS, GCP, Azure - future)         │
└─────────────────────────────────────────────┘

Key Components

  • Flow SDK (src/flow/): High-level Python SDK for ML/AI workloads
  • Mithril CLI (mithril/): Low-level IaaS control following Unix philosophy
  • Provider Abstraction: Cloud-agnostic interface for multi-cloud support
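
A minimal sketch of what the IProvider seam could look like (hypothetical method names; the actual protocol is defined inside the SDK):

from typing import Protocol

class IProvider(Protocol):
    """Cloud-agnostic provider interface (illustrative only)."""

    def submit(self, config: "TaskConfig") -> "Task": ...
    def get_task(self, task_id: str) -> "Task": ...
    def cancel(self, task_id: str) -> None: ...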

Current Provider Support

FCP (ML Foundry) - Production Ready

  • Ubuntu 22.04 environment with bash
  • 10KB startup script limit
  • Spot instances with preemption handling
  • Block storage volumes (file shares available in some regions)
  • See FCP provider documentation for implementation details

AWS, GCP, Azure - Planned

  • Provider abstraction designed for multi-cloud
  • Contributions welcome

Additional Documentation

Example Code

Performance

  • Cold start: 10-15 minutes (instance provisioning on FCP)
  • Warm start: 30-60 seconds (pre-allocated pool; a planned feature, let FCP know if you're interested)

Troubleshooting

Common Errors

Authentication Failed

Error: Invalid API key

Solution: Run flow init and ensure your API key is correct. Get a new key at app.mlfoundry.com.

No Available Instances

Error: No instances available for type 'a100'

Solution: Try a different region or instance type. Check availability with flow status.

Quota Exceeded

Error: GPU quota exceeded in region us-east-1

Solution: Try a different region or contact support for quota increase.

Invalid Instance Type

ValidationError: Invalid instance type 'a100x8'

Solution: Use correct format: 8xa100 (not a100x8). See Instance Types.

Task Timeout

Error: Task exceeded max_run_time_hours limit

Solution: Increase max_run_time_hours in your config or optimize your code.

File Not Found

python: can't open file 'train.py': No such file or directory

Solution: Ensure upload_code=True (default) or that your file exists in the Docker image.

Module Not Found

ModuleNotFoundError: No module named 'torch'

Solution: Install dependencies first: flow.run("pip install torch && python train.py"). See Handling Dependencies.

Upload Size Limit

Error: Project size (15.2MB) exceeds limit (10MB)

Note: Files are automatically compressed (gzip), but the 10MB limit applies after compression.

Solutions (in order of preference):

  1. Use .flowignore to exclude unnecessary files (models, datasets, caches)
  2. Clone from Git:
    flow.run("git clone https://github.com/myorg/myrepo.git . && python train.py", 
             instance_type="a100", upload_code=False)
    
  3. Pre-built Docker image with your code:
    flow.run("python /app/train.py", instance_type="a100",
             image="myorg/myapp:latest", upload_code=False)
    
  4. Download from S3/GCS:
    flow.run("aws s3 cp s3://mybucket/code.tar.gz . && tar -xzf code.tar.gz && python train.py",
             instance_type="a100", upload_code=False)
    
  5. Mount code via volume (for development):
    # First upload to a volume manually, then:
    flow.run("python /code/train.py", instance_type="a100",
             volumes=[{"name": "my-code", "mount_path": "/code"}],
             upload_code=False)
    
    Note: Volumes are empty by default. You must manually populate them first (e.g., via git clone or rsync).

Support

License

Apache License 2.0 - see LICENSE.txt
