# taskflows
A Python library for task management, service scheduling, and alerting. Convert functions into managed tasks with logging, alerts, and retries. Create systemd services that run on flexible schedules with resource constraints.
## Table of Contents
- Features
- Installation
- Quick Start
- Tasks
- Services
- Environments
- Resource Constraints
- CLI Reference
- Web UI
- API Server
- Security
- Logging & Monitoring
- Slack Alerts
- Environment Variables
## Features

- **Tasks**: Convert any Python function (sync or async) into a managed task with:
  - Automatic retries on failure
  - Configurable timeouts
  - Alerts via Slack and Email
  - Structured logging with Loki integration
  - Context tracking with `get_current_task_id()`
- **Services**: Create systemd services with:
  - Calendar-based scheduling (cron-like)
  - Periodic scheduling with boot/login triggers
  - Service dependencies and relationships
  - Configurable restart policies
  - Resource constraints (CPU, memory, I/O)
- **Environments**: Run services in:
  - Conda/Mamba virtual environments
  - Docker containers with full configuration
  - Named reusable environment configurations
- **Management**: Control services via:
  - CLI (`tf` command)
  - Web UI with JWT authentication
  - REST API
  - Slack bot with interactive commands
## Installation

```bash
pip install taskflows
```

### Prerequisites

```bash
# Required for systemd integration
sudo apt install dbus libdbus-1-dev

# Enable user services to run without login
loginctl enable-linger
```
## Quick Start

### Create a Task

```python
from taskflows import task, Alerts
from taskflows.alerts import Slack

@task(
    name="my-task",
    retries=3,
    timeout=60,
    alerts=Alerts(
        send_to=Slack(channel="alerts"),
        send_on=["start", "error", "finish"]
    )
)
async def process_data():
    # Your code here
    return "Done"

# Execute the task
if __name__ == "__main__":
    process_data()
```
### Create a Service

```python
from taskflows import Service, Calendar

srv = Service(
    name="daily-job",
    start_command="python /path/to/script.py",
    start_schedule=Calendar("Mon-Fri 09:00 America/New_York"),
    enabled=True,  # Start on boot
)
srv.create()
```
## Tasks

### Task Decorator

The `@task` decorator wraps any function with managed execution:

```python
from taskflows import task, Alerts, get_current_task_id
from taskflows.alerts import Slack, Email

@task(
    name="data-pipeline",  # Task identifier (default: function name)
    required=True,         # Raise exception on failure
    retries=3,             # Retry attempts on failure
    timeout=300,           # Timeout in seconds
    alerts=Alerts(
        send_to=[
            Slack(channel="alerts"),
            Email(
                addr="sender@example.com",
                password="...",
                receiver_addr=["team@example.com"]
            )
        ],
        send_on=["start", "error", "finish"]
    )
)
async def run_pipeline():
    # Access current task ID for correlation
    task_id = get_current_task_id()
    print(f"Running task: {task_id}")
    # ... your code ...
```
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | Function name | Unique task identifier |
| `required` | `bool` | `False` | If `True`, exceptions are re-raised after all retries |
| `retries` | `int` | `0` | Number of retry attempts on failure |
| `timeout` | `float` | `None` | Execution timeout in seconds |
| `alerts` | `Alerts` | `None` | Alert configuration |
| `logger` | `Logger` | Default | Custom logger instance |
### Programmatic Task Execution

Run functions as tasks without the decorator:

```python
from taskflows import run_task

async def my_function(x, y):
    return x + y

result = await run_task(
    my_function,
    name="add-numbers",
    retries=2,
    timeout=30,
    x=1, y=2
)
```
### Alerts

Configure when and where to send alerts:

```python
from taskflows import Alerts
from taskflows.alerts import Slack, Email

alerts = Alerts(
    send_to=[
        Slack(channel="critical"),
        Email(
            addr="sender@gmail.com",
            password="app-password",
            receiver_addr=["oncall@company.com"]
        )
    ],
    send_on=["start", "error", "finish"]  # Events to trigger alerts
)
```

Alert Events:

- `start`: Task execution begins
- `error`: An exception occurred (sent per retry)
- `finish`: Task execution completed (includes success/failure status)
Alerts include Grafana/Loki URLs for viewing task logs directly.
## Services

### Service Configuration

Services are systemd units that run commands on schedules:

```python
from taskflows import Service, Calendar, Periodic, Venv

srv = Service(
    # Identity
    name="my-service",
    description="Processes daily reports",
    # Commands
    start_command="python process.py",
    stop_command="pkill -f process.py",          # Optional
    restart_command="python process.py reload",  # Optional
    # Scheduling
    start_schedule=Calendar("Mon-Fri 09:00"),
    stop_schedule=Calendar("Mon-Fri 17:00"),  # Optional
    restart_schedule=Periodic(                # Optional
        start_on="boot",
        period=3600,
        relative_to="finish"
    ),
    # Environment
    environment=Venv("myenv"),  # Or DockerContainer, or named env string
    working_directory="/app",
    env={"DEBUG": "1"},
    env_file="/path/to/.env",
    # Behavior
    enabled=True,  # Auto-start on boot
    timeout=300,   # Max runtime in seconds
    kill_signal="SIGTERM",
    restart_policy="on-failure",
)
srv.create()
```
Key Parameters:

| Parameter | Type | Description |
|---|---|---|
| `name` | `str` | Service identifier |
| `start_command` | `str \| Callable` | Command or function to execute |
| `stop_command` | `str` | Command to stop the service |
| `environment` | `Venv \| DockerContainer \| str` | Execution environment |
| `start_schedule` | `Calendar \| Periodic` | When to start |
| `stop_schedule` | `Schedule` | When to stop |
| `restart_schedule` | `Schedule` | When to restart |
| `enabled` | `bool` | Start on boot |
| `timeout` | `int` | Max runtime (seconds) |
| `restart_policy` | `str \| RestartPolicy` | Restart behavior |
### Scheduling

#### Calendar Schedule

Run at specific times using systemd calendar syntax:

```python
from taskflows import Calendar

# Daily at 2 PM Eastern
Calendar("Mon-Sun 14:00 America/New_York")

# Weekdays at 9 AM
Calendar("Mon-Fri 09:00")

# Specific days and time
Calendar("Mon,Wed,Fri 16:30:30")

# From a datetime object
from datetime import datetime, timedelta
Calendar.from_datetime(datetime.now() + timedelta(hours=1))
```
Calendar Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `schedule` | `str` | Required | Calendar expression |
| `persistent` | `bool` | `True` | Run on wake if missed |
| `accuracy` | `str` | `"1ms"` | Max deviation from scheduled time |
#### Periodic Schedule

Run at intervals after a trigger:

```python
from taskflows import Periodic

# Every 5 minutes after boot
Periodic(
    start_on="boot",       # "boot", "login", or "command"
    period=300,            # Interval in seconds
    relative_to="finish",  # "start" or "finish"
    accuracy="1ms"
)
```
Periodic Parameters:

| Parameter | Type | Description |
|---|---|---|
| `start_on` | `Literal["boot", "login", "command"]` | Initial trigger |
| `period` | `int` | Interval in seconds |
| `relative_to` | `Literal["start", "finish"]` | Measure from start or finish |
| `accuracy` | `str` | Max deviation |
### Service Dependencies

Control service startup order and relationships:

```python
srv = Service(
    name="app-server",
    start_command="./start.sh",
    # Ordering
    start_after=["database", "cache"],  # Start after these
    start_before=["monitoring"],        # Start before these
    # Dependencies
    requires=["database"],  # Fail if dependency fails
    wants=["cache"],        # Start together, don't fail if cache fails
    binds_to=["database"],  # Stop when database stops
    part_of=["app-stack"],  # Propagate stop/restart
    # Failure handling
    on_failure=["alert-service"],    # Activate on failure
    on_success=["cleanup-service"],  # Activate on success
    # Mutual exclusion
    conflicts=["maintenance-mode"],
)
```
### Restart Policies

Configure automatic restart behavior:

```python
from taskflows import Service, RestartPolicy

# Simple string policy
srv = Service(
    name="worker",
    start_command="python worker.py",
    restart_policy="always",  # "no", "always", "on-failure", "on-abnormal", etc.
)

# Detailed policy
srv = Service(
    name="worker",
    start_command="python worker.py",
    restart_policy=RestartPolicy(
        condition="on-failure",  # When to restart
        delay=10,                # Seconds between restarts
        max_attempts=5,          # Max restarts in window
        window=300,              # Time window in seconds
    ),
)
```

Restart Conditions:

- `no`: Never restart
- `always`: Always restart
- `on-success`: Restart on clean exit
- `on-failure`: Restart on non-zero exit
- `on-abnormal`: Restart on signal/timeout
- `on-abort`: Restart on abort signal
- `on-watchdog`: Restart on watchdog timeout
### ServiceRegistry

Manage multiple services together:

```python
from taskflows import Service, ServiceRegistry

registry = ServiceRegistry(
    Service(name="web", start_command="./web.sh"),
    Service(name="worker", start_command="./worker.sh"),
    Service(name="scheduler", start_command="./scheduler.sh"),
)

# Add more services
registry.add(Service(name="monitor", start_command="./monitor.sh"))

# Bulk operations
registry.create()   # Create all services
registry.start()    # Start all services
registry.stop()     # Stop all services
registry.restart()  # Restart all services
registry.enable()   # Enable all services
registry.disable()  # Disable all services
registry.remove()   # Remove all services

# Access individual services
registry["web"].logs()
```
## Environments

### Virtual Environments

Run services in Conda/Mamba environments:

```python
from taskflows import Service, Venv

srv = Service(
    name="ml-pipeline",
    start_command="python train.py",
    environment=Venv("ml-env"),  # Conda environment name
)
```
Automatically detects Mamba, Miniforge, or Miniconda installations.
### Docker Containers

Run services in Docker containers:

```python
from taskflows import Service, DockerContainer, DockerImage, Volume, CgroupConfig

# Using an existing image
srv = Service(
    name="api-server",
    environment=DockerContainer(
        image="python:3.11",
        command="python app.py",
        ports={"8080/tcp": 8080},
        volumes=[
            Volume(
                host_path="/data",
                container_path="/app/data",
                read_only=False
            )
        ],
        environment={"ENV": "production"},
        network_mode="bridge",
        restart_policy="no",  # Let systemd handle restarts
        persisted=True,       # Keep container between restarts
        cgroup_config=CgroupConfig(
            memory_limit=1024 * 1024 * 1024,  # 1GB
            cpu_quota=50000,                  # 50% CPU
        ),
    ),
)

# Building from a Dockerfile
srv = Service(
    name="custom-app",
    environment=DockerContainer(
        image=DockerImage(
            tag="myapp:latest",
            path="/path/to/app",
            dockerfile="Dockerfile",
        ),
        command="./start.sh",
    ),
)
```
DockerContainer Parameters:

| Parameter | Type | Description |
|---|---|---|
| `image` | `str \| DockerImage` | Image name or build config |
| `command` | `str \| Callable` | Command to run |
| `name` | `str` | Container name (auto-generated if not set) |
| `persisted` | `bool` | Keep container between restarts |
| `ports` | `dict` | Port mappings |
| `volumes` | `list[Volume]` | Volume mounts |
| `environment` | `dict` | Environment variables |
| `network_mode` | `str` | Network mode |
| `cgroup_config` | `CgroupConfig` | Resource limits |
### Named Environments

Store reusable environment configurations:

```python
from taskflows import Service

# Reference a named environment by string
srv = Service(
    name="my-service",
    start_command="python app.py",
    environment="production-docker",  # Named environment
)
```
Create named environments via the Web UI or API. They store complete Venv or DockerContainer configurations that can be reused across services.
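A named environment can also be created programmatically through the REST API's `POST /environments` endpoint (see API Server below). The sketch below is illustrative only; the payload fields shown are hypothetical, so consult the API for the actual schema:

```python
import requests  # assumes the API server is running on the default port 7777

# Hypothetical payload shape for a Docker-backed named environment
resp = requests.post(
    "http://localhost:7777/environments",
    json={
        "name": "production-docker",
        "type": "docker",
        "image": "python:3.11",
    },
)
resp.raise_for_status()
```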
## Resource Constraints

### Hardware Constraints

Require minimum hardware before starting:

```python
from taskflows import Service, Memory, CPUs

srv = Service(
    name="ml-training",
    start_command="python train.py",
    startup_requirements=[
        Memory(amount=8 * 1024**3, constraint=">="),  # 8GB RAM
        CPUs(amount=4, constraint=">="),              # 4+ CPUs
    ],
)
```

Constraint Operators: `<`, `<=`, `=`, `!=`, `>=`, `>`

Set `silent=True` to skip silently instead of failing:

```python
Memory(amount=16 * 1024**3, constraint=">=", silent=True)
```
### System Load Constraints

Wait for system load to be acceptable:

```python
from taskflows import Service, CPUPressure, MemoryPressure, IOPressure

srv = Service(
    name="batch-job",
    start_command="python process.py",
    startup_requirements=[
        CPUPressure(max_percent=80, timespan="5min"),
        MemoryPressure(max_percent=70, timespan="1min"),
        IOPressure(max_percent=90, timespan="10sec"),
    ],
)
```

Timespan Options: `"10sec"`, `"1min"`, `"5min"`
### Cgroup Configuration

Fine-grained resource control for services and containers:

```python
from taskflows import Service, CgroupConfig

srv = Service(
    name="limited-service",
    start_command="python app.py",
    cgroup_config=CgroupConfig(
        # CPU limits
        cpu_quota=50000,    # Microseconds per period (50% of 1 CPU)
        cpu_period=100000,  # Period in microseconds (default 100ms)
        cpu_shares=512,     # Relative weight
        cpuset_cpus="0-3",  # Pin to CPUs 0-3
        # Memory limits
        memory_limit=2 * 1024**3,   # 2GB hard limit
        memory_high=1.5 * 1024**3,  # 1.5GB soft limit
        memory_swap_limit=4 * 1024**3,
        # I/O limits
        io_weight=100,  # I/O priority (1-10000)
        device_read_bps={"/dev/sda": 100 * 1024**2},  # 100MB/s read
        device_write_bps={"/dev/sda": 50 * 1024**2},  # 50MB/s write
        # Process limits
        pids_limit=100,  # Max processes
        # Security
        oom_score_adj=500,     # OOM killer priority
        cap_drop=["NET_RAW"],  # Drop capabilities
    ),
)
```
## CLI Reference

The `tf` command provides service management:

```bash
# Service discovery
tf list [PATTERN]                           # List services matching pattern
tf status [-m PATTERN] [--running] [--all]  # Show service status
tf history [-l LIMIT] [-m PATTERN]          # Show task history
tf logs SERVICE [-n LINES]                  # View service logs
tf show PATTERN                             # Show service file contents

# Service control (PATTERN matches service names)
tf create SEARCH_IN [-i INCLUDE] [-e EXCLUDE]  # Create services from Python file/directory
tf start PATTERN [-t/--timers] [--services]    # Start matching services/timers
tf stop PATTERN [-t/--timers] [--services]     # Stop matching services/timers
tf restart PATTERN                             # Restart matching services
tf enable PATTERN [-t/--timers] [--services]   # Enable auto-start
tf disable PATTERN [-t/--timers] [--services]  # Disable auto-start
tf remove PATTERN                              # Remove matching services

# Multi-server (with -s/--server)
tf list -s server1 -s server2
tf status --server prod-host
tf start my-service -s prod-host
```
### API Management

```bash
# Start/stop API server (runs as systemd service)
tf api start
tf api stop
tf api restart

# Setup web UI authentication (interactive, file-based)
tf api setup-ui --username admin
```

To enable the web UI, set the environment variable before starting:

```bash
export TASKFLOWS_ENABLE_UI=1
tf api start
```

Alternatively, use environment variables for Docker/automation:

```bash
export TF_JWT_SECRET=$(tf api generate-secret)
export TF_ADMIN_USER=admin
export TF_ADMIN_PASSWORD=yourpassword
export TASKFLOWS_ENABLE_UI=1
tf api start
```

Or run the API directly (not as a service):

```bash
_start_srv_api --enable-ui
```
### Security Management

```bash
# Setup HMAC authentication
tf api security setup [-r/--regenerate-secret]
tf api security status
tf api security disable
tf api security set-secret SECRET
```
## Web UI

A modern React SPA located in `frontend/`.

### Setup

```bash
cd frontend

# Install dependencies
npm install

# Development (with hot reload)
npm run dev

# Production build
npm run build
```
### Running

Development mode:

```bash
# Terminal 1: Start the API server
tf api start

# Terminal 2: Start React dev server (proxies API to localhost:7777)
cd frontend && npm run dev
```

Access at http://localhost:3000

Production mode:

```bash
# Build the frontend
cd frontend && npm run build

# Start API server with UI enabled (serves from frontend/dist/)
export TASKFLOWS_ENABLE_UI=1
tf api start
```
Access at http://localhost:7777
### Tech Stack
- React 19 + TypeScript + Vite
- React Router v7 (protected routes)
- Zustand (auth, UI state)
- React Query (server state with polling)
- TailwindCSS 4
See `frontend/README.md` for detailed documentation.
### Features
- Dashboard: Real-time service status with auto-refresh
- Multi-select: Select and operate on multiple services
- Search: Filter services by name
- Batch Operations: Start/stop/restart multiple services
- Log Viewer: Search and auto-scroll logs
- Named Environments: Create and manage reusable environments
## API Server

The API server provides REST endpoints for service management.

### Starting the Server

```bash
tf api start                        # Default port 7777
TASKFLOWS_ENABLE_UI=1 tf api start  # With web UI
```
### Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | `/services` | List all services |
| GET | `/services/{name}/status` | Get service status |
| POST | `/services/{name}/start` | Start service |
| POST | `/services/{name}/stop` | Stop service |
| POST | `/services/{name}/restart` | Restart service |
| GET | `/services/{name}/logs` | Get service logs |
| GET | `/environments` | List named environments |
| POST | `/environments` | Create environment |
### Authentication

The API uses HMAC-SHA256 authentication. Include these headers:

```
X-HMAC-Signature: <signature>
X-HMAC-Timestamp: <unix-timestamp>
```
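As a rough illustration, a signed request could be built as below, assuming the signature is a hex-encoded HMAC-SHA256 over `timestamp + body` as described in the Security section (the exact encoding may differ; the secret value here is a placeholder):

```python
import hashlib
import hmac
import time

import requests

secret = b"your-shared-secret"     # placeholder; distributed during security setup
timestamp = str(int(time.time()))  # unix timestamp, validated within a 5-minute window
body = b""                         # empty body for a GET request

# Sign timestamp + body with the shared secret
signature = hmac.new(secret, timestamp.encode() + body, hashlib.sha256).hexdigest()

resp = requests.get(
    "http://localhost:7777/services",
    headers={
        "X-HMAC-Signature": signature,
        "X-HMAC-Timestamp": timestamp,
    },
)
print(resp.json())
```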
## Security
Taskflows implements multiple security layers to protect against common vulnerabilities and unauthorized access.
### Authentication

#### HMAC Authentication (API)

Secure API communication with HMAC-SHA256 request signing:

```bash
# Initial setup
tf api security setup

# View settings
tf api security status

# Regenerate secret (requires client restart)
tf api security setup --regenerate-secret
```
Configuration is stored in `~/.services/security.json`.
How it works:
- Shared secret distributed to authorized clients
- Each request signed with HMAC-SHA256(secret, timestamp + body)
- Server validates signature and timestamp (5-minute window)
- Prevents replay attacks and request tampering
Protected Operations:
- Service start/stop/restart
- Service creation/removal
- Environment management
#### JWT Authentication (Web UI)

The web UI uses JWT tokens with bcrypt password hashing. There are two methods to configure authentication:

**Method 1: File-based (Interactive Setup)**

```bash
tf api setup-ui --username admin
# Prompts for password interactively
```

Configuration is stored in `~/.taskflows/data/ui_config.json` and `~/.taskflows/data/users.json`.

**Method 2: Environment Variables (Docker/Automation)**

```bash
# Generate a JWT secret
export TF_JWT_SECRET=$(tf api generate-secret)
export TF_ADMIN_USER=admin
export TF_ADMIN_PASSWORD=yourpassword
export TASKFLOWS_ENABLE_UI=1
tf api start
```
Environment variables take precedence over file-based configuration.
Token Features:
- Bcrypt hashed passwords (12 rounds) for file-based auth
- 1-hour token expiration
- Automatic refresh on activity
- Secure HTTP-only cookies (when HTTPS enabled)
### Input Validation & Sanitization

Taskflows validates all user input to prevent injection attacks:

#### Path Traversal Prevention

All file paths (`env_file`, working directories) are validated:

```python
# ✅ Safe - absolute path validated
Service(name="my-service", env_file="/home/user/app/.env")

# ❌ Blocked - directory traversal attempt
Service(name="bad", env_file="../../../etc/passwd")  # Raises SecurityError

# ❌ Blocked - symlink escape
Service(name="bad", env_file="/tmp/link-to-etc-passwd")  # Raises SecurityError
```
Protection mechanisms:
- Resolves to absolute paths
- Checks against allowed directories
- Detects and blocks symlink escapes
- Prevents `..` path components (see the sketch below)
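For illustration, the core of such a check can be sketched as follows; this is not taskflows' actual implementation, and the allow-list shown is hypothetical:

```python
import os

ALLOWED_DIRS = ["/home/user/app"]  # hypothetical allow-list


def validate_path(path: str) -> str:
    # realpath resolves symlinks and ".." components to a canonical absolute path
    resolved = os.path.realpath(path)
    if not any(
        resolved == d or resolved.startswith(d + os.sep) for d in ALLOWED_DIRS
    ):
        raise ValueError(f"path escapes allowed directories: {resolved}")
    return resolved
```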
#### Service Name Validation

Service names are sanitized to prevent injection:

```python
# ✅ Safe - alphanumeric, dashes, dots, underscores
Service(name="my-service-v2.0_prod")

# ❌ Blocked - path characters
Service(name="../malicious")  # Raises SecurityError
Service(name="/etc/passwd")   # Raises SecurityError

# ❌ Blocked - special characters
Service(name="bad; rm -rf /")  # Raises SecurityError
```

Allowed characters: `[a-zA-Z0-9._-]+` only
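An equivalent check can be written with Python's `re` module (illustrative only, not taskflows' internal validator):

```python
import re

SERVICE_NAME_RE = re.compile(r"[a-zA-Z0-9._-]+")


def is_valid_name(name: str) -> bool:
    # fullmatch requires the entire name to match, not just a prefix
    return SERVICE_NAME_RE.fullmatch(name) is not None


assert is_valid_name("my-service-v2.0_prod")
assert not is_valid_name("bad; rm -rf /")
```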
#### Command Injection Prevention

Docker commands are strictly validated using shell quoting:

```python
# ✅ Safe - properly quoted
DockerContainer(command='python script.py --arg "value with spaces"')

# ❌ Rejected - malformed quotes
DockerContainer(command='python script.py --arg "unterminated')  # Raises ValueError
```

Protection: Uses Python's `shlex.split()` with no unsafe fallback
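Since `shlex.split()` is standard library, its behavior on the two cases above can be shown directly:

```python
import shlex

# Well-formed quoting splits cleanly
print(shlex.split('python script.py --arg "value with spaces"'))
# ['python', 'script.py', '--arg', 'value with spaces']

# Malformed quoting raises ValueError instead of being silently guessed
try:
    shlex.split('python script.py --arg "unterminated')
except ValueError as err:
    print(err)  # No closing quotation
```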
### Credential Management

Best Practices:

1. **Never commit secrets to version control**

   ```bash
   # Use .env files (add to .gitignore)
   echo "API_KEY=secret123" > .env
   ```

   ```python
   # Reference in service
   Service(name="app", env_file=".env")
   ```

2. **Use environment variables for sensitive configuration**

   ```python
   import os

   Service(
       name="app",
       env={
           "DB_PASSWORD": os.getenv("DB_PASSWORD"),
           "API_KEY": os.getenv("API_KEY"),
       }
   )
   ```

3. **Restrict file permissions**

   ```bash
   chmod 600 ~/.services/security.json
   chmod 600 .env
   ```

4. **Rotate secrets regularly**

   ```bash
   tf api security setup --regenerate-secret
   ```
### Docker Socket Security

⚠️ Warning: Services with Docker access have root-equivalent permissions.

When using a `DockerContainer` environment, the service accesses Docker's Unix socket (`/var/run/docker.sock`), which grants:

- Ability to run containers as root
- Access to the host filesystem via volume mounts
- Network configuration capabilities

Mitigation strategies:

1. **Principle of least privilege** - only use Docker when necessary

   ```python
   # Prefer direct execution
   Service(name="app", start_command="python app.py")

   # Only containerize when isolation is needed
   Service(name="app", environment=DockerContainer(...))
   ```

2. **Resource limits** - constrain container resources

   ```python
   DockerContainer(
       name="app",
       cgroup_config=CgroupConfig(
           memory_limit=1 * 1024**3,  # 1 GB max
           cpu_quota=100000,          # 1 CPU max
           pids_limit=100,            # Max 100 processes
           read_only_rootfs=True,     # Immutable filesystem
       )
   )
   ```

3. **Drop capabilities** - remove unnecessary Linux capabilities

   ```python
   DockerContainer(
       name="app",
       cgroup_config=CgroupConfig(
           cap_drop=["ALL"],              # Drop all capabilities
           cap_add=["NET_BIND_SERVICE"],  # Only add what's needed
       )
   )
   ```

4. **Network isolation** - use custom networks

   ```python
   DockerContainer(name="app", network_mode="isolated_net")
   ```
### Security Audit Checklist
- HMAC authentication enabled for API
- Strong passwords for web UI (12+ characters)
- Secrets in environment variables or `.env` files
- `.env` files in `.gitignore`
- File permissions: `chmod 600` on sensitive files
- Regular secret rotation schedule
- Docker used only when necessary
- Resource limits on all Docker containers
- Capabilities dropped on Docker containers
- Review service permissions (user/group)
### Reporting Security Issues
For security vulnerabilities, please do not open a public issue. Instead:
- Email security concerns to: [maintainer email]
- Include detailed reproduction steps
- Allow 90 days for patch before disclosure
### Security References
- OWASP Top 10
- Docker Security Best Practices
- systemd Security Features
- Python Security Best Practices
## Logging & Monitoring

### Architecture

```
Application (structlog) → journald → Fluent Bit → Loki → Grafana
```
### Configuration

```python
from taskflows.loggers import configure_loki_logging, get_struct_logger

configure_loki_logging(
    app_name="my-service",
    environment="production",
    log_level="INFO",
)

logger = get_struct_logger("my_module")
logger.info("user_action", user_id=123, action="login")
```
### Loki Queries

```
# All logs for a service
{service_name=~".*my-service.*"}

# Errors only
{service_name=~".*my-service.*"} |= "ERROR"

# By app and environment
{app="my-service", environment="production"}

# Parse JSON and filter
{app="my-service"} | json | context_duration_ms > 1000
```
### Alert Integration
Task alerts include Grafana URLs with pre-configured Loki queries for viewing:
- Task execution logs
- Error traces
- Historical runs
## Slack Alerts

Send task alerts and notifications to Slack channels.

### Setup

1. Create a Slack app at https://api.slack.com/apps
2. Add OAuth scopes: `chat:write`, `chat:write.public`, `files:write`
3. Install the app to your workspace and get the Bot Token
4. Set the environment variable:

   ```bash
   export SLACK_BOT_TOKEN=xoxb-...
   ```
### Usage

```python
from taskflows import task, Alerts
from taskflows.alerts import Slack

@task(
    name="my-task",
    alerts=Alerts(
        send_to=Slack(channel="alerts"),
        send_on=["start", "error", "finish"]
    )
)
async def my_task():
    # Your code here
    pass
```
### Programmatic Usage

```python
from taskflows.alerts.slack import send_slack_message
from taskflows.alerts.components import Text, Table

await send_slack_message(
    channel="alerts",
    subject="Task Complete",
    content=[Text("Processing finished successfully")],
)
```
## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `TASKFLOWS_ENABLE_UI` | Enable web UI serving | `0` |
| `TASKFLOWS_DISPLAY_TIMEZONE` | Display timezone | `UTC` |
| `TASKFLOWS_FLUENT_BIT` | Fluent Bit endpoint | `localhost:24224` |
| `TASKFLOWS_GRAFANA` | Grafana URL | `localhost:3000` |
| `TASKFLOWS_GRAFANA_API_KEY` | Grafana API key | - |
| `TASKFLOWS_LOKI_URL` | Loki URL | `http://localhost:3100` |
| `LOKI_HOST` | Loki host | `localhost` |
| `LOKI_PORT` | Loki port | `3100` |
| `ENVIRONMENT` | Environment name | `production` |
| `APP_NAME` | Application name | - |
### Slack Alert Variables

| Variable | Description | Default |
|---|---|---|
| `SLACK_BOT_TOKEN` | Slack Bot OAuth token | - |
| `SLACK_ATTACHMENT_MAX_SIZE_MB` | Max attachment size in MB | `20` |
| `SLACK_INLINE_TABLES_MAX_ROWS` | Max rows for inline tables | `200` |
## Development

- DBus Documentation

### Testing

```bash
pytest tests/
```
## License
MIT