CleanCloud
Read-only cloud hygiene for AWS, Azure, and GCP. Multi-account org scanning, CI/CD enforcement, and deterministic cost modeling. No agents, no telemetry.
Languages / Langues : 🇬🇧 English | 🇫🇷 Français
Docs: AWS Setup · AWS Permissions & Commands · AWS Multi-Account · Azure Setup · GCP Setup · CI/CD Guide · Detection Rules · Example Outputs · Docker Hub · GitHub Action
Quick Start
pipx install cleancloud
cleancloud demo # see sample findings — no credentials needed
cleancloud demo --category ai # see AI/ML waste findings (SageMaker, AML, Vertex AI — GPU-heavy endpoints/clusters)
Scan your cloud:
cleancloud scan --provider aws --all-regions
cleancloud scan --provider azure
cleancloud scan --provider gcp --all-projects
cleancloud scan --provider aws --category ai # detect idle SageMaker endpoints
CleanCloud is the Cloud Hygiene Engine — detects idle infrastructure and high-cost AI/ML waste across AWS, Azure, and GCP.
Supports: AWS · Azure · GCP
CleanCloud scans your AWS, Azure, and GCP environments and tells you exactly what to clean up — idle infrastructure and high-cost AI/ML resources (SageMaker endpoints, AML compute clusters, Vertex AI endpoints) — with per-resource cost estimates. No agents. No SaaS. Read-only. Runs entirely in your environment.
| Capability | AWS/Azure/GCP native cost tools | FinOps SaaS platforms | CleanCloud |
|---|---|---|---|
| Shows cost trends | ✅ | ✅ | — |
| Names exactly which resources to clean up | ❌ | partial | ✅ |
| Deterministic cost estimate per resource | ❌ | ❌ | ✅ |
| Detects idle AI/ML waste (SageMaker, AML, Vertex AI — including GPU-backed endpoints) | ❌ | ❌ | ✅ |
| Read-only, no agents | ✅ | ❌ | ✅ |
| Runs in air-gapped / regulated environments | ❌ | ❌ | ✅ |
| No SaaS account or vendor access required | ❌ | ❌ | ✅ |
| Multi-account / multi-subscription / multi-project | ❌ | ✅ | ✅ |
| CI/CD and scheduled enforcement (exit codes) | ❌ | ❌ | ✅ |
What It Looks Like
Found 6 hygiene issues:
1. [AWS] Idle RDS Instance (No Connections for 21 Days)
Risk : High
Confidence : High
Resource : aws.rds.instance → db-prod-analytics
Region : us-east-1
Rule : aws.rds.instance.idle
Reason : RDS instance has had zero connections for 21 days
Details:
- instance_class: db.r5.large
- engine: postgres 15.4
- estimated_monthly_cost: ~$380/month
2. [AWS] Unattached EBS Volume
Risk : Low
Confidence : High
Resource : aws.ebs.volume → vol-0a1b2c3d4e5f67890
Region : us-east-1
Rule : aws.ebs.volume.unattached
Reason : Volume has been unattached for 47 days
Details:
- size_gb: 500
- state: available
- tags: {"Project": "legacy-api", "Owner": "platform"}
- estimated_monthly_cost: ~$40/month
3. [AWS] Idle NAT Gateway
Risk : Medium
Confidence : Medium
Resource : aws.ec2.nat_gateway → nat-0abcdef1234567890
Region : us-west-2
Rule : aws.ec2.nat_gateway.idle
Reason : No traffic detected for 21 days
Details:
- name: staging-nat
- total_bytes_out: 0
- estimated_monthly_cost: ~$32/month
4. [AWS] Idle Load Balancer (No Healthy Targets)
Risk : Medium
Confidence : High
Resource : aws.elbv2.load_balancer → alb-staging-api
Region : us-east-1
Rule : aws.elbv2.load_balancer.idle
Reason : Load balancer has no healthy targets for 30 days
Details:
- type: application
- estimated_monthly_cost: ~$18/month
5. [AWS] Unattached Elastic IP
Risk : Low
Confidence : High
Resource : aws.ec2.elastic_ip → eipalloc-0a1b2c3d4e5f6
Region : eu-west-1
Rule : aws.ec2.elastic_ip.unattached
Reason : Elastic IP not associated with any instance or ENI (age: 92 days)
6. [AWS] Old EBS Snapshot (438 Days)
Risk : Low
Confidence : High
Resource : aws.ebs.snapshot → snap-0a1b2c3d4e5f67890
Region : us-west-2
Rule : aws.ebs.snapshot.old
Reason : Snapshot is 438 days old with no recent activity
Details:
- size_gb: 200
- estimated_monthly_cost: ~$10/month
--- Scan Summary ---
Total findings: 6
By risk: low: 3 medium: 2 high: 1
By confidence: high: 5 medium: 1
Minimum estimated waste: ~$480/month
(5 of 6 findings costed)
Regions scanned: us-east-1, us-west-2, eu-west-1 (auto-detected)
No cloud account yet? cleancloud demo shows sample output without any credentials.
As featured in
- Korben 🇫🇷 — Major French tech publication
- Last Week in AWS #457 — Corey Quinn's weekly AWS newsletter
What users say
"Solid discovery tool that bubbles up potential savings. Easy to install and use!" — Reddit user
Key Features
- AI/ML waste detection across all 3 clouds: idle SageMaker endpoints (AWS), idle AML compute clusters (Azure), and idle Vertex AI Online Prediction endpoints (GCP) — always-on GPU-backed resources flagged HIGH risk, with typical waste ranging from $449–$23K+/month. Opt-in via `--category ai` or `--category all`. Many AI/ML serving resources remain permanently provisioned (min replicas / baseline capacity) and continue billing even with zero traffic — CleanCloud detects these abandoned or underutilized deployments early.
- 33 curated, high-signal detection rules: orphaned volumes, idle databases, stopped instances, unused registries, and more — designed to avoid false positives in IaC environments, each with a deterministic cost estimate
- Governance enforcement (opt-in): `--fail-on-confidence HIGH` or `--fail-on-cost 100` — enforce waste thresholds on a schedule, owned by platform or FinOps teams
- Multi-account scanning (AWS): scan entire AWS Organizations in one run — config file, inline IDs, or auto-discovery via `--org`
- Multi-subscription scanning (Azure): scan all Azure subscriptions in parallel — auto-discovery via Management Group, per-subscription cost breakdown included
- Multi-project scanning (GCP): scan all accessible GCP projects in parallel — auto-discovery via Application Default Credentials, per-project cost breakdown included
- Safe for regulated environments: no agents, no telemetry, no SaaS — runs entirely inside your own infrastructure. Suitable for financial services, healthcare, and government accounts where third-party SaaS access is restricted
- Ecosystem-ready output: JSON for Slack alerts, cost dashboards, and ticketing automation — CSV for spreadsheet workflows — markdown to paste directly into GitHub PRs, Jira, or Confluence
What CleanCloud does NOT do
| ❌ Delete resources | ❌ Modify or create tags |
| ❌ Write to any cloud API | ❌ Store or log credentials |
| ❌ Send telemetry or usage data | ❌ Require a SaaS account or agent |
All operations are read-only. Safe for production accounts, air-gapped environments, and security-reviewed pipelines.
Who uses it:
- Platform and FinOps teams — run weekly hygiene scans across your AWS Org or Azure tenant, enforce waste thresholds, catch drift before it compounds
- Regulated industries — financial services, healthcare, and government teams that cannot send cloud account data to a SaaS vendor
- Mid-market engineering teams — too large to ignore cloud waste, too lean for enterprise FinOps platforms. Native cost tools show bills; CleanCloud shows what to fix
- Cloud consultants and MSPs — run a read-only audit against a client account in minutes, export findings to markdown or JSON
Use cases:
- One-time cloud waste audit — run in CloudShell, see findings in 60 seconds
- Scheduled hygiene governance — weekly job that catches new waste and enforces thresholds across all accounts
- Pre-review reports — export findings to markdown before a quarterly cost review or board meeting
Get Started
Commands
| Command | What it does |
|---|---|
| `cleancloud demo` | Show sample findings — no credentials needed |
| `cleancloud scan` | Scan your cloud environment and report findings |
| `cleancloud doctor` | Check that credentials and permissions are correctly configured |
| `cleancloud --version` | Show installed version |
| `cleancloud --help` | List all flags |
Via pipx (recommended for local use):
pipx install cleancloud
pipx ensurepath # adds cleancloud to PATH — restart your shell after this
cleancloud demo # see sample findings without any cloud credentials
Via Docker (no Python required — runs anywhere: CI/CD, scheduled jobs, servers):
docker pull getcleancloud/cleancloud
docker run --rm getcleancloud/cleancloud demo
# With AWS credentials (Docker doesn't inherit local ~/.aws automatically)
docker run --rm \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_REGION=us-east-1 \
getcleancloud/cleancloud scan --provider aws --all-regions
In CI/CD, `aws-actions/configure-aws-credentials` sets `AWS_*` env vars on the runner — pass them with `-e VAR_NAME` and they forward into the container automatically. See CI/CD guide →
When you're ready to scan your real environment, authenticate first — then run:
# AWS: make sure you're logged in (aws configure, aws sso login, or IAM role)
cleancloud scan --provider aws --all-regions
# Azure: make sure you're logged in (az login)
cleancloud scan --provider azure
# GCP: make sure you're logged in (gcloud auth application-default login)
cleancloud scan --provider gcp --all-projects
Not sure if your credentials have the right permissions?
Run cleancloud doctor --provider aws, cleancloud doctor --provider azure, or cleancloud doctor --provider gcp first.
Scan flags:
| Flag | What it does |
|---|---|
| `--provider aws\|azure\|gcp` | Cloud provider to scan (required) |
| `--category hygiene\|ai\|all` | Rule category: hygiene (default), ai (SageMaker on AWS, AML Compute on Azure, Vertex AI on GCP), or all (hygiene + AI) |
| `--region REGION` | Scan a single region |
| `--all-regions` | Scan all active regions — AWS/Azure only |
| AWS multi-account | |
| `--org` | Auto-discover all accounts via AWS Organizations |
| `--multi-account FILE` | Config file listing accounts to scan |
| `--accounts 111,222` | Inline account IDs, comma-separated |
| `--concurrency N` | Parallel accounts/projects (default: 3) |
| `--timeout SECONDS` | Total scan timeout in seconds (default: 3600) |
| Azure multi-subscription | |
| `--management-group ID` | Scan all subscriptions under a Management Group |
| `--subscription ID` | Scan a specific subscription (default: all accessible) |
| GCP multi-project | |
| `--all-projects` | Scan all accessible GCP projects |
| `--project ID` | Scan a specific project (repeatable) |
| Output | |
| `--output human\|json\|csv\|markdown` | Output format (default: human) |
| `--output-file FILE` | Write output to file instead of stdout |
| Enforcement (exit code 2 on match) | |
| `--fail-on-confidence HIGH\|MEDIUM` | Fail on findings at or above this confidence |
| `--fail-on-cost N` | Fail if estimated monthly waste ≥ $N |
| `--fail-on-findings` | Fail on any finding |
No install — try in your cloud shell
Got an AWS or Azure account? Run a real scan in seconds with no local setup.
AWS — AWS CloudShell:
pip install --upgrade cleancloud
cleancloud doctor --provider aws # check what permissions your session has
cleancloud scan --provider aws --all-regions
Azure — Azure Cloud Shell:
pip install --upgrade --user cleancloud
export PATH="$HOME/.local/bin:$PATH"
cleancloud doctor --provider azure # check what permissions your session has
cleancloud scan --provider azure
GCP — Cloud Shell:
pip install --upgrade --user cleancloud
export PATH="$HOME/.local/bin:$PATH"
cleancloud doctor --provider gcp # check what permissions your session has
cleancloud scan --provider gcp --all-projects
All three shells authenticate using your portal session — no separate credentials needed. Permissions vary by account; doctor tells you exactly what's available before you scan. If permissions are missing, CleanCloud skips those rules and reports what was skipped.
Install troubleshooting
macOS: brew install pipx && pipx install cleancloud
Linux: sudo apt install pipx && pipx install cleancloud
Windows: python3 -m pip install --user pipx && python3 -m pipx ensurepath && pipx install cleancloud
Command not found: cleancloud — Run pipx ensurepath then restart your shell.
externally-managed-environment error — Use pipx instead of pip.
Upgrading from a previous pip install — remove it first to avoid shadowing:
pip uninstall cleancloud && pipx install cleancloud && pipx ensurepath
Wrong version after install — Run which cleancloud; an old pip install may be shadowing pipx.
Minimum recommended version: v1.7.2 — earlier versions have setup friction. Run cleancloud --version to check.
Shareable markdown report
cleancloud scan --provider aws --all-regions --output markdown
Prints a grouped summary you can paste directly into a GitHub PR comment, Slack message, or issue:
## CleanCloud Scan Results
**Provider:** AWS
**Regions:** us-east-1, us-west-2, eu-west-1
**Scanned:** 2026-03-07
**Estimated monthly waste:** ~$147
**Total findings:** 6
| Finding | Count | Est. Monthly Cost |
|---------|------:|------------------:|
| Unattached EBS Volume | 2 | ~$115 |
| Idle NAT Gateway | 1 | ~$32 |
| Unattached Elastic IP | 1 | ~$0 |
| Detached ENI | 1 | — |
| CloudWatch Log Group: Infinite Retention | 1 | — |
**Confidence:** high: 3 · medium: 3
> Generated by [CleanCloud](https://github.com/cleancloud-io/cleancloud) — read-only cloud hygiene scanner for AWS, Azure, and GCP.
Save to a file with --output-file results.md. Without --output-file, it prints to stdout.
For full output examples including doctor, JSON, CSV, and markdown: docs/example-outputs.md
What CleanCloud Detects
33 rules across AWS, Azure, and GCP — conservative, high-signal, designed to avoid false positives in IaC environments.
AWS:
- Compute: stopped instances 30+ days (EBS charges continue)
- Storage: unattached EBS volumes (HIGH), old EBS snapshots, old AMIs, old RDS snapshots 90+ days
- Network: unattached Elastic IPs (HIGH), detached ENIs, idle NAT Gateways, idle load balancers (HIGH)
- Platform: idle RDS instances (HIGH)
- Observability: infinite retention CloudWatch Logs
- Governance: untagged resources, unused security groups
- AI/ML (opt-in: `--category ai`): idle SageMaker endpoints with zero invocations 14+ days — GPU-backed endpoints flagged HIGH risk ($500–$23K/month)
Azure:
- Compute: stopped (not deallocated) VMs (HIGH)
- Storage: unattached managed disks (HIGH), old snapshots
- Network: unused public IPs, empty load balancers (HIGH), empty App Gateways (HIGH), idle VNet Gateways
- Platform: empty App Service Plans (HIGH), idle SQL databases (HIGH), idle App Services, unused Container Registries
- Governance: untagged resources
- AI/ML (opt-in: `--category ai`): idle AML compute clusters with non-zero baseline capacity and no workload activity 14+ days — GPU clusters flagged HIGH risk ($600–$15K/month)
GCP:
- Compute: stopped instances 30+ days (disk charges continue) (HIGH)
- Storage: unattached Persistent Disks (HIGH), old snapshots 90+ days
- Network: unused reserved static IPs — regional and global (HIGH)
- Platform: idle Cloud SQL instances with zero connections 14+ days (HIGH)
- AI/ML (opt-in: `--category ai`): idle Vertex AI Online Prediction endpoints with zero or near-zero predictions 14+ days (dedicated nodes continue billing regardless of traffic) — GPU-backed endpoints flagged HIGH risk ($449–$23K+/month)
Rules without a confidence marker are MEDIUM — they use time-based heuristics or multiple signals. Start with --fail-on-confidence HIGH to catch obvious waste, then tighten as your team validates.
Full rule details, signals, and evidence: docs/rules.md
How Teams Run CleanCloud
CleanCloud exits 0 by default — it reports findings and never blocks anything unless you ask it to. Three common patterns:
Weekly governance scan — the most common setup for platform and FinOps teams. Run on a schedule, not tied to code changes. Catches new waste before it compounds and enforces a cost threshold across all accounts or subscriptions.
# .github/workflows/cleancloud-weekly.yml
on:
schedule:
- cron: "0 9 * * 1" # every Monday 9am
# AWS — scan entire org, alert if monthly waste crosses $500
cleancloud scan --provider aws --org --all-regions \
--output json --output-file findings.json \
--fail-on-cost 500
# Azure — scan all subscriptions under a Management Group
cleancloud scan --provider azure --management-group <MGMT_GROUP_ID> \
--output json --output-file findings.json \
--fail-on-cost 500
The JSON output can feed Slack alerts, Jira tickets, or a cost dashboard.
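For example, a small script can turn the JSON output into a Slack-ready summary. The field names below (`findings`, `rule`, `resource`, `confidence`, `estimated_monthly_cost`) are assumptions for this sketch; check docs/example-outputs.md for the real schema.

```python
import json

# Hypothetical findings.json content — these field names are illustrative,
# not the documented CleanCloud schema.
sample = """
{
  "findings": [
    {"rule": "aws.rds.instance.idle", "resource": "db-prod-analytics",
     "confidence": "HIGH", "estimated_monthly_cost": 380},
    {"rule": "aws.ebs.snapshot.old", "resource": "snap-0a1b2c3d",
     "confidence": "MEDIUM", "estimated_monthly_cost": 10}
  ]
}
"""

def slack_summary(raw: str, level: str = "HIGH") -> str:
    """Build a plain-text summary of findings at the given confidence level."""
    data = json.loads(raw)
    hits = [f for f in data["findings"] if f["confidence"] == level]
    total = sum(f.get("estimated_monthly_cost", 0) for f in hits)
    lines = [f"CleanCloud: {len(hits)} {level}-confidence findings (~${total}/month)"]
    lines += [f"- {f['rule']}: {f['resource']} (~${f['estimated_monthly_cost']}/month)"
              for f in hits]
    return "\n".join(lines)

print(slack_summary(sample))
```

The same filtered list can be POSTed to a Slack webhook or used to open Jira tickets.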
On-demand audit — run from CloudShell or your terminal for an immediate point-in-time view. No install, no config, findings in under 60 seconds. Useful before a quarterly cost review, a cloud migration, or an infosec audit.
# AWS CloudShell — uses your portal session, no extra auth
pip install --upgrade cleancloud
cleancloud scan --provider aws --all-regions
# Azure Cloud Shell — uses your portal session, no extra auth
pip install --upgrade --user cleancloud && export PATH="$HOME/.local/bin:$PATH"
cleancloud scan --provider azure
In CI/CD — run as a step in your deployment workflow to catch obvious waste before it ships. Use enforcement flags to block or warn.
# AWS
cleancloud scan --provider aws --region us-east-1 \
--fail-on-confidence HIGH # exit 2 if any HIGH confidence waste found
# Azure
cleancloud scan --provider azure \
--fail-on-confidence HIGH
Enforcement flags — scans always exit 0 unless you opt in:
| Flag | Behavior | Exit code |
|---|---|---|
| (none) | Report only, never fail | 0 |
| `--fail-on-confidence HIGH` | Fail on HIGH confidence findings | 2 |
| `--fail-on-confidence MEDIUM` | Fail on MEDIUM or higher | 2 |
| `--fail-on-cost 50` | Fail if estimated monthly waste ≥ $50 | 2 |
| `--fail-on-findings` | Fail on any finding | 2 |
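The decision these flags encode is straightforward. A rough sketch of the threshold logic (illustrative, not CleanCloud's actual implementation):

```python
CONF_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def exit_code(findings, fail_on_confidence=None, fail_on_cost=None,
              fail_on_findings=False):
    """Return 2 if any enforcement threshold is crossed, else 0."""
    if fail_on_findings and findings:
        return 2
    if fail_on_confidence is not None:
        floor = CONF_ORDER[fail_on_confidence]
        if any(CONF_ORDER[f["confidence"]] >= floor for f in findings):
            return 2
    if fail_on_cost is not None:
        total = sum(f.get("estimated_monthly_cost", 0) for f in findings)
        if total >= fail_on_cost:
            return 2
    return 0

findings = [{"confidence": "MEDIUM", "estimated_monthly_cost": 32}]
print(exit_code(findings))                             # no flags → 0
print(exit_code(findings, fail_on_confidence="HIGH"))  # → 0
print(exit_code(findings, fail_on_cost=30))            # 32 >= 30 → 2
```

In a pipeline, exit code 2 fails the step; without any flag the scan is report-only.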
Copy-pasteable GitHub Actions workflows for AWS (OIDC) and Azure (Workload Identity) — including auth setup, RBAC, and enforcement patterns:
Automation & CI/CD guide → · AWS setup → · Azure setup → · GCP setup →
Need help with OIDC or enforcement flags? Ask in our setup discussion →
Multi-Account Scanning (AWS)
Built for enterprises running AWS Organizations. Scan every account in parallel — findings aggregated into one report.
# Scan from a config file (commit .cleancloud/accounts.yaml to your repo)
cleancloud scan --provider aws --multi-account .cleancloud/accounts.yaml --all-regions
# Inline account IDs — no file needed
cleancloud scan --provider aws --accounts 111111111111,222222222222 --all-regions
# Auto-discover all accounts in your AWS Organization
cleancloud scan --provider aws --org --all-regions --concurrency 5
Permissions required:
| Role | Permissions |
|---|---|
| Hub account | 16 read-only permissions + `sts:AssumeRole` on spoke roles |
| Hub account (`--org` only) | Above + `organizations:ListAccounts` |
| Spoke accounts | 16 read-only permissions (same as single-account scan — no extra changes) |
.cleancloud/accounts.yaml — commit this to your repo:
role_name: CleanCloudReadOnlyRole
accounts:
- id: "111111111111"
name: production
- id: "222222222222"
name: staging
Spoke account trust policy — allows the hub to assume the role:
{
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::<HUB_ACCOUNT_ID>:root" },
"Action": "sts:AssumeRole"
}
How it works:
- Hub-and-spoke — CleanCloud assumes `CleanCloudReadOnlyRole` in each target account using STS. No persistent access, no stored credentials.
- Three discovery modes — `.cleancloud/accounts.yaml` for explicit control, `--accounts` for quick ad-hoc scans, `--org` for full AWS Organizations auto-discovery.
- Efficient region detection — active regions are discovered once on the hub account and reused across all spokes. Without this: N accounts × 160 API calls just for region probing. With it: 160 calls once.
- Parallel with isolation — each account runs in its own thread with its own session. One account failing (AccessDenied, timeout) never affects the others.
- Partial-success visibility — if 2 regions fail and 7 succeed within an account, the account is marked `partial` with the failed regions named.
- Live progress — `[3/50] done production (123456789012) — 47s, 12 findings` printed as each account completes.
- Per-account cost breakdown — JSON output includes estimated monthly waste per account, sortable and scriptable.
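The parallel-with-isolation behavior described above can be sketched with a standard thread pool. This is illustrative only, not CleanCloud's source; `scan_account` is a stand-in for a real per-account scan:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def scan_account(account_id):
    """Stand-in for a per-account scan; raises to simulate AccessDenied."""
    if account_id == "222222222222":
        raise PermissionError("AccessDenied")
    return {"account": account_id, "findings": 12}

accounts = ["111111111111", "222222222222", "333333333333"]
results = {}
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(scan_account, a): a for a in accounts}
    for fut in as_completed(futures):
        acct = futures[fut]
        try:
            results[acct] = fut.result()
        except Exception as exc:
            # one account failing never affects the others
            results[acct] = {"account": acct, "error": str(exc)}

for acct in accounts:
    print(acct, results[acct])
```

Each future is resolved independently, so an `AccessDenied` in one account becomes a recorded error rather than aborting the run.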
Full setup guide (IAM policy, trust policy, IaC templates): AWS multi-account setup →
Multi-Subscription Scanning (Azure)
Built for enterprises running large Azure tenants. Scan every subscription in parallel with one identity — findings aggregated into one report with a per-subscription cost breakdown.
# Scan all subscriptions the service principal can access (default)
cleancloud scan --provider azure
# Auto-discover via Management Group
cleancloud scan --provider azure --management-group <MANAGEMENT_GROUP_ID>
# Explicit list
cleancloud scan --provider azure --subscription <SUB_1> --subscription <SUB_2>
Permissions required:
| Scope | Role |
|---|---|
| Each subscription | Reader (built-in) |
| Management Group (if using `--management-group`) | Reader + `Microsoft.Management/managementGroups/read` |
Assign Reader at the Management Group level and it inherits to all subscriptions underneath — no per-subscription role assignment needed:
az role assignment create \
--assignee <SERVICE_PRINCIPAL_CLIENT_ID> \
--role Reader \
--scope /providers/Microsoft.Management/managementGroups/<MANAGEMENT_GROUP_ID>
How it works:
- Flat identity model — one service principal, Reader at Management Group level. No cross-subscription role assumption, no hub-and-spoke complexity.
- Three discovery modes — all accessible (default), `--management-group` for auto-discovery, `--subscription` for explicit control.
- Parallel with isolation — each subscription runs in its own thread. One subscription failing (permission denied, timeout) never affects the others.
- Graceful permission handling — rules that fail with 403 are reported as skipped (with the missing permission named), not as scan failures.
- Per-subscription cost breakdown — output shows estimated monthly waste per subscription so you can see exactly which subscription is dirty.
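A per-subscription cost breakdown like the one described is essentially a group-by over findings. A sketch, assuming a flat findings list with illustrative field names (`subscription`, `estimated_monthly_cost`):

```python
from collections import defaultdict

# Illustrative findings — field names assumed, not the documented schema.
findings = [
    {"subscription": "prod-sub", "estimated_monthly_cost": 120},
    {"subscription": "prod-sub", "estimated_monthly_cost": 45},
    {"subscription": "dev-sub",  "estimated_monthly_cost": 8},
]

# Sum estimated waste per subscription
waste = defaultdict(float)
for f in findings:
    waste[f["subscription"]] += f.get("estimated_monthly_cost", 0)

# Sort dirtiest-first so the worst subscription is obvious
for sub, cost in sorted(waste.items(), key=lambda kv: -kv[1]):
    print(f"{sub}: ~${cost:.0f}/month")
```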
Full setup guide (RBAC, Workload Identity, Management Group): Azure multi-subscription setup →
Multi-Project Scanning (GCP)
Built for teams running multiple GCP projects. Scan all accessible projects in parallel with one identity — findings aggregated into one report with a per-project cost breakdown.
# Scan all projects the identity can access (default — uses ADC project discovery)
cleancloud scan --provider gcp --all-projects
# Scan specific projects
cleancloud scan --provider gcp --project my-project-123 --project another-project-456
# With region filter
cleancloud scan --provider gcp --all-projects --region us-central1
Permissions required (per project):
| Permission | Required for |
|---|---|
| `compute.disks.list` | Unattached persistent disks |
| `compute.instances.list` | Stopped VM instances |
| `compute.addresses.list` | Unused regional static IPs |
| `compute.globalAddresses.list` | Unused global static IPs |
| `compute.snapshots.list` | Old disk snapshots |
| `cloudsql.instances.list` | Idle Cloud SQL instances |
| `monitoring.timeSeries.list` | SQL connection activity check |
All read-only permissions are covered by four predefined roles: roles/compute.viewer, roles/cloudsql.viewer, roles/monitoring.viewer, and roles/browser (required for --all-projects project enumeration). For CI/CD, use Workload Identity Federation — see GCP setup →.
How it works:
- Application Default Credentials — uses the standard GCP auth chain: `GOOGLE_APPLICATION_CREDENTIALS` → gcloud ADC → Workload Identity → metadata server attached service account. No proprietary auth mechanism.
- Auto-discovery — with `--all-projects`, CleanCloud enumerates all ACTIVE projects the identity has access to via the Resource Manager API. With `--project`, only the specified projects are scanned.
- Parallel with isolation — each project runs in its own thread. One project failing (permission denied, API not enabled) never affects the others.
- Graceful degradation — rules that fail with 403 are recorded as skipped (with the missing permission named), not as scan failures. The Cloud SQL rule is silently skipped if the SQL Admin API is not enabled.
- Per-project cost breakdown — output shows estimated monthly waste per project.
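The graceful-degradation pattern (a 403 becomes a skipped rule, not a scan failure) looks roughly like this sketch, which uses a stand-in error type rather than the real Google API client:

```python
class Forbidden(Exception):
    """Stand-in for the 403 error a real GCP client would raise."""
    def __init__(self, permission):
        super().__init__(f"missing permission: {permission}")
        self.permission = permission

def check_disks():
    # pretend this rule's API call succeeded
    return [{"rule": "gcp.disk.unattached", "resource": "disk-1"}]

def check_sql():
    # pretend the identity lacks the Cloud SQL list permission
    raise Forbidden("cloudsql.instances.list")

findings, skipped = [], []
for rule_name, check in [("disks", check_disks), ("cloudsql", check_sql)]:
    try:
        findings.extend(check())
    except Forbidden as exc:
        # record the rule as skipped with the missing permission named
        skipped.append({"rule": rule_name, "missing": exc.permission})

print(f"findings: {len(findings)}, skipped: {skipped}")
```

The scan still completes and the report names exactly which permission was missing, which matches what `cleancloud doctor` helps you fix up front.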
Full setup guide: GCP setup →
Roadmap
Policy-as-code — cleancloud.yaml with rule packs, per-team exceptions, and cost thresholds in config — the top FinOps governance ask for 2025/2026
More AI/ML waste rules — SageMaker notebook instances running unused, orphaned training artifacts, Vertex AI notebook instances idle
More AWS rules — S3 lifecycle gaps, Redshift idle, NAT Gateway cost leakage (internal services routing through NAT instead of VPC endpoints — S3, DynamoDB, ECR, SSM), unused VPC endpoints
More Azure rules — Azure Firewall idle, AKS node pool idle, Azure Batch unused pools
More GCP rules — GKE node pool idle, BigQuery slot waste, GCS cold storage, Cloud Run idle revisions
Rule filtering — --rules flag to run a subset of rules
Documentation
- `docs/rules.md` — Detection rules, signals, and evidence
- `docs/aws.md` — AWS IAM policy and OIDC setup
- `docs/azure.md` — Azure RBAC and Workload Identity setup
- `docs/gcp.md` — GCP IAM permissions and Application Default Credentials setup
- `docs/ci.md` — Automation, scheduled scans, and CI/CD integration
- `docs/example-outputs.md` — Full output examples
- `SECURITY.md` — Security policy and threat model
- `docs/infosec-readiness.md` — IAM Proof Pack, threat model
Found a bug? Open an issue
Feature request? Start a discussion
Questions? suresh@getcleancloud.com