Efterlev
Repo-native, agent-first compliance scanner for FedRAMP and DoD Impact Levels
Compliance scanning for SaaS teams pursuing FedRAMP 20x — it lives in your repo, not in a SaaS dashboard.
Efterlev reads your Terraform, classifies it against the 60 thematic Key Security Indicators, drafts FRMR-compatible attestations grounded in cited evidence, and proposes code-level remediations. Locally. No procurement cycle. No vendor account. Apache 2.0.
pipx install efterlev
cd path/to/your-terraform
efterlev init
export ANTHROPIC_API_KEY=sk-ant-...
efterlev report run
Pronounced "EF-ter-lev." From Swedish efterlevnad (compliance).
Or have an AI assistant do it for you
Paste this into Claude Code, Cursor, Codex, Kiro, or any other AI assistant with shell access. It'll confirm the repo root, recommend a backend (Anthropic API by default), pick the right model tier on Bedrock, install Efterlev, run the full pipeline, and brief you on the top 3 KSIs to focus on.
You are helping me run Efterlev (https://efterlev.com) against my Terraform
for the first time. Efterlev is a FedRAMP 20x compliance scanner that reads
Terraform, classifies it against 60 Key Security Indicators, and drafts
FRMR-compatible attestations with cited evidence (file-level always; HCL
line numbers when scanning .tf source directly). It runs locally; the only
outbound call is to the LLM endpoint I configure.
## Step 0 — confirm the target path
This is the silent footgun, so handle it first.
Ask me for the **absolute path to my repo root** — NOT a Terraform
subdirectory like `infra/terraform/`. Efterlev needs the repo root to
walk all three evidence sources:
- `**/*.tf` for the Terraform-source detectors
- `.github/workflows/*.yml` for the GitHub-workflow detectors
(4 detectors, ~10% of coverage)
- `.efterlev/manifests/*.yml` for procedural-KSI attestations
(covers the AFR / CED / INR themes, ~10% of coverage)
If I name a subdir, you'll silently miss workflows + manifests — about
20% of detector coverage disappears with no error message. So:
1. Ask me for the path.
2. Verify it's a git repo root: check for `.git/` AND at least one of
`**/*.tf` OR `.github/workflows/`. If only one is present, confirm
with me before proceeding ("I see Terraform but no workflows — is
this a repo root or a subdir?").
3. `cd` to that path and stay there for the rest of the run.
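The repo-root heuristic in steps 1–3 can be sketched in Python. This is a hypothetical helper, not part of Efterlev; it just encodes the decision the steps above describe:

```python
from pathlib import Path


def classify_target(path: str) -> str:
    """Rough repo-root heuristic: .git/ plus at least one evidence source.

    Returns "root" (proceed), "confirm" (ask "repo root or a subdir?"),
    or "reject" (not a git repo root).
    """
    p = Path(path)
    has_git = (p / ".git").exists()
    has_tf = any(p.rglob("*.tf"))
    has_workflows = (p / ".github" / "workflows").is_dir()
    if has_git and has_tf and has_workflows:
        return "root"
    if has_git and (has_tf or has_workflows):
        return "confirm"
    return "reject"
```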
## Step 1 — pick the LLM backend
**Always ask me this question — do NOT auto-decide based on env vars,
AWS profile state, or any other ambient signal.** The presence of an
`ANTHROPIC_API_KEY` in my environment doesn't mean I want to use it
for *this* run; many users have credentials for both backends and
testing one specifically is the whole point.
You may surface ambient state as informational context, but the choice
is mine:
> "I see `ANTHROPIC_API_KEY` is set, and `aws sts get-caller-identity`
> succeeds for region us-east-1. Which backend would you like to use
> for this run?
>
> (a) Direct Anthropic API — fastest path; uses your existing key.
> (b) AWS Bedrock — uses your AWS credentials; works for GovCloud
> and FedRAMP-authorized infra."
Only proceed once I've answered. If I'm genuinely undecided and ask
you to choose, then default to (a) and say so explicitly.
The trade-off summary, if I want it:
- **Anthropic API** — ~2 min setup, one env var, no AWS account.
- **Bedrock** — ~5–10 min setup (model-access opt-in + inference
profile selection), but works at FedRAMP-authorized scale.
## Step 2a — Anthropic API path
The fast path. ~2 minutes from zero to running.
1. Confirm I have an API key. Verify:
echo "${ANTHROPIC_API_KEY:0:10}"
Should start with `sk-ant-`. If empty, stop and tell me to grab
one at https://console.anthropic.com/settings/keys and export it.
2. Install or upgrade:
- First-time: `pipx install efterlev` (or `brew install pipx`
first if pipx is missing).
- Upgrade: `pipx upgrade efterlev` (no brackets — extras are
install-time only).
3. Confirm `efterlev --version` prints `0.1.6` or higher. If lower,
clean-reinstall: `pipx uninstall efterlev && pipx install efterlev`.
Then jump to Step 3.
## Step 2b — Bedrock path
Bedrock connects you to LLMs hosted on AWS. Same Anthropic models, just
billed through your AWS account.
### Step 2b.0 — pick the model family
Ask me which model family I want to use:
(a) Anthropic Claude — RECOMMENDED. This is what Efterlev's agents
are tuned for; the only family with full feature support today.
(b) OpenAI on Bedrock — not yet supported. On the v0.3+ roadmap.
(c) Other (AI21 / Cohere / Meta / Mistral) — not yet supported.
If I say anything other than Claude, gently flag it's not yet
supported, recommend Claude as the working path, and only proceed
if I explicitly confirm I want to wait for v0.3. The rest of this
step assumes Claude.
### Step 2b.1 — region and credentials
1. Ask me which AWS region to use. Default suggestion: `us-east-1`
(us-west-2 is the other reliable option). Verify creds:
aws sts get-caller-identity --region <region>
If this fails, stop — I need to run `aws configure` first.
2. Confirm Anthropic model access is enabled in this account. The #1
cold-start cliff. Visit:
https://console.aws.amazon.com/bedrock/home?region=<region>#/modelaccess
At least one Claude 4.x must show "Access granted." If not, request
access there (~near-instant for most accounts) and wait a minute to
propagate before continuing.
### Step 2b.2 — pick the tier
3. Discover available Claude inference profiles in that region.
CRITICAL: use `--no-paginate --max-results 1000`. The default
page size silently truncates at ~100, and older Claude 3.x
profiles fill the first page, hiding the 4.x ones. Run:
aws bedrock list-inference-profiles --region <region> \
--type-equals SYSTEM_DEFINED \
--no-paginate --max-results 1000 \
| jq -r '.inferenceProfileSummaries[]
| select(.inferenceProfileName | test("claude"; "i"))
| "\(.inferenceProfileName)\t\(.inferenceProfileArn)"'
Group results by tier (Opus / Sonnet / Haiku), pick the LATEST
version of each. Prefer regional `us.` / `eu.` profiles over
global. If NO 4.x profiles appear after `--no-paginate`, model
access from step 2 isn't actually granted — go back.
4. Present this as the recommendation, NOT a multiple-choice question.
Pick the latest Sonnet ARN as MODEL_ARN unless I explicitly object:
Recommended: Latest Sonnet (e.g. claude-sonnet-4-6)
— Best cost/quality balance for this workload.
— ~5× cheaper than Opus; on the 60-KSI sweep the
classification quality is indistinguishable from Opus.
— Run cost: ~$0.30–1 on a typical mid-size codebase.
— Latency: most runs complete in 60–90 seconds.
Override only if you have a specific reason:
— Latest Opus (e.g. claude-opus-4-7): higher reasoning
ceiling. ~5× the spend. Worth it for ambiguous codebases
or when you want the very best classification.
— Latest Haiku (e.g. claude-haiku-4-6): cheapest tier;
fine for smoke tests but quality dips on edge cases.
If a tier is unavailable in this account, omit it from the
alternatives. Capture the chosen `inferenceProfileArn` as
MODEL_ARN.
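The grouping logic in steps 3–4 (group by tier, keep the latest version of each) can be sketched like this. The profile names are illustrative and the version parsing is a name-based heuristic assuming the current `claude-<tier>-<major>-<minor>` naming; the real list comes from the AWS CLI call above:

```python
import re
from collections import defaultdict


def pick_latest_per_tier(profile_names: list[str]) -> dict[str, str]:
    """Group Claude inference-profile names by tier (opus/sonnet/haiku)
    and keep the highest-versioned name in each tier."""
    tiers: dict[str, list[tuple[tuple[int, ...], str]]] = defaultdict(list)
    for name in profile_names:
        m = re.search(r"claude[-.](opus|sonnet|haiku)[-.]?([\d.-]*)", name, re.I)
        if not m:
            continue
        # Parse "4-6" -> (4, 6) so tuple comparison orders versions.
        version = tuple(int(d) for d in re.findall(r"\d+", m.group(2))) or (0,)
        tiers[m.group(1).lower()].append((version, name))
    return {tier: max(entries)[1] for tier, entries in tiers.items()}
```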
### Step 2b.3 — install Efterlev
5. Install or upgrade with the Bedrock extra. **Pick the path that
matches your installer** — pipx and uv tool manage isolated venvs
at different paths, and using the wrong installer's commands
silently fails to find the package. `efterlev doctor` detects which
installer was used and gives the right hint when boto3 is missing,
but you can pre-empt the round-trip:
- **pipx users:**
- First-time: `pipx install 'efterlev[bedrock]'` (keep the quotes —
bracket extras need them; if pipx is missing, `brew install pipx`
first).
- Upgrade: `pipx upgrade efterlev` (drop the `[bedrock]` bracket —
extras are install-time only). If the prior install was non-
Bedrock (no boto3 in the venv), also run:
pipx inject efterlev boto3
Verify: `pipx runpip efterlev show boto3`.
- **uv tool users** (e.g. `uv tool install efterlev` previously):
- Install or re-install with the extra:
uv tool install --reinstall 'efterlev[bedrock]'
This is the right command whether you're starting fresh or
adding boto3 to an existing uv-managed install. Pre-v0.1.9
runbook pointed at `pipx inject efterlev boto3` — that's wrong
for uv-installed efterlev (different venv path; pipx-targeted
commands silently fail to find the package).
- Confirm `efterlev --version` prints `0.1.6` or higher. If lower,
clean-reinstall:
pipx uninstall efterlev && pipx install 'efterlev[bedrock]'
## Step 3 — init
You're already at the repo root from Step 0. Run init with `--force`
by default — it preserves customer-authored `.efterlev/manifests/`
(those are the only sticky state you'd ever want to keep) and
regenerates the FRMR cache, provenance store, and `config.toml`. On
upgrades especially, the regenerated `config.toml` picks up any new
defaults the new release shipped (especially relevant on the Bedrock
path, where the latest available inference profiles drift between
releases).
- Anthropic backend:
`efterlev init --target . --force`
- Bedrock backend:
`efterlev init --target . --force \`
` --llm-backend=bedrock --llm-region=<region> --llm-model=<MODEL_ARN>`
Skip `--force` only if I tell you I have an in-progress workspace
with hand-edited `config.toml` I want to preserve.
## Step 4 — doctor + scan
1. Run `efterlev doctor` — surface any warnings or fails. The doctor
actively pings the LLM (Bedrock InvokeModel or Anthropic API) so
credential / model-access issues surface here before agent runs
spend money.
2. Pick a scan mode. Try plan-JSON first when `terraform` is on PATH
(gives full module coverage); fall back to HCL mode if planning
isn't possible. Common failure modes are EITHER `terraform init`
failing on a missing/locked remote backend, OR `terraform plan`
failing on missing variables / "(known after apply)" / etc. Both
are recoverable.
- **Plan-JSON (preferred):**
terraform init
terraform plan -out plan.bin
terraform show -json plan.bin > plan.json
efterlev scan --plan plan.json
If `terraform init` fails with a missing S3/Terraform Cloud
backend (very common when scanning a repo you don't operate),
skip the remote-state machinery:
terraform init -backend=false
terraform plan -refresh=false -out plan.bin
terraform show -json plan.bin > plan.json
efterlev scan --plan plan.json
**Reality check:** some repos have a `terraform { backend "s3" {} }`
block that `-refresh=false` doesn't bypass — `terraform plan`
still tries to acquire backend state. If you see "Backend
initialization required" after the `-backend=false` workaround,
drop straight to HCL mode. Don't burn 5 minutes on the dance.
If `terraform plan` STILL fails on missing required variables,
create a throwaway `.tfvars` with placeholders and pass it via
`-var-file`. If both routes fail, drop to HCL mode.
- **HCL fallback:** `efterlev scan`. Keeps HCL line numbers in
citations (which plan-JSON loses); the trade-off is missed
coverage on resources defined inside upstream modules.
## Step 5 — agents and a useful brief
1. Run `efterlev agent gap` (~60–90s; ~$0.30–1 on Sonnet, ~$1–2 on
Opus). Print the absolute path to the HTML report.
2. **Read the gap-report JSON** at
`.efterlev/reports/gap/gap-<ts>.json` (newest one) and tell me:
- The classification breakdown — count of `implemented` /
`partial` / `not_implemented` / `not_applicable` /
`evidence_layer_inapplicable`.
- Workspace boundary state (`workspace_boundary_state` field) —
if `boundary_undeclared`, recommend running
`efterlev boundary set --include 'pattern' --exclude 'pattern'`
so the POA&M filter has scope to enforce. Important: if I declare
`--include 'infra/terraform/**'`, ALL workflow evidence
(`.github/workflows/*.yml`) becomes `out_of_boundary` because
workflows aren't under `infra/terraform/`. To keep workflow
findings in scope, pass two patterns:
`--include 'infra/terraform/**' --include '.github/workflows/**'`.
- **Top 3 KSIs to focus on**: pick the highest-impact
`not_implemented` classifications. Bias toward the SVC theme
(encryption, integrity, transport) and IAM (MFA, federation)
since those are typically the most-cited finding categories
in 3PAO reviews. For each, give me one sentence: KSI id +
name, one-line rationale from the report, and the canonical
fix.
- Any KSI classified `partial` — these are the texture cases
where `efterlev agent remediate --ksi <id>` produces the most
useful diff (mid-journey, not zero-to-one). Surface 2 of them.
3. Ask if I also want:
- Narratives: `efterlev agent document` (~$1–2 on Sonnet —
always Sonnet by design; the job is structured extractive
writing, not novel reasoning).
- POA&M markdown: `efterlev poam` (free, deterministic — no
LLM call). Lands at `.efterlev/reports/poam/poam-<ts>.md`.
4. After document + poam complete, surface ONE more brief:
- The mode breakdown of the documentation report —
`agent_drafted` vs `deterministic_template` counts. The
`deterministic_template` rows are inapplicable KSIs that
skipped the LLM (the cost-saving path; ~70% of Sonnet spend
would have been wasted on procedural-only KSIs without it).
- The POA&M's open-item count and out-of-boundary-excluded
count.
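The brief in step 2 amounts to a small summary over the gap-report JSON. A sketch follows; the schema here is assumed from the field names in this document (a per-KSI `classification` plus a top-level `workspace_boundary_state`, with the KSI list under a hypothetical `ksis` key), so check the real file before relying on it:

```python
import json
from collections import Counter
from pathlib import Path


def summarize_gap_report(path: str) -> dict:
    """Summarize a gap report: classification breakdown, boundary state,
    and a crude stand-in for "top 3" (first three not_implemented KSIs)."""
    report = json.loads(Path(path).read_text())
    counts = Counter(k["classification"] for k in report.get("ksis", []))
    return {
        "breakdown": dict(counts),
        "boundary": report.get("workspace_boundary_state", "unknown"),
        "focus": [k["id"] for k in report.get("ksis", [])
                  if k["classification"] == "not_implemented"][:3],
    }
```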
## Constraints
- Don't run `efterlev agent remediate` without me asking — it generates
code-level diffs and I want to be in the loop.
- Don't modify my Terraform. Don't commit anything.
- Soft cost cap: $3 on the Anthropic path, $5 on the Bedrock path
(Bedrock retries can burn more on first-run config issues). Stop and
check back before exceeding.
- If anything fails or surprises you, stop and ask — don't paper over.
## When done
Brief me with:
- Counts of `implemented` / `partial` / `not_implemented` /
`evidence_layer_inapplicable` KSIs.
- Paths to the gap report, FRMR JSON, and POA&M markdown.
- Anything notable: secrets caught by the redaction layer, KSIs flagged
for review, modules where evidence was sparse.
Why this exists
A 100-person SaaS company just got told by its biggest prospect: "we'll buy, but only if you're FedRAMP Moderate."
The team googles it. Consulting engagements start at $250K. SaaS compliance platforms cover SOC 2 beautifully and treat FedRAMP as a footnote. Enterprise GRC tooling is priced for the wrong scale. A NIST document family runs to thousands of pages.
What they actually need is something that reads their Terraform and tells them, in their own language, what's wrong and how to fix it. Something a single engineer can install on a Tuesday and show results at Wednesday's standup. Output concrete enough that their 3PAO can use it; honest enough that the 3PAO won't throw it out.
Efterlev is that tool.
It targets FedRAMP 20x — the new authorization track that replaces narrative-heavy System Security Plans with measurable outcomes called Key Security Indicators. KSIs are concrete things ("encrypt network traffic," "enforce phishing-resistant MFA") that can be assessed against actual evidence rather than long descriptions of intent. Most new SaaS authorizations starting in 2026 will target this track. Efterlev's primary internal abstraction is the KSI; FRMR (the machine-readable format FedRAMP 20x is standardizing on) is the primary output.
What it does
- Scans your Terraform — both raw `.tf` files and `terraform show -json` plan output — for evidence of 60 thematic KSIs, backed by underlying NIST 800-53 Rev 5 controls
- Classifies each KSI as `implemented`, `partial`, `not_implemented`, `not_applicable`, or `evidence_layer_inapplicable` (the honest answer for procedural KSIs no scanner can see)
- Drafts FRMR-compatible attestation JSON grounded in that evidence — every assertion cites its source file (and HCL line numbers when scanning `.tf` directly; plan-JSON mode resolves modules at the cost of file-level-only citations until line recovery lands in v0.2)
- Proposes code-level remediation diffs you can review, edit, or apply
- Generates a reviewer-ready POA&M markdown for every open KSI
- Traces every claim back to the file (and HCL line range, in `.tf` mode) that produced it (`efterlev provenance show <id>` — accepts truncated SHA prefixes)
- Watches: `efterlev report run --watch` re-runs the full pipeline on every save (debounced 2s)
Everything runs locally. The only outbound network call is to your configured LLM endpoint — direct Anthropic API by default, or AWS Bedrock (`[bedrock]` extra) for FedRAMP-authorized GovCloud deployments. Scanner output is fully deterministic and offline.
What it doesn't do
- It does not produce an Authorization to Operate. Humans and 3PAOs do that.
- It does not certify compliance. It produces drafts that accelerate the human review cycle.
- It does not guarantee LLM-generated narratives are correct. Every claim carries `requires_review: Literal[True]` at the type level — not a flag, not a string.
- It does not cover SOC 2, ISO 27001, HIPAA, or GDPR. Other tools serve those well.
- It does not scan live cloud infrastructure (yet — v1.5+).
- It does not replace AWS Config / Security Hub for runtime evaluation. Efterlev is the pre-deploy IaC layer; AWS-native is the runtime evidence layer. See docs/aws-coexistence.md.
For the honest full accounting, see LIMITATIONS.md.
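The type-level review marker mentioned above can be pictured with a frozen dataclass. This is a sketch of the pattern, not Efterlev's actual class; the field names are illustrative:

```python
from dataclasses import dataclass
from typing import Literal


@dataclass(frozen=True)
class DraftClaim:
    """Sketch: requires_review is Literal[True], so there is no value
    that clears it -- mypy rejects requires_review=False at type-check
    time, and frozen=True blocks mutation at runtime."""
    text: str
    evidence_ids: tuple[str, ...]
    requires_review: Literal[True] = True
```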
How to run it
efterlev init # creates .efterlev/ workspace
efterlev scan # raw .tf files
# OR for module-composed codebases (the dominant pattern):
terraform init && terraform plan -out plan.bin && terraform show -json plan.bin > plan.json
efterlev scan --plan plan.json # ~60% more evidence on real codebases
efterlev agent gap # KSI-by-KSI classification (Opus 4.7)
efterlev agent document # FRMR JSON + HTML attestations (Sonnet 4.6)
efterlev agent remediate --ksi KSI-SVC-SNT # Terraform diff that closes the gap (Opus 4.7)
efterlev poam # POA&M markdown for every open KSI
efterlev provenance show <record_id> # walk any claim back to source
Or just:
efterlev report run # full pipeline: init → scan → gap → document → poam
efterlev report run --watch # re-run on every file change (2s debounce)
Pre-flight check: `efterlev doctor` (Python version, workspace, FRMR cache freshness, API key shape, Bedrock creds — all offline).
Wire it into CI: the drop-in GitHub Action at `.github/workflows/pr-compliance-scan.yml` posts a sticky markdown PR comment with findings + detector coverage. See docs/ci-integration.md. Tutorials for GitLab CI, CircleCI, and Jenkins are on the docs site.
How it's built
Three layers, each with a clear job:
- Detectors — small, deterministic Python folders. One detector = one folder = one compliance pattern. No AI. The detector library is the community-contributable surface.
- Primitives — typed functions wrapping the things agents need ("scan this directory," "validate this output," "load that catalog"). MCP-exposed.
- Agents — focused reasoning loops backed by Claude. Each has its system prompt in a plain `.md` file you can read and audit. AI is used for the parts where reasoning matters; never for the parts where determinism does.
This split — deterministic for evidence, AI for reasoning, different model weights for different cognitive loads — is the most important design decision in the project. It's what lets us tell auditors and 3PAOs the truth: scanner findings are verifiable facts about your code; AI claims are drafts you can audit but should not blindly trust.
Hallucination defenses are structural, not advisory. Every AI-generated claim links to evidence records via content-addressed IDs. Prompts wrap evidence in <evidence_NONCE> XML fences with a per-run nonce; a post-generation validator rejects any output citing IDs the model didn't actually see. The provenance store rejects any claim whose derived_from cites IDs that don't resolve. The DRAFT marker is Literal[True] at the type level — there's no flag to clear it.
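The nonce-fencing and citation-check ideas can be sketched as follows. This is a minimal illustration of the mechanism described above, with an assumed shape for evidence records, not Efterlev's implementation:

```python
import secrets


def fence_evidence(records: dict[str, str]) -> tuple[str, set[str]]:
    """Wrap evidence texts in per-run nonce fences and return the set of
    IDs the model will actually see."""
    nonce = secrets.token_hex(4)
    body = "\n".join(
        f"<evidence_{nonce} id={rid}>{text}</evidence_{nonce}>"
        for rid, text in records.items()
    )
    return body, set(records)


def validate_citations(cited_ids: set[str], seen_ids: set[str]) -> None:
    """Reject output that cites evidence IDs the model was never shown."""
    phantom = cited_ids - seen_ids
    if phantom:
        raise ValueError(f"uncited-evidence violation: {sorted(phantom)}")
```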
Secrets never leave the machine unredacted. Every LLM prompt is unconditionally scrubbed for 7 secret families (AWS keys, GCP keys, GitHub tokens, Slack tokens, Stripe keys, PEM private keys, JWTs). The scrubber has no opt-out path. Each redaction writes an audit line to .efterlev/redactions/<scan_id>.jsonl (mode 0o600); review with efterlev redaction review.
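A scrubber of this shape might look like the sketch below, showing two of the seven families. The regexes are simplified illustrations, not the patterns Efterlev actually ships:

```python
import re

# Two of the seven secret families, as illustration; patterns simplified.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}


def scrub(prompt: str) -> tuple[str, list[str]]:
    """Redact known secret shapes; return scrubbed text plus an audit
    trail of which families fired (one entry per redaction)."""
    audit: list[str] = []
    for family, pattern in SECRET_PATTERNS.items():
        def _redact(m: re.Match) -> str:
            audit.append(family)
            return f"[REDACTED:{family}]"
        prompt = pattern.sub(_redact, prompt)
    return prompt, audit
```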
LLM calls degrade predictably. Transient errors retry with exponential backoff + full jitter (3 attempts). On primary-model exhaustion, falls back once from Opus to Sonnet before surfacing a failure. Non-retryable errors (auth, invalid request) fail immediately.
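Exponential backoff with full jitter, as described, can be sketched like this (hypothetical helper; the injectable `sleep` is there only to make the sketch testable):

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a retryable error (throttling, timeout)."""


def call_with_retry(invoke, attempts: int = 3, base: float = 1.0,
                    sleep=time.sleep):
    """Retry a transient failure with exponential backoff + full jitter:
    sleep uniformly in [0, base * 2**attempt) between attempts."""
    for attempt in range(attempts):
        try:
            return invoke()
        except TransientError:
            if attempt == attempts - 1:
                raise  # exhausted; caller handles fallback
            sleep(random.uniform(0, base * 2 ** attempt))
```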
For deeper architectural detail, see docs/architecture.md. For the design history including reversals and tradeoffs, see DECISIONS.md.
Coverage at v0.1.3
- 45 detectors — 38 KSI-mapped + 7 supplementary 800-53-only (where FRMR 0.9.43-beta doesn't yet map the underlying control)
- 31 of 60 thematic KSIs covered, across 8 of 11 themes (CNA, CMT, IAM, MLA, PIY, RPL, SCR, SVC). The remaining three themes (AFR, CED, INR) are entirely procedural — covered by customer-authored Evidence Manifests rather than detector evidence.
- Detector sources: 41 Terraform + 4 GitHub workflows
- Three agents: Gap (Opus 4.7), Documentation (Sonnet 4.6), Remediation (Opus 4.7)
- Two LLM backends: Anthropic API (default) + AWS Bedrock (`[bedrock]` extra, GovCloud-deployable)
- 1053 tests passing; mypy strict + ruff check + ruff format clean across 172 source files
Coverage relative to FedRAMP 20x Phase 2's 70% automated-validation threshold: the threshold applies to the customer's whole authorization package, not to any single tool. Efterlev covers 31 KSIs at the IaC layer pre-deploy; AWS-native services (Config, Security Hub, CloudTrail, Inspector, GuardDuty) cover roughly 14 KSIs at the runtime layer. Honest union: ~33 of 63 KSIs (~52%) — distinct layers, not double-counted. Reaching 70% takes both. See docs/aws-coexistence.md for the strategic mapping and docs/csx-mapping.md for how the outputs map to CSX-SUM / MAS / ORD.
Where Efterlev fits
Sits alongside AWS Config / Security Hub / CloudTrail, not in place of them:
| | Efterlev | AWS-native |
|---|---|---|
| When | Pre-deploy, on every commit or save | Post-deploy, on a 3-day cadence |
| Reads | Terraform `.tf` + `.github/workflows/*.yml` | Live AWS API state, runtime events |
| Output | Per-KSI attestation JSON + POA&M markdown | Config evaluations, Security Hub findings, CloudTrail logs |
| Cost | Free (Apache 2.0, runs locally) | AWS spend |
A FedRAMP 20x customer pursuing the 70% automated threshold typically wires both, plus procedural Evidence Manifests under .efterlev/manifests/*.yml for the procedural-only themes detectors can't see.
Run it from another AI session
efterlev mcp serve
Exposes every CLI verb as an MCP tool over stdio. Point Claude Code (or any MCP client) at it and drive scans, agent calls, and provenance walks from another AI session. Our own agents use the same MCP interface — that's how we know it works. If you want to build a compliance workflow Efterlev doesn't ship, write your own agent against the MCP surface; you don't need to fork the codebase.
Documentation
Full docs site: efterlev.com — quickstart, concepts, tutorials (CI integration, GovCloud deployment, writing detectors, customizing agent prompts), CLI reference, and comparisons against Paramify, Comp AI, Vanta/Drata, and traditional consulting.
In this repo:
- `docs/architecture.md` — three-layer architecture in depth
- `docs/aws-coexistence.md` — how Efterlev fits next to AWS-native services
- `docs/ci-integration.md` — drop-in GitHub Action for PR compliance scans
- `docs/csx-mapping.md` — outputs mapped to CSX-SUM / MAS / ORD
- `docs/deploy-govcloud-ec2.md` — running inside an AWS GovCloud boundary
- `docs/icp.md` — Ideal Customer Profile; the lens for every product decision
- `docs/dual_horizon_plan.md` — roadmap beyond v0.1.0
- `CHANGELOG.md` — release-by-release record
Contributing
We want contributors. The detector library is designed to make the common contribution — "here's a new KSI indicator I can evidence from Terraform" — a self-contained folder that doesn't touch the rest of the codebase.
CONTRIBUTING.md has the five-minute path from git clone to running tests, and the hour path from idea to open PR. Community conduct: Contributor Covenant 2.1. Good first issues are labeled good first issue on GitHub. The most valuable contributions right now are new detectors covering KSIs on the roadmap.
Status, governance, license
Status: v0.1.3 is current. See CHANGELOG.md for per-release notes (v0.1.0 first public on 2026-04-29; four patch releases since, all addressing real-world first-run issues caught by deep-dive shakedowns). Verify a published artifact with bash scripts/verify-release.sh v0.1.3 (PEP 740 PyPI attestations + cosign keyless OIDC + SLSA provenance on ghcr.io/efterlev/efterlev).
Governance: Benevolent-dictator model today (@lhassa8), transitioning to a technical steering committee at 10 sustained-activity contributors. Full model in GOVERNANCE.md. Architectural decisions: DECISIONS.md. The project may eventually be donated to a neutral foundation (OpenSSF / Linux Foundation / CNCF) if contributor diversity warrants — that decision is not made and not time-boxed.
License: Apache 2.0. See LICENSE.
Security: Coordinated disclosure process in SECURITY.md. Threat model for Efterlev itself: THREAT_MODEL.md. The pre-launch security review (signed by the maintainer) is at docs/security-review-2026-04.md.
Credits
Efterlev was bootstrapped in a 4-day hackathon using Claude Code. The architecture commits to keeping Claude Code (and other MCP-capable agents) as first-class integration partners — that's what "agent-first" means here, structurally, not as marketing.
Built on compliance-trestle for OSCAL catalog loading, on the FedRAMP Machine-Readable (FRMR) catalog, and on the NIST SP 800-53 Rev 5 catalog. Those projects make this one possible.