
Repo-native, agent-first compliance scanner for FedRAMP and DoD Impact Levels


Efterlev

Compliance scanning for SaaS teams pursuing FedRAMP 20x — a tool that lives in your repo, not in a SaaS dashboard.

Efterlev reads your Terraform, classifies it against the 60 thematic Key Security Indicators, drafts FRMR-compatible attestations grounded in cited evidence, and proposes code-level remediations. Locally. No procurement cycle. No vendor account. Apache 2.0.

pipx install efterlev
efterlev quickstart                        # one-command demo against a bundled fixture
                                           # (~3 min on Sonnet, ~$0.30; runs init+scan
                                           #  only without ANTHROPIC_API_KEY set)

Or against your own repo:

pipx install efterlev
cd path/to/your-repo                       # repo root, NOT a Terraform subdir
efterlev init --target . --force
export ANTHROPIC_API_KEY=sk-ant-...
efterlev report run                        # init → scan → gap → document → poam

Pronounced "EF-ter-lev." From Swedish efterlevnad (compliance).

Or have an AI assistant do it for you

Open Claude Code, Cursor, Codex, Kiro, or any AI assistant with shell access, and paste the canonical prompt at docs/ai-quickstart-prompt.md. The assistant will confirm the repo root, ask which backend you want, install Efterlev, run the full pipeline, and brief you on the top 3 KSIs to focus on. Cost: ~$0.30–1 on Sonnet 4.6 (the recommended default), ~5–10 minutes wall.


Why this exists

A 100-person SaaS company just got told by its biggest prospect: "we'll buy, but only if you're FedRAMP Moderate."

The team googles it. Consulting engagements start at $250K. SaaS compliance platforms cover SOC 2 beautifully and treat FedRAMP as a footnote. Enterprise GRC tooling is priced for the wrong scale. A NIST document family runs to thousands of pages.

What they actually need is something that reads their Terraform and tells them, in their own language, what's wrong and how to fix it. Something a single engineer can install on a Tuesday and show results at Wednesday's standup. Output concrete enough that their 3PAO can use it; honest enough that the 3PAO won't throw it out.

Efterlev is that tool.

It targets FedRAMP 20x — the new authorization track that replaces narrative-heavy System Security Plans with measurable outcomes called Key Security Indicators. KSIs are concrete things ("encrypt network traffic," "enforce phishing-resistant MFA") that can be assessed against actual evidence rather than long descriptions of intent. Most new SaaS authorizations starting in 2026 will target this track. Efterlev's primary internal abstraction is the KSI; FRMR (the machine-readable format FedRAMP 20x is standardizing on) is the primary output.


What it does

  • Scans your Terraform — both raw .tf files and terraform show -json plan output — for evidence of 60 thematic KSIs, backed by underlying NIST 800-53 Rev 5 controls
  • Classifies each KSI as implemented, partial, not_implemented, not_applicable, or evidence_layer_inapplicable (the honest answer for procedural KSIs no scanner can see)
  • Drafts FRMR-compatible attestation JSON grounded in cited evidence — every assertion cites its source file (and HCL line numbers when scanning .tf directly; plan-JSON mode resolves modules at the cost of file-level-only citations)
  • Proposes code-level remediation diffs you can review, edit, or apply
  • Generates a reviewer-ready POA&M markdown for every open KSI, with out-of-boundary scope filtering
  • Traces every claim back to the file (and HCL line range, in .tf mode) that produced it via efterlev provenance show <id> — accepts truncated SHA prefixes
  • Watches: efterlev report run --watch re-runs the full pipeline on every save (debounced 2s)
  • Captures token telemetry so you can audit per-run LLM cost without consulting CloudWatch
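The provenance mechanics behind `efterlev provenance show` can be pictured with a minimal sketch. The function and record names below are hypothetical, not Efterlev's API; the point is content-addressed IDs plus truncated-prefix lookup:

```python
import hashlib

def claim_id(payload: str) -> str:
    """Content-address a record: its ID is the SHA-256 of its canonical payload."""
    return hashlib.sha256(payload.encode()).hexdigest()

def resolve(prefix: str, store: dict[str, str]) -> str:
    """Resolve a truncated ID prefix to exactly one full ID."""
    matches = [full_id for full_id in store if full_id.startswith(prefix)]
    if len(matches) != 1:
        raise KeyError(f"prefix {prefix!r} matched {len(matches)} records, expected 1")
    return matches[0]

records = ["aws_s3_bucket.logs: encryption enabled",
           "aws_lb.app: TLS 1.2 policy attached"]
store = {claim_id(r): r for r in records}
full = claim_id(records[0])
assert resolve(full[:8], store) == full  # a short prefix is enough here
```

Because IDs are derived from content, any tampering with a stored record changes the hash it would produce, which is what makes a tamper-evidence sweep like `provenance verify` possible.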

Everything runs locally. The only outbound network call is to your configured LLM endpoint — direct Anthropic API by default, or AWS Bedrock ([bedrock] extra) for FedRAMP-authorized GovCloud deployments. Scanner output is fully deterministic and offline.

What it doesn't do

  • It does not produce an Authorization to Operate. Humans and 3PAOs do that.
  • It does not certify compliance. It produces drafts that accelerate the human review cycle.
  • It does not guarantee LLM-generated narratives are correct. Every claim carries requires_review: Literal[True] at the type level — not a flag, not a string.
  • It does not cover SOC 2, ISO 27001, HIPAA, or GDPR. Other tools serve those well.
  • It does not scan live cloud infrastructure (yet — v1.5+).
  • It does not replace AWS Config / Security Hub for runtime evaluation. Efterlev is the pre-deploy IaC layer; AWS-native is the runtime evidence layer. See docs/aws-coexistence.md.

For the honest full accounting, see LIMITATIONS.md.


How to run it

efterlev init --target . --force               # creates .efterlev/ workspace
efterlev boundary set \                        # declare authorization scope
  --include 'infra/terraform/**' \
  --include '.github/workflows/**'
efterlev doctor                                # pre-flight check (Python, FRMR cache,
                                               #  API key shape, Bedrock creds, LLM ping)
efterlev scan                                  # raw .tf files
# OR for module-composed codebases (the dominant pattern):
terraform init && terraform plan -out plan.bin && terraform show -json plan.bin > plan.json
efterlev scan --plan plan.json                 # ~60% more evidence on real codebases

efterlev agent gap                             # KSI-by-KSI classification (Opus 4.7)
efterlev agent document                        # FRMR JSON + HTML attestations (Sonnet 4.6)
efterlev agent remediate --ksi KSI-SVC-SNT     # Terraform diff that closes the gap (Opus 4.7)
efterlev poam                                  # POA&M markdown for every open KSI
efterlev provenance show <prefix>              # walk any claim back to source (8-char prefix OK)
efterlev provenance verify                     # tamper-evidence sweep

Or just:

efterlev report run                            # full pipeline: init → scan → gap → document → poam
efterlev report run --watch                    # re-run on every file change (2s debounce)

Wire it into CI: drop-in GitHub Action at .github/workflows/pr-compliance-scan.yml posts a sticky markdown PR comment with findings + detector coverage. See docs/ci-integration.md.
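As a rough picture of the shape such a workflow takes, here is an illustrative sketch; the shipped .github/workflows/pr-compliance-scan.yml is authoritative and its steps may differ:

```yaml
# Illustrative only — see the shipped pr-compliance-scan.yml for the real workflow.
name: pr-compliance-scan
on: pull_request
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx install efterlev
      - run: efterlev init --target . --force
      - run: efterlev report run
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```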


How it's built

Three layers, each with a clear job:

  • Detectors — small, deterministic Python folders. One detector = one folder = one compliance pattern. No AI. The detector library is the community-contributable surface.
  • Primitives — typed functions wrapping the things agents need ("scan this directory," "validate this output," "load that catalog"). MCP-exposed.
  • Agents — focused reasoning loops backed by Claude. Each has its system prompt in a plain .md file you can read and audit. AI is used for the parts where reasoning matters; never for the parts where determinism does.
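To make the detector layer concrete, here is a hypothetical detector in that shape: one deterministic function, one pattern, no AI. It is not a real detector from the library, and real detectors parse HCL properly; a regex is enough to show the contract:

```python
import re

def detect_unencrypted_alb_listener(tf_source: str) -> list[dict]:
    """Hypothetical detector: flag plain-HTTP ALB listeners in raw HCL."""
    findings = []
    for match in re.finditer(r'resource\s+"aws_lb_listener"\s+"(\w+)"', tf_source):
        # Crude fixed-size window standing in for real HCL block parsing.
        block = tf_source[match.end():match.end() + 500]
        if re.search(r'protocol\s*=\s*"HTTP"', block):
            findings.append({
                "detector": "alb-listener-unencrypted",
                "resource": f"aws_lb_listener.{match.group(1)}",
                "ksi": "KSI-SVC (encrypt network traffic)",
                "line": tf_source.count("\n", 0, match.start()) + 1,
            })
    return findings

sample = '''
resource "aws_lb_listener" "web" {
  protocol = "HTTP"
  port     = 80
}
'''
assert detect_unencrypted_alb_listener(sample)[0]["resource"] == "aws_lb_listener.web"
```

The output is plain data with a source location attached, which is what lets the AI layers cite evidence instead of asserting it.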

This split — deterministic for evidence, AI for reasoning, different model weights for different cognitive loads — is the most important design decision in the project. It's what lets us tell auditors and 3PAOs the truth: scanner findings are verifiable facts about your code; AI claims are drafts you can audit but should not blindly trust.

Hallucination defenses are structural, not advisory. Every AI-generated claim links to evidence records via content-addressed IDs. Prompts wrap evidence in <evidence_NONCE> XML fences with a per-run nonce; a post-generation validator rejects any output citing IDs the model didn't actually see. The provenance store rejects any claim whose derived_from cites IDs that don't resolve. The DRAFT marker is Literal[True] at the type level — there's no flag to clear it.
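The fence-and-validate pattern can be sketched as follows. The ID format and function names are illustrative assumptions, not Efterlev's internals:

```python
import re
import secrets

def fence_evidence(evidence: dict[str, str]) -> tuple[str, str]:
    """Wrap evidence in a per-run nonce fence before it enters the prompt (sketch)."""
    nonce = secrets.token_hex(4)
    body = "\n".join(f"[{eid}] {text}" for eid, text in evidence.items())
    return nonce, f"<evidence_{nonce}>\n{body}\n</evidence_{nonce}>"

def validate_citations(output: str, evidence: dict[str, str]) -> list[str]:
    """Post-generation check: return any cited ID the model was never shown."""
    cited = re.findall(r"\[(ev-[0-9a-f]+)\]", output)
    return [eid for eid in cited if eid not in evidence]

evidence = {"ev-a1b2": "aws_kms_key.main has enable_key_rotation = true"}
nonce, prompt_block = fence_evidence(evidence)
assert f"<evidence_{nonce}>" in prompt_block
assert validate_citations("Rotation is enabled [ev-a1b2].", evidence) == []
assert validate_citations("See [ev-dead].", evidence) == ["ev-dead"]
```

Any nonempty return from the validator means the output invented a citation and gets rejected rather than stored.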

Secrets never leave the machine unredacted. Every LLM prompt is unconditionally scrubbed for 7 secret families (AWS keys, GCP keys, GitHub tokens, Slack tokens, Stripe keys, PEM private keys, JWTs). The scrubber has no opt-out path. Each redaction writes an audit line to .efterlev/redactions/<scan_id>.jsonl (mode 0o600); review with efterlev redaction review.
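The scrubbing step reduces to pattern substitution plus an audit trail. A minimal sketch covering two of the seven families (patterns approximate, names hypothetical):

```python
import re

# Two of the seven families, for illustration only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scrub(prompt: str) -> tuple[str, list[str]]:
    """Redact known secret shapes unconditionally; return prompt + audit lines."""
    audit = []
    for family, pattern in SECRET_PATTERNS.items():
        prompt, n = pattern.subn(f"[REDACTED:{family}]", prompt)
        if n:
            audit.append(f"{family}: {n} redaction(s)")
    return prompt, audit

clean, audit = scrub("key = AKIAABCDEFGHIJKLMNOP in main.tf")
assert "AKIA" not in clean
assert audit == ["aws_access_key: 1 redaction(s)"]
```

There is deliberately no flag to skip this step; the audit lines are what lands in .efterlev/redactions/<scan_id>.jsonl.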

LLM calls degrade predictably. Transient errors retry with exponential backoff + full jitter (3 attempts). On primary-model exhaustion, falls back once from Opus to Sonnet before surfacing a failure. Non-retryable errors (auth, invalid request) fail immediately. Each call's token usage is captured on the resulting Claim record and written to .efterlev/receipts.log for offline cost auditing.

For deeper architectural detail, see docs/architecture.md. For the design history including reversals and tradeoffs, see DECISIONS.md.


Coverage at v0.1.18

  • 45 detectors — 38 KSI-mapped + 7 supplementary 800-53-only (where FRMR 0.9.43-beta doesn't yet map the underlying control)
  • 31 of 60 thematic KSIs covered, across 8 of 11 themes (CNA, CMT, IAM, MLA, PIY, RPL, SCR, SVC). The remaining three themes (AFR, CED, INR) are entirely procedural — covered by customer-authored Evidence Manifests rather than detector evidence.
  • Detector sources: 41 Terraform + 4 GitHub workflows
  • Three agents: Gap (Opus 4.7), Documentation (Sonnet 4.6), Remediation (Opus 4.7)
  • Two LLM backends: Anthropic API (default) + AWS Bedrock ([bedrock] extra, GovCloud-deployable)
  • 1083 tests passing; mypy strict + ruff check + ruff format clean across 174 source files; full E2E pipeline smoke (real Anthropic API call against a synthetic fixture) runs as a required check on every PR

Coverage relative to FedRAMP 20x Phase 2's 70% automated-validation threshold: the threshold applies to the customer's whole authorization package, not to any single tool. Efterlev covers 31 KSIs at the IaC layer pre-deploy; AWS-native services (Config, Security Hub, CloudTrail, Inspector, GuardDuty) cover roughly 14 KSIs at the runtime layer. Honest union: ~33 of 63 KSIs (~52%) — distinct layers, not double-counted. Reaching 70% takes both. See docs/aws-coexistence.md for the strategic mapping and docs/csx-mapping.md for how the outputs map to CSX-SUM / MAS / ORD.


Where Efterlev fits

Sits alongside AWS Config / Security Hub / CloudTrail, not in place of them:

           Efterlev                                      AWS-native
  When     Pre-deploy, on every commit or save           Post-deploy, on a 3-day cadence
  Reads    Terraform .tf + .github/workflows/*.yml       Live AWS API state, runtime events
           + .efterlev/manifests/*.yml
  Output   Per-KSI attestation JSON + POA&M markdown     Config evaluations, Security Hub
           + remediation diffs                           findings, CloudTrail logs
  Cost     Free (Apache 2.0); ~$0.30–2 per run on the    AWS spend
           LLM endpoint you configure

A FedRAMP 20x customer pursuing the 70% automated threshold typically wires both, plus procedural Evidence Manifests under .efterlev/manifests/*.yml for the AFR / CED / INR themes detectors can't see.
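As a rough picture, one such manifest might look like the following; the schema shown is illustrative, not Efterlev's actual manifest format, and the KSI ID is hypothetical:

```yaml
# Illustrative shape only — consult the Efterlev docs for the real manifest schema.
ksi: KSI-INR-XXX            # hypothetical procedural KSI id
status: implemented
narrative: >
  Incident response runbook reviewed quarterly; last tabletop exercise on record.
evidence:
  - type: document
    location: https://wiki.example.com/ir-runbook
attested_by: security@example.com
```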


Run it from another AI session (MCP)

efterlev mcp serve

Exposes every CLI verb as an MCP tool over stdio. Point Claude Code (or any MCP client) at it and drive scans, agent calls, and provenance walks from another AI session. Our own agents use the same MCP interface — that's how we know it works. If you want to build a compliance workflow Efterlev doesn't ship, write your own agent against the MCP surface; you don't need to fork the codebase.


Documentation

Full docs site: efterlev.com — quickstart, concepts, tutorials (CI integration, GovCloud deployment, writing detectors, customizing agent prompts), CLI reference, and comparisons against Paramify, Comp AI, Vanta/Drata, and traditional consulting.

In this repo:


Contributing

We want contributors. The detector library is designed to make the common contribution — "here's a new KSI indicator I can evidence from Terraform" — a self-contained folder that doesn't touch the rest of the codebase.

CONTRIBUTING.md has the five-minute path from git clone to running tests, the hour path from idea to open PR, and the per-fix regression-test discipline every patch ships under. Community conduct: Contributor Covenant 2.1. Good first issues are labeled good first issue on GitHub. The most valuable contributions right now are new detectors covering KSIs on the roadmap.


Status, governance, license

Status: v0.1.18 is current. Twenty patch releases since v0.1.0 (2026-04-29):

  • v0.1.5–v0.1.10 closed real-CI bugs surfaced by deep-dive shakedowns
  • v0.1.11 closed five rendering / display / persistence findings from the first external 3PAO blinded review
  • v0.1.12–v0.1.15 closed eight findings from rolling deterministic post-release triages
  • v0.1.16 codified the triage methodology as automated infrastructure: scripts/triage.sh <tag> posts a verifiable artifact-quality report as the GitHub Release notes body
  • v0.1.17 closed two cosign-tooling findings from v0.1.16's first auto-triage
  • v0.1.18 ships Tier 1 #1: efterlev quickstart — a one-command activation demo for the ICP A day-one experience (~3 min on Sonnet, ~$0.30; runs init+scan only without ANTHROPIC_API_KEY set)

Full E2E pipeline smoke runs on every PR; per-fix regression-test discipline is enforced for bug-fix patches. See CHANGELOG.md for per-release notes. Verify a published artifact with bash scripts/verify-release.sh v0.1.18 (PEP 740 PyPI attestations + cosign keyless OIDC + SLSA provenance on ghcr.io/efterlev/efterlev).

Governance: Benevolent-dictator model today (@lhassa8), transitioning to a technical steering committee at 10 sustained-activity contributors. Full model in GOVERNANCE.md. Architectural decisions: DECISIONS.md. The project may eventually be donated to a neutral foundation (OpenSSF / Linux Foundation / CNCF) if contributor diversity warrants — that decision is not made and not time-boxed.

License: Apache 2.0. See LICENSE.

Security: Coordinated disclosure process in SECURITY.md. Threat model for Efterlev itself: THREAT_MODEL.md. The pre-launch security review (signed by the maintainer) is at docs/security-review-2026-04.md.


Credits

Efterlev was bootstrapped in a 4-day hackathon using Claude Code. The architecture commits to keeping Claude Code (and other MCP-capable agents) as first-class integration partners — that's what "agent-first" means here, structurally, not as marketing.

Built on compliance-trestle for OSCAL catalog loading, on the FedRAMP Machine-Readable (FRMR) catalog, and on the NIST SP 800-53 Rev 5 catalog. Those projects make this one possible.
