SlopSniff

A lightweight CLI for catching "slop" in modern codebases before it hardens into team-wide tech debt.

SlopSniff is not trying to detect whether code was written by AI. It is trying to detect the kinds of patterns that show up when teams move too fast, overgenerate code, or skip the cleanup pass — giant files, copy-pasted functions, versioned helper sprawl, and everything else that quietly becomes the norm.


Install

pip install slopsniff

Or with uv:

uv add slopsniff

Usage

# Scan current directory
slopsniff .

# Scan a specific path
slopsniff ./src

# Set a custom CI fail threshold (default: 20)
slopsniff . --fail-threshold 30

# JSON output for machines and CI pipelines
slopsniff . --format json

# Show score contribution per finding
slopsniff . --verbose

# Override thresholds on the fly
slopsniff . --max-file-lines 300 --max-function-lines 40

All flags

Flag                  Short  Default   Description
path                         .         Directory to scan
--fail-threshold      -t     20        Score at which CI returns exit code 1
--format              -f     terminal  Output format: terminal or json
--verbose             -v     off       Show score per finding
--max-file-lines             400       Override file line warning threshold
--max-function-lines         50        Override function line warning threshold

Example output

SlopSniff Report
========================================
Files scanned:  42
Total score:    18
Status:         WARNING

[HIGH] duplicate-functions
  src/utils/formatters.py:12-44
  Duplicate function body found in 2 locations: src/utils/formatters.py:12-44, src/services/formatters.py:8-40

[MEDIUM] large-function
  src/api/upload.py:77-156
  Function 'process_upload' is 79 lines long (warning threshold: 50)

[LOW] large-file
  src/helpers/common.py
  File is 438 lines long (warning threshold: 400)

Scoring

Each finding contributes to a total slop score.

Severity  Score
high      10
medium    5
low       2

Score range  Status
0–9          healthy
10–19        warning
20+          fail (non-zero exit)

The fail threshold is configurable via --fail-threshold.
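The tables above boil down to a small amount of logic. A sketch of the idea (illustrative only; `SEVERITY_SCORES`, `compute_score`, and `grade` here are assumptions, not the actual scoring.py API):

```python
# Per-severity scores from the table above; names are illustrative.
SEVERITY_SCORES = {"high": 10, "medium": 5, "low": 2}

def compute_score(severities: list[str]) -> int:
    """Sum the per-finding scores for a list of finding severities."""
    return sum(SEVERITY_SCORES[s] for s in severities)

def grade(score: int, fail_threshold: int = 20) -> str:
    """Map a total score to a status, mirroring the table above."""
    if score >= fail_threshold:
        return "fail"
    if score >= 10:
        return "warning"
    return "healthy"
```

For example, one high, one medium, and two low findings score 10 + 5 + 2 + 2 = 19, a warning at the default threshold.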


Rules

large-file

Flags files that exceed configurable line count thresholds.

  • medium at 400+ lines
  • high at 800+ lines

large-function

Flags functions that exceed configurable line count thresholds. Uses Python's ast module for accurate line spans in .py files, and brace-depth heuristics for JS/TS/Vue.

  • medium at 50+ lines
  • high at 100+ lines
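For Python files, function line spans can be read straight from the AST, since every function node carries lineno and end_lineno. A minimal sketch of the approach (not the actual python_ast.py code):

```python
import ast

def function_spans(source: str) -> list[tuple[str, int, int]]:
    """Return (name, start_line, end_line) for every function, nested included."""
    tree = ast.parse(source)
    spans = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            spans.append((node.name, node.lineno, node.end_lineno))
    return spans

src = "def f():\n    x = 1\n    return x\n"
# function_spans(src) reports 'f' spanning lines 1 through 3
```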

duplicate-functions

Normalizes and hashes function bodies, then flags exact duplicates found across or within files. Functions under 5 lines are ignored to reduce noise from trivial patterns like empty __init__ methods.

  • high for any exact body match across 2+ locations
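The normalize-and-hash approach can be sketched like this (illustrative; the real rule's normalization may differ):

```python
import hashlib

def body_fingerprint(body: str) -> str:
    """Strip indentation and blank lines, then hash the remaining text,
    so bodies that differ only in whitespace collide."""
    lines = [line.strip() for line in body.splitlines()]
    normalized = "\n".join(line for line in lines if line)
    return hashlib.sha256(normalized.encode()).hexdigest()

a = "    x = 1\n    return x\n"
b = "x = 1\n\nreturn x"
# Both normalize to the same text, so the fingerprints match.
```

Any fingerprint seen at two or more locations becomes a duplicate-functions finding.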

helper-sprawl

Flags two categories of low-cohesion patterns:

  1. Generic filenames — files named utils.py, helpers.py, common.py, shared.py, misc.py, etc.
  2. Versioned function names — clusters of functions sharing a base name with variant suffixes like _v2, _old, _safe, _legacy, _copy, _temp
  • low for generic filenames
  • medium for versioned function name clusters
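Clustering versioned names comes down to stripping the variant suffix and grouping by the remaining base name. A sketch using the suffixes listed above (illustrative, not the actual rule code):

```python
import re
from collections import defaultdict

# Variant suffixes named in the rule description above.
VARIANT_SUFFIX = re.compile(r"_(v\d+|old|safe|legacy|copy|temp)$")

def versioned_clusters(names: list[str]) -> dict[str, set[str]]:
    """Group function names by base name; keep only multi-member clusters."""
    groups: dict[str, set[str]] = defaultdict(set)
    for name in names:
        base = VARIANT_SUFFIX.sub("", name)
        groups[base].add(name)
    return {base: group for base, group in groups.items() if len(group) > 1}

# versioned_clusters(["parse", "parse_v2", "parse_old", "render"])
# flags the 'parse' cluster and ignores the lone 'render'.
```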

Language support

Language    Parser               Function detection
Python      ast module           Full — accurate line spans, nested functions
JavaScript  Regex + brace depth  Heuristic — function, arrow functions, const fn =
TypeScript  Regex + brace depth  Same as JS
TSX         Regex + brace depth  Same as JS
Vue         Regex + brace depth  Same as JS
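The brace-depth heuristic amounts to tracking `{`/`}` nesting from a function's opening line until the depth returns to zero. A sketch of the core loop (ignoring strings and comments, which a real parser must handle):

```python
def function_end_line(lines: list[str], start: int) -> int:
    """Given the index of the line holding a function's opening brace,
    return the index of the line where its braces balance out."""
    depth = 0
    opened = False
    for i in range(start, len(lines)):
        for ch in lines[i]:
            if ch == "{":
                depth += 1
                opened = True
            elif ch == "}":
                depth -= 1
        if opened and depth == 0:
            return i
    return len(lines) - 1  # unbalanced input: fall back to end of file

js = [
    "function add(a, b) {",
    "  if (a > b) {",
    "    return a + b;",
    "  }",
    "  return b;",
    "}",
]
# The function spans lines 0 through 5 of the snippet.
```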

Architecture

Walk repo
  └── Filter by extension, skip excluded dirs
        └── Parse each file into FileContext
              ├── python_ast.py  →  ast.FunctionDef extraction
              └── text_parser.py →  regex + brace-depth heuristics
                    └── Run per-file rules
                          ├── LargeFileRule
                          ├── LargeFunctionRule
                          └── HelperSprawlRule (filename check)
                    └── Run cross-file rules (after all files parsed)
                          ├── DuplicateFunctionsRule (hash map)
                          └── HelperSprawlRule (versioned name clusters)
                                └── Aggregate findings
                                      └── Compute score → ScanResult
                                            └── Reporter (terminal | json)
                                                  └── Exit 0 or 1

Data model

@dataclass
class Finding:
    rule_id: str
    severity: str        # "low" | "medium" | "high"
    file_path: str
    line_start: int | None
    line_end: int | None
    message: str
    score: int

@dataclass
class ScanResult:
    findings: list[Finding]
    total_score: int
    files_scanned: int
    passed: bool

Rule interface

Each per-file rule implements:

def run(self, file_context: FileContext) -> list[Finding]: ...

Each cross-file rule implements:

def run_cross_file(self, contexts: list[FileContext]) -> list[Finding]: ...

Rules are plain classes — no magic, no registration, easy to test in isolation.


File structure

slopsniff/
├── pyproject.toml
├── README.md
├── src/
│   └── slopsniff/
│       ├── __init__.py
│       ├── cli.py          # Typer entrypoint
│       ├── config.py       # Config dataclass and defaults
│       ├── models.py       # Finding, FunctionInfo, FileContext, ScanResult
│       ├── scanner.py      # Scan pipeline orchestration
│       ├── scoring.py      # compute_score(), grade()
│       ├── walker.py       # Repo traversal with filtering
│       ├── parsers/
│       │   ├── python_ast.py   # ast-based Python parser
│       │   └── text_parser.py  # Regex/brace parser for JS/TS/Vue
│       ├── reporters/
│       │   ├── terminal.py     # Colored terminal output
│       │   └── json_reporter.py
│       └── rules/
│           ├── base.py                  # PerFileRule / CrossFileRule protocols
│           ├── large_file.py
│           ├── large_function.py
│           ├── duplicate_functions.py
│           └── helper_sprawl.py
└── tests/
    ├── test_walker.py
    ├── test_large_file.py
    ├── test_large_function.py
    ├── test_duplicate_functions.py
    └── test_helper_sprawl.py

CI/CD integration

GitHub Actions

name: SlopSniff

on:
  pull_request:
  push:
    branches: [main]

jobs:
  slopsniff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"

      - name: Install SlopSniff
        run: pip install slopsniff

      - name: Run SlopSniff
        run: slopsniff . --fail-threshold 20

SlopSniff returns exit code 1 when the total score meets or exceeds the threshold, making it a drop-in CI gate.


Development

# Clone and install with dev deps
git clone https://github.com/joshuagilley/slopsniff
cd slopsniff
uv sync --dev

# Run tests
uv run pytest

# Lint
uv run ruff check .

# Run CLI locally
uv run slopsniff .

Defaults reference

Setting                       Default
Max file lines (warning)      400
Max file lines (high)         800
Max function lines (warning)  50
Max function lines (high)     100
Fail threshold                20
Included extensions           .py .js .ts .tsx .vue
Excluded directories          .git node_modules .nuxt dist build .venv coverage __pycache__
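These defaults map naturally onto a config dataclass (a sketch only; the actual config.py field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Config:
    """Illustrative defaults, mirroring the reference table above."""
    max_file_lines: int = 400
    max_file_lines_high: int = 800
    max_function_lines: int = 50
    max_function_lines_high: int = 100
    fail_threshold: int = 20
    included_extensions: tuple[str, ...] = (".py", ".js", ".ts", ".tsx", ".vue")
    excluded_dirs: frozenset[str] = frozenset({
        ".git", "node_modules", ".nuxt", "dist", "build",
        ".venv", "coverage", "__pycache__",
    })
```

CLI flags like --max-file-lines would then simply override the corresponding field before the scan runs.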

Roadmap

  • .slopsniff.toml config file support
  • --changed-only mode via git diff
  • Near-duplicate detection (token fingerprints / MinHash)
  • Tree-sitter integration for accurate multi-language AST
  • GitHub PR annotation support
  • Score baselining for legacy repos
  • Suppression comments (# slopsniff: ignore)
  • Homebrew tap
