
AI writing pattern analysis and scoring tool


WriteScore


Identify AI patterns in your writing and get actionable feedback to sound more human.

[Demo GIF: WriteScore CLI terminal output with analysis scores and recommendations]

Quick Start

uv sync
uv run python -m spacy download en_core_web_sm
uv run writescore analyze README.md

That's it! You'll see a detailed analysis with scores and improvement suggestions.

Requirements

| Resource | Minimum | Recommended |
| --- | --- | --- |
| Python | 3.9 | 3.11+ |
| RAM | 4 GB | 8 GB |
| Disk | 2 GB | 3 GB |

Note: First run downloads transformer models (~500MB) and spaCy model (~50MB). Subsequent runs use cached models.
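A quick self-check against the table above (a trivial sketch, nothing WriteScore-specific; it just reports whether the running interpreter meets the documented 3.9 minimum):

```python
import sys

# WriteScore's documented minimum is Python 3.9 (3.11+ recommended)
meets_minimum = sys.version_info >= (3, 9)
print("Python", sys.version.split()[0], "- OK" if meets_minimum else "- too old")
```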

Getting Started

Quickest path: Install Just, then run just install (users) or just setup (contributors). See all options below.

| Option | Local Install | CLI/IDE | Docker Required | Use WriteScore | Contribute |
| --- | --- | --- | --- | --- | --- |
| Docker | No | CLI | Yes | Instructions | N/A |
| pipx | No | CLI | No | Instructions | N/A |
| Homebrew | No | CLI | No | Instructions | N/A |
| Standalone | No | CLI | No | Instructions | N/A |
| Native (Just) | Yes | CLI | No | just install | just setup |
| Native (Just) | Yes | IDE | No | just install, open in any IDE | just setup, open in any IDE |
| Native (Manual) | Yes | CLI | No | Instructions | Instructions |
| Native (Manual) | Yes | IDE | No | Instructions, open in any IDE | Instructions, open in any IDE |
| Devcontainer | No | CLI | Yes | Instructions | Instructions |
| Devcontainer | No | IDE | Yes | VS Code → "Reopen in Container" | Same |
| Codespaces | No | CLI | No | Instructions | Instructions |
| Codespaces | No | IDE | No | GitHub → Code → Create codespace | Same |

After setup, run just test (or uv run pytest for manual installs) to verify.

Installing Just

| OS | Command |
| --- | --- |
| Windows | winget install Casey.Just (or choco install just / scoop install just) |
| macOS | brew install just |
| Ubuntu/Debian | sudo apt install just |
| Fedora | sudo dnf install just |
| Arch Linux | sudo pacman -S just |
| Via Cargo | cargo install just |
| Via Conda | conda install -c conda-forge just |

Windows users: All just commands work in PowerShell and CMD. With uv, use the uv run prefix instead of activating the venv.

Docker

Run WriteScore without any local installation using Docker. Models are pre-downloaded in the image.

# Analyze a file in current directory
docker run --rm -v "$(pwd):/work" -w /work ghcr.io/bohica-labs/writescore:latest analyze document.md

# With GPU support (NVIDIA)
docker run --rm --gpus all -v "$(pwd):/work" -w /work ghcr.io/bohica-labs/writescore:latest analyze document.md

Optional: Install wrapper script for native-like usage:

# Download and install
sudo curl -fsSL https://raw.githubusercontent.com/BOHICA-LABS/writescore/main/scripts/writescore-docker \
  -o /usr/local/bin/writescore
sudo chmod +x /usr/local/bin/writescore

# Now use like a native command
writescore analyze document.md

The wrapper auto-detects GPU (NVIDIA/AMD) and mounts files appropriately.
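The wrapper's GPU detection boils down to probing for vendor CLIs on PATH. A simplified Python sketch of that idea (not the actual wrapper script; detect_gpu and its injectable which parameter are hypothetical):

```python
import shutil

def detect_gpu(which=shutil.which):
    """Guess the available GPU runtime by probing for vendor CLIs on PATH."""
    if which("nvidia-smi"):
        return "nvidia"   # wrapper would add: --gpus all
    if which("rocm-smi"):
        return "amd"      # wrapper would add the ROCm device flags
    return "cpu"          # no GPU tooling found; run on CPU

print(detect_gpu())
```

The which parameter exists only to make the probe testable; the real script would shell out directly.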

pipx

Install WriteScore in an isolated environment using pipx. No virtual environment management required.

# Install pipx if you don't have it
# macOS: brew install pipx && pipx ensurepath
# Linux: python3 -m pip install --user pipx && pipx ensurepath

# Install WriteScore
pipx install writescore

# Use immediately (spaCy model auto-downloads on first run)
writescore analyze document.md

Note: First run downloads spaCy model (~50MB) and transformer models (~500MB). Subsequent runs are faster.

Homebrew

Install WriteScore on macOS or Linux using Homebrew:

# Add the tap and install
brew tap bohica-labs/writescore
brew install writescore

# Or install directly
brew install bohica-labs/writescore/writescore

# Use immediately
writescore analyze document.md

The formula installs all dependencies including the spaCy language model.

Standalone Executable

Download a pre-built executable from GitHub Releases - no Python installation required.

| Platform | Filename |
| --- | --- |
| Linux (x64) | writescore-linux-amd64 |
| macOS (Intel) | writescore-darwin-amd64 |
| macOS (Apple Silicon) | writescore-darwin-arm64 |
| Windows (x64) | writescore-windows-amd64.exe |

# Linux/macOS example
curl -LO https://github.com/BOHICA-LABS/writescore/releases/latest/download/writescore-linux-amd64
chmod +x writescore-linux-amd64
./writescore-linux-amd64 analyze document.md

# Move to PATH for easier access
sudo mv writescore-linux-amd64 /usr/local/bin/writescore
writescore analyze document.md

Note: Standalone executables are self-contained (~500MB) and include all models.

Native Manual

For users who prefer not to install Just. Requires uv.

Use WriteScore:

uv sync
uv run python -m spacy download en_core_web_sm

Contribute:

uv sync --extra dev
uv run python -m spacy download en_core_web_sm
uv run pre-commit install
uv run pre-commit install --hook-type commit-msg

Devcontainer CLI

devcontainer up --workspace-folder "$(pwd)" && \
devcontainer exec --workspace-folder "$(pwd)" just install

For contributors, replace just install with just dev.

Codespaces CLI

gh codespace create -r BOHICA-LABS/writescore && \
gh codespace ssh

Then run just install (users) or just setup (contributors).

Available Commands

| Command | Description |
| --- | --- |
| just | List available commands |
| just install | Install package with all dependencies |
| just setup | Full dev setup (install + pre-commit hooks) |
| just test | Run fast tests (excludes slow markers) |
| just test-all | Run all tests including slow ones |
| just test-cov | Run tests with coverage report |
| just lint | Check code with ruff |
| just lint-fix | Auto-fix linting and format code |
| just typecheck | Run mypy type checking |
| just check | Run all checks (lint + typecheck) |
| just clean | Remove build artifacts and caches |

Why WriteScore?

The Problem: AI detection tools give binary "AI/human" verdicts without explaining why or how to improve.

The Solution: WriteScore analyzes 12+ writing dimensions to identify specific patterns that make text sound AI-generated, then provides actionable recommendations.

Key Differentiators:

  • Actionable feedback — Know exactly what to fix, not just "this seems AI-generated"
  • Multi-dimensional analysis — Examines vocabulary, sentence variety, formatting patterns, and more
  • Quality-focused — Treats writing improvement as the goal, not accusation
  • Transparent scoring — See how each dimension contributes to your score

When to use WriteScore:

  • Polishing AI-assisted drafts to sound more natural
  • Identifying mechanical patterns in your own writing
  • Quality checks before publishing

When NOT to use:

  • Academic integrity enforcement (use dedicated tools)
  • Legal proof of authorship
  • Detection of latest-generation models with high confidence

Features

  • Dual Scoring — Detection risk + quality score in one analysis
  • 12 Analysis Dimensions — From vocabulary patterns to syntactic complexity
  • Multiple Modes — Fast checks to comprehensive analysis
  • Actionable Insights — Specific recommendations ranked by impact
  • Batch Processing — Analyze entire directories
  • Score History — Track improvements over time
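To make the "analysis dimensions" idea concrete, here is a toy sketch of two such dimensions, vocabulary diversity and sentence-length variety. The metric names, formulas, and the combination into a dict are illustrative only, not WriteScore's actual scoring:

```python
import re
from statistics import pstdev

def type_token_ratio(text):
    """Vocabulary dimension: share of distinct words (higher = more varied)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def sentence_length_spread(text):
    """Sentence-variety dimension: std. dev. of sentence lengths in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0

sample = "Short one. This sentence is a fair bit longer than the first. Tiny."
scores = {
    "vocabulary": type_token_ratio(sample),
    "sentence_variety": sentence_length_spread(sample),
}
print(scores)
```

Uniform sentence lengths and repetitive vocabulary are classic mechanical tells, which is why per-dimension numbers are more actionable than a single verdict.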

Usage

# Basic analysis
writescore analyze document.md

# Detailed findings with recommendations
writescore analyze document.md --detailed

# Show dual scores (detection risk + quality)
writescore analyze document.md --show-scores

# Fast mode for quick checks
writescore analyze document.md --mode fast

# Full analysis for final review
writescore analyze document.md --mode full

# Batch process a directory
writescore analyze --batch docs/
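Conceptually, batch mode just walks a directory for candidate documents before analyzing each one. A minimal sketch of that discovery step (the find_documents helper and its patterns default are hypothetical, not WriteScore's API):

```python
from pathlib import Path
import tempfile

def find_documents(root, patterns=("*.md", "*.txt")):
    """Collect candidate files under root, sorted for stable ordering."""
    root = Path(root)
    return sorted(p for pat in patterns for p in root.rglob(pat))

# Demonstrate against a throwaway directory tree
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "a.md").write_text("hello")
    (Path(d) / "sub").mkdir()
    (Path(d) / "sub" / "b.txt").write_text("world")
    found = [p.name for p in find_documents(d)]

print(found)  # → ['a.md', 'b.txt']
```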

Analysis Modes

| Mode | Speed | Best For |
| --- | --- | --- |
| fast | Fastest | Quick checks, CI/CD |
| adaptive | Balanced | Default, most documents |
| sampling | Medium | Large documents |
| full | Slowest | Final review, maximum accuracy |

See the Analysis Modes Guide for details.
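The idea behind sampling mode can be illustrated with a toy sketch: analyze an evenly spaced subset of paragraphs instead of the whole document, trading a little accuracy for speed on large inputs (the function below is illustrative, not WriteScore's implementation):

```python
def sample_paragraphs(paragraphs, max_samples=3):
    """Pick up to max_samples paragraphs at evenly spaced positions."""
    n = len(paragraphs)
    if n <= max_samples:
        return list(paragraphs)        # small document: analyze everything
    step = n / max_samples             # even spacing across the document
    return [paragraphs[int(i * step)] for i in range(max_samples)]

paras = [f"para-{i}" for i in range(10)]
print(sample_paragraphs(paras))  # → ['para-0', 'para-3', 'para-6']
```

Even spacing keeps the sample representative of the whole document, unlike taking only the opening paragraphs.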

Troubleshooting

Slow First Run

This is normal. First analysis downloads transformer models (~500MB) and caches them. Subsequent runs are much faster.
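Where those models land follows the Hugging Face cache convention; a small sketch that resolves the directory, assuming the default layout (HF_HOME if set, otherwise ~/.cache/huggingface; the hf_cache_dir helper is hypothetical):

```python
import os
from pathlib import Path

def hf_cache_dir(env=None):
    """Resolve the Hugging Face model cache directory from the environment."""
    env = os.environ if env is None else env
    if "HF_HOME" in env:
        return Path(env["HF_HOME"]) / "hub"
    # Default location when HF_HOME is unset
    return Path(env.get("HOME", "~")).expanduser() / ".cache" / "huggingface" / "hub"

print(hf_cache_dir({"HF_HOME": "/models"}))
```

Checking this directory's size is a quick way to confirm the models are cached and won't be downloaded again.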

Out of Memory

Quick fix: Use --mode fast for lower memory usage:

writescore analyze document.md --mode fast

On macOS Apple Silicon, if you see MPS memory errors:

export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
writescore analyze document.md

ModuleNotFoundError / Command Not Found

Quick fix: Prefix commands with uv run, or activate the venv: source .venv/bin/activate

Diagnostic table:

| Where did you install? | Current terminal | Fix |
| --- | --- | --- |
| uv (.venv/) | Not using uv run | Prefix with uv run or activate venv |
| Devcontainer | Native terminal | Run inside container or install natively |
| Codespaces | Local terminal | Install natively |
| Unknown | | Run diagnostic commands below |

Diagnostic commands:

# Check if writescore is anywhere in PATH
which writescore

# Check if installed in current venv
uv pip show writescore

# Check common venv locations
ls -la .venv/bin/writescore 2>/dev/null || echo "Not in .venv"

Common fixes:

# Use uv run prefix
uv run writescore analyze README.md

# Or activate venv directly
source .venv/bin/activate  # Windows: .venv\Scripts\activate
writescore analyze README.md

# Run inside devcontainer (if installed there)
devcontainer exec --workspace-folder "$(pwd)" writescore analyze README.md

# Or reinstall natively
just install  # or: uv sync && uv run python -m spacy download en_core_web_sm

Can't find model 'en_core_web_sm'

python -m spacy download en_core_web_sm

NLTK Data Missing

If you see LookupError mentioning NLTK data:

python -c "import nltk; nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')"

Documentation

| Document | Description |
| --- | --- |
| Architecture | System design, components, patterns |
| Analysis Modes Guide | Mode comparison and usage |
| Development History | Project evolution and roadmap |
| Migration Guide | Upgrading from AI Pattern Analyzer |
| Changelog | Version history |

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.


Updating the Demo GIF

The README demo GIF is generated using VHS. To regenerate after feature changes:

# Install VHS (macOS)
brew install vhs

# Generate new demo
vhs docs/assets/demo.tape

The tape file is at docs/assets/demo.tape. Edit it to change the demo script.

License

MIT License - see LICENSE for details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

writescore-6.4.1.tar.gz (780.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

writescore-6.4.1-py3-none-any.whl (834.3 kB)

Uploaded Python 3

File details

Details for the file writescore-6.4.1.tar.gz.

File metadata

  • Download URL: writescore-6.4.1.tar.gz
  • Upload date:
  • Size: 780.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for writescore-6.4.1.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 289510cabb9a3cd81fd68b6caa9b7bc45c197251371743c598b471527a396e4e |
| MD5 | bad4707bbdef0d8d44e424ffcf08a094 |
| BLAKE2b-256 | 129f9f8bdd9d494ba5c7cdc280e28106db3d826708c64378d63332bd34d14b4a |

See more details on using hashes here.

Provenance

The following attestation bundles were made for writescore-6.4.1.tar.gz:

Publisher: release.yml on BOHICA-LABS/writescore

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file writescore-6.4.1-py3-none-any.whl.

File metadata

  • Download URL: writescore-6.4.1-py3-none-any.whl
  • Upload date:
  • Size: 834.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for writescore-6.4.1-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | c0d26ed3959650eb4600dc49b39ec737ddf09fc4cd8ea01e860c9476aae1b32f |
| MD5 | 7757ab55f7ac0fb50b9e2ee1e140d37e |
| BLAKE2b-256 | ac392048582b170abf383830ae5174a6c8a029d2553c374b7b2ab61de804ab26 |

See more details on using hashes here.

Provenance

The following attestation bundles were made for writescore-6.4.1-py3-none-any.whl:

Publisher: release.yml on BOHICA-LABS/writescore

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
