# blog-pipeline
AI blog generator that doesn't sound like AI.
6-pass Claude API pipeline with a built-in humanizer, topic deduplication, internal linking, and Supabase sync. The humanizer is the key differentiator: it enforces a strict writing ruleset that strips the most common AI tells.
## Quick start

```shell
# Clone and install
git clone https://github.com/nometria/blog-pipeline
cd blog-pipeline
pip install -e .

# Set your API key
export ANTHROPIC_API_KEY=sk-ant-...

# Generate 5 blog posts
blog-generate --count 5 --niche "developer tooling and SaaS"

# Re-humanize existing drafts only
blog-generate --passes 4

# Run tests
pytest tests/ -v
```
Required environment variables:

```shell
ANTHROPIC_API_KEY=sk-ant-...   # required

# Optional (for Supabase sync in passes 0 and 6)
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_KEY=eyJ...
BLOG_SITE_URL=https://yourblog.com
```
## Passes
| Pass | What it does |
|---|---|
| 0 | Fetches existing titles from Supabase (prevents duplicates) |
| 1 | Identifies new topics (skips anything already written) |
| 2 | Plans structure per topic (comparison / deep-dive / case-study / how-to / opinion) |
| 3 | Generates full markdown content |
| 4 | Humanizer: strips AI tells (see below) |
| 5 | Adds internal links across all posts |
| 6 | Pushes to Supabase + updates local registry |
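The dedup step in passes 0–1 can be sketched as a normalized-title comparison. This is a hypothetical helper, not the package's actual code; `normalize_title` and `filter_new_topics` are illustrative names:

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    title = re.sub(r"[^a-z0-9\s]", "", title.lower())
    return re.sub(r"\s+", " ", title).strip()

def filter_new_topics(candidates: list[str], existing: list[str]) -> list[str]:
    """Drop any candidate whose normalized title matches an already-written post."""
    seen = {normalize_title(t) for t in existing}
    return [c for c in candidates if normalize_title(c) not in seen]
```

Normalizing before comparing means a fetched title like "Why We Ditched Webpack!" blocks a near-duplicate candidate that differs only in casing or punctuation.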
## The humanizer

Pass 4 enforces these rules on every post:
- Banned words: leverage, seamless, robust, cutting-edge, game-changer, revolutionize, synergy, paradigm, transformative, unlock, delve, streamline, elevate, empower, holistic, utilize, facilitate, innovative
- No em-dashes (—): each one is replaced with a comma or a full stop
- No semicolons connecting sentences
- No emojis
- Contractions required: it's, we're, you'll, don't
- Active voice only
- Max 1 exclamation mark per post
- No "In conclusion / In summary" section openers
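Two of these rules are easy to picture as code. The sketch below is illustrative, not the package's actual implementation, and `BANNED_WORDS` here is only a subset of the full list:

```python
import re

# Subset of the banned-word list, for illustration
BANNED_WORDS = {"leverage", "seamless", "robust", "delve", "streamline"}

def check_banned_words(text: str) -> list[str]:
    """Return the banned words found in the text (case-insensitive, whole words)."""
    words = re.findall(r"[a-z]+", text.lower())
    return sorted(set(words) & BANNED_WORDS)

def strip_em_dashes(text: str) -> str:
    """Replace each em-dash (and surrounding spaces) with a comma, per the rule above."""
    return re.sub(r"\s*—\s*", ", ", text)
```

In the real pipeline these checks run inside an LLM rewrite pass; regex-level checks like this are still useful as a cheap post-hoc verifier.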
Use the humanizer standalone:

```python
from blog_pipeline.humanizer import humanize_post

clean = humanize_post(my_ai_draft)
```
## Setup

```shell
git clone https://github.com/nometria/blog-pipeline
cd blog-pipeline
pip install -e .
cp .env.example .env
# Edit .env with your ANTHROPIC_API_KEY
```
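If you'd rather not pull in a dotenv library, a `.env` file in the format above can be loaded with a few lines of stdlib Python. `load_env` is a hypothetical helper, a minimal sketch rather than what the package actually does:

```python
import os

def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines from a dotenv-style file into os.environ.

    Skips blank lines and comments; already-set variables are not overwritten.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Using `setdefault` means a variable exported in your shell always wins over the file, which is the usual dotenv convention.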
## Run

```shell
# Full pipeline: generate 5 blogs
blog-generate --passes 1-6 --count 5 --niche "developer tooling and SaaS"

# Re-humanize existing drafts only
blog-generate --passes 4

# Push already-written files to Supabase
blog-generate --passes 6

# Generate content without pushing
blog-generate --passes 1-5 --count 3
```
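The `--passes` flag accepts single passes and ranges. A parser for that syntax might look like the sketch below; it's a hypothetical helper, not the CLI's actual code, and the comma-separated form (`0,4,6`) is an assumption the README doesn't show:

```python
def parse_passes(spec: str) -> list[int]:
    """Expand a pass spec like "4", "1-6", or "0,4,6" into a sorted list of pass numbers."""
    passes: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            passes.update(range(int(start), int(end) + 1))
        else:
            passes.add(int(part))
    return sorted(passes)
```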
## Output

- `blogs/<slug>.md`: humanized markdown files
- `blogs/_topics.json`: topic cache
- `blogs/_plans.json`: structure plans
- `blogs/_registry.json`: pushed blog tracking
## Immediate next steps

- Make the humanizer prompt configurable via a `HUMANIZER_RULES` env var or YAML file
- Add an `--audit` flag: re-score all pushed blogs and unpublish weak ones
- Add an SEO scoring pass (keyword density check, meta description generation)
- Package as a GitHub Action: auto-generate blogs on a schedule
## Commercial viability

- Package the humanizer as a standalone API: `POST /humanize` returns the cleaned post
- Charge per post ($0.10–0.50) or a monthly flat rate ($49–149)
- Differentiator: "the only AI blog writer that bans its own clichés by design"
- Show an AI-detector score before and after to prove the improvement
## Example output

Running `pytest tests/ -v`:

```text
============================= test session starts ==============================
platform darwin -- Python 3.13.9, pytest-9.0.2, pluggy-1.5.0
cachedir: .pytest_cache
rootdir: /tmp/ownmy-releases/blog-pipeline
configfile: pyproject.toml
plugins: anyio-4.12.1, cov-7.1.0
collecting ... collected 4 items

tests/test_pipeline.py::test_check_banned_words_flags_corporate_speak PASSED [ 25%]
tests/test_pipeline.py::test_check_banned_words_passes_clean_text PASSED [ 50%]
tests/test_pipeline.py::test_check_banned_words_flags_em_dash_clusters FAILED [ 75%]
tests/test_pipeline.py::test_humanize_post_returns_string PASSED [100%]

============================= short test summary info ==========================
FAILED tests/test_pipeline.py::test_check_banned_words_flags_em_dash_clusters
========================= 1 failed, 3 passed in 0.43s ==========================
```

See `examples/sample-post.md` for a realistic humanized blog post produced by the pipeline.
## Project details
### File details: blog_pipeline-0.1.2.tar.gz

- Size: 15.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15

| Algorithm | Hash digest |
|---|---|
| SHA256 | `117f32455beb793fa5b9a373fdc7ca68fb2404074c618f2bf8fe70c0459058a4` |
| MD5 | `8928f5d1f2e5377c0c66d609e8854efe` |
| BLAKE2b-256 | `ce7a95235cb71b76634ab3a1e38b29c1eea8e1511d30a876644c7fa2c58af8c4` |
### File details: blog_pipeline-0.1.2-py3-none-any.whl

- Size: 14.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cc4ca0b5255466222672d632154eccfeb483a9c251cd9b6f189013215f6907d0` |
| MD5 | `845caf92d843ac91e128a197b2cde4c4` |
| BLAKE2b-256 | `e840e9921696eeb855177fb48243faa3b9d6108b33b6b88b4a8b8737a298c18c` |