ReDSL — Refactor + DSL + Self-Learning. LLM-powered autonomous code refactoring.

ReDSL

Refactor + DSL + Self-Learning — AI-Native DevOps & Refactoring OS

⚠️ This is not a typical requirements DSL. This is an autonomous operating system for AI-driven software engineering.

ReDSL is an experimental framework that combines an LLM, a formal runtime DSL, CI/CD, and self-refactoring loops into a single autonomous code-lifecycle system.

Current project state

Based on the 2026-04-09 code2llm analysis:

  • Files: 114
  • Functions: 781
  • Classes: 112
  • Lines of code: 19,151
  • Average complexity: CC̄ = 4.1
  • Critical hotspots: 3
  • Duplications / cycles: 0 / 0
  • Test suite: 468 collected tests
  • Next refactor: split format_cycle_report_markdown(), format_batch_report_markdown(), and LLMLayer.call()

🧠 What ReDSL Really Is

Not just a requirements DSL. This is an AI-driven software lifecycle system:

| Component | Role in System |
|-----------|----------------|
| SUMD | System description (high-level spec) |
| DOQL | Runtime application definition (CLI, workflows) |
| taskfile | DevOps operations |
| testQL | Validation |
| pyqual | Code quality system |
| LLM | Refactoring + automation (gpt-5-mini via litellm) |

🔥 KEY PARADIGM SHIFT

Before (typical DSL): describe requirements → generate documentation → manual interpretation

Here: describe system → system has CI/CD, tests, linting, deployment, refactor pipeline → LLM can intervene in code

This is an autonomous development system.

🏗️ Architecture: Autonomous Loop

SUMD → DOQL → taskfile → pyqual → testQL → LLM refactor loop → deployment

Detailed Flow:

                    AUTONOMOUS CODE LIFECYCLE

🧾 SUMD ──► ⚙️ DOQL ──► 🔄 taskfile ──► 🧪 pyqual ──► 🤖 LLM
  [Spec]     [Runtime]    [DevOps]      [Quality]    [Refactor]
     └───────────┴─────────────┴────────────┴───────────┘
                            │
                            ▼
        ┌──────────────────────────────────────┐
        │ REFACTOR ORCHESTRATOR                │
        │ PERCEIVE → DECIDE → PLAN → EXECUTE   │
        │ REFLECT → REMEMBER → IMPROVE         │
        └──────────────────────────────────────┘
                            │
                            ▼
        ┌──────────────────────────────────────┐
        │ VALIDATION LAYER                     │
        │ regix (regression) │ vallm │ sandbox │
        └──────────────────────────────────────┘
                            │
                            ▼
[deployment] ◄── CI/CD ◄── quality gates ◄── auto-PR

🚨 WHAT'S REALLY NEW HERE

🧠 A. "Code as controllable system"

This is NOT: code + AI.
This is: an operating system for code.

🔁 B. Self-learning loop

You have tests, lint, quality gates, a refactor pipeline, and an LLM model.
👉 Together they form a system that can self-correct its own code.

🧩 C. DSL = control interface

DOQL is not merely a declarative language; it is a system orchestrator:

workflow[name="test"] {
  run pytest
}
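To make the "control interface" idea concrete, here is a toy parser for a block of this shape. This is a sketch only: the real DOQL grammar is richer than this single-workflow pattern.

```python
import re

def parse_workflow(src: str) -> dict:
    """Parse a minimal block of the form: workflow[name="..."] { commands }.

    Toy sketch; the actual DOQL grammar supports far more than this.
    """
    m = re.search(r'workflow\[name="([^"]+)"\]\s*\{(.*?)\}', src, re.S)
    if not m:
        raise ValueError("no workflow block found")
    name, body = m.group(1), m.group(2)
    # Each non-empty line in the body is treated as one step.
    steps = [line.strip() for line in body.splitlines() if line.strip()]
    return {"name": name, "steps": steps}

src = '''workflow[name="test"] {
  run pytest
}'''
print(parse_workflow(src))  # {'name': 'test', 'steps': ['run pytest']}
```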

Features

  • 🔍 Static Analysis - Integration with popular linters and metrics tools
  • 🧠 LLM with Reflection - Generate refactoring proposals with self-reflection loop
  • Hybrid Engine - Direct refactorings for simple changes, LLM for complex ones
  • 📊 DSL Engine - Define refactoring rules in readable YAML format
  • 💾 Memory System - Learn from refactoring history (episodic, semantic, procedural)
  • 🚀 Scalability - Process multiple projects simultaneously
  • 🔄 Autonomy Loop - Perceive → Decide → Plan → Execute → Reflect → Memory Update
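The autonomy loop above can be sketched as plain Python. The stage bodies below are illustrative placeholders (the complexity threshold and action name are assumptions), not ReDSL's actual implementation:

```python
def perceive(project):
    # Collect raw signals (metrics, lint findings) about the project.
    return {"complexity": project["complexity"]}

def decide(signals):
    # Pick a refactoring action when a threshold is exceeded (threshold assumed).
    return "REDUCE_COMPLEXITY" if signals["complexity"] > 10 else None

def plan(action):
    # Expand the chosen action into concrete steps.
    return [f"{action}: extract helper", f"{action}: re-run tests"]

def execute(steps):
    # Apply the steps; here we just mark them as done.
    return [f"done: {s}" for s in steps]

def reflect(results, memory):
    # Record the outcome so future cycles can learn from it.
    memory.append({"results": results})
    return memory

def run_cycle(project, memory):
    signals = perceive(project)
    action = decide(signals)
    if action is None:
        return memory  # nothing to do this cycle
    results = execute(plan(action))
    return reflect(results, memory)

memory = run_cycle({"complexity": 14}, [])
print(memory)
```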

Installation

pip install redsl

Refactor a single project (dry run)

redsl refactor ./my-project --max-actions 5 --dry-run

Refactor without dry run (apply changes)

redsl refactor ./my-project --max-actions 10

Get output in YAML format (for integration)

redsl refactor ./my-project --format yaml

Get output in JSON format (for APIs)

redsl refactor ./my-project --format json


# Process semcod projects with LLM
redsl batch semcod /path/to/semcod --max-actions 10

# Hybrid refactoring (no LLM) for semcod projects
redsl batch hybrid /path/to/semcod --max-changes 30

# Batch processing with JSON output
redsl batch semcod /path/to/semcod --format json

Every refactor and batch run also writes a Markdown report next to the project or root folder:

  • redsl_refactor_plan.md — --dry-run output
  • redsl_refactor_report.md — executed refactor cycle
  • redsl_batch_semcod_report.md — batch summary for batch semcod
  • redsl_batch_hybrid_report.md — batch summary for batch hybrid

Analyze code quality

redsl pyqual analyze ./my-project

Analyze with custom config

redsl pyqual analyze ./my-project --config pyqual.yaml

Get analysis in JSON format

redsl pyqual analyze ./my-project --format json

Apply automatic fixes

redsl pyqual fix ./my-project


# Check configuration
redsl debug config --show-env

# View DSL decisions for a project
redsl debug decisions ./my-project --limit 20

GitHub Actions example

- name: Run reDSL analysis
  run: |
    redsl refactor ./ --max-actions 5 --dry-run --format yaml > refactor-plan.yaml

- name: Upload refactoring plan
  uses: actions/upload-artifact@v3
  with:
    name: refactor-plan
    path: refactor-plan.yaml


# Use with jq for JSON processing
redsl refactor ./ --format json | jq '.refactoring_plan.decisions[] | select(.score > 1.0)'

# Pipe to file for review
redsl refactor ./ --format yaml > review-plan.yaml

# Extract only high-impact decisions
redsl refactor ./ --format yaml | yq '.refactoring_plan.decisions[] | select(.score > 1.5)'
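When jq/yq are not available, the same score filter works in plain Python. The `decisions` shape below mirrors the jq path used above; it is an assumption for illustration, not a documented schema:

```python
import json

# Sample plan in the shape implied by the jq filter above (assumed schema).
plan_json = json.dumps({
    "refactoring_plan": {
        "decisions": [
            {"action": "EXTRACT_FUNCTIONS", "score": 1.8},
            {"action": "REMOVE_UNUSED_IMPORTS", "score": 0.4},
        ]
    }
})

plan = json.loads(plan_json)
# Equivalent of: jq '.refactoring_plan.decisions[] | select(.score > 1.0)'
high_impact = [d for d in plan["refactoring_plan"]["decisions"] if d["score"] > 1.0]
print(high_impact)
```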

Environment Configuration

Create .env file:

# LLM Configuration
OPENAI_API_KEY=<your-api-key>   # or export it in your shell environment
REFACTOR_LLM_MODEL=openai/gpt-4
REFACTOR_DRY_RUN=false

# Custom settings
REFACTOR_MAX_ACTIONS=20
REFACTOR_REFLECTION_ROUNDS=2
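These variables might be consumed from Python roughly as follows. The defaults are taken from the example values above and are assumptions, not ReDSL's actual settings loader:

```python
import os

def load_refactor_settings(env=os.environ):
    """Read refactor settings from environment variables (sketch only)."""
    return {
        "model": env.get("REFACTOR_LLM_MODEL", "openai/gpt-4"),
        # Env vars are strings, so booleans need explicit parsing.
        "dry_run": env.get("REFACTOR_DRY_RUN", "false").lower() == "true",
        "max_actions": int(env.get("REFACTOR_MAX_ACTIONS", "20")),
        "reflection_rounds": int(env.get("REFACTOR_REFLECTION_ROUNDS", "2")),
    }

settings = load_refactor_settings({"REFACTOR_DRY_RUN": "true"})
print(settings)
```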

Simple Actions (no LLM)

  • REMOVE_UNUSED_IMPORTS - Remove unused imports
  • FIX_MODULE_EXECUTION_BLOCK - Fix module execution blocks
  • EXTRACT_CONSTANTS - Extract magic numbers to constants
  • ADD_RETURN_TYPES - Add return type annotations

Implementation note: the deterministic AST helpers now live in redsl/refactors/ast_transformers.py, and redsl.refactors plus redsl.refactors.direct re-export them for backward compatibility.
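For intuition, here is a simplified, self-contained version of the REMOVE_UNUSED_IMPORTS idea using the standard ast module. This is a sketch, not the actual code in redsl/refactors/ast_transformers.py:

```python
import ast

def unused_imports(source: str) -> list[str]:
    """Return imported names that are never referenced in the module.

    Simplified sketch of REMOVE_UNUSED_IMPORTS; the real transformer
    is more thorough (string annotations, __all__, re-exports, etc.).
    """
    tree = ast.parse(source)
    imported: dict[str, str] = {}
    used: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the name "a"; honor "as" aliases.
                imported[(alias.asname or alias.name).split(".")[0]] = alias.name
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return [name for name in imported if name not in used]

code = "import os\nimport sys\nprint(sys.argv)\n"
print(unused_imports(code))  # ['os']
```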

Complex Actions (with LLM)

  • EXTRACT_FUNCTIONS - Extract high-complexity functions
  • SPLIT_MODULE - Split large modules
  • REDUCE_COMPLEXITY - Reduce cyclomatic complexity

Fresh-project smoke test

To quickly verify that ReDSL runs in a brand-new project, create a tiny temporary project and run the CLI in dry-run mode:

mkdir -p /tmp/redsl-smoke
cat > /tmp/redsl-smoke/main.py <<'PY'
import os


def main() -> None:
    return None


main()
PY

python3 -m redsl analyze /tmp/redsl-smoke
python3 -m redsl refactor /tmp/redsl-smoke --dry-run --max-actions 5

REST API

Start the API server:

# Using uvicorn directly
uvicorn redsl.api:app --reload --host 0.0.0.0 --port 8000

# Using the CLI
redsl api --host 0.0.0.0 --port 8000

Refactor a Project

curl -X POST "http://localhost:8000/refactor" \
  -H "Content-Type: application/json" \
  -d '{
    "project_path": "./my-project",
    "max_actions": 5,
    "dry_run": true,
    "format": "json"
  }'
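The same request can be issued from Python's standard library. The snippet below only builds the request object without sending it, since sending requires the API server from above to be running:

```python
import json
import urllib.request

# Same payload as the curl example above.
payload = {
    "project_path": "./my-project",
    "max_actions": 5,
    "dry_run": True,
    "format": "json",
}
req = urllib.request.Request(
    "http://localhost:8000/refactor",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.method, req.full_url)
# To send it: urllib.request.urlopen(req)  (needs the server running)
```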

Batch semcod processing

curl -X POST "http://localhost:8000/batch/semcod" \
  -H "Content-Type: application/json" \
  -d '{ "semcod_root": "/path/to/semcod", "max_actions": 10, "format": "yaml" }'

Hybrid batch processing

curl -X POST "http://localhost:8000/batch/hybrid" \
  -H "Content-Type: application/json" \
  -d '{ "semcod_root": "/path/to/semcod", "max_changes": 30 }'


# Get configuration
curl "http://localhost:8000/debug/config?show_env=true"

# Get decisions for a project
curl "http://localhost:8000/debug/decisions?project_path=./my-project&limit=10"

Analyze code quality

curl -X POST "http://localhost:8000/pyqual/analyze" \
  -H "Content-Type: application/json" \
  -d '{ "project_path": "./my-project", "format": "json" }'

Apply fixes

curl -X POST "http://localhost:8000/pyqual/fix" \
  -H "Content-Type: application/json" \
  -d '{ "project_path": "./my-project" }'


### Interactive API Documentation

Once the server is running, visit:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc

## ⚖️ Markdown + AI vs ReDSL — Comparison

### 📝 Markdown + AI
- Input: loose text
- AI interprets
- No hard system structure
- **This is an assistant**

### 🧠 ReDSL (this project)
- Input: structural system (SUMD + DOQL)
- AI operates in a **controlled runtime**
- Has quality pipeline + CI/CD + refactor loop
- **This is an autonomous system for managing code lifecycle**

| Criterion | Markdown + AI | ReDSL |
|-----------|---------------|-------|
| **UX** | ✅ Wins | ⚠️ Complex |
| **Adoption** | ✅ Easy start | ⚠️ High entry cost |
| **Simplicity** | ✅ Intuitive | ⚠️ Many abstractions |
| **System control** | ❌ None | ✅ Deterministic runtime |
| **Lifecycle automation** | ❌ Manual | ✅ Auto-pipeline |
| **CI/CD + AI integration** | ❌ None | ✅ Native |
| **Determinism** | ❌ Non-deterministic | ✅ DSL-driven |

**Conclusion**: Markdown + AI wins in productivity and UX. ReDSL wins **only if** AI development becomes fully autonomous and companies accept "DSL system as devops runtime".

## 📊 Project Assessment

| Criterion | Score | Justification |
|-----------|-------|---------------|
| 🧠 **Innovation** | **9/10** | Close to Devin, Auto-refactoring systems, AI CI/CD pipelines |
| ⚙️ **Technical coherence** | **8.5/10** | Full dev pipeline, quality system, docker + CI + LLM |
| 🚧 **Practical adoption** | **6/10** | Very complex, high entry cost, no market standard |
| 📉 **Risk** | **High** | Many DSL abstractions, LLM dependency, no production usage proof |

### 🎯 FINAL CONCLUSION

👉 **This is NOT a "requirements DSL" anymore**

👉 **This is: an experimental operating system for AI-driven software engineering**

- ❌ Not a typical DSL
- ❌ Not competition to Markdown (different category)
- 🟢 This is **AI-native DevOps + refactoring OS**
- 🟡 Very ambitious, but hard to deploy

## Architecture

┌─────────────────────────────────────────────────────┐
│ ORCHESTRATOR                                        │
│ (loop: analyze → decide → refactor → reflect)       │
├─────────────┬──────────────┬────────────────────────┤
│ ANALYZER    │ DSL ENGINE   │ REFACTOR ENGINE        │
│ ─ toon.yaml │ ─ rules      │ ─ patch generation     │
│ ─ linters   │ ─ scoring    │ ─ validation           │
│ ─ metrics   │ ─ planning   │ ─ application          │
├─────────────┴──────────────┴────────────────────────┤
│ HYBRID REFACTOR ENGINES                             │
│ ─ DirectRefactorEngine (no LLM)                     │
│ ─ LLM RefactorEngine (with reflection)              │
├─────────────────────────────────────────────────────┤
│ LLM LAYER (LiteLLM)                                 │
│ ─ code generation ─ reflection ─ self-critique      │
├─────────────────────────────────────────────────────┤
│ MEMORY SYSTEM                                       │
│ ─ episodic (refactoring history)                    │
│ ─ semantic (patterns, rules)                        │
│ ─ procedural (strategies, plans)                    │
└─────────────────────────────────────────────────────┘


## Configuration

Environment variables:
- `OPENAI_API_KEY` or `OPENROUTER_API_KEY` — API key
- `REFACTOR_LLM_MODEL` — LLM model (e.g., `openrouter/moonshotai/kimi-k2.5`)
- `REFACTOR_DRY_RUN` — test mode (`true`/`false`)

## Examples

| Directory | Description |
|-----------|-------------|
| `examples/01-basic-analysis/` | Project analysis from toon.yaml files |
| `examples/02-custom-rules/` | Define custom DSL rules |
| `examples/03-full-pipeline/` | Full cycle: analyze → decide → refactor → reflect |
| `examples/04-memory-learning/` | Memory system: episodic, semantic, procedural |
