
High-performance Python code flow analysis with optimized TOON format - CFG, DFG, call graphs, and intelligent code queries


code2llm - Generated Analysis Files

AI Cost Tracking


  • 🤖 LLM usage: $7.50 (166 commits)
  • 👤 Human dev: ~$5731 (57.3h @ $100/h, 30min dedup)

Generated on 2026-04-19 using openrouter/qwen/qwen3-coder-next


This directory contains the complete analysis of your project generated by code2llm. Each file serves a specific purpose for understanding, refactoring, and documenting your codebase.

📁 Generated Files Overview

When you run code2llm ./ -f all, the following files are created:

🎯 Core Analysis Files

  • evolution.toon.yaml (YAML): 📋 Refactoring queue - prioritized improvements. Key insight: 0 refactoring actions needed.

🤖 LLM-Ready Documentation

  • context.md (Markdown): 📖 LLM narrative - architecture summary. Use case: paste into ChatGPT/Claude for code analysis.

📊 Visualizations

  • calls.mmd (Mermaid): 📞 Call graph - function dependencies (edges only).

🚀 Quick Start Commands

Basic Analysis

# Quick health check (TOON format only)
code2llm ./ -f toon

# Generate all formats (what created these files)
code2llm ./ -f all

# LLM-ready context only
code2llm ./ -f context

Performance Options

# Fast analysis for large projects
code2llm ./ -f toon --strategy quick

# Memory-limited analysis
code2llm ./ -f all --max-memory 500

# Skip PNG generation (faster)
code2llm ./ -f all --no-png

Refactoring Focus

# Get refactoring recommendations
code2llm ./ -f evolution

# Focus on specific code smells
code2llm ./ -f toon --refactor --smell god_function

# Data flow analysis
code2llm ./ -f flow --data-flow

📖 Understanding Each File

analysis.toon - Health Diagnostics

Purpose: Quick overview of code health issues

Key sections:

  • HEALTH: Critical issues (🔴) and warnings (🟡)
  • REFACTOR: Prioritized refactoring actions
  • COUPLING: Module dependencies and potential cycles
  • LAYERS: Package complexity metrics
  • FUNCTIONS: High-complexity functions (CC ≥ 10)
  • CLASSES: Complex classes needing attention

Example usage:

# View health issues
cat analysis.toon | head -30

# Check refactoring priorities
grep "REFACTOR" analysis.toon

evolution.toon.yaml - Refactoring Queue

Purpose: Step-by-step refactoring plan

Key sections:

  • NEXT: Immediate actions to take
  • RISKS: Potential breaking changes
  • METRICS-TARGET: Success criteria

Example usage:

# Get refactoring plan
cat evolution.toon.yaml

# Track progress
grep "NEXT" evolution.toon.yaml

flow.toon - Legacy Data Flow Analysis

Purpose: Understand data movement through the system (legacy / explicit opt-in)

Key sections:

  • PIPELINES: Data processing chains
  • CONTRACTS: Function input/output contracts
  • SIDE_EFFECTS: Functions with external impacts

Example usage:

# Find data pipelines
grep "PIPELINES" flow.toon

# Identify side effects
grep "SIDE_EFFECTS" flow.toon

map.toon.yaml - Structural Map + Project Header

Purpose: High-level architecture overview plus compact project header

Key sections:

  • MODULES: All modules with basic stats
  • IMPORTS: Dependency relationships
  • EXPORTS: Public API surface and signatures
  • HEADER: Stats, alerts, hotspots, evolution trend

Example usage:

# See project structure
cat map.toon.yaml | head -50

# Find public APIs
grep "SIGNATURES" map.toon.yaml

project.toon.yaml - Compact Analysis View

Purpose: Compact module view generated from project.yaml data

Status: Legacy view generated on demand from unified project.yaml

Example usage:

# View compact project structure
cat project.toon.yaml | head -30

# Find largest files
grep -E "^  .*[0-9]{3,}$" project.toon.yaml | sort -t',' -k2 -n -r | head -10

prompt.txt - Ready-to-Send LLM Prompt

Purpose: Pre-formatted prompt listing all generated files for an LLM conversation

Generation: Written when code2llm runs with a source path and requests -f all (including --no-chunk) or code2logic

Contents:

  • Files section: Lists all existing generated files with descriptions, including project.toon.yaml when generated by -f all
  • Source files section: Highlights important source files such as cli_exports/orchestrator.py
  • Missing section: Shows which files weren't generated (if any)
  • Task section: Refactoring brief with concrete execution instructions, not just analysis
  • Priority Order section: State-dependent refactoring priorities, starting with blockers and then architecture cleanup
  • Requirements section: Guidelines for suggested changes

Example usage:

# View the prompt
cat prompt.txt

# Copy to clipboard and paste into ChatGPT/Claude
cat prompt.txt | pbcopy  # macOS
cat prompt.txt | xclip -sel clip  # Linux

context.md - LLM Narrative

Purpose: Ready-to-paste context for AI assistants

Key sections:

  • Overview: Project statistics
  • Architecture: Module breakdown
  • Entry Points: Public interfaces
  • Patterns: Design patterns detected

Example usage:

# Copy to clipboard for LLM
cat context.md | pbcopy  # macOS
cat context.md | xclip -sel clip  # Linux

# Use with Claude/ChatGPT for code analysis

Visualization Files (*.mmd, *.png)

Purpose: Visual understanding of code structure

Files:

  • flow.mmd - Detailed control flow with complexity colors
  • calls.mmd - Simple call graph
  • compact_flow.mmd - High-level module view
  • *.png - Pre-rendered images

Example usage:

# View diagrams
open flow.png  # macOS
xdg-open flow.png  # Linux

# Edit in Mermaid Live Editor
# Copy content of .mmd files to https://mermaid.live

🔍 Common Analysis Patterns

1. Code Health Assessment

# Quick health check
code2llm ./ -f toon
cat analysis.toon | grep -E "(HEALTH|REFACTOR)"

2. Refactoring Planning

# Get refactoring queue
code2llm ./ -f evolution
cat evolution.toon.yaml

# Focus on specific issues
code2llm ./ -f toon --refactor --smell god_function

3. LLM Assistance

# Generate context for AI
code2llm ./ -f context
cat context.md

# Use with Claude: "Based on this context, help me refactor the god modules"

4. Team Documentation

# Generate all docs for team
code2llm ./ -f all -o ./docs/

# Create visual diagrams
open docs/flow.png

📊 Interpreting Metrics

Complexity Metrics (CC)

  • 🔴 Critical (≥5.0): Immediate refactoring needed
  • 🟠 High (3.0-4.9): Consider refactoring
  • 🟡 Medium (1.5-2.9): Monitor complexity
  • 🟢 Low (0.1-1.4): Acceptable
  • ⚪ Basic (0.0): Simple functions
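These bands can be filtered mechanically once the analysis file exists. A minimal sketch, assuming a hypothetical analysis.toon excerpt with name,cc=value entries (the real TOON layout may differ):

```shell
# Hypothetical analysis.toon excerpt for illustration only;
# the actual file layout produced by code2llm may differ.
cat > /tmp/cc_sample.toon <<'EOF'
FUNCTIONS:
  parse_config,cc=6.2
  render_report,cc=1.1
  build_graph,cc=4.8
EOF

# List functions in the critical band (CC >= 5.0), highest first.
grep -o '[a-z_]*,cc=[0-9.]*' /tmp/cc_sample.toon \
  | awk -F'cc=' '$2 >= 5.0' \
  | sort -t'=' -k2 -nr
```

With the sample above, only parse_config (CC 6.2) clears the 5.0 threshold; adjust the awk comparison to target the other bands.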

Module Health

  • GOD Module: Too large (>500 lines, >20 methods)
  • HUB: High fan-out (calls many modules)
  • FAN-IN: High incoming dependencies
  • CYCLES: Circular dependencies
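A quick way to gauge module health is to count these markers in the analysis output. A minimal sketch, using a hypothetical analysis.toon excerpt (the marker names follow the list above, but the real file layout is an assumption):

```shell
# Hypothetical analysis.toon excerpt; real layout may differ.
cat > /tmp/health_sample.toon <<'EOF'
COUPLING:
  🔴 GOD cli_exports/orchestrator.py
  🟡 HUB core/engine.py
  🟡 HUB cli/main.py
EOF

# Count occurrences of each health marker.
for marker in GOD HUB CYCLES; do
  printf '%s: %d\n' "$marker" "$(grep -c "$marker" /tmp/health_sample.toon)"
done
```

For the sample input this reports one GOD module, two HUBs, and no CYCLES.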

Data Flow Indicators

  • PIPELINE: Sequential data processing
  • CONTRACT: Clear input/output specification
  • SIDE_EFFECT: External state modification

🛠️ Integration Examples

CI/CD Pipeline

#!/bin/bash
# Analyze code quality in CI
code2llm ./ -f toon -o ./analysis
if grep -q "🔴 GOD" ./analysis/analysis.toon; then
    echo "❌ God modules detected"
    exit 1
fi

Pre-commit Hook

#!/bin/sh
# .git/hooks/pre-commit
code2llm ./ -f toon -o ./temp_analysis
if grep -q "🔴" ./temp_analysis/analysis.toon; then
    echo "⚠️  Critical issues found. Review before committing."
fi
rm -rf ./temp_analysis

Documentation Generation

# Generate docs for README
code2llm ./ -f context -o ./docs/
echo "## Architecture" >> README.md
cat docs/context.md >> README.md

📚 Next Steps

  1. Review analysis.toon - Identify critical issues
  2. Check evolution.toon.yaml - Plan refactoring priorities
  3. Use context.md - Get LLM assistance for complex changes
  4. Reference visualizations - Understand system architecture
  5. Track progress - Re-run analysis after changes
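Step 5 can be scripted by comparing critical-issue counts across two analysis runs. A minimal sketch; the /tmp/run1 and /tmp/run2 snapshots and the 🔴 marker are illustrative assumptions:

```shell
# Hypothetical before/after snapshots of analysis.toon.
mkdir -p /tmp/run1 /tmp/run2
printf '🔴 issue A\n🔴 issue B\n' > /tmp/run1/analysis.toon
printf '🔴 issue A\n' > /tmp/run2/analysis.toon

# Compare critical-issue counts between the two runs.
before=$(grep -c '🔴' /tmp/run1/analysis.toon)
after=$(grep -c '🔴' /tmp/run2/analysis.toon)
echo "Critical issues: $before -> $after"
```

In practice, point the two runs at dated output directories (e.g. via -o ./analysis-$(date +%Y%m%d)) and diff the counts in CI.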

🔧 Advanced Usage

Custom Analysis

# Deep analysis with all insights
code2llm ./ -m hybrid -f all --max-depth 15 -v

# Performance-optimized
code2llm ./ -m static -f toon --strategy quick

# Refactoring-focused
code2llm ./ -f toon,evolution --refactor

Output Customization

# Separate output directories
code2llm ./ -f all -o ./analysis-$(date +%Y%m%d)

# Split YAML into multiple files
code2llm ./ -f yaml --split-output

# Separate orphaned functions
code2llm ./ -f yaml --separate-orphans

Generated by: code2llm ./ -f all --readme
Analysis Date: 2026-04-19
Total Functions: 1115
Total Classes: 121
Modules: 152

For more information about code2llm, visit: https://github.com/tom-sapletta/code2llm

License

Licensed under Apache-2.0.
