Fractal-Attention Analysis (FAA) Framework for LLM Interpretability using Golden Ratio Transformations



Requires Python 3.8+ · MIT License

A mathematical framework for analyzing transformer attention mechanisms using fractal geometry and golden ratio transformations. FAA provides deep insights into how Large Language Models (LLMs) process and attend to information.

🌟 Features

  • Universal LLM Support: Works with any HuggingFace transformer model
  • Fractal Dimension Analysis: Compute fractal dimensions of attention patterns
  • Golden Ratio Transformations: Apply φ-based transformations for enhanced interpretability
  • Comprehensive Metrics: Entropy, sparsity, concentration, and custom interpretability scores
  • Rich Visualizations: Beautiful matplotlib-based attention pattern visualizations
  • CLI Interface: Easy-to-use command-line tools
  • Modular Design: Clean OOP architecture for easy extension
  • GPU Acceleration: Efficient CUDA support with automatic memory management

📊 Key Findings

Our research demonstrates:

  • Universal Fractal Signature: Consistent fractal dimension (≈2.0295) across diverse architectures (GPT-2, Qwen, Llama, Gemma)
  • Architectural Independence: Fractal patterns persist across differences in model size and architecture design
  • Real-time Analysis: Sub-second performance for practical deployment

🚀 Quick Start

Installation

pip install fractal-attention-analysis

Basic Usage

from fractal_attention_analysis import FractalAttentionAnalyzer

# Initialize analyzer with any HuggingFace model
analyzer = FractalAttentionAnalyzer("gpt2")

# Analyze text
results = analyzer.analyze("The golden ratio appears in nature and mathematics.")

# Access results
print(f"Fractal Dimension: {results['fractal_dimension']:.4f}")
print(f"Metrics: {results['metrics']}")

Command Line Interface

# Analyze text with GPT-2
faa-analyze --model gpt2 --text "Hello world"

# Analyze with visualization
faa-analyze --model meta-llama/Llama-3.2-1B \
            --text "AI is transforming the world" \
            --save-viz ./output

# Compare two models
faa-compare --model1 gpt2 --model2 distilgpt2 \
            --text "Test sentence"

📚 Documentation

Core Components

FractalAttentionAnalyzer

Main class for performing fractal-attention analysis:

analyzer = FractalAttentionAnalyzer(
    model_name="gpt2",              # HuggingFace model ID
    device_manager=None,            # Optional custom device manager
    force_eager_attention=True,     # Force eager attention for compatibility
)

# Analyze text
results = analyzer.analyze(
    text="Your input text",
    layer_idx=-1,                   # Layer to analyze (-1 = last)
    head_idx=0,                     # Attention head index
    return_visualizations=True,     # Generate plots
    save_dir=Path("./output")       # Save visualizations
)

FractalTransforms

Fractal transformation and dimension calculations:

from fractal_attention_analysis import FractalTransforms

transforms = FractalTransforms()

# Compute fractal dimension
dimension = transforms.compute_fractal_dimension(attention_matrix)

# Apply fractal interpolation
transformed = transforms.fractal_interpolation_function(attention_matrix)

# Golden ratio scoring
scored = transforms.golden_ratio_scoring(attention_matrix)
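
To make the φ-based idea concrete, here is a minimal sketch of golden-ratio partitioning and weighting in NumPy. Both helpers (`golden_ratio_partition`, `golden_ratio_weights`) are hypothetical illustrations; the library's `golden_ratio_scoring` may use a different scheme.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio, ≈ 1.618

def golden_ratio_partition(n: int) -> int:
    """Split point of a length-n sequence at the golden-ratio position.

    Hypothetical helper: divides n so that the two parts are in ratio φ:1.
    """
    return round(n / PHI)

def golden_ratio_weights(n: int) -> np.ndarray:
    """Illustrative φ-decay weights w_i = φ^(-i), normalized to sum to 1."""
    w = PHI ** -np.arange(n, dtype=float)
    return w / w.sum()
```

For example, a 10-token sequence splits at position 6 (10 / φ ≈ 6.18), and the decay weights emphasize earlier positions with a fixed φ ratio between neighbors.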

AttentionMetrics

Comprehensive attention metrics:

from fractal_attention_analysis import AttentionMetrics

metrics = AttentionMetrics()

# Compute all metrics
all_metrics = metrics.compute_all_metrics(
    attention_matrix,
    fractal_dimension=2.0295
)

# Individual metrics
entropy = metrics.compute_entropy(attention_matrix)
sparsity = metrics.compute_sparsity(attention_matrix)
concentration = metrics.compute_concentration(attention_matrix)
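
For intuition, the three metrics can be sketched from scratch in NumPy as below. These are common textbook definitions (row-wise Shannon entropy, near-zero fraction, mean row maximum) and are assumptions on our part; the library's exact formulas may differ.

```python
import numpy as np

def attention_entropy(A: np.ndarray, eps: float = 1e-12) -> float:
    """Mean Shannon entropy of the attention rows (rows normalized to sum to 1)."""
    P = A / A.sum(axis=1, keepdims=True)
    return float(-(P * np.log(P + eps)).sum(axis=1).mean())

def attention_sparsity(A: np.ndarray, thresh: float = 0.01) -> float:
    """Fraction of attention weights below a small threshold."""
    return float((A < thresh).mean())

def attention_concentration(A: np.ndarray) -> float:
    """Mean of each row's maximum weight: 1/n for uniform rows, 1 for one-hot rows."""
    return float(A.max(axis=1).mean())
```

A uniform attention matrix gives maximal entropy (log n) and minimal concentration (1/n), while a one-hot (identity) pattern gives near-zero entropy and concentration 1.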

AttentionVisualizer

Visualization utilities:

from fractal_attention_analysis import AttentionVisualizer

visualizer = AttentionVisualizer()

# Plot attention matrix
fig = visualizer.plot_attention_matrix(
    attention_matrix,
    tokens=["Hello", "world"],
    title="Attention Pattern"
)

# Plot fractal comparison
fig = visualizer.plot_fractal_comparison(
    original_attention,
    transformed_attention
)

Advanced Usage

Batch Analysis

texts = [
    "First sentence to analyze.",
    "Second sentence to analyze.",
    "Third sentence to analyze."
]

results = analyzer.analyze_batch(texts)

Model Comparison

comparison = analyzer.compare_models(
    other_model_name="distilgpt2",
    text="Compare attention patterns"
)

print(f"Dimension difference: {comparison['dimension_difference']:.4f}")

Export Results

# Export as JSON
analyzer.export_results(results, "output.json", format='json')

# Export as CSV
analyzer.export_results(results, "output.csv", format='csv')

# Export as NumPy archive
analyzer.export_results(results, "output.npz", format='npz')

🔬 Mathematical Foundation

The FAA framework is based on:

  1. Golden Ratio (φ): Used for optimal attention partitioning

    φ = (1 + √5) / 2 ≈ 1.618
    
  2. Fractal Dimension: Computed using box-counting method

    D = lim(ε→0) [log N(ε) / log(1/ε)]
    
  3. Fractal Interpolation: Iterated Function System (IFS) transformations

    F(x) = Σ wᵢ · fᵢ(x)
    
  4. Neural Fractal Dimension: Theoretical dimension for neural attention

    D_neural = φ² / 2 ≈ 1.309
    
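The box-counting formula above can be sketched in NumPy as follows: binarize the pattern, count occupied boxes N(ε) at several box sizes ε, and fit the slope of log N(ε) against log(1/ε). This is an illustrative estimator assuming power-of-two box sizes and a fixed threshold; the library's `compute_fractal_dimension` may choose scales and thresholds differently.

```python
import numpy as np

def box_counting_dimension(A: np.ndarray, thresh: float = 1e-3) -> float:
    """Estimate the box-counting dimension of a 2-D pattern.

    Slope of log N(eps) vs. log(1/eps), where N(eps) is the number of
    eps-sized boxes containing at least one above-threshold cell.
    """
    Z = A > thresh
    n = min(Z.shape)
    sizes = [s for s in (1, 2, 4, 8, 16) if s <= n // 2]
    counts = []
    for s in sizes:
        # Tile the matrix into s x s boxes and count boxes with any active cell
        h, w = Z.shape[0] // s * s, Z.shape[1] // s * s
        blocks = Z[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

As a sanity check, a fully dense matrix has dimension 2 (N(ε) grows as 1/ε²) and a diagonal line has dimension 1 (N(ε) grows as 1/ε).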

📈 Performance

  • Analysis Time: 0.047-0.248s depending on model size
  • Memory Efficient: Supports models up to 1B parameters on a 24 GB GPU
  • Universal: Works with GPT, BERT, T5, LLaMA, Qwen, Gemma, and more

🛠️ Development

Setup Development Environment

# Clone repository
git clone https://github.com/ross-sec/fractal-attention-analysis.git
cd fractal-attention-analysis

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=fractal_attention_analysis --cov-report=html

# Run specific test file
pytest tests/test_core.py

Code Quality

# Format code
black src/ tests/

# Sort imports
isort src/ tests/

# Lint
flake8 src/ tests/

# Type check
mypy src/

📖 Citation

If you use FAA in your research, please cite:

@software{ross2025faa,
  title={Fractal-Attention Analysis: A Mathematical Framework for LLM Interpretability},
  author={Ross, Andre and Ross, Leorah and Atias, Eyal},
  year={2025},
  url={https://github.com/ross-sec/fractal-attention-analysis}
}

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Areas for Contribution

  • Support for additional model architectures
  • New fractal transformation methods
  • Enhanced visualization capabilities
  • Performance optimizations
  • Documentation improvements

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

👥 Authors

🙏 Acknowledgments

  • HuggingFace team for the Transformers library
  • The open-source AI research community
  • Fractal geometry pioneers: Benoit Mandelbrot, Michael Barnsley

📞 Support

🗺️ Roadmap

  • Support for multi-head parallel analysis
  • CUDA-optimized fractal computations
  • Real-time streaming analysis
  • Interactive web dashboard
  • Integration with popular interpretability tools (SHAP, LIME)
  • Extended model zoo with pre-computed benchmarks

Made with ❤️ by Ross Technologies & Hooking LTD

