# Fractal-Attention Analysis (FAA) Framework for LLM Interpretability using Golden Ratio Transformations
A mathematical framework for analyzing transformer attention mechanisms using fractal geometry and golden ratio transformations. FAA provides deep insights into how Large Language Models (LLMs) process and attend to information.
## 🌟 Features

- **Universal LLM Support**: works with any HuggingFace transformer model
- **Fractal Dimension Analysis**: compute fractal dimensions of attention patterns
- **Golden Ratio Transformations**: apply φ-based transformations for enhanced interpretability
- **Comprehensive Metrics**: entropy, sparsity, concentration, and custom interpretability scores
- **Rich Visualizations**: matplotlib-based attention pattern visualizations
- **CLI Interface**: easy-to-use command-line tools
- **Modular Design**: clean OOP architecture for easy extension
- **GPU Acceleration**: efficient CUDA support with automatic memory management
## 📊 Key Findings

Our research demonstrates:

- **Universal Fractal Signature**: a consistent fractal dimension (≈ 2.0295) across diverse architectures (GPT-2, Qwen, Llama, Gemma)
- **Architectural Independence**: fractal patterns persist despite differences in model size and design
- **Real-time Analysis**: sub-second performance for practical deployment
## 🚀 Quick Start

### Installation

```bash
pip install fractal-attention-analysis
```
### Basic Usage

```python
from fractal_attention_analysis import FractalAttentionAnalyzer

# Initialize the analyzer with any HuggingFace model
analyzer = FractalAttentionAnalyzer("gpt2")

# Analyze text
results = analyzer.analyze("The golden ratio appears in nature and mathematics.")

# Access results
print(f"Fractal Dimension: {results['fractal_dimension']:.4f}")
print(f"Metrics: {results['metrics']}")
```
### Command Line Interface

```bash
# Analyze text with GPT-2
faa-analyze --model gpt2 --text "Hello world"

# Analyze with visualization
faa-analyze --model meta-llama/Llama-3.2-1B \
    --text "AI is transforming the world" \
    --save-viz ./output

# Compare two models
faa-compare --model1 gpt2 --model2 distilgpt2 \
    --text "Test sentence"
```
## 📚 Documentation

### Core Components
#### FractalAttentionAnalyzer

Main class for performing fractal-attention analysis:

```python
from pathlib import Path

analyzer = FractalAttentionAnalyzer(
    model_name="gpt2",            # HuggingFace model ID
    device_manager=None,          # Optional custom device manager
    force_eager_attention=True,   # Force eager attention for compatibility
)

# Analyze text
results = analyzer.analyze(
    text="Your input text",
    layer_idx=-1,                 # Layer to analyze (-1 = last)
    head_idx=0,                   # Attention head index
    return_visualizations=True,   # Generate plots
    save_dir=Path("./output"),    # Where to save visualizations
)
```
#### FractalTransforms

Fractal transformation and dimension calculations:

```python
from fractal_attention_analysis import FractalTransforms

transforms = FractalTransforms()

# Compute fractal dimension
dimension = transforms.compute_fractal_dimension(attention_matrix)

# Apply fractal interpolation
transformed = transforms.fractal_interpolation_function(attention_matrix)

# Golden ratio scoring
scored = transforms.golden_ratio_scoring(attention_matrix)
```
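The snippet above only shows the API; the package's actual φ-based scoring algorithm is not documented here. As a purely illustrative sketch (the function name `golden_ratio_weighting` and the distance-decay rule below are assumptions, not the library's internals), one φ-based transform might damp attention weights by powers of 1/φ with query-key distance and renormalize:

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio ≈ 1.618

def golden_ratio_weighting(attention: np.ndarray) -> np.ndarray:
    """Illustrative phi-based transform: damp each weight by
    phi^-|i - j| (query-key distance), then renormalize each row."""
    n = attention.shape[0]
    i, j = np.indices((n, n))
    decay = PHI ** (-np.abs(i - j).astype(float))
    scored = attention * decay
    return scored / scored.sum(axis=-1, keepdims=True)

# Uniform attention becomes locally concentrated after weighting
uniform = np.full((4, 4), 0.25)
print(golden_ratio_weighting(uniform).round(3))
```

Each output row still sums to 1, but weight is pulled toward nearby positions at a rate set by φ.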
#### AttentionMetrics

Comprehensive attention metrics:

```python
from fractal_attention_analysis import AttentionMetrics

metrics = AttentionMetrics()

# Compute all metrics
all_metrics = metrics.compute_all_metrics(
    attention_matrix,
    fractal_dimension=2.0295
)

# Individual metrics
entropy = metrics.compute_entropy(attention_matrix)
sparsity = metrics.compute_sparsity(attention_matrix)
concentration = metrics.compute_concentration(attention_matrix)
```
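To make these metrics concrete, here is a minimal, self-contained sketch of how attention entropy and sparsity can be computed (the helper names are illustrative, not the package's internals, and each row is assumed to be a probability distribution over key positions):

```python
import numpy as np

def attention_entropy(attention: np.ndarray, eps: float = 1e-12) -> float:
    """Mean Shannon entropy of the rows, each treated as a probability
    distribution over key positions (low = peaked, high = diffuse)."""
    p = attention / (attention.sum(axis=-1, keepdims=True) + eps)
    return float(-(p * np.log(p + eps)).sum(axis=-1).mean())

def attention_sparsity(attention: np.ndarray, threshold: float = 0.01) -> float:
    """Fraction of attention weights below a small threshold."""
    return float((attention < threshold).mean())

attn = np.array([[0.90, 0.05, 0.05],
                 [0.10, 0.80, 0.10],
                 [1/3, 1/3, 1/3]])
print(attention_entropy(attn))    # peaked rows pull this below ln(3) ≈ 1.0986
print(attention_sparsity(attn))   # no weight here is under 0.01, so 0.0
```

A perfectly uniform row attains the maximum entropy ln(n); sharply focused heads score much lower.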
#### AttentionVisualizer

Visualization utilities:

```python
from fractal_attention_analysis import AttentionVisualizer

visualizer = AttentionVisualizer()

# Plot attention matrix
fig = visualizer.plot_attention_matrix(
    attention_matrix,
    tokens=["Hello", "world"],
    title="Attention Pattern"
)

# Plot fractal comparison
fig = visualizer.plot_fractal_comparison(
    original_attention,
    transformed_attention
)
```
### Advanced Usage

#### Batch Analysis

```python
texts = [
    "First sentence to analyze.",
    "Second sentence to analyze.",
    "Third sentence to analyze.",
]
results = analyzer.analyze_batch(texts)
```
#### Model Comparison

```python
comparison = analyzer.compare_models(
    other_model_name="distilgpt2",
    text="Compare attention patterns"
)
print(f"Dimension difference: {comparison['dimension_difference']:.4f}")
```
#### Export Results

```python
# Export as JSON
analyzer.export_results(results, "output.json", format="json")

# Export as CSV
analyzer.export_results(results, "output.csv", format="csv")

# Export as NumPy archive
analyzer.export_results(results, "output.npz", format="npz")
```
## 🔬 Mathematical Foundation

The FAA framework is based on:

- **Golden Ratio (φ)**: used for optimal attention partitioning:
  φ = (1 + √5) / 2 ≈ 1.618
- **Fractal Dimension**: computed using the box-counting method:
  D = lim(ε→0) [log N(ε) / log(1/ε)]
- **Fractal Interpolation**: Iterated Function System (IFS) transformations:
  F(x) = Σ wᵢ · fᵢ(x)
- **Neural Fractal Dimension**: theoretical dimension for neural attention:
  D_neural = φ² / 2 ≈ 1.309
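The box-counting definition above can be sketched directly: cover the thresholded attention matrix with boxes at dyadic scales, count the occupied boxes N(ε) at each scale, and fit log N(ε) against log(1/ε). The function below is a minimal illustration of that procedure (not the package's `compute_fractal_dimension` implementation), assuming a square matrix:

```python
import numpy as np

def box_counting_dimension(matrix: np.ndarray, threshold: float = 0.0) -> float:
    """Estimate the box-counting dimension of the support of a square
    2-D matrix by counting occupied boxes at dyadic box sizes."""
    binary = matrix > threshold
    n = binary.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 1:
        # Count boxes of side `size` containing at least one active cell
        count = sum(
            binary[i:i + size, j:j + size].any()
            for i in range(0, n, size)
            for j in range(0, n, size)
        )
        sizes.append(size)
        counts.append(count)
        size //= 2
    # The slope of log N(eps) versus log(1/eps) estimates the dimension
    x = -np.log(np.array(sizes, dtype=float))
    y = np.log(np.array(counts, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return float(slope)

# A fully dense matrix fills the plane, so its estimate is dimension 2
dense = np.ones((64, 64))
print(round(box_counting_dimension(dense), 2))  # → 2.0
```

A single active row yields an estimate near 1, and fully dense support yields 2; real attention maps land in between (or slightly above, once weights rather than bare support are measured).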
## 📈 Performance

- **Analysis Time**: 0.047-0.248 s depending on model size
- **Memory Efficient**: supports models up to 1B parameters on a 24 GB GPU
- **Universal**: works with GPT, BERT, T5, LLaMA, Qwen, Gemma, and more
## 🛠️ Development

### Setup Development Environment

```bash
# Clone the repository
git clone https://github.com/ross-sec/fractal_attention_analysis.git
cd fractal_attention_analysis

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```
### Running Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=fractal_attention_analysis --cov-report=html

# Run a specific test file
pytest tests/test_core.py
```
### Code Quality

```bash
# Format code
black src/ tests/

# Sort imports
isort src/ tests/

# Lint
flake8 src/ tests/

# Type check
mypy src/
```
## 📖 Citation

If you use FAA in your research, please cite:

```bibtex
@software{ross2025faa,
  title={Fractal-Attention Analysis: A Mathematical Framework for LLM Interpretability},
  author={Ross, Andre and Ross, Leorah and Atias, Eyal},
  year={2025},
  url={https://github.com/ross-sec/fractal_attention_analysis}
}
```
## 🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

### Areas for Contribution
- Support for additional model architectures
- New fractal transformation methods
- Enhanced visualization capabilities
- Performance optimizations
- Documentation improvements
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 👥 Authors

- **Andre Ross** - Lead Developer - Ross Technologies
- **Leorah Ross** - Co-Developer - Ross Technologies
- **Eyal Atias** - Co-Developer - Hooking LTD
## 🙏 Acknowledgments
- HuggingFace team for the Transformers library
- The open-source AI research community
- Fractal geometry pioneers: Benoit Mandelbrot, Michael Barnsley
## 📞 Support

- **Issues**: GitHub Issues
- **Discussions**: GitHub Discussions
- **Email**: devops.ross@gmail.com
## 🗺️ Roadmap
- Support for multi-head parallel analysis
- CUDA-optimized fractal computations
- Real-time streaming analysis
- Interactive web dashboard
- Integration with popular interpretability tools (SHAP, LIME)
- Extended model zoo with pre-computed benchmarks
Made with ❤️ by Ross Technologies & Hooking LTD