sparse-kappa
Condition Number Estimation on GPUs for Sparse Matrices
sparse-kappa is a GPU-accelerated library for estimating condition numbers of sparse matrices using CuPy. It supports a variety of estimation methods, each paired with a choice of linear solvers. sparse-kappa is designed both for benchmarking condition number estimators and for practical use in science and engineering.
Features
- GPU-Accelerated: All computations run on NVIDIA GPUs via CuPy
- Multiple Norms: Support for 1-norm and 2-norm condition numbers
- Rich Algorithm Suite:
- 1-norm: Hager-Higham, Power iteration, Oettli-Prager sampling, Block Hager
- 2-norm: Power method, Lanczos, Arnoldi, Golub-Kahan bidiagonalization
- CuPy integrations: SVDS, EIGSH, LOBPCG wrappers
- Flexible Solver System: LU, LSMR, CG, BiCGSTAB, GMRES, Direct, Auto-selection
- Smart LU Caching: Reuses factorizations for multiple solves (10-20x speedup)
- Memory Efficient: Designed for large sparse matrices
Installation
Install from PyPI:
pip install sparse-kappa
Or install from source:
git clone https://github.com/chenxinye/sparse-kappa
cd sparse-kappa
pip install -e .
CuPy is required and must match your CUDA toolkit:
pip install cupy-cuda11x  # or cupy-cuda12x for CUDA 12
Quick Start
import cupyx.scipy.sparse as sp
from sparse_kappa import cond_estimate
# Create sparse matrix
A = sp.random(10000, 10000, density=0.01, format='csr')
# Estimate condition number
cond = cond_estimate(A)
print(f"κ(A) = {cond:.2e}")
# Use specific method with LU solver
cond = cond_estimate(A, norm=1, method='hager-higham', solver='lu')
Available Methods
1-Norm Methods
| Method | Description | Best For | Complexity |
|---|---|---|---|
| `hager` | Hager algorithm (default) | High accuracy, general matrices | O(k·nnz) |
| `power` | Power iteration | Fast rough estimates | O(k·nnz) |
| `oettli-prager` | Random/adaptive sampling | Quick estimates with variants | O(m·nnz) |
| `hager-higham` | Hager-Higham (block algorithm, multiple vectors) | Improved robustness | O(k·b·nnz) |
Recommended: use `solver='lu'` for all 1-norm methods (10-20x faster)
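These 1-norm estimators only need solves with A and Aᵀ, which is why a cached LU factorization pays off. As a rough illustration of the underlying Hager iteration, here is a minimal CPU sketch using SciPy (sparse-kappa itself runs the CuPy equivalents on the GPU, and `hager_cond1` is a hypothetical helper, not part of the library's API):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def hager_cond1(A, max_iter=5):
    """Estimate kappa_1(A) = ||A||_1 * ||A^-1||_1 via Hager's method (CPU sketch)."""
    n = A.shape[0]
    lu = splu(A.tocsc())              # one factorization, reused for every solve
    x = np.full(n, 1.0 / n)           # start from the uniform vector
    est = 0.0
    for _ in range(max_iter):
        y = lu.solve(x)               # y = A^-1 x
        est = np.abs(y).sum()
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = lu.solve(xi, trans='T')   # z = A^-T sign(y)
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:     # gradient test: no better unit vector found
            break
        x = np.zeros(n)
        x[j] = 1.0                    # restart from the most promising column
    return abs(A).sum(axis=0).max() * est

# Diagonal test matrix with entries 1..100: exact kappa_1 = 100
A = sp.diags(np.linspace(1.0, 100.0, 50)).tocsc()
print(hager_cond1(A))   # ≈ 100 (exact for this diagonal matrix)
```

The same structure explains the LU recommendation above: each iteration performs one solve with A and one with Aᵀ, so the factorization cost amortizes quickly.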
2-Norm Methods
| Method | Description | Best For | Complexity |
|---|---|---|---|
| `svds` | Partial SVD (most accurate) | Small-medium matrices (<5k) | O(k·nnz) |
| `eigsh` | Symmetric eigenvalue solver | Symmetric matrices | O(k·nnz) |
| `lobpcg` | Block preconditioned CG | Large matrices | O(k·nnz) |
| `power` | Power iteration | Quick estimates | O(k·nnz) |
| `lanczos` | Lanczos tridiagonalization | Medium matrices | O(k²·nnz) |
| `arnoldi` | Arnoldi iteration | Non-symmetric | O(k²·nnz) |
| `golub-kahan` | Bidiagonalization | Numerically stable | O(k·nnz) |
| `auto` | Automatic selection | All cases | - |
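For the 2-norm, κ₂(A) = σ_max/σ_min, so every method above is really a pair of extreme-singular-value solvers. A minimal CPU analogue of the `power` method using SciPy (the function name is illustrative, not library API; note that σ_min comes from inverse iteration, which is where the solver machinery enters):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def power_cond2(A, n_iter=200, seed=0):
    """Estimate kappa_2(A) = sigma_max / sigma_min by power iteration.

    sigma_max: power iteration on A^T A.
    sigma_min: power iteration on (A A^T)^-1, i.e. inverse iteration
    driven by one LU factorization reused for every solve.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    v = rng.standard_normal(n)
    for _ in range(n_iter):
        v = A.T @ (A @ v)                 # v <- A^T A v
        v /= np.linalg.norm(v)
    sigma_max = np.linalg.norm(A @ v)     # ||A v|| -> sigma_max

    lu = splu(A.tocsc())                  # factor once, solve 2*n_iter times
    w = rng.standard_normal(n)
    for _ in range(n_iter):
        w = lu.solve(lu.solve(w, trans='T'))   # w <- (A A^T)^-1 w
        w /= np.linalg.norm(w)
    lam = np.linalg.norm(lu.solve(lu.solve(w, trans='T')))  # -> 1/sigma_min^2
    return sigma_max * np.sqrt(lam)

# Diagonal test matrix with entries 2..20: exact kappa_2 = 20/2 = 10
A = sp.diags(np.linspace(2.0, 20.0, 10)).tocsc()
print(power_cond2(A))   # ≈ 10.0
```

Plain power iteration converges slowly when the extreme singular values are clustered, which is why the Lanczos, Arnoldi, and Golub-Kahan variants exist.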
Solver Options
All 1-norm methods support flexible solver selection:
| Solver | Description | Best For | Speed | Memory |
|---|---|---|---|---|
| `auto` | Automatic selection (default) | General use | Good | Low |
| `lu` | LU factorization with caching | Small matrices (<5k), multiple solves | Fastest | High |
| `lsmr` | LSMR iterative solver | Large matrices, single solve | Medium | Low |
| `cg` | Conjugate Gradient | SPD matrices | Fast | Low |
| `bicgstab` | BiCGSTAB (stabilized BiCG) | Non-symmetric matrices | Fast | Low |
| `gmres` | GMRES | Non-symmetric, when BiCGSTAB fails | Medium | Low |
| `direct` | Direct solver (no caching) | Single solve, small matrices | Fast | Medium |
Legend:
k = iterations, b = block size, m = samples, nnz = non-zeros
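The reason `lu` dominates when one estimate triggers many solves: the factorization cost is paid once, and each subsequent solve (including the transpose solves the 1-norm estimators need) is just two triangular sweeps. A CPU sketch of the caching pattern using SciPy (`CachedLUSolver` is an illustrative name, not the library's internal class):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

class CachedLUSolver:
    """Factor A once, then reuse the factors for every subsequent solve."""
    def __init__(self, A):
        self._lu = splu(A.tocsc())   # expensive step, paid exactly once
        self.solve_count = 0
    def solve(self, b, trans='N'):
        self.solve_count += 1        # each solve is a cheap triangular sweep
        return self._lu.solve(b, trans=trans)

# Diagonally dominant sparse test matrix (guaranteed nonsingular)
A = sp.eye(100, format='csc') \
    + sp.random(100, 100, density=0.02, random_state=0, format='csc') * 0.1
solver = CachedLUSolver(A)
b = np.ones(100)
x = solver.solve(b)               # solves A x = b
xt = solver.solve(b, trans='T')   # solves A^T x = b, reusing the same factors
print(solver.solve_count)         # 2
```

The trade-off in the table follows directly: the cached factors make repeated solves fastest, at the cost of the fill-in memory the factorization introduces.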
Examples
Example 1: Compare Methods
import cupyx.scipy.sparse as sp
from sparse_kappa import cond_estimate
A = sp.random(2000, 2000, density=0.005, format='csr')
# Compare 1-norm methods
methods_1 = ['hager-higham', 'power', 'oettli-prager', 'block-hager']
for method in methods_1:
    result = cond_estimate(A, norm=1, method=method, solver='lu',
                           return_dict=True)
    print(f"{method:15s}: κ={result['condition_number']:.4e}, "
          f"iters={result['iterations']}")
# Compare 2-norm methods
methods_2 = ['svds', 'lanczos', 'golub-kahan']
for method in methods_2:
    cond = cond_estimate(A, norm=2, method=method)
    print(f"{method:12s}: κ={cond:.4e}")
Example 2: LU Solver (for 1-norm)
# Highly recommended: use LU solver for Hager-Higham
result = cond_estimate(A, norm=1, method='hager-higham',
                       solver='lu', return_dict=True)
print(f"Condition number: {result['condition_number']:.4e}")
print(f"Solver info:")
print(f" Type: {result['solver_info']['solver_A']['method']}")
print(f" Factorized: {result['solver_info']['solver_A']['factorized']}")
print(f" Solves: {result['solver_info']['solver_A']['solve_count']}")
Example 3: Oettli-Prager Variants
# Adaptive (most accurate)
result = cond_estimate(A, norm=1, method='oettli-prager',
                       solver='lu', variant='adaptive', max_iter=15)
# Random sampling (fastest)
result = cond_estimate(A, norm=1, method='oettli-prager',
                       solver='lu', variant='random', max_iter=20)
# Hybrid (balanced)
result = cond_estimate(A, norm=1, method='oettli-prager',
                       solver='lu', variant='hybrid', max_iter=15)
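The sampling idea behind these variants can be pictured on CPU: each solve A x = e_j reveals one exact column of A⁻¹, and the largest column sum seen is a lower bound on ||A⁻¹||₁ (an adaptive variant would then steer sampling toward promising columns). A simplified sketch with SciPy, where `sampled_inv_norm1` is a hypothetical helper rather than the library's exact scheme:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def sampled_inv_norm1(A, n_samples=10, seed=0):
    """Lower-bound ||A^-1||_1 by solving A x = e_j for sampled columns j."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    lu = splu(A.tocsc())                   # factor once, one solve per sample
    best = 0.0
    for j in rng.choice(n, size=min(n_samples, n), replace=False):
        e = np.zeros(n)
        e[j] = 1.0
        col_sum = np.abs(lu.solve(e)).sum()  # exact 1-norm of column j of A^-1
        best = max(best, col_sum)
    return best

# Diagonal test matrix: ||A^-1||_1 = 1/0.5 = 2.0
A = sp.diags(np.linspace(0.5, 5.0, 8)).tocsc()
print(sampled_inv_norm1(A, n_samples=8))   # 2.0, since all 8 columns are sampled
```

With fewer samples than columns the result is only a probabilistic lower bound, which matches the table's framing of `oettli-prager` as a quick estimate rather than a high-accuracy method.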
Example 4: Custom Solver Parameters
# LSMR with relaxed tolerance for large matrices
result = cond_estimate(A, norm=1, method='hager-higham',
solver='lsmr',
solver_kwargs={'atol': 1e-3, 'maxiter': 20})
# CG for symmetric matrices
A_spd = A @ A.T + sp.eye(A.shape[0]) * 10
result = cond_estimate(A_spd, norm=1, method='hager-higham',
solver='cg',
solver_kwargs={'atol': 1e-3, 'maxiter': 30})
Performance Tips
- Auto mode is recommended for first-time usage
- For symmetric matrices, use `eigsh` or `lanczos`
- For large sparse matrices (>10k), use `golub-kahan` or `lobpcg`
- For highest accuracy on small matrices, use `svds`
- Increase `max_iter` if convergence fails
Testing
# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_norm2.py -v
# Run with coverage
pytest tests/ --cov=sparse_kappa
License
MIT License
Contributing
Contributions welcome! Please submit issues and pull requests on GitHub.
References
- Hager, W. W. (1984). "Condition estimates." SIAM J. Sci. Stat. Comput., 5(2), 311-316.
- Higham, N. J., & Tisseur, F. (2000). "A block algorithm for matrix 1-norm estimation." SIAM J. Matrix Anal. Appl., 21(4), 1185-1201.
- Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press.
- Saad, Y. (2011). Numerical Methods for Large Eigenvalue Problems (2nd ed.). SIAM.
- Oettli, W., & Prager, W. (1964). "Compatibility of approximate solution of linear equations." Numerische Mathematik, 6(1), 405-409.
- Van der Vorst, H. A. (1992). "Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems." SIAM J. Sci. Stat. Comput., 13(2), 631-644.
Citation
If you use this library in your research, please cite:
@software{sparse_kappa,
  title={Sparse Matrices Condition Number Estimation on GPUs},
  author={Xinye Chen},
  year={2026},
  url={https://github.com/chenxinye/sparse_kappa}
}