qce
Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors.
qce provides reusable Python building blocks for complex-valued channel estimation with quantized observations. It contains scalar quantizers, Bussgang and quantized-covariance utilities, covariance recovery routines, covariance generators, and estimators for quantized linear observation models.
The package is a clean, package-oriented implementation inspired by the research code for
B. Fesl, N. Turan, B. Böck, and W. Utschick, “Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models,” in IEEE Transactions on Signal Processing, vol. 72, pp. 1475-1490, 2024.
[IEEE] [arXiv] [Legacy code]
✨ Highlights
- Quantized channel estimation for complex-valued linear observation models
- Uniform midrise and Lloyd-Max scalar quantizers
- One-bit arcsine-law covariance utilities
- Multi-bit Bussgang covariance approximations with exact scalar quantized variances on the diagonal
- Covariance recovery from quantized complex-valued samples
- Linear baselines: least-squares and Bussgang-LMMSE estimators
- Mixture-prior estimators: Bussgang-GMM and Bussgang-MFA
- Oracle and trained-prior examples with SNR-vs-MSE plots
- Random covariance generators for simulation and testing
- Integration with cplx-gmm, cplx-mfa, gmm-estimator, and mfa-estimator
- Modern Python package layout with pyproject.toml, uv, pytest, and ruff
📦 Installation
Install from PyPI:
pip install qce
or with uv:
uv add qce
For development, clone the repository and install the development environment:
git clone https://github.com/benediktfesl/quantized-channel-estimation.git
cd quantized-channel-estimation
uv sync --group dev
Run tests and checks:
uv run ruff check .
uv run pytest
🚀 Quick Start
Bussgang-LMMSE from quantized observations
import numpy as np

from qce.estimators import BussgangLMMSEEstimator
from qce.quantizers import (
    bussgang_matrix,
    quantized_covariance,
    uniform_midrise_quantizer,
    uniform_quantization_step,
)

rng = np.random.default_rng(0)
n_dim = 8
snr_db = 10.0
n_bits = 3

# Observation model: identity measurement matrix, identity channel covariance.
A = np.eye(n_dim, dtype=complex)
C_h = np.eye(n_dim, dtype=complex)
noise_variance = 10.0 ** (-snr_db / 10.0)
C_y = A @ C_h @ A.conj().T + noise_variance * np.eye(n_dim)

# Uniform midrise quantizer with an SNR-matched step size.
quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

# Bussgang matrix and covariance of the quantized observations.
B = bussgang_matrix(C_y, n_bits=n_bits, snr_db=snr_db)
C_r = quantized_covariance(C_y, n_bits=n_bits, snr_db=snr_db, quantizer=quantizer)

estimator = BussgangLMMSEEstimator.from_bussgang(
    measurement_matrix=A,
    channel_covariance=C_h,
    bussgang_matrix=B,
    quantized_observation_covariance=C_r,
)

# Placeholder observations to quantize; see the follow-up snippet below for
# observations drawn from the model y = A h + n.
y = (
    rng.standard_normal((4, n_dim)) + 1j * rng.standard_normal((4, n_dim))
) / np.sqrt(2.0)
r = quantizer.quantize(np.real(y)) + 1j * quantizer.quantize(np.imag(y))
h_hat = estimator.estimate(r)
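The observations above are only placeholder data. Under the model from the Estimation Model section below, observations are drawn as y = A h + n before quantization. A minimal sketch that continues the snippet above (it reuses rng, A, noise_variance, quantizer, and estimator, and assumes h_hat has the same shape as the drawn channels):

# Draw channels from the prior C_h = I and noisy observations y = A h + n
# (rows are samples), then quantize real and imaginary parts separately.
h = (
    rng.standard_normal((4, n_dim)) + 1j * rng.standard_normal((4, n_dim))
) / np.sqrt(2.0)
noise = np.sqrt(noise_variance / 2.0) * (
    rng.standard_normal((4, n_dim)) + 1j * rng.standard_normal((4, n_dim))
)
y = h @ A.T + noise
r = quantizer.quantize(np.real(y)) + 1j * quantizer.quantize(np.imag(y))
h_hat = estimator.estimate(r)

# Normalized MSE against the drawn channels.
nmse = np.mean(np.abs(h_hat - h) ** 2) / np.mean(np.abs(h) ** 2)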
Bussgang-GMM with a fitted complex-valued GMM prior
import numpy as np

from qce.estimators import BussgangGmmEstimator
from qce.quantizers import uniform_midrise_quantizer, uniform_quantization_step

rng = np.random.default_rng(0)
snr_db = 10.0
n_bits = 3
n_dim = 8

# Training channels for fitting the complex-valued GMM prior.
h_train = (
    rng.standard_normal((2_000, n_dim)) + 1j * rng.standard_normal((2_000, n_dim))
) / np.sqrt(2.0)

estimator = BussgangGmmEstimator(
    n_components=4,
    covariance_type="full",
    zero_mean=True,
    random_state=0,
    n_init=1,
    max_iter=100,
)
estimator.fit(h_train)

noise_covariance = 10.0 ** (-snr_db / 10.0) * np.eye(n_dim, dtype=complex)
quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

# Quantize real and imaginary parts of a few samples element-wise.
r = quantizer.quantize(np.real(h_train[:4])) + 1j * quantizer.quantize(
    np.imag(h_train[:4])
)

h_hat = estimator.estimate_quantized(
    y=r,
    noise_covariance=noise_covariance,
    observation_matrix=np.eye(n_dim, dtype=complex),
    n_bits=n_bits,
    quantizer=quantizer,
    quantizer_kind="uniform",
    snr_db=snr_db,
)
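As a quick sanity check on the fitted prior, the estimate can be scored against the channels that were quantized and compared with using the raw quantized observations directly. This continues the snippet above and assumes estimate_quantized returns an array with the same shape as y:

# Channels that were quantized in the snippet above.
h_true = h_train[:4]

# Normalized MSE of the Bussgang-GMM estimate vs. the raw quantized observations.
nmse_gmm = np.mean(np.abs(h_hat - h_true) ** 2) / np.mean(np.abs(h_true) ** 2)
nmse_raw = np.mean(np.abs(r - h_true) ** 2) / np.mean(np.abs(h_true) ** 2)
print(f"Bussgang-GMM NMSE: {nmse_gmm:.3f}, raw quantized NMSE: {nmse_raw:.3f}")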
Bussgang-MFA with a fitted complex-valued MFA prior
import numpy as np

from qce.estimators import BussgangMfaEstimator
from qce.quantizers import uniform_midrise_quantizer, uniform_quantization_step

rng = np.random.default_rng(0)
snr_db = 10.0
n_bits = 3
n_dim = 8

# Training channels for fitting the complex-valued MFA prior.
h_train = (
    rng.standard_normal((2_000, n_dim)) + 1j * rng.standard_normal((2_000, n_dim))
) / np.sqrt(2.0)

estimator = BussgangMfaEstimator(
    n_components=4,
    latent_dim=2,
    zero_mean=True,
    random_state=0,
    max_iter=100,
    verbose=False,
)
estimator.fit(h_train)

Cn = 10.0 ** (-snr_db / 10.0) * np.eye(n_dim, dtype=complex)
quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

# Quantize real and imaginary parts of a few samples element-wise.
r = quantizer.quantize(np.real(h_train[:4])) + 1j * quantizer.quantize(
    np.imag(h_train[:4])
)

h_hat = estimator.estimate_quantized(
    y=r,
    Cn=Cn,
    A=np.eye(n_dim, dtype=complex),
    n_bits=n_bits,
    quantizer=quantizer,
    quantizer_kind="uniform",
    snr_db=snr_db,
)
🧩 Estimation Model
The package considers a quantized complex-valued linear observation model
r = Q(y) = Q(A h + n)
where:
| Symbol | Description |
|---|---|
| h | Unknown complex-valued channel or signal vector. |
| A | Known linear observation matrix. |
| n | Zero-mean complex Gaussian observation noise. |
| y | Unquantized observation. |
| r = Q(y) | Quantized observation. |
For one-bit quantization, qce uses the complex arcsine relation to model the covariance of the quantized observations. For multi-bit quantization, qce uses Bussgang-linearized observation models.
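For intuition, the one-bit arcsine law is easy to write down directly. The sketch below is illustrative only: it assumes the convention r = (sign(Re y) + j sign(Im y)) / sqrt(2) for zero-mean proper complex Gaussian y, whereas qce's quantized_covariance handles the one-bit case internally.

import numpy as np

def arcsine_law_covariance(C_y):
    # Normalize C_y to unit diagonal, then apply the arcsine law element-wise
    # to the real and imaginary parts of the normalized covariance.
    d = np.sqrt(np.real(np.diag(C_y)))
    C_norm = C_y / np.outer(d, d)
    return (2.0 / np.pi) * (
        np.arcsin(np.real(C_norm)) + 1j * np.arcsin(np.imag(C_norm))
    )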
For a component-wise Gaussian prior
p(h) = sum_k pi_k CN(h; mu_k, C_k),
qce builds component-wise quantized observation models:
C_y,k = A C_k A^H + C_n
B_k = Bussgang(C_y,k)
C_r,k = Cov(Q(y) | k)
The component-wise estimate has the LMMSE form
h_hat_k = mu_k + C_hr,k C_r,k^{-1} (r - E[r | k]),
and the final estimate combines the component-wise estimates using posterior component probabilities in the quantized observation domain.
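The component-wise recipe can be spelled out with the documented quantizer utilities. The sketch below is a simplified zero-mean, two-component illustration: the cross-covariance C_hr,k = C_k A^H B_k^H follows from the Bussgang linearization, the posterior weights use a Gaussian approximation of the quantized-domain component likelihoods, and the quantizer is assumed to apply element-wise to arrays of any shape. It is not the exact implementation inside BussgangGmmEstimator.

import numpy as np

from qce.quantizers import (
    bussgang_matrix,
    quantized_covariance,
    uniform_midrise_quantizer,
    uniform_quantization_step,
)

rng = np.random.default_rng(0)
n_dim, n_bits, snr_db = 4, 3, 10.0
noise_variance = 10.0 ** (-snr_db / 10.0)

A = np.eye(n_dim, dtype=complex)
C_n = noise_variance * np.eye(n_dim, dtype=complex)

# Toy zero-mean prior with two equally weighted Gaussian components.
C_components = [np.eye(n_dim, dtype=complex), 0.5 * np.eye(n_dim, dtype=complex)]
weights = np.array([0.5, 0.5])

quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)

# One quantized observation r = Q(A h + n), with h drawn from component 0.
h = (rng.standard_normal(n_dim) + 1j * rng.standard_normal(n_dim)) / np.sqrt(2.0)
noise = np.sqrt(noise_variance / 2.0) * (
    rng.standard_normal(n_dim) + 1j * rng.standard_normal(n_dim)
)
y = A @ h + noise
r = quantizer.quantize(np.real(y)) + 1j * quantizer.quantize(np.imag(y))

log_post = np.empty(2)
h_hat_per_component = np.empty((2, n_dim), dtype=complex)
for k, C_k in enumerate(C_components):
    C_yk = A @ C_k @ A.conj().T + C_n
    B_k = bussgang_matrix(C_yk, n_bits=n_bits, snr_db=snr_db)
    C_rk = quantized_covariance(C_yk, n_bits=n_bits, snr_db=snr_db, quantizer=quantizer)
    C_hrk = C_k @ A.conj().T @ B_k.conj().T            # Bussgang cross-covariance
    h_hat_per_component[k] = C_hrk @ np.linalg.solve(C_rk, r)
    # Gaussian approximation of log p(r | k) for a zero-mean component.
    _, logdet = np.linalg.slogdet(np.pi * C_rk)
    log_post[k] = np.log(weights[k]) - logdet - np.real(r.conj() @ np.linalg.solve(C_rk, r))

# Posterior component probabilities and the combined estimate.
post = np.exp(log_post - log_post.max())
post /= post.sum()
h_hat = (post[:, None] * h_hat_per_component).sum(axis=0)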
🧠 Estimator API
Linear estimators
| Class | Description |
|---|---|
| LeastSquaresEstimator | Least-squares estimator for linear observation models. |
| BussgangLMMSEEstimator | LMMSE estimator for unquantized or Bussgang-linearized quantized observations. |
Mixture-prior estimators
| Class | Base package | Description |
|---|---|---|
| BussgangGmmEstimator | gmm-estimator / cplx-gmm | Bussgang-based quantized estimator with a complex-valued GMM prior. |
| BussgangMfaEstimator | mfa-estimator / cplx-mfa | Bussgang-based quantized estimator with a complex-valued MFA prior. |
BussgangGmmEstimator inherits the fitting API from gmm-estimator, which itself builds on cplx-gmm.
BussgangMfaEstimator inherits the fitting API from mfa-estimator, which itself builds on cplx-mfa.
The inherited estimate(...) methods remain the high-resolution continuous-observation estimators. The additional estimate_quantized(...) methods handle quantized observations.
🔢 Quantizers
qce.quantizers contains scalar quantizers and Bussgang utilities.
| Utility | Description |
|---|---|
| ScalarQuantizer | Frozen scalar quantizer object with thresholds, labels, validation, and .quantize(...). |
| uniform_midrise_quantizer(...) | Symmetric uniform midrise scalar quantizer. |
| uniform_quantization_step(...) | Standard-Gaussian uniform quantizer step utility. |
| uniform_distortion_factor(...) | Approximate uniform quantization distortion factor. |
| lloyd_max_quantizer(...) | Lloyd-Max scalar quantizer for Gaussian scalar inputs. |
| bussgang_matrix(...) | Bussgang matrix for uniform scalar quantization. |
| lloyd_max_bussgang_matrix(...) | Bussgang matrix for Lloyd-Max scalar quantization. |
| quantized_covariance(...) | Quantized covariance for one-bit and uniform multi-bit quantization. |
| quantized_variance(...) | Exact scalar quantized variances for a given scalar quantizer. |
Uniform vs Lloyd-Max
Uniform multi-bit estimators use quantizer_kind="uniform" and require snr_db for the Bussgang gain.
Lloyd-Max multi-bit estimators use quantizer_kind="lloyd_max" and expect the ScalarQuantizer provided by the Lloyd-Max result object:
from qce.quantizers import lloyd_max_quantizer
result = lloyd_max_quantizer(snr_db=10.0, n_bits=3)
quantizer = result.quantizer
The Lloyd-Max mixture-estimator path uses a Bussgang covariance approximation with the supplied Lloyd-Max Bussgang matrix and exact scalar quantized variances on the diagonal.
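A sketch of the Lloyd-Max path with a fitted GMM prior. It mirrors the uniform Quick Start, changing only the quantizer construction and quantizer_kind; whether snr_db must still be passed in this mode is not documented above, so it is omitted here as an assumption.

import numpy as np

from qce.estimators import BussgangGmmEstimator
from qce.quantizers import lloyd_max_quantizer

rng = np.random.default_rng(0)
snr_db, n_bits, n_dim = 10.0, 3, 8

h_train = (
    rng.standard_normal((2_000, n_dim)) + 1j * rng.standard_normal((2_000, n_dim))
) / np.sqrt(2.0)

estimator = BussgangGmmEstimator(
    n_components=4,
    covariance_type="full",
    zero_mean=True,
    random_state=0,
    n_init=1,
    max_iter=100,
)
estimator.fit(h_train)

# ScalarQuantizer taken from the Lloyd-Max result object.
quantizer = lloyd_max_quantizer(snr_db=snr_db, n_bits=n_bits).quantizer
r = quantizer.quantize(np.real(h_train[:4])) + 1j * quantizer.quantize(
    np.imag(h_train[:4])
)

h_hat = estimator.estimate_quantized(
    y=r,
    noise_covariance=10.0 ** (-snr_db / 10.0) * np.eye(n_dim, dtype=complex),
    observation_matrix=np.eye(n_dim, dtype=complex),
    n_bits=n_bits,
    quantizer=quantizer,
    quantizer_kind="lloyd_max",
)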
📈 Covariance Utilities
qce.covariance contains covariance recovery, positive-definite matrix helpers, and simulation-oriented covariance generators.
Covariance recovery
from qce.covariance import estimate_covariance_from_quantized_samples
C_hat = estimate_covariance_from_quantized_samples(
    quantized_samples,
    n_bits=3,
    quantizer=quantizer,
)
The recovery method estimates the covariance of the unquantized signal that was observed through scalar quantization. It follows the legacy covariance-recovery setup: normalized correlations are recovered using one-bit signs, while marginal variances are recovered from threshold-hit probabilities.
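An end-to-end sketch: draw complex Gaussian samples with a known covariance, quantize them, and recover the covariance. The ground-truth covariance, the sample count, and the SNR-matched step choice are illustrative assumptions.

import numpy as np

from qce.covariance import estimate_covariance_from_quantized_samples
from qce.quantizers import uniform_midrise_quantizer, uniform_quantization_step

rng = np.random.default_rng(0)
n_dim, n_bits, snr_db = 8, 3, 10.0

# Ground-truth covariance with an exponential correlation profile.
idx = np.arange(n_dim)
C_true = (0.9 ** np.abs(idx[:, None] - idx[None, :])).astype(complex)

# Draw complex Gaussian samples with covariance C_true (rows are samples).
L = np.linalg.cholesky(C_true)
white = (
    rng.standard_normal((10_000, n_dim)) + 1j * rng.standard_normal((10_000, n_dim))
) / np.sqrt(2.0)
samples = white @ L.T

quantizer = uniform_midrise_quantizer(
    step=float(uniform_quantization_step(snr_db, n_bits)),
    n_bits=n_bits,
)
quantized_samples = quantizer.quantize(np.real(samples)) + 1j * quantizer.quantize(
    np.imag(samples)
)

C_hat = estimate_covariance_from_quantized_samples(
    quantized_samples,
    n_bits=n_bits,
    quantizer=quantizer,
)
rel_err = np.linalg.norm(C_hat - C_true) / np.linalg.norm(C_true)
print(f"Relative Frobenius error: {rel_err:.3f}")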
Covariance generators
| Generator | Description |
|---|---|
| RandomSPDCovarianceGenerator | Random unstructured Hermitian positive-definite covariance matrices. |
| RandomExponentialCovarianceGenerator | Structured exponential correlation covariances with random phase and marginal variances. |
| RandomLowRankCovarianceGenerator | Low-rank plus diagonal covariance matrices of the form U diag(lambda) U^H + sigma^2 I. |
All generators support:
covariance = generator.sample_covariance()
covariances = generator.sample_covariance(n_draws=100)
samples = generator.sample_observations(n_samples=10_000)
Observation sampling supports configurable factorization modes:
"auto", "none", "cholesky", "eigh"
📊 Examples
Run examples with uv:
uv run python examples/covariance_recovery_from_quantized_samples.py
uv run python examples/snr_vs_mse_oracle_estimators.py
uv run python examples/snr_vs_mse_trained_priors.py
uv run python examples/compare_quantizer_variants.py
Generated figures are written to results/:
results/covariance_recovery/
results/snr_vs_mse_oracle_estimators/
results/snr_vs_mse_trained_priors/
results/quantizer_comparison/
The results/ directory is intended for local runtime outputs and should usually not be committed.
🧪 Development
The project uses:
- uv for dependency and environment management
- pytest for tests
- ruff for linting
- src/ package layout
- setuptools build backend
Useful commands:
uv sync --group dev
uv run ruff check . --fix
uv run pytest
uv build
🔗 Related Packages
qce is designed to sit on top of reusable complex-valued prior and estimator packages.
| Package | Role | Links |
|---|---|---|
| cplx-gmm | Complex-valued GMM fitting | PyPI · GitHub |
| cplx-mfa | Complex-valued MFA fitting | PyPI · GitHub |
| gmm-estimator | High-resolution GMM estimator for linear inverse problems | PyPI · GitHub |
| mfa-estimator | High-resolution MFA estimator for linear inverse problems | PyPI · GitHub |
| Legacy quantized-estimation code | Original research scripts for the TSP 2024 paper | GitHub |
📌 Citation
If you use qce in academic work, please cite the package directly:
@software{fesl_qce,
  author  = {Fesl, Benedikt},
  title   = {{qce}: Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors},
  year    = {2026},
  url     = {https://github.com/benediktfesl/quantized-channel-estimation},
  version = {0.1.0}
}
Plain-text citation:
B. Fesl, qce: Quantized channel estimation with Bussgang-based estimators and complex-valued generative priors, version 0.1.0. Available: https://github.com/benediktfesl/quantized-channel-estimation
If you use the package in the context of quantized channel estimation, please also cite the corresponding research paper.
📚 Research Background
This package is related to the following works on generative priors, channel estimation, quantized systems, and structured covariance models.
Main reference
- B. Fesl, N. Turan, B. Böck, and W. Utschick, “Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models,” in IEEE Transactions on Signal Processing, vol. 72, pp. 1475-1490, 2024.
[IEEE] [arXiv]
Additional related works
- B. Fesl, “Generative Model-Aided Channel Estimation Design and Optimality Analysis,” Ph.D. dissertation, Technical University of Munich, 2025. [TUM]
- M. Koller, B. Fesl, N. Turan, and W. Utschick, “An Asymptotically MSE-Optimal Estimator Based on Gaussian Mixture Models,” IEEE Transactions on Signal Processing, vol. 70, pp. 4109–4123, 2022. [IEEE] [arXiv]
- B. Fesl, M. Joham, S. Hu, M. Koller, N. Turan, and W. Utschick, “Channel Estimation based on Gaussian Mixture Models with Structured Covariances,” in 56th Asilomar Conference on Signals, Systems, and Computers, 2022, pp. 533–537. [IEEE] [arXiv]
- B. Fesl, N. Turan, M. Joham, and W. Utschick, “Learning a Gaussian Mixture Model from Imperfect Training Data for Robust Channel Estimation,” IEEE Wireless Communications Letters, 2023. [IEEE] [arXiv]
- M. Baur, B. Fesl, and W. Utschick, “Leveraging Variational Autoencoders for Parameterized MMSE Estimation,” IEEE Transactions on Signal Processing, vol. 72, pp. 3731–3744, 2024. [IEEE] [arXiv]
- B. Fesl, A. Banna, and W. Utschick, “Enhancing Channel Estimation in Quantized Systems with a Generative Prior,” in IEEE 25th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2024, pp. 681–685. [IEEE] [arXiv]
- B. Fesl, M. Koller, and W. Utschick, “On the Mean Square Error Optimal Estimator in One-Bit Quantized Systems,” IEEE Transactions on Signal Processing, vol. 71, pp. 1968–1980, 2023. [IEEE] [arXiv]
- B. Fesl and W. Utschick, “Linear and Nonlinear MMSE Estimation in One-Bit Quantized Systems Under a Gaussian Mixture Prior,” IEEE Signal Processing Letters, vol. 32, pp. 361–365, 2025. [IEEE] [arXiv]
📄 License
This repository is distributed under the BSD 3-Clause License.
See LICENSE for details.