Project description

ParaFilt

ParaFilt is a Python package that provides a collection of parallel adaptive filter implementations for efficient signal processing applications. It leverages the power of parallel processing using PyTorch, enabling faster and scalable computations on multi-core CPUs and GPUs.

Features

  • A parallel algorithm framework for running iterative algorithms in parallel across batches.
  • Parallel implementations of popular adaptive filter algorithms, including LMS, NLMS, and RLS.
  • Extensible base classes that let researchers integrate their own adaptive filter algorithms for parallel computing.
  • Comprehensive documentation and examples for quick start and usage guidance.

Installation

To install ParaFilt, you can use pip:

pip install parafilt

Usage

Inputs:

  • desired_signal: (batch_size, samples) - Desired signal tensor.
  • input_signal: (samples) - Input signal tensor.

Returns:

A tuple (d_est, e) containing the estimated output and the error signal:

  • d_est: (batch_size, samples) - Estimated output tensor.
  • e: (batch_size, samples) - Error signal tensor.

Here's an example of how to use the package to create and apply the LMS filter:

import torch
import parafilt

# Example signals (shapes follow the Inputs section above; sizes are arbitrary)
batch_size, samples = 8, 16384
desired_signal = torch.randn(batch_size, samples).cuda()
input_signal = torch.randn(samples).cuda()

# Create an instance of the LMS filter
lms_filter = parafilt.LMS(hop=1024, framelen=4096, filterlen=1024).cuda()

# Perform parallel filter iteration
d_est, e = lms_filter(desired_signal, input_signal)

Here's an example of how to use the package to create and apply the RLS filter:

import parafilt

# Create an instance of the RLS filter (reusing the signals defined above)
rls_filter = parafilt.RLS(hop=1024, framelen=4096, filterlen=1024).cuda()

# Perform parallel filter iteration
d_est, e = rls_filter(desired_signal, input_signal)

For a detailed usage example, please refer to this notebook.

Parallel Algorithm Framework

ParaFilt provides a parallel algorithm framework that enables researchers to implement and execute iterative algorithms in parallel. The framework makes efficient use of multi-core CPUs and GPUs, yielding significant speedups for computationally intensive algorithms.

To leverage the parallel algorithm framework, researchers can extend the base classes provided by ParaFilt and use the parallel computation capabilities of PyTorch.

Here's an example of how to use the package to create your own filter:

from typing import Optional, Tuple

import torch

from parafilt import BaseFilter

class TemplateFilter(BaseFilter):
    def __init__(self, hop: int, framelen: int, filterlen: int = 1024,
                 weights_delay: Optional[int] = None,
                 weights_range: Tuple[float, float] = (-65535, 65535)):
        '''
        Template filter class that extends the BaseFilter class.
        :param hop: Hop size for frame processing.
        :param framelen: Length of each frame.
        :param filterlen: Length of the filter.
        :param weights_delay: Delay for the weights. If None, it is set to framelen - 1 (default: None).
        :param weights_range: Range for the filter weights (default: (-65535, 65535)).
        '''
        super().__init__(hop=hop, framelen=framelen, filterlen=filterlen, weights_delay=weights_delay,
                         weights_range=weights_range)

    @torch.no_grad()
    def forward_settings(self, d: torch.Tensor, x: torch.Tensor):
        '''
        Placeholder for per-forward setup.
        :param d: Desired signal tensor.
            Shape: (batch_size, frame_length)
        :param x: Input tensor.
            Shape: (1, frame_length, filter_length)
        '''
        return

    @torch.no_grad()
    def iterate(self, d: torch.Tensor, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        '''
        Placeholder for the filter iteration.
        :param d: Desired signal tensor.
            Shape: (batch_size, frame_length)
        :param x: Input tensor.
            Shape: (1, frame_length, filter_length)
        :return:
            torch.Tensor: Estimated output tensor.
                Shape: (batch_size, frame_length)
            torch.Tensor: Error tensor.
                Shape: (batch_size, frame_length)
        '''
        raise NotImplementedError
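
To make the template concrete, here is a minimal sketch of how a subclass might implement iterate, using a sign-error LMS update. It assumes the base class exposes the per-batch weights as a tensor attribute self.w of shape (batch_size, filter_length); that attribute name, and the step-size parameter mu, are illustrative assumptions rather than documented ParaFilt API.

class SignLMS(TemplateFilter):
    def __init__(self, hop: int, framelen: int, filterlen: int = 1024, mu: float = 1e-3):
        super().__init__(hop=hop, framelen=framelen, filterlen=filterlen)
        self.mu = mu  # step size for the weight update (hypothetical parameter)

    @torch.no_grad()
    def iterate(self, d: torch.Tensor, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        d_est = torch.empty_like(d)
        e = torch.empty_like(d)
        for n in range(d.shape[1]):
            xn = x[0, n]                       # (filter_length,)
            d_est[:, n] = self.w @ xn          # assumes self.w: (batch_size, filter_length)
            e[:, n] = d[:, n] - d_est[:, n]
            # Sign-error LMS: step in the direction of sign(e) * x
            self.w += self.mu * torch.sign(e[:, n]).unsqueeze(1) * xn
        return d_est, e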

Future Work

  • Implementation of custom CUDA kernels for the parallel framework and the filter algorithms, to achieve even faster computation.
  • An optional zero-padding mode so that the output size matches the input size, instead of discarding samples during frame decomposition and reconstruction (see the sketch below).
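
As a rough illustration of why samples are currently discarded (plain overlapping-frame arithmetic, not ParaFilt's actual reconstruction code): with frame length framelen and hop size hop, only samples covered by complete frames can be reconstructed, so a tail of the input may be dropped.

# Illustrative frame-coverage arithmetic (not ParaFilt internals)
samples, framelen, hop = 16000, 4096, 1024
n_frames = 1 + (samples - framelen) // hop   # number of complete frames
covered = framelen + (n_frames - 1) * hop    # samples covered by complete frames
print(samples - covered)                     # -> 640 tail samples discarded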

Citation

TBD.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvement, please open an issue or submit a pull request on the GitHub repository.

License

This project is licensed under the MIT License. See the LICENSE file for more information.

Contact

For any inquiries or questions, please contact zoreasaf@gmail.com.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

parafilt-0.1.2b0.tar.gz (5.8 kB)

Uploaded Source

Built Distribution

parafilt-0.1.2b0-py3-none-any.whl (7.1 kB)

Uploaded Python 3

File details

Details for the file parafilt-0.1.2b0.tar.gz.

File metadata

  • Download URL: parafilt-0.1.2b0.tar.gz
  • Upload date:
  • Size: 5.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for parafilt-0.1.2b0.tar.gz:

  • SHA256: d7efea36c2ffb5aee8b30b6dbaa7e8d237549aed72a07f67a138f7e08c503106
  • MD5: d72cc4293b45261a6beba54430ed329f
  • BLAKE2b-256: 9780bbbf8512629fa6e2abfef79ae1599d81064d98ad9770ca44b10a5f89824b

See more details on using hashes here.

File details

Details for the file parafilt-0.1.2b0-py3-none-any.whl.

File metadata

  • Download URL: parafilt-0.1.2b0-py3-none-any.whl
  • Upload date:
  • Size: 7.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for parafilt-0.1.2b0-py3-none-any.whl:

  • SHA256: 5a71d7712e31a896f294cbe8852190ad83721bca8b71adfc27538241d28783a2
  • MD5: cc1bc8ec838cf83d8630613b7285908b
  • BLAKE2b-256: fa9ad54e8d78264ced2e41153f40624d290075234dcf85e3e0fbe0319e222adb

See more details on using hashes here.
