statespace processors and models
processors and models from Bayesian Signal Processing: Classical, Modern, and Particle Filtering Methods and from Kalman Filtering: Theory and Practice.
210404 new model structure is working well, exemplified by the new bearings-only tracking model. it's thoroughly checked out for the kalman processor; the next step is to propagate and check the new structure throughout the sigma-point and particle processors.
210328 new model structure is definitely on the right path - it's already proving essential for work on the new bearings-only tracking model. evaluation and plotting are migrating into a new eval class within the base model. there each model will know how to make its own appropriate plots - definitely including model states and processor states - and perform its own evaluation tests.
210321 settling on a new model structure that separates the parts associated with the baseline kalman filter, sigma-point filter, and particle filter. this can eventually grow upward into a high-level base model. the concept is that a model needs to know something about the processors that are going to use it - a particle processor has different needs than a kalman processor. this gives a natural shape and structure to the models.
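the concept above can be sketched in a few lines - one model carrying a separate bundle of what each processor family needs. all class and method names here are illustrative stand-ins, not this library's actual API:

```python
import numpy as np

class Model:
    """hypothetical sketch: the model knows what each processor needs."""

    def __init__(self):
        self.kalman = self.KalmanParts(self)
        self.particle = self.ParticleParts(self)

    # shared truth model: scalar AR(1) state observed directly
    def dyn(self, x): return 0.9 * x
    def obs(self, x): return x

    class KalmanParts:
        """a kalman processor needs jacobians of dyn and obs."""
        def __init__(self, m): self.m = m
        def F(self, x): return np.array([[0.9]])
        def H(self, x): return np.array([[1.0]])

    class ParticleParts:
        """a particle processor needs to propagate and score particles."""
        def __init__(self, m): self.m = m
        def propagate(self, x, rng):
            return self.m.dyn(x) + rng.normal(0.0, 0.1, x.shape)
        def loglik(self, y, x):
            return -0.5 * (y - self.m.obs(x)) ** 2 / 0.01

m = Model()
cloud = m.particle.propagate(np.zeros(100), np.random.default_rng(0))
```

a kalman processor would touch only `m.kalman`, a particle processor only `m.particle`, while both share the same underlying `dyn` and `obs`.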
210314 maybe what we need is to bring in a new reference model - to kind of shake things up and help brainstorming around the concept of a high-level base model. a strong candidate is the bearings-only ship tracking problem - we have working examples in the oregon library, and it's in the textbooks.
the bearings-only problem has some interesting history - it's basically about being on a ww2-era sub. your sub is travelling along, and you begin hearing the sound of a ship at some bearing. over time, as the bearing changes, you can estimate the position and velocity of the ship. keep in mind that both your sub and the ship are moving the entire time. at some point you change your sub's course to pursue the ship.
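a minimal sketch of that geometry, assuming a flat 2-d ocean and the navigation convention of bearing measured clockwise from north - the tracks and numbers are made up for illustration:

```python
import numpy as np

def bearing(own, tgt):
    """bearing from own ship to target, radians clockwise from north."""
    d = tgt - own                    # d = [east, north] offset
    return np.arctan2(d[0], d[1])

dt = 1.0
tgt, tgt_vel = np.array([10.0, 5.0]), np.array([-0.5, 0.0])   # target track
own, own_vel = np.array([0.0, 0.0]), np.array([0.0, 0.3])     # own-ship track

bearings = []
for _ in range(5):
    bearings.append(bearing(own, tgt))
    tgt = tgt + dt * tgt_vel         # both platforms move the entire time
    own = own + dt * own_vel

# the slow drift of these bearings is the only information the tracker
# gets about the ship's position and velocity
```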
210308 looking more closely at the oregon library documentation, they have an interesting discussion of objectives - something to think about going forward. is a higher-level statespace model something to consider? putting it this way - adapt/remap a higher-level statespace model into specific lower-level statespace models - we're talking about a translator / converter / adaptor... and we already have an extremely primitive form of that - the rccircuit, jazwinski1, jazwinski2 models are fed into classical, modern, particle. we can think about a higher-level model that can express rccircuit, jazwinski1, and jazwinski2.
designed to allow reuse of a state space definition for state, parameter and joint estimation, using a variety of different inference algorithms. you define your system once in a higher-level state space framework, and then the inference framework generator geninfds together with the inference system noise source generator gensysnoiseds will adapt/remap that model into the lower-level state space framework needed for whatever type of estimation you want to do. this allows you to focus on defining the model, doing data IO, etc. without getting bogged down in recasting the problem into a different framework each time you want to use a different estimator or change the type of inference you're doing. i.e. the internal inference implementation is hidden, or as transparent as possible, with respect to the problem definition by the user.
210221 brought the documentation via readthedocs up to a minimal level. cleaned up the project and brought some focus to what's going on here. as the docs now make clear - this project focuses on reference problems from Bayesian Signal Processing: Classical, Modern, and Particle Filtering Methods, Kalman Filtering: Theory and Practice, and Stochastic Processes and Filtering Theory - in particular, using numpy for matrix and vector manipulation.
190418 brief lit-review posted on linkedin.
190331 concise motivation piece posted on linkedin.
190310 decision-function-based detector is go. simplest possible case - linear rc-circuit system-model and linear kalman-filter tracker. log-likelihood decision function for detection, ensembles of 100 runs each for signal case and noise case. output curves shown in the first plot - green signal, blue noise-only. roc curves in the second plot.
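the decision function above can be sketched as a log-likelihood ratio: a scalar kalman filter under the signal hypothesis accumulates the innovations log-likelihood, and the detector compares it against the noise-only likelihood. the rc-circuit constants here are illustrative placeholders, not the actual model parameters:

```python
import numpy as np

def loglik_kalman(z, a=0.97, q=1e-4, r=0.01):
    """innovations log-likelihood of z under the kalman signal model."""
    x, p, ll = 0.0, 1.0, 0.0
    for zk in z:
        x, p = a * x, a * a * p + q                  # predict
        nu, s = zk - x, p + r                        # innovation, variance
        ll += -0.5 * (nu * nu / s + np.log(2 * np.pi * s))
        k = p / s                                    # update
        x, p = x + k * nu, (1.0 - k) * p
    return ll

def loglik_noise(z, r=0.01):
    """log-likelihood of z under the noise-only hypothesis."""
    return float(np.sum(-0.5 * (z * z / r + np.log(2 * np.pi * r))))

rng = np.random.default_rng(0)
n = 200
sig = 0.97 ** np.arange(n)                           # decaying rc response
noise = rng.normal(0.0, 0.1, n)

llr_signal = loglik_kalman(sig + noise) - loglik_noise(sig + noise)
llr_noise = loglik_kalman(noise) - loglik_noise(noise)
# thresholding this ratio over an ensemble of runs traces out the roc curve
```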
kl-divergence for evaluating sequential monte-carlo - demonstrated below by three pf's in action during the first second of the jazwinski problem - start-up and convergence. these are 100 hz dist-curves - each dist-curve is a kernel-density-estimate combining hundreds of monte-carlo samples, the fundamental particles - green dist-curves for truth, blue dist-curves for pf. state-estimates are two red curves on the x,t-plane beneath the dist-curves.
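a sketch of that evaluation step: build kernel-density estimates from truth samples and pf samples, then integrate kl(truth || pf) on a grid. the sample clouds here are synthetic stand-ins, not the jazwinski runs:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
truth = rng.normal(2.0, 0.5, 500)        # monte-carlo truth particles
pf = rng.normal(2.1, 0.6, 500)           # particle-filter particles

p, q = gaussian_kde(truth), gaussian_kde(pf)
grid = np.linspace(0.0, 4.0, 400)
dx = grid[1] - grid[0]
pg = np.clip(p(grid), 1e-12, None)       # clip to avoid log(0) in the tails
qg = np.clip(q(grid), 1e-12, None)
kl = float(np.sum(pg * np.log(pg / qg)) * dx)
# kl stays near zero while the pf cloud matches truth, and grows as the
# two dist-curves separate
```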
190105 ukf adaptive jazwinski switched to square-root filtering: qr-factorization, cholesky-factor update and downdate. the improvement in numerical stability and scaled sampling is clear. still a question around the scalar-obs case and the obs cholesky-factor and gain. with an ad-hoc stabilizer on the obs cholesky-factor it's working well overall.
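the cholesky-factor update/downdate step can be sketched with the standard textbook rank-1 algorithm - given lower-triangular L with L @ L.T = P, produce the factor of P + v v.T (sign=+1) or P - v v.T (sign=-1) without refactorizing. this is a generic construction, not this library's code:

```python
import numpy as np

def chol_update(L, v, sign=1.0):
    """rank-1 cholesky update (sign=+1) or downdate (sign=-1)."""
    L, v = L.copy(), v.copy()
    n = len(v)
    for k in range(n):
        r = np.sqrt(L[k, k] ** 2 + sign * v[k] ** 2)
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        L[k+1:, k] = (L[k+1:, k] + sign * s * v[k+1:]) / c
        v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L

P = np.array([[4.0, 2.0], [2.0, 3.0]])
L = np.linalg.cholesky(P)
v = np.array([1.0, 0.5])
Lu = chol_update(L, v, +1.0)             # factor of P + v v.T
Ld = chol_update(Lu, v, -1.0)            # downdate recovers the factor of P
```

a downdate is only well-defined when P - v v.T stays positive definite, which is where the numerical-stability concerns around the obs cholesky-factor come from.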
181230 pf adaptive jazwinski. parameter-roughening.
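a sketch of the parameter-roughening step - gordon-style jitter added to the parameter particles after resampling, so they don't collapse into duplicates. the tuning constant K is an illustrative choice:

```python
import numpy as np

def roughen(params, K=0.2, rng=None):
    """jitter an (N, d) array of parameter particles after resampling."""
    rng = rng or np.random.default_rng()
    N, d = params.shape
    E = params.max(axis=0) - params.min(axis=0)   # per-parameter spread
    sigma = K * E * N ** (-1.0 / d)               # jitter shrinks as N grows
    return params + rng.normal(size=params.shape) * sigma

rng = np.random.default_rng(2)
cloud = np.repeat(rng.normal(0.5, 0.1, (10, 2)), 10, axis=0)  # duplicates
roughened = roughen(cloud, rng=rng)
# duplicate particles become distinct again, keeping parameter diversity alive
```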
181226 ukf adaptive jazwinski. sample-and-propagate tuning.
180910 ekf adaptive jazwinski. ud-factorized square-root filtering required for numerical stability.
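the ud square-root form behind that entry factors the covariance as P = U D U.T, with U unit upper triangular and D diagonal (bierman/thornton style). a plain textbook construction of the factorization, not the library's implementation:

```python
import numpy as np

def ud_factor(P):
    """factor P = U @ diag(D) @ U.T, U unit upper triangular."""
    n = P.shape[0]
    U, D, P = np.eye(n), np.zeros(n), P.copy()
    for j in range(n - 1, -1, -1):      # peel off columns right-to-left
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        P[:j, :j] -= np.outer(U[:j, j], U[:j, j]) * D[j]
    return U, D

P = np.array([[2.0, 1.0], [1.0, 3.0]])
U, D = ud_factor(P)
# propagating U and D instead of P avoids the loss of symmetry and
# positive-definiteness that destabilizes a naive ekf covariance update
```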