Fringes

Phase shifting algorithms for encoding and decoding sinusoidal fringe patterns.

Description

Background
Many applications, such as fringe projection [11] or deflectometry [1], require the ability to encode positional data. To do this, fringe patterns are used to encode the position on the screen/projector (in pixel coordinates) at which the camera pixels were looking during acquisition.
--- FIGURE coding ---
- Encoding
  - Spatial Modulation: The x- resp. y-coordinate ξ of the screen/projector is normalized into the range [0, 1) by dividing through the maximum coordinate L and used to modulate the luminance in a sinusoidal fringe pattern I with offset A, amplitude B and spatial frequency v.
  - Temporal Modulation: The pattern is then shifted N times with an equidistant phase shift of 2πf/N radians each. An additional phase offset φ₀ may be set, e.g. to let the fringe patterns start with a gray value of zero.
- Decoding
  - Temporal Demodulation: From these shifts, the phase map φ is determined [13]. Due to the trigonometric functions used, the global phase Φ is wrapped into the interval [0, 2π] with v periods: φ ≡ Φ mod 2π.
  - Spatial Demodulation / Phase Unwrapping: If only one set with spatial frequency v ≤ 1 is used, no unwrapping is required because one period covers the complete coding range. Hence, the coordinates ξ are computed directly by scaling: ξ = φ / (2π) * L / v. This constitutes the registration, which is a mapping in the same pixel grid as the camera sensor and contains the information where each camera pixel, i.e. each camera sight ray, was looking during the fringe pattern acquisition. Note that in contrast to binary coding schemes, e.g. Gray code, the coordinates are obtained with sub-pixel precision.
    - Temporal Phase Unwrapping (TPU): If multiple sets with different spatial frequencies v are used and the unambiguous measurement range is larger than the coding range, UMR ≥ L, the ambiguity of the phase map is resolved by generalized multi-frequency temporal phase unwrapping [14].
    - Spatial Phase Unwrapping (SPU): However, if only one set with v > 1 is used, or multiple sets but UMR < L, the ambiguous phase φ is unwrapped by analyzing the neighbouring phase values [15], [16]. This only yields a relative phase map; therefore, absolute positions are unknown.
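The temporal demodulation described above can be sketched in a few lines of NumPy. This is a minimal illustration of the common N-step least-squares formulation (assuming equidistant shifts), not necessarily the library's exact implementation:

```python
import numpy as np

def demodulate(I, f=1):
    """Temporal demodulation of an N-step phase-shifted sequence I of shape (N, ...)."""
    N = I.shape[0]
    t = 2 * np.pi * f * np.arange(N) / N       # equidistant phase shifts
    s = np.tensordot(np.sin(t), I, axes=1)     # sine-weighted sum over the shifts
    c = np.tensordot(np.cos(t), I, axes=1)     # cosine-weighted sum over the shifts
    A = I.mean(axis=0)                         # offset (brightness)
    B = 2 / N * np.hypot(s, c)                 # modulation (amplitude)
    phi = np.arctan2(s, c) % (2 * np.pi)       # wrapped phase in [0, 2*pi)
    return A, B, phi

# synthesize N = 4 shifts of a fringe with known global phase Phi
N, Phi = 4, 1.234
t = 2 * np.pi * np.arange(N) / N
I = 100 + 50 * np.cos(t - Phi)                 # offset A = 100, amplitude B = 50
A, B, phi = demodulate(I)                      # recovers A = 100, B = 50, phi = 1.234
```

For image sequences, I would have shape (N, Y, X) and the same formulas apply per pixel.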
Features
- Generalized Temporal Phase Unwrapping (GTPU)
- Uncertainty Propagation
- Computation of Residuals
- Deinterlacing
- Multiplexing
- Filtering Phase Maps
- Remapping
Contents
- Installation
- Usage
- Graphical User Interface
- Attributes
- Methods
- Optimal Coding Strategy
- Troubleshooting
- License
- Project Status
Installation
You can install fringes directly from PyPI via pip:
pip install fringes
Usage
You instantiate and deploy the Fringes class:
import fringes as frng
f = frng.Fringes()  # instantiate class
For creating the fringe pattern sequence I, use the method encode(). It will return a NumPy array in videoshape (frames, height, width, color channels).
I = f.encode() # encode fringe patterns
For analyzing (recorded) fringe patterns, use the method decode(). It will return a namedtuple containing the NumPy arrays brightness A, modulation B and the coordinates ξ, all in videoshape.
A, B, xi = f.decode(I) # decode fringe patterns
All parameters are accessible via the respective attributes of the Fringes class.
f.X = 1920 # set width of the fringe patterns
f.Y = 1080 # set height of the fringe patterns
f.K = 2 # set number of sets
f.N = 4 # set number of shifts
f.v = [9, 10] # set spatial frequencies
f.T # get the number of frames
A glossary of them is obtained by the class attribute doc.
frng.Fringes.doc # get glossary
You can change the logging level of a Fringes instance. Changing it to 'DEBUG' gives you verbose feedback on which parameters are changed and how long functions take to execute.
f.logger.setLevel("DEBUG")
Graphical User Interface
Do you need a GUI? Fringes has a sister project called Fringes GUI:
https://github.com/genicam/fringes_gui
Attributes
All parameters are parsed when set, so usually several input formats are accepted, e.g. bool, int, float, str for scalars and additionally list, tuple, ndarray for arrays.
Note that parameters might have circular dependencies, which are resolved automatically; hence, dependent parameters might change as well. The attributes in rectangular boxes are read-only, i.e. they are inferred from the others. Only the ones in white boxes will never influence others.
Parameters and their interdependencies.
Coordinate System
The following coordinate systems can be used by setting grid to:
- 'image': The top left corner pixel of the grid is the origin (0, 0) and positive directions are right- resp. downwards.
- 'Cartesian': The center of the grid is the origin (0, 0) and positive directions are right- resp. upwards.
- 'polar': The center of the grid is the origin (0, 0) and positive directions are clockwise resp. outwards.
- 'log-polar': The center of the grid is the origin (0, 0) and positive directions are clockwise resp. outwards.
D denotes the number of directions to be encoded. If D ≡ 1, the parameter axis is used to define along which axis of the coordinate system (index 0 or 1) the fringes are shifted. angle can be used to tilt the coordinate system. The origin stays the same.
Video Shape
Standardized shape (T, Y, X, C) of the fringe pattern sequence, with
- T: number of frames
- Y: height
- X: width
- C: number of color channels

T depends on the parameters H, D, K, N and the multiplexing methods:
If all N are identical, then T = H * D * K * N with N as a scalar;
else, T = H * ∑ Ni with N as an array.
If a multiplexing method is activated, T reduces further.

The length L is the maximum of X and Y.
C depends on the coloring and multiplexing methods.
size is the product of shape.
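As a quick arithmetic check of the frame count, using hypothetical parameter values:

```python
# hypothetical parameter values: hues, directions, sets, shifts
H, D, K = 1, 2, 2
N = 4                              # identical number of shifts for every set
T = H * D * K * N                  # -> 16 frames

# with differing shifts per set, T = H * sum(Ni) over all D * K sets:
Ni = [4, 3, 4, 3]                  # D * K = 4 sets
T_var = H * sum(Ni)                # -> 14 frames
```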
Set
Each set consists of the following parameters:
- N: number of shifts
- l: wavelength [px]
- v: spatial frequency, i.e. number of periods
- f: temporal frequency, i.e. number of periods to shift over

Each is an array with shape (number of directions D, number of sets K).
For example, if N.shape ≡ (2, 3), it means that we encode D = 2 directions with K = 3 sets each.
Changing D or K directly changes the shape of all set parameters.
When setting a set parameter with a new shape (D', K'), D and K are updated as well as the shape of the other set parameters.

Per direction, at least one set with N ≥ 3 is necessary to solve for the three unknowns brightness A, modulation B and coordinates ξ.

l and v are related by l = L / v. When L changes, v is kept constant and only l is changed.

Usually f = 1, and f is essentially only changed if frequency division multiplexing FDM is activated.

reverse is a boolean which reverses the direction of the shifts (by multiplying f with -1).

o denotes the phase offset φ₀, which can be used e.g. to let the fringe patterns start (in the origin) with a gray value of zero.

UMR denotes the unambiguous measurement range. The coding is only unique within the interval [0, UMR]; after that it repeats itself. The UMR is determined from l and v:
- If l ∈ ℕ, UMR = lcm(li), with lcm being the least common multiple.
- Else, if v ∈ ℕ, UMR = L / gcd(vi), with gcd being the greatest common divisor.
- Else, if l ∧ v ∈ ℚ, lcm resp. gcd are extended to rational numbers.
- Else, if l ∧ v ∈ ℝ \ ℚ, l and v are approximated by rational numbers with a fixed length of decimal digits.
Coloring and Averaging
The fringe pattern sequence I can be colorized by setting the hue h to any RGB color tuple in the interval [0, 255]. However, black (0, 0, 0) is not allowed. h must have the shape (H, 3):
H is the number of hues and can be set directly; 3 is the length of the RGB color tuple.

The hues h can also be set by assigning any combination of the following characters as a string:
- 'r': red
- 'g': green
- 'b': blue
- 'c': cyan
- 'm': magenta
- 'y': yellow
- 'w': white

C is the number of color channels required for either the set of hues h or wavelength division multiplexing. For example, if all hues are monochromatic, i.e. the RGB values are identical for each hue, C equals 1, else 3.

Repeating hues will be fused by averaging them before decoding. M is the number of averaged intensity samples and can be set directly.
Multiplexing
The following multiplexing methods can be activated by setting them to True:
- SDM: Spatial Division Multiplexing [2]
  This results in crossed fringe patterns. The amplitude B is halved.
  It can only be activated if we have two directions, D ≡ 2.
  The number of frames T is reduced by a factor of 2.
- WDM: Wavelength Division Multiplexing [3]
  All shifts N must equal 3. Then, the shifts are multiplexed into the color channels, resulting in an RGB fringe pattern.
  The number of frames T is reduced by a factor of 3.
- FDM: Frequency Division Multiplexing [4], [5]
  Here, the directions D and the sets K are multiplexed. Hence, the amplitude B is reduced by a factor of D * K.
  It can only be activated if D ∨ K > 1, i.e. D * K > 1.
  This results in crossed fringe patterns if D ≡ 2.
  Each set per direction receives an individual temporal frequency f, which is used in temporal demodulation to distinguish the individual sets.
  A minimal number of shifts Nmin ≥ ⌈2 * fmax + 1⌉ is required to satisfy the sampling theorem, and N is updated automatically if necessary.
  If one wants a static pattern, i.e. one that remains congruent when shifted, set static to True.

SDM and WDM can be used together [6] (reducing T by a factor of 2 * 3 = 6), FDM with neither.

By default, the aforementioned multiplexing methods are deactivated, so we then only have TDM: Time Division Multiplexing.
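The frame-count savings can be tallied for a hypothetical configuration (N = 3 so that WDM is applicable):

```python
# hypothetical parameters: hues, directions, sets, shifts (N = 3 allows WDM)
H, D, K, N = 1, 2, 2, 3
T = H * D * K * N                  # 12 frames with plain time division multiplexing

T_sdm = T // 2                     # SDM: both directions in one crossed pattern
T_wdm = T // 3                     # WDM: the 3 shifts packed into RGB channels
T_both = T // 6                    # SDM + WDM combined [6]
```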
Data Type
dtype denotes the data type of the fringe pattern sequence I.
Possible values are:
- 'bool'
- 'uint8' (the default)
- 'uint16'
- 'float32'
- 'float64'

The total number of bytes nbytes consumed by the fringe pattern sequence as well as its maximum gray value Imax are derived directly from it:
Imax = 1 for float and bool, and Imax = 2^Q - 1 for unsigned integers with Q bits.
Imax in turn limits the offset A and the amplitude B.
The fringe visibility (also called fringe contrast) is V = B / A with V ∈ [0, 1].

The quantization step size q is also derived from dtype:
q = 1 for bool and Q-bit unsigned integers, and for float it is the corresponding resolution.
The standard deviation of the quantization noise is QN = q / √12.
Unwrapping
- PU denotes the phase unwrapping method and is either 'none', 'temporal', 'spatial' or 'SSB'. See spatial demodulation for more details.
- mode denotes the mode used for temporal phase unwrapping. Choose either 'fast' (the default) or 'precise'.
- Vmin denotes the minimal fringe visibility (fringe contrast) required for the measurement to be valid and is in the interval [0, 1]. During decoding, pixels with less are discarded, which can speed up the computation.
- verbose can be set to True to also receive the wrapped phase map φ, the fringe orders k and the residuals r from decoding.
- SSB denotes Single Sideband Demodulation [17] and is deployed if K ≡ H ≡ N ≡ 1, i.e. T ≡ 1, and the coordinate system is either 'image' or 'Cartesian'.
Quality Metrics
eta denotes the coding efficiency L / UMR. It makes no sense to choose UMR much larger than L, because then a significant part of the coding range is not used.

u denotes the minimum possible uncertainty of the measurement in pixels. It is based on the phase noise model from [7], propagated through the temporal phase unwrapping and converted from phase to pixel units. It is influenced by the fringe parameters
- M: number of averaged intensity samples
- N: number of phase shifts
- l: wavelength of the fringes
- B: measured amplitude

and the hardware-specific noise sources of the measurement [8], [9]:
- PN: photon noise of the light itself
- DN: dark noise of the used camera
- QN: quantization noise of the light source or camera

The maximum possible dynamic range of the measurement is DR = UMR / u. It describes how many points can be discriminated within the interval [0, UMR]. It remains constant if L resp. l are scaled (the scaling factor cancels out).
Methods
- load(fname): Load a parameter set from the file fname to a Fringes instance.
- save(fname): Save the parameter set of the current Fringes instance to the file fname. If fname is not provided, the default is params.yaml within the package's directory 'fringes'.
- reset(): Reset the parameter set of the current Fringes instance to the default values.
- auto(T): Automatically set the optimal parameters based on the argument T (number of frames). This also takes into account the minimum resolvable wavelength lmin and the length of the fringe patterns L.
- setMTF(B): Compute the normalized modulation transfer function at the spatial frequencies v and use the result to set the optimal lmin. B is the modulation from decoding. For more details, see Optimal Coding Strategy.
- coordinates(): Generate the coordinate matrices of the defined coordinate system.
- encode(frames): Encode the fringe pattern sequence I. The frames It can be encoded individually by passing the frame indices frames, either as an integer or a tuple. The default is None, in which case all frames are encoded. To receive the frames iteratively (i.e. in a lazy manner), simply iterate over the instance.
- decode(I, verbose): Decode the fringe pattern sequence I. If either the argument verbose or the attribute with the same name is True, additional information is computed and returned: phase maps φ, residuals r and fringe orders k. If the argument denoise is True, the unwrapped phase map will be smoothed using an edge-preserving bilateral filter. If the argument despike is True, single pixel outliers in the unwrapped phase map will be replaced by their local neighbourhood.
- remap(reg, mod): Map the decoded coordinates reg, i.e. ξ (having sub-pixel accuracy), from the camera grid to (integer) positions on the pattern/screen grid, weighted by the modulation mod, i.e. B. The default for mod is None, in which case all weights are assumed to equal one. This yields a grid representing the screen (light source), with the pixel values being a relative measure of how much a screen (light source) pixel contributed to the exposure of the camera sensor.
- deinterlace(I): Deinterlace a fringe pattern sequence I acquired with a line scan camera, where each frame has been displayed and captured while the object was moved by one pixel.

The next method is a class method:
- unwrap(phi): Unwrap the phase map phi, i.e. φ, spatially.

The next methods are package methods:
- vshape(I): Transform video data of arbitrary shape and dimensionality into the standardized shape (T, Y, X, C), where T is the number of frames, Y the height, X the width and C the number of color channels. This ensures that the array becomes 4-dimensional and that the size of the last dimension, i.e. the number of color channels, is C ∈ {1, 3, 4}. To do this, leading dimensions may be flattened.
- curvature(registration): Return a curvature map.
- relief(curvature): Return a local height map, obtained by local integration via an inverse Laplace filter.
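The normalization that vshape performs can be sketched as follows. This is a simplified re-implementation for illustration only; in particular, the ambiguous case of a 3-D array whose last dimension is 1, 3 or 4 is resolved here in favour of a single color frame, which may differ from the library's actual heuristics:

```python
import numpy as np

def to_videoshape(I):
    """Sketch of vshape-like normalization to 4-D videoshape (T, Y, X, C)."""
    I = np.asarray(I)
    if I.ndim == 2:                       # single gray frame (Y, X)
        I = I[None, :, :, None]
    elif I.ndim == 3:
        if I.shape[-1] in (1, 3, 4):      # assume single color frame (Y, X, C)
            I = I[None, ...]
        else:                             # assume gray sequence (T, Y, X)
            I = I[..., None]
    elif I.ndim > 4:                      # flatten leading dimensions into T
        I = I.reshape(-1, *I.shape[-3:])
    return I
```

For example, a stack of shape (2, 5, 10, 20, 3) is flattened to (10, 10, 20, 3).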
Optimal Coding Strategy
Intuitively, more sets K as well as more shifts N per set reduce the uncertainty u after decoding.
A minimum of 3 shifts is needed to solve for the 3 unknowns brightness A, modulation B and coordinates ξ.
Any additional 2 shifts compensate for one harmonic of the recorded fringe pattern.
Therefore, higher accuracy can be achieved using more shifts N, but the time required to capture them sets a practical upper limit to the feasible number of shifts.

Generally, shorter wavelengths l (or equivalently more periods v) reduce the uncertainty u, but the resolution of the camera and the display must resolve the fringe pattern spatially. Hence, the used hardware imposes a lower bound for the wavelength (or an upper bound for the number of periods). Also, small wavelengths might result in a smaller unambiguous measurement range UMR.
If two or more sets K are used and their wavelengths l resp. numbers of periods v are relative primes, the unambiguous measurement range can be increased many times. As a consequence, one can use much smaller wavelengths l (larger numbers of periods v). However, it must be ensured that the unambiguous measurement range is always equal to or larger than both the width X and the height Y. Else, temporal phase unwrapping will yield wrong results and spatial phase unwrapping is used instead. Be aware that in the latter case only a relative phase map is obtained, which lacks the information of where exactly the camera sight rays were looking during acquisition.
To simplify finding and setting the optimal parameters, the following methods can be used:
- setMTF(): The optimal vmax is determined automatically [18] by measuring the modulation transfer function MTF. To do so, a sequence of exponentially increasing v is acquired:
  1. Set v to 'exponential'.
  2. Encode, acquire and decode the fringe pattern sequence.
  3. Call the function setMTF(B) with the argument B from decoding.
- v can be set to 'auto'. This automatically determines the optimal integer set of v based on the maximal resolvable spatial frequency vmax.
- Equivalently, l can also be set to 'auto'. This automatically determines the optimal integer set of l based on the minimal resolvable wavelength lmin = L / vmax.
- T can be set directly, based on the desired acquisition time. The optimal K, N and the multiplexing methods will be determined automatically.
Alternatively, simply use the function auto() to automatically set the optimal v, T and multiplexing methods.
Troubleshooting
- Encoding/Decoding takes a long time
  This is probably related to the just-in-time compiler Numba, which is used for computationally expensive functions: during the first execution of any function decorated with it, an initial compilation is performed. This can take several tens of seconds up to single-digit minutes, depending on your CPU. However, for any subsequent execution, the compiled code is cached and the function runs much faster, approaching the speed of code written in C.
- My decoded coordinates show lots of noise
  Try using more sets K and/or shifts N, and adjust the used wavelengths l resp. number of periods v. Also, make sure the exposure of your camera is adjusted so that the fringe patterns show up with maximum contrast. Try to avoid under- and overexposure during acquisition.
- My decoded coordinates show systematic offsets
  First, ensure that the correct frame was captured while acquiring the fringe pattern sequence. If the timings are not set correctly, the sequence may be a frame off. Secondly, this might occur if either the camera or the display has a gamma value very different from 1. Typical screens have a gamma value of 2.2; therefore, compensate by setting the inverse value gamma⁻¹ = 1 / 2.2 ≈ 0.45 to the gamma attribute of the Fringes instance. Alternatively, change the gamma value of the light source or camera directly. You might also use more shifts N to compensate for the dominant harmonics of the gamma nonlinearities.
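The gamma pre-compensation can be verified numerically: raising the pattern intensities to 1/γ before display cancels a screen gamma of γ, so the emitted fringe stays sinusoidal:

```python
import numpy as np

gamma = 2.2                        # typical screen gamma
I = np.linspace(0, 1, 5)           # intended normalized fringe intensities
I_pre = I ** (1 / gamma)           # pre-compensated values sent to the screen
I_emitted = I_pre ** gamma         # what the screen actually emits
# I_emitted matches the intended linear intensities I
```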
References
[1] [Burke 2022]
[2] [Park 2008]?
[3] [Huang 1999]
[4] [Liu 2014], [Liu 2010], [Park 2008]?
[5] [Kludt 2018]
[6] [Trumper 2016]
[7] [Surrel 1997]
[8] [EMVA 1288]
[9] [Bothe 2008]
[10] [Fischer]
[11] [Burke 2002]
[13] [Burke 2012]
[14] [Kludt 2024]
[15] [Herráez 2002]
[16] [Lei 2015]
[17] [Takeda]
[18] [Bothe 2008]
[19] Inverse Laplace Filter
License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License