Filters

Filters transform data and have at least one input and one output.

Point-based transformation

Binarization

class binarize

Binarizes an image.

"threshold": float

Any values above the threshold are set to one, all others to zero.

Clipping

class clip

Clip the input to the set minimum and maximum values.

"min": float

Minimum value, all values lower than min are set to min.

"max": float

Maximum value, all values higher than max are set to max.

Masking

class mask

Mask the circular outer region by setting values to zero.

Arithmetic expressions

class calculate

Calculate an arithmetic expression. If you choose one dimension, you have access to the value stored in the input buffer via the letter v in expression, to the index of v via the letter x and to the size of the buffer via size. If you choose two dimensions, you can access width and height instead of size and, on top of x, also y for the vertical coordinate and the linearized index (computed as y * width + x). Please be aware that v is a floating point number while the rest of the variables are integers.

One dimension is useful if you have multidimensional data and want to address only one dimension. Say the input is two-dimensional and 256 pixels wide and you want to fill the x-coordinate with x for all respective y-coordinates (a gradient in x-direction). Then you can write expression="x % 256". Another example is the sinc function, which you would compute as expression="sin(v) / x" for 1D input. For more complex math or other operations, please consider using OpenCL.
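
As a pipeline sketch of the gradient example above (assuming the dummy-data generator and the write task documented outside this section; sizes and the output file name are placeholders):

ufo-launch dummy-data width=256 height=256 number=1 ! calculate expression="x % 256" ! write filename=gradient.tif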

"expression": string

Arithmetic expression with math functions supported by OpenCL.

"dimensions": uint

Number of dimensions in [1, 2].

Statistics

class measure

Measure basic image properties.

"metric": string

Metric, one of min, max, sum, mean, var, std, skew or kurtosis.

"axis": int

Axis along which to measure (-1 means all axes).

Generic OpenCL

class opencl

Load an arbitrary OpenCL kernel from filename or source and execute it on each input. The kernel must accept as many global float array parameters as there are inputs connected to the filter, plus one additional parameter for the output. For example, to compute the difference between two images, the kernel would look like:

kernel void difference (global float *a, global float *b, global float *c)
{
    size_t idx = get_global_id (1) * get_global_size (0) + get_global_id (0);
    c[idx] = a[idx] - b[idx];
}

and could be used like so if defined in a file named diff.cl:

$ ufo-launch [read, read] ! opencl kernel=difference filename=diff.cl ! null

If filename is not set, a default kernel file (opencl.cl) is loaded. See OpenCL default kernels for a list of kernel names defined in that file.

"filename": string

Filename with kernel sources to load.

"source": string

String with OpenCL kernel code.

"kernel": string

Name of the kernel that this filter is associated with.

"options": string

OpenCL build options.

"dimensions": uint

Number of dimensions the kernel works on. Must be in [1, 3].

"halve-width": boolean

Use half of the input width for the calculation if the input is complex, i.e. account for x[0] = Re(z[0]), x[1] = Im(z[0]), …

Spatial transformation

Transposition

class transpose

Transpose images from (x, y) to (y, x).

Rotation

class rotate

Rotates images clockwise by an angle around a center (x, y). When reshape is True, the rotated image is not cropped, i.e. the output image size can be larger than the input size. Moreover, this mode makes sure that the original coordinates of the input are all contained in the output, so that it is easier to see the rotation in the output. Try e.g. rotation with center equal to \((0, 0)\) and angle \(\pi / 2\).
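
A sketch of that experiment on the command line (read and write are assumed from outside this section; file names are placeholders, the angle approximates \(\pi / 2\) and the center is left at its default here):

ufo-launch read path=input.tif ! rotate angle=1.5708 reshape=True ! write filename=rotated.tif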

"angle": float

Rotation angle in radians.

"reshape": boolean

Reshape the result to encompass the complete input image and input indices.

"center": GValueArray

Center of rotation (x, y)

"addressing-mode": enum

Addressing mode specifies the behavior for pixels falling outside the original image. See OpenCL sampler_t documentation for more information.

"interpolation": enum

Specifies interpolation when a computed pixel coordinate falls between pixels, can be nearest or linear.

Flipping

class flip

Flips images vertically or horizontally.

"direction": enum

Can be either horizontal or vertical and denotes the direction along which the image is flipped.

Binning

class bin

Bin a square of pixels by summing their values.

"size": uint

Number of pixels in one direction to bin to a single pixel value.

Rescaling

class rescale

Rescale input data by a fixed factor.

"factor": float

Fixed factor for scaling the input in both directions.

"x-factor": float

Fixed factor for scaling the input width.

"y-factor": float

Fixed factor for scaling the input height.

"width": uint

Fixed width, disabling scalar rescaling.

"height": uint

Fixed height, disabling scalar rescaling.

"interpolation": enum

Interpolation method used for rescaling which can be either nearest or linear.

Padding

class pad

Pad an image to some extent with specific behavior for pixels falling outside the original image.

"x": int

Horizontal coordinate in the output image which will contain the first input column.

"y": int

Vertical coordinate in the output image which will contain the first input row.

"width": uint

Width of the padded image.

"height": uint

Height of the padded image.

"addressing-mode": enum

Addressing mode specifies the behavior for pixels falling outside the original image. See OpenCL sampler_t documentation for more information.

Cropping

class crop

Crop a region of interest from two-dimensional input. If the region is (partially) outside the input, only accessible data will be copied.

"x": uint

Horizontal coordinate from where to start the ROI.

"y": uint

Vertical coordinate from where to start the ROI.

"width": uint

Width of the region of interest.

"height": uint

Height of the region of interest.

"from-center": boolean

Start cropping from the center outwards.

Cutting

class cut

Cuts a region from the input and merges the two halves together. In a way, it is the opposite of crop.

"width": uint

Width of the region to cut out.

Tiling

class tile

Cuts the input into multiple tiles. The stream contains the tiles in a zig-zag pattern, i.e. the first tile starts at the top left corner of the input, the tiles continue along that row until its end, then proceed with the first tile of the next row, and so on until the final tile in the lower right corner.

"width": uint

Width of a tile which must be a divisor of the input width. If this is not changed, the full width will be used.

"height": uint

Height of a tile which must be a divisor of the input height. If this is not changed, the full height will be used.

Swapping quadrants

class swap-quadrants

Cuts the input into four quadrants and swaps the lower right with the upper left and the lower left with the upper right quadrant.

Polar transformation

class polar-coordinates

Transformation between polar and cartesian coordinate systems.

When transforming from cartesian to polar coordinates the origin is in the image center (width / 2, height / 2). When transforming from polar to cartesian coordinates the origin is in the image corner (0, 0).

"width": uint

Final width after transformation.

"height": uint

Final height after transformation.

"direction": string

Conversion direction, either polar_to_cartesian or cartesian_to_polar.

Stitching

class stitch

Stitches two images horizontally based on their given relative shift, which indicates how much the second image is shifted with respect to the first one, i.e. there is an overlapping region of width \(first\_width - shift\). The first image is inserted into the stitched image from its left edge and the second image is inserted after the overlapping region. If shift is negative, the two images are swapped and stitched as described above with shift made positive.

If you are stitching a 360-degree off-centered tomographic data set and know the axis of rotation, shift can be computed as \(2 \cdot axis - second\_width\) for the case that the axis of rotation is greater than half of the first image width. If it is less, the shift is \(first\_width - 2 \cdot axis\). Moreover, you need to horizontally flip one of the images because this task expects images which can be stitched directly, without additional transformations.

Stitching requires two inputs. If you want to stitch a 360-degree off-centered tomographic data set you can use:

ufo-launch [read path=projections_left/, read path=projections_right/ ! flip direction=horizontal] ! stitch shift=N ! write filename=foo.tif

"shift": int

How much the second image is shifted with respect to the first one. For example, shift 0 means that both images overlap perfectly and the stitching doesn't actually broaden the image. A shift corresponding to the image width makes for a stitched image with twice the width of the respective images (if they have equal width).

"adjust-mean": boolean

Compute the mean of the overlapping region in the two images and adjust the second image to match the mean of the first one.

"blend": boolean

Linearly interpolate between the two images in the overlapping region.

Multi-stream

Interpolation

class interpolate

Interpolates incoming data from two compatible streams, i.e. the task computes \((1 - \alpha) s_1 + \alpha s_2\) where \(s_1\) and \(s_2\) are the two input streams and \(\alpha\) is a blend factor. \(\alpha\) is \(i / (n - 1)\) for \(n > 1\), where \(n\) is the number property and \(i\) the current iteration.
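
A hypothetical sketch that blends two images into a sequence of ten frames (file names and the write format string are placeholders; read and write are assumed from outside this section):

ufo-launch [read path=first.tif, read path=second.tif] ! interpolate number=10 ! write filename=blend-%02i.tif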

"number": uint

Total length of the output stream.

class interpolate-stream

Interpolates between elements from an incoming stream.

"number": uint

Total length of the output stream.

Subtract

class subtract

Subtract data items of the second from the first stream.

Correlate

class correlate-stacks

Reads two data streams; the first must provide a 3D stack of images that is used to correlate individual 2D images from the second data stream. The number property must contain the expected number of items in the second stream.

"number": uint

Number of data items in the second data stream.

Filters

Median

class median-filter

Filters input with a simple median.

"size": uint

Odd-numbered size of the neighbouring window.

Edge detection

class detect-edge

Detect edges by computing the power gradient image using different edge filters.

"filter": enum

Edge filter (or operator), one of sobel, laplace or prewitt. By default, the sobel operator is used.

"addressing-mode": enum

Addressing mode specifies the behavior for pixels falling outside the original image. See OpenCL sampler_t documentation for more information.

Gaussian blur

class blur

Blur image with a gaussian kernel.

"size": uint

Size of the kernel.

"sigma": float

Sigma of the kernel.

Gradient

class gradient

Compute gradient.

"direction": enum

Direction of the gradient, can be either horizontal, vertical, both or both_abs.

"finite-difference-type": enum

Type of the finite difference, can be either forward, backward or central.

"addressing-mode": enum

Addressing mode specifies the behavior for pixels falling outside the original image. See OpenCL sampler_t documentation for more information.

Non-local-means denoising

class non-local-means

Reduce noise using Buades’ non-local means algorithm.
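
A minimal denoising sketch (file names are placeholders and the radii are illustrative; h=0 lets the task estimate the noise standard deviation as described below):

ufo-launch read path=noisy.tif ! non-local-means search-radius=10 patch-radius=3 h=0 ! write filename=denoised.tif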

"search-radius": uint

Radius for similarity search.

"patch-radius": uint

Radius of patches.

"h": float

Smoothing control parameter, should be around noise standard deviation or slightly less. Higher h results in a smoother image but with blurred features. If it is 0, estimate noise standard deviation and use it as the parameter value.

"sigma": float

Noise standard deviation, which improves the weights computation. If it is zero, it is not estimated automatically, as opposed to h; estimate-sigma has to be specified in order to override the sigma value.

"window": boolean

Apply a Gaussian profile with \(\sigma = \frac{P}{2}\), where \(P\) is the patch-radius parameter, to the weight computation, which decreases the influence of pixels towards the corners of the patches.

"fast": boolean

Use a fast version of the algorithm described in 1. The only difference in the result from the classical algorithm is that there is no Gaussian profile used and from the nature of the fast algorithm, floating point precision errors might occur for large images.

"estimate-sigma": boolean

Estimate sigma based on 2, which overrides the sigma parameter value. Only the first image in a sequence is used for estimation and the estimated sigma is re-used for every subsequent image.

"addressing-mode": enum

Addressing mode specifies the behavior for pixels falling outside the original image. See OpenCL sampler_t documentation for more information.

1

J. Darbon, A. Cunha, T.F. Chan, S. Osher, and G.J. Jensen, Fast nonlocal filtering applied to electron cryomicroscopy in 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008, pp. 1331-1334. DOI:10.1109/ISBI.2008.4541250

2

J. Immerkaer, Fast noise variance estimation in Computer vision and image understanding 64.2 (1996): 300-302. DOI:10.1006/cviu.1996.0060

Finding large spots

class find-large-spots

Find large spots with extreme values in an image. First, pixels with values greater than spot-threshold are added to the mask, then connected pixels whose absolute difference from the originally detected ones is greater than grow-threshold are added to the mask. In the end, holes are removed from the mask as well.

"spot-threshold": float

Pixels with values greater than this threshold are added to the mask.

"grow-threshold": float

Pixels connected to the ones found by spot-threshold with absolute difference greater than this threshold are added to the mask. If the value is 0, it is automatically set to full width at tenth maximum of the estimated noise standard deviation.

"addressing-mode": enum

Addressing mode specifies the behavior for pixels falling outside the original image. See OpenCL sampler_t documentation for more information. This parameter is used only for automatic noise standard deviation estimation.

Horizontal interpolation

class horizontal-interpolate

Interpolate masked values in rows of an image. For every pixel equal to one in the mask, find the closest pixels to the left and right where the mask is zero and linearly interpolate the value in the current pixel based on the found left and right values. If use-one-sided-gradient is TRUE and the mask extends to the left or right border of the image while on the other side there are at least two non-masked pixels \(x_1\) and \(x_2\), compute the value in the current pixel \(x\) as \(f(x) = f(x_2) + (x - x_2) \cdot (f(x_2) - f(x_1))\) (for the case that the mask extends to the right border; the left case is analogous). If use-one-sided-gradient is FALSE, or there is only one valid pixel at one of the borders and all the others are masked, use that pixel's value for all the remaining ones.

"use-one-sided-gradient": boolean

If TRUE, use two good pixels on one side to compute gradient and fill the masked values accordingly (for the case the mask spans to the border). If FALSE, just copy the last “good” pixel value to the masked values.

Stream transformations

Averaging

class average

Read in full data stream and generate an averaged output.

"number": uint

Number of averaged images to output. By default one image is generated.

Reducing with OpenCL

class opencl-reduce

Reduces or folds the input stream using a generic OpenCL kernel loaded from filename or source. The kernel must accept exactly two global float arrays, one for the input and one for the output. Additionally, a second finish kernel can be specified which is called once when processing has finished. This kernel must have two arguments as well: the global float array and an unsigned integer count. Folding (i.e. setting the initial data to a known value) is enabled by setting the fold-value.

Here is an OpenCL example how to compute the average:

kernel void sum (global float *in, global float *out)
{
    size_t idx = get_global_id (1) * get_global_size (0) + get_global_id (0);
    out[idx] += in[idx];
}

kernel void divide (global float *out, uint count)
{
    size_t idx = get_global_id (1) * get_global_size (0) + get_global_id (0);
    out[idx] /= count;
}

And this is how you would use it with ufo-launch:

ufo-launch ... ! opencl-reduce kernel=sum finish=divide ! ...

If filename is not set, a default kernel file is loaded. See OpenCL reduction default kernels for a list of possible kernels.

"filename": string

Filename with kernel sources to load.

"source": string

String with OpenCL kernel code.

"kernel": string

Name of the kernel that is called on each iteration. Must have two global float array arguments, the first being the input, the second the output.

"finish": string

Name of the kernel that is called once at the end after all iterations. It must have a global float array argument and an unsigned integer argument, the first being the data, the second the iteration counter.

"fold-value": float

If given, the initial data is filled with this value, otherwise the first input element is used.

"dimensions": uint

Number of dimensions the kernel works on. Must be in [1, 3].

Statistics

class flatten

Flatten input stream by reducing with operation based on the given mode.

"mode": string

Operation, one of min, max, sum or median.

class flatten-inplace

Faster inplace operating variant of the flatten task.

"mode": enum

Operation, one of min, max or sum.

Slicing

class slice

Slices a three-dimensional input buffer to two-dimensional slices.

Stacking

class stack

Symmetrical to the slice filter, the stack filter stacks two-dimensional input. If number is not a divisor of the number of input images, the last produced stack, whose index starts to exceed the number of input images, will contain arbitrary images from previous iterations.

"number": uint

Number of items, i.e. the length of the third dimension.

Stacking with sliding window

class sliding-stack

Stacks input images up to the specified number and then replaces old images with incoming new ones as they come. The first image is copied to all positions in the beginning. By default, images in the window are not ordered, i.e. if e.g. number = 3, then the window will contain the following input images: (0, 0, 0), (0, 1, 0), (0, 1, 2), (3, 1, 2), (3, 4, 2), (3, 4, 5) and so on. If you want them to appear ordered with respect to their arrival time, use ordered.

"number": uint

Number of items, i.e. the length of the third dimension.

"ordered": boolean

Order items in the sliding window.

Merging

class merge

Merges the data from two or more input data streams into a single data stream by concatenation.

"number": uint

Number of input streams. By default this is two.

Slice mapping

class map-slice

Lays out input images on a quadratic grid. If the number of input elements is not the square of some integer value, the next higher number is chosen and the remaining data is blackened.

"number": uint

Number of expected input elements. If more elements are sent to the mapper, warnings are issued.

Color mapping

class map-color

Receives a two-dimensional image and maps its gray values to three red, green and blue color channels using the Viridis color map.

Splitting channels

class unsplit

Turns a three-dimensional image into a two-dimensional image by interleaving the third dimension, i.e. [[[XXX],[YYY],[ZZZ]]] is turned into [[XYZ],[XYZ],[XYZ]]. This is useful to merge a separate multi-channel RGB image into a "regular" RGB image that can be shown with cv-show.

This task adds the channels key to the output buffer containing the original depth of the input buffer.

Fourier domain

Fast Fourier transform

class fft

Compute the Fourier spectrum of input data. dimensions specifies the dimensionality of the transform; it is independent of the input dimensions. E.g. if you have 3D input, you can compute a 3D FT, a batch of 2D FTs of every plane, or a batch of 1D FTs of every row. If you have 2D input, you may compute a 2D FT or a batch of 1D FTs of every row. For every dimension, if size is not specified and auto-zeropadding is True, the input is padded to the next power of two. If it is False, the output has the same size as the input (via the Chirp-z transform from [Rabiner et al., 1969]).

Please note that the Chirp-z transform needs to perform two padded-size FFTs and pads the input to the next power of two of double the input size, so it can be considerably slower than using auto-zeropadding. E.g. if the input size is 1023 x 1023 pixels, auto-zeropadding=True pads the input to 1024 x 1024 pixels. In the case of auto-zeropadding=False and no user size specification (see the parameters below), Chirp-z pads the input to 2048 x 2048. On top of that, it requires two FFTs with the padded size, so in this case it is eight times slower than using auto-zeropadding=True (a factor of four for the padding in the two dimensions and an additional factor of two for the two FFTs).

Example usage:

# Suppose input.tif is 3D and has the following size: width=17, height=15, depth=9
# 3D transform, input size = output size
ufo-launch read path=input.tif ! stack number=9 ! fft dimensions=3 ! ifft dimensions=3 ! slice ! write filename=inverse.tif
# 3D transform, auto zeropadding
ufo-launch read path=input.tif ! stack number=9 ! fft dimensions=3 auto-zeropadding=True ! ifft dimensions=3 ! slice ! write filename=inverse.tif
# 3D transform, custom size
ufo-launch read path=input.tif ! stack number=9 ! fft dimensions=3 size-x=20 size-y=64 size-z=10 ! ifft dimensions=3 ! slice ! write filename=inverse.tif
# Batch of nine 2D transforms
ufo-launch read path=input.tif ! stack number=9 ! fft dimensions=2 ! ifft dimensions=2 ! slice ! write filename=inverse.tif

# 2D transform
ufo-launch read path=input.tif number=1 ! fft dimensions=2 ! ifft dimensions=2 ! write filename=inverse.tif
# 2D transform, auto zeropadding
ufo-launch read path=input.tif number=1 ! fft dimensions=2 auto-zeropadding=True ! ifft dimensions=2 ! write filename=inverse.tif
# 2D transform, custom size
ufo-launch read path=input.tif number=1 ! fft dimensions=2 size-x=20 size-y=19 ! ifft dimensions=2 ! write filename=inverse.tif

# Batch of fifteen 1D transforms
ufo-launch read path=input.tif number=1 ! fft dimensions=1 ! ifft dimensions=1 ! write filename=inverse.tif

"auto-zeropadding": boolean

Automatically zero-pad the input data to the next power of 2.

"dimensions": uint

Number of dimensions in [1, 3].

"size-x": uint

Size of FFT transform in x-direction, 0=automatic selection.

"size-y": uint

Size of FFT transform in y-direction, 0=automatic selection.

"size-z": uint

Size of FFT transform in z-direction, 0=automatic selection.

class ifft

Compute the inverse Fourier transform of spectral input data (see fft for details on how the transform works). You may crop the output by setting the crop-width and crop-height parameters, otherwise the output has the same size as the input.

"dimensions": uint

Number of dimensions in [1, 3].

"crop-width": int

Width to crop output, 0=automatic selection.

"crop-height": int

Height to crop output, 0=automatic selection.

class power-spectrum

Compute the power spectrum from Fourier coefficients.

Frequency filtering

class filter

Computes a frequency filter function and multiplies it with its input, effectively attenuating certain frequencies.
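
This task operates on frequencies, so it is typically placed between fft and ifft. A sketch with placeholder file names and illustrative parameter values:

ufo-launch read path=sinogram.tif ! fft dimensions=1 ! filter filter=butterworth cutoff=0.5 order=2 ! ifft dimensions=1 ! write filename=filtered.tif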

"filter ": enum

Any of ramp, ramp-fromreal, butterworth, faris-byer, hamming and bh3 (Blackman-Harris-3). The default filter is ramp-fromreal which computes a correct ramp filter avoiding offset issues encountered with naive implementations.

"scale": float

Arbitrary scale that is multiplied to each frequency component.

"cutoff": float

Cutoff frequency of the Butterworth filter.

"order": float

Order of the Butterworth filter.

"tau": float

Tau parameter of Faris-Byer filter.

"theta": float

Theta parameter of Faris-Byer filter.

Stripe filtering

class filter-stripes

Filter vertical stripes. The input and output are in the 2D frequency domain. If vertical-sigma is 0, the filter multiplies the horizontal frequencies (for frequency ky=0) with a Gaussian profile centered at zero frequency. Otherwise it also applies a vertical Gaussian profile (1 - Gaussian), which enables filtering of stripes that are not perfectly vertical; this is useful for broader stripes and stripes which are not perfectly straight. If horizontal-sigma is 0, only the vertical Gaussian profile is applied (i.e. a horizontal stripe is cut out around ky=0), which is useful e.g. for filtering DMM stripes.

Example usage:

$ ufo-launch read path=sino.tif ! fft dimensions=2 ! filter-stripes horizontal-sigma=1 ! ifft dimensions=2 ! write filename=sino-filtered.tif

"horizontal-sigma": float

Horizontal filter strength, which is the sigma of the Gaussian. Small values, e.g. 1e-7 cause only the zero frequency to remain in the signal, i.e. stronger filtering. Values around 1 are a good starting point.

"vertical-sigma": float

Vertical filter strength, which is the sigma of the Gaussian. The larger the value, the more non-vertical frequencies are removed. Value around 4 is a good starting point.

1D stripe filtering

class filter-stripes1d

Filter stripes in 1D along the x-axis. The input and output are in the frequency domain. The filter multiplies the frequencies with an inverted Gaussian profile centered at zero frequency, i.e. the filter is f(k) = 1 - gauss(k), in order to suppress the low frequencies.
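
Analogous to the 2D stripe filter above, a sketch with placeholder file names and an illustrative strength value:

ufo-launch read path=sino.tif ! fft dimensions=1 ! filter-stripes1d strength=2 ! ifft dimensions=1 ! write filename=sino-filtered.tif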

"strength": float

Filter strength, which is the full width at half maximum of the gaussian.

Zeropadding

class zeropad

Add zeros in the center of a sinogram; the oversampling property controls the amount of zeros which will be added.

"oversampling": uint

Oversampling coefficient.

"center-of-rotation": float

Center of rotation of sample.

Reconstruction

Flat-field correction

class flat-field-correct

Computes the flat field correction using three data streams:

  1. Projection data on input 0

  2. Dark field data on input 1

  3. Flat field data on input 2
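
A common pattern, sketched here with placeholder paths and write format string (read and write are assumed from outside this section), is to average the dark and flat fields before the correction:

ufo-launch [read path=projections/, read path=darks/ ! average, read path=flats/ ! average] ! flat-field-correct absorption-correct=True ! write filename=corrected-%05i.tif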

"absorption-correct": boolean

If TRUE, compute the negative natural logarithm of the flat-corrected data.

"fix-nan-and-inf": boolean

If TRUE, replace all resulting NANs and INFs with zeros.

"sinogram-input": boolean

If TRUE, correct only one line (the sinogram), thus darks and flats are 1D.

"dark-scale": float

Scale the dark field prior to the flat field correction.

"flat-scale": float

Scale the flat field prior to the flat field correction.

Sinogram transposition

class transpose-projections

Read a stream of two-dimensional projections and output a stream of transposed sinograms. number must be set to the number of incoming projections to allocate enough memory.
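
A sketch with placeholder paths and projection count (mind the memory warning below):

ufo-launch read path=projections/ ! transpose-projections number=1800 ! write filename=sinogram-%05i.tif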

"number": uint

Number of projections.

Warning

This is a memory intensive task and can easily exhaust your system memory. Make sure you have enough memory, otherwise the process will be killed.

Tomographic backprojection

class backproject

Computes the backprojection for a single sinogram.
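
A typical filtered backprojection sketch (file names and the axis position are placeholders; fft, filter and ifft are documented above):

ufo-launch read path=sinograms/ ! fft dimensions=1 ! filter ! ifft dimensions=1 ! backproject axis-pos=1024 ! write filename=slice-%05i.tif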

"num-projections": uint

Number of projections between 0 and 180 degrees.

"offset": uint

Offset to the first projection.

"axis-pos": double

Position of the rotation axis in horizontal pixel dimension of a sinogram or projection. If not given, the center of the sinogram is assumed.

"angle-step": double

Angle step increment in radians. If not given, pi divided by height of input sinogram is assumed.

"angle-offset": double

Constant angle offset in radians. This determines effectively the starting angle.

"mode": enum

Reconstruction mode which can be either nearest or texture.

"roi-x": uint

Horizontal coordinate of the start of the ROI. By default 0.

"roi-y": uint

Vertical coordinate of the start of the ROI. By default 0.

"roi-width": uint

Width of the region of interest. The default value of 0 denotes full width.

"roi-height": uint

Height of the region of interest. The default value of 0 denotes full height.

Tomographic Stacked backprojection

class stacked-backproject

Computes the backprojection of multiple sinograms in parallel. Stream multiple sinograms by introducing a stack filter of a suitable size before this filter. A suitable minimum stack size must be specified based on the precision mode (see the pipeline sketch after the list):

  1. single - 2

  2. half - 4

  3. int8 - 4
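
A hypothetical sketch using half precision (file names, the axis position and the trailing slice/write stages are placeholders; stack and slice are documented above):

ufo-launch read path=sinograms/ ! fft dimensions=1 ! filter ! ifft dimensions=1 ! stack number=4 ! stacked-backproject precision-mode=half axis-pos=1024 ! slice ! write filename=slice-%05i.tif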

"num-projections": uint

Number of projections between 0 and 180 degrees

"offset": uint

Offset to the first projection.

"axis-pos": double

Position of the rotation axis in horizontal pixel dimension of a sinogram or projection. If not given, the center of the sinogram is assumed.

"angle-step": double

Angle step increment in radians. If not given, pi divided by height of input sinogram is assumed.

"angle-offset": double

Constant angle offset in radians. This determines effectively the starting angle.

"roi-x": uint

Horizontal coordinate of the start of the ROI. By default 0.

"roi-y": uint

Vertical coordinate of the start of the ROI. By default 0.

"roi-width": uint

Width of the region of interest. The default value of 0 denotes full width.

"roi-height": uint

Height of the region of interest. The default value of 0 denotes full height.

"precision-mode": enum

Precision mode or storage format, which can be single, half or int8, corresponding to storage in 32, 16 and 8 bits respectively.

Forward projection

class forwardproject

Computes the forward projection of slices into sinograms.
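
A sketch projecting a single slice into a sinogram (the file names and the projection count are placeholders):

ufo-launch read path=slice.tif ! forwardproject number=512 ! write filename=sinogram.tif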

"number": uint

Number of final 1D projections, i.e. the height of the sinogram.

"angle-step": float

Angular step between two adjacent projections. If not changed, it is simply pi divided by number.

Laminographic backprojection

class lamino-backproject

Backprojects parallel beam computed laminography projection-by-projection into a 3D volume.

"region-values": int

Elements in regions.

"float-region-values": float

Elements in float regions.

"x-region": GValueArray

X region for reconstruction as (from, to, step).

"y-region": GValueArray

Y region for reconstruction as (from, to, step).

"z": float

Z coordinate of the reconstructed slice.

"region": GValueArray

Region for the parameter along z-axis as (from, to, step).

"projection-offset": GValueArray

Offset to projection data as (x, y) for the case input data is cropped to the necessary range of interest.

"center": GValueArray

Center of the volume with respect to projections (x, y), (rotation axes).

"overall-angle": float

Angle covered by all projections (can be negative for negative steps in case only num-projections is specified)

"num-projections": uint

Number of projections.

"tomo-angle": float

Tomographic rotation angle in radians (used for acquiring projections).

"lamino-angle": float

Absolute laminographic angle in radians determining the sample tilt.

"roll-angle": float

Sample angular misalignment to the side (roll) in radians (CW is positive).

"parameter": enum

Which parameter will be varied along the z-axis, one of z, x-center, lamino-angle or roll-angle.

Fourier interpolation

class dfi-sinc

Computes the 2D Fourier spectrum of the reconstructed image using the 1D Fourier projections of the sinogram (the fft filter must be applied before). There are no default values for the properties, therefore they should be assigned manually.
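
A hypothetical direct Fourier inversion sketch; all property values here are purely illustrative placeholders, and whether zeropad is needed depends on your data (zeropad, fft, ifft and swap-quadrants are documented above):

ufo-launch read path=sinogram.tif ! zeropad oversampling=2 ! fft dimensions=1 ! dfi-sinc kernel-size=7 number-presampled-values=2047 roi-size=1024 angle-step=0.003 ! ifft dimensions=2 ! swap-quadrants ! write filename=slice.tif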

"kernel-size": uint

The length of kernel which will be used in interpolation.

"number-presampled-values": uint

Number of presampled values which will be used to calculate kernel-size kernel coefficients.

"roi-size": int

The length of one side of the region of interest.

"angle-step": double

Increment of angle in radians.

Center of rotation

class center-of-rotation

Compute the center of rotation of input sinograms.

"angle-step": double

Step between two successive projections.

"center": double

The calculated center of rotation.

Sinogram offset shift

class cut-sinogram

Shifts the sinogram given a center of rotation that is not centered in the input image.

"center-of-rotation": float

Center of rotation of specimen.

Phase retrieval

class retrieve-phase

Computes and applies a Fourier filter to correct phase-shifted data. Expects frequencies as input and produces frequencies as output. The propagation distance can be specified for both x and y directions together by the distance parameter, or separately by distance-x and distance-y, which is useful e.g. when the pixel size is not symmetrical. distance may be a list, in which case a multi-distance CTF phase retrieval is performed; in this case method must be set to ctf_multidistance.
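
Since the task works on frequencies, it is typically sandwiched between fft and ifft. A sketch with placeholder file names and purely illustrative parameter values:

ufo-launch read path=projections/ ! fft dimensions=2 ! retrieve-phase method=tie energy=20 distance=0.5 pixel-size=1e-6 regularization-rate=2.5 ! ifft dimensions=2 ! write filename=phase-%05i.tif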

"method": enum

Retrieval method which is one of tie, ctf, ctf_multidistance, qp or qp2.

"energy": float

Energy in keV.

"distance": float

Distance in meters.

"distance-x": float

Distance in x-direction in meters.

"distance-y": float

Distance in y-direction in meters.

"pixel-size": float

Pixel size in meters.

"regularization-rate": float

The regularization parameter is the log10 of the constant added to the denominator to regularize the singularity at zero frequency: \(1 / \sin(x) \rightarrow 1 / (\sin(x) + 10^{-RegPar})\). It is also log10(delta / beta), where the complex refractive index is delta + beta * 1j.

Typical values are in [2, 3].

"thresholding-rate": float

Parameter for Quasiparticle phase retrieval which defines the width of the rings to be cropped around the zero crossing of the CTF denominator in Fourier space.

Typical values in [0.01, 0.1], qp retrieval is rather independent of cropping width.

"frequency-cutoff": float

Cutoff frequency in radians, after which the filter is set to 0.

"output-filter": boolean

Output filter values instead of the filtered frequencies.

General matrix-matrix multiplication

class gemm

Computes \(\alpha A \cdot B + \beta C\) where \(A\), \(B\) and \(C\) are input streams 0, 1 and 2 respectively. \(A\) must be of size \(m\times k\), \(B\) \(k\times n\) and \(C\) \(m\times n\).

Note

This filter is only available if CLBlast support is available.

"alpha": float

Scalar multiplied with \(AB\).

"beta": float

Scalar multiplied with \(C\).

Segmentation

class segment

Segments a stack of images given a field of labels using the random walk algorithm described in 3. The first input stream must contain three-dimensional image stacks, the second input stream a label image with the same width and height as the images. Any pixel value other than zero is treated as a label and used to determine segments in all directions.

3

Lösel and Heuveline, Enhancing a Diffusion Algorithm for 4D Image Segmentation Using Local Information in Proc. SPIE 9784, Medical Imaging 2016, http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2506235

Auxiliary

Buffering

class buffer

Buffers items internally until the data stream has finished. After that, all buffered elements are forwarded to the next task.

"number": uint

Number of pre-allocated buffers.

"dup-count": uint

Number of times each image should be duplicated.

"loop": boolean

Duplicates the data in a loop manner dup-count times.

Stamp

class stamp

Writes the current iteration into the top-left corner.

"font": string

Pango font description, by default set to Mono 9.

"scale": float

Scales the default brightness of 1.0.

Note

This filter requires Pango and Cairo for text layout.

Loops

class loop

Repeats output of incoming data items. It uses a low-overhead policy to avoid unnecessary copies. You can expect the data items to be on the device where the data originated.

"number": uint

Number of iterations for each received data item.

Monitoring

class monitor

Inspects a data stream and prints size, location and associated metadata keys on stdout.

"print": uint

If set, print the given number of items on stdout as hexadecimally formatted numbers.

Sleep

class sleep

Wait time seconds before continuing. Useful for debugging throughput issues.

"time": double

Time to sleep in seconds.

Display

class cv-show

Shows the input using an OpenCV window.

"min": float

Minimum for display value scaling. If not set, will be determined at run-time.

"max": float

Maximum for display value scaling. If not set, will be determined at run-time.