My aim is to compute the maximum and minimum diameters of a slice of a 3D tubular shape (as shown in the attached figure).
What I tried: several rotations of the skeleton of the shape to make it parallel to the Z axis, and then cutting the interpolated tubular shape at some voxel to get the slice.
However, these rotations require a lot of interpolation. Hence, I would like to avoid them and use a method that computes the diameters (max and min) without any rotations.
Any help please?
I'm converting 2D (spatial) images to the frequency domain using tf.signal.fft2d (in NumPy: np.fft.fft2) and notice that the input and output shapes are the same, although I don't see why they have to be. For example:
test_img = np.random.rand(100, 100) # shape (100, 100)
spectral = np.fft.fft2(test_img)
# -> spectral.shape = (100, 100)
Given that the image is now in the spectral basis - how are the basis elements chosen in NumPy (and TensorFlow, as the implementations are the same)? Specifically, what are the starting (lowest) frequencies, and how are the more periodic ones chosen?
Why do you expect the two axes to be of different lengths?
By default, the FFT is computed on the points you supply, resulting in a 2D array that (correctly) has the same shape as the input. To change this behavior, you must provide the s parameter to fft2 (see the docs). For example, in your case, calling np.fft.fft2(test_img, s=(200, 100)) will result in an output of shape (200, 100). This is internally obtained by zero-padding your input (i.e. adding 100 trailing zeros along the first dimension) and computing the FFT on the resulting matrix.
As a general rule, for an FFT output of shape (N, M), the (normalized) frequency spacing will be 1/N on axis 0 and 1/M on axis 1. To convert these to actual frequencies, multiply each by the sampling frequency of the respective dimension.
Be aware that when you compute a double-sided FFT (as you are doing), you'll have positive frequencies up to Nyquist in the first half and negative frequencies in the second half (see this page).
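The shape behavior and the frequency ordering described above can be checked directly; here is a minimal sketch using only NumPy:

```python
import numpy as np

test_img = np.random.rand(100, 100)

# Default: the FFT is computed on the samples you supply, so shapes match.
spectral = np.fft.fft2(test_img)                # shape (100, 100)

# Zero-pad axis 0 to 200 samples via the s parameter.
padded = np.fft.fft2(test_img, s=(200, 100))    # shape (200, 100)

# fftfreq lists each bin's normalized frequency: spacing 1/N,
# positive frequencies up to Nyquist first, then the negative ones.
freqs0 = np.fft.fftfreq(200)
freqs1 = np.fft.fftfreq(100)
```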
I have a set of 3D numpy arrays of size 128x128x64.
I want to reduce them to 128x128 by averaging over the third axis, and I am currently doing that with this line:
np.mean(X, axis=2)
However, the pattern of the data is that the outer slices contain less influential information and the central slices contain more.
Is there any way I can average over the slices while giving more weight to the central ones, with the weight slowly decreasing toward the outer ones?
Alternatively, the important data in the arrays is usually bunched up and has the highest intensity. If there is any way to give preference to these regions while averaging, please suggest ideas.
Thanks
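One way to do this is a weighted mean with np.average, using Gaussian weights peaked at the central slice; a minimal sketch (the width sigma = 10 below is an assumed choice, not from the question):

```python
import numpy as np

X = np.random.rand(128, 128, 64)        # stand-in for one of the arrays

# Gaussian weights peaked at the central slice (sigma = 10 is assumed).
z = np.arange(64)
w = np.exp(-0.5 * ((z - 31.5) / 10.0) ** 2)

# np.average normalizes by sum(w), so this is a weighted mean over axis 2.
avg = np.average(X, axis=2, weights=w)  # shape (128, 128)
```

For the intensity-based variant, np.average also accepts a weights array with the same shape as X (e.g. weights based on the voxel intensities themselves), which emphasizes high-intensity regions.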
I need to interpolate a 4D data set (four angles, one output value). Angles are cyclic but scipy.interpolate.LinearNDInterpolator can't seem to take this into account. Is there a tool that can do this?
You could duplicate all of your data at ±2π on each dimension (for 4 dimensions, the easy way of doing this would create 81 copies of each point; the slightly harder way would create 16 copies, by adding 2π to the angles between 0 and π, and subtracting 2π from the angles between π and 2π). That should ensure that every point you query (with all of the angles between 0 and 2π) will have neighbors "on both sides" in each dimension, for the linear interpolation to work with.
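The ±2π duplication described above can be sketched like this on toy data; three offsets per dimension give the 81 copies:

```python
import numpy as np
from itertools import product
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 2 * np.pi, size=(40, 4))   # toy data: 40 points, 4 angles
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])    # toy output values

# Replicate every point at offsets -2pi, 0, +2pi in each dimension: 3**4 = 81 copies.
offsets = np.array(list(product((-2 * np.pi, 0.0, 2 * np.pi), repeat=4)))
pts_ext = (pts[:, None, :] + offsets[None, :, :]).reshape(-1, 4)
vals_ext = np.repeat(vals, len(offsets))

interp = LinearNDInterpolator(pts_ext, vals_ext)
val = interp(0.1, 0.2, 0.3, 6.2)   # near the wrap-around, still has neighbors
```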
I want to visualize such 4D data in the 2D plane. Is there any way to do that?
You can use principal component analysis (PCA) to reduce the dimensions. Sklearn has an easy-to-use implementation. PCA uses eigenvectors to determine the most 'important' directions in the higher-dimensional space and projects the values into the lower-dimensional space. Keep in mind, though, that the projection loses some information from the higher-dimensional input.
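A minimal sketch of that with scikit-learn (toy data; n_components=2 gives a 2D plane you can scatter-plot):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 4)              # toy 4D data

pca = PCA(n_components=2)               # keep the 2 leading directions
X2 = pca.fit_transform(X)               # shape (200, 2), ready for a 2D scatter

# How much variance (information) each kept component explains:
ratios = pca.explained_variance_ratio_
```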
You can use pairs plot. Look at this link: Visualizing Data with Pairs Plots in Python.
Depending on the data type, you can plot them on a 2d field with the following dimensions:
Dim 1-2: X and Y axes
Dim 3: plotted point size
Dim 4: plotted point color gradient
Or, if you use 3D software, you can plot in 3D with all the points drawn with a color gradient. By rotating the view, you can see all dimensions.
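The position/size/color mapping described above can be sketched with matplotlib's scatter (toy data; the size scaling below is an arbitrary choice):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.random((50, 4))       # toy 4D data in [0, 1)

fig, ax = plt.subplots()
# Dims 1-2 -> position, dim 3 -> marker size, dim 4 -> color gradient.
sc = ax.scatter(data[:, 0], data[:, 1],
                s=20 + 200 * data[:, 2],   # size scaling is arbitrary
                c=data[:, 3], cmap="viridis")
fig.colorbar(sc, ax=ax, label="dim 4")
fig.savefig("scatter_4d.png")
```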
I have a 100x100 matrix of zeros. I want to add a 10x20 ellipse around a specific point in the matrix - let's say at position (40, 60). The ellipse should be filled with values from 0 to 1 (1 in the center, 0 at the edge), and the values should follow a Gaussian profile.
Maybe someone can give me a clue how to start with this problem.
You need to draw samples from a multivariate Gaussian distribution. The function you can use is numpy.random.multivariate_normal.
Your mean vector should be [40, 60]. The covariance matrix C should be 2x2. Regarding its values:
C[0, 0], C[1, 1]: decide the width of the ellipse along each axis. Choose them so that 3*sqrt(C[i, i]) roughly matches the semi-axis of the ellipse along that direction.
The off-diagonal values are zero if you want the ellipse to be aligned with the axes; otherwise use nonzero values (keeping in mind that C[0, 1] == C[1, 0]).
However, keep in mind that, since it is a Gaussian distribution, the output values will be close to 0 at distance 3*sqrt(C[i, i]) from the center, but they will never be truly zero.
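Since the question asks for deterministic values (1 at the center, falling toward 0 at the edge), an alternative to sampling is to evaluate the Gaussian directly on the grid. A sketch, assuming 3*sigma is chosen to match the 5 and 10 semi-axes of the 10x20 ellipse:

```python
import numpy as np

M = np.zeros((100, 100))
cy, cx = 40, 60                  # ellipse center (row, column)
sy, sx = 5 / 3, 10 / 3           # assumed: 3*sigma equals the semi-axes 5 and 10

y, x = np.ogrid[:100, :100]
bump = np.exp(-0.5 * (((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2))
M = np.maximum(M, bump)          # 1 at the center, ~0.01 at the ellipse edge
```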