I have a 3D image represented as an array of size 50x50x50. Every element of this 3D array is a pixel. I've differentiated every pixel in the x, y and z directions. How can I represent this in the array?
After differentiating, I get a list of size 3, where each entry is a (50,50,50) array. This list therefore holds the differentiated image for the x, y and z directions, which is very nearly what I want. But I would like an array of shape (50,50,50,3) rather than (3,50,50,50).
That is the representation I want: every pixel has a value for each of x, y and z.
My code:
import numpy as np

array_image = full_image[0:50, 0:50, 0:50]
Gradient = np.gradient(array_image)
If you look at the np.gradient docs carefully, it actually returns what you want, just with a different shape.
gradient : ndarray or list of ndarray.
A set of ndarrays (or a single ndarray if there is only one dimension)
corresponding to the derivatives of f with respect to each dimension.
Each derivative has the same shape as f.
So your Gradient is a list of the gradients of array_image, one per dimension.
res = np.zeros([50, 50, 50, 3])
for i in range(3):
    res[:, :, :, i] = Gradient[i]
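If you would rather avoid the explicit loop, np.stack builds the same (50, 50, 50, 3) array in one call by stacking the three gradient volumes along a new last axis (a minimal sketch using the variables above):

res = np.stack(Gradient, axis=-1)  # shape (50, 50, 50, 3)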
Suppose you have a higher dimensional array (3 dimensions or greater) which is composed of a series of 2D images. If this array is called x, then a single 2D image is represented as x[0,0,:,:]. What I want to do is apply a function that takes in a 2D image and outputs a scalar, over this higher dimensional array, so that the result has 2 fewer dimensions than the original array. How would I do such a thing?
In other words, what is the fastest numpy way of doing this: np.array([[f(x[i,j,:,:]) for j in range(x.shape[1])] for i in range(x.shape[0])]), for a list of axes and some function f that takes in an array.
I've looked at numpy.apply_along_axis, but that only acts on a 1D array and the shapes must be identical. numpy.apply_over_axes doesn't work either, since it doesn't reduce the number of dimensions passed to the function (it gives my function a 4D array, not a 2D array I can work with). numpy.vectorize doesn't work because it only ever applies the function to one element at a time.
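For reference, a generic way to express this (a sketch only, and not necessarily faster than the comprehension; lead_shape, flat and out are just illustrative names) is to flatten the leading axes, apply f to each trailing 2D slice, and reshape back:

# collapse all leading axes, apply f to every 2D slice, then restore the leading shape
lead_shape = x.shape[:-2]
flat = x.reshape(-1, *x.shape[-2:])
out = np.array([f(img) for img in flat]).reshape(lead_shape)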
I have a 3D numpy array points of dimensions [10000x3000x128], where the first dimension is the number of frames, the second dimension is the number of points in each frame, and the third dimension is a 128-element feature vector associated with each point. What I want to do is efficiently filter the points in each frame using a boolean 2D mask of dimensions [10000x3000], and for each of the selected points also take the related 128-dim feature vector. Moreover, the output still needs to be a 3D array, not a merged 2D array, and I would like to avoid any for loop if possible.
Actually what I'm doing is:
# example of points
points = np.zeros((10000, 3000, 128))
# fg, bg = 2D boolean arrays of shape (10000, 3000)
# init empty lists
fg_points, bg_points = [], []
for i in range(points.shape[0]):
    fg_mask_tmp, bg_mask_tmp = fg[i], bg[i]
    fg_points.append(points[i, fg_mask_tmp, :])
    bg_points.append(points[i, bg_mask_tmp, :])
fg_features, bg_features = np.array(fg_points), np.array(bg_points)
But this is quite a naive solution that can surely be improved in a more numpy-like way.
I also tried other solutions, such as:
fg_features = points[fg,:]
But this solution does not preserve the dimensions of the array: it merges the first two dimensions, since the number of filtered points can vary from frame to frame.
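For illustration, on a small example (shapes shrunk for readability) the boolean index returns a merged 2D array whose first dimension is the total number of selected points across all frames:

small = np.arange(2 * 4 * 3).reshape(2, 4, 3)   # 2 frames, 4 points, 3 features
mask = np.array([[True, False, True, True],
                 [False, True, False, False]])  # different count per frame
print(small[mask].shape)                        # (4, 3): the frame axis is gone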
Another solution I tried was to enlarge the 2D masks by appending 128 True values along a new last dimension, but without any success.
Does anyone know a possible efficient solution?
Thank you in advance for any help!
What I am trying to do is take a numpy array representing 3D image data and calculate the hessian matrix for every voxel. My input is a matrix of shape (Z,X,Y) and I can easily take a slice along z and retrieve a single original image.
gx, gy, gz = np.gradient(imgs)
gxx, gxy, gxz = np.gradient(gx)
gyx, gyy, gyz = np.gradient(gy)
gzx, gzy, gzz = np.gradient(gz)
And I can access the hessian for an individual voxel as follows:
x = 100
y = 100
z = 63
H = [[gxx[z][x][y], gxy[z][x][y], gxz[z][x][y]],
     [gyx[z][x][y], gyy[z][x][y], gyz[z][x][y]],
     [gzx[z][x][y], gzy[z][x][y], gzz[z][x][y]]]
But this is cumbersome and I can't easily slice the data.
I have tried using reshape as follows
H = H.reshape(Z, X, Y, 3, 3)
But when I test this by retrieving the hessian for a specific voxel, the values returned from the reshaped array are completely different from those in the original arrays.
I think I could use zip somehow but I have only been able to find that for making lists of tuples.
Bonus: if there's a faster way to accomplish this, please let me know. I essentially need to calculate the three eigenvalues of the hessian matrix for every voxel in the 3D data set. Calculating the hessian values is really fast, but finding the eigenvalues for a single 2D image slice takes about 20 seconds. Are there any GPU- or TensorFlow-accelerated libraries for image processing?
We can use a list comprehension to get the hessians -
H_all = np.array([np.gradient(i) for i in np.gradient(imgs)]).transpose(2,3,4,0,1)
Just to give it a bit of explanation: [np.gradient(i) for i in np.gradient(imgs)] loops through the two levels of output from the np.gradient calls, resulting in an array of shape (3, 3, Z, X, Y), where the two leading axes index the first and second derivative directions. We need these two as the last two axes in the final output, so we push them to the end with the transpose.
Thus, H_all holds all the hessians and hence we can extract our specific hessian given x,y,z, like so -
x = 100
y = 100
z = 63
H = H_all[z, x, y]
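For the bonus question about eigenvalues: each 3x3 hessian is symmetric (up to finite-difference noise), so np.linalg.eigvalsh can be applied to the whole stack at once, since it broadcasts over the leading axes of H_all and only reads the lower triangle of each matrix (a minimal sketch using H_all from above):

eigvals = np.linalg.eigvalsh(H_all)  # shape (Z, X, Y, 3), eigenvalues in ascending order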
I am working on an image processing problem where I have code that looks like this (the code written below just illustrates the type of problem I want to solve):
import random
import numpy as np

for i in range(10):
    for j in range(10):
        number_length = round(random.random() * 10)
        a = np.zeros(number_length)
        Z[i][j] = a  # Z needs to accept a different-length vector at every pixel
What I want is some sort of 2D list or np.array (I'm not really sure which) indexed by pixel, where every individual pixel holds a vector/list of values whose length I cannot anticipate; moreover, the length of the vector differs from pixel to pixel. What is the best way to go about this?
In my MATLAB code the workaround is simple: I define a 2D cell and just assign any vector to any element of it. Since cells do not insist on a consistent length for every indexed vector, this works well. What is the equivalent optimal solution for handling this in Python?
Ideally the solution should not involve anticipating the maximum length of "a" for any pixel and padding all indexed vectors to that length (this implies zero padding, which would waste memory when the vectors are high dimensional and sparse throughout the image).
A regular (numeric) NumPy array won't work because it requires fixed dimensions. You can use a 2D list (i.e. a list of lists), where each element can be an array of arbitrary length. This is analogous to your setup in MATLAB, using a 2D cell array of vectors.
Try this:
z = [[np.zeros(np.random.randint(10)+1) for j in range(10)] for i in range(10)]
This creates a 10x10 list, where z[i][j] is a NumPy array of zeros with random length (from 1 to 10).
Edit (nested loops requested in comment):
z = [[None for j in range(10)] for i in range(10)]
for i in range(len(z)):
    for j in range(len(z[i])):
        z[i][j] = np.zeros(np.random.randint(10) + 1)
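If you would rather keep NumPy-style indexing (z[i, j] instead of z[i][j]), an object-dtype array is arguably the closer analogue of a MATLAB cell array, since each element can hold an arbitrary Python object, ragged vectors included (a minimal sketch):

z = np.empty((10, 10), dtype=object)  # behaves like a 10x10 cell array
for i in range(10):
    for j in range(10):
        z[i, j] = np.zeros(np.random.randint(10) + 1)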
I have a 3D array created using the numpy mgrid command, so that each element has a certain value and the indexes retain the spatial information. For example, if one summed over the z-axis (3rd dimension), then the resultant 2D array could be used in matplotlib with imshow() to obtain an image with different binned pixel values.
My question is: How can I obtain the index values for each element in this grid (a,b,c)?
I need to use the index values to calculate the relative angle of each point to the origin of the grid, e.g. theta = arcsin( sqrt(x^2 + y^2) / sqrt(x^2 + y^2 + z^2) ).
Maybe this can be translated to another 3D grid where each element is the array [a,b,c]?
I'm not exactly clear on your meaning, but if you are looking for 3d arrays that contain the indices x, y, and z, then the following may suit your needs; assume your data is held in a 3D array called "abc":
import numpy as np

x, y, z = np.mgrid[[slice(dm) for dm in abc.shape]]
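An alternative worth mentioning: np.indices builds the same index grids directly, and the angle from your question can then be computed element-wise over the whole grid (a sketch only; the names r_xy, r and theta are just illustrative, and the origin where r == 0 comes out as nan):

x, y, z = np.indices(abc.shape)   # integer index grids, each the same shape as abc
r_xy = np.sqrt(x**2 + y**2)
r = np.sqrt(x**2 + y**2 + z**2)
with np.errstate(invalid='ignore'):
    theta = np.arcsin(r_xy / r)   # element-wise angle; nan at the origin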