Coordinate Conversion of Arrays with Different Lengths - python

I'm trying to convert between coordinate systems for two sets of 1D arrays. I know how to make this particular conversion if both 1D arrays had the same length, but that is not the case, as seen below:
y = np.linspace(0, 360, 90)
x = np.linspace(2, 90, 22)
The overall goal is to create a meshgrid out of the new set of arrays, but getting to that point seems rather challenging. I initially tried two nested for loops, one enumerating from 0 to 89 and the other from 0 to 21, but that was a dead end.
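For what it's worth, np.meshgrid itself does not require the two input arrays to have equal lengths; a minimal sketch of building the grid directly from the arrays above:

import numpy as np

y = np.linspace(0, 360, 90)   # 90 samples
x = np.linspace(2, 90, 22)    # 22 samples

# meshgrid broadcasts the two 1D arrays against each other,
# so unequal lengths are fine
X, Y = np.meshgrid(x, y)      # both grids have shape (90, 22)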


Is it possible to use an array as a list of indices of a matrix to define a new matrix WITHOUT for loops?

I have a 3D problem where the final output is an array in the xy-plane. I have an array in the x-z plane (dimensions (xsiz, zsiz)) and an array in the y-plane (dimension ysiz), as below:
xz = np.zeros((xsiz, zsiz))
y = (np.arange(ysiz)*(zsiz/ysiz)).astype(int)
xz can be thought of as an array of (zsiz) column vectors of size (xsiz) and labelled by z in range (0, zsiz-1). These are not conveniently accessible given the current setup - I've been retrieving them by np.transpose(xz)[z]. I would like the y array to act like a list of z values and take the column vectors labelled by these z values and combine them in a matrix with final dimension (xsiz, ysiz). (It seems likely to me that it will be easier to work with the transpose of xz so the row vectors can be retrieved as above and combined giving a (ysiz, xsiz) matrix which can then be transposed but I may be wrong.)
This would be simple using for loops, and I've given an example of such a loop that does what I want below in case my explanation isn't clear. However, the final intention is for this code to be parallelized using CuPy, so ideally I would like the entire process to be carried out by matrix manipulation. It seems like it should be possible, but I can't think how!
Any help greatly appreciated.
import numpy as np
xsiz = 5 #sizes given random values for example
ysiz = 6
zsiz = 4
xz = np.arange(xsiz*zsiz).reshape(xsiz, zsiz)
y = (np.arange(ysiz)*(zsiz/ysiz)).astype(int)
xzT = np.transpose(xz)
final_xyT = np.zeros((0, xsiz))
for i in range(ysiz):
    index = y[i]
    xvec = xzT[index]
    final_xyT = np.vstack((final_xyT, xvec))
    # indexing could go wrong here if y contained large numbers
    # CuPy's indexing wraps around so hopefully this shouldn't be too big an issue
final_xy = np.transpose(final_xyT)
print(xz)
print(final_xy)
If I understand your problem correctly, you need this:
xz[:,y]
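For reference, a quick check (using the example sizes from the question) that this integer-array indexing reproduces the loop's output:

import numpy as np

xsiz, ysiz, zsiz = 5, 6, 4
xz = np.arange(xsiz*zsiz).reshape(xsiz, zsiz)
y = (np.arange(ysiz)*(zsiz/ysiz)).astype(int)

# integer-array indexing along the column axis picks out the
# column vectors labelled by y, giving shape (xsiz, ysiz)
final_xy = xz[:, y]
print(final_xy.shape)   # (5, 6)

CuPy arrays support the same integer-array indexing, so the expression should carry over unchanged to the parallelized version.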

Python numpy comparing two 3D Arrays for similarity

I am trying to compare two 3D numpy arrays to calculate similarity. I have found these two posts, which I am trying to stitch together into something useful.
Comparing NumPy Arrays for Similarity
Subtracting numpy arrays of different shape efficiently
To make a long story short, I have two arrays created from 3D point clouds so they are filled with 3D coordinates, but because the 3D objects are different, the arrays have different lengths.
If requested, I can post some sample arrays, but they are 1000+ points each, so that would be a lot of text to post.
Here is what I am trying to do now. You can get array1 and array2 data here: https://pastebin.com/WbNvRUwG (array2 starts at line 1858).
array1 = [long np array with 3D coordinates]
array2 = [long np array with 3D coordinates]
array1_original = array1.copy()
if len(array1) < len(array2):
    array1, array2 = array2, array1
# The [:,None] is from the second link; it broadcasts the two
# arrays to a common shape to enable subtraction
array_difference = np.subtract(array1, array2[:,None])
array_abs_difference = np.absolute(array_difference)
array_total_difference = np.sum(array_abs_difference)
similarity = 1 - (array_total_difference / np.sum(array1_original))
My array differences are fine and represent what I want (the most similar arrays have small differences), but np.sum(array1_original) comes out much smaller than my summed differences, so my similarity score becomes negative.
I also tried calculating the difference between array1_original and an array filled with zeros, but the sum comes out about the same.
Can anyone tell me why np.sum(array1_original) would not be bigger than np.sum(array_abs_difference)?
The numpy comparison ended up being too slow, so I just used Open3D instead. It works for me.
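For reference (a sketch with made-up sizes, not the asker's data), the negative score is expected: array2[:, None] broadcasts to every pairwise difference, so the absolute-difference sum runs over len(array1) * len(array2) terms while np.sum(array1_original) runs over only len(array1) points:

import numpy as np

a = np.random.rand(1000, 3)   # stand-ins for the two point clouds
b = np.random.rand(800, 3)

diff = a - b[:, None]         # shape (800, 1000, 3): all point pairs
print(diff.shape)
# np.abs(diff).sum() adds up 800 * 1000 * 3 terms, while np.sum(a)
# adds up only 1000 * 3, so the difference total easily dwarfs
# the original sum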

Reshaping numpy array

What I am trying to do is take a numpy array representing 3D image data and calculate the hessian matrix for every voxel. My input is a matrix of shape (Z,X,Y) and I can easily take a slice along z and retrieve a single original image.
# note: np.gradient returns one gradient array per axis, in axis
# order, so for imgs of shape (Z, X, Y) the first array returned
# is the gradient along the z axis
gx, gy, gz = np.gradient(imgs)
gxx, gxy, gxz = np.gradient(gx)
gyx, gyy, gyz = np.gradient(gy)
gzx, gzy, gzz = np.gradient(gz)
And I can access the hessian for an individual voxel as follows:
x = 100
y = 100
z = 63
H = [[gxx[z][x][y], gxy[z][x][y], gxz[z][x][y]],
     [gyx[z][x][y], gyy[z][x][y], gyz[z][x][y]],
     [gzx[z][x][y], gzy[z][x][y], gzz[z][x][y]]]
But this is cumbersome and I can't easily slice the data.
I have tried using reshape as follows
H = H.reshape(Z, X, Y, 3, 3)
But when I test this by retrieving the hessian for a specific voxel, the value returned from the reshaped array is completely different from the original array.
I think I could use zip somehow but I have only been able to find that for making lists of tuples.
Bonus: If there's a faster way to accomplish this, please let me know. I essentially need to calculate the three eigenvalues of the hessian matrix for every voxel in the 3D data set. Calculating the hessian values is really fast, but finding the eigenvalues for a single 2D image slice takes about 20 seconds. Are there any GPU- or TensorFlow-accelerated libraries for image processing?
We can use a list comprehension to get the hessians -
H_all = np.array([np.gradient(i) for i in np.gradient(imgs)]).transpose(2,3,4,0,1)
Just to give it a bit of explanation: [np.gradient(i) for i in np.gradient(imgs)] loops through the two levels of outputs from the np.gradient calls, giving a 3 x 3 arrangement of gradient arrays along the first two axes. We need these two as the last two axes in the final output, so we push them to the end with the transpose.
Thus, H_all holds all the hessians and hence we can extract our specific hessian given x,y,z, like so -
x = 100
y = 100
z = 63
H = H_all[z, x, y]
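For the bonus question, np.linalg.eigvalsh broadcasts over leading axes, so (a sketch, not benchmarked) the eigenvalues of every voxel's hessian can be computed in one vectorized call rather than slice by slice:

# H_all has shape (Z, X, Y, 3, 3); eigvalsh treats the trailing
# (3, 3) as a stack of symmetric matrices and broadcasts over the
# leading axes. Hessians are symmetric up to the numerical error
# of the finite differences, and eigvalsh only reads one triangle
# of each matrix anyway.
eigenvalues = np.linalg.eigvalsh(H_all)   # shape (Z, X, Y, 3)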

Create 2D lists in python with variable length indexed vectors

I am working on an image processing problem where I have code that looks like this (the code written below just illustrates the type of problem I want to solve):
for i in range(0, 10):
    for j in range(0, 10):
        number_length = round(random.random()*10)
        a = np.zeros(number_length)
        Z[i][j] = a
What I want is some sort of 2D list or np.array (not really sure which) in which every pixel of an image indexes a vector/list of values. I cannot anticipate the length of the vector at any given pixel, and the lengths differ from pixel to pixel. What is the best way to go about this?
In my MATLAB code the workaround is simple: I define a 2D cell array and assign any vector to any element of it. Since cells do not require every indexed vector to have the same length, this works well. What is the equivalent optimal solution in Python?
Ideally the solution should not involve anticipating the maximum length of "a" over all pixels and padding every indexed vector to that length, since this implies some sort of zero padding that will consume memory when the indexed vectors are high-dimensional and sparse throughout the image.
A regular NumPy array won't work because it requires fixed, homogeneous dimensions. You can use a 2D list (i.e. a list of lists), where each element can be an array of arbitrary length. This is analogous to your setup in MATLAB, using a 2D cell array of vectors.
Try this:
z = [[np.zeros(np.random.randint(10)+1) for j in range(10)] for i in range(10)]
This creates a 10x10 list, where z[i][j] is a NumPy array of zeros with random length (from 1 to 10).
Edit (nested loops requested in comment):
z = [[None for j in range(10)] for i in range(10)]
for i in range(len(z)):
    for j in range(len(z[i])):
        z[i][j] = np.zeros(np.random.randint(10)+1)
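A quick usage check (purely illustrative) that the structure behaves like a MATLAB cell array:

print(len(z), len(z[0]))   # 10 10
print(z[2][7].shape)       # e.g. (4,) - lengths vary per element
z[0][0] = np.ones(37)      # any element can hold a vector of any length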

NumPy: Pick 2D indices of minimum values over 4D array

I have a function f(x,y,v,w) that I've evaluated over a range of values in (x,y,v,w) and stored in a 4D NumPy array, let's call it A.
I want a way to find two 2D arrays, V_best and W_best, that hold, for each (x,y), the values of v,w that minimize f(x,y,v,w). I've approached this by attempting to retrieve the indices of the values of (v,w) that give the minimum of A at each (x,y).
I've tried to use argmin for this, but I can't wrap my head around what the 3D arrays I get in return are, or how to use them in this context. As with many things I'm sure there's an obvious way to do this.
What I have is,
x = np.linspace(0,1,N1)
y = np.linspace(0,1,N2)
v = np.linspace(-5,5,N3)
w = np.linspace(-5,5,N4)
V,W,X,Y = np.meshgrid(v,w,x,y)
VALUEGRID = myfunc(V,W,X,Y)
V_besti = np.argmin(VALUEGRID,axis=0)
W_besti = np.argmin(VALUEGRID,axis=1)
Ideally, V_best and W_best will be of shape (N1,N2), corresponding to the dimensions of the range of x,y. I hope this is sufficiently clear.
Thank you in advance.
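In case it helps, taking separate argmins along two axes won't generally find the joint minimizer, since the two indices are chosen independently. One sketch of the standard approach (with a stand-in for myfunc, hypothetical sizes, and indexing='ij' so the grid axes follow argument order) is to merge the v,w axes, take a single argmin, and unravel it:

import numpy as np

N1, N2, N3, N4 = 8, 9, 10, 11   # hypothetical sizes

x = np.linspace(0, 1, N1)
y = np.linspace(0, 1, N2)
v = np.linspace(-5, 5, N3)
w = np.linspace(-5, 5, N4)

# indexing='ij' keeps the axes in argument order: (N3, N4, N1, N2)
V, W, X, Y = np.meshgrid(v, w, x, y, indexing='ij')
VALUEGRID = V**2 + W**2 + X*Y   # stand-in for myfunc

# merge the (v, w) axes, take a single joint argmin per (x, y),
# then split the flat index back into a v-index and a w-index
flat = VALUEGRID.reshape(N3 * N4, N1, N2)
flat_idx = np.argmin(flat, axis=0)                  # shape (N1, N2)
v_idx, w_idx = np.unravel_index(flat_idx, (N3, N4))

V_best = v[v_idx]   # shape (N1, N2)
W_best = w[w_idx]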
