Template matching two 3D numpy arrays - python

I have a 3D numpy array, start_array. I apply some random translations and rotations to it to get transform_arry. I only have access to the two arrays. I want to identify the random transforms that were applied. What would be a quick and easy way to do this in Python?
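One way to approach this is to treat it as a rigid image-registration problem and let an optimizer recover the rotation and translation. Below is a minimal sketch using SimpleITK's registration framework, assuming the transform really is rigid and that start_array and transform_arry are the two arrays from the question:

import numpy as np
import SimpleITK as sitk

# Wrap the numpy arrays as SimpleITK images.
# start_array / transform_arry are the arrays from the question.
fixed = sitk.GetImageFromArray(start_array.astype(np.float32))
moving = sitk.GetImageFromArray(transform_arry.astype(np.float32))

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Start from a rough alignment of the two volume centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

final_transform = reg.Execute(fixed, moving)
# Euler3DTransform parameters: (angleX, angleY, angleZ, tx, ty, tz)
print(final_transform.GetParameters())

If the arrays are noisy or only partially overlapping, a more robust metric (e.g. SetMetricAsCorrelation) may converge better.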

Related

How to register a Numpy Array?

I have a numpy array, which contains a set of 20 or so images (so it can be thought of as a tensor). I need to find the easiest way of registering this stack of images.
Is there a numpy function where I can input a set of images (in the form of an array) and it will return an aligned stack (adding any borders needed, etc.)? If not, what is the easiest way of doing this?
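There is no single numpy function for this, but if the misalignment is purely translational, a short sketch along these lines should work, using scikit-image's phase_cross_correlation and scipy.ndimage.shift (register_stack is a hypothetical helper name):

import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def register_stack(stack):
    # stack: (n_images, height, width) array; the first image is the reference.
    reference = stack[0]
    aligned = np.empty_like(stack)
    aligned[0] = reference
    for i, image in enumerate(stack[1:], start=1):
        # Estimate the translation that maps `image` onto `reference`.
        offset, _, _ = phase_cross_correlation(reference, image)
        aligned[i] = shift(image, offset)
    return aligned

For rotations or more general warps you would need a full registration toolkit, such as SimpleITK.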

How to maintain Direction in SimpleITK image to numpy array conversion?

I have three different isotropic MRI DICOM volumes of the same object, each with a different direction (orthogonal sagittal, coronal and transverse acquisitions of same object).
I would like to convert them to numpy arrays and plot them in such a way that their indexing matches. Say I have three numpy arrays obtained from sitk images:
sag_array = sitk.GetArrayFromImage( sag_sitk )
dors_array = sitk.GetArrayFromImage( dors_sitk )
trans_array = sitk.GetArrayFromImage( trans_sitk )
I would like to be able to plot them using the same indexing, so that the slices
sag_array[:,:,index]
dors_array[:,:,index]
trans_array[:,:,index]
correspond to the same view, with no flipping or inversion of the axes.
I guess this info is contained in the Direction of the SimpleITK images; is there a way to transfer it to the numpy arrays after the conversion?
Does the Direction property in general have any effect on the numpy conversion, or is it lost?
I solved it by pre-processing all the images with the sitk.Resample() function to a common Origin and Direction. This way, since the converted numpy arrays occupy the same physical space, they are sliced coherently with each other.
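For reference, a minimal sketch of that approach, assuming the sagittal volume is used as the reference grid (sitk.Resample with just a reference image uses an identity transform, which is appropriate here since all three volumes image the same object in the same physical space):

import SimpleITK as sitk

# Resample the other two volumes onto the sagittal volume's
# Origin, Direction and Spacing.
dors_resampled = sitk.Resample(dors_sitk, sag_sitk)
trans_resampled = sitk.Resample(trans_sitk, sag_sitk)

sag_array = sitk.GetArrayFromImage(sag_sitk)
dors_array = sitk.GetArrayFromImage(dors_resampled)
trans_array = sitk.GetArrayFromImage(trans_resampled)
# Now sag_array[:,:,index] etc. slice the same physical plane.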

Why do we convert images into arrays for image processing?

Whenever we perform any operations on images, we convert them into arrays. What's the specific reason?
OpenCV images are stored as three-dimensional Numpy arrays. When you read in images using the library, they are represented as Numpy arrays.
With Numpy you can perform blazing-fast operations on numerical arrays, whatever their dimensions and shape.
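A tiny illustration of why this matters: a whole-image operation becomes a single vectorized expression instead of three nested Python loops over pixels (the image here is random data, just for demonstration):

import numpy as np

image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Brighten every pixel at once; int16 avoids uint8 overflow before clipping.
brightened = np.clip(image.astype(np.int16) + 50, 0, 255).astype(np.uint8)
# Collapse the color channels to grayscale in one call.
grayscale = image.mean(axis=2)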

Loading 3D Model but getting 2D Array in Python

I've downloaded a sample .stl file from here: https://www.thingiverse.com/thing:156207
Then I've used this code to get a numpy array for further image processing with matplotlib:
import sys
import numpy as np
from stl import mesh
# Print full arrays; newer numpy rejects threshold=np.nan
np.set_printoptions(threshold=sys.maxsize)
# Using an existing stl file:
your_mesh = mesh.Mesh.from_file('300_polygon_sphere_100mm.stl')
data = np.array(your_mesh)
print(data.shape)
Unfortunately, this is an array with only two dimensions. I've checked the .stl file with my editor and there are three dimensions.
Can someone help me? My goal is to write code with which I can slice 3D models to get access to the sliced 2D images.
Thanks.
EDIT: I've tried to reshape it:
data_reshaped = np.reshape(data, (550, 3, 3))
But I guess this is totally wrong, and I don't know if the pattern is (Z, X, Y).
I want to do some slicing operations on the 3D array to get XY images, like in this video: https://www.youtube.com/watch?v=5jQVQE6yfio&list=PLT66ZlnovHPYzny9TYM1mx02k5Xnw_kjw&t=215s&index=3
You won't be able to just load the .stl file into a numpy array and perform slicing as shown in the video you linked. In the video, they load a model that is stored as a 3D numpy array.
However, the model you are trying to load is a polygonal mesh, which means you only have the coordinate values of the vertices. You can open the .stl file in a text editor to see its contents. (By converting the loaded mesh into a numpy array you just extract those coordinate values; you can compare the values in the numpy array and the text file, they are the same.) The resulting numpy array has shape (550, 9). The first dimension is the number of faces in the model (this model has 550 faces). Each face has three vertices with three coordinate values each, hence the 9 numbers per face. So the third dimension is not lost; it's just stored in a different manner.
Simply reshaping the array won't give you a model that you can slice as shown in the video. To achieve this, you have to convert the polygonal mesh into a rasterized (voxel) representation: initialize an empty 3D array that spans the whole model, then determine for each voxel whether it intersects the geometry of the loaded mesh.
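As an illustration, the third-party trimesh library can do this voxelization for you. A minimal sketch, assuming the STL loads as a single mesh and that a voxel pitch of 1.0 (in the mesh's units) is an acceptable resolution:

import numpy as np
import trimesh

# Load the mesh and rasterize it into a boolean 3D occupancy grid.
mesh = trimesh.load('300_polygon_sphere_100mm.stl')
voxels = mesh.voxelized(pitch=1.0)
volume = voxels.matrix  # 3D boolean numpy array

# An XY slice at a given height, like in the linked video:
z_index = volume.shape[2] // 2
xy_slice = volume[:, :, z_index]
print(volume.shape, xy_slice.sum(), 'occupied voxels in this slice')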

Numpy function over matrix

My question is quite similar to this post: Most efficient way to map function over numpy array, but I have some additional questions.
Right now, I'm taking in an image represented by a 2-D array, and for each pixel in the image, I am doing some computation that involves convolving the nxn neighboring pixels with a Gaussian kernel to find a "weight" for each pixel. My end goal is to return a 2-D array of the same size as the input, with the calculated weight in place of each pixel.
So what I did was to first create a function getWeight that, given a pixel, does the necessary computation using its neighbors and a Gaussian kernel to find its corresponding weight.
So my question is: given getWeight, is using a for-loop (or numpy.fromiter) to apply this function to every pixel in the 2-D array the best way to go about solving this problem?
Or is there a way to use built-in numpy functions to apply this sort of operation to the entire array at once? (This question is a bit vague, but what I'm getting at is: since numpy operations on arrays are not actually implemented as a Python loop over every pixel, is there something I could use to optimize my problem?)
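If the per-pixel weight is, for example, just the Gaussian-weighted sum of the nxn neighborhood, then the whole computation is itself one convolution, and scipy can apply it to the entire array at once with no Python-level loop. A minimal sketch, with gaussian_kernel and get_weights as hypothetical names standing in for your getWeight logic:

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(n, sigma=1.0):
    # n x n Gaussian kernel, normalized to sum to 1.
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def get_weights(image, n=5, sigma=1.0):
    # One convolution computes the Gaussian-weighted neighborhood
    # sum for every pixel simultaneously.
    return convolve(image.astype(float), gaussian_kernel(n, sigma),
                    mode='reflect')

If your per-pixel computation is more complex than a weighted sum, it can often still be decomposed into a few such whole-array operations, which is where the speedup comes from.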
