Modifying numpy arrays with OpenCV - python

Just trying to understand how a piece of Python code works. The code in question changes the dimensions of a numpy array using cv2.resize. I would like to know how OpenCV populates the new dimensions of the array. I'm trying to translate this into C#, so any pseudocode in an explanation would also be appreciated.
print(last_heatmap.shape) --> (32, 32, 21)
print(type(last_heatmap)) --> numpy.ndarray
last_heatmap = cv2.resize(last_heatmap, (256, 256))
print(last_heatmap.shape) --> (256, 256, 21)
print(type(last_heatmap)) --> numpy.ndarray
I'm looking for an understanding of how this particular function works; as a bonus, suggestions about how to replicate it in other languages would be welcome (for C#, perhaps a 3D array of floats?). Thanks.

OpenCV's resize does not actually modify the array; it creates a completely new one. The function is documented in the OpenCV reference for cv2.resize.
For each pixel in the destination array it scales the pixel's coordinates to match the size of the source array, looks up the pixel at the mapped coordinate in the source, and writes it to the destination. By default it uses bilinear interpolation: the mapped coordinate usually falls between source pixels, so the four surrounding pixels are averaged, weighted by the sub-pixel offsets.
Image scaling is a common operation, so you will not have to reimplement this in C#; you can rely on existing libraries instead.
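For intuition, here is a minimal numpy sketch of the bilinear case. This is not OpenCV's actual (heavily optimized) implementation, and border and rounding details differ, but the structure translates directly to C# as a float[,,] with the same nested loops:

import numpy as np

def bilinear_resize(src, out_h, out_w):
    in_h, in_w = src.shape[:2]
    out = np.empty((out_h, out_w) + src.shape[2:], dtype=np.float32)
    for y in range(out_h):
        for x in range(out_w):
            # map the destination pixel centre back into source coordinates
            sy = (y + 0.5) * in_h / out_h - 0.5
            sx = (x + 0.5) * in_w / out_w - 0.5
            y0, x0 = int(np.floor(sy)), int(np.floor(sx))
            fy, fx = sy - y0, sx - x0
            # clamp neighbour indices to the border (edge replication)
            y0c, y1c = max(y0, 0), min(y0 + 1, in_h - 1)
            x0c, x1c = max(x0, 0), min(x0 + 1, in_w - 1)
            # weighted average of the four surrounding source pixels
            out[y, x] = ((1 - fy) * (1 - fx) * src[y0c, x0c]
                         + (1 - fy) * fx * src[y0c, x1c]
                         + fy * (1 - fx) * src[y1c, x0c]
                         + fy * fx * src[y1c, x1c])
    return out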

Related

pygalmesh setting the proper variables and explanation

For my research I am using pygalmesh to create a mesh from an array. The problem I am trying to solve is a vascular network: it contains multiple vessels that are not meshed, while the surrounding tissue is meshed. I am able to create the array, but I am having trouble setting the correct variables. In the README documentation there are a lot of variables for which I can't find any information.
For example :
max_edge_size_at_feature_edges=0.025,
min_facet_angle=25,
max_radius_surface_delaunay_ball=0.1,
max_facet_distance=0.001
Is there a file where they explain all those variables, like what they actually change in the mesh, and all the ones that can be passed in?
My current goal with meshing would be to reduce the number of 2D elements around my vessels, to reduce my array dimensions in the later computation.
Thanks
PS: If there are other meshing alternatives that you know of that can mimic pygalmesh's meshing from an array and are easy to use, let me know!
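Not a full answer, but for orientation: those keyword arguments are pygalmesh's names for CGAL's Mesh_criteria (facet angle, facet size, facet distance, edge size), which are documented in CGAL's "3D Mesh Generation" manual. Below is a hedged sketch of how they are typically passed, assuming the generate_from_array signature of a recent pygalmesh release; check it against your installed version:

import numpy as np
import pygalmesh

# toy labelled volume standing in for the vascular array;
# assumption: non-zero voxels are the tissue to be meshed
vol = np.zeros((50, 50, 50), dtype=np.uint8)
vol[10:40, 10:40, 10:40] = 1

mesh = pygalmesh.generate_from_array(
    vol,
    [1.0, 1.0, 1.0],                        # voxel size along each axis
    max_edge_size_at_feature_edges=0.025,   # CGAL edge_size: bound on 1D feature edges
    min_facet_angle=25.0,                   # CGAL facet_angle: min surface-triangle angle (degrees)
    max_radius_surface_delaunay_ball=0.1,   # CGAL facet_size: bound on surface-triangle size
    max_facet_distance=0.001,               # CGAL facet_distance: surface approximation error
)
mesh.write("vessels.vtk")                   # the returned object is a meshio mesh

If that mapping is right, raising max_radius_surface_delaunay_ball (and, to a lesser extent, max_facet_distance) is what coarsens the surface triangles, i.e. reduces the number of 2D elements around the vessels.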

How to correlate images from different MRI sequences in NIfTI format?

I am new to medical imaging. I am dealing with MRI images, namely T2 and DWI.
I loaded both images with nib.load, yet each sequence's image has a different number of slices (volume, depth?). If I select one slice (z coordinate), how can I get the corresponding slice in the other sequence's image? ITK does it correctly, so maybe something in the NIfTI header could help?
Thank you so much for reading! I also tried interpolation, but it did not work.
If ITK does it correctly, why not just use ITK?
If you insist on hand-rolling your own index <-> physical space conversion routines, take a look at how ITK computes those matrices and the publicly accessible methods that use them. For the NIfTI-specific parts, take a look at ITK's NIfTI reader, which sets the relevant metadata in itk::Image.
Read the NIfTI header, namely the affine, and use the resulting transformation matrices to create a mapping from the T2 voxels to the DWI voxels.
Hint: nibabel.
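A minimal nibabel sketch of that idea (the filenames are placeholders): each image's affine maps voxel indices to scanner/world coordinates, so composing one affine with the inverse of the other maps T2 voxels to DWI voxels.

import numpy as np
import nibabel as nib

t2 = nib.load("t2.nii.gz")
dwi = nib.load("dwi.nii.gz")

# voxel (i, j, k, 1) -> world for T2, then world -> voxel for DWI
t2_to_dwi = np.linalg.inv(dwi.affine) @ t2.affine

i, j, k = 10, 20, 5                          # an example T2 voxel
dwi_ijk = (t2_to_dwi @ [i, j, k, 1.0])[:3]
dwi_index = np.round(dwi_ijk).astype(int)    # nearest DWI voxel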

How to flatten 3D object surface into 2D array?

I've got 3D objects which are represented as numpy arrays.
How can I unfold the "surface" of such objects to get a 2D map of values (I don't care about inner values)?
It's similar to unwrapping a globe's surface, but the shape varies from case to case.
This is a mesh problem. Each triangle on the model is a flat surface that can be mapped to a 2D plane, so the most naive solution, without any assumed structure, would be:
for triangle in mesh:
    # project onto the plane defined by the triangle's normal to avoid stretching
This solution is not ideal, as it places all of the UVs on top of each other. The next step would be to spread the triangles out to fill a certain space; this is the layout stage, which defines how the vertices are laid out in 2D space (see the sketch after this paragraph).
Usually it is ideal to fit the UVs within a unit square, which allows easy UV mapping from a single image.
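A minimal numpy sketch of that per-triangle projection step (the layout/packing stage is separate; the vertices are assumed to be length-3 arrays):

import numpy as np

def project_triangle(v0, v1, v2):
    # build an orthonormal basis (u, v) inside the triangle's plane
    e1 = v1 - v0
    n = np.cross(e1, v2 - v0)     # triangle normal
    u = e1 / np.linalg.norm(e1)
    v = np.cross(n, u)
    v /= np.linalg.norm(v)
    # 2D coordinates of each vertex within that plane (no stretching)
    return [(np.dot(p - v0, u), np.dot(p - v0, v)) for p in (v0, v1, v2)]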
Option 2:
Surround the object with a known 2D-mapped shape and project each triangle onto that shape based on its normal. This provides a mechanism for unwrapping the UVs in a structured manner; a common example is projecting onto a cube (cube mapping).
Option 3:
Consult academic papers and open-source libraries/tools like Blender:
https://wiki.blender.org/index.php/Doc:2.4/Manual/Textures/Mapping/UV/Unwrapping
Blender uses methods like those described above to unwrap arbitrary geometry, and there are further methods described on the Blender unwrap page. The nice thing about Blender is that you can consult the source code for the implementation of its UV unwrap methods.
Hope this is helpful.

pyopengl buffer dynamic read from numpy array

I am trying to write a module in Python which will draw a numpy array of color data (RGB) to the screen. At the moment I am using a 3-dimensional color array like this:
numpy.ones((10,10,3),dtype=np.float32,order='F') # (for 10x10 pure white tiles)
binding it to a buffer and using a vertex attribute array to broadcast the data to an array of tiles (point sprites), in this case a 10x10 array. This works fine for a static image.
But I want to be able to change the data in the array and have the buffer reflect the change without having to rebuild it from scratch.
Currently I've built the buffer with:
glBufferData(GL_ARRAY_BUFFER, buffer_data.nbytes, buffer_data, GL_DYNAMIC_DRAW)
where buffer_data is the numpy array. What (if anything) could I pass instead (some pointer into memory, perhaps)?
If you want to quickly render a rapidly changing numpy array, you might consider taking a look at glumpy. If you do go with a pure pyopengl solution I'd also be curious to see how it works.
Edit: see my answer here for an example of how to use Glumpy to view a constantly updating numpy array
glBufferData replaces the entire buffer: it allocates a new data store each time it is called.
What you want is one of the following:
glMapBuffer / glUnmapBuffer
glMapBuffer maps the buffer into client memory so that you can alter the values locally; glUnmapBuffer then hands the changes back to the GPU.
glBufferSubData
This allows you to update small sections of a buffer, instead of the entire thing.
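A hedged PyOpenGL sketch of the glBufferSubData route, reusing the buffer_data array from the question; vbo is assumed to be the buffer id you already created:

import numpy as np
from OpenGL.GL import (glBindBuffer, glBufferData, glBufferSubData,
                       GL_ARRAY_BUFFER, GL_DYNAMIC_DRAW)

buffer_data = np.ones((10, 10, 3), dtype=np.float32, order='F')

# one-time allocation, as in the question
glBindBuffer(GL_ARRAY_BUFFER, vbo)
glBufferData(GL_ARRAY_BUFFER, buffer_data.nbytes, buffer_data, GL_DYNAMIC_DRAW)

# each frame: mutate the numpy array in place, then re-upload the bytes
buffer_data[5, 5] = (1.0, 0.0, 0.0)   # e.g. turn one tile red
glBindBuffer(GL_ARRAY_BUFFER, vbo)
glBufferSubData(GL_ARRAY_BUFFER, 0, buffer_data.nbytes, buffer_data)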
It sounds like you also want some class that automatically picks up these changes.
I cannot confirm whether this is a good idea, but you could wrap or extend numpy.ndarray and override the built-in __setitem__ method.
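Something along these lines, purely as a sketch (the class and method names are hypothetical):

import numpy as np
from OpenGL.GL import glBindBuffer, glBufferSubData, GL_ARRAY_BUFFER

class WatchedArray:
    # wrap a numpy array, record writes, re-upload only when dirty
    def __init__(self, array):
        self.array = array
        self.dirty = False

    def __setitem__(self, key, value):
        self.array[key] = value
        self.dirty = True

    def flush(self, vbo):
        # call once per frame before drawing
        if self.dirty:
            glBindBuffer(GL_ARRAY_BUFFER, vbo)
            glBufferSubData(GL_ARRAY_BUFFER, 0, self.array.nbytes, self.array)
            self.dirty = False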

How do I fill "holes" in an image?

I have photo images of galaxies. There is some unwanted data in these images (such as stars or aeroplane streaks) that has been masked out. I don't want to just fill the masked areas with some mean value; I want to interpolate them from the surrounding data. How do I do that in Python?
We've tried various functions in the scipy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem designed for finding new pixels between existing pixels; we were unable to make them fill an arbitrary "hole" in the data.
What you want is called Inpainting.
OpenCV has an inpaint() function that does what you want.
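Usage is a one-liner; img is an 8-bit image and mask an 8-bit single-channel array that is non-zero exactly where the unwanted pixels were masked out (both assumed to come from your existing pipeline):

import cv2

result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

cv2.INPAINT_NS (Navier-Stokes based) is the alternative flag; try both and compare.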
What you want is not interpolation at all. Interpolation depends on the assumption that the data between known points is roughly continuous; in any non-trivial image, this will not be the case.
You actually want something like the content-aware fill in Photoshop CS5. A free alternative is available in the GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced, and trying to re-implement them is insane; a better choice would be to figure out how to call GIMP-resynthesize from your program instead.
I made my first GIMP Python script; it might help you:
my scripts
It is called a conditional filter: a matrix filter that fills every transparent pixel in an image with the mean value of its four nearest non-transparent neighbours.
Be sure to use an RGBA image in which the alpha values are only 0 and 255.
It is rough, simple, slow, and unoptimized, but bug-free.
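The same idea in plain numpy, as a rough sketch (img is a 2D float array, mask is True where data is missing); it repeatedly fills each missing pixel with the mean of its valid 4-neighbours, growing inwards from the hole's border:

import numpy as np

def fill_mean4(img, mask):
    img, mask = img.astype(float).copy(), mask.copy()
    shifts = [(slice(0, -2), slice(1, -1)),    # neighbour above
              (slice(2, None), slice(1, -1)),  # below
              (slice(1, -1), slice(0, -2)),    # left
              (slice(1, -1), slice(2, None))]  # right
    while mask.any():
        v = np.pad(img, 1, mode='edge')
        ok = np.pad(~mask, 1, mode='constant', constant_values=False)
        # sum and count of valid 4-neighbours for every pixel
        nsum = sum(np.where(ok[s], v[s], 0.0) for s in shifts)
        ncnt = sum(ok[s].astype(int) for s in shifts)
        fillable = mask & (ncnt > 0)
        if not fillable.any():
            break                              # no valid neighbours anywhere
        img[fillable] = nsum[fillable] / ncnt[fillable]
        mask &= ~fillable
    return img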
