Constructing image in Python after 2D FFT

Recently I did my own 2D FFT to get the frequency spectrum of a 512x512 image, using Python. I implemented the 2D FFT with 1D FFTs, applying the 1D transform to every row and then to every column, and I ended up with a 512x512 matrix of complex numbers.
How can I construct the resulting image from this complex array? Should I use the real values only, or should I convert the real and imaginary parts to magnitude and phase, and then use the magnitude to build up my resulting image?
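A minimal sketch of both options, assuming the 512x512 complex result is in a numpy array called F (the variable name is an assumption): to view the frequency spectrum you normally plot the log-scaled magnitude of the shifted transform, while the original image comes back from the inverse FFT, of which you keep the real part.

```python
import numpy as np
import matplotlib.pyplot as plt

# F: 512x512 complex array produced by the 2D FFT (assumed variable name)

# Spectrum for display: shift DC to the centre, take the magnitude, log-scale it
spectrum = np.log1p(np.abs(np.fft.fftshift(F)))
plt.imshow(spectrum, cmap="gray")
plt.show()

# Reconstruction: inverse 2D FFT; for a real input image the imaginary part
# should be numerically negligible, so keep only the real part
reconstructed = np.real(np.fft.ifft2(F))
```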

Related

Numpy Extract Data from Compressed Sparse Column Format

I have a mat file with sparse data for around 7000 images with 512x512 dimensions, stored in a flattened format (so rows of 262144), and I'm using scipy's loadmat method to turn this sparse information into Compressed Sparse Column format. The data inside each of these images is a smaller image, usually around 25x25 pixels, somewhere inside the 512x512 region, though the actual size of the smaller image is not consistent and changes for each image. I want to take the sparse information from this format and turn it into a numpy array with only the data in the smaller image; so if I have an image that's 512x512 but there's a circle in a 20x20 area in the center, I want just the 20x20 area with the circle and not the rest of the 512x512 image. I know that I can use .A to turn the image into a non-sparse format and get a 512x512 numpy array, but this option isn't ideal for my RAM.
Is there a way to extract the smaller images stored in a sparse format without turning the sparse data into dense data?
I tried turning the sparse data into dense data, reshaping it into a 512x512 image, and then writing a program to find the top, bottom, left, and right edges of the image by checking for the first occurrence of data from each side, but this whole process seemed horribly inefficient.
Sorry about the small amount of information I provided; I ended up figuring it out. Scipy's loadmat function, when used to extract sparse data from a mat file, returns a csc_matrix. That sparse matrix has a .nonzero() method that returns the indices of every non-zero element. I reshaped the csc matrix into 512x512, used .nonzero() to get the non-zero elements in 2D, and then used those indices to figure out the height and width of the image I was interested in. Then I created a numpy matrix of zeros the size of the image I wanted, and set its elements to the pixels I wanted by indexing into my sparse matrix (after calling .tocsr() on it).
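A short sketch of the approach described above, assuming the .mat file stores each flattened image as one row of a sparse matrix under the key 'data' (the key, file name and variable names are assumptions):

```python
import numpy as np
from scipy.io import loadmat

mat = loadmat("images.mat")                  # hypothetical file name
sparse_rows = mat["data"].tocsr()            # loadmat returns a csc_matrix; csr is easier to slice by row

def extract_roi(row, shape=(512, 512)):
    """Return the dense bounding box of the non-zero pixels of one flattened image."""
    img = row.reshape(shape).tocsr()         # still sparse, now 512x512
    rows, cols = img.nonzero()               # indices of the non-zero pixels
    r0, r1 = rows.min(), rows.max() + 1
    c0, c1 = cols.min(), cols.max() + 1
    return img[r0:r1, c0:c1].toarray()       # densify only the small ROI

roi = extract_roi(sparse_rows[0])
```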

How to extract ROI of a 3D Model with Python

I want to extract a specific ROI out of a 3D model that I created, but I have no idea how to do this. I also performed a binary segmentation on one of the images I used for the mesh generation. Now I want to cut my segmented region out of the mesh.
After a few days of research I still have no clue how to do this exactly. I have the following workflow in my head:
generate mesh with photogrammetry approach
create binary mask for specific image (most middle one)
convert mesh to 3d array (if possible?)
do an array operation: combine the 2D array (binary mask) with the 3D array and set all values in the 3D array to 0 that are not in my ROI
create mesh out of new 3D array
I'm still not sure if this could work. Is there any standard python approach to this?
I appreciate your time.
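One assumption-heavy way to sketch steps 3-5 above uses trimesh and scikit-image, assuming the binary mask can simply be broadcast along one axis of the voxel grid (in practice you would more likely project the mask through the camera pose of that image); the file names and the voxel pitch below are placeholders.

```python
import numpy as np
import trimesh
from skimage import measure

mesh = trimesh.load("model.ply")              # step 1: mesh from photogrammetry (hypothetical file)
mask = np.load("mask.npy").astype(bool)       # step 2: binary mask, shape (H, W) (hypothetical file)

# Step 3: voxelize the mesh into a 3D occupancy array
voxels = mesh.voxelized(pitch=1.0)            # pitch chosen arbitrarily here
occupancy = voxels.matrix.copy()              # boolean array (nx, ny, nz)

# Step 4: resample the mask to the voxel grid's XY extent (nearest neighbour)
# and zero out everything outside the ROI by broadcasting along the z axis
idx0 = np.linspace(0, mask.shape[0] - 1, occupancy.shape[0]).round().astype(int)
idx1 = np.linspace(0, mask.shape[1] - 1, occupancy.shape[1]).round().astype(int)
occupancy &= mask[np.ix_(idx0, idx1)][:, :, None]

# Step 5: turn the cropped occupancy grid back into a mesh with marching cubes
# (vertices end up in voxel-grid coordinates, not the original model units)
verts, faces, _, _ = measure.marching_cubes(occupancy.astype(float), level=0.5)
roi_mesh = trimesh.Trimesh(vertices=verts, faces=faces)
roi_mesh.export("roi.ply")
```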

How to (generally!) deal with out-of-range output pixel destinations

I'm working on a perspective transform application involving transforming 3D points to 2D camera pixels. It is a purely mathematical model, because I'm preparing to use it on hardware that I don't really have access to (so I'm making up focal length and offset values for the intrinsic camera matrix).
When I do the mapping, depending on the xyz location of the camera, I get huge differences in where my transformed image ends up, and I have to make the matrix where I'm placing the output pixels really large. (I'm mapping an image of 1000x1000 pixels to an image of about 600x600 pixels, but it's located around 6000, so I have to make my output matrix 7000x7000, which takes a long time to plt.imshow.) I have no use for the actual location of the pixels, because I'm only concerned with what the remapped image looks like.
I was wondering how people deal with this issue.
I can think of just cropping the image down to the area that is non-zero (where my pixels are actually mapped to), as in:
How to crop a numpy 2d array to non-zero values?
but that still requires me to spend space and time allocating a 7000x7000 destination matrix.
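A common way around this is to translate the transform itself so the warped image lands at the origin, and allocate only a tightly-sized output. A sketch with OpenCV, where the 3x3 projective transform H and the source image img are placeholders:

```python
import numpy as np
import cv2

h, w = img.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
warped_corners = cv2.perspectiveTransform(corners, H)

# Bounding box of where the image actually lands (e.g. around x, y ~ 6000)
x_min, y_min = np.floor(warped_corners.min(axis=(0, 1))).astype(int)
x_max, y_max = np.ceil(warped_corners.max(axis=(0, 1))).astype(int)

# Prepend a translation so the warped image starts at (0, 0)
T = np.array([[1, 0, -x_min],
              [0, 1, -y_min],
              [0, 0, 1]], dtype=np.float64)
out_size = (x_max - x_min, y_max - y_min)            # (width, height), roughly 600x600
warped = cv2.warpPerspective(img, T @ H, out_size)   # no 7000x7000 allocation needed
```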

finding a local maximum in a 3d array (array of images) in python

I'm trying to implement a blob detector based on LoG (Laplacian of Gaussian); the steps are:
create an array of n levels of LoG filters
use each of the filters on the input image to create a 3D array of h*w*n, where h = height, w = width and n = number of levels.
find the local maxima and circle the blobs in the original image.
I already created the filters and the 3d array (which is an array of 2d images).
I used padding to make sure I don't have any problems around the borders (which includes creating a constant border for each image and creating 2 extra empty images).
Now I'm trying to figure out how to find the local maxima in the array.
I need to compare each pixel to its 26 neighbours (8 in the same image and the 9 pixels in each of the two adjacent scales).
The brute-force way of checking each pixel value directly seems ugly and not very efficient.
What's the best way to find a local maximum in Python using OpenCV?
I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3D, then it is also a local maximum in its own 2D slice, therefore:
Dilate each image in the array with a 3x3 kernel, keep as candidate maxima the points whose intensity is unchanged.
Brute-force test the candidates against their upper and lower slices.
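A sketch of that two-step search, assuming the LoG responses are stacked in an h x w x n float array called stack (the variable name and layout are assumptions):

```python
import numpy as np
import cv2

kernel = np.ones((3, 3), np.uint8)
n = stack.shape[2]
maxima = []
for i in range(n):
    sl = np.ascontiguousarray(stack[:, :, i], dtype=np.float32)
    # Step 1: pixels left unchanged by a 3x3 dilation are 2D local maxima (candidates)
    candidates = cv2.dilate(sl, kernel) == sl
    ys, xs = np.nonzero(candidates)
    # Step 2: brute-force check each candidate against the adjacent scales
    lo, hi = max(i - 1, 0), min(i + 1, n - 1)
    for y, x in zip(ys, xs):
        neighbourhood = stack[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2, lo:hi + 1]
        if sl[y, x] >= neighbourhood.max():
            maxima.append((y, x, i))   # (row, col, scale index) of a 3D local maximum
```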

Image warping by means of a vector field using Python

I have two images of the same size and I have computed a vector field to warp the second image onto the first one.
However, as my vector field is computed over a grid with 10-pixel spacing along both directions, I would like to define such a vector field for all points of my image.
Thus, I am wondering how I could achieve this.
Possibilities:
interpolate between the points: 2D interpolation over a regular grid should be fast, using scipy
compute your vector field at 1-pixel resolution
reduce the size of your original image (using PIL) and use the 10-pixel vector field
In either case it is a tradeoff between image size/quality and speed.
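A sketch of the first option, assuming the coarse field stores (dy, dx) displacements on the 10-pixel grid in an array called field of shape (H/10, W/10, 2) and that img2 is grayscale; the names and layout are assumptions:

```python
import numpy as np
from scipy.ndimage import zoom, map_coordinates

H, W = img2.shape[:2]

# Upsample each displacement component from the coarse grid to full resolution
dy = zoom(field[:, :, 0], (H / field.shape[0], W / field.shape[1]), order=1)
dx = zoom(field[:, :, 1], (H / field.shape[0], W / field.shape[1]), order=1)

# Sample img2 at the displaced coordinates to warp it onto img1's frame
yy, xx = np.mgrid[0:H, 0:W]
warped = map_coordinates(img2, [yy + dy, xx + dx], order=1)
```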
