Image warping by means of a vector field using Python

I have two images of the same size and I have computed a vector field to warp the second image onto the first one.
However, my vector field is computed over a grid with 10-pixel spacing along both directions, and I would like to extend it to every pixel of my image.
Thus, I am wondering how I could achieve this.

Possibilities:
- interpolate between the points: 2D interpolation over a regular grid should be fast using scipy (see the sketch after this list),
- compute your vector field at a 1-pixel resolution,
- reduce the size of your original image (using PIL) and use the 10-pixel vector field.
Either way it is a tradeoff between image size/quality and speed.
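For the first option, a minimal sketch with scipy's RegularGridInterpolator could look like the following; the array names, image size and random data are stand-ins for your computed field:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

h, w = 500, 500                        # assumed image size
ys = np.arange(0, h, 10)               # coarse grid, 10-pixel spacing
xs = np.arange(0, w, 10)
vx = np.random.rand(len(ys), len(xs))  # stand-in for your x displacements
vy = np.random.rand(len(ys), len(xs))  # stand-in for your y displacements

# Linear interpolators over the coarse grid; fill_value=None extrapolates
# for the few pixels beyond the last grid line.
fx = RegularGridInterpolator((ys, xs), vx, bounds_error=False, fill_value=None)
fy = RegularGridInterpolator((ys, xs), vy, bounds_error=False, fill_value=None)

yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
pts = np.stack([yy.ravel(), xx.ravel()], axis=-1)
vx_full = fx(pts).reshape(h, w)        # dense per-pixel vector field
vy_full = fy(pts).reshape(h, w)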

Related

How to plot 2-D navigation data points on a floor map using Matlab/Python?

I want to show the tracking result of my indoor localization algorithm with respect to the ground truth reference path on the floor map. The floor plan and the walking route representing the ground truth are as follows:
Here, the red line is the ground truth route. The horizontal direction of the image represents the x-axis and is compressed (the original x-axis extent is much larger) to save space. The vertical direction is the y-axis and is drawn to scale.
I want to draw the localization estimation points (2-D) on it. I tried to do it using Origin and got the following image.
As seen in the figure above, the image does not match the floor plan precisely (using a log scale on the y-axis can reduce the y-axis extent, but it does not yield a complete solution in my case).
To summarize:
What I have: (a) A set of 2-D coordinate points from each localization algorithm (I'm comparing my method with two other methods, so there are 3 sets of 2-D coordinate points) and (b) a floor plan image.
What I want: To plot the sets of 2-D coordinate points on the floor plan image.
If anyone could share sample Matlab/Python code to plot the 2-D coordinates, I'd highly appreciate it.
Thank you.
To plot on top of an image, you have to provide the necessary scaling information. This can be achieved using the image function, passing x, y and C: https://de.mathworks.com/help/matlab/ref/image.html?s_tid=doc_ta
I don't know how your floor plan is scaled, but the resulting code should be something like:
image(x,y,C) % x and y provide the scaling information, C is the image.
hold on
plot(...) % code you already have
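In Python, a roughly equivalent sketch uses the extent argument of matplotlib's imshow to scale the floor plan; the file name, coordinate ranges and point sets below are placeholders:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

floor = mpimg.imread('floorplan.png')         # placeholder file name
# extent=(xmin, xmax, ymin, ymax) maps the image onto your coordinates
plt.imshow(floor, extent=(0, 100, 0, 50))
x_truth, y_truth = [5, 30, 60], [10, 25, 40]  # stand-ins for your data
x_est, y_est = [6, 28, 62], [11, 27, 38]
plt.plot(x_truth, y_truth, 'r-', label='ground truth')
plt.plot(x_est, y_est, 'b.', label='estimated positions')
plt.legend()
plt.show()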

How to (generally!) deal with out-of-range output pixel destinations

I'm working on a perspective transform application involving transforming 3D points to 2D camera pixels. It is a purely mathematical model, because I'm preparing to use it on hardware that I don't really have access to (so I'm making up focal length and offset values for the intrinsic camera matrix).
When I do the mapping, depending on the xyz location of the camera, I get huge differences in where my transformed image ends up, and I have to make the matrix where I'm writing the pixels really large. (I'm mapping an image of 1000x1000 pixels to an image of about 600x600 pixels, but it's located around 6000, so I have to make my output matrix 7000x7000, which takes a long time to plt.imshow.) I have no use for the actual location of the pixels, because I'm only concerned with what the remapped image looks like.
I was wondering how people dealt with this issue:
I can think of just cropping the image down to the non-zero area (where my pixels are actually mapped to), as in:
How to crop a numpy 2d array to non-zero values?
but that still requires me to spend the space and time to allocate a 7000x7000 destination matrix.
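One way around allocating the huge canvas is to work out where the output lands before warping: project the source corners, then fold a compensating translation into the transform so the result starts at (0, 0). The sketch below assumes the mapping can be expressed as a 3x3 homography H (the values here are made up):

import numpy as np
import cv2

H = np.array([[1.0, 0.0, 6000.0],     # made-up transform placing the
              [0.0, 1.0, 6000.0],     # output around (6000, 6000)
              [0.0, 0.0, 1.0]])
src = np.zeros((1000, 1000), np.uint8)

# Project the four source corners to find the output bounding box.
corners = np.float32([[0, 0], [999, 0], [999, 999], [0, 999]]).reshape(-1, 1, 2)
warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
xmin, ymin = np.floor(warped.min(axis=0)).astype(int)
xmax, ymax = np.ceil(warped.max(axis=0)).astype(int)

# Shift the transform so the image lands at the origin, then warp into
# a tight destination instead of a 7000x7000 one.
T = np.array([[1, 0, -xmin], [0, 1, -ymin], [0, 0, 1]], dtype=float)
out = cv2.warpPerspective(src, T @ H, (xmax - xmin + 1, ymax - ymin + 1))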

SciPy: Interpolation/Smoothing without edge bleeding

I have a raster stored in a numpy array holding an aerial photo of an area that may be any shape. There is a good amount of noise in the data within the area that needs to be smoothed out. The edge of the image (where no data is) is marked by 0s that extend to the edge of the raster.
I have tried using the Gaussian filter in scipy.ndimage.filters, but that reduces the values of the pixels at the edge of the data set, and I can't find a flag to set a nodata value. Is there a better way to do this in Python?
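There is no nodata flag in scipy's gaussian_filter, but a standard workaround (sometimes called normalized convolution) is to smooth the data and a validity mask separately and then divide, which stops the zeros from bleeding in. A sketch with made-up data:

import numpy as np
from scipy.ndimage import gaussian_filter

img = np.zeros((200, 200))
img[40:160, 40:160] = 100 + 10 * np.random.rand(120, 120)  # noisy area

mask = (img != 0).astype(float)
smoothed = gaussian_filter(img * mask, sigma=3)
weights = gaussian_filter(mask, sigma=3)
# Divide out the weight of the valid pixels; keep nodata at 0.
result = np.where(mask > 0, smoothed / np.maximum(weights, 1e-12), 0)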

Determining the pattern orientation of a spatiotemporal image

How can I obtain the average direction of the pattern shown in the figure below? It is the direction of the red arrow relative to the yellow (horizontal) line. Any ideas for an approach? I couldn't figure out a way to approach it. This is a spatio-temporal image created from a video. Thank you.
Here is my original image:
The simplest approach would be to compute the gradient vector (x derivative and y derivative) and find its direction at each pixel (atan2(y,x)). The average orientation is what you want, not the average direction (opposite directions would cancel out), so take the angle modulo pi, then average across the image.
The best way to compute image gradients is through the Gaussian gradients.
The structure tensor is the more robust way of accomplishing this. In short, it computes local averages of the gradient vector to reduce the effect of noise. It does this by computing the outer product of the gradient vector with itself, which produces a symmetric matrix. The individual components of this matrix can then be locally averaged (i.e. apply a smoothing filter). This is similar to computing the angle of the vectors, doubling the angles to make vectors in opposite directions equal, then averaging them.
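A rough sketch of both steps with scipy (the sigma values are arbitrary, and the random image stands in for the spatiotemporal one):

import numpy as np
from scipy import ndimage

img = np.random.rand(128, 128).astype(float)

# Gaussian gradients: order=1 along an axis differentiates that axis.
gx = ndimage.gaussian_filter(img, sigma=2, order=(0, 1))
gy = ndimage.gaussian_filter(img, sigma=2, order=(1, 0))

# Structure tensor components, locally averaged with a wider Gaussian.
Jxx = ndimage.gaussian_filter(gx * gx, sigma=5)
Jxy = ndimage.gaussian_filter(gx * gy, sigma=5)
Jyy = ndimage.gaussian_filter(gy * gy, sigma=5)

# Average gradient orientation over the whole image; the pattern itself
# runs perpendicular to this angle.
orientation = 0.5 * np.arctan2(2 * Jxy.mean(), (Jxx - Jyy).mean())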
Note that you can apply either of these solutions in 3D (you can think of the video data as 2D + time = 3D). That way, you compute both the speed and the direction of motion within the 2D frame, rather than just the speed along the direction in which you extracted the 2D image you show in the question. The image gradient and the concept of the structure tensor easily extend to 3D. This 3D approach is similar to the approach by Lucas-Kanade for optical flow. If you follow that link to Wikipedia, you'll see it uses the structure tensor in 2D, and adds gradients along the time dimension.
It might be useful to try the Fourier transform.
In your case you should get two vertical lines in the middle of the transformed image, corresponding to the information encountered when traveling vertically through the image.
On the other hand, there shouldn't be a horizontal line, since when traveling horizontally through the image there is little information (little change).
For example, you can use this online site to play with Fourier transforms:
https://www.ejectamenta.com/Fourifier-fullscreen/
It might sound like the problem remains the same but in fact it is much easier now.
The 2D pattern is converted into dominant lines which are quite easy to find in the transformed image.
For example, you can search for the strongest pixels in the image and determine whether they are more likely to form a horizontal or a vertical line, or determine the angle of the dominant line. Then rotate by 90 degrees, since a line in the frequency domain runs perpendicular to the stripes in the spatial domain.
For example see this image of wood grain and the resulting transformed image:
And don't worry about the two lines: the spectrum is symmetric, so just ignore three quarters of the image and look at one quadrant.
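A minimal numpy sketch of this idea; the random image is a stand-in for the pattern, and crude DC suppression replaces proper windowing:

import numpy as np

img = np.random.rand(256, 256)

F = np.fft.fftshift(np.fft.fft2(img))   # centered spectrum
mag = np.abs(F)
cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
mag[cy, cx] = 0                         # suppress the DC peak

# Strongest remaining frequency; its direction is perpendicular to the
# stripes in the spatial domain (hence the 90-degree rotation).
py, px = np.unravel_index(np.argmax(mag), mag.shape)
angle = np.degrees(np.arctan2(py - cy, px - cx))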
I recommend giving the Hough transform a go; it is available in OpenCV. The Hough transform maps lines to angles and might be useful in your case.
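A bare-bones OpenCV sketch (the file name and thresholds are placeholders, and a median is only one way to summarize the detected angles):

import cv2
import numpy as np

img = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE)  # placeholder file
edges = cv2.Canny(img, 50, 150)

# Each detected line is (rho, theta); theta is the angle of the line's
# normal, so the line itself runs at theta - 90 degrees.
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
if lines is not None:
    thetas = lines[:, 0, 1]
    print(np.degrees(np.median(thetas)) - 90)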

finding a local maximum in a 3d array (array of images) in python

I'm trying to implement a blob detector based on LOG, the steps are:
creating an array of n levels of LOG filters
use each of the filters on the input image to create a 3d array of h*w*n where h = height, w = width and n = number of levels.
find a local maxima and circle the blob in the original image.
I already created the filters and the 3d array (which is an array of 2d images).
I used padding to make sure I don't have any problems around the borders (which includes creating a constant border for each image and creating 2 extra empty images).
Now I'm trying to figure out how to find the local maxima in the array.
I need to compare each pixel to its 26 neighbours (8 in the same picture and the 9 pixels in each of the two adjacent scales)
The brute force way of checking the pixel value directly seems ugly and not very efficient.
What's the best way to find a local maximum point in Python using OpenCV?
I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3d, then it is also a local maximum in the 2d slice it belongs to, therefore:
- Dilate each image in the array with a 3x3 kernel, and keep as candidate maxima the points whose intensity is unchanged by the dilation.
- Brute-force test the candidates against their upper and lower slices.
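A sketch of that two-step scheme; the random stack stands in for the LoG pyramid, and the first/last slices are skipped in the spirit of the question's two empty padding images:

import cv2
import numpy as np

stack = np.random.rand(5, 100, 100).astype(np.float32)  # (levels, h, w)
kernel = np.ones((3, 3), np.uint8)

maxima = []
for i in range(1, stack.shape[0] - 1):
    img = stack[i]
    # Candidates: pixels unchanged by a 3x3 dilation are 2d local maxima.
    dilated = cv2.dilate(img, kernel)
    for y, x in zip(*np.where(img == dilated)):
        if 0 < y < img.shape[0] - 1 and 0 < x < img.shape[1] - 1:
            # Brute-force check of the full 3x3x3 neighbourhood.
            nbhd = stack[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
            if img[y, x] >= nbhd.max():
                maxima.append((i, y, x))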
