Affine transformation with 3 points and merging images - python

I am writing a Python program that shows 2 thermal (low-res) drone shots next to each other. The user then chooses 3 points on both pictures, after which the second picture should be transformed and merged with the first. In the following step the program should show the merged picture with the third picture from the flight next to it, etc.
At the moment I'm looking at ways to do the transformation and merging of the images.
I was thinking of reading the images as arrays (a list of lists (= rows)), manually calculating the new location of each pixel with the transformation formula (at the moment I have calculated the transformation matrix with OpenCV), creating a new empty list of lists and pasting each pixel value into the corresponding location in the new list of lists. Afterward, I wanted to fill all the remaining empty 'cells' with zeros.
To merge them I would again create a new list of lists, loop through both images taking the maximum or average value for each cell, and fill the remaining empty 'cells' with zeros. This resulting merged image I would then show in the window next to the following image from the flight, as I mentioned.
I still don't know how I'm going to predict the size of my list of lists before calculating the new coordinates of the pixel values. Also, this approach seems long and inefficient, so I was wondering if there is an easier way to do this using Wand or OpenCV (or any other library).

I did everything manually and it works. One problem I still have is diagonal NoData lines through the transformed image. I suppose this is the result of rounding the coordinates, because I can't use floats (which the transformation formula produces) as indices in the lists. (The 3 sets of points weren't chosen super carefully for this example, but you get the point.)
EDIT: I solved the diagonal lines by filling in the calculated value at the center pixel and the 8 pixels bordering it. It makes the image blurrier, but it's the only solution I've got.
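For what it's worth, the diagonal gaps are a known artifact of forward mapping: rounding pushes some destination pixels together and leaves others empty. The standard fix is inverse mapping: iterate over the destination grid and sample the source, so every output pixel gets exactly one value. Below is a minimal numpy sketch of that idea; the function name `warp_inverse` is illustrative, and `M` is assumed to be the 2x3 affine matrix you already get from `cv2.getAffineTransform`. In practice `cv2.warpAffine` does exactly this (with proper interpolation), so you can skip the manual version entirely.

```python
import numpy as np

def warp_inverse(src, M, out_shape, fill=0):
    """Warp src by the 2x3 affine matrix M using inverse mapping:
    every destination pixel samples the source, so no gaps appear."""
    # Augment M to 3x3 so it can be inverted.
    Minv = np.linalg.inv(np.vstack([M, [0.0, 0.0, 1.0]]))
    h, w = out_shape
    ys, xs = np.indices((h, w))
    # Homogeneous (x, y, 1) coordinates of every destination pixel.
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, _ = Minv @ dst
    sx = np.rint(sx).astype(int)
    sy = np.rint(sy).astype(int)
    # Only keep samples that land inside the source image.
    ok = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out = np.full(h * w, fill, dtype=src.dtype)
    out[ok] = src[sy[ok], sx[ok]]
    return out.reshape(h, w)
```

Merging then needs no second list of lists either: `merged = np.maximum(ref, warp_inverse(flt, M, ref.shape))` takes the per-pixel maximum in one call.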

Related

Detecting B&W Clusters of Pixels

I am relatively new to Python and would like some help with some ideas to solve this problem...
I have a black and white image as so:
black image with white dots
Essentially, I need to get the midpoint (or honestly any point, as long as it's consistent across all of the dots) of each of those white dots. The program could spit out a list of coordinate points for each of those dots.
I am doing this because I want a list of the distances from each dot to the bottom of the image. I said the midpoint doesn't matter; it could be any point, as long as it's consistent across the dots, because I am comparing the values of one image to the values of another measured in the same way.
I had tried to split the image into rows and then count the number of pixels in each row, but that felt limiting and wouldn't really do the best job.
I was thinking of maybe making a loop that looks at one pixel and then checks the pixels around it until it reaches an edge, or something like that, but it seems like that would take a lot of computing power even in B&W, as I have to run this through hundreds of images of approximately 10 million pixels each.
Possibly a solution related to converting the coordinates of the image into a graph and performing cluster analysis?
If you have a binary image, then I would use skimage to label the regions and then get their properties. I think this tutorial should get you moving on the task you are hoping to accomplish:
https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html
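To make that concrete, here is a minimal sketch of the label-then-measure idea. It uses `scipy.ndimage`, which offers the same labeling primitives (`skimage.measure.label` plus `regionprops` from the linked tutorial are the direct equivalents); the function name `dot_centroids` is just illustrative.

```python
import numpy as np
from scipy import ndimage

def dot_centroids(binary):
    """Label each connected white dot and return one (row, col)
    centroid per dot."""
    labels, n = ndimage.label(binary)
    return ndimage.center_of_mass(binary, labels, range(1, n + 1))
```

Distances to the bottom edge are then just `binary.shape[0] - row` per centroid. Labeling is linear in the number of pixels, so it scales fine to 10-megapixel images.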

Identify groups of numbers in an array (group pixels of an image)

This question is related to another question I asked before here. Now I have an array that contains 0s, 1s and -1s. Consider it as an image where the background is 0s; it has groups of 1s and -1s.
This is that array opened in Excel. The highlighted groups are 1s (in some cases they can be -1s). There can be a maximum of 4 groups in one array. I want to separate those groups into left, right, top and bottom, each with its values and original indices.
Referring to the previous question, I am trying to find the points on the humps and hollows in the puzzle pieces. If I can group them separately, then I know how to find the index of the point I want.
I tried to separate them like this, but it doesn't work for all the pieces; sometimes it can cut through hollows.
Thanks in advance for any help!
Since your data is a 2D array, have you tried using an approach like region growing to segment it?
https://en.wikipedia.org/wiki/Region_growing
Basically, you need to start with a seed point and grow the region by considering neighbouring points and whether or not they fit the criteria for your region.
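Connected-component labeling, which `scipy.ndimage` provides out of the box, amounts to region growing from every unvisited seed and separates the groups directly. A sketch under that assumption (the helper name `group_slices` is made up); the bounding-box slices tell you where each group sits, so sorting them by position gives you left/right/top/bottom:

```python
import numpy as np
from scipy import ndimage

def group_slices(arr):
    """Return (value, bounding_slices) for every connected group
    of 1s or -1s in an array of 0s, 1s and -1s."""
    groups = []
    for value in (1, -1):
        labels, n = ndimage.label(arr == value)
        # find_objects yields one (row_slice, col_slice) per label
        groups.extend((value, sl) for sl in ndimage.find_objects(labels))
    return groups
```

Slicing the original array with each pair of slices recovers the group's values along with its original indices.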

How to create heat map from set of tracks?

At the moment I have a set of tracks (in separate files) in 3 dimensions. My goal is to create at least 2 2D heat maps (XY/XZ relations) from these tracks based off of how many distinct tracks cross a region of some arbitrary size.
However, it would be ideal to have a 3D heatmap!
Let's say the region is 10x10 and the tracks span a 100x100 region (it's actually 480x640 in reality, but 100x100 is simpler to discuss).
I have a notion of how to do this, but it involves an additional 2-3 matrices per track and does not seem like an efficient or easy way to code this.
Essentially my idea revolves around processing each track individually:
Start with an appropriately sized int matrix that counts how many tracks appeared in each region of interest (ROI), initialized to 0 for every entry.
Keep a second, equally sized boolean matrix; iterate over the track list and set the corresponding bool entry to true whenever the track falls in that area.
Increment the int matrix by 1 wherever the bool matrix is true.
Reset everything but the int matrix and start over with the next track file.
Finally, create a graph of boxes with the intensity/color corresponding to the int matrix.
But I was wondering if there is a cleaner or more efficient way to do this.
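One cleaner option: the per-track boolean matrix can be replaced by a set of visited cells, leaving one int matrix in total. A sketch assuming tracks arrive as (N, 2) arrays of x, y points already inside the region (the name `track_heatmap` and the 100x100/10 defaults just mirror the example above):

```python
import numpy as np

def track_heatmap(tracks, extent=(100, 100), cell=10):
    """Count, per cell, how many distinct tracks touch it."""
    heat = np.zeros((extent[1] // cell, extent[0] // cell), dtype=int)
    for pts in tracks:
        ix = (pts[:, 0] // cell).astype(int)
        iy = (pts[:, 1] // cell).astype(int)
        # The set stands in for the boolean matrix: each cell is
        # counted at most once per track, however often it is hit.
        for x, y in set(zip(ix.tolist(), iy.tolist())):
            heat[y, x] += 1
    return heat
```

The same idea extends to 3D with (x, y, z) triples and a 3D array, and `matplotlib.pyplot.imshow(heat)` gives the colored-box plot directly.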

Indexing numpy indices like array with list of 2D points

I am using python 2.7
I have an array of indices created by
ids=np.indices((20,20))
ids[0] is filled with all the vertical coordinates and
ids[1] is filled with all the horizontal coordinates.
ids has a shape of (2, 20, 20).
I have a boolean mask of shape (20,20)
I need to have a list of ids that correspond to the ones marked as true in the mask.
I am trying to do this with mid = ids[:, mask].T, which gives me a list of this sort:
[2,17]
[4,6]
[1,19]
[18,4]
and so on. They are saved in an array called mid
Then, I need all those coordinates in mid to find the values in another array. Meaning I need
anotherarray([2,17])
I have not managed to use the list mid for fancy indexing; can someone help me?
I have
anotherarray[mid[0],mid[1]]
and it doesn't work. I also have
anotherarray[tuple(mid)]
and it doesn't work
Edit (read only if you care about context): I wanted to add context to show why I think I need the extra indices. Maybe I don't; that is what I want to find out to make this efficient.
This is a registration problem, a very simple one. I have two images: a reference and a floating image, as seen below. Reference to the left, floating to the right.
The reference image and the floating image are in different coordinate spaces. I have points marked as you can see in the images. I find an affine transformation between each other.
The region delimited by the line is my region of interest. I send the coordinates of that region in the floating space to the reference space.
There in the reference space, I find what pixels are found inside the region and they become the mask array, containing the information of both in and outer pixels.
But I only care about those inside, so I want only the indices of those pixels inside the mask in the reference space and save them using mid=ids[:,mask] .
Once I have those points, I transform them back to the floating space, and at those new indices I need to look up the intensity. Those intensities are the ones that will be written back into the reference at their corresponding indices. That is why I think I need the indices of those points in both reference and floating space, plus the intensities of the image. That other image is the anotherarray from which I want only the transformed masked pixels.
So there you go, that is the explanation if you care about it. Thank you for reading and answering.
A few tips: You can get mid directly from mask using np.argwhere(mask). Probably more convenient for your purpose is np.where which you can use like mi, mj = np.where(mask) and then anotherarray[mi, mj].
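A short sketch of both routes (array names follow the question; `anotherarray` here is just a demo array). The reason `anotherarray[mid[0], mid[1]]` fails is that after the transpose, `mid[0]` is the first *point* [2, 17], not the list of row coordinates, so you need the columns of mid instead:

```python
import numpy as np

ids = np.indices((20, 20))
mask = np.zeros((20, 20), dtype=bool)
mask[2, 17] = mask[4, 6] = True

anotherarray = np.arange(400).reshape(20, 20)

# Route 1: mid as an (N, 2) array of points, as in the question.
mid = ids[:, mask].T
# Index rows with column 0 and columns with column 1
# (equivalently: anotherarray[tuple(mid.T)]).
vals = anotherarray[mid[:, 0], mid[:, 1]]

# Route 2: skip mid entirely.
mi, mj = np.where(mask)
same = anotherarray[mi, mj]
```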

Linear shift between 2 sets of coordinates

My Problem is the following:
For my work I need to compare images of scanned photographic plates with a catalogue of a sample of known stars within the general area of the sky the plates cover (I call it the master catalogue). To that end I extract information, like the brightness on the image and the position in the sky, of the objects in the images and save it in tables. I then use python to create a polynomial fit for the calibration of the magnitude of the stars in the image.
That works up to a certain accuracy pretty well, but unfortunately not well enough, since there is a small shift between the coordinates the object has in the photographic plates and in the master catalogue.
Here the green circles indicate the positions (center of the circle) of objects in the master catalogue. As you can see the actual stars are always situated to the upper left of the objects in the master catalogue.
I have looked a little bit in the comparison of images (i.e. How to detect a shift between images) but I'm a little at a loss now, because I'm not actually comparing images but arrays with the coordinates of the objects. An additional problem here is that (as you can see in the image) there are objects in the master catalogue that are not visible on the plates and not all plates have the same depth (meaning some show more stars than others do).
What I would like to know is a way to find and correct the linear shift between the 2 arrays of different size of coordinates in python. There shouldn't be any rotations, so it is just a shift in x and y directions. The arrays are normal numpy recarrays.
I would change @OphirYoktan's suggestion slightly. You have these circles; I assume you know the radius, and you have that radius value for a reason.
Instead of randomly choosing points, filter the master catalogue for x, y within radius of your sample. Then compute however many vectors you need for all possible master catalogue entries within range of your sample. Do this repeatedly, then collect a histogram of the vectors. Presumably a small number will occur repeatedly; those are the likely true translations. (Ideally, "small number" == 1.)
There are several possible solutions
Note - these are high level pointers, you'll need some work to convert it to working code
The original solution (cross correlation) can be adapted to the current data structure, and should work
I believe that RANSAC will be better in your case.
Basically it means: create a model based on a small number of data points (the minimal number required to define a relevant model), and verify its correctness using the full data set.
Specifically, if you have only translation to consider (and not scale):
Select one of your points.
Match it to a random point in the catalogue (you may make "educated guesses" if you have a prior about which translations are more likely).
This matching gives you a candidate translation.
Verify that this translation matches the rest of your points.
Repeat until you find a good match.
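A minimal sketch of those steps (the name `ransac_shift`, the tolerance, and the iteration count are all arbitrary choices, not a reference implementation):

```python
import numpy as np

def ransac_shift(plate, catalogue, tol=1.0, iters=200, seed=0):
    """Propose a translation from one random point pair; keep the one
    that matches the most plate points to the catalogue."""
    rng = np.random.default_rng(seed)
    best_v, best_score = None, -1
    for _ in range(iters):
        # Model from a minimal sample: one plate/catalogue pair.
        a = plate[rng.integers(len(plate))]
        b = catalogue[rng.integers(len(catalogue))]
        v = b - a
        # Score: plate points with a catalogue point within tol.
        d = np.linalg.norm((plate + v)[:, None] - catalogue[None, :], axis=2)
        score = int((d.min(axis=1) < tol).sum())
        if score > best_score:
            best_v, best_score = v, score
    return best_v
```

Catalogue objects missing from the plate (and vice versa) only lower the score; they don't break the fit, which is why this copes with plates of different depth.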
I'm assuming here the objects aren't necessarily in the same order in both the photo plate and master catalogue.
Consider the set of position vectors, A, of the objects in the photo plate, and the set of position vectors, B, of the objects in the master catalogue. You're looking for a vector, v, such that for each a in A, a + v is approximately some element of B.
The most obvious algorithm to me would be: for each a and each b, let v = b - a. Now, for each element of A, check that there is a corresponding element of B that is sufficiently close (within some distance e that you choose) to that element plus v. Once you find a v that meets this condition, v is your shift.
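That exhaustive pairing is also the vector histogram from the first answer; binned, it fits in a few lines. `find_shift` and the bin size are illustrative choices:

```python
import numpy as np
from collections import Counter

def find_shift(plate, catalogue, bin_size=0.5):
    """Tally v = b - a over all pairs; the most frequent (binned)
    vector is the shift shared by the true matches."""
    votes = Counter()
    for a in plate:
        for b in catalogue:
            q = np.rint((b - a) / bin_size).astype(int)
            votes[tuple(q)] += 1
    best, _ = votes.most_common(1)[0]
    return np.array(best) * bin_size
```

True matches all vote for the same bin, while mismatched pairs scatter their votes, so the mode stands out even with unequal catalogue depths; the cost is O(len(plate) * len(catalogue)) pairs.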