Indexing a numpy indices array with a list of 2D points - python

I am using python 2.7
I have an array of indices created by
ids=np.indices((20,20))
ids[0] is filled with all the vertical coordinates and
ids[1] is filled with all the horizontal coordinates.
ids has a shape of (2, 20, 20).
I have a boolean mask of shape (20,20)
I need to have a list of ids that correspond to the ones marked as true in the mask.
I am trying to do this with mid = ids[:, mask].T, which gives me a list of this sort:
[2,17]
[4,6]
[1,19]
[18,4]
and so on. They are saved in an array called mid.
Then, I need all those coordinates in mid to look up the values in another array. Meaning I need
anotherarray[2, 17]
I have not managed to use the list mid for fancy indexing; can someone help me?
I have
anotherarray[mid[0], mid[1]]
and it doesn't work. I also have
anotherarray[tuple(mid)]
and it doesn't work either.
Edit (read only if you care about context): I wanted to add context to show why I think I need the extra indices. Maybe I don't; that is what I want to find out to make this efficient.
This is a registration problem, a very simple one. I have two images, a reference and a floating one, as seen below. Reference to the left, floating to the right.
The reference image and the floating image are in different coordinate spaces. I have points marked, as you can see in the images, and I find an affine transformation between them.
The region delimited by the line is my region of interest. I send the coordinates of that region in the floating space to the reference space.
There in the reference space, I find what pixels are found inside the region and they become the mask array, containing the information of both in and outer pixels.
But I only care about those inside, so I want only the indices of the pixels inside the mask in the reference space, and I save them using mid = ids[:, mask].
Once I have those points, I transform them back to the floating space, and at those new indices I need to look up the intensity. Those intensities are the ones that will be written back into the reference at their corresponding indices. That is why I think I need the indices of those points in both the reference and the floating space, plus the intensities of the image. That other image is the anotherarray from which I want only the transformed masked pixels.
So there you go, that is the explanation if you care about it. Thank you for reading and answering.

A few tips: you can get mid directly from mask using np.argwhere(mask). Probably more convenient for your purpose is np.where, which you can use like mi, mj = np.where(mask) and then anotherarray[mi, mj].
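A sketch of those tips on a small example (a 4x4 array instead of the 20x20 from the question; all names hypothetical):

```python
import numpy as np

# A small stand-in for the asker's 20x20 setup
anotherarray = np.arange(16).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = mask[3, 0] = True

# Row/column indices of the True entries
mi, mj = np.where(mask)
values = anotherarray[mi, mj]          # fancy indexing: one value per masked pixel
print(values)                          # [ 6 12]

# Equivalently, from the (N, 2) array of points the asker built:
mid = np.argwhere(mask)                # same as ids[:, mask].T
values2 = anotherarray[tuple(mid.T)]   # transpose back before making the tuple
```

Note that anotherarray[tuple(mid)] fails precisely because mid is the transposed (N, 2) array; the tuple has to be built from the (2, N) orientation.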

Related

Python: Compute the area and perimeter of structures from arrays of ones and zeros

I am facing the following issue: I have two 2D arrays of ones and zeros (same shape, (1920, 1440)), which specify the masks and outlines of objects. In the masks array, ones indicate space occupied by objects and zeros indicate empty space; in the outlines array, ones indicate the outlines and zeros indicate empty space.
Here you can find a graphical representation of the mask array: https://ibb.co/v36TrJv and here you can find a graphical representation of the outlines array: https://ibb.co/FxKwmTq. Ones are depicted in white and zeros are depicted in black.
As you can see, the masks form ellipse-like structures which do not overlap, and the outlines always form closed contours. I would now like to compute the area occupied by each structure as well as its perimeter. Ideally, I would end up with two 2D arrays of the same shape as the input arrays: the first would hold the area of each structure at the points where the mask array has a value of one, and the second would analogously hold the perimeter of each structure at those points. I need the output arrays in this form so that I can do shape-index computations and produce graphical representations of the results.
As a minimal reproducible example, you can download the images from the provided links and use the following code to extract the arrays from them:
import skimage.io as sio
import numpy as np

masks = sio.imread("masks.png")
masks = np.mean(masks, axis=2) / 255
outlines = sio.imread("outlines.png")
outlines = np.mean(outlines, axis=2) / 255
I have already played around a bit with OpenCV, as it apparently has functions specifically designed for what I am looking for. Yet my efforts have not yielded any notable results so far. I tried to adapt the example code from the contour-features section of the OpenCV docs (https://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html):
import cv2 as cv

img = cv.imread('masks.png', 0)
ret, thresh = cv.threshold(img, 127, 255, 0)
contours, hierarchy = cv.findContours(thresh, 1, 2)
cnt = contours[0]
print(cnt)
Here, the outcome does not seem to be what I am looking for. I also tried to adjust the threshold but without success. I am unable to figure out which adjustments I would have to make in order to arrive at my desired results using OpenCV.
Furthermore, I have come across Green's theorem (https://en.wikipedia.org/wiki/Green%27s_theorem) and was considering implementing it for my purpose. But I thought I would first ask for some external help, because I feel like there should be a more straightforward solution to my problem. Any help will be highly appreciated!
OpenCV's contours will do the job; I think you are misunderstanding them. In your code, contours[0] gives only the first contour detected. Instead, you should iterate over the contours variable, like for contour in contours:, and for each contour get the area and perimeter using the functions given in the doc you shared, storing these details in a list of lists. Your final list will then be of size n x 2, where n is the number of objects in your image.
Also, a suggestion: find the contours in the image that has the objects filled with ones and the background with zeros, as in the first image you shared. And just to be on the safe side, since all your objects are separated, use RETR_EXTERNAL as a flag when finding the contours. Refer to the OpenCV docs for more information on this.
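For reference, here is a pure-NumPy sketch of the bookkeeping described above, with a simple flood fill standing in for cv.findContours, pixel counts for area, and boundary-pixel counts as a rough perimeter (coarser than cv.arcLength); it assumes the structures do not touch the image border:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected flood fill: returns an int label array (0 = background)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.where(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels

def area_perimeter_maps(masks):
    """Per-pixel area and (rough) perimeter maps, same shape as the input."""
    labels = label_regions(masks.astype(bool))
    area_map = np.zeros(masks.shape)
    perim_map = np.zeros(masks.shape)
    for lbl in range(1, labels.max() + 1):
        region = labels == lbl
        # Interior pixels have all 4 neighbours inside the region
        # (np.roll wraps around, hence the no-border assumption)
        interior = (np.roll(region, 1, 0) & np.roll(region, -1, 0)
                    & np.roll(region, 1, 1) & np.roll(region, -1, 1))
        boundary = region & ~interior
        area_map[region] = region.sum()
        perim_map[region] = boundary.sum()
    return area_map, perim_map

# Tiny example: a single 3x3 square of ones in a 6x6 image
masks = np.zeros((6, 6))
masks[1:4, 1:4] = 1
area_map, perim_map = area_perimeter_maps(masks)
```

For the 3x3 square, every pixel of the structure carries area 9 and boundary count 8, matching the "same shape as the input" output form the question asks for.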

Affine transformation with 3 points and merging images

I am writing a Python program that shows two thermal (low-res) drone shots next to each other. The user then chooses 3 points on both pictures, after which the second picture should be transformed and merged with the first. In the following step, the program should show the merged picture with the third picture from the flight next to it, and so on.
At the moment I'm looking at ways to do the transformation and merging of the images.
I was thinking of reading the images as arrays (a list of lists (= rows)), manually calculating the new location of each pixel with the transformation formula (at the moment I have calculated the transformation matrix with OpenCV), creating a new empty list of lists, and pasting each pixel value into the corresponding location in the new list of lists. Afterward, I wanted to fill all the empty 'cells' with zeros.
To merge them, I would again create a new list of lists, loop through both images, take the maximum or average value for each cell, and fill the remaining empty 'cells' with zeros. I would then show this merged image in the window next to the following image from the flight, as mentioned.
I still don't know how to predict the size of my list of lists before calculating the new coordinates of the pixel values. Also, this approach seems long and inefficient, so I was wondering if there is an easier way to do this using Wand or OpenCV (or any other library).
I did everything manually and it works. One problem I still have is diagonal NoData lines through the transformed image. I suppose this is the result of rounding coordinates, because I can't use floats (which the transformation formula produces) as indices into the lists. (The 3 sets of points weren't chosen super carefully for this example, but you get the point.)
EDIT: I solved the diagonal lines by filling in the calculated value at the center pixel and the 8 pixels bordering it. It makes the image blurrier, but it's the only solution I've got.
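For what it's worth, the diagonal gaps come from forward mapping (pushing each source pixel to a rounded destination coordinate). The usual fix is inverse mapping: loop over destination pixels and sample the source, which is what cv.warpAffine does internally. A minimal nearest-neighbour NumPy sketch (the matrix values are made up):

```python
import numpy as np

def warp_affine_nn(src, M, out_shape):
    """Backward-map each destination pixel through the inverse of the 2x3
    affine matrix M and sample the source with nearest-neighbour rounding.
    Destination pixels that fall outside the source stay 0 (NoData)."""
    Minv = np.linalg.inv(np.vstack([M, [0, 0, 1]]))[:2]   # invert as a 3x3
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates for every destination pixel (x = column, y = row)
    sx = Minv[0, 0] * xs + Minv[0, 1] * ys + Minv[0, 2]
    sy = Minv[1, 0] * xs + Minv[1, 1] * ys + Minv[1, 2]
    sx = np.rint(sx).astype(int)
    sy = np.rint(sy).astype(int)
    valid = (0 <= sx) & (sx < src.shape[1]) & (0 <= sy) & (sy < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out[valid] = src[sy[valid], sx[valid]]
    return out

# Toy example: an affine matrix that shifts the image right by one column
src = np.arange(9).reshape(3, 3)
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
dst = warp_affine_nn(src, M, (3, 3))
```

Because every destination pixel is assigned exactly once, no gaps can appear, and no 9-pixel blurring workaround is needed.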

Identify groups of numbers in an array (group pixels of an image)

This question is related to another question I asked before, here. Now I have an array that contains 0s, 1s, and -1s. Consider it as an image where the background is 0s; it has groups of 1s and -1s.
This is that array opened in Excel. The highlighted groups are 1s (in some cases they can be -1s). There can be at most 4 groups in one array. I want to separate those groups into left, right, top, and bottom, with their values and the original indices.
Referring to the previous question, I am trying to find the points on the humps and hollows of the puzzle pieces. If I can group them separately, then I know how to find the index of the point I want.
I tried to separate them like this, but it doesn't work for all the pieces. Sometimes it cuts through hollows.
Thanks in advance for any help!
Since your data is a 2D array, have you tried using an approach like region growing to segment it?
https://en.wikipedia.org/wiki/Region_growing
Basically, you need to start with a seed point and grow the region by considering neighbouring points and whether or not they fit the criteria for your region.
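A minimal sketch of that idea, assuming the growth criterion is simply "same value as the seed" (all names hypothetical):

```python
import numpy as np
from collections import deque

def grow_region(arr, seed):
    """Grow a 4-connected region of pixels sharing the seed's value."""
    target = arr[seed]
    region = np.zeros(arr.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < arr.shape[0] and 0 <= nx < arr.shape[1]
                    and not region[ny, nx] and arr[ny, nx] == target):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

# Toy array in the question's format: background 0s, groups of 1s and -1s
arr = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, -1]])
region = grow_region(arr, (0, 1))   # grows over the three connected 1s
```

Seeding once inside each group (e.g. at each nonzero pixel not yet claimed) would separate the up-to-4 groups, after which np.argwhere(region) gives each group's original indices.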

Get n largest regions from binary image

I am given a large binary image (every pixel is either 1 or 0).
I know that in that image there are multiple regions (a region is defined as a set of neighboring 1s which are enclosed by 0s).
The goal is to find the largest region (in terms of pixel count or enclosed area; both would work for me for now).
My current planned approach is to:
- start with an array of arrays of coordinates of the 1s (or 0s, whatever represents a 'hit')
- until no more steps can be made:
  - for the current region (which is a set of coordinates):
    - see if any region interfaces with the current region; if yes, merge them, and if not, continue with the next iteration
My question is: is there a more efficient way of doing this, and are there already tested (bonus points for parallel or GPU-accelerated) implementations out there (in any of the big libraries) ?
You could flood-fill every region with a unique ID, mapping the ID to the size of the region.
You want to use connected-component analysis (a.k.a. labeling). It is more or less what you suggest doing, but there are extremely efficient algorithms out there. Answers to this question explain some of the algorithms. See also connected-components.
This library collects different efficient algorithms and compares them.
From within Python, you probably want to use OpenCV. cv.connectedComponentsWithStats does connected component analysis and outputs statistics, among other things the area for each connected component.
Regarding your suggestion: using coordinates of pixels rather than the original image matrix directly is highly inefficient. Looking for neighboring pixels in an image is trivial; looking for the same in a list of coordinates requires expensive searches.
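For illustration, a sketch of the classic two-pass labeling with union-find that such libraries implement far more efficiently (in practice you would call cv.connectedComponentsWithStats or scipy.ndimage.label instead):

```python
import numpy as np

def label_two_pass(img):
    """Two-pass 4-connected labeling with union-find over provisional labels."""
    parent = [0]                      # parent[i] = representative of label i
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    labels = np.zeros(img.shape, dtype=int)
    # First pass: assign provisional labels, recording merges
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if not img[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up and left:
                labels[y, x] = find(up)
                parent[find(left)] = find(up)    # union the two runs
            elif up or left:
                labels[y, x] = find(up or left)
            else:
                parent.append(len(parent))       # brand-new provisional label
                labels[y, x] = len(parent) - 1
    # Second pass: flatten provisional labels to their representatives
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
labels = label_two_pass(img)
sizes = {l: int((labels == l).sum()) for l in np.unique(labels) if l}
largest = max(sizes, key=sizes.get)   # label of the biggest region
```

Note that this touches each pixel a constant number of times, versus the repeated pairwise region-interface checks of the merge-lists approach.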

Examples on N-D arrays usage

I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer, and I thought nobody ever used more than a 2D array. Actually, I can't even think beyond a 2D array; I don't know how to think about 3D, 4D, or 5D arrays, or where to use them.
Can you please give me examples of where 3D, 4D, 5D, etc. arrays are used? And if one used numpy.sum(array, axis=5) on a 5D array, what would happen?
A few simple examples are:
An n x m 2D array of p-vectors represented as an n x m x p 3D array, as might result from computing the gradient of an image
A 3D grid of values, such as a volumetric texture
These can even be combined: in the case of the gradient of a volume, you get a 4D array
Staying with the graphics paradigm, adding time adds an extra dimension, so a time-variant 3D gradient texture would be 5D
numpy.sum(array, axis=5) is not valid for a 5D array, as axes are numbered starting at 0 (a 5D array has axes 0 through 4)
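A quick check of that last point:

```python
import numpy as np

a = np.zeros((2, 3, 4, 5, 6))      # a 5D array: its axes are numbered 0..4
last = np.sum(a, axis=4)           # summing over the last valid axis
print(last.shape)                  # (2, 3, 4, 5)

try:
    np.sum(a, axis=5)              # axis 5 does not exist on a 5D array
    raised = False
except IndexError:                 # numpy raises AxisError, a subclass of IndexError
    raised = True
```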
Practical applications are hard to come up with, but I can give you a simple example for 3D.
Imagine taking a 3D world (a game or simulation, for example) and splitting it into equally sized cubes. Each cube could contain a specific value of some kind (a good example is temperature for climate modelling). The array can then be used for further operations (simple ones like computing its transpose, its determinant, etc.).
I recently had an assignment which involved modelling fluid dynamics in a 2D space. I could have easily extended it to work in 3D and this would have required me to use a 3D matrix instead.
You may also wish to extend the arrays to cater for time, which would make them 4D. In the end, it really boils down to the specific problem you are dealing with.
As an end note, however, 2D matrices are still used for 3D graphics (you use a 4x4 augmented matrix).
There are so many examples... The way you are trying to picture it is probably wrong; let's take a simple example:
You have boxes, and a box stores N items in it. You can store up to 100 items in each box.
You've organized the boxes on shelves. A shelf holds M boxes, and you can identify each box by an index.
All the shelves are in a warehouse with 3 floors, so you can identify any shelf using 3 numbers: the row, the column, and the floor.
A box is then identified by: row, column, floor, and its index on the shelf.
An item is identified by: row, column, floor, index on the shelf, index in the box.
Basically, one way (not the best one...) to model this problem would be a 5D array.
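In NumPy terms, the warehouse model above might look like this (the sizes are made up):

```python
import numpy as np

# Hypothetical warehouse: 10 rows x 8 columns of shelves, 3 floors,
# 20 boxes per shelf, up to 100 item slots per box
warehouse = np.zeros((10, 8, 3, 20, 100), dtype=int)

# Mark one specific item: row 2, column 5, floor 0, box 7 on the shelf, slot 42
warehouse[2, 5, 0, 7, 42] = 1
```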
For example, a 3D array could be used to represent a movie, that is a 2D image that changes with time.
For a given time, the first two axes give the coordinates of a pixel in the image, and the corresponding value gives the color of that pixel, or a grayscale level. The third axis then represents time: for each time slot, you have a complete image.
In this example, numpy.sum(array, axis=2) would integrate the exposure at each pixel. If you think of a film shot in low-light conditions, you could imagine doing something like that to be able to see anything.
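A small sketch of the movie example (the sizes are made up):

```python
import numpy as np

# A tiny "movie": 4x4 grayscale frames over 10 time slots, axes (row, col, time)
rng = np.random.default_rng(0)
movie = rng.integers(0, 256, size=(4, 4, 10))

# Summing over the time axis (axis=2) integrates each pixel's exposure
exposure = np.sum(movie, axis=2)
```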
They are very applicable in scientific computing. Right now, for instance, I am running simulations that output data in a 4D array: specifically
| Time | x-position | y-position | z-position |.
Almost every modern spatial simulation will use multidimensional arrays, along with programming for computer games.
