In MATLAB you can use
cc = bwconncomp(bimg);
pixels = cc.PixelIdxList{i}
to get the pixel list of each connected component. What's the Python equivalent?
I tried
from skimage import measure
label = measure.label(bimg)
to get the labels; however, this does not come with a pixel list.
Any suggestions?
The regionprops function in scikit-image returns the property "coords", an (N, 2) ndarray coordinate list (row, col) of the region. I ran into the same problem and am using this to get the pixel list from the label array.
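For example, a minimal sketch building on the bimg from the question:

from skimage import measure

labels = measure.label(bimg)
# one RegionProperties object per connected component; .coords holds the
# (N, 2) array of (row, col) pixel coordinates for that region
for region in measure.regionprops(labels):
    pixels = region.coords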
To get the pixel list of a connected component with some_label (e.g. some_label=1), if you have a labeled image:
pixels = numpy.argwhere(labeled_image == some_label)
See numpy.argwhere.
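If you want a MATLAB-style PixelIdxList with one entry per component, here is a small sketch (assuming labels run from 1 to labeled_image.max(), as skimage and scipy produce):

import numpy as np

# one (N, 2) array of (row, col) coordinates per label
pixel_lists = [np.argwhere(labeled_image == lbl)
               for lbl in range(1, labeled_image.max() + 1)]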
Assume that I have a binary numpy array (0 or 1 / True or False) that comes from a .jpg image (a 2D array, from a grayscale image). I did some processing to get the edges of the image, based on color change.
Now, from every surface/body in this array, I need to get its center.
Here is the original image:
Here is the processed one:
Now I need to get the centers of the surfaces generated by these lines (i.e. indices that more or less point to the center of each surface).
In case you are interested, you can find the file (.npy) here:
https://gofile.io/d/K8U3ZK
Thanks a lot!
Found a solution that works. scipy.ndimage.label assigns a unique integer to each label/area; to validate the results I simply plot the output array:
from scipy.ndimage import label
import matplotlib.pyplot as plt

labeled_array, no_feats = label(my_binary_flower)
plt.imshow(labeled_array)
plt.show()
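Since the goal is the centers, scipy.ndimage.center_of_mass can then be applied to the label array; a short sketch reusing the names above:

from scipy.ndimage import center_of_mass, label

labeled_array, no_feats = label(my_binary_flower)
# one (row, col) centroid per labeled region, for labels 1..no_feats
centers = center_of_mass(my_binary_flower, labeled_array, range(1, no_feats + 1))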
I am trying to draw a minimum bounding rectangle around an object in an image using OpenCV. cv2.minAreaRect returns a rectangle of the correct size, but its orientation is off.
Following is my code snippet.
The following screenshot shows the image I am working with.
The next screenshot shows the image with the detected border.
According to the OpenCV documentation linked here: https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html, this should work.
np.where() returns coordinates in the normal array index (row, column) order, but OpenCV expects points in (x, y) order, which is the reverse. This has the effect of flipping the points about the main diagonal of the image.
Simply reverse the points by swapping the two columns. Better yet, be more explicit with variables and don't do everything on one line:
y, x = np.where(binary == 0)
coords = np.column_stack((x, y))
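Putting it together, a hedged sketch (binary is the thresholded image from the snippet above; image, the picture to draw on, is an assumed name, since the original code was not included in the post):

import cv2
import numpy as np

y, x = np.where(binary == 0)                         # object pixels, (row, col) order
coords = np.column_stack((x, y)).astype(np.float32)  # OpenCV wants (x, y)

rect = cv2.minAreaRect(coords)                       # ((cx, cy), (w, h), angle)
box = cv2.boxPoints(rect)                            # the rectangle's 4 corner points
cv2.drawContours(image, [box.astype(np.int32)], 0, (0, 255, 0), 2)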
I have an image containing coloured regions (some of them using the same colour), and I would like each region to have a different colour.
The objective is to colour/label each region using a different colour/label.
Sample image:
You can achieve this by looping over the unique values in your image, creating a mask of the objects with that value, and running bwlabel on each such mask. This gives you unique labels for the connected components in that mask, and you can combine the labels across masks by offsetting each result by the number of labels already found:
img = imread('i5WLA.png');
index = zeros(size(img));
for iGray = unique(img(:)).'
    mask = (img == iGray);
    L = bwlabel(mask, 4);
    index(mask) = L(mask) + max(index(:));
end
subplot(2,1,1);
imshow(img, []);
title('Original');
subplot(2,1,2);
imshow(index, []);
title('Each region labeled uniquely');
And here's the plot this makes:
You can now see that each connected object has its own unique gray value. You can then create a color image from this new indexed image using either ind2rgb or label2rgb and selecting a colormap to use (here I'm using hsv):
rgbImage = ind2rgb(index, hsv(max(index(:))));
imshow(rgbImage);
% Or...
rgbImage = label2rgb(index, @hsv);
imshow(rgbImage);
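For anyone needing this in Python: a rough port of the same idea (an assumption, not part of the original answer), using skimage.measure.label with 4-connectivity in place of bwlabel(mask, 4):

import numpy as np
from skimage import measure

def relabel_unique(img):
    index = np.zeros(img.shape, dtype=int)
    for gray in np.unique(img):
        mask = (img == gray)
        L = measure.label(mask, connectivity=1)  # 4-connected, like bwlabel(mask, 4)
        index[mask] = L[mask] + index.max()      # offset by the labels found so far
    return index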
Unless there's already a function floating around that does what you want, you can always write it yourself.
If I had to do this, I'd consider something like a union-find algorithm to group all equal-color adjacent/connected pixels into sets, then assign labels to those sets.
A naive (less efficient, but doesn't require union-find) implementation, sketched in Python:

import numpy as np
def label_regions(pixels):
    # assume pixels is a numpy array, (h, w) or (h, w, 3); adjust as required
    h, w = pixels.shape[:2]
    sets = {(r, c): {(r, c)} for r in range(h) for c in range(w)}  # each pixel in its own set
    for r in range(h):
        for c in range(w):
            for nr, nc in ((r + 1, c), (r, c + 1)):  # adjacent pixels (down, right)
                if nr < h and nc < w and np.array_equal(pixels[r, c], pixels[nr, nc]):
                    if sets[(r, c)] is not sets[(nr, nc)]:
                        merged = sets[(r, c)] | sets[(nr, nc)]  # join the sets, drop the originals
                        for p in merged:
                            sets[p] = merged
    # now there's one set for each connected same-color shape; assign labels as desired
    unique_sets = {id(s): s for s in sets.values()}.values()
    return {p: i for i, s in enumerate(unique_sets) for p in s}
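For instance, on a tiny 2x2 array:

labels = label_regions(np.array([[5, 5],
                                 [5, 9]]))
# -> {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1} (exact numbering may vary)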
Let's say that you are given this image
and are instructed to programmatically color only its inside the appropriate color. The program would have to work not only on this shape and other primitives, but on any outlined shape, however complex it may be, shaded or not.
This is the problem I am trying to solve, and here's where I'm stuck: it seems like it should be simple to teach a computer to see black lines and color inside them, but searching mostly turns up eigenface-style recognition algorithms, which seem to me like overfitting, with far greater complexity than is needed for at least the basic form of this problem.
I would like to frame this as a supervised learning classification problem, the purpose of which is to feed my model a complete image and have it output smaller numpy arrays consisting of pixels classified as object or background. But in order to do that I would need training data, which seems like it would require hand-labeling every pixel in my training set, which obviously defeats the purpose of the program.
Now that you have the background, here's my question: given this image, is there an efficient way to get two distinct arrays, each consisting of all adjacent pixels that do not contain any solid black (RGB(0,0,0)) pixels?
That would make one set all the pixels inside the circle, and the other all the pixels outside the circle.
You can use scipy.ndimage.label (scipy.ndimage.measurements.label in older SciPy versions) to do all the heavy lifting for you:
import imageio.v3 as iio  # scipy.misc.imread was removed from SciPy; imageio is one replacement
import scipy.ndimage

data = iio.imread(...)
assert data.ndim == 2, "Image must be monochromatic"

# find and number all disjoint white regions of the image
is_white = data > 128
labels, n = scipy.ndimage.label(is_white)
# get a set of all the region ids which are on the edge - we should not fill these
on_border = set(labels[:,0]) | set(labels[:,-1]) | set(labels[0,:]) | set(labels[-1,:])
for label in range(1, n + 1):  # label 0 is all the black pixels
    if label not in on_border:
        # turn every pixel with that label to black
        data[labels == label] = 0
This will fill all closed shapes within the image, treating a shape cut by the edge of the image as not closed.
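As an aside, if the edge check is not needed, scipy.ndimage.binary_fill_holes does roughly the same fill in one call; a hedged alternative using the same data and threshold as above:

import scipy.ndimage

# fill the interiors of the black outlines, then paint the filled mask black
filled = scipy.ndimage.binary_fill_holes(data < 128)
data[filled] = 0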
I need to search for outliers in more or less homogeneous images representing some physical array. The images have a resolution much higher than the screen resolution, so every pixel on screen originates from a block of image pixels. Is it possible to customize the algorithm that calculates the displayed value for such a block? In particular, the ability to use either the lowest or the highest value would be helpful.
Thanks in advance
SciPy provides several such filters. To get a new image (new) whose pixels are the maximum/minimum over a w*w block of an original image (img), you can use:
import scipy.ndimage

new = scipy.ndimage.maximum_filter(img, size=w)
new = scipy.ndimage.minimum_filter(img, size=w)
scipy.ndimage has several other filters available. (In older SciPy versions these live under scipy.ndimage.filters.)
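For the display use case described in the question, one possible pattern (an assumption about the intended workflow, not part of the original answer) is to run the filter and then keep one sample per block:

import scipy.ndimage

# each kept pixel is the maximum over a w*w neighborhood; the windows are
# centered, so the blocks are approximate rather than exactly aligned
small = scipy.ndimage.maximum_filter(img, size=w)[::w, ::w]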
If the standard filters don't fit your requirements, you can roll your own. To get you started, here is an example that shows how to get the minimum in each block of the image. This function reduces the size of the full image (img) by a factor of w in each direction. It returns a smaller image (new) in which each pixel is the minimum of a w*w block of pixels from the original image. The function assumes the image is stored in a numpy array:
import numpy as np

def condense(img, w):
    # each output pixel is the minimum of a w*w block of the input
    new = np.zeros((img.shape[0] // w, img.shape[1] // w))
    for i in range(img.shape[1] // w):
        col1 = i * w
        # reshape w columns into (rows//w, w*w) blocks and take each block's minimum
        new[:, i] = img[:, col1:col1 + w].reshape(-1, w * w).min(1)
    return new
If you wanted the maximum, replace min with max.
For the condense function to work well, the size of the full image must be a multiple of w in each direction. The handling of non-square blocks or images that don't divide exactly is left as an exercise for the reader.
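As a side note, when the image dimensions are exact multiples of w, the same block reduction can be done without the column loop via a reshape (a sketch, not part of the original answer):

import numpy as np

def condense_vectorized(img, w):
    h, width = img.shape
    # split into (h//w, w, width//w, w) blocks and reduce over the two block axes
    return img.reshape(h // w, w, width // w, w).min(axis=(1, 3))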