How to access individual labels in OpenCV connected component labelling - python

I am trying to access individual labelled objects from OpenCV's connectedComponentsWithStats in Python. When I run connectedComponentsWithStats, it returns a label array in which each object has a different pixel value. How do I efficiently access each labelled object as a separate array? The images I am working with are very large, around 12000 x 10000 pixels.
I have an image here that has been labelled with cv.connectedComponentsWithStats:
The colormap used starts with purple (1) and ends with yellow (last label). How do I reference each labelled object independently as a separate array?

source = <some_image>
labels = <connected components result>
for label in np.unique(labels):
    m = (labels == label)  # boolean mask of pixels with this label
    obj = source[m]        # original pixel values for the labelled object
This will give back a flat (1-D) result; it's unclear from your question whether that is acceptable.
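If the full-image masks above are too slow at 12000 x 10000, SciPy's ndimage.find_objects gives a bounding-box slice per label, so each object can be viewed as its own small array; it works directly on a label array like the one connectedComponentsWithStats returns. A sketch with a hand-made labels array standing in for the real one:

```python
import numpy as np
from scipy import ndimage

# hand-made stand-in for the labels array from connectedComponentsWithStats
# (same convention: 0 is background, objects are 1..n)
labels = np.zeros((8, 8), dtype=np.int32)
labels[1:3, 1:4] = 1
labels[5:7, 5:8] = 2

# one bounding-box slice pair per label; each object becomes a small
# boolean array instead of a mask over the whole 12000 x 10000 image
objects = [labels[sl] == i + 1 for i, sl in enumerate(ndimage.find_objects(labels))]
```

The stats array from connectedComponentsWithStats holds the same bounding boxes (CC_STAT_LEFT, CC_STAT_TOP, CC_STAT_WIDTH, CC_STAT_HEIGHT), so the slices can equally be built from it without SciPy.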

Related

Quadtree of varying threshold / depth on each segmented object based on pixel information

Is there a way to apply a quadtree (in Python) with a varying threshold to a single image (image attached: semantic segmentation) which already has objects segmented?
The aim is to apply a coarse/fine quadtree to each object based on pixel information, since each pixel of the image has an instance ID associated with it. For example, the pixels belonging to the background have an instance ID of '0', so I would like to apply a coarse quadtree to the background (as it's less important), while the object in the center (the bear) has an instance ID of '1' and I would like to apply a finer quadtree threshold there (as it's more important than the other objects). Something like this:
for i in range(x):                    # x coordinates
    for j in range(y):                # y coordinates
        # read the instance ID from the pixel value
        if instance_id[i, j] == 0:    # background instance
            # apply quadtree at a coarse level only on this pixel
        elif instance_id[i, j] == 1:  # bear instance
            # apply quadtree at a fine level only on this pixel
So far I have not come across any such quadtree algorithm. Almost all quadtree algorithms apply a threshold based on color mapping. This is the result I achieved using a color-map-based quadtree. None of these algorithms have any way of varying the threshold level within a single image: they take one threshold based on color averaging and apply it to the whole image. That's what I am NOT looking for. I need to be able to vary the threshold based on object importance, defined by the instance ID contained in each pixel of the image.
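For the varying-threshold part, here is a minimal sketch of one possible scheme (my own construction, not a standard algorithm): recursively subdivide, but pick each block's stopping threshold from the dominant instance ID inside it. All names and the toy data are made up for illustration:

```python
import numpy as np

def quadtree(img, ids, thresholds, x=0, y=0, w=None, h=None, min_size=4):
    """Recursively subdivide img into (x, y, w, h) leaf blocks; the split
    threshold for each block comes from the dominant instance ID inside it
    (a hypothetical per-object scheme, not a standard algorithm)."""
    if w is None:
        h, w = img.shape
    block = img[y:y + h, x:x + w].astype(float)
    dominant = np.bincount(ids[y:y + h, x:x + w].ravel()).argmax()
    # stop when the block is small, or uniform enough for this object's threshold
    if w <= min_size or h <= min_size or block.std() <= thresholds[dominant]:
        return [(x, y, w, h)]
    hw, hh = w // 2, h // 2
    leaves = []
    for nx, ny, nw, nh in [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                           (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]:
        leaves += quadtree(img, ids, thresholds, nx, ny, nw, nh, min_size)
    return leaves

# toy data: a noisy texture, with the right side marked as instance 1 (the "bear")
img = np.zeros((16, 16))
img[::2, ::2] = 255
ids = np.zeros((16, 16), dtype=int)
ids[:, 7:] = 1
# coarse threshold for background (0), fine threshold for the bear (1)
leaves = quadtree(img, ids, thresholds={0: 200.0, 1: 1.0})
```

Blocks dominated by the background stop subdividing early (large leaves on the left), while blocks dominated by the bear keep splitting down to min_size.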

How to convert rgb to labels for image segmentation

I have around 4000 RGB label images which are masks for some other images. I can use these image/label pairs in a deep-learning encoder-decoder architecture (e.g. UNet) with a regression approach, but I would like to take a segmentation approach instead. How can I convert these images for that?
Sample image:
(The above sample image should contain 3 classes: one oval-shaped part, the remaining red part, and the white background. This can go up to 7 classes in some other image pairs.)
There are supposed to be 7 classes including background for the entire dataset. But when I tried to find the unique values in an RGB label, more than 30 unique RGB triplets turned up. Otherwise I would have selected the unique RGB triplets and processed each one. How can I overcome this?
Here's one potential way to handle this (in MATLAB, but similar in other situations)
The image you have shared is rather pixelated, and hence quite difficult to handle. If your dataset contains similarly pixelated images, I'd explore some kind of pre-processing to get rid of spurious edge discolorations, as they mess up the clustering. For the sake of demonstration here, I've created a test image with exactly three colors.
% Create a test image - the one shared is very pixelated.
I = uint8(zeros(100, 100, 3));
I(10:20, 10:20, 1) = 255;
I(40:50, 40:50, 2) = 255;
If the number of colors here is unknown, but is at most 7, here's a quick way to use imsegkmeans and its 'C' output to find the number of unique centers.
% Specify max clusters
maxNumClusters = 7;
% Run clustering using the max value
[~, C] = imsegkmeans(I, maxNumClusters);
nUniqueClusters = size(unique(C, 'rows'), 1);
'nUniqueClusters' should now contain the 'true' number of clusters in the image. In a way, this is almost like finding the number of unique entries of pixel RGB triplets in the image itself - I think what's affecting your work is noise due to pixelation - which is a separate problem.
[L, C] = imsegkmeans(I, nUniqueClusters);
% Display the labeled image for further verification.
B = labeloverlay(I, L);
figure, imshow(B)
One way to attempt to fix the pixelation problem is to plot a histogram of your image pixels (for one of the three color planes), and then manage the low values somehow - possibly marking all of them with a distinct new color that you know doesn't otherwise exist in your dataset ((0, 0, 0), for example) - and marking its label as 'unknown'. This is slightly outside the scope of your original question - hence just a text description of it here.
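The same nearest-colour idea is easy to sketch in Python, assuming you can write down one reference RGB triplet per class (the palette below is made up; substitute your dataset's actual class colours):

```python
import numpy as np

# hypothetical reference palette: one RGB triplet per class, background first
palette = np.array([[255, 255, 255],    # class 0: white background
                    [255,   0,   0],    # class 1: red part
                    [  0,   0, 128]])   # class 2: oval part

def rgb_to_labels(rgb, palette):
    """Map every pixel to the index of the nearest palette colour; the
    spurious in-between colours from pixelation fall into the closest class."""
    diff = rgb[:, :, None, :].astype(float) - palette[None, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # shape (H, W, n_classes)
    return dist.argmin(axis=-1).astype(np.uint8)

# slightly-off colours, as produced by compression/pixelation
rgb = np.array([[[250, 250, 250], [240,  10,   5]],
                [[ 10,   5, 120], [255, 255, 255]]], dtype=np.uint8)
lbl = rgb_to_labels(rgb, palette)
```

This collapses the 30+ observed triplets down to the intended class count, at the cost of assuming the noise never pushes a pixel closer to the wrong class colour.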

Labeling image regions

I have an image containing coloured regions (some of them using the same colour) and I would like each region to have a different colour.
The objective is to colour/label each region using a different colour/label.
Sample image:
You can achieve this by looping over the unique values in your image, creating a mask of the objects with that value, and performing bwlabel for each such mask. This will give you unique labels for each connected component in that mask, and you can collect the labels from all the masks by adding the number of labels already found previously:
img = imread('i5WLA.png');
index = zeros(size(img));
for iGray = unique(img(:)).'
    mask = (img == iGray);
    L = bwlabel(mask, 4);
    index(mask) = L(mask) + max(index(:));
end
subplot(2,1,1);
imshow(img, []);
title('Original');
subplot(2,1,2);
imshow(index, []);
title('Each region labeled uniquely');
And here's the plot this makes:
You can now see that each connected object has its own unique gray value. You can then create a color image from this new indexed image using either ind2rgb or label2rgb and selecting a colormap to use (here I'm using hsv):
rgbImage = ind2rgb(index, hsv(max(index(:))));
imshow(rgbImage);
% Or...
rgbImage = label2rgb(index, @hsv);
imshow(rgbImage);
Unless there's already a function floating around that does what you want, you can always write it yourself.
If I had to do this, I'd consider something like a union-find algorithm to group all equal-color adjacent/connected pixels into sets, then assign labels to those sets.
A naive (less efficient but doesn't require union-find) implementation using pseudo-code:
# assume pixels are stored in a numpy array; adjust as required
put each pixel into its own set
for pixel in pixels:
    neighbors = adjacent_same_color_pixels(pixel)
    find the sets that contain pixel and the sets that contain the neighbors
    join all those sets together, delete the original sets
# now there's one set for each connected same-color shape
assign labels as desired
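For completeness, the per-gray-value labelling loop from the MATLAB answer above translates directly to Python; a sketch using scipy.ndimage.label in place of bwlabel:

```python
import numpy as np
from scipy import ndimage

def label_regions(img):
    """One ndimage.label pass per unique gray value, offsetting each pass
    by the number of labels already assigned (mirrors the MATLAB loop)."""
    out = np.zeros(img.shape, dtype=int)
    next_label = 0
    for value in np.unique(img):
        mask = (img == value)
        lab, n = ndimage.label(mask)  # 4-connectivity by default, like bwlabel(mask, 4)
        out[mask] = lab[mask] + next_label
        next_label += n
    return out

# two disconnected regions share the value 1; they must get distinct labels
img = np.array([[1, 1, 0],
                [0, 0, 0],
                [1, 1, 2]])
labels = label_regions(img)
```

Any colormap (e.g. matplotlib's hsv) can then map the unique labels to distinct colours, just as label2rgb does in MATLAB.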

Python connected components with pixel list

In MATLAB you can use
cc = bwconncomp(bimg);
pixels = cc.PixelIdxList{i}
to get the pixel list of each connected component. What's the Python equivalent?
I tried
from skimage import measure
label = measure.label(bimg)
to get the labels; however, this does not come with a pixel list.
Any suggestions?
The regionprops function in scikit-image returns the property "coords", an (N, 2) ndarray coordinate list (row, col) of the region. I ran into the same problem and am using this to get the pixel list from the label array.
To get pixel list of a connected component with some_label (e.g. some_label=1), if you have a labeled image:
pixels = numpy.argwhere(labeled_image == some_label)
See numpy.argwhere.
And see also this similar question.
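Putting it together on a tiny example (using scipy.ndimage.label for the labelling step; skimage's measure.label gives an equivalent label array for this purpose):

```python
import numpy as np
from scipy import ndimage

bimg = np.array([[0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [1, 0, 0, 1]], dtype=bool)

labels, n = ndimage.label(bimg)
# one (N, 2) array of (row, col) coordinates per component,
# analogous to MATLAB's cc.PixelIdxList
pixel_lists = [np.argwhere(labels == k) for k in range(1, n + 1)]
```

Note that MATLAB's PixelIdxList holds linear indices, while argwhere returns (row, col) pairs; use np.ravel_multi_index on the pairs if you need the linear form.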

Coloring only the inside of a shape

Let's say that you are given this image
and are told to programmatically color only the inside of it the appropriate color. The program would have to work not only on this shape and other primitives, but on any outlined shape, however complex it may be, shaded or not.
This is the problem I am trying to solve, and here's where I'm stuck: it seems like it should be simple to teach a computer to see black lines and color inside them, but searching mostly turns up eigenface-style recognition algorithms, which seem to me to be overkill and far greater complexity than is needed for at least the basic form of this problem.
I would like to frame this as a supervised learning classifier problem, the purpose of which is to feed my model a complete image and have it output smaller numpy arrays consisting of pixels classified as object or background. But in order to do that I would need training data, which seems to mean hand-labelling every pixel in my training set, which obviously defeats the purpose of the program.
Now that you have the background, here's my question: given this image, is there an efficient way to get two distinct arrays, each consisting of all adjacent pixels that do not contain any solid black (RGB (0,0,0)) pixels?
That would make one set all the pixels on the inside of the circle, and the other all the pixels on the outside of the circle.
You can use scipy.ndimage.measurements.label to do all the heavy lifting for you:
import scipy.ndimage
import scipy.misc

# scipy.misc.imread was removed in SciPy 1.2; use imageio.imread on newer versions
data = scipy.misc.imread(...)
assert data.ndim == 2, "Image must be monochromatic"

# find and number all disjoint white regions of the image
is_white = data > 128
labels, n = scipy.ndimage.measurements.label(is_white)

# get a set of all the region ids which are on the edge - we should not fill these
on_border = set(labels[:, 0]) | set(labels[:, -1]) | set(labels[0, :]) | set(labels[-1, :])

for label in range(1, n + 1):  # label 0 is all the black pixels
    if label not in on_border:
        # turn every pixel with that label black
        data[labels == label] = 0
This will fill all closed shapes within the image, considering a shape cut by the edge of the image not to be closed
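To see the approach run end-to-end without a file on disk, here is a self-contained variant on a synthetic image (a black ring on a white background; the sizes are arbitrary):

```python
import numpy as np
import scipy.ndimage

# synthetic stand-in for a monochrome picture: a black ring on white
h, w = 64, 64
data = np.full((h, w), 255, dtype=np.uint8)
yy, xx = np.mgrid[:h, :w]
r = np.hypot(yy - 32, xx - 32)
data[(r > 18) & (r < 22)] = 0          # the black outline

is_white = data > 128
labels, n = scipy.ndimage.label(is_white)

# regions touching the border are "outside" and must not be filled
on_border = set(labels[:, 0]) | set(labels[:, -1]) | set(labels[0, :]) | set(labels[-1, :])
for lab in range(1, n + 1):
    if lab not in on_border:
        data[labels == lab] = 0        # fill the enclosed interior
```

After the loop the ring's interior is black while the border-connected exterior is untouched, which is exactly the inside/outside split the question asks for.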
