I am converting an image (a full-pixel semantic segmentation mask, where each object corresponds to a constant color; no illumination or other effects, and the image is aliased) to find the contour of each object. Ideally, I expect adjacent objects to share a boundary.
My current approach does not produce a shared boundary because I isolate each connected component, so the boundaries of adjacent contours overlap. Can you suggest an approach that yields shared boundaries?
Approach:
Create a mask for each of the unique colors.
Find the connected components for each object in the mask.
Find the contour for each connected component (a sketch of this pipeline is shown below).
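A minimal sketch of that pipeline, assuming a BGR mask read with OpenCV (the file name and the OpenCV 4 return signatures are assumptions):

import cv2
import numpy as np

img = cv2.imread("mask.png")  # hypothetical path to the color-coded mask

# One binary mask per unique color; the image is aliased, so exact matching works.
colors = np.unique(img.reshape(-1, 3), axis=0)
all_contours = []
for color in colors:
    mask = cv2.inRange(img, color, color)
    n, labels = cv2.connectedComponents(mask)
    for i in range(1, n):  # label 0 is the background
        comp = (labels == i).astype(np.uint8)
        cs, _ = cv2.findContours(comp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        all_contours.extend(cs)

Because each component is traced in isolation, adjacent objects each get their own boundary pixels, which is exactly why the contours overlap instead of being shared.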
Input image: https://drive.google.com/file/d/1-12gVzPUueXSOpg4EOSRxi1Dx2nBIFQ9/view?usp=sharing
Output image generated from the contours (identical to the input, but with overlapping contours): https://drive.google.com/file/d/19WzIVe3iXU6IibEojNgHlEaNO3FuLgdW/view?usp=sharing
Overlapping contours inside the red doodle (see yellow and green): https://drive.google.com/file/d/1g02cvbwS1toNIbj4icZunRx70I-6i923/view?usp=sharing
The image generated from the contours looks similar to the input, but the doodle image above shows the overlapping contours.
My goal is to separate all the objects from each other. After that, I could use blob detection to measure the area of each one and build a histogram of the objects' size distribution.
Original image:
The problem is that the objects are merging with each other, mainly due to their shadows and/or their proximity to each other.
Final results - bounding box:
Binary image:
I have tried Canny edge detection and holistically-nested edge detection, but I still have this issue.
What can I do to fix it?
You can get the box coordinates for each detection, extract the detected region based on those coordinates, and then apply your filter.
Check out this post on regions of interest:
https://stackoverflow.com/a/58211775/14770223
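A minimal sketch of that idea, assuming you already have (x, y, w, h) boxes (e.g. from cv2.boundingRect on each detected contour); the file name, box values, and the Gaussian blur standing in for "your filter" are all placeholders:

import cv2

img = cv2.imread("input.png")  # hypothetical path

# Placeholder detections as (x, y, w, h) bounding boxes.
boxes = [(120, 80, 60, 45), (300, 200, 50, 70)]

for i, (x, y, w, h) in enumerate(boxes):
    roi = img[y:y + h, x:x + w]             # crop the detected region
    roi = cv2.GaussianBlur(roi, (5, 5), 0)  # stand-in for your filter
    cv2.imwrite("detection_%d.png" % i, roi)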
I am new to OpenCV, so please bear with me.
Currently, I get contours of both white and black regions in my binary image. I only want the black contours, though (so where the value is 0). Is there some kind of setting I can adjust to get this result? Or can I filter them?
Also: cv.findContours() returns both the contours and a hierarchy. What is the hierarchy used for?
And lastly: contours seemingly consist of an array with multiple coordinates. What do they mean?
cv2.findContours finds all the contours in your image. Some are internal, some are external, some are nested inside other contours.
For this reason, the method returns multiple arrays of coordinates: each contour is the list of boundary points of one shape.
The hierarchy is a vector that encodes the relationships between the extracted contours (external, nested, internal, etc.).
You can, however, set a retrieval mode to filter contours based on the hierarchy (for example, cv2.RETR_EXTERNAL keeps only the outermost contours).
Under no circumstances do contours contain information about color, so you need to filter for it in some other way.
I might add that a sensible thing you can do is filter the image before finding contours, so you find contours only in a mask you create based on the color or range of colors of your choice (see cv2.inRange); a sketch follows below.
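For the black-contours question above, a minimal sketch of that cv2.inRange route (file name and OpenCV 4 return signature assumed): since findContours traces white (nonzero) blobs, turning the black pixels white in the mask gives you exactly the black shapes.

import cv2

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

# Select only pixels with value 0; they become 255 in the mask.
mask = cv2.inRange(binary, 0, 0)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)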
I'm trying to extract the edge of a drop from the following image, to which I've first applied cv2.Canny():
I tried using cv2.findContours() (and taking the longest contour found), but this ends up being a closed loop around the drop's 1-pixel edge (shown exaggerated in blue).
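For what it's worth, a sketch of that attempt (file name assumed); the result is a closed polygon that runs down one side of the thin edge and back up the other:

import cv2

edges = cv2.imread("canny.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
longest = max(contours, key=len)  # closed loop around the 1-pixel edge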
Is there a way to extract just a single open edge (as a list of (x, y) points, similar to the structure of a contour returned by findContours()) that goes around the profile of the drop?
Images:
Original Image
After applying Canny
I am working on a project where I am able, after some processing, to obtain a binary image where moving objects are white and the rest is all black:
Binary image
Then, an algorithm agglomerates the blobs that are supposed to belong together, based on the distance between them (the one in the center, for example). To do this, it uses the findContours function so that each blob, flagged with a number, is represented by its contour pixels (there would be 5 in my image, the one in the center being composed of two close blobs). The output of the algorithm is the flags of the blobs that belong together; for example, with the above image, from top to bottom: (1, [2, 3], 4, 5).
Now I want to compute a concave hull for each of these agglomerated blobs. I have the algorithm to do it, but I can't apply it to just the outer pixels; I need the pixels of the whole object!
How can I do that?
The problem is that if I retrieve the pixels from the original image, I lose the connection between "pixels of the image" and "blobs". The blobs only have information about the contour.
I'd be grateful if you had an idea on how to solve this. :)
How about using connectedComponents (or connectedComponentsWithStats) instead of findContours?
It will find your blobs while giving you every pixel in each blob (not only the contour) in the label map it returns: each pixel holds the flag of the blob it belongs to.
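A minimal sketch, assuming a binary input where blobs are white (the file name is a placeholder):

import cv2
import numpy as np

binary = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

# labels has the image's shape; stats[i] holds the bounding box and area of blob i.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, n):               # label 0 is the background
    ys, xs = np.where(labels == i)  # every pixel of blob i, not just its contour

# Agglomerated blobs (e.g. the [2, 3] pair) just merge their pixel sets:
group_mask = np.isin(labels, [2, 3])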
I am attempting to use machine learning (namely random forests) for image segmentation. The classifier utilizes a number of different pixel-level features to classify pixels as either edge or non-edge pixels. I recently applied my classifier to a set of images that are pretty difficult to segment even manually (Image segmentation based on edge pixel map) and am still working on obtaining reasonable contours from the resulting probability map. I also applied the classifier to an easier set of images and am obtaining quite good predicted outlines (Rand index > 0.97) when I set the threshold to 0.95. I am interested in improving the segmentation result by filtering contours extracted from the probability map.
Here is the original image:
The expert outlines:
The probability map generated from my classifier:
This can be further refined when I convert the image to binary based on a threshold of 0.95:
I tried filling holes in the probability map, but that left me with a lot of noise and sometimes merged nearby cells. I also tried contour finding in OpenCV, but this didn't work either, as many of these contours are not completely connected; a few pixels are missing here and there in the outlines.
Edit: I ended up using Canny edge detection on the probability map.
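For reference, a minimal sketch of that edit, assuming the probability map is a float array in [0, 1] (the file name and the hysteresis thresholds are placeholders to tune):

import cv2
import numpy as np

prob = np.load("probability_map.npy")  # hypothetical file

prob8 = (prob * 255).astype(np.uint8)  # cv2.Canny needs an 8-bit image
edges = cv2.Canny(prob8, 100, 200)     # hysteresis thresholds to tune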
The initial image seems to be well contrasted, and I guess we can simply threshold it to obtain a good estimate of the cells. Here is a morphological area-based filtering of the thresholded image:
Threshold:
Area-based opening filter (this needs to be set based on your dataset of cells under study):
Area-based closing filter (this needs to be set based on your dataset of cells under study):
Contours using I - erode(I) (the inner morphological gradient):
Code snippet:
% C is the input image
C10 = C > 10; % threshold depends on the average contrast in your dataset
C10_areaopen = bwareaopen(C10, 2500); % area opening removes small components that are not cells
C10_areaopenclose = ~bwareaopen(~C10_areaopen, 100); % area closing fills small holes
se = strel('disk', 1);
figure, imshow(C10_areaopenclose - imerode(C10_areaopenclose, se)) % inner contour: I - erode(I)
To get smoother shapes, I guess fine opening operations can be performed on the filtered images, thus removing concave parts of the cells. Also, for cells that are attached, one could use the distance function and apply the watershed over the distance function to obtain segmentations of the cells: http://www.ias-iss.org/ojs/IAS/article/viewFile/862/765
I guess this can also be used on your probability/confidence maps to perform nonlinear area-based filtering.
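If useful, here is a rough OpenCV sketch of that distance-function-plus-watershed idea for splitting touching cells; the file name and the 0.5 seed threshold are assumptions to tune:

import cv2
import numpy as np

binary = cv2.imread("cells_binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

# Distance to the background peaks at cell centers.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# Seeds: points well inside each cell (0.5 * max is a guess to tune).
_, seeds = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
seeds = seeds.astype(np.uint8)
n, markers = cv2.connectedComponents(seeds)

# Reserve 0 for the "unknown" band between seeds, then flood with watershed;
# touching cells split along the ridge of the distance map.
markers = markers + 1
markers[(binary > 0) & (seeds == 0)] = 0
color = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)  # watershed needs 3 channels
markers = cv2.watershed(color, markers)           # boundary pixels get label -1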