I am new to OpenCV, so please bear with me.
Currently, I get contours of both white and black things in my binary image. I only want black contours though (so where the value is 0). Is there some kind of setting I can adjust to get this result? Or can I filter them?
Also: cv.findContours() does return both the contours and a hierarchy. What is hierarchy used for?
And lastly: contours seem to consist of arrays of multiple coordinates. What do these coordinates mean?
cv2.findContours finds all the contours in your image. Some are internal, some are external, some are nested inside other contours.
That is why the method returns multiple contours, each of which is an array of (x, y) points tracing a boundary.
Hierarchy is a vector that contains information about these different levels of extracted contours (external, nested, internal, etc.).
You can, however, set a retrieval mode (e.g. cv2.RETR_EXTERNAL, cv2.RETR_TREE) to filter contours based on hierarchy.
Contours never contain any information about color, so you need to filter for color some other way.
I might add that a sensible thing to do is filter the image before finding contours, so that contours are found only in a mask you create based on the color, or range of colors, of your choice (see cv2.inRange).
Related
I am working on foreground object segmentation, and I wanted to know if there's a way to detect contours within a bounding box.
So far, I've been working on detecting contours from the whole image, but since that image is really crowded, it has become difficult to segment out particular objects.
I've used cv2.boundingRect() to draw a bounding box around the object, and then tried to use cv2.grabCut() to get an accurate mask of the object, but it just cuts around the box rather than close to the shape of the object.
I was hoping that if I detect contours within the bounding box, I'll be able to get a more accurate mask of the object.
Any suggestions on how I can do that?
I am converting an image (a full-pixel semantic segmentation mask, with each object in a constant color and no illumination or other effects; an aliased image) to find the contour of each object. Ideally, I am expecting a shared boundary between adjacent objects.
My current approach does not produce a shared boundary because I am isolating each connected component, so the boundaries of adjacent contours overlap. Can you suggest an approach that yields shared boundaries?
Approach:
Create a mask for each of the unique colors.
Find the connected components for each object in the mask.
Find the contour for each connected component.
input image-https://drive.google.com/file/d/1-12gVzPUueXSOpg4EOSRxi1Dx2nBIFQ9/view?usp=sharing
output image generated from contours(identical to the input but has overlapping contours)-https://drive.google.com/file/d/19WzIVe3iXU6IibEojNgHlEaNO3FuLgdW/view?usp=sharing
overlapping contours in the red doodle, see yellow and green-https://drive.google.com/file/d/1g02cvbwS1toNIbj4icZunRx70I-6i923/view?usp=sharing
The image generated from the contours looks similar to the input, but the doodled image above shows the overlapping contours.
What I'm trying to do is find big light-polluted areas in a photo of nighttime Earth. I converted the source photo to grayscale and then to a binary image with a threshold. cv2.findContours works properly, but when I try to get rid of small contours it only deletes some of them.
Source image
import cv2

# Raw strings: '\U' in a plain string is an escape-sequence error.
image_orig = cv2.imread(r'C:\Users\pc\Desktop\middleeast.jpg')
image_gray = cv2.cvtColor(image_orig, cv2.COLOR_BGR2GRAY)
_, image_threshold = cv2.threshold(image_gray, 60, 255, cv2.THRESH_BINARY)
# OpenCV 3.x signature (three return values)
_, contours, _ = cv2.findContours(image_threshold, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
image_contours = image_orig.copy()
cv2.drawContours(image_contours, contours, -1, (255, 255, 0), 1)
cv2.imshow('image_contours', image_contours)
cv2.imwrite(r'C:\Users\pc\Desktop\middleEastAllContours.jpg', image_contours)
for counter, contour in enumerate(contours):
    if cv2.contourArea(contour) < 250.0:
        contours.pop(counter)
image_big_contours = image_orig.copy()
cv2.drawContours(image_big_contours, contours, -1, (255, 255, 0), 1)
cv2.imshow('big contours', image_big_contours)
cv2.waitKey(0)
As you can see there still are plenty of small light polluted areas contoured. How can I get rid of them?
All contours comparison
Big contours comparison
I think the problem is the pop inside the for loop: when you pop a contour, you skip the next one.
For example, if you pop contour number 10, the next contour becomes number 10, but you will skip it because the next iteration looks at contour 11.
I am not a Python expert (I don't really know how the two loop variables interact), but you could try a counter = counter - 1 after the pop. Another option is to iterate the list backwards (start at the last element and finish at the first).
I am trying to identify numbers in images. I am using the cv2.findContours function to "separate" the digits in the photo. But even after several modifications to the image, the function recognises arbitrary contours, even in the absolute top-left corner of the image, although the final modified image has a completely black background with only the digits in white. Why is that? Complete source code, along with the photo and everything else:
https://github.com/tanmay-edgelord/HandwrittenDigitRecognition
Source code :
https://github.com/tanmay-edgelord/HandwrittenDigitRecognition/blob/master/performRecognition.ipynb
In this code I am using the line rects = [cv2.boundingRect(ctr) for ctr in ctrs] to identify the contours returned by the function. Upon printing it out, I found that many of the bounding rectangles are (0, 0, 0, 0). If any further details/clarification are required, please comment.
I am attempting to write a program to find the intersections of the lines outlining a rectangular object (e.g. a computer screen). I originally tried Hough lines; however, due to variable lighting conditions as well as content appearing on the screen, the lines that are drawn are not always the outline of the screen. Additionally, there may be a large number of other random objects in the frame.
My next approach was to use contours which always seem to outline both the screen as well as being able to handle the variable content encapsulated within it. How do I go about using the contours to approximate a line?
I used
print(len(contours))
and consistently got a fairly large value (length > 200).
Feel free to comment asking for clarity.
EDIT
The green "lines" are the contours found by findContours. I am primarily interested in the contour surrounding the screen content. How can I use those contours to find a line approximating those contours and then find the point of intersection of the two lines? This is from a webcam stream so conditions, angle, and distance may not stay constant.
A first step might be to use the size of the contours to filter out those you are not interested in since the smaller contours usually correspond to stuff on the screen.
Also, findContours can return the contours in a hierarchy of nested contours. This tells you which contour is contained within another and lets you get the outermost contour, the second outermost, and so on. If you are trying to get the screen, it would be a large contour, possibly the second largest, nested just below the contour for the monitor.
After obtaining the potential contours, which are just lists of points, you can do a robust fit of a single rectangle to these points, either with RANSAC (using a model for rectangles, not a homography) or with a Hough transform modified for this case.