How to fill a circle contour at the edge of an image? - python

In the image above, I want to eventually be able to fill in the contours of the colored circles. Unfortunately, the yellow circle on the bottom is right at the edge of the image, so cv2.findContours() doesn't work:
_, green_contours, _ = cv2.findContours(green_seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Does anyone know how I can fill in the yellow circle's contour even though it's at the edge of the image? Because the circle is cut off by the border, the pixels along the bottom edge don't complete the circle (they aren't valued at 255), so the contour never closes.
I looked online and some people say it's possible to draw a big box around the image and then find the contours, but if I do that, OpenCV only draws the big contour around the entire image and not the circle.
Any other thoughts would be greatly appreciated!
EDIT: The image I show above is just one case I'm considering. I'm trying to make this general enough that for any contour at the edge of the image, I can still fill in the contour with cv2.drawContours().
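One general trick (a hedged sketch, not from the original post) is the classic hole-filling approach: flood-fill the background from a corner and invert whatever the fill couldn't reach. It also works for shapes cut off by the border, because the shape's endpoints block the background fill along the edge. This assumes green_seg is the binary mask and that pixel (0, 0) is background:
import cv2
import numpy as np
green_seg = cv2.imread('green_seg.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input mask
h, w = green_seg.shape
flood = green_seg.copy()
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 px larger than the image
cv2.floodFill(flood, mask, (0, 0), 255)    # paint the outside background white
holes = cv2.bitwise_not(flood)             # pixels the fill couldn't reach = enclosed interiors
filled = green_seg | holes                 # original shapes plus their filled interiors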

If you have (more or less) convex polygons, you actually CAN use cv2.findContours. Having the contours, try to find the center of mass, e.g. using cv2.moments, and then use this as the seed point in cv2.floodFill.
Please see the following code snippet. I assume you can identify your polygons by color. Also, instead of some advanced calculation of the center of mass, I just used the center point of the bounding rectangle of each contour. Maybe that's also sufficient for your use case?
import cv2
import numpy as np

# Set up test image
colors = [(0, 255, 0), (0, 0, 255)]
input = np.zeros((400, 400, 3), np.uint8)
cv2.circle(input, (100, 100), 50, colors[0], 10)
cv2.circle(input, (150, 350), 75, colors[1], 10)
output = input.copy()

# Iterate all colors...
for i, c in enumerate(colors):

    # Mask color
    img = np.all(output == c, axis=2).astype(np.uint8) * 255

    # Find contours with respect to OpenCV version
    cnts = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]

    # Get bounding rectangles; derive seed points for flood filling; flood fill
    rects = [cv2.boundingRect(c) for c in cnts]
    seeds = [(np.int32(r[0] + r[2] / 2), np.int32(r[1] + r[3] / 2)) for r in rects]
    [cv2.floodFill(output, mask=None, seedPoint=s, newVal=c) for s in seeds]

cv2.imshow('input', input)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
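If the bounding-rect center ever falls outside a strongly non-convex shape, the cv2.moments-based center of mass mentioned above is a drop-in replacement for the seed computation (a sketch, same assumptions as the snippet above):
# Sketch: derive seed points from the contour centroids instead of the
# bounding-rect centers, using cv2.moments as mentioned above.
seeds = []
for cnt in cnts:
    m = cv2.moments(cnt)
    if m['m00'] != 0:  # guard against degenerate contours
        seeds.append((int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])))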
That's the input:
And, that's the output:
Hope that helps!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
NumPy: 1.18.1
OpenCV: 4.1.2
----------------------------------------

A simple way, perhaps not so precise, is to compute the convex hull for each contour and draw the interior with a fill color:
import cv2
import numpy as np
import sys

# load the input image
img = cv2.imread('test_images/partial_contour.png')
if img is None:
    print('!!! Failed imread')
    sys.exit(-1)

output_img = img.copy()

# isolate just the colored drawings
ret, thres_bgr = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)
thres_bgr[np.where((thres_bgr == [255, 255, 255]).all(axis=2))] = [0, 0, 0]  # replace white pixels with black

# convert from 3 channels (BGR) to a single channel (gray)
gray_img = cv2.cvtColor(thres_bgr, cv2.COLOR_BGR2GRAY)

# this loop processes all the contours found in the image
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    # compute a convex hull
    hull = cv2.convexHull(cnt)
    # fill the inside with red
    cv2.fillPoly(output_img, pts=[hull], color=(0, 0, 255))

cv2.imshow('output_img', output_img)
cv2.imwrite('fill_partial_cnt_output.png', output_img)
cv2.waitKey(0)

Related

Extract out the floor from room image?

I am trying to extract the floor of a room image as a separate region. Assume it can be extracted and displayed using a basic approach. I am using OpenCV instead of any segmentation approaches, as I think those approaches would take time while OpenCV will be fast. It should work with every image, i.e. remain dynamic.
Here is what I am using right now:
Image:
Code:
import cv2
import numpy as np
# Load the image
img = cv2.imread("room.jpg")
# Convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply Gaussian blur to remove noise
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# Use Canny edge detection to find edges
edges = cv2.Canny(blur, 50, 150)
# Find contours in the image
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Sort the contours by area
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Set a minimum area for the contour to be considered
min_area = 5000
# Loop through the contours and draw only those with an area greater than min_area
for cnt in contours:
    if cv2.contourArea(cnt) > min_area:
        cv2.drawContours(img, [cnt], -1, (0, 255, 0), 3)
        break
# Display the result
cv2.imshow("Floor", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
It is not giving the desired results; it is placing points in the wrong locations. How can I sort this out? Opinions and other suggestions would also be appreciated.

Calculating the ratio of black and white pixels along a rotated line

I want to develop an algorithm to recognize, through an image, whether the object present in said image is a spoon, a fork, or a knife. To accomplish this I am thinking of first comparing the ratio between white and black pixels along a line: if the ratio is roughly the same along the ROI (with a tolerance of, say, 30%), I can say it's a knife; if not, I move on to determine whether it's a spoon or a fork.
But my problem starts here: I can't control whether or not the spoon/fork/knife comes correctly oriented, and if it comes rotated, my algorithm needs to be able to check along rotated lines as well. I can already get the angle of rotation, but I don't really know what to do with it. The code I have so far is:
import cv2
import numpy as np
img = cv2.imread('GenericImages/PBL3/fork.jpg')
NC = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshold = cv2.threshold(NC, 50, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draws contours
for c in contours:
    if cv2.contourArea(c) < 1000:
        continue
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    cv2.drawContours(img, [box], 0, (0, 191, 255), 2)
cv2.imshow('ROI', img)
cv2.waitKey(0)
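One way to put that angle to use (a hedged sketch, not from the original question): deskew the thresholded image with cv2.warpAffine using the cv2.minAreaRect angle, so the black/white ratio can then be measured along plain rows and columns of the upright ROI:
# Sketch: rotate the object upright using the minAreaRect angle, then
# compute per-column white-pixel ratios (cross-sections along the utensil).
(cx, cy), (rw, rh), angle = rect          # rect from cv2.minAreaRect above
if rw < rh:                               # normalize so the long side ends up horizontal
    angle -= 90
h, w = threshold.shape
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
upright = cv2.warpAffine(threshold, M, (w, h))
roi = cv2.getRectSubPix(upright, (int(max(rw, rh)), int(min(rw, rh))), (cx, cy))
ratios = roi.mean(axis=0) / 255.0         # fraction of white pixels per cross-section
# A knife should give a roughly constant profile; a fork or spoon won't.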

Using OpenCV to find a rotated rectangle of a certain aspect ratio?

I am not having difficulty transforming a found box; the problem is that I am not able to detect the box in the first place when it is at an angle.
Here is a sample image. I want the largest ~1230:123 rectangle in the image; the problem is that the rectangle can be rotated.
Here is a picture of a rotated barcode that I am unable to detect:
The function I have been using to process the image uses contour area and just looks for the largest rectangle.
What methods should I use to look for a rotated rectangle so that even when rotated I can detect it?
# PYTHON 3.6 snippet for image processing
import cv2
import imutils
# 'image' is the input BGR frame, loaded elsewhere
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# compute the Scharr gradient magnitude representation of the images
# in both the x and y direction using OpenCV 2.4
ddepth = cv2.cv.CV_32F if imutils.is_cv2() else cv2.CV_32F
gradX = cv2.Sobel(gray, ddepth=ddepth, dx=1, dy=0, ksize=-1)
gradY = cv2.Sobel(gray, ddepth=ddepth, dx=0, dy=1, ksize=-1)
# subtract the y-gradient from the x-gradient
gradient = cv2.subtract(gradX, gradY)
gradient = cv2.convertScaleAbs(gradient)
# blur and threshold the image
blurred = cv2.blur(gradient, (8, 8))
(_, thresh) = cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)
# construct a closing kernel and apply it to the thresholded image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# perform a series of erosions and dilations
closed = cv2.erode(closed, None, iterations = 4)
closed = cv2.dilate(closed, None, iterations = 4)
# find the contours in the thresholded image, then sort the contours
# by their area, keeping only the largest one
cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
# compute the rotated bounding box of the largest contour
rect = cv2.minAreaRect(c)
You don't need all the preprocessing (like Sobel, erode, dilate) before executing findContours.
findContours works better when shapes are solid (filled with white) instead of having just the edges.
I suppose you can keep the code from cv2.findContours to the end, and get the result you are looking for.
You may use the following stages:
Apply binary threshold using Otsu's thresholding (just in case image is not a binary image).
Execute cv2.findContours, and find the contour with the maximum area.
Use cv2.minAreaRect for finding the minimum area bounding rectangle.
Here is a code sample:
import numpy as np
import cv2
img = cv2.imread('img.png', cv2.IMREAD_GRAYSCALE) # Read input image as gray-scale
ret, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU) # Apply threshold using Otsu's thresholding (just in case image is not a binary image).
# Find contours in img.
cnts = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
# Find the contour with the maximum area.
c = max(cnts, key=cv2.contourArea)
# Find the minimum area bounding rectangle
# https://stackoverflow.com/questions/18207181/opencv-python-draw-minarearect-rotatedrect-not-implemented
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
# Convert image to BGR (just for drawing a green rectangle on it).
bgr_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.drawContours(bgr_img, [box], 0, (0, 255, 0), 2)
# Show images for debugging
cv2.imshow('bgr_img', bgr_img)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
Note: The largest contour seems to be a parallelogram and not a perfect rectangle.

How to approximate jagged edges as lines using Python OpenCV?

I am trying to find accurate locations for the corners on ink blotches as seen below:
My idea is to fit lines to the edges and then find where they intersect. As of now, I've tried using cv2.approxPolyDP() with various values of epsilon to approximate the edges; however, this doesn't look like the way to go. My cv2.approxPolyDP code gives the following result:
Ideally, this is what I want to produce (drawn on paint):
Are there CV functions in place for this sort of problem? I've considered using Gaussian blurring before the threshold step although that method does not seem like it would be very accurate for corner finding. Additionally, I would like this to be robust to rotated images, so filtering for vertical and horizontal lines won't necessarily work without other considerations.
Code:
import numpy as np
from PIL import ImageGrab
import cv2

def process_image4(original_image):  # Douglas-Peucker approximation
    # Convert to black and white threshold map
    gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    (thresh, bw) = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Convert bw image back to color so that red, green and blue contour lines are visible, draw contours
    modified_image = cv2.cvtColor(bw, cv2.COLOR_GRAY2BGR)
    contours, hierarchy = cv2.findContours(bw, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(modified_image, contours, -1, (255, 0, 0), 3)

    # Contour approximation
    try:  # Just to be sure it doesn't crash while testing!
        for cnt in contours:
            epsilon = 0.005 * cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, epsilon, True)
            # cv2.drawContours(modified_image, [approx], -1, (0, 0, 255), 3)
    except:
        pass
    return modified_image

def screen_record():
    while True:
        screen = np.array(ImageGrab.grab(bbox=(100, 240, 750, 600)))
        image = process_image4(screen)
        cv2.imshow('window', image)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

screen_record()
A note about my code: I'm using screen capture so that I can process these images live. I have a digital microscope that can display live feed on a screen, so the constant screen recording will allow me to sample from the video feed and locate the corners live on the other half of my screen.
Here's a potential solution using thresholding + morphological operations:
Obtain binary image. We load the image, blur with cv2.bilateralFilter(), grayscale, then Otsu's threshold
Morphological operations. We perform a series of morphological open and close to smooth the image and remove noise
Find distorted approximated mask. We find the rotated bounding rectangle coordinates of the object with cv2.minAreaRect() and cv2.boxPoints(), then draw this onto a mask
Find corners. We use the Shi-Tomasi Corner Detector already implemented as cv2.goodFeaturesToTrack() for corner detection. Take a look at the documentation for an explanation of each parameter
Here's a visualization of each step:
Binary image -> Morphological operations -> Approximated mask -> Detected corners
Here are the corner coordinates:
(103, 550)
(1241, 536)
Here's the result for the other images
(558, 949)
(558, 347)
Finally for the rotated image
(201, 99)
(619, 168)
Code
import cv2
import numpy as np

# Load image, bilateral blur, and Otsu's threshold
image = cv2.imread('1.png')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray, 9, 75, 75)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Perform morphological operations
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,10))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)

# Find distorted rectangle contour and draw onto a mask
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
rect = cv2.minAreaRect(cnts[0])
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(image, [box], 0, (36,255,12), 4)
cv2.fillPoly(mask, [box], (255,255,255))

# Find corners
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(mask, 4, .8, 100)
offset = 25
for corner in corners:
    x, y = corner.ravel()
    x, y = int(x), int(y)  # convert to int before drawing (newer OpenCV rejects float coordinates)
    cv2.circle(image, (x, y), 5, (36,255,12), -1)
    cv2.rectangle(image, (x - offset, y - offset), (x + offset, y + offset), (36,255,12), 3)
    print("({}, {})".format(x, y))

cv2.imshow('image', image)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('mask', mask)
cv2.waitKey()
Note: The idea for the distorted bounding box came from a previous answer in How to find accurate corner positions of a distorted rectangle from blurry image
After seeing the description of the corners, here is what I would recommend:
by any method, find the gross location of the corners (for instance by looking for the extreme values of (±X+±Y, ±X+±Y) or (±X, ±Y)).
consider a strip that joins two corners, with a certain width. Extract the pixels in that strip, on a portion close to the corner, rotate it to make it horizontal, and average the values along horizontals.
you will obtain a gray profile that tells you the accurate position of the edge, at the mean between the background and foreground intensities.
repeat on all four edges and at both ends. This will give you four accurate corners, by intersection.
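A sketch of that strip-averaging step (a hypothetical helper, not from the original answer): rotate the image so the edge between two gross corner estimates is horizontal, cut a strip across it with cv2.getRectSubPix, and average along the edge direction; the edge lies where the resulting gray profile crosses the mean of the background and foreground intensities.
import cv2
import numpy as np
def edge_profile_offset(gray, p0, p1, half_width=10):
    # p0, p1: gross corner estimates bounding one edge of the shape
    p0, p1 = np.float32(p0), np.float32(p1)
    cx, cy = (p0 + p1) / 2
    angle = np.degrees(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))
    # rotate the image so the edge p0 -> p1 becomes horizontal
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    upright = cv2.warpAffine(gray, M, (w, h))
    # extract a strip across the edge and average along the edge direction
    length = int(np.linalg.norm(p1 - p0))
    strip = cv2.getRectSubPix(upright, (length, 2 * half_width), (float(cx), float(cy)))
    profile = strip.mean(axis=1)  # gray profile perpendicular to the edge
    # the edge sits where the profile crosses the background/foreground mean
    mid = (profile.min() + profile.max()) / 2
    return np.argmin(np.abs(profile - mid)) - half_width  # offset from the strip center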

Extract building edges from map image using Python

I got a map image here.
I need to extract the edges of buildings for further process, the result would be like step 2 for the post here.
Since I am not familiar with this field, can this be done by libraries such as OpenCV?
Seems you want to select individual buildings, so I used color separation. The walls are darker, which makes for good separation in the HSV colorspace. Note that the final result can be improved by zooming in more and/or by using an imagetype with less compression, such as PNG.
Select walls
First I determined good values for separation. For that I used this script. I found that the best result came from thresholding the yellow and the gray separately and then combining the resulting masks. Not all walls closed perfectly, so I improved the result by closing the mask a bit. The result is a mask that displays all walls:
Left to right: Yellow mask, Gray mask, Combined and solidified mask
Find buildings
Next I used findContours to separate out buildings. Since the wall contours will probably not be very useful (as walls are interconnected), I used the hierarchy to find the 'lowest' contours (those that have no other contours inside of them). These are the buildings.
Result of findContours: the outline of all contours in green, the outline of individual buildings in red
Note that buildings on the edge are not detected. This is because, using this technique, they are not a separate contour but part of the exterior of the image. This can be solved by drawing a rectangle in gray on the border of the image. You may not want this in your final application, but I included it in case you do.
Code:
import cv2
import numpy as np

# load image
img = cv2.imread("fLzI9.jpg")

# draw gray box around image to detect edge buildings
h, w = img.shape[:2]
cv2.rectangle(img, (0, 0), (w - 1, h - 1), (50, 50, 50), 1)

# convert image to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# define color ranges
low_yellow = (0, 28, 0)
high_yellow = (27, 255, 255)
low_gray = (0, 0, 0)
high_gray = (179, 255, 233)

# create masks
yellow_mask = cv2.inRange(hsv, low_yellow, high_yellow)
gray_mask = cv2.inRange(hsv, low_gray, high_gray)

# combine masks
combined_mask = cv2.bitwise_or(yellow_mask, gray_mask)
kernel = np.ones((3, 3), dtype=np.uint8)
combined_mask = cv2.morphologyEx(combined_mask, cv2.MORPH_DILATE, kernel)

# find contours
contours, hier = cv2.findContours(combined_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# find and draw buildings
for x in range(len(contours)):
    # if a contour has no contours inside of it, draw the shape filled
    c = hier[0][x][2]
    if c == -1:
        cv2.drawContours(img, [contours[x]], 0, (0, 0, 255), -1)

# draw the outline of all contours
for cnt in contours:
    cv2.drawContours(img, [cnt], 0, (0, 255, 0), 2)

# display result
cv2.imshow("Result", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
With buildings drawn solid red and all contours as green overlay
Here's a simple approach:
Convert the image to grayscale and Gaussian blur to smooth edges
Threshold the image using cv2.threshold()
Perform Canny edge detection with cv2.Canny()
Find and draw contours using cv2.findContours() and cv2.drawContours()
import cv2
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
thresh = cv2.threshold(blurred, 240 ,255, cv2.THRESH_BINARY_INV)[1]
canny = cv2.Canny(thresh, 50, 255, 1)
cnts = cv2.findContours(canny, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], 0, (36,255,12), 2)
cv2.imshow('thresh', thresh)
cv2.imshow('canny', canny)
cv2.imshow('image', image)
cv2.imwrite('thresh.png', thresh)
cv2.imwrite('canny.png', canny)
cv2.imwrite('image.png', image)
cv2.waitKey(0)
