I am trying to focus on a region of interest (ROI) on the face by using NumPy cropping. For large images I have no trouble, but for smaller ones I can find the face and its matching bounding box; when I crop it and show the image, I get a gray background to the right of the image. I wouldn't mind, except that when I apply contours to the image, the bottom and right edges are picked up as contour edges. How do I make sure that the gray padding doesn't affect my contours?
Here is my function:
def head_ratio(self):
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    faces = face_cascade.detectMultiScale(self.gray, 1.3, 5)
    m_height, m_width, channels = self.image.shape
    x, y, w, h = faces[0]
    ROI = self.image[y:y+h, x:x+w]
    kernel = np.ones((5,5), np.float32)/10
    blurred = cv2.filter2D(self.gray[y:y+h, x:x+w], -1, kernel)
    ret, thresh = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
    _, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(ROI, contours, -1, (255, 255, 255), 3)
    cv2.imshow("output", ROI)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return 1
Thank you for your help. I have been trying to find an answer to this but was having trouble knowing what to search for.
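For what it's worth, the gray strip is typically just cv2.imshow window padding (the displayed window has a minimum size), not image data, so it cannot feed into findContours; border-hugging contours come from the thresholded ROI itself. Here is a minimal sketch, not from the original post, of one way to discard contours that touch the ROI edge (drop_border_contours is a made-up helper name):

import cv2

def drop_border_contours(contours, shape, margin=1):
    # Keep only contours whose bounding box stays strictly inside the ROI
    h, w = shape[:2]
    kept = []
    for c in contours:
        x, y, cw, ch = cv2.boundingRect(c)
        if x > margin and y > margin and x + cw < w - margin and y + ch < h - margin:
            kept.append(c)
    return kept

# e.g. contours = drop_border_contours(contours, thresh.shape) before drawContours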
Related
I have the following image: mask
I'm trying to separate each white piece into its own image. My approach is to find the contours, iterate over them, fill each one with white, and then save the new image.
So far, I've found the contours after using Canny Edge Detection: contours
But I can't seem to fill them all on the inside, since the edges are not fully connected: contours filled
Is there a way to fill in the contours without using dilation/erosion? I intend to preserve the image as it is, not altering it more than needed.
I've used the following code.
import cv2
import numpy as np

def get_blank_image(image):
    return np.zeros((image.shape[0], image.shape[1], 3), np.uint8)

# Load the mask image
image = cv2.imread('img.png')
cv2.waitKey(0)

blank_image = get_blank_image(image)
# cv2.imshow('blank', blank_image)

# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find Canny edges
edged = cv2.Canny(gray, 30, 50)
cv2.waitKey(0)

# Finding contours
# (in OpenCV 3.x findContours alters its input, so a copy such as edged.copy()
#  would be passed; here edged is used directly)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

good_contours = []
for con in contours:
    area = cv2.contourArea(con)
    if area > 10:
        good_contours.append(con)
        cv2.drawContours(blank_image, [con], 0, (255, 255, 255), thickness=cv2.FILLED)
        # cv2.fillPoly(blank_image, pts=[con], color=(255, 255, 255))
        # cv2.imshow('blank', blank_image)
        # cv2.waitKey(0)
        # good_contours.remove(con)

# cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)
cv2.imshow('blank', blank_image)
print("Number of Contours found = " + str(len(good_contours)))

# Draw all contours
# -1 signifies drawing all contours
cv2.drawContours(image, good_contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
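As a hedged alternative to the Canny-based pipeline above (assuming 'img.png' is the same white-pieces mask shown earlier): thresholding the mask directly makes every piece a solid, closed region, so each external contour can be filled and saved without any dilation or erosion.

import cv2
import numpy as np

image = cv2.imread('img.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Each white piece is already a solid blob, so RETR_EXTERNAL gives closed outlines
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for i, con in enumerate(contours):
    if cv2.contourArea(con) > 10:  # same small-area filter as above
        piece = np.zeros_like(image)
        cv2.drawContours(piece, [con], 0, (255, 255, 255), thickness=cv2.FILLED)
        cv2.imwrite(f'piece_{i}.png', piece)  # one image per white piece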
I'm trying to detect and draw a rectangular contour around every painting in, for example, this image:
I followed some guides and did the following:
Grayscale conversion
Applied median blur
Sharpen image
Applied adaptive Threshold
Applied Morphological Gradient
Find contours
Draw contours
And got the following result:
I know it's messy, but is there a way to detect and draw a contour around the paintings more reliably?
Here is the code I used:
import cv2
import numpy as np

path = '<PATH TO THE PICTURE>'

# reading in and showing the original image
image = cv2.imread(path)
image = cv2.resize(image, (880, 600))  # resize was necessary because of the large images
cv2.imshow("original", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

# grayscale conversion
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("painting_gray", gray)
cv2.waitKey(0)
cv2.destroyAllWindows()

# we need to find a way to detect the edges better, so we implement a couple of things
# A little help was found on Stack Overflow: https://stackoverflow.com/questions/55169645/square-detection-in-image
median = cv2.medianBlur(gray, 5)
cv2.imshow("painting_median_blur", median)  # we use median blur to smooth the image
cv2.waitKey(0)
cv2.destroyAllWindows()

# now we sharpen the image with help of the following URL: https://www.analyticsvidhya.com/blog/2021/08/sharpening-an-image-using-opencv-library-in-python/
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]])
image_sharp = cv2.filter2D(src=median, ddepth=-1, kernel=kernel)
cv2.imshow('painting_sharpened', image_sharp)
cv2.waitKey(0)
cv2.destroyAllWindows()

# now we apply adaptive thresholding
# thresholding: https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html#adaptive-thresholding
thresh = cv2.adaptiveThreshold(src=image_sharp, maxValue=255, adaptiveMethod=cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               thresholdType=cv2.THRESH_BINARY, blockSize=61, C=20)
cv2.imshow('thresholded image', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()

# let's apply a morphological transformation (gradient)
kernel = np.ones((7, 7), np.uint8)
gradient = cv2.morphologyEx(thresh, cv2.MORPH_GRADIENT, kernel)
cv2.imshow('morphological gradient', gradient)
cv2.waitKey(0)
cv2.destroyAllWindows()

# let's now find the contours of the image
# find contours: https://docs.opencv.org/4.x/dd/d49/tutorial_py_contour_features.html
contours, hierarchy = cv2.findContours(gradient, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print("contours: ", len(contours))
print("hierarchy: ", len(hierarchy))
print(hierarchy)

cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
cv2.imshow("contour image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Tips, help or code is appreciated!
Here's a simple approach:
Obtain binary image. We load the image, grayscale, Gaussian blur, then Otsu's threshold to obtain a binary image.
Two pass dilation to merge contours. At this point, we have a binary image but individual separated contours. Since we can assume that a painting is a single large square contour, we can merge small individual adjacent contours together to form a single contour. To do this, we create a vertical and horizontal kernel using cv2.getStructuringElement then dilate to merge them together. Depending on the image, you may need to adjust the kernel sizes or number of dilation iterations.
Detect paintings. Now we find contours and filter by contour area, using a minimum threshold area to remove small contours. Finally, we obtain the bounding rectangle coordinates and draw the rectangle with cv2.rectangle.
Code
import cv2

# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (13,13), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Two pass dilate with horizontal and vertical kernel
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,5))
dilate = cv2.dilate(thresh, horizontal_kernel, iterations=2)
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,9))
dilate = cv2.dilate(dilate, vertical_kernel, iterations=2)

# Find contours, filter using contour threshold area, and draw rectangle
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area > 20000:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 3)

cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey()
Here is the portrait frame at its actual size.
Here is a small piece of code:
#!/usr/bin/python3.7
# OpenCV 4.3.0, Raspberry Pi 3/B/4B w/ 4/8 GB RAM, Buster v10
# Date: 3rd June, 2020
import cv2

# Load the image
img = cv2.imread('portrait.jpeg')

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(img, 120, 890)

# Apply adaptive threshold
thresh = cv2.adaptiveThreshold(edged, 255, 1, 1, 11, 2)
thresh_color = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)

# apply some dilation and erosion to join the gaps - change iterations to detect more or fewer areas
thresh = cv2.dilate(thresh, None, iterations=50)
thresh = cv2.erode(thresh, None, iterations=50)

# Find the contours
contours, hierarchy = cv2.findContours(thresh,
                                       cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)

# For each contour, find the bounding rectangle and draw it
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 20000:
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(img,
                      (x, y), (x + w, y + h),
                      (0, 255, 0),
                      2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is the output:
I am trying to do a perspective transform for documents under various background and lighting conditions. Currently, I can't do it on this image because the contour is not closed.
The original Image
Edge Detection of image
So far, I have tried adaptive threshold, GaussianBlur, medianBlur, all kinds of morphology operations, Hough transform, and histogram equalization, and played with all of their parameters, but nothing has worked.
Can anyone help me with this issue?
Using Canny edges, dilation, and erosion gives this output:
import cv2
import numpy as np

# img: the input document image (BGR)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(img, 20, 40)

kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(edges, kernel, iterations=1)
erosion = cv2.erode(dilation, kernel, iterations=1)

ret, thresh = cv2.threshold(erosion, 200, 255, 0)
contours, hierarchy = cv2.findContours(thresh,
                                       cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

largest_areas = sorted(contours, key=cv2.contourArea)
cv2.drawContours(img, [largest_areas[-2]], -1, 255, 2)
cv2.imshow("image", img)
cv2.waitKey(0)
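To get from that contour to the perspective transform the question actually asks for, here is a hedged sketch of the usual next step (img and largest_areas are reused from the snippet above; the 500x700 output size and the 0.02 epsilon are arbitrary choices that may need tuning):

import numpy as np

page = largest_areas[-2]
peri = cv2.arcLength(page, True)
approx = cv2.approxPolyDP(page, 0.02 * peri, True)  # ideally four corners
if len(approx) == 4:
    pts = approx.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    ordered = np.float32([pts[s.argmin()],   # top-left
                          pts[d.argmin()],   # top-right
                          pts[s.argmax()],   # bottom-right
                          pts[d.argmax()]])  # bottom-left
    w, h = 500, 700
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    warped = cv2.warpPerspective(img, cv2.getPerspectiveTransform(ordered, dst), (w, h))
    cv2.imshow("warped", warped)
    cv2.waitKey(0)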
Right now I am trying to create a program that removes text from a background, but I am facing a lot of problems with it.
My approach is to use pytesseract to get the text boxes and, once I have the boxes, use cv2.inpaint to paint over them and remove the text. In short:
import cv2
import pytesseract
from pytesseract import Output

d = pytesseract.image_to_data(img, output_type=Output.DICT)  # Get text
n_boxes = len(d['level'])  # get boxes
for i in range(n_boxes):  # Loop through boxes
    # Get coordinates
    (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
    crop_img = img[y:y+h, x:x+w]  # Crop image
    gray = cv2.cvtColor(crop_img, cv2.COLOR_BGR2GRAY)
    gray = inverte(gray)  # Invert it (inverte is a custom helper, e.g. cv2.bitwise_not)
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)[1]
    dst = cv2.inpaint(crop_img, thresh, 10, cv2.INPAINT_TELEA)  # Then inpaint
    img[y:y+h, x:x+w] = dst  # Place the inpainted crop back into the source image
The problem is that I am not able to remove the text completely.
Image:
I am not sure what other method I can use to remove the text from the image; I am new to this, which is why I am having trouble. Any help is much appreciated.
Note: the image looks stretched because I resized it to fit the screen.
Original Image:
Here's an approach using morphological operations + contour filtering
Convert image to grayscale
Otsu's threshold to obtain a binary image
Perform morph close to connect words into a single contour
Dilate to ensure that all bits of text are contained in the contour
Find contours and filter using contour area
Remove text by "filling" in the contour rectangle with the background color
I used Chrome developer tools to determine the background color of the image, which was (222,228,251). If you want to dynamically determine the background color, you could try finding the dominant color using k-means (a sketch of that follows after the code below). Here's the result:
import cv2

image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,3))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, close_kernel, iterations=1)
dilate_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,3))
dilate = cv2.dilate(close, dilate_kernel, iterations=1)

cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area > 800 and area < 15000:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (222,228,251), -1)

cv2.imshow('image', image)
cv2.waitKey()
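The k-means idea mentioned above is not shown in the original answer, but a minimal sketch with cv2.kmeans could look like this (K=3 and the termination criteria are arbitrary choices); the dominant cluster center would then replace the hard-coded (222,228,251):

import cv2
import numpy as np

image = cv2.imread('1.jpg')
pixels = image.reshape(-1, 3).astype(np.float32)  # one row per pixel (B, G, R)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
dominant = centers[np.bincount(labels.ravel()).argmax()]  # center of the largest cluster
background_color = tuple(int(c) for c in dominant)  # use in place of (222,228,251)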
I want to save the image with the contour.
Here is my code:
img = cv2.imread('123.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
image, contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # some code in here
    cv2.imwrite('234.jpg', cnt)  # this doesn't work: cnt is a contour (an array of points), not an image
Thanks a lot.
What you want to do is create a mask that you draw the contours onto, then use that to snip out the rest of the picture, or vice versa. For instance, based on this tutorial:
import cv2
import numpy as np

# 'binary' and 'img' come from the question's code above
(contours, _) = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.ones(img.shape[:2], dtype="uint8") * 255

# Draw the contours on the mask
cv2.drawContours(mask, contours, -1, 0, -1)

# remove the contours from the image and show the resulting images
img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow("Mask", mask)
cv2.imshow("After", img)
cv2.waitKey(0)
The easiest way to save a contour as an image is to take out its ROI (region of interest) and save it using imwrite(), as follows:
First use cv2.boundingRect to get the bounding rectangle for a set of points (i.e. contours):
x, y, width, height = cv2.boundingRect(contours[i])
You can then use NumPy indexing to get your ROI from the image:
roi = img[y:y+height, x:x+width]
And save the ROI to a new file:
cv2.imwrite("roi.png", roi)
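Putting those pieces together inside the question's loop, a minimal sketch (the roi_{i}.png naming is just an example):

# 'img' and 'contours' come from the question's code above
for i, cnt in enumerate(contours):
    x, y, width, height = cv2.boundingRect(cnt)
    roi = img[y:y+height, x:x+width]
    cv2.imwrite(f"roi_{i}.png", roi)  # one file per contour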
I tried many times and finally made it work:
image = cv2.imread('muroprueba.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
cnts, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, cnts, -1, (0, 255, 0), 1)
cv2.imshow('image1', image)
cv2.waitKey(0)
cv2.imwrite(r'F:\caso1.jpg', image)  # Save the image
cv2.destroyAllWindows()