I have a small project where I need to calculate the area of the hair portion in two images and tell which one covers the greater area. I have separate code for hair extraction; it is not giving the expected results either, as you may have guessed from the image below, but I will work on that later.
I am trying to calculate the area from contours, which gives me an error like:
OpenCV(3.4.4) C:\projects\opencv-python\opencv\modules\imgproc\src\contours.cpp:195: error: (-210:Unsupported format or combination of formats) [Start]FindContours supports only CV_8UC1 images when mode != CV_RETR_FLOODFILL otherwise supports CV_32SC1 images only in function 'cvStartFindContours_Impl'
So why does findContours not support my image?
Another approach:
I only need to find the area of the hair portion. So I thought of calculating the area covered by all the white pixels and then subtracting it from the area of the whole image. In this case, I do not know how to calculate the area covered by all the white pixels. I chose this approach because the hair color can vary, but the background will always be white.
So, is this technique possible? Or can you suggest a solution for the error mentioned above?
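For concreteness, the white-pixel idea I have in mind would look something like this sketch (the cutoff of 200 for "white" is just a guess):
import cv2
img = cv2.imread("Hair.jpg", cv2.IMREAD_GRAYSCALE)
# pixels close to pure white count as background
_, white = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
white_area = cv2.countNonZero(white)
# hair area = total pixels minus white background pixels
hair_area = img.shape[0] * img.shape[1] - white_area
print(hair_area)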
My image:
My code:
import cv2
import numpy as np
img = cv2.imread("Hair.jpg")
_, contours, _ = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
c = max(contours, key = cv2.contourArea)
cv2.drawContours(img, [c], -1, (255,255, 255), -1)
area = cv2.contourArea(c)
print(area)
cv2.imshow("contour", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Your error already tells you what is wrong, specifically this part:
FindContours supports only CV_8UC1 images when mode
This means that it has to be a greyscale image. You pass an image loaded with:
img = cv2.imread("Hair.jpg")
which by default returns the image as CV_8UC3, or in simple words, in the BGR colorspace, even if your image only contains black and white. The solution is to load it as greyscale:
img = cv2.imread("Hair.jpg", cv2.IMREAD_GRAYSCALE)
Also, I notice that this is a .jpg file, which may introduce some artifacts that you may not like/want. To remove them, use threshold:
ret, thresh1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
I hope this helps you; if not, leave a comment.
Update:
The findContours function takes black as background and white as foreground. In your case it is the other way around. But there is an easy way to solve this: just invert the image when passing it:
_, contours, _ = cv2.findContours(255-img, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
255 is the max value the image can have, and this will turn black into white and white into black, giving you the correct contour.
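Putting it all together, a fixed version of the snippet could look like this (keeping the three-value findContours return of OpenCV 3.x, since the error shows 3.4.4):
import cv2
# load directly as greyscale (CV_8UC1)
img = cv2.imread("Hair.jpg", cv2.IMREAD_GRAYSCALE)
# hard threshold to clean up JPEG artifacts
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
# invert so the hair becomes white (foreground) on a black background
_, contours, _ = cv2.findContours(255 - thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
c = max(contours, key=cv2.contourArea)
print(cv2.contourArea(c))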
Related
I'm writing a program that takes an image containing a 4 by 4 grid of letters somewhere in it.
E.g.
I want to read these letters into my program and for that I'm using pytesseract for the OCR.
Before feeding the image to pytesseract I do some preprocessing with openCV to increase the odds of pytesseract working correctly.
This is the code I use for this:
import cv2
img = cv2.imread('my_image.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_pre_processed = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
And these are some sample outputs of img_pre_processed:
Since the letters in the grid are spaced apart, pytesseract has a difficult time reading them when I give it the entire image as input. So it would be helpful if I knew the coordinates of every letter; then I could edit the image in such a way that pytesseract can always recognise them.
I started to try and solve this problem on my own and the solution I'm coming up with might work but it's getting rather complicated. So I'm wondering if there is a better way to do it.
At the moment I'm using the cv2.findContours() function to get all the contours of the objects in the image. For every contour I calculate the center coordinates and the area of the box you would be able to draw around it. I then sort these by area to get the largest contours. Now here it starts to get more and more complicated. I can't just take the biggest 16 contours, because there might be unwanted objects in the picture with a bigger area than the 16 letters I want. Also, some letters like O, P, Q, ... have 2 contours, and their inner contour might even be bigger than another letter's outer contour, like that of the letter I for example.
E.g. this is an image with the 18 biggest contours marked with green boxes.
So to continue with my way of attacking the problem I would have to write an algorithm that finds the contours that are most likely part of the grid while ignoring the contours that are unwanted and also the inner contours of letters that have 2 contours.
While this is possible, I'm wondering if there is a better way of doing this.
Somebody told me that if you filter the image in such a way that everything gets blurrier, all the letters become blobs, and it might then be possible to do pattern detection on a 4x4 grid of blobs. But I don't know how to do that or whether it is possible.
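What I imagine is something along these lines (the blur kernel size and file name are guesses):
import cv2
img = cv2.imread('my_image.png', cv2.IMREAD_GRAYSCALE)
# a heavy blur merges each letter's strokes into a single blob
blurred = cv2.GaussianBlur(img, (25, 25), 0)
thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# blob centers; a 4x4 grid should give 16 roughly evenly spaced centers
centers = []
for c in contours:
    M = cv2.moments(c)
    if M['m00'] > 0:
        centers.append((int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])))
print(len(centers), sorted(centers))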
So if somebody knows a better way to tackle this problem, or knows how to execute the plan of attack I mentioned earlier, that would be most helpful.
Thanks in advance!
You can simply filter the bounding rectangles by width and height. As this is a rule-based approach, it may need more example images to fine-tune the filter rules.
import cv2
# get bounding rectangles of contours
img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
bbox = [cv2.boundingRect(c) for c in contours]
# filter rectangles by width and height
for x, y, w, h in bbox:
    if (4 < w < 200) and (30 < h < 200):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I am trying to separate a cow from a depth image.
When I use contours it separates most of it, but it fails to separate the fence when the cow is leaning on it. (Note: it is OK that the head is removed from the cow; the application I am using works better without it.)
Here is the code I use to detect and remove contours. My idea is to remove them by size. It works when the cow is not touching the fence, but in this case it doesn't work.
# Remove structures connected to the image border ------------------------------------------------
# find contours in the image and initialize the mask that will be used to remove the bad contours
cnts = cv2.findContours(BW3.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
mask = np.ones(BW3.shape[:2], dtype="uint8") * 255
# loop over the contours
for c in cnts:
    if cv2.contourArea(c) > 250000:
        # if the contour is bad (too big), draw it on the mask
        cv2.drawContours(mask, [c], -1, 0, -1)
        BW3 = cv2.bitwise_and(BW3, BW3, mask=mask)
    if cv2.contourArea(c) < 10000:
        # if the contour is bad (too small), draw it on the mask
        cv2.drawContours(mask, [c], -1, 0, -1)
        BW3 = cv2.bitwise_and(BW3, BW3, mask=mask)
cv2.imshow('H_Black and white', BW3)
cv2.waitKey()
Is there any way to remove the fencing around the cow when the cow is touching it? I have tried using HoughLinesP() with no luck; I am new to OpenCV, so I could be going about it the wrong way. Another potential solution would be to crop the image, but the camera is in a slightly different location each time, so the cropping would have to be adjusted for each camera variation. Any advice would be appreciated.
Thank you
EDIT:
The purpose of separating the cow from the background is to use volumetric estimation to determine the weight of the animal. If effectively implemented this will be a cheaper solution than a standard scale. This is for a research project (The project will be open-sourced, not monetized).
The original input depth image is cropped before any other code is run (all images reflect this except for the first depth image in this post). To get the contours I convert the depth picture to HSV, then take the hue channel and convert it to black and white before running cv2.findContours.
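In outline, that preprocessing is something like the following sketch (the file name and threshold value are placeholders):
import cv2
depth_bgr = cv2.imread('depth.png')
hsv = cv2.cvtColor(depth_bgr, cv2.COLOR_BGR2HSV)
# take the hue channel of the colormapped depth image
hue = hsv[:, :, 0]
# binarize the hue channel to black and white
_, BW3 = cv2.threshold(hue, 90, 255, cv2.THRESH_BINARY)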
Here is a reconstruction of the depth values from the jet colormap (inverted values so it's easier to read visually):
I am trying to OCR pictures of documents, and my current approach is:
Read the image in grayscale
Binarize it with thresholding
Warp perspective along the contours obtained from cv2.findContours()
The above works well if the image is not shadowed. Now I want to get contours of shadowed pictures. My first attempt was to use cv2.adaptiveThreshold for step 2. The adaptive threshold successfully weakened the shadow, but the resulting image lost the contrast between the paper and the background. That made it impossible for cv2 to find the contours of the paper. So I need another method to remove the shadow.
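The adaptive-threshold attempt was a call roughly like this (the block size and constant C shown are assumptions, not my exact values):
import cv2
gray = cv2.imread('document.jpg', cv2.IMREAD_GRAYSCALE)
# thresholds each pixel against a Gaussian-weighted local mean
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)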
Is there any way to remove the shadow while maintaining the background colour?
For reference, here is the sample picture I am processing with various approaches. From left, I did:
grayscale
thresholding
adaptive thresholding
normalization
My goal is to obtain the second picture without shadow.
Please note that I actually have a temporary solution specific to this picture, which is to process the shadowed part of the picture separately. Yet it is not a general solution for shadowed pictures, as its performance depends on the size, shape, and position of the shadow, so please suggest other methods.
This is the original picture.
Here is one way in Python/OpenCV using division normalization, optionally followed by sharpening and/or thresholding. Dividing the grayscale image by a heavily blurred copy of itself estimates and cancels the uneven illumination, which removes the shadow while keeping the background bright.
Input:
import cv2
import numpy as np
import skimage.filters as filters
# read the image
img = cv2.imread('receipt.jpg')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur
smooth = cv2.GaussianBlur(gray, (95,95), 0)
# divide the gray image by the smoothed image
division = cv2.divide(gray, smooth, scale=255)
# sharpen using unsharp masking
sharp = filters.unsharp_mask(division, radius=1.5, amount=1.5, multichannel=False, preserve_range=False)
sharp = (255*sharp).clip(0,255).astype(np.uint8)
# threshold
thresh = cv2.threshold(sharp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# save results
cv2.imwrite('receipt_division.png',division)
cv2.imwrite('receipt_division_sharp.png',sharp)
cv2.imwrite('receipt_division_thresh.png',thresh)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('sharp', sharp)
cv2.imshow('thresh', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division:
Sharpened:
Thresholded:
So I'm trying to recognize a region that's already been defined by a bounding box. Example:
Some of the areas within these rectangles in these images are white and some are black, and most of them are completely different sizes. The only common characteristic between these images is the red rectangle:
Essentially what I'm trying to do is create a randomly generated meme bot that places a random source image in the region defined by these rectangles. I already have tons of these images with areas predefined by these red rectangles. I want to automate the process somehow; currently, every resize shape and offset has to be defined manually for each template. So what I need to do is recognize the area within the rectangle and have it return the resize shape and offset needed to place the source image.
How should I go about this? Should I use something in OpenCV or am I going to have to train a CNN? Just really looking for a push in the right direction because I'm pretty lost as to the best approach to this problem.
I think OpenCV can do it. Below is a short example of the steps for what you need. Read the comments in the code for more details.
import cv2
import numpy as np
img = cv2.imread("1.jpg")
#STEP1: get only red color (or the bounding box color) in the image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of red color in HSV
lower_red = np.array([0,50,50])
upper_red = np.array([0,255,255])
# Threshold the HSV image to get only red colors
mask = cv2.inRange(hsv, lower_red, upper_red)
red_only = cv2.bitwise_and(img,img, mask= mask)
#STEP2: find contour
gray_img = cv2.cvtColor(red_only,cv2.COLOR_BGR2GRAY)
_,thresh = cv2.threshold(gray_img,1,255,cv2.THRESH_BINARY)
_,contours,_ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#max contour in the image is the box you want
areas = [cv2.contourArea(c) for c in contours]
sorted_areas = np.sort(areas)
cnt=contours[areas.index(sorted_areas[-1])]
r = cv2.boundingRect(cnt)
cv2.rectangle(img,(r[0],r[1]),(r[0]+r[2],r[1]+r[3]),(0,255,0),3)
cv2.imshow("img",img)
cv2.imshow("red_only",red_only)
cv2.imshow("thresh",thresh)
cv2.waitKey()
cv2.destroyAllWindows()
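As a follow-up: r = (x, y, w, h) from cv2.boundingRect already contains the offset and resize shape the question asks for. Continuing from the snippet above, pasting a source image could be sketched as (the file name is an assumption):
src = cv2.imread("source.jpg")
x, y, w, h = r
# resize the source to the box dimensions and paste it in place
img[y:y + h, x:x + w] = cv2.resize(src, (w, h))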
I'm currently working on an image where I have to find the boxes' outer regions, but I failed to find the regions of the white and black boxes.
input image:
https://i.imgur.com/gec9eP5.png
output image:
https://i.imgur.com/Giz1DAW.png
Update edit:
If I use HLS instead of HSV, I can find 3 more box regions, but 2 are still missing.
here is new output:
https://i.imgur.com/eUqltKI.png
and here is my code:
import cv2
import numpy as np
img = cv2.imread("1.png")
imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_blue = np.array([0,50,0])
upper_blue = np.array([255,255,255])
mask_blue = cv2.inRange(imghsv, lower_blue, upper_blue)
_, contours, _ = cv2.findContours(mask_blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
im = np.copy(img)
cv2.drawContours(im, contours, -1, (0, 255, 0), 2)
cv2.imwrite("contours_blue.png", im)
The mask you're generating with
mask_blue = cv2.inRange(imghsv, lower_blue, upper_blue)
does not include the bottom row at all, so it's impossible to detect these outlines with
_, contours, _ = cv2.findContours(mask_blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
You could try to work with multiple masks / thresholds to account for the different color ranges and merge the detected contours.
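A sketch of that multi-mask idea might look like this (the extra ranges for white-ish and black-ish boxes are guesses that would need tuning; the OpenCV 3.x three-value findContours return is kept to match the question):
import cv2
import numpy as np
img = cv2.imread("1.png")
imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# saturated colors (the original mask)
mask_color = cv2.inRange(imghsv, np.array([0, 50, 0]), np.array([255, 255, 255]))
# low-saturation, bright pixels (white-ish boxes)
mask_white = cv2.inRange(imghsv, np.array([0, 0, 200]), np.array([255, 60, 255]))
# low-value pixels (black-ish boxes)
mask_black = cv2.inRange(imghsv, np.array([0, 0, 0]), np.array([255, 255, 60]))
combined = cv2.bitwise_or(mask_color, cv2.bitwise_or(mask_white, mask_black))
_, contours, _ = cv2.findContours(combined, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)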
Color channel thresholding is not the optimal solution when you are dealing with objects of many different colors (that are not known in advance) and with a background color that is not necessarily distinctly different from all object colors. A combination of multiple thresholds/conditions could do the job for this particular image, but the same combination can fail for slightly different input, so I think this approach is generally not too good.
I think the problem is very elementary in nature, so I would recommend sticking to a simple approach. For example, if you apply the Sobel operator to your image, you get a result like the one below. The intensity of the result is weak on some borders, so I inverted the image colors to make it more visible.
There are tons of tutorials on the Sobel operator on the web, so I won't go into detail here. Your input image has no noise, so the intensity outside and within the boxes is zero. I would therefore suggest masking out all zero values. If you do contour detection after that, you will end up with two contours per square: one on the inner side of the border and one on the outer side. If you only want to extract contours on the outer border, see how contour hierarchy works in the OpenCV documentation. If you want the contour to lie exactly on the border, help yourself with the outer contour and erosion.
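A minimal sketch of that approach (the file name is an assumption; the OpenCV 3.x three-value findContours return is kept to match the question):
import cv2
import numpy as np
img = cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)
# horizontal and vertical Sobel derivatives
sx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(sx, sy)
# mask out all zero values: on a noise-free image only the borders remain
edges = np.uint8(magnitude > 0) * 255
# RETR_CCOMP gives a two-level hierarchy: outer contours and the inner ones
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)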