Why is rectangle not showing up in my code?
import cv2
im = cv2.imread('players.bmp')
#im.shape >>returns (765,1365,3)
cv2.rectangle(im, (64,1248), (191,1311), (0,255,0), 2)
cv2.namedWindow("image", cv2.WINDOW_NORMAL)
cv2.imshow('image', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
I had a completely different cause for the issue.
In my case the rectangle wasn't showing up because I was drawing onto a NumPy view of the image: instead of converting with cv2.cvtColor(frame, cv2.COLOR_RGB2BGR), I used frame[..., ::-1] to swap RGB and BGR.
That slicing produces a view rather than a new array, so cv2.rectangle cannot write into it in place, and the image just doesn't get changed.
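A minimal sketch of the workaround, with frame standing in for any RGB array: force a contiguous copy of the view before drawing.

import cv2
import numpy as np

frame = np.zeros((200, 300, 3), dtype=np.uint8)   # placeholder RGB image
im = np.ascontiguousarray(frame[..., ::-1])       # contiguous BGR copy, not a view
cv2.rectangle(im, (10, 10), (100, 100), (0, 255, 0), 2)  # the drawing now sticks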
It does not show the rectangle because you are drawing it outside the image.
Why, you may ask? It is simple. You have this:
#im.shape >>returns (765,1365,3)
This means
rows/height = 765
cols/width = 1365
channels = 3
Then you do
cv2.rectangle(im, (64,1248), (191,1311), (0,255,0), 2)
Here you pass 2 points as (x,y) tuples, but you have written them as if they were (y,x) tuples. OpenCV does use the (y,x) order in many functions, but that is because those functions treat the image as a matrix, which is conventionally accessed as (row, column), i.e. (y,x). cv2.rectangle, however, takes Points expressed in the usual Cartesian order (x,y).
In conclusion, just change it to:
cv2.rectangle(im, (1248, 64), (1311, 191), (0,255,0), 2)
And it should work.
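A quick sanity check of the two conventions, using the im array and shape from the question:

h, w = im.shape[:2]  # h = 765 rows (y), w = 1365 columns (x)
# The pixel at row 64, column 1248 is indexed as im[64, 1248],
# but that same point is passed to drawing functions as (1248, 64).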
I stumbled upon this question while facing the same problem. In my case, I was correctly passing x and y along with x+w and y+h. I had made two mistakes though:
I had wrapped this part in a try/except block, so I couldn't see the exception being raised.
My code was changing h to an array before I drew the rectangle. (I had extracted the values from x,y,w,h = cv2.boundingRect(cnt) and then mistakenly reassigned h to [1,-1,-1,-1].)
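A hedged illustration of that failure mode with toy data: with plain integers the call succeeds, and without a bare try/except the error from the reassigned h would have surfaced immediately.

import cv2
import numpy as np

img = np.zeros((100, 100, 3), np.uint8)  # toy image
cnt = np.array([[[10, 10]], [[40, 10]], [[40, 40]], [[10, 40]]])  # toy contour
x, y, w, h = cv2.boundingRect(cnt)  # all plain ints
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draws fine
# h = [1, -1, -1, -1]  # after a reassignment like this, y + h raises a TypeError,
# which a bare try/except would silently swallow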
Related
I have read through dozens of questions on this topic here (see e.g. 1, 2, 3). There are a lot of helpful explanations of how to play around with parameters, watershedding, etc. Yet no matter what I try to put together, I am still not managing a halfway-passable count of the cells in my image.
Here are two examples of the kind of images I need to process.
Initially I was trying to count all the cells, but because of the difference in focus at the edges (where it gets blurrier) I thought it might be easier to count cells within a rectangle the user selects.
I was hopeful this would improve the results, but as you can see, HoughCircles is both selecting empty spaces with nothing in them as circles and missing many cells:
Other algorithms I have tried have fared worse.
My code:
import cv2
import numpy as np

cap = cv2.VideoCapture(video_file)
frames = []
while True:
    frame_exists, curr_frame = cap.read()
    if frame_exists:
        frames.append(curr_frame)
    else:
        break
frames = [cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) for frame in frames]

for img in frames:
    circles = cv2.HoughCircles(img,
                               cv2.HOUGH_GRADIENT,
                               minDist=10,
                               dp=1.1,
                               param1=4,  # the lower the number, the more circles found
                               param2=13,
                               minRadius=4,
                               maxRadius=10)
    if circles is not None:
        circles = np.round(circles[0, :]).astype("int")
        for (x, y, r) in circles:
            cv2.circle(img, (x, y), r, (0, 255, 0), 1)
    cv2.imshow("result", img)
    cv2.waitKey(0)  # needed for imshow to actually render the window
Edit: adding my (unhelpful) preprocessing code:
denoise = cv2.fastNlMeansDenoising(img, h=4.0, templateWindowSize=15, searchWindowSize=21)
thresh = cv2.adaptiveThreshold(denoise, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 41, 2)
(and then I passed thresh to HoughCircles instead of img)
It just didn't seem to make any difference...
I believe these cells are not circular enough for Hough to work well; you would have to lower param2 too far to account for the lack of uniformity. I would recommend looking into the cv2.findContours method instead, using your 'thresh' image.
https://docs.opencv.org/4.x/dd/d49/tutorial_py_contour_features.html
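For illustration, a minimal sketch of that approach; thresh is the adaptive-threshold image from the question, and the area bounds are placeholder guesses that would need tuning to the real cell size.

import cv2

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# keep blobs whose area plausibly matches one cell; specks and merged clumps are dropped
cells = [c for c in contours if 20 < cv2.contourArea(c) < 300]
print("cell count:", len(cells))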
I am in the initial stages of writing a Rubik's cube solver and am stuck at the following challenge:
Using the following image-processing code gives me the following image:
import cv2 as cv
import glob
import numpy as np

for img in glob.glob("captured_images/*.jpg"):
    image = cv.imread(img)
    copy = image.copy()
    grey = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
    decrease_noise = cv.fastNlMeansDenoising(grey, 10, 15, 7, 21)
    blurred = cv.GaussianBlur(decrease_noise, (3, 3), 0)
    canny = cv.Canny(blurred, 20, 40)
    thresh = cv.threshold(canny, 0, 255, cv.THRESH_OTSU + cv.THRESH_BINARY)[1]
    contours = cv.findContours(thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    for c in contours:
        # obtain the bounding rectangle coordinates for each square
        x, y, w, h = cv.boundingRect(c)
        # with the bounding rectangle coordinates we draw the green bounding boxes
        cv.rectangle(copy, (x, y), (x + w, y + h), (36, 255, 12), 2)
    cv.imshow('copy', copy)
    cv.waitKey(0)
    cv.destroyAllWindows()
Numerous bounding rectangles are highlighted. Trying to filter out only the squares with this code:
contour_list = []
for contour in contours:
    approx = cv.approxPolyDP(contour, 0.01 * cv.arcLength(contour, True), True)
    area = cv.contourArea(contour)
    if len(approx) == 4:
        (x, y, w, h) = cv.boundingRect(approx)
        if (float(w) / h) == 1:
            cv.rectangle(copy, (x, y), (x + w, y + h), (36, 255, 12), 2)
            contour_list.append(contour)
doesn't work, as the detected squares aren't precise enough to satisfy the definition "all sides of a square are equal".
I thought retaking the images against a white background might make it easier to find the relevant squares. However, modifying the original image to a cube on a white background and using the original code causes only the larger cube to be recognised as a square:
My question is three-fold:
1a) How can I modify my original code for the original image to accurately measure only the relevant squares by using the following criteria for finding squares:
There must be four corners
All four lines must be roughly the same length
All four corners must be roughly 90 degrees
1b) In the second image with the white background, how can I select everything outside the bounding rectangle and convert that white background to black, which helps greatly in correctly detecting the appropriate squares?
1c) In general, why is a black background so much more beneficial than a white background when using the cv2.rectangle() function?
Any help in gaining some clearer understanding is much appreciated! :)
How can I modify my original code for the original image to accurately measure only the relevant squares by using the following criteria for finding squares:
Your code only accepts contours that are exactly square. You need a "squaredness" factor and then an acceptable threshold.
The "squaredness" factor is h/w if w > h else w/h. The closer that value is to one, the more square the rectangle. Then you can accept only rectangles with a factor of 0.9 or higher (or whatever works best).
In general, why is a black background so much more beneficial than a white background when using the cv2.rectangle() function?
The contour finding algorithm that OpenCV uses is actually:
Suzuki, S. and Abe, K., Topological Structural Analysis of Digitized Binary Images by Border Following. CVGIP 30 1, pp 32-46 (1985)
In your case, the algorithm may well have picked up the contours fine, but you have set the RETR_EXTERNAL flag, which causes OpenCV to report only the outermost contours. Try changing it to RETR_LIST.
Find the OpenCV docs with regards to contour finding here: https://docs.opencv.org/master/d9/d8b/tutorial_py_contours_hierarchy.html
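Concretely, that is a one-line change to the findContours call (sketched against the question's variable names):

contours = cv.findContours(thresh, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]  # as in the question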
As part of a program that processes a series of images, I first need to detect a green-coloured rectangle. I'm trying to write a program that doesn't use colour masking, since the lighting and glare in the images will make it difficult to find the appropriate HSV ranges.
(P.S. I already have two questions based on this program, but this one is unrelated to those. It's not a follow-up; I want to address a separate issue.)
I used the standard rectangle detection technique, making use of the findContours() and approxPolyDP() methods. I added some constraints that got rid of unnecessary rectangles (like aspectRatio > 2.5, since my desired rectangle is clearly the "widest", and area > 1500, to discard random small rectangles).
import numpy as np
import cv2 as cv

img = cv.imread("t19.jpeg")
width = 0
height = 0
start_x = 0
start_y = 0
end_x = 0
end_y = 0
output = img.copy()
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# threshold
th = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 9, 2)
cv.imshow("th", th)

# rectangle detection
contours, _ = cv.findContours(th, cv.RETR_TREE, cv.CHAIN_APPROX_NONE)
for contour in contours:
    approx = cv.approxPolyDP(contour, 0.01 * cv.arcLength(contour, True), True)
    cv.drawContours(img, [approx], 0, (0, 0, 0), 5)
    x = approx.ravel()[0]
    y = approx.ravel()[1]
    x1, y1, w, h = cv.boundingRect(approx)
    a = w * h
    if len(approx) == 4 and x > 15:
        aspectRatio = float(w) / h
        if aspectRatio >= 2.5 and a > 1500:
            print(x1, y1, w, h)
            width = w
            height = h
            start_x = x1
            start_y = y1
            end_x = start_x + width
            end_y = start_y + height
            cv.rectangle(output, (start_x, start_y), (end_x, end_y), (0, 0, 255), 3)
            cv.putText(output, "rectangle " + str(x1) + " , " + str(y1 - 5), (x1, y1 - 5), cv.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 0))

cv.imshow("op", output)
print("start", start_x, start_y)
print("end", end_x, end_y)
print("width", width)
print("height", height)
It is working flawlessly for all the images, except one:
I used adaptive thresholding to create the threshold, which was used by the findContours() method.
I tried displaying the threshold and the output, and they look like this:
The thresholds for the other images also looked similar...so I can't pinpoint what exactly has gone wrong in the rectangle detection procedure.
Some tweaks I have tried:
Changing the last two parameters of the adaptive threshold method: I tried 11,1 and 9,1, and for both the rectangle in the threshold looked more prominent, but then the output detected no rectangles at all.
I have already ruled out Otsu thresholding, as it does not work for about 4 of my test images.
What exactly can I tweak in the rectangle detection procedure for it to detect this rectangle?
I also request, if possible, only slight modifications to this method rather than an entirely new one. As I have mentioned, this method works perfectly for all of my other test images, and if a newly suggested method works for this image but fails for the others, I'll find myself back here asking why it failed.
Edit: The method that abss suggested worked for this image, but failed for:
image 4
image 1, far off
Other test images:
image 1, normal
image 2
image 3
image 9, part 1
image 9, part 2
You can easily do it by adding these lines of code after your threshold:
kernel = cv.getStructuringElement(cv.MORPH_RECT,(3,3))
th = cv.morphologyEx(th,cv.MORPH_OPEN,kernel)
This will remove noise from the image. You can see this link for more about morphologyEx: https://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html
The result I got is shown below:
I have made a few modifications to your code so that it works with all of your test images. There are a few false positives that you may have to filter based on the HSV colour range for green (since your target is always a shade of green). Alternatively, you can take into account that one of the child contours of your ROI contour is going to be roughly 0.4 or more times the size of the outer contour. Here are the modifications:
Used DoG (difference of Gaussians) for thresholding useful contours
Changed the arcLength multiplier to 0.05 instead of 0.01, as square corners are not smooth
cv2.RETR_CCOMP to get a 2-level hierarchy
Moved approxPolyDP inside the filter conditions to make it more efficient
Changed the contour area filter to 600 to catch the ROI in all test images
Removed a little unnecessary code
Check with all the other test images that you may have and modify the parameters accordingly.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("/path/to/your_image")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# difference of Gaussians (DoG) to isolate useful edges
gw, gs, gw1, gs1, gw2, gs2 = (3, 1.0, 7, 3.0, 3, 2.0)
img_blur = cv2.GaussianBlur(gray, (gw, gw), gs)
g1 = cv2.GaussianBlur(img_blur, (gw1, gw1), gs1)
g2 = cv2.GaussianBlur(img_blur, (gw2, gw2), gs2)
ret, thg = cv2.threshold(g2 - g1, 127, 255, cv2.THRESH_BINARY)

# RETR_CCOMP gives a two-level hierarchy: outer contours and their holes
contours, hier = cv2.findContours(thg, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

img_cpy = img.copy()
width = 0
height = 0
start_x = 0
start_y = 0
end_x = 0
end_y = 0
for i in range(len(contours)):
    # skip contours that have no child in the hierarchy
    if hier[0][i][2] == -1:
        continue
    x, y, w, h = cv2.boundingRect(contours[i])
    a = w * h
    aspectRatio = float(w) / h
    if aspectRatio >= 2.5 and a > 600:
        approx = cv2.approxPolyDP(contours[i], 0.05 * cv2.arcLength(contours[i], True), True)
        if len(approx) == 4 and x > 15:
            width = w
            height = h
            start_x = x
            start_y = y
            end_x = start_x + width
            end_y = start_y + height
            cv2.rectangle(img_cpy, (start_x, start_y), (end_x, end_y), (0, 0, 255), 3)
            cv2.putText(img_cpy, "rectangle " + str(x) + " , " + str(y - 5), (x, y - 5), cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 0))

plt.imshow(img_cpy)
plt.show()  # needed so the figure actually appears when run as a script
print("start", start_x, start_y)
print("end", end_x, end_y)
I have used:
result = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
to generate this output:
I need a list of (x, y) tuples at each of the local maxima (bright spots) in the result. Simply finding all points above a threshold doesn't work, since there are many such points around each maximum.
I can guarantee the minimum distance between any two maxima, which ought to help speed things up.
Is there an efficient technique for doing this?
(P.S.: this is cross-posted from https://forum.opencv.org/t/locating-local-maximums/1534)
update
Based on an excellent suggestion by Michael Lee, I've added a skeletonization step on the thresholded image. It's close, but the skeletonization still leaves many "worms" rather than single points. My processing flow is as follows:
import cv2 as cv
import numpy as np

# read the image
im = cv.imread("image.png", cv.IMREAD_GRAYSCALE)

# apply thresholding
ret, im2 = cv.threshold(im, args.threshold, 255, cv.THRESH_BINARY)

# dilate the thresholded image to eliminate "pinholes"
im3 = cv.dilate(im2, None, iterations=2)

# skeletonize the result
im4 = cv.ximgproc.thinning(im3, None, cv.ximgproc.THINNING_ZHANGSUEN)

# print the number of points found
x, y = np.nonzero(im4)
print(x.shape)
# => 1208
This is a step in the right direction, but there should be more like 220 points, not 1208.
Here are the intermediate results. As you can see in the last picture (skeletonized), there are still lots of little "worms" rather than single points. Is there a better approach?
Thresholded:
Dilated:
Skeletonized:
Update 2/14: It seems skeletonization only took you part of the way there. Here's a better solution, which I believe should get you the rest of the way. Here's how you would do it in scikit-image; perhaps you can find the analog in OpenCV (cv2.findContours seems like a good start).
# mask is the thresholded image (before or after dilation should work; no skeletonization)
from skimage.measure import label, regionprops
labeled_image = label(mask)
output_points = [region.centroid for region in regionprops(labeled_image)]
Explanation: label converts your binary image into a labeled image, where each mask has a different integer value. Then regionprops uses these labels to separate the masks, and the centroid property gives the middle point of each, which is guaranteed to be a single point.
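For reference, a possible OpenCV analog of label plus regionprops, assuming the same binary mask; cv2.connectedComponentsWithStats returns one centroid row per component, with row 0 being the background.

import cv2

num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
output_points = [tuple(c) for c in centroids[1:]]  # skip the background centroid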
"Simply finding all points above a threshold doesn't work, since there are many such points around each maximum."
Actually, this does work, as long as you apply one more processing step. After thresholding, we want to skeletonize. Scikit-image has a good function to achieve that here, which should give you a binary mask with single points.
Afterwards, you're probably going to want to run something like:
indices = zip(*np.where(skeleton))
to get your final points!
Based on Michael Lee's answer, here's the solution that worked for me (using all OpenCV rather than skimage):
import cv2 as cv

# read in color image and create a grayscale copy
im = cv.imread("image.png")
img = cv.cvtColor(im, cv.COLOR_BGR2GRAY)

# apply thresholding
ret, im2 = cv.threshold(img, args.threshold, 255, cv.THRESH_BINARY)

# dilate the thresholded peaks to eliminate "pinholes"
im3 = cv.dilate(im2, None, iterations=2)

contours, hier = cv.findContours(im3, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
print('found', len(contours), 'contours')

# draw a bounding box around each contour
for contour in contours:
    x, y, w, h = cv.boundingRect(contour)
    cv.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv.imshow('Contours', im)
cv.waitKey()
which results in just what we're looking for:
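If the end goal is the list of (x, y) tuples rather than drawn boxes, one possible extension (a sketch over the same contours list as above) is to take each contour's centroid from its image moments:

points = []
for contour in contours:
    m = cv.moments(contour)
    if m["m00"] != 0:  # guard against degenerate contours
        points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))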
I created a task where I have a white background and black digits.
I need to pick out the digit with the greatest stroke thickness. I have binarized my picture and recognized all the symbols, but I don't understand how to measure thickness. I tried arcLength(contours), but it gave me the largest digit by size. I tried morphological operations, but as I understood, those help to remove noise and other defects in the picture, right? I also thought about checking the distance between neighbouring points of the contours, but decided that would be hard because the symbols don't have an exact, clean shape (I drew them in Paint). So those are all the ideas I had. Can you help me by naming the topics in computer vision and OpenCV that could help me solve this task? I don't need an exact algorithm, only the topics. And if this isn't an OpenCV task, which is it? What library? Should I learn some set of topics and basics before attempting the solution?
One possible solution I can think of is to alternate erosion and contour finding until only one contour is left (that should be the thickest). This could work if the difference in thickness is large enough, but I can also foresee many corner cases that could prevent correct identification, so it depends very much on your original image.
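A rough sketch of that idea, assuming thresh is a binary image with white digits on a black background:

import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)
eroded = thresh.copy()
while True:
    contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if len(contours) <= 1:
        break  # the remaining contour, if any, belongs to the thickest digit
    eroded = cv2.erode(eroded, kernel)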
Have you thought about drawing a line from a certain point of the contour and looking for the points where the line intersects the contour? If you get the coordinates of two such points, you can measure the distance between them. I have made a sample to demonstrate what I mean. Note that this script is meant only as a demonstration of the idea; it will not work with pictures other than my sample. I would give a better one, but I only started programming a few months ago.
The first thing is to extract the contours, which you said you have already done (mind that cv2.findContours finds white shapes). Then you can get reference coordinates with cv2.boundingRect(); it returns the x, y coordinates, width, and height of a bounding rectangle for your contour (you could of course do something similar by extracting a small fraction of your contour on a mask and working from there). In my example I took the center of the box, moved the line slightly downwards, and then drew it to the left (I did this by appending to lists and converting them to arrays; there are probably a million better solutions). Then you look for points that lie both on your contour and on your line; those are the intersection points. I computed the thickness simply as the difference of the two x coordinates, which works for this demonstration, but a better approach would be sqrt((x2-x1)^2 + (y2-y1)^2). Maybe it will give you an idea. Cheers!
Sample code:
import cv2
import numpy as np

img = cv2.imread('Thickness2.png')
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray_image, 10, 255, cv2.THRESH_BINARY_INV)
im2, cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
font = cv2.FONT_HERSHEY_TRIPLEX

for c in cnts:
    two_points = []
    coord_x = []
    coord_y = []
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, False)
    if area > 1 and perimeter > 1:
        x, y, w, h = cv2.boundingRect(c)
        # center of the bounding box, shifted a little left and down
        cx = int((x + (w / 2))) - 5
        cy = int((y + (h / 2))) + 15
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # build a horizontal run of points starting at (cx, cy)
        for a in range(cx, cx + 70):
            coord_x.append(a)
            coord_y.append(cy)
        coord = list(zip(coord_x, coord_y))
        arrayxy = np.array(coord)
        arraycnt = np.array(c)
        # collect the points where the horizontal run meets the contour
        for a in arraycnt:
            for b in arrayxy:
                if a[:, 0] == b[0] and a[:, 1] == b[1]:
                    cv2.circle(img, (b[0], b[1]), 2, (0, 255, 255), -1)
                    two_points.append(b)
        pointsarray = np.array(two_points)
        # thickness as the horizontal distance between the two intersections
        thickness = int(pointsarray[1, 0]) - int(pointsarray[0, 0])
        print(thickness)
        cv2.line(img, (cx, cy), (cx + 50, cy), (0, 0, 255), 1)
        cv2.putText(img, 'Thickness : ' + str(thickness), (x - 20, y - 10), font, 0.4, (0, 0, 0), 1, cv2.LINE_AA)

cv2.imshow('img', img)
cv2.waitKey(0)  # needed for imshow to actually render the window
Output: