I'm working on a project where I'd like to take a bounding box already drawn on a subject and select it (via mouse click), so that something like a text dialog box hovers above the image and lets me type in a label. I'm already using OpenCV's Haar cascade classifier to detect the object and draw the initial bounding box, but so far I can't find the right combination of OpenCV calls to select that bounding box and then annotate it. The relevant code is below.
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30),
)
# Draw a rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
Would appreciate some good pointers. Thanks.
You can take the x/y position of the mouse and compare it to the bounding boxes.
The code below describes how you could do that.
First, to be able to process mouse input, you have to create a namedWindow. You can then attach a mouse callback to that window:
# create window
cv2.namedWindow("Frame")
# attach a callback to the window, that calls 'getFace'
cv2.setMouseCallback("Frame", getFace)
In the getFace callback, you check for a button press, then loop through the faces and test whether the x/y of the mouse falls within the bounding box of a face. If so, record that face's index. Note that OpenCV discards a mouse callback's return value, so store the index in a global variable rather than returning it.
selected_face = -1

def getFace(event, x, y, flags, param):
    global selected_face
    if event == cv2.EVENT_LBUTTONDOWN:
        # on a left-button press, loop through the faces
        for i, (face_x, face_y, w, h) in enumerate(faces):
            # if the click is within both the x- and y-range of this face,
            # remember the index of the face
            if face_x < x < face_x + w and face_y < y < face_y + h:
                selected_face = i
                break
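The hit test itself can be factored into a small, testable helper. A minimal sketch (the function name is mine; the box format matches the (x, y, w, h) tuples detectMultiScale returns):

```python
def hit_test(px, py, boxes):
    """Return the index of the first (x, y, w, h) box containing (px, py), else None."""
    for i, (x, y, w, h) in enumerate(boxes):
        if x <= px <= x + w and y <= py <= y + h:
            return i
    return None
```

The mouse callback can then just call this with the click coordinates and the current list of faces, and stash the result in a global for the main loop to use.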
Related
I have an application where I crop parts of an image and save them in OpenCV 4 using Python, first drawing rectangles before saving (SSCCE below). Some of the images are very large, so it is helpful to zoom in on them before selecting regions to crop.
The problem is that once you've zoomed in on an image, the mouse cursor permanently toggles to the interactive hand. Clicking and dragging then only serves to translate the position of the image in the window (unless I go back out to full size, but then I can't draw rectangles on the zoomed-in image).
So my question is once I've zoomed in, how can I get from this:
back to this:
So I can get back to drawing rectangles on the zoomed in image? I'd love to be able to right click to get back to the pointer (some magic with cv2.EVENT_RBUTTONDOWN), but maybe there is some built-in way to do it I'm just ignorant of?
One thing that sort of works: I changed my program so that right-button presses/releases draw the rectangles, so there is no longer any interference with the native left-click navigation. The problem is that I can draw more precise rectangles with an arrow cursor than with a hand. So while this is a hack I can use as a workaround, it would be really nice if I could toggle back to the arrow for drawing rectangles while zoomed in.
SSCCE
import cv2
import numpy as np

# Insert your path to file here:
input_path = r'C:/image0000.bmp'
image = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE)
(im_h, im_w) = image.shape
max_d = np.max([im_h, im_w])
line_width = int(np.ceil(max_d / 1000))

image_to_show = image.copy()
mouse_pressed = False
s_x = s_y = e_x = e_y = -1

def mouse_callback(event, x, y, flags, param):
    global image_to_show, s_x, s_y, e_x, e_y, mouse_pressed
    if event == cv2.EVENT_LBUTTONDOWN:
        mouse_pressed = True
        s_x, s_y = x, y
        image_to_show = np.copy(image)
    elif event == cv2.EVENT_MOUSEMOVE:
        if mouse_pressed:
            image_to_show = np.copy(image)
            cv2.rectangle(image_to_show, (s_x, s_y),
                          (x, y), (255, 255, 255), line_width)
    elif event == cv2.EVENT_LBUTTONUP:
        mouse_pressed = False
        e_x, e_y = x, y
        print(s_x, s_y, e_x, e_y)

cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.setMouseCallback('image', mouse_callback)

while True:
    cv2.imshow('image', image_to_show)
    k = cv2.waitKey(1)
    if k == 27:
        break
cv2.destroyAllWindows()
In the MFC framework, I use cvGetWindowHandle and SetClassLongPtr(), then use SendMessage to that window to force it to refresh:
HWND hWnd = (HWND)cvGetWindowHandle("SrcView");
if (event == CV_EVENT_RBUTTONDOWN || flag == CV_EVENT_FLAG_RBUTTON)
    SetClassLongPtr(hWnd, GCLP_HCURSOR, (LONG_PTR)pDlgMain->m_cursorDrag);
else
    SetClassLongPtr(hWnd, GCLP_HCURSOR, (LONG_PTR)LoadCursor(NULL, IDC_ARROW));
SendMessage(hWnd, WM_SETCURSOR, NULL, NULL);
Finding the corresponding Python equivalents of these three functions may be helpful.
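I'm not aware of a built-in OpenCV-Python way to do this, but a rough Python translation of those three Win32 calls using ctypes might look like the sketch below. The helper name is my own, the window title must match whatever you passed to cv2.namedWindow, and it only applies on Windows (it simply returns False elsewhere):

```python
import ctypes
import sys

# Win32 constants (values from WinUser.h)
GCLP_HCURSOR = -12
IDC_ARROW = 32512

def set_window_cursor(window_title, cursor_id=IDC_ARROW):
    """Reset the class cursor of a top-level window to a standard cursor.

    Returns True on success, False if not running on Windows or if the
    window was not found.
    """
    if sys.platform != "win32":
        return False
    user32 = ctypes.windll.user32
    hwnd = user32.FindWindowW(None, window_title)
    if not hwnd:
        return False
    hcursor = user32.LoadCursorW(None, cursor_id)
    # SetClassLongPtrW is the 64-bit-safe form of SetClassLong
    user32.SetClassLongPtrW(hwnd, GCLP_HCURSOR, hcursor)
    # WM_SETCURSOR = 0x0020: nudge the window to refresh its cursor
    user32.SendMessageW(hwnd, 0x0020, 0, 0)
    return True
```

You would call something like set_window_cursor('image') from inside your mouse callback when you want the arrow back.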
I am using this as a base for making a better vehicle counter, but the ROI line is hard-coded in the middle of the screen: cv.line(input_frame, (0, int(height/2)), (int(width), int(height/2)), (0, 0xFF, 0), 5)
I am looking for a way to make it dynamic: either by letting the user draw it (I can only find tutorials and answers for rectangles), or by making it movable with mouse clicks so it can move up and down the screen.
Thank you in advance.
Take a look here
You need to capture your mouse coordinates and then use cv.line
def click_and_crop(event, x, y, flags, param):
    # grab references to the global variables
    global refPt, cropping
    # if the left mouse button was clicked, record the starting
    # (x, y) coordinates and indicate that cropping is being
    # performed
    if event == cv2.EVENT_LBUTTONDOWN:
        refPt = [(x, y)]
        cropping = True
    # check to see if the left mouse button was released
    elif event == cv2.EVENT_LBUTTONUP:
        # record the ending (x, y) coordinates and indicate that
        # the cropping operation is finished (y = height, x = width)
        refPt.append((x, y))
        cropping = False
        # draw the horizontal line at the released y instead of the hard-coded
        # cv.line(input_frame, (0, int(height/2)), (int(width), int(height/2)), (0, 0xFF, 0), 5)
        cv2.line(image, (0, int(y)), (int(width), int(y)), (0, 255, 0), 2)
        cv2.imshow("image", image)
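The drawing call only really needs the clicked y, so the endpoint computation can be isolated in a small helper that is easy to test away from the GUI (the function name is mine):

```python
def roi_line_endpoints(y, width, height):
    """Clamp y into the frame and return the two endpoints of a horizontal ROI line."""
    y = max(0, min(int(y), height - 1))
    return (0, y), (width - 1, y)
```

In the callback you would then do pt1, pt2 = roi_line_endpoints(y, width, height) and pass those to cv2.line, which also protects against clicks reported slightly outside the frame.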
I'm using the code explained in https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/#comment-480634 and trying to detect the small rounded profile images (to be precise, 5 of them) displayed in the lower half of this sample Instagram page (attached). What I can't figure out is:
1. why only one of the 5 small rounded profile circles is captured by the code;
2. why a big circle is drawn on the page, which seems quite absurd to me.
Here is the code I'm using:
import cv2
import numpy as np

image = cv2.imread("instagram_page.png")
# we create a copy of the original image so we can draw our detected circles
# without destroying the original image
output = image.copy()
# the cv2.HoughCircles function requires an 8-bit, single-channel image,
# so we'll convert from the BGR color space to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# detect circles in the image. We pass in the image we want to detect circles in as the
# first argument, the circle detection method as the second (cv2.HOUGH_GRADIENT is
# currently the only method supported by OpenCV), an inverse accumulator resolution (dp)
# of 1.7 as the third, and finally a minDist of 1 pixel
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.7, minDist=1, param1=300, param2=100, minRadius=3, maxRadius=150)
print("Circles len -> {}".format(len(circles)))
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radii of the circles from floating
    # point to integers, allowing us to draw them on our output image
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of each circle
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        orange = (39, 127, 255)
        cv2.circle(output, (x, y), r, orange, 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

img_name = "Output"
cv2.namedWindow(img_name, cv2.WINDOW_NORMAL)
cv2.resizeWindow(img_name, 800, 800)
cv2.imshow(img_name, output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I use minDist = 1 to make sure those close circles are potentially captured. Does anybody see something completely wrong with my parameters?
I played around with the parameters and managed to detect all circles (Ubuntu 16.04 LTS x64, Python 3.7, numpy==1.15.1, python-opencv==3.4.3):
circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    1.7,
    minDist=100,
    param1=48,
    param2=100,
    minRadius=2,
    maxRadius=100
)
I'm learning OpenCV (and Python) and have managed to get OpenCV to detect my nose and move the mouse using the movement of my nose, but since it loses track of my nose often, I want it to fall back to moving using my face instead. I've managed to draw rectangles around my face and my nose in video.
I tried being cheeky and just putting the loop for my face rectangle in "if cv2.rectangle" (for the nose), but that is always true. My question is: how can I test whether the nose is detected, fall back to moving the mouse with the face if it isn't, and go back to using the nose once it is re-detected?
My loops as of now
# Here we draw the square around the nose, face and eyes that are detected.
for (x, y, w, h) in nose_rect:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)
    if cv2.rectangle:
        m.move(x * 4, y * 4)  # TODO: write an if that goes into face if nose is not visible
        break
    else:
        for (x, y, w, h) in face_rect:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
            break

for (x, y, w, h) in eye_rect:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (205, 0, 0), 3)
    break
I can post my entire program if that helps. I've tried a bunch of the official OpenCV tutorials but did not manage to find an answer to my question there.
Thank you for all replies!
PS: I'm using Python 3.5
Here's the snippet you should use in your code:
if len(nose_rect) > 0:
    print("Only Nose")
    for (x, y, w, h) in nose_rect:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)
        # Here we say that m (the variable created before) should move the mouse
        # using the x and y variables from the nose rect. We have accelerated the
        # movement speed by 4 to make it possible to navigate the cursor across
        # the whole screen.
        m.move(x * 4, y * 4)
elif len(face_rect) > 0:
    print("Only Face")
    for (x, y, w, h) in face_rect:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
elif len(eye_rect) > 0:
    print("Only Eye")
    for (x, y, w, h) in eye_rect:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (205, 0, 0), 3)
else:
    print("Nothing detected.")
Also, for waiting, use the time.sleep() method:
time.sleep(0.001)  # wait 1 millisecond before showing the next frame
if cv2.waitKey(1) & 0xFF == ord('q'):  # exit on pressing 'q'
    break
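The priority logic in that if/elif chain can also be pulled out into a plain function, which is easier to test than the drawing code (the function name is mine):

```python
def pick_tracking_source(nose_rect, face_rect, eye_rect):
    """Return (label, rects) for the highest-priority non-empty detection list.

    Priority is nose, then face, then eyes; returns (None, []) when
    nothing was detected.
    """
    for label, rects in (("nose", nose_rect),
                         ("face", face_rect),
                         ("eye", eye_rect)):
        if len(rects) > 0:
            return label, rects
    return None, []
```

In the main loop you would call this once per frame, draw rectangles for the returned list, and only call m.move when the label is "nose" or "face".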
In Tkinter, when I create an image on a canvas and look up its coordinates, only two coordinates are returned, so the find_overlapping method doesn't work with it (naturally). Is there an alternative?
You should be able to get the image's bounding box (bbox) by calling bbox = canvas.bbox(imageID). Then you can use canvas.find_overlapping(*bbox).
The coordinates it returns should be the coordinates of the top left corner of the image. So if the coordinates you got were (x, y), and your image object (assuming it's a PhotoImage) is img, then you can do:
w, h = img.width(), img.height()
canvas.find_overlapping(x, y, x + w, y + h)
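Under the hood, find_overlapping is just an axis-aligned rectangle intersection test over every item's bbox. As a plain-Python illustration (the helper name is mine, not part of Tkinter):

```python
def bboxes_overlap(a, b):
    """True if two axis-aligned boxes given as (x1, y1, x2, y2) intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # boxes overlap when each interval overlaps on both axes
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2
```

This is useful if you cache item bboxes yourself and want to avoid repeated canvas queries in a tight event handler.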