How to check if there is no face detected using cvlib - python

The cvlib library in Python is well established and many people in the research community use it. I noticed that if there is no face detected, the (for) loop stops. For example, if I have the following code:
import cv2
import cvlib as cv

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print('Could not open video device')

# To read the resolution
vid_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
vid_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)

while True:
    ret, frame = cap.read()
    if not ret:
        continue
    faces, confidences = cv.detect_face(frame)
    # loop through detected faces and add bounding box
    for face in faces:
        (startX, startY) = face[0], face[1]
        (endX, endY) = face[2], face[3]
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
        crop_img = frame[startY-5:endY-5, startX-5:endX-5]
        print(faces)
        cv2.imshow('object detected', frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break

cap.release()
cv2.destroyAllWindows()
As I'm printing (faces), the output is something like this:
[[392, 256, 480, 369]]
[[392, 256, 478, 369]]
[[392, 255, 478, 370]]
.
.
.
[[392, 255, 478, 370]]
However, as soon as I block the camera or move my head away from it, no face is detected and the for loop freezes or pauses until it sees a face to detect again.
I need an if statement or some other condition to catch this freeze or pause and do something else.

I have found the answer to this question: add an if statement before the for loop, since len(faces) gives the number of faces currently in front of the camera (0, 1, 2, ...).
if len(faces) == 0:
    print('no faces')
    print(faces)  # prints an empty list: []
else:
    for face in faces:
        (startX, startY) = face[0], face[1]
        (endX, endY) = face[2], face[3]
        crop_img = frame[startY-5:endY-5, startX-5:endX-5]
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
cv2.imshow('object detected', frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
    break

cap.release()
cv2.destroyAllWindows()

You are displaying only the frames where a face is detected. If no face is detected, you keep seeing the last frame where a face was detected, until a face is detected again in a later frame.
Move the imshow out of the for loop. For example,
# loop through detected faces and add bounding box
for face in faces:
    (startX, startY) = face[0], face[1]
    (endX, endY) = face[2], face[3]
    cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
    crop_img = frame[startY-5:endY-5, startX-5:endX-5]
print(faces)
cv2.imshow('object detected', frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
    break
Check out the complete example here.

Related

How to rotate camera recorded video?

I am trying to detect faces in a camera-recorded video. When I did it with webcam video, it worked fine. But with the camera-recorded video, the video gets rotated by -90 degrees. Please suggest how I can get the actual video output for face detection.
import cv2
import sys

cascPath = sys.argv[1]
faceCascade = cv2.CascadeClassifier('C:/Users/HP/Anaconda2/pkgs/opencv-3.2.0-np112py27_204/Library/etc/haarcascades/haarcascade_frontalface_default.xml')

#video_capture = cv2.VideoCapture(0)
video_capture = cv2.VideoCapture('C:/Users/HP/sample1.mp4')
w = int(video_capture.get(3))
h = int(video_capture.get(4))
#output = cv2.VideoWriter('output_1.avi', cv2.VideoWriter_fourcc('M','J','P','G'), 60, frameSize=(w, h))

while True:
    ret, frame = video_capture.read()
    frame = rotateImage(frame, 90)  # rotateImage is a user-defined helper, not shown
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, 1.3, 5)
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        #cv2.imshow('face', i)
    #output.write(frame)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
#output.release()  # the VideoWriter is commented out above, so this must be too
cv2.destroyAllWindows()
In cv2 you can use the cv2.rotate function to rotate an image as per your requirement:
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
For rotating video you can also use cv2.flip(); this method takes up to 3 arguments, one of which is the flip code (0, 1, -1). You can check this link for more details:
https://www.geeksforgeeks.org/python-opencv-cv2-flip-method/
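For the recorded video above, a minimal sketch of applying cv2.rotate per frame (assuming the file path from the question and the Haar cascade bundled with opencv-python via cv2.data.haarcascades) would be:

import cv2

video_capture = cv2.VideoCapture('C:/Users/HP/sample1.mp4')  # path from the question
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

while True:
    ret, frame = video_capture.read()
    if not ret:
        break
    # undo the -90 degree rotation by rotating each frame back
    frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()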

OpenCV slow face detection with CCTV Or IP Cam

When I try to detect faces using my laptop or computer webcam it works fine, but when I try to detect using an IP cam it seems to take too much time to detect one frame. Is there any solution for this? I also tried YOLO; it takes even more time than the OpenCV Haar cascade.
Here I have simple code that detects a face and crops that part of the frame.
cap = cv2.VideoCapture("web_Cam_IP")
cropScal = 25
while(True):
# Capture frame-by-frame
for i in range(10): #this loop skip 10 frames if I don't skip frame it looks like it stack there
ret, frame = cap.read()
frame = cv2.resize(frame, (0, 0), fx=0.70, fy=0.70)
# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.02, minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in faces:
if len(faces) > 0 :
try:
img = gray[y-cropScal:y+h+cropScal, x-cropScal:x+w+cropScal]
img = cv2.resize(img,(200,200))
img = Image.fromarray(img)
img.save('images/'+datetime.now().strftime("%d_%m_%Y_%I_%M_%S_%p")+'.png')
except Exception as e:
pass
cv2.rectangle(gray, (x-cropScal, y-cropScal), (x+w+cropScal, y+h+cropScal), (0, 255, 0), 2)
cv2.imshow('frame',gray)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
You're only scaling the input frames by a factor of 0.70, not to an absolute resolution. It's possible that your IP cam has a higher resolution than your webcam, so the detection needs more time to analyze the larger frames.
Try rescaling the frames to a definite size (e.g. 800x600) before the face detection.
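A minimal sketch of that change (the 800x600 target and the cascade path are example choices, not requirements):

import cv2

cap = cv2.VideoCapture("web_Cam_IP")  # stream address from the question
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # resize to a fixed resolution so detection cost no longer depends on the camera
    frame = cv2.resize(frame, (800, 600))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.02, minNeighbors=5, minSize=(30, 30))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()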

Understanding how to deploy python code to pop up balloons

I'm a newbie in programming and I need to write code that detects a balloon on a fixed background in live video using numpy and OpenCV, and returns the centre of the object [balloon].
Sorry about the ignorance of the questions.
Since I'm new, I had trouble thinking through the logic of it. I don't have the resources to "teach the machine" and create a cascade XML to detect balloons, so I thought of one possible solution:
use cv2.createBackgroundSubtractorMOG2() to detect motion against the same background, and once there is some object [balloon], count all the white pixels in the live video and return its centre, given the right threshold amount of white pixels.
The problem is, I don't know how to read a pixel's value (0-255) to tell whether it's white or black while showing the video at the same time. I think there is a much easier way that I couldn't find guides for.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fgmask = fgbg.apply(gray)
    img_arr = np.array(fgmask)
    cv2.imshow('frame', fgmask)
    for i in fgmask:
        for j in i:
            print(fgmask)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
I'm getting gray video on the output and lots of values that I don't know how to interpret.
I would use
changes = (fgmask > 200).sum()
to select all pixels with an almost-white value (>200) and count them.
Then I can compare the result with some threshold to treat it as movement.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if frame is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fgmask = fgbg.apply(gray)
    #changes = sum(sum(fgmask > 200))
    changes = (fgmask > 200).sum()
    is_moving = (changes > 10000)
    print(changes, is_moving)
    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(10) & 0xff
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()
print() needs some time to display text, so printing all the pixels (many times in a loop) can slow the program down; I skip this, since I don't need to know the values of all pixels.
EDIT: Using the answer in how to detect region of large # of white pixels using opencv?, I added code which finds white regions and draws rectangles. The program opens two windows, one with the grayscale fgmask and one with the RGB frame, and they can end up hidden one behind the other; you have to move one window to see the other.
EDIT: I added code which uses cv2.contourArea(cnt) and (x, y, w, h) = cv2.boundingRect(cnt) to build a list of items (area, x, y, w, h) for all contours, and then takes max(items) to get the contour with the biggest area. It then uses (x + w//2, y + h//2) as the centre for a red circle.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if frame is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fgmask = fgbg.apply(gray)
    #changes = sum(sum(fgmask > 200))
    changes = (fgmask > 200).sum()
    is_moving = (changes > 10000)
    print(changes, is_moving)
    items = []
    contours, hier = cv2.findContours(fgmask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if 200 < area:
            (x, y, w, h) = cv2.boundingRect(cnt)
            cv2.rectangle(fgmask, (x, y), (x+w, y+h), 255, 2)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            items.append((area, x, y, w, h))
    if items:
        main_item = max(items)
        area, x, y, w, h = main_item
        if w > h:
            r = w//2
        else:
            r = h//2
        cv2.circle(frame, (x + w//2, y + h//2), r, (0, 0, 255), 2)
    cv2.imshow('fgmask', fgmask)
    cv2.imshow('frame', frame)
    k = cv2.waitKey(10) & 0xff
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()

Plot values not updated in OpenCV's VideoCapture

I am trying to plot facial key points on the video frame using OpenCV's VideoCapture. I am using a trained PyTorch CNN model. Here is the code:
import time
import cv2 as cv
import numpy as np
import torch

# face_cascade, saved_model and device are set up elsewhere
cap = cv.VideoCapture(0)
time.sleep(2.0)

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        face_gray = gray[y:y+h, x:x+w]
    # note: everything below sits outside the loop over faces
    sample = cv.resize(face_gray, (96, 96))
    sample = sample.astype('float32')/255
    sample = np.asarray(sample).reshape(1, 96, 96)
    sample = torch.from_numpy(sample).unsqueeze(0).to(device)
    output = saved_model(sample)
    output = output.view(-1, 2).detach()
    output = (output * 48) + 48
    output = output.cpu().numpy()
    print(output)
    for i in range(15):
        cv.circle(frame, (output[i][0], output[i][1]), 1, (0, 0, 255), -1)
    cv.imshow("Frame", frame)
    key = cv.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv.destroyAllWindows()
Input dimension: torch.Tensor of shape [1, 1, 96, 96], one grayscale image
Output dimension: torch.Tensor of shape [15, 2], the (x, y) of 15 facial key points
When a face is detected (using the Haar cascade) in the video capture, the output values stay the same, so the key-point plot does not change.
I don't see anything wrong with your code block. The only likely source of error is a static face in the video frame being detected as the last face by the Haar cascade detector. From your code block, it appears you are trying to detect keypoints for only one face per video frame. Try moving the sample = ..., output = ..., and keypoint-rendering for ...: block into the for ...: iterator over faces.
The code block after suggested edits will look like this:
cap = cv.VideoCapture(0)
time.sleep(2.0)

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        face_gray = gray[y:y+h, x:x+w]
        # Push this block into the for iterator of faces
        sample = cv.resize(face_gray, (96, 96))
        sample = sample.astype('float32')/255
        sample = np.asarray(sample).reshape(1, 96, 96)
        sample = torch.from_numpy(sample).unsqueeze(0).to(device)
        output = saved_model(sample)
        output = output.view(-1, 2).detach()
        output = (output * 48) + 48
        output = output.cpu().numpy()
        print(output)
        for i in range(15):
            cv.circle(frame, (output[i][0], output[i][1]), 1, (0, 0, 255), -1)
        # End block
    cv.imshow("Frame", frame)
    key = cv.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv.destroyAllWindows()
That code isn't handling the case where len(faces) > 0 for some initial iterations and then len(faces) == 0 afterwards. Should that happen, face_gray will retain its prior value, and you'll be drawing onto a new frame based on a stale face_gray.
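A minimal sketch of one way to guard against that, skipping inference on frames with no detections (names follow the question's code):

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        # no detection in this frame: show it unannotated instead of
        # reusing a stale face_gray from an earlier frame
        cv.imshow("Frame", frame)
        if cv.waitKey(1) & 0xFF == ord('q'):
            break
        continue
    for (x, y, w, h) in faces:
        # ... per-face keypoint inference, as in the edited code above ...
        pass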

How to create a counter in face detection?

As you can see in the code below, it only detects faces with a Haar cascade. I would like to know how to show on the webcam feed how many people are currently detected.
For example, show in the corner of the webcam window "X people detected".
from __future__ import print_function
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5,
                                          flags=cv2.CASCADE_SCALE_IMAGE, minSize=(50, 50), maxSize=None)
    if len(faces) > 0:
        print("detected person!")
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x - 10, y - 20), (x + w + 10, y + h + 10), (0, 255, 0), 2)
            roi_gray = frame[y-15:y + h+10, x-10:x + w+10]
    cv2.imshow("imagem", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I suspect everything shown in the code works just fine.
If so, you already know how many faces are detected from len(faces). You now only need to add this info to the video.
For this, I suggest you use the cv::putText function: https://docs.opencv.org/3.1.0/d6/d6e/group__imgproc__draw.html#ga5126f47f883d730f633d74f07456c576
You will then be able to draw this on each frame that is read, as sketched below.
Side note: this might just be from copy-pasting your code here, but pay attention to your indentation.
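A minimal sketch of the putText suggestion, to be placed just before the imshow call in the loop (position, font, and colour are arbitrary example choices):

count_text = "{} people detected".format(len(faces))
cv2.putText(frame, count_text, (10, 30),        # top-left corner of the frame
            cv2.FONT_HERSHEY_SIMPLEX, 0.8,      # font face and scale
            (0, 255, 0), 2)                     # colour (BGR) and thickness
cv2.imshow("imagem", frame)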
Simply displaying the count from len(faces) may not solve the problem, as you can have instances where multiple bounding boxes are drawn over the same face. Therefore, I would suggest you perform Non-Maximum Suppression (NMS) on the result of your detections and count the boxes that survive it. That final count will give you a better and more accurate result.
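A minimal sketch of that idea using cv2.dnn.NMSBoxes (Haar cascades return no confidence scores, so the uniform scores below are an assumption made purely for illustration; the thresholds are arbitrary):

import cv2

def count_faces_nms(faces, nms_threshold=0.4):
    # faces: iterable of (x, y, w, h) boxes from detectMultiScale
    boxes = [[int(x), int(y), int(w), int(h)] for (x, y, w, h) in faces]
    if not boxes:
        return 0
    scores = [1.0] * len(boxes)  # assumed uniform scores; Haar provides none
    keep = cv2.dnn.NMSBoxes(boxes, scores, score_threshold=0.5,
                            nms_threshold=nms_threshold)
    return len(keep)  # boxes surviving suppression = people counted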
