How to create a counter in face detection? - python

As you can see in the code below, it only detects faces with a Haar cascade. I would like to know how to show, on the webcam feed, how many people are currently detected.
For example, display "X people detected" in a corner of the window.
from __future__ import print_function
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5,
                                          flags=cv2.CASCADE_SCALE_IMAGE,
                                          minSize=(50, 50), maxSize=None)
    if len(faces) > 0:
        print("detected person!")
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x - 10, y - 20), (x + w + 10, y + h + 10), (0, 255, 0), 2)
        roi_gray = frame[y - 15:y + h + 10, x - 10:x + w + 10]
    cv2.imshow("imagem", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

I suspect everything shown in the code works just fine.
If so, you already know how many faces are detected with len(faces); you now only need to add this information to the video.
For this, I suggest you use the cv::putText function (exposed in Python as cv2.putText): https://docs.opencv.org/3.1.0/d6/d6e/group__imgproc__draw.html#ga5126f47f883d730f633d74f07456c576
You will then be able to draw it on each frame that is read.
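For illustration, here is a minimal sketch of the question's loop with the count overlaid via cv2.putText (the text position, font, scale and colour are arbitrary choices, not anything mandated by the API):

import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5,
                                          minSize=(50, 50))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Overlay the current count in the top-left corner of the frame.
    cv2.putText(frame, f"{len(faces)} people detected", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("imagem", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()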
Side note: This might just be because of copy-pasting your code here, but pay attention to your indentation.

Simply displaying len(faces) may not solve the problem, as you can have instances where multiple bounding boxes are drawn over the same face. Therefore, I would suggest you perform Non-Maximum Suppression (NMS) on the result of your detections and count the boxes that survive it. That final count will give you a better, more accurate result.
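As a rough illustration, here is a minimal IoU-based NMS sketch (not taken from any particular library; the 0.3 overlap threshold is an arbitrary choice, and the boxes are assumed to be the (x, y, w, h) tuples returned by detectMultiScale):

def non_max_suppression(boxes, iou_threshold=0.3):
    """Keep only one box per overlapping cluster; boxes are (x, y, w, h)."""
    if len(boxes) == 0:
        return []
    # Haar detections carry no confidence score, so sort by area, largest first,
    # and greedily keep boxes that do not overlap an already-kept box too much.
    boxes = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
    kept = []
    for (x, y, w, h) in boxes:
        suppressed = False
        for (kx, ky, kw, kh) in kept:
            # Intersection rectangle between the candidate and a kept box.
            ix = max(x, kx)
            iy = max(y, ky)
            iw = max(0, min(x + w, kx + kw) - ix)
            ih = max(0, min(y + h, ky + kh) - iy)
            inter = iw * ih
            union = w * h + kw * kh - inter
            if union > 0 and inter / union > iou_threshold:
                suppressed = True
                break
        if not suppressed:
            kept.append((x, y, w, h))
    return kept

# Usage inside the detection loop:
# faces = non_max_suppression(faces)
# count = len(faces)   # a more reliable per-frame count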

Python Opencv draw rectangle from smaller frame

I am currently working on a face detection algorithm using MTCNN.
With the below code, I am able to show the frame containing my face:
import time
import cv2

def run_detection(fast_mtcnn):
    frames = []
    frames_processed = 0
    faces_detected = 0
    batch_size = 60
    start = time.time()
    cap = cv2.VideoCapture(0)
    while True:
        __, frame = cap.read()
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frames.append(frame)
        faces = fast_mtcnn(frames)
        frames_processed += len(frames)
        faces_detected += len(faces)
        frames = []
        if len(faces) > 0:
            for face in faces:
                cv2.imshow('frame', face)
        print(
            f'Frames per second: {frames_processed / (time.time() - start):.3f},',
            f'faces detected: {faces_detected}\r',
            end=''
        )
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
FYI, the cv2.imshow('frame', face) is just there for me to put some value; what I actually want is its bounding box.
It looks something like this (please forgive my silly-looking face):
(screenshot of the cropped face omitted)
This is just to show that it is detecting the face and pulling out the crops relating to my face.
My challenge is how to take the smaller face frame (containing my face) and get its edge coordinates so I can draw a bounding box for each person on the full frame.
What I tried is the following:
cv2.rectangle(frame,
              (face[0], face[1]),
              (face[0] + face[2], face[1] + face[3]),
              (0, 155, 255),
              2)
Which I assume is completely wrong.
The solution will depend on how the points are returned by the face detection system, which varies depending on the library you use.
In this case the detected face contains the following information:
x, y, w, h = face, where (x, y) is the top-left point, w is the width of the bounding box, and h is the height.
Hence you can draw a rectangle for the face as follows:
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
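Putting that together, and assuming the detector hands back a list of such (x, y, w, h) boxes, a sketch of drawing every detection onto the full frame might look like this:

# Assuming `faces` is a list of (x, y, w, h) boxes from the detector,
# draw one rectangle per detected face on the original, full-size frame.
for face in faces:
    x, y, w, h = face
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imshow('frame', frame)  # show the full frame with all bounding boxes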

Face Detection - OpenCV can't find the face

I am learning OpenCV. Here is my code:
import cv2

face_patterns = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
sample_image = cv2.imread('1.jpg')
gray = cv2.cvtColor(sample_image, cv2.COLOR_RGB2GRAY)
faces = face_patterns.detectMultiScale(gray, 1.3, 5)
print(len(faces))
for (x, y, w, h) in faces:
    cv2.rectangle(sample_image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('result.jpg', sample_image)
If I use picture A, I get a lot of faces; if I use picture B, I get none.
I changed the arguments in detectMultiScale(gray, 1.3, 5) many times, but it still doesn't work.
(Images: picture A, picture A's result, and picture B with no face detected.)
I see this more as a limitation of the cv2 Haar cascades themselves; there are better models than Haar cascades for detecting faces. The face_recognition library is also very useful for detecting and recognizing faces. It uses HOG as its default model; you can also use CNN for better accuracy, but detection will be slower.
Find more here.
import cv2
import face_recognition as fr

sample_image = fr.load_image_file("1.jpg")
unknown_face_loc = fr.face_locations(sample_image, model="hog")
print(len(unknown_face_loc))  # detected face count
for faceloc in unknown_face_loc:
    y1, x2, y2, x1 = faceloc
    cv2.rectangle(sample_image, (x1, y1), (x2, y2), (0, 0, 255), 2)
sample_image = sample_image[:, :, ::-1]  # convert RGB back to BGR for cv2.imwrite
cv2.imwrite("result.jpg", sample_image)
Instead of:
faces = face_patterns.detectMultiScale(gray, 1.3, 5)
try using:
faces = face_patterns.detectMultiScale(blackandwhite, 1.3, 5)
(where blackandwhite is your properly converted grayscale image). If the problem occurs even after this, check out my code for face detection:
cascade_classifier = cv2.CascadeClassifier('haarcascades/haarcascade_eye.xml')
cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, 0)
    detections = cascade_classifier.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(detections) > 0:
        (x, y, w, h) = detections[0]
        frame = cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # for (x, y, w, h) in detections:
        #     frame = cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

How to check if there is no face detected using cvlib

The cvlib library in Python is well established and many people in the research community use it. I noticed that if there is no face detected, the for loop stops. For example, if I have the following code:
import cv2
import cvlib as cv

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print('Could not open video device')

# To get the resolution
vid_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
vid_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)

while True:
    ret, frame = cap.read()
    if not ret:
        continue
    faces, confidences = cv.detect_face(frame)
    # loop through detected faces and add bounding box
    for face in faces:
        (startX, startY) = face[0], face[1]
        (endX, endY) = face[2], face[3]
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
        crop_img = frame[startY - 5:endY - 5, startX - 5:endX - 5]
        print(faces)
        cv2.imshow('object detected', frame)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
As I'm printing faces, the output is something like this:
[[392, 256, 480, 369]]
[[392, 256, 478, 369]]
[[392, 255, 478, 370]]
.
.
.
[[392, 255, 478, 370]]
However, as soon as I block the camera or move my head away from it, since there is no face detected, the for loop freezes or pauses until it sees a face to detect again.
I need an if statement or some other condition to catch this freeze/pause so I can do something else instead.
I have found the answer to this question: add an if statement before the for loop, since len(faces) is an integer that gives 0, 1, 2, 3, ... depending on how many faces are in front of the camera.
if len(faces) == 0:
    print('no faces')
    print(faces)  # prints an empty list
else:
    for face in faces:
        (startX, startY) = face[0], face[1]
        (endX, endY) = face[2], face[3]
        crop_img = frame[startY - 5:endY - 5, startX - 5:endX - 5]
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)

cv2.imshow('object detected', frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
    break

cap.release()
cv2.destroyAllWindows()
You are displaying only the frames where a face is detected. If no face is detected, you keep seeing the last frame in which a face was detected, until a face is detected again in a later frame.
Move the imshow out of the for loop. For example:
# loop through detected faces and add bounding box
for face in faces:
    (startX, startY) = face[0], face[1]
    (endX, endY) = face[2], face[3]
    cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
    crop_img = frame[startY - 5:endY - 5, startX - 5:endX - 5]

print(faces)
cv2.imshow('object detected', frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
    break
Check out the complete example here.

Python: classify objects in images

camera = webcam;                           % Connect to the camera
nnet = alexnet;                            % Load the neural net
while true
    picture = camera.snapshot;             % Take a picture
    picture = imresize(picture,[227,227]); % Resize the picture
    label = classify(nnet, picture);       % Classify the picture
    image(picture);                        % Show the picture
    title(char(label));                    % Show the label
    drawnow;
end
I found this MATLAB code on the internet. It displays a window with the picture from a webcam and very quickly also names the things in the picture ("keyboard", "bottle", "pencil", "clock", ...). I want to do that in Python.
So far I have this:
import cv2
import sys

faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
This is already very similar, but it only detects faces. The MATLAB code uses alexnet, which I guess is a pre-trained network based on ImageNet data (http://www.image-net.org/), but it is no longer available.
How would I do this in Python?
(There has been a similar question here, but it is 4 years old and I think there are newer techniques now.)
With the "tensorflow" package and the pre-trained network "vgg16", the solution is quite easy.
See https://github.com/machrisaa/tensorflow-vgg/blob/master/test_vgg16.py
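For a rough idea, here is a sketch of a webcam loop using the pre-trained VGG16 weights bundled with tf.keras rather than the linked repository's loader (the window name, text placement and top-1 formatting are arbitrary choices):

import cv2
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

model = VGG16(weights="imagenet")   # downloads the ImageNet weights on first use
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # VGG16 expects 224x224 RGB input.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (224, 224))
    batch = preprocess_input(np.expand_dims(resized.astype(np.float32), axis=0))
    preds = model.predict(batch, verbose=0)
    # decode_predictions returns (class_id, class_name, probability) tuples.
    _, label, prob = decode_predictions(preds, top=1)[0][0]
    cv2.putText(frame, f"{label}: {prob:.2f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Video", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()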

Python OpenCV2 + webcam face detection, no face being detected, no error

I'm using a guide provided online with OpenCV 2.4 that shows you how to detect faces with OpenCV 2 and Python. I followed the guide and understand what it says. However, I can't seem to find the issue with my program: the video shows, but no face is detected, and the video is very clear. There are no errors. I ran it in debug mode and the value of faces remains an empty tuple, so I'm assuming that means it's not finding the face. What I don't understand is why, and I think it has something to do with the hash table.
By hash table I mean the cascade XML file. I understand cascades are basically the guidelines for detecting facial features, correct?
Links to the guides are below. The hash table, i.e. the XML file, is on the linked GitHub.
https://github.com/shantnu/FaceDetect/blob/master/haarcascade_frontalface_default.xml
https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/
import cv2
import sys
import os

#cascPath = sys.argv[1]
cascPath = os.getcwd()+'facehash.xml'
faceCascade = cv2.CascadeClassifier(cascPath)
print faceCascade

video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
You have a wrong path to your XML classifier: os.getcwd() + 'facehash.xml' joins the directory and the file name without a path separator (and I guess you've also renamed the file to get a shorter form).
Instead of your cascPath:
cascPath = os.getcwd() + 'facehash.xml'
try this:
cascPath = "{base_path}/folder_with_your_xml/haarcascade_frontalface_default.xml".format(
    base_path=os.path.abspath(os.path.dirname(__file__)))
It should then work.
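As an optional extra sanity check (not part of the original answer), cv2.CascadeClassifier.empty() tells you whether the XML file actually loaded:

faceCascade = cv2.CascadeClassifier(cascPath)
if faceCascade.empty():
    # The classifier fails silently when the path is wrong, which is why
    # detectMultiScale then always returns an empty tuple.
    raise IOError("Could not load cascade file: " + cascPath)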
