I did some research on this forum and I hope nobody has asked the same question already.
For my school project I'm trying to write a Python program on my Raspberry Pi 3 that detects faces and recognizes my own face. I can detect faces, but I don't know how to recognize my face and write my name on the screen when the camera sees me.
Should I create an XML cascade from photos of myself?
Here is my code for face detection:
import numpy as np
import cv2

# Load the Haar cascade used for face detection
face_cascade = cv2.CascadeClassifier('face_haar.xml')
cap = cv2.VideoCapture(1)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # Draw a rectangle around each detected face
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc key
        break

cap.release()
cv2.destroyAllWindows()
This is my first thread on this forum; if I'm doing something wrong, I'm sorry, please tell me what I did wrong. If my English is bad, I'm sorry for that too.
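A Haar cascade only tells you where a face is, not whose it is, so training a new cascade on photos of yourself is not the usual route. One common approach is to train a face recognizer such as LBPH on labelled photos and then predict a label for each detected face. Here is a rough sketch (it assumes the opencv-contrib-python package so that cv2.face is available, and a hypothetical folder my_faces/ with cropped grayscale photos of your face):

import os
import cv2
import numpy as np

# Train an LBPH recognizer on cropped grayscale photos of your face
# (my_faces/ is a hypothetical folder; label 0 means "me")
recognizer = cv2.face.LBPHFaceRecognizer_create()
images, labels = [], []
for name in os.listdir('my_faces'):
    photo = cv2.imread(os.path.join('my_faces', name), cv2.IMREAD_GRAYSCALE)
    if photo is not None:
        images.append(photo)
        labels.append(0)
recognizer.train(images, np.array(labels))

face_cascade = cv2.CascadeClassifier('face_haar.xml')
cap = cv2.VideoCapture(1)
while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label, confidence = recognizer.predict(gray[y:y+h, x:x+w])
        # Lower confidence means a closer match; the threshold here is a guess
        name = 'MyName' if label == 0 and confidence < 80 else 'Unknown'
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        cv2.putText(frame, name, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
    cv2.imshow('img', frame)
    if cv2.waitKey(30) & 0xff == 27:
        break
cap.release()
cv2.destroyAllWindows()

Resizing both the training photos and the detected face regions to a common size usually helps the recognizer give consistent results.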
Related
I am trying to display a live video feed from the camera. When I run the program, ret returns True, but cv2.imshow() displays a placeholder image. Any help would be greatly appreciated.
import numpy as np
from cv2 import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Initialize video from the webcam
video = cv2.VideoCapture(0)
print(cv2.VideoCapture(0).isOpened())  # -> returns True

while True:
    # ret tells if the camera works properly; frame is an actual frame from the video feed
    ret, frame = video.read()
    if ret == True:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect the faces
        faces = face_cascade.detectMultiScale(gray, 1.1, 4)
        # Draw a rectangle around each face
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        # Display
        cv2.imshow('img', frame)
    if cv2.waitKey(30) & 0xff == 27:
        break

video.release()
cv2.destroyAllWindows()
cv2.imshow('img', frame) opens a window that shows only a placeholder image instead of the camera feed.
So, I checked whether the camera permission is allowed and it seems it already is. I am using macOS Big Sur (version 11.6).
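One quick sanity check (a small sketch, not code from the thread) is to look at the pixel values of a captured frame: on macOS, if the terminal or IDE running the script has not been granted camera access, read() can still return True while the frames stay blank.

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # An (almost) constant, near-zero frame suggests no real image is arriving
    print('frame shape:', frame.shape, 'mean pixel value:', frame.mean())
else:
    print('read() failed')
cap.release()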
I am trying to use face detection, but I do not want the video feed window to open when I use VideoCapture. This is the code I'm working on:
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    _, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
I just want to somehow disable the video feed window that opens automatically, and instead get output on the command line every time a face is detected.
I am using the code from here: https://github.com/adarsh1021/facedetection.
Thank you.
Remove cv2.imshow('img', img) and replace it with print(faces).
cv2.imshow() is what opens the image window.
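Applied to the loop from the question, that change looks roughly like this (a sketch; with no window there is no Esc-key handling, so you stop the script with Ctrl+C instead):

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

try:
    while True:
        ok, img = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 4)
        if len(faces) > 0:
            # No window: just report each detection on the command line
            print('detected %d face(s): %s' % (len(faces), faces.tolist()))
finally:
    cap.release()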
I want to detect only the left eye in a video stream, but I can't. When I run this code, it detects both eyes (right and left), and sometimes it also detects the mouth or the nose as if it were an eye.
Here is the code:
import cv2
import numpy as np

faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
#eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_lefteye_2splits.xml')

video_capture = cv2.VideoCapture(0)

while True:
    ret, img = video_capture.read()
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray_img,
        scaleFactor=1.98,
        minNeighbors=8,
        minSize=(80, 80),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    #print 'faces: ', faces
    for (p, q, r, s) in faces:
        #cv2.rectangle(img,(p,q),(p+r,q+s),(255,0,0),3)
        face_gray = gray_img[q:q+s, p:p+r]
        face_color = img[q:q+s, p:p+r]
        eyes = eye_cascade.detectMultiScale(face_gray)
        for (ep, eq, er, es) in eyes:
            cv2.rectangle(face_color, (ep, eq), (ep+er, eq+es), (0, 255, 0), 3)
    rimg = cv2.flip(img, 1)  # mirror the image horizontally
    cv2.imshow("Video", rimg)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
So how should I do to detect only the left eye?
PS: I'm using Python 2.7.13, NumPy 1.10.0 and OpenCV 2.4.13.3 in this program.
You can use a simple for loop to iterate through the eyes returned by the eye cascade and find the index of the eye with the minimum x-coordinate. That will give you the index of the left-most eye.
def getleftmosteye(eyes):
    # Return the detection with the smallest x-coordinate (the left-most one)
    if len(eyes) == 0:
        return None
    leftmost = 9999999
    leftmostindex = -1
    for i in range(len(eyes)):
        if eyes[i][0] < leftmost:
            leftmost = eyes[i][0]
            leftmostindex = i
    return eyes[leftmostindex]
Now you can get the coordinates of the left eye once you have its index.
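Plugged into the loop from the question, the usage could look like this inside the for (p, q, r, s) in faces: block (a sketch that simply replaces the inner eye loop so only the left-most detection is drawn):

        eyes = eye_cascade.detectMultiScale(face_gray)
        if len(eyes) > 0:
            # Draw only the detection with the smallest x-coordinate
            (ep, eq, er, es) = getleftmosteye(eyes)
            cv2.rectangle(face_color, (ep, eq), (ep+er, eq+es), (0, 255, 0), 3)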
I am trying to create a face detection system in Python using OpenCV that counts the total number of faces (it also detects the person's eyes).
But I want it to give the exact number of faces it detected by, say, the end of the day, in real time.
The code follows; any suggestions, please:
import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('C:\Python27\Scripts\haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('C:\Python27\Scripts\haarcascade_eye.xml')

img = cv2.VideoCapture(0)

while(1):
    _, f = img.read()
    gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    print(len(faces))
    for (x, y, w, h) in faces:
        cv2.rectangle(f, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = f[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)
    cv2.imshow('img', f)
    if cv2.waitKey(25) == 27:
        break

cv2.destroyAllWindows()
img.release()
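One caveat with the code above: print(len(faces)) is a per-frame count, so simply summing it over the day would count the same person once per frame. A rough sketch of one possible interpretation (my own assumption of what "total by the end of the day" could mean) is to keep a running total that only grows when more faces are visible than in the previous frame:

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

total = 0        # rough running total for the day
prev_count = 0   # faces visible in the previous frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    count = len(face_cascade.detectMultiScale(gray, 1.3, 5))
    if count > prev_count:
        # Crude heuristic: only add faces that newly appeared in this frame
        total += count - prev_count
    prev_count = count
    cv2.putText(frame, 'total so far: %d' % total, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('img', frame)
    if cv2.waitKey(25) & 0xff == 27:
        break

cap.release()
cv2.destroyAllWindows()

For an exact count you would need some form of tracking or recognition, so the same person is not counted again after briefly leaving and re-entering the frame.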
I'm using a guide provided online with OpenCV 2.4 that shows how to detect faces with OpenCV and Python. I followed the guide and understand what it says. However, I can't seem to find the issue with my program: the video shows, but no face is detected, even though the video is very clear. There are no errors. I ran it in debug mode and the value of faces remains an empty tuple, so I'm assuming that means it's not finding the face. What I don't understand is why, and I think it has something to do with the hash table.
By hash table I mean the cascade XML file. I understand cascades are basically the guidelines for detecting the facial features, correct?
Links to the guides are below. The hash table, i.e. the XML file, is in the GitHub repo linked.
https://github.com/shantnu/FaceDetect/blob/master/haarcascade_frontalface_default.xml
https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/
import cv2
import sys
import os

#cascPath = sys.argv[1]
cascPath = os.getcwd()+'facehash.xml'
faceCascade = cv2.CascadeClassifier(cascPath)
print faceCascade

video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
The path to your XML classifier is wrong. (I guess you changed the file name to get a shorter form.) Note that os.getcwd() + 'facehash.xml' also joins the directory and the file name without a path separator.
Instead of your cascPath:
cascPath = os.getcwd()+'facehash.xml'
Try this:
cascPath = "{base_path}/folder_with_your_xml/haarcascade_frontalface_default.xml".format(
    base_path=os.path.abspath(os.path.dirname(__file__)))
And now it should work.
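One extra check worth adding (a small sketch): cv2.CascadeClassifier does not raise an error when the file cannot be found, it just loads an empty classifier, which is why detection silently finds nothing. You can verify the load right away:

import os
import cv2

cascPath = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                        'haarcascade_frontalface_default.xml')
faceCascade = cv2.CascadeClassifier(cascPath)

# empty() is True when the XML file could not be loaded
if faceCascade.empty():
    raise IOError('Could not load cascade from %s' % cascPath)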