I am working with face detection using the Cascade Classifier in OpenCV Python. It works fine, but I want to extend my code to detect only one face: the largest face in the frame.
Sort the detected faces by size and keep the biggest one only?
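A minimal sketch of that idea, assuming the stock frontal-face Haar cascade that ships with opencv-python (the input file name and detection parameters are placeholders to adjust):

```python
import cv2

# Load the frontal-face cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # Each detection is (x, y, w, h); keep only the largest by area.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```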
This tutorial teaches how to detect the landmarks of a face.
Here, there are no landmarks for the forehead. What can I do so that the landmarks also cover the forehead?
The forehead is not a "reliable" part of the face; it is often covered by hair. For the same reason, ears are not included in that 68-point dlib model either. So you can either create your own model or use, for example, MediaPipe Face Mesh.
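For example, a minimal MediaPipe Face Mesh sketch (assuming `pip install mediapipe` and a test image on disk); its 468 landmarks include points on the forehead:

```python
import cv2
import mediapipe as mp

img = cv2.imread("face.jpg")  # hypothetical input image
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
    results = mesh.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    h, w = img.shape[:2]
    # Landmark coordinates are normalized to [0, 1]; scale to pixels.
    for lm in results.multi_face_landmarks[0].landmark:
        cv2.circle(img, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
```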
I am building a face remake application in which the user remakes a face by scrolling through different facial parts (basically mix-matching the eyes, nose, and lips of other people). So far I have written an algorithm that extracts the facial parts, such as the eyes and nose, from an image using Python's dlib and OpenCV, but because the images are ill-captured, the cropped facial features are not aligned. So when I create a database of eyes, noses, lips, and faces, the remade parts are not going to look like a face when placed in position.
This is the Original Image
These are the Eyes (tilted)
The Nose cropped (tilted)
And the Lips (tilted)
This image is what I ended up with to extract just the face part (NEED AN ALTERNATIVE IDEA FOR THIS)
So my plan was to use this last image as the base for remaking the face.
To tackle the tilting, I applied a face alignment algorithm so that when I crop the eyes, the rectangular image is perfectly straight. But this is what I am getting:
The face is straight, but because the image was rotated, pockets of black color are added around it. As this is unavoidable, I NEED HELP: HOW CAN I PREPARE THE FACE WITH NO FACIAL PARTS?
NOTES:
1.) Image cropping should not crop out any part of the face.
2.) I thought of using GrabCut to separate the face from the background, but since I have to process nearly 10,000 images, I need a higher-accuracy method. Or would GrabCut be fine? (NEED SUGGESTION)
Don't mind the smudge, but this is roughly the output I need from the algorithm.
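For reference, a minimal sketch of eye-based alignment with dlib (assuming the standard shape_predictor_68_face_landmarks.dat file); rotating with `borderMode=cv2.BORDER_REPLICATE` avoids the pockets of black that plain rotation leaves around the image:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align(img):
    """Rotate the image so the line between the eyes is horizontal."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return img
    pts = np.array([[p.x, p.y]
                    for p in predictor(gray, faces[0]).parts()])
    left_eye = pts[36:42].mean(axis=0)   # landmarks 36-41: left eye
    right_eye = pts[42:48].mean(axis=0)  # landmarks 42-47: right eye
    angle = np.degrees(np.arctan2(right_eye[1] - left_eye[1],
                                  right_eye[0] - left_eye[0]))
    cx, cy = pts.mean(axis=0)
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]),
                          borderMode=cv2.BORDER_REPLICATE)
```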
I want to use a webcam to identify faces in OpenCV. I am very new to this library and to Python itself. I want to use a Siamese network trained on a single face, but from past experience with examples online, I noticed that only Haar cascades are used; I am aware these are simply for finding faces.
I wish to know if I can use a pre-trained model to then identify faces after finding them with the Haar Cascade.
If I asked in the wrong forum, please kindly transfer me to the appropriate one. I just really need answers as this is for my project.
Thanks
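A minimal sketch of that two-stage idea: the Haar cascade finds face boxes, and a separately trained network identifies the crops. Here `embed()`, `reference`, and the match threshold are hypothetical stand-ins for a trained Siamese model:

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def embed(face_img):
    # Placeholder: run your trained Siamese/embedding network here.
    raise NotImplementedError

reference = None  # embedding of the enrolled face, computed once with embed()

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        crop = frame[y:y + h, x:x + w]
        dist = np.linalg.norm(embed(crop) - reference)
        label = "match" if dist < 0.6 else "unknown"  # threshold is a guess
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```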
I am trying to create an application that is able to detect and track the iris of an eye in a live video stream. To do that, I want to use Python and OpenCV. While researching this on the internet, it seemed to me that there are multiple possible ways to do it.
First Way:
Run a Canny filter to get the edges, and then use HoughCircles to find the iris.
Second Way:
Use Otsu's algorithm to find the optimal threshold and then use cv2.findContours() to find the iris.
Since I want this to run on a Raspberry Pi (4B), my question is which of these methods is better, especially in terms of reliability and performance?
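For reference, a minimal sketch of the first way on an eye-region crop (`cv2.HoughCircles` runs Canny internally, with `param1` as its upper threshold); all parameter values here are assumptions that need tuning per camera:

```python
import cv2

eye = cv2.imread("eye_crop.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
eye = cv2.medianBlur(eye, 5)  # smooth to suppress spurious edges
circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100,  # upper Canny threshold
                           param2=30,   # accumulator threshold
                           minRadius=10, maxRadius=60)
if circles is not None:
    x, y, r = circles[0][0]
    print(f"iris candidate at ({x:.0f}, {y:.0f}), radius {r:.0f}")
```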
I would take a third path and start from a well-established method for facial landmark detection (e.g. dlib). You can use a pre-trained model to get a reliable estimate of the position of the eyes.
This is an example output from a facial landmark detector:
Then you go ahead from there to find the iris, either using edge detection, Hough, or whatever.
Probably you can simply use a heuristic, as you can assume the iris to always be at the center of mass of the keypoints around each eye.
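A minimal sketch of that heuristic with dlib's 68-point model (assuming the standard predictor file): the mean of the six landmarks around each eye serves as a rough iris estimate.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def iris_estimates(gray):
    """Yield rough (left, right) iris positions for each detected face."""
    for face in detector(gray):
        pts = np.array([[p.x, p.y]
                        for p in predictor(gray, face).parts()])
        yield (pts[36:42].mean(axis=0),   # left-eye landmarks 36-41
               pts[42:48].mean(axis=0))   # right-eye landmarks 42-47
```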
There are also some good tutorials online in a similar setting (even for the Raspberry Pi), for example this one or this other one from PyImageSearch.
I am working on a face detection and recognition app in Python using TensorFlow and OpenCV. The overall flow is as follows:
while True:
#1) read one frame using opencv: ret, frame = video_capture.read()
#2) Detect faces in the current frame using tensorflow (using mxnet_mtcnn_face_detection)
#3) for each detected face in the current frame, run facenet algorithm (tensorflow) and compare with my Database, to find the name of the detected face/person
#4) display a box around each face with the name of the person using opencv
Now, my issue is that the overhead (runtime) of face detection and recognition is very high, so the output video sometimes looks like slow motion! I tried to use tracking methods (e.g., MIL, KCF), but then I cannot detect new faces coming into the frame! Is there any approach to speed this up? At the very least, to get rid of the "face recognition function" for faces that were already recognized in previous frames!
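A common workaround, sketched below: run the expensive detector and recognizer only every N frames, and track the already-labeled boxes in between with cheap OpenCV trackers. `detect_faces()` and `recognize()` are hypothetical stand-ins for the MTCNN and FaceNet steps, and KCF requires an opencv-contrib build (`cv2.legacy.TrackerKCF_create` in newer versions):

```python
import cv2

DETECT_EVERY = 10  # assumed interval; tune against your frame rate
cap = cv2.VideoCapture(0)
trackers = []      # list of (tracker, name) pairs
frame_idx = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break
    if frame_idx % DETECT_EVERY == 0:
        # Full pipeline: detect every face and recognize it once.
        trackers = []
        for box in detect_faces(frame):      # hypothetical MTCNN call, (x, y, w, h)
            name = recognize(frame, box)     # hypothetical FaceNet + DB lookup
            t = cv2.TrackerKCF_create()
            t.init(frame, tuple(int(v) for v in box))
            trackers.append((t, name))
    # Cheap per-frame update: reuse the names from the last detection pass.
    for t, name in trackers:
        ok, box = t.update(frame)
        if ok:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, name, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("out", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

New faces are still picked up at the next detection pass, so the interval trades a little latency for a large speedup.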