I have a face image, and I found the 68 facial landmark points with the dlib library in Python.
Now I want to change the facial expression, for example by closing the mouth, i.e. removing the distance between points [61, 62, 63, 64, 65, 66, 67, 68], or by closing the right eye, i.e. removing the distance between points [43, 44, ..., 48].
So I need to fill this area with natural colors in a way that looks natural.
Is there any method or algorithm available to solve this? I use OpenCV right now, but any other library is fine for me, thanks.
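For reference, this is roughly how I extract the region I want to fill (a minimal sketch; the model and image file names are placeholders):

```python
import cv2
import dlib
import numpy as np

# File names are placeholders.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face = detector(gray)[0]                      # assumes at least one face is found
shape = predictor(gray, face)

# Inner-mouth points 61-68 (1-based) are indices 60-67 in dlib's 0-based numbering.
mouth = np.array([[shape.part(i).x, shape.part(i).y] for i in range(60, 68)],
                 dtype=np.int32)

# This mask marks the area between the lips that I need to fill with natural colors.
mask = np.zeros(gray.shape, dtype=np.uint8)
cv2.fillPoly(mask, [mouth], 255)
```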
What you could do is run your facial landmark detection on the images in this data set: https://datarepository.wolframcloud.com/resources/FER-2013, which contains images for seven emotions (happiness, neutral, sadness, anger, surprise, disgust, fear).
From those you could perhaps infer the transformation from the neutral face to one of these emotions.
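A very rough sketch of that idea, assuming you have already extracted and size-normalized the 68 points for the neutral and target-emotion images (all variable names here are placeholders):

```python
import numpy as np

# neutral_pts and happy_pts are assumed to be arrays of shape (n_images, 68, 2):
# the 68 dlib points extracted from FER-2013 images, already aligned/normalized
# (same scale and position) so they are comparable across images.

def mean_shape(landmarks):
    return landmarks.mean(axis=0)                              # (68, 2) average configuration

def emotion_displacement(neutral_pts, emotion_pts):
    return mean_shape(emotion_pts) - mean_shape(neutral_pts)   # per-point shift

# target_points = my_landmarks + emotion_displacement(neutral_pts, happy_pts)
# A warping step (e.g. piecewise-affine warping over a Delaunay triangulation of
# the points) would then move the pixels towards target_points.
```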
I am building a face remake application in which the user remakes a face by scrolling through different facial parts (basically mix-and-matching the eyes, nose, and lips of other people). So far I have written an algorithm that extracts the facial parts such as the eyes and nose from an image using Python's dlib and OpenCV, but because the images are poorly captured, the cropped facial features are not aligned. So when I build a database of eyes, noses, lips, and faces, the parts will not look like a face when placed back in position.
This is the original image
These are the eyes (tilted)
The nose crop (tilted)
And the lips (tilted)
This image is what I ended up with after extracting just the face part (NEED AN ALTERNATIVE IDEA FOR THIS)
So my plan was to use this last image as the base for remaking the face.
To tackle the tilting, I applied a face alignment algorithm so that when I crop the eyes into a rectangle, the image is perfectly straight. But this is what I am getting:
The face is straight, but because the image was tilted, pockets of black are added around it. Since this is unavoidable, I NEED HELP REGARDING: HOW CAN I PREPARE THE FACE WITH NO FACIAL PARTS?
NOTES:
1.) The crop must not cut off any part of the face.
2.) I thought of using GrabCut to separate the face from the background, but since I need to process nearly 10,000 images, I need a high-accuracy approach. Would GrabCut be fine (NEED A SUGGESTION)? A minimal sketch of what I mean is included below.
Don't mind the smudge, but this is roughly the output I need from the algorithm:
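For reference, the minimal GrabCut sketch from note 2 (the rectangle would come from a face detector such as dlib's, padded a little so nothing is cut off; whether it is accurate enough across 10,000 images is exactly what I am unsure about):

```python
import cv2
import numpy as np

def remove_background(img, face_rect):
    """face_rect is an (x, y, w, h) box around the face (e.g. from dlib), padded."""
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(img, mask, face_rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels classified as sure or probable foreground, blank out the rest.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return img * fg[:, :, None]
```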
I need some advice on a computer vision project that I am working on. I am trying to extract a corner in the image below. The edge I am searching for is marked yellow in the right image. Edge detection keeps failing because the edge is too blurred in the middle.
I run this process with OpenCV and Python.
I started by removing the white dots with a threshold method. After that I apply a big median blur (kernel size 31-53), and then an adaptive threshold to separate the areas to the left and right of the corner. But the separation is always poor because the edge is barely visible.
Is there some other way to extract this edge, or do I have to try a better camera?
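For reference, this is roughly the pipeline I have so far (the exact threshold values and kernel sizes below are only ballpark figures):

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# 1. Suppress the bright white dots by clipping values above a threshold
#    (one possible reading of the "threshold method" step).
_, no_dots = cv2.threshold(img, 200, 255, cv2.THRESH_TRUNC)

# 2. Large median blur (kernel size must be odd, somewhere in the 31-53 range).
blurred = cv2.medianBlur(no_dots, 41)

# 3. Adaptive threshold to separate the regions left and right of the corner.
separated = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 51, 2)
```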
Thanks for your help.
First, do you have any other data sets? It is hard to discuss this from just one input.
A couple of things you can do:
The best option is to change the camera or imaging technique to get a better, clearer edge.
When that is hard to do, try model-based fitting, provided your images are repeatable across the whole class. I can see some circles on the right and two sharp straight-line edges on the left. The soft edge you want lies between those two apparent features, so they can serve as a model: detect those two regions first (they are easy to find), then apply other techniques to the pixels in between. Those techniques include, but are not limited to, histogram equalization, high-pass filtering, or even a wavelet transform.
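For the in-between pixels, a histogram-equalization sketch could look like this (the strip limits and Canny thresholds are placeholders you would derive from detecting the two strong features first):

```python
import cv2

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)          # placeholder file name
x0, x1 = 200, 320                                            # assumed strip between the two strong features
strip = gray[:, x0:x1]

# Local histogram equalization boosts contrast only where it is needed.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(strip)

# A gentle Canny on the enhanced strip has a better chance of picking up the weak edge.
edges = cv2.Canny(cv2.GaussianBlur(enhanced, (5, 5), 0), 20, 60)
```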
The worst option is parameter fitting. What you want to segment is neither a strong edge nor a smooth plane, so you could tweak the Canny edge detector to pick up weaker edges. I do not recommend this method; only try it if you really have no other choice and no other images.
The last option is a deep-learning-based method to train a model that automatically segments this part. It might work, but it needs hundreds if not thousands of images and labels.
Regards
Shenghai Yuan
Is there any way to get points on the forehead of a face image?
I am using the 68-point landmark shape_predictor to get the other points on the face, but for this particular problem I need points that run from the hairline to the center of the forehead.
Any suggestions would be helpful.
See this GitHub repository: shape predictor with forehead coverage
I was looking for forehead area detection when I came across this 81-point predictor.
With some simple coordinate geometry you can get all the points you are looking for.
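A minimal usage sketch, assuming the .dat model file shipped in that repository (the forehead points are the indices above the standard 67):

```python
import cv2
import dlib

# Model file name comes from the linked repository; adjust the path as needed.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_81_face_landmarks.dat")

gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    shape = predictor(gray, face)
    # Points 0-67 match the usual 68-point model; the remaining indices
    # (68-80 in this predictor) run along the hairline/forehead.
    forehead = [(shape.part(i).x, shape.part(i).y) for i in range(68, 81)]
    print(forehead)
```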
You can also use the annotation tool provided with dlib, called "imglab", to label your own landmarks and train your own shape predictor.
I am doing a dice value recognition hobby project that I want to run on a Raspberry Pi. For now, I am just learning OpenCV, as that seems like the hardest part for me. I have gotten this far: I have dilated, eroded, and Canny-filtered the dice image, which gives me a hierarchy of contours. The image shows the bounding rectangles of the parent contours:
My question is: how would I proceed to count the pips? Is it better to do some template matching for face values, or should I mathematically test if a pip is in a valid position within the bounding box?
There could be multiple ways to do it:
Use hole filling and then morphological operators to filter out the circles.
A simpler approach would be to use the white pixel density (the percentage of white pixels); a five, for example, has a higher white pixel density than a one.
Use image moments (mathematical properties that represent the shape and structure of an image) to train a neural network on the different dice faces.
Reference:
Morphology
http://blogs.mathworks.com/pick/2008/05/23/detecting-circles-in-an-image/
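A minimal sketch of the white-pixel-density idea (the calibration table is a placeholder you would fill from a few known examples of each face):

```python
import cv2

def estimate_face_value(die_roi_gray, calibrated_densities):
    """die_roi_gray: grayscale crop of one die; calibrated_densities: e.g.
    {1: 0.03, 2: 0.06, ..., 6: 0.18}, measured beforehand on known faces."""
    _, binary = cv2.threshold(die_roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    density = cv2.countNonZero(binary) / binary.size
    # Pick the face whose calibrated density is closest to the measured one.
    return min(calibrated_densities,
               key=lambda v: abs(calibrated_densities[v] - density))
```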
As Sivam Kalra said, there are many valid approaches.
I would go with template matching, as it should be robust and relatively easy to implement.
Using your green regions in the Canny image, copy each detected die face from the original grayscale image into a smaller search image. The search image should be slightly larger than a die face, and larger than your six pattern images.
Optionally normalize the search image.
Use cvMatchTemplate with each of the six possible dice patterns (I recommend the CV_TM_SQDIFF_NORMED method, but test which works best).
Find and store the global minimum in the result image for each of the six matches.
Rotate the search image in ~2° steps from 0° to 90° and repeat the template match for each step.
The dice pattern with the lowest minimum over all steps is the correct one.
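A rough sketch of those steps with the Python API (cv2.matchTemplate is the modern equivalent of cvMatchTemplate; `patterns` is assumed to be a dict mapping face value to a grayscale template prepared as described above):

```python
import cv2
import numpy as np

def best_match(search_img, patterns):
    """Return the face value whose template gives the lowest SQDIFF_NORMED score
    over all tested rotations of the search image."""
    best_value, best_score = None, np.inf
    h, w = search_img.shape[:2]
    center = (w / 2, h / 2)
    for angle in range(0, 90, 2):                      # ~2 degree steps, 0-90
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(search_img, rot, (w, h))
        for value, tmpl in patterns.items():
            result = cv2.matchTemplate(rotated, tmpl, cv2.TM_SQDIFF_NORMED)
            score = cv2.minMaxLoc(result)[0]           # global minimum
            if score < best_score:
                best_value, best_score = value, score
    return best_value
```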
Contour hierarchy could be a good and very easy option, but you need a perpendicular (top-down) view.
So you can do it with contours, then fit circles with some threshold.
Hi, I want to use the Python Imaging Library to crop images to a specific size for a website. The problem is that these images are meant to show people's faces, so I need to crop automatically based on them.
I know face detection is a difficult problem, so I'm thinking of using the face.com API http://developers.face.com/tools/#faces/detect, which is fine for what I want to do.
I'm just a little stuck on how to use this data to crop a selected area based on the majority of the faces.
Can anybody help?
Joe
There is a library for Python with a concept of smart cropping that, among other options, can use face detection to do smarter cropping.
It uses OpenCV under the hood, but you are isolated from it.
https://github.com/globocom/thumbor
If you have some rectangle that you want to excise from an image, here's what I might try first:
(optional) If the image is large, do a rough square crop centered on the face, with dimensions sqrt(2) larger than the longer edge of the face rectangle. In the worst case (a 45° rotation), it will still capture everything important.
Rotate based on the face orientation (something like rough_crop.rotate(math.degrees(math.atan(ydiff/xdiff))); trig is fun).
Do a final crop. If you did the initial crop, the face should be centered; otherwise you'll have to transform (rotate) all your old coordinates into the new image (more trig!).
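A rough PIL sketch of those three steps (the coordinate names are placeholders for whatever the face.com response gives you):

```python
import math
from PIL import Image

def crop_face(img, face_center, face_size, left_eye, right_eye, out_size):
    """face_center, left_eye, right_eye: (x, y) pixel coordinates from the detector;
    face_size: the longer edge of the detected face box; out_size: final square size."""
    cx, cy = face_center
    # 1. Rough square crop, sqrt(2) larger than the face, so a later rotation
    #    of up to 45 degrees cannot cut anything important off.
    half = int(face_size * math.sqrt(2) / 2)
    rough = img.crop((cx - half, cy - half, cx + half, cy + half))

    # 2. Rotate so the eyes end up horizontal.
    ydiff = right_eye[1] - left_eye[1]
    xdiff = right_eye[0] - left_eye[0]
    rough = rough.rotate(math.degrees(math.atan2(ydiff, xdiff)))

    # 3. Final centered crop to the requested output size.
    w, h = rough.size
    return rough.crop(((w - out_size) // 2, (h - out_size) // 2,
                       (w + out_size) // 2, (h + out_size) // 2))
```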