I am building a face remake application where the user remakes a face by scrolling through different facial parts (basically mixing and matching the eyes, nose, and lips of other people). So far I have written an algorithm that separates the facial parts, like the eyes and nose, from the image using Python's dlib and OpenCV, but because the images are poorly captured, the cropped facial features are not aligned. So when I create a database of eyes, noses, lips, and faces, the parts are not going to look like a face when placed back in their positions during remaking.
This is the Original Image
These are the eyes (tilted)
The nose, cropped (tilted)
And the lips (tilted)
This image is what I ended up with when extracting just the face part (NEED AN ALTERNATIVE IDEA FOR THIS)
So my plan was to use this last image as the base for remaking the face.
To tackle the tilting, I applied a face alignment algorithm so that when I crop the eyes into a rectangle, the image is perfectly straight. But this is what I am getting:
The face is straight, but because the image was tilted, pockets of black color are added around it. Since this seems unavoidable, I NEED HELP WITH THIS: HOW CAN I PREPARE THE FACE WITH NO FACIAL PARTS?
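For reference, this is roughly what that alignment step can look like with dlib's 68 landmarks. It is only a sketch under assumptions (one face per image, the standard shape_predictor_68_face_landmarks.dat model file on disk, a hypothetical input path), and warping with BORDER_REPLICATE is one way to avoid the black corners described above:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model file

img = cv2.imread("face.jpg")                      # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face = detector(gray)[0]                          # assumes exactly one detected face
pts = predictor(gray, face)
coords = np.array([(pts.part(i).x, pts.part(i).y) for i in range(68)], dtype=float)

# Eye centers from the 68-point scheme (0-indexed: 36-41 left eye, 42-47 right eye).
left_eye = coords[36:42].mean(axis=0)
right_eye = coords[42:48].mean(axis=0)

# Rotate about the midpoint of the eyes so the eye line becomes horizontal.
angle = np.degrees(np.arctan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0]))
mid = (left_eye + right_eye) / 2.0
M = cv2.getRotationMatrix2D((float(mid[0]), float(mid[1])), float(angle), 1.0)

# BORDER_REPLICATE fills the rotated corners with edge pixels instead of black.
aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]),
                         flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
```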
NOTES:
1.) Image cropping should not crop out any part of the face.
2.) I thought of using GrabCut to separate the face from the background, but since nearly 10,000 images have to be processed, I need a higher-accuracy method. Or would GrabCut be fine? (NEED A SUGGESTION; a sketch follows these notes.)
Don't mind the smudge, but this is roughly the output I need from the algorithm.
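Regarding note 2, a minimal GrabCut sketch looks like the following. The face rectangle here is a placeholder (in practice it could come from the dlib detector above), and whether GrabCut is accurate enough across 10,000 images would still need to be validated:

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")                 # hypothetical input path
x, y, w, h = 100, 80, 220, 260               # placeholder face rectangle (e.g. from dlib)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Everything outside the rectangle is treated as definite background,
# everything inside as probable foreground; 5 iterations is a common choice.
cv2.grabCut(img, mask, (x, y, w, h), bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labelled as definite or probable foreground as the face mask.
face_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")
face_only = cv2.bitwise_and(img, img, mask=face_mask)
```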
Related
I have a face image, and I found the 68 facial landmark points with the dlib library in Python.
Now I want to change the facial expression by closing the mouth, e.g. removing the distance between points [61, 62, 63, 64, 65, 66, 67, 68], or closing the right eye by removing the distance between points [43, 44, ..., 48].
So I need to fill this area with natural colors in a way that it looks natural.
Is there any method or algorithm available to solve this? I use OpenCV right now, but any other library is okay for me. Thanks.
What you could do is run your facial coordinate points detection algorithm on the images in this data set: https://datarepository.wolframcloud.com/resources/FER-2013, which contains images for seven emotions (happiness, neutral, sadness, anger, surprise, disgust, fear).
That way you could perhaps infer the transformation from the neutral face to one of these emotions.
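A rough sketch of that idea, assuming the FER-2013 images have been saved to disk and grouped by emotion into lists of paths (neutral_paths and happy_paths are placeholders), and reusing a dlib 68-landmark predictor:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model file

def mean_landmarks(paths):
    """Average normalized 68-point shapes over a set of face images."""
    shapes = []
    for p in paths:
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        faces = detector(img)
        if not faces:
            continue
        pts = predictor(img, faces[0])
        coords = np.array([(pts.part(i).x, pts.part(i).y) for i in range(68)], dtype=float)
        coords -= coords.mean(axis=0)          # remove translation
        coords /= np.linalg.norm(coords)       # remove scale, so image sizes are comparable
        shapes.append(coords)
    return np.mean(shapes, axis=0)

# Per-point displacement from the neutral expression to the target emotion;
# this could then be applied to a new face's landmarks before warping/inpainting.
displacement = mean_landmarks(happy_paths) - mean_landmarks(neutral_paths)   # placeholder path lists
```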
I am learning OpenCV for a project and I need to detect edges in a very bad image. I want to mark the edge between the light gray and the dark gray areas (marked yellow). My problem is finding the right combination of operators to do this job.
So far I have tried simple thresholding, but the gray value on the right circle ranges from 30 to 100, which also occurs everywhere else in the image. The image is very noisy, and filters (median, Gaussian) are not very useful. I am also familiar with Sobel edge detection, but the wanted edge is not strong enough to separate it from the edges created by the noise.
I also tried the Canny operator, but even this detects many edges on the right side of the image because of the many white and black pixels.
Does someone have an idea?
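For reference, a minimal version of the attempts described above (median filtering, global thresholding, and Canny) could look like this; the path and thresholds are placeholders:

```python
import cv2

img = cv2.imread("part_scan.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input path

blurred = cv2.medianBlur(img, 5)                                  # suppress some of the noise
_, thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)    # simple global threshold
edges = cv2.Canny(blurred, 30, 100)                               # hand-picked hysteresis thresholds
```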
I am a newbie to computer vision, image processing, and OCR. As part of a task, I need to perform OCR on the attached emails. However, the problem is that the ROI is not constant across images. For example, in all images we need to extract the information related to patient Yadav.
This needs to be achieved in Python and OpenCV. Please help. I have already tried the approach given below (a rough code sketch of these steps follows the list):
Change DPI
Grayscale
Binarize
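A minimal sketch of those three steps, with a placeholder path and using upscaling as a stand-in for changing the DPI of the rasterized image:

```python
import cv2

img = cv2.imread("email_scan.png")           # hypothetical input path

# "Change DPI" approximated by upscaling the raster image before OCR.
img = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu binarization
```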
However, I am facing problems with layout or zone analysis.
Thanks for the help.
Welcome to the computer vision world.
I think you're not understanding your problem well enough. Just trying something and checking whether it works will not get you far.
First, read about what Tesseract (the OCR engine) recommends for improving accuracy: https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality
OK, so what makes OCR fail on your data? As you can see, your data is clean and there is almost no noise. The problem is that the border of the printed email (the "ROI" in your question) does not line up correctly.
So how can we make the ROI line up correctly? We can use a perspective transformation. I took an example from https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
So how can we perform the perspective transformation? We have to find the 4 corners of the ROI and map them back to the corners of the image. To find the 4 corners of the ROI, you can find contours.
So here is a summary of the steps (a rough code sketch follows the list):
1. Find the ROI (white color) using color segmentation
2. Find the contour which covers the ROI
3. Find the 4 corners of the ROI's contour
4. Apply a perspective transform
5. Run Tesseract on the transformed image
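A rough sketch of these steps, assuming the email page is the large, near-white region in the photo, that the contour approximation yields exactly 4 corners, and that the pytesseract wrapper is available; paths, thresholds, and the output size are placeholders:

```python
import cv2
import numpy as np
import pytesseract

img = cv2.imread("email_photo.jpg")          # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. Segment the (near-)white page from the darker background.
_, page_mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

# 2. Take the largest external contour as the page outline (OpenCV 4.x return signature).
contours, _ = cv2.findContours(page_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
page = max(contours, key=cv2.contourArea)

# 3. Approximate the contour with a polygon to get the 4 corners
#    (assumes the approximation yields exactly 4 points; otherwise adjust epsilon).
peri = cv2.arcLength(page, True)
corners = cv2.approxPolyDP(page, 0.02 * peri, True).reshape(-1, 2).astype("float32")

# 4. Warp those corners to an upright rectangle (corner ordering is skipped here for
#    brevity; in practice sort them top-left, top-right, bottom-right, bottom-left).
w, h = 1000, 1400                            # placeholder output size
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype="float32")
M = cv2.getPerspectiveTransform(corners, dst)
warped = cv2.warpPerspective(img, M, (w, h))

# 5. OCR the rectified page.
text = pytesseract.image_to_string(cv2.cvtColor(warped, cv2.COLOR_BGR2RGB))
```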
Hope that helps.
I would like to check whether a given image matches a cropped face image. I cropped the face from the image using OpenCV in Python. Now I want to check whether the input image is a match for the cropped face or not. What methods can I use with OpenCV to achieve this?
For sufficiently small (and not scientifically accurate) purposes, you could use OpenCV's template matching.
Feature extraction and matching may give you more accurate results in many cases. A face detector comes as part of OpenCV. Face recognition, however, is a much larger problem altogether.
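A minimal sketch of the template-matching suggestion; the paths are placeholders and the 0.8 threshold is an arbitrary assumption. Note that this only works when the cropped face appears in the larger image at roughly the same scale and rotation:

```python
import cv2

scene = cv2.imread("full_image.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical paths
face_crop = cv2.imread("cropped_face.jpg", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, face_crop, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# A score close to 1.0 means a near-exact match at max_loc (top-left corner of the match).
print("match score:", max_val, "at", max_loc)
print("match" if max_val > 0.8 else "no match")                   # arbitrary threshold
```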
Hi, I want to use the Python Imaging Library to crop images to a specific size for a website. The problem is that these images are meant to show people's faces, so I need to crop automatically based on them.
I know face detection is a difficult problem, so I'm thinking of using the face.com API http://developers.face.com/tools/#faces/detect which is fine for what I want to do.
I'm just a little stuck on how I would use this data to crop a selected area based on the majority of the faces.
Can anybody help?
Joe
There is a library for Python that has a concept of smart cropping and that, among other options, can use face detection to do smarter cropping.
It uses OpenCV under the hood, but you are isolated from it.
https://github.com/globocom/thumbor
If you have some rectangle that you want to excise from an image, here's what I might try first:
(optional) If the image is large, do a rough square crop centered on the face with dimensions sqrt(2) larger than the longer edge (if rectangular). Worst-case (45° rotation), it will still grab everything important.
Rotate based on the face orientation (something like rough_crop.rotate(math.degrees(math.atan(ydiff/xdiff))); trig is fun).
Do a final crop. If you did the initial crop, the face should be centered; otherwise you'll have to transform (rotate) all your old coordinates to the new image (more trig!). A rough PIL sketch of these steps follows.
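A sketch of those three steps with PIL, assuming a face detector has already returned a face center, a face box size, and the two eye centers (all values below are placeholders):

```python
import math
from PIL import Image

img = Image.open("photo.jpg")                        # hypothetical input path
cx, cy = 400, 300                                    # placeholder face center
face_size = 180                                      # placeholder face box edge length
left_eye, right_eye = (360, 290), (440, 295)         # placeholder eye centers
out_w, out_h = 200, 200                              # desired output size

# 1. Rough square crop, sqrt(2) larger than the face box, so a worst-case 45°
#    rotation still keeps everything important.
half = int(face_size * math.sqrt(2) / 2)
rough = img.crop((cx - half, cy - half, cx + half, cy + half))

# 2. Rotate so the eye line becomes horizontal (PIL rotates counter-clockwise).
xdiff = right_eye[0] - left_eye[0]
ydiff = right_eye[1] - left_eye[1]
rough = rough.rotate(math.degrees(math.atan2(ydiff, xdiff)), expand=False)

# 3. Final centered crop, since the face was centered in the rough crop.
w, h = rough.size
final = rough.crop(((w - out_w) // 2, (h - out_h) // 2,
                    (w + out_w) // 2, (h + out_h) // 2))
```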