Face and hair detection for Python - python

I am using OpenCV or dlib to detect faces in images. The results are very good. Here is an example:
However, I also want to include the hair and the neck in the extracted region, like this:
I have tried to look for a library or framework to help me achieve that, but I can't find one.
Is there any way to do that?

If you want to extract exactly the region of the hair and neck, you will need to train your own model, because the current dlib models do not include those landmarks.
Otherwise, if an approximate region is enough, you can use OpenPose, which gives you landmarks for the face, ears and shoulders (and even the body and hand fingers). From those landmarks you can draw the area you are interested in.
Example:
the width of the rectangle = the shoulder length (point 2 -> point 5)
the height = the length from the neck (point 1) to the nose (point 0), times 2: (point 1 - point 0) * 2
landmarks by openpose
face + hair + neck
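Not from the original answer, just a minimal sketch of that rectangle: given one person's OpenPose keypoints in the BODY_25/COCO layout (0 = nose, 1 = neck, 2 = right shoulder, 5 = left shoulder), take the shoulder distance as the width and twice the neck-to-nose distance as the height. Exactly how to anchor the box around the neck is a guess.
import numpy as np

def head_hair_neck_crop(image, keypoints):
    """keypoints: (N, 3) OpenPose array of (x, y, confidence) for one person."""
    nose, neck = keypoints[0, :2], keypoints[1, :2]
    r_shoulder, l_shoulder = keypoints[2, :2], keypoints[5, :2]

    width = np.linalg.norm(l_shoulder - r_shoulder)   # shoulder length (2 -> 5)
    height = 2 * np.linalg.norm(neck - nose)          # (neck -> nose) * 2

    cx, cy = neck                                     # anchor the box on the neck
    x0 = int(max(0, cx - width / 2))
    x1 = int(min(image.shape[1], cx + width / 2))
    y0 = int(max(0, cy - height))                     # reach up over the hair
    y1 = int(min(image.shape[0], cy + height / 2))    # and a little below the neck
    return image[y0:y1, x0:x1]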

Use this code to grow the dlib bounding box by a percentage:
from imutils.face_utils import rect_to_bb  # converts a dlib rect to (x, y, w, h)

rects = detector(original_image, 1)
for rect in rects:
    (x, y, w, h) = rect_to_bb(rect)
    # grow the box by 30% of its size on every side, clamped to the image
    x_inc = int(w * 0.3)
    y_inc = int(h * 0.3)
    sub_face = original_image[max(0, y - y_inc):y + h + y_inc,
                              max(0, x - x_inc):x + w + x_inc]
    newimg = cv2.resize(sub_face, (224, 224))

Related

Processing an image of a compass to determine the direction a player is facing

I am building a video game overlay that sends data back to the player to create a custom HUD, just for fun.
I am trying to read an image of a video game compass and determine the exact orientation of the compass to be a part of my HUD.
Example photo which shows the compass at the top of the screen:
(The circle is currently facing ~170°. NOTE: the position of the compass is also fixed.)
Obviously, when I do the image processing on the compass I will only be looking at the compass and not the whole screen.
This has been more challenging for me compared to previous computer vision aspects of my HUD. I have been trying to process the image using cv2 and from there use some object detection to find the "needle" of the compass.
I am struggling to get triangle-shape detection working on either needle, which would help me determine my orientation.
The solution could be lower-tech and hackier, perhaps just searching for the pixel on the edge of the compass and determining that it is the end of the needle.
One solution I do not think is viable is using object detection to find a picture of a compass facing true north and then calculating the rotation of the current compass. This is because the background of the compass does not rotate, only the needle does.
So far I have applied Hough Circle Transform as seen here:
https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html#hough-circles
This has helped me get a circle around my compass as well as the middle of my compass. However, I cannot find a good solution for determining the heading of the needle relative to the middle of the compass.
I understand this is a pretty open-ended question but I am looking for any theoretical solutions that would help me implement a solution. Anything would help as this is a strange problem for me and I am struggling to think how to go about solving it.
In general I would suggest looking at a thin ring just beneath the border of your compass (this will give you the lowest error). You could either work on an image that is a polar transform of this ring, or directly on the ring itself, looking for the center of gravity of the color red. This center of gravity with respect to the center of your compass should give you the angle. Most likely you don't even need the polar transform.
import cv2 as cv

im = cv.imread("RPc9Q.png")
(x, y, w, h) = (406, 14, 29, 29)   # bounding box of the compass, picked manually
# unwrap the compass into polar coordinates (angle along one axis, radius along the other)
warped = cv.warpPolar(
    src=im,
    dsize=(512, 512),
    center=(x + (w - 1) / 2, y + (h - 1) / 2),
    maxRadius=(w - 1) / 2,
    flags=cv.WARP_POLAR_LINEAR | cv.INTER_LINEAR
)
Here's some more elaboration on the polar warp approach.
- polar warp
- take a column of pixels, being a circle in the source picture
- plot to see what's there
- argmax to find the red bits of the arrow
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

im = cv.imread("RPc9Q.png") * np.float32(1/255)
(x, y, w, h) = (406, 14, 29, 29)

# polar warp...
steps_angle = 360 * 2
steps_radius = 512
warped = cv.warpPolar(
    src=im,
    dsize=(steps_radius, steps_angle),
    center=(x + (w-1)/2, y + (h-1)/2),
    maxRadius=(w-1)/2,
    flags=cv.WARP_POLAR_LINEAR | cv.INTER_LANCZOS4
)

# goes 360 degrees, starting from 90 degrees (east) clockwise
# sample at 85% of "full radius", picked manually
col = int(0.85 * steps_radius)

# for illustration
plt.imshow(cv.rotate(cv.line(warped.copy(), (col, 0), (col, warped.shape[0]), (0, 0, 255), 1), rotateCode=cv.ROTATE_90_COUNTERCLOCKWISE))

signal = warped[:, col, 2]  # red channel, that column

# polar warp coordinate system:
# first row of pixels is sampled at exactly 90 degrees (east)
samplepoints = np.arange(steps_angle) / steps_angle * 360 + 90

imax = np.argmax(signal)  # peak

def vertex_parabola(y1, y2, y3):
    return 0.5 * (y1 - y3) / (y3 - 2*y2 + y1)

# print("samples around maximum:", signal[imax-1:imax+2] * 255)
imax += vertex_parabola(*signal[imax-1:imax+2].astype(np.float32))
# that slice will blow up in your face if the index gets close to the edges
# either use np.roll() or drop the correction entirely

angle = imax / steps_angle * 360 + 90  # ~= samplepoints[imax]
print("angle:", angle)  # 176.2

plt.figure(figsize=(16, 4))
plt.xlim(90, 360 + 90)
plt.xticks(np.arange(90, 360 + 90, 45))
plt.plot(
    samplepoints, signal, 'k-',
    samplepoints, signal, 'k.')
plt.axvline(x=angle, color='r', linestyle='-')
plt.show()
I have been able to solve my question with the feedback provided.
First I grab the image of the compass:
step_1
After processing, I crop out the middle and edges of the compass, as seen here:
step_2
Now I have a cropped compass with only a little bit of red showing where the compass needle points. I masked out the red part of the image.
step_3
From there it is a simple operation to find the center of the blob, which roughly tells me where the needle is pointing. Although this is not perfectly accurate, I believe it will work for my purposes.
step_4
Now that I know where the end of the needle is, it should be easy to calculate the direction based on that.
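For reference, a rough sketch of those steps (the crop filename, the HSV thresholds, and the assumption that the compass center is the center of the crop are placeholders, not the asker's actual values):
import cv2
import numpy as np

compass = cv2.imread("compass_crop.png")              # hypothetical cropped compass
cx, cy = compass.shape[1] / 2, compass.shape[0] / 2   # assume the center of the crop

# mask the red needle tip (red wraps around hue 0 in HSV)
hsv = cv2.cvtColor(compass, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))

# center of the red blob via image moments
m = cv2.moments(mask, binaryImage=True)
if m["m00"] > 0:
    bx, by = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # heading measured clockwise from "up" (screen y grows downward)
    heading = np.degrees(np.arctan2(bx - cx, -(by - cy))) % 360
    print("needle heading:", heading)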
Some references:
Finding red color in image using Python & OpenCV
https://www.geeksforgeeks.org/python-opencv-find-center-of-contour/

Is there a simple way to map a texture to a different "UV" system in python?

I really don't know if "UVs" is the right word, as I'm from the world of Unity and am trying to write some stuff in Python. What I'm trying to do is take a picture of a human (from a webcam), take the placement of their landmarks/key features, and alter a second image (of a different person) so that their key features are in the same place, while morphing/warping the parts of the skin within the face to fit the position of the first (webcam) input image's landmarks. After I do that I need to put the face back onto the non-webcam input. (I'm sorry for how much that made me sound like a serial killer, stretching and cutting faces.) I know that probably didn't make any sense, but I want it to look like this.
I have the face landmark detection and cutting done with dlib and OpenCV, but I need a way to take these "cut" face chunks and stretch them "dynamically". What I mean by dynamically is that you don't just fit a mask by linearly resizing it on one or two axes: you can select a point of the mask and move just that point. I want to do that, but my mask is my cut chunk, and the point is a section of that chunk that needs to move for the chunk to comply with the position of the generated landmarks. I know this is a very hard topic to think about, so if you need any clarification just ask. My code:
import cv2
import numpy as np
import dlib

cap = cv2.VideoCapture(0)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

while True:
    _, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = detector(gray)
    for face in faces:
        # draw the face bounding box
        x1, y1 = face.left(), face.top()
        x2, y2 = face.right(), face.bottom()
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)

        # draw the 68 facial landmarks
        landmarks = predictor(gray, face)
        for n in range(0, 68):
            x = landmarks.part(n).x
            y = landmarks.part(n).y
            cv2.circle(frame, (x, y), 4, (255, 0, 0), -1)

    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1)
    if key == 27:   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
EDIT: No, I'm not a serial killer.
If you need to deform the source image like a rubber sheet using two sets of keypoints, you need to use a thin plate spline (TPS) or, better, a piecewise affine transformation like here. The latter is more similar to texture rasterization methods (triangle-to-triangle texture transforms).
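Not part of the original answer, but as a rough illustration: scikit-image ships a PiecewiseAffineTransform that does the triangle-to-triangle warping described above. A minimal sketch, assuming you already have the 68 dlib landmarks of both faces as (x, y) arrays (the names here are placeholders):
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_face_to_landmarks(face_img, face_pts, target_pts):
    """Deform face_img so that its landmarks (face_pts) land on target_pts."""
    tform = PiecewiseAffineTransform()
    # warp() treats the transform as a map from output coordinates to input
    # coordinates, so estimate it from the target layout to the source layout
    tform.estimate(np.asarray(target_pts, float), np.asarray(face_pts, float))
    # pixels outside the landmark mesh are left black; composite as needed
    return warp(face_img, tform, output_shape=face_img.shape[:2])

# usage sketch: webcam_pts / other_pts are the 68x2 landmark arrays
# warped = warp_face_to_landmarks(other_face_img, other_pts, webcam_pts)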

dlib/cv2: Working with hundreds of thousands of pictures

For my next university project I will have to teach a Convolutional Neural Network how to denoise a picture of a face, so I started digging the web for datasets of faces. I stumbled upon this dataset (CelebA) with 200k+ pictures of people, and I found the first few problems: there are too many pictures to do basic computations on them.
I should:
Open each image and make a numpy array out of it (dlib.load_rgb_image is fine)
Find a face in it, then use the 5-point shape predictor to find the eyes and align them
Rotate the picture so that the eyes lie on a straight horizontal line
Crop the face and resize it to 256x256 (I could choose 64x64, but it's not a huge time saver)
Make a copy and add artificial noise to it
Save them both to two different folders
On a PC that the university gave me I can do about 40 images per minute, which is around 57k images every 24 hours.
To speed things up I have tried threads, one thread for each picture, but the speedup is only about 2-3 more images per minute.
This is the code I'm running:
import math
import cv2
import dlib
import skimage.util

### Out of the threads, before running them ###
def img_crop(img, bounding_box):
    # some code using cv2.copyMakeBorder to crop the image
    ...

MODEL_5_LANDMARK = "5_point.dat"
shape_predictor = dlib.shape_predictor(MODEL_5_LANDMARK)
detector = dlib.get_frontal_face_detector()

### Inside each thread ###
img_in = dlib.load_rgb_image("img_in.jpg")
dets = detector(img_in, 1)
shape = shape_predictor(img_in, dets[0])

points = []
for i in range(0, shape.num_parts):
    point = shape.part(i)
    points.append((point.x, point.y))

# angle between the two eyes
eye_sx = points[1]
eye_dx = points[3]
dy = eye_dx[1] - eye_sx[1]
dx = eye_dx[0] - eye_sx[0]
angle = math.degrees(math.atan2(dy, dx))

# rotate the image so the eyes are horizontal
center = (dets[0].center().x, dets[0].center().y)
h, w, _ = img_in.shape
M = cv2.getRotationMatrix2D(center, angle + 180, 1)
img_in = cv2.warpAffine(img_in, M, (w, h))

# detect again on the rotated image, then crop and resize
dets = detector(img_in, 1)
bbox = (dets[0].left(), dets[0].top(), dets[0].right(), dets[0].bottom())
img_out = cv2.resize(img_crop(img_in, bbox), (256, 256))
img_out = cv2.cvtColor(img_out, cv2.COLOR_BGR2RGB)

img_noisy = skimage.util.random_noise(img_out, ....)
cv2.imwrite('out.jpg', img_out)
cv2.imwrite('out_noise.jpg', img_noisy)
My programming language is Python 3.6; how can I speed things up?
Another problem will be loading all 200k images into memory as numpy arrays: from my initial testing, 12k images take around 80 seconds to load, with a final shape of (12000, 256, 256, 3). Is there a faster way to achieve this?
First of all, forgive me because I am only familiar with C++. Please find below my suggestions to speed up the dlib functions, and convert them to your Python version if they are helpful.
Color does not matter to dlib, so convert the input image to grayscale before processing to save time.
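In Python that is one extra line before the detector call (a sketch; dlib's detector and shape predictor both accept 8-bit grayscale numpy arrays):
import cv2
gray = cv2.cvtColor(img_in, cv2.COLOR_RGB2GRAY)  # img_in was loaded as RGB
dets = detector(gray, 1)
shape = shape_predictor(gray, dets[0])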
I see you call the function below twice; what is the purpose? It could double the processing time. If you need the new landmarks after alignment, try rotating the landmark points directly instead of re-detecting. How to rotate points
dets = detector(img_in, 1)
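A quick sketch of pushing the already-detected points through the same matrix you pass to cv2.warpAffine, instead of running the detector a second time (the helper name is mine):
import numpy as np
import cv2

def rotate_points(points, M):
    # M is the 2x3 matrix from cv2.getRotationMatrix2D used for warpAffine
    pts = np.asarray(points, dtype=np.float32).reshape(1, -1, 2)
    return cv2.transform(pts, M).reshape(-1, 2)

# rotated_landmarks = rotate_points(points, M)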
Because you only want to detect one face per image, try setting pyramid_down to 6 (by default it is 1; it rescales the image to detect more faces). You can test values from 1 to 6:
dets = detector(img_in, 6)
Turn on AVX instructions.
Note: more details can be found on the Dlib GitHub.

Hough transform not detecting correct circle

I'm working on a project in which I have to detect traffic lights (circles, obviously). I am currently working with a sample image I picked up from a spot, but after all my efforts I can't get the code to detect the proper circle (light).
Here is the code:
# import the necessary packages
import numpy as np
import cv2

image = cv2.imread('circleTestsmall.png')
output = image.copy()

# Apply Gaussian blur to smooth the image
blur = cv2.GaussianBlur(image, (9, 9), 0)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", output)
cv2.imshow('Blur', blur)
cv2.waitKey(0)
The image in which I want to detect the circle:
This is what the output image looks like:
I tried playing with the Gaussian blur radius values and the minDist parameter in the Hough transform but didn't have much success.
Can anybody point me in the right direction?
P.S. Some off-topic questions, but crucial ones for my project:
1. My computer takes about 6-7 seconds to show the final image. Is my code bad, or is it my computer? My specs are: Intel i3 M350 2.6 GHz (first gen), 6 GB RAM, Intel HD Graphics 1000 with 1625 MB.
2. Will the Hough transform work directly on a binary thresholded image?
3. Will this code run fast enough on a Raspberry Pi 3 to be real-time? (I have to mount it on a moving autonomous robot.)
Thank you!
First of all you should restrict your parameters a bit.
Please refer to: http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#houghcircles
At least set reasonable values for the minimum and maximum radius. Try to find that one particular circle first; if you succeed, increase your radius tolerance.
The Hough transform is a brute-force method: it will try every possible radius for every edge pixel in the image. That's why it is not very suitable for real-time applications, especially if you do not provide proper parameters and input. You have no radius limits at the moment, so you will accumulate hundreds, if not thousands, of circles for every pixel...
In your case the traffic light is also not very round, so the accumulated result won't be very good. Try finding highly saturated, bright, compact blobs of a reasonable size instead; it should be faster and more robust.
You can further reduce processing time if you restrict the image size. I guess you can assume that the traffic light will always be in the upper half of your image, so omit the lower half. Traffic lights will always be green, red or yellow; remove everything that is not of those colors... I think you get what I mean.
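A sketch of that blob idea using OpenCV's SimpleBlobDetector (the saturation/brightness mask and the size/circularity limits are guesses to be tuned, not values from the answer):
import cv2

image = cv2.imread('circleTestsmall.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 150, 150), (180, 255, 255))  # saturated AND bright

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255          # look for bright blobs in the mask
params.filterByArea = True
params.minArea = 30
params.maxArea = 3000
params.filterByCircularity = True
params.minCircularity = 0.6     # "compact" blobs
detector = cv2.SimpleBlobDetector_create(params)

keypoints = detector.detect(mask)
for kp in keypoints:
    print("blob at", kp.pt, "size", kp.size)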
I think that you should first perform a color segmentation based on the stoplight colors. It will tremendously reduce the ROI. Then you can apply the Hough Transform on the ROI edges only (because you want the contour).
Another restriction: only accept circles whose inside color is homogeneous. This would throw out all the false hits in the above example.
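And here is a sketch of the color-segmentation-then-Hough route suggested above (the HSV ranges and Hough parameters are rough guesses, not tuned on the asker's image):
import cv2
import numpy as np

image = cv2.imread('circleTestsmall.png')
roi = image[:image.shape[0] // 2]               # upper half of the frame only
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# keep only red-ish, yellow-ish and green-ish pixels (ranges are guesses)
mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) \
     | cv2.inRange(hsv, (20, 100, 100), (90, 255, 255)) \
     | cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))

gray = cv2.bitwise_and(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), mask)
gray = cv2.GaussianBlur(gray, (9, 9), 0)

# restrict the radius range so Hough doesn't try every possible circle
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=15, minRadius=5, maxRadius=40)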

Finding shapes in an image using opencv

I'm trying to look for shapes in an image using OpenCV. I know the shapes I want to match (there are some shapes I don't know about, but I don't need to find them) and their orientations. I don't know their sizes (scale) and locations.
My current approach:
Detect contours
For each contour, calculate the maximum bounding box
Match each bounding box to one of the known shapes separately. In my real project, I'm scaling the region to the template size and calculating differences in Sobel gradient, but for this demo, I'm just using the aspect ratio.
Where this approach comes undone is where shapes touch. The contour detection picks up the two adjacent shapes as a single contour (single bounding box). The matching step will then obviously fail.
Is there a way to modify my approach to handle adjacent shapes separately? Also, is there a better way to perform step 3?
For example: (Es colored green, Ys colored blue)
Failed case: (unknown shape in red)
Source code:
import cv
import sys

E = cv.LoadImage('e.png')
E_ratio = float(E.width) / E.height
Y = cv.LoadImage('y.png')
Y_ratio = float(Y.width) / Y.height
EPSILON = 0.1

im = cv.LoadImage(sys.argv[1], cv.CV_LOAD_IMAGE_GRAYSCALE)
storage = cv.CreateMemStorage(0)
seq = cv.FindContours(im, storage, cv.CV_RETR_EXTERNAL,
                      cv.CV_CHAIN_APPROX_SIMPLE)
regions = []
while seq:
    pts = [pt for pt in seq]
    x, y = zip(*pts)
    min_x, min_y = min(x), min(y)
    width, height = max(x) - min_x + 1, max(y) - min_y + 1
    regions.append((min_x, min_y, width, height))
    seq = seq.h_next()

rgb = cv.LoadImage(sys.argv[1], cv.CV_LOAD_IMAGE_COLOR)
for x, y, width, height in regions:
    pt1 = x, y
    pt2 = x + width, y + height
    if abs(float(width) / height - E_ratio) < EPSILON:
        color = (0, 255, 0, 0)
    elif abs(float(width) / height - Y_ratio) < EPSILON:
        color = (255, 0, 0, 0)
    else:
        color = (0, 0, 255, 0)
    cv.Rectangle(rgb, pt1, pt2, color, 2)

cv.ShowImage('rgb', rgb)
cv.WaitKey(0)
e.png:
y.png:
good:
bad:
Before anybody asks, no, I'm not trying to break a captcha :) OCR per se isn't really relevant here: the actual shapes in my real project aren't characters -- I'm just lazy, and characters are the easiest thing to draw (and still get detected by trivial methods).
As your shapes can vary in size and ratio, you should look at scale-invariant descriptors. A bunch of such descriptors would be perfect for your application.
Compute those descriptors on your templates and then use some kind of simple classification to match them. It should give pretty good results with simple shapes like the ones you show.
I used Zernike and Hu moments in the past, the latter being the most famous. You can find an example implementation here: http://www.lengrand.fr/2011/11/classification-hu-and-zernike-moments-matlab/.
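Hu moments are also built into OpenCV: cv2.matchShapes compares two contours via their Hu-moment signatures. Here is a sketch of using it for step 3, written against the modern cv2 API rather than the old cv module (the threshold direction and the 0.3 cut-off are guesses):
import cv2

def largest_contour(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # assumes dark shapes on a light background; flip the threshold if not
    _, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

e_tpl = largest_contour('e.png')
y_tpl = largest_contour('y.png')

def classify(contour, cutoff=0.3):
    # matchShapes compares Hu-moment signatures; lower means more similar
    d_e = cv2.matchShapes(contour, e_tpl, cv2.CONTOURS_MATCH_I1, 0)
    d_y = cv2.matchShapes(contour, y_tpl, cv2.CONTOURS_MATCH_I1, 0)
    label, d = ('E', d_e) if d_e < d_y else ('Y', d_y)
    return label if d < cutoff else 'unknown'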
Another thing: given your problem, you should also look at OCR technologies (OCR stands for optical character recognition: http://en.wikipedia.org/wiki/Optical_character_recognition).
Hope this helps a bit.
Julien
Have you tried Chamfer matching, or contour matching (correspondence) using CCH as the descriptor?
Chamfer matching uses the distance transform of the target image and the template contour. It is not exactly scale invariant, but it is fast.
The latter is rather slow, as the complexity is at least quadratic for the bipartite matching problem. On the other hand, this method is invariant to scale and rotation, and probably to local distortion (for approximate matching, which IMHO is good for the bad example above).
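For reference, a bare-bones chamfer score can be put together from cv2.distanceTransform (a sketch under the assumption that you already have a Canny edge map of the target and the template contour points; it is only the scoring step, with no search over positions or scales):
import cv2
import numpy as np

def chamfer_score(target_edges, template_pts, offset):
    # distance transform of the non-edge pixels gives, for every pixel,
    # the distance to the closest edge pixel in the target image
    dist = cv2.distanceTransform(255 - target_edges, cv2.DIST_L2, 3)
    pts = (np.asarray(template_pts) + np.asarray(offset)).astype(int)
    inside = ((pts[:, 0] >= 0) & (pts[:, 0] < dist.shape[1]) &
              (pts[:, 1] >= 0) & (pts[:, 1] < dist.shape[0]))
    pts = pts[inside]
    return dist[pts[:, 1], pts[:, 0]].mean()   # lower = better match

# usage sketch:
# target_edges = cv2.Canny(target_gray, 50, 150)
# template_pts = template_contour.reshape(-1, 2)   # (x, y) outline points
# score = chamfer_score(target_edges, template_pts, offset=(100, 40))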
