Face comparison in group photos - python

I want to compare two photos. The first has the face of one individual. The second is a group photo with many faces. I want to see if the individual from the first photo appears in the second photo.
I have tried to do this with the deepface and face_recognition libraries in Python, by pulling faces one by one out of the group photo and comparing them to the original photo:
face_locations = face_recognition.face_locations(img2_loaded)
for face in face_locations:
    top, right, bottom, left = face
    face_img = img2_loaded[top:bottom, left:right]
    face_recognition.compare_faces(img1_loaded, face_img)
This results in an error: "operands could not be broadcast together with shapes (3088,2316,3) (90,89,3)". I also get the same error when I take the faces pulled out of the group photo, save them using PIL, and then pass them into deepface. Can anyone recommend an alternative way to achieve this functionality, or fix my current attempt? Thank you so much!

deepface is designed to verify two faces, but you can still use it for one-to-many face recognition.
You have two pictures. The first has just a single face; I call this img1.jpg. The second has many faces; I call this img2.jpg.
You can first detect the faces in img2.jpg with OpenCV:
import cv2

img2 = cv2.imread("img2.jpg")
face_detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
faces = face_detector.detectMultiScale(img2, 1.3, 5)

detected_faces = []
for face in faces:
    x, y, w, h = face
    detected_face = img2[int(y):int(y+h), int(x):int(x+w)]
    detected_faces.append(detected_face)
Then, you need to compare each item of the detected_faces list with the face in img1.jpg:
img1 = cv2.imread("img1.jpg")
targets = face_detector.detectMultiScale(img1, 1.3, 5)
x, y, w, h = targets[0]  # img1 has just a single face
target = img1[int(y):int(y+h), int(x):int(x+w)]

for face in detected_faces:
    # compare face and target in each iteration
    compare(face, target)
We still need to define the compare function:
from deepface import DeepFace

def compare(img1, img2):
    resp = DeepFace.verify(img1, img2)
    print(resp["verified"])
So, you can adapt deepface for your case like that.
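As for the original face_recognition attempt: compare_faces expects lists of 128-dimensional face encodings, not raw image arrays, which is what triggers the broadcasting error in the question. A minimal sketch of the intended one-to-many comparison with that library (reusing the variable names from the question):

import face_recognition

# encode the single reference face from the first photo
ref_encoding = face_recognition.face_encodings(img1_loaded)[0]

# encode every face found in the group photo, then compare encodings
face_locations = face_recognition.face_locations(img2_loaded)
group_encodings = face_recognition.face_encodings(img2_loaded, face_locations)

matches = face_recognition.compare_faces(group_encodings, ref_encoding)
print(any(matches))  # True if the individual appears in the group photo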

Related

Automated image dataset cropping of safety helmet faces and heads

I am currently trying to run this code on a dataset which includes multiple full-body photos of people wearing hard hats.
Here is the code:
import cv2
import glob
import os

def detect_face(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.7, minNeighbors=5)
    return faces
filename = "C:\\Users\\Vitaliy Yashchenko\\Desktop\\OpenCV Face recognition\\!!!\\dataset\\hattocrop"
for img in glob.glob(filename + '/*.*'):
    var_img = cv2.imread(img)
    face = detect_face(var_img)
    print(face)
    if len(face) == 0:
        continue
    for (ex, ey, ew, eh) in face:
        crop_image = var_img[ey:ey+eh, ex:ex+ew]
        cv2.imshow("cropped", crop_image)
        cv2.waitKey(0)
        # use the file's base name so the crop lands inside outputs/
        cv2.imwrite(os.path.join("outputs/", os.path.basename(img)), crop_image)
As the Haar cascade recognizes only the face, I tried to crop the face together with the hard hat, and I was slightly confused by the axes ex, ey, ew, eh.
If I run the following in the corresponding line:
crop_image = var_img[ey:eh+ey+100, ex:ex+ew]
I get the lower part of the face.
What is the appropriate way to define the upper part of the face (the head) in the cropped image, so that the crop includes the safety hard hat?
I am not sure what the issue is, but:
- Dimensions in CV images, and in computer vision generally, are read from the top-left corner (0,0) to the bottom-right corner (img_width, img_height). Since y grows downward, the hard hat sits at smaller y values than the face box, so you need to subtract from ey rather than add to eh; see the sketch after this list.
- cv2 has some deep learning tools built in, but if you are really interested in the project you should use other tools such as PyTorch; you can also use models trained there in OpenCV.
- Last tip (side note): don't put spaces in your directory names :)
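A minimal sketch of that upward crop, assuming an arbitrary 100-pixel margin for the hat (tune it for your image resolution):

margin = 100  # hypothetical margin; depends on image resolution
# y grows downward, so extend the crop upward by subtracting from ey, clamped at 0
crop_image = var_img[max(ey - margin, 0):ey + eh, ex:ex + ew]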

How to use a handwritten T shape on a body part as the target and paste an image on it?

I am developing a project for my university assignment which has an AR part that I tried to build with Unity and Vuforia. I want to use a simple T shape (or any shape that is easy for the user to draw on a body part such as a hand) as the image target, because I'm developing an app similar to inkHunter. In that app, a smiley is the image target: when the customer draws a smiley on the body and points the camera at it, the app finds it and shows the selected tattoo design on top of it. I tried this with the Vuforia SDK, but it assigns a rating to each image target, so I can't use what I want as the target. I think OpenCV is the right way to do it, but it's hard to learn and I have little time. I think this is not a big thing to implement, so please try to help me with this problem. In inkHunter, even if I draw the target on a sheet of paper, they still show the tattoo on it; I need the same, which means I need to detect the drawn target. It would be great if you could help me in this situation. Thanks.
The target can be like this:
I was able to do template matching on still images, and I applied the same approach to real time, meaning I looped over the frames. But it does not seem to match the template against the frames, and I realized that found (the bookkeeping variable) is always None.
import cv2
import numpy as np
import imutils

def main():
    template = cv2.imread("C:\\Users\\Manthika\\Desktop\\opencvtest\\template.jpg")
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    template = cv2.Canny(template, 50, 200)
    (tH, tW) = template.shape[:2]
    cv2.imshow("Template", template)

    windowName = "Something"
    cv2.namedWindow(windowName)
    cap = cv2.VideoCapture(0)

    if cap.isOpened():
        ret, frame = cap.read()
    else:
        ret = False

    # loop over the frames to find the template
    while ret:
        # grab the frame, convert it to grayscale, and initialize the
        # bookkeeping variable to keep track of the matched region
        ret, frame = cap.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found = None

        # loop over the scales of the image
        for scale in np.linspace(0.2, 1.0, 20)[::-1]:
            # resize the image according to the scale, and keep track
            # of the ratio of the resizing
            resized = imutils.resize(gray, width=int(gray.shape[1] * scale))
            r = gray.shape[1] / float(resized.shape[1])

            # if the resized image is smaller than the template, then break
            # from the loop
            if resized.shape[0] < tH or resized.shape[1] < tW:
                break

            # detect edges in the resized, grayscale image and apply template
            # matching to find the template in the image
            edged = cv2.Canny(resized, 50, 200)
            result = cv2.matchTemplate(edged, template, cv2.TM_CCOEFF)
            (_, maxVal, _, maxLoc) = cv2.minMaxLoc(result)

            # if we have found a new maximum correlation value, then update
            # the bookkeeping variable
            if found is None or maxVal > found[0]:
                found = (maxVal, maxLoc, r)
            print(found)

        # unpack the bookkeeping variable and compute the (x, y) coordinates
        # of the bounding box based on the resized ratio
        print(found)
        if found is None:
            # just show the frame if the template is not detected
            cv2.imshow(windowName, frame)
        else:
            (_, maxLoc, r) = found
            (startX, startY) = (int(maxLoc[0] * r), int(maxLoc[1] * r))
            (endX, endY) = (int((maxLoc[0] + tW) * r), int((maxLoc[1] + tH) * r))

            # draw a bounding box around the detected result and display the image
            cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2)
            cv2.imshow(windowName, frame)

        if cv2.waitKey(1) == 27:
            break

    cv2.destroyAllWindows()
    cap.release()

if __name__ == "__main__":
    main()
Please help me to solve this problem
I can give you hints on the OpenCV part, but not on Unity and Vuforia; I hope it helps anyway.
So, the way I see the pipeline for the project:
1. Detect the location, size, and aspect ratio of the target
2. Use homography to transform the image that should be put over the original
3. Overlay: put one image on top of the other
I will assume that the target will be a dark "T" on a white piece of paper, that it may appear in different locations on the paper, and that the paper itself may move.
1. Detect location, size, and aspect ratio
Firstly, you need to detect the piece of paper. Since you know its color and aspect ratio, you may use RGB/HSV thresholding for segmentation. You may also try using deep/machine learning (some strategy similar to R-CNN, HOG+SVM, etc.), but that will take time. Then, you can use the findContours() function from OpenCV to get the largest object. From the contour you can get the location, size, and aspect ratio of the paper.
After that, you do the same thing within the piece of paper, looking for the "T". Here you can use the template matching method, scanning the region of interest with a predefined mask at different sizes, or simply repeat the steps above; a sketch follows below.
A useful resource may be this credit card character recognition example. It helped me a lot one day. :)
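A minimal sketch of that segmentation step, assuming the dark "T" on bright paper described above (the threshold value and file name are assumptions to tune):

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# bright paper on a darker background: a simple binary threshold separates it
_, paper_mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy)
contours, _ = cv2.findContours(paper_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    # take the largest contour as the piece of paper
    paper = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(paper)
    aspect_ratio = w / float(h)
    roi = gray[y:y+h, x:x+w]  # search for the dark "T" inside this region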
2. Use homography for transformation of the image that should be put over the original
After extracting the aspect ratio you will know the approximate size and shape that should appear on top of the "T". This will let you use homography to transform the image you want to put over the "T". Here is a good point to start; you can also google for other sources, there should be plenty of them, and as far as I know, OpenCV has functions for that.
After the transformation, I would recommend using interpolation, because there might be some missing pixels afterwards.
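A minimal sketch of that transformation, assuming you already know the four corners the tattoo should cover (the file names and corner coordinates here are hypothetical):

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")    # hypothetical camera frame
tattoo = cv2.imread("tattoo.png")  # hypothetical design to overlay
h, w = tattoo.shape[:2]

# source corners: the tattoo image; destination corners: where it should land
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[120, 80], [300, 90], [310, 260], [110, 250]])  # assumed corners around the "T"

# perspective transform; warpPerspective interpolates (bilinear by default)
H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(tattoo, H, (frame.shape[1], frame.shape[0]))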
3. Overlay: put one image on top of the other
The last step is just to go through all the pixels of the input image and put the transformed image over the target pixels.
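A minimal sketch of that overlay, continuing from the warped image in the previous sketch (it assumes black pixels in warped are empty background):

import numpy as np

# pixels the warp actually filled (non-black) replace the frame pixels
mask = warped.sum(axis=2) > 0
output = frame.copy()
output[mask] = warped[mask]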
Hope this helps, good luck!:)

How to show the biggest rectangle in OpenCV Haar classifier

I have already trained positive and negative images on the side view of a car using Haar cascade object detection. Now, when I use the cascade XML file to detect cars in images, I get multiple rectangles.
1) Why am I getting multiple rectangles around my object?
2) How do I show only the largest rectangle detected in the image?
Output image (this is the type of output that I am getting on every image):
Code
import cv2

car_cascade = cv2.CascadeClassifier('data/cascade.xml')
img = cv2.imread('test/46.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cars = car_cascade.detectMultiScale(gray, 1.3, 5)

for (x, y, w, h) in cars:
    img = cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Piglet's answer will help you set a threshold for the minimum / maximum size, but if you wanted to find the largest bounding box in the image, you could do something like this:
import numpy as np

areas = [w * h for (x, y, w, h) in cars]
i_biggest = np.argmax(areas)
biggest = cars[i_biggest]
Here, we're doing the following:
- Calculating all bounding box areas using a list comprehension
- Finding the index of areas with the largest value and storing it in i_biggest
- Using this index to extract the biggest (largest-area) rectangle from cars
As the function name cv2.CascadeClassifier.detectMultiScale already suggests, and as the documentation says:
Detects objects of different sizes in the input image
Also from the documentation:
Python: cv2.CascadeClassifier.detectMultiScale(image[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize]]]]]) → objects
minSize – Minimum possible object size. Objects smaller than that are ignored.
So either you filter the list of resulting rectangles by size, or you prevent small detections in the first place by setting the minSize parameter.
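For example (the 100x100 minimum is an assumed value; pick one that fits your images):

# ignore detections smaller than 100x100 pixels
cars = car_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5, minSize=(100, 100))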

Detect patterns and digits in an image with OpenCV and Python

I am trying to create a program that takes an image as input (I capture it with ImageGrab from PIL) and detects some known symbols in it, along with their locations. The good thing is that I am pretty sure I don't need neural networks, because I know the exact shape and size of each symbol. The problem is that I have no idea how many of them there will be, nor what the background color behind each symbol is. Some of the symbols are numbers; I have an image of each digit 0-9, but there may be up to 3-digit numbers. I think I will be able to find a way to tell which digits are part of the same number from their locations, but let's talk about that later. Right now, I have turned the image into grayscale and shown it using cv2.
Do you have any idea how I can do this with OpenCV, or some other library?
I also need it to be fast enough, hopefully 10 frames per second.
This is my current code (modified from sentdex's "Python plays GTA" code, at the very bottom of the page):
import numpy as np
from PIL import ImageGrab
import cv2

def screen_record():
    while True:
        global printscreen
        image = ImageGrab.grab(bbox=(20, 270, 430, 685))
        printscreen = np.array(image)
        grayscale_image = cv2.cvtColor(printscreen, cv2.COLOR_BGR2GRAY)
        cv2.imshow('window', grayscale_image)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break
        if cv2.waitKey(25) & 0xFF == ord('w'):
            image.save("screen_shot.png")
            print("Saved current window as image")

screen_record()
EDIT: I managed to get somewhere with OpenCV's template matching, though only with the digit 2 (for now). I found a nice tutorial here. My problem is when there is not exactly one match of the template, meaning no number 2s, or more than one. When there aren't any, it seems to pick something random in the image, and when there is more than one, only one of them is detected. Is it possible to apply it in a different way to match my needs?
So, I have a solution to my problem.
For all of those who reach this page in the future looking for help, here are the steps to recognize templates in images:
Create 2 images: the one you want to search in, and another one for your template.
Then, load both of them using OpenCV, and copy this function:
def locate_symbol(x, template):
    w, h = template.shape[::-1]
    res = cv2.matchTemplate(x, template, cv2.TM_SQDIFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    min_thresh = 0.45
    match_locations = np.where(res <= min_thresh)
    return w, h, match_locations
and use these lines to draw bounding boxes on the image:
w, h, locs = locate_symbol(grayscale_image, filter_num2)
for (x, y) in zip(locs[1], locs[0]):
    cv2.rectangle(printable_image, (x, y), (x + w, y + h), [255, 0, 0], 2)
Then you can display everything with cv2.imshow().
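Note that np.where returns every pixel location under the threshold, so a single symbol usually produces a cluster of overlapping boxes. A minimal sketch of collapsing nearby matches (this dedup step is my assumption, not part of the original answer):

# keep a match only if it is farther than one template size from every kept match
kept = []
for (x, y) in zip(locs[1], locs[0]):
    if all(abs(x - kx) > w or abs(y - ky) > h for (kx, ky) in kept):
        kept.append((x, y))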

Edge detection and histogram matching of two images using OpenCV Python

I need to do edge detection on medical images using OpenCV Python. Which edge detector will be best suited for my work? I have tried the Canny edge detector. I want to find the edges of the medical images and compute the histogram matching between two images.
Thanks in advance :)
Can you post the images you're working on? That would be better.
Also, you can try this code. It lets you change the parameters of the Canny filter, threshold1 and threshold2, so you will get an overall idea of how to apply the Canny filter to your image.
import cv2
import numpy as np

def nothing(x):
    pass

# image window
cv2.namedWindow('image')

# load image
img = cv2.imread('leo-messi-pic.jpg', 0)  # load your image with the proper path

# create trackbars for the two Canny thresholds
cv2.createTrackbar('th1', 'image', 0, 255, nothing)
cv2.createTrackbar('th2', 'image', 0, 255, nothing)

while True:
    # get current positions of the two trackbars
    th1 = cv2.getTrackbarPos('th1', 'image')
    th2 = cv2.getTrackbarPos('th2', 'image')

    # apply Canny
    edges = cv2.Canny(img, th1, th2)

    # show the image
    cv2.imshow('image', edges)

    # press ESC to stop
    k = cv2.waitKey(1) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
As far as histogram comparison is concerned, you can find all the histogram-related cv2 APIs here:
http://docs.opencv.org/modules/imgproc/doc/histograms.html
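For instance, a minimal sketch of comparing two grayscale histograms (img1 and img2 are assumed to be grayscale arrays; correlation is just one of several cv2.compareHist metrics):

# compute 256-bin grayscale histograms for both images
hist1 = cv2.calcHist([img1], [0], None, [256], [0, 256])
hist2 = cv2.calcHist([img2], [0], None, [256], [0, 256])
cv2.normalize(hist1, hist1)
cv2.normalize(hist2, hist2)

# 1.0 means identical histograms under the correlation metric
similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)
print(similarity)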
Hope it helps.
