Detecting numbers with tesseract - python

I'm trying to use Tesseract to identify the characters 0-9. After receiving very poor results, I built a small test file to check what Tesseract is actually running against.
I took 39 screenshots of images and combined them in Paint into a JPG file. I've also run the same code/approach with PNG and BMP versions, with the same outcome.
Below is the JPG test data and my output from trying to draw the bounding boxes. I expected 39 small green bounding boxes around each of the numbers in question. They appear (mostly) very clear to my eye.
I tried the baseline Tesseract data I get with the pip install and got incredibly poor results. I then downloaded additional English data from the official GitHub, and matching improved dramatically without any further changes; it wasn't great, but it was much better.
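(As a side note, a minimal sketch of how downloaded traineddata can be wired in explicitly via Tesseract's --tessdata-dir flag; the folder path here is an assumption, adjust it to wherever the file was saved:)
import pytesseract

# Hypothetical folder containing the downloaded eng.traineddata
tessdata_dir = r'C:\tessdata_best'
config = rf'--tessdata-dir "{tessdata_dir}" -l eng --psm 6'
# text = pytesseract.image_to_string(img, config=config)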
In one script I take the screenshots, drop the blue and green channels, convert to grayscale, and apply binary thresholding combined with Otsu's method to try to smooth out some of the gaps in the screenshots.
def getTemplate():
    w, h = 1920, 1080
    monitor = {'top': 0, 'left': 0, 'width': w, 'height': h}
    img = Image.frombytes('RGB', (w, h), sct.grab(monitor).rgb)
    haystack = np.array(img)
    return haystack

path_to_tesseract = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
pytesseract.tesseract_cmd = path_to_tesseract

# # get an updated image of the screen & crop it
# template = get_haystack_image()
# haystack = template[137:167, 54:125]
template = getTemplate()
haystack = template[180:215, 55:125]

# set blue and green channels to 0, keeping only the red channel
r = haystack.copy()
r[:, :, 0] = 0
r[:, :, 1] = 0
gray_image = cv.cvtColor(r, cv.COLOR_BGR2GRAY)
thresh = cv.threshold(gray_image, 40, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)[1]
cv.imwrite('testData.jpg', thresh)
And then I read the images, apply the custom config below, draw the boxes, and print the results.
The results from the below are:
jpg:
222419230
222101064272787315267
from pytesseract import pytesseract
from pytesseract import Output
import cv2 as cv
import os

path_to_tesseract = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
pytesseract.tesseract_cmd = path_to_tesseract

# Read Images
jpg = cv.imread('testData.jpg')

# Define a configuration that only whitelists number characters, set oem and specify language
custom_config = r'-l eng --psm 6 --oem 3 -c tessedit_char_whitelist=0123456789'

# Find the numbers in the image
text_jpg = pytesseract.image_to_string(jpg, config=custom_config)

# Print Results
print('jpg:' + '\n' + str(text_jpg))

d = pytesseract.image_to_data(jpg, output_type=Output.DICT)
n_boxes = len(d['level'])
for i in range(n_boxes):
    (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
    cv.rectangle(jpg, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv.imshow('jpg', jpg)
cv.waitKey(0)
cv.imwrite('results.jpg', jpg)
Why are the bounding boxes so overlapped and incorrect, and how can I clean them up to improve my image recognition?
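One detail worth checking in the snippet above (an observation, not a guaranteed fix): image_to_data is called without custom_config, so those boxes come from a default Tesseract run, and image_to_data also returns one box per hierarchy level (page, block, paragraph, line, word), which nest inside each other and therefore overlap by design. A sketch that passes the same config and keeps only confident word-level boxes, reusing the names from the snippet above:

d = pytesseract.image_to_data(jpg, config=custom_config, output_type=Output.DICT)
for i in range(len(d['level'])):
    # level 5 = word; levels 1-4 are page/block/paragraph/line boxes
    # that enclose the words and therefore overlap them
    if d['level'][i] == 5 and float(d['conf'][i]) > 0:
        x, y, w, h = d['left'][i], d['top'][i], d['width'][i], d['height'][i]
        cv.rectangle(jpg, (x, y), (x + w, y + h), (0, 255, 0), 2)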

Related

How to remove noise around numbers using OpenCV

I'm trying to use Tesseract-OCR to get the readings on the images below, but I'm having issues getting consistent results with the spotted background. I have the below configuration on my pytesseract:
CONFIG = f"--psm 6 -c tessedit_char_whitelist=01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄabcdefghijklmnopqrstuvwxyzåäö.,-"
I have also tried the image pre-processing below with some good results, but still not perfect:
blur = cv2.blur(img,(4,4))
(T, threshInv) = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
What I want is to consistently identify the numbers and the decimal separator. What image pre-processing could help in getting consistent results on images like the ones below?
That was a challenge, but I think I have an interesting approach: pattern matching.
If you zoom in, you realize that the pattern in the background only has 4 possible dots: a single full pixel, a double full pixel, and a double pixel with a medium left or right. So what I did was grab these 4 patterns from the image with 17.160.000,00 and got to work. Save these to load again (see the sketch after the snippet below); I just grabbed them on the fly:
img = cv2.imread('C:/Users/***/17.jpg', cv2.IMREAD_GRAYSCALE)
pattern_1 = img[2:5,1:5]
pattern_2 = img[6:9,5:9]
pattern_3 = img[6:9,11:15]
pattern_4 = img[9:12,22:26]
# just to show it carries over to other pics ;)
img = cv2.imread('C:/Users/****/6.jpg', cv2.IMREAD_GRAYSCALE)
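As an aside, persisting the grabbed patterns as suggested above could look like this small sketch (file names are arbitrary):

import numpy as np

# Save the four patterns so they can be reloaded in a later run
for i, pat in enumerate([pattern_1, pattern_2, pattern_3, pattern_4], start=1):
    np.save(f'pattern_{i}.npy', pat)

# Later: pattern_1 = np.load('pattern_1.npy'), etc.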
Actual Pattern Matching
Next we match all the patterns and threshold to find all occurrences. I used 0.7, but you can play around with it a little. These patterns take off some pixels on the side and only match a single pixel on the left, so we pad twice (once with an extra pixel) to hit both for the first 3 patterns. The last one is the single pixel, so it doesn't need it:
res_1 = cv2.matchTemplate(img,pattern_1,cv2.TM_CCOEFF_NORMED )
thresh_1 = cv2.threshold(res_1,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_1 = np.pad(thresh_1,((1,1),(1,2)),'constant')
pat_thresh_15 = np.pad(thresh_1,((1,1),(2,1)), 'constant')
res_2 = cv2.matchTemplate(img,pattern_2,cv2.TM_CCOEFF_NORMED )
thresh_2 = cv2.threshold(res_2,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_2 = np.pad(thresh_2,((1,1),(1,2)),'constant')
pat_thresh_25 = np.pad(thresh_2,((1,1),(2,1)), 'constant')
res_3 = cv2.matchTemplate(img,pattern_3,cv2.TM_CCOEFF_NORMED )
thresh_3 = cv2.threshold(res_3,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_3 = np.pad(thresh_3,((1,1),(1,2)),'constant')
pat_thresh_35 = np.pad(thresh_3,((1,1),(2,1)), 'constant')
res_4 = cv2.matchTemplate(img,pattern_4,cv2.TM_CCOEFF_NORMED )
thresh_4 = cv2.threshold(res_4,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_4 = np.pad(thresh_4,((1,1),(1,2)),'constant')
Editing the Image
Now the only thing left to do is remove all the matches from the image. Since we have a mostly white background, we just set them to 255 to blend in:
img[pat_thresh_1==1] = 255
img[pat_thresh_15==1] = 255
img[pat_thresh_2==1] = 255
img[pat_thresh_25==1] = 255
img[pat_thresh_3==1] = 255
img[pat_thresh_35==1] = 255
img[pat_thresh_4==1] = 255
Output
Edit:
Take a look at Abstract's answer as well for refining this output and for Tesseract fine-tuning.
You may find a solution using a slightly more complex approach by filtering in the frequency domain instead of the spatial domain. The thresholds might require some tweaking depending on how tesseract performs with the output images.
Implementation:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('C:\\Test\\number.jpg', cv2.IMREAD_GRAYSCALE)
# Perform 2D FFT
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20*np.log(np.abs(fshift))
# Squash all of the frequency magnitudes above a threshold
for idx, x in np.ndenumerate(magnitude_spectrum):
    if x > 195:
        fshift[idx] = 0
# Inverse FFT back into the real-spatial-domain
f_ishift = np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift)
img_back = np.real(img_back)
img_back = cv2.normalize(img_back, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
out_img = np.copy(img)
# Use the inverted FFT image to keep only the black values below a threshold
for idx, x in np.ndenumerate(img_back):
    if x < 100:
        out_img[idx] = 0
    else:
        out_img[idx] = 255
plt.subplot(131),plt.imshow(img, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(img_back, cmap = 'gray')
plt.title('Reversed FFT'), plt.xticks([]), plt.yticks([])
plt.subplot(133),plt.imshow(out_img, cmap = 'gray')
plt.title('Output'), plt.xticks([]), plt.yticks([])
plt.show()
Output:
Median Blur Implementation:
import cv2
import numpy as np
img = cv2.imread('C:\\Test\\number.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.medianBlur(img, 3)
for idx, x in np.ndenumerate(blur):
    if x < 20:
        blur[idx] = 0
cv2.imshow("Test", blur)
cv2.waitKey()
Output:
Final Edit:
So using Eumel's solution and appending this bit of code at the bottom of it yields a 100% successful result:
img[pat_thresh_1==1] = 255
img[pat_thresh_15==1] = 255
img[pat_thresh_2==1] = 255
img[pat_thresh_25==1] = 255
img[pat_thresh_3==1] = 255
img[pat_thresh_35==1] = 255
img[pat_thresh_4==1] = 255
# Eumel's code above this line
img = cv2.erode(img, np.ones((3,3)))
cv2.imwrite("out.png", img)
cv2.imshow("Test", img)
print(pytesseract.image_to_string(Image.open("out.png"), lang='eng', config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789.,'))
Output Image Examples:
Whitelisting the Tesseract characters appears to help quite a bit as well, preventing false identifications.

How to improve the accuracy of pytesseract?

I started an OCR project a few days ago. The input image is a really noisy gray image with white letters. With the EAST text detector it is possible to recognize the text and draw borders around it.
After that I crop the rectangles and do some image processing. Then I pass the processed parts to pytesseract, but with bad results. Images and source code are below. Maybe someone has a good idea for better image processing and/or pytesseract settings.
Images
Input image
Rectangles after Recognition
First part
Second part
Third part
Tesseract Result
AY U N74 O54
Source code for image processing
kernel = cv2.getStructuringElement(cv2.MORPH_RECT , (8,8))
kernel2 = np.ones((3,3),np.uint8)
kernel3 = np.ones((5,5),np.uint8)
gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=7, fy=7)
gray = cv2.GaussianBlur(gray, (5,5), 1)
#cv2.medianBlur(gray, 5)
gray = cv2.dilate(gray, kernel3, iterations = 1)
gray = cv2.erode(gray, kernel3, iterations = 1)
gray = cv2.morphologyEx(gray, cv2.MORPH_DILATE, kernel3)
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
gray = cv2.bitwise_not(gray)
ts_img = Image.fromarray(gray)
txt = pytesseract.image_to_string(ts_img, config='--oem 3 --psm 12 -c tessedit_char_whitelist=12345678ABCDEFGHIJKLMNOPQRSTUVWXYZ load_system_dawg=false load_freq_dawg=false')
I tried some other psm settings like psm 11, psm 8 and psm 6. The results are different, but also bad.
I guess the biggest problem is the black spots that are connected to the letters and digits, but I have no idea how to remove them.
I appreciate every help :)
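(As a small aside on the psm experiments mentioned above, a loop makes the comparison less tedious; a sketch, reusing ts_img from the snippet above:)

# Print the output of several page segmentation modes side by side
for psm in (6, 8, 11, 12):
    cfg = f'--oem 3 --psm {psm} -c tessedit_char_whitelist=12345678ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    print(psm, repr(pytesseract.image_to_string(ts_img, config=cfg)))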
OCR software will perform poorly when interpreting this text as a word or sentence because it's expecting real English words, not a random combination of characters. I'd recommend analyzing the text as individual characters.
I solved the (example) problem by first determining which groups of labeled pixels (connected components of a thresholded image) are characters, based on the size and location of each group. Then, for each image portion containing a (single) character, I use easyocr to obtain the character. I found that pytesseract performs poorly or not at all on single characters (even when setting --psm 10 and other arguments). The code below produces this result:
OCR out: 6UAE005X0721295
import cv2
import matplotlib.pyplot as plt
import numpy as np
import easyocr
reader = easyocr.Reader(["en"])
# Threshold image and determine connected components
img_bgr = cv2.imread("C5U3m.png")
img_gray = cv2.cvtColor(img_bgr[35:115, 30:], cv2.COLOR_BGR2GRAY)
ret, img_bin = cv2.threshold(img_gray, 195, 255, cv2.THRESH_BINARY_INV)
retval, labels = cv2.connectedComponents(255 - img_bin, np.zeros_like(img_bin), 8)
fig, axs = plt.subplots(4)
axs[0].imshow(img_gray, cmap="gray")
axs[0].set_title("grayscale")
axs[1].imshow(img_bin, cmap="gray")
axs[1].set_title("thresholded")
axs[2].imshow(labels, vmin=0, vmax=retval - 1, cmap="tab20b")
axs[2].set_title("connected components")
# Find and process individual characters
OCR_out = ""
all_img_chars = np.zeros((labels.shape[0], 0), dtype=np.uint8)
labels_xmin = [np.argwhere(labels == i)[:, 1].min() for i in range(0, retval)]
# Process the labels (connected components) from left to right
for i in np.argsort(labels_xmin):
    label_yx = np.argwhere(labels == i)
    label_ymin = label_yx[:, 0].min()
    label_ymax = label_yx[:, 0].max()
    label_xmin = label_yx[:, 1].min()
    label_xmax = label_yx[:, 1].max()
    # Characters are large blobs that don't border the top/bottom edge
    if label_yx.shape[0] > 250 and label_ymin > 0 and label_ymax < labels.shape[0]:
        img_char = img_bin[:, label_xmin - 3 : label_xmax + 3]
        all_img_chars = np.hstack((all_img_chars, img_char))
        # Use EasyOCR on single char (pytesseract performs poorly on single characters)
        OCR_out += reader.recognize(img_char, detail=0)[0]
axs[3].imshow(all_img_chars, cmap="gray")
axs[3].set_title("individual characters")
fig.show()
print("Truth: 6UAE005X0721295")
print("OCR out: " + OCR_out)

Using opencv to find the most similar image that contains another image

If the title isn't clear let's say I have a list of images (10k+), and I have a target image I am searching for.
Here's an example of the target image:
Here's an example of images I will want to be searching to find something 'similar' (ex1, ex2, and ex3):
Here's the matching I do (I use KAZE)
from matplotlib import pyplot as plt
import numpy as np
import cv2
from typing import List
import os
import imutils
def calculate_matches(des1: List[cv2.KeyPoint], des2: List[cv2.KeyPoint]):
    """
    does a matching algorithm to match if keypoints 1 and 2 are similar
    :param des1: a numpy array of floats that are the descriptors of the keypoints
    :param des2: a numpy array of floats that are the descriptors of the keypoints
    :return:
    """
    # bf matcher with default params
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = bf.knnMatch(des1, des2, k=2)
    topResults = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            topResults.append([m])
    return topResults

def compare_images_kaze():
    cwd = os.getcwd()
    target = os.path.join(cwd, 'opencv_target', 'target.png')
    images_list = os.listdir('opencv_images')
    for image in images_list:
        # get my 2 images
        img2 = cv2.imread(target)
        img1 = cv2.imread(os.path.join(cwd, 'opencv_images', image))
        for i in range(0, 360, int(360 / 8)):
            # rotate my image by i
            img_target_rotation = imutils.rotate_bound(img2, i)
            # Initiate KAZE object with default values
            kaze = cv2.KAZE_create()
            kp1, des1 = kaze.detectAndCompute(img1, None)
            kp2, des2 = kaze.detectAndCompute(img2, None)
            matches = calculate_matches(des1, des2)
            try:
                score = 100 * (len(matches) / min(len(kp1), len(kp2)))
            except ZeroDivisionError:
                score = 0
            print(image, score)
            img3 = cv2.drawMatchesKnn(img1, kp1, img_target_rotation, kp2, matches,
                                      None, flags=2)
            img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB)
            plt.imshow(img3)
            plt.show()
            plt.clf()

if __name__ == '__main__':
    compare_images_kaze()
Here's the result of my code:
ex1.png 21.052631578947366
ex2.png 0.0
ex3.png 42.10526315789473
It does alright! It was able to tell that ex1 is similar and ex2 is not. However, it states that ex3 is similar (even more similar than ex1). Is there any extra pre-processing or post-processing (maybe ML, assuming ML is actually useful), or just a change to my method, that would keep only ex1 as similar and not ex3?
(Note this score I create is something I found online. Not sure if it's an accurate way to go about it)
ADDED MORE EXAMPLES BELOW
Another set of examples:
Here's what I am searching for
I want the above image to be similar to the middle and bottom images. (NOTE: I rotate my target image by 45 degrees and compare it to the images below.)
Feature matching (as suggested in the answers below) was useful in finding similarity with the second image, but not with the third image (even after rotating it properly).
Detecting The Most Similar Image
The Code
You can use template matching, where the image you want to detect in the other images is the template. I have that small image saved in template.png, and the other three images in img1.png, img2.png and img3.png.
I defined a function that utilizes cv2.matchTemplate to calculate the confidence that a template is in an image. Using the function on every image, the one that results in the highest confidence is the image that contains the template:
import cv2

template = cv2.imread("template.png", 0)
files = ["img1.png", "img2.png", "img3.png"]

for name in files:
    img = cv2.imread(name, 0)
    print(f"Confidence for {name}:")
    print(cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED).max())
The Output:
Confidence for img1.png:
0.8906427
Confidence for img2.png:
0.4427919
Confidence for img3.png:
0.5933967
The Explanation:
Import the opencv module, and read in the template image as grayscale by setting the second parameter of the cv2.imread method to 0:
import cv2
template = cv2.imread("template.png", 0)
Define your list of images of which you want to determine which one contains the template:
files = ["img1.png", "img2.png", "img3.png"]
Loop through the filenames and read in each one as a grayscale image:
for name in files:
    img = cv2.imread(name, 0)
Finally, you can use the cv2.matchTemplate to detect the template in each image. There are many detection methods you can use, but for this I decided to use the cv2.TM_CCOEFF_NORMED method:
    print(f"Confidence for {name}:")
    print(cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED).max())
The output of cv2.TM_CCOEFF_NORMED ranges from -1 to 1, with values near 1 indicating a strong match, and as you can see, it successfully detected that the first image is most likely to contain the template image (it has the highest level of confidence).
The Visualization
The Code
If detecting which image contains the template isn't enough, and you want a visualization, you can try the code below:
import cv2
import numpy as np

def confidence(img, template):
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    conf = res.max()
    return np.where(res == conf), conf

files = ["img1.png", "img2.png", "img3.png"]
template = cv2.imread("template.png")
h, w, _ = template.shape

for name in files:
    img = cv2.imread(name)
    ([y], [x]), conf = confidence(img, template)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    text = f'Confidence: {round(float(conf), 2)}'
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 0), 2)
    cv2.imshow(name, img)

cv2.imshow('Template', template)
cv2.waitKey(0)
The Output:
The Explanation:
Import the necessary libraries:
import cv2
import numpy as np
Define a function that will take in a full image and a template image. As the cv2.matchTemplate method requires grayscale images, convert the 2 images into grayscale:
def confidence(img, template):
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
Use the cv2.matchTemplate method to detect the template in the image, and return the position of the point with the highest confidence, and return the highest confidence:
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    conf = res.max()
    return np.where(res == conf), conf
Define your list of images you want to determine which one contains the template, and read in the template image:
files = ["img1.png", "img2.png", "img3.png"]
template = cv2.imread("template.png")
Get the size of the template image to later use for drawing a rectangle on the images:
h, w, _ = template.shape
Loop though the filenames and read in each image. Using the confidence function we defined before, get the x y position of the top-left corner of the detected template and the confidence amount for the detection:
for name in files:
    img = cv2.imread(name)
    ([y], [x]), conf = confidence(img, template)
Draw a rectangle on the image at the corner and put the text on the image. Finally, show the image:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    text = f'Confidence: {round(float(conf), 2)}'
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 0), 2)
    cv2.imshow(name, img)
Also, show the template for comparison:
cv2.imshow('Template', template)
cv2.waitKey(0)
I'm not sure if the given images resemble your actual task or data, but for this kind of image, you could try simple template matching, cf. this OpenCV tutorial.
Basically, I just implemented the tutorial with some modifications:
import cv2
import matplotlib.pyplot as plt

# Read images
examples = [cv2.imread(img) for img in ['ex1.png', 'ex2.png', 'ex3.png']]
target = cv2.imread('target.png')
h, w = target.shape[:2]

# Iterate examples
for i, img in enumerate(examples):

    # Template matching
    # cf. https://docs.opencv.org/4.5.2/d4/dc6/tutorial_py_template_matching.html
    res = cv2.matchTemplate(img, target, cv2.TM_CCOEFF_NORMED)

    # Get location of maximum
    _, max_val, _, top_left = cv2.minMaxLoc(res)

    # Set up threshold for decision target found or not
    thr = 0.7
    if max_val > thr:

        # Show found target in example
        bottom_right = (top_left[0] + w, top_left[1] + h)
        cv2.rectangle(img, top_left, bottom_right, (0, 255, 0), 2)

    # Visualization
    plt.figure(i, figsize=(10, 5))
    plt.subplot(1, 2, 1), plt.imshow(img[..., [2, 1, 0]]), plt.title('Example')
    plt.subplot(1, 2, 2), plt.imshow(res, vmin=0, vmax=1, cmap='gray')
    plt.title('Matching result'), plt.colorbar(), plt.tight_layout()
    plt.show()
These are the results:
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1.1
Matplotlib: 3.4.1
OpenCV: 4.5.1
----------------------------------------
EDIT: To emphasize the information from the different colors, one might use the hue channel from the HSV color space for the template matching:
import cv2
import matplotlib.pyplot as plt

# Read images
examples = [
    [cv2.imread(img) for img in ['ex1.png', 'ex2.png', 'ex3.png']],
    [cv2.imread(img) for img in ['ex12.png', 'ex22.png', 'ex32.png']]
]
targets = [
    cv2.imread('target.png'),
    cv2.imread('target2.png')
]

# Iterate examples and targets
for i, (ex, target) in enumerate(zip(examples, targets)):
    for j, img in enumerate(ex):

        # Rotate last image from second data set
        if (i == 1) and (j == 2):
            img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)

        h, w = target.shape[:2]

        # Get hue channel from HSV color space
        target_h = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)[..., 0]
        img_h = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[..., 0]

        # Template matching
        # cf. https://docs.opencv.org/4.5.2/d4/dc6/tutorial_py_template_matching.html
        res = cv2.matchTemplate(img_h, target_h, cv2.TM_CCOEFF_NORMED)

        # Get location of maximum
        _, max_val, _, top_left = cv2.minMaxLoc(res)

        # Set up threshold for decision target found or not
        thr = 0.6
        if max_val > thr:

            # Show found target in example
            bottom_right = (top_left[0] + w, top_left[1] + h)
            cv2.rectangle(img, top_left, bottom_right, (0, 255, 0), 2)

        # Visualization
        plt.figure(i * 10 + j, figsize=(10, 5))
        plt.subplot(1, 2, 1), plt.imshow(img[..., [2, 1, 0]]), plt.title('Example')
        plt.subplot(1, 2, 2), plt.imshow(res, vmin=0, vmax=1, cmap='gray')
        plt.title('Matching result'), plt.colorbar(), plt.tight_layout()
        plt.savefig('{}.png'.format(i * 10 + j))
        plt.show()
New results:
The Concept
We can use the cv2.matchTemplate method to detect where an image is in another image, but for your second set of images you have rotation. Also, we'll need to take the colors into account.
cv2.matchTemplate will take in an image, a template (the other image) and a template detection method, and will return an array where the brightest point marks the location with the highest confidence that the template is at that point.
We can use the template at 4 different angles and use the one that results in the highest confidence. When we detect a possible point that matches the template, we use a function (that we will define ourselves) to check if the most frequent colors in the template are present in the patch of the image we detected. If not, then we ignore the patch, regardless of the amount of confidence returned.
The Code
import cv2
import numpy as np

def frequent_colors(img, vals=3):
    colors, count = np.unique(np.vstack(img), return_counts=True, axis=0)
    sorted_by_freq = colors[np.argsort(count)]
    return sorted_by_freq[-vals:]

def get_templates(img):
    template = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    yield template  # original orientation
    for i in range(3):
        yield cv2.rotate(template, i)

def detect(img, template, min_conf=0.45):
    colors = frequent_colors(template)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    conf_max = min_conf
    shape = 0, 0, 0, 0
    for tmp in get_templates(template):
        h, w = tmp.shape
        res = cv2.matchTemplate(img_gray, tmp, cv2.TM_CCOEFF_NORMED)
        for y, x in zip(*np.where(res > conf_max)):
            conf = res[y, x]
            if conf > conf_max:
                seg = img[y:y + h, x:x + w]
                if all(np.any(np.all(seg == color, -1)) for color in colors):
                    conf_max = conf
                    shape = x, y, w, h
    return shape

files = ["img1_2.png", "img2_2.png", "img3_2.png"]
template = cv2.imread("template2.png")

for name in files:
    img = cv2.imread(name)
    x, y, w, h = detect(img, template)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow(name, img)

cv2.imshow('Template', template)
cv2.waitKey(0)
The Output
The Explanation
Import the necessary libraries:
import cv2
import numpy as np
Define a function, frequent_colors, that will take in an image and return the most frequent colors in the image. An optional parameter, vals, is how many colors to return; if vals is 3, then the 3 most frequent colors will be returned:
def frequent_colors(img, vals=3):
    colors, count = np.unique(np.vstack(img), return_counts=True, axis=0)
    sorted_by_freq = colors[np.argsort(count)]
    return sorted_by_freq[-vals:]
Define a function, get_templates, that will take in an image and yield the image (in grayscale) at 4 different angles - original, 90 clockwise, 180, and 90 counterclockwise:
def get_templates(img):
    template = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    yield template  # original orientation
    for i in range(3):
        yield cv2.rotate(template, i)
Define a function, detect, that will take in an image and a template image, and return the x, y, w, h of the bounding box of the detected template on the image, and for this function we will be utilizing the frequent_colors and get_templates functions defined earlier. The min_conf parameter will be the minimum amount of confidence needed to classify a detection as an actual detection:
def detect(img, template, min_conf=0.45):
Detect the three most frequent colors in the template and store them in a variable, colors. Also, define a grayscale version of the main image:
    colors = frequent_colors(template)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Define the initial value for the greatest confidence detected, and initial values for the detected patch:
    conf_max = min_conf
    shape = 0, 0, 0, 0
Loop through the grayscale templates at the 4 angles, get the shape of each template (as rotation changes the shape), and use the cv2.matchTemplate method to get the grayscale array of detected templates on the image:
    for tmp in get_templates(template):
        h, w = tmp.shape
        res = cv2.matchTemplate(img_gray, tmp, cv2.TM_CCOEFF_NORMED)
Loop through the x, y coordinates of the detected templates where the confidence is greater than conf_max (which starts at min_conf), and store the confidence in a variable, conf. If conf is greater than the greatest confidence seen so far (conf_max), proceed to check whether all three most frequent colors in the template are present in the patch of the image:
        for y, x in zip(*np.where(res > conf_max)):
            conf = res[y, x]
            if conf > conf_max:
                seg = img[y:y + h, x:x + w]
                if all(np.any(np.all(seg == color, -1)) for color in colors):
                    conf_max = conf
                    shape = x, y, w, h
At the end we can return the shape. If no template is detected in the image, the shape will be the initial values defined for it, 0, 0, 0, 0:
    return shape
Finally, loop through each image and use the detect function we defined to get the x, y, w, h of the bounding box. Use the cv2.rectangle method to draw the bounding box onto the images:
files = ["img1_2.png", "img2_2.png", "img3_2.png"]
template = cv2.imread("template2.png")

for name in files:
    img = cv2.imread(name)
    x, y, w, h = detect(img, template)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow(name, img)

cv2.imshow('Template', template)
cv2.waitKey(0)
First, since the data appears in graphs, aren't you able to get the overlapping values from their numerical data?
And have you tried performing some edge detection for the change in color from white to blue and then from blue to red, fitting some circles to those edges, and then checking if they overlap?
Since the input data is quite controlled (no organic photography or video), perhaps you won't have to go the ML route.
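As a rough illustration of that circle-fitting idea (a sketch only; the input filename and all Hough parameters are assumptions that would need tuning on the real charts):

import cv2
import numpy as np

img = cv2.imread('chart.png')  # hypothetical chart image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# HoughCircles runs an internal Canny edge pass (param1 is its upper threshold),
# so the white-blue and blue-red transitions become candidate circle edges.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=30, minRadius=5, maxRadius=100)
if circles is not None:
    circles = np.round(circles[0]).astype(int)
    # Two circles overlap when their center distance is below the sum of radii.
    for i, (x1, y1, r1) in enumerate(circles):
        for (x2, y2, r2) in circles[i + 1:]:
            if np.hypot(x2 - x1, y2 - y1) < r1 + r2:
                print('overlapping circles at', (x1, y1), 'and', (x2, y2))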

How can I randomly add 20 images(10x10) in an empty background image (200x200) in python?

I want the 10 small images to be placed in this circle
I'm working on a small project to randomly place several small images (10 px wide x 10 px high) onto another image that will be used as a background (200 x 200 px) in Python. The small images should be placed at random locations in the background image.
I have 20 small images of size 10x10 and one empty background image of size 200x200. I want to put my 20 small images into the empty background image at random locations in the background.
Is there a way to do it in Python?
Code
# Dependencies import
import cv2
# Saving directory
saving_dir = "../Saved_Images/"
# Read the background image
bgimg = cv2.imread("../Images/background.jpg")
# Resizing the background image
bgimg_resized = cv2.resize(bgimg, (2050,2050))
# Read the image that will be put in the background image (example of 1)
small_img = cv2.imread("../Images/small.jpg")
# Convert the background image to gray
bgimg_gray = cv2.cvtColor(bgimg, cv2.COLOR_BGR2GRAY)
# Convert the grayscale image to a binary image
ret, thresh = cv2.threshold(bgimg_gray,127,255,0)
# Determine the moments of the binary image
M = cv2.moments(thresh)
# calculate x,y coordinate of center
cX = int(M["m10"] / M["m00"])
cY = int(M["m01"] / M["m00"])
# drawing the circle in the background image
circle = cv2.circle(bgimg, (cX, cY), 930, (0,0,255), 9)
print(circle)
# Saving the new image
cv2.imwrite(saving_dir+"bgimg"+".jpg", bgimg)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.resizeWindow('image', 1000, 1200)
# Showing the images
cv2.imshow("image", bgimg)
# Waiting for any key to stop the program execution
cv2.waitKey(0)
The above code is for one image; I want to do it for all 20 and put them at random locations.
Assuming you have that background image background.jpg (decreased to 200x200 px) and 10 images: image01.png, image02.png ... image10.png (10x10 px), then:
import glob
import random
from PIL import Image

img_bg = Image.open('circle.jpg')
width, height = img_bg.size

images = glob.glob('*.png')
for img in images:
    img = Image.open(img)
    x = random.randint(40, width - 40)
    y = random.randint(40, height - 40)
    img_bg.paste(img, (x, y, x + 10, y + 10))

img_bg.save('result.png', 'PNG')
Output image:
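If the placements also have to fall inside the circle from the question, one hedged variation is to sample polar coordinates around the circle's center instead of uniform x/y. The center and radius below are assumptions for a 200x200 background; the cX, cY computed in the question's code could be used instead:

import glob
import math
import random
from PIL import Image

img_bg = Image.open('circle.jpg')
cx, cy, radius = 100, 100, 80  # assumed circle center and radius

for path in glob.glob('*.png'):
    tile = Image.open(path)
    # Uniform sampling inside a circle: sqrt keeps the density even,
    # and shrinking the radius keeps the whole 10x10 tile inside.
    angle = random.uniform(0, 2 * math.pi)
    r = (radius - 10) * math.sqrt(random.random())
    x, y = int(cx + r * math.cos(angle)), int(cy + r * math.sin(angle))
    img_bg.paste(tile, (x, y, x + 10, y + 10))

img_bg.save('result_in_circle.png', 'PNG')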

Detect face then autocrop pictures

I am trying to find an app that can detect faces in my pictures, center the detected face, and crop 720 x 720 pixels of the picture. It is very time-consuming and meticulous to edit the hundreds of pictures I plan to do this to.
I have tried doing this using the python opencv approach mentioned here, but I think it is outdated. I've also tried using this, but it's also giving me an error on my system. I also tried a face detection plugin for GIMP, but it is designed for GIMP 2.6 while I use 2.8 on a regular basis. I also tried doing what was posted at the ultrahigh blog, but it is very outdated (since I'm using a Precise derivative of Ubuntu, while the blogpost was made way back when it was still Hardy). I also tried Phatch, but there is no face detection, so some cropped pictures have their face cut right off.
I have tried all of the above and wasted half a day trying to make any of them do what I needed.
Do you have any suggestions for achieving this for the roughly 800 pictures I have?
My operating system is Linux Mint 13 MATE.
Note: I was going to add 2 more links but stackexchange prevented me to post two more links as I don't have much reputation yet.
I have managed to grab bits of code from various sources and stitch this together. It is still a work in progress. Also, do you have any example images?
'''
Sources:
http://pythonpath.wordpress.com/2012/05/08/pil-to-opencv-image/
http://www.lucaamore.com/?p=638
'''
#Python 2.7.2
#Opencv 2.4.2
#PIL 1.1.7
import cv
import Image
def DetectFace(image, faceCascade):
    #modified from: http://www.lucaamore.com/?p=638

    min_size = (20,20)
    image_scale = 1
    haar_scale = 1.1
    min_neighbors = 3
    haar_flags = 0

    # Allocate the temporary images
    smallImage = cv.CreateImage(
            (
                cv.Round(image.width / image_scale),
                cv.Round(image.height / image_scale)
            ), 8, 1)

    # Scale input image for faster processing
    cv.Resize(image, smallImage, cv.CV_INTER_LINEAR)

    # Equalize the histogram
    cv.EqualizeHist(smallImage, smallImage)

    # Detect the faces
    faces = cv.HaarDetectObjects(
            smallImage, faceCascade, cv.CreateMemStorage(0),
            haar_scale, min_neighbors, haar_flags, min_size
        )

    # If faces are found
    if faces:
        for ((x, y, w, h), n) in faces:
            # the input to cv.HaarDetectObjects was resized, so scale the
            # bounding box of each face and convert it to two CvPoints
            pt1 = (int(x * image_scale), int(y * image_scale))
            pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
            cv.Rectangle(image, pt1, pt2, cv.RGB(255, 0, 0), 5, 8, 0)

    return image

def pil2cvGrey(pil_im):
    #from: http://pythonpath.wordpress.com/2012/05/08/pil-to-opencv-image/
    pil_im = pil_im.convert('L')
    cv_im = cv.CreateImageHeader(pil_im.size, cv.IPL_DEPTH_8U, 1)
    cv.SetData(cv_im, pil_im.tostring(), pil_im.size[0])
    return cv_im

def cv2pil(cv_im):
    return Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())

pil_im = Image.open('testPics/faces.jpg')
cv_im = pil2cvGrey(pil_im)
#the haarcascade files tell opencv what to look for.
faceCascade = cv.Load('C:/Python27/Lib/site-packages/opencv/haarcascade_frontalface_default.xml')
face = DetectFace(cv_im, faceCascade)
img = cv2pil(face)
img.show()
Testing on the first page of Google (Googled "faces"):
Update
This code should do exactly what you want. Let me know if you have questions. I tried to include lots of comments in the code:
'''
Sources:
http://opencv.willowgarage.com/documentation/python/cookbook.html
http://www.lucaamore.com/?p=638
'''
#Python 2.7.2
#Opencv 2.4.2
#PIL 1.1.7
import cv #Opencv
import Image #Image from PIL
import glob
import os
def DetectFace(image, faceCascade, returnImage=False):
    # This function takes a grey scale cv image and finds
    # the patterns defined in the haarcascade function
    # modified from: http://www.lucaamore.com/?p=638

    #variables
    min_size = (20,20)
    haar_scale = 1.1
    min_neighbors = 3
    haar_flags = 0

    # Equalize the histogram
    cv.EqualizeHist(image, image)

    # Detect the faces
    faces = cv.HaarDetectObjects(
            image, faceCascade, cv.CreateMemStorage(0),
            haar_scale, min_neighbors, haar_flags, min_size
        )

    # If faces are found
    if faces and returnImage:
        for ((x, y, w, h), n) in faces:
            # Convert bounding box to two CvPoints
            pt1 = (int(x), int(y))
            pt2 = (int(x + w), int(y + h))
            cv.Rectangle(image, pt1, pt2, cv.RGB(255, 0, 0), 5, 8, 0)

    if returnImage:
        return image
    else:
        return faces

def pil2cvGrey(pil_im):
    # Convert a PIL image to a greyscale cv image
    # from: http://pythonpath.wordpress.com/2012/05/08/pil-to-opencv-image/
    pil_im = pil_im.convert('L')
    cv_im = cv.CreateImageHeader(pil_im.size, cv.IPL_DEPTH_8U, 1)
    cv.SetData(cv_im, pil_im.tostring(), pil_im.size[0])
    return cv_im

def cv2pil(cv_im):
    # Convert the cv image to a PIL image
    return Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())

def imgCrop(image, cropBox, boxScale=1):
    # Crop a PIL image with the provided box [x(left), y(upper), w(width), h(height)]

    # Calculate scale factors
    xDelta = max(cropBox[2] * (boxScale - 1), 0)
    yDelta = max(cropBox[3] * (boxScale - 1), 0)

    # Convert cv box to PIL box [left, upper, right, lower]
    PIL_box = [cropBox[0] - xDelta, cropBox[1] - yDelta, cropBox[0] + cropBox[2] + xDelta, cropBox[1] + cropBox[3] + yDelta]

    return image.crop(PIL_box)

def faceCrop(imagePattern, boxScale=1):
    # Select one of the haarcascade files:
    #   haarcascade_frontalface_alt.xml  <-- Best one?
    #   haarcascade_frontalface_alt2.xml
    #   haarcascade_frontalface_alt_tree.xml
    #   haarcascade_frontalface_default.xml
    #   haarcascade_profileface.xml
    faceCascade = cv.Load('haarcascade_frontalface_alt.xml')

    imgList = glob.glob(imagePattern)
    if len(imgList) <= 0:
        print 'No Images Found'
        return

    for img in imgList:
        pil_im = Image.open(img)
        cv_im = pil2cvGrey(pil_im)
        faces = DetectFace(cv_im, faceCascade)
        if faces:
            n = 1
            for face in faces:
                croppedImage = imgCrop(pil_im, face[0], boxScale=boxScale)
                fname, ext = os.path.splitext(img)
                croppedImage.save(fname + '_crop' + str(n) + ext)
                n += 1
        else:
            print 'No faces found:', img

def test(imageFilePath):
    pil_im = Image.open(imageFilePath)
    cv_im = pil2cvGrey(pil_im)
    # Select one of the haarcascade files:
    #   haarcascade_frontalface_alt.xml  <-- Best one?
    #   haarcascade_frontalface_alt2.xml
    #   haarcascade_frontalface_alt_tree.xml
    #   haarcascade_frontalface_default.xml
    #   haarcascade_profileface.xml
    faceCascade = cv.Load('haarcascade_frontalface_alt.xml')
    face_im = DetectFace(cv_im, faceCascade, returnImage=True)
    img = cv2pil(face_im)
    img.show()
    img.save('test.png')

# Test the algorithm on an image
#test('testPics/faces.jpg')

# Crop all jpegs in a folder. Note: the code uses glob which follows unix shell rules.
# Use the boxScale to scale the cropping area. 1=opencv box, 2=2x the width and height
faceCrop('testPics/*.jpg', boxScale=1)
Using the image above, this code extracts 52 out of the 59 faces, producing cropped files such as:
Another available option is dlib, which is based on machine learning approaches.
import dlib
from PIL import Image
from skimage import io
import matplotlib.pyplot as plt

def detect_faces(image):
    # Create a face detector
    face_detector = dlib.get_frontal_face_detector()
    # Run detector and get bounding boxes of the faces on image.
    detected_faces = face_detector(image, 1)
    face_frames = [(x.left(), x.top(),
                    x.right(), x.bottom()) for x in detected_faces]
    return face_frames

# Load image
img_path = 'test.jpg'
image = io.imread(img_path)

# Detect faces
detected_faces = detect_faces(image)

# Crop faces and plot
for n, face_rect in enumerate(detected_faces):
    face = Image.fromarray(image).crop(face_rect)
    plt.subplot(1, len(detected_faces), n + 1)
    plt.axis('off')
    plt.imshow(face)
facedetect OpenCV CLI wrapper written in Python
https://github.com/wavexx/facedetect is a nice Python OpenCV CLI wrapper, and I have added the following example to their README.
Installation:
sudo apt install python3-opencv opencv-data imagemagick
git clone https://gitlab.com/wavexx/facedetect
git -C facedetect checkout 5f9b9121001bce20f7d87537ff506fcc90df48ca
Get my test image:
mkdir -p pictures
wget -O pictures/test.jpg https://raw.githubusercontent.com/cirosantilli/media/master/Ciro_Santilli_with_a_stone_carved_Budai_in_the_Feilai_Feng_caves_near_the_Lingyin_Temple_in_Hangzhou_in_2012.jpg
Usage:
mkdir -p faces
for file in pictures/*.jpg; do
  name=$(basename "$file")
  i=0
  facedetect/facedetect --data-dir /usr/share/opencv4 "$file" |
    while read x y w h; do
      convert "$file" -crop ${w}x${h}+${x}+${y} "faces/${name%.*}_${i}.${name##*.}"
      i=$(($i+1))
    done
done
If you don't pass --data-dir on this system, it fails with:
facedetect: error: cannot load HAAR_FRONTALFACE_ALT2 from /usr/share/opencv/haarcascades/haarcascade_frontalface_alt2.xml
and the file it is looking for is likely at /usr/share/opencv4/haarcascades on the system.
After running it, the file:
faces/test_0.jpg
contains:
which was extracted from the original image pictures/test.jpg:
Budai was not recognized :-( If it had been, it would appear under faces/test_1.jpg, but that file does not exist.
Let's try another one with faces partially turned: https://raw.githubusercontent.com/cirosantilli/media/master/Ciro_Santilli_with_his_mother_in_law_during_his_wedding_in_2017.jpg
Hmmm, no hits; the faces are not clear enough for the software.
Tested on Ubuntu 20.10, OpenCV 4.2.0.
Autocrop worked out pretty well for me.
It is as easy as autocrop -i pics -o crop -w 400 -H 400.
You can get the usage from their readme file:
usage: autocrop [-h] [-i INPUT] [-o OUTPUT] [-r REJECT] [-w WIDTH] [-H HEIGHT]
                [-v] [--no-confirm] [--facePercent FACEPERCENT] [-e EXTENSION]

Automatically crops faces from batches of pictures

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Folder where images to crop are located. Default:
                        current working directory
  -o OUTPUT, --output OUTPUT, -p OUTPUT, --path OUTPUT
                        Folder where cropped images will be moved to. Default:
                        current working directory, meaning images are cropped
                        in place.
  -r REJECT, --reject REJECT
                        Folder where images that could not be cropped will be
                        moved to. Default: current working directory, meaning
                        images that are not cropped will be left in place.
  -w WIDTH, --width WIDTH
                        Width of cropped files in px. Default=500
  -H HEIGHT, --height HEIGHT
                        Height of cropped files in px. Default=500
  -v, --version         show program's version number and exit
  --no-confirm          Bypass any confirmation prompts
  --facePercent FACEPERCENT
                        Percentage of face to image height
  -e EXTENSION, --extension EXTENSION
                        Enter the image extension which to save at output
This sounds like it might be a better question for one of the more (computer) technology focused exchanges.
That said, have you looked into something like this jquery face detection script? I don't know how savvy you are, but it is one option that is OS independent.
This solution also looks promising, but would require Windows.
The above code works, but here is a more recent implementation using OpenCV.
I was unable to run the above with the latest versions, so I found something that works (assembled from various places):
import cv2
import os

def facecrop(image):
    facedata = "haarcascade_frontalface_alt.xml"
    cascade = cv2.CascadeClassifier(facedata)

    img = cv2.imread(image)

    minisize = (img.shape[1], img.shape[0])
    miniframe = cv2.resize(img, minisize)

    faces = cascade.detectMultiScale(miniframe)

    for f in faces:
        x, y, w, h = [v for v in f]
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 255))

        sub_face = img[y:y + h, x:x + w]
        fname, ext = os.path.splitext(image)
        cv2.imwrite(fname + "_cropped_" + ext, sub_face)

    return

facecrop("1.jpg")
Just adding to @Israel Abebe's version: if you add a counter before the image extension, the algorithm will save all the faces detected. Attaching the code, same as Israel Abebe's, just adding a counter and accepting the cascade file as an argument. The algorithm works beautifully! Thanks @Israel Abebe for this!
import cv2
import os
import sys

def facecrop(image):
    facedata = sys.argv[1]
    cascade = cv2.CascadeClassifier(facedata)

    img = cv2.imread(image)

    minisize = (img.shape[1], img.shape[0])
    miniframe = cv2.resize(img, minisize)

    faces = cascade.detectMultiScale(miniframe)

    counter = 0
    for f in faces:
        x, y, w, h = [v for v in f]
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 255))

        sub_face = img[y:y + h, x:x + w]
        fname, ext = os.path.splitext(image)
        cv2.imwrite(fname + "_cropped_" + str(counter) + ext, sub_face)
        counter += 1

    return

facecrop("Face_detect_1.jpg")
PS: Adding this as an answer; I was not able to comment because of the reputation requirement.
Detect a face, then crop and save the cropped image into a folder:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

face_cascade = cv.CascadeClassifier('./haarcascade_frontalface_default.xml')
#eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')

img = cv.imread('./face/nancy-Copy1.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)

for (x, y, w, h) in faces:
    cv.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    #eyes = eye_cascade.detectMultiScale(roi_gray)
    #for (ex, ey, ew, eh) in eyes:
    #    cv.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
    sub_face = img[y:y + h, x:x + w]
    face_file_name = "face/" + str(y) + ".jpg"
    plt.imsave(face_file_name, sub_face)

plt.imshow(sub_face)
I have developed an application, "Face-Recognition-with-Own-Data-Set", using the Python packages 'face_recognition' and 'opencv-python'.
The source code and installation guide are on GitHub - Face-Recognition-with-Own-Data-Set.
Or run the source:
import face_recognition
import cv2
import numpy as np
import os

'''
Get current working directory and create a Data directory to store the faces
'''
currentDirectory = os.getcwd()
dirName = os.path.join(currentDirectory, 'Data')
print(dirName)
if not os.path.exists(dirName):
    try:
        os.makedirs(dirName)
    except:
        raise OSError("Can't create destination directory (%s)!" % (dirName))

'''
For the given path, get the List of all files in the directory tree
'''
def getListOfFiles(dirName):
    # create a list of file and sub directories
    # names in the given directory
    listOfFile = os.listdir(dirName)
    allFiles = list()
    # Iterate over all the entries
    for entry in listOfFile:
        # Create full path
        fullPath = os.path.join(dirName, entry)
        # If entry is a directory then get the list of files in this directory
        if os.path.isdir(fullPath):
            allFiles = allFiles + getListOfFiles(fullPath)
        else:
            allFiles.append(fullPath)
    return allFiles

def knownFaceEncoding(listOfFiles):
    known_face_encodings = list()
    known_face_names = list()
    for file_name in listOfFiles:
        # print(file_name)
        if file_name.lower().endswith(('.png', '.jpg', '.jpeg')):
            known_image = face_recognition.load_image_file(file_name)
            # known_face_locations = face_recognition.face_locations(known_image)
            # known_face_encoding = face_recognition.face_encodings(known_image, known_face_locations)
            face_encods = face_recognition.face_encodings(known_image)
            if face_encods:
                known_face_encoding = face_encods[0]
                known_face_encodings.append(known_face_encoding)
                known_face_names.append(os.path.basename(file_name[0:-4]))
    return known_face_encodings, known_face_names

# Get the list of all files in directory tree at given path
listOfFiles = getListOfFiles(dirName)
known_face_encodings, known_face_names = knownFaceEncoding(listOfFiles)

video_capture = cv2.VideoCapture(0)

cv2.namedWindow("Video", flags=cv2.WINDOW_NORMAL)
# cv2.namedWindow("Video")
cv2.resizeWindow('Video', 1024, 640)
cv2.moveWindow('Video', 20, 20)

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    # print(ret)

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    k = cv2.waitKey(1)
    # Hit 'c' to capture the image!
    # Hit 'q' on the keyboard to quit!
    if k == ord('q'):
        break
    elif k == ord('c'):
        face_loc = face_recognition.face_locations(rgb_small_frame)
        if face_loc:
            print("Enter Name -")
            name = input()
            img_name = "{}/{}.png".format(dirName, name)
            (top, right, bottom, left) = face_loc[0]
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4
            cv2.imwrite(img_name, frame[top - 5:bottom + 5, left - 5:right + 5])
            listOfFiles = getListOfFiles(dirName)
            known_face_encodings, known_face_names = knownFaceEncoding(listOfFiles)

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        # print(face_locations)

        face_names = []
        for face_encoding, face_location in zip(face_encodings, face_locations):
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding, tolerance=0.55)
            name = "Unknown"
            distance = 0

            # use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            # print(face_distances)
            if len(face_distances) > 0:
                best_match_index = np.argmin(face_distances)
                if matches[best_match_index]:
                    name = known_face_names[best_match_index]
                    # distance = face_distances[best_match_index]
                    # print(face_distances[best_match_index])
                    # string_value = '{} {:.3f}'.format(name, distance)
            face_names.append(name)

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom + 46), (right, bottom + 11), (0, 0, 155), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom + 40), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
It will create a 'Data' directory in the current location if it does not already exist.
When a face is marked with a rectangle, press 'c' to capture the image; the command prompt will then ask for a name for the face. Enter the name, and you can find the image in the 'Data' directory.
I used this shell command:
for f in *.jpg;do PYTHONPATH=/usr/local/lib/python2.7/site-packages python -c 'import cv2;import sys;rects=cv2.CascadeClassifier("/usr/local/opt/opencv/share/OpenCV/haarcascades/haarcascade_frontalface_default.xml").detectMultiScale(cv2.cvtColor(cv2.imread(sys.argv[1]),cv2.COLOR_BGR2GRAY),1.3,5);print("\n".join([" ".join([str(item) for item in row])for row in rects]))' $f|while read x y w h;do convert $f -gravity NorthWest -crop ${w}x$h+$x+$y ${f%jpg}-$x-$y.png;done;done
You can install opencv and imagemagick on OS X with brew install opencv imagemagick.
I think the best option is the Google Vision API.
It's kept up to date, it uses machine learning, and it improves over time.
You can check the documentation for examples:
https://cloud.google.com/vision/docs/other-features
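A minimal sketch of a face-detection call with the google-cloud-vision client library (assuming the package is installed and credentials are configured; treat it as illustrative rather than a drop-in solution):

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open('test.jpg', 'rb') as f:  # hypothetical input image
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # bounding_poly holds the corners of the detected face region
    print([(v.x, v.y) for v in face.bounding_poly.vertices])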
