Reading license plate from image using OpenCV Python and Tesseract

I have a problem: I have separate photos of license plates, and I would like to extract the registration number from each photo. Unfortunately, the accuracy of the code I wrote is very low, and I would like to ask for help in achieving better results. Any tips?
The original photo looks like this:
Then I convert the photo to HSV and threshold it so that only the black lettering remains:
import cv2
import numpy as np
img = cv2.imread('1.png')  # load the plate photo (filename assumed)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,100,130])
# Threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
which gives:
What can I add or do to improve the effectiveness of the program? Is there a way to make it read the plates more reliably? Would this help:
configr = ('-l eng --oem 1 --psm 6 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
text = pytesseract.image_to_string(mask, lang='eng', config=configr)
print(text)

Here's an approach:
Color threshold to extract black text. We load the image, convert to HSV colorspace, define a lower and upper color range, and use cv2.inRange() to color threshold and obtain a binary mask.
Perform morphological operations. Create a kernel and perform morph close to fill holes in the contours.
Filter license plate contours. Find contours and filter using bounding rectangle area. If a contour passes this filter, we extract the ROI and paste it onto a new blank mask.
OCR using Pytesseract. We invert the image so desired text is black and throw it into Pytesseract.
Here's a visualization of each step:
Obtained mask from color thresholding + morph closing
Filter for license plate contours highlighted in green
Pasted plate contours onto a blank mask
Inverted image ready for Tesseract
Result from Tesseract OCR
PZ 689LR
Code
import numpy as np
import pytesseract
import cv2
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Load image, create blank mask, convert to HSV, define thresholds, color threshold
image = cv2.imread('1.png')
result = np.zeros(image.shape, dtype=np.uint8)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0,0,0])
upper = np.array([179,100,130])
mask = cv2.inRange(hsv, lower, upper)
# Perform morph close and merge for 3-channel ROI extraction
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=1)
extract = cv2.merge([close,close,close])
# Find contours, filter using bounding rectangle area, and extract using Numpy slicing
cnts = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
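# findContours returns 2 values in OpenCV 2.4/4.x and 3 values in OpenCV 3.x; grab the contours either way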
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    area = w * h
    if area < 5000 and area > 2500:
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)
        result[y:y+h, x:x+w] = extract[y:y+h, x:x+w]
# Invert image and throw into Pytesseract
invert = 255 - result
data = pytesseract.image_to_string(invert, lang='eng',config='--psm 6')
print(data)
cv2.imshow('image', image)
cv2.imshow('close', close)
cv2.imshow('result', result)
cv2.imshow('invert', invert)
cv2.waitKey()
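A note on the whitelist config from the question: restricting Tesseract to uppercase letters and digits can help reject stray punctuation, but the config string needs a space before -c (the question's version runs --psm 6-c together, so the whitelist is never parsed). Here's a minimal sketch reusing the invert image from above; be aware that some Tesseract 4 builds ignore the whitelist when using the LSTM engine (--oem 1):
# Hypothetical whitelist config; note the space before -c
configr = '--psm 6 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
text = pytesseract.image_to_string(invert, lang='eng', config=configr)
print(text)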

Related

How to crop image to only text section with Python OpenCV?

I want to crop the image to only extract the text sections. There are thousands of them with different sizes so I can't hardcode coordinates. I'm trying to remove the unwanted lines on the left and on the bottom. How can I do this?
Original
Expected
Determine the minimum bounding box by finding all the non-zero points in the image, then crop the image using that bounding box. Finding contours is time-consuming and unnecessary here, especially because your text is axis-aligned. You can accomplish your goal by combining cv2.findNonZero and cv2.boundingRect.
Hope this works:
import numpy as np
import cv2
img = cv2.imread(r"W430Q.png")  # Read in the image
img = img[:-20, :-20]  # Perform pre-cropping
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
gray = 255*(gray < 50).astype(np.uint8)  # Invert the text to white
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, np.ones((2, 2), dtype=np.uint8))  # Perform noise filtering
coords = cv2.findNonZero(gray) # Find all non-zero points (text)
x, y, w, h = cv2.boundingRect(coords) # Find minimum spanning bounding box
# Crop the image - note we do this on the original image
rect = img[y:y+h, x:x+w]
cv2.imshow("Cropped", rect) # Show it
cv2.waitKey(0)
cv2.destroyAllWindows()
In the code above, the line gray = 255*(gray < 50).astype(np.uint8) is where I set the threshold below 50 to make the dark text white. Because this comparison outputs a boolean image, I convert it to uint8 and then scale by 255; the text is effectively inverted.
Then, using cv2.findNonZero, we find all of the non-zero locations in this image. We then pass this to cv2.boundingRect, which returns the top-left corner of the bounding box as well as its width and height. Finally, we can use this to crop the image. Note this is done on the original image, not the inverted version.
Here's a simple approach:
Obtain binary image. Load the image, grayscale, Gaussian blur, then Otsu's threshold to obtain a binary black/white image.
Remove horizontal lines. Since we're trying to only extract text, we remove horizontal lines to aid us in our next step so incorrect contours will not merge together.
Merge text into a single contour. The idea is that characters which are adjacent to each other are part of the wall of text. So we can dilate individual contours together to obtain a single contour to extract.
Find contours and extract ROI. We find contours, sort contours by area, then extract the largest contour ROI using Numpy slicing.
Here's the visualization of each step:
Binary image -> Removed horizontal lines in green
Dilate to combine into a single contour -> Detected ROI to extract in green
Result
Code
import cv2
import numpy as np
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3, 3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))
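# (25,1) = 25 px wide, 1 px tall: morph open keeps only long horizontal runs, i.e. the lines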
detected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, 0, -1)
# Dilate to merge into a single contour
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,30))
dilate = cv2.dilate(thresh, vertical_kernel, iterations=3)
# Find contours, sort for largest contour and extract ROI
cnts, _ = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 4)
    ROI = original[y:y+h, x:x+w]
    break
cv2.imshow('image', image)
cv2.imshow('dilate', dilate)
cv2.imshow('thresh', thresh)
cv2.imshow('ROI', ROI)
cv2.waitKey()

Extract foreground images from a white background

I have the following image, which is a scanned printed paper with 4 images. I printed 4 images on the same sheet of paper to save printing resources:
However, now I need to extract each image and create an individual image file for each one. Is there any easy way of doing that with Python, Matlab or any other programming language?
Here is one way to do that in Python/OpenCV. But it requires that the colors at the edges of the pictures be sufficiently different from the background color. If so, you can threshold the image, then get contours and use their bounding boxes to crop out each image.
Read the input
Threshold based on the background color
Invert the threshold, so that the background is black
Apply morphology open and close to fill the picture regions and remove noise
Get the external contours
For each contour, get its bounding box and crop the input image and save it to disk
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('4faces.jpg')
# threshold on background color
lowerBound = (230,230,230)
upperBound = (255,255,255)
thresh = cv2.inRange(img, lowerBound, upperBound)
# invert so background black
thresh = 255 - thresh
# apply morphology to ensure regions are filled and remove extraneous noise
kernel = np.ones((7,7), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((11,11), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# get bounding boxes and crop input
i = 1
for cntr in contours:
    # get bounding box and crop input
    x,y,w,h = cv2.boundingRect(cntr)
    crop = img[y:y+h, x:x+w]
    cv2.imwrite("4faces_crop_{0}.png".format(i), crop)
    i = i + 1
# save threshold
cv2.imwrite("4faces_thresh.png",thresh)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image after morphology cleaning:
Cropped Images:

Removing Borders/Margins from Video Frames

I am working with videos that have borders (margins) around them. Some have them along all 4 sides, some along the left & right only, and some along the top & bottom only. The width of these margins is also not fixed.
I am extracting frames from these videos, for example,
and
Both of these contain borders on the top and bottom.
Can anyone please suggest some methods to remove these borders from these images (in Python, preferably)?
I came across some methods, like this one on Stack Overflow, but they deal with an ideal situation where the borders are perfectly black (0,0,0). In my case, they may not be pitch black and may also contain jittery noise.
Any help/suggestions would be highly appreciated.
Here is one way to do that in Python/OpenCV.
Read the image
Convert to grayscale and invert
Threshold
Apply morphology to remove small black or white regions then invert again
Get the contour of the one region
Get the bounding box of that contour
Use numpy slicing to crop that area of the image to form the resulting image
Save the resulting image
import cv2
import numpy as np
# read image
img = cv2.imread('gymnast.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# invert gray image
gray = 255 - gray
# gaussian blur
blur = cv2.GaussianBlur(gray, (3,3), 0)
# threshold
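# 236 on the inverted gray keeps only pixels darker than ~19 in the original, so near-black borders are caught even if not pitch black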
thresh = cv2.threshold(blur,236,255,cv2.THRESH_BINARY)[1]
# apply close and open morphology to fill tiny black and white holes
kernel = np.ones((5,5), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# invert thresh
thresh = 255 - thresh
# get contours (presumably just one around the nonzero pixels)
# then crop it to bounding rectangle
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
cntr = contours[0]
x,y,w,h = cv2.boundingRect(cntr)
crop = img[y:y+h, x:x+w]
cv2.imshow("IMAGE", img)
cv2.imshow("THRESH", thresh)
cv2.imshow("CROP", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save cropped image
cv2.imwrite('gymnast_crop.png',crop)
Input:
Thresholded and cleaned image:
Cropped Result:

Detect and replace region from one image onto another image with OpenCV

I have two pictures of the same dimension and I want to detect and replace the white region in the first picture (black image) at the same location in the second picture. Is there any way to do this using OpenCV? I want to replace the blue region in the original image with the white region in the first picture.
First picture
Original image
If I'm understanding you correctly, you want to replace the white ROI on the black image onto the original image. Here's a simple approach:
Obtain binary image. Load image, grayscale, Gaussian blur, then Otsu's threshold
Extract ROI and replace. Find contours with cv2.findContours, then filter using contour approximation with cv2.arcLength and cv2.approxPolyDP. Assuming the region is a rectangle, a contour approximation with exactly 4 points is our desired region. In addition, we filter using cv2.contourArea to ensure that we don't include noise. We then obtain the bounding box coordinates with cv2.boundingRect, extract the ROI with Numpy slicing, and finally replace the ROI into the original image.
Detected region to extract/replace highlighted in green
Extracted ROI
Result
Code
import cv2
# Load images, grayscale, Gaussian blur, Otsu's threshold
original = cv2.imread('1.jpg')
image = cv2.imread('2.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find contours, filter using contour approximation + area, then extract
# ROI using Numpy slicing and replace into original image
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    area = cv2.contourArea(c)
    if len(approx) == 4 and area > 1000:
        x,y,w,h = cv2.boundingRect(c)
        ROI = image[y:y+h,x:x+w]
        original[y:y+h, x:x+w] = ROI
cv2.imshow('thresh', thresh)
cv2.imshow('ROI', ROI)
cv2.imshow('original', original)
cv2.waitKey()

How to remove border edge noise from an image using python?

I am trying to preprocess a photo of the eye vessels by removing the black border and extraneous non-eye features in the image (e.g., the text and "clip" visible below), replacing the black areas with the average pixel values from 3 random squares.
import statistics  # used by statistics.mean below
crop1 = randomCrop(image2, 50, 50) # Function that finds a random 50x50 area
crop2 = randomCrop(image2, 50, 50)
crop3 = randomCrop(image2, 50, 50)
mean1 = RGB_Mean(crop1)
mean2 = RGB_Mean(crop2)
mean3 = RGB_Mean(crop3)
#RGB Mean
result = [statistics.mean(k) for k in zip(mean1, mean2, mean3)]
for i in range(len(image2[:, 0, 0])):
    for j in range(len(image2[0, :, 0])):
        thru_pixel = image2[i, j]
        if (thru_pixel[0] < 50 and thru_pixel[1] < 50 and thru_pixel[2] < 50):
            image2[i, j, :] = result
        if (thru_pixel[0] > 190 and thru_pixel[1] > 190 and thru_pixel[2] > 190):
            image2[i, j, :] = result
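For context, randomCrop and RGB_Mean are my own helpers; sketched roughly, they look like this (assumed implementations, details may differ):
import random
import numpy as np
def randomCrop(img, crop_h, crop_w):
    # Pick a random crop_h x crop_w window from within the image
    y = random.randint(0, img.shape[0] - crop_h)
    x = random.randint(0, img.shape[1] - crop_w)
    return img[y:y+crop_h, x:x+crop_w]
def RGB_Mean(crop):
    # Per-channel mean, in OpenCV's B, G, R channel order
    return [float(np.mean(crop[:, :, c])) for c in range(3)]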
However, there is leftover noise around the border of the image, as well as leftover text and a clip at the bottom left.
Here's an example image.
Original :
and Post-Processing
You can see there's still leftover black-gray border noise, as well as text at the bottom right and a "clip" at the bottom left. Is there anything I could try to get rid of these artifacts while maintaining the integrity of the eye blood vessels?
Thank you for your time and help!
Assuming you want to isolate the eye blood vessels, here's an approach that can be broken down into two stages: one to remove the artifacts and another to isolate the blood vessels.
Convert image to grayscale
Otsu's threshold to obtain binary image
Perform morphological operations to remove artifacts
Adaptive threshold to isolate blood vessels
Find contours and filter using maximum threshold area
Bitwise-and to get final result
Beginning from your original image, we convert to grayscale and Otsu's threshold to obtain a binary image
Now we perform a morph open to remove the artifacts (left). We invert this mask to obtain the white border and then do a series of bitwise operations to get the image with the artifacts removed (right)
From here we apply an adaptive threshold to get the veins
Note there is the unwanted border so we find contours and filter using a maximum threshold area. If a contour passes the filter, we draw it onto a blank mask
Finally we perform bitwise-and on the original image to get our result
Note we could have performed an additional morph open after the adaptive threshold to remove the small particles of noise, but the tradeoff is that it would also remove vein detail. I'll leave this optional step up to you; see the sketch after the code below.
import cv2
import numpy as np
# Grayscale, Otsu's threshold, opening
image = cv2.imread('1.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,15))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)
inverse = 255 - opening
inverse = cv2.merge([inverse,inverse,inverse])
removed_artifacts = cv2.bitwise_and(image,image,mask=opening)
removed_artifacts = cv2.bitwise_or(removed_artifacts, inverse)
# Isolate blood vessels
veins_gray = cv2.cvtColor(removed_artifacts, cv2.COLOR_BGR2GRAY)
adaptive = cv2.adaptiveThreshold(veins_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,3)
cnts = cv2.findContours(adaptive, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area < 5000:
        cv2.drawContours(blank_mask, [c], -1, (255,255,255), 1)
blank_mask = cv2.cvtColor(blank_mask, cv2.COLOR_BGR2GRAY)
final = cv2.bitwise_and(image, image, mask=blank_mask)
# final[blank_mask==0] = (255,255,255) # White version
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('removed_artifacts', removed_artifacts)
cv2.imshow('adaptive', adaptive)
cv2.imshow('blank_mask', blank_mask)
cv2.imshow('final', final)
cv2.waitKey()
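If you decide to add that optional cleanup, a minimal sketch would be a morph open applied to adaptive before the contour filtering; the elliptical (3,3) kernel here is an assumption you would tune against your own images:
# Optional noise removal after adaptive thresholding (assumed kernel size)
# Tradeoff: a larger kernel or more iterations also erases fine vein detail
noise_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
adaptive = cv2.morphologyEx(adaptive, cv2.MORPH_OPEN, noise_kernel, iterations=1)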
