How to extract and recognize the vehicle plate number with Python?

I have tried using pytesseract in combination with PIL to identify the vehicle registration number from the number plate image, but I am not able to get the text from these images.
The code:
from PIL import Image
from pytesseract import image_to_string
img= Image.open('D://carimage1')
text = image_to_string(img)
print(text)
While this is working for normal scanned documents, it is not working for vehicle number plates.
Sample Image 1
Sample image 2

Here is a rough idea of how you can solve your problem; you can build on top of it. You need to extract the number plate from the image and then send that cropped image to tesseract. Read the code comments to understand what I am trying to do.
import numpy as np
import cv2
import pytesseract
import matplotlib.pyplot as plt
img = cv2.imread('/home/muthu/Documents/3r9OQ.jpg')
#convert my image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#perform adaptive threshold so that I can extract proper contours from the image
#need this to extract the name plate from the image.
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,2)
contours,h = cv2.findContours(thresh,1,2)
#once I have the contours list, i need to find the contours which form rectangles.
#the contours can be approximated to minimum polygons, polygons of size 4 are probably rectangles
largest_rectangle = [0,0]
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.01*cv2.arcLength(cnt, True), True)
    if len(approx) == 4:  # polygons with 4 points is what I need.
        area = cv2.contourArea(cnt)
        if area > largest_rectangle[0]:
            # find the polygon which has the largest size.
            largest_rectangle = [cv2.contourArea(cnt), cnt, approx]

x, y, w, h = cv2.boundingRect(largest_rectangle[1])
# crop the rectangle to get the number plate.
roi = img[y:y+h, x:x+w]
#cv2.drawContours(img, [largest_rectangle[1]], 0, (0,0,255), -1)
plt.imshow(roi, cmap='gray')
plt.show()
The output is the number plate as attached below:
Now pass this cropped image into your tesseract.
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,2)
text = pytesseract.image_to_string(roi)
print(text)
I get the below output for the sample image you shared.
The parsing will be more accurate if you perspective-transform your number plate image onto its bounding rectangle and also remove the extra borders around it. Let me know if you need help with that too.
The code above doesn't work for the second image if used as is, because I am filtering the search to polygons with 4 sides. Hope you got the idea.
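As a rough sketch of that perspective-transform idea, continuing from the snippet above (the corner ordering and the target size below are my own assumptions, not part of the original code):
import numpy as np

pts = largest_rectangle[2].reshape(4, 2).astype('float32')  # the 4-point polygon found above

# Order the corners as top-left, top-right, bottom-right, bottom-left.
s = pts.sum(axis=1)
d = np.diff(pts, axis=1).ravel()
ordered = np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                    pts[np.argmax(s)], pts[np.argmax(d)]], dtype='float32')

# Target size taken from the side lengths of the polygon.
w = int(max(np.linalg.norm(ordered[0] - ordered[1]), np.linalg.norm(ordered[3] - ordered[2])))
h = int(max(np.linalg.norm(ordered[0] - ordered[3]), np.linalg.norm(ordered[1] - ordered[2])))
dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype='float32')

# Warp the plate onto an axis-aligned rectangle before running tesseract on it.
M = cv2.getPerspectiveTransform(ordered, dst)
plate = cv2.warpPerspective(img, M, (w, h))
text = pytesseract.image_to_string(plate)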

This one works only for the second image:
from PIL import Image, ImageFilter
import pytesseract
img = Image.open('TcjXJ.jpg')
img2 = img.filter(ImageFilter.BLUR)
pixels = img2.load()
width, height = img2.size
x_ = []
y_ = []
for x in range(width):
    for y in range(height):
        if pixels[x, y] == (255, 255, 255):
            x_.append(x)
            y_.append(y)

img = img.crop((min(x_), min(y_), max(x_), max(y_)))
text = pytesseract.image_to_string(img, lang='eng', config='-c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
print(text)
You get this output:
TN 99 F 2378

You can use the OpenVINO engine; it contains a pretrained model and a sample for plate detection and recognition.
There is also OpenALPR for Python.
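If you go the OpenALPR route, a minimal sketch using its Python bindings might look like the following (the country code, config path, and runtime-data path are placeholders you would adjust for your install):
from openalpr import Alpr

# Placeholders: country code, config file, and runtime data directory depend on your install.
alpr = Alpr("us", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

results = alpr.recognize_file("car.jpg")
for plate in results["results"]:
    # Each result carries the best candidate plus its confidence.
    print(plate["plate"], plate["confidence"])

alpr.unload()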

Related

How to detect subscript numbers in an image using OCR?

I am using tesseract for OCR, via the pytesseract bindings. Unfortunately, I encounter difficulties when trying to extract text including subscript-style numbers - the subscript number is interpreted as a letter instead.
For example, in the basic image:
I want to extract the text as "CH3", i.e. I am not concerned about knowing that the number 3 was a subscript in the image.
My attempt at this using tesseract is:
import cv2
import pytesseract
img = cv2.imread('test.jpeg')
# Note that I have reduced the region of interest to the known
# text portion of the image
text = pytesseract.image_to_string(
    img[200:300, 200:320], config='-l eng --oem 1 --psm 13'
)
print(text)
Unfortunately, this will incorrectly output
'CHs'
It's also possible to get 'CHa', depending on the psm parameter.
I suspect that this issue is related to the "baseline" of the text being inconsistent across the line, but I'm not certain.
How can I accurately extract the text from this type of image?
Update - 19th May 2020
After seeing Achintha Ihalage's answer, which doesn't provide any configuration options to tesseract, I explored the psm options.
Since the region of interest is known (in this case, I am using EAST detection to locate the bounding box of the text), the psm config option for tesseract, which in my original code treats the text as a single line, may not be necessary. Running image_to_string against the region of interest given by the bounding box above gives the output
CH
3
which can, of course, be easily processed to get CH3.
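For example, a one-line cleanup of that output:
raw = "CH\n3"                # the two-line output shown above
text = "".join(raw.split())  # drop all whitespace -> "CH3"
print(text)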
This is because the subscript font is too small. You could resize the image using a python package such as cv2 or PIL and use the resized image for OCR, as coded below.
import pytesseract
import cv2
img = cv2.imread('test.jpg')
img = cv2.resize(img, None, fx=2, fy=2) # scaling factor = 2
data = pytesseract.image_to_string(img)
print(data)
OUTPUT:
CH3
You want to apply pre-processing to your image before feeding it into tesseract to increase the accuracy of the OCR. I use a combination of PIL and cv2 here because cv2 has good filters for blur/noise removal (dilation, erosion, threshold) and PIL makes it easy to enhance the contrast (distinguish the text from the background), and I wanted to show how pre-processing could be done using either (using both together is not strictly necessary, as shown below). You could write this more elegantly; this is just the general idea.
import cv2
import pytesseract
import numpy as np
from PIL import Image, ImageEnhance
img = cv2.imread('test.jpg')
def cv2_preprocess(image_path):
    img = cv2.imread(image_path)
    # convert to grayscale if not already
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # remove noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # apply a gaussian blur, then an Otsu threshold
    img = cv2.threshold(cv2.GaussianBlur(img, (9, 9), 0), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    # this can be used for salt-and-pepper noise (not necessary here)
    #img = cv2.adaptiveThreshold(cv2.medianBlur(img, 7), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    cv2.imwrite('new.jpg', img)
    return 'new.jpg'

def pil_enhance(image_path):
    image = Image.open(image_path)
    contrast = ImageEnhance.Contrast(image)
    contrast.enhance(2).save('new2.jpg')
    return 'new2.jpg'

img = cv2.imread(pil_enhance(cv2_preprocess('test.jpg')))
text = pytesseract.image_to_string(img)
print(text)
Output:
CH3
The cv2 pre-process produces an image that looks like this:
The enhancement with PIL gives you:
In this specific example, you can actually stop after the cv2_preprocess step, because that output is already clear enough for tesseract to read:
img = cv2.imread(cv2_preprocess('test.jpg'))
text = pytesseract.image_to_string(img)
print(text)
output:
CH3
But if you are working with things that don't necessarily start with a white background (i.e. greyscaling converts to light grey instead of white), I have found the PIL step really helps there.
The main ways to increase tesseract's accuracy are typically:
fix DPI (rescaling)
fix brightness/noise of the image
fix text size/lines (skewing/warping of the text)
Doing one of these, or all three, will help, but the brightness/noise fixes tend to be more generalizable than the other two (at least in my experience). A rough sketch of the rescaling and deskewing steps follows below.
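This sketch assumes dark text on a light background and is my own illustration; the angle handling follows the common cv2.minAreaRect convention and may need tweaking on newer OpenCV versions:
import cv2
import numpy as np

def rescale_and_deskew(path, scale=2):
    # Upscale so small glyphs reach a usable size for tesseract.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)

    # Binarize (text becomes white on black) and estimate the skew angle
    # from the minimum-area rectangle around all text pixels.
    thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    coords = np.column_stack(np.where(thresh > 0)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]
    angle = -(90 + angle) if angle < -45 else -angle

    # Rotate around the centre to undo the skew.
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)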
I think this approach may be more suitable for the general case.
import cv2
import pytesseract
from pathlib import Path
image = cv2.imread('test.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]  # suitable for sharper black-and-white pictures
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]  # handle the different return signatures of OpenCV 2.4/4.x vs OpenCV 3
result_list = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    area = cv2.contourArea(c)
    if area > 200:
        detect_area = image[y:y + h, x:x + w]
        # detect_area = cv2.GaussianBlur(detect_area, (3, 3), 0)
        predict_char = pytesseract.image_to_string(detect_area, lang='eng', config='--oem 0 --psm 10')
        result_list.append((x, predict_char))
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), thickness=2)

result = ''.join([char for _, char in sorted(result_list, key=lambda _x: _x[0])])
print(result)  # CH3
output_dir = Path('./temp')
output_dir.mkdir(parents=True, exist_ok=True)
cv2.imwrite(f"{output_dir/Path('image.png')}", image)
cv2.imwrite(f"{output_dir/Path('clean.png')}", thresh)
MORE REFERENCE
I strongly suggest you refer to the following examples, which are useful references for OCR.
Get the location of all text present in image using opencv
Using YOLO or other image recognition techniques to identify all alphanumeric text present in images

How to crop license plate from vehicles in a real time video file using python

I have built my custom object detection model using SSD-MobileNet and TensorFlow. Now, I have to crop the license plate number from the video file and display it along with the model name on top of the bounding box.
i5 processor
NVIDIA GeForce MX150 with 2GB VRAM
import numpy as np
import cv2
import pytesseract
import matplotlib.pyplot as plt
img = cv2.imread('/home/dora/Desktop/image1.jpg')
#convert my image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#perform adaptive threshold so that I can extract proper contours from the image
#need this to extract the name plate from the image.
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,2)
contours,h = cv2.findContours(thresh,1,2)
#once I have the contours list, i need to find the contours which form rectangles.
#the contours can be approximated to minimum polygons, polygons of size 4 are probably rectangles
largest_rectangle = [0,0]
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.01*cv2.arcLength(cnt, True), True)
    if len(approx) == 4:  # polygons with 4 points is what I need.
        area = cv2.contourArea(cnt)
        if area > largest_rectangle[0]:
            # find the polygon which has the largest size.
            largest_rectangle = [cv2.contourArea(cnt), cnt, approx]

x, y, w, h = cv2.boundingRect(largest_rectangle[1])
# crop the rectangle to get the number plate.
roi = img[y:y+h, x:x+w]
#cv2.drawContours(img, [largest_rectangle[1]], 0, (0,0,255), -1)
plt.imshow(roi, cmap='gray')
plt.show()
The above code crops license plate from images.
I want to crop from a video file.
Looks like you want to read a video. In this case I would use cv2's VideoCapture class.
If you put all your processing in a method, you can just call it on every frame.
# ... your process(frame) function defined here ...
video = cv2.VideoCapture("<filename>")
while True:  # run while there are more frames
    ret, frame = video.read()  # ret is False once there are no frames left
    if not ret:
        break
    process(frame)  # do your processing
video.release()
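For completeness, a rough sketch of what such a process function could look like if it reuses the contour-based plate cropping from the question (purely illustrative; in your setup it would wrap the SSD-MobileNet detection instead):
import cv2

def process(frame):
    # Contour-based plate extraction from the question, applied to a single frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    contours, h = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    largest, largest_area = None, 0
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
        area = cv2.contourArea(cnt)
        if len(approx) == 4 and area > largest_area:
            largest, largest_area = cnt, area
    if largest is not None:
        x, y, w, h = cv2.boundingRect(largest)
        cv2.imshow('plate', frame[y:y + h, x:x + w])
        cv2.waitKey(1)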

How to process a binary image to align sparse letters in a row?

I am trying to use tesseract OCR to convert an image to text. The image always has three letters without rotation/skew, but randomly distributed in a 90x50 png file.
By just cleaning and converting to black/white, tesseract could not get the text in the image. After aligning the letters by hand in Paint, the OCR gives the exact match. It doesn't even need to be exactly aligned.
What I want is some tips on how to automate this alignment of the characters in the image prior to sending it to tesseract.
I am using python with tesseract and opencv.
Original image:
What I have done - turn black and white:
What I want to do - aligned by code:
You can use the following code to achieve this output. Some of the constants may need to be changed to fit your needs:
import cv2
import numpy as np
# Read the image (resize so it is easier to see)
img = cv2.imread("/home/stephen/Desktop/letters.png",0)
h,w = img.shape
img = cv2.resize(img, (w*5,h*5))
# Threshold the image and find the contours
_, thresh = cv2.threshold(img, 123, 255, cv2.THRESH_BINARY_INV);
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# Create a white background image to paste the letters on
bg = np.zeros((200,200), np.uint8)
bg[:] = 255
left = 5
# Iterate through the contours
for contour, h in zip(contours, hierarchy[0]):
    # Ignore inside parts (circle in a 'p' or 'b')
    if h[3] == -1:
        # Get the bounding rectangle
        x, y, w, h = cv2.boundingRect(contour)
        # Paste it onto the background
        bg[5:5+h, left:left+w] = img[y:y+h, x:x+w]
        left += (w + 5)
cv2.imshow('thresh', bg)
cv2.waitKey()

Extracting components from an edge image and storing for further processing

Input
Given an edge image, I want to retrieve components in it one by one and store each component as an image so that I can use it later for processing. I guess this is called connected component labeling.
For example, in the input image, there are 2 lines, 1 circle, and 2 curves.
I want 5 image files, one containing each of these 5 components.
I was able to come up with code as below, but I do not know how to proceed further. Currently I am getting all the components coloured in different colours in output.
import scipy
from skimage import io
from scipy import ndimage
import matplotlib.pyplot as plt
import cv2
import numpy as np
fname='..//Desktop//test1.png'
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#canny
img_canny = cv2.Canny(img,100,200)
threshold = 50
# find connected components
labeled, nr_objects = ndimage.label(img_canny)
print('Number of objects is %d'% nr_objects)
plt.imsave('..//Desktop//out.png', labeled)
Output
New output
You may not need to use cv2.Canny() to segment the contours; you can simply use a binary thresholding technique, as follows:
img = cv2.imread("/path/to/img.png")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 130, 255, cv2.THRESH_BINARY_INV)
# Opencv v3.x
im, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for i in range(len(contours)):
    rect = cv2.boundingRect(contours[i])
    contour_component = img[rect[1]:rect[1] + rect[3], rect[0]:rect[0] + rect[2]]
    cv2.imwrite("component_{}.png".format(i), contour_component)
This:
num = nr_objects
i = 0
while i < num:
    plt.imshow(labeled)
    i = i + 1
Does not loop over different labels, it just shows the same image num times. You need to do something like:
for i in range(1, num + 1):  # labels start at 1; 0 is the background
    tmp = np.zeros(labeled.shape)
    tmp[labeled == i] = 255
    plt.imshow(tmp)
Then you will see one image for each label. A sketch of writing each component out to its own file is shown below. If you have any questions, leave a comment.
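Since the goal is one image file per component, here is a hedged sketch using scipy.ndimage.find_objects on the labeled array from the question (the file names are illustrative):
import numpy as np
import cv2
from scipy import ndimage

# labeled, nr_objects come from ndimage.label(img_canny) as in the question.
slices = ndimage.find_objects(labeled)
for i, sl in enumerate(slices, start=1):
    # Crop the bounding slice of label i and keep only that label's pixels.
    component = np.where(labeled[sl] == i, 255, 0).astype(np.uint8)
    cv2.imwrite('component_%d.png' % i, component)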

Detecting text in an image using python

I have around 100+ images with 2 different texts on them. The images are below; one is occupied and the other is unoccupied.
So is there any way in Python to differentiate these images using some code to detect the text in them?
If so, I want to identify the occupied images and delete the unoccupied ones.
Since I am new to python can anyone help me in doing this?
This answer is based on the assumption that there are only two different texts on the images, as you posted in the question. So I assume that the number of characters and the color of the text is always the same ("Room status: Unoccupied" and "Room status: Occupied", in red). That being said, I would try a simpler way to differentiate between these two types. These images contain characters that are very close to each other, so in my opinion it would be very difficult to separate each character and identify it with an OCR. I would try a simpler approach, like finding the area containing the text and measuring its length: "unoccupied" has two more characters than "occupied" and hence a greater length. So you can transform the image to HSV color space and use the cv2.inRange() function to extract the text (red color). Then you can merge the characters into one contour with cv2.morphologyEx() and get its length with cv2.minAreaRect(). Hope it helps, or at least gives you a new perspective on how to find your solution. Cheers!
Example code:
import cv2
import numpy as np
# Read the image and transform to HSV colorspace.
img = cv2.imread('ocupied.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Extract the red text.
lower_red = np.array([0,150,50])
upper_red = np.array([40,255,255])
mask_red = cv2.inRange(hsv, lower_red, upper_red)
# Search for contours on the mask.
_, contours, hierarchy = cv2.findContours(mask_red,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
# Create a new mask for further processing.
mask = np.ones(img.shape, np.uint8)*255
# Draw contours on the mask with size and ratio of borders for threshold (to remove other noises from the image).
for cnt in contours:
    size = cv2.contourArea(cnt)
    x, y, w, h = cv2.boundingRect(cnt)
    if 10000 > size > 50 and w*2.5 > h:
        cv2.drawContours(mask, [cnt], -1, (0,0,0), -1)
# Connect neighbour contours and select the biggest one (text).
kernel = np.ones((50,50),np.uint8)
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
gray_op = cv2.cvtColor(opening, cv2.COLOR_BGR2GRAY)
_, threshold_op = cv2.threshold(gray_op, 150, 255, cv2.THRESH_BINARY_INV)
_, contours_op, hierarchy_op = cv2.findContours(threshold_op, cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours_op, key=cv2.contourArea)
# Create rotated rectangle to get the 4 points of the rectangle.
rect = cv2.minAreaRect(cnt)
# Create the bounding box and calculate the "length" of the text.
box = cv2.boxPoints(rect)
a, b, c, d = box = np.int0(box)
bound =[]
bound.append(a)
bound.append(b)
bound.append(c)
bound.append(d)
bound = np.array(bound)
(x1, y1) = (bound[:,0].min(), bound[:,1].min())
(x2, y2) = (bound[:,0].max(), bound[:,1].max())
# Draw the rectangle.
cv2.rectangle(img,(x1,y1),(x2,y2),(0,255,0),1)
# Identify the room status.
if x2 - x1 > 200:
    print('unoccupied')
else:
    print('occupied')
# Display the result
cv2.imshow('img', img)
Result:
occupied
unoccupied
Using the tesseract OCR engine and the Python wrapper pytesseract, this is only a few lines of work:
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files (x86)\Tesseract-OCR\tesseract.exe"
img = Image.open('D:\\tmp2.jpg').crop((0,0,250,35))
print(pytesseract.image_to_string(img, config='--psm 7'))
I have tested this on Windows 7. Of course, I have assumed that the text appears at the same position in every image (from your example, it does seem to be the case). Else, you need to find a better cropping mechanism.
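And to act on the original goal of deleting the unoccupied images, a small sketch built on the same call (the folder path, the reused crop box, and the 'unoccupied' substring check are assumptions about your data):
import os
import glob
import pytesseract
from PIL import Image

# tesseract_cmd configured as above, if needed on your system.
for path in glob.glob('D:\\rooms\\*.jpg'):        # assumed folder of status images
    img = Image.open(path).crop((0, 0, 250, 35))  # same crop region as above
    text = pytesseract.image_to_string(img, config='--psm 7')
    if 'unoccupied' in text.lower():
        os.remove(path)                           # delete the unoccupied ones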
