Problem recognizing characters with pytesseract in Python

I'm working with this kind of image (Original_Image) and I'm having some problems applying character recognition to it. I've tried some image treatments (grayscale, black and white, noise removal, ...) and got only bad results. This is part of the code I'm working with in Python.
import cv2
from matplotlib import pyplot as plt
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Users\14231744700\AppData\Local\Tesseract-OCR\tesseract.exe"
image_file = '5295_down.bmp'
img = cv2.imread(image_file)
height,width,channels= img.shape
#The attached image is this one (img_cropped) and I want this data as a string to work on it
img_cropped = img[41*height//50:92*height//100,2*width//14:81*width//100]
#cv2.imshow('Image_cropped', img_cropped)
#cv2.imwrite('image_cropped.png', img_cropped)
#cv2.waitKey(0)
def image_to_string(image):
    data = pytesseract.image_to_string(image, lang='eng', config='--psm 6')
    return data

print(image_to_string(img_cropped))
If someone knows about a preprocessing step or any other way to get better results, I'll be very thankful.
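For reference, one common preprocessing recipe worth trying on a crop like this is to upscale it, convert it to grayscale, and binarize it with Otsu before handing it to Tesseract. This is only a rough sketch; the scale factor and the PSM mode are guesses that would need tuning for the actual image.
import cv2
import pytesseract

def preprocess_and_ocr(bgr_crop, scale=3):
    # Upscaling often helps Tesseract with small glyphs (the factor is a guess to tune)
    big = cv2.resize(bgr_crop, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    # Grayscale + Otsu binarization to get clean dark text on a light background
    gray = cv2.cvtColor(big, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # PSM 6 assumes a uniform block of text; PSM 7 may suit a single line better
    return pytesseract.image_to_string(binary, lang='eng', config='--psm 6')

img = cv2.imread('5295_down.bmp')
height, width, channels = img.shape
img_cropped = img[41*height//50:92*height//100, 2*width//14:81*width//100]
print(preprocess_and_ocr(img_cropped))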

Related

How to read numbers on screen efficiently (pytesseract)?

I'm trying to read numbers on the screen, and for that I'm using pytesseract. The thing is, even though it works, it works slowly and doesn't give good results at all. For example, with this image:
I can make this thresholded image:
it reads 5852 instead of 585, which is understandable, but sometimes it can be way worse with different thresholding. It can read 1 000 000 as 1 aaa eee, for example, or 585 as 5385r (yes, it even adds characters for no reason).
Isn't there any way to force pytesseract to read only numbers, or something that simply works better than pytesseract?
my code:
from PIL import Image
from pytesseract import pytesseract as pyt
import test
pyt.tesseract_cmd = 'C:/Program Files/Tesseract-OCR/tesseract.exe'
def tti2(location):
    image_file = location
    im = Image.open(image_file)
    text = pyt.image_to_string(im)
    print(text)
    for character in "abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ*^&\n":
        text = text.replace(character, "")
    return text
test.th("C:\\Users\\Utilisateur\\Pictures\\greenshot\\flea market sniper\\TEST.png")
print(tti2("C:\\Users\\Utilisateur\\Pictures\\greenshot\\flea market sniper\\TESTbis.png"))
code of "test" (it's for the thresholding) :
import cv2
from PIL import Image
def th(Path):
    img = cv2.imread(Path)
    # If your image is not already grayscale:
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    threshold = 60  # to be determined
    _, img_binarized = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
    pil_img = Image.fromarray(img_binarized)
    Path = Path.replace(".png", "")
    pil_img.save(Path + "bis.png")
You can force pytesseract to read only numbers by using the tessedit_char_whitelist config option with only digit values.
You can also try to improve the results by following the Tesseract documentation:
Tesseract - Improving the quality of the output
Also, I suggest you use:
White for the background and black for the character font color.
The desired Tesseract PSM mode. In this case I used PSM mode 7 to treat the image as a single text line.
The tessedit_char_whitelist config option to specify only the characters that you are searching for.
With that in mind, here is the code:
import cv2
import numpy as np
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract'
originalImage = cv2.imread('1.png')
grayImage = cv2.cvtColor(originalImage, cv2.COLOR_BGR2GRAY)
(_, blackAndWhiteImage) = cv2.threshold(grayImage, 127, 255, cv2.THRESH_BINARY_INV)
text = pytesseract.image_to_string(blackAndWhiteImage, config="--psm 7 --oem 3 -c tessedit_char_whitelist=0123456789")
print('Text: ', text)
cv2.imshow('Image result', blackAndWhiteImage)
cv2.waitKey(0)
cv2.destroyAllWindows()
And the desired result:

Image processing to get numbers from an image using cv2 and pytesseract

I am trying to extract the values from photographs of a Ritter biogas counter; specifically, I want to get the numbers on the black counter display. Here is an example:
I am trying to do this in Python, using the cv2 and pytesseract libraries. Currently my script looks like this:
import argparse
import cv2
import pytesseract
from matplotlib import pyplot as plt
# Parsing input arguments
parser = argparse.ArgumentParser(description='Analyze an image from Ritter counter and extract the measured gas volume')
parser.add_argument("--img", required=True, help="Route to image to be analyzed")
args = parser.parse_args()
img=str(args.img)
# Reading photo as a grayscale image
img = cv2.imread(img, 0)
print("Pixels (height x width):")
print(img.shape[:2])
# Cropping image
img = img[377:420, 295:660]
#Transforming image to a binary image using a fixed threshold
for i in range(65, 85, 1):
    thresh = cv2.threshold(img, i, 255, cv2.THRESH_TOZERO)[1]
    data = pytesseract.image_to_string(thresh, lang='eng', config='--psm 6')
    plt.imshow(thresh)
    plt.title("Fixed: " + str(i) + "; Result: " + data)
    plt.show()
However, glare differences across the image and the white lines of flash reflection on the counter's glass are giving me trouble when processing the image before pytesseract. This is currently my best result:
I have tried using cv2's adaptive thresholding with no better results. The expected solution would process several images similar to the uploaded one, each with small differences in light reflection intensity and angle.
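One approach sometimes used for this kind of uneven glare is local contrast normalization (CLAHE) followed by Otsu thresholding and a small morphological opening to suppress the thin reflection streaks. A rough sketch follows; the CLAHE parameters, the kernel size, and the 'counter.jpg' filename are placeholder assumptions to adjust for the real images.
import cv2
import numpy as np
import pytesseract

# Read the counter photo as grayscale and crop to the display, as in the script above
img = cv2.imread('counter.jpg', 0)
roi = img[377:420, 295:660]

# CLAHE evens out local brightness differences caused by glare (clip/tile sizes are guesses)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
norm = clahe.apply(roi)

# Otsu picks a single global threshold after the normalization
_, binary = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# A small opening removes thin bright streaks left by the flash reflection
kernel = np.ones((2, 2), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Restrict Tesseract to digits on a single line
text = pytesseract.image_to_string(cleaned, config='--psm 7 -c tessedit_char_whitelist=0123456789')
print(text)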

Using Tesseract-OCR in Python to get number from images

I have thousands of scale images from which I would like to extract the reading of the scale. However, Tesseract gives wrong values. I have tried several processing steps on the image but still run into the same issue. From my understanding so far, after defining the region of interest in the image, it has to be converted to white text on a black background. However, I am new to Python; I tried some functions to do so but still ran into the same issue. I would appreciate it if someone could help me with this. The following link is for the image, as I couldn't upload it here because it is more than 2 MiB:
https://mega.nz/file/fZMUDRbL#tg4Tc2VmGMMdEpnZzt7blxZjVLdlhMci9jll0FLnIGI
import cv2
import pytesseract
import matplotlib.pyplot as plt
import numpy as np
import imutils
## Reading Image File
Filename = 'C:\\Users\\Abdullah\\Desktop\\Scale Reading\\' #File Path For Images
IName = 'Disk_Test_1_09_07-00000_0.tif' # Image Name
Image = cv2.imread(Filename + IName,0)
## Image Processing
Image_Crop = Image[1680:1890, 550:1240] # Define Region of Interest of the image
#cv2.imshow("cropped", Image_Crop) # Show Cropped Image
#cv2.waitKey(0) # Show Cropped Image
Mask = Image_Crop > 10 # Threshold image at a value of 10
Mask = np.array(Mask, dtype=np.uint8)
plt.imshow(Mask, alpha=1) # Set Opacity (Max 1)
ret,Binary = cv2.threshold(Mask,0,255,cv2.THRESH_BINARY)
#plt.imshow(Image_Crop, cmap="gray") # Transform Image to Gray
#plt.show()
plt.imshow(Binary,'gray',vmin=0,vmax=255)
plt.show()
## Number Recognition
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' # Call Location of Tesseract-OCR
data = pytesseract.image_to_string(Binary, lang='eng',config='--psm 6')
print(data)
Here is the image after processing
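Not a full answer, but a minimal variant of the script above that may be worth trying: Otsu-threshold the cropped region, invert it if needed so the digits end up dark on a light background (the polarity Tesseract usually handles best), and whitelist digits. The inversion flag, PSM mode, and whitelist are assumptions to adjust for the actual scale image.
import cv2
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

Image = cv2.imread('Disk_Test_1_09_07-00000_0.tif', 0)
Image_Crop = Image[1680:1890, 550:1240]  # same region of interest as above

# Otsu thresholding; swap THRESH_BINARY_INV for THRESH_BINARY if the digits already come out dark
_, Binary = cv2.threshold(Image_Crop, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Treat the crop as a single text line and only allow digits and a decimal point
data = pytesseract.image_to_string(Binary, config='--psm 7 -c tessedit_char_whitelist=0123456789.')
print(data)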

How to extract text from the highlighted text from an image

I have code that highlights the user's name in an image, and I want to extract that text, i.e. the user's name, from the image. Below is the code:
import matplotlib.pyplot as plt
import cv2
import easyocr
from pylab import rcParams
from IPython.display import Image
rcParams['figure.figsize'] = 8, 16
reader = easyocr.Reader(['en'])
output = reader.readtext('MP-SAMPLE1.jpg')
cord = output[-106][0]
x_min, y_min = [int(min(idx)) for idx in zip(*cord)]
x_max, y_max = [int(max(idx)) for idx in zip(*cord)]
image = cv2.imread('MP-SAMPLE1.jpg')
cv2.rectangle(image,(x_min,y_min),(x_max,y_max),(0,0,255),2)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
I have set the coordinates according to my image; you can adjust them according to yours. I need to extract the text that is highlighted inside the rectangular box.
I am new to this field, so please excuse any mistakes I may have made.
Here is my partial solution to the problem.
Since you are a beginner, let me give you a piece of advice: always start with pre-processing.
Pre-processing will help you remove unwanted artifacts.
For instance, you can do thresholding: Thresholding result
or median filtering: Median-filter result
I used thresholding; then you can use the pytesseract library, which contains a lot of configuration options.
Also, for non-English languages, you can follow this tutorial.
So, you want the text next to FATHERS HUSBANDS. Therefore we could do the following:
Convert the image to text.
text = pytesseract.image_to_string(Image.open(f_name), lang='eng')
From the text, find the line containing FATHERS HUSBANDS.
for line in text.split('\n'):
    if "FATHERS HUSBANDS" in line:
        name = line.split('.')[1].split(',')[0]
        print(name)
Result:
GRAMONAN GROVER
The last name is correct, but the first name is only partially correct; it should be BRAJMONAN.
I wrote this answer hoping to guide you to your solution. Good luck.
Code:
import os
import cv2
import pytesseract
from PIL import Image
img = cv2.imread("FXSCh.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
gry = cv2.threshold(gry, 0, 255,
                    cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
f_name = "{}.png".format(os.getpid())
cv2.imwrite(f_name, gry)
text = pytesseract.image_to_string(Image.open(f_name), lang='eng')
for line in text.split('\n'):
    if "FATHERS HUSBANDS" in line:
        name = line.split('.')[1].split(',')[0]
        print(name)
os.remove(f_name)
cv2.imshow("Image", img)
cv2.imshow("Output", gry)
cv2.waitKey(0)
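A complementary idea, not part of the answer above: since the question already computes the bounding box of the highlighted name with easyocr, you could OCR just that crop with pytesseract instead of scanning the whole page. A rough sketch reusing the question's own coordinates (note that the output[-106] index is specific to that particular image):
import cv2
import easyocr
import pytesseract

reader = easyocr.Reader(['en'])
output = reader.readtext('MP-SAMPLE1.jpg')
cord = output[-106][0]  # same image-specific index as in the question
x_min, y_min = [int(min(idx)) for idx in zip(*cord)]
x_max, y_max = [int(max(idx)) for idx in zip(*cord)]

image = cv2.imread('MP-SAMPLE1.jpg')
crop = image[y_min:y_max, x_min:x_max]
gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# PSM 7: treat the cropped box as a single line of text
print(pytesseract.image_to_string(binary, lang='eng', config='--psm 7'))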

Text Detection of Labels using PyTesseract

I'm building a label detection tool that automatically identifies and alphabetically sorts images based on their equipment number (19-V1083AI). I used the pytesseract library to convert the image to a string after the contours of the equipment label were identified. Although the code runs correctly, it never outputs the equipment number. It's my first time using the pytesseract library and the goodFeaturesToTrack function. Any help would be greatly appreciated!
Original Image
import numpy as np
import cv2
import imutils #resizeimage
import pytesseract # convert img to string
from matplotlib import pyplot as plt
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Read the image file
image = cv2.imread('Car Images/s3.JPG')
# Resize the image - change width to 500
image = imutils.resize(image, width=500)
# Display the original image
cv2.imshow("Original Image", image)
cv2.waitKey(0)
# RGB to Gray scale conversion
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale Conversion", gray)
cv2.waitKey(0)
# Noise removal with iterative bilateral filter(removes noise while preserving edges)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)
cv2.waitKey(0)
corners = cv2.goodFeaturesToTrack(gray,60,0.001,10)
corners = np.int0(corners)
for i in corners:
    x, y = i.ravel()
    cv2.circle(image, (x, y), 0, 255, -1)
coord = np.where(np.all(image == (255, 0, 0),axis=-1))
plt.imshow(image)
# Use tesseract to covert image into string
text = pytesseract.image_to_string(image, lang='eng')
print("Equipment Number is:", text)
plt.show()
Output Image2
Note: It worked with one of the images but not for the others
I found that a particular configuration option for PyTesseract will find your text -- along with some noise. Here are the configuration options explained: https://stackoverflow.com/a/44632770/42346
For this task I chose: "Sparse text. Find as much text as possible in no particular order."
Since there's more "text" returned by PyTesseract you can use a regular expression to filter out the noise.
This particular regular expression looks for two digits, a hyphen, five digits or characters, a space, and then two digits or characters. This can be adjusted to your equipment number format as necessary, but I'm reasonably confident this is a good solution because there's nothing else like this equipment number in the returned text.
import re
import cv2
import pytesseract
image = cv2.imread('Fv0oe.jpg')
text = pytesseract.image_to_string(image, lang='eng', config='--psm 11')
for line in text.split('\n'):
    if re.match(r'^\d{2}-\w{5} \w{2}$', line):
        print(line)
Result (with no image processing necessary):
19-V1083 AI
