I'm trying to read numbers from a screenshot I'm taking from a game, but I'm having trouble getting the numbers right.
import time
import cv2
import numpy
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'D:\Python\Tesseract\tesseract.exe'
#configs
custom_config = r'--dpi 300 --psm 6 --oem 3 -c tessedit_char_whitelist=0123456789'
# 1. load the image as grayscale
img = cv2.imread("price.png",cv2.IMREAD_GRAYSCALE)
# Fill in the characters: the yellow digits come out as gray (value 199) after the
# grayscale conversion, so turn them black and lighten the dark background
img[img <= 150] = 231
img[img == 199] = 0
cv2.imwrite('resultfirst.png', img)
# 2. Scale it 10x
scaled = cv2.resize(img, (0,0), fx=10, fy=10, interpolation = cv2.INTER_CUBIC)
# 3. Retained your bilateral filter
filtered = cv2.bilateralFilter(scaled, 11, 17, 17)
# 4. Thresholded OTSU method
thresh = cv2.threshold(filtered, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
time.sleep(1)
# 5. Erode the image to bulk it up for tesseract
kernel = numpy.ones((5,5),numpy.uint8)
eroded = cv2.erode(thresh, kernel, iterations = 2)
pre_processed = eroded
output = pytesseract.image_to_string(pre_processed, config=custom_config)
cv2.imwrite('result.png', pre_processed)
print(output)
The image is pretty clear, but it returns either 13500 or 18500; no amount of tinkering returns the 7 correctly. Is there a better way to go about it, or am I forgetting something?
EDIT:
I managed to get better results after converting the yellow (gray after the grayscale conversion) to black to fill in the numbers. I added the conversion code to the code block above.
Before:
This was the original result before
After:
This is the result now
The problem is that pytesseract still reads that 7 as a 1 every time. I don't think I can make the 7 look any more like a 7 than this... what should I do?
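One thing I can still try is comparing a few psm modes on the same preprocessed image to see whether any of them picks up the 7 (a rough sketch reusing pre_processed from the code above):
# Rough check: run the same preprocessed image through several page segmentation
# modes and compare the outputs.
for psm in (6, 7, 8, 13):
    cfg = f'--dpi 300 --psm {psm} --oem 3 -c tessedit_char_whitelist=0123456789'
    print(psm, pytesseract.image_to_string(pre_processed, config=cfg).strip())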
Not sure how general this solution will be, but if all of your pictures are like this one, a threshold of 103 will work:
import cv2
import pytesseract

image = cv2.imread('price.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Fixed threshold instead of Otsu: everything at or below 103 becomes black, the rest white
threshold = 103
_, img_binarized = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
print(pytesseract.image_to_string(img_binarized, config='--dpi 300 --psm 6 --oem 1 -c tessedit_char_whitelist=0123456789').strip())
gives 78500 on my machine.
I'm trying to read numbers on the screen and for that I'm using pytesseract. The thing is, even though it works, it works slowly and doesn't give good results at all. For example, with this image:
I can make this thresholded image:
and it reads 5852 instead of 585, which is understandable, but sometimes it can be way worse with different thresholding. It can read 1 000 000 as 1 aaa eee, for example, or 585 as 5385r (yes, it even adds characters for no reason).
Isn't there any way to force pytesseract to read only numbers, or something that simply works better than pytesseract?
My code:
from PIL import Image
from pytesseract import pytesseract as pyt
import test

pyt.tesseract_cmd = 'C:/Program Files/Tesseract-OCR/tesseract.exe'

def tti2(location):
    image_file = location
    im = Image.open(image_file)
    text = pyt.image_to_string(im)
    print(text)
    for character in "abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ*^&\n":
        text = text.replace(character, "")
    return text

test.th("C:\\Users\\Utilisateur\\Pictures\\greenshot\\flea market sniper\\TEST.png")
print(tti2("C:\\Users\\Utilisateur\\Pictures\\greenshot\\flea market sniper\\TESTbis.png"))
code of "test" (it's for the thresholding) :
import cv2
from PIL import Image

def th(Path):
    img = cv2.imread(Path)
    # If your image is not already grayscale:
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    threshold = 60  # to be determined
    _, img_binarized = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
    pil_img = Image.fromarray(img_binarized)
    Path = Path.replace(".png", "")
    pil_img.save(Path + "bis.png")
You can force pytesseract to read only numbers by using the tessedit_char_whitelist config with only digit values.
You can try to improve results using the Tesseract documentation:
Tesseract - Improving the quality of the output
I also suggest that you:
Use white for the background and black for the characters' font color.
Select the desired tesseract psm mode. In the case below I use psm 7 to treat the image as a single text line.
Use the tessedit_char_whitelist config to specify only the characters you are searching for.
With that in mind, here is the code:
import cv2
import numpy as np
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract'
originalImage = cv2.imread('1.png')
grayImage = cv2.cvtColor(originalImage, cv2.COLOR_BGR2GRAY)
(_, blackAndWhiteImage) = cv2.threshold(grayImage, 127, 255, cv2.THRESH_BINARY_INV)
text = pytesseract.image_to_string(blackAndWhiteImage, config="--psm 7 --oem 3 -c tessedit_char_whitelist=0123456789")
print('Text: ', text)
cv2.imshow('Image result', blackAndWhiteImage)
cv2.waitKey(0)
cv2.destroyAllWindows()
And the desired result:
Hi, I'm trying to create a program that takes a screenshot from another app, reads the numbers from that image, and outputs them to a txt file and another window. My problem comes when pytesseract.image_to_string returns a value other than the original.
example:
Here is the screenshot my program took (original image); the number is 8258.
Then I applied a grayscale conversion to help tesseract out (image grayscale),
after that I applied a blur because apparently this helps tesseract out (image blur),
and finally I applied a threshold because it helped get the numbers correctly (image thresh).
After all of that is done I call pytesseract to make a string from the image:
data = pytesseract.image_to_string(thresh, lang='eng',config='--psm 13 --oem 3 -c tessedit_char_whitelist=0123456789')
But my results are this for the txt: tesseract txt (it shows twice because I put 2 prints),
and for my window (image window) the value comes out as 68258.
I really don't know why this is happening and I'm all out of ideas to fix it.
I'm using tesseract v5.0.0.
This is the code I'm using:
from ctypes import resize
import pygetwindow
import pyautogui
import cv2
import pytesseract
from tkinter import *
from tkinter import font
from pygetwindow import PyGetWindowException
from PIL import Image  # needed for Image.open below
import numpy as np
import io

path = r"C:\Image.png"

# this function takes the screenshot from the window of the other app
def getting_image():
    titles = pygetwindow.getAllTitles()
    window = pygetwindow.getWindowsWithTitle('Connect')[0]
    x1 = window.left
    y1 = window.top
    height = window.height
    width = window.width
    x2 = x1 + width
    y2 = y1 + height
    pyautogui.screenshot(path)
    im = Image.open(path)
    im = im.crop((x1 + 640, y1 + 125, x2 - 130, y2 - 670))
    im.save(path, dpi=(600, 600))

# this function takes the saved image and converts it to text
def send_image():
    image = cv2.imread(path)  # path is currently my C: drive, C:\Image.png
    cv2.imshow('image_org', image)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cv2.imshow('image_bckgray', gray)
    blur = cv2.medianBlur(gray, 5)
    cv2.imshow('image_blur', blur)
    thresh = 255 - cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    cv2.imshow('image_thresh', thresh)
    data = pytesseract.image_to_string(thresh, lang='eng', config='--psm 13 --oem 3 -c tessedit_char_whitelist=0123456789')
    print(data)
    data = wording(data)  # this just takes out the \x0c at the end of the output
    print(data)
    return data
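For reference, wording isn't shown above; it's only meant to strip the trailing form feed, roughly like this (illustrative, my actual helper may differ slightly):
# Illustrative sketch of the wording() helper: strips the trailing \x0c (form feed)
# and surrounding whitespace that tesseract appends to its output.
def wording(text):
    return text.replace("\x0c", "").strip()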
I have a problem with recognition: some of my input images that visibly show a 1 turn into a 4 after the .image_to_string() command.
My input image is this:
unedited img
I then run some preprocessing steps over it (grayscale, thresholding with Otsu, and enlarging the picture), leading to this:
preprocessed img
I also tried dilating the picture, with no change in the output.
After running:
custom_config = "-c tessedit_char_whitelist=0123456789LV --psm 13"
pytesseract.image_to_string(processed_img, config=custom_config)
The final result is a string displaying:
4LV♀ and I don't understand what I can change to get a 1 instead of the 4.
Thanks in advance for your time.
The ♀ or \n\x0c appears because you need custom_config = "-c page_separator=''" as the config; for some reason tesseract adds it as the page separator. You don't need anything else in your config.
Getting your number right comes down to the processing, mainly the size. However, this code I found works best:
import pytesseract
from PIL import Image
import cv2
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
import numpy as np
imagepath = "./Pytesseract Wrong Number/kD3Zy.jpg"
read_img = Image.open(imagepath)
# convert PIL image to cv2 image locally
read_img = read_img.convert('RGB')
level_img = np.array(read_img)
level_img = level_img[:, :, ::-1].copy()
# convert to grayscale
level_img = cv2.cvtColor(level_img, cv2.COLOR_RGB2GRAY)
_, img_bin = cv2.threshold(level_img, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
level_img = cv2.bitwise_not(img_bin)
kernel = np.ones((2, 1), np.uint8)
# make the image bigger, because it needs at least 30 pixels for the height for the characters
level_img = cv2.resize(level_img,(0,0),fx=4,fy=4, interpolation=cv2.INTER_CUBIC)
level_img = cv2.dilate(level_img, kernel, iterations=1)
# --debug--
#cv2.imshow("Debug", level_img)
#cv2.waitKey()
#cv2.destroyAllWindows
#cv2.imwrite("1.png", level_img)
custom_config = "-c page_separator=''"
level = pytesseract.image_to_string(level_img, config=custom_config)
print(level)
If you want to save the result, uncomment #cv2.imwrite("1.png", level_img).
Try the settings "--psm 8 --oem 3". The full list of psm and oem options is in the Tesseract documentation, though psm 8 and oem 3 generally work fine.
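For example, passing those settings looks like this (img_bin standing in for whatever preprocessed image you have):
# Minimal example of passing the psm/oem settings (img_bin is assumed to be
# your preprocessed binary image).
text = pytesseract.image_to_string(img_bin, config="--psm 8 --oem 3")
print(text)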
I'm working on performing OCR of energy meter displays: example 1 example 2 example 3
I tried to use tesseract-ocr with the letsgodigital trained data. But the performance is very poor.
I'm fairly new to the topic and this is what I've done:
import numpy as np
import cv2
import imutils
from skimage import exposure
from pytesseract import image_to_string
import PIL
from google.colab.patches import cv2_imshow  # Colab display helper used below

def process_image(orig_image_arr):
    gry_disp_arr = cv2.cvtColor(orig_image_arr, cv2.COLOR_BGR2GRAY)
    gry_disp_arr = exposure.rescale_intensity(gry_disp_arr, out_range=(0, 255))
    # thresholding
    ret, thresh = cv2.threshold(gry_disp_arr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return thresh

def ocr_image(orig_image_arr):
    otsu_thresh_image = process_image(orig_image_arr)
    cv2_imshow(otsu_thresh_image)
    return image_to_string(otsu_thresh_image, lang="letsgodigital", config="--psm 8 -c tessedit_char_whitelist=.0123456789")

img1 = cv2.imread('test2.jpg')
cnv = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
text = ocr_image(cnv)
This gives very poor results with the example images. I have a couple of questions:
How can I identify the four corners of the display? (Edge detection doesn’t seem to work very well)
Is there any further preprocessing that I can do to improve the performance?
Thanks for any help.
Notice how your power meters either use blue or green LEDs to light up the display; I suggest you use this color display to your advantage. What I'd do is select only one RGB channel based on the LED color. Then I can threshold it based on some algorithm or assumption. After that, you can do the downstream steps of cropping / resizing / transformation / OCR etc.
For example, using your example image 1, look at its histogram here.
Notice how there is a small peak of green to the right of the 150 mark.
I take advantage of this and set anything below 150 to zero, my assumption being that the green peak is the bright green LED in the image.
img = cv2.imread('example_1.jpg', 1)
# Get only green channel
img_g = img[:,:,1]
# Set threshold for green value, anything less than 150 becomes zero
img_g[img_g < 150] = 0
This is what I get.
This should be much easier for downstream OCR now.
# You should also set anything >= 150 to max value as well, but I didn't in this example
img_g[img_g >= 150] = 255
The above steps should replace this step
_ret, thresh = cv2.threshold(img_g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Here's the output of this step.
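For the corner question, one common approach is to take the largest roughly rectangular contour of the thresholded image and warp it to a flat rectangle; a sketch (untested on these meters, and the output size is an arbitrary guess):
# Sketch: find the display as the largest contour that approximates to 4 points,
# then warp it flat. Assumes img_g is the thresholded green channel from above
# and numpy is imported as np.
contours, _ = cv2.findContours(img_g, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
display = max(contours, key=cv2.contourArea)
peri = cv2.arcLength(display, True)
quad = cv2.approxPolyDP(display, 0.02 * peri, True)
if len(quad) == 4:
    src = quad.reshape(4, 2).astype('float32')
    # in real code, order the corners (tl, tr, br, bl) before building dst
    dst = np.float32([[0, 0], [400, 0], [400, 150], [0, 150]])
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(img_g, M, (400, 150))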
I am trying to read numbers from images and cannot find a way to get it to work consistently (not all images have numbers). These are the images:
(here is the link to the album in case the images are not working)
This is the command I'm using to run tesseract on the images: pytesseract.image_to_string(image, timeout=2, config='--psm 13 --oem 3 -c tessedit_char_whitelist=0123456789'). I have tried multiple configurations, but this seems to work best.
As far as preprocessing goes, this works the best:
gray = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
im_bw = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)[1]  # thresh is a manually chosen threshold value
This works for all images except the 3rd one. To solve the problem of lines in the 3rd image, I tried getting the edges with cv2.Canny and a pretty large threshold, which works, but when drawing them back, even though it gets more than 95% of each number's edges, tesseract does not read them correctly.
I have also tried resizing the image, using cv2.morphologyEx, blurring it, etc. I cannot find a way to get it to work for each case.
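The kind of line removal I was attempting looked roughly like this (a sketch; the kernel width is a guess and im_bw is the thresholded image from above):
# Sketch of removing long horizontal lines before OCR: open with a wide flat
# kernel to isolate the lines, then subtract them from the thresholded image.
horiz_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))
lines = cv2.morphologyEx(im_bw, cv2.MORPH_OPEN, horiz_kernel)
no_lines = cv2.subtract(im_bw, lines)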
Thank you.
cv2.resize has consistently worked for me with INTER_CUBIC interpolation.
Adding this last step to pre-processing would most likely solve your problem.
im_bw_scaled = cv2.resize(im_bw, (0, 0), fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
You could play around with the scale. I have used '4' above.
EDIT:
The following code worked with your images very well, even with special characters. Please try it out with the rest of your dataset. Scaling, Otsu and erosion were the best combination.
import cv2
import numpy
import pytesseract
pytesseract.pytesseract.tesseract_cmd = "<path to tesseract.exe>"
# Page segmentation mode, PSM was changed to 6 since each page is a single uniform text block.
custom_config = r'--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789'
# load the image as grayscale
img = cv2.imread("5.png",cv2.IMREAD_GRAYSCALE)
# Change all pixels to black, if they aren't white already (since all characters were white)
img[img != 255] = 0
# Scale it 10x
scaled = cv2.resize(img, (0,0), fx=10, fy=10, interpolation = cv2.INTER_CUBIC)
# Retained your bilateral filter
filtered = cv2.bilateralFilter(scaled, 11, 17, 17)
# Thresholded OTSU method
thresh = cv2.threshold(filtered, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Erode the image to bulk it up for tesseract
kernel = numpy.ones((5,5),numpy.uint8)
eroded = cv2.erode(thresh, kernel, iterations = 2)
pre_processed = eroded
# Feed the pre-processed image to tesseract and print the output.
ocr_text = pytesseract.image_to_string(pre_processed, config=custom_config)
if len(ocr_text) != 0:
    print(ocr_text)
else:
    print("No string detected")