I'm using pytesseract and cv2 in my project and want to read the text from the following image (after cv2 processing):
The text seems clearly visible, and pytesseract returns the other lines correctly but misses "Da-Dumm" and "Ploink".
import os
import cv2
import pytesseract
from PIL import Image

# load the image, convert to grayscale, and apply Otsu's threshold
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 0, 255,
                     cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
gray = cv2.medianBlur(gray, 1)

# store the grayscale image as a temp file to apply OCR
filename = "Screens/{}.png".format(os.getpid())
cv2.imwrite(filename, gray)

# load the image as a PIL/Pillow image and apply OCR
text = pytesseract.image_to_string(Image.open(filename), lang="deu")
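One way to see why individual words get dropped is to look at Tesseract's word-level output instead of the plain string; a minimal debugging sketch, assuming the same preprocessed temp file, using pytesseract.image_to_data:

from pytesseract import Output
# print per-word text and confidence values to see where "Da-Dumm" and "Ploink" go missing
data = pytesseract.image_to_data(Image.open(filename), lang="deu", output_type=Output.DICT)
for word, conf in zip(data["text"], data["conf"]):
    if word.strip():
        print(word, conf)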
I have the following table area from the original image:
I'm trying to extract the text from this table, but when thresholding, the whole gray region gets darkened. For example, like below.
The threshold I used:
thresh_value = cv2.threshold(original_gray, 128, 255, cv2.THRESH_BINARY_INV +cv2.THRESH_OTSU)[1]
Is it possible to turn the gray background white while leaving the black text pixels as they are?
You should use adaptive thresholding in Python/OpenCV.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread("text_table.jpg")
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# do adaptive threshold on gray image
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 11)
# write results to disk
cv2.imwrite("text_table_thresh.jpg", thresh)
# display it
cv2.imshow("thresh", thresh)
cv2.waitKey(0)
Result
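If the end goal is to OCR the cleaned-up table, the thresholded image can be passed straight to pytesseract; a minimal follow-up sketch, assuming pytesseract and the Tesseract binary are installed:

import pytesseract
# --psm 6 treats the crop as a single uniform block of text
text = pytesseract.image_to_string(thresh, config='--psm 6')
print(text)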
I am trying to detect prices using pytesseract.
However, I am getting very bad results.
I have one large image with several prices in different locations.
These locations are constant so I am cropping the image down and saving each area as a new image and then trying to detect the text.
I know the text will only contain 0123456789$¢.
I trained my new font using trainyourtesseract.com.
For example, I take this image.
Double its size and threshold it to get this.
Run it through tesseract and get an output of 8.
Any help would be appreciated.
import os
import cv2
import pytesseract
from PIL import Image

def getnumber(self, img):
    # grayscale, then a fixed inverse binary threshold
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh, grey = cv2.threshold(grey, 50, 255, cv2.THRESH_BINARY_INV)
    # write to a temp file, OCR with the custom 'Droid' traineddata, then clean up
    filename = "{}.png".format(os.getpid())
    cv2.imwrite(filename, grey)
    text = pytesseract.image_to_string(Image.open(filename), lang='Droid',
        config='--psm 13 --oem 3 -c tessedit_char_whitelist=0123456789.$¢')
    os.remove(filename)
    return text
You're on the right track. When preprocessing the image for OCR, you want the text in black on a white background. The idea is to enlarge the image, apply Otsu's threshold to get a binary image, then perform OCR. We use --psm 6 to tell Pytesseract to assume a single uniform block of text. Look here for more configuration options. Here's the processed image:
Result from OCR:
2¢
Code
import cv2
import pytesseract
import imutils
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Resize, grayscale, Otsu's threshold
image = cv2.imread('1.png')
image = imutils.resize(image, width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Perform text extraction
data = pytesseract.image_to_string(thresh, lang='eng',config='--psm 6')
print(data)
cv2.imshow('thresh', thresh)
cv2.imwrite('thresh.png', thresh)
cv2.waitKey()
Machine specs:
Windows 10
opencv-python==4.2.0.32
pytesseract==0.2.7
numpy==1.14.5
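Since the question notes the text will only ever contain 0123456789$¢, the same call can also be restricted with a character whitelist, as in the question's own config; a hedged variant of the extraction line:

# restrict recognition to digits and currency symbols (whitelist taken from the question)
data = pytesseract.image_to_string(
    thresh, lang='eng',
    config='--psm 6 -c tessedit_char_whitelist=0123456789.$¢')
print(data)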
I want to extract text (mainly numbers) from images like this
I tried this code
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
img = Image.open('1.jpg')
text = pytesseract.image_to_string(img, lang='eng')
print(text)
but all I get is this
(hE PPAR)
When performing OCR, it is important to preprocess the image so the desired text is in black with the background in white. To do this, here's a simple approach using OpenCV to apply Otsu's threshold, which results in a binary image. Here's the image after preprocessing:
We use the --psm 6 configuration setting to treat the image as a uniform block of text. Here are other configuration options you can try. Result from Pytesseract:
01153521976
Code
import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
image = cv2.imread('1.png', 0)
thresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
data = pytesseract.image_to_string(thresh, lang='eng',config='--psm 6')
print(data)
cv2.imshow('thresh', thresh)
cv2.waitKey()
I'm trying to extract text from image using python cv2. The result is pathetic and I can't figure out a way to improve my code.
I believe the image needs to be processed before the extraction of text but not sure how.
I've tried to convert it into black and white but no luck.
import cv2
import os
import pytesseract
from PIL import Image
import time
pytesseract.pytesseract.tesseract_cmd='C:\\Program Files\\Tesseract-OCR\\tesseract.exe'
cam = cv2.VideoCapture(1,cv2.CAP_DSHOW)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 8000)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 6000)
while True:
    return_value, image = cam.read()
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    image = image[127:219, 508:722]
    #(thresh, image) = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    cv2.imwrite('test.jpg', image)
    print('Text detected: {}'.format(pytesseract.image_to_string(Image.open('test.jpg'))))
    time.sleep(2)

cam.release()
#os.system('del test.jpg')
Preprocessing to clean the image before performing text extraction can help. Here's a simple approach:
Convert image to grayscale and sharpen image
Otsu's threshold
Perform morphological operations to clean image
Invert image
First we convert to grayscale, then sharpen the image using a sharpening kernel
Next we apply Otsu's threshold to obtain a binary image
Now we perform morphological transformations to smooth the image
Finally we invert the image
import cv2
import numpy as np

image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# sharpen with a 3x3 sharpening kernel
sharpen_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
sharpen = cv2.filter2D(gray, -1, sharpen_kernel)

# Otsu's threshold to get an inverted binary image
thresh = cv2.threshold(sharpen, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# morphological close to smooth the text
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)

# invert so the text ends up black on white
result = 255 - close

cv2.imshow('sharpen', sharpen)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('result', result)
cv2.waitKey()
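From here, the cleaned result can be fed straight to pytesseract to extract the text; a minimal sketch, assuming Tesseract is installed and on the path:

import pytesseract
# treat the cleaned crop as a single uniform block of text
text = pytesseract.image_to_string(result, config='--psm 6')
print(text)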
I am using pytesseract to extract text from images. Before extracting text with pytesseract, I use Pillow and cv2 to reduce noise and enhance the image:
import numpy as np
import pytesseract
from PIL import Image, ImageFilter, ImageEnhance
import cv2
img = cv2.imread('ss.png')
img = cv2.resize(img, (0,0), fx=3, fy=3)
cv2.imwrite("new.png", img)
img1 = cv2.imread("new.png", 0)
#Apply dilation and erosion
kernel = np.ones((2, 2), np.uint8)
img1 = cv2.dilate(img1, kernel, iterations=1)
img1 = cv2.erode(img1, kernel, iterations=1)
img1 = cv2.adaptiveThreshold(img1,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,2)
cv2.imwrite("new1.png", img1)
img2 = Image.open("new1.png")
# Enhance the image
img2 = img2.filter(ImageFilter.MedianFilter())
enhancer = ImageEnhance.Contrast(img2)
img2 = enhancer.enhance(2)
img2.save('new2.png')
result = pytesseract.image_to_string(Image.open("new2.png"))
print(result)
I mostly get good results, but when I use some low quality/resolution images, I do not get the expected output. Can I improve this in my code?
Example:
Input:
new1.png:
new2.png:
The string that I get from the console is "play". What could I change in my algorithm so that I get the whole string extracted?
Any help would be greatly appreciated.
This is a late answer, but I just came across this. We can use Pillow and cv2 to reduce noise and enhance the image before extracting text with pytesseract. I hope it helps someone in the future.
# import required libraries
import cv2
import numpy as np
import pytesseract
from PIL import Image

src_path = "C:/Users/chethan/Desktop/"

def get_string(img_path):
    # Read image with opencv
    img = cv2.imread(img_path)
    # Convert to gray
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # Write image after removing noise
    cv2.imwrite(src_path + "removed_noise.png", img)
    # Apply threshold to get an image with only black and white (optional)
    #img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    # Write the preprocessed image
    cv2.imwrite(src_path + "thres.png", img)
    # Recognize text with tesseract for python
    result = pytesseract.image_to_string(Image.open(src_path + "thres.png"))
    # Remove template file
    # os.remove(temp)
    return result

print(get_string(src_path + "dummy.png"))