I have a cropped image and I am trying to extract the numbers from it.
Here's the code I am using:
import cv2
import pytesseract

image = cv2.imread('Cropped.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
invert = 255 - opening
data = pytesseract.image_to_string(invert, lang='eng', config='--psm 6')
print(data)
Here's the sample cropped image
All I got was some of the numbers, not all of them. How can I enhance such an image to be able to extract only the numbers?
I tried the code on this image but it doesn't return the correct numbers.
You can easily solve this with three main steps:
Upsampling
Applying a simple threshold
Setting the configuration to digits
Upsampling is needed for accurate recognition; otherwise tesseract may misinterpret the digits.
Thresholding displays only the features of the image.
Setting the configuration to digits will make tesseract recognize only the digits.
Result
Upsampling
Threshold
Pytesseract
277032200746
Code:
import cv2
import pytesseract
img1 = cv2.imread("kEpyN.png")  # "FX2in.png"
gry1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
(h, w) = gry1.shape[:2]
gry1 = cv2.resize(gry1, (w*2, h*2))  # upsample by a factor of 2
thr1 = cv2.threshold(gry1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
txt1 = pytesseract.image_to_string(thr1, config="digits")  # recognize digits only
print("".join(t for t in txt1 if t.isalnum()))  # keep only alphanumeric characters
cv2.imshow("thr1", thr1)
cv2.waitKey(0)
Update:
Most probably a version mismatch causes the extra words and digits.
One way to solve this is to take a sub-region of the image.
For instance, from the thresholded image:
(h_thr, w_thr) = thr1.shape[:2]
thr1 = thr1[0:h_thr-10, int(w_thr/2)-400:int(w_thr/2)+200]
Result will be:
Now if you read it, the result should be this output:
277032200746
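If your tesseract installation doesn't include the digits config file, a character whitelist may work as an alternative (note: whitelist support varies across tesseract 4.x versions):
txt1 = pytesseract.image_to_string(thr1, config="--psm 6 -c tessedit_char_whitelist=0123456789")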
I'm using cv2 and pytesseract library to extract text from image. Here is the image (image3_3.png) and the python code:
import cv2
import numpy as np
import pytesseract
from PIL import Image

def threshold_image(img_src):
    """Grayscale image and apply Otsu's threshold"""
    # Grayscale
    img_gray = cv2.cvtColor(img_src, cv2.COLOR_BGR2GRAY)
    # Binarisation and Otsu's threshold
    img_thresh = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    return img_thresh
img = np.array(Image.open('image3_3.png'))
# Apply dilation and erosion to remove some noise
kernel = np.ones((1, 1), np.uint8)
img = cv2.dilate(img, kernel, iterations=1)
img = cv2.erode(img, kernel, iterations=1)
# normalise the image
norm_img = np.zeros((img.shape[0], img.shape[1]))
img = cv2.normalize(img, norm_img, 0, 255, cv2.NORM_MINMAX)
# Apply blur to smooth out the edges
img = cv2.GaussianBlur(img, (5, 5), 0)
string_ocr = pytesseract.image_to_string(threshold_image(img), lang = 'eng', config = '--psm 6')
print(string_ocr)
Here is the result:
Image A3. This is image A3 with more texts.
ISAS Visual Analytics
INow everyone can easily discover and share powerful
Nsights that inspire action
Why am I not getting the exact same text? Any help is highly appreciated.
I have been working on a project which involves extracting text from an image. I have researched that tesseract is one of the best libraries available, and I decided to use it along with OpenCV. OpenCV is needed for image manipulation.
I have been playing a lot with the tesseract engine and it does not seem to be giving the expected results. I have attached the image as a reference. The output I got is:
1] =501 [
Instead, expected output is
TM10-50%L
What I have done so far:
Remove noise
Adaptive threshold
Sending it to the tesseract OCR engine
Are there any other suggestions to improve the algorithm?
Thanks in advance.
Snippet of the code:
import cv2
import sys
import pytesseract
import numpy as np
from PIL import Image
if __name__ == '__main__':
    if len(sys.argv) < 2:
        print('Usage: python ocr_simple.py image.jpg')
        sys.exit(1)
    # Read image path from command line
    imPath = sys.argv[1]
    gray = cv2.imread(imPath, 0)
    # Blur
    blur = cv2.GaussianBlur(gray, (9,9), 0)
    # Binarizing
    thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 5, 3)
    text = pytesseract.image_to_string(thresh)
    print(text)
Images attached.
The first image is the original image. Original image
The second image is what has been fed to tesseract. Input to tesseract
Before performing OCR on an image, it's important to preprocess the image. The idea is to obtain a processed image where the text to extract is in black with the background in white. For this specific image, we need to obtain the ROI before we can OCR.
To do this, we can convert to grayscale, apply a slight Gaussian blur, then adaptive threshold to obtain a binary image. From here, we can apply morphological closing to merge the individual letters together. Next we find contours, filter using contour area filtering, and then extract the ROI. We perform text extraction using the --psm 6 configuration option to assume a single uniform block of text. Take a look at the Tesseract documentation for more page segmentation options.
Detected ROI
Extracted ROI
Result from Pytesseract OCR
TM10=50%L
Code
import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Grayscale, Gaussian blur, Adaptive threshold
image = cv2.imread('1.jpg')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 5, 5)
# Perform morph close to merge letters together
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=3)
# Find contours, contour area filtering, extract ROI
cnts, _ = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
for c in cnts:
    area = cv2.contourArea(c)
    if area > 1800 and area < 2500:
        x,y,w,h = cv2.boundingRect(c)
        ROI = original[y:y+h, x:x+w]
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)

# Perform text extraction
ROI = cv2.GaussianBlur(ROI, (3,3), 0)
data = pytesseract.image_to_string(ROI, lang='eng', config='--psm 6')
print(data)
cv2.imshow('ROI', ROI)
cv2.imshow('close', close)
cv2.imshow('image', image)
cv2.waitKey()
I have these two images:
The first one clearly has a higher quality than the second one (even if the second one's quality isn't that bad). I process the two images with OpenCV in order to read the text with Tesseract, like this:
import cv2
import pytesseract

# scr_crop is the cropped screenshot (a BGR image)
img = cv2.cvtColor(scr_crop, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(img, 220, 255, cv2.THRESH_BINARY)[1]
# Create custom kernel
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
# Perform closing (dilation followed by erosion)
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# Invert image to use for Tesseract
result = 255 - close
# result = cv2.resize(result, (0, 0), fx=2, fy=2)
text = pytesseract.image_to_string(result, lang="ita")
So I first perform a dilation and then an erosion on the gray-scaled versions of the two images, obtaining these two results:
So, as you can see, for the first image I obtain a great result and tesseract is able to read the text, while I obtain a bad result for the second image and tesseract is not able to read the text. How can I improve the quality of the second image in order to obtain a better result from tesseract?
For the second image, just apply thresholding with a different threshold type.
Instead of cv2.THRESH_BINARY, use cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU.
Image will become:
and if you read:
txt = pytesseract.image_to_string(thr)
print(txt)
Result will be:
Esiti Positivi: 57
Esiti Negativi: 1512
Numerosita: 1569
Tasso di Conversione: 3.63%
Now what do cv2.THRESH_BINARY_INV and cv2.THRESH_OTSU mean?
cv2.THRESH_BINARY_INV is the opposite operation of cv2.THRESH_BINARY: if the current pixel value is greater than the threshold, it is set to 0; otherwise it is set to maxval (255 in our case). (source)
cv2.THRESH_OTSU finds the optimal threshold value using Otsu's algorithm.
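As a side note, cv2.threshold also returns the threshold value it actually used, so you can check what Otsu's algorithm picked (a minimal sketch; the file name is just a placeholder):
import cv2

gry = cv2.imread("some_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
# With THRESH_OTSU, the manually supplied 220 is ignored and the first
# return value is the threshold Otsu computed
otsu_t, thr = cv2.threshold(gry, 220, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
print("Otsu threshold:", otsu_t)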
Code:
import cv2
import pytesseract
img = cv2.imread("c7xq9.png")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thr = cv2.threshold(gry, 220, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1]
txt = pytesseract.image_to_string(thr)
print(txt)
cv2.imshow("thr", thr)
cv2.waitKey(0)
I'm trying to extract text from an image using Python cv2. The result is pathetic and I can't figure out a way to improve my code.
I believe the image needs to be processed before the extraction of text, but I'm not sure how.
I've tried to convert it into black and white, but no luck.
import cv2
import os
import pytesseract
from PIL import Image
import time
pytesseract.pytesseract.tesseract_cmd='C:\\Program Files\\Tesseract-OCR\\tesseract.exe'
cam = cv2.VideoCapture(1,cv2.CAP_DSHOW)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 8000)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 6000)
while True:
    return_value, image = cam.read()
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    image = image[127:219, 508:722]
    #(thresh, image) = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    cv2.imwrite('test.jpg', image)
    print('Text detected: {}'.format(pytesseract.image_to_string(Image.open('test.jpg'))))
    time.sleep(2)
cam.release()
#os.system('del test.jpg')
Preprocessing to clean the image before performing text extraction can help. Here's a simple approach:
Convert image to grayscale and sharpen image
Apply Otsu's threshold
Perform morphological operations to clean image
Invert image
First we convert to grayscale, then sharpen the image using a sharpening kernel.
Next we apply Otsu's threshold to obtain a binary image.
Now we perform morphological transformations to smooth the image.
Finally, we invert the image.
import cv2
import numpy as np
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
sharpen_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
sharpen = cv2.filter2D(gray, -1, sharpen_kernel)
thresh = cv2.threshold(sharpen, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
result = 255 - close
cv2.imshow('sharpen', sharpen)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('result', result)
cv2.waitKey()
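From here the inverted result can be passed to Pytesseract for the actual text extraction, for example (a minimal sketch; --psm 6 assumes a single uniform block of text):
import pytesseract

text = pytesseract.image_to_string(result, lang='eng', config='--psm 6')
print(text)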
Here I am using the script below to remove the black spots near the image border and the line through the numbers. It removes some of the noise, but not properly.
def get_string(img_path):
    # Read image with opencv
    img = cv2.imread(img_path)
    # Convert to gray
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=12)
    img = cv2.erode(img, kernel, iterations=12)
    # Write image after removed noise
    cv2.imwrite(src_path + "removed_noise.png", img)
    # Apply threshold to get image with only black and white
    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    # Write the image after apply opencv to do some ...
    cv2.imwrite(src_path + "thres.png", img)
    # Recognize text with tesseract for python
    result = pytesseract.image_to_string(Image.open(src_path + "vertical_final.jpg"))
    # Remove template file
    #os.remove(temp)
    return result
but it's not working properly.
Input image:
Output Image:-
I need someone to help me out with these problems; any help is highly appreciated.
Source Code:-
def get_string(img_path):
    # Read image with opencv
    img = cv2.imread(img_path)
    # Convert to gray
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 20), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    #img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    kernel = np.ones((1, 1), np.uint8)
    #img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    cv2.imwrite(src_path + "removed_noise.png", img)
    img3 = cv2.subtract(cv2.imread(src_path + "removed_noise.png"), cv2.imread(src_path + "tax_amount.png"))
    cv2.imwrite(src_path + "removed_noise_makes_00.png", img3)
    lower_black = np.array([0, 0, 0], dtype="uint16")
    upper_black = np.array([70, 70, 70], dtype="uint16")
    black_mask = cv2.inRange(img3, lower_black, upper_black)
    black_mask[np.where((black_mask == [0]).all(axis=1))] = [255]
    opening = cv2.morphologyEx(black_mask, cv2.MORPH_CLOSE, kernel)
    cv2.imwrite(src_path + "removed_noise_makes_00_1.png", opening)
    # Recognize text with tesseract for python
    result = pytesseract.image_to_string(Image.open(src_path + "removed_noise_makes_00_1.png"))
    # Remove template file
    #os.remove(temp)
    return result
Where you do
kernel = np.ones((1, 1), np.uint8)
img = cv2.dilate(img, kernel, iterations=12)
You apply a dilation with a 1x1 structuring element (SE) 12 times. Unless OpenCV does something special with such an SE, this code should not change your image at all.
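You can verify this with a quick sketch on a random test image:
import cv2
import numpy as np

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)  # stand-in for the real image
kernel = np.ones((1, 1), np.uint8)
dilated = cv2.dilate(img, kernel, iterations=12)
# A 1x1 SE takes the maximum over a single pixel, so the output is identical
print(np.array_equal(img, dilated))  # True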
You should create a larger SE:
kernel = np.ones((7, 7), np.uint8)
img = cv2.dilate(img, kernel, iterations=1)
img = cv2.erode(img, kernel, iterations=1)
This will first dilate and then erode the result. What this accomplishes is that small (thin) black regions disappear. These are the regions where the SE didn't fit. This is the same as
img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
To remove the long line, you want to apply a closing with an elongated SE:
kernel = np.ones((1, 30), np.uint8)
line = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
This leaves only the horizontal line. The difference of img and line is the text without the line.
If you think of img as the sum of line and text, then img - line will be text. However, there is a small problem still: img has white background (255), and black foreground. So really, it is img = 255 - text - line, and the line image you found above is really 255 - line, because it also has white background. So directly taking the difference will not produce the desired effect.
The solution is to invert your images first:
img = 255 - img
line = 255 - line
text = img - line
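Putting these steps together, a minimal sketch of the whole approach might look like this (the file name is a placeholder, and the kernel sizes may need tuning for your image):
import cv2
import numpy as np
import pytesseract

# Placeholder path: a grayscale image with white background, black text,
# and a horizontal line struck through the numbers
img = cv2.imread("struck_numbers.png", cv2.IMREAD_GRAYSCALE)

# Closing with an elongated SE erases black features thinner than 30 px
# horizontally, so only the long line survives (still black on white)
kernel = np.ones((1, 30), np.uint8)
line = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

# Invert both images so the foreground is white, then subtract the line;
# cv2.subtract saturates at 0 instead of wrapping around like raw uint8 math
img_inv = 255 - img
line_inv = 255 - line
text = cv2.subtract(img_inv, line_inv)

# Flip back to black-on-white before handing it to tesseract
result = pytesseract.image_to_string(255 - text)
print(result)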