I want to extract numbers from a captcha image, so I tried the code from this answer:
try:
    from PIL import Image
except ImportError:
    import Image
import pytesseract
import cv2

file = 'sample.jpg'
img = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, None, fx=10, fy=10, interpolation=cv2.INTER_LINEAR)
img = cv2.medianBlur(img, 9)
th, img = cv2.threshold(img, 185, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4, 8))
img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("sample2.jpg", img)

file = 'sample2.jpg'
text = pytesseract.image_to_string(file)
print(''.join(x for x in text if x.isdigit()))
It worked fine for this image (output: 436359), but when I tried it on this image, it gave me nothing; the output was empty.
How can I modify my code to get the numbers as a string from the second image?
EDIT:
I tried Matt's answer and it worked just fine for the image above, but it doesn't recognise the numbers 8 and 1 in image A, or the number 7 in image B:
image A
image B
How to fix that?
Often, getting OCR just right on an image like this comes down to the order and parameters of the transformations. For example, in the following code snippet, I first convert to grayscale, then erode the pixels, then dilate, then erode again. I use a threshold to convert to binary (just blacks and whites), and then dilate and erode one more time. For me this produces the correct value of 859917 and should be reproducible.
import cv2
import numpy as np
import pytesseract

file = 'sample2.jpg'
img = cv2.imread(file)

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# erode, then dilate, then erode again to consolidate the digit strokes
ekernel = np.ones((1, 2), np.uint8)
eroded = cv2.erode(gray, ekernel, iterations=1)
dkernel = np.ones((2, 3), np.uint8)
dilated = cv2.dilate(eroded, dkernel, iterations=1)
ekernel = np.ones((2, 2), np.uint8)
eroded_again = cv2.erode(dilated, ekernel, iterations=1)

# threshold to binary (blacks and whites only)
th, threshed = cv2.threshold(eroded_again, 200, 255, cv2.THRESH_BINARY)

# dilate and erode one more time on the binary image
dkernel = np.ones((2, 2), np.uint8)
threshed_dilated = cv2.dilate(threshed, dkernel, iterations=1)
ekernel = np.ones((2, 2), np.uint8)
threshed_eroded = cv2.erode(threshed_dilated, ekernel, iterations=1)

text = pytesseract.image_to_string(threshed_eroded)
print(''.join(x for x in text if x.isdigit()))
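If stray letters still slip through, one optional tweak worth trying (my own suggestion, not part of the answer above) is to restrict tesseract to digits via its character whitelist; whitelist support varies between the legacy and LSTM engines across tesseract 4.x versions, so treat this as a sketch:
# hedged tweak: whitelist digits so stray letters are dropped at recognition
# time; tessedit_char_whitelist is honoured reliably by the legacy engine
# (--oem 0) but only partially by some LSTM builds
text = pytesseract.image_to_string(
    threshed_eroded,
    config='--psm 7 -c tessedit_char_whitelist=0123456789'
)
print(text.strip())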
See: Converting An Image To A Cartoon Using OpenCV
In the above article they have the following image:
And, they wanted to obtain an output like the following:
I ran the following script:
import cv2
window_name = 'image'
img = cv2.imread("photo.png")
cv2.imshow(window_name, img)
cv2.waitKey(0)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)
cv2.imshow(window_name, edges)
cv2.waitKey(0)
color = cv2.bilateralFilter(img, 9, 250, 250)
cartoon = cv2.bitwise_and(color, color, mask=edges)
cv2.imshow(window_name, cartoon)
cv2.waitKey(0)
Firstly, the script is very slow.
Secondly, the output is not what they promised it would be:
How can I fix these two issues?
One simple approach is to use stylization in Python/OpenCV, from the Non-Photorealistic Rendering part of OpenCV's Computational Photography module, to make a "cartoon". The underlying algorithm is described at https://www.inf.ufrgs.br/~eslgastal/DomainTransform/Gastal_Oliveira_SIGGRAPH2011_Domain_Transform.pdf
Input:
import cv2
# read image
img = cv2.imread('beard_man.png')
# apply OpenCV's edge-preserving stylization filter
result = cv2.stylization(img, sigma_s=50, sigma_r=0.8)
# write result to disk
cv2.imwrite("beard_man_cartoon.png", result)
# display it
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Result:
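As a side note (my addition, not from the referenced article), the same Non-Photorealistic Rendering module exposes a couple of related filters that may be worth comparing; the parameter values here are illustrative starting points, not tuned recommendations:
# companion NPR filters in OpenCV's photo module
sketch_gray, sketch_color = cv2.pencilSketch(img, sigma_s=60, sigma_r=0.07, shade_factor=0.05)
smoothed = cv2.edgePreservingFilter(img, flags=cv2.RECURS_FILTER, sigma_s=60, sigma_r=0.4)
cv2.imwrite("beard_man_sketch.png", sketch_color)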
Brief description
I was so interested in your question that I tried the code from the website you suggested, the code you posted, and a few more that I googled myself. I even discussed it with my peers and with the professor who taught the introductory image processing/computer vision course (in C#) that I took a couple of years ago.
Discussion feedback
Sadly, they all responded the same way I initially thought: it's not possible to transform/convert the input directly into the second picture in your post; that second picture is most likely an artistic graphics photo. That said, if you dig deeper, there may be a module or library that can actually get you all the way to the second picture.
Example code testing
So I began trying out the contents of the website you posted, snipped a bit here, adjusted some parameters there, but overall the result was nowhere near the second cartoon picture.
The code and result of "Converting An Image To A Cartoon Using OpenCV"
import cv2
from matplotlib import pyplot as plt
# Reading image
img = cv2.imread("img.png")
plt.imshow(img)
# Converting to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
# Detecting edges of the input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 9)
edges = cv2.adaptiveThreshold(
gray, 255,
cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY, 9, 9
)
# Cartoonifying the image
color = cv2.bilateralFilter(img, 9, 250, 250)
cartoon = cv2.bitwise_and(color, color, mask=edges)
plt.imshow(cartoon)
plt.savefig("cartoonify.png")
plt.show()
Moving on, I then tried the code from your post, and it actually made some difference; it didn't run slowly for me, and it did change the image. The code stays pretty much the same; I just added cv2.imwrite() at the end to save the result.
import cv2
import matplotlib.pyplot as plt
window_name = "image"
img = cv2.imread("img.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
edges = cv2.adaptiveThreshold(
gray, 255,
cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY,
9, 9
)
color = cv2.bilateralFilter(img, 9, 250, 250)
cartoon = cv2.bitwise_and(color, color, mask=edges)
cv2.imshow(window_name, cartoon)
cv2.waitKey(0)
cv2.imwrite("cartoon_op.png", cartoon)
cv2.waitKey(0)
cv2.destroyAllWindows()
Third, I searched on GitHub and found this code. For this one I used my Stack Overflow profile picture, which is a headshot; I thought the white background might make the difference more visible, but it didn't. Compared to the previous examples, the result is pretty much the same.
import cv2
import numpy as np
from tkinter.filedialog import *
photo = askopenfilename()
img = cv2.imread(photo)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grey = cv2.medianBlur(grey, 5)
edges = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)
#cartoonize
color = cv2.bilateralFilter(img, 9, 250, 250)
cartoon = cv2.bitwise_and(color, color, mask = edges)
cv2.imshow("Image", img)
cv2.imshow("Cartoon", cartoon)
#save
cv2.imwrite("cartoon-git.png", cartoon)
cv2.waitKey(0)
cv2.destroyAllWindows()
Just before finishing this answer, I found the example that gives the closest cartoonized result, on Dev: How to cartoonize an image with Python. This example uses Elon Musk's photo to demonstrate; although it comes closest to a cartoon, the output image somehow ends up really small.
import numpy as np
import cv2

file_name = "elon.jpg"

def resize_image(image):
    scale_ratio = 0.3
    width = int(image.shape[1] * scale_ratio)
    height = int(image.shape[0] * scale_ratio)
    new_dimensions = (width, height)
    resized = cv2.resize(image, new_dimensions, interpolation=cv2.INTER_AREA)
    return resized

def find_contours(image):
    contoured_image = image
    gray = cv2.cvtColor(contoured_image, cv2.COLOR_BGR2GRAY)
    edged = cv2.Canny(gray, 30, 100)
    contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cv2.drawContours(contoured_image, contours, contourIdx=-1, color=1, thickness=1)
    cv2.imshow("Image after contouring", contoured_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return contoured_image

def color_quantization(image, k=4):
    # k-means clustering in color space to reduce the palette to k colors
    z = image.reshape((-1, 3))
    z = np.float32(z)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10000, 0.0001)
    compactness, label, center = cv2.kmeans(z, k, None, criteria, 1, cv2.KMEANS_RANDOM_CENTERS)
    center = np.uint8(center)
    res = center[label.flatten()]
    res2 = res.reshape(image.shape)
    return res2

if __name__ == '__main__':
    image = cv2.imread(file_name)
    resized_image = resize_image(image)
    coloured = color_quantization(resized_image)
    contoured = find_contours(coloured)
    final_image = contoured
    save_q = input("Save the image? [y]/[n]: ")
    if save_q == "y":
        cv2.imwrite("cartoonized_" + file_name, final_image)
        print("Image saved!")
Original Elon.jpg
Cartoonized Elon.jpg
Wrapping up
I hope this long answer helps, even though it offers no definitive solution; it's just what I found interesting, and I decided to share the process of discovering it.
I am using tesseract for OCR, via the pytesseract bindings. Unfortunately, I encounter difficulties when trying to extract text including subscript-style numbers - the subscript number is interpreted as a letter instead.
For example, in the basic image:
I want to extract the text as "CH3", i.e. I am not concerned about knowing that the number 3 was a subscript in the image.
My attempt at this using tesseract is:
import cv2
import pytesseract
img = cv2.imread('test.jpeg')
# Note that I have reduced the region of interest to the known
# text portion of the image
text = pytesseract.image_to_string(
    img[200:300, 200:320], config='-l eng --oem 1 --psm 13'
)
print(text)
Unfortunately, this will incorrectly output
'CHs'
It's also possible to get 'CHa', depending on the psm parameter.
I suspect that this issue is related to the "baseline" of the text being inconsistent across the line, but I'm not certain.
How can I accurately extract the text from this type of image?
Update - 19th May 2020
After seeing Achintha Ihalage's answer, which doesn't provide any configuration options to tesseract, I explored the psm options.
Since the region of interest is known (in this case, I am using EAST detection to locate the bounding box of the text), the psm config option for tesseract, which in my original code treats the text as a single line, may not be necessary. Running image_to_string against the region of interest given by the bounding box above gives the output
CH
3
which can, of course, be easily processed to get CH3.
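For completeness, a minimal sketch of that post-processing (assuming roi holds the bounding-box crop from the EAST step):
# collapse tesseract's multi-line output into one token; assumes the ROI
# contains a single label such as CH3
raw = pytesseract.image_to_string(roi)
print(''.join(raw.split()))  # 'CH\n3' -> 'CH3'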
This is because the subscript font is too small. You could resize the image using a Python package such as cv2 or PIL and use the resized image for OCR, as coded below.
import pytesseract
import cv2
img = cv2.imread('test.jpg')
img = cv2.resize(img, None, fx=2, fy=2) # scaling factor = 2
data = pytesseract.image_to_string(img)
print(data)
OUTPUT:
CH3
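One variation worth testing (my assumption, not benchmarked in this answer) is the interpolation mode; cubic interpolation can keep glyph edges smoother than the default bilinear when upscaling text. This would replace the resize line above:
# INTER_CUBIC is slower but often smoother than the default INTER_LINEAR
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)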
You want to apply pre-processing to your image before feeding it into tesseract, to increase the accuracy of the OCR. I use a combination of PIL and cv2 here because cv2 has good filters for blur/noise removal (dilation, erosion, threshold) and PIL makes it easy to enhance the contrast (to distinguish the text from the background), and I wanted to show how pre-processing could be done using either library (using both together is not strictly necessary, as shown below). You could write this more elegantly; it's just the general idea.
import cv2
import pytesseract
import numpy as np
from PIL import Image, ImageEnhance

def cv2_preprocess(image_path):
    img = cv2.imread(image_path)
    # convert to grayscale if not already
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # remove noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # apply a blur, then an Otsu threshold (good for gaussian noise)
    img = cv2.threshold(cv2.GaussianBlur(img, (9, 9), 0), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    # this can be used for salt-and-pepper noise instead (not necessary here)
    # img = cv2.adaptiveThreshold(cv2.medianBlur(img, 7), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    cv2.imwrite('new.jpg', img)
    return 'new.jpg'

def pil_enhance(image_path):
    image = Image.open(image_path)
    contrast = ImageEnhance.Contrast(image)
    contrast.enhance(2).save('new2.jpg')
    return 'new2.jpg'

img = cv2.imread(pil_enhance(cv2_preprocess('test.jpg')))
text = pytesseract.image_to_string(img)
print(text)
Output:
CH3
The cv2 pre-process produces an image that looks like this:
The enhancement with PIL gives you:
In this specific example, you can actually stop after the cv2_preprocess step, because that image is already clear enough for tesseract to read:
img = cv2.imread(cv2_preprocess('test.jpg'))
text = pytesseract.image_to_string(img)
print(text)
output:
CH3
But if you are working with images that don't necessarily start from a white background (i.e. grey-scaling converts to light grey instead of white), I have found the PIL step really helps there.
The main point is that the methods to increase tesseract's accuracy are typically:
fix DPI (rescaling)
fix the brightness/noise of the image
fix the text size/lines (deskewing/dewarping text)
Doing one of these, or all three of them, will help, but the brightness/noise fix can be more generalizable than the other two (at least in my experience). A sketch of the rescaling and deskewing steps follows.
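This is my own illustration, not from the answer above; the thresholds and the angle correction are assumptions to tune (cv2.minAreaRect's angle convention changed in OpenCV 4.5):
import cv2
import numpy as np

def rescale_and_deskew(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # upscale so the glyphs approach the size tesseract handles best
    gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    # estimate skew from the minimum-area rectangle around the ink pixels
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    coords = np.column_stack(np.where(thresh > 0)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]
    # assumes the pre-OpenCV-4.5 (-90, 0] angle convention
    if angle < -45:
        angle = 90 + angle
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
    return cv2.warpAffine(gray, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)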
I think this approach can be more suitable for the general situation.
import cv2
import pytesseract
from pathlib import Path
image = cv2.imread('test.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]  # suitable for sharp black-and-white pictures
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]  # handle the differing return values of OpenCV 2.4/4.x vs OpenCV 3.x
result_list = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    area = cv2.contourArea(c)
    if area > 200:
        detect_area = image[y:y + h, x:x + w]
        # detect_area = cv2.GaussianBlur(detect_area, (3, 3), 0)
        predict_char = pytesseract.image_to_string(detect_area, lang='eng', config='--oem 0 --psm 10')
        result_list.append((x, predict_char))
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), thickness=2)
result = ''.join([char for _, char in sorted(result_list, key=lambda _x: _x[0])])
print(result) # CH3
output_dir = Path('./temp')
output_dir.mkdir(parents=True, exist_ok=True)
cv2.imwrite(f"{output_dir/Path('image.png')}", image)
cv2.imwrite(f"{output_dir/Path('clean.png')}", thresh)
MORE REFERENCES
I strongly suggest you refer to the following examples, which are useful references for OCR.
Get the location of all text present in image using opencv
Using YOLO or other image recognition techniques to identify all alphanumeric text present in images
I am using pytesseract to extract text from images. Before extracting text with pytesseract, I use Pillow and cv2 to reduce noise and enhance the image:
import numpy as np
import pytesseract
from PIL import Image, ImageFilter, ImageEnhance
import cv2
img = cv2.imread('ss.png')
img = cv2.resize(img, (0,0), fx=3, fy=3)
cv2.imwrite("new.png", img)
img1 = cv2.imread("new.png", 0)
#Apply dilation and erosion
kernel = np.ones((2, 2), np.uint8)
img1 = cv2.dilate(img1, kernel, iterations=1)
img1 = cv2.erode(img1, kernel, iterations=1)
img1 = cv2.adaptiveThreshold(img1,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,2)
cv2.imwrite("new1.png", img1)
img2 = Image.open("new1.png")
#Enhance the image
img2 = img2.filter(ImageFilter.MedianFilter())
enhancer = ImageEnhance.Contrast(img2)
img2 = enhancer.enhance(2)
img2.save('new2.png')
result = pytesseract.image_to_string(Image.open("new2.png"))
print(result)
I mostly get good results, but when I use some low quality/resolution images, I do not get the expected output. Can I improve this in my code?
Example:
Input:
new1.png:
new2.png:
The string that I get from the console is play. What could I change in my algorithm so that I get the whole string extracted?
Any help would be greatly appreciated.
This is a late answer, but I just came across this. We can use Pillow and cv2 to reduce noise and enhance the image before extracting text from images using pytesseract. I hope it will help someone in the future.
# Import required libraries
import cv2
import numpy as np
import pytesseract
from PIL import Image

src_path = "C:/Users/chethan/Desktop/"

def get_string(img_path):
    # Read image with OpenCV
    img = cv2.imread(img_path)
    # Convert to grayscale
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # Write image after noise removal
    cv2.imwrite(src_path + "removed_noise.png", img)
    # Optionally apply a threshold to get an image with only black and white
    # img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    # Write the pre-processed image
    cv2.imwrite(src_path + "thres.png", img)
    # Recognize text with tesseract for python
    result = pytesseract.image_to_string(Image.open(src_path + "thres.png"))
    return result

print(get_string(src_path + "dummy.png"))
I have an invoice image, and I want to detect the text on it. So I plan to use two steps: first, identify the text areas, and then use OCR to recognize the text.
I am using OpenCV 3.0 in Python for this. I am able to identify the text (including some non-text areas), but I further want to identify text boxes from the image (also excluding the non-text areas).
My input image is:
And the output is:
I am using the below code for this:
img = cv2.imread('/home/mis/Text_Recognition/bill.jpg')
mser = cv2.MSER_create()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #Converting to GrayScale
gray_img = img.copy()
regions = mser.detectRegions(gray, None)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions]
cv2.polylines(gray_img, hulls, 1, (0, 0, 255), 2)
cv2.imwrite('/home/mis/Text_Recognition/amit.jpg', gray_img) #Saving
Now, I want to identify the text boxes, and remove/unidentify any non-text areas on the invoice. I am new to OpenCV and a beginner in Python. I was able to find some MATLAB and C++ examples, but converting them to Python would take a lot of time for me.
Is there any example with python using OpenCV, or can anyone help me with this?
Below is the code
# Import packages
import cv2
import numpy as np
#Create MSER object
mser = cv2.MSER_create()
#Your image path, i.e. the receipt path
img = cv2.imread('/home/rafiullah/PycharmProjects/python-ocr-master/receipts/73.jpg')
#Convert to gray scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
vis = img.copy()
#detect regions in gray scale image
regions, _ = mser.detectRegions(gray)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions]
cv2.polylines(vis, hulls, 1, (0, 255, 0))
cv2.imshow('img', vis)
cv2.waitKey(0)
mask = np.zeros((img.shape[0], img.shape[1], 1), dtype=np.uint8)
for contour in hulls:
    cv2.drawContours(mask, [contour], -1, (255, 255, 255), -1)
#this keeps only the text regions; the remaining areas are ignored
text_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow("text only", text_only)
cv2.waitKey(0)
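If non-text blobs survive, one further step (my addition, not part of the code above) is to filter the hulls by bounding-box geometry before building the mask; the numeric bounds below are illustrative assumptions to tune per invoice:
# keep only hulls whose size and aspect ratio look text-like
filtered_hulls = []
for contour in hulls:
    x, y, w, h = cv2.boundingRect(contour)
    aspect = w / float(h)
    if 50 < w * h < 20000 and 0.1 < aspect < 15:
        filtered_hulls.append(contour)

mask = np.zeros((img.shape[0], img.shape[1], 1), dtype=np.uint8)
for contour in filtered_hulls:
    cv2.drawContours(mask, [contour], -1, (255, 255, 255), -1)
text_only = cv2.bitwise_and(img, img, mask=mask)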
This is an old post, yet I'd like to contribute: if you are trying to extract all the text out of an image, here is code to get that text in an array.
import cv2
import re
import pytesseract
from pytesseract import image_to_string
from PIL import Image

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

image_obj = Image.open("screenshot.png")
rgb = cv2.imread('screenshot.png')
small = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)

# threshold the image
_, bw = cv2.threshold(small, 0.0, 255.0, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# get a horizontal mask of large size, since text lines are horizontal components
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (20, 1))
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)

# find all the contours
contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# segment the text lines
array_of_texts = []
for idx in range(len(contours)):
    x, y, w, h = cv2.boundingRect(contours[idx])
    cropped_image = image_obj.crop((x - 10, y, x + w + 10, y + h))
    str_store = re.sub(r'([^\s\w]|_)+', '', image_to_string(cropped_image))
    array_of_texts.append(str_store)

print(array_of_texts)
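One caveat I'd add (my own note, not in the original snippet): cv2.findContours returns contours in no guaranteed reading order, so if line order matters, sort the bounding boxes by their vertical position before cropping, for example:
# sort crops top-to-bottom so the extracted lines follow reading order
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[1])
ordered_texts = [
    re.sub(r'([^\s\w]|_)+', '', image_to_string(image_obj.crop((x - 10, y, x + w + 10, y + h))))
    for (x, y, w, h) in boxes
]
print(ordered_texts)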