I am working on an OCR problem for bank receipts, and I need to extract details like the date and account number from them. After preprocessing the input, I am using Tesseract-OCR (via pytesseract in Python). I have obtained the hOCR output file; however, I am not able to make sense of it. How do we extract information from the hOCR output file? Note that the receipt has numbers filled in boxes, like on normal forms.
I used the code below for extraction. Should I use a different encoding?
import os
# Read the hOCR file Tesseract produced (it is UTF-8 encoded XHTML).
if os.path.isfile('output.hocr'):
    with open('output.hocr', 'r', encoding='UTF-8') as fp:
        text = fp.read()
Note: The attached image is one example of the data. These images come from PDF files which I am converting into images programmatically.
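For reference, hOCR output is plain XHTML, so it can be parsed with any HTML parser. A minimal sketch using BeautifulSoup, assuming the standard ocrx_word spans that Tesseract emits (each span's title attribute carries the bounding box):
from bs4 import BeautifulSoup
with open('output.hocr', 'r', encoding='UTF-8') as fp:
    soup = BeautifulSoup(fp.read(), 'html.parser')
# Each recognized word is a span with class "ocrx_word"; its title
# attribute looks like "bbox x0 y0 x1 y1; x_wconf 96".
for word in soup.find_all('span', class_='ocrx_word'):
    bbox = word['title'].split(';')[0].replace('bbox ', '').split()
    print(word.get_text(strip=True), [int(v) for v in bbox])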
I personally would use something like Tesseract to do the OCR and then perhaps OpenCV with SURF for the tick boxes...
Or even do edge detection with OpenCV for each section and OCR that specific area, to make it more robust by analyzing that specific region rather than the whole document...
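A minimal sketch of that second idea with OpenCV and pytesseract (the contour-area cutoff of 1000 px is a placeholder you would tune for your form):
import cv2
import pytesseract
img = cv2.imread('receipt.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# Find the outlines of the boxes/sections and OCR each one separately.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h < 1000:   # skip tiny contours (noise); tune for your image
        continue
    print(pytesseract.image_to_string(gray[y:y + h, x:x + w]))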
You can simply provide the image as input, instead of processing and creating an HOCR output file.
Try:
from PIL import Image
import pytesseract
# Run Tesseract's English model over the receipt image.
im = Image.open("receipt.jpg")
text = pytesseract.image_to_string(im, lang='eng')
print(text)
This program takes the location of the image to be run through OCR, extracts the text from it, stores it in the variable text, and prints it out. If you want, you can also write the contents of text to a separate file.
P.S.: The image you are trying to process is far more complex than the images Tesseract is designed to deal with, so you may get incorrect results. I would definitely recommend optimizing the input before OCR: reduce the character set Tesseract is allowed to use, preprocess the image before passing it to OCR, upsample the image, keep the DPI above 250, etc.
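A minimal sketch of that kind of preprocessing with Pillow (the 2x scale factor and the threshold of 128 are arbitrary starting points, not tuned values):
from PIL import Image
import pytesseract
im = Image.open("receipt.jpg")
im = im.resize((im.width * 2, im.height * 2), Image.LANCZOS)  # upsample
im = im.convert("L")                                          # grayscale
im = im.point(lambda p: 255 if p > 128 else 0)                # binarize
print(pytesseract.image_to_string(im, lang='eng'))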
I am trying to convert a PDF to an image so I can OCR it. But the quality is being degraded during the conversion.
There seem to be two main methods for converting a PDF to an image (JPG/PNG) with Python - pdf2image and ImageMagick/Wand.
# pdf2image (altering dpi to 300/600 etc. does not seem to make a difference):
from pdf2image import convert_from_path
pages = convert_from_path("page.pdf", dpi=300)
for i, page in enumerate(pages):      # give each page its own filename
    page.save(f"page_{i}.jpg", 'JPEG')
# ImageMagick (Wand lib)
from wand.image import Image
with Image(filename="page.pdf", resolution=300) as img:
    img.compression_quality = 100
    img.save(filename="page.jpg")
But if I simply take a screenshot of the PDF on a Mac, the quality is higher than using either Python conversion method.
A good way to see this is to run Tesseract OCR on the resulting images - both Python methods give average results, whereas the screenshot gives perfect results. (I've tried both PNG and JPG.)
Assume I have infinite time, computing power and storage space. I am only interested in image quality and OCR output. It's frustrating to have the perfect image just within reach, but not be able to generate it in code.
What is going on here? Is there a better way to convert a PDF? Is there a way I can get more direct control? Why is a screenshot doing such a better job than an actual conversion?
You can use PyMuPDF and set the dpi you want:
import fitz  # PyMuPDF
doc = fitz.open('some/pdf/path')
page = doc.load_page(0)            # load the first page
pixmap = page.get_pixmap(dpi=300)  # render at the desired dpi
img = pixmap.tobytes()             # PNG-encoded bytes by default
# Continue with whatever logic...
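To tie it back to OCR, those bytes can be fed straight to Pillow and pytesseract; a minimal sketch, assuming img holds the PNG bytes from above:
import io
from PIL import Image
import pytesseract
im = Image.open(io.BytesIO(img))   # decode the PNG bytes rendered above
print(pytesseract.image_to_string(im))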
So I have these PDFs that are scanned copies of a structured feedback form. The form has these checkboxes and spaces for hand written notes. I am trying to extract the data from these PDFs and save it to an unstructured CSV file.
Now, using pytesseract, I am able to grab the printed text (by first converting the PDF to an image), but I am not able to capture the handwritten content. Is there any way of doing it?
I am enclosing a sample form for reference.
https://imgur.com/a/2FYqWJf
PyTesseract is an OCR program. It has not been trained or designed to recognize handwriting, so you have two options:
1. Retrain it for handwriting (this would be quite time-consuming and complicated, though).
2. Use another library actually meant for recognizing handwriting rather than printed text, like this one: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python-hand-text
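A minimal sketch of calling a handwriting-capable OCR service over REST, here Azure's Read API (the endpoint, key, and v3.2 path are assumptions; check the linked quickstart for current values):
import time
import requests
# Placeholders: substitute your own Azure resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-subscription-key>"
with open("form.jpg", "rb") as f:
    resp = requests.post(
        f"{endpoint}/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
resp.raise_for_status()
# The service is asynchronous; poll the Operation-Location URL it returns.
op_url = resp.headers["Operation-Location"]
while True:
    result = requests.get(op_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)
for page in result.get("analyzeResult", {}).get("readResults", []):
    for line in page["lines"]:
        print(line["text"])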
I need to write a desktop application that performs the following operations. I'm thinking of using Python as the programming language, but I'd be more than glad to switch if there's an appropriate approach or library in any other language.
The file I wish to capture is an HWP file, which only certain word processors can open.
1. Capture the entire HWP document in an image; it might span multiple pages (>10 and <15).
2. The HWP file contains an MCQ-formatted quiz.
3. Parse the data from the image, that is, separate out the questions and answers and save them as separate image files.
I have looked into the following Python library, but am still not able to figure out how to perform both 1 and 3.
https://pypi.python.org/pypi/pyscreenshot
Any help would be appreciated.
If I got it correctly, you need to extract text from an image.
For that you should use an OCR engine like Tesseract.
Before running the OCR, try to remove the noise from the image.
To split the image, add some unique strings to distinguish the quiz questions from the answers, then locate them in the OCR output (see the sketch below).
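A minimal sketch of that splitting idea using pytesseract's word-level output (the marker "Q." is a hypothetical delimiter; use whatever actually precedes each question on your form):
from PIL import Image
import pytesseract
from pytesseract import Output
im = Image.open("quiz_page.jpg")
data = pytesseract.image_to_data(im, output_type=Output.DICT)
# Record the vertical position of every occurrence of the marker string.
tops = [data["top"][i] for i, w in enumerate(data["text"]) if w.strip() == "Q."]
tops.append(im.height)
# Crop the page into one image per question.
for n, (y0, y1) in enumerate(zip(tops, tops[1:])):
    im.crop((0, y0, im.width, y1)).save(f"question_{n}.png")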
Here is the image of a water meter reading after preprocessing...
But whenever I use Tesseract to recognize the digits, it's not giving an appropriate output.
So I want to extract/segment out just the digits part as a region of interest and save it to a new image file, so that Tesseract can recognize it properly...
I am able to remove the extra noise in the image, which is why I am using this option.
Is there any way to do that?
The unprocessed image is:
Before you try extracting the digits from this image, reduce the image size so that each digit is about 16 pixels high. Secondly, restrict Tesseract's character whitelist to "0123456789" to avoid other characters like ",.;'/" being recognized (which is quite common in this type of picture). Lowering the image size should help Tesseract drop this noise rather than scan it in or mix it with digits. This method will certainly not work 100% on this kind of image, but clearing this kind of noise by other means would without a doubt be a challenge. Maybe you could provide us with the unprocessed image if you have one; let's see what is possible then.
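A minimal sketch of both suggestions with OpenCV and pytesseract (--psm 7 treats the crop as a single text line, which is an assumption about the layout; the 16 px target follows the advice above):
import cv2
import pytesseract
img = cv2.imread("meter.png", cv2.IMREAD_GRAYSCALE)
# Scale the crop so the digits end up roughly 16 px tall
# (this assumes the image is already a tight crop of the digit row).
scale = 16 / img.shape[0]
img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
# Whitelist digits only and treat the image as a single line of text.
config = "--psm 7 -c tessedit_char_whitelist=0123456789"
print(pytesseract.image_to_string(img, config=config))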
I want to extract the text information contained in a PostScript image file (the captions to my axis labels).
These images were generated with pgplot. I have tried ps2ascii and ps2txt on Ubuntu, but they didn't produce any useful results. Does anyone know of another method?
Thanks
It's likely that pgplot drew the text directly with line strokes rather than using fonts, especially since pgplot is designed to output to a huge range of devices, including plotters, where you would have to do this.
Edit:
If you have enough plots to be worth the effort then it's a very simple image processing task:
- Convert each page to something like TIFF in monochrome, then threshold the image to binary; the text will be at the maximum pixel value.
- Use a template matching technique (see the sketch below). If you have a limited set of possible labels then just match the entire label; you can even start with a template of the correct size and rotation. Then just flag each plot as containing label[1-n], with no need to read the actual text.
- If you don't know the label then you can still do OCR fairly easily: just extract the region around the axis, rotate it for the vertical, and use Google's free OCR lib.
- If you have pgplot you can even build the training set for the OCR, or the template images, directly rather than having to harvest them from the image list.
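A minimal sketch of the template-matching step with OpenCV (the filenames and the 0.8 score threshold are placeholders):
import cv2
# Both images should already be binarized as described above.
page = cv2.imread("plot_page.tif", cv2.IMREAD_GRAYSCALE)
label = cv2.imread("label_template.tif", cv2.IMREAD_GRAYSCALE)
# Normalized cross-correlation; scores near 1.0 mean a strong match.
result = cv2.matchTemplate(page, label, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:   # placeholder threshold; tune per template
    print(f"label found at {max_loc} (score {max_val:.2f})")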