Capturing screenshot and parsing data from the captured image - python

I need to write a desktop application that performs the following operations. I'm thinking of using Python as the programming language, but I'd be more than glad to switch if there's a more appropriate approach or library in another language.
The file I wish to capture is an HWP file, which only certain word processors can open.

1. Capture the entire HWP document as an image; it might span multiple pages (>10 and <15).
2. The HWP file contains an MCQ-formatted quiz.
3. Parse the data from the image, that is, separate out the questions and answers and save them as separate image files.

I have looked into the following Python library, but am still not able to figure out how to perform steps 1 and 3.
https://pypi.python.org/pypi/pyscreenshot
Any help would be appreciated.

If I understood correctly, you need to extract text from an image.
For that you should use an OCR engine like Tesseract.
Before running the OCR, try to clear noise from the image.
To split the image, try adding some unique marker strings to the document so you can distinguish between the quiz questions and answers.
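For example, a minimal sketch combining pyscreenshot (from the question) with pytesseract; the capture region and the threshold value of 180 are assumptions you would tune for your word processor's window:

import pyscreenshot as ImageGrab
import pytesseract

# Grab the screen region where the word processor shows the page
# (the bbox coordinates are placeholders; adjust them to your window).
im = ImageGrab.grab(bbox=(0, 0, 1200, 1600))

# Simple noise clearing: convert to grayscale, then threshold to black/white.
gray = im.convert("L")
bw = gray.point(lambda p: 255 if p > 180 else 0)

# Run OCR on the cleaned image.
print(pytesseract.image_to_string(bw))

Scrolling through the pages and separating the question/answer regions would then be a matter of repeating the grab per page and slicing the result with PIL's crop().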

Related

Searching for information in PDFs

I am a beginner in Python and the programming world, and I am trying to solve a problem.
I have a list of keywords, and I want to look for them in some folders that contain a lot of PDFs. The PDFs are not character based; they are image based (they contain text as images). In other words, the PDFs were scanned in the first decade of the 2000s, so I cannot search for a word inside them and could not use Windows search etc. I can only check with my eyes, which is time consuming and boring.
I researched the question on the internet and found some solutions. Following them, I wrote some code in Python. It works, but the success rate is a bit low.
Firstly, my code converts the PDF file to image files (PyMuPDF package).
Secondly, it reads the text on these images and builds a text string (PIL and pytesseract packages).
Finally, it searches for the keywords in this text and returns True if a keyword is found.
Example:
keyword_list = ["a-001", "b-002", "c-003"]
pdf_list = ["a.pdf", "b.pdf", "c.pdf", ...., "z.pdf"]
The code should find a-001 in a.pdf, because I checked with my eyes and a.pdf does contain a-001. The code did indeed find it.
The code should also find b-002 in b.pdf, because I checked with my eyes and b.pdf does contain b-002. The code could not find it.
So my code's success rate is 50%. When it finds a match, it finds the right PDF file; I have no problem there, since a found PDF really contains what I am looking for. But sometimes it cannot detect a keyword in a PDF where I can see it clearly.
Do you have any better idea to solve this problem more accurately? I am not chasing a 100% success rate; that is impossible, because some PDFs contain handwriting. But most of them contain machine-printed text and should be detectable. Can I raise the success rate to 75%?
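For reference, a minimal sketch of the pipeline the question describes (fitz is PyMuPDF; the 4x render matrix, roughly 288 DPI, is an assumption, since low rendering resolution is a common cause of missed words):

import io
import fitz  # PyMuPDF
import pytesseract
from PIL import Image

def find_keywords(pdf_path, keywords):
    # Render each page at higher resolution, OCR it, and collect keyword hits.
    doc = fitz.open(pdf_path)
    found = set()
    for page in doc:
        pix = page.get_pixmap(matrix=fitz.Matrix(4, 4))  # ~288 DPI
        img = Image.open(io.BytesIO(pix.tobytes("png")))
        text = pytesseract.image_to_string(img)
        found.update(kw for kw in keywords if kw in text)
    return found

print(find_keywords("a.pdf", ["a-001", "b-002", "c-003"]))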
Your best chance is to extract the image at the highest possible resolution, which might mean not "converting the PDF to an image" but rather parsing the PDF and extracting the embedded image stream (given it was scanned in the 2000s, it is probably a TIFF stream at that). This is an example using PyMuPDF.
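A minimal sketch of such extraction with PyMuPDF ("scan.pdf" is a placeholder name; get_images and extract_image pull the raw embedded streams at their native resolution):

import fitz  # PyMuPDF

doc = fitz.open("scan.pdf")
for page in doc:
    # Each tuple describes one embedded image; index 0 is its xref number.
    for img in page.get_images(full=True):
        xref = img[0]
        info = doc.extract_image(xref)  # raw stream at native resolution
        out = "page%d_img%d.%s" % (page.number, xref, info["ext"])
        with open(out, "wb") as f:
            f.write(info["image"])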
You can perhaps try and further improve the image by adjusting brightness, contrast and using filters such as "despeckling". With poorly scanned images I have had good results with sharpening filters, and there are some filters ("erode" and "washout") that might improve poor typewriting (I remember some "e"'s where the eye of the "e" was almost completely dark, and they got easily mistaken for "c"'s).
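As a sketch of those adjustments with Pillow (the enhancement factors and filter choices are assumptions to tune per batch; MedianFilter acts as a simple despeckle, and SHARPEN can help faint strokes):

from PIL import Image, ImageEnhance, ImageFilter

im = Image.open("page0_img7.png").convert("L")  # placeholder filename

# Brightness/contrast boosts; a factor of 1.0 leaves the image unchanged.
im = ImageEnhance.Contrast(im).enhance(1.5)
im = ImageEnhance.Brightness(im).enhance(1.1)

# Despeckle, then sharpen to recover thin strokes (e.g. the eye of an "e").
im = im.filter(ImageFilter.MedianFilter(3))
im = im.filter(ImageFilter.SHARPEN)
im.save("cleaned.png")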
Then train Tesseract to improve recognition ratio. I am not sure of how this can be done with the Python interface, though.

How can I extract data from a handwritten, scanned PDF using Python?

So I have these PDFs that are scanned copies of a structured feedback form. The form has these checkboxes and spaces for hand written notes. I am trying to extract the data from these PDFs and save it to an unstructured CSV file.
Now, using pytesseract, I am able to grab the printed text (by first converting the PDF to an image), but I am not able to capture the handwritten content. Is there any way of doing it?
I am enclosing a sample form for reference: https://imgur.com/a/2FYqWJf
PyTesseract is an OCR program. It has not been trained or designed to recognize handwriting, so you have two options:
1) Retrain it for handwriting (this would be quite time-consuming and complicated, though).
2) Use another library actually meant for recognizing handwriting rather than printed text, like this one: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python-hand-text
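A rough sketch of option 2 against the Azure Read REST API (the endpoint, key, and v3.2 path are placeholders/assumptions; check the linked quickstart for the current API version):

import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

with open("form.jpg", "rb") as f:
    resp = requests.post(
        endpoint + "/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
resp.raise_for_status()
op_url = resp.headers["Operation-Location"]

# The Read API is asynchronous: poll the operation until it finishes.
while True:
    result = requests.get(op_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

for page in result.get("analyzeResult", {}).get("readResults", []):
    for line in page["lines"]:
        print(line["text"])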

OCR for Bank Receipts

I am working on an OCR problem for bank receipts, and I need to extract details like the date and account number. After preprocessing the input, I am using Tesseract-OCR (via pytesseract in Python). I have obtained the HOCR output file, however I am not able to make sense of it. How do we extract information from the HOCR output file? Note that the receipt has numbers filled into boxes, like normal forms.
I used the code below to read the file. Should I use a different encoding?
import os

# Read the HOCR output (an HTML file produced by Tesseract) as UTF-8 text.
if os.path.isfile('output.hocr'):
    with open('output.hocr', 'r', encoding='UTF-8') as fp:
        text = fp.read()
Note: the attached image is one example of the data. These images come from PDF files which I am converting into images programmatically.
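To address the HOCR question directly: the file is plain HTML, so any HTML parser works; this sketch assumes BeautifulSoup. Each recognized word is a span with class ocrx_word, and its bounding box sits in the title attribute:

from bs4 import BeautifulSoup

with open("output.hocr", encoding="utf-8") as fp:
    soup = BeautifulSoup(fp.read(), "html.parser")

for word in soup.find_all("span", class_="ocrx_word"):
    # The title attribute looks like: "bbox 120 45 210 80; x_wconf 93"
    bbox = word["title"].split(";")[0].replace("bbox ", "")
    print(word.get_text(strip=True), bbox)

The bounding boxes are what let you map each recognized word back to a specific box on the receipt.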
I personally would use something like Tesseract to do the OCR, and then perhaps something like OpenCV with SURF for the tick boxes...
Or even do edge detection with OpenCV, locate each section with SURF, and OCR that specific area; analyzing a specific region rather than the whole document makes it more robust.
You can simply provide the image as input, instead of processing and creating an HOCR output file.
Try:

from PIL import Image
import pytesseract

# Open the receipt image and run Tesseract with the English language pack.
im = Image.open("receipt.jpg")
text = pytesseract.image_to_string(im, lang='eng')
print(text)
This program takes the location of the image to be run through OCR, extracts the text from it, stores it in the variable text, and prints it out. If you want, you can also write the contents of text to a separate file.
P.S.: The image that you are trying to process is far more complex than the images Tesseract is designed to deal with, so you may get incorrect results. I would definitely recommend optimizing the image before use: reduce the character set, preprocess the image before passing it to the OCR, upsample it, keep the DPI over 250, etc.
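A sketch of those optimizations with pytesseract (the whitelist, upsampling factor, and page segmentation mode are assumptions to tune for your receipts):

from PIL import Image
import pytesseract

im = Image.open("receipt.jpg").convert("L")

# Upsample so the effective DPI is higher; a factor of 2 is a starting point.
im = im.resize((im.width * 2, im.height * 2), Image.LANCZOS)

# Restrict the character set, e.g. to digits for the boxed account number.
config = "--psm 6 -c tessedit_char_whitelist=0123456789/-"
print(pytesseract.image_to_string(im, config=config))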

Extract text from screen in python

Is there a library etc. for extracting text from a PNG bitmap screenshot?
It is for an automation tool and would (for example) need to be able to read buttons etc. I've checked Tesseract, but it seems to be made for pictures, not computer screen fonts.
If you're dealing with a small number of possible matches (i.e. you want to recognize two or three different buttons), the simplest way is to isolate those in a previous screenshot, save them to individual files, and then use some form of template matching, which is quite easy in OpenCV.
If, however, you need to actually recognize the button text, you're going to need an OCR engine. Tesseract is a good candidate if you can get it trained for your font (it's a lengthy process); as you mention, you'll need to do this if you're dealing with a small font, which Tesseract is not originally trained to recognize. If you can't, there are a couple of other engines usable from Python, like OCRopus.
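For the template-matching route, a minimal OpenCV sketch (the file names and the 0.9 similarity threshold are assumptions to tune for your UI theme):

import cv2

# Load the full screenshot and a previously saved button template.
screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
button = cv2.imread("ok_button.png", cv2.IMREAD_GRAYSCALE)

res = cv2.matchTemplate(screen, button, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)

if max_val > 0.9:
    print("button found at", max_loc)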

Is there a way to extract text information from a postscript file? (.ps .eps)

I want to extract the text information contained in a PostScript image file (the captions of my axis labels).
These images were generated with PGPLOT. I have tried ps2ascii and ps2txt on Ubuntu, but they didn't produce any useful results. Does anyone know of another method?
Thanks
It's likely that pgplot drew the characters directly with line strokes rather than emitting actual text, especially since pgplot is designed to output to a huge range of devices, including plotters, where you would have to do this.
Edit:

If you have enough plots to be worth the effort, then it's a very simple image processing task:

- Convert each page to something like TIFF, in monochrome, and threshold the image to binary; the text will be at the maximum pixel value (see the sketch after this list).
- Use a template matching technique. If you have a limited set of possible labels, just match the entire label; you can even start with a template of the correct size and rotation. Then just flag each plot as containing label[1-n], with no need to read the actual text.
- If you don't know the label, you can still do OCR fairly easily: just extract the region around the axis, rotate it for the vertical axis, and use Google's free OCR library (Tesseract).
- If you have pgplot, you can even build the training set for the OCR, or the template images, directly, rather than having to harvest them from the image list.
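A small sketch of the convert-and-threshold step (this assumes Ghostscript is installed so Pillow can rasterize PostScript, and 128 is an assumed cutoff):

from PIL import Image

# Pillow rasterizes .ps/.eps through Ghostscript when it is available.
im = Image.open("plot.eps")
im.load(scale=4)  # render at 4x resolution before thresholding

# Threshold to binary so the text and line work sit at the maximum pixel value.
gray = im.convert("L")
binary = gray.point(lambda p: 255 if p < 128 else 0)
binary.save("plot_binary.tiff")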
