I am trying to extract data and images from a PDF and pass them to a database. I have tried several libraries/packages in R and Python, but I am still facing the same problem: I cannot relate an extracted image to the data that describes it.
I attached an image of a pdf file as a sample to illustrate the problem.
My need is to finally have a dataframe as follows:
NUMBER ORDER IMAGE
09090087 345679 345679.jpg
09090087 535278 535278.jpg
And the files 345679.jpg (which is a cat) and 535278.jpg (which is a dog) extracted to some folder...
At the moment I have managed to extract the images, but I cannot figure out how to relate each image to its text labels.
My code:
from __future__ import print_function
import fitz
import sys, time, re

checkXO = r"/Type(?= */XObject)"   # object type is XObject
checkIM = r"/Subtype(?= */Image)"  # subtype is Image

doc = fitz.open(sys.argv[1])
imgcount = 0
lenXREF = doc._getXrefLength()

for i in range(1, lenXREF):
    text = doc._getObjectString(i)  # raw object definition at this xref
    isXObject = re.search(checkXO, text)
    isImage = re.search(checkIM, text)
    if not isXObject or not isImage:
        continue
    imgcount += 1
    pix = fitz.Pixmap(doc, i)
    if pix.n < 5:  # GRAY or RGB: can be saved directly
        pix.writePNG("pdfimg/img-%s.png" % (i,))
    else:  # CMYK: convert to RGB first
        pix0 = fitz.Pixmap(fitz.csRGB, pix)
        pix0.writePNG("pdfimg/img-%s.png" % (i,))
        pix0 = None
    pix = None
Any ideas?
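One possible direction, as a rough, untested sketch: work page by page instead of over the raw xref table, and pair each image's bounding box with the words printed beneath it. This assumes a recent PyMuPDF (where get_images, get_image_bbox, get_text("words") and extract_image are available) and assumes each label sits directly below its image, as in the sample:

import fitz  # PyMuPDF
import os, sys

doc = fitz.open(sys.argv[1])
os.makedirs("pdfimg", exist_ok=True)

for page in doc:
    words = page.get_text("words")  # list of (x0, y0, x1, y1, word, ...)
    for item in page.get_images(full=True):
        xref = item[0]
        bbox = page.get_image_bbox(item)
        # candidate labels: words starting below the image, overlapping it horizontally
        below = [w for w in words
                 if w[1] >= bbox.y1 and w[0] < bbox.x1 and w[2] > bbox.x0]
        if not below:
            continue
        label = min(below, key=lambda w: w[1])[4]  # nearest word underneath
        img = doc.extract_image(xref)
        with open("pdfimg/%s.%s" % (label, img["ext"]), "wb") as out:
            out.write(img["image"])

From the resulting (label, file) pairs, the dataframe above could then be assembled.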
I wrote some code that uses OCR to extract text from screenshots of follower lists and then transfers it into a data frame.
The reason I have to go through the hassle with "name" / "display name" and removing blank lines is that the initial text extraction looks something like this:
Screenname 1
name 1
Screenname 2
name 2
(and so on)
So I know in which order each extraction will be.
My code works well for 1-30 images, but with more than that it gets a bit slow. My goal is to run around 5-10k screenshots through it at once. I'm pretty new to programming, so any ideas/tips on how to optimize the speed would be much appreciated! Thank you all in advance :)
from PIL import Image
from pytesseract import pytesseract
import os
import pandas as pd
from itertools import chain

list_final = [""]
list_name = [""]
liste_anzeigename = [""]
list_raw = [""]
anzeigename = [""]
name = [""]
sort = [""]

f = r'/Users/PycharmProjects/pythonProject/images'
myconfig = r"--psm 4 --oem 3"

for file in os.listdir(f):
    f_img = f + "/" + file
    img = Image.open(f_img)
    img = img.crop((240, 400, 800, 2400))
    img.save(f_img)

for file in os.listdir(f):
    f_img = f + "/" + file
    test = pytesseract.image_to_string(Image.open(f_img), config=myconfig)
    lines = test.split("\n")
    list_raw = [line for line in lines if line.strip() != ""]
    sort.append(list_raw)
    name = {list_raw[0], list_raw[2], list_raw[4],
            list_raw[6], list_raw[8], list_raw[10],
            list_raw[12], list_raw[14], list_raw[16]}
    list_name.append(name)
    anzeigename = {list_raw[1], list_raw[3], list_raw[5],
                   list_raw[7], list_raw[9], list_raw[11],
                   list_raw[13], list_raw[15], list_raw[17]}
    liste_anzeigename.append(anzeigename)

reihenfolge_name = list(chain.from_iterable(list_name))
index_anzeigename = list(chain.from_iterable(liste_anzeigename))
sortieren = list(chain.from_iterable(sort))
print(list_raw)

sort_name = sorted(reihenfolge_name, key=sortieren.index)
sort_anzeigename = sorted(index_anzeigename, key=sortieren.index)

final = pd.DataFrame(zip(sort_name, sort_anzeigename), columns=['name', 'anzeigename'])
print(final)
Use a multiprocessing.Pool.
Combine the code under the for-loops and put it into a function process_file.
This function should accept a single argument: the name of a file to process.
Next, using os.listdir, create a list of files to process.
Then create a Pool and use its map method to process the list:
import multiprocessing as mp

def process_file(name):
    # your code goes here.
    return anzeigename  # Or whatever the result should be.

if __name__ == "__main__":
    f = r'/Users/PycharmProjects/pythonProject/images'
    p = mp.Pool()
    liste_anzeigename = p.map(process_file, os.listdir(f))
This will run your code in parallel on as many cores as your CPU has.
For an N-core CPU, this will take approximately 1/N of the time compared to doing it without multiprocessing.
Note that the return value of the worker function should be pickleable; it has to be returned from the worker process to the parent process.
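Put together, process_file for the code in the question might look like this sketch (the path, crop box, and tesseract config are copied from the question; the hard-coded indices are replaced by slicing, which assumes exactly two non-blank lines per follower):

import os
import multiprocessing as mp
import pandas as pd
from PIL import Image
from pytesseract import pytesseract

F = r'/Users/PycharmProjects/pythonProject/images'
MYCONFIG = r"--psm 4 --oem 3"

def process_file(file):
    # Crop, OCR, and split one screenshot into (screen name, display name) pairs.
    img = Image.open(F + "/" + file).crop((240, 400, 800, 2400))
    text = pytesseract.image_to_string(img, config=MYCONFIG)
    lines = [line for line in text.split("\n") if line.strip() != ""]
    # Even lines are screen names, odd lines are display names.
    return list(zip(lines[0::2], lines[1::2]))

if __name__ == "__main__":
    with mp.Pool() as p:
        results = p.map(process_file, os.listdir(F))
    pairs = [pair for result in results for pair in result]
    final = pd.DataFrame(pairs, columns=['name', 'anzeigename'])
    print(final)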
Is there any way in Python to identify whether a PDF has been OCR'd (the quality of the text is bad) vs a true searchable PDF (the quality of the text is perfect)?
Using the metadata of the PDF:
import pprint
import PyPDF2

def get_doc_info(path):
    pp = pprint.PrettyPrinter(indent=4)
    pdf_file = PyPDF2.PdfFileReader(open(path, 'rb'))
    doc_info = pdf_file.getDocumentInfo()
    pp.pprint(doc_info)
I get:
result = get_doc_info("PDF_SEARCHABLE_HAS_BEEN_OCRD.pdf")
{ '/Author': 'NAPS2',
'/CreationDate': "D:20200701104101+02'00'",
'/Creator': 'NAPS2',
'/Keywords': '',
'/ModDate': "D:20200701104101+02'00'",
'/Producer': 'PDFsharp 1.50.4589 (www.pdfsharp.com)'}
result = get_doc_info("PDF_SEARCHABLE_TRUE.pdf")
{ '/CreationDate': 'D:20210802122000Z',
'/Creator': 'Quadient CXM AG~Inspire~14.3.49.7',
'/Producer': ''}
Can I check the type of the PDF (true PDF or OCR PDF) using the Creator field from the PDF's metadata?
Is there another way using Python?
If there is no solution to the problem, how could I use deep learning/machine learning to detect whether a searchable PDF is true or OCR'd?
This is a video to understand the difference between TRUE PDF and OCR PDF : https://www.youtube.com/watch?v=xs8KQbxsMcw
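For illustration, a metadata-based check along these lines might look like this sketch; the producer/creator strings below are guesses for common OCR tools, and since metadata can be absent or arbitrary, this is inherently unreliable:

import PyPDF2

# Hypothetical hint list: producers/creators commonly set by OCR software.
OCR_TOOL_HINTS = ("NAPS2", "Tesseract", "ABBYY")

def looks_ocrd_by_metadata(path):
    info = PyPDF2.PdfFileReader(open(path, 'rb')).getDocumentInfo() or {}
    fields = (info.get('/Creator', ''), info.get('/Producer', ''))
    return any(hint.lower() in str(field).lower()
               for hint in OCR_TOOL_HINTS for field in fields)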
Not long ago I ran into the same problem!
I developed (based on some SO post I cannot recall) this function:
import fitz

def get_scanned_pages_percentage(filepath: str) -> float:
    """
    INPUT: path to a pdf file
    OUTPUT: % of pages OCR'd which include text
    """
    total_pages = 0
    total_scanned_pages = 0
    with fitz.open(filepath) as doc:
        for page in doc:
            text = page.getText().strip()
            if len(text) == 0:
                # Ignore "empty" pages
                continue
            total_pages += 1
            pix1 = page.getPixmap(alpha=False)  # render page to an image
            remove_all_text(doc, page)          # strip the text layer
            pix2 = page.getPixmap(alpha=False)  # render the page again
            img1 = pix1.getImageData("png")
            img2 = pix2.getImageData("png")
            if img1 == img2:
                # The rendering did not change, so the visible content is an
                # image and the text sits in an invisible OCR layer.
                # print(f"{page.number} was scanned or has no text")
                if len(text) > 0:
                    # print(f"\tHas text of length {len(text):,} characters")
                    total_scanned_pages += 1
            else:
                pass
    if total_pages == 0:
        return 0
    return (total_scanned_pages / total_pages) * 100
This function will give 100 (or close to it) if the pdf is an image containing OCR'd text, and 0 if it's a native digital pdf.
remove_all_text:
def remove_all_text(doc, page):
    """Removes all text objects from a pdf page by editing its content stream."""
    page.cleanContents()  # syntactic cleaning of page appearance commands
    # xref of the cleaned command source (bytes object)
    xref = page.getContents()[0]
    cont = doc.xrefStream(xref)  # read it
    # The command stream is extracted as bytes; then we search for the
    # tags referring to text and delete them.
    ba_cont = bytearray(cont)  # a modifiable version
    pos = 0
    changed = False  # switch indicates changes
    while pos < len(cont) - 1:
        pos = ba_cont.find(b"BT\n", pos)  # begin text object
        if pos < 0:
            break  # not (more) found
        pos2 = ba_cont.find(b"ET\n", pos)  # end text object
        if pos2 <= pos:
            break  # major error in PDF page definition!
        ba_cont[pos: pos2 + 2] = b""  # remove text object
        changed = True
    if changed:  # we have indeed removed some text
        doc.updateStream(xref, ba_cont)  # write back command stream w/o text
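A minimal usage sketch (the file name and the 90% threshold are arbitrary placeholders):

percentage = get_scanned_pages_percentage("some_document.pdf")
if percentage > 90:
    print("most likely an OCR'd scan")
else:
    print("most likely a native (true) digital PDF")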
I'm trying to save a 640x480 RGB image captured with NAO's front camera to my computer. I'm using Python and PIL to do so. Unfortunately, the image just won't save on my computer, no matter what image type or path I use as parameters of the Image.save() method. The image created with PIL contains valid RGB information though. Here's my code sample from Choregraphe:
import Image

def onInput_onStart(self):
    cam_input = ALProxy("ALVideoDevice")
    nameId = cam_input.subscribeCamera("Test_Cam", 1, 2, 13, 20)
    image = cam_input.getImageRemote(nameId)  # captures an image
    w = image[0]  # get the image width
    h = image[1]  # get the image height
    pixel_array = image[6]  # contains the image data
    result = Image.fromstring("RGB", (w, h), pixel_array)
    # the following line doesn't work
    result.save("C:\Users\Claudia\Desktop\NAO\Bilder\test.png", "PNG")
    cam_input.releaseImage(nameId)
    cam_input.unsubscribe(nameId)
    pass
Thank you so much for your help in advance!
- a frustrated student
In the comments, you say the code is pasted from Choregraphe, so I guess you launch it using Choregraphe.
If so, then the code is injected into your robot and then started.
So your image is saved to the NAO's hard drive, and I guess your robot doesn't have a folder named "C:\Users\Claudia\Desktop\NAO\Bilder".
So change the path to "/home/nao/test.png", start your code, then log into your NAO using putty, or browse its folders using winscp (as it looks like you're using Windows).
And you should see your image file.
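In other words, the failing save line would become:

result.save("/home/nao/test.png", "PNG")  # a path that exists on the robot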
In order for your code to run correctly it needs to be properly indented. Your code should look like this:
import Image

def onInput_onStart(self):
    cam_input = ALProxy("ALVideoDevice")
    nameId = cam_input.subscribeCamera("Test_Cam", 1, 2, 13, 20)
    image = cam_input.getImageRemote(nameId)  # captures an image
    w = image[0]  # get the image width
    h = image[1]  # get the image height
    pixel_array = image[6]  # contains the image data
    ...
Make sure to indent everything that's inside the def onInput_onStart(self): method.
Sorry for the late response, but it may be helpful for someone. You should try it with naoqi. Here is the documentation for retrieving images:
http://doc.aldebaran.com/2-4/dev/python/examples/vision/get_image.html
The original code was not working for me, so I made some tweaks.
import argparse
import sys
import time
import qi
from PIL import Image

parser = argparse.ArgumentParser()
parser.add_argument("--ip", type=str, default="nao.local.",
                    help="Robot IP address. On robot or Local Naoqi: use 'nao.local.'.")
parser.add_argument("--port", type=int, default=9559,
                    help="Naoqi port number")
args = parser.parse_args()
session = qi.Session()
try:
    session.connect("tcp://" + args.ip + ":" + str(args.port))
except RuntimeError:
    pass

"""
First get an image, then show it on the screen with PIL.
"""
# Get the service ALVideoDevice.
video_service = session.service("ALVideoDevice")
resolution = 2   # VGA
colorSpace = 11  # RGB
# subscribeCamera(name, camera, resolution, colorspace, fps)
videoClient = video_service.subscribeCamera("python_client", 0, 3, 13, 1)
t0 = time.time()

# Get a camera image.
# image[6] contains the image data passed as an array of ASCII chars.
naoImage = video_service.getImageRemote(videoClient)
t1 = time.time()

# Time the image transfer.
print("acquisition delay ", t1 - t0)
# video_service.unsubscribe(videoClient)

# Now we work with the image returned and save it as a PNG using the PIL
# package.
# Get the image size and pixel array.
imageWidth = naoImage[0]
imageHeight = naoImage[1]
array = naoImage[6]
image_string = str(bytearray(array))

# Create a PIL Image from our pixel array.
im = Image.fromstring("RGB", (imageWidth, imageHeight), image_string)

# Save the image.
im.save("C:\\Users\\Lenovo\\Desktop\\PROJEKTI\\python2-connect4\\camImage.png", "PNG")
Be careful to use Python 2.7.
The code runs on your computer, not on the NAO robot!
I have an lmdb database and I'm trying to read its contents. Ironically, nothing gets printed on screen. This is the code snippet I have written for reading from lmdb:
import caffe
import lmdb
import numpy as np
from caffe.proto import caffe_pb2
import cv2
import sys

db_train = lmdb.open('mnist_train_lmdb')
db_train_txn = db_train.begin()
cursor = db_train_txn.cursor()

print db_train
print db_train_txn
print db_train_txn.cursor()

datum = caffe_pb2.Datum()

index = sys.argv[0]
size_train = 50000
size_test = 10000
data_train = np.zeros((size_train, 1, 28, 28))
label_train = np.zeros(size_train, dtype=int)

print 'Reading training data...'
i = -1
for key, value in cursor:
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_train:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_train[i] = data
    label_train[i] = label
This prints:
<Environment object at 0x0000000009CE3990>
<Transaction object at 0x0000000009CE1810>
<Cursor object at 0x0000000009863738>
Reading training data...
Reading test data...
It seems the for loop doesn't run at all. What am I missing here?
I checked, and it seems this is the normal way of reading from lmdb; all the source examples I have seen take a similar approach.
Correcting myself:
Both ways of using lmdb.Cursor(),
for key, value in cursor:
and
while cursor.next():
are right, and I was wrong in the original answer.
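For reference, both idioms in minimal form (a sketch; 'mnist_train_lmdb' is the database from the question):

import lmdb

env = lmdb.open('mnist_train_lmdb', readonly=True)
with env.begin() as txn:
    # Idiom 1: iterate directly over (key, value) pairs.
    for key, value in txn.cursor():
        pass  # process key/value here

    # Idiom 2: advance explicitly; on an unpositioned cursor, next()
    # moves to the first record, and it returns False past the end.
    cursor = txn.cursor()
    while cursor.next():
        key, value = cursor.key(), cursor.value()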
You didn't use the cursor properly; a slight modification should be made in your code, like:
...  # original stuff

print 'Reading training data...'
i = -1
while cursor.next():  # Move to the next element, and
    i = i + 1         # note cursor starts in an unpositioned state
    if i % 1000 == 0:
        print i
    if i == size_train:
        break
    datum.ParseFromString(cursor.value())
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_train[i] = data
    label_train[i] = label
For more on the usage of the lmdb Python binding, you can refer here.
OK, it seems the database was faulty! I used another database and it worked just fine, both with my code snippet and with what was suggested by @DaleSong.
Since yesterday I've been trying to extract the text from some highlighted annotations in a pdf, using python-poppler-qt4.
According to this documentation, it looks like I have to get the text using the Page.text() method, passing a Rectangle argument from the highlighted annotation, which I get using Annotation.boundary(). But I get only blank text. Can someone help me? I copied my code below and added a link to the PDF I am using. Thanks for any help!
import popplerqt4
import sys
import PyQt4

def main():
    doc = popplerqt4.Poppler.Document.load(sys.argv[1])
    total_annotations = 0
    for i in range(doc.numPages()):
        page = doc.page(i)
        annotations = page.annotations()
        if len(annotations) > 0:
            for annotation in annotations:
                if isinstance(annotation, popplerqt4.Poppler.Annotation):
                    total_annotations += 1
                if isinstance(annotation, popplerqt4.Poppler.HighlightAnnotation):
                    print str(page.text(annotation.boundary()))
    if total_annotations > 0:
        print str(total_annotations) + " annotation(s) found"
    else:
        print "no annotations found"

if __name__ == "__main__":
    main()
Test pdf:
https://www.dropbox.com/s/10plnj67k9xd1ot/test.pdf
Looking at the documentation for Annotations, it seems that the boundary property "returns this annotation's boundary rectangle in normalized coordinates". Although this seems a strange decision, we can simply scale the coordinates by the page.pageSize().width() and .height() values.
import popplerqt4
import sys
import PyQt4

def main():
    doc = popplerqt4.Poppler.Document.load(sys.argv[1])
    total_annotations = 0
    for i in range(doc.numPages()):
        # print("========= PAGE {} =========".format(i + 1))
        page = doc.page(i)
        annotations = page.annotations()
        (pwidth, pheight) = (page.pageSize().width(), page.pageSize().height())
        if len(annotations) > 0:
            for annotation in annotations:
                if isinstance(annotation, popplerqt4.Poppler.Annotation):
                    total_annotations += 1
                if isinstance(annotation, popplerqt4.Poppler.HighlightAnnotation):
                    quads = annotation.highlightQuads()
                    txt = ""
                    for quad in quads:
                        # scale the normalized quad corners up to page coordinates
                        rect = (quad.points[0].x() * pwidth,
                                quad.points[0].y() * pheight,
                                quad.points[2].x() * pwidth,
                                quad.points[2].y() * pheight)
                        bdy = PyQt4.QtCore.QRectF()
                        bdy.setCoords(*rect)
                        txt = txt + unicode(page.text(bdy)) + ' '
                    # print("========= ANNOTATION =========")
                    print(unicode(txt))
    if total_annotations > 0:
        print str(total_annotations) + " annotation(s) found"
    else:
        print "no annotations found"

if __name__ == "__main__":
    main()
Additionally, I decided to concatenate the .highlightQuads() to get a better representation of what was actually highlighted.
Please be aware of the explicit <space> I have appended to each quad region of text.
In the example document the returned QString could not be passed directly to print() or str(); the solution was to use unicode() instead.
I hope this helps someone as it helped me.
Note: Page rotation may affect the scaling values, I have not been able to test this.
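If rotation does turn out to matter, one possible (equally untested) guard would be to swap the scale factors for rotated pages, assuming python-poppler-qt4 exposes Poppler's page orientation:

orientation = page.orientation()
if orientation in (popplerqt4.Poppler.Page.Landscape,
                   popplerqt4.Poppler.Page.Seascape):
    (pwidth, pheight) = (pheight, pwidth)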