I have a conversion script that converts PDF files and image files to text files, but it takes forever to run: almost 48 hours for 2,000 PDF documents. I now have a pool of around 12,000+ documents to convert, and at my previous rate I can't imagine how long the conversion would take with my current code. Is there anything I can do or change in my code to make it run faster?
Here is the code that I used.
import glob
import os
import shutil

import pytesseract
from PIL import Image
from pdf2image import convert_from_path

def tesseractOCR_pdf(pdf):
    filePath = pdf
    pages = convert_from_path(filePath, 500)
    # Counter used to name the image of each PDF page
    image_counter = 1
    # Iterate through all the pages stored above
    for page in pages:
        # Each page is saved as page_1.jpg, page_2.jpg, ..., page_n.jpg
        filename = "page_" + str(image_counter) + ".jpg"
        # Save the image of the page on disk
        page.save(filename, 'JPEG')
        # Increment the counter to update the filename
        image_counter = image_counter + 1
    # Total number of pages
    filelimit = image_counter - 1
    # Empty string for accumulating the recognized text
    text = ""
    # Iterate from 1 to the total number of pages
    for i in range(1, filelimit + 1):
        # Again, the files are page_1.jpg, page_2.jpg, ..., page_n.jpg
        filename = "page_" + str(i) + ".jpg"
        # Recognize the text in the image using pytesseract
        text += str(pytesseract.image_to_string(Image.open(filename)))
        text = text.replace('-\n', '')
    # Delete all the jpg files created above
    for i in glob.glob("*.jpg"):
        os.remove(i)
    return text

def tesseractOCR_img(img):
    filePath = img
    text = str(pytesseract.image_to_string(filePath, lang='eng', config='--psm 6'))
    text = text.replace('-\n', '')
    return text
def Tesseract_ALL(docDir, txtDir, troubleDir):
    if docDir == "": docDir = os.getcwd() + "\\"  # if no docDir passed in
    for doc in os.listdir(docDir):  # iterate through docs in the doc directory
        try:
            fileExtension = doc.split(".")[-1]
            if fileExtension == "pdf":
                pdfFilename = docDir + doc
                text = tesseractOCR_pdf(pdfFilename)  # get the text content of the pdf
                textFilename = txtDir + doc + ".txt"
                textFile = open(textFilename, "w")  # make text file
                textFile.write(text)  # write text to text file
                textFile.close()
            else:
                # elif (fileExtension == "tif") | (fileExtension == "tiff") | (fileExtension == "jpg"):
                imgFilename = docDir + doc
                text = tesseractOCR_img(imgFilename)  # get the text content of the image
                textFilename = txtDir + doc + ".txt"
                textFile = open(textFilename, "w")  # make text file
                textFile.write(text)  # write text to text file
                textFile.close()
        except:
            print("Error in file: " + str(doc))
            shutil.move(os.path.join(docDir, doc), troubleDir)
    for filename in os.listdir(txtDir):
        fileExtension = filename.split(".")[-2]
        if fileExtension == "pdf":
            os.rename(txtDir + filename, txtDir + filename.replace('.pdf', ''))
        elif fileExtension == "tif":
            os.rename(txtDir + filename, txtDir + filename.replace('.tif', ''))
        elif fileExtension == "tiff":
            os.rename(txtDir + filename, txtDir + filename.replace('.tiff', ''))
        elif fileExtension == "jpg":
            os.rename(txtDir + filename, txtDir + filename.replace('.jpg', ''))

docDir = "/drive/codingstark/Project/pdf/"
txtDir = "/drive/codingstark/Project/txt/"
troubleDir = "/drive/codingstark/Project/trouble_pdf/"

Tesseract_ALL(docDir, txtDir, troubleDir)
Does anyone know how I can change my code to make it run faster?
I think a process pool would be perfect for your case.
First you need to identify the parts of your code that can run independently of each other, then wrap them in a function.
Here is an example:
from concurrent.futures import ProcessPoolExecutor

def do_some_OCR(filename):
    pass

with ProcessPoolExecutor() as executor:
    for file in file_list:
        _ = executor.submit(do_some_OCR, file)
The code above hands each file to a pool of worker processes, so the files are processed in parallel.
You can find the official documentation here: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
There is also a really awesome video that shows step-by-step how to use processes for exactly this: https://www.youtube.com/watch?v=fKl2JW_qrso
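For example, here is a rough (untested) sketch of how the loop from the question could be wired into a pool; it assumes the tesseractOCR_pdf and tesseractOCR_img functions from the question are importable, and process_one_doc is a hypothetical helper:
import os
from concurrent.futures import ProcessPoolExecutor

def process_one_doc(docDir, txtDir, doc):
    # hypothetical per-file worker: OCR one document and write its .txt file
    if doc.lower().endswith(".pdf"):
        text = tesseractOCR_pdf(os.path.join(docDir, doc))
    else:
        text = tesseractOCR_img(os.path.join(docDir, doc))
    with open(os.path.join(txtDir, doc + ".txt"), "w") as f:
        f.write(text)

if __name__ == "__main__":
    docDir = "/drive/codingstark/Project/pdf/"
    txtDir = "/drive/codingstark/Project/txt/"
    with ProcessPoolExecutor() as executor:
        for doc in os.listdir(docDir):
            executor.submit(process_one_doc, docDir, txtDir, doc)
One caveat: tesseractOCR_pdf as written dumps page_N.jpg files into the current directory and then deletes *.jpg, so parallel workers would clobber each other's temporary files. Each worker should use its own temporary directory (e.g. tempfile.mkdtemp), or skip the intermediate files entirely, as the compact version further down does.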
Here is a compact version of the function with the file-writing removed. I think this should work based on what I was reading in the API docs, but I haven't tested it.
Note that I changed from a string to a list, because appending to a list is MUCH cheaper than concatenating strings (see How slow is Python's string concatenation vs. str.join? about join vs. concatenation). The TL;DR is that string concatenation makes a new string every time, so with large strings you end up copying over and over.
Also, you were calling replace on the whole accumulated string after each concatenation, which again created a new string every iteration. I moved it so that it operates on each page's string as it is generated. Note that if for some reason the '-\n' artifact only occurs as a result of the concatenation, then the replace should be removed from where it is and placed here: return ''.join(pageText).replace('-\n',''), but realize that putting it there creates a new string for the join and then a whole new string for the replace.
def tesseractOCR_pdf(pdf):
    pages = convert_from_path(pdf, 500)
    # Create an empty list for storing the text of each page
    pageText = []
    # Iterate through all the pages; each one is a PIL Image
    for page in pages:
        # Recognize the text in the image using pytesseract,
        # appending it to the list while removing the -\n characters
        pageText.append(str(pytesseract.image_to_string(page)).replace('-\n', ''))
    return ''.join(pageText)
An even more compact one-liner version:
def tesseractOCR_pdf(pdf):
    # Extract the text of each page, remove -\n, and combine everything
    return ''.join([str(pytesseract.image_to_string(page)).replace('-\n', '') for page in convert_from_path(pdf, 500)])
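If you want to see the cost difference yourself, here is a tiny illustrative benchmark (the sizes are arbitrary, and exact numbers vary by interpreter, since CPython optimizes some in-place concatenations, but join is reliably the safe choice):
import timeit

setup = "parts = ['x' * 100] * 10000"
# += builds a new string on (almost) every iteration; join builds the result once
concat = timeit.timeit("s = ''\nfor p in parts:\n    s += p", setup=setup, number=10)
joined = timeit.timeit("''.join(parts)", setup=setup, number=10)
print("concat: %.4fs  join: %.4fs" % (concat, joined))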
Related
I am trying to convert all PDF files to .jpg files and then remove them from the directory. I am able to convert all the PDFs to JPGs, but when I try to delete them I get the error "The process cannot access the file because it is being used by another process".
Could you please help me?
Below is the code.
The script below will convert all PDFs to JPEGs and store them in the same location.
for fn in files:
    doc = fitz.open(pdffile)
    page = doc.loadPage(0)  # number of page
    pix = page.getPixmap()
    fn1 = fn.replace('.pdf', '.jpg')
    output = fn1
    pix.writePNG(output)
    os.remove(fn)  # one file at a time
path = 'D:/python_ml/Machine Learning/New folder/Invoice/'
i = 0
for file in os.listdir(path):
    path_to_zip_file = os.path.join(path, folder)
    if file.endswith('.pdf'):
        os.remove(file)
        i += 1
As @K J noted in their comment, most probably the problem is that files are not being closed, and indeed your code never closes the doc object(s).
(Based on the line fitz.open(pdffile), I guess you use the pymupdf library.)
The problematic fragment:
doc = fitz.open(pdffile)
page = doc.loadPage(0) # number of page
pix = page.getPixmap()
fn1 = fn.replace('.pdf', '.jpg')
output = fn1
pix.writePNG(output)
...should be adjusted, e.g., in the following way:
with fitz.open(pdffile) as doc:
    page = doc.loadPage(0)  # number of page
    pix = page.getPixmap()
    output = fn.replace('.pdf', '.jpg')
    pix.writePNG(output)
(Side note: the fn1 variable seems completely redundant, so I got rid of it. Also, shouldn't pdffile be replaced with fn? What is pdffile, actually?)
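Putting it together, the whole conversion loop might look like this (a sketch, assuming files is the list of PDF paths and that fn is indeed what pdffile was meant to be):
import os
import fitz  # pymupdf

for fn in files:
    # the context manager closes the document, so the file handle is released
    with fitz.open(fn) as doc:
        page = doc.loadPage(0)  # first page
        pix = page.getPixmap()
        pix.writePNG(fn.replace('.pdf', '.jpg'))
    os.remove(fn)  # safe now: the PDF is no longer held open
(Note that writePNG writes PNG data even though the output name ends in .jpg, exactly as in your original code.)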
I've found some guides online on how to make a PDF searchable if it was scanned. However, I'm currently struggling with figuring out how to do it for a multipage PDF.
My code takes a multipage PDF, converts each page into a JPG, runs OCR on each page, and then converts the result into a PDF. However, only the last page is returned.
import pytesseract
from pdf2image import convert_from_path
pytesseract.pytesseract.tesseract_cmd = 'directory'
TESSDATA_PREFIX = 'directory'
tessdata_dir_config = '--tessdata-dir directory'
# Path of the pdf
PDF_file = r"pdf directory"
def pdf_text():
    # Store all the pages of the PDF in a variable
    pages = convert_from_path(PDF_file, 500)
    image_counter = 1
    for page in pages:
        # Declare file names
        filename = "page_" + str(image_counter) + ".jpg"
        # Save the image of the page on disk
        page.save(filename, 'JPEG')
        # Increment the counter to update the filename
        image_counter = image_counter + 1
    # Variable holding the total number of pages
    filelimit = image_counter - 1
    outfile = "out_text.pdf"
    # Open the file in append mode so that the contents of all images are added to the same file
    f = open(outfile, "a")
    # Iterate from 1 to the total number of pages
    for i in range(1, filelimit + 1):
        filename = "page_" + str(i) + ".jpg"
        # Recognize the text in the image using pytesseract
        result = pytesseract.image_to_pdf_or_hocr(filename, lang="eng", config=tessdata_dir_config)
        f = open(outfile, "w+b")
        f.write(bytearray(result))
    f.close()
pdf_text()
How can I run this for all pages and output one merged PDF?
I can't run it, but I think the problem is that you use open(..., 'w+b') inside the loop; this removes the previous content, so in the end you write only the last page.
You should use the already opened file open(outfile, "a") and close it after the loop.
# --- before loop ---
f = open(outfile, "ab")

# --- loop ---
for i in range(1, filelimit + 1):
    filename = f"page_{i}.jpg"
    result = pytesseract.image_to_pdf_or_hocr(filename, lang="eng", config=tessdata_dir_config)
    f.write(bytearray(result))

# --- after loop ---
f.close()
BTW: there is another problem - image_to_pdf_or_hocr creates a full PDF, with its own headers and maybe trailers, so appending two results can't create a correct PDF. You have to use a dedicated module to merge the PDFs, as in Merge PDF files.
Something similar to:
# --- before loop ---
from PyPDF2 import PdfFileMerger
import io

merger = PdfFileMerger()

# --- loop ---
for i in range(1, filelimit + 1):
    filename = "page_" + str(i) + ".jpg"
    result = pytesseract.image_to_pdf_or_hocr(filename, lang="eng", config=tessdata_dir_config)
    pdf_file_in_memory = io.BytesIO(result)
    merger.append(pdf_file_in_memory)

# --- after loop ---
merger.write(outfile)
merger.close()
There are a number of potential issues here, and without being able to debug it's hard to say what the root cause is.
Are the JPGs being successfully created, as separate files, as expected?
I would suspect that pages = convert_from_path(PDF_file, 500) is not returning what is expected - have you manually verified that the page images are being created?
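For a quick check, you could count the intermediate images right after the save loop; a small debugging sketch (page_*.jpg is the naming scheme from the question):
import glob

# run this after the page.save(...) loop to confirm the images exist on disk
saved = sorted(glob.glob("page_*.jpg"))
print(len(saved), "page images found:", saved)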
I am using textract to convert doc/docx files to text.
Here is my method:
def extract_text_from_doc(file):
    temp = textract.process(file)
    temp = temp.decode("UTF-8")
    text = [line.replace('\t', ' ') for line in temp.split('\n') if line]
    return ''.join(text)
I have two doc files, both with the docx extension. When I try to convert one of them to a string it works fine, but for the other one it throws the exception:
'There is no item named \'word/document.xml\' in the archive'
I looked further and found that zipfile.ZipFile(docx) is causing the problem.
The code looks like this:
def process(docx, img_dir=None):
    text = u''
    # unzip the docx in memory
    zipf = zipfile.ZipFile(docx)
    filelist = zipf.namelist()
    # get header text
    # there can be 3 header files in the zip
    header_xmls = 'word/header[0-9]*.xml'
    for fname in filelist:
        if re.match(header_xmls, fname):
            text += xml2text(zipf.read(fname))
    # get main text
    doc_xml = 'word/document.xml'
    text += xml2text(zipf.read(doc_xml))
    # some more code
In the code above, for the file that works, namelist() returns entries like 'word/document.xml' and 'word/header1.xml',
but for the file that does not work it returns:
['[Content_Types].xml', '_rels/.rels', 'theme/theme/themeManager.xml', 'theme/theme/theme1.xml', 'theme/theme/_rels/themeManager.xml.rels']
Since the second filelist doesn't contain 'word/document.xml', the lines
doc_xml = 'word/document.xml'
text += xml2text(zipf.read(doc_xml))
throw the exception (internally, zipf.read tries to open the entry named word/document.xml).
Can anyone please help me? I don't know whether the problem is with the docx file or with the code.
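To narrow down whether the problem is in the file or in the code, you could inspect the archive before handing it to textract. A minimal diagnostic sketch (looks_like_docx is a hypothetical helper, and the filenames are placeholders):
import zipfile

def looks_like_docx(path):
    # a regular .docx is a zip archive that contains word/document.xml
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return 'word/document.xml' in zf.namelist()

print(looks_like_docx("working.docx"))
print(looks_like_docx("failing.docx"))
Judging by the namelist you posted (only [Content_Types].xml, _rels and theme parts), the failing file is probably not a regular Word document but some theme/template container renamed to .docx; opening it in Word and re-saving it as .docx would likely produce an archive that does contain word/document.xml.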
I have a list of pdf files and I need to highlight specific text on each page of these files and save a snapshot for each of the text instances.
So far I am able to highlight the text and save the entire page of a PDF file as a snapshot. But I want to find the position of the highlighted text and take a zoomed-in snapshot that is more detailed than the full-page snapshot.
I'm pretty sure there must be a solution to this problem. I am new to Python and hence I am not able to find it. I would be really grateful if someone can help me out with this.
I have tried using the PyPDF2 and PyMuPDF libraries, but I couldn't figure out a solution. I also tried highlighting by providing coordinates, which works, but I couldn't find a way to get these coordinates as output.
[Sample snapshot from the code]
#import PyPDF2
import os
import fitz
from wand.image import Image
import csv
#import re
#from pdf2image import convert_from_path

check = r'C:\Users\Pradyumna.M\Desktop\Pradyumna\Automation\Intel Bytes\Create Source Docs\Sample Check 8 Apr 2019'
dir1 = check + '\\Source Docs\\'
dir2 = check + '\\Output\\'
dirs = [dir1, dir2]
for x in dirs:
    try:
        os.mkdir(x)
    except FileExistsError:
        print("Directory ", x, " already exists")

### READ PDF FILE
with open('upload1.csv', newline='') as myfile:
    reader = csv.reader(myfile)
    for row in reader:
        rowarray = '; '.join(row)
        src = rowarray.split("; ")
        file = check + '\\' + src[4] + '.pdf'
        print(file)
        #pdfFileObj = open(file, 'rb')
        #pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
        #print("Total number of pages: " + str(pdfReader.numPages))
        doc = fitz.open(file)
        print(src[5])
        for i in range(int(src[5]) - 1, int(src[5])):
            i = int(i)
            page = doc[i]
            print("Processing page: " + str(i))
            text = src[3]
            # SEARCH TEXT
            print("Searching: " + text)
            text_instances = page.searchFor(text)
            for inst in text_instances:
                highlight = page.addHighlightAnnot(inst)
            file1 = check + '\\Output\\' + src[4] + '_output.pdf'
            print(file1)
            doc.save(file1, garbage=4, deflate=True, clean=True)
            ### Screenshot
            with Image(filename=file1, resolution=150) as source:
                images = source.sequence
                newfilename = check + "\\Source Docs\\" + src[0] + '.jpeg'
                Image(images[i]).save(filename=newfilename)
                print("Screenshot of " + src[0] + " saved")
"couldn't find a way to get these coordinates as output"
- you can get the coordinates out by doing this:
for inst in text_instances:
    print(inst)
Each inst is a fitz.Rect object, which contains the top-left and bottom-right coordinates of the piece of text that was found. All the information is available in the docs.
I managed to highlight points and also save a cropped region using the following snippet of code. I am using python 3.7.1 and my output for fitz.version is ('1.14.13', '1.14.0', '20190407064320').
import fitz

doc = fitz.open("foo.pdf")
inst_counter = 0
for pi in range(doc.pageCount):
    page = doc[pi]
    text = "hello"
    text_instances = page.searchFor(text)

    five_percent_height = (page.rect.br.y - page.rect.tl.y) * 0.05
    for inst in text_instances:
        inst_counter += 1
        highlight = page.addHighlightAnnot(inst)

        # define a suitable cropping box which spans the whole page
        # and adds padding around the highlighted text
        tl_pt = fitz.Point(page.rect.tl.x, max(page.rect.tl.y, inst.tl.y - five_percent_height))
        br_pt = fitz.Point(page.rect.br.x, min(page.rect.br.y, inst.br.y + five_percent_height))
        hl_clip = fitz.Rect(tl_pt, br_pt)

        zoom_mat = fitz.Matrix(2, 2)
        pix = page.getPixmap(matrix=zoom_mat, clip=hl_clip)
        pix.writePNG(f"pg{pi}-hl{inst_counter}.png")
doc.close()
I tested this on a sample PDF that I peppered with "hello".
I composed the solution out of the following pages of the documentation:
Tutorial page, to get introduced to the library
page.searchFor to figure out the return type of the searchFor method
fitz.Rect to understand what the returned objects from page.searchFor are
Collection of Recipes page (called faq in the URL) to figure out how to crop and save part of a pdf page
I have a script that parses HTML and saves the images to disk.
However, for some reason it writes the filenames incorrectly: the files are not saved with the correct extension on Windows. E.g., an image should be saved as <filename>.jpg or <filename>.gif, but instead the images are saved with no filename extension.
Could you help me see why this script is not saving the extension correctly in the filename?
I'm running Python 2.7.
""" Tumbrl downloader
This program will download all the images from a Tumblr blog """
from urllib import urlopen, urlretrieve
import os, sys, re

def download_images(images, path):
    for im in images:
        print(im)
        filename = re.findall("([^/]*).(?:jpg|gif|png)", im)[0]
        filename = os.path.join(path, filename)
        try:
            urlretrieve(im, filename.replace("500", "1280"))
        except:
            try:
                urlretrieve(im, filename)
            except:
                print("Failed to download " + im)
def main():
    # Check input arguments
    if len(sys.argv) < 2:
        print("usage: ./tumblr_rip.py url [starting page]")
        sys.exit(1)
    url = sys.argv[1]
    if len(sys.argv) == 3:
        pagenum = int(sys.argv[2])
    else:
        pagenum = 1
    if (check_url(url) == ""):
        print("Error: Malformed url")
        sys.exit(1)
    if (url[-1] != "/"):
        url += "/"
    blog_name = url.replace("http://", "")
    blog_name = re.findall("(?:.[^\.]*)", blog_name)[0]
    current_path = os.getcwd()
    path = os.path.join(current_path, blog_name)
    # Create blog directory
    if not os.path.isdir(path):
        os.mkdir(path)
    html_code_old = ""
    while(True):
        # fetch html from url
        print("\nFetching images from page " + str(pagenum) + "\n")
        f = urlopen(url + "page/" + str(pagenum))
        html_code = f.read()
        html_code = str(html_code)
        if(check_end(html_code, html_code_old, pagenum)):
            break
        images = get_images_page(html_code)
        download_images(images, path)
        html_code_old = html_code
        pagenum += 1
    print("Done downloading all images from " + url)

if __name__ == '__main__':
    main()
The line
filename = re.findall("([^/]*).(?:jpg|gif|png)", im)[0]
does not do what you think it does. First off, the dot is unescaped, meaning it will match any character, not just a period.
But the bigger problem is that you messed up the groups. You're accessing the value of the first group in the match, which is the first part inside parentheses, giving you only the base filename without the extension. The second group, containing the extension, is a separate, non-capturing group. The (?:...) syntax makes a group non-capturing.
The way I fixed it was by putting a group around the entire match and making the existing groups non-capturing:
re.findall("((?:[^/]*)\.(?:jpg|gif|png))", im)[0]
P.S. Another problem is that the pattern is greedy, so it can match multiple filenames at once. However, this isn't necessarily invalid, since spaces and periods are allowed in filenames. So if you want to match multiple filenames here, you'll have to figure out what to do yourself. Something like "((?:\w+)\.(?:jpg|gif|png))" would be more intuitive, though.
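For what it's worth, a quick sanity check of the corrected pattern against a couple of made-up URLs:
import re

for im in ["http://example.com/media/photo_500.jpg",
           "http://example.com/media/anim.gif"]:
    # group 1 now covers the whole "name.extension" match
    print(re.findall(r"((?:[^/]*)\.(?:jpg|gif|png))", im)[0])
# photo_500.jpg
# anim.gif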