Page breaks for images in Python fpdf

I want to insert some lengthy images (each image can extend to 2 pages) using Python fpdf, but the images don't continue onto the next page once a page break is triggered. How can I print the rest of an image on the next page, starting from a given set_y value (header included)?

Although I believe there is a way to do this with fpdf, I suggest you switch to a pdfkit / wkhtmltopdf based solution.
I had many similar issues with reportlab, which is very much like fpdf, then switched to wkhtmltopdf and it feels much better.
With wkhtmltopdf you control the layout with the standard web stack: use CSS and JavaScript, use your favorite template engine, then convert the HTML to PDF. It is easier and much more powerful.
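For illustration, a minimal sketch of that route using pdfkit (my own example, not from the question; the file names are placeholders and newer wkhtmltopdf builds may need the local-file-access flag shown):
import pdfkit  # pip install pdfkit, plus the wkhtmltopdf binary on PATH

# Use absolute paths for local images if relative paths fail to resolve.
html = """
<html><body>
  <img src="1.jpg" style="width:100%">
  <img src="2.jpg" style="width:100%">
</body></html>
"""
pdfkit.from_string(html, "out.pdf", options={"enable-local-file-access": ""})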

If your pdf-dataset is a data string, you can start by searching for the "\r" and/or (multiple) '\n' character(s) using code along these lines:
chars = ['\r', '\n', '\n\n']
for item in chars:
    pdf_data = pdf_data.replace(item, '')
In case it is formatted strings in a list:
for list_item in pdf_data:
    for item in chars:
        list_item = list_item.replace(item, '')
In case you read directly from a file, use open() and readline().

Just an example of my above comment,
from fpdf import FPDF
from PIL import Image
pdf = FPDF()
pdf.add_page()
im = Image.open('1.jpg')
width_img, height_img = im.size
#mm = (pixels * 25.4) / dpi
# - 25.4 millimeters in an inch
w = (width_img*25.4)/300
h = (height_img*25.4)/300
pdf.image('1.jpg',0,0,w,h)
pdf.output("yourfile.pdf", "F")
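Since the question is about images that span more than one page, here is a minimal sketch of one way to continue an oversized image onto following pages: crop page-height slices with Pillow and place one slice per page. The file names, A4 page size, and 300 dpi are my assumptions, not part of the original answer.
from fpdf import FPDF
from PIL import Image

PAGE_H_MM = 297      # A4 page height
DPI = 300            # assumed image resolution
MM_PER_INCH = 25.4

im = Image.open('1.jpg')
width_px, height_px = im.size
page_h_px = int(PAGE_H_MM * DPI / MM_PER_INCH)   # pixels that fit on one page

pdf = FPDF()
top = 0
slice_no = 0
while top < height_px:
    # crop the next page-sized slice of the tall image
    slice_img = im.crop((0, top, width_px, min(top + page_h_px, height_px)))
    slice_path = f'slice_{slice_no}.jpg'
    slice_img.save(slice_path)
    pdf.add_page()
    w_mm = width_px * MM_PER_INCH / DPI
    h_mm = slice_img.size[1] * MM_PER_INCH / DPI
    pdf.image(slice_path, 0, 0, w_mm, h_mm)
    top += page_h_px
    slice_no += 1
pdf.output('yourfile.pdf')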

Related

How to trim (crop) bottom whitespace of a PDF document, in memory

I am using wkhtmltopdf to render a (Django-templated) HTML document to a single-page PDF file. I would like to either render it immediately with the correct height (which I've failed to do so far) or render it incorrectly and trim it. I'm using Python.
Attempt type 1:
wkhtmltopdf render to a very, very long single-page PDF with a lot of extra space using --page-height
Use pdfCropMargins to trim: crop(["-p4", "100", "0", "100", "100", "-a4", "0", "-28", "0", "0", "input.pdf"])
The PDF is rendered perfectly with 28 units of margin at the bottom, but I had to use the filesystem to execute the crop command. It seems that the tool expects an input file and output file, and also creates temporary files midway through. So I can't use it.
Attempt type 2:
wkhtmltopdf render to multi-page PDF with default parameters
Use PyPDF4 (or PyPDF2) to read the file and combine pages into a long, single page
The PDF is rendered fine-ish in most cases; however, sometimes a lot of extra white space can be seen at the bottom if, by chance, the last PDF page had very little content.
Ideal scenario:
The ideal scenario would involve a function that takes HTML and renders it into a single-page PDF with the expected amount of white space at the bottom. I would be happy with rendering the PDF using wkhtmltopdf, since it returns bytes, and later processing these bytes to remove any extra white space. But I don't want to involve the file system in this; instead, I want to perform all operations in memory. Perhaps I can somehow inspect the PDF directly and remove the white space manually, or do some HTML magic to determine the render height beforehand?
What am I doing now:
Note that pdfkit is a wkhtmltopdf wrapper
# This is not a valid HTML (includes Django-specific stuff)
template: Template = get_template("some-django-template.html")
# This is now valid HTML
rendered = template.render({
    "foo": "bar",
})
# This first renders PDF from HTML normally (multiple pages)
# Then counts how many pages were created and determines the required single-page height
# Then renders a single-page PDF from HTML using the page height and width arguments
return pdfkit.from_string(rendered, options={
    "page-height": f"{297 * PdfFileReader(BytesIO(pdfkit.from_string(rendered))).getNumPages()}mm",
    "page-width": "210mm"
})
It's equivalent to Attempt type 2, except I don't use PyPDF4 here to stitch the pages together, but instead render again with wkhtmltopdf using the precomputed page height.
There might be better ways to do this, but this at least works.
I'm assuming that you are able to crop the PDF yourself, and all I'm doing here is determining how far down on the last page you still have content. If that assumption is wrong, I could probably figure out how to crop the PDF. Or otherwise, just crop the image (easy in Pillow) and then convert that to PDF?
Also, if you have one big PDF, you might need to figure out how far down on the whole PDF the text ends. I'm just finding out how far down on the last page the content ends. But converting from one to the other is just an easy arithmetic problem (see the small sketch below).
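For illustration, a tiny sketch of that arithmetic (my own, with made-up numbers):
page_count = 3                     # pages in the multi-page PDF
percent_down_last_page = 40.0      # what the detection code below computes
# fraction of the whole stitched document that contains content
percent_down_whole_pdf = ((page_count - 1) + percent_down_last_page / 100) / page_count * 100
print(percent_down_whole_pdf)      # 80.0 for these numbers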
Tested code:
import pdfkit
from PyPDF2 import PdfFileReader
from io import BytesIO
# This library isn't named fitz on pypi,
# obtain this library with `pip install PyMuPDF==1.19.4`
import fitz
# `pip install Pillow==8.3.1`
from PIL import Image
import numpy as np
# However you arrive at valid HTML, it makes no difference to the solution.
rendered = "<html><head></head><body><h3>Hello World</h3><p>hello</p></body></html>"
# This first renders PDF from HTML normally (multiple pages)
# Then counts how many pages were created and determines the required single-page height
# Then renders a single-page PDF from HTML using the page height and width arguments
pdf_bytes = pdfkit.from_string(rendered, options={
    "page-height": f"{297 * PdfFileReader(BytesIO(pdfkit.from_string(rendered))).getNumPages()}mm",
    "page-width": "210mm"
})
# convert the pdf into an image.
pdf = fitz.open(stream=pdf_bytes, filetype="pdf")
last_page = pdf[pdf.pageCount-1]
matrix = fitz.Matrix(1, 1)
image_pixels = last_page.get_pixmap(matrix=matrix, colorspace="GRAY")
image = Image.frombytes("L", [image_pixels.width, image_pixels.height], image_pixels.samples)
#Uncomment if you want to see.
#image.show()
# Now figure out where the end of the text is:
# First binarize. This might not be the most efficient way to do this.
# But it's how I do it.
THRESHOLD = 100
# I wrote this code ages ago and don't remember the details but
# basically, we treat every pixel > 100 as a white pixel,
# We convert the result to a true/false matrix
# And then invert that.
# The upshot is that, at the end, a value of "True"
# in the matrix will represent a black pixel in that location.
binary_matrix = np.logical_not(image.point( lambda p: 255 if p > THRESHOLD else 0 ).convert("1"))
# Now find the last row that still contains content (a black pixel), starting from the bottom
row_count, column_count = binary_matrix.shape
last_row = 0
for i, row in enumerate(reversed(binary_matrix)):
    if any(row):
        last_row = i
        break
percentage_from_top = (1 - last_row / row_count) * 100
print(percentage_from_top)
# Now you know where the page ends.
# Go back and crop the PDF accordingly.
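One possible continuation (my own sketch, not part of the tested code above): shrink the last page's CropBox in memory with PyMuPDF so no filesystem round-trip is needed. set_cropbox and write are what I believe this PyMuPDF release exposes; verify against your installed version.
content_height = last_page.rect.height * percentage_from_top / 100
last_page.set_cropbox(fitz.Rect(0, 0, last_page.rect.width, content_height))
cropped_pdf_bytes = pdf.write()   # cropped PDF, still entirely in memory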

How do I extract text in the right order from PDF using PyPDF2?

I am currently doing a project to extract the contents of a PDF. The code runs smoothly and I am able to extract the text but the extracted text are not in the right order. The code extracts the text in a weird way. The order of the text is all over the place. It does not go from top to bottom and is really confusing.
I looked up online but there was very little help on how to order the text extraction. Most tutorials came up with the same result. For reference, this is the PDF that I am currently testing it on (page 5): https://www.pidm.gov.my/PIDM/files/13/134b5c79-5319-4199-ac68-99f62aca6047.pdf
import PyPDF2
with open('pdftest2.pdf', 'rb') as pdfTest:
    reader = PyPDF2.PdfFileReader(pdfTest)
    page5 = reader.getPage(4)
    text = page5.extractText()
    print(text)
The extracted text always starts with the footer of the page and then works its way from bottom to top. I noticed that on the next page it would go from top to bottom, but only for a few sentences. Then it would extract text from a different position on the page instead of continuing from where it left off.
All of the text does get extracted, but the order in which it is extracted is all over the place. Is there any solution to this problem?
I had to deal with a similar problem, and it turned out that the module pdfplumber worked better than PyPDF. I guess it depends on the document itself; you should try it.
Otherwise, another answer to your problem would be to treat the PDFs as images with the pdf2image module and extract the text from them using pytesseract. It might not be a perfect method, though, as pdf2image's convert_from_path can take quite a long time to run.
I'll drop some code down here in case you are interested.
First of all, make sure you install all the necessary dependencies as well as Tesseract and ImageMagick. You can find all the installation information on their websites. If you are working on Windows, there's a good Medium article on it.
To convert PDFs to images using pdf2image:
Don't forget to add your Poppler path if you are working on Windows. It should look something like r'C:\<your_path>\poppler-21.02.0\Library\bin'.
from pdf2image import convert_from_path
import os

def pdftoimg(fic, output_folder, poppler_path):
    # Store all the pages of the PDF in a variable
    pages = convert_from_path(fic, dpi=500, output_folder=output_folder, thread_count=9, poppler_path=poppler_path)
    image_counter = 0
    # Iterate through all the pages stored above
    for page in pages:
        filename = "page_" + str(image_counter) + ".jpg"
        page.save(output_folder + filename, 'JPEG')
        image_counter = image_counter + 1
    # Clean up the intermediate .ppm files that pdf2image leaves behind
    for i in os.listdir(output_folder):
        if i.endswith('.ppm'):
            os.remove(output_folder + i)
To extract text from the image:
Your Tesseract path is going to be something like: r'C:\Program Files\Tesseract-OCR\tesseract.exe'
import pytesseract
from PIL import Image

def imgtotext(img, tesseract_path):
    # Recognize the text in the image as a string using pytesseract
    pytesseract.pytesseract.tesseract_cmd = tesseract_path
    text = str(pytesseract.image_to_string(Image.open(img)))
    text = text.replace('-\n', '')
    return text
I recently started using PyMuPDF. Its licensing is a little confusing, but some of its methods can correctly sort the text as it naturally appears (left to right, top to bottom). Something like page.get_text("words", sort=True) is all it takes.
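For completeness, a small sketch of that call (my own illustration; "sample.pdf" is a placeholder and the sort= argument assumes a reasonably recent PyMuPDF):
import fitz  # pip install PyMuPDF

doc = fitz.open("sample.pdf")
for page in doc:
    # sort=True orders the extracted words top-to-bottom, left-to-right
    words = page.get_text("words", sort=True)
    print(" ".join(w[4] for w in words))  # w[4] is the word text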

extract specific text from image

I'm trying to extract specific text (or the whole text and then parse it) from an image.
The image is in Hebrew.
What I already tried in Node.js is the Tesseract library, but it does not recognize the Hebrew text well.
I also tried converting the image to PDF and then parsing the PDF, but that does not work well with Hebrew either.
Has anyone already tried to do this, maybe with Python or Node.js?
I'm trying to do something like Google Cloud Vision text detection.
Have you tried preprocessing the image you feed to Tesseract? If you haven't, I would give OpenCV contour detection a try, particularly the Hough Line Transform, and then clean the image up a bit. https://www.youtube.com/watch?v=lhMXDqQHf9g&list=PLQVvvaa0QuDeETZEOy4VdocT7TOjfSA8a&index=5 This video doesn't cover your case exactly, but if you take the time to scroll through a bit you can see how it can be useful.
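To make the preprocessing idea concrete, here is a rough sketch of my own (not the commenter's code): binarize the scan with OpenCV before handing it to Tesseract. It assumes the Hebrew traineddata ('heb') is installed and 'scan.png' is a placeholder file name.
import cv2
import pytesseract

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
# Otsu thresholding: a simple clean-up step before OCR
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(binary, lang="heb")
print(text)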
Based on our conversation in the OP, here are some options for you to consider.
Option 1:
If you are working directly with PDFs as your input file
import fitz
input_file = '/path/to/your/pdfs/'
pdf_file = input_file
doc = fitz.open(pdf_file)
noOfPages = doc.pageCount
for pageNo in range(noOfPages):
    page = doc.loadPage(pageNo)
    pageTextblocks = page.getText('blocks')  # This creates a list of items (x0, y0, x1, y1, "line1\nline2\nline3...", ...)
    pageTextblocks.sort(key=lambda block: block[3])
    for block in pageTextblocks:
        targetBlock = block[4]  # This gets the content of each block; work your logic here to get the relevant data
Option 2:
If you are working with an image as your input, you need to convert it to a PDF before processing it with the code snippet in Option 1:
doc = fitz.open(input_file)
pdfbytes = doc.convertToPDF() # open it as a pdf file
pdf = fitz.open("pdf", pdfbytes) # extract data as a pdf file
One useful tip when processing images in PyMuPDF is to use a zoom factor for better resolution if the image is hard to recognize:
zoom = 1.2 # scale the image by 120%
mat = fitz.Matrix(zoom,zoom)
Option 3:
A hybrid approach with PyMuPDF and pytesseract, since you've mentioned Tesseract. I am not sure whether this approach fits your needs for extracting Hebrew, but it's an idea. The example works on PDFs.
import fitz
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = r'/path/to/your/tesseract/cmd'
input_file = '/path/to/pdfs'
pdf_file = input_file
fullText = ""
doc = fitz.open(pdf_file)
zoom = 1.2
mat = fitz.Matrix(zoom, zoom)
noOfPages = doc.pageCount
for pageNo in range(noOfPages):
    page = doc.loadPage(pageNo)  # load each page in turn
    pix = page.getPixmap(matrix=mat)
    output = '/path/to/save/image' + str(pageNo) + '.jpg'
    pix.writePNG(output)
    print('Converting PDFs to Image ... ' + output)
    text_of_each_page = str(pytesseract.image_to_string(Image.open(output)))
    fullText += text_of_each_page
    fullText += '\n'
Hope this helps. If you need more information about PyMuPDF, its documentation has more detailed explanations to fit your needs.

Searching text in a PDF using Python? [duplicate]

This question already has answers here:
How to extract text from a PDF file?
(33 answers)
Closed 2 months ago.
Problem
I'm trying to determine what type a document is (e.g. pleading, correspondence, subpoena, etc) by searching through its text, preferably using python. All PDFs are searchable, but I haven't found a solution to parsing it with python and applying a script to search it (short of converting it to a text file first, but that could be resource-intensive for n documents).
What I've done so far
I've looked into pypdf, pdfminer, adobe pdf documentation, and any questions here I could find (though none seemed to directly solve this issue). PDFminer seems to have the most potential, but after reading through the documentation I'm not even sure where to begin.
Is there a simple, effective method for reading PDF text, either by page, line, or the entire document? Or any other workarounds?
This is called PDF mining, and it is very hard because:
PDF is a document format designed to be printed, not to be parsed. Inside a PDF document, text is in no particular order (unless order is important for printing); most of the time the original text structure is lost (letters may not be grouped as words and words may not be grouped in sentences, and the order they are placed on the paper is often random).
There are tons of software generating PDFs, and many of them are defective.
Tools like PDFminer use heuristics to group letters and words again based on their position in the page. I agree, the interface is pretty low level, but it makes more sense when you know what problem they are trying to solve (in the end, what matters is choosing how close to its neighbors a letter/word/line has to be in order to be considered part of a paragraph).
An expensive alternative (in terms of time/computing power) is generating images for each page and feeding them to OCR; it may be worth a try if you have a very good OCR.
So my answer is no, there is no such thing as a simple, effective method for extracting text from PDF files - if your documents have a known structure, you can fine-tune the rules and get good results, but it is always a gamble.
I would really like to be proven wrong.
[update]
The answer has not changed but recently I was involved with two projects: one of them is using computer vision in order to extract data from scanned hospital forms. The other extracts data from court records. What I learned is:
Computer vision is within reach of mere mortals in 2018. If you have a good sample of already classified documents, you can use OpenCV or SciKit-Image to extract features and train a machine learning classifier to determine what type a document is.
If the PDF you are analyzing is "searchable", you can get very far by extracting all the text with a tool like pdftotext and feeding it to a Bayesian filter (the same kind of algorithm used to classify SPAM); a minimal sketch of that combination follows below.
So there is no reliable and effective method for extracting text from PDF files but you may not need one in order to solve the problem at hand (document type classification).
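A minimal sketch of the pdftotext + Bayesian-filter combination, under the assumption that the pdftotext binary is on PATH and that you have a small labelled sample of documents; all file names and labels here are illustrative:
import subprocess
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def pdf_to_text(path):
    # -layout preserves the visual layout, "-" sends the extracted text to stdout
    return subprocess.run(["pdftotext", "-layout", path, "-"],
                          capture_output=True, text=True).stdout

train_paths = ["pleading1.pdf", "correspondence1.pdf", "subpoena1.pdf"]   # labelled examples
train_labels = ["pleading", "correspondence", "subpoena"]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(pdf_to_text(p) for p in train_paths)
classifier = MultinomialNB().fit(X, train_labels)

unknown = vectorizer.transform([pdf_to_text("unknown.pdf")])
print(classifier.predict(unknown)[0])   # predicted document type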
I am totally a green hand, but this script works for me:
# import packages
import PyPDF2
import re
# open the pdf file
reader = PyPDF2.PdfReader("test.pdf")
# get number of pages
num_pages = len(reader.pages)
# define key terms
string = "Social"
# extract text and do the search
for page in reader.pages:
    text = page.extract_text()
    # print(text)
    res_search = re.search(string, text)
    print(res_search)
I've written extensive systems for the company I work for to convert PDFs into data for processing (invoices, settlements, scanned tickets, etc.), and @Paulo Scardine is correct: there is no completely reliable and easy way to do this. That said, the fastest, most reliable, and least-intensive way is to use pdftotext, part of the xpdf set of tools. This tool will quickly convert searchable PDFs to a text file, which you can read and parse with Python. Hint: use the -layout argument. And by the way, not all PDFs are searchable, only those that contain text. Some PDFs contain only images with no text at all.
I recently started using ScraperWiki to do what you described.
Here's an example of using ScraperWiki to extract PDF data.
The scraperwiki.pdftoxml() function returns an XML structure.
You can then use BeautifulSoup to parse that into a navigable tree.
Here's my code:
import scraperwiki, urllib2
from bs4 import BeautifulSoup

def send_Request(url):
    # Get content, regardless of whether it is an HTML, XML or PDF file
    pageContent = urllib2.urlopen(url)
    return pageContent

def process_PDF(fileLocation):
    # Use this to get the PDF, convert it to XML
    pdfToProcess = send_Request(fileLocation)
    pdfToObject = scraperwiki.pdftoxml(pdfToProcess.read())
    return pdfToObject

def parse_HTML_tree(contentToParse):
    # returns a navigable tree, which you can iterate through
    soup = BeautifulSoup(contentToParse)
    return soup

pdf = process_PDF('http://greenteapress.com/thinkstats/thinkstats.pdf')
pdfToSoup = parse_HTML_tree(pdf)
soupToArray = pdfToSoup.findAll('text')
for line in soupToArray:
    print line
This code is going to print a whole, big ugly pile of <text> tags.
Each page is separated with a </page>, if that's any consolation.
If you want the content inside the <text> tags, which might include headings wrapped in <b> for example, use line.contents
If you only want each line of text, not including tags, use line.getText()
It's messy, and painful, but this will work for searchable PDF docs. So far I've found this to be accurate, but painful.
Here is the solution that I found comfortable for this issue. In the text variable you get the text from the PDF, so you can search in it. I have also kept the idea of splitting the text into keywords, as I found on this website: https://medium.com/#rqaiserr/how-to-convert-pdfs-into-searchable-key-words-with-python-85aab86c544f , from where I took this solution. Setting up nltk was not very straightforward, but it might be useful for further purposes:
import PyPDF2
import textract
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

def searchInPDF(filename, key):
    occurrences = 0
    pdfFileObj = open(filename, 'rb')
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
    num_pages = pdfReader.numPages
    count = 0
    text = ""
    while count < num_pages:
        pageObj = pdfReader.getPage(count)
        count += 1
        text += pageObj.extractText()
    if text == "":
        # no text layer found, fall back to OCR
        text = textract.process(filename, method='tesseract', language='eng')
    tokens = word_tokenize(text)
    punctuation = ['(', ')', ';', ':', '[', ']', ',']
    stop_words = stopwords.words('english')
    keywords = [word for word in tokens if not word in stop_words and not word in punctuation]
    for k in keywords:
        if key == k:
            occurrences += 1
    return occurrences

pdf_filename = '/home/florin/Downloads/python.pdf'
search_for = 'string'
print(searchInPDF(pdf_filename, search_for))
I agree with @Paulo: PDF data-mining is a huge pain. But you might have success with pdftotext, which is part of the Xpdf suite, freely available here:
http://www.foolabs.com/xpdf/download.html
This should be sufficient for your purpose if you are just looking for single keywords.
pdftotext is a command line utility, but very straightforward to use. It will give you text files, which you may find easier to work with.
If you are on bash, there is a nice tool called pdfgrep.
Since this is in the apt repository, you can install it with:
sudo apt install pdfgrep
It has served my requirements well.
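If you want to drive pdfgrep from Python rather than the shell, a small sketch of my own (the pattern and file name are placeholders):
import subprocess

# -n prefixes each matching line with its page number
result = subprocess.run(["pdfgrep", "-n", "Social", "test.pdf"],
                        capture_output=True, text=True)
print(result.stdout)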
Trying to pick through PDFs for keywords is not an easy thing to do. I tried to use the pdfminer library with very limited success. It’s basically because PDFs are pandemonium incarnate when it comes to structure. Everything in a PDF can stand on its own or be a part of a horizontal or vertical section, backwards or forwards. Pdfminer was having issues translating one page, not recognizing the font, so I tried another direction — optical character recognition of the document. That worked out almost perfectly.
Wand converts all the separate pages in the PDF into image blobs, then you run OCR over the image blobs. What I have as a BytesIO object is the content of the PDF file from the web request. BytesIO is a streaming object that simulates a file load as if the object was coming off of disk, which wand requires as the file parameter. This allows you to just take the data in memory instead of having to save the file to disk first and then load it.
Here’s a very basic code block that should be able to get you going. I can envision various functions that would loop through different URL / files, different keyword searches for each file, and different actions to take, possibly even per keyword and file.
# http://docs.wand-py.org/en/0.5.9/
# http://www.imagemagick.org/script/formats.php
# brew install freetype imagemagick
# brew install PIL
# brew install tesseract
# pip3 install wand
# pip3 install pyocr
import pyocr.builders
import requests
from io import BytesIO
from PIL import Image as PI
from wand.image import Image
if __name__ == '__main__':
    pdf_url = 'https://www.vbgov.com/government/departments/city-clerk/city-council/Documents/CurrentBriefAgenda.pdf'
    req = requests.get(pdf_url)
    content_type = req.headers['Content-Type']
    modified_date = req.headers['Last-Modified']
    content_buffer = BytesIO(req.content)
    search_text = 'tourism investment program'
    if content_type == 'application/pdf':
        tool = pyocr.get_available_tools()[0]
        lang = 'eng' if tool.get_available_languages().index('eng') >= 0 else None
        image_pdf = Image(file=content_buffer, format='pdf', resolution=600)
        image_jpeg = image_pdf.convert('jpeg')
        for img in image_jpeg.sequence:
            img_page = Image(image=img)
            txt = tool.image_to_string(
                PI.open(BytesIO(img_page.make_blob('jpeg'))),
                lang=lang,
                builder=pyocr.builders.TextBuilder()
            )
            if search_text in txt.lower():
                print('Alert! {} {} {}'.format(search_text, txt.lower().find(search_text),
                                               modified_date))
    req.close()
This answer follows @Emma Yu's:
If you want to print out all the matches of a string pattern on every page.
(Note that Emma's code prints a match per page):
import PyPDF2
import re
pattern = input("Enter string pattern to search: ")
fileName = input("Enter file path and name: ")
object = PyPDF2.PdfFileReader(fileName)
numPages = object.getNumPages()
for i in range(0, numPages):
    pageObj = object.getPage(i)
    text = pageObj.extractText()
    for match in re.finditer(pattern, text):
        print(f'Page no: {i} | Match: {match}')
A version using PyMuPDF. I find it to be more robust than PyPDF2.
import fitz
import re
# load document
doc = fitz.open(filename)
# define keyterms
String = "hours"
# get text, search for string and print count on page.
for page in doc:
    text = page.getText()
    print(f'count on page {page.number + 1} is: {len(re.findall(String, text))}')
Example with pdfminer.six
from pdfminer import high_level
with open('file.pdf', 'rb') as f:
    text = high_level.extract_text(f)
    print(text)
Compared to PyPDF2, it can work with Cyrillic.

Python: How to make Reportlab move to next page in PDF output

I'm using the open source version Reportlab with Python on Windows. My code loops through multiple PNG files & combines them to form a single PDF. Each PNG is stretched to the full LETTER spec (8.5x11).
Problem is, all the images saved to output.pdf are sandwiched on top of each other and only the last image added is visible. Is there something that I need to add between each drawImage() to offset to a new page? Here's a simple linear view of what I'm doing -
WIDTH,HEIGHT = LETTER
canv = canvas.Canvas('output.pdf',pagesize=LETTER)
canv.setPageCompression(0)
page = Image.open('one.png')
canv.drawImage(ImageReader(page),0,0,WIDTH,HEIGHT)
page = Image.open('two.png')
canv.drawImage(ImageReader(page),0,0,WIDTH,HEIGHT)
page = Image.open('three.png')
canv.drawImage(ImageReader(page),0,0,WIDTH,HEIGHT)
canv.save()
[Follow up of the post's comment]
Use canv.showPage() after you use canv.drawImage(...) each time.
( http://www.reportlab.com/apis/reportlab/dev/pdfgen.html#reportlab.pdfgen.canvas.Canvas.showPage )
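Applied to the question's code, the fix looks roughly like this (the imports are my assumption of what the asker already had):
from reportlab.lib.pagesizes import LETTER
from reportlab.lib.utils import ImageReader
from reportlab.pdfgen import canvas
from PIL import Image

WIDTH, HEIGHT = LETTER
canv = canvas.Canvas('output.pdf', pagesize=LETTER)
canv.setPageCompression(0)
for name in ('one.png', 'two.png', 'three.png'):
    page = Image.open(name)
    canv.drawImage(ImageReader(page), 0, 0, WIDTH, HEIGHT)
    canv.showPage()   # close the current page so the next image starts on a new one
canv.save()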
Follow the source documentation (for that matter, with any tool you are using, you should dig into its respective website documentation):
http://www.reportlab.com/apis/reportlab/dev/pdfgen.html
