Extract text from a PDF file with protected text - python

I would like help extracting text from a PDF file with protected text (not a password-protected file).

There are several methods to extract text from a protected PDF file. One option is to use optical character recognition (OCR) software, which can recognize and convert scanned images of text into editable text. Adobe Acrobat Pro DC and ABBYY FineReader are examples of OCR software that can be used for this purpose.
Another option is to copy and paste the text manually, although this may not be feasible if the PDF file has extensive content and is heavily protected.
Keep in mind that some PDF files may have restrictions on copying, printing, or editing the content, and attempting to extract text from such files may violate copyright laws.

Related

Reading an online PDF file in python and separating data into columns - OSError

I'm having an issue getting an online PDF file into Python. Below is the code I wrote:
import PyPDF2
import pandas as pd
from PyPDF2 import PdfReader

reader = PdfReader(r"http://www.meteo.gov.lk/images/mergepdf/20221004MERGED.pdf")
text = ""
for page in reader.pages:
    text += page.extract_text() + "\n"
and this gives me an error:
OSError: [Errno 22] Invalid argument: 'http://www.meteo.gov.lk/images/mergepdf/20221004MERGED.pdf'
If we fix this, how do we separate the extracted data into separate columns using pandas?
There are three tables in this PDF file and I need the first one. I have tried so many tutorials but none of them helped me. Can anyone help me in this regard, please?
Thanks,
Snyder
Part one of your question is how to access the PDF content for extraction.
In order to view, modify, or extract the contents, the bitstream needs to be saved as an editable local file. That is why a binary DTP / printout file needs to be downloaded before viewing: every character on your browser screen was downloaded as bytes into local file storage and then converted into graphics.
The simplest method is
curl -O http://www.meteo.gov.lk/images/mergepdf/20221004MERGED.pdf
which saves a working copy locally as 20221004MERGED.pdf
The next issue is that multi-language files are a devil to extract, and this one has many errors that need editing before extraction.
In Acrobat or other viewers we see failures (on the left in the screenshot) where eastern characters are mixed in with western ones due to the language mixing, so they need corrective editing as shown on the right. Also, the underlying text for extraction, as seen by PDF readers, consists of western characters that get translated inside the PDF by glyph mapping; as extractable, searchable text they are just garbled plain text. This is what Adobe sees for search in that first line: k`l²zìq&` m[&Sw`n, where you can see the W as the third character from the right.
Seriously, there are so many font-subsetting problems to address that it is easiest to open the PDF in an editor and reset the fonts to what they should be in each language.
The fonts you need in Word, Office, etc. are Kandy (which I used to correct that word) plus these others :-
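As for part two of the question (splitting the extracted data into columns), once the text lines of the first table are cleaned up, a per-line whitespace split is usually enough for pandas. A sketch with made-up station rows and made-up column names, since the real table layout depends on the corrected fonts:

```python
import pandas as pd

# hypothetical lines as they might come out of page.extract_text() for the first table
lines = [
    "Colombo 31.0 26.2 12.4",
    "Anuradhapura 32.1 24.5 0.0",
]
rows = [line.split() for line in lines]  # one list of cells per line
df = pd.DataFrame(rows, columns=["Station", "Max", "Min", "Rainfall"])
df[["Max", "Min", "Rainfall"]] = df[["Max", "Min", "Rainfall"]].astype(float)
print(df)
```

For tables whose cells themselves contain spaces, a fixed-width or regex split per column is the safer choice.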

Retaining the font of pdf to epub

I'm currently working on a project to convert PDF to EPUB using Python. While converting the PDF to EPUB, styling like the font family and font size needs to be exactly the same in the EPUB as in the PDF. Is there a way to achieve this using Python? And I don't need any external software to do it. I used Aspose.
# code I used
import aspose.words as aw
doc = aw.Document("Input.pdf")
doc.save("Output.epub")
and it is a simple text pdf.
You are going to get a variety of answers/comments that will ask you to show code as to what you tried and post sample documents etc.
Let me save you the trouble. Your question seems straightforward in that you want to convert a PDF to EPUB and retain the style information.
Good luck.
It will all depend on your PDF file. Does it have embedded fonts or does it rely on system fonts? Complicated layout? Headers and footers? What about images? Dingbats characters? What if there is no text in the pdf, but just postscript drawing of text characters? What if the PDF just consists of multiple scans of pages in a pdf container? Is everything in English? Any Unicode characters? Are you looking to get the styles right at the page level? Paragraph? Sentence? Word? or Character Level?
Basically this is a hard problem. PDF was designed as an end use format not an interchangeable format. Most things get converted to PDF because someone wanted to control how the final product looked. You can look at text extraction tools for PDF, but there is not an easy solution with opensource or commercial tools.
You can easily convert PDF to EPUB using Aspose.Words for Python. The code is pretty simple:
import aspose.words as aw
doc = aw.Document("C:\\Temp\\in.pdf")
doc.save("C:\\Temp\\out.epub")
However, upon loading a PDF into the Aspose.Words Document Object Model, it is converted from a fixed page layout into a flow document, and when the document is saved to EPUB it is saved as a flow document. I am afraid this might lead to layout and formatting losses upon conversion.

how can I extract Chinese text from PDF using simple ‘with open’?

I need to extract PDF text using Python, but pdfminer and the others are too big to use. When using the simple "with open xxx as xxx" method, I met a problem: the content part didn't extract appropriately. The text looks like bytes because it starts with b'. My code and the result screenshot:
with open(r"C:\Users\admin\Desktop\aaa.pdf", "rb") as file:
    aa = file.readlines()
for a in aa:
    print(a)
Output Screenshot:
To generate an answer from the comments...
when using simple "with open xxx as xxx" method, I met a problem , the content part didn't extract appropriately
The reason is that PDF is not a plain text format but instead a binary format whose contents may be compressed and/or encrypted. For example the object you posted a screenshot of,
4 0 obj
<</Filter/FlateDecode/Length 210>>
stream
...
endstream
endobj
contains FLATE compressed data between stream and endstream (which is indicated by the Filter value FlateDecode).
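The filter mechanics can be illustrated with Python's standard zlib module. The stream body below is made up, but the round trip is exactly what a viewer does for a FlateDecode stream:

```python
import zlib

# a made-up content-stream body: PDF text-drawing operators
body = b"BT /F1 12 Tf 72 720 Td (Hello) Tj ET"
compressed = zlib.compress(body)        # what sits between stream and endstream
restored = zlib.decompress(compressed)  # what the viewer obtains via FlateDecode
assert restored == body
```

So the b'...' lines the asker printed are raw compressed bytes; inflating them is only the first of several steps before readable text appears.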
But even if it was not compressed or encrypted, you might still not recognize any text displayed because each PDF font object can use its own, completely custom encoding. Furthermore, glyphs you see grouped in a text line do not need to be drawn by the same drawing instruction in the PDF, you may have to arrange all the strings in drawing instructions by coordinate to be able to find the text of a text line.
(For some more details and backgrounds read this answer which focuses on the related topic of replacement of text in a PDF.)
Thus, when you say
pdfminer and others are too big to use
please consider that they are so big for a reason: They are so big because you need that much code for adequate text extraction. This is in particular true for Chinese text; for simple PDFs with English text there are some short cuts working in benign circumstances, but for PDFs with CJK text you should not expect such short cuts.
If you want to try nonetheless and implement text extraction yourself, grab a copy of ISO 32000-1 or ISO 32000-2 (Google for pdf32000 for a free copy of the former) and study that pdf specification. Based on that information you can step by step learn to parse those binary strings to pdf objects, find content streams therein, parse the instructions in those content streams, retrieve the text pieces drawn by those instructions, and arrange those pieces correctly to a whole text.
Don't expect your solution to be much smaller than pdfminer etc...

Is there any way of generating exact HTML page from a PDF file?

I am trying different Python libraries like pdftotree, pdfminer, tabula, etc., but could not get exact results. I mean I can get text from the PDF, images, and tabular data in HTML, but not as well maintained and organized as in the original PDF file. Can someone help me with something regarding this? I would be thankful.
Mostly yes. Translate the PDF to SVG, and embed the SVG in your web page.
SVG's image model (what it can represent and how) is a near-superset of the PDF image model (which is itself a superset of PostScript), though SVG lacks some of the print-specific features of PDF. There are probably quite a few PDF-to-SVG converters out there already; googling "PDF to SVG" turns up several promising hits.
There will be some complications:
Many PDF files are longer than 1 page. You might need to generate 10 SVG files for a single 10 page PDF file, and then build a web page around those 10 SVGs. Throw in some dynamic HTML to "turn pages" and you've got a good web-based PDF viewer.
There are parts of PDF that aren't within its image model at all... bookmarks, annotations (form fields, digital signatures), document metadata (author, creation date, etc), and so forth. Some of the non-image-model stuff is common enough that a PDF to SVG utility might handle it directly (links), while other stuff doesn't have an HTML equivalent and would be lost.
You could preserve the appearance of a digital signature, but the actual security represented by those visuals would be gone. Preserving that signature's appearance could be considered lying about the security.

how to extract formatted text content from PDF

How can I extract the text content (not images) from a PDF while (roughly) maintaining the style and layout like Google Docs can?
To extract the text from the PDF and get its position you can use PDFMiner. PDFMiner can also export the PDF directly to HTML, keeping the text in the right position.
I don't know your use case, but there are a lot of problems you can encounter when doing this, because PDF is really presentation oriented rather than content oriented; the text flow is not continuous. So if you want the text to be editable, it will not be an easy task.
Have you tried the pyPDF or ReportLab PDF libraries? I personally have not used them, but you can have a go at them. This may be useful too.
Xpdf has a utility called pdftotext that does a great job. http://foolabs.com/xpdf/download.html
If you want to do it just like Google:
Google converts the PDF to an image and then overlays the image, where text used to be, with JavaScript-highlightable areas (which is about like voodoo magic). The areas appear to be text when you move your cursor over them, but they're not. This might not help you directly, but that's how they do it. If you want to reverse engineer it, you might start with https://www.mercurial-scm.org/ : on the home page, they do the same thing with JavaScript to make the text highlightable and copyable. You can extract the text from the PDF and find its location on the page with one of the libraries mentioned in the other answers, then overlay an extracted image of the file with the same style of JavaScript areas.
If you don't have your heart set on doing this with python, Ghostscript can do this for you. Check out pdf2ascii (a script that comes with GS) to get the plain text. Styles are more complicated as they can be specified in a few different ways.
Acrobat Professional can do the job. In the "File" menu, choose export. Then, choose Text.
