Is there a way to convert .pdf to .csv using python?

I'm currently experimenting with tabula-py, but every documentation sample I tried for extracting PDF data failed with the following error: returned non-zero exit status 1.
So I'm curious whether there are other ways to convert data in tables on a PDF to a CSV file using Python.

The answer for tabula-py is already available on Stack Overflow and other resources, so try Camelot instead:
pip install camelot-py[cv]
import camelot
tables = camelot.read_pdf('X.pdf')
tables.export('X.csv', f='csv', compress=True)  # you can also save it in different file formats
Refer to this link for more.

If you are looking to export tables from a PDF to CSV files using Python, the best way is to use libraries like Tabula and Camelot.
First we need to extract tables from the individual pages, and then use a library like pandas to export them to CSV or other required formats; see the sketch after the link below.
However, if the documents are scanned rather than electronic, we'll have to use OCR or ML techniques to extract the tables.
Here's a blog post which has a few examples: https://nanonets.com/blog/pdf-table-to-csv/#pdf-table-extraction-to-csv-with-python
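As a minimal sketch of that two-step flow (assuming tabula-py with Java installed; "input.pdf" is a hypothetical file name):
import tabula

# Step 1: extract every table on every page into a list of pandas DataFrames
dfs = tabula.read_pdf("input.pdf", pages="all")

# Step 2: export each DataFrame to its own CSV file
for i, df in enumerate(dfs):
    df.to_csv(f"table_{i}.csv", index=False)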

Related

How to extract specific table from word or PDF using python [duplicate]

I am trying to extract a table from a PDF document (example). It's not a scan/an image, so please focus on non-OCR solutions. OCR table extraction is here.
I tried the route of pdf -> html -> extract table. The PDF mentioned above produces garbage when converted to HTML, maybe because of the font; the document is not in English.
Extracting from the PDF using x and y coordinates is not an option, as this solution needs to work for future PDFs from the URL mentioned above, which will have the table but not always in the same position.
The PDF does not contain explicit table data. It only contains lines and character glyphs which we tend to interpret as tables. Thus your task involves putting human table-recognition capabilities into code, which is quite a task.
Generally speaking, if you are sure enough that future PDFs will be generated by the same software in a very similar manner, it might be worth the time to investigate the file for some easy-to-follow hints for recognizing the contents of individual fields.
Your specific document, though, has an additional shortcoming: it does not contain the information required for direct text extraction! You can try copying and pasting from Adobe Reader, and you'll get (at least I do) semi-random characters from the WinAnsi range.
This is due to the fact that all fonts in the document claim that they use WinAnsiEncoding even though the characters referenced this way are definitely not from the WinAnsi character selection.
Thus, reliable text extraction from your document without OCR is impossible after all!
(Trying copy & paste from Adobe Reader is generally a good first test of whether text extraction is feasible at all; the Reader's text extraction methods have been developed over many, many years and have therefore become quite good. If you cannot extract anything sensible with Adobe Reader, text extraction will be a very difficult task indeed.)
You could use Tabula:
http://tabula.nerdpower.org
It's free and kinda easy to use
One option is to use pdf-table-extract: https://github.com/ashima/pdf-table-extract.
Extracting tables from PDF documents is extremely hard as PDF does not contain a semantic layer.
Camelot
You can try camelot, maybe even in combination with its web interface excalibur:
>>> import camelot
>>> tables = camelot.read_pdf('foo.pdf')
>>> tables
<TableList n=1>
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite
>>> tables[0]
<Table shape=(7, 7)>
>>> tables[0].parsing_report
{
    'accuracy': 99.02,
    'whitespace': 12.24,
    'order': 1,
    'page': 1
}
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite
>>> tables[0].df # get a pandas DataFrame!
See also python-camelot
Tabula
tabula can be installed via
pip install tabula-py
But it requires Java, as tabula-py is only a wrapper for the Java project.
It's used like this:
import tabula
# Read pdf into list of DataFrame
dfs = tabula.read_pdf("test.pdf", pages='all')
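Since the goal here is usually a CSV file, tabula-py can also write it directly via convert_into:
import tabula

# Convert all tables in the PDF straight into a single CSV file
tabula.convert_into("test.pdf", "test.csv", output_format="csv", pages="all")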
See also:
Reading a specific table with tabula
tabula
AWS Textract
I haven't tried it recently, but AWS Textract claims:
Amazon Textract can extract tables in a document, and extract cells, merged cells, and column headers within a table.
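A minimal sketch with boto3 (my assumption, not from the quoted docs; AWS credentials must be configured, and the synchronous API shown here handles single-page documents, while multi-page PDFs need the asynchronous start_document_analysis):
import boto3

textract = boto3.client("textract")

# "report.pdf" is a hypothetical single-page document
with open("report.pdf", "rb") as f:
    response = textract.analyze_document(
        Document={"Bytes": f.read()},
        FeatureTypes=["TABLES"],
    )

# Tables come back as blocks of type TABLE, with linked CELL blocks
tables = [b for b in response["Blocks"] if b["BlockType"] == "TABLE"]
print(f"Found {len(tables)} table(s)")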
PdfPlumber
pdfplumber table extraction methods:
import pdfplumber

with pdfplumber.open("example.pdf") as pdf:
    page = pdf.pages[0]
    table = page.extract_table()  # list of rows, each a list of cell strings
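To get from there to a CSV file, one option (assuming the table was found and its first extracted row is the header):
import pandas as pd

# table is the list of rows returned by extract_table() above
df = pd.DataFrame(table[1:], columns=table[0])
df.to_csv("example.csv", index=False)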
See also
Tabula vs Camelot

Navigate through a pdf file to find specific pages and extract tabular data from image with python

I've come across an assignment which requires me to extract tabular data from images in a PDF file into neatly formatted dataframes via Python code. There are several files to be processed, and the relevant pages may have different page numbers across the files, hence the sequence of steps for this problem (my assumption) is:
Navigate to the relevant section of the PDF
Extract images of the tabular data
Extract data from the images, format it, and convert it to dataframes.
Some Google searches turned up libraries for PDF text extraction, table extraction and more, but these are modular solutions only.
I would appreciate some help in this regard. What packages should I use? Is my approach correct?
Can I get references to any helpful code snippets for similar problems?
[Image: page structure of the required tables]
This started as a comment. I believe the answer is valid as it is in no way an endorsement of the service. I don't even use it. I know Azure uses SO as well.
This is the stuff of commercial services. You can try Azure Form Recognizer (with which I am not affiliated):
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer
Here are some python examples of how to use it:
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/how-to-guides/try-sdk-rest-api?pivots=programming-language-python
The AWS equivalent is Textract https://aws.amazon.com/textract
The Google Cloud version is called Form Parser - see https://cloud.google.com/document-ai/docs/processors-list#processor_form-parser
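As a rough sketch of the Azure route (assuming the azure-ai-formrecognizer package; the endpoint, key, and file name are placeholders):
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder endpoint and key -- substitute your own resource values
client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# The prebuilt-layout model detects tables, including in scanned pages
with open("scanned.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-layout", document=f)
result = poller.result()

for table in result.tables:
    print(f"Table: {table.row_count} rows x {table.column_count} columns")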

comparing PDF report to Database

I have a use case: let's say there is a PDF report which has data from the testing of some manufacturing components,
and this PDF report is loaded into a DB using some internally developed software.
We need to develop a reconciliation program wherein the data is compared between the PDF report and the database. We can assume the PDF file has a fixed template.
If there are many tables and some raw text data in the PDF, how should MySQL store this PDF data: in one table or in many tables?
Please suggest an approach (preferably in Python) for comparing the data.
Finding and extracting specific text from URL PDF files, without downloading or writing (solution): have a look at this example and see if it helps. I found it worked efficiently for me. It reads PDFs from a URL, but you could simply change the input source to be your DB.
In your case, you can remove the two if statements under the if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): line. You mention having PDFs with the same template; if you are looking to extract text from one specific area of the template, use the print statement that has been commented out to find the coordinates of the desired data, then, as is done in the example, use those coordinates in if statements.
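A minimal sketch of that coordinate filtering with pdfminer.six (the bounding-box values are hypothetical; print each box once to find the real coordinates for your template):
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextBoxHorizontal

# Hypothetical region of the fixed template to read: (x0, y0, x1, y1)
REGION = (50, 600, 300, 700)

for page_layout in extract_pages("report.pdf"):
    for obj in page_layout:
        if isinstance(obj, LTTextBoxHorizontal):
            x0, y0, x1, y1 = obj.bbox
            # print(obj.bbox, obj.get_text())  # uncomment to discover coordinates
            if x0 >= REGION[0] and y0 >= REGION[1] and x1 <= REGION[2] and y1 <= REGION[3]:
                print(obj.get_text().strip())  # value to compare against the database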

Tabula Py not detecting tables in PDF

I have used the following code but nothing is getting detected. I have also tried on various other PDF formats but getting the same result.
from tabula import read_pdf,convert_into
df=read_pdf("1415_048.pdf",output_format="dataframe",encoding='utf-8',java_options=None,multiple_tables=True)
Where the PDF looks like this: [image of the PDF page]
[]  # This is the result I'm getting
tabula-py is based on tabula-java, and it works only with text-based PDFs.
According to the Tabula website https://tabula.technology/ :
Note: Tabula only works on text-based PDFs, not scanned documents.
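One quick way to check whether a PDF has a text layer at all before handing it to tabula (a sketch using pdfminer.six, which is a separate install):
from pdfminer.high_level import extract_text

text = extract_text("1415_048.pdf")
if text.strip():
    print("The PDF has a text layer; tabula can work with it")
else:
    print("No text layer found; the PDF is likely scanned and needs OCR first")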
