For a Python web project (using Django) I developed a tool that generates an XLSX file. For ergonomics and user convenience I would like to embed this Excel file in my HTML page.
I first thought of converting the XLSX to an HTML table with the xlsx2html Python library. It works, but since I can't set the cell sizes or trim the content during conversion, I end up with huge cells and tiny text.
I found an interesting approach using the HTML tag that OneDrive provides to embed an Excel window in a web page, but since my file lives in my code base and not on Excel Online, I cannot import it that way. Still, that display is perfect, and I don't need the user to interact with the table.
I have searched a lot for other methods, but apart from writing a function that walks through my file and generates the HTML table markup line by line, I have the feeling there is no ready-made way to convert or display it on my web page.
I am not used to this kind of requirement and wonder whether there is a cleaner method to display an Excel file in HTML.
Does it make sense to write a function that builds my HTML table as a string? Or should I find a library that does it? Maybe there is a specific Django library? For illustration, here is the kind of minimal approach I am considering, assuming pandas is acceptable (the file name and CSS class are placeholders).
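import pandas as pd

# Hypothetical file name: read the generated workbook into a DataFrame.
df = pd.read_excel("report.xlsx")

# to_html() produces a plain <table>; the CSS class lets my stylesheet
# control cell sizes and text overflow instead of the converter.
table_html = df.to_html(classes="excel-preview", index=False)

The resulting string could then be passed to the template (and marked safe in Django) and styled entirely with CSS.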
Thanks for sharing your experience.
I am trying to extract a table from a PDF document (example). It's not a scan/an image, so please focus on non-OCR solutions. OCR table extraction is covered here.
I tried the route of PDF -> HTML -> extract table. The PDF mentioned above produces garbage when converted to HTML, maybe because of the font; the document is not in English.
Extracting from the PDF by x and y coordinates is not an option, as this solution needs to work for future PDFs from the URL mentioned above, which will contain the table but not always in the same position.
The PDF does not contain explicit table data. It only contains lines and character glyphs which we tend to interpret as tables. Thus your task involves putting our human table recognition capabilities into code which is quite a task.
Generally speaking, if you are sure enough that future PDFs will be generated by the same software in a very similar manner, it might be worth the time to investigate the file for some easy-to-follow hints for recognizing the contents of individual fields.
Your specific document, though, has an additional shortcoming: It does not contain the required information for direct text extraction! You can try copying & pasting from Adobe Reader and you'll get (at least I do) semi-random characters from the WinAnsi range.
This is due to the fact that all fonts in the document claim that they use WinAnsiEncoding even though the characters referenced this way definitively are not from the WinAnsi character selection.
Thus reliable text extraction from your document without OCR is impossible after all!
(Trying copy & paste from Adobe Reader generally is a good first test of whether text extraction is feasible at all; the Reader's text extraction methods have been developed for many, many years and have therefore become quite good. If you cannot extract anything sensible with Adobe Reader, text extraction will be a very difficult task indeed.)
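A programmatic version of that quick test, as a minimal sketch assuming pdfminer.six is installed (the file name is a placeholder):

from pdfminer.high_level import extract_text

# Quick feasibility check: if the text extracted here is garbage,
# any non-OCR table extractor will produce garbage, too.
text = extract_text("example.pdf")
print(text[:1000])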
You could use Tabula:
http://tabula.nerdpower.org
It's free and fairly easy to use.
One option is to use pdf-table-extract: https://github.com/ashima/pdf-table-extract.
Extracting tables from PDF documents is extremely hard as PDF does not contain a semantic layer.
Camelot
You can try camelot, maybe even in combination with its web interface excalibur:
>>> import camelot
>>> tables = camelot.read_pdf('foo.pdf')
>>> tables
<TableList n=1>
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite
>>> tables[0]
<Table shape=(7, 7)>
>>> tables[0].parsing_report
{
'accuracy': 99.02,
'whitespace': 12.24,
'order': 1,
'page': 1
}
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite
>>> tables[0].df # get a pandas DataFrame!
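If the default "lattice" flavor misses tables that have no ruling lines, camelot also offers camelot.read_pdf('foo.pdf', flavor='stream'), which infers the table structure from whitespace instead of drawn lines.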
See also python-camelot
Tabula
tabula can be installed via
pip install tabula-py
But it requires Java, as tabula-py is only a wrapper for the Java project.
It's used like this:
import tabula
# Read pdf into list of DataFrame
dfs = tabula.read_pdf("test.pdf", pages='all')
See also:
Reading a specific table with tabula
tabula
AWS Textract
I haven't tried it recently, but AWS Textract claims:
Amazon Textract can extract tables in a document, and extract cells, merged cells, and column headers within a table.
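For reference, a minimal sketch of the call via boto3 (the file name is hypothetical, and AWS credentials are assumed to be configured):

import boto3

client = boto3.client("textract")

with open("document.pdf", "rb") as f:
    doc_bytes = f.read()

# The synchronous API handles single-page documents; multi-page PDFs
# go through the asynchronous start_document_analysis API instead.
response = client.analyze_document(
    Document={"Bytes": doc_bytes},
    FeatureTypes=["TABLES"],
)

# Tables come back as blocks of type TABLE, with CELL blocks
# carrying the row/column indices.
tables = [b for b in response["Blocks"] if b["BlockType"] == "TABLE"]
print(f"Found {len(tables)} table(s)")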
PdfPlumber
pdfplumber's table extraction methods:
import pdfplumber

# Open the PDF and pull the first table found on the first page;
# extract_tables() would return all tables on the page.
with pdfplumber.open("example.pdf") as pdf:
    page = pdf.pages[0]
    table = page.extract_table()
See also
Tabula vs Camelot
I received an Oracle AWR report; the statistics are in files with .txt and .lst extensions.
Is there a way I can read or parse these into a CSV? Read them in Python?
I asked for another extraction in HTML. Could you please share a snippet to read an HTML AWR report?
I can't manage to read the .lst reports.
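For the HTML version, a minimal pandas sketch along these lines might work (the file name is hypothetical; read_html needs lxml or html5lib installed):

import pandas as pd

# An AWR HTML report contains many <table> elements;
# read_html returns one DataFrame per table.
tables = pd.read_html("awr_report.html")
print(f"Parsed {len(tables)} tables")

# Each table can then be written out as CSV, e.g. the first one:
tables[0].to_csv("awr_table_0.csv", index=False)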
I've come across an assignment that requires me to extract tabular data from images in a PDF file into neatly formatted DataFrames via Python code. There are several files to process, and the relevant pages may sit at different page numbers in each file, so the sequence of steps for this problem (my assumption) is:
1. Navigate to the relevant section of the PDF.
2. Extract images of the tabular data.
3. Extract the data from the images, format it, and convert it to DataFrames.
Some Google searches turned up libraries for PDF text extraction, table extraction, and more, but these are modular solutions that each cover only one step.
I would appreciate some help in this regard. What packages should I use? Is my approach correct?
Can I get references to any helpful code snippets for similar problems?
[Image: page structure of the required tables]
This started as a comment. I believe the answer is valid, as it is in no way an endorsement of the service; I don't even use it. I know Azure uses SO as well.
This is the stuff of commercial services. You can try Azure Form Recognizer (with which I am not affiliated):
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer
Here are some python examples of how to use it:
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/how-to-guides/try-sdk-rest-api?pivots=programming-language-python
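For a rough idea, a minimal sketch based on the azure-ai-formrecognizer SDK (endpoint, key, and file name are placeholders):

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholders: use your own resource endpoint and key.
client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

with open("document.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-layout", document=f)
result = poller.result()

# The layout model reports detected tables with per-cell row/column
# indices, which map naturally onto a DataFrame.
for table in result.tables:
    print(f"Table: {table.row_count} rows x {table.column_count} columns")
    for cell in table.cells:
        print(cell.row_index, cell.column_index, cell.content)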
The AWS equivalent is Textract https://aws.amazon.com/textract
The Google Cloud version is called Form Parser - see https://cloud.google.com/document-ai/docs/processors-list#processor_form-parser
I'm aware that PDFX is a rule-based system designed to reconstruct the logical structure of scholarly articles in PDF form, regardless of their formatting style. The system's output is an XML document that describes the input article's logical structure in terms of title, sections, tables, references, etc.
I've been trying to convert some PDF files into XML using PDFX from Python, but http://pdfx.cs.man.ac.uk/ is not responding.
The code I use for the conversion is:
import requests

headers = {'Content-Type': 'application/pdf'}  # PDFX expected the raw PDF bytes
data = open('article.pdf', 'rb').read()
response = requests.post('http://pdfx.cs.man.ac.uk/', headers=headers, data=data)
Is it still available? Is there any other option to convert the documents reconstructing the structure of scholarly articles?
Thanks in advance!
From the research I've been doing these days, I found a similar tool called GROBID.
Home page: https://grobid.readthedocs.io/en/latest/
GitHub: https://github.com/kermitt2/grobid
It is machine-learning software for extracting information from scholarly documents.
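GROBID runs as a local service; a minimal sketch of calling its REST API (assuming the service is running on the default port 8070; the file name is a placeholder):

import requests

# processFulltextDocument returns TEI XML describing the article's
# logical structure (title, sections, references, ...), much like PDFX did.
with open("paper.pdf", "rb") as f:
    response = requests.post(
        "http://localhost:8070/api/processFulltextDocument",
        files={"input": f},
    )

response.raise_for_status()
print(response.text[:500])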