PDF to XML conversion using PDFX http://pdfx.cs.man.ac.uk/ - python

I'm aware that PDFX is a rule-based system designed to reconstruct the logical structure of scholarly articles in PDF form, regardless of their formatting style. The system's output is an XML document that describes the input article's logical structure in terms of title, sections, tables, references, etc.
I've been trying to convert some PDF files into XML using PDFX on python but http://pdfx.cs.man.ac.uk/ is not responding.
The code I use for the conversion is:
import requests
response = requests.post('http://pdfx.cs.man.ac.uk/', headers=headers, data=data)
Is it still available? Is there any other option to convert the documents reconstructing the structure of scholarly articles?
Thanks in advance!

From the research I've been doing these past few days, I found a similar tool called GROBID.
Home page: https://grobid.readthedocs.io/en/latest/
GitHub: https://github.com/kermitt2/grobid
It is machine learning software for extracting information from scholarly documents.
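For reference, GROBID exposes a REST API that can be called directly with requests, much like the PDFX call above. A minimal sketch, assuming a GROBID service running locally on its default port 8070 and a placeholder file name:
import requests

# POST the PDF to the full-text processing endpoint; the response is TEI XML
GROBID_URL = "http://localhost:8070/api/processFulltextDocument"

with open("paper.pdf", "rb") as f:
    response = requests.post(GROBID_URL, files={"input": f})

response.raise_for_status()
tei_xml = response.text  # title, sections, references, etc. in TEI form
print(tei_xml[:500])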

Related

Navigate through a pdf file to find specific pages and extract tabular data from image with python

I've come across an assignment which requires me to extract tabular data from images in a PDF file into neatly formatted dataframes via Python code. There are several files to be processed, and the relevant pages may have different page numbers in each file, hence the sequence of steps for this problem (my assumption) is:
Navigate to relevant section of the pdf
Extract images of the tabular data
Extract data from the images, format and convert to dataframes.
Some Google searches turned up libraries for PDF text extraction, table extraction and more, but only modular solutions.
I would appreciate some help in this regard. What packages should I use? Is my approach correct?
Can I get references to any helpful code snippets for similar problems?
(Screenshot from the question: page structure of the required tables.)
This started as a comment. I believe the answer is valid as it is in no way an endorsement of the service. I don't even use it. I know Azure uses SO as well.
This is the stuff of commercial services. You can try Azure Form Recognizer (with which I am not affiliated):
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer
Here are some python examples of how to use it:
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/how-to-guides/try-sdk-rest-api?pivots=programming-language-python
The AWS equivalent is Textract https://aws.amazon.com/textract
The Google Cloud version is called Form Parser - see https://cloud.google.com/document-ai/docs/processors-list#processor_form-parser
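To make the Azure option concrete, here is a rough sketch of reading the recognized tables into dataframes, assuming the azure-ai-formrecognizer package and placeholder endpoint/key/file values (not an official recipe):
import pandas as pd
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# the prebuilt layout model detects tables, cells and their row/column indices
with open("report.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-layout", document=f)
result = poller.result()

dataframes = []
for table in result.tables:
    grid = [["" for _ in range(table.column_count)] for _ in range(table.row_count)]
    for cell in table.cells:
        grid[cell.row_index][cell.column_index] = cell.content
    # treat the first row as the header row (an assumption; adjust per document)
    dataframes.append(pd.DataFrame(grid[1:], columns=grid[0]))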

Extract headings and sub headings from PDF Parsing with Python 3

I'm trying to parse the PDF into HTML, and then I would like to extract the headings and subheadings from the tags. The PDF document was generated by Microsoft Word, so I'm pretty sure there must be a way to get those headings.
I have tried parsing with Apache Tika and PDFMiner.six, but so far the HTML I get doesn't have any tags I could use to extract the headings and subheadings of the document.
I wonder if there is a way to do it, would appreciate any help. Thank you
I suggest you use GROBID, which is a machine learning library for extracting, parsing and re-structuring raw documents such as PDF into structured XML/TEI encoded documents, with a particular focus on technical and scientific publications.
A simple python client for GROBID REST services is available at https://github.com/kermitt2/grobid-client-python
This Python client can be used to process a set of PDFs in a given directory with the GROBID service. Results are written to a given output directory and include the resulting XML/TEI representation of each PDF.
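A rough usage sketch based on that client's README (directory names and the config file are placeholders):
from grobid_client.grobid_client import GrobidClient

# config.json points the client at a running GROBID server
client = GrobidClient(config_path="./config.json")
client.process(
    "processFulltextDocument",  # GROBID service to call
    "./pdfs",                   # input directory containing the PDF files
    output="./tei_out",         # output directory for the TEI XML results
    n=10,                       # number of concurrent workers
)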
Hope this helps.

How to extract tables data from pdf in python? [duplicate]

I am trying to extract a table from a PDF document (example). It's not a scan/an image, so please focus on non-OCR solutions. OCR table extraction is here.
I tried the route of pdf -> html -> extract table. The PDF mentioned above produces garbage when converted to HTML, maybe because of the font; the document is not in English.
Extracting from the PDF using x and y coordinates is not an option, as this solution needs to work for future PDFs from the URL mentioned above, which will contain the table but not always in the same position.
The PDF does not contain explicit table data. It only contains lines and character glyphs which we tend to interpret as tables. Thus your task involves putting our human table recognition capabilities into code which is quite a task.
Generally speaking, if you are sure enough future PDFs will be generated by the same software in a very similar manner, it might be worth the time to investigate the file for some easy to follow hints to recognize the contents of individual fields.
Your specific document, though, has an additional shortcoming: It does not contain the required information for direct text extraction! You can try copying & pasting from Adobe Reader and you'll get (at least I do) semi-random characters from the WinAnsi range.
This is due to the fact that all fonts in the document claim to use WinAnsiEncoding even though the characters referenced this way are definitely not from the WinAnsi character selection.
Thus reliable text extraction from your document without OCR is impossible after all!
(Trying copy&paste from Adobe Reader generally is a good first test whether text extraction is feasible at all; the text extraction methods of the Reader have been developed for many many years and, therefore, have become quite good. If you cannot extract anything sensible with Acrobat Reader, text extraction will be a very difficult task indeed.)
You could use Tabula:
http://tabula.nerdpower.org
It's free and kinda easy to use
One option is to use pdf-table-extract: https://github.com/ashima/pdf-table-extract.
Extracting tables from PDF documents is extremely hard as PDF does not contain a semantic layer.
Camelot
You can try camelot, maybe even in combination with its web interface excalibur:
>>> import camelot
>>> tables = camelot.read_pdf('foo.pdf')
>>> tables
<TableList n=1>
>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite
>>> tables[0]
<Table shape=(7, 7)>
>>> tables[0].parsing_report
{
'accuracy': 99.02,
'whitespace': 12.24,
'order': 1,
'page': 1
}
>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite
>>> tables[0].df # get a pandas DataFrame!
See also python-camelot
Tabula
tabula can be installed via
pip install tabula-py
But it requires Java, as tabula-py is only a wrapper for the Java project.
It's used like this:
import tabula
# Read pdf into list of DataFrame
dfs = tabula.read_pdf("test.pdf", pages='all')
See also:
Reading a specific table with tabula
tabula
AWS Textract
I haven't tried it recently, but AWS Textract claims:
Amazon Textract can extract tables in a document, and extract cells, merged cells, and column headers within a table.
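A minimal sketch of calling Textract with table detection enabled, assuming boto3 and configured AWS credentials (the synchronous API accepts images and single-page PDFs; multi-page PDFs go through the asynchronous S3-based API):
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("example.png", "rb") as f:
    response = textract.analyze_document(
        Document={"Bytes": f.read()},
        FeatureTypes=["TABLES"],
    )

# the response is a flat list of blocks; table cells come back as CELL blocks
cells = [b for b in response["Blocks"] if b["BlockType"] == "CELL"]
print(f"Found {len(cells)} table cells")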
PdfPlumber
pdfplumber table extraction:
import pdfplumber

with pdfplumber.open("example.pdf") as pdf:
    page = pdf.pages[0]
    print(page.extract_table())
See also
Tabula vs Camelot

Web scraping - how to identify main content on a webpage

Given a news article webpage (from any major news source such as the Times or Bloomberg), I want to identify the main article content on that page and throw out the other misc elements such as ads, menus, sidebars, and user comments.
What's a generic way of doing this that will work on most major news sites?
What are some good tools or libraries for data mining? (preferably python based)
There are a number of ways to do it, but, none will always work. Here are the two easiest:
if it's a known finite set of websites: in your scraper convert each url from the normal url to the print url for a given site (cannot really be generalized across sites)
Use the arc90 readability algorithm (reference implementation is in javascript) http://code.google.com/p/arc90labs-readability/ . The short version of this algorithm is it looks for divs with p tags within them. It will not work for some websites but is generally pretty good.
A while ago I wrote a simple Python script for just this task. It uses a heuristic to group text blocks together based on their depth in the DOM. The group with the most text is then assumed to be the main content. It's not perfect, but works generally well for news sites, where the article is generally the biggest grouping of text, even if broken up into multiple div/p tags.
You'd use the script like: python webarticle2text.py <url>
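For reference, a rough sketch of that kind of depth-grouping heuristic (not the linked script itself), assuming requests and BeautifulSoup:
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

def extract_main_text(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # drop elements that never hold article text
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()

    # group text nodes by the depth of their position in the DOM
    groups = defaultdict(list)
    for node in soup.find_all(string=True):
        text = node.strip()
        if text:
            groups[len(list(node.parents))].append(text)

    if not groups:
        return ""
    # the depth level with the most accumulated text is assumed to be the article
    best = max(groups, key=lambda d: sum(len(t) for t in groups[d]))
    return "\n".join(groups[best])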
There's no way to do this that's guaranteed to work, but one strategy you might use is to try to find the element with the most visible text inside of it.
Diffbot offers a free (10,000 URLs) API to do that; I don't know if that approach is what you are looking for, but it might help someone: http://www.diffbot.com/
Check the following script. It is really amazing:
from newspaper import Article
URL = "https://www.ksat.com/money/philippines-stops-sending-workers-to-qatar"
article = Article(URL)
article.download()
print(article.html)
article.parse()
print(article.authors)
print(article.publish_date)
#print(article.text)
print(article.top_image)
print(article.movies)
article.nlp()
print(article.keywords)
print(article.summary)
More documentation can be found at http://newspaper.readthedocs.io/en/latest/ and https://github.com/codelucas/newspaper. You should install it using:
pip3 install newspaper3k
For a solution in Java have a look at https://github.com/kohlschutter/boilerpipe :
The boilerpipe library provides algorithms to detect and remove the surplus "clutter" (boilerplate, templates) around the main textual content of a web page.
The library already provides specific strategies for common tasks (for example: news article extraction) and may also be easily extended for individual problem settings.
But there is also a python wrapper around this available here:
https://github.com/misja/python-boilerpipe
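A short usage sketch based on the wrapper's README (the URL is a placeholder):
from boilerpipe.extract import Extractor

# ArticleExtractor is the strategy tuned for news article pages
extractor = Extractor(extractor="ArticleExtractor", url="https://example.com/some-article")
print(extractor.getText())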
It might be more useful to extract the RSS feeds (<link type="application/rss+xml" href="..."/>) on that page and parse the data in the feed to get the main content.
Another possibility of separating "real" content from noise is by measuring HTML density of the parts of a HTML page.
You will need a bit of experimentation with the thresholds to extract the "real" content, and I guess you could improve the algorithm by applying heuristics to specify the exact bounds of the HTML segment after having identified the interesting content.
Update: I just found out the URL above does not work right now; here is an alternative link to a cached version on archive.org.
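The idea can be approximated along these lines (a sketch, assuming BeautifulSoup; the candidate tags and the 200-character threshold are arbitrary placeholders to tune):
from bs4 import BeautifulSoup

def text_density(tag):
    # ratio of visible text length to raw markup length for this subtree;
    # high-density regions tend to be real content, low-density ones noise
    html_len = len(str(tag))
    return len(tag.get_text(strip=True)) / html_len if html_len else 0.0

def densest_block(html):
    soup = BeautifulSoup(html, "html.parser")
    candidates = soup.find_all(["div", "article", "section"])
    # ignore tiny blocks so short but dense snippets don't win
    candidates = [c for c in candidates if len(c.get_text(strip=True)) > 200]
    return max(candidates, key=text_density, default=None)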
There is a recent (early 2020) comparison of various methods of extracting the article body, without ads, menus, sidebars, user comments, etc. - see https://github.com/scrapinghub/article-extraction-benchmark. A report, data and evaluation scripts are available. It compares many options mentioned in the answers here, as well as some options which were not mentioned:
python-readability
boilerpipe
newspaper3k
dragnet
html-text
Diffbot
Scrapinghub AutoExtract
In short, "smart" open source libraries are adequate if you need to remove e.g. sidebar and menu, but they don't handle removal of unnecessary content inside articles, and are quite noisy overall; sometimes they remove an article itself and return nothing. Commercial services use Computer Vision and Machine Learning, which allows them to provide a much more precise output.
For some use cases simpler libraries like html-text are preferable, both to commercial services and to "smart" open source libraries - they are fast, and ensure information is not missing (i.e. recall is high).
I would not recommend copy-pasting code snippets, as there are many edge cases even for a seemingly simple task of extracting text from HTML, and there are libraries available (like html-text or html2text) which should be handling these edge cases.
To use a commercial tool, in general one needs to get an API key, and then use a client library. For example, for AutoExtract by Scrapinghub (disclaimer: I work there) you would need to install pip install scrapinghub-autoextract. There is a Python API available - see https://github.com/scrapinghub/scrapinghub-autoextract README for details, but an easy way to get extractions is to create a .txt file with URLs to extract, and then run
python -m autoextract urls.txt --page-type article --api-key <API_KEY> --output res.jl
I wouldn't try to scrape it from the web page - too many things could mess it up - but instead see which web sites publish RSS feeds. For example, the Guardian's RSS feed has most of the text from their leading articles:
http://feeds.guardian.co.uk/theguardian/rss
I don't know if The Times (The London Times, not NY) has one because it's behind a paywall. Good luck with that...

How to parse/extract data from a mediawiki marked-up article via python

Source Mediawiki markup
Right now I'm using a variety of regexes to "parse" the data in the mediawiki mark-up into lists/dictionaries, so that elements within the article can be used.
This is hardly the best method, as the number of cases that have to be handled is large.
How would one parse an article's mediawiki markup into a variety of python objects so that the data within can be used?
Example being:
Extract all headlines to a dictionary, hashing it with its section.
Grab all interwiki links, and stick them into a list (I know this can be done from the API but I'd rather only have one API call to reduce bandwidth use).
Extract all image names and hash them with their sections.
A variety of regexes can achieve the above, but I'm finding the number I have to make rather large.
Here's the mediawiki unofficial specification (I don't find their official specification as useful).
mwlib - MediaWiki parser and utility library
pediapress/mwlib:
mwlib provides a library for parsing MediaWiki articles and converting them to different output formats. mwlib is used by wikipedia's "Print/export" feature in order to generate PDF documents from wikipedia articles.
Here's the documentation page. The older doc page used to have a one-liner example:
from mwlib.uparser import simpleparse
simpleparse("=h1=\n*item 1\n*item2\n==h2==\nsome [[Link|caption]] there\n")
If you want to see how it's used in action, see the test cases that come with the code. (mwlib/tests/test_parser.py from git repository):
from mwlib import parser, expander, uparser
from mwlib.expander import DictDB
from mwlib.xfail import xfail
from mwlib.dummydb import DummyDB
from mwlib.refine import util, core

parse = uparser.simpleparse

def test_headings():
    r = parse(u"""
= 1 =
== 2 ==
= 3 =
""")
    sections = [x.children[0].asText().strip() for x in r.children if isinstance(x, parser.Section)]
    assert sections == [u"1", u"3"]
Also see Markup spec and Alternative parsers for more information.
This question is old, but for others coming here: there is a MediaWiki parser written in Python on GitHub. It seems very easy to transform articles into pure plain text, something that, if I remember correctly, I couldn't solve in the past with mwlib.
I was searching for a similar solution to parse a certain wiki, and stumbled upon Pandoc, which takes multiple input formats and generates multiple output formats too.
From the site:
Pandoc - a universal document converter
If you need to convert files from one markup format into another, pandoc is your swiss-army knife. Pandoc can convert documents in markdown, reStructuredText, textile, HTML, DocBook, LaTeX, MediaWiki markup, TWiki markup, OPML, Emacs Org-Mode, Txt2Tags, Microsoft Word docx, EPUB, or Haddock markup to
HTML formats: XHTML, HTML5, and HTML slide shows using Slidy, reveal.js, Slideous, S5, or DZSlides.
Word processor formats: Microsoft Word docx, OpenOffice/LibreOffice ODT, OpenDocument XML
Ebooks: EPUB version 2 or 3, FictionBook2
Documentation formats: DocBook, GNU TexInfo, Groff man pages, Haddock markup
Page layout formats: InDesign ICML
Outline formats: OPML
TeX formats: LaTeX, ConTeXt, LaTeX Beamer slides
PDF via LaTeX
Lightweight markup formats: Markdown (including CommonMark), reStructuredText, AsciiDoc, MediaWiki markup, DokuWiki markup, Emacs Org-Mode, Textile
Custom formats: custom writers can be written in lua.
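As a sketch, Pandoc can be driven from Python via subprocess to turn MediaWiki markup into plain text (this assumes the pandoc binary is installed and on PATH; the file name is a placeholder):
import subprocess

# convert MediaWiki markup to plain text using the pandoc CLI
result = subprocess.run(
    ["pandoc", "--from", "mediawiki", "--to", "plain", "article.wiki"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)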
