Removing extra spaces in Chinese HTML files using lxml - python

I have a bunch of improperly formatted Chinese html files. They contain unnecessary spaces and line breaks which will be displayed as extra spaces in the browser. I've written a script using lxml to modify the html files. It works fine on simple tags, but I'm stuck on nested ones. For example:
<p>祝你<span>19</span>岁
生日快乐。</p>
will be displayed in the browser as:
祝你19岁 生日快乐。
Notice the extra space. This is what needs to be deleted. The result html should be like this:
<p>祝你<span>19</span>岁生日快乐。</p>
How do I do this?
Note that the nesting (like the span tag) could be arbitrary, but I don't need to consider the content of the nested elements; it should be preserved as-is. Only the text in the outer element needs to be formatted.
This is what I've got:
# -*- coding: utf-8 -*-
import lxml.html
import re
s1 = u"""<p>祝你19岁
生日快乐。</p>"""
p1 = lxml.html.fragment_fromstring(s1)
print p1.text # I get the whole line.
p1.text = re.sub(r"\s+", "", p1.text)
print lxml.html.tostring(p1) # spaces are removed.
s2 = u"""<p>祝你<span>19</span>岁
生日快乐。</p>"""
p2 = lxml.html.fragment_fromstring(s2)
print p2.text # I get "祝你"
print p2.tail # I get None
i = p2.itertext()
print i.next() # I get "祝你"
print i.next() # I get "19" from <span>
print i.next() # I get the tailed text, but how do I assemble them back?
print p2.text_content() # The whole text, but how do I put <span> back?

>>> root = etree.fromstring('<p>祝你<span>19</span>岁\n生日快乐。</p>')
>>> etree.tostring(root)
b'<p>祝你<span>19</span>岁\n生日快乐。</p>'
>>> for e in root.xpath('/p/*'):
...     if e.tail:
...         e.tail = e.tail.replace('\n', '')
...
>>> etree.tostring(root)
b'<p>祝你<span>19</span>岁生日快乐。</p>'
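The same idea generalizes to the arbitrary nesting the question asks about. A minimal sketch of my own (not part of the answer above), assuming lxml: collapse whitespace runs only in the outer element's text and in each child's tail, leaving everything inside the children untouched.
# -*- coding: utf-8 -*-
import re
import lxml.html

def strip_outer_space(elem):
    # Collapse whitespace runs in the element's own text and in each
    # child's tail; nested element content is left alone.
    ws = re.compile(r'\s+')
    if elem.text:
        elem.text = ws.sub('', elem.text)
    for child in elem:
        if child.tail:
            child.tail = ws.sub('', child.tail)

p = lxml.html.fragment_fromstring(u'<p>祝你<span>19</span>岁\n生日快乐。</p>')
strip_outer_space(p)
print(lxml.html.tostring(p, encoding='unicode'))
# <p>祝你<span>19</span>岁生日快乐。</p>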

Conversely, I wonder whether it is possible to do this without an HTML/XML parser, considering that the problem appears to be caused by line wrapping.
I built a regular expression to look for whitespace between Chinese text with the help of this solution here: https://stackoverflow.com/a/2718268/267781
I don't know whether the catch-all (any whitespace between characters) or the more specific [char]\n\s*[char] is better suited to your problem.
# -*- coding: utf-8 -*-
import re
# Whitespace in Chinese HTML
## Used this solution to create regexp: https://stackoverflow.com/a/2718268/267781
han = u'[\u2e80-\u2e99\u2e9b-\u2ef3\u2f00-\u2fd5\u3005\u3007\u3021-\u3029\u3038-\u303a\u303b\u3400-\u4db5\u4e00-\u9fc3\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\U00020000-\U0002a6d6\U0002f800-\U0002fa1d]'
## \s+ (catch-all: any whitespace run); the Han characters are captured so the sub can put them back
fixwhitespace2 = re.compile(u'(' + han + u')\\s+(' + han + u')', re.M)
## \n\s* (more specific: a linefeed plus trailing whitespace)
fixwhitespace = re.compile(u'(' + han + u')\\n\\s*(' + han + u')', re.M)
sample = u'<html><body><p>\u795d\u4f6019\u5c81\n \u751f\u65e5\u5feb\u4e50\u3002</p></body></html>'
fixwhitespace.sub(u'\\1\\2', sample)
Yielding
<html><body><p>祝你19岁生日快乐。</p></body></html>
However, here's how you might do it using the parser and xpath to find linefeeds:
# -*- coding: utf-8 -*-
from lxml import etree
import re
han = u'[\u2e80-\u2e99\u2e9b-\u2ef3\u2f00-\u2fd5\u3005\u3007\u3021-\u3029\u3038-\u303a\u303b\u3400-\u4db5\u4e00-\u9fc3\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\U00020000-\U0002a6d6\U0002f800-\U0002fa1d]'
fixwhitespace = re.compile(u'(' + han + u')\\n\\s*(' + han + u')', re.M)
sample = u'<html><body><p>\u795d\u4f6019\u5c81\n \u751f\u65e5\u5feb\u4e50\u3002</p></body></html>'
doc = etree.HTML(sample)
for t in doc.xpath("//text()[contains(.,'\n')]"):
    if t.is_tail:
        t.getparent().tail = fixwhitespace.sub(u'\\1\\2', t)
    elif t.is_text:
        t.getparent().text = fixwhitespace.sub(u'\\1\\2', t)
print etree.tostring(doc)
Yields:
<html><body><p>祝你19岁生日快乐。</p></body></html>
I'm curious what the best match to your working data is.

Related

Getting raw text from lxml

Trying to get the text from an HtmlElement in lxml. For example, I have the HTML read in by
thing = lxml.html.fromstring("<code>&lt;div&gt;</code>")
But when I call thing.text I get <div>, meaning that lxml is translating the escape sequences. Is there a way to get the raw text, i.e., &lt;div&gt;? It is part of the output when I do lxml.html.tostring(thing), but that includes the opening and closing tags, which I don't want.
Tried calling tostring with a few different encoding options but no luck.
So I looked into it a bit closer:
cdef tostring(...) in src\lxml\etree.pyx - see https://github.com/lxml/lxml/blob/master/src/lxml/etree.pyx
cdef _tostring(...) in src\lxml\serializer.pxi - see https://github.com/lxml/lxml/blob/master/src/lxml/serializer.pxi
and I couldn't find anything that would suggest you could get the escaped string by configuring the parameters of the tostring() function. It seems like it will always return the unescaped string, maybe due to security concerns...
The way I see it, you would have to use another function such as html.escape to get the escaped string:
import lxml.html
from html import escape as html_escape
thing = lxml.html.fromstring("<code>&lt;div&gt;MY TEST DIV&lt;/div&gt;</code>")
raw_thing = lxml.html.tostring(thing, method="text", encoding="unicode") # <div>MY TEST DIV</div>
escaped_thing = html_escape(raw_thing) # &lt;div&gt;MY TEST DIV&lt;/div&gt;
print(escaped_thing)
Essentially what you are looking for is lxml.html.tostring(root, method='xml', encoding="unicode"):
import lxml.html
thing = lxml.html.fromstring("<code>&lt;div&gt;MY TEST DIV&lt;/div&gt;</code>")
output = lxml.html.tostring(thing, method='xml', encoding="unicode")
print(output) # <code>&lt;div&gt;MY TEST DIV&lt;/div&gt;</code>
The problem is that it cannot separate the root element from its child in <code>&lt;div&gt;MY TEST DIV&lt;/div&gt;</code>.
However with a different approach you can get the desired output:
import xml.etree.ElementTree as ET
thing = """
<code><div>MY TEST DIV</div><div><div>AAA</div></div><div><div>BBB</div></div></code>
"""
root = ET.fromstring(thing)
root_text = ET._escape_attrib(root.text)
print(root_text)
for child in root:
child_text = ET._escape_attrib(child.text)
print(child_text)
The code above prints out:
<div>MY TEST DIV</div>
<div>AAA</div>
<div>BBB</div>

How to ignore some characters when parsing a string?

I'd like to parse a list of log strings in Python 3 via the parse module.
A string can look like:
[00:00:01.913,100] [0m<inf> main: starting v2.9.0(b2393) on Zephyr v2.6.99[0m
or
[00:00:02.789,700] [1;33m<wrn> BC127: codec config differs. set default one[0m
So far I have defined my parser as:
myParser = parse.compile("[{}] <{}> {}: {}")
Of course my parser fails. I don't know how to tell the parse module that it should discard, for example, everything between ] and <, like "]*<".
my pseudo code is
myParser = parse.compile("[{}]*<{}> {}: {}*")
Use Regular Expression:
>>> import re
>>> s = '<# """#$ FSDF > important information < not important >'
>>> re.sub('<[^>]+>', '', s)
' important information '
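Applied to the log lines in the question, one hedged approach (assuming the stray [0m / [1;33m fragments are ANSI color codes whose escape byte was lost): strip the codes with a regex first, then hand the cleaned line to parse with named fields. The field names here are illustrative:
import re
import parse  # pip install parse

ansi = re.compile(r'\x1b?\[[0-9;]*m')  # tolerates a missing ESC byte
line = '[00:00:02.789,700] \x1b[1;33m<wrn> BC127: codec config differs. set default one\x1b[0m'
clean = ansi.sub('', line)
result = parse.parse('[{time}] <{level}> {module}: {message}', clean)
print(result['module'])   # BC127
print(result['message'])  # codec config differs. set default one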

Getting parsing error when attempting to remove empty xml tags with lxml

I'm in the process of creating an xml file from a csv source, and have had some very useful input from users in this process. It appears that my last requirement (of which I've only just become aware) is to drop any empty xml tags (that is, those with no content) before calling an API.
I'm attempting to use the etree remove method to drop the empty tags, but I'm getting an error that etree.fromstring can only parse strings. Here is my sample data and code.
ACTION|INV_ACCT_CLASS|EXT_INV_ID|WAREHOUSE_ID|NAME|CNTRY_CD|PHONE|ADDR_STR1|ADDR_STR2|CITY|ST|ZIP|ADD_KEY_NUM
add|2|AAA_00005|1001213|Company 1|US|9995555555|1313 Mockingbird Lane||New York|NY|10001|44433322
add|2|BBB_00008|1004312|Company 2|US|43255511110|Some other address||Stamford|CT|44112|11122233
import lxml.etree
from lxml.builder import E
import csv
import string
from datetime import date

with open("filename.csv") as csvfile:
    results = E.paiInv(*(
        E.invrec(
            E.action(row['ACTION']),
            E.investor(
                E.inv_account_class(row['INV_ACCT_CLASS']),
                E.ext_inv_id(row['EXT_INV_ID']),
                E.warehouse_id(row['WAREHOUSE_ID']),
                E.name(row['NAME']),
                E.cntry_cd(row['CNTRY_CD']),
                E.phone(row['PHONE']),
                E.addr_str1(row['ADDR_STR1']),
                E.addr_str2(row['ADDR_STR2']),
                E.city(row['CITY']),
                E.st(row['ST']),
                E.zip(row['ZIP']),
                E.add_key_num(row['ADD_KEY_NUM'])
            )
        )
        for row in csv.DictReader(csvfile, delimiter='|')
    ))

req = '<request_id>Investor' + str(date.today()) + '</request_id>'
doc = lxml.etree.ElementTree(results)
ins = lxml.etree.fromstring(req)
ins.tail = "\n"
dest = doc.xpath('/paiInv')[0]
dest.insert(0, ins)
This gives me exactly what I need, except that if any columns in the csv are empty, I'll get empty xml tags, as would be expected. Since we're required to drop any empty tags, I've tried this code:
root = lxml.etree.fromstring(results)
for element in root.xpath(".//*[not(node())]"):
    element.getparent().remove(element)
I receive an error saying "can only parse strings". I'm trying to understand why it would find anything other than strings, and how I should tweak the code so that it will correctly drop empty tags.
Thanks!
Here's one way to handle it: change your for loop at the end to:
for element in root.xpath('//*'):
    if element.text is None:
        element.getparent().remove(element)
The output should now have removed both <addr_str2> nodes, which are empty.
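As for the "can only parse strings" error itself: results is already an lxml Element (that is what the builder returns), so it should not be passed through fromstring() again. A minimal sketch keeping the original xpath, untested against the full script:
for element in results.xpath(".//*[not(node())]"):
    element.getparent().remove(element)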

Encoding error using Python

I wrote some code to connect to IMAP and then parse the body information and insert it into a database. But I am having some problems with accents.
From email header I got this information:
Content-Type: text/html; charset=ISO-8859-1
But I am not sure I can trust this information...
The email was written in Portuguese, so there are a lot of words with accents. For example, I extracted the following phrase from the email source code (using my browser):
"...instalação de eletrônicos..."
So, I connected to imap and fetched some emails:
... typ, data = M.fetch(num, '(RFC822)') ...
When I print the content, I get the following word:
print data[0][1]
instala+º+úo de eletr+¦nicos
I tried to use .decode('utf-8') but I had no success. The text I expect is:
instalação de eletrônicos
How can I make it human-readable? My database is in utf-8.
The header says it is using "ISO-8859-1" charset. So you need to decode the string with that encoding.
Try this:
data[0][1].decode('iso-8859-1')
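If the charset varies from message to message, one option is to let the email module read it from the Content-Type header instead of hardcoding it. A sketch building on the question's imaplib fetch (Python 2, untested):
import email

msg = email.message_from_string(data[0][1])
for part in msg.walk():
    if part.get_content_maintype() == 'text':
        # get_content_charset() returns the charset declared in the headers
        charset = part.get_content_charset() or 'iso-8859-1'
        text = part.get_payload(decode=True).decode(charset)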
Specifying the source code encoding worked for me. It's the comment at the top of my example code below, and it should be defined at the top of your Python file.
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
value = """...instalação de eletrônicos...""".decode("iso-8859-15")
print value
# prints: ...instalação de eletrônicos...
import unicodedata
value = unicodedata.normalize('NFKD', value).encode('ascii','ignore')
print value
# prints: ...instalacao de eletronicos...
And now you can do str(value) without an exception as well.
See: http://docs.python.org/2/library/unicodedata.html
This seems to keep all accents:
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
import unicodedata
value = """...instalação de eletrônicos...""".decode("iso-8859-15")
value = unicodedata.normalize('NFKC', value).encode('utf-8')
print value
print str(value)
# prints (without exceptions/errors):
# ...instalação de eletrônicos...
# ...instalação de eletrônicos...
EDIT:
Note that with the last version, even though the outcome looks the same, the two values do not compare as equal. For example:
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
import unicodedata
inValue = """...instalação de eletrônicos...""".decode("iso-8859-15")
normalizedValue = unicodedata.normalize('NFKC', inValue).encode('utf-8')
try:
    print inValue == normalizedValue
except UnicodeWarning:
    pass
# False
EDIT2:
This returns the same:
normalizedValue = unicode("""...instalação de eletrônicos...""".decode("iso-8859-15")).encode('utf-8')
print normalizedValue
print str(normalizedValue)
# prints (without exceptions/errors):
# ...instalação de eletrônicos...
# ...instalação de eletrônicos...
Though I'm not sure this will actually be valid for a utf-8 encoded database. Probably not?
Thanks to Martijn Pieters. We figured out that the email had two different encodings; I had to split the parts and treat them individually.

Extracting text from HTML file using Python

I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect &#39; in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
Update html2text looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
Related questions:
Filter out HTML tags and resolve entities in python
Convert XML/HTML Entities into Unicode String in Python
The best piece of code I found for extracting text without getting JavaScript or other unwanted things:
from urllib.request import urlopen
from bs4 import BeautifulSoup
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
soup = BeautifulSoup(html, features="html.parser")
# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)
print(text)
You just have to install BeautifulSoup before :
pip install beautifulsoup4
html2text is a Python program that does a pretty good job at this.
NOTE: NLTK no longer supports the clean_html function.
Original answer below, and an alternative in the comments section.
Use NLTK
I wasted 4-5 hours fixing issues with html2text. Luckily, I came across NLTK.
It works magically.
import nltk
from urllib import urlopen
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
raw = nltk.clean_html(html)
print(raw)
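Since clean_html is gone (current NLTK raises NotImplementedError and points to BeautifulSoup instead), a drop-in replacement sketch:
from bs4 import BeautifulSoup

def clean_html(html):
    # The substitute that NLTK's own error message suggests
    return BeautifulSoup(html, "html.parser").get_text()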
Found myself facing just the same problem today. I wrote a very simple HTML parser to strip incoming content of all markup, returning the remaining text with only a minimum of formatting.
from HTMLParser import HTMLParser
from re import sub
from sys import stderr
from traceback import print_exc

class _DeHTMLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.__text = []

    def handle_data(self, data):
        text = data.strip()
        if len(text) > 0:
            text = sub('[ \t\r\n]+', ' ', text)
            self.__text.append(text + ' ')

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.__text.append('\n\n')
        elif tag == 'br':
            self.__text.append('\n')

    def handle_startendtag(self, tag, attrs):
        if tag == 'br':
            self.__text.append('\n\n')

    def text(self):
        return ''.join(self.__text).strip()


def dehtml(text):
    try:
        parser = _DeHTMLParser()
        parser.feed(text)
        parser.close()
        return parser.text()
    except:
        print_exc(file=stderr)
        return text


def main():
    text = r'''
        <html>
            <body>
                <b>Project:</b> DeHTML<br>
                <b>Description</b>:<br>
                This small script is intended to allow conversion from HTML markup to
                plain text.
            </body>
        </html>
    '''
    print(dehtml(text))


if __name__ == '__main__':
    main()
I know there are a lot of answers already, but the most elegant and pythonic solution I have found is described, in part, here.
from bs4 import BeautifulSoup
text = ' '.join(BeautifulSoup(some_html_string, "html.parser").findAll(text=True))
Update
Based on Fraser's comment, here is a more elegant solution:
from bs4 import BeautifulSoup
clean_text = ' '.join(BeautifulSoup(some_html_string, "html.parser").stripped_strings)
Here is a version of xperroni's answer which is a bit more complete. It skips script and style sections and translates character references (e.g., &#39;) and HTML entities (e.g., &amp;).
It also includes a trivial plain-text-to-html inverse converter.
"""
HTML <-> text conversions.
"""
from HTMLParser import HTMLParser, HTMLParseError
from htmlentitydefs import name2codepoint
import re
class _HTMLToText(HTMLParser):
def __init__(self):
HTMLParser.__init__(self)
self._buf = []
self.hide_output = False
def handle_starttag(self, tag, attrs):
if tag in ('p', 'br') and not self.hide_output:
self._buf.append('\n')
elif tag in ('script', 'style'):
self.hide_output = True
def handle_startendtag(self, tag, attrs):
if tag == 'br':
self._buf.append('\n')
def handle_endtag(self, tag):
if tag == 'p':
self._buf.append('\n')
elif tag in ('script', 'style'):
self.hide_output = False
def handle_data(self, text):
if text and not self.hide_output:
self._buf.append(re.sub(r'\s+', ' ', text))
def handle_entityref(self, name):
if name in name2codepoint and not self.hide_output:
c = unichr(name2codepoint[name])
self._buf.append(c)
def handle_charref(self, name):
if not self.hide_output:
n = int(name[1:], 16) if name.startswith('x') else int(name)
self._buf.append(unichr(n))
def get_text(self):
return re.sub(r' +', ' ', ''.join(self._buf))
def html_to_text(html):
"""
Given a piece of HTML, return the plain text it contains.
This handles entities and char refs, but not javascript and stylesheets.
"""
parser = _HTMLToText()
try:
parser.feed(html)
parser.close()
except HTMLParseError:
pass
return parser.get_text()
def text_to_html(text):
"""
Convert the given text to html, wrapping what looks like URLs with <a> tags,
converting newlines to <br> tags and converting confusing chars into html
entities.
"""
def f(mo):
t = mo.group()
if len(t) == 1:
return {'&':'&', "'":''', '"':'"', '<':'<', '>':'>'}.get(t)
return '%s' % (t, t)
return re.sub(r'https?://[^] ()"\';]+|[&\'"<>]', f, text)
I know there are plenty of answers here already, but I think newspaper3k also deserves a mention. I recently needed to complete a similar task of extracting the text from articles on the web, and this library has done an excellent job of achieving this so far in my tests. It ignores the text found in menu items and sidebars as well as any JavaScript that appears on the page, as the OP requests.
from newspaper import Article
article = Article(url)
article.download()
article.parse()
article.text
If you already have the HTML files downloaded you can do something like this:
article = Article('')
article.set_html(html)
article.parse()
article.text
It even has a few NLP features for summarizing the topics of articles:
article.nlp()
article.summary
You can also use the html2text method in the stripogram library.
from stripogram import html2text
text = html2text(your_html_string)
To install stripogram run sudo easy_install stripogram
There is Pattern library for data mining.
http://www.clips.ua.ac.be/pages/pattern-web
You can even decide what tags to keep:
s = URL('http://www.clips.ua.ac.be').download()
s = plaintext(s, keep={'h1':[], 'h2':[], 'strong':[], 'a':['href']})
print s
If you need more speed and less accuracy, you could use raw lxml.
import lxml.html as lh
from lxml.html.clean import clean_html

def lxml_to_text(html):
    doc = lh.fromstring(html)
    doc = clean_html(doc)
    return doc.text_content()
PyParsing does a great job. The PyParsing wiki was killed, so here is another location with examples of the use of PyParsing (example link). One reason for investing a little time in pyparsing is that its author has also written a very brief, very well-organized O'Reilly Short Cut manual that is also inexpensive.
Having said that, I use BeautifulSoup a lot, and it is not that hard to deal with the entity issues; you can convert them before you run BeautifulSoup.
Good luck!
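For a flavor of what pyparsing looks like here, a small sketch of my own (assuming the classic makeHTMLTags helper) that pulls links out of messy HTML rather than converting all of it to text:
from pyparsing import makeHTMLTags, SkipTo

# makeHTMLTags copes with attributes, mixed case and sloppy whitespace
a_start, a_end = makeHTMLTags('a')
link = a_start + SkipTo(a_end)('body') + a_end

html = '<p>Visit <A HREF="http://example.com" target=_blank>our site</a> today.</p>'
for match in link.searchString(html):
    print(match.href, '->', match.body)
# http://example.com -> our site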
This isn't exactly a Python solution, but it will convert text that JavaScript would generate into text, which I think is important (e.g., google.com). The browser Links (not Lynx) has a JavaScript engine and will convert source to text with the -dump option.
So you could do something like:
import os
import subprocess

fname = os.tmpnam()
with open(fname, 'w') as f:
    f.write(html_source)
proc = subprocess.Popen(['links', '-dump', fname],
                        stdout=subprocess.PIPE,
                        stderr=open('/dev/null', 'w'))
text = proc.stdout.read()
Instead of the HTMLParser module, check out htmllib. It has a similar interface but does more of the work for you. (It is pretty ancient, so it's not much help in terms of getting rid of javascript and css. You could make a derived class and add methods with names like start_script and end_style (see the python docs for details), but it's hard to do this reliably for malformed html.) Anyway, here's something simple that prints the plain text to the console.
from htmllib import HTMLParser, HTMLParseError
from formatter import AbstractFormatter, DumbWriter

p = HTMLParser(AbstractFormatter(DumbWriter()))
try:
    p.feed('hello<br>there')
    p.close()  # calling close is not usually needed, but let's play it safe
except HTMLParseError:
    print ':('  # the html is badly malformed (or you found a bug)
I recommend a Python Package called goose-extractor
Goose will try to extract the following information:
Main text of an article
Main image of article
Any Youtube/Vimeo movies embedded in article
Meta Description
Meta tags
More: https://pypi.python.org/pypi/goose-extractor/
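A minimal usage sketch, following the API shown in the package's documentation (the URL is illustrative):
from goose import Goose

g = Goose()
article = g.extract(url='http://example.com/some-article')
print(article.cleaned_text)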
Has anyone tried bleach.clean(html, tags=[], strip=True) with bleach? It's working for me.
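For reference, a tiny sketch of that call (bleach keeps the text and, with strip=True, drops rather than escapes the disallowed tags):
import bleach

print(bleach.clean('<p>Hello <b>world</b>!</p>', tags=[], strip=True))
# Hello world!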
install html2text using
pip install html2text
then,
>>> import html2text
>>>
>>> h = html2text.HTML2Text()
>>> # Ignore converting links from HTML
>>> h.ignore_links = True
>>> print h.handle("<p>Hello, <a href='http://earth.google.com/'>world</a>!")
Hello, world!
What worked best for me is inscriptis:
https://github.com/weblyzard/inscriptis
import urllib.request
from inscriptis import get_text
url = "http://www.informationscience.ch"
html = urllib.request.urlopen(url).read().decode('utf-8')
text = get_text(html)
print(text)
The results are really good
Beautiful Soup does convert HTML entities. It's probably your best bet considering HTML is often buggy and filled with unicode and HTML encoding issues. This is the code I use to convert HTML to raw text:
import copy
import re
import BeautifulSoup

def getsoup(data, to_unicode=False):
    data = data.replace("&nbsp;", " ")
    # Fixes for bad markup I've seen in the wild. Remove if not applicable.
    massage_bad_comments = [
        (re.compile('<!-([^-])'), lambda match: '<!--' + match.group(1)),
        (re.compile('<!WWWAnswer T[=\w\d\s]*>'), lambda match: '<!--' + match.group(0) + '-->'),
    ]
    myNewMassage = copy.copy(BeautifulSoup.BeautifulSoup.MARKUP_MASSAGE)
    myNewMassage.extend(massage_bad_comments)
    return BeautifulSoup.BeautifulSoup(data, markupMassage=myNewMassage,
                                       convertEntities=BeautifulSoup.BeautifulSoup.ALL_ENTITIES
                                       if to_unicode else None)

remove_html = lambda c: getsoup(c, to_unicode=True).getText(separator=u' ') if c else ""
Another non-python solution: Libre Office:
soffice --headless --invisible --convert-to txt input1.html
The reason I prefer this one over other alternatives is that every HTML paragraph gets converted into a single text line (no line breaks), which is what I was looking for. Other methods require post-processing. Lynx does produce nice output, but not exactly what I was looking for. Besides, Libre Office can be used to convert from all sorts of formats...
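The same conversion can also be driven from Python with subprocess; a sketch assuming soffice is on the PATH (the output lands next to the input as input1.txt):
import subprocess

subprocess.call(['soffice', '--headless', '--invisible',
                 '--convert-to', 'txt', 'input1.html'])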
I had a similar question and actually used one of the answers with BeautifulSoup.
The problem was it was really slow. I ended up using a library called selectolax.
It's pretty limited, but it works for this task.
The only issue was that I had to manually remove unnecessary white space.
But it seems to work much faster than the BeautifulSoup solution.
from selectolax.parser import HTMLParser

def get_text_selectolax(html):
    tree = HTMLParser(html)
    if tree.body is None:
        return None
    for tag in tree.css('script'):
        tag.decompose()
    for tag in tree.css('style'):
        tag.decompose()
    text = tree.body.text(separator='')
    text = " ".join(text.split())  # this will remove all the whitespaces
    return text
Another option is to run the html through a text based web browser and dump it. For example (using Lynx):
lynx -dump html_to_convert.html > converted_html.txt
This can be done within a python script as follows:
import subprocess

with open('converted_html.txt', 'w') as outputFile:
    subprocess.call(['lynx', '-dump', 'html_to_convert.html'], stdout=outputFile)
It won't give you exactly just the text from the HTML file, but depending on your use case it may be preferable to the output of html2text.
@PeYoTIL's answer using BeautifulSoup and eliminating style and script content didn't work for me. I tried it using decompose instead of extract, but it still didn't work. So I created my own, which also formats the text using the <p> tags and replaces <a> tags with the href link. It also copes with links inside text. Available at this gist with a test doc embedded.
from bs4 import BeautifulSoup, NavigableString

def html_to_text(html):
    "Creates a formatted text email message as a string from a rendered html template (page)"
    soup = BeautifulSoup(html, 'html.parser')
    # Ignore anything in head
    body, text = soup.body, []
    for element in body.descendants:
        # We use type and not isinstance since comments, cdata, etc are subclasses that we don't want
        if type(element) == NavigableString:
            # We use the assumption that other tags can't be inside a script or style
            if element.parent.name in ('script', 'style'):
                continue
            # remove any multiple and leading/trailing whitespace
            string = ' '.join(element.string.split())
            if string:
                if element.parent.name == 'a':
                    a_tag = element.parent
                    # replace link text with the link
                    string = a_tag['href']
                    # concatenate with any non-empty immediately previous string
                    if (type(a_tag.previous_sibling) == NavigableString and
                            a_tag.previous_sibling.string.strip()):
                        text[-1] = text[-1] + ' ' + string
                        continue
                elif element.previous_sibling and element.previous_sibling.name == 'a':
                    text[-1] = text[-1] + ' ' + string
                    continue
                elif element.parent.name == 'p':
                    # Add extra paragraph formatting newline
                    string = '\n' + string
                text += [string]
    doc = '\n'.join(text)
    return doc
I've had good results with Apache Tika. Its purpose is the extraction of metadata and text from content, hence the underlying parser is tuned accordingly out of the box.
Tika can be run as a server, is trivial to run / deploy in a Docker container, and from there can be accessed via Python bindings.
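A sketch using the tika-python bindings (pip install tika); note the library talks to a Tika server and will try to start one locally, which requires Java:
from tika import parser

parsed = parser.from_file('page.html')
print(parsed['content'])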
While a lot of people have mentioned using regex to strip HTML tags, there are many downsides.
For example:
<p>hello world</p>I love you
Should be parsed to:
Hello world
I love you
Here's a snippet I came up with; you can customize it to your specific needs, and it works like a charm:
import re
import html

def html2text(htm):
    ret = html.unescape(htm)
    ret = ret.translate({
        8209: ord('-'),
        8220: ord('"'),
        8221: ord('"'),
        160: ord(' '),
    })
    ret = re.sub(r"\s", " ", ret, flags=re.MULTILINE)
    ret = re.sub("<br>|<br />|</p>|</div>|</h\d>", "\n", ret, flags=re.IGNORECASE)
    ret = re.sub('<.*?>', ' ', ret, flags=re.DOTALL)
    ret = re.sub(r" +", " ", ret)
    return ret
In a simple way:
import re
html_text = open('html_file.html').read()
text_filtered = re.sub(r'<(.*?)>', '', html_text)
This code finds all parts of html_text starting with '<' and ending with '>' and replaces them with an empty string.
In Python 3.x you can do it very easily by importing the 'imaplib' and 'email' packages. Although this is an older post, maybe my answer can help newcomers to this post.
status, data = self.imap.fetch(num, '(RFC822)')
email_msg = email.message_from_bytes(data[0][1])
# email.message_from_string(data[0][1])

# If the message is multipart we only want the text version of the body; this walks the message and gets the body.
if email_msg.is_multipart():
    for part in email_msg.walk():
        if part.get_content_type() == "text/plain":
            body = part.get_payload(decode=True)  # decode=True handles email-style MIME encodings (e.g., Base64, uuencode, quoted-printable)
            body = body.decode()
        elif part.get_content_type() == "text/html":
            continue
Now you can print the body variable and it will be in plain-text format :) If it is good enough for you, it would be nice to select it as the accepted answer.
Here's the code I use on a regular basis.
from bs4 import BeautifulSoup
import urllib.request

def processText(webpage):
    # EMPTY LIST TO STORE PROCESSED TEXT
    proc_text = []
    try:
        news_open = urllib.request.urlopen(webpage.group())
        news_soup = BeautifulSoup(news_open, "lxml")
        news_para = news_soup.find_all("p", text=True)
        for item in news_para:
            # SPLIT WORDS, JOIN WORDS TO REMOVE EXTRA SPACES
            para_text = (' ').join((item.text).split())
            # COMBINE LINES/PARAGRAPHS INTO A LIST
            proc_text.append(para_text)
    except urllib.error.HTTPError:
        pass
    return proc_text
I hope that helps.
You can extract only the text from HTML with BeautifulSoup:
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "https://www.geeksforgeeks.org/extracting-email-addresses-using-regular-expressions-python/"
con = urlopen(url).read()
soup = BeautifulSoup(con, 'html.parser')
texts = soup.get_text()
print(texts)
Another example using BeautifulSoup4 in Python 2.7.9+
includes:
import urllib2
from bs4 import BeautifulSoup
Code:
def read_website_to_text(url):
    page = urllib2.urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    for script in soup(["script", "style"]):
        script.extract()
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    text = '\n'.join(chunk for chunk in chunks if chunk)
    return str(text.encode('utf-8'))
Explained:
Read in the url data as html (using BeautifulSoup), remove all script and style elements, and get just the text using .get_text(). Break into lines and remove leading and trailing space on each, then break multi-headlines into a line each with chunks = (phrase.strip() for line in lines for phrase in line.split("  ")). Then, using text = '\n'.join, drop blank lines, and finally return as sanitized utf-8.
Notes:
Some systems this is run on will fail on https:// connections because of SSL issues; you can turn off verification to fix that. Example fix: http://blog.pengyifan.com/how-to-fix-python-ssl-certificate_verify_failed/
Python < 2.7.9 may have some issues running this
text.encode('utf-8') can leave weird encoding; you may want to just return str(text) instead.
An answer using Pandas to get table data from HTML.
If you want to extract table data quickly from HTML, you can use the read_html function; docs are here. Before using this function you should read the gotchas/issues surrounding the BeautifulSoup4/html5lib/lxml HTML parsing libraries.
import pandas as pd
http = r'https://www.ibm.com/docs/en/cmofz/10.1.0?topic=SSQHWE_10.1.0/com.ibm.ondemand.mp.doc/arsa0257.htm'
table = pd.read_html(http)
df = table[0]
df
(output: the page's table parsed into a DataFrame)
There are a number of options that can be played with; see here and here.
