MemoryError when parsing XML file - python

I am trying to find a specific tag in an XML file, and I used BeautifulSoup to read the file. It produces the following error:
soup = BeautifulSoup(XML, 'xml')
Traceback (most recent call last):
File "<ipython-input-5-f431fabb5903>", line 1, in <module>
soup = BeautifulSoup(XML, 'xml')
File "D:\software\Anaconda3\envs\py37\lib\site-packages\bs4\__init__.py", line 362, in __init__
self._feed()
File "D:\software\Anaconda3\envs\py37\lib\site-packages\bs4\__init__.py", line 448, in _feed
self.builder.feed(self.markup)
File "D:\software\Anaconda3\envs\py37\lib\site-packages\bs4\builder\_lxml.py", line 203, in feed
markup = StringIO(markup)
MemoryError
The file is 353 MB, but the same code has parsed a larger file without producing this error. Do you know what the problem is?
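One workaround for files this size is streaming: BeautifulSoup (via lxml's feed here) copies the whole document into memory at once, while lxml.etree.iterparse handles elements as they arrive. A minimal sketch, assuming the XML lives on disk; 'file.xml' and 'some_tag' are placeholders for your actual file and tag:

from lxml import etree

# Stream the document and handle one matching element at a time,
# freeing each element (and its already-processed siblings) as we go,
# so the whole tree never sits in memory at once.
for event, elem in etree.iterparse('file.xml', tag='some_tag'):
    print(elem.text)
    elem.clear()
    while elem.getprevious() is not None:
        del elem.getparent()[0]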

Related

Transform docx to html raises python MemoryError

I have a function that converts a docx to HTML, and a large docx file to be converted.
The problem is that this function is part of a bigger program, and the converted HTML is parsed afterwards, so I cannot switch to another converter without impacting the rest of the code (which is not wanted). I'm running Python 2.7.13 on a 32-bit install, but changing to 64-bit is also not desired.
This is the function:
import logging
import ooxml  # needed for ooxml.read_from_file below
from ooxml import serialize

def trasnformDocxtoHtml(inputFile, outputFile):
    logging.basicConfig(filename='ooxml.log', level=logging.INFO)
    dfile = ooxml.read_from_file(inputFile)
    with open(outputFile, 'w') as htmlFile:
        htmlFile.write(serialize.serialize(dfile.document))
and here's the error:
>>> import library
>>> library.trasnformDocxtoHtml(r'large_file.docx', 'output.html')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "library.py", line 9, in trasnformDocxtoHtml
dfile = ooxml.read_from_file(inputFile)
File "C:\Python27\lib\site-packages\ooxml\__init__.py", line 52, in read_from_file
dfile.parse()
File "C:\Python27\lib\site-packages\ooxml\docxfile.py", line 46, in parse
self._doc = parse_from_file(self)
File "C:\Python27\lib\site-packages\ooxml\parse.py", line 655, in parse_from_file
document = parse_document(doc_content)
File "C:\Python27\lib\site-packages\ooxml\parse.py", line 463, in parse_document
document.elements.append(parse_table(document, elem))
File "C:\Python27\lib\site-packages\ooxml\parse.py", line 436, in parse_table
for p in tc.xpath('./w:p', namespaces=NAMESPACES):
File "src\lxml\etree.pyx", line 1583, in lxml.etree._Element.xpath
MemoryError
no mem for new parser
MemoryError
Could I somehow increase the buffer memory in Python? Or fix the function without impacting the HTML output format?
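There is no buffer-memory knob to turn in Python itself; a 32-bit process is capped at roughly 2 GB of address space no matter how much RAM the machine has, which is consistent with lxml's "no mem for new parser" message. A quick check that the interpreter really is 32-bit:

import struct

# Prints 32 on a 32-bit interpreter, 64 on a 64-bit one; a 32-bit
# process is limited to ~2 GB of addressable memory.
print(struct.calcsize("P") * 8)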

Python - BeautifulSoup error while scraping

UPDATE: Using lxml instead of html.parser helped solve the problem, as Freddier suggested in the answer below!
I am trying to webscrape some information off of this website: https://www.ticketmonster.co.kr/deal/952393926.
I get an error when I run soup(thispage, 'html.parser'), but this error only happens for this specific page. Does anyone know why this is happening?
The code I have so far is very simple:
from urllib.request import urlopen
from bs4 import BeautifulSoup as soup

url = 'https://www.ticketmonster.co.kr/deal/952393926'
openU = urlopen(url)
thispage = openU.read()
openU.close()
pageS = soup(thispage, 'html.parser')
The error I get is:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\site-packages\bs4\__init__.py", line 228, in __init__
self._feed()
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\site- packages\bs4\__init__.py", line 289, in _feed
self.builder.feed(self.markup)
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\site-packages\bs4\builder\_htmlparser.py", line 215, in feed
parser.feed(markup)
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\html\parser.py", line 111, in feed
self.goahead(0)
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\html\parser.py", line 179, in goahead
k = self.parse_html_declaration(i)
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\html\parser.py", line 264, in parse_html_declaration
return self.parse_marked_section(i)
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\_markupbase.py", line 149, in parse_marked_section
sectName, j = self._scan_name( i+3, i )
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\_markupbase.py", line 391, in _scan_name
% rawdata[declstartpos:declstartpos+20])
File "C:\Users\Kathy\AppData\Local\Programs\Python\Python36\lib\_markupbase.py", line 34, in error
"subclasses of ParserBase must override error()")
NotImplementedError: subclasses of ParserBase must override error()
Please help!
Try using
pageS = soup(thispage, 'lxml')
instead of
pageS = soup(thispage, 'html.parser')
It looks like it may be a problem with character encoding when using "html.parser".
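For completeness, a minimal end-to-end sketch of the fix; only the parser argument changes, and lxml must be installed separately (e.g. pip install lxml):

from urllib.request import urlopen
from bs4 import BeautifulSoup as soup

url = 'https://www.ticketmonster.co.kr/deal/952393926'
thispage = urlopen(url).read()

# The traceback shows html.parser choking on a marked-section
# declaration in the page; lxml tolerates such malformed markup.
pageS = soup(thispage, 'lxml')
print(pageS.title)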

Error trying to parse XML using Python: xml.etree.ElementTree.ParseError: syntax error: line 1, column 0

In Python, I'm simply trying to parse some XML:
import xml.etree.ElementTree as ET
data = 'info.xml'
tree = ET.fromstring(data)
but got error:
Traceback (most recent call last):
File "C:\mesh\try1.py", line 3, in <module>
tree = ET.fromstring(data)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1312, in XML
return parser.close()
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1665, in close
self._raiseerror(v)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1517, in _raiseerror
raise err
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
Here's a bit of the XML I have:
<?xml version="1.0" encoding="utf-16"?>
<AnalysisData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<BlendOperations OperationNumber="1">
<ComponentQuality>
<MaterialName>Oil</MaterialName>
<Weight>1067.843017578125</Weight>
<WeightPercent>31.545017776585109</WeightPercent>
Why is it happening?
You're trying to parse the string 'info.xml' itself instead of the contents of the file; ET.fromstring expects XML data, not a filename.
You could call tree = ET.parse('info.xml'), which will open and parse the file.
Or you could read the file directly:
ET.fromstring(open('info.xml').read())
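A minimal sketch of the first approach, pulling an element name from the snippet shown above:

import xml.etree.ElementTree as ET

# ET.parse opens the file and honors its encoding="utf-16" declaration.
tree = ET.parse('info.xml')
root = tree.getroot()

# Print every <MaterialName> in the document.
for name in root.iter('MaterialName'):
    print(name.text)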

How to unpack dmoz URLs from an RDF dump with Python and rdflib?

I tried to open an RDF file (the dmoz RDF dump), but I get this error message:
Traceback (most recent call last):
File "/media/_dev_/ODP_RDF_get_links.py", line 4, in <module>
result = g.parse("data/content.rdf")
File "/usr/local/lib/python2.7/dist-packages/rdflib/graph.py", line 1033, in parse
parser.parse(source, self, **args)
File "/usr/local/lib/python2.7/dist-packages/rdflib/plugins/parsers/rdfxml.py", line 577, in parse
self._parser.parse(source)
File "/usr/lib/python2.7/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/lib/python2.7/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/lib/python2.7/xml/sax/expatreader.py", line 210, in feed
self._parser.Parse(data, isFinal)
File "/usr/lib/python2.7/xml/sax/expatreader.py", line 352, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/lib/python2.7/dist-packages/rdflib/plugins/parsers/rdfxml.py", line 160, in endElementNS
self.current.end(name, qname)
File "/usr/local/lib/python2.7/dist-packages/rdflib/plugins/parsers/rdfxml.py", line 331, in node_element_end
self.error("Repeat node-elements inside property elements: %s"%"".join(name))
File "/usr/local/lib/python2.7/dist-packages/rdflib/plugins/parsers/rdfxml.py", line 185, in error
raise ParserError(info + message)
file:///media/_dev_/data/content.rdf:5:12: Repeat node-elements inside property elements: http://dmoz.org/rdf/catid
My simple code is as follows:
import rdflib
g = rdflib.Graph()
result = g.parse("data/content.rdf")
print("graph has %s statements." % len(g))
I need to be able to read the file and extract all links in the World category.
Thanks for any possible help.
EDIT:
PS: I found this Wikipedia note on the RDF dumps, so developing custom scripts is necessary to use this dump.
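Since the dump apparently isn't strict RDF/XML, one option is to treat it as plain text rather than feed it to rdflib. A hedged line-by-line sketch follows; the <Topic r:id="..."> and <link r:resource="..."> element and attribute names are assumptions about the dmoz dump format, so adjust them to what the file actually contains:

import re

# Assumed dmoz structure: <Topic r:id="Top/World/..."> blocks that
# contain <link r:resource="http://..."/> children.
topic_re = re.compile(r'<Topic r:id="(Top/World[^"]*)"')
link_re = re.compile(r'<link r:resource="([^"]+)"')

in_world = False
with open('data/content.rdf') as f:
    for line in f:
        if '<Topic ' in line:
            # Entering a new topic; remember whether it is under World.
            in_world = bool(topic_re.search(line))
        elif in_world:
            m = link_re.search(line)
            if m:
                print(m.group(1))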

Beautiful Soup and uTidy

I want to pass the results of utidy to Beautiful Soup, like so:
import urllib2
import tidy  # uTidylib
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 (Python 2)

page = urllib2.urlopen(url)
options = dict(output_xhtml=1, add_xml_decl=0, indent=1, tidy_mark=0)
cleaned_html = tidy.parseString(page.read(), **options)
soup = BeautifulSoup(cleaned_html)
When run, the following error results:
Traceback (most recent call last):
File "soup.py", line 34, in <module>
soup = BeautifulSoup(cleaned_html)
File "/var/lib/python-support/python2.6/BeautifulSoup.py", line 1499, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/var/lib/python-support/python2.6/BeautifulSoup.py", line 1230, in __init__
self._feed(isHTML=isHTML)
File "/var/lib/python-support/python2.6/BeautifulSoup.py", line 1245, in _feed
smartQuotesTo=self.smartQuotesTo, isHTML=isHTML)
File "/var/lib/python-support/python2.6/BeautifulSoup.py", line 1751, in __init__
self._detectEncoding(markup, isHTML)
File "/var/lib/python-support/python2.6/BeautifulSoup.py", line 1899, in _detectEncoding
xml_encoding_match = re.compile(xml_encoding_re).match(xml_data)
TypeError: expected string or buffer
I gather utidy returns an XML document while BeautifulSoup wants a string. Is there a way to cast cleaned_html? Or am I doing it wrong and should take a different approach?
Just wrap str() around cleaned_html when passing it to BeautifulSoup.
Convert the value passed to BeautifulSoup into a string.
In your case, make the following change to the last line:
soup = BeautifulSoup(str(cleaned_html))
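Putting it together, a minimal sketch of the corrected call; url is a placeholder, and the key point is that tidy.parseString returns a tidy Document object, which str() renders back into markup text:

import urllib2
import tidy
from BeautifulSoup import BeautifulSoup

url = 'http://example.com'  # placeholder for your actual URL

page = urllib2.urlopen(url)
options = dict(output_xhtml=1, add_xml_decl=0, indent=1, tidy_mark=0)
cleaned_html = tidy.parseString(page.read(), **options)

# str() turns the tidy Document into the string that
# BeautifulSoup's encoding detection expects.
soup = BeautifulSoup(str(cleaned_html))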
