Parsing a file with multiple XML documents in it - python
Is there a way to parse a file which contains multiple XML documents?
E.g., if I have a file called stocks.xml, and within stocks.xml there is more than one XML document, is there any way to parse this file?
-- stocks.xml
<?xml version="1.0" encoding="ASCII"?><PRODUCT><ID>A001</ID>..</PRODUCT><SHOP-1><QUANTITY>nn</QUANITY><SHOP-1><QUANTITY>nn</QUANITY>
<?xml version="1.0" encoding="ASCII"?><PRODUCT><ID>A002</ID>..</PRODUCT><SHOP-1><QUANTITY>nn</QUANITY><SHOP-1><QUANTITY>nn</QUANITY>
If you can assume that each XML document begins with <?xml version="1.0" ..., simply read the file line by line looking for lines that match that pattern (or read all the data and then search through it).
Once you find such a line, keep it, and append subsequent lines until the next XML declaration is found or you hit EOF. Lather, rinse, repeat.
You now have one XML document in a string. You can then parse the string using the normal XML parsing tools, or write it out to a file.
This will work fine in most cases, but of course it could fall down if one of your embedded xml documents contains data that exactly matches the same pattern as the beginning of a document. Most likely you don't have to worry about that, and if you do there are ways to avoid that with a little more cleverness.
The right solution really depends on your needs. If you're creating a general purpose must-work-at-all-times solution this might not be right for you. For real world, special purpose problems it's probably more than Good Enough, and often Good Enough is indeed Good Enough.
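A minimal sketch of that approach (the file name follows the stocks.xml example above; the final parsing step assumes each extracted chunk is itself well-formed with a single root element):

import xml.etree.ElementTree as ET

def iter_xml_documents(path, marker='<?xml'):
    # Yield each embedded XML document in *path* as one string,
    # splitting whenever a new XML declaration starts a line.
    current = []
    with open(path, 'r') as handle:
        for line in handle:
            if line.lstrip().startswith(marker) and current:
                yield ''.join(current)
                current = []
            current.append(line)
    if current:
        yield ''.join(current)

for doc in iter_xml_documents('stocks.xml'):
    # Each chunk can now be treated as a standalone document.
    root = ET.fromstring(doc)
    print(root.tag)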
You should look at this Python program by Michiel de Hoon.
And if you want to parse multiple documents, you need a rule to detect that you have moved on to another XML document: for example, you read <stocks> at the start and </stocks> at the end; when you find the closing tag and there is still more data, continue reading and run the same parser again until you reach EOF.
# Copyright 2008 by Michiel de Hoon. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Parser for XML results returned by NCBI's Entrez Utilities. This
parser is used by the read() function in Bio.Entrez, and is not intended
to be used directly.
"""
# The question is how to represent an XML file as Python objects. Some
# XML files returned by NCBI look like lists, others look like dictionaries,
# and others look like a mix of lists and dictionaries.
#
# My approach is to classify each possible element in the XML as a plain
# string, an integer, a list, a dictionary, or a structure. The latter is a
# dictionary where the same key can occur multiple times; in Python, it is
# represented as a dictionary where that key occurs once, pointing to a list
# of values found in the XML file.
#
# The parser then goes through the XML and creates the appropriate Python
# object for each element. The different levels encountered in the XML are
# preserved on the Python side. So a subelement of a subelement of an element
# is a value in a dictionary that is stored in a list which is a value in
# some other dictionary (or a value in a list which itself belongs to a list
# which is a value in a dictionary, and so on). Attributes encountered in
# the XML are stored as a dictionary in a member .attributes of each element,
# and the tag name is saved in a member .tag.
#
# To decide which kind of Python object corresponds to each element in the
# XML, the parser analyzes the DTD referred at the top of (almost) every
# XML file returned by the Entrez Utilities. This is preferred over a hand-
# written solution, since the number of DTDs is rather large and their
# contents may change over time. About half the code in this parser deals
# with parsing the DTD, and the other half with the XML itself.
import os.path
import urlparse
import urllib
import warnings
from xml.parsers import expat
# The following four classes are used to add a member .attributes to integers,
# strings, lists, and dictionaries, respectively.
class IntegerElement(int):
def __repr__(self):
text = int.__repr__(self)
try:
attributes = self.attributes
except AttributeError:
return text
return "IntegerElement(%s, attributes=%s)" % (text, repr(attributes))
class StringElement(str):
def __repr__(self):
text = str.__repr__(self)
try:
attributes = self.attributes
except AttributeError:
return text
return "StringElement(%s, attributes=%s)" % (text, repr(attributes))
class UnicodeElement(unicode):
def __repr__(self):
text = unicode.__repr__(self)
try:
attributes = self.attributes
except AttributeError:
return text
return "UnicodeElement(%s, attributes=%s)" % (text, repr(attributes))
class ListElement(list):
def __repr__(self):
text = list.__repr__(self)
try:
attributes = self.attributes
except AttributeError:
return text
return "ListElement(%s, attributes=%s)" % (text, repr(attributes))
class DictionaryElement(dict):
def __repr__(self):
text = dict.__repr__(self)
try:
attributes = self.attributes
except AttributeError:
return text
return "DictElement(%s, attributes=%s)" % (text, repr(attributes))
# A StructureElement is like a dictionary, but some of its keys can have
# multiple values associated with it. These values are stored in a list
# under each key.
class StructureElement(dict):
def __init__(self, keys):
dict.__init__(self)
for key in keys:
dict.__setitem__(self, key, [])
self.listkeys = keys
def __setitem__(self, key, value):
if key in self.listkeys:
self[key].append(value)
else:
dict.__setitem__(self, key, value)
def __repr__(self):
text = dict.__repr__(self)
try:
attributes = self.attributes
except AttributeError:
return text
return "DictElement(%s, attributes=%s)" % (text, repr(attributes))
class NotXMLError(ValueError):
def __init__(self, message):
self.msg = message
def __str__(self):
return "Failed to parse the XML data (%s). Please make sure that the input data are in XML format." % self.msg
class CorruptedXMLError(ValueError):
def __init__(self, message):
self.msg = message
def __str__(self):
return "Failed to parse the XML data (%s). Please make sure that the input data are not corrupted." % self.msg
class ValidationError(ValueError):
"""Validating parsers raise this error if the parser finds a tag in the XML that is not defined in the DTD. Non-validating parsers do not raise this error. The Bio.Entrez.read and Bio.Entrez.parse functions use validating parsers by default (see those functions for more information)"""
def __init__(self, name):
self.name = name
def __str__(self):
return "Failed to find tag '%s' in the DTD. To skip all tags that are not represented in the DTD, please call Bio.Entrez.read or Bio.Entrez.parse with validate=False." % self.name
class DataHandler:
home = os.path.expanduser('~')
local_dtd_dir = os.path.join(home, '.biopython', 'Bio', 'Entrez', 'DTDs')
del home
from Bio import Entrez
global_dtd_dir = os.path.join(str(Entrez.__path__[0]), "DTDs")
del Entrez
def __init__(self, validate):
self.stack = []
self.errors = []
self.integers = []
self.strings = []
self.lists = []
self.dictionaries = []
self.structures = {}
self.items = []
self.dtd_urls = []
self.validating = validate
self.parser = expat.ParserCreate(namespace_separator=" ")
self.parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
self.parser.XmlDeclHandler = self.xmlDeclHandler
def read(self, handle):
"""Set up the parser and let it parse the XML results"""
try:
self.parser.ParseFile(handle)
except expat.ExpatError, e:
if self.parser.StartElementHandler:
# We saw the initial <?xml declaration, so we can be sure that
# we are parsing XML data. Most likely, the XML file is
# corrupted.
raise CorruptedXMLError(e)
else:
# We have not seen the initial <?xml declaration, so probably
# the input data is not in XML format.
raise NotXMLError(e)
try:
return self.object
except AttributeError:
if self.parser.StartElementHandler:
# We saw the initial <?xml declaration, and expat didn't notice
# any errors, so self.object should be defined. If not, this is
# a bug.
raise RuntimeError("Failed to parse the XML file correctly, possibly due to a bug in Bio.Entrez. Please contact the Biopython developers at biopython-dev#biopython.org for assistance.")
else:
# We did not see the initial <?xml declaration, so probably
# the input data is not in XML format.
raise NotXMLError("XML declaration not found")
def parse(self, handle):
BLOCK = 1024
while True:
#Read in another block of the file...
text = handle.read(BLOCK)
if not text:
# We have reached the end of the XML file
if self.stack:
# No more XML data, but there is still some unfinished
# business
raise CorruptedXMLError
try:
for record in self.object:
yield record
except AttributeError:
if self.parser.StartElementHandler:
# We saw the initial <?xml declaration, and expat
# didn't notice any errors, so self.object should be
# defined. If not, this is a bug.
raise RuntimeError("Failed to parse the XML file correctly, possibly due to a bug in Bio.Entrez. Please contact the Biopython developers at biopython-dev#biopython.org for assistance.")
else:
# We did not see the initial <?xml declaration, so
# probably the input data is not in XML format.
raise NotXMLError("XML declaration not found")
self.parser.Parse("", True)
self.parser = None
return
try:
self.parser.Parse(text, False)
except expat.ExpatError, e:
if self.parser.StartElementHandler:
# We saw the initial <?xml declaration, so we can be sure
# that we are parsing XML data. Most likely, the XML file
# is corrupted.
raise CorruptedXMLError(e)
else:
# We have not seen the initial <?xml declaration, so
# probably the input data is not in XML format.
raise NotXMLError(e)
if not self.stack:
# Haven't read enough from the XML file yet
continue
records = self.stack[0]
if not isinstance(records, list):
raise ValueError("The XML file does not represent a list. Please use Entrez.read instead of Entrez.parse")
while len(records) > 1: # Then the top record is finished
record = records.pop(0)
yield record
def xmlDeclHandler(self, version, encoding, standalone):
# XML declaration found; set the handlers
self.parser.StartElementHandler = self.startElementHandler
self.parser.EndElementHandler = self.endElementHandler
self.parser.CharacterDataHandler = self.characterDataHandler
self.parser.ExternalEntityRefHandler = self.externalEntityRefHandler
self.parser.StartNamespaceDeclHandler = self.startNamespaceDeclHandler
def startNamespaceDeclHandler(self, prefix, un):
raise NotImplementedError("The Bio.Entrez parser cannot handle XML data that make use of XML namespaces")
def startElementHandler(self, name, attrs):
self.content = ""
if name in self.lists:
object = ListElement()
elif name in self.dictionaries:
object = DictionaryElement()
elif name in self.structures:
object = StructureElement(self.structures[name])
elif name in self.items: # Only appears in ESummary
name = str(attrs["Name"]) # convert from Unicode
del attrs["Name"]
itemtype = str(attrs["Type"]) # convert from Unicode
del attrs["Type"]
if itemtype=="Structure":
object = DictionaryElement()
elif name in ("ArticleIds", "History"):
object = StructureElement(["pubmed", "medline"])
elif itemtype=="List":
object = ListElement()
else:
object = StringElement()
object.itemname = name
object.itemtype = itemtype
elif name in self.strings + self.errors + self.integers:
self.attributes = attrs
return
else:
# Element not found in DTD
if self.validating:
raise ValidationError(name)
else:
# this will not be stored in the record
object = ""
if object!="":
object.tag = name
if attrs:
object.attributes = dict(attrs)
if len(self.stack)!=0:
current = self.stack[-1]
try:
current.append(object)
except AttributeError:
current[name] = object
self.stack.append(object)
def endElementHandler(self, name):
value = self.content
if name in self.errors:
if value=="":
return
else:
raise RuntimeError(value)
elif name in self.integers:
value = IntegerElement(value)
elif name in self.strings:
# Convert Unicode strings to plain strings if possible
try:
value = StringElement(value)
except UnicodeEncodeError:
value = UnicodeElement(value)
elif name in self.items:
self.object = self.stack.pop()
if self.object.itemtype in ("List", "Structure"):
return
elif self.object.itemtype=="Integer" and value:
value = IntegerElement(value)
else:
# Convert Unicode strings to plain strings if possible
try:
value = StringElement(value)
except UnicodeEncodeError:
value = UnicodeElement(value)
name = self.object.itemname
else:
self.object = self.stack.pop()
return
value.tag = name
if self.attributes:
value.attributes = dict(self.attributes)
del self.attributes
current = self.stack[-1]
if current!="":
try:
current.append(value)
except AttributeError:
current[name] = value
def characterDataHandler(self, content):
self.content += content
def elementDecl(self, name, model):
"""This callback function is called for each element declaration:
<!ELEMENT name (...)>
encountered in a DTD. The purpose of this function is to determine
whether this element should be regarded as a string, integer, list
dictionary, structure, or error."""
if name.upper()=="ERROR":
self.errors.append(name)
return
if name=='Item' and model==(expat.model.XML_CTYPE_MIXED,
expat.model.XML_CQUANT_REP,
None, ((expat.model.XML_CTYPE_NAME,
expat.model.XML_CQUANT_NONE,
'Item',
()
),
)
):
# Special case. As far as I can tell, this only occurs in the
# eSummary DTD.
self.items.append(name)
return
# First, remove ignorable parentheses around declarations
while (model[0] in (expat.model.XML_CTYPE_SEQ,
expat.model.XML_CTYPE_CHOICE)
and model[1] in (expat.model.XML_CQUANT_NONE,
expat.model.XML_CQUANT_OPT)
and len(model[3])==1):
model = model[3][0]
# PCDATA declarations correspond to strings
if model[0] in (expat.model.XML_CTYPE_MIXED,
expat.model.XML_CTYPE_EMPTY):
self.strings.append(name)
return
# List-type elements
if (model[0] in (expat.model.XML_CTYPE_CHOICE,
expat.model.XML_CTYPE_SEQ) and
model[1] in (expat.model.XML_CQUANT_PLUS,
expat.model.XML_CQUANT_REP)):
self.lists.append(name)
return
# This is the tricky case. Check which keys can occur multiple
# times. If only one key is possible, and it can occur multiple
# times, then this is a list. If more than one key is possible,
# but none of them can occur multiple times, then this is a
# dictionary. Otherwise, this is a structure.
# In 'single' and 'multiple', we keep track which keys can occur
# only once, and which can occur multiple times.
single = []
multiple = []
# The 'count' function is called recursively to make sure all the
# children in this model are counted. Error keys are ignored;
# they raise an exception in Python.
def count(model):
quantifier, name, children = model[1:]
if name==None:
if quantifier in (expat.model.XML_CQUANT_PLUS,
expat.model.XML_CQUANT_REP):
for child in children:
multiple.append(child[2])
else:
for child in children:
count(child)
elif name.upper()!="ERROR":
if quantifier in (expat.model.XML_CQUANT_NONE,
expat.model.XML_CQUANT_OPT):
single.append(name)
elif quantifier in (expat.model.XML_CQUANT_PLUS,
expat.model.XML_CQUANT_REP):
multiple.append(name)
count(model)
if len(single)==0 and len(multiple)==1:
self.lists.append(name)
elif len(multiple)==0:
self.dictionaries.append(name)
else:
self.structures.update({name: multiple})
def open_dtd_file(self, filename):
path = os.path.join(DataHandler.local_dtd_dir, filename)
try:
handle = open(path, "rb")
except IOError:
pass
else:
return handle
path = os.path.join(DataHandler.global_dtd_dir, filename)
try:
handle = open(path, "rb")
except IOError:
pass
else:
return handle
return None
def externalEntityRefHandler(self, context, base, systemId, publicId):
"""The purpose of this function is to load the DTD locally, instead
of downloading it from the URL specified in the XML. Using the local
DTD results in much faster parsing. If the DTD is not found locally,
we try to download it. If new DTDs become available from NCBI,
putting them in Bio/Entrez/DTDs will allow the parser to see them."""
urlinfo = urlparse.urlparse(systemId)
#Following attribute requires Python 2.5+
#if urlinfo.scheme=='http':
if urlinfo[0]=='http':
# Then this is an absolute path to the DTD.
url = systemId
elif urlinfo[0]=='':
# Then this is a relative path to the DTD.
# Look at the parent URL to find the full path.
url = self.dtd_urls[-1]
source = os.path.dirname(url)
url = os.path.join(source, systemId)
self.dtd_urls.append(url)
# First, try to load the local version of the DTD file
location, filename = os.path.split(systemId)
handle = self.open_dtd_file(filename)
if not handle:
# DTD is not available as a local file. Try accessing it through
# the internet instead.
message = """\
Unable to load DTD file %s.
Bio.Entrez uses NCBI's DTD files to parse XML files returned by NCBI Entrez.
Though most of NCBI's DTD files are included in the Biopython distribution,
sometimes you may find that a particular DTD file is missing. While we can
access the DTD file through the internet, the parser is much faster if the
required DTD files are available locally.
For this purpose, please download %s from
%s
and save it either in directory
%s
or in directory
%s
in order for Bio.Entrez to find it.
Alternatively, you can save %s in the directory
Bio/Entrez/DTDs in the Biopython distribution, and reinstall Biopython.
Please also inform the Biopython developers about this missing DTD, by
reporting a bug on http://bugzilla.open-bio.org/ or sign up to our mailing
list and emailing us, so that we can include it with the next release of
Biopython.
Proceeding to access the DTD file through the internet...
""" % (filename, filename, url, self.global_dtd_dir, self.local_dtd_dir, filename)
warnings.warn(message)
try:
handle = urllib.urlopen(url)
except IOError:
raise RuntimeError("Failed to access %s at %s" % (filename, url))
parser = self.parser.ExternalEntityParserCreate(context)
parser.ElementDeclHandler = self.elementDecl
parser.ParseFile(handle)
handle.close()
self.dtd_urls.pop()
return 1
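For context, this DataHandler is not normally called directly; the public Bio.Entrez functions mentioned in its docstrings wrap it. A rough usage sketch (the PubMed ID and email address are just examples, and exact behaviour depends on your Biopython version):

from Bio import Entrez

Entrez.email = "you@example.com"   # NCBI asks for a contact address
handle = Entrez.efetch(db="pubmed", id="19304878", retmode="xml")
record = Entrez.read(handle)       # builds the nested list/dict objects described above
handle.close()

For result sets too large to hold in memory, Entrez.parse(handle) streams the records one at a time instead, using the generator-based parse() method shown above.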
So you have a file containing multiple XML documents one after the other? Here is an example which strips out the <?xml ?> PIs and wraps the data in a root tag to parse the whole thing as a single XML document:
import re
import lxml.etree
re_strip_pi = re.compile(r'<\?xml [^?>]+\?>', re.M)
data = '<root>' + open('stocks.xml', 'rb').read() + '</root>'
match = re_strip_pi.search(data)
data = re_strip_pi.sub('', data)
tree = lxml.etree.fromstring(match.group() + data)
for prod in tree.xpath('//PRODUCT'):
print prod
You can't have multiple XML documents in one XML file. Split the combined file, however it was composed, into single XML documents and parse them one by one.
Related
pyral deleteAttachment to delete attachment from a Test Case definition not working
I'm trying to delete the attachments from a Test Case definition on Rally using pyral:

del_attachment = rally.deleteAttachment('TestCase', filename)

Any suggestions as to what is going wrong?
If you look at the code of pyral, you get the following signature:

def deleteAttachment(self, artifact, filename):
    """
    Still unclear for WSAPI v2.0 if Attachment items can be deleted.
    Apparently AttachmentContent items can be deleted.
    """
    art_type, artifact = self._realizeArtifact(artifact)
    if not art_type:
        return False
    current_attachments = [att for att in artifact.Attachments]
    hits = [att for att in current_attachments if att.Name == filename]
    if not hits:
        return False
    ...

So the first argument is an artifact (i.e. the test case object), not a string. The code should be like this:

import logging

logging.basicConfig(format="%(levelname)s:%(module)s:%(lineno)d:%(msg)s")

try:
    # Get the test case by its FormattedID
    testcase = rally.get("TestCase", query="FormattedID = %s" % tcid, instance=True)
    has_been_deleted = rally.deleteAttachment(testcase, filename)
    if not has_been_deleted:
        msg = "Attachment '{0}' of Test Case {1} not deleted successfully"
        logging.warning(msg.format(filename, testcase.FormattedID))
except RallyRESTAPIError as e:
    logging.error("Error while deleting attachment '{0}': {1}".format(filename, e))
Passing a string with the FormattedID of the artifact should also work, because pyral tries to identify the type of artifact and retrieve it for you in the call below:

art_type, artifact = self._realizeArtifact(artifact)

Have a look at the code for _realizeArtifact:

def _realizeArtifact(self, artifact):
    """
    Helper method to identify the artifact type and to retrieve it
    if the artifact value is a FormattedID. If the artifact is already
    an instance of a Rally entity, then all that needs to be done is
    deduce the art_type from the class name. If the artifact argument
    given is neither of those two conditions, return back a 2 tuple of
    (False, None). Once you have a Rally instance of the artifact,
    return back a 2 tuple of (art_type, artifact)
    """
    art_type = False
    if 'pyral.entity.' in str(type(artifact)):
        # we've got the artifact already...
        art_type = artifact.__class__.__name__
    elif self.FORMATTED_ID_PATTERN.match(artifact):
        # artifact is a potential FormattedID value
        prefix = artifact[:2]
        if prefix[1] in string.digits:
            prefix = prefix[0]
        art_type = self.ARTIFACT_TYPE[prefix]
        response = self.get(art_type, fetch=True, query='FormattedID = %s' % artifact)
        if response.resultCount == 1:
            artifact = response.next()
        else:
            art_type = False
    else:
        # the supplied artifact isn't anything we can deal with here...
        pass
    return art_type, artifact
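So, if that code path works as described, passing the FormattedID string directly should behave the same as passing the retrieved object; for example (the ID and filename here are made up):

has_been_deleted = rally.deleteAttachment("TC1234", "screenshot.png")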
How to Parse YAML Using PyYAML if there are '!' within the YAML
I have a YAML file from which I'd like to parse only the description variable; however, I know that the exclamation points in my CloudFormation template (YAML file) are giving PyYAML trouble. I am receiving the following error:

yaml.constructor.ConstructorError: could not determine a constructor for the tag '!Equals'

The file has many !Ref and !Equals tags. How can I ignore these constructors and get the specific variable I'm looking for -- in this case, the description variable?
If you have to deal with a YAML document containing multiple different tags, and you are only interested in a subset of them, you should still handle them all. If the elements you are interested in are nested within other tagged constructs, you at least need to handle all of the "enclosing" tags properly. There is, however, no need to handle each tag individually: you can write one constructor routine that handles mappings, sequences and scalars, and register it with PyYAML's SafeLoader:

import yaml

inp = """\
MyEIP:
  Type: !Join [ "::", [AWS, EC2, EIP] ]
  Properties:
    InstanceId: !Ref MyEC2Instance
"""

description = []

def any_constructor(loader, tag_suffix, node):
    if isinstance(node, yaml.MappingNode):
        return loader.construct_mapping(node)
    if isinstance(node, yaml.SequenceNode):
        return loader.construct_sequence(node)
    return loader.construct_scalar(node)

yaml.add_multi_constructor('', any_constructor, Loader=yaml.SafeLoader)

data = yaml.safe_load(inp)
print(data)

which gives:

{'MyEIP': {'Type': ['::', ['AWS', 'EC2', 'EIP']], 'Properties': {'InstanceId': 'MyEC2Instance'}}}

(inp can also be a file opened for reading.) As you can see, the above will continue to work if an unexpected !Join tag shows up in your code, as well as any other tag like !Equals. The tags are simply dropped.

Since there are no variables in YAML, it is a bit of guesswork what you mean by "like to parse the description variable only". If that value has an explicit tag (e.g. !Description), you can filter out the values by adding two or three lines to any_constructor, matching on the tag_suffix parameter:

if tag_suffix == u'!Description':
    description.append(loader.construct_scalar(node))

It is, however, more likely that there is some key in a mapping named description, and that you are interested in the value associated with that key:

if isinstance(node, yaml.MappingNode):
    d = loader.construct_mapping(node)
    for k in d:
        if k == 'description':
            description.append(d[k])
    return d

If you know the exact position in the data hierarchy, you can of course also walk the data structure and extract anything you need based on keys or list positions. Especially in that case you'd be better off using my ruamel.yaml, as it can load tagged YAML in round-trip mode without extra effort (assuming the above inp):

from ruamel.yaml import YAML

with YAML() as yaml:
    data = yaml.load(inp)
You can define custom constructors using a custom yaml.SafeLoader subclass:

import yaml

doc = '''
Conditions:
  CreateNewSecurityGroup: !Equals [!Ref ExistingSecurityGroup, NONE]
'''

class Equals(object):
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        return "Equals(%s)" % self.data

class Ref(object):
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        return "Ref(%s)" % self.data

def create_equals(loader, node):
    value = loader.construct_sequence(node)
    return Equals(value)

def create_ref(loader, node):
    value = loader.construct_scalar(node)
    return Ref(value)

class Loader(yaml.SafeLoader):
    pass

yaml.add_constructor(u'!Equals', create_equals, Loader)
yaml.add_constructor(u'!Ref', create_ref, Loader)

a = yaml.load(doc, Loader)
print(a)

Outputs:

{'Conditions': {'CreateNewSecurityGroup': Equals([Ref(ExistingSecurityGroup), 'NONE'])}}
Parsing Huge XML Files (600 MB)
Can you please tell me if there is any way to parse an XML file (size = 600 MB) with untangle/Python? In fact I use untangle.parse(file.xml) and I get the error message: Process finished with exit code 137. Is there any way to parse this file in blocks, for example, or another option of the untangle.parse() function, or a specific Linux configuration...? Thanks
You can use the xml module's sax (Simple API for XML) parser. SAX provides a streaming view over the XML, and the document is processed in a linear fashion. This is advantageous when a DOM tree would consume too much memory, as usual DOM implementations use about 10 bytes of memory to represent 1 byte of XML. Sample code for doing something like this:

import os
import zlib
import xml.sax

def stream_gz_decompress(stream):
    dec = zlib.decompressobj(32 + zlib.MAX_WBITS)
    for chunk in stream:
        rv = dec.decompress(chunk)
        if rv:
            yield rv

class stream_handler(xml.sax.handler.ContentHandler):
    last_entry = None
    last_name = None

    def startElement(self, name, attrs):
        self.last_name = name
        if name == 'item':
            self.last_entry = {}
        elif name != 'root' and name != 'updated':
            self.last_entry[name] = {'attrs': attrs, 'content': ''}

    def endElement(self, name):
        if name == 'item':
            # YOUR APPLICATION LOGIC GOES HERE
            self.last_entry = None
        elif name == 'root':
            raise StopIteration

    def characters(self, content):
        if self.last_entry:
            self.last_entry[self.last_name]['content'] += content

parser = xml.sax.make_parser()
parser.setContentHandler(stream_handler())

with open(os.path.basename('FILENAME'), "rb") as local_file:
    for data in stream_gz_decompress(local_file):
        parser.feed(data)
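If rewriting the logic around SAX callbacks is too invasive, another common streaming option (not part of this answer, just a sketch) is xml.etree.ElementTree.iterparse, which lets you handle and discard each record as soon as it is complete:

import xml.etree.ElementTree as ET

# 'file.xml' and the 'item' tag are placeholders; substitute your own names.
for event, elem in ET.iterparse('file.xml', events=('end',)):
    if elem.tag == 'item':
        # handle one complete record here, e.g. elem.findtext('some_child')
        elem.clear()   # release the finished element so memory stays bounded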
Is it possible to use sax with untangle? I mean loading the file with sax and reading it with untangle, because I have a lot of code written with untangle that I developed a long time ago, and I don't want to restart from scratch. Thanks
How to get meta data with MMPython for images and video
I'm trying to get the creation date for all the photos and videos in a folder, and having mixed success. I have .jpg, .mov, and .mp4 videos in this folder. I spent a long time looking at other posts, and I saw quite a few references to the MMPython library here: http://sourceforge.net/projects/mmpython/ Looking through the MMPython source I think this will give me what I need, but the problem is that I don't know how to invoke it. In other words, I have my file, but I don't know how to interface with MMPython and I can't see any examples. Here is my script:

import os
import sys
import exifread
import hashlib
import ExifTool

if len(sys.argv) > 1:
    var = sys.argv[1]
else:
    var = raw_input("Please enter the directory: ")

direct = '/Users/bbarr233/Documents/Personal/projects/photoOrg/photos'
print "direct: " + direct
print "var: " + var
var = var.rstrip()

for root, dirs, filenames in os.walk(var):
    print "root " + root
    for f in filenames:
        # make sure that we are dealing with images or videos
        if f.find(".jpg") > -1 or f.find(".jpeg") > -1 or f.find(".mov") > -1 or f.find(".mp4") > -1:
            print "file " + root + "/" + f
            f = open(root + "/" + f, 'rb')
            # Now I want to do something like this, but don't know which method to call:
            # tags = mmpython.process_file(f)
            # do something with the creation date

Can someone give me a hint on how I can use the MMPython library? Thanks!!!

PS. I've looked at some other threads on this, such as:
Link to thread: This one didn't make sense to me
Link to thread: This one worked great for mov but not for my mp4s, it said the creation date was 1946
Link to thread: This thread is one of the ones that suggested MMPython, but like I said I don't know how to use it.
Here is a well commented code example I found which will show you how to use mmpython.. This module extracts metadata from new media files, using mmpython, and provides utilities for converting metadata between formats. # Copyright (C) 2005 Micah Dowty <micah#navi.cx> # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA import md5, os, cPickle import mmpython from mmpython.audio import mp3info import sqlite from RioKarma import Paths class RidCalculator: """This object calculates the RID of a file- a sparse digest used by Rio Karma. For files <= 64K, this is the file's md5sum. For larger files, this is the XOR of three md5sums, from 64k blocks in the beginning, middle, and end. """ def fromSection(self, fileObj, start, end, blockSize=0x10000): """This needs a file-like object, as well as the offset and length of the portion the RID is generated from. Beware that there is a special case for MP3 files. """ # It's a short file, compute only one digest if end-start <= blockSize: fileObj.seek(start) return md5.md5(fileObj.read(end-start)).hexdigest() # Three digests for longer files fileObj.seek(start) a = md5.md5(fileObj.read(blockSize)).digest() fileObj.seek(end - blockSize) b = md5.md5(fileObj.read(blockSize)).digest() fileObj.seek((start + end - blockSize) / 2) c = md5.md5(fileObj.read(blockSize)).digest() # Combine the three digests return ''.join(["%02x" % (ord(a[i]) ^ ord(b[i]) ^ ord(c[i])) for i in range(16)]) def fromFile(self, filename, length=None, mminfo=None): """Calculate the RID from a file, given its name. The file's length and mmpython results may be provided if they're known, to avoid duplicating work. """ if mminfo is None: mminfo = mmpython.parse(filename) f = open(filename, "rb") if length is None: f.seek(0, 2) length = f.tell() f.seek(0) # Is this an MP3 file? For some silliness we have to skip the header # and the last 128 bytes of the file. mmpython can tell us where the # header starts, but only in a somewhat ugly way. if isinstance(mminfo, mmpython.audio.eyed3info.eyeD3Info): try: offset = mp3info.MPEG(f)._find_header(f)[0] except ZeroDivisionError: # This is a bit of a kludge, since mmpython seems to crash # here on some MP3s for a currently-unknown reason. print "WARNING, mmpython got a div0 error on %r" % filename offset = 0 if offset < 0: # Hmm, it couldn't find the header? Set this to zero # so we still get a usable RID, but it probably # won't strictly be a correct RID. offset = 0 f.seek(0) return self.fromSection(f, offset, length-128) # Otherwise, use the whole file else: return self.fromSection(f, 0, length) class BaseCache: """This is an abstract base class for objects that cache metadata dictionaries on disk. 
The cache is implemented as a sqlite database, with a 'dict' table holding administrative key-value data, and a 'files' table holding both a pickled representation of the metadata and separate columns for all searchable keys. """ # This must be defined by subclasses as a small integer that changes # when any part of the database schema or our storage format changes. schemaVersion = None # This is the template for our SQL schema. All searchable keys are # filled in automatically, but other items may be added by subclasses. schemaTemplate = """ CREATE TABLE dict ( name VARCHAR(64) PRIMARY KEY, value TEXT ); CREATE TABLE files ( %(keys)s, _pickled TEXT NOT NULL ); """ # A list of searchable keys, used to build the schema and validate queries searchableKeys = None keyType = "VARCHAR(255)" # The primary key is what ensures a file's uniqueness. Inserting a file # with a primary key identical to an existing one will update that # file rather than creating a new one. primaryKey = None def __init__(self, name): self.name = name self.connection = None def open(self): """Open the cache, creating it if necessary""" if self.connection is not None: return self.connection = sqlite.connect(Paths.getCache(self.name)) self.cursor = self.connection.cursor() # See what version of the database we got. If it's empty # or it's old, we need to reset it. try: version = self._dictGet('schemaVersion') except sqlite.DatabaseError: version = None if version != str(self.schemaVersion): self.empty() def close(self): if self.connection is not None: self.sync() self.connection.close() self.connection = None def _getSchema(self): """Create a complete schema from our schema template and searchableKeys""" keys = [] for key in self.searchableKeys: type = self.keyType if key == self.primaryKey: type += " PRIMARY KEY" keys.append("%s %s" % (key, type)) return self.schemaTemplate % dict(keys=', '.join(keys)) def _encode(self, obj): """Encode an object that may not be a plain string""" if type(obj) is unicode: obj = obj.encode('utf-8') elif type(obj) is not str: obj = str(obj) return "'%s'" % sqlite.encode(obj) def _dictGet(self, key): """Return a value stored in the persistent dictionary. Returns None if the key has no matching value. 
""" self.cursor.execute("SELECT value FROM dict WHERE name = '%s'" % key) row = self.cursor.fetchone() if row: return sqlite.decode(row[0]) def _dictSet(self, key, value): """Create or update a value stored in the persistent dictionary""" encodedValue = self._encode(value) # First try inserting a new item try: self.cursor.execute("INSERT INTO dict (name, value) VALUES ('%s', %s)" % (key, encodedValue)) except sqlite.IntegrityError: # Violated the primary key constraint, update an existing item self.cursor.execute("UPDATE dict SET value = %s WHERE name = '%s'" % ( encodedValue, key)) def sync(self): """Synchronize in-memory parts of the cache with disk""" self.connection.commit() def empty(self): """Reset the database to a default empty state""" # Find and destroy every table in the database self.cursor.execute("SELECT tbl_name FROM sqlite_master WHERE type='table'") tables = [row.tbl_name for row in self.cursor.fetchall()] for table in tables: self.cursor.execute("DROP TABLE %s" % table) # Apply the schema self.cursor.execute(self._getSchema()) self._dictSet('schemaVersion', self.schemaVersion) def _insertFile(self, d): """Insert a new file into the cache, given a dictionary of its metadata""" # Make name/value lists for everything we want to update dbItems = {'_pickled': self._encode(cPickle.dumps(d, -1))} for column in self.searchableKeys: if column in d: dbItems[column] = self._encode(d[column]) # First try inserting a new row try: names = dbItems.keys() self.cursor.execute("INSERT INTO files (%s) VALUES (%s)" % (",".join(names), ",".join([dbItems[k] for k in names]))) except sqlite.IntegrityError: # Violated the primary key constraint, update an existing item self.cursor.execute("UPDATE files SET %s WHERE %s = %s" % ( ", ".join(["%s = %s" % i for i in dbItems.iteritems()]), self.primaryKey, self._encode(d[self.primaryKey]))) def _deleteFile(self, key): """Delete a File from the cache, given its primary key""" self.cursor.execute("DELETE FROM files WHERE %s = %s" % ( self.primaryKey, self._encode(key))) def _getFile(self, key): """Return a metadata dictionary given its primary key""" self.cursor.execute("SELECT _pickled FROM files WHERE %s = %s" % ( self.primaryKey, self._encode(key))) row = self.cursor.fetchone() if row: return cPickle.loads(sqlite.decode(row[0])) def _findFiles(self, **kw): """Search for files. The provided keywords must be searchable. Yields a list of details dictionaries, one for each match. Any keyword can be None (matches anything) or it can be a string to match. Keywords that aren't provided are assumed to be None. """ constraints = [] for key, value in kw.iteritems(): if key not in self.searchableKeys: raise ValueError("Key name %r is not searchable" % key) constraints.append("%s = %s" % (key, self._encode(value))) if not constraints: constraints.append("1") self.cursor.execute("SELECT _pickled FROM files WHERE %s" % " AND ".join(constraints)) row = None while 1: row = self.cursor.fetchone() if not row: break yield cPickle.loads(sqlite.decode(row[0])) def countFiles(self): """Return the number of files cached""" self.cursor.execute("SELECT COUNT(_pickled) FROM files") return int(self.cursor.fetchone()[0]) def updateStamp(self, stamp): """The stamp for this cache is any arbitrary value that is expected to change when the actual data on the device changes. It is used to check the cache's validity. This function update's the stamp from a value that is known to match the cache's current contents. 
""" self._dictSet('stamp', stamp) def checkStamp(self, stamp): """Check whether a provided stamp matches the cache's stored stamp. This should be used when you have a stamp that matches the actual data on the device, and you want to see if the cache is still valid. """ return self._dictGet('stamp') == str(stamp) class LocalCache(BaseCache): """This is a searchable metadata cache for files on the local disk. It can be used to speed up repeated metadata lookups for local files, but more interestingly it can be used to provide full metadata searching on local music files. """ schemaVersion = 1 searchableKeys = ('type', 'rid', 'title', 'artist', 'source', 'filename') primaryKey = 'filename' def lookup(self, filename): """Return a details dictionary for the given filename, using the cache if possible""" filename = os.path.realpath(filename) # Use the mtime as a stamp to see if our cache is still valid mtime = os.stat(filename).st_mtime cached = self._getFile(filename) if cached and int(cached.get('mtime')) == int(mtime): # Yay, still valid return cached['details'] # Nope, generate a new dict and cache it details = {} Converter().detailsFromDisk(filename, details) generated = dict( type = details.get('type'), rid = details.get('rid'), title = details.get('title'), artist = details.get('artist'), source = details.get('source'), mtime = mtime, filename = filename, details = details, ) self._insertFile(generated) return details def findFiles(self, **kw): """Search for files that match all given search keys. This returns an iterator over filenames, skipping any files that aren't currently valid in the cache. """ for cached in self._findFiles(**kw): try: mtime = os.stat(cached['filename']).st_mtime except OSError: pass else: if cached.get('mtime') == mtime: yield cached['filename'] def scan(self, path): """Recursively scan all files within the specified path, creating or updating their cache entries. """ for root, dirs, files in os.walk(path): for name in files: filename = os.path.join(root, name) self.lookup(filename) # checkpoint this after every directory self.sync() _defaultLocalCache = None def getLocalCache(create=True): """Get the default instance of LocalCache""" global _defaultLocalCache if (not _defaultLocalCache) and create: _defaultLocalCache = LocalCache("local") _defaultLocalCache.open() return _defaultLocalCache class Converter: """This object manages the connection between different kinds of metadata- the data stored within a file on disk, mmpython attributes, Rio attributes, and file extensions. """ # Maps mmpython classes to codec names for all formats the player # hardware supports. codecNames = { mmpython.audio.eyed3info.eyeD3Info: 'mp3', mmpython.audio.mp3info.MP3Info: 'mp3', mmpython.audio.flacinfo.FlacInfo: 'flac', mmpython.audio.pcminfo.PCMInfo: 'wave', mmpython.video.asfinfo.AsfInfo: 'wma', mmpython.audio.ogginfo.OggInfo: 'vorbis', } # Maps codec names to extensions. Identity mappings are the # default, so they are omitted. codecExtensions = { 'wave': 'wav', 'vorbis': 'ogg', } def filenameFromDetails(self, details, unicodeEncoding = 'utf-8'): """Determine a good filename to use for a file with the given metadata in the Rio 'details' format. If it's a data file, this will use the original file as stored in 'title'. Otherwise, it uses Navi's naming convention: Artist_Name/album_name/##_track_name.extension """ if details.get('type') == 'taxi': return details['title'] # Start with just the artist... 
name = details.get('artist', 'None').replace(os.sep, "").replace(" ", "_") + os.sep album = details.get('source') if album: name += album.replace(os.sep, "").replace(" ", "_").lower() + os.sep track = details.get('tracknr') if track: name += "%02d_" % track name += details.get('title', 'None').replace(os.sep, "").replace(" ", "_").lower() codec = details.get('codec') extension = self.codecExtensions.get(codec, codec) if extension: name += '.' + extension return unicode(name).encode(unicodeEncoding, 'replace') def detailsFromDisk(self, filename, details): """Automagically load media metadata out of the provided filename, adding entries to details. This works on any file type mmpython recognizes, and other files should be tagged appropriately for Rio Taxi. """ info = mmpython.parse(filename) st = os.stat(filename) # Generic details for any file. Note that we start out assuming # all files are unreadable, and label everything for Rio Taxi. # Later we'll mark supported formats as music. details['length'] = st.st_size details['type'] = 'taxi' details['rid'] = RidCalculator().fromFile(filename, st.st_size, info) # We get the bulk of our metadata via mmpython if possible if info: self.detailsFromMM(info, details) if details['type'] == 'taxi': # All taxi files get their filename as their title, regardless of what mmpython said details['title'] = os.path.basename(filename) # Taxi files also always get a codec of 'taxi' details['codec'] = 'taxi' # Music files that still don't get a title get their filename minus the extension if not details.get('title'): details['title'] = os.path.splitext(os.path.basename(filename))[0] def detailsFromMM(self, info, details): """Update Rio-style 'details' metadata from MMPython info""" # Mime types aren't implemented consistently in mmpython, but # we can look at the type of the returned object to decide # whether this is a format that the Rio probably supports. # This dictionary maps mmpython clases to Rio codec names. for cls, codec in self.codecNames.iteritems(): if isinstance(info, cls): details['type'] = 'tune' details['codec'] = codec break # Map simple keys that don't require and hackery for fromKey, toKey in ( ('artist', 'artist'), ('title', 'title'), ('album', 'source'), ('date', 'year'), ('samplerate', 'samplerate'), ): v = info[fromKey] if v is not None: details[toKey] = v # The rio uses a two-letter prefix on bit rates- the first letter # is 'f' or 'v', presumably for fixed or variable. The second is # 'm' for mono or 's' for stereo. There doesn't seem to be a good # way to get VBR info out of mmpython, so currently this always # reports a fixed bit rate. We also have to kludge a bit because # some metdata sources give us bits/second while some give us # kilobits/second. And of course, there are multiple ways of # reporting stereo... kbps = info['bitrate'] if type(kbps) in (int, float) and kbps > 0: stereo = bool( (info['channels'] and info['channels'] >= 2) or (info['mode'] and info['mode'].find('stereo') >= 0) ) if kbps > 8000: kbps = kbps // 1000 details['bitrate'] = ('fm', 'fs')[stereo] + str(kbps) # If mmpython gives us a length it seems to always be in seconds, # whereas the Rio expects milliseconds. length = info['length'] if length: details['duration'] = int(length * 1000) # mmpython often gives track numbers as a fraction- current/total. # The Rio only wants the current track, and we might as well also # strip off leading zeros and such. 
trackNo = info['trackno'] if trackNo: details['tracknr'] = int(trackNo.split("/", 1)[0]) Reference: http://svn.navi.cx/misc/trunk/rio-karma/python/RioKarma/Metadata.py Further: Including Python modules
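The piece of that module the question actually needs is just the mmpython.parse() call. A minimal sketch (mmpython is an old Python 2 era library, and which keys are available depends on the file type, so treat 'date' as an example):

import mmpython

info = mmpython.parse("/path/to/photo.jpg")   # may return a falsy value for unsupported formats
if info:
    print(info['date'])   # may be None if the file carries no date metadata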
You should look at the os.stat function: https://docs.python.org/2/library/os.html os.stat returns st_ctime and st_mtime (note that on Windows st_ctime is the creation time, while on Unix it is the time of the last metadata change, not the creation time). It should be something like this:

import os

st = os.stat(full_file_path)
file_ctime = st.st_ctime
print(file_ctime)
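Since the question's script already imports exifread, the capture date of the JPEGs can also be read from their EXIF data. A sketch (which EXIF tags are present depends on the camera):

import exifread

with open("/path/to/photo.jpg", "rb") as fh:
    tags = exifread.process_file(fh, details=False)

# DateTimeOriginal is the usual capture timestamp; fall back to Image DateTime.
stamp = tags.get("EXIF DateTimeOriginal") or tags.get("Image DateTime")
if stamp:
    print(stamp)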
PyYaml Expect scalar but found Sequence
I am trying to load with user defined tags in my python code, with PyYaml. Dont have much experience with pyYaml loader, constructor, representer parser, resolver and dumpers. Below is my code what i could come up with: import yaml, os from collections import OrderedDict root = os.path.curdir def construct_position_object(loader, suffix, node): return loader.construct_yaml_map(node) def construct_position_sym(loader, node): return loader.construct_yaml_str(node) yaml.add_multi_constructor(u"!Position", construct_position_object) yaml.add_constructor(u"!Position", construct_position_sym) def main(): file = open('C:\calcWorkspace\\13.3.1.0\PythonTest\YamlInput\Exception_V5.yml','r') datafile = yaml.load_all(file) for data in datafile: yaml.add_representer(literal, literal_presenter) yaml.add_representer(OrderedDict, ordered_dict_presenter) d = OrderedDict(l=literal(data)) print yaml.dump(data, default_flow_style=False) print datafile.get('abcd').get('addresses') yaml.add_constructor('!include', include) def include(loader, node): """Include another YAML file.""" global root old_root = root filename = os.path.join(root, loader.construct_scalar(node)) root = os.path.split(filename)[0] data = yaml.load(open(filename, 'r')) root = old_root return data class literal(str): pass def literal_presenter(dumper, data): return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|') def ordered_dict_presenter(dumper, data): return dumper.represent_dict(data.items()) if __name__ == '__main__': main() This is my Yaml file: #sid: Position[SIK,sourceDealID,UTPI] sid: Position[1232546, 0634.10056718.0.1096840.0,] ASSET_CLASS: "Derivative" SOURCE_DEAL_ID: "0634.10056718.0.1096840.0" INSTR_ID: "UKCM.L" PRODUCT_TYPE_ID: 0 SOURCE_PRODUCT_TYPE: "CDS" NOTIONAL_USD: 14.78 NOTIONAL_CCY: LOB: PRODUCT_TYPE: #GIM UNDERLIER_INSTRUMENT_ID: MTM_USD: MTM_CCY: TRADER_SID: SALES_PERSON_SID: CLIENT_SPN: CLIENT_UCN: CLIENT_NAME: LE: --- sid: Position[1258642, 0634.10056718.0.1096680.0,] #sid: Position[1] ASSET_CLASS: "Derivative" SOURCE_DEAL_ID: "0634.10056718.0.1096840.0" INSTR_ID: "UKCM.L" PRODUCT_TYPE_ID: 0 SOURCE_PRODUCT_TYPE: "CDS" NOTIONAL_AMT: 18.78 NOTIONAL_CCY: "USD" LOB: PRODUCT_TYPE: UNDERLIER_INSTRUMENT_ID: MTM_AMT: MTM_CCY: TRADER_SID: SALES_PERSON_SID: CLIENT_SPN: CLIENT_UCN: CLIENT_NAME: LE: --- # Excption documents to follow from here!!! Exception: src_excp_id: 100001 # CONFIGURABLE OBJECT, VALUE TO BE POPULATED RUNTIME (impact_obj COMES FROM CONFIG FILE) # VALUE STARTS FROM "!POSITION..." A USER DEFINED DATATYPE impact_obj: !Position [1232546, 0634.10056718.0.1096840.0,] # CONFIGURABLE OBJECT, VALUE TO BE POPULATED RUNTIME (rsn_obj COMES FROM CONFIG FILE) # VALUE STARTS FROM "_POSITION..." AN IDENTIFIER FOR CONFIGURABLE OBJECTS rsn_obj: !Position [1258642, 0634.10056718.0.1096680.0,] exception_txt: "Invalid data, NULL value provided" severity: "High" Looks like my code is unable to identify the !Position user-defined data type. Any help would be appericiated Regards.
I needed to change:

def construct_position_sym(loader, node):
    return loader.construct_yaml_str(node)

to:

def construct_position_sym(loader, node):
    return loader.construct_yaml_seq(node)

because the position object is a sequence: !Position [something, something] So the constructor had to be a sequence type. Works perfectly!!!
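A minimal self-contained version of that fix (the tag and values are taken from the question; registering on SafeLoader and using construct_sequence instead of construct_yaml_seq are my simplifications):

import yaml

def construct_position(loader, node):
    # !Position [...] is a flow sequence node, so build it as a plain list
    return loader.construct_sequence(node)

yaml.add_constructor(u"!Position", construct_position, Loader=yaml.SafeLoader)

doc = 'impact_obj: !Position [1232546, "0634.10056718.0.1096840.0"]'
print(yaml.safe_load(doc))   # {'impact_obj': [1232546, '0634.10056718.0.1096840.0']}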