Comparing two XML files in Python

I am new to programming in Python, and I have some trouble understanding the concepts involved. I wish to compare two XML files, which are quite large.
I will give an example of the type of files I wish to compare.
XML file 1:
<xml>
    <property1>
        <property2>
            <property3>
            </property3>
        </property2>
    </property1>
</xml>
XML file 2:
<xml>
    <property1>
        <property2>
            <property3>
                <property4>
                </property4>
            </property3>
        </property2>
    </property1>
</xml>
The property1, property2 names I used here are different from the ones that are actually in the file, and there are a lot of properties within the XML file.
I wish to compare the two XML files and print out the differences between them.
I am trying to use an lxml parser for this, but I do not know how to parse the files and compare them automatically.
I tried reading through the lxml documentation, but I couldn't understand how to apply it to my problem.
Can someone please tell me how I should proceed with this problem?
Code snippets would be very useful.
One more question: am I following the right concept, or am I missing something? Please correct me on any concepts I should know about.

This is actually a reasonably challenging problem (since what counts as a "difference" is often in the eye of the beholder here: there will be semantically "equivalent" information that you probably don't want marked as a difference).
You could try using xmldiff, which is based on work in the paper Change Detection in Hierarchically Structured Information.
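A minimal sketch of what that might look like, assuming the xmldiff package from PyPI is installed (pip install xmldiff; the exact set of edit actions it reports varies by version):
from xmldiff import main

# Each action describes one edit (insert, delete, update, move, ...)
# needed to turn the first document into the second.
for action in main.diff_files('xmlfile1.xml', 'xmlfile2.xml'):
    print(action)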

My approach to the problem was transforming each XML file into an xml.etree.ElementTree and iterating through each of the layers.
I also included the functionality to ignore a list of attributes while doing the comparison.
The first block of code holds the class used:
import xml.etree.ElementTree as ET
import logging

class XmlTree():

    def __init__(self):
        # Set up a logger that records the differences in a log file
        self.logger = logging.getLogger('xml-comparison')
        self.logger.setLevel(logging.DEBUG)
        self.hdlr = logging.FileHandler('xml-comparison.log')
        self.formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
        self.hdlr.setFormatter(self.formatter)
        self.logger.addHandler(self.hdlr)

    @staticmethod
    def convert_string_to_tree(xmlString):
        return ET.fromstring(xmlString)

    def xml_compare(self, x1, x2, excludes=[]):
        """
        Compares two xml etrees
        :param x1: the first tree
        :param x2: the second tree
        :param excludes: list of string of attributes to exclude from comparison
        :return:
            True if both files match
        """
        if x1.tag != x2.tag:
            self.logger.debug('Tags do not match: %s and %s' % (x1.tag, x2.tag))
            return False
        for name, value in x1.attrib.items():
            if not name in excludes:
                if x2.attrib.get(name) != value:
                    self.logger.debug('Attributes do not match: %s=%r, %s=%r'
                                      % (name, value, name, x2.attrib.get(name)))
                    return False
        for name in x2.attrib.keys():
            if not name in excludes:
                if name not in x1.attrib:
                    self.logger.debug('x2 has an attribute x1 is missing: %s'
                                      % name)
                    return False
        if not self.text_compare(x1.text, x2.text):
            self.logger.debug('text: %r != %r' % (x1.text, x2.text))
            return False
        if not self.text_compare(x1.tail, x2.tail):
            self.logger.debug('tail: %r != %r' % (x1.tail, x2.tail))
            return False
        cl1 = x1.getchildren()
        cl2 = x2.getchildren()
        if len(cl1) != len(cl2):
            self.logger.debug('children length differs, %i != %i'
                              % (len(cl1), len(cl2)))
            return False
        i = 0
        for c1, c2 in zip(cl1, cl2):
            i += 1
            if not c1.tag in excludes:
                if not self.xml_compare(c1, c2, excludes):
                    self.logger.debug('children %i do not match: %s'
                                      % (i, c1.tag))
                    return False
        return True

    def text_compare(self, t1, t2):
        """
        Compare two text strings
        :param t1: text one
        :param t2: text two
        :return:
            True if a match
        """
        if not t1 and not t2:
            return True
        if t1 == '*' or t2 == '*':
            return True
        return (t1 or '').strip() == (t2 or '').strip()
The second block of code holds a couple of XML examples and their comparison:
xml1 = "<note><to>Tove</to><from>Jani</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>"
xml2 = "<note><to>Tove</to><from>Daniel</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>"

tree1 = XmlTree.convert_string_to_tree(xml1)
tree2 = XmlTree.convert_string_to_tree(xml2)

comparator = XmlTree()
if comparator.xml_compare(tree1, tree2, ["from"]):
    print "XMLs match"
else:
    print "XMLs don't match"
Most of the credit for this code must be given to syawar.

If your intent is to compare the XML content and attributes, and not just compare the files byte-by-byte, there are subtleties to the question, so there is no solution that fits all cases.
You have to know something about what is important in the XML files.
The order of attributes listed in an element tag is generally not supposed to matter. That is, two XML files that differ only in the order of element attributes generally ought to be judged the same.
But that's the generic part.
The tricky part is application-dependent. For instance, it may be that white-space formatting of some elements of the file doesn't matter, and white-space might be added to the XML for legibility. And so on.
Recent versions of the ElementTree module have a function canonicalize(), which can take care of simpler cases, by putting the XML string into a canonical format.
I used this function in the unit tests of a recent project, to compare a known XML output with output from a package that sometimes changes the order of attributes. In this case, white space in the text elements was unimportant, but it was sometimes used for formatting.
import xml.etree.ElementTree as ET

def _canonicalize_XML(xml_str):
    """Canonicalizes XML strings, so they are safe to
    compare directly.
    Strips white space from text content."""
    if not hasattr(ET, "canonicalize"):
        raise Exception("ElementTree missing canonicalize()")
    root = ET.fromstring(xml_str)
    rootstr = ET.tostring(root)
    return ET.canonicalize(rootstr, strip_text=True)
To use it, something like this:
file1 = ET.parse('file1.xml')
file2 = ET.parse('file2.xml')
canon1 = _canonicalize_XML( ET.tostring( file1.getroot() ) )
canon2 = _canonicalize_XML( ET.tostring( file2.getroot() ) )
print( canon1 == canon2 )
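Note that canonicalize() can also read straight from a file via its from_file keyword, which shortens the comparison to (assuming Python 3.8+):
canon1 = ET.canonicalize(from_file='file1.xml', strip_text=True)
canon2 = ET.canonicalize(from_file='file2.xml', strip_text=True)
print( canon1 == canon2 )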
In my distribution, Python 2's ElementTree doesn't have canonicalize(), but Python 3's does (it was added in Python 3.8).

Another script using xml.etree. It's awful, but it works :)
#!/usr/bin/env python
import sys
import xml.etree.ElementTree as ET
from termcolor import colored

tree1 = ET.parse(sys.argv[1])
root1 = tree1.getroot()
tree2 = ET.parse(sys.argv[2])
root2 = tree2.getroot()

class Element:
    def __init__(self, e):
        self.name = e.tag
        self.subs = {}
        self.atts = {}
        for child in e:
            self.subs[child.tag] = Element(child)
        for att in e.attrib.keys():
            self.atts[att] = e.attrib[att]
        print "name: %s, len(subs) = %d, len(atts) = %d" % (self.name, len(self.subs), len(self.atts))

    def compare(self, el):
        if self.name != el.name:
            raise RuntimeError("Two names are not the same")
        print "----------------------------------------------------------------"
        print self.name
        print "----------------------------------------------------------------"
        for att in self.atts.keys():
            v1 = self.atts[att]
            if att not in el.atts.keys():
                v2 = '[NA]'
                color = 'yellow'
            else:
                v2 = el.atts[att]
                if v2 == v1:
                    color = 'green'
                else:
                    color = 'red'
            print colored("first:\t%s = %s" % (att, v1), color)
            print colored("second:\t%s = %s" % (att, v2), color)
        for subName in self.subs.keys():
            if subName not in el.subs.keys():
                # 'magenta' is the closest color termcolor provides
                print colored("first:\thas got %s" % subName, 'magenta')
                print colored("second:\thasn't got %s" % subName, 'magenta')
            else:
                self.subs[subName].compare(el.subs[subName])

e1 = Element(root1)
e2 = Element(root2)
e1.compare(e2)
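To run it, pass the two files on the command line, e.g. python compare_xml.py file1.xml file2.xml (the script name is arbitrary; it reads the two paths from sys.argv[1] and sys.argv[2]).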

Related

Get the structure of the folders from the given path list as strings

Given a list of paths as:
'alpha/beta/gamma/delta alpha/beta/sigma beta/phi/pi/rho'
I want to print it as:
-alpha
    -beta
        -gamma
            delta
        -sigma
-beta
    -phi
        -pi
            rho
Can you please help me out with this?
I was able to make a list of dictionaries of dictionaries (I am kind of lost here).
There are simpler ways to do this where I could directly print the data, but I want to build a structure so that I can use this data somewhere else too.
paths = 'alpha/beta/gamma/delta alpha/beta/sigma b/f/g/h r/g/t/y q/w/er/rat'
folder_list = []

def get_children(ippath, e_dict):
    remaining_path = '/'.join(ippath.split('/')[1:])
    try:
        splitted_path = ippath.split('/')[0]
        if splitted_path:
            e_dict[splitted_path] = {}
            e_dict[splitted_path].update(get_children(remaining_path, e_dict[ippath.split('/')[0]]))
            return e_dict
        else:
            return e_dict
    except:
        return remaining_path

for path in paths.split(' '):
    end_dict = dict()
    output = get_children(path, end_dict)
    if output:
        folder_list.append(output)
        # final_list.update(output)
    else:
        continue

print(folder_list)
It gives me a list of nested dictionaries, but still not what I want.
Thank you, I really appreciate the help
Are you fine with using another library? If so, dpath will work great for this.
It allows you to create dicts based on path strings:
https://pypi.org/project/dpath/
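A minimal sketch of that idea, assuming pip install dpath; dpath.util.new builds the nested dicts from the slash-separated paths:
import dpath.util

tree = {}
for path in 'alpha/beta/gamma/delta alpha/beta/sigma beta/phi/pi/rho'.split():
    dpath.util.new(tree, path, {})

print(tree)
# {'alpha': {'beta': {'gamma': {'delta': {}}, 'sigma': {}}}, 'beta': {'phi': {'pi': {'rho': {}}}}}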
Here's a straightforward solution:
First, build a set of all distinct full paths, including the intermediate paths.
Sort the paths. This puts them in depth-first order, guaranteeing that a parent directory will always appear before its children.
Iterate through the paths, maintaining a stack:
Pop from the stack until you find the parent of the current path.
Print just the difference between the current path and its parent. The indentation level is determined by the length of the stack.
Push the current path to the stack.
To get the - symbols in the right place, we can keep track of which paths are leaf nodes in the tree. Here's the code:
def dir_tree(s):
    paths = set()
    for path in s.split():
        parts = path.split('/')
        is_leaf = True
        while parts:
            path = '/'.join(parts) + '/'
            paths.add((path, is_leaf))
            parts.pop()
            is_leaf = False
    stack = ['']
    for path, is_leaf in sorted(paths):
        while not path.startswith(stack[-1]):
            stack.pop()
        suffix = path[len(stack[-1]):-1]
        tabs = len(stack) - 1
        print('\t'*tabs + ('' if is_leaf else '-') + suffix)
        stack.append(path)
Output:
-alpha
    -beta
        -gamma
            delta
        sigma
-beta
    -phi
        -pi
            rho
I finally got it to work :)
Ron Serruya's suggested library helped me rethink my structure.
import json

paths = 'alpha/beta/gamma/delta alpha/beta/sigma beta/phi/pi/rho'
folder_list = {}

def get_children(ippath, e_dict):
    remaining_path = '/'.join(ippath.split('/')[1:])
    try:
        splitted_path = ippath.split('/')[0]
        if splitted_path:
            e_dict[splitted_path] = {}
            e_dict[splitted_path].update(get_children(remaining_path, e_dict[ippath.split('/')[0]]))
            return e_dict
        else:
            return e_dict
    except:
        return remaining_path

def merge_dictionaries(new_dictionary, main_dictionary):
    key = list(new_dictionary.keys())[0]
    if list(new_dictionary[key].keys())[0] in list(main_dictionary[key].keys()):
        merge_dictionaries(new_dictionary[key], main_dictionary[key])
    else:
        main_dictionary[key][list(new_dictionary[key].keys())[0]] = new_dictionary[key][list(new_dictionary[key].keys())[0]]

def main():
    for path in paths.split(' '):
        end_dict = dict()
        output = get_children(path, end_dict)
        if output:
            if list(output.keys())[0] not in list(folder_list.keys()):
                folder_list.update(output)
            else:
                merge_dictionaries(output, folder_list)
        else:
            continue
    print(str(json.dumps(folder_list, sort_keys=True, indent=4, separators=('', ''))).replace('{', '').replace('}', ''))

main()
This gives the output:
"alpha"
"beta"
"gamma"
"delta"
"sigma"
"beta"
"phi"
"pi"
"rho"
Sorry for the really bad structure of the code; I am open to suggestions for improving it structurally.

Merge two XML files in Python and also keep comments

I'm trying to merge two XML files in Python with the following code, which I found in another thread: Merge xml files with nested elements without external libraries
import sys
from xml.etree import ElementTree as et

class hashabledict(dict):
    def __hash__(self):
        return hash(tuple(sorted(self.items())))

class XMLCombiner(object):
    def __init__(self, filenames):
        assert len(filenames) > 0, 'No filenames!'
        # save all the roots, in order, to be processed later
        self.roots = [et.parse(f).getroot() for f in filenames]

    def combine(self):
        for r in self.roots[1:]:
            # combine each element with the first one, and update that
            self.combine_element(self.roots[0], r)
        # return the string representation
        return et.ElementTree(self.roots[0])

    def combine_element(self, one, other):
        """
        This function recursively updates either the text or the children
        of an element if another element is found in `one`, or adds it
        from `other` if not found.
        """
        # Create a mapping from tag name to element, as that's what we are filtering with
        mapping = {(el.tag, hashabledict(el.attrib)): el for el in one}
        for el in other:
            if len(el) == 0:
                # Not nested
                try:
                    # Update the text
                    mapping[(el.tag, hashabledict(el.attrib))].text = el.text
                except KeyError:
                    # An element with this name is not in the mapping
                    mapping[(el.tag, hashabledict(el.attrib))] = el
                    # Add it
                    one.append(el)
            else:
                try:
                    # Recursively process the element, and update it in the same way
                    self.combine_element(mapping[(el.tag, hashabledict(el.attrib))], el)
                except KeyError:
                    # Not in the mapping
                    mapping[(el.tag, hashabledict(el.attrib))] = el
                    # Just add it
                    one.append(el)

if __name__ == '__main__':
    r = XMLCombiner(sys.argv[1:-1]).combine()
    print '-'*20
    print et.tostring(r.getroot())
    r.write(sys.argv[-1], encoding="iso-8859-1", xml_declaration=True)
The code works perfectly for merging two XML files; however, I would also like to merge the comments I have in the files. I'm new at this and don't know how to merge not just the elements but also the comments.
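One possible direction (not from the original thread): by default xml.etree.ElementTree throws comments away while parsing, so they are gone before the merge even starts. On Python 3.8+ you can ask the parser to keep them as ET.Comment nodes, and they will then travel along with the elements; a minimal sketch:
import xml.etree.ElementTree as ET

# TreeBuilder(insert_comments=True) keeps comments in the tree as
# elements whose tag is the ET.Comment factory function.
parser = ET.XMLParser(target=ET.TreeBuilder(insert_comments=True))
tree = ET.parse('file1.xml', parser=parser)

for node in tree.getroot().iter():
    if node.tag is ET.Comment:
        print('comment:', node.text)
Passing such a parser to et.parse() inside XMLCombiner.__init__ should let the combine step see the comments too, although the (tag, attrib) mapping above may need a special case for comment nodes.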

How to get meta data with MMPython for images and video

I'm trying to get the creation date for all the photos and videos in a folder, and having mixed success. I have .jpg, .mov, and .mp4 videos in this folder.
I spent a long time looking at other posts, and I saw quite a few references to the MMPython library here: http://sourceforge.net/projects/mmpython/
Looking through the MMPython source, I think it will give me what I need, but the problem is that I don't know how to invoke it. In other words, I have my file, but I don't know how to interface with MMPython, and I can't find any examples.
Here is my script:
import os
import sys
import exifread
import hashlib
import ExifTool

if len(sys.argv) > 1:
    var = sys.argv[1]
else:
    var = raw_input("Please enter the directory: ")

direct = '/Users/bbarr233/Documents/Personal/projects/photoOrg/photos'
print "direct: " + direct
print "var: " + var
var = var.rstrip()

for root, dirs, filenames in os.walk(var):
    print "root " + root
    for f in filenames:
        # make sure that we are dealing with images or videos
        if f.find(".jpg") > -1 or f.find(".jpeg") > -1 or f.find(".mov") > -1 or f.find(".mp4") > -1:
            print "file " + root + "/" + f
            f = open(root + "/" + f, 'rb')
            # Now I want to do something like this, but don't know which method to call:
            #tags = mmpython.process_file(f)
            # do something with the creation date
Can someone give me a hint on how I can use the MMPython library?
Thanks!!!
PS. I've looked at some other threads on this, such as:
Link to thread: This one didn't make sense to me.
Link to thread: This one worked great for .mov but not for my .mp4s; it said the creation date was 1946.
Link to thread: This thread is one of the ones that suggested MMPython, but like I said, I don't know how to use it.
Here is a well-commented code example I found which will show you how to use mmpython.
This module extracts metadata from new media files, using mmpython,
and provides utilities for converting metadata between formats.
# Copyright (C) 2005 Micah Dowty <micah@navi.cx>
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import md5, os, cPickle
import mmpython
from mmpython.audio import mp3info
import sqlite
from RioKarma import Paths
class RidCalculator:
    """This object calculates the RID of a file- a sparse digest used by Rio Karma.
    For files <= 64K, this is the file's md5sum. For larger files, this is the XOR
    of three md5sums, from 64k blocks in the beginning, middle, and end.
    """
    def fromSection(self, fileObj, start, end, blockSize=0x10000):
        """This needs a file-like object, as well as the offset and length of the portion
        the RID is generated from. Beware that there is a special case for MP3 files.
        """
        # It's a short file, compute only one digest
        if end-start <= blockSize:
            fileObj.seek(start)
            return md5.md5(fileObj.read(end-start)).hexdigest()
        # Three digests for longer files
        fileObj.seek(start)
        a = md5.md5(fileObj.read(blockSize)).digest()
        fileObj.seek(end - blockSize)
        b = md5.md5(fileObj.read(blockSize)).digest()
        fileObj.seek((start + end - blockSize) / 2)
        c = md5.md5(fileObj.read(blockSize)).digest()
        # Combine the three digests
        return ''.join(["%02x" % (ord(a[i]) ^ ord(b[i]) ^ ord(c[i])) for i in range(16)])

    def fromFile(self, filename, length=None, mminfo=None):
        """Calculate the RID from a file, given its name. The file's length and
        mmpython results may be provided if they're known, to avoid duplicating work.
        """
        if mminfo is None:
            mminfo = mmpython.parse(filename)
        f = open(filename, "rb")
        if length is None:
            f.seek(0, 2)
            length = f.tell()
            f.seek(0)
        # Is this an MP3 file? For some silliness we have to skip the header
        # and the last 128 bytes of the file. mmpython can tell us where the
        # header starts, but only in a somewhat ugly way.
        if isinstance(mminfo, mmpython.audio.eyed3info.eyeD3Info):
            try:
                offset = mp3info.MPEG(f)._find_header(f)[0]
            except ZeroDivisionError:
                # This is a bit of a kludge, since mmpython seems to crash
                # here on some MP3s for a currently-unknown reason.
                print "WARNING, mmpython got a div0 error on %r" % filename
                offset = 0
            if offset < 0:
                # Hmm, it couldn't find the header? Set this to zero
                # so we still get a usable RID, but it probably
                # won't strictly be a correct RID.
                offset = 0
            f.seek(0)
            return self.fromSection(f, offset, length-128)
        # Otherwise, use the whole file
        else:
            return self.fromSection(f, 0, length)
class BaseCache:
    """This is an abstract base class for objects that cache metadata
    dictionaries on disk. The cache is implemented as a sqlite database,
    with a 'dict' table holding administrative key-value data, and a
    'files' table holding both a pickled representation of the metadata
    and separate columns for all searchable keys.
    """
    # This must be defined by subclasses as a small integer that changes
    # when any part of the database schema or our storage format changes.
    schemaVersion = None

    # This is the template for our SQL schema. All searchable keys are
    # filled in automatically, but other items may be added by subclasses.
    schemaTemplate = """
    CREATE TABLE dict
    (
        name VARCHAR(64) PRIMARY KEY,
        value TEXT
    );
    CREATE TABLE files
    (
        %(keys)s,
        _pickled TEXT NOT NULL
    );
    """

    # A list of searchable keys, used to build the schema and validate queries
    searchableKeys = None
    keyType = "VARCHAR(255)"

    # The primary key is what ensures a file's uniqueness. Inserting a file
    # with a primary key identical to an existing one will update that
    # file rather than creating a new one.
    primaryKey = None

    def __init__(self, name):
        self.name = name
        self.connection = None

    def open(self):
        """Open the cache, creating it if necessary"""
        if self.connection is not None:
            return
        self.connection = sqlite.connect(Paths.getCache(self.name))
        self.cursor = self.connection.cursor()
        # See what version of the database we got. If it's empty
        # or it's old, we need to reset it.
        try:
            version = self._dictGet('schemaVersion')
        except sqlite.DatabaseError:
            version = None
        if version != str(self.schemaVersion):
            self.empty()

    def close(self):
        if self.connection is not None:
            self.sync()
            self.connection.close()
            self.connection = None

    def _getSchema(self):
        """Create a complete schema from our schema template and searchableKeys"""
        keys = []
        for key in self.searchableKeys:
            type = self.keyType
            if key == self.primaryKey:
                type += " PRIMARY KEY"
            keys.append("%s %s" % (key, type))
        return self.schemaTemplate % dict(keys=', '.join(keys))

    def _encode(self, obj):
        """Encode an object that may not be a plain string"""
        if type(obj) is unicode:
            obj = obj.encode('utf-8')
        elif type(obj) is not str:
            obj = str(obj)
        return "'%s'" % sqlite.encode(obj)

    def _dictGet(self, key):
        """Return a value stored in the persistent dictionary. Returns None if
        the key has no matching value.
        """
        self.cursor.execute("SELECT value FROM dict WHERE name = '%s'" % key)
        row = self.cursor.fetchone()
        if row:
            return sqlite.decode(row[0])

    def _dictSet(self, key, value):
        """Create or update a value stored in the persistent dictionary"""
        encodedValue = self._encode(value)
        # First try inserting a new item
        try:
            self.cursor.execute("INSERT INTO dict (name, value) VALUES ('%s', %s)" %
                                (key, encodedValue))
        except sqlite.IntegrityError:
            # Violated the primary key constraint, update an existing item
            self.cursor.execute("UPDATE dict SET value = %s WHERE name = '%s'" % (
                encodedValue, key))

    def sync(self):
        """Synchronize in-memory parts of the cache with disk"""
        self.connection.commit()

    def empty(self):
        """Reset the database to a default empty state"""
        # Find and destroy every table in the database
        self.cursor.execute("SELECT tbl_name FROM sqlite_master WHERE type='table'")
        tables = [row.tbl_name for row in self.cursor.fetchall()]
        for table in tables:
            self.cursor.execute("DROP TABLE %s" % table)
        # Apply the schema
        self.cursor.execute(self._getSchema())
        self._dictSet('schemaVersion', self.schemaVersion)

    def _insertFile(self, d):
        """Insert a new file into the cache, given a dictionary of its metadata"""
        # Make name/value lists for everything we want to update
        dbItems = {'_pickled': self._encode(cPickle.dumps(d, -1))}
        for column in self.searchableKeys:
            if column in d:
                dbItems[column] = self._encode(d[column])
        # First try inserting a new row
        try:
            names = dbItems.keys()
            self.cursor.execute("INSERT INTO files (%s) VALUES (%s)" %
                                (",".join(names), ",".join([dbItems[k] for k in names])))
        except sqlite.IntegrityError:
            # Violated the primary key constraint, update an existing item
            self.cursor.execute("UPDATE files SET %s WHERE %s = %s" % (
                ", ".join(["%s = %s" % i for i in dbItems.iteritems()]),
                self.primaryKey, self._encode(d[self.primaryKey])))

    def _deleteFile(self, key):
        """Delete a File from the cache, given its primary key"""
        self.cursor.execute("DELETE FROM files WHERE %s = %s" % (
            self.primaryKey, self._encode(key)))

    def _getFile(self, key):
        """Return a metadata dictionary given its primary key"""
        self.cursor.execute("SELECT _pickled FROM files WHERE %s = %s" % (
            self.primaryKey, self._encode(key)))
        row = self.cursor.fetchone()
        if row:
            return cPickle.loads(sqlite.decode(row[0]))

    def _findFiles(self, **kw):
        """Search for files. The provided keywords must be searchable.
        Yields a list of details dictionaries, one for each match.
        Any keyword can be None (matches anything) or it can be a
        string to match. Keywords that aren't provided are assumed
        to be None.
        """
        constraints = []
        for key, value in kw.iteritems():
            if key not in self.searchableKeys:
                raise ValueError("Key name %r is not searchable" % key)
            constraints.append("%s = %s" % (key, self._encode(value)))
        if not constraints:
            constraints.append("1")
        self.cursor.execute("SELECT _pickled FROM files WHERE %s" %
                            " AND ".join(constraints))
        row = None
        while 1:
            row = self.cursor.fetchone()
            if not row:
                break
            yield cPickle.loads(sqlite.decode(row[0]))

    def countFiles(self):
        """Return the number of files cached"""
        self.cursor.execute("SELECT COUNT(_pickled) FROM files")
        return int(self.cursor.fetchone()[0])

    def updateStamp(self, stamp):
        """The stamp for this cache is any arbitrary value that is expected to
        change when the actual data on the device changes. It is used to
        check the cache's validity. This function updates the stamp from
        a value that is known to match the cache's current contents.
        """
        self._dictSet('stamp', stamp)

    def checkStamp(self, stamp):
        """Check whether a provided stamp matches the cache's stored stamp.
        This should be used when you have a stamp that matches the actual
        data on the device, and you want to see if the cache is still valid.
        """
        return self._dictGet('stamp') == str(stamp)
class LocalCache(BaseCache):
    """This is a searchable metadata cache for files on the local disk.
    It can be used to speed up repeated metadata lookups for local files,
    but more interestingly it can be used to provide full metadata searching
    on local music files.
    """
    schemaVersion = 1
    searchableKeys = ('type', 'rid', 'title', 'artist', 'source', 'filename')
    primaryKey = 'filename'

    def lookup(self, filename):
        """Return a details dictionary for the given filename, using the cache if possible"""
        filename = os.path.realpath(filename)
        # Use the mtime as a stamp to see if our cache is still valid
        mtime = os.stat(filename).st_mtime
        cached = self._getFile(filename)
        if cached and int(cached.get('mtime')) == int(mtime):
            # Yay, still valid
            return cached['details']
        # Nope, generate a new dict and cache it
        details = {}
        Converter().detailsFromDisk(filename, details)
        generated = dict(
            type = details.get('type'),
            rid = details.get('rid'),
            title = details.get('title'),
            artist = details.get('artist'),
            source = details.get('source'),
            mtime = mtime,
            filename = filename,
            details = details,
        )
        self._insertFile(generated)
        return details

    def findFiles(self, **kw):
        """Search for files that match all given search keys. This returns an iterator
        over filenames, skipping any files that aren't currently valid in the cache.
        """
        for cached in self._findFiles(**kw):
            try:
                mtime = os.stat(cached['filename']).st_mtime
            except OSError:
                pass
            else:
                if cached.get('mtime') == mtime:
                    yield cached['filename']

    def scan(self, path):
        """Recursively scan all files within the specified path, creating
        or updating their cache entries.
        """
        for root, dirs, files in os.walk(path):
            for name in files:
                filename = os.path.join(root, name)
                self.lookup(filename)
            # checkpoint this after every directory
            self.sync()

_defaultLocalCache = None

def getLocalCache(create=True):
    """Get the default instance of LocalCache"""
    global _defaultLocalCache
    if (not _defaultLocalCache) and create:
        _defaultLocalCache = LocalCache("local")
        _defaultLocalCache.open()
    return _defaultLocalCache
class Converter:
    """This object manages the connection between different kinds of
    metadata- the data stored within a file on disk, mmpython attributes,
    Rio attributes, and file extensions.
    """
    # Maps mmpython classes to codec names for all formats the player
    # hardware supports.
    codecNames = {
        mmpython.audio.eyed3info.eyeD3Info: 'mp3',
        mmpython.audio.mp3info.MP3Info:     'mp3',
        mmpython.audio.flacinfo.FlacInfo:   'flac',
        mmpython.audio.pcminfo.PCMInfo:     'wave',
        mmpython.video.asfinfo.AsfInfo:     'wma',
        mmpython.audio.ogginfo.OggInfo:     'vorbis',
    }

    # Maps codec names to extensions. Identity mappings are the
    # default, so they are omitted.
    codecExtensions = {
        'wave': 'wav',
        'vorbis': 'ogg',
    }

    def filenameFromDetails(self, details,
                            unicodeEncoding = 'utf-8'):
        """Determine a good filename to use for a file with the given metadata
        in the Rio 'details' format. If it's a data file, this will use the
        original file as stored in 'title'.
        Otherwise, it uses Navi's naming convention: Artist_Name/album_name/##_track_name.extension
        """
        if details.get('type') == 'taxi':
            return details['title']
        # Start with just the artist...
        name = details.get('artist', 'None').replace(os.sep, "").replace(" ", "_") + os.sep
        album = details.get('source')
        if album:
            name += album.replace(os.sep, "").replace(" ", "_").lower() + os.sep
        track = details.get('tracknr')
        if track:
            name += "%02d_" % track
        name += details.get('title', 'None').replace(os.sep, "").replace(" ", "_").lower()
        codec = details.get('codec')
        extension = self.codecExtensions.get(codec, codec)
        if extension:
            name += '.' + extension
        return unicode(name).encode(unicodeEncoding, 'replace')

    def detailsFromDisk(self, filename, details):
        """Automagically load media metadata out of the provided filename,
        adding entries to details. This works on any file type
        mmpython recognizes, and other files should be tagged
        appropriately for Rio Taxi.
        """
        info = mmpython.parse(filename)
        st = os.stat(filename)
        # Generic details for any file. Note that we start out assuming
        # all files are unreadable, and label everything for Rio Taxi.
        # Later we'll mark supported formats as music.
        details['length'] = st.st_size
        details['type'] = 'taxi'
        details['rid'] = RidCalculator().fromFile(filename, st.st_size, info)
        # We get the bulk of our metadata via mmpython if possible
        if info:
            self.detailsFromMM(info, details)
        if details['type'] == 'taxi':
            # All taxi files get their filename as their title, regardless of what mmpython said
            details['title'] = os.path.basename(filename)
            # Taxi files also always get a codec of 'taxi'
            details['codec'] = 'taxi'
        # Music files that still don't get a title get their filename minus the extension
        if not details.get('title'):
            details['title'] = os.path.splitext(os.path.basename(filename))[0]

    def detailsFromMM(self, info, details):
        """Update Rio-style 'details' metadata from MMPython info"""
        # Mime types aren't implemented consistently in mmpython, but
        # we can look at the type of the returned object to decide
        # whether this is a format that the Rio probably supports.
        # This dictionary maps mmpython classes to Rio codec names.
        for cls, codec in self.codecNames.iteritems():
            if isinstance(info, cls):
                details['type'] = 'tune'
                details['codec'] = codec
                break
        # Map simple keys that don't require any hackery
        for fromKey, toKey in (
            ('artist', 'artist'),
            ('title', 'title'),
            ('album', 'source'),
            ('date', 'year'),
            ('samplerate', 'samplerate'),
        ):
            v = info[fromKey]
            if v is not None:
                details[toKey] = v
        # The rio uses a two-letter prefix on bit rates- the first letter
        # is 'f' or 'v', presumably for fixed or variable. The second is
        # 'm' for mono or 's' for stereo. There doesn't seem to be a good
        # way to get VBR info out of mmpython, so currently this always
        # reports a fixed bit rate. We also have to kludge a bit because
        # some metadata sources give us bits/second while some give us
        # kilobits/second. And of course, there are multiple ways of
        # reporting stereo...
        kbps = info['bitrate']
        if type(kbps) in (int, float) and kbps > 0:
            stereo = bool( (info['channels'] and info['channels'] >= 2) or
                           (info['mode'] and info['mode'].find('stereo') >= 0) )
            if kbps > 8000:
                kbps = kbps // 1000
            details['bitrate'] = ('fm', 'fs')[stereo] + str(kbps)
        # If mmpython gives us a length it seems to always be in seconds,
        # whereas the Rio expects milliseconds.
        length = info['length']
        if length:
            details['duration'] = int(length * 1000)
        # mmpython often gives track numbers as a fraction- current/total.
        # The Rio only wants the current track, and we might as well also
        # strip off leading zeros and such.
        trackNo = info['trackno']
        if trackNo:
            details['tracknr'] = int(trackNo.split("/", 1)[0])
Reference: http://svn.navi.cx/misc/trunk/rio-karma/python/RioKarma/Metadata.py
Further:
Including Python modules
You should look at the os.stat function:
https://docs.python.org/2/library/os.html
os.stat returns the file's timestamps as st_ctime and st_mtime; note that st_ctime is the creation time on Windows, but the time of the last metadata change on Unix.
It should be something like this:
import os

st = os.stat(full_file_path)
file_ctime = st.st_ctime
print(file_ctime)
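For the photos specifically, the exifread library (already imported in the question's script) can read the date the picture was taken rather than the file timestamps. A sketch, assuming a JPEG with EXIF data ('photo.jpg' is a placeholder path; not every file carries the tag):
import exifread

with open('photo.jpg', 'rb') as fh:
    tags = exifread.process_file(fh, details=False)

print(tags.get('EXIF DateTimeOriginal'))  # e.g. 2013:11:02 14:32:10, or None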

Python / json : Check content of keys that may or may not exist

For those familiar with imageboards, an OP post may or may not contain a 'subject' and a 'comment'
I wrote this to search all pages of a given board for thread subjects and OP posts.
If my search term exists in one of them but the other key is nonexistent, the thread will not get appended to my res list.
So how do I search json keys where 1 key or the other may not exist?
import urllib, json, HTMLParser

def s4Chan(board, search):
    logo = '3::54chan'
    res = []
    p = HTMLParser.HTMLParser()
    catalog = json.load(urllib.urlopen('https://api.4chan.org/%s/catalog.json' % board))
    for i in catalog:
        for j in i['threads']:
            try:
                if search.lower() in j['sub'].lower() or search.lower() in j['com'].lower():
                    subject = j['sub']
                    post = p.unescape(str(j['com'])).replace('<br>', ' ')
                    if len(post) > 300:
                        post = post[0:300]
                        post = post + '...'
                    text = str('%s /%s/ %s | %s | %s (R:%s, I:%s)' % (logo, board, subject, post, 'https://4chan.org/%s/res/%s' % (board, j['no']), j['replies'], j['images']))
                    res.append(text)
            except KeyError:
                continue
    return res
json.load returns objects as Python dictionaries. You can, for example, use the get method of dict:
if search.lower() in j.get('sub', '').lower() or search.lower() in j.get('com', '').lower():

Parsing a file with multiple xmls in it

Is there a way to parse a file which contains multiple XML documents in it?
E.g., if I have a file called stocks.xml with more than one XML document inside it, is there any way to parse this file?
-- stocks.xml
<?xml version="1.0" encoding="ASCII"?><PRODUCT><ID>A001</ID>..</PRODUCT><SHOP-1><QUANTITY>nn</QUANITY><SHOP-1><QUANTITY>nn</QUANITY>
<?xml version="1.0" encoding="ASCII"?><PRODUCT><ID>A002</ID>..</PRODUCT><SHOP-1><QUANTITY>nn</QUANITY><SHOP-1><QUANTITY>nn</QUANITY>
If you can assume that each XML document begins with <?xml version="1.0" ..., simply read the file line by line, looking for lines that match that pattern (or read all the data and then search through it).
Once you find such a line, keep it, and append subsequent lines until the next XML document is found or you hit EOF. Lather, rinse, repeat.
You now have one xml document in a string. You can then parse the string using the normal XML parsing tools, or you write it to a file.
This will work fine in most cases, but of course it could fall down if one of your embedded xml documents contains data that exactly matches the same pattern as the beginning of a document. Most likely you don't have to worry about that, and if you do there are ways to avoid that with a little more cleverness.
The right solution really depends on your needs. If you're creating a general purpose must-work-at-all-times solution this might not be right for you. For real world, special purpose problems it's probably more than Good Enough, and often Good Enough is indeed Good Enough.
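A minimal sketch of that approach, splitting the data wherever a new <?xml declaration begins (stocks.xml as in the question; Python 3.7+ is needed for re.split on a zero-width pattern):
import re

with open('stocks.xml') as f:
    data = f.read()

# Split just before each XML declaration; drop any empty leading chunk.
documents = [doc for doc in re.split(r'(?=<\?xml )', data) if doc.strip()]
for doc in documents:
    print(doc[:40])  # parse each chunk with your usual XML parser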
You should look at this Python program by Michiel de Hoon.
If you want to parse multiple documents from one file, you must add a rule for detecting that you have moved on to another document: for example, if each document starts with <stocks> and ends with </stocks>, read until you reach the closing </stocks>, and if there is anything left in the file, keep reading and run the same parser again until you reach EOF.
# Copyright 2008 by Michiel de Hoon. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Parser for XML results returned by NCBI's Entrez Utilities. This
parser is used by the read() function in Bio.Entrez, and is not intended to
be used directly.
"""
# The question is how to represent an XML file as Python objects. Some
# XML files returned by NCBI look like lists, others look like dictionaries,
# and others look like a mix of lists and dictionaries.
#
# My approach is to classify each possible element in the XML as a plain
# string, an integer, a list, a dictionary, or a structure. The latter is a
# dictionary where the same key can occur multiple times; in Python, it is
# represented as a dictionary where that key occurs once, pointing to a list
# of values found in the XML file.
#
# The parser then goes through the XML and creates the appropriate Python
# object for each element. The different levels encountered in the XML are
# preserved on the Python side. So a subelement of a subelement of an element
# is a value in a dictionary that is stored in a list which is a value in
# some other dictionary (or a value in a list which itself belongs to a list
# which is a value in a dictionary, and so on). Attributes encountered in
# the XML are stored as a dictionary in a member .attributes of each element,
# and the tag name is saved in a member .tag.
#
# To decide which kind of Python object corresponds to each element in the
# XML, the parser analyzes the DTD referred at the top of (almost) every
# XML file returned by the Entrez Utilities. This is preferred over a hand-
# written solution, since the number of DTDs is rather large and their
# contents may change over time. About half the code in this parser deals
# with parsing the DTD, and the other half with the XML itself.
import os.path
import urlparse
import urllib
import warnings
from xml.parsers import expat
# The following four classes are used to add a member .attributes to integers,
# strings, lists, and dictionaries, respectively.
class IntegerElement(int):
    def __repr__(self):
        text = int.__repr__(self)
        try:
            attributes = self.attributes
        except AttributeError:
            return text
        return "IntegerElement(%s, attributes=%s)" % (text, repr(attributes))

class StringElement(str):
    def __repr__(self):
        text = str.__repr__(self)
        try:
            attributes = self.attributes
        except AttributeError:
            return text
        return "StringElement(%s, attributes=%s)" % (text, repr(attributes))

class UnicodeElement(unicode):
    def __repr__(self):
        text = unicode.__repr__(self)
        try:
            attributes = self.attributes
        except AttributeError:
            return text
        return "UnicodeElement(%s, attributes=%s)" % (text, repr(attributes))

class ListElement(list):
    def __repr__(self):
        text = list.__repr__(self)
        try:
            attributes = self.attributes
        except AttributeError:
            return text
        return "ListElement(%s, attributes=%s)" % (text, repr(attributes))

class DictionaryElement(dict):
    def __repr__(self):
        text = dict.__repr__(self)
        try:
            attributes = self.attributes
        except AttributeError:
            return text
        return "DictElement(%s, attributes=%s)" % (text, repr(attributes))

# A StructureElement is like a dictionary, but some of its keys can have
# multiple values associated with it. These values are stored in a list
# under each key.
class StructureElement(dict):
    def __init__(self, keys):
        dict.__init__(self)
        for key in keys:
            dict.__setitem__(self, key, [])
        self.listkeys = keys

    def __setitem__(self, key, value):
        if key in self.listkeys:
            self[key].append(value)
        else:
            dict.__setitem__(self, key, value)

    def __repr__(self):
        text = dict.__repr__(self)
        try:
            attributes = self.attributes
        except AttributeError:
            return text
        return "DictElement(%s, attributes=%s)" % (text, repr(attributes))
class NotXMLError(ValueError):
    def __init__(self, message):
        self.msg = message

    def __str__(self):
        return "Failed to parse the XML data (%s). Please make sure that the input data are in XML format." % self.msg

class CorruptedXMLError(ValueError):
    def __init__(self, message):
        self.msg = message

    def __str__(self):
        return "Failed to parse the XML data (%s). Please make sure that the input data are not corrupted." % self.msg

class ValidationError(ValueError):
    """Validating parsers raise this error if the parser finds a tag in the XML that is not defined in the DTD. Non-validating parsers do not raise this error. The Bio.Entrez.read and Bio.Entrez.parse functions use validating parsers by default (see those functions for more information)"""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return "Failed to find tag '%s' in the DTD. To skip all tags that are not represented in the DTD, please call Bio.Entrez.read or Bio.Entrez.parse with validate=False." % self.name
class DataHandler:
    home = os.path.expanduser('~')
    local_dtd_dir = os.path.join(home, '.biopython', 'Bio', 'Entrez', 'DTDs')
    del home
    from Bio import Entrez
    global_dtd_dir = os.path.join(str(Entrez.__path__[0]), "DTDs")
    del Entrez

    def __init__(self, validate):
        self.stack = []
        self.errors = []
        self.integers = []
        self.strings = []
        self.lists = []
        self.dictionaries = []
        self.structures = {}
        self.items = []
        self.dtd_urls = []
        self.validating = validate
        self.parser = expat.ParserCreate(namespace_separator=" ")
        self.parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
        self.parser.XmlDeclHandler = self.xmlDeclHandler

    def read(self, handle):
        """Set up the parser and let it parse the XML results"""
        try:
            self.parser.ParseFile(handle)
        except expat.ExpatError, e:
            if self.parser.StartElementHandler:
                # We saw the initial <?xml declaration, so we can be sure that
                # we are parsing XML data. Most likely, the XML file is
                # corrupted.
                raise CorruptedXMLError(e)
            else:
                # We have not seen the initial <?xml declaration, so probably
                # the input data is not in XML format.
                raise NotXMLError(e)
        try:
            return self.object
        except AttributeError:
            if self.parser.StartElementHandler:
                # We saw the initial <?xml declaration, and expat didn't notice
                # any errors, so self.object should be defined. If not, this is
                # a bug.
                raise RuntimeError("Failed to parse the XML file correctly, possibly due to a bug in Bio.Entrez. Please contact the Biopython developers at biopython-dev@biopython.org for assistance.")
            else:
                # We did not see the initial <?xml declaration, so probably
                # the input data is not in XML format.
                raise NotXMLError("XML declaration not found")

    def parse(self, handle):
        BLOCK = 1024
        while True:
            # Read in another block of the file...
            text = handle.read(BLOCK)
            if not text:
                # We have reached the end of the XML file
                if self.stack:
                    # No more XML data, but there is still some unfinished
                    # business
                    raise CorruptedXMLError
                try:
                    for record in self.object:
                        yield record
                except AttributeError:
                    if self.parser.StartElementHandler:
                        # We saw the initial <?xml declaration, and expat
                        # didn't notice any errors, so self.object should be
                        # defined. If not, this is a bug.
                        raise RuntimeError("Failed to parse the XML file correctly, possibly due to a bug in Bio.Entrez. Please contact the Biopython developers at biopython-dev@biopython.org for assistance.")
                    else:
                        # We did not see the initial <?xml declaration, so
                        # probably the input data is not in XML format.
                        raise NotXMLError("XML declaration not found")
                self.parser.Parse("", True)
                self.parser = None
                return
            try:
                self.parser.Parse(text, False)
            except expat.ExpatError, e:
                if self.parser.StartElementHandler:
                    # We saw the initial <?xml declaration, so we can be sure
                    # that we are parsing XML data. Most likely, the XML file
                    # is corrupted.
                    raise CorruptedXMLError(e)
                else:
                    # We have not seen the initial <?xml declaration, so
                    # probably the input data is not in XML format.
                    raise NotXMLError(e)
            if not self.stack:
                # Haven't read enough from the XML file yet
                continue
            records = self.stack[0]
            if not isinstance(records, list):
                raise ValueError("The XML file does not represent a list. Please use Entrez.read instead of Entrez.parse")
            while len(records) > 1:  # Then the top record is finished
                record = records.pop(0)
                yield record

    def xmlDeclHandler(self, version, encoding, standalone):
        # XML declaration found; set the handlers
        self.parser.StartElementHandler = self.startElementHandler
        self.parser.EndElementHandler = self.endElementHandler
        self.parser.CharacterDataHandler = self.characterDataHandler
        self.parser.ExternalEntityRefHandler = self.externalEntityRefHandler
        self.parser.StartNamespaceDeclHandler = self.startNamespaceDeclHandler

    def startNamespaceDeclHandler(self, prefix, un):
        raise NotImplementedError("The Bio.Entrez parser cannot handle XML data that make use of XML namespaces")

    def startElementHandler(self, name, attrs):
        self.content = ""
        if name in self.lists:
            object = ListElement()
        elif name in self.dictionaries:
            object = DictionaryElement()
        elif name in self.structures:
            object = StructureElement(self.structures[name])
        elif name in self.items:  # Only appears in ESummary
            name = str(attrs["Name"])  # convert from Unicode
            del attrs["Name"]
            itemtype = str(attrs["Type"])  # convert from Unicode
            del attrs["Type"]
            if itemtype=="Structure":
                object = DictionaryElement()
            elif name in ("ArticleIds", "History"):
                object = StructureElement(["pubmed", "medline"])
            elif itemtype=="List":
                object = ListElement()
            else:
                object = StringElement()
            object.itemname = name
            object.itemtype = itemtype
        elif name in self.strings + self.errors + self.integers:
            self.attributes = attrs
            return
        else:
            # Element not found in DTD
            if self.validating:
                raise ValidationError(name)
            else:
                # this will not be stored in the record
                object = ""
        if object!="":
            object.tag = name
            if attrs:
                object.attributes = dict(attrs)
            if len(self.stack)!=0:
                current = self.stack[-1]
                try:
                    current.append(object)
                except AttributeError:
                    current[name] = object
        self.stack.append(object)

    def endElementHandler(self, name):
        value = self.content
        if name in self.errors:
            if value=="":
                return
            else:
                raise RuntimeError(value)
        elif name in self.integers:
            value = IntegerElement(value)
        elif name in self.strings:
            # Convert Unicode strings to plain strings if possible
            try:
                value = StringElement(value)
            except UnicodeEncodeError:
                value = UnicodeElement(value)
        elif name in self.items:
            self.object = self.stack.pop()
            if self.object.itemtype in ("List", "Structure"):
                return
            elif self.object.itemtype=="Integer" and value:
                value = IntegerElement(value)
            else:
                # Convert Unicode strings to plain strings if possible
                try:
                    value = StringElement(value)
                except UnicodeEncodeError:
                    value = UnicodeElement(value)
            name = self.object.itemname
        else:
            self.object = self.stack.pop()
            return
        value.tag = name
        if self.attributes:
            value.attributes = dict(self.attributes)
            del self.attributes
        current = self.stack[-1]
        if current!="":
            try:
                current.append(value)
            except AttributeError:
                current[name] = value

    def characterDataHandler(self, content):
        self.content += content

    def elementDecl(self, name, model):
        """This callback function is called for each element declaration:
        <!ELEMENT name (...)>
        encountered in a DTD. The purpose of this function is to determine
        whether this element should be regarded as a string, integer, list
        dictionary, structure, or error."""
        if name.upper()=="ERROR":
            self.errors.append(name)
            return
        if name=='Item' and model==(expat.model.XML_CTYPE_MIXED,
                                    expat.model.XML_CQUANT_REP,
                                    None, ((expat.model.XML_CTYPE_NAME,
                                            expat.model.XML_CQUANT_NONE,
                                            'Item',
                                            ()
                                           ),
                                          )
                                   ):
            # Special case. As far as I can tell, this only occurs in the
            # eSummary DTD.
            self.items.append(name)
            return
        # First, remove ignorable parentheses around declarations
        while (model[0] in (expat.model.XML_CTYPE_SEQ,
                            expat.model.XML_CTYPE_CHOICE)
               and model[1] in (expat.model.XML_CQUANT_NONE,
                                expat.model.XML_CQUANT_OPT)
               and len(model[3])==1):
            model = model[3][0]
        # PCDATA declarations correspond to strings
        if model[0] in (expat.model.XML_CTYPE_MIXED,
                        expat.model.XML_CTYPE_EMPTY):
            self.strings.append(name)
            return
        # List-type elements
        if (model[0] in (expat.model.XML_CTYPE_CHOICE,
                         expat.model.XML_CTYPE_SEQ) and
            model[1] in (expat.model.XML_CQUANT_PLUS,
                         expat.model.XML_CQUANT_REP)):
            self.lists.append(name)
            return
        # This is the tricky case. Check which keys can occur multiple
        # times. If only one key is possible, and it can occur multiple
        # times, then this is a list. If more than one key is possible,
        # but none of them can occur multiple times, then this is a
        # dictionary. Otherwise, this is a structure.
        # In 'single' and 'multiple', we keep track which keys can occur
        # only once, and which can occur multiple times.
        single = []
        multiple = []
        # The 'count' function is called recursively to make sure all the
        # children in this model are counted. Error keys are ignored;
        # they raise an exception in Python.
        def count(model):
            quantifier, name, children = model[1:]
            if name==None:
                if quantifier in (expat.model.XML_CQUANT_PLUS,
                                  expat.model.XML_CQUANT_REP):
                    for child in children:
                        multiple.append(child[2])
                else:
                    for child in children:
                        count(child)
            elif name.upper()!="ERROR":
                if quantifier in (expat.model.XML_CQUANT_NONE,
                                  expat.model.XML_CQUANT_OPT):
                    single.append(name)
                elif quantifier in (expat.model.XML_CQUANT_PLUS,
                                    expat.model.XML_CQUANT_REP):
                    multiple.append(name)
        count(model)
        if len(single)==0 and len(multiple)==1:
            self.lists.append(name)
        elif len(multiple)==0:
            self.dictionaries.append(name)
        else:
            self.structures.update({name: multiple})

    def open_dtd_file(self, filename):
        path = os.path.join(DataHandler.local_dtd_dir, filename)
        try:
            handle = open(path, "rb")
        except IOError:
            pass
        else:
            return handle
        path = os.path.join(DataHandler.global_dtd_dir, filename)
        try:
            handle = open(path, "rb")
        except IOError:
            pass
        else:
            return handle
        return None

    def externalEntityRefHandler(self, context, base, systemId, publicId):
        """The purpose of this function is to load the DTD locally, instead
        of downloading it from the URL specified in the XML. Using the local
        DTD results in much faster parsing. If the DTD is not found locally,
        we try to download it. If new DTDs become available from NCBI,
        putting them in Bio/Entrez/DTDs will allow the parser to see them."""
        urlinfo = urlparse.urlparse(systemId)
        # Following attribute requires Python 2.5+
        # if urlinfo.scheme=='http':
        if urlinfo[0]=='http':
            # Then this is an absolute path to the DTD.
            url = systemId
        elif urlinfo[0]=='':
            # Then this is a relative path to the DTD.
            # Look at the parent URL to find the full path.
            url = self.dtd_urls[-1]
            source = os.path.dirname(url)
            url = os.path.join(source, systemId)
        self.dtd_urls.append(url)
        # First, try to load the local version of the DTD file
        location, filename = os.path.split(systemId)
        handle = self.open_dtd_file(filename)
        if not handle:
            # DTD is not available as a local file. Try accessing it through
            # the internet instead.
            message = """\
Unable to load DTD file %s.
Bio.Entrez uses NCBI's DTD files to parse XML files returned by NCBI Entrez.
Though most of NCBI's DTD files are included in the Biopython distribution,
sometimes you may find that a particular DTD file is missing. While we can
access the DTD file through the internet, the parser is much faster if the
required DTD files are available locally.
For this purpose, please download %s from
%s
and save it either in directory
%s
or in directory
%s
in order for Bio.Entrez to find it.
Alternatively, you can save %s in the directory
Bio/Entrez/DTDs in the Biopython distribution, and reinstall Biopython.
Please also inform the Biopython developers about this missing DTD, by
reporting a bug on http://bugzilla.open-bio.org/ or sign up to our mailing
list and emailing us, so that we can include it with the next release of
Biopython.
Proceeding to access the DTD file through the internet...
""" % (filename, filename, url, self.global_dtd_dir, self.local_dtd_dir, filename)
            warnings.warn(message)
            try:
                handle = urllib.urlopen(url)
            except IOError:
                raise RuntimeError("Failed to access %s at %s" % (filename, url))
        parser = self.parser.ExternalEntityParserCreate(context)
        parser.ElementDeclHandler = self.elementDecl
        parser.ParseFile(handle)
        handle.close()
        self.dtd_urls.pop()
        return 1
So you have a file containing multiple XML documents one after the other? Here is an example which strips out the <?xml ?> PIs and wraps the data in a root tag to parse the whole thing as a single XML document:
import re
import lxml.etree

re_strip_pi = re.compile(r'<\?xml [^?>]+\?>', re.M)

data = '<root>' + open('stocks.xml', 'rb').read() + '</root>'
match = re_strip_pi.search(data)
data = re_strip_pi.sub('', data)
tree = lxml.etree.fromstring(match.group() + data)

for prod in tree.xpath('//PRODUCT'):
    print prod
You can't have multiple XML documents in one XML file. Split the documents - composed in whatever way - into single XML files and parse them one-by-one.