Get array from file which also includes strings - python

I'm writing an automation script in Python that makes use of another library. The output I'm given contains the array I need, but it also includes irrelevant log messages as plain strings.
For my script to work, I need to retrieve only the array which is in the file.
Here's an example of the output I'm getting.
Split /adclix.$~image into 2 rules
Split /mediahosting.engine$document,script into 2 rules
[
{
"action": {
"type": "block"
},
"trigger": {
"url-filter": "/adservice\\.",
"unless-domain": [
"adservice.io"
]
}
}
]
Generated a total of 1 rules (1 blocks, 0 exceptions)
How would I get only the array from this file?
FWIW, I'd rather not have the logic based on the strings outside of the array, as they could be subject to change.
UPDATE: Script I'm getting the data from is here: https://github.com/brave/ab2cb/tree/master/ab2cb
My full code is here:
def pipe_in(process, filter_lists):
    try:
        for body, _, _ in filter_lists:
            process.stdin.write(body)
    finally:
        process.stdin.close()

def write_block_lists(filter_lists, path, expires):
    block_list = generate_metadata(filter_lists, expires)
    process = subprocess.Popen(('ab2cb'),
                               cwd=ab2cb_dirpath,
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    threading.Thread(target=pipe_in, args=(process, filter_lists)).start()
    result = process.stdout.read()
    with open('output.json', 'w') as destination_file:
        destination_file.write(result)
    if process.wait():
        raise Exception('ab2cb returned %s' % process.returncode)
The output will ideally be modified in stdout and written later to file as I still need to modify the data within the previously mentioned array.

You can use regex too
import re
input = """
Split /adclix.$~image into 2 rules
Split /mediahosting.engine$document,script into 2 rules
[
{
"action": {
"type": "block"
},
"trigger": {
"url-filter": "/adservice\\.",
"unless-domain": [
"adservice.io"
]
}
}
]
Generated a total of 1 rules (1 blocks, 0 exceptions)
asd
asd
"""
regex = re.compile(r"\[(.|\n)*(?:^\]$)", re.M)
x = re.search(regex, input)
print(x.group(0))
EDIT
re.M turns on multiline matching, so ^ and $ match at the start and end of each line rather than of the whole string.
https://repl.it/repls/InfantileDopeyLink
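If you'd rather avoid both regex and third-party libraries, the standard library's json.JSONDecoder.raw_decode can scan for the first parseable JSON value, which also avoids relying on the surrounding log strings. A sketch (extract_first_json is a made-up helper name):

```python
import json

def extract_first_json(text):
    """Return the first JSON array/object that parses out of `text`, else None."""
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch in '[{':
            try:
                # raw_decode parses one JSON value starting at index i,
                # ignoring any trailing log text
                obj, _ = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue
    return None

log = '''Split /adclix.$~image into 2 rules
[
  {"action": {"type": "block"}}
]
Generated a total of 1 rules (1 blocks, 0 exceptions)'''

rules = extract_first_json(log)
print(rules)  # [{'action': {'type': 'block'}}]
```

This is robust to the log lines changing, since it only cares about where a valid JSON value begins.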

I have written a library for this purpose. It's not often that I get to plug it!
from jsonfinder import jsonfinder
logs = r"""
Split /adclix.$~image into 2 rules
Split /mediahosting.engine$document,script into 2 rules
[
{
"action": {
"type": "block"
},
"trigger": {
"url-filter": "/adservice\\.",
"unless-domain": [
"adservice.io"
]
}
}
]
Generated a total of 1 rules (1 blocks, 0 exceptions)
Something else that looks like JSON: [1, 2]
"""
for start, end, obj in jsonfinder(logs):
    if (
        obj
        and isinstance(obj, list)
        and isinstance(obj[0], dict)
        and {"action", "trigger"} <= obj[0].keys()
    ):
        print(obj)
Demo: https://repl.it/repls/ImperfectJuniorBootstrapping
Library: https://github.com/alexmojaki/jsonfinder
Install with pip install jsonfinder.


Inconsistent error: json.decoder.JSONDecodeError: Extra data: line 30 column 2 (char 590)

I have .json documents generated from the same code, in which multiple nested dicts are dumped to the JSON documents. While loading with json.load(opened_json), I get the json.decoder.JSONDecodeError: Extra data: line 30 column 2 (char 590) error for some of the files but not for others, and I don't understand why. What is the proper way to dump multiple (possibly nested) dicts into JSON docs, and in my current case what is the way to read them all? (Extra: dicts can span multiple lines, so splitting on lines probably won't work.)
Ex: Say I do json.dump(data, file) with data = {'meta_data': {some_data}, 'real_data': {more_data}}.
Let us take these two fake files:
{
    "meta_data": {
        "id": 0,
        "start": 1238397024.0,
        "end": 1238397056.0,
        "best": []
    },
    "real_data": {
        "YAS": {
            "t1": [
                1238397047.2182617
            ],
            "v1": [
                5.0438767766574255
            ],
            "v2": [
                4.371670270544587
            ]
        }
    }
}
and
{
    "meta_data": {
        "id": 0,
        "start": 1238397056.0,
        "end": 1238397088.0,
        "best": []
    },
    "real_data": {
        "XAS": {
            "t1": [
                1238397047.2182617
            ],
            "v1": [
                5.0438767766574255
            ],
            "v2": [
                4.371670270544587
            ]
        }
    }
}
and try to load them using json.load(open(file_path)) to duplicate the problem.
You chose not to offer a reprex. Here is the code I'm running, which is intended to represent what you're running. If there is some discrepancy, update the original question to clarify the details.
import json
from io import StringIO
some_data = dict(a=1)
more_data = dict(b=2)
data = {"meta_data": some_data, "real_data": more_data}
file = StringIO()
json.dump(data, file)
file.seek(0)
d = json.load(file)
print(json.dumps(d, indent=4))
output
{
    "meta_data": {
        "a": 1
    },
    "real_data": {
        "b": 2
    }
}
As is apparent, under the circumstances you have described the JSON library does exactly what we would expect of it.
EDIT
Your screenshot makes it pretty clear that a bunch of ASCII NUL characters are appended to the 1st file. We can easily reproduce that JSONDecodeError: Extra data symptom by adding a single line:
json.dump(data, file)
file.write(chr(0))
(Or perhaps chr(0) * 80 more closely matches the truncated screenshot.)
If your file ends with extraneous characters, such as NUL, then it will no longer be valid JSON and compliant parsers will report a diagnostic message when they attempt to read it. And there's nothing special about NUL: a simple file.write("X") suffices to produce that same diagnostic. You will need to trim those NULs from the file's end before attempting to parse it.
For best results, use UTF-8 unicode encoding with no BOM. Your editor should have settings for switching to UTF-8. Use $ file foo.json to verify encoding details, and $ iconv --to-code=UTF-8 < foo.json to alter an unfortunate encoding.
You need to read the file; either of these works:
data = json.loads(open("data.json").read())
or
with open("data.json", "r") as file:
    data = json.load(file)

How to remove the first and last portion of a string in Python?

How can I cut from such a string (JSON) everything before and including the first [ and everything after and including the last ] with Python?
{
    "Customers": [
        {
            "cID": "w2-502952",
            "soldToId": "34124"
        },
        ...
        ...
    ],
    "status": {
        "success": true,
        "message": "Customers: 560",
        "ErrorCode": ""
    }
}
I want to have at least only
{
    "cID": "w2-502952",
    "soldToId": "34124"
}
...
...
String manipulation is not the way to do this. You should parse your JSON into Python and extract the relevant data using normal data structure access.
obj = json.loads(data)
relevant_data = obj["Customers"]
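A self-contained version of the above, using a trimmed-down sample of the question's JSON (the second customer record is made up for illustration):

```python
import json

data = '''{
  "Customers": [
    {"cID": "w2-502952", "soldToId": "34124"},
    {"cID": "w2-502953", "soldToId": "34125"}
  ],
  "status": {"success": true, "message": "Customers: 2", "ErrorCode": ""}
}'''

# Parse the JSON, then use normal data-structure access
obj = json.loads(data)
relevant_data = obj["Customers"]
print(relevant_data[0]["cID"])  # w2-502952
```
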
In addition to @Daniel Rosman's answer, if you want all the lists from the JSON:
result = []
obj = json.loads(data)
for value in obj.values():
    if isinstance(value, list):
        result.extend(value)  # extend, not append(*value): list.append takes a single argument
While I agree that Daniel's answer is the absolute best way to go, if you must use string splitting, you can try .find():
string = ...  # however you are loading this JSON text into a string
start = string.find('[') + 1
end = string.rfind(']')   # rfind: the last ], not the first
customers = string[start:end]
print(customers)
The output will be everything between the first [ and the last ] braces.
If you really want to do this via string manipulation (which I don't recommend), you can do it this way:
start = s.find('[') + 1
finish = s.rfind(']')  # rfind so nested ] characters don't cut the result short
inner = s[start : finish]

Extract info from json file, Python

I am trying to extract information from a JSON file that I dumped. I used the Mido module to get the information I need, and the only way I've found to get these features is by dumping it as JSON. But after some searching, I still haven't managed to extract and store the info in a Python array (numpy array).
Below you see the sample code. I am pretty sure the double [[ at the beginning of the file is causing all this trouble.
if __name__ == '__main__':
    filename = 'G:\\ΣΧΟΛΗ p12127\\Πτυχιακη\\MAPS Dataset\\MAPS_AkPnBcht_1\\AkPnBcht\\ISOL\\CH\\MAPS_ISOL_CH0.1_F_AkPnBcht.mid'
    midi_file = mdfile(filename)
    for i, track in enumerate(midi_file.tracks):
        print('Track: ', i, '\nNumber of messages: ', track)
        for message in track:
            print(message)

def midifile_to_dict(mid):
    tracks = []
    for track in mid.tracks:
        tracks.append([vars(msg).copy() for msg in track])
    return {
        'ticks_per_beat': mid.ticks_per_beat,
        'tracks': tracks,
    }

mid = mido.MidiFile('G:\\ΣΧΟΛΗ p12127\\Πτυχιακη\\MAPSDataset\\MAPS_AkPnBcht_1\\AkPnBcht\\ISOL\\CH\\MAPS_ISOL_CH0.1_F_AkPnBcht.mid')
dataj = json.dumps(midifile_to_dict(mid), indent=2)
data = json.loads(dataj)
Output :
{
  "ticks_per_beat": 32767,
  "tracks": [[
    {
      "type": "set_tempo",
      "tempo": 439440,
      "time": 0
    },
    {
      "type": "end_of_track",
      "time": 0
    }
  ]]
}
So, to close this up: how could I extract this info? Is JSON even a good approach, and if so, how could I actually get at the data inside the [[ ]]?
[Windows 7, Python 3.6+]
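After json.loads, data is an ordinary dict, and the double [[ is simply a list of tracks, each of which is a list of message dicts, so plain indexing works. A sketch using the output shown above (values copied from the question; the numpy step is optional):

```python
# The structure produced by the question's code, reproduced literally
data = {
    "ticks_per_beat": 32767,
    "tracks": [[
        {"type": "set_tempo", "tempo": 439440, "time": 0},
        {"type": "end_of_track", "time": 0},
    ]],
}

first_track = data["tracks"][0]                 # first [ : the list of tracks
times = [msg["time"] for msg in first_track]    # second [ : messages in that track
print(times)  # [0, 0]
# If you need an ndarray: numpy.array(times)
```
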

Recursively looping through JSON output in Python

I have the following json output:
{
    "code": 0,
    "message": "success",
    "data": [
        {
            "group_id": "12345678901234567890",
            "display_name": "GROUP",
            "description": "Group 1",
            "monitors": [
                "12345678901234567890",
                "12345678901234567890",
                "12345678901234567890"
            ]
        },
        {
            "group_id": "12345678901234567890",
            "display_name": "KK-GROUP1",
            "description": "KK Group 1",
            "monitors": [
                "12345678901234567890",
                "12345678901234567890",
                "12345678901234567890"
            ]
        },
        {
            "group_id": "12345678901234567890",
            "display_name": "KK-GROUP2",
            "description": "KK Group 2",
            "monitors": [
                "12345678901234567890",
                "12345678901234567890",
                "12345678901234567890"
            ]
        },
        {
            "group_id": "12345678901234567890",
            "display_name": "KK-GROUP3",
            "description": "KK Group 3",
            "monitors": [
                "12345678901234567890",
                "12345678901234567890",
                "12345678901234567890"
            ]
        }
    ]
}
I have this definition that should loop through the JSON output received from a pycurl command, find all groups whose names start with KK (for example), and create a list of all the IDs in the respective monitors fields to add to another section of a script I wrote. In the above output it should provide 9 IDs (3 from each group), but for whatever reason it's only grabbing the first 3 monitor IDs.
def ReturnedMonitors():
    listOfChecks = json.loads(connectSite('GET',''))
    for i in listOfChecks['data']:
        while i['display_name'].startswith(options.clusterName.upper()):
            return i['monitors']
The output from this will be passed into the following:
for monitor in ReturnedMonitors():
    putData = 'activate/' + monitor
    print "Activated: " + modifyMonitors('PUT',putData)
modifyMonitors is another pycurl definition that will post to a site.
If you are trying to create a generator function with ReturnedMonitors(), you have created it wrongly. When you do return, it returns from the function; you need to use the yield keyword instead. Also, if you need to yield each id in the monitors list separately, you should loop over them and yield each one. Example:
def ReturnedMonitors():
    listOfChecks = json.loads(connectSite('GET',''))
    for i in listOfChecks['data']:
        # if, not while: the condition never changes, so while would loop forever
        if i['display_name'].startswith(options.clusterName.upper()):
            for x in i['monitors']:
                yield x
For Python 3.3+, you can use yield from to yield all values from an iterable/iterator (called generator delegation):
def ReturnedMonitors():
    listOfChecks = json.loads(connectSite('GET',''))
    for i in listOfChecks['data']:
        if i['display_name'].startswith(options.clusterName.upper()):
            yield from i['monitors']
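To make the fix concrete, here is a self-contained demonstration with stubbed-in sample data shaped like the question's JSON (connectSite and options are replaced by local stand-ins):

```python
import json

# Stand-in for the pycurl response in the question
response = json.dumps({
    "code": 0,
    "message": "success",
    "data": [
        {"display_name": "GROUP",     "monitors": ["a1", "a2", "a3"]},
        {"display_name": "KK-GROUP1", "monitors": ["b1", "b2", "b3"]},
        {"display_name": "KK-GROUP2", "monitors": ["c1", "c2", "c3"]},
        {"display_name": "KK-GROUP3", "monitors": ["d1", "d2", "d3"]},
    ],
})

def returned_monitors(cluster_name):
    # yield from delegates to each matching group's monitor list
    for group in json.loads(response)["data"]:
        if group["display_name"].startswith(cluster_name.upper()):
            yield from group["monitors"]

print(list(returned_monitors("kk")))
# ['b1', 'b2', 'b3', 'c1', 'c2', 'c3', 'd1', 'd2', 'd3'] — all 9 IDs
```
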

Converting XML to JSON using Python?

I've seen a fair share of ungainly XML->JSON code on the web, and having interacted with Stack's users for a bit, I'm convinced that this crowd can help more than the first few pages of Google results can.
So, we're parsing a weather feed, and we need to populate weather widgets on a multitude of web sites. We're looking now into Python-based solutions.
This public weather.com RSS feed is a good example of what we'd be parsing (our actual weather.com feed contains additional information because of a partnership w/them).
In a nutshell, how should we convert XML to JSON using Python?
xmltodict (full disclosure: I wrote it) can help you convert your XML to a dict+list+string structure, following this "standard". It is Expat-based, so it's very fast and doesn't need to load the whole XML tree in memory.
Once you have that data structure, you can serialize it to JSON:
import xmltodict, json
o = xmltodict.parse('<e> <a>text</a> <a>text</a> </e>')
json.dumps(o) # '{"e": {"a": ["text", "text"]}}'
There is no "one-to-one" mapping between XML and JSON, so converting one to the other necessarily requires some understanding of what you want to do with the results.
That being said, Python's standard library has several modules for parsing XML (including DOM, SAX, and ElementTree). As of Python 2.6, support for converting Python data structures to and from JSON is included in the json module.
So the infrastructure is there.
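For example, a minimal (and deliberately lossy) mapping using only the standard library might look like this; it ignores attributes and mixed content, which is exactly the kind of decision you have to make:

```python
import json
import xml.etree.ElementTree as ET

def etree_to_dict(elem):
    """Naive mapping: children become keys; repeated tags become lists.
    Attributes and mixed content are ignored for brevity."""
    children = list(elem)
    if not children:
        return elem.text
    d = {}
    for child in children:
        value = etree_to_dict(child)
        if child.tag in d:
            # second occurrence of a tag: promote to a list
            if not isinstance(d[child.tag], list):
                d[child.tag] = [d[child.tag]]
            d[child.tag].append(value)
        else:
            d[child.tag] = value
    return d

root = ET.fromstring('<e><a>text</a><a>text</a></e>')
print(json.dumps({root.tag: etree_to_dict(root)}))
# {"e": {"a": ["text", "text"]}}
```
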
You can use the xmljson library to convert using different XML JSON conventions.
For example, this XML:
<p id="1">text</p>
translates via the BadgerFish convention into this:
{
    'p': {
        '#id': 1,
        '$': 'text'
    }
}
and via the GData convention into this (attributes are not supported):
{
    'p': {
        '$t': 'text'
    }
}
... and via the Parker convention into this (attributes are not supported):
{
'p': 'text'
}
It's possible to convert from XML to JSON and from JSON to XML using the same
conventions:
>>> import json, xmljson
>>> from lxml.etree import fromstring, tostring
>>> xml = fromstring('<p id="1">text</p>')
>>> json.dumps(xmljson.badgerfish.data(xml))
'{"p": {"#id": 1, "$": "text"}}'
>>> xmljson.parker.etree({'ul': {'li': [1, 2]}})
# Creates [<ul><li>1</li><li>2</li></ul>]
Disclosure: I wrote this library. Hope it helps future searchers.
To anyone that may still need this, here's a newer, simple way to do this conversion (the function must be defined before it is called, and you need the root element, not the ElementTree object):
from xml.etree import ElementTree as ET

def parseXmlToJson(xml):
    response = {}
    for child in list(xml):
        if len(list(child)) > 0:
            response[child.tag] = parseXmlToJson(child)
        else:
            response[child.tag] = child.text or ''
        # one-liner equivalent:
        # response[child.tag] = parseXmlToJson(child) if len(list(child)) > 0 else child.text or ''
    return response

tree = ET.parse('FILE_NAME.xml')
parsed = parseXmlToJson(tree.getroot())
If you sometimes get only a response code instead of the full data, a JSON parse error can occur, so you need to convert the response to text first:
import requests
import xmltodict
import json

data = requests.get(url)
xpars = xmltodict.parse(data.text)
json_str = json.dumps(xpars)  # don't name this 'json': that shadows the module
print(json_str)
Here's the code I built for that. There's no parsing of the contents, just plain conversion.
from xml.dom import minidom
import simplejson as json

def parse_element(element):
    dict_data = dict()
    if element.nodeType == element.TEXT_NODE:
        dict_data['data'] = element.data
    if element.nodeType not in [element.TEXT_NODE, element.DOCUMENT_NODE,
                                element.DOCUMENT_TYPE_NODE]:
        for item in element.attributes.items():
            dict_data[item[0]] = item[1]
    if element.nodeType not in [element.TEXT_NODE, element.DOCUMENT_TYPE_NODE]:
        for child in element.childNodes:
            child_name, child_dict = parse_element(child)
            if child_name in dict_data:
                try:
                    dict_data[child_name].append(child_dict)
                except AttributeError:
                    dict_data[child_name] = [dict_data[child_name], child_dict]
            else:
                dict_data[child_name] = child_dict
    return element.nodeName, dict_data

if __name__ == '__main__':
    dom = minidom.parse('data.xml')
    with open('data.json', 'w') as f:
        f.write(json.dumps(parse_element(dom), sort_keys=True, indent=4))
There is a method to transport XML-based markup as JSON which allows it to be losslessly converted back to its original form. See http://jsonml.org/.
It's a kind of XSLT for JSON. I hope you find it helpful.
I'd suggest not going for a direct conversion. Convert XML to an object, then from the object to JSON.
In my opinion, this gives a cleaner definition of how the XML and JSON correspond.
It takes time to get right, and you may even write tools to help you with generating some of it, but it would look roughly like this:
class Channel:
    def __init__(self):
        self.items = []
        self.title = ""

    def from_xml(self, xml_node):
        self.title = xml_node.xpath("title/text()")[0]
        for x in xml_node.xpath("item"):
            item = Item()
            item.from_xml(x)
            self.items.append(item)

    def to_json(self):
        retval = {}
        retval['title'] = self.title
        retval['items'] = [x.to_json() for x in self.items]
        return retval

class Item:
    def __init__(self):
        ...

    def from_xml(self, xml_node):
        ...

    def to_json(self):
        ...
You may want to have a look at http://designtheory.org/library/extrep/designdb-1.0.pdf. This project starts off with an XML to JSON conversion of a large library of XML files. There was much research done in the conversion, and the most simple intuitive XML -> JSON mapping was produced (it is described early in the document). In summary, convert everything to a JSON object, and put repeating blocks as a list of objects.
objects meaning key/value pairs (dictionary in Python, hashmap in Java, object in JavaScript)
There is no mapping back to XML that yields an identical document; the reason is that it is unknown whether a key/value pair was an attribute or a <key>value</key> element, so that information is lost.
If you ask me, attributes are a hack to start; then again they worked well for HTML.
Well, probably the simplest way is to just parse the XML into dictionaries and then serialize that with simplejson.
When I do anything with XML in python I almost always use the lxml package. I suspect that most people use lxml. You could use xmltodict but you will have to pay the penalty of parsing the XML again.
To convert XML to JSON with lxml you:
1. Parse the XML document with lxml
2. Convert the lxml tree to a dict
3. Convert the dict to JSON
I use the following class in my projects. Use the toJson method.
from lxml import etree
import json

class Element:
    '''
    Wrapper on the etree.Element class. Extends functionality to output element
    as a dictionary.
    '''
    def __init__(self, element):
        '''
        :param: element a normal etree.Element instance
        '''
        self.element = element

    def toDict(self):
        '''
        Returns the element as a dictionary. This includes all child elements.
        '''
        rval = {
            self.element.tag: {
                'attributes': dict(self.element.items()),
            },
        }
        for child in self.element:
            rval[self.element.tag].update(Element(child).toDict())
        return rval

class XmlDocument:
    '''
    Wraps lxml to provide:
    - cleaner access to some common lxml.etree functions
    - converter from XML to dict
    - converter from XML to json
    '''
    def __init__(self, xml='<empty/>', filename=None):
        '''
        There are two ways to initialize the XmlDocument contents:
        - String
        - File
        You don't have to initialize the XmlDocument during instantiation
        though. You can do it later with the 'set' method. If you choose to
        initialize later, XmlDocument will be initialized with "<empty/>".
        :param: xml Set this argument if you want to parse from a string.
        :param: filename Set this argument if you want to parse from a file.
        '''
        self.set(xml, filename)

    def set(self, xml=None, filename=None):
        '''
        Use this to set or reset the contents of the XmlDocument.
        :param: xml Set this argument if you want to parse from a string.
        :param: filename Set this argument if you want to parse from a file.
        '''
        if filename is not None:
            self.tree = etree.parse(filename)
            self.root = self.tree.getroot()
        else:
            self.root = etree.fromstring(xml)
            self.tree = etree.ElementTree(self.root)

    def dump(self):
        etree.dump(self.root)

    def getXml(self):
        '''
        Return document as a string.
        '''
        return etree.tostring(self.root)

    def xpath(self, xpath):
        '''
        Return elements that match the given xpath.
        :param: xpath
        '''
        return self.tree.xpath(xpath)

    def nodes(self):
        '''
        Return all elements.
        '''
        return self.root.iter('*')

    def toDict(self):
        '''
        Convert to a python dictionary.
        '''
        return Element(self.root).toDict()

    def toJson(self, indent=None):
        '''
        Convert to JSON.
        '''
        return json.dumps(self.toDict(), indent=indent)

if __name__ == "__main__":
    xml = '''<system>
        <product>
            <demod>
                <frequency value='2.215' units='MHz'>
                    <blah value='1'/>
                </frequency>
            </demod>
        </product>
    </system>
    '''
    doc = XmlDocument(xml)
    print(doc.toJson(indent=4))
The output from the built-in main is:
{
    "system": {
        "attributes": {},
        "product": {
            "attributes": {},
            "demod": {
                "attributes": {},
                "frequency": {
                    "attributes": {
                        "units": "MHz",
                        "value": "2.215"
                    },
                    "blah": {
                        "attributes": {
                            "value": "1"
                        }
                    }
                }
            }
        }
    }
}
Which is a transformation of this xml:
<system>
    <product>
        <demod>
            <frequency value='2.215' units='MHz'>
                <blah value='1'/>
            </frequency>
        </demod>
    </product>
</system>
I found that for simple XML snippets, using a regular expression can save trouble. For example:
# <user><name>Happy Man</name>...</user>
import re
names = re.findall(r'<name>(\w+)<\/name>', xml_string)
# do some thing to names
To do it by XML parsing, as @Dan said, there is no one-size-fits-all solution because the data differs. My suggestion is to use lxml. Although it doesn't go all the way to JSON, lxml.objectify gives quite good results:
>>> from lxml import objectify
>>> root = objectify.fromstring("""
... <root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
... <a attr1="foo" attr2="bar">1</a>
... <a>1.2</a>
... <b>1</b>
... <b>true</b>
... <c>what?</c>
... <d xsi:nil="true"/>
... </root>
... """)
>>> print(str(root))
root = None [ObjectifiedElement]
a = 1 [IntElement]
* attr1 = 'foo'
* attr2 = 'bar'
a = 1.2 [FloatElement]
b = 1 [IntElement]
b = True [BoolElement]
c = 'what?' [StringElement]
d = None [NoneElement]
* xsi:nil = 'true'
While the built-in libs for XML parsing are quite good I am partial to lxml.
But for parsing RSS feeds, I'd recommend Universal Feed Parser, which can also parse Atom.
Its main advantage is that it can digest even most malformed feeds.
Python 2.6 already includes a JSON parser, but a newer version with improved speed is available as simplejson.
With these tools building your app shouldn't be that difficult.
My answer addresses the specific (and somewhat common) case where you don't really need to convert the entire xml to json, but what you need is to traverse/access specific parts of the xml, and you need it to be fast, and simple (using json/dict-like operations).
Approach
For this, it is important to note that parsing an xml to etree using lxml is super fast. The slow part in most of the other answers is the second pass: traversing the etree structure (usually in python-land), converting it to json.
Which leads me to the approach I found best for this case: parsing the xml using lxml, and then wrapping the etree nodes (lazily), providing them with a dict-like interface.
Code
Here's the code:
from collections.abc import Mapping  # plain 'collections.Mapping' was removed in Python 3.10
import lxml.etree

class ETreeDictWrapper(Mapping):
    def __init__(self, elem, attr_prefix='#', list_tags=()):
        self.elem = elem
        self.attr_prefix = attr_prefix
        self.list_tags = list_tags

    def _wrap(self, e):
        if isinstance(e, str):  # 'basestring' on Python 2
            return e
        if len(e) == 0 and len(e.attrib) == 0:
            return e.text
        return type(self)(
            e,
            attr_prefix = self.attr_prefix,
            list_tags = self.list_tags,
        )

    def __getitem__(self, key):
        if key.startswith(self.attr_prefix):
            return self.elem.attrib[key[len(self.attr_prefix):]]
        else:
            subelems = [ e for e in self.elem.iterchildren() if e.tag == key ]
            if len(subelems) > 1 or key in self.list_tags:
                return [ self._wrap(x) for x in subelems ]
            elif len(subelems) == 1:
                return self._wrap(subelems[0])
            else:
                raise KeyError(key)

    def __iter__(self):
        return iter(set( k.tag for k in self.elem ) |
                    set( self.attr_prefix + k for k in self.elem.attrib ))

    def __len__(self):
        return len(self.elem) + len(self.elem.attrib)

    # defining __contains__ is not necessary, but improves speed
    def __contains__(self, key):
        if key.startswith(self.attr_prefix):
            return key[len(self.attr_prefix):] in self.elem.attrib
        else:
            return any( e.tag == key for e in self.elem.iterchildren() )

def xml_to_dictlike(xmlstr, attr_prefix='#', list_tags=()):
    t = lxml.etree.fromstring(xmlstr)
    return ETreeDictWrapper(
        t,
        attr_prefix = attr_prefix,  # pass the parameter through (was hard-coded to '#')
        list_tags = set(list_tags),
    )
This implementation is not complete, e.g., it doesn't cleanly support cases where an element has both text and attributes, or both text and children (only because I didn't need it when I wrote it...) It should be easy to improve it, though.
Speed
In my specific use case, where I needed to only process specific elements of the xml, this approach gave a surprising and striking speedup by a factor of 70 (!) compared to using @Martin Blech's xmltodict and then traversing the dict directly.
Bonus
As a bonus, since our structure is already dict-like, we get another alternative implementation of xml2json for free. We just need to pass our dict-like structure to json.dumps. Something like:
def xml_to_json(xmlstr, **kwargs):
    x = xml_to_dictlike(xmlstr, **kwargs)
    return json.dumps(x)
If your xml includes attributes, you'd need to use some alphanumeric attr_prefix (e.g. "ATTR_"), to ensure the keys are valid json keys.
I haven't benchmarked this part.
check out lxml2json (disclosure: I wrote it)
https://github.com/rparelius/lxml2json
it's very fast, lightweight (only requires lxml), and one advantage is that you have control over whether certain elements are converted to lists or dicts
jsonpickle or if you're using feedparser, you can try feed_parser_to_json.py
You can use declxml. It has advanced features like multi attributes and complex nested support. You just need to write a simple processor for it. Also with the same code, you can convert back to JSON as well. It is fairly straightforward and the documentation is awesome.
Link: https://declxml.readthedocs.io/en/latest/index.html
If you don't want to use any external libraries or 3rd-party tools, try the code below.
Code
import re
import json

def getdict(content):
    res = re.findall(r"<(?P<var>\S*)(?P<attr>[^/>]*)(?:(?:>(?P<val>.*?)</(?P=var)>)|(?:/>))", content)
    if len(res) >= 1:
        attreg = r"(?P<avr>\S+?)(?:(?:=(?P<quote>['\"])(?P<avl>.*?)(?P=quote))|(?:=(?P<avl1>.*?)(?:\s|$))|(?P<avl2>[\s]+)|$)"
        if len(res) > 1:
            return [{i[0]: [{"#attributes": [{j[0]: (j[2] or j[3] or j[4])} for j in re.findall(attreg, i[1].strip())]}, {"$values": getdict(i[2])}]} for i in res]
        else:
            # res[0] is a (tag, attributes, value) tuple, so index into it
            return {res[0][0]: [{"#attributes": [{j[0]: (j[2] or j[3] or j[4])} for j in re.findall(attreg, res[0][1].strip())]}, {"$values": getdict(res[0][2])}]}
    else:
        return content

with open("test.xml", "r") as f:
    print(json.dumps(getdict(f.read().replace('\n', ''))))
Sample Input
<details class="4b" count=1 boy>
    <name type="firstname">John</name>
    <age>13</age>
    <hobby>Coin collection</hobby>
    <hobby>Stamp collection</hobby>
    <address>
        <country>USA</country>
        <state>CA</state>
    </address>
</details>
<details empty="True"/>
<details/>
<details class="4a" count=2 girl>
    <name type="firstname">Samantha</name>
    <age>13</age>
    <hobby>Fishing</hobby>
    <hobby>Chess</hobby>
    <address current="no">
        <country>Australia</country>
        <state>NSW</state>
    </address>
</details>
Output
[
  {
    "details": [
      {
        "#attributes": [
          {
            "class": "4b"
          },
          {
            "count": "1"
          },
          {
            "boy": ""
          }
        ]
      },
      {
        "$values": [
          {
            "name": [
              {
                "#attributes": [
                  {
                    "type": "firstname"
                  }
                ]
              },
              {
                "$values": "John"
              }
            ]
          },
          {
            "age": [
              {
                "#attributes": []
              },
              {
                "$values": "13"
              }
            ]
          },
          {
            "hobby": [
              {
                "#attributes": []
              },
              {
                "$values": "Coin collection"
              }
            ]
          },
          {
            "hobby": [
              {
                "#attributes": []
              },
              {
                "$values": "Stamp collection"
              }
            ]
          },
          {
            "address": [
              {
                "#attributes": []
              },
              {
                "$values": [
                  {
                    "country": [
                      {
                        "#attributes": []
                      },
                      {
                        "$values": "USA"
                      }
                    ]
                  },
                  {
                    "state": [
                      {
                        "#attributes": []
                      },
                      {
                        "$values": "CA"
                      }
                    ]
                  }
                ]
              }
            ]
          }
        ]
      }
    ]
  },
  {
    "details": [
      {
        "#attributes": [
          {
            "empty": "True"
          }
        ]
      },
      {
        "$values": ""
      }
    ]
  },
  {
    "details": [
      {
        "#attributes": []
      },
      {
        "$values": ""
      }
    ]
  },
  {
    "details": [
      {
        "#attributes": [
          {
            "class": "4a"
          },
          {
            "count": "2"
          },
          {
            "girl": ""
          }
        ]
      },
      {
        "$values": [
          {
            "name": [
              {
                "#attributes": [
                  {
                    "type": "firstname"
                  }
                ]
              },
              {
                "$values": "Samantha"
              }
            ]
          },
          {
            "age": [
              {
                "#attributes": []
              },
              {
                "$values": "13"
              }
            ]
          },
          {
            "hobby": [
              {
                "#attributes": []
              },
              {
                "$values": "Fishing"
              }
            ]
          },
          {
            "hobby": [
              {
                "#attributes": []
              },
              {
                "$values": "Chess"
              }
            ]
          },
          {
            "address": [
              {
                "#attributes": [
                  {
                    "current": "no"
                  }
                ]
              },
              {
                "$values": [
                  {
                    "country": [
                      {
                        "#attributes": []
                      },
                      {
                        "$values": "Australia"
                      }
                    ]
                  },
                  {
                    "state": [
                      {
                        "#attributes": []
                      },
                      {
                        "$values": "NSW"
                      }
                    ]
                  }
                ]
              }
            ]
          }
        ]
      }
    ]
  }
]
This stuff here is actively maintained and so far is my favorite: xml2json in python
I published one on GitHub a while back:
https://github.com/davlee1972/xml_to_json
This converter is written in Python and will convert one or more XML files into JSON / JSONL files
It requires an XSD schema file to figure out nested JSON structures (dictionaries vs. lists) and the JSON-equivalent data types.
python xml_to_json.py -x PurchaseOrder.xsd PurchaseOrder.xml
INFO - 2018-03-20 11:10:24 - Parsing XML Files..
INFO - 2018-03-20 11:10:24 - Processing 1 files
INFO - 2018-03-20 11:10:24 - Parsing files in the following order:
INFO - 2018-03-20 11:10:24 - ['PurchaseOrder.xml']
DEBUG - 2018-03-20 11:10:24 - Generating schema from PurchaseOrder.xsd
DEBUG - 2018-03-20 11:10:24 - Parsing PurchaseOrder.xml
DEBUG - 2018-03-20 11:10:24 - Writing to file PurchaseOrder.json
DEBUG - 2018-03-20 11:10:24 - Completed PurchaseOrder.xml
I also have a follow up xml to parquet converter that works in a similar fashion
https://github.com/blackrock/xml_to_parquet
