How to just initialise the key in a Python dictionary? - python

I am trying to generate a JSON file from a Python dictionary.
Here is the segment of Python code involved in this issue, before I dump it to JSON format:
channelSeg = {}
channelSeg["ch"] = None
channelSeg["chdata"] = []
for e in channelPkg:
    print e
    attr = e.split(':')
    if attr[0] == "ch":
        channel = attr[1].split(',')
        channelSeg["ch"] = int(channel[0])
I am doing this to initialise the dictionary keys so that later I can append more data in my for loop, like this:
channelSeg["ch"] = None
channelSeg["chdata"] = []
but what I really want to do is declare them without assigning any data, just:
channelSeg["ch"]
channelSeg["chdata"]
but Python doesn't let me do that.
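(As far as I can tell, a dict key cannot exist without a value, so the closest thing is initialising the keys with a placeholder, for example with dict.fromkeys, which still just assigns None under the hood. A minimal sketch:)
channelSeg = dict.fromkeys(["ch", "chdata"])  # {'ch': None, 'chdata': None}
channelSeg["chdata"] = []  # a mutable default still has to be created explicitly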
So after the dump operation, I get repetitive JSON data like this (part of it):
"datapkg": [
{
"dataseg": [
{
"ch": 0,
"chdata": [
{
"euler": {
"y": "-19.32",
"x": "93.84",
"z": "-134.14"
}
},
{
"areal": {
"y": "57",
"x": "-242",
"z": "-210"
}
}
]
},
{
"ch": 1,
"chdata": [
{
"areal": {
"y": "-63",
"x": "-30",
"z": "10"
}
}
]
},
{
"ch": null,
"chdata": []
}
],
"t": "174464",
"n": "9884"
},
I always end up with this redundant entry:
{
"ch": null,
"chdata": []
}
which clutters the JSON package. Is there any way to avoid producing this redundant entry?
Many thanks for any advice.
===========v2==============
After considering Edward's answer, I found I could only solve the channelSeg["ch"] = None part, but I don't know how to deal with the other redundant list. That is because I didn't post enough code, so I have pasted more complete code here and am still looking for solutions.
My code after modification:
for elem in sensorPkg:
    channelPkg = elem.split('&') # channelPkg contains each channel's readings
    # each channel needs a dictionary to store data
    channelSeg = {}
    # channelSeg["ch"] = None
    channelSeg["chdata"] = []
    for e in channelPkg:
        attr = e.split(':')
        if attr[0] == "ch":
            new_channel = {
                'ch': int((attr[1].split(','))[0])
                #channelSeg["ch"] = int(channel[0])
            }
            channelSeg["chdata"].append(new_channel)
            # store channel numbers
        elif attr[0] == "euler":
            # create euler package
            numbers = attr[1].split(',')
            eulerSeg = {}
            d = {}
            d["x"] = numbers[0]
            d["y"] = numbers[1]
            d["z"] = numbers[2]
            eulerSeg["euler"] = d
            # append to channel segment
            channelSeg["chdata"].append(eulerSeg)
        elif attr[0] == "areal": # real accelerometer readings
            # create areal package
            numbers = attr[1].split(',')
            arealSeg = {}
            d = {}
            d["x"] = numbers[0]
            d["y"] = numbers[1]
            d["z"] = numbers[2]
            arealSeg["areal"] = d
            # append to channel segment
            channelSeg["chdata"].append(arealSeg)
        # and so on
And here is the outcome:
{
"dataseg": [
{
"chdata": [
{
"ch": 0
},
{
"euler": {
"y": "6.51",
"x": "73.16",
"z": "-133.69"
}
},
{
"areal": {
"y": "516",
"x": "-330",
"z": "-7"
}
}
]
},
{
"chdata": [
{
"ch": 1
},
{
"euler": {
"y": "24.86",
"x": "4.30",
"z": "-71.39"
}
},
{
"areal": {
"y": "120",
"x": "316",
"z": "273"
}
}
]
},
{
"chdata": [
{
"ch": 2
},
{
"euler": {
"y": "62.32",
"x": "-60.34",
"z": "-120.82"
}
},
{
"areal": {
"y": "440",
"x": "-611",
"z": "816"
}
}
]
},
{
"chdata": []
}
],
"t": "14275",
"n": "794"
},
in which
{
"chdata": []
}
is still there.

In the data structure that you're working with, I notice that 'dataseg' is a list of channels. Now, you don't need to initialize each channel before adding it to dataseg. First initialize dataseg as an empty list, then, while iterating over your entries in channelPkg, you can create new channel dicts using the information read from channelPkg, and append them immediately:
dataseg = []
for e in channelPkg:
    attr = e.split(':')
    if attr[0] == "ch":
        new_channel = {
            'ch': int(attr[1].split(',')[0]),
            'data': []
        }
        dataseg.append(new_channel)
Hope that helps -- I'm not sure what the context of your question is exactly, so comment if this doesn't solve your problem.
Edit
I think that your problem is that the very last channelPkg is empty. So, for e in channelPkg: is equivalent to for e in [], and as a result, the last iteration of the outer loop appends just the initialized values (nothing inside for e in channelPkg executes).
Try adding two lines to test if the sensorPkg has a ch property (I'm assuming that all valid sensorPkgs have a ch property):
for elem in sensorPkg:
    channelPkg = elem.split('&')
    # Add this to prevent appending an empty channel
    if 'ch' not in [e.split(':')[0] for e in channelPkg]:
        break
    channelSeg = {}
    channelSeg["chdata"] = []
    for e in channelPkg:
        # ... etc
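For completeness, a minimal sketch of the whole rewritten loop under that assumption (using the question's variable names, guessing the 'name:comma-separated-values' entry format, and using continue rather than break so any empty package is skipped, not just a trailing one) might look like:
dataseg = []
for elem in sensorPkg:
    channelPkg = elem.split('&')
    # skip packages that carry no channel number at all
    if not any(e.split(':')[0] == 'ch' for e in channelPkg):
        continue
    channelSeg = {"ch": None, "chdata": []}
    for e in channelPkg:
        attr = e.split(':')
        if attr[0] == "ch":
            channelSeg["ch"] = int(attr[1].split(',')[0])
        elif attr[0] in ("euler", "areal"):
            x, y, z = attr[1].split(',')[:3]
            channelSeg["chdata"].append({attr[0]: {"x": x, "y": y, "z": z}})
    dataseg.append(channelSeg)
This builds dataseg only from packages that actually contain a channel number, so the trailing empty entry never appears.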

Try using a comprehension with a filter condition:
channelSeg["chdata"] = [ch.split(',')[0] for ch in e.split(':') if ch]

Related

creating nested dictionaries and lists from parsed CSV

I have been working on a project that involves parsing a CSV file in order to turn all the data into a very specifically formatted JSON following a complex schema. I have to custom-build this program, as the required complexity of the JSON makes existing converters fail. I am mostly there, but I have run into one final roadblock:
I have nested dictionaries, and occasionally there must be a list within those; this list will contain further dictionaries. This is fine and I have been able to complete that, but now I need to find a way to add more nested dictionaries within those. Below is a simplified breakdown of the concept.
The CSV will look something like this, where the # before a tag indicates that it's a list:
x.a, x.b.z, x.b.y, x.#c.z.nest1, x.#c.z.nest2, x.#c.yy, x.d, x.e.z, x.e.y
ab, cd, ef, gh, ij, kl, mn, op, qr
This should result in the following JSON:
{
"x": {
"a": "ab",
"b": {
"z": "cd",
"y": "ef"
},
"c": [
{
"z": {
"nest1": "gh",
"nest2": "ij"
}
},
{
"yy": "kl"
}
],
"d": "mn",
"e": {
"z": "op",
"y": "qr"
}
}
}
This is one issue that I haven't been able to solve: my current code can only handle one dictionary after the list item, not further nesting. I also need to be able to somehow do the following within a list of dictionaries:
"c": [
{
"z": {
"nest1": "gh"
},
"zz": {
"nest2": "ij"
}
},
{
"yy": "kl"
}
i.e. somehow add multiple nested dictionaries within the same dictionary in the list. The problem is that these aren't referenceable by name, so I don't know how I could indicate that within the CSV format.
Here is the code I have that works up to the first dictionary nested within a list:
import json
import pandas as pd
from os.path import exists

# df1 = pd.read_csv("excelTestFacilities.csv", header = 1, sep=",", keep_default_na=False, engine="python")
# df2 = pd.read_csv("excelTestFacilityContacts.csv", header = 1, sep=",", keep_default_na=False, engine="python")
# df = pd.merge(df1, df2, how = 'inner')
df = pd.read_csv("csvTestFile.csv", header = 1, sep=", ", keep_default_na=False, engine="python")
#print(df) # uncomment to see the transformation
json_data = df.to_dict(orient="records")
#print(json_data)

def unflatten_dic(dic):
    """
    Unflattens a CSV list into a set of nested dictionaries
    """
    ini = {}
    for k, v in list(dic.items()):
        node = ini
        list_bool = False
        *parents, key = k.split('.')
        for parent in parents:
            if parent[0] == '#':
                list_bool = True
        if list_bool:
            for parent in parents:
                if parent[0] == '#':
                    node[parent[1:]] = node = node.get(parent[1:], [])
                else:
                    node[parent] = node = node.get(parent, {})
            node.append({key: v})
        else:
            for parent in parents:
                node[parent] = node = node.get(parent, {})
            node[key] = v
    return ini

def merge_lists(dic):
    """
    Removes duplicates within sets
    """
    for k, v in list(dic.items()):
        if isinstance(v, dict):
            keys = list(v.keys())
            vals = list(v.values())
            if all(isinstance(l, list) and len(l) == len(vals[0]) for l in vals):
                dic[k] = []
                val_tuple = set(zip(*vals))  # removing duplicates with set()
                for t in val_tuple:
                    dic[k].append({subkey: t[i] for i, subkey in enumerate(keys)})
            else:
                merge_lists(v)
        elif isinstance(v, list):
            dic[k] = list(set(v))  # removing list duplicates

def clean_blanks(value):
    """
    Recursively remove all None values from dictionaries and lists, and returns
    the result as a new dictionary or list.
    """
    if isinstance(value, list):
        return [clean_blanks(x) for x in value if x != ""]
    elif isinstance(value, dict):
        return {
            key: clean_blanks(val)
            for key, val in value.items()
            if val != "" and val != {}
        }
    else:
        return value

def add_to_dict(section_added_to, section_to_add, value, reportNum):
    """
    Adds a value to a given spot within a dictionary set.
    section_added_to is optional for adding the set to a deeper section such as facility
    section_to_add is the name that the new dictionary entry will have
    value is the item to be added
    reportNum is the number indicating which report to add to, starting at 0
    """
    if section_added_to != '':
        end_list[reportNum][section_added_to][section_to_add] = value
    else:
        end_list[reportNum][section_to_add] = value

def read_add_vals(filename_prefix, added_to, section):
    for i in range(len(end_list)):
        temp_list = []
        filename = filename_prefix + str(i+1) + ".csv"
        if not exists(filename):
            continue
        temp_df = pd.read_csv(filename, header = 1, sep=",", keep_default_na=False, engine="python")
        temp_json = temp_df.to_dict(orient="records")
        for y in temp_json:
            return_ini = unflatten_dic(y)
            temp_list.append(return_ini)
        add_to_dict(added_to, section, temp_list, i)

global end_list
end_list = []
for x in json_data:
    return_ini = unflatten_dic(x)
    end_list.append(return_ini)
#read_add_vals('excelTestPermitsFac', 'facility', 'permits');
json_data = clean_blanks(end_list)
final_json = {"year":2021, "version":"2022-02-14", "reports":json_data}
print(json.dumps(final_json, indent=4))
Some parts of this code are involved in other components of the overall final JSON, but I am mainly concerned with how to change unflatten_dic().
Here is my current attempt at changing unflatten_dic(), even though it doesn't work...
def list_get(list, list_item):
    i = 0
    for dict in list:
        if list_item in dict:
            return dict.get(list_item, {})
        i += 1
    return {}

def check_in_list(list, list_item):
    i = 0
    for dict in list:
        if list_item in dict:
            return i
        i += 1
    return -1

def unflatten_dic(dic):
    """
    Unflattens a CSV list into a set of nested dictionaries
    """
    ini = {}
    for k, v in list(dic.items()):
        node = ini
        list_bool = False
        *parents, key = k.split('.')
        for parent in parents:
            if parent[0] == '#':
                list_bool = True
        previous_node_list = False
        if list_bool:
            for parent in parents:
                print(parent)
                if parent[0] == '#':
                    node[parent[1:]] = node = node.get(parent[1:], [])
                    ends_with_dict = False
                    previous_node_list = True
                else:
                    print("else")
                    if previous_node_list:
                        print("prev list")
                        i = check_in_list(node, parent)
                        if i >= 0:
                            node[i] = node = list_get(node, parent)
                        else:
                            node.append({parent : {}})
                        previous_node_list = False
                        ends_with_dict = True
                    else:
                        print("not prev list")
                        node[parent] = node = node.get(parent, {})
                        previous_node_list = False
            if ends_with_dict:
                node[key] = v
            else:
                node.append({key : v})
        else:
            for parent in parents:
                node[parent] = node = node.get(parent, {})
            node[key] = v
        #print(node)
    return ini
Any, even small, amount of help would be greatly appreciated.
It is easiest to use recursion and collections.defaultdict to group child entries on their parents (each entry is separated by the . in the csv data):
from collections import defaultdict

def to_dict(vals, is_list = 0):
    def form_child(a, b):
        return b[0][0] if len(b[0]) == 1 else to_dict(b, a[0] == '#')
    d = defaultdict(list)
    for a, *b in vals:
        d[a].append(b)
    if not is_list:
        return {a[a[0] == '#':]: form_child(a, b) for a, b in d.items()}
    return [{a[a[0] == '#':]: form_child(a, b)} for a, b in d.items()]

import csv, json
with open('filename.csv') as f:
    data = list(csv.reader(f))

r = [a.split('.')+[b] for i in range(0, len(data), 2) for a, b in zip(data[i], data[i+1])]
print(json.dumps(to_dict(r), indent=4))
Output:
{
"x": {
"a": "ab",
"b": {
"z": "cd",
"y": "ef"
},
"c": [
{
"z": {
"nest1": "gh",
"nest2": "ij"
}
},
{
"yy": "kl"
}
],
"d": "mn",
"e": {
"z": "op",
"y": "qr"
}
}
}
I managed to get it working in what seems to be all scenarios. Here is the code that I made for the unflatten_dic() function.
def unflatten_dic(dic):
    """
    Unflattens a CSV list into a set of nested dictionaries
    """
    ini = {}
    for k, v in list(dic.items()):
        node = ini
        list_bool = False
        *parents, key = k.split('.')
        # print("parents")
        # print(parents)
        for parent in parents:
            if parent[0] == '#':
                list_bool = True
        if list_bool:
            for parent in parents:
                if parent[0] == '#':
                    node[parent[1:]] = node = node.get(parent[1:], [])
                elif parent.isnumeric():
                    # print("numeric parent")
                    # print("length of node")
                    # print(len(node))
                    if len(node) > int(parent):
                        # print("node length good")
                        node = node[int(parent)]
                    else:
                        node.append({})
                        node = node[int(parent)]
                else:
                    node[parent] = node = node.get(parent, {})
            try:
                node.append({key : v})
            except AttributeError:
                node[key] = v
        else:
            for parent in parents:
                node[parent] = node = node.get(parent, {})
            node[key] = v
    return ini
I haven't run into an issue thus far. This is based on the following rules for the CSV:
# before any name results in that item being a list
If the section immediately after a list in the CSV is a number, that will create multiple dictionaries within the list. Here is an example:
x.a, x.b.z, x.b.y, x.#c.0.zz, x.#c.1.zz, x.#c.2.zz, x.d, x.e.z, x.e.y, x.#c.1.yy.l, x.#c.1.yy.#m.q, x.#c.1.yy.#m.r
ab, cd, ef, gh, , kl, mn, op, qr, st, uv, wx
12, 34, 56, 78, 90, 09, , 65, 43, 21, , 92
This will result in the following JSON after formatting
"reports": [
{
"x": {
"a": "ab",
"b": {
"z": "cd",
"y": "ef"
},
"c": [
{
"zz": "gh"
},
{
"yy": {
"l": "st",
"m": [
{
"q": "uv"
},
{
"r": "wx"
}
]
}
},
{
"zz": "kl"
}
],
"d": "mn",
"e": {
"z": "op",
"y": "qr"
}
}
},
{
"x": {
"a": "12",
"b": {
"z": "34",
"y": "56"
},
"c": [
{
"zz": "78"
},
{
"zz": "90",
"yy": {
"l": "21",
"m": [
{
"r": "92"
}
]
}
},
{
"zz": "09"
}
],
"e": {
"z": "65",
"y": "43"
}
}
}
]
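For reference, a minimal usage sketch of this unflatten_dic() (the flattened row below is a hypothetical example following the x.#c.<index> convention above):
import json

row = {
    "x.a": "ab",
    "x.#c.0.zz": "gh",
    "x.#c.1.yy.l": "st",
}
print(json.dumps(unflatten_dic(row), indent=4))
# produces {"x": {"a": "ab", "c": [{"zz": "gh"}, {"yy": {"l": "st"}}]}} (shown compact here)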

How to combine a dict to a json file as an object with same index in Python?

The question may be confusing, I know, but I don't know how to ask this better.
Let me explain the issue. I have a JSON file like this:
{
"0": "MyItem",
"1": "AnotherItem"
}
Then I am generating a dictionary to go with it, like this:
{
"UniqueId": "52355",
"AnotherUniqueId": "234235"
}
They have the same length. What I want to do is merge this dictionary into the JSON file at the same index, as an object, like this:
{
{"0": "MyItem", "UniqueId": "52355"}
{"1": "AnotherItem", "AnotherUniqueId": "234235"}
}
How can I achieve this?
This takes one item from each dict, combines them with { **dict1, **dict2 },
and then stores each combined dict in the final dict, keyed by its index.
n = {
    "0": "MyItem",
    "1": "AnotherItem"
}
m = {
    "UniqueId": "52355",
    "AnotherUniqueId": "234235"
}
c = {}
for i, keys in enumerate(zip(n, m)):
    a, b = keys
    c[i] = { **{a: n[a]}, **{b: m[b]} }
print(c)
Output:
{
0: {'0': 'MyItem', 'UniqueId': '52355'},
1: {'1': 'AnotherItem', 'AnotherUniqueId': '234235'}
}
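To write the merged result back out as a JSON file, something like this should work (the filename is hypothetical):
import json

with open('combined.json', 'w') as f:  # hypothetical output path
    json.dump(c, f, indent=2)
Note that JSON object keys are always strings, so the integer indices come out as "0", "1", and so on.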
Your dictionaries in the final dictionary need to be accompanied by some sort of key: since a dictionary is a collection of key-value pairs, it wouldn't make sense to have a value without a key. The output you should go after is this, for example:
{
0: {"0": "MyItem", "UniqueId": "52355"},
1: {"1": "AnotherItem", "AnotherUniqueId": "234235"}
}
Here's my solution
b = {
    "UniqueId": "52355",
    "AnotherUniqueId": "234235"
}
a = {
    "0": "MyItem",
    "1": "AnotherItem"
}
# Assuming a and b are of the same length
c = {}  # will contain the final dictionaries
index = 0
for i, j in zip(a, b):
    temp = {}
    temp[i] = a[i]
    temp[j] = b[j]
    c[index] = temp
    index += 1
print(c)

Foreach loop in Python to extract value from array in json response

I've got this json response:
{
"properties": {
"basic": {
"bandwidth_class": "",
"failure_pool": "",
"max_connection_attempts": 0,
"max_idle_connections_pernode": 50,
"max_timed_out_connection_attempts": 2,
"monitors": [
"Simple HTTP"
],
"node_close_with_rst": false,
"node_connection_attempts": 3,
"node_delete_behavior": "immediate",
"node_drain_to_delete_timeout": 0,
"nodes_table": [
{
"node": "abc1.prod.local:80",
"priority": 1,
"state": "active",
"weight": 1
},
{
"node": "def1.prod.local:80",
"priority": 1,
"state": "disabled",
"weight": 1
},
{
"node": "ghi1.prod.local:80",
"priority": 1,
"state": "disabled",
"weight": 1
},
{
"node": "jkl1.prod.local:80",
"priority": 1,
"state": "active",
"weight": 1
}
],
"note": "",
"passive_monitoring": true,
"persistence_class": "",
"transparent": false
}
}
}
And this powershell script:
$nodesArray = "abc1.prod.local:80", "jkl1.prod.local:80"
foreach($node in $nodesArray)
{
    $nodes_match_and_enabled = $GetNodesResponse.properties.basic.nodes_table | Where { $_.node -eq $node -and $_.state -eq "active" }
    if($nodes_match_and_enabled)
    {
        Write-Output "$node exists in the pool and active"
    }
    else
    {
        Write-Output "$node is either not active or the name mismatches"
        $global:invalidNodeArray.Add($node)
    }
}
In my PowerShell script I am looping to check that the two nodes in my array actually match by value and that their state is active. It works as I expect.
However, I am trying to script the exact same logic in Python (I am a beginner) but I'm not sure how to approach it. Any idea what the script would look like in Python?
First, filter all active nodes, then compare with node list:
data = json.loads(text)
active_nodes = {
    n['node']
    for n in data['properties']['basic']['nodes_table']
    if n['state'] == 'active'
}
nodes = {"abc1.prod.local:80", "jkl1.prod.local:80"}
for node in nodes:
    if node in active_nodes:
        print('{} exists in the pool and active'.format(node))
    else:
        print('{} is either not active or the name mismatches'.format(node))
invalid_nodes = nodes - active_nodes
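Here text is assumed to already hold the raw JSON string; if the response lives in a file instead, a minimal way to load it (the filename is hypothetical) is:
import json

with open('response.json') as f:  # hypothetical file containing the response shown above
    data = json.load(f)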
Should work in Python 2 or 3, I think:
#!/usr/bin/env python
import sys
import json

res = ""
for line in sys.stdin:
    res += line.rstrip()
res_obj = json.loads(res)

nodes = ['abc1.prod.local:80', 'jkl1.prod.local:80']
invalid_nodes = []
for node in nodes:
    try:
        found = False
        test_node_objs = res_obj['properties']['basic']['nodes_table']
        for test_node_obj in test_node_objs:
            test_node = test_node_obj['node']
            # match on the node name and require it to be active
            if node == test_node and test_node_obj['state'] == 'active':
                found = True
                break
        if found:
            sys.stdout.write("%s exists in the pool and active\n" % (node))
        else:
            sys.stdout.write("%s is either not active or the name mismatches\n" % (node))
            invalid_nodes.append(node)
    except KeyError as ke:
        sys.stderr.write("malformed response? check input...\n")
        pass
Example usage:
$ ./parse_response.py < response.json
Here's an implementation:
jsonObj = json.loads(jsonSrc)
expectedNodes = {"abc1.prod.local:80", "jkl1.prod.local:80"}
for node in expectedNodes:
    node_table = jsonObj['properties']['basic']['nodes_table']
    node_match = list(filter(lambda t_node: node == t_node['node'], node_table))
    is_node_matches_and_active = len(node_match) > 0 and node_match[0]['state'] == "active"
    if is_node_matches_and_active:
        print('node {} exists and is active'.format(node))
    else:
        print('node {} not found or not active'.format(node))
Output:
node jkl1.prod.local:80 exists and is active
node abc1.prod.local:80 exists and is active

Nested dictionary from data in a text file

I am new to Python and I am trying to create a dictionary that I can output to a JSON file, using data from a text file. The text file looks like this:
557e155fc5f0 557e155fc5f0 1 557e155fc602 1
557e155fc610 557e155fc610 2
557e155fc620 557e155fc620 1 557e155fc626 1
557e155fc630 557e155fc630 1 557e155fc636 1
557e155fc640 557e155fc640 1
557e155fc670 557e155fc670 1 557e155fc698 1
557e155fc6a0 557e155fc6a0 1 557e155fc6d8 1
And the desired output for the first two lines would be
{ "functions": [
{
"address": "557e155fc5f0",
"blocks": [
"557e155fc5f0": "calls":{1}
"557e155fc602": "calls":{1}
]
},
{
"address": " 557e155fc610",
"blocks": [
" 557e155fc610": "calls":{2}
]
},
I have written a script to start with, but I don't know how to continue.
import json
filename = 'calls2.out' # here the name of the output file
funs = {}
bbls = {}
with open(filename) as fh: # open file
    for line in fh: # walk line by line
        if line.strip(): # non-empty line?
            rtn, bbl = line.split(None, 1) # None means 'all whitespace', the default
            for j in range(len(bbl)):
                funs[rtn] = bbl.split()
print(json.dumps(funs, indent=2, sort_keys=True))
#json = json.dumps(funs, indent=2, sort_keys=True) # to save it into a file
#f = open("fout.json","w")
#f.write(json)
#f.close()
This script gives me this output:
"557e155fc5f0": [
"557e155fc5f0",
"1",
"557e155fc602",
"1"
],
"557e155fc610": [
"557e155fc610",
"2"
],
"557e155fc620": [
"557e155fc620",
"1",
"557e155fc626",
"1"
],
funs[rtn] = bbl.split()
Here you add "557e155fc5f0", "1" as the value for the rtn key, because bbl is 557e155fc5f0 1 at this point, but you want to add it as a dictionary:
temp_dict = {bbl.split()[0]: bbl.split()[1]}
funs[rtn] = temp_dict
This will give you the following JSON:
{
"557e155fc6a0": {
"557e155fc6a0": "1"
}
}
If you need calls as a key in the JSON, you'd need to extend it a bit:
temp_dict = {bbl.split()[0]: {"calls": bbl.split()[1]}}
funs[rtn] = temp_dict
Gives you this:
{
"557e155fc6a0": {
"557e155fc6a0": {
"calls": "1"
}
}
}
Also, your example JSON is malformed; I assume you want something like this:
{
"functions": {
"address": "557e155fc5f0",
"blocks": {
"557e155fc5f0": {
"calls": 1
},
"557e155fc602": {
"calls": 1
}
}
},
"address": " 557e155fc610",
"blocks": {
"557e155fc610": {
"calls": 2
}
}
}
I'd try an Online JSON Editor for testing/creating examples.
Hope it helps!

Comparing Nested Python dict with list and dict

I've seen similar questions but none that exactly match what I'm doing, and I believe other developers might face the same issue if they are working with MongoDB.
I'm looking to compare two nested dict objects containing dicts and arrays, and return a dict with the additions and deletions (like you would git diff two files).
Here is what I have so far:
def dict_diff(alpha, beta, recurse_adds=False, recurse_dels=False):
    """
    :return: differences between two python dict with adds and dels
    example:
    (This is the expected output)
    {
        'adds':
        {
            'specific_hours': [{'ends_at': '2015-12-25'}],
        }
        'dels':
        {
            'specific_hours': [{'ends_at': '2015-12-24'}],
            'subscription_products': {'review_management': {'thiswillbedeleted': 'deleteme'}}
        }
    }
    """
    if type(alpha) is dict and type(beta) is dict:
        a_keys = alpha.keys()
        b_keys = beta.keys()
        dels = {}
        adds = {}
        for key in a_keys:
            if type(alpha[key]) is list:
                if alpha[key] != beta[key]:
                    adds[key] = dict_diff(alpha[key], beta[key], recurse_adds=True)
                    dels[key] = dict_diff(alpha[key], beta[key], recurse_dels=True)
            elif type(alpha[key]) is dict:
                if alpha[key] != beta[key]:
                    adds[key] = dict_diff(alpha[key], beta[key], recurse_adds=True)
                    dels[key] = dict_diff(alpha[key], beta[key], recurse_dels=True)
            elif key not in b_keys:
                dels[key] = alpha[key]
            elif alpha[key] != beta[key]:
                adds[key] = beta[key]
                dels[key] = alpha[key]
        for key in b_keys:
            if key not in a_keys:
                adds[key] = beta[key]
    elif type(alpha) is list and type(beta) is list:
        index = 0
        adds = []
        dels = []
        for elem in alpha:
            if alpha[index] != beta[index]:
                dels.append(alpha[index])
                adds.append(beta[index])
                # print('update', adds, dels)
            index += 1
    else:
        raise Exception("dict_diff function can only get dict objects")
    if recurse_adds:
        if bool(adds):
            return adds
        return {}
    if recurse_dels:
        if bool(dels):
            return dels
        return {}
    return {'adds': adds, 'dels': dels}
The result I'm getting now is:
{'adds': {'specific_hours': [{'ends_at': '2015-12-24',
'open_hours': ['07:30-11:30', '12:30-21:30'],
'starts_at': '2015-12-22'},
{'ends_at': '2015-01-03',
'open_hours': ['07:30-11:30'],
'starts_at': '2015-01-0'}],
'subscription_products': {'review_management': {}}},
'dels': {'specific_hours': [{'ends_at': '2015-12-24',
'open_hours': ['07:30-11:30', '12:30-21:30'],
'starts_at': '2015-12-2'},
{'ends_at': '2015-01-03',
'open_hours': ['07:30-11:30'],
'starts_at': '2015-01-0'}],
'subscription_products': {'review_management': {'thiswillbedeleted': 'deleteme'}}}}
And these are the two objects I'm trying to compare:
alpha = {
'specific_hours': [
{
"starts_at": "2015-12-2",
"ends_at": "2015-12-24",
"open_hours": [
"07:30-11:30",
"12:30-21:30"
]
},
{
"starts_at": "2015-01-0",
"ends_at": "2015-01-03",
"open_hours": [
"07:30-11:30"
]
}
],
'subscription_products': {'presence_management':
{'expiration_date': 1953291600,
'payment_type': {
'free': 'iamfree',
'test': "test",
},
},
'review_management':
{'expiration_date': 1511799660,
'payment_type': {
'free': 'iamfree',
'test': "test",
},
'thiswillbedeleted': "deleteme",
}
},
}
beta = {
'specific_hours': [
{
"starts_at": "2015-12-22",
"ends_at": "2015-12-24",
"open_hours": [
"07:30-11:30",
"12:30-21:30"
]
},
{
"starts_at": "2015-01-0",
"ends_at": "2015-01-03",
"open_hours": [
"07:30-11:30"
]
}
],
'subscription_products': {'presence_management':
{'expiration_date': 1953291600,
'payment_type': {
'free': 'iamfree',
'test': "test",
},
},
'review_management':
{'expiration_date': 1511799660,
'payment_type': {
'free': 'iamfree',
'test': "test",
},
}
},
}
