compare and complete lists with each other - python

I have a tricky task here. I want to compare x number of lists in a list of lists, where each list contains dictionaries. I want to compare the dictionaries in these lists based on the 'name' key: if a name is present in a list, nothing needs to happen; if it is missing, the whole dictionary should be copied to the lists that don't have it, with its 'balance' value changed to 0.
For example let's assume we have list of lists like this :
list_of_lists = [[{'name': u'Profit', 'balance': 10}, {'name': u'Income', 'balance': 30}, {'name': u'NotIncome', 'balance': 15}],
                 [{'name': u'Profit', 'balance': 20}, {'name': u'Income', 'balance': 10}]]
So the result should be :
list_of_lists = [[{'name': u'Profit', 'balance': 10}, {'name': u'Income', 'balance': 30}, {'name': u'NotIncome', 'balance': 15}],
                 [{'name': u'Profit', 'balance': 20}, {'name': u'Income', 'balance': 10}, {'name': u'NotIncome', 'balance': 0}]]
Here is my code, but I can't get it to work with 2 or more lists (I don't know the number of lists in the list of lists; it may be 2, 3, 4, etc.):
for line in lines:
    for d1, d2 in zip(line[0], line[1]):
        for key, value in d1.items():
            if value != d2[key]:
                print key, value, d2[key]

You could first create a set containing all the names and then iterate over the sublists one by one, adding the missing dicts:
import pprint

l = [
    [
        {'name': u'Profit', 'balance': 10},
        {'name': u'Income', 'balance': 30},
        {'name': u'NotIncome', 'balance': 15}
    ],
    [
        {'name': u'Profit', 'balance': 20},
        {'name': u'Income', 'balance': 10}
    ],
    []
]

all_names = {d['name'] for x in l for d in x}
for sub_list in l:
    for name in (all_names - {d['name'] for d in sub_list}):
        sub_list.append({'name': name, 'balance': 0})
pprint.pprint(l)
Output:
[[{'balance': 10, 'name': u'Profit'},
  {'balance': 30, 'name': u'Income'},
  {'balance': 15, 'name': u'NotIncome'}],
 [{'balance': 20, 'name': u'Profit'},
  {'balance': 10, 'name': u'Income'},
  {'balance': 0, 'name': u'NotIncome'}],
 [{'balance': 0, 'name': u'Profit'},
  {'balance': 0, 'name': u'Income'},
  {'balance': 0, 'name': u'NotIncome'}]]
That said, you should consider converting the sublists to dicts whose keys are the names and whose values are the balances, to ease the processing.
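For example, a minimal sketch of that conversion (building on the l defined above; the variable names are illustrative):
# Convert each sublist to a single {name: balance} dict
as_dicts = [{d['name']: d['balance'] for d in sub_list} for sub_list in l]

# Filling in the missing names then becomes a simple membership check
all_names = {name for sub in as_dicts for name in sub}
for sub in as_dicts:
    for name in all_names - set(sub):
        sub[name] = 0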

Related

Merging two lists of dicts with different keys efficiently

I've got two lists:
lst1 = [{"name": "Hanna", "age":3},
{"name": "Kris", "age": 18},
{"name":"Dom", "age": 15},
{"name":"Tom", "age": 5}]
and the second one contains some of the above name values under a different key:
lst2 = [{"username": "Kris", "Town": "Big City"},
        {"username": "Dom", "Town": "NYC"}]
I would like to merge them with result:
lst = [{"name": "Hanna", "age": 3},
       {"name": "Kris", "age": 18, "Town": "Big City"},
       {"name": "Dom", "age": 15, "Town": "NYC"},
       {"name": "Tom", "age": 5}]
The easiest way is to go one by one (for each element of lst1, check whether its name exists in lst2), but for big lists this is quite inefficient (my lists have a few hundred elements each). What is the most efficient way to achieve this?
To avoid iterating over another list again and again, you can build a name index first.
lst1 = [{"name": "Hanna", "age": 3},
        {"name": "Kris", "age": 18},
        {"name": "Dom", "age": 15},
        {"name": "Tom", "age": 5}]
lst2 = [{"username": "Kris", "Town": "Big City"},
        {"username": "Dom", "Town": "NYC"}]

name_index = {dic['username']: idx for idx, dic in enumerate(lst2) if dic.get('username')}
for dic in lst1:
    name = dic.get('name')
    if name in name_index:
        dic.update(lst2[name_index[name]])  # update in-place to further save time
        dic.pop('username')
print(lst1)
One way to make this a lot more efficient is to create an intermediate dictionary from lst1 with name as the key, so that you're searching a dictionary rather than a list.
d1 = {elem['name']: dict(elem) for elem in lst1}
for elem in lst2:
    d1[elem['username']].update({k: v for k, v in elem.items() if k != 'username'})
lst = list(d1.values())
Output:
[{'name': 'Hanna', 'age': 3}, {'name': 'Kris', 'age': 18, 'Town': 'Big City'}, {'name': 'Dom', 'age': 15, 'Town': 'NYC'}, {'name': 'Tom', 'age': 5}]
Use the zip function to pair both lists. Both lists must be ordered by a shared criterion; here you must sort by the name and username keys, because those values are the condition for the updating action, which is why sorted is used with the key parameter. It is important to sort both lists so that matching entries line up.
Finally, lst2 needs a little extra preparation: I expand it to compensate for the length difference with lst1, which is what lst2 * abs(len(lst1) - len(lst2)) does. You then iterate only once over a single zip object, so this could be a reasonable fit for your requirements.
for d1, d2 in zip(sorted(lst1, key=lambda d1: d1['name']),
                  sorted(lst2 * abs(len(lst1) - len(lst2)), key=lambda d2: d2['username'])):
    if d1['name'] == d2['username']:
        d1.update(d2)
        # Just delete the username
        del d1['username']
print(lst1)
Output:
[{'name': 'Hanna', 'age': 3}, {'name': 'Kris', 'age': 18, 'Town': 'Big City'}, {'name': 'Dom', 'age': 15, 'Town': 'NYC'}, {'name': 'Tom', 'age': 5}]

How to ignore a single/multiple keys of all the dictionaries while looping over a list of dictionaries?

I am looping over a list of dictionaries and I have to drop/ignore one or more keys of each dictionary in the list before writing it to MongoDB. What is an efficient, Pythonic way of doing this?
Example:
employees = [
    {'name': "Tom", 'age': 10, 'salary': 10000, 'floor': 10},
    {'name': "Mark", 'age': 5, 'salary': 12000, 'floor': 11},
    {'name': "Pam", 'age': 7, 'salary': 9500, 'floor': 9}
]
Let's say I want to drop key = 'floor' or keys = ['floor', 'salary'].
Currently I am using del d['floor'] inside the loop to delete the key and my_collection.insert_one() to simply write the dictionary into my MongoDB.
My code:
for d in employees:
    del d['floor']
    my_collection.insert_one(d)
The solution you proposed is about as efficient as you can get, since you have no control over what happens inside the insert_one method.
If you have more keys, just loop over them:
ignored_keys = ['floor', 'salary']
for d in employees:
    for k in ignored_keys:
        del d[k]
    my_collection.insert_one(d)
Let's say you want to drop keys = ['floor', 'salary']. You can try:
exclude_keys = ['salary', 'floor']
for d in employees:
    my_collection.insert_one({k: d[k] for k in set(d) - set(exclude_keys)})

python, map name from a list to a list of dict

I have the following list and list of dicts:
data = [dict(position=1, value=150.3),
        dict(position=0, value=28.5),
        dict(position=2, value=1050.3)]
names = ["CL", "ES", "EUR"]
I would like to map the values of the list into the list of dicts so they match the value stated in the key "position" of the dict - to get the following result:
data = [dict(name="ES", position=1, value=150.3),
        dict(name="CL", position=0, value=28.5),
        dict(name="EUR", position=2, value=1050.3)]
Is there any "smart" and pythonic way to achieve that?
First of all, please present your question in actual Python form. dict is a type; the way you represent dictionaries in Python is with {}.
Also, you don't have "a dict and list", you have two lists, one of which consists of three dictionaries. So:
data = [
    {'position': 1, 'value': 150.3},
    {'position': 0, 'value': 28.5},
    {'position': 2, 'value': 1050.3}
]
names = ["CL", "ES", "EUR"]
So, given that you do have lists, there is no concern about ordering. A simple loop will give you what you want:
for d in data:
    d['name'] = names[d['position']]
This updates data in place:
>>> data
[{'position': 1, 'name': 'ES', 'value': 150.3}, {'position': 0, 'name': 'CL', 'value': 28.5}, {'position': 2, 'name': 'EUR', 'value': 1050.3}]
You can use a list comprehension and dictionary update:
data = [dict(position=2, value=150.3),
        dict(position=1, value=28.5),
        dict(position=3, value=1050.3)]
names = ['CL', 'ES', 'EUR']

# Sort names according to the (1-based) "position" value of each dictionary
sorted_names = [names[idx - 1] for idx in map(lambda x: x['position'], data)]

# update modifies in place
_ = [data[idx].update({'name': el}) for idx, el in enumerate(sorted_names)]
Which gives the expected output:
data
[{'name': 'ES', 'position': 2, 'value': 150.3},
 {'name': 'CL', 'position': 1, 'value': 28.5},
 {'name': 'EUR', 'position': 3, 'value': 1050.3}]
You could try:
data = [{"position": 2, "value": 150},
{"position": 1, "value": 200}]
names = ["CL", "ES"]
for item in data:
item["name"] = names[item["pos"] - 1]
Here we go through all the dictionaries in the list, and for each dictionary we set the "name" key to the value in names at the position given by item["position"] minus 1.
This of course assumes your data is clean and all items in data correctly map to items in names.
If this is not the case, use a try-except:
for item in data:
    try:
        item["name"] = names[item["position"] - 1]
    except IndexError:
        item["name"] = None
You can also use the update method on the dictionary elements in the list, if you like the keyword-argument convention, as the style of your question suggests.
for item in data:
    item.update(name=names[item["position"]])
A one-liner implementation using a list comprehension (Python 2, where dict.items() returns a list):
print [dict(d.items()+[('name',names[d['position']-1])]) for d in data]

Get a unique set of dicts [duplicate]

Let's say I have a list of dictionaries:
[
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]
How can I obtain a list of unique dictionaries (removing the duplicates)?
[
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]
So make a temporary dict with the id as the key; this filters out the duplicates. The values() of that dict will be the deduplicated list.
In Python2.7
>>> L=[
... {'id':1,'name':'john', 'age':34},
... {'id':1,'name':'john', 'age':34},
... {'id':2,'name':'hanna', 'age':30},
... ]
>>> {v['id']:v for v in L}.values()
[{'age': 34, 'id': 1, 'name': 'john'}, {'age': 30, 'id': 2, 'name': 'hanna'}]
In Python3
>>> L=[
... {'id':1,'name':'john', 'age':34},
... {'id':1,'name':'john', 'age':34},
... {'id':2,'name':'hanna', 'age':30},
... ]
>>> list({v['id']:v for v in L}.values())
[{'age': 34, 'id': 1, 'name': 'john'}, {'age': 30, 'id': 2, 'name': 'hanna'}]
In Python2.5/2.6
>>> L=[
... {'id':1,'name':'john', 'age':34},
... {'id':1,'name':'john', 'age':34},
... {'id':2,'name':'hanna', 'age':30},
... ]
>>> dict((v['id'],v) for v in L).values()
[{'age': 34, 'id': 1, 'name': 'john'}, {'age': 30, 'id': 2, 'name': 'hanna'}]
The usual way to remove duplicates is to use Python's set class: just add all the elements to a set, then convert the set to a list, and bam, the duplicates are gone.
The problem, of course, is that a set() can only contain hashable entries, and a dict is not hashable.
If I had this problem, my solution would be to convert each dict into a string that represents the dict, then add all the strings to a set() then read out the string values as a list() and convert back to dict.
A good representation of a dict in string form is JSON format. And Python has a built-in module for JSON (called json of course).
The remaining problem is that the elements in a dict are not ordered, and when Python converts the dict to a JSON string, you might get two JSON strings that represent equivalent dictionaries but are not identical strings. The easy solution is to pass the argument sort_keys=True when you call json.dumps().
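A minimal sketch of the approach described above (assuming every key and value is JSON-serializable):
import json

L = [{'id': 1, 'name': 'john', 'age': 34},
     {'id': 1, 'name': 'john', 'age': 34},
     {'id': 2, 'name': 'hanna', 'age': 30}]

# sort_keys=True gives equivalent dicts identical string forms,
# so the set can spot the duplicates
unique_dicts = [json.loads(s) for s in {json.dumps(d, sort_keys=True) for d in L}]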
EDIT: This solution was assuming that a given dict could have any part different. If we can assume that every dict with the same "id" value will match every other dict with the same "id" value, then this is overkill; #gnibbler's solution would be faster and easier.
EDIT: Now there is a comment from André Lima explicitly saying that if the ID is a duplicate, it's safe to assume that the whole dict is a duplicate. So this answer is overkill and I recommend #gnibbler's answer.
In case the dictionaries are only uniquely identified by all their items (no ID is available), you can use the JSON-based answer. The following is an alternative that does not use JSON and works as long as all dictionary values are hashable:
[dict(s) for s in set(frozenset(d.items()) for d in L)]
Here's a reasonably compact solution, though I suspect not particularly efficient (to put it mildly):
>>> ds = [{'id':1,'name':'john', 'age':34},
... {'id':1,'name':'john', 'age':34},
... {'id':2,'name':'hanna', 'age':30}
... ]
>>> map(dict, set(tuple(sorted(d.items())) for d in ds))
[{'age': 30, 'id': 2, 'name': 'hanna'}, {'age': 34, 'id': 1, 'name': 'john'}]
You can use the numpy library (this works for Python 2.x only):
import numpy as np

list_of_unique_dicts = list(np.unique(np.array(list_of_dicts)))
To get it working with Python 3.x (and recent versions of numpy), you need to convert the array of dicts to a numpy array of strings, e.g.
list_of_unique_dicts = list(np.unique(np.array(list_of_dicts).astype(str)))
a = [
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]
b = {x['id']: x for x in a}.values()
print(b)
outputs:
[{'age': 34, 'id': 1, 'name': 'john'}, {'age': 30, 'id': 2, 'name': 'hanna'}]
Since the id is sufficient for detecting duplicates, and the id is hashable: run 'em through a dictionary that has the id as the key. The value for each key is the original dictionary.
deduped_dicts = dict((item["id"], item) for item in list_of_dicts).values()
In Python 3, values() doesn't return a list; you'll need to wrap the whole right-hand-side of that expression in list(), and you can write the meat of the expression more economically as a dict comprehension:
deduped_dicts = list({item["id"]: item for item in list_of_dicts}.values())
Note that the result likely will not be in the same order as the original. If that's a requirement, you could use a collections.OrderedDict instead of a plain dict.
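For instance, a sketch of the ordered variant (on Python 3.7+ a plain dict already preserves insertion order, so this mainly matters on older versions):
from collections import OrderedDict

# Keyed by id; a later duplicate overwrites the value but keeps the original position
deduped_dicts = list(OrderedDict((item["id"], item) for item in list_of_dicts).values())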
As an aside, it may make a good deal of sense to just keep the data in a dictionary that uses the id as key to begin with.
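A minimal sketch of that idea, reusing the list_of_dicts name from above:
# Keep the data keyed by id from the start; a duplicate id simply overwrites its entry
people_by_id = {item["id"]: item for item in list_of_dicts}

# A plain list is still available whenever you need one
deduped_dicts = list(people_by_id.values())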
We can do this with pandas:
import pandas as pd

yourdict = pd.DataFrame(L).drop_duplicates().to_dict('records')
Out[293]: [{'age': 34, 'id': 1, 'name': 'john'}, {'age': 30, 'id': 2, 'name': 'hanna'}]
Notice this is slightly different from the accepted answer: drop_duplicates checks all columns, and a row is only dropped if they all match. For example, if we change the name in the 2nd dict from john to peter:
L = [
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'peter', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]
pd.DataFrame(L).drop_duplicates().to_dict('records')
Out[295]:
[{'age': 34, 'id': 1, 'name': 'john'},
 {'age': 34, 'id': 1, 'name': 'peter'},  # this dict is still kept in the output
 {'age': 30, 'id': 2, 'name': 'hanna'}]
There are a lot of answers here, so let me add another:
import json
from typing import List

def dedup_dicts(items: List[dict]):
    dedupped = [json.loads(i) for i in set(json.dumps(item, sort_keys=True) for item in items)]
    return dedupped

items = [
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]
dedup_dicts(items)
I have summarized my favorites to try out:
https://repl.it/#SmaMa/Python-List-of-unique-dictionaries
# ----------------------------------------------
# Setup
# ----------------------------------------------
myList = [
    {"id": "1", "lala": "value_1"},
    {"id": "2", "lala": "value_2"},
    {"id": "2", "lala": "value_2"},
    {"id": "3", "lala": "value_3"}
]
print("myList:", myList)

# -----------------------------------------------
# Option 1: if objects have a unique identifier
# -----------------------------------------------
myUniqueList = list({myObject['id']: myObject for myObject in myList}.values())
print("myUniqueList:", myUniqueList)

# -----------------------------------------------
# Option 2: if uniquely identified by the whole object
# -----------------------------------------------
myUniqueSet = [dict(s) for s in set(frozenset(myObject.items()) for myObject in myList)]
print("myUniqueSet:", myUniqueSet)

# -----------------------------------------------
# Option 3: for hashable objects (not dicts)
# -----------------------------------------------
myHashableObjects = list(set(["1", "2", "2", "3"]))
print("myHashAbleList:", myHashableObjects)
In Python 3, a simple trick, but based on a unique field (id):
data = [{'id': 1}, {'id': 1}]
list({item['id']: item for item in data}.values())
I don't know if you only want the ids of the dicts in your list to be unique, but if the goal is a set of dicts where uniqueness is over all keys' values, you should use a tuple key in your comprehension:
>>> L=[
... {'id':1,'name':'john', 'age':34},
... {'id':1,'name':'john', 'age':34},
... {'id':2,'name':'hanna', 'age':30},
... {'id':2,'name':'hanna', 'age':50}
... ]
>>> len(L)
4
>>> L=list({(v['id'], v['age'], v['name']):v for v in L}.values())
>>> L
[{'id': 1, 'name': 'john', 'age': 34}, {'id': 2, 'name': 'hanna', 'age': 30}, {'id': 2, 'name': 'hanna', 'age': 50}]
>>> len(L)
3
Hope this helps you or anyone else with the same concern.
Expanding on John La Rooy's answer (Python - List of unique dictionaries), making it a bit more flexible:
def dedup_dict_list(list_of_dicts: list, columns: list) -> list:
    return list({''.join(row[column] for column in columns): row
                 for row in list_of_dicts}.values())
Calling the function:
sorted_list_of_dicts = dedup_dict_list(unsorted_list_of_dicts, ['id', 'name'])
If there is not a unique id in the dictionaries, then I'd keep it simple and define a function as follows:
def unique(sequence):
    result = []
    for item in sequence:
        if item not in result:
            result.append(item)
    return result
The advantage of this approach is that you can reuse the function for any comparable objects. It makes your code very readable, works in all modern versions of Python, and preserves the order of the dictionaries. Note, though, that the item not in result check is a linear scan, so for large lists the dict- or set-based alternatives will be faster.
>>> L = [
... {'id': 1, 'name': 'john', 'age': 34},
... {'id': 1, 'name': 'john', 'age': 34},
... {'id': 2, 'name': 'hanna', 'age': 30},
... ]
>>> unique(L)
[{'id': 1, 'name': 'john', 'age': 34}, {'id': 2, 'name': 'hanna', 'age': 30}]
In Python 3.6+ (which is what I've tested), just use:
import json

# Toy example, but will also work for your case
myListOfDicts = [{'a': 1, 'b': 2}, {'a': 1, 'b': 2}, {'a': 1, 'b': 3}]

# Start by sorting each dictionary by keys (dicts keep insertion order in 3.6+)
myListOfDictsSorted = [dict(sorted(d.items())) for d in myListOfDicts]

# Use json methods with set() to get the unique dicts
myListOfUniqueDicts = list(map(json.loads, set(map(json.dumps, myListOfDictsSorted))))
print(myListOfUniqueDicts)
Explanation: we map json.dumps over the list to encode each dictionary as a JSON string, which is immutable, so set can be used to produce an iterable of unique strings. Finally, we convert back to our dictionary representation using json.loads. Note that one must initially sort each dictionary by keys to put it into a canonical form; rebuilding the dict from the sorted items works because dictionaries preserve insertion order in Python 3.6+.
All the answers mentioned here are good, but some of them can raise an error if the dictionary items contain a nested list or dictionary, so I propose a simple answer based on str/eval (only use eval on data you trust):
a = [str(i) for i in a]
a = list(set(a))
a = [eval(i) for i in a]
Objects can be put into sets. You can work with objects instead of dicts and, if needed, convert back to a list of dicts after all the set insertions. For the set to detect duplicates, the class needs value-based __eq__ and __hash__ methods. Example:
class Person:
    def __init__(self, id, age, name):
        self.id = id
        self.age = age
        self.name = name

    # Value-based equality and hashing so the set can detect duplicates
    def __eq__(self, other):
        return (self.id, self.age, self.name) == (other.id, other.age, other.name)

    def __hash__(self):
        return hash((self.id, self.age, self.name))

my_set = {Person(id=2, age=3, name='Jhon')}
my_set.add(Person(id=3, age=34, name='Guy'))
my_set.add(Person(id=2, age=3, name='Jhon'))  # a duplicate, so the set ignores it

# if needed, convert to a list of dicts
list_of_dict = [{'id': obj.id,
                 'name': obj.name,
                 'age': obj.age} for obj in my_set]
A quick-and-dirty solution is just generating a new list:
deduped = []
for item in list_which_needs_deduping:
    if item not in deduped:
        deduped.append(item)
Let me add mine.
1. Sort each dict so that {'a': 1, 'b': 2} and {'b': 2, 'a': 1} are not treated differently.
2. Serialize it to JSON.
3. Deduplicate via set (as set does not work on dicts).
4. Turn each one back into a dict via json.loads.
import json
[json.loads(i) for i in set([json.dumps(i) for i in [dict(sorted(i.items())) for i in target_dict]])]
There may be more elegant solutions, but I thought it might be nice to add a more verbose solution to make it easier to follow. This assumes there is no unique key, you have a simple key/value structure, and you are using a version of Python that guarantees dict insertion order. This would work for the original post.
data_set = [
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]

# List of keys
keys = list(data_set[0])

# Create a list of lists of the values from the data set
data_set_list = [list(d.values()) for d in data_set]

# Dedupe
new_data_set = []
for lst in data_set_list:
    # Skip if this list of values already exists in the new data set
    if lst in new_data_set:
        continue
    # Add the list to the new data set
    new_data_set.append(lst)

# Recreate the dicts
new_data_set = [dict(zip(keys, lst)) for lst in new_data_set]
print(new_data_set)
Pretty straightforward option:
L = [
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]
D = dict()
for l in L:
    D[l['id']] = l
output = list(D.values())
print(output)
Here's an implementation with little memory overhead, at the cost of not being as compact as the rest.
values = [{'id': 2, 'name': 'hanna', 'age': 30},
          {'id': 1, 'name': 'john', 'age': 34},
          {'id': 1, 'name': 'john', 'age': 34},
          {'id': 2, 'name': 'hanna', 'age': 30},
          {'id': 1, 'name': 'john', 'age': 34}]

count = {}
index = 0
while index < len(values):
    if values[index]['id'] in count:
        del values[index]
    else:
        count[values[index]['id']] = 1
        index += 1
output:
[{'age': 30, 'id': 2, 'name': 'hanna'}, {'age': 34, 'id': 1, 'name': 'john'}]
This is the solution I found:
usedID = []

x = [
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 1, 'name': 'john', 'age': 34},
    {'id': 2, 'name': 'hanna', 'age': 30},
]

for each in x[:]:  # iterate over a copy, since we remove items from x
    if each['id'] in usedID:
        x.remove(each)
    else:
        usedID.append(each['id'])

print(x)
Basically, you check whether the ID is already present in the list; if it is, delete the dictionary, and if not, append the ID to the list. Note that the loop iterates over a copy (x[:]), since removing items from a list while iterating over it directly would skip elements.

generate list from values of certain field in list of objects

How would I generate a list of values of a certain field of objects in a list?
Given the list of objects:
[ {name: "Joe", group: 1}, {name: "Kirk", group: 2}, {name: "Bob", group: 1}]
I want to generate list of the name field values:
["Joe", "Kirk", "Bob"]
The built-in filter() function seems to come close, but it will return the entire objects themselves.
I'd like a clean, one-line solution such as:
filterLikeFunc(function(obj){return obj.name}, mylist)
Sorry, I know that's c syntax.
Just replace the filter built-in function with the map built-in function, and use the get method, which will not raise a KeyError when the key is absent, to fetch the value for the name key.
data = [{'name': "Joe", 'group': 1}, {'name': "Kirk", 'group': 2}, {'name': "Bob", 'group': 1}]
print map(lambda x: x.get('name'), data)
In Python 3.x
print(list(map(lambda x: x.get('name'), data)))
Results:
['Joe', 'Kirk', 'Bob']
Using a list comprehension:
print [each.get('name') for each in data]
Using a list comprehension approach you get:
objects = [{'group': 1, 'name': 'Joe'}, {'group': 2, 'name': 'Kirk'}, {'group': 1, 'name': 'Bob'}]
names = [i["name"] for i in objects]
For a good intro to list comprehensions, see https://docs.python.org/2/tutorial/datastructures.html
Just iterate over your list of dicts, pick out the name values, and put them in a list.
x = [{'name': "Joe", 'group': 1}, {'name': "Kirk", 'group': 2}, {'name': "Bob", 'group': 1}]
y = [d['name'] for d in x]
print(y)
