Find a minimal value in each array in Python

Suppose I have the following data in Python 3.3:
my_array = [
    [{'man_id': 1, '_id': ObjectId('1234566'), 'type': 'worker', 'value': 11},
     {'man_id': 1, '_id': ObjectId('1234577'), 'type': 'worker', 'value': 12}],
    [{'man_id': 2, '_id': ObjectId('1234588'), 'type': 'worker', 'value': 11},
     {'man_id': 2, '_id': ObjectId('3243'), 'type': 'worker', 'value': 7},
     {'man_id': 2, '_id': ObjectId('54'), 'type': 'worker', 'value': 99},
     {'man_id': 2, '_id': ObjectId('9879878'), 'type': 'worker', 'value': 135}],
    # .............................
    [{'man_id': 13, '_id': ObjectId('111'), 'type': 'worker', 'value': 1},
     {'man_id': 13, '_id': ObjectId('222'), 'type': 'worker', 'value': 2},
     {'man_id': 13, '_id': ObjectId('3333'), 'type': 'worker', 'value': 9}]
]
There are 3 arrays. How do I find the element with the minimal value in each array?

[min(arr, key=lambda s:s['value']) for arr in my_array]
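For the sample data above, this one-liner picks the dictionary with the smallest 'value' from each inner list: the entries with values 11, 7 and 1 for man_id 1, 2 and 13 respectively. A minimal runnable sketch of the idea, with the dicts simplified so it runs without ObjectId:
my_array = [
    [{'man_id': 1, 'value': 11}, {'man_id': 1, 'value': 12}],
    [{'man_id': 2, 'value': 11}, {'man_id': 2, 'value': 7},
     {'man_id': 2, 'value': 99}, {'man_id': 2, 'value': 135}],
    [{'man_id': 13, 'value': 1}, {'man_id': 13, 'value': 2},
     {'man_id': 13, 'value': 9}],
]
print([min(arr, key=lambda s: s['value']) for arr in my_array])
# [{'man_id': 1, 'value': 11}, {'man_id': 2, 'value': 7}, {'man_id': 13, 'value': 1}]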

Maybe something like this is acceptable for you:
for arr in my_array:
    minVal = min(row['value'] for row in arr)
    print([row for row in arr if row['value'] == minVal])

Related

Python: Sort in descending order and keep only the objects with the 3 highest values [duplicate]

I have a list of objects like this, not sorted by value. I want it sorted in descending order, keeping just the objects with the 3 highest values:
[{'id': 1, 'value': 3},
{'id': 2, 'value': 6},
{'id': 3, 'value': 8},
{'id': 4, 'value': 8},
{'id': 5, 'value': 10},
{'id': 6, 'value': 9},
{'id': 7, 'value': 8},
{'id': 8, 'value': 4},
{'id': 9, 'value': 5}]
I want the result in descending order, with just the objects that have the 3 highest values (ties included), like this:
[{'id': 5, 'value': 10},
{'id': 6, 'value': 9},
{'id': 7, 'value': 8},
{'id': 3, 'value': 8},
{'id': 4, 'value': 8},]
Please help me, thanks
t = [{'id': 1, 'value': 3},
{'id': 2, 'value': 6},
{'id': 3, 'value': 8},
{'id': 4, 'value': 8},
{'id': 5, 'value': 10},
{'id': 6, 'value': 9},
{'id': 7, 'value': 8}]
newlist = sorted(t, key=lambda d: d['value'])
newlist.reverse()
print(newlist[:3])
# [{'id': 5, 'value': 10}, {'id': 6, 'value': 9}, {'id': 7, 'value': 8}]
More info: see the Python documentation on list slicing and list.reverse().
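An equivalent, slightly more direct sketch sorts in descending order in one step, or uses heapq.nlargest when you only need the top few items; this is an alternative to the answer above, not the original code:
import heapq

t = [{'id': 1, 'value': 3}, {'id': 2, 'value': 6}, {'id': 3, 'value': 8},
     {'id': 4, 'value': 8}, {'id': 5, 'value': 10}, {'id': 6, 'value': 9},
     {'id': 7, 'value': 8}]

# Sort descending in one call instead of sort + reverse.
top3 = sorted(t, key=lambda d: d['value'], reverse=True)[:3]

# Or avoid sorting the whole list when only the top few items are needed.
top3_heap = heapq.nlargest(3, t, key=lambda d: d['value'])

print(top3)       # [{'id': 5, 'value': 10}, {'id': 6, 'value': 9}, {'id': 3, 'value': 8}]
print(top3_heap)  # same three values; the order of the tied 8s may differ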

How to convert dictionary keys into values?

I have a question about converting keys.
First, I have this kind of word count (taken from a DataFrame).
[Example]
dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}
I want to get this result.
[Result]
result = {'name': 'forest', 'value': 10,
'name': 'station', 'value': 3,
'name': 'office', 'value': 7,
'name': 'park', 'value': 2}
How can I do this?
As Rakesh said:
dict cannot have duplicate keys
The closest way to achieve what you want is to build something like this:
my_dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}
result = list(map(lambda x: {'name': x[0], 'value': x[1]}, my_dict.items()))
You will get
result = [
{'name': 'forest', 'value': 10},
{'name': 'station', 'value': 3},
{'name': 'office', 'value': 7},
{'name': 'park', 'value': 2},
]
As Rakesh said, you can't have duplicate keys in a dictionary.
You can simply try this:
my_dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}  # avoid shadowing the built-in dict
result = {}
count = 0
for key in my_dict:
    result[count] = {'name': key, 'value': my_dict[key]}
    count = count + 1
print(result)
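A more compact sketch of the same idea, using enumerate() to number the entries instead of a manual counter (assuming the my_dict above):
my_dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}
result = {i: {'name': k, 'value': v} for i, (k, v) in enumerate(my_dict.items())}
print(result)
# {0: {'name': 'forest', 'value': 10}, 1: {'name': 'station', 'value': 3},
#  2: {'name': 'office', 'value': 7}, 3: {'name': 'park', 'value': 2}}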

How to explode a pandas column whose rows mix a dict and a list of dicts

I have a pandas DataFrame column with a mixed set of values: the first one is a list (array) of dicts, while the other elements are not lists:
>>> df_3['integration-outbound:IntegrationEntity.integrationEntityDetails.supplier.forms.form.records.record']
0 [{'Internalid': '24348', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3127'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4434'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5545'}]}}, {'Internalid': '24349', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3125'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4268'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5418'}]}}, {'Internalid': '24350', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3122'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel425'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5221'}]}}]
0 {'isDelete': 'false', 'fields': {'field': [{'id': 'S_EAST', 'value': 'N'}, {'id': 'W_EST', 'value': 'N'}, {'id': 'M_WEST', 'value': 'N'}, {'id': 'N_EAST', 'value': 'N'}, {'id': 'LOW_AREYOU_ASSET', 'value': '-1'}, {'id': 'LOW_SWART_PROG', 'value': '-1'}]}}
0 {'isDelete': 'false', 'fields': {'field': {'id': 'LOW_COD_CONDUCT', 'value': '-1'}}}
0 {'isDelete': 'false', 'fields': {'field': [{'id': 'LOW_SUPPLIER_TYPE', 'value': '2'}, {'id': 'LOW_DO_INT_BOTH', 'value': '1'}]}}
I want to explode this into multiple rows. The first row is a list and the other rows are not.
>>> type(df_3)
<class 'pandas.core.frame.DataFrame'>
>>> type(df_3['integration-outbound:IntegrationEntity.integrationEntityDetails.supplier.forms.form.records.record'])
<class 'pandas.core.series.Series'>
Expected output -
{'Internalid': '24348', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3127'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4434'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5545'}]}}
{'Internalid': '24349', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3125'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4268'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5418'}]}}
{'Internalid': '24350', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3122'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel425'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5221'}]}}
{'isDelete': 'false', 'fields': {'field': [{'id': 'S_EAST', 'value': 'N'}, {'id': 'W_EST', 'value': 'N'}, {'id': 'M_WEST', 'value': 'N'}, {'id': 'N_EAST', 'value': 'N'}, {'id': 'LOW_AREYOU_ASSET', 'value': '-1'}, {'id': 'LOW_SWART_PROG', 'value': '-1'}]}}
{'isDelete': 'false', 'fields': {'field': {'id': 'LOW_COD_CONDUCT', 'value': '-1'}}}
{'isDelete': 'false', 'fields': {'field': [{'id': 'LOW_SUPPLIER_TYPE', 'value': '2'}, {'id': 'LOW_DO_INT_BOTH', 'value': '1'}]}}
I tried to explode this column:
>>> df_3.explode('integration-outbound:IntegrationEntity.integrationEntityDetails.supplier.forms.form.records.record')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib64/python3.6/site-packages/pandas/core/frame.py", line 6318, in explode
result = df[column].explode()
File "/usr/local/lib64/python3.6/site-packages/pandas/core/series.py", line 3504, in explode
values, counts = reshape.explode(np.asarray(self.array))
File "pandas/_libs/reshape.pyx", line 129, in pandas._libs.reshape.explode
KeyError: 0
I could run through each row, check whether its value is a list, and handle it accordingly, but that doesn't seem right:
if str(type(df_3.loc[i, '{}'.format(c)])) == "<class 'list'>":
Is there any way we can use an explode function on this kind of data?
An alternative way, using pandas-read-xml:
from pandas_read_xml import flatten, fully_flatten
df = flatten(df)
I was able to do it, but the exploded rows all end up sorted to the top of the DataFrame (in case there are more list-type objects in lower rows).
pd.concat((df.iloc[[type(item) == list for item in df['Column']]].explode('Column'),
           df.iloc[[type(item) != list for item in df['Column']]]))
It essentially does what you've said: check whether the object's type is list and, if so, explode. Then concatenate the exploded rows with the rest of the data (i.e. the non-lists). Performance doesn't seem to suffer much on longer DataFrames.
Output:
Column
0 {'Internalid': '24348', 'isDelete': 'false', '...
0 {'Internalid': '24349', 'isDelete': 'false', '...
0 {'Internalid': '24350', 'isDelete': 'false', '...
1 {'isDelete': 'false', 'fields': {'field': [{'i...
2 {'isDelete': 'false', 'fields': {'field': {'id...
3 {'isDelete': 'false', 'fields': {'field': [{'i...
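A hedged alternative sketch: normalise the column first by wrapping every non-list value in a one-element list, so a single explode() handles all rows uniformly (this assumes the column is named 'Column', as in the answer above):
import pandas as pd

# Wrap scalar/dict values in one-element lists so every row is a list, then explode once.
df['Column'] = df['Column'].apply(lambda v: v if isinstance(v, list) else [v])
df = df.explode('Column').reset_index(drop=True)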

Converting a RESTful API response from list to dictionary

Currently I have a function (shown below) that makes a GET request to an API that I made myself:
def get_vehicles(self):
    result = "http://127.0.0.1:8000/vehicles"
    response = requests.get(result)
    data = response.content
    data_dict = json.loads(data)
    return data_dict
The data I got is in this format, which is a list of dictionaries:
data_dict = [{'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'}]
Is it possible to convert it to just a dictionary? Example:
data_dict = {'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'}
No, the second result is a tuple, not a dict.
data_dict = {'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'}
print(type(data_dict))
# <class 'tuple'>
It is the same as:
data_dict = ({'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'})
That's why it is a tuple.
If you want to merge them into a single dict, that is impossible because a dict cannot have duplicate keys. But you could merge the values into lists, like:
d = {key: list(value) for key, value in zip(data_dict[0].keys(), zip(data_dict[0].values(), data_dict[1].values()))}
print(d)
Result (make sure the dicts have the same keys in the same order):
{
'colour': ['Black', 'White'],
'cost': [10, 15],
'latitude': [-37.806152, -37.803913],
'longitude': [144.95787, 144.964859],
'rentalStatus': ['True', 'False'],
'seats': [4, 4],
'user': [None, {
'firstname': 'Test',
'imageName': None,
'password': 'password',
'surname': 'Ing',
'userID': 15,
'username': 'Testing'
}],
'vehicleBrand': ['Toyota', 'Honda'],
'vehicleID': [1, 3],
'vehicleModel': ['Altis', 'Civic']
}
This is a list of dictionaries.
Therefore you can access the elements using index syntax: data_dict[0] for the first element, for example.
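If the goal is a single dict rather than a list, a common sketch is to key the records by a field that is unique per record; here 'vehicleID' is assumed to be unique for each vehicle:
# Assumes data_dict is the list of dicts shown in the question.
vehicles_by_id = {v['vehicleID']: v for v in data_dict}
print(vehicles_by_id[1]['vehicleModel'])  # 'Altis'
print(vehicles_by_id[3]['vehicleBrand'])  # 'Honda'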

Merging arrays of versioned dictionaries

Given the following two arrays of dictionaries, how can I merge them such that the resulting array of dictionaries contains only those dictionaries whose version is greatest?
data1 = [{'id': 1, 'name': u'Oneeee', 'version': 2},
{'id': 2, 'name': u'Two', 'version': 1},
{'id': 3, 'name': u'Three', 'version': 2},
{'id': 4, 'name': u'Four', 'version': 1},
{'id': 5, 'name': u'Five', 'version': 1}]
data2 = [{'id': 1, 'name': u'One', 'version': 1},
{'id': 2, 'name': u'Two', 'version': 1},
{'id': 3, 'name': u'Threeee', 'version': 3},
{'id': 6, 'name': u'Six', 'version': 2}]
The merged result should look like this:
data3 = [{'id': 1, 'name': u'Oneeee', 'version': 2},
{'id': 2, 'name': u'Two', 'version': 1},
{'id': 3, 'name': u'Threeee', 'version': 3},
{'id': 4, 'name': u'Four', 'version': 1},
{'id': 5, 'name': u'Five', 'version': 1},
{'id': 6, 'name': u'Six', 'version': 2}]
If you want to keep, for each id, the dictionary with the highest version, you can use the itertools.groupby method like this:
import itertools

sdata = sorted(data1 + data2, key=lambda x: x['id'])
res = []
for _, v in itertools.groupby(sdata, key=lambda x: x['id']):
    v = list(v)
    if len(v) > 1:  # the same id appeared in both lists
        # append the one with the higher version
        res.append(v[0] if v[0]['version'] > v[1]['version'] else v[1])
    else:  # the id was in only one of the two lists
        res.append(v[0])
The solution is not a one-liner, but I think it is simple enough once you understand groupby(), which is not trivial: it only groups consecutive items with equal keys, which is why the combined list is sorted by id first.
This will result in res containing this list:
[{'id': 1, 'name': u'Oneeee', 'version': 2},
{'id': 2, 'name': u'Two', 'version': 1},
{'id': 3, 'name': u'Threeee', 'version': 3},
{'id': 4, 'name': u'Four', 'version': 1},
{'id': 5, 'name': u'Five', 'version': 1},
{'id': 6, 'name': u'Six', 'version': 2}]
I think it is possible to shrink the solution even more, but it could become quite hard to understand.
Hope this helps!
A fairly straightforward procedural solution, where we build a dictionary keyed by item id and then replace an item whenever a higher version appears:
indexed_data = {item['id']: item for item in data1}
# or, pre-Python 2.7:
# indexed_data = dict((item['id'], item) for item in data1)

for item in data2:
    if indexed_data.get(item['id'], {'version': float('-inf')})['version'] < item['version']:
        indexed_data[item['id']] = item

data3 = [item for (_, item) in sorted(indexed_data.items())]
The same thing, but using a more functional approach:
sorted_items = sorted(data1 + data2, key=lambda item: (item['id'], item['version']))
merged = { item['id']: item for item in sorted_items }
# or, pre-Python2.7:
# merged = dict((item['id'], item) for item in sorted_items )
data3 = [item for (_, item) in sorted(merged.items())]
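As a quick sanity check (illustrative only, assuming data1 and data2 as defined in the question), both approaches produce the expected merge, ordered by id:
# data3 should match the expected result from the question.
assert [d['id'] for d in data3] == [1, 2, 3, 4, 5, 6]
assert [d['version'] for d in data3] == [2, 1, 3, 1, 1, 2]
assert data3[2]['name'] == u'Threeee'  # id 3 keeps the higher-versioned name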
