Append Dates in Chronological Order - python

This is the JSON:
[{'can_occur_before': False,
'categories': [{'id': 8, 'name': 'Airdrop'}],
'coins': [{'id': 'cashaa', 'name': 'Cashaa', 'symbol': 'CAS'}],
'created_date': '2018-05-26T03:34:05+01:00',
'date_event': '2018-06-05T00:00:00+01:00',
'title': 'Unsold Token Distribution',
'twitter_account': None,
'vote_count': 125},
{'can_occur_before': False,
'categories': [{'id': 4, 'name': 'Exchange'}],
'coins': [{'id': 'tron', 'name': 'TRON', 'symbol': 'TRX'}],
'created_date': '2018-06-04T03:54:59+01:00',
'date_event': '2018-06-05T00:00:00+01:00',
'title': 'Indodax Listing',
'twitter_account': '@PutraDwiJuliyan',
'vote_count': 75},
{'can_occur_before': False,
'categories': [{'id': 5, 'name': 'Conference'}],
'coins': [{'id': 'modum', 'name': 'Modum', 'symbol': 'MOD'}],
'created_date': '2018-05-26T03:18:03+01:00',
'date_event': '2018-06-05T00:00:00+01:00',
'title': 'SAPPHIRE NOW',
'twitter_account': None,
'vote_count': 27},
{'can_occur_before': False,
'categories': [{'id': 4, 'name': 'Exchange'}],
'coins': [{'id': 'apr-coin', 'name': 'APR Coin', 'symbol': 'APR'}],
'created_date': '2018-05-29T17:45:16+01:00',
'date_event': '2018-06-05T00:00:00+01:00',
'title': 'TopBTC Listing',
'twitter_account': '@cryptoalarm',
'vote_count': 23}]
I want to take all the date_event values and append them to a list in chronological order. I currently have this code and am not sure how to order them chronologically:
date = []
for i in getevents:
    date.append(i['date_event'][:10])
Thanks for any help!

A simple way is to compose a list and then apply the sort() method:
import json

data = json.load(open('filename.json', 'r'))
dates = [item['date_event'] for item in data]
dates.sort()
Using your example data with the field 'created_date' (the 'date_event' values are all the same) we get:
['2018-05-26T03:18:03+01:00',
'2018-05-26T03:34:05+01:00',
'2018-05-29T17:45:16+01:00',
'2018-06-04T03:54:59+01:00']
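Worth noting: because every timestamp in the sample carries the same +01:00 offset, these ISO-8601 strings sort correctly as plain strings, which is why sort() alone works here. A minimal sketch with the created_date values inlined:

```python
# ISO-8601 timestamps sort lexicographically when they all share one UTC offset
created = [
    '2018-05-26T03:34:05+01:00',
    '2018-06-04T03:54:59+01:00',
    '2018-05-26T03:18:03+01:00',
    '2018-05-29T17:45:16+01:00',
]
created.sort()
print(created[0])  # '2018-05-26T03:18:03+01:00'
```

If the offsets could differ, parse the strings into datetime objects first (see the next answer).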

First of all, the date_event values in your array of objects are all identical, so there is not much sense in sorting them. Also, your approach will not get you far: you need to convert the dates to native date/time objects so that you can sort them with a sort key.
The easiest way to parse properly formatted date/times is dateutil.parser.parse, and sorting an existing list is done with list.sort(). I made a quick example of how to use these tools, and changed the date_event values to showcase it: https://repl.it/repls/BogusSpecificRate
After you have decoded the JSON string (json.loads) and have a Python list to work with, you can proceed with sorting the list:
from dateutil import parser

# Ascending
events.sort(key=lambda e: parser.parse(e['date_event']))
print([":".join([e['title'], e['date_event']]) for e in events])
# Descending
events.sort(key=lambda e: parser.parse(e['date_event']), reverse=True)
print([":".join([e['title'], e['date_event']]) for e in events])
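If you'd rather skip the dateutil dependency, the standard library can parse these offset-aware strings too. A sketch assuming Python 3.7+, where datetime.fromisoformat understands the +01:00 suffix (titles taken from the question, dates changed so the order is visible):

```python
from datetime import datetime

events = [
    {'title': 'Indodax Listing', 'date_event': '2018-06-05T00:00:00+01:00'},
    {'title': 'Unsold Token Distribution', 'date_event': '2018-06-04T00:00:00+01:00'},
]

# Sort by the parsed, timezone-aware datetime rather than the raw string
events.sort(key=lambda e: datetime.fromisoformat(e['date_event']))
print([e['title'] for e in events])
# ['Unsold Token Distribution', 'Indodax Listing']
```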

Related

pandas split list like object

Hi I have this column of data named labels:
[{'id': 123456,
'name': John,
'age': 22,
'pet': None,
'gender': male,
'result': [{'id': 'vEo0PIYPEE',
'type': 'choices',
'value': {'choices': ['Same Person']},
'to_name': 'image',
'from_name': 'person_evaluation'}]}]
[{'id': 123457,
'name': May,
'age': 21,
'pet': None,
'gender': female,
'result': [{'id': zTHYuKIOQ',
'type': 'choices',
'value': {'choices': ['Different Person']},
'to_name': 'image',
'from_name': 'person_evaluation'}]}]
......
Not sure what type this is, and I would like to break it down to extract the value [Same Person]; the outcome should be something like this:
0 [Same Person]
1 [Different Person]
....
How should I achieve this?
Based on the limited data that you have provided, would this work?
df['labels_new'] = df['labels'].apply(lambda x: x[0].get('result')[0].get('value').get('choices'))
labels labels_new
0 [{'id': 123456, 'name': 'John', 'age': 22, 'pe... [Same Person]
1 [{'id': 123457, 'name': 'May', 'age': 21, 'pet... [Different Person]
You can use the following as well, but I find dict.get() to be more versatile (it can return default values, for example) and to handle missing keys more gracefully.
df['labels'].apply(lambda x: x[0]['result'][0]['value']['choices'])
You could consider using pd.json_normalize (read more here), but for the current state of your column it's going to be a bit complex to extract the data that way rather than simply using a lambda function.
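As a runnable sketch, here is that lambda against a hypothetical reconstruction of the labels column (only the keys the expression actually touches are included):

```python
import pandas as pd

# Hypothetical reconstruction of the 'labels' column from the question
df = pd.DataFrame({'labels': [
    [{'id': 123456, 'result': [{'type': 'choices',
                                'value': {'choices': ['Same Person']}}]}],
    [{'id': 123457, 'result': [{'type': 'choices',
                                'value': {'choices': ['Different Person']}}]}],
]})

df['labels_new'] = df['labels'].apply(
    lambda x: x[0].get('result')[0].get('value').get('choices'))
print(df['labels_new'].tolist())  # [['Same Person'], ['Different Person']]
```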

pandas create a one row dataframe from nested dict

I know "create pandas dataframe from nested dict" has a lot of entries here, but I haven't found an answer that applies to my problem:
I have a dict like this:
{'id': 1,
'creator_user_id': {'id': 12170254,
'name': 'Nicolas',
'email': 'some_mail@some_email_provider.com',
'has_pic': 0,
'pic_hash': None,
'active_flag': True,
'value': 12170254}....,
and after reading it with pandas it looks like this:
df = pd.DataFrame.from_dict(my_dict,orient='index')
print(df)
id 1
creator_user_id {'id': 12170254, 'name': 'Nicolas', 'email': '...
user_id {'id': 12264469, 'name': 'Daniela Giraldo G', ...
person_id {'active_flag': True, 'name': 'Cristina Cardoz...
org_id {'name': 'Cristina Cardozo', 'people_count': 1...
stage_id 2
title Cristina Cardozo
I would like to create a one-row dataframe where, for example, the nested creator_user_id column results in several columns that I after can name: creator_user_id_id, creator_user_id_name, etc.
thank you for your time!
Given you want one row, just use json_normalize()
pd.json_normalize({'id': 1,
'creator_user_id': {'id': 12170254,
'name': 'Nicolas',
'email': 'some_mail@some_email_provider.com',
'has_pic': 0,
'pic_hash': None,
'active_flag': True,
'value': 12170254}})
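One detail worth adding: json_normalize joins nested keys with a dot by default, and since the goal was names like creator_user_id_id, the sep parameter gives the underscore form directly. A sketch with a trimmed copy of the dict:

```python
import pandas as pd

d = {'id': 1,
     'creator_user_id': {'id': 12170254,
                         'name': 'Nicolas',
                         'active_flag': True}}

# sep='_' turns creator_user_id.id into creator_user_id_id, etc.
df = pd.json_normalize(d, sep='_')
print(sorted(df.columns))
# ['creator_user_id_active_flag', 'creator_user_id_id', 'creator_user_id_name', 'id']
```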

Pandas - Extracting values from a Dataframe column

I have a Dataframe in the below format:
cust_id, cust_details
101, [{'self': 'https://website.com/rest/api/2/customFieldOption/1', 'value': 'Type-A', 'id': '1'},
{'self': 'https://website.com/rest/api/2/customFieldOption/2', 'value': 'Type-B', 'id': '2'},
{'self': 'https://website.com/rest/api/2/customFieldOption/3', 'value': 'Type-C', 'id': '3'},
{'self': 'https://website.com/rest/api/2/customFieldOption/4', 'value': 'Type-D', 'id': '4'}]
102, [{'self': 'https://website.com/rest/api/2/customFieldOption/5', 'value': 'Type-X', 'id': '5'},
{'self': 'https://website.com/rest/api/2/customFieldOption/6', 'value': 'Type-Y', 'id': '6'}]
I am trying to extract, for every cust_id, all of the cust_details values.
Expected output:
cust_id, new_value
101,Type-A, Type-B, Type-C, Type-D
102,Type-X, Type-Y
Easy answer:
df['new_value'] = df.cust_details.apply(lambda ds: [d['value'] for d in ds])
More complex, potentially better answer:
Rather than storing lists of dictionaries in the first place, I'd recommend making each dictionary a row in the original dataframe.
df = pd.concat([
    df['cust_id'],
    pd.DataFrame(
        df['cust_details'].explode().values.tolist(),
        index=df['cust_details'].explode().index
    )
], axis=1)
If you need to group values by id, you can do so via standard groupby methods:
df.groupby('cust_id')['value'].apply(list)
This may seem more complex, but depending on your use case might save you effort in the long-run.
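Here is the "easy answer" as a self-contained sketch, with a trimmed version of the frame from the question (the 'self' URLs omitted since the expression never reads them):

```python
import pandas as pd

df = pd.DataFrame({
    'cust_id': [101, 102],
    'cust_details': [
        [{'value': 'Type-A', 'id': '1'}, {'value': 'Type-B', 'id': '2'}],
        [{'value': 'Type-X', 'id': '5'}],
    ],
})

# Pull every 'value' out of each row's list of dicts
df['new_value'] = df['cust_details'].apply(lambda ds: [d['value'] for d in ds])
print(df['new_value'].tolist())  # [['Type-A', 'Type-B'], ['Type-X']]
```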

Using Python module Glom, Extract Irregular Nested Lists into a Flattened List of Dictionaries

Glom makes accessing complex nested data structures easier.
https://github.com/mahmoud/glom
Given the following toy data structure:
target = [
{
'user_id': 198,
'id': 504508,
'first_name': 'John',
'last_name': 'Doe',
'active': True,
'email_address': 'jd@test.com',
'new_orders': False,
'addresses': [
{
'location': 'home',
'address': 300,
'street': 'Fulton Rd.'
}
]
},
{
'user_id': 209,
'id': 504508,
'first_name': 'Jane',
'last_name': 'Doe',
'active': True,
'email_address': 'jd@test.com',
'new_orders': True,
'addresses': [
{
'location': 'home',
'address': 251,
'street': 'Maverick Dr.'
},
{
'location': 'work',
'address': 4532,
'street': 'Fulton Cir.'
},
]
},
]
I am attempting to extract all address fields in the data structure into a flattened list of dictionaries.
from glom import glom
import pprint
"""
Purpose: Test the use of Glom
"""
# Create Glomspec
spec = [{'address': ('addresses', 'address') }]
# Glom the data
result = glom(target, spec)
# Display
pprint.pprint(result)
The above spec provides:
[
{'address': [300]},
{'address': [251]}
]
The desired result is:
[
{'address':300},
{'address':251},
{'address':4532}
]
What Glomspec will generate the desired result?
As of glom 19.1.0 you can use the Flatten() spec to succinctly get the results you want:
from glom import glom, Flatten
glom(target, (['addresses'], Flatten(), [{'address': 'address'}]))
# [{'address': 300}, {'address': 251}, {'address': 4532}]
And that's all there is to it!
You may also want to check out the convenient flatten() function, as well as the powerful Fold() spec, for all your flattening needs :)
Prior to 19.1.0, glom did not have first-class flattening or reduction (as in map-reduce) capabilities. But one workaround would have been to use Python's built-in sum() function to flatten the addresses:
>>> from glom import glom, T, Call # pre-19.1.0 solution
>>> glom(target, ([('addresses', [T])], Call(sum, args=(T, [])), [{'address': 'address'}]))
[{'address': 300}, {'address': 251}, {'address': 4532}]
Three steps:
Traverse the lists, as you had done.
Call sum on the resulting list, flattening/reducing it.
Filter down the items in the resulting list to only contain the 'address' key.
Note the usage of T, which represents the current target, sort of like a cursor.
Anyways, no need to do that anymore, in part due to this answer. So, thanks for the great question!
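For reference, the same flattening without glom is a nested comprehension over the plain data structure; a sketch against a trimmed copy of the target above:

```python
target = [
    {'user_id': 198, 'addresses': [{'location': 'home', 'address': 300}]},
    {'user_id': 209, 'addresses': [{'location': 'home', 'address': 251},
                                   {'location': 'work', 'address': 4532}]},
]

# Walk every user, then every address of that user, keeping only 'address'
result = [{'address': a['address']} for item in target for a in item['addresses']]
print(result)  # [{'address': 300}, {'address': 251}, {'address': 4532}]
```

The glom spec is more declarative, but the comprehension is handy when adding a dependency isn't an option.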

API Call - Multi dimensional nested dictionary to pandas data frame

I need your help converting a multidimensional dict to a pandas data frame. I get the dict from a JSON file which I retrieve via an API call (Shopify).
response = requests.get("URL", auth=("ID","KEY"))
data = json.loads(response.text)
The "data" dictionary looks as follows:
{'orders': [{'created_at': '2016-09-20T22:04:49+02:00',
'email': 'test@aol.com',
'id': 4314127108,
'line_items': [{'destination_location':
{'address1': 'Teststreet 12',
'address2': '',
'city': 'Berlin',
'country_code': 'DE',
'id': 2383331012,
'name': 'Test Test',
'zip': '10117'},
'gift_card': False,
'name': 'Blueberry Cup'}]
}]}
In this case the dictionary is nested four levels deep and I would like to convert it into a pandas data frame. I tried everything from json_normalize() to pandas.DataFrame.from_dict(), yet I did not manage to get anywhere: when I try to convert the dict to a df, I get columns which contain lists of lists.
Does anyone know how to approach that?
Thanks
EDITED:
Thank you @piRSquared. Your solution works fine! However, how would you solve it if there were another product in the order? Because then it does not work. The JSON response of an order with 2 products is as follows (the goal is to have a second row with the same "created_at", "email", etc. columns):
{'orders': [{'created_at': '2016-09-20T22:04:49+02:00',
'email': 'test@aol.com',
'id': 4314127108,
'line_items': [{'destination_location':
{'address1': 'Teststreet 12',
'address2': '',
'city': 'Berlin',
'country_code': 'DE',
'id': 2383331012,
'name': 'Test Test',
'zip': '10117'},
'gift_card': False,
'name': 'Blueberry Cup'},
{'destination_location':
{'address1': 'Teststreet 12',
'address2': '',
'city': 'Berlin',
'country_code': 'DE',
'id': 2383331012,
'name': 'Test Test',
'zip': '10117'},
'gift_card': False,
'name': 'Strawberry Cup'}]
}]}
So in the end the df should have one row per sold product. Thank you, I really appreciate your help!
There are a number of ways to do this. This is just a way I decided to do it. You need to explore how you want to see this represented, then figure out how to get there.
df = pd.DataFrame(data['orders'])
df1 = df.line_items.str[0].apply(pd.Series)
df2 = df1.destination_location.apply(pd.Series)
pd.concat([df.drop('line_items', axis=1), df1.drop('destination_location', axis=1), df2],
          axis=1, keys=['', 'line_items', 'destination_location'])
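For the two-product case raised in the edit, one option (not the original answerer's code) is to explode the line_items column (pandas 0.25+) and then json_normalize each item, giving one row per sold product with the order-level columns repeated; a sketch with a trimmed order dict:

```python
import pandas as pd

data = {'orders': [{'created_at': '2016-09-20T22:04:49+02:00',
                    'email': 'test@aol.com',
                    'id': 4314127108,
                    'line_items': [{'destination_location': {'city': 'Berlin', 'zip': '10117'},
                                    'gift_card': False,
                                    'name': 'Blueberry Cup'},
                                   {'destination_location': {'city': 'Berlin', 'zip': '10117'},
                                    'gift_card': False,
                                    'name': 'Strawberry Cup'}]}]}

# One row per line item, repeating the order-level columns on each row
df = pd.DataFrame(data['orders']).explode('line_items').reset_index(drop=True)
items = pd.json_normalize(df['line_items'].tolist())
out = pd.concat([df.drop(columns='line_items'), items], axis=1)
print(out['name'].tolist())  # ['Blueberry Cup', 'Strawberry Cup']
```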
