folium, plugins.TimestampedGeoJson, "Time not available" - python

I want to draw a timestamped GeoJSON layer using plugins.TimestampedGeoJson.
But the time bar in my folium map shows "Time not available" and doesn't work.
I cannot find which part of my variables is wrong.
The variables I used are as follows.
How can I make my time bar work?
points
[{'coordinates': [[37.4725028, 126.4313798],
[37.478525899999994, 126.6663152],
[37.561648999999996, 126.79433700000001]],
'popup': 1,
'time': '2020-01-19'},
{'coordinates': [[37.679657, 126.763737],
[37.4725028, 126.4313798],
[37.0796065, 127.0561043],
[37.0220402, 126.8134938],
[37.557176, 127.00768799999999]],
'popup': 1,
'time': '2020-01-20'},
{'coordinates': [[37.673517, 126.7553],
[37.670964, 126.761146],
[37.679657, 126.763737],
[37.520878, 127.02286299999999],
[37.524661, 127.028002],
[37.503137, 127.04909099999999],
[37.0520115, 126.91724440000002],
[37.463504, 126.65055600000001]],
'popup': 1,
'time': '2020-01-21'},
{'coordinates': [[37.560362700000006, 126.776299],
[37.567226899999994, 126.75337079999998],
[37.549605299999996, 126.86608829999999],
[37.567226899999994, 126.75337079999998],
[37.5672454, 127.00347020000001],
[37.524661, 127.028002],
[37.530772, 127.031924],
[37.503137, 127.04909099999999],
[37.0220402, 126.8134938],
[37.523118200000006, 127.03281489999999],
[37.555136, 126.97048899999999]],
'popup': 1,
'time': '2020-01-22'},
{'coordinates': [[37.524703, 127.015943],
[37.500735, 127.036373],
[37.494607, 127.06329199999999],
[37.503137, 127.04909099999999],
[37.0220402, 126.8134938],
[37.483702, 126.77811299999999]],
'popup': 1,
'time': '2020-01-23'},
{'coordinates': [[37.524661, 127.028002],
[37.658513, 126.832025],
[37.674671999999994, 126.776701],
[37.678166, 126.812165],
[37.0220402, 126.8134938],
[37.5616902, 126.97456809999998],
[37.266184, 126.999655],
[37.263417, 127.028654],
[37.361576, 126.935174]],
'popup': 1,
'time': '2020-01-24'},
{'coordinates': [[37.679657, 126.763737],
[37.642371999999995, 126.831253],
[37.0520115, 126.91724440000002],
[37.5616902, 126.97456809999998],
[37.483538, 127.032643],
[35.967625, 126.73678899999999],
[35.967625, 126.73678899999999],
[37.359123, 126.93095500000001],
[37.359123, 126.93095500000001]],
'popup': 1,
'time': '2020-01-25'},
{'coordinates': [[37.35132410000001, 127.12124329999999],
[37.5917891, 127.0164831],
[37.564001, 127.02953500000001],
[37.5903342, 127.01303200000001],
[37.590492100000006, 127.0119803],
[37.590611700000004, 126.9441293],
[37.5863425, 126.99763390000001],
[37.5616902, 126.97456809999998],
[35.9867, 126.70813000000001]],
'popup': 1,
'time': '2020-01-26'},
{'coordinates': [[37.5920615, 127.01670959999998],
[37.590611700000004, 126.9441293],
[37.5921286, 126.98387890000001],
[37.5863425, 126.99763390000001],
[37.5863425, 126.99763390000001],
[37.5863425, 126.99763390000001],
[37.5616902, 126.97456809999998],
[35.968089, 126.716128],
[37.557176, 127.00768799999999]],
'popup': 1,
'time': '2020-01-27'},
{'coordinates': [[37.5916736, 127.016226],
[37.5854777, 127.08637140000002],
[37.5982157, 127.0797739],
[37.5236782, 127.04434930000001],
[37.60656420000001, 127.09043],
[37.5616902, 126.97456809999998],
[35.954685, 126.71244399999999],
[37.483702, 126.77811299999999]],
'popup': 1,
'time': '2020-01-28'},
{'coordinates': [[37.594741799999994, 127.0728561],
[37.60656420000001, 127.09043],
[37.5616902, 126.97456809999998],
[35.976046000000004, 126.705522],
[35.982751, 126.734844]],
'popup': 1,
'time': '2020-01-29'},
{'coordinates': [[37.5647424, 126.99496140000001],
[37.579669, 126.99897],
[35.964349, 126.959676],
[37.641158000000004, 126.791979],
[37.5863425, 126.99763390000001],
[37.641158000000004, 126.791979],
[37.5863425, 126.99763390000001],
[37.498415, 126.762864]],
'popup': 1,
'time': '2020-01-30'},
{'coordinates': [[35.964349, 126.959676],
[37.5673125, 126.9706395],
[37.579669, 126.99897],
[37.579669, 126.99897],
[37.561648, 126.7855822],
[37.481458, 126.7804963]],
'popup': 1,
'time': '2020-01-31'},
{'coordinates': [[37.351375, 127.123411],
[37.481458, 126.7804963],
[37.304349, 127.0079881],
[37.391714799999995, 127.147098]],
'popup': 1,
'time': '2020-02-01'},
{'coordinates': [[37.5672412, 127.00347020000001], [37.351375, 127.123411]],
'popup': 1,
'time': '2020-02-02'}]
Below is my features variable.
features
[{'geometry': {'coordinates': [[37.4725028, 126.4313798],
[37.478525899999994, 126.6663152],
[37.561648999999996, 126.79433700000001]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-19'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.679657, 126.763737],
[37.4725028, 126.4313798],
[37.0796065, 127.0561043],
[37.0220402, 126.8134938],
[37.557176, 127.00768799999999]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-20'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.673517, 126.7553],
[37.670964, 126.761146],
[37.679657, 126.763737],
[37.520878, 127.02286299999999],
[37.524661, 127.028002],
[37.503137, 127.04909099999999],
[37.0520115, 126.91724440000002],
[37.463504, 126.65055600000001]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-21'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.560362700000006, 126.776299],
[37.567226899999994, 126.75337079999998],
[37.549605299999996, 126.86608829999999],
[37.567226899999994, 126.75337079999998],
[37.5672454, 127.00347020000001],
[37.524661, 127.028002],
[37.530772, 127.031924],
[37.503137, 127.04909099999999],
[37.0220402, 126.8134938],
[37.523118200000006, 127.03281489999999],
[37.555136, 126.97048899999999]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-22'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.524703, 127.015943],
[37.500735, 127.036373],
[37.494607, 127.06329199999999],
[37.503137, 127.04909099999999],
[37.0220402, 126.8134938],
[37.483702, 126.77811299999999]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-23'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.524661, 127.028002],
[37.658513, 126.832025],
[37.674671999999994, 126.776701],
[37.678166, 126.812165],
[37.0220402, 126.8134938],
[37.5616902, 126.97456809999998],
[37.266184, 126.999655],
[37.263417, 127.028654],
[37.361576, 126.935174]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-24'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.679657, 126.763737],
[37.642371999999995, 126.831253],
[37.0520115, 126.91724440000002],
[37.5616902, 126.97456809999998],
[37.483538, 127.032643],
[35.967625, 126.73678899999999],
[35.967625, 126.73678899999999],
[37.359123, 126.93095500000001],
[37.359123, 126.93095500000001]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-25'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.35132410000001, 127.12124329999999],
[37.5917891, 127.0164831],
[37.564001, 127.02953500000001],
[37.5903342, 127.01303200000001],
[37.590492100000006, 127.0119803],
[37.590611700000004, 126.9441293],
[37.5863425, 126.99763390000001],
[37.5616902, 126.97456809999998],
[35.9867, 126.70813000000001]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-26'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.5920615, 127.01670959999998],
[37.590611700000004, 126.9441293],
[37.5921286, 126.98387890000001],
[37.5863425, 126.99763390000001],
[37.5863425, 126.99763390000001],
[37.5863425, 126.99763390000001],
[37.5616902, 126.97456809999998],
[35.968089, 126.716128],
[37.557176, 127.00768799999999]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-27'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.5916736, 127.016226],
[37.5854777, 127.08637140000002],
[37.5982157, 127.0797739],
[37.5236782, 127.04434930000001],
[37.60656420000001, 127.09043],
[37.5616902, 126.97456809999998],
[35.954685, 126.71244399999999],
[37.483702, 126.77811299999999]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-28'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.594741799999994, 127.0728561],
[37.60656420000001, 127.09043],
[37.5616902, 126.97456809999998],
[35.976046000000004, 126.705522],
[35.982751, 126.734844]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-29'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.5647424, 126.99496140000001],
[37.579669, 126.99897],
[35.964349, 126.959676],
[37.641158000000004, 126.791979],
[37.5863425, 126.99763390000001],
[37.641158000000004, 126.791979],
[37.5863425, 126.99763390000001],
[37.498415, 126.762864]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-30'},
'type': 'Feature'},
{'geometry': {'coordinates': [[35.964349, 126.959676],
[37.5673125, 126.9706395],
[37.579669, 126.99897],
[37.579669, 126.99897],
[37.561648, 126.7855822],
[37.481458, 126.7804963]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-01-31'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.351375, 127.123411],
[37.481458, 126.7804963],
[37.304349, 127.0079881],
[37.391714799999995, 127.147098]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-02-01'},
'type': 'Feature'},
{'geometry': {'coordinates': [[37.5672412, 127.00347020000001],
[37.351375, 127.123411]],
'type': 'Point'},
'properties': {'icon': 'marker',
'iconstyle': {'iconSize': [20, 20],
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'},
'id': 'house',
'popup': 1,
'time': '2020-02-02'},
'type': 'Feature'}]
I tried to draw the folium map, but the time bar at the lower left of the map doesn't work.
m = folium.Map([37.5650172, 126.8494648], zoom_start=10)

plugins.TimestampedGeoJson(
    {'type': 'FeatureCollection', 'features': features},
    period='P1D',
    add_last_point=True,
    auto_play=False,
    loop=False,
    max_speed=1,
    loop_button=True,
    date_options='YYYY-MM-DD',
    time_slider_drag_update=True,
    duration='P1D'
).add_to(m)

m

I had the same problem, which was solved by the following in my case:
The issue you describe most likely occurs because the "time" key in each feature's properties does not contain a list of the same length as that feature's "coordinates" list.
I also suggest renaming "time" to "times", as in the GitHub discussion cited below.
Below is a minimal example based on your feature list:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# * Example of correct feature list with length 1 * #

## Coordinates
coordinate_list1 = [[37.4725028, 126.4313798],
                    [37.478525899999994, 126.6663152],
                    [37.561648999999996, 126.79433700000001]]

## Times
# NOTE: if the same timestamp applies to all coordinates, create a list with the
# repeating timestamp of the same length as the coordinate list
times_list1 = ['2020-01-19'] * len(coordinate_list1)

## Features
# NOTE: fill in the coordinates and times, and name the associated time keyword "times"
features = [{
    'geometry': {
        'coordinates': coordinate_list1,
        'type': 'Point'
    },
    'properties': {
        'icon': 'marker',
        'iconstyle': {
            'iconSize': [20, 20],
            'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png'
        },
        'id': 'house',
        'popup': 1,
        'times': times_list1
    },
    'type': 'Feature'
}]
Fix this and it should work just fine.
The solution is based on this discussion:
[...] the plugin requires each feature to have a property "times", which
should be a list containing either unix epoch timestamps in ms or ISO
format timestamps. See
https://python-visualization.github.io/folium/plugins.html#folium.plugins.TimestampedGeoJson
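As a further illustration, here is a rough sketch (based only on the data shown in the question) of how the questioner's points list could be rebuilt into features whose "times" list matches the length of each coordinate list. Coordinate order is kept exactly as in the question; note that standard GeoJSON expects [longitude, latitude] order, so swapping may also be needed.
# Rough sketch: rebuild the features from the `points` list above so that each
# feature's 'times' list has the same length as its 'coordinates' list.
features = []
for point in points:
    features.append({
        'type': 'Feature',
        'geometry': {
            'type': 'Point',
            'coordinates': point['coordinates'],
        },
        'properties': {
            'icon': 'marker',
            'iconstyle': {
                'iconSize': [20, 20],
                'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png',
            },
            'id': 'house',
            'popup': point['popup'],
            # repeat the group's single date once per coordinate
            'times': [point['time']] * len(point['coordinates']),
        },
    })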

import pandas as pd
import folium
from folium.plugins import TimestampedGeoJson

df = pd.read_excel("file.xlsx")
IconUrl = "https://cdn.icon-icons.com/icons2/1465/PNG/128/604bullettrain_100995.png"

m = folium.Map([37.5650172, 126.8494648], zoom_start=10)

# step 1
TempFeature = []

# do the following for each marker
temp_df = df.query(f"Marker_id == {1}")

TempFeature.append({
    'type': 'Feature',
    'geometry': {
        'type': 'LineString',
        # example coordinates
        # [
        #     (-3.049648, 53.4372116),
        #     (-3.04967139134615, 53.4372056616587),
        #     (-3.04972986971154, 53.4371908158053),
        #     .
        #     .
        #     .
        #     (-3.04972986971154, 53.4371908158053),
        # ]
        'coordinates': list(zip(temp_df['Longitude'], temp_df['Latitude'])),
    },
    'properties': {
        'icon': 'marker',
        'iconstyle': {'iconUrl': IconUrl, 'iconSize': [20, 20]},
        # each time must be in a format like '2018-12-01T05:53:00'
        'times': list(temp_df['Timetable']),  # plain list so it serialises to JSON
        'popup': '<html> <head></head> <body> comments </body> </html>',
    },
})

# step 2
TimestampedGeoJson(
    {
        'type': 'FeatureCollection',
        'features': TempFeature,
    },
    period='P1D',
).add_to(m)

m
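The snippet above only builds the feature for Marker_id == 1. As a sketch under the same assumed column names (Marker_id, Longitude, Latitude, Timetable), that step could instead be wrapped in a loop over every marker id:
# Hypothetical loop over every marker id in the dataframe (column names as assumed above).
TempFeature = []
for marker_id in df['Marker_id'].unique():
    temp_df = df.query(f"Marker_id == {marker_id}")
    TempFeature.append({
        'type': 'Feature',
        'geometry': {
            'type': 'LineString',
            'coordinates': list(zip(temp_df['Longitude'], temp_df['Latitude'])),
        },
        'properties': {
            'icon': 'marker',
            'iconstyle': {'iconUrl': IconUrl, 'iconSize': [20, 20]},
            'times': list(temp_df['Timetable']),
            'popup': '<html> <head></head> <body> comments </body> </html>',
        },
    })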

Related

spark.read.json error: (java.lang.ArrayStoreException: java.util.HashMap)

I'm using the code below to pull multiple JSON pages into one list using pagination. When I try to create a Spark dataframe I get the error 'java.lang.ArrayStoreException: java.util.HashMap' during spark.read.json. Below the code block I've provided output from printing the 'issues' data set (minus proprietary info). I've done a bit of research and can't figure out what I can try to make this work. Any assistance would be greatly appreciated!
import requests
import json

limit = 2
startat = 0
issues = []

for page_num in range(2):
    startat = page_num * 50
    url = f"https://URL/rest/api/2/search?jql=TEST&startAt={startat}&maxResults={limit}"
    req = requests.get(url, headers={'Accept': 'application/json', 'Authorization': 'Basic xxxxxxxxxxxxxxxxxxxxxxxxxx'})
    data = req.json()
    issues.extend(data['issues'])

jsonDF = spark.read.json(issues)
jsonDF.printSchema()
[{'expand': 'operations,versionedRepresentations,editmeta,changelog,customfield_10010.requestTypePractice,renderedFields', 'id': '11441', 'self': 'https://my.url.net/rest/api/2/issue/11441', 'key': 'TS-1401', 'fields': {'statuscategorychangedate': '2022-11-29T07:05:17.359-0800', 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10004', 'id': '10004', 'description': 'Functionality or a feature expressed as a user goal.', 'iconUrl': 'https://my.url.net/rest/api/2/universal_avatar/view/type/issuetype/avatar/10315?size=medium', 'name': 'Story', 'subtask': False, 'avatarId': 10315, 'hierarchyLevel': 0}, 'parent': {'id': '11420', 'key': 'TS-1380', 'self': 'https://my.url.net/rest/api/2/issue/11420', 'fields': {'summary': 'Clone30 - Migration Epics', 'status': {'self': 'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10000', 'id': '10000', 'description': 'A big user story that needs to be broken down. Created by Jira Software - do not edit or delete.', 'iconUrl': 'https://my.url.net/images/icons/issuetypes/epic.svg', 'name': 'Epic', 'subtask': False, 'hierarchyLevel': 1}}}, 'timespent': None, 'project': {'self': 'https://my.url.net/rest/api/2/project/10001', 'id': '10001', 'key': 'TS', 'name': 'Project', 'projectTypeKey': 'software', 'simplified': False, 'avatarUrls': {'48x48': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556', '24x24': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556?size=small', '16x16': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556?size=xsmall', '32x32': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556?size=medium'}}, 'customfield_10033': None, 'fixVersions': [], 'aggregatetimespent': None, 'customfield_10034': [], 'customfield_10035': None, 'resolution': None, 'customfield_10036': None, 'customfield_10037': None, 'customfield_10027': None, 'customfield_10028': None, 'customfield_10029': None, 'resolutiondate': None, 'workratio': -1, 'watches': {'self': 'https://my.url.net/rest/api/2/issue/TS-1401/watchers', 'watchCount': 1, 'isWatching': True}, 'lastViewed': '2022-12-08T10:06:57.022-0800', 'created': '2022-11-29T07:05:16.501-0800', 'customfield_10020': None, 'customfield_10021': None, 'customfield_10022': None, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'customfield_10023': None, 'customfield_10024': None, 'customfield_10025': None, 'customfield_10026': None, 'labels': [], 'customfield_10016': None, 'customfield_10017': None, 'customfield_10018': {'hasEpicLinkFieldDependency': False, 'showField': False, 'nonEditableReason': {'reason': 'EPIC_LINK_SHOULD_BE_USED', 'message': 'To set an epic as the parent, use the epic link instead'}}, 'customfield_10019': '0|i008a3:', 'timeestimate': None, 'aggregatetimeoriginalestimate': None, 'versions': [], 'issuelinks': [], 'assignee': None, 'updated': '2022-11-29T07:05:20.759-0800', 'status': {'self': 
'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'components': [], 'timeoriginalestimate': None, 'description': 'Data owner completes template (understand scope of migration efforts)', 'customfield_10010': None, 'customfield_10014': 'TS-1380', 'customfield_10015': None, 'customfield_10005': None, 'customfield_10006': None, 'customfield_10007': None, 'security': None, 'customfield_10008': None, 'customfield_10009': None, 'aggregatetimeestimate': None, 'summary': 'Template', 'creator': {'self': 'https://my.url.net/rest/api/2/user?accountId=5d669f4bf81f2c0d99ee9e38', 'accountId': '5d669f4bf81f2c0d99ee9e38', 'emailAddress': 'test#aol.com', 'avatarUrls': {'48x48': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '24x24': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '16x16': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '32x32': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png'}, 'displayName': 'Joe Test', 'active': True, 'timeZone': 'America/Los_Angeles', 'accountType': 'atlassian'}, 'subtasks': [{'id': '11442', 'key': 'TS-1402', 'self': 'https://my.url.net/rest/api/2/issue/11442', 'fields': {'summary': 'Complete Template with table/views required (in) and produced (out)', 'status': {'self': 'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10006', 'id': '10006', 'description': "A small piece of work that's part of a larger task.", 'iconUrl': 'https://my.url.net/rest/api/2/universal_avatar/view/type/issuetype/avatar/10316?size=medium', 'name': 'Sub-task', 'subtask': True, 'avatarId': 10316, 'hierarchyLevel': -1}}}], 'reporter': {'self': 'https://my.url.net/rest/api/2/user?accountId=5d669f4bf81f2c0d99ee9e38', 'accountId': '5d669f4bf81f2c0d99ee9e38', 'emailAddress': 'test#aol.com', 'avatarUrls': {'48x48': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '24x24': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '16x16': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '32x32': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png'}, 'displayName': 'Joe Test', 'active': True, 'timeZone': 'America/Los_Angeles', 'accountType': 'atlassian'}, 'aggregateprogress': {'progress': 0, 'total': 0}, 'customfield_10001': None, 'customfield_10002': None, 'customfield_10003': None, 
'customfield_10004': None, 'customfield_10038': None, 'environment': None, 'duedate': None, 'progress': {'progress': 0, 'total': 0}, 'votes': {'self': 'https://my.url.net/rest/api/2/issue/TS-1401/votes', 'votes': 0, 'hasVoted': False}}}, {'expand': 'operations,versionedRepresentations,editmeta,changelog,customfield_10010.requestTypePractice,renderedFields', 'id': '11438', 'self': 'https://my.url.net/rest/api/2/issue/11438', 'key': 'TS-1398', 'fields': {'statuscategorychangedate': '2022-11-29T07:05:09.126-0800', 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10004', 'id': '10004', 'description': 'Functionality or a feature expressed as a user goal.', 'iconUrl': 'https://my.url.net/rest/api/2/universal_avatar/view/type/issuetype/avatar/10315?size=medium', 'name': 'Story', 'subtask': False, 'avatarId': 10315, 'hierarchyLevel': 0}, 'parent': {'id': '11420', 'key': 'TS-1380', 'self': 'https://my.url.net/rest/api/2/issue/11420', 'fields': {'summary': 'Clone30 - Migration Epics', 'status': {'self': 'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10000', 'id': '10000', 'description': 'A big user story that needs to be broken down. Created by Jira Software - do not edit or delete.', 'iconUrl': 'https://my.url.net/images/icons/issuetypes/epic.svg', 'name': 'Epic', 'subtask': False, 'hierarchyLevel': 1}}}, 'timespent': None, 'project': {'self': 'https://my.url.net/rest/api/2/project/10001', 'id': '10001', 'key': 'TS', 'name': 'Project', 'projectTypeKey': 'software', 'simplified': False, 'avatarUrls': {'48x48': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556', '24x24': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556?size=small', '16x16': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556?size=xsmall', '32x32': 'https://my.url.net/rest/api/2/universal_avatar/view/type/project/avatar/10556?size=medium'}}, 'fixVersions': [], 'customfield_10033': None, 'customfield_10034': [], 'aggregatetimespent': None, 'customfield_10035': None, 'resolution': None, 'customfield_10036': None, 'customfield_10037': None, 'customfield_10027': None, 'customfield_10028': None, 'customfield_10029': None, 'resolutiondate': None, 'workratio': -1, 'lastViewed': None, 'watches': {'self': 'https://my.url.net/rest/api/2/issue/TS-1398/watchers', 'watchCount': 1, 'isWatching': True}, 'created': '2022-11-29T07:05:08.312-0800', 'customfield_10020': None, 'customfield_10021': None, 'customfield_10022': None, 'customfield_10023': None, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'customfield_10024': None, 'customfield_10025': None, 'customfield_10026': None, 'labels': [], 'customfield_10016': None, 'customfield_10017': None, 'customfield_10018': {'hasEpicLinkFieldDependency': False, 'showField': False, 'nonEditableReason': {'reason': 'EPIC_LINK_SHOULD_BE_USED', 'message': 'To set an epic as the parent, use the epic link instead'}}, 'customfield_10019': '0|i008ae:y', 
'timeestimate': None, 'aggregatetimeoriginalestimate': None, 'versions': [], 'issuelinks': [], 'assignee': None, 'updated': '2022-11-29T07:05:22.417-0800', 'status': {'self': 'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'components': [], 'timeoriginalestimate': None, 'description': 'Creating reports/reporting cubes; need to find out reports used', 'customfield_10010': None, 'customfield_10014': 'TS-1380', 'customfield_10015': None, 'customfield_10005': None, 'customfield_10006': None, 'security': None, 'customfield_10007': None, 'customfield_10008': None, 'customfield_10009': None, 'aggregatetimeestimate': None, 'summary': '\xa0create reports/cubes', 'creator': {'self': 'https://my.url.net/rest/api/2/user?accountId=5d669f4bf81f2c0d99ee9e38', 'accountId': '5d669f4bf81f2c0d99ee9e38', 'emailAddress': 'test#aol.com', 'avatarUrls': {'48x48': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '24x24': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '16x16': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '32x32': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png'}, 'displayName': 'Joe Test', 'active': True, 'timeZone': 'America/Los_Angeles', 'accountType': 'atlassian'}, 'subtasks': [{'id': '11439', 'key': 'TS-1399', 'self': 'https://my.url.net/rest/api/2/issue/11439', 'fields': {'summary': 'Confirm: any reporting cubes required using this data are created and in production?', 'status': {'self': 'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10006', 'id': '10006', 'description': "A small piece of work that's part of a larger task.", 'iconUrl': 'https://my.url.net/rest/api/2/universal_avatar/view/type/issuetype/avatar/10316?size=medium', 'name': 'Sub-task', 'subtask': True, 'avatarId': 10316, 'hierarchyLevel': -1}}}, {'id': '11440', 'key': 'TS-1400', 'self': 'https://my.url.net/rest/api/2/issue/11440', 'fields': {'summary': 'Confirm: any structured reports using this data are created and in production?', 'status': {'self': 'https://my.url.net/rest/api/2/status/10003', 'description': '', 'iconUrl': 'https://my.url.net/', 'name': 'Backlog', 'id': '10003', 'statusCategory': {'self': 'https://my.url.net/rest/api/2/statuscategory/2', 'id': 2, 'key': 'new', 'colorName': 'blue-gray', 'name': 'To Do'}}, 'priority': {'self': 'https://my.url.net/rest/api/2/priority/3', 'iconUrl': 'https://my.url.net/images/icons/priorities/medium.svg', 'name': 'Medium', 'id': '3'}, 'issuetype': {'self': 'https://my.url.net/rest/api/2/issuetype/10006', 'id': '10006', 'description': "A small piece of work that's part of a 
larger task.", 'iconUrl': 'https://my.url.net/rest/api/2/universal_avatar/view/type/issuetype/avatar/10316?size=medium', 'name': 'Sub-task', 'subtask': True, 'avatarId': 10316, 'hierarchyLevel': -1}}}], 'reporter': {'self': 'https://my.url.net/rest/api/2/user?accountId=5d669f4bf81f2c0d99ee9e38', 'accountId': '5d669f4bf81f2c0d99ee9e38', 'emailAddress': 'test#aol.com', 'avatarUrls': {'48x48': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '24x24': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '16x16': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png', '32x32': 'https://secure.gravatar.com/avatar/69b7db33e65c274c27a07b28b356e329?d=https%3A%2F%2Favatar-management--avatars.us-west-2.test.png'}, 'displayName': 'Joe Test', 'active': True, 'timeZone': 'America/Los_Angeles', 'accountType': 'atlassian'}, 'aggregateprogress': {'progress': 0, 'total': 0}, 'customfield_10001': None, 'customfield_10002': None, 'customfield_10003': None, 'customfield_10004': None, 'customfield_10038': None, 'environment': None, 'duedate': None, 'progress': {'progress': 0, 'total': 0}, 'votes': {'self': 'https://my.url.net/rest/api/2/issue/TS-1398/votes', 'votes': 0, 'hasVoted': False}}}]
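A commonly suggested workaround (a sketch only, not verified against this exact payload) is not to pass the list of Python dicts to spark.read.json directly, but to serialise each issue to a JSON string first and let Spark parse those. This assumes a SparkSession named spark is already available, as in the question.
import json

# Hedged sketch: turn each issue dict into a JSON string, then let Spark infer the schema.
issues_rdd = spark.sparkContext.parallelize([json.dumps(issue) for issue in issues])
jsonDF = spark.read.json(issues_rdd)
jsonDF.printSchema()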

How to create python dataframe from nested json dictionary with increasing key

I have a json file with the following structure:
{'0': {'transaction': [{'transaction_key': '406.l.657872.tr.374',
'transaction_id': '374',
'type': 'add/drop',
'status': 'successful',
'timestamp': '1639593953'},
{'players': {'0': {'player': [[{'player_key': '406.p.100006'},
{'player_id': '100006'},
{'name': {'full': 'Dallas',
'first': 'Dallas',
'last': '',
'ascii_first': 'Dallas',
'ascii_last': ''}},
{'editorial_team_abbr': 'Dal'},
{'display_position': 'DEF'},
{'position_type': 'DT'}],
{'transaction_data': [{'type': 'add',
'source_type': 'freeagents',
'destination_type': 'team',
'destination_team_key': '406.l.657872.t.10',
'destination_team_name': 'Team 1'}]}]},
'1': {'player': [[{'player_key': '406.p.24793'},
{'player_id': '24793'},
{'name': {'full': 'Julio Jones',
'first': 'Julio',
'last': 'Jones',
'ascii_first': 'Julio',
'ascii_last': 'Jones'}},
{'editorial_team_abbr': 'Ten'},
{'display_position': 'WR'},
{'position_type': 'O'}],
{'transaction_data': {'type': 'drop',
'source_type': 'team',
'source_team_key': '406.l.657872.t.10',
'source_team_name': 'Team 1',
'destination_type': 'waivers'}}]},
'count': 2}}]},
'1': {'transaction': [{'transaction_key': '406.l.657872.tr.373',
'transaction_id': '373',
'type': 'add/drop',
'status': 'successful',
'timestamp': '1639575496'},
{'players': {'0': {'player': [[{'player_key': '406.p.32722'},
{'player_id': '32722'},
{'name': {'full': 'Cam Akers',
'first': 'Cam',
'last': 'Akers',
'ascii_first': 'Cam',
'ascii_last': 'Akers'}},
{'editorial_team_abbr': 'LAR'},
{'display_position': 'RB'},
{'position_type': 'O'}],
{'transaction_data': [{'type': 'add',
'source_type': 'freeagents',
'destination_type': 'team',
'destination_team_key': '406.l.657872.t.5',
'destination_team_name': 'Team 2'}]}]},
'1': {'player': [[{'player_key': '406.p.100007'},
{'player_id': '100007'},
{'name': {'full': 'Denver',
'first': 'Denver',
'last': '',
'ascii_first': 'Denver',
'ascii_last': ''}},
{'editorial_team_abbr': 'Den'},
{'display_position': 'DEF'},
{'position_type': 'DT'}],
{'transaction_data': {'type': 'drop',
'source_type': 'team',
'source_team_key': '406.l.657872.t.5',
'source_team_name': 'Team 2',
'destination_type': 'waivers'}}]},
'count': 2}}]},
'2': {'transaction': [{'transaction_key': '406.l.657872.tr.372',
'transaction_id': '372',
'type': 'add/drop',
'status': 'successful',
'timestamp': '1639575448'},
{'players': {'0': {'player': [[{'player_key': '406.p.33413'},
{'player_id': '33413'},
{'name': {'full': 'Travis Etienne',
'first': 'Travis',
'last': 'Etienne',
'ascii_first': 'Travis',
'ascii_last': 'Etienne'}},
{'editorial_team_abbr': 'Jax'},
{'display_position': 'RB'},
{'position_type': 'O'}],
{'transaction_data': [{'type': 'add',
'source_type': 'freeagents',
'destination_type': 'team',
'destination_team_key': '406.l.657872.t.5',
'destination_team_name': 'Team 2'}]}]},
'1': {'player': [[{'player_key': '406.p.24815'},
{'player_id': '24815'},
{'name': {'full': 'Mark Ingram II',
'first': 'Mark',
'last': 'Ingram II',
'ascii_first': 'Mark',
'ascii_last': 'Ingram II'}},
{'editorial_team_abbr': 'NO'},
{'display_position': 'RB'},
{'position_type': 'O'}],
{'transaction_data': {'type': 'drop',
'source_type': 'team',
'source_team_key': '406.l.657872.t.5',
'source_team_name': 'Team 2',
'destination_type': 'waivers'}}]},
'count': 2}}]}
These are transactions for a fantasy football league, and I'd like to organize each transaction into a dataframe; however, I'm running into issues normalizing the data. I figure I'd need to begin a loop, but I'm slightly stuck in the mud and would appreciate any suggestions. Thank you.
Ideally, I'm looking to summarize each transaction with the following dataframe structure:
transaction_id type added pos_1 dropped pos_2 timestamp
374 add/drop Dallas DEF Julio Jones WR 1639593953
373 add/drop Cam Akers RB Denver DEF 1639575496
372 add/drop Travis Etienne RB Mark Ingram II RB 1639575448
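One rough sketch of a loop that produces this layout from the structure shown above (assuming the JSON has been loaded into a dictionary named data, and that every transaction contains exactly one 'add' and one 'drop' entry as in the sample):
import pandas as pd

# Hedged sketch: walk the nested structure and build one row per transaction.
rows = []
for key in sorted(data, key=int):
    txn = data[key]['transaction']
    meta, players = txn[0], txn[1]['players']
    row = {'transaction_id': meta['transaction_id'],
           'type': meta['type'],
           'timestamp': meta['timestamp']}
    for idx in (k for k in players if k != 'count'):
        attrs, txn_data = players[idx]['player']
        # flatten the list of single-key dicts into one dict
        info = {k: v for d in attrs for k, v in d.items()}
        t = txn_data['transaction_data']
        action = (t[0] if isinstance(t, list) else t)['type']  # 'add' or 'drop'
        if action == 'add':
            row['added'] = info['name']['full']
            row['pos_1'] = info['display_position']
        else:
            row['dropped'] = info['name']['full']
            row['pos_2'] = info['display_position']
    rows.append(row)

df = pd.DataFrame(rows, columns=['transaction_id', 'type', 'added', 'pos_1',
                                 'dropped', 'pos_2', 'timestamp'])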

create dictionary of values based on matching keys in list from nested dictionary

I have a nested dictionary, called mainlookup, with up to 300 items from TYPE1 to TYPE300:
mainlookup = {'TYPE1': [{'Song': 'Rock', 'Type': 'Hard', 'Price': '10'}],
'TYPE2': [{'Song': 'Jazz', 'Type': 'Slow', 'Price': '5'}],
'TYPE37': [{'Song': 'Country', 'Type': 'Fast', 'Price': '7'}]}
The input list to search in the lookup, based on the strings type1, type2, and so on:
input_list = ['thissong-fav-user:type1-chan-44-John',
'thissong-fav-user:type1-chan-45-kelly-md',
'thissong-fav-user:type2-rock-45-usa',
'thissong-fav-user:type737-chan-45-patrick-md',
'thissong-fav-user:type37-chan-45-kelly-md']
I want to find the TYPE string in each item of input_list and then create a dictionary as shown below:
Output_Desired = {'thissong-fav-user:type1-chan-44-John': [{'Song': 'Rock', 'Type': 'Hard',
'Price':'10'}],
'thissong-fav-user:type1-chan-45-kelly-md': [{'Song': 'Rock', 'Type': 'Hard', 'Price': '10'}],
'thissong-fav-user:type2-rock-45-usa': [{'Song': 'Jazz', 'Type': 'Slow', 'Price': '5'}],
'thissong-fav-user:type37-chan-45-kelly-md': [{'Song': 'Country', 'Type': 'Fast', 'Price': '7'}]}
Note: 'thissong-fav-user:type737-chan-45-patrick-md' has no match in the main lookup, so I want to collect such values in a separate list:
Notfound_list = ['thissong-fav-user:type737-chan-45-patrick-md', and so on..]
Appreciate your help.
You can try this:
mainlookup = {'TYPE1': [{'Song': 'Rock', 'Type': 'Hard', 'Price': '10'}],
              'TYPE2': [{'Song': 'Jazz', 'Type': 'Slow', 'Price': '5'}],
              'TYPE37': [{'Song': 'Country', 'Type': 'Fast', 'Price': '7'}]}

input_list = ['thissong-fav-user:type1-chan-44-John',
              'thissong-fav-user:type1-chan-45-kelly-md',
              'thissong-fav-user:type737-chan-45-kelly-md']

dct = {i: mainlookup[i.split(':')[1].split('-')[0].upper()]
       for i in input_list
       if i.split(':')[1].split('-')[0].upper() in mainlookup.keys()}

Notfoundlist = [i for i in input_list if i not in dct.keys()]

print(dct)
print(Notfoundlist)
Output:
{'thissong-fav-user:type1-chan-44-John': [{'Song': 'Rock', 'Type': 'Hard', 'Price': '10'}], 'thissong-fav-user:type1-chan-45-kelly-md': [{'Song': 'Rock', 'Type': 'Hard', 'Price': '10'}]}
['thissong-fav-user:type737-chan-45-kelly-md']
An answer using regular expressions:
import re
from pprint import pprint
input_list = ['thissong-fav-user:type1-chan-44-John', 'thissong-fav-user:type1-chan-45-kelly-md', 'thissong-fav-user:type2-rock-45-usa', 'thissong-fav-user:type737-chan-45-patrick-md', 'thissong-fav-user:type37-chan-45-kelly-md']
mainlookup = {'TYPE2': {'Song': 'Reggaeton', 'Type': 'Hard', 'Price': '30'}, 'TYPE1': {'Song': 'Rock', 'Type': 'Hard', 'Price': '10'}, 'TYPE737': {'Song': 'Jazz', 'Type': 'Hard', 'Price': '99'}, 'TYPE37': {'Song': 'Rock', 'Type': 'Soft', 'Price': '1'}}
pattern = re.compile('type[0-9]+')
matches = [re.search(pattern, x).group(0) for x in input_list]
result = {x: [mainlookup[matches[i].upper()]] for i, x in enumerate(input_list)}
pprint(result)
Output:
{'thissong-fav-user:type1-chan-44-John': [{'Price': '10',
'Song': 'Rock',
'Type': 'Hard'}],
'thissong-fav-user:type1-chan-45-kelly-md': [{'Price': '10',
'Song': 'Rock',
'Type': 'Hard'}],
'thissong-fav-user:type2-rock-45-usa': [{'Price': '30',
'Song': 'Reggaeton',
'Type': 'Hard'}],
'thissong-fav-user:type37-chan-45-kelly-md': [{'Price': '1',
'Song': 'Rock',
'Type': 'Soft'}],
'thissong-fav-user:type737-chan-45-patrick-md': [{'Price': '99',
'Song': 'Jazz',
'Type': 'Hard'}]}
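The regex answer above assumes every type appears in mainlookup (its example adds TYPE737 and changes the values). A small variation, sketched here against the question's original mainlookup and input_list, also produces the requested not-found list:
import re

pattern = re.compile('type[0-9]+')

result, notfound = {}, []
for item in input_list:
    match = pattern.search(item)
    key = match.group(0).upper() if match else None
    if key in mainlookup:
        result[item] = mainlookup[key]
    else:
        notfound.append(item)

print(result)
print(notfound)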

Converting a RESTful API response from list to dictionary

Currently I have a function (shown below) that makes a GET request to an API that I made myself:
def get_vehicles(self):
    result = "http://127.0.0.1:8000/vehicles"
    response = requests.get(result)
    data = response.content
    data_dict = json.loads(data)
    return data_dict
The data I get is in this format, which is a list of dictionaries:
data_dict = [{'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'}]
Is it possible to convert it to just a dictionary? Example:
data_dict = {'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'}
No, the second result is a tuple, not a dict.
data_dict = {'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'}
print(type(data_dict))
# <class 'tuple'>
It is the same as:
data_dict = ({'colour': 'Black', 'cost': 10, 'latitude': -37.806152, 'longitude': 144.95787, 'rentalStatus': 'True', 'seats': 4, 'user': None, 'vehicleBrand': 'Toyota', 'vehicleID': 1, 'vehicleModel': 'Altis'}, {'colour': 'White', 'cost': 15, 'latitude': -37.803913, 'longitude': 144.964859, 'rentalStatus': 'False', 'seats': 4, 'user': {'firstname': 'Test', 'imageName': None, 'password': 'password', 'surname': 'Ing', 'userID': 15, 'username': 'Testing'}, 'vehicleBrand': 'Honda', 'vehicleID': 3, 'vehicleModel': 'Civic'})
That's why it is a tuple.
If you want to merge them into a single dict, that is not really possible because a dict cannot have duplicate keys. But you could merge the values into lists, like:
d = {key: list(value) for key, value in zip(data_dict[0].keys(), zip(data_dict[0].values(), data_dict[1].values()))}
print(d)
Result (make sure they have the same length):
{
'colour': ['Black', 'White'],
'cost': [10, 15],
'latitude': [-37.806152, -37.803913],
'longitude': [144.95787, 144.964859],
'rentalStatus': ['True', 'False'],
'seats': [4, 4],
'user': [None, {
'firstname': 'Test',
'imageName': None,
'password': 'password',
'surname': 'Ing',
'userID': 15,
'username': 'Testing'
}],
'vehicleBrand': ['Toyota', 'Honda'],
'vehicleID': [1, 3],
'vehicleModel': ['Altis', 'Civic']
}
This is a list of dictionaries.
Therefore you can access its elements using index syntax, for example data_dict[0] for the first element.
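If the goal really is a single dictionary rather than a list, one common option (a sketch, assuming vehicleID is unique in the response) is to key the entries by vehicleID:
# Hedged sketch: turn the list of vehicle dicts into one dict keyed by vehicleID.
vehicles_by_id = {vehicle['vehicleID']: vehicle for vehicle in data_dict}

print(vehicles_by_id[1]['vehicleBrand'])  # 'Toyota'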

How to separate values in a dictionary to be put into CSV? [duplicate]

This question already has answers here:
How can I convert JSON to CSV?
(26 answers)
Closed 3 years ago.
I am trying to write my JSON output to CSV, but I'm not sure how to separate my values into individual columns.
This is my current code:
with open('dict.csv', 'w') as csv_file:
    writer = csv.writer(csv_file)
    for key, value in response.json().items():
        writer.writerow([value])
        print(value)
This is the csv file I am getting (screenshot: current csv file).
This is the desired csv file/output I want to get (screenshot: desired output).
This is an example of my JSON Output
[{'id': '123', 'custom_id': '12', 'company': 28, 'company_name': 'Sunshine'}, {'id': '224', 'custom_id': '14', 'company': 38, 'company_name': 'Flowers'},
{'id': '888', 'custom_id': '10', 'company': 99, 'company_name': 'Fields'}]
How about this JSON format (a more complicated one)?
[{'id': '777', 'custom_id': '000112', 'company': 28, 'company_name':
'Weddings Inc', 'delivery_address': '25 olive park terrace, 61234', 'delivery_timeslot': {'lower': '2019-12-06T10:00:00Z', 'upper': '2019-12-06T13:00:00Z', 'bounds': '[)'}, 'sender_name': 'Joline', 'sender_email': '', 'sender_contact': '91234567', 'removed': None, 'recipient_name': 'Joline', 'recipient_contact': '91866655', 'notes': '', 'items': [{'id': 21668, 'name': 'Loose hair flowers', 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 21667, 'name': "Groom's Boutonniere", 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 21666, 'name': 'Bridal Bouquet', 'quantity': 1, 'metadata': {}, 'removed': None}], 'latitude': '1.1234550920764211111', 'longitude': '103.864352476201000000', 'created': '2019-08-15T05:40:30.385467Z', 'updated': '2019-08-15T05:41:27.930110Z', 'status': 'pending', 'verbose_status': 'Pending', 'logs': [{'id': 56363, 'order': '50c402', 'order_custom_id': '000112', 'order_delivery_address': '25 olive park terrace, 61234', 'order_delivery_timeslot': {'lower': '2019-12-06T10:00:00Z', 'upper': '2019-12-06T13:00:00Z', 'bounds': '[)'}, 'message': 'Order was created.', 'failure_reason': None, 'success_code': None, 'success_description': None, 'created': '2019-08-15T05:40:30.431790Z', 'removed': None}, {'id': 56364, 'order': '50c402d8-7c76-45b5-b883-e2fb887a507e', 'order_custom_id': 'INV-000112', 'order_delivery_address': '25 olive park terrace, 61234', 'order_delivery_timeslot': {'lower': '2019-12-06T10:00:00Z', 'upper': '2019-12-06T13:00:00Z', 'bounds': '[)'}, 'message': 'Order is pending.', 'failure_reason': None, 'success_code': None, 'success_description': None, 'created': '2019-08-15T05:40:30.433139Z', 'removed': None}], 'reschedule_requests': [], 'signature': None},
{'id': '241', 'custom_id': '000123', 'company': 22, 'company_name': 'Pearl Pte Ltd', 'delivery_address': '90 Merchant Road, Hotel Royal, 223344', 'delivery_timeslot': {'lower': '2019-11-29T10:00:00Z', 'upper': '2019-11-29T13:00:00Z', 'bounds': '[)'}, 'sender_name': 'Vera Smith', 'sender_email': '', 'sender_contact': '81234567', 'removed': None, 'recipient_name': 'Vera Smith', 'recipient_contact': '81234561', 'notes': '', 'items': [{'id': 22975, 'name': 'Custom wrapped bouquet', 'quantity': 2, 'metadata': {}, 'removed': None}, {'id': 22974, 'name': "Parents' boutonniere x 3", 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 22973, 'name': "Groom's boutonniere", 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 22972, 'name': 'Loose hair flowers', 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 22971, 'name': 'Bridal Bouquet', 'quantity': 1, 'metadata': {}, 'removed': None}], 'latitude': '1.28821802835873000000', 'longitude': '103.84569230314800000000', 'created': '2019-08-30T03:20:17.477528Z', 'updated': '2019-08-30T03:29:25.307856Z', 'status': 'pending', 'verbose_status': 'Pending', 'logs': [{'id': 59847, 'order': '24117085-9104-4442-841b-4a734f801d39', 'order_custom_id': 'INV-000123', 'order_delivery_address': '90 Merchant Road, Hotel Royal, 223344', 'order_delivery_timeslot': {'lower': '2019-11-29T10:00:00Z', 'upper': '2019-11-29T13:00:00Z', 'bounds': '[)'}, 'message': 'Order was created.', 'failure_reason': None, 'success_code': None, 'success_description': None, 'created': '2019-08-30T03:20:17.511250Z', 'removed': None}, {'id': 59848, 'order': '24117085-9104-4442-841b-4a734f801d39', 'order_custom_id': 'INV-000123', 'order_delivery_address': '90 Merchant Road, Hotel Royal, 223344', 'order_delivery_timeslot': {'lower': '2019-11-29T10:00:00Z', 'upper': '2019-11-29T13:00:00Z', 'bounds': '[)'}, 'message': 'Order is pending.', 'failure_reason': None, 'success_code': None, 'success_description': None, 'created': '2019-08-30T03:20:17.513132Z', 'removed': None}], 'reschedule_requests': [], 'signature': None}]
Use the pandas library: df.to_csv() writes an object to a comma-separated values (csv) file.
For example:
import pandas as pd

data = [{'id': '123', 'custom_id': '12', 'company': 28, 'company_name': 'Sunshine'},
        {'id': '224', 'custom_id': '14', 'company': 38, 'company_name': 'Flowers'},
        {'id': '888', 'custom_id': '10', 'company': 99, 'company_name': 'Fields'}]

df = pd.DataFrame(data)
df.to_csv('sample.csv')
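For the more complicated nested format asked about above, pandas can still help. A sketch (the variable name nested_data and the chosen meta columns are illustrative) using pandas.json_normalize, which flattens nested dictionaries into dotted column names:
import pandas as pd

# Hedged sketch: flatten the nested order records; nested dicts such as
# delivery_timeslot become columns like 'delivery_timeslot.lower'.
df = pd.json_normalize(nested_data)  # nested_data = the list of order dicts above
df.to_csv('orders.csv', index=False)

# List-valued fields such as 'items' stay as Python lists in a single column;
# they can be expanded into their own table if needed, e.g.:
items_df = pd.json_normalize(nested_data, record_path='items',
                             meta=['custom_id', 'company_name'])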
Try:
import csv

csv_file = 'my_file.csv'
csv_columns = ['id', 'custom_id', 'company', 'company_name']
dict_data = [{'id': '123', 'custom_id': '12', 'company': 28, 'company_name': 'Sunshine'},
             {'id': '224', 'custom_id': '14', 'company': 38, 'company_name': 'Flowers'},
             {'id': '888', 'custom_id': '10', 'company': 99, 'company_name': 'Fields'}]

try:
    with open(csv_file, 'w') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=csv_columns)
        writer.writeheader()
        for data in dict_data:
            writer.writerow(data)
except IOError:
    print("I/O error")
Given your response data in JSON format:
response = [{'id': '123', 'custom_id': '12', 'company': 28, 'company_name': 'Sunshine'},
            {'id': '224', 'custom_id': '14', 'company': 38, 'company_name': 'Flowers'},
            {'id': '888', 'custom_id': '10', 'company': 99, 'company_name': 'Fields'}]
You can convert it to a list of lists using
header = [response[0].keys()]
data = [row.values() for row in response]
csv_list = header + data
And then save it to csv using
with open('dict.csv', "w") as f:
    for row in csv_list:
        f.write("%s\n" % ','.join(str(col) for col in row))
This should yield your desired output.
