Error when I try to extract info in a json - python

I have this code:
api_key = "_________"
ciudad = input("put the city: ")
url = "https://api.openweathermap.org/data/2.5/forecast?q=" +ciudad+ "&appid=" + api_key
print(url)
data = urllib.request.urlopen(url).read().decode()
js = json.loads(data)
And it all works fine, but I need the max and min temperatures, so I tried this:
for res in js["list"][0]["main"]:
print("the value of", res["main.temp_min"])
and the code gives me this error:
TypeError: string indices must be integers
The JSON looks like this:
{'cod': '200', 'message': 0, 'cnt': 40, 'list': [{'dt': 1669032000, 'main': {'temp': 288.99, 'feels_like': 288.35, 'temp_min': 286.43, 'temp_max': 288.99, 'pressure': 1012, 'sea_level': 1012, 'grnd_level': 1007, 'humidity': 66, 'temp_kf': 2.56}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10d'}], 'clouds': {'all': 75}, 'wind': {'speed': 9.85, 'deg': 296, 'gust': 13.2}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 1.55}, 'sys': {'pod': 'd'}, 'dt_txt': '2022-11-21 12:00:00'}, {'dt': 1669042800, 'main': {'temp': 287.59, 'feels_like': 286.94, 'temp_min': 284.8, 'temp_max': 287.59, 'pressure': 1014, 'sea_level': 1014, 'grnd_level': 1008, 'humidity': 71, 'temp_kf': 2.79}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10d'}], 'clouds': {'all': 78}, 'wind': {'speed': 9.77, 'deg': 314, 'gust': 14.1}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 2.28}, 'sys': {'pod': 'd'}, 'dt_txt': '2022-11-21 15:00:00'}, {'dt': 1669053600, 'main': {'temp': 286.12, 'feels_like': 285.14, 'temp_min': 284.68, 'temp_max': 286.12, 'pressure': 1016, 'sea_level': 1016, 'grnd_level': 1009, 'humidity': 64, 'temp_kf': 1.44}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'}], 'clouds': {'all': 86}, 'wind': {'speed': 8.5, 'deg': 308, 'gust': 12.41}, 'visibility': 10000, 'pop': 1, 'rain': {'3h': 1.46}, 'sys': {'pod': 'n'}, 'dt_txt': '2022-11-21 18:00:00'}, {'dt': 1669064400, 'main': {'temp': 284.63, 'feels_like': 283.53, 'temp_min': 284.63, 'temp_max': 284.63, 'pressure': 1019, 'sea_level': 1019, 'grnd_level': 1010, 'humidity': 65, 'temp_kf': 0}, 'weather': [{'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'}], 'clouds': {'all': 100}, 'wind': {'speed': 7.04, 'deg': 300, 'gust': 10.08}, 'visibility': 10000, 'pop': 0.57, 'rain': {'3h': 0.42}, 'sys': {'pod': 'n'}, 'dt_txt': '2022-11-21 21:00:00'}, {'dt': 1669075200, 'main': {'temp': 284.82, 'feels_like': 283.95, 'temp_min': 284.82, 'temp_max': 284.82, 'pressure': 1018, 'sea_level': 1018, 'grnd_level': 1009, 'humidity': 73, 'temp_kf': 0}

js["list"][0]["main"] is a dictionary:
{'temp': 288.99, 'feels_like': 288.35, 'temp_min': 286.43, 'temp_max': 288.99, 'pressure': 1012, 'sea_level': 1012, 'grnd_level': 1007, 'humidity': 66, 'temp_kf': 2.56}
for res in js["list"][0]["main"] iterates over its keys. So res is one of the keys in this dictionary which are strings (hence the error). What you probably want is:
for l in js["list"]:
print("the value of", l["main"]["temp_min"])

Related

how to make json_normalize build a dataframe from openweather respons

Hi, I'm struggling to extract the data from an OpenWeather response. I am using json_normalize to build the table, but the construction of the statement is not clear to me. I managed to split a piece of the data into smaller portions and normalize it, but I wonder if there is a nicer and smoother way of doing it.
'daily': [{'dt': 1612432800, 'sunrise': 1612419552, 'sunset': 1612452288,'temp': {'day': -4.21, 'min': -10.24, 'max': -2.31, 'night': - 10.24, 'eve': -5.11, 'morn': -3.43},
'feels_like': {'day': -10.78, 'night': -13.48, 'eve': -9.52, 'morn': -11.35}, 'pressure': 1010, 'humidity': 96,
'dew_point': -5.84, 'wind_speed': 5.69, 'wind_deg': 13,
'weather': [{'id': 601, 'main': 'Snow', 'description': 'snow', 'icon': '13d'}], 'clouds': 100, 'pop': 1,
'snow': 10.24, 'uvi': 0.89}, {'dt': 1612519200, 'sunrise': 1612505843, 'sunset': 1612538809,
'temp': {'day': -3.7, 'min': -10.24, 'max': -2.6, 'night': -9.09, 'eve': -6.92,'morn': -8.96},
'feels_like': {'day': -8.01, 'night': -13.25, 'eve': -10.96, 'morn': -13.11},
'pressure': 1023, 'humidity': 98, 'dew_point': -4.64, 'wind_speed': 2.59,
'wind_deg': 273, 'weather': [{'id': 802, 'main': 'Clouds', 'description': 'scattered clouds','icon': '03d'}], 'clouds': 29,
'pop': 0.16, 'uvi': 0.91},{'dt': 1612605600, 'sunrise': 1612592132, 'sunset': 1612625330,
'temp': {'day': -8.27, 'min': -15.93, 'max': -7.49, 'night': -15.93, 'eve': -12.8, 'morn': -10.72},
'feels_like': {'day': -12.82, 'night': -20.74, 'eve': -17.38, 'morn': -14.93}, 'pressure': 1024,
'humidity': 92, 'dew_point': -11.71, 'wind_speed': 2.21, 'wind_deg': 32,
'weather': [{'id': 803, 'main': 'Clouds', 'description': 'broken clouds', 'icon': '04d'}], 'clouds': 67,
'pop': 0, 'uvi': 0.86}, {'dt': 1612692000, 'sunrise': 1612678420, 'sunset': 1612711851,
'temp': {'day': -11.72, 'min': -16.93, 'max': -9.81, 'night': -14.36, 'eve': -11.18,'morn': -16.76},
'feels_like': {'day': -17.5, 'night': -20.73, 'eve': -17.09, 'morn': -22},
'pressure': 1023, 'humidity': 94, 'dew_point': -13.77, 'wind_speed': 3.65,
'wind_deg': 81, 'weather': [{'id': 803, 'main': 'Clouds', 'description': 'broken clouds', 'icon': '04d'}], 'clouds': 54, 'pop': 0,'uvi': 0.98}, {'dt': 1612778400, 'sunrise': 1612764705, 'sunset': 1612798372,
'temp': {'day': -12.41, 'min': -15.94, 'max': -8.43, 'night': -11.33,'eve': -9.23, 'morn': -15.94},
'feels_like': {'day': -20.36, 'night': -19.04, 'eve': -17.44,'morn': -22.64}, 'pressure': 1015, 'humidity': 90,
'dew_point': -16.35, 'wind_speed': 6.64, 'wind_deg': 69, 'weather': [{'id': 804, 'main': 'Clouds', 'description': 'overcast clouds', 'icon': '04d'}], 'clouds': 97, 'pop': 0,'uvi': 1.01},{'dt': 1612864800, 'sunrise': 1612850989, 'sunset': 1612884894,'temp': {'day': -13.58, 'min': -14.7, 'max': -11.21, 'night': -11.4, 'eve': -11.26, 'morn': 13.48},'feels_like': {'day': -19.95, 'night': -17.27, 'eve': -17.3, 'morn': -20.35}, 'pressure': 1014, 'humidity': 94,'dew_point': -15.84, 'wind_speed': 4.33, 'wind_deg': 60,'weather': [{'id': 600, 'main': 'Snow', 'description': 'light snow', 'icon': '13d'}], 'clouds': 100,
'pop': 0.73, 'snow': 0.83, 'uvi': 0.98}, {'dt': 1612951200, 'sunrise': 1612937272, 'sunset': 1612971415,
'temp': {'day': -13.58, 'min': -17.87, 'max': -11.37,'night': -17.87, 'eve': -13.19, 'morn': -13.34},
'feels_like': {'day': -19.11, 'night': -23.19, 'eve': -18.44,'morn': -18.75}, 'pressure': 1021, 'humidity': 94,'dew_point': -15.74, 'wind_speed': 3.14, 'wind_deg': 54, 'weather': [{'id': 600, 'main': 'Snow', 'description': 'light snow', 'icon': '13d'}], 'clouds': 82, 'pop': 0.73,'snow': 0.78, 'uvi': 1},{'dt': 1613037600, 'sunrise': 1613023553, 'sunset': 1613057936,
'temp': {'day': -16.26, 'min': -20.28, 'max': -13.32, 'night': -19.55, 'eve': -14.36, 'morn': -19.46},
'feels_like': {'day': -22.12, 'night': -25.23, 'eve': -20, 'morn': -24.97}, 'pressure': 1028, 'humidity': 93,
'dew_point': -18.8, 'wind_speed': 3.41, 'wind_deg': 77,'weather': [{'id': 801, 'main': 'Clouds', 'description': 'few clouds', 'icon': '02d'}], 'clouds': 18, 'pop': 0,'uvi': 1}]}
day = temp_Json['daily']
data_frame_day = pd.json_normalize(day, 'weather', ['dt', 'sunrise', 'sunset', 'pressure', 'humidity', 'dew_point', 'wind_speed','wind_deg', 'clouds', 'pop', 'snow', 'uvi', ['temp', 'day'],['temp', 'min'],['temp', 'max'], ['temp', 'night'], ['temp', 'eve'], ['temp', 'morn'],['feels_like', 'day'], ['feels_like', 'night'], ['feels_like', 'eve'],['feels_like', 'morn']], errors='ignore')
The error is:
Traceback (most recent call last):
File "C:\Users\Jakub\PycharmProjects\Tests\main.py", line 263, in <module>
data_frame_day = pd.json_normalize(day, 'weather',
File "C:\Users\Jakub\PycharmProjects\Tests\venv\lib\site-packages\pandas\io\json\_normalize.py", line 336, in _json_normalize
_recursive_extract(data, record_path, {}, level=0)
File "C:\Users\Jakub\PycharmProjects\Tests\venv\lib\site-packages\pandas\io\json\_normalize.py", line 329, in _recursive_extract
raise KeyError(
KeyError: "Try running with errors='ignore' as key 'snow' is not always present"
This is how I would normalize the records:
import pandas as pd

df = pd.DataFrame(day)
# since the weather column contains a list, we need to transform each element into its own row
df = df.explode('weather')
# normalize the nested dictionary columns
weather = pd.json_normalize(df['weather']).add_prefix('weather.')
feels_like = pd.json_normalize(df['feels_like']).add_prefix('feels_like.')
temp = pd.json_normalize(df['temp']).add_prefix('temp.')
# join the normalized columns back together and drop the original unnormalized columns
df_normalized = pd.concat([weather, temp, feels_like, df], axis=1).drop(columns=['weather', 'temp', 'feels_like'])
This will give you the normalized dataframe.
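For example, to spot-check the result (the column names here simply follow the prefixes added above, so adjust them if you use different prefixes):

print(df_normalized[['dt', 'temp.min', 'temp.max', 'weather.description']].head())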

How to dynamically format nested list of dict with less latency

I need your expertise to simplify this nested-dictionary formatting. I have a list of input signals that needs to be grouped on the u_id field and on the timestamp field with minute precision, and then converted to the output format shown below. I have posted the formatting I have tried. I need to format and process it as fast as possible, because time complexity matters. Help is highly appreciated.
Code snippet
import calendar
import itertools
import time
from datetime import datetime

final_output = []
sorted_signals = sorted(signals, key=lambda x: (x['u_id'], str(x['start_ts'])[0:8]))
data = itertools.groupby(sorted_signals, key=lambda x: (x['u_id'], calendar.timegm(time.strptime(datetime.utcfromtimestamp(x['start_ts']).strftime('%Y-%m-%d-%H:%M'), '%Y-%m-%d-%H:%M'))))

def format_signals(v):
    result = []
    for i in v:
        temp_dict = {}
        temp_dict.update({'timestamp_utc': i['start_ts']})
        for data in i['sign']:
            temp_dict.update({data['name'].split('.')[0]: data['val']})
        result.append(temp_dict)
    return result

for k, v in data:
    output_format = {'ui_id': k[0], 'minute_utc': datetime.fromtimestamp(int(k[1])), 'data': format_signals(v),
                     'processing_timestamp_utc': datetime.strptime(datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")}
    final_output.append(output_format)
print(final_output)
Input
signals = [
{'c_id': '1234', 'u_id': 288, 'f_id': 331,
'sign': [{'name': 'speed', 'val': 9},
{'name': 'pwr', 'val': 1415}], 'start_ts': 1598440244,
'crt_ts': 1598440349, 'map_crt_ts': 1598440351, 'ca_id': 'AT123', 'c_n': 'demo',
'msg_cnt': 2, 'window': 'na', 'type': 'na'},
{'c_id': '1234', 'u_id': 288, 'f_id': 331,
'sign': [{'name': 'speed', 'val': 10},
{'name': 'pwr', 'val': 1416}], 'start_ts': 1598440243,
'crt_ts': 1598440349, 'map_crt_ts': 1598440351, 'ca_id': 'AT123', 'c_n': 'demo',
'msg_cnt': 2, 'window': 'na', 'type': 'na'},
{'c_id': '1234', 'u_id': 287, 'f_id': 331,
'sign': [{'name': 'speed', 'val': 10},
{'name': 'pwr', 'val': 1417}], 'start_ts': 1598440344,
'crt_ts': 1598440349, 'map_crt_ts': 1598440351, 'ca_id': 'AT123', 'c_n': 'demo',
'msg_cnt': 2, 'window': 'na', 'type': 'na'},
{'c_id': '1234', 'u_id': 288, 'f_id': 331,
'sign': [{'name': 'speed.', 'val': 8.2},
{'name': 'pwr', 'val': 925}], 'start_ts': 1598440345,
'crt_ts': 1598440349, 'map_crt_ts': 1598440351, 'ca_id': 'AT172', 'c_n': 'demo',
'msg_cnt': 2, 'window': 'na', 'type': 'na'}
]
Current output
[{
'ui_id': 287,
'minute_utc': datetime.datetime(2020, 8, 26, 16, 42),
'data': [{
'timestamp_utc': 1598440344,
'speed': 10,
'pwr': 1417
}],
'processing_timestamp_utc': datetime.datetime(2020, 8, 29, 19, 35, 46)
}, {
'ui_id': 288,
'minute_utc': datetime.datetime(2020, 8, 26, 16, 40),
'data': [{
'timestamp_utc': 1598440244,
'speed': 9,
'pwr': 1415
}, {
'timestamp_utc': 1598440243,
'speed': 10,
'pwr': 1416
}],
'processing_timestamp_utc': datetime.datetime(2020, 8, 29, 19, 35, 46)
}, {
'ui_id': 288,
'minute_utc': datetime.datetime(2020, 8, 26, 16, 42),
'data': [{
'timestamp_utc': 1598440345,
'speed': 8.2,
'pwr': 925
}],
'processing_timestamp_utc': datetime.datetime(2020, 8, 29, 19, 35, 46)
}]
Required Output
[{
'ui_id': 287,
'f_id': 311,
'c_id': 1234,
'minute_utc': datetime.datetime(2020, 8, 26, 16, 42),
'data': [{
'timestamp_utc': 1598440344,
'speed': 10,
'pwr': 1417
}],
'processing_timestamp_utc': datetime.datetime(2020, 8, 29, 19, 35, 46)
}, {
'ui_id': 288,
'f_id': 311,
'c_id': 1234,
'minute_utc': datetime.datetime(2020, 8, 26, 16, 40),
'data': [{
'timestamp_utc': 1598440244,
'speed': 9,
'pwr': 1415
}, {
'timestamp_utc': 1598440243,
'speed': 10,
'pwr': 1416
}],
'processing_timestamp_utc': datetime.datetime(2020, 8, 29, 19, 35, 46)
}, {
'ui_id': 288,
'f_id': 311,
'c_id': 1234,
'minute_utc': datetime.datetime(2020, 8, 26, 16, 42),
'data': [{
'timestamp_utc': 1598440345,
'speed': 8.2,
'pwr': 925
}],
'processing_timestamp_utc': datetime.datetime(2020, 8, 29, 19, 35, 46)
}]
So, let's define a simple function which will extract from each object the keys required for grouping:
def extract(obj):
    return obj['u_id'], obj['f_id'], obj['c_id'], obj['start_ts'] // 60 * 60
Note: to implement "minute precision" I've floor-divided the timestamp by 60 to cut off the seconds and then multiplied by 60 to get a valid timestamp back.
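For the two u_id 288 signals from the sample input that share a minute, the truncation works out like this:

>>> 1598440244 // 60 * 60
1598440200
>>> 1598440243 // 60 * 60
1598440200

so both signals get the same grouping key and end up in the same bucket.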
Then let's group the objects and form the final list:
from itertools import groupby
from datetime import datetime
...
final_output = []
for (uid, fid, cid, ts), ss in groupby(sorted(signals, key=extract), extract):
    obj = {
        'ui_id': uid,
        'f_id': fid,
        'c_id': int(cid),
        'minute_utc': datetime.utcfromtimestamp(ts),
        'data': [],
        'processing_timestamp_utc': datetime.utcnow()
    }
    for s in ss:
        obj['data'].append({
            'timestamp_utc': s['start_ts'],
            **{i['name']: i['val'] for i in s['sign']}
        })
    final_output.append(obj)
To print final_output in a readable form, we can use pprint:
from pprint import pprint
...
pprint(final_output, sort_dicts=False)
Maybe this helps you write the code in a more straightforward way. If you can just go through the signals and organize them in one loop, you may not need the sort and groupby, which can be heavier.
As you want to gather the signals based on the u_id, a dictionary is handy to get a single entry per u_id. This does that much; you just need to add building the output from this organized dict of signals (a sketch of that step follows after the output below):
import pprint

organized = {}
for s in signals:
    u_id = s['u_id']
    entry = organized.get(u_id, None)
    if entry is None:
        entry = []
        organized[u_id] = entry
    entry.append(s)
pprint.pprint(organized)
It is executable here, with the output pasted below: https://repl.it/repls/ShallowQuintessentialInteger
{287: [{'c_id': '1234',
'c_n': 'demo',
'ca_id': 'AT123',
'crt_ts': 1598440349,
'f_id': 331,
'map_crt_ts': 1598440351,
'msg_cnt': 2,
'sign': [{'name': 'speed', 'val': 10}, {'name': 'pwr', 'val': 1417}],
'start_ts': 1598440344,
'type': 'na',
'u_id': 287,
'window': 'na'}],
288: [{'c_id': '1234',
'c_n': 'demo',
'ca_id': 'AT123',
'crt_ts': 1598440349,
'f_id': 331,
'map_crt_ts': 1598440351,
'msg_cnt': 2,
'sign': [{'name': 'speed', 'val': 9}, {'name': 'pwr', 'val': 1415}],
'start_ts': 1598440244,
'type': 'na',
'u_id': 288,
'window': 'na'},
{'c_id': '1234',
'c_n': 'demo',
'ca_id': 'AT123',
'crt_ts': 1598440349,
'f_id': 331,
'map_crt_ts': 1598440351,
'msg_cnt': 2,
'sign': [{'name': 'speed', 'val': 10}, {'name': 'pwr', 'val': 1416}],
'start_ts': 1598440243,
'type': 'na',
'u_id': 288,
'window': 'na'},
{'c_id': '1234',
'c_n': 'demo',
'ca_id': 'AT172',
'crt_ts': 1598440349,
'f_id': 331,
'map_crt_ts': 1598440351,
'msg_cnt': 2,
'sign': [{'name': 'speed.', 'val': 8.2}, {'name': 'pwr', 'val': 925}],
'start_ts': 1598440345,
'type': 'na',
'u_id': 288,
'window': 'na'}]}
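Following up on "you just need to add building the output": a minimal sketch of that remaining step, assuming the organized dict from above and additionally bucketing each u_id's signals by minute (start_ts // 60 * 60), since the required output keeps one entry per u_id per minute:

from datetime import datetime

final_output = []
for u_id, entries in organized.items():
    # bucket this u_id's signals by minute
    by_minute = {}
    for s in entries:
        by_minute.setdefault(s['start_ts'] // 60 * 60, []).append(s)
    for minute_ts, group in sorted(by_minute.items()):
        first = group[0]
        final_output.append({
            'ui_id': u_id,
            'f_id': first['f_id'],
            'c_id': int(first['c_id']),
            'minute_utc': datetime.utcfromtimestamp(minute_ts),
            'data': [{'timestamp_utc': s['start_ts'],
                      **{i['name'].split('.')[0]: i['val'] for i in s['sign']}}
                     for s in group],
            'processing_timestamp_utc': datetime.utcnow(),
        })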

Cannot convert strings to dictionaries/json

I am reading an Excel file into a Pandas dataframe.
In the column "segment_efforts" there are string representations of lists of dictionaries that I want to turn into Python objects like this:
import ast

for cell in range(len(df)):
    d = ast.literal_eval(df['segment_efforts'].values[cell])
However, I get
EOL while scanning string literal
literal_eval works fine, however, for a single cell copied into the editor and triple-quoted.
How do I fix the above code to achieve the same effect as putting triple quotes around the string?
Example string:
[{'elapsed_time': 628, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 1, 'id': 4679807677, 'average_watts': 143.0, 'average_speed': 7.5, 'average_cadence': 114.4, 'start_date_local': '2018-03-20T19:45:40Z', 'distance': 4708.72, 'split': 1, 'start_index': 0, 'name': 'Lap 1', 'max_speed': 8.8, 'average_heartrate': 126.3, 'end_index': 628, 'moving_time': 628, 'start_date': '2018-03-20T18:45:40Z', 'max_heartrate': 143.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 309, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 2, 'id': 4679807680, 'average_watts': 209.2, 'average_speed': 8.09, 'average_cadence': 119.2, 'start_date_local': '2018-03-20T19:56:09Z', 'distance': 2499.13, 'split': 2, 'start_index': 629, 'name': 'Lap 2', 'max_speed': 8.3, 'average_heartrate': 150.6, 'end_index': 937, 'moving_time': 309, 'start_date': '2018-03-20T18:56:09Z', 'max_heartrate': 155.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 96, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 3, 'id': 4679807683, 'average_watts': 137.2, 'average_speed': 5.03, 'average_cadence': 84.6, 'start_date_local': '2018-03-20T20:01:18Z', 'distance': 482.84, 'split': 3, 'start_index': 938, 'name': 'Lap 3', 'max_speed': 7.5, 'average_heartrate': 131.3, 'end_index': 1034, 'moving_time': 96, 'start_date': '2018-03-20T19:01:18Z', 'max_heartrate': 151.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 306, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 4, 'id': 4679807685, 'average_watts': 209.3, 'average_speed': 8.06, 'average_cadence': 119.1, 'start_date_local': '2018-03-20T20:02:55Z', 'distance': 2467.17, 'split': 4, 'start_index': 1035, 'name': 'Lap 4', 'max_speed': 8.3, 'average_heartrate': 148.6, 'end_index': 1340, 'moving_time': 306, 'start_date': '2018-03-20T19:02:55Z', 'max_heartrate': 157.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 94, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 5, 'id': 4679807688, 'average_watts': 149.6, 'average_speed': 6.09, 'average_cadence': 91.5, 'start_date_local': '2018-03-20T20:08:01Z', 'distance': 572.04, 'split': 5, 'start_index': 1341, 'name': 'Lap 5', 'max_speed': 8.1, 'average_heartrate': 136.6, 'end_index': 1435, 'moving_time': 94, 'start_date': '2018-03-20T19:08:01Z', 'max_heartrate': 147.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 323, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 6, 'id': 4679807691, 'average_watts': 209.0, 'average_speed': 8.09, 'average_cadence': 119.1, 'start_date_local': '2018-03-20T20:09:36Z', 'distance': 2612.63, 'split': 6, 'start_index': 1436, 'name': 'Lap 6', 'max_speed': 8.3, 'average_heartrate': 148.9, 'end_index': 1759, 'moving_time': 323, 'start_date': '2018-03-20T19:09:36Z', 'max_heartrate': 157.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 103, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 7, 'id': 4679807693, 'average_watts': 145.2, 'average_speed': 
5.97, 'average_cadence': 91.5, 'start_date_local': '2018-03-20T20:15:00Z', 'distance': 614.71, 'split': 7, 'start_index': 1760, 'name': 'Lap 7', 'max_speed': 7.7, 'average_heartrate': 134.1, 'end_index': 1862, 'moving_time': 103, 'start_date': '2018-03-20T19:15:00Z', 'max_heartrate': 153.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 308, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 8, 'id': 4679807695, 'average_watts': 209.0, 'average_speed': 7.97, 'average_cadence': 117.9, 'start_date_local': '2018-03-20T20:16:43Z', 'distance': 2454.1, 'split': 8, 'start_index': 1863, 'name': 'Lap 8', 'max_speed': 8.3, 'average_heartrate': 147.7, 'end_index': 2170, 'moving_time': 308, 'start_date': '2018-03-20T19:16:43Z', 'max_heartrate': 154.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 96, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 9, 'id': 4679807697, 'average_watts': 142.5, 'average_speed': 5.25, 'average_cadence': 80.9, 'start_date_local': '2018-03-20T20:21:51Z', 'distance': 503.87, 'split': 9, 'start_index': 2171, 'name': 'Lap 9', 'max_speed': 7.8, 'average_heartrate': 133.8, 'end_index': 2266, 'moving_time': 96, 'start_date': '2018-03-20T19:21:51Z', 'max_heartrate': 152.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 306, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 10, 'id': 4679807701, 'average_watts': 218.6, 'average_speed': 8.14, 'average_cadence': 120.0, 'start_date_local': '2018-03-20T20:23:27Z', 'distance': 2491.13, 'split': 10, 'start_index': 2267, 'name': 'Lap 10', 'max_speed': 8.4, 'average_heartrate': 151.5, 'end_index': 2573, 'moving_time': 306, 'start_date': '2018-03-20T19:23:27Z', 'max_heartrate': 158.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 265, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 11, 'id': 4679807703, 'average_watts': 115.7, 'average_speed': 6.56, 'average_cadence': 102.1, 'start_date_local': '2018-03-20T20:28:34Z', 'distance': 1739.46, 'split': 11, 'start_index': 2574, 'name': 'Lap 11', 'max_speed': 7.9, 'average_heartrate': 125.3, 'end_index': 2839, 'moving_time': 265, 'start_date': '2018-03-20T19:28:34Z', 'max_heartrate': 154.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}]
I am assuming your data came originally from Python directly without being explicitly turned into JSON format, thus you get a few errors when trying to parse the example string as JSON.
These issues are:
The use of the single quotation mark instead of double
The use of Python's True instead of JavaScript's true
The use of Python's False instead of JavaScript's false
The use of Python's None instead of JavaScript's null
If you replace these four problematic styles in your string, it is possible to parse as JSON without raising a JSONDecodeError.
Example:
import json
s = """[{'elapsed_time': 628, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 1, 'id': 4679807677, 'average_watts': 143.0, 'average_speed': 7.5, 'average_cadence': 114.4, 'start_date_local': '2018-03-20T19:45:40Z', 'distance': 4708.72, 'split': 1, 'start_index': 0, 'name': 'Lap 1', 'max_speed': 8.8, 'average_heartrate': 126.3, 'end_index': 628, 'moving_time': 628, 'start_date': '2018-03-20T18:45:40Z', 'max_heartrate': 143.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 309, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 2, 'id': 4679807680, 'average_watts': 209.2, 'average_speed': 8.09, 'average_cadence': 119.2, 'start_date_local': '2018-03-20T19:56:09Z', 'distance': 2499.13, 'split': 2, 'start_index': 629, 'name': 'Lap 2', 'max_speed': 8.3, 'average_heartrate': 150.6, 'end_index': 937, 'moving_time': 309, 'start_date': '2018-03-20T18:56:09Z', 'max_heartrate': 155.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 96, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 3, 'id': 4679807683, 'average_watts': 137.2, 'average_speed': 5.03, 'average_cadence': 84.6, 'start_date_local': '2018-03-20T20:01:18Z', 'distance': 482.84, 'split': 3, 'start_index': 938, 'name': 'Lap 3', 'max_speed': 7.5, 'average_heartrate': 131.3, 'end_index': 1034, 'moving_time': 96, 'start_date': '2018-03-20T19:01:18Z', 'max_heartrate': 151.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 306, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 4, 'id': 4679807685, 'average_watts': 209.3, 'average_speed': 8.06, 'average_cadence': 119.1, 'start_date_local': '2018-03-20T20:02:55Z', 'distance': 2467.17, 'split': 4, 'start_index': 1035, 'name': 'Lap 4', 'max_speed': 8.3, 'average_heartrate': 148.6, 'end_index': 1340, 'moving_time': 306, 'start_date': '2018-03-20T19:02:55Z', 'max_heartrate': 157.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 94, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 5, 'id': 4679807688, 'average_watts': 149.6, 'average_speed': 6.09, 'average_cadence': 91.5, 'start_date_local': '2018-03-20T20:08:01Z', 'distance': 572.04, 'split': 5, 'start_index': 1341, 'name': 'Lap 5', 'max_speed': 8.1, 'average_heartrate': 136.6, 'end_index': 1435, 'moving_time': 94, 'start_date': '2018-03-20T19:08:01Z', 'max_heartrate': 147.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 323, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 6, 'id': 4679807691, 'average_watts': 209.0, 'average_speed': 8.09, 'average_cadence': 119.1, 'start_date_local': '2018-03-20T20:09:36Z', 'distance': 2612.63, 'split': 6, 'start_index': 1436, 'name': 'Lap 6', 'max_speed': 8.3, 'average_heartrate': 148.9, 'end_index': 1759, 'moving_time': 323, 'start_date': '2018-03-20T19:09:36Z', 'max_heartrate': 157.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 103, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 7, 'id': 4679807693, 'average_watts': 145.2, 
'average_speed': 5.97, 'average_cadence': 91.5, 'start_date_local': '2018-03-20T20:15:00Z', 'distance': 614.71, 'split': 7, 'start_index': 1760, 'name': 'Lap 7', 'max_speed': 7.7, 'average_heartrate': 134.1, 'end_index': 1862, 'moving_time': 103, 'start_date': '2018-03-20T19:15:00Z', 'max_heartrate': 153.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 308, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 8, 'id': 4679807695, 'average_watts': 209.0, 'average_speed': 7.97, 'average_cadence': 117.9, 'start_date_local': '2018-03-20T20:16:43Z', 'distance': 2454.1, 'split': 8, 'start_index': 1863, 'name': 'Lap 8', 'max_speed': 8.3, 'average_heartrate': 147.7, 'end_index': 2170, 'moving_time': 308, 'start_date': '2018-03-20T19:16:43Z', 'max_heartrate': 154.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 96, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 9, 'id': 4679807697, 'average_watts': 142.5, 'average_speed': 5.25, 'average_cadence': 80.9, 'start_date_local': '2018-03-20T20:21:51Z', 'distance': 503.87, 'split': 9, 'start_index': 2171, 'name': 'Lap 9', 'max_speed': 7.8, 'average_heartrate': 133.8, 'end_index': 2266, 'moving_time': 96, 'start_date': '2018-03-20T19:21:51Z', 'max_heartrate': 152.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 306, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 10, 'id': 4679807701, 'average_watts': 218.6, 'average_speed': 8.14, 'average_cadence': 120.0, 'start_date_local': '2018-03-20T20:23:27Z', 'distance': 2491.13, 'split': 10, 'start_index': 2267, 'name': 'Lap 10', 'max_speed': 8.4, 'average_heartrate': 151.5, 'end_index': 2573, 'moving_time': 306, 'start_date': '2018-03-20T19:23:27Z', 'max_heartrate': 158.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}, {'elapsed_time': 265, 'device_watts': True, 'activity': {'resource_state': 1, 'id': 1462917682}, 'lap_index': 11, 'id': 4679807703, 'average_watts': 115.7, 'average_speed': 6.56, 'average_cadence': 102.1, 'start_date_local': '2018-03-20T20:28:34Z', 'distance': 1739.46, 'split': 11, 'start_index': 2574, 'name': 'Lap 11', 'max_speed': 7.9, 'average_heartrate': 125.3, 'end_index': 2839, 'moving_time': 265, 'start_date': '2018-03-20T19:28:34Z', 'max_heartrate': 154.0, 'resource_state': 2, 'athlete': {'resource_state': 1, 'id': 3255732}, 'total_elevation_gain': 0.0}]"""
s = s.replace("'", '"')
s = s.replace("True", "true")
s = s.replace("False", "false")
s = s.replace("None", "null")
d = json.loads(s)
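Since the strings are Python literals rather than JSON (single quotes, True/False/None), another option worth noting is ast.literal_eval, which the question already uses and which avoids the string replacements entirely, assuming the cell contains the complete literal (a cell truncated by Excel would still raise the EOL error):

import ast

laps = ast.literal_eval(s)  # s is the example string above
print(len(laps), laps[0]['average_watts'])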

change format of python dictionary

I have a python dictionary in this format:
{('first', 'negative'): 57, ('first', 'neutral'): 366, ('first', 'positive'): 249, ('second', 'negative'): 72, ('second', 'neutral'): 158, ('second', 'positive'): 99, ('third', 'negative'): 156, ('third', 'neutral'): 348, ('third', 'positive'): 270}
I want to convert it to:
{'first': [{'sentiment':'negative', 'value': 57}, {'sentiment': 'neutral', 'value': 366}, {'sentiment': 'positive', 'value': 249}], 'second': [{'sentiment':'negative', 'value': 72}, {'sentiment': 'neutral', 'value': 158}, {'sentiment': 'positive', 'value': 99}], 'third': [{'sentiment':'negative', 'value': 156}, {'sentiment': 'neutral', 'value': 348}, {'sentiment': 'positive', 'value': 270}]}
Thanks in advance
This should help.
o = {('first', 'negative'): 57, ('first', 'neutral'): 366, ('first', 'positive'): 249, ('second', 'negative'): 72, ('second', 'neutral'): 158, ('second', 'positive'): 99, ('third', 'negative'): 156, ('third', 'neutral'): 348, ('third', 'positive'): 270}
d = {}
for k, v in o.items():  # iterate over your dict
    if k[0] not in d:
        d[k[0]] = [{"sentiment": k[1], "value": v}]
    else:
        d[k[0]].append({"sentiment": k[1], "value": v})
print(d)
Output:
{'second': [{'value': 72, 'sentiment': 'negative'}, {'value': 99, 'sentiment': 'positive'}, {'value': 158, 'sentiment': 'neutral'}], 'third': [{'value': 156, 'sentiment': 'negative'}, {'value': 348, 'sentiment': 'neutral'}, {'value': 270, 'sentiment': 'positive'}], 'first': [{'value': 57, 'sentiment': 'negative'}, {'value': 366, 'sentiment': 'neutral'}, {'value': 249, 'sentiment': 'positive'}]}
from collections import defaultdict

# input_dict is the original {('first', 'negative'): 57, ...} mapping
out = defaultdict(list)
for (label, sentiment), value in input_dict.items():
    out[label].append(dict(sentiment=sentiment, value=value))
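For example, converting the defaultdict back to a plain dict gives the requested shape:

print(dict(out))
# {'first': [{'sentiment': 'negative', 'value': 57}, ...], 'second': [...], 'third': [...]}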

Converting data from Json to String in python

Could anybody please explain how to convert the following JSON data into a string in Python? It's very big, but I need your help...
You can see it from the following link:- http://api.openweathermap.org/data/2.5/forecast/daily?q=delhi&mode=json&units=metric&cnt=7&appid=146f5f89c18a703450d3bd6737d4fc94
Please suggest a solution; it is important for my project :-)
You can decode a JSON string in Python like this:
import json
data = json.loads('json_string')
Source: https://docs.python.org/2/library/json.html
import requests

url = 'http://api.openweathermap.org/data/2.5/forecast/daily?q=delhi&mode=json&units=metric&cnt=7&appid=146f5f89c18a703450d3bd6737d4fc94'
response = requests.get(url)
response.text    # this is a string
response.json()  # this is a json dictionary
s = "The City is {city[name]} today's HIGH is {list[0][temp][max]}".format(**response.json())
print(s)
Some simple code that will read the JSON from your page and produce a Python dictionary follows. I have used the implicit concatenation of adjacent strings to improve the layout of the code.
import json
import urllib.request
f = urllib.request.urlopen(
    url="http://api.openweathermap.org/data/2.5/forecast/daily?"
        "q=delhi&mode=json&units=metric&"
        "cnt=7&appid=146f5f89c18a703450d3bd6737d4fc94")
content = f.read()
result = json.loads(content.decode("utf-8"))
print(result)
This gives me the following output (which I have not shown in code style as it would appear in a single long line):
{'city': {'coord': {'lat': 28.666668, 'lon': 77.216667}, 'country': 'IN', 'id': 1273294, 'population': 0, 'name': 'Delhi'}, 'cnt': 7, 'message': 0.0081, 'list': [{'dt': 1467093600, 'weather': [{'icon': '01n', 'id': 800, 'description': 'clear sky', 'main': 'Clear'}], 'humidity': 82, 'clouds': 0, 'pressure': 987.37, 'speed': 2.63, 'temp': {'max': 32, 'eve': 32, 'night': 30.67, 'min': 30.67, 'day': 32, 'morn': 32}, 'deg': 104}, {'dt': 1467180000, 'weather': [{'icon': '10d', 'id': 501, 'description': 'moderate rain', 'main': 'Rain'}], 'humidity': 74, 'clouds': 12, 'pressure': 989.2, 'speed': 4.17, 'rain': 9.91, 'temp': {'max': 36.62, 'eve': 36.03, 'night': 31.08, 'min': 29.39, 'day': 35.61, 'morn': 29.39}, 'deg': 126}, {'dt': 1467266400, 'weather': [{'icon': '02d', 'id': 801, 'description': 'few clouds', 'main': 'Clouds'}], 'humidity': 71, 'clouds': 12, 'pressure': 986.56, 'speed': 3.91, 'temp': {'max': 36.27, 'eve': 35.19, 'night': 30.87, 'min': 29.04, 'day': 35.46, 'morn': 29.04}, 'deg': 109}, {'dt': 1467352800, 'weather': [{'icon': '10d', 'id': 502, 'description': 'heavy intensity rain', 'main': 'Rain'}], 'humidity': 100, 'clouds': 48, 'pressure': 984.48, 'speed': 0, 'rain': 18.47, 'temp': {'max': 30.87, 'eve': 30.87, 'night': 28.24, 'min': 24.96, 'day': 27.16, 'morn': 24.96}, 'deg': 0}, {'dt': 1467439200, 'weather': [{'icon': '10d', 'id': 501, 'description': 'moderate rain', 'main': 'Rain'}], 'humidity': 0, 'clouds': 17, 'pressure': 983.1, 'speed': 6.54, 'rain': 5.31, 'temp': {'max': 35.48, 'eve': 32.96, 'night': 27.82, 'min': 27.82, 'day': 35.48, 'morn': 29.83}, 'deg': 121}, {'dt': 1467525600, 'weather': [{'icon': '10d', 'id': 501, 'description': 'moderate rain', 'main': 'Rain'}], 'humidity': 0, 'clouds': 19, 'pressure': 984.27, 'speed': 3.17, 'rain': 7.54, 'temp': {'max': 34.11, 'eve': 34.11, 'night': 27.88, 'min': 27.53, 'day': 33.77, 'morn': 27.53}, 'deg': 133}, {'dt': 1467612000, 'weather': [{'icon': '10d', 'id': 503, 'description': 'very heavy rain', 'main': 'Rain'}], 'humidity': 0, 'clouds': 60, 'pressure': 984.82, 'speed': 5.28, 'rain': 54.7, 'temp': {'max': 33.12, 'eve': 33.12, 'night': 26.15, 'min': 25.78, 'day': 31.91, 'morn': 25.78}, 'deg': 88}], 'cod': '200'}
