Generating a Chart.js chart with Python data

I am working on a little stock-market project in Python.
Every week, a status.xlsx file is generated that tells me what sectors make up my portfolio, e.g.
xls = pd.ExcelFile('Status_week_9.xlsx')
df1=pd.read_excel(xls)
print(df1)
I am looping over all files, so I get a dataframe similar to this for every file:
sector                 pct
Consumer Cyclical      0.319638
Industrials            0.203268
Financial Services     0.200217
...
Now I would like to loop through these weekly reports and pass the data to a Chart.js template in order to get a stacked bar for each week that shows the increase / decrease of sectors over time, e.g. https://jsfiddle.net/qg1ya5vk/4/ (demo only, does not add up to 1)
The template file is shown at the end of this question (under "Template for chart").
The idea was to use a template with placeholders for the chart and use
from string import Template
to replace the placeholders with the corresponding values.
The variables are "labels" and "dataset". "Labels" consists of the calendar weeks. This can be extracted from the filenames. So far, so good ;-)
Now for the "dataset", I'm not sure how to get information from the n dataframes.
One dataset would have to look like this:
{
  label: 'Energy',
  data: [0.037975, 0.038512, 0.039817, 0.065010],
}
So this would mean that the Energy sector had a share of 3.7975% in week 1, 3.8512% in week 2 etc. Complicating things even further, it's also possible that one sector is not present in one week, so I would have to add a 0 then, because the sector is not part of the original dataframe.
There can be n (comma-separated) datasets.
I probably have to transpose the dataframe but it doesn't quite do the trick (yet).
So far my code looks like this:
import pandas as pd
import glob
import os
from string import Template
labels=[]
dataset=[]
files =...
for i in files:
    cw = i.split('_')[3].split('.')[0]   ## extract calendar week from filename
    xls = pd.ExcelFile(i)
    df1 = pd.read_excel(xls, 'Sectors')  ## select sheet 'Sectors'
    df1['CW'] = cw                       ## add cw to df1
    df1_t = df1.T                        ## transpose df1
    sectors = df1.sector.to_list()
    share = df1.pct.to_list()
    labels.append(cw)
    dataset.append(df1_t)  ##??
    # {
    #   label: 'Energy',
    #   data: [0.037975, 0.038512, 0.039817, 0.065010],
    # }
d = {'label' : labels, 'datasets' : dataset}
## open Chart template and put in values:
with open('template.txt', 'r') as f:
    src = Template(f.read())
    result = src.substitute(d)
print(result)
How would you generate the datasets?
I am also thinking this is a little bit long-winded and error-prone. Maybe there is another way to tackle this?
Template for chart:
var label = $label;
var ctx = document.getElementById("myChart4").getContext('2d');
var myChart = new Chart(ctx, {
    type: 'bar',
    data: {
        labels: label,
        datasets: [
            $datasets //**this is the culprit**
        ],
    },
    options: {
        tooltips: {
            displayColors: true,
            callbacks: {
                mode: 'x',
            },
        },
        scales: {
            xAxes: [{
                stacked: true,
                gridLines: {
                    display: false,
                }
            }],
            yAxes: [{
                stacked: true,
                ticks: {
                    beginAtZero: true,
                },
                type: 'linear',
            }]
        },
        responsive: true,
        maintainAspectRatio: false,
        legend: { position: 'bottom' },
    }
});

What you are looking for is the json module. You can simply arrange your data in the correct form in Python and write it to a JSON file that Chart.js can understand.
import json
data = {'label' : labels, 'datasets' : dataset}
with open('data.json', 'w') as f:
    json.dump(data, f)
Getting that JSON into Chart.js is then a separate question.
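One way to arrange the data is to concatenate the weekly frames, pivot on sector, and fill missing weeks with 0. Below is a minimal sketch, assuming filenames like Status_week_9.xlsx and a 'Sectors' sheet with 'sector' and 'pct' columns (adjust the globbing and filename parsing to your actual layout):
import glob
import json
import pandas as pd

frames = []
for path in glob.glob('Status_week_*.xlsx'):
    cw = int(path.split('_')[-1].split('.')[0])   # calendar week from the filename (assumption)
    df = pd.read_excel(path, sheet_name='Sectors')
    df['CW'] = cw
    frames.append(df)

# One row per (sector, week); pivot so each sector becomes a row of weekly shares,
# filling 0 for weeks in which a sector does not appear.
wide = (pd.concat(frames)
          .pivot_table(index='sector', columns='CW', values='pct', fill_value=0))

labels = [str(c) for c in wide.columns]               # calendar weeks, in order
datasets = [{'label': sector, 'data': row.tolist()}   # one Chart.js dataset per sector
            for sector, row in wide.iterrows()]

# Either dump {'labels': labels, 'datasets': datasets} to data.json as above,
# or fill the Template placeholders directly:
d = {'label': json.dumps(labels), 'datasets': json.dumps(datasets)[1:-1]}
json.dumps takes care of the quoting, and stripping the outer brackets leaves the comma-separated dataset objects that the $datasets placeholder inside datasets: [...] expects.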

Related

How to manipulate and slice multi-dimensional JSON data in Python?

I'm trying to set up a convenient system for storing and analyzing data from experiments. For the data files I use the following JSON format:
{
  "sample_id": "",
  "timestamp": "",
  "other_metadata1": "",
  "measurements": {
    "type1": {
      "timestamp": "",
      "other_metadata2": "",
      "data": {
        "parameter1": [1, 2, 3],
        "parameter2": [4, 5, 6]
      }
    },
    "type2": { ... }
  }
}
Now for analyzing many of these files, I want to filter for sample metadata and measurement metadata to get a subset of the data to plot. I wrote a function like this:
def get_subset(data_dict, include_samples={}, include_measurements={}):
    # Start with a copy of all datasets
    subset = copy.deepcopy(data_dict)
    # Include samples if they satisfy certain properties
    for prop, req in include_samples.items():
        subset = {file: sample for file, sample in subset.items() if sample[prop] == req}
    # Filter by measurement properties
    for file, sample in subset.items():
        measurements = sample['measurements'].copy()
        for prop, req in include_measurements.items():
            measurements = [meas for meas in measurements if meas[prop] == req]
        # Replace the measurements list
        sample['measurements'] = measurements
    return subset
While this works, I feel like I'm re-inventing the wheel of something like pandas. I would like to have more functionality, like dropping all NaN values, excluding based on metadata, etc., all of which is available in pandas. However, my data format is not compatible with its 2D nature.
Any suggestions on how to go about manipulating and slicing such data structures without reinventing a lot of things?
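One common way to get pandas-style filtering without re-inventing it is to flatten the nested structure into a long ("tidy") DataFrame, one row per data point. A rough sketch assuming the format shown above (the function and column names are just illustrative):
import pandas as pd

def to_long_frame(files):
    """files: dict mapping filename -> the parsed JSON dict shown above."""
    rows = []
    for fname, sample in files.items():
        for mtype, meas in sample['measurements'].items():
            for param, values in meas['data'].items():
                for i, v in enumerate(values):
                    rows.append({
                        'file': fname,
                        'sample_id': sample.get('sample_id'),
                        'measurement_type': mtype,
                        'parameter': param,
                        'point': i,
                        'value': v,
                    })
    return pd.DataFrame(rows)

# Filtering then becomes ordinary pandas, e.g.:
# df = to_long_frame(data_dict)
# subset = df[(df['measurement_type'] == 'type1') & (df['parameter'] == 'parameter1')].dropna()
Per-sample or per-measurement metadata can be added as extra columns in the same loop, so metadata filters become plain boolean masks as well.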

Advice - Ranked Heatmap in R or Python or Highcharts

So this is a bit of an open question and any feedback would be appreciated. Essentially, I want to create the chart below (possibly in Highcharts, highcharter, Python or just R). What is the name of this specific chart (ranked heatmap)?
The issue I keep running into on each of my attempts is that there is no fixed y-axis. As the chart above shows, each year is ranked from best-performing asset to worst, with each asset having a specific colour.
I have tried to create a heatmap, but because the y-axis is still fixed, the ranked aspect does not work. Below is a draft version of what I tried to create in highcharter.
JS fiddle reference: https://www.highcharts.com/demo/heatmap
So pretty much, if someone can point me in the right direction or share their thoughts on creating a chart like the first one, that would be useful.
Thank you in advance.
You can simply calculate the y value based on your data. Assuming that you have a data format similar to the one below:
const columnsData = [{
    year: '2001',
    data: [{
        name: 'A',
        value: 55
    }, {
        name: 'B',
        value: 45
    }, ...]
}, {
    year: '2002',
    data: [...]
}, {
    year: '2003',
    data: [...]
}];
You can loop through the data and build the data structure required by Highcharts:
const processedData = []; // one heatmap point per asset per year

columnsData.forEach(column => {
    column.data.sort((a, b) => a.value - b.value);
    column.data.forEach((dataEl, index) => {
        processedData.push({
            name: column.year,
            y: index,
            value: dataEl.value,
            dataLabels: {
                format: dataEl.name + ': ' + dataEl.value
            }
        });
    });
});
Highcharts.chart('container', {
    ...,
    series: [{
        data: processedData,
        ...
    }]
});
Live demo: https://jsfiddle.net/BlackLabel/jkzsbv4c/
API Reference: https://api.highcharts.com/highcharts/series.heatmap.data
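Since the question also mentions Python: the same rank-per-year calculation can be done with pandas before handing the points to Highcharts. A small sketch with made-up values (column names are illustrative):
import json
import pandas as pd

# One row per asset per year, same information as columnsData above (values made up)
df = pd.DataFrame({
    'year':  ['2001', '2001', '2002', '2002'],
    'name':  ['A', 'B', 'A', 'B'],
    'value': [55, 45, 30, 60],
})

# Rank within each year; 0 = worst, matching the sorted index in the JS loop above
df['y'] = df.groupby('year')['value'].rank(method='first').astype(int) - 1

processed_data = [
    {'name': r['year'], 'y': int(r['y']), 'value': float(r['value']),
     'dataLabels': {'format': f"{r['name']}: {r['value']}"}}
    for r in df.to_dict('records')
]
print(json.dumps(processed_data))  # paste/serve this as the heatmap series data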

Turning JSON into dataframe with pandas

I'm trying to get a data frame but keep running into various error messages depending on the arguments I specify in read_json after I specify my file.
I've run through many of the arguments in the pandas.read_json documentation, but haven't been able to identify a solution.
import pandas
json_file = "https://gis.fema.gov/arcgis/rest/services/NSS/OpenShelters/MapServer/0/query?where=1%3D1&outFields=*&returnGeometry=false&outSR=4326&f=json"
pandas.read_json(json_file)
Because the JSON is not directly convertible to a DataFrame. read_json works only with a few formats defined by the orient parameter. Your JSON doesn't follow any of the allowed formats, so you need to manipulate the JSON before converting it to a data frame.
Let's take a high level look at your JSON:
{
"displayFieldName": ...,
"fieldAliases": {...},
"fields": {...},
"features": [...]
}
I'm going to hazard a guess and assume the features node is what you want. Let's dig deeper into features:
"features": [
{
"attributes": {
"OBJECTID": 1,
"SHELTER_ID": 223259,
...
}
},
{
"attributes": {
"OBJECTID": 2,
"SHELTER_ID": 223331,
...
}
},
...
]
features contains a list of objects, each having an attributes node. The data contained in the attributes node is what you actually want.
Here's the code:
import pandas as pd
import json
from urllib.request import urlopen
json_file = "https://gis.fema.gov/arcgis/rest/services/NSS/OpenShelters/MapServer/0/query?where=1%3D1&outFields=*&returnGeometry=false&outSR=4326&f=json"
data = urlopen(json_file).read()
raw_json = json.loads(data)
formatted_json = [feature['attributes'] for feature in raw_json['features']]
formatted_json is now a list of dictionaries containing the data we are after. It is no longer JSON. To create the data frame:
df = pd.DataFrame(formatted_json)
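Equivalently, json_normalize can flatten the features list in one step (a small variation, assuming a reasonably recent pandas; on older versions import it from pandas.io.json), reusing raw_json from above:
df = pd.json_normalize(raw_json['features'])
# columns come back prefixed, e.g. 'attributes.OBJECTID'; strip the prefix if you prefer
df.columns = [c.replace('attributes.', '') for c in df.columns]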

Filtering pandas dataframe by date to count views for timeline of programs

I need to count viewers by program for a streaming channel from a JSON log file.
I identify the programs by their start times, as in the programs JSON below.
So far I have two Dataframes like this:
The first one contains all the timestamps from the logfile
viewers_from_log = pd.read_json('sqllog.json', encoding='UTF-8')
# Convert date string to pandas datetime object:
viewers_from_log['time'] = pd.to_datetime(viewers_from_log['time'])
Source JSON file:
[
{
"logid": 191605,
"time": "0:00:17"
},
{
"logid": 191607,
"time": "0:00:26"
},
{
"logid": 191611,
"time": "0:01:20"
}
]
The second contains the starting times and titles of the programs
import json
with open('programs.json', encoding='UTF-8') as f:
    programs_start_time = pd.DataFrame.from_dict(json.load(f), orient='index')
Source JSON file:
{
  "2019-05-29": [
    {
      "title": "\"Amiről a kövek mesélnek\"",
      "startTime_dt": "2019-05-29T00:00:40Z"
    },
    {
      "title": "Koffer - Kedvcsináló Kul(t)túrák Külföldön",
      "startTime_dt": "2019-05-29T00:22:44Z"
    },
    {
      "title": "Gubancok",
      "startTime_dt": "2019-05-29T00:48:08Z"
    }
  ]
}
So what I need to do is count the entries per program in the log file and link them to the program titles.
My approach is to slice the log data for each date range from the program data and get the shape. Next, add a column to the program data with the results:
import pandas as pd

# setup test data
log_data = {'Time': ['2019-05-30 00:00:26', '2019-05-30 00:00:50', '2019-05-30 00:05:50', '2019-05-30 00:23:26']}
log_data = pd.DataFrame(data=log_data)
program_data = {'Time': ['2019-05-30 00:00:00', '2019-05-30 00:22:44'],
                'Program': ['Program 1', 'Program 2']}
program_data = pd.DataFrame(data=program_data)

counts = []
for index, row in program_data.iterrows():
    # get counts on selected range
    try:
        log_range = log_data[(log_data['Time'] > program_data.loc[index].values[0]) & (log_data['Time'] < program_data.loc[index + 1].values[0])]
        counts.append(log_range.shape[0])
    except KeyError:
        # last program: there is no next start time, so count everything after it
        log_range = log_data[log_data['Time'] > program_data.loc[index].values[0]]
        counts.append(log_range.shape[0])

# add additional column with collected counts
program_data['Counts'] = counts
Output:
Time Program Counts
0 2019-05-30 00:00:00 Program 1 3
1 2019-05-30 00:22:44 Program 2 1
A working (but maybe a little quick and dirty) method:
Use the .shift(-1) method on the timestamp column of the programs_start_time dataframe to get an additional column named date_end, indicating the end timestamp of each TV program.
Then for each example_timestamp in the log file, you can query the TV programs dataframe like this: df[(df['date_start'] <= example_timestamp) & (df['date_end'] > example_timestamp)] (make sure you substitute df with your dataframe's name: programs_start_time), which will give you exactly one dataframe row, from which you can extract the name of the TV program.
Hope this helps!
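A sketch of that shift(-1) approach, reusing the test data from the previous answer (column names are illustrative):
import pandas as pd

log_data = pd.DataFrame({'Time': pd.to_datetime(
    ['2019-05-30 00:00:26', '2019-05-30 00:00:50',
     '2019-05-30 00:05:50', '2019-05-30 00:23:26'])})
program_data = pd.DataFrame({
    'date_start': pd.to_datetime(['2019-05-30 00:00:00', '2019-05-30 00:22:44']),
    'Program': ['Program 1', 'Program 2']})

# End of each program is the start of the next one; the last program gets an open end.
program_data['date_end'] = program_data['date_start'].shift(-1).fillna(pd.Timestamp.max)

# Count log entries that fall into each [date_start, date_end) interval.
program_data['Counts'] = [
    ((log_data['Time'] >= start) & (log_data['Time'] < end)).sum()
    for start, end in zip(program_data['date_start'], program_data['date_end'])
]
print(program_data[['date_start', 'Program', 'Counts']])
For many programs, pd.merge_asof or pd.cut on the start times is a tidier way to assign each log entry to its program, but the idea is the same.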
Solution with histogram, using numpy:
import pandas as pd
import numpy as np
df_p = pd.DataFrame([
{
"title": "\"Amiről a kövek mesélnek\"",
"startTime_dt": "2019-05-29T00:00:40Z"
},
{
"title": "Koffer - Kedvcsináló Kul(t)túrák Külföldön",
"startTime_dt": "2019-05-29T00:22:44Z"
},
{
"title": "Gubancok",
"startTime_dt": "2019-05-29T00:48:08Z"
}
])
df_v = pd.DataFrame([
{
"logid": 191605,
"time": "2019-05-29 0:00:17"
},
{
"logid": 191607,
"time": "2019-05-29 0:00:26"
},
{
"logid": 191611,
"time": "2019-05-29 0:01:20"
}
])
df_p.startTime_dt = pd.to_datetime(df_p.startTime_dt)
df_v.time = pd.to_datetime(df_v.time)
# here's part where I convert datetime to timestamp in seconds - astype(int) casts it to nanoseconds, hence there's // 10**9
programmes_start = df_p.startTime_dt.astype(int).values // 10**9
viewings_starts = df_v.time.astype(int).values // 10**9
# make bins for histogram
# add zero to the beginning of the array
# add value that is time an hour after the start of the last given programme to the end of the array
programmes_start = np.pad(programmes_start, (1, 1), mode='constant', constant_values=(0, programmes_start.max()+3600))
histogram = np.histogram(viewings_starts, bins=programmes_start)
print(histogram[0])
# prints [2 1 0 0]
Interpretation: there were 2 log entries before 'Amiről a kövek mesélnek' started, 1 log entry between the starts of 'Amiről a kövek mesélnek' and 'Koffer - Kedvcsináló Kul(t)túrák Külföldön', 0 log entries between the starts of 'Koffer - Kedvcsináló Kul(t)túrák Külföldön' and 'Gubancok', and 0 entries after the start of 'Gubancok'. Which, looking at the data you provided, seems correct :) Hope this helps.
NOTE: I assume that you have the dates of the viewings. You don't have them in the example log file, but they appear in the screenshot - so I assumed that you can compute/get them somehow and added them by hand to the input dict.

How to convert csv to json with multi-level nesting using pandas

I've tried to follow a bunch of answers I've seen on SO, but I'm really stuck here. I'm trying to convert a CSV to JSON.
The JSON schema has multiple levels of nesting and some of the values in the CSV will be shared.
Here's a link to one record in the CSV.
Think of this sample as two different parties attached to one document.
The fields on the document (document_source_id, document_amount, record_date, source_url, document_file_url, document_type__title, apn, situs_county_id, state_code) should not duplicate.
While the fields of each entity are unique.
I've tried to nest these using a complex groupby statement, but am stuck getting the data into my schema.
Here's what I've tried. It doesn't contain all fields because I'm having a difficult time understanding what it all means.
j = (df.groupby(['state_code',
                 'record_date',
                 'situs_county_id',
                 'document_type__title',
                 'document_file_url',
                 'document_amount',
                 'source_url'], as_index=False)
       .apply(lambda x: x[['source_url']].to_dict('r'))
       .reset_index()
       .rename(columns={0: 'metadata', 1: 'parcels'})
       .to_json(orient='records'))
Here's the output the sample CSV should produce:
{
  "metadata":{
    "source_url":"https://a836-acris.nyc.gov/DS/DocumentSearch/DocumentDetail?doc_id=2019012901225004",
    "document_file_url":"https://a836-acris.nyc.gov/DS/DocumentSearch/DocumentImageView?doc_id=2019012901225004"
  },
  "state_code":"NY",
  "nested_data":{
    "parcels":[
      {
        "apn":"3972-61",
        "situs_county_id":"36005"
      }
    ],
    "participants":[
      {
        "entity":{
          "name":"5 AIF WILLOW, LLC",
          "situs_street":"19800 MACARTHUR BLVD",
          "situs_city":"IRVINE",
          "situs_unit":"SUITE 1150",
          "state_code":"CA",
          "situs_zip":"92612"
        },
        "participation_type":"Grantee"
      },
      {
        "entity":{
          "name":"5 ARCH INCOME FUND 2, LLC",
          "situs_street":"19800 MACARTHUR BLVD",
          "situs_city":"IRVINE",
          "situs_unit":"SUITE 1150",
          "state_code":"CA",
          "situs_zip":"92612"
        },
        "participation_type":"Grantor"
      }
    ]
  },
  "record_date":"01/31/2019",
  "situs_county_id":"36005",
  "document_source_id":"2019012901225004",
  "document_type__title":"ASSIGNMENT, MORTGAGE"
}
You might need to use the json_normalize function from pandas.io.json
from pandas.io.json import json_normalize
import csv

li = []
with open('filename.csv', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        li.append(row)
df = json_normalize(li)
Here, we are creating a list of dictionaries from the CSV file and creating a dataframe with the json_normalize function.
Below is one way to export your data:
# all columns used in groupby()
grouped_cols = ['state_code', 'record_date', 'situs_county_id', 'document_source_id',
                'document_type__title', 'source_url', 'document_file_url']

# adjust some column names to map to those in the 'entity' node in the desired JSON
situs_mapping = {
    'street_number_street_name': 'situs_street',
    'city_name': 'situs_city',
    'unit': 'situs_unit',
    'state_code': 'state_code',
    'zipcode_full': 'situs_zip',
}

# define columns used for the 'entity' node (Python 2 needs the alternative syntax below)
entity_cols = ['name', *situs_mapping.values()]
# for Python 2:
# entity_cols = ['name'] + list(situs_mapping.values())

# specify output fields
output_cols = ['metadata', 'state_code', 'nested_data', 'record_date',
               'situs_county_id', 'document_source_id', 'document_type__title']

# define a function to get nested_data
def get_nested_data(d):
    return {
        'parcels': d[['apn', 'situs_county_id']].drop_duplicates().to_dict('r'),
        'participants': d[['entity', 'participation_type']].to_dict('r'),
    }

j = (df.rename(columns=situs_mapping)
       .assign(entity=lambda x: x[entity_cols].to_dict('r'))
       .groupby(grouped_cols)
       .apply(get_nested_data)
       .reset_index()
       .rename(columns={0: 'nested_data'})
       .assign(metadata=lambda x: x[['source_url', 'document_file_url']].to_dict('r'))[output_cols]
       .to_json(orient="records")
)
print(j)
Note: if participants contains duplicates and you must run drop_duplicates() on it as we do for parcels, then the assign(entity) step can be moved into the definition of participants inside the get_nested_data() function:
, 'participants': d[['participation_type', *entity_cols]] \
.drop_duplicates() \
.assign(entity=lambda x: x[entity_cols].to_dict('r')) \
.loc[:,['entity', 'participation_type']] \
.to_dict('r')
