I need some small help. I have data containing hospital names and birth weights in kilograms. I want to group and count the weights below 1 kg and above 1 kg for each hospital. Here is what my data looks like:
# initialise data of lists.
data = {'Hospital':['Ruack', 'Ruack', 'Pens', 'Rick','Pens', 'Rick'],'Birth_weight':['1.0', '0.1', '2.1', '0.9', '2.19', '0.88']}
# Create DataFrame
dfy = pd.DataFrame(data)
# Print the output.
print(dfy)
Here is what I tried:
#weight below 1kg
weight_count=pd.DataFrame(dfy.groupby('Hospital')['Birth_weight'] < 1.value_counts())
weight_count = weight_count.rename({'Birth_weight': 'weight_count'}, axis='columns')
weight_final = weight_count.reset_index()
#weight above 1kg
weight_count=pd.DataFrame(dfy.groupby('Hospital')['Birth_weight'] > 1.value_counts())
weight_count = weight_count.rename({'Birth_weight': 'weight_count'}, axis='columns')
weight_final = weight_count.reset_index()
Expected end result: a table with counts of birth weights under 1 kg and above 1 kg, grouped per hospital.
EXPECTED TABLE
# initialise data of lists.
data = {'Hospital':['Ruack' , 'Rick','pens'],'< 1kg_count':['1', '2' , 'NAN'], '>1kg_count':['1','NAN' ,'2']}
# Create DataFrame
df_final = pd.DataFrame(data)
# Print the output.
print(df_final)
Use numpy.where to categorise into a new column and then GroupBy.size with Series.unstack:
# if necessary, convert to floats
dfy['Birth_weight'] = dfy['Birth_weight'].astype(float)
dfy['group'] = np.where(dfy['Birth_weight'] < 1,'< 1kg_count','>1kg_count')
df = dfy.groupby(['Hospital', 'group']).size().unstack().reset_index()
print (df)
group Hospital < 1kg_count >1kg_count
0 Pens NaN 2.0
1 Rick 2.0 NaN
2 Ruack 1.0 1.0
Another idea with DataFrame.pivot_table:
dfy['Birth_weight'] = dfy['Birth_weight'].astype(float)
g = np.where(dfy['Birth_weight'] < 1,'< 1kg_count','>1kg_count')
df = dfy.pivot_table(index='Hospital', columns=g, aggfunc='size').reset_index()
print (df)
Hospital < 1kg_count >1kg_count
0 Pens NaN 2.0
1 Rick 2.0 NaN
2 Ruack 1.0 1.0
EDIT: If you want binning of the column, use cut:
dfy['Birth_weight'] = dfy['Birth_weight'].astype(float)
bins = np.arange(0, 5.5, 0.5)
labels = ['{}-{}kg_count'.format(i, j) for i, j in zip(bins[:-1], bins[1:])]
#print (bins)
#print (labels)
g = pd.cut(dfy['Birth_weight'], bins=bins, labels=labels)
df = dfy.pivot_table(index='Hospital', columns=g, aggfunc='size')
print (df)
Birth_weight 0.0-0.5kg_count 0.5-1.0kg_count 2.0-2.5kg_count
Hospital
Pens NaN NaN 2.0
Rick NaN 2.0 NaN
Ruack 1.0 1.0 NaN
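If you prefer zeros instead of NaN where a hospital has no births in a band, a small variation (a sketch on the same data, not part of the original answer) is to pass fill_value to pivot_table:
import numpy as np
import pandas as pd

dfy = pd.DataFrame({'Hospital': ['Ruack', 'Ruack', 'Pens', 'Rick', 'Pens', 'Rick'],
                    'Birth_weight': [1.0, 0.1, 2.1, 0.9, 2.19, 0.88]})
g = np.where(dfy['Birth_weight'] < 1, '< 1kg_count', '>1kg_count')
# fill_value=0 replaces missing Hospital/band combinations with 0 instead of NaN
df = dfy.pivot_table(index='Hospital', columns=g, aggfunc='size', fill_value=0).reset_index()
print(df)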
Are you looking for something like this?
a=(dfy['Birth_weight'].astype(float)<1).map({True: 'Less than 1kg', False: 'More than 1kg'})
dfy.groupby(['Hospital',a])['Birth_weight'].count().reset_index(name='Count')
Output
Hospital Birth_weight Count
0 Pens More than 1kg 2
1 Rick Less than 1kg 2
2 Ruack Less than 1kg 1
3 Ruack More than 1kg 1
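If you want that long output reshaped into one row per hospital, a short follow-up (a sketch, assuming the same dfy as in the question) is to unstack the grouped counts:
import pandas as pd

dfy = pd.DataFrame({'Hospital': ['Ruack', 'Ruack', 'Pens', 'Rick', 'Pens', 'Rick'],
                    'Birth_weight': ['1.0', '0.1', '2.1', '0.9', '2.19', '0.88']})
a = (dfy['Birth_weight'].astype(float) < 1).map({True: 'Less than 1kg', False: 'More than 1kg'})
# unstack turns the weight category into columns, one row per hospital
print(dfy.groupby(['Hospital', a])['Birth_weight'].count().unstack())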
import pandas as pd
import numpy as np
# initialise data of lists.
data = {'Hospital':['Ruack', 'Ruack', 'Pens', 'Rick','Pens', 'Rick'],'Birth_weight':
['1.0', '0.1', '2.1', '0.9', '2.19', '0.88']}
# Create DataFrame
dfy = pd.DataFrame(data)
dfy['Birth_weight'] = dfy['Birth_weight'].astype(float)
df1 = dfy.groupby(['Hospital', 'Birth_weight'])
# filter keeps whole groups; each group here has a single weight value
df1.filter(lambda x: (x['Birth_weight'] > 1).all())
df1.filter(lambda x: (x['Birth_weight'] < 1).all())
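Since filter only returns the matching rows, a possible follow-up (a sketch, not part of the answer above) is to count those rows per hospital on each side of the threshold:
import pandas as pd

dfy = pd.DataFrame({'Hospital': ['Ruack', 'Ruack', 'Pens', 'Rick', 'Pens', 'Rick'],
                    'Birth_weight': [1.0, 0.1, 2.1, 0.9, 2.19, 0.88]})
# count rows per hospital below and at-or-above the 1 kg threshold
below = dfy[dfy['Birth_weight'] < 1].groupby('Hospital').size().rename('< 1kg_count')
above = dfy[dfy['Birth_weight'] >= 1].groupby('Hospital').size().rename('>1kg_count')
print(pd.concat([below, above], axis=1).reset_index())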
Related
I have multiple dataframes with the same columns. The first one is df1:
Name   I
Jack   1.0
Louis  1.0
Jack   2.0
Louis  5.0
Jack   4.0
Mark   2.0
-      -
Mark   3.0
df_2
Name   I
Jack   3.0
Louis  3.0
Jack   2.0
Louis  1.0
Jack   6.0
Mark   7.0
-      -
Mark   3.0
I should get a new dataframe ndf like this:
Name   res_df1  res_df2
Jack   7.0      11.0
Louis  6.0      4.0
Mark   5.0      10.0
res_df1 and res_df2 are the sums grouped by Name from the corresponding dataframes.
How can I get this result table? How do I match the grouped sums from the different dataframes and write each sum to the corresponding group in the new dataframe? I have done it like this:
frames = [df1, df2, ..., df9]
ndf = pd.concat(frames)
ndf = ndf.drop_duplicates('Name')
ndf['res_df1'] = df1.groupby('Name', sort=False)['I'].transform('sum').round(2)
ndf['res_df2'] = df2.groupby('Name', sort=False)['I'].transform('sum').round(2)
...
ndf['res_df9'] = df9.groupby('Name', sort=False)['I'].transform('sum').round(2)
But the problem is that I can't get the right sums.
Try:
frames = [df_1, df_2]
final_df = pd.DataFrame()
for index, df in enumerate(frames):
    df_count = df.groupby('Name')['I'].sum().reset_index(name=f'res_df{index + 1}')
    if index == 0:
        final_df = df_count.copy(deep=True)
    else:
        final_df = final_df.merge(df_count, how='outer', on='Name')
print(final_df)
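An alternative sketch (assuming all frames share the Name and I columns) is to concat the frames with a key per source and unstack the grouped sums:
import pandas as pd

df_1 = pd.DataFrame({'Name': ['Jack', 'Louis', 'Jack', 'Louis', 'Jack', 'Mark', 'Mark'],
                     'I': [1.0, 1.0, 2.0, 5.0, 4.0, 2.0, 3.0]})
df_2 = pd.DataFrame({'Name': ['Jack', 'Louis', 'Jack', 'Louis', 'Jack', 'Mark', 'Mark'],
                     'I': [3.0, 3.0, 2.0, 1.0, 6.0, 7.0, 3.0]})
# one key per source frame, sum per (source, Name), then spread the sources into columns
combined = pd.concat([df_1, df_2], keys=['res_df1', 'res_df2'], names=['source'])
ndf = combined.groupby(['source', 'Name'])['I'].sum().unstack('source').reset_index()
print(ndf)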
I'm trying to update column entries by counting the frequency of row entries in different columns. Here is a sample of my data. The actual data consists of 10k samples, each 220 entries long (220 seconds).
d = {'ID':['a12', 'a12','a12','a12','a12', 'a12', 'a12','a12','v55','v55','v55','v55','v55','v55','v55', 'v55'],
'Exp_A':[0.012,0.154,0.257,0.665,1.072,1.514,1.871,2.144, 0.467, 0.812,1.59,2.151,2.68,3.013,3.514,4.015],
'freq':['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04',
'00:00:05', '00:00:06', '00:00:07','00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04',
'00:00:05', '00:00:06', '00:00:07'],
'A_Bullseye':[0,0,0,0,1,0,1,0, 0,0,1,0,0,0,1,0], 'A_Bull_Total':[0,0,0,0,0,1,1,2,0,0,0,1,1,1,1,2], 'A_Shot':[0,1,1,1,0,1,0,0, 1,1,0,1,0,1,0,0]}
df = pd.DataFrame(data=d)
For each second, only a Bullseye or a Shot can be registered.
Count1: the number of df.A_Shot == 1 before the first df.A_Bullseye == 1 for each ID; this is 3 for ID=a12 and 2 for ID=v55.
Count2: the number of df.A_Shot == 1 from the end of Count1 up to the second df.A_Bullseye == 1; this is 1 for df[df.ID=='a12'] and 2 for df[df.ID=='v55'].
Here i in Count(i) runs up to df.groupby(by='ID')['A_Bull_Total'].max(), which is 2.
So, if I can compute the average count for each i, then I will be able to adjust the values of df.Exp_A using the average of the above counts.
mask_A_Shot = df.A_Shot == 1
mask_A_Bullseye = df.A_Bullseye == 0
mask = mask_A_Shot & mask_A_Bullseye
df[mask.groupby(df['ID'])].mean()
Ideally I would like to have something like for each i (Bullseye), how many Shots are needed and how many seconds it took.
Create a grouping key of Bullseye within each ID using cumsum; then you can find how many shots were taken and how much time elapsed between bullseyes.
import pandas as pd
df['freq'] = pd.to_timedelta(df.freq, unit='s')
df['Bullseye'] = df.groupby('ID').A_Bullseye.cumsum()+1
# Chop off any shots after the final bullseye
m = df.Bullseye <= df.groupby('ID').A_Bullseye.transform(lambda x: x.cumsum().max())
df[m].groupby(['ID', 'Bullseye']).agg({'A_Shot': 'sum',
'freq': lambda x: x.max()-x.min()})
Output:
A_Shot freq
ID Bullseye
a12 1 3 00:00:03
2 1 00:00:01
v55 1 2 00:00:01
2 2 00:00:03
Edit:
Given your comment, here is how I would proceed. We're going to .shift the Bullseye column so that instead of incrementing the counter at the bullseye, we increment it on the row after the bullseye. We'll also modify A_Shot so bullseyes are counted as shots.
df['freq'] = pd.to_timedelta(df.freq, unit='s')
df['Bullseye'] = df.groupby('ID').A_Bullseye.apply(lambda x: x.shift().cumsum().fillna(0)+1)
# Also consider Bullseye's as a shot:
df.loc[df.A_Bullseye == 1, 'A_Shot'] = 1
# Chop off any shots after the final bullseye
m = df.Bullseye <= df.groupby('ID').A_Bullseye.transform(lambda x: x.cumsum().max())
df1 = (df[m].groupby(['ID', 'Bullseye'])
.agg({'A_Shot': 'sum',
'freq': lambda x: (x.max()-x.min()).total_seconds()}))
Output: df1
A_Shot freq
ID Bullseye
a12 1.0 4 4.0
2.0 2 1.0
v55 1.0 3 2.0
2.0 3 3.0
And now since freq is an integer number of seconds, you can do divisions easily:
df1.A_Shot / df1.freq
#ID Bullseye
#a12 1.0 1.0
# 2.0 2.0
#v55 1.0 1.5
# 2.0 1.0
#dtype: float64
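And if you want the average number of shots and seconds per bullseye index i across IDs (the averages mentioned in the question), a small follow-up on the df1 computed above could be:
# assumes df1 from the block above, indexed by (ID, Bullseye)
# mean over IDs gives the average shots and elapsed seconds per bullseye index
avg_per_bullseye = df1.groupby(level='Bullseye').mean()
print(avg_per_bullseye)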
Suppose df.bun (df is a Pandas DataFrame) has a MultiIndex (date and name), with its values being category labels stored as strings:
date name values
20170331 A122630 stock-a
A123320 stock-a
A152500 stock-b
A167860 bond
A196030 stock-a
A196220 stock-a
A204420 stock-a
A204450 curncy-US
A204480 raw-material
A219900 stock-a
How can I get the total counts per date and each category's percentage, to make a table like the one below for each date?
date variable counts Percentage
20170331 stock 7 70%
bond 1 10%
raw-material 1 10%
curncy 1 10%
I have tried print(df.groupby('bun').count()) as a first attempt, but it doesn't give what I need.
cf) Before getting df.bun, I used the following code to import a nested dictionary into a Pandas DataFrame.
import numpy as np
import pandas as pd
result = pd.DataFrame()
origDict = np.load("Hannah Lee.npy")
for item in range(len(origDict)):
    newdict = {(k1, k2): v2 for k1, v1 in origDict[item].items() for k2, v2 in origDict[item][k1].items()}
    df = pd.DataFrame([newdict[i] for i in sorted(newdict)],
                      index=pd.MultiIndex.from_tuples([i for i in sorted(newdict.keys())]))
print(df.bun)
I believe you need SeriesGroupBy.value_counts:
g = df.groupby('date')['values']
df = pd.concat([g.value_counts(),
                g.value_counts(normalize=True).mul(100)], axis=1, keys=('counts', 'percentage'))
print (df)
counts percentage
date values
20170331 stock-a 6 60.0
bond 1 10.0
curncy-US 1 10.0
raw-material 1 10.0
stock-b 1 10.0
Another solution: use size for the counts, then divide by a new Series created with transform and sum:
df2 = df.reset_index().groupby(['date', 'values']).size().to_frame('count')
df2['percentage'] = df2['count'].div(df2.groupby('date')['count'].transform('sum')).mul(100)
print (df2)
count percentage
date values
20170331 bond 1 10.0
curncy-US 1 10.0
raw-material 1 10.0
stock-a 6 60.0
stock-b 1 10.0
The difference between the solutions: the first sorts by counts within each group, while the second sorts by the MultiIndex.
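For example, to make the value_counts-based result line up with the size-based one, you can sort its MultiIndex (a sketch on toy data shaped like the question's, since the original .npy file isn't available):
import pandas as pd

# toy data with the same shape as df.bun: a date column and a values column of category strings
df = pd.DataFrame({'date': ['20170331'] * 10,
                   'values': ['stock-a'] * 6 + ['stock-b', 'bond', 'curncy-US', 'raw-material']})
g = df.groupby('date')['values']
out = pd.concat([g.value_counts(), g.value_counts(normalize=True).mul(100)],
                axis=1, keys=('counts', 'percentage'))
# sort_index orders by (date, values) instead of by descending counts
print(out.sort_index())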
I have a DataFrame which I want to transpose:
import pandas as pd
sid= '13HKQ0Ue1_YCP-pKUxFuqdiqgmW_AZeR7P3VsUwrCnZo' # spreadsheet id
gid = 0 # sheet unique id (0 equals sheet0)
url = 'https://docs.google.com/spreadsheets/d/{}/export?gid={}&format=csv'.format(sid,gid)
df = pd.read_csv(url)
What I want to do is get the StoreName and CATegory as column headers and have weights vs. price for every category.
Desired output:
I have tried loops and Pandas but cannot figure it out. I thought it could be done with df.groupby, but the returned object is not a DataFrame.
I get all this from a JSON output of an API:
API Link for 1STORE
import pandas as pd
import json, requests
from cytoolz.dicttoolz import merge
page = requests.get(mainurl)
dict_dta = json.loads(page.text) # load in Python DICT
list_columns = ['id', 'name', 'category_name', 'ounce', 'gram', 'two_grams', 'quarter', 'eighth','half_ounce','unit','half_gram'] # get the unformatted output
df = pd.io.json.json_normalize(dict_dta, ['categories', ['items']]).pipe(lambda x: x.drop('prices', 1).join(x.prices.apply(lambda y: pd.Series(merge(y)))))[list_columns]
df.to_csv('name')
I have tried tons of methods.
If someone could just point me in the right direction, it would be very helpful.
Is this in the right direction?
import pandas as pd
sid= '13HKQ0Ue1_YCP-pKUxFuqdiqgmW_AZeR7P3VsUwrCnZo' # spreadsheet id
gid = 0 # sheet unique id (0 equals sheet0)
url = 'https://docs.google.com/spreadsheets/d/{}/export?gid={}&format=csv'.format(sid,gid)
df = pd.read_csv(url)
for idx, dfx in df.groupby(df.CAT):
    if idx != 'Flower':
        continue
    df_test = dfx.drop(['CAT', 'NAME'], axis=1)
    df_test = df_test.rename(columns={'StoreNAME': idx}).set_index(idx).T
df_test
Returns:
Flower Pueblo West Organics - Adult Use Pueblo West Organics - Adult Use \
UNIT NaN NaN
HALFOUNCE 15.0 50.0
EIGHTH NaN 25.0
TWOGRAMS NaN NaN
QUARTER NaN 40.0
OUNCE 30.0 69.0
GRAM NaN 9.0
Flower Pueblo West Organics - Adult Use Three Rivers Dispensary - REC \
UNIT NaN NaN
HALFOUNCE 50.0 75.0
EIGHTH 25.0 20.0
TWOGRAMS NaN NaN
QUARTER 40.0 45.0
OUNCE 69.0 125.0
GRAM 9.0 8.0
Flower Three Rivers Dispensary - REC
UNIT NaN
HALFOUNCE 75.0
EIGHTH 20.0
TWOGRAMS NaN
QUARTER 40.0
OUNCE 125.0
GRAM 8.0
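If the same transpose is wanted for every category rather than just 'Flower', a sketch building on the loop above (assuming the same df read from the spreadsheet) could collect one table per category in a dict:
# one transposed price table per category, built the same way as the loop above
tables = {cat: grp.drop(['CAT', 'NAME'], axis=1)
                  .rename(columns={'StoreNAME': cat})
                  .set_index(cat).T
          for cat, grp in df.groupby(df.CAT)}
print(tables['Flower'])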
I have two dataframes: one has multi-level columns, and the other has only single-level columns (the first level of the first dataframe; that is, the second dataframe is calculated by grouping the first dataframe).
These two dataframes look like the following:
first dataframe: df1
second dataframe: df2
The relationship between df1 and df2 is:
df2 = df1.groupby(axis=1, level='sector').mean()
Then, I get the index of rolling_max of df1 by:
result1=pd.rolling_apply(df1,window=5,func=lambda x: pd.Series(x).idxmax(),min_periods=4)
Let me explain result1 a little. For example, during the five days (the window length) 2016/2/23 - 2016/2/29, the max price of the stock sh600870 occurred on 2016/2/24; the index of 2016/2/24 within the five-day range is 1. So, in result1, the value for stock sh600870 on 2016/2/29 is 1.
Now, I want to get the sector price for each stock by the index in result1.
Let's take the same stock as an example: sh600870 is in the sector '家用电器视听器材白色家电'. So for 2016/2/29, I want to get the sector price on 2016/2/24, which is 8.770.
How can I do that?
idxmax (or np.argmax) returns an index which is relative to the rolling
window. To make the index relative to df1, add the index of the left edge of
the rolling window:
index = pd.rolling_apply(df1, window=5, min_periods=4, func=np.argmax)
shift = pd.rolling_min(np.arange(len(df1)), window=5, min_periods=4)
index = index.add(shift, axis=0)
Once you have ordinal indices relative to df1, you can use them to index
into df1 or df2 using .iloc.
For example,
import numpy as np
import pandas as pd
np.random.seed(2016)
N = 15
columns = pd.MultiIndex.from_product([['foo','bar'], ['A','B']])
columns.names = ['sector', 'stock']
dates = pd.date_range('2016-02-01', periods=N, freq='D')
df1 = pd.DataFrame(np.random.randint(10, size=(N, 4)), columns=columns, index=dates)
df2 = df1.groupby(axis=1, level='sector').mean()
window_size, min_periods = 5, 4
index = pd.rolling_apply(df1, window=window_size, min_periods=min_periods, func=np.argmax)
shift = pd.rolling_min(np.arange(len(df1)), window=window_size, min_periods=min_periods)
# alternative, you could use
# shift = np.pad(np.arange(len(df1)-window_size+1), (window_size-1, 0), mode='constant')
# but this is harder to read/understand, and therefore it may be more prone to bugs.
index = index.add(shift, axis=0)
result = pd.DataFrame(index=df1.index, columns=df1.columns)
for col in index:
    sector, stock = col
    mask = pd.notnull(index[col])
    idx = index.loc[mask, col].astype(int)
    result.loc[mask, col] = df2[sector].iloc[idx].values
print(result)
yields
sector foo bar
stock A B A B
2016-02-01 NaN NaN NaN NaN
2016-02-02 NaN NaN NaN NaN
2016-02-03 NaN NaN NaN NaN
2016-02-04 5.5 5 5 7.5
2016-02-05 5.5 5 5 8.5
2016-02-06 5.5 6.5 5 8.5
2016-02-07 5.5 6.5 5 8.5
2016-02-08 6.5 6.5 5 8.5
2016-02-09 6.5 6.5 6.5 8.5
2016-02-10 6.5 6.5 6.5 6
2016-02-11 6 6.5 4.5 6
2016-02-12 6 6.5 4.5 4
2016-02-13 2 6.5 4.5 5
2016-02-14 4 6.5 4.5 5
2016-02-15 4 6.5 4 3.5
Note in pandas 0.18 the rolling_apply syntax was changed. DataFrames and Series now have a rolling method, so that now you would use:
index = df1.rolling(window=window_size, min_periods=min_periods).apply(np.argmax)
shift = (pd.Series(np.arange(len(df1)))
           .rolling(window=window_size, min_periods=min_periods).min())
index = index.add(shift.values, axis=0)
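In more recent pandas versions you can additionally pass raw=True so each window is handed to np.argmax as a NumPy array rather than a Series, which is faster (a minor note beyond the original answer):
# same computation as above, with raw=True for speed on newer pandas
index = df1.rolling(window=window_size, min_periods=min_periods).apply(np.argmax, raw=True)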