I'm using Pandas 0.19.
Consider the following data frame:
FID admin0 admin1 admin2 windspeed population
0 cntry1 state1 city1 60km/h 700
1 cntry1 state1 city1 90km/h 210
2 cntry1 state1 city2 60km/h 100
3 cntry1 state2 city3 60km/h 70
4 cntry1 state2 city4 60km/h 180
5 cntry1 state2 city4 90km/h 370
6 cntry2 state3 city5 60km/h 890
7 cntry2 state3 city6 60km/h 120
8 cntry2 state3 city6 90km/h 420
9 cntry2 state3 city6 120km/h 360
10 cntry2 state4 city7 60km/h 740
How can I create a table like this one?
population
60km/h 90km/h 120km/h
admin0 admin1 admin2
cntry1 state1 city1 700 210 0
cntry1 state1 city2 100 0 0
cntry1 state2 city3 70 0 0
cntry1 state2 city4 180 370 0
cntry2 state3 city5 890 0 0
cntry2 state3 city6 120 420 360
cntry2 state4 city7 740 0 0
I have tried the following pivot table:
table = pd.pivot_table(df, index=["admin0","admin1","admin2"], columns=["windspeed"], values=["population"], fill_value=0)
In general it works great, but unfortunately I am not able to sort the new columns in the right order: the 120km/h column appears before the ones for 60km/h and 90km/h. How can I specify the order of the new columns?
Moreover, as a second step I need to add subtotals both for admin0 and admin1. Ideally, the table I need should be like this:
population
60km/h 90km/h 120km/h
admin0 admin1 admin2
cntry1 state1 city1 700 210 0
cntry1 state1 city2 100 0 0
SUM state1 800 210 0
cntry1 state2 city3 70 0 0
cntry1 state2 city4 180 370 0
SUM state2 250 370 0
SUM cntry1 1050 580 0
cntry2 state3 city5 890 0 0
cntry2 state3 city6 120 420 360
SUM state3 1010 420 360
cntry2 state4 city7 740 0 0
SUM state4 740 0 0
SUM cntry2 1750 420 360
SUM ALL 2800 1000 360
You can do it using the reindex() method and custom sorting:
In [26]: table
Out[26]:
population
windspeed 120km/h 60km/h 90km/h
admin0 admin1 admin2
cntry1 state1 city1 0 700 210
city2 0 100 0
state2 city3 0 70 0
city4 0 180 370
cntry2 state3 city5 0 890 0
city6 360 120 420
state4 city7 0 740 0
In [27]: cols = sorted(table.columns.tolist(), key=lambda x: int(x[1].replace('km/h','')))
In [28]: cols
Out[28]: [('population', '60km/h'), ('population', '90km/h'), ('population', '120km/h')]
In [29]: table = table.reindex(columns=cols)
In [30]: table
Out[30]:
population
windspeed 60km/h 90km/h 120km/h
admin0 admin1 admin2
cntry1 state1 city1 700 210 0
city2 100 0 0
state2 city3 70 0 0
city4 180 370 0
cntry2 state3 city5 890 0 0
city6 120 420 360
state4 city7 740 0 0
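Alternatively, you can avoid the manual sort altogether by making windspeed an ordered categorical before pivoting. A sketch, assuming the declared order covers every value present in the column:
# Sketch: declare the order up front so the pivoted columns come out in
# category order rather than lexicographic order.
order = ['60km/h', '90km/h', '120km/h']
df['windspeed'] = pd.Categorical(df['windspeed'], categories=order, ordered=True)
table = pd.pivot_table(df, index=["admin0","admin1","admin2"],
                       columns=["windspeed"], values=["population"],
                       fill_value=0)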
Solution with subtotals and MultiIndex.from_arrays. Finally, concat all DataFrames together, sort_index, and add the grand total:
#replace km/h and convert to int
df.windspeed = df.windspeed.str.replace('km/h','').astype(int)
print (df)
FID admin0 admin1 admin2 windspeed population
0 0 cntry1 state1 city1 60 700
1 1 cntry1 state1 city1 90 210
2 2 cntry1 state1 city2 60 100
3 3 cntry1 state2 city3 60 70
4 4 cntry1 state2 city4 60 180
5 5 cntry1 state2 city4 90 370
6 6 cntry2 state3 city5 60 890
7 7 cntry2 state3 city6 60 120
8 8 cntry2 state3 city6 90 420
9 9 cntry2 state3 city6 120 360
10 10 cntry2 state4 city7 60 740
#pivoting
table = pd.pivot_table(df,
                       index=["admin0","admin1","admin2"],
                       columns=["windspeed"],
                       values=["population"],
                       fill_value=0)
print (table)
population
windspeed 60 90 120
admin0 admin1 admin2
cntry1 state1 city1 700 210 0
city2 100 0 0
state2 city3 70 0 0
city4 180 370 0
cntry2 state3 city5 890 0 0
city6 120 420 360
state4 city7 740 0 0
#groupby and create sum dataframe by levels 0,1
df1 = table.groupby(level=[0,1]).sum()
df1.index = pd.MultiIndex.from_arrays([df1.index.get_level_values(0),
                                       df1.index.get_level_values(1) + '_sum',
                                       len(df1.index) * ['']])
print (df1)
population
windspeed 60 90 120
admin0
cntry1 state1_sum 800 210 0
state2_sum 250 370 0
cntry2 state3_sum 1010 420 360
state4_sum 740 0 0
df2 = table.groupby(level=0).sum()
df2.index = pd.MultiIndex.from_arrays([df2.index.values + '_sum',
                                       len(df2.index) * [''],
                                       len(df2.index) * ['']])
print (df2)
population
windspeed 60 90 120
cntry1_sum 1050 580 0
cntry2_sum 1750 420 360
#concat all dataframes together, sort index
df = pd.concat([table, df1, df2]).sort_index(level=[0])
#add km/h to second level in columns
df.columns = pd.MultiIndex.from_arrays([df.columns.get_level_values(0),
                                        df.columns.get_level_values(1).astype(str) + 'km/h'])
#add all sum
df.loc[('All_sum','','')] = table.sum().values
print (df)
population
60km/h 90km/h 120km/h
admin0 admin1 admin2
cntry1 state1 city1 700 210 0
city2 100 0 0
state1_sum 800 210 0
state2 city3 70 0 0
city4 180 370 0
state2_sum 250 370 0
cntry1_sum 1050 580 0
cntry2 state3 city5 890 0 0
city6 120 420 360
state3_sum 1010 420 360
state4 city7 740 0 0
state4_sum 740 0 0
cntry2_sum 1750 420 360
All_sum 2800 1000 360
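As a side note, if you only needed the single grand-total row rather than the per-level subtotals, pivot_table can append it itself via margins. A sketch using the original, unpivoted df from before the pivot:
# Sketch: margins=True appends an 'All' row (and column) of totals, named by
# margins_name; aggfunc='sum' makes those margins sums rather than means.
table_with_total = pd.pivot_table(df,
                                  index=["admin0","admin1","admin2"],
                                  columns=["windspeed"],
                                  values=["population"],
                                  aggfunc='sum',
                                  fill_value=0,
                                  margins=True,
                                  margins_name='All_sum')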
EDIT (per comment): to add subtotal rows only for groups that contain more than one row:
def f(x):
    # debug: show each group passed in by apply
    print(x)
    if len(x) > 1:
        return x.sum()
df1 = table.groupby(level=[0,1]).apply(f).dropna(how='all')
df1.index = pd.MultiIndex.from_arrays([df1.index.get_level_values(0),
                                       df1.index.get_level_values(1) + '_sum',
                                       len(df1.index) * ['']])
print (df1)
population
windspeed 60 90 120
admin0
cntry1 state1_sum 800.0 210.0 0.0
state2_sum 250.0 370.0 0.0
cntry2 state3_sum 1010.0 420.0 360.0
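The helper returns a sum only for groups with more than one row and None otherwise, which dropna(how='all') then removes. An equivalent sketch without apply filters on group size instead:
# Sketch: keep subtotal rows only for (admin0, admin1) groups that contain
# more than one city row.
sizes = table.groupby(level=[0,1]).size()
df1 = table.groupby(level=[0,1]).sum().loc[sizes > 1]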
I have the following data. It is all in one excel file.
Sheet name: may2019
Productivity Count
Date : 01-Apr-2020 00:00 to 30-Apr-2020 23:59
Date Type: Finalized Date Modality: All
Name MR DX CT US MG BMD TOTAL
Svetlana 29 275 101 126 5 5 541
Kate 32 652 67 171 1 0 923
Andrew 0 452 0 259 1 0 712
Tom 50 461 61 104 4 0 680
Maya 0 353 0 406 0 0 759
Ben 0 1009 0 143 0 0 1152
Justin 0 2 9 0 1 9 21
Total 111 3204 238 1209 12 14 4788
Sheet Name: June 2020
Productivity Count
Date : 01-Jun-2019 00:00 to 30-Jun-2019 23:59
Date Type: Finalized Date Modality: All
NAme US DX CT MR MG BMD TOTAL
Svetlana 4 0 17 6 0 4 31
Kate 158 526 64 48 1 0 797
Andrew 154 230 0 0 0 0 384
Tom 1 0 19 20 2 8 50
Maya 260 467 0 0 1 1 729
Ben 169 530 59 40 3 0 801
Justin 125 164 0 0 4 0 293
Alvin 0 1 0 0 0 0 1
Total 871 1918 159 114 11 13 3086
I want to merge all the sheets into one sheet, drop the first 3 rows of each sheet, and this is the output I am looking for:
Sl.No Name US_jun2019 DX_jun2019 CT_jun2019 MR_jun2019 MG_jun2019 BMD_jun2019 TOTAL_jun2019 MR_may2019 DX_may2019 CT_may2019 US_may2019 MG_may2019 BMD_may2019 TOTAL_may2019
1 Svetlana 4 0 17 6 0 4 31 29 275 101 126 5 5 541
2 Kate 158 526 64 48 1 0 797 32 652 67 171 1 0 923
3 Andrew 154 230 0 0 0 0 384 0 353 0 406 0 0 759
4 Tom 1 0 19 20 2 8 50 0 2 9 0 1 9 21
5 Maya 260 467 0 0 1 1 729 0 1009 0 143 0 0 1152
6 Ben 169 530 59 40 3 0 801 50 461 61 104 4 0 680
7 Justin 125 164 0 0 4 0 293 0 452 0 259 1 0 712
8 Alvin 0 1 0 0 0 0 1 #N/A #N/A #N/A #N/A #N/A #N/A #N/A
I tried the following code, but the output is not the one I am looking for:
df=pd.concat(df,sort=False)
df= df.drop(df.index[[0,1]])
df=df.rename(columns=df.iloc[0])
df= df.drop(df.index[[0]])
df=df.drop(['Sl.No'], axis = 1)
print(df)
First, read both Excel sheets. Pass header=None so the banner rows stay in the data and the drops below line up.
>>> df1 = pd.read_excel('path/to/excel/file.xlsx', sheet_name="may2019", header=None)
>>> df2 = pd.read_excel('path/to/excel/file.xlsx', sheet_name="jun2019", header=None)
Drop the first three rows.
>>> df1.drop(index=range(3), inplace=True)
>>> df2.drop(index=range(3), inplace=True)
Rename the columns to the values in the first remaining row (the real header row), then drop that row. Label 0 was already dropped above, so drop by the first remaining index label rather than index=[0].
>>> df1.rename(columns=dict(zip(df1.columns, df1.iloc[0])), inplace=True)
>>> df1.drop(index=df1.index[0], inplace=True)
>>> df2.rename(columns=dict(zip(df2.columns, df2.iloc[0])), inplace=True)
>>> df2.drop(index=df2.index[0], inplace=True)
Add suffixes to the columns.
>>> df1.rename(columns=lambda col_name: col_name + '_may2019', inplace=True)
>>> df2.rename(columns=lambda col_name: col_name + '_jun2019', inplace=True)
Remove the duplicate name column in the second DF. It has already been suffixed, and its header may be spelled inconsistently ('Name' vs 'NAme'), so drop it by position:
>>> df2.drop(columns=df2.columns[0], inplace=True)
Concatenate both DataFrames. Note that pd.concat has no inplace parameter:
>>> df = pd.concat([df2, df1], axis=1)
All the code in one place:
import pandas as pd
df1 = pd.read_excel('path/to/excel/file.xlsx', sheet_name="may2019", header=None)
df2 = pd.read_excel('path/to/excel/file.xlsx', sheet_name="jun2019", header=None)
df1.drop(index=range(3), inplace=True)
df2.drop(index=range(3), inplace=True)
df1.rename(columns=dict(zip(df1.columns, df1.iloc[0])), inplace=True)
df1.drop(index=df1.index[0], inplace=True)
df2.rename(columns=dict(zip(df2.columns, df2.iloc[0])), inplace=True)
df2.drop(index=df2.index[0], inplace=True)
df1.rename(columns=lambda col_name: col_name + '_may2019', inplace=True)
df2.rename(columns=lambda col_name: col_name + '_jun2019', inplace=True)
df2.drop(columns=df2.columns[0], inplace=True)
df = pd.concat([df2, df1], axis=1)
print(df)
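If the two sheets do not list exactly the same people in the same order, concatenating by position can misalign rows. A hedged alternative is to merge on the name column instead, so someone present in only one sheet (like Alvin) gets NaN in the other, mirroring the #N/A cells in the desired output. The clean helper here is an illustrative sketch, not part of the original code:
import pandas as pd

def clean(df, suffix):
    df = df.iloc[3:]                                  # drop the three banner rows
    df.columns = df.iloc[0]                           # promote the real header row
    df = df.iloc[1:]
    df = df.rename(columns={df.columns[0]: 'Name'})   # normalize 'Name' / 'NAme'
    return df.rename(columns=lambda c: c if c == 'Name' else c + '_' + suffix)

df1 = pd.read_excel('path/to/excel/file.xlsx', sheet_name="may2019", header=None)
df2 = pd.read_excel('path/to/excel/file.xlsx', sheet_name="jun2019", header=None)
merged = clean(df2, 'jun2019').merge(clean(df1, 'may2019'), on='Name', how='outer')
print(merged)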
Consider the following table: I have some values for each state per year and age.
Age  Year  State1  State2  State3
1    2010  123     456     789
2    2010  111     222     333
1    2011  444     555     666
2    2011  777     888     999
Now I'd like to transpose the table in such a way, that the Year becomes the columns:
Age  State   2010  2011
1    State1  123   444
1    State2  456   555
1    State3  789   666
2    State1  111   777
2    State2  222   888
2    State3  333   999
I can't figure out how to pivot only that specific column.
What would be a good solution to achieve this in Pandas?
You can stack and unstack your dataframe:
out = (
    df.set_index(["Age", "Year"])
      .stack()
      .unstack("Year")
      .reset_index()
      .rename(columns={"level_1": "State"})
)
Year Age State 2010 2011
0 1 State1 123 444
1 1 State2 456 555
2 1 State3 789 666
3 2 State1 111 777
4 2 State2 222 888
5 2 State3 333 999
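To see why this works, it can help to print the intermediate long shape: stack moves the State columns into the index, and unstack("Year") then rotates the Year level back out into columns.
# The intermediate long Series: one value per (Age, Year, State) triple.
long = df.set_index(["Age", "Year"]).stack()
print(long.head())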
What you're looking for is pd.melt. We can use it along with a custom index and unstack:
df1 = pd.melt(df, id_vars=['Year','Age'], var_name='State')
out = df1.set_index([df1.groupby(['Year']).cumcount(), 'Year', 'State', 'Age'])\
         .unstack('Year').droplevel(0, axis=1).reset_index([1, 2])
Year State Age 2010 2011
0 State1 1 123 444
1 State1 2 111 777
2 State2 1 456 555
3 State2 2 222 888
4 State3 1 789 666
5 State3 2 333 999
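A shorter route from the same melted frame, as a sketch, is pivot_table; every (Age, State, Year) triple is unique here, so the default aggregation never actually combines rows:
# Sketch: pivot the melted frame directly, then flatten the index.
long = pd.melt(df, id_vars=['Year','Age'], var_name='State')
out = (long.pivot_table(index=['Age','State'], columns='Year', values='value')
           .reset_index())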
I have a dataframe - df as below :
Stud_id card Nation Gender Age Code Amount yearmonth
111 1 India M Adult 543 100 201601
111 1 India M Adult 543 100 201601
111 1 India M Adult 543 150 201602
111 1 India M Adult 612 100 201602
111 1 India M Adult 715 200 201603
222 2 India M Adult 715 200 201601
222 2 India M Adult 543 100 201604
222 2 India M Adult 543 100 201603
333 3 India M Adult 543 100 201601
333 3 India M Adult 543 100 201601
333 4 India M Adult 543 150 201602
333 4 India M Adult 612 100 201607
Now, I want two dataframes as below :
df_1 :
card Code Total_Amount Avg_Amount
1 543 350 175
2 543 200 100
3 543 200 200
4 543 150 150
1 612 100 100
4 612 100 100
1 715 200 200
2 715 200 200
Logic for df_1 :
1. Total_Amount: for each unique Card and Code, the sum of Amount (e.g. Card 1, Code 543 = 350).
2. Avg_Amount: the Total_Amount divided by the number of unique yearmonth values for that Card and Code (e.g. Total_Amount = 350, number of unique yearmonth values = 2, so 175).
df_2 :
Code Avg_Amount
543 156.25
612 100
715 200
Logic for df_2 :
1. Avg_Amount: the mean of df_1's Avg_Amount for each Code (e.g. for Code 543, the sum of Avg_Amount is 175+100+200+150 = 625; divide it by the number of rows, 4, to get 625/4 = 156.25).
Code to create the data frame - df :
df=pd.DataFrame({'Cus_id': (111,111,111,111,111,222,222,222,333,333,333,333),
'Card': (1,1,1,1,1,2,2,2,3,3,4,4),
'Nation':('India','India','India','India','India','India','India','India','India','India','India','India'),
'Gender': ('M','M','M','M','M','M','M','M','M','M','M','M'),
'Age':('Adult','Adult','Adult','Adult','Adult','Adult','Adult','Adult','Adult','Adult','Adult','Adult'),
'Code':(543,543,543,612,715,715,543,543,543,543,543,612),
'Amount': (100,100,150,100,200,200,100,100,100,100,150,100),
'yearmonth':(201601,201601,201602,201602,201603,201601,201604,201603,201601,201601,201602,201607)})
Code to get the required df_1 and df_2:
df1 = df.groupby(['Card','Code'])[['yearmonth','Amount']].apply(
    lambda x: [sum(x.Amount), sum(x.Amount)/len(set(x.yearmonth))]).apply(
    pd.Series).reset_index()
df1.columns = ['Card','Code','Total_Amount','Avg_Amount']
df2 = df1.groupby('Code')['Avg_Amount'].apply(lambda x: sum(x)/len(x)).reset_index(
    name='Avg_Amount')
Though the code works fine, my dataset is huge and it takes a long time. I suspect the apply calls are the bottleneck. Is there a more optimized approach?
For DataFrame 1 you can do this:
tmp = df.groupby(['Card', 'Code'], as_index=False) \
        .agg({'Amount': 'sum', 'yearmonth': pd.Series.nunique})
df1 = tmp.assign(Avg_Amount=tmp.Amount / tmp.yearmonth) \
         .drop(columns=['yearmonth'])
Card Code Amount Avg_Amount
0 1 543 350 175.0
1 1 612 100 100.0
2 1 715 200 200.0
3 2 543 200 100.0
4 2 715 200 200.0
5 3 543 200 200.0
6 4 543 150 150.0
7 4 612 100 100.0
For DataFrame 2 you can do this:
df1.groupby('Code', as_index=False) \
   .agg({'Avg_Amount': 'mean'})
Code Avg_Amount
0 543 156.25
1 612 100.00
2 715 200.00
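If you are on pandas 0.25 or later, named aggregation expresses the same computation a bit more compactly; a sketch:
# Sketch: 'nunique' as a string aggregator plus named output columns.
df1 = (df.groupby(['Card','Code'])
         .agg(Total_Amount=('Amount','sum'),
              n_months=('yearmonth','nunique'))
         .assign(Avg_Amount=lambda t: t.Total_Amount / t.n_months)
         .drop(columns='n_months')
         .reset_index())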
I have data in a format that I intended to make easy to pivot on dates; it looks like:
Region 0 1 2 3
Date 2005-01-01 2005-02-01 2005-03-01 ....
East South Central 400 500 600
Pacific 100 200 150
.
.
Mountain 500 600 450
I need to pivot this table so it looks like:
0 Date Region value
1 2005-01-01 East South Central 400
2 2005-02-01 East South Central 500
3 2005-03-01 East South Central 600
.
.
4 2005-03-01 Pacific 100
4 2005-03-01 Pacific 200
4 2005-03-01 Pacific 150
.
.
Since both Date and Region sit under one another, I'm not sure how to melt or pivot around these strings to get my desired output.
How can I go about this?
I think this is the solution you are looking for, shown by example.
import pandas as pd
import numpy as np
N = 100
df = pd.DataFrame([[i for i in range(N)],
                   ['2016-{}'.format(i) for i in range(N)],
                   list(np.random.randint(0, 500, N)),
                   list(np.random.randint(0, 500, N)),
                   list(np.random.randint(0, 500, N)),
                   list(np.random.randint(0, 500, N))])
df.index = ['Region', 'Date', 'a', 'b', 'c', 'd']
print(df)
print(df)
This gives
0 1 2 3 4 5 6 7 \
Region 0 1 2 3 4 5 6 7
Date 2016-0 2016-1 2016-2 2016-3 2016-4 2016-5 2016-6 2016-7
a 96 432 181 64 87 355 339 314
b 360 23 162 98 450 78 114 109
c 143 375 420 493 321 277 208 317
d 371 144 207 108 163 67 465 130
And the solution to pivot this into the form you want is
df.transpose().melt(id_vars=['Date'], value_vars=['a', 'b', 'c', 'd'])
which gives
Date variable value
0 2016-0 a 96
1 2016-1 a 432
2 2016-2 a 181
3 2016-3 a 64
4 2016-4 a 87
5 2016-5 a 355
6 2016-6 a 339
7 2016-7 a 314
8 2016-8 a 111
9 2016-9 a 121
10 2016-10 a 124
11 2016-11 a 383
12 2016-12 a 424
13 2016-13 a 453
...
393 2016-93 d 176
394 2016-94 d 277
395 2016-95 d 256
396 2016-96 d 174
397 2016-97 d 349
398 2016-98 d 414
399 2016-99 d 132
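If you want the melted column to be called Region rather than the default variable, melt's var_name parameter handles that directly:
# Same melt, with the variable column named explicitly.
tidy = df.transpose().melt(id_vars=['Date'], value_vars=['a', 'b', 'c', 'd'],
                           var_name='Region')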
I want to evaluate the strategy I made:
buy where K_Class is 1
sell where K_Class is 0
All prices refer to the Close column at that time.
For example:
Suppose I start with 10000. The first buy is on 2017/03/13 and the first sell on 2017/03/17; the second buy is on 2017/03/20 and the second sell on 2017/03/22.
My question: by the end of the data, how do I calculate the amount of money I have?
Time Close K_Class
0 2017/03/06 31.72 0
1 2017/03/08 33.99 0
2 2017/03/09 32.02 0
3 2017/03/10 30.66 0
4 2017/03/13 30.94 1
5 2017/03/15 32.56 1
6 2017/03/17 33.31 0
7 2017/03/20 34.07 1
8 2017/03/22 34.40 0
9 2017/03/24 32.98 1
10 2017/03/27 33.26 0
11 2017/03/28 31.60 0
12 2017/03/29 30.36 0
13 2017/03/30 28.83 0
14 2017/04/11 27.01 0
15 2017/04/12 24.31 0
16 2017/04/14 24.22 0
17 2017/04/17 21.80 0
18 2017/04/18 21.20 1
19 2017/04/19 23.32 1
20 2017/04/20 24.43 0
21 2017/04/24 23.85 1
22 2017/04/26 23.97 1
23 2017/04/27 24.31 1
24 2017/04/28 23.50 1
25 2017/05/02 22.57 1
26 2017/05/03 22.67 1
27 2017/05/04 22.11 1
28 2017/05/05 21.26 1
29 2017/05/08 19.37 1
.. ... ... ...
275 2018/08/01 13.38 0
276 2018/08/03 12.49 0
277 2018/08/06 12.50 0
278 2018/08/07 12.78 0
279 2018/08/09 12.93 0
280 2018/08/10 13.15 0
281 2018/08/13 13.14 1
282 2018/08/14 13.15 0
283 2018/08/15 12.80 0
284 2018/08/17 12.29 0
285 2018/08/21 12.39 0
286 2018/08/22 12.15 0
287 2018/08/23 12.27 0
288 2018/08/24 12.31 0
289 2018/08/27 12.47 0
290 2018/08/29 12.31 0
291 2018/08/30 12.13 0
292 2018/08/31 11.69 0
293 2018/09/03 11.60 1
294 2018/09/04 11.65 0
295 2018/09/05 11.45 0
296 2018/09/07 11.42 0
297 2018/09/10 10.71 0
298 2018/09/11 10.76 1
299 2018/09/12 10.74 0
300 2018/09/13 10.85 1
301 2018/09/14 10.79 0
302 2018/09/18 10.58 1
303 2018/09/19 10.65 1
304 2018/09/21 10.73 1
You can start with this:
df = pd.DataFrame({'price':np.arange(10), 'class':np.random.randint(2, size=10)})
df['diff'] = -1 * df['class'].diff()
df.loc[0,['diff']] = -1 * df.loc[0,['class']].values
df['money'] = df['price']*df['diff']
So 'diff' represents the buy and sell actions (-1 for buy and +1 for sell). Its product with the price gives the change in your money at each trade. Sum it up and add your initial money to get your final amount.
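With that, the final amount (assuming the strategy trades exactly one share at a time, and noting that a still-open final buy would not be counted) is just:
# One-share-per-trade sketch: initial cash plus the summed signed cash flows.
final_money = 10000 + df['money'].sum()
print(final_money)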
df['diff'] = df['K_Class'].diff()

shares = 0
current_amount = 10000
for n in range(len(df)):
    if df['diff'].iloc[n] == 1:       # 0 -> 1: buy with all available cash
        shares = current_amount / df['Close'].iloc[n]
    if df['diff'].iloc[n] == -1:      # 1 -> 0: sell all shares
        current_amount = shares * df['Close'].iloc[n]
print(current_amount)
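A vectorized sketch of the same loop, compounding the full balance through each buy/sell pair (an unmatched final buy is dropped, which matches the loop's behaviour of reporting cash only):
import numpy as np

# Pair each buy price with the sell price that follows it, then compound.
buys = df.loc[df['diff'] == 1, 'Close'].values
sells = df.loc[df['diff'] == -1, 'Close'].values
n = min(len(buys), len(sells))
final_amount = 10000 * np.prod(sells[:n] / buys[:n])
print(final_amount)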