Calculating CAGR row by row in a pandas data frame? - python

I am working with company data. I have a data set of around 1,900 companies (index) and 30 variables per company (columns). These variables always come in groups of three (one per period). It basically looks like this:
df = pd.DataFrame({'id' : ['1','2','3','7'],
                   'revenue_0' : [7,2,5,4],
                   'revenue_1' : [5,6,3,1],
                   'revenue_2' : [1,9,4,8],
                   'profit_0' : [3,6,4,4],
                   'profit_1' : [4,6,9,1],
                   'profit_2' : [5,5,9,8]})
I am trying to compute the compound annual growth rate (CAGR) for e.g. revenue for each company (id) - such that revenue_cagr = ((revenue_2/revenue_0)^(1/3))-1
I would like to pass a function to a set of columns row by row - at least, that is my idea.
def CAGR(start_value, end_value, periods):
    # Python exponentiation is **, not ^ (which is bitwise XOR)
    return ((end_value / start_value) ** (1 / periods)) - 1
Is it possible to apply this function row by row for a set of columns (maybe with for i, row in df.iterrows(): or df.apply())? Or is there a smarter way to do this?
Update
The desired outcome - exemplified with the column revenue_cagr - should look as follows:
df = pd.DataFrame({'id' : ['1','2','3','7'],
                   'revenue_0' : [7,2,5,4],
                   'revenue_1' : [5,6,3,1],
                   'revenue_2' : [1,9,4,8],
                   'profit_0' : [3,6,4,4],
                   'profit_1' : [4,6,9,1],
                   'profit_2' : [5,5,9,8],
                   'revenue_cagr' : [-0.48, 0.65, -0.07, 0.26],
                   'profit_cagr' : [0.19, -0.06, 0.31, 0.26]})

You can first use set_index + str.rsplit to split the column names into (measure, period) levels:
df1 = df.set_index('id')
df1.columns = df1.columns.str.rsplit('_', expand=True, n=1)
print (df1)
   profit        revenue
        0  1  2        0  1  2
id
1       3  4  5        7  5  1
2       6  6  5        2  6  9
3       4  9  9        5  3  4
7       4  1  8        4  1  8
Then use xs to select the period 2 and period 0 levels, divide them with div, and chain pow, sub and add_suffix:
df1 = (df1.xs('2', axis=1, level=1)
          .div(df1.xs('0', axis=1, level=1))
          .pow(1./3)
          .sub(1)
          .add_suffix('_cagr'))
print (df1)
profit_cagr revenue_cagr
id
1 0.185631 -0.477242
2 -0.058964 0.650964
3 0.310371 -0.071682
7 0.259921 0.259921
Last join to original:
df = df.join(df1, on='id')
print (df)
id profit_0 profit_1 profit_2 revenue_0 revenue_1 revenue_2 \
0 1 3 4 5 7 5 1
1 2 6 6 5 2 6 9
2 3 4 9 9 5 3 4
3 7 4 1 8 4 1 8
profit_cagr revenue_cagr
0 0.185631 -0.477242
1 -0.058964 0.650964
2 0.310371 -0.071682
3 0.259921 0.259921
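If you prefer to stay with the flat column names, a minimal alternative sketch that loops over the two measures directly (assuming the _0/_2 naming from the sample) produces the same result:
for col in ['revenue', 'profit']:
    # end value divided by start value, compounded over the three periods
    df[col + '_cagr'] = (df[col + '_2'] / df[col + '_0']) ** (1. / 3) - 1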

Related

Sum up values in a different number of columns for each row

I have a data frame containing the number of tickets sold in different price buckets for each flight.
For each record/row, I want to use the value in one column as an end position for iloc, to sum up values across a specific number of columns.
That is, for each row, I want to sum up the values from column index 5 up to the position stored in ['iloc_value'].
I tried df.iloc[:, 5:df['iloc_value']].sum(axis=1) but it did not work.
sample data:
   A  B  C  D  iloc_value
0  1  2  3  2           1
1  1  3  4  2           2
2  4  6  3  2           1
for each row, I want to sum up a number of columns based on the value in ['iloc_value'], for example:
for row 0, I want the total to be 1+2
for row 1, I want the total to be 1+3+4
for row 2, I want the total to be 4+6
EDIT:
I quickly got the results this way:
First define a function that can do it for one row:
def sum_till_iloc_value(row):
    # integer slices on a row Series are positional, so this sums the
    # values from position 0 through position row['iloc_value']
    return sum(row[:row['iloc_value'] + 1])
Then apply it to all rows to generate your output:
df['sum'] = df.apply(sum_till_iloc_value, axis=1)
A B C D iloc_value sum
0 1 2 3 2 1 3
1 1 3 4 2 2 8
2 4 6 3 2 1 10
PREVIOUSLY:
Assuming you have information that looks like:
df_flights = pd.DataFrame({'flight':['f1', 'f2', 'f3'], 'business':[2,3,4], 'economy':[6,7,8]})
df_flights
flight business economy
0 f1 2 6
1 f2 3 7
2 f3 4 8
you can sum the columns you want as below:
df_flights['seat_count'] = df_flights['business'] + df_flights['economy']
This will create a new column that you can later select:
df_flights[['flight', 'seat_count']]
flight seat_count
0 f1 8
1 f2 10
2 f3 12
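If there are more than two such columns, selecting them and summing along axis=1 generalizes the same idea:
# row-wise sum over any list of columns; equivalent to the addition above
df_flights['seat_count'] = df_flights[['business', 'economy']].sum(axis=1)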
Here's a fully vectorized way to do it: melt the dataframe, keep only the relevant values, and bring the totals back into the dataframe:
# map the data column names to their positional index
d = {y: x for x, y in enumerate(df.columns[:-1])}
temp_df = df.rename(columns=d).reset_index().melt(id_vars=["index", "iloc_value"])
# keep only the columns up to each row's iloc_value, then sum per row
temp_df = temp_df[temp_df.variable <= temp_df.iloc_value]
df["total"] = temp_df.groupby("index").value.sum()
The output is:
A B C D iloc_value total
0 1 2 3 2 1 3
1 1 3 4 2 2 8
2 4 6 3 2 1 10
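A NumPy broadcasting variant of the same idea (a sketch, assuming the data columns are A through D as in the sample): build a boolean mask of the column positions each row should include, then sum the masked values:
import numpy as np

vals = df[['A', 'B', 'C', 'D']].to_numpy()
# keep column position j in a row when j <= that row's iloc_value
mask = np.arange(vals.shape[1]) <= df['iloc_value'].to_numpy()[:, None]
df['total'] = (vals * mask).sum(axis=1)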

How to subtract a row value from the values of all rows of another dataframe

I have these two df's
df1:
lon lat
0 -60.7 -2.8333333333333335
1 -55.983333333333334 -2.4833333333333334
2 -51.06666666666667 -0.05
3 -66.96666666666667 -0.11666666666666667
4 -48.483333333333334 -1.3833333333333333
5 -54.71666666666667 -2.4333333333333336
6 -44.233333333333334 -2.6
7 -59.983333333333334 -3.15
df2:
lon lat
0 -24.109 -2.0035
1 -17.891 -1.70911
2 -14.5822 -1.7470700000000001
3 -12.8138 -1.72322
4 -14.0688 -1.5028700000000002
5 -13.8406 -1.44416
6 -12.1292 -0.671266
7 -13.8406 -0.8824270000000001
8 -15.12 -18.223
I want to subtract all values of df2['lat'] from each value of df1['lat'].
Something like this:
results0=df1.loc[0,'lat']-df2.loc[:,'lat']
results1=df1.loc[1,'lat']-df2.loc[:,'lat']
#etc etc....
So I tried this:
for i, j in zip(range(len(df1)), range(len(df2))):
    exec(f"result{i} = df1.loc[{i},'lat'] - df2.loc[{j},'lat']")
But it only gave me a single value for each result, instead of a full Series of values for each one.
I would appreciate any possible solution. Thanks!
You can create a list of Series:
L = [df1.loc[i,'lat']-df2['lat'] for i in df1.index]
Or you can use numpy broadcasting to build a new DataFrame:
arr = df1['lat'].to_numpy() - df2['lat'].to_numpy()[:, None]
df3 = pd.DataFrame(arr, index=df2.index, columns=df1.index)
print (df3)
0 1 2 3 4 5 \
0 -0.829833 -0.479833 1.953500 1.886833 0.620167 -0.429833
1 -1.124223 -0.774223 1.659110 1.592443 0.325777 -0.724223
2 -1.086263 -0.736263 1.697070 1.630403 0.363737 -0.686263
3 -1.110113 -0.760113 1.673220 1.606553 0.339887 -0.710113
4 -1.330463 -0.980463 1.452870 1.386203 0.119537 -0.930463
5 -1.389173 -1.039173 1.394160 1.327493 0.060827 -0.989173
6 -2.162067 -1.812067 0.621266 0.554599 -0.712067 -1.762067
7 -1.950906 -1.600906 0.832427 0.765760 -0.500906 -1.550906
8 15.389667 15.739667 18.173000 18.106333 16.839667 15.789667
6 7
0 -0.596500 -1.146500
1 -0.890890 -1.440890
2 -0.852930 -1.402930
3 -0.876780 -1.426780
4 -1.097130 -1.647130
5 -1.155840 -1.705840
6 -1.928734 -2.478734
7 -1.717573 -2.267573
8 15.623000 15.073000
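The same broadcast can also be written with np.subtract.outer, which reads more explicitly (a sketch, assuming numpy is imported as np; the leading minus flips the operand order so each cell is df1 minus df2):
# arr[i, j] == df1['lat'][j] - df2['lat'][i]
arr = -np.subtract.outer(df2['lat'].to_numpy(), df1['lat'].to_numpy())
df3 = pd.DataFrame(arr, index=df2.index, columns=df1.index)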
Since df1 has one row fewer than df2, an index-aligned elementwise subtraction is also possible:
df1['lat'] = df1['lat'] - df2.loc[:df1.shape[0]-1, 'lat']
output:
0 -0.829833
1 -0.774223
2 1.697070
3 1.606553
4 0.119537
5 -0.989173
6 -1.928734
7 -2.267573
Name: lat, dtype: float64

How to find the maximum value of a column with pandas?

I have a table with 40 columns and 1500 rows. I want to find the maximum value among the 30th-32nd columns (3 columns). How can this be done? I want to return the maximum value among these 3 columns, along with the index of the dataframe.
print(Max_kVA_df.iloc[:, 30:33].max())
Hi, you can refer to this example:
import pandas as pd
df = pd.DataFrame({'col1': [1,2,3,4,5],
                   'col2': [4,5,6,7,8],
                   'col3': [2,3,4,5,7]})
print(df)
# Select the range of columns you want; in your case change 0:3 to 30:33
# (the end of the slice, 33, is excluded)
ser = df.iloc[:, 0:3].max()
print(ser.max())
Output
8
Select values by position and use np.max.
Sample, for the maximum over the first 5 rows:
import numpy as np

np.random.seed(123)
df = pd.DataFrame(np.random.randint(10, size=(10, 3)), columns=list('ABC'))
print (df)
A B C
0 2 2 6
1 1 3 9
2 6 1 0
3 1 9 0
4 0 9 3
print (df.iloc[0:5])
A B C
0 2 2 6
1 1 3 9
2 6 1 0
3 1 9 0
4 0 9 3
print (np.max(df.iloc[0:5].max()))
9
Or use iloc this way, selecting the three columns by position and taking the row-wise maximum:
print(df.iloc[:, 30:33].max(axis=1))
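If you also need the row index where the overall maximum sits (the question asks for the index of the dataframe), idxmax on the row-wise maxima is one way (a sketch on the 0:3 demo columns; swap in 30:33 for the real data):
sub = df.iloc[:, 0:3]                   # columns 30:33 on the real data
row_max = sub.max(axis=1)               # per-row maximum across the 3 columns
print(row_max.max(), row_max.idxmax())  # overall max and the row index holding it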

Encode pandas column as categorical values

I have a dataframe as follows:
d = {'item': [1, 2, 3, 4, 5, 6], 'time': [1297468800, 1297468809, 1297468801, 1297468890, 1297468820, 1297468805]}
df = pd.DataFrame(data=d)
The output of df is as follows:
item time
0 1 1297468800
1 2 1297468809
2 3 1297468801
3 4 1297468890
4 5 1297468820
5 6 1297468805
The time here is based on Unix system time. My goal is to recode the time column in the dataframe, where
mintime = 1297468800
maxtime = 1297468890
I want to split this time range into 10 intervals (adjustable via a parameter, e.g. 20 intervals) and recode the time column in df accordingly. Such as:
item time
0 1 1
1 2 1
2 3 1
3 4 9
4 5 3
5 6 1
What is the most efficient way to do this, since I have billions of records? Thanks
You can use pd.cut with np.linspace to specify the bins. This encodes your column categorically, from which you can then extract the codes in order:
bins = np.linspace(df.time.min() - 1, df.time.max(), 10)
df['time'] = pd.cut(df.time, bins=bins, right=True).cat.codes + 1
df
item time
0 1 1
1 2 1
2 3 1
3 4 9
4 5 3
5 6 1
Alternatively, depending on how you treat the interval edges, you could also do
bins = np.linspace(df.time.min(), df.time.max() + 1, 10)
pd.cut(df.time, bins=bins, right=False).cat.codes + 1
0 1
1 1
2 1
3 9
4 2
5 1
dtype: int8
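As a side note, pd.cut(..., labels=False) returns the integer bin codes directly, which skips the categorical step (a sketch using the same bins):
# labels=False yields the zero-based bin index as plain integers
df['time'] = pd.cut(df['time'], bins=bins, right=True, labels=False) + 1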

Ranking with multiple columns in Dataframe

I have a dataframe with 3 columns
Alpha Bravo Charlie
20 30 40
50 10 20
40 60 10
I wish to create 3 new columns with rankings such that the highest value among the 3 columns gets rank 3 and the lowest gets rank 1:
AlphaRank BravoRank CharlieRank
1 2 3
3 1 2
2 3 1
I understand there is a dataframe.rank function, but I have only seen examples for 1 column, not 3.
I tried this, with issues:
for newrank in ['Alpha', 'Bravo', 'Charlie']:
    ranksys = df[newrank]
    ranksystem = newrank + 'Rank'
    df[ranksystem] = ranksys.rank(axis=1).astype(int)
I think you need rank + astype:
cols = ['Alpha', 'Bravo', 'Charlie']
df[cols] = df[cols].rank().astype(int)
print (df)
Alpha Bravo Charlie
0 1 2 3
1 3 1 2
2 2 3 1
Numpy alternative with numpy.argsort, applied twice to turn sort positions into ranks (a single argsort only matches the rank on samples like this one, where the sort permutation is its own inverse):
df[cols] = pd.DataFrame(df[cols].values.argsort(axis=0).argsort(axis=0) + 1,
                        index=df.index, columns=cols)
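Note that rank() with no arguments ranks down each column (axis=0). If the intent was instead to rank the three values within each row, pass axis=1; on this sample both give the same output, but in general they differ. A sketch that keeps the ...Rank column names from the question (starting from the original, un-ranked frame):
# rank across the three columns within each row
ranks = df[cols].rank(axis=1).astype(int)
ranks.columns = [c + 'Rank' for c in cols]
df = df.join(ranks)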
