Python - Pandas: how to divide by a specific key's value

I would like to compute a new column from values in other rows of a pandas DataFrame.
For example, given this DataFrame,
import pandas as pd

df = pd.DataFrame({
    "year": ['2017', '2017', '2017', '2017', '2017', '2017', '2017', '2017', '2017'],
    "rooms": ['1', '2', '3', '1', '2', '3', '1', '2', '3'],
    "city": ['tokyo', 'tokyo', 'toyko', 'nyc', 'nyc', 'nyc', 'paris', 'paris', 'paris'],
    "rent": [1000, 1500, 2000, 1200, 1600, 1900, 900, 1500, 2200],
})
print(df)
city rent rooms year
0 tokyo 1000 1 2017
1 tokyo 1500 2 2017
2 toyko 2000 3 2017
3 nyc 1200 1 2017
4 nyc 1600 2 2017
5 nyc 1900 3 2017
6 paris 900 1 2017
7 paris 1500 2 2017
8 paris 2200 3 2017
I'd like to add a column with each city's rent relative to nyc's rent for the same year and rooms.
Ideal results are like below,
city rent rooms year vs_nyc
0 tokyo 1000 1 2017 0.833333
1 tokyo 1500 2 2017 0.9375
2 toyko 2000 3 2017 1.052631
3 nyc 1200 1 2017 1.0
4 nyc 1600 2 2017 1.0
5 nyc 1900 3 2017 1.0
6 paris 900 1 2017 0.75
7 paris 1500 2 2017 0.9375
8 paris 2200 3 2017 1.157894
How can I add a column like vs_nyc, taking year and rooms into account?
I tried the following, but it did not work:
# filtering gives NaN values, and fillna(method='pad') also did not work
df.rent / df[df['city'] == 'nyc'].rent
0 NaN
1 NaN
2 NaN
3 1.0
4 1.0
5 1.0
6 NaN
7 NaN
8 NaN
Name: rent, dtype: float64

To illustrate:
set_index + unstack
d1 = df.set_index(['city', 'year', 'rooms']).rent.unstack('city')
d1
city nyc paris tokyo toyko
year rooms
2017 1 1200.0 900.0 1000.0 NaN
2 1600.0 1500.0 1500.0 NaN
3 1900.0 2200.0 NaN 2000.0
Then we can divide
d1.div(d1.nyc, 0)
city nyc paris tokyo toyko
year rooms
2017 1 1.0 0.750000 0.833333 NaN
2 1.0 0.937500 0.937500 NaN
3 1.0 1.157895 NaN 1.052632
solution
d1 = df.set_index(['city', 'year', 'rooms']).rent.unstack('city')
df.join(d1.div(d1.nyc, 0).stack().rename('vs_nyc'), on=['year', 'rooms', 'city'])
city rent rooms year vs_nyc
0 tokyo 1000 1 2017 0.833333
1 tokyo 1500 2 2017 0.937500
2 toyko 2000 3 2017 1.052632
3 nyc 1200 1 2017 1.000000
4 nyc 1600 2 2017 1.000000
5 nyc 1900 3 2017 1.000000
6 paris 900 1 2017 0.750000
7 paris 1500 2 2017 0.937500
8 paris 2200 3 2017 1.157895
A little cleaned up
cols = ['city', 'year', 'rooms']
ny_rent = df.set_index(cols).rent.loc['nyc'].rename('ny_rent')
df.assign(vs_nyc=df.rent / df.join(ny_rent, on=ny_rent.index.names).ny_rent)
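As an alternative sketch of the same idea (not from the original answer), the nyc rows can be split off and merged back as a baseline column; the merge keys and the nyc_rent column name here are illustrative choices:

```python
import pandas as pd

df = pd.DataFrame({
    "year": ['2017'] * 9,
    "rooms": ['1', '2', '3'] * 3,
    "city": ['tokyo', 'tokyo', 'toyko', 'nyc', 'nyc', 'nyc', 'paris', 'paris', 'paris'],
    "rent": [1000, 1500, 2000, 1200, 1600, 1900, 900, 1500, 2200],
})

# nyc's rent per (year, rooms) becomes the baseline column.
base = (df.loc[df.city == 'nyc', ['year', 'rooms', 'rent']]
          .rename(columns={'rent': 'nyc_rent'}))

# A left merge broadcasts the baseline onto every row with the same keys;
# cities with no nyc counterpart for a (year, rooms) pair get NaN.
out = df.merge(base, on=['year', 'rooms'], how='left')
out['vs_nyc'] = out['rent'] / out['nyc_rent']
```

This avoids building the wide unstacked table, at the cost of an extra helper column.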

Fill up columns in dataframe based on condition

I have a dataframe that looks as follows:
id cyear month datadate fyear
1 1988 3 nan nan
1 1988 4 nan nan
1 1988 5 1988-05-31 1988
1 1988 6 nan nan
1 1988 7 nan nan
1 1988 8 nan nan
1 1988 9 nan nan
1 1988 12 nan nan
1 1989 1 nan nan
1 1989 2 nan nan
1 1989 3 nan nan
1 1989 4 nan nan
1 1989 5 1989-05-31 1989
1 1989 6 nan nan
1 1989 7 nan nan
1 1989 8 nan nan
1 1990 8 nan nan
4 2000 1 nan nan
4 2000 2 nan nan
4 2000 3 nan nan
4 2000 4 nan nan
4 2000 5 nan nan
4 2000 6 nan nan
4 2000 7 nan nan
4 2000 8 nan nan
4 2000 9 nan nan
4 2000 10 nan nan
4 2000 11 nan nan
4 2000 12 2000-12-31 2000
5 2000 11 nan nan
More specifically, I have a dataframe of monthly (month) data on firms (id) per calendar year (cyear). If a given row, i.e. month, represents the end of one of the firm's fiscal years, the datadate column denotes that month's end as a date variable and the fyear column denotes the fiscal year that just ended.
I now want the fyear value to indicate the respective fiscal year not only in the last month of the company's fiscal year, but in every month within that fiscal year:
id cyear month datadate fyear
1 1988 3 nan 1988
1 1988 4 nan 1988
1 1988 5 1988-05-31 1988
1 1988 6 nan 1989
1 1988 7 nan 1989
1 1988 8 nan 1989
1 1988 9 nan 1989
1 1988 12 nan 1989
1 1989 1 nan 1989
1 1989 2 nan 1989
1 1989 3 nan 1989
1 1989 4 nan 1989
1 1989 5 1989-05-31 1989
1 1989 6 nan 1990
1 1989 7 nan 1990
1 1989 8 nan 1990
1 1990 8 nan 1991
4 2000 1 nan 2000
4 2000 2 nan 2000
4 2000 3 nan 2000
4 2000 4 nan 2000
4 2000 5 nan 2000
4 2000 6 nan 2000
4 2000 7 nan 2000
4 2000 8 nan 2000
4 2000 9 nan 2000
4 2000 10 nan 2000
4 2000 11 nan 2000
4 2000 12 2000-12-31 2000
5 2000 11 nan nan
Note that months may be missing, as in the case of id 1, and fiscal years may end in different months, with fyear=cyear or fyear=cyear+1 (I have included only the former case; the latter could be constructed by adding 1 to the current fyear values of e.g. id 1). Also, the last row(s) of a given firm need not be its fiscal-year-end month, as in the case of id 1. Lastly, there may be firms for which no information on fiscal years is available.
I appreciate any help on this.
Do you want this?
def backward_fill(x):
    # months up to a known fiscal-year end take that year (bfill);
    # months after the last known end take the following year (ffill + 1)
    x = x.bfill()
    x = x.ffill() + x.isna().astype(int)
    return x

df.fyear = df.groupby('id')['fyear'].transform(backward_fill)
Output
id cyear month datadate fyear
0 1 1988 3 <NA> 1988
1 1 1988 4 <NA> 1988
2 1 1988 5 1988-05-31 1988
3 1 1988 6 <NA> 1989
4 1 1988 7 <NA> 1989
5 1 1988 8 <NA> 1989
6 1 1988 9 <NA> 1989
7 1 1988 12 <NA> 1989
8 1 1989 1 <NA> 1989
9 1 1989 2 <NA> 1989
10 1 1989 3 <NA> 1989
11 1 1989 4 <NA> 1989
12 1 1989 5 1989-05-31 1989
13 1 1989 6 <NA> 1990
14 4 2000 1 <NA> 2000
15 4 2000 2 <NA> 2000
16 4 2000 3 <NA> 2000
17 4 2000 4 <NA> 2000
18 4 2000 5 <NA> 2000
19 4 2000 6 <NA> 2000
20 4 2000 7 <NA> 2000
21 4 2000 8 <NA> 2000
22 4 2000 9 <NA> 2000
23 4 2000 10 <NA> 2000
24 4 2000 11 <NA> 2000
25 4 2000 12 2000-12-31 2000
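A minimal self-contained sketch of that transform, on a toy frame with one firm whose fiscal year ends in month 5 (the data here is made up for illustration):

```python
import pandas as pd

# Toy frame: one firm; only the fiscal-year-end row carries fyear.
df = pd.DataFrame({
    'id':    [1, 1, 1, 1, 1],
    'month': [3, 4, 5, 6, 7],
    'fyear': [None, None, 1988, None, None],
})

def backward_fill(x):
    # Months up to a known fiscal-year end take that year...
    x = x.bfill()
    # ...and months after the last known end take the following year.
    return x.ffill() + x.isna().astype(int)

df['fyear'] = df.groupby('id')['fyear'].transform(backward_fill)
```

Note that x.isna() is evaluated on the back-filled series, so the +1 applies only to the trailing months that had no later fiscal-year end to inherit from.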

Column names setup after groupby in Python

My table is as below:
datetime source Day area Town County Country
0 2019-01-01 16:22:46 1273 Tuesday Brighton Brighton East Sussex England
1 2019-01-02 09:33:29 1823 Wednesday Taunton Taunton Somerset England
2 2019-01-02 09:44:46 1977 Wednesday Pontefract Pontefract West Yorkshire England
3 2019-01-02 10:01:42 1983 Wednesday Isle of Wight NaN NaN NaN
4 2019-01-02 12:03:13 1304 Wednesday Dover Dover Kent England
My code is
counts_by_counties = call_by_counties.groupby(['County','Town']).count()
counts_by_counties.head()
My grouped result (did the column names disappear?):
datetime source Day area Country
County Town
Aberdeenshire Aberdeen 8 8 8 8 8
Banchory 1 1 1 1 1
Blackburn 18 18 18 18 18
Ellon 6 6 6 6 6
Fraserburgh 2 2 2 2 2
I used this code to rename the column; I am wondering whether there is a more efficient way to change the column name.
# slice the table down to the datetime column
counts_by_counties = counts_by_counties[['datetime']]
# rename datetime to Counts (rename returns a new frame, so assign it back)
counts_by_counties = counts_by_counties.rename(columns={'datetime': 'Counts'})
Expected result
Counts
County Town
Aberdeenshire Aberdeen 8
Banchory 1
Blackburn 18
Call reset_index as below.
Replace
counts_by_counties = call_by_counties.groupby(['County','Town']).count()
with
counts_by_counties = call_by_counties.groupby(['County','Town']).count().reset_index()
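As a side note (an alternative sketch, not part of the original answer): groupby followed by size() produces the row count directly, so there is no per-column count to slice away or rename. The small frame below is made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    'County': ['Aberdeenshire', 'Aberdeenshire', 'Aberdeenshire', 'Kent'],
    'Town':   ['Aberdeen', 'Aberdeen', 'Banchory', 'Dover'],
})

# size() counts rows per group; reset_index(name=...) names the count
# column and restores County/Town as ordinary columns in one step.
counts_by_counties = (df.groupby(['County', 'Town'])
                        .size()
                        .reset_index(name='Counts'))
```

This avoids counting every column only to keep one of them.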

How to replace NA's with the mean of variables when the mean is calculated per each Country's data

I need help writing Python code to replace NA's with the per-country mean of a variable (suicide rate is the variable, and there are 18 years of data for each country; country is another variable). When a year is missing, I want the mean of the remaining 17 years of suicide rates for that specific country to replace the NA. For example, Saudi Arabia has one year of data missing out of the 18; I want to fill that NA with the mean of its other 17 years. The code should loop through and replace the NA's for every variable; all variables are rates of suicides or deaths. Each country has data for the years 1990 to 2018.
Suppose you had this dataframe:
ID Year Entity Variable_1 Variable_2
0 0 2000 Canada 120.0 600.0
1 1 2001 Canada 100.0 700.0
2 2 2002 Canada NaN 800.0
3 3 2000 Switzerland 300.0 200.0
4 4 2001 Switzerland 400.0 NaN
5 5 2002 Switzerland 500.0 400.0
You could create another dataframe with the means for each country and variable:
means = df.groupby('Entity').mean()
Then you could loop through each country and each variable, and set the missing values to the appropriate mean for that country and variable:
for country in df.Entity.unique():
    for col in df.drop(columns=['ID', 'Year', 'Entity']).columns:
        df.loc[(df.Entity == country) & (df[col].isnull()), col] = means.loc[country, col]
Result:
ID Year Entity Variable_1 Variable_2
0 0 2000 Canada 120.0 600.0
1 1 2001 Canada 100.0 700.0
2 2 2002 Canada 110.0 800.0
3 3 2000 Switzerland 300.0 200.0
4 4 2001 Switzerland 400.0 300.0
5 5 2002 Switzerland 500.0 400.0
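The explicit loop can also be sketched as a single vectorized step, using groupby/transform to broadcast each country's mean back onto its rows (an editor-added alternative, assuming the same column layout as above):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'Year':       [2000, 2001, 2002, 2000, 2001, 2002],
    'Entity':     ['Canada'] * 3 + ['Switzerland'] * 3,
    'Variable_1': [120.0, 100.0, np.nan, 300.0, 400.0, 500.0],
    'Variable_2': [600.0, 700.0, 800.0, 200.0, np.nan, 400.0],
})

value_cols = ['Variable_1', 'Variable_2']
# transform('mean') returns a frame aligned with df, holding each
# country's mean in every row, so fillna fills all columns at once.
df[value_cols] = df[value_cols].fillna(
    df.groupby('Entity')[value_cols].transform('mean'))
```

This scales to any number of variable columns without nested loops.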

Adding column in pandas based on values from other columns with conditions

I have a dataframe with information about sales of some products (unit):
unit year month price
0 1 2018 6 100
1 1 2013 4 70
2 2 2015 10 80
3 2 2015 2 110
4 3 2017 4 120
5 3 2002 6 90
6 4 2016 1 55
and I would like to add, for each sale, columns with information about the previous sale (NaN if there is no previous sale).
unit year month price prev_price prev_year prev_month
0 1 2018 6 100 70.0 2013.0 4.0
1 1 2013 4 70 NaN NaN NaN
2 2 2015 10 80 110.0 2015.0 2.0
3 2 2015 2 110 NaN NaN NaN
4 3 2017 4 120 90.0 2002.0 6.0
5 3 2002 6 90 NaN NaN NaN
6 4 2016 1 55 NaN NaN NaN
At the moment I group on the unit, keep the units that have several rows, extract the information associated with the minimal date for those units, then join this table with my original table, keeping only the rows whose dates differ between the two merged tables.
I feel like there is a much simpler way to do this, but I am not sure how.
Use DataFrameGroupBy.shift with add_prefix and join to append new DataFrame to original:
#if real data are not sorted
#df = df.sort_values(['unit','year','month'], ascending=[True, False, False])
df = df.join(df.groupby('unit', sort=False).shift(-1).add_prefix('prev_'))
print (df)
unit year month price prev_year prev_month prev_price
0 1 2018 6 100 2013.0 4.0 70.0
1 1 2013 4 70 NaN NaN NaN
2 2 2015 10 80 2015.0 2.0 110.0
3 2 2015 2 110 NaN NaN NaN
4 3 2017 4 120 2002.0 6.0 90.0
5 3 2002 6 90 NaN NaN NaN
6 4 2016 1 55 NaN NaN NaN
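Putting the answer's one-liner into a self-contained, runnable form (the data is the question's example):

```python
import pandas as pd

df = pd.DataFrame({
    'unit':  [1, 1, 2, 2, 3, 3, 4],
    'year':  [2018, 2013, 2015, 2015, 2017, 2002, 2016],
    'month': [6, 4, 10, 2, 4, 6, 1],
    'price': [100, 70, 80, 110, 120, 90, 55],
})

# Within each unit, shift(-1) pulls the next row's values upward, i.e.
# each sale sees the chronologically previous sale; the last row of each
# unit gets NaN.  add_prefix names the new columns; join lines them up.
df = df.join(df.groupby('unit', sort=False).shift(-1).add_prefix('prev_'))
```

This relies on rows being sorted newest-first within each unit, which is why the answer notes the sort_values step for unsorted data.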

Pandas Dataframe: shift/merge multiple rows sharing the same column values into one row

Sorry for any possible confusion with the title; I will describe my question with the following code and tables.
I have a dataframe with multiple columns, sorted by the first two, 'Route' and 'ID'. (Sorry about the formatting: all the rows here have a 'Route' value of 100 and 'ID' from 1 to 3.)
df1.head(9)
Route ID Year Vol Truck_Vol Truck_%
0 100 1 2017.0 7016 635.0 9.1
1 100 1 2014.0 6835 NaN NaN
2 100 1 2011.0 5959 352.0 5.9
3 100 2 2018.0 15828 NaN NaN
4 100 2 2015.0 13114 2964.0 22.6
5 100 2 2009.0 11844 1280.0 10.8
6 100 3 2016.0 15434 NaN NaN
7 100 3 2013.0 18699 2015.0 10.8
8 100 3 2010.0 15903 NaN NaN
What I want to have is
Route ID Year Vol1 Truck_Vol1 Truck_%1 Year2 Vol2 Truck_Vol2 Truck_%2 Year3 Vol3 Truck_Vol3 Truck_%3
0 100 1 2017 7016 635.0 9.1 2014 6835 NaN NaN 2011 5959 352.0 5.9
1 100 2 2018 15828 NaN NaN 2015 13114 2964.0 22.6 2009 11844 1280.0 10.8
2 100 3 2016 15434 NaN NaN 2013 18699 2015.0 10.8 2010 15903 NaN NaN
Again, sorry for the messy formatting. Let me try a simplified version.
Input:
Route ID Year Vol T_%
0 100 1 2017 100 1.0
1 100 1 2014 200 NaN
2 100 1 2011 300 2.0
3 100 2 2018 400 NaN
4 100 2 2015 500 3.0
5 100 2 2009 600 4.0
Desired Output:
Route ID Year Vol T_% Year.1 Vol.1 T_%.1 Year.2 Vol.2 T_%.2
0 100 1 2017 100 1.0 2014 200 NaN 2011 300 2
1 100 2 2018 400 NaN 2015 500 3.0 2009 600 4
So basically just move the cells as shown in the desired output above.
I am stumped here. The names for the newly generated columns don't matter.
For this current dataframe, I have three rows per 'group' like shown in the code. It will be great if the answer can accommodate any number of rows each group.
Thanks for your time.
with groupby + cumcount + set_index + unstack
df1 = (df.assign(cid=df.groupby(['Route', 'ID']).cumcount())
         .set_index(['Route', 'ID', 'cid'])
         .unstack(-1)
         .sort_index(axis=1, level=1))
df1.columns = [f'{x}{y}' for x, y in df1.columns]
df1 = df1.reset_index()
Output df1:
Route ID T_%0 Vol0 Year0 T_%1 Vol1 Year1 T_%2 Vol2 Year2
0 100 1 1.0 100 2017 NaN 200 2014 2.0 300 2011
1 100 2 NaN 400 2018 3.0 500 2015 4.0 600 2009
melt + pivot_table
v = df.melt(id_vars=['Route', 'ID'])
v['variable'] += v.groupby(['Route', 'ID', 'variable']).cumcount().astype(str)
res = v.pivot_table(index=['Route', 'ID'], columns='variable', values='value')
variable T_%0 T_%1 T_%2 Vol0 Vol1 Vol2 Year0 Year1 Year2
Route ID
100 1 1.0 NaN 2.0 100.0 200.0 300.0 2017.0 2014.0 2011.0
2 NaN 3.0 4.0 400.0 500.0 600.0 2018.0 2015.0 2009.0
If you want to sort these:
import numpy as np

c = res.columns.str.extract(r'(\d+)')[0].values.astype(int)
res.iloc[:, np.argsort(c)]
variable T_%0 Vol0 Year0 T_%1 Vol1 Year1 T_%2 Vol2 Year2
Route ID
100 1 1.0 100.0 2017.0 NaN 200.0 2014.0 2.0 300.0 2011.0
2 NaN 400.0 2018.0 3.0 500.0 2015.0 4.0 600.0 2009.0
You asked why I used cumcount. To explain, here is what v looks like before the cumcount suffix is added:
Route ID variable value
0 100 1 Year 2017.0
1 100 1 Year 2014.0
2 100 1 Year 2011.0
3 100 2 Year 2018.0
4 100 2 Year 2015.0
5 100 2 Year 2009.0
6 100 1 Vol 100.0
7 100 1 Vol 200.0
8 100 1 Vol 300.0
9 100 2 Vol 400.0
10 100 2 Vol 500.0
11 100 2 Vol 600.0
12 100 1 T_% 1.0
13 100 1 T_% NaN
14 100 1 T_% 2.0
15 100 2 T_% NaN
16 100 2 T_% 3.0
17 100 2 T_% 4.0
If I used pivot_table on this DataFrame, you would end up with something like this:
variable T_% Vol Year
Route ID
100 1 1.5 200.0 2014.0
2 3.5 500.0 2014.0
Obviously you are losing data here: pivot_table's default aggfunc averages the repeated entries together. cumcount is the solution, as it turns the variable series into this:
Route ID variable value
0 100 1 Year0 2017.0
1 100 1 Year1 2014.0
2 100 1 Year2 2011.0
3 100 2 Year0 2018.0
4 100 2 Year1 2015.0
5 100 2 Year2 2009.0
6 100 1 Vol0 100.0
7 100 1 Vol1 200.0
8 100 1 Vol2 300.0
9 100 2 Vol0 400.0
10 100 2 Vol1 500.0
11 100 2 Vol2 600.0
12 100 1 T_%0 1.0
13 100 1 T_%1 NaN
14 100 1 T_%2 2.0
15 100 2 T_%0 NaN
16 100 2 T_%1 3.0
17 100 2 T_%2 4.0
Where you have a count of repeated elements per unique Route and ID.
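Assembling the melt + cumcount + pivot_table approach into one runnable sketch on the simplified input:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'Route': [100] * 6,
    'ID':    [1, 1, 1, 2, 2, 2],
    'Year':  [2017, 2014, 2011, 2018, 2015, 2009],
    'Vol':   [100, 200, 300, 400, 500, 600],
    'T_%':   [1.0, np.nan, 2.0, np.nan, 3.0, 4.0],
})

# Long format: one (Route, ID, variable, value) row per cell.
v = df.melt(id_vars=['Route', 'ID'])
# Suffix repeats of the same variable within a group: Year0, Year1, ...
# so pivot_table has one value per cell and nothing gets averaged away.
v['variable'] += v.groupby(['Route', 'ID', 'variable']).cumcount().astype(str)
res = v.pivot_table(index=['Route', 'ID'], columns='variable', values='value')
```

Each (Route, ID) pair collapses to a single row, with one column per original variable and within-group position.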
