Pandas dataframe groupby multiple years rolling stat - python

I have a pandas dataframe for which I'm trying to compute an expanding windowed aggregation after grouping by columns. The data structure is something like this:
import pandas as pd

df = pd.DataFrame(
    [['A', 1, 2015, 4], ['A', 1, 2016, 5], ['A', 1, 2017, 6],
     ['B', 1, 2015, 10], ['B', 1, 2016, 11], ['B', 1, 2017, 12],
     ['A', 1, 2015, 24], ['A', 1, 2016, 25], ['A', 1, 2017, 26],
     ['B', 1, 2015, 30], ['B', 1, 2016, 31], ['B', 1, 2017, 32],
     ['A', 2, 2015, 4], ['A', 2, 2016, 5], ['A', 2, 2017, 6],
     ['B', 2, 2015, 10], ['B', 2, 2016, 11], ['B', 2, 2017, 12]],
    columns=['Typ', 'ID', 'Year', 'dat'],
).sort_values(by=['Typ', 'ID', 'Year'])
i.e.
Typ ID Year dat
0 A 1 2015 4
6 A 1 2015 24
1 A 1 2016 5
7 A 1 2016 25
2 A 1 2017 6
8 A 1 2017 26
12 A 2 2015 4
13 A 2 2016 5
14 A 2 2017 6
3 B 1 2015 10
9 B 1 2015 30
4 B 1 2016 11
10 B 1 2016 31
5 B 1 2017 12
11 B 1 2017 32
15 B 2 2015 10
16 B 2 2016 11
17 B 2 2017 12
In general, the number of years per Typ-ID and the number of rows per Typ-ID-Year vary completely. I need to group this dataframe by the columns Typ and ID, then compute an expanding windowed median & std of all observations through each Year. I would like output like this:
Typ ID Year median std
0 A 1 2015 14.0 14.14
1 A 1 2016 14.5 11.56
2 A 1 2017 15.0 10.99
3 A 2 2015 4.0 0
4 A 2 2016 4.5 0
5 A 2 2017 5.0 0
6 B 1 2015 20.0 14.14
7 B 1 2016 20.5 11.56
8 B 1 2017 21.0 10.99
9 B 2 2015 10.0 0
10 B 2 2016 10.5 0
11 B 2 2017 11.0 0
Hence, I want something like a groupby over ['Typ','ID','Year'], with the median & std for each Typ-ID-Year computed over all rows with the same Typ-ID, cumulatively up to and including that Year.
How can I do this without manual iteration?

There's been no activity on this question, so I'll post the solution I found.
# compute the expanding median within each Typ-ID group, then recover the
# original row index, which reset_index exposes as 'level_2'
mn = df.groupby(by=['Typ','ID']).dat.expanding().median().reset_index().set_index('level_2')
# with the original index back, pull the Year from the original dataframe
mn = mn.join(df['Year'])
# within each Typ-ID-Year, keep the last (in order) expanding value
mylast = lambda x: x.iloc[-1]
mn = mn.groupby(by=['Typ','ID','Year']).agg(mylast).reset_index()
My solution follows this algorithm:
group the data, compute the windowed median, and get the original index back
with the original index back, get the year back from the original dataframe
group by the grouping columns, taking the last (in order) value for each
This gives the output desired. The same process can be followed for the standard deviation (or any other statistic desired).
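For reference, here is a minimal sketch that wraps those three steps in a helper (expanding_stat is my name, not a pandas API) and applies it to both statistics, assuming the df defined above; the expanding std of a single observation is NaN, so it is filled with 0 to match the desired output:
def expanding_stat(df, stat):
    # 1. group, compute the expanding statistic, recover the original index ('level_2')
    s = getattr(df.groupby(by=['Typ', 'ID']).dat.expanding(), stat)()
    s = s.reset_index().set_index('level_2')
    # 2. pull the Year back from the original dataframe
    s = s.join(df['Year'])
    # 3. keep the last (in order) value per Typ-ID-Year
    return (s.groupby(by=['Typ', 'ID', 'Year'])
             .agg(lambda x: x.iloc[-1])
             .rename(columns={'dat': stat}))

out = (expanding_stat(df, 'median')
       .join(expanding_stat(df, 'std'))
       .fillna(0)  # std of a single observation is NaN
       .reset_index())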

Related

Doing joins between 2 csv files [duplicate]

df2 only has data for the year 2019:
type year value
0 a 2019 13
1 b 2019 5
2 c 2019 5
3 d 2019 20
df1 has data for multiple years:
type year value
0 a 2015 12
1 a 2016 2
2 a 2019 3
3 b 2018 50
4 b 2019 10
5 c 2017 1
6 c 2016 5
7 c 2019 8
I need to concatenate them, replacing df1's 2019 values with the values from df2 for the same year.
The expected result will look like this:
type date value
0 a 2015 12
1 a 2016 2
2 b 2018 50
3 c 2017 1
4 c 2016 5
5 a 2019 13
6 b 2019 5
7 c 2019 5
8 d 2019 20
The result from pd.concat([df1, df2], ignore_index=True, sort=False) is shown below; it clearly has multiple 2019 values for the same type. How should I improve the code? Thank you.
type date value
0 a 2019 13
1 b 2019 5
2 c 2019 5
3 d 2019 20
4 a 2015 12
5 a 2016 2
6 a 2019 3
7 b 2018 50
8 b 2019 10
9 c 2017 1
10 c 2016 5
11 c 2019 8
Add DataFrame.drop_duplicates to keep the last row per type and date after the concat.
This solution works if the type and date pairs are unique within each DataFrame.
df = (pd.concat([df1, df2], ignore_index=True, sort=False)
        .drop_duplicates(['type', 'date'], keep='last'))
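Alternatively, DataFrame.combine_first gives the calling frame priority on overlapping keys, so df2's 2019 rows win; a minimal sketch, assuming the key columns are named type and year as in the input frames shown:
import pandas as pd

# df2's values take priority wherever a (type, year) pair exists in both
# frames; df1 fills in all remaining rows
df = (df2.set_index(['type', 'year'])
         .combine_first(df1.set_index(['type', 'year']))
         .reset_index())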

Efficient way to subtract a row from the previous row, separated by group, with Pandas

The objective is, within each group, to subtract each row's value (N) from the previous row's running value (N-1).
Given a df
years nchar nval
0 2019 a 1
1 2019 b 1
2 2019 c 1
3 2020 a 1
4 2020 s 4
Let's separate out the group for year 2019 and denote it df_2019.
For df_2019, we assign the constant 10.
Then, for index 0 only, we do the following operation and assign the result to a new column 'B':
df_2019.loc[df_2019.index[0], 'B'] = 10 - df_2019['nval'].values[0]
For every other index N:
df_2019.loc[df_2019.index[N], 'B'] = df_2019['B'].values[N-1] - df_2019['nval'].values[N]
This will produce the following table:
years nchar nval C D B
1 2019 a 1 9
2 2019 b 1 8
3 2019 c 1 7
For the 2020 group, the same computation applies. The only difference is that the constant is 7, taken from the last value of column B of the previous group.
To meet this requirement, I produced the following code, extended to handle more groups:
import pandas as pd

year = [2019, 2019, 2019, 2020, 2020, 2020, 2020, 2022, 2022, 2022]
nval = [1, 1, 1, 1, 4, 1, 4, 5, 6, 7]
nchar = ['a', 'b', 'c', 'a', 's', 'c', 'a', 'b', 'c', 'g']
df = pd.DataFrame(zip(year, nchar, nval), columns=['years', 'nchar', 'nval'])
print(df)

year_ls = [2019, 2020, 2022]
nspacing_total = 2
nspacing_between_df = 4
all_df = []
default_val = 10
for idx, dyear in enumerate(year_ls):
    df_ = df[df['years'] == dyear].reset_index(drop=True)
    # add empty C, D, B columns
    t = pd.DataFrame([[''] * 3] * len(df_), columns=["C", "D", "B"])
    df_ = pd.concat([df_, t], axis=1)
    Total = df_['nval'].sum()
    # prepend a blank row to hold the starting constant
    df_ = pd.DataFrame([[''] * len(df.columns)], columns=df.columns).append(df_).reset_index(drop=True)
    if idx == 0:
        df_.loc[df_.index[0], 'B'] = default_val
    if idx != 0:
        # carry the closing balance over from the previous group
        pre_df = all_df[idx - 1]
        pre_val = pre_df['B'].values[-1]
        nposi = 1
        pre_years = pre_df['years'].values[nposi]
        df_.loc[df_.index[0], 'nchar'] = f'From {pre_years}'
        df_.loc[df_.index[0], 'B'] = pre_val
    # running subtraction down the group
    for ndexd in range(df_.shape[0] - 1):
        df_.loc[df_.index[ndexd + 1], 'B'] = df_['B'].values[ndexd] - df_['nval'].values[ndexd + 1]
    # append spacer rows and fill in the 'Total' row
    df_ = df_.append(pd.DataFrame([[''] * len(df.columns)] * nspacing_total, columns=df.columns)).reset_index(drop=True)
    df_.loc[df_.index[-1], 'nval'] = Total
    df_.loc[df_.index[-1], 'nchar'] = 'Total'
    df_.loc[df_.index[-1], 'B'] = df_['B'].values[0] - df_['nval'].values[-1]
    all_df.append(df_)
However, I wonder whether this can be simplified further using pandas groupby or other built-ins. I'd really appreciate any tips.
Ultimately, I would like to express the table as below, which will be exported to Excel.
years nchar nval C D B
0 10
1 2019 a 1 9
2 2019 b 1 8
3 2019 c 1 7
4
5 Total 3 7
6
7
8
9
10 From 2019 7
11 2020 a 1 6
12 2020 s 4 2
13 2020 c 1 1
14 2020 a 4 -3
15
16 Total 10 -3
17
18
19
20
21 From 2020 -3
22 2022 b 5 -8
23 2022 c 6 -14
24 2022 g 7 -21
25
26 Total 18 -21
27
28
29
30
The code to produce the table above:
# Optional: to represent the table above
all_ap_df = []
for a_df in all_df:
    df = a_df.append(pd.DataFrame([[''] * len(df.columns)] * nspacing_between_df, columns=df.columns)).reset_index(drop=True)
    all_ap_df.append(df)
df = pd.concat(all_ap_df, axis=0).reset_index(drop=True)
df.loc[df_.index[0], 'D'] = df['B'].values[0]
df.loc[df_.index[0], 'B'] = ''
df = df.fillna('')
I think this is actually quite simple: use cumsum, optionally chained with groupby.
df['B'] = 10 - df['nval'].cumsum()
Output:
>>> df
years nchar nval B
0 2019 a 1 9
1 2019 b 1 8
2 2019 c 1 7
3 2020 a 1 6
4 2020 s 4 2
In your case, to restart from 10 within each year, chain with groupby:
df['new'] = df.groupby('years')['nval'].cumsum().rsub(10)
Out[8]:
0 9
1 8
2 7
3 9
4 5
Name: nval, dtype: int64
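Note that the ungrouped cumsum reproduces the carry-over in your desired table (the 2020 block continues from 7, the last B of 2019). If the Total rows are wanted as data rather than layout, a small sketch building them from the same df (totals is an illustrative name):
# running balance with carry-over across years
df['B'] = 10 - df['nval'].cumsum()
# one Total row per year: the year's nval sum and its closing balance
totals = (df.groupby('years')
            .agg(nval=('nval', 'sum'), B=('B', 'last'))
            .reset_index())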

Replacing missing values with different values in the same column of a pandas dataframe

A B C D
1 2010 one 0 0
2 2020 one 2 4
3 2007 two 0 8
4 2010 one 8 4
5 2020 four 6 12
6 2007 three 0 14
7 2006 four 7 14
8 2010 two 10 12
I need to replace each 0 with the average of that year's non-zero C values. For example, the 2010 C value would be 9. What is the best way to do this? I have over 10,000 rows.
You can use replace to change 0's to np.nan in Column C, and use fillna to map the yearly averages:
import numpy as np

df.C.replace({0: np.nan}, inplace=True)
df.C.fillna(
    df.A.map(df.groupby(df['A']).C.mean().fillna(0).to_dict()),
    inplace=True,
)
print(df)
print(df)
A B C D
0 2010 one 9.0 0
1 2020 one 2.0 4
2 2007 two 0.0 8
3 2010 one 8.0 4
4 2020 four 6.0 12
5 2007 three 0.0 14
6 2006 four 7.0 14
7 2010 two 10.0 12
The 2007 rows end up as 0.0 because that year had no values other than 0's, so its group mean is NaN and the fillna(0) on the means maps it back to 0.
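A more concise variant of the same idea, using transform instead of an explicit dict and avoiding inplace (a sketch, starting from the original df):
import numpy as np

c = df['C'].replace(0, np.nan)
# fill each NaN with the mean of that year's remaining values,
# then fall back to 0 for years with no non-zero values (e.g. 2007)
df['C'] = c.fillna(c.groupby(df['A']).transform('mean')).fillna(0)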
Here is how I would do it. The original sketch was pseudo-code; below is a runnable version.
1: Find the average of the non-zero C values for each year and put it in a dict.
my_year_dict = df.loc[df.C != 0].groupby('A').C.mean().to_dict()
2: Use apply with a row-wise lambda, falling back to 0 for years with no non-zero values.
df['C'] = df.apply(lambda r: my_year_dict.get(r['A'], 0) if r['C'] == 0 else r['C'], axis=1)
Hope it can be a start!

Calculating distance to a row with a certain value

I am working with pandas on a dataset in which maintenance work is done at locations. The maintenance is done every four years at each site. I want to find the years since the last maintenance action at each site. I give only two sites in the following example, but the original dataset has thousands of them. My data only covers the years 2014 through 2017.
Action = 0 means no action was performed that year; Action = 1 means some action was done. Measurement is a performance reading related to the effect of the action. The action can happen in any year. I know that if an action was performed in year Y, the previous maintenance was performed in year Y-4.
Site Year Action Measurement
A 2014 0 100
A 2015 0 150
A 2016 1 300
A 2017 0 80
B 2014 0 200
B 2015 1 250
B 2016 0 60
B 2017 0 110
Given this dataset, I first want a temporary dataset like this:
Site Year Action Measurement Years_Since_Last_Action
A 2014 0 100 2
A 2015 0 150 3
A 2016 1 300 4
A 2017 0 80 1
B 2014 0 200 3
B 2015 1 250 4
B 2016 0 60 1
B 2017 0 110 2
Then, I want to have:
Years_Since_Last_Action Mean_Measurement
1 70
2 105
3 175
4 275
Thanks in advance!
For your first question:
import numpy as np

# get the action year for each site and map it back onto the whole dataframe
s = df.loc[df.Action == 1, ['Site', 'Year']].set_index('Site')
df['Newyear'] = df.Site.map(s.Year)
s1 = df.Year - df.Newyear
# use np.where to wrap non-positive gaps into the 4-year cycle
df['action since last year'] = np.where(s1 <= 0, s1 + 4, s1)
df
Out[167]:
Site Year Action Measurement Newyear action since last year
0 A 2014 0 100 2016 2
1 A 2015 0 150 2016 3
2 A 2016 1 300 2016 4
3 A 2017 0 80 2016 1
4 B 2014 0 200 2015 3
5 B 2015 1 250 2015 4
6 B 2016 0 60 2015 1
7 B 2017 0 110 2015 2
For the second question:
df.groupby('action since last year').Measurement.mean()
Out[168]:
action since last year
1 70
2 105
3 175
4 275
Name: Measurement, dtype: int64
First, build your intermediate using groupby, ffill/bfill and a little arithmetic: v is the signed gap between each row's Year and its site's action year, which np.select then folds into the 1-4 cycle.
import numpy as np

v = (df.Year
       .where(df.Action.astype(bool))
       .groupby(df.Site)
       .ffill()
       .bfill()
       .sub(df.Year))
df['Years_Since_Last_Action'] = np.select([v > 0, v < 0], [4 - v, v.abs()], default=4)
df
df
Site Year Action Measurement Years_Since_Last_Action
0 A 2014 0 100 2.0
1 A 2015 0 150 3.0
2 A 2016 1 300 4.0
3 A 2017 0 80 1.0
4 B 2014 0 200 3.0
5 B 2015 1 250 4.0
6 B 2016 0 60 1.0
7 B 2017 0 110 2.0
Next,
df.groupby('Years_Since_Last_Action', as_index=False).Measurement.mean()
Years_Since_Last_Action Measurement
0 1.0 70
1 2.0 105
2 3.0 175
3 4.0 275
How about:
delta_year = df.loc[df.groupby("Site")["Action"].transform("idxmax"), "Year"].values
years_since = ((df.Year - delta_year) % 4).replace(0, 4)
df["Years_Since_Last_Action"] = years_since
out = df.groupby("Years_Since_Last_Action")["Measurement"].mean().reset_index()
out = out.rename(columns={"Measurement": "Mean_Measurement"})
which gives me
In [230]: df
Out[230]:
Site Year Action Measurement Years_Since_Last_Action
0 A 2014 0 100 2
1 A 2015 0 150 3
2 A 2016 1 300 4
3 A 2017 0 80 1
4 B 2014 0 200 3
5 B 2015 1 250 4
6 B 2016 0 60 1
7 B 2017 0 110 2
In [231]: out
Out[231]:
Years_Since_Last_Action Mean_Measurement
0 1 70
1 2 105
2 3 175
3 4 275
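The modulo trick does the work here: (Year - action_year) % 4 maps the action year itself to 0, which replace(0, 4) turns into a full 4-year cycle. A quick arithmetic check for site A, whose action year is 2016:
# years since last action for site A (action year 2016)
print([(y - 2016) % 4 or 4 for y in [2014, 2015, 2016, 2017]])
# -> [2, 3, 4, 1]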
