I have a DataFrame that looks like this:
FinancialYearStart MonthOfFinancialYear SalesTotal
0 2015 1 10
1 2015 2 10
2 2015 5 10
3 2015 6 50
4 2016 1 10
5 2016 3 20
6 2016 2 30
7 2017 6 70
8 2017 7 80
And I would like to calculate the YTD Sales total for each month, producing a table that looks like this:
FinancialYearStart MonthOfFinancialYear SalesTotal YTDTotal
0 2015 1 10 10
1 2015 2 10 20
2 2015 5 10 30
3 2015 6 50 80
4 2016 1 10 10
5 2016 3 20 30
6 2016 2 30 60
7 2017 6 70 70
8 2017 7 80 150
How might I achieve this?
More specifically, I actually need to calculate this on a group-by-group basis.
For example:
Year Month Customer TotalMonthlySales
2015 1 Dog 10
2015 2 Dog 10
2015 3 Cat 20
2015 4 Dog 30
2015 5 Cat 10
2015 7 Cat 20
2015 7 Dog 10
2016 1 Dog 40
2016 2 Dog 20
2016 3 Cat 70
2016 4 Dog 30
2016 5 Cat 10
2016 6 Cat 20
2016 7 Dog 10
Would give:
Year Month Customer TotalMonthlySales YTDSales
2015 1 Dog 10 10
2015 2 Dog 10 20
2015 3 Cat 20 20
2015 4 Dog 30 50
2015 5 Cat 10 30
2015 7 Cat 20 50
2015 7 Dog 10 60
2016 1 Dog 40 40
2016 2 Dog 20 60
2016 3 Cat 70 70
2016 4 Dog 30 90
2016 5 Cat 10 80
2016 6 Cat 20 100
2016 7 Dog 10 100
Use groupby + cumsum:
df['YTDSales'] = df.groupby(['Year','Customer'])['TotalMonthlySales'].cumsum()
print(df)
Year Month Customer TotalMonthlySales YTDSales
0 2015 1 Dog 10 10
1 2015 2 Dog 10 20
2 2015 3 Cat 20 20
3 2015 4 Dog 30 50
4 2015 5 Cat 10 30
5 2015 7 Cat 20 50
6 2015 7 Dog 10 60
7 2016 1 Dog 40 40
8 2016 2 Dog 20 60
9 2016 3 Cat 70 70
10 2016 4 Dog 30 90
11 2016 5 Cat 10 80
12 2016 6 Cat 20 100
13 2016 7 Dog 10 100
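For reference, a minimal construction of the sample frame used in this answer:

import pandas as pd

df = pd.DataFrame({
    'Year': [2015] * 7 + [2016] * 7,
    'Month': [1, 2, 3, 4, 5, 7, 7, 1, 2, 3, 4, 5, 6, 7],
    'Customer': ['Dog', 'Dog', 'Cat', 'Dog', 'Cat', 'Cat', 'Dog',
                 'Dog', 'Dog', 'Cat', 'Dog', 'Cat', 'Cat', 'Dog'],
    'TotalMonthlySales': [10, 10, 20, 30, 10, 20, 10,
                          40, 20, 70, 30, 10, 20, 10],
})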
For the first DataFrame:
df['YTDTotal'] = df.groupby('FinancialYearStart')['SalesTotal'].cumsum()
print(df)
FinancialYearStart MonthOfFinancialYear SalesTotal YTDTotal
0 2015 1 10 10
1 2015 2 10 20
2 2015 5 10 30
3 2015 6 50 80
4 2016 1 10 10
5 2016 3 20 30
6 2016 2 30 60
7 2017 6 70 70
8 2017 7 80 150
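Note that cumsum accumulates in row order: the 2016 rows above are listed as months 1, 3, 2, so the running total follows that row order. If you want the totals in chronological month order, sort first; a minimal sketch:

df = df.sort_values(['FinancialYearStart', 'MonthOfFinancialYear']).reset_index(drop=True)
df['YTDTotal'] = df.groupby('FinancialYearStart')['SalesTotal'].cumsum()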
Related
We can apply a 30-day rolling sum operation as:
df.rolling("30D").sum()
However, how can I achieve a month-to-date (or even year-to-date) rolling sum in a similar fashion?
Month-to-date means that we only sum from the beginning of the month up to the current date (or row).
Consider the following DataFrame:
Year Month week Revenue
0 2020 1 1 10
1 2020 1 2 20
2 2020 1 3 10
3 2020 1 4 20
4 2020 2 1 10
5 2020 2 2 20
6 2020 2 3 10
7 2020 2 4 20
8 2020 3 1 10
9 2020 3 2 20
10 2020 3 3 10
11 2020 3 4 20
12 2021 1 1 10
13 2021 1 2 20
14 2021 1 3 10
15 2021 1 4 20
16 2021 2 1 10
17 2021 2 2 20
18 2021 2 3 10
19 2021 2 4 20
20 2021 3 1 10
21 2021 3 2 20
22 2021 3 3 10
23 2021 3 4 20
You could use a combination of groupby + cumsum to get what you want:
df['Year_To_date'] = df.groupby('Year')['Revenue'].cumsum()
df['Month_To_date'] = df.groupby(['Year', 'Month'])['Revenue'].cumsum()
Results:
Year Month week Revenue Year_To_date Month_To_date
0 2020 1 1 10 10 10
1 2020 1 2 20 30 30
2 2020 1 3 10 40 40
3 2020 1 4 20 60 60
4 2020 2 1 10 70 10
5 2020 2 2 20 90 30
6 2020 2 3 10 100 40
7 2020 2 4 20 120 60
8 2020 3 1 10 130 10
9 2020 3 2 20 150 30
10 2020 3 3 10 160 40
11 2020 3 4 20 180 60
12 2021 1 1 10 10 10
13 2021 1 2 20 30 30
14 2021 1 3 10 40 40
15 2021 1 4 20 60 60
16 2021 2 1 10 70 10
17 2021 2 2 20 90 30
18 2021 2 3 10 100 40
19 2021 2 4 20 120 60
20 2021 3 1 10 130 10
21 2021 3 2 20 150 30
22 2021 3 3 10 160 40
23 2021 3 4 20 180 60
Note that Month-to-date makes sense only if you have a week/date column in your data model.
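If you have an actual datetime column instead of pre-computed Year/Month columns, the same pattern works by grouping on calendar periods; a minimal sketch, assuming a datetime column named date (hypothetical here) and rows already sorted by it:

# month-to-date: restart the running sum at each calendar month
df['Month_To_date'] = df.groupby(df['date'].dt.to_period('M'))['Revenue'].cumsum()
# year-to-date: same idea with annual periods
df['Year_To_date'] = df.groupby(df['date'].dt.to_period('Y'))['Revenue'].cumsum()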
EXTRAS:
The goal of cumsum is to compute the cumulative sum over different periods. However, pandas accumulates in row order, so if the rows of the original DataFrame are not ordered in the desired sequence, cumsum within each group follows the original row order instead of chronological order.
The DataFrame therefore first needs to be sorted into the desired order ([Year, Month, week] or [Date]) and have its index reset. The running totals then come out per period group in chronological order:
df = df.sort_values(['Year', 'Month', 'week']).reset_index(drop=True)
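A minimal illustration of the pitfall, with hypothetical values:

import pandas as pd

# weeks deliberately out of order
df = pd.DataFrame({'Year': [2020] * 3, 'Month': [1] * 3,
                   'week': [2, 1, 3], 'Revenue': [20, 10, 10]})

# without sorting, cumsum runs in row order: 20, 30, 40 (week 1 would get 30)
df = df.sort_values(['Year', 'Month', 'week']).reset_index(drop=True)
df['Month_To_date'] = df.groupby(['Year', 'Month'])['Revenue'].cumsum()
# after sorting: weeks 1, 2, 3 -> 10, 30, 40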
Is it possible to convert a pandas DataFrame like this into a time series where each year follows the previous one?
This is likely what df.unstack(level=1) is meant for.
import numpy as np
import pandas as pd

np.random.seed(111)  # reproducibility

df = pd.DataFrame(
    data={
        "2009": np.random.randn(12),
        "2010": np.random.randn(12),
        "2011": np.random.randn(12),
    },
    index=range(1, 13),  # months 1-12
)
print(df)
2009 2010 2011
1 -1.133838 -1.440585 0.570594
2 0.384319 0.773703 0.915420
3 1.496554 -1.027967 -1.669341
4 -0.355382 -0.090986 0.482714
5 -0.787534 0.492003 -0.310473
6 -0.459439 0.424672 2.394690
7 -0.059169 1.283049 1.550931
8 -0.354174 0.315986 -0.646465
9 -0.735523 -0.408082 -0.928937
10 -1.183940 -0.067948 -1.654976
11 0.238894 -0.952427 0.350193
12 -0.589920 -0.110677 -0.141757
df_out = df.unstack(1).reset_index()
df_out.columns = ["year", "month", "value"]
print(df_out)
year month value
0 2009 1 -1.133838
1 2009 2 0.384319
2 2009 3 1.496554
3 2009 4 -0.355382
4 2009 5 -0.787534
5 2009 6 -0.459439
6 2009 7 -0.059169
7 2009 8 -0.354174
8 2009 9 -0.735523
9 2009 10 -1.183940
10 2009 11 0.238894
11 2009 12 -0.589920
12 2010 1 -1.440585
13 2010 2 0.773703
14 2010 3 -1.027967
15 2010 4 -0.090986
16 2010 5 0.492003
17 2010 6 0.424672
18 2010 7 1.283049
19 2010 8 0.315986
20 2010 9 -0.408082
21 2010 10 -0.067948
22 2010 11 -0.952427
23 2010 12 -0.110677
24 2011 1 0.570594
25 2011 2 0.915420
26 2011 3 -1.669341
27 2011 4 0.482714
28 2011 5 -0.310473
29 2011 6 2.394690
30 2011 7 1.550931
31 2011 8 -0.646465
32 2011 9 -0.928937
33 2011 10 -1.654976
34 2011 11 0.350193
35 2011 12 -0.141757
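To go a step further and get a genuine time series, the year/month pairs can be combined into real dates; a minimal sketch, pinning each value to the first of its month (the names date and ts are just illustrative):

# assemble a datetime from the year/month columns (day fixed to 1)
df_out["date"] = pd.to_datetime({
    "year": df_out["year"].astype(int),
    "month": df_out["month"],
    "day": 1,
})
ts = df_out.set_index("date")["value"]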
Here's the data:
Name 2012 2013 2014 2015 2016 2017 2018 2019 2020
Jack 1 15 25 3 5 11 5 8 3
Jill 5 10 32 5 5 14 6 8 7
I don't want the Name column to be included, as it gives an error.
I tried:
df.cumsum()
Try set_index before cumsum and reset_index afterwards to keep the Name column:
df.set_index('Name').cumsum().reset_index()
Output:
Name 2012 2013 2014 2015 2016 2017 2018 2019 2020
0 Jack 1 15 25 3 5 11 5 8 3
1 Jill 6 25 57 8 10 25 11 16 10
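If what you actually want is a running total across the year columns for each person, cumsum can run along axis=1 instead; a sketch:

# cumulative sum left-to-right across the year columns, row by row
df.set_index('Name').cumsum(axis=1).reset_index()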
How can I use the value from the same month in the previous year to fill values in the following table for 2020:
Category Month Year Value
A 1 2019 15
B 2 2019 20
A 2 2019 90
A 3 2019 50
B 4 2019 40
A 5 2019 20
A 6 2019 15
A 7 2019 17
A 8 2019 18
A 9 2019 12
A 10 2019 11
A 11 2019 19
A 12 2019 15
A 1 2020 18
A 2 2020 53
A 3 2020 80
The final desired result is the following:
Category Month Year Value
A 1 2019 15
B 2 2019 20
A 2 2019 90
A 3 2019 50
B 4 2019 40
A 4 2019 40
A 5 2019 20
A 6 2019 15
A 7 2019 17
A 8 2019 18
A 9 2019 12
A 10 2019 11
A 11 2019 19
A 12 2019 15
A 1 2020 18
A 2 2020 53
A 3 2020 80
B 4 2020 40
A 4 2020 40
A 5 2020 20
A 6 2020 15
A 7 2020 17
A 8 2020 18
A 9 2020 12
A 10 2020 11
A 11 2020 19
A 12 2020 15
I tried using pandas groupby, but I'm not sure if that is the right approach.
IIUC, pivot to wide form, forward-fill within each Category, then stack back:
s = (df.pivot_table(index=['Category', 'Year'], columns='Month', values='Value')
       .groupby(level=0)
       .ffill()
       .stack()
       .reset_index())
Category Year level_2 0
0 A 2019 1 15.0
1 A 2019 2 90.0
2 A 2019 3 50.0
3 A 2019 5 20.0
4 A 2019 6 15.0
5 A 2019 7 17.0
6 A 2019 8 18.0
7 A 2019 9 12.0
8 A 2019 10 11.0
9 A 2019 11 19.0
10 A 2019 12 15.0
11 A 2020 1 18.0
12 A 2020 2 53.0
13 A 2020 3 80.0
14 A 2020 5 20.0
15 A 2020 6 15.0
16 A 2020 7 17.0
17 A 2020 8 18.0
18 A 2020 9 12.0
19 A 2020 10 11.0
20 A 2020 11 19.0
21 A 2020 12 15.0
22 B 2019 2 20.0
23 B 2019 4 40.0
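The stacked result comes back with generated column names (level_2 and 0 above); if needed, restore meaningful ones:

s.columns = ['Category', 'Year', 'Month', 'Value']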
You can accomplish this with a combination of loc, concat, and drop_duplicates.
The idea here is to concatenate the DataFrame with a copy of the 2019 data whose year is changed to 2020, then keep only the first row for each (Category, Month, Year) combination.
df2 = df.loc[df['Year'] == 2019].copy()  # copy so the assignment below doesn't warn
df2['Year'] = 2020
pd.concat([df, df2]).drop_duplicates(subset=['Category', 'Month', 'Year'], keep='first')
Output
Category Month Year Value
0 A 1 2019 15
1 B 2 2019 20
2 A 2 2019 90
3 A 3 2019 50
4 B 4 2019 40
5 A 5 2019 20
6 A 6 2019 15
7 A 7 2019 17
8 A 8 2019 18
9 A 9 2019 12
10 A 10 2019 11
11 A 11 2019 19
12 A 12 2019 15
13 A 1 2020 18
14 A 2 2020 53
15 A 3 2020 80
1 B 2 2020 20
4 B 4 2020 40
5 A 5 2020 20
6 A 6 2020 15
7 A 7 2020 17
8 A 8 2020 18
9 A 9 2020 12
10 A 10 2020 11
11 A 11 2020 19
12 A 12 2020 15
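Note that the copied rows keep their 2019 index labels (1, 4, 5, ... above); chaining a reset_index(drop=True) tidies that up:

pd.concat([df, df2]).drop_duplicates(subset=['Category', 'Month', 'Year'], keep='first').reset_index(drop=True)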
A B C Delta
1 Jan 10 0
1 Feb 20 10
1 Mar 40 30
2 Jan 10 0
2 Feb 30 20
2 Mar 20 10
2 Oct 40 30
3 Jan 10 0
3 Feb 20 10
3 Mar 30 20
3 Oct 40 30
3 Dec 50 40
How can I calculate the Delta column? It should hold the difference between C and the first C value of the row's group in A (so it is 0 on each group's first row).
I couldn't find this anywhere.
Subtract from column C, via Series.sub, the first value of each group repeated across the group with GroupBy.transform('first'):
df['Delta'] = df['C'].sub(df.groupby('A')['C'].transform('first'))
print(df)
A B C Delta
0 1 Jan 10 0
1 1 Feb 20 10
2 1 Mar 40 30
3 2 Jan 10 0
4 2 Feb 30 20
5 2 Mar 20 10
6 2 Oct 40 30
7 3 Jan 10 0
8 3 Feb 20 10
9 3 Mar 30 20
10 3 Oct 40 30
11 3 Dec 50 40
Detail:
print (df.groupby('A')['C'].transform('first'))
0 10
1 10
2 10
3 10
4 10
5 10
6 10
7 10
8 10
9 10
10 10
11 10
Name: C, dtype: int64
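An equivalent spelling with the binary operator, if you prefer it:

# same result: subtract each group's first C value from C
df['Delta'] = df['C'] - df.groupby('A')['C'].transform('first')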