Hello StackOverflow Community,
I have been interested in calculating anomalies for data in pandas 1.2.0 using Python 3.9.1 and NumPy 1.19.5, but have been struggling to figure out the most "Pythonic" and "pandas" way to complete this task (or any way, for that matter). Below I have created some dummy data and put it into a pandas DataFrame. In addition, I have tried to clearly outline my methodology for calculating monthly anomalies for the dummy data.
What I am trying to do is take "n" years of monthly values (in this example, 2 years of monthly data = 25 months) and calculate monthly averages across all years (for example, group all the January values together and calculate the mean). I have been able to do this using pandas.
Next, I would like to subtract each monthly average from every element in my DataFrame that falls into that specific month (for example, subtract the overall January mean from each individual January value). In the code below you will see some lines of code that attempt to do this subtraction, but to no avail.
If anyone has any thoughts or tips on what may be a good way to approach this, I really appreciate your insight. If you require further clarification, let me know. Thanks for your time and thoughts.
-Marian
#Import packages
import numpy as np
import pandas as pd
#-------------------------------------------------------------
#Create a pandas dataframe with some data that will represent:
#Column of dates for two years, at monthly resolution
#Column of corresponding values for each date.
#Create two years worth of monthly dates
dates = pd.date_range(start='2018-01-01', end='2020-01-01', freq='MS')
#Create some random data that will act as our data that we want to compute the anomalies of
values = np.random.randint(0,100,size=25)
#Put our dates and values into a dataframe to demonstrate how we have tried to calculate our anomalies
df = pd.DataFrame({'Dates': dates, 'Values': values})
#-------------------------------------------------------------
#Anomalies will be computed by finding the mean value of each month over all years
#And then subtracting each month's mean from every element that falls in that particular month
#Group our df according to the month of each entry and calculate monthly mean for each month
monthly_means = df.groupby(df['Dates'].dt.month).mean()
#-------------------------------------------------------------
#Now, how do we go about subtracting these grouped monthly means from each element that falls
#in the corresponding month.
#For example, if the monthly mean over 2 years for January is 20 and the value is 21 in January 2018, the anomaly would be +1 for January 2018
#Example lines of code I have tried, but have not worked
#ValueError:Unable to coerce to Series, length must be 1: given 12
#anomalies = socal_csv.groupby(socal_csv['Date'].dt.month) - monthly_means
#TypeError: unhashable type: "list"
#anomalies = socal_csv.groupby(socal_csv['Date'].dt.month).transform([np.subtract])
You can use pd.merge like this:
import numpy as np
import pandas as pd
dates = pd.date_range(start='2018-01-01', end='2020-01-01', freq='MS')
values = np.random.randint(0,100,size=25)
df = pd.DataFrame({'Dates': dates, 'Values': values})
monthly_means = df.groupby(df['Dates'].dt.month)[['Values']].mean()
df['month'] = df['Dates'].dt.month
df = df.merge(monthly_means.reset_index().rename(columns={'Dates': 'month', 'Values': 'Mean'}), on='month', how='left')
df['Diff'] = df['Mean'] - df['Values']
output:
df['Diff']
Out[19]:
0 33.333333
1 19.500000
2 -29.500000
3 -22.500000
4 -24.000000
5 -3.000000
6 10.000000
7 2.500000
8 14.500000
9 -17.500000
10 44.000000
11 31.000000
12 -11.666667
13 -19.500000
14 29.500000
15 22.500000
16 24.000000
17 3.000000
18 -10.000000
19 -2.500000
20 -14.500000
21 17.500000
22 -44.000000
23 -31.000000
24 -21.666667
You can use abs() if you want the absolute difference. (Note that Diff as written is the mean minus the value; swap the operands if you want the anomaly as value minus mean, as described in the question.)
A one-line solution is:
df = pd.DataFrame({'Values': values}, index=dates)
df.groupby(df.index.month).transform(lambda x: x-x.mean())
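Equivalently, the anomaly can be assigned in place by subtracting a broadcast monthly mean with transform('mean'); a minimal sketch with deterministic made-up data (the values are chosen only so the arithmetic is easy to check):

```python
import pandas as pd

# Deterministic dummy data (made up for illustration): two Januaries and two Februaries
dates = pd.to_datetime(['2018-01-01', '2018-02-01', '2019-01-01', '2019-02-01'])
df = pd.DataFrame({'Values': [10, 5, 30, 15]}, index=dates)

# January mean = 20, February mean = 10; anomaly = value minus its month's mean
df['Anomaly'] = df['Values'] - df.groupby(df.index.month)['Values'].transform('mean')
print(df['Anomaly'].tolist())  # [-10.0, -5.0, 10.0, 5.0]
```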
I got a dataset that is built like this:
hour weekday
12 2
14 1
12 2
and so on.
I want to display in a heatmap, per weekday, when the dataframe had the most action (the count of all events that happened on that weekday during that hour).
I tried to work with groupby:
hm = df.groupby(['hour']).sum()
which shows me all events for the hours, but does not split the events across the weekdays
How can I reshape the data so that I have the weekdays on the x-axis and, for each hour on the y-axis, the event count on that weekday?
thanks for your help!
The output you expect is unclear, but I imagine you could be looking for pandas.crosstab:
# computing crosstab
hm = pd.crosstab(df['hour'], df['weekday'])
# plotting heatmap
import seaborn as sns
sns.heatmap(hm, cmap='Greys')
output:
weekday 1 2
hour
12 0 2
14 1 0
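If you prefer to stay in the groupby idiom, the same count table can be built with size() and unstack; a minimal sketch on the question's sample data:

```python
import pandas as pd

# The sample data from the question
df = pd.DataFrame({'hour': [12, 14, 12], 'weekday': [2, 1, 2]})

# Count events per (hour, weekday) cell, filling absent combinations with 0
hm = df.groupby(['hour', 'weekday']).size().unstack(fill_value=0)
print(hm)
```

The resulting frame has hours as rows and weekdays as columns, so it can be passed to sns.heatmap exactly like the crosstab result.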
Data import from csv:
Date        Item_1  Item_2
1990-01-01  34      78
1990-01-02  42      19
...
2020-12-31  41      23
df = pd.read_csv(r'Insert file directory')
df.index = pd.to_datetime(df.index)
gb= df.groupby([(df.index.year),(df.index.month)]).mean()
Issue:
So basically, the requirement is to group the data by year and month before processing, and I thought that the groupby call would group the data so that mean() calculates the averages of all values grouped under Jan-1990, Feb-1990, and so on. However, I was wrong: the output is the average of all values under Item_1.
My example is similar to the post below, but in my case it is calculating the mean. I am guessing it has to do with how the data is arranged after groupby, or that some parameter in mean() has to be specified, but I have no idea which is the cause. Can someone enlighten me on how to correct the code?
Pandas groupby month and year
Update:
Hi all, I have created a sample .csv data file with 3 items and 3 months of data. I am wondering if the cause has to do with the conversion of the data into the DataFrame when it is imported from the .csv, because I have noticed some odd time data in the leftmost column. The link to the sample file is:
https://www.mediafire.com/file/t81wh3zem6vf4c2/test.csv/file
import pandas as pd
df = pd.read_csv('test.csv', index_col='date')
df.index = pd.to_datetime(df.index)
df.groupby([(df.index.year),(df.index.month)]).mean()
Seems to do the trick from the provided data.
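As a self-contained check (the Item_1/Item_2 column names are assumed from the question, and the values are made up), the same (year, month) grouping behaves as expected once the dates are the index:

```python
import pandas as pd

# Synthetic stand-in for the CSV (column names assumed from the question)
idx = pd.to_datetime(['1990-01-01', '1990-01-02', '1990-02-01'])
df = pd.DataFrame({'Item_1': [34, 42, 10], 'Item_2': [78, 19, 5]}, index=idx)

# Group by (year, month) and average each item column within the group
gb = df.groupby([df.index.year, df.index.month]).mean()
print(gb.loc[(1990, 1), 'Item_1'])  # 38.0
```

The key is index_col='date' on read_csv: without it, the index is a RangeIndex of integers and to_datetime parses those integers instead of the dates, which explains the odd-looking time values.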
IIUC, you want to calculate the mean of all elements. You can use numpy's mean function that operates on the flattened array by default:
df.index = pd.to_datetime(df.index, format='%d/%m/%Y')
gb = df.groupby([(df.index.year),(df.index.month)]).apply(lambda d: np.mean(d.values))
output:
date date
1990 1 0.563678
2 0.489105
3 0.459131
4 0.755165
5 0.424466
6 0.523857
7 0.612977
8 0.396031
9 0.452538
10 0.527063
11 0.397951
12 0.600371
dtype: float64
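To see the flattened-mean behaviour on tiny deterministic data (values made up for illustration):

```python
import numpy as np
import pandas as pd

# Tiny deterministic frame: one (year, month) group containing four values
idx = pd.to_datetime(['1990-01-01', '1990-01-02'])
df = pd.DataFrame({'Item_1': [1.0, 3.0], 'Item_2': [5.0, 7.0]}, index=idx)

# np.mean on .values flattens the group, averaging every element at once
gb = df.groupby([df.index.year, df.index.month]).apply(lambda d: np.mean(d.values))
print(gb.loc[(1990, 1)])  # 4.0
```

Here the January 1990 group contains 1, 3, 5, and 7, so the flattened mean is 4.0 rather than one mean per item column.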
I am trying to figure out how to calculate the mean values for each row in this Python Pandas Pivot table that I have created.
I also want to add the sum of each year at the bottom of the pivot table.
The last step I want to do is to take the average value for each month calculated above and divide it by the total average in order to get the average distribution per year.
import pandas as pd
import pandas_datareader.data as web
import datetime
start = datetime.datetime(2011, 1, 1)
end = datetime.datetime(2017, 12, 31)
libor = web.DataReader('USD1MTD156N', 'fred', start, end) # Reading the data
libor = libor.dropna(axis=0, how= 'any') # Dropping the NAN values
libor = libor.resample('M').mean() # Calculating the mean value per date
libor['Month'] = pd.DatetimeIndex(libor.index).month # Adding month value after each
libor['Year'] = pd.DatetimeIndex(libor.index).year # Adding month value after each
pivot = libor.pivot(index='Month',columns='Year',values='USD1MTD156N')
print pivot
Any suggestions how to proceed?
Thank you in advance
I think this is what you want (this is on Python 3; I think only the print statement differs in this script):
# Mean of each row
ave_month = pivot.mean(1)
#sum of each year at the bottom of the pivot table.
sum_year = pivot.sum(0)
# average distribution per year.
ave_year = sum_year/sum_year.mean()
print(ave_month, '\n', sum_year, '\n', ave_year)
Month
1 0.324729
2 0.321348
3 0.342014
4 0.345907
5 0.345993
6 0.369418
7 0.382524
8 0.389976
9 0.392838
10 0.392425
11 0.406292
12 0.482017
dtype: float64
Year
2011 2.792864
2012 2.835645
2013 2.261839
2014 1.860015
2015 2.407864
2016 5.953718
2017 13.356432
dtype: float64
Year
2011 0.621260
2012 0.630777
2013 0.503136
2014 0.413752
2015 0.535619
2016 1.324378
2017 2.971079
dtype: float64
I would use pivot_table over pivot, and then use the aggfunc parameter.
pivot = libor.pivot(index='Month',columns='Year',values='USD1MTD156N')
would be
import numpy as np
pivot = libor.pivot_table(index='Month',columns='Year',values='USD1MTD156N', aggfunc=np.mean)
You should also be able to drop the resample statement, if I'm not mistaken.
A link to the docs:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html
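A minimal runnable sketch of the pivot_table route, with made-up rates standing in for the FRED download, including the row-mean and column-sum steps from the accepted answer:

```python
import pandas as pd

# Synthetic rates standing in for the FRED download (values made up)
libor = pd.DataFrame(
    {'USD1MTD156N': [2.0, 4.0, 3.0, 5.0]},
    index=pd.to_datetime(['2011-01-15', '2011-02-15', '2012-01-15', '2012-02-15']),
)
libor['Month'] = libor.index.month
libor['Year'] = libor.index.year

# pivot_table aggregates duplicates with aggfunc, so no prior resample is needed
pivot = libor.pivot_table(index='Month', columns='Year',
                          values='USD1MTD156N', aggfunc='mean')
ave_month = pivot.mean(axis=1)  # row means: average per month across years
sum_year = pivot.sum(axis=0)    # column sums: total per year
print(ave_month.loc[1], sum_year.loc[2011])  # 2.5 6.0
```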
I ended up figuring it out while writing out this question so I'll just post anyway and answer my own question in case someone else needs a little help.
Problem
Suppose we have a DataFrame, df, containing this data.
import pandas as pd
from io import StringIO
data = StringIO(
"""\
date spendings category
2014-03-25 10 A
2014-04-05 20 A
2014-04-15 10 A
2014-04-25 10 B
2014-05-05 10 B
2014-05-15 10 A
2014-05-25 10 A
"""
)
df = pd.read_csv(data, sep=r"\s+", parse_dates=True, index_col="date")
Goal
For each row, sum the spendings over every row that is within one month of it, ideally using DataFrame.rolling as it's a very clean syntax.
What I have tried
df = df.rolling("M").sum()
But this throws an exception
ValueError: <MonthEnd> is a non-fixed frequency
version: pandas==0.19.2
Use the "D" offset rather than "M" and specifically use "30D" for 30 days or approximately one month.
df = df.rolling("30D").sum()
Initially, I intuitively jumped to using "M" as I figured it stands for one month, but now it's clear why that doesn't work.
To address why you cannot use things like "AS" or "Y", in this case, "Y" offset is not "a year", it is actually referencing YearEnd (http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases), and therefore the rolling function does not get a fixed window (e.g. you get a 365 day window if your index falls on Jan 1, and 1 day if Dec 31).
The proposed solution (offset by 30D) works if you do not need strict calendar months. Alternatively, you would iterate over your date index, and slice with an offset to get more precise control over your sum.
If you have to do it in one line (separated for readability):
df['Sum'] = [
df.loc[
edt - pd.tseries.offsets.DateOffset(months=1):edt, 'spendings'
].sum() for edt in df.index
]
spendings category Sum
date
2014-03-25 10 A 10
2014-04-05 20 A 30
2014-04-15 10 A 40
2014-04-25 10 B 50
2014-05-05 10 B 50
2014-05-15 10 A 40
2014-05-25 10 A 40
I have a simple PANDAS dataframe:
V1
Index
1 5
2 6
3 7
4 8
5 9
6 10
I want to fit an ARMA model from statsmodels. When I try to do it, I get the following:
ValueError: Given a pandas object and the index does not contain dates
I guess it means that the index is not set as a date. How can I transform the index to a date? I consider the current index to be days, so in the above example the dataframe runs for 6 days. How can I make PANDAS/statsmodels understand that it is dates of daily frequency? Thank you very much for your help.
You could probably set the index to be daily, ending today:
df.index = pd.date_range(end=pd.Timestamp.today().normalize(), periods=len(df), freq='D')
(In older pandas this was written as pd.DatetimeIndex(end=pd.datetime.today(), periods=len(df), freq='1D').)
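A minimal, self-contained sketch of the daily re-indexing (the V1 values are taken from the question); the regular freq='D' on the index is what lets statsmodels treat the series as daily:

```python
import pandas as pd

# The question's frame, re-indexed with daily dates ending today
df = pd.DataFrame({'V1': [5, 6, 7, 8, 9, 10]})
df.index = pd.date_range(end=pd.Timestamp.today().normalize(),
                         periods=len(df), freq='D')
print(df.index.freqstr)  # D
```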