I have the following dataframe:
              COD   CHM        DATE
0            5713   0.0  2020-07-16
1            5713   1.0  2020-08-11
2            5713   2.0  2020-06-20
3            5713   3.0  2020-06-19
4            5713   4.0  2020-06-01
...           ...   ...         ...
2135283  73306036   0.0  2020-09-30
2135284  73306055  12.0  2020-09-30
2135285  73306479   9.0  2020-09-30
2135286  73306656   3.0  2020-09-30
2135287  73306676   1.0  2020-09-30
I want to calculate the mean and the standard deviation of CHM for each COD across the dates (i.e., over time).
For this, I am doing:
traf_user_chm_med = traf_user_chm_med.groupby(['COD', 'DATE'])['CHM'].sum().reset_index()
dates = pd.date_range(start=traf_user_chm_med.DATE.min(), end=traf_user_chm_med.DATE.max(), freq='MS', closed='left').sort_values(ascending=False)
clients = traf_user_chm_med['COD'].unique()
idx = pd.MultiIndex.from_product((clients, dates), names=['COD', 'DATE'])
M0 = pd.to_datetime('2020-08')
M1 = M0-pd.DateOffset(month=M0.month-1)
M2 = M0-pd.DateOffset(month=M0.month-2)
M3 = M0-pd.DateOffset(month=M0.month-3)
M4 = M0-pd.DateOffset(month=M0.month-4)
M5 = M0-pd.DateOffset(month=M0.month-5)
def filter_dates(grp):
    grp.set_index('YEAR_MONTH', inplace=True)
    grp = grp[M0:M5].reset_index()
    return grp
traf_user_chm_med = traf_user_chm_med.groupby('COD').apply(filter_dates)
I'm not sure why it doesn't work; it returns an empty dataframe.
After this I would unstack to get the activity across the several months and calculate the mean and standard deviation for each COD.
This is a long process, and I'm not sure if there is a faster way to get the values I want.
Still, if anyone can help me get this working, that would be awesome!
If I understand correctly, you're simply requiring this:
df.groupby("COD")["CHM"].agg("std")
As a general principle, there's almost always a "pythonic" way to do these things that takes fewer lines and is easier to understand!
You can use transform to broadcast your mean and std
...
df['mean'] = df.groupby('COD')['CHM'].transform('mean')
df['std'] = df.groupby('COD')['CHM'].transform('std')
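For the full pipeline described in the question (monthly CHM totals per COD, then statistics across the months), a minimal sketch could look like this; it assumes DATE is already a datetime64 column and that the monthly sum is the quantity you want statistics on:
import pandas as pd

# monthly CHM total per client (COD); pd.Grouper buckets DATE by month start
monthly = (traf_user_chm_med
           .groupby(['COD', pd.Grouper(key='DATE', freq='MS')])['CHM']
           .sum())

# mean and standard deviation of those monthly totals for each COD
stats = monthly.groupby(level='COD').agg(['mean', 'std'])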
Related
I have time-series data in a dataframe. Is there any way to calculate for each day the percent change of that day's value from the average of the previous 7 days?
I have tried
df['Change'] = df['Column'].pct_change(periods=7)
However, this simply finds the difference between t and t-7 days. I need something like:
For each value of Ti, find the average of the previous 7 days, and subtract from Ti
Sure, you can for example use:
s = df['Column']
n = 7
mean = s.rolling(n, closed='left').mean()
df['Change'] = (s - mean) / mean
Note on closed='left'
There was a bug prior to pandas=1.2.0 that caused incorrect handling of closed for fixed windows. Make sure you have pandas>=1.2.0; for example, pandas=1.1.3 will not give the result below.
As described in the docs:
closed: Make the interval closed on the ‘right’, ‘left’, ‘both’ or ‘neither’ endpoints. Defaults to ‘right’.
A simple way to understand is to try with some very simple data and a small window:
a = pd.DataFrame(range(5), index=pd.date_range('2020', periods=5))
b = a.assign(
    sum_left=a.rolling(2, closed='left').sum(),
    sum_right=a.rolling(2, closed='right').sum(),
    sum_both=a.rolling(2, closed='both').sum(),
    sum_neither=a.rolling(2, closed='neither').sum(),
)
>>> b
            0  sum_left  sum_right  sum_both  sum_neither
2020-01-01  0       NaN        NaN       NaN          NaN
2020-01-02  1       NaN        1.0       1.0          NaN
2020-01-03  2       1.0        3.0       3.0          NaN
2020-01-04  3       3.0        5.0       6.0          NaN
2020-01-05  4       5.0        7.0       9.0          NaN
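Putting the two pieces together on that same toy frame (a sketch; column 0 stands in for your 'Column', and n=2 keeps the numbers small):
s = a[0]
m = s.rolling(2, closed='left').mean()   # mean of the 2 previous rows, excluding the current one
a['Change'] = (s - m) / m
# e.g. on 2020-01-03: (2 - 0.5) / 0.5 = 3.0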
I am trying to loop through a dataframe, creating dynamic ranges that are limited to the last 6 months of each row's index.
Because I am looking back 6 months, I start from the first index row that has a date >= the first date in row index 0 of the dataframe. The condition which I have managed to create is shown below:
for i in df.index:
    if datetime.strptime(df['date'][i], '%Y-%m-%d %H:%M:%S') >= (
            datetime.strptime(df['date'].iloc[0], '%Y-%m-%d %H:%M:%S')
            + dateutil.relativedelta.relativedelta(months=6)):
However, this merely creates ranges that grow in size, incorporating all data indexed after the first row whose date is >= the first date in row index 0 of the dataframe.
How can I limit the condition statement to only the last 6 months of each row index?
I'm not sure what exactly you want to do once you have your "dynamic ranges".
You can obtain a list of intervals (t - 6 months, t) for each t in your DatetimeIndex:
intervals = [(t - pd.DateOffset(months=6), t) for t in df.index]
But doing selection operations in a big for-loop might be slow.
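(Just to illustrate, such a loop-based selection might look like the sketch below; the name windows is only an example, and it assumes df has a sorted DatetimeIndex.)
# one sub-frame per [t - 6 months, t] interval -- simple, but potentially slow on large frames
windows = [df.loc[start:end] for start, end in intervals]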
Instead, you might be interested in Pandas's rolling operations. It can even use a date offset (as long as it is fixed-frequency) instead of a fixed-sized int window width. However, "6 months" is a non-fixed frequency, and as such the regular rolling won't accept it.
Still, if you are ok with an approximation, say "182 days", then the following might work well.
# setup
n = 10
df = pd.DataFrame(
    {'a': np.arange(n), 'b': np.ones(n)},
    index=pd.date_range('2019-01-01', freq='M', periods=n))
# example: sum
df.rolling('182D', min_periods=0).sum()
# out:
               a    b
2019-01-31   0.0  1.0
2019-02-28   1.0  2.0
2019-03-31   3.0  3.0
2019-04-30   6.0  4.0
2019-05-31  10.0  5.0
2019-06-30  15.0  6.0
2019-07-31  21.0  7.0
2019-08-31  27.0  6.0
2019-09-30  33.0  6.0
2019-10-31  39.0  6.0
If you want to be strict about the 6-month windows, you can implement your own pandas.api.indexers.BaseIndexer and use it as the window argument of rolling.
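A minimal sketch of such an indexer is below; the class name SixMonthIndexer and the exact window semantics (timestamps within [t - 6 calendar months, t]) are my own assumptions, not something prescribed by pandas:
import numpy as np
import pandas as pd
from pandas.api.indexers import BaseIndexer

class SixMonthIndexer(BaseIndexer):
    # For each row t, the window covers all rows whose timestamp falls in
    # [t - 6 calendar months, t].  `index` is passed as a keyword argument
    # and stored as an attribute by BaseIndexer.__init__.
    def get_window_bounds(self, num_values=0, min_periods=None, center=None,
                          closed=None, step=None):
        starts = self.index - pd.DateOffset(months=6)
        start = self.index.searchsorted(starts, side='left')
        end = np.arange(1, num_values + 1)
        return start.astype(np.int64), end.astype(np.int64)

# usage sketch: df must have a sorted DatetimeIndex
# df.rolling(SixMonthIndexer(index=df.index), min_periods=1).sum()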
Hi, I am using the date difference as a machine learning feature, analyzing how the weight of a patient changed over time.
I successfully tested a method to do that, as shown below, but the question is how to extend this to a dataframe where I have to see the date difference for each patient, as shown in the figure above. The encircled column is what I'm aiming to get. Basically, the baseline date from which the date difference is calculated changes for each new patient name, so that we can track the weight progress over time for that patient. Thanks!
s = '17/6/2016'
s1 = '22/6/16'
a = pd.to_datetime(s, infer_datetime_format=True)
b = pd.to_datetime(s1, infer_datetime_format=True)
e = b.date() - a.date()
str(e)
str(e)[0:2]
I think it would be something like this (but I'm not sure how to do it exactly):
def f(row):
# some logic here
return val
df['Datediff'] = df.apply(f, axis=1)
You can use transform with first:
df['Datediff'] = df['Date'] - df.groupby('Name')['Date'].transform('first')
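If you would rather have that as a number of days than as a Timedelta, a possible follow-up (assuming Date is a datetime64 column) is:
df['Datediff'] = (df['Date'] - df.groupby('Name')['Date'].transform('first')).dt.days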
Another solution can be to use cumsum:
df['Datediff'] = df.groupby('Name')['Date'].apply(lambda x:x.diff().cumsum().fillna(0))
df["Datediff"] = df.groupby("Name")["Date"].diff().fillna(0)/ np.timedelta64(1, 'D')
df["Datediff"]
0 0.0
1 12.0
2 14.0
3 66.0
4 23.0
5 0.0
6 10.0
7 15.0
8 14.0
9 0.0
10 14.0
Name: Datediff, dtype: float64
I have a dataframe, sega_df:
Month      2016-11-01  2016-12-01
Character
Sonic            12.0         3.0
Shadow            5.0        23.0
I would like to create multiple new columns by applying a formula to each already existing column within my dataframe (to put it shortly, pretty much doubling the number of columns). That formula is (100 - [5*eachcell])*0.2.
For example, for November for Sonic, (100-[5*12.0])*0.2 = 8.0, and for December for Sonic, (100-[5*3.0])*0.2 = 17.0. My ideal output is:
Month      2016-11-01  2016-12-01  Weighted_2016-11-01  Weighted_2016-12-01
Character
Sonic            12.0         3.0                  8.0                 17.0
Shadow            5.0        23.0                 15.0                 -3.0
I know how to create a for loop to create one column. This is what I would do if only one month were in consideration:
for w in range(1, len(sega_df.index)):
    sega_df['Weighted'] = (100 - 5*sega_df)*0.2
sega_df[sega_df < 0] = 0
I don't yet have the skills or experience to create multiple columns at once. I've looked for other questions that might cover exactly what I am doing, but haven't gotten anything to work yet. Thanks in advance.
One vectorised approach is to drop down to NumPy:
A = sega_df.values
A = (100 - 5*A) * 0.2
res = pd.DataFrame(A, index=sega_df.index, columns=('Weighted_'+sega_df.columns))
Then join the result to your original dataframe:
sega_df = sega_df.join(res)
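For what it's worth, a pure-pandas variant of the same idea (a sketch, not tested against your exact frame) avoids the round-trip through NumPy:
weighted = ((100 - 5 * sega_df) * 0.2).add_prefix('Weighted_')
sega_df = sega_df.join(weighted)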
I'm trying to implement the Kaufman Efficiency Ratio (ER) in Python with Pandas.
In a Pandas DataFrame, I have two columns:
Date
Closing Price of a stock (the German DAX index, ^GDAXI, in this example):
Date Close
2016-01-05 10310.10
2016-01-06 10214.02
2016-01-07 9979.85
2016-01-08 9849.34
2016-01-11 9825.07
2016-01-12 9985.43
2016-01-13 9960.96
2016-01-14 9794.20
What I need is a third column that includes the ER for a given period n.
Definition of the ER:
ER = Direction / Volatility
Where:
Direction = ABS(Close - Close[n]), i.e. the net price change over the last n periods.
Volatility = the sum over the last n periods of ABS(Close - Close[1]), i.e. the sum of the absolute one-period changes.
n = the efficiency ratio period, and Close[k] denotes the close k periods ago.
Here is an example of an n=3 period ER (taken from http://etfhq.com/blog/2011/02/07/kaufmans-efficiency-ratio/):
What I'm struggling with is how to do this in Python with Pandas.
In the end, my dataframe should look like this, according to the calculation above:
Date Adj Close ER(3)
2016-01-04 10283.44
2016-01-05 10310.10
2016-01-06 10214.02
2016-01-07 9979.85 0.9
2016-01-08 9849.34 1.0
2016-01-11 9825.07 1.0
2016-01-12 9985.43 0.0
2016-01-13 9960.96 0.5
2016-01-14 9794.20 0.1
How do I make Pandas look back at the previous n rows for the calculation needed for the ER?
Any help is greatly appreciated!
Thank you in advance.
Dirk
No need to write a custom rolling function; just use diff together with a rolling sum (the old module-level pd.rolling_sum has been removed from modern pandas):
df['direction'] = df['Close'].diff(3).abs()
df['volatility'] = df['Close'].diff().abs().rolling(3).sum()
I think the code is pretty much self-explanatory. Please let me know if you would like explanations.
In [11]: df['direction'] / df['volatility']
Out[11]:
0 NaN
1 NaN
2 NaN
3 1.000000
4 1.000000
5 0.017706
6 0.533812
7 0.087801
dtype: float64
This looks like what you're looking for.
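Wrapped up as a small helper (a sketch; the name efficiency_ratio is just illustrative), the same computation becomes:
def efficiency_ratio(close, n=3):
    # Kaufman Efficiency Ratio: net n-period move divided by the sum of
    # the absolute one-period moves over the same window
    direction = close.diff(n).abs()
    volatility = close.diff().abs().rolling(n).sum()
    return direction / volatility

df['ER(3)'] = efficiency_ratio(df['Close'], n=3)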