I have daily data, and also monthly numbers. I would like to normalize the daily data by the monthly numbers - so, for example, the first 31 days of 2017 are all divided by the number corresponding to January 2017 from another data set.
import pandas as pd
import datetime as dt
N=100
start=dt.datetime(2017,1,1)
df_daily=pd.DataFrame({"a":range(N)}, index=pd.date_range(start, start+dt.timedelta(N-1)))
df_monthly=pd.Series([1, 2, 3], index=pd.PeriodIndex(["2017-1", "2017-2", "2017-3"], freq="M"))
df_daily["a"] / df_monthly # ???
I was hoping the time series data would align in a one-to-many fashion and do the required operation, but instead I get a lot of NaN.
How would you do this one-to-many data alignment correctly in Pandas?
I might also want to concat the data, in which case I expect the monthly data to duplicate values within one month.
You can convert the daily index to monthly periods with to_period('M') and then use map to look up the matching monthly value:
df_daily["month"] = df_daily.index.to_period('M')
df_daily['a'] / df_daily["month"].map(df_monthly)
Without creating the month column, you can use
df_daily['a'] / df_daily.index.to_period('M').to_series().map(df_monthly)
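If you also want to keep the monthly value next to the daily rows (the concat case mentioned in the question), the same mapping can be assigned as a column; a small sketch, where "monthly" is just an illustrative column name:
# Broadcast each month's value onto its daily rows
df_daily["monthly"] = df_daily.index.to_period('M').map(df_monthly)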
You can create a temporary key from the index's month, then merge both dataframes on that key, i.e.
df_monthly = df_monthly.to_frame().assign(key=df_monthly.index.month)
df_daily = df_daily.assign(key=df_daily.index.month)
df_new = df_daily.merge(df_monthly, how='left').set_index(df_daily.index).drop(columns='key')
            a    0
2017-01-01  0  1.0
2017-01-02  1  1.0
2017-01-03  2  1.0
2017-01-04  3  1.0
2017-01-05  4  1.0
For division you can then simply do:
df_new['b'] = df_new['a'] / df_new[0]
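One caveat: keying on the month number alone collides across years (January 2017 and January 2018 would both get key 1). A sketch of a variant keyed on the full year-month period instead, starting again from the original df_monthly Series (the 'monthly' name is only for illustration):
# Join on the year-month period so the key stays unique across years
df_new = df_daily.join(df_monthly.rename('monthly'), on=df_daily.index.to_period('M'))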
Related
I have a dataset with a weekly index, and a list of dates that I need to get interpolated data for. For example, I have the following df with weekly aggregation:
data        value
1/01/2021   10
7/01/2021   10
14/01/2021  10
28/01/2021  10
and a list of dates that do not coincide with the df indexed dates, for example:
list_dates = [12/01/2021, 13/01/2021 ...]
I need to get what the interpolated values would be for every date in list_dates, but within a given window (for example: using only 4 values in the df to calculate the interpolation, split between before and after, i.e. the 2 nearest dates before the list date and the 2 nearest dates after it).
To get the interpolated value of the date 12/01/2021 in the list, I would need to use:
1/1/2021
7/1/2021
14/1/2021
28/1/2021
The output would then be:
data        value
1/01/2021   10
7/01/2021   10
12/01/2021  10
13/01/2021  10
14/01/2021  10
28/01/2021  10
I have successfully coded an example of this, but it fails when there are multiple consecutive NaNs (for example 12/01 and 13/01). I also can't concat each interpolated value before running the next one in the list, as that would use one interpolated date to calculate the next (for example, using 12/01 to calculate 13/01).
Any advice on how to do this?
Use interpolate to get the expected outcome, but first you have to prepare your dataframe as shown below.
I slightly modified your input data to show the interpolation with a DatetimeIndex (method='time'):
import pandas as pd

# Input data
df = pd.DataFrame({'data': ['1/01/2021', '7/01/2021', '14/01/2021', '28/01/2021'],
                   'value': [10, 10, 17, 10]})
list_dates = ['12/01/2021', '13/01/2021']
# Conversion of dates
df['data'] = pd.to_datetime(df['data'], format='%d/%m/%Y')
new_dates = pd.to_datetime(list_dates, format='%d/%m/%Y')
# Set datetime column as index and append new dates
df = df.set_index('data')
df = df.reindex(df.index.append(new_dates)).sort_index()
# Interpolate with method='time'
df['value'] = df['value'].interpolate(method='time')
Output:
>>> df
            value
2021-01-01   10.0
2021-01-07   10.0
2021-01-12   15.0   # <- time interpolation
2021-01-13   16.0   # <- time interpolation
2021-01-14   17.0   # <- changed from 10 to 17
2021-01-28   10.0
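If you only need the interpolated values for the dates in list_dates, you can select them from the interpolated frame afterwards; a small usage sketch:
# Keep only the rows for the newly added dates
interpolated_only = df.loc[new_dates]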
I have a data set where the first column is the Date, the second column is the Collaborator and the third column is the price paid.
I want to get the mean price paid by every Collaborator for the previous month, and return a table that looks like this:
I used some solutions like rolling, but I could only get the past X days, not the past month.
Pandas has a built-in method .rolling
x = 3 # This is where you define the number of previous entries
df.rolling(x).mean() # Apply the mean
Hence:
df['LastMonthMean'] = df['Price'].rolling(x).mean()
I'm not sure how you want to calculate your mean, but I hope this helps.
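If your DataFrame has a DatetimeIndex, rolling also accepts a time-based offset window, which is usually closer to "the previous month" than a fixed number of rows. A minimal sketch, assuming a DatetimeIndex and a Price column:
# 30-day time-based window instead of a fixed row count
df['LastMonthMean'] = df['Price'].rolling('30D').mean()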
I would first add a month column and then use groupby to get the mean per collaborator and month:
import pandas as pd

df = pd.DataFrame({
    'month': [1, 1, 1, 2, 2, 2],
    'collaborator': [1, 2, 3, 1, 2, 3],
    'price': [100, 200, 300, 400, 500, 600]
})
df.groupby(['collaborator', 'month']).mean()
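Since the question asks for the previous month's mean, the grouped result could then be shifted by one month within each collaborator; a sketch, assuming every collaborator has data in every month:
# Mean per collaborator and month, shifted to line up with the following month
monthly_mean = df.groupby(['collaborator', 'month'])['price'].mean()
last_month_mean = monthly_mean.groupby(level='collaborator').shift(1)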
The rolling() method would have to be applied to the DataFrame grouped by Collaborator to obtain the mean sale price of every collaborator in the previous month.
Because the data would be grouped and summarised, the number of data points would not match the original dataset, so you could not easily append the result to it.
If you use a DatetimeIndex in your DataFrame it will be considered a time series and then you can resample() the data more easily.
I have produced a replicable solution below, based on your initial question, in which I resample the data and append the last month's mean to it. Thanks to @akilat90 for the function to generate random dates within a range.
import pandas as pd
import numpy as np
def random_dates(start, end, n=10):
    # Function copied from @akilat90
    # Available on https://stackoverflow.com/questions/50559078/generating-random-dates-within-a-given-range-in-pandas
    start_u = pd.to_datetime(start).value//10**9
    end_u = pd.to_datetime(end).value//10**9
    return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
size = 1000
index = random_dates(start='2021-01-01', end='2021-06-30', n=size).sort_values()
collaborators = np.random.randint(low=1, high=4, size=size)
prices = np.random.uniform(low=5., high=25., size=size)
data = pd.DataFrame({'Collaborator': collaborators,
                     'Price': prices}, index=index)

# Monthly mean price per collaborator
monthly_mean = data.groupby('Collaborator').resample('M')['Price'].mean()

# Merge each row with the mean of the previous month (hence the + 1 on the right-hand month key)
data_final = pd.merge(data, monthly_mean, how='left',
                      left_on=['Collaborator', data.index.month],
                      right_on=[monthly_mean.index.get_level_values('Collaborator'),
                                monthly_mean.index.get_level_values(1).month + 1])
data_final.index = data.index
data_final = data_final.drop('key_1', axis=1)
data_final.columns = ['Collaborator', 'Price', 'LastMonthMean']
This is the output:
                     Collaborator      Price  LastMonthMean
2021-01-31 04:26:16             2  21.838910            NaN
2021-01-31 05:33:04             2  19.164086            NaN
2021-01-31 12:32:44             2  24.949444            NaN
2021-01-31 12:58:02             2   8.907224            NaN
2021-01-31 14:43:07             1   7.446839            NaN
2021-01-31 18:38:11             3   6.565208            NaN
2021-02-01 00:08:25             2  24.520149      15.230642
2021-02-01 09:25:54             2  20.614261      15.230642
2021-02-01 09:59:48             2  10.879633      15.230642
2021-02-02 10:12:51             1  22.134549      14.180087
2021-02-02 17:22:18             2  24.469944      15.230642
As you can see, the records in January 2021, the first month in this time series, do not have a valid Last Month Mean, unlike the records in February.
Given a pandas dataframe in the following format:
toy = pd.DataFrame({
    'id': [1, 2, 3,
           1, 2, 3,
           1, 2, 3],
    'date': ['2015-05-13', '2015-05-13', '2015-05-13',
             '2016-02-12', '2016-02-12', '2016-02-12',
             '2018-07-23', '2018-07-23', '2018-07-23'],
    'my_metric': [395, 634, 165,
                  144, 305, 293,
                  23, 395, 242]
})

# Make sure 'date' has datetime format
toy.date = pd.to_datetime(toy.date)
The my_metric column contains some (random) metric I wish to compute a time-dependent moving average of, conditional on the column id and within some time interval that I specify myself. I will refer to this time interval as the "lookback time"; it could be 5 minutes or 2 years. To determine which observations are to be included in the lookback calculation, we use the date column (which could be the index if you prefer).
To my frustration, I have discovered that such a procedure is not easily performed using pandas built-ins, since the calculation must be made conditionally on id and, at the same time, only on observations within the lookback time (checked using the date column). Hence, the output dataframe should consist of one row for each id-date combination, with the my_metric column now being the average of all observations contained within the lookback time (e.g. 2 years, including today's date).
For clarity, I have included a figure with the desired output format (apologies for the oversized figure) when using a 2-year lookback time:
I have a solution but it does not make use of specific pandas built-in functions and is likely sub-optimal (combination of list comprehension and a single for-loop). The solution I am looking for will not make use of a for-loop, and is thus more scalable/efficient/fast.
Thank you!
Calculating the lookback time (current date - 2 years):
from dateutil.relativedelta import relativedelta
from dateutil import parser
import datetime
In [1691]: dt = '2018-01-01'
In [1695]: dt = parser.parse(dt)
In [1696]: lookback_time = dt - relativedelta(years=2)
Now, filter the dataframe on the lookback time and calculate the rolling average:
In [1722]: toy['new_metric'] = ((toy.my_metric + toy[toy.date > lookback_time].groupby('id')['my_metric'].shift(1))/2).fillna(toy.my_metric)
In [1674]: toy.sort_values('id')
Out[1674]:
        date  id  my_metric  new_metric
0 2015-05-13   1        395       395.0
3 2016-02-12   1        144       144.0
6 2018-07-23   1         23        83.5
1 2015-05-13   2        634       634.0
4 2016-02-12   2        305       305.0
7 2018-07-23   2        395       350.0
2 2015-05-13   3        165       165.0
5 2016-02-12   3        293       293.0
8 2018-07-23   3        242       267.5
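A more direct route with pandas built-ins is a time-based rolling window per id; a sketch, assuming the date column has been parsed as datetime:
# Two-year (730-day) lookback mean per id, rolling over the datetime index
out = (toy.sort_values('date')
          .set_index('date')
          .groupby('id')['my_metric']
          .rolling('730D')
          .mean())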
So, after some tinkering I found an answer that generalizes adequately. I used a slightly different 'toy' dataframe, slightly more relevant to my case. Consider now the following code:
# Define a custom function which groups by time (using the index)
def rolling_average(x, dt):
    xt = x.sort_index().groupby(lambda ts: ts.time()).rolling(window=dt).mean()
    xt.index = xt.index.droplevel(0)
    return xt

dt = '730D'  # rolling average window: 730 days = 2 years

# Group by the 'id' column
g = toy.groupby('id')

# Apply the custom function
df = g.apply(rolling_average, dt=dt)

# Massage the data to appropriate format
df.index = df.index.droplevel(0)
df = df.reset_index().drop_duplicates(keep='last', subset=['id', 'date'])
The result is as expected:
I have found plenty of information related to moving averages when the data is sampled to regular intervals (ie 1 min, 5 mins, etc). However, I need a solution for a time series dataset that has irregular time intervals.
The dataset contains two columns, Timestamp and Price. Timestamp goes down to the millisecond, and there is no set interval for rows. I need to take my dataframe and add three moving average columns:
1 min
5 min
10 min
I do not want to resample the data; I want the end result to have the same number of rows, but with the three columns filled as applicable (i.e. NaN until the 1/5/10 min interval has elapsed for each column, respectively).
I feel like I am getting close, but cannot figure out how to pass the moving average variable to this function:
import pandas as pd
import numpy as np

# Load IBM data from CSV
df = pd.read_csv("C:/Documents/Python Scripts/MA.csv",
                 names=['Timestamp', 'Price'])

# Parse timestamps and use them as the index
df['Timestamp'] = pd.to_datetime(df['Timestamp'], errors='coerce')
df.set_index('Timestamp', inplace=True)

# Create three moving average signals
def movingaverage(values, window):
    weights = np.repeat(1.0, window)/window
    smas = np.convolve(values, weights, 'valid')
    return smas

MA_1M = movingaverage(df, 1)
MA_5M = movingaverage(df, 5)
MA_10M = movingaverage(df, 10)
print(MA_1M)
Example Data:
Timestamp                 Price
2018-10-08 04:00:00.013   152.59
2018-10-08 04:00:00.223   156.34
2018-10-08 04:01:00.000   152.73
2018-10-08 04:05:00.127   156.34
2018-10-08 04:10:00.000   152.73
Expected Output:
Timestamp                 Price   MA_1M   MA_5M   MA_10M
2018-10-08 04:00:00.013   152.59  N/A     N/A     N/A
2018-10-08 04:00:00.223   156.34  N/A     N/A     N/A
2018-10-08 04:01:00.000   154.73  154.55  N/A     N/A
2018-10-08 04:05:00.127   155.34  155.34  155.47  N/A
2018-10-08 04:10:00.000   153.73  153.73  154.54  154.55
At each row, the MA column takes that timestamp and looks back 1, 5 or 10 minutes and calculates the average. The thing that makes this difficult is that the rows can be generated at any millisecond. In my code above I am simply trying to get a moving average to work with a time variable. I am assuming that as long as the row counts match I can use the logic to add a column to my df.
The following works, except for the NaNs - I don't know how attached you are to those:
foo = df.apply(lambda x: df[(df['Timestamp'] <= x['Timestamp']) &
                            (df['Timestamp'] > x['Timestamp'] - pd.Timedelta('5 min'))]['Price'].mean(),
               axis=1)
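A loop-free alternative is pandas' time-based rolling window, using the Timestamp column as the rolling axis; a sketch, assuming rows are sorted by Timestamp (note it produces partial-window averages on the early rows rather than NaN, since min_periods defaults to 1 for offset windows):
# Time-based moving averages over the raw, irregularly spaced rows
df = df.sort_values('Timestamp')
df['MA_1M'] = df.rolling('1min', on='Timestamp')['Price'].mean()
df['MA_5M'] = df.rolling('5min', on='Timestamp')['Price'].mean()
df['MA_10M'] = df.rolling('10min', on='Timestamp')['Price'].mean()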
I am trying to plot data that has been binned by certain date ranges.
Say for example I have the following dataframe:
import pandas as pd
import numpy as np

dates = pd.date_range(start='2013-06-01', periods=50, freq='D')
df = pd.DataFrame(np.random.normal(10, 3, 50), columns=['x'], index=dates)
df[:3]
                    x
2013-06-01   9.819422
2013-06-02   3.659629
2013-06-03  14.862231
I would like to group the dates by 3-week intervals and plot the data. This gives me the average I am looking for:
df.resample('3W').mean()
                    x
2013-06-02  11.424715
2013-06-23   9.443888
2013-07-14   8.572851
2013-08-04   9.873879
But I would like to keep all of the data so that I can use boxplots in seaborn or include standard error using matplotlib. I'm completely stuck on how to achieve this without explicitly defining the ranges (which is not possible with the actual dataframes I am working with). It seems there must be a fairly straightforward way to do this in pandas so the output would be something like:
                    x  week
2013-06-01   9.819422     1
2013-06-02   3.659629     1
2013-06-03  14.862231     1
Where week is a categorical variable representing the binned data. Any thoughts would be appreciated.
Perhaps you can use pd.Grouper (the modern replacement for TimeGrouper).
df.groupby(pd.Grouper(freq='3W')).describe()
                x
            count       mean       std       min       25%        50%        75%        max
2013-06-02      2  10.864835  3.794379  8.181803  9.523319  10.864835  12.206350  13.547866
2013-06-23     21   9.888556  3.452331  3.503944  7.838625   9.739525  12.403285  16.031644
2013-07-14     21  10.475142  2.687320  6.605619  8.399518  11.209683  11.818895  16.265771
2013-08-04      6   9.471931  3.196345  5.492205  8.122607   8.502217  10.901065  14.638198
>>> g = df.groupby(pd.Grouper(freq='3W')).boxplot()
To add the period start date (as a string) to the original data:
df = pd.DataFrame(np.random.normal(10, 3, 50), columns=['x'], index=dates)
tg = df.groupby(pd.Grouper(freq='3W', closed='left'))
df['period'] = None
for p, idx in tg.indices.items():
    df.iloc[idx, df.columns.get_loc('period')] = p.strftime('%Y-%m-%d')
>>> df.head()
                    x      period
2013-06-01   7.972202  2013-06-16
2013-06-02  12.184312  2013-06-16
2013-06-03   6.884374  2013-06-16
2013-06-04   8.414091  2013-06-16
2013-06-05  12.368407  2013-06-16
Here is how I would do it:
# your first day is a Saturday, hence the 3W-SAT anchor
for idx, w in enumerate(df.groupby(pd.Grouper(freq='3W-SAT'))):
    df.loc[w[0], "week"] = idx + 1

# propagate the week number
df["week"] = df["week"].ffill()

# remove the dates added by the Grouper, as your number of dates is not a multiple of 3 weeks
df.dropna(inplace=True)
df.tail()
                    x  week
2013-07-16  15.717111     3
2013-07-17   9.815201     3
2013-07-18   9.426426     3
2013-07-19  12.725350     3
2013-07-20  16.100748     3
# just use seaborn as usual
import seaborn as sns
sns.boxplot(data=df, x="week", y="x")  # plot it
I don't know if there is a better way to use the Grouper with seaborn directly.
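One loop-free option for the week labels is GroupBy.ngroup(), which numbers each 3-week bin consecutively; a minimal sketch using the same Grouper:
# Label every row with its 1-based 3-week bin number, then plot as before
df['week'] = df.groupby(pd.Grouper(freq='3W-SAT')).ngroup() + 1
sns.boxplot(data=df, x="week", y="x")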
HTH