Average time between timestamps per group not in order - python

I would like to get the mean time between timestamps per group. However, the groups are not ordered.
Code to create df:
d = {'ID': ['AI100', 'AI200', 'AI200', 'AI100','AI200','AI100'],
'Date': ['2019-01-10', '2018-06-01', '2018-06-11','2019-01-15','2018-06-21', '2019-01-22']}
data = pd.DataFrame(data=d)
data = data[['ID', 'Date']]
data['Date'] = pd.to_datetime(data['Date'])
data
ID Date
0 AI100 2019-01-10
1 AI200 2018-06-01
2 AI200 2018-06-11
3 AI100 2019-01-15
4 AI200 2018-06-21
5 AI100 2019-01-22
I tried the following:
data = data.sort_values(['ID','Date'],ascending=True).groupby('ID').head(3) #group the IDs
data['diffs'] = data['Date'].diff()
data['diffs'] = data['diffs'].apply(lambda x: x.days)
data = data.groupby('ID')['diffs'].agg('mean')
However, this yields:
data.add_suffix('ID').reset_index()
ID diffs
0 AI100ID 6.000000
1 AI200ID -71.666667
The mean time for group AI100ID is correct, but not for group AI200ID.
What is going wrong?

I think the issue here is that you aren't calculating your diffs within each group, so the difference is being taken between the previous group's last value and the next group's first value.
Change your line to this and you should get the expected result:
data['diffs'] = data.groupby('ID')['Date'].diff()
Footnote:
One more tip, unrelated to the main problem, but in case you were unaware:
data['diffs'] = data['diffs'].apply(lambda x: x.days)
Can be written to use faster vectorised operations using the .dt accessor:
data['diffs'] = data['diffs'].dt.days
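For reference, a minimal end-to-end sketch of the corrected pipeline (same sample data as the question) and the result it should produce:
import pandas as pd

d = {'ID': ['AI100', 'AI200', 'AI200', 'AI100', 'AI200', 'AI100'],
     'Date': ['2019-01-10', '2018-06-01', '2018-06-11',
              '2019-01-15', '2018-06-21', '2019-01-22']}
data = pd.DataFrame(data=d)
data['Date'] = pd.to_datetime(data['Date'])

data = data.sort_values(['ID', 'Date'])
data['diffs'] = data.groupby('ID')['Date'].diff().dt.days  # per-group diff, in days
print(data.groupby('ID')['diffs'].mean())
# ID
# AI100     6.0
# AI200    10.0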

Related

How to interpolate only over a specific window?

I have a dataset with a weekly index, and a list of dates that I need to get interpolated data for. For example, I have the following df with weekly aggregation:
data value
1/01/2021 10
7/01/2021 10
14/01/2021 10
28/01/2021 10
and a list of dates that do not coincide with the df indexed dates, for example:
list_dates = ['12/01/2021', '13/01/2021' ...]
I need to get what the interpolated values would be for every date in list_dates, but within a given window (for example: using only 4 values in the df for the interpolation, split between before and after, i.e. the 2 dates before the list date and the 2 dates after it).
To get the interpolated value for the list date 12/01/2021, I would need to use:
1/1/2021
7/1/2021
14/1/2021
28/1/2021
The output would then be:
data value
1/01/2021 10
7/01/2021 10
12/01/2021 10
13/01/2021 10
14/01/2021 10
28/01/2021 10
I have successfully coded an example of this, but it fails when there are multiple consecutive NaNs (e.g. 12/01 and 13/01). I also can't concat each interpolated value before running the next one in the list, as that would use an interpolated date to calculate the next interpolated date (e.g. using 12/01 to calculate 13/01).
Any advice on how to do this?
Use interpolate to get the expected outcome, but first you have to prepare your dataframe as below.
I slightly modified your input data to show interpolation with a DatetimeIndex (method='time'):
import pandas as pd

# Input data
df = pd.DataFrame({'data': ['1/01/2021', '7/01/2021', '14/01/2021', '28/01/2021'],
                   'value': [10, 10, 17, 10]})
list_dates = ['12/01/2021', '13/01/2021']

# Conversion of dates
df['data'] = pd.to_datetime(df['data'], format='%d/%m/%Y')
new_dates = pd.to_datetime(list_dates, format='%d/%m/%Y')

# Set datetime column as index and append new dates
df = df.set_index('data')
df = df.reindex(df.index.append(new_dates)).sort_index()

# Interpolate with method='time'
df['value'] = df['value'].interpolate(method='time')
Output:
>>> df
value
2021-01-01 10.0
2021-01-07 10.0
2021-01-12 15.0 # <- time interpolation
2021-01-13 16.0 # <- time interpolation
2021-01-14 17.0 # <- changed from 10 to 17
2021-01-28 10.0
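The interpolation above uses all known points. If you strictly need the question's window (only the 2 known dates before and 2 after each list date), a rough sketch, assuming the df and new_dates prepared above, could look like this; all estimates are computed from the original known values only, so an interpolated date is never used to calculate the next one:
import numpy as np

def window_interp(series, when, k=2):
    known = series.dropna()                     # original known points only
    before = known[known.index < when].tail(k)  # k nearest known dates before
    after = known[known.index > when].head(k)   # k nearest known dates after
    window = pd.concat([before, after])
    # linear interpolation over nanosecond timestamps, like method='time'
    return np.interp(pd.Timestamp(when).value, window.index.asi8, window.to_numpy())

estimates = {d: window_interp(df['value'], d) for d in new_dates}
for d, v in estimates.items():
    df.loc[d, 'value'] = v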

Concatenate arrays into a single table using pandas

I have a .csv file. From this file, I group by year so that it gives me the maximum, minimum and average values:
import pandas as pd
DF = pd.read_csv("PJME_hourly.csv")
for i in range(2002, 2019):
    neblina = DF[DF.Datetime.str.contains(str(i))]
    dateframe = neblina.agg({"PJME_MW": ['max', 'min', 'mean']})
    print(i, pd.concat([dateframe], axis=0, sort=False))
Its output is as follows:
2002 PJME_MW
max 55934.000000
min 19247.000000
mean 31565.617106
2003 PJME_MW
max 53737.000000
min 19414.000000
mean 31698.758621
2004 PJME_MW
max 51962.000000
min 19543.000000
mean 32270.434867
I would like to know how I can join it all into a single column (PJME_MW), with each group of operations (max, min, mean) identified by its corresponding year.
If you convert the dates with to_datetime(), you can group them using the dt.year accessor:
df = pd.read_csv('PJME_hourly.csv')
df.Datetime = pd.to_datetime(df.Datetime)
df.groupby(df.Datetime.dt.year).agg(['min', 'max', 'mean'])
Toy example:
df = pd.DataFrame({'Datetime': ['2019-01-01','2019-02-01','2020-01-01','2020-02-01','2021-01-01'], 'PJME_MV': [3,5,30,50,100]})
# Datetime PJME_MV
# 0 2019-01-01 3
# 1 2019-02-01 5
# 2 2020-01-01 30
# 3 2020-02-01 50
# 4 2021-01-01 100
df.Datetime = pd.to_datetime(df.Datetime)
df.groupby(df.Datetime.dt.year).agg(['min', 'max', 'mean'])
# PJME_MV
# min max mean
# Datetime
# 2019 3 5 4
# 2020 30 50 40
# 2021 100 100 100
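If you literally want a single column, with each statistic labelled by its year, one possible follow-up (an assumption about the desired shape, using the toy example's PJME_MV column) is to stack the aggregated frame:
out = df.groupby(df.Datetime.dt.year)['PJME_MV'].agg(['min', 'max', 'mean'])
print(out.stack())
# Datetime
# 2019  min       3.0
#       max       5.0
#       mean      4.0
# 2020  min      30.0
#       max      50.0
#       mean     40.0
# 2021  min     100.0
#       max     100.0
#       mean    100.0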
The code could be optimized, but keeping the way it works now, change this part of your code:
for i in range(2002, 2019):
    neblina = DF[DF.Datetime.str.contains(str(i))]
    dateframe = neblina.agg({"PJME_MW": ['max', 'min', 'mean']})
    print(i, pd.concat([dateframe], axis=0, sort=False))
Use this instead
aggs = ['max', 'min', 'mean']
df_group = df.groupby('Datetime')['PJME_MW'].agg(aggs).reset_index()
out_columns = ['agg_year', 'PJME_MW']
out = []
for agg in aggs:
    # build a fresh frame on every pass; reusing one frame would make
    # every entry of `out` point at the same (last) data
    aux = pd.DataFrame(columns=out_columns)
    aux['agg_year'] = agg + '_' + df_group['Datetime']
    aux['PJME_MW'] = df_group[agg]
    out.append(aux)
df_out = pd.concat(out)
Edit: Concatenation form has been changed
Final edit: I didn't understand the whole problem, sorry. You don't need the code after the groupby function.

How to get mean of last month in pandas

I have a data set where the first column is the Date, the second column is the Collaborator and the third column is the Price paid.
I want to get the mean price paid by every Collaborator for the previous month, returned as a table.
I used some solutions like rolling, but I could only get the past X days, not the past month.
Pandas has a built-in method .rolling
x = 3 # This is where you define the number of previous entries
df.rolling(x).mean() # Apply the mean
Hence:
df['LastMonthMean'] = df['Price'].rolling(x).mean()
I'm not sure exactly how you want to calculate your mean, but I hope this helps.
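Note that rolling(x) counts rows rather than time, which is what the asker said didn't work. A sketch of the time-based alternative (an approximation, assuming a 'Date' column and treating a month as 30 days) rolls over a time offset on a DatetimeIndex:
df = df.set_index('Date').sort_index()
# closed='left' excludes the current row, so the window covers the 30 days before it
df['LastMonthMean'] = df['Price'].rolling('30D', closed='left').mean()
# per collaborator it would be, for example:
# df['LastMonthMean'] = (df.groupby('Collaborator')['Price']
#                          .transform(lambda s: s.rolling('30D', closed='left').mean()))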
I would first add a month column and then use groupby to retrieve the mean per collaborator and month:
import pandas as pd
df = pd.DataFrame({
    'month': [1, 1, 1, 2, 2, 2],
    'collaborator': [1, 2, 3, 1, 2, 3],
    'price': [100, 200, 300, 400, 500, 600]
})
df.groupby(['collaborator', 'month']).mean()
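This gives each collaborator's mean per month. To align every month with the previous month's mean (a hedged extension, not part of the original answer), you could shift within each collaborator:
monthly = df.groupby(['collaborator', 'month'])['price'].mean()
prev = monthly.groupby(level='collaborator').shift(1)  # previous month's mean
print(prev)
# collaborator  month
# 1             1          NaN
#               2        100.0
# 2             1          NaN
#               2        200.0
# 3             1          NaN
#               2        300.0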
The rolling() method would have to be applied to the DataFrame grouped by Collaborator to obtain the mean sale price of every collaborator in the previous month.
Because the data would be grouped by and summarised, the number of data points would not match the original dataset, thus not allowing you to easily append the result to the original dataset.
If you use a DatetimeIndex in your DataFrame it will be considered a time series and then you can resample() the data more easily.
I have produced a replicable solution below, based on your initial question, in which I resample the data and append the last month's mean to it. Thanks to @akilat90 for the function to generate random dates within a range.
import pandas as pd
import numpy as np
def random_dates(start, end, n=10):
    # Function copied from @akilat90
    # Available on https://stackoverflow.com/questions/50559078/generating-random-dates-within-a-given-range-in-pandas
    start_u = pd.to_datetime(start).value // 10**9
    end_u = pd.to_datetime(end).value // 10**9
    return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')

size = 1000
index = random_dates(start='2021-01-01', end='2021-06-30', n=size).sort_values()
collaborators = np.random.randint(low=1, high=4, size=size)
prices = np.random.uniform(low=5., high=25., size=size)
data = pd.DataFrame({'Collaborator': collaborators,
                     'Price': prices}, index=index)

monthly_mean = data.groupby('Collaborator').resample('M')['Price'].mean()
data_final = pd.merge(data, monthly_mean, how='left',
                      left_on=['Collaborator', data.index.month],
                      right_on=[monthly_mean.index.get_level_values('Collaborator'),
                                monthly_mean.index.get_level_values(1).month + 1])
data_final.index = data.index
data_final = data_final.drop('key_1', axis=1)
data_final.columns = ['Collaborator', 'Price', 'LastMonthMean']
This is the output:
Collaborator Price LastMonthMean
2021-01-31 04:26:16 2 21.838910 NaN
2021-01-31 05:33:04 2 19.164086 NaN
2021-01-31 12:32:44 2 24.949444 NaN
2021-01-31 12:58:02 2 8.907224 NaN
2021-01-31 14:43:07 1 7.446839 NaN
2021-01-31 18:38:11 3 6.565208 NaN
2021-02-01 00:08:25 2 24.520149 15.230642
2021-02-01 09:25:54 2 20.614261 15.230642
2021-02-01 09:59:48 2 10.879633 15.230642
2021-02-02 10:12:51 1 22.134549 14.180087
2021-02-02 17:22:18 2 24.469944 15.230642
As you can see, the records in January 2021, the first month in this time series, do not have a valid Last Month Mean, unlike the records in February.

Pandas: Group by operation on dynamically selected columns with conditional filter

I have a dataframe as follows:
Date Group Value Duration
2018-01-01 A 20 30
2018-02-01 A 10 60
2018-01-01 B 15 180
2018-02-01 B 30 210
2018-03-01 B 25 238
2018-01-01 C 10 235
2018-02-01 C 15 130
I want to use groupby dynamically, i.e. without typing the column names on which groupby is applied. Specifically, I want to compute the mean of each Group for the last two months.
As we can see, not every Group's data is present in the above dataframe for all dates. So the tasks are as follows:
Add a dummy row based on the date, in case data pertaining to Date = 2018-03-01 is not present for a Group (e.g. add rows for A and C).
Perform groupby to compute the mean using the last two months' Value and Duration.
So my approach is as follows:
For Task 1:
s = pd.MultiIndex.from_product([df['Date'].unique(), df['Group'].unique()], names=['Date', 'Group'])
df = df.set_index(['Date', 'Group']).reindex(s).reset_index().sort_values(['Group', 'Date']).ffill(axis=0)
Can we have a better method for achieving the 'add row' task? The reference is found here.
For Task 2:
def cond_grp_by(df, grp_by: str, cols_list: list, *args):
    df_grp = df.groupby(grp_by)[cols_list].transform(lambda x: x.tail(2).mean())
    return df_grp
df_cols = df.columns.tolist()
df = cond_grp_by(dealer_f_filt,'Group',df_cols)
Reference of the above approach is found here.
The above code throws IndexError: Column(s) ['index','Group','Date','Value','Duration'] already selected
The expected output is
Group Value Duration
A     10    60
B     27.5  224
C     15    130
(Since a row is added for 2018-03-01 with the same values as 2018-02-01, the mean for A and C is computed over the last two, identical, values.)
Use GroupBy.agg instead of transform if you need one aggregated row per group:
def cond_grp_by(df, grp_by: str, cols_list: list, *args):
    return df.groupby(grp_by)[cols_list].agg(lambda x: x.tail(2).mean()).reset_index()

df = cond_grp_by(df, 'Group', df_cols)
print (df)
Group Value Duration
0 A 10.0 60.0
1 B 27.5 224.0
2 C 15.0 130.0
If you need the last value per group, use GroupBy.last:
def cond_grp_by(df, grp_by: str, cols_list: list, *args):
    return df.groupby(grp_by)[cols_list].last().reset_index()

df = cond_grp_by(df, 'Group', df_cols)
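Putting both tasks together, a minimal end-to-end sketch reconstructing the question's dataframe:
import pandas as pd

df = pd.DataFrame({
    'Date': ['2018-01-01', '2018-02-01', '2018-01-01', '2018-02-01',
             '2018-03-01', '2018-01-01', '2018-02-01'],
    'Group': ['A', 'A', 'B', 'B', 'B', 'C', 'C'],
    'Value': [20, 10, 15, 30, 25, 10, 15],
    'Duration': [30, 60, 180, 210, 238, 235, 130]})

# Task 1: add the missing (Date, Group) rows and forward-fill them
s = pd.MultiIndex.from_product([df['Date'].unique(), df['Group'].unique()],
                               names=['Date', 'Group'])
df = (df.set_index(['Date', 'Group']).reindex(s)
        .reset_index().sort_values(['Group', 'Date']).ffill())

# Task 2: mean of the last two months per group
print(df.groupby('Group')[['Value', 'Duration']]
        .agg(lambda x: x.tail(2).mean()).reset_index())
#   Group  Value  Duration
# 0     A   10.0      60.0
# 1     B   27.5     224.0
# 2     C   15.0     130.0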

How to split a dataframe column into multiple columns with a Pandas converter

I have a file with rows like this:
blablabla (CODE1513A15), 9.20, 9.70, 0
I want pandas to read each column, but from the first column I am interested only in the data between brackets, and I want to extract it into additional columns. Therefore, I tried using a Pandas converter:
import pandas as pd
from datetime import datetime
import string
code = 'CODE'
code_parser = lambda x: {
'date': datetime(int(x.split('(', 1)[1].split(')')[0][len(code):len(code)+2]), string.uppercase.index(x.split('(', 1)[1].split(')')[0][len(code)+4:len(code)+5])+1, int(x.split('(', 1)[1].split(')')[0][len(code)+2:len(code)+4])),
'value': float(x.split('(', 1)[1].split(')')[0].split('-')[0][len(code)+5:])
}
column_names = ['first_column', 'second_column', 'third_column', 'fourth_column']
pd.read_csv('myfile.csv', usecols=[0,1,2,3], names=column_names, converters={'first_column': code_parser})
With this code, I can convert the text between brackets to a dict containing a datetime object and a value.
If the code is CODE1513A15 as in the sample, it will be built from:
a known code (in this example, 'CODE')
two digits for the year
two digits for the day of month
A letter from A to L, which is the month (A for January, B for February, ...)
A float value
I tested the lambda function and it correctly extracts the information I want, and its output is a dict {'date': datetime(15, 1, 13), 'value': 15}. Nevertheless, if I print the result of the pd.read_csv method, the 'first_column' is a dict, while I was expecting it to be replaced by two columns called 'date' and 'value':
first_column second_column third_column fourth_column
0 {u'date':13-01-2015, u'value':15} 9.20 9.70 0
1 {u'date':14-01-2015, u'value':16} 9.30 9.80 0
2 {u'date':15-01-2015, u'value':12} 9.40 9.90 0
What I want to get is:
date value second_column third_column fourth_column
0 13-01-2015 15 9.20 9.70 0
1 14-01-2015 16 9.30 9.80 0
2 15-01-2015 12 9.40 9.90 0
Note: I don't care how the date is formatted, this is only a representation of what I expect to get.
Any idea?
I think it's better to do things step by step.
# read data into a data frame
column_names = ['first_column', 'second_column', 'third_column', 'fourth_column']
df = pd.read_csv(data, names=column_names)
# extract values using a regular expression, which is much more robust
# than string splitting
tmp = df.first_column.str.extract(r'CODE(\d{2})(\d{2})([A-L])(\d+)')
tmp.columns = ['year', 'day', 'month', 'value']
tmp['month'] = tmp['month'].apply(lambda m: str(ord(m) - 64))
Sample output:
print tmp
year day month value
0 15 13 1 15
Then transform your original data frame into the format that you want
df['date'] = (tmp['year'] + tmp['day'] + tmp['month']).apply(lambda d: datetime.strptime(d, '%y%d%m'))
df['value'] = tmp['value']
del df['first_column']
Is conversion in read_csv mandatory? Otherwise, passing a function which returns a Series to apply results in a DataFrame.
df
first_column second_column third_column fourth_column
0 blablabla (CODE1513A15) 9.2 9.7 0
1 blablabla (CODE1514A16) 9.2 9.7 0
code_parser = lambda x: pd.Series({
    'date': datetime(2000+int(x.split('(', 1)[1].split(')')[0][len(code):len(code)+2]), string.uppercase.index(x.split('(', 1)[1].split(')')[0][len(code)+4:len(code)+5])+1, int(x.split('(', 1)[1].split(')')[0][len(code)+2:len(code)+4])),
    'value': float(x.split('(', 1)[1].split(')')[0].split('-')[0][len(code)+5:])
})
df['first_column'].apply(code_parser)
date value
0 2015-01-13 15
1 2015-01-14 16
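To get the exact layout from the question, the parsed columns can then replace the original one (a small follow-up sketch):
parsed = df['first_column'].apply(code_parser)
df = df.drop('first_column', axis=1).join(parsed)
df = df[['date', 'value', 'second_column', 'third_column', 'fourth_column']]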
