Select the last timestamp per date - python

A dataframe contains only a few timestamps per day and I need to select the latest one for each date (not the values, the timestamp itself). The df looks like this:
A B C
2016-12-05 12:00:00+00:00 126.0 15.0 38.54
2016-12-05 16:00:00+00:00 131.0 20.0 42.33
2016-12-14 05:00:00+00:00 129.0 18.0 43.24
2016-12-15 03:00:00+00:00 117.0 22.0 33.70
2016-12-15 04:00:00+00:00 140.0 23.0 34.81
2016-12-16 03:00:00+00:00 120.0 21.0 32.24
2016-12-16 04:00:00+00:00 142.0 22.0 35.20
I managed to achieve what I needed by defining the following function:
def find_last_h(df, column):
    newindex = []
    df2 = df.resample('d').last().dropna()
    for x in df2[column].values:
        newindex.append(df[df[column] == x].index.values[0])
    return pd.DatetimeIndex(newindex)
with which I specify which column's values to use as a filter to get the desired timestamps. The issue is that in the case of non-unique values this might not work as desired.
Another way that is used is:
grouped = df.groupby([df.index.day,df.index.hour])
grouped.groupby(level=0).last()
and then reconstructing the timestamps, but it is even more verbose. What is the smart way?

Use boolean indexing with a mask created by duplicated, using floor to truncate the times:
idx = df.index.floor('D')
df = df[~idx.duplicated(keep='last') | ~idx.duplicated(keep=False)]
print (df)
A B C
2016-12-05 16:00:00 131.0 20.0 42.33
2016-12-14 05:00:00 129.0 18.0 43.24
2016-12-15 04:00:00 140.0 23.0 34.81
2016-12-16 04:00:00 142.0 22.0 35.20
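Here floor('D') truncates every timestamp to midnight, so rows that fall on the same date compare equal; ~idx.duplicated(keep='last') keeps the last row of each date and ~idx.duplicated(keep=False) keeps dates that occur only once, so the combined mask selects exactly one row per date while leaving the original timestamps in the index.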
Another solution with reset_index + set_index:
df = df.reset_index().groupby([df.index.date]).last().set_index('index')
print (df)
A B C
index
2016-12-05 16:00:00 131.0 20.0 42.33
2016-12-14 05:00:00 129.0 18.0 43.24
2016-12-15 04:00:00 140.0 23.0 34.81
2016-12-16 04:00:00 142.0 22.0 35.20
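Here reset_index moves the timestamps into a column named index (the grouping dates are taken from the original index and align by position), last() keeps the whole last row of each date including that column, and set_index('index') promotes the preserved timestamps back to the index.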
resample and groupby on the date alone lose the times:
print (df.resample('1D').last().dropna())
A B C
2016-12-05 131.0 20.0 42.33
2016-12-14 129.0 18.0 43.24
2016-12-15 140.0 23.0 34.81
2016-12-16 142.0 22.0 35.20
print (df.groupby([df.index.date]).last())
A B C
2016-12-05 131.0 20.0 42.33
2016-12-14 129.0 18.0 43.24
2016-12-15 140.0 23.0 34.81
2016-12-16 142.0 22.0 35.20

How about
df.resample('24H', kind='period').last().dropna() ?

You can groupby the date and just take the max of each datetime to get the last datetime on each date.
This may look like:
df.groupby(df["datetime"].dt.date)["datetime"].max()
or something like
df.groupby(pd.Grouper(freq='D'))["datetime"].max()
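If the timestamps live in the index, as in the question's frame, rather than in a "datetime" column, the same idea can be applied to the index itself. A minimal sketch (variable names are illustrative):
import pandas as pd

# rebuild a small frame like the one in the question
idx = pd.to_datetime(['2016-12-05 12:00', '2016-12-05 16:00', '2016-12-14 05:00',
                      '2016-12-15 03:00', '2016-12-15 04:00',
                      '2016-12-16 03:00', '2016-12-16 04:00'])
df = pd.DataFrame({'A': [126.0, 131.0, 129.0, 117.0, 140.0, 120.0, 142.0]}, index=idx)

# last timestamp of each date, as a DatetimeIndex
last_per_day = pd.DatetimeIndex(df.index.to_series().groupby(df.index.date).max())
print(last_per_day)

# ...or keep the full rows at those timestamps
print(df.loc[last_per_day])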

Related

Overwrite one dataframe with values from another dataframe, based on repeated datetime index

I want to update and overwrite the values of one dataframe with the values in another, based on the datetime index, for a repeated datetime index. This code illustrates my problem, I have given df1 crazy values for illustrative purposes:
#import packages
import pandas as pd
import numpy as np
#create dataframes and indices
df = pd.DataFrame(np.random.randint(0,30,size=(10, 3)), columns=(['MeanT', 'MaxT', 'MinT']))
df1 = pd.DataFrame(np.random.randint(900,1000,size=(10, 3)), columns=(['MeanT', 'MaxT', 'MinT']))
df['Location'] =[2,2,3,3,4,4,5,5,6,6]
df1['Location'] =[2,2,3,3,4,4,5,5,6,6]
df.index = ["2020-05-18 12:00:00","2020-05-19 12:00:00","2020-05-18 12:00:00","2020-05-19 12:00:00","2020-05-18 12:00:00","2020-05-19 12:00:00","2020-05-18 12:00:00","2020-05-19 12:00:00","2020-05-18 12:00:00","2020-05-19 12:00:00"]
df1.index = ["2020-05-19 12:00:00", "2020-05-20 12:00:00", "2020-05-19 12:00:00", "2020-05-20 12:00:00", "2020-05-19 12:00:00", "2020-05-20 12:00:00", "2020-05-19 12:00:00", "2020-05-20 12:00:00", "2020-05-19 12:00:00", "2020-05-20 12:00:00"]
df.index = pd.to_datetime(df.index)
df1.index = pd.to_datetime(df1.index)
Take a look at both dataframes, which show dates of the 18th and 19th for df, and the 19th and 20th for df1.
print(df)
MeanT MaxT MinT Location
2020-05-18 12:00:00 28 0 9 2
2020-05-19 12:00:00 22 7 11 2
2020-05-18 12:00:00 2 7 7 3
2020-05-19 12:00:00 10 24 18 3
2020-05-18 12:00:00 10 12 25 4
2020-05-19 12:00:00 25 7 20 4
2020-05-18 12:00:00 1 8 11 5
2020-05-19 12:00:00 27 19 12 5
2020-05-18 12:00:00 25 10 26 6
2020-05-19 12:00:00 29 11 27 6
print(df1)
MeanT MaxT MinT Location
2020-05-19 12:00:00 912 991 915 2
2020-05-20 12:00:00 936 917 965 2
2020-05-19 12:00:00 918 977 901 3
2020-05-20 12:00:00 974 971 927 3
2020-05-19 12:00:00 979 929 953 4
2020-05-20 12:00:00 988 955 939 4
2020-05-19 12:00:00 969 983 940 5
2020-05-20 12:00:00 902 904 916 5
2020-05-19 12:00:00 983 942 965 6
2020-05-20 12:00:00 928 994 933 6
I want to create a new dataframe which updates df with the values from df1, so the new df has values for the 18th from df, and the 19th and 20th from df1.
I have tried using combine_first like so:
df = df.set_index(df.groupby(level=0).cumcount(), append=True)
df1 = df1.set_index(df1.groupby(level=0).cumcount(), append=True)
df3 = df.combine_first(df1).sort_index(level=[1,0]).reset_index(level=1, drop=True)
which updates the dataframe, but doesn't overwrite the data for the 19th with values in df1. It produces this output:
print(df3)
MeanT MaxT MinT Location
2020-05-18 12:00:00 28.0 0.0 9.0 2.0
2020-05-19 12:00:00 22.0 7.0 11.0 2.0
2020-05-20 12:00:00 936.0 917.0 965.0 2.0
2020-05-18 12:00:00 2.0 7.0 7.0 3.0
2020-05-19 12:00:00 10.0 24.0 18.0 3.0
2020-05-20 12:00:00 974.0 971.0 927.0 3.0
2020-05-18 12:00:00 10.0 12.0 25.0 4.0
2020-05-19 12:00:00 25.0 7.0 20.0 4.0
2020-05-20 12:00:00 988.0 955.0 939.0 4.0
2020-05-18 12:00:00 1.0 8.0 11.0 5.0
2020-05-19 12:00:00 27.0 19.0 12.0 5.0
2020-05-20 12:00:00 902.0 904.0 916.0 5.0
2020-05-18 12:00:00 25.0 10.0 26.0 6.0
2020-05-19 12:00:00 29.0 11.0 27.0 6.0
2020-05-20 12:00:00 928.0 994.0 933.0 6.0
So the values for the 18th and the 20th are correct, but the values for the 19th are still from df. I want the values from df to be overwritten with those in df1. Please help!
You just need to use combine_first backwards.
We can also use 'Location' as part of the index instead of groupby.cumcount:
df3 = (df1.set_index('Location', append=True)
          .combine_first(df.set_index('Location', append=True))
          .reset_index(level='Location')
          .reindex(columns=df.columns)
          .sort_values('Location'))
print(df3)
Location MeanT MaxT MinT
2020-05-18 12:00:00 2 28.0 0.0 9.0
2020-05-19 12:00:00 2 912.0 991.0 915.0
2020-05-20 12:00:00 2 936.0 917.0 965.0
2020-05-18 12:00:00 3 2.0 7.0 7.0
2020-05-19 12:00:00 3 918.0 977.0 901.0
2020-05-20 12:00:00 3 974.0 971.0 927.0
2020-05-18 12:00:00 4 10.0 12.0 25.0
2020-05-19 12:00:00 4 979.0 929.0 953.0
2020-05-20 12:00:00 4 988.0 955.0 939.0
2020-05-18 12:00:00 5 1.0 8.0 11.0
2020-05-19 12:00:00 5 969.0 983.0 940.0
2020-05-20 12:00:00 5 902.0 904.0 916.0
2020-05-18 12:00:00 6 25.0 10.0 26.0
2020-05-19 12:00:00 6 983.0 942.0 965.0
2020-05-20 12:00:00 6 928.0 994.0 933.0
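Appending 'Location' to the index makes every (timestamp, Location) pair unique, so combine_first aligns the two frames row by row; calling it on df1 first means df1's values take precedence wherever both frames have a row, which is exactly the backwards use that overwrites the 19th.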

pandas calculate time series resampling

If I had some random data created at a one-hour sample rate...
import pandas as pd
import numpy as np
from numpy.random import randint
np.random.seed(10) # added for reproducibility
rng = pd.date_range('10/9/2018 00:00', periods=1000, freq='1H')
df = pd.DataFrame({'Random_Number':randint(1, 100, 1000)}, index=rng)
I can use the groupby to break out each day:
for idx, day in df.groupby(df.index.date):
    print(day)
Now, is there a way to calculate the time difference in hours between the daily min and max values, based on their timestamps? And for each day, record the daily min, max, and that time difference?
After some discussion (thanks @Erfan):
(df.Random_Number
   .groupby(df.index.date)
   .agg(['idxmin', 'idxmax'])
   .diff(axis=1).iloc[:, 1]
   .div(pd.to_timedelta('1H'))
)
Output:
2018-10-09 -4.0
2018-10-10 -1.0
2018-10-11 -4.0
2018-10-12 12.0
2018-10-13 21.0
2018-10-14 6.0
2018-10-15 -6.0
2018-10-16 -18.0
2018-10-17 -8.0
2018-10-18 9.0
2018-10-19 -10.0
2018-10-20 3.0
2018-10-21 10.0
2018-10-22 2.0
2018-10-23 9.0
2018-10-24 2.0
2018-10-25 3.0
2018-10-26 2.0
2018-10-27 -22.0
2018-10-28 6.0
2018-10-29 -8.0
2018-10-30 -1.0
2018-10-31 -11.0
2018-11-01 19.0
2018-11-02 7.0
2018-11-03 4.0
2018-11-04 18.0
2018-11-05 -1.0
2018-11-06 15.0
2018-11-07 -14.0
2018-11-08 -16.0
2018-11-09 -2.0
2018-11-10 -7.0
2018-11-11 -14.0
2018-11-12 12.0
2018-11-13 -14.0
2018-11-14 2.0
2018-11-15 2.0
2018-11-16 6.0
2018-11-17 -7.0
2018-11-18 5.0
2018-11-19 9.0
Name: idxmax, dtype: float64
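Should you also want to record each day's min and max alongside the gap in hours, a small variant of the same idea (the hours_max_minus_min column name is illustrative, not from the code above):
daily = df.groupby(df.index.date)['Random_Number'].agg(['min', 'max', 'idxmin', 'idxmax'])
daily['hours_max_minus_min'] = (pd.to_datetime(daily['idxmax'])
                                - pd.to_datetime(daily['idxmin'])) / pd.to_timedelta('1H')
print(daily[['min', 'max', 'hours_max_minus_min']].head())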
Alternatively, should you want to retain all columns in a data frame output, consider merging on an aggregated dataset:
# ADJUST FOR DATETIME AND DATE AS COLUMNS
df = (df.reset_index()
        .assign(date = df.index.date)
     )
# AGGREGATION + MERGE ON MIN/MAX + CALCULATION
agg_df = (df.groupby('date')['Random_Number']
            .agg(["min", "max"])
            .reset_index()
            .merge(df, left_on=['date', 'max'], right_on=['date', 'Random_Number'])
            .merge(df, left_on=['date', 'min'], right_on=['date', 'Random_Number'],
                   suffixes=['', '_min'])
            .assign(diff = lambda x: (x['index'] - x['index_min']) / pd.to_timedelta('1H'))
         )
print(agg_df.head(10))
# date min max index Random_Number index_min Random_Number_min diff
# 0 2018-10-09 1 94 2018-10-09 05:00:00 94 2018-10-09 09:00:00 1 -4.0
# 1 2018-10-10 12 95 2018-10-10 20:00:00 95 2018-10-10 21:00:00 12 -1.0
# 2 2018-10-11 5 97 2018-10-11 15:00:00 97 2018-10-11 19:00:00 5 -4.0
# 3 2018-10-12 7 98 2018-10-12 18:00:00 98 2018-10-12 06:00:00 7 12.0
# 4 2018-10-13 1 91 2018-10-13 22:00:00 91 2018-10-13 01:00:00 1 21.0
# 5 2018-10-14 1 97 2018-10-14 10:00:00 97 2018-10-14 04:00:00 1 6.0
# 6 2018-10-15 9 97 2018-10-15 06:00:00 97 2018-10-15 12:00:00 9 -6.0
# 7 2018-10-16 3 95 2018-10-16 04:00:00 95 2018-10-16 22:00:00 3 -18.0
# 8 2018-10-17 2 95 2018-10-17 13:00:00 95 2018-10-17 21:00:00 2 -8.0
# 9 2018-10-18 1 91 2018-10-18 21:00:00 91 2018-10-18 12:00:00 1 9.0

Concatenate all dataframe columns into a single column

I have a dataframe that looks roughly like:
01/01/19 02/01/19 03/01/19 04/01/19
hour
1.0 27.08 47.73 54.24 10.0
2.0 26.06 49.53 46.09 22.0
...
24.0 12.0 34.0 22.0 40.0
I'd like to reduce it to a single column with a proper datetime index by concatenating all the columns. Is there a smart pandas way to do it?
Expected result... something like:
01/01/19 00:00:00 27.08
01/01/19 01:00:00 26.08
...
01/01/19 23:00:00 12.00
02/01/19 00:00:00 47.73
02/01/19 01:00:00 49.53
...
02/01/19 23:00:00 34.00
...
You can stack and then fix the index using pd.to_datetime and pd.to_timedelta:
u = df.stack()
u.index = (pd.to_datetime(u.index.get_level_values(1), dayfirst=True)
           + pd.to_timedelta(u.index.get_level_values(0) - 1, unit='h'))
u.sort_index()
2019-01-01 00:00:00 27.08
2019-01-01 01:00:00 26.06
2019-01-01 23:00:00 12.00
2019-01-02 00:00:00 47.73
2019-01-02 01:00:00 49.53
2019-01-02 23:00:00 34.00
2019-01-03 00:00:00 54.24
2019-01-03 01:00:00 46.09
2019-01-03 23:00:00 22.00
2019-01-04 00:00:00 10.00
2019-01-04 01:00:00 22.00
2019-01-04 23:00:00 40.00
dtype: float64
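Note the subtraction of 1 in the timedelta: the rows are labelled hour 1.0 through 24.0, so shifting by one hour maps hour 1.0 to 00:00 and hour 24.0 to 23:00 of the same day, matching the expected output; dayfirst=True makes to_datetime read the column labels as day/month/year.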

Computing daily minimum and maximum of columns in pandas dataframe spanning multiple days

I have the following dataframe with hourly data:
tmp min_tmp max_tmp
dates
2017-07-19 14:00:00 19.0 19.0 19.0
2017-07-19 15:00:00 18.0 18.0 18.0
2017-07-19 16:00:00 16.0 16.0 16.0
2017-07-19 17:00:00 16.0 16.0 16.0
2017-07-19 18:00:00 15.0 15.0 15.0
Is there a way we can compute the daily minimum and maximum values of tmp in min_tmp and max_tmp, respectively? I tried this:
df['min_temp'] = df['tmp'].min()
but this does not work for data that spans multiple days.
Use resample and transform:
g = df.resample('D')['tmp']
df['min_tmp'] = g.transform('min')
df['max_tmp'] = g.transform('max')
Output
tmp min_tmp max_tmp
dates
2017-07-19 14:00:00 19.0 15.0 19.0
2017-07-19 15:00:00 18.0 15.0 19.0
2017-07-19 16:00:00 16.0 15.0 19.0
2017-07-19 17:00:00 16.0 15.0 19.0
2017-07-19 18:00:00 15.0 15.0 19.0
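transform broadcasts each day's min and max back onto every hourly row, which is why the new columns line up with the original index; a plain agg would instead collapse the frame to one row per day.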
Here is the min/max computed by day. You have to group by day, month, and year simultaneously:
df['tmp'].groupby(by=[df.index.day, df.index.month, df.index.year]).min()
Using transform in pandas:
df['Date'] = df.index
df.Date = pd.to_datetime(df.Date)
map = {'min_tmp': min, 'max_tmp': max}
for key, value in map.items():
    print(key, value)
    df[key] = df.groupby(df.Date.dt.date)['tmp'].transform(func=value)
df.drop(['Date'], axis=1)
Out[469]:
tmp min_tmp max_tmp
Date
7/19/2017 14:00 19 15 19
7/19/2017 15:00 18 15 19
7/19/2017 16:00 16 15 19
7/19/2017 17:00 16 15 19
7/19/2017 18:00 15 15 19
To avoid repeating yourself, you can simply do this:
df['max_tmp']=df.groupby(df.Date.dt.date)['tmp'].transform(max)
df['min_tmp']=df.groupby(df.Date.dt.date)['tmp'].transform(min)

Pandas Resample Apply Custom Function?

I'm trying to use pandas to resample 15-minute periods into 1-hour periods, but by applying a custom function. My DataFrame is in this format:
Date val1 val2
2016-01-30 07:00:00 49.0 45.0
2016-01-30 07:15:00 49.0 44.0
2016-01-30 07:30:00 52.0 47.0
2016-01-30 07:45:00 60.0 46.0
2016-01-30 08:00:00 63.0 61.0
2016-01-30 08:15:00 61.0 60.0
2016-01-30 08:30:00 62.0 61.0
2016-01-30 08:45:00 63.0 61.0
2016-01-30 09:00:00 68.0 60.0
2016-01-30 09:15:00 71.0 70.0
2016-01-30 09:30:00 71.0 70.0
...and I want to resample with this function:
import math

def log_add(array_like):
    return 10 * math.log10(sum([10**(i / 10) for i in array_like]))
I do:
df.resample('1H').apply(log_add)
but this returns an empty df. Doing this:
df.resample('1H').apply(lambda x: log_add(x))
does the same. Does anyone have any idea why it's not applying the function properly?
Any help would be appreciated, thanks.
You can use the on parameter, which was implemented in pandas 0.19.0:
print (df.resample('1H', on='Date').apply(log_add))
Or set Date as the index with set_index:
df.set_index('Date', inplace=True)
print (df.resample('1H').apply(log_add))
Also, first check whether the dtype of column Date is datetime; if not, use to_datetime:
print (df.dtypes)
Date object
val1 float64
val2 float64
dtype: object
df.Date = pd.to_datetime(df.Date)
print (df.dtypes)
Date datetime64[ns]
val1 float64
val2 float64
dtype: object
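Putting the pieces together, a minimal self-contained sketch using the question's data and its log_add function, resampling on the Date column:
import math
import pandas as pd

def log_add(array_like):
    # logarithmic (power) addition of dB-like values
    return 10 * math.log10(sum([10**(i / 10) for i in array_like]))

df = pd.DataFrame({
    'Date': pd.to_datetime(['2016-01-30 07:00:00', '2016-01-30 07:15:00',
                            '2016-01-30 07:30:00', '2016-01-30 07:45:00',
                            '2016-01-30 08:00:00']),
    'val1': [49.0, 49.0, 52.0, 60.0, 63.0],
    'val2': [45.0, 44.0, 47.0, 46.0, 61.0],
})

# each hourly group of val1 and val2 is passed to log_add
print(df.resample('1H', on='Date').apply(log_add))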
