Changing time interval in pandas and displaying the mean [duplicate]

I want to resample the data in the SMS, CALL and INTERNET columns, replacing the values with their mean for every hour.
Code 1 tried:
df1.reset_index().set_index('TIME').resample('1H').mean()
error: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'
Code 2 tried:
df1['TIME'] = pd.to_datetime(df1['TIME'])
df1.CALL.resample('60min', how='mean')
error: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'RangeIndex'
Dataframe:
ID TIME SMS CALL INTERNET
0 1 2013-11-30 23:00:00 0.277204 0.273629 13.674575
1 1 2013-11-30 23:10:00 0.341536 0.058176 13.330858
2 1 2013-11-30 23:20:00 0.379427 0.054601 11.329552
3 1 2013-11-30 23:30:00 0.600781 0.218489 13.166163
4 1 2013-11-30 23:40:00 0.405565 0.134176 13.347791
5 1 2013-11-30 23:50:00 0.187700 0.080738 12.434744
6 1 2013-12-01 00:00:00 0.282651 0.135964 13.860353
7 1 2013-12-01 00:10:00 0.109826 0.056388 12.583463
8 1 2013-12-01 00:20:00 0.348638 0.053438 12.644995
9 1 2013-12-01 00:30:00 0.138375 0.054062 12.251733
10 1 2013-12-01 00:40:00 0.054062 0.163803 11.292642
df1.dtypes
ID int64
TIME object
SMS float64
CALL float64
INTERNET float64
dtype: object

You can use the parameter on in resample:
on : string, optional
For a DataFrame, column to use instead of index for resampling. Column must be datetime-like.
New in version 0.19.0.
df1['TIME'] = pd.to_datetime(df1['TIME'])
df = df1.resample('60min', on='TIME').mean()
print (df)
ID SMS CALL INTERNET
TIME
2013-11-30 23:00:00 1 0.365369 0.136635 12.880614
2013-12-01 00:00:00 1 0.186710 0.092731 12.526637
Or add set_index for DatetimeIndex:
df1['TIME'] = pd.to_datetime(df1['TIME'])
df = df1.set_index('TIME').resample('60min').mean()
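For completeness, a minimal runnable sketch of this approach, rebuilt from the first few sample rows in the question (the values are copied from the table above):
import pandas as pd

# rebuild a slice of the question's data
df1 = pd.DataFrame({
    'ID': [1] * 6,
    'TIME': ['2013-11-30 23:00:00', '2013-11-30 23:10:00', '2013-11-30 23:20:00',
             '2013-11-30 23:30:00', '2013-11-30 23:40:00', '2013-11-30 23:50:00'],
    'SMS': [0.277204, 0.341536, 0.379427, 0.600781, 0.405565, 0.187700],
    'CALL': [0.273629, 0.058176, 0.054601, 0.218489, 0.134176, 0.080738],
    'INTERNET': [13.674575, 13.330858, 11.329552, 13.166163, 13.347791, 12.434744],
})

df1['TIME'] = pd.to_datetime(df1['TIME'])        # object -> datetime64[ns]
print(df1.resample('60min', on='TIME').mean())   # one row per hour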

Related

Weird behavior for groupby using time interval in pyspark

I have a pyspark dataframe with a column named 'datetime' of the 'datetime64[ns]' type in the format "yyyy-MM-dd HH:mm:ss".
I'm trying to group it by a given timewindow.
This is what I'm doing:
import pyspark.sql.functions as psf
dataframe.groupBy(psf.window('datetime', f'{interval} seconds'), 'player_id', 'media_id').count()
interval is a parameter received as a string such as 'hour', 'day', 'week'.
I then convert it to seconds, as in, 1 hour = 3600 seconds, 1 day = 86400 seconds.
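For instance, that conversion could be a simple lookup table (a sketch; the names here are illustrative, not from the question):
INTERVAL_SECONDS = {'hour': 3600, 'day': 86400, 'week': 604800}
window_duration = f"{INTERVAL_SECONDS[interval]} seconds"  # e.g. '604800 seconds'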
When I group it by 1 hour it works fine; this is the result:
window_start         window_end           player_id  media_id  count
2022-08-01 00:00:00  2022-08-01 01:00:00  1          2841      22
2022-08-01 00:00:00  2022-08-01 01:00:00  1          2899      44
Since the first date in the dataframe is 2022-08-01, everything is fine. But when I try to group it by a week, this is the result:
window_start         window_end           player_id  media_id  count
2022-07-27 21:00:00  2022-08-03 21:00:00  1          1524      3
2022-07-27 21:00:00  2022-08-03 21:00:00  1          2841      1117
I'm positive there are no dates before 2022-08-01 in the dataframe.
Why is it doing this? I tried using the startTime parameter of the window function, but it only offsets the start; it doesn't let me specify the beginning of a valid interval.
Any thoughts?
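A likely explanation (not stated in the original thread): Spark's tumbling windows are laid out from the Unix epoch, 1970-01-01 00:00:00 UTC, not from calendar boundaries, so a 604800-second window can legitimately begin mid-week in local time (1970-01-01 was a Thursday, and the 21:00:00 boundary is consistent with a UTC-3 session timezone). A sketch of re-anchoring weekly windows with startTime, assuming UTC timestamps and a Monday week start:
import pyspark.sql.functions as psf

# Tumbling windows start at multiples of the duration from the epoch;
# startTime shifts that origin. The epoch day was a Thursday, so '4 days'
# moves 1-week window boundaries to Mondays (in UTC).
weekly = dataframe.groupBy(
    psf.window('datetime', '1 week', startTime='4 days'),
    'player_id', 'media_id').count()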

Pandas change time values based on condition

I have a dataframe:
data = {'time':['08:45:00', '09:30:00', '18:00:00', '15:00:00']}
df = pd.DataFrame(data)
I would like to convert the time based on conditions: if the hour is less than 9, I want to set it to 9 and if the hour is more than 17, I need to set it to 17.
I tried this approach:
df['time'] = np.where(((df['time'].dt.hour < 9) & (df['time'].dt.hour != 0)), dt.time(9, 00))
I am getting an error: Can only use .dt accessor with datetimelike values.
Can anyone please help me with this? Thanks.
Here's a way to do what your question asks:
df.time = pd.to_datetime(df.time)
# shift out-of-window rows by whole hours (3600 s * 1e9 ns per hour) so the hour becomes exactly 9 or 17
df.loc[df.time.dt.hour < 9, 'time'] = (df.time.astype('int64') + (9 - df.time.dt.hour)*3600*1000000000).astype('datetime64[ns]')
df.loc[df.time.dt.hour > 17, 'time'] = (df.time.astype('int64') + (17 - df.time.dt.hour)*3600*1000000000).astype('datetime64[ns]')
Input:
time
0 2022-06-06 08:45:00
1 2022-06-06 09:30:00
2 2022-06-06 18:00:00
3 2022-06-06 15:00:00
Output:
time
0 2022-06-06 09:45:00
1 2022-06-06 09:30:00
2 2022-06-06 17:00:00
3 2022-06-06 15:00:00
UPDATE:
Here's alternative code to try to address OP's error as described in the comments:
import pandas as pd
import datetime
data = {'time':['08:45:00', '09:30:00', '18:00:00', '15:00:00']}
df = pd.DataFrame(data)
print('', 'df loaded as strings:', df, sep='\n')
df.time = pd.to_datetime(df.time, format='%H:%M:%S')
print('', 'df converted to datetime by pd.to_datetime():', df, sep='\n')
df.loc[df.time.dt.hour < 9, 'time'] = (df.time.astype('int64') + (9 - df.time.dt.hour)*3600*1000000000).astype('datetime64[ns]')
df.loc[df.time.dt.hour > 17, 'time'] = (df.time.astype('int64') + (17 - df.time.dt.hour)*3600*1000000000).astype('datetime64[ns]')
df.time = [time.time() for time in pd.to_datetime(df.time)]
print('', 'df with time column adjusted to have hour between 9 and 17, converted to type "time":', df, sep='\n')
Output:
df loaded as strings:
time
0 08:45:00
1 09:30:00
2 18:00:00
3 15:00:00
df converted to datetime by pd.to_datetime():
time
0 1900-01-01 08:45:00
1 1900-01-01 09:30:00
2 1900-01-01 18:00:00
3 1900-01-01 15:00:00
df with time column adjusted to have hour between 9 and 17, converted to type "time":
time
0 09:45:00
1 09:30:00
2 17:00:00
3 15:00:00
UPDATE #2:
To not just change the hour for out-of-window times, but to simply apply 9:00 and 17:00 as min and max times, respectively (see OP's comment on this), you can do this:
df.loc[df['time'].dt.hour < 9, 'time'] = pd.to_datetime(pd.DataFrame({
    'year': df['time'].dt.year, 'month': df['time'].dt.month, 'day': df['time'].dt.day,
    'hour': [9] * len(df.index)}))
df.loc[df['time'].dt.hour > 17, 'time'] = pd.to_datetime(pd.DataFrame({
    'year': df['time'].dt.year, 'month': df['time'].dt.month, 'day': df['time'].dt.day,
    'hour': [17] * len(df.index)}))
df['time'] = [time.time() for time in pd.to_datetime(df['time'])]
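A more compact alternative (a sketch, not part of the original answer) is to clamp with Series.clip, which accepts element-wise bounds:
import pandas as pd

t = pd.to_datetime(df['time'], format='%H:%M:%S')  # dates default to 1900-01-01
day = t.dt.normalize()                             # midnight of each row's date
df['time'] = t.clip(lower=day + pd.Timedelta(hours=9),
                    upper=day + pd.Timedelta(hours=17)).dt.time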
Since your 'time' column contains strings, the values can be kept as strings and new string values assigned where appropriate. To filter for your criteria it is convenient to: create a datetime Series from the 'time' column, create boolean Series by comparing that datetime Series against your criteria, then use the boolean Series to select the rows which need to be changed.
Your data:
import numpy as np
import pandas as pd
data = {'time':['08:45:00', '09:30:00', '18:00:00', '15:00:00']}
df = pd.DataFrame(data)
print(df.to_string())
>>>
time
0 08:45:00
1 09:30:00
2 18:00:00
3 15:00:00
Convert to datetime and make boolean Series with your criteria:
dts = pd.to_datetime(df['time'])
lt_nine = dts.dt.hour < 9
gt_seventeen = (dts.dt.hour >= 17)
print(lt_nine)
print(gt_seventeen)
>>>
0 True
1 False
2 False
3 False
Name: time, dtype: bool
0 False
1 False
2 True
3 False
Name: time, dtype: bool
Use the boolean series to assign a new value:
df.loc[lt_nine,'time'] = '09:00:00'
df.loc[gt_seventeen,'time'] = '17:00:00'
print(df.to_string())
>>>
time
0 09:00:00
1 09:30:00
2 17:00:00
3 15:00:00
Or just stick with strings altogether and create the boolean Series using regex patterns and .str.match.
data = {'time':['08:45:00', '09:30:00', '18:00:00', '15:00:00','07:22:00','22:02:06']}
dg = pd.DataFrame(data)
print(dg.to_string())
>>>
time
0 08:45:00
1 09:30:00
2 18:00:00
3 15:00:00
4 07:22:00
5 22:02:06
# regex patterns (alternations grouped so the ^ anchor applies to every branch)
pattern_lt_nine = '^(00|01|02|03|04|05|06|07|08)'
pattern_gt_seventeen = '^(17|18|19|20|21|22|23)'
Make boolean Series and assign new values
gt_seventeen = dg['time'].str.match(pattern_gt_seventeen)
lt_nine = dg['time'].str.match(pattern_lt_nine)
dg.loc[lt_nine,'time'] = '09:00:00'
dg.loc[gt_seventeen,'time'] = '17:00:00'
print(dg.to_string())
>>>
time
0 09:00:00
1 09:30:00
2 17:00:00
3 15:00:00
4 09:00:00
5 17:00:00
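A simpler string-only variant (a sketch, assuming zero-padded HH:MM:SS strings) compares the leading hour digits numerically instead of using regex:
hour = dg['time'].str.slice(0, 2).astype(int)  # leading 'HH' as integers
dg.loc[hour < 9, 'time'] = '09:00:00'
dg.loc[hour >= 17, 'time'] = '17:00:00'        # >= 17 to mirror the regex above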
See the pandas user guide sections "Time series / date functionality" and "Working with text data".

Why is the difference of datetime = zero for two rows in a dataframe?

This issue that I am facing is very simple yet weird and has troubled me to no end.
I have a dataframe as follows :
df['datetime'] = df['datetime'].dt.tz_convert('US/Pacific')
# converting datetime from datetime64[ns, UTC] to datetime64[ns, US/Pacific]
df.head()
vehicle_id trip_id datetime
6760612 1000500 4f874888ce404720a203e36f1cf5b716 2017-01-01 10:00:00-08:00
6760613 1000500 4f874888ce404720a203e36f1cf5b716 2017-01-01 10:00:01-08:00
6760614 1000500 4f874888ce404720a203e36f1cf5b716 2017-01-01 10:00:02-08:00
6760615 1000500 4f874888ce404720a203e36f1cf5b716 2017-01-01 10:00:03-08:00
6760616 1000500 4f874888ce404720a203e36f1cf5b716 2017-01-01 10:00:04-08:00
df.info ()
vehicle_id int64
trip_id object
datetime datetime64[ns, US/Pacific]
I am trying to find the datetime difference as follows (in two different ways):
df['datetime_diff'] = df['datetime'].diff()
df['time_diff'] = (df['datetime'] - df['datetime'].shift(1)).astype('timedelta64[s]')
For a particular trip_id, I have the results as follows :
df[df['trip_id'] == '4f874888ce404720a203e36f1cf5b716'][['datetime','datetime_diff','time_diff']].head()
datetime datetime_diff time_diff
6760612 2017-01-01 10:00:00-08:00 NaT NaN
6760613 2017-01-01 10:00:01-08:00 00:00:01 1.0
6760614 2017-01-01 10:00:02-08:00 00:00:01 1.0
6760615 2017-01-01 10:00:03-08:00 00:00:01 1.0
6760616 2017-01-01 10:00:04-08:00 00:00:01 1.0
But for some other trip_ids, like the ones below, you can observe that the datetime difference is zero (for both columns) when it actually is not; there is a time difference in seconds.
df[df['trip_id'] == '01b8a24510cd4e4684d67b96369286e0'][['datetime','datetime_diff','time_diff']].head(4)
datetime datetime_diff time_diff
3236107 2017-01-28 03:00:00-08:00 0 days 0.0
3236108 2017-01-28 03:00:01-08:00 0 days 0.0
3236109 2017-01-28 03:00:02-08:00 0 days 0.0
3236110 2017-01-28 03:00:03-08:00 0 days 0.0
df[df['trip_id'] == '01c2a70c25e5428bb33811ca5eb19270'][['datetime','datetime_diff','time_diff']].head(4)
datetime datetime_diff time_diff
8915474 2017-01-21 10:00:00-08:00 0 days 0.0
8915475 2017-01-21 10:00:01-08:00 0 days 0.0
8915476 2017-01-21 10:00:02-08:00 0 days 0.0
8915477 2017-01-21 10:00:03-08:00 0 days 0.0
Any leads as to what the actual issue is? I will be very grateful.
If I just execute your code without the type conversion, everything looks fine:
df.timestamp - df.timestamp.shift(1)
On the example lines
rows=['2017-01-21 10:00:00-08:00',
'2017-01-21 10:00:01-08:00',
'2017-01-21 10:00:02-08:00',
'2017-01-21 10:00:03-08:00',
'2017-01-21 10:00:03-08:00'] # the above lines are from your example. I just invented this last line to have one equal entry
df= pd.DataFrame(rows, columns=['timestamp'])
df['timestamp'] = df['timestamp'].astype('datetime64[ns]')
df.timestamp - df.timestamp.shift(1)
The last line returns
Out[40]:
0 NaT
1 00:00:01
2 00:00:01
3 00:00:01
4 00:00:00
Name: timestamp, dtype: timedelta64[ns]
That looks unsuspicious so far. Note that you already have a timedelta64 series.
If I now add your conversion, I get:
(df.timestamp - df.timestamp.shift(1)).astype('timedelta64[s]')
Out[42]:
0 NaN
1 1.0
2 1.0
3 1.0
4 0.0
Name: timestamp, dtype: float64
You see that the result is a series of floats. This is probably because there is a NaN in the series. One other thing is the addition of [s]: that doesn't seem to work, while [ns] does. If you want to get rid of the nanoseconds somehow, I guess you need to do it separately.
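As an aside (not from the original answer), timedelta Series also expose total_seconds() through the .dt accessor, which returns float seconds directly and maps NaT to NaN without the unit cast:
(df.timestamp - df.timestamp.shift(1)).dt.total_seconds()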

Count business days between two date columns using pandas

I have tried to calculate the number of business days between two dates (stored in separate columns of a dataframe).
MonthBegin MonthEnd
0 2014-06-09 2014-06-30
1 2014-07-01 2014-07-31
2 2014-08-01 2014-08-31
3 2014-09-01 2014-09-30
4 2014-10-01 2014-10-31
I have tried to apply numpy.busday_count but I get the following error:
Iterator operand 0 dtype could not be cast from dtype('<M8[ns]') to dtype('<M8[D]') according to the rule 'safe'
I have tried to change the type to Timestamp as follows:
Timestamp('2014-08-31 00:00:00')
or datetime :
datetime.date(2014, 8, 31)
or to numpy.datetime64:
numpy.datetime64('2014-06-30T00:00:00.000000000')
Anyone knows how to fix it?
Note 1: I have tried np.busday_count in two ways:
1. Passing dataframe columns: t['Days'] = np.busday_count(t.MonthBegin, t.MonthEnd)
2. Passing arrays: np.busday_count(dt1, dt2)
Note 2: My dataframe has over 150K rows, so I need an efficient approach.
You can use bdate_range. I also corrected your input, since most of the MonthEnd values were earlier than the corresponding MonthBegin values:
[len(pd.bdate_range(x,y))for x,y in zip(df['MonthBegin'],df['MonthEnd'])]
Out[519]: [16, 21, 22, 23, 20]
I think the best way to do it is:
df.apply(lambda row : np.busday_count(row['MBegin'],row['MEnd']),axis=1)
For my dataframe df as below:
MBegin MEnd
0 2011-01-01 2011-02-01
1 2011-01-10 2011-02-10
2 2011-01-02 2011-02-02
doing:
df['MBegin'] = df['MBegin'].values.astype('datetime64[D]')
df['MEnd'] = df['MEnd'].values.astype('datetime64[D]')
df['busday'] = df.apply(lambda row : np.busday_count(row['MBegin'],row['MEnd']),axis=1)
>>df
MBegin MEnd busday
0 2011-01-01 2011-02-01 21
1 2011-01-10 2011-02-10 23
2 2011-01-02 2011-02-02 22
You need to provide the format in which your dates are written:
a = datetime.strptime('2014-06-9', '%Y-%m-%d')
b = datetime.strptime('2014-06-30', '%Y-%m-%d')
Now take their difference:
c = b - a
This gives you datetime.timedelta(21); to convert it into days, just use c.days, which gives the difference as 21 days. You can now use a list comprehension to get the difference between two date columns as days.
You can modify your code to get the desired result as below:
df = pd.DataFrame({'MonthBegin': ['2014-06-09', '2014-08-01', '2014-09-01', '2014-10-01', '2014-11-01'],
'MonthEnd': ['2014-06-30', '2014-08-31', '2014-09-30', '2014-10-31', '2014-11-30']})
df['MonthBegin'] = df['MonthBegin'].astype('datetime64[ns]')
df['MonthEnd'] = df['MonthEnd'].astype('datetime64[ns]')
df['BDays'] = np.busday_count(df['MonthBegin'].tolist(), df['MonthEnd'].tolist())
print(df)
MonthBegin MonthEnd BDays
0 2014-06-09 2014-06-30 15
1 2014-08-01 2014-08-31 21
2 2014-09-01 2014-09-30 21
3 2014-10-01 2014-10-31 22
4 2014-11-01 2014-11-30 20
Additionally, numpy.busday_count has a few other optional arguments, like weekmask and holidays, which you can use according to your needs.
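To illustrate those arguments, a small sketch (the holiday date here is made up):
import numpy as np

# count Mon-Fri only, skipping one hypothetical holiday
np.busday_count('2014-06-09', '2014-06-30',
                weekmask='1111100', holidays=['2014-06-16'])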

Dates from 1900-01-01 are added to my 'Time' after using df['Time'] = pd.to_datetime(phData['Time'], format='%H:%M:%S')

I am a self-taught coder (for around a year, so new). Here is my data:
phData = pd.read_excel('phone call log & duration.xlsx')
called from called to Date Time Duration in (sec)
0 7722078014 7722012013 2017-07-01 10:00:00 303
1 7722078014 7722052018 2017-07-01 10:21:00 502
2 7722078014 7450120521 2017-07-01 10:23:00 56
The dtypes are:
called from int64
called to int64
Date datetime64[ns]
Time object
Duration in (sec) int64
dtype: object
phData['Time'] = pd.to_datetime(phData['Time'], format='%H:%M:%S')
phData.head(2)
called from called to Date Time Duration in (sec)
0 7722078014 7722012013 2017-07-01 1900-01-01 10:00:00 303
1 7722078014 7722052018 2017-07-01 1900-01-01 10:21:00 502
I've managed to change 'Time' to datetime64[ns], but somehow dates have been added, and I have no idea from where. I want to be able to analyse the Date and Time using pandas, to explore calls made between dates and times, frequency, etc. I also want to save it so it will work in Orange3, but Orange3 won't recognise the Time as a time format. I've tried stripping out the 1900-01-01, but I get an error saying it can only be done on an object. I think the Time isn't a datetime but a datetime.time, and I'm not sure why this matters. How can I simply have two columns, one Date and another Time, that pandas will recognise for me to mine? I have looked at countless posts, and that's where I found how to use pd.to_datetime and that my issue might be datetime.time, but I'm stuck after this.
Pandas doesn't have a Time dtype; you can have either a datetime or a timedelta dtype.
Option 1: combine Date and Time into single column:
In [23]: df['TimeStamp'] = pd.to_datetime(df.pop('Date') + ' ' + df.pop('Time'))
In [24]: df
Out[24]:
called from called to Duration in (sec) TimeStamp
0 7722078014 7722012013 303 2017-07-01 10:00:00
1 7722078014 7722052018 502 2017-07-01 10:21:00
2 7722078014 7450120521 56 2017-07-01 10:23:00
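One caveat worth noting (an addition, since the question's dtypes show Date already parsed as datetime64[ns]): the concatenation above assumes both columns are still strings; if Date is already a datetime, converting both to strings first should work:
df['TimeStamp'] = pd.to_datetime(df.pop('Date').astype(str) + ' ' + df.pop('Time').astype(str))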
Option 2: convert Date to datetime and Time to timedelta dtype:
In [27]: df.Date = pd.to_datetime(df.Date)
In [28]: df.Time = pd.to_timedelta(df.Time)
In [29]: df
Out[29]:
called from called to Date Time Duration in (sec)
0 7722078014 7722012013 2017-07-01 10:00:00 303
1 7722078014 7722052018 2017-07-01 10:21:00 502
2 7722078014 7450120521 2017-07-01 10:23:00 56
In [30]: df.dtypes
Out[30]:
called from int64
called to int64
Date datetime64[ns]
Time timedelta64[ns]
Duration in (sec) int64
dtype: object
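If a plain time-of-day column is still needed after Option 2 (e.g., for export to Orange3), one possible sketch is to recombine the columns and extract datetime.time objects:
full = df['Date'] + df['Time']   # datetime64 + timedelta64 -> datetime64
df['TimeOfDay'] = full.dt.time   # python datetime.time objects (object dtype)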
