I have the following time series data of temperature readings:
DT Temperature
01/01/2019 0:00 41
01/01/2019 1:00 42
01/01/2019 2:00 44
......
01/01/2019 23:00 41
01/02/2019 0:00 44
I am trying to write a function that counts the hourly changes in temperature for a given day. Any change greater than 3 should increment a quickChange counter. Something like this:
def countChange(day):
    for dt in day:
        if dt+1 - dt > 3: quickChange = quickChange + 1
I want to call the function for a single day, e.g. countChange(df.loc['2018-01-01']).
Use Series.diff, compare the result with 3, and count the True values with sum:
import numpy as np
import pandas as pd

np.random.seed(2019)
rng = (pd.date_range('2018-01-01', periods=10, freq='H').tolist() +
       pd.date_range('2018-01-02', periods=10, freq='H').tolist())
df = pd.DataFrame({'Temperature': np.random.randint(100, size=20)}, index=rng)
print (df)
Temperature
2018-01-01 00:00:00 72
2018-01-01 01:00:00 31
2018-01-01 02:00:00 37
2018-01-01 03:00:00 88
2018-01-01 04:00:00 62
2018-01-01 05:00:00 24
2018-01-01 06:00:00 29
2018-01-01 07:00:00 15
2018-01-01 08:00:00 12
2018-01-01 09:00:00 16
2018-01-02 00:00:00 48
2018-01-02 01:00:00 71
2018-01-02 02:00:00 83
2018-01-02 03:00:00 12
2018-01-02 04:00:00 80
2018-01-02 05:00:00 50
2018-01-02 06:00:00 95
2018-01-02 07:00:00 5
2018-01-02 08:00:00 24
2018-01-02 09:00:00 28
# if DT is a column, convert it to datetime and set it as the index first
df["DT"] = pd.to_datetime(df["DT"])
df = df.set_index("DT")

def countChange(day):
    return (day['Temperature'].diff() > 3).sum()
print (countChange(df.loc['2018-01-01']))
4
print (countChange(df.loc['2018-01-02']))
9
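If you need the count for every day at once, a minimal sketch (the per_day name is mine, assuming the DatetimeIndex above) is to group by the normalized date and apply the same diff-and-sum per day, so the first hour of a day is never compared against the previous day:

# count quick changes (diff > 3) separately for each calendar day
per_day = df['Temperature'].groupby(df.index.normalize()).apply(lambda s: (s.diff() > 3).sum())
print(per_day)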
Try pandas.DataFrame.diff:
import pandas as pd

df = pd.DataFrame({'dt': ["01/01/2019 0:00", "01/01/2019 1:00", "01/01/2019 2:00",
                          "01/01/2019 23:00", "01/02/2019 0:00"],
                   'Temperature': [41, 42, 44, 41, 44]})
df["dt"] = pd.to_datetime(df["dt"])  # parse strings so sorting and .loc date slicing work
df = df.sort_values("dt")
df = df.set_index("dt")

def countChange(df):
    df["diff"] = df["Temperature"].diff()
    return df.loc[df["diff"] > 3, "diff"].count()

quickchange = countChange(df.loc["2019-01-01"])
I have a large time series (more than 5 million rows) whose values fluctuate randomly between 2 and 10:
A small section of the time series:
I want to identify a certain pattern in this time series:
when the value of pct_change is >= a threshold "T", raise a "reading begins" flag
after "reading begins" has been raised, keep a "reading continues" flag raised while pct_change is != 0 (whether >= T or < T), until a zero is encountered
when a zero is encountered, raise a "reading stops" flag; if the value of pct_change is < T after that, raise a "not reading" flag
I want to write a function that can tell me how many times this happened and for what duration.
If we take a threshold T of 4 and use pct_change from the example data screenshot, then the output that I want is:
The main goal behind this is to find how many times this cycle repeats for different thresholds.
To generate sample data:
import pandas as pd

a = [2, 3, 4, 2, 0, 14, 5, 6, 3, 2, 0, 4, 5, 7, 8, 10, 4, 0, 5, 6, 7, 10, 7, 6, 4, 2, 0, 1, 2, 5, 6]
idx = pd.date_range("2018-01-01", periods=len(a), freq="H")
ts = pd.Series(a, index=idx)
dd = pd.DataFrame()
dd['pct_change'] = ts
dd.head()
Can you please suggest an efficient way of doing this?
The output that I want with threshold T = 4:
First, keep only interesting data (>= T | == 0):
threshold = 4
df = dd.loc[dd["pct_change"].ge(threshold) | dd["pct_change"].eq(0)]
>>> df
pct_change
2018-01-01 02:00:00 4 # group 0, end=2018-01-01 04:00:00
2018-01-01 04:00:00 0
2018-01-01 05:00:00 14 # group 1, end=2018-01-01 10:00:00
2018-01-01 06:00:00 5
2018-01-01 07:00:00 6
2018-01-01 10:00:00 0
2018-01-01 11:00:00 4 # group 2, end=2018-01-01 17:00:00
2018-01-01 12:00:00 5
2018-01-01 13:00:00 7
2018-01-01 14:00:00 8
2018-01-01 15:00:00 10
2018-01-01 16:00:00 4
2018-01-01 17:00:00 0
2018-01-01 18:00:00 5 # group 3, end=2018-01-02 02:00:00
2018-01-01 19:00:00 6
2018-01-01 20:00:00 7
2018-01-01 21:00:00 10
2018-01-01 22:00:00 7
2018-01-01 23:00:00 6
2018-01-02 00:00:00 4
2018-01-02 02:00:00 0
2018-01-02 05:00:00 5 # group 4, end=2018-01-02 06:00:00
2018-01-02 06:00:00 6
Then, create the desired groups:
groups = df["pct_change"].eq(0).shift(fill_value=0).cumsum()
>>> groups
2018-01-01 02:00:00 0 # group 0
2018-01-01 04:00:00 0
2018-01-01 05:00:00 1 # group 1
2018-01-01 06:00:00 1
2018-01-01 07:00:00 1
2018-01-01 10:00:00 1
2018-01-01 11:00:00 2 # group 2
2018-01-01 12:00:00 2
2018-01-01 13:00:00 2
2018-01-01 14:00:00 2
2018-01-01 15:00:00 2
2018-01-01 16:00:00 2
2018-01-01 17:00:00 2
2018-01-01 18:00:00 3 # group 3
2018-01-01 19:00:00 3
2018-01-01 20:00:00 3
2018-01-01 21:00:00 3
2018-01-01 22:00:00 3
2018-01-01 23:00:00 3
2018-01-02 00:00:00 3
2018-01-02 02:00:00 3
2018-01-02 05:00:00 4 # group 4
2018-01-02 06:00:00 4
Name: pct_change, dtype: object
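The trick works because each zero marks the end of a group: shifting the boolean mask by one row makes the counter increment on the row after the zero. A tiny sketch of the same idiom on a toy series:

s = pd.Series([4, 0, 5, 6, 0, 7])
grp = s.eq(0).shift(fill_value=0).cumsum()
# grp -> 0, 0, 1, 1, 1, 2: each zero closes its own group,
# and the group counter bumps on the following row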
Finally, use groups to output result:
out = pd.DataFrame(df.groupby(groups) \
.apply(lambda x: (x.index[0], x.index[-1])) \
.tolist(), columns=["StartTime", "EndTime"])
>>> out
StartTime EndTime
0 2018-01-01 02:00:00 2018-01-01 04:00:00 # group 0
1 2018-01-01 05:00:00 2018-01-01 10:00:00 # group 1
2 2018-01-01 11:00:00 2018-01-01 17:00:00 # group 2
3 2018-01-01 18:00:00 2018-01-02 02:00:00 # group 3
4 2018-01-02 05:00:00 2018-01-02 06:00:00 # group 4
Bonus
There are some cases where you have to remove groups:
the first pct_change value is 0
two or more consecutive pct_change values are 0
To remove them:
out = out[~out["StartTime"].eq(out["EndTime"])]
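To answer the "how many times and for what duration" part of the question, a minimal follow-up sketch using the out frame built above (the Duration column name is mine):

out["Duration"] = out["EndTime"] - out["StartTime"]   # per-cycle duration
print(len(out))   # how many times the pattern occurred
print(out)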
I have a dataset with an input column plus date and time columns. I want to convert the time to 00:00:00 for rows containing a specific value in the input column, and display the other times as they are. I wrote the code for that. Then I wanted to select only those 00:00:00 rows, so I wrote code for that too.
Here is my code:
import numpy as np
import pandas as pd

data['time_diff'] = pd.to_datetime(data['date'] + " " + data['time'],
                                   format='%d/%m/%Y %H:%M:%S', dayfirst=True)
data['duration'] = np.where(data['X3'].eq(5), np.timedelta64(0), pd.to_timedelta(data['time']))
print(data['duration'].dtype)

def f(x):
    ts = x.total_seconds()
    hours, remainder = divmod(ts, 3600)
    minutes, seconds = divmod(remainder, 60)
    return '{:02d}:{:02d}:{:02d}'.format(int(hours), int(minutes), int(seconds))

data['duration'] = data['duration'].apply(f)
match_time = "00:00:00"
T = data.loc[data['duration'] == match_time, 'duration']
Then I got the output:
Then I want to add 1 to 6 hours to each of those times. I wrote the code for it, but it just gave me 0 values without separating them.
My code:
def time(y):
    S = []
    row = 0
    for row in range(len(T)):
        y = "00:00:00"
        while row > 0:
            S = np.array(y + np.timedelta(hours=i) for i in range(6))
            row += 1
            break
        else:
            continue
        # break
    return

A = T.apply(time)
print(A)
Then this output came:
But what I expected is:
T          add timedelta 1 hr up to 6 hrs (expected output)
00:00:00   01:00:00
           02:00:00
           03:00:00
           04:00:00
           05:00:00
           06:00:00
00:00:00   01:00:00
           02:00:00
           03:00:00
           04:00:00
           05:00:00
           06:00:00
00:00:00   01:00:00
           02:00:00
           03:00:00
           04:00:00
           05:00:00
           06:00:00
Maybe this is what you intended:
My test data frame:
import datetime as dt
import numpy as np
import pandas as pd

T = pd.DataFrame({"T": ["00:00:00" for i in range(3)]}, index=np.random.randint(0, 100, 3))
T
8 00:00:00
96 00:00:00
44 00:00:00
tims = [dt.time(i).strftime("%H:%M:%S") for i in range(1, 7)]
['01:00:00', '02:00:00', '03:00:00', '04:00:00', '05:00:00', '06:00:00']
dd = T.apply(lambda r: pd.Series({"T": "00:00:00", "Hours": tims}), axis=1)
T Hours
8 00:00:00 [01:00:00, 02:00:00, 03:00:00, 04:00:00, 05:00...
96 00:00:00 [01:00:00, 02:00:00, 03:00:00, 04:00:00, 05:00...
44 00:00:00 [01:00:00, 02:00:00, 03:00:00, 04:00:00, 05:00...
dd.explode("Hours")
T Hours
8 00:00:00 01:00:00
8 00:00:00 02:00:00
8 00:00:00 03:00:00
8 00:00:00 04:00:00
8 00:00:00 05:00:00
8 00:00:00 06:00:00
44 00:00:00 01:00:00
44 00:00:00 02:00:00
44 00:00:00 03:00:00
44 00:00:00 04:00:00
44 00:00:00 05:00:00
44 00:00:00 06:00:00
96 00:00:00 01:00:00
96 00:00:00 02:00:00
96 00:00:00 03:00:00
96 00:00:00 04:00:00
96 00:00:00 05:00:00
96 00:00:00 06:00:00
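A slightly simpler variant of the same idea, hedged as a sketch: build the six offset strings directly with an f-string and explode, with no datetime objects involved (assumes T as above):

offsets = [f"{h:02d}:00:00" for h in range(1, 7)]   # '01:00:00' ... '06:00:00'
dd = T.assign(Hours=[offsets] * len(T)).explode("Hours")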
Let's say I have the DataFrame below. How would I get an extra column 'flag' with 1s where a day has an Age bigger than 90, but only if that happens on 2 consecutive days (48 h in this case)? The output should contain 1s on 2 or more days, depending on how many days the condition is met. The dataset is much bigger, but I put just a small portion here so you get the idea.
Age
Dates
2019-01-01 00:00:00 29
2019-01-01 01:00:00 56
2019-01-01 02:00:00 82
2019-01-01 03:00:00 13
2019-01-01 04:00:00 35
2019-01-01 05:00:00 53
2019-01-01 06:00:00 25
2019-01-01 07:00:00 23
2019-01-01 08:00:00 21
2019-01-01 09:00:00 12
2019-01-01 10:00:00 15
2019-01-01 11:00:00 9
2019-01-01 12:00:00 13
2019-01-01 13:00:00 87
2019-01-01 14:00:00 9
2019-01-01 15:00:00 63
2019-01-01 16:00:00 62
2019-01-01 17:00:00 52
2019-01-01 18:00:00 43
2019-01-01 19:00:00 77
2019-01-01 20:00:00 95
2019-01-01 21:00:00 79
2019-01-01 22:00:00 77
2019-01-01 23:00:00 5
2019-01-02 00:00:00 78
2019-01-02 01:00:00 41
2019-01-02 02:00:00 10
2019-01-02 03:00:00 10
2019-01-02 04:00:00 88
2019-01-02 05:00:00 19
This would be the desired output:
Dates Age flag
0 2019-01-01 00:00:00 29 1
1 2019-01-01 01:00:00 56 1
2 2019-01-01 02:00:00 82 1
3 2019-01-01 03:00:00 13 1
4 2019-01-01 04:00:00 35 1
5 2019-01-01 05:00:00 53 1
6 2019-01-01 06:00:00 25 1
7 2019-01-01 07:00:00 23 1
8 2019-01-01 08:00:00 21 1
9 2019-01-01 09:00:00 12 1
10 2019-01-01 10:00:00 15 1
11 2019-01-01 11:00:00 9 1
12 2019-01-01 12:00:00 13 1
13 2019-01-01 13:00:00 87 1
14 2019-01-01 14:00:00 9 1
15 2019-01-01 15:00:00 63 1
16 2019-01-01 16:00:00 62 1
17 2019-01-01 17:00:00 52 1
18 2019-01-01 18:00:00 43 1
19 2019-01-01 19:00:00 77 1
20 2019-01-01 20:00:00 95 1
21 2019-01-01 21:00:00 79 1
22 2019-01-01 22:00:00 77 1
23 2019-01-01 23:00:00 5 1
24 2019-01-02 00:00:00 78 0
25 2019-01-02 01:00:00 41 0
26 2019-01-02 02:00:00 10 0
27 2019-01-02 03:00:00 10 0
28 2019-01-02 04:00:00 88 0
29 2019-01-02 05:00:00 19 0
The dates is the index of the dataframe and is incremented by 1h.
thanks
You can first compare the column with Series.gt, then group by DatetimeIndex.date and check whether there is at least one True per group with GroupBy.transform and GroupBy.any, and last cast the mask to integers to map True/False to 1/0; then combine it with the previous answer:
import pandas as pd

df = pd.DataFrame({'Age': 10}, index=pd.date_range('2019-01-01', freq='5H', periods=24))
# for a test with a 1H timestamp use
# df = pd.DataFrame({'Age': 10}, index=pd.date_range('2019-01-01', freq='H', periods=24 * 5))
df.loc[pd.Timestamp('2019-01-02 01:00:00'), 'Age'] = 95
df.loc[pd.Timestamp('2019-01-03 02:00:00'), 'Age'] = 95
df.loc[pd.Timestamp('2019-01-05 19:00:00'), 'Age'] = 95
#print (df)

# for a test of 48 consecutive values change to N = 48
N = 10
s = df['Age'].gt(90)                             # True where Age > 90
s1 = s.groupby(df.index.date).transform('any')   # True for all rows of any day containing Age > 90
g1 = s1.ne(s1.shift()).cumsum()                  # label consecutive runs of equal values
df['flag'] = (s.groupby(g1).transform('size').ge(N) & s1).astype(int)  # keep only runs of at least N rows
print (df)
Age flag
2019-01-01 00:00:00 10 0
2019-01-01 05:00:00 10 0
2019-01-01 10:00:00 10 0
2019-01-01 15:00:00 10 0
2019-01-01 20:00:00 10 0
2019-01-02 01:00:00 95 1
2019-01-02 06:00:00 10 1
2019-01-02 11:00:00 10 1
2019-01-02 16:00:00 10 1
2019-01-02 21:00:00 10 1
2019-01-03 02:00:00 95 1
2019-01-03 07:00:00 10 1
2019-01-03 12:00:00 10 1
2019-01-03 17:00:00 10 1
2019-01-03 22:00:00 10 1
2019-01-04 03:00:00 10 0
2019-01-04 08:00:00 10 0
2019-01-04 13:00:00 10 0
2019-01-04 18:00:00 10 0
2019-01-04 23:00:00 10 0
2019-01-05 04:00:00 10 0
2019-01-05 09:00:00 10 0
2019-01-05 14:00:00 10 0
2019-01-05 19:00:00 95 0
Apparently, this could be a solution to the first version of the question: how to add a column whose row values are 1 if at least one of the rows with the same date (y-m-d) has an Age value greater than 90.
import pandas as pd
df = pd.DataFrame({
'Dates':['2019-01-01 00:00:00',
'2019-01-01 01:00:00',
'2019-01-01 02:00:00',
'2019-01-02 00:00:00',
'2019-01-02 01:00:00',
'2019-01-03 02:00:00',
'2019-01-03 03:00:00',],
'Age':[29, 56, 92, 13, 1, 2, 93],})
df.set_index('Dates', inplace=True)
df.index = pd.to_datetime(df.index)
df['flag'] = pd.DatetimeIndex(df.index).day
df['flag'] = df.flag.isin(df['flag'][df['Age']>90]).astype(int)
It returns:
Age flag
Dates
2019-01-01 00:00:00 29 1
2019-01-01 01:00:00 56 1
2019-01-01 02:00:00 92 1
2019-01-02 00:00:00 13 0
2019-01-02 01:00:00 1 0
2019-01-03 02:00:00 2 1
2019-01-03 03:00:00 93 1
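One caveat, as a hedged note: DatetimeIndex.day is the day of the month, so rows from e.g. 2019-01-03 and 2019-02-03 would share a flag. A sketch using the full normalized date instead, on the same frame:

dates = df.index.normalize()                         # full calendar date, not day-of-month
df['flag'] = dates.isin(dates[df['Age'] > 90]).astype(int)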
I'm looking to filter a large dataframe (millions of rows) based on another much smaller dataframe that has only three columns: ID, Start, End.
The following is what I put together (which works), but it seems like a groupby() or np.where might be faster.
SETUP:
import pandas as pd
import io
csv = io.StringIO(u'''
time id num
2018-01-01 00:00:00 A 1
2018-01-01 01:00:00 A 2
2018-01-01 02:00:00 A 3
2018-01-01 03:00:00 A 4
2018-01-01 04:00:00 A 5
2018-01-01 05:00:00 A 6
2018-01-01 06:00:00 A 6
2018-01-03 07:00:00 B 10
2018-01-03 08:00:00 B 11
2018-01-03 09:00:00 B 12
2018-01-03 10:00:00 B 13
2018-01-03 11:00:00 B 14
2018-01-03 12:00:00 B 15
2018-01-03 13:00:00 B 16
2018-05-29 23:00:00 C 111
2018-05-30 00:00:00 C 122
2018-05-30 01:00:00 C 133
2018-05-30 02:00:00 C 144
2018-05-30 03:00:00 C 155
''')
df = pd.read_csv(csv, sep = '\t')
df['time'] = pd.to_datetime(df['time'])
csv_filter = io.StringIO(u'''
id start end
A 2018-01-01 01:00:00 2018-01-01 02:00:00
B 2018-01-03 09:00:00 2018-01-03 12:00:00
C 2018-05-30 00:00:00 2018-05-30 08:00:00
''')
df_filter = pd.read_csv(csv_filter, sep = '\t')
df_filter['start'] = pd.to_datetime(df_filter['start'])
df_filter['end'] = pd.to_datetime(df_filter['end'])
WORKING CODE
df = pd.merge_asof(df, df_filter, left_on = 'time', right_on = 'start', by = 'id').dropna(subset = ['start']).drop(['start','end'], axis = 1)
df = pd.merge_asof(df, df_filter, left_on = 'time', right_on = 'end', by = 'id', direction = 'forward').dropna(subset = ['end']).drop(['start','end'], axis = 1)
OUTPUT
time id num
0 2018-01-01 01:00:00 A 2
1 2018-01-01 02:00:00 A 3
6 2018-01-03 09:00:00 B 12
7 2018-01-03 10:00:00 B 13
8 2018-01-03 11:00:00 B 14
9 2018-01-03 12:00:00 B 15
11 2018-05-30 00:00:00 C 122
12 2018-05-30 01:00:00 C 133
13 2018-05-30 02:00:00 C 144
14 2018-05-30 03:00:00 C 155
Any thoughts on a more elegant / faster solution?
Why not merge before filtering? Note that this will eat up your memory when the data set is very big.
newdf = df.merge(df_filter)
newdf = newdf.loc[newdf.time.between(newdf.start, newdf.end), df.columns.tolist()]
newdf
Out[480]:
time id num
1 2018-01-01 01:00:00 A 2
2 2018-01-01 02:00:00 A 3
9 2018-01-03 09:00:00 B 12
10 2018-01-03 10:00:00 B 13
11 2018-01-03 11:00:00 B 14
12 2018-01-03 12:00:00 B 15
15 2018-05-30 00:00:00 C 122
16 2018-05-30 01:00:00 C 133
17 2018-05-30 02:00:00 C 144
18 2018-05-30 03:00:00 C 155
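A hedged alternative sketch that avoids materializing the merged frame: look up each row's interval bounds by id with map, then filter with between (assumes one start/end interval per id, as in df_filter above; the bounds/start/end names are mine):

bounds = df_filter.set_index('id')
start = df['id'].map(bounds['start'])   # per-row lower bound
end = df['id'].map(bounds['end'])       # per-row upper bound
newdf = df[df['time'].between(start, end)]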
I have a dataframe that looks like this:
I'm using python 3.6.5 and a datetime.time object for the index
print(sum_by_time)
Trips
Time
00:00:00 10
01:00:00 10
02:00:00 10
03:00:00 10
04:00:00 20
05:00:00 20
06:00:00 20
07:00:00 20
08:00:00 30
09:00:00 30
10:00:00 30
11:00:00 30
How can I group this dataframe by time interval to get something like this:
Trips
Time
00:00:00 - 03:00:00 40
04:00:00 - 07:00:00 80
08:00:00 - 11:00:00 120
I think you need to convert the index values to timedeltas with to_timedelta and then resample:
df.index = pd.to_timedelta(df.index.astype(str))
df = df.resample('4H').sum()
print (df)
Trips
00:00:00 40
04:00:00 80
08:00:00 120
EDIT:
For your desired format you need:
df['d'] = pd.to_datetime(df.index.astype(str))
df = df.groupby(pd.Grouper(freq='4H', key='d')).agg({'Trips':'sum', 'd':['first','last']})
df.columns = df.columns.map('_'.join)
df = df.set_index(df['d_first'].dt.strftime('%H:%M:%S') + ' - ' + df['d_last'].dt.strftime('%H:%M:%S'))[['Trips_sum']]
print (df)
Trips_sum
00:00:00 - 03:00:00 40
04:00:00 - 07:00:00 80
08:00:00 - 11:00:00 120
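A hedged alternative sketch, assuming the index really holds datetime.time objects as stated: bucket the hour by integer division and build the interval labels by hand (the bucket/out names are mine):

bucket = pd.Index([t.hour // 4 for t in df.index])    # 0 -> 00-03, 1 -> 04-07, 2 -> 08-11
out = df.groupby(bucket)['Trips'].sum().to_frame()
out.index = [f'{b*4:02d}:00:00 - {b*4 + 3:02d}:00:00' for b in out.index]
print(out)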