Python 3.7, pandas 0.25
I have a Pandas DataFrame with columns for startdate and enddate. I am looking for rows whose ranges overlap the range of my variable(s). Rather than verbosely composing a series of greater-than/less-than comparisons chained with ands/ors to filter out the rows I need, I would like to use some sort of interval "overlap". It appears Pandas has this functionality:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Interval.overlaps.html
The following test works:
range1 = pd.Interval(pd.Timestamp('2017-01-01 00:00:00'), pd.Timestamp('2018-01-01 00:00:00'), closed='both')
range2 = pd.Interval(pd.Timestamp('2016-01-01 00:00:00'), pd.Timestamp('2017-01-01 00:00:00'), closed='both')
range1.overlaps(range2)
However, when I apply it to the dataframe columns, it does not work. I am not sure whether there is something wrong with my syntax or whether this simply cannot be applied to a dataframe. Here are some of the things I have tried (each raising a different error):
start_range = '2017-07-01 00:00:00'
end_current = '2019-07-01 00:00:00'
reporttest_range = pd.Interval(pd.Timestamp(start_range), pd.Timestamp(end_current), closed='both')
reporttest_filter = my_dataframe[my_dataframe['startdate']['enddate'].overlaps(reporttest_range)]
reporttest_filter = my_dataframe[my_dataframe['startdate','enddate'].overlaps(reporttest_range)]
reporttest_filter = my_dataframe[(my_dataframe['startdate','enddate']).overlaps(reporttest_range)]
reporttest_filter = my_dataframe.filter(['startdate','enddate']).overlaps(reporttest_range)
reporttest_filter = my_dataframe.filter['startdate','enddate'].overlaps(reporttest_range)
print(reporttest_filter)
Can someone please point me to an efficient way to accomplish this?
As requested, the dataframe output looks like this:
record startdate enddate
0 99 2017-07-01 2018-06-30
1 280 2018-08-01 2021-07-31
2 100 2017-07-01 2018-06-30
3 281 2017-07-01 2018-06-30
You need to create an IntervalIndex from df.startdate and df.enddate and use overlaps against reporttest_range. Your sample returns all True, so I added rows to cover a False case.
Sample df:
record startdate enddate
0 9931 2017-07-01 2018-06-30
1 28075 2018-08-01 2021-07-31
2 10042 2017-07-01 2018-06-30
3 28108 2017-07-01 2018-06-30
4 28109 2016-07-01 2016-12-30
5 28111 2017-07-02 2018-09-30
iix = pd.IntervalIndex.from_arrays(df.startdate, df.enddate, closed='both')
iix.overlaps(reporttest_range)
Out[400]: array([ True, True, True, True, False, True])
Use it to pick only the overlapping rows:
df[iix.overlaps(reporttest_range)]
Out[401]:
record startdate enddate
0 9931 2017-07-01 2018-06-30
1 28075 2018-08-01 2021-07-31
2 10042 2017-07-01 2018-06-30
3 28108 2017-07-01 2018-06-30
5 28111 2017-07-02 2018-09-30
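For completeness, the same filter can be written as a plain boolean mask without building an IntervalIndex (a sketch, assuming startdate and enddate are datetime64, using the standard overlap test for closed intervals: each range must start no later than the other ends):
# Two closed ranges [s1, e1] and [s2, e2] overlap iff s1 <= e2 and s2 <= e1.
mask = ((my_dataframe['startdate'] <= pd.Timestamp(end_current)) &
        (my_dataframe['enddate'] >= pd.Timestamp(start_range)))
reporttest_filter = my_dataframe[mask]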
I have the following dataframe:
timestamp mes
0 2019-01-01 18:15:55.700 1228
1 2019-01-01 18:35:56.872 1402
2 2019-01-01 18:35:56.872 1560
3 2019-01-01 19:04:25.700 1541
4 2019-01-01 19:54:23.150 8754
5 2019-01-02 18:01:00.025 4124
6 2019-01-02 18:17:56.125 9736
7 2019-01-02 18:58:59.799 1597
8 2019-01-02 20:10:15.896 5285
How can I select only the rows where timestamp is between a start_time and an end_time, for all the days in the dataframe? This is basically the role of .between_time(), but here the timestamp column can't be the index since there are repeated values.
Also, this is actually a chunk from pd.read_csv() and I would have to do this for several million rows; would it be faster if I used, for example, numpy datetime functionality? I guess I could create a time column from timestamp and build a mask on that, but I'm afraid this would be too slow.
EDIT:
I added more rows and this is the expected result, say for start_time=datetime.time(18), end_time=datetime.time(19):
timestamp mes
0 2019-01-01 18:15:55.700 1228
1 2019-01-01 18:35:56.872 1402
2 2019-01-01 18:35:56.872 1560
5 2019-01-02 18:01:00.025 4124
6 2019-01-02 18:17:56.125 9736
7 2019-01-02 18:58:59.799 1597
My code (it works, but it is slow):
df['time'] = df.timestamp.apply(lambda x: x.time())
mask = (df.time < end) & (df.time >= start)
selected = df.loc[mask]
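One likely faster route (a sketch, assuming timestamp is already datetime64): wrap the column in a DatetimeIndex and use its indexer_between_time method, which filters on time-of-day without requiring the column to be the index:
import datetime
# Duplicated timestamps are fine here; the index is only a view used for filtering.
idx = pd.DatetimeIndex(df['timestamp'])
# Integer positions of rows in [18:00, 19:00); include_end=False matches the
# half-open window implied by the expected result above.
locs = idx.indexer_between_time(datetime.time(18), datetime.time(19), include_end=False)
selected = df.iloc[locs]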
If you have your columns set to datetime:
start = df["timestamp"] >= "2019-01-01 18:15:55.700"
end = df["timestamp"] <= "2019-01-01 18:15:55.896"
between_two_dates = start & end
df.loc[between_two_dates]
Works for me. Just convert timestamp to datetime and make it the index:
df = pd.DataFrame({'timestamp': ['2019-01-01 18:15:55.700', '2019-01-01 18:17:55.700', '2019-01-01 18:19:55.896'],
                   'mes': [1228, 1402, 1560]})  # sample data
df['timestamp'] = pd.to_datetime(df['timestamp'])  # coerce timestamp to datetime
df.set_index('timestamp', inplace=True)  # set timestamp as the index
df.between_time('18:16', '20:15')  # select between times
Result
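A sketch of the output (from running the snippet; the 18:15:55.700 row falls before the 18:16 start and drops out):
                          mes
timestamp
2019-01-01 18:17:55.700  1402
2019-01-01 18:19:55.896  1560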
I have a dataframe like this, with two date columns and a quantity column:
start_date end_date qty
1 2018-01-01 2018-01-08 23
2 2018-01-08 2018-01-15 21
3 2018-01-15 2018-01-22 5
4 2018-01-22 2018-01-29 12
I have a second dataframe with just one column containing yearly holidays for a couple of years, like this:
holiday
1 2018-01-01
2 2018-01-27
3 2018-12-25
4 2018-12-26
I would like to go through the first dataframe row by row and assign a boolean value to a new column holidays: True if a date in the second dataframe falls between the date values of that row, False otherwise. The result would look like this:
start_date end_date qty holidays
1 2018-01-01 2018-01-08 23 True
2 2018-01-08 2018-01-15 21 False
3 2018-01-15 2018-01-22 5 False
4 2018-01-22 2018-01-29 12 True
When I try to do that with a for loop I get the following error:
ValueError: Can only compare identically-labeled Series objects
An answer would be appreciated.
If you want a fully-vectorized solution, consider using the underlying numpy arrays:
import numpy as np

def holiday_arr(start, end, holidays):
    # Reshape so broadcasting compares every (start, end) pair against every holiday.
    start = start.reshape((-1, 1))
    end = end.reshape((-1, 1))
    holidays = holidays.reshape((1, -1))
    result = np.any(
        (start <= holidays) & (holidays <= end),
        axis=1
    )
    return result
If you have your dataframes as above (calling them df1 and df2), you can obtain your desired result by running:
df1["contains_holiday"] = holiday_arr(
df1["start_date"].to_numpy(),
df1["end_date"].to_numpy(),
df2["holiday"].to_numpy()
)
df1 then looks like:
start_date end_date qty contains_holiday
1 2018-01-01 2018-01-08 23 True
2 2018-01-08 2018-01-15 21 False
3 2018-01-15 2018-01-22 5 False
4 2018-01-22 2018-01-29 12 True
Try:
def _is_holiday(row, df2):
    return ((df2['holiday'] >= row['start_date']) & (df2['holiday'] <= row['end_date'])).any()

df1.apply(lambda x: _is_holiday(x, df2), axis=1)
I'm not sure why you would want to go row by row; boolean comparisons would be much faster.
df['holiday'] = ((df2.holiday >= df.start_date) & (df2.holiday <= df.end_date))
Timing:
>>> 1000 loops, best of 3: 1.05 ms per loop
Quoting @hchw's solution (row by row):
def _is_holiday(row, df2):
    return ((df2['holiday'] >= row['start_date']) & (df2['holiday'] <= row['end_date'])).any()

df.apply(lambda x: _is_holiday(x, df2), axis=1)
>>> The slowest run took 4.89 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 4.46 ms per loop
Try IntervalIndex.contains with a list comprehension and np.sum:
iix = pd.IntervalIndex.from_arrays(df1.start_date, df1.end_date, closed='both')
df1['holidays'] = np.sum([iix.contains(x) for x in df2.holiday], axis=0) >= 1
Out[812]:
start_date end_date qty holidays
1 2018-01-01 2018-01-08 23 True
2 2018-01-08 2018-01-15 21 False
3 2018-01-15 2018-01-22 5 False
4 2018-01-22 2018-01-29 12 True
Note: I assume the start_date, end_date, and holiday columns are in datetime format. If they are not, convert them before running the command above, as follows:
df1.start_date = pd.to_datetime(df1.start_date)
df1.end_date = pd.to_datetime(df1.end_date)
df2.holiday = pd.to_datetime(df2.holiday)
I have a dataframe (imported from Excel) which looks like this:
Date Period
0 2017-03-02 2017-03-01 00:00:00
1 2017-03-02 2017-04-01 00:00:00
2 2017-03-02 2017-05-01 00:00:00
3 2017-03-02 2017-06-01 00:00:00
4 2017-03-02 2017-07-01 00:00:00
5 2017-03-02 2017-08-01 00:00:00
6 2017-03-02 2017-09-01 00:00:00
7 2017-03-02 2017-10-01 00:00:00
8 2017-03-02 2017-11-01 00:00:00
9 2017-03-02 2017-12-01 00:00:00
10 2017-03-02 Q217
11 2017-03-02 Q317
12 2017-03-02 Q417
13 2017-03-02 Q118
14 2017-03-02 Q218
15 2017-03-02 Q318
16 2017-03-02 Q418
17 2017-03-02 2018
I am trying to convert the whole 'Period' column into a consistent format. Some elements already look like datetimes, while others were imported as strings (e.g. Q217) or ints (e.g. 2018). What is the fastest way to convert everything to datetime? I was trying with some masking, like this:
mask = df['Period'].str.startswith('Q', na=False)
list_quarter = df[mask]['Period'].tolist()
quarter_convert = {'1': '31/03', '2': '30/06', '3': '30/09', '4': '31/12'}
counter = 0
for element in list_quarter:
    element = element[1:]
    quarter = element[0]
    year = element[1:]
    daymonth = ''.join(str(quarter_convert.get(word, word)) for word in quarter)
    final = daymonth + '/' + year
    list_quarter[counter] = final
    counter += 1
However, it fails when I try to substitute the modified elements back into the original column:
df['Period'] = np.where(mask, pd.Series(list_quarter), df['Period'])
Of course, I would need to do more or less the same for the 2018-style formats. However, I am sure I am missing something here, and there should be a much faster solution. Some fresh ideas from you would help! Thank you.
Reusing the code you show, let's first write a function that converts a Q-string to a datetime-parseable format (I adjusted the final format a little bit):
def convert_q_string(element):
    quarter_convert = {'1': '03-31', '2': '06-30', '3': '09-30', '4': '12-31'}
    element = element[1:]
    quarter = element[0]
    year = element[1:]
    daymonth = ''.join(str(quarter_convert.get(word, word)) for word in quarter)
    final = '20' + year + '-' + daymonth
    return final
We can now use this to first convert all 'Q'-strings, and then use pd.to_datetime to convert all elements to proper datetime values:
In [2]: s = pd.Series(['2017-03-01 00:00:00', 'Q217', '2018'])
In [3]: mask = s.str.startswith('Q')
In [4]: s[mask] = s[mask].map(convert_q_string)
In [5]: s
Out[5]:
0 2017-03-01 00:00:00
1 2017-06-30
2 2018
dtype: object
In [6]: pd.to_datetime(s)
Out[6]:
0 2017-03-01
1 2017-06-30
2 2018-01-01
dtype: datetime64[ns]
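To run this over the original 'Period' column, one sketch (assuming a cast of every element to str first, since the column mixes datetimes, strings, and ints):
per = df['Period'].astype(str)  # datetimes become '2017-03-01 00:00:00', ints become '2018'
mask = per.str.startswith('Q')
per[mask] = per[mask].map(convert_q_string)
df['Period'] = pd.to_datetime(per)  # one parse pass over a now-uniform string column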
I'm trying to collapse the dataframe below into rows containing continuous time periods by id. Continuous means that, within the same id, the start_date is less than, equal to, or at most one day greater than the end_date of the previous row (the data is already sorted by id, start_date and end_date). All rows that are continuous should be output as a single row, with start_date being the minimum start_date (i.e. the start_date of the first row in the continuous set) and end_date being the maximum end_date of the continuous set of rows.
Please see the desired output at the bottom.
The only way I can think of approaching this is by parsing the dataframe row by row, but I was wondering if there is a more pythonic and elegant way to do it.
id start_date end_date
1 2017-01-01 2017-01-15
1 2017-01-12 2017-01-24
1 2017-01-25 2017-02-03
1 2017-02-05 2017-02-14
1 2017-02-16 2017-02-28
2 2017-01-01 2017-01-19
2 2017-01-24 2017-02-07
2 2017-02-07 2017-02-20
Here is the code to generate the input dataframe:
import numpy as np
import pandas as pd
start_date = ['2017-01-01','2017-01-12','2017-01-25','2017-02-05','2017-02-16',
'2017-01-01','2017-01-24','2017-02-07']
end_date = ['2017-01-15','2017-01-24','2017-02-03','2017-02-14','2017-02-28',
'2017-01-19','2017-02-07','2017-02-20']
df = pd.DataFrame({'id': [1,1,1,1,1,2,2,2],
'start_date': pd.to_datetime(start_date, format='%Y-%m-%d'),
'end_date': pd.to_datetime(end_date, format='%Y-%m-%d')})
Desired output:
id start_date end_date
1 2017-01-01 2017-02-03
1 2017-02-05 2017-02-14
1 2017-02-16 2017-02-28
2 2017-01-01 2017-01-19
2 2017-01-24 2017-02-20
def f(grp):
    # define a list to collect valid start and end ranges
    d = []
    (
        # append a new entry if the start date is at least 2 days greater than the
        # end date of the previous row; otherwise update the last entry's end date
        # with the current row's end date.
        grp.reset_index(drop=True)
           .apply(lambda x: d.append({x.start_date: x.end_date})
                  if x.name == 0 or (x.start_date - pd.DateOffset(1)) > grp.iloc[x.name - 1].end_date
                  else d[-1].update({list(d[-1].keys())[0]: x.end_date}),
                  axis=1)
    )
    # reconstruct a df using only the valid start and end date pairs
    return pd.DataFrame([[list(e.keys())[0], list(e.values())[0]] for e in d],
                        columns=['start_date', 'end_date'])

df.groupby('id').apply(f).reset_index().drop('level_1', 1)
Out[467]:
id start_date end_date
0 1 2017-01-01 2017-02-03
1 1 2017-02-05 2017-02-14
2 1 2017-02-16 2017-02-28
3 2 2017-01-01 2017-01-19
4 2 2017-01-24 2017-02-20
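A vectorized alternative (a sketch, assuming the frame is already sorted by id, start_date and end_date as stated): flag each row that starts a new continuous block, turn the flags into block ids with a cumulative sum, then aggregate per (id, block).
# End date of the previous row within the same id (NaT on each id's first row).
prev_end = df.groupby('id')['end_date'].shift()
# A row starts a new block when the gap to the previous row exceeds one day.
# NaT comparisons are False, so each id's first row just inherits the running
# counter; grouping by id below keeps the ids separate anyway.
new_block = df['start_date'] > prev_end + pd.Timedelta(days=1)
block = new_block.cumsum().rename('block')
out = (df.groupby(['id', block])
         .agg({'start_date': 'min', 'end_date': 'max'})
         .reset_index()
         .drop(columns='block'))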
DataFrame is as follows:
ID1 ID2
0 00:00:01.002 00:00:01.002
1 00:00:01.001 00:00:01.006
2 00:00:01.004 00:00:01.011
3 00:00:00.998 00:00:01.012
4 NaT 00:00:01.000
...
20 NaT 00:00:00.998
What I am trying to do is create a boxplot for each ID. There may or may not be multiple IDs, depending on the dataset I provide. For right now I am trying to solve this for two datasets. If possible, I would like one solution with all the data on the same boxplot and another with the data displayed on its own boxplot per ID.
I am very new to pandas (trying to learn it...) and am just getting frustrated at how long this is taking to figure out... Here is my code:
deltaT = pd.DataFrame()  # create blank df
for x in range(0, len(totIDs)):
    ID = IDList[x]
    df = pd.DataFrame(data[ID]).T
    deltaT[ID] = pd.to_datetime(df[TIME_COL]).diff()
deltaT.boxplot()
Pretty simple, but I can't seem to get it to do what I want in plotting a boxplot for each ID. I should note that data is given to me by a homegrown file reader that takes several complex files and sorts them into the data dictionary, which is indexed by IDs.
I am running pandas version 0.14.0 and python version 2.7.7
I am not sure how this works in version 0.14.0, because the latest is 0.19.2; I recommend upgrading if possible:
#sample data
np.random.seed(180)
dates = pd.date_range('2017-01-01 10:11:20', periods=10, freq='T')
cols = ['ID1','ID2']
df = pd.DataFrame(np.random.choice(dates, size=(10,2)), columns=cols)
print (df)
ID1 ID2
0 2017-01-01 10:12:20 2017-01-01 10:17:20
1 2017-01-01 10:16:20 2017-01-01 10:20:20
2 2017-01-01 10:18:20 2017-01-01 10:17:20
3 2017-01-01 10:12:20 2017-01-01 10:16:20
4 2017-01-01 10:14:20 2017-01-01 10:18:20
5 2017-01-01 10:18:20 2017-01-01 10:19:20
6 2017-01-01 10:17:20 2017-01-01 10:12:20
7 2017-01-01 10:13:20 2017-01-01 10:17:20
8 2017-01-01 10:16:20 2017-01-01 10:11:20
9 2017-01-01 10:13:20 2017-01-01 10:19:20
Call DataFrame.diff and then convert the timedeltas to total seconds:
df = df.diff().apply(lambda x: x.dt.total_seconds())
print(df)
ID1 ID2
0 NaN NaN
1 240.0 180.0
2 120.0 -180.0
3 -360.0 -60.0
4 120.0 120.0
5 240.0 60.0
6 -60.0 -420.0
7 -240.0 300.0
8 180.0 -360.0
9 -180.0 480.0
Last, use DataFrame.plot.box:
df.plot.box()
You can also check the docs.
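For the per-ID variant the question also asked about, a sketch (subplots is a standard keyword passed through the plot accessor):
# One boxplot per column (i.e. per ID), each on its own axes.
df.plot.box(subplots=True, layout=(1, 2), figsize=(8, 4))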