I am trying to calculate the # of days between failures. I'd like to know on each day in the series the # of days passed since the last failure where failure = 1. There may be anywhere from 1 to 1500 devices.
For example, I'd like my dataframe to look like this (please pull the data from the URL in the second code block; this is just a short example of a larger dataframe):
date device failure elapsed
10/01/2015 S1F0KYCR 1 0
10/07/2015 S1F0KYCR 1 7
10/08/2015 S1F0KYCR 0 0
10/09/2015 S1F0KYCR 0 0
10/17/2015 S1F0KYCR 1 11
10/31/2015 S1F0KYCR 0 0
10/01/2015 S8KLM011 1 0
10/02/2015 S8KLM011 1 2
10/07/2015 S8KLM011 0 0
10/09/2015 S8KLM011 0 0
10/11/2015 S8KLM011 0 0
10/21/2015 S8KLM011 1 20
Sample Code:
Edit: Please pull the actual data from the code block below. The above sample data is a short example. Thanks.
url = "https://raw.githubusercontent.com/dsdaveh/device-failure-analysis/master/device_failure.csv"
df = pd.read_csv(url, encoding = "ISO-8859-1")
df = df.sort_values(by = ['date', 'device'], ascending = True) #Sort by date and device
df['date'] = pd.to_datetime(df['date'],format='%Y/%m/%d') #format date to datetime
This is where I am running into obstacles: the new column should contain the # of days since the last failure, where failure = 1.
test['elapsed'] = 0
for i in test.index[1:]:
    if not test['failure'][i]:
        test['elapsed'][i] = test['elapsed'][i-1] + 1
I have also tried
fails = df[df.failure==1]
fails.Dates = fails.index  # need this because .diff() won't work on the index
fails.Elapsed = fails.Dates.diff()
Using pandas.DataFrame.groupby with diff and numpy.where:
import pandas as pd
import numpy as np
df['date'] = pd.to_datetime(df['date'])
s = df.groupby(['device', 'failure'])['date'].diff().dt.days.add(1)
s = s.fillna(0)
df['elapsed'] = np.where(df['failure'], s, 0)
Output:
date device failure elapsed
0 2015-10-01 S1F0KYCR 1 0.0
1 2015-10-07 S1F0KYCR 1 7.0
2 2015-10-08 S1F0KYCR 0 0.0
3 2015-10-09 S1F0KYCR 0 0.0
4 2015-10-17 S1F0KYCR 1 11.0
5 2015-10-31 S1F0KYCR 0 0.0
6 2015-10-01 S8KLM011 1 0.0
7 2015-10-02 S8KLM011 1 2.0
8 2015-10-07 S8KLM011 0 0.0
9 2015-10-09 S8KLM011 0 0.0
10 2015-10-11 S8KLM011 0 0.0
11 2015-10-21 S8KLM011 1 20.0
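For reference, a minimal self-contained sketch (with the example data typed in from the question) that should reproduce the output above; column names are assumed lowercase, matching the question's read_csv code:
import numpy as np
import pandas as pd

# Rebuild the short example from the question
df = pd.DataFrame({
    'date': ['10/01/2015', '10/07/2015', '10/08/2015', '10/09/2015',
             '10/17/2015', '10/31/2015', '10/01/2015', '10/02/2015',
             '10/07/2015', '10/09/2015', '10/11/2015', '10/21/2015'],
    'device': ['S1F0KYCR'] * 6 + ['S8KLM011'] * 6,
    'failure': [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1],
})
df['date'] = pd.to_datetime(df['date'], format='%m/%d/%Y')

# Days between consecutive failure rows within each device, counted inclusively (hence add(1))
s = df.groupby(['device', 'failure'])['date'].diff().dt.days.add(1)
df['elapsed'] = np.where(df['failure'], s.fillna(0), 0)
print(df)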
Update:
Found out that the actual data linked in the OP contains no device with more than one failure, making the final result all zeros (i.e. a second failure never happens, so there is nothing to compute for elapsed). Using the OP's original snippet:
import pandas as pd
url = "http://aws-proserve-data-science.s3.amazonaws.com/device_failure.csv"
df = pd.read_csv(url, encoding = "ISO-8859-1")
df = df.sort_values(by = ['date', 'device'], ascending = True)
df['date'] = pd.to_datetime(df['date'],format='%Y/%m/%d')
Find if any device has more than 1 failure:
df.groupby(['device'])['failure'].sum().gt(1).any()
# False
Which confirms that the all-zero df['elapsed'] is actually a correct answer :)
If you tweak your data a bit, it does yield elapsed just as expected.
df.loc[6879, 'device'] = 'S1F0RRB1'
# Making a second failure occurrence for device S1F0RRB1
s = df.groupby(['device', 'failure'])['date'].diff().dt.days.add(1)
s = s.fillna(0)
df['elapsed'] = np.where(df['failure'], s, 0)
df['elapsed'].value_counts()
# 0.0 124493
# 3.0 1
Here is one way
df['elapsed']=df[df.Failure.astype(bool)].groupby('Device').Date.diff().dt.days.add(1)
df.elapsed.fillna(0,inplace=True)
df
Out[225]:
Date Device Failure Elapsed elapsed
0 2015-10-01 S1F0KYCR 1 0 0.0
1 2015-10-07 S1F0KYCR 1 7 7.0
2 2015-10-08 S1F0KYCR 0 0 0.0
3 2015-10-09 S1F0KYCR 0 0 0.0
4 2015-10-17 S1F0KYCR 1 11 11.0
5 2015-10-31 S1F0KYCR 0 0 0.0
6 2015-10-01 S8KLM011 1 0 0.0
7 2015-10-02 S8KLM011 1 2 2.0
8 2015-10-07 S8KLM011 0 0 0.0
9 2015-10-09 S8KLM011 0 0 0.0
10 2015-10-11 S8KLM011 0 0 0.0
11 2015-10-21 S8KLM011 1 20 20.0
Related
I want to group nearby dates together, using a rolling window (?) of three week periods.
See example and attempt below:
import pandas as pd
d = {'id':[1, 1, 1, 1, 2, 3],
'datefield':['2021-01-01', '2021-01-15', '2021-01-30', '2021-02-05', '2020-02-10', '2020-02-20']}
df = pd.DataFrame(data=d)
df['datefield'] = pd.to_datetime(df['datefield'])
#   id  datefield
#0   1 2021-01-01
#1   1 2021-01-15
#2   1 2021-01-30
#3   1 2021-02-05
#4   2 2020-02-10
#5   3 2020-02-20
df['event'] = df.groupby(['id', pd.Grouper(key='datefield', freq='3W')]).ngroup()
# id datefield event
#0 1 2021-01-01 0
#1 1 2021-01-15 0
#2 1 2021-01-30 1 #Should be 0, since last id 1 event happened just 2 weeks ago
#3 1 2021-02-05 1 #Should be 0
#4 2 2020-02-10 2
#5 3 2020-02-20 3 #Correct, within 3 weeks of another but since the ids are not the same the event is different
You can compute a few intermediate columns to make the logic easy to follow.
df
id datefield
0 1 2021-01-01
1 1 2021-01-15
2 1 2021-01-30
3 1 2021-02-05
4 2 2020-02-10
5 2 2020-03-20
Calculate difference between dates in number of days
df['diff'] = df['datefield'].diff().dt.days
Get previous ID
df['prevId'] = df['id'].shift()
Decide whether to increment or not
import numpy as np

df['increment'] = np.where((df['diff'] > 21) | (df['prevId'] != df['id']), 1, 0)
Lastly, just get the cumulative sum
df['event'] = df['increment'].cumsum()
Output
id datefield diff prevId increment event
0 1 2021-01-01 NaN NaN 1 1
1 1 2021-01-15 14.0 1.0 0 1
2 1 2021-01-30 15.0 1.0 0 1
3 1 2021-02-05 6.0 1.0 0 1
4 2 2020-02-10 -361.0 1.0 1 2
5 2 2020-03-20 39.0 2.0 1 3
Let's try a different approach using a boolean series instead:
df['group'] = ((df['datefield'].diff()
                .fillna(pd.Timedelta(1))
                .gt(pd.Timedelta(weeks=3))) |
               (df['id'].ne(df['id'].shift()))).cumsum()
Output:
id datefield group
0 1 2021-01-01 1
1 1 2021-01-15 1
2 1 2021-01-30 1
3 1 2021-02-05 1
4 2 2020-02-10 2
5 2 2020-03-20 3
Is the difference from the previous row greater than 3 weeks:
print((df['datefield'].diff()
       .fillna(pd.Timedelta(1))
       .gt(pd.Timedelta(weeks=3))))
0 False
1 False
2 False
3 False
4 False
5 True
Name: datefield, dtype: bool
Or is the current id not equal to the previous id:
print((df['id'].ne(df['id'].shift())))
0 True
1 False
2 False
3 False
4 True
5 False
Name: id, dtype: bool
Then or (|) the two conditions together:
print((df['datefield'].diff()
       .fillna(pd.Timedelta(1))
       .gt(pd.Timedelta(weeks=3))) |
      (df['id'].ne(df['id'].shift())))
0 True
1 False
2 False
3 False
4 True
5 True
dtype: bool
Then use cumsum to increment everywhere there is a True value, which delimits the groups.
*Assumes the id and datefield columns are appropriately ordered.
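If the frame is not already ordered that way, a minimal pre-sort (assuming per-id, ascending-date order is what is wanted) would be:
# Sort so diffs are taken per id in chronological order (assumed requirement)
df = df.sort_values(['id', 'datefield']).reset_index(drop=True)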
It looks like you want the diff between consecutive rows to be three weeks or less, otherwise a new group is formed. You can do it like this, starting from initial time t0:
df = df.sort_values("datefield").reset_index(drop=True)
t0 = df.datefield.iloc[0]
df["delta_t"] = pd.TimedeltaIndex(df.datefield - t0)
df["group"] = (df.delta_t.dt.days.diff() > 21).cumsum()
Output:
id datefield delta_t group
0 2 2020-02-10 0 days 0
1 2 2020-03-20 39 days 1
2 1 2021-01-01 326 days 2
3 1 2021-01-15 340 days 2
4 1 2021-01-30 355 days 2
5 1 2021-02-05 361 days 2
Note that your original dataframe is not sorted properly.
I have a dataset with three inputs plus a date and a time column. The data was not collected at regular intervals. What I want first is to set the start time to 0 and convert the other times into minutes elapsed.
My code is:
import pandas as pd

data = pd.read_csv('data6.csv', sep=',')
data['date'] = pd.to_datetime(data['date'] + " " + data['time'], format='%d/%m/%Y %H:%M:%S')
lastday = data.loc[0, 'date']

def convert_time(x):
    global lastday
    if x.date() == lastday.date():
        tm = x - lastday
        return tm.total_seconds() / 60
    else:
        lastday = x
        return 0

data['time'] = data['date'].apply(convert_time)
Then I got results like the screenshot in the original post, but what I expected is:
I want the time to step by one minute from the starting time 0. If a column has no value at that minute, put 0 until values are available; when values do appear, put them at the corresponding minute. If it is a new day, restart the time at 0 again. In other words, this is grouping the data into one-minute bins.
Date        time in min   X1    X2   X3
10/3/2018   1             63    0    0
            2             0     0    0
            3             0     0    0
            ...           (if no values at that minute, put 0 in the
            ...            columns until values are available, then
            ...            put those column values)
            13            0     0    0
10/4/2018   0             120   30   60
            1             0     0    0
My CSV file is linked in the original post (the answer below reads it from a Google Sheets export URL). After trying the new code, my time column displays as shown in another screenshot in the original post.
Pandas has functions for this: resample on a datetime index. You have to supply an aggregation function in case your data has multiple values within one minute; the example below sums those values, and that is easy to change.
Please correct me if this is not what you want.
Code
# Read CSV
csv_url = 'https://docs.google.com/spreadsheets/d/1WWq1qhqi4bGzNir_svQV7VstBkGbocToipPCY83Cclc/gviz/tq?tqx=out:csv&sheet=1512153575'
data = pd.read_csv(csv_url)
data['date'] = pd.to_datetime(data['date'] + " " + data['time'], format='%d/%m/%Y %H:%M:%S')
# Resample to 1 minute (T is minute)
df = data.set_index('date') \
         .resample('1T') \
         .sum() \
         .fillna(0)
# Optional ugly one-liner to start index at 0, and 1 row per minute, restart at day start
df.index = ((df.index - pd.to_datetime(df.index.date)).total_seconds() / 60).astype(int)
Output
df.head()
x1 x2 x3 Unnamed: 5 Unnamed: 6 Unnamed: 7
date
2018-03-10 06:15:00 63 0 0 0.0 0.0 0.0
2018-03-10 06:16:00 0 0 0 0.0 0.0 0.0
2018-03-10 06:17:00 0 0 0 0.0 0.0 0.0
2018-03-10 06:18:00 0 0 0 0.0 0.0 0.0
2018-03-10 06:19:00 0 0 0 0.0 0.0 0.0
Output 2
With ugly-ass one-liner
x1 x2 x3 Unnamed: 5 Unnamed: 6 Unnamed: 7
date
0 63 0 0 0.0 0.0 0.0
1 0 0 0 0.0 0.0 0.0
2 0 0 0 0.0 0.0 0.0
3 0 0 0 0.0 0.0 0.0
4 0 0 0 0.0 0.0 0.0
5 0 0 0 0.0 0.0 0.0
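As noted above, the aggregation is easy to swap. For example, a sketch that averages instead of sums within each minute (assuming the value columns are numeric):
# Same resample, but take the mean of values falling in the same minute
df = data.set_index('date') \
         .resample('1T') \
         .mean() \
         .fillna(0)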
You could create a dataframe df2 containing a time column and the minute of the day, and then use:
csv_url = 'https://docs.google.com/spreadsheets/d/1WWq1qhqi4bGzNir_svQV7VstBkGbocToipPCY83Cclc/gviz/tq?tqx=out:csv&sheet=1512153575'
data = pd.read_csv(csv_url)
df = pd.merge(data,df2,how='outer',on='time')
df = df.fillna(0)
df2 looks like the picture in the original answer (not reproduced here); you can create it by script or in Excel.
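A hypothetical sketch of building such a df2, one row per minute of the day; the column names and the exact time-string format are assumptions, not taken from the original answer:
import pandas as pd

# One row per minute of the day, with a 'time' string to merge on
# and the corresponding minute-of-day number (names are assumed)
minutes = pd.date_range('00:00:00', '23:59:00', freq='1min')
df2 = pd.DataFrame({
    'time': minutes.strftime('%H:%M:%S'),
    'minute_of_day': minutes.hour * 60 + minutes.minute,
})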
I have an existing dataframe which looks like:
id start_date end_date
0 1 20170601 20210531
1 2 20181001 20220930
2 3 20150101 20190228
3 4 20171101 20211031
I am trying to add 85 columns to this dataframe, one for each month/year between 20120101 and 20190101 (looping from start_date to end_date): the column is 1 if that month falls within the row's start_date to end_date range, else 0.
I tried the following method:
from collections import OrderedDict
from datetime import datetime, timedelta

import pandas as pd

start, end = [datetime.strptime(_, "%Y%m%d") for _ in ['20120101', '20190201']]
global_list = list(OrderedDict(((start + timedelta(_)).strftime(r"%m/%y"), None) for _ in range((end - start).days)).keys())

def get_count(contract_start_date, contract_end_date):
    start, end = [datetime.strptime(_, "%Y%m%d") for _ in [contract_start_date, contract_end_date]]
    current_list = list(OrderedDict(((start + timedelta(_)).strftime(r"%m/%y"), None) for _ in range((end - start).days)).keys())
    temp_list = []
    for each in global_list:
        if each in current_list:
            temp_list.append(1)
        else:
            temp_list.append(0)
    return pd.Series(temp_list)

sample_df[global_list] = sample_df[['contract_start_date', 'contract_end_date']].apply(lambda x: get_count(*x), axis=1)
and the sample df looks like:
customer_id contract_start_date contract_end_date 01/12 02/12 03/12 04/12 05/12 06/12 07/12 ... 04/18 05/18 06/18 07/18 08/18 09/18 10/18 11/18 12/18 01/19
1 1 20181001 20220930 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 1 1 1 1
9 2 20160701 20200731 0 0 0 0 0 0 0 ... 1 1 1 1 1 1 1 1 1 1
3 3 20171101 20211031 0 0 0 0 0 0 0 ... 1 1 1 1 1 1 1 1 1 1
3 rows × 88 columns
It works fine for a small dataset, but for 160k rows it hadn't finished even after 3 hours. Can someone suggest a better way to do this?
I'm also facing problems when the dates overlap for the same customer.
First I'd cut off the dud dates, to normalize end_date (to ensure it's within the time range):
In [11]: df.end_date = df.end_date.where(df.end_date < '2019-02-01', pd.Timestamp('2019-01-31')) + pd.offsets.MonthBegin()
In [12]: df
Out[12]:
id start_date end_date
0 1 2017-06-01 2019-02-01
1 2 2018-10-01 2019-02-01
2 3 2015-01-01 2019-02-01
3 4 2017-11-01 2019-02-01
Note: you'll need to do the same trick for start_date if there are dates prior to 2012.
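A sketch of that mirror-image clamp (assuming start_date is already a datetime column and the dates are month-aligned, as above):
# Clamp start dates that fall before the window (mirrors the end_date trick)
df.start_date = df.start_date.where(df.start_date >= pd.Timestamp('2012-01-01'), pd.Timestamp('2012-01-01'))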
I'd create the resulting DataFrame from a date range for the columns, then fill it in: 1 at each row's start month, -1 at its end month, and forward-fill the 1s in between:
In [13]: m = pd.date_range('2012-01-01', '2019-02-01', freq='MS')
In [14]: res = pd.DataFrame(0., columns=m, index=df.index)
In [15]: res.update(pd.DataFrame(np.diag(np.ones(len(df))), df.index, df.start_date).groupby(axis=1, level=0).sum())
In [16]: res.update(-pd.DataFrame(np.diag(np.ones(len(df))), df.index, df.end_date).groupby(axis=1, level=0).sum())
The groupby sum is required if multiple rows start or end in the same month.
# -1 and NaN were really placeholders for zero
In [17]: res = res.replace(0, np.nan).ffill(axis=1).replace([np.nan, -1], 0)
In [18]: res
Out[18]:
2012-01-01 2012-02-01 2012-03-01 2012-04-01 2012-05-01 ... 2018-09-01 2018-10-01 2018-11-01 2018-12-01 2019-01-01
0 0.0 0.0 0.0 0.0 0.0 ... 1.0 1.0 1.0 1.0 1.0
1 0.0 0.0 0.0 0.0 0.0 ... 0.0 1.0 1.0 1.0 1.0
2 0.0 0.0 0.0 0.0 0.0 ... 1.0 1.0 1.0 1.0 1.0
3 0.0 0.0 0.0 0.0 0.0 ... 1.0 1.0 1.0 1.0 1.0
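If you then want the indicator columns back alongside the original rows, with "mm/yy" headers in the OP's style, one way (a sketch) is:
# Rename month columns to "mm/yy" and attach them back to the ids (a sketch)
res.columns = res.columns.strftime('%m/%y')
out = pd.concat([df[['id']], res.astype(int)], axis=1)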
I have a pandas data frame:
df12 = pd.DataFrame({'group_ids':[1,1,1,2,2,2],'dates':['2016-04-01','2016-04-20','2016-04-28','2016-04-05','2016-04-20','2016-04-29'],'event_today_in_group':[1,0,1,1,1,0]})
group_ids dates event_today_in_group
0 1 2016-04-01 1
1 1 2016-04-20 0
2 1 2016-04-28 1
3 2 2016-04-05 1
4 2 2016-04-20 1
5 2 2016-04-29 0
I would like to compute an additional column that contains, for each group_ids, the number of days since the last time event_today_in_group was 1.
group_ids dates event_today_in_group days_since_last_event
0 1 2016-04-01 1 0
1 1 2016-04-20 0 19
2 1 2016-04-28 1 27
3 2 2016-04-05 1 0
4 2 2016-04-20 1 15
5 2 2016-04-29 0 9
As mentioned in the comments, this will get you the non-cumulative difference between consecutive dates within each group:
df['days_since_last_event'] = df.groupby('group_ids')['dates'].diff().apply(lambda x: x.days)
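One caveat: the line above assumes dates is already a datetime column. With the string constructor from the question you would convert first, for example:
# The question builds df12 with string dates; diff() only yields Timedeltas on real datetimes
df = df12.copy()
df['dates'] = pd.to_datetime(df['dates'])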
In order to get a cumulative sum of this difference, based on whenever event_today_in_group changes, I propose using shift to get the value of the previous row, and then generating a cumulative sum, like so:
df['event_today_in_group'].shift().cumsum()
Output:
0 NaN
1 1.0
2 1.0
3 2.0
4 3.0
5 4.0
This gives us the second grouping value we need to get the cumulative sums. You could assign the above values to a new column, but if you're only using them for the calculation, then you can simply include them in the subsequent groupby operation like so:
df.loc[:, 'days_since_last_event'] = df.groupby(['group_ids', df['event_today_in_group'].shift().cumsum()])['days_since_last_event'].cumsum()
Result:
group_ids dates event_today_in_group days_since_last_event
0 1 2016-04-01 1 NaN
1 1 2016-04-20 0 19.0
2 1 2016-04-28 1 27.0
3 2 2016-04-05 1 NaN
4 2 2016-04-20 1 15.0
5 2 2016-04-29 0 9.0
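If the leading NaN in each group should read as 0, as in the desired output, a final tweak (an assumption about the intent) would be:
# First event in each group has no prior event; treat that as 0 days (assumed)
df['days_since_last_event'] = df['days_since_last_event'].fillna(0)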
Hi, I would like to implement a counter that counts the number of successive zero observations in a dataframe (across multiple columns), and resets it whenever a non-zero observation is found. I have used a for loop, but it is incredibly slow; I am sure there must be far more efficient ways. This is my code:
Here is a snapshot of df
df.head()
ACL ACT ADH ADR AFE AFH AFT
2013-02-05 NaN NaN NaN NaN NaN NaN NaN
2013-02-12 -0.136861 -0.020406 0.046150 0.000000 -0.005321 NaN 0.058195
2013-02-19 -0.006632 0.041665 0.007365 0.012738 0.040930 NaN -0.037818
2013-02-26 -0.023848 -0.023999 -0.030677 -0.003144 0.050604 NaN -0.047604
2013-03-05 0.009771 -0.024589 -0.021073 -0.039432 0.047315 NaN 0.068727
I first initialise an empty data frame with the same shape as df above:
df1 = pd.DataFrame(index=df.index, columns=df.columns)
df1 = df1.fillna(0)
Then I create my function which iterates over the rows, but this only deals with one column at a time
def zero_obs(x=df, y=df1):
    for i in range(len(x)):
        if x[i] == 0:
            y[i] = y[i-1] + 1
        else:
            y[i] = 0
    return y
for col in df.columns:
    df1[col] = zero_obs(x=df[col], y=df1[col])
Really appreciate any help!!
The output I expect is as follows:
df1.tail()
BRN AXL TTO AGL ACL
2017-01-03 3 125 0 0 0
2017-01-10 0 126 0 0 0
2017-01-17 1 127 0 0 0
2017-01-24 0 128 0 0 0
2017-01-31 0 129 1 0 0
setup
Consider the dataframe df
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.zeros((10, 2), dtype=int),
    columns=list('AB')
)
df.loc[[0, 4, 8], 'A'] = 1
df.loc[6, 'B'] = 1
print(df)
A B
0 1 0
1 0 0
2 0 0
3 0 0
4 1 0
5 0 0
6 0 1
7 0 0
8 1 0
9 0 0
Option 1
pandas apply
def zero_obs(x):
    """`x` is assumed to be a `pd.Series`"""
    # running count of zeros seen so far
    csum = x.eq(0).cumsum()
    # that running count frozen at the most recent non-zero position
    cpos = csum.where(x.ne(0)).ffill().fillna(0)
    # difference = number of zeros since the last non-zero observation
    return csum.sub(cpos)
print(df.apply(zero_obs))
A B
0 0.0 1.0
1 1.0 2.0
2 2.0 3.0
3 3.0 4.0
4 0.0 5.0
5 1.0 6.0
6 2.0 0.0
7 3.0 1.0
8 0.0 2.0
9 1.0 3.0
Option 2
don't use apply
This function works just as well on df
zero_obs(df)
A B
0 0.0 1.0
1 1.0 2.0
2 2.0 3.0
3 3.0 4.0
4 0.0 5.0
5 1.0 6.0
6 2.0 0.0
7 3.0 1.0
8 0.0 2.0
9 1.0 3.0
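One caveat for the original frame, which contains NaNs: NaN compares as non-zero here, so a NaN row resets the streak (which matches the behaviour of the question's loop). If NaNs should instead count as zeros, a hedged sketch would be:
# Treat missing observations as zeros before counting streaks (an assumption)
df1 = zero_obs(df.fillna(0)).astype(int)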