Consolidate time periods data in pandas - python

How do I consolidate time periods data in Python pandas?
I want to manipulate data from
person start end
1 2001-1-8 2002-2-14
1 2002-2-14 2003-3-1
2 2001-1-5 2002-2-16
2 2002-2-17 2003-3-9
to
person start end
1 2001-1-8 2003-3-1
2 2001-1-5 2003-3-9
I first want to check whether the previous end and the new start are within 1 day of each other. If not, keep the original rows; if so, consolidate them.

import pandas as pd
from datetime import timedelta

df.sort_values(["person", "start", "end"], inplace=True)

def condense(df):
    # compare each start with the previous row's end
    df['prev_end'] = df["end"].shift(1)
    # a gap of more than 1 day starts a new group
    df['dont_condense'] = (abs(df['prev_end'] - df['start']) > timedelta(days=1))
    df["group"] = df['dont_condense'].fillna(False).cumsum()
    # keep the first start and the last end of each group
    return df.groupby("group").apply(lambda x: pd.Series({"person": x.iloc[0].person,
                                                          "start": x.iloc[0].start,
                                                          "end": x.iloc[-1].end}))

df.groupby("person").apply(condense).reset_index(drop=True)

You can use the following if each group contains only 2 rows, the allowed difference is 1 or 0 days, and all data are sorted:
print (df)
person start end
0 1 2001-1-8 2002-2-14
1 1 2002-2-14 2003-3-1
2 2 2001-1-5 2002-2-16
3 2 2002-2-17 2003-3-9
4 3 2001-1-2 2002-2-14
5 3 2002-2-17 2003-3-10
df.start = pd.to_datetime(df.start)
df.end = pd.to_datetime(df.end)
def f(x):
    # if only a 0-day difference should be merged, use:
    # a = (x['start'] - x['end'].shift()) == pd.Timedelta(days=0)
    a = (x['start'] - x['end'].shift()).isin([pd.Timedelta(days=1), pd.Timedelta(days=0)])
    if a.any():
        # pull the next row's end up; the last row gets NaT and is dropped below
        x.end = x['end'].shift(-1)
    return x
df1 = df.groupby('person').apply(f).dropna().reset_index(drop=True)
print (df1)
person start end
0 1 2001-01-08 2003-03-01
1 2 2001-01-05 2003-03-09
2 3 2001-01-02 2002-02-14
3 3 2002-02-17 2003-03-10
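If a person can have more than two consecutive rows, a grouping-based variant generalizes this (a sketch building on the cumsum idea from the first answer; gaps of at most 1 day are merged, per the question):

import pandas as pd

df = df.sort_values(['person', 'start', 'end'])
# start a new block whenever the gap to the person's previous end exceeds 1 day
gap = (df['start'] - df.groupby('person')['end'].shift()) > pd.Timedelta(days=1)
out = (df.groupby(['person', gap.cumsum().rename('block')])
         .agg({'start': 'first', 'end': 'last'})
         .reset_index()
         .drop(columns='block'))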

Related

Sort column names using wildcard using pandas

I have a big dataframe with more than 100 columns. I am sharing a miniature version of my real dataframe below
ID rev_Q1 rev_Q5 rev_Q4 rev_Q3 rev_Q2 tx_Q3 tx_Q5 tx_Q2 tx_Q1 tx_Q4
1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1
I would like to do the below
a) sort the column names based on Quarters (ex:Q1,Q2,Q3,Q4,Q5..Q100..Q1000) for each column pattern
b) By column pattern, I mean the keyword that is before underscore which is rev and tx.
So, I tried the below but it doesn't work and it also shifts the ID column to the back
df = df.reindex(sorted(df.columns), axis=1)
I expect my output to be like below. In my real data, there are more than 100 columns with more than 30 patterns like rev, tx etc. I want my ID column to stay in the first position as shown below.
ID rev_Q1 rev_Q2 rev_Q3 rev_Q4 rev_Q5 tx_Q1 tx_Q2 tx_Q3 tx_Q4 tx_Q5
1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1
For the provided example, df.sort_index(axis=1) should work fine.
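To see why a plain lexicographic sort stops working past Q9, a quick illustration:

cols = ['rev_Q1', 'rev_Q10', 'rev_Q2']
print(sorted(cols))  # ['rev_Q1', 'rev_Q10', 'rev_Q2'] -- 'Q10' sorts before 'Q2' as a string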
If you have Q values higher than 9, use natural sorting with natsort:
from natsort import natsort_key
out = df.sort_index(axis=1, key=natsort_key)
Or using manual sorting with np.lexsort:
import numpy as np

# split 'rev_Q1' into ('rev', '1'); 'ID' gets NaN in the second level
idx = df.columns.str.split('_Q', expand=True, n=1)
order = np.lexsort([idx.get_level_values(1).astype(float), idx.get_level_values(0)])
out = df.iloc[:, order]
Something like:
new_order = list(df.columns)
new_order.remove("ID")  # list.remove mutates in place and returns None
new_order = ['ID'] + sorted(new_order)
df = df[new_order]
We manually put "ID" in front and then sort what remains. Note that sorted() here is lexicographic, so it mis-orders Q10 and above; for those, use the natural sorting shown earlier.
The idea is to create a dataframe from the column names, with two columns: one for the variable and another for the quarter number. Finally, sort this dataframe by values, then extract the index.
idx = (df.columns.str.extract(r'(?P<V>[^_]+)_Q(?P<Q>\d+)')
         .fillna(0).astype({'Q': int})
         .sort_values(by=['V', 'Q']).index)
df = df.iloc[:, idx]
Output:
>>> df
ID rev_Q1 rev_Q2 rev_Q3 rev_Q4 rev_Q5 tx_Q1 tx_Q2 tx_Q3 tx_Q4 tx_Q5
0 1 1 1 1 1 1 1 1 1 1 1
1 2 1 1 1 1 1 1 1 1 1 1
>>> (df.columns.str.extract(r'(?P<V>[^_]+)_Q(?P<Q>\d+)')
...    .fillna(0).astype({'Q': int})
...    .sort_values(by=['V', 'Q']))
V Q
0 0 0
1 rev 1
5 rev 2
4 rev 3
3 rev 4
2 rev 5
9 tx 1
8 tx 2
6 tx 3
10 tx 4
7 tx 5

How to create N groups based on conditions in columns?

I need to create groups using two columns. For example, I took shop_id and week. Here is the df:
shop_id week
0 1 1
1 1 2
2 1 3
3 2 1
4 2 2
5 3 2
6 1 5
Imagine that each group is some promo which took place in each shop consecutively (week by week). So, my attempt was to sort, shift by 1 to get last_week, use booleans, and then iterate over them, incrementing each time the condition is not met:
test_df = pd.DataFrame({'shop_id':[1,1,1,2,2,3,1], 'week':[1,2,3,1,2,2,5]})
def createGroups(df, shop_id, week, group):
    '''Create groups with the same shop_id and consecutive weeks
    '''
    periods = []
    period = 0
    # sorting to create chronological order
    df = df.sort_values(by=[shop_id, week], ignore_index=True)
    last_week = df[week].shift(+1) == df[week] - 1
    last_shop = df[shop_id].shift(+1) == df[shop_id]
    # here I iterate over the booleans and increment the group by 1
    # if the shop is different or the last period is more than 1 week ago
    for p, s in zip(last_week, last_shop):
        if (p == True) and (s == True):
            periods.append(period)
        else:
            period += 1
            periods.append(period)
    df[group] = periods
    return df
createGroups(test_df, 'shop_id', 'week', 'promo')
And I get the grouping I need:
shop_id week promo
0 1 1 1
1 1 2 1
2 1 3 1
3 1 5 2
4 2 1 3
5 2 2 3
6 3 2 4
However, the function seems to be overkill. Any ideas on how to get the same result without a for-loop, using native pandas functions? I saw .ngroup() in the docs but have no idea how to apply it to my case. Even better would be to vectorise it somehow, but I don't know how to achieve this :(
First we want to identify the promotions (consecutive weeks), then use groupby().ngroup() to enumerate the promotions:
df = df.sort_values('shop_id')
s = df['week'].diff().ne(1).groupby(df['shop_id']).cumsum()
df['promo'] = df.groupby(['shop_id',s]).ngroup() + 1
Update: This is based on your solution:
df = df.sort_values(['shop_id','week'])
s = df[['shop_id', 'week']]
df['promo'] = (s['shop_id'].ne(s['shop_id'].shift()) |
               s['week'].diff().ne(1)).cumsum()
Output:
shop_id week promo
0 1 1 1
1 1 2 1
2 1 3 1
6 1 5 2
3 2 1 3
4 2 2 3
5 3 2 4

How to extract features using date range within a month?

I would like to extract features from a datetime column. For example, for a day of the month between 1 and 10, the output stored under a column called early_month would be 1, and 0 otherwise.
The following question I posted earlier gave me a solution using indexer_between_time in order to use time ranges.
How to extract features using time range?
I am using the following code to extract days of the month from date.
df["date_of_month"] = df["purchase_date"].dt.day
Thank you.
It's not clear from your question, but if you are trying to create a column that contains a 1 if the day is between 1 and 10, or 0 otherwise, it's very simple:
df['early_month'] = df['date_of_month'].apply(lambda x: 1 if x <= 10 else 0)
df['mid_month'] = df['date_of_month'].apply(lambda x: 1 if x >= 11 and x <= 20 else 0)
As a python beginner, if you would rather avoid lambda functions, you could achieve the same result by creating a function and then applying it like so:
def create_date_features(day, min_day, max_day):
    if day >= min_day and day <= max_day:
        return 1
    else:
        return 0
df['early_month'] = df['date_of_month'].apply(create_date_features, min_day=1, max_day=10)
df['mid_month'] = df['date_of_month'].apply(create_date_features, min_day=11, max_day=20)
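For larger frames, a vectorized equivalent avoids the Python-level apply (a sketch; Series.between is inclusive on both ends by default):

df['early_month'] = df['date_of_month'].between(1, 10).astype(int)
df['mid_month'] = df['date_of_month'].between(11, 20).astype(int)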
I believe you need to convert the boolean mask to integers - True values are processed like 1s:
rng = pd.date_range('2017-04-03', periods=10, freq='17D')
df = pd.DataFrame({'purchase_date': rng, 'a': range(10)})
m2 = df["purchase_date"].dt.day <= 10
df['early_month'] = m2.astype(int)
print (df)
purchase_date a early_month
0 2017-04-03 0 1
1 2017-04-20 1 0
2 2017-05-07 2 1
3 2017-05-24 3 0
4 2017-06-10 4 1
5 2017-06-27 5 0
6 2017-07-14 6 0
7 2017-07-31 7 0
8 2017-08-17 8 0
9 2017-09-03 9 1
Detail:
print (df["purchase_date"].dt.day <= 10)
0 True
1 False
2 True
3 False
4 True
5 False
6 False
7 False
8 False
9 True
Name: purchase_date, dtype: bool
Maybe you need this one:
import pandas as pd
from datetime import datetime

df = pd.DataFrame({'a': [1, 2, 3, 4, 5],
                   'time': ['11.07.2018', '12.07.2018', '13.07.2018', '14.07.2018', '15.07.2018']})
df.time = pd.to_datetime(df.time, format='%d.%m.%Y')
df[df.time > datetime(2018, 7, 13)]             # if you need to filter by date
df[df.time.dt.day > datetime(2018, 7, 13).day]  # if you need to filter by day of month

Iterating through DataFrame and keeping track of a certain sequence duration

I'd like to figure out how often a negative value occurs and how long that negative price persists.
example df
d = {'value': [1,2,-3,-4,-5,6,7,8,-9,-10], 'period':[1,2,3,4,5,6,7,8,9,10]}
df = pd.DataFrame(data=d)
I checked which rows had negative values. df['value'] < 0
I thought I could just iterate through each row, keep a counter for when a negative value occurs, and perhaps move those rows to another df, as I would like to save the beginning period and ending period.
What I'm currently trying
def count_negatives(df):
    df_negatives = pd.DataFrame(columns=['start', 'end', 'counter'])
    for index, row in df.iterrows():
        counter = 0
        df_negative_index = 0
        while (row['value'] < 0):
            # if it's the first one, add it to df as start?
            # grab the last one and add it as end
            # constantly overwrite the counter?
            counter += 1
            # add counter to df row
            df_negatives['counter'] = counter
    return df_negatives
Except that gives me an infinite loop, I think. If I replace the while with an if, I'm stuck coming up with a way to keep track of how long.
I think it is better to avoid loops:
# compare with <
a = df['value'].lt(0)
# running sum
b = a.cumsum()
# counter only for consecutive negative values
df['counter'] = b - b.mask(a).ffill().fillna(0).astype(int)
print (df)
value period counter
0 1 1 0
1 2 2 0
2 -3 3 1
3 -4 4 2
4 -5 5 3
5 6 6 0
6 7 7 0
7 8 8 0
8 -9 9 1
9 -10 10 2
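To unpack the b - b.mask(a).ffill() trick on the sample data (intermediate values shown as comments):

a = df['value'].lt(0)   # [F, F, T, T, T, F, F, F, T, T]
b = a.cumsum()          # [0, 0, 1, 2, 3, 3, 3, 3, 4, 5]
# b.mask(a) keeps b only on non-negative rows: [0, 0, NaN, NaN, NaN, 3, 3, 3, NaN, NaN]
# ffill() carries forward the value from just before each negative run,
# so subtracting it from b restarts the count at 1 for every run: [0, 0, 1, 2, 3, 0, 0, 0, 1, 2]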
Or if you don't need to reset the counter:
a = df['value'].lt(0)
# keep the running count on negative rows, 0 elsewhere
df['counter'] = a.cumsum().where(a, 0)
print (df)
value period counter
0 1 1 0
1 2 2 0
2 -3 3 1
3 -4 4 2
4 -5 5 3
5 6 6 0
6 7 7 0
7 8 8 0
8 -9 9 4
9 -10 10 5
If you want the start and end periods:
# mask of negative values
a = df['value'].lt(0)
# inverted mask
b = (~a).cumsum()
# filter only negative rows
c = b[a].reset_index()
# aggregate the first and last value per group
df = (c.groupby('value')['index']
        .agg([('start', 'first'), ('end', 'last')])
        .reset_index(drop=True))
print (df)
start end
0 2 4
1 8 9
I would like to save the beginning period and ending period.
If this is your requirement, you can use itertools.groupby. Note also a period series is not required, as Pandas provides a natural integer index (beginning at 0) if not explicitly provided.
import pandas as pd
from itertools import groupby
from operator import itemgetter

d = {'value': [1, 2, -3, -4, -5, 6, 7, 8, -9, -10]}
df = pd.DataFrame(data=d)
ranges = []
# index minus enumeration position is constant within a run of consecutive indices
for k, g in groupby(enumerate(df['value'][df['value'] < 0].index), lambda x: x[0] - x[1]):
    group = list(map(itemgetter(1), g))
    ranges.append((group[0], group[-1]))
print(ranges)
[(2, 4), (8, 9)]
Then, to convert to a dataframe:
df = pd.DataFrame(ranges, columns=['start', 'end'])
print(df)
start end
0 2 4
1 8 9

Finding the streak of days in python

So I have a set of 50 dates; I have specified 7 here as an example:
df4 = pd.DataFrame({"CreatedDate": ['09-08-16 0:00','22-08-16 0:00','23-08-16 0:00','28-08-16 0:00','29-08-16 0:00','30-08-16 0:00','31-08-16 0:00']})
df4["CreatedDate"] = pd.to_datetime(df4.CreatedDate)
df4["DAY"] = df4.CreatedDate.dt.day
How do I find the continuous days which form a streak, in the ranges [1-3], [4-7], [8-15], [>=16]?
Streak Count
1-3 3 #(9),(22,23) are in range [1-3]
4-7 1 #(28,29,30,31) are in range [4-7]
8-15 0
>=16 0
Let's say the product (a pen) was launched 2 years back and we take the dataset for the last 10 months from today. What I want to find is whether people are buying that pen continuously: if they buy it for 1, 2 or 3 consecutive days, the count goes into [1-3]; if they buy it for 4, 5, 6 or 7 consecutive days, the count goes into [4-7]; and so on for the other ranges.
I don't know which condition to specify to match the criteria.
I believe you need:
df4 = pd.DataFrame({'CreatedDate':['09-08-16 0:00','22-08-16 0:00','23-08-16 0:00','28-08-16 0:00','29-08-16 0:00','30-08-16 0:00','31-08-16 0:00']})
df4["CreatedDate"] = pd.to_datetime(df4.CreatedDate)
df4 = df4.sort_values("CreatedDate")
count = df4.groupby((df4["CreatedDate"].diff().dt.days > 1).cumsum()).size()
print (count)
CreatedDate
0 2
1 4
2 1
dtype: int64
a = (pd.cut(count, bins=[0, 3, 7, 15, 31], labels=['1-3', '4-7', '8-15', '>=16'])
       .value_counts()
       .sort_index()
       .rename_axis('Streak')
       .reset_index(name='Count'))
print (a)
Streak Count
0 1-3 2
1 4-7 1
2 8-15 0
3 >=16 0
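A side note on the sample dates: without an explicit format, older pandas versions parse each element individually, so '09-08-16 0:00' can be read month-first (8 September) while '22-08-16 0:00' is read day-first; that is why the 9th sorts after the August run above. Assuming the dates are in fact all day-first (a guess from the question), parsing with an explicit format avoids the ambiguity:

df4["CreatedDate"] = pd.to_datetime(df4.CreatedDate, format='%d-%m-%y %H:%M')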
Here's an attempt; the binning is the same as @jezrael's (except the last bin, which I'm not sure should be limited to 31... is there a way to have open intervals with pd.cut?)
import pandas as pd
df = pd.DataFrame({ "CreatedDate": ['09-08-16 0:00','22-08-16 0:00','23-08-16 0:00','28-08-16 0:00','29-08-16 0:00','30-08-16 0:00','31-08-16 0:00']})
df["CreatedDate"] = pd.to_datetime(df.CreatedDate)
# sort by date
df = df.sort_values("CreatedDate")
# group consecutive dates
oneday = pd.Timedelta("1 day")
df["groups"] = (df.diff() > oneday).cumsum()
counts = df.groupby("groups").count()["CreatedDate"]
# bin
streaks = (pd.cut(counts, bins=[0, 3, 7, 15, 1000000], labels=['1-3', '4-7', '8-15', '>=16'])
             .value_counts()
             .rename_axis("streak")
             .reset_index(name="count"))
print(streaks)
streak count
0 1-3 2
1 4-7 1
2 >=16 0
3 8-15 0
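On the open-interval question above: pd.cut does accept an infinite upper edge, so the last bin need not be capped at an arbitrary number, e.g.:

import numpy as np
streaks = pd.cut(counts, bins=[0, 3, 7, 15, np.inf], labels=['1-3', '4-7', '8-15', '>=16'])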
