Pandas column redefinition (extension) does not work - python

I have two data frames (let's say A and B) indexed with dates.
I define a column in B as follows:
B["column1"] = A.shift(1)
Later, when I add additional data to A and want to update B, it doesn't work:
B["column1"] = A.shift(1) still produces the same data as before I added the new rows to A.
How can I solve this issue?

Perform a df.reindex() before your assignment statement, as follows:
B = B.reindex(A.index)
Then, you can get your desired result with your code:
B["column1"] = A.shift(1)
Caution: If your dataframe B has other columns whose values were built with date indices other than those of dataframe A, reindexing in this way may cause loss of data in those other columns of B. To avoid this, reindex B to the combined index of A and B with union() as follows:
B = B.reindex(A.index.union(B.index))
Demo Run
import pandas as pd

A_index = pd.date_range(start='2021/1/1', periods=8)
A = pd.Series([10, 20, 30, 40, 50, 60, 70, 80], index=A_index)
print(A)
2021-01-01 10
2021-01-02 20
2021-01-03 30
2021-01-04 40
2021-01-05 50
2021-01-06 60
2021-01-07 70
2021-01-08 80
Freq: D, dtype: int64
B = pd.DataFrame()
B["column1"] = A.shift(1)
print(B)
column1
2021-01-01 NaN
2021-01-02 10.0
2021-01-03 20.0
2021-01-04 30.0
2021-01-05 40.0
2021-01-06 50.0
2021-01-07 60.0
2021-01-08 70.0
# Add data to A (Series.append was removed in pandas 2.0; pd.concat is the replacement)
A = pd.concat([A, pd.Series([100, 110, 120], index=pd.date_range(start='2021/1/21', periods=3))])
print(A)
2021-01-01 10
2021-01-02 20
2021-01-03 30
2021-01-04 40
2021-01-05 50
2021-01-06 60
2021-01-07 70
2021-01-08 80
2021-01-21 100 <= New data
2021-01-22 110 <= New data
2021-01-23 120 <= New data
dtype: int64
#Run new code
B = B.reindex(A.index)
#Run existing code
B["column1"] = A.shift(1)
print(B)
column1
2021-01-01 NaN
2021-01-02 10.0
2021-01-03 20.0
2021-01-04 30.0
2021-01-05 40.0
2021-01-06 50.0
2021-01-07 60.0
2021-01-08 70.0
2021-01-21 80.0 <= New data
2021-01-22 100.0 <= New data
2021-01-23 110.0 <= New data
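To illustrate the union() variant from the caution above, here is a minimal sketch (the extra column "other" and its dates are made up for illustration, not part of the original question):
# Suppose B already holds a column indexed by dates that A does not cover (hypothetical data)
B = pd.DataFrame({"other": [1.0, 2.0]}, index=pd.date_range(start='2021/2/1', periods=2))
# Reindexing to A.index alone would drop those 2021-02 rows;
# the union of both indices keeps every date from A and B
B = B.reindex(A.index.union(B.index))
B["column1"] = A.shift(1)
print(B)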

Please use DataFrame.at
B.at["column1"] = A.shift(1)
Reference: DataFrame.at

Related

Dataframe - Datetime, get cumulated sum of previous day

I have a dataframe with the following columns:
datetime: HH:MM:SS (not continuous, there are some missing days)
date: df['datetime'].dt.date
X = various values
X_daily_cum = df.groupby(['date']).X.cumsum()
So X_daily_cum is the cumulative sum of X, but grouped per day; it resets every day.
Code to reproduce:
import pandas as pd
df = pd.DataFrame([['2021-01-01 10:10', 3],
                   ['2021-01-03 13:33', 7],
                   ['2021-01-03 14:44', 6],
                   ['2021-01-07 17:17', 2],
                   ['2021-01-07 07:07', 4],
                   ['2021-01-07 01:07', 9],
                   ['2021-01-09 09:09', 3]],
                  columns=['datetime', 'X'])
df['datetime'] = pd.to_datetime(df['datetime'], format='%Y-%m-%d %M:%S')
df['date'] = df['datetime'].dt.date
df['X_daily_cum'] = df.groupby(['date']).X.cumsum()
print(df)
Now I would like a new column that takes for value the cumulated sum of previous available day, like that:
datetime X date X_daily_cum last_day_cum_value
0 2021-01-01 00:10:10 3 2021-01-01 3 NaN
1 2021-01-03 00:13:33 7 2021-01-03 7 3
2 2021-01-03 00:14:44 6 2021-01-03 13 3
3 2021-01-07 00:17:17 2 2021-01-07 2 13
4 2021-01-07 00:07:07 4 2021-01-07 6 13
5 2021-01-07 00:01:07 9 2021-01-07 15 13
6 2021-01-09 00:09:09 3 2021-01-09 3 15
Is there a clean way to do it with pandas, perhaps with an apply?
I have managed to do it in a disgusting way: copying the df, removing the datetime granularity, selecting the last record of each date, and joining this new df with the previous one. It's disgusting; I would like a more elegant solution.
Thanks for the help
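For reference, that workaround might look roughly like this sketch (column names taken from the example above):
# Last row of each date carries that day's final cumulative value
last_per_day = (df.drop_duplicates('date', keep='last')[['date', 'X_daily_cum']]
                  .sort_values('date')
                  .rename(columns={'X_daily_cum': 'last_day_cum_value'}))
# Shift so each date gets the previous available day's total, then join back
last_per_day['last_day_cum_value'] = last_per_day['last_day_cum_value'].shift()
df = df.merge(last_per_day, on='date', how='left')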
Use Series.duplicated with Series.mask to set missing values on all rows except the last one per date, then shift the values and forward fill the missing values:
df['last_day_cum_value'] = (df['X_daily_cum'].mask(df['date'].duplicated(keep='last'))
                                             .shift()
                                             .ffill())
print (df)
datetime X date X_daily_cum last_day_cum_value
0 2021-01-01 00:10:10 3 2021-01-01 3 NaN
1 2021-01-03 00:13:33 7 2021-01-03 7 3.0
2 2021-01-03 00:14:44 6 2021-01-03 13 3.0
3 2021-01-07 00:17:17 2 2021-01-07 2 13.0
4 2021-01-07 00:07:07 4 2021-01-07 6 13.0
5 2021-01-07 00:01:07 9 2021-01-07 15 13.0
6 2021-01-09 00:09:09 3 2021-01-09 3 15.0
Old solution:
Use DataFrame.drop_duplicates to keep the last row per date, Series.shift to get the previous date's value, then Series.map to build the new column:
s = df.drop_duplicates('date', keep='last').set_index('date')['X_daily_cum'].shift()
print (s)
date
2021-01-01 NaN
2021-01-03 3.0
2021-01-07 13.0
2021-01-09 15.0
Name: X_daily_cum, dtype: float64
df['last_day_cum_value'] = df['date'].map(s)
print (df)
datetime X date X_daily_cum last_day_cum_value
0 2021-01-01 00:10:10 3 2021-01-01 3 NaN
1 2021-01-03 00:13:33 7 2021-01-03 7 3.0
2 2021-01-03 00:14:44 6 2021-01-03 13 3.0
3 2021-01-07 00:17:17 2 2021-01-07 2 13.0
4 2021-01-07 00:07:07 4 2021-01-07 6 13.0
5 2021-01-07 00:01:07 9 2021-01-07 15 13.0
6 2021-01-09 00:09:09 3 2021-01-09 3 15.0

Cumulative sum that updates between two date ranges

I have data that looks like this: (assume start and end are date times)
id  start  end
1   01-01  01-02
1   01-03  01-05
1   01-04  01-07
1   01-06  NaT
1   01-07  NaT
I want to get a data frame that would include all dates, that has a 'cumulative sum' that only counts for the range they are in.
dates  count
01-01  1
01-02  0
01-03  1
01-04  2
01-05  1
01-06  2
01-07  3
One idea I thought of was simply using cumcount on the start dates and doing a 'reverse cumcount', decreasing the counts using the end dates, but I am having trouble wrapping my head around doing this in pandas, and I'm wondering whether there's a more elegant solution.
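As an aside, that +1/-1 idea can be sketched directly (assuming a df with datetime start/end columns, as built in the answer below): add 1 at each start date, subtract 1 at each end date, and take a cumulative sum over the full date range.
import pandas as pd

# +1 for every start, -1 for every (non-NaT) end, netted per date
delta = df['start'].value_counts().sub(df['end'].value_counts(), fill_value=0)
# spread over every calendar day, then cumulate to get the running count
counts = (delta.reindex(pd.date_range(df['start'].min(), df['end'].max()), fill_value=0)
               .cumsum()
               .rename_axis('dates')
               .to_frame('count'))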
Here are two options. First, consider this data with only one id; note that your start and end columns must be datetime.
d = {'id': [1, 1, 1, 1, 1],
     'start': [pd.Timestamp('2021-01-01'), pd.Timestamp('2021-01-03'),
               pd.Timestamp('2021-01-04'), pd.Timestamp('2021-01-06'),
               pd.Timestamp('2021-01-07')],
     'end': [pd.Timestamp('2021-01-02'), pd.Timestamp('2021-01-05'),
             pd.Timestamp('2021-01-07'), pd.NaT, pd.NaT]}
df = pd.DataFrame(d)
So, to get your result, you can subtract the get_dummies of end from the get_dummies of start, sum in case several starts and/or ends fall on the same date, take the cumsum along the dates, and reindex to get all the dates between the min and max dates available. Wrap it in a function:
def dates_cc(df_):
    # dtype=int keeps the subtraction numeric (recent pandas returns boolean dummies by default)
    return (
        pd.get_dummies(df_['start'], dtype=int)
        .sub(pd.get_dummies(df_['end'], dtype=int), fill_value=0)
        .sum()
        .cumsum()
        .to_frame(name='count')
        .reindex(pd.date_range(df_['start'].min(), df_['end'].max()), method='ffill')
        .rename_axis('dates')
    )
Now you can apply this function to your dataframe
res = dates_cc(df).reset_index()
print(res)
# dates count
# 0 2021-01-01 1.0
# 1 2021-01-02 0.0
# 2 2021-01-03 1.0
# 3 2021-01-04 2.0
# 4 2021-01-05 1.0
# 5 2021-01-06 2.0
# 6 2021-01-07 2.0
Now, if you have several ids, for example:
df1 = df.assign(id=[1,1,2,2,2])
print(df1)
# id start end
# 0 1 2021-01-01 2021-01-02
# 1 1 2021-01-03 2021-01-05
# 2 2 2021-01-04 2021-01-07
# 3 2 2021-01-06 NaT
# 4 2 2021-01-07 NaT
then you can apply the above function per group:
res1 = df1.groupby('id').apply(dates_cc).reset_index()
print(res1)
# id dates count
# 0 1 2021-01-01 1.0
# 1 1 2021-01-02 0.0
# 2 1 2021-01-03 1.0
# 3 1 2021-01-04 1.0
# 4 1 2021-01-05 0.0
# 5 2 2021-01-04 1.0
# 6 2 2021-01-05 1.0
# 7 2 2021-01-06 2.0
# 8 2 2021-01-07 2.0
That said, a more straightforward possibility is crosstab, which creates a row per id; the rest is about the same manipulations.
res2 = (
    pd.crosstab(index=df1['id'], columns=df1['start'])
    .sub(pd.crosstab(index=df1['id'], columns=df1['end']), fill_value=0)
    .reindex(columns=pd.date_range(df1['start'].min(), df1['end'].max()), fill_value=0)
    .rename_axis(columns='dates')
    .cumsum(axis=1)
    .stack()
    .reset_index(name='count')
)
print(res2)
# id dates count
# 0 1 2021-01-01 1.0
# 1 1 2021-01-02 0.0
# 2 1 2021-01-03 1.0
# 3 1 2021-01-04 1.0
# 4 1 2021-01-05 0.0
# 5 1 2021-01-06 0.0
# 6 1 2021-01-07 0.0
# 7 2 2021-01-01 0.0
# 8 2 2021-01-02 0.0
# 9 2 2021-01-03 0.0
# 10 2 2021-01-04 1.0
# 11 2 2021-01-05 1.0
# 12 2 2021-01-06 2.0
# 13 2 2021-01-07 2.0
The main difference between the two options is that this one creates the extra dates for every id: for example, 2021-01-01 belongs to id=1 but not id=2, and with this version you also get that date for id=2, while with the groupby version it is not taken into account.

Efficiently counting records with date in between two columns

Say I have this DataFrame:
   user     sub_date             unsub_date           group
0  alice    2021-01-01 00:00:00  2021-02-09 00:00:00  A
1  bob      2021-02-03 00:00:00  2021-04-05 00:00:00  B
2  charlie  2021-02-03 00:00:00  NaT                  A
3  dave     2021-01-29 00:00:00  2021-09-01 00:00:00  B
What is the most efficient way to count the subbed users per date and per group? In other words, to get this DataFrame:
date        group  subbed
2021-01-01  A      1
2021-01-01  B      0
2021-01-02  A      1
2021-01-02  B      0
...         ...    ...
2021-02-03  A      2
2021-02-03  B      2
...         ...    ...
2021-02-10  A      1
2021-02-10  B      2
...         ...    ...
Here's a snippet to init the example df:
import pandas as pd
import datetime as dt
users = pd.DataFrame(
    [
        ["alice", "2021-01-01", "2021-02-09", "A"],
        ["bob", "2021-02-03", "2021-04-05", "B"],
        ["charlie", "2021-02-03", None, "A"],
        ["dave", "2021-01-29", "2021-09-01", "B"],
    ],
    columns=["user", "sub_date", "unsub_date", "group"],
)
users[["sub_date", "unsub_date"]] = users[["sub_date", "unsub_date"]].apply(
    pd.to_datetime
)
Using a smaller date range for convenience
Note: my users df is different from the OP's. I've changed a few dates around to make the outputs smaller.
In [26]: import pandas as pd
...: import numpy as np   # np.where is used further below
...: import datetime as dt
...:
...: users = pd.DataFrame(
...: [
...: ["alice", "2021-01-01", "2021-01-05", "A"],
...: ["bob", "2021-01-03", "2021-01-07", "B"],
...: ["charlie", "2021-01-03", None, "A"],
...: ["dave", "2021-01-09", "2021-01-11", "B"],
...: ],
...: columns=["user", "sub_date", "unsub_date", "group"],
...: )
...:
...: users[["sub_date", "unsub_date"]] = users[["sub_date", "unsub_date"]].apply(
...: pd.to_datetime
...: )
In [81]: users
Out[81]:
user sub_date unsub_date group
0 alice 2021-01-01 2021-01-05 A
1 bob 2021-01-03 2021-01-07 B
2 charlie 2021-01-03 NaT A
3 dave 2021-01-09 2021-01-11 B
In [82]: users.melt(id_vars=['user', 'group'])
Out[82]:
user group variable value
0 alice A sub_date 2021-01-01
1 bob B sub_date 2021-01-03
2 charlie A sub_date 2021-01-03
3 dave B sub_date 2021-01-09
4 alice A unsub_date 2021-01-05
5 bob B unsub_date 2021-01-07
6 charlie A unsub_date NaT
7 dave B unsub_date 2021-01-11
# dropna to remove rows with no unsub_date
# sort_values to sort by date
# sub_date exists -> map to 1, else -1 then take cumsum to get # of subbed people at that date
In [85]: melted = users.melt(id_vars=['user', 'group']).dropna().sort_values('value')
...: melted['sub_value'] = np.where(melted['variable'] == 'sub_date', 1, -1) # or melted['variable'].map({'sub_date': 1, 'unsub_date': -1})
...: melted['sub_cumsum_group'] = melted.groupby('group')['sub_value'].cumsum()
...: melted
Out[85]:
user group variable value sub_value sub_cumsum_group
0 alice A sub_date 2021-01-01 1 1
1 bob B sub_date 2021-01-03 1 1
2 charlie A sub_date 2021-01-03 1 2
4 alice A unsub_date 2021-01-05 -1 1
5 bob B unsub_date 2021-01-07 -1 0
3 dave B sub_date 2021-01-09 1 1
7 dave B unsub_date 2021-01-11 -1 0
In [93]: idx = pd.date_range(melted['value'].min(), melted['value'].max(), freq='1D')
...: idx
Out[93]:
DatetimeIndex(['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-04',
'2021-01-05', '2021-01-06', '2021-01-07', '2021-01-08',
'2021-01-09', '2021-01-10', '2021-01-11'],
dtype='datetime64[ns]', freq='D')
In [94]: melted.set_index('value').groupby('group')['sub_cumsum_group'].apply(lambda x: x.reindex(idx).ffill().fillna(0))
Out[94]:
group
A 2021-01-01 1.0
2021-01-02 1.0
2021-01-03 2.0
2021-01-04 2.0
2021-01-05 1.0
2021-01-06 1.0
2021-01-07 1.0
2021-01-08 1.0
2021-01-09 1.0
2021-01-10 1.0
2021-01-11 1.0
B 2021-01-01 0.0
2021-01-02 0.0
2021-01-03 1.0
2021-01-04 1.0
2021-01-05 1.0
2021-01-06 1.0
2021-01-07 0.0
2021-01-08 0.0
2021-01-09 1.0
2021-01-10 1.0
2021-01-11 0.0
Name: sub_cumsum_group, dtype: float64
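If one wanted the exact date/group/subbed table from the question, a possible follow-up (a sketch building on melted and idx above, not part of the original answer):
subbed = (melted.set_index('value')
                .groupby('group')['sub_cumsum_group']
                .apply(lambda x: x.reindex(idx).ffill().fillna(0))
                .rename('subbed')
                .rename_axis(['group', 'date'])
                .reset_index()
                [['date', 'group', 'subbed']]
                .sort_values(['date', 'group'], ignore_index=True))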
The data is described by step functions, and the staircase package can be used for these applications:
import staircase as sc
stepfunctions = users.groupby("group").apply(sc.Stairs, "sub_date", "unsub_date")
stepfunctions will be a pandas.Series, indexed by group, and the values are Stairs objects which represent step functions.
group
A <staircase.Stairs, id=2516834869320>
B <staircase.Stairs, id=2516112096072>
dtype: object
You could plot the step function for A if you wanted like so
stepfunctions["A"].plot()
The next step is to sample the step function at whatever dates you want, e.g. for every day of January:
sc.sample(stepfunctions, pd.date_range("2021-01-01", "2021-02-01")).melt(ignore_index=False).reset_index()
The result is this
group variable value
0 A 2021-01-01 1
1 B 2021-01-01 0
2 A 2021-01-02 1
3 B 2021-01-02 0
4 A 2021-01-03 1
.. ... ... ...
59 B 2021-01-30 1
60 A 2021-01-31 1
61 B 2021-01-31 1
62 A 2021-02-01 1
63 B 2021-02-01 1
note:
I am the creator of staircase. Please feel free to reach out with feedback or questions if you have any.
Try this?
>>> users.groupby(['sub_date','group'])[['user']].count()

Creating date column using Pandas with the date gaps filled between specific period using asfreq

Suppose I have a Pandas dataframe with a 'Date' column whose values have gaps, like below:
>>> import pandas as pd
>>> data = [['2021-01-02', 1.0], ['2021-01-05', 2.0], ['2021-02-05', 3.0]]
>>> df = pd.DataFrame(data, columns=['Date','$'])
>>> df
Date $
0 2021-01-02 1.0
1 2021-01-05 2.0
2 2021-02-05 3.0
I would like to fill the gaps in the 'Date' column for the period between Jan 01, 2021 and Feb 28, 2021 while copying (forward-filling) the values, so after some reading up on StackOverflow posts like this, I came up with the solution to transform the dataframe as shown below:
# I need to first convert values in 'Date' column to datetime64 type
>>> df['Date'] = pd.to_datetime(df['Date'])
# Then I have to set 'Date' column as the dataframe's index
>>> df = df.set_index(['Date'])
# Without doing the above two steps, the call below returns an error
>>> df_new=df.asfreq(freq='D', how={'start':'2021-01-01', 'end':'2021-03-31'}, method='ffill')
>>> df_new
$
Date
2021-01-02 1.0
2021-01-03 1.0
2021-01-04 1.0
2021-01-05 2.0
2021-01-06 2.0
2021-01-07 2.0
2021-01-08 2.0
2021-01-09 2.0
2021-01-10 2.0
...
2021-01-31 2.0
2021-02-01 2.0
2021-02-02 2.0
2021-02-03 2.0
2021-02-04 2.0
2021-02-05 3.0
But as you can see above, the dates in df_new only start at '2021-01-02' instead of '2021-01-01' AND it ends on '2021-02-05' instead of '2021-02-28'. I hope I'm entering the input for the how parameter correctly above.
Q1: What else do I need to do to make the resulting dataframe look like below:
>>> df_new
$
Date
2021-01-01 1.0
2021-01-02 1.0
2021-01-03 1.0
2021-01-04 1.0
2021-01-05 2.0
2021-01-06 2.0
2021-01-07 2.0
2021-01-08 2.0
2021-01-09 2.0
2021-01-10 2.0
...
2021-01-31 2.0
2021-02-01 2.0
2021-02-02 2.0
2021-02-03 2.0
2021-02-04 2.0
2021-02-05 3.0
2021-02-06 3.0
...
2021-02-28 3.0
Q2: Is there any way I can accomplish this more simply (i.e. without having to set the 'Date' column as the index of the dataframe, for example)?
Thanks in advance for your suggestions/answers!
You can find the min/max dates, create a new pd.date_range() using the MonthBegin/MonthEnd date offsets, and reindex:
df.Date = pd.to_datetime(df.Date)
mn = df.Date.min()
mx = df.Date.max()
dr = pd.date_range(
    mn - pd.tseries.offsets.MonthBegin(),
    mx + pd.tseries.offsets.MonthEnd(),
    name="Date",
)
df = df.set_index("Date").reindex(dr).ffill().bfill().reset_index()
print(df)
Prints:
Date $
0 2021-01-01 1.0
1 2021-01-02 1.0
2 2021-01-03 1.0
3 2021-01-04 1.0
4 2021-01-05 2.0
5 2021-01-06 2.0
...
55 2021-02-25 3.0
56 2021-02-26 3.0
57 2021-02-27 3.0
58 2021-02-28 3.0

Finding longest consecutive increase in Pandas

I have a dataframe:
Date Price
2021-01-01 29344.67
2021-01-02 32072.08
2021-01-03 33048.03
2021-01-04 32084.61
2021-01-05 34105.46
2021-01-06 36910.18
2021-01-07 39505.51
2021-01-08 40809.93
2021-01-09 40397.52
2021-01-10 38505.49
Date object
Price float64
dtype: object
And my goal is to find the longest consecutive period of growth.
It should return:
Longest consecutive period was from 2021-01-04 to 2021-01-08 with increase of $8725.32
and honestly I have no idea where to start with it. These are my first steps in pandas and I don't know which tools I should use to get this information.
Could anyone help me / point me in the right direction?
Detect your increasing sequences with a cumsum over a 'price decreased' flag:
df['is_increasing'] = df['Price'].diff().lt(0).cumsum()
You would get:
Date Price is_increasing
0 2021-01-01 29344.67 0
1 2021-01-02 32072.08 0
2 2021-01-03 33048.03 0
3 2021-01-04 32084.61 1
4 2021-01-05 34105.46 1
5 2021-01-06 36910.18 1
6 2021-01-07 39505.51 1
7 2021-01-08 40809.93 1
8 2021-01-09 40397.52 2
9 2021-01-10 38505.49 3
Now, you can detect your longest sequence with
sizes=df.groupby('is_increasing')['Price'].transform('size')
df[sizes == sizes.max()]
And you get:
Date Price is_increasing
3 2021-01-04 32084.61 1
4 2021-01-05 34105.46 1
5 2021-01-06 36910.18 1
6 2021-01-07 39505.51 1
7 2021-01-08 40809.93 1
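To turn that into the summary sentence from the question, a small follow-up sketch (it reuses df and sizes from above and assumes a single longest run):
run = df[sizes == sizes.max()]
start, end = run['Date'].iloc[0], run['Date'].iloc[-1]
increase = run['Price'].iloc[-1] - run['Price'].iloc[0]
print(f"Longest consecutive period was from {start} to {end} with increase of ${increase:.2f}")
# Longest consecutive period was from 2021-01-04 to 2021-01-08 with increase of $8725.32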
Something like what Quang did to split the groups, then pick the group with the largest count:
s = df.Price.diff().lt(0).cumsum()
out = df.loc[s==s.value_counts().sort_values().index[-1]]
Out[514]:
Date Price
3 2021-01-04 32084.61
4 2021-01-05 34105.46
5 2021-01-06 36910.18
6 2021-01-07 39505.51
7 2021-01-08 40809.93
