How to do/work around a conditional join in Python pandas?

I am trying to calculate time-based aggregations in Pandas based on date values stored in a separate table.
The top of the first table, table_a, looks like this:
COMPANY_ID DATE MEASURE
1 2010-01-01 00:00:00 10
1 2010-01-02 00:00:00 10
1 2010-01-03 00:00:00 10
1 2010-01-04 00:00:00 10
1 2010-01-05 00:00:00 10
Here is the code to create the table:
import pandas as pd

table_a = pd.concat(
    [pd.DataFrame({'DATE': pd.date_range("01/01/2010", "12/31/2010", freq="D"),
                   'COMPANY_ID': 1, 'MEASURE': 10}),
     pd.DataFrame({'DATE': pd.date_range("01/01/2010", "12/31/2010", freq="D"),
                   'COMPANY_ID': 2, 'MEASURE': 10})])
The second table, table_b, looks like this:
COMPANY END_DATE
1 2010-03-01 00:00:00
1 2010-06-02 00:00:00
2 2010-03-01 00:00:00
2 2010-06-02 00:00:00
and the code to create it is:
table_b = pd.DataFrame({'END_DATE': pd.to_datetime(['03/01/2010', '06/02/2010', '03/01/2010', '06/02/2010']),
                        'COMPANY': (1, 1, 2, 2)})
I want to be able to get the sum of the 'measure' column for each 'COMPANY_ID' for each 30-day period prior to the 'END_DATE' in table_b.
This is (I think) the SQL equivalent:
select
    b.COMPANY,
    b.END_DATE,
    sum(a.MEASURE) as MEASURE_TO_END_DATE
from table_a a, table_b b
where a.COMPANY_ID = b.COMPANY
  and a.DATE < b.END_DATE
  and a.DATE > b.END_DATE - 30
group by b.COMPANY, b.END_DATE;

Well, I can think of a few ways:
1. Essentially blow up the dataframe by merging just on the exact field (company), then filter down to the 30-day windows after the merge. This should be fast but could use lots of memory.
2. Move the merging and filtering on the 30-day window into a groupby(). This results in a merge for each group, so it is slower but should use less memory.
Option #1
Suppose your data looks like the following (I expanded your sample data):
print(df)
company date measure
0 0 2010-01-01 10
1 0 2010-01-15 10
2 0 2010-02-01 10
3 0 2010-02-15 10
4 0 2010-03-01 10
5 0 2010-03-15 10
6 0 2010-04-01 10
7 1 2010-03-01 5
8 1 2010-03-15 5
9 1 2010-04-01 5
10 1 2010-04-15 5
11 1 2010-05-01 5
12 1 2010-05-15 5
print(windows)
company end_date
0 0 2010-02-01
1 0 2010-03-15
2 1 2010-04-01
3 1 2010-05-15
Create a beginning date for the 30-day windows:
import numpy as np

windows['beg_date'] = (windows['end_date'].values.astype('datetime64[D]') -
                       np.timedelta64(30, 'D'))
print(windows)
company end_date beg_date
0 0 2010-02-01 2010-01-02
1 0 2010-03-15 2010-02-13
2 1 2010-04-01 2010-03-02
3 1 2010-05-15 2010-04-15
Now do a merge and then select the rows where date falls between beg_date and end_date:
df = df.merge(windows, on='company', how='left')
df = df[(df.date >= df.beg_date) & (df.date <= df.end_date)]
print(df)
company date measure end_date beg_date
2 0 2010-01-15 10 2010-02-01 2010-01-02
4 0 2010-02-01 10 2010-02-01 2010-01-02
7 0 2010-02-15 10 2010-03-15 2010-02-13
9 0 2010-03-01 10 2010-03-15 2010-02-13
11 0 2010-03-15 10 2010-03-15 2010-02-13
16 1 2010-03-15 5 2010-04-01 2010-03-02
18 1 2010-04-01 5 2010-04-01 2010-03-02
21 1 2010-04-15 5 2010-05-15 2010-04-15
23 1 2010-05-01 5 2010-05-15 2010-04-15
25 1 2010-05-15 5 2010-05-15 2010-04-15
You can compute the 30 day window sums by grouping on company and end_date:
print(df.groupby(['company', 'end_date']).sum())
measure
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
Option #2
Move all merging into the groupby. This should be better on memory, but I would think much slower:
windows['beg_date'] = (windows['end_date'].values.astype('datetime64[D]') -
                       np.timedelta64(30, 'D'))

def cond_merge(g, windows):
    g = g.merge(windows, on='company', how='left')
    g = g[(g.date >= g.beg_date) & (g.date <= g.end_date)]
    return g.groupby('end_date')['measure'].sum()

print(df.groupby('company').apply(cond_merge, windows))
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
Another option
Now if your windows never overlap (as in the example data), you could do something like the following as an alternative that doesn't blow up the dataframe but is pretty fast:
windows['date'] = windows['end_date']
df = df.merge(windows, on=['company', 'date'], how='outer')
print(df)
company date measure end_date
0 0 2010-01-01 10 NaT
1 0 2010-01-15 10 NaT
2 0 2010-02-01 10 2010-02-01
3 0 2010-02-15 10 NaT
4 0 2010-03-01 10 NaT
5 0 2010-03-15 10 2010-03-15
6 0 2010-04-01 10 NaT
7 1 2010-03-01 5 NaT
8 1 2010-03-15 5 NaT
9 1 2010-04-01 5 2010-04-01
10 1 2010-04-15 5 NaT
11 1 2010-05-01 5 NaT
12 1 2010-05-15 5 2010-05-15
This merge essentially inserts your window end dates into the dataframe, and then backfilling the end dates (by group) gives you a structure from which you can easily create your summation windows:
df['end_date'] = df.groupby('company')['end_date'].apply(lambda x: x.bfill())
print(df)
company date measure end_date
0 0 2010-01-01 10 2010-02-01
1 0 2010-01-15 10 2010-02-01
2 0 2010-02-01 10 2010-02-01
3 0 2010-02-15 10 2010-03-15
4 0 2010-03-01 10 2010-03-15
5 0 2010-03-15 10 2010-03-15
6 0 2010-04-01 10 NaT
7 1 2010-03-01 5 2010-04-01
8 1 2010-03-15 5 2010-04-01
9 1 2010-04-01 5 2010-04-01
10 1 2010-04-15 5 2010-05-15
11 1 2010-05-01 5 2010-05-15
12 1 2010-05-15 5 2010-05-15
df = df[df.end_date.notnull()]
df['beg_date'] = (df['end_date'].values.astype('datetime64[D]') -
                  np.timedelta64(30, 'D'))
print(df)
company date measure end_date beg_date
0 0 2010-01-01 10 2010-02-01 2010-01-02
1 0 2010-01-15 10 2010-02-01 2010-01-02
2 0 2010-02-01 10 2010-02-01 2010-01-02
3 0 2010-02-15 10 2010-03-15 2010-02-13
4 0 2010-03-01 10 2010-03-15 2010-02-13
5 0 2010-03-15 10 2010-03-15 2010-02-13
7 1 2010-03-01 5 2010-04-01 2010-03-02
8 1 2010-03-15 5 2010-04-01 2010-03-02
9 1 2010-04-01 5 2010-04-01 2010-03-02
10 1 2010-04-15 5 2010-05-15 2010-04-15
11 1 2010-05-01 5 2010-05-15 2010-04-15
12 1 2010-05-15 5 2010-05-15 2010-04-15
df = df[(df.date >= df.beg_date) & (df.date <= df.end_date)]
print(df.groupby(['company', 'end_date']).sum())
measure
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
Another alternative is to resample your first dataframe to daily data and then compute rolling sums over a 30-day window (.rolling(...).sum() in current pandas), selecting the dates you are interested in at the end. This could be quite memory intensive too.
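Here is a rough sketch of that idea, using the df / windows names from this answer (assumed, not benchmarked). closed='both' keeps the same [end_date - 30 days, end_date] window used above, and every end_date is assumed to fall inside df's date range:
# pivot to one column of daily measures per company
daily = (df.pivot_table(index='date', columns='company',
                        values='measure', aggfunc='sum')
           .asfreq('D')
           .fillna(0))
# trailing 30-day sums, inclusive of both window endpoints
rolled = daily.rolling('30D', closed='both').sum()
# pick out the sum at each window end date
windows['sum_30d'] = [rolled.at[d, c]
                      for c, d in zip(windows['company'], windows['end_date'])]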

Since there is no direct way to do a conditional join in pandas, a very easy and practical workaround is to use an additional library: pandasql.
Install pandasql from pip with the command pip install pandasql. This library allows you to query pandas dataframes using SQL.
import pandas as pd
from pandasql import sqldf
df = pd.read_excel(r'play_data.xlsx')
df
id Name Amount
0 A001 A 100
1 A002 B 110
2 A003 C 120
3 A005 D 150
Now let's just do a conditional join to compare the Amount of the IDs
# Make your pysqldf object:
pysqldf = lambda q: sqldf(q, globals())
# Write your query in SQL syntax, here you can use df as a normal SQL table
cond_join= '''
select
df_left.*,
df_right.*
from df as df_left
join df as df_right
on
df_left.[Amount] > (df_right.[Amount]+10)
'''
# Now, get your queries results as dataframe using the sqldf object that you created
pysqldf(cond_join)
id Name Amount id Name Amount
0 A003 C 120 A001 A 100
1 A005 D 150 A001 A 100
2 A005 D 150 A002 B 110
3 A005 D 150 A003 C 120
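Applied to table_a / table_b from the original question, the same pattern could look roughly like this (a sketch, not benchmarked; julianday() is plain SQLite, which is what pandasql runs on, and the 0-30 day range mirrors the window used in the other answers):
cond_30d = '''
select
    b.COMPANY,
    b.END_DATE,
    sum(a.MEASURE) as MEASURE_TO_END_DATE
from table_a a
join table_b b
  on a.COMPANY_ID = b.COMPANY
 and julianday(b.END_DATE) - julianday(a.DATE) between 0 and 30
group by b.COMPANY, b.END_DATE
'''
pysqldf(cond_30d)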

I know I am late for the party but here are two solutions. The first one is rather simple but not very general, while the second one should be more universal. In what follows I assume that table_a and table_b objects are already defined as in the original question.
Solution 1
This one is simple. Here we just do a left join and append END_DATE values to table_a and then filter out the rows we are not interested in. So the memory overhead here is size of table_a * number of unique END_DATE values per COMPANY in table_b.
table_c = table_a.merge(table_b, left_on="COMPANY_ID", right_on="COMPANY")
table_c[(table_c["DATE"] - table_c["END_DATE"]).dt.days.between(-30, 0)] \
    .groupby(["COMPANY", "END_DATE"])["MEASURE"].sum()
## OUTPUT:
COMPANY END_DATE
1 2010-03-01 310
2010-06-02 310
2 2010-03-01 310
2010-06-02 310
Name: MEASURE, dtype: int64
This is quite fast, but could blow up the size of table_a significantly if table_b contained many values.
Solution 2
This one is a bit smarter and operates row-by-row, where to each row in table_b we explicitly map only the relevant subset of table_a. Thus, we get only the data we need, so there is no memory overhead (beyond the memory needed to represent the raw records over which we want to sum).
table_b.groupby(["COMPANY", "END_DATE"]) \
    .apply(lambda g: table_a[
        (table_a["COMPANY_ID"] == g["COMPANY"].iloc[0]) &
        ((table_a["DATE"] - g["END_DATE"].iloc[0]).dt.days.between(-30, 0))
    ]["MEASURE"].sum())
## OUTPUT:
COMPANY END_DATE
1 2010-03-01 310
2010-06-02 310
2 2010-03-01 310
2010-06-02 310
dtype: int64
Note that in this case, for each group of table_b we use only the relevant subset of table_a, which is much more memory efficient. The price is that this solution seems to be about 2-3 times slower (but in general still relatively fast; ~2-3 ms runtime on your data).

I am using karl D's data.
conditional_join from pyjanitor offers a way to deal with non-equi joins efficiently:
# pip install pyjanitor
import pandas as pd
import janitor
(df
 .conditional_join(
     windows,  # series or dataframe to join to
     # variable arguments
     # left column, right column, join operator
     ('company', 'company', '=='),
     ('date', 'beg_date', '>='),
     ('date', 'end_date', '<='),
     # for more performance, depending on the data size,
     # you can turn on use_numba
     use_numba=False,
     # filter for specific columns, if required
     df_columns=['company', 'measure'],
     right_columns='end_date')
 .groupby(['company', 'end_date'])
 .sum()
)
measure
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
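The same idea could be applied to the original table_a / table_b roughly like this (a sketch; BEG_DATE is a helper column added here, not part of the original data, and the result should match the 310 sums shown in the earlier answers):
table_b = table_b.assign(BEG_DATE=table_b['END_DATE'] - pd.Timedelta(days=30))
(table_a
 .conditional_join(
     table_b,
     ('COMPANY_ID', 'COMPANY', '=='),
     ('DATE', 'BEG_DATE', '>='),
     ('DATE', 'END_DATE', '<='))
 .groupby(['COMPANY', 'END_DATE'])['MEASURE']
 .sum()
)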

Related

pandas get delta from corresponding date in a separate list of dates

I have a dataframe:
df
a  b
7 2019-05-01 00:00:01
6 2019-05-02 00:15:01
1 2019-05-06 00:10:01
3 2019-05-09 01:00:01
8 2019-05-09 04:20:01
9 2019-05-12 01:10:01
4 2019-05-16 03:30:01
And
l = [datetime.datetime(2019, 5, 2), datetime.datetime(2019, 5, 10), datetime.datetime(2019, 5, 22)]
I want to add a column with the following:
for each row, find the last date from l that is before it, and add the number of days between them.
If none of the dates is earlier, add the delta from the smallest one.
So the new column will be:
df
a  b                    delta  date
7  2019-05-01 00:00:01  -1     datetime.datetime(2019, 5, 2)
6  2019-05-02 00:15:01   0     datetime.datetime(2019, 5, 2)
1  2019-05-06 00:10:01   4     datetime.datetime(2019, 5, 2)
3  2019-05-09 01:00:01   7     datetime.datetime(2019, 5, 2)
8  2019-05-09 04:20:01   7     datetime.datetime(2019, 5, 2)
9  2019-05-12 01:10:01   2     datetime.datetime(2019, 5, 10)
4  2019-05-16 03:30:01   6     datetime.datetime(2019, 5, 10)
How can I do it?
Using merge_asof to align df['b'] and the list (as Series), then computing the difference:
# ensure datetime
df['b'] = pd.to_datetime(df['b'])
# craft Series for merging (could be combined with line below)
s = pd.Series(l, name='l')
# merge and fillna with minimum date
ref = pd.merge_asof(df['b'], s, left_on='b', right_on='l')['l'].fillna(s.min())
# compute the delta as days
df['delta'] = (df['b'] - ref).dt.days
output:
a b delta
0 7 2019-05-01 00:00:01 -1
1 6 2019-05-02 00:15:01 0
2 1 2019-05-06 00:10:01 4
3 3 2019-05-09 01:00:01 7
4 8 2019-05-09 04:20:01 7
5 9 2019-05-12 01:10:01 2
6 4 2019-05-16 03:30:01 6
Here's a one-line solution if your b column holds datetime objects; otherwise, convert it to datetime first.
df['delta'] = df.apply(lambda x: sorted([x.b - i for i in l], key=lambda y: y.seconds)[0].days, axis=1)
Explanation: to each row you apply a function that:
computes the timedelta between the row's datetime and every datetime in l, and stores them in a list
sorts this list by the number of seconds of each timedelta
takes the first value (the smallest timedelta) and returns its days
This code separates this dataset's timestamp into its components, e.g. weekday Friday, year 2014, day 01, hour 00, minute 03:
rides['weekday'] = rides.timestamp.dt.strftime("%A")
rides['year'] = rides.timestamp.dt.strftime("%Y")
rides['day'] = rides.timestamp.dt.strftime("%d")
rides['hour'] = rides.timestamp.dt.strftime("%H")
rides["minute"] = rides.timestamp.dt.strftime("%M")

pandas get a sum column for next 7 days

I want to get the sum of values for the next 7 days of a column.
My dataframe:
date value
0 2021-04-29 1
1 2021-05-03 2
2 2021-05-06 1
3 2021-05-15 1
4 2021-05-17 2
5 2021-05-18 1
6 2021-05-21 2
7 2021-05-22 5
8 2021-05-24 4
I tried to make a new column that contains the date 7 days from the current date
df['temp'] = df['date'] + timedelta(days=7)
then calculate the sum of values within that date range:
df['next_7days'] = df[(df.date > df.date) & (df.date <= df.temp)].value.sum()
But this gives me all 0s as the answer.
intended result:
date value next_7days
0 2021-04-29 1 3
1 2021-05-03 2 1
2 2021-05-06 1 0
3 2021-05-15 1 10
4 2021-05-17 2 12
5 2021-05-18 1 11
6 2021-05-21 2 9
7 2021-05-22 5 4
8 2021-05-24 4 0
The method I am using currently is quite tedious; are there any better methods to get the intended result?
With a list comprehension:
tomorrow_dates = df.date + pd.Timedelta("1 day")
next_week_dates = df.date + pd.Timedelta("7 days")
df["next_7days"] = [df.value[df.date.between(tomorrow, next_week)].sum()
for tomorrow, next_week in zip(tomorrow_dates, next_week_dates)]
where we first define tomorrow's and next week's dates and store them. Then we zip them together and use pd.Series.between to get a boolean series indicating whether each date falls in the desired range. Boolean indexing then selects the actual values, which we sum; this is done for each date pair. A vectorized alternative is sketched after the output below.
to get
date value next_7days
0 2021-04-29 1 3
1 2021-05-03 2 1
2 2021-05-06 1 0
3 2021-05-15 1 10
4 2021-05-17 2 12
5 2021-05-18 1 11
6 2021-05-21 2 9
7 2021-05-22 5 4
8 2021-05-24 4 0
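As mentioned above, here is a sketch of a vectorized alternative (assuming df['date'] is already datetime and sorted ascending, as in the example; column names as above). It uses a cumulative sum plus np.searchsorted, so each window sum is two lookups instead of a scan:
import numpy as np

dates = df['date'].to_numpy()
# csum[i] is the sum of the first i values, so csum[j] - csum[i] sums rows i..j-1
csum = np.concatenate(([0], df['value'].cumsum().to_numpy()))
start = np.searchsorted(dates, dates, side='right')                         # strictly after each date
end = np.searchsorted(dates, dates + np.timedelta64(7, 'D'), side='right')  # up to and including date + 7 days
df['next_7days'] = csum[end] - csum[start]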

Add a dummy indicating change between consecutive rows in grouped dataframe

This is a follow up to my previous question here.
Assume a dataset like this (which originally is read in from a .csv):
data = pd.DataFrame({'id': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                     'time': ['2017-01-01 12:00:00', '2017-01-01 12:00:00', '2017-01-01 12:00:00',
                              '2017-01-01 12:10:00', '2017-01-01 12:10:00', '2017-01-01 12:10:00',
                              '2017-01-01 12:20:00', '2017-01-01 12:20:00', '2017-01-01 12:20:00'],
                     'values': [10, 11, 12, 10, 12, 13, 10, 13, 13]})
data = data.set_index('id')
=>
id time values
0 1 2017-01-01 12:00:00 10
1 2 2017-01-01 12:00:00 11
2 3 2017-01-01 12:00:00 12
3 1 2017-01-01 12:10:00 10
4 2 2017-01-01 12:10:00 12
5 3 2017-01-01 12:10:00 13
6 1 2017-01-01 12:20:00 10
7 2 2017-01-01 12:20:00 13
8 3 2017-01-01 12:20:00 13
Time is identical for all IDs in each observation period. The series goes on like that for many observations, i.e. every ten minutes.
Previously, I learned how to get the total number of changes in values between two consecutive periods for each id:
data.groupby(data.index).values.apply(lambda x: (x != x.shift()).sum() - 1)
This works great and is really fast. Now, I am interested in adding a new column to the df. It should be a dummy indicating for each row in values if there was a change between the current and previous row. Thus, the result would be as follows:
=>
id time values change
0 1 2017-01-01 12:00:00 10 0
1 2 2017-01-01 12:00:00 11 0
2 3 2017-01-01 12:00:00 12 0
3 1 2017-01-01 12:10:00 10 0
4 2 2017-01-01 12:10:00 12 1
5 3 2017-01-01 12:10:00 13 1
6 1 2017-01-01 12:20:00 10 0
7 2 2017-01-01 12:20:00 13 1
8 3 2017-01-01 12:20:00 13 0
After fiddling around, I came up with a solution. However, it is really slow. It won't run on my actual dataset which is rather big:
def calc_change(x):
    x = (x != x.shift())
    x.iloc[0,] = False
    return x

changes = data.groupby(data.index, as_index=False).values.apply(
    calc_change).reset_index().iloc[:, 2]
data = data.sort_index().reset_index()
data.loc[changes, 'change'] = 1
data = data.fillna(0)
I'm sure there are better ways, and I appreciate any help!
You can use this solution if your id column is not set as index.
data['change'] = data.groupby(['id'])['values'].apply(lambda x: x.diff() > 0).astype(int)
You get
id time values change
0 1 2017-01-01 12:00:00 10 0
1 2 2017-01-01 12:00:00 11 0
2 3 2017-01-01 12:00:00 12 0
3 1 2017-01-01 12:10:00 10 0
4 2 2017-01-01 12:10:00 12 1
5 3 2017-01-01 12:10:00 13 1
6 1 2017-01-01 12:20:00 10 0
7 2 2017-01-01 12:20:00 13 1
8 3 2017-01-01 12:20:00 13 0
With id as index,
data = data.sort_index()
data['change'] = data.groupby(data.index)['values'].apply(lambda x: x.diff() > 0).astype(int)
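Note that diff() > 0 flags only increases, which happens to match the example because the values never decrease. If decreases should also count as a change, a small variant of the same grouped idea (a sketch, same column names) is:
data = data.sort_index()
data['change'] = (data.groupby(data.index)['values']
                      .transform(lambda x: x.diff().fillna(0).ne(0))
                      .astype(int))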

Group python pandas dataframe per weeks (starting on Monday)

I have a dataframe with values per day (see df below).
I want to group the "Forecast" field per week but with Monday as the first day of the week.
Currently I can do it via pd.TimeGrouper('W'), but it groups the weeks starting on Sundays (see df_final below).
import pandas as pd
data = [("W1","G1",1234,pd.to_datetime("2015-07-1"),8),
("W1","G1",1234,pd.to_datetime("2015-07-30"),2),
("W1","G1",1234,pd.to_datetime("2015-07-15"),2),
("W1","G1",1234,pd.to_datetime("2015-07-2"),4),
("W1","G2",2345,pd.to_datetime("2015-07-5"),5),
("W1","G2",2345,pd.to_datetime("2015-07-7"),1),
("W1","G2",2345,pd.to_datetime("2015-07-9"),1),
("W1","G2",2345,pd.to_datetime("2015-07-11"),3)]
labels = ["Site","Type","Product","Date","Forecast"]
df = pd.DataFrame(data,columns=labels).set_index(["Site","Type","Product","Date"])
df
Forecast
Site Type Product Date
W1 G1 1234 2015-07-01 8
2015-07-30 2
2015-07-15 2
2015-07-02 4
G2 2345 2015-07-05 5
2015-07-07 1
2015-07-09 1
2015-07-11 3
df_final = (df
            .reset_index()
            .set_index("Date")
            .groupby(["Site", "Product", pd.TimeGrouper('W')])["Forecast"].sum()
            .astype(int)
            .reset_index())
df_final["DayOfWeek"] = df_final["Date"].dt.dayofweek
df_final
Site Product Date Forecast DayOfWeek
0 W1 1234 2015-07-05 12 6
1 W1 1234 2015-07-19 2 6
2 W1 1234 2015-08-02 2 6
3 W1 2345 2015-07-05 5 6
4 W1 2345 2015-07-12 5 6
Use W-MON instead of W; check anchored offsets:
df_final = (df
            .reset_index()
            .set_index("Date")
            .groupby(["Site", "Product", pd.Grouper(freq='W-MON')])["Forecast"].sum()
            .astype(int)
            .reset_index())
df_final["DayOfWeek"] = df_final["Date"].dt.dayofweek
print(df_final)
Site Product Date Forecast DayOfWeek
0 W1 1234 2015-07-06 12 0
1 W1 1234 2015-07-20 2 0
2 W1 1234 2015-08-03 2 0
3 W1 2345 2015-07-06 5 0
4 W1 2345 2015-07-13 5 0
I have three solutions to this problem as described below. First, I should state that the ex-accepted answer is incorrect. Here is why:
# let's create an example df of length 9; 2020-03-08 is a Sunday
s = pd.DataFrame({'dt': pd.date_range('2020-03-08', periods=9, freq='D'),
                  'counts': 0})
> s
                   dt  counts
0 2020-03-08 00:00:00       0
1 2020-03-09 00:00:00       0
2 2020-03-10 00:00:00       0
3 2020-03-11 00:00:00       0
4 2020-03-12 00:00:00       0
5 2020-03-13 00:00:00       0
6 2020-03-14 00:00:00       0
7 2020-03-15 00:00:00       0
8 2020-03-16 00:00:00       0
These nine days span three Monday-to-Sunday weeks: the weeks of March 2nd, 9th, and 16th. Let's try the accepted answer:
# the accepted answer
> s.groupby(pd.Grouper(key='dt', freq='W-Mon')).count()
                     counts
dt
2020-03-09 00:00:00       2
2020-03-16 00:00:00       7
This is wrong because the OP wants to have "Monday as the first day of the week" (not as the last day of the week) in the resulting dataframe. Let's see what we get when we try with freq='W'
> s.groupby(pd.Grouper(key='dt', freq='W')).count()
                     counts
dt
2020-03-08 00:00:00       1
2020-03-15 00:00:00       7
2020-03-22 00:00:00       1
This grouper actually grouped as we wanted (Monday to Sunday) but labeled the 'dt' with the END of the week, rather than the start. So, to get what we want, we can move the index by 6 days like:
w = s.groupby(pd.Grouper(key='dt', freq='W')).count()
w.index -= pd.Timedelta(days=6)
or alternatively we can do:
s.groupby(pd.Grouper(key='dt',freq='W-Mon',label='left',closed='left')).count()
a third solution, arguably the most readable one, is converting dt to period first, then grouping, and finally (if needed) converting back to timestamp:
s.groupby(s.dt.dt.to_period('W'))['counts'].count().to_timestamp()
# a variant of this solution is: s.set_index('dt').to_period('W').groupby(pd.Grouper(freq='W')).count().to_timestamp()
all of these solutions return what the OP asked for:
                     counts
dt
2020-03-02 00:00:00       1
2020-03-09 00:00:00       7
2020-03-16 00:00:00       1
Explanation: when a week-based (or other end-anchored) freq is passed to pd.Grouper, both the closed and label kwargs default to 'right'. Setting freq to W (short for W-SUN) works because we want our week to end on Sunday (Sunday included; g.closed == 'right' handles this). Unfortunately, the pd.Grouper docstring does not show the default values, but you can see them like this:
g = pd.Grouper(key='dt', freq='W')
print(g.closed, g.label)
> right right

make a shift by index with a pandas dataframe

Is there a pandas way to do that:
predicted_sells = []
for row in df.values:
    index_tms = row[0]
    delta = index_tms + timedelta(hours=1)
    try:
        sells_to_predict = df.loc[delta]['cars_sold']
    except KeyError:
        sells_to_predict = None
    predicted_sells.append(sells_to_predict)
df['sell_to_predict'] = predicted_sells
example explanation:
sell is the number of cars I sold at time tms, and sell_to_predict is the number of cars I sold the hour after; I want to predict that. So I want to build a new column containing, at time tms, the number of cars I will sell at time tms + 1h.
Before my code, the data looks like this:
tms sell
2015-11-23 15:00:00 6
2015-11-23 16:00:00 2
2015-11-23 17:00:00 10
After, it looks like this:
tms sell sell_to_predict
2015-11-23 15:00:00 6 2
2015-11-23 16:00:00 2 10
2015-11-23 17:00:00 10 NaN
I create a new column based on a shift of another column, but it's not a shift by a fixed number of rows; it's a shift based on an index (here the index is a timestamp).
Here is another example, a little more complex:
before:
            sell  random
store hour
1     1        1       9
      2        7       7
2     1        4       3
      2        2       3
after:
            sell  random  predict
store hour
1     1        1       9        7
      2        7       7      NaN
2     1        4       3        2
      2        2       3      NaN
have you tried shift?
e.g.
df = pd.DataFrame(list(range(4)))
df.columns = ['sold']
df['predict'] = df.sold.shift(-1)
df
sold predict
0 0 1
1 1 2
2 2 3
3 3 NaN
The answer was to resample so I won't have any holes, and then apply the answer to this question: How do you shift Pandas DataFrame with a multiindex? A sketch of that idea is below.
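A sketch of that resample-then-shift idea, assuming tms is the hourly DatetimeIndex and sell is the column, as in the example above:
df = df.sort_index()
full = df.resample('H').asfreq()                  # insert rows for any missing hours
full['sell_to_predict'] = full['sell'].shift(-1)  # next hour's sales
df = full.loc[df.index]                           # back to the original timestamps only
Alternatively, df['sell'].shift(-1, freq='h').reindex(df.index) shifts the index labels instead of the data, which should give the same one-hour lookahead without resampling.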
