Suppose I'm trying to organize sales data for a membership business.
I only have the start and end dates. Ideally, sales between the start and end dates should appear as 1 instead of missing.
I can't get the 'date' column to be filled with the in-between dates. That is, I want a continuous set of months instead of gaps, and I need to fill the missing data in the other columns with ffill.
I have tried different approaches such as stack/unstack and reindex, but I run into various errors. I'm guessing there's a clean way to do this. What's the best practice here?
Suppose the multiindexed data structure:
                     variable  sales
vendor date
a      2014-01-01  start date      1
       2014-03-01    end date      1
b      2014-03-01  start date      1
       2014-07-01    end date      1
And the desired result:
                     variable  sales
vendor date
a      2014-01-01  start date      1
       2014-02-01         NaN      1
       2014-03-01    end date      1
b      2014-03-01  start date      1
       2014-04-01         NaN      1
       2014-05-01         NaN      1
       2014-06-01         NaN      1
       2014-07-01    end date      1
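(For reference, one way to construct this example frame; the question itself does not show construction code:)
>>> import pandas as pd
>>> df = pd.DataFrame({
...     'vendor': ['a', 'a', 'b', 'b'],
...     'date': pd.to_datetime(['2014-01-01', '2014-03-01',
...                             '2014-03-01', '2014-07-01']),
...     'variable': ['start date', 'end date', 'start date', 'end date'],
...     'sales': [1, 1, 1, 1],
... }).set_index(['vendor', 'date'])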
You can do (note: the old resample(rule='M', how='first') signature is gone from modern pandas; use the method form):
>>> f = lambda df: df.resample('M').first()
>>> df.reset_index(level=0).groupby('vendor').apply(f).drop('vendor', axis=1)
                     variable  sales
vendor date
a      2014-01-31  start date      1
       2014-02-28         NaN    NaN
       2014-03-31    end date      1
b      2014-03-31  start date      1
       2014-04-30         NaN    NaN
       2014-05-31         NaN    NaN
       2014-06-30         NaN    NaN
       2014-07-31    end date      1
and then just call .fillna on the sales column if needed.
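For instance, assuming the resampled result above is stored in out (a name made up here), a per-vendor forward fill keeps values from leaking across vendors:
>>> out = df.reset_index(level=0).groupby('vendor').apply(f).drop('vendor', axis=1)
>>> out['sales'] = out.groupby(level='vendor')['sales'].ffill()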
I have a solution, but it's not really simple. So, here's your DataFrame:
>>> df
                   sales date variable
vendor date
a      2014-01-01      1    start date
       2014-01-03      1      end date
b      2014-01-03      1    start date
       2014-01-07      1      end date
First, I want to create data for the new MultiIndex:
>>> df2 = df.set_index('date variable', append=True).reset_index(level='date')['date']
>>> df2
vendor  date variable
a       start date    2014-01-01
        end date      2014-01-03
b       start date    2014-01-03
        end date      2014-01-07
>>> df2 = df2.unstack()
>>> df2
date variable   end date start date
vendor
a             2014-01-03 2014-01-01
b             2014-01-07 2014-01-03
Now, create tuples for the new MultiIndex:
>>> tuples = [(x[0], d) for x in df2.iterrows()
...           for d in pd.date_range(x[1]['start date'], x[1]['end date'])]
>>> tuples
[('a', '2014-01-01'), ..., ('b', '2014-01-07')]
And create the MultiIndex and reindex():
>>> mi = pd.MultiIndex.from_tuples(tuples, names=df.index.names)
>>> df.reindex(mi)
                   sales date variable
vendor date
a      2014-01-01      1    start date
       2014-01-02    NaN           NaN
       2014-01-03      1      end date
b      2014-01-03      1    start date
       2014-01-04    NaN           NaN
       2014-01-05    NaN           NaN
       2014-01-06    NaN           NaN
       2014-01-07      1      end date
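If you also want the sales gaps filled, as the question asked, a per-vendor forward fill is a reasonable sketch (result is a name made up here):
>>> result = df.reindex(mi)
>>> result['sales'] = result.groupby(level='vendor')['sales'].ffill()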
Related
I would like to create a new column which references the date column minus 1 year and displays the corresponding values:
import pandas as pd
Input DF
df = pd.DataFrame({'consumption': [0, 1, 3, 5],
                   'date': [pd.to_datetime('2017-04-01'),
                            pd.to_datetime('2017-04-02'),
                            pd.to_datetime('2018-04-01'),
                            pd.to_datetime('2018-04-02')]})
>>> df
   consumption       date
0            0 2017-04-01
1            1 2017-04-02
2            3 2018-04-01
3            5 2018-04-02
Expected DF
import numpy as np  # needed for np.nan

df = pd.DataFrame({'consumption': [0, 1, 3, 5],
                   'prev_year_consumption': [np.nan, np.nan, 0, 1],
                   'date': [pd.to_datetime('2017-04-01'),
                            pd.to_datetime('2017-04-02'),
                            pd.to_datetime('2018-04-01'),
                            pd.to_datetime('2018-04-02')]})
>>> df
   consumption  prev_year_consumption       date
0            0                    NaN 2017-04-01
1            1                    NaN 2017-04-02
2            3                    0.0 2018-04-01
3            5                    1.0 2018-04-02
So prev_year_consumption simply holds the values from the consumption column where 1 year is subtracted from the date dynamically.
In SQL I would probably do something like:
SELECT df_past.consumption AS prev_year_consumption, df_current.consumption
FROM df AS df_current
LEFT JOIN df AS df_past ON year(df_current.date) = year(df_past.date) + 1
I'd appreciate any hints.
The notation in pandas is similar. We are still doing a self-merge, but we need to specify that the right_on (or left_on) key has a DateOffset of 1 year:
new_df = df.merge(
    df,
    left_on='date',
    right_on=df['date'] + pd.offsets.DateOffset(years=1),
    how='left'
)
new_df:
        date  consumption_x     date_x  consumption_y     date_y
0 2017-04-01              0 2017-04-01            NaN        NaT
1 2017-04-02              1 2017-04-02            NaN        NaT
2 2018-04-01              3 2018-04-01            0.0 2017-04-01
3 2018-04-02              5 2018-04-02            1.0 2017-04-02
We can further drop and rename columns to get the exact output:
new_df = df.merge(
    df,
    left_on='date',
    right_on=df['date'] + pd.offsets.DateOffset(years=1),
    how='left'
).drop(columns=['date_x', 'date_y']).rename(columns={
    'consumption_y': 'prev_year_consumption'
})
new_df:
        date  consumption_x  prev_year_consumption
0 2017-04-01              0                    NaN
1 2017-04-02              1                    NaN
2 2018-04-01              3                    0.0
3 2018-04-02              5                    1.0
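If you also want consumption_x renamed back to plain consumption (not shown above), one more rename does it:
new_df = new_df.rename(columns={'consumption_x': 'consumption'})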
I have this df:
            revenue   pct_yoy   pct_qoq
2020-06-30   99.721  0.479013  0.092833
2020-03-31   91.250  0.478283  0.087216
2019-12-31   83.930  0.676253  0.135094
2019-09-30   73.941       NaN  0.096657
2019-06-30   67.424       NaN  0.092293
2019-03-31   61.727       NaN  0.232814
2018-09-30   50.070       NaN       NaN
However, looking at the index as a sequential quarterly time series, 2018-12-31 seems to be missing: the index jumps straight from 2019-03-31 to 2018-09-30.
How do I ensure that any missing quarterly dates are inserted, with NaN values in their respective columns?
I'm not quite sure how to approach this problem.
You'll need to generate a list of your own quarterly dates that includes the missing dates. Then you can use .reindex to re-align your dataframe to this new list of dates.
# Get the oldest and newest dates which will be the bounds
# for our new Index
first_date = df.index.min()
last_date = df.index.max()
# Generate dates for every 3 months (3M) from first_date up to last_date
quarterly = pd.date_range(first_date, last_date, freq="3M")
# realign our dataframe using our new quarterly date index
# this will fill NaN for dates that did not exist in the
# original index
out = df.reindex(quarterly)
# if you want to order this from most recent date to least recent date
# do: out.sort_index(ascending=False)
print(out)
            revenue   pct_yoy   pct_qoq
2018-09-30   50.070       NaN       NaN
2018-12-31      NaN       NaN       NaN
2019-03-31   61.727       NaN  0.232814
2019-06-30   67.424       NaN  0.092293
2019-09-30   73.941       NaN  0.096657
2019-12-31   83.930  0.676253  0.135094
2020-03-31   91.250  0.478283  0.087216
2020-06-30   99.721  0.479013  0.092833
If your data contains only quarter-end dates, as in the sample, you may use resample and asfreq to fill the missing quarter-ends:
df_final = df.resample('Q').asfreq()[::-1]
Out[122]:
            revenue   pct_yoy   pct_qoq
2020-06-30   99.721  0.479013  0.092833
2020-03-31   91.250  0.478283  0.087216
2019-12-31   83.930  0.676253  0.135094
2019-09-30   73.941       NaN  0.096657
2019-06-30   67.424       NaN  0.092293
2019-03-31   61.727       NaN  0.232814
2018-12-31      NaN       NaN       NaN
2018-09-30   50.070       NaN       NaN
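Note: pandas 2.2+ deprecates the 'Q' alias in favor of 'QE' (quarter-end), so on newer versions the equivalent would be:
df_final = df.resample('QE').asfreq()[::-1]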
I have a dataframe with a datetime64[ns] column, holding data on an hourly basis:
Datum                Values
2020-01-01 00:00:00       1
2020-01-01 01:00:00      10
....
2020-02-28 00:00:00       5
2020-03-01 00:00:00       4
and another table with closing days, also a datetime64[ns] column, but holding only dates (day granularity):
Dates
2020-02-28
2020-02-29
....
How can I delete all rows in the first dataframe df whose days occur in the second dataframe Dates? So that df becomes:
2020-01-01 00:00:00 1
2020-01-01 01:00:00 10
....
2020-03-01 00:00:00 4
Use Series.dt.floor to set the times to 0, so you can filter with Series.isin and an inverted mask in boolean indexing:
df['Datum'] = pd.to_datetime(df['Datum'])
df1['Dates'] = pd.to_datetime(df1['Dates'])
df = df[~df['Datum'].dt.floor('d').isin(df1['Dates'])]
print (df)
                Datum  Values
0 2020-01-01 00:00:00       1
1 2020-01-01 01:00:00      10
3 2020-03-01 00:00:00       4
EDIT: For a flag column, convert the mask to integers with Series.view or Series.astype:
df['flag'] = df['Datum'].dt.floor('d').isin(df1['Dates']).view('i1')
#alternative
#df['flag'] = df['Datum'].dt.floor('d').isin(df1['Dates']).astype('int')
print (df)
                Datum  Values  flag
0 2020-01-01 00:00:00       1     0
1 2020-01-01 01:00:00      10     0
2 2020-02-28 00:00:00       5     1
3 2020-03-01 00:00:00       4     0
Taking your added comment into consideration (np.where below assumes import numpy as np):
Build a regex string of the Dates in df1:
c = "|".join(df1.Dates.values)
c
Coerce Datum to datetime:
df['Datum'] = pd.to_datetime(df['Datum'])
df.dtypes
Extract Datum as a Dates column of dtype string:
df.set_index(df['Datum'], inplace=True)
df['Dates'] = df.index.date.astype(str)
Boolean-select the dates that appear in both:
m = df.Dates.str.contains(c)
m
Mark inclusive dates as 0 and exclusive ones as 1:
df['drop'] = np.where(m, 0, 1)
df
Drop the helper column and reset the index:
df.reset_index(drop=True).drop(columns=['Dates'])
Outcome
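(The outcome was originally shown as an image; reconstructed from the code above, it should look roughly like this, with the 2020-02-28 row flagged 0:)
                Datum  Values  drop
0 2020-01-01 00:00:00       1     1
1 2020-01-01 01:00:00      10     1
2 2020-02-28 00:00:00       5     0
3 2020-03-01 00:00:00       4     1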
This is my data:
df = pd.DataFrame([
    {'start_date': '2019/12/01', 'end_date': '2019/12/05', 'spend': 10000, 'campaign_id': 1},
    {'start_date': '2019/12/05', 'end_date': '2019/12/09', 'spend': 50000, 'campaign_id': 2},
    {'start_date': '2019/12/01', 'end_date': '',           'spend': 10000, 'campaign_id': 3},
    {'start_date': '2019/12/01', 'end_date': '2019/12/01', 'spend': 50,    'campaign_id': 4},
])
I need to add a column to each row for each day since 2019/12/01, and calculate the spend on that campaign that day, which I'll get by dividing the spend on the campaign by the total number of days it was active.
So here I'd add a column for each day between 1 December and today (10 December). For row 1, the content of the five columns for 1 Dec to 5 Dec would be 2000, then the columns from 6 Dec to 10 Dec would be zero.
I know pandas is well-designed for this kind of problem, but I have no idea where to start!
This doesn't seem like a straightforward task to me. But first, convert your date columns if you haven't already:
df["start_date"] = pd.to_datetime(df["start_date"])
df["end_date"] = pd.to_datetime(df["end_date"])
Then create a helper function for resampling:
def resampler(data, daterange):
    # group by campaign, apply the passed reindexing function,
    # and restore start_date from the generated index level
    temp = (data.set_index('start_date').groupby('campaign_id')
                .apply(daterange)
                .drop("campaign_id", axis=1)
                .reset_index().rename(columns={"level_1": "start_date"}))
    return temp
Now it's a 3-step process. First, resample your data according to the end_date of each group:
df1 = resampler(df, lambda d: d.reindex(pd.date_range(min(d.index), max(d["end_date"]), freq="D"))
                    if d["end_date"].notnull().all() else d)
df1["spend"] = df1.groupby("campaign_id")["spend"].transform(lambda x: x.mean()/len(x))
With the average values calculated, resample again up to the current date:
dates = pd.date_range(min(df["start_date"]),pd.Timestamp.today(),freq="D")
df1 = resampler(df1,lambda d: d.reindex(dates))
Finally, pivot your dataframe so each day becomes a column:
df1 = pd.concat([df1.drop("end_date", axis=1).set_index(["campaign_id", "start_date"]).unstack(),
                 df1.groupby("campaign_id")["end_date"].min()], axis=1)
df1.columns = [*dates, "end_date"]
print (df1)
            2019-12-01 00:00:00 2019-12-02 00:00:00 2019-12-03 00:00:00 2019-12-04 00:00:00 2019-12-05 00:00:00 2019-12-06 00:00:00 2019-12-07 00:00:00 2019-12-08 00:00:00 2019-12-09 00:00:00 2019-12-10 00:00:00   end_date
campaign_id
1                        2000.0              2000.0              2000.0              2000.0              2000.0                 NaN                 NaN                 NaN                 NaN                 NaN 2019-12-05
2                           NaN                 NaN                 NaN                 NaN             10000.0             10000.0             10000.0             10000.0             10000.0                 NaN 2019-12-09
3                       10000.0                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN        NaT
4                          50.0                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN                 NaN 2019-12-01
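Since the question wants zeros rather than NaN for days outside a campaign's active window, a final fill is a small extra step (a sketch against the frame built above):
df1[list(dates)] = df1[list(dates)].fillna(0)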
I want to extract electricity consumption for Site 2.
>>> df4 = pd.read_excel(xls, 'Elec Monthly Cons')
>>> df4
     Site Unnamed: 1 2014-01-01 00:00:00 2014-02-01 00:00:00 2014-03-01 00:00:00  ... 2017-08-01 00:00:00 2017-09-01 00:00:00 2017-10-01 00:00:00 2017-11-01 00:00:00 2017-12-01 00:00:00
0    Site    Profile            JAN 2014            FEB 2014            MAR 2014  ...            AUG 2017            SEP 2017            OCT 2017            NOV 2017            DEC 2017
1  Site 1        NHH               10344                 NaN                 NaN  ...                 NaN                 NaN                 NaN                 NaN                 NaN
2  Site 2         HH              258351              229513              239379  ...                 NaN                 NaN                 NaN                 NaN                 NaN
>>> type(df4)
<class 'pandas.core.frame.DataFrame'>
My goal is to pull out the numerical values, but I do not know how to set the index properly. What I have tried so far does not work at all:
df1 = df.loc[idx[:, 1:2], :]
But:
    raise IndexingError('Too many indexers')
pandas.core.indexing.IndexingError: Too many indexers
It seems that I do not understand indexing. Does the Series type play any role?
>>> df.head
<bound method NDFrame.head of Site          Site 2
Unnamed: 1        HH
EDIT
print (df.index)
Index([ 'Site', 'Unnamed: 1', 2014-01-01 00:00:00,
2014-02-01 00:00:00, 2014-03-01 00:00:00, 2014-04-01 00:00:00,
2014-05-01 00:00:00, 2014-06-01 00:00:00, 2014-07-01 00:00:00,
How do I solve this?
In my opinion it is necessary to remove the :, because it means selecting all columns, but a Series has no columns.
Also it seems there is no MultiIndex, so you then need:
df1 = df.iloc[1:2]
There is a problem: the first 2 rows are headers, so for a MultiIndex DataFrame you need:
df4 = pd.read_excel(xls, 'Elec Monthly Cons', header=[0,1], index_col=[0,1])
And then for selection use:
idx = pd.IndexSlice
df1 = df4.loc[:, idx[:, 'FEB 2014':'MAR 2014']]
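With the two header rows and two index columns read in this way, pulling just the Site 2 numbers might then look like this (a sketch; the 'Site 2'/'HH' index labels are taken from the preview above):
site2 = df4.loc[idx['Site 2', :], :]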