I have this df:
revenue pct_yoy pct_qoq
2020-06-30 99.721 0.479013 0.092833
2020-03-31 91.250 0.478283 0.087216
2019-12-31 83.930 0.676253 0.135094
2019-09-30 73.941 NaN 0.096657
2019-06-30 67.424 NaN 0.092293
2019-03-31 61.727 NaN 0.232814
2018-09-30 50.070 NaN NaN
However, looking at the last index value, I seem to be missing 2018-12-31 when reading the index as a sequential quarterly time series: the index jumps straight from 2019-03-31 to 2018-09-30.
How do I ensure that any missing quarterly dates are inserted, with NaN values for their respective columns?
I'm not quite sure how to approach this problem.
You'll need to generate a list of your own quarterly dates that includes the missing dates. Then you can use .reindex to re-align your dataframe to this new list of dates.
# Get the oldest and newest dates which will be the bounds
# for our new Index
first_date = df.index.min()
last_date = df.index.max()
# Generate dates for every 3 months (3M) from first_date up to last_date
quarterly = pd.date_range(first_date, last_date, freq="3M")
# realign our dataframe using our new quarterly date index
# this will fill NaN for dates that did not exist in the
# original index
out = df.reindex(quarterly)
# if you want to order this from most recent date to least recent date
# do: out.sort_index(ascending=False)
print(out)
revenue pct_yoy pct_qoq
2018-09-30 50.070 NaN NaN
2018-12-31 NaN NaN NaN
2019-03-31 61.727 NaN 0.232814
2019-06-30 67.424 NaN 0.092293
2019-09-30 73.941 NaN 0.096657
2019-12-31 83.930 0.676253 0.135094
2020-03-31 91.250 0.478283 0.087216
2020-06-30 99.721 0.479013 0.092833
If your data contains only quarter-end dates, as in the sample, you may use resample and asfreq to fill in the missing quarter-ends:
df_final = df.resample('Q').asfreq()[::-1]
Out[122]:
revenue pct_yoy pct_qoq
2020-06-30 99.721 0.479013 0.092833
2020-03-31 91.250 0.478283 0.087216
2019-12-31 83.930 0.676253 0.135094
2019-09-30 73.941 NaN 0.096657
2019-06-30 67.424 NaN 0.092293
2019-03-31 61.727 NaN 0.232814
2018-12-31 NaN NaN NaN
2018-09-30 50.070 NaN NaN
I have a multiindexed dataframe (but with more columns)
2020-12-22 09:47:50 2020-12-23 16:43:45 2020-12-22 15:00
Lines VehicleNumber
102 9405 3 NaN 3
9415 NaN NaN NaN
9416 NaN NaN NaN
Now I want to sort the columns so that I have the earliest date as the first column and the latest as the last. After that I want to delete the columns that are not between two dates, say 2020-12-22 10:00:00 < date < 2020-12-23 10:00:00. I tried transposing the dataframe, but that doesn't seem to work when I have a multiindex.
Expected output:
2020-12-22 15:00 2020-12-23 16:43:45
Lines VehicleNumber
102 9405 3 NaN
9415 NaN NaN
9416 NaN NaN
So first we sort the columns by date and then check if they are between the two dates:
2020-12-22 10:00:00 < date < 2020-12-23 10:00:00 hence delete one column
First, convert the string columns to datetime:
In [2244]: df.columns = pd.to_datetime(df.columns)
Then, sort df based on datetimes:
In [2246]: df = df.reindex(sorted(df.columns), axis=1)
Suppose you want to keep only the columns that are greater than the following:
In [2251]: x = '2020-12-22 10:00:00'
Use a list comprehension:
In [2257]: m = [i for i in df.columns if i > pd.to_datetime(x)]
In [2258]: df[m]
Out[2258]:
2020-12-22 15:00:00 2020-12-23 16:43:45
Lines VehicleNumber
102 9405.0 3.0 NaN
9415 NaN NaN NaN
9416 NaN NaN NaN
I'm trying to find ways of analysing log data from a git repo. I've dumped the data to a file that looks like this:
"hash","email","date","subject"
"65319af6e","jbrockmendel#gmail.com","2020-11-28","REF-IntervalIndex._assert_can_do_setop-38112"
"0bf58d8a9","simonjayhawkins#gmail.com","2020-11-28","DOC-add-contibutors-to-1.2.0-release-notes-38132"
"d16df293c","45562402+rhshadrach#users.noreply.github.com","2020-11-28","TYP-Add-cast-to-ABC-Index-like-types-38043"
"2d661a899","jbrockmendel#gmail.com","2020-11-28","CLN-simplify-mask_missing-38127"
"ba2ae2f73","jbrockmendel#gmail.com","2020-11-28","CLN-remove-unreachable-in-core.dtypes-38128"
I am able to get the rows that have a count of more than 20:
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
dataframe = pd.read_csv("git-log-2020.csv")
dataframe['date'] = pd.to_datetime(dataframe['date'])
grouped_dataframe = dataframe.groupby([pd.Grouper(key='date', freq='M'), "email"])[["subject"]].count()
# Select all the users that have contributed more than 20 times.
print(grouped_dataframe[grouped_dataframe['subject'] > 20])
But I would like to be able to find the following:
Who are the top three users per month?
What are the total commits for each month?
What is the average number of commits per user per month? What is the monthly average activity of the users?
My code and data can be found here: https://github.com/mooperd/git-analysis-pandas
Ta, Andrew
All answers use your sample data, which I call df.
Top three:
df.groupby(pd.Grouper(key='date', freq='1M')).apply(lambda d: d['email'].value_counts().sort_values(ascending = False).head(3))
produces
date email
2020-11-30 jbrockmendel#gmail.com 3
simonjayhawkins#gmail.com 1
45562402+rhshadrach#users.noreply.github.com 1
Name: email, dtype: int64
total commits per month
df.groupby(pd.Grouper(key='date', freq='M'))['subject'].count()
output:
date
2020-11-30 5
Freq: M, Name: subject, dtype: int64
As for the average number of commits per user per month, it's not entirely clear what you want: for each user, the ratio of that user's commits to the total number of months in your dataset, or only to the months in which they actually made commits? Sample output would be useful here.
But I think the following transformation is useful: it produces a table of the number of commits by email and month, so you can take averages any way you like.
df2 = df.groupby([pd.Grouper(key='date', freq='M')])['email'].value_counts().unstack(level=0)
df2
On your real data it produces a table that is too large to reproduce here, but it starts with:
date 2020-01-31 2020-02-29 2020-03-31 2020-04-30 2020-05-31 2020-06-30 2020-07-31 2020-08-31 2020-09-30 2020-10-31 2020-11-30
email
10430241+xh2#users.noreply.github.com NaN NaN NaN NaN NaN NaN NaN NaN 1.0 NaN NaN
12420863+danchev#users.noreply.github.com NaN NaN NaN NaN NaN NaN NaN NaN 1.0 NaN NaN
12769364+tnwei#users.noreply.github.com NaN NaN NaN NaN NaN NaN NaN NaN NaN 1.0 NaN
12874561+jeet-parekh#users.noreply.github.com NaN NaN NaN NaN NaN NaN NaN NaN 1.0 NaN NaN
As you can see, it has a lot of NaNs, which correspond to users who made no commits that month; this matters when, e.g., calculating averages only over months with commits. For example
df2.mean(axis=1).sort_values(ascending = False)
produces average monthly commits for a user (sorted), and
df2.mean(axis=0).sort_values(ascending = False)
produces the average number of commits per user for each month (sorted).
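Whether the NaNs should count as zero-commit months changes the averages, so it's worth making the choice explicit. A small sketch on a toy version of df2 (the email addresses are hypothetical):

```python
import pandas as pd

# toy df2: commits per email (rows) by month (columns), NaN = no commits
df2 = pd.DataFrame(
    {pd.Timestamp("2020-10-31"): [2.0, None],
     pd.Timestamp("2020-11-30"): [3.0, 1.0]},
    index=["a@example.com", "b@example.com"],
)

# mean() skips NaN, so it averages only over months with commits;
# fillna(0) first counts quiet months as zero-commit months instead
avg_incl_quiet_months = df2.fillna(0).mean(axis=1)
avg_active_months_only = df2.mean(axis=1)
```

For b@example.com (one commit in one of two months) the two definitions give 0.5 and 1.0 respectively.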
This is my data:
df = pd.DataFrame([
    {'start_date': '2019/12/01', 'end_date': '2019/12/05', 'spend': 10000, 'campaign_id': 1},
    {'start_date': '2019/12/05', 'end_date': '2019/12/09', 'spend': 50000, 'campaign_id': 2},
    {'start_date': '2019/12/01', 'end_date': '', 'spend': 10000, 'campaign_id': 3},
    {'start_date': '2019/12/01', 'end_date': '2019/12/01', 'spend': 50, 'campaign_id': 4},
])
I need to add a column for each day since 2019/12/01 and calculate the spend on each campaign for that day, which I'll get by dividing the campaign's spend by the total number of days it was active.
So here I'd add a column for each day between 1 December and today (10 December). For row 1, the five columns for 1 Dec to 5 Dec would each contain 2000, and the five columns from 6 Dec to 10 Dec would be zero.
I know pandas is well-designed for this kind of problem, but I have no idea where to start!
This doesn't seem like a straightforward task to me. But first, convert your date columns if you haven't already:
df["start_date"] = pd.to_datetime(df["start_date"])
df["end_date"] = pd.to_datetime(df["end_date"])
Then create a helper function for resampling:
def resampler(data, daterange):
    temp = (data.set_index('start_date').groupby('campaign_id')
                .apply(daterange)
                .drop("campaign_id", axis=1)
                .reset_index().rename(columns={"level_1": "start_date"}))
    return temp
Now it's a three-step process. First, resample your data according to the end_date of each group:
df1 = resampler(df, lambda d: d.reindex(pd.date_range(min(d.index),max(d["end_date"]),freq="D")) if d["end_date"].notnull().all() else d)
df1["spend"] = df1.groupby("campaign_id")["spend"].transform(lambda x: x.mean()/len(x))
With the average values calculated, resample again to current date:
dates = pd.date_range(min(df["start_date"]),pd.Timestamp.today(),freq="D")
df1 = resampler(df1,lambda d: d.reindex(dates))
Finally transpose your dataframe:
df1 = pd.concat([df1.drop("end_date",axis=1).set_index(["campaign_id","start_date"]).unstack(),
df1.groupby("campaign_id")["end_date"].min()], axis=1)
df1.columns = [*dates,"end_date"]
print (df1)
#
2019-12-01 00:00:00 2019-12-02 00:00:00 2019-12-03 00:00:00 2019-12-04 00:00:00 2019-12-05 00:00:00 2019-12-06 00:00:00 2019-12-07 00:00:00 2019-12-08 00:00:00 2019-12-09 00:00:00 2019-12-10 00:00:00 end_date
campaign_id
1 2000.0 2000.0 2000.0 2000.0 2000.0 NaN NaN NaN NaN NaN 2019-12-05
2 NaN NaN NaN NaN 10000.0 10000.0 10000.0 10000.0 10000.0 NaN 2019-12-09
3 10000.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaT
4 50.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN 2019-12-01
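A more compact alternative to the three-step resample, not the answer above but the same split-evenly-then-pivot idea, is to expand each campaign into one row per active day with explode and then pivot. This sketch assumes, as the output above does, that a blank end_date means a one-day campaign:

```python
import pandas as pd

df = pd.DataFrame([
    {"start_date": "2019/12/01", "end_date": "2019/12/05", "spend": 10000, "campaign_id": 1},
    {"start_date": "2019/12/05", "end_date": "2019/12/09", "spend": 50000, "campaign_id": 2},
    {"start_date": "2019/12/01", "end_date": "", "spend": 10000, "campaign_id": 3},
    {"start_date": "2019/12/01", "end_date": "2019/12/01", "spend": 50, "campaign_id": 4},
])
df["start_date"] = pd.to_datetime(df["start_date"])
# empty strings become NaT; treat a missing end as a one-day campaign
df["end_date"] = pd.to_datetime(df["end_date"]).fillna(df["start_date"])

# one row per active day; spend is then split evenly across those days
df["day"] = df.apply(lambda r: list(pd.date_range(r["start_date"], r["end_date"], freq="D")), axis=1)
long = df.explode("day")
long["daily_spend"] = long["spend"] / long.groupby("campaign_id")["day"].transform("size")

# campaigns as rows, one column per day from the earliest start to today
all_days = pd.date_range(df["start_date"].min(), pd.Timestamp.today().normalize(), freq="D")
out = (long.pivot(index="campaign_id", columns="day", values="daily_spend")
           .reindex(columns=all_days))
```

Campaign 1 gets 2000 per day for 1-5 Dec and NaN afterwards, matching the table above.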
I have a dataframe full of bookings for one room (rows: booking_id, check-in date, check-out date) that I want to transform into a time series indexed by all the days of the year (index: days of year; feature: booked or not).
I have calculated the duration of the bookings, and reindexed the dataframe daily.
Now I need to forward-fill the dataframe, but only a limited number of times: the duration of each booking.
Tried iterating through each row with ffill but it applies to the entire dataframe, not to selected rows.
Any idea how I can do that?
Here is my code:
import numpy as np
import pandas as pd
#create dataframe
data=[[1, '2019-01-01', '2019-01-02', 1],
[2, '2019-01-03', '2019-01-07', 4],
[3, '2019-01-10','2019-01-13', 3]]
df = pd.DataFrame(data, columns=['booking_id', 'check-in', 'check-out', 'duration'])
#cast dates to datetime formats
df['check-in'] = pd.to_datetime(df['check-in'])
df['check-out'] = pd.to_datetime(df['check-out'])
#create timeseries indexed on check-in date
df2 = df.set_index('check-in')
#create new index and reindex timeseries
idx = pd.date_range(min(df['check-in']), max(df['check-out']), freq='D')
ts = df2.reindex(idx)
I have this:
booking_id check-out duration
2019-01-01 1.0 2019-01-02 1.0
2019-01-02 NaN NaT NaN
2019-01-03 2.0 2019-01-07 4.0
2019-01-04 NaN NaT NaN
2019-01-05 NaN NaT NaN
2019-01-06 NaN NaT NaN
2019-01-07 NaN NaT NaN
2019-01-08 NaN NaT NaN
2019-01-09 NaN NaT NaN
2019-01-10 3.0 2019-01-13 3.0
2019-01-11 NaN NaT NaN
2019-01-12 NaN NaT NaN
2019-01-13 NaN NaT NaN
I expect to have:
booking_id check-out duration
2019-01-01 1.0 2019-01-02 1.0
2019-01-02 1.0 2019-01-02 1.0
2019-01-03 2.0 2019-01-07 4.0
2019-01-04 2.0 2019-01-07 4.0
2019-01-05 2.0 2019-01-07 4.0
2019-01-06 2.0 2019-01-07 4.0
2019-01-07 NaN NaT NaN
2019-01-08 NaN NaT NaN
2019-01-09 NaN NaT NaN
2019-01-10 3.0 2019-01-13 3.0
2019-01-11 3.0 2019-01-13 3.0
2019-01-12 3.0 2019-01-13 3.0
2019-01-13 NaN NaT NaN
filluntil = ts['check-out'].ffill()
m = ts.index < filluntil.values
#reshaping the mask to be the same shape as ts
m = np.repeat(m, ts.shape[1]).reshape(ts.shape)
ts = ts.ffill().where(m)
First we create a series where the dates are ffilled. Then we create a mask where the index is less than the filled values. Then we fill based on our mask.
If you want to include the row with the check out date, change m from < to <=
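Putting the mask approach together with the question's setup gives a runnable end-to-end sketch:

```python
import numpy as np
import pandas as pd

# the question's sample bookings
data = [[1, "2019-01-01", "2019-01-02", 1],
        [2, "2019-01-03", "2019-01-07", 4],
        [3, "2019-01-10", "2019-01-13", 3]]
df = pd.DataFrame(data, columns=["booking_id", "check-in", "check-out", "duration"])
df["check-in"] = pd.to_datetime(df["check-in"])
df["check-out"] = pd.to_datetime(df["check-out"])

# daily index spanning all bookings
idx = pd.date_range(df["check-in"].min(), df["check-out"].max(), freq="D")
ts = df.set_index("check-in").reindex(idx)

# forward-fill, but only for dates strictly before each booking's check-out
filluntil = ts["check-out"].ffill()
m = ts.index < filluntil.values
m = np.repeat(m, ts.shape[1]).reshape(ts.shape)
ts = ts.ffill().where(m)
```

With `<`, the gap days (e.g. 8-9 Jan) and the check-out days themselves stay NaN; swap in `<=` to keep the check-out rows filled.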
I think that to "forward-fill the dataframe" you should use the pandas interpolate method. Documentation can be found here:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html
you can do something like this:
int_how_many_consecutive_to_fill = 3
df2 = df2.interpolate(axis=0, limit=int_how_many_consecutive_to_fill, limit_direction='forward')
Look at the documentation for interpolate; there is a lot of custom functionality you can enable through its parameters.
EDIT:
Doing this using the row value in the duration column for each interpolation is a bit messy, but I think the following should work (there may be a less hacky, cleaner solution using some functionality in pandas or another library I am unaware of):
#get the rows that are entirely NaN:
nans_df = df2[df2.isnull().all(axis=1)]
#get the rows with at least one non-NaN value:
non_nans_df = df2[df2.notnull().any(axis=1)]
#list of dfs we will concat vertically at the end to get the final dataframe.
dfs = []
#iterate through each all-NaN row.
for nan_index, nan_row in nans_df.iterrows():
    previous_day = nan_index - pd.DateOffset(1)
    #skip this row if the previous day is also a NaN day; this mostly
    #handles the case where the first row is a NaN one.
    if previous_day not in non_nans_df.index:
        continue
    date_offset = 0
    #count how many sequential all-NaN rows follow this one.
    while (nan_index + pd.DateOffset(date_offset)) in nans_df.index:
        date_offset += 1
    #the last date in the run of consecutive all-NaN days after this one.
    end_sequence_date = nan_index + pd.DateOffset(date_offset)
    #combine the previous day (confirmed non-NaN by the if statement above)
    #with the run of sequential NaN rows after it.
    df_to_interpolate = pd.concat([non_nans_df.loc[[previous_day]],
                                   nans_df.loc[nan_index:end_sequence_date]])
    #pull the duration value from the first row of df_to_interpolate.
    limit_val = int(df_to_interpolate['duration'].iloc[0])
    #interpolate the dataframe using limit_val
    df_to_interpolate = df_to_interpolate.interpolate(axis=0, limit=limit_val,
                                                      limit_direction='forward')
    #append df_to_interpolate to our list that gets combined at the end.
    dfs.append(df_to_interpolate)
#final dataframe, interpolated forward using a dynamic limit value based on
#the most recent duration value.
final_df = pd.concat(dfs)
I want to get the lagged data from a dataset. The dataset is monthly and looks like this:
Final Profits
JCCreateDate
2016-04-30 31163371.59
2016-05-31 27512300.34
...
2019-02-28 16800693.82
2019-03-31 5384227.13
Now, out of the above dataset, I've selected a window of data (the last 12 months), from whose dates I want to subtract 3, 6, 9 and 12 months.
I've created the window dataset like this:
df_all = pd.read_csv('dataset.csv')
df = pd.read_csv('window_dataset.csv')
data_start, data_end = pd.to_datetime(df.first_valid_index()), pd.to_datetime(df.last_valid_index())
dr = pd.date_range(data_start, data_end, freq='M')
Now, for the date range dr, I want to subtract the months. Let's suppose I subtract 3 months from dr and try to retrieve the data from df_all:
df_all.loc[dr - pd.DateOffset(months=3)]
which gives me the following output:
Final Profits
2018-01-30 NaN
2018-02-28 9240766.46
2018-03-30 NaN
2018-04-30 13250515.05
2018-05-31 12539224.15
2018-06-30 17778326.04
2018-07-31 19345671.02
2018-08-30 NaN
2018-09-30 14815607.14
2018-10-31 28979099.74
2018-11-28 NaN
2018-12-31 12395273.24
As one can see, I've got some NaNs because months like January and March have 31 days, so the subtraction lands on the wrong day of the month. How do I deal with this?
I'm not 100% sure what you are looking for, but I suspect you want shift.
# set up dataframe
index = pd.date_range(start='2016-04-30', end='2019-03-31', freq='M' )
df = pd.DataFrame(np.random.randint(5000000, 50000000, 36), index=index, columns=['Final Profits'])
# create three columns, shifting and subtracting from 'Final Profits'
df['3mos'] = df['Final Profits'] - df['Final Profits'].shift(3)
df['6mos'] = df['Final Profits'] - df['Final Profits'].shift(6)
df['9mos'] = df['Final Profits'] - df['Final Profits'].shift(9)
print(df.head(12))
Final Profits 3mos 6mos 9mos
2016-04-30 45197972 NaN NaN NaN
2016-05-31 5029292 NaN NaN NaN
2016-06-30 20310120 NaN NaN NaN
2016-07-31 10514197 -34683775.0 NaN NaN
2016-08-31 31219405 26190113.0 NaN NaN
2016-09-30 21504727 1194607.0 NaN NaN
2016-10-31 19234437 8720240.0 -25963535.0 NaN
2016-11-30 18881711 -12337694.0 13852419.0 NaN
2016-12-31 27237712 5732985.0 6927592.0 NaN
2017-01-31 21692788 2458351.0 11178591.0 -23505184.0
2017-02-28 7869701 -11012010.0 -23349704.0 2840409.0
2017-03-31 20943248 -6294464.0 -561479.0 633128.0
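As for the NaNs in the question's `dr - pd.DateOffset(months=3)` attempt: DateOffset preserves the day of month, so subtracting it from 2018-04-30 yields 2018-01-30, which isn't in a month-end index. A month-end-aware offset sidesteps this; a minimal sketch:

```python
import pandas as pd

# month-end dates like the question's dr
dr = pd.to_datetime(["2018-04-30", "2018-05-31", "2018-06-30"])

# DateOffset(months=3) keeps the day of month (2018-04-30 -> 2018-01-30);
# MonthEnd(3) instead rolls back exactly three month-ends
lagged = dr - pd.offsets.MonthEnd(3)
```

Every lagged date is again a month-end, so lookups against a month-end index no longer miss.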