I have a dataframe containing weekly sales for different products (a, b, c). If there were zero sales in a given week (e.g. week 4), there is no record for that week:
In[1]
import pandas as pd
import numpy as np

df = pd.DataFrame({'product': list('aaaabbbbcccc'),
                   'week': [1, 2, 3, 5, 1, 2, 3, 5, 1, 2, 3, 4],
                   'sales': np.power(2, range(12))})
Out[1]
product sales week
0 a 1 1
1 a 2 2
2 a 4 3
3 a 8 5
4 b 16 1
5 b 32 2
6 b 64 3
7 b 128 5
8 c 256 1
9 c 512 2
10 c 1024 3
11 c 2048 4
I would like to create a new column containing the cumulative sales for the previous n weeks, grouped by product. E.g. for n=2 it should look like the last_2_weeks column below:
product sales week last_2_weeks
0 a 1 1 0
1 a 2 2 1
2 a 4 3 3
3 a 8 5 4
4 b 16 1 0
5 b 32 2 16
6 b 64 3 48
7 b 128 5 64
8 c 256 1 0
9 c 512 2 256
10 c 1024 3 768
11 c 2048 4 1536
If there was a record for every week, I could just use rolling_sum as described in this question.
Is there a way to set 'week' as an index and only calculate the sum on that index? Or could I resample 'week' and set sales to zero for all missing rows?
resample is only valid with a DatetimeIndex, TimedeltaIndex or PeriodIndex, but reindex works with plain integers.
First, the week column is set as the index. Then df is grouped by the product column, and each group is reindexed against the full range of weeks up to that group's maximum. Missing values are filled with 0.
import pandas as pd
import numpy as np

df = pd.DataFrame({'product': list('aaaabbbbcccc'),
                   'week': [1, 2, 3, 5, 1, 2, 3, 5, 1, 2, 3, 4],
                   'sales': np.power(2, range(12))})
df = df.set_index('week')

def reindex_by_max_index_of_group(df):
    # Complete range of weeks 1..max for this group.
    index = range(1, max(df.index) + 1)
    return df.reindex(index, fill_value=0)

df = df.groupby('product').apply(reindex_by_max_index_of_group)
df.drop(['product'], inplace=True, axis=1)  # 'product' now lives in the index
print(df.reset_index())
# product week sales
#0 a 1 1
#1 a 2 2
#2 a 3 4
#3 a 4 0
#4 a 5 8
#5 b 1 16
#6 b 2 32
#7 b 3 64
#8 b 4 0
#9 b 5 128
#10 c 1 256
#11 c 2 512
#12 c 3 1024
#13 c 4 2048
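This fills the gaps but does not yet compute the 2-week sums the question asks for. A minimal sketch completing that step (the shift/rolling combination is my addition, not part of this answer):
filled = df.reset_index()
# Shift by one week so the current week is excluded, then sum a 2-week
# window; min_periods=1 keeps the partial windows near the start.
filled['last_2_weeks'] = (filled.groupby('product')['sales']
                                .transform(lambda s: s.shift()
                                                      .rolling(2, min_periods=1)
                                                      .sum())
                                .fillna(0))
print(filled)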
You can use pivot to create a table in which the missing values are auto-filled. This works on its own provided at least one product has an entry for each week; reindex can be used to guarantee a row in the table for every week regardless.
A rolling sum can then be applied to it:
import pandas, numpy

df = pandas.DataFrame({'product': list('aaaabbbbcccc'),
                       'week': [1, 2, 3, 5, 1, 2, 3, 5, 1, 2, 3, 4],
                       'sales': numpy.power(2, range(12))})

sales = df.pivot(index='week', columns='product')
# Cope with weeks when there were no sales at all
sales = sales.reindex(range(min(sales.index), 1 + max(sales.index))).fillna(0)

# Sum a 3-week window (current week included), then subtract the current
# week, leaving the sum of the 2 preceding weeks. Note: the original
# pandas.rolling_sum(sales, 3, min_periods=1) was deprecated in pandas 0.18
# and later removed; the rolling() method below is its modern equivalent.
sales.rolling(3, min_periods=1).sum() - sales
This gives the following result, which matches the desired output (in that it provides the sum for the preceding two weeks):
product a b c
week
1 0 0 0
2 1 16 256
3 3 48 768
4 6 96 1536
5 4 64 3072
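If these per-product columns need to go back onto the original rows, the pivoted result can be stacked and merged; a sketch (assuming the rolling() call above, with column names coming from the pivot):
last2 = sales.rolling(3, min_periods=1).sum() - sales
# Back to long format: one row per (week, product), then merge onto df.
long_form = (last2.stack('product')
                  .rename(columns={'sales': 'last_2_weeks'})
                  .reset_index())
result = df.merge(long_form, on=['week', 'product'])
print(result)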
Related
I have a dataframe and I want to replace each value 7 with the rounded mean of its column, computed without the other 7s in that column. Here is a simple example:
import pandas as pd

df = pd.DataFrame()
df['a'] = [1, 2, 3]
df['b'] = [3, 0, -1]
df['c'] = [4, 7, 6]
df['d'] = [7, 7, 6]
a b c d
0 1 3 4 7
1 2 0 7 7
2 3 -1 6 6
And here is the output I want:
a b c d
0 1 3 4 2
1 2 0 3 2
2 3 -1 6 6
For example, in row 1, the mean of column c is 3.33 (the 7 contributes nothing to the sum), whose round is 3; and the mean of column d is 2 (since we do not consider the other 7 in that column).
Can you please help me with that?
Here is one way to do it:
import numpy as np

# Replace 7 with NaN so the 7s can be targeted by fillna below.
df.replace(7, np.nan, inplace=True)
# Each column's mean is computed with the former 7s counted as 0,
# the NaNs are filled with it, and the result is rounded to integers.
(df.fillna(df.apply(lambda x: x.replace(np.nan, 0).mean(skipna=False)))
   .round(0)
   .astype(int))
a b c d
0 1 3 4 2
1 2 0 3 2
2 3 -1 6 6
# Alternative: zero out the 7s, take the column means, and replace the
# 7s with those per-column means. (astype(int) truncates, which matches
# the example's rounding here.)
temp = df.replace(to_replace=7, value=0, inplace=False).copy()
df.replace(to_replace=7, value=temp.mean().astype(int), inplace=True)
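Equivalently, under the same treat-7-as-0 convention, the replacement can be done column by column; a minimal sketch, not taken from the answers above:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [3, 0, -1],
                   'c': [4, 7, 6], 'd': [7, 7, 6]})

# Per-column means with the 7s counted as 0, rounded to integers.
means = df.replace(7, 0).mean().round().astype(int)

# Replace each 7 with its own column's mean.
df = df.apply(lambda col: col.replace(7, means[col.name]))
print(df)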
Suppose I have a heterogeneous dataframe:
a b c d
1 1 2 3 4
2 5 6 7 8
3 9 10 11 12
4 13 14 15 16
And I want to stack the rows like so:
a b c d
1 1,5,9,13 2,6,10,14 3,7,11,15 4,8,12,16
Etc...
All the references for groupby etc. seem to require some feature to group on; I just want to put x rows into columns, regardless of their content. Each row has a timestamp, and I am looking to group values by sample count, so I want 1 row with all the values of x sample rows as columns.
I should end up with a dataframe that has x times the original number of columns and the original number of rows divided by x.
I'm sure there must be some simple method I'm missing here that avoids a series of loops.
If you need to join all values into strings, use:
df1 = df.astype(str).agg(','.join).to_frame().T
print(df1)
a b c d
0 1,5,9,13 2,6,10,14 3,7,11,15 4,8,12,16
Or if you need to create lists, use:
df2 = pd.DataFrame([[list(df[x]) for x in df]], columns=df.columns)
print(df2)
a b c d
0 [1, 5, 9, 13] [2, 6, 10, 14] [3, 7, 11, 15] [4, 8, 12, 16]
Or if you need scalars with a MultiIndex (generated from the index and column labels), use:
df3 = df.unstack().to_frame().T
print(df3)
a b c d
1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4
0 1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16
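The question actually asks for x rows at a time rather than the whole frame at once; the same unstack idea can be applied per chunk of x rows. A sketch with x = 2 (the chunk size is my choice for illustration):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 5, 9, 13], 'b': [2, 6, 10, 14],
                   'c': [3, 7, 11, 15], 'd': [4, 8, 12, 16]})

x = 2
# Label rows 0,0,1,1,... and flatten each chunk of x rows into one row.
df4 = (df.groupby(np.arange(len(df)) // x)
         .apply(lambda g: g.reset_index(drop=True).unstack()))
print(df4)  # len(df)/x rows, x * 4 columns with a (column, position) MultiIndex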
I'm trying to create a column which contains a cumulative sum of the number of entries, tid, which are grouped according to unique values of (rid, tid). The cumulative sum should increment by the number of entries in the grouping, as shown in the df3 dataframe below, rather than one at a time.
import pandas as pd

df1 = pd.DataFrame({
    'rid': [1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 5, 5],
    'tid': [1, 2, 2, 1, 1, 3, 1, 4, 5, 1, 1, 1, 3]})
rid tid
0 1 1
1 1 2
2 1 2
3 2 1
4 2 1
5 2 3
6 3 1
7 3 4
8 4 5
9 5 1
10 5 1
11 5 1
12 5 3
After the required operation, this gives:
df3 = pd.DataFrame({
    'rid': [1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 5, 5],
    'tid': [1, 2, 2, 1, 1, 3, 1, 4, 5, 1, 1, 1, 3],
    'groupentries': [1, 2, 2, 2, 2, 1, 1, 1, 1, 3, 3, 3, 1],
    'cumulativeentries': [1, 2, 2, 3, 3, 1, 4, 1, 1, 7, 7, 7, 2]})
rid tid groupentries cumulativeentries
0 1 1 1 1
1 1 2 2 2
2 1 2 2 2
3 2 1 2 3
4 2 1 2 3
5 2 3 1 1
6 3 1 1 4
7 3 4 1 1
8 4 5 1 1
9 5 1 3 7
10 5 1 3 7
11 5 1 3 7
12 5 3 1 2
The derived column that I'm after is the cumulativeentries column, although I've only figured out how to generate the intermediate column groupentries using pandas:
df1.groupby(["rid", "tid"]).size()
Values in cumulativeentries are actually a kind of running count.
The task is to count occurrences of the current tid in the "source area" of the tid column: from the beginning of the DataFrame, up to and including the end of the current group.
To compute both required values for each group, I defined the following function:
def fn(grp):
    lastRow = grp.iloc[-1]  # last row of the current group
    lastId = lastRow.name   # index of this row
    tids = df1.truncate(after=lastId).tid
    return [grp.index.size, tids[tids == lastRow.tid].size]
To get the "source area" mentioned above, I used the truncate function. In my opinion it is a very intuitive solution, based on the notion of the "source area".
The function returns a list containing both required values: the size of the current group, and the number of tids equal to the current tid in the truncated tid column.
To apply this function, run:
df2 = df1.groupby(['rid', 'tid']).apply(fn).apply(pd.Series)\
.rename(columns={0: 'groupentries', 1: 'cumulativeentries'})
Details:
apply(fn) generates a Series containing 2-element lists.
apply(pd.Series) converts it to a DataFrame (with default column names).
rename sets the target column names.
And the last thing to do is to join this table to df1:
df1.join(df2, on=['rid', 'tid'])
For the first column, use GroupBy.transform with DataFrameGroupBy.size. For the second, use a custom function: take all values of the tid column up to the last index of the current group, compare them with the group's last value, and count the matches with sum:
# Slice tid from the start of the frame through the group's last index,
# compare with the group's own tid, and count the matches.
f = lambda x: (df1['tid'].iloc[:x.index[-1] + 1] == x.iat[-1]).sum()
df1['groupentries'] = df1.groupby(["rid", "tid"])['rid'].transform('size')
df1['cumulativeentries'] = df1.groupby(["rid", "tid"])['tid'].transform(f)
print(df1)
rid tid groupentries cumulativeentries
0 1 1 1 1
1 1 2 2 2
2 1 2 2 2
3 2 1 2 3
4 2 1 2 3
5 2 3 1 1
6 3 1 1 4
7 3 4 1 1
8 4 5 1 1
9 5 1 3 7
10 5 1 3 7
11 5 1 3 7
12 5 3 1 2
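Both answers rescan the tid column once per group. As a side note, the same cumulativeentries values can be obtained fully vectorized with a per-tid running count; this is an alternative sketch, not part of either answer:
running = df1.groupby('tid').cumcount() + 1  # running count of each tid
# The running count at a group's last row is the group's maximum.
df1['cumulativeentries'] = running.groupby(
    [df1['rid'], df1['tid']]).transform('max')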
I have a dataframe of the following type:
A B
0 1 2
1 4 5
2 7 8
3 10 11
4 13 14
5 16 17
I want to calculate the mean of the first 3 elements of each column, then the next 3 elements, and so on, and store the results in a dataframe.
Desired output:
A B
0 4 5
1 13 14
Using groupby was one of the approaches I thought of, but I am unable to figure out how to use it in this case.
If the index is the default RangeIndex, use integer division of the index and pass the result to groupby:
df = df.groupby(df.index // 3).mean()
print(df)
A B
0 4 5
1 13 14
Detail:
print(df.index // 3)
Int64Index([0, 0, 0, 1, 1, 1], dtype='int64')
A general solution, which works with any index values, builds a grouping array from the length of the DataFrame:
import numpy as np

df = df.groupby(np.arange(len(df)) // 3).mean()
Detail:
print(np.arange(len(df)) // 3)
[0 0 0 1 1 1]
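If the number of rows is not a multiple of 3, the trailing group simply averages whatever rows remain; a quick sketch to illustrate:
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 4, 7, 10, 13, 16, 19],
                   'B': [2, 5, 8, 11, 14, 17, 20]})
# Groups: [0 0 0 1 1 1 2] -- the last group holds a single row.
print(df.groupby(np.arange(len(df)) // 3).mean())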
How do I add an order number column to an existing DataFrame?
This is my DataFrame:
import pandas as pd
import math

frame = pd.DataFrame([[1, 4, 2], [8, 9, 2], [10, 2, 1]], columns=['a', 'b', 'c'])

def add_stats(row):
    row['sum'] = sum([row['a'], row['b'], row['c']])
    row['sum_sq'] = sum(math.pow(v, 2) for v in [row['a'], row['b'], row['c']])
    row['max'] = max(row['a'], row['b'], row['c'])
    return row

frame = frame.apply(add_stats, axis=1)
print(frame.head())
The resulting data is:
a b c sum sum_sq max
0 1 4 2 7 21 4
1 8 9 2 19 149 9
2 10 2 1 13 105 10
First, I would like to add 3 extra columns with order numbers, sorting on sum, sum_sq and max, respectively. Next, these 3 columns should be combined into one column holding the mean of the order numbers, but I do know how to do that part (with apply and axis=1).
I think you're looking for rank, since you mention sorting. Given your example, add:
frame['sum_order'] = frame['sum'].rank()
frame['sum_sq_order'] = frame['sum_sq'].rank()
frame['max_order'] = frame['max'].rank()
frame['mean_order'] = frame[['sum_order', 'sum_sq_order', 'max_order']].mean(axis=1)
To get:
a b c sum sum_sq max sum_order sum_sq_order max_order mean_order
0 1 4 2 7 21 4 1 1 1 1.000000
1 8 9 2 19 149 9 3 3 2 2.666667
2 10 2 1 13 105 10 2 2 3 2.333333
The rank method has some options as well, for example to specify the behavior in the case of identical values or NAs.
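For instance, the method parameter controls how ties are ranked; a small illustration with made-up values:
import pandas as pd

s = pd.Series([7, 19, 13, 13])
print(s.rank())                # ties share the average rank: 1.0, 4.0, 2.5, 2.5
print(s.rank(method='min'))    # ties share the lowest rank:  1.0, 4.0, 2.0, 2.0
print(s.rank(method='dense'))  # like 'min', but with no gaps: 1.0, 3.0, 2.0, 2.0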