I am currently working on my thesis and ran into a problem with a groupby operation. For each customer I am trying to find the total purchase amount, the average purchase amount, the purchase count, the total number of products bought, and the average value per product.
The data looks like this:
    id  purchase_amount  price_products  #_products
0  123            30.00           20.00           2
2  123              NaN           10.00         NaN
3  124            50.00           25.00           3
4  124              NaN           15.00         NaN
5  124              NaN           10.00         NaN
My code looks like this:
df.groupby('id')[['purchase_amount', 'price_products', '#_products']].agg(
    total_purchase_amount=('purchase_amount', 'sum'),
    average_purchase_amount=('purchase_amount', 'mean'),
    times_purchased=('#_products', 'count'),
    total_amount_products_purchased=('price_products', 'count'),
    average_value_products=('price_products', 'mean'))
But I get the following error:
SpecificationError: nested dictionary is ambiguous in aggregation
I cannot seem to find what I am doing wrong, hopefully someone can help me!
Do it like this for each calculation:
df.groupby('id')['purchase_amount'].agg(total_purchase_amount='sum')
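Note that named aggregation (the name=('column', 'function') keyword form you used) requires pandas 0.25 or later, which is the likely cause of the SpecificationError. On 0.25+ all of your calculations also work in a single call; a sketch reusing the column/statistic pairings from your question:

result = df.groupby('id').agg(
    total_purchase_amount=('purchase_amount', 'sum'),
    average_purchase_amount=('purchase_amount', 'mean'),
    times_purchased=('#_products', 'count'),
    total_amount_products_purchased=('price_products', 'count'),
    average_value_products=('price_products', 'mean'))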
Since you have several variables to aggregate, I would suggest using the following form of aggregation:
df.groupby('id')[<variables-list>].agg([<statistics-list>])
For example:
df_agg = df.groupby('id')[['purchase_amount','price_products','#_products']].agg(["count", "mean", "sum"])
This will create a column-wise multi-level output data frame df_agg that looks like:

    purchase_amount             price_products              #_products
              count  mean   sum          count       mean   sum   count mean  sum
id
123               1  30.0  30.0              2  15.000000  30.0       1  2.0  2.0
124               1  50.0  50.0              3  16.666667  50.0       1  3.0  3.0
You can then refer to a particular entry in the output data frame using the multi-index as follows:
df_agg['purchase_amount']['mean']
id
123 30.0
124 50.0
Name: mean, dtype: float64
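Equivalently, you can index with the full tuple at once: df_agg[('purchase_amount', 'mean')].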
or if you want e.g. all the means, use the cross-sectional method xs():
df_agg.xs('mean', axis=1, level=1)
     purchase_amount  price_products  #_products
id
123             30.0       15.000000         2.0
124             50.0       16.666667         3.0
Note: this approach makes pandas compute more statistics than you actually need, as in your example. That may be acceptable in many contexts, and it has the advantage that the code is shorter and generalizes to any set and number of numeric variables to aggregate.
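If the two-level column index is inconvenient downstream, a common follow-up (a small sketch, not part of the code above) is to flatten it into single strings:

# Collapse ('purchase_amount', 'mean') into 'purchase_amount_mean', etc.
df_agg.columns = ['_'.join(col) for col in df_agg.columns]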
You can do this in an organized way using a dictionary for your aggregation.
import numpy as np
import pandas as pd

df = pd.DataFrame([[123, 30, 20, 2],
                   [123, np.nan, 10, np.nan],
                   [124, 50, 25, 3],
                   [124, np.nan, 15, np.nan],
                   [124, np.nan, 10, np.nan]],
                  columns=['id', 'purchase_amount', 'price_products', 'num_products'])

agg_dict = {
    'purchase_amount': [np.sum, np.mean],
    'num_products': [np.count_nonzero],
    'price_products': [np.count_nonzero, np.mean],
}
print(df.groupby('id').agg(agg_dict))
output:
purchase_amount num_products price_products
sum mean count_nonzero count_nonzero mean
id
123 30.0 30.0 2.0 2 15.000000
124 50.0 50.0 3.0 3 16.666667
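On pandas 0.25 or later, named aggregation is an alternative that produces flat, self-describing column names; a sketch using the columns defined above (the output column names here are illustrative, and note that 'count' counts non-NaN values, unlike np.count_nonzero):

out = df.groupby('id').agg(
    total_purchase_amount=('purchase_amount', 'sum'),
    average_purchase_amount=('purchase_amount', 'mean'),
    times_purchased=('num_products', 'count'),
    products_with_price=('price_products', 'count'),
    average_value_products=('price_products', 'mean'))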
I have a dataframe as shown below:

import pandas as pd

df = pd.DataFrame({
    'stud_name': ['ABC', 'ABC', 'ABC', 'DEF', 'DEF', 'DEF'],
    'qty': [123, 31, 490, 518, 70, 900],
    'trans_date': ['13/11/2020', '10/1/2018', '11/11/2017',
                   '27/03/2016', '13/05/2010', '14/07/2008'],
})
I would like to do the following:
a) for each stud_name, look at their past data (the full history) and compute the min, max and mean of the qty column.
Please note that the 1st row for every unique stud_name should be NA, because there is no past data (history) to compute the aggregate statistics from.
I tried something like the below, but the output is incorrect:
df['trans_date'] = pd.to_datetime(df['trans_date'])
df.sort_values(by=['stud_name','trans_date'],inplace=True)
df['past_transactions'] = df.groupby('stud_name').cumcount()
df['past_max_qty'] = df.groupby('stud_name')['qty'].expanding().max().values
df['past_min_qty'] = df.groupby('stud_name')['qty'].expanding().min().values
df['past_avg_qty'] = df.groupby('stud_name')['qty'].expanding().mean().values
I expect my output to look like the one shown below.
We can use a custom function to calculate the past statistics per student. The key is the shift(), which moves each expanding statistic down one row so that every row only sees strictly earlier transactions:

def past_stats(q):
    return (
        q.expanding()
         .agg(['max', 'min', 'mean'])
         .shift()
         .add_prefix('past_')
    )
df.join(df.groupby('stud_name')['qty'].apply(past_stats))
stud_name qty trans_date past_max past_min past_mean
2 ABC 490 2017-11-11 NaN NaN NaN
1 ABC 31 2018-10-01 490.0 490.0 490.0
0 ABC 123 2020-11-13 490.0 31.0 260.5
5 DEF 900 2008-07-14 NaN NaN NaN
4 DEF 70 2010-05-13 900.0 900.0 900.0
3 DEF 518 2016-03-27 900.0 70.0 485.0
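If you prefer to stay closer to the original column-by-column attempt, the same idea works with transform plus shift (a sketch, assuming df is already sorted by stud_name and trans_date as in the question):

grp = df.groupby('stud_name')['qty']
# shift() drops the current row out of each expanding window,
# so every row aggregates strictly earlier transactions only.
df['past_max_qty'] = grp.transform(lambda s: s.expanding().max().shift())
df['past_min_qty'] = grp.transform(lambda s: s.expanding().min().shift())
df['past_avg_qty'] = grp.transform(lambda s: s.expanding().mean().shift())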
I have a Pandas dataframe with points and the corresponding distances to other points. I am able to get the minimal value of the calculated columns; however, I need the column name itself. I cannot figure out how to get the column names corresponding to those values into a new column. My dataframe looks like this:
df.head():
0 1 2 ... 6 7 min
9 58.0 94.0 984.003636 ... 696.667367 218.039561 218.039561
71 100.0 381.0 925.324708 ... 647.707783 169.856557 169.856557
61 225.0 69.0 751.353014 ... 515.152768 122.377490 122.377490
Columns 0 and 1 are datapoints; the rest are distances to datapoints #1 to 7 (in some cases the number of points can differ, which does not really matter for the question). The code I use to compute the min is the following:
new = df.iloc[:, 2:].min(axis=1)
df["min"] = new
# could also do it this way:
# df.assign(Min=lambda d: d.iloc[:, 2:].min(1))
This is quite simple, and there is not much to finding the minimum of multiple columns. However, I need the column name instead of the value. So my desired output would look like this (in the example all are 7, which is not a rule):
0 1 2 ... 6 7 min
9 58.0 94.0 984.003636 ... 696.667367 218.039561 7
71 100.0 381.0 925.324708 ... 647.707783 169.856557 7
61 225.0 69.0 751.353014 ... 515.152768 122.377490 7
Is there a simple way to achieve this?
Use df.idxmin:
In [549]: df['min'] = df.iloc[:,2:].idxmin(axis=1)
In [550]: df
Out[550]:
0 1 2 6 7 min
9 58.0 94.0 984.003636 696.667367 218.039561 7
71 100.0 381.0 925.324708 647.707783 169.856557 7
61 225.0 69.0 751.353014 515.152768 122.377490 7
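If you want to keep both the minimal distance and the column it came from, a small sketch (min_val and min_col are hypothetical column names):

dist = df.iloc[:, 2:]            # the distance columns only
df['min_val'] = dist.min(axis=1)
df['min_col'] = dist.idxmin(axis=1)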
I am trying to find values for certain IDs and codes in a massive data set, and I am trying to get these by taking the most recently used value for each unique pair. Currently I just take the most recently used value with the code below:
data.head()
ID Code value
15 13513 X2784 30.0
16 12665 X2744 65.0
17 16543 X2744 65.0
19 15761 X2100 29.0
21 14265 X2750 48.0
df = data.pivot_table(index='ID', columns='Code', values='value', aggfunc = 'first')
df.head()
Code   X2100  X2744  X2750  X2784
ID
13271   29.0   65.0   35.0   30.0
16343   29.0   65.0   35.0   30.0
19342   29.0   65.0   35.0   30.0
15437   29.0   65.0   35.0   30.0
14359   29.0   65.0   48.0   30.0
The issue is that some of these values are wrong due to anomalies in the data. The idea would be to look at the most recent value, determine if it represents a certain percentage of all values for that pair, and then assign it. An example of the issue would be something like this:
data[(data['ID'] == '14359') & (data['Code'] == 'X2750')]['value'].value_counts()
35.0 2530
48.0 2
The value of 48.0 is the most recent occurrence, but it occurs such a small percentage of the time that it should be considered an anomaly. Is there any way to combine the pivot_table aggfunc 'first' with some sort of occurrence threshold?
If you are sure that the majority is always your desired value, you could use the median aggregation to get the "middle" (50% quantile) value, which would cut off the anomalies. Try this:

df = data.pivot_table(index='ID', columns='Code', values='value', aggfunc=np.median)
I was able to figure it out using a lambda function for the aggfunc:

# Keep the most recent value only if it accounts for more than 25% of all
# values for the ID/Code pair; otherwise fall back to the most common value.
aggfunc = lambda x: (x.iloc[0]
                     if x.value_counts()[x.iloc[0]] / x.value_counts().sum() > .25
                     else x.mode(dropna=False).iat[0])
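Plugged back into the pivot_table call from the question:

df = data.pivot_table(index='ID', columns='Code', values='value', aggfunc=aggfunc)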
Thanks everyone for the help!
I have a dataframe with two numeric columns. I want to add a third column with the difference between them. The condition is: if a value in the first column is blank or NaN, the difference should just be the value in the second column.
Can anyone help me with this problem?
Any suggestions and clues will be appreciated!
Thank you.
You should use vectorised operations where possible. Here you can use numpy.where:
df['Difference'] = np.where(df['July Sales'].isnull(), df['August Sales'],
df['August Sales'] - df['July Sales'])
However, consider that this is precisely the same as treating NaN values in df['July Sales'] as zero. So you can use pd.Series.fillna:
df['Difference'] = df['August Sales'] - df['July Sales'].fillna(0)
This isn't really a conditional situation; it is just a math operation. Consider using the .sub() method with a fill_value.
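A minimal reconstruction of the sample dataframe, taken from the output further below:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'July Sales': [459, 422, 348, 397, np.nan, 191, 435, np.nan, 475, 284],
    'August Sales': [477, 125, 483, 271, 563, 325, 463, 479, 473, 496],
})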
df['Diff'] = df['August Sales'].sub(df['July Sales'], fill_value=0)
This returns:
July Sales August Sales Diff
0 459.0 477 18.0
1 422.0 125 -297.0
2 348.0 483 135.0
3 397.0 271 -126.0
4 NaN 563 563.0
5 191.0 325 134.0
6 435.0 463 28.0
7 NaN 479 479.0
8 475.0 473 -2.0
9 284.0 496 212.0
I used a sample dataframe, but it shouldn't be hard to adapt:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, np.nan, 3], 'B': [10, 20, 30, 40]})

def diff(row):
    # If A is missing, fall back to B; otherwise take the difference.
    return row['B'] if pd.isnull(row['A']) else row['B'] - row['A']

df['C'] = df.apply(diff, axis=1)
ORIGINAL DATAFRAME:
A B
0 1.0 10
1 2.0 20
2 NaN 30
3 3.0 40
AFTER apply:
A B C
0 1.0 10 9.0
1 2.0 20 18.0
2 NaN 30 30.0
3 3.0 40 37.0
Try this:

def diff(row):
    # NaN is truthy, so test for missing values explicitly with pd.isnull.
    if pd.isnull(row['col1']):
        return row['col2']
    return row['col2'] - row['col1']

df['col3'] = df.apply(diff, axis=1)
I have two dataframes:
df1 is a pivot table that has totals for both columns and rows, both with the default name "All".
df2 is a df I created manually by specifying values, using the same index and column names as the pivot table above. This table does not have totals.
I need to multiply the first dataframe by the values in the second. I expect the totals to come back as NaN, since totals don't exist in the second table.
When I perform multiplication, I get the following error:
ValueError: cannot join with no level specified and no overlapping names
When I try the same on dummy dataframes it works as expected:
import pandas as pd
import numpy as np

table1 = np.array([[10, 20, 30, 60],
                   [50, 60, 70, 180],
                   [90, 10, 10, 110],
                   [150, 90, 110, 350]])
df1 = pd.DataFrame(data=table1, index=['One', 'Two', 'Three', 'All'],
                   columns=['A', 'B', 'C', 'All'])
print(df1)

table2 = np.array([[1.0, 2.0, 3.0],
                   [5.0, 6.0, 7.0],
                   [2.0, 1.0, 5.0]])
df2 = pd.DataFrame(data=table2, index=['One', 'Two', 'Three'],
                   columns=['A', 'B', 'C'])
print(df2)

df3 = df1 * df2
print(df3)
This gives me the following output:
A B C All
One 10 20 30 60
Two 50 60 70 180
Three 90 10 10 110
All 150 90 110 350
A B C
One 1.00 2.00 3.00
Two 5.00 6.00 7.00
Three 2.00 1.00 5.00
           A  All      B      C
All      NaN  NaN    NaN    NaN
One     10.0  NaN   40.0   90.0
Three  180.0  NaN   10.0   50.0
Two    250.0  NaN  360.0  490.0
So, visually, the only difference between df1 and df2 is the presence/absence of the column and row "All".
And I think the only difference between my dummy dataframes and the real ones is that the real df1 was created with pd.pivot_table method:
df1_real = pd.pivot_table(PY, values = ['Annual Pay'], index = ['PAR Rating'],
columns = ['CR Range'], aggfunc = [np.sum], margins = True)
I do need to keep the total as I'm using them in other calculations.
I'm sure there is a workaround but I just really want to understand why the same code works on some dataframes of different sizes but not others. Or maybe an issue is something completely different.
Thank you for reading. I realize it's a very long post.
IIUC,
My Preferred Approach
You can use the mul method in order to pass the fill_value argument. In this case, you want a value of 1 (the multiplicative identity) to preserve the value from the dataframe in which the value is not missing:
df1.mul(df2, fill_value=1)
A All B C
All 150.0 350.0 90.0 110.0
One 10.0 60.0 40.0 90.0
Three 180.0 110.0 10.0 50.0
Two 250.0 180.0 360.0 490.0
Alternate Approach
You can also embrace the np.nan and use a follow-up combine_first to fill back in the missing bits from df1:
(df1 * df2).combine_first(df1)
A All B C
All 150.0 350.0 90.0 110.0
One 10.0 60.0 40.0 90.0
Three 180.0 110.0 10.0 50.0
Two 250.0 180.0 360.0 490.0
I really like piRSquared's approach, and here is mine :-)

df1.loc[df2.index, df2.columns] *= df2
df1
Out[293]:
A B C All
One 10.0 40.0 90.0 60
Two 250.0 360.0 490.0 180
Three 180.0 10.0 50.0 110
All 150.0 90.0 110.0 350
@Wen, @piRSquared, thank you for your help. This is what I ended up doing. There is probably a more elegant solution, but this worked for me.
Since I was able to multiply two dummy dataframes of different sizes, I reasoned the issue wasn't the size but the fact that one of the dataframes was created as a pivot table. Somehow the headers of that pivot table were not recognized, though visually they were there. So I decided to convert the pivot table to a regular dataframe. Steps I took:
1. Converted the pivot table to records and then back to a dataframe using the solution from this thread: pandas pivot table to data frame.
2. Cleaned up the column headers using the solution from the same thread.
3. Set my first column as the index, following the suggestion in this thread: How to remove index from a created Dataframe in Python?
This gave me a dataframe that was visually identical to what I had before but was no longer a pivot table.
I was then able to multiply the two dataframes with no issues. I used approach suggested by #Wen because I like that it preserves the structure.
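For reference, a sketch of the header-flattening idea (not the exact code from the linked threads; df1_real is the pivot table created above):

flat = df1_real.copy()
# Collapse each MultiIndex column header tuple into a single plain string.
flat.columns = ['_'.join(map(str, col)) for col in flat.columns.to_flat_index()]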