Performing calculations on subsets of a data frame in Python

user_id char_id rating
100 33 3
100 44 2
100 33 1
100 44 4
111 55 5
111 44 4
111 55 5
I have a data frame formatted similarly to this one and am trying to perform calculations on the ratings after they have been grouped by user_id and char_id.
I need to do something like data.groupby('user_id', 'char_id') and then calculate the moving average for each char_id for each user_id, but that doesn't work. Any help? I have several thousand user_ids, so I can't go through and select one at a time for the calculations.
I need to somehow iterate over the user_id column and group all the same user_ids together, keeping each user_id separate. Then I need to do the same thing, iterating over char_id within each user_id subset, so that I can finally perform calculations on the subsets of subsets of ratings. So far all my attempts have been unsuccessful. The closest I came was:
def divide_by_user(data):
    for user in data['user_id']:
        user_data = data.where(data['user_id'] == user)
    return user_data

There's no need to do this manually; creating and summarizing subsets like this is exactly what DataFrame.groupby() is for. Create your groupby:
grouped = df.groupby(['user_id', 'char_id'])
Then you can apply a function to each subset. It sounds like you want either rolling_mean or expanding_mean, both of which were available as top-level functions in older pandas releases (modern equivalents are shown after the examples below):
df['cum_average'] = grouped['rating'].apply(pd.expanding_mean)
# New column now contains the average rating for each subset,
# including all values that have been seen so far.
df
Out[43]:
user_id char_id rating cum_average
0 100 33 3 3
1 100 44 2 2
2 100 33 1 2
3 100 44 4 3
4 111 55 5 5
5 111 44 4 4
6 111 55 5 5
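Note: pd.expanding_mean has been removed from later pandas releases. A minimal sketch of the same cumulative average with the current .expanding() accessor (an assumption here: pandas >= 1.0) would be:
# Modern equivalent of the expanding mean above; transform keeps the
# result aligned with the original rows of df
df['cum_average'] = (
    df.groupby(['user_id', 'char_id'])['rating']
      .transform(lambda s: s.expanding().mean())
)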
Using a larger randomly-generated dataset to demonstrate rolling_mean():
import random

n_rows = 100  # assumed size for the example; any reasonably large number works
df = pd.DataFrame({
    'user_id': [random.choice([100, 111, 112]) for n in range(n_rows)],
    'char_id': [random.choice([33, 44, 55]) for n in range(n_rows)],
    'rating': [random.choice([1, 2, 3, 4, 5]) for n in range(n_rows)]
})
grouped = df.groupby(['user_id', 'char_id'])
df['cum_average'] = grouped['rating'].apply(pd.rolling_mean, window=7)
# Output. The rolling average will be NaN until enough values have been
# observed for that subset; you can change this using the
# min_periods argument to rolling_mean
df.sort(columns=['user_id', 'char_id'])
char_id rating user_id cum_average
3 33 1 100 NaN
19 33 2 100 NaN
22 33 5 100 NaN
34 33 1 100 NaN
47 33 1 100 NaN
48 33 1 100 NaN
49 33 1 100 1.714286
51 33 4 100 2.142857
55 33 2 100 2.142857
60 33 2 100 1.714286
66 33 2 100 1.857143
...
etc.
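Likewise, pd.rolling_mean and DataFrame.sort no longer exist in current pandas; a rough equivalent of the rolling example above (again assuming pandas >= 1.0) is:
# Rolling mean over a 7-observation window per (user_id, char_id) group;
# values are NaN until the window is full unless min_periods is lowered
df['cum_average'] = (
    df.groupby(['user_id', 'char_id'])['rating']
      .transform(lambda s: s.rolling(window=7).mean())
)
df.sort_values(['user_id', 'char_id'])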

Try this, where df is the DataFrame:
mean = pd.rolling_mean(df.rating, 7)
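In current pandas releases, pd.rolling_mean is gone and the same calculation is written with the .rolling() accessor:
# Rolling mean over a 7-row window (modern pandas spelling)
mean = df.rating.rolling(7).mean()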

Related

lookup within filtered range

I have a dataframe with data from an ecommerce panel.
It has orders and returns mixed together.
Each row has orderID - it's the same number for normal orders and for corresponding returns that come back from customers.
My data looks like this:
orderID  Shop  Revenue  Note
44       0     -32      Return
45       0     -100     Return
44       1     14
45       3     20       Something else
46       2     50
47       1     80       Something
48       2     222
For each return I want to find the 'Shop' column value that corresponds to the original order.
For example: 'orderID' == 44 appears twice: once as a return (with 'Shop' == 0) and once as a normal order (with 'Shop' == 1).
I want to replace all the 0 values in the 'Shop' column with the values from the corresponding normal orders.
My desired output looks like this:
orderID  Shop  Revenue  Note
44       1     -32      Return
45       3     -100     Return
44       1     14
45       3     20       Something else
46       2     50
47       1     80       Something
48       2     222
I know how to do it in Google Sheets (first I filter the table to remove 'Shop' == 0 rows and then I vlookup the numbers in this filtered array).
I know how to filter this table using pandas, but I don't know how to write the lookup part.
I assume that I will need to create a temporary column first, where I store both types of values - copied directly for normal orders and looked up for returns.
The original dataframe has 1,000,000+ rows.
My data in .csv is available here:
https://docs.google.com/spreadsheets/d/e/2PACX-1vQAJ4tMc_Bcvv-4FsUy3E7sG0m9hm-nLTVLj-LwlSEns-YJ1pbq6gSKp5mj5lZqRI2EgHOsOutwnn1I/pub?gid=0&single=true&output=csv
Thank you for any advice!
IIUC, using map:
m = df.query('Shop != 0').set_index('orderID')['Shop']
df['Shop'] = df['orderID'].map(m)
print(df)
Output:
orderID Shop Revenue Note
0 44 1 -32 Return
1 45 3 -100 Return
2 44 1 14 NaN
3 45 3 20 Something else
4 46 2 50 NaN
5 47 1 80 Something
6 48 2 222 NaN
Create a pd.Series by using query to filter out the zero shops, then set_index and map the shops to orderID.
This works if there is a 1-to-1 shop-to-order mapping. If you have multiple shops per order, then you'll need logic to determine which shop is valid.
If you have duplicate orders to the same shop, then you need to drop_duplicates first.
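For example, a rough sketch of the drop_duplicates approach (assuming the first non-zero shop listed for each order is the one you want to keep):
# Keep only the first non-zero Shop per orderID before building the lookup Series
m = (df.query('Shop != 0')
       .drop_duplicates(subset='orderID')
       .set_index('orderID')['Shop'])
df['Shop'] = df['orderID'].map(m)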

Apply cubic root transformation and StandardScaler to some specific columns in pandas dataframe

I have a data frame with many columns. I would like to apply a cube root (cbrt) transformation first and then StandardScaler() to some specific columns in the dataframe for each month, but I received some errors.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'month': ['1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2'],
    'X1': [30, 42, 25, 32, 12, 10, 4, 6, 5, 10, 24, 21],
    'X2': [10, 76, 100, 23, 65, 94, 67, 24, 67, 54, 87, 81],
    'X3': [23, 78, 95, 52, 60, 76, 68, 92, 34, 76, 34, 12]
})
df
My code below handles the cube root, but it does not take month into account:
df['X1']=pd.Series(np.cbrt(df['X1'])).values
Below is the scaling part, but it needs to consider grouping by month (and does not work as written):
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
df['X1_scale'] = scaler.group('Month').fit(df['X1'])
I would like to combine these two operations into an automated function that adds columns X1_Scale and X2_Scale, but since I have so many columns I would like to do this on the first 2 columns (df.loc[:,2:3]) in general. Please help.
Thank you.
We can use np.cbrt to calculate the element-wise cube root of the first two columns, followed by a groupby on month and a transform with zscore to calculate the standard score of each sample per unique month.
from scipy.stats import zscore
c = df.columns[1:3]
df[c + '_Scale'] = np.cbrt(df[c]).groupby(df['month']).transform(zscore)
month X1 X2 X3 X1_Scale X2_Scale
0 1 30 10 23 0.286075 -1.531934
1 1 42 76 78 1.220298 0.705876
2 1 25 100 95 -0.178042 1.142135
3 1 32 23 52 0.457241 -0.790689
4 1 12 65 60 -1.785572 0.474613
5 2 10 94 76 0.004353 1.026875
6 2 4 67 68 -1.208026 0.093139
7 2 6 24 92 -0.716861 -2.171608
8 2 5 67 34 -0.945947 0.093139
9 2 10 54 76 0.004353 -0.449041
10 2 24 87 34 1.565310 0.804088
11 2 21 81 12 1.296817 0.603408
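If you want to use sklearn's StandardScaler itself rather than zscore (the two give identical numbers here, since both standardize with the population standard deviation), a rough sketch of the same per-month scaling could look like this; the explicit loop is just one way to fit a separate scaler for each month:
from sklearn.preprocessing import StandardScaler
import numpy as np
import pandas as pd

cols = df.columns[1:3]            # X1 and X2
cbrt_vals = np.cbrt(df[cols])     # cube root transformation first

# Fit and apply a separate scaler within each month, then stitch the pieces back together
scaled_parts = []
for month, chunk in cbrt_vals.groupby(df['month']):
    scaled = StandardScaler().fit_transform(chunk)
    scaled_parts.append(pd.DataFrame(scaled, index=chunk.index, columns=cols + '_Scale'))

df = df.join(pd.concat(scaled_parts))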

How to group by a df in Python by a column with the difference between the max value of one column and the min of another column?

I have a data frame which looks like this:
student_id  session_id  reading_level_id  st_week  end_week
1           3334        3                 3        3
1           3335        2                 4        4
2           3335        2                 2        2
2           3336        2                 2        3
2           3337        2                 3        3
2           3339        2                 3        4
...
There are multiple session_ids, st_weeks and end_weeks for every student_id. I'm trying to group the data by 'student_id' and I want to calculate the difference between the maximum (end_week) and the minimum (st_week) for each student.
Aiming for an output that would look something like this:
Student_id  Diff
1           1
2           2
...
I am relatively new to Python as well as Stack Overflow and have been trying to find an appropriate solution - any help is appreciated.
Using the data you shared, a simpler solution is possible:
Group by student_id, and pass False to the as_index parameter (this works on a dataframe and returns a dataframe);
Next, use a named aggregation to get the max week for end_week and the min week for st_week for each group;
Get the difference between max_wk and min_wk;
Finally, keep only the required columns.
(
df.groupby("student_id", as_index=False)
.agg(max_wk=("end_week", "max"), min_wk=("st_week", "min"))
.assign(Diff=lambda x: x["max_wk"] - x["min_wk"])
.loc[:, ["student_id", "Diff"]]
)
student_id Diff
0 1 1
1 2 2
There's probably a more efficient way to do this, but I broke this into separate steps for the grouping to get max and min values for each id, and then created a new column representing the difference. I used numpy's randint() function in this example because I didn't have access to a sample dataframe.
import pandas as pd
import numpy as np
# generate dataframe
df = pd.DataFrame(np.random.randint(0,100,size=(1200, 4)), columns=['student_id', 'session_id', 'st_week', 'end_week'])
# use groupby to get max and min for each student_id
max_vals = df.groupby(['student_id'], sort=False)['end_week'].max().to_frame()
min_vals = df.groupby(['student_id'], sort=False)['st_week'].min().to_frame()
# use join to put max and min back together in one dataframe
merged = min_vals.join(max_vals)
# use assign() to calculate difference as new column
merged = merged.assign(difference=lambda x: x.end_week - x.st_week).reset_index()
merged
student_id st_week end_week difference
0 40 2 99 97
1 23 5 74 69
2 78 9 93 84
3 11 1 97 96
4 97 24 88 64
... ... ... ... ...
95 54 0 96 96
96 18 0 99 99
97 8 18 97 79
98 75 21 97 76
99 33 14 93 79
You can create a custom function and apply it to a group-by over students:
def week_diff(g):
    return g.end_week.max() - g.st_week.min()

df.groupby("student_id").apply(week_diff)
Result:
student_id
1 1
2 2
dtype: int64

Duplicate ID's with different values in rows

I'm having an issue with a dataset I am using for my thesis. The dataset contains customer purchase information and I want to figure out how many times a customer has purchased, what the total purchase amount is and what their average spending is. The data I currently have looks something like this:
id date total_purchase_amount product_price
0 84288 2020-1-1 100 50
1 84288 2020-1-1 50
2 84288 2020-3-7 80 20
3 84288 2020-3-7 60
4 84289 2020-8-16 200 10
5 84289 2020-8-16 50
6 84289 2020-8-16 10
7 84289 2020-8-16 80
8 84290 2020-4-2 10 10
9 84290 2020-4-8 30 30
10 84291 2020-5-23 45 45
Some customers have made purchases more than once, causing their customer ID to appear multiple times in the dataset. What I want to achieve is a dataset which looks like this:
id total_purchase_amount average_spending times_purchased
0 84288 180 45 2
1 84289 200 37.5 1
2 84290 40 20 2
3 84291 45 45 1
Does anyone have a suggestion how I can achieve this? The dataset I work with is very large, so this problem cannot be solved manually.
Here is the code to get the first dataframe:
import pandas as pd
data = [[84288, "2020-1-1", 100, 50],[84288, "2020-1-1", "", 50],[84288, "2020-3-7", 80, 20], [84288, "2020-3-7", "", 60],[84289, "2020-8-16", 200, 10],[84289, "2020-8-16", "", 50],[84289, "2020-8-16", "", 10], [84289, "2020-8-16", "", 80],[84290, "2020-4-2", 10, 10],[84290, "2020-4-8", 30, 30],[84291, "2020-5-23", 45, 45]]
df = pd.DataFrame(data, columns=['id','date','total_purchase_amount','purchase_amount'])
Replace the blank values with NaN and do the math in the grouping.
import numpy as np

df.replace('', np.nan, inplace=True)
df.groupby('id')[['total_purchase_amount', 'purchase_amount']].agg(
    average_spending=('purchase_amount', 'mean'),
    times_purchased=('total_purchase_amount', 'count'))
average_spending times_purchased
id
84288 45.0 2
84289 37.5 1
84290 20.0 2
84291 45.0 1
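If you also need the summed total_purchase_amount from your desired output, one sketch (assuming the blanks have been replaced with NaN and the column converted to numeric, as below) is to add one more named aggregation:
import numpy as np

df.replace('', np.nan, inplace=True)
df['total_purchase_amount'] = df['total_purchase_amount'].astype(float)

out = df.groupby('id').agg(
    total_purchase_amount=('total_purchase_amount', 'sum'),
    average_spending=('purchase_amount', 'mean'),
    times_purchased=('total_purchase_amount', 'count'),
).reset_index()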

Python Pandas: Create New Column With Calculations Based on Categorical Values in A Different Column

I have the following sample data frame:
id category time
43 S 8
22 I 10
15 T 350
18 L 46
I want to apply the following logic:
1) if category value equals "T" then create new column called "time_2" where "time" value is divided by 24.
2) if category value equals "L" then create new column called "time_2" where "time" value is divided by 3.5.
3) otherwise keep the existing "time" value (categories S or I).
Below is my desired output table:
id category time time_2
43 S 8 8
22 I 10 10
15 T 350 14.58333333
18 L 46 13.14285714
I've tried using pd.np.where to get the above to work but am confused about the syntax.
You can use map for the rules:
In [1066]: df['time_2'] = df.time / df.category.map({'T': 24, 'L': 3.5}).fillna(1)
In [1067]: df
Out[1067]:
id category time time_2
0 43 S 8 8.000000
1 22 I 10 10.000000
2 15 T 350 14.583333
3 18 L 46 13.142857
You can use np.select. This is a good alternative to nested np.where logic.
import numpy as np

conditions = [df['category'] == 'T', df['category'] == 'L']
values = [df['time'] / 24, df['time'] / 3.5]
df['time_2'] = np.select(conditions, values, df['time'])
print(df)
id category time time_2
0 43 S 8 8.000000
1 22 I 10 10.000000
2 15 T 350 14.583333
3 18 L 46 13.142857
