I have a dataframe like this:
Id F M R
7 1 286 907
12 1 286 907
17 1 186 1271
21 1 296 905
30 1 308 908
32 1 267 905
40 2 591 788
41 1 486 874
47 1 686 906
74 1 230 907
For each row, if F > F's mean and M > M's mean and R > R's mean, then the value in a new column should be "1".
like this:
Id F M R score
7 1 286 907 1
12 1 286 907 0
17 1 186 1271 1
21 1 296 905
30 1 308 908
32 1 267 905
40 2 591 788
41 1 486 874
47 1 686 906
74 1 230 907
You can use numpy.where with a mask created by comparing the 3 columns with their means, and then use all to check that every value in a row is True:
# I modified the last value in the row with index 6 to 1000
print (df)
Id F M R
0 7 1 286 907
1 12 1 286 907
2 17 1 186 1271
3 21 1 296 905
4 30 1 308 908
5 32 1 267 905
6 40 2 591 1000
7 41 1 486 874
8 47 1 686 906
9 74 1 230 907
print (df.F.mean())
1.1
print (df.M.mean())
362.2
print (df.R.mean())
949.0
print (df[['F','M','R']] > df[['F','M','R']].mean())
F M R
0 False False False
1 False False False
2 False False True
3 False False False
4 False False False
5 False False False
6 True True True
7 False True False
8 False True False
9 False False False
mask = (df[['F','M','R']] > df[['F','M','R']].mean()).all(1)
print (mask)
0 False
1 False
2 False
3 False
4 False
5 False
6 True
7 False
8 False
9 False
dtype: bool
df['score'] = np.where(mask,1,0)
print (df)
Id F M R score
0 7 1 286 907 0
1 12 1 286 907 0
2 17 1 186 1271 0
3 21 1 296 905 0
4 30 1 308 908 0
5 32 1 267 905 0
6 40 2 591 1000 1
7 41 1 486 874 0
8 47 1 686 906 0
9 74 1 230 907 0
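Alternatively, since mask is already boolean, casting it to integers gives the same result as the np.where call above:
# True/False become 1/0 when cast to int
df['score'] = mask.astype(int)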
If the condition is changed:
mask = (df.F > df.F.mean()) & (df.M < df.M.mean()) & (df.R < df.R.mean())
print (mask)
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
dtype: bool
df['score'] = np.where(mask,2,0)
print (df)
Id F M R score
0 7 1 286 907 0
1 12 1 286 907 0
2 17 1 186 1271 0
3 21 1 296 905 0
4 30 1 308 908 0
5 32 1 267 905 0
6 40 2 591 1000 0
7 41 1 486 874 0
8 47 1 686 906 0
9 74 1 230 907 0
EDIT:
I think you should first check whether any row matches more than one condition:
mask1 = (df.F > df.F.mean()) & (df.M > df.M.mean()) & (df.R > df.R.mean())
mask2 = (df.F > df.F.mean()) & (df.M < df.M.mean()) & (df.R < df.R.mean())
mask3 = (df.F < df.F.mean()) & (df.M < df.M.mean()) & (df.R < df.R.mean())
df['score1'] = np.where(mask1,1,0)
df['score2'] = np.where(mask2,2,0)
df['score3'] = np.where(mask3,3,0)
If not, use:
df.loc[mask1, 'score'] = 1
df.loc[mask2, 'score'] = 2
df.loc[mask3, 'score'] = 3
df['score'] = df['score'].fillna(0)
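If the conditions are meant to be mutually exclusive (or first-match-wins is acceptable), numpy.select is a more compact alternative; a sketch using the same masks:
# conditions are evaluated in order, the first True one wins,
# and rows matching none of them get the default 0
df['score'] = np.select([mask1, mask2, mask3], [1, 2, 3], default=0)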
df.loc[df['F'] > df['F'].mean(), 'F'] += 1
df.loc[df['M'] > df['M'].mean(), 'M'] += 1
df.loc[df['R'] > df['R'].mean(), 'R'] += 1
I have not tested this; please try it and comment whether it works.
Or try this one (the conditional expression keeps the list the same length as the column; with a filtering if the assignment would fail):
df['F'] = [x + 1 if x > df['F'].mean() else x for x in df['F']]
df['M'] = [x + 1 if x > df['M'].mean() else x for x in df['M']]
df['R'] = [x + 1 if x > df['R'].mean() else x for x in df['R']]
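For completeness, the same increment can be written without a Python loop; a sketch of the np.where equivalent for one column:
# add 1 where the value exceeds the column mean, otherwise keep it
df['F'] = np.where(df['F'] > df['F'].mean(), df['F'] + 1, df['F'])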
I have a dataframe df. Using the validation data validData, I want to compute the response rate (Florence = 1/Yes) for rfm_aboveavg (the RFM combinations whose response rates are above the overall response rate). The response rate is given by the 0/No and 1/Yes counts, i.e. rfm_crosstab[1] / rfm_crosstab['All'].
Using the results from the validation data, I want to display only the rows whose RFM value also appears in the training data output. How do I do this?
Data: 'df'
Seq# ID# Gender M R F FirstPurch ChildBks YouthBks CookBks ... ItalCook ItalAtlas ItalArt Florence Related Purchase Mcode Rcode Fcode Yes_Florence No_Florence
0 1 25 1 297 14 2 22 0 1 1 ... 0 0 0 0 0 5 4 2 0 1
1 2 29 0 128 8 2 10 0 0 0 ... 0 0 0 0 0 4 3 2 0 1
2 3 46 1 138 22 7 56 2 1 2 ... 1 0 0 0 2 4 4 3 0 1
3 4 47 1 228 2 1 2 0 0 0 ... 0 0 0 0 0 5 1 1 0 1
4 5 51 1 257 10 1 10 0 0 0 ... 0 0 0 0 0 5 3 1 0 1
My code: Crosstab for training data trainData
import pandas as pd
from sklearn.model_selection import train_test_split

trainData, validData = train_test_split(df, test_size=0.4, random_state=1)
# Response rate for training data as a whole
responseRate = (sum(trainData.Florence == 1) / sum(trainData.Florence == 0)) * 100
# Response rate for RFM categories
# RFM: Combine R, F, M categories into one category
trainData['RFM'] = trainData['Mcode'].astype(str) + trainData['Rcode'].astype(str) + trainData['Fcode'].astype(str)
rfm_crosstab = pd.crosstab(index = [trainData['RFM']], columns = trainData['Florence'], margins = True)
rfm_crosstab['Percentage of 1/Yes'] = 100 * (rfm_crosstab[1] / rfm_crosstab['All'])
# RFM combinations response rates above the overall response
rfm_aboveavg = rfm_crosstab['Percentage of 1/Yes'] > responseRate
rfm_crosstab[rfm_aboveavg]
Output: Training data
Florence 0 1 All Percentage of 1/Yes
RFM
121 3 2 5 40.000000
131 9 1 10 10.000000
212 1 2 3 66.666667
221 6 3 9 33.333333
222 6 1 7 14.285714
313 2 1 3 33.333333
321 17 3 20 15.000000
322 20 4 24 16.666667
323 2 1 3 33.333333
341 61 10 71 14.084507
343 17 2 19 10.526316
411 12 3 15 20.000000
422 26 5 31 16.129032
423 32 8 40 20.000000
441 96 12 108 11.111111
511 19 4 23 17.391304
513 44 8 52 15.384615
521 24 5 29 17.241379
523 74 16 90 17.777778
533 177 28 205 13.658537
My code: Crosstab for validation data validData
# Response rate for RFM categories
# RFM: Combine R, F, M categories into one category
validData['RFM'] = validData['Mcode'].astype(str) + validData['Rcode'].astype(str) + validData['Fcode'].astype(str)
rfm_crosstab1 = pd.crosstab(index = [validData['RFM']], columns = validData['Florence'], margins = True)
rfm_crosstab1['Percentage of 1/Yes'] = 100 * (rfm_crosstab1[1] / rfm_crosstab1['All'])
rfm_crosstab1
Output: Validation data
Florence 0 1 All Percentage of 1/Yes
RFM
131 3 1 4 25.000000
141 8 0 8 0.000000
211 2 1 3 33.333333
212 2 0 2 0.000000
213 0 1 1 100.000000
221 5 0 5 0.000000
222 2 0 2 0.000000
231 21 1 22 4.545455
232 3 0 3 0.000000
233 1 0 1 0.000000
241 11 1 12 8.333333
242 8 0 8 0.000000
243 2 0 2 0.000000
311 7 0 7 0.000000
312 8 0 8 0.000000
313 1 0 1 0.000000
321 12 0 12 0.000000
322 13 0 13 0.000000
323 4 1 5 20.000000
331 19 1 20 5.000000
332 25 2 27 7.407407
333 11 1 12 8.333333
341 36 2 38 5.263158
342 30 2 32 6.250000
343 12 0 12 0.000000
411 8 2 10 20.000000
412 7 0 7 0.000000
413 13 1 14 7.142857
421 21 2 23 8.695652
422 30 1 31 3.225806
423 26 1 27 3.703704
431 51 3 54 5.555556
432 42 7 49 14.285714
433 41 5 46 10.869565
441 68 2 70 2.857143
442 78 3 81 3.703704
443 70 5 75 6.666667
511 17 0 17 0.000000
512 13 1 14 7.142857
513 26 6 32 18.750000
521 19 1 20 5.000000
522 25 6 31 19.354839
523 50 6 56 10.714286
531 66 3 69 4.347826
532 65 3 68 4.411765
533 128 24 152 15.789474
541 86 7 93 7.526882
542 100 6 106 5.660377
543 178 17 195 8.717949
All 1474 126 1600 7.875000
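One way to do this (a sketch, not tested against your full data) is to filter the validation crosstab with Index.isin, keeping only the RFM combinations present in the filtered training output:
# RFM combinations that beat the overall response rate in training
train_rfm = rfm_crosstab[rfm_aboveavg].index
# keep only those combinations in the validation crosstab
rfm_crosstab1[rfm_crosstab1.index.isin(train_rfm)]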
I have a data frame with the following shape:
0 1
0 OTT:81 DVBC:398
1 OTT:81 DVBC:474
2 OTT:81 DVBC:474
3 OTT:81 DVBC:454
4 OTT:81 DVBC:443
5 OTT:1 DVBC:254
6 DVBC:151 None
7 OTT:1 DVBC:243
8 OTT:1 DVBC:254
9 DVBC:227 None
I want column 1 to be the same as column 0 whenever column 0 contains "DVBC" (the rows where column 1 is None).
Then split the values on ":" and fill the empty ones with 0.
The end data frame should look like this:
OTT DVBC
0 81 398
1 81 474
2 81 474
3 81 454
4 81 443
5 1 254
6 0 151
7 1 243
8 1 254
9 0 227
I tried to do this starting with:
if df[0].str.contains("DVBC") is True:
df[1] = df[0]
But after this the data frame looks the same, and I am not sure why.
My idea after is to pass the values to the respective columns then split by ":" and rename the columns.
How can I implement this?
Your if never runs: df[0].str.contains("DVBC") returns a boolean Series, and comparing a Series to True with "is" is always False, so the assignment is never executed. A universal solution for splitting values on : and pivoting: first create a Series with DataFrame.stack, split it with Series.str.split, and finally reshape with DataFrame.pivot:
df = df.stack().str.split(':', expand=True).reset_index()
df = df.pivot(index='level_0', columns=0, values=1).fillna(0).rename_axis(index=None, columns=None)
print (df)
DVBC OTT
0 398 81
1 474 81
2 474 81
3 454 81
4 443 81
5 254 1
6 151 0
7 243 1
8 254 1
9 227 0
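If you prefer something closer to your original idea, here is a sketch; it assumes the None entries in column 1 are real missing values (if they are the string 'None', convert them first with df[1] = df[1].replace('None', pd.NA)):
mask = df[1].isna()                  # rows like ('DVBC:151', None)
df.loc[mask, 1] = df.loc[mask, 0]    # move the DVBC entry into column 1
df.loc[mask, 0] = 'OTT:0'            # fill the vacated OTT slot with 0
out = df.apply(lambda c: c.str.split(':').str[1]).astype(int)
out.columns = ['OTT', 'DVBC']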
Here is one way that should work with any number of columns: extract pulls the digits after the colon, ffill copies the DVBC number into column 1 where it is missing, and mask then zeroes the OTT slot in those rows (the trailing set_axis renames the columns to match the output below):
(df
 .apply(lambda c: c.str.extract(r':(\d+)', expand=False))
 .ffill(axis=1)
 .mask(df.replace('None', pd.NA).isnull().shift(-1, axis=1, fill_value=False), 0)
 .set_axis(['OTT', 'DVBC'], axis=1)
)
output:
OTT DVBC
0 81 398
1 81 474
2 81 474
3 81 454
4 81 443
5 1 254
6 0 151
7 1 243
8 1 254
9 0 227
I'm trying to find which row the maximum value is in. I have a dataframe named recent_grads. Within it there is a column named sharewomen, which I have assigned to the variable sw_col. I have also stored the maximum value of that column in a variable called max_sw.
I tried the following code:
print(recent_grads['sharewomen'] == max_sw)
and I get the following result:
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 False
13 False
14 False
15 False
16 False
17 False
18 False
19 False
20 False
21 False
22 False
23 False
24 False
25 False
26 False
27 False
28 False
29 False
...
143 False
144 False
145 False
146 False
147 False
148 False
149 False
150 False
151 False
152 False
153 False
154 False
155 False
156 False
157 False
158 False
159 False
160 False
161 False
162 True
163 False
164 False
165 False
166 False
167 False
168 False
169 False
170 False
171 False
172 False
Name: sharewomen, Length: 173, dtype: bool
this is the full view of sw_col:
0 0.120564
1 0.101852
2 0.153037
3 0.107313
4 0.341631
5 0.144967
6 0.535714
7 0.441356
8 0.139793
9 0.437847
10 0.199413
11 0.196450
12 0.119559
13 0.310820
14 0.183985
15 0.320784
16 0.343473
17 0.252960
18 0.350442
19 0.236063
20 0.578766
21 0.222695
22 0.325092
23 0.292607
24 0.278790
25 0.227118
26 0.342229
27 0.322222
28 0.189970
29 0.251389
...
143 0.606889
144 0.423209
145 0.779933
146 0.444582
147 0.506721
148 0.845934
149 0.667034
150 0.752144
151 0.810704
152 0.910933
153 0.697384
154 0.798920
155 0.905590
156 0.904075
157 0.745662
158 0.728495
159 0.584776
160 0.383719
161 0.719974
162 0.968954
163 0.707136
164 0.967998
165 0.690111
166 0.629505
167 0.666119
168 0.637293
169 0.817099
170 0.799859
171 0.798746
172 0.877960
Name: sharewomen, Length: 173, dtype: float64
Any idea what to do? If so, please show a solution that involves these three: recent_grads, max_sw, and sw_col.
I believe you're looking for idxmax:
import pandas as pd

df = pd.DataFrame([1, 4, 3, 2, 5, 3])
print(df)
0
0 1
1 4
2 3
3 2
4 5
5 3
df.idxmax()
==>
0 4
dtype: int64
Alternatively, if for some reason you're not allowed to use idxmax, you can do the following (note that the name of the relevant column is '0'):
df[df[0] == df[0].max()].index.values[0]
The output is 4.
Use idxmax:
recent_grads.loc[recent_grads['sharewomen'].idxmax()]
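Putting it together with the variables from the question (a sketch, assuming sw_col = recent_grads['sharewomen'] and max_sw = sw_col.max()):
# label of the row where sharewomen peaks
row_label = sw_col.idxmax()
print(recent_grads.loc[row_label])
# equivalently, boolean indexing with max_sw
print(recent_grads[sw_col == max_sw])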
I have a dataframe with three columns, bins_x, bins_y and z. I wish to add a new column unique that is an "index" of sorts for that unique combination of bins_x and bins_y. Below is an example of what I would like to append.
Note that I ordered the dataframe for clarity, but order does not matter in this context.
import numpy as np
import pandas as pd
np.random.seed(12)
n = 1000
height = 20
width = 20
bins_x = np.random.randint(1, width, size=n)
bins_y = np.random.randint(1, height, size=n)
z = np.random.randint(1, 500, size=n)
df = pd.DataFrame({'bins_x': bins_x, 'bins_y': bins_y, 'z': z})
print(df.sort_values(['bins_x', 'bins_y']))
bins_x bins_y z unique
23 0 0 462 0
531 0 0 199 1
665 0 0 176 2
363 0 1 219 0
468 0 1 450 1
593 0 1 385 2
609 0 1 74 3
663 0 1 46 4
14 0 2 242 0
208 0 2 381 1
600 0 2 445 2
865 0 2 221 3
400 0 3 178 0
75 0 4 281 0
140 0 4 205 1
282 0 4 47 2
838 0 4 212 3
Use groupby and cumcount:
df['unique'] = df.groupby(['bins_x','bins_y']).cumcount()
>>> df.sort_values(['bins_x', 'bins_y']).head(10)
bins_x bins_y z unique
207 1 1 4 0
259 1 1 313 1
327 1 1 300 2
341 1 1 64 3
440 1 1 398 4
573 1 1 96 5
174 1 2 219 0
563 1 2 398 1
796 1 2 417 2
809 1 2 167 3
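Note that cumcount numbers the rows within each (bins_x, bins_y) group. If you instead wanted a single id per unique combination, ngroup is the counterpart; a sketch:
# one id per unique (bins_x, bins_y) pair, rather than a counter within it
df['group_id'] = df.groupby(['bins_x', 'bins_y']).ngroup()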
I have some data that looks as shown below in df.
First I calculate the mean angle for each group using the function mean_angle. The calculated mean angle is then used for another per-group calculation with the function fun.
import pandas as pd
import numpy as np
# generate sample data
a = np.array([1,2,3,4]).repeat(4)
x1 = 90 + np.random.randint(-15, 15, size=a.size//2 - 2 )
x2 = 270 + np.random.randint(-50, 50, size=a.size//2 + 2 )
b = np.concatenate((x1, x2))
np.random.shuffle(b)
df = pd.DataFrame({'a':a, 'b':b})
The returned dataframe is printed below.
a b
0 1 295
1 1 78
2 1 280
3 1 94
4 2 308
5 2 227
6 2 96
7 2 299
8 3 248
9 3 288
10 3 81
11 3 78
12 4 103
13 4 265
14 4 309
15 4 229
My functions are mean_angle and fun
def mean_angle(deg):
    deg = np.deg2rad(deg)
    deg = deg[~np.isnan(deg)]
    S = np.sum(np.sin(deg))
    C = np.sum(np.cos(deg))
    mu = np.arctan2(S, C)
    mu = np.rad2deg(mu)
    if mu < 0:
        mu = 360 + mu
    return mu

def fun(x, mu):
    return np.where(abs(mu - x) < 45, x, np.where(x + 180 < 360, x + 180, x - 180))
What I have tried:
mu = df.groupby(['a'])['b'].apply(mean_angle)
df2 = df.groupby(['a'])['b'].apply(fun, args = (mu,)) #this function should be element wise
I know it is totally wrong but I could not come up with a better way.
The desired output is something like this, where mu is the mean_angle per group:
a b c
0 1 295 np.where(abs(mu - 295) < 45, 295, np.where(295 +180<360, 295 +180, 295 -180))
1 1 78 np.where(abs(mu - 78) < 45, 78, np.where(78 +180<360, 78 +180, 78 -180))
2 1 280 np.where(abs(mu - 280) < 45, 280, np.where(280 +180<360, 280 +180, 280 -180))
3 1 94 ...
4 2 308 ...
5 2 227 .
6 2 96 .
7 2 299 .
8 3 248 .
9 3 288 .
10 3 81 .
11 3 78 .
12 4 103 .
13 4 265 .
14 4 309 .
15 4 229 .
Any help is appreciated
You don't need your second function; just pass the necessary columns to np.where(). Creating your dataframe in the same manner and leaving your mean_angle function unchanged, we have the following sample dataframe:
a b
0 1 228
1 1 291
2 1 84
3 1 226
4 2 266
5 2 311
6 2 82
7 2 274
8 3 79
9 3 250
10 3 222
11 3 88
12 4 80
13 4 291
14 4 100
15 4 293
Then create your c column (containing your mu values) using groupby() and transform(), and finally apply your np.where() logic:
df['c'] = df.groupby(['a'])['b'].transform(mean_angle)
df['c'] = np.where(abs(df['c'] - df['b']) < 45, df['b'], np.where(df['b']+180<360, df['b']+180, df['b']-180))
Yields:
a b c
0 1 228 228
1 1 291 111
2 1 84 264
3 1 226 226
4 2 266 266
5 2 311 311
6 2 82 262
7 2 274 274
8 3 79 259
9 3 250 70
10 3 222 42
11 3 88 268
12 4 80 260
13 4 291 111
14 4 100 280
15 4 293 113
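The key point is that transform returns a result aligned with the original rows, so every row receives its group's mean angle, whereas apply returns one value per group. A minimal illustration:
# transform broadcasts the per-group reduction back to every row
mu_per_row = df.groupby('a')['b'].transform(mean_angle)   # length == len(df)
# apply reduces to one value per group
mu_per_group = df.groupby('a')['b'].apply(mean_angle)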