I want to group a column into deciles and assign points out of 50.
The lowest decile receives 5 points, and points increase in 5-point increments.
With the code below I am able to group my column into deciles. How do I assign points so that the lowest decile gets 5 points, the second lowest gets 10 points, and so on, with the highest decile getting 50 points?
import pandas as pd

df = pd.DataFrame({'column': [1,2,2,3,4,4,5,6,6,7,7,8,8,9,10,10,10,12,13,14,16,16,16,18,19,20,20,22,24,28]})
df['decile'] = pd.qcut(df['column'], 10, labels=False)
Try this:
df['points'] = df['decile'].add(1).mul(5)
Output:
column decile points
0 1 0 5
1 2 0 5
2 2 0 5
3 3 1 10
4 4 1 10
5 4 1 10
6 5 2 15
7 6 2 15
8 6 2 15
9 7 3 20
10 7 3 20
11 8 3 20
12 8 3 20
13 9 4 25
14 10 4 25
15 10 4 25
16 10 4 25
17 12 5 30
18 13 6 35
19 14 6 35
20 16 6 35
21 16 6 35
22 16 6 35
23 18 7 40
24 19 8 45
25 20 8 45
26 20 8 45
27 22 9 50
28 24 9 50
29 28 9 50
Simple enough; you can apply operations between columns directly. Deciles are numbered from 0 through 9, so they are already ordered. You want increments of 5 points per decile, so multiplying the decile number by 5 gives you the increments; since you want to start at 5 rather than 0, add 5 as an offset. The following gives you what I believe you want:
df['points'] = df['decile'] * 5 + 5
Here's a way that can easily be generalized to different point systems that are not linear with decile:
df['points'] = df.decile.map({d:5 * (d + 1) for d in range(10)})
This uses Series.map() to map from each decile value to the desired number of points for that decile using a dictionary.
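For example, here's a minimal sketch of a non-linear scheme; the per-decile point values below are hypothetical, made up purely for illustration:
# Hypothetical non-linear point scheme; only the dictionary changes,
# the map() call stays the same.
points_by_decile = {0: 5, 1: 10, 2: 15, 3: 20, 4: 30,
                    5: 40, 6: 55, 7: 70, 8: 85, 9: 100}
df['points'] = df['decile'].map(points_by_decile)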
I have a large dataset with millions of rows of data. One of the data columns is ID.
I also have another (hash) table that maps ranges of indices to a specific group that meets a certain criterion.
What is an efficient way to map these index ranges onto my dataset as an additional column in pandas?
As an example, let's say that the dataset looks like this:
In [18]:
print(df_test)
Out [19]:
ID
0 13
1 14
2 15
3 16
4 17
5 18
6 19
7 20
8 21
9 22
10 23
11 24
12 25
13 26
14 27
15 28
16 29
17 30
18 31
19 32
Now the hash table with the range of indices looks like this:
In [20]:
print(df_hash)
Out [21]:
ID_first
0 0
1 2
2 10
where the index specifies the group number that I need.
I tried doing something like this:
for index in range(df_hash.size):
    try:
        df_test.loc[df_hash.ID_first[index]:df_hash.ID_first[index + 1], 'Group'] = index
    except:
        df_test.loc[df_hash.ID_first[index]:, 'Group'] = index
This works, but it is really slow because it loops over the length of the hash table dataframe (hundreds of thousands of rows). It produces the following answer (which is what I want):
In [23]:
print(df_test)
Out [24]:
ID Group
0 13 0
1 14 0
2 15 1
3 16 1
4 17 1
5 18 1
6 19 1
7 20 1
8 21 1
9 22 1
10 23 2
11 24 2
12 25 2
13 26 2
14 27 2
15 28 2
16 29 2
17 30 2
18 31 2
19 32 2
Is there a way to do this more efficiently?
You can map the index of df_test to the index of df_hash via ID_first, and then ffill. You need to construct a Series because the pd.Index class doesn't have an ffill method.
df_test['group'] = (pd.Series(df_test.index.map(dict(zip(df_hash.ID_first, df_hash.index))),
                              index=df_test.index)
                      .ffill(downcast='infer'))
# ID group
#0 13 0
#1 14 0
#2 15 1
#...
#9 22 1
#10 23 2
#...
#17 30 2
#18 31 2
#19 32 2
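If ID_first is guaranteed to be sorted (as in the example), a vectorized alternative is np.searchsorted. The following is a sketch that reconstructs the example data; it is an addition, not part of the original answer:
import numpy as np
import pandas as pd

df_test = pd.DataFrame({'ID': range(13, 33)})
df_hash = pd.DataFrame({'ID_first': [0, 2, 10]})

# For each row position, find the last group whose starting position
# is less than or equal to it.
starts = df_hash['ID_first'].to_numpy()
df_test['group'] = np.searchsorted(starts, df_test.index.to_numpy(), side='right') - 1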
You can use Series.isin with Series.cumsum. Note that this matches ID_first against the ID values themselves (which is what the sample output below assumes), rather than against the positional index:
df_test['group'] = df_test['ID'].isin(df_hash['ID_first']).cumsum()  # append .sub(1) to start numbering at 0
print(df_test)
ID group
0 0 1
1 1 1
2 2 2
3 3 2
4 4 2
5 5 2
6 6 2
7 7 2
8 8 2
9 9 2
10 10 3
11 11 3
12 12 3
13 13 3
14 14 3
15 15 3
16 16 3
17 17 3
18 18 3
19 19 3
I have a dataframe df as shown:
1-1 1-2 1-3 2-1 2-2 3-1 3-2 4-1 5-1
10 3 9 1 3 9 33 10 11
21 31 3 22 21 13 11 7 13
33 22 61 31 35 34 8 10 16
6 9 32 5 4 8 9 6 8
where the columns are to be read as follows: the first digit is the group number and the second is the subgroup within it. In this example there are groups 1, 2, 3, 4, 5, and group 1 consists of 1-1, 1-2, 1-3.
I would like to create a new dataframe that has only the groups 1, 2, 3, 4, 5 (no subgroups), taking for each row the maximum value among that group's subgroups, and the solution should remain flexible if groups or subgroups are added later.
The new dataframe I need looks like this:
1 2 3 4 5
10 3 33 10 11
31 22 13 7 13
61 35 34 10 16
32 5 9 6 8
You can aggregate by columns with axis=1 and DataFrame.groupby, using a lambda function that splits each column name and selects the part before the dash, then take the max per group:
This works correctly even if the group numbers contain 2 or more digits.
df1 = df.groupby(lambda x: x.split('-')[0], axis=1).max()
An alternative is to pass the split column names:
df1 = df.groupby(df.columns.str.split('-').str[0], axis=1).max()
print (df1)
1 2 3 4 5
0 10 3 33 10 11
1 31 22 13 7 13
2 61 35 34 10 16
3 32 5 9 6 8
You can use .str[] or .str.get here. Note that .str[0] takes only the first character of each column name, so this assumes single-digit group numbers.
df.groupby(df.columns.str[0], axis=1).max()
1 2 3 4 5
0 10 3 33 10 11
1 31 22 13 7 13
2 61 35 34 10 16
3 32 5 9 6 8
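If groupby with axis=1 is unavailable (it is deprecated in newer pandas releases), the same result can be obtained by transposing first. A sketch using the question's data, added here for illustration:
import pandas as pd

cols = ['1-1', '1-2', '1-3', '2-1', '2-2', '3-1', '3-2', '4-1', '5-1']
data = [[10, 3, 9, 1, 3, 9, 33, 10, 11],
        [21, 31, 3, 22, 21, 13, 11, 7, 13],
        [33, 22, 61, 31, 35, 34, 8, 10, 16],
        [6, 9, 32, 5, 4, 8, 9, 6, 8]]
df = pd.DataFrame(data, columns=cols)

# Group the transposed rows by the part of each label before '-',
# take the max within each group, then transpose back.
groups = df.columns.str.split('-').str[0]
df1 = df.T.groupby(groups).max().T
print(df1)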
I have this pandas dataframe, sorted by the "h" column. What I want is to add two new columns where:
The items of each zone share a max boundary and a min boundary (the same for every item in the zone). The max boundary is the minimum "h" value of the previous zone, and the min boundary is the maximum "h" value of the next zone.
name h w set row zone
ZZON5 40 36 A 0 0
DWOPN 38 44 A 1 0
5SWYZ 37 22 B 2 0
TFQEP 32 55 B 3 0
OQ33H 26 41 A 4 1
FTJVQ 24 25 B 5 1
F1RK2 20 15 B 6 1
266LT 18 19 A 7 1
HSJ3X 16 24 A 8 2
L754O 12 86 B 9 2
LWHDX 11 68 A 10 2
ZKB2F 9 47 A 11 2
5KJ5L 7 72 B 12 3
CZ7ET 6 23 B 13 3
SDZ1B 2 10 A 14 3
5KWRU 1 59 B 15 3
What I hope for:
name h w set row zone maxB minB
ZZON5 40 36 A 0 0 26
DWOPN 38 44 A 1 0 26
5SWYZ 37 22 B 2 0 26
TFQEP 32 55 B 3 0 26
OQ33H 26 41 A 4 1 32 16
FTJVQ 24 25 B 5 1 32 16
F1RK2 20 15 B 6 1 32 16
266LT 18 19 A 7 1 32 16
HSJ3X 16 24 A 8 2 18 7
L754O 12 86 B 9 2 18 7
LWHDX 11 68 A 10 2 18 7
ZKB2F 9 47 A 11 2 18 7
5KJ5L 7 72 B 12 3 9
CZ7ET 6 23 B 13 3 9
SDZ1B 2 10 A 14 3 9
5KWRU 1 59 B 15 3 9
Any ideas?
First, group by zone and find the minimum and maximum of h for each zone:
min_max_zone = df.groupby('zone').agg(min=('h', 'min'), max=('h', 'max'))
Now you can use apply:
import numpy as np

df['maxB'] = df['zone'].apply(lambda x: min_max_zone.loc[x - 1, 'min']
                              if x - 1 in min_max_zone.index else np.nan)
df['minB'] = df['zone'].apply(lambda x: min_max_zone.loc[x + 1, 'max']
                              if x + 1 in min_max_zone.index else np.nan)
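A vectorized sketch of the same idea, building on min_max_zone from above; this is an addition and assumes the zones appear in min_max_zone in sorted order (as groupby produces) and are the consecutive integers from the example:
# The previous zone's minimum h becomes this zone's max boundary,
# and the next zone's maximum h becomes this zone's min boundary.
df['maxB'] = df['zone'].map(min_max_zone['min'].shift(1))
df['minB'] = df['zone'].map(min_max_zone['max'].shift(-1))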
I have created a days-difference column in a pandas dataframe, and I'm looking to add a column with the sum of a value over a given window of days looking backwards.
Note that I can supply a date column for each row if needed, but diff was created as the difference in days from the first day of the data.
Example
df = pd.DataFrame.from_dict({'diff': [0,0,1,2,2,2,2,10,11,15,18],
                             'value': [10,11,15,2,5,7,8,9,23,14,15]})
df
Out[12]:
diff value
0 0 10
1 0 11
2 1 15
3 2 2
4 2 5
5 2 7
6 2 8
7 10 9
8 11 23
9 15 14
10 18 15
I want to add a 5_days_back_sum column that sums value over the past 5 days, including the same day, so the result would look like this:
Out[15]:
5_days_back_sum diff value
0 21 0 10
1 21 0 11
2 36 1 15
3 58 2 2
4 58 2 5
5 58 2 7
6 58 2 8
7 9 10 9
8 32 11 23
9 46 15 14
10 29 18 15
How can I achieve that? I originally had a date column that I used to create the diff column, so it is available if that helps.
Use a custom function with boolean indexing to filter the range, then sum:
def f(x):
    return df.loc[(df['diff'] >= x - 5) & (df['diff'] <= x), 'value'].sum()

df['5_days_back_sum'] = df['diff'].apply(f)
print (df)
diff value 5_days_back_sum
0 0 10 21
1 0 11 21
2 1 15 36
3 2 2 58
4 2 5 58
5 2 7 58
6 2 8 58
7 10 9 9
8 11 23 32
9 15 14 46
10 18 15 29
Similar solution with between:
def f(x):
    return df.loc[df['diff'].between(x - 5, x), 'value'].sum()

df['5_days_back_sum'] = df['diff'].apply(f)
print (df)
diff value 5_days_back_sum
0 0 10 21
1 0 11 21
2 1 15 36
3 2 2 58
4 2 5 58
5 2 7 58
6 2 8 58
7 10 9 9
8 11 23 32
9 15 14 46
10 18 15 29
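The per-row apply rescans the whole frame for every row; for larger data, one way to vectorize (an addition, not part of the original answer) is to total value per day offset, pad to a complete range of offsets, and take a fixed-size rolling sum. A sketch that assumes the day offsets are small non-negative integers, as in the example:
import pandas as pd

df = pd.DataFrame({'diff': [0, 0, 1, 2, 2, 2, 2, 10, 11, 15, 18],
                   'value': [10, 11, 15, 2, 5, 7, 8, 9, 23, 14, 15]})

# Total value per day offset, padded so every offset from 0 to max appears.
daily = (df.groupby('diff')['value'].sum()
           .reindex(range(df['diff'].max() + 1), fill_value=0))

# A window of 6 consecutive offsets covers days x-5 .. x inclusive,
# matching the between(x - 5, x) filter used above.
window_sum = daily.rolling(6, min_periods=1).sum()
df['5_days_back_sum'] = df['diff'].map(window_sum)
print(df)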
I'm using a DataFrame in pandas, and I would like to calculate the delta between adjacent rows, using a partition.
For example, this is my initial set after sorting it by A and B:
A B
1 12 40
2 12 50
3 12 65
4 23 30
5 23 45
6 23 60
I want to calculate the delta between adjacent B values, partitioned by A. If we define C as the result, the final table should look like this:
A B C
1 12 40 NaN
2 12 50 10
3 12 65 15
4 23 30 NaN
5 23 45 15
6 23 75 30
The reason for the NaN is that we cannot calculate delta for the minimum number in each partition.
You can group by column A and take the difference:
df['C'] = df.groupby('A')['B'].diff()
df
Out:
A B C
1 12 40 NaN
2 12 50 10.0
3 12 65 15.0
4 23 30 NaN
5 23 45 15.0
6 23 60 15.0
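For completeness, a self-contained sketch reproducing the example; diff() works on row order within each group, so the frame must already be sorted by A and B, as stated in the question:
import pandas as pd

df = pd.DataFrame({'A': [12, 12, 12, 23, 23, 23],
                   'B': [40, 50, 65, 30, 45, 60]})
# Difference from the previous B within each A partition; the first row
# of each partition has no predecessor, hence NaN.
df['C'] = df.groupby('A')['B'].diff()
print(df)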