Removing duplicates with three conditions, Pandas - python

I have the following dataframe:
   | reference | topcredit | currentbalance | creditlimit
 1 | 1         | 50        | 20             | 70
 2 | 1         | 30        | 28             | 50
 3 | 1         | 50        | 20             | 70
 4 | 1         | 81        | 32             | 100
 5 | 2         | 70        | 0              | 56
 6 | 2         | 50        | 20             | 70
 7 | 2         | 100       | 0              | 150
 8 | 3         | 85        | 85             | 95
 9 | 3         | 85        | 85             | 95
And so on...
I want to drop duplicates within each 'reference', but only those rows that have the same topcredit, currentbalance and creditlimit.
For reference 1, lines 1 and 3 have the same values in all three columns, so only one of them should be kept. Line 6 of reference 2 has those same values too, but it belongs to a different reference, so it should also be kept. For reference 3, both lines carry the same information, so only one should remain.
The expected output is:
reference | topcredit | currentbalance | creditlimit
1         | 50        | 20             | 70
1         | 30        | 28             | 50
1         | 81        | 32             | 100
2         | 70        | 0              | 56
2         | 50        | 20             | 70
2         | 100       | 0              | 150
3         | 85        | 85             | 95
I would appreciate the help; I've been searching for how to do this for a while.
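A minimal sketch of one approach (the frame name `df` and the reconstruction of the sample data below are assumptions, not taken from the question): dropping duplicates on all four columns keeps one row per identical combination within each reference, while identical rows that belong to different references survive because their 'reference' values differ.
import pandas as pd

# Hypothetical reconstruction of the sample data shown above.
df = pd.DataFrame({
    'reference':      [1, 1, 1, 1, 2, 2, 2, 3, 3],
    'topcredit':      [50, 30, 50, 81, 70, 50, 100, 85, 85],
    'currentbalance': [20, 28, 20, 32, 0, 20, 0, 85, 85],
    'creditlimit':    [70, 50, 70, 100, 56, 70, 150, 95, 95],
})

# Rows repeating all four values are dropped; the first occurrence of each is kept.
deduped = df.drop_duplicates(subset=['reference', 'topcredit', 'currentbalance', 'creditlimit'])
print(deduped)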

Related

Find top N values within each group

I have a dataset similar to the sample below:
| id | size | old_a | old_b | new_a | new_b |
|----|--------|-------|-------|-------|-------|
| 6 | small | 3 | 0 | 21 | 0 |
| 6 | small | 9 | 0 | 23 | 0 |
| 13 | medium | 3 | 0 | 12 | 0 |
| 13 | medium | 37 | 0 | 20 | 1 |
| 20 | medium | 30 | 0 | 5 | 6 |
| 20 | medium | 12 | 2 | 3 | 0 |
| 12 | small | 7 | 0 | 2 | 0 |
| 10 | small | 8 | 0 | 12 | 0 |
| 15 | small | 19 | 0 | 3 | 0 |
| 15 | small | 54 | 0 | 8 | 0 |
| 87 | medium | 6 | 0 | 9 | 0 |
| 90 | medium | 11 | 1 | 16 | 0 |
| 90 | medium | 25 | 0 | 4 | 0 |
| 90 | medium | 10 | 0 | 5 | 0 |
| 9 | large | 8 | 1 | 23 | 0 |
| 9 | large | 19 | 0 | 2 | 0 |
| 1 | large | 1 | 0 | 0 | 0 |
| 50 | large | 34 | 0 | 7 | 0 |
This is the input for the above table:
import pandas as pd

data = [[6,'small',3,0,21,0],[6,'small',9,0,23,0],[13,'medium',3,0,12,0],[13,'medium',37,0,20,1],[20,'medium',30,0,5,6],[20,'medium',12,2,3,0],[12,'small',7,0,2,0],[10,'small',8,0,12,0],[15,'small',19,0,3,0],[15,'small',54,0,8,0],[87,'medium',6,0,9,0],[90,'medium',11,1,16,0],[90,'medium',25,0,4,0],[90,'medium',10,0,5,0],[9,'large',8,1,23,0],[9,'large',19,0,2,0],[1,'large',1,0,0,0],[50,'large',34,0,7,0]]
data = pd.DataFrame(data, columns=['id','size','old_a','old_b','new_a','new_b'])
I want an output that groups the dataset by size and lists the top 2 ids based on the values of the 'new_a' column within each size group. Since some of the ids repeat multiple times, I want to sum the values of new_a for such ids first and then find the top 2 values. My final table should look like the one below:
| size | id | new_a |
|--------|----|-------|
| large | 9 | 25 |
| large | 50 | 7 |
| medium | 13 | 32 |
| medium | 90 | 25 |
| small | 6 | 44 |
| small | 10 | 12 |
I have tried the code below, but it isn't showing the top 2 values of new_a for each group within the 'size' column.
nlargest = data.groupby(['size','id'])['new_a'].sum().nlargest(2).reset_index()
print(
    data.groupby('size').apply(
        lambda x: x.groupby('id').sum().nlargest(2, columns='new_a')
    ).reset_index()[['size', 'id', 'new_a']]
)
Prints:
size id new_a
0 large 9 25
1 large 50 7
2 medium 13 32
3 medium 90 25
4 small 6 44
5 small 10 12
You can set size and id as the index to avoid the double groupby here, and use Series.sum with its level parameter:
data.set_index(["size", "id"])["new_a"].groupby(level=0).apply(
    lambda x: x.sum(level=1).nlargest(2)
).reset_index()
size id new_a
0 large 9 25
1 large 50 7
2 medium 13 32
3 medium 90 25
4 small 6 44
5 small 10 12
You can chain two groupby methods:
data.groupby(['id', 'size'])['new_a'].sum().groupby('size').nlargest(2)\
.droplevel(0).to_frame('new_a').reset_index()
Output:
id size new_a
0 9 large 25
1 50 large 7
2 13 medium 32
3 90 medium 25
4 6 small 44
5 10 small 12
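For readers who prefer to avoid apply, a sort-based sketch (assuming the same `data` frame defined in the question) gives the same result: sum new_a per (size, id) pair, sort within each size, and keep the two largest.
# Assumes `data` from the question above.
totals = (
    data.groupby(['size', 'id'], as_index=False)['new_a'].sum()
        .sort_values(['size', 'new_a'], ascending=[True, False])
)
top2 = totals.groupby('size').head(2).reset_index(drop=True)
print(top2)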

Custom Cumulative Sum in Pandas

Noob question, apologies.
I am trying to do a cumulative sum on a table I have imported. However, I want it to behave slightly differently at a midpoint in the column before continuing. Is there a way to get cumsum() to calculate up to a certain row and then restart from a later point?
df['Cumulative Sum'] = df['Value'].cumsum()
|    | Value | Cumulative Sum | Expected Cumulative Sum |
|----|-------|----------------|-------------------------|
| 0 | 329.6 | 329.6 | 329.6 |
| 1 | 34.0 | 363.6 | 363.6 |
| 2 | 10 | 373.6 | 373.6 |
| 3 | 8 | 381.6 | 381.6 |
| 4 | 3 | 384.6 | 384.6 |
| 5 | -2 | 382.6 | 382.6 |
| 6 | -4 | 378.6 | 378.6 |
| 7 | -34 | 344.6 | 344.6 |
| 8 | -1 | 343.6 | 343.6 |
| 9 | 343.6 | 687.2 | 343.6 |
| 10 | 0 | 687.2 | 343.6 |
| 11 | -33 | 654.2 | 310.6 |
| 12 | -3 | 651.2 | 307.6 |
| 13 | 0 | 651.2 | 307.6 |
| 14 | 1 | 652.2 | 308.6 |
| 15 | 4 | 656.2 | 312.6 |
| 16 | 0 | 656.2 | 312.6 |
| 17 | 21 | 677.2 | 333.6 |
| 18 | 333.6 | 1010.8 | 333.6 |
You can get started with something like this:
import pandas as pd
import numpy as np

df = pd.DataFrame(data=np.random.randint(0, 100, size=(20, 2)), columns=['A', 'B'])

def Offset_CumSum(Column, Percentage_Offset=0.5):
    # Cumulative sum over the second half of the column only.
    return np.cumsum(Column[int(len(Column) * Percentage_Offset):])

Cumsum_DF = df.apply(lambda x: Offset_CumSum(x), axis=0)
print(df)
print(Cumsum_DF)
This produces the following output.
A B
0 29 11
1 9 51
2 99 31
3 30 44
4 76 13
5 32 48
6 85 83
7 9 98
8 49 34
9 25 0
10 39 22
11 25 96
12 69 7
13 28 6
14 4 92
15 90 32
16 68 72
17 63 25
18 85 47
19 61 31
A B
10 39 22
11 64 118
12 133 125
13 161 131
14 165 223
15 255 255
16 323 327
17 386 352
18 471 399
19 532 430
=====================================================================
Adding question-dataset-specific code after seeing the edit:
import pandas as pd
import numpy as np

df = pd.DataFrame(data=np.random.randint(0, 100, size=(20, 2)), columns=['A', 'B'])

def Offset_CumSum(Column, Percentage_Offset=0.5):
    # Cumulative sum of the first half, followed by a fresh cumulative sum of the second half.
    return np.cumsum(Column[: int(len(Column) * Percentage_Offset)]).tolist() + \
           np.cumsum(Column[int(len(Column) * Percentage_Offset):]).tolist()

Cumsum_DF = df.apply(lambda x: Offset_CumSum(x), axis=0)
print(df)
print(Cumsum_DF)
This should work.
df['Group Flag'] = ""
df.loc[0:8, 'Group Flag'] = 0    # first segment of the running sum
df.loc[9:17, 'Group Flag'] = 1   # second segment, where the sum restarts
df['Cumulative Sum'] = df.groupby('Group Flag')['Value'].cumsum()
df = df.drop('Group Flag', axis=1)
df[['Value', 'Cumulative Sum']]
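The hard-coded row ranges above can be generalised: if you can mark the rows where the sum should restart, a cumulative sum of that marker column gives the group ids directly. A minimal sketch (the `reset` column and its values are assumptions for illustration, not from the question):
import pandas as pd

df = pd.DataFrame({'Value': [5, 3, -2, 10, 4, -1]})
df['reset'] = [False, False, False, True, False, False]  # True where the sum should restart

# Each True starts a new group; cumsum then restarts within every group.
group_id = df['reset'].cumsum()
df['Cumulative Sum'] = df.groupby(group_id)['Value'].cumsum()
print(df)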

Pandas table computation [duplicate]

This question already has answers here:
Get statistics for each group (such as count, mean, etc) using pandas GroupBy?
(9 answers)
Closed 3 years ago.
I have a table as follows:
+-------+-------+-------------+
| Code | Event | No. of runs |
+-------+-------+-------------+
| 66 | 1 | |
| 66 | 1 | 2 |
| 66 | 2 | |
| 66 | 2 | |
| 66 | 2 | 3 |
| 66 | 3 | |
| 66 | 3 | |
| 66 | 3 | |
| 66 | 3 | |
| 66 | 3 | 5 |
| 70 | 1 | |
| 70 | 1 | |
| 70 | 1 | |
| 70 | 1 | 4 |
+-------+-------+-------------+
Let's call each row a run. I want to count the number of runs in each Event, separately for each Code. Would I need to use the groupby function? I have added the expected output in the 'No. of runs' column.
Try using groupby with transform, then mask the duplicated rows:
df['Runs'] = df.groupby(['Code', 'Event'])['Event']\
               .transform('count')\
               .mask(df.duplicated(['Code', 'Event'], keep='last'), '')
Output (the new Runs column is shown next to the original 'No. of runs' column for comparison with the desired result):
Code Event No. of runs Runs
0 66 1
1 66 1 2 2
2 66 2
3 66 2
4 66 2 3 3
5 66 3
6 66 3
7 66 3
8 66 3
9 66 3 5 5
10 70 1
11 70 1
12 70 1
13 70 1 4 4
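As a small illustration of why keep='last' is used above (a toy frame, not the question's data): duplicated flags every row except the last one in each duplicated group, so only the last row of each (Code, Event) group keeps its count and the rest are blanked out by mask.
import pandas as pd

toy = pd.DataFrame({'g': [1, 1, 1, 2, 2]})
print(toy.duplicated(['g'], keep='last').tolist())  # [True, True, False, True, False]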

Pandas: sum multiple columns based on similar consecutive numbers in another column

Given the following table
+----+--------+--------+--------------+
| Nr | Price | Volume | Transactions |
+----+--------+--------+--------------+
| 1 | 194.6 | 100 | 1 |
| 2 | 195 | 10 | 1 |
| 3 | 194.92 | 100 | 1 |
| 4 | 194.92 | 52 | 1 |
| 5 | 194.9 | 99 | 1 |
| 6 | 194.86 | 74 | 1 |
| 7 | 194.85 | 900 | 1 |
| 8 | 194.85 | 25 | 1 |
| 9 | 194.85 | 224 | 1 |
| 10 | 194.6 | 101 | 1 |
| 11 | 194.85 | 19 | 1 |
| 12 | 194.6 | 10 | 1 |
| 13 | 194.6 | 25 | 1 |
| 14 | 194.53 | 12 | 1 |
| 15 | 194.85 | 14 | 1 |
| 16 | 194.6 | 11 | 1 |
| 17 | 194.85 | 93 | 1 |
| 18 | 195 | 90 | 1 |
| 19 | 195 | 100 | 1 |
| 20 | 195 | 50 | 1 |
| 21 | 195 | 50 | 1 |
| 22 | 195 | 25 | 1 |
| 23 | 195 | 5 | 1 |
| 24 | 195 | 500 | 1 |
| 25 | 195 | 100 | 1 |
| 26 | 195.09 | 100 | 1 |
| 27 | 195 | 120 | 1 |
| 28 | 195 | 60 | 1 |
| 29 | 195 | 40 | 1 |
| 30 | 195 | 10 | 1 |
| 31 | 194.6 | 1 | 1 |
| 32 | 194.99 | 1 | 1 |
| 33 | 194.81 | 20 | 1 |
| 34 | 194.81 | 50 | 1 |
| 35 | 194.97 | 17 | 1 |
| 36 | 194.99 | 25 | 1 |
| 37 | 195 | 75 | 1 |
+----+--------+--------+--------------+
For faster testing, you can also find the same table as a pandas dataframe here:
pd_data_before = pd.DataFrame([[1,194.6,100,1],[2,195,10,1],[3,194.92,100,1],[4,194.92,52,1],[5,194.9,99,1],[6,194.86,74,1],[7,194.85,900,1],[8,194.85,25,1],[9,194.85,224,1],[10,194.6,101,1],[11,194.85,19,1],[12,194.6,10,1],[13,194.6,25,1],[14,194.53,12,1],[15,194.85,14,1],[16,194.6,11,1],[17,194.85,93,1],[18,195,90,1],[19,195,100,1],[20,195,50,1],[21,195,50,1],[22,195,25,1],[23,195,5,1],[24,195,500,1],[25,195,100,1],[26,195.09,100,1],[27,195,120,1],[28,195,60,1],[29,195,40,1],[30,195,10,1],[31,194.6,1,1],[32,194.99,1,1],[33,194.81,20,1],[34,194.81,50,1],[35,194.97,17,1],[36,194.99,25,1],[37,195,75,1]],columns=['Nr','Price','Volume','Transactions'])
The question is: how do we sum up the volume and transactions over runs of equal consecutive prices? The end result would look like this:
+----+--------+--------+--------------+
| Nr | Price | Volume | Transactions |
+----+--------+--------+--------------+
| 1 | 194.6 | 100 | 1 |
| 2 | 195 | 10 | 1 |
| 4 | 194.92 | 152 | 2 |
| 5 | 194.9 | 99 | 1 |
| 6 | 194.86 | 74 | 1 |
| 9 | 194.85 | 1149 | 3 |
| 10 | 194.6 | 101 | 1 |
| 11 | 194.85 | 19 | 1 |
| 13 | 194.6 | 35 | 2 |
| 14 | 194.53 | 12 | 1 |
| 15 | 194.85 | 14 | 1 |
| 16 | 194.6 | 11 | 1 |
| 17 | 194.85 | 93 | 1 |
| 25 | 195 | 920 | 8 |
| 26 | 195.09 | 100 | 1 |
| 30 | 195 | 230 | 4 |
| 31 | 194.6 | 1 | 1 |
| 32 | 194.99 | 1 | 1 |
| 34 | 194.81 | 70 | 2 |
| 35 | 194.97 | 17 | 1 |
| 36 | 194.99 | 25 | 1 |
| 37 | 195 | 75 | 1 |
+----+--------+--------+--------------+
You can also find the ready-made result as a pandas dataframe below:
pd_data_after = pd.DataFrame([[1,194.6,100,1],[2,195,10,1],[4,194.92,152,2],[5,194.9,99,1],[6,194.86,74,1],[9,194.85,1149,3],[10,194.6,101,1],[11,194.85,19,1],[13,194.6,35,2],[14,194.53,12,1],[15,194.85,14,1],[16,194.6,11,1],[17,194.85,93,1],[25,195,920,8],[26,195.09,100,1],[30,195,230,4],[31,194.6,1,1],[32,194.99,1,1],[34,194.81,70,2],[35,194.97,17,1],[36,194.99,25,1],[37,195,75,1]],columns=['Nr','Price','Volume','Transactions'])
I managed to achieve this with a for loop, but the problem is that it is very slow when iterating over each row. My data set is huge, around 50 million rows.
Is there any way to achieve this without looping?
A common trick to group by consecutive values is the following:
df.col.ne(df.col.shift()).cumsum()
We can use that here, then use agg to keep the last Nr and the first Price of each block of equal prices, and to sum the Volume and Transactions columns.
df = pd_data_before  # the dataframe defined in the question
(df.groupby(df.Price.ne(df.Price.shift()).cumsum())
   .agg({'Nr': 'last', 'Price': 'first', 'Volume': 'sum', 'Transactions': 'sum'})
).reset_index(drop=True)
Nr Price Volume Transactions
0 1 194.60 100 1
1 2 195.00 10 1
2 4 194.92 152 2
3 5 194.90 99 1
4 6 194.86 74 1
5 9 194.85 1149 3
6 10 194.60 101 1
7 11 194.85 19 1
8 13 194.60 35 2
9 14 194.53 12 1
10 15 194.85 14 1
11 16 194.60 11 1
12 17 194.85 93 1
13 25 195.00 920 8
14 26 195.09 100 1
15 30 195.00 230 4
16 31 194.60 1 1
17 32 194.99 1 1
18 34 194.81 70 2
19 35 194.97 17 1
20 36 194.99 25 1
21 37 195.00 75 1
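To see why the trick at the top of this answer groups consecutive equal prices, here is a tiny illustration (the values are chosen for demonstration only): equal neighbouring values share a group id, and the id increments whenever the value changes.
import pandas as pd

s = pd.Series([195, 195, 194.9, 195, 195, 195])
group_id = s.ne(s.shift()).cumsum()
print(group_id.tolist())  # [1, 1, 2, 3, 3, 3]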

Python: Running maximum by another column?

I have a dataframe like this, which tracks the value of certain items (ids) over time:
import numpy as np
import pandas as pd

mytime = np.tile(np.arange(0, 10), 2)
myids = np.repeat([123, 456], [10, 10])
myvalues = np.random.randint(20, 31, 10 * 2)  # randint replaces the deprecated random_integers; the upper bound is exclusive
df = pd.DataFrame()
df['myids'] = myids
df['mytime'] = mytime
df['myvalues'] = myvalues
+-------+--------+----------+
| myids | mytime | myvalues |
+-------+--------+----------+
| 123   | 0      | 29       |
| 123   | 1      | 23       |
| 123   | 2      | 26       |
| 123   | 3      | 24       |
| 123   | 4      | 25       |
| 123   | 5      | 29       |
| 123   | 6      | 28       |
| 123   | 7      | 21       |
| 123   | 8      | 20       |
| 123   | 9      | 26       |
| 456   | 0      | 26       |
| 456   | 1      | 24       |
| 456   | 2      | 20       |
| 456   | 3      | 26       |
| 456   | 4      | 29       |
| 456   | 5      | 29       |
| 456   | 6      | 24       |
| 456   | 7      | 21       |
| 456   | 8      | 27       |
| 456   | 9      | 29       |
+-------+--------+----------+
I need to calculate the running maximum for each id.
np.maximum.accumulate()
would calculate the running maximum regardless of id, whereas I need a similar calculation that resets every time the id changes. I can think of a simple script to do it in numba (I have very large arrays, and non-vectorised, non-numba code would be slow), but is there an easier way to do it?
With just two ids I can run:
df['running max'] = np.hstack((
    np.maximum.accumulate(df[df['myids'] == 123]['myvalues']),
    np.maximum.accumulate(df[df['myids'] == 456]['myvalues']),
))
but this is not feasible with lots and lots of ids.
Thanks!
Here you go. The assumption is that mytime is sorted.
mytime = np.tile(np.arange(0, 10), 2)
myids = np.repeat([123, 456], [10, 10])
myvalues = np.random.randint(20, 31, 10 * 2)
df = pd.DataFrame()
df['myids'] = myids
df['mytime'] = mytime
df['myvalues'] = myvalues

groups = df.groupby('myids')
df['run_max_group'] = groups['myvalues'].transform(np.maximum.accumulate)
Output...
myids mytime myvalues run_max_group
0 123 0 27 27
1 123 1 21 27
2 123 2 24 27
3 123 3 25 27
4 123 4 22 27
5 123 5 20 27
6 123 6 20 27
7 123 7 30 30
8 123 8 24 30
9 123 9 22 30
10 456 0 29 29
11 456 1 23 29
12 456 2 30 30
13 456 3 28 30
14 456 4 26 30
15 456 5 25 30
16 456 6 28 30
17 456 7 27 30
18 456 8 20 30
19 456 9 24 30
It seems that it is indeed not too difficult:
byid = df.groupby('myids')
rmax = byid['myvalues'].cummax()
for k, indices in byid.indices.items():
    print('myids = %s' % k)
    print('running max = %s' % rmax[indices])
I have (almost) no previous pandas experience, but using IPython as an exploratory instrument I was able to find a solution. I recommend using IPython to explore large and complex libraries.
P.S. Re my previous comment: no need for axis=.
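For completeness, the per-id running maximum can also be written straight back into the frame in one line (a sketch assuming the same `df` built in the question):
# Running maximum of myvalues, restarting whenever myids changes.
df['running max'] = df.groupby('myids')['myvalues'].cummax()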
