How to remove duplicate rows with a condition in pandas - python

I want to drop duplicate pairs, using col1 and col2 as the subset, only if the values in col3 are opposite (one negative and one positive). This is similar to the drop_duplicates function, but I want to impose a condition and only remove the first pair (i.e. if there are 3 duplicates, remove just 2 and leave 1).
my dataset (df):
   col1  col2  col3
0     1     1     1
1     2     2     2
2     1     1     1
3     3     5     7
4     1     2    -1
5     1     2     1
6     1     2     1
I want:
   col1  col2  col3
0     1     1     1
1     2     2     2
2     1     1     1
3     3     5     7
6     1     2     1
Rows 4 and 5 are duplicated in col1 and col2, but their values in col3 are opposite, therefore we remove both. Rows 0 and 2 have duplicate values in col1 and col2, but col3 is the same, so we don't remove those rows.
I've tried using drop_duplicates, but realised it wouldn't work: it simply removes all duplicates and doesn't consider anything else.

We can do it with transform:
out = df[df.groupby(['col1','col2']).col3.transform('sum').ne(0) & df.col3.ne(0)]
Out[252]:
   col1  col2  col3
0     1     1     1
1     2     2     2
2     1     1     1
3     3     5     7
Note that transform('sum') drops a group only when its col3 values cancel to exactly zero. With the 7-row frame above, the (1, 2) group sums to -1 + 1 + 1 = 1, so rows 4-6 would all be kept; the output shown corresponds to a frame containing just the single (-1, 1) pair.
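For the asker's exact expected output (drop only one cancelling pair per group and keep any leftover row), a small helper can be sketched; drop_first_cancelling_pair is a hypothetical name, not from the thread:
def drop_first_cancelling_pair(df):
    # Drop one (+v, -v) pair per (col1, col2) group, keeping leftover rows.
    to_drop = []
    for _, g in df.groupby(['col1', 'col2'], sort=False):
        for i in g.index[g['col3'] < 0]:
            # positive counterparts of row i that are not already marked
            candidates = g.index[(g['col3'] == -df.at[i, 'col3']).to_numpy()
                                 & ~g.index.isin(to_drop)]
            if len(candidates):
                to_drop += [i, candidates[0]]
                break
    return df.drop(index=to_drop)
On the example frame this keeps rows 0, 1, 2, 3 and 6, matching the desired output.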

Recreating the dataset:
import pandas as pd
data = [
    [1, 1, 1],
    [2, 2, 2],
    [1, 1, 1],
    [3, 5, 7],
    [1, 2, -1],
    [1, 2, 1],
    [1, 2, 1],
]
df = pd.DataFrame(data, columns=['col1', 'col2', 'col3'])
If your data is not massive, you can iterate with iterrows over a subset of the data.
The subset contains all rows that are duplicated once every value is taken as an absolute value.
Next, we check whether col3 is negative and whether the opposite of col3 is in the duplicate subset.
If so, we drop the row from df.
df_dupes = df[df.abs().duplicated(keep=False)]
df_dupes_list = df_dupes.to_numpy().tolist()
for i, row in df_dupes.iterrows():
    # drop the negative row if its positive counterpart is also a duplicate
    if row.col3 < 0 and [row.col1, row.col2, -row.col3] in df_dupes_list:
        df.drop(labels=i, axis=0, inplace=True)
This code should remove row 4. In your desired output you also dropped row 5 for some reason.
If you can explain why row 5 was dropped while row 0 was kept, I can adjust my code to match your desired output more accurately.

I used @Petar Luketina's code here with an adjustment and it worked. However, I would like to use it on a massive dataset (1 million rows and 43 columns), and this code takes forever:
import numpy as np

df_dupes = df[df['col3'].abs().duplicated(keep=False)]
df_dupes_list = df_dupes.to_numpy().tolist()
for i, row in df_dupes.iterrows():
    if row.col3 < 0 and [row.col1, row.col2, -row.col3] in df_dupes_list:
        print(row.col3)
        try:
            # positional index of the matching positive row
            c = np.where((df['col1'] == row.col1) & (df['col2'] == row.col2) &
                         (df['col3'] == -row.col3))[0][0]
            df.drop(labels=[i, df.index.values[c]], axis=0, inplace=True)
        except IndexError:
            pass
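A possible vectorized alternative for data of that size (a sketch, not from the thread): rank the negative and positive occurrences of each |col3| value inside every (col1, col2) group with cumcount, pair them up with a merge, and drop the matched rows. Unmatched leftovers survive, which reproduces the "leave 1 of 3" behaviour.
import numpy as np
import pandas as pd

key = ['col1', 'col2']
tmp = df.assign(absval=df['col3'].abs(), sign=np.sign(df['col3']))
# the k-th negative pairs with the k-th positive of the same (col1, col2, |col3|)
tmp['rank'] = tmp.groupby(key + ['absval', 'sign']).cumcount()
neg = tmp[tmp['sign'] < 0].reset_index()
pos = tmp[tmp['sign'] > 0].reset_index()
matched = neg.merge(pos, on=key + ['absval', 'rank'], suffixes=('_neg', '_pos'))
out = df.drop(index=np.r_[matched['index_neg'], matched['index_pos']])
Note this cancels every matched pair; if only a single pair per group should go, the helper sketched under the first answer is closer to the spec.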

I know this is an old question, but for anyone interested, here is an alternative that avoids iterating over the rows.
First, use a flag to identify the pair of rows to be removed (a row plus the next row, when col1 and col2 match and the col3 values are negatives of each other):
df.loc[(df.col1 == df.col1.shift(1)) & (df.col2 == df.col2.shift(1)) & (df.col3 == -df.col3.shift(1)), 'removeFlag'] = True
df.loc[df.removeFlag.shift(-1) == True, 'removeFlag'] = True
   col1  col2  col3 removeFlag
0     1     1     1        NaN
1     2     2     2        NaN
2     1     1     1        NaN
3     3     5     7        NaN
4     1     2    -1       True
5     1     2     1       True
6     1     2     1        NaN
Then use this flag to delete the offending rows:
df = df[~(df.removeFlag == True)]
df.drop(columns=['removeFlag'], inplace=True)
   col1  col2  col3
0     1     1     1
1     2     2     2
2     1     1     1
3     3     5     7
6     1     2     1
This approach would probably need a little more refinement if row 6 had been the same as row 4 (i.e. the first half of a repeated identical pair), but you get the idea.
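The same idea can be written without the helper column (a sketch, under the same assumption that cancelling rows are adjacent):
mask = ((df.col1 == df.col1.shift(1)) & (df.col2 == df.col2.shift(1))
        & (df.col3 == -df.col3.shift(1)))
df = df[~(mask | mask.shift(-1, fill_value=False))]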

Related

How to identify unique elements in two dataframes and append with a new row

I am trying to write a function that takes in two dataframes with a different number of rows, finds the elements that are unique to each dataframe in the first column, and then appends a new row that only contains the unique element to the dataframe where it does not exist. For example:
>>> d1 = {'col1': [1, 2, 5], 'col2': [3, 4, 6]}
>>> df1 = pd.DataFrame(data=d1)
>>> df1
col1 col2
0 1 3
1 2 4
2 5 6
>>> d2 = {'col1': [1, 2, 6], 'col2': [3, 4, 7]}
>>> df2 = pd.DataFrame(data=d2)
>>> df2
col1 col2
0 1 3
1 2 4
2 6 7
>>> standarized_unique_elems(df1, df2)
>>> df1
col1 col2
0 1 3
1 2 4
2 5 6
3 6 NaN
>>> df2
col1 col2
0 1 3
1 2 4
2 6 7
3 5 NaN
Before posting this question, I gave it my best shot, but I can't figure out a good way to append a new row at the bottom of each dataframe with the unique element. Here is what I have so far:
def standardize_shape(df1, df2):
    unique_elements = list(set(df1.iloc[:, 0]).symmetric_difference(set(df2.iloc[:, 0])))
    for elem in unique_elements:
        if elem not in df1.iloc[:, 0].tolist():
            pass  # append a new row with the unique element, rest of the values NaN
        if elem not in df2.iloc[:, 0].tolist():
            pass  # append a new row with the unique element, rest of the values NaN
    return (df1, df2)
I am still new to Pandas, so any help would be greatly appreciated!
We can do it with concat and isin:
out1 = pd.concat([df1,pd.DataFrame({'col1':df2.loc[~df2.col1.isin(df1.col1),'col1']})])
Out[269]:
   col1  col2
0     1   3.0
1     2   4.0
2     5   6.0
2     6   NaN
#out2 = pd.concat([df2,pd.DataFrame({'col1':df1.loc[~df1.col1.isin(df2.col1),'col1']})])
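Putting both directions together, the asker's standardize_shape could be completed along these lines (a sketch; it assumes uniqueness is judged on the first column, as in the question):
import pandas as pd

def standardize_shape(df1, df2):
    # rows whose first-column value is missing from the other frame
    only_in_2 = df2.loc[~df2.iloc[:, 0].isin(df1.iloc[:, 0]), df2.columns[:1]]
    only_in_1 = df1.loc[~df1.iloc[:, 0].isin(df2.iloc[:, 0]), df1.columns[:1]]
    # appended rows carry NaN in every other column
    df1 = pd.concat([df1, only_in_2], ignore_index=True)
    df2 = pd.concat([df2, only_in_1], ignore_index=True)
    return df1, df2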

How to iterate over pandas dataframe columns

I need to do some operations on my dataframe.
My dataframe is:
df = pd.DataFrame(data={'col1':[1,2],'col2':[3,4]})
col1 col2
0 1 3
1 2 4
My operation is column-dependent.
For example, I need to add (+) the .max() of a column to each value in that column.
So df.col1.max() is 2 and df.col2.max() is 4,
and my output should be:
col1 col2
0 3 7
1 4 8
I have tried this:
for i in df.columns:
    df.i += df.i.max()
but I get:
AttributeError: 'DataFrame' object has no attribute 'i'
You can combine df.add and df.max and specify the axis, which avoids any loops:
df1 = df.add(df.max(axis=0))
print(df1)
col1 col2
0 3 7
1 4 8
To loop through the columns and add the maximum of each column, you can do the following:
for col in df:
    df[col] += df[col].max()
This gives
col1 col2
0 3 7
1 4 8
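Since DataFrame + Series broadcasts over the columns by default, the loop-free version can be shortened further (same result, just terser):
# df.max() is a Series of column maxima; adding it aligns on column labels
df += df.max()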

Pandas pivoting and adding a column from CSV with consecutive rows

I have consecutive duplicate rows in two columns.
I want to delete the second duplicate row based on [col1, col2] and move the value of another column to a new one.
Example:
Input
col1 col2 col3
X A 1
X A 2
Y A 3
Y A 4
X B 5
X B 6
Z C 7
Z C 8
Output
col1 col2 col3 col4
X A 1 2
Y A 3 4
X B 5 6
Z C 7 8
I found out about pivoting, but I am struggling to understand how to add another column and avoid the extra index; I would like to preserve everything as written in the example.
This is similar to Question 10 here:
(df.assign(col=df.groupby(['col1','col2']).cumcount())
   .pivot_table(index=['col1','col2'], columns='col', values='col3')
   .reset_index()
)
Output:
col col1 col2 0 1
0 X A 1 2
1 X B 5 6
2 Y A 3 4
3 Z C 7 8
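If the exact labels from the question (col3, col4) are wanted, the pivoted columns can be renamed afterwards; a possible follow-up (note the rows come out sorted by group rather than in the original order):
out = (df.assign(col=df.groupby(['col1','col2']).cumcount())
       .pivot_table(index=['col1','col2'], columns='col', values='col3')
       .reset_index())
out.columns = ['col1', 'col2', 'col3', 'col4']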

Delete pandas dataframe row if every value is equal

If I have a pandas dataframe with rows of float values, how do I delete a row in which all the values are equal?
Use DataFrame.nunique to count the number of unique values per row, combined with Series.ne to filter out the single-valued rows by boolean indexing:
df1 = df[df.nunique(axis=1).ne(1)]
Or test for inequality against the first column and check whether at least one value per row differs, using DataFrame.any:
df1 = df[df.ne(df.iloc[:, 0], axis=0).any(axis=1)]
EDIT: If you want to remove all rows and all columns with identical values, the solution should be extended to also test the columns, using loc and axis=0:
df = pd.DataFrame({
    'B': [4, 4, 4, 4, 4, 4],
    'C': [4, 4, 9, 4, 2, 3],
    'D': [4, 4, 5, 7, 1, 0],
})
print (df)
   B  C  D
0  4  4  4
1  4  4  4
2  4  9  5
3  4  4  7
4  4  2  1
5  4  3  0
df2 = df.loc[df.nunique(axis=1).ne(1), df.nunique(axis=0).ne(1)]
And for second solution:
df2 = df.loc[df.ne(df.iloc[:, 0], axis=0).any(axis=1), df.ne(df.iloc[0], axis=1).any(axis=0)]
print (df2)
   C  D
2  9  5
3  4  7
4  2  1
5  3  0
You can use DataFrame.diff over axis=1 (per row):
# Example dataframe:
df = pd.DataFrame({'Col1': [1, 2, 3],
                   'Col2': [2, 2, 5],
                   'Col3': [4, 2, 9]})
   Col1  Col2  Col3
0     1     2     4
1     2     2     2  # <-- row with all same values
2     3     5     9
df[df.diff(axis=1).fillna(0).ne(0).any(axis=1)]
   Col1  Col2  Col3
0     1     2     4
2     3     5     9
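The same all-equal test can also be phrased directly in NumPy, which may help on very wide frames (a sketch, not from the answers above):
import numpy as np

arr = df.to_numpy()
# compare every column against column 0; keep rows with any differing value
df1 = df[~(arr == arr[:, [0]]).all(axis=1)]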

Pandas: Get mean of different rows when columns are equal

I'm trying to find the mean of values in different rows, grouped by similarities in other columns. Example:
In [14]: pd.DataFrame({'col1':[1,2,1,2], 'col2':['A','C','A','B'], 'col3':[1, 5, 6, 9]})
Out[14]:
col1 col2 col3
0 1 A 1
1 2 C 5
2 1 A 6
3 2 B 9
What I would like is to add a column with the means of col3, for all rows where the combination of col1 and col2 match. Desired output:
Out[14]:
col1 col2 col3 mean
0 1 A 1 3.5
1 2 C 5 5
2 1 A 6 3.5
3 2 B 9 9
I have tried several things with groupby in combination with apply but couldn't get proper results.
It's a transform, my man:
df['mean'] = df.groupby(['col1','col2']).col3.transform('mean')
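A quick check against the example frame (a usage sketch):
import pandas as pd

df = pd.DataFrame({'col1':[1,2,1,2], 'col2':['A','C','A','B'], 'col3':[1, 5, 6, 9]})
df['mean'] = df.groupby(['col1','col2']).col3.transform('mean')
print(df)
#    col1 col2  col3  mean
# 0     1    A     1   3.5
# 1     2    C     5   5.0
# 2     1    A     6   3.5
# 3     2    B     9   9.0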
