Compare pandas dataframes by multiple columns - python

What is the best way to figure out how two dataframes differ based on a combination of multiple columns? So if I have the following:
df1:
A B C
0 1 2 3
1 3 4 2
df2:
A B C
0 1 2 3
1 3 5 2
I want to show all rows where there is a difference, such as (3,4,2) vs. (3,5,2) in the example above. I tried pd.merge(), thinking that if I used all columns as the key with an outer join I would end up with a dataframe that gave me what I want, but it doesn't turn out that way.
Thanks to EdChum I was able to use a mask from a boolean diff as below, but first I had to make sure the indexes were comparable.
df1 = df1.set_index('A')
df2 = df2.set_index('A')  # this gave me a nice index using one of the keys
df1 = df1.reindex_like(df2)  # if there are different rows, then I get nulls
df1[~(df1 == df2).all(axis=1)]  # this gave me all rows that differed

We can use .all with axis=1 to perform row-wise comparisons; we can then negate the resulting boolean mask with ~ to show the rows that differ:
In [43]:
df[~(df==df1).all(axis=1)]
Out[43]:
A B C
1 3 4 2
breaking this down:
In [44]:
df==df1
Out[44]:
A B C
0 True True True
1 True False True
In [45]:
(df==df1).all(axis=1)
Out[45]:
0 True
1 False
dtype: bool
We can then pass the above as a boolean index to df and invert it using ~
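Putting the answer together as a self-contained sketch (the frame names df and df1 follow the answer's naming; the data is the question's sample):

```python
import pandas as pd

# df corresponds to df1 in the question, df1 to df2
df = pd.DataFrame({'A': [1, 3], 'B': [2, 4], 'C': [3, 2]})
df1 = pd.DataFrame({'A': [1, 3], 'B': [2, 5], 'C': [3, 2]})

# Element-wise comparison requires identical indexes and columns;
# keep only the rows where not every column matches
diff = df[~(df == df1).all(axis=1)]
print(diff)
```

This prints the single differing row, (3, 4, 2), at index 1.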

Related

pandas : pd.concat results in duplicated columns

I have a number of large dataframes in a list. I concatenate all of them to produce a single large dataframe.
df_list # This contains a list of dataframes
result = pd.concat(df_list, axis=0)
result.columns.duplicated().any() # This returns True
My expectation was that pd.concat would not produce duplicate columns.
I want to understand when it could result in duplicate columns so that I can debug the source.
I could not reproduce the problem with a toy dataset.
I have verified that the input dataframes have unique columns by running df.columns.duplicated().any().
The pandas version used is 1.0.1.
(Pdb) p result_data[0].columns.duplicated().any()
False
(Pdb) p result_data[1].columns.duplicated().any()
False
(Pdb) p result_data[2].columns.duplicated().any()
False
(Pdb) p result_data[3].columns.duplicated().any()
False
(Pdb) p pd.concat(result_data[0:4]).columns.duplicated().any()
True
Check the below behaviour:
In [452]: df1 = pd.DataFrame({'A':[1,2,3], 'B':[2,3,4]})
In [468]: df2 = pd.DataFrame({'A':[1,2,3], 'B':[2,4,5]})
In [460]: df_list = [df1,df2]
This concats and keeps duplicate columns:
In [463]: pd.concat(df_list, axis=1)
Out[474]:
A B A B
0 1 2 1 2
1 2 3 2 4
2 3 4 3 5
pd.concat always concatenates the dataframes as-is; it never drops duplicate columns.
If you concatenate without specifying an axis (the default is axis=0), it appends one dataframe below another along the same columns.
So you can end up with duplicate rows, but not duplicate columns.
In [477]: pd.concat(df_list)
Out[477]:
A B
0 1 2 ## duplicate row
1 2 3
2 3 4
0 1 2 ## duplicate row
1 2 4
2 3 5
You can remove these duplicate rows by using drop_duplicates():
In [478]: pd.concat(df_list).drop_duplicates()
Out[478]:
A B
0 1 2
1 2 3
2 3 4
1 2 4
2 3 5
Update after OP's comment:
In [507]: df_list[0].columns.duplicated().any()
Out[507]: False
In [508]: df_list[1].columns.duplicated().any()
Out[508]: False
In [510]: pd.concat(df_list[0:2]).columns.duplicated().any()
Out[510]: False
I have the same issue when I get data from IEXCloud. I used IEXFinance functions to grab different data sets, which are all supposed to return dataframes. I then use concat to join the dataframes. It looks to have repeated the first column (symbols) into column 97. The data in columns 96 and 98 were from the second dataframe. There are no duplicate columns in df1 or df2. I can't see any logical reason for duplicating it there. df2 has 70 columns. I suspect some of what was returned as a 'dataframe' is something else, but this doesn't explain the seemingly random position at which the concat function chooses to duplicate the first column of the first df!
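Since the toy data doesn't reproduce the problem, one way to debug the source is to concatenate incrementally and report the first list position that introduces a duplicated column label. first_bad_concat is a hypothetical helper sketched for this purpose, not a pandas function:

```python
import pandas as pd

def first_bad_concat(df_list):
    """Return the position of the first frame whose addition makes
    pd.concat's result (or the concat call itself) show duplicated
    column labels, or None if the whole list concatenates cleanly."""
    for i in range(1, len(df_list) + 1):
        try:
            if pd.concat(df_list[:i], axis=0).columns.duplicated().any():
                return i - 1
        except Exception:
            # concat can also raise outright when inputs with duplicate
            # labels need column alignment
            return i - 1
    return None

good = pd.DataFrame({'A': [1], 'B': [2]})
bad = pd.DataFrame([[1, 2, 3]], columns=['A', 'B', 'A'])  # duplicate label
print(first_bad_concat([good, good]))       # None: clean list
print(first_bad_concat([good, good, bad]))  # 2: the culprit's position
```

Running this on the real df_list should point at the offending frame, which can then be inspected directly.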

How to compare two dataframes and filter rows and columns where a difference is found

I am testing dataframes for equality.
df_diff=(df1!=df2)
I get df_diff, which has the same shape as df1/df2 and contains boolean True/False values.
Now I would like to keep only the columns and rows of df1 where there is at least one differing value.
If I simply do
df1 = df1[df_diff.values]
I get all the rows where there was at least one True in df_diff, but also lots of columns that originally contained only False.
As a second step, I would then like to replace all the values (element-wise in the dataframe) that were equal (where df_diff==False) with NaNs.
example:
df1=pd.DataFrame(data=[[1,2,3],[4,5,6],[7,8,9]])
df2=pd.DataFrame(data=[[1,99,3],[4,5,99],[7,8,9]])
I would like to get from df1
0 1 2
0 1 2 3
1 4 5 6
2 7 8 9
to
1 2
0 2 NaN
1 NaN 6
I think you need DataFrame.any to check for at least one True per row or per column:
df = df_diff[df_diff.any(axis=1)]
It is possible to filter both of the original dataframes like so:
df11 = df1[df_diff.any(axis=1)]
df22 = df2[df_diff.any(axis=1)]
If you want to filter both rows and columns:
df = df_diff.loc[df_diff.any(axis=1), df_diff.any()]
EDIT: Filter df1 and add NaNs with where:
df_diff=(df1!=df2)
m1 = df_diff.any(axis=1)
m2 = df_diff.any()
out = df1.loc[m1, m2].where(df_diff.loc[m1, m2])
print (out)
1 2
0 2.0 NaN
1 NaN 6.0
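The whole edit as a runnable script, using the question's sample frames:

```python
import pandas as pd

df1 = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df2 = pd.DataFrame(data=[[1, 99, 3], [4, 5, 99], [7, 8, 9]])

df_diff = df1 != df2
m1 = df_diff.any(axis=1)  # rows with at least one difference
m2 = df_diff.any()        # columns with at least one difference

# Keep only differing rows/columns, then blank out the equal cells
out = df1.loc[m1, m2].where(df_diff.loc[m1, m2])
print(out)
```

The equal cells become NaN because where keeps values only where the mask is True.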

Pandas how to check row is from which dataframe when comparing 2 dataframes?

I have the following code, which compares two columns in two dataframes; it returns the rows that are different between the dataframes, but I want to get the differing rows from df1 only, not from both:
df1 = pd.DataFrame([('a','b','src'), ('a','b','src'), ('c','b','src'),('a','d','src')],columns=['col1','col2','origin'])
df2 = df1.copy(deep=True)
df2['origin'] = 'tgt'
df1.loc[3, 'col1'] = 't'
df2.loc[2, 'col2'] = 't'
df1[(df1['col1'] != df2['col1']) | (df1['col2'] != df2['col2'])]
which gives:
  col1 col2 origin
2    c    b    src
3    t    d    src
Now, over here I do see the 2 differences but the origin column is always src. What I want is, the count of rows which are different but only from source i.e. df1
Because both DataFrames have the same columns and the same indices, it is possible to compare them element-wise.
To count the rows that are not equal between df1 and df2, you need the sum of the Trues in the boolean mask:
mask = (df1['col1'] != df2['col1']) | (df1['col2'] != df2['col2'])
print (mask)
0 False
1 False
2 True
3 True
dtype: bool
out = mask.sum()
print (out)
2
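To actually pull those differing rows from the source side only, the same mask can index df1 directly; a sketch reusing the question's setup (with .loc in place of the chained assignments):

```python
import pandas as pd

df1 = pd.DataFrame([('a', 'b', 'src'), ('a', 'b', 'src'),
                    ('c', 'b', 'src'), ('a', 'd', 'src')],
                   columns=['col1', 'col2', 'origin'])
df2 = df1.copy(deep=True)
df2['origin'] = 'tgt'
df1.loc[3, 'col1'] = 't'
df2.loc[2, 'col2'] = 't'

mask = (df1['col1'] != df2['col1']) | (df1['col2'] != df2['col2'])
src_only = df1[mask]   # differing rows, taken from df1 only
print(src_only)        # origin is 'src' in every row
print(len(src_only))   # 2
```

Indexing df1 (rather than df2) with the mask is what pins origin to src.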

Pandas (Python) - Update column of a dataframe from another one with conditions and different columns

I had a problem and found a solution, but I feel it's the wrong way to do it. Maybe there is a more 'canonical' way to do it.
I already had an answer for a really similar problem, but here the dataframes do not have the same number of rows. Sorry for the "double post", but the first one is still valid, so I think it's better to make a new one.
Problem
I have two dataframes that I would like to merge without adding an extra column and without erasing existing info. Example:
Existing dataframe (df)
A A2 B
0 1 4 0
1 2 5 1
2 2 5 1
Dataframe to merge (df2)
A A2 B
0 1 4 2
1 3 5 2
I would like to update df with df2 where columns 'A' and 'A2' correspond.
The result would be :
A A2 B
0 1 4 2 <= Update value ONLY
1 2 5 1
2 2 5 1
Here is my solution, but I think it's not a really good one.
import pandas as pd
df = pd.DataFrame([[1,4,0],[2,5,1],[2,5,1]],columns=['A','A2','B'])
df2 = pd.DataFrame([[1,4,2],[3,5,2]],columns=['A','A2','B'])
df = df.merge(df2,on=['A', 'A2'],how='left')
df['B_y'].fillna(0, inplace=True)
df['B'] = df['B_x']+df['B_y']
df = df.drop(['B_x','B_y'], axis=1)
print(df)
I tried this solution :
rows = (df[['A','A2']] == df2[['A','A2']]).all(axis=1)
df.loc[rows,'B'] = df2.loc[rows,'B']
But I get this error because of the mismatched number of rows:
ValueError: Can only compare identically-labeled DataFrame objects
Does anyone have a better way to do this?
Thanks!
I think you can use DataFrame.isin to check which rows are the same in both DataFrames. Then create NaNs by the mask, fill them with combine_first, and last cast to int:
mask = df[['A', 'A2']].isin(df2[['A', 'A2']]).all(1)
print (mask)
0 True
1 False
2 False
dtype: bool
df.B = df.B.mask(mask).combine_first(df2.B).astype(int)
print (df)
A A2 B
0 1 4 2
1 2 5 1
2 2 5 1
With a minor tweak in the way in which the boolean mask gets created, you can get it to work:
cols = ['A', 'A2']
# Slice it to match the shape of the other dataframe to compare elementwise
rows = (df[cols].values[:df2.shape[0]] == df2[cols].values).all(1)
df.loc[rows,'B'] = df2.loc[rows,'B']
df
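For what it's worth, a slightly tidier variant of the question's own merge-based solution; it assumes the (A, A2) pairs in df2 are unique, so the left merge cannot add rows:

```python
import pandas as pd

df = pd.DataFrame([[1, 4, 0], [2, 5, 1], [2, 5, 1]], columns=['A', 'A2', 'B'])
df2 = pd.DataFrame([[1, 4, 2], [3, 5, 2]], columns=['A', 'A2', 'B'])

# Left merge keeps df's rows; B_new is NaN where (A, A2) has no match in df2
merged = df.merge(df2, on=['A', 'A2'], how='left', suffixes=('', '_new'))
df['B'] = merged['B_new'].fillna(merged['B']).astype(int)
print(df)
```

Unlike the original B_x + B_y trick, this fills unmatched rows straight from the existing B instead of relying on a 0 placeholder, so it also works when B can legitimately be 0.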

How to use pandas apply function on all columns of some rows of data frame

I have a dataframe. I want to replace the values of all columns of some rows with a default value. Is there a way to do this via the pandas apply function?
Here is the dataframe
import pandas as pd
temp=pd.DataFrame({'a':[1,2,3,4,5,6],'b':[2,3,4,5,6,7],'c':['p','q','r','s','t','u']})
mylist=['p','t']
How do I replace the values in columns a and b with the default value 0 where the value of column c is in mylist?
Is there a way to do this using pandas functionality, avoiding for loops?
Use isin to create a boolean mask and use loc to set the rows that meet the condition to the desired new value:
In [37]:
temp.loc[temp['c'].isin(mylist),['a','b']] = 0
temp
Out[37]:
a b c
0 0 0 p
1 2 3 q
2 3 4 r
3 4 5 s
4 0 0 t
5 6 7 u
result of the inner isin:
In [38]:
temp['c'].isin(mylist)
Out[38]:
0 True
1 False
2 False
3 False
4 True
5 False
Name: c, dtype: bool
A NumPy-based method would be to use np.in1d to get such a mask and use it like so (with .loc, since .ix has been removed from pandas):
mask = np.in1d(temp.c, mylist)
temp.loc[mask, temp.columns != 'c'] = 0
This will replace in all columns except 'c'. If you are looking to replace in specific columns, say 'a' and 'b', edit the last line to:
temp.loc[mask, ['a','b']] = 0
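In current NumPy and pandas, np.isin supersedes np.in1d and .loc replaces the removed .ix; the same idea as a runnable sketch:

```python
import numpy as np
import pandas as pd

temp = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
                     'b': [2, 3, 4, 5, 6, 7],
                     'c': ['p', 'q', 'r', 's', 't', 'u']})
mylist = ['p', 't']

mask = np.isin(temp['c'], mylist)     # np.isin is the modern np.in1d
temp.loc[mask, temp.columns != 'c'] = 0  # zero every column except 'c'
print(temp)
```

Rows 0 and 4 (where c is 'p' or 't') get a and b zeroed; all other rows are untouched.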
