Find all duplicate columns in a collection of data frames - python

Given a collection of data frames, the goal is to identify the duplicated column names and return them as a list.
Example
The input is three data frames df1, df2 and df3:
df1 = pd.DataFrame({'a':[1,5], 'b':[3,9], 'e':[0,7]})
a b e
0 1 3 0
1 5 9 7
df2 = pd.DataFrame({'d':[2,3], 'e':[0,7], 'f':[2,1]})
d e f
0 2 0 2
1 3 7 1
df3 = pd.DataFrame({'b':[3,9], 'c':[8,2], 'e':[0,7]})
b c e
0 3 8 0
1 9 2 7
The output is the list ['b', 'e'].

pd.Series.duplicated
Since you are using Pandas, you can use pd.Series.duplicated after concatenating column names:
# concatenate column labels
s = pd.concat([df.columns.to_series() for df in (df1, df2, df3)])
# keep all duplicates only, then extract unique names
res = s[s.duplicated(keep=False)].unique()
print(res)
array(['b', 'e'], dtype=object)
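Since the goal is a plain Python list, a final tolist() converts the array:
res.tolist()
['b', 'e']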
pd.Series.value_counts
Alternatively, you can extract a series of counts and identify rows which have a count greater than 1:
s = pd.concat([df.columns.to_series() for df in (df1, df2, df3)]).value_counts()
res = s[s > 1].index
print(res)
Index(['e', 'b'], dtype='object')
collections.Counter
The classic Python solution is to use collections.Counter followed by a list comprehension. Recall that list(df) returns the column labels of a dataframe, so we can combine this with map and itertools.chain to produce an iterable to feed Counter.
from itertools import chain
from collections import Counter
c = Counter(chain.from_iterable(map(list, (df1, df2, df3))))
res = [k for k, v in c.items() if v > 1]
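Counter preserves first-insertion order, so with the example frames the result matches the earlier approaches:
print(res)
['b', 'e']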

Here is my code for this problem, for comparing only two data frames, without concatenating them.
def getDuplicateColumns(df1, df2):
    df_compare = pd.DataFrame({'df1': df1.columns.to_list()})
    df_compare["df2"] = ""
    # Iterate over all the columns in df1
    for x in range(df1.shape[1]):
        # Select the column at the xth index
        col = df1.iloc[:, x]
        # Collect the names of all columns in df2 with identical content
        duplicateColumnNames = []
        for y in range(df2.shape[1]):
            # Select the column at the yth index
            otherCol = df2.iloc[:, y]
            # Check if the two columns are equal
            if col.equals(otherCol):
                duplicateColumnNames.append(df2.columns.values[y])
        df_compare.loc[df_compare["df1"] == df1.columns.values[x], "df2"] = str(duplicateColumnNames)
    return df_compare
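For the example frames at the top, this should produce something like:
getDuplicateColumns(df1, df2)
  df1    df2
0   a     []
1   b     []
2   e  ['e']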

Related

Matching value with column to retrieve index value

Please see the example dataframe below.
I'm trying to match the values of column X with the column names and retrieve the value from the matched column, so that:
A B C X result
1 2 3 B 2
5 6 7 A 5
8 9 1 C 1
Any ideas?
Here are a couple of methods:
# Apply method:
df['result'] = df.apply(lambda x: df.loc[x.name, x['X']], axis=1)
# List comprehension method (assumes the default RangeIndex):
df['result'] = [df.loc[i, x] for i, x in enumerate(df.X)]
# Pure pandas method:
df['result'] = (df.melt('X', ignore_index=False)
                  .loc[lambda x: x['X'].eq(x['variable']), 'value'])
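As a quick check, all three methods should fill result with [2, 5, 1] on the example data (frame construction as in the next answer):
df = pd.DataFrame({'A': [1, 5, 8], 'B': [2, 6, 9],
                   'C': [3, 7, 1], 'X': ['B', 'A', 'C']})
df['result'] = [df.loc[i, x] for i, x in enumerate(df.X)]
print(df['result'].tolist())   # [2, 5, 1]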
Here I just build a dataframe from your example and call it df (using data as the variable name, since dict would shadow the builtin):
data = {
    'A': (1, 5, 8),
    'B': (2, 6, 9),
    'C': (3, 7, 1),
    'X': ('B', 'A', 'C')}
df = pd.DataFrame(data)
You can extract the value from another column based on 'X' using the following code. There may be a better way to do this without first converting to a list and retrieving the first element.
list(df.loc[df['X'] == 'B', 'B'])[0]
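One such way, for what it's worth, is taking .iloc[0] on the selection directly instead of the list round-trip:
df.loc[df['X'] == 'B', 'B'].iloc[0]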
I'm going to create a column called 'result', fill it with 'NA', and then replace the values based on your conditions. The loop below extracts each value and uses .loc to place it in your dataframe.
df['result'] = 'NA'
for idx, val in enumerate(df['X']):
    extracted = list(df.loc[df['X'] == val, val])[0]
    df.loc[idx, 'result'] = extracted
Here it is as a function:
def search_replace(dataframe, search_col='X', new_col_name='result'):
    dataframe[new_col_name] = 'NA'
    for idx, val in enumerate(dataframe[search_col]):
        extracted = list(dataframe.loc[dataframe[search_col] == val, val])[0]
        dataframe.loc[idx, new_col_name] = extracted
    return dataframe
and the output
>>> search_replace(df)
A B C X result
0 1 2 3 B 2
1 5 6 7 A 5
2 8 9 1 C 1

How to write a function to filter rows based on a list of values one by one and run some analysis

I have a list containing more than 10 values, and I have a full dataframe. I'd like to filter each value from the list into a sub-dataframe and do some analysis on each of them. How can I write a function so I don't need to copy, paste, and change the value so many times?
eg.
lst = ['A', 'B', 'C']
df1 = df[df['column1']=='A']
df2 = df[df['column1']=='B']
df3 = df[df['column1']=='C']
For each sub-dataframe, I will do a groupby and a value count:
df1.groupby(['column2']).size()
df2.groupby(['column2']).size()
df3.groupby(['column2']).size()
First, many DataFrames are not necessary here.
You can filter only the necessary values of column1 and pass both columns to groupby:
L = ['A','B','C']
s = df1[df1['column1'].isin(L)].groupby(['column1', 'column2']).size()
Last select by values of list:
s.loc['A']
s.loc['B']
s.loc['C']
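For reference, a sketch of the result, assuming the nine-row sample frame built in the locals() answer below is used in place of df1:
s.loc['A']
column2
I    2
L    1
dtype: int64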
If you want a function:
def f(df, x):
    return df[df['column1'].eq(x)].groupby(['column2']).size()

print(f(df1, 'A'))
You can use locals() to create variables dynamically (it's not really a good practice but it works well):
df = pd.DataFrame({'column1': list('ABC') * 3, 'column2': list('IKJIJKLNK')})
lst = ['A', 'B', 'C']
for idx, elmt in enumerate(lst, 1):
    locals()[f"df{idx}"] = df[df['column1'] == elmt]
>>> df3
column1 column2
2 C J
5 C K
8 C K
>>> df3.value_counts('column2')
column2
K    2
J    1
dtype: int64
>>> df
column1 column2
0 A I
1 B K
2 C J
3 A I
4 B J
5 C K
6 A L
7 B N
8 C K
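A dictionary comprehension is the usual safer alternative to creating variables dynamically; a small sketch with the same sample frame:
frames = {elmt: df[df['column1'] == elmt] for elmt in lst}
frames['C']                          # same content as df3 above
frames['C'].value_counts('column2')  # same counts as shown earlier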

Joining multiple dataframes with multiple common columns

I have multiple dataframes like this:
df=pd.DataFrame({'a':[1,2,3],'b':[3,4,5],'c':[4,6,7]})
df2=pd.DataFrame({'a':[1,2,3],'d':[66,24,55],'c':[4,6,7]})
df3=pd.DataFrame({'a':[1,2,3],'f':[31,74,95],'c':[4,6,7]})
I want this output:
a c
0 1 4
1 2 6
2 3 7
These are the common columns across the 3 datasets. I am looking for a solution which works for multiple columns without having to specify the common columns, as I have seen on SO (since the actual data frames are huge).
If you need to filter column names with the same content in each DataFrame, you can convert each frame's columns to tuples and compare:
dfs = [df, df2, df3]
df1 = pd.concat([x.apply(tuple) for x in dfs], axis=1)
cols = df1.index[df1.eq(df1.iloc[:, 0], axis=0).all(axis=1)]
df2 = df[cols]
print (df2)
a c
0 1 4
1 2 6
2 3 7
If the column names should be different and only the content needs comparing:
df=pd.DataFrame({'a':[1,2,3],'b':[3,4,5],'c':[4,6,7]})
df2=pd.DataFrame({'r':[1,2,3],'t':[66,24,55],'l':[4,6,7]})
df3=pd.DataFrame({'f':[1,2,3],'g':[31,74,95],'m':[4,6,7]})
dfs = [df, df2, df3]
p = [x.apply(tuple).tolist() for x in dfs]
a = set(p[0]).intersection(*p)
print (a)
{(4, 6, 7), (1, 2, 3)}
You can use functools.reduce to apply the function r_common cumulatively to the dataframes of dfs, from left to right, so as to reduce the list of dfs to a single dataframe df_common. The intersection method is used inside r_common to find the common columns of the two dataframes d1 & d2.
from functools import reduce

def r_common(d1, d2):
    cols = d1.columns.intersection(d2.columns).tolist()
    m = d1[cols].eq(d2[cols]).all()
    return d1[m[m].index]

df_common = reduce(r_common, dfs)  # dfs = [df, df2, df3]
Result:
# print(df_common)
a c
0 1 4
1 2 6
2 3 7
A combination of reduce, intersection, filter and concat could help with your use case:
dfs = (df,df2,df3)
cols = [ent.columns for ent in dfs]
cols
[Index(['a', 'b', 'c'], dtype='object'),
Index(['a', 'd', 'c'], dtype='object'),
Index(['a', 'f', 'c'], dtype='object')]
#find the common columns to all :
from functools import reduce
universal_cols = reduce(lambda x,y : x.intersection(y), cols).tolist()
universal_cols
['a', 'c']
#filter for only universal_cols for each df
updates = [ent.filter(universal_cols) for ent in dfs]
If the columns and contents of the columns are the same, then you can skip the list comprehension and just filter from only one dataframe:
#let's use the first dataframe
output = df.filter(universal_cols)
If the columns' contents are different, then concatenate and drop duplicates:
#concatenate and drop duplicates
res = pd.concat(updates).drop_duplicates()
res #output has the same result
a c
0 1 4
1 2 6
2 3 7

Pandas DataFrame: How to merge left with a second DataFrame on a combination of index and columns

I'm trying to merge two dataframes.
I want to merge on two keys: one that is the index of the second DataFrame, and one that is a column in the second DataFrame. The column/index names differ between the two DataFrames.
Example:
import pandas as pd
df2 = pd.DataFrame([(i, 'ABCDEFGHJKL'[j], i*2 + j)
                    for i in range(10)
                    for j in range(10)],
                   columns=['Index', 'Sub', 'Value']).set_index('Index')

df1 = pd.DataFrame([['SOMEKEY-A', 0, 'A', 'MORE'],
                    ['SOMEKEY-B', 4, 'C', 'MORE'],
                    ['SOMEKEY-C', 7, 'A', 'MORE'],
                    ['SOMEKEY-D', 5, 'Z', 'MORE']],
                   columns=['key', 'Ext. Index', 'Ext. Sub', 'Description']
                   ).set_index('key')
df1 prints out
key Ext. Index Ext. Sub Description
SOMEKEY-A 0 A MORE
SOMEKEY-B 4 C MORE
SOMEKEY-C 7 A MORE
SOMEKEY-D 5 Z MORE
the first lines of df2 are
Index Sub Value
0 A 0
0 B 1
0 C 2
0 D 3
0 E 4
I want to merge "Ext. Index" and "Ext. Sub" with DataFrame df2, where the index is "Index" and the column is "Sub"
The expected result is:
key Ext. Index Ext. Sub Description Ext. Value
SOMEKEY-A 0 A MORE 0
SOMEKEY-B 4 C MORE 10
SOMEKEY-C 7 A MORE 14
SOMEKEY-D 5 Z MORE None
Manually, the merge works like this
def get_value(x):
    try:
        return df2[(df2.Sub == x['Ext. Sub']) &
                   (df2.index == x['Ext. Index'])]['Value'].iloc[0]
    except IndexError:
        return None

df1['Ext. Value'] = df1.apply(get_value, axis=1)
Can I do this with a pd.merge or pd.concat command, without changing df2 by turning df2.index into a column?
Try using:
df_new = (df1.merge(df2[['Sub', 'Value']],
                    how='left',
                    left_on=['Ext. Index', 'Ext. Sub'],
                    right_on=[df2.index, 'Sub'])
             .set_index(df1.index)
             .drop('Sub', axis=1))
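As a quick sanity check (a sketch; note the merged column arrives named Value, not Ext. Value, and pandas may add auxiliary key columns when arrays are passed as merge keys):
print(df_new['Value'].tolist())
# expected: [0.0, 10.0, 14.0, nan] -- matching the manual get_value approach,
# with NaN where ('Ext. Index', 'Ext. Sub') has no match in df2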

Run a function that requires multiple arguments through multiple columns - Pandas

Hi, I currently have a function that can split values in the same cell delimited by a newline. However, the function below only lets me pass one column at a time; I was wondering if there is any way to pass multiple columns, or in fact the whole dataframe.
A sample would be like this
A B C
1\n2\n3 2\n5 A
The code is below
def tidy_split(df, column, sep='|', keep=False):
    indexes = list()
    new_values = list()
    df = df.dropna(subset=[column])
    for i, presplit in enumerate(df[column].astype(str)):
        values = presplit.split(sep)
        if keep and len(values) > 1:
            indexes.append(i)
            new_values.append(presplit)
        for value in values:
            indexes.append(i)
            new_values.append(value)
    new_df = df.iloc[indexes, :].copy()
    new_df[column] = new_values
    return new_df
It currently works when I run
df1 = tidy_split(df, 'A', '\n')
After running the function to split only column A:
A B C
1 2\n5 A
2 2\n5 A
3 2\n5 A
I was hoping to be able to pass in more than the single accepted column argument, and in this case split column 'B' as well. Previously I attempted passing in a lambda and using apply, but it requires a positional argument, which is 'column'. Would appreciate any help given! Was thinking if a loop is possible.
EDIT: Desired output, as each number refers to something important.
Before
A B C
1\n2\n3 2\n5 A
After
A B C
1 2 A
2 5 A
3 n/a A
Input:
A B C
0 1\n2\n3 2\n5 A
Code:
import pandas as pd

cols = df.columns.tolist()

# create a list in each cell by splitting on '\n'
for col in cols:
    df[col] = df[col].apply(lambda x: str(x).split("\n"))

# empty dataframe to store the result
dfs = pd.DataFrame()

# loop over rows to construct small dataframes
# and then accumulate each into the resulting dataframe
for ind, row in df.iterrows():
    a_vals = row['A']
    b_vals = row['B'] + ["n/a"] * (len(a_vals) - len(row['B']))
    c_vals = row['C'] + [row['C'][0]] * (len(a_vals) - len(row['C']))
    temp = pd.DataFrame({'A': a_vals, 'B': b_vals, 'C': c_vals})
    dfs = pd.concat([dfs, temp], axis=0, ignore_index=True)
Output:
A B C
0 1 2 A
1 2 5 A
2 3 n/a A
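Note that this approach assumes column A always holds the most parts per row; B is padded with "n/a" and C by repeating its first value, which is exactly what produces the desired output above.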
