I have two dataframes, df1 and df2. The first column in both is a customer ID, which is an int, but the other columns contain various string values. I want to produce a new dataframe df3 that contains, for each customer ID, the set of values found in df2 but not in df1.
Example:
df1:
v1 v2 v3 v4
cust
1 A B B A
2 A A A A
3 B B A A
4 B C A A
df2:
v1 v2 v3 v4
cust
1 A A C B
2 A A C B
3 C B B A
4 C B B A
Expected output:
cust
1 {C}
2 {B, C}
3 {C}
4 {}
In [1]: import numpy as np
   ...: import pandas as pd
In [2]: df_2 = pd.DataFrame({"KundelID" : list(range(1,11)),
...: 'V1' : list('AACCBBBCCC'),
...: 'V2' : list('AABBBCCCAA'),
...: 'V3' : list('CCBBBBBAAB'),
...: 'V4' : list('BBAACAAAAB')})
...: df_1 = pd.DataFrame({"KundelID" : list(range(1,11)),
...: 'V1' : list('AABBCCCCCC'),
...: 'V2' : list('BABCCCCAAA'),
...: 'V3' : list('BAAAAABBBB'),
...: 'V4' : list('AAAACCCCBB')})
In [3]: df_1
Out[3]:
KundelID V1 V2 V3 V4
0 1 A B B A
1 2 A A A A
2 3 B B A A
3 4 B C A A
4 5 C C A C
5 6 C C A C
6 7 C C B C
7 8 C A B C
8 9 C A B B
9 10 C A B B
In [4]: df_2
Out[4]:
KundelID V1 V2 V3 V4
0 1 A A C B
1 2 A A C B
2 3 C B B A
3 4 C B B A
4 5 B B B C
5 6 B C B A
6 7 B C B A
7 8 C C A A
8 9 C A A A
9 10 C A B B
In [7]: pd.DataFrame({"KundeID" : df_2.KundelID,
   ...:               'Not-in-df_1' : [','.join([i for i in df_2_ if i not in df_1_]) or None
   ...:                               for df_1_, df_2_ in zip(df_1.T[1:].apply(np.unique),
   ...:                                                       df_2.T[1:].apply(np.unique))]})
Out[7]:
KundeID Not-in-df_1
0 1 C
1 2 B,C
2 3 C
3 4 None
4 5 B
5 6 B
6 7 A
7 8 None
8 9 None
9 10 None
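The one-liner above is dense; the same logic can be unpacked into an explicit loop for readability. A minimal sketch, assuming the same df_1/df_2 and imports as above:
not_in_df_1 = []
# df_1.T[1:] drops the KundelID row of the transpose; apply(np.unique) then
# yields, per customer, the array of unique values found in that row
for vals_1, vals_2 in zip(df_1.T[1:].apply(np.unique), df_2.T[1:].apply(np.unique)):
    diff = [v for v in vals_2 if v not in vals_1]
    not_in_df_1.append(','.join(diff) if diff else None)
result = pd.DataFrame({"KundeID": df_2.KundelID, "Not-in-df_1": not_in_df_1})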
The idea is to transform all the values in each row into a set; then we can take the set difference for each customer ID, without explicit loops or nested list comprehensions:
df3 = (
pd
.concat([
df1.reindex(index=df2.index).apply(set, axis=1),
df2.apply(set, axis=1),
], axis=1)
.apply(lambda r: r[1].difference(r[0]), axis=1)
)
print(df3)
# Out:
cust
1 {C}
2 {B, C}
3 {C}
4 {}
Notes:
The bit df1.reindex(index=df2.index) handles the case where some IDs are absent from df1 or df2.
It is trivial to transform the output into something other than a set: for example, using ','.join(r[1].difference(r[0])) as the lambda body produces strings instead.
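For instance, a minimal sketch of that string variant:
df3_str = (
    pd
    .concat([
        df1.reindex(index=df2.index).apply(set, axis=1),
        df2.apply(set, axis=1),
    ], axis=1)
    # join the difference into a comma-separated string instead of a set
    .apply(lambda r: ','.join(r[1].difference(r[0])), axis=1)
)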
Setup:
For future reference, in order to facilitate a reproducible example, it is a good idea to provide code that SO-ers can directly copy/paste for a quick start into your problem.
import io
import pandas as pd

df1 = pd.read_csv(io.StringIO("""
1 A B B A
2 A A A A
3 B B A A
4 B C A A
"""), sep=' ', names='cust v1 v2 v3 v4'.split()).set_index('cust')
df2 = pd.read_csv(io.StringIO("""
1 A A C B
2 A A C B
3 C B B A
4 C B B A
"""), sep=' ', names='cust v1 v2 v3 v4'.split()).set_index('cust')
You transform each dataframe into a Series of sets, then perform a set operation across the Series, leveraging the intrinsic data alignment from pandas Series:
df2.apply(set, axis=1) - df1.apply(set, axis=1)
Output:
cust
1 {C}
2 {C, B}
3 {C}
4 {}
dtype: object
If you want the symmetric difference across datasets (i.e. elements in either set but not in both), then it's better to use pd.concat:
dfs = [df1, df2]
pd.concat([df.apply(set, axis=1) for df in dfs], axis=1).apply(lambda x: x[0] ^ x[1], axis=1)
(Writing axis=1 explicitly is clearer than passing 1 positionally, which newer pandas versions deprecate.) Replacing x[0] ^ x[1] with set.symmetric_difference(*x) should work as well.
Interestingly, Series_A ^ Series_B doesn't work as expected here; instead, it apparently returns a boolean Series telling us whether the result of the element-wise set operation is non-empty.
Related
I have to copy columns from one DataFrame A to another DataFrame B, but the column names in A and B do not match.
What is the best way to do it? There are several columns like this; do I need to write B["SO"] = A["Sales Order"] etc. for each column?
I would use pd.concat:
combined_df = pd.concat([df1, df2[['column_a', 'column_b']]], axis=1)
It also gives you the power to concatenate different-sized dataframes, do outer joins, etc.
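A sketch of doing the renaming in the same step, using the hypothetical column name from the question (and assuming A and B share an index):
B = pd.concat(
    [B, A[['Sales Order']].rename(columns={'Sales Order': 'SO'})],
    axis=1)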
Use:
df1 = pd.DataFrame({
'SO':list('abcdef'),
'RI':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
})
print (df1)
SO RI C
0 a 4 7
1 b 5 8
2 c 4 9
3 d 5 4
4 e 5 2
5 f 4 3
df2 = pd.DataFrame({
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],
'F':list('aaabbb')
})
print (df2)
D E F
0 1 5 a
1 3 3 a
2 5 6 a
3 7 9 b
4 1 2 b
5 0 4 b
Create a dictionary for renaming, select the matching columns, rename them by the dict, and use DataFrame.join to attach them to the original; the DataFrames are matched by index values:
d = {'SO':'Sales Order',
'RI':'Retail Invoices'}
df11 = df1[list(d.keys())].rename(columns=d)
print (df11)
Sales Order Retail Invoices
0 a 4
1 b 5
2 c 4
3 d 5
4 e 5
5 f 4
df = df2.join(df11)
print (df)
D E F Sales Order Retail Invoices
0 1 5 a a 4
1 3 3 a b 5
2 5 6 a c 4
3 7 9 b d 5
4 1 2 b e 5
5 0 4 b f 4
Make a dictionary of abbreviations and try this code.
Ex:
full_form_dict = {'SO':'Sales Order',
'RI':'Retail Invoices',}
A_col = list(A.columns)
B_col = [v for k,v in full_form_dict.items() if k in A_col]
# to loop over A_col
# B_col = [v for col in A_col for k,v in full_form_dict.items() if k == col]
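A sketch of then applying the mapping, assuming (as in the question) that A carries the full names and B should receive the abbreviated ones:
# copy each mapped column of A into B under its abbreviation;
# plain assignment aligns the two frames on their shared index
for abbr, full in full_form_dict.items():
    if full in A.columns:
        B[abbr] = A[full]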
Sorry if this seems simple, but I have been struggling to find an answer to this.
I have a large dataframe in the format shown in the picture:
Each row can be uniquely identified by the multi-index built from the columns "trip_id", "direction_id", "stop_sequence".
I would like a looping method to create a Python dictionary of dataframes, where each dataframe is the subset of the large dataframe containing all the rows for one "trip_id" + "direction_id" combination.
At the end of the loops I would like to have a dictionary of dataframes that I can access with a simple key: either an integer index (e.g. 0 to 10,000) or the combination of trip_id and direction_id.
E.g. for the image above, I would like all the rows where the trip_id is "17067064.T0.2-EPP-F-mjp-1.8.R" and the direction_id is "1" to be in one dataframe of this dictionary collection.
Thank you for your help.
Kind regards,
Ben
Use groupby with dictionary comprehension:
df = pd.DataFrame({
'A':list('abcdef'),
'B':[4,5,5,5,5,4],
'C':[7,8,9,4,2,3],
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],
'F':list('aaabbb')
}).set_index(['F','B','C'])
print (df)
A D E
F B C
a 4 7 a 1 5
5 8 b 3 3
9 c 5 6
b 5 4 d 7 9
2 e 1 2
4 3 f 0 4
# python 3.6+
dfs = {f'{a}_{b}': v for (a, b), v in df.groupby(level=['F','B'])}
# python below 3.6
# dfs = {'{}_{}'.format(a, b): v for (a, b), v in df.groupby(level=['F','B'])}
print (dfs)
{'a_4': A D E
F B C
a 4 7 a 1 5, 'a_5': A D E
F B C
a 5 8 b 3 3
9 c 5 6, 'b_4': A D E
F B C
b 4 3 f 0 4, 'b_5': A D E
F B C
b 5 4 d 7 9
2 e 1 2}
print (dfs['a_4'])
A D E
F B C
a 4 7 a 1 5
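If you prefer plain integer keys (0, 1, 2, ...) instead of the combined string keys, a minimal sketch enumerating the same groupby:
dfs_int = {i: v for i, (_, v) in enumerate(df.groupby(level=['F','B']))}
print(dfs_int[0])  # same frame as dfs['a_4']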
I am trying to get an output where column d of d1 and d2 is added together wherever columns a, b, c are the same (like a groupby).
For example
d1 = pd.DataFrame([[1,2,3,4]],columns=['a','b','c','d'])
d2 = pd.DataFrame([[1,2,3,4],[2,3,4,5]],columns=['a','b','c','d'])
then I'd like to get an output as
a b c d
0 1 2 3 8
1 2 3 4 5
That is: merge the two data frames and add the resulting column d where a, b, c are the same.
d1.add(d2) or radd gives me an element-wise sum of all columns instead.
The solution should be a DataFrame that can again be added to another one in the same way.
Any help is appreciated.
You can use set_index first:
print (d2.set_index(['a','b','c'])
.add(d1.set_index(['a','b','c']), fill_value=0)
.astype(int)
.reset_index())
a b c d
0 1 2 3 8
1 2 3 4 5
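Because the result is an ordinary DataFrame, the same pattern chains, so you can add a third frame in exactly the same way. A sketch with a hypothetical d3:
d3 = pd.DataFrame([[1,2,3,10]], columns=['a','b','c','d'])
res = (d2.set_index(['a','b','c'])
         .add(d1.set_index(['a','b','c']), fill_value=0)
         .add(d3.set_index(['a','b','c']), fill_value=0)
         .astype(int)
         .reset_index())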
Alternatively, concatenate and group:
df = pd.concat([d1, d2])
df.groupby(['a', 'b', 'c'], as_index=False).sum()
a b c d
0 1 2 3 8
1 2 3 4 5
I would like to join 2 dataframes, so that the result will be the intersection on the two datasets on the key column.
By doing this:
result = pd.merge(df1,df2,on='key', how='inner')
I will get what I need, but with the extra columns of df2. I only want df1's columns in the result (I do not want to delete them afterwards).
Any ideas?
Thanks,
Here is a generic solution which works for one or for multiple key (joining) columns:
Setup:
In [28]: a = pd.DataFrame({'a':[1,2,3,4], 'b':[10,20,30,40], 'c':list('abcd')})
In [29]: b = pd.DataFrame({'a':[3,4,5,6], 'b':[30,41,51,61], 'c':list('efgh')})
In [30]: a
Out[30]:
a b c
0 1 10 a
1 2 20 b
2 3 30 c
3 4 40 d
In [31]: b
Out[31]:
a b c
0 3 30 e
1 4 41 f
2 5 51 g
3 6 61 h
multiple joining keys:
In [32]: join_cols = ['a','b']
In [33]: a.merge(b[join_cols], on=join_cols)
Out[33]:
a b c
0 3 30 c
single joining key:
In [34]: join_cols = ['a']
In [35]: a.merge(b[join_cols], on=join_cols)
Out[35]:
a b c
0 3 30 c
1 4 40 d
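For a single key column there is also an alternative that avoids merge entirely: boolean filtering with isin. A sketch on the same frames (unlike merge, this keeps the original index and never duplicates rows on repeated keys):
In [36]: a[a['a'].isin(b['a'])]
Out[36]:
a b c
2 3 30 c
3 4 40 d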
I am trying to efficiently remove duplicates in Pandas in which duplicates are inverted across two columns. For example, in this data frame:
import pandas as pd
key = pd.DataFrame({'p1':['a','b','a','a','b','d','c'],'p2':['b','a','c','d','c','a','b'],'value':[1,1,2,3,5,3,5]})
df = pd.DataFrame(key,columns=['p1','p2','value'])
print(df)
p1 p2 value
0 a b 1
1 b a 1
2 a c 2
3 a d 3
4 b c 5
5 d a 3
6 c b 5
I would want to remove rows 1, 5 and 6, leaving me with just:
p1 p2 value
0 a b 1
2 a c 2
3 a d 3
4 b c 5
Thanks in advance for ideas on how to do this.
Reorder the p1 and p2 values so they appear in a canonical order:
mask = df['p1'] < df['p2']
df['first'] = df['p1'].where(mask, df['p2'])
df['second'] = df['p2'].where(mask, df['p1'])
yields
In [149]: df
Out[149]:
p1 p2 value first second
0 a b 1 a b
1 b a 1 a b
2 a c 2 a c
3 a d 3 a d
4 b c 5 b c
5 d a 3 a d
6 c b 5 b c
Then you can drop_duplicates:
df = df.drop_duplicates(subset=['value', 'first', 'second'])
import pandas as pd
key = pd.DataFrame({'p1':['a','b','a','a','b','d','c'],'p2':['b','a','c','d','c','a','b'],'value':[1,1,2,3,5,3,5]})
df = pd.DataFrame(key,columns=['p1','p2','value'])
mask = df['p1'] < df['p2']
df['first'] = df['p1'].where(mask, df['p2'])
df['second'] = df['p2'].where(mask, df['p1'])
df = df.drop_duplicates(subset=['value', 'first', 'second'])
df = df[['p1', 'p2', 'value']]
yields
In [151]: df
Out[151]:
p1 p2 value
0 a b 1
2 a c 2
3 a d 3
4 b c 5
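An equivalent, slightly more compact variant builds the canonical pair with numpy in one step; a sketch, assuming the original 7-row df from the question:
import numpy as np
# sort each (p1, p2) pair so inverted duplicates line up regardless of order
canon = pd.DataFrame(np.sort(df[['p1', 'p2']].values, axis=1),
                     columns=['first', 'second'], index=df.index)
df_unique = df[~pd.concat([canon, df['value']], axis=1).duplicated()]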