Comparing 2 dataframes in Pandas returns wrong values - python

There are 2 dfs; the datatypes are the same.
df1 =
ID city name value
1 LA John 111
2 NY Sam 222
3 SF Foo 333
4 Berlin Bar 444
df2 =
ID city name value
1 NY Sam 223
2 LA John 111
3 SF Foo 335
4 London Foo1 999
5 Berlin Bar 444
I need to compare them and produce a new df containing only the rows that are in df2 but not in df1.
For some reason, the results of the different methods I've applied are wrong.
So far I've tried:
pd.concat([df1, df2], join='inner', ignore_index=True)
but it returns all values together
pd.merge(df1, df2, how='inner')
it returns df1
then this one
df1[~(df1.iloc[:, 0].isin(list(df2.iloc[:, 0])))]
it returns df1
The desired output is
ID city name value
1 NY Sam 223
2 SF Foo 335
3 London Foo1 999

Use DataFrame.merge on all columns except the first, together with the indicator parameter:
c = df1.columns[1:].tolist()
Or:
c = ['city', 'name', 'value']
df = (df2.merge(df1, on=c, indicator=True, how='left', suffixes=('', '_'))
         .query("_merge == 'left_only'")[df1.columns])
print (df)
ID city name value
0 1 NY Sam 223
2 3 SF Foo 335
3 4 London Foo1 999
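
For reference, a merge-free alternative: stack df2 on top of two copies of df1 and drop every row that occurs more than once on the comparison columns. A minimal sketch, assuming neither frame has internal duplicates on those columns:

import pandas as pd

df1 = pd.DataFrame({'ID': [1, 2, 3, 4],
                    'city': ['LA', 'NY', 'SF', 'Berlin'],
                    'name': ['John', 'Sam', 'Foo', 'Bar'],
                    'value': [111, 222, 333, 444]})
df2 = pd.DataFrame({'ID': [1, 2, 3, 4, 5],
                    'city': ['NY', 'LA', 'SF', 'London', 'Berlin'],
                    'name': ['Sam', 'John', 'Foo', 'Foo1', 'Bar'],
                    'value': [223, 111, 335, 999, 444]})

c = ['city', 'name', 'value']
# df1 goes in twice: its rows, and any df2 rows matching them, then
# occur more than once on c and are dropped; only df2-only rows survive
out = pd.concat([df2, df1, df1]).drop_duplicates(subset=c, keep=False)
print(out)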

Try this:
print("------------------------------")
print(df1)
print("------------------------------")
print(df2)
# match on city/name, keeping df2's ID and value
common = (df1.merge(df2, on=["city", "name"])
             .rename(columns={"value_y": "value", "ID_y": "ID"})
             .drop(columns=["value_x", "ID_x"]))
print("------------------------------")
print(common)
OUTPUT:
------------------------------
ID city name value
0 1 LA John 111
1 2 NY Sam 222
2 3 SF Foo 333
3 4 Berlin Bar 444
------------------------------
ID city name value
0 1 NY Sam 223
1 2 LA John 111
2 3 SF Foo 335
3 4 London Foo1 999
4 5 Berlin Bar 444
------------------------------
city name ID value
0 LA John 2 111
1 NY Sam 1 223
2 SF Foo 3 335
3 Berlin Bar 5 444
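
If instead you want to see how values changed for entries present in both frames, a small follow-on sketch (using the same df1/df2 as the sketch above, matching on city + name):

# align on city + name; the suffixes separate the two value columns
both = df1.merge(df2, on=['city', 'name'], suffixes=('_old', '_new'))
changed = both[both['value_old'] != both['value_new']]
print(changed[['city', 'name', 'value_old', 'value_new']])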

Related

pandas groupby column to list and keep certain values

I have the following dataframe:
id occupations
111 teacher
111 student
222 analyst
333 cook
111 driver
444 lawyer
I create a new column with a list of all the occupations:
df['occupation_list'] = df['id'].map(df.groupby('id')['occupations'].agg(list))
How do I only include teacher and student values in occupation_list?
You can filter before groupby:
to_map = (df[df['occupations'].isin(['teacher', 'student'])]
          .groupby('id')['occupations'].agg(list))
df['occupation_list'] = df['id'].map(to_map)
Output:
id occupations occupation_list
0 111 teacher [teacher, student]
1 111 student [teacher, student]
2 222 analyst NaN
3 333 cook NaN
4 111 driver [teacher, student]
5 444 lawyer NaN
You can also do
df.groupby('id')['occupations'].transform(' '.join).str.split()
but note this keeps every occupation per row; apply the same isin filter first if you only want teacher and student.
You would just do a groupby and agg the column to a list:
df.groupby('id', as_index=False).agg({'occupations': lambda x: x.tolist()})
Output:
>>> df
id occupations
0 111 teacher
1 111 student
2 222 analyst
3 333 cook
4 111 driver
5 444 lawyer
>>> df.groupby('id',as_index=False).agg({'occupations':lambda x: x.tolist()})
id occupations
0 111 [teacher, student, driver]
1 222 [analyst]
2 333 [cook]
3 444 [lawyer]
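
If you prefer to filter inside the aggregation instead of before the groupby, a minimal sketch (note that groups with no matching occupations end up with an empty list rather than NaN):

import pandas as pd

df = pd.DataFrame({'id': [111, 111, 222, 333, 111, 444],
                   'occupations': ['teacher', 'student', 'analyst',
                                   'cook', 'driver', 'lawyer']})

wanted = {'teacher', 'student'}
# build the per-id list, keeping only the wanted occupations
to_map = df.groupby('id')['occupations'].agg(
    lambda s: [o for o in s if o in wanted])
df['occupation_list'] = df['id'].map(to_map)
print(df)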

Transpose by grouping a DataFrame having both numeric and string variables

I have a DataFrame and I want to convert it into the following:
import pandas as pd
df = pd.DataFrame({'ID': [111, 111, 111, 222, 222, 333],
                   'class': ['merc', 'humvee', 'bmw', 'vw', 'bmw', 'merc'],
                   'imp': [1, 2, 3, 1, 2, 1]})
print(df)
ID class imp
0 111 merc 1
1 111 humvee 2
2 111 bmw 3
3 222 vw 1
4 222 bmw 2
5 333 merc 1
Desired output:
ID 0 1 2
0 111 merc humvee bmw
1 111 1 2 3
2 222 vw bmw
3 222 1 2
4 333 merc
5 333 1
I wish to transpose the entire dataframe, but grouped by a particular column (ID in this case), maintaining the row order.
My attempt: I tried using .set_index() and .unstack(), but it did not work.
Use GroupBy.cumcount for a counter, then reshape with DataFrame.stack and Series.unstack:
df1 = (df.set_index(['ID', df.groupby('ID').cumcount()])
         .stack()
         .unstack(1, fill_value='')
         .reset_index(level=1, drop=True)
         .reset_index())
print (df1)
ID 0 1 2
0 111 merc humvee bmw
1 111 1 2 3
2 222 vw bmw
3 222 1 2
4 333 merc
5 333 1
Another method would be to use groupby and concat. Although this is not totally dynamic, it works fine if you only have the two columns class and imp to work with:
s = df.set_index([df['ID'], df.groupby('ID').cumcount()]).unstack(1)
df1 = pd.concat([s['class'], s['imp']], axis=0).sort_index().fillna('')
print(df1)
idx 0 1 2
ID
111 merc humvee bmw
111 1 2 3
222 vw bmw
222 1 2
333 merc
333 1
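
As a more explicit alternative, the same interleaved layout can be built with a plain loop over agg(list); a sketch, assuming only the class and imp columns matter:

import pandas as pd

df = pd.DataFrame({'ID': [111, 111, 111, 222, 222, 333],
                   'class': ['merc', 'humvee', 'bmw', 'vw', 'bmw', 'merc'],
                   'imp': [1, 2, 3, 1, 2, 1]})

# per-ID lists preserve row order; emit one class row, then one imp row
lists = df.groupby('ID', sort=False).agg(list)
rows = [[id_] + rec[col] for id_, rec in lists.iterrows()
        for col in ['class', 'imp']]
out = pd.DataFrame(rows).fillna('')   # ragged rows are padded, then blanked
out.columns = ['ID'] + list(range(out.shape[1] - 1))
print(out)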

How to compare a list with dataframe column headers and substitute the header's name?

There is a df
df_example =
id city street house flat
0 NY street_ny 111 01
1 LA street_la 222 02
2 SF street_sf 333 03
3 Vegas street_vg 444 04
4 Boston street_bs 555 05
And in a database there is a table where every column name is matched with a column id (without the id column):
sql_table (as df) =
column_name column_id
city 0
street 1
house 2
flat 3
I need to substitute the column names in df_example with the column ids from sql_table, like this:
id 0 1 2 3
0 NY street_ny 111 01
1 LA street_la 222 02
2 SF street_sf 333 03
3 Vegas street_vg 444 04
4 Boston street_bs 555 05
So far I've got the list of column names, without the id column:
column_names_list = list(df_example)[1:]
column_names_list = ['city', 'street', 'house', 'flat']
But I have no idea how to proceed. The .isin method doesn't really do what I need.
Appreciate any help.
Use rename with a dictionary created by zip (here sql_table is the mapping table read from the database):
df_example = df_example.rename(columns=dict(zip(sql_table['column_name'], sql_table['column_id'])))
print (df_example)
id 0 1 2 3
0 0 NY street_ny 111 1
1 1 LA street_la 222 2
2 2 SF street_sf 333 3
3 3 Vegas street_vg 444 4
4 4 Boston street_bs 555 5
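
Put together, a runnable sketch of the whole step (the frames are rebuilt from the question; flat is kept as strings so the leading zeros survive):

import pandas as pd

df_example = pd.DataFrame({'id': [0, 1, 2, 3, 4],
                           'city': ['NY', 'LA', 'SF', 'Vegas', 'Boston'],
                           'street': ['street_ny', 'street_la', 'street_sf',
                                      'street_vg', 'street_bs'],
                           'house': [111, 222, 333, 444, 555],
                           'flat': ['01', '02', '03', '04', '05']})
sql_table = pd.DataFrame({'column_name': ['city', 'street', 'house', 'flat'],
                          'column_id': [0, 1, 2, 3]})

# {'city': 0, 'street': 1, ...}; 'id' is absent from the mapping,
# so rename leaves that column untouched
mapping = dict(zip(sql_table['column_name'], sql_table['column_id']))
print(df_example.rename(columns=mapping))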

How to group rows so as to use value_counts on the created groups with pandas?

I have some customer data such as this in a data frame:
S No Country Sex
1 Spain M
2 Norway F
3 Mexico M
...
I want to have an output such as this:
Spain
M = 1207
F = 230
Norway
M = 33
F = 102
...
I have a basic notion that I want to group my rows based on their countries with something like df.groupby(df.Country), and on the selected rows, I need to run something like df.Sex.value_counts()
Thanks!
I think you need crosstab:
df = pd.crosstab(df.Sex, df.Country)
Or, if you want to use your solution, add unstack to move the first level of the MultiIndex to columns:
df = df.groupby(df.Country).Sex.value_counts().unstack(level=0, fill_value=0)
print (df)
Country Mexico Norway Spain
Sex
F 0 1 0
M 1 0 1
EDIT:
If you want to add more columns, you can set which level is converted to columns via the level parameter (here the serial-number column is assumed to be named No):
df1 = df.groupby([df.No, df.Country]).Sex.value_counts().unstack(level=0, fill_value=0).reset_index()
print (df1)
No Country Sex 1 2 3
0 Mexico M 0 0 1
1 Norway F 0 1 0
2 Spain M 1 0 0
df2 = df.groupby([df.No, df.Country]).Sex.value_counts().unstack(level=1, fill_value=0).reset_index()
print (df2)
Country No Sex Mexico Norway Spain
0 1 M 0 0 1
1 2 F 0 1 0
2 3 M 1 0 0
df2 = df.groupby([df.No, df.Country]).Sex.value_counts().unstack(level=2, fill_value=0).reset_index()
print (df2)
Sex No Country F M
0 1 Spain 0 1
1 2 Norway 1 0
2 3 Mexico 0 1
You can also use pandas.pivot_table:
res = df.pivot_table(index='Country', columns='Sex', aggfunc='count', fill_value=0)
print(res)
SNo
Sex F M
Country
Mexico 0 1
Norway 1 0
Spain 0 1
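
And if you want the exact block layout from the question rather than a table, a small sketch (assuming the columns are named Country and Sex):

# print one block per country with its sex counts
for country, grp in df.groupby('Country'):
    print(country)
    for sex, n in grp['Sex'].value_counts().items():
        print(f'{sex} = {n}')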

Combine two dataframes based on index, replacing matching values in another column

I have the following wide df1:
Area geotype type ...
1 a 2 ...
1 a 1 ...
2 b 4 ...
4 b 8 ...
And the following two-column df2:
Area geotype
1 London
4 Cambridge
And I want the following:
Area geotype type ...
1 London 2 ...
1 London 1 ...
2 b 4 ...
4 Cambridge 8 ...
So I need to match based on the non-unique Area column, and then only if there is a match, replace the set values in the geotype column.
Apologies if this is a duplicate, I did actually search hard for a solution to this.
Use update + map:
df1.geotype.update(df1.Area.map(df2.set_index('Area').geotype))
Area geotype type
0 1 London 2
1 1 London 1
2 2 b 4
3 4 Cambridge 8
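
Spelled out against the question's frames, a runnable sketch of the update + map idea (the update is applied to an explicit copy and assigned back, so it also behaves under pandas copy-on-write):

import pandas as pd

df1 = pd.DataFrame({'Area': [1, 1, 2, 4],
                    'geotype': ['a', 'a', 'b', 'b'],
                    'type': [2, 1, 4, 8]})
df2 = pd.DataFrame({'Area': [1, 4],
                    'geotype': ['London', 'Cambridge']})

# map Area -> geotype from df2; update() only overwrites entries
# where the mapped Series is non-NaN, so unmatched rows are untouched
geo = df1['geotype'].copy()
geo.update(df1['Area'].map(df2.set_index('Area')['geotype']))
df1['geotype'] = geo
print(df1)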
I think you can use map with a Series created by set_index, and then fill NaN values with combine_first or fillna (this answer was written against the original version of the question, which used an ID column):
df1.geotype = df1.ID.map(df2.set_index('ID')['geotype']).combine_first(df1.geotype)
#df1.geotype = df1.ID.map(df2.set_index('ID')['geotype']).fillna(df1.geotype)
print (df1)
ID geotype type
0 1 London 2
1 2 a 1
2 3 b 4
3 4 Cambridge 8
Another solution with mask and numpy.in1d:
df1.geotype = df1.geotype.mask(np.in1d(df1.ID, df2.ID),
                               df1.ID.map(df2.set_index('ID')['geotype']))
print (df1)
ID geotype type
0 1 London 2
1 2 a 1
2 3 b 4
3 4 Cambridge 8
EDIT (by comment):
The problem is non-unique ID values in df2, like:
df2 = pd.DataFrame({'ID': [1, 1, 4], 'geotype': ['London', 'Paris', 'Cambridge']})
print (df2)
ID geotype
0 1 London
1 1 Paris
2 4 Cambridge
So the map function cannot choose the right value and raises an error.
The solution is to remove duplicates with drop_duplicates, which by default keeps the first value:
df2 = df2.drop_duplicates('ID')
print (df2)
ID geotype
0 1 London
2 4 Cambridge
Or, if you need to keep the last value:
df2 = df2.drop_duplicates('ID', keep='last')
print (df2)
ID geotype
1 1 Paris
2 4 Cambridge
If you cannot remove the duplicates, there is another solution with an outer merge, but it produces duplicated rows wherever an ID is duplicated in df2:
df1 = pd.merge(df1, df2, on='ID', how='outer', suffixes=('_',''))
df1.geotype = df1.geotype.combine_first(df1.geotype_)
df1 = df1.drop('geotype_', axis=1)
print (df1)
ID type geotype
0 1 2 London
1 1 2 Paris
2 2 1 a
3 3 4 b
4 4 8 Cambridge
alternative solution:
In [78]: df1.loc[df1.ID.isin(df2.ID), 'geotype'] = df1.ID.map(df2.set_index('ID').geotype)
In [79]: df1
Out[79]:
ID geotype type
0 1 London 2
1 2 a 1
2 3 b 4
3 4 Cambridge 8
UPDATE: this answers the updated question - if you have duplicates in the Area column of the df2 DF:
In [152]: df1.loc[df1.Area.isin(df2.Area), 'geotype'] = df1.Area.map(df2.set_index('Area').geotype)
...
skipped
...
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
Get rid of the duplicates:
In [153]: df1.loc[df1.Area.isin(df2.Area), 'geotype'] = df1.Area.map(df2.drop_duplicates(subset='Area').set_index('Area').geotype)
In [154]: df1
Out[154]:
Area geotype type
0 1 London 2
1 1 London 1
2 2 b 4
3 4 Cambridge 8
