Updating multiple columns in a pandas DataFrame based on another table - python

I have 2 CSV files like this and want to update the df1 columns (LL, UL) based on the df2 columns (LL, UL), by matching the columns (test, Cond) in both dataframes.
df1:
test  Cond  day  mode  LL  UL
a     T1    Tue  7
b     T2    mon  7
c     T2    sun  6
d     T3    fri  3
c     T2    sat  6
d     T3    wed  3
df2:
test  Cond  LL    UL
a     T1    15    23
b     T2    -3    -3.5
c     T2    -19   -11
d     T3    6.5   14.5
My expected output is df1 with its LL and UL columns filled in from df2 wherever test and Cond match:
test  Cond  day  mode  LL     UL
a     T1    Tue  7     15.0   23.0
b     T2    mon  7     -3.0   -3.5
c     T2    sun  6     -19.0  -11.0
d     T3    fri  3     6.5    14.5
c     T2    sat  6     -19.0  -11.0
d     T3    wed  3     6.5    14.5
I have tried the code below, but it is not working:
def SpecsLL(cond1, test1):
    if ((cond1 == df2['Cond']) & (test1 == df2['test'])):
        return df2['LL']

df1['LL'] = df1.apply(lambda x: SpecsLL(x['Cond'], x['test']), axis=1)
Any ideas on how to do it?

Simply use the merge functionality of pandas:
df1.merge(df2)
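Note that with the sample data above, a bare df1.merge(df2) will use every shared column (including the empty LL and UL) as join keys, so you will likely want to restrict the keys. A minimal sketch, assuming df1's LL/UL should come entirely from df2:
out = (
    df1.drop(columns=['LL', 'UL'])                   # drop the empty columns from df1
       .merge(df2, on=['test', 'Cond'], how='left')  # join only on the matching keys
)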

Method 1: combine_first
index_cols = ['test', 'Cond']

(
    df1
    .set_index(index_cols)
    .combine_first(
        df2.set_index(index_cols)
    )
    .reset_index()
)
Explanation:
set_index moves the specified columns to the index, indicating that each row should be identified by its test and Cond columns.
foo.combine_first(bar) will identify matching index + column labels between foo and bar, and fill in values from bar wherever foo is NaN or has a column/row missing. In this case, thanks to the set_index, the two dataframes will have their rows matched where test and Cond are the same, and then the UL and LL values from df2 will be filled in to the corresponding columns of the output.
reset_index simply reverses the set_index call, so that test and Cond become regular columns again.
Note that this operation might mangle the order of your columns, so if that is important to you then you can call .reindex(df1.columns, axis=1) at the very end, which will reorder the columns to original order in df1.
Method 2: merge
Alternatively you can use the merge method, which allows you to operate on the columns directly without using set_index, but will require some other preprocessing:
index_cols = ['test', 'Cond']

(
    df1
    .drop(['LL', 'UL'], axis=1)
    .merge(
        df2,
        on=index_cols
    )
)
The .drop call is necessary because otherwise merge will include the UL and LL columns from both DataFrames in the output:
  test Cond  day  mode  LL_x  UL_x  LL_y  UL_y
0    a   T1  Tue     7   NaN   NaN  15.0  23.0
1    b   T2  mon     7   NaN   NaN  -3.0  -3.5
2    c   T2  sun     6   NaN   NaN -19.0 -11.0
3    c   T2  sat     6   NaN   NaN -19.0 -11.0
4    d   T3  fri     3   NaN   NaN   6.5  14.5
5    d   T3  wed     3   NaN   NaN   6.5  14.5
Which to use?
With the data that you have provided, merge seems like the more natural operation - if you never expect UL and LL to have any data in df1, then if possible I'd recommend simply removing those column headers entirely from the input CSV, so that df1 doesn't have those columns at all. In that case, the drop call would no longer be necessary and the required merge call is very expressive.
However, if you expect that df1 would sometimes have real values for UL or LL, and you want to include those values in the output, then the combine_first solution is what you want. Note that if both df1 and df2 have different non-null values for a particular row/column, then the df1.combine_first(df2) will select the value from df1 and ignore the df2 value. If you instead wanted to prioritise the values from df2 then you want to call it the other way round, i.e. df2.combine_first(df1).
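A tiny sketch of that priority rule, with made-up frames:
import pandas as pd

foo = pd.DataFrame({'x': [1.0, None]})
bar = pd.DataFrame({'x': [9.0, 2.0]})

print(foo.combine_first(bar)['x'].tolist())  # [1.0, 2.0] - foo wins wherever it has data
print(bar.combine_first(foo)['x'].tolist())  # [9.0, 2.0] - bar wins wherever it has data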

Related


How to find out the difference between two dataframes irrespective of index? [duplicate]

I have two data frames df1 and df2, where df2 is a subset of df1. How do I get a new data frame (df3) which is the difference between the two data frames?
In other words, a data frame that has all the rows/columns in df1 that are not in df2?
By using drop_duplicates
pd.concat([df1,df2]).drop_duplicates(keep=False)
Update:
The above method only works for those data frames that don't already have duplicates themselves. For example:
df1 = pd.DataFrame({'A': [1, 2, 3, 3], 'B': [2, 3, 4, 4]})
df2 = pd.DataFrame({'A': [1], 'B': [2]})
It will output like below, which is wrong.
Wrong Output:
pd.concat([df1, df2]).drop_duplicates(keep=False)
Out[655]:
A B
1 2 3
Correct Output
Out[656]:
A B
1 2 3
2 3 4
3 3 4
How to achieve that?
Method 1: Using isin with tuple
df1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))]
Out[657]:
A B
1 2 3
2 3 4
3 3 4
Method 2: merge with indicator
df1.merge(df2,indicator = True, how='left').loc[lambda x : x['_merge']!='both']
Out[421]:
A B _merge
1 2 3 left_only
2 3 4 left_only
3 3 4 left_only
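If you don't want the helper column in the final result, a small follow-up sketch is to drop it afterwards:
out = (df1.merge(df2, indicator=True, how='left')
          .loc[lambda x: x['_merge'] != 'both']
          .drop(columns='_merge'))  # remove the indicator column from the output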
For rows, try this, where Name is the join column (it can be a list for multiple common columns, or specify left_on and right_on):
m = df1.merge(df2, on='Name', how='outer', suffixes=['', '_'], indicator=True)
The indicator=True setting is useful as it adds a column called _merge, with all changes between df1 and df2, categorized into 3 possible kinds: "left_only", "right_only" or "both".
For columns, try this:
set(df1.columns).symmetric_difference(df2.columns)
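For example (a minimal sketch with hypothetical column names):
import pandas as pd

df1 = pd.DataFrame(columns=['Name', 'Age', 'City'])
df2 = pd.DataFrame(columns=['Name', 'Age', 'Salary'])

# columns present in exactly one of the two frames
print(set(df1.columns).symmetric_difference(df2.columns))  # {'City', 'Salary'} (order may vary)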
The accepted answer's Method 1 will not work for data frames with NaNs inside, because np.nan != np.nan. I am not sure if this is the best way, but it can be avoided by
df1[~df1.astype(str).apply(tuple, 1).isin(df2.astype(str).apply(tuple, 1))]
It's slower, because it needs to cast the data to string, but thanks to this casting the NaN values compare equal (as the string 'nan').
Let's go through the code. First we cast the values to string and apply the tuple function to each row.
df1.astype(str).apply(tuple, 1)
df2.astype(str).apply(tuple, 1)
Thanks to that, we get a pd.Series of tuples. Each tuple contains a whole row from df1/df2.
Then we apply the isin method on df1 to check if each tuple "is in" df2.
The result is a pd.Series of bool values: True if a tuple from df1 is in df2. In the end, we negate the result with the ~ sign and apply the filter to df1. Long story short, we get only those rows from df1 that are not in df2.
To make it more readable, we may write it as:
df1_str_tuples = df1.astype(str).apply(tuple, 1)
df2_str_tuples = df2.astype(str).apply(tuple, 1)
df1_values_in_df2_filter = df1_str_tuples.isin(df2_str_tuples)
df1_values_not_in_df2 = df1[~df1_values_in_df2_filter]
import pandas as pd

# given
df1 = pd.DataFrame({'Name': ['John', 'Mike', 'Smith', 'Wale', 'Marry', 'Tom', 'Menda', 'Bolt', 'Yuswa'],
                    'Age': [23, 45, 12, 34, 27, 44, 28, 39, 40]})
df2 = pd.DataFrame({'Name': ['John', 'Smith', 'Wale', 'Tom', 'Menda', 'Yuswa'],
                    'Age': [23, 12, 34, 44, 28, 40]})

# find elements in df1 that are not in df2
df_1notin2 = df1[~(df1['Name'].isin(df2['Name']) & df1['Age'].isin(df2['Age']))].reset_index(drop=True)
# output:
print('df1\n', df1)
print('df2\n', df2)
print('df_1notin2\n', df_1notin2)
# df1
# Age Name
# 0 23 John
# 1 45 Mike
# 2 12 Smith
# 3 34 Wale
# 4 27 Marry
# 5 44 Tom
# 6 28 Menda
# 7 39 Bolt
# 8 40 Yuswa
# df2
# Age Name
# 0 23 John
# 1 12 Smith
# 2 34 Wale
# 3 44 Tom
# 4 28 Menda
# 5 40 Yuswa
# df_1notin2
# Age Name
# 0 45 Mike
# 1 27 Marry
# 2 39 Bolt
Perhaps a simpler one-liner, with identical or different column names. It worked even when df2['Name2'] contained duplicate values.
newDf = (df1.set_index('Name1')
            .drop(df2['Name2'], errors='ignore')
            .reset_index(drop=False))
edit2: I figured out a new solution without the need of setting an index:
newdf = pd.concat([df1, df2]).drop_duplicates(keep=False)
Okay, I found that the highest-voted answer already contains what I figured out. Yes, we can only use this code on the condition that there are no duplicates in either of the two dfs.
I have a tricky method. First we set 'Name' as the index of the two dataframes given by the question. Since we have the same 'Name' values in the two dfs, we can just drop the 'smaller' df's index from the 'bigger' df.
Here is the code.
df1.set_index('Name',inplace=True)
df2.set_index('Name',inplace=True)
newdf=df1.drop(df2.index)
Pandas now offers a new API to do data frame diffs: pandas.DataFrame.compare
df.compare(df2)
  col1       col3
  self other self other
0    a     c  NaN   NaN
2  NaN   NaN  3.0   4.0
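For context, a pair of frames that would produce output like the above (a minimal sketch mirroring the example in the pandas documentation) could be:
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": ["a", "a", "b", "b", "a"],
                   "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
                   "col3": [1.0, 2.0, 3.0, 4.0, 5.0]})
df2 = df.copy()
df2.loc[0, "col1"] = "c"   # change one string value
df2.loc[2, "col3"] = 4.0   # change one float value

print(df.compare(df2))     # only the changed cells are reported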
In addition to the accepted answer, I would like to propose one more, wider solution that can find a 2D set difference of two dataframes with any index/columns (they might not coincide for both dataframes). The method also allows you to set a tolerance for float elements in the dataframe comparison (it uses np.isclose).
import numpy as np
import pandas as pd

def get_dataframe_setdiff2d(df_new: pd.DataFrame,
                            df_old: pd.DataFrame,
                            rtol=1e-03, atol=1e-05) -> pd.DataFrame:
    """Returns the set difference of two pandas DataFrames."""
    union_index = np.union1d(df_new.index, df_old.index)
    union_columns = np.union1d(df_new.columns, df_old.columns)

    new = df_new.reindex(index=union_index, columns=union_columns)
    old = df_old.reindex(index=union_index, columns=union_columns)

    mask_diff = ~np.isclose(new, old, rtol, atol)

    df_bool = pd.DataFrame(mask_diff, union_index, union_columns)

    df_diff = pd.concat([new[df_bool].stack(),
                         old[df_bool].stack()], axis=1)

    df_diff.columns = ["New", "Old"]

    return df_diff
Example:
In [1]

df1 = pd.DataFrame({'A': [2, 1, 2], 'C': [2, 1, 2]})
df2 = pd.DataFrame({'A': [1, 1], 'B': [1, 1]})

print("df1:\n", df1, "\n")
print("df2:\n", df2, "\n")

diff = get_dataframe_setdiff2d(df1, df2)

print("diff:\n", diff, "\n")

Out [1]

df1:
    A  C
0  2  2
1  1  1
2  2  2

df2:
    A  B
0  1  1
1  1  1

diff:
      New  Old
0 A   2.0  1.0
  B   NaN  1.0
  C   2.0  NaN
1 B   NaN  1.0
  C   1.0  NaN
2 A   2.0  NaN
  C   2.0  NaN
As mentioned here,
df1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))]
is a correct solution, but it will produce wrong output if
df1 = pd.DataFrame({'A': [1], 'B': [2]})
df2 = pd.DataFrame({'A': [1, 2, 3, 3], 'B': [2, 3, 4, 4]})
In that case the above solution will give an empty DataFrame; instead you should use the concat method after removing duplicates from each dataframe.
Use concat with drop_duplicates:
df1=df1.drop_duplicates(keep="first")
df2=df2.drop_duplicates(keep="first")
pd.concat([df1,df2]).drop_duplicates(keep=False)
I had issues handling duplicates when there were duplicates on one side and at least one on the other side, so I used collections.Counter to do a better diff, ensuring both sides have the same count. This doesn't return duplicates, but it won't return a row at all if both sides have the same count for it.
import pandas as pd
from collections import Counter

def diff(df1, df2, on=None):
    """
    :param on: same as pandas.df.merge(on) (a list of columns)
    """
    on = on if on else df1.columns
    df1on = df1[on]
    df2on = df2[on]
    c1 = Counter(df1on.apply(tuple, 'columns'))
    c2 = Counter(df2on.apply(tuple, 'columns'))
    c1c2 = c1 - c2
    c2c1 = c2 - c1
    df1ondf2on = pd.DataFrame(list(c1c2.elements()), columns=on)
    df2ondf1on = pd.DataFrame(list(c2c1.elements()), columns=on)
    df1df2 = df1.merge(df1ondf2on).drop_duplicates(subset=on)
    df2df1 = df2.merge(df2ondf1on).drop_duplicates(subset=on)
    return pd.concat([df1df2, df2df1])
> df1 = pd.DataFrame({'a': [1, 1, 3, 4, 4]})
> df2 = pd.DataFrame({'a': [1, 2, 3, 4, 4]})
> diff(df1, df2)
a
0 1
0 2
There is a new method in pandas, DataFrame.compare, that compares two dataframes and returns which values changed in each column of the data records.
Example
First Dataframe
Id  Customer  Status  Date
1   ABC       Good    Mar 2023
2   BAC       Good    Feb 2024
3   CBA       Bad     Apr 2022
Second Dataframe
Id  Customer  Status  Date
1   ABC       Bad     Mar 2023
2   BAC       Good    Feb 2024
5   CBA       Good    Apr 2024
Comparing Dataframes
print("Dataframe difference -- \n")
print(df1.compare(df2))
print("Dataframe difference keeping equal values -- \n")
print(df1.compare(df2, keep_equal=True))
print("Dataframe difference keeping same shape -- \n")
print(df1.compare(df2, keep_shape=True))
print("Dataframe difference keeping same shape and equal values -- \n")
print(df1.compare(df2, keep_shape=True, keep_equal=True))
Result
Dataframe difference --

     Id         Status            Date
   self other   self other        self       other
0   NaN   NaN   Good   Bad         NaN         NaN
2   3.0   5.0    Bad  Good    Apr 2022    Apr 2024

Dataframe difference keeping equal values --

     Id         Status            Date
   self other   self other        self       other
0     1     1   Good   Bad    Mar 2023    Mar 2023
2     3     5    Bad  Good    Apr 2022    Apr 2024

Dataframe difference keeping same shape --

     Id        Customer        Status            Date
   self other    self other    self other        self       other
0   NaN   NaN     NaN   NaN    Good   Bad         NaN         NaN
1   NaN   NaN     NaN   NaN     NaN   NaN         NaN         NaN
2   3.0   5.0     NaN   NaN     Bad  Good    Apr 2022    Apr 2024

Dataframe difference keeping same shape and equal values --

     Id        Customer        Status            Date
   self other    self other    self other        self       other
0     1     1     ABC   ABC    Good   Bad    Mar 2023    Mar 2023
1     2     2     BAC   BAC    Good  Good    Feb 2024    Feb 2024
2     3     5     CBA   CBA     Bad  Good    Apr 2022    Apr 2024
A slight variation of @liangli's nice solution that does not require changing the index of the existing dataframes:
newdf = df1.drop(df1.join(df2.set_index('Name').index))
Finding the difference by index, assuming one dataframe is a subset of the other and the indexes are carried forward when subsetting.
df1.loc[set(df1.index).symmetric_difference(set(df2.index))].dropna()
# Example
df1 = pd.DataFrame({"gender":np.random.choice(['m','f'],size=5), "subject":np.random.choice(["bio","phy","chem"],size=5)}, index = [1,2,3,4,5])
df2 = df1.loc[[1,3,5]]
df1
gender subject
1 f bio
2 m chem
3 f phy
4 m bio
5 f bio
df2
gender subject
1 f bio
3 f phy
5 f bio
df3 = df1.loc[set(df1.index).symmetric_difference(set(df2.index))].dropna()
df3
gender subject
2 m chem
4 m bio
Defining our dataframes:
df1 = pd.DataFrame({
    'Name': ['John', 'Mike', 'Smith', 'Wale', 'Marry', 'Tom', 'Menda', 'Bolt', 'Yuswa'],
    'Age': [23, 45, 12, 34, 27, 44, 28, 39, 40]
})
df2 = df1[df1.Name.isin(['John', 'Smith', 'Wale', 'Tom', 'Menda', 'Yuswa'])]
df1
Name Age
0 John 23
1 Mike 45
2 Smith 12
3 Wale 34
4 Marry 27
5 Tom 44
6 Menda 28
7 Bolt 39
8 Yuswa 40
df2
Name Age
0 John 23
2 Smith 12
3 Wale 34
5 Tom 44
6 Menda 28
8 Yuswa 40
The difference between the two would be:
df1[~df1.isin(df2)].dropna()
Name Age
1 Mike 45.0
4 Marry 27.0
7 Bolt 39.0
Where:
df1.isin(df2) returns a boolean frame marking which values in df1 are also in df2 (aligned by index and column).
~ (element-wise logical NOT) in front of the expression negates the results, so we get the elements in df1 that are NOT in df2, i.e. the difference between the two.
.dropna() drops the rows containing NaN, leaving the desired output.
Note: this only works if len(df1) >= len(df2). If df2 is longer than df1 you can reverse the expression: df2[~df2.isin(df1)].dropna()
I found the deepdiff library to be a wonderful tool that also extends well to dataframes if different detail is required or ordering matters. You can experiment with diffing to_dict('records'), to_numpy(), and other exports:
import pandas as pd
from deepdiff import DeepDiff
df1 = pd.DataFrame({
'Name':
['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa'],
'Age':
[23,45,12,34,27,44,28,39,40]
})
df2 = df1[df1.Name.isin(['John','Smith','Wale','Tom','Menda','Yuswa'])]
DeepDiff(df1.to_dict(), df2.to_dict())
# {'dictionary_item_removed': [root['Name'][1], root['Name'][4], root['Name'][7], root['Age'][1], root['Age'][4], root['Age'][7]]}
Symmetric Difference
If you are interested in the rows that are only in one of the dataframes but not both, you are looking for the set difference:
pd.concat([df1,df2]).drop_duplicates(keep=False)
⚠️ Only works, if both dataframes do not contain any duplicates.
Set Difference / Relational Algebra Difference
If you are interested in the relational algebra difference / set difference, i.e. df1-df2 or df1\df2:
pd.concat([df1,df2,df2]).drop_duplicates(keep=False)
⚠️ Only works, if both dataframes do not contain any duplicates.
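A small worked sketch (with made-up frames that contain no internal duplicates) showing why appending df2 twice turns the trick into a one-sided difference:
import pandas as pd

df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})
df2 = pd.DataFrame({'A': [2, 3, 4], 'B': [20, 30, 40]})

# Symmetric difference: rows that appear in exactly one of the two frames
print(pd.concat([df1, df2]).drop_duplicates(keep=False))
#    A   B
# 0  1  10
# 2  4  40

# Set difference df1 - df2: every df2 row now appears at least twice,
# so keep=False removes all of them plus any df1 row that matches df2
print(pd.concat([df1, df2, df2]).drop_duplicates(keep=False))
#    A   B
# 0  1  10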
Another possible solution is to use numpy broadcasting:
df1[np.all(~np.all(df1.values == df2.values[:, None], axis=2), axis=0)]
Output:
Name Age
1 Mike 45
4 Marry 27
7 Bolt 39
Using a lambda function you can filter the rows with the _merge value "left_only" to get all the rows in df1 which are missing from df2:
df3 = df1.merge(df2, how='outer', indicator=True).loc[lambda x: x['_merge'] == 'left_only']
Try this one:
df_new = df1.merge(df2, how='outer', indicator=True).query('_merge == "left_only"').drop('_merge', axis=1)
It will result a new dataframe with the differences: the values that exist in df1 but not in df2.

Pandas merging dataframes and overwriting the data in the original df

I'm trying to merge two pandas dataframes but I can't figure out how to get the result I need. These are the example versions of dataframes I'm looking at:
df1 = pd.DataFrame([["09/10/2019",None],["10/10/2019",None], ["11/10/2019",6],
["12/10/2019",5], ["13/10/2019",3], ["14/10/2019",3],
["15/10/2019",5],
["16/10/2019",None]], columns = ['Date', 'A'])
df2 = pd.DataFrame([["10/10/2019",3], ["11/10/2019",5], ["12/10/2019",6],
["13/10/2019",1], ["14/10/2019",2], ["15/10/2019",4]],
columns = ['Date', 'A'])
I have checked Pandas Merging 101 but still can't find the way to do it correctly. Essentially, what I need is this:
i.e. I want to keep the data from df1 that falls outside the shared keys section, but within the shared area I want the df2 data from column 'A' to overwrite the data from df1. I'm not even sure that merge is the right tool to use.
I've tried using df1 = pd.merge(df1, df2, how='right', on='Date') with different options, but in most cases it creates two separate columns - A_x and A_y in the output.
This is what I want to get as the end result:
Date A
0 09/10/2019 NaN
1 10/10/2019 3.0
2 11/10/2019 5.0
3 12/10/2019 6.0
4 13/10/2019 1.0
5 14/10/2019 2.0
6 15/10/2019 4.0
7 16/10/2019 NaN
Thanks in advance!
Here is a way using combine_first:
df2.set_index('Date').combine_first(df1.set_index('Date')).reset_index()
Or reindex_like:
df2.set_index('Date').reindex_like(df1.set_index('Date')).reset_index()
Date A
0 09/10/2019 NaN
1 10/10/2019 3.0
2 11/10/2019 5.0
3 12/10/2019 6.0
4 13/10/2019 1.0
5 14/10/2019 2.0
6 15/10/2019 4.0
7 16/10/2019 NaN

Difference(s) between merge() and concat() in pandas

What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()?
So far, this is what I found, please comment on how complete and accurate my understanding is:
.merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with either axis, using only indices, and gives the option for adding a hierarchical index.
Incidentally, this allows for the following redundancy: both can combine two dataframes using their row indices.
pd.DataFrame.join() merely offers a shorthand for a subset of the use cases of .merge()
(Pandas is great at addressing a very wide spectrum of use cases in data analysis. It can be a bit daunting exploring the documentation to figure out what is the best way to perform a particular task. )
A very high level difference is that merge() is used to combine two (or more) dataframes on the basis of values of common columns (indices can also be used, use left_index=True and/or right_index=True), and concat() is used to append one (or more) dataframes one below the other (or sideways, depending on whether the axis option is set to 0 or 1).
join() is used to merge 2 dataframes on the basis of the index; instead of using merge() with the option left_index=True we can use join().
For example:
df1 = pd.DataFrame({'Key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)})
df1:
Key data1
0 b 0
1 b 1
2 a 2
3 c 3
4 a 4
5 a 5
6 b 6
df2 = pd.DataFrame({'Key': ['a', 'b', 'd'], 'data2': range(3)})
df2:
Key data2
0 a 0
1 b 1
2 d 2
#Merge
# The 2 dataframes are merged on the basis of values in column "Key" as it is
# a common column in 2 dataframes
pd.merge(df1, df2)
Key data1 data2
0 b 0 1
1 b 1 1
2 b 6 1
3 a 2 0
4 a 4 0
5 a 5 0
#Concat
# df2 dataframe is appended at the bottom of df1
pd.concat([df1, df2])
  Key  data1  data2
0   b      0    NaN
1   b      1    NaN
2   a      2    NaN
3   c      3    NaN
4   a      4    NaN
5   a      5    NaN
6   b      6    NaN
0   a    NaN      0
1   b    NaN      1
2   d    NaN      2
At a high level:
.concat() simply stacks multiple DataFrames together, either vertically or stitched horizontally after aligning on the index.
.merge() first aligns the two DataFrames' selected common column(s) or index, and then picks up the remaining columns from the aligned rows of each DataFrame.
More specifically, .concat():
Is a top-level pandas function
Combines two or more pandas DataFrames vertically or horizontally
Aligns only on the index when combining horizontally
Errors when any of the DataFrames contains a duplicate index
Defaults to outer join, with the option for inner join
And .merge():
Exists both as a top-level pandas function and a DataFrame method (as of pandas 1.0)
Combines exactly two DataFrames horizontally
Aligns the calling DataFrame's column(s) or index with the other DataFrame's column(s) or index
Handles duplicate values on the joining columns or index by performing a cartesian product
Defaults to inner join, with options for left, outer, and right
Note that when performing pd.merge(left, right), if left has two rows containing the same values in the joining columns or index, each row will combine with right's corresponding row(s), resulting in a cartesian product. On the other hand, if .concat() is used to combine columns, we need to make sure no duplicated index exists in either DataFrame.
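A minimal sketch of that cartesian-product behaviour, with made-up frames:
import pandas as pd

left = pd.DataFrame({'key': ['a', 'a'], 'l': [1, 2]})
right = pd.DataFrame({'key': ['a'], 'r': [10]})

# each of left's two 'a' rows pairs with right's single 'a' row
print(pd.merge(left, right, on='key'))
#   key  l   r
# 0   a  1  10
# 1   a  2  10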
Practically speaking:
Consider .concat() first when combining homogeneous DataFrames, and consider .merge() first when combining complementary DataFrames.
If you need to merge vertically, go with .concat(). If you need to merge horizontally via columns, go with .merge(), which by default merges on the columns in common.
Reference: Pandas 1.x Cookbook
pd.concat takes an Iterable as its argument; hence, it cannot take DataFrames directly. Also, the dimensions of the DataFrames should match along the axis while concatenating.
pd.merge can take DataFrames as its arguments, and is used to combine two DataFrames with the same columns or index, which can't be done with pd.concat since it would show the repeated column in the DataFrame.
Whereas join can be used to join two DataFrames with different indices.
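A quick sketch of the Iterable point:
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2]})
df2 = pd.DataFrame({'a': [3, 4]})

pd.concat([df1, df2])   # fine: objs is an iterable of pandas objects
# pd.concat(df1, df2)   # raises TypeError: first argument must be an iterable of pandas objects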
I am currently trying to understand the essential difference(s) between pd.DataFrame.merge() and pd.concat().
Nice question. The main difference:
pd.concat works on both axes.
The other difference is that pd.concat has inner and outer joins only (outer being the default), while pd.DataFrame.merge() has left, right, outer, and inner (the default) joins.
A third notable difference is that pd.DataFrame.merge() has the option to set column suffixes when merging columns with the same name, while for pd.concat this is not possible.
With pd.concat, by default you stack the rows of multiple dataframes (axis=0), and when you set axis=1 you mimic the pd.DataFrame.merge() function.
Some useful examples of pd.concat:
df2=pd.concat([df]*2, ignore_index=True) #double the rows of a dataframe
df2=pd.concat([df, df.iloc[[0]]]) # add first row to the end
df3=pd.concat([df1,df2], join='inner', ignore_index=True) # concat two df's
The main difference between merge & concat is that merge allow you to perform more structured "join" of tables where use of concat is more broad and less structured.
Merge
Referring to the documentation, pd.DataFrame.merge takes right as a required argument, which you can think of as joining a left table and a right table according to some pre-defined, structured join operation. Note the definition of the parameter right.
Required Parameters
right: DataFrame or named Series
Optional Parameters
how: {‘left’, ‘right’, ‘outer’, ‘inner’} default ‘inner’
on: label or list
left_on: label or list, or array-like
right_on: label or list, or array-like
left_index: bool, default False
right_index: bool, default False
sort: bool, default False
suffixes: tuple of (str, str), default (‘_x’, ‘_y’)
copy: bool, default True
indicator: bool or str, default False
validate: str, optional
Important: pd.DataFrame.merge requires right to be a pd.DataFrame or named pd.Series object.
Output
Returns: DataFrame
Furthermore, if we check the docstring for Merge Operation on pandas is below:
Perform a database (SQL) merge operation between two DataFrame or Series
objects using either columns as keys or their row indexes
Concat
Referring to the documentation of pd.concat, first note that the parameter is not named any of table, data_frame, series, matrix, etc., but objs instead. That is, you can pass many "data containers", which are defined as:
Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
Required Parameters
objs: a sequence or mapping of Series or DataFrame objects
Optional Parameters
axis: {0/’index’, 1/’columns’}, default 0
join: {‘inner’, ‘outer’}, default ‘outer’
ignore_index: bool, default False
keys: sequence, default None
levels: list of sequences, default None
names: list, default None
verify_integrity: bool, default False
sort: bool, default False
copy: bool, default True
Output
Returns: object, type of objs
Example
Code
import pandas as pd

v1 = pd.Series([1, 5, 9, 13])
v2 = pd.Series([10, 100, 1000, 10000])
v3 = pd.Series([0, 1, 2, 3])

df_left = pd.DataFrame({
    "v1": v1,
    "v2": v2,
    "v3": v3
})

df_right = pd.DataFrame({
    "v4": [5, 5, 5, 5],
    "v5": [3, 2, 1, 0]
})

df_concat = pd.concat([v1, v2, v3])

# Performing operations on default
merge_result = df_left.merge(df_right, left_index=True, right_index=True)
concat_result = pd.concat([df_left, df_right], sort=False)

print(merge_result)
print('=' * 20)
print(concat_result)
Code Output
   v1     v2  v3  v4  v5
0   1     10   0   5   3
1   5    100   1   5   2
2   9   1000   2   5   1
3  13  10000   3   5   0
====================
     v1       v2   v3   v4   v5
0   1.0     10.0  0.0  NaN  NaN
1   5.0    100.0  1.0  NaN  NaN
2   9.0   1000.0  2.0  NaN  NaN
3  13.0  10000.0  3.0  NaN  NaN
0   NaN      NaN  NaN  5.0  3.0
1   NaN      NaN  NaN  5.0  2.0
2   NaN      NaN  NaN  5.0  1.0
3   NaN      NaN  NaN  5.0  0.0
You can achieve, however, the first output (merge) with concat by changing the axis parameter
concat_result = pd.concat([df_left, df_right], sort=False, axis=1)
Observe the following behavior,
concat_result = pd.concat([df_left, df_right, df_left, df_right], sort=False)
outputs;
v1 v2 v3 v4 v5
0 1.0 10.0 0.0 NaN NaN
1 5.0 100.0 1.0 NaN NaN
2 9.0 1000.0 2.0 NaN NaN
3 13.0 10000.0 3.0 NaN NaN
0 NaN NaN NaN 5.0 3.0
1 NaN NaN NaN 5.0 2.0
2 NaN NaN NaN 5.0 1.0
3 NaN NaN NaN 5.0 0.0
0 1.0 10.0 0.0 NaN NaN
1 5.0 100.0 1.0 NaN NaN
2 9.0 1000.0 2.0 NaN NaN
3 13.0 10000.0 3.0 NaN NaN
0 NaN NaN NaN 5.0 3.0
1 NaN NaN NaN 5.0 2.0
2 NaN NaN NaN 5.0 1.0
3 NaN NaN NaN 5.0 0.0
You cannot perform a similar operation with merge, since it only allows a single DataFrame or named Series:
merge_result = df_left.merge([df_right, df_left, df_right], left_index=True, right_index=True)
outputs;
TypeError: Can only merge Series or DataFrame objects, a <class 'list'> was passed
Conclusion
As you may have noticed already, the inputs and outputs may differ between "merge" and "concat".
As I mentioned at the beginning, the very first (main) difference is that "merge" performs a more structured join with a restricted set of objects and parameters, whereas "concat" performs a less strict/broader join with a broader set of objects and parameters.
All in all, merge is less tolerant of changes in the input, and "concat" is looser/less sensitive to changes in the input. You can achieve "merge" by using "concat", but the reverse is not always true.
"Merge" uses DataFrame columns (or the name of a pd.Series object) or row indices, and since it uses those entities only, it performs a horizontal merge of DataFrames or Series and does not apply vertical operations as a result.
If you want to see more, you can deep dive in the source code a bit;
Source code for concat
Source code for merge
Only the concat function has an axis parameter. Merge is used to combine dataframes side-by-side based on values in shared columns, so there is no need for an axis parameter.
by default:
join is a column-wise left join
pd.merge is a column-wise inner join
pd.concat is a row-wise outer join
pd.concat:
takes Iterable arguments. Thus, it cannot take DataFrames directly (use [df,df2])
Dimensions of DataFrame should match along axis
Join and pd.merge:
can take DataFrame arguments
The following three calls do the same thing:
df1.join(df2)
pd.merge(df1, df2, left_index=True, right_index=True)
pd.concat([df1, df2], axis=1)
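A minimal sketch (with made-up frames that share the same index, where the three defaults coincide):
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2]}, index=['x', 'y'])
df2 = pd.DataFrame({'b': [3, 4]}, index=['x', 'y'])

r1 = df1.join(df2)                                          # left join on index
r2 = pd.merge(df1, df2, left_index=True, right_index=True)  # inner join on index
r3 = pd.concat([df1, df2], axis=1)                          # outer join on index

assert r1.equals(r2) and r2.equals(r3)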

How to drop a list of rows from Pandas dataframe?

I have a dataframe df:
>>> df
                  sales  discount  net_sales    cogs
STK_ID RPT_Date
600141 20060331   2.709       NaN      2.709   2.245
       20060630   6.590       NaN      6.590   5.291
       20060930  10.103       NaN     10.103   7.981
       20061231  15.915       NaN     15.915  12.686
       20070331   3.196       NaN      3.196   2.710
       20070630   7.907       NaN      7.907   6.459
Then I want to drop the rows whose sequence numbers are given in a list, say [1, 2, 4] here, leaving:
                  sales  discount  net_sales    cogs
STK_ID RPT_Date
600141 20060331   2.709       NaN      2.709   2.245
       20061231  15.915       NaN     15.915  12.686
       20070630   7.907       NaN      7.907   6.459
How, or with what function, can I do that?
Use DataFrame.drop and pass it a Series of index labels:
In [65]: df
Out[65]:
       one  two
one      1    4
two      2    3
three    3    2
four     4    1

In [66]: df.drop(df.index[[1, 3]])
Out[66]:
       one  two
one      1    4
three    3    2
Note that it may be important to use inplace=True when you want to do the drop in place:
df.drop(df.index[[1, 3]], inplace=True)
Because your original call does not assign the result back to anything, the inplace version should be used.
http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.drop.html
If the DataFrame is huge, and the number of rows to drop is large as well, then a simple drop by index, df.drop(df.index[...]), takes too much time.
In my case, I have a multi-indexed DataFrame of floats with 100M rows x 3 cols, and I need to remove 10k rows from it. The fastest method I found is, quite counterintuitively, to take the remaining rows.
Let indexes_to_drop be an array of positional indexes to drop ([1, 2, 4] in the question).
indexes_to_keep = set(range(df.shape[0])) - set(indexes_to_drop)
df_sliced = df.take(list(indexes_to_keep))
In my case this took 20.5s, while the simple df.drop took 5min 27s and consumed a lot of memory. The resulting DataFrame is the same.
I solved this in a simpler way - just in 2 steps.
Make a dataframe with unwanted rows/data.
Use the index of this unwanted dataframe to drop the rows from the original dataframe.
Example:
Suppose you have a dataframe df which has many columns, including 'Age', which is an integer. Now let's say you want to drop all the rows with a negative 'Age'.
df_age_negative = df[ df['Age'] < 0 ] # Step 1
df = df.drop(df_age_negative.index, axis=0) # Step 2
Hope this is much simpler and helps you.
You can also pass to DataFrame.drop the label itself (instead of Series of index labels):
In[17]: df
Out[17]:
a b c d e
one 0.456558 -2.536432 0.216279 -1.305855 -0.121635
two -1.015127 -0.445133 1.867681 2.179392 0.518801
In[18]: df.drop('one')
Out[18]:
a b c d e
two -1.015127 -0.445133 1.867681 2.179392 0.518801
Which is equivalent to:
In[19]: df.drop(df.index[[0]])
Out[19]:
a b c d e
two -1.015127 -0.445133 1.867681 2.179392 0.518801
If I want to drop a row which has let's say index x, I would do the following:
df = df[df.index != x]
If I would want to drop multiple indices (say these indices are in the list unwanted_indices), I would do:
desired_indices = [i for i in range(len(df.index)) if i not in unwanted_indices]
desired_df = df.iloc[desired_indices]
Here is a somewhat specific example I would like to show. Say you have many duplicate entries in some of your rows. If you have string entries you can easily use string methods to find all the indexes to drop.
ind_drop = df[df['column_of_strings'].apply(lambda x: x.startswith('Keyword'))].index
And now to drop those rows using their indexes
new_df = df.drop(ind_drop)
Use the index argument to drop a row:
df.drop(index=2, inplace=True)
For multiple rows:
df.drop(index=[1, 3], inplace=True)
In a comment on @theodros-zelleke's answer, @j-jones asked what to do if the index is not unique. I had to deal with such a situation. What I did was to rename the duplicates in the index before I called drop(), a la:
dropped_indexes = <determine-indexes-to-drop>
df.index = rename_duplicates(df.index)
df.drop(df.index[dropped_indexes], inplace=True)
where rename_duplicates() is a function I defined that went through the elements of index and renamed the duplicates. I used the same renaming pattern as pd.read_csv() uses on columns, i.e., "%s.%d" % (name, count), where name is the name of the row and count is how many times it has occurred previously.
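The answer doesn't include the helper itself; a minimal sketch of such a renamer (one possible implementation, not necessarily the author's original) could look like:
from collections import defaultdict

def rename_duplicates(index):
    """Suffix repeated labels as name.1, name.2, ... like pd.read_csv does for columns."""
    seen = defaultdict(int)
    new_labels = []
    for name in index:
        count = seen[name]
        new_labels.append(name if count == 0 else "%s.%d" % (name, count))
        seen[name] += 1
    return new_labels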
Determining the index from the boolean mask as described above, e.g.
df[df['column'].isin(values)].index
can be more memory intensive than determining the index using this method:
pd.Index(np.where(df['column'].isin(values))[0])
applied like so
df.drop(pd.Index(np.where(df['column'].isin(values))[0]), inplace = True)
This method is useful when dealing with large dataframes and limited memory.
To drop rows with indices 1, 2, 4 you can use:
df[~df.index.isin([1, 2, 4])]
The tilde operator ~ negates the result of the method isin. Another option is to drop indices:
df.loc[df.index.drop([1, 2, 4])]
Look at the following dataframe df
df
column1 column2 column3
0 1 11 21
1 2 12 22
2 3 13 23
3 4 14 24
4 5 15 25
5 6 16 26
6 7 17 27
7 8 18 28
8 9 19 29
9 10 20 30
Let's drop all the rows which have an odd number in column1.
Create a list of all the elements in column1 and keep only those elements that are even numbers (the elements that you don't want to drop):
keep_elements = [x for x in df.column1 if x % 2 == 0]
All the rows with the values [2, 4, 6, 8, 10] in column1 will be retained (not dropped).
df.set_index('column1',inplace = True)
df.drop(df.index.difference(keep_elements),axis=0,inplace=True)
df.reset_index(inplace=True)
We make column1 the index and drop all the rows that are not required. Then we reset the index back.
df
column1 column2 column3
0 2 12 22
1 4 14 24
2 6 16 26
3 8 18 28
4 10 20 30
As Dennis Golomazov's answer suggests, you can use drop to drop rows; alternatively, you can select the rows to keep. Let's say you have a list of row indices to drop called indices_to_drop. You can convert it to a boolean mask as follows:
import numpy as np

mask = np.ones(len(df), dtype=bool)
mask[indices_to_drop] = False
You can use this mask directly:
df_new = df.iloc[mask]
The nice thing about this method is that mask can come from any source: it can be a condition involving many columns, or something else.
The really nice thing is, you really don't need the index of the original DataFrame at all, so it doesn't matter if the index is unique or not.
The disadvantage is of course that you can't do the drop in-place with this method.
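As a small illustration of that point (a sketch with made-up data), the same .iloc call accepts a mask built either from positions or from a condition over several columns:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 0, 30, 0]})

# mask built from positions to drop...
indices_to_drop = [1, 3]
mask = np.ones(len(df), dtype=bool)
mask[indices_to_drop] = False
print(df.iloc[mask])                 # keeps rows 0 and 2

# ...or directly from a condition involving several columns
mask = ((df['a'] % 2 == 1) & (df['b'] > 0)).to_numpy()
print(df.iloc[mask])                 # same rows here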
Consider an example dataframe:
df =
index  column1
0      00
1      10
2      20
3      30
We want to drop the 2nd and 3rd rows (the rows at index 1 and 2).
Approach 1:
df = df.drop(df.index[[1, 2]])
or
df.drop(df.index[[1, 2]], inplace=True)
print(df)
df =
index  column1
0      00
3      30
# This approach removes the rows we wanted, but the index remains unordered
Approach 2:
df.drop(df.index[[1, 2]], inplace=True)
df.reset_index(drop=True, inplace=True)
print(df)
df =
index  column1
0      00
1      30
# This approach removes the rows we wanted and resets the index.
This worked for me
# Create a list containing the index numbers you want to remove
index_list = list(range(42766, 42798))
df.drop(df.index[index_list], inplace =True)
df.shape
This should drop all rows whose positions fall within the created range.
