Change multiple columns in a DataFrame - python

I am a beginner in Python and made my first venture into Pandas today. What I want to do is to convert several columns from string to float. Here's a quick example:
import numpy as np
import pandas as pd

def convert(str):
    try:
        return float(str.replace(',', ''))
    except:
        return None

df = pd.DataFrame([
    ['A', '1,234', '456,789'],
    ['B', '1', '---']
], columns=['Company Name', 'X', 'Y'])
I want to convert X and Y to float. The reality has more columns and I don't always know the column names for X and Y so I must use integer indexing.
This works:
df.iloc[:, 1] = df.iloc[:, 1].apply(convert)
df.iloc[:, 2] = df.iloc[:, 2].apply(convert)
This doesn't:
df.iloc[:, 1:2] = df.iloc[:, 1:2].apply(convert)
# Error: could not broadcast input array from shape (2) into shape (2,1)
Is there any way to apply the convert function to multiple columns at once?

There are several issues with your logic:
The slice 1:2 excludes index 2, consistent with Python list slicing and slice object syntax. Use 1:3 instead.
Applying an element-wise function to a series via pd.Series.apply works. To apply an element-wise function to a dataframe, you need pd.DataFrame.applymap (renamed to DataFrame.map in pandas 2.1).
Never shadow built-ins: use mystr or x instead of str as a variable or argument name.
When you use a try / except construct, you should generally specify error type(s), in this case ValueError.
Therefore, this is one solution:
def convert(x):
    try:
        return float(x.replace(',', ''))
    except ValueError:
        return None

df.iloc[:, 1:3] = df.iloc[:, 1:3].applymap(convert)
print(df)

  Company Name     X       Y
0            A  1234  456789
1            B     1     NaN
However, your logic is inefficient: you should look to leverage column-wise operations wherever possible. This can be achieved via pd.DataFrame.apply, along with pd.to_numeric applied to each series:
def convert_series(x):
    return pd.to_numeric(x.str.replace(',', ''), errors='coerce')

df.iloc[:, 1:3] = df.iloc[:, 1:3].apply(convert_series)
print(df)

  Company Name     X       Y
0            A  1234  456789
1            B     1     NaN
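For completeness, the same column-wise idea can be written without a helper function. This is a sketch (not from the answer above) that assumes the only cleanup needed is stripping comma thousands separators:

```python
import pandas as pd

df = pd.DataFrame([
    ['A', '1,234', '456,789'],
    ['B', '1', '---'],
], columns=['Company Name', 'X', 'Y'])

# Strip the separators across the selected columns in one pass, then
# convert column-wise; errors='coerce' turns '---' into NaN.
df.iloc[:, 1:3] = (
    df.iloc[:, 1:3]
      .replace(',', '', regex=True)
      .apply(pd.to_numeric, errors='coerce')
)
print(df)
```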

Related

Lazy evaluate Pandas dataframe filters

I'm observing behavior that's weird to me. Can anyone tell me how I can define a filter once and re-use it throughout my code?
>>> df = pd.DataFrame([1, 2, 3], columns=['A'])
>>> my_filter = df.A == 2
>>> df.loc[1] = 5
>>> df[my_filter]
   A
1  5
I expect my_filter to return an empty result, since none of the values in column A equal 2 any more.
I'm thinking about making a function that returns the filter and re-using that, but is there a more Pythonic (and pandas-idiomatic) way of doing this?
def get_my_filter(df):
    return df.A == 2

df[get_my_filter(df)]
# change df
df[get_my_filter(df)]
Masks are not dynamic: they are evaluated when you define them, not when you use them.
So if you still need to change the dataframe value, you should swap lines 2 and 3, so the mask is computed after the update. That would work.
As written, the mask was already computed when you applied the filter; changing a value in the row afterwards won't help.
df = pd.DataFrame([1, 2, 3], columns=['A'])
my_filter = df.A == 2
print(my_filter)
'''
0    False
1     True
2    False
Name: A, dtype: bool
'''
As you can see, it returns a Series that reflects the first version of df; if you change the data after this point, the mask will not update. But you can define the filter as a string and evaluate it lazily inside the eval() function to achieve what you want:
df = pd.DataFrame([1,2,3], columns=['A'])
my_filter = 'df.A == 2'
df.loc[1] = 5
df[eval(my_filter)]
'''
Out[205]:
Empty DataFrame
Columns: [A]
Index: []
'''
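An alternative that avoids eval entirely (a sketch, not part of the answer above): .loc accepts a callable, which is re-evaluated against the current frame every time it is used, so the filter stays "lazy" without string evaluation:

```python
import pandas as pd

df = pd.DataFrame([1, 2, 3], columns=['A'])

# A callable filter: evaluated at lookup time, not at definition time
my_filter = lambda d: d.A == 2

df.loc[1] = 5
print(df.loc[my_filter])  # empty, since no value equals 2 any more
```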

ValueError: Can only compare identically-labeled DataFrame objects [duplicate]

I'm using Pandas to compare the outputs of two files loaded into two data frames (uat, prod):
...
uat = uat[['Customer Number','Product']]
prod = prod[['Customer Number','Product']]
print uat['Customer Number'] == prod['Customer Number']
print uat['Product'] == prod['Product']
print uat == prod
The first two match exactly:
74357    True
74356    True
Name: Customer Number, dtype: bool
74357    True
74356    True
Name: Product, dtype: bool
For the third print, I get an error:
Can only compare identically-labeled DataFrame objects. If the first two compared fine, what's wrong with the 3rd?
Thanks
Here's a small example to demonstrate this (this behavior applied only to DataFrames, not Series, until Pandas 0.19, where it applies to both):
In [1]: df1 = pd.DataFrame([[1, 2], [3, 4]])
In [2]: df2 = pd.DataFrame([[3, 4], [1, 2]], index=[1, 0])
In [3]: df1 == df2
Exception: Can only compare identically-labeled DataFrame objects
One solution is to sort the index first (Note: some functions require sorted indexes):
In [4]: df2.sort_index(inplace=True)
In [5]: df1 == df2
Out[5]:
0 1
0 True True
1 True True
Note: == is also sensitive to the order of columns, so you may have to use sort_index(axis=1):
In [11]: df1.sort_index().sort_index(axis=1) == df2.sort_index().sort_index(axis=1)
Out[11]:
0 1
0 True True
1 True True
Note: This can still raise (if the index/columns aren't identically labelled after sorting).
You can also try dropping the index column if it is not needed to compare:
print(df1.reset_index(drop=True) == df2.reset_index(drop=True))
I have used this same technique in a unit test like so:
from pandas.util.testing import assert_frame_equal  # pandas.testing in modern pandas
assert_frame_equal(actual.reset_index(drop=True), expected.reset_index(drop=True))
At the time this question was asked there wasn't another function in Pandas to test equality, but it has since been added: DataFrame.equals.
You use it like this:
df1.equals(df2)
Some differences from == are:
You don't get the error described in the question
It returns a simple boolean.
NaN values in the same location are considered equal
2 DataFrames need to have the same dtype to be considered equal, see this stackoverflow question
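A small sketch (toy frames, not the questioner's data) illustrating those differences:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [1.0, np.nan]})
df2 = pd.DataFrame({'a': [1.0, np.nan]})

# == treats NaN as unequal to NaN; equals treats same-location NaNs as equal
print((df1 == df2).all().all())        # False, because of the NaN row
print(df1.equals(df2))                 # True

# equals is dtype-sensitive: same values, different dtype
print(df1.equals(df1.astype(object)))  # False
```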
EDIT:
As pointed out in @paperskilltrees' answer, index alignment is important. Apart from the solution provided there, another option is to sort the index of the DataFrames before comparing them. For df1 that would be df1.sort_index(inplace=True).
When you compare two DataFrames, you must ensure that the number of records in the first DataFrame matches with the number of records in the second DataFrame. In our example, each of the two DataFrames had 4 records, with 4 products and 4 prices.
If, for example, one of the DataFrames had 5 products, while the other DataFrame had 4 products, and you tried to run the comparison, you would get the following error:
ValueError: Can only compare identically-labeled Series objects
This should work:
import pandas as pd
import numpy as np

firstProductSet = {'Product1': ['Computer', 'Phone', 'Printer', 'Desk'],
                   'Price1': [1200, 800, 200, 350]}
df1 = pd.DataFrame(firstProductSet, columns=['Product1', 'Price1'])

secondProductSet = {'Product2': ['Computer', 'Phone', 'Printer', 'Desk'],
                    'Price2': [900, 800, 300, 350]}
df2 = pd.DataFrame(secondProductSet, columns=['Product2', 'Price2'])

df1['Price2'] = df2['Price2']  # add the Price2 column from df2 to df1
df1['pricesMatch?'] = np.where(df1['Price1'] == df2['Price2'], 'True', 'False')  # new column flagging matching prices
df1['priceDiff?'] = np.where(df1['Price1'] == df2['Price2'], 0, df1['Price1'] - df2['Price2'])  # new column with the price difference
print(df1)
example from https://datatofish.com/compare-values-dataframes/
Flyingdutchman's answer is great but wrong: it uses DataFrame.equals, which will return False in your case.
Instead, you want to use DataFrame.eq, which will return True.
It seems that DataFrame.equals ignores the dataframe's index, while DataFrame.eq uses dataframes' indexes for alignment and then compares the aligned values. This is an occasion to quote the central gotcha of Pandas:
Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken unless done so explicitly by you.
As we can see in the following examples, data alignment is neither broken nor enforced unless explicitly requested, so we have three different situations.
No explicit instruction given as to alignment: == aka DataFrame.__eq__,
In [1]: import pandas as pd
In [2]: df1 = pd.DataFrame(index=[0, 1, 2], data={'col1':list('abc')})
In [3]: df2 = pd.DataFrame(index=[2, 0, 1], data={'col1':list('cab')})
In [4]: df1 == df2
---------------------------------------------------------------------------
...
ValueError: Can only compare identically-labeled DataFrame objects
Alignment is explicitly broken: DataFrame.equals, DataFrame.values, DataFrame.reset_index(),
In [5]: df1.equals(df2)
Out[5]: False
In [9]: df1.values == df2.values
Out[9]:
array([[False],
       [False],
       [False]])
In [10]: (df1.values == df2.values).all().all()
Out[10]: False
Alignment is explicitly enforced: DataFrame.eq, DataFrame.sort_index(),
In [6]: df1.eq(df2)
Out[6]:
   col1
0  True
1  True
2  True
In [8]: df1.eq(df2).all().all()
Out[8]: True
My answer is as of pandas version 1.0.3.
Here I am showing a complete example of how to handle this error. I have added rows with zeros. You can have your dataframes from csv or any other source.
import pandas as pd
import numpy as np

# df1 with 9 rows
df1 = pd.DataFrame({'Name': ['John', 'Mike', 'Smith', 'Wale', 'Marry', 'Tom', 'Menda', 'Bolt', 'Yuswa'],
                    'Age': [23, 45, 12, 34, 27, 44, 28, 39, 40]})
# df2 with 8 rows
df2 = pd.DataFrame({'Name': ['John', 'Mike', 'Wale', 'Marry', 'Tom', 'Menda', 'Bolt', 'Yuswa'],
                    'Age': [25, 45, 14, 34, 26, 44, 29, 42]})

# get lengths of df1 and df2
df1_len = len(df1)
df2_len = len(df2)
diff = df1_len - df2_len

rows_to_be_added1 = rows_to_be_added2 = 0
if diff < 0:
    rows_to_be_added1 = abs(diff)
else:
    rows_to_be_added2 = diff

# add empty rows to df1
if rows_to_be_added1 > 0:
    df1 = df1.append(pd.DataFrame(np.zeros((rows_to_be_added1, len(df1.columns))), columns=df1.columns))
# add empty rows to df2
if rows_to_be_added2 > 0:
    df2 = df2.append(pd.DataFrame(np.zeros((rows_to_be_added2, len(df2.columns))), columns=df2.columns))

# at this point we have two dataframes with the same number of rows, and maybe different indexes;
# drop the indexes of both, so we can compare the dataframes and do other operations like update etc.
df2.reset_index(drop=True, inplace=True)
df1.reset_index(drop=True, inplace=True)

# add a new column to df1
df1['New_age'] = None
# compare the Age columns of df1 and df2; fill New_age with df2's Age where they match, else None
df1['New_age'] = np.where(df1['Age'] == df2['Age'], df2['Age'], None)

# drop rows where Name is 0.0
df2 = df2.drop(df2[df2['Name'] == 0.0].index)
# now we don't get the error ValueError: Can only compare identically-labeled Series objects
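Note that DataFrame.append was removed in pandas 2.0; the padding step above can be sketched with pd.concat instead (shorter toy frames here for brevity):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Name': ['John', 'Mike', 'Smith'], 'Age': [23, 45, 12]})
df2 = pd.DataFrame({'Name': ['John', 'Mike'], 'Age': [25, 45]})

# Pad the shorter frame with rows of zeros so the lengths match
diff = len(df1) - len(df2)
if diff > 0:
    padding = pd.DataFrame(np.zeros((diff, len(df2.columns))), columns=df2.columns)
    df2 = pd.concat([df2, padding], ignore_index=True)

# Now the comparison aligns row-for-row without raising
print(df1['Age'] == df2['Age'])
```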
I found where the error is coming from in my case:
The problem was that column names list was accidentally enclosed in another list.
Consider following example:
column_names = ['warrior', 'eat', 'ok', 'monkeys']
df_good = pd.DataFrame(np.ones(shape=(6, 4)), columns=column_names)
df_good['ok'] < df_good['monkeys']
>>> 0    False
    1    False
    2    False
    3    False
    4    False
    5    False
df_bad = pd.DataFrame(np.ones(shape=(6, 4)), columns=[column_names])
df_bad['ok'] < df_bad['monkeys']
>>> ValueError: Can only compare identically-labeled DataFrame objects
And the thing is, you cannot visually distinguish the bad DataFrame from the good one: wrapping the list turns the columns into a MultiIndex, so df_bad['ok'] selects a one-column DataFrame rather than a Series, and it cannot be compared with the one labeled 'monkeys'.
In my case, I just passed the columns parameter directly when creating the DataFrame, because the data from one SQL query came with column names and the data from the other did not.

Splitting a dataframe based on condition

I am trying to split my dataframe into two based on medical_plan_id: if it is empty, into df1; if not empty, into df2.
df1 = df_with_medicalplanid[df_with_medicalplanid['medical_plan_id'] == ""]
df2 = df_with_medicalplanid[df_with_medicalplanid['medical_plan_id'] is not ""]
The code below works, but if there are no empty fields, my code raises TypeError("invalid type comparison").
df1 = df_with_medicalplanid[df_with_medicalplanid['medical_plan_id'] == ""]
How to handle such situation?
My df_with_medicalplanid looks like below:
  wellthie_issuer_identifier ... medical_plan_id
0                   UHC99806 ...            None
1                   UHC99806 ...            None
Use ==, not is, to test equality
Likewise, use != instead of is not for inequality.
is has a special meaning in Python. It returns True if two variables point to the same object, while == checks if the objects referred to by the variables are equal. See also Is there a difference between == and is in Python?.
Don't repeat mask calculations
The Boolean masks you are creating are the most expensive part of your logic. It's also logic you want to avoid repeating manually as your first and second masks are inverses of each other. You can therefore use the bitwise inverse ~ ("tilde"), also accessible via operator.invert, to negate an existing mask.
Empty strings are different from null values
Equality versus empty strings can be tested via == '', but equality versus null values requires a specialized method: pd.Series.isnull. This is because null values are represented in NumPy arrays, which are used by Pandas, by np.nan, and np.nan != np.nan by design.
If you want to replace empty strings with null values, you can do so:
df['medical_plan_id'] = df['medical_plan_id'].replace('', np.nan)
Conceptually, it makes sense for missing values to be null (np.nan) rather than empty strings. But the opposite of the above process, i.e. converting null values to empty strings, is also possible:
df['medical_plan_id'] = df['medical_plan_id'].fillna('')
If the difference matters, you need to know your data and apply the appropriate logic.
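A minimal sketch of the distinction (hypothetical values):

```python
import numpy as np
import pandas as pd

s = pd.Series(['', np.nan, 'abc'])

print((s == '').tolist())   # [True, False, False]  -- catches empty strings only
print(s.isnull().tolist())  # [False, True, False]  -- catches null values only
print(np.nan == np.nan)     # False: NaN never compares equal to itself
```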
Semi-final solution
Assuming you do indeed have null values, calculate a single Boolean mask and its inverse:
mask = df['medical_plan_id'].isnull()
df1 = df[mask]
df2 = df[~mask]
Final solution: avoid extra variables
Creating additional variables is something, as a programmer, you should look to avoid. In this case, there's no need to create two new variables, you can use GroupBy with dict to give a dictionary of dataframes with False (== 0) and True (== 1) keys corresponding to your masks:
dfs = dict(tuple(df.groupby(df['medical_plan_id'].isnull())))
Then dfs[0] represents df2 and dfs[1] represents df1 (see also this related answer). A variant of the above, you can forego dictionary construction and use Pandas GroupBy methods:
dfs = df.groupby(df['medical_plan_id'].isnull())
dfs.get_group(0) # equivalent to dfs[0] from dict solution
dfs.get_group(1) # equivalent to dfs[1] from dict solution
Example
Putting all the above in action:
df = pd.DataFrame({'medical_plan_id': [np.nan, '', 2134, 4325, 6543, '', np.nan],
                   'values': [1, 2, 3, 4, 5, 6, 7]})
df['medical_plan_id'] = df['medical_plan_id'].replace('', np.nan)
dfs = dict(tuple(df.groupby(df['medical_plan_id'].isnull())))
print(dfs[0], dfs[1], sep='\n'*2)

   medical_plan_id  values
2           2134.0       3
3           4325.0       4
4           6543.0       5

   medical_plan_id  values
0              NaN       1
1              NaN       2
5              NaN       6
6              NaN       7
Another variant is to unpack df.groupby, which returns an iterator of tuples (the first item being the group key and the second the sub-dataframe).
Like this for instance:
cond = df_with_medicalplanid['medical_plan_id'] == ''
(_, df1) , (_, df2) = df_with_medicalplanid.groupby(cond)
In Python, _ is conventionally used to mark variables you are not interested in keeping. I have separated the code into two lines for readability.
Full example
import pandas as pd

df_with_medicalplanid = pd.DataFrame({
    'medical_plan_id': ['214212', '', '12251', '12421', ''],
    'value': 1
})
cond = df_with_medicalplanid['medical_plan_id'] == ''
(_, df1), (_, df2) = df_with_medicalplanid.groupby(cond)
print(df1)
Returns:
  medical_plan_id  value
0          214212      1
2           12251      1
3           12421      1
cond = df_with_medicalplanid['medical_plan_id'] == ''
(_, df1), (_, df2) = df_with_medicalplanid.groupby(cond)  # Anton missed cond in the right-side bracket
print(df1)

Using a dataframe to construct an other in a for loop

As the title says, I've been trying to build a Pandas DataFrame from another df, using a for loop and calculating new columns from the last one built.
So far, I've tried:
df = pd.DataFrame(np.arange(10))
df.columns = [10]
df1 = pd.DataFrame(np.arange(10))
df1.columns = [10]
steps = np.linspace(10, 1, 10, dtype=int)
This works:
for i in steps:
    print(i)
    df[i-1] = df[i].apply(lambda a: a-1)
But when I try building df and df1 at the same time like so :
for i in steps:
    print(i)
    df[i-1] = df[i].apply(lambda a: a-df1[i])
    df1[i-1] = df1[i].apply(lambda a: a-1)
It returns a lot of gibberish plus the line:
ValueError: Wrong number of items passed 10, placement implies 1
In this example, I am well aware that I could build df1 first and build df after. But it returns the same error if I try :
for i in steps:
    print(i)
    df[i-1] = df[i].apply(lambda a: a-df1[i])
    df1[i-1] = df1[i].apply(lambda a: a-df[i])
Which is what I really need in the end.
Any help is much appreciated,
Alex
apply applies a function along an axis that you specify: either 0 (applying the function to each column) or 1 (applying the function to each row). By default, it applies the function to the columns. In your first example:
for i in steps:
    print(i)
    df[i-1] = df[i].apply(lambda a: a-1)
each column is visited by your for loop, and .apply subtracts 1 from every value in the column; you can think of a as standing for your entire column. It is exactly the same as the following:
for i in steps:
    print(i)
    df[i - 1] = df[i] - 1
One way to get a feel for .apply is the following. Assuming I have this dataframe:
df = pd.DataFrame(np.random.rand(10, 4))
df.sum() and df.apply(lambda a: np.sum(a)) yield exactly the same result. It is just a simple example, but you can do more powerful calculations if needed.
Note that .apply is not the fastest method, so try to avoid it if you can.
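A quick runnable sketch of that comparison (toy data, not the asker's frames):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.rand(10, 4))

# Column-wise: the built-in reduction and apply give the same sums
assert np.allclose(df.sum(), df.apply(lambda a: np.sum(a)))

# Element-wise: the vectorized form matches apply (and is usually faster)
assert (df[0] - 1).tolist() == df[0].apply(lambda a: a - 1).tolist()
print('apply and vectorized results match')
```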
An example where apply would be useful is if you have a function some_fct() defined that takes int or float as arguments and you would like to apply it to the elements of a dataframe column.
import pandas as pd
import numpy as np
import math

def some_fct(x):
    return math.sin(x) / x

np.random.seed(100)
df = pd.DataFrame(np.random.rand(10, 2))
Obviously, some_fct(df[0]) would not work, as the function takes an int or float as its argument while df[0] is a Series. However, using the apply method, you can apply your function to each of the float elements of df[0]:
df[0].apply(lambda x: some_fct(x))
Found it, I just need to drop the .apply !
Example :
df = pd.DataFrame(np.arange(10))
df.columns = [10]
df1 = pd.DataFrame(np.arange(10))
df1.columns = [10]
steps = np.linspace(10, 1, 10, dtype=int)

for i in steps:
    print(i)
    df[i-1] = df[i] - df1[i]
    df1[i-1] = df1[i] + df[i]
It does exactly what it should!
I don't have enough knowledge about Python internals to explain it fully, but with .apply the lambda runs once per element, so a - df1[i] produces a whole Series for every element instead of a scalar, which is where the "Wrong number of items passed" error comes from.
