I'm using Pandas to compare the outputs of two files loaded into two data frames (uat, prod):
...
uat = uat[['Customer Number','Product']]
prod = prod[['Customer Number','Product']]
print(uat['Customer Number'] == prod['Customer Number'])
print(uat['Product'] == prod['Product'])
print(uat == prod)
The first two match exactly:
74357 True
74356 True
Name: Customer Number, dtype: bool
74357 True
74356 True
Name: Product, dtype: bool
For the third print, I get an error:
Can only compare identically-labeled DataFrame objects
If the first two compared fine, what's wrong with the third?
Thanks
Here's a small example to demonstrate this (before pandas 0.19 the check applied only to DataFrames, not Series; from 0.19 onwards it applies to both):
In [1]: df1 = pd.DataFrame([[1, 2], [3, 4]])
In [2]: df2 = pd.DataFrame([[3, 4], [1, 2]], index=[1, 0])
In [3]: df1 == df2
Exception: Can only compare identically-labeled DataFrame objects
One solution is to sort the index first (Note: some functions require sorted indexes):
In [4]: df2.sort_index(inplace=True)
In [5]: df1 == df2
Out[5]:
0 1
0 True True
1 True True
Note: == is also sensitive to the order of columns, so you may have to use sort_index(axis=1):
In [11]: df1.sort_index().sort_index(axis=1) == df2.sort_index().sort_index(axis=1)
Out[11]:
0 1
0 True True
1 True True
Note: This can still raise (if the index/columns aren't identically labelled after sorting).
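For instance (a quick sketch with a hypothetical df3), if the two frames simply don't share the same labels, sorting can't make them comparable:
In [6]: df3 = pd.DataFrame([[1, 2]], index=[5])
In [7]: df1.sort_index() == df3.sort_index()  # still raises: the labels differ even after sorting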
You can also try dropping the index if it is not needed for the comparison:
print(df1.reset_index(drop=True) == df2.reset_index(drop=True))
I have used this same technique in a unit test like so:
from pandas.testing import assert_frame_equal  # formerly pandas.util.testing, which is deprecated
assert_frame_equal(actual.reset_index(drop=True), expected.reset_index(drop=True))
At the time this question was asked there wasn't another function in pandas to test equality, but one has since been added: DataFrame.equals.
You use it like this:
df1.equals(df2)
Some differences from == are:
You don't get the error described in the question
It returns a simple boolean.
NaN values in the same location are considered equal
The two DataFrames need to have the same dtypes to be considered equal; see this Stack Overflow question
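A minimal sketch illustrating these differences (df_a, df_b and friends are hypothetical):
import numpy as np
import pandas as pd

df_a = pd.DataFrame({'x': [1.0, np.nan]})
df_b = pd.DataFrame({'x': [1.0, np.nan]})
df_a.equals(df_b)                 # True: NaNs in the same location count as equal
(df_a == df_b).all().all()        # False: NaN == NaN is False element-wise

df_int = pd.DataFrame({'x': [1, 2]})
df_float = pd.DataFrame({'x': [1.0, 2.0]})
df_int.equals(df_float)           # False: dtypes differ (int64 vs float64)
(df_int == df_float).all().all()  # True: the values compare equal element-wise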
EDIT:
As pointed out in paperskilltrees' answer, index alignment is important. Apart from the solution provided there, another option is to sort the indexes of the DataFrames before comparing them. For df1 that would be df1.sort_index(inplace=True).
When you compare two DataFrames, you must ensure that the number of records in the first DataFrame matches the number of records in the second. In our example, each of the two DataFrames had 4 records, with 4 products and 4 prices.
If, for example, one of the DataFrames had 5 products, while the other DataFrame had 4 products, and you tried to run the comparison, you would get the following error:
ValueError: Can only compare identically-labeled Series objects
This should work:
import pandas as pd
import numpy as np
firstProductSet = {'Product1': ['Computer', 'Phone', 'Printer', 'Desk'],
                   'Price1': [1200, 800, 200, 350]}
df1 = pd.DataFrame(firstProductSet, columns=['Product1', 'Price1'])
secondProductSet = {'Product2': ['Computer', 'Phone', 'Printer', 'Desk'],
                    'Price2': [900, 800, 300, 350]}
df2 = pd.DataFrame(secondProductSet, columns=['Product2', 'Price2'])
df1['Price2'] = df2['Price2']  # add the Price2 column from df2 to df1
df1['pricesMatch?'] = np.where(df1['Price1'] == df2['Price2'], 'True', 'False')  # flag rows where the prices match
df1['priceDiff?'] = np.where(df1['Price1'] == df2['Price2'], 0, df1['Price1'] - df2['Price2'])  # price difference where they don't
print(df1)
example from https://datatofish.com/compare-values-dataframes/
Flyingdutchman's answer is great but wrong: it uses DataFrame.equals, which will return False in your case.
Instead, you want to use DataFrame.eq, which will return True.
It seems that DataFrame.equals does not align on the index (the labels and values must match positionally), while DataFrame.eq uses the dataframes' indexes for alignment and then compares the aligned values. This is an occasion to quote the central gotcha of Pandas:
Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken unless done so explicitly by you.
As we can see in the following examples, data alignment is neither broken nor enforced unless explicitly requested, so we have three different situations.
No explicit instruction given as to the alignment: == (aka DataFrame.__eq__).
In [1]: import pandas as pd
In [2]: df1 = pd.DataFrame(index=[0, 1, 2], data={'col1':list('abc')})
In [3]: df2 = pd.DataFrame(index=[2, 0, 1], data={'col1':list('cab')})
In [4]: df1 == df2
---------------------------------------------------------------------------
...
ValueError: Can only compare identically-labeled DataFrame objects
Alignment is explicitly broken: DataFrame.equals, DataFrame.values, DataFrame.reset_index().
In [5]: df1.equals(df2)
Out[5]: False
In [9]: df1.values == df2.values
Out[9]:
array([[False],
[False],
[False]])
In [10]: (df1.values == df2.values).all().all()
Out[10]: False
Alignment is explicitly enforced: DataFrame.eq, DataFrame.sort_index().
In [6]: df1.eq(df2)
Out[6]:
col1
0 True
1 True
2 True
In [8]: df1.eq(df2).all().all()
Out[8]: True
My answer is as of pandas version 1.0.3.
Here I show a complete example of how to handle this error by padding the shorter dataframe with rows of zeros. Your dataframes can come from CSV or any other source.
import pandas as pd
import numpy as np

# df1 with 9 rows
df1 = pd.DataFrame({'Name': ['John', 'Mike', 'Smith', 'Wale', 'Marry', 'Tom', 'Menda', 'Bolt', 'Yuswa'],
                    'Age': [23, 45, 12, 34, 27, 44, 28, 39, 40]})
# df2 with 8 rows
df2 = pd.DataFrame({'Name': ['John', 'Mike', 'Wale', 'Marry', 'Tom', 'Menda', 'Bolt', 'Yuswa'],
                    'Age': [25, 45, 14, 34, 26, 44, 29, 42]})

# work out how many rows each frame is short by
diff = len(df1) - len(df2)
rows_to_be_added1 = rows_to_be_added2 = 0
if diff < 0:
    rows_to_be_added1 = abs(diff)
else:
    rows_to_be_added2 = diff

# pad the shorter frame with zero-filled rows
# (pd.concat is used here because DataFrame.append was removed in pandas 2.0)
if rows_to_be_added1 > 0:
    df1 = pd.concat([df1, pd.DataFrame(np.zeros((rows_to_be_added1, len(df1.columns))), columns=df1.columns)])
if rows_to_be_added2 > 0:
    df2 = pd.concat([df2, pd.DataFrame(np.zeros((rows_to_be_added2, len(df2.columns))), columns=df2.columns)])

# at this point the two dataframes have the same number of rows but maybe different indexes;
# reset both so we can compare the dataframes (and do other operations, like update)
df1.reset_index(drop=True, inplace=True)
df2.reset_index(drop=True, inplace=True)

# compare the Age columns: New_age gets df2's Age where they match, else None
df1['New_age'] = np.where(df1['Age'] == df2['Age'], df2['Age'], None)

# drop the padding rows again (where Name is 0.0)
df2 = df2.drop(df2[df2['Name'] == 0.0].index)
# now we don't get the error ValueError: Can only compare identically-labeled Series objects
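As an aside, an outer merge on the key column is a sketch of an alternative that avoids the padding altogether (assuming Name identifies the rows; the names below mirror the example above):
import pandas as pd

df1 = pd.DataFrame({'Name': ['John', 'Mike', 'Smith'], 'Age': [23, 45, 12]})
df2 = pd.DataFrame({'Name': ['John', 'Mike'], 'Age': [25, 45]})

# an outer merge keeps rows present in only one frame (their other Age becomes NaN)
merged = df1.merge(df2, on='Name', how='outer', suffixes=('_df1', '_df2'))
merged['match'] = merged['Age_df1'] == merged['Age_df2']
print(merged)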
I found where the error is coming from in my case:
The problem was that the list of column names was accidentally enclosed in another list.
Consider the following example:
column_names=['warrior','eat','ok','monkeys']
df_good = pd.DataFrame(np.ones(shape=(6,4)),columns=column_names)
df_good['ok'] < df_good['monkeys']
>>> 0 False
1 False
2 False
3 False
4 False
5 False
df_bad = pd.DataFrame(np.ones(shape=(6,4)),columns=[column_names])
df_bad['ok'] < df_bad['monkeys']
>>> ValueError: Can only compare identically-labeled DataFrame objects
And the thing is, you cannot visually distinguish the bad DataFrame from the good one when it is printed.
In my case I ended up passing the columns parameter directly when creating the DataFrame, because the data from one SQL query came with column names while the other did not.
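One way to spot the difference programmatically is to inspect .columns, which exposes the accidental MultiIndex (a sketch reusing the example above):
import numpy as np
import pandas as pd

column_names = ['warrior', 'eat', 'ok', 'monkeys']
df_good = pd.DataFrame(np.ones(shape=(6, 4)), columns=column_names)
df_bad = pd.DataFrame(np.ones(shape=(6, 4)), columns=[column_names])

print(df_good.columns)     # a plain Index of the four names
print(df_bad.columns)      # a MultiIndex: the nested list added an extra level
print(type(df_bad['ok']))  # a DataFrame, not a Series, hence the comparison error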
Related
I'm observing a behavior that's weird to me; can anyone tell me how I can define a filter once and re-use it throughout my code?
>>> df = pd.DataFrame([1,2,3], columns=['A'])
>>> my_filter = df.A == 2
>>> df.loc[1] = 5
>>> df[my_filter]
A
1 5
I expect my_filter to return an empty result, since no value in column A equals 2 anymore.
I'm thinking about making a function that returns the filter and re-using that, but is there a more pythonic (as well as pandaic) way of doing this?
def get_my_filter(df):
    return df.A == 2

df[get_my_filter(df)]
# change df
df[get_my_filter(df)]
Masks are not dynamic; they capture the data as it was when you defined them.
So if you still need to change the dataframe value, you should swap lines 2 and 3.
That would work.
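A minimal sketch of that reordering:
import pandas as pd

df = pd.DataFrame([1, 2, 3], columns=['A'])
df.loc[1] = 5          # change the dataframe first
my_filter = df.A == 2  # then build the mask
print(df[my_filter])   # empty: no value in A equals 2 anymore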
You applied the filter to the original data in the first place; changing a value in a row afterwards won't help.
df = pd.DataFrame([1,2,3], columns=['A'])
my_filter = df.A == 2
print(my_filter)
'''
A
0 False
1 True
2 False
'''
As you can see, it returns a Series. If you change the data after this point, it will not work, because the mask represents the first version of the df. But you can define the filter as a string: you can achieve what you want by passing the string filter to the eval() function.
df = pd.DataFrame([1,2,3], columns=['A'])
my_filter = 'df.A == 2'
df.loc[1] = 5
df[eval(my_filter)]
'''
Out[205]:
Empty DataFrame
Columns: [A]
Index: []
'''
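As a side note, a callable achieves the same deferred evaluation without eval (a sketch; [] indexing accepts a callable, which pandas applies to the dataframe):
df = pd.DataFrame([1,2,3], columns=['A'])
my_filter = lambda d: d.A == 2  # evaluated only when the indexing happens
df.loc[1] = 5
df[my_filter]
'''
Empty DataFrame
Columns: [A]
Index: []
'''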
I'm new to python and I have a pandas dataframe that I want to iterate row by row (like, for example, a 2d array in other languages).
The goal is something like this as logic (if df were like a 2d array):
for row in df:
if df[row,2] == '' AND df[row,1] !='':
df[row-1,1] = df[row,1]
df[row,1] = ''
The point is: I want to move the contents of the current row to the previous one in column 1, if the current row's column 2 is empty and the current row's column 1 is not.
How would I do that in a python way? (without for example iterating with for loops). I saw something about vectorization but I don't really get how it works.
Or is it easier to convert the df into a list of lists, or an array? The files are big, so I would like a fast approach; I read from an Excel file, so I just used pandas' read_excel to import it into a df.
Try this (assuming by column 1 you meant the column at index 0, and by column 2, the one at index 1):
import pandas as pd
import numpy as np
col1, col2 = df.columns[0], df.columns[1]
mask = (df.loc[:, col1] != '') & (df.loc[:, col2] == '')
mask.iloc[0] = False # don't wrap around first row (even if the condition applies)
df.loc[mask.shift(-1, fill_value=False), col1] = df.loc[mask, col1].values
The key point here is using Series.shift to shift the boolean mask backwards by one. This only uses pandas/numpy vectorized functions, so it will be much better than iterating with a plain Python for loop.
Step-by-step
[Get the labels of your columns: col1, col2 = df.columns[0], df.columns[1]]
Create a boolean mask which is True for the rows which satisfy your condition, i.e. nonempty first column and empty second column:
mask = (df.loc[:, col1] != '') & (df.loc[:, col2] == '')
mask.iloc[0] = False
Here we manually set the first element of the mask to False, since even if the first row satisfies the condition, we can't do anything with it (there is no previous row to copy the value of the first column to). (This isn't a problem for Series.shift, which doesn't wrap around, but it is when we're using this mask, in step 3, to select the values that we're going to assign, with df.loc[mask, col1].values: if mask.iloc[0] were True, we would have one more value than targets.)
Shift the mask backwards by one to obtain a mask of the rows to be modified (i.e. the rows that come immediately before a row that satisfies the condition):
mask.shift(-1, fill_value=False)
Since we're shifting the mask backwards by one, the last element won't be defined, so we set it to False by using fill_value=False—we don't want to modify the last row.
Within column 1, assign the values of the rows satisfying the condition to their respective previous rows, using the two masks that we computed:
df.loc[mask.shift(-1, fill_value=False), col1] = df.loc[mask, col1].values
Here we must use .values on the right-hand-side to get the raw numpy array of values, since if we leave it as a Series, pandas will try to align the indices of the lhs and rhs (and since we shifted the rows by one, the indices won't match, so the end result will contain NaNs); instead, we simply want to assign the first element of the rhs to the first slot of the lhs, the second element to the second slot, etc.
This is more or less the same approach as the one outlined by Chaos in the comments.
Example
>>> sample = pd.DataFrame([("spam", ""), ("foo", "bar"), ("baz", ""), ("", "eggs")])
>>> df = sample.copy()
>>> df
0 1
0 spam
1 foo bar
2 baz
3 eggs
>>> col1, col2 = df.columns[0], df.columns[1]
>>> mask = (df.loc[:, col1] != '') & (df.loc[:, col2] == '')
>>> mask.iloc[0] = False
>>> df.loc[mask.shift(-1, fill_value=False), col1] = df.loc[mask, col1].values
>>> df
0 1
0 spam
1 baz bar
2 baz
3 eggs
Addendum
If you actually do want to make the value of the first row wrap around to the last row (if the condition applies to the first row)—i.e. you want to move the values around circularly—, you can use np.roll instead of Series.shift:
mask = (df.loc[:, col1] != '') & (df.loc[:, col2] == '')
df.loc[np.roll(mask, -1), col1] = np.roll(df.loc[mask, col1].values, -1)
Then, continuing the previous example:
>>> df = sample.copy()
>>> mask = (df.loc[:, col1] != '') & (df.loc[:, col2] == '')
>>> df.loc[np.roll(mask, -1), col1] = np.roll(df.loc[mask, col1].values, -1)
>>> df
0 1
0 spam
1 baz bar
2 baz
3 spam eggs
In case you do not find a more pythonic way, here is a corrected version of the loop:
for i in range(1, len(df)):
    if df.iloc[i, 2] == '' and df.iloc[i, 1] != '':
        df.iloc[i-1, 1] = df.iloc[i, 1]
        df.iloc[i, 1] = ''
I need to check if the values from the column A contain the values from column B.
I tried using the isin() method:
import pandas as pd
df = pd.DataFrame({'A': ['filePath_en_GB_LU_file', 'filePath_en_US_US_file', 'filePath_en_GB_PL_file'],
'B': ['_LU_', '_US_', '_GB_']})
df['isInCheck'] = df.A.isin(df.B)
For some reason it's not working.
It returns only False values, whereas for the first two rows it should return True.
What am I missing in there?
I think you need DataFrame.apply; note that the last row is also a match:
df['isInCheck'] = df.apply(lambda x: x.B in x.A, axis=1)
print (df)
A B isInCheck
0 filePath_en_GB_LU_file _LU_ True
1 filePath_en_US_US_file _US_ True
2 filePath_en_GB_PL_file _GB_ True
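A list comprehension over the two columns is an equivalent alternative (a sketch), and often faster, since it avoids building a Series for every row:
df['isInCheck'] = [b in a for a, b in zip(df.A, df.B)]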
Try to use an apply:
df['isInCheck'] = df.apply(lambda r: r['B'] in r['A'], axis=1)
This will check row-wise. If you want to check whether multiple elements are present, maybe you should create a column for each of them:
for e in df['B'].unique():
df[f'has_"{e}"'] = df.apply(lambda r: e in r['A'], axis=1)
print(df)
A B has_"_LU_" has_"_US_" has_"_GB_"
0 filePath_en_GB_LU_file _LU_ True False True
1 filePath_en_US_US_file _US_ False True False
2 filePath_en_GB_PL_file _GB_ False False True
I am trying to split my dataframe into two based on medical_plan_id: if it is empty, into df1; if not empty, into df2.
df1 = df_with_medicalplanid[df_with_medicalplanid['medical_plan_id'] == ""]
df2 = df_with_medicalplanid[df_with_medicalplanid['medical_plan_id'] is not ""]
The code below works, but if there are no empty fields, my code raises TypeError("invalid type comparison").
df1 = df_with_medicalplanid[df_with_medicalplanid['medical_plan_id'] == ""]
How to handle such situation?
My df_with_medicalplanid looks like below:
wellthie_issuer_identifier ... medical_plan_id
0 UHC99806 ... None
1 UHC99806 ... None
Use ==, not is, to test equality
Likewise, use != instead of is not for inequality.
is has a special meaning in Python. It returns True if two variables point to the same object, while == checks if the objects referred to by the variables are equal. See also Is there a difference between == and is in Python?.
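To see why is misbehaves here, a minimal sketch: the identity test collapses to a single bool about the Series object itself, rather than an element-wise mask.
import pandas as pd

s = pd.Series(['', 'x'])
empty = ''
print(s == empty)      # element-wise mask: True for row 0, False for row 1
print(s is not empty)  # a single True: the Series is simply not the same object as the string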
Don't repeat mask calculations
The Boolean masks you are creating are the most expensive part of your logic. It's also logic you want to avoid repeating manually as your first and second masks are inverses of each other. You can therefore use the bitwise inverse ~ ("tilde"), also accessible via operator.invert, to negate an existing mask.
Empty strings are different to null values
Equality versus empty strings can be tested via == '', but equality versus null values requires a specialized method: pd.Series.isnull. This is because Pandas stores its data in NumPy arrays, which represent null values as np.nan, and np.nan != np.nan by design.
If you want to replace empty strings with null values, you can do so:
df['medical_plan_id'] = df['medical_plan_id'].replace('', np.nan)
Conceptually, it makes sense for missing values to be null (np.nan) rather than empty strings. But the opposite of the above process, i.e. converting null values to empty strings, is also possible:
df['medical_plan_id'] = df['medical_plan_id'].fillna('')
If the difference matters, you need to know your data and apply the appropriate logic.
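If both empty strings and null values can occur and you want to treat them alike, a sketch combining the two tests:
import numpy as np
import pandas as pd

df = pd.DataFrame({'medical_plan_id': [np.nan, '', '2134']})
mask = df['medical_plan_id'].isnull() | (df['medical_plan_id'] == '')
print(df[mask])  # rows whose id is either null or an empty string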
Semi-final solution
Assuming you do indeed have null values, calculate a single Boolean mask and its inverse:
mask = df['medical_plan_id'].isnull()
df1 = df[mask]
df2 = df[~mask]
Final solution: avoid extra variables
Creating additional variables is something, as a programmer, you should look to avoid. In this case, there's no need to create two new variables; you can use GroupBy with dict to get a dictionary of dataframes whose False (== 0) and True (== 1) keys correspond to your masks:
dfs = dict(tuple(df.groupby(df['medical_plan_id'].isnull())))
Then dfs[0] represents df2 and dfs[1] represents df1 (see also this related answer). A variant of the above, you can forego dictionary construction and use Pandas GroupBy methods:
dfs = df.groupby(df['medical_plan_id'].isnull())
dfs.get_group(0) # equivalent to dfs[0] from dict solution
dfs.get_group(1) # equivalent to dfs[1] from dict solution
Example
Putting all the above in action:
df = pd.DataFrame({'medical_plan_id': [np.nan, '', 2134, 4325, 6543, '', np.nan],
'values': [1, 2, 3, 4, 5, 6, 7]})
df['medical_plan_id'] = df['medical_plan_id'].replace('', np.nan)
dfs = dict(tuple(df.groupby(df['medical_plan_id'].isnull())))
print(dfs[0], dfs[1], sep='\n'*2)
medical_plan_id values
2 2134.0 3
3 4325.0 4
4 6543.0 5
medical_plan_id values
0 NaN 1
1 NaN 2
5 NaN 6
6 NaN 7
Another variant is to unpack df.groupby, which returns an iterator of tuples (the first item being the group key and the second being the dataframe).
Like this for instance:
cond = df_with_medicalplanid['medical_plan_id'] == ''
(_, df1) , (_, df2) = df_with_medicalplanid.groupby(cond)
In Python, _ is conventionally used to mark variables you are not interested in keeping. I have split the code over two lines for readability.
Full example
import pandas as pd
df_with_medicalplanid = pd.DataFrame({
'medical_plan_id': ['214212','','12251','12421',''],
'value': 1
})
cond = df_with_medicalplanid['medical_plan_id'] == ''
(_, df1) , (_, df2) = df_with_medicalplanid.groupby(cond)
print(df1)
Returns:
medical_plan_id value
0 214212 1
2 12251 1
3 12421 1
cond = df_with_medicalplanid['medical_plan_id'] == ''
(_, df1) , (_, df2) = df_with_medicalplanid.groupby(cond)
# Anton's answer missed cond inside the right-hand bracket
print(df1)