I have two dataframes, each with the same number of columns:
print(df1.shape)
(54, 35238)
print(df2.shape)
(64, 35238)
And neither of them has a named index:
print(df1.index.name)
None
print(df2.index.name)
None
However, whenever I try to concat them vertically (to get a third dataframe with shape (118, 35238)), the result is full of NaNs:
df3 = pandas.concat([df1, df2], ignore_index=True)
print(df3)
The resulting df has the correct number of rows, but the two frames have been concatenated as new columns instead. Setting the "axis" flag to 1 gives the same (inappropriate) number of columns (e.g. a shape of (63, 70476)).
Any ideas on how to fix this?
They have the same number of columns, but are the column names different? The documentation on concat suggests to me that you need identical column names to have them stack the way you want.
If this is the problem, you could probably fix it by changing one dataframe's column names to match the other before concatenating:
df2.columns = df1.columns
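For instance (a minimal sketch with made-up column names), concat aligns on column labels, so frames whose labels differ get padded with NaNs:
import pandas as pd
df1 = pd.DataFrame([[1, 2]], columns=['a', 'b'])
df2 = pd.DataFrame([[3, 4]], columns=['c', 'd'])
print(pd.concat([df1, df2], ignore_index=True).shape)  # (2, 4), half NaNs
df2.columns = df1.columns  # only safe if the columns really correspond positionally
print(pd.concat([df1, df2], ignore_index=True).shape)  # (2, 2), no NaNs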
This might be because your df2 is a series; you can try:
pd.concat([df1, pd.DataFrame([df2])], axis=0, ignore_index=True)
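As a toy illustration of why the wrapping helps: putting a Series inside a list gives pd.DataFrame a single row whose columns are the Series' index:
s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
print(pd.DataFrame([s]).shape)  # (1, 3): one row, columns a, b, c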
I have two dataframes, df1 and df2, and I know that df2 is a subset of df1. What I am trying to do is find the set difference between df1 and df2, such that df1 has only entries that are different from those in df2. To accomplish this, I first used pandas.util.hash_pandas_object on each of the dataframes, and then found the set difference between the two hashed columns.
df1['hash'] = pd.util.hash_pandas_object(df1, index=False)
df2['hash'] = pd.util.hash_pandas_object(df2, index=False)
df1 = df1.loc[~df1['hash'].isin(df2['hash'])]
This results in df1 remaining the same size; that is, none of the hash values matched. However, when I use a lambda function, df1 is reduced by the expected amount.
df1['hash'] = df1.apply(lambda x: hash(tuple(x)), axis=1)
df2['hash'] = df2.apply(lambda x: hash(tuple(x)), axis=1)
df1 = df1.loc[~df1['hash'].isin(df2['hash'])]
The problem with the second approach is that it takes an extremely long time to execute (df1 has about 3 million rows). Am I just misunderstanding how to use pandas.util.hash_pandas_object?
The difference is that hash_pandas_object hashes the underlying, dtype-aware values (column by column, combined into one hash per row), while hash(tuple(x)) first converts each row to plain Python objects. So if df1 and df2 store the same values with different dtypes (say int64 vs float64, or numeric vs object), hash_pandas_object produces different hashes for rows that the Python-level hash considers equal.
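A quick sketch of that dtype sensitivity, on toy data:
import pandas as pd
s_int = pd.Series([1, 2, 3], dtype='int64')
s_float = pd.Series([1.0, 2.0, 3.0], dtype='float64')
# dtype-aware hashing: different hashes even though the values compare equal
print(pd.util.hash_pandas_object(s_int, index=False)
        .equals(pd.util.hash_pandas_object(s_float, index=False)))  # False
# Python-level hashing coerces numeric types: hash(1) == hash(1.0)
print(hash((1, 2, 3)) == hash((1.0, 2.0, 3.0)))  # True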
If your goal is to remove the duplicate rows, you can achieve this faster with a left merge using the indicator option, and then keep only the rows that are unique to the original dataframe.
# list_columns is the list of columns to match on, e.g. list(df1.columns)
df_merged = df1.merge(df2, how='left', on=list_columns, indicator=True)
df_merged = df_merged[df_merged['_merge'] == 'left_only']  # indicator=True adds a '_merge' column
df_merged = df_merged.drop(columns='_merge')
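A minimal runnable sketch of the indicator approach, on made-up frames:
import pandas as pd
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df2 = pd.DataFrame({'a': [2, 3], 'b': [5, 6]})  # a subset of df1
merged = df1.merge(df2, how='left', on=list(df1.columns), indicator=True)
result = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
print(result)  # only the row (1, 4), which has no match in df2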
I've got two DataFrames I would like to merge, but first I would like to check whether the one column that exists in both dfs has the exact same values in each row.
For general merging I tried several solutions; the comment on each line shows the resulting shape:
df = pd.concat([df_b, df_c], axis=1, join='inner') # (245131, 40)
df = pd.concat([df_b, df_c], axis=1).reindex(df_b.index) # (245131, 40)
df = pd.merge(df_b, df_c, on=['client_id'], how='inner') # (420707, 39)
df = pd.concat([df_b, df_c], axis=1) # (245131, 40)
The original df_c is (245131, 14) and df_b is (245131, 26).
From that I assume that the column client_id has the exact same values, since three of the approaches give a shape with 245131 rows.
I would like to compare the client_ids in a new_df. I tried it with .loc, but it did not work out. I also tried df.rename(columns={df.columns[20]: "client_id_1"}, inplace=True), but it renamed both columns.
I tried
df_test = df_c.client_id
df_test.append(df_b.client_id, ignore_index=True)
but I only receive one index and one client_id column, and the shape still says 245131 rows.
If I can be sure that the values are exactly the same, should I drop the client_id in one df and do the concat/merge after that, so that I get the correct shape of (245131, 39)?
Is there a mangle_dupe_cols option for merge or compare, like there is for read_csv?
Chris, if you wish to check whether two columns of two separate dataframes are exactly the same, you can try the following:
tuple(df1['col'].values) == tuple(df2['col'].values)
This should return a bool value
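A more robust alternative (my suggestion, not part of the original answer) is pandas' built-in check, which also treats NaNs in matching positions as equal:
print(df_b['client_id'].reset_index(drop=True)
          .equals(df_c['client_id'].reset_index(drop=True)))  # True if identical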
If you want to merge two dataframes, ensure all the rows in your column of interest have unique values, as duplicates will cause additional rows.
Otherwise, use concat if you want to join the dataframes along an axis.
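To get the (245131, 39) shape described in the question, a minimal sketch (assuming the rows of both frames are already in the same order) is to drop the duplicated key from one frame before concatenating:
df = pd.concat([df_b, df_c.drop(columns=['client_id'])], axis=1)  # assumes row-aligned frames
print(df.shape)  # (245131, 39)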
I have some 100 dataframes that need to be filled into another big dataframe. I'll present the question with two dataframes:
import pandas as pd
df1 = pd.DataFrame([1,1,1,1,1], columns=["A"])
df2 = pd.DataFrame([2,2,2,2,2], columns=["A"])
Please note that both dataframes have the same column names.
I have a master dataframe that has repetitive index values, as follows:
master_df=pd.DataFrame(index=df1.index)
master_df= pd.concat([master_df]*2)
Expected output:
master_df['A']=[1,1,1,1,1,2,2,2,2,2]
I am using a for loop to replace every n rows of master_df with df1, df2, ..., df100.
Please suggest a better way of doing it.
In fact, df1, df2, ..., df100 are outputs of a function whose input is the column A values (1, 2). I was wondering if there is something like:
another_df=master_df['A'].apply(lambda x: function(x))
Thanks in advance.
If you want to concatenate the dataframes you could just use pandas concat with a list as the code below shows.
First you can add df1 and df2 to a list:
df_list = [df1, df2]
Then you can concat the dfs:
master_df = pd.concat(df_list)
I used the default value of 0 for 'axis' in the concat function (which is what I think you are looking for), but if you want to concatenate the different dfs side by side you can just set axis=1.
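Putting that together for the 100-frame case (a sketch; build_frame is a hypothetical stand-in for the function mentioned in the question, whose real name isn't given):
df_list = [build_frame(i) for i in range(1, 101)]  # build_frame: hypothetical; produces df1, ..., df100
master_df = pd.concat(df_list, ignore_index=True)  # ignore_index avoids the repeated 0..4 index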
I have two data frames, df1 and df2. Both have the same number of rows but different columns.
I want to concat all columns of df1 with the 2nd and 3rd columns of df2.
df1 has 119 columns and df2 has 3, of which I want the 2nd and 3rd.
Code I am using is:
data_train_test = pd.concat([df1, df2.iloc[:, [2, 3]]], axis=1, ignore_index=False)
The error I am getting is:
ValueError: Shape of passed values is (121, 39880), indices imply (121, 28898)
My analysis:
39880 - 28898 = 10982
df1 is a TF-IDF data frame made by concatenating two other data frames, with 17916 + 10982 = 28898 rows.
Here is how I made df2:
frames = [data, prediction_data]
df2 = pd.concat(frames)
I am not able to find the exact reason for this problem. Can someone please help?
I think I solved it by resetting the index while creating df2.
frames = [data, prediction_data]
df2 = pd.concat(frames).reset_index(drop=True)  # drop=True keeps the old index from being added back as a column
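For context, a toy sketch (not the original data) of why the reset helps: pd.concat(..., axis=1) aligns on the index, and the duplicated index values left over from a vertical concat are exactly what makes that alignment fail:
import pandas as pd
a = pd.DataFrame({'x': [1, 2]})      # fresh 0..1 index
b = pd.concat([a, a])                # stacked: index is 0, 1, 0, 1 (duplicates)
# pd.concat([a, b], axis=1)          # raises: can't align on a duplicated index
print(pd.concat([a, b.reset_index(drop=True)], axis=1).shape)  # (4, 2), a padded with NaN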
I am not sure I understood your question correctly, but I think what you want to do is:
data_train_test = pd.concat([df1, df2[[1, 2]]], axis=1)
Note that df2[[1, 2]] selects columns by label, so it only works if df2's column labels are literally the integers 1 and 2. To select the 2nd and 3rd columns by position, regardless of their names, go through df2.columns instead:
import pandas as pd
df1 = pd.DataFrame(data={'a': [0]})
df2 = pd.DataFrame(data={'b1': [1], 'b2': [2], 'b3': [3]})
data_train_test = pd.concat([df1, df2[df2.columns[1:3]]], axis=1)
# or equivalently, with an explicit .loc
data_train_test = pd.concat([df1, df2.loc[:, df2.columns[1:3]]], axis=1)
I'm trying to join two dataframes in pandas to have the following behavior: I want to join on a specified column, but have it so redundant columns are not added to the dataframe. This is analogous to combine_first except combine_first does not seem to take an index column optional argument. Example:
# combine df1 and df2 based on "id" column
df1 = pandas.merge(df1, df2, how="outer", on=["id"])
The problem with the above is that columns common to df1/df2 aside from "id" will be added twice (with _x/_y suffixes) to df1. How can I do something like:
# Do outer join from df2 to df1, matching items by "id" but not adding
# columns that are redundant (df1 takes precedence if the values disagree)
df1.combine_first(df2, on=["id"])
How can this be done?
If you are trying to merge columns from df2 into df1 while excluding any redundant columns, the following should work.
df1.set_index("id", inplace=True)
df2.set_index("id", inplace=True)
df3 = df1.merge(df2.loc[:, df2.columns.difference(df1.columns)], left_index=True, right_index=True, how="outer")
However, this obviously will not update any values in df1 with values from df2, since it only brings in the non-redundant columns. But since you said df1 takes precedence on any values that disagree, perhaps this will do the trick?
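If you additionally want df2 to fill in gaps within the shared columns, a sketch (assuming "id" is unique in both frames, starting from the original frames before the set_index calls above): combine_first after aligning on "id" keeps df1's value wherever both frames have one.
df3 = df1.set_index("id").combine_first(df2.set_index("id")).reset_index()  # df1's values win; NaNs filled from df2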