Pandas concat with different columns - python

I have two dfs that I want to concat
(sorry I don't know how to properly recreate a df here)
df1:
A B
a1 b1
a2 b2
a3 b3
df2:
A C
a1 c1
a4 c4
Result:
A B C
a1 b1 c1
a2 b2 NaN
a3 b3 NaN
a4 NaN c4
I have tried:
merge = pd.concat([df1, df2], axis=0, ignore_index=True)
but this seems to just append the second df to the first df
Thank you!

I believe you need an outer join:
>>> pd.merge(df1, df2, how='outer')
A B C
0 a1 b1 c1
1 a2 b2 NaN
2 a3 b3 NaN
3 a4 NaN c4
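For completeness, a minimal self-contained sketch (the frame contents are reconstructed from the question, so the variable names and values are assumptions):
import pandas as pd

# Reconstruct the two example frames from the question
df1 = pd.DataFrame({"A": ["a1", "a2", "a3"], "B": ["b1", "b2", "b3"]})
df2 = pd.DataFrame({"A": ["a1", "a4"], "C": ["c1", "c4"]})

# An outer merge keeps every key from both frames and fills the gaps with NaN
result = pd.merge(df1, df2, on="A", how="outer")
print(result)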

Related

Adding and subtracting dataframe rows conditionally

Let's say I have the dataframe below:
A B C D
0 A0 B0 C0 D0
1 A1 B1 C1 D1
2 A2 B2 C2 D2
3 A3 B3 C3 D3
4 A4 B4 C4 D4
I am trying to write something that would essentially say: if column A contains A1, A2, or A4, add a column E populated with 'xx' in the rows where any of those three values appear.
Then create a df2 which contains only the flagged rows, and a df3 with the flagged rows and column E removed. Resulting in df2:
A B C D E
0 A1 B1 C1 D1 xx
1 A2 B2 C2 D2 xx
2 A4 B4 C4 D4 xx
and df3:
A B C D
0 A0 B0 C0 D0
1 A3 B3 C3 D3
Python/pandas beginner here, so any and all help is much appreciated!
You can use boolean indexing:
mask = df["A"].isin(["A1", "A2", "A4"])
df_a = df[mask].copy()
df_a["E"] = "xx"
df_b = df[~mask]  # consider adding .copy() here too if you plan to modify df_b
print(df_a)
print(df_b)
Prints:
A B C D E
1 A1 B1 C1 D1 xx
2 A2 B2 C2 D2 xx
4 A4 B4 C4 D4 xx
A B C D
0 A0 B0 C0 D0
3 A3 B3 C3 D3
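If you want the 0-based index shown in the question's expected output, you can reset it afterwards:
df_a = df_a.reset_index(drop=True)
df_b = df_b.reset_index(drop=True)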

Pandas combine two dataframes to update values of a particular column in 1st dataframe

I have two dataframes:
df1 :
A B C
0 a0 b0 c0
1 a1 b1 c1
2 a2 b2 c2
3 a3 b3 c3
4 a4 b4 c4
df2 :
A B C
0 a0 b0 c11
1 a1 b1 c5
2 a70 b2 c20
3 a3 b9 c9
In df1, for every row where the values in columns A and B match a row in df2, column C should be updated with the value from df2.
Output:
A B C
0 a0 b0 c11
1 a1 b1 c5
2 a2 b2 c2
3 a3 b3 c3
4 a4 b4 c4
I tried the following, but it did not work.
df1.set_index(['A', 'B'])
df2.set_index(['A', 'B'])
df1.update(df2)
df1.reset_index()
df2.reset_index()
df1["C"][:4] = np.where((df1["A"][:4]==df2["A"])&(df1["B"][:4]==df2["B"]),df2["C"],df1["C"][:4])
A B C
0 a0 b0 c11
1 a1 b1 c5
2 a2 b2 c2
3 a3 b3 c3
4 a4 b4 c4
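For reference, the set_index/update idea in the attempt above does work once the results are assigned back (set_index is not in-place by default). A sketch assuming the column names from the question:
df1 = df1.set_index(['A', 'B'])
df2 = df2.set_index(['A', 'B'])
df1.update(df2)        # aligns on the (A, B) index and overwrites matching values in C
df1 = df1.reset_index()
print(df1)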

Concatenating Pandas DataFrames with overlapping indexes but never overlapping values

I have two DataFrames of arbitrary shape of the type:
A B C
0 A0 B0 C0
1 A1 B1 C1
2 A2 B2 NaN
3 A3 NaN NaN
4 A4 NaN NaN
and
A B C
2 NaN NaN C2
3 NaN B3 C3
4 NaN B4 C4
5 A5 B5 C5
6 A6 B6 C6
The two DataFrames have overlapping indexes. Where there is an overlap, for a given column, there is a non-NaN in one DataFrame, and a NaN in the other. How can I concatenate these such that I can achieve a DataFrame with all values and no NaNs:
A B C
0 A0 B0 C0
1 A1 B1 C1
2 A2 B2 C2
3 A3 B3 C3
4 A4 B4 C4
5 A5 B5 C5
6 A6 B6 C6
My proposed solution is:
df3 = pd.concat([pd.concat([df1[col].dropna(), df2[col].dropna()]) for col in df1.columns], axis=1)
However, ideally I would not work column-by-column.
Use combine_first:
df = df1.combine_first(df2)
print(df)
A B C
0 A0 B0 C0
1 A1 B1 C1
2 A2 B2 C2
3 A3 B3 C3
4 A4 B4 C4
5 A5 B5 C5
6 A6 B6 C6
Using df.fillna() and df.append() with dropna():
df1.fillna(df2).append(df2).dropna()
A B C
0 A0 B0 C0
1 A1 B1 C1
2 A2 B2 C2
3 A3 B3 C3
4 A4 B4 C4
5 A5 B5 C5
6 A6 B6 C6
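Note that DataFrame.append was deprecated and later removed in pandas 2.0; in newer versions the same idea can be written with pd.concat (a sketch, using df1 and df2 from the question):
# stack df2 under the filled df1, then drop the partial duplicate rows
df = pd.concat([df1.fillna(df2), df2]).dropna()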

Pandas append dataframe without creating new columns

I have two dataframes that look like this:
df1=
A B
1 A1 B1
2 A2 B2
3 A3 B3
df2 =
A C
4 A4 C4
5 A5 C5
I would like to append df2 to df1, like so:
A B
1 A1 B1
2 A2 B2
3 A3 B3
4 A4 NaN
5 A5 NaN
(Note: I've edited the dataframes so that not all the columns in df1 are necessarily in df2)
Whether I use concat or append, the resulting dataframe I get has a column called "C" with the first three rows filled with NaN. I just want to keep the two original columns of df1, with the new values appended. Is there a way to concatenate the dataframes without having to drop the extra column afterwards?
You can first filter df2 to the subset of columns you want before appending:
print (df2[['A']])
A
4 A4
5 A5
print (pd.concat([df1, df2[['A']]]))
A B
1 A1 B1
2 A2 B2
3 A3 B3
4 A4 NaN
5 A5 NaN
print (df1.append(df2[['A']]))
A B
1 A1 B1
2 A2 B2
3 A3 B3
4 A4 NaN
5 A5 NaN
print (df2[['A','B']])
A B
4 A4 B4
5 A5 B5
print (pd.concat([df1, df2[['A','B']]]))
A B
1 A1 B1
2 A2 B2
3 A3 B3
4 A4 B4
5 A5 B5
Or:
print (df1.append(df2[['A','B']]))
A B
1 A1 B1
2 A2 B2
3 A3 B3
4 A4 B4
5 A5 B5
EDIT by comment:
If df1 and df2 have different columns, use intersection:
print (df1)
A B D
1 A1 B1 R
2 A2 B2 T
3 A3 B3 E
print (df2)
A B C
4 A4 B4 C4
5 A5 B5 C5
print (df1.columns.intersection(df2.columns))
Index(['A', 'B'], dtype='object')
print (pd.concat([df1, df2[df1.columns.intersection(df2.columns)]]))
A B D
1 A1 B1 R
2 A2 B2 T
3 A3 B3 E
4 A4 B4 NaN
5 A5 B5 NaN
Actually the solution is in an obscure corner of this page. Here's the code to use:
pd.concat([df1,df2],join_axes=[df1.columns])
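Note that join_axes was removed from pd.concat in later pandas releases. A rough equivalent is to reindex the concatenated result to df1's columns (a sketch, using the frames from the question):
result = pd.concat([df1, df2]).reindex(columns=df1.columns)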

Copying values into Nan fields

I have a large dataframe in which some of the cells are NaN. If a cell is NaN, I want to copy the value from the cell immediately above it. If the first row has a NaN, then I'll have to let it be.
Can someone please help me with this?
This looks like pandas; if so, you need to call ffill:
In [72]:
df = pd.DataFrame({'A':['A0','A1','A2',np.NaN,np.NaN, 'A3'], 'B':['B0','B1','B2',np.NaN,np.NaN, 'B3'], 'C':['C0','C1','C2',np.NaN,np.NaN, 'C3']})
df
Out[72]:
A B C
0 A0 B0 C0
1 A1 B1 C1
2 A2 B2 C2
3 NaN NaN NaN
4 NaN NaN NaN
5 A3 B3 C3
In [73]:
df.ffill()
Out[73]:
A B C
0 A0 B0 C0
1 A1 B1 C1
2 A2 B2 C2
3 A2 B2 C2
4 A2 B2 C2
5 A3 B3 C3
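As the question anticipates, a NaN in the very first row stays NaN, because there is nothing above it to copy from:
s = pd.Series([np.nan, 'x', np.nan])
s.ffill()   # -> [NaN, 'x', 'x']: the leading NaN stays, the trailing NaN copies 'x'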
