I have two dataframes, tableA and tableB, with the same columns, the same index, and the same column order.
tableA = pd.DataFrame({'col1': [np.nan, 1, 2], 'col2': [2, 3, np.nan]})
tableB = pd.DataFrame({'col1': [2, 4, 2], 'col2': [2, 3, 5]})
tableA              tableB
   col1  col2          col1  col2
0   NaN     2       0     2     2
1     1     3       1     4     3
2     2   NaN       2     2     5
I want to replace values in tableB with 'NA' wherever the value at the same position in tableA is NaN. For now, I use a loop to do it column by column.
for n in range(tableB.shape[1]):
    tableB.iloc[:, n] = tableB.iloc[:, n].where(pd.isnull(tableA.iloc[:, n]) == False, 'NA')
tableB
  col1 col2
0   NA    2
1    4    3
2    2   NA
Is there a way to do it without a loop? I have tried replace, but it only changes the first column.
tableB.replace(pd.isnull(tableA), 'NA', inplace=True)  # only adjusts the first column
Thanks for your help!
I think you need where or numpy.where:
1.
df = tableB.where(tableA.notnull())
print (df)
col1 col2
0 NaN 2.0
1 4.0 3.0
2 2.0 NaN
2.
df = pd.DataFrame(np.where(tableA.notnull(), tableB, np.nan),
                  columns=tableB.columns,
                  index=tableB.index)
print (df)
col1 col2
0 NaN 2.0
1 4.0 3.0
2 2.0 NaN
You could use mask:
In [7]: tableB.mask(tableA.isnull())
Out[7]:
col1 col2
0 NaN 2.0
1 4.0 3.0
2 2.0 NaN
tableB[tableA.isnull()] = np.nan
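If you actually want the literal string 'NA' from the question rather than NaN, mask also accepts a replacement value as its second argument; a small sketch (note the columns become object dtype):

```python
import numpy as np
import pandas as pd

tableA = pd.DataFrame({'col1': [np.nan, 1, 2], 'col2': [2, 3, np.nan]})
tableB = pd.DataFrame({'col1': [2, 4, 2], 'col2': [2, 3, 5]})

# mask replaces values where the condition is True; passing a second
# argument uses it instead of the default NaN
out = tableB.mask(tableA.isnull(), 'NA')
print(out)
```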
Related
Assume I have a data frame such as
import pandas as pd
df = pd.DataFrame({'visitor': ['A','B','C','D','E'],
                   'col1': [1,2,3,4,5],
                   'col2': [1,2,4,7,8],
                   'col3': [4,2,3,6,1]})
visitor  col1  col2  col3
A        1     1     4
B        2     2     2
C        3     4     3
D        4     7     6
E        5     8     1
For each row/visitor: (1) first, if there are any identical values in a row, I would like to keep the first one and replace the remaining duplicates in that row with NULL, such as
visitor  col1  col2  col3
A        1     NULL  4
B        2     NULL  NULL
C        3     4     NULL
D        4     7     6
E        5     8     1
Then (2) keep only the rows/visitors that have more than one value left, such as
Final Data Frame
visitor  col1  col2  col3
A        1     NULL  4
C        3     4     NULL
D        4     7     6
E        5     8     1
Any suggestions? many thanks
We can use Series.duplicated along the columns axis to identify the duplicates, mask them using where, and then keep only the rows where the count of non-duplicated values is greater than 1:
s = df.set_index('visitor')
m = ~s.apply(pd.Series.duplicated, axis=1)
s.where(m)[m.sum(1).gt(1)]
col1 col2 col3
visitor
A 1 NaN 4.0
C 3 4.0 NaN
D 4 7.0 6.0
E 5 8.0 1.0
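The row-wise check works because pd.Series.duplicated marks every repeat after the first occurrence, so the first value of each row always survives; a minimal illustration:

```python
import pandas as pd

row = pd.Series([2, 2, 2], index=['col1', 'col2', 'col3'])
# duplicated() flags all but the first occurrence of each value
print(row.duplicated().tolist())     # [False, True, True]
# inverting the flags gives the keep-mask used above
print((~row.duplicated()).tolist())  # [True, False, False]
```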
Let us try mask with pd.Series.duplicated, then dropna with thresh:
out = df.mask(df.apply(pd.Series.duplicated, axis=1)).dropna(thresh=df.shape[1] - 1)
visitor col1 col2 col3
0 A 1 NaN 4.0
2 C 3 4.0 NaN
3 D 4 7.0 6.0
4 E 5 8.0 1.0
Suppose I have the following dataframes:
df1 = pd.DataFrame({'col1':['a','b','c','d'],'col2':[1,2,3,4]})
df2 = pd.DataFrame({'col3':['a','x','a','c','b']})
I wonder how I can look up values on df1, build a new column on df2, and fill it with the matching values from col2. Where there is no match I want to impute 0, so the result should look like the following:
col3 col4
0 a 1
1 x 0
2 a 1
3 c 3
4 b 2
Use Series.map with Series.fillna:
df2['col2'] = df2['col3'].map(df1.set_index('col1')['col2']).fillna(0).astype(int)
print (df2)
col3 col2
0 a 1
1 x 0
2 a 1
3 c 3
4 b 2
Or DataFrame.merge, which is better if you need to append multiple columns:
df = df2.merge(df1.rename(columns={'col1':'col3'}), how='left').fillna(0)
print (df)
col3 col2
0 a 1.0
1 x 0.0
2 a 1.0
3 c 3.0
4 b 2.0
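A third option, sketched here as an alternative to the answers above, is to build a plain dict and map it; unmatched keys become NaN, which fillna(0) turns into the requested 0:

```python
import pandas as pd

df1 = pd.DataFrame({'col1': ['a', 'b', 'c', 'd'], 'col2': [1, 2, 3, 4]})
df2 = pd.DataFrame({'col3': ['a', 'x', 'a', 'c', 'b']})

# build a plain lookup dict from df1 and map it over df2's keys
lookup = dict(zip(df1['col1'], df1['col2']))
df2['col4'] = df2['col3'].map(lookup).fillna(0).astype(int)
print(df2)
```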
How can I replace NA values in df1
df1:
ID col1 col2 col3 col4
A NaN NaN NaN NaN
B 0 0 1 2
C NaN NaN NaN NaN
with the corresponding values from the other dataframe, so that values already present in df1 are not overwritten?
df2:
ID col1 col2 col3 col4
A 1 2 1 11
B 2 2 4 8
C 0 0 NaN NaN
So result is
ID col1 col2 col3 col4
A 1 2 1 11
B 0 0 1 2
C 0 0 NaN NaN
IIUC, if ID is the index in both DataFrames use:
df = df1.fillna(df2)
Or:
df = df1.combine_first(df2)
print (df)
col1 col2 col3 col4
ID
A 1.0 2.0 1.0 11.0
B 0.0 0.0 1.0 2.0
C 0.0 0.0 NaN NaN
If ID is a column:
df = df1.set_index('ID').fillna(df2.set_index('ID'))
#alternative
#df = df1.set_index('ID').combine_first(df2.set_index('ID'))
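Putting it together as a runnable sketch with the sample frames (here ID starts as a column):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'ID': ['A', 'B', 'C'],
                    'col1': [np.nan, 0, np.nan],
                    'col2': [np.nan, 0, np.nan],
                    'col3': [np.nan, 1, np.nan],
                    'col4': [np.nan, 2, np.nan]})
df2 = pd.DataFrame({'ID': ['A', 'B', 'C'],
                    'col1': [1, 2, 0],
                    'col2': [2, 2, 0],
                    'col3': [1, 4, np.nan],
                    'col4': [11, 8, np.nan]})

# align both frames on ID, then fill df1's NaNs from df2;
# values already present in df1 are kept
df = df1.set_index('ID').fillna(df2.set_index('ID'))
print(df)
```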
import numpy as np
import pandas as pd

rows, columns = df1.shape
for i in range(rows):
    for j in range(columns):
        # note: df1.iloc[i, j] == np.nan is always False, so test with pd.isna
        if pd.isna(df1.iloc[i, j]):
            df1.iloc[i, j] = df2.iloc[i, j]
If every missing value in df1 has a corresponding value in df2, that should work. This solution also assumes the missing values in df1 are genuine np.nan values; if they are stored as strings or some other placeholder, they will not be detected.
I have a pandas dataframe df as shown.
col1 col2
0 NaN a
1 2 b
2 NaN c
3 NaN d
4 5 e
5 6 f
I want to find the first NaN value in col1 and assign a new value to it. I've tried both of the following methods, but neither of them works.
df.loc[df['col1'].isna(), 'col1'][0] = 1
df.loc[df['col1'].isna(), 'col1'].iloc[0] = 1
Neither shows an error or warning, but when I check the original dataframe, the value hasn't changed.
What is the correct way to do this?
You can use .fillna() with limit=1 parameter:
df['col1'] = df['col1'].fillna(1, limit=1)
print(df)
Prints:
col1 col2
0 1.0 a
1 2.0 b
2 NaN c
3 NaN d
4 5.0 e
5 6.0 f
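If you prefer label-based assignment over fillna, one alternative (assuming col1 contains at least one NaN, since idxmax on an all-False mask would return the first label) is:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [np.nan, 2, np.nan, np.nan, 5, 6],
                   'col2': list('abcdef')})

# isna() gives a boolean mask; idxmax returns the label of the first True,
# and .loc writes into the original frame (no chained indexing)
df.loc[df['col1'].isna().idxmax(), 'col1'] = 1
print(df)
```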
I have a pandas DataFrame that looks similar to the following...
>>> df = pd.DataFrame({
... 'col1':['A','C','B','A','B','C','A'],
... 'col2':[np.nan,1.,np.nan,1.,1.,np.nan,np.nan],
... 'col3':[0,1,9,4,2,3,5],
... })
>>> df
col1 col2 col3
0 A NaN 0
1 C 1.0 1
2 B NaN 9
3 A 1.0 4
4 B 1.0 2
5 C NaN 3
6 A NaN 5
What I would like to do is group the rows by the value of col1 and then update any NaN values in col2 so they increment by 1 from the highest existing value of that group.
So that my expected results would look like the following...
>>> df
col1 col2 col3
0 A 1.0 4
1 A 2.0 0
2 A 3.0 5
3 B 1.0 2
4 B 2.0 9
5 C 1.0 1
6 C 2.0 3
I believe I can use something like groupby on col1, though I'm unsure how to increment the value in col2 based on the group's highest value. I've tried the following, but instead of incrementing col2 it sets every value to 1.0 and adds an additional column...
>>> df1 = df.groupby(['col1'], as_index=False).agg({'col2': 'min'})
>>> df = pd.merge(df1, df, how='left', left_on=['col1'], right_on=['col1'])
>>> df
col1 col2_x col2_y col3
0 A 1.0 NaN 0
1 A 1.0 1.0 1
2 A 1.0 NaN 5
3 B 1.0 NaN 9
4 B 1.0 1.0 4
5 C 1.0 1.0 2
6 C 1.0 NaN 3
Use GroupBy.cumcount only for the rows with missing values, add the per-group maximum computed with GroupBy.transform('max'), and finally restore the original values with fillna:
df = pd.DataFrame({
    'col1': ['A','C','B','A','B','B','B'],
    'col2': [np.nan, 1., np.nan, 1., 3., np.nan, 0],
    'col3': [0, 1, 9, 4, 2, 3, 4],
})
print (df)
col1 col2 col3
0 A NaN 0
1 C 1.0 1
2 B NaN 9
3 A 1.0 4
4 B 3.0 2
5 B NaN 3
6 B 0.0 4
df = df.sort_values(['col1','col2'], na_position='last')
s = df.groupby('col1')['col2'].transform('max')
df['new'] = (df[df['col2'].isna()]
             .groupby('col1')
             .cumcount()
             .add(1)
             .add(s)
             .fillna(df['col2'])
             .astype(int))
print (df)
col1 col2 col3 new
3 A 1.0 4 1
0 A NaN 0 2
6 B 0.0 4 0
4 B 3.0 2 3
2 B NaN 9 4
5 B NaN 3 5
1 C 1.0 1 1
Another way, with a groupby transform: fill each NaN with the group's maximum plus a running count of the NaNs seen so far, so successive NaNs keep incrementing:
df['col2_new'] = df.groupby('col1')['col2'].transform(
    lambda x: x.fillna(x.max() + x.isna().cumsum()))
df = df.sort_values('col1')