How to assign to a slice of a slice in pandas - python

I have a pandas dataframe df as shown.
  col1 col2
0  NaN    a
1  2.0    b
2  NaN    c
3  NaN    d
4  5.0    e
5  6.0    f
I want to find the first NaN value in col1 and assign a new value to it. I've tried both of the following, but neither of them works:
df.loc[df['col1'].isna(), 'col1'][0] = 1
df.loc[df['col1'].isna(), 'col1'].iloc[0] = 1
Neither of them raises an error or warning, but when I check the original dataframe, nothing has changed.
What is the correct way to do this?

You can use .fillna() with the limit=1 parameter (plain assignment rather than inplace=True, since under pandas' copy-on-write an inplace fillna on a single column no longer updates df):
df['col1'] = df['col1'].fillna(1, limit=1)
print(df)
Prints:
  col1 col2
0  1.0    a
1  2.0    b
2  NaN    c
3  NaN    d
4  5.0    e
5  6.0    f
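For reference, the two attempts in the question fail silently because chained indexing (`df.loc[...][0] = 1`) assigns into a temporary copy, not the original frame. A sketch of an alternative that avoids the copy by locating the first NaN's label and writing through a single .loc call:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [np.nan, 2, np.nan, np.nan, 5, 6],
                   'col2': list('abcdef')})

# idxmax on the boolean mask returns the label of the first True,
# i.e. the first NaN; a single .loc call writes into df itself.
first_nan = df['col1'].isna().idxmax()
df.loc[first_nan, 'col1'] = 1
```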

Related

Merge/concat two dataframes by cols

I have two data frames:
import pandas as pd
from numpy import nan

df1 = pd.DataFrame({'key': [1, 2, 3, 4],
                    'only_at_df1': ['a', 'b', 'c', 'd'],
                    'col2': ['e', 'f', 'g', 'h']})
df2 = pd.DataFrame({'key': [1, 9],
                    'only_at_df2': [nan, 'x'],
                    'col2': ['e', 'z']})
How can I get this:
df3 = pd.DataFrame({'key': [1, 2, 3, 4, 9],
                    'only_at_df1': ['a', 'b', 'c', 'd', nan],
                    'only_at_df2': [nan, nan, nan, nan, 'x'],
                    'col2': ['e', 'f', 'g', 'h', 'z']})
Any help appreciated.
The best approach is probably combine_first after temporarily setting "key" as the index:
df1.set_index('key').combine_first(df2.set_index('key')).reset_index()
Output:
key col2 only_at_df1 only_at_df2
0 1 e a NaN
1 2 f b NaN
2 3 g c NaN
3 4 h d NaN
4 9 z NaN x
This seems like a straightforward use of merge with how="outer":
df1.merge(df2, how="outer")
Output:
key only_at_df1 col2 only_at_df2
0 1 a e NaN
1 2 b f NaN
2 3 c g NaN
3 4 d h NaN
4 9 NaN z x
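One subtlety worth noting about the second answer: with no on= argument, merge joins on every column the two frames share (here both key and col2), which is what makes the plain outer merge line up with the combine_first result. A runnable sketch:

```python
import pandas as pd
from numpy import nan

df1 = pd.DataFrame({'key': [1, 2, 3, 4],
                    'only_at_df1': ['a', 'b', 'c', 'd'],
                    'col2': ['e', 'f', 'g', 'h']})
df2 = pd.DataFrame({'key': [1, 9],
                    'only_at_df2': [nan, 'x'],
                    'col2': ['e', 'z']})

# With no `on=`, merge uses the intersection of the columns: key and col2.
df3 = df1.merge(df2, how='outer')
```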

How to replace column values based on other columns in pandas?

Assume I have a data frame such as
import pandas as pd

df = pd.DataFrame({'visitor': ['A', 'B', 'C', 'D', 'E'],
                   'col1': [1, 2, 3, 4, 5],
                   'col2': [1, 2, 4, 7, 8],
                   'col3': [4, 2, 3, 6, 1]})
visitor  col1  col2  col3
A        1     1     4
B        2     2     2
C        3     4     3
D        4     7     6
E        5     8     1
For each row/visitor: (1) first, if there are any identical values in a row, I would like to keep the first and replace the remaining duplicates with NULL, like so:
visitor  col1  col2  col3
A        1     NULL  4
B        2     NULL  NULL
C        3     4     NULL
D        4     7     6
E        5     8     1
Then (2) keep only the rows/visitors that still have more than one value, like so:
Final Data Frame
visitor  col1  col2  col3
A        1     NULL  4
C        3     4     NULL
D        4     7     6
E        5     8     1
Any suggestions? Many thanks!
We can use Series.duplicated along the columns axis to identify the duplicates, then mask them with where, and finally keep only the rows where the count of non-duplicated values is greater than 1:
s = df.set_index('visitor')
m = ~s.apply(pd.Series.duplicated, axis=1)
s.where(m)[m.sum(1).gt(1)]
col1 col2 col3
visitor
A 1 NaN 4.0
C 3 4.0 NaN
D 4 7.0 6.0
E 5 8.0 1.0
Let us try mask with pd.Series.duplicated, then dropna with a thresh:
out = df.mask(df.apply(pd.Series.duplicated, axis=1)).dropna(thresh=df.shape[1] - 1)
Out[321]:
visitor col1 col2 col3
0 A 1 NaN 4.0
2 C 3 4.0 NaN
3 D 4 7.0 6.0
4 E 5 8.0 1.0
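Putting the first answer together as a self-contained sketch (indexing by visitor first keeps the string column out of the row-wise duplicate check):

```python
import pandas as pd

df = pd.DataFrame({'visitor': ['A', 'B', 'C', 'D', 'E'],
                   'col1': [1, 2, 3, 4, 5],
                   'col2': [1, 2, 4, 7, 8],
                   'col3': [4, 2, 3, 6, 1]})

# Flag repeats within each row (the first occurrence stays False),
# blank them out, then keep rows that still have at least two values.
s = df.set_index('visitor')
dups = s.apply(pd.Series.duplicated, axis=1)
out = s.mask(dups).dropna(thresh=2).reset_index()
```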

How to impute entire missing values in pandas dataframe with mode/mean?

I know the code for filling columns separately, taking each column one at a time, as below:
data['Native Country'].fillna(data['Native Country'].mode(), inplace=True)
But I am working on a dataset with 50 rows, and there are 20 categorical columns that need to be imputed.
Is there a single line of code for imputing the entire data set?
Use DataFrame.fillna with DataFrame.mode, selecting the first row, because mode returns every value that ties for the highest number of occurrences:
import numpy as np
import pandas as pd

data = pd.DataFrame({'A': list('abcdef'),
                     'col1': [4, 5, 4, 5, 5, 4],
                     'col2': [np.nan, 8, 3, 3, 2, 3],
                     'col3': [3, 3, 5, 5, np.nan, np.nan],
                     'E': [5, 3, 6, 9, 2, 4],
                     'F': list('aaabbb')})
cols = ['col1', 'col2', 'col3']
print (data[cols].mode())
col1 col2 col3
0 4 3.0 3.0
1 5 NaN 5.0
data[cols] = data[cols].fillna(data[cols].mode().iloc[0])
print (data)
A col1 col2 col3 E F
0 a 4 3.0 3.0 5 a
1 b 5 8.0 3.0 3 a
2 c 4 3.0 5.0 6 a
3 d 5 3.0 5.0 9 b
4 e 5 2.0 3.0 2 b
5 f 4 3.0 3.0 4 b
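The title also asks about the mean; a sketch (with made-up column names) that imputes numeric columns with the mean and everything else with the mode:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'cat': ['x', np.nan, 'x', 'y'],
                   'num': [1.0, np.nan, 3.0, 5.0]})

# Mean for numeric columns, mode for the rest;
# mode().iloc[0] picks a single value even when several tie.
num_cols = df.select_dtypes('number').columns
df[num_cols] = df[num_cols].fillna(df[num_cols].mean())
other_cols = df.columns.difference(num_cols)
df[other_cols] = df[other_cols].fillna(df[other_cols].mode().iloc[0])
```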

How to groupby and update values in pandas?

I have a pandas DataFrame that looks similar to the following...
>>> df = pd.DataFrame({
... 'col1':['A','C','B','A','B','C','A'],
... 'col2':[np.nan,1.,np.nan,1.,1.,np.nan,np.nan],
... 'col3':[0,1,9,4,2,3,5],
... })
>>> df
col1 col2 col3
0 A NaN 0
1 C 1.0 1
2 B NaN 9
3 A 1.0 4
4 B 1.0 2
5 C NaN 3
6 A NaN 5
What I would like to do is group the rows by the value of col1, then fill each NaN in col2 with a value that increments by 1 starting from that group's highest existing value.
So that my expected results would look like the following...
>>> df
col1 col2 col3
0 A 1.0 4
1 A 2.0 0
2 A 3.0 5
3 B 1.0 2
4 B 2.0 9
5 C 1.0 1
6 C 2.0 3
I believe I can use something like groupby on col1, though I'm unsure how to increment the value in col2 based on the group's last highest value. I've tried the following, but instead of incrementing col2 it sets every value to 1.0 and adds an extra column...
>>> df1 = df.groupby(['col1'], as_index=False).agg({'col2': 'min'})
>>> df = pd.merge(df1, df, how='left', left_on=['col1'], right_on=['col1'])
>>> df
col1 col2_x col2_y col3
0 A 1.0 NaN 0
1 A 1.0 1.0 1
2 A 1.0 NaN 5
3 B 1.0 NaN 9
4 B 1.0 1.0 4
5 C 1.0 1.0 2
6 C 1.0 NaN 3
Use GroupBy.cumcount only for the rows with missing values, add the per-group maximum obtained with GroupBy.transform('max'), and finally restore the original non-missing values with fillna:
df = pd.DataFrame({'col1': ['A', 'C', 'B', 'A', 'B', 'B', 'B'],
                   'col2': [np.nan, 1., np.nan, 1., 3., np.nan, 0],
                   'col3': [0, 1, 9, 4, 2, 3, 4]})
print (df)
col1 col2 col3
0 A NaN 0
1 C 1.0 1
2 B NaN 9
3 A 1.0 4
4 B 3.0 2
5 B NaN 3
6 B 0.0 4
df = df.sort_values(['col1', 'col2'], na_position='last')
s = df.groupby('col1')['col2'].transform('max')
df['new'] = (df[df['col2'].isna()]
               .groupby('col1')
               .cumcount()
               .add(1)
               .add(s)
               .fillna(df['col2'])
               .astype(int))
print (df)
col1 col2 col3 new
3 A 1.0 4 1
0 A NaN 0 2
6 B 0.0 4 0
4 B 3.0 2 3
2 B NaN 9 4
5 B NaN 3 5
1 C 1.0 1 1
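Applied to the asker's original frame, the same recipe reproduces the expected result (a sketch; the reset_index is only cosmetic):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'C', 'B', 'A', 'B', 'C', 'A'],
                   'col2': [np.nan, 1., np.nan, 1., 1., np.nan, np.nan],
                   'col3': [0, 1, 9, 4, 2, 3, 5]})

# Sort so NaNs land at the end of each group, then number them
# starting one past the group's current maximum.
df = df.sort_values(['col1', 'col2'], na_position='last')
s = df.groupby('col1')['col2'].transform('max')
cum = df[df['col2'].isna()].groupby('col1').cumcount().add(1)
df['col2'] = cum.add(s).fillna(df['col2'])
df = df.reset_index(drop=True)
```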
Another way, using transform so each group's NaNs are numbered from that group's maximum (the filled values align with the original index):
df['col2_new'] = df.groupby('col1')['col2'].transform(
    lambda x: x.fillna(x.isna().cumsum() + x.max()))
df = df.sort_values('col1')

replace value based on other dataframe

There are two dataframes with the same columns and index, and the columns are in the same order. I call them tableA and tableB.
tableA = pd.DataFrame({'col1':[np.NaN,1,2],'col2':[2,3,np.NaN]})
tableB = pd.DataFrame({'col1':[2,4,2],'col2':[2,3,5]})
   tableA             tableB
   col1  col2         col1  col2
0   NaN     2      0     2     2
1     1     3      1     4     3
2     2   NaN      2     2     5
I want to replace values in tableB with 'NA' wherever the value at the same position in tableA is NaN.
For now, I use a loop to do it column by column:
for n in range(tableB.shape[1]):
    tableB.iloc[:, n] = tableB.iloc[:, n].where(pd.isnull(tableA.iloc[:, n]) == False, 'NA')
tableB
  col1 col2
0   NA    2
1    4    3
2    2   NA
Is there another way to do it without a loop? I have tried replace, but it only changes the first column:
tableB.replace(pd.isnull(tableA), 'NA', inplace=True) #only adjust the first column.
Thanks for your help!
I think you need where or numpy.where:
1.
df = tableB.where(tableA.notnull())
print (df)
col1 col2
0 NaN 2.0
1 4.0 3.0
2 2.0 NaN
2.
df = pd.DataFrame(np.where(tableA.notnull(), tableB, np.nan),
columns=tableB.columns,
index=tableB.index)
print (df)
col1 col2
0 NaN 2.0
1 4.0 3.0
2 2.0 NaN
You could use mask:
In [7]: tableB.mask(tableA.isnull())
Out[7]:
col1 col2
0 NaN 2.0
1 4.0 3.0
2 2.0 NaN
Or simply assign through a boolean mask:
tableB[tableA.isnull()] = np.nan
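All three answers produce the same frame; a quick self-contained sketch to confirm:

```python
import numpy as np
import pandas as pd

tableA = pd.DataFrame({'col1': [np.nan, 1, 2], 'col2': [2, 3, np.nan]})
tableB = pd.DataFrame({'col1': [2, 4, 2], 'col2': [2, 3, 5]})

# Each approach blanks out tableB wherever tableA is NaN.
via_where = tableB.where(tableA.notnull())
via_mask = tableB.mask(tableA.isnull())

direct = tableB.copy()
direct[tableA.isnull()] = np.nan
```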
