Pandas: Drop Rows, Columns If More Than Half Are NaN

I have a Pandas DataFrame called df with 1,460 rows and 81 columns. I want to remove all columns where at least half the entries are NaN and to do something similar for rows.
From the Pandas docs, I attempted this:
train_df.shape  # (1460, 81)
train_df.dropna(thresh=len(train_df)/2, axis=1, inplace=True)
train_df.shape  # (1460, 77)
Is this the correct way of doing it? It removed 4 columns, which surprised me: I'd have thought len(train_df) gives the number of rows, so haven't I passed the wrong value to thresh?
How would I do the same thing for rows (removing rows where at least half the columns are NaN)?
Thanks!

You did essentially the right thing. Note that len(train_df) and len(train_df.index) are equivalent (both return the number of rows), so your call was already passing the row count to thresh; for axis=1 each column has len(train_df) entries, so the 4 dropped columns are exactly those with fewer than 730 non-NaN values. Writing it with .index just makes the intent explicit:
train_df.dropna(thresh=len(train_df.index)/2, axis=1, inplace=True)
Hope that helps.
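For the row half of the question, a minimal sketch along the same lines (assuming the same train_df; thresh is the minimum number of non-NaN values required to KEEP a row, so requiring strictly more than half of the 81 columns means at least 41):
min_non_nan = train_df.shape[1] // 2 + 1  # strictly more than half the columns
train_df.dropna(thresh=min_non_nan, axis=0, inplace=True)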

Using count and loc. count(axis=...) counts only the non-NaN entries along that axis.
In [4135]: df.loc[df.count(1) > df.shape[1]/2, df.count(0) > df.shape[0]/2]
Out[4135]:
0
0 0.382991
1 0.428040
7 0.441113
Details
In [4136]: df
Out[4136]:
0 1 2 3
0 0.382991 0.658090 0.881214 0.572673
1 0.428040 0.258378 0.865269 0.173278
2 0.579953 NaN NaN NaN
3 0.117927 NaN NaN NaN
4 0.597632 NaN NaN NaN
5 0.547839 NaN NaN NaN
6 0.998631 NaN NaN NaN
7 0.441113 0.527205 0.779821 0.251350
In [4137]: df.count(1) > df.shape[1]/2
Out[4137]:
0 True
1 True
2 False
3 False
4 False
5 False
6 False
7 True
dtype: bool
In [4138]: df.count(0) < df.shape[0]/2  # flipped to <, so True marks the columns that get dropped
Out[4138]:
0 False
1 True
2 True
3 True
dtype: bool
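For reference, a sketch that reconstructs the frame used in this session (values copied from Out[4136]; the original was presumably generated randomly):
import numpy as np
import pandas as pd

df = pd.DataFrame([
    [0.382991, 0.658090, 0.881214, 0.572673],
    [0.428040, 0.258378, 0.865269, 0.173278],
    [0.579953, np.nan, np.nan, np.nan],
    [0.117927, np.nan, np.nan, np.nan],
    [0.597632, np.nan, np.nan, np.nan],
    [0.547839, np.nan, np.nan, np.nan],
    [0.998631, np.nan, np.nan, np.nan],
    [0.441113, 0.527205, 0.779821, 0.251350],
])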

Setup
import numpy as np
import pandas as pd

np.random.seed([3,14159])
df = pd.DataFrame(np.random.choice([1, np.nan], size=(10, 10)))
df
0 1 2 3 4 5 6 7 8 9
0 1.0 1.0 NaN NaN NaN 1.0 1.0 NaN 1.0 NaN
1 NaN 1.0 1.0 1.0 1.0 1.0 1.0 1.0 NaN 1.0
2 NaN 1.0 1.0 NaN NaN NaN NaN 1.0 1.0 1.0
3 1.0 NaN NaN NaN NaN NaN NaN NaN 1.0 NaN
4 1.0 1.0 1.0 1.0 1.0 1.0 NaN NaN 1.0 NaN
5 1.0 NaN NaN 1.0 NaN NaN 1.0 NaN NaN 1.0
6 NaN NaN 1.0 NaN NaN 1.0 1.0 NaN NaN 1.0
7 NaN NaN NaN 1.0 NaN 1.0 NaN 1.0 NaN NaN
8 1.0 1.0 1.0 NaN 1.0 NaN 1.0 NaN NaN 1.0
9 NaN NaN NaN 1.0 1.0 1.0 1.0 1.0 1.0 1.0
Solution 1
This assumes you make the calculation for rows and columns before you drop either rows or columns.
n = df.notnull()
df.loc[n.mean(1) > .5, n.mean() > .5]
5 6 9
1 1.0 1.0 1.0
4 1.0 NaN NaN
8 NaN 1.0 1.0
9 1.0 1.0 1.0
Solution 2
Similar concept but using numpy tools.
v = np.isnan(df.values)
r = np.count_nonzero(v, 1) < v.shape[1] // 2
c = np.count_nonzero(v, 0) < v.shape[0] // 2
df.loc[r, c]
5 6 9
1 1.0 1.0 1.0
4 1.0 NaN NaN
8 NaN 1.0 1.0
9 1.0 1.0 1.0
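By contrast, a sketch that chains dropna makes the two passes sequentially rather than simultaneously, so the row pass sees the already-reduced frame and can give a different result (a deliberate contrast with the .loc approach above, not an equivalent):
cols_kept = df.dropna(axis=1, thresh=df.shape[0] // 2 + 1)          # columns with more than half non-NaN
out = cols_kept.dropna(axis=0, thresh=cols_kept.shape[1] // 2 + 1)  # then rows, judged against the reduced width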

Try this code; it should do it:
df.dropna(thresh=df.shape[1]/3, axis=0, inplace=True)
This keeps rows having at least a third of the columns non-NaN; for the "at least half are NaN" rule from the question, use thresh=df.shape[1]/2.

Related

Merging dataframes in pandas while ignoring NaN values

Suppose I have two dataframes:
df_a
A B C
0 1.0 NaN NaN
1 NaN 1.0 NaN
2 NaN NaN 1.0
df_b
A B C
0 NaN NaN 2.0
1 NaN 2.0 NaN
2 2.0 NaN NaN
How would I go about merging/concatenating them so the result dataframe looks like this:
df_c
A B C
0 1.0 NaN 2.0
1 NaN 2.0 NaN
2 2.0 NaN 1.0
The closest I got conceptually was pd.merge(df_a, df_b, "right"), but all the values from df_a ended up replaced.
Is there any way to ignore NaN values when merging?
In your case, use combine_first:
df_c = df_b.combine_first(df_a)
df_c
Out[151]:
A B C
0 1.0 NaN 2.0
1 NaN 2.0 NaN
2 2.0 NaN 1.0
Note that the order matters, since the calling DataFrame's non-NaN values take priority:
df_c = df_a.combine_first(df_b)
df_c
A B C
0 1.0 NaN 2.0
1 NaN 1.0 NaN
2 2.0 NaN 1.0
df_d = df_b.combine_first(df_a)
df_d
A B C
0 1.0 NaN 2.0
1 NaN 2.0 NaN
2 2.0 NaN 1.0
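A related sketch: when both frames share the same index and columns, fillna with a DataFrame argument fills df_a's NaNs from df_b and gives the same result as df_a.combine_first(df_b) (combine_first additionally unions the indexes, which fillna does not):
df_c = df_a.fillna(df_b)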

Best way to reassemble a pandas data frame

I need to reassemble a data frame that is the result of a groupby operation. It is assumed to be ordered.
Major Minor RelType SomeNulls
0 0.0 0.0 1 1.0
1 NaN NaN 2 NaN
2 1.0 1.0 1 NaN
3 NaN NaN 2 NaN
4 NaN NaN 3 NaN
5 2.0 3.0 1 NaN
6 NaN NaN 2 2.0
And looking for something like this
Major Minor RelType SomeNulls
0 0.0 0.0 1 1.0
1 0.0 0.0 2 NaN
2 1.0 1.0 1 NaN
3 1.0 1.0 2 NaN
4 1.0 1.0 3 NaN
5 2.0 3.0 1 NaN
6 2.0 3.0 2 2.0
Wondering if there is an elegant way to resolve it.
import pandas as pd
import numpy as np
def refill_frame(df, cols):
    while df[cols].isnull().values.any():
        for col in cols:
            if col in list(df):
                # print(col)
                df[col] = np.where(df[col].isnull(), df[col].shift(1), df[col])
    return df
df = pd.DataFrame({'Major': [0, None, 1, None, None, 2, None],
                   'Minor': [0, None, 1, None, None, 3, None],
                   'RelType': [1, 2, 1, 2, 3, 1, 2],
                   'SomeNulls': [1, None, None, None, None, None, 2]})
print(df)
cols2fill = ['Major', 'Minor']
df = refill_frame(df, cols2fill)
print(df)
If I understand the question correctly, you could do a transform on the specific columns:
df.loc[:, ['Major', 'Minor']] = df.loc[:, ['Major', 'Minor']].transform('ffill')
Major Minor RelType SomeNulls
0 0.0 0.0 1 1.0
1 0.0 0.0 2 NaN
2 1.0 1.0 1 NaN
3 1.0 1.0 2 NaN
4 1.0 1.0 3 NaN
5 2.0 3.0 1 NaN
6 2.0 3.0 2 2.0
You could also use the fill_direction function from pyjanitor:
# pip install pyjanitor
import janitor
df.fill_direction({"Major":"down", "Minor":"down"})
Major Minor RelType SomeNulls
0 0.0 0.0 1 1.0
1 0.0 0.0 2 NaN
2 1.0 1.0 1 NaN
3 1.0 1.0 2 NaN
4 1.0 1.0 3 NaN
5 2.0 3.0 1 NaN
6 2.0 3.0 2 2.0
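For reference, a minimal sketch with plain pandas forward-fill gives the same frame here, without the extra dependency:
df[['Major', 'Minor']] = df[['Major', 'Minor']].ffill()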

Pandas interpolate NaNs from zero to next valid value

I am looking for a way to linear interpolate missing values (NaN) from zero to the next valid value.
E.g.:
A B C D E
0 NaN 2.0 NaN NaN 0
1 3.0 4.0 NaN NaN 1
2 NaN NaN NaN NaN 5
3 NaN 3.0 NaN NaN 4
Given this table, I want the output to look like this:
A B C D E
0 NaN 2.0 0 0 0
1 3.0 4.0 0 0.5 1
2 NaN NaN NaN NaN 5
3 NaN 3.0 0 2 4
I've tried using fillna with a value of 0 and a limit to set only the first NaN after a valid value to 0, and then linearly interpolate the whole dataframe.
The problem I'm facing is that when fillna is given a value rather than a method, limit caps the total number of entries filled along the axis; it doesn't act per run of consecutive NaNs.
If possible please only suggest solutions without iterating over each row manually since I'm working with large dataframes.
Thanks in advance.
Here's a method that replaces the first NaN after a valid number with 0 and then interpolates row-wise. I added extra rows at the end to illustrate the behavior for multiple fillings in the same row, fillings of only one value, and rows that end in NaN streaks.
Sample Data
A B C D E
0 NaN 2.0 NaN NaN 0
1 3.0 4.0 NaN NaN 1
2 NaN NaN NaN NaN 5
3 NaN 3.0 NaN NaN 4
4 3 NaN 7 NaN 5
5 NaN 4 7 NaN 6
6 NaN 4 7 NaN NaN
7 5 NaN 5 NaN NaN
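A sketch reconstructing this sample frame (values copied from the table above):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': [np.nan, 3, np.nan, np.nan, 3, np.nan, np.nan, 5],
    'B': [2, 4, np.nan, 3, np.nan, 4, 4, np.nan],
    'C': [np.nan, np.nan, np.nan, np.nan, 7, 7, 7, 5],
    'D': [np.nan] * 8,
    'E': [0, 1, 5, 4, 5, 6, np.nan, np.nan],
})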
Code
m = (df.notnull().cummax(axis=1) & df.isnull()).astype(int).diff(axis=1).fillna(0)
update = m.where(m.eq(1) & m.loc[:, ::-1].cummin(axis=1).eq(-1)).replace(1, 0)
df.update(update) # Add in 0s
df = df.interpolate(axis=1, limit_area='inside')
A B C D E
0 NaN 2.0 0.0 0.0 0.0
1 3.0 4.0 0.0 0.5 1.0
2 NaN NaN NaN NaN 5.0
3 NaN 3.0 0.0 2.0 4.0
4 3.0 0.0 7.0 0.0 5.0
5 NaN 4.0 7.0 0.0 6.0
6 NaN 4.0 7.0 NaN NaN
7 5.0 0.0 5.0 NaN NaN
How it works:
(df.notnull().cummax(1) & df.isnull()) # True for streaks of null after non-null
# A B C D E
#0 False False True True False
#1 False False True True False
#2 False False False False False
#3 False False True True False
#4 False True False True False
#5 False False False True False
#6 False False False True True
#7 False True False True True
# Taking the diff then allows you to find only the first NaN after any non-null.
# I.e. flagged by `1`
(df.notnull().cummax(1) & df.isnull()).astype(int).diff(axis=1).fillna(0)
# A B C D E
#0 0.0 0.0 1.0 0.0 -1.0
#1 0.0 0.0 1.0 0.0 -1.0
#2 0.0 0.0 0.0 0.0 0.0
#3 0.0 0.0 1.0 0.0 -1.0
#4 0.0 1.0 -1.0 1.0 -1.0
#5 0.0 0.0 0.0 1.0 -1.0
#6 0.0 0.0 0.0 1.0 0.0
#7 0.0 1.0 -1.0 1.0 0.0
# The update DataFrame is a like-indexed DF with 0s where they get filled.
# The reversed cummin ensures fills only if there's a non-null value after the 0.
m.where(m.eq(1) & m.loc[:, ::-1].cummin(1).eq(-1)).replace(1, 0)
# A B C D E
#0 NaN NaN 0.0 NaN NaN
#1 NaN NaN 0.0 NaN NaN
#2 NaN NaN NaN NaN NaN
#3 NaN NaN 0.0 NaN NaN
#4 NaN 0.0 NaN 0.0 NaN
#5 NaN NaN NaN 0.0 NaN
#6 NaN NaN NaN NaN NaN
#7 NaN 0.0 NaN NaN NaN

Dataframe from a dict of lists of dicts?

I have a dict of lists of dicts. What is the most efficient way to convert this into a DataFrame in pandas?
data = {
"0a2":[{"a":1,"b":1},{"a":1,"b":1,"c":1},{"a":1,"b":1}],
"279":[{"a":1,"b":1,"c":1},{"a":1,"b":1,"d":1}],
"ae2":[{"a":1,"b":1},{"a":1,"d":1},{"a":1,"b":1},{"a":1,"d":1}],
#...
}
import pandas as pd
pd.DataFrame(data, columns=["a","b","c","d"])
What I've tried:
One solution is to denormalize the data like this, by duplicating the "id" keys:
bad_data = [
{"a":1,"b":1,"id":"0a2"},{"a":1,"b":1,"c":1,"id":"0a2"},{"a":1,"b":1,"id":"0a2"},
{"a":1,"b":1,"c":1,"id":"279"},{"a":1,"b":1,"d":1,"id":"279"},
{"a":1,"b":1,"id":"ae2"},{"a":1,"d":1,"id":"ae2"},{"a":1,"b":1,"id":"ae2"},{"a":1,"d":1,"id":"ae2"}
]
pd.DataFrame(bad_data, columns=["a","b","c","d","id"])
But my data is very large, so I'd prefer some other hierarchical index solution.
IIUC, you can do (recommended):
new_df = pd.concat((pd.DataFrame(d) for d in data.values()), keys=data.keys())
Output:
a b c d
0a2 0 1 1.0 NaN NaN
1 1 1.0 1.0 NaN
2 1 1.0 NaN NaN
279 0 1 1.0 1.0 NaN
1 1 1.0 NaN 1.0
ae2 0 1 1.0 NaN NaN
1 1 NaN NaN 1.0
2 1 1.0 NaN NaN
3 1 NaN NaN 1.0
Or
pd.concat(pd.DataFrame(v).assign(ID=k) for k,v in data.items())
Output:
a b c ID d
0 1 1.0 NaN 0a2 NaN
1 1 1.0 1.0 0a2 NaN
2 1 1.0 NaN 0a2 NaN
0 1 1.0 1.0 279 NaN
1 1 1.0 NaN 279 1.0
0 1 1.0 NaN ae2 NaN
1 1 NaN NaN ae2 1.0
2 1 1.0 NaN ae2 NaN
3 1 NaN NaN ae2 1.0
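A usage sketch on the MultiIndexed result from the first approach, selecting one id via the outer index level:
new_df.loc['0a2']  # the three rows that came from data['0a2']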

How to reset cumulative sum every time there is a NaN in a pandas dataframe?

If I have a Pandas data frame like this:
1 2 3 4 5 6 7
1 NaN 1 1 1 NaN 1 1
2 NaN NaN 1 1 1 1 1
3 NaN NaN NaN 1 NaN 1 1
4 1 1 NaN NaN 1 1 NaN
How do I do a cumulative sum such that the count resets every time there is a NaN value in the row, so that I get something like this:
1 2 3 4 5 6 7
1 NaN 1 2 3 NaN 1 2
2 NaN NaN 1 2 3 4 5
3 NaN NaN NaN 1 NaN 1 2
4 1 2 NaN NaN 1 2 NaN
You could do:
# boolean mask, True where the value is NaN (pd.isna already returns bool)
mask = pd.isna(df)
# cumulative sum along each row, carrying the last sum forward over NaNs
cumulative = df.cumsum(1).fillna(method='ffill', axis=1).fillna(0)
# the cumulative value at each NaN position, carried forward the same way
restart = cumulative[mask].fillna(method='ffill', axis=1).fillna(0)
# subtracting resets the running sum after every NaN
result = cumulative - restart
result[mask] = np.nan
# display the result
print(result)
Output
1 2 3 4 5 6 7
0 NaN 1.0 2.0 3.0 NaN 1.0 2.0
1 NaN NaN 1.0 2.0 3.0 4.0 5.0
2 NaN NaN NaN 1.0 NaN 1.0 2.0
3 1.0 2.0 NaN NaN 1.0 2.0 NaN
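Note: the fillna(method='ffill') form is deprecated in recent pandas; a sketch of the same two lines with the plain ffill method:
cumulative = df.cumsum(1).ffill(axis=1).fillna(0)
restart = cumulative[mask].ffill(axis=1).fillna(0)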
You can do it with stack and unstack:
s = df.stack(dropna=False).isnull().cumsum()
df = df.where(df.isnull(), s.groupby(s).cumcount().unstack())
df
Out[86]:
1 2 3 4 5 6 7
1 NaN 1.0 2.0 3.0 NaN 1 2.0
2 NaN NaN 1.0 2.0 3.0 4 5.0
3 NaN NaN NaN 1.0 NaN 1 2.0
4 3.0 4.0 NaN NaN 1.0 2 NaN
(Note that the stacked counter runs across row boundaries, which is why row 4 starts at 3 here rather than at 1 as requested.)
I came up with a slightly different answer here that might be helpful.
For a single series I made this function to do the cumsum reset on nulls.
def cumsum_reset_on_null(srs: pd.Series) -> pd.Series:
    """
    For a pandas series with null values,
    do a cumsum and reset the cumulative sum when a null value is encountered.
    Example)
    input:  [1, 1, np.nan, 1, 2, 3, np.nan, 1, np.nan]
    return: [1, 2, NaN, 1, 3, 6, NaN, 1, NaN]
    """
    cumulative = srs.cumsum().fillna(method='ffill')
    restart = ((cumulative * srs.isnull()).replace(0.0, np.nan)
               .fillna(method='ffill').fillna(0))
    result = cumulative - restart
    # the reset positions equal 0 after the subtraction; turn them back into NaN
    return result.replace(0, np.nan)
Then for the full dataframe, just apply this function row-wise
df = pd.DataFrame([
    [np.nan, 1, 1, 1, np.nan, 1, 1],
    [np.nan, np.nan, 1, 1, 1, 1, 1],
    [np.nan, np.nan, np.nan, 1, np.nan, 1, 1],
    [1, 1, np.nan, np.nan, 1, 1, np.nan],
])
df.apply(cumsum_reset_on_null, axis=1)
0 1 2 3 4 5 6
0 NaN 1.0 2.0 3.0 NaN 1.0 2.0
1 NaN NaN 1.0 2.0 3.0 4.0 5.0
2 NaN NaN NaN 1.0 NaN 1.0 2.0
3 1.0 2.0 NaN NaN 1.0 2.0 NaN
One way can be:
sample = pd.DataFrame({1: [np.nan, np.nan, np.nan, 1], 2: [1, np.nan, np.nan, 1],
                       3: [1, 1, np.nan, np.nan], 4: [1, 1, 1, np.nan],
                       5: [np.nan, 1, np.nan, 1], 6: [1, 1, 1, 1],
                       7: [1, 1, 1, np.nan]}, index=[1, 2, 3, 4])
Output of sample
1 2 3 4 5 6 7
1 NaN 1.0 1.0 1.0 NaN 1 1.0
2 NaN NaN 1.0 1.0 1.0 1 1.0
3 NaN NaN NaN 1.0 NaN 1 1.0
4 1.0 1.0 NaN NaN 1.0 1 NaN
The following code would do it:
# numr = number of rows
# numc = number of columns
numr, numc = sample.shape
for i in range(numr):
    s = 0
    flag = 0
    for j in range(numc):
        if np.isnan(sample.iloc[i, j]):
            flag = 1
        else:
            if flag == 1:
                s = sample.iloc[i, j]
                flag = 0
            else:
                s += sample.iloc[i, j]
            sample.iloc[i, j] = s
Output:
1 2 3 4 5 6 7
1 NaN 1.0 2.0 3.0 NaN 1.0 2.0
2 NaN NaN 1.0 2.0 3.0 4.0 5.0
3 NaN NaN NaN 1.0 NaN 1.0 2.0
4 1.0 2.0 NaN NaN 1.0 2.0 NaN
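Another sketch of the same idea, vectorized per row with groupby: grouping each row by the running count of NaNs gives one group per run between NaNs, and a cumsum within each group restarts the count; NaN positions stay NaN.
import pandas as pd

def row_cumsum_reset(row: pd.Series) -> pd.Series:
    # each NaN opens a new group, so the cumsum restarts after every NaN
    groups = row.isna().cumsum()
    return row.groupby(groups).cumsum()

result = sample.apply(row_cumsum_reset, axis=1)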
