How to replace an infinite value with the maximum value of a pandas column? - python

I have a dataframe which looks like
City Crime_Rate
A 10
B 20
C inf
D 15
I want to replace the inf with the max value of the Crime_Rate column, so that the resulting dataframe looks like
City Crime_Rate
A 10
B 20
C 20
D 15
I tried
df['Crime_Rate'].replace([np.inf],max(df['Crime_Rate']),inplace=True)
But Python takes inf as the maximum value. Where am I going wrong here?

Filter out inf values first and then get max of Series:
m = df.loc[df['Crime_Rate'] != np.inf, 'Crime_Rate'].max()
df['Crime_Rate'].replace(np.inf,m,inplace=True)
Another solution:
mask = df['Crime_Rate'] != np.inf
df.loc[~mask, 'Crime_Rate'] = df.loc[mask, 'Crime_Rate'].max()
print (df)
City Crime_Rate
0 A 10.0
1 B 20.0
2 C 20.0
3 D 15.0

Here is a solution for a whole matrix/data frame:
highest_non_inf = df.max().loc[lambda v: v < np.inf].max()
df.replace(np.inf, highest_non_inf)
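A per-column variant may also be useful (a sketch for a purely numeric frame; the column names below are made up for illustration): compute each column's finite maximum first, then replace inf column by column, so one column's inf never receives another column's max.
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.inf, 3.0], 'b': [10.0, 20.0, np.inf]})
finite_max = df.replace([np.inf, -np.inf], np.nan).max()              # per-column max over finite values
df = df.apply(lambda col: col.replace(np.inf, finite_max[col.name]))  # fill inf column by column
print(df)  # inf in 'a' becomes 3.0, inf in 'b' becomes 20.0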

Set use_inf_as_na to True and then use fillna (use this if you want to treat both inf and NaN as missing values), i.e.
pd.options.mode.use_inf_as_na = True
df['Crime_Rate'].fillna(df['Crime_Rate'].max(),inplace=True)
City Crime_Rate
0 A 10.0
1 B 20.0
2 C 20.0
3 D 15.0
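If use_inf_as_na is deprecated or unavailable in your pandas version, a sketch that avoids the global option is to replace inf with NaN only in that column and then fillna:
import numpy as np
import pandas as pd

df = pd.DataFrame({'City': ['A', 'B', 'C', 'D'], 'Crime_Rate': [10, 20, np.inf, 15]})
s = df['Crime_Rate'].replace([np.inf, -np.inf], np.nan)  # treat inf as missing for this column only
df['Crime_Rate'] = s.fillna(s.max())                     # fill with the finite max (20 here)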

One way to do it is to use a nested replace(np.inf, np.nan) inside max().
It replaces inf with NaN for the computation inside max(), so max() returns the expected maximum value rather than inf.
Example below: the max value is 100, which replaces inf.
#Create dummy data frame
import pandas as pd
import numpy as np
a = float('Inf')
v = [1,2,5,a,10,5,a,5,100,2]
df = pd.DataFrame({'Col_A': v})
#Data frame looks like this
In [33]: df
Out[33]:
Col_A
0 1.000000
1 2.000000
2 5.000000
3 inf
4 10.000000
5 5.000000
6 inf
7 5.000000
8 100.000000
9 2.000000
# Replace inf
df['Col_A'].replace([np.inf], max(df['Col_A'].replace(np.inf, np.nan)), inplace=True)
In[35]: df
Out[35]:
Col_A
0 1.0
1 2.0
2 5.0
3 100.0
4 10.0
5 5.0
6 100.0
7 5.0
8 100.0
9 2.0
Hope that works!

Use numpy clip. It's elegant and blazingly fast:
import numpy as np
import pandas as pd
df = pd.DataFrame({"x": [-np.inf, +np.inf, np.nan, 4, 3]})
df["x"] = np.clip(df["x"], -np.inf, 100)
# Out:
# x
# 0 -inf
# 1 100.0
# 2 NaN
# 3 4.0
# 4 3.0
To get rid of the negative infinity as well, replace -np.inf with a small number. NaN is always unaffected. To get the max afterwards, use df["x"].max(), which skips NaN.
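If the cap should be the column's largest finite value rather than a hand-picked 100, a sketch combining clip with a finite max (tying it back to the original question):
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [-np.inf, np.inf, np.nan, 4, 3]})
finite_max = df["x"].replace([np.inf, -np.inf], np.nan).max()  # 4.0 here
df["x"] = np.clip(df["x"], -np.inf, finite_max)                # +inf collapses to 4.0; -inf and NaN are untouched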

Related

Group pandas columns by word in common in column name?

I have a data set like this:
seq S01-T01 S01-T02 S01-T03 S02-T01 S02-T02 S02-T03 S03-T01 S03-T02 S03-T03
B 7 2 9 2 1 9 2 1 1
C NaN 4 4 2 4 NaN 2 6 8
D 5 NaN NaN 2 5 9 NaN 1 1
I want to get a data frame that:
(1) calculates the mean of all the columns with T01 in them
(2) gets the mean per S-number except for T01 (i.e. get the mean of T02 and T03, for each S field)
(3) gets the mean of the list of numbers returned from step 2 (i.e. step 2 will return a list of means, one for each S-number; I then want the mean of that list).
So the output for above would be:
T0_means mean_of_other_means
B 3.6 3.83
C 1.3 4.33
D 2.3 2.6
(I just changed the NaNs to 0 in my head for averaging.)
I'm getting stuck at the first step. I wrote:
import sys
import pandas as pd
df = pd.read_csv('fat_norm_extracted.csv',sep=',')
list_of_cols_to_keep = ['S01-T01','S02-T01','S03-T01']
df = df.loc[df['column_name'].isin(list_of_cols_to_keep)]
print(df)
And the error is:
Traceback (most recent call last):
File "calculate_averages.py", line 6, in <module>
df = df.loc[df['column_name'].isin(list_of_cols_to_keep)]
File "/home/slowat/.conda/envs/embedding_nlp/lib/python3.8/site-packages/pandas/core/frame.py", line 3024, in __getitem__
indexer = self.columns.get_loc(key)
File "/home/slowat/.conda/envs/embedding_nlp/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3082, in get_loc
raise KeyError(key) from err
KeyError: 'column_name'
I know what the error means (that 'column_name' is being treated as a literal column label), but not how to fix it. Could someone show me a way around this?
The line
df = df.loc[df['column_name'].isin(list_of_cols_to_keep)]
is filtering the rows of df where the values of the column named column_name are in the list of values in list_of_cols_to_keep.
If you want to select the columns instead, you can do:
df = df.loc[:, list_of_cols_to_keep]
where : selects all rows.
Otherwise you can also use:
df = df.filter(list_of_cols_to_keep)
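Putting it together for step (1), a minimal sketch on the sample data from the question (with 'seq' moved into the index and NaNs filled with 0, as the asker assumed):
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'seq': ['B', 'C', 'D'],
     'S01-T01': [7, np.nan, 5], 'S01-T02': [2, 4, np.nan], 'S01-T03': [9, 4, np.nan],
     'S02-T01': [2, 2, 2], 'S02-T02': [1, 4, 5], 'S02-T03': [9, np.nan, 9],
     'S03-T01': [2, 2, np.nan], 'S03-T02': [1, 6, 1], 'S03-T03': [1, 8, 1]}
).set_index('seq')

list_of_cols_to_keep = ['S01-T01', 'S02-T01', 'S03-T01']
t01 = df.loc[:, list_of_cols_to_keep]  # column selection, not row filtering
print(t01.fillna(0).mean(axis=1))      # B 3.67, C 1.33, D 2.33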
You can use str.contains to flag the column names that include T01 via a boolean mask msk. Then filter the columns using loc and take the mean across columns for T0_means. For mean_of_other_means, build groups from msk with cumsum, take the group means across columns with groupby.mean, then take the mean of those means yet again:
df = df.set_index('seq').fillna(0)
msk = df.columns.str.contains('T01')
df['T0_means'] = df.loc[:, msk].mean(axis=1)
df['mean_of_other_means'] = df.drop(columns='T0_means').loc[:, ~msk].groupby(msk.cumsum()[~msk], axis=1).mean().mean(axis=1)
df = df.reset_index()
Output:
seq S01-T01 S01-T02 S01-T03 S02-T01 S02-T02 S02-T03 S03-T01 S03-T02 S03-T03 T0_means mean_of_other_means
0 B 7.0 2.0 9.0 2 1 9.0 2.0 1 1 3.666667 3.833333
1 C 0.0 4.0 4.0 2 4 0.0 2.0 6 8 1.333333 4.333333
2 D 5.0 0.0 0.0 2 5 9.0 0.0 1 1 2.333333 2.666667
First, to me it seems like mean_of_means is nothing but the mean of all columns that don't end in T01, because consider row B:
S01-T02 S02-T02 S03-T02 mean
B 2 1 9 (2+1+9)/3
S01-T03 S02-T03 S03-T03 mean
B 9 9 1 (9+9+1)/3
Then the mean of the above two means is: ( (2+1+9)/3 + (9+9+1)/3 ) / 2 = (2+1+9+9+9+1)/6
which is nothing but the mean of all columns that don't end in T01!
With that I think you can do:
df = df.fillna(0)
T01_means = df.filter(regex='.*T01$',axis=1).mean(axis=1)
mean_of_means_no_T01 = df.filter(regex='.*(?<!T01)$',axis=1).mean(axis=1)
and then
means_df = pd.concat([T01_means, mean_of_means_no_T01],axis=1)
means_df.columns = ['T01_means', 'mean_of_means_no_T01']
means_df
T01_means mean_of_means_no_T01
B 3.666667 3.833333
C 1.333333 4.333333
D 2.333333 2.666667

Divide several columns with the same column name ending by one other column in python

I have a similar question to this one.
I have a dataframe with several rows, which looks like this:
Name TypA TypB ... TypF TypA_value TypB_value ... TypF_value Divider
1 1 1 NaN 10 5 NaN 5
2 NaN 2 NaN NaN 20 NaN 10
and I want to divide all columns ending in "value" by the column "Divider". How can I do so? One trick would be to sort the columns so that I can use the answer from above, but is there a direct way that does not require sorting the dataframe?
The outcome would be:
Name TypA TypB ... TypF TypA_value TypB_value ... TypF_value Divider
1 1 1 NaN 2 1 0 5
2 NaN 2 NaN 0 2 0 10
So a NaN will lead to a 0.
Use DataFrame.filter to select the columns containing _value from the dataframe, then use DataFrame.div along axis=0 to divide them by the column Divider, and finally use DataFrame.update to write the values back into the dataframe:
d = df.filter(like='_value').div(df['Divider'], axis=0).fillna(0)
df.update(d)
Result:
Name TypA TypB TypF TypA_value TypB_value TypF_value Divider
0 1 1.0 1 NaN 2.0 1.0 0.0 5
1 2 NaN 2 NaN 0.0 2.0 0.0 10
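As a quick check, here is the same approach sketched on just two of the sample columns from the question (values copied from the post):
import numpy as np
import pandas as pd

df = pd.DataFrame({'Name': [1, 2],
                   'TypA_value': [10, np.nan],
                   'TypB_value': [5, 20],
                   'Divider': [5, 10]})
d = df.filter(like='_value').div(df['Divider'], axis=0).fillna(0)
df.update(d)
print(df)  # TypA_value -> 2.0, 0.0 and TypB_value -> 1.0, 2.0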
You could select the columns of interest using DataFrame.filter, and divide as:
value_cols = df.filter(regex=r'_value$').columns
df[value_cols] /= df['Divider'].to_numpy()[:,None]
# df[value_cols] = df[value_cols].fillna(0)
print(df)
Name TypA TypB TypF TypA_value TypB_value TypF_value Divider
0 1 1.0 1 NaN 2.0 1.0 NaN 5
1 2 NaN 2 NaN NaN 2.0 NaN 10
Taking two sample columns, A and B:
import pandas as pd
import numpy as np
a = {'Name': [1, 2],
     'TypA': [1, np.nan],
     'TypB': [1, 2],
     'TypA_value': [10, np.nan],
     'TypB_value': [5, 20],
     'Divider': [5, 10]
     }
df = pd.DataFrame(a)
cols_all = df.columns
Find the columns for which calculations are to be done, assuming they all contain 'value' with an underscore:
cols_to_calc = [c for c in cols_all if '_value' in c]
For these columns, first divide by the Divider column, then replace NaN with 0:
for c in cols_to_calc:
    df[c] = df[c] / df.Divider
    df[c] = df[c].fillna(0)

How to do a Python DataFrame Boolean Mask on nan values [duplicate]

Given a pandas dataframe containing possible NaN values scattered here and there:
Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs?
UPDATE: using Pandas 0.22.0
Newer Pandas versions have new methods 'DataFrame.isna()' and 'DataFrame.notna()'
In [71]: df
Out[71]:
a b c
0 NaN 7.0 0
1 0.0 NaN 4
2 2.0 NaN 4
3 1.0 7.0 0
4 1.0 3.0 9
5 7.0 4.0 9
6 2.0 6.0 9
7 9.0 6.0 4
8 3.0 0.0 9
9 9.0 0.0 1
In [72]: df.isna().any()
Out[72]:
a True
b True
c False
dtype: bool
as list of columns:
In [74]: df.columns[df.isna().any()].tolist()
Out[74]: ['a', 'b']
to select those columns (containing at least one NaN value):
In [73]: df.loc[:, df.isna().any()]
Out[73]:
a b
0 NaN 7.0
1 0.0 NaN
2 2.0 NaN
3 1.0 7.0
4 1.0 3.0
5 7.0 4.0
6 2.0 6.0
7 9.0 6.0
8 3.0 0.0
9 9.0 0.0
OLD answer:
Try to use isnull():
In [97]: df
Out[97]:
a b c
0 NaN 7.0 0
1 0.0 NaN 4
2 2.0 NaN 4
3 1.0 7.0 0
4 1.0 3.0 9
5 7.0 4.0 9
6 2.0 6.0 9
7 9.0 6.0 4
8 3.0 0.0 9
9 9.0 0.0 1
In [98]: pd.isnull(df).sum() > 0
Out[98]:
a True
b True
c False
dtype: bool
or, as #root proposed, a clearer version:
In [5]: df.isnull().any()
Out[5]:
a True
b True
c False
dtype: bool
In [7]: df.columns[df.isnull().any()].tolist()
Out[7]: ['a', 'b']
to select a subset - all columns containing at least one NaN value:
In [31]: df.loc[:, df.isnull().any()]
Out[31]:
a b
0 NaN 7.0
1 0.0 NaN
2 2.0 NaN
3 1.0 7.0
4 1.0 3.0
5 7.0 4.0
6 2.0 6.0
7 9.0 6.0
8 3.0 0.0
9 9.0 0.0
You can use df.isnull().sum(). It shows all columns and the total NaNs of each feature.
I had a problem where I had too many columns to visually inspect on the screen, so a short list comprehension that filters and returns the offending columns is
nan_cols = [i for i in df.columns if df[i].isnull().any()]
if that's helpful to anyone
Adding to that, if you want to filter out columns having more NaN values than a threshold, say 85%, then use
nan_cols85 = [i for i in df.columns if df[i].isnull().sum() > 0.85*len(df)]
This worked for me:
1. For getting the columns having at least one null value (column names):
data.columns[data.isnull().any()]
2. For getting those columns along with the count of their null values:
data[data.columns[data.isnull().any()]].isnull().sum()
[Optional]
3. For getting the percentage of null values:
data[data.columns[data.isnull().any()]].isnull().sum() * 100 / data.shape[0]
In datasets with a large number of columns it's even better to see how many columns contain null values and how many don't.
print("No. of columns containing null values")
print(len(df.columns[df.isna().any()]))
print("No. of columns not containing null values")
print(len(df.columns[df.notna().all()]))
print("Total no. of columns in the dataframe")
print(len(df.columns))
For example, my dataframe contained 82 columns, of which 19 contained at least one null value.
Further, you can also automatically remove columns and rows depending on which have more null values.
Here is the code which does this:
df = df.drop(df.columns[df.isna().sum()>len(df.columns)],axis = 1)
df = df.dropna(axis = 0).reset_index(drop=True)
Note: the above code removes all of your null values. If you want to keep them, process them first.
df.columns[df.isnull().any()].tolist()
It will return the names of the columns that contain null values.
I know this is a very well-answered question but I wanted to add a slight adjustment. This answer only returns columns containing nulls, and also still shows the count of the nulls.
As a one-liner:
pd.isnull(df).sum()[pd.isnull(df).sum() > 0]
Description
Count nulls in each column
null_count_ser = pd.isnull(df).sum()
True|False series describing if that column had nulls
is_null_ser = null_count_ser > 0
Use the T|F series to filter out those without
null_count_ser[is_null_ser]
Example Output
name 5
phone 187
age 644
I use these three lines of code to print the column names which contain at least one null value:
for column in dataframe:
    if dataframe[column].isnull().any():
        print('{0} has {1} null values'.format(column, dataframe[column].isnull().sum()))
This is one of the methods:
import pandas as pd
import numpy as np
df = pd.DataFrame({'a': [1, 2, np.nan], 'b': [np.nan, 1, np.nan], 'c': [np.nan, 2, np.nan], 'd': [np.nan, np.nan, np.nan]})
print(pd.isnull(df).sum())
Both of these should work:
df.isnull().sum()
df.isna().sum()
The DataFrame methods isna() and isnull() are completely identical.
Note: empty strings '' are considered False (not considered NA).
df.isna() returns True for NaN and False for the rest. So, doing:
df.isna().any()
will return True for any column containing a NaN, and False for the rest.
To see just the columns containing NaNs and just the rows containing NaNs:
isnulldf = df.isnull()
columns_containing_nulls = isnulldf.columns[isnulldf.any()]
rows_containing_nulls = df[isnulldf[columns_containing_nulls].any(axis='columns')].index
only_nulls_df = df[columns_containing_nulls].loc[rows_containing_nulls]
print(only_nulls_df)
features_with_na = [features for features in dataframe.columns if dataframe[features].isnull().sum() > 0]
for feature in features_with_na:
    print(feature, np.round(dataframe[feature].isnull().mean(), 4), '% missing values')
print(features_with_na)
This will give the fraction of missing values for each such column in the dataframe.
This code works if you want to find the columns containing NaN values and get a list of their names.
na_names = df.isnull().any()
list(na_names.where(na_names == True).dropna().index)
If you want to find columns whose values are all NaNs, you can replace any with all.
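For instance, a small sketch of the all-NaN variant (the column names here are made up):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, np.nan], 'b': [np.nan, np.nan]})
na_names = df.isnull().all()                                  # True only where every value is NaN
print(list(na_names.where(na_names == True).dropna().index))  # ['b']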

Input contains NaN, infinity or a value too large for dtype('float64') when I scale my data

I am trying to normalize my data like this :
scaler = MinMaxScaler()
trainX=scaler.fit_transform(X_data_train)
and I get this error :
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
X_data_train is a pandas DataFrame of size (95538, 550). What is really odd is that when I write
print (X_data_train.min().min())
it gives -5482.4473, and similarly for the max I get 28738212.0, which does not seem to me to be an extra-high value...
Moreover, based on the command given by the 54+ voted answer, I did check that I have no NaN or infinity for sure. I also don't have blanks in my csv or anything like that, as I checked the dimensions.
So, where is the problem?
You can also check for NaNs and inf:
df = pd.DataFrame({'B':[4,5,4,5,5,np.inf],
'C':[7,8,9,4,2,3],
'D':[np.nan,3,5,7,1,0],
'E':[5,3,6,9,2,4]})
print (df)
B C D E
0 4.000000 7 NaN 5
1 5.000000 8 3.0 3
2 4.000000 9 5.0 6
3 5.000000 4 7.0 9
4 5.000000 2 1.0 2
5 inf 3 0.0 4
nan = df[df.isnull().any(axis=1)]
print (nan)
B C D E
0 4.0 7 NaN 5
inf = df[df.eq(np.inf).any(axis=1)]
print (inf)
B C D E
5 inf 3 0.0 4
If you want to find all row indices with at least one NaN:
print (df.index[np.isnan(df).any(axis=1)])
Int64Index([0], dtype='int64')
And columns:
print (df.columns[np.isnan(df).any()])
Index(['D'], dtype='object')
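Once the offending values are located, one possible way forward is to convert inf to NaN and fill (or drop) before scaling. A sketch with a small hypothetical X_data_train and column medians as an assumed fill strategy:
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# hypothetical small X_data_train for illustration
X_data_train = pd.DataFrame({'f1': [1.0, np.inf, 3.0], 'f2': [0.5, 2.0, np.nan]})

X_clean = X_data_train.replace([np.inf, -np.inf], np.nan)  # treat inf as missing
X_clean = X_clean.fillna(X_clean.median())                 # assumed strategy: fill with column medians
scaler = MinMaxScaler()
trainX = scaler.fit_transform(X_clean)
print(trainX)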

How to use previous N values in pandas column to fill NaNs?

Say I have time series data as below.
df
priceA priceB
0 25.67 30.56
1 34.12 28.43
2 37.14 29.08
3 NaN 34.23
4 32 NaN
5 18.75 41.1
6 NaN 45.12
7 23 39.67
8 NaN 36.45
9 36 NaN
Now I want to fill NaNs in column priceA by taking the mean of the previous N values in the column. In this case take N=3.
And for column priceB I have to fill each NaN with the value M rows above (current index - M).
I tried to write a for loop for it, which is not good practice as my data is too large. Is there a better way to do this?
N=3
M=2
def fillPriceA(df, indexval, n):
    temp = []
    for i in range(n):
        if i < 0:
            continue
        temp.append(df.loc[indexval-(i+1), 'priceA'])
    return np.nanmean(np.array(temp, dtype=float))

def fillPriceB(df, indexval, m):
    return df.loc[indexval-m, 'priceB']

for idx, rows in df.iterrows():
    if idx < N:
        continue
    else:
        if rows['priceA'] == None:
            rows['priceA'] = fillPriceA(df, idx, N)
        if rows['priceB'] == None:
            rows['priceB'] = fillPriceB(df, idx, M)
Expected output:
priceA priceB
0 25.67 30.56
1 34.12 28.43
2 37.14 29.08
3 32.31 34.23
4 32 29.08
5 18.75 41.1
6 27.68 45.12
7 23 39.67
8 23.14 36.45
9 36 39.67
A solution could be to only work with the nan index (see dataframe boolean indexing):
param = dict(priceA=3, priceB=2)  # number of previous values to consider
for col in df.columns:
    for i in df[np.isnan(df[col])].index:  # iterate over the nan index
        _window = df.iloc[max(0, (i - param[col])):i][col]  # get the n preceding elements
        df.loc[i, col] = _window.mean() if col == 'priceA' else _window.iloc[0]  # replace with the right method
print(df)
Result:
priceA priceB
0 25.670000 30.56
1 34.120000 28.43
2 37.140000 29.08
3 32.310000 34.23
4 32.000000 29.08
5 18.750000 41.10
6 27.686667 45.12
7 23.000000 39.67
8 23.145556 36.45
9 36.000000 39.67
Note
1. Using np.isnan() implies that your columns are numeric. If not, convert your columns first with pd.to_numeric():
...
for col in df.columns:
    df[col] = pd.to_numeric(df[col], errors='coerce')
...
Or use pd.isnull() instead (see the example below). Be aware of performance (numpy is faster):
from random import randint
#A sample with 10k elements and some np.nan
arr = np.random.rand(10000)
for i in range(100):
    arr[randint(0, 9999)] = np.nan
#Performances
%timeit pd.isnull(arr)
10000 loops, best of 3: 24.8 µs per loop
%timeit np.isnan(arr)
100000 loops, best of 3: 5.6 µs per loop
2. A more generic alternative could be to define methods and window size to apply for each column in a dict:
import pandas as pd
param = {}
param['priceA'] = {'n': 3,
                   'method': lambda x: np.nanmean(x)}
param['priceB'] = {'n': 2,
                   'method': lambda x: x[0]}
param now contains n, the number of elements, and method, a lambda expression. Accordingly, rewrite your loops:
for col in df.columns:
    for i in df[np.isnan(df[col])].index:  # iterate over the nan index
        _window = df.iloc[max(0, (i - param[col]['n'])):i][col]  # get the n preceding elements
        df.loc[i, col] = param[col]['method'](_window.values)  # replace with the right method
print(df)  # this leads to a similar result
You can use an NA mask to do what you need per column:
import pandas as pd
import numpy as np
df = pd.DataFrame({'a': [1,2,3,4, None, 5, 6], 'b': [1, None, 2, 3, 4, None, 7]})
df
# a b
# 0 1.0 1.0
# 1 2.0 NaN
# 2 3.0 2.0
# 3 4.0 3.0
# 4 NaN 4.0
# 5 5.0 NaN
# 6 6.0 7.0
for col in df.columns:
    s = df[col]
    na_indices = s[s.isnull()].index.tolist()
    prev = 0
    for k in na_indices:
        s[k] = np.mean(s[prev:k])
        prev = k
    df[col] = s
print(df)
#    a    b
# 0 1.0 1.0
# 1 2.0 1.0
# 2 3.0 2.0
# 3 4.0 3.0
# 4 2.5 4.0
# 5 5.0 2.5
# 6 6.0 7.0
While this is still a custom operation, I am pretty sure it will be slightly faster because it is not iterating over each row, just over the NA values, which I am assuming will be sparse compared to the actual data.
To fill priceA, use rolling, then shift, and use the result in fillna:
# make some data
df = pd.DataFrame({'priceA': range(10)})
#make some rows missing
df.loc[[4, 6], 'priceA'] = np.nan
n = 3
df.priceA = df.priceA.fillna(df.priceA.rolling(n, min_periods=1).mean().shift(1))
The only edge case here is when two NaNs are within n of one another, but it seems to handle this as in your question.
For priceB, just use shift:
df = pd.DataFrame({'priceB': range(10)})
df.loc[[4, 8], 'priceB'] = np.nan
m = 2
df.priceB = df.priceB.fillna(df.priceB.shift(m))
Like before, there is the edge case where there is a NaN exactly m rows before another NaN.
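For completeness, a sketch applying both fills to the frame from the question (values copied from the post); results for the edge cases just mentioned may differ slightly from the expected output above:
import numpy as np
import pandas as pd

df = pd.DataFrame({'priceA': [25.67, 34.12, 37.14, np.nan, 32, 18.75, np.nan, 23, np.nan, 36],
                   'priceB': [30.56, 28.43, 29.08, 34.23, np.nan, 41.1, 45.12, 39.67, 36.45, np.nan]})
N, M = 3, 2
df['priceA'] = df['priceA'].fillna(df['priceA'].rolling(N, min_periods=1).mean().shift(1))
df['priceB'] = df['priceB'].fillna(df['priceB'].shift(M))
print(df)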
