Pandas division by zero, errors despite np.where condition - python

So I am using Jupyter notebooks and I have a function that uses the equation
data['woe2'] = np.log(data['B']/data['MonthSales'])
The issue I'm having is that when 'B' equals 0, Python throws a tantrum over division by zero. This happens even though I tried using np.where to make an exception. Do you have any ideas?
import pandas as pd
import numpy as np

data = pd.DataFrame({"A": ["John", "Deep", "Julia", "Kate", "Sandy"],
                     "MonthSales": [25, 30, 35, 40, 45],
                     "B": [10, 0, 0, 20, 40]})
data['woe2'] = np.where(data['B'] != 0,
                        np.log(data['B']/data['MonthSales']), 0)

Despite the wording of the warning, it isn't complaining about the division at all: dividing zero by a non-zero denominator is fine and gives 0. It is np.log of that 0 that triggers the "divide by zero" RuntimeWarning and produces -inf.
Here is a slightly cleaner way to do it, since you can use boolean Series as masks:
data_bool = data['B'] != 0
data['woe2'] = np.log(data[data_bool]['B']/data[data_bool]['MonthSales'])

In the recent question
"RuntimeWarning: divide by zero encountered in log" in numpy.log even though small values were filtered out
we explain that np.where is a conditional selector, not a short-circuit: both of its value arguments are evaluated in full before the selection happens.
The Series division:
In [72]: data['B']/data['MonthSales']
Out[72]:
0    0.400000
1    0.000000
2    0.000000
3    0.500000
4    0.888889
dtype: float64
Taking the log raises the warning. Note it is issued by pandas.core.arraylike:
In [73]: np.log(data['B']/data['MonthSales'])
C:\Users\paul\miniconda3\lib\site-packages\pandas\core\arraylike.py:402: RuntimeWarning: divide by zero encountered in log
  result = getattr(ufunc, method)(*inputs, **kwargs)
Out[73]:
0   -0.916291
1        -inf
2        -inf
3   -0.693147
4   -0.117783
dtype: float64
If instead we take the log of the equivalent array, using the where/out parameters to make it conditional, we avoid the warning:
In [74]: np.log((data['B']/data['MonthSales']).values, where=data['B']>0,
    ...:        out=np.zeros(data.shape[0]))
Out[74]: array([-0.91629073,  0.        ,  0.        , -0.69314718, -0.11778304])
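Putting the same trick together as a plain script (a minimal sketch reusing the question's toy DataFrame): the where= mask tells the ufunc which elements to compute, and out= supplies the pre-filled values for the skipped positions, so np.log(0) is never evaluated:
import numpy as np
import pandas as pd

data = pd.DataFrame({"A": ["John", "Deep", "Julia", "Kate", "Sandy"],
                     "MonthSales": [25, 30, 35, 40, 45],
                     "B": [10, 0, 0, 20, 40]})
ratio = (data['B'] / data['MonthSales']).to_numpy()
# log runs only where the mask is True; the masked-out slots keep the
# zeros pre-filled by `out`
data['woe2'] = np.log(ratio, where=ratio > 0, out=np.zeros_like(ratio))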

I think that warning is just not significant (like all warnings). Indeed, the numpy documentation itself includes an example with 0 in the argument array, and when we run that same example code, it emits the same warning.
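If you do decide the -inf result is acceptable and just want to silence the warning, numpy's errstate context manager can suppress it locally (a minimal sketch using the ratio values from above; this changes nothing about the computed values):
import numpy as np

ratio = np.array([0.4, 0.0, 0.0, 0.5, 0.888889])
with np.errstate(divide='ignore'):
    # log(0) still yields -inf, but no RuntimeWarning is emitted
    result = np.log(ratio)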

Related

Pandas NaN with log

I got this error: "setting an array element with a sequence". Any help solving the matter? I used the code below to create NaNs in my data so I can calculate the log, and then I need to plot the result.
import pandas as pd
import numpy as np

d = np.array(Hnew)  # Hnew is my raw data
df = pd.DataFrame(data=d)
df = df.mask(df < 62.5)
h = np.zeros(np.size(df))
for i in range(0, np.size(df)):
    h[i] = 5 - np.log((df[i] - 62.5) / 0.915)
This should work:
h = 5 - np.log((df.mask(df['val'] <= 62.5)['val'] - 62.5) / 0.915)
You tried to assign the Series that np.log returned to individual elements of a float64 array, which isn't possible (that's the reason for the message). But np.log already returns the Series you probably want.
Please also note that I changed < 62.5 to <= 62.5, because you would probably get -inf or an error if you try to calculate the log of 0.
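As a self-contained illustration of the same idea (a minimal sketch with made-up data standing in for Hnew and a hypothetical column name 'val', operating on the column directly):
import numpy as np
import pandas as pd

df = pd.DataFrame({"val": [60.0, 62.5, 70.0, 100.0]})  # stand-in for Hnew
# values <= 62.5 become NaN; NaN propagates through log without a warning
h = 5 - np.log((df['val'].mask(df['val'] <= 62.5) - 62.5) / 0.915)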

python divide value by 0

I am trying to compare values in a table; it so happens that some might be zero, and I therefore get an error message that I cannot divide by 0.
Why isn't the script returning inf instead of raising an error?
When I test this script on a dataframe with one column it works; with more than one column it breaks with the ZeroDivisionError.
table[change] = ['{0}%'.format(str(round(100*x, 2)))
                 for x in (table.ix[:, table.shape[1]-1] - table.ix[:, 0]) / table.ix[:, 0]]
table example:
              0      1      2      3      4      5      6
numbers     0.0  100.0  120.0  220.0  250.0  300.0  500.0
revenues   50.0  100.0  120.0  220.0  250.0  300.0  500.0
where table.ix[:,0] is 0.0.
Some of the values at table.ix[:,0] are zero and others are not; hence try/except in my experience will not work, because the script will break as soon as the divisor equals 0.
I tried two of the other methods and they did not work for me.
Can you be a little more descriptive in your answer? I am struggling to take the approach given.
I have another approach which I am trying, and it is not working; I don't see yet what the problem is:
for index, row in table.iterrows():
    if row[0] == 0:
        table[change] = 'Nan'
    else:
        x = (row[-1] - row[0]) / row[0]
        table[change] = '{0}%'.format(round(100 * x, 2))
The 'change' column ends up containing the same value in every row (i.e. the result of the last comparison in the table).
Dividing by zero is usually a serious error; defaulting to infinity would not be appropriate for most situations.
Before attempting to calculate the value, check if the divisor (table.ix[:,0] in this case) is equal to zero. If it is, then skip the calculation and just assign whatever value you want.
Or you can wrap the division calculation in a try/except block as suggested by @Andrew.
Looks like Python has a specific ZeroDivisionError; you can use try/except to do something else in that case:
try:
    table[change] = ['{0}%'.format(str(round(100*x, 2)))
                     for x in (table.ix[:, table.shape[1]-1] - table.ix[:, 0]) / table.ix[:, 0]]
except ZeroDivisionError:
    table[change] = np.inf
In that case, you can just divide the whole Series, and Pandas will do the inf substitution for you. Something like:
if df1.ndim == 1:
    table[change] = np.inf
elif df1.ndim > 1 and df1.shape[0] > 1:
    table[change] = ['{0}%'.format(str(round(100*x, 2)))
                     for x in (table.ix[:, table.shape[1]-1] - table.ix[:, 0]) / table.ix[:, 0]]
The fact that your original example only had one row seems to make Pandas fetch the value in that cell for the division. If you do the division with an array with more than one row, it has the behaviour that I think you were originally expecting.
EDIT:
I've just spotted the generator expression that I completely overlooked. This is much easier than I thought.
Your normalisation can then be performed in one vectorised line, and if your version of pandas is up to date, you can call round as well:
table["change"] = 100 * ((table.iloc[:, -1] - table.iloc[:, 0])/ table.iloc[:, 0])
#And if you're running Pandas v 0.17.0
table.round({"change" : 2})
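For reference, a minimal sketch of the whole-Series behaviour on toy data shaped like the question's table: pandas substitutes inf rather than raising when the divisor contains zero.
import pandas as pd

table = pd.DataFrame([[0.0, 100.0, 500.0],
                      [50.0, 100.0, 500.0]],
                     index=["numbers", "revenues"])
# element-wise Series division: a 0 divisor yields inf, not ZeroDivisionError
table["change"] = 100 * (table.iloc[:, -1] - table.iloc[:, 0]) / table.iloc[:, 0]
# "numbers" gets inf, "revenues" gets 900.0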

For a pandas Series, shouldn't s.sort_index(inplace=True) change s?

Given this code:
import pandas as pd

s = pd.Series([1,2,3], index=['C','B','A'])
s.sort_index(inplace=True)
Shouldn't s now look like this:
A    3
B    2
C    1
dtype: int64
When I run this, s remains unchanged. Maybe I'm confused about what the inplace argument is supposed to do. I thought that it was supposed to change the Series on which the method is called.
For the record, this does return the sorted series, but it does so whether or not you set inplace to True.
You are indeed correct in your expectation. However, inplace was not yet implemented for Series.sort_index before 0.17, and there is a bug in 0.17 where the keyword is silently ignored instead of raising an error (as it did before). A fix will be released in the upcoming version 0.17.1.
See https://github.com/pydata/pandas/pull/11422
So for now, the easiest approach is just to use it without inplace:
In [4]: s = s.sort_index()

In [5]: s
Out[5]:
A    3
B    2
C    1
dtype: int64
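Once that fix landed (pandas 0.17.1 and later, per the pull request above), the original code should behave as expected; a minimal sketch:
import pandas as pd

s = pd.Series([1, 2, 3], index=['C', 'B', 'A'])
s.sort_index(inplace=True)  # mutates s in place on pandas >= 0.17.1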
You need to have a dataframe:
s = pd.DataFrame([1,2,3], index=['C','B','A'])
s.sort_index(inplace=True)
s
Out[25]:
   0
A  3
B  2
C  1
inplace for sort_index works on a DataFrame, not a Series. For a Series you have to reassign it.

sklearn error ValueError: Input contains NaN, infinity or a value too large for dtype('float64')

I am using sklearn and having a problem with the affinity propagation. I have built an input matrix and I keep getting the following error.
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
I have run
np.isnan(mat.any()) #and gets False
np.isfinite(mat.all()) #and gets True
I tried using
mat[np.isfinite(mat) == True] = 0
to remove the infinite values but this did not work either.
What can I do to get rid of the infinite values in my matrix, so that I can use the affinity propagation algorithm?
I am using anaconda and python 2.7.9.
This might happen inside scikit, and it depends on what you're doing. I recommend reading the documentation for the functions you're using. You might be using one which depends, e.g., on your matrix being positive definite, without fulfilling that criterion.
EDIT: How could I miss that:
np.isnan(mat.any()) #and gets False
np.isfinite(mat.all()) #and gets True
is obviously wrong. Right would be:
np.any(np.isnan(mat))
and
np.all(np.isfinite(mat))
You want to check whether any of the elements are NaN, and not whether the return value of the any function is a number...
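A quick sketch of why the order matters, on a toy matrix: mat.any() collapses the matrix to a single boolean first, and isnan/isfinite of a boolean tell you nothing about the elements.
import numpy as np

mat = np.array([[1.0, np.nan], [2.0, 3.0]])
print(np.isnan(mat.any()))       # False -- isnan(True), the wrong question
print(np.any(np.isnan(mat)))     # True  -- there is a NaN in the matrix
print(np.isfinite(mat.all()))    # True  -- isfinite(True), also wrong
print(np.all(np.isfinite(mat)))  # False -- not every element is finite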
I got the same error message when using sklearn with pandas. My solution is to reset the index of my dataframe df before running any sklearn code:
df = df.reset_index()
I encountered this issue many times when I removed some entries in my df, such as
df = df[df.label=='desired_one']
This is my function (based on this) to clean the dataset of nan, Inf, and missing cells (for skewed datasets):
import pandas as pd
import numpy as np
def clean_dataset(df):
    assert isinstance(df, pd.DataFrame), "df needs to be a pd.DataFrame"
    df.dropna(inplace=True)
    indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(axis=1)
    return df[indices_to_keep].astype(np.float64)
In most cases, getting rid of infinite and null values solves this problem.
Get rid of infinite values:
df.replace([np.inf, -np.inf], np.nan, inplace=True)
Get rid of null values in whatever way you like: a specific value such as 999, the mean, or your own function to impute missing values:
df.fillna(999, inplace=True)
This is the check on which it fails:
https://github.com/scikit-learn/scikit-learn/blob/0.17.X/sklearn/utils/validation.py#L51
Which says
def _assert_all_finite(X):
    """Like assert_all_finite, but only for ndarray."""
    X = np.asanyarray(X)
    # First try an O(n) time, O(1) space solution for the common case that
    # everything is finite; fall back to O(n) space np.isfinite to prevent
    # false positives from overflow in sum method.
    if (X.dtype.char in np.typecodes['AllFloat'] and not np.isfinite(X.sum())
            and not np.isfinite(X).all()):
        raise ValueError("Input contains NaN, infinity"
                         " or a value too large for %r." % X.dtype)
So make sure that you have no NaN values in your input, that all the values are actually float values, and that none of them is inf either.
The dimensions of my input array were skewed, as my input CSV had empty spaces.
With this version of python 3:
/opt/anaconda3/bin/python --version
Python 3.6.0 :: Anaconda 4.3.0 (64-bit)
Looking at the details of the error, I found the lines of codes causing the failure:
/opt/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in _assert_all_finite(X)
56 and not np.isfinite(X).all()):
57 raise ValueError("Input contains NaN, infinity"
---> 58 " or a value too large for %r." % X.dtype)
59
60
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
From this, I was able to extract the correct way to test what was going on with my data, using the same test that fails in the error message: np.isfinite(X)
Then with a quick and dirty loop, I was able to find that my data indeed contains nans:
print(p[:, 0].shape)
index = 0
for i in p[:, 0]:
    if not np.isfinite(i):
        print(index, i)
    index += 1
(367340,)
4454 nan
6940 nan
10868 nan
12753 nan
14855 nan
15678 nan
24954 nan
30251 nan
31108 nan
51455 nan
59055 nan
...
Now all I have to do is remove the values at these indexes.
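The same can be done without the Python loop (a sketch, continuing with the answer's array p; a boolean mask finds the offending indexes and drops those rows in one step):
import numpy as np

bad = ~np.isfinite(p[:, 0])  # True where the first column is nan or inf
print(np.where(bad)[0])      # the offending row indexes
p = p[~bad]                  # keep only the rows with finite values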
None of the answers here worked for me. This was what worked.
Test_y = np.nan_to_num(Test_y)
It replaces the infinite values with large finite values and the NaN values with zeros.
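A quick sketch of the default behaviour:
import numpy as np

x = np.array([np.nan, np.inf, -np.inf, 1.5])
# nan -> 0.0, +/-inf -> +/-float64 max (about 1.798e308)
print(np.nan_to_num(x))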
I had the same error, and in my case X and y were dataframes so I had to convert them to matrices first:
X = X.values.astype(np.float64)
y = y.values.astype(np.float64)
Edit: The originally suggested X.as_matrix() is deprecated.
The problem seems to occur in the DecisionTreeClassifier input check. Try:
X_train = X_train.replace((np.inf, -np.inf, np.nan), 0).reset_index(drop=True)
I had the error after trying to select a subset of rows:
df = df.reindex(index=my_index)
Turns out that my_index contained values that were not in df.index, so the reindex function inserted some new rows and filled them with NaN.
Remove all infinite values:
(and replace with min or max for that column)
import numpy as np

# generate example matrix
matrix = np.random.rand(5, 5)
matrix[0, :] = np.inf
matrix[2, :] = -np.inf

>>> matrix
array([[       inf,        inf,        inf,        inf,        inf],
       [0.87362809, 0.28321499, 0.7427659 , 0.37570528, 0.35783064],
       [      -inf,       -inf,       -inf,       -inf,       -inf],
       [0.72877665, 0.06580068, 0.95222639, 0.00833664, 0.68779902],
       [0.90272002, 0.37357483, 0.92952479, 0.072105  , 0.20837798]])
# find min and max values for each column, ignoring nan, -inf, and inf
mins = [np.nanmin(matrix[:, i][matrix[:, i] != -np.inf]) for i in range(matrix.shape[1])]
maxs = [np.nanmax(matrix[:, i][matrix[:, i] != np.inf]) for i in range(matrix.shape[1])]

# go through matrix one column at a time and replace + and -infinity
# with the max or min for that column
for i in range(matrix.shape[1]):
    matrix[:, i][matrix[:, i] == -np.inf] = mins[i]
    matrix[:, i][matrix[:, i] == np.inf] = maxs[i]
>>> matrix
array([[0.90272002, 0.37357483, 0.95222639, 0.37570528, 0.68779902],
       [0.87362809, 0.28321499, 0.7427659 , 0.37570528, 0.35783064],
       [0.72877665, 0.06580068, 0.7427659 , 0.00833664, 0.20837798],
       [0.72877665, 0.06580068, 0.95222639, 0.00833664, 0.68779902],
       [0.90272002, 0.37357483, 0.92952479, 0.072105  , 0.20837798]])
I found that after calling pct_change on a new column, NaN existed in one of the rows. I removed the NaN row with the following code:
df = df.replace([np.inf, -np.inf], np.nan)
df = df.dropna()
df = df.reset_index()
I got the same error. It worked with df.fillna(-99999, inplace=True) before doing any replacement or substitution.
I would like to propose a solution for numpy that worked well for me. The lines
import numpy as np
from numpy import inf
inputArray[inputArray == inf] = np.finfo(np.float64).max
substitute all infinite values of a numpy array with the maximum float64 number.
Phew!! In my case the problem was about NaN values...
You can list the columns that have NaN with this function:
your_data.isnull().sum()
and then you can fill those NaN values in your dataset file.
Here is the code for how to "Replace NaN with zero and infinity with large finite numbers":
your_data[:] = np.nan_to_num(your_data)
from numpy.nan_to_num
In my case the problem was that many scikit functions return numpy arrays, which are devoid of pandas index. So there was an index mismatch when I used those numpy arrays to build new DataFrames and then I tried to mix them with the original data.
dataset = dataset.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
This worked for me
I had the same issue, in my case the answer was simply that I had a cell in my CSV with no value ("x,y,z,,"). Putting a default value in fixed it for me.
Using isneginf may help.
http://docs.scipy.org/doc/numpy/reference/generated/numpy.isneginf.html#numpy.isneginf
import numpy
x[numpy.isneginf(x)] = 0  # 0 is the value you want to replace with
Note: This solution only applies if you consciously want to keep NaN entries in your dataset.
This error happened to me when I was using some of the scikit-learn functionality (in my case: GridSearchCV). Under the hood I was using an xgboost XGBClassifier, which handles NaN data gracefully. However, GridSearchCV was using the sklearn.utils.validation module, which enforced the absence of missing data in the input by calling the _assert_all_finite function. This was ultimately causing the error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64')
Sidenote: _assert_all_finite accepts an allow_nan argument which, if set to True, would avoid the issue. However, the scikit-learn API does not give us control over this argument.
Solution
My solution was to use unittest.mock's patch to silence the _assert_all_finite function so that it does not raise ValueError. Here is a snippet:
from unittest import mock
import sklearn

with mock.patch("sklearn.utils.validation._assert_all_finite"):
    # your code that raises ValueError
This will replace _assert_all_finite with a dummy mock function, so it won't get executed.
Please note that patching is not a recommended practice and might result in unpredictable behaviour!
EDIT:
This Pull Request should resolve the issue (though the fix has not been released as of Jan 2022)
If you're running an estimator, it could be that your learning rate is too high. I passed in the wrong array to a grid search by accident and ended up training with a learning rate of 500, which I could see causing issues with the training process.
Basically it's not necessarily only your inputs that have to all be valid, but the intermediate data as well.
After a long time dealing with this problem, I realized it was because the training and testing splits contained columns whose values were identical across all rows. Some calculations in some algorithms can then lead to infinite results. If your data is arranged so that nearby rows are more likely to be similar, shuffling the data can help. This seems to be a bug in scikit; I'm using version 0.23.2.
If you happen to use the "kc_house_data.csv" dataset (which some commenters and many data-science newcomers seem to use, because it's presented in lots of popular course material), the data is faulty and is the true source of the error.
To fix it, as of 2022:
Delete the last (empty) line in the csv file.
There are two lines that contain one empty data value "x,x,,x,x" - to fix it, don't delete the comma; instead add a random integer value like 2000, so it looks like this: "x,x,2000,x,x".
Don't forget to save and reload the file in your project.
All the other answers are helpful and correct, but not in this case: if you use kc_house_data.csv, you need to fix the data in the file; nothing else will help. The empty data field will shift the other data around randomly and generate weird bugs that are hard to trace back to the source!
In my case the algorithm required data to be strictly inside the open interval (0, 1), non-inclusive. My rather brutal solution was to add a small random number to all desired values:
y_train = pd.DataFrame(y_train).applymap(lambda x: x + np.random.rand()/100000.0)["col_name"]
y_train[y_train >= 1] = 0.999999
given that y_train is in the range [0, 1].
This is definitely not suitable for all cases, as you are messing with your input data, but it can be a solution if you have sparse data and only need a quick forecast.
Try
mat.sum()
If the sum of your data is infinity (greater than the maximum value for your float dtype, which is about 3.402823e+38 for float32 and about 1.797693e+308 for float64), you will get that error.
see the _assert_all_finite function in validation.py from the scikit source code:
if is_float and np.isfinite(X.sum()):
    pass
elif is_float:
    msg_err = "Input contains {} or a value too large for {!r}."
    if (allow_nan and np.isinf(X).any() or
            not allow_nan and not np.isfinite(X).all()):
        type_err = 'infinity' if allow_nan else 'NaN, infinity'
        # print(X.sum())
        raise ValueError(msg_err.format(type_err, X.dtype))
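A sketch of the overflow case that the source comment guards against: every element can be finite and yet the sum overflows to inf, which is why the check only raises after also confirming np.isfinite(X).all() is False.
import numpy as np

x = np.full(4, np.finfo(np.float32).max, dtype=np.float32)
print(np.isfinite(x).all())  # True: every element is finite
print(np.isfinite(x.sum()))  # False: the sum overflows to inf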

Numpy: multiplying with NaN values without using nan_to_num

I was able to optimise some operations in my program quite a bit using numpy. When I profiled a run, I noticed that most of the time is spent in numpy.nan_to_num. I'd like to improve this even further.
The sort of calculations occurring are multiplication of two arrays for which one of the arrays could contain nan values. I want these to be treated as zeros, but I can't initialise the array with zeros, as nan has a meaning later on and can't be set to 0. Is there a way of doing multiplications (and additions) with nan being treated as zero?
From the nan_to_num docstring, I can see a new array is produced which may explain why it's taking so long.
Replace nan with zero and inf with finite numbers.
Returns an array or scalar replacing Not a Number (NaN) with zero,...
A function like nansum for arbitrary arithmetic operations would be great.
Here's some example data:
import numpy as np
a = np.random.rand(1000, 1000)
a[a < 0.1] = np.nan # set some random values to nan
b = np.ones_like(a)
One option is to use np.where to set the value of the result to 0 wherever one of your arrays is equal to NaN:
result = np.where(np.isnan(a), 0, a * b)
If you have to do several operations on an array that contains NaNs, you might consider using masked arrays, which provide a more general method for dealing with missing or invalid values:
masked_a = np.ma.masked_invalid(a)
result2 = masked_a * b
Here, result2 is another np.ma.masked_array whose .mask attribute is set according to where the NaN values were in a. To convert this back to a normal np.ndarray with the masked values replaced by 0s, you can use the .filled() method, passing in the fill value of your choice:
result_filled = result2.filled(0)
assert np.all(result_filled == result)
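If the temporary arrays from np.where or nan_to_num are what's costing you, a further option worth profiling (a sketch reusing the example data above, with the same where/out ufunc trick as in the first question) is to compute the product only at the valid positions and pre-fill zeros everywhere else:
import numpy as np

a = np.random.rand(1000, 1000)
a[a < 0.1] = np.nan
b = np.ones_like(a)
# multiply only where a is not NaN; the skipped slots keep the zeros
# pre-filled by `out`
result3 = np.multiply(a, b, where=~np.isnan(a), out=np.zeros_like(a))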
