Python - Replacing warnings with a simple message

I have built a few off-the-shelf classifiers from sklearn and there are some expected scenarios where I know the classifier is bound to perform badly and not predict anything correctly. The sklearn.svm package runs without an error but raises the following warning.
~/anaconda/lib/python3.5/site-packages/sklearn/metrics/classification.py:1074: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
I wish to suppress this warning and instead replace it with a message to stdout, for instance "poor classifier performance".
Is there any way to suppress warnings in general?

Suppressing all warnings is easy with -Wignore (see the warning flag docs).
The warnings module allows finer-grained control with filters (e.g. ignoring just your warning type).
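For the warning in the question, that might look like this (a minimal sketch; UndefinedMetricWarning is defined in sklearn.exceptions):
import warnings
from sklearn.exceptions import UndefinedMetricWarning

# Silence only this warning type, leaving all other warnings visible.
warnings.filterwarnings("ignore", category=UndefinedMetricWarning)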
Capturing just your warning (assuming there isn't some API in the module to tweak it) and doing something special could be done using the warnings.catch_warnings context manager and code adapted from "Testing Warnings":
import warnings

class MyWarning(Warning):
    pass

def something():
    warnings.warn("magic warning", MyWarning)

with warnings.catch_warnings(record=True) as w:
    # Ensure the warning is always triggered, even if it was seen before.
    warnings.simplefilter("always")
    # Trigger a warning.
    something()

# Verify some things
if (len(w) == 1
        and issubclass(w[0].category, MyWarning)
        and "magic" in str(w[-1].message)):
    print('something magical')
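Applied to the question, a minimal sketch could catch UndefinedMetricWarning around the metric call and print the replacement message. The f1_score call below is only an illustration that triggers the warning; adapt it to your own evaluation code:
import warnings
from sklearn.exceptions import UndefinedMetricWarning
from sklearn.metrics import f1_score

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    # No positive class is predicted, so the F-score is ill-defined.
    score = f1_score([0, 0, 1], [0, 0, 0])

if any(issubclass(item.category, UndefinedMetricWarning) for item in w):
    print("poor classifier performance")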

Related

Logistic regression: how to suppress the "iteration limit reached" message

I perform many logistic regression analyses with different parameters. From time to time I get an annoying message that the iteration limit is reached.
/home/arnold/bin/anaconda/envs/vehicles/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  n_iter_i = _check_optimize_result(
I don't want this message; I get thousands of them in one run of my project. Is there a way to suppress it?
What I'd like is some indication that something has gone wrong, e.g. an exception being raised, so that I can check afterwards which analyses were OK and which went wrong. Is there a way to do that?
The message is a custom warning defined in sklearn.exceptions. You can suppress it (as noted in the comments), and you can also catch it as if it were an error. Catching it lets you record the message, which helps you check afterwards which analyses were okay.
The following code sample should help you get started. It is based on the Python warnings documentation. The with block catches and records the warning produced by the logistic regression.
import warnings
from sklearn import datasets, linear_model, exceptions
import matplotlib.pyplot as plt

# >>> Start: Create dummy data
blob = datasets.make_blobs(n_samples=100, centers=1)[0]
x = blob[:, 0].reshape(-1, 1)
# y needs to be integer for logistic regression
y = blob[:, 1].astype(int)
plt.scatter(x, y)
# <<< End: Create dummy data

# Create the logistic regression; set max_iter to a low number to provoke the warning
lr = linear_model.LogisticRegression(max_iter=2)

with warnings.catch_warnings(record=True) as w:
    # Cause all warnings to always be triggered.
    warnings.simplefilter("always")
    # Trigger a warning.
    lr.fit(x, y)
After running the code, you can check the contents of variable w.
print(type(w))
print(w[-1].category)
print(w[-1].message)
Output:
<class 'list'>
<class 'sklearn.exceptions.ConvergenceWarning'>
lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
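If, as the question asks, you mainly want to know afterwards which of your many analyses went wrong, another option is to escalate the warning to an exception per fit and record the failures. A minimal, self-contained sketch (the make_blobs problems and the deliberately low max_iter=2 are only stand-ins for your own analyses):
import warnings
from sklearn.exceptions import ConvergenceWarning
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for your many analyses: a few small classification problems.
analyses = [make_blobs(n_samples=50, centers=2, random_state=seed) for seed in range(3)]

failed = []
for i, (X, y) in enumerate(analyses):
    with warnings.catch_warnings():
        # Turn ConvergenceWarning into an exception for this fit only.
        warnings.simplefilter("error", ConvergenceWarning)
        try:
            LogisticRegression(max_iter=2).fit(X, y)
        except ConvergenceWarning:
            failed.append(i)

print("analyses that did not converge:", failed)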

Hide scikit-learn ConvergenceWarning: "Increase the number of iterations (max_iter) or scale the data"

I am using Python to predict values and getting many warnings like:
C:\Users\ASMGX\anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  n_iter_i = _check_optimize_result(
This prevents me from seeing my own printed results.
Is there any way I can stop these warnings from showing?
You can use the warnings module to temporarily suppress warnings, either all warnings or specific ones.
In this case scikit-learn raises a ConvergenceWarning, so I suggest suppressing exactly that type of warning. The warning class is located at sklearn.exceptions.ConvergenceWarning, so import it beforehand and use the catch_warnings context manager together with simplefilter to ignore the warning, i.e. not print it to the screen:
import warnings
from sklearn.exceptions import ConvergenceWarning

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=ConvergenceWarning)
    optimizer_function_that_creates_warning()
You can also ignore that specific warning globally to avoid using the context manager:
import warnings
warnings.simplefilter("ignore", category=ConvergenceWarning)
optimizer_function_that_creates_warning()
I suggest using the context manager, though, since then you know exactly where warnings are being suppressed. This way you will not suppress warnings from unexpected places.
Alternatively, set the solver and increase max_iter to address the underlying convergence problem:
from sklearn.linear_model import LogisticRegression
clf=LogisticRegression(solver='lbfgs', max_iter=500000).fit(x_train, y_train)

The sklearn.tree.tree module is deprecated in version 0.22 and will be removed in version 0.24

I'm using the DecisionTreeClassifier from scikit-learn (https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) and getting the following warning:
FutureWarning: The sklearn.tree.tree module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.tree. Anything that cannot be imported from sklearn.tree is now part of the private API.
I'm a bit confused about why I'm receiving this warning as I'm not using sklearn.tree.tree anywhere. I am using sklearn.tree as the warning suggests but still receive this warning. In fact I'm using code of the form:
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(<params>)
tree.fit(training_data, training_labels)
As per the example code given in https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html but still get this warning.
I've searched the scikit documentation and online and can't find how to update my code inline with the suggestion in the warning. Does anyone know what I need to change to fix the warning?
You can ignore the deprecation warning; it's only a warning. If your code isn't referencing that subpackage, I wouldn't worry: there is probably an import somewhere under the hood inside sklearn.
You could suppress all FutureWarnings, but then you might miss another, more important one from sklearn or another package. So I'd just ignore it for now. But if you want to:
import warnings
warnings.simplefilter('ignore', FutureWarning)
from sklearn.tree import ...
# ... Then turn warnings back on for other packages
warnings.filterwarnings('module') # or 'once', or 'always'
See the docs, or "How to suppress Future warning from import?", although obviously you replace import pandas with your own import statement.
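If you'd rather not silence every FutureWarning, a narrower option (a sketch using the message regex argument of warnings.filterwarnings) is to target just this message, so other FutureWarnings stay visible and no reset is needed afterwards:
import warnings

# The message argument is a regex matched against the start of the warning text.
warnings.filterwarnings(
    "ignore",
    message="The sklearn.tree.tree module is",
    category=FutureWarning,
)
from sklearn.tree import DecisionTreeClassifier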
It's just a warning for now -- it only matters once you upgrade scikit-learn to version 0.24, at which point whatever still imports sklearn.tree.tree will need to be updated.

Ignore warnings from Python modules (seaborn, sklearn)

There are many questions related to the question title above and all basically tell you to do:
import warnings
warnings.filterwarnings('ignore')
and to make sure this is placed before the first import.
However, even after doing this I get many warnings from seaborn and sklearn. I get UserWarning, DataConversionWarning and RuntimeWarning which, according to documentation, all inherit from Warning and should be covered by the above code.
Is there another way to hide those warnings?
(I cannot really solve most of them anyway)
EDIT
Example 1:
C:\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py:645: DataConversionWarning: Data with input dtype int32, int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
Example 2:
C:\Anaconda3\lib\site-packages\seaborn\distributions.py:340: UserWarning: Attempted to set non-positive bottom ylim on a log-scaled axis.
Invalid limit will be ignored.
ax.set_ylim(0, auto=None)
Example 2
It's a bit hard to track down: seaborn imports statsmodels, and in statsmodels/tools/sm_exceptions.py you find this line
warnings.simplefilter('always', category=UserWarning)
which reverses any previous setting for user warnings.
A solution for now would be to remove that line or to set the warning state after the import of seaborn (and hence statsmodels). In a future version of statsmodels this will be fixed by PR 4712, so using the development version of statsmodels would also be an option.
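A minimal sketch of the second workaround: do the imports first, then re-apply the blanket ignore filter so the 'always' setting from statsmodels does not override it:
import warnings
import seaborn as sns  # pulls in statsmodels, which resets the UserWarning filter

# Re-apply the filter *after* the imports so it takes effect last.
warnings.filterwarnings('ignore')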
Example 1
I did not find a way to reproduce the first example from sklearn, so that one may or may not have a different cause.
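For the sklearn case, one thing worth trying (a sketch; DataConversionWarning is defined in sklearn.exceptions) is to target that warning class directly instead of relying on the blanket filter:
import warnings
from sklearn.exceptions import DataConversionWarning

# Silence only the dtype-conversion warnings raised by scalers and similar transformers.
warnings.filterwarnings('ignore', category=DataConversionWarning)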

Catch OptimizeWarning as an exception

I was just trying to catch an OptimizeWarning thrown by the scipy.optimize.curve_fit function, but I realized it was not recognized as a valid exception.
This is a non-working simple idea of what I'm doing:
from scipy.optimize import curve_fit

try:
    popt, pcov = curve_fit(some parameters)
except OptimizeWarning:
    print 'Maxed out calls.'
    # do something
I looked around the docs but there was nothing there.
Am I missing something obvious or is it simply not defined for some reason?
BTW, this is the full warning I get and that I want to catch:
/usr/local/lib/python2.7/dist-packages/scipy/optimize/minpack.py:604: OptimizeWarning: Covariance of the parameters could not be estimated
category=OptimizeWarning)
You can require that Python raise this warning as an exception using the following code:
import warnings
from scipy.optimize import OptimizeWarning
warnings.simplefilter("error", OptimizeWarning)
# Your code here
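For example, applied to curve_fit (a minimal sketch; the two-point data set below is chosen only so that the covariance cannot be estimated and the warning fires):
import warnings
import numpy as np
from scipy.optimize import curve_fit, OptimizeWarning

def model(x, a, b):
    return a * np.exp(b * x)

# As many data points as parameters, so the covariance cannot be estimated.
x = np.array([0.0, 1.0])
y = np.array([1.0, 2.0])

warnings.simplefilter("error", OptimizeWarning)
try:
    popt, pcov = curve_fit(model, x, y)
except OptimizeWarning:
    print('curve_fit raised an OptimizeWarning')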
Issues with warnings
Unfortunately, warnings in Python have a few issues you need to be aware of.
Multiple filters
First, there can be multiple filters, so your warning filter can be overridden by something else. This is not too bad and can be worked around with the catch_warnings context manager:
import warnings
from scipy.optimize import OptimizeWarning

with warnings.catch_warnings():
    warnings.simplefilter("error", OptimizeWarning)
    try:
        pass  # Do your thing
    except OptimizeWarning:
        pass  # Do your other thing
Raised Once
Second, warnings are only raised once by default. If the warning has already been raised before you set the filter, changing the filter won't make it be raised again.
To my knowledge, there is unfortunately not much you can do about this. You'll want to make sure you run warnings.simplefilter("error", OptimizeWarning) as early as possible.
