Catch OptimizeWarning as an exception - python

I was just trying to catch an OptimizeWarning thrown by the scipy.optimize.curve_fit function, but I realized it was not recognized as a valid exception.
Here is a simplified, non-working sketch of what I'm doing:
from scipy.optimize import curve_fit

try:
    popt, pcov = curve_fit(some parameters)
except OptimizeWarning:
    print 'Maxed out calls.'
    # do something
I looked around the docs but there was nothing there.
Am I missing something obvious or is it simply not defined for some reason?
BTW, this is the full warning I get and that I want to catch:
/usr/local/lib/python2.7/dist-packages/scipy/optimize/minpack.py:604: OptimizeWarning: Covariance of the parameters could not be estimated
category=OptimizeWarning)

You can require that Python raise this warning as an exception using the following code:
import warnings
from scipy.optimize import OptimizeWarning
warnings.simplefilter("error", OptimizeWarning)
# Your code here
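As a self-contained illustration (using UserWarning as a stand-in, since triggering a real OptimizeWarning requires a fit that actually fails), the "error" filter turns a warning into an exception you can catch:

```python
import warnings

# "error" turns matching warnings into raised exceptions
warnings.simplefilter("error", UserWarning)

try:
    warnings.warn("Covariance of the parameters could not be estimated",
                  UserWarning)
except UserWarning as err:
    # The warning text is now available as an ordinary exception message
    message = str(err)

print(message)
```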
Issues with warnings
Unfortunately, warnings in Python have a few issues you need to be aware of.
Multiple filters
First, there can be multiple filters, so your warning filter can be overridden by something else. This is not too bad and can be worked around with the catch_warnings context manager:
import warnings
from scipy.optimize import OptimizeWarning
with warnings.catch_warnings():
    warnings.simplefilter("error", OptimizeWarning)
    try:
        pass  # Do your thing
    except OptimizeWarning:
        pass  # Do your other thing
Raised Once
Second, warnings are only raised once per location by default. If the warning was already emitted before you set the filter, changing the filter afterwards won't make it raise again.
To my knowledge, there is unfortunately not much you can do about this. You'll want to make sure you run warnings.simplefilter("error", OptimizeWarning) as early as possible.
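A minimal illustration of the once-per-location behaviour, using a plain UserWarning as a stand-in:

```python
import warnings

def noisy():
    warnings.warn("demo warning", UserWarning)

# Default action: a given warning is only shown once per call site.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default")
    noisy()
    noisy()
seen_default = len(caught)  # the second call is silently dropped

# "always" (like "error") bypasses the once-per-location registry.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    noisy()
    noisy()
seen_always = len(caught)  # both calls are recorded
```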

Related

Scipy solve_ivp crashes script and debugger without error message

I am trying to solve a differential equation with scipy.integrate.solve_ivp using the "RK45" method. However, directly after the first actual call to my system function f, the script crashes without any exception message or similar; it just stops. The same thing happens when I try to run it under the debugger as python3 -m pdb ./solve.py. I also tried using the trace module as described here. However, that gives me too much information and I don't really see where exactly the error appears. The error strictly appears directly after the system function is called, somewhere in the scipy module.
I currently have not constructed a minimal example to reproduce this; I might add that later. For now, I am wondering if there are further ways I could try to debug this problem. The error might occur somewhere outside of the actual Python code.
When I try running it in Jupyter, the same error message as shown in this question appears.
Here is the example:
import numpy
import scipy.integrate as integrate

N = 300

def f(t, x):
    return numpy.ravel(numpy.ones((2, N, N, N), dtype=complex))

ival = numpy.ravel(numpy.ones((2, N, N, N), dtype=complex))
integrate.solve_ivp(f, (0, 100), ival)
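One plausible explanation (an assumption, since the question does not confirm it) is that the process is killed by the operating system for running out of memory rather than raising a Python exception: the state vector is enormous, and RK45 keeps several copies of it for its intermediate stages. A quick back-of-the-envelope check:

```python
# Size of one state vector: 2 * N**3 complex128 values, 16 bytes each.
N = 300
n_elements = 2 * N**3
bytes_per_state = n_elements * 16  # complex128 = 16 bytes

gib = bytes_per_state / 2**30
print(f"{n_elements} elements, {gib:.2f} GiB per state vector")
# RK45 needs roughly half a dozen such arrays for its stages, so peak
# usage easily reaches several GiB; an out-of-memory kill by the OS
# looks exactly like a silent crash with no traceback.
```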

Hide scikit-learn ConvergenceWarning: "Increase the number of iterations (max_iter) or scale the data"

I am using Python to predict values and I am getting many warnings like:

C:\Users\ASMGX\anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  n_iter_i = _check_optimize_result(
This floods the output and prevents me from seeing my own printed results.
Is there any way I can stop these warnings from showing?
You can use the warnings-module to temporarily suppress warnings. Either all warnings or specific warnings.
In this case scikit-learn is raising a ConvergenceWarning so I suggest suppressing exactly that type of warning. That warning-class is located in sklearn.exceptions.ConvergenceWarning so import it beforehand and use the context-manager catch_warnings and the function simplefilter to ignore the warning, i.e. not print it to the screen:
import warnings
from sklearn.exceptions import ConvergenceWarning

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=ConvergenceWarning)
    optimizer_function_that_creates_warning()
You can also ignore that specific warning globally to avoid using the context-manager:
import warnings
from sklearn.exceptions import ConvergenceWarning

warnings.simplefilter("ignore", category=ConvergenceWarning)
optimizer_function_that_creates_warning()
I suggest using the context-manager though since you are sure about where you suppress warnings. This way you will not suppress warnings from unexpected places.
Alternatively, pick a suitable solver and raise max_iter so the model actually converges:
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(solver='lbfgs', max_iter=500000).fit(x_train, y_train)

How to capture specific warning without raising error in python

I am running different sets of data to identify the best modeling algorithm for each dataset. I loop through the datasets, check various algorithms, and select the best model based on the test score. I know that some of my datasets are not going to converge for specific models (e.g. LogisticRegression), producing a convergence warning ("lbfgs failed to converge (status=1)"). I don't want to ignore the warning. My goal is to return the score for models that converge and no value at all when I get this convergence warning.
I am able to work around this by turning the warning into an error using warnings.filterwarnings('error', category=ConvergenceWarning, module='sklearn') and then going through try and except to get what I want. The problem with this method is that any other error besides the sklearn convergence warning also ends up in the bare except, so I wouldn't be able to tell what caused it. Is there any other way to capture this warning besides turning it into an error?
Here is a simplified overview of my code (data not included, as it's a big dataset and I don't think it is relevant to the question). Most of the Stack Overflow questions I was able to find are about how to suppress the warning (How to disable ConvergenceWarning using sklearn?) or how to turn it into an error; I didn't find any other method to capture the warning without turning it into an error.
import warnings
from sklearn.linear_model import LogisticRegression
from sklearn.exceptions import ConvergenceWarning

warnings.filterwarnings('error', category=ConvergenceWarning, module='sklearn')
try:
    model = LogisticRegression().fit(x_train, y_train)
    predict = model.predict(x_test)
except:
    print('model did not converge')
There are a couple of things that can help you here.
First, you can specify what kind of exception you are looking for, and you can specify multiple except clauses. Here is an example from the docs:
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error: {0}".format(err))
except ValueError:
    print("Could not convert data to an integer.")
except:
    print("Unexpected error:", sys.exc_info()[0])
    raise
The other thing to notice in the above is the except OSError as err. Using this syntax, you can print the error message associated with the error.
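Applied to the original problem, the pattern looks roughly like this. This is a sketch: a local stand-in class replaces sklearn.exceptions.ConvergenceWarning and a fake fit_model() replaces the real training call, so the snippet runs without sklearn:

```python
import warnings

class ConvergenceWarning(UserWarning):
    """Stand-in for sklearn.exceptions.ConvergenceWarning."""

def fit_model():
    # Pretend the solver hit its iteration limit.
    warnings.warn("lbfgs failed to converge (status=1)", ConvergenceWarning)
    return "model"

warnings.filterwarnings('error', category=ConvergenceWarning)

try:
    model = fit_model()
    score = 1.0        # score the model normally
except ConvergenceWarning:
    score = None       # no value for models that did not converge
except Exception:
    # Any *other* failure still surfaces with its real traceback.
    raise

print(score)
```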

Python - Replacing warnings with a simple message

I have built a few off-the-shelf classifiers from sklearn and there are some expected scenarios where I know the classifier is bound to perform badly and not predict anything correctly. The sklearn.svm package runs without an error but raises the following warning.
~/anaconda/lib/python3.5/site-packages/sklearn/metrics/classification.py:1074: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
I wish to suppress this warning and instead replace with a message to stdout, say for instance, "poor classifier performance".
Is there any way to suppress warnings in general?
Suppressing all warnings is easy with -Wignore (see warning flag docs)
The warnings module can do some finer-tuning with filters (ignore just your warning type).
Capturing just your warning (assuming there isn't some API in the module to tweak it) and doing something special could be done using the warnings.catch_warnings context manager and code adapted from "Testing Warnings":
import warnings

class MyWarning(Warning):
    pass

def something():
    warnings.warn("magic warning", MyWarning)

with warnings.catch_warnings(record=True) as w:
    # Cause all warnings to always be triggered.
    warnings.simplefilter("always")
    # Trigger a warning.
    something()
    # Verify some things
    if (len(w) == 1
            and issubclass(w[0].category, MyWarning)
            and "magic" in str(w[-1].message)):
        print('something magical')
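The same recipe can be adapted to the classifier scenario: run the evaluation, then inspect what was recorded and print the friendlier message. Again a sketch, with UndefinedMetricWarning stubbed out locally so it runs without sklearn:

```python
import warnings

class UndefinedMetricWarning(UserWarning):
    """Stand-in for sklearn.exceptions.UndefinedMetricWarning."""

def evaluate():
    # Pretend sklearn.metrics raised its warning while scoring.
    warnings.warn("F-score is ill-defined and being set to 0.0 "
                  "due to no predicted samples.", UndefinedMetricWarning)
    return 0.0

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # make sure nothing is swallowed
    score = evaluate()

if any(issubclass(w.category, UndefinedMetricWarning) for w in caught):
    message = "poor classifier performance"
    print(message)
```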

Ignore pandas warnings

There are some similar questions, but the replies there do not work for me.
I am trying to do this, as explained in the warnings documentation:
def disable_pandas_warnings():
    import warnings
    warnings.resetwarnings()  # Maybe somebody else is messing with the warnings system?
    warnings.filterwarnings('ignore')  # Ignore everything
    # In case ignoring everything does not work: ignore specific messages, using regexes
    warnings.filterwarnings('ignore', '.*A value is trying to be set on a copy of a slice from a DataFrame.*')
    warnings.filterwarnings('ignore', '.*indexing past lexsort depth may impact performance.*')
And I call this at the start of my test/program:
disable_pandas_warnings()
As you can see in the comments, I have:
- made sure that the warnings filters are not polluted (since filtering is performed on a first-match basis)
- ignored all messages
- ignored specific messages (by giving a message regex)
Nothing seems to work: messages are still displayed. How can I disable all warnings?
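For what it's worth, the message-regex filters themselves do work in isolation, as this minimal check with plain warnings (no pandas involved) shows. That suggests the filters are being undone later, for example by another resetwarnings() call, a -W command-line option, or a library re-registering its own filters after disable_pandas_warnings() runs:

```python
import warnings

warnings.resetwarnings()
warnings.filterwarnings(
    'ignore', '.*A value is trying to be set on a copy of a slice from a DataFrame.*')

with warnings.catch_warnings(record=True) as caught:
    # The filtered message is suppressed; anything else is recorded.
    warnings.warn(
        "A value is trying to be set on a copy of a slice from a DataFrame")
    warnings.warn("some unrelated warning")

messages = [str(w.message) for w in caught]
print(messages)
```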
