I am running m.solve() in a try .. except construct to elegantly handle any exceptions raised by the solver due to reaching the maximum iterations or converging to an infeasibility, but I want to interrogate APPINFO and APPSTATUS to determine whether a solution was found. I was surprised to see that I always seem to get APPINFO=0 and APPSTATUS=1 even though the solver reports that a solution was not found.
What am I missing in my interpretation of the documentation on APPINFO and APPSTATUS?
Here is a piece of code to reproduce the error:
from gekko import GEKKO
m=GEKKO(remote=False)
m.x=m.Var()
m.y=m.Var()
m.total=m.Intermediate(m.x+m.y)
m.Equation(m.total>20) #if included, no feasible solution exists
m.Equation(m.x<9)
m.Equation(m.y<9)
m.Maximize(m.total)
m.options.SOLVER=3
try:
    m.solve()
except Exception as e:
    print('Exception', e)
print('APPINFO', m.options.APPINFO)
print('APPSTATUS', m.options.APPSTATUS)
Use debug=False so that Gekko does not raise an exception when it fails to solve. When an exception is raised, the results are not loaded back into m.options.
from gekko import GEKKO
m=GEKKO(remote=False)
m.x=m.Var()
m.y=m.Var()
m.total=m.Intermediate(m.x+m.y)
m.Equation(m.total>20) #if included, no feasible solution exists
m.Equation(m.x<9)
m.Equation(m.y<9)
m.Maximize(m.total)
m.options.SOLVER=3
m.solve(debug=False)
print('APPINFO', m.options.APPINFO)
print('APPSTATUS', m.options.APPSTATUS)
This produces the correct error response:
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 0.0156 sec
Objective : -18.023281704731964
Unsuccessful with error code 0
---------------------------------------------------
Creating file: infeasibilities.txt
Use command apm_get(server,app,'infeasibilities.txt') to retrieve file
#error: Solution Not Found
APPINFO 2
APPSTATUS 0
I perform many logistic regression analyses with different parameters. From time to time I get an annoying message that the iteration limit has been reached:
/home/arnold/bin/anaconda/envs/vehicles/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  n_iter_i = _check_optimize_result(
I don't want this message; I get thousands of them in my project during one run. Is there a way to suppress it?
What I'd like is some indication that something went wrong, e.g. an exception being raised, so that I can check afterwards which analyses were OK and which were not. Is there a way to do that?
The message is a custom warning defined in sklearn.exceptions. You can suppress it (as noted in the comments), and you can also catch it as if it were an error. Catching it allows you to record the message, which might help you check afterwards which analyses were okay.
The following code sample should help you get started. It is based on the Python warnings documentation. The with block catches and records the warning produced by the logistic regression.
import warnings
from sklearn import datasets, linear_model, exceptions
import matplotlib.pyplot as plt

# >>> Start: create dummy data
blob = datasets.make_blobs(n_samples=100, centers=1)[0]
x = blob[:, 0].reshape(-1, 1)
# y needs to be integer for logistic regression
y = blob[:, 1].astype(int)
plt.scatter(x, y)
# <<< End: create dummy data

# Create logistic regression; set max_iter to a low number
lr = linear_model.LogisticRegression(max_iter=2)

with warnings.catch_warnings(record=True) as w:
    # Cause all warnings to always be triggered.
    warnings.simplefilter("always")
    # Trigger a warning.
    lr.fit(x, y)
After running the code, you can check the contents of variable w.
print(type(w))
print(w[-1].category)
print(w[-1].message)
Output:
<class 'list'>
<class 'sklearn.exceptions.ConvergenceWarning'>
lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
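Building on this, here is a sketch of the use case described in the question: many analyses in a loop, recording which ones converged. The dataset and the max_iter values are made up for illustration; the tight limit forces a ConvergenceWarning while the loose one converges.

```python
import warnings
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.exceptions import ConvergenceWarning

# made-up data for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)

results = {}
for name, max_iter in [('tight', 1), ('loose', 1000)]:
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always", ConvergenceWarning)
        model = LogisticRegression(max_iter=max_iter).fit(X, y)
    # the fit converged if no ConvergenceWarning was recorded
    converged = not any(issubclass(warning.category, ConvergenceWarning)
                        for warning in w)
    # only record a score for fits that converged
    results[name] = model.score(X, y) if converged else None

print(results)
```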
I am running different sets of data to identify the best modeling algorithm for each dataset. I loop through the datasets to check various algorithms and select the best models based on the test score. I know that some of my datasets are not going to converge for specific models (e.g. LogisticRegression), and I get a convergence warning (e.g. "lbfgs failed to converge (status=1):"). I don't want to ignore the warning. My goal is to return the score for models that converge and not return any value if I get this convergence warning.
I am able to work around this by turning the warning into an error using warnings.filterwarnings('error', category=ConvergenceWarning, module='sklearn') and then using try and except to get what I want. The problem with this method is that if there is any other error besides the sklearn convergence warning, it will also trigger the except branch and I won't be able to tell what caused the error. Is there any other way to capture this warning besides turning it into an error?
Here is a simplified overview of my code (data not included, as it is a big dataset and I don't think it is relevant to the question). Most of the Stack Overflow questions I was able to find are about how to suppress the warning (How to disable ConvergenceWarning using sklearn?) or how to turn it into an error, and I didn't find any other method to capture the warning without turning it into an error.
import warnings
from sklearn.linear_model import LogisticRegression
from sklearn.exceptions import ConvergenceWarning

warnings.filterwarnings('error', category=ConvergenceWarning, module='sklearn')
try:
    model = LogisticRegression().fit(x_train, y_train)
    predict = model.predict(x_test)
except:
    print('model did not converge')
There are a couple of things that can help you here.
First, you can specify what kind of exception you are looking for, and you can specify multiple except clauses. Here is an example from the docs:
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error: {0}".format(err))
except ValueError:
    print("Could not convert data to an integer.")
except:
    print("Unexpected error:", sys.exc_info()[0])
    raise
The other thing to notice in the above is the except OSError as err. Using this syntax, you can print the error message associated with the error.
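Applied to your case, a minimal sketch could look like this. It keeps your filterwarnings('error', ...) workaround but adds a specific except clause for ConvergenceWarning, so any other failure is reported separately. The small dataset and the deliberately low max_iter are made up here to force the warning:

```python
import warnings
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.exceptions import ConvergenceWarning

warnings.filterwarnings('error', category=ConvergenceWarning, module='sklearn')

# made-up data; max_iter=1 forces the convergence warning
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

try:
    model = LogisticRegression(max_iter=1).fit(X, y)
    outcome = 'converged'
except ConvergenceWarning as err:
    # the warning turned into an error, caught specifically
    outcome = 'did not converge: %s' % err
except Exception as err:
    # anything else is reported separately instead of being swallowed
    outcome = 'unexpected error: %s' % err

print(outcome)
```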
I installed the Baron solver on Windows and I use Pyomo in Spyder. I moved baron.exe to the current directory.
When I run my code, this error appears:
Solver log file: 'C:\Users\~\AppData\Local\Temp\tmpumg5hxvr.baron.log'
Solver solution file: 'C:\Users\~\AppData\Local\Temp\tmpbmx1pfpt.baron.soln'
Solver problem files: ('C:\\Users\\~\\AppData\\Local\\Temp\\tmpygo80gcy.pyomo.bar',)
C:\Users\~\baron.exe: can't open C:\Users\~\AppData\Local\Temp\tmpygo80gcy.pyomo.bar.nl
ERROR: Solver (baron) returned non-zero return code (1)
ERROR: See the solver log above for diagnostic information.
Traceback
ApplicationError: Solver (baron) did not exit normally
But the log file C:\Users\~\AppData\Local\Temp\tmpumg5hxvr.baron.log does not exist.
How can I fix this problem?
Thanks for your help in advance.
I ran into the same problem today. I solved it by changing the solver name 'baron' to 'Baron'.
I am running an optimization in Gurobi which crashes whenever I add a quadratic constraint to the problem that I generate through the following lines of code:
expression = gurobipy.QuadExpr()
for course_key in hostings:
    for kitchen_key in hostings[course_key]:
        if not hostings[course_key][kitchen_key].large_gathering:
            expression.add(x[kitchen_key, course_key, team_key1] * x[kitchen_key, course_key, team_key2])
mod.addQConstr(expression, gurobipy.GRB.LESS_EQUAL, 1, "1MeetingPerPair_" + team_key1 + "_" + team_key2)
The optimization always crashes after three iterations with the following error message:
Unhandled exception at 0x00007FFC596CE6FC (ntdll.dll) in python.exe:
0xC0000374: A heap has been corrupted (parameters: 0x00007FF8FF82C6E0).
Does anyone have any clue as to how this problem could be solved? I am rather clueless as to what the error message even means. I tried constructing the constraint in different ways (e.g. using .add instead of .addTerms), but that didn't change anything. I appreciate any help!
I was just trying to catch an OptimizeWarning thrown by the scipy.optimize.curve_fit function, but I realized it was not recognized as a valid exception.
This is a simplified, non-working sketch of what I'm doing:
from scipy.optimize import curve_fit

try:
    popt, pcov = curve_fit(some parameters)
except OptimizeWarning:
    print 'Maxed out calls.'
    # do something
I looked around the docs but there was nothing there.
Am I missing something obvious or is it simply not defined for some reason?
BTW, this is the full warning I get and that I want to catch:
/usr/local/lib/python2.7/dist-packages/scipy/optimize/minpack.py:604: OptimizeWarning: Covariance of the parameters could not be estimated
category=OptimizeWarning)
You can require that Python raise this warning as an exception using the following code:
import warnings
from scipy.optimize import OptimizeWarning
warnings.simplefilter("error", OptimizeWarning)
# Your code here
Issues with warnings
Unfortunately, warnings in Python have a few issues you need to be aware of.
Multiple filters
First, there can be multiple filters, so your warning filter can be overridden by something else. This is not too bad and can be worked around with the catch_warnings context manager:
import warnings
from scipy.optimize import OptimizeWarning
import warnings
from scipy.optimize import OptimizeWarning

with warnings.catch_warnings():
    warnings.simplefilter("error", OptimizeWarning)
    try:
        ...  # Do your thing
    except OptimizeWarning:
        ...  # Do your other thing
Raised Once
Second, warnings are only raised once by default. If your warning has already been raised before you set the filter, changing the filter won't make it raise again.
To my knowledge, there is unfortunately not much you can do about this. You'll want to make sure you run warnings.simplefilter("error", OptimizeWarning) as early as possible.
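For example, here is a minimal sketch that sets the filter before the first call. The data is made up: fitting a two-parameter model to only two points gives an exact fit with no degrees of freedom left, which is one way to trigger the "covariance could not be estimated" warning:

```python
import warnings
import numpy as np
from scipy.optimize import curve_fit, OptimizeWarning

def model(x, a, b):
    return a * x + b

# two points and two parameters: an exact fit, so the covariance
# cannot be estimated and curve_fit emits an OptimizeWarning
x = np.array([0.0, 1.0])
y = np.array([1.0, 3.0])

with warnings.catch_warnings():
    warnings.simplefilter("error", OptimizeWarning)
    try:
        popt, pcov = curve_fit(model, x, y)
        outcome = 'ok'
    except OptimizeWarning:
        outcome = 'covariance could not be estimated'

print(outcome)
```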