I haven't had trouble with pickling a Pyomo model before, but now a recursion error is being raised when trying to pickle Expressions.
Interestingly, the error can be avoided in the example below in a couple of ways:
by removing the "mutable" flag from the parameter
by reducing the size of the set to a very small value, to e.g. range(0, 10)
...but I don't know why these would fix the error, nor are they workable solutions for the actual optimization model I am trying to pickle.
The following example generates the error from the pickling of just a single Expression.
(I am using pyomo=5.6.2, cloudpickle=0.6.1, and python=3.7.4)
import cloudpickle
import pyomo.environ as pyo
test_model = pyo.ConcreteModel()
# A set is added.
set_elements = list(range(0, 500))
test_model.my_set = pyo.Set(initialize=set_elements)
# A parameter is added.
param_values = dict()
for e in set_elements:
    param_values[e] = 1
test_model.my_param = pyo.Param(test_model.my_set, initialize=param_values, mutable=True)
# An expression is added.
def calculation_rule(mdl):
    return sum(mdl.my_param[e] for e in mdl.my_set)
test_model.calculation_expr = pyo.Expression(rule=calculation_rule)
# We attempt to pickle the expression.
pickle_str = cloudpickle.dumps(test_model.calculation_expr)
The last line of the above code raises the following exception:
PicklingError: Could not pickle object as excessively deep recursion required.
QUESTION: Do I need to modify the way the Expression is written in order to pickle the model, or should I save the model using something other than Cloudpickle?
Thanks in advance for any help!
One fix is to use Pyomo's quicksum instead of Python's sum. It leads to more compact expression trees and seems to fix the recursion issue you're seeing:
# An expression is added.
def calculation_rule(mdl):
    return pyo.quicksum(mdl.my_param[e] for e in mdl.my_set)
test_model.calculation_expr = pyo.Expression(rule=calculation_rule)
Documentation here.
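For what it's worth, here is a quick sanity check (a sketch, not verified against your exact versions) showing that the step that used to fail should now go through:

# quicksum builds a more compact sum expression, so pickling no longer has to
# recurse through hundreds of nested addition nodes
pickle_str = cloudpickle.dumps(test_model.calculation_expr)
model_str = cloudpickle.dumps(test_model)  # pickling the whole model should also work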
Related
I would like to understand some Python code that I've been reading:
my_stream = some.library.method(arg1=val, arg2=val)(input_stream)
My guess is that some.library.method() returns an iterator into which input_stream is passed as an argument. Is this correct?
I have searched "python generator functions" to get documentation on this type of syntax but have found nothing other than nested examples such as: sum(mult(input)). Can anyone provide an explanation or link?
UPDATE
Below is a specific example:
tokenized_train_stream = trax.data.Tokenize(vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR)(train_stream)
Is this correct?
If you are unsure what a thing in Python is, you can use the built-in inspect module. It provides numerous is-something functions, among them isgenerator. A simple usage example:
import inspect
lst = [1,2,3]
gen = (i for i in [1,2,3])
print(inspect.isgenerator(lst)) # False
print(inspect.isgenerator(gen)) # True
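As for the original question about some.library.method(arg1=val, arg2=val)(input_stream): the first call usually returns a callable (a closure or a configured object), and the second pair of parentheses immediately calls it on input_stream. A rough sketch of that pattern, with made-up names purely for illustration:

import inspect

def tokenize(vocab_file, vocab_dir):
    # the outer call only configures the operation and returns a callable
    def apply(stream):
        # the inner callable lazily walks the stream, yielding one item at a time
        for item in stream:
            yield (item, vocab_file, vocab_dir)  # stand-in for real tokenization
    return apply

train_stream = iter(["a b c", "d e f"])
tokenized_train_stream = tokenize("vocab.txt", "/data")(train_stream)
print(inspect.isgenerator(tokenized_train_stream))  # True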
I am trying to do an optimization using pygmo. I am facing an error from the pygmo package. It is basically a vector optimization.
I get the following error when I initialize the population:
"TypeError: No registered converter was able to produce a C++ rvalue of type double from this Python object of type numpy.ndarray"
I have tried removing the scipy call from my function, but then my function throws an error.
class square_fit:
    def fitness(self, rect_length):
        sq_area = rect_width * rect_length
        pulse_area = sc.integrate.simps(vge_reg, dx=0.1)
        rmse = pulse_area - sq_area
        return [sq_area, pulse_area, rmse]
    def get_bounds(self):
        return ([2.5], [3.633])
algo = pg.algorithm(uda = pg.nlopt('auglag'))
algo.extract(pg.nlopt).local_optimizer = pg.nlopt('var2')
algo.set_verbosity(200)
pop = pg.population(prob = square_fit(), size = 1) # error happens
pop.problem.c_tol = [1E-6] * 6
pop = algo.evolve(pop)
I want to know whether I can do an optimization in pygmo that calls other Python packages like numpy or scipy inside my cost function. Also, what is a C++ rvalue, and which variable is the numpy.ndarray the error refers to?
I found the reason for this error. You just need to make sure the values returned by the fitness function are of the type pygmo expects, rather than numpy arrays.
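Concretely, pygmo converts each entry of the returned fitness vector to a C++ double, so any numpy arrays produced along the way need to be reduced to plain Python floats first. One way that might look (a sketch; rect_width and vge_reg below are placeholders standing in for the values defined elsewhere in the question):

import numpy as np
from scipy import integrate

rect_width = 1.0               # placeholder for the value defined in the question
vge_reg = np.random.rand(100)  # placeholder for the data defined in the question

class square_fit:
    def fitness(self, x):
        # x arrives as a 1-D numpy array; pull out the scalar decision variable
        rect_length = float(x[0])
        sq_area = rect_width * rect_length
        pulse_area = integrate.simps(vge_reg, dx=0.1)  # 'simpson' in newer SciPy releases
        rmse = pulse_area - sq_area
        # return plain floats so each entry can be converted to a C++ double
        return [float(sq_area), float(pulse_area), float(rmse)]
    def get_nobj(self):
        # assuming the three returned values are meant as three objectives
        return 3
    def get_bounds(self):
        return ([2.5], [3.633])

And yes, calling into numpy or scipy inside the cost function is fine in general; pygmo only cares about the types (and length) of the vector you return.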
I wanted to apply a very simple function using ndimage.generic_filter() from scipy. This is the code:
import numpy as np
import scipy.ndimage as ndimage
data = np.random.rand(400,128)
dimx = int(np.sqrt(np.size(data,0)))
dimy = dimx
coord = np.random.randint(np.size(data,0), size=(dimx,dimy))
def test_func(values):
    idx_center = int(values[4])
    weight_center = data[idx_center]
    weights_around = data[values]
    differences = weights_around - weight_center
    distances = np.linalg.norm(differences, axis=1)
    return np.max(distances)

results = ndimage.generic_filter(coord,
                                 test_func,
                                 footprint=np.ones((3,3)))
When I execute it though, the following error shows up:
SystemError: <class 'int'> returned a result with an error set
The error occurs when trying to coerce values[4] to an int. If I run test_func() by itself on a random array values, without ndimage.generic_filter(), the function works fine.
Why is this error occurring? Is there a way to make it work?
For your case:
This must be a bug in either Python or SciPy. Please file a bug at https://bugs.python.org and/or https://www.scipy.org/bug-report.html. Include the version numbers of Python and NumPy/SciPy, the full code that you have here, and the entire traceback.
(Also, if you can find a way to trigger this bug that doesn't require the use of randomness, they will likely appreciate it. But if you can't find such a method, then please do file it as-is.)
In general:
"[R]eturned a result with an error set" is something that can only be done at the C level.
In general, the Python/C API expects most C functions to do one of two things:
1. Set an exception using one of these functions and return NULL (corresponds to throwing an exception).
2. Don't set an exception and return a "real" value, usually a PyObject* (corresponds to returning a value, including returning None).
These two cases are normally incorrect:
3. Set an exception (or fail to clear one that already exists), but then return some value other than NULL.
4. Don't set an exception, but return NULL.
Python is raising a SystemError because the implementation of int, in the Python standard library, tried to do (3), possibly as a result of SciPy doing it first. This is always wrong, so there must be a bug in either Python or the SciPy code that it called into.
I was having a very similar experience with Python 3.8.1 and SciPy 1.4.1 on Linux. A workaround was to introduce np.floor so that:
centre = int(window.size / 2) becomes centre = int(np.floor(window.size/2))
which seems to have resolved the issue.
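Applied to the code in the question, the analogous one-line change would be the following (untested on the affected versions; the rest of test_func stays the same):

idx_center = int(np.floor(values[4]))   # instead of int(values[4])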
I am using HTMLTestRunner to create an HTML report for my unit tests. I suppose the code for HTMLTestRunner provided here was written and optimized for Python 2, because I hit three Python 3 incompatibilities, such as the use of StringIO instead of io.
Now, line 639 uses the has_key method, in this code snippet:
def sortResult(self, result_list):
    # unittest does not seems to run in any particular order.
    # Here at least we want to group them together by class.
    rmap = {}
    classes = []
    for n,t,o,e in result_list:
        cls = t.__class__
        if not rmap.has_key(cls):
            rmap[cls] = []
            classes.append(cls)
        rmap[cls].append((n,t,o,e))
    r = [(cls, rmap[cls]) for cls in classes]
    return r
Since has_key has been removed in Python 3, I get an error on this line. I am not that familiar with Python, so I searched and found that the in operator can be a suitable replacement. So how can I replace this has_key method? I tried simply replacing has_key with in, but that failed with an invalid syntax error.
Instead of
if not rmap.has_key(cls):
try
if cls not in rmap:
You can see the docs for details.
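For context, this is roughly how that part of sortResult looks after the change (a Python 3-compatible sketch, not the library's official fix):

def sortResult(self, result_list):
    # group results by test class, preserving the order classes are first seen
    rmap = {}
    classes = []
    for n, t, o, e in result_list:
        cls = t.__class__
        if cls not in rmap:          # Python 3 replacement for rmap.has_key(cls)
            rmap[cls] = []
            classes.append(cls)
        rmap[cls].append((n, t, o, e))
    return [(cls, rmap[cls]) for cls in classes]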
I have a pickled file called classifier.pkl that I am trying to load into another module. However, I get an error I don't understand.
My code to pickle:
features = ['bob','ice','snowing'] #... shortened for exposition's sake
def extract_features(document):
    return {'contains(%s)' % word: (word in set(document))
            for word in all_together_word_list}
training_set = classify.util.apply_features(extract_features,tweets[0])
classifier = NaiveBayesClassifier.train(training_set)
cPickle.dump((features, extract_features, classifier), open('cocaine_classifier.pkl','wb'))
My code to unpickle:
features, extract_features, classifier = cPickle.load(open('cocaine_classifier.pkl','rb'))
My error:
AttributeError: 'module' object has no attribute 'extract_features'
A while ago I made the .pkl file by pickling three things:
features : list
extract_features : function
classifier : instance of NLTK Naive Bayes Classifier
Puzzlingly, I get the same error with the following code:
x = cPickle.load(open('cocaine_classifier.pkl','rb'))
Why can't I retrieve three things? Even when I'm not trying to unpack the tuple?
Update
As NPE pointed out, the fully-qualified path under which the function was pickled must be importable, exactly as it was, at the time of unpickling. I was debugging in the Terminal, and from mod import * loads everything into the namespace, whereas import mod as m does not.
The problem is that when you pickle a function, only the (fully-qualified) name of the function is pickled, not the function itself. This means that you have to have the function definition in place when you're unpickling.
Did you by any chance mean to pickle the result of calling extract_features?
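To illustrate the point about names: pickling a function stores only its module-qualified name, not its code, so unpickling works only if that exact name can be imported again. A small sketch (the word list here is just filler):

import pickle

def extract_features(document):
    return {'contains(%s)' % word: (word in set(document)) for word in ['bob', 'ice']}

data = pickle.dumps(extract_features)
# the payload holds a reference such as '__main__.extract_features', not the function body;
# loading it in another module fails with AttributeError unless that name resolves there
restored = pickle.loads(data)
print(restored is extract_features)   # True within the same session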