DocPlex giving CpoSolverException when using search phases - python

I am running a constraint programming model in docplex. When I add the following search phase I get an error in docplex:
model.set_parameters({'SearchType': 'DepthFirst', 'Workers': 2, 'LogVerbosity': 'Verbose'})
p1 = search_phase(
    vars=shifts.values(),
    varchooser=select_largest(var_impact()),
    valuechooser=select_largest(value_impact())
)
p2 = search_phase(
    vars=work_hours.values(),
    varchooser=select_smallest(domain_size()),
    valuechooser=select_random_value()
)
model.add(p1)
ans = model.solve(TimeLimit=100, execfile='cpoptimizer.exe')
I get the following error:
(base) dipplestix#DESKTOP-37BA91G:~/classes/csci 2951/hw2$ ./run.sh input/7_14.sched
! --------------------------------------------------- CP Optimizer 20.1.0.0 --
! Satisfiability problem - 196 variables, 266 constraints, 1 phase
! Presolve : 21 extractables eliminated, 7 constraints generated
! TimeLimit = 100
! Workers = 2
! LogVerbosity = Verbose
! SearchType = DepthFirst
! Initial process time : 0.02s (0.02s extraction + 0.00s propagation)
! . Log search space : 449.3 (before), 449.3 (after)
! . Memory usage : 501.9 kB (before), 501.9 kB (after)
! Using parallel search with 2 workers.
! ----------------------------------------------------------------------------
! Branches Non-fixed W Branch decision
Traceback (most recent call last):
File "src/run.py", line 8, in <module>
p = solve(sys.argv[1])
File "/home/dipplestix/classes/csci 2951/hw2/src/solver.py", line 97, in solve
ans = model.solve(TimeLimit=100, execfile='cpoptimizer.exe')
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/model.py", line 1080, in solve
msol = solver.solve()
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver.py", line 614, in solve
raise e
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver.py", line 607, in solve
msol = self.agent.solve()
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver_local.py", line 191, in solve
jsol = self._wait_json_result(EVT_SOLVE_RESULT)
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver_local.py", line 474, in _wait_json_result
data = self._wait_event(evt)
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver_local.py", line 424, in _wait_event
evt, data = self._read_message()
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver_local.py", line 533, in _read_message
frame = self._read_frame(6)
File "/home/dipplestix/anaconda3/lib/python3.7/site-packages/docplex/cp/solver/solver_local.py", line 593, in _read_frame
raise CpoSolverException("Nothing to read from local solver process. Process seems to have been stopped (rc={}).".format(rc))
docplex.cp.solver.solver.CpoSolverException: Nothing to read from local solver process. Process seems to have been stopped (rc=5).
However, if I use this search_phase instead, it works:
p1 = search_phase(
    vars=shifts.values(),
    varchooser=select_random_var(),
    valuechooser=select_random_value()
)
Any ideas what could be causing this?

Unfortunately, the evaluators that use statistics gathered over the branches of the search, like impacts, success rates, or objective variation measures, are not available for variable and value selection in DepthFirst search. You can use them in Restart and MultiPoint search. However, docplex should raise an error in this case rather than exiting this way; we will fix this for the next release.
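For example, a minimal sketch of the workaround, assuming the same model and variables as in the question: keep the impact-based phase, but switch the search type to one that supports it.
model.set_parameters({'SearchType': 'Restart', 'Workers': 2, 'LogVerbosity': 'Verbose'})
p1 = search_phase(
    vars=shifts.values(),
    varchooser=select_largest(var_impact()),
    valuechooser=select_largest(value_impact())
)
model.add(p1)
ans = model.solve(TimeLimit=100, execfile='cpoptimizer.exe')
Alternatively, keep SearchType at DepthFirst and use choosers that do not rely on branch statistics, such as select_random_var() and select_random_value(), as in the second snippet above.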

Related

Uber Ludwig: Issue Making Predictions

I decided to mess with Uber Ludwig again. I wanted to make a simple demo using the Python API that learns to add 1 to the input number. I have successfully produced a model, but the issue arises when predicting. I am running the newest release from GitHub on PopOS 19.10 with CPU TensorFlow.
Thank you for any help.
Edit: I have reproduced the issue on windows as well.
The error is as follows
Traceback (most recent call last):
File "predict.py", line 3, in <module>
x = model.predict({"numberIn":[1]}, return_type='dict')
File "/home/user/.local/lib/python3.7/site-packages/ludwig/api.py", line 914, in predict
gpu_fraction=gpu_fraction,
File "/home/user/.local/lib/python3.7/site-packages/ludwig/api.py", line 772, in _predict
self.model_definition['preprocessing']
File "/home/user/.local/lib/python3.7/site-packages/ludwig/data/preprocessing.py", line 159, in build_data
preprocessing_parameters
File "/home/user/.local/lib/python3.7/site-packages/ludwig/data/preprocessing.py", line 180, in handle_missing_values
dataset_df[feature['name']] = dataset_df[feature['name']].fillna(
AttributeError: 'list' object has no attribute 'fillna'
Here is my prediction script
from ludwig.api import LudwigModel
model = LudwigModel.load("/home/user/Documents/ludwig-test/plus1/results/api_experiment_run_0/model")
x = model.predict({"numberIn":[1]}, return_type='dict')
#x = model.predict({"numberIn":[1]}, return_type=<class 'dict'>) I tried this with no success
print(x)
Here are the contents of my training script.
mydata = {"numberIn":[], "value":[]}
for x in range(10000):
mydata["numberIn"].append(x)
mydata["value"].append(x + 1)
from ludwig.api import LudwigModel
print("Imported Ludwig")
modelobject = LudwigModel(model_definition_file="modeldef.yaml")
stats = modelobject.train(data_dict=mydata)
modelobject.close()
modeldef.yaml
input_features:
    -
        name: numberIn
        type: numerical
output_features:
    -
        name: value
        type: numerical
Solution: the data argument of the predict function is not positional, so data_dict needs to be specified as a keyword argument in this case:
x = modelobject.predict(data_dict=mydictionary)
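Applied to the prediction script above, a minimal sketch of the fix (same model path and return_type as in the question):
from ludwig.api import LudwigModel

model = LudwigModel.load("/home/user/Documents/ludwig-test/plus1/results/api_experiment_run_0/model")
x = model.predict(data_dict={"numberIn": [1]}, return_type='dict')
print(x)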

How can I prevent parallel python from quitting with "OSError: [Errno 35] Resource temporarily unavailable"?

Context
I'm trying to run 1000 simulations that involve (1) damaging a road network and then (2) measuring the traffic delays due to the damage. Both steps (1) and (2) involve creating multiple "maps". In step (1), I create 30 damage maps. In step (2), I measure the traffic delay for each of those 30 damage maps. The function then returns the average traffic delay over the 30 damage maps, and proceeds to run the next simulation. The pseudocode for the setup looks like this:
for i in range(0, 1000):  # for each simulation
    create 30 damage maps using parallel python
    measure the traffic delay of each damage map using parallel python
    compute the average traffic delay for simulation i
Since the maps are independent of each other, I've been using the parallel python package at each step.
Problem -- Error Message
The code has twice thrown the following error around the 72nd simulation (of 1000) and stopped running during step (1), which involves damaging the bridges.
An error has occurred during the function execution
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/ppworker.py", line 90, in run
__result = __f(*__args)
File "<string>", line 4, in compute_damage
File "<string>", line 3, in damage_bridges
File "/Library/Python/2.7/site-packages/scipy/stats/__init__.py", line 345, in <module>
from .stats import *
File "/Library/Python/2.7/site-packages/scipy/stats/stats.py", line 171, in <module>
from . import distributions
File "/Library/Python/2.7/site-packages/scipy/stats/distributions.py", line 10, in <module>
from ._distn_infrastructure import (entropy, rv_discrete, rv_continuous,
File "/Library/Python/2.7/site-packages/scipy/stats/_distn_infrastructure.py", line 16, in <module>
from scipy.misc import doccer
File "/Library/Python/2.7/site-packages/scipy/misc/__init__.py", line 68, in <module>
from scipy.interpolate._pade import pade as _pade
File "/Library/Python/2.7/site-packages/scipy/interpolate/__init__.py", line 175, in <module>
from .interpolate import *
File "/Library/Python/2.7/site-packages/scipy/interpolate/interpolate.py", line 32, in <module>
from .interpnd import _ndim_coords_from_arrays
File "interpnd.pyx", line 1, in init scipy.interpolate.interpnd
File "/Library/Python/2.7/site-packages/scipy/spatial/__init__.py", line 95, in <module>
from .ckdtree import *
File "ckdtree.pyx", line 31, in init scipy.spatial.ckdtree
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py", line 123, in cpu_count
with os.popen(comm) as p:
OSError: [Errno 35] Resource temporarily unavailable
Systems and versions
I'm running Python 2.7 in a PyCharm virtual environment with parallel python (pp) 1.6.5. My computer runs Mac OS High Sierra 10.13.3 with 8 GB of 1867 MHz DDR3 memory.
Attempted fixes
I gather that the problem is with the parallel python package or how I've used it, but am otherwise at a loss to understand how to fix this. It's been noted as a bug on the parallel python page -- wkerzendorf posted there:
Q: I get a Socket Error/Memory Error when using jobs that use os.system calls
A: The fix I found is using subprocess.Popen and piping the stdout and stderr into subprocess.PIPE. Here is an example: subprocess.Popen(['ls -rtl'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True). That fixed the error for me.
However, I wasn't at all sure where to make this modification.
I've also read that the problem may be with my system limits, per this Ghost in the Machines blog post. However, when I tried to reconfigure the max number of files and max user processes, I got the following message in terminal:
Could not set resource limits: 1: Operation not permitted
Code using parallel python
The code I'm working with is rather complicated (requires multiple input files to run) so I'm afraid I can't provide a minimal, reproducible example here. You can download and run a version of the code at this link.
Below I've included the code for step (1), in which I use parallel python to create the 30 damage maps. The number of workers is 4.
ppservers = () #starting a super cool parallelization
# Creates jobserver with automatically detected number of workers
job_server = pp.Server(ppservers=ppservers)
print "Starting pp with", job_server.get_ncpus(), "workers"

# set up jobs
jobs = []
for i in targets:
    jobs.append(job_server.submit(compute_damage, (lnsas[i % len(lnsas)], napa_dict, targets[i], i % sets, U[i % sets][:]), modules=('random', 'math', ), depfuncs=(damage_bridges, )))

# get the results that have already run
bridge_array_new = []
bridge_array_internal = []
indices_array = []
bridge_array_hwy_num = []
for job in jobs:
    (index, damaged_bridges_internal, damaged_bridges_new, num_damaged_bridges_road) = job()
    bridge_array_internal.append(damaged_bridges_internal)
    bridge_array_new.append(damaged_bridges_new)
    indices_array.append(index)
    bridge_array_hwy_num.append(num_damaged_bridges_road)
Additional functions
The compute_damage function looks like this.
def compute_damage(scenario, master_dict, index, scenario_index, U):
    '''goes from ground-motion intensity map to damage map'''
    # figure out component damage for each ground-motion intensity map
    damaged_bridges_internal, damaged_bridges_new, num_damaged_bridges = damage_bridges(scenario, master_dict, scenario_index, U)  # e.g., [1, 89, 598]; num_bridges_out is highway bridges only
    return index, damaged_bridges_internal, damaged_bridges_new, num_damaged_bridges
The damage_bridges function looks like this.
def damage_bridges(scenario, master_dict, scenario_index, u):
    '''This function damages bridges based on the ground shaking values (demand) and the structural capacity (capacity). It returns two lists (which could be empty) of damaged bridges (the same bridges, just under different numbering schemes).'''
    from scipy.stats import norm
    damaged_bridges_new = []
    damaged_bridges_internal = []
    # first, highway bridges and overpasses
    beta = 0.6  # you may want to change this by removing this line and making it a dictionary lookup value 3 lines below
    i = 0  # counter for bridge index
    for site in master_dict.keys():  # 1-1889 in Matlab indices (start at 1)
        lnSa = scenario[master_dict[site]['new_id'] - 1]
        prob_at_least_ext = norm.cdf((1/float(beta)) * (lnSa - math.log(master_dict[site]['ext_lnSa'])), 0, 1)  # to use the moderate damage state instead of the extensive damage state used here, just change the key name here (see master_dict description)
        #U = random.uniform(0, 1)
        if u[i] <= prob_at_least_ext:
            damaged_bridges_new.append(master_dict[site]['new_id'])  # 1-1743
            damaged_bridges_internal.append(site)  # 1-1889
        i += 1  # increment bridge index
    # GB ADDITION -- to use with master_dict = napa_dict, since napa_dict only has 40 bridges
    num_damaged_bridges = sum([1 for i in damaged_bridges_new if i <= 1743])
    return damaged_bridges_internal, damaged_bridges_new, num_damaged_bridges
It seems like the issue was that I had neglected to destroy the servers created in steps (1) and (2) -- a simple fix! I simply added job_server.destroy() at the end of each step. I'm currently running the simulations and have reached 250 of the 1000 without incident.
To be completely clear, the code for step (1) is now:
ppservers = () #starting a super cool parallelization
# Creates jobserver with automatically detected number of workers
job_server = pp.Server(ppservers=ppservers)

# set up jobs
jobs = []
for i in targets:
    jobs.append(job_server.submit(compute_damage, (lnsas[i % len(lnsas)], napa_dict, targets[i], i % sets, U[i % sets][:]), modules=('random', 'math', ), depfuncs=(damage_bridges, )))

# get the results that have already run
bridge_array_new = []
bridge_array_internal = []
indices_array = []
bridge_array_hwy_num = []
for job in jobs:
    (index, damaged_bridges_internal, damaged_bridges_new, num_damaged_bridges_road) = job()
    bridge_array_internal.append(damaged_bridges_internal)
    bridge_array_new.append(damaged_bridges_new)
    indices_array.append(index)
    bridge_array_hwy_num.append(num_damaged_bridges_road)

job_server.destroy()

multi-threading in Python with pulp

I have a function which accepts a list R. In this function, I have defined an optimization problem using "pulp". This is my function:
import pulp
from multiprocessing.dummy import Pool as ThreadPool

def optimize(R):
    variables = ["x1","x2","x3","x4"]
    costs = {"x1":R[0], "x2":R[1], "x3":R[2], "x4":R[3]}
    constraint = {"x1":5, "x2":7, "x3":4, "x4":3}
    prob_variables = pulp.LpVariable.dicts("Intg", variables,
                                           lowBound=0,
                                           upBound=1,
                                           cat=pulp.LpInteger)
    prob = pulp.LpProblem("test1", pulp.LpMaximize)

    # defines the constraints
    prob += pulp.lpSum([constraint[i]*prob_variables[i] for i in variables]) <= 14

    # defines the objective function to maximize
    prob += pulp.lpSum([costs[i]*prob_variables[i] for i in variables])

    pulp.GLPK().solve(prob)

    # Solution
    return pulp.value(prob.objective)
To get the output, I used a list as my input and the output is correct:
my_input = [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
results = []
for i in range(0, len(my_input)):
    results.append(optimize(my_input[i]))
    print("*"*20)
print(results)
But, I want to use multi-threading instead of the for loop. So, I used:
my_input = [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
pool = ThreadPool(4)
results = pool.map(optimize, my_input)
But it gives me some errors:
Traceback (most recent call last):
File "/Users/Mohammad/PycharmProjects/untitled10/multi_thread.py", line 35, in <module>
results = pool.map(optimize, my_input)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/pool.py", line 608, in get
raise self._value
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/Users/Mohammad/PycharmProjects/untitled10/multi_thread.py", line 27, in optimize
pulp.GLPK().solve(prob)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PuLP-1.6.1-py3.5.egg/pulp/solvers.py", line 179, in solve
return lp.solve(self)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PuLP-1.6.1-py3.5.egg/pulp/pulp.py", line 1643, in solve
status = solver.actualSolve(self, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PuLP-1.6.1-py3.5.egg/pulp/solvers.py", line 377, in actualSolve
raise PulpSolverError("PuLP: Error while executing "+self.path)
pulp.solvers.PulpSolverError: PuLP: Error while executing glpsol
Can anybody help me?
In my actual code, the my_input list has a length of 27 (instead of 4 in the code above), and for each element my function has to perform 80k optimizations (instead of one in the code above), so multi-threading would be a big help for me.
I have seen that the class pulp.solvers.COIN_CMD has a threads argument, although the documentation is quite laconic. Taking a look at the source code, it does indeed seem to be a way to provide threads to the solver.
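If you go down that route, a minimal sketch of the call, assuming the CBC binary is available and that the threads keyword behaves the same way in your PuLP version:
# sketch: let the CBC solver itself use multiple threads instead of threading on the Python side
prob.solve(pulp.solvers.COIN_CMD(threads=4))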
If naming is indeed the issue, consider adding the desired name index for a given problem as an input argument to the function. Something like:
def optimize(tup):  # here, tup contains (idx, R), so as to be callable using pool.map
    ...
    prob = pulp.LpProblem('test'+str(idx), pulp.LpMaximize)
    ...
and then something like:
my_input = [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
pool = ThreadPool(4)
results = pool.map(optimize, enumerate(my_input))
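Putting the pieces together, a sketch of the modified function, assuming that a clash between identically named problems is indeed what trips up GLPK (the body is the same as in the question, with only the index unpacking and the problem name changed):
def optimize(tup):
    idx, R = tup  # index added by enumerate, used to give each problem a unique name
    variables = ["x1", "x2", "x3", "x4"]
    costs = {"x1": R[0], "x2": R[1], "x3": R[2], "x4": R[3]}
    constraint = {"x1": 5, "x2": 7, "x3": 4, "x4": 3}
    prob_variables = pulp.LpVariable.dicts("Intg", variables, lowBound=0, upBound=1, cat=pulp.LpInteger)
    prob = pulp.LpProblem("test" + str(idx), pulp.LpMaximize)
    prob += pulp.lpSum([constraint[i]*prob_variables[i] for i in variables]) <= 14
    prob += pulp.lpSum([costs[i]*prob_variables[i] for i in variables])
    pulp.GLPK().solve(prob)
    return pulp.value(prob.objective)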

Sage for Graph Theory, KeyError

I'll start with my code, because this may just be an obvious problem to those with better understanding of the language:
g = graphs.CompleteGraph(60).complement()
for i in range(1,180):
    a = randint(0,59)
    b = randint(0,59)
    h = copy(g)
    h.add_edge(a,b)
    if h.is_circular_planar():
        g.add_edge(a,b)

strong = copy(strong_resolve(g))
S = strong.vertex_cover()
d = {'#00FF00': [], '#FF0000': []}
for v in g.vertices():
    if v in S:
        d['#FF0000'].append(v)
    else:
        d['#00FF00'].append(v)
g.plot(layout="spring", vertex_colors=d).show()
strong.plot(vertex_colors=d).show()

new_strong = copy(strong)
for w in new_strong.vertices():
    if len(new_strong.neighbors(w)) == 0:  # trying to remove
        new_strong.delete_vertex(w)        # disconnected vertices
new_strong.plot(vertex_colors=d).show()
A couple of notes: strong_resolve is a function which takes in a graph and outputs another graph. The first two blocks of code work fine.
My problem is that once I add the third block, things don't work anymore. While fiddling around I've produced variants of this code that cause errors when added, and somehow the errors remain when they're removed. What happens now is that the for loop seems to run to its end, and only then does it give the following error:
Traceback (most recent call last): if h.is_circular_planar():
File "", line 1, in <module>
File "/tmp/tmprzreop/___code___.py", line 30, in <module>
exec compile(u'new_strong.plot(vertex_colors=d).show()
File "", line 1, in <module>
File "/usr/lib/sagemath/local/lib/python2.7/site-packages/sage/misc/decorators.py", line 550, in wrapper
return func(*args, **options)
File "/usr/lib/sagemath/local/lib/python2.7/site-packages/sage/graphs/generic_graph.py", line 15706, in plot
return self.graphplot(**options).plot()
File "/usr/lib/sagemath/local/lib/python2.7/site-packages/sage/graphs/generic_graph.py", line 15407, in graphplot
return GraphPlot(graph=self, options=options)
File "/usr/lib/sagemath/local/lib/python2.7/site-packages/sage/graphs/graph_plot.py", line 247, in __init__
self.set_vertices()
File "/usr/lib/sagemath/local/lib/python2.7/site-packages/sage/graphs/graph_plot.py", line 399, in set_vertices
pos += [self._pos[j] for j in vertex_colors[i]]
KeyError: 0
This can vary: the KeyError: 0 is occasionally 1 or 2, depending on some unknown factor.
I apologize in advance for my horrible code and acknowledge that I really have no idea what I'm doing but I'd really appreciate if someone could help me out here.
I figured it out! It turns out the error came from d having entries that made no sense in new_strong, namely those for vertices that had already been deleted. This caused the KeyError when plot() tried to colour the vertices according to d.
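A sketch of one way to fix it, assuming the rest of the code stays as above: rebuild the colour dictionary so it only references vertices that still exist in new_strong before plotting.
new_strong = copy(strong)
for w in strong.vertices():
    if len(strong.neighbors(w)) == 0:
        new_strong.delete_vertex(w)

# rebuild the colour dictionary for the surviving vertices only
d2 = {'#00FF00': [], '#FF0000': []}
for v in new_strong.vertices():
    if v in S:
        d2['#FF0000'].append(v)
    else:
        d2['#00FF00'].append(v)
new_strong.plot(vertex_colors=d2).show()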

Remove the numba.lowering.LoweringError: Internal error

I'm using numba to speed up my code, which works fine without numba. But after adding @jit, it crashes with this error:
Traceback (most recent call last):
File "C:\work_asaaki\code\gbc_classifier_train_7.py", line 54, in <module>
gentlebooster.train(X_train, y_train, boosting_rounds)
File "C:\work_asaaki\code\gentleboost_c_class_jit_v7_nolimit.py", line 298, in train
self.g_per_round, self.g = train_function(X, y, H)
File "C:\Anaconda\lib\site-packages\numba\dispatcher.py", line 152, in _compile_for_args
return self.jit(sig)
File "C:\Anaconda\lib\site-packages\numba\dispatcher.py", line 143, in jit
return self.compile(sig, **kws)
File "C:\Anaconda\lib\site-packages\numba\dispatcher.py", line 250, in compile
locals=self.locals)
File "C:\Anaconda\lib\site-packages\numba\compiler.py", line 183, in compile_bytecode
flags.no_compile)
File "C:\Anaconda\lib\site-packages\numba\compiler.py", line 323, in native_lowering_stage
lower.lower()
File "C:\Anaconda\lib\site-packages\numba\lowering.py", line 219, in lower
self.lower_block(block)
File "C:\Anaconda\lib\site-packages\numba\lowering.py", line 254, in lower_block
raise LoweringError(msg, inst.loc)
numba.lowering.LoweringError: Internal error:
NotImplementedError: ('cast', <llvm.core.Instruction object at 0x000000001801D320>, slice3_type, int64)
File "gentleboost_c_class_jit_v7_nolimit.py", line 103
Line 103 is below, in a loop:
weights = np.empty([n,m])
for curr_n in range(n):
    weights[curr_n,:] = 1.0/(n)  # this is line 103
where n is a constant already defined somewhere above in my code.
How can I remove the error? What "lowering" is going on? I'm using Anaconda 2.0.1 with Numba 0.13.x and Numpy 1.8.x on a 64-bit machine.
Based on this: https://gist.github.com/cc7768/bc5b8b7b9052708f0c0a,
I figured out what to do to avoid the issue. Instead of using the colon : to refer to an entire row/column, I unrolled the assignment into two nested loops that explicitly refer to the indices in each dimension of the array:
weights = np.empty([n,m])
for curr_n in range(n):
    for curr_m in range(m):
        weights[curr_n,curr_m] = 1.0/(n)
There were other instances later in my code where I used the colon, but they didn't cause errors further down; I'm not sure why.