cvxopt.solvers.qp in Python causes the kernel to die

When I try to solve a quadratic programming problem with solvers.qp from the cvxopt package in python, it kills my kernel after a few seconds.
The package documentation is at http://cvxopt.org/userguide/coneprog.html#cvxopt.solvers.qp. If I run the example code from that page:
from math import sqrt
from cvxopt import matrix
from cvxopt.solvers import qp
# Problem data.
n = 4
S = matrix([[ 4e-2,  6e-3, -4e-3,   0.0 ],
            [ 6e-3,  1e-2,  0.0,    0.0 ],
            [-4e-3,  0.0,   2.5e-3, 0.0 ],
            [ 0.0,   0.0,   0.0,    0.0 ]])
pbar = matrix([.12, .10, .07, .03])
G = matrix(0.0, (n,n))
G[::n+1] = -1.0
h = matrix(0.0, (n,1))
A = matrix(1.0, (1,n))
b = matrix(1.0)
# Compute trade-off.
N = 100
mus = [ 10**(5.0*t/N-1.0) for t in range(N) ]
portfolios = [ qp(mu*S, -pbar, G, h, A, b)['x'] for mu in mus ]
After two seconds or so I get the following reply from Python:
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
...
I also don't understand what this ['x'] indexing is all about; a minimal sketch follows below. But even if I leave it out, I get the same 'unexpected' death of the kernel. I also tried qp problems that definitely have a solution, like minimizing x^2 + y^2 with no constraints or with non-negativity constraints. Whatever I do, it kills my kernel. What could be the problem?
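For reference, per the linked documentation qp returns a dictionary, and ['x'] selects the primal solution from it. A minimal sketch of the unconstrained x^2 + y^2 case mentioned above, written as (1/2)x'Px with P = 2I:
import numpy as np
from cvxopt import matrix
from cvxopt.solvers import qp
# Minimize x^2 + y^2, i.e. (1/2) x'Px + q'x with P = 2*I and q = 0.
P = matrix([[2.0, 0.0], [0.0, 2.0]])
q = matrix([0.0, 0.0])
sol = qp(P, q)        # G, h, A, b are all optional
print(sol['status'])  # 'optimal' when the solver converges
print(sol['x'])       # the minimizer, here close to (0, 0)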
Maybe it is important to say that:
I use Ubuntu 16
I use Python 3.5
I use cvxopt 1.1.9
The cvxopt package also includes compiled C extensions.

I faced the same problem when I was running cvxopt in Jupyter Lab, so I moved my code to PyCharm and got an error:
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results.
I googled and found a question that solved it via:
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
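If that workaround applies, note that the variable must be set before the conflicting libraries are loaded; a sketch for the qp example above (my arrangement, not from the original answer):
import os
# Set this before importing cvxopt (or numpy/MKL); once the duplicate
# OpenMP runtime is already loaded, the flag no longer has any effect.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
from cvxopt import matrix
from cvxopt.solvers import qp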

Running the script from a terminal will show the underlying cause of this error.
When I did so, the cause was: Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
I use conda, so this was my solution:
conda config --add channels conda-forge
conda install -f cvxopt
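To confirm that the reinstall actually picked up the conda-forge build (a generic conda check, not part of the original answer):
conda list cvxopt
# the Channel column should now read conda-forge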

Related

Conda Numba Cuda: libNVVM cannot be found

My development environment is Ubuntu 18.04.5 LTS with Python 3.6, and I have installed numba and cudatoolkit via conda. The GPU is an Nvidia GeForce GTX 1050 Ti, which supports CUDA.
The conda and numba installations seem to work as intended, as I can import numba in Python 3.6 scripts.
The problem seems identical to the question asked here: Cuda: library nvvm not found,
but none of the proposed solutions work in my case, and I'm not sure how to present my situation properly (I can't do it through an answer in the other thread). If raising a duplicate of the question is inappropriate, please guide me to the proper conduct.
When I try to run the code below I get the following error: numba.cuda.cudadrv.error.NvvmSupportError: libNVVM cannot be found. Do conda install cudatoolkit: library nvvm not found
from numba import cuda, float32

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPBxTPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory.
    # The size and type of the arrays must be known at compile time.
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    x, y = cuda.grid(2)
    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x  # blocks per grid
    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary.
        return
    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory.
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]
        # Wait until all threads finish preloading.
        cuda.syncthreads()
        # Compute the partial product on the shared memory.
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]
        # Wait until all threads finish computing.
        cuda.syncthreads()
    C[x, y] = tmp

import numpy as np
matrix_A = np.array([[0.1, 0.2], [0.1, 0.2]])
Running conda install cudatoolkit as suggested does not work; I have tried many variations of that install, found online, to no avail.
In the other post, a solution that seems to have worked for many is to add environment-variable exports to the .bashrc file in the home directory. Those suggestions, however, refer to files in the /usr directory, where I have no CUDA files since I installed through conda. I have tried many variations of these exports without success. This is perhaps where the solution lies, but if so, the solution would benefit from being generalized.
Does anyone have an up-to-date or generalized solution to this problem?
EDIT: adding information from terminal outputs (thanks for the hint to edit the question to do so).
> conda list numba
# packages in environment at /home/tobka/anaconda3:
#
# Name          Version     Build            Channel
numba           0.51.2      py38h0573a6f_1
> conda list cudatoolkit
# packages in environment at /home/tobka/anaconda3:
#
# Name          Version     Build            Channel
cudatoolkit     11.0.221    h6bb024c_0
Also adding output from numba -s: https://pastebin.com/raw/6u1MUkxg
Idea of a possible cause (not yet confirmed): I noticed in the numba -s output that it reports Python Version: 3.8.3, whereas I've been explicitly typing python3.6 in the terminal, since plain python has historically meant Python 2.7 on my machine. I checked, however, and my system now runs Python 3.8.3 under the python command and Python 3.6.9 under python3.6. When I run the code with python I get a different error instead, raise ValueError(missing_launch_config_msg), which is a good sign.
I will try to fix those errors and confirm that the code works, after which I will report back here.
Confirmation of solution: using python instead of python3.6 in the terminal solved the problem. The root cause was the user.
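A quick way to catch this kind of interpreter mix-up (a generic check, not from the original post) is to print which binary and version are actually running:
import sys
# Shows which interpreter (and therefore which site-packages) is in use;
# conda's numba and cudatoolkit are only visible to the environment's own Python.
print(sys.executable)
print(sys.version)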

Python Kernel Died when using triple for loop containing requires_grad=True parameters (Pytorch)

I just want to store para_k in every position of the COV matrix, where:
para_k = torch.tensor([2.0], requires_grad=True).
Because I want to update the parameter later by optimizing a loss function (omitted here), I have to add another for loop. The whole code is therefore as follows:
import torch
para_k = torch.tensor([2.0], requires_grad=True)
for t in range(5):
    num_row = 300
    num_col = 300
    COV = torch.zeros((num_row, num_col))
    for i in range(num_row):
        for j in range(num_col):
            COV[i, j] = para_k
            print('i= %d ,j= %d' % (i, j))
Spyder cannot finish even this simple computation!
Things go well if I make num_row and num_col smaller than 100.
With numbers larger than 100 it reports (sometimes before even one iteration finishes) that the kernel died and is restarting: Spyder Kernel Died.
I even uninstalled and reinstalled Anaconda, and reinstalled PyTorch. I also tried running Spyder as administrator and still got the error.
I checked my system's event log; the error is in ntdll.dll, relating to pythonw.exe. I added pythonw.exe as an exception to the firewall, but that still did not solve the problem.
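No answer is recorded here, but one plausible culprit (my assumption, not confirmed in the thread) is that each of the 300 x 300 element-wise assignments from a requires_grad tensor adds a node to the autograd graph. A broadcasted operation builds the same matrix with a single graph node:
import torch

para_k = torch.tensor([2.0], requires_grad=True)
for t in range(5):
    # One broadcasted multiply instead of 90,000 indexed copies;
    # gradients still flow back to para_k through this operation.
    COV = para_k * torch.ones((300, 300))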

Bit Vector tactic leads to exit code 139 in Z3Py

This is a simple bit vector problem:
import z3

s = z3.Tactic('bv').solver()
m = z3.Function('m', z3.BitVecSort(32), z3.BitVecSort(32))
a, b = z3.BitVecs('a b', 32)
axioms = [
    a == m(12432),
    z3.Not(a == b)
]
s.add(axioms)
print(s.check())
Python crashes with exit code 139 (a segmentation fault: 128 + signal 11). Please note that this is not my real problem; the same crash does not occur with the smt tactic or even the qfbv tactic, but I must use the bit vector tactic in my project.
It seems to be a bug in 4.4.0: with Z3 4.4.0 on Ubuntu 16.04 LTS and Python 2.7 the issue is reproducible. It has been fixed in newer versions of Z3; I tried 4.4.2 and it returns sat.
https://github.com/Z3Prover/z3/issues/685
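A quick way to check which Z3 release the Python bindings actually load (a generic check, not from the issue thread):
import z3
# Releases older than the fix (4.4.2 per the answer above) are suspect
# for this segfault.
print(z3.get_version_string())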

Matlab cannot call Python code that imports statsmodels

This question concerns Matlab 2014b, Python 3.4 and Mac OS 10.10.
I have the following Python file tmp.py:
from statsmodels.tsa.arima_process import ArmaProcess
import numpy as np

def generate_AR_time_series():
    arparams = np.array([-0.8])
    maparams = np.array([])
    ar = np.r_[1, -arparams]
    ma = np.r_[1, maparams]
    arma_process = ArmaProcess(ar, ma)
    return arma_process.generate_sample(100)
I want to call generate_AR_time_series from Matlab so I used:
py.tmp.generate_AR_time_series()
which gave a vague error message
Undefined variable "py" or class "py.tmp.generate_AR_time_series".
To look into the problem further, I tried
tmp = py.eval('__import__(''tmp'')', struct);
which gave me a detailed but still obscure error message:
Python Error:
dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so, 2): Symbol
not found: __gfortran_stop_numeric_f08
Referenced from: /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so
Expected in: /Applications/MATLAB_R2014b.app/sys/os/maci64/libgfortran.3.dylib
in /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so
I can call the function from within Python just fine, so I guess the problem is with Matlab. From the detailed message, it seems a symbol is expected in a library inside the Matlab installation path, but of course the Matlab installation does not contain it, since these are third-party libraries for Python.
How can I solve this problem?
Edit 1:
libgfortran.3.dylib can be found in a lot of places:
/Applications/MATLAB_R2014a.app/sys/os/maci64/libgfortran.3.dylib
/Applications/MATLAB_R2014b.app/sys/os/maci64/libgfortran.3.dylib
/opt/local/lib/gcc48/libgfortran.3.dylib
/opt/local/lib/gcc49/libgfortran.3.dylib
/opt/local/lib/libgcc/libgfortran.3.dylib
/Users/wdg/Documents/MATLAB/mcode/nativelibs/macosx/bin/libgfortran.3.dylib
Try:
setenv('DYLD_LIBRARY_PATH', '/usr/local/bin/');
For me, using the setenv approach from within MATLAB did not work. Also, MATLAB modifies the DYLD_LIBRARY_PATH variable during startup to include the libraries it needs.
First, make sure which version of gfortran scipy was linked against: in Terminal.app, run otool -L /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so and look for 'libgfortran' in the output.
It worked for me to copy $(MATLABROOT)/bin/.matlab7rc.sh to my home directory and change the line LDPATH_PREFIX='' in the Mac section (around line 195 in my case) to LDPATH_PREFIX='/opt/local/lib/gcc49', or whatever path to libgfortran you found above.
This ensures that /opt/local/lib/gcc49/libgfortran.3.dylib is found before the MATLAB version, but leaves other paths intact.
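Schematically, the edit in the copied ~/.matlab7rc.sh looks like this (the gcc49 path is the example from above; substitute whatever directory otool reported):
# In ~/.matlab7rc.sh, maci64 section:
# before:
LDPATH_PREFIX=''
# after (the directory containing the libgfortran that scipy was linked against):
LDPATH_PREFIX='/opt/local/lib/gcc49'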

Python Kernel died when using Gurobi in Enthought Canopy on Win 7

I was trying to use Gurobi for Python to solve a mixed-integer optimization problem, but every time I run the code I wrote following the Gurobi tutorial, a message pops up saying:
"The Kernel has died, would you like to restart it? If you do not restart the kernel, you will be able to save the notebook, but running code will not work until the notebook is reopened."
I used Enthought Canopy with Python 2.7 and gurobipy from Gurobi on Windows 7.
Here is a small trial optimization problem I tried to solve; the message above showed up after I ran it:
import gurobipy
import numpy
from gurobipy import *

def minVol(N1, solution):
    model = Model()
    x = model.addVar(lb=0.0, ub=1.0, vtype=GRB.CONTINUOUS)
    y = model.addVar(lb=0.0, ub=1.0, vtype=GRB.CONTINUOUS)
    model.update()
    varibles = model.getVars()
    model.setObjective(2*x + y, GRB.Maximize)
    model.update()
    model.optimize()
    if model.status == GRB.OPTIMAL:
        s = model.getAttr('s', varibles)
        for i in range(N1):
            solution[i] = s[i]
        return True
    else:
        return False

N1 = 2
solution = [0] * N1
success = minVol(N1, solution)
if success:
    print solution
Would anyone help me with this? Thank you very much!
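No accepted fix is recorded here, but two API details in the snippet are worth ruling out before blaming the kernel: the objective-sense constant is GRB.MAXIMIZE, not GRB.Maximize, and the solution values live in the attribute 'x', not 's'. A sketch of the corrected model, based on the documented gurobipy API rather than on this thread:
from gurobipy import Model, GRB

def minVol(N1, solution):
    model = Model()
    x = model.addVar(lb=0.0, ub=1.0, vtype=GRB.CONTINUOUS)
    y = model.addVar(lb=0.0, ub=1.0, vtype=GRB.CONTINUOUS)
    model.update()
    variables = model.getVars()
    model.setObjective(2*x + y, GRB.MAXIMIZE)  # constant is MAXIMIZE, not Maximize
    model.optimize()
    if model.status == GRB.OPTIMAL:
        s = model.getAttr('x', variables)      # 'x' holds the solution values
        for i in range(N1):
            solution[i] = s[i]
        return True
    return False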
