This is a simple bit vector problem:
import z3

s = z3.Tactic('bv').solver()  # build a solver from the 'bv' tactic
m = z3.Function('m', z3.BitVecSort(32), z3.BitVecSort(32))  # uninterpreted function over 32-bit vectors
a, b = z3.BitVecs('a b', 32)
axioms = [
    a == m(12432),
    z3.Not(a == b)
]
s.add(axioms)
print(s.check())
Python crashes with error code 139 (a segmentation fault). Note that this is not my real problem, but I must use the bit-vector tactic in my project, even though the code has no problem with the smt tactic or even the qfbv tactic.
It seems to be a bug in 4.4.0: with Z3 4.4.0 on Ubuntu 16.04 LTS and Python 2.7 you can reproduce the issue. In newer versions of Z3 it has been fixed; I tried 4.4.2 and it returns sat.
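If you need to guard against the broken version at runtime, the Python API exposes the library version; a small sketch of my own (not from the original answer):

import z3

# get_version() returns a tuple such as (4, 4, 2, 0);
# the crash described above was observed on 4.4.0
print(z3.get_version_string())
assert z3.get_version() >= (4, 4, 2), "upgrade Z3 to avoid the 'bv' tactic crash"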
https://github.com/Z3Prover/z3/issues/685
My development environment is Ubuntu 18.04.5 LTS with Python 3.6, and I have installed numba and cudatoolkit via conda. My GPU is an NVIDIA GeForce GTX 1050 Ti, which is CUDA-capable.
The conda and numba installations seem to work as intended, since I can import numba in Python 3.6 scripts.
The problem seems to be the same situation as the question asked here: Cuda: library nvvm not found,
but none of the proposed solutions work in my case, and I'm not sure how to describe my situation properly (I can't do it through an answer in the other thread...). If raising a duplicate question is inappropriate, please point me to the proper conduct.
When I try to run the code below I get the following error: numba.cuda.cudadrv.error.NvvmSupportError: libNVVM cannot be found. Do conda install cudatoolkit: library nvvm not found
from numba import cuda, float32

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPB x TPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory.
    # The size and type of the arrays must be known at compile time.
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x  # blocks per grid

    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        # Wait until all threads finish computing
        cuda.syncthreads()

    C[x, y] = tmp
import numpy as np
matrix_A = np.array([[0.1,0.2],[0.1,0.2]])
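For reference, invoking the kernel also requires an explicit launch configuration; a minimal sketch of the intended call (my assumption, with illustrative array sizes):

import math
import numpy as np

A = np.random.rand(32, 32).astype(np.float32)
B = np.random.rand(32, 32).astype(np.float32)
C = np.zeros((32, 32), dtype=np.float32)

threads_per_block = (TPB, TPB)
blocks_per_grid = (math.ceil(C.shape[0] / TPB), math.ceil(C.shape[1] / TPB))

# Omitting the [blocks, threads] configuration raises
# ValueError(missing_launch_config_msg), as noted later in this question.
fast_matmul[blocks_per_grid, threads_per_block](A, B, C)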
Doing as suggested and running conda install cudatoolkit does not work. I have tried many variations of this install that I've found online, to no avail.
In the other post, a solution that seems to have worked for many is to add environment-variable exports to the .bashrc file in the home directory. The suggestions, however, refer to files under the /usr directory, where I have no CUDA files since I installed through conda. I have tried many variations of these exports without success. This is perhaps where the solution lies, but if so, the solution would benefit from being generalized.
Does anyone have an up-to-date or generalized solution to this problem?
EDIT: adding information from terminal outputs (thanks for the hint to edit the question accordingly).
> conda list numba
# packages in environment at /home/tobka/anaconda3:
#
# Name Version Build Channel
numba 0.51.2 py38h0573a6f_1
> conda list cudatoolkit
# packages in environment at /home/tobka/anaconda3:
#
# Name Version Build Channel
cudatoolkit 11.0.221 h6bb024c_0
Also adding output from numba -s: https://pastebin.com/raw/6u1MUkxg
Idea of a possible cause (not yet confirmed): I noticed in the numba -s output that it reports Python Version: 3.8.3, whereas I have been explicitly using python3.6 in the terminal, since plain python has usually meant Python 2.7. I checked, however, and my system now runs Python 3.8.3 under the python command and Python 3.6.9 under python3.6. When running the code using python I instead get an ordinary Python error, which is a good sign: raise ValueError(missing_launch_config_msg).
I will try to fix the remaining errors and confirm that the code works, after which I will report back here.
Confirmation of solution: using python instead of python3.6 in the terminal solved the problem. The root cause was user error.
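As a general takeaway, a quick sanity check (a generic sketch, not from the original thread) confirms which interpreter and numba installation are actually in use:

import sys
import numba

print(sys.executable)     # path of the interpreter actually running
print(sys.version)        # should match the environment conda installed into
print(numba.__version__)  # confirms numba is importable from this interpreter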
I'm getting TypeErrors when trying to run the optimization library pyOpt. The code I'm trying to run is the basic example given here (I'm only testing the SLSQP solver).
I get the following error during execution of the solver:
la = numpy.array([max(m,1)], numpy.int)
gg = numpy.zeros([la], numpy.float)
TypeError: only integer scalar arrays can be converted to a scalar index
I suspect the error is due to changes in numpy, based on the answer given here. If that is the case, what are my options for making the library work? I could downgrade numpy, but I don't want any unforeseen changes to the other libraries on my system.
I'm using Python 2.7 with numpy 1.12.1 on Ubuntu 14.04.
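For illustration, here is a minimal reproduction of the numpy behavior change behind this error (a sketch of my own, independent of pyOpt; exact behavior depends on the numpy version):

import numpy as np

m = 3
la = np.array([max(m, 1)])  # a 1-element array, not a scalar

try:
    gg = np.zeros([la], float)  # older numpy silently coerced this to a scalar
except TypeError as e:
    print(e)  # only integer scalar arrays can be converted to a scalar index

gg = np.zeros([la[0]], float)  # indexing out the scalar element works everywhere
print(gg.shape)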
Replace la with la[0] (and likewise lw and ljw below); the issue is with the numpy version, as you noted.
In /usr/local/lib/python2.7/dist-packages/pyOpt/pySLSQP/pySLSQP.py, I got it to run by changing these lines:
374 gg = numpy.zeros([la[0]], numpy.float)
377 dg = numpy.zeros([la[0],n+1], numpy.float)
401 w = numpy.zeros([lw[0]], numpy.float)
404 jw = numpy.zeros([ljw[0]], numpy.intc)
I ended up encountering errors during execution that were not Python errors but errors from the compiled code itself, so I switched to pyGMO2, and I am satisfied with it.
When I try to solve a quadratic programming problem with solvers.qp from the cvxopt package in Python, it kills my kernel after a few seconds.
The documentation of the package is at http://cvxopt.org/userguide/coneprog.html#cvxopt.solvers.qp . If I run the example code from that page:
from math import sqrt
from cvxopt import matrix
from cvxopt.solvers import qp
# Problem data.
n = 4
S = matrix([[ 4e-2, 6e-3, -4e-3, 0.0 ],
[ 6e-3, 1e-2, 0.0, 0.0 ],
[-4e-3, 0.0, 2.5e-3, 0.0 ],
[ 0.0, 0.0, 0.0, 0.0 ]])
pbar = matrix([.12, .10, .07, .03])
G = matrix(0.0, (n,n))
G[::n+1] = -1.0
h = matrix(0.0, (n,1))
A = matrix(1.0, (1,n))
b = matrix(1.0)
# Compute trade-off.
N = 100
mus = [ 10**(5.0*t/N-1.0) for t in range(N) ]
portfolios = [ qp(mu*S, -pbar, G, h, A, b)['x'] for mu in mus ]
After 2 seconds or so I get the following reply from python:
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
...
I also don't understand what this ['x'] option is all about. But even if I leave it out, I get the same 'unexpected' death of the kernel. I also tried qp problems that definitely have a solution, like x^2 + y^2 with no constraints or with non-negativity constraints... whatever I do, it kills my kernel. What could be the problem?
Maybe it is important to mention that:
I use Ubuntu 16
I use Python 3.5
I use cvxopt 1.1.9
The cvxopt package also includes compiled C code.
I faced the same problem when I was running cvxopt in Jupyter Lab, so I moved my code to PyCharm and got an error:
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results.
I googled and found a question that solved it via
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
You can run the script from a terminal to see the actual cause of this error.
When I got this error, the cause was: Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
I use conda, so this was my solution:
conda config --add channels conda-forge
conda install -f cvxopt
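Once cvxopt loads correctly, a minimal check (a sketch of my own, using the x^2 + y^2 problem with non-negativity constraints mentioned in the question) also shows what the ['x'] lookup does:

from cvxopt import matrix
from cvxopt.solvers import qp

# minimize (1/2) x'Px + q'x  subject to  Gx <= h
P = matrix([[2.0, 0.0], [0.0, 2.0]])    # objective x^2 + y^2
q = matrix([0.0, 0.0])
G = matrix([[-1.0, 0.0], [0.0, -1.0]])  # encodes -x <= 0 and -y <= 0
h = matrix([0.0, 0.0])

sol = qp(P, q, G, h)
print(sol['status'])  # qp returns a dict; 'status' should be 'optimal'
print(sol['x'])       # 'x' is the primal solution vector, here approximately (0, 0)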
I have a piece of code which computes the Helmholtz-Hodge decomposition.
I had been running it on my Mac (OS X Yosemite) and it was working just fine. A month ago, however, my Mac got pretty slow (it was really old), and I opted to buy a new notebook (Windows 8.1, Dell).
After installing all the Python libraries and so on, I continued my work by running this same code (versioned in Git). And then the result was pretty weird, completely different from the one obtained on the old notebook.
Concretely, I construct two matrices a and b (a really long calculation) and then call the solver:
s = numpy.linalg.solve(a, b)
This returned a wrong result, different from the one obtained on my Mac (which was right).
Then, I tried to use:
s = scipy.linalg.solve(a, b)
And the program exits with code 0, but in the middle of execution.
Then I made a simple test:
print 'here1'
s = scipy.linalg.solve(a, b)
print 'here2'
And here2 is never printed.
I tried:
print 'here1'
x, info = numpy.linalg.cg(a, b)
print 'here2'
And the same happens.
I also tried to check the solution after using numpy.linalg.solve:
print numpy.allclose(numpy.dot(a, s), b)
And I got a False (?!).
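(A generic way to quantify this, suggested here rather than taken from the original report: check the residual and the condition number of a, since an ill-conditioned system can give platform-dependent answers.)

import numpy

# a, b and s = numpy.linalg.solve(a, b) as above
residual = numpy.linalg.norm(numpy.dot(a, s) - b)
print(residual)              # should be near zero for a trustworthy solution
print(numpy.linalg.cond(a))  # a very large condition number means tiny
                             # platform differences can change the answer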
I don't know what is happening or how to find a solution; I just know that the same code runs on my Mac, but it would be very good if I could run it on other platforms. Now I'm stuck on this problem (I don't have a Mac anymore) and have no clue about the cause.
The weirdest thing is that I don't get any error or runtime warning, no feedback at all.
Thank you for any help.
EDIT:
Numpy test suite results: [screenshot omitted]
Scipy test suite results: [screenshot omitted]
Download the Anaconda package manager:
http://continuum.io/downloads
When you install it, it will already have all the dependencies for numpy worked out for you. It installs locally and works on most platforms.
This is not really an answer, but this blog discusses at length the problems of a numpy ecosystem that evolves fast at the expense of reproducibility.
By the way, which version of numpy are you using? The documentation for the latest, 1.9, does not report any method called cg like the one you use; cg (conjugate gradient) lives in scipy.sparse.linalg, not numpy.linalg.
I suggest using this example so that you (and others) can check the results:
>>> import numpy as np
>>> import scipy.linalg
>>> np.random.seed(123)
>>> a = np.random.random(size=(10000, 10000))
>>> b = np.random.random(size=(10000,))
>>> s_np = np.linalg.solve(a, b)
>>> s_sc = scipy.linalg.solve(a, b)
>>> np.allclose(s_np,s_sc)
>>> s_np
array([-15.59186559, 7.08345804, 4.48174646, ..., -16.43310046,
-8.81301553, -10.77509242])
I hope you can find the answer. One option in the future is to create a container for each of your projects using Docker, which makes environments easily portable.
See a great article here discussing Docker for research.
I recently reinstalled my Python environment, and code that used to run very quickly now crawls at best (usually it just hangs, taking up more and more memory).
The point at which the code hangs is:
solve(exp(-alpha * x**2) - 0.01, alpha)
I've been able to reproduce this problem with a fresh IPython 0.13.1 session:
In [1]: from sympy import solve, Symbol, exp
In [2]: x = 14.7296138519
In [3]: alpha = Symbol('alpha', real=True)
In [4]: solve(exp(-alpha * x**2) - 0.01, alpha)
This works for integer values of x, but is also quite slow. In the original code I looped over this, solving for hundreds of different alphas with different values of x (other than 14.7296138519), and it didn't take more than a second.
Any thoughts?
The rational=False flag was introduced for cases such as this.
>>> q=14.7296138519
>>> solve(exp(-alpha * q**2) - 0.01, alpha, rational=False)
[0.0212257459123917]
(The explanation is given in the issue cited above: by default, solve recasts Floats as exact Rationals, which can make the intermediate exact expressions enormous; rational=False prevents that conversion.)
Rolling back from version 0.7.2 to 0.7.1 solved this problem:
easy_install sympy==0.7.1
I've reported this as a bug on sympy's Google Code tracker.
I am currently running sympy version 1.11.1, which is, as far as I know, the latest version. However, the hanging described above persists when solving a set of 3 differential equations for 3 angular second derivatives, e.g.:
sols = solve([LE1, LE2, LE3], (the_dd, phi_dd, psi_dd),
simplify=False, rational=False)
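When rational=False alone does not help, one generic fallback (a sketch of my own, not specific to these equations) is to drop exact solving and use a numeric root finder such as sympy's nsolve:

import sympy as sp

alpha = sp.Symbol('alpha', real=True)
x = 14.7296138519

# nsolve needs an initial guess; 0.02 is chosen near the expected root
print(sp.nsolve(sp.exp(-alpha * x**2) - 0.01, alpha, 0.02))
# 0.0212257459123917, matching the rational=False result above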