RuntimeError: cuda runtime error (8) in pytorch - python

When I try to run a PyTorch Faster R-CNN code (from https://github.com/rowanz/neural-motifs), I get the following issue:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCTensorMathPairwise.cu line=21 error=8 : invalid device function
It arises when executing
keep.append(keep_im + s)
where keep is a list, s is an int, and keep_im is a torch.cuda.LongTensor.
Strangely, when I modify the code to
try:
    keep.append(keep_im + s)
except BaseException:
    keep.append(keep_im + s)
it raises the error in the try block, runs the same line again in the except block, and succeeds...
Does anyone know what is happening here?
I use Python 2.7 + PyTorch 0.3 + CUDA 8 + cuDNN 7.1 on a Titan Xp under Ubuntu 16. Thanks.

You have to upgrade your torch version. CUDA runtime error 8, "invalid device function", typically means the installed PyTorch binaries were not compiled with kernels for your GPU's compute capability, so a newer build that targets your card resolves it.
For me, upgrading to torch 1.11.0 worked.
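As a quick sanity check, you can compare the build you are running against your GPU's compute capability (a minimal sketch using standard PyTorch introspection calls, not from the original answer):
import torch

# Which PyTorch build is installed and which CUDA version it was built against.
print(torch.__version__, torch.version.cuda)

# Compute capability of GPU 0, e.g. (6, 1) for a Titan Xp. The installed
# binaries must ship kernels for this architecture, or device-side ops
# fail with "invalid device function".
print(torch.cuda.get_device_capability(0))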

Related

CUDA Illegal Memory Access error when using torch.cat

I was playing around with pytorch concatenate and wanted to see if I could use an output tensor on a different device from the input tensors. Here is the code:
import torch
a = torch.ones(4)
b = torch.ones(4)
c = torch.zeros(8).cuda()
print(c)
ab = torch.cat([a,b], out=c)
print(c)
I am running this inside a Jupyter notebook. PyTorch version: 1.7.1
I get the following error:
...
\Anaconda3\envs\...\lib\site-packages\torch\_tensor_str.py in __init__(self, tensor)
87
88 else:
---> 89 nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
90
91 if nonzero_finite_vals.numel() == 0:
RuntimeError: CUDA error: an illegal memory access was encountered
It happens when you try to access the tensor c (in this case with a print).
I couldn't find anything in the documentation that said I can't do this, other than perhaps this line:
" ... any python sequence of tensors of the same type ... "
The error is kind of curious though... any ideas?
It appears that the behavior changes with the PyTorch version. With version 1.3.0 I get the error expected object of backend CUDA but got CPU, but with version 1.5.0 I do indeed get the same error as you do. This would probably be worth mentioning on their GitHub, because I believe the former error is more useful than the latter.
Anyway, both errors come from the fact that you concatenate CPU tensors into a GPU one. You can solve it very easily:
# Move the tensors to the GPU prior to concatenating
ab = torch.cat([a.cuda(),b.cuda()], out=c)
or
# Move the tensor after concatenating
c.copy_(torch.cat([a,b]).cuda())
I don't have a notebook, but I believe you will have to restart your kernel; the error seems to leave the CUDA context in a broken state. My Python shell cannot compute anything anymore after hitting the illegal memory access.
I faced a similar issue and reproduced the error as above with minor differences:
# 080521 debug RuntimeError: CUDA error: an illegal memory access was encountered
# https://stackoverflow.com/questions/66985008/cuda-illegal-memory-access-error-when-using-torch-cat
import torch
a = torch.ones(4)
b = torch.ones(4)
c = torch.zeros(8).cuda()
print(c)
ab = torch.cat([a,b], out=c) # throws error below:
print(c)
# RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
# (when checking arugment for argument tensors in method wrapper__cat_out_out)
# i.e. 'expected object of backend CUDA but got CPU'
Applying the logic from Using CUDA with pytorch? (setting the default tensor type to CUDA) solved the error:
import torch
torch.set_default_tensor_type('torch.cuda.FloatTensor')
a = torch.ones(4)
b = torch.ones(4)
c = torch.zeros(8).cuda()
print(c)
ab = torch.cat([a,b], out=c)
print(c)
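Note that set_default_tensor_type is a global switch: every tensor created afterwards lands on the GPU. A more targeted alternative, equivalent to the fixes above (a sketch, not from the original answers), is to move only the inputs onto the output's device:
ab = torch.cat([a.to(c.device), b.to(c.device)], out=c)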

Conda Numba Cuda: libNVVM cannot be found

My development environment is Ubuntu 18.04.5 LTS with Python 3.6, and I have installed numba and cudatoolkit via conda. The GPU is an Nvidia GeForce GTX 1050 Ti, which is supported by CUDA.
The installation of conda and numba seems to work as intended, as I can import numba within python3.6 scripts.
The problem seems identical to the situation in the question asked here: Cuda: library nvvm not found
but none of the proposed solutions work in my case, and I'm not sure how to highlight my situation properly (I can't do it through an answer in the other thread...). If raising a duplicate of the question is inappropriate, then guide me to proper conduct.
When I try to run the code below, I get the following error: numba.cuda.cudadrv.error.NvvmSupportError: libNVVM cannot be found. Do conda install cudatoolkit: library nvvm not found
from numba import cuda, float32

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPBxTPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    if x >= C.shape[0] and y >= C.shape[1]:
        # Quit if (x, y) is outside of valid C boundary
        return

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[tx, ty] = A[x, ty + i * TPB]
        sB[tx, ty] = B[tx + i * TPB, y]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[tx, j] * sB[j, ty]

        # Wait until all threads finish computing
        cuda.syncthreads()

    C[x, y] = tmp

import numpy as np
matrix_A = np.array([[0.1,0.2],[0.1,0.2]])
Doing as suggested and running conda install cudatoolkit does not work. I have tried many variations on this install that I've found online to no avail.
In the other post, a solution that seems to have worked for many is to add environment variable exports to the .bashrc file in the home directory. The suggestions, however, refer to files in the /usr directory, where I have no CUDA data since I've installed through conda. I have tried many variations on these exports without success. This is perhaps where the solution lies, but if so, the solution would benefit from being generalized.
Does anyone have any up-to-date or generalized solutions to this problem?
EDIT: adding information from terminal outputs (thanks for the hint to edit the question to do so)
> conda list numba
# packages in environment at /home/tobka/anaconda3:
#
# Name                    Version          Build            Channel
numba                     0.51.2           py38h0573a6f_1
> conda list cudatoolkit
# packages in environment at /home/tobka/anaconda3:
#
# Name                    Version          Build            Channel
cudatoolkit               11.0.221         h6bb024c_0
Also adding output from numba -s: https://pastebin.com/raw/6u1MUkxg
Idea of a possible cause (not yet confirmed): I noticed in the numba -s output that it reports Python Version: 3.8.3, whereas I've been explicitly using python3.6 in the terminal, since plain python has usually meant python2.7. I checked, however, and my system now runs Python 3.8.3 with the python command and Python 3.6.9 with python3.6. And when running the code using python I get a different error instead, which is a good sign: raise ValueError(missing_launch_config_msg). That error means the kernel compiles but is called without a launch configuration, so the CUDA toolchain is now being found.
I will try to fix the remaining errors and confirm that the code works, after which I will report back here.
Confirmation of solution: using python instead of python3.6 in the terminal solved the problem. The root cause was the user.
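For reference, the missing_launch_config_msg error above refers to invoking a @cuda.jit kernel without a launch configuration. A minimal sketch of a valid launch for the fast_matmul kernel in the question (the sizes are illustrative, not from the original post):
import math
import numpy as np

# Illustrative square inputs whose sides are multiples of TPB, so the
# simple shared-memory kernel above indexes safely.
A = np.random.rand(2 * TPB, 2 * TPB).astype(np.float32)
B = np.random.rand(2 * TPB, 2 * TPB).astype(np.float32)
C = np.zeros((2 * TPB, 2 * TPB), dtype=np.float32)

threads_per_block = (TPB, TPB)
blocks_per_grid = (math.ceil(C.shape[0] / TPB), math.ceil(C.shape[1] / TPB))

# The square brackets supply the grid/block launch configuration.
fast_matmul[blocks_per_grid, threads_per_block](A, B, C)
print(C)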

TypeError when using SLSQP solver in PyOpt library

I'm getting TypeErrors when trying to run the optimization library pyOpt. The code I'm trying to run is the basic example given here (I'm only testing for the SLSQP solver).
I'm getting the following error during the execution of the solver:
la = numpy.array([max(m,1)], numpy.int)
gg = numpy.zeros([la], numpy.float)
TypeError: only integer scalar arrays can be converted to a scalar index
I suspect the error is due to changes in numpy, because of the answer given here. If that is the case, what are my options to make the library work? I can think of downgrading numpy, but I don't want any unforeseen changes to the other libraries in my system.
Using Python 2.7 with numpy 1.12.1 on Ubuntu 14.04.
Replace la with la[0] (and likewise for the other length arrays) - the issue is with the numpy version, as you noted.
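The incompatibility can be reproduced in isolation (a minimal sketch; the variable names mirror pySLSQP's, and the failing call is left commented out):
import numpy as np

m = 3
la = np.array([max(m, 1)], int)   # one-element array, as pySLSQP builds it
# np.zeros([la], float)           # newer numpy raises: TypeError: only integer
#                                 # scalar arrays can be converted to a scalar index
gg = np.zeros([la[0]], float)     # indexing out the scalar works on all versions
print(gg)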
In /usr/local/lib/python2.7/dist-packages/pyOpt/pySLSQP/pySLSQP.py, I got it to run by changing these lines:
374 gg = numpy.zeros([la[0]], numpy.float)
377 dg = numpy.zeros([la[0],n+1], numpy.float)
401 w = numpy.zeros([lw[0]], numpy.float)
404 jw = numpy.zeros([ljw[0]], numpy.intc)
I eventually ran into errors that were not Python errors but failures inside the compiled code itself, so I switched to pyGMO2 and am satisfied.

Bit Vector tactic leads to exit code 139 in Z3Py

This is a simple bit vector problem:
import z3
s = z3.Tactic('bv').solver()
m = z3.Function('m', z3.BitVecSort(32), z3.BitVecSort(32))
a, b = z3.BitVecs('a b', 32)
axioms = [
    a == m(12432),
    z3.Not(a == b)
]
s.add(axioms)
print(s.check())
Python crashes with exit code 139 (a segmentation fault, 128 + SIGSEGV). Please note that this is not my real problem; in my project I must use the bit vector ('bv') tactic, although the same code has no problem with the 'smt' tactic or even the 'qfbv' tactic.
It seems to be a bug in 4.4.0. With 4.4.0 on Ubuntu 16.04 LTS and Python 2.7, the issue is reproducible. However, it has been fixed in newer versions of Z3: I tried 4.4.2 and it returns sat.
https://github.com/Z3Prover/z3/issues/685
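To confirm which build you are running before and after upgrading (a small sketch; get_version_string is part of the z3py API):
import z3

# Print the linked Z3 build; the crash reportedly disappears from 4.4.2 on.
print(z3.get_version_string())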

python "TypeError: 'numpy.float64' object cannot be interpreted as an integer"

import numpy as np

for i in range(len(x)):
    if (np.floor(N[i]/2) == N[i]/2):
        for j in range(N[i]/2):
            pxd[i,j] = x[i] - (delta*j)*np.sin(s[i]*np.pi/180)
            pyd[i,j] = y[i] - (delta*j)*np.cos(s[i]*np.pi/180)
    else:
        for j in range((N[i]-1)/2):
            pxd[i,j] = x[i] - (delta*j)*np.sin(s[i]*np.pi/180)
            pyd[i,j] = y[i] - (delta*j)*np.cos(s[i]*np.pi/180)
Does anyone have an idea how to solve this problem and run this code successfully?
N=np.floor(np.divide(l,delta))
...
for j in range(N[i]/2):
N[i]/2 will be a float64, but range() expects an integer. Just cast it to an int:
for j in range(int(N[i]/2)):
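Equivalently, you can cast once and use integer floor division (a small sketch with made-up inputs, not from the original answer):
import numpy as np

N = np.floor(np.divide([7.0, 9.0], 2.0))   # float64 values: [3., 4.]
for i in range(len(N)):
    for j in range(int(N[i]) // 2):        # int() once, then integer division
        print(i, j)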
I came here with the same error, though mine had a different origin.
It is caused by numpy 1.12.0 and newer rejecting float indices, even where the code used to be considered valid: an int is expected, not a np.float64.
Solution: try installing numpy 1.11.0:
sudo pip install -U numpy==1.11.0
I had the same problem when I was training a pretrained object detection model (Faster R-CNN), and this worked for me perfectly:
pip uninstall pycocotools
pip install pycocotools-windows
Similar situation here. My code was working; then I started to include PyTables. At first glance there was no reason for errors. I then decided to use another function that has a domain constraint (an ellipse) and received the following error:
TypeError: 'numpy.float64' object cannot be interpreted as an integer
or
TypeError: 'numpy.float64' object is not iterable
The crazy thing: the previous function I was using, with no code changed, started to return the same error. My intermediary function, already in use, was:
def MinMax(x, mini=0, maxi=1):
    # Clamp x into [mini, maxi]: min against the upper bound,
    # max against the lower bound.
    return max(min(x, maxi), mini)
The solution was to avoid numpy or math:
def MinMax(x, mini=0, maxi=1):
    # Clamp each element of the sequence x into [mini, maxi] using
    # plain list comprehensions instead of numpy/math helpers.
    x = [x_aux if x_aux > mini else mini for x_aux in x]
    x = [x_aux if x_aux < maxi else maxi for x_aux in x]
    return x
Then, everything was calm again. It was as if one of the libraries had taken possession of max and min!
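As an aside (not from the original answer), numpy's own np.clip performs this clamping directly and handles both scalars and arrays:
import numpy as np

print(np.clip(1.7, 0, 1))                          # 1.0
print(np.clip(np.array([-0.5, 0.3, 2.0]), 0, 1))   # [0.  0.3 1. ]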
While I appreciate this is not the OP's problem, I just had this error message for a very different reason and this is the top result so I'm posting my problem and resolution here.
I had this code:
x = np.ndarray([1.0, 2.0, 3.0], dtype=np.float_)
Notice the subtle mistake? np.ndarray is the numpy array class, and its first argument is the shape, so the floats here are interpreted as dimensions, hence the integer error. You usually don't construct it directly; instead you use the array() helper function:
x = np.array([1.0, 2.0, 3.0], dtype=np.float_)
Switching to the second form solved my problem.
This problem may occur when using an old version of numpy. In my case I was using 1.18.5; I upgraded to 1.19.5 and the failure went away.
After upgrading, if you are using Jupyter, you should restart the kernel.
