OpenCV CUDA accelerated: Python can't see GPU device

I installed OpenCV with GPU support in Python, following tutorials on YouTube.
I ran into a major difficulty when trying to check whether Python recognizes the GPU.
After the installation, I executed this code to verify whether my GPU is detected:
import cv2
from cv2 import cuda
cuda.printCudaDeviceInfo(0)
The first output was:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
cv2.error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\core\include\opencv2/core/private.cuda.hpp:106: error: (-216:No CUDA support) The library is compiled without CUDA support in function 'throw_no_cuda'
So I thought my install was wrong, and I did the installation again.
After many attempts, I tried the same verification code as before, but this time from inside the site-packages folder of my Miniconda install (the same location as the GPU-enabled cv2).
Surprisingly, when I call cuda.printCudaDeviceInfo(0) from there, the output is:
*** CUDA Device Query (Runtime API) version (CUDART static linking) ***
Device count: 1
Device 0: "NVIDIA T400"
...
Compute Mode:
Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.70, CUDA Runtime Version = 11.70, NumDevs = 1
So the GPU is detected when I run Python from that folder.
But I want to be able to use Python from other folders.
I thought it was a PATH problem, so I added the cv2 location to the PATH in my system environment variables, but I got the same result.
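For reference, here is a minimal way to check which cv2 module Python actually picks up from a given folder (cv2.getBuildInformation() is part of the standard bindings; its dump has a CUDA section that should read YES for a GPU build):
import cv2
# Path of the cv2 module that was actually imported
print(cv2.__file__)
# Full build configuration; look for the CUDA section in this dump
print(cv2.getBuildInformation())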
Does anyone have an idea about how to fix this?
Thank you.

Related

Problem importing TensorFlow 2 in Python (running on WSL in Windows)

Problem: I followed Microsoft's instructions to properly install and run TensorFlow 2 in WSL with GPU acceleration, using DirectML (here's the document).
After the installation, when I try to import tensorflow in Python I get the following output:
>>> import tensorflow
2022-11-22 15:52:33.090032: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pietro/miniconda3/envs/testing/lib/python3.9/site-packages/tensorflow/__init__.py", line 440, in <module>
    _ll.load_library(_plugin_dir)
  File "/home/pietro/miniconda3/envs/testing/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: /home/pietro/miniconda3/envs/testing/lib/python3.9/site-packages/tensorflow-plugin/libtfdml_plugin.so: undefined symbol: _ZN10tensorflow8internal15LogMessageFatalD1Ev, version tensorflow
I tried instead to follow the instructions for TensorFlow 1 and PyTorch (just in case something was wrong with my machine), and they both work perfectly, so I assume this issue somehow involves only TensorFlow 2.
Did anyone encounter the same problem?
Thanks to everybody in advance :)
Pietro
Had the same problem, and downgrading TensorFlow from 2.11 fixed it. First remove the existing version:
pip uninstall tensorflow-cpu
Then re-install, this time with 2.10.0:
pip install tensorflow-cpu==2.10.0
After that, try importing it in Python. You should see something like the following (apologies for the messy output):
>>> import tensorflow as tf
2022-11-28 22:41:21.693757: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-28 22:41:21.806150: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2022-11-28 22:41:22.982148: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdirectml.d6f03b303ac3c4f2eeb8ca631688c9757b361310.so
2022-11-28 22:41:22.982289: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdxcore.so
2022-11-28 22:41:22.996385: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libd3d12.so
2022-11-28 22:41:27.615851: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters.
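At this point you can also confirm that the downgrade took effect (a quick sanity check in the same session):
print(tf.__version__)  # should print 2.10.0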
You can test that it works by adding two tensors. Run a command like the following:
print(tf.add([1.0, 2.0], [3.0, 4.0]))
And somewhere in the output, you should be able to verify that DirectML has found your GPU:
2022-11-28 22:43:42.632447: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (NVIDIA GeForce RTX 3080)
Hope this helps!

Python OpenCV with Cuda not working after successful build

I am on Windows 10, using Python 3.9.6, and my cv2 version is 4.4.0. I built OpenCV with CUDA successfully, and cv2.cuda.getCudaEnabledDeviceCount() returns 1 as expected. The following lines also work fine.
net = cv2.dnn.readNetFromCaffe(proto_file, weights_file)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
# multiple lines
# processing frame
# and setting input blob
net.setInput(in_blob)
However, executing the following line throws an exception.
output = net.forward()
The exception:
cv2.error: OpenCV(4.4.0) G:\opencv-4.4.0\opencv-4.4.0\modules\dnn\src\dnn.cpp:2353: error: (-216:No CUDA support) OpenCV was not built to work with the selected device. Please check CUDA_ARCH_PTX or CUDA_ARCH_BIN in your build configuration. in function 'cv::dnn::dnn4_v20200609::Net::Impl::initCUDABackend'
The message says that my OpenCV build does not work with the selected device (which I'm guessing is my GPU).
It seems to come down to a conflict with CUDA_ARCH_BIN and/or CUDA_ARCH_PTX. My GPU model is an NVIDIA GeForce MX130, whose CUDA_ARCH_BIN value I found to be 6.1, and I set it accordingly in CMake.
How can I resolve this? Let me know if I need to provide any more information.
"Sources say" the MX130 has a Maxwell core, not a Pascal core. Maxwell is the predecessor of Pascal.
Hence, you only have CUDA compute capability 5.0.
You should check that with an appropriate tool such as GPU-Z that does its best to query the hardware instead of going by specs.
Sources:
https://en.wikipedia.org/wiki/GeForce_10_series#GeForce_10_(10xx)_series_for_notebooks (notice how the Fab (nm) is different and the code name is GM108, not GPxxx)
https://www.techpowerup.com/gpu-specs/geforce-mx130.c3043
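You can also query the compute capability from a CUDA-enabled OpenCV build itself; the device dump (the same printCudaDeviceInfo call used in the first question on this page) includes a "CUDA Capability Major/Minor version number" line:
import cv2
# Prints a deviceQuery-style report for device 0, including compute capability
cv2.cuda.printCudaDeviceInfo(0)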

(-216:No CUDA support) The library is compiled without CUDA support in function 'throw_no_cuda'

Hi, I built OpenCV from source and I would like to use it on the GPU, so I set all the CUDA flags, but I still get this error whenever I try to put an image on the GPU. Does anyone have an idea? Here is how the configuration looks in the end here. I have already struggled for hours with this. I think the configuration is right, but maybe some path isn't?
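For context, this is the kind of call that triggers it (a minimal sketch; "test.png" stands in for any image on disk):
import cv2
img = cv2.imread("test.png")   # load any image from disk
gpu = cv2.cuda_GpuMat()        # GPU matrix; requires a CUDA-enabled build
gpu.upload(img)                # this is where throw_no_cuda is raised
The full error: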
cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-v7sdauef/opencv/modules/core/include/opencv2/core/private.cuda.hpp:106: error: (-216:No CUDA support) The library is compiled without CUDA support in function 'throw_no_cuda'

LD_PRELOAD for using other versions of libc isn't working in pwntools

I want to use other versions of libc for my pwn study with pwntools, but an EOFError occurs.
To solve this issue I changed Ubuntu versions three times (18.04 desktop -> 14.04 desktop -> 18.04.0 server) and reinstalled Python and pwntools four times.
Currently the versions are Ubuntu 18.04.0 server, Python 2.7.15rc1, and pwntools 3.12.2.
I tried loading the other libc version like this:
p = process("./binary_name",env={"LD_PRELOAD" : "./libc_name"})
and I also tried:
env = {"LD_PRELOAD": os.path.join(os.getcwd(), "libc_name")}
p = process("./binary_name",env=env)
When I execute the Python code, this error occurs.
I already set the permissions of the libc with chmod 777, but the result is the same.
[*] Process './aeiou' stopped with exit code -4 (SIGILL) (pid 77469)
Traceback (most recent call last):
  File "ex4.py", line 6, in <module>
    p.sendlineafter(">>","3")
  File "/home/synod2/.local/lib/python2.7/site-packages/pwnlib/tubes/tube.py", line 747, in sendlineafter
EOFError
I don't know why the EOFError occurred, but since three different Ubuntu versions give the same error, I think I missed installing something.
But I don't know what I missed!
Maybe you should try it on Ubuntu 16.
Your binary is evidently dynamically linked, so when the program needs to call a libc function such as read, it hands some information to the dynamic linker, and the linker computes the real address of the read function.
But functions in libc carry a version attribute. So if you use LD_PRELOAD on Ubuntu 18.04, the dynamic linker will look for something like read_2_27 in your 2.23-version libc, which only has read_2_23, and your program will fail to execute.
UPDATE:
Another solution is to tell the executable to use the correct version of ld.so.
An ELF file has a segment (INTERP) that stores the path of the ld.so to use; you can simply change it to the path of the ld.so you want.
BTW, you can find many versions of ld.so in the repository.
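A related trick that avoids patching the binary is to invoke the matching dynamic linker directly and pass the target as its argument (a sketch; "./ld-2.23.so" and "./libc_name" are placeholder paths for a loader/libc pair from the same glibc version):
from pwn import *
# Run the binary through an explicit loader so the interpreter and the
# preloaded libc come from the same glibc version.
p = process(["./ld-2.23.so", "./binary_name"], env={"LD_PRELOAD": "./libc_name"})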

DEVICE_NOT_FOUND while calling pyopencl.Context

I am struggling with the following Python code:
import pyopencl as cl
ctx = cl.Context(dev_type=cl.device_type.GPU)
It gives the following exception:
RuntimeError: clcreatecontextfromtype failed: DEVICE_NOT_FOUND
My OS is Linux Mint Debian Edition 2, running on a laptop with an i7-5600U. It also has a graphics card, but I do not use it. I am using Python 3.4.2.
I have installed the Debian package amd-opencl-icd (I first tried beignet, but then the command clinfo failed).
I have installed pyopencl using pip and OpenCL using this tutorial. Note that I did not do the fourth step (creating the symbolic link to intel64.icd), since I did not have this file. The test at the end of the tutorial succeeded.
Do you have any hint about what is happening? I am surprised that the C++ OpenCL test (in the tutorial) and the installation of pyopencl both succeed, while this simple pyopencl call fails.
EDIT
After installing the Intel driver, I now have a different issue.
The command clinfo gives the following:
terminate called after throwing an instance of 'unsigned long'
And the above Python code gives:
LogicError: clcreatecontextfromtype failed: INVALID_PLATFORM
You've installed the Intel OpenCL SDK, which gives you the compiler and maybe the CPU runtime. You're trying to create a context consisting of GPU devices, which means you need the runtime for Intel HD Graphics. Grab the 64-bit driver from the link below.
https://software.intel.com/en-us/articles/opencl-drivers#latest_linux_driver
The CPU runtime is also available from that link. You need to follow the same procedure as before for the OpenCL HD Graphics driver (converting the .rpm to a .deb). The CPU driver has a script you can execute.
The INVALID_PLATFORM error you got after installing the runtime appears to be because pyopencl expects the platform to be passed as a property when creating a context from a device type. The properties are given as a list of key-value tuples, as shown in the snippet below for the first available platform: the key is one of the values in context_properties, and the value is the platform object itself.
import pyopencl as cl
platforms = cl.get_platforms()
# Pass the platform explicitly as a context property; creating a context
# from a device type alone can fail with INVALID_PLATFORM.
ctx = cl.Context(dev_type=cl.device_type.GPU,
                 properties=[(cl.context_properties.PLATFORM, platforms[0])])
print(ctx.devices)
On my platform this prints
[<pyopencl.Device 'Intel(R) HD Graphics 4600' on 'Intel(R) OpenCL' at 0x1c04b217140>]
as my first platform is Intel.
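If the device still isn't found, it can help to enumerate everything each platform exposes before creating a context (a small diagnostic sketch using only standard pyopencl calls):
import pyopencl as cl
# List every platform and the devices it exposes, to see what the ICDs report
for platform in cl.get_platforms():
    print(platform.name, platform.vendor)
    for device in platform.get_devices():
        print("  ", device.name, cl.device_type.to_string(device.type))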
