I installed the kaggle_python Docker image from this tutorial:
http://blog.kaggle.com/2016/02/05/how-to-get-started-with-data-science-in-containers/
This image is perfect, but I don't know how to use the GPU in it. Does anyone have any ideas?
Nvidia has released a Docker runtime that allows Docker containers to access the host's GPU. Assuming the image you're running has the CUDA libraries built in, you ought to be able to install nvidia-docker as per their instructions, then just launch a container using docker run --runtime=nvidia ...
There's an FAQ for nvidia-docker if you run into other roadblocks. I haven't done this myself, but many issues are probably going to be specific to how you installed the drivers and CUDA libraries on your particular machine. You may also have to modify the image to include any necessary CUDA libraries if they aren't already installed.
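A minimal sketch, assuming an Ubuntu host with the NVIDIA driver already installed and NVIDIA's package repository added (the image name below is a placeholder):

    # install the runtime and restart the Docker daemon
    sudo apt-get install -y nvidia-docker2
    sudo systemctl restart docker

    # launch a container with GPU access and verify the GPU is visible
    docker run --runtime=nvidia --rm kaggle/python-gpu nvidia-smi

If nvidia-smi prints your GPU from inside the container, the image's CUDA libraries should be able to find it too.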
Did you download the CUDA branch (link: https://github.com/Kaggle/docker-python/tree/cuda)? If so, all the infrastructure for the GPUs should already be set up and ready to go. Otherwise, you're going to have to do the setup yourself. :)
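If not, pulling and building the CUDA branch looks roughly like this (the image tag is a placeholder; check the repo's README for the current build steps):

    git clone --branch cuda https://github.com/Kaggle/docker-python.git
    cd docker-python
    docker build -t kaggle/python-gpu .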
I've been working with VS Code development containers and have managed to build two separate containers that leverage GPU support inside the container.
The first container built tensorflow-gpu into a cuda:11.5.2-cudnn8 runtime image.
In the other container I'm using cudf, and I've tried a couple of build variations from the RAPIDS install guide. However, installing both tensorflow-gpu and cudf into the same environment has been troublesome due to package conflicts, notably with protobuf.
I did at one point get them to install into the same image using a rapidsai devel image, but conda took well over an hour to resolve the environment, the final image was something like 30 GB, and there were still some bugs.
Any tips on getting cudf and tensorflow-gpu to run in the same environment?
To get RAPIDS and TensorFlow into the same container, use CUDA Toolkit (CTK) 11.2. I think this is the only CTK version currently compatible with both libraries.
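A minimal sketch with conda, assuming the standard RAPIDS channels (the environment name is made up, and the versions the solver picks will depend on the current RAPIDS support matrix):

    # pin the CUDA toolkit to 11.2 so cudf and tensorflow resolve against the same CTK
    conda create -n rapids-tf -c rapidsai -c nvidia -c conda-forge \
        cudf cudatoolkit=11.2 tensorflow-gpu
    conda activate rapids-tf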
I am trying to serve a TensorFlow model with Nvidia GPU support on Windows 10 (version 20H2, OS build 19042.1165). To the best of my understanding, the best way to do the serving is with the Docker image tensorflow/serving:latest-gpu. But to do this on Windows we need to install nvidia-docker2 using WSL2, and my organization doesn't allow us to register in the Windows Insider program; without it, I am unable to install the CUDA toolkit in WSL2.
So, is there any other way to serve the TF model with GPU support, other than using Docker?
It looks like the only solution is to build from source, but that is not officially supported for Windows.
Here is the link if someone wants to build TF Serving from source:
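In outline, the documented Linux build goes roughly like this (a sketch; the exact flags and the CUDA configuration vary by version, and on Windows you are on your own):

    git clone https://github.com/tensorflow/serving
    cd serving
    # build the model server with CUDA support; this can take hours
    bazel build -c opt --config=cuda tensorflow_serving/model_servers:tensorflow_model_server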
I tried to build a Docker container with Python and the tensorflow-gpu package for a ppc64le machine. I installed miniconda3 in the container and used the IBM repository to install all the necessary packages. To my surprise, the resulting container was twice as big (7 GB) as its amd64 counterpart (3.8 GB).
I think the reason is that the packages from the IBM repository are bloating the installation. I did some research and found two files, libtensorflow.so and libtensorflow_cc.so, in the tensorflow_core directory. Both of these files are about 900 MB in size, and they are not installed in the amd64 container.
It seems these two files are the API files for programming with C and C++. So my question is: if I am planning on only using Python in this container, can I just delete these two files, or do they serve another purpose in the ppc64le installation of TensorFlow?
Yes. Those are added as there were many requests for it and it's a pain to cobble together the libraries and headers yourself for an already built TF .whl.
They can be removed if you'd rather have the disk space.
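For example, something like this at the end of the Dockerfile should reclaim the space (the miniconda prefix is a guess; adjust it to wherever your site-packages live):

    # drop the C/C++ API libraries (~900 MB each); not needed for Python-only use
    RUN find /opt/miniconda3 -name 'libtensorflow.so*' -delete && \
        find /opt/miniconda3 -name 'libtensorflow_cc.so*' -delete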
What is the content of your "amd64 container"? Just a pip install tensorflow?
I'm working on a Python project that needs PyLucene (a Python wrapper for Lucene, a Java library for search-engine programming).
I've created a Dockerfile that automatically downloads and compiles PyLucene and then installs the other needed pip dependencies. Building this Dockerfile gives me a Docker image with all the dependencies (both PyLucene and the ones installed using pip).
By setting this image as the remote Python interpreter in PyCharm I can run my code, but now I need to release my software in a way that allows it to be executed without PyCharm or any other IDE that supports remote interpreters.
I thought about creating another Dockerfile that starts from the dependency image and copies my source into it, obtaining an image where the code can be executed (see the sketch below).
I don't like this solution much because the objective of my project is processing large offline datasets, so with this approach the user of the image always has to specify bindings between the container and the host filesystem.
Are there any better options? Maybe creating an archive that contains my source, PyLucene, and the pip dependencies?
Windows 10 64-bit, Python 3.8.2, PyLucene latest version (8.3.0)
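For concreteness, the two-Dockerfile approach described above would look something like this (all names are placeholders):

    # second Dockerfile: start from the dependency image and add the source
    FROM pylucene-deps:latest
    COPY . /app
    WORKDIR /app
    ENTRYPOINT ["python", "main.py"]

and the user would then run it with a bind mount for the dataset, which is exactly the part I'd like to avoid:

    docker run --rm -v /host/datasets:/data my-project:latest /data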
I have a program that processes videos using foreground detection in OpenCV 2.4.9 / Python on Windows, packaged as a Windows executable using py2exe. I recently updated OpenCV to OpenCV 3 and repackaged my program. When I run it on my computer (with OpenCV 3 installed locally), everything goes fine.
However, when a user downloads the program and runs it on another computer, they get the warning
Failed to load OpenCL runtime
This seems to be just a warning, and I can detect no performance issues.
I have a couple of options: I can suppress this specific warning in a try statement, or I can somehow turn off OpenCL on my computer when packaging the program. Suggestions on either strategy would be appreciated. Is there anything I am overlooking? To my understanding, the OpenCL library is for GPU acceleration.
Thanks,
The solution would be to compile the OpenCV libs without OpenCL and then link them to your application.
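With CMake that would be something like this (a sketch; the rest of your configuration stays whatever you normally use):

    # configure OpenCV with OpenCL support compiled out, then build as usual
    cmake -D WITH_OPENCL=OFF -D CMAKE_BUILD_TYPE=Release ..
    cmake --build . --config Release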
I encountered the same problem; here's my solution:
1. Go to the Intel website and download the OpenCL library, then unzip it.
2. Run the install.sh file.
If your install fails because of update-alternatives errors, it may be because you are using an Ubuntu/Debian distro and the Intel install package has a wrong setting in it. To solve this, xfanzone did a very good job on this; take a look here.
3. Download the patch zip file and patch your OpenCL package.
4. Install it again; now it should work fine.
If you just don't need to use OpenCL, you can set the environment variable as below:
export OPENCV_OPENCL_RUNTIME=999
and if you later want to turn OpenCL back on:
export OPENCV_OPENCL_RUNTIME=
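You can also turn OpenCL off from inside the program itself, which may be more convenient for a py2exe-packaged app (cv2.ocl is part of the OpenCV 3 Python bindings; disabling it early should also avoid the warning):

    import cv2

    # tell OpenCV never to use OpenCL, so it won't try to load the runtime
    cv2.ocl.setUseOpenCL(False)
    print(cv2.ocl.useOpenCL())  # False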