tensorflow gpu serving without docker on "windows" - python

I am trying to serve a TensorFlow model with Nvidia GPU support on Windows 10 (version 20H2, OS Build 19042.1165). As far as I understand, the best way to do the serving is with the Docker image tensorflow/serving:latest-gpu. But to do this on Windows we need to install nvidia-docker2 using WSL2, and my organization doesn't allow us to register in the Windows Insider Program; without it, I am unable to install the CUDA toolkit in WSL2.
So, is there any other way to serve the TF model with "GPU support" other than using Docker?

It looks like the only solution is to build from source, but that is not officially supported for Windows.
Here is the link if someone wants to build TF Serving from source:
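For reference, the build itself looks roughly like this. This is a sketch based on the official source-build guide; the Bazel target is the documented one, but the exact flags can vary by version, and again, Windows is not officially supported:
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
# build the model server with GPU (CUDA) support
bazel build -c opt --config=cuda //tensorflow_serving/model_servers:tensorflow_model_server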

Related

Docker image with tensorflow and cudf

I've been working with VS Code development containers and have managed to build two separate containers that leverage GPU support inside the container.
The first container builds tensorflow-gpu into a cuda:11.5.2-cudnn8 runtime image.
In the other container I'm using cudf, and I've tried a couple of build variations from the RAPIDS install guide. However, installing both tensorflow-gpu and cudf into the same environment has been troublesome due to package conflicts, notably with protobuf.
I did at one point get them installed into the same image using a rapidsai devel image, but conda took well over an hour to resolve the environment, the final image was something like 30 GB, and there were still some bugs.
Any tips on getting cudf and tensorflow-gpu to run in the same environment?
To get RAPIDS and Tensorflow into the same container, use CUDA Toolkit (CTK) 11.2. I think this is the only CTK version compatible with both libraries right now.
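A minimal sketch of building such an environment with conda. The channels and the tensorflow-gpu package name here are illustrative assumptions, not a tested recipe; check the RAPIDS install guide for the exact pins for your cudf version:
# one environment pinned to CUDA Toolkit 11.2 for both libraries
conda create -n rapids-tf -c rapidsai -c nvidia -c conda-forge \
    cudf cudatoolkit=11.2 tensorflow-gpu
conda activate rapids-tf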

how to run GPU on Mac OS Big Sur/Jupyter notebook

I am trying to create a GPU environment in Jupyter Notebook to run CNN models but have had trouble. I am on macOS (Big Sur) and was following the instructions from: https://www.techentice.com/how-to-make-jupyter-notebook-to-run-on-gpu/
First, I understand that to create a separate GPU environment in Jupyter I need the CUDA toolkit. However, I found out that the CUDA toolkit no longer supports Mac.
Second, I understand that I have to download TensorFlow GPU, which apparently doesn't support Mac/Python 3.7.
I would be grateful for any help or advice. Essentially I just want to be able to run my code on a GPU, as the CPU is way too slow for machine learning models. Is there any way around this?

how to use GPU in kaggle_python docker image

I installed the kaggle_python Docker image from this tutorial:
http://blog.kaggle.com/2016/02/05/how-to-get-started-with-data-science-in-containers/
This image is great, but I don't know how to use the GPU in it. Does anyone have any idea?
Nvidia has released a docker runtime that allows docker containers to access their host GPU. Assuming the image you're running has the CUDA libraries built in, you ought to be able to install nvidia-docker as per their instructions, then just launch a container using docker run --runtime=nvidia ...
There's an FAQ for using nvidia-docker if you run into other roadblocks. I haven't done this myself, but lots of issues are probably going to be specific to how you installed the drivers and CUDA libraries on your particular machine. You may also have to modify the image to include any necessary CUDA libraries if they aren't already installed.
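For example, launching the image with the GPU exposed would look something like this (kaggle/python is the image name from the tutorial above; adjust it to whatever you actually pulled or built):
# run on the nvidia runtime so the container can see the host GPU
docker run --runtime=nvidia --rm -it -p 8888:8888 kaggle/python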
Did you download the CUDA branch (link: https://github.com/Kaggle/docker-python/tree/cuda)? If so, all the infrastructure for the GPUs should already be set up and ready to go. Otherwise, you're going to have to do the setup yourself. :)

Tensorflow Object Detection API on Windows

TensorFlow recently released their new Object Detection API. Is there any way to run this on Windows? The directions appear to be for Linux.
Yes, you can run the Tensorflow Object Detection API on Windows. Unfortunately it is a bit tricky and the official documentation does not reflect that appropriately. I used the following procedure:
Install Tensorflow natively on Windows with Anaconda + CUDA + cuDNN. Note that TF 1.5 is now built against CUDA 9.0, so make sure you download the appropriate versions.
Then you clone the repository and build the Protobuf files as described in the tutorial, but beware, there is a bug in Windows Protobuf 3.5, so make sure you use version 3.4.
cd [TF-models]\research
protoc.exe object_detection/protos/*.proto --python_out=.
Finally, you need to build and install the packages with
cd [TF-models]\research\slim
python setup.py install
cd [TF-models]\research
python setup.py install
If you get the exception "error: could not create 'BUILD': Cannot create a file when that file already exists" here, delete the BUILD file first; it will be re-created automatically.
And make the built packages available on your Python path, or simply copy the directories slim and object_detection to your [Anaconda3]/Lib/site-packages directory.
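If you go the Python-path route, that would look something like this on Windows (using the same [TF-models] placeholder as above):
rem add the research and slim directories to Python's module search path
set PYTHONPATH=%PYTHONPATH%;[TF-models]\research;[TF-models]\research\slim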
To see everything put together, check out our Music Object Detector, which was trained on Windows and Linux.
We don't officially support the Tensorflow Object Detection API on Windows, but some external users have gotten it to work.
Our dependencies are pillow, lxml, jupyter, matplotlib and protobuf compiler. You can download a version of the protobuf compiler here. The remaining dependencies can be installed with pip.
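Installing the pip dependencies would look something like:
pip install pillow lxml jupyter matplotlib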
As I said in the other post, you can use your local GPU on Windows, as TensorFlow supports GPU execution from Python.
And here is an example.
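A minimal sketch of checking GPU availability (written against the TF 1.x API used elsewhere in this thread, not the poster's original example):
import tensorflow as tf
from tensorflow.python.client import device_lib

# list every device TensorFlow can see; a working install shows a
# device with device_type "GPU" (e.g. "/device:GPU:0")
print(device_lib.list_local_devices())

# True if TensorFlow can run ops on a GPU in this process
print(tf.test.is_gpu_available())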
Unfortunately, TensorFlow does not support tensorflow-serving on Windows. Also, as you said, Nvidia-Docker is not supported on Windows, and Bash on Windows has no GPU support either. So I think this is the only easy way to go for now.
The tutorial below was built specifically for using the TensorFlow Object Detection API on Windows. I've successfully used it many times:
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

Where should Tensorflow serving be cloned to?

I've installed Tensorflow but now wish to productionize my model.
So I'm trying to follow this guide:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md
Does TensorFlow Serving run alongside TensorFlow?
If so, where should I clone the repo so that the packages can be seen on the Python library path?
Many thanks.
This had not been developed as of 03/13/2017; see this issue. There has been no binary release of TensorFlow Serving yet, so the only way you can import the packages is by cloning the repo and doing your development inside the cloned project.
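In other words, the clone location doesn't need to be on your Python path, since you work inside the repo. Roughly, per the setup guide of that era (which pulled tensorflow in as a git submodule):
# clone serving together with its bundled tensorflow submodule
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving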
To answer your question about serving running alongside Tensorflow: the tensorflow module inside of serving is the tensorflow project, so basically tensorflow_serving comes with its own tensorflow.
