Where should TensorFlow Serving be cloned to? - python

I've installed TensorFlow but now wish to productionize my model.
So I'm trying to follow this guide:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md
Does TensorFlow Serving run alongside TensorFlow?
If so, where should I clone the repo to so that the packages can be seen on the Python library path?
Many thanks.

As of 03/13/2017, this has not been developed. See this issue. There has not been a binary release of TensorFlow Serving yet, so the only way you can import the packages is by cloning the repo and doing your development inside the cloned project.
To answer your question about Serving running alongside TensorFlow: the tensorflow module inside of serving is the TensorFlow project itself, so tensorflow_serving effectively comes with its own TensorFlow.
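A minimal sketch of that workflow, assuming Bazel is installed (the clone-with-submodules step comes from the setup guide linked above; the exact build target may differ for your version):

git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
bazel build tensorflow_serving/...

Because you develop inside the clone and build and run through Bazel from the repo root, nothing needs to be added to the Python library path.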

Related

tensorflow gpu serving without docker on "windows"

I am trying to serve a TensorFlow model with Nvidia GPU support on Windows 10 (version 20H2, OS Build 19042.1165). To the best of my understanding, the best way to do the serving is using the Docker image tensorflow/serving:latest-gpu. But to do this on Windows we need to install nvidia-docker2 using WSL2, and my organization doesn't allow us to register in the Windows Insider Program, so without it I am unable to install the CUDA toolkit in WSL2.
So, is there any other way to serve the TF model with "GPU support" other than using Docker?
It looks like the only solution is to build from source, but that is not officially supported for Windows.
Here is the link if someone wants to build TF Serving from source:

How do I access OpenCV source compiled package from my Python project venv on windows?

So I have been writing a Python program that utilizes OpenCV for Windows. It's mostly just a project to learn how to use vision-based machine learning. I've gotten the project working with my CPU, and while it "functions", it is abysmally slow, so I wanted to try working with the GPU version instead. Unfortunately, there isn't a Python package that has OpenCV with CUDA-enabled GPU functionality (from what I could tell).
So after researching, I found that in order to do what I wanted, I had to compile the OpenCV source code with CMake. I set out to do so with the help of this guide, which seemed to work (it compiled, at least). But now I'm running into the following issue:
I don't actually know how to import this newly built package into my project's venv. I've tried moving cv2.cp37-win_amd64.pyd into the project's venv site-packages, and I've tried moving the entire build folder into the project and importing it directly, but neither actually worked... so I'm a little at a loss on what to do to get OpenCV with CUDA-enabled GPU support working in my project.
EDIT:
Following @Miki's suggestion, I went through the guide that they linked, including installing Anaconda and all that jazz, and tested that OpenCV was in fact being built correctly. The package imports correctly when I use:
set path=%openCvBuild%\install\x64\vc16\bin;%path%
python -c "import cv2; print(f'OpenCV: {cv2.__version__} for python installed and working')"
But it still isn't importable from Python inside conda, despite being in the env's site-packages folder.
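For reference, a minimal sketch of the usual suspect here (the install path below is hypothetical; substitute your own build location): the cv2 .pyd only imports if the OpenCV DLLs it was built against are resolvable, and on Python 3.7 that resolution goes through PATH at import time, which is what the set path line above does for a single shell session.

import os

# Hypothetical install location - substitute your own build path.
opencv_bin = r"C:\opencv\build\install\x64\vc16\bin"

# Prepend the DLL directory before importing cv2; on Python 3.7 the
# .pyd resolves its OpenCV DLLs through the process PATH.
os.environ["PATH"] = opencv_bin + os.pathsep + os.environ["PATH"]

import cv2
print(cv2.__version__)
print(cv2.cuda.getCudaEnabledDeviceCount())  # > 0 means the CUDA build is active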

How to do inference using TensorFlow-GPU models on Tegra X2?

I am new to the Jetson Tegra X2 board.
I plan to run my tensorflow-gpu models on the TX2 board and see how they perform there. These models were trained and tested on a GTX GPU machine.
On the TX2 board, the full JetPack does not include TensorFlow, so TensorFlow needs to be built/installed, which I have seen several tutorials on and tried. My Python files train.py and test.py expect tensorflow-gpu.
Now I wonder whether building tensorflow-gpu on the TX2 board is the right way to go.
There is Nvidia TensorRT on the TX2, which will do part of the job, but how? And is that the right approach?
Will TensorFlow and TensorRT work together to replace tensorflow-gpu? But how? And what modifications will I have to make in my train and test Python files?
Do I really need to build TensorFlow for the TX2 at all? I only need inference; I don't want to do training there.
I have studied different blogs and tried several options, but now things are a bit messed up.
My simple question is:
What are the steps to get inference done on the Jetson TX2 board using TensorFlow-GPU deep learning models trained on a GTX machine?
The easiest way is to install the NVIDIA-provided wheel: https://docs.nvidia.com/deeplearning/dgx/install-tf-jetsontx2/index.html
All the dependencies are already installed by JetPack.
After you install TensorFlow using the wheel, you can use it however you use TensorFlow on other platforms. For running inference, you can download a TensorFlow model onto the TX2 and run your TensorFlow inference scripts on it.
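A minimal TF 1.x sketch of that workflow (the file name and tensor names below are placeholders, not something from your training setup):

import numpy as np
import tensorflow as tf

# Load a frozen GraphDef trained on the GTX machine.
# "frozen_model.pb", "input:0" and "output:0" are hypothetical names;
# substitute whatever your exported graph actually uses.
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy input
    out = sess.run("output:0", feed_dict={"input:0": batch})
    print(out.shape)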
You can also optimize your TensorFlow models by passing them through TF-TRT: https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html
There is just one API call that does the optimization: create_inference_graph(...)
This will optimize the TensorFlow graph (mostly by fusing nodes), and also lets you build the model for lower precision to get a better speedup.
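A short sketch of that call as it looked in the TF 1.x contrib API (the output name, batch size, and workspace size are assumptions to be adapted):

import tensorflow.contrib.tensorrt as trt

# Convert TensorRT-compatible subgraphs of a frozen GraphDef into
# TRT engine ops; unsupported ops stay as regular TensorFlow ops.
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,        # the frozen graph loaded as above
    outputs=["output:0"],             # hypothetical output tensor name
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode="FP16")            # "FP32", "FP16" or "INT8"

The returned trt_graph is itself a GraphDef, so it can be imported and run with the same Session code as before.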
I built TensorFlow on the Jetson TX2 following this guide. It provides instructions and wheels for both Python 2 and Python 3.
https://github.com/jetsonhacks/installTensorFlowTX2
If you are new to the Jetson TX2, also take a look at this "Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson". (This does not require a TensorFlow installation, since JetPack already builds TensorRT.)
https://github.com/dusty-nv/jetson-inference#building-from-source-on-jetson
If you have TensorFlow-trained graphs that you want to run inference on with the Jetson, then you need to first install TensorFlow. Afterwards, it is recommended (though not compulsory for inference) that you optimize your trained models with TensorRT. Check out these repos for object detection/classification examples that use TensorRT optimization.
https://github.com/NVIDIA-AI-IOT/tf_trt_models
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification
You can find the tensorflow-gpu wheel files for the TX2, for both Python 2.7 and Python 3.5, at this link on Nvidia's Developer Forum.
https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-8-wheel-with-jetpack-3-2-/

installing tensorflow without internet connection

I have to install TensorFlow + Keras on my computer, but I have no internet connection on it, so I have to download the packages on another computer and then transfer them to mine.
My question is: where can I safely download TensorFlow + Keras, and then how can I install them using Anaconda?
Thanks a lot for helping!
I will just assume you are using a Linux machine.
For TensorFlow, you can follow the guide provided by florian.
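A common pattern for that offline step, sketched here on the assumption that pip is available on both machines (pip download also grabs tensorflow's dependencies):

# on the machine with internet access
pip download tensorflow -d tf_packages
# copy the tf_packages folder to the offline machine, then:
pip install --no-index --find-links=tf_packages tensorflow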
Step 1
For Keras you just need to git clone or download the repository at https://github.com/keras-team/keras
Once you have cloned or downloaded the repository on the machine connected to the internet, transfer it to the one you want to use it on.
Step 2
Open a terminal and navigate to the transferred keras folder with cd. Run the setup script by typing python setup.py install into the console, and you are done.
Step 3
To verify the installation you can run one of the examples. Navigate to the examples folder inside the keras folder and type python mnist_cnn.py into the console. If everything was installed correctly you should see the network training and producing output. Otherwise, an error message will be displayed in the console.
Here is the Windows version for Keras and Anaconda.
Step 1
Copy the downloaded folder of keras to %USERPROFILE%\Anaconda3\Lib\site-packages. Then use cd %USERPROFILE%\Anaconda3\Lib\site-packages\keras to get to the keras folder.
Step 2
In the same terminal, type python setup.py develop to install Keras on Windows.
Step 3
To check Keras, navigate into the examples folder and run the same example as in the Linux Step 3.
My Windows skills are very rusty, so I won't be able to help you troubleshoot problems here. I recommend installing Linux on a second partition if you want to dive into deep learning, since it is a lot easier to set up a system for DL on Linux than on Windows; also, if you want to use AWS later, it is cheaper on Linux than on Windows.

Tensorflow Object Detection API on Windows

TensorFlow recently released their new Object Detection API. Is there any way to run this on Windows? The directions appear to be for Linux.
Yes, you can run the Tensorflow Object Detection API on Windows. Unfortunately it is a bit tricky and the official documentation does not reflect that appropriately. I used the following procedure:
Install Tensorflow natively on Windows with Anaconda + CUDA + cuDNN. Note that TF 1.5 is now built against CUDA 9.0, so make sure you download the appropriate versions.
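For example (a sketch; the environment name and Python version are assumptions, not requirements of the API):

conda create -n tf python=3.6
activate tf
pip install tensorflow-gpu==1.5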
Then you clone the repository and build the Protobuf files as described in the tutorial. But beware: there is a bug in the Windows Protobuf 3.5 release, so make sure you use version 3.4.
cd [TF-models]\research
protoc.exe object_detection/protos/*.proto --python_out=.
Finally, you need to build and install the packages with
cd [TF-models]\research\slim
python setup.py install
cd [TF-models]\research
python setup.py install
If you get the exception error: could not create 'BUILD': Cannot create a file when that file already exists here, delete the BUILD file inside first; it will be re-created automatically.
And make the built packages available on your PYTHONPATH, or simply copy the directories slim and object_detection to your [Anaconda3]/Lib/site-packages directory.
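For the PYTHONPATH route, something like this (using the same [TF-models] placeholder as above):

set PYTHONPATH=%PYTHONPATH%;[TF-models]\research;[TF-models]\research\slim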
To see everything put together, check out our Music Object Detector, which was trained on Windows and Linux.
We don't officially support the TensorFlow Object Detection API on Windows, but some external users have gotten it to work.
Our dependencies are pillow, lxml, jupyter, matplotlib and the protobuf compiler. You can download a version of the protobuf compiler here. The remaining dependencies can be installed with pip.
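For example:

pip install pillow lxml jupyter matplotlib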
As I said in the other post, you can use your local GPU on Windows, as TensorFlow supports GPU in Python.
And here is an example.
Unfortunately, TensorFlow does not support TensorFlow Serving on Windows. Also, as you said, Nvidia-Docker is not supported on Windows, and Bash on Windows has no GPU support either. So I think this is the only easy way to go for now.
The tutorial below was built specifically for using the TensorFlow Object Detection API on Windows. I've successfully used it many times:
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
