What is the difference between TensorFlow and Keras? - python

I am currently working with neural networks in Keras, and I know that it works with TensorFlow in the back-end. I have TensorFlow installed on the GPU, but I don't know whether Keras uses the GPU or whether it is something completely separate from TensorFlow.

TensorFlow is a mid-level framework that performs operations on tensors. Keras is a high-level API that simplifies the creation and training of neural networks. Keras doesn't do any of the tensor ops itself; it delegates those to its backend, which is a mid-level framework of your choosing: TensorFlow, CNTK, or Theano. Each of those frameworks can be configured to do the tensor ops in whatever ways they can (as far as I am aware, each of them can use either CPUs or GPUs). Keras, however, doesn't really care how the ops get done. It just tells the backend to do them, and they get done.
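To see this in practice, you can ask Keras which backend it is delegating to, and ask TensorFlow which devices it can see. A minimal check along these lines (assuming standalone Keras with the TensorFlow backend installed):

    # Sanity check: which backend is Keras delegating to, and does
    # TensorFlow see a GPU? (Assumes Keras with the TF backend.)
    import keras.backend as K
    print(K.backend())  # e.g. 'tensorflow'

    from tensorflow.python.client import device_lib
    devices = [d.name for d in device_lib.list_local_devices()]
    print(devices)  # a GPU shows up as something like '/device:GPU:0'

If a GPU appears in that device list, the tensor ops Keras hands to the TensorFlow backend will run on it without any change to your Keras code.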

Related

Is there any alternative to tf-agents in PyTorch?

We are using tf-agents in TensorFlow for reinforcement learning, and because of limitations with static computation graphs we are planning to migrate our code to PyTorch.
tf-agents is great: it has very good documentation and saves a lot of time that would otherwise go into re-implementing the same tasks.
We are wondering if the PyTorch community has something similar?
RLlib is an alternative that supports PyTorch.
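For illustration, selecting the PyTorch backend in RLlib is a one-line config switch. A minimal sketch (the exact config keys depend on your Ray/RLlib version):

    # Train PPO on CartPole using RLlib's PyTorch implementation.
    # The "framework" key selects torch instead of the default TensorFlow.
    import ray
    from ray import tune

    ray.init()
    tune.run(
        "PPO",
        config={
            "env": "CartPole-v1",
            "framework": "torch",
        },
        stop={"training_iteration": 10},
    )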

Is it possible to make predictions with a Keras/TensorFlow model without downloading TensorFlow and all its dependencies?

I'm trying to use a custom Keras model, trained with tensorflow-gpu on my desktop, on an Android phone, and I need to run it with Python on the phone as well. I looked at TensorFlow Lite, but that appears to be written for Java.
Is there any lite (Python) version of TensorFlow, some kind of bare-bones package that is just set up for making predictions from a TensorFlow/Keras model file? I'm trying to save space, so a solution under 50 MB would be ideal.
Thanks
TensorFlow Serving was built for the specific purpose of serving pre-trained models. I'm not sure whether it runs on Android (or how difficult it would be to make it run), or whether its compiled footprint is under 50 MB. If you can make it work, please do report back here!
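Worth noting: TensorFlow Lite is not Java-only; it ships a small Python interpreter (the tflite_runtime package, a few MB) that can run a converted Keras model. A rough sketch, with illustrative file names:

    # Desktop side: convert the trained Keras model to TFLite
    # (TF 2.x API; "my_model.h5" is an illustrative path).
    import tensorflow as tf

    model = tf.keras.models.load_model("my_model.h5")
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open("my_model.tflite", "wb") as f:
        f.write(converter.convert())

    # Device side: only the lightweight tflite_runtime package is needed.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="my_model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    x = np.zeros(inp["shape"], dtype=inp["dtype"])  # replace with real input
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))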

Is there any example of TensorFlow inference with multiple GPUs?

I just got an intern position in which I need to help with serving inference requests on multiple GPUs. However, all the GitHub resources I could find are about training.
Is there any example of using multiple GPUs to do inference in TensorFlow (Python)?
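One common pattern is to load one model replica per GPU and dispatch incoming batches across them in round-robin fashion. A minimal sketch (TF 2.x Keras API; the model path is illustrative and at least one GPU is assumed visible):

    # Sketch: one model replica per GPU, batches dispatched round-robin.
    import tensorflow as tf

    gpus = tf.config.list_logical_devices("GPU")  # assumes len(gpus) >= 1
    replicas = []
    for dev in gpus:
        with tf.device(dev.name):
            replicas.append(tf.keras.models.load_model("my_model.h5"))

    def predict(batches):
        results = []
        for i, batch in enumerate(batches):
            k = i % len(replicas)
            with tf.device(gpus[k].name):
                results.append(replicas[k].predict(batch))
        return results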

TensorFlow Object Detection API with GPU on Windows and real-time detection

I am testing the new TensorFlow Object Detection API in Python, and I succeeded in installing it on Windows using Docker. However, my trained model (Faster R-CNN ResNet-101, COCO) takes up to 15 seconds to make a prediction (with very good accuracy, though), probably because I am only using TensorFlow on the CPU.
My three questions are:
Considering the latency, where is the problem? I heard Faster R-CNN was a good model for low-latency visual detection; is it because of the CPU-only execution?
With such latency, is it possible to do efficient real-time video processing with TensorFlow on a GPU, or should I use a more popular model like YOLO?
The popular means of using TensorFlow with a GPU in Docker is nvidia-docker, but it is not supported on Windows. Should I keep looking for a Docker (or conda) solution for local prediction, or should I deploy my model directly to a virtual instance with a GPU (I am comfortable with Google Cloud Platform)?
Any advice and/or good practice concerning real-time video processing with Tensorflow is very welcome!
Considering the latency, where is the problem? I heard Faster R-CNN was a good model for low-latency visual detection; is it because of the CPU-only execution?

Of course, it's because you are using the CPU.
With such latency, is it possible to do efficient real-time video processing with TensorFlow on a GPU, or should I use a more popular model like YOLO?

YOLO is fast, but when I once used it for face detection its accuracy was not that great. Still, it is a good alternative.
The popular means of using TensorFlow with a GPU in Docker is nvidia-docker, but it is not supported on Windows. Should I keep looking for a Docker (or conda) solution for local prediction, or should I deploy my model directly to a virtual instance with a GPU (I am comfortable with Google Cloud Platform)?

I think you can still use your local GPU on Windows, as TensorFlow supports GPUs in Python.
Here is an example that does exactly that: it has a client which can read a webcam or IP-camera stream, while the server uses the TensorFlow Python GPU build and a ready-to-use pre-trained model for predictions.
Unfortunately, TensorFlow Serving is not supported on Windows. Also, as you said, nvidia-docker is not supported on Windows, and Bash on Windows has no GPU support either. So I think this is the only easy way to go for now.
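Before going down the server route, it is worth confirming that the Windows-native TensorFlow GPU build actually sees the card. A quick check (a sketch, not from the original answer):

    # Confirm TensorFlow's GPU build can use the local CUDA GPU on Windows.
    import tensorflow as tf
    print(tf.test.is_gpu_available())  # True means ops can be placed on the GPU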

Training Neural Networks on Hadoop Cluster

I have been studying neural networks for some weeks. Even though I have always used R, the Keras library in Python was really helpful for someone with a small programming background like me.
Keras is a very nice interface that allows all the customization I need without ever invoking the backend, except for some custom loss metrics I used.
The hardware specification is just as straightforward: for example, it lets you switch from the CPU of the machine where Python and Keras are installed to that machine's (compatible) GPU, exploiting the strong parallelism of neural networks during training (the switch itself is sketched after this question).
I was wondering if there is anything that lets you switch to training neural networks on a Hadoop cluster with the same kind of ease.
Moreover, is there some open-source Hadoop cluster available to do so?
Thank you for your help
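As a point of reference for the CPU/GPU switch mentioned above: with the TensorFlow backend this typically comes down to a single environment variable, set before Keras is imported (a minimal sketch; no comparably simple switch exists for Hadoop):

    # Sketch of the CPU/GPU switch described above (TensorFlow backend):
    # set CUDA_VISIBLE_DEVICES before importing Keras.
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs -> run on CPU
    # os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose the first GPU instead

    import keras  # the backend reads the variable when the session is created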
