Install TensorFlow for Python 3 on DeepLens - python

I have just started using the Amazon DeepLens device. I got the demo face detection application working. I have a custom-trained TensorFlow model (.pb) that I want to deploy on DeepLens. I followed an online guide to create a new project with a Lambda (Python 3.7) and uploaded and imported my model from an S3 bucket. Now my issue is that I need to install TensorFlow on the device for Python 3. I tried installing various versions and was even successful once, but I still get an error in the logs saying the TensorFlow module was not found.
I have a couple of questions regarding this:
My Lambda has a Python 3.7 execution environment. Is this correct, or should I match the one on DeepLens (3.5)?
Can I upgrade Python on DeepLens to the latest version?
If not, which TensorFlow version is supported for Python 3.5 on DeepLens, and what is the correct command to install it: pip or pip3?
Any help or insight is appreciated.
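A quick diagnostic sketch (an assumption about a debugging step, not part of the original setup): run the snippet below on the DeepLens with both python and python3 to see which interpreter TensorFlow is actually visible to, which is usually where the pip-vs-pip3 confusion comes from.
import sys

# Show which interpreter is running and whether it can import tensorflow.
print(sys.executable, sys.version)
try:
    import tensorflow as tf
    print("tensorflow", tf.__version__)
except ImportError as err:
    print("tensorflow is not importable from this interpreter:", err)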

Related

Failed to install TensorFlow on an EC2 instance (Ubuntu 20.04)

I have a Flask application that I would like to run on an EC2 instance, and TensorFlow is needed because it does image classification. However, after the necessary updates and upgrades, I try to install TensorFlow, but after the progress bar completes I never see "Successfully installed tensorflow==2.7.0". Images are attached. Is there any reason why it is not letting me install TensorFlow, or does the instance have limitations that prevent it? Please help, and thanks in advance.
To install TensorFlow on the EC2 instance:
Increase the storage on the hard disk (30 GB in my case)
pip install tensorflow-cpu
I also faced the same error, and apparently storage was not the reason. I tried using a larger instance with more memory and was able to install TensorFlow in the Anaconda environment successfully.
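Both answers point at resources running out mid-install. A rough diagnostic sketch (not from either answer) that prints free disk and available memory before retrying, so you can tell which limit you are hitting:
import shutil

# Free disk space on the root volume.
total, used, free = shutil.disk_usage("/")
print("free disk: %.1f GB" % (free / 1e9))

# Available memory; /proc/meminfo is Linux-specific (Ubuntu 20.04 here).
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemAvailable"):
            print(line.strip())
            break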

Installing TensorFlow and Keras on an Intel Pentium

For a university course we are supposed to implement a TensorFlow project using the Python libraries for TensorFlow and Keras. I can install both of them just fine using pip3, but executing any piece of code results in some kind of error.
I've settled on testing the very complicated code:
import keras
Using Python 3.6 and the newest TensorFlow and Keras (pip3 install tensorflow keras), I get the error ModuleNotFoundError: No module named 'tensorflow.python'; 'tensorflow' is not a package. I checked, and import tensorflow finds the package, but it returns an error about AVX instructions and dumps core.
I researched, and my CPU does not support the AVX instructions that TensorFlow >= 1.6.0 requires. I could not find a precompiled version that runs on my laptop without AVX, and I don't have the time to compile one myself.
I tried downgrading to tensorflow == 1.5.0 and keras == 2.1.3, which was the current Keras version back when tensorflow == 1.5.0 was released, but I still get import errors, a different one for each version and import statement.
For example when I use the code:
import keras
from keras.datasets import mnist
I instead get the error AttributeError: module 'keras.utils' has no attribute 'Sequence'. I'm on an Intel Pentium, which I assume is the problem. I am fully aware that my setup is in no way suitable for machine learning, and it isn't supposed to be, but I'd still like to work on the assignment.
Anyone got experience with installing TensorFlow on older machines?
System:
Ubuntu 18.04.2 LTS
Intel(R) Pentium(R) 3556U @ 1.70GHz (Dual Core)
4GB RAM
I had the same trouble, but I seem to have solved it. (However, the Python version must be 3.5.)
For CPUs that do not support AVX, TensorFlow must be version 1.5 or lower.
If you want to install TensorFlow 1.5, the Python version must be 3.5 or lower.
The successful procedure is as follows.
(1) Uninstall your Anaconda.
(2) Download this version of Anaconda: Anaconda3-4.2.0-Windows-x86_64.exe, from https://repo.anaconda.com/archive/ or directly from https://repo.anaconda.com/archive/Anaconda3-4.2.0-Windows-x86_64.exe
(3) Double-click the Anaconda installer from (2) and install Anaconda following the GUI instructions.
(4) Start Anaconda Prompt.
(5) Enter "pip install tensorflow==1.5" in Anaconda Prompt and press the return key. Wait for the installation to finish. (See the log.)
(6) Enter "pip install keras==2.2.4" in Anaconda Prompt and press the return key. Wait for the installation to finish. (See the log.)
This completes the installation. If you enter "import tensorflow" in a Jupyter notebook, a FutureWarning may be displayed. (See this log.)
System:
My PC does not support AVX either. Its specs are as follows.
PC: Surface Go
CPU: Intel(R) Pentium(R) CPU 4415Y @ 1.60 GHz
OS: Windows 10, 64-bit
How to test?
Enter and execute the following commands in a Jupyter notebook, or use this file.
import tensorflow as tf
print(tf.__version__)
print(tf.keras.__version__)
or
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
If your install is successful, the following messages will be displayed in your Jupyter notebook:
1.5.0
2.1.2-tf
P.S.
I'm not very good at English, so I'm sorry if I have some impolite or unclear expressions.
Sticking to the Pentium configuration is not recommended for default TensorFlow builds because of their AVX dependency (a quick AVX check is sketched after this list). Also, many recent advances in this area are not available in earlier builds of TF, and you will find it difficult to replicate research work. Options:
Get a Google Colab (https://colab.research.google.com/) notebook, install Keras and TF, and get going with your work.
There have been genuine requests for this support; refer to this link [https://github.com/tensorflow/tensorflow/issues/18689] where unofficial builds are provided. See if one of them works.
Build TensorFlow from scratch (very hard option), with the right set of flags for Bazel (remove all AVX/threading options).
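Before picking an option, it can help to confirm the AVX situation. The sketch below (Linux-only, matching the Ubuntu 18.04 system above; not part of the original answer) reads /proc/cpuinfo and reports whether the CPU advertises AVX, which stock TensorFlow builds from 1.6.0 onward require.
# Linux-only: check whether the CPU advertises AVX support.
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            print("AVX supported:", "avx" in line.split())
            break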

How to fit TensorFlow Serving Client API in a Python lambda?

I'm trying to build a Python Lambda to send images to TensorFlow Serving for inference. I have at least two dependencies: CV2 and tensorflow_serving.apis. I've gone through multiple tutorials showing that it's possible to run TensorFlow in a Lambda, but they provide the package to install and don't explain how they got it to fit within the 256 MB unzipped limit.
How to Deploy ... Lambda and TensorFlow
Using TensorFlow and the Serverless Framework...
I've tried following the official instructions for packaging, but this alone downloads 475 MB of dependencies:
$ python -m pip install tensorflow-serving-api --target .
Collecting tensorflow-serving-api
Downloading https://files.pythonhosted.org/packages/79/69/1e724c0d98f12b12f9ad583a3df7750e14ec5f06069aa4be8d75a2ab9bb8/tensorflow_serving_api-1.12.0-py2.py3-none-any.whl
...
$ du -hs .
475M .
I see that others have fought this dragon and won (1) (2), either by doing contortions to rip out all unused libraries from every dependency or by compiling from scratch. But such extremes strike me as complicated and hopefully outdated in a world where data science and Lambdas are almost mainstream. Is it true that so few people are using TensorFlow Serving with Python that I'll have to jump through such hoops to get one working as a Lambda? Or is there an easier way?
The goal is to not actually have tensorflow on the client side, as it uses a ton of space but isn't really needed for inference. Unfortunately the tensorflow-serving-api requires the entire tensorflow package, which by itself is too big to fit into a lambda.
What you can do instead is build your own client rather than using that package. This involves using the grpcio-tools package for the protobuf communication, plus the various .proto files from tensorflow and tensorflow serving.
Specifically you'll want to package up these files:
From the tensorflow/serving repository:
tensorflow_serving/apis/model.proto
tensorflow_serving/apis/predict.proto
tensorflow_serving/apis/prediction_service.proto
From the tensorflow/tensorflow repository:
tensorflow/core/framework/resource_handle.proto
tensorflow/core/framework/tensor_shape.proto
tensorflow/core/framework/tensor.proto
tensorflow/core/framework/types.proto
From there you can generate the python protobuf files.
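A minimal sketch of that generation step, assuming the .proto files above were copied into a local protos/ directory with their original layout (the protos/ and gen/ directory names are placeholders, and protoc will also need any additional .proto files these ones import):
import os

from grpc_tools import protoc

proto_files = [
    "tensorflow_serving/apis/model.proto",
    "tensorflow_serving/apis/predict.proto",
    "tensorflow_serving/apis/prediction_service.proto",
    "tensorflow/core/framework/resource_handle.proto",
    "tensorflow/core/framework/tensor_shape.proto",
    "tensorflow/core/framework/tensor.proto",
    "tensorflow/core/framework/types.proto",
]

os.makedirs("gen", exist_ok=True)

# Compile the protos into plain Python modules plus gRPC stubs; the resulting
# gen/ directory is what the Lambda ships instead of the full tensorflow wheel.
protoc.main(
    ["protoc", "-Iprotos", "--python_out=gen", "--grpc_python_out=gen"]
    + ["protos/" + p for p in proto_files]
)
The generated predict_pb2 and prediction_service_pb2_grpc modules are then enough to build a PredictRequest and call the PredictionService stub over a plain grpc channel.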

Deploying Google Cloud Functions with TensorFlow as a dependency

I'd like to use Google Cloud Functions to deploy a Keras model saved as JSON (with the weights in HDF5), with TensorFlow as the backend.
The deployment succeeds when I don't specify tensorflow in requirements.txt, but when testing the function in GCP I get an error saying that tensorflow could not be found:
Error: function crashed. Details:
No module named 'tensorflow'
First, I find it quite strange that Google doesn't provide environments with tensorflow pre-installed.
But if I specify tensorflow in requirements.txt, the deployment fails with the error message
ERROR: (gcloud.beta.functions.deploy) OperationError:
code=3, message=Build failed: USER ERROR:
`pip_download_wheels` had stderr output:
Could not find a version that satisfies the
requirement tensorflow (from -r /dev/stdin (line 5))
(from versions: )
No matching distribution found for tensorflow (from -r
/dev/stdin (line 5))
Is there a way I can get tensorflow on Cloud Functions, or does Google deliberately block the install to push us toward ML Engine?
EDIT: TensorFlow 1.13.1 now supports Python 3.7.
Previous answer:
There isn't currently a way to use tensorflow on Google Cloud Functions.
However, it's not because Google is deliberately blocking it: the tensorflow package only provides built distributions for CPython 2.7, 3.3, 3.4, 3.5 and 3.6, but the Cloud Functions Python runtime is based on Python version 3.7.0, so pip (correctly) can't find any compatible distributions.
There are currently some compatibility issues with TensorFlow and Python 3.7, but once that is fixed, tensorflow should be installable on Google Cloud Functions. For now though, you'll have to use ML Engine.
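Once a compatible wheel is available, a minimal sketch of the function itself could look like the following, assuming tensorflow is pinned in requirements.txt (e.g. tensorflow==1.13.1) and that model.json / weights.h5 are placeholder names for the saved Keras model described in the question:
import json

from tensorflow.keras.models import model_from_json

_model = None  # load the model once per function instance, not on every request


def predict(request):
    global _model
    if _model is None:
        with open("model.json") as f:
            _model = model_from_json(f.read())
        _model.load_weights("weights.h5")
    payload = request.get_json()  # e.g. {"instances": [[...], ...]}
    preds = _model.predict(payload["instances"])
    return json.dumps({"predictions": preds.tolist()})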

tensorflow: <built-in function AppendInt32ArrayToTensorProto> returned NULL without setting an error

I had various issues getting TensorFlow onto my system and eventually succeeded with v1.4.1. I'm trying to run this: https://github.com/sherjilozair/char-rnn-tensorflow
SystemError: built-in function AppendInt32ArrayToTensorProto returned NULL without setting an error
I searched and couldn't find this specific issue, or any patch in newer versions for this same issue.
You are using an older TensorFlow version, which is probably not compatible with your current Python version.
Check your computer configuration and install a matching TensorFlow version with the help of the following table: https://www.tensorflow.org/install/pip#package-location
Install a Python version that matches your TensorFlow version (this can also be found in the link provided above).
Check your Python version: $ python3 --version
Check your TensorFlow version: $ pip3 list | grep tensorflow
If the versions match as stated in the table above, you should be rid of the error.
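As a quick cross-check from inside the interpreter you actually run the script with (a convenience sketch, not from the original answer):
import sys

import tensorflow as tf

# Compare these two against the compatibility table linked above.
print("python    :", sys.version.split()[0])
print("tensorflow:", tf.__version__)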
I've encountered a similar problem when I was trying to run the Tensorflow image retraining script: https://github.com/tensorflow/hub/raw/master/examples/image_retraining/retrain.py
In my case the problem was caused by Tensorflow 1.11.0 not being compatible with python 3.7.0.
Steps that solved the problem for me:
Uninstall python 3.7.0.
Install python 3.6.0.
I ran the script again, and now it runs properly.
Hope it will help :)
