I'd like to use Google Cloud Functions to deploy a Keras model saved as JSON (with weights in HDF5), using TensorFlow as the backend.
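(The loading code itself is just the standard JSON-plus-HDF5 Keras pattern; the file names and handler name below are placeholders:)

from keras.models import model_from_json

# Rebuild the architecture from JSON, then attach the HDF5 weights.
with open("model.json") as f:   # placeholder file names
    model = model_from_json(f.read())
model.load_weights("weights.h5")

def handler(request):  # Cloud Functions HTTP entry point (placeholder name)
    # ...build inputs from the request and call model.predict()
    ...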
The deployment succeeds when I don't specify tensorflow in requirements.txt. However, when testing the function in GCP, I get an error saying that tensorflow could not be found:
Error: function crashed. Details:
No module named 'tensorflow'
First, I find it quite strange that Google doesn't provide environments with tensorflow pre-installed.
But now, if I specify tensorflow in requirements.txt, deployment fails with the error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Build failed: USER ERROR: `pip_download_wheels` had stderr output:
Could not find a version that satisfies the requirement tensorflow (from -r /dev/stdin (line 5)) (from versions: )
No matching distribution found for tensorflow (from -r /dev/stdin (line 5))
Is there a way I can get tensorflow on Cloud Functions, or does Google deliberately block the install to push us toward ML Engine?
EDIT: Tensorflow 1.13.1 now supports Python 3.7.
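So pinning a release that ships Python 3.7 wheels in requirements.txt should now deploy, e.g.:

tensorflow==1.13.1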
Previous answer:
There isn't currently a way to use tensorflow on Google Cloud Functions.
However, it's not because Google is deliberately blocking it: the tensorflow package only provides built distributions for CPython 2.7, 3.3, 3.4, 3.5 and 3.6, but the Cloud Functions Python runtime is based on Python version 3.7.0, so pip (correctly) can't find any compatible distributions.
There are currently some compatibility issues between TensorFlow and Python 3.7, but once those are fixed, tensorflow should be installable on Google Cloud Functions. For now, though, you'll have to use ML Engine.
Related
On Google Colab I am trying to implement Mol_dqn from the paper Optimization of Molecules via Deep Reinforcement Learning. I have used the code from Google Research's GitHub here.
The model relies on TensorFlow version 1, which Google Colab no longer supports.
How can I get the model to run? How could I update the code scripts to run on TensorFlow 2? Is this the only option?
When I try to execute one of the python scripts, the error "ModuleNotFoundError: No module named 'tensorflow.contrib'" occurs.
I have tried uninstalling Tensorflow and reinstalling version 1.5, but Google Colab would not allow it.
I tried the command
%tensorflow_version 1.x
but Google Colab no longer supports it.
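One workaround that is often suggested (a sketch; it won't cover tensorflow.contrib, which was removed outright and has no compatibility shim) is to run the TF1 code through TensorFlow 2's v1 compatibility layer:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restore TF1 graph-mode semantics

# TF1-style graph code then runs largely unchanged, e.g.:
x = tf.placeholder(tf.float32, shape=(None, 4))
y = tf.layers.dense(x, units=2)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

Any tf.contrib call in the Mol_dqn scripts would still need to be replaced by hand with its TF2 (or tf-slim / TensorFlow Addons) equivalent.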
I have a Jupyter notebook in which I've built a script for extracting data from a Google Sheet using these two imports:
from googleapiclient.discovery import build
from google.oauth2 import service_account
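(For context, the script is essentially the standard service-account Sheets read; the key file name, spreadsheet ID, and range below are placeholders:)

from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)  # placeholder key file
sheets = build("sheets", "v4", credentials=creds)
result = sheets.spreadsheets().values().get(
    spreadsheetId="SPREADSHEET_ID", range="Sheet1!A1:C10").execute()
rows = result.get("values", [])  # list of rows from the sheet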
I'm trying to copy it to AWS Lambda and I'm having trouble uploading these three libraries to a layer:
google-api-python-client
google-auth-httplib2
google-auth-oauthlib
I downloaded them from pypi.org. Each has only one download option and doesn't specify which versions of Python 3 it's compatible with, except google-api-python-client, whose description says "Python 3.7, 3.8, 3.9, 3.10 and 3.11 are fully supported and tested."
I just checked, and my Jupyter notebook is running Python 3.10. I've also copied the script into VS Code, and there the libraries likewise appear to work only under Python 3.10, which is odd, since at least one of them should work across versions. It makes me think I'm doing something wrong.
Also, it doesn't look like Lambda supports 3.10. So is there no way to run the Google libraries on it, or do I need to use older versions?
If you don't have 3.9 locally, you can use Docker to run it inside a container and see which packages you need.
FROM amazon/aws-lambda-python:3.9
RUN pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
Build it:
docker build . --progress=plain
See logs:
#5 24.65 Successfully installed cachetools-5.3.0 certifi-2022.12.7 charset-normalizer-3.0.1 google-api-core-2.11.0
google-api-python-client-2.77.0 google-auth-2.16.0
google-auth-httplib2-0.1.0 google-auth-oauthlib-1.0.0
googleapis-common-protos-1.58.0 httplib2-0.21.0 idna-3.4
oauthlib-3.2.2 protobuf-4.21.12 pyasn1-0.4.8 pyasn1-modules-0.2.8
pyparsing-3.0.9 requests-2.28.2 requests-oauthlib-1.3.1 rsa-4.9
six-1.16.0 uritemplate-4.1.1 urllib3-1.26.14
So your requirements.txt for Python 3.9 will look like:
google-api-python-client==2.77.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==1.0.0
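To turn those into a layer, the usual packaging (a sketch; it assumes the pip you run matches Python 3.9, e.g. inside the container above) is to install into a python/ directory, the layout Lambda's Python runtime expects, and zip it:

pip install -r requirements.txt -t python/
zip -r layer.zip python/
# upload layer.zip as a Lambda layer and attach it to your function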
I recommend you work locally using the same version of Python and its packages. Docker is great for that!
I am trying to use from summarizer import Summarizer, TransformerSummarizer (a.k.a. the bert-extractive-summarizer library) in Python to do text summarization with models like BERT, GPT-2 and others.
But when I try this import I get an error (the first part is only a warning, but I can't run my code):
UserWarning: "sox" backend is being deprecated. The default backend will be changed to "sox_io" backend in 0.8.0 and "sox" backend will be removed in 0.9.0. Please migrate to "sox_io" backend. Please refer to https://github.com/pytorch/audio/issues/903 for the detail.
warnings.warn(
then a traceback, and at the end this:
AttributeError: module transformers.models.cpm has no attribute CpmTokenizer
How can I fix this?
PS: I saw on GitHub that the suggested fix for the torchaudio problem is torchaudio.set_audio_backend("sox_io"), but how to use it is not made clear. If anyone knows the solution to the problem, please write out a detailed step-by-step process.
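(Presumably that call is meant to run once, before anything that pulls in torchaudio is imported; a sketch, which would at most silence the warning, not the CpmTokenizer error:)

import torchaudio
torchaudio.set_audio_backend("sox_io")  # opt in to the new backend before other imports

from summarizer import Summarizer, TransformerSummarizer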
I would suggest installing the tensorflow-gpu package before importing any transformers library, as transformers internally checks whether TensorFlow is installed.
pip install tensorflow-gpu
More information on how to install TensorFlow with GPU support is available in the TensorFlow documentation.
I have just started using the Amazon DeepLens device, and I got the demo face detection application working. I have a custom trained TensorFlow model (.pb) that I want to deploy on DeepLens. I followed an online guide to create a new project with a lambda (Python 3.7) and uploaded and imported my model from an S3 bucket. Now my issue is that I need to install TensorFlow on the device for Python 3. I tried installing various versions, and was even successful once, but I still get an error in the logs saying Tensorflow Module not found.
I have a couple of questions regarding this:
My Lambda has a Python 3.7 execution environment. Is this correct, or should I match the one on DeepLens (3.5)?
Can I upgrade Python on DeepLens to the latest version?
If not, which TensorFlow version is supported for Python 3.5 on DeepLens, and what is the correct command to install it: pip or pip3? (A sketch of the unambiguous form follows.)
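(Invoking pip through the exact interpreter the device will use sidesteps the pip-vs-pip3 ambiguity; the version pin below is a hypothetical choice for Python 3.5:)

python3 -m pip install "tensorflow<2.0"   # hypothetical pin
python3 -c "import tensorflow as tf; print(tf.__version__)"   # confirm the module is visible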
Any help or insight is appreciated.
I'm trying to build a Python Lambda to send images to TensorFlow Serving for inference. I have at least two dependencies: cv2 and tensorflow_serving.apis. I've worked through multiple tutorials showing it's possible to run TensorFlow in a Lambda, but they provide the package to install and don't explain how they got it to fit within the 256 MB unzipped limit.
How to Deploy ... Lambda and TensorFlow
Using TensorFlow and the Serverless Framework...
I've tried following the official packaging instructions, but this alone downloads 475 MB of dependencies:
$ python -m pip install tensorflow-serving-api --target .
Collecting tensorflow-serving-api
Downloading https://files.pythonhosted.org/packages/79/69/1e724c0d98f12b12f9ad583a3df7750e14ec5f06069aa4be8d75a2ab9bb8/tensorflow_serving_api-1.12.0-py2.py3-none-any.whl
...
$ du -hs .
475M .
I see that others have fought this dragon and won (1) (2) by doing contortions to rip out all unused libraries from all dependencies, or by compiling from scratch. But such extremes strike me as complicated, and hopefully outdated in a world where data science and lambdas are almost mainstream. Is it true that so few people are using TensorFlow Serving with Python that I'll have to jump through such hoops to get one working as a Lambda? Or is there an easier way?
The goal is to not actually have tensorflow on the client side, as it uses a ton of space but isn't really needed for inference. Unfortunately the tensorflow-serving-api requires the entire tensorflow package, which by itself is too big to fit into a lambda.
What you can do instead is build your own client rather than using that package. This involves using the grpcio-tools package for the protobuf communication, and the various .proto files from tensorflow and tensorflow serving.
Specifically, you'll want to package up these files:
tensorflow/serving/
tensorflow_serving/apis/model.proto
tensorflow_serving/apis/predict.proto
tensorflow_serving/apis/prediction_service.proto
tensorflow/tensorflow/
tensorflow/core/framework/resource_handle.proto
tensorflow/core/framework/tensor_shape.proto
tensorflow/core/framework/tensor.proto
tensorflow/core/framework/types.proto
From there you can generate the Python protobuf files.
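For example, a minimal sketch of both steps (the server address, model name, and input key are placeholders; this assumes prediction_service.proto is trimmed down to the Predict RPC, since its other RPCs import additional protos):

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. \
    tensorflow_serving/apis/*.proto tensorflow/core/framework/*.proto

The generated modules then give you a small client with no tensorflow dependency:

import grpc
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")  # placeholder serving address
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"  # placeholder model name
# Fill request.inputs["images"] with a hand-built TensorProto;
# tf.make_tensor_proto is not available without the full tensorflow package.
response = stub.Predict(request, timeout=10.0)

The deployment package then only needs grpcio, protobuf, and the generated files, which is far smaller than the 475 MB above.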