I am trying to deploy a Google Cloud Function that uses the universal-sentence-encoder model.
However, if I add in the dependencies to my requirements.txt:
tensorflow==2.1
tensorflow-hub==0.8.0
then the function fails to deploy with the following error:
Build failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage": "gzip_tar_runtime_package gzip /tmp/tmpOBr2rZ.tar -1\nexited with error [Errno 12] Cannot allocate memory\ngzip_tar_runtime_package is likely not on the path", "errorType": "InternalError", "errorId": "F57B9E18"}}
What does this error mean?
How can I fix it?
Note that the code for the function itself is just the demo code provided by Google when you click "create function" in the web console. It deploys when I remove these requirements; when I add them, it breaks.
This error can happen when the size of the deployed files is larger than the available Cloud Function memory. The gzip_tar_runtime_package could not be installed because memory could not be allocated.
Make sure you are only using the required dependencies. If you are uploading static files, make sure you only upload necessary files.
After that, if you are still facing the issue, try increasing the Cloud Function memory by setting the --memory flag in the gcloud functions deploy command as explained here.
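For example, a redeploy with more memory might look like the sketch below; the function name, trigger, and runtime are placeholders, not values from the question:
gcloud functions deploy my_function --runtime python37 --trigger-http --memory 2048MB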
EDIT:
There is currently a known issue with Tensorflow 2.1 in Cloud Functions.
The current workaround would be to use Tensorflow 2.0.0 or 2.0.1
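With that workaround, the requirements.txt from the question would become something like the following (the tensorflow-hub pin is kept from the question on the assumption it still applies):
tensorflow==2.0.1
tensorflow-hub==0.8.0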
I am getting the below error in an Azure Function for Python. Please see the screenshot below.
Whenever I try to open the Azure Function (Python) in the portal, I get the above error.
Let me know if anyone has any idea regarding this error.
This error can be very difficult to debug, because it can have multiple root causes. In my case, I suspect the root cause was a failure during pip package installation, but this is difficult to verify because I was not able to drill into the pip logs. The deployment log does not contain information about the pip installation, and some of the logs were unavailable because the host runtime was down.
I followed these best practices to finally make the Python function deployment succeed:
Use remote build (app setting: SCM_DO_BUILD_DURING_DEPLOYMENT: 1); see the CLI sketch after this list
Make sure that the AzureWebJobsStorage application setting is configured to point to the correct Function Storage
Do not include local .venv/ directory in deployment (add it to .funcignore)
Make sure the dependencies can be installed on the local Virtual Environment without conflicts
Test that the function runs locally without errors
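As a minimal sketch, the remote-build setting can be applied with the Azure CLI; the function app and resource group names are placeholders for your own values:
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1
A .funcignore file in the project root would then contain at least the line .venv/ to keep the local virtual environment out of the deployment.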
In requirements.txt, I had the following lines. Note that there is no need to specify the azure-functions version, since it is determined by the platform; pinning it is only useful for local linting etc.
pip==21.2.*
azure-functions
As a side note, it is not necessary to specify "Build from package" (app setting: WEBSITE_RUN_FROM_PACKAGE: 1); this seems to be enabled by default.
My deployment configuration:
OS: Ubuntu 21.04
Functions Python version: 3.9
Functions Runtime Extension version: 4
Deployed with VS Code Azure extension
Has anyone encountered this problem before? I can't run amplify push to deploy my AWS function and API.
This is the error I'm getting:
This will depend on your current local configuration, but I managed to fix this issue by editing my Pipfile to use Python 3.9 instead of Python 3.8.
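For reference, the relevant Pipfile section after the change would look something like this (a sketch assuming pipenv's standard [requires] block):
[requires]
python_version = "3.9"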
In my Azure ML pipeline I've got a PythonScriptStep that is crunching some data. I need to access the Azure ML logger to track metrics in the step, so I'm trying to import get_azureml_logger, but that's bombing out. I'm not sure which dependency I need to install via pip.
from azureml.logging import get_azureml_logger
ModuleNotFoundError: No module named 'azureml.logging'
I came across a similar post, but it deals with Azure Notebooks. Anyway, I tried adding that wheel's blob URL to my pip dependencies, but it fails with an auth error.
Collecting azureml.logging==1.0.79
ERROR: HTTP error 403 while getting https://azuremldownloads.blob.core.windows.net/wheels/latest/azureml.logging-1.0.79-py3-none-any.whl?sv=2016-05-31&si=ro-2017&sr=c&sig=xnUdTm0B%2F%2FfknhTaRInBXyu2QTTt8wA3OsXwGVgU%2BJk%3D
ERROR: Could not install requirement azureml.logging==1.0.79 from https://azuremldownloads.blob.core.windows.net/wheels/latest/azureml.logging-1.0.79-py3-none-any.whl?sv=2016-05-31&si=ro-2017&sr=c&sig=xnUdTm0B%2F%2FfknhTaRInBXyu2QTTt8wA3OsXwGVgU%2BJk%3D (from -r /azureml-environment-setup/condaenv.g4q7suee.requirements.txt (line 3)) because of error 403 Client Error: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. for url: https://azuremldownloads.blob.core.windows.net/wheels/latest/azureml.logging-1.0.79-py3-none-any.whl?sv=2016-05-31&si=ro-2017&sr=c&sig=xnUdTm0B%2F%2FfknhTaRInBXyu2QTTt8wA3OsXwGVgU%2BJk%3D
I'm not sure how to move on this, all I need to do is to log metrics in the step.
Check out the ScriptRunConfig section of the "Monitor Azure ML experiment runs and metrics" documentation. ScriptRunConfig works effectively the same way as a PythonScriptStep.
The idiom is generally to have the following in the script of your PythonScriptStep:
from azureml.core.run import Run

# Get the context of the run this script is executing in
run = Run.get_context()
# Log a named metric against that run
run.log('foo_score', 'bar')
Side note: you don't need to change your environment's dependencies to use this, because PythonScriptSteps have azureml-defaults installed automatically as a dependency.
With the Zappa sample application, deploying into AWS using the zappa deploy command, all the steps appear to happen as expected, as shown below.
(env) E:\Projects_EDrive\AWS\Zappa\zappa_examples\Zappa\example>zappa deploy dev_api
(Werkzeug 0.12.2 (c:\python27\lib\site-packages), Requirement.parse('Werkzeug==0.12'), set([u'zappa']))
Calling deploy for stage dev_api..
Downloading and installing dependencies..
Packaging project as zip.
Uploading dev-api-zappa-test-flask-app-dev-api-1503456512.zip (302.6KiB)..
100%|#######################################################################################################################| 310K/310K [00:08<00:00, 37.9KB/s]
Uploading dev-api-zappa-test-flask-app-dev-api-template-1503456531.json (1.6KiB)..
100%|#####################################################################################################################| 1.65K/1.65K [00:01<00:00, 1.04KB/s]
Waiting for stack dev-api-zappa-test-flask-app-dev-api to create (this can take a bit)..
75%|############################################################################################2 | 3/4 [00:10<00:05, 5.56s/res]
Deploying API Gateway..
Deployment complete!: https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/dev_api
But when accessing the above endpoint, I get an internal error response.
I then checked the created S3 bucket: no file was uploaded, the bucket is empty.
I also checked the Lambda function: it still has the default code, hence the internal error response. As per the logs, it has no module named builtins:
"Unable to import module 'handler': No module named builtins"
How do I debug a Zappa deployment, and how do I install Python packages?
You could try troubleshooting with the python-lambda-local tool. It tries its best to mimic the real Lambda environment.
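A minimal sketch of an invocation, assuming your handler function is named handler in handler.py and a sample event lives in event.json:
python-lambda-local -f handler handler.py event.json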
Remove the dependencies and recreate the virtualenv. It should work.
Ref: https://github.com/Miserlou/Zappa/issues/1222
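For debugging the deployed function itself, Zappa can also stream the function's logs; with the stage name from the question that would be:
zappa tail dev_api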
I've been trying to install the Plot.ly Python SDK. I have it included in requirements.txt, but it still fails, and I get a Page Not Found error when calling a page served by Flask.
The problem with Plot.ly is that it requires the credentials to be installed:
import plotly
plotly.tools.set_credentials_file(username='SomeDemoAccount', api_key='SomeAPIKey')
And this won't run as code, nor from SSH in the console, because the instance doesn't have access to the ~/.plotly/.credentials file, i.e. it can neither create it nor access it, so any call to the API will always fail. In the AWS logs you'll get the following error:
Looks like you don't have 'read-write' permission to your 'home' ('~') directory or to our '~/.plotly' directory. That means plotly's python api can't setup local configuration files. No problem though! You'll just have to sign-in using 'plotly.plotly.sign_in()'. For help with that: 'help(plotly.plotly.sign_in)'.
So the solution is to call the plotly.plotly.sign_in() method, which isn't even mentioned in their getting started guide or the API reference, and it must be called with the following arguments:
plotly.plotly.sign_in("Your Plotly Username","Your Plotly API Key")
I implemented this by having those values as EB Environment Properties:
import os  # plotly is already imported above
plotly.plotly.sign_in(os.environ['YOUR_PLOTLY_USERNAME_ENV_PROPERTY_NAME'], os.environ['YOUR_PLOTLY_API_KEY_ENV_PROPERTY_NAME'])
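Those environment properties can be set with the EB CLI, for example (a sketch using the placeholder property names from above; substitute your own values):
eb setenv YOUR_PLOTLY_USERNAME_ENV_PROPERTY_NAME=YourUsername YOUR_PLOTLY_API_KEY_ENV_PROPERTY_NAME=YourApiKey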