Zappa serverless AWS Lambda issue - Python

When deploying the Zappa sample application to AWS with the zappa deploy command, all the steps appear to complete as expected, as shown below.
(env) E:\Projects_EDrive\AWS\Zappa\zappa_examples\Zappa\example>zappa deploy dev_api
(Werkzeug 0.12.2 (c:\python27\lib\site-packages), Requirement.parse('Werkzeug==0.12'), set([u'zappa']))
Calling deploy for stage dev_api..
Downloading and installing dependencies..
Packaging project as zip.
Uploading dev-api-zappa-test-flask-app-dev-api-1503456512.zip (302.6KiB)..
100%|#######################################################################################################################| 310K/310K [00:08<00:00, 37.9KB/s]
Uploading dev-api-zappa-test-flask-app-dev-api-template-1503456531.json (1.6KiB)..
100%|#####################################################################################################################| 1.65K/1.65K [00:01<00:00, 1.04KB/s]
Waiting for stack dev-api-zappa-test-flask-app-dev-api to create (this can take a bit)..
75%|############################################################################################2 | 3/4 [00:10<00:05, 5.56s/res]
Deploying API Gateway..
Deployment complete!: https://xxxxxxxx.execute-api.us-east-1.amazonaws.com/dev_api
But when accessing the above endpoint, I get an internal error response.
I then checked the S3 bucket that was created: no file was uploaded; the bucket is empty.
I also checked the Lambda function: it still has the default code, hence the internal error response. Per the logs, it cannot find a module named builtins:
"Unable to import module 'handler': No module named builtins"
How do I debug a Zappa deployment, and how do I install Python packages?

You could try troubleshooting with the python-lambda-local tool. It does its best to mimic the real Lambda environment.
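The core idea of local testing is just invoking the handler in-process with a fake event. A minimal sketch (the handler name and event shape are hypothetical placeholders, not from the question):

```python
# Minimal sketch of what python-lambda-local automates: import the
# handler and call it directly with a fake event and no real context.
def lambda_handler(event, context):  # hypothetical handler
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "hello " + name}

if __name__ == "__main__":
    # Any ImportError in the handler's module would surface here,
    # before a deploy cycle ever starts.
    result = lambda_handler({"name": "zappa"}, None)
    print(result)
```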

Remove the dependencies and recreate the virtualenv. It should work.
Ref: https://github.com/Miserlou/Zappa/issues/1222
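A sketch of that recreate-and-redeploy sequence (a likely cause of "No module named builtins" is a missing `future` package, which provides the `builtins` module on Python 2; the package names here are assumptions, not confirmed by the issue):

```shell
# Rebuild the virtualenv from scratch so dependencies resolve cleanly.
python3 -m venv fresh_env
fresh_env/bin/python -c "import sys; print(sys.prefix)"  # new env works
# With network access (on Windows, use fresh_env\Scripts\ instead):
#   fresh_env/bin/pip install zappa flask future
#   zappa update dev_api   # repackage and redeploy the stage
#   zappa tail dev_api     # stream CloudWatch logs for the real traceback
```

`zappa tail` is the main debugging tool here: it streams the function's CloudWatch logs, so import errors show up with a full traceback.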

Related

Error when installing torch through requirements.txt for Azure web service deployment

Generating a requirements.txt file returns this for torch:
torch==1.6.0+cpu
torchvision==0.7.0+cpu
However, with +cpu, I get an error that it is not able to find what it is supposed to install.
I checked https://pypi.org/project/torch/#history and, since I couldn't find any version with "+cpu", I removed the +cpu suffix from my requirements.txt file and ran the deployment again.
Now this is where it is stuck at:
Collecting torch==1.6.0
9:41:06 PM cv-web-app: [16:41:06+0000] Downloading torch-1.6.0-cp37-cp37m-manylinux1_x86_64.whl (748.8 MB)
It is taking forever to install this and in the end I keep on getting this error:
An unknown error has occurred. Check the diagnostic log for details.
I view the diagnostic logs through the Azure portal, but I don't see anything logged beyond the installation of torch, so I am unable to figure out what the error is. Maybe I am looking in the wrong place.
How do I figure out what is wrong? What does the +cpu suffix indicate?
For context: I am building a computer vision app using Flask, my system is Windows, and I am deploying to Azure through VS Code's "Create new web app" option.
As suggested by @ryanchill, when we deploy a Python app, Azure App Service creates a virtual environment and runs pip install -r requirements.txt.
If that is not happening, make sure SCM_DO_BUILD_DURING_DEPLOYMENT is set to 1.
See Configure a Linux Python app for Azure App Service for more details.
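That setting can be applied from the Azure CLI. A dry-run sketch (the resource group and app names are placeholders; drop the `echo` to actually apply it):

```shell
# Placeholders for your own resource group and app name.
RG=my-resource-group
APP=cv-web-app
# `echo` keeps this a dry run; remove it to apply the setting for real.
echo az webapp config appsettings set \
    --resource-group "$RG" --name "$APP" \
    --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1
```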
NOTE: As also mentioned in the MS doc above, you can install PyTorch by putting its package index in requirements.txt. For example:
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.9.0+cpu
The +cpu suffix denotes PyTorch's CPU-only builds (no CUDA support); they are hosted on PyTorch's own index rather than on PyPI, which is why pip could not find torch==1.6.0+cpu there.
Reference: Python Webapp on Azure - PyTorch | MS Q&A
Also refer to these links for more information:
MS Tutorial: Deploy Python apps to Azure App Service
Blog: PyTorch Web Service deployment using Azure Machine Learning Service and Azure Web Apps from VS Code

Python build on AWS CodeBuild failing with deps error - dependencies were not changed

Morning,
I have an app using AWS CodeBuild that is rarely deployed. I went to deploy a tiny change - no change to dependencies - and the build now fails. I suspect something in AWS or Python has moved on since I last deployed.
The error is
Build Failed
Error: PythonPipBuilder:ResolveDependencies - {simplejson==3.17.3(wheel)}
simplejson has never been in my requirements.txt, so I am not sure why it is suddenly being called out.
Have you seen this before?
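One likely explanation (an assumption, since the question has no confirmed answer): simplejson is pulled in transitively, and the builder resolves wheels for every transitive dependency, not just the ones you list. A stdlib-only sketch to find which installed package declares a dependency on it:

```python
# Sketch: list which installed distributions declare a dependency on a
# given package (simplejson here), using only the standard library
# (Python 3.8+).
from importlib import metadata

def reverse_requires(target):
    """Return 'dist requires req' strings for dists depending on target."""
    hits = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # crude name match; requirement strings look like "simplejson>=3"
            if req.lower().replace(" ", "").startswith(target.lower()):
                hits.append(f"{dist.metadata['Name']} requires {req}")
    return hits

if __name__ == "__main__":
    for line in reverse_requires("simplejson"):
        print(line)
```

Run this inside the same virtualenv the build uses; whichever package shows up is the one dragging simplejson into dependency resolution.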

Deploying Google Cloud Function with Tensorflow fails

I am trying to deploy a google cloud function to use the universal-sentence-encoder model.
However, if I add in the dependencies to my requirements.txt:
tensorflow==2.1
tensorflow-hub==0.8.0
then the function fails to deploy with the following error:
Build failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage": "gzip_tar_runtime_package gzip /tmp/tmpOBr2rZ.tar -1\nexited with error [Errno 12] Cannot allocate memory\ngzip_tar_runtime_package is likely not on the path", "errorType": "InternalError", "errorId": "F57B9E18"}}
What does this error mean?
How can I fix it?
Note that the code for the function itself is just the demo code provided by Google when you click "create function" in the web console. It deploys when I remove these requirements; when I add them, it breaks.
This error can happen when the size of the deployed files is larger than the available Cloud Function memory. The gzip_tar_runtime_package could not be installed because memory could not be allocated.
Make sure you are only using the required dependencies. If you are uploading static files, make sure you only upload necessary files.
After that, if you are still facing the issue, try increasing the Cloud Function memory by setting the --memory flag on the gcloud functions deploy command, as explained here.
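For instance, a hypothetical redeploy with a larger allocation (the function name and runtime are placeholders, not from the question; drop the `echo` to run it for real):

```shell
FUNCTION=use_model      # placeholder function name
MEMORY=2048MB
# `echo` keeps this a dry run; remove it to actually redeploy.
echo gcloud functions deploy "$FUNCTION" \
    --runtime python37 --trigger-http --memory "$MEMORY"
```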
EDIT:
There is currently a known issue with Tensorflow 2.1 in Cloud Functions.
The current workaround is to use Tensorflow 2.0.0 or 2.0.1.

"ModuleNotFoundError: No module named 'django'" when trying to deploy Django server on Azure

After I tried to deploy my Django website on Azure, I got an error saying:
ModuleNotFoundError: No module named 'django'
I added a requirements.txt in the root directory of my Django project, am I missing anything else? I've tried to install Django from Kudu BASH but it gets stuck on "Cleaning Up".
Here is the full error: https://pastebin.com/z5xxqM08
I built the site using Django-2.2 and Python 3.6.8.
Just summarized as an answer for other people. According to your error information, I can see that you tried to deploy your Django app to an Azure Web App on Linux, which is based on Docker. These two official documents will help:
Quickstart: Create a Python app in Azure App Service on Linux
Configure a Linux Python app for Azure App Service
The error ModuleNotFoundError: No module named 'django' indicates that the django package is not installed in the container of the Azure Linux Web App.
Per the "Container characteristics" section of the second document above:
To install additional packages, such as Django, create a requirements.txt file in the root of your project using pip freeze > requirements.txt. Then, publish your project to App Service using Git deployment, which automatically runs pip install -r requirements.txt in the container to install your app's dependencies.
So the likely reason is that the requirements.txt file is not at the correct path of your project or container after deployment; it should be /home/site/wwwroot/requirements.txt in the container, or in the root of your project as in the official sample Azure-Samples/djangoapp on GitHub.
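A quick local sanity check along the lines of the doc quoted above, run from the project root before pushing:

```shell
# Regenerate requirements.txt at the project root and confirm Django
# is actually pinned in it before deploying.
python3 -m pip freeze > requirements.txt
grep -i '^django==' requirements.txt || echo "WARNING: Django is not pinned"
```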
I had the same problem. requirements.txt was in my repository, but seemingly at random I started getting the same ModuleNotFoundError: No module named 'django' after changing a setting or restarting the service. Nothing I tried - changing settings, rebooting repeatedly - fixed it. Finally, what worked for me was this:
Make a small change to the code and commit it, and push it up to the app service.
This fixed it for me. It has happened a couple of times now, and every time this solution has worked. It seems the App Service sometimes gets into this state and needs to be jostled with this trick.

How to deploy AWS python Lambda project locally?

I have an AWS Python Lambda function that contains a few Python files and also several dependencies.
The app is built using Chalice, so the function is mapped like any REST function.
Before deploying to the prod env, I want to test it locally, so I need to package this whole project (Python files and dependencies). I looked around the web for a solution but couldn't find one.
I managed to figure out how to deploy one Python file, but I did not succeed with a whole project.
Take a look at Atlassian's LocalStack: https://github.com/atlassian/localstack
It's a full copy of the AWS cloud stack that runs locally.
I use Travis: I hooked it to my master branch in Git, so that when I push to this branch, Travis tests my Lambda with a script that uses pytest, after installing all its dependencies with pip install. If all the tests pass, it then deploys the Lambda to AWS in my prod env.
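A hypothetical .travis.yml along those lines (the Python version and deploy script name are placeholders, not taken from the answer):

```yaml
language: python
python: "3.6"
install:
  - pip install -r requirements.txt   # install the Lambda's dependencies
script:
  - pytest                            # run the test suite on every push
deploy:
  provider: script
  script: ./deploy_lambda.sh          # placeholder deploy-to-AWS script
  on:
    branch: master                    # deploy only from master
```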
