Getting 'failed creating virtual environment' error when amplify push - python

Has anyone encountered this problem before? I can't run amplify push to deploy my AWS function and API.
This is the error I'm getting:

This will depend on your current local configuration, but I managed to fix this issue by editing my Pipfile to use Python 3.9 instead of Python 3.8.
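For reference, a minimal Pipfile sketch with the interpreter pinned (the [requires] section is standard Pipfile syntax; the rest of your Pipfile stays as-is):

[requires]
python_version = "3.9"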

Related

Azure function for python is unreachable

I am getting the below error in the Azure Function for Python.
Please see the screenshot below.
Whenever I try to open the Azure Function for Python in the portal, I get the above error.
Let me know if anyone has any idea regarding this error.
This error can be very difficult to debug because it can have multiple root causes. In my case, I suspect the root cause was a failure during pip package installation, but this was difficult to verify because I was not able to drill into the pip logs: the deployment log does not contain information about pip installation, and some of the logs were unavailable because the host runtime was down.
I followed these best practices to finally make the Python function deployment succeed:
Use remote build (app setting: SCM_DO_BUILD_DURING_DEPLOYMENT: 1)
Make sure that the AzureWebJobsStorage application setting is configured to point to the correct Function Storage
Do not include the local .venv/ directory in the deployment (add it to .funcignore; see the sketch after this list)
Make sure the dependencies can be installed in the local virtual environment without conflicts
Test that the function runs locally without errors
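A minimal sketch of the remote-build and .funcignore points above, assuming hypothetical app and resource group names (my-func-app, my-rg):

# Hypothetical names; enable remote build so pip install runs on the platform
az functionapp config appsettings set \
  --name my-func-app --resource-group my-rg \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1

# Keep the local virtual environment out of the deployment package
echo ".venv/" >> .funcignore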
In requirements.txt, I had the following lines. Note that there is no need to pin the azure-functions version, since it is determined by the platform; listing it is only useful for local linting and similar tooling.
pip==21.2.*
azure-functions
As a side note, it is not necessary to specify "Build from package" (app setting: WEBSITE_RUN_FROM_PACKAGE: 1); this seems to be enabled by default.
My deployment configuration:
OS: Ubuntu 21.04
Functions Python version: 3.9
Functions Runtime Extension version: 4
Deployed with VS Code Azure extension

Deploying Google Cloud Function with Tensorflow fails

I am trying to deploy a Google Cloud Function that uses the universal-sentence-encoder model.
However, if I add in the dependencies to my requirements.txt:
tensorflow==2.1
tensorflow-hub==0.8.0
then the function fails to deploy with the following error:
Build failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage": "gzip_tar_runtime_package gzip /tmp/tmpOBr2rZ.tar -1\nexited with error [Errno 12] Cannot allocate memory\ngzip_tar_runtime_package is likely not on the path", "errorType": "InternalError", "errorId": "F57B9E18"}}
What does this error mean?
How can I fix it?
Note that the code for the function itself is just the demo code provided by Google when you click "create function" in the web console. It deploys when I remove these requirements; when I add them, it breaks.
This error can happen when the size of the deployed files is larger than the available Cloud Function memory. The gzip_tar_runtime_package could not be installed because memory could not be allocated.
Make sure you are only using the required dependencies. If you are uploading static files, make sure you only upload necessary files.
After that, if you are still facing the issue, try increasing the Cloud Function memory by setting the --memory flag on the gcloud functions deploy command, as explained in the documentation.
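For example, a redeploy with a higher memory limit might look like this (the function name and trigger are placeholders, not taken from the question):

# Hypothetical function name and trigger; raise the memory limit to 2 GB
gcloud functions deploy my-function \
  --runtime python37 \
  --trigger-http \
  --memory 2048MB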
EDIT:
There is currently a known issue with Tensorflow 2.1 in Cloud Functions.
The current workaround would be to use Tensorflow 2.0.0 or 2.0.1
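With that workaround, the requirements.txt from the question would become:

tensorflow==2.0.1
tensorflow-hub==0.8.0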

heroku keeps installing django packages during each deployment

I'm using Django 3 and Python 3.7.4.
I don't have any issues with the deployment itself and the project is working; it's just the first time I have faced this issue.
Normally, when deploying to Heroku, all packages in the requirements file are installed during the first deployment, and any further deployment only updates or installs the packages that were added.
In my case, every time I deploy, Heroku installs all the packages again.
Please advise if there is a way to handle this issue.
Thanks
This looks like a current issue with the Heroku Python buildpack. As long as the issue persists, the cache is cleared on every build because the sqlite3 check is broken. I suggest upvoting the issue on GitHub.

Update AWS Elastic Beanstalk solution stack name

I have a Cloudformation template with the following Elastic Beanstalk environment:
Resources:
  BeanstalkEnvironment1:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: Application1
      Description: ignored
      EnvironmentName: 'Environment1'
      SolutionStackName: '64bit Amazon Linux 2017.03 v2.5.0 running Python 3.4'
My main goal is to update the environment's Python version from 3.4 to 3.6. I was able to update the solution stack name with the following command (taken from this answer):
aws elasticbeanstalk update-environment --solution-stack-name "64bit Amazon Linux 2018.03 v2.7.6 running Python 3.6" --environment-name "Environment1"
However, I cannot do subsequent updates using the existing template if I update it to the new solution stack name, because I get "Cannot update a stack when a custom-named resource requires replacing". It works if I keep the original one, but I would like to keep the running platform in sync with the template.
Any ideas?
Thanks!
I ran into the same problem. This appears to be a limitation of Elastic Beanstalk and CloudFormation. In the docs (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html), an update to SolutionStackName is listed as Update requires: Replacement.
If you just change the EnvironmentName every time you change SolutionStackName it should work fine.
Check the documentation note of SolutionStackName:
Note: If you specify SolutionStackName, don't specify PlatformArn or
TemplateName.
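Putting that together, a sketch of the updated template, assuming the hypothetical new name Environment2 (renaming the environment allows the replacement that the update requires):

Resources:
  BeanstalkEnvironment1:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: Application1
      Description: ignored
      # Changing the name permits the replacement update
      EnvironmentName: 'Environment2'
      SolutionStackName: '64bit Amazon Linux 2018.03 v2.7.6 running Python 3.6'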
