Update AWS Elastic Beanstalk solution stack name - python

I have a Cloudformation template with the following Elastic Beanstalk environment:
Resources:
  BeanstalkEnvironment1:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: Application1
      Description: ignored
      EnvironmentName: Environment1
      SolutionStackName: '64bit Amazon Linux 2017.03 v2.5.0 running Python 3.4'
My main goal is to update the environment's Python version from 3.4 to 3.6. I was able to update the solution stack name with the following command (taken from this answer):
aws elasticbeanstalk update-environment --solution-stack-name "64bit Amazon Linux 2018.03 v2.7.6 running Python 3.6" --environment-name "Environment1"
However, if I update the existing template to the new solution stack name, subsequent stack updates fail with "Cannot update a stack when a custom-named resource requires replacing". It works if I keep the original one, but I would like to keep the running platform in sync with the template.
Any ideas?
Thanks!

I hit the same problem. This appears to be a limitation of Elastic Beanstalk and CloudFormation. The docs (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html) list a change to SolutionStackName as "Update requires: Replacement".
If you also change the EnvironmentName every time you change SolutionStackName, it should work fine.
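For example, a minimal sketch of the updated template, assuming you are willing to let the environment be replaced under a new name (the new environment name below is hypothetical; the solution stack name is the one from the question):
Resources:
  BeanstalkEnvironment1:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: Application1
      Description: ignored
      EnvironmentName: Environment2   # hypothetical replacement name
      SolutionStackName: '64bit Amazon Linux 2018.03 v2.7.6 running Python 3.6'
On update, CloudFormation creates the replacement environment under the new name and then deletes the old one.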

Check the documentation note of SolutionStackName:
Note: If you specify SolutionStackName, don't specify PlatformArn or TemplateName.

Related

Migrating from Python 3.4 to Python 3.6 AMI

Does anyone have a process for migrating an Elastic Beanstalk environment from an existing Python 3.4 instance to a Python 3.6 instance?
By saving my configuration and changing the "EC2 image ID" (under Configuration -> Instances) to that of a Python 3.6 AMI platform instance, it looks like I was able to spin up a new EC2 instance with a Python 3.6 AMI (I see aws-elasticbeanstalk-amzn-2018.03.0.x86_64-python36-hvm-201805090750 (ami-b5342ad5) listed in my EC2 instance details). I believe this involved destroying my Elastic Beanstalk environment and bringing a new one up from the configuration backup.
It looks like I now have an EC2 instance with a Python 3.6 AMI; however, when I run eb config I see it still listed as a Python 3.4 instance, and it otherwise behaves as if it is still a Python 3.4 instance (the virtualenv is still 3.4).
I saw this thread stating that the PlatformArn needs to be updated. Mine says
PlatformArn: arn:aws:elasticbeanstalk:us-west-1::platform/Python 3.4 running on 64bit Amazon Linux/2.7.0
I tried changing the "3.4" to a "3.6" with no success. Any suggestions? Thanks!
I see the problem.
There is presently no Python 3.6 PlatformArn like the following in us-west-1:
arn:aws:elasticbeanstalk:us-west-1::platform/Python 3.6 running on 64bit Amazon Linux/2.7.0
You can list the PlatformArns available to you in us-west-1 by running:
aws elasticbeanstalk list-platform-versions --region us-west-1 | grep "PlatformArn"
Depending on your account access rules/permissions, you should be able to see:
arn:aws:elasticbeanstalk:us-west-1::platform/Python 3.6 running on 64bit Amazon Linux/2.6.0
in the result of the list-platform-versions API call. The difference is subtle: the trailing "2.6.0" instead of the "2.7.0" you tried.
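Once you have an ARN from that list, a sketch of switching the environment over to it (the environment name here is hypothetical; note that you pass either --platform-arn or --solution-stack-name, not both):
aws elasticbeanstalk update-environment --region us-west-1 --environment-name my-env --platform-arn "arn:aws:elasticbeanstalk:us-west-1::platform/Python 3.6 running on 64bit Amazon Linux/2.6.0"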

Modify deployment process on elasticbeanstalk ami

I've grown tired of trying to get elastic beanstalk to run python 3.5. Instead, I want to create a custom ami which establishes a separate virtualenv for the application (with python 3.5) and knows enough to launch the application using that virtualenv.
The problem is that once I ssh into the ec2 instance in order to create my custom ami, I am left wondering where the scripts are which govern the elastic beanstalk deployment behavior.
For example, when deploying via Travis to Elastic Beanstalk, EB knows enough to look in a specific folder for the file application.py and to execute it using a specific virtualenv (or maybe even, shudder, the machine's root Python installation). It even knows to run pip install -r requirements.txt. Can anyone point me to the script(s) which govern this behavior?
UPDATE
Please see Elastic beanstalk require python 3.5 for those referencing the .ebextensions option. So far, it has not proved able to handle this problem due to the interdependency between the EB image operating system and the python environment used to run the application.
All of the EB files can be found in /opt/elasticbeanstalk - /opt/elasticbeanstalk/hooks is probably most relevant for what you're looking for.
You can use .ebextensions to run the scripts you want when starting your AMI.
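For example, a minimal .ebextensions sketch (the config file name is arbitrary, and the package names are assumptions about what your AMI's yum repositories actually provide, so treat this as a starting point rather than a known-good recipe):
.ebextensions/01_python.config:
commands:
  01_install_python:
    # Assumes a python35 package exists in the instance's yum repos; adjust to your platform.
    command: yum install -y python35 python35-virtualenv
    ignoreErrors: true
Entries under commands run before the application version is deployed; container_commands run after the application archive has been extracted, which is usually where you would create the virtualenv and install requirements.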

How can I run a simple python script hosted in the cloud on a specific schedule?

Say I have a file main.py and I just want it to run at 10-minute intervals, but not on my computer. The only external libraries the file uses are mysql.connector and requests (installed via pip).
Things I've tried:
PythonAnywhere - free tier is too limiting (need to connect to external DB)
AWS Lambda - only supports up to Python 2.7; I converted my code but still had issues
Google Cloud Platform + Heroku - I can only find tutorials covering deploying applications; I think these could do what I'm looking for, but I can't figure out how.
Thanks!
I'd start by taking a look at this question/answer that I asked previously on unix.stackexchange - I went with an AWS redhat installation and it was free to use.
Once you've decided on your VM, you can SSH onto your server using any SSH client and upload your Python script. A personal preference is this application.
If you need to update the Python version on the server, you can do this by installing the required Python RPMs. A quick Google search should return the yum (or whichever package manager you're using) repository for the required RPMs.
Once you've installed the version of Python that you need, I'd suggest looking into crontab, which can be used to schedule jobs. You can set a cron job to run every 10 minutes that calls your script.
See this site for more information on how to use crontab.
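A minimal crontab sketch, assuming the script lives at /home/ec2-user/main.py and the interpreter is /usr/bin/python3 (both paths are assumptions about your VM):
# open the current user's crontab for editing
crontab -e
# run main.py every 10 minutes and append output to a log file
*/10 * * * * /usr/bin/python3 /home/ec2-user/main.py >> /home/ec2-user/main.log 2>&1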
This sounds like a perfect use case for AWS Lambda which supports Python. You can invoke your Lambda on a schedule using Scheduled Events.
I see that you tried Lambda and it didn't work out for you, which is too bad, as that seems like the easiest route. You could also launch an EC2 instance and use user data to schedule a cron job when the instance starts.
Another option would be an Elastic Beanstalk worker with a cron.yaml that defines your schedule (see the sketch below). Elastic Beanstalk supports Python 3.4.
Update: AWS does now support Python 3.6. Just select Python 3.6 from the runtime environments when configuring.
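For the Elastic Beanstalk worker route, a minimal sketch of the cron file (AWS names it cron.yaml); the URL is a hypothetical route that your worker application would need to expose as a POST handler which runs the script:
version: 1
cron:
 - name: "run-main"
   url: "/run-main"
   schedule: "*/10 * * * *"
The worker environment's daemon POSTs to that URL on the given schedule, so the script runs inside your application's request handler rather than from cron on the instance.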

Google Cloud VM creates new version on deploy

I am running a Virtual Machine on Google Cloud and am using their SDK to deploy with the following command:
gcloud preview app deploy ./app.yaml
The deployment works; however, for every deployment a new instance is created, which can only be reached by adding the version ID to the domain name. I tried removing older instances through the developer dashboard, but they just restart directly after that.
How can I remove the newly created instances and have deployments overwrite the default version on the main domain?
To do this directly from gcloud, use the following two flags:
--set-default: Set the deployed version to be the default serving version.
--version: The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you.
(Both from gcloud preview app deploy --help.)
If you set --version to be the same each time, the current version deployed at that URL will be overwritten, and a new version will not be created on each deployment.
If you use --set-default, the deployed version can be accessed just using the domain name (without the version as a subdomain).
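Putting the two flags together, a sketch of the deploy command (the version label main is hypothetical; any fixed label you reuse on every deploy works):
gcloud preview app deploy ./app.yaml --version main --set-default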
Deleting the other versions by hand in the developer console will be the simplest way to get rid of them.
Turns out you can't edit this under Compute Engine > VM Instances. You have to look under App Engine > Versions, change the default version there, and delete the older ones.

Access Google cloud endpoints on a non-existent version label

I have two apps, my_app and my_endpoint_app. I can access my_endpoint_app with any version label in the URL I want and it will automatically route to the default version if it does not match an existing version.
Example:
https://josh-dot-my_endpoint_app.appspot.com/ will respond with the default version since there is no josh version deployed.
If I try to do the same with a Google Cloud Endpoint service call, I get a Not Found error.
Example:
The unsuccessful https://josh-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint vs the working https://my_endpoint_app.appspot.com/_ah/api/myendpoint
I have a couple of Google AppEngine applications that communicate with each other via Cloud Endpoints.
Under normal usage this is OK because I know the version beforehand and avoid these errors. In our development environment, this falls apart. In order to support feature branches and testing in isolation, we push our code up to appspot using the -V switch of appcfg.py.
Example:
appcfg.py -A my_app -V josh update .
Now I can access my feature branch at https://josh-dot-my_app.appspot.com. In order to support some version label hackery, I dynamically calculate the right endpoint app to call with something like s/my_app/my_endpoint_app/g and then make my service calls there. This fails because the dynamic version label does not exist. If I push a version with that label, it completes as expected.
Is there any way to get Cloud Endpoints to answer on non-existent version label hostnames?
Scenarios that I want to support
https://my_endpoint_app.appspot.com/_ah/api/myendpoint
Main application URL, routes to default version
https://josh-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint
Version does not exist, should route to default version
https://new-feature-dot-my_endpoint_app.appspot.com/_ah/api/myendpoint
Version new-feature exists, should route to new-feature version so that we can test new code in isolation before merging into the main code branch. This would be internal apis that the current endpoints might make use of without changing what the endpoint accomplishes. (performance improvements, etc)
You can reroute any URL to any module/version via the dispatch file.
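A minimal dispatch.yaml sketch, assuming the endpoints code is deployed as a module of the same application and that module is named endpoints-module (both the module name and the URL pattern are hypothetical; dispatch rules only route within a single app):
dispatch:
  - url: "*/_ah/api/*"
    module: endpoints-module
The intent is that a request whose hostname names a non-existent version still matches the URL pattern and gets sent to a running module instead of returning Not Found.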
