I have an Elastic Beanstalk environment running Python 3.6 on AWS Linux 1, and I want to switch it to Python 3.8 on Amazon Linux 2.
I know that I can upgrade environments using the aws CLI update-environment command:
aws elasticbeanstalk update-environment --environment-name <ENV_NAME> --solution-stack-name "64bit Amazon Linux 2 v3.3.7 running Python 3.8"
However, AWS Linux 2 uses different configuration parameters, which creates a catch-22: I can't deploy the AWS Linux 2 config because it's invalid on AWS Linux 1, and I can't upgrade to AWS Linux 2 because my existing config is invalid there.
How do I do the upgrade, and is there a way to do it in-place?
Differences in Configuration
AWS Linux 2 changes a lot about how Elastic Beanstalk works and how it is configured. Regardless of whether you are doing an in-place upgrade or spinning up a new environment, here is a list of differences to run through before making the upgrade. Most of the items are differences in the Elastic Beanstalk config that lives in .ebextensions.
There are differences in sub-package dependencies between Python 3.6 and 3.8. You should test your requirements file on Python 3.8 and make sure it's compatible, especially if you use a generated requirements.txt.
AWS Linux 2 no longer allows you to write Apache config using a files directive in .ebextensions. These modifications now need to live in .platform/httpd/conf.
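For example, a customization that used to be a files directive can instead be checked in as a config file (the filename and directive here are just a hypothetical illustration):
# .platform/httpd/conf.d/custom.conf (hypothetical example)
# Raise Apache's request header size limit
LimitRequestFieldSize 16384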
The virtual environment is no longer active while running container_commands. Any container commands that use your code need to have source $PYTHONPATH/activate run first.
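For example, a sketch of a migration command, assuming a Django-style manage.py (adjust to your project):
container_commands:
  01_migrate:
    command: "source $PYTHONPATH/activate && python manage.py migrate --noinput"
    leader_only: true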
Generated files now get wiped on config changes, so commands like Django's collectstatic need to be moved to hooks.
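A sketch of such a hook, assuming a Django project (the filename is arbitrary; the script must be executable, and the venv path follows the AL2 Python platform convention, so verify it on your platform version):
#!/bin/bash
# .platform/hooks/predeploy/01_collectstatic.sh
# Runs against the staged app before it is flipped to /var/app/current
source /var/app/venv/*/bin/activate
cd /var/app/staging
python manage.py collectstatic --noinput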
The Postgres client is no longer available through yum by default. To install it, you need to do:
packages:
  yum:
    amazon-linux-extras: []
commands:
  01_postgres_activate:
    command: sudo amazon-linux-extras enable postgresql10
  02_postgres_install:
    command: sudo yum install -y postgresql-devel
Apache is no longer the default web server (it is now Nginx). To continue using it, you need to specify that as an option on your environment, such as:
option_settings:
  aws:elasticbeanstalk:environment:proxy:
    ProxyServer: apache
mod_wsgi has been replaced with Gunicorn. Any mod_wsgi customizations you have will no longer work, and the WSGI path has a different format (a module path rather than a file path):
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: config.wsgi:application
Static files config has a different format:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: staticfiles
You will be opted in to advanced health reporting. Adding an Elastic Beanstalk health check is strongly recommended:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /health-check/
The application now runs on port 8000 on the server via Gunicorn, and Apache/Nginx just proxy requests to Gunicorn. This matters if you are doing Apache customizations such as encrypting traffic between the load balancers and application servers.
Apache is now run through systemctl rather than supervisord. If you are trying to restart Apache, the command is now sudo systemctl restart httpd.
If you want to load your environment variables when SSHed into the server, you need to parse them differently. They live in a different place and have a different format. To get access to them, add jq: [] to your yum installs. Then either run the following commands, or add them to the server's bashrc (using a files directive in .ebextensions; a sketch follows the commands) to load the environment variables and activate the Python virtual environment:
source <(/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""')
source $PYTHONPATH/activate
cd /var/app/current
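If you'd rather bake this into the server, a sketch of the files directive approach (the /etc/profile.d path and filename are my choice, not an official location):
files:
  "/etc/profile.d/eb_env.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      source <(/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""')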
Upgrading by Launching a New Environment
To take this upgrade path, you must not be using the Elastic Beanstalk data tier (i.e. you launched your RDS instance yourself rather than through Elastic Beanstalk).
Create a code branch with your AWS Linux 2 config
Launch a new Elastic Beanstalk environment on AWS Linux 2.
Copy the environment variables from your previous environment (see the sketch after this list).
Allow access to your database from the new environment (add the new environment's server security group as the target of an ingress rule on the database's security group)
Set up SSL on the new environment.
Deploy the AWS Linux 2 code branch to the new environment.
Test this new environment, ignoring browser certificate warnings (or set up a temporary DNS entry to test it).
Switch the DNS entry to point to your new environment, or use AWS's CNAME swap feature on the two environments.
After your new environment has been running without problems for sufficient time, terminate your old environment.
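For the environment-variable step, a sketch using the CLI (the angle-bracket names are placeholders):
# Read the env vars off the old environment
aws elasticbeanstalk describe-configuration-settings \
    --application-name <APP_NAME> --environment-name <OLD_ENV> \
    --query "ConfigurationSettings[0].OptionSettings[?Namespace=='aws:elasticbeanstalk:application:environment']"
# Then set them on the new environment
eb setenv KEY1=value1 KEY2=value2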
Upgrading In-Place
There is a way to do the upgrade in-place, though there will be a few minutes where your site returns "502 Bad Gateway". To do this, you need EB config that is compatible with both the AWS Linux 1 and AWS Linux 2 platforms.
For Python, you can do this with a small Flask app and a four-part deploy.
Part 1: deploy placeholder app that is compatible with both platforms
Add flask to your requirements.txt (if it's not already there).
Delete all files in .ebextensions
Make .ebextensions/01.config:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: wsgi_shim.py
Make wsgi_shim.py:
from flask import Flask

application = Flask(__name__)

# Catch-all routes so any URL returns the maintenance message
@application.route("/")
@application.route("/<path:path>/")
def hello_world(path=None):
    return "This site is currently down for maintenance"
[If using load balancer to application server encryption, change the load balancer to send all traffic to the server via HTTP.]
eb deploy
Part 2: upgrade platform to AWS Linux 2
[If you have any static routes configured in Elastic Beanstalk, delete them.]
Upgrade your eb environment:
# Get list of solution stacks
aws elasticbeanstalk list-available-solution-stacks --output=json --query 'SolutionStacks' --region us-east-1
# Use one of the above options here
aws elasticbeanstalk update-environment --environment-name <ENV_NAME> --solution-stack-name "64bit Amazon Linux 2 v3.3.7 running Python 3.8"
Part 3: deploy your main application to AWS Linux 2
Replace .ebextensions/01.config with your new AWS Linux 2 config.
Add .platform/httpd/conf.d/ssl_rewrite.conf:
RewriteEngine On
<If "-n '%{HTTP:X-Forwarded-Proto}' && %{HTTP:X-Forwarded-Proto} != 'https'">
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</If>
eb deploy
Part 4: cleanup
[If using load balancer to application server encryption, change the load balancer back to sending traffic to the server via HTTPS.]
Delete wsgi_shim.py and remove flask from requirements.txt (unless it's a Flask project).
eb deploy
Related
How do I deploy a Flask app on an AWS Linux/UNIX EC2 instance, either 1) using Gunicorn or 2) using an Apache server?
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally with the app working as intended. You'll need to write a Dockerfile that sets your runtime, copies all your directories, and exposes port 80 (this will be handy for AWS later).
The command to build an image is docker build -t your-app-name .
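A minimal Dockerfile sketch (the python:3.8-slim base and app.py entry point are assumptions; adjust to your runtime and project layout):
FROM python:3.8-slim
WORKDIR /app
# Install dependencies first so they are cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Expose the port AWS will route traffic to
EXPOSE 80
CMD ["python", "app.py"]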
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Linux 2 machine image. You'll be required to create a security key (.pem file) and move it somewhere on your computer; this acts as your credential to log in to your instance.
This is where things differ depending on your OS. On Mac, cd into the directory containing the key and modify its permissions by running chmod 400 key-file-name.pem. On Windows, go into the file's security settings and make sure only your account (ideally the owner of the computer) can use it, essentially setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click "Connect to instance" on the EC2 dashboard.
Once you're logged in, you can configure your instance to install Docker and let you use it by running the following:
sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
Now that all your files are in the instance, build the Docker image using the same command from the first step and run the container. If you go to the URL that AWS gives you, your app should be running on AWS!
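The run step looks something like this, assuming you tagged the image your-app-name and the Dockerfile exposes port 80:
# -d runs detached; -p maps the instance's port 80 to the container's port 80
docker run -d -p 80:80 your-app-name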
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.
I have a Flask app that I have cloned onto my AWS EC2 instance. I can only run it using a virtual environment (which I activate by running the following):
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip
$ pip install flask==1.1.1
Below is my app:
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
It runs just fine when executing env FLASK_APP=app.py flask run. The problem is, I'd like to access the exposed routes remotely using my AWS EC2's public IP or hostname. I get a timeout error whenever I try to access it, though. I think this is because I'm running the Flask app in a virtual environment, but I'm not sure. I can't find any good tutorials on how to expose this. Where am I going wrong here?
Quick n' safe solution:
Since you're running the development server, which isn't meant for production, the best way to connect to this app from just your own machine is with an SSH tunnel:
ssh -L 5000:localhost:5000 ec2_addr
You can then point your web browser to http://localhost:5000/ on your client machine, and requests will be tunneled to port 5000 on the EC2 instance. This is a fast way to connect to a Flask (or any) server on a remote Linux box. The tunnel is destroyed when you stop that ssh session.
Longer method:
I'd like to access the exposed routes remotely using my aws ec2's public IP or hostname.
The timeout isn't because the app is running in a virtual environment: you'll probably find it's because you need to assign security groups to your instance through the EC2 console. These allow you to open certain ports on the public IP.
See this other answer I wrote, regarding EC2 security groups.
However, be careful: you shouldn't expose the development server in this manner. There are a number of tutorials, like this one from DigitalOcean, which cover deploying a Flask app behind Gunicorn and nginx with a Let's Encrypt SSL cert. You should end up in a position where your security group exposes ports 80 and 443, with requests to port 80 being redirected to the https URL by the nginx configuration.
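As a taste of what those tutorials set up, the Gunicorn piece boils down to something like this (assuming your module is app.py with a Flask object named app):
pip install gunicorn
# nginx proxies public traffic to this local port
gunicorn --bind 127.0.0.1:8000 app:app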
If this sounds like a whole load of hassle / you don't have Linux skills / you don't want to learn Linux skills, then these are common reasons people pick managed services like AWS Elastic Beanstalk, to which you can deploy your Flask app (official guide) without having to worry about server config. Heroku is another (non-AWS) service which offers such features.
It really depends on what you require / wish to gain. Stay safe!
I'm new to Docker. I'm using Docker & docker-compose, going through a Flask tutorial. The base Docker image is python:2.7-slim.
It's running on Linux with Docker 1.11.2.
The application is working fine.
I want to get PyCharm Pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a dialog 'Getting Remote Interpreter Version'. This never completes. Also, I can't quit PyCharm. This happens in PyCharm 2016.2 and 2016.3 EAP (2nd).
The help says "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not enabled by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with its default setup and Docker is expecting to be used through UNIX domain file socket /var/run/docker.sock. And you should specify unix:///var/run/docker.sock in the API URL field. Please comment whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
TCP/IP Method
This method works if you want TCP/IP access, but this is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for Ubuntu 16.04 I came across simpler solutions, but I did not test them.
For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service
Modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the “-H tcp://0.0.0.0:2375” part. Save the modified file. Restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
I - docker-compose up
I think PyCharm will run docker-compose up; have you tried running this command first in your terminal (from where your docker-compose.yml is)?
If any errors occur, you will get more info in your terminal.
II - pycharm docker configuration
Otherwise it could be due to your Docker machine configuration in PyCharm.
Here is what I do to configure my machine and make sure it is correctly set up:
1 - run docker-machine ls in your shell
2 - copy paste the url without tcp://
3 - go to PyCharm preferences -> Build, Execution, Deployment -> Docker -> + to create a new server, fill the server name field
4 - paste previously copied url keeping https://
5 - fill the path of your machine certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> wheel icon -> Add Remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your test twice; sometimes it can take time to initialize
The following is inside a uWSGI configuration file called flask1.ini:
[uwsgi]
socket = /tmp/flask1.sock
chmod-socket = 777
env = PRODUCTION=TRUE
module = indy
callable = app
processes = 4
threads = 2
logto = /var/indylog
The production server is set up on Ubuntu 14.04 using uWSGI and nginx for the Flask application.
I wrote a new module that uses Python 2.7, and it runs without any errors on my local Ubuntu 14.04 virtualenv (Flask development server) with the same nginx and uWSGI setup as the production environment. However, when I deployed the same code to the production server, it gave a bunch of syntax errors, and I am trying to figure out why.
I ran python --version on both my local desktop and the production server; they are both Python 2.7.6.
My questions: with the above uWSGI configuration on production server, which Python is being used? The machine Python or virtualenv Python?
To be precise, neither. uwsgi does not actually run the Python binary; it uses libpython directly. It just follows your system's LD_LIBRARY_PATH to find the corresponding libpython library, and this is normally not affected by virtualenv.
What is affected by virtualenv, however, is the location from which uwsgi will load your packages. You still need to add a line to your uwsgi.ini to specify the path to your virtualenv, like this:
virtualenv = /path/to/your/virtualenv
If you wish to configure uwsgi to use different versions of libpython, you will need to build the corresponding plugin for each version and specify it in uwsgi.ini. You can find more information about this here
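Putting that together, a sketch of what the flask1.ini additions might look like (the plugin name is hypothetical; it depends on how you built the plugin, and the line is only needed if you use per-version plugins):
[uwsgi]
# ... existing settings ...
virtualenv = /path/to/your/virtualenv
# hypothetical plugin name; only needed with per-version builds
plugin = python27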
First, create a Python 3 environment for your source code:
virtualenv -p /usr/bin/python3 path_to_your_project/env
And install the required packages:
cd path_to_your_project
source env/bin/activate
# use pip to install the required packages, e.g.:
pip install -r requirements.txt
Finally, add virtualenv to your uwsgi.ini file:
virtualenv = path_to_your_project/env
Install uwsgi in the virtualenv to use whichever Python version the env is configured with: run /path/to/env/bin/uwsgi --ini /path/to/flask.ini instead of the global uwsgi /path/to/your/flask.ini, which would use the Python version that the system installed.
I am working on a Flask app and want to deploy it on Koding so that my other team members can also view/edit it. I cloned the git repository inside a VM (on Koding.com), installed pip, and installed the dependencies, but when I start the Flask server, it displays that the server has started and is running on 127.0.0.1:5000.
But when I go to :5000, it says VM is not active.
NOTE: the VM's URL normally works and displays the files under the VM's "Web" folder.
Use 0.0.0.0 as the host IP. Also remember that your VM will be turned off 15 minutes after you log out.
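A minimal sketch of the change, assuming the standard Flask development server:
from flask import Flask
app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Koding!"

if __name__ == "__main__":
    # Bind to all interfaces so the app is reachable from outside the VM
    app.run(host="0.0.0.0", port=5000)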