How to deploy a Flask app on an AWS Linux/UNIX EC2 instance.
Using either of:
1. Gunicorn
2. Apache server
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally and the app works as intended. You'll also need to write a Dockerfile that sets your runtime, copies your project files, and exposes port 80 (this will be handy for AWS later).
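A minimal sketch of such a Dockerfile, assuming the entrypoint is app.py and the app listens on port 80 (the filenames are illustrative, not from the original):
FROM python:3.9-slim
WORKDIR /app
# copy the dependency list first so the pip layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy the rest of the project
COPY . .
EXPOSE 80
CMD ["python", "app.py"]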
The command to build an image is docker build -t your-app-name .
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance running Amazon Linux 2. You'll be required to create a key pair (.pem file) and save it somewhere on your computer; this acts as your credential to log in to your instance. This is where things differ depending on your OS. On Mac, you need to cd into the directory where the key is and restrict its permissions by running chmod 400 key-file-name.pem. On Windows, you have to go into the file's security settings and make sure only your account (ideally the owner of the computer) can use it, essentially setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click "Connect to instance" on the EC2 dashboard.
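On Mac or Linux, the two commands look roughly like this (a sketch; the key file name and public DNS are placeholders for your own values):
chmod 400 key-file-name.pem
ssh -i key-file-name.pem ec2-user@ec2-XX-XX-XX-XX.compute-1.amazonaws.com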
Once you're logged in, you can configure your instance to install Docker and let your user run it with the following:
# on Amazon Linux 2, either of the next two commands installs Docker
sudo amazon-linux-extras install docker
sudo yum install docker
# start the Docker daemon
sudo service docker start
# let ec2-user run docker without sudo
sudo usermod -a -G docker ec2-user
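Note that the group change only takes effect on a new login session; one optional convenience (not part of the original steps) is to pick it up immediately with:
newgrp docker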
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy protocol). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
Now that all your files are in the instance, build the Docker image using the same command from the first step and run the container. If you go to the URL that AWS gives you, your app should be running on AWS!
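On the instance, that boils down to something like this (assuming the Dockerfile exposes port 80 as above; the image name is a placeholder):
docker build -t your-app-name .
# map the instance's port 80 to the container's port 80
docker run -d -p 80:80 your-app-name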
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.
Related
I have an Elastic Beanstalk environment running Python 3.6 on AWS Linux 1, and I want to switch it to Python 3.8 on Amazon Linux 2.
I know that I can upgrade environments using the aws CLI update-environment command:
aws elasticbeanstalk update-environment --environment-name <ENV_NAME> --solution-stack-name "64bit Amazon Linux 2 v3.3.7 running Python 3.8"
However, AWS Linux 2 uses different configuration parameters. I can't deploy the AWS Linux 2 config because it's invalid on AWS Linux 1, and I can't upgrade to AWS Linux 2 because my config is invalid.
How do I do the upgrade, and is there a way to do it in-place?
Differences in Configuration
AWS Linux 2 changed a lot about how Elastic Beanstalk works and how it is configured. Regardless of whether you are doing an in-place upgrade or spinning up a new environment, here is a list of things that will be different to run through before making the upgrade. Most of the items here are differences in the Elastic Beanstalk config that lives in .ebextensions.
There are differences in sub-package dependencies between Python 3.6 and 3.8. You should test your requirements file on Python 3.8 and make sure it's compatible, especially if you use a generated requirements.txt.
AWS Linux 2 no longer allows you to write Apache config using a file directive in .ebextensions. These modifications now need to live in .platform/httpd/conf.
The virtual environment is no longer active while running container_commands. Any container commands that use your code need to have source $PYTHONPATH/activate run first.
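A sketch of what that looks like in an .ebextensions config (the migrate command is an illustrative Django example, not from the original answer):
container_commands:
  01_migrate:
    command: source $PYTHONPATH/activate && python manage.py migrate --noinput
    leader_only: true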
Generated files now get wiped on config changes, so commands like Django's collectstatic need to be moved to platform hooks.
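A sketch of such a hook at .platform/hooks/postdeploy/01_collectstatic.sh, assuming a Django project (the path follows the .platform hooks convention; the specifics are illustrative):
#!/bin/bash
# re-generate static files after each deploy, inside the app's virtualenv
source $PYTHONPATH/activate
cd /var/app/current
python manage.py collectstatic --noinput
Hook scripts also need to be executable (chmod +x) in order to run.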
Postgres client is no longer available normally through yum. To install it, you need to do:
packages:
  yum:
    amazon-linux-extras: []
commands:
  01_postgres_activate:
    command: sudo amazon-linux-extras enable postgresql10
  02_postgres_install:
    command: sudo yum install -y postgresql-devel
Apache is no longer the default web server (it is Nginx). To continue using it, you need to specify that as an option on your environment, such as:
option_settings:
  aws:elasticbeanstalk:environment:proxy:
    ProxyServer: apache
Mod_wsgi has been replaced with Gunicorn. Any mod_wsgi customizations you have will no longer work, and the WSGI path has a different format:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: config.wsgi:application
Static files config has a different format:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: staticfiles
You will get opted in to advanced health reporting. Adding an Elastic Beanstalk health check is strongly recommended:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /health-check/
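For reference, a minimal view backing that URL might look like this in Flask (a sketch; the route matches the config above, but the wiring is an assumption and will differ per framework):
from flask import Flask

application = Flask(__name__)

@application.route("/health-check/")
def health_check():
    # the load balancer only needs a 200 response here
    return "OK", 200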
The application is now run on port 8000 on the server via Gunicorn, and Apache/Nginx just proxy requests to Gunicorn. This matters if you are doing Apache customizations such as encrypting traffic between the load balancers and application servers.
Apache is now run through systemctl rather than supervisord. If you are trying to restart Apache, the command is now sudo systemctl restart httpd.
If you want to load your environment variables when SSHed into the server, you need to parse them differently: they now live in a different place and have a different format. To get access to them, add jq: [] to your yum installs. Then, either run the following commands or add them to the server's bashrc (using a file directive in .ebextensions) to load the environment variables and activate the Python virtual environment:
source <(/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""')
source $PYTHONPATH/activate
cd /var/app/current
Upgrading by Launching a New Environment
To take this upgrade path, you must not be using the Elastic Beanstalk data tier (i.e. you launched your RDS instance yourself rather than through Elastic Beanstalk).
1. Create a code branch with your AWS Linux 2 config.
2. Launch a new Elastic Beanstalk environment on AWS Linux 2.
3. Copy the environment variables from your previous environment.
4. Allow access to your database from the new environment (add the new environment's server security group as the target of an ingress rule on the database's security group).
5. Set up SSL on the new environment.
6. Deploy the AWS Linux 2 code branch to the new environment.
7. Test this new environment, ignoring browser certificate warnings (or set up a temporary DNS entry to test it).
8. Switch the DNS entry to point to your new environment, or use AWS's CNAME swap feature on the two environments.
9. After your new environment has been running without problems for sufficient time, terminate your old environment.
Upgrading In-Place
There is a way to do the upgrade in-place, though there will be a few minutes where your site says "502 Bad Gateway". In order to do this, you need EB config that is compatible with both the AWS Linux 1 and AWS Linux 2 environments.
For Python, you can do this with a small Flask app and a four-part deploy.
Part 1: deploy placeholder app that is compatible with both platforms
Add flask to your requirements.txt (if it's not already there).
Delete all files in .ebextensions
Make .ebextensions/01.config:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: wsgi_shim.py
Make wsgi_shim.py:
from flask import Flask

application = Flask(__name__)

# catch-all routes so every URL returns the maintenance message
@application.route("/")
@application.route("/<path:path>/")
def hello_world(path=None):
    return "This site is currently down for maintenance"
[If using load balancer to application server encryption, change the load balancer to send all traffic to the server via HTTP.]
eb deploy
Part 2: upgrade platform to AWS Linux 2
[If you have any static routes configured in Elastic Beanstalk, delete them.]
Upgrade your eb environment
# Get list of solution stacks
aws elasticbeanstalk list-available-solution-stacks --output=json --query 'SolutionStacks' --region us-east-1
# Use one of the above options here
aws elasticbeanstalk update-environment --environment-name <ENV_NAME> --solution-stack-name "64bit Amazon Linux 2 v3.3.7 running Python 3.8"
Part 3: deploy your main application to AWS Linux 2
Replace .ebextensions/01.config with your new AWS Linux 2 config.
Add .platform/httpd/conf.d/ssl_rewrite.conf:
RewriteEngine On
<If "-n '%{HTTP:X-Forwarded-Proto}' && %{HTTP:X-Forwarded-Proto} != 'https'">
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</If>
eb deploy
Part 4: cleanup
[If using load balancer to application server encryption, change the load balancer back to sending traffic to the server via HTTPS.]
Delete wsgi_shim.py and remove flask from requirements.txt (unless yours is a Flask project).
eb deploy
I am trying to build a simple Python-based Docker container. I am working at a corporation behind a proxy, on Windows 10. Below is my Dockerfile:
FROM python:3.7.9-alpine3.11
WORKDIR /
# requirements.txt must be copied into the image before pip can install from it
COPY . /
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10, or maybe I'm not looking in the right place.
Also I'm not sure this is only a proxy issue since I've seen people running into this issue without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes on my Mac... :(
Thank you
You're talking about the most basic of Docker functionality. Normally, it has to connect to the Docker Hub on the internet to get base images. If you can't make this work with your proxy, you can either
preload your local cache with the necessary images
set up a Docker registry inside your firewall that contains all the images you'll need
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI interface.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image
https://docs.docker.com/engine/reference/builder/#predefined-args
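A sketch of what that looks like, reusing the proxy placeholder from the question (the image name is illustrative):
docker build --build-arg HTTP_PROXY=http://XXXXXXX:8080 --build-arg HTTPS_PROXY=http://XXXXXXX:8080 -t my-app .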
I do not see any runtime dependency of your container on the Internet, so running the container will work without an issue.
I have a docker container that is set up to perform some given actions with selenium. My goal is to have the docker container be created when a request is received at a certain endpoint created using Flask. The Flask app has been set up with uWSGI and Nginx using this tutorial.
When the endpoint receives a request, it is supposed to run the bash script ./run.sh:
#!/bin/bash
# first argument: ID passed through to the selenium script
ID=$1
docker run --rm \
    -v $(pwd)/code:/code \
    -v /etc/hosts:/etc/hosts \
    selenium \
    python3 \
    /code/main.py ${ID}
I can successfully make a call to the endpoint using the IP given by DigitalOcean, but when it gets to the point where it needs to run docker it says:
docker: command not found
Note, I can go into the virtualenv manually, run python app.py, send a request to the Flask endpoint, and the docker container is created and everything works great.
You probably need to add a PATH variable to your bash script which includes the location of your docker executable. The user running NGINX likely doesn't have a path set.
PATH=$PATH:/usr/local/bin:/usr/bin
Also you'll need to ensure that the user running NGINX has permission to use docker, so add them to the docker group.
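Concretely, that might look like this (a sketch; www-data is an assumption for the user NGINX/uWSGI runs as, so check your setup):
# in run.sh: make the docker binary visible to the script
export PATH=$PATH:/usr/local/bin:/usr/bin
# allow the web server's user to talk to the Docker daemon
sudo usermod -aG docker www-data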
If this is a public service, then I would think carefully about whether you really want internet users to be launching containers on your server, does $1 come from user input?
I'm new to Docker. I'm using Docker & docker-compose, going through a Flask tutorial. The base docker image is python 2.7 slim.
It's running on Linux. docker 1.11.2
The application is working fine.
I want to get PyCharm Pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a dialog 'Getting Remote Interpreter Version'. This never completes. Also, I can't quit PyCharm. This happens in PyCharm 2016.2 and 2016.3 EAP (2nd).
The help say "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not enabled by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with its default setup and Docker is expecting to be used through UNIX domain file socket /var/run/docker.sock. And you should specify unix:///var/run/docker.sock in the API URL field. Please comment whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
TCP/IP Method
This method works if you want TCP/IP access, but this is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for Ubuntu 16.04, I came across simpler solutions, but I did not test them. For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service and modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the “-H tcp://0.0.0.0:2375” part. Save the modified file, then restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
I - docker-compose up
I think PyCharm will run docker-compose up; have you tried running this command first in your terminal (from where your docker-compose.yml is)?
If any errors occur, you will get more info in your terminal.
II - pycharm docker configuration
Otherwise it could be due to your docker machine configuration in PyCharm.
What I do to configure my machine and make sure it is correctly configured:
1 - run docker-machine ls in your shell
2 - copy paste the url without tcp://
3 - go to PyCharm preferences -> Build, Execution, Deployment -> Docker -> + to create a new server, fill the server name field
4 - paste the previously copied url, keeping https://
5 - fill in the path of your machine's certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> gear (wheel) icon -> add remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your test twice; sometimes it can take time to initialize
I understand nearly nothing about how EC2 works. I created an Amazon Web Services (AWS) account, then launched an EC2 instance.
And now I would like to execute a Python script in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or in Amazon's S3 and link it to the instance?
Where is there a guide that explains the possible uses of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide.
Here's a very simple procedure to move your Python script from local to an EC2 instance and run it:
1. scp -i <filepath to Pem> <filepath to Py file> ec2-user@<Public DNS>.compute-1.amazonaws.com:<filepath in EC2 instance where you want your file to be>
2. cd to the directory in EC2 containing the file, then run python <Filename.py>, and there it executes.
Here's a concrete example for those who like things shown step-by-step:
In your local directory, create a Python script with the following code: print("Hello AWS")
Assuming you already have AWS set up and you want to run this script in EC2, you need to SCP (Secure Copy Protocol) your file to a directory in EC2. So here's an example:
- My filepath to the pem is ~/Desktop/random.pem.
- My filepath to the py file is ~/Desktop/hello_aws.py
- My public DNS is ec2-34-201-49-170
- The ec2 directory where I want my script to be is /home/ec2-user
- So the full command I run in my local terminal is:
scp -i ~/Desktop/random.pem ~/Desktop/hello_aws.py ec2-user@ec2-34-201-49-170.compute-1.amazonaws.com:/home/ec2-user
Now ssh to your ec2 instance, cd to /home/ec2-user (or wherever you put your file) and run python hello_aws.py.
You have a variety of options. You can browse through a large library of AMIs here.
You can import a vm, instructions are here.
This is a general article about AWS and python.
And in this article, the author takes you through a more advanced system with a combination of datastores in Python using the highly recommended Django framework.
Launch your instance through Amazon's Management Console -> Instance Actions -> Connect
(More details in the getting started guide)
Launch the Java-based SSH Client
Plugins -> SFTP File Transfer
Upload your files
run your files in the background (with '&' at the end or use nohup)
Be sure to select an AMI with Python included; you can check by typing 'python' in the shell.
If your app requires any unorthodox packages, you'll have to install them.
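For instance, on an Amazon Linux AMI that might look like this (a sketch; package names and the pip invocation vary by distribution):
sudo yum install -y python3 python3-pip
pip3 install --user -r requirements.txt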
Running scripts on Linux ec2 instances
I had to run a script on Amazon ec2 and learned how to do it. Even though the question was asked years back, I thought I would share how easy it is today.
Setting up EC2 and ssh-ing to ec2 host
Sign up and launch an EC2 instance (do not forget to save the certificate file that will be generated while launching EC2) with default settings.
Once the EC2 instance is up and running, give the certificate file the required permissions: chmod 400 /path/my-key-pair.pem (or .cer file)
Run the command: ssh -i /path/my-key-pair.pem(.cer) USER@<Public DNS> (the USER value changes based on the operating system you have launched; refer to the paragraph below for more details. The Public DNS can be obtained on the EC2 instance page.)
Use the ssh command to connect to the instance. You specify the private key (.pem) file and user_name#public_dns_name. For Amazon Linux, the user name is ec2-user. For RHEL, the user name is ec2-user or root. For Ubuntu, the user name is ubuntu or root. For Centos, the user name is centos. For Fedora, the user name is ec2-user. For SUSE, the user name is ec2-user or root. Otherwise, if ec2-user and root don't work, check with your AMI provider.
Clone the script to EC2
In order to run scripts on EC2, I would prefer storing the code on GitHub as a repo, or as a gist (if you need to keep the code private), and cloning it into EC2.
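For example (a sketch; the repository URL is a placeholder):
sudo yum install -y git
git clone https://github.com/<your-user>/<your-repo>.git
cd <your-repo>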
The above approach is very easy and not error-prone.
Running the python script
I worked with an RHEL Linux instance and Python was already installed, so I could run my Python script directly after ssh-ing to the host. It depends on the operating system you choose. Refer to the AWS manuals if it's not installed already.
Reference: AWS Doc