Developing using PyCharm in Docker Container on AWS Instance - python

I use PyCharm Professional to develop python.
I am able to connect the PyCharm run/debug GUI to a local Docker image's python interpreter and run local code using the Docker container's python environment and libraries, e.g. via the procedure described here: Configuring Remote Interpreter via Docker.
I am also able to SSH into AWS instances with PyCharm and connect to remote python interpreters there, which maps files from my local project into a remote directory and again allows me to run a GUI stepping through remote code as though it were local, e.g. via the procedure described here: Configuring Remote Interpreters via SSH.
I have a Docker image on Docker Hub that I would like to deploy to an AWS instance, and then connect my local PyCharm GUI to the environment inside the remote container, but I can't see how to do this. Can anybody help me?
[EDIT] One proposal that has been made is to put an SSH server inside the remote container and connect my local PyCharm directly into the container via SSH, for example as described here. It's one solution, but it has been extensively criticised elsewhere - is there a more canonical solution?

After doing a bit of research, I came to the conclusion that installing an SSH server inside my container and logging in via the PyCharm SSH remote interpreter was the best thing to do, despite concerns raised elsewhere. I managed it as follows.
The Dockerfile below will create an image with an SSH server inside that you can SSH into. It also has anaconda/python, so it's possible to run a notebook server inside and connect to that in the usual way for Jupyter debugging. Note that it uses a plain-text password (screencast); you should definitely enable key-based login if you're using this for anything sensitive.
It will take local libraries and install them into the package library inside the container, and optionally you can pull repos from GitHub as well (register for an API token on GitHub if you want to do this, so you don't need to embed a plain-text password). It also requires you to create a plain-text requirements.txt listing all of the other packages you need pip installed.
Then run the build command to create the image, and the run command to create a container from that image. In the Dockerfile we expose SSH on the container's port 22, so let's hook that up to an unused port on the AWS instance - this is the port we will SSH through. Also add another port pairing if you want to use Jupyter from your local machine at any point:
docker build -t your_image_name .
don't miss the . at the end - it's important!
docker run -d -p 5001:22 -p 8889:8889 --name=your_container_name your_image_name
N.B. you will need to bash into the container (docker exec -it xxxxxxxxxx bash) and turn Jupyter on with jupyter notebook.
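For reference, a minimal sketch of starting the notebook server inside the container so it is reachable through the port mapped above (assuming the container name and port 8889 used in this answer):
docker exec -it your_container_name bash
jupyter notebook --ip=0.0.0.0 --port=8889 --no-browser --allow-root
# then browse to http://<aws-instance-public-dns>:8889 from your local machine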
Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y openssh-server
# Load an ssh server. Change root username and password. By default in debian, password login is prohibited,
# go into the file that controls this and make a change to allow password login
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN /etc/init.d/ssh restart
# Install git, so we can pull in some repos
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Install the requirements and the libraries we need (from a requirements.txt file)
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
# These are local libraries, add them (assuming a setup.py)
ADD your_libs_directory /your_libs_directory
RUN python3 -m pip install /your_libs_directory
RUN cd /your_libs_directory && python3 setup.py install
# Adding git repos (optional - assuming a setup.py)
RUN git clone https://git_user_name:git_API_token@github.com/YourGit/git_repo.git /git_repo
RUN python3 -m pip install /git_repo
RUN cd /git_repo && python3 setup.py install
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Related

Why doesn't docker build copy the files over?

I am trying to deploy a python web app which is basically an interface for an Excel file.
I've built and run the container on my local machine and everything is smooth, but when I try to deploy on Azure Web Apps I get an error saying that one of my files is missing.
The Dockerfile looks like this:
FROM python:3.8-slim-buster AS base
WORKDIR /home/mydashboard
RUN python -m pip install --upgrade pip
RUN pip3 install pipenv
COPY Pipfile /home/mydashboard/Pipfile
COPY Pipfile.lock /home/mydashboard/Pipfile.lock
COPY dataset.csv /home/mydashboard/dataset.csv
COPY src/ /home/mydashboard/src
RUN pipenv install --system
WORKDIR /home/mydashboard/src
EXPOSE 8000
CMD gunicorn -b :8000 dashboard_app.main:server
Weirdly enough, when this runs on Azure App Service I receive a message that "dataset.csv" does not exist.
I printed the files inside /home/dashboard/ and it seems to be an empty folder!!
Why does this happen??
It runs perfectly on my personal computer, but it just won't run on Azure.
Based on:
Your error message is dataset.csv does not exist
It works on your local machine
Fails in Azure
Probably dataset.csv exists in the location you copy from on your local machine, but is missing from the build context in Azure.
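A quick way to confirm is to list the directory in a throwaway container built from the same Dockerfile; a minimal sketch, assuming the image is tagged mydashboard:
docker build -t mydashboard .
docker run --rm mydashboard ls -la /home/mydashboard
# if dataset.csv is missing here, check that it sits next to the Dockerfile
# and is not excluded by a .dockerignore entry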

Run docker daemon from pycharm

I've just started to use docker and got an error.
I use PyCharm on macOS. In my project I clone a GitHub repo (a simple LogisticRegression from sklearn) that includes a Dockerfile.
I expected that all I needed was
docker build . -t servername
But I got an error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Where should I run the Docker daemon?
Thank you for your help!
You need to install and run Docker first. Here is the link for Docker Desktop for macOS.
Run it with sudo, so sudo docker build . -t servername
Otherwise, on Linux, try sudo systemctl enable docker.service && sudo systemctl enable containerd.service
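On macOS the daemon runs inside Docker Desktop rather than as a system service, so a quick sanity check (a sketch, assuming Docker Desktop is installed) looks like:
open -a Docker   # launch Docker Desktop
docker info      # succeeds once the daemon is up
docker build . -t servername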

Python Virtualenv HTTP Server With Docker

Trying to host a python http server, and it works fine.
FROM python:latest
COPY index.html /
CMD python3 -m http.server
But when trying with a python virtualenv, I'm facing issues.
FROM python:3
COPY index.html .
RUN pip install virtualenv
RUN virtualenv --python="python3" .virtualenv
RUN .virtualenv/bin/pip install boto3
RUN python3 -m http.server &
CMD ["/bin/bash"]
Please help.
I just want to point out that using a virtualenv within a docker container might be redundant.
With docker, you are encapsulating your one specific application along with its dependencies (libraries, frameworks, boto3 in your case), as opposed to your local machine where you might have several applications being developed, each with different dependencies.
Thus, I would recommend considering again the necessity of virtualenv within docker.
Second, running the command:
RUN python3 -m http.server &
in the background is also bad practice here. You want to run it with CMD in the foreground, so it runs as the first process (PID 1); then it will receive all docker signals and start automatically when the container starts:
CMD ["virtualenv/bin/python3", "-m", "http.server"]

How to Deploy Flask app on AWS EC2 Linux/UNIX instance

How do I deploy a Flask app on an AWS Linux/UNIX EC2 instance,
either
1) using Gunicorn, or
2) using an Apache server?
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your flask app before you deploy it as well, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally and the app works as intended. You'll also need to write a Dockerfile that sets your runtime, copies all your directories and exposes port 80 (this will be handy for AWS later); a sketch of such a Dockerfile is shown after the build command below.
The command to build an image is docker build -t your-app-name .
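A minimal sketch of such a Dockerfile, assuming the entry point is app.py exposing a Flask object named app and that flask and gunicorn are listed in requirements.txt (adjust the names to your project):
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 80
# serve the Flask app object from app.py with gunicorn on port 80
CMD ["gunicorn", "-b", "0.0.0.0:80", "app:app"]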
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Amazon Linux 2 machine image. You'll be required to create a security key (.pem file) and move it somewhere on your computer. This acts as your credential to log in to your instance. This is where things differ depending on what OS you use. On Mac, you need to cd into the directory where the key is and modify its permissions by running chmod 400 key-file-name.pem. On Windows, you have to go into the security settings and make sure only your account (ideally the owner of the computer) can use this file, basically setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click Connect to instance on the EC2 dashboard.
Once you're logged in, you can configure your instance to install docker and let you use it by running the following:
sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
Now that all your files are in the instance, build the docker image using the same command from the first step and start a container from it (see the run sketch below). If you go to the URL that AWS gives you, your app should be running on AWS!
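A sketch of that build-and-run step on the instance, assuming the image name from above and that the container listens on port 80 (the instance's security group must also allow inbound traffic on port 80):
docker build -t your-app-name .
docker run -d -p 80:80 your-app-name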
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.

AWS Lightsail Cloud9 Install Fails

When I am trying to load the Cloud9 IDE for my AWS Lightsail instance it gives me this error:
Installation Started
Package Cloud9 IDE 1
--------------------
Python version 2.7 is required to install pty.js. Please install Python 2.7 and try again. You can find more information on how to install Python in the docs: http://docs.aws.amazon.com/console/cloud9/python-ssh
exiting with 1
Failed Bash. Exit code 1
My Lightsail instance does have python 2.7.15 installed (when I do python --version). Does anyone know a solution to this issue?
Here's the walkthrough on connecting your AWS Cloud9 IDE to your AWS Lightsail instance (WordPress, Node, Python etc.).
Go to https://lightsail.aws.amazon.com/ls/webapp/home/instances
Create an instance using UNIX/Linux/WordPress or Node or whatever floats your boat, then click Create instance.
Go to networking and Create a static IP for the instance
Go to manage instance and connect using the web-based SSH shell, then run:
sudo apt-get update
sudo apt-get install -y python-minimal
sudo apt-get update
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
. ~/.bashrc
nvm install node
which node (should print something like => /home/bitnami/.nvm/versions/node/v11.13.0/bin/node)
curl -L https://raw.githubusercontent.com/c9/install/master/install.sh | bash
wget -O - https://raw.githubusercontent.com/c9/install/master/install.sh | bash
Go to https://us-west-2.console.aws.amazon.com/cloud9/home
Create a new environment using SSH
Enter the username bitnami and the static IP of the instance from Lightsail
Environment Path => /home/bitnami
Node Path -> enter the value of 'which node' command from lightsail =>
(e.g. /home/bitnami/.nvm/versions/node/v11.10.0/bin/node)
At the bottom of the new Cloud9 configuration, there's an SSH key; highlight and copy that.
Go back to the cloud terminal in Lightsail =>
run vi ~/.ssh/authorized_keys
Add the Cloud9 ssh key 2 lines after the default key (see the sketch after these steps)
Go back to your cloud9 environment and click 'create Environment' once the SSH key has been added and saved
You should now be connected to your lightsail instance through AWS cloud9
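For the authorized_keys step, a minimal sketch of what you are doing on the Lightsail instance (the key string here is a placeholder for the public key copied from the Cloud9 setup screen):
# append the Cloud9 public key to the bitnami user's authorized keys
echo "ssh-rsa AAAA...your-cloud9-public-key... cloud9" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys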
