Create a Docker container from a Flask request in a uWSGI instance - Python

I have a Docker container that is set up to perform some given actions with Selenium. My goal is to have the Docker container created when a request is received at a certain endpoint of a Flask app. The Flask app has been set up with uWSGI and Nginx following this tutorial.
When the endpoint receives a request, it is supposed to run the bash script ./run.sh:
#!/bin/bash
# Pass the endpoint's argument through to the container.
ID=$1
docker run --rm \
  -v "$(pwd)/code:/code" \
  -v /etc/hosts:/etc/hosts \
  selenium \
  python3 \
  /code/main.py "${ID}"
I can successfully call the endpoint using the IP provided by DigitalOcean, but when it gets to the point where it needs to run Docker, it fails with:
docker: command not found
Note: if I activate the virtualenv manually, run python app.py, and send a request to the Flask endpoint, the Docker container is created and everything works fine.

You probably need to add a PATH variable to your bash script that includes the location of your docker executable. The user running Nginx likely doesn't have a PATH set.
PATH=$PATH:/usr/local/bin:/usr/bin
You'll also need to ensure that the user running Nginx has permission to use Docker, so add that user to the docker group, as sketched below.
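A minimal sketch, assuming the uWSGI/Nginx workers run as www-data (substitute your actual service user and unit names):
# Give the web server's user access to the Docker daemon:
sudo usermod -aG docker www-data
# Group membership takes effect on a fresh login, so restart the services:
sudo systemctl restart nginx
sudo systemctl restart your-uwsgi-service   # placeholder unit name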
If this is a public service, then I would think carefully about whether you really want internet users to be launching containers on your server. Does $1 come from user input?

Related

Server freezes while trying to take a backup of MongoDB in Docker Compose

I have a back-end API server created with Python & Flask, with MongoDB as my database. I rebuild and restart the Docker Compose stack every time I update my source code. Because of this, I always take a backup of my database before stopping and restarting the containers.
From the beginning I have been using this command to take a backup into my default folder:
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
This line worked well previously. I then restore the database after restarting the Docker Compose stack to bring my back-end up with the updated code. I use this command to restore the database:
sudo docker-compose exec -T db mongorestore --archive --gzip < backup.gz
But as of today, when I try to take a backup while the containers are still running (as usual), the server freezes.
I am using an Amazon EC2 instance running Ubuntu 20.04.
First, stop redirecting the output of the command. If you don't know whether it is working, you should be looking at all available information, which includes the output.
Then verify that you can connect to your deployment using the mongo shell and run commands.
If that succeeds, look at the server log and verify there is a record of the connection from mongodump.
If that works, try dumping other collections. A diagnostic sketch follows.
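For example (service and database names as in the question; the in-container archive path is an assumption):
# Write the archive to a file inside the container instead of redirecting stdout,
# so mongodump's progress and errors stay visible in the terminal:
sudo docker-compose exec -T db mongodump --archive=/tmp/backup.gz --gzip --db SuperAdminDB
# Verify basic connectivity with the mongo shell first:
sudo docker-compose exec -T db mongo --eval 'db.runCommand({ ping: 1 })'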
After digging for three days, I found that the root cause was Apache.
I had recently installed Apache to also host my frontend. While Apache is running, the server won't let me dump the MongoDB backup; somehow Apache was conflicting with Docker.
My solution:
1. Stop the Apache service
sudo service apache2 stop
2. Then take the MongoDB backup
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz

How to deploy a Flask app on an AWS EC2 Linux/UNIX instance

How do I deploy a Flask app on an AWS Linux/UNIX EC2 instance, using either:
1. Gunicorn, or
2. the Apache server?
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it as well, so it boils down to these steps:
Install Docker (if you don't have it) and build an image for your application, making sure you can start the container locally and the app works as intended. You'll also need to write a Dockerfile that sets your runtime, copies all your directories, and exposes port 80 (this will be handy for AWS later).
The command to build an image is docker build -t your-app-name .
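A quick local smoke test might then look like this (the image name is a placeholder, and the port mapping assumes the Dockerfile exposes port 80):
docker run --rm -p 8080:80 your-app-name
# In another terminal, confirm the app responds:
curl http://localhost:8080/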
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Amazon Linux 2 machine image. You'll be required to create a security key (.pem file) and move it to somewhere on your computer. This acts as your credential to log in to your instance. This is where things differ depending on your OS. On Mac, you need to cd into the directory where the key is and modify its permissions by running chmod 400 key-file-name.pem. On Windows, you have to go into the security settings and make sure only your account (ideally the owner of the computer) can use this file, essentially setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click 'Connect to instance' on the EC2 dashboard.
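In practice the connection step looks something like this (the key file name and public DNS name are placeholders taken from your EC2 dashboard):
chmod 400 key-file-name.pem
ssh -i key-file-name.pem ec2-user@your-instance-public-dns.amazonaws.com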
Once you're logged in, you can configure your instance to install docker and let you use it by running the following:
sudo amazon-linux-extras install docker   # enable the Docker package (Amazon Linux 2)
sudo yum install docker                   # install the Docker engine
sudo service docker start                 # start the Docker daemon
sudo usermod -a -G docker ec2-user        # let ec2-user run docker without sudo (log out/in to apply)
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
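To copy a whole project directory in one go, scp's recursive flag may be simpler (the paths and names here are placeholders):
scp -i /path/my-key-pair.pem -r ./my-flask-app ec2-user@public-dns-name:/home/ec2-user/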
Now that all your files are on the instance, build the Docker image using the same command from the first step and run the container. If you go to the URL that AWS gives you, your app should be running on AWS!
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.

How to connect to a Postgres Docker container from another Docker container

I am trying to set up the Bullet Train API server on our machines. I am successfully able to run their Python server using docker-compose up.
As per the docs, it needs the database as well. I preferred using the Docker image for the Postgres DB: docker run --name local_postgres -d -P postgres
It doesn't print a success message saying whether Postgres is running or not. It just returns a long string, which I assume is an identifier of the Postgres container.
As I need to connect this Bullet Train API server to this Dockerized database, the question is: how do I find the connection string for this Postgres container?
The trick is to use docker-compose. Put your application image in there as one service and your Postgres image as a second service. Then also define a shared network in the stack and attach each of your services to it. After that, the application can reach the database by using the service name "local_postgres" as the hostname.
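To make the wiring concrete, here is the same idea expressed with plain docker commands (a sketch; the application image name is a placeholder, and a docker-compose.yml would define the two services and the network equivalently):
# Create a user-defined network; containers attached to it can resolve
# each other by container/service name:
docker network create backend
docker run --name local_postgres --network backend -e POSTGRES_PASSWORD=example -d postgres
docker run --name api --network backend -d your-app-image
# The app can now reach the database at hostname local_postgres, port 5432.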
Update as per your comment
Make sure that the Dockerfile that defines the Postgres container contains an EXPOSE instruction.
EXPOSE 5432
If missing, add it and rebuild the image.
Start the container with the option below, which publishes the database port on the host.
docker run --name local_postgres -p 5432:5432 -d -P postgres
Check whether the port is really published by typing:
docker ps | grep 'local_postgres'
You should see something like this in the output.
PORTS 0.0.0.0:5432->5432/tcp
If you see this output, port 5432 has been successfully published on your host. So if your app runs on the host, it can access the database via localhost:5432.
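You can verify end to end with psql, which also shows the shape of the connection string (a sketch: the user, password, and database assume the image defaults plus a POSTGRES_PASSWORD you set when starting the container):
# Hypothetical connection test; adjust credentials to your container's settings:
psql "postgresql://postgres:example@localhost:5432/postgres" -c 'SELECT version();'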

PyCharm can't complete remote interpreter setup for Docker

I'm new to Docker. I'm using Docker & docker-compose while working through a Flask tutorial. The base Docker image is python:2.7-slim.
It's running on Linux, with Docker 1.11.2.
The application is working fine.
I want to get PyCharm Pro connecting to the remote interpreter, something I have never done before.
I followed the instructions for docker-compose. Initially it was failing because it could not connect to port 2376. I added this port to docker-compose.yml and the error went away.
However, trying to save the configuration now stalls/hangs with a dialog 'Getting Remote Interpreter Version'. This never completes. Also, I can't quit PyCharm. This happens in PyCharm 2016.2 and 2016.3 EAP (2nd).
The help says "SFTP support is required for copying helpers to the server".
Does this mean I need to do something?
I'm not using docker-machine
The problem was that TCP access to the Docker API is not enabled by default under Ubuntu 16.04.
There are suggestions to enable TCP/IP access.
However, JetBrains gave me the simplest solution:
If you are using Linux it is most likely that Docker installed with its default setup and Docker is expecting to be used through the UNIX domain file socket /var/run/docker.sock. And you should specify unix:///var/run/docker.sock in the API URL field. Please comment whether it helps!
This suggestion worked with my Ubuntu 16.04-derived distribution.
This goes into the Docker entry in PyCharm preferences under Build, Execution, Deployment.
You can also edit this while setting up a remote interpreter, but only by making a new Docker entry.
TCP/IP Method
This method works if you want TCP/IP access, but it is a security risk. The socket approach is better, which is probably why it is the default.
https://coreos.com/os/docs/latest/customizing-docker.html
Customizing docker
The Docker systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
Enable the remote API on a new socket
Create a file called /etc/systemd/system/docker-tcp.socket to make Docker available on a TCP socket on port 2375.
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
Service=docker.service
[Install]
WantedBy=sockets.target
Then enable this new socket:
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
Test that it’s working:
docker -H tcp://127.0.0.1:2375 ps
Once I thought to search for Ubuntu 16.04, I came across simpler solutions, but I did not test them.
For instance:
https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04/
Edit the file /lib/systemd/system/docker.service
Modify the line that starts with ExecStart to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375
Where my addition is the "-H tcp://0.0.0.0:2375" part. Save the modified file, then restart the Docker service:
sudo service docker restart
Test that the Docker API is indeed accessible:
curl http://localhost:2375/version
I - docker-compose up
I think PyCharm will run docker-compose up. Have you tried running this command first in your terminal (from the directory containing your docker-compose.yml)?
If any errors occur, you will get more information in your terminal.
II - PyCharm Docker configuration
Otherwise it could be due to your Docker machine configuration in PyCharm.
Here is what I do to configure my machine and make sure it is configured correctly:
1 - run docker-machine ls in your shell (see the example output after this list)
2 - copy the URL without the tcp:// prefix
3 - go to PyCharm Preferences -> Build, Execution, Deployment -> Docker -> + to create a new server, and fill in the server name field
4 - paste the previously copied URL, prefixed with https://
5 - fill in the path of your machine's certificates folder
6 - tick Import credentials from Docker Machine
7 - click Detect -> your machine should appear in the selection list
8 - save this server
9 - select this server when configuring your remote interpreter, from PyCharm Preferences -> Project -> Project Interpreter -> wheel -> add remote -> Docker or Docker Compose
10 - you should be able to select a service name
11 - save your new interpreter
12 - try running your tests twice; sometimes it can take time to initialize
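For reference, step 1's output has roughly this shape (a sketch; the machine name and URL are placeholders), and step 2 copies the URL column value without tcp://:
docker-machine ls
# NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
# default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.2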

How to watch an xvfb session that's inside a Docker container on a remote server from my local browser?

I'm running a Docker container (that I built on my own) that runs E2E tests.
The browser is up and running, but I want another nice-to-have feature: the ability to watch the session live.
My docker run command is:
docker run -p 4444:4444 --name ${DOCKER_TAG_NAME} \
  -e Some_ENVs \
  -v Volume:Volume \
  --privileged \
  -d "{docker-registry}" >> /dev/null 2>&1
I'm able to export screenshots, but in some cases that's not enough, and the ability to watch the exact state of the test live would be amazing.
I've tried a lot of options but came to a dead end; any help would be great.
My tests are in Python 2.7.
My Docker base image is ubuntu:14.04.
My environment is in AWS (if that matters).
The containers run on Ubuntu servers.
I know it's a duplicate of this question, but no one answered it, so...
There is a recent tool called Selenoid. It launches browsers in Docker containers (i.e. headless, as you require). It has a standalone UI capable of showing the live session screen via VNC, so you can launch multiple sessions in parallel and then watch and even intercept actions happening in the target browser. All this works well in a cloud environment.
I have faced the same issue before with VNC. You need to know which port your xvfb/VNC server is using, then open that port in your AWS security group; once that's done, you should be able to connect.
In my case I was starting the Selenium Docker image from https://github.com/elgalu/docker-selenium and used this command to start the container:
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
  -v /dev/shm:/dev/shm -e VNC_PASSWORD=hola \
  -e SCREEN_WIDTH=1920 -e SCREEN_HEIGHT=1480 \
  elgalu/selenium
The VNC port as per the command is 5900, so I opened that port in the instance's security group and connected using a VNC viewer on port 5900.
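If you'd rather not open port 5900 to the internet, an SSH tunnel is a common alternative (a sketch; the key file and host name are placeholders):
# Forward local port 5900 to the VNC port on the remote host:
ssh -i key.pem -L 5900:localhost:5900 ubuntu@your-aws-host
# Then point your VNC viewer at localhost:5900 (password "hola" in the example above).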
