Debug a Docker container that quits immediately? - python

I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it listed:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when I run
docker run -p 4000:80 friendlyhello
I cannot find a way to figure out why it did not work:
1) docker ps -a says the container exited
2) docker logs <container name> returns no log output
3) I can attach a shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I could not find (grep) any logging information there (in /var/log)
4) running it in the foreground (-t) and in detached mode (-d) did not help either
What else could I do?

Note: docker exec on an exited (stopped) container is not possible (see moby issue 30361).
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed is not: you should see
Error response from daemon: Container a21... is not running
A docker inspect of the image you are running should reveal its entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
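If you want to script that check, a minimal sketch using the Docker SDK for Python (pip install docker) could look like this; the friendlyhello tag comes from the question above:
import docker

# Scripted equivalent of `docker inspect` on the image: print its
# Entrypoint and Cmd to see what the container actually runs at startup.
client = docker.from_env()
config = client.images.get("friendlyhello").attrs["Config"]
print("Entrypoint:", config.get("Entrypoint"))
print("Cmd:", config.get("Cmd"))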

I had this exact same issue... and it drove me nuts. I am using Docker Toolbox, as I am running Windows 7. I ran docker events & prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than that the container starts and exits pretty much straight away; docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page suggesting to remove the docker machine and re-create it. I know that might sound like a sledgehammer of a solution, but the examples seemed to show that re-creating downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/, where I now get:
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled

Related

How to restart Python Docker Container from inside

My objective: I want to be able to restart a container, based on the official Python image, using some command from inside the container.
My system: I have my own Docker image, based on the official Python image, which looks like this:
FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]
As you can see, the image does not execute a single Python file; instead, it executes a script called start.sh, which looks like this:
#!/bin/bash
echo "Starting"
echo "Env: $ENTORNO"
exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &
All of this works perfectly, but I want the entire container based on this image to be restarted if, for example, script 3 fails.
My approach: I had two ideas about this problem. First, to execute a reboot command in the python3 script, something like this:
from subprocess import call
[...]
call(["reboot"])
This does not work inside the Python Debian image; it fails with:
reboot: command not found
The other approach was to mount the docker.sock inside the container, but the error this time is:
root@MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission denied
I don't know if I'm going about these two approaches the right way, or if anyone has another idea, but any help will be very much appreciated.
Update
After thinking about it, I realised you could send a signal to PID 1 (your entrypoint), trap it, and use a handler to exit with an appropriate code so that docker will reschedule it.
Here's an MRE:
Dockerfile
FROM python:3.9
WORKDIR /app
COPY ./ /app
ENTRYPOINT ["./start.sh"]
start.sh
#!/usr/bin/env bash
python script.py &
# This traps user defined signal and kills the last command
# (`tail -f /dev/null`) before exiting with code 1.
trap 'kill ${!}; echo "Killed by backgrounded process"; exit 1' USR1
# Launches `tail` in the background and sets this program to wait
# for it to finish, so that it does not block execution
tail -f /dev/null & wait $!
script.py
import os
import signal
# Process 1 will be your entrypoint if you declared it in `exec-form`*
print("Sending signal to stop container")
os.kill(1, signal.SIGUSR1)
*exec form
Testing it
> docker build . -t test
> docker run test
Sending signal to stop container
Killed by backgrounded process
> docker inspect $(docker container ls -n 1 -q) --format='{{.State.ExitCode}}'
1
Original post
I think the safest bet would be to instruct docker to restart your container when there's some failure. Then you'd only have to exit your program with a non-zero code (i.e. run exit 1 from your start.sh) and docker will restart it from scratch.
Option 1: docker run --restart
Related documentation
docker run --restart on-failure <image>
Option 2: Using docker-compose
Version 3
In your docker-compose.yml you can set the restart_policy directive (nested under the deploy key) for the service you're interested in restarting, i.e.:
version: "3"
services:
  app:
    ...
    deploy:
      restart_policy:
        condition: on-failure
    ...
Version 2
Before version 3, the same policy could be applied with the restart directive, which allows for less configuration.
version: "2"
services:
app:
...
restart: "on-failure"
...
Is there any reason why you are running 3 processes in the same container? As per microservice architecture basics, only one process should run in a container, so you should run 3 containers for the 3 scripts. Each script should contain logic such that if one of the 3 containers is not reachable, it gets killed (and recreated by its restart policy); a minimal sketch of such a check follows below.
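A sketch of that reachability check (hypothetical: the service names script1/script2 and port 5000 are placeholders for your setup):
import socket
import sys

# Assumed compose service names and port; adjust to your environment
PEERS = [("script1", 5000), ("script2", 5000)]

def reachable(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Exit non-zero so a restart policy (e.g. on-failure) recreates the container
if not all(reachable(host, port) for host, port in PEERS):
    sys.exit(1)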
Well, in the end the solution was much simpler than I expected.
I started from the base where I mount the docker socket inside the container (I know that this practice is not recommended, but in my case it does not pose security problems), using the following in docker-compose:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Then it was as simple as using the Docker SDK for Python, which provides a complete client through that socket and allowed me to restart the container from inside the Python script in an ultra-simple way.
import docker
[...]
# Talk to the daemon through the mounted socket and restart this container by name
docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart()

Python process in Docker crashes when run non-interactively

I face an odd problem with a tool I found on GitHub; it's a script that relays an MJPEG stream: https://github.com/OliverF/mjpeg-relay
I create a Docker image with the provided command (after initializing the submodule with git submodule update --init):
docker build -t relay .
When I run the container as follows (with the -it flag), the script runs fine; when I remove the flag, the container exits after a few seconds.
docker run -it -p 54017:54321 relay "http://192.0.2.1:1234/?action=stream"
Since I want to start the script for multiple streams from a Docker Compose file, adding restart: unless-stopped just leads to an endless loop of restarting containers.
services:
  mjpeg:
    image: relay
    command: "http://192.0.2.1:1234/?action=stream"
    ports:
      - "54017:54017"
    restart: unless-stopped
I thought about wrapping the command in a tmux session, but I had no success with that. Or can you help me find what makes the script crash when it runs non-interactively?
Thank you very much!

Docker executes my Python script only when I stop the container

I'm trying to build a Docker image for my Python app (a small API built on aiohttp with a couple of endpoints):
FROM python:3
WORKDIR /home/emil/Projects/elastic_simple_engine
COPY . .
RUN pip3 install -r requirements.txt
EXPOSE 5000/tcp
CMD ["python3", "entry.py"]
The last line of the Dockerfile runs a python script which starts aiohttp.web.Application():
# entry.py
# ...a few dozen lines of code above...
if __name__ == '__main__':
    print('Initializing...')
    aiohttp.web.run_app(app, host='127.0.0.1', port=5000)
After building the image, I try to run the container:
$ docker run -p 5000:5000 myapp
Docker runs the container silently, without any output in the shell, but I can't reach my app at 127.0.0.1:5000 (everything works perfectly when I launch it without Docker).
Only when I stop the container does it print to the console the lines that should have been shown during the app's launch, and then it shuts down:
Initializing...
======== Running on http://127.0.0.1:5000 ========
(Press CTRL+C to quit)
Please help me figure out what I am doing wrong.
TLDR
Set host to 0.0.0.0
127.0.0.1 is the IP address of the loopback interface; it can only be reached from within the same host.
0.0.0.0 means a server (in this context) will listen on every available network interface (including 127.0.0.1).
Here, since you are not sharing the host's network, 127.0.0.1 is only reachable inside the container, not from outside it. You should bind to 0.0.0.0 to reach the app from outside the container, or pass --network="host" to docker run, but that can have other complications with port sharing.
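Applied to the entry.py from the question, the fix is a one-line change (a sketch; the rest of the file stays as it is):
# entry.py
if __name__ == '__main__':
    print('Initializing...')
    # Bind to all interfaces so the port published with -p 5000:5000 is reachable
    aiohttp.web.run_app(app, host='0.0.0.0', port=5000)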

Send logs from docker container to Windows host

I've seen other posts with similar questions, but I can't find a solution for my scenario. I hope someone can help.
Here is the thing: I have a Python script that listens for UDP traffic and stores the messages in a log file. I've put this script in a Docker image so I can run it in a container.
I need to map the generated logs (python script logs) FROM inside the container TO a folder outside the container, on the host machine (Windows host).
If I use docker-compose, everything works fine, but I can't find a way to make it work using a plain docker run command.
Here is my docker-compose.yml file:
version: '3.3'
services:
  '20001':
    image: udplistener
    container_name: '20001'
    ports:
      - 20001:20001/udp
    environment:
      - UDP_PORT=20001
      - BUFFER_SIZE=1024
    volumes:
      - ./UDPLogs/20001:/usr/src/app/logs
And here is the corresponding docker run command:
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "/C/Users/kgonzale/Desktop/UDPLogs/20001":/usr/src/app/logs -p 20001:20001 udplistener
I think the problem may be related to the way I'm creating the volumes. I know the two forms differ (docker-compose resolves the relative path against the compose file's directory, while docker run expects an absolute path), but I can't find a way to use relative paths with the docker run command.
To summarize: the Python script creates logs inside the container, and I want to map those logs outside the container. I can see the logs on the host machine if I use docker-compose up -d, but I need the corresponding docker run command.
Container: python:3.7-alpine
Host: Windows 10
Thanks in advance for your help!

Flask + Docker app - the container dies when I run docker run

I have an application with Dockerfile + docker-compose.
Dockerfile
docker-compose.yml
I have a CI pipeline which builds an image from my Dockerfile and pushes it to Docker Hub:
Travis.yaml
When I pull this image onto my cloud server, I cannot run it with the command below:
docker run -d -p 80:80 flask-example
because the container dies.
Besides the image downloaded from Docker Hub after Travis builds it, will I need docker-compose on my server, executing
docker-compose up -d
to run the application? Or is there another way to do it?
Thanks guys.
Running docker with the -d flag detaches your container, which means it runs in the background.
Thus, you cannot see the error. Just remove this flag and you will see why it is dying.
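Alternatively, if you want to keep -d, you can still read the container's output afterwards with docker logs. A small sketch with the Docker SDK for Python (the container name flask-example here is an assumption; use your actual container name or id):
import docker

# Fetch a container's output, even after it has exited
# (the scripted equivalent of `docker logs <name>`)
client = docker.from_env()
print(client.containers.get("flask-example").logs().decode())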
From the link to your docker-compose file, it seems that port 80 is already in use (by the frontend container), so maybe you can try using a different port?
(for example: docker run -d -p 8080:80 flask-example)
Second, you are right:
docker-compose is just another way to run your container. You don't have to use both.
