I have a Python (2.7) app which is started in my dockerfile:
CMD ["python","main.py"]
main.py prints some strings when it is started and goes into a loop afterwards:
print "App started"
while True:
time.sleep(1)
As long as I start the container with the -it flag, everything works as expected:
$ docker run --name=myapp -it myappimage
> App started
And I can see the same output via logs later:
$ docker logs myapp
> App started
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
But the container still seems to be running:
$ docker ps
Container Status ...
myapp up 4 minutes ...
Attach does not display anything either:
$ docker attach --sig-proxy=false myapp
(working, no output)
Any ideas what's going wrong? Does print behave differently when run in the background?
Docker version:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for future reference:
Using unbuffered output with
CMD ["python","-u","main.py"]
instead of
CMD ["python","main.py"]
solves the problem; you can now see the output (both stderr and stdout) via
docker logs myapp
Why -u works (ref):
- print is indeed buffered, and docker logs will eventually give you that output, just after enough of it has piled up
- executing the same script with python -u gives instant output, as said above
- import logging + logging.warning("text") gives the expected result even without -u
What python -u means (ref), from python --help | grep -- -u:
-u : force the stdout and stderr streams to be unbuffered;
In my case, running Python with -u didn't change anything. What did the trick, however, was to set PYTHONUNBUFFERED=1 as environment variable:
docker run --name=myapp -e PYTHONUNBUFFERED=1 -d myappimage
[Edit]: Updated PYTHONUNBUFFERED=0 to PYTHONUNBUFFERED=1 after Lars's comment. This doesn't change the behavior and adds clarity.
If you want to see your print output alongside your Flask output when running docker-compose up, add the following to your docker-compose file.
web:
  environment:
    - PYTHONUNBUFFERED=1
https://docs.docker.com/compose/environment-variables/
See this article, which explains the detailed reason for the behavior:
There are typically three modes for buffering:
If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn’t flushed until it fills up.
If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there’s typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like “buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first”.
And GNU libc (glibc) uses the following rules for buffering:
Stream              Type    Behavior
stdin               input   line-buffered
stdout (TTY)        output  line-buffered
stdout (not a TTY)  output  fully-buffered
stderr              output  unbuffered
So, if you use -t, then per the Docker documentation a pseudo-TTY is allocated and stdout becomes line-buffered, which is why docker run --name=myapp -it myappimage shows the one-line output.
And if you just use -d, no TTY is allocated, so stdout is fully-buffered; a single line like App started is not enough to fill (and flush) the buffer.
So the fix is either to use -dt to make stdout line-buffered, or to add -u to Python to flush the buffer.
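To make the effect visible, here is a minimal sketch (not from the original answer) that reports whether stdout is attached to a TTY and flushes explicitly, so the line reaches docker logs even when stdout is fully-buffered:
import sys
import time

# Illustrative demo: report the buffering situation, then flush manually
# so the output shows up in `docker logs` even with `docker run -d`.
print("stdout is a TTY: %s" % sys.stdout.isatty())

while True:
    print("App started")
    sys.stdout.flush()  # explicit flush works even without -u or a TTY
    time.sleep(1)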
Since I haven't seen this answer yet:
You can also flush stdout after you print to it:
import time

if __name__ == '__main__':
    while True:
        print('cleaner is up', flush=True)
        time.sleep(5)
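If you don't want to pass flush=True to every call, a variant (assuming Python 3.7+, since reconfigure() was added then) is to make stdout line-buffered once at startup:
import sys
import time

# Assumes Python 3.7+: reconfigure() makes stdout line-buffered,
# so every print is flushed at the newline.
sys.stdout.reconfigure(line_buffering=True)

if __name__ == '__main__':
    while True:
        print('cleaner is up')
        time.sleep(5)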
Try adding these two environment variables to your solution: PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8.
You can see logs on a detached container if you change print to logging.
main.py:
import time
import logging
print "App started"
logging.warning("Log app started")
while True:
    time.sleep(1)
Dockerfile:
FROM python:2.7-stretch
ADD . /app
WORKDIR /app
CMD ["python","main.py"]
If anybody is running the Python application with conda, you should add --no-capture-output to the command, since conda run captures stdout by default.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "my-app", "python", "main.py"]
As a quick fix, try this:
from __future__ import print_function
import sys

# some code
print("App started", file=sys.stderr)
This works for me when I encounter the same problem. But, to be honest, I don't know exactly why it happens; the likely reason is that stderr is unbuffered while stdout is not (see the buffering rules above).
I had to use PYTHONUNBUFFERED=1 in my docker-compose.yml file to see the output from django runserver.
If you aren't using docker-compose, just plain Docker, you can add this to the Dockerfile hosting a Flask app:
ARG FLASK_ENV="production"
ENV FLASK_ENV="${FLASK_ENV}" \
PYTHONUNBUFFERED="true"
CMD [ "flask", "run" ]
When using python manage.py runserver for a Django application, adding the environment variable PYTHONUNBUFFERED=1 solved my problem. print('helloworld', flush=True) also works for me.
However, python -u doesn't work for me.
Usually, we redirect the output to a specific file (by mounting a volume from the host and writing to that file).
Adding a TTY using -t is also fine; the output then shows up in docker logs.
Even with large log outputs, I did not have any issue with the buffer holding everything back instead of putting it in Docker's logs.
Related
My Objective: I want to be able to restart a container based on the official Python Image using some command inside the container.
My system: I have my own Docker image based on the official Python image, which looks like this:
FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]
As you can see, the image does not execute a single Python file; it executes a script called start.sh, which looks like this:
#!/bin/bash
echo "Starting"
echo "Env: $ENTORNO"
exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &
All of this works perfectly, but I want the entire container based on this image to be restarted if, for example, script 3 fails.
My approach: I had two ideas about this problem. First, try to execute a reboot command in the python3 script, something like this:
from subprocess import call
[...]
call(["reboot"])
This does not work inside the Python Debian image because of the error:
reboot: command not found
The other approach was to mount the docker.sock inside the container, but the error this time is:
root#MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission denied
I don't know if I'm going about these two approaches the right way, but any help would be very much appreciated.
Update
After thinking about it, I realised you could send a signal to PID 1 (your entrypoint), trap it, and use a handler to exit with an appropriate code so that Docker will reschedule it.
Here's an MRE:
Dockerfile
FROM python:3.9
WORKDIR /app
COPY ./ /app
ENTRYPOINT ["./start.sh"]
start.sh
#!/usr/bin/env bash
python script.py &
# This traps user defined signal and kills the last command
# (`tail -f /dev/null`) before exiting with code 1.
trap 'kill ${!}; echo "Killed by backgrounded process"; exit 1' USR1
# Launches `tail` in the background and sets this program to wait
# for it to finish, so that it does not block execution
tail -f /dev/null & wait $!
script.py
import os
import signal
# Process 1 will be your entrypoint if you declared it in `exec-form`*
print("Sending signal to stop container")
os.kill(1, signal.SIGUSR1)
*exec form
Testing it
> docker build . -t test
> docker run test
Sending signal to stop container
Killed by backgrounded process
> docker inspect $(docker container ls -n 1 -q) --format='{{.State.ExitCode}}'
1
Original post
I think the safest bet would be to instruct Docker to restart your container when there's some failure. Then you'd only have to exit your program with a non-zero code (e.g. run exit 1 from your start.sh) and Docker will restart it from scratch.
Option 1: docker run --restart
Related documentation
docker run --restart on-failure <image>
Option 2: Using docker-compose
Version 3
In your docker-compose.yml you can set the restart_policy directive (nested under the service's deploy key) for the service you're interested in restarting, e.g.:
version: "3"
services:
  app:
    ...
    deploy:
      restart_policy:
        condition: on-failure
    ...
Version 2
Before version 3, the same policy could be applied with the restart directive, which allows for less configuration.
version: "2"
services:
app:
...
restart: "on-failure"
...
Is there any reason why you are running three processes in the same container? As per microservice architecture basics, only one process should run in a container, so you should run three containers for the three scripts. All three scripts should have the logic that if one of the three containers is not reachable, then it should get killed.
Well, in the end the solution was much simpler than I expected.
I started from the approach of mounting the Docker socket inside the container (I know this practice is not recommended, but in my case I know it does not pose security problems), using this in docker-compose:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Then it was as simple as using the Docker library for Python, which provides a complete SDK over that socket and allowed me to restart the container from inside the Python script in an ultra-simple way.
import docker
[...]
docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart()
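For reference, a roughly equivalent sketch using docker.from_env(), which also picks up the mounted socket; the container name and error handling here are illustrative, not part of the original solution:
import docker
import docker.errors

# from_env() uses the mounted /var/run/docker.sock by default.
client = docker.from_env()

try:
    client.containers.get("container_name").restart()
except docker.errors.NotFound:
    print("container not found", flush=True)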
I have a flask app that has one route and nothing complex going on, running in a docker container. I cannot for the life of me get print statements to show up in the logs (docker-compose logs -f <containername>). So far, I have tried various answers that supposedly have fixed this problem for others including:
Calling print("test", flush=True)
Setting PYTHONUNBUFFERED=1 and verifying it is set in the actual container with echo
Setting PYTHONUNBUFFERED=0
Running python with the -u flag
Using the logging module (logger.warning, logger.info, etc)
So far nothing has worked. The Flask app is starting perfectly fine, but no output from my print statements is shown. I have sanity-checked that I'm editing the correct file by adding random syntax errors and watching the app brick itself. I'm using Python 3.8 and docker-compose 2.
Try this:
import sys
print('It is working', file=sys.stderr)
I found this question while looking for answers to a similar problem. I was running a Flask app in a conda environment in a container and wasn't getting any log output even though the Flask app itself was working fine. I added the following lines to my Dockerfile and it started logging as expected:
ENV PYTHONUNBUFFERED=1
RUN echo "source activate my_env" > ~/.bashrc
ENV PATH /opt/conda/envs/my_env/bin:$PATH
CMD ["python", "api.py"]
You can see logs with either docker-compose or docker.
With docker-compose you have to use the SERVICE name.
Note: you used the container name, but you have to use the service name:
NOT: $ docker-compose logs -f <containername>
USE: $ docker-compose logs -f <SERVICE_NAME>
With docker you have to use the container name or container ID:
docker logs -f CONTAINER_ID | CONTAINER_NAME
I have a Python-based ROS2 node running inside a Docker container and I am trying to handle the graceful shutdown of the node by capturing the SIGTERM/SIGINT signals and/or by catching the KeyboardInterrupt exception.
The problem is when I run the node in a container using docker-compose. I cannot seem to catch the "moment" when the container is being stopped/killed. I've explicitly added the STOPSIGNAL in the Dockerfile and the stop_signal in the docker-compose file.
Here is a sample of the node code:
import signal
import sys
import rclpy

def stop_node(*args):
    print("Stopping node..")
    rclpy.shutdown()
    return True

def main():
    rclpy.init(args=sys.argv)
    print("Creating node..")
    node = rclpy.create_node("mynode")
    print("Running node..")
    while rclpy.ok():
        rclpy.spin_once(node)

if __name__ == '__main__':
    try:
        signal.signal(signal.SIGINT, stop_node)
        signal.signal(signal.SIGTERM, stop_node)
        main()
    except:
        stop_node()
Here is a sample Dockerfile to re-create the image:
FROM osrf/ros2:nightly
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
RUN apt-get update && \
apt-get install -y vim
WORKDIR /nodes
COPY mynode.py .
ADD run-node.sh /run-node.sh
RUN chmod +x /run-node.sh
STOPSIGNAL SIGTERM
Here is the sample docker-compose.yml:
version: '3'
services:
  mynode:
    container_name: mynode-container
    image: mynode
    entrypoint: /bin/bash -c "/run-node.sh"
    privileged: true
    stdin_open: false
    tty: true
    stop_signal: SIGTERM
Here is the run-node.sh script:
source /opt/ros/$ROS_DISTRO/setup.bash
python3 /nodes/mynode.py
When I manually run the node inside the container (using python3 mynode.py or by /run-node.sh) or when I do docker run -it mynode /bin/bash -c "/run-node.sh", I get the "Stopping node.." message. But when I do docker-compose up, I never see that message when I stop the container, by Ctrl+C or by docker-compose down.
$ docker-compose up
Creating network "ros-node_default" with the default driver
Creating mynode-container ... done
Attaching to mynode-container
mynode-container | Creating node..
mynode-container | Running node..
^CGracefully stopping... (press Ctrl+C again to force)
Stopping mynode-container ... done
$
I've tried:
moving the calls to signal.signal
using atexit instead of signal
using docker stop and docker kill --signal
I've also checked this "Python inside docker container, gracefully stop" question, but there's no clear solution there, and I'm not sure if using ROS/rclpy makes my setup different (also, my host machine is Ubuntu 18.04, while that user was on Windows).
Is it possible to catch the stopping of the container in my stop_node method?
When your docker-compose.yml file says:
entrypoint: /bin/bash -c "/run-node.sh"
Since that's a bare string, Docker wraps it in a /bin/sh -c wrapper. So your container's main process is something like
/bin/sh -c '/bin/bash -c "/run-node.sh"'
In turn, the bash script stays running. It launches a Python script, and stays running as its parent until that script exits. (The two levels of sh -c wrappers may or may not stay running.)
The important part here is that this wrapper shell, not your script, is the main container process that receives signals, and (it turns out) won't receive SIGTERM unless it's explicitly coded to.
The most important restructuring to do here is to have your wrapper script exec the Python script. That causes it to replace the wrapper, so it becomes the main process and receives signals. If nothing else, changing the last line to
exec python3 /nodes/mynode.py
will likely help.
I would go a little further here, make sure as much of this code as possible is built into your Docker image, and try to minimize the number of explicit shell wrappers. "Do some initialization, then exec something" is an extremely common Docker pattern, and you can write this script and make it your image's entrypoint:
#!/bin/sh
# Do the setup
# ("." is the same as "source", but standard)
. "/opt/ros/$ROS_DISTRO/setup.bash"
# Run the main CMD
exec "$#"
Similarly, your main script should start with a "shebang" line like
#!/usr/bin/env python3
import ...
Your Dockerfile already contains the setup to be able to run the wrapper directly; you may need a similar RUN chmod line for the main script. Then you can add
ENTRYPOINT ["/run-node.sh"]
CMD ["/nodes/my-node.py"]
Since both scripts are executable and have the "shebang" lines, you can run them directly. Using the JSON syntax keeps Docker from adding an additional shell wrapper. Since your entrypoint script will now run whatever the command is, it's easy to change that separately. For example, if you want an interactive shell that has already done the environment-variable setup, to debug your container startup, you can override just the command part:
docker run --rm -it mynode sh
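To illustrate why the exec matters: once the Python process is PID 1, an ordinary signal handler fires on docker stop. A stripped-down sketch without the ROS parts (the handler and loop here are illustrative, not the asker's actual node):
import signal
import sys
import time

def stop(signum, frame):
    # Runs when `docker stop` delivers SIGTERM to the container's main process.
    print("Stopping node..", flush=True)
    sys.exit(0)

signal.signal(signal.SIGTERM, stop)
signal.signal(signal.SIGINT, stop)

print("Running node..", flush=True)
while True:
    time.sleep(1)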
I see that I'm not the first one to ask this question, but there was no clear answer to it:
How do you use pdb with docker-compose in Python development?
When you ask uncle Google about django docker you get awesome docker-compose examples and tutorials, and I have an environment working - I can run docker-compose up and I have a neat developer environment - but PDB is not working (which is very sad).
I can settle for running docker-compose run my-awesome-app python app.py 0.0.0.0:8000, but then I can't access my application over http://127.0.0.1:8000 from the host (I can with docker-compose up), and it seems that each time I use run, new containers are made, like dir_app_13 and dir_db_4, which I don't desire at all.
People of good will please aid me.
PS
I'm using pdb++ for this example and a basic docker-compose.yml from this django example. Also, I experimented, but nothing seems to help. And I'm using docker-compose 1.3.0rc3, as it has support for pointing to a Dockerfile.
Use the following steps to attach pdb to any Python script.
Step 1. Add the following in your yml file
stdin_open: true
tty: true
This will enable interactive mode and attach stdin. It is the equivalent of -it mode.
Step 2.
docker attach <generated_containerid>
You'll now get the pdb shell
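For completeness, the breakpoint itself goes in the application code; a minimal placeholder sketch (handler is just an example name, not from the original question):
# Drop a breakpoint wherever you want execution to stop; once you
# attach to the container, the pdb prompt appears at this line.
def handler(value):
    import pdb; pdb.set_trace()  # or breakpoint() on Python 3.7+
    return value * 2

if __name__ == '__main__':
    print(handler(21))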
Try running your web container with the --service-ports option: docker-compose run --service-ports web
If, after adding
stdin_open: true
tty: true
you start to get issues similar to this:
fd = self._input_fileno()
if fd is not None and fd in ready:
> return ord(os.read(fd, 1))
E TypeError: ord() expected a character, but string of length 0 found
you can try adding ENV LC_ALL en_US.UTF-8 at the top of your Dockerfile:
FROM python:3.8.2-slim-buster as build_base
ENV LC_ALL en_US.UTF-8
In my experience, the docker-compose up command does not provide an interactive shell; it just starts printing STDOUT to a default read-only shell.
Or, if you have specified and mapped a logs directory, docker-compose up will print nothing to the attached shell but will send the output to your mapped logs. So you have to attach to the container separately once it is running.
When you do docker-compose up, run it in detached mode via -d and connect to the container via
docker exec -it your_container_name bash