I see that I'm not the first one to ask this question, but there was no clear answer to it:
How do you use pdb with docker-compose in Python development?
When you ask uncle Google about django docker you get awesome docker-compose examples and tutorials, and I have a working environment: I can run docker-compose up and I have a neat developer environment, but pdb is not working (which is very sad).
I can settle for running docker-compose run my-awesome-app python app.py 0.0.0.0:8000, but then I can't access my application over http://127.0.0.1:8000 from the host (I can with docker-compose up), and it seems that each time I use run, new containers are made, like dir_app_13 and dir_db_4, which I don't desire at all.
People of good will, please aid me.
PS
I'm using pdb++ for this example and a basic docker-compose.yml from this django example. I also experimented, but nothing seems to help. And I'm using docker-compose 1.3.0rc3, as it supports pointing to a specific Dockerfile.
Use the following steps to attach pdb to any Python script.
Step 1. Add the following to your compose yml file:
stdin_open: true
tty: true
This enables interactive mode and attaches stdin; it is the equivalent of docker run's -it flags.
Step 2. Attach to the running container:
docker attach <generated_containerid>
You'll now get the pdb shell when execution hits a breakpoint.
Try running your web container with the --service-ports option: docker-compose run --service-ports web. By default docker-compose run does not publish the ports defined for the service, which is why the app isn't reachable from the host; this flag restores the compose file's port mappings.
If, after adding
stdin_open: true
tty: true
you start to get errors similar to this:
fd = self._input_fileno()
if fd is not None and fd in ready:
> return ord(os.read(fd, 1))
E TypeError: ord() expected a character, but string of length 0 found
You can try adding ENV LC_ALL en_US.UTF-8 at the top of your Dockerfile:
FROM python:3.8.2-slim-buster as build_base
ENV LC_ALL en_US.UTF-8
In my experience, the docker-compose up command does not provide an interactive shell; it just streams STDOUT to a read-only terminal.
And if you have specified and mapped a logs directory, docker-compose up will print nothing to the attached shell and send output to your mapped logs instead. So you have to attach to the container separately once it is running.
When you run docker-compose up, start it in detached mode via -d and connect to the container via:
docker exec -it your_container_name bash
Related
My objective: I want to be able to restart a container based on the official Python image using some command inside the container.
My system: I have my own Docker image based on the official Python image, which looks like this:
FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]
As you can see, the image does not run a single Python file; instead it executes a script called start.sh, which looks like this:
#!/bin/bash
echo "Starting"
echo "Env: $ENTORNO"
exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &
All of this works perfectly, but I want the entire container based on this image to be restarted if, for example, script 3 fails.
My approach: I had two ideas for this problem. First, try to execute a reboot command from the python3 script, something like this:
from subprocess import call
[...]
call(["reboot"])
This does not work inside the Python Debian image, because of this error:
reboot: command not found
The other approach was to mount docker.sock inside the container, but the error this time is:
root@MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission denied
I don't know if I'm going about these two approaches the right way, but any help would be very appreciated.
Update
After thinking about it, I realised you could send a signal to PID 1 (your entrypoint), trap it, and use a handler to exit with an appropriate code so that Docker will reschedule the container.
Here's an MRE:
Dockerfile
FROM python:3.9
WORKDIR /app
COPY ./ /app
ENTRYPOINT ["./start.sh"]
start.sh
#!/usr/bin/env bash
python script.py &
# This traps user defined signal and kills the last command
# (`tail -f /dev/null`) before exiting with code 1.
trap 'kill ${!}; echo "Killed by backgrounded process"; exit 1' USR1
# Launches `tail` in the background and sets this program to wait
# for it to finish, so that it does not block execution
tail -f /dev/null & wait $!
script.py
import os
import signal
# Process 1 will be your entrypoint if you declared it in `exec-form`*
print("Sending signal to stop container")
os.kill(1, signal.SIGUSR1)
*exec form
Testing it
> docker build . -t test
> docker run test
Sending signal to stop container
Killed by backgrounded process
> docker inspect $(docker container ls -n 1 -q) --format='{{.State.ExitCode}}'
1
Original post
I think the safest bet would be to instruct docker to restart your container when there's some failure. Then you'd only have to exit your program with a non-zero code (i.e: run exit 1 from your start.sh) and docker will restart it from scratch.
Option 1: docker run --restart
Related documentation
docker run --restart on-failure <image>
Option 2: Using docker-compose
Version 3
In your docker-compose.yml you can set the restart_policy directive for the service you're interested in restarting. Note that in version 3 files restart_policy must be nested under the deploy key (it is honored by docker stack deploy / swarm mode; for plain docker-compose, the version 2 restart directive below still works), e.g.:
version: "3"
services:
  app:
    ...
    deploy:
      restart_policy:
        condition: on-failure
    ...
Version 2
Before version 3, the same policy could be applied with the restart directive, which allows for less configuration.
version: "2"
services:
app:
...
restart: "on-failure"
...
Is there any reason why you are running three processes in the same container? Per microservice architecture basics, only one process should run in a container, so you should run three containers for the three scripts. All three scripts should then have the logic that if one of the three containers is not reachable, it should get killed.
Well, in the end the solution was much simpler than I expected.
I started from the approach of mounting the Docker socket inside the container (I know this practice is not recommended, but in my case I know it does not pose security problems), using the following in docker-compose:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Then it was as simple as using the Docker SDK for Python, which provides a complete client over that socket and allowed me to restart the container from inside the Python script in an ultra-simple way.
import docker
[...]
docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart()
I have a Python (2.7) app which is started in my Dockerfile:
CMD ["python","main.py"]
main.py prints some strings when it is started and goes into a loop afterwards:
print "App started"
while True:
time.sleep(1)
As long as I start the container with the -it flag, everything works as expected:
$ docker run --name=myapp -it myappimage
> App started
And I can see the same output via logs later:
$ docker logs myapp
> App started
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
But the container still seems to run:
$ docker ps
Container Status ...
myapp up 4 minutes ...
Attach does not display anything either:
$ docker attach --sig-proxy=false myapp
(working, no output)
Any ideas what's going wrong? Does print behave differently when run in the background?
Docker version:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for future reference:
Using unbuffered output with
CMD ["python","-u","main.py"]
instead of
CMD ["python","main.py"]
solves the problem; you can now see the output (both stderr and stdout) via
docker logs myapp
Why -u works (ref):
- print output is indeed buffered, and docker logs will eventually show it, once enough of it has piled up
- executing the same script with python -u gives instant output, as said above
- import logging + logging.warning("text") gives the expected result even without -u
What python -u means (ref):
> python --help | grep -- -u
-u : force the stdout and stderr streams to be unbuffered;
In my case, running Python with -u didn't change anything. What did the trick, however, was to set PYTHONUNBUFFERED=1 as an environment variable:
docker run --name=myapp -e PYTHONUNBUFFERED=1 -d myappimage
[Edit]: Updated PYTHONUNBUFFERED=0 to PYTHONUNBUFFERED=1 after Lars's comment. This doesn't change the behavior and adds clarity.
If you want to add your print output to your Flask output when running docker-compose up, add the following to your docker compose file.
web:
  environment:
    - PYTHONUNBUFFERED=1
https://docs.docker.com/compose/environment-variables/
See this article, which explains in detail the reason for this behavior:
There are typically three modes for buffering:
If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn’t flushed until it fills up.
If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there’s typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like “buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first”.
And GNU libc (glibc) uses the following rules for buffering:
Stream              Type    Behavior
stdin               input   line-buffered
stdout (TTY)        output  line-buffered
stdout (not a TTY)  output  fully-buffered
stderr              output  unbuffered
So, if you use -t, which per the Docker documentation allocates a pseudo-TTY, stdout becomes line-buffered, and docker run --name=myapp -it myappimage shows the one-line output.
If you use just -d, no TTY is allocated, so stdout is fully-buffered, and the single line App started cannot fill the buffer enough to flush it.
The fix, then, is to use -dt so stdout is line-buffered, or to add -u so Python flushes the buffer itself.
Since I haven't seen this answer yet:
You can also flush stdout after you print to it:
import time

if __name__ == '__main__':
    while True:
        print('cleaner is up', flush=True)
        time.sleep(5)
Try adding these two environment variables to your setup: PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8.
You can see logs from a detached container if you change print to logging (the logging module writes to stderr, which is unbuffered).
main.py:
import time
import logging

print "App started"
logging.warning("Log app started")

while True:
    time.sleep(1)
Dockerfile:
FROM python:2.7-stretch
ADD . /app
WORKDIR /app
CMD ["python","main.py"]
If anybody is running a Python application with conda, you should add --no-capture-output to the command, since conda run buffers stdout by default.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "my-app", "python", "main.py"]
As a quick fix, try this:
from __future__ import print_function
import sys

# some code
print("App started", file=sys.stderr)
This works for me when I encounter the same problem. To be honest, I don't know why it happens, but presumably it is because stderr is unbuffered while stdout is not.
I had to use PYTHONUNBUFFERED=1 in my docker-compose.yml file to see the output from django runserver.
If you aren't using docker-compose, just plain docker, you can add this to the Dockerfile hosting your Flask app:
ARG FLASK_ENV="production"
ENV FLASK_ENV="${FLASK_ENV}" \
PYTHONUNBUFFERED="true"
CMD [ "flask", "run" ]
When using python manage.py runserver for a Django application, adding the environment variable PYTHONUNBUFFERED=1 solved my problem. print('helloworld', flush=True) also works for me.
However, python -u doesn't work for me.
Usually, we redirect output to a specific file (by mounting a volume from the host and writing to that file).
Adding a TTY via -t is also fine; you then pick the output up in docker logs.
Even with large log outputs, I had no issue with the buffer holding everything back from the Docker log.
I have a flask app that has one route and nothing complex going on, running in a docker container. I cannot for the life of me get print statements to show up in the logs (docker-compose logs -f <containername>). So far, I have tried various answers that supposedly have fixed this problem for others including:
Calling print("test", flush=True)
Setting PYTHONUNBUFFERED=1 and verifying it is set in the actual container with echo
Setting PYTHONUNBUFFERED=0
Running python with the -u flag
Using the logging module (logger.warning, logger.info, etc)
So far nothing has worked. The flask app is starting perfectly fine, but no output from my print statements is shown. I have sanity-checked that I'm editing the correct file by adding random syntax errors and watching the app brick itself. I'm using Python 3.8 and docker-compose 2.
Try this:
import sys
print('It is working', file=sys.stderr)
I found this question while looking for answers to a similar problem. I was running a flask app in a conda environment in a container and wasn't getting any log output, even though the flask app itself was working fine. I added the following lines to my Dockerfile and it started logging as expected:
ENV PYTHONUNBUFFERED=1
RUN echo "source activate my_env" > ~/.bashrc
ENV PATH /opt/conda/envs/my_env/bin:$PATH
CMD ["python", "api.py"]
You can see logs with docker-compose or docker.
With docker-compose you have to pass the SERVICE name.
Note: you were adding the container name, but you have to use the service name:
NOT: docker-compose logs -f <containername>
USE: docker-compose logs -f <SERVICE_NAME>
With docker you pass the container name or container ID:
docker logs -f CONTAINER_ID | CONTAINER_NAME
I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running
docker run -p 4000:80 friendlyhello
I cannot find a way to see why it did not work:
1) docker ps -a says the container exited
2) docker logs <container name> returns no log information
3) I can attach a shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log)
4) running in foreground and detached mode with -t and -d did not help
What else could I do?
Note: a docker exec on an exited (stopped) container should not be possible (see moby issue 30361)
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed not.
You should see
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal the entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
I had this exact same issue... and it drove me nuts. I am using Docker Toolbox, as I am running Windows 7. I ran docker events & prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than that the container starts and exits pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page with the suggestion to remove the docker machine and re-create it. I know that might sound like a sledgehammer of a solution, but the examples seemed to show that the re-create downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/, and now I get:
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled