I have a Python (2.7) app which is started in my Dockerfile:
CMD ["python","main.py"]
main.py prints some strings when it is started and goes into a loop afterwards:
print "App started"
while True:
time.sleep(1)
As long as I start the container with the -it flag, everything works as expected:
$ docker run --name=myapp -it myappimage
> App started
And I can see the same output via logs later:
$ docker logs myapp
> App started
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
But the container still seems to be running:
$ docker ps
Container   Status         ...
myapp       Up 4 minutes   ...
Attach does not display anything either:
$ docker attach --sig-proxy=false myapp
(working, no output)
Any ideas what's going wrong? Does print behave differently when run in the background?
Docker version:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for future reference:
Using unbuffered output with
CMD ["python","-u","main.py"]
instead of
CMD ["python","main.py"]
solves the problem; you can now see the output (both stderr and stdout) via
docker logs myapp
Why -u works (ref):
- print is indeed buffered, and docker logs will eventually give you that output, but only after enough of it has piled up
- executing the same script with python -u gives instant output, as said above
- import logging + logging.warning("text") gives the expected result even without -u
What python -u means (ref):
$ python --help | grep -- -u
-u : force the stdout and stderr streams to be unbuffered;
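A quick way to reproduce the difference outside Docker is to pipe the output through another process, since a pipe, like a detached container's stdout, is not a TTY (a sketch; the inline script stands in for main.py):
$ python -c 'import time; print "hi"; time.sleep(30)' | cat      # nothing appears until the script exits
$ python -u -c 'import time; print "hi"; time.sleep(30)' | cat   # "hi" appears immediately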
In my case, running Python with -u didn't change anything. What did the trick, however, was to set PYTHONUNBUFFERED=1 as environment variable:
docker run --name=myapp -e PYTHONUNBUFFERED=1 -d myappimage
[Edit]: Updated PYTHONUNBUFFERED=0 to PYTHONUNBUFFERED=1 after Lars's comment. This doesn't change the behavior and adds clarity.
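If you would rather bake the variable into the image than pass -e on every run, the equivalent Dockerfile line (a sketch, added alongside the CMD from the question) is:
ENV PYTHONUNBUFFERED=1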
If you want your print output to appear alongside your Flask output when running docker-compose up, add the following to your docker-compose file:
web:
  environment:
    - PYTHONUNBUFFERED=1
https://docs.docker.com/compose/environment-variables/
See this article, which explains the reason for the behavior in detail:
There are typically three modes for buffering:
If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn’t flushed until it fills up.
If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there’s typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like “buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first”.
And GNU libc (glibc) uses the following rules for buffering:
Stream              Type     Behavior
stdin               input    line-buffered
stdout (TTY)        output   line-buffered
stdout (not a TTY)  output   fully-buffered
stderr              output   unbuffered
So if you use -t, which per the Docker documentation allocates a pseudo-TTY, stdout becomes line-buffered, and docker run --name=myapp -it myappimage shows the one-line output.
If you use only -d, no TTY is allocated, so stdout is fully-buffered, and the single line App started is nowhere near enough to fill the buffer.
So the fix is either to use -dt to make stdout line-buffered, or to add -u so that Python flushes the buffer.
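To see for yourself which mode a container is in, here is a minimal sketch you can drop into main.py:
import sys

# Under `docker run -d` stdout is a pipe, not a TTY, so isatty() returns
# False and writes are fully-buffered; under `docker run -t` it returns True.
print("stdout is a TTY: %s" % sys.stdout.isatty())
sys.stdout.flush()  # flush explicitly so the line shows up either way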
Since I haven't seen this answer yet:
You can also flush stdout after you print to it:
import time

if __name__ == '__main__':
    while True:
        print('cleaner is up', flush=True)
        time.sleep(5)
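Note that the flush keyword was only added to print() in Python 3.3; on the Python 2.7 from the question you would flush the stream explicitly instead (a sketch):
import sys
import time

while True:
    print "cleaner is up"
    sys.stdout.flush()  # push the buffered line out immediately
    time.sleep(5)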
Try adding these two environment variables to your solution: PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8.
You can see logs from a detached container if you change print to logging.
main.py:
import time
import logging
print "App started"
logging.warning("Log app started")
while True:
time.sleep(1)
Dockerfile:
FROM python:2.7-stretch
ADD . /app
WORKDIR /app
CMD ["python","main.py"]
If anybody is running the Python application with conda, you should add --no-capture-output to the command, since conda run captures and buffers stdout by default.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "my-app", "python", "main.py"]
As a quick fix, try this:
from __future__ import print_function
import sys

# some code
print("App started", file=sys.stderr)
This works for me when I encounter the same problem. But, to be honest, I don't know why this happens (presumably because stderr is unbuffered, as described in the buffering rules above, while stdout is not).
I had to use PYTHONUNBUFFERED=1 in my docker-compose.yml file to see the output from django runserver.
If you aren't using docker-compose and just plain docker instead, you can add this to your Dockerfile that hosts a Flask app:
ARG FLASK_ENV="production"
ENV FLASK_ENV="${FLASK_ENV}" \
PYTHONUNBUFFERED="true"
CMD [ "flask", "run" ]
When using python manage.py runserver for a Django application, adding the environment variable PYTHONUNBUFFERED=1 solved my problem. print('helloworld', flush=True) also works for me.
However, python -u didn't work for me.
Usually, we redirect the output to a specific file (by mounting a volume from the host and writing to that file).
Adding a TTY via -t is also fine; you then pick the output up in docker logs.
Even with large log outputs, I did not hit any issue with the buffer holding everything back instead of putting it in Docker's log.
My situation is this: I'm developing some Docker containers. One of these containers is a Celery app that gets tasks from another app and processes them.
Since I do all my work in the containers, I need to debug inside the container, and I also need the app to reload when the code changes.
I can make both things work separately, using debugpy for debugging and watchmedo for reloading. My problem comes when trying to combine the two: debugging + reloading in Celery.
EXTRA INFO:
I already have a Flask app container where I can achieve this using only debugpy. I don't need watchmedo or inotify because Flask already comes with the --reload option. NICE! But this doesn't work with Celery, since its old --autoreload option was removed some time ago.
DEBUGGING:
To achieve debugging, I've put the following in my Dockerfile:
CMD ["python", "-m", "debugpy", "--wait-for-client", "--listen", "0.0.0.0:9999", "-m", "celery", "-A", "celery_main", "worker", "-l", "INFO", "-n", "worker", "--concurrency=1"]
That works fine, but there is no reload on code changes.
RELOADING:
To achieve reloading, I've put the following in my Dockerfile:
CMD ["watchmedo" "shell-command" "--patterns" "'*.py'" "--recursive" "--command='celery -A celery_main worker -l INFO -n worker --concurrency=1'"]
That also works fine for reloading, but I lose the debugging.
ATTEMPT: MIX
So I tried mixing the two, but it doesn't seem to work. I just get nothing:
CMD ["watchmedo" "shell-command" "--patterns" "'*.py'" "--recursive" "--command='python -m debugpy --wait-for-client --listen 0.0.0.0:5678 -m celery -A celery_main worker -l INFO -n worker --concurrency=1'"]
By the way, a problem I expect if this eventually works is that every time the code changes, the whole debugpy... command will be executed again, which means I'll have to re-attach the debugger in my IDE, which is VS Code.
Any idea how to solve these issues?
I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running
docker run -p 4000:80 friendlyhello
I cannot find a way to figure out why it did not work:
1) docker ps -a says the container exited
2) docker logs "container name" returns no log information
3) I can start the container with a shell instead:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log)
4) running in foreground (-t) and detached (-d) mode did not help
What else could I do?
Note: a docker exec on an exited (stopped) container should not be possible (see moby issue 30361)
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed not.
You should see
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal the entrypoint and cmd, as in this answer.
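For example, a one-liner sketch (.Config.Entrypoint and .Config.Cmd are standard fields in docker inspect output, and the command works against both images and containers):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' friendlyhello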
The normal behavior is the one described in this answer.
I had this exact same issue... and it drove me nuts. I am using Docker Toolbox, as I am running Windows 7. I ran docker events & prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than that the container starts and exits pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page suggesting I remove the Docker machine and re-create it. I know that might sound like a sledgehammer of a solution, but the examples seemed to show that re-creating downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/ and now I get;
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled
I see that I'm not the first one to ask this question, but there was no clear answer to it:
How do you use pdb with docker-compose in Python development?
When you ask uncle Google about django docker, you get awesome docker-compose examples and tutorials, and I have a working environment: I can run docker-compose up and I have a neat development environment, but pdb is not working (which is very sad).
I can settle for running docker-compose run my-awesome-app python app.py 0.0.0.0:8000, but then I can't access my application at http://127.0.0.1:8000 from the host (I can with docker-compose up), and it seems that each time I use run, new containers are made, like dir_app_13 and dir_db_4, which I don't desire at all.
People of good will please aid me.
PS
I'm using pdb++ for this example and a basic docker-compose.yml from this django example. I also experimented, but nothing seems to help. And I'm using docker-compose 1.3.0rc3, as it supports pointing at a specific Dockerfile.
Use the following steps to attach pdb to any Python script.
Step 1. Add the following to your yml file:
stdin_open: true
tty: true
This will enable interactive mode and attach stdin. It is the equivalent of -it mode.
Step 2.
docker attach <generated_containerid>
You'll now get the pdb shell
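For completeness, the breakpoint itself goes in your application code as usual (a minimal sketch; with the two compose keys above, the prompt shows up in the attached session):
import pdb

# Execution pauses here; the (Pdb) prompt appears in the session
# attached with `docker attach <generated_containerid>`.
pdb.set_trace()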
Try running your web container with the --service-ports option: docker-compose run --service-ports web
If, after adding
stdin_open: true
tty: true
you start to get issues similar to this:
fd = self._input_fileno()
if fd is not None and fd in ready:
> return ord(os.read(fd, 1))
E TypeError: ord() expected a character, but string of length 0 found
You can try adding ENV LC_ALL en_US.UTF-8 at the top of your Dockerfile:
FROM python:3.8.2-slim-buster as build_base
ENV LC_ALL en_US.UTF-8
In my experience, docker-compose up does not provide an interactive shell; it just starts printing STDOUT to a default read-only shell.
Also, if you have specified and mapped a logs directory, docker-compose up will print nothing to the attached shell and instead send output to your mapped logs, so you have to attach to the container separately once it is running.
When you run docker-compose up, do it in detached mode via -d and connect to the container via:
docker exec -it your_container_name bash