I feel there is a subtle difference between the --tty and --interactive switches of the docker run command that I don't grasp:
--interactive, -i: Keep STDIN open even if not attached
--tty , -t: Allocate a pseudo-TTY
So I decided to run some tests.
First I created a basic Python script, which continuously prints a string.
Then I created a basic docker image, which will run this script when a container is started.
my_script.py
import time

while True:
    time.sleep(1)
    print('still running...')
Dockerfile
FROM python:3.8.1-buster
COPY my_script.py /
CMD [ "python3", "/my_script.py"]
Built using command:
docker build --tag pytest .
Test 1
I run docker run --name pytest1 -i pytest to test the interactive behaviour of the container.
Nothing is printed to the console, but when I press Control+C the python script is interrupted and the container stops running.
This confirms my thinking that stdin was open on the container and my keyboard input entered the container.
Test 2
I run docker run --name pytest2 -t pytest to test the pseudo-tty behaviour of the container. It repeatedly prints still running... to the console, and when I press Control+C the python script is interrupted and the container stops running.
Test 3
I run docker run --name pytest3 -it pytest to test the combined behaviour. The behaviour is the same as in Test 2.
Questions
What are the nuances I'm missing here?
Why would one use the combined -it switches, as you often see, if there is no benefit to adding the -i switch?
Does the --tty switch just keep both stdin and stdout open?
The -t option is needed if you want to interact with a shell like /bin/sh, for instance. The shell works by controlling a tty: no tty available, no shell.
We use -i in combination with -t to be able to write commands to the shell we opened.
A few tests you can reproduce to understand:
docker run alpine /bin/sh: the container exits; the shell needs an open stdin to wait on.
docker run -i alpine /bin/sh: the container stays up, but the shell won't start; we cannot type commands.
docker run -t alpine /bin/sh: the shell starts, but we are stuck; the keys we press are not interpreted.
docker run -it alpine /bin/sh: our shell is working.
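To see what -i does on its own, outside of a shell, here is a small piped-input experiment you could try (a sketch, assuming the alpine image):
echo "hello from the host" | docker run --rm -i alpine cat
# prints "hello from the host": -i keeps stdin open, so cat receives the piped data
echo "hello from the host" | docker run --rm alpine cat
# prints nothing: without -i the container's stdin is closed, so cat sees EOF immediately
echo "hello from the host" | docker run --rm -it alpine cat
# fails with "the input device is not a TTY": -t asks for a pseudo-terminal,
# which only makes sense when the client's stdin is itself a terminal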
Related
My existing shell script is trying to:
Start a Docker container,
Change the permissions of the Python file in it,
Run that while-looped (streaming) Python job in Docker (not waiting for it),
Run a Python job outside (on the local machine) which will feed data to the Docker Python job that is waiting for it.
#!/bin/bash
clear
docker run -d -v /home/ubuntu/Downloads/docker_work/test_py_app/app:/workspace/app -p 8881:8888 -p 5002:5002 --gpus all --name pytorch nvcr.io/nvidia/pytorch:server-v1.0 tail -f /dev/null
sudo docker exec -it pytorch chmod 777 /workspace/server/server.py
sudo docker exec -it pytorch python /workspace/server/server.py
python /home/ubuntu/PycharmProjects/test/pipeline/client.py
exit
If the two Python programs are run from two different shells, it works perfectly fine, but the problem with the existing script is that the shell gets stuck at the server Python file run inside Docker. How can I fire it in an async fashion, so that immediately after triggering the first Python job it goes on and starts the local machine's Python job?
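One way to approach this (a sketch, not a verified fix for this exact setup) is to drop -it from the server step and use docker exec -d, which detaches immediately so the script moves straight on to the local client:
#!/bin/bash
# Sketch: same container name and paths as in the script above.
sudo docker exec pytorch chmod 777 /workspace/server/server.py
# -d (detached) returns immediately instead of waiting for server.py to exit
sudo docker exec -d pytorch python /workspace/server/server.py
# the local client starts right away and feeds data to the waiting server
python /home/ubuntu/PycharmProjects/test/pipeline/client.py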
I am running a Docker container with
docker run --name client_container -v ~/client/vol:/vol --network host -it --entrypoint '/bin/bash' --privileged client_image:latest -c "bash /execute/start_client.sh && /bin/bash"
I have a service on the host machine and I would like it to be able to print something to the interactive bash terminal at an arbitrary time. Is this possible?
No, you can't do this.
In general it's difficult at best to write things to other terminal windows; Docker adding an additional layer of isolation makes this pretty much impossible. The docker run -t option means there probably is a special device file inside the container that could reach that terminal session, but since the host and container filesystems are isolated from each other, a host process can't access it at all.
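As an illustration (a sketch, assuming the container from the question and that standard tools are available in the image), you can see that device file by listing the main process's file descriptors from inside the container:
docker exec client_container ls -l /proc/1/fd
# with -t, fds 0/1/2 typically point at a pseudo-terminal such as /dev/pts/0,
# which exists only inside the container's namespaces, not on the host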
I have a few .py files that I want to run in a Docker image.
But they need the scrapy-splash Docker image to function well. How will I be able to run those .py files in a Docker container or image while also running scrapy-splash? I am planning to run it on a VPS server.
You have two choices:
Run the Python scripts in the entrypoint, after Splash. To run Splash in the background and then execute your scripts, you'll need an entrypoint like this:
ENTRYPOINT ["/bin/bash", "-c", "python3 splash <SPLASH OPTIONS> & python3 your_script.py && python3 your_second_script.py"]
This way the container will end after running the scripts.
Run the splash container in detached mode and then execute the scripts with docker exec.
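A rough sketch of that second option (the scrapinghub/splash image is the usual choice; the container name and script paths below are placeholders):
# start Splash detached; 8050 is its default HTTP port
docker run -d --name splash -p 8050:8050 scrapinghub/splash
# then execute the scripts with docker exec in whichever container holds them
docker exec my_scraper_container python3 your_script.py
docker exec my_scraper_container python3 your_second_script.py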
I am new to Docker and trying to run multiple Python processes in Docker.
Though it's not recommended, it should work, as suggested here: https://docs.docker.com/engine/admin/multi-service_container/
My Dockerfile:
FROM custom_image
MAINTAINER Shubham
RUN apt-get update -y
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]
start.sh:
nohup python flask-app.py &
nohup python sink.py &
nohup python faceConsumer.py &
nohup python classifierConsumer.py &
nohup python demo.py &
echo lastLine
Run command:
docker run --runtime=nvidia -p 5000:5000 out_image
The same shell script works when I go to a terminal and run it.
I tried without nohup; it didn't work.
I also tried starting the other Python processes with Python's subprocess module; that didn't work either.
Is it possible to run multiple processes without supervisord or docker-compose?
Update: I am not getting any error; only "lastLine" is printed and the Docker container exits.
The Docker docs have examples of how to do this. If you're using Python, then supervisord is a good option.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
The advantage of this over running a bunch of background processes is you get better job control and processes that exit prematurely will be restarted.
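The supervisord.conf copied in above is not shown; a minimal sketch of what it might contain (paths are assumptions based on the COPY destinations) is:
[supervisord]
; keep supervisord in the foreground so the container does not exit
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:my_first_process]
; path assumed from the COPY destination above
command=/my_first_process
autorestart=true

[program:my_second_process]
command=/my_second_process
autorestart=true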
Your problem is putting everything in the background. Your container starts, executes all the commands, and then exits when the CMD process finishes, even though the background processes are still running; Docker does not know about them.
You could try running everything else in the background but then running
python demo.py
as it is. That keeps the main process alive, assuming demo.py does not exit.
You can also run the container in detached mode, or redirect nohup's output explicitly to something like log/nohup.out; because Docker runs the command attached to a socket rather than a terminal, nohup's default redirection does not happen.
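A sketch of start.sh along those lines (same scripts as in the question, with demo.py assumed to be the long-running one):
# run the helpers in the background
python flask-app.py &
python sink.py &
python faceConsumer.py &
python classifierConsumer.py &
# keep the last process in the foreground so the CMD does not finish
# and the container stays alive
python demo.py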
I am learning how to develop Django application in docker with this official tutorial: https://docs.docker.com/compose/django/
I have successfully run through the tutorial, and
docker-compose run web django-admin.py startproject composeexample . creates the image
docker-compose up runs the application
The question is:
I often use python manage.py shell to run Django in shell mode, but I do not know how to achieve that with docker.
I use this command (when run with compose)
docker-compose run <service_name> python manage.py shell
where <service_name> is the name of the Docker service (in docker-compose.yml).
So, in your case the command will be
docker-compose run web python manage.py shell
https://docs.docker.com/compose/reference/run/
When run with a Dockerfile
docker exec -it <container_id> python manage.py shell
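For context, web is the service name defined in docker-compose.yml; a minimal sketch along the lines of the tutorial (paths and ports are assumptions) would be:
version: "3"
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"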
Run docker exec -it --user desired_user your_container bash. Running this command has a similar effect to running ssh to a remote server: after you run it you will be inside the container's bash terminal and will be able to run all of Django's manage.py commands.
Inside your container just run python manage.py shell
You can use docker exec on the running container to run commands like the one below.
docker exec -it container_id python manage.py shell
If you're using docker-compose, you shouldn't run additional containers when it's not needed, as each run will start a new container and you'll lose a lot of disk space. You can end up running multiple containers when you don't have to. Basically, it's better to:
Start your services once with docker-compose up -d
Execute (instead of running) your commands:
docker-compose exec web ./manage.py shell
Or, if you don't want to start all services (because, for example, you want to run only one command in Django), you should pass the --rm flag to docker-compose run, so the container will be removed right after the command finishes.
docker-compose run --rm web ./manage.py shell
In this case, when you exit the shell, the container created with the run command is destroyed, so you save a lot of disk space.
If you're using Docker Compose (docker compose up) to spin up your applications, then after you run that command you can open the interactive shell in the container with the following command:
docker compose exec <container id or name of your Django app> python3 <path to your manage.py file, for example, src/manage.py> shell
Keep in mind the above uses Python 3 via the python3 executable.