Printing to an interactive terminal inside a docker container from outside - python

I am running a Docker container with
docker run --name client_container -v ~/client/vol:/vol --network host -it --entrypoint '/bin/bash' --privileged client_image:latest -c "bash /execute/start_client.sh && /bin/bash"
I have a service on the host machine and I would like it to be able to print something to the interactive bash terminal at an arbitrary time. Is this possible?

No, you can't do this.
In general it's difficult at best to write things to other terminal windows, and Docker's additional layer of isolation makes it pretty much impossible. The docker run -t option means there is probably a special device file inside the container that refers to that terminal session, but since the host and container filesystems are isolated from each other, a host process can't access it at all.
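You can see the isolation for yourself. A rough sketch, assuming the container from the question is running under the name client_container and that its image includes coreutils:
# PID 1 inside the container is the bash started by the entrypoint; its stdout is the
# pseudo-TTY allocated by docker run -t (usually /dev/pts/0 inside the container)
docker exec client_container readlink /proc/1/fd/1
# that /dev/pts/0 path on the host belongs to the host's own devpts instance, so a host
# process writing there reaches a host terminal (if any), never the container's session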

Related

opening Jupyter Lab in docker container

I recently started using Docker and tried to create a container with JupyterLab, so it could run on localhost.
Since I have been using Anaconda before, it seems that localhost:8888 is already taken, so I tried to use another available port. docker run -p 8080:8080 <image_name> created a link to a web page with token authentication that gives me no chance to enter, and it still used port 8888. Is there another port to use so that both Anaconda and Docker work together without errors?
Have you tried this?
TL;DR:
Run docker as
docker run -it -p 8888:8888 image:version
Then, inside your container, initialize Jupyter with:
jupyter notebook --ip 0.0.0.0 --no-browser --allow-root
Now you are supposedly able to access the notebook through your desktop browser on http://localhost:8888
The -p option to docker run takes two ports. First is the port on the host that you want to connect to, and second is the port within the container that the service is running on.
Assuming jupyter is running on port 8888 inside the container, and you want to access it on 8080 on your localhost, the command you are looking for would be:
docker run -p 8080:8888 <image_name>
or to run it interactively and clean up after itself:
docker run -it --rm -p 8080:8888 <image_name>
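If the token prompt is what's blocking you, the tokenized URL that Jupyter prints at startup can usually be recovered from the container's logs. A quick sketch, where <container_name> is whatever docker ps shows for your container:
docker logs <container_name> 2>&1 | grep token
# copy the printed http://127.0.0.1:8888/?token=... URL and replace the port with the
# host side of your -p mapping (8080 in the example above)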

Difference between the docker run --tty and --interactive switches

I feel there is a subtle difference between the --tty and --interactive switches of the docker run command that I don't grasp:
--interactive, -i: Keep STDIN open even if not attached
--tty , -t: Allocate a pseudo-TTY
So I decided to run some tests.
First I created a basic Python script, which continuously prints a string.
Then I created a basic docker image, which will run this script when a container is started.
my_script.py
import time
while True:
    time.sleep(1)
    print('still running...')
Dockerfile
FROM python:3.8.1-buster
COPY my_script.py /
CMD [ "python3", "/my_script.py"]
Built using command:
docker build --tag pytest .
Test 1
I run docker run --name pytest1 -i pytest, to test the interactive behaviour of the container.
Nothing is printed to the console, but when I press Control+C the python script is interrupted and the container stops running.
This confirms my thinking that stdin was open on the container and my keyboard input entered the container.
Test 2
I run docker run --name pytest2 -t pytest, to test the pseudo-tty behaviour of the container. It repeatedly prints still running... to the console, ánd when I press Control+C the python script is interrupted and the container stops running.
Test 3
I run docker run --name pytest3 -it pytest, to test the combined behaviour. The behaviour is the same as in Test 2.
Questions
What are the nuances I'm missing here?
Why would one use the combined -it switches, as you often see, if there is no benefit to the -i switch?
Does the --tty switch just keep bóth stdin and stdout open?
The -t option is needed if you want to interact with a shell like /bin/sh, for instance. The shell works by controlling a tty: no tty available, no shell.
We use -i in combination with -t to be able to write commands to the shell we opened.
A few tests you can reproduce to understand:
docker run alpine /bin/sh: the container exits immediately. Without -i, stdin is closed, so the shell has nothing to wait for.
docker run -i alpine /bin/sh: the container stays up, but there is no prompt: without a tty the shell runs non-interactively, so it looks like nothing is happening even though it still reads stdin.
docker run -t alpine /bin/sh: a prompt appears, but we are stuck: the keys we press are not forwarded to the shell.
docker run -it alpine /bin/sh: yes, our shell is working.
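One more quick check of what -i by itself buys you: because -i keeps stdin open, you can pipe commands into the container even without a tty. A small sketch:
# -i forwards stdin, so the non-interactive shell reads and executes the piped command
echo 'echo hello from inside the container' | docker run -i --rm alpine /bin/sh
# the same pipe with -it typically fails with "the input device is not a TTY",
# because -t asks for a real terminal on stdin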

Django shell mode in docker

I am learning how to develop a Django application in Docker with this official tutorial: https://docs.docker.com/compose/django/
I have successfully run through the tutorial, and
docker-compose run web django-admin.py startproject composeexample . creates the image
docker-compose up runs the application
The question is:
I often use python manage.py shell to run Django in shell mode, but I do not know how to achieve that with docker.
I use this command (when running with Compose):
docker-compose run <service_name> python manage.py shell
where <service_name> is the name of the docker service (in docker-compose.yml).
So, in your case, the command will be
docker-compose run web python manage.py shell
https://docs.docker.com/compose/reference/run/
When running with a Dockerfile:
docker exec -it <container_id> python manage.py shell
Run docker exec -it --user desired_user your_container bash. Running this command has a similar effect to ssh-ing into a remote server: after you run it you will be inside the container's bash terminal and able to run all of Django's manage.py commands.
Inside your container just run python manage.py shell
You can use docker exec on the running container to run commands like the one below.
docker exec -it container_id python manage.py shell
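If you don't know the container id, docker ps lists the running containers. A small sketch, assuming the Compose service is called web so its container name contains "web":
docker ps                      # note the CONTAINER ID / NAMES entry for the web service
docker exec -it <container_id> python manage.py shell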
If you're using docker-compose, you shouldn't run additional containers when it isn't needed, as each run starts a new container and you'll lose a lot of disk space. You can end up with many leftover containers you never needed. Basically it's better to:
Start your services once with docker-compose up -d
Execute (instead of running) your commands:
docker-compose exec web ./manage.py shell
or, if you don't want to start all services (because, for example, you want to run only one command in Django), pass the --rm flag to docker-compose run, so the container is removed as soon as the command finishes:
docker-compose run --rm web ./manage.py shell
In this case, when you exit the shell, the container created by the run command is destroyed, so you save a lot of disk space.
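If you already have leftover containers from earlier docker-compose run calls made without --rm, the standard cleanup commands help. A sketch; check what they will remove before running them:
docker-compose rm              # remove stopped service containers for this Compose project
docker container prune         # remove all stopped containers on the machine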
If you're using Docker Compose (with the command docker compose up) to spin up your application, then after running that command you can start the interactive shell in the container with:
docker compose exec <container id or name of your Django app> python3 <path to your manage.py file, for example, src/manage.py> shell
Keep in mind the above is using Python version 3+ with python3.

How to watch xvfb session that's inside a docker on remote server from my local browser?

I'm running a Docker container (that I built on my own) that runs E2E tests.
The browser is up and running, but I want another nice-to-have feature: the ability to watch the session online.
My docker run command is:
docker run -p 4444:4444 --name ${DOCKER_TAG_NAME} \
-e Some_ENVs \
-v Volume:Volume \
--privileged \
-d "{docker-registry}" >> /dev/null 2>&1
I'm able to export screenshots, but in some cases that's not enough, and being able to watch the exact state of the test would be amazing.
I tried a lot of options but came to a dead end. Any help would be great.
My tests are in Python 2.7.
My Docker base is ubuntu:14.04.
My environment is in AWS (if that matters).
The containers run on Ubuntu servers.
I know it's a duplicate of this, but no one answered it, so...
There is a recent tool called Selenoid. It launches browsers in Docker containers (i.e. headless, as you require). It has a standalone UI capable of showing the live session screen via VNC, so you can launch multiple sessions in parallel and then watch, and even intercept, actions happening in the target browser. All of this works perfectly in a cloud environment.
I have faced the same issue before with VNC. You need to know which port your xvfb/VNC is using, then open that port in your AWS security group. Once you've done that, you should be able to connect.
In my case I was starting the Selenium docker image (https://github.com/elgalu/docker-selenium) and used this command to start the container:
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
-v /dev/shm:/dev/shm -e VNC_PASSWORD=hola \
-e SCREEN_WIDTH=1920 -e SCREEN_HEIGHT=1480 \
elgalu/selenium
The VNC port, as per the command, is 5900 on the host, so I opened that port in the instance's security group and connected using a VNC viewer on port 5900.
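If you can't (or don't want to) open the port publicly in the security group, an SSH tunnel is a common alternative. A rough sketch, assuming the 5900 mapping above and the default ubuntu user on the instance (<aws-host> is a placeholder):
ssh -L 5900:localhost:5900 ubuntu@<aws-host>   # forward the remote VNC port to your machine
# then point a VNC viewer (or the image's noVNC page, if it ships one) at localhost:5900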

PyCharm add remote Python interpreter inside the Docker

So I have set up Docker on my laptop. I'm using Boot2Docker, so I have one level of indirection to access the Docker daemon. In PyCharm, I can set a remote Python interpreter via SSH, but I'm not sure how to do it for containers that can only be accessed via Boot2Docker?
Okay so to answer your question(s):
In PyCharm, I can set a remote Python interpreter via SSH, but I'm not sure how to do it for containers that can only be accessed via Boot2Docker?
You need:
To ensure that you have SSH running in your container
There are many base images that include SSH. See: Dockerizing an SSH Daemon
Expose the SSH service to the Boot2Docker/VirtualBox VM.
docker run -d -p 2222:22 myimage ...
Set up PyCharm to connect to your Boot2Docker/VirtualBox VM, whose address you can get with:
boot2docker ip
Attaching to a running container is easy too!
$ boot2docker ssh
$ docker exec -i -t <cid> /bin/bash
Where <cid> is the Container ID or Name (if you used --name).
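Putting it together, roughly (a sketch: myimage and py-remote are placeholders, and the PyCharm menu names vary a bit between versions):
# start a container whose image runs sshd, publishing SSH on the VM's port 2222
docker run -d -p 2222:22 --name py-remote myimage
# find the address of the Boot2Docker VM
boot2docker ip
# in PyCharm: Settings > Project Interpreter > Add... > SSH, host = the IP printed above,
# port = 2222, then point it at the Python interpreter inside the container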
