So I have set up Docker on my laptop. I'm using Boot2Docker, so there is one level of indirection between me and the Docker daemon. In PyCharm, I can set a remote Python interpreter via SSH, but I'm not sure how to do that for containers that can only be reached through Boot2Docker.
Okay so to answer your question(s):
In PyCharm, I can set a remote Python interpreter via SSH, but I'm not sure how to do that for containers that can only be reached through Boot2Docker?
You need to:
1. Ensure that you have SSH running in your container. There are many base images that include SSH; see: Dockerizing an SSH Daemon.
2. Expose the SSH service through the Boot2Docker/VirtualBox VM:
docker run -d -p 2222:22 myimage ...
3. Set up PyCharm to connect to the address of your Boot2Docker/VirtualBox VM, which you can get with:
boot2docker ip
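If you want to sanity-check the SSH connection from Python before pointing PyCharm at it, here is a minimal sketch using paramiko. It assumes the -p 2222:22 mapping from step 2, the usual default Boot2Docker address 192.168.59.103 (substitute the output of boot2docker ip), and the root/screencast credentials from the Dockerizing an SSH Daemon example; adjust all three to your setup.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Address from `boot2docker ip`; credentials are whatever your image sets up.
client.connect('192.168.59.103', port=2222, username='root', password='screencast')
stdin, stdout, stderr = client.exec_command('python --version 2>&1')
print(stdout.read())
client.close()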
Attaching to a running container is easy too!
$ boot2docker ssh
$ docker exec -i -t <cid> /bin/bash
Where <cid> is the container ID or name (if you used --name).
I recently started using Docker and tried to create a container with JupyterLab so it could run on localhost.
Since I have been using Anaconda before, it seems that localhost:8888 is already taken, so I tried to use another available port. `docker run -p 8080:8080 <image_name>` produced a link to a web page with token authentication that I could not get past. It also used the same port 8888. Is there another port to use so that both Anaconda and Docker work together without errors?
Have you tried this?
TL;DR:
Run docker as
docker run -it -p 8888:8888 image:version
Then, inside your container, initialize Jupyter with:
jupyter notebook --ip 0.0.0.0 --no-browser --allow-root
You should now be able to access the notebook through your desktop browser at http://localhost:8888
The -p option to docker run takes two ports: the first is the port on the host that you want to connect to, and the second is the port within the container that the service is running on.
Assuming jupyter is running on port 8888 inside the container, and you want to access it on 8080 on your localhost, the command you are looking for would be:
docker run -p 8080:8888 <image_name>
or, to run it interactively and have it clean itself up afterwards:
docker run -it --rm -p 8080:8888 <image_name>
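As a quick sanity check (a sketch, not from the original answer), you can confirm from the host side that the published port is reachable before reaching for the browser:

import socket

# Assumes the container was started with `docker run -p 8080:8888 <image_name>`.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
result = sock.connect_ex(('localhost', 8080))  # 0 means the host port is open
print('Jupyter port reachable' if result == 0 else 'port not open (yet)')
sock.close()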
I am running a Docker container with
docker run --name client_container -v ~/client/vol:/vol --network host -it --entrypoint '/bin/bash' --privileged client_image:latest -c "bash /execute/start_client.sh && /bin/bash"
I have a service on the host machine and I would like it to be able to print something to the interactive bash terminal at an arbitrary time. Is this possible?
No, you can't do this.
In general it's difficult at best to write things to other terminal windows; Docker adding an additional layer of isolation makes this pretty much impossible. The docker run -t option means there probably is a special device file inside the container that could reach that terminal session, but since the host and container filesystems are isolated from each other, a host process can't access it at all.
I'm running a Docker container (that I built myself) that runs E2E tests.
The browser is up and running, but I want another nice-to-have feature: the ability to watch the session live.
My docker run command is:
docker run -p 4444:4444 --name ${DOCKER_TAG_NAME} \
  -e Some_ENVs \
  -v Volume:Volume \
  --privileged \
  -d "{docker-registry}" >> /dev/null 2>&1
I'm able to export screenshots, but in some cases that's not enough; being able to watch the exact state of the test live would be amazing.
I've tried a lot of options but came to a dead end. Any help would be great.
My tests are in Python 2.7
My Docker base is ubuntu:14.04
My environment is in AWS (if that matters)
The containers run on Ubuntu servers.
I know it's a duplicate of this question, but no one answered it, so...
There is a recent tool called Selenoid. It launches browsers in Docker containers (i.e. headless, as you require). It has a standalone UI that can show the live session screen via VNC, so you can launch multiple sessions in parallel and then watch, and even intercept, actions happening in the target browser. All of this works fine in a cloud environment.
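For the live screen to be available in the Selenoid UI, the session has to be started with the enableVNC capability. A minimal Python sketch, assuming Selenoid is listening on the usual localhost:4444 and a VNC-enabled Chrome image has been pulled:

from selenium import webdriver

# enableVNC tells Selenoid to launch a VNC-enabled browser container,
# which is what its UI uses to show the live session screen.
capabilities = {'browserName': 'chrome', 'enableVNC': True}
driver = webdriver.Remote(command_executor='http://localhost:4444/wd/hub',
                          desired_capabilities=capabilities)
driver.get('http://example.com')
driver.quit()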
I have faced the same issue before with VNC. You need to know which port your Xvfb/VNC server is using, then open that port in your AWS security group. Once you've done that, you should be able to connect.
In my case I was starting the Selenium Docker image (https://github.com/elgalu/docker-selenium) with this command:
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
  -v /dev/shm:/dev/shm -e VNC_PASSWORD=hola \
  -e SCREEN_WIDTH=1920 -e SCREEN_HEIGHT=1480 \
  elgalu/selenium
The VNC port, as per the command, is 5900, so I opened that port in the instance's security group and connected using a VNC viewer on port 5900.
I have a command that looks like:
p = subprocess.Popen(['docker', 'run', 'imagename'])
in a Python program. I am able to execute this successfully from the terminal; however, when I run it in PyCharm I receive this error:
Cannot connect to the Docker daemon. Is the docker daemon running on this host
How can I fix this error to run in the Python IDE?
The key is understanding what eval "$(docker-machine env dockermachinename)" does, where dockermachinename is your Docker machine's name (you can check the name with the docker-machine ls command).
When you run docker-machine env dockermachinename, which is what you use to configure your shell to connect to Docker, it prints:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://999.999.99.999:999"
export DOCKER_CERT_PATH="/Users/enderland/.docker/machine/machines/dockermachinename"
export DOCKER_MACHINE_NAME="dockermachinename"
# Run this command to configure your shell:
# eval $(docker-machine env dockermachinename)
These environment variables need to be available within PyCharm. By adding them to the environment variables list of your run configuration, you will be able to connect to Docker.
This assumes your Docker machine is running (if not, you need to do docker-machine start dockermachinename).
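Putting it together with the snippet from the question, a minimal sketch (the values below are the placeholders from the output above; substitute your own):

import os
import subprocess

# Copy the current environment and add the docker-machine variables.
env = dict(os.environ)
env.update({
    'DOCKER_TLS_VERIFY': '1',
    'DOCKER_HOST': 'tcp://999.999.99.999:999',
    'DOCKER_CERT_PATH': '/Users/enderland/.docker/machine/machines/dockermachinename',
    'DOCKER_MACHINE_NAME': 'dockermachinename',
})
p = subprocess.Popen(['docker', 'run', 'imagename'], env=env)
p.wait()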
I have to write a test for a deployment script which uploads files through SSH, but I'd like it not to depend on any external server's configuration. This is how I see it:
Create two SSH daemons without authentication on different ports of the loopback interface.
Run the deployment script against these two ports.
The only question is how to run these dummy SSH daemons.
I use Python and Fabric.
If you want full control over the server's actions (e.g. in order to simulate various problem conditions and thereby do really thorough testing), I recommend Twisted: as this article shows, it makes it really easy to set up your own custom SSH server.
If you'd rather use an existing SSH server, pick one from the list here (or use the one that comes with your system, if any; or maybe sshwindows if you're on Windows) and run it with subprocess from Python as part of starting up your tests.
Another option is to spin up a Dockerized container with an sshd service running. You can use a Docker image like these:
https://github.com/kabirbaidhya/fakeserver
https://github.com/panubo/docker-sshd
I've used this for testing out a deployment script (made on top of Fabric).
Here's how you use it.
Pull the image.
➜ docker pull kabirbaidhya/fakeserver
Set authorized keys for the server.
➜ cat ~/.ssh/id_rsa.pub > /path/to/authorized_keys
Run the fakeserver.
➜ docker run -d -p 2222:22 \
-v "/path/to/authorized_keys:/etc/authorized_keys/tester" \
-e SSH_USERS="tester:1001:1001" \
--name=fakeserver kabirbaidhya/fakeserver
You can now use the fakeserver from any ssh client. For instance:
➜ ssh tester@localhost -p 2222
➜ ssh tester@localhost -p 2222 "echo 'Hello World'"
If this works, you can then use any SSH client, or scripts built on top of paramiko or Fabric, to test against this mock server.
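For instance, a minimal paramiko sketch against the container above, assuming the key you copied into authorized_keys is your default ~/.ssh/id_rsa:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('localhost', port=2222, username='tester')  # picks up ~/.ssh/id_rsa by default
stdin, stdout, stderr = client.exec_command("echo 'Hello World'")
print(stdout.read())
client.close()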
Hope this helps.
Reimplementing an SSH daemon is not trivial.
If your only problem is that you don't want them depending on existing configurations, you can start up a new sshd with -f to specify a configuration file and -p to run it on a given port.
You can use os.system to make calls to the shell:
import os
# sshd refuses to start unless invoked with an absolute path
os.system('/usr/sbin/sshd -f myconfig -p 22022')
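If you'd rather have the test manage the daemon's lifetime explicitly, here is a hedged subprocess sketch; run_tests is a hypothetical stand-in for your test entry point, and -D keeps sshd in the foreground so the process handle stays valid:

import subprocess

# sshd must be started via an absolute path; -D keeps it in the foreground.
sshd = subprocess.Popen(['/usr/sbin/sshd', '-D', '-f', 'myconfig', '-p', '22022'])
try:
    run_tests()  # hypothetical: exercise the deployment script against port 22022
finally:
    sshd.terminate()
    sshd.wait()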