Please be gentle, I am new to docker.
I'm trying to run a docker container from within Python but am running into some trouble due to environment variables not being set.
For example I run
import os
os.popen('docker-machine start default').read()
os.popen('eval "$(docker-machine env default)"').read()
which will start the machine but does not set the environment variables, so I cannot run a docker run command.
Ideally it would be great if I did not need to run the eval "$(docker-machine env default)" at all. I'm not really sure why I can't set these to something static every time I start the machine.
So I am trying to set them with a bash command, but Python just returns an empty string, and I then get an error if I try to do docker run my_container.
Error:
Post http:///var/run/docker.sock/v1.20/containers/create: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
I'd suggest running these two steps to start the machine in a bash script first. Then you can have that same bash script call your Python script and access Docker with docker-py:
import docker
import os

docker_host = os.environ['DOCKER_HOST']
client = docker.Client(base_url=docker_host)  # old docker-py Client API; DOCKER_HOST is set by docker-machine env
client.create_container(...)
...
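As a side note, newer versions of docker-py also expose docker.from_env(), which builds the client from DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH (exactly the variables that docker-machine env exports). A minimal sketch, assuming the bash wrapper has already exported them before launching the script:

import docker

# Builds the client from DOCKER_HOST / DOCKER_TLS_VERIFY / DOCKER_CERT_PATH,
# so it also picks up the TLS settings that docker-machine requires.
client = docker.from_env()
print(client.containers.list())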
I am currently following a docker tutorial and am using a VM to follow along. Essentially the tutorial is about creating docker images from existing containers. The task involves setting the container to use python as the executable.
docker commit --change='CMD ["python", "-c", "import this"]' 07a3f5a8b944 ubuntu_python
Whilst I am actually using python3, the issue repeats regardless of which version of python I specify in the CMD.
I then attempt to run an image based on the container using
docker run ubuntu_python
but I will get this error
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "python": executable file not found in $PATH: unknown.
Looks like I need to add python to the PATH of the VM, and Google searching is not yielding anything useful.
Any suggestions would be much appreciated
thanks
I have a docker image and an associated container that runs a jupyter-lab server. On this docker image I have a very specific python module that cannot be installed on the host. On my host, I have all my work environment, which I don't want to run in the docker container.
I would like to use that module from a python script running on the host. My first idea is to use docker-py (https://github.com/docker/docker-py) on the host like this:
import docker
client = docker.from_env()
container = client.containers.run("myImage", detach=True)
container.exec_run("python -c 'import mymodule; # do stuff; print(something)'")
and get the output and keep working in my script.
Is there a better solution? Is there a way to connect to the jupyter server within the script on the host for example?
Thanks
First: as #dagnic states in his comment, there are those 2 modules that let you drive the Docker runtime from your Python script (there are probably more; "which one is best" would be a different question).
Second: without knowing much about Jupyter, since you call it a "server" it means to me that you can port-map that server (remember -p 8080:80 or --publish 8080:80? yeah, that's it!). After setting a port mapping for your container you would be able to, e.g., use the pycurl module and "talk" to that service.
Remember, if you "talk on a port" to your server, you might also want to do the launching itself with docker-py.
Since you asked whether a better solution exists: these two methods would be the most popular. The first one is convenient inside your script; the second launches a server, and you can use pycurl from your host script as you asked (connect to the Jupyter server). E.g., if you launch the Jupyter server like:
docker run -p 9999:8888 -it -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook:latest
you can use pycurl like this:
import pycurl
from io import BytesIO
b_obj = BytesIO()
crl = pycurl.Curl()
# Set URL value (host port 9999 is mapped to the container's port 8888 above)
crl.setopt(crl.URL, 'http://localhost:9999')
# Write bytes that are utf-8 encoded
crl.setopt(crl.WRITEDATA, b_obj)
# Perform a file transfer
crl.perform()
# End curl session
crl.close()
# Get the content stored in the BytesIO object (in byte characters)
get_body = b_obj.getvalue()
# Decode the bytes stored in get_body to HTML and print the result
print('Output of GET request:\n%s' % get_body.decode('utf8'))
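And if you prefer to launch that container from Python too (as mentioned above), here is a rough docker-py sketch of roughly the same docker run command, assuming the docker-py 2.x+ API and the same image/port mapping:

import docker

client = docker.from_env()
# Roughly: docker run -d -p 9999:8888 -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook:latest
container = client.containers.run(
    'jupyter/base-notebook:latest',
    detach=True,
    ports={'8888/tcp': 9999},                  # host port 9999 -> container port 8888
    environment={'JUPYTER_ENABLE_LAB': 'yes'},
)
print(container.id)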
Update:
So you have two questions:
1. Is there a better solution?
Basically, use the docker-py module and run the Jupyter server in a Docker container (and there are probably a few other options not involving Docker, I suppose).
2. Is there a way to connect to the jupyter server within the script on the host for example?
An example of how to run Jupyter in Docker is the docker run command shown above.
The rest is to use pycurl from your code to talk to that Jupyter server from your host computer.
I'm trying to start a container and keep it up, but not in interactive mode, so that it will still show up in the output of the docker ps command.
It's something like: docker run -d alpine sleep 50
I couldn't find any reference for how to do this using the Docker SDK for Python.
Please advise
The run method for containers in the Python Docker SDK has a boolean parameter called detach; I think setting it to True will result in that container running in the background.
https://docker-py.readthedocs.io/en/stable/containers.html
Example:
import docker
client = docker.DockerClient(base_url='unix://var/run/docker.sock')
client.containers.run('nginx', detach=True)
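For the exact command in the question (docker run -d alpine sleep 50), a sketch along the same lines would be:

import docker

client = docker.from_env()
# Equivalent of: docker run -d alpine sleep 50
container = client.containers.run('alpine', 'sleep 50', detach=True)
print(container.short_id)  # the container should now show up in docker ps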
I am creating Python code that will be built into a docker image.
My intent is that the docker image will have the capability of running other docker images on the host.
Let's call these docker containers "daemon" and "workers," respectively.
I've proven that this concept works by running "daemon" using
-v /var/run/docker.sock:/var/run/docker.sock
I'd like to be able to write the code so that it will work anywhere that there exists a /var/run/docker.sock file.
Since I'm working on an OSX machine I have to use the Docker Quickstart terminal. As such, on my system there is no docker.sock file.
The docker-py documentation shows this as the way to capture the docker client:
from docker import Client
cli = Client(base_url='unix://var/run/docker.sock')
Is there some hackery I can do on my system so that I can instantiate the docker client that way?
Can I create the docker.sock file on my file system and have it sym-linked to the VM docker host?
I really don't want to have to rebuild my docker image every time I want to test a single-line code change... help!!!
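For reference, a hedged sketch of the usual docker-machine alternative: instead of the Unix socket, build the client from the DOCKER_* environment variables (the TCP URL plus TLS certs). This assumes eval "$(docker-machine env default)" has already been run in the shell that starts the script:

from docker import Client
from docker.utils import kwargs_from_env

# kwargs_from_env() turns DOCKER_HOST / DOCKER_TLS_VERIFY / DOCKER_CERT_PATH
# into base_url and tls arguments, so no /var/run/docker.sock is needed.
cli = Client(**kwargs_from_env())
print(cli.version())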
I have a command that looks like:
p = subprocess.Popen(['docker', 'run', 'imagename'])
in a Python program. I am able to execute this successfully from the terminal; however, when I run it in PyCharm I receive this error:
Cannot connect to the Docker daemon. Is the docker daemon running on this host
How can I fix this error to run in the Python IDE?
The key is in understanding what eval "$(docker-machine env dockermachinename)" does (where dockermachinename is your docker machine name; you can check the name with the "docker-machine ls" command).
When you run docker-machine env dockermachinename, which is what you need to configure your shell to connect with Docker, it prints:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://999.999.99.999:999"
export DOCKER_CERT_PATH="/Users/enderland/.docker/machine/machines/dockermachinename"
export DOCKER_MACHINE_NAME="dockermachinename"
# Run this command to configure your shell:
# eval $(docker-machine env default)
These environment variables need to be set within PyCharm. By adding them to the environment variables list in your run configuration, you will be able to connect to Docker.
This assumes your Docker machine is running (if not, you need to do docker-machine start dockermachinename).
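If you'd rather not touch the PyCharm run configuration, the same variables can also be set from the script itself before calling Docker. A rough sketch, using the placeholder values printed above (substitute the real output of docker-machine env):

import os
import subprocess

# Placeholder values copied from the docker-machine env output above; use your real ones.
os.environ['DOCKER_TLS_VERIFY'] = '1'
os.environ['DOCKER_HOST'] = 'tcp://999.999.99.999:999'
os.environ['DOCKER_CERT_PATH'] = '/Users/enderland/.docker/machine/machines/dockermachinename'
os.environ['DOCKER_MACHINE_NAME'] = 'dockermachinename'

# The docker CLI subprocess inherits this environment, so the original command works.
p = subprocess.Popen(['docker', 'run', 'imagename'])
p.wait()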