I can find all of the ingredients for what I want to do, but I'm not sure if I can put them together.
Ultimately, I want a Python process to be able to create and manage Docker instances running on Azure.
This link shows that you can use the Docker API to fire up instances on Azure: https://docs.docker.com/engine/context/aci-integration/. It's Beta, but I've been able to run my own container on Azure after logging in, using something like this:
docker --context myacicontext run hello-world
The second half of my problem is to call this from docker-py. The vanilla usage of docker-py is nice and straightforward, but I can't find any reference to the "--context" flag in the docker-py docs (https://docker-py.readthedocs.io/en/stable/).
Is there a way to configure docker-py so that it uses a given --context?
EDIT:
Thanks to @CharlesXu for pointing me in the right direction, I have now found that the following docker-py command does have an effect:
docker.context.ContextAPI.set_current_context("myacicontext")
This changes the default context used by the docker command-line interface, so
C:\Users\MikeSadler>docker ps -a
will subsequently list the containers running in Azure, and not locally.
However, the docker.from_env().containers.list(all=all) command stubbornly continues to return the local containers. This is true even if you restart the Python session and create a new client from a completely fresh start.
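For reference, the full sequence I'm testing looks roughly like this (a minimal sketch of my own, reusing the context name from above):
import docker

# Switch the current Docker context to the ACI context created earlier
docker.context.ContextAPI.set_current_context("myacicontext")

# Create a client and list containers: this still returns the local
# containers, not the ones running on Azure
client = docker.from_env()
print(client.containers.list(all=True))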
CONCLUSION:
Having spoken with the Docker developers: as of October 2020, docker-py officially does not support cloud connections. They are currently using gRPC protocols to manage cloud runs, and this may be incorporated into docker-py in the future.
I'm afraid there is no way to do what the command docker --context myacicontext run hello-world does, and there is no parameter like --context in the SDK either. As far as I know, you can set the current context using the SDK like this:
import docker
docker.context.config.write_context_name_to_docker_config('context_name')
But when you use the code:
client = docker.from_env()
client.containers.run('xxx')
Then it will reset the context to default, which means you cannot run the containers in ACI. I think it may be a bug that needs to be fixed. I'm not completely sure, but that's how things stand right now.
Related
I would like to run docker-compose via the Python Docker SDK.
However, I couldn't find any reference on how to achieve this in the Python SDK documentation. I could also use subprocess, but I ran into other difficulties with that approach; see here: docker compose subprocess
I am working on the same issue and have been looking for answers, but found nothing so far. The best suggestion I can give is to replicate the docker-compose logic yourself: for example, if your YAML file defines a network and some services, create them separately using the Python Docker SDK and connect the containers to the network.
It gets cumbersome, but eventually you can get things working that way from Python.
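For example, a rough docker-py sketch of that approach (the image, network, and service names here are placeholders, not taken from the question):
import docker

client = docker.from_env()

# What compose would do from the YAML: create the user-defined network first...
client.networks.create("my_app_net", driver="bridge")

# ...then start each service attached to it, one containers.run() per service
client.containers.run("postgres:13", name="db", network="my_app_net",
                      environment={"POSTGRES_PASSWORD": "example"}, detach=True)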
I created a package to make this easy: python-on-whales
Install with
pip install python-on-whales
Then you can do
from python_on_whales import docker
docker.compose.up()
docker.compose.stop()
# and all the other commands.
You can find the documentation for the package here: https://gabrieldemarmiesse.github.io/python-on-whales/
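If your compose file is not in the current working directory, you can also point the client at it explicitly:
from python_on_whales import DockerClient

# Use specific compose file(s) instead of relying on the current directory
docker = DockerClient(compose_files=["./docker-compose.yml"])
docker.compose.up()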
I have created three containers: one for a Python Flask application, a second for a PostgreSQL db, and a third for Angular 8. I'm using Docker Compose to run this. My question: each container has its own port, so three ports in total. Is there a way I can use only one port to run this whole application, like docker run instead of Docker Compose? All I want is a single port where this API can be called from anywhere.
If the only thing you want to be visible to the "outside" is the API, you can use the --link flag when calling docker run. Basically, start up the PG container, then start up the Flask container, linking to PG, then start up the Angular container, linking to Flask. However, the --link flag is a legacy feature, and may disappear sometime in the future.
Another option is to create a network with docker network create and make sure your three containers are all using that same network. They should all be able to communicate with each other in this way, and you just need to publish the API port so that other apps can use your API.
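If you end up scripting this with docker-py rather than the CLI, a rough sketch of the idea (the image names are placeholders) could look like this:
import docker

client = docker.from_env()

# One shared network for all three containers
client.networks.create("app_net", driver="bridge")

# The database and the Angular frontend join the network but publish no ports
client.containers.run("postgres:13", name="db", network="app_net",
                      environment={"POSTGRES_PASSWORD": "example"}, detach=True)
client.containers.run("my-angular-image", name="frontend", network="app_net", detach=True)

# Only the Flask API publishes a port to the host; the other containers
# reach it (and each other) by container name on the shared network
client.containers.run("my-flask-image", name="api", network="app_net",
                      ports={"5000/tcp": 5000}, detach=True)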
I'm not sure what your requirements are, but docker-compose is generally the cleaner way to do it, as it helps you achieve consistency in your automations.
I am currently running a lot of similar Docker containers which are created and run by a Python script via the official API. Since Docker natively doesn't support GPU mapping, I tested Nvidia-Docker, which fulfills my requirements, but I'm not sure how to integrate it seamlessly in my script.
I tried to find the proper API calls for Nvidia-Docker using Google and the docs, but I didn't manage to find anything useful.
My current code looks something like this:
# assemble a new container using the params obtained earlier
container_id = client.create_container(img_id, command=commands, stdin_open=True, tty=True,
                                        volumes=[folder],
                                        host_config=client.create_host_config(binds=[mountpoint, ]),
                                        detach=False)
# run it
client.start(container_id)
The documentation for the API can be found here.
From Nvidia-Docker's GitHub page:
The default runtime used by the Docker® Engine is runc; our runtime can become the default one by configuring the docker daemon with --default-runtime=nvidia. Doing so will remove the need to add the --runtime=nvidia argument to docker run. It is also the only way to have GPU access during docker build.
Basically, I want to add the --runtime=nvidia argument to my create_container call, but there doesn't seem to be any support for that.
And since I need to switch between runtimes multiple times during script execution (mixing Nvidia-Docker and native Docker containers), setting a global default runtime won't do. The quick and dirty way would be to run a bash command using subprocess, but I feel like there has to be a better way.
TL;DR: I am looking for a way to run Nvidia-Docker containers from a Python script.
The run() and create() methods have a runtime parameter, according to https://docker-py.readthedocs.io/en/stable/containers.html
Which makes sense, because the docker CLI tool is fairly thin: every command translates into a call to the Docker Engine's REST API.
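A minimal sketch of how that looks (the nvidia runtime name comes from the quote above; the CUDA image is just a placeholder):
import docker

# High-level API: pass the runtime per container
client = docker.from_env()
client.containers.run("nvidia/cuda:11.0-base", "nvidia-smi", runtime="nvidia")

# Low-level API, matching the create_container style from the question
api = docker.APIClient()
container = api.create_container(
    "nvidia/cuda:11.0-base",
    command="nvidia-smi",
    host_config=api.create_host_config(runtime="nvidia"),
)
api.start(container=container.get("Id"))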
I know there are a ton of articles, blogs, and SO questions about running Python applications inside a Docker container. But I am looking for information on doing the opposite. I have a distributed application with a lot of independent components, and I have put each of these components inside its own Docker container, which I can run manually via
docker run -d <MY_IMAGE_ID> mycommand.py
But I am trying to find a way (a pythonic, object-oriented way) of running these containerized applications from a python script on my "master" host machine.
I can obviously wrap the command line calls into a subprocess.Popen(), but I'm looking to see if something a bit more managed exists.
I have seen docker-py, but I'm not sure whether it's what I need; I can't find a way to use it to simply run a container, capture its output, etc.
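Ideally I'd be able to write something along these lines (a rough sketch of the kind of API I'm hoping for, reusing the image and command from above):
import docker

client = docker.from_env()

# Run the component and capture its output
output = client.containers.run("<MY_IMAGE_ID>", "mycommand.py")
print(output)

# Or run it detached and pull the logs later
container = client.containers.run("<MY_IMAGE_ID>", "mycommand.py", detach=True)
print(container.logs())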
If anyone has any experience with this, I would appreciate any pointers.
A bit lost on where to start after exploring DigitalOcean/AWS.
I have looked at the documentation for docker and boto3, and docker seems to be the direction I want to go in (Docker for AWS), but I am unsure whether these are mutually exclusive solutions.
From what I understand the following workflow is possible:
Write code locally in Python (it could be most any language, but I am using Python)
Deploy local code (aka upload) to a server
Call that code from a local machine with some argument(s) via a script leveraging some cloud API (boto3/docker?)
Grab finished result file from my cloud (pull file that is JSON/CSV etc and contains my results) using an API (boto3/docker?)
I thought this would be way easier to get up and running (maybe it is, and I am just missing something).
I feel like I am hitting my head against the wall on something that is intended to not be so tough.
Any pointers/guidance are hugely appreciated.
Thank you!
boto3 is a Python interface to AWS.
docker is a software tool for managing images and deploying them as containers.
You can use boto3 to create your Amazon machine, then install Docker on that machine and pull images from a Docker registry to run them as containers.
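A rough boto3 sketch of that first step, launching an EC2 instance that installs Docker on first boot (the AMI ID, key pair name, and install commands are placeholders, not real values):
import boto3

# Cloud-init user data that installs and starts Docker on first boot
# (assumes an Amazon Linux AMI; adjust for your distribution)
user_data = """#!/bin/bash
yum install -y docker
service docker start
"""

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI ID
    InstanceType="t2.micro",     # placeholder instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",       # placeholder key pair name
    UserData=user_data,
)
print(instances[0].id)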
There are also solutions like docker-machine (Docker Toolbox for Windows/Mac) that can be used to create machines on Amazon and then run your containers on that machine directly from your local Docker client.