I am currently running a lot of similar Docker containers which are created and run by a Python script via the official API. Since Docker natively doesn't support GPU mapping, I tested Nvidia-Docker, which fulfills my requirements, but I'm not sure how to integrate it seamlessly into my script.
I tried to find the proper API calls for Nvidia-Docker using Google and the docs, but I didn't manage to find anything useful.
My current code looks something like this:
# assemble a new container using the params obtained earlier
container_id = client.create_container(
    img_id,
    command=commands,
    stdin_open=True,
    tty=True,
    volumes=[folder],
    host_config=client.create_host_config(binds=[mountpoint]),
    detach=False,
)
# run it
client.start(container_id)
The documentation for the API can be found at https://docker-py.readthedocs.io/en/stable/.
From Nvidia-Docker's GitHub page:
The default runtime used by the Docker® Engine is runc; our runtime can become the default one by configuring the docker daemon with --default-runtime=nvidia. Doing so will remove the need to add the --runtime=nvidia argument to docker run. It is also the only way to have GPU access during docker build.
Basically, I want to add the --runtime=nvidia argument to my create_container call, but there doesn't seem to be any support for that.
But since I need to switch between runtimes multiple times during the script's execution (mixing Nvidia-Docker and native Docker containers), the quick and dirty way would be to run a bash command via subprocess, but I feel like there has to be a better way.
TL;DR: I am looking for a way to run Nvidia-Docker containers from a Python script.
The run() and create() methods have a runtime parameter, according to https://docker-py.readthedocs.io/en/stable/containers.html
That makes sense, because the docker CLI tool is pretty thin: every command translates into a call to the Docker Engine's REST API.
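For example, a minimal sketch with the high-level API (the image names and commands are placeholders; this assumes the nvidia runtime is registered with your Docker daemon):

import docker

client = docker.from_env()

# GPU container: the runtime parameter is the API equivalent of
# `docker run --runtime=nvidia`
gpu_container = client.containers.run(
    "my-gpu-image",        # placeholder image
    "python train.py",     # placeholder command
    runtime="nvidia",
    detach=True,
)

# Native container: simply omit the runtime parameter
cpu_container = client.containers.run("my-cpu-image", "python task.py", detach=True)

If you stay on the low-level API from the question, the same knob appears on create_host_config, e.g. client.create_host_config(binds=[mountpoint], runtime="nvidia"), so you can mix Nvidia and native containers per call without changing the daemon's default runtime.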
Basically, I need to write a Python script which takes arguments with argparse and launches an instance of a VM in OpenStack, and optionally creates a disk volume and mounts it to the VM.
I've tried to search for similar scripts and found this; generally it should work, but it is quite old. When I looked for Python SDK documentation on the OpenStack website, I found many different clients and Python APIs for those clients. Which one should I use?
Each OpenStack service has its own Python client library, such as python-novaclient, python-cinderclient, and python-glanceclient. They also provide user guides (e.g. How to use cinderclient); have a look and you will find the answer.
Generally, I prefer trying the command line in a terminal first, like cinder create --display-name corey-volume 10 or nova boot --image xxx --block-device source=volume,id=xxx corey-vm, to verify that the command exists and the idea works, and then translating it into Python code. If I don't know how to use something or get unexpected errors in the script, I go to GitHub and check the client's source code; it really helps, especially when debugging.
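To illustrate that translation, here is a minimal sketch of those same two CLI calls through the client libraries (the endpoint, credentials, flavor, and IDs below are all placeholder assumptions):

from keystoneauth1 import loading, session
from novaclient import client as nova_client
from cinderclient import client as cinder_client

# Authenticate once and share the session across both clients
loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3",  # hypothetical endpoint
    username="corey",
    password="secret",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)

nova = nova_client.Client("2.1", session=sess)
cinder = cinder_client.Client("3", session=sess)

# Equivalent of: cinder create --display-name corey-volume 10
volume = cinder.volumes.create(size=10, name="corey-volume")

# Equivalent of: nova boot --image xxx corey-vm (a flavor is required by the API)
server = nova.servers.create(
    name="corey-vm",
    image="xxx",        # image ID placeholder from the question
    flavor="m1.small",  # hypothetical flavor
)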
I can find all of the ingredients for what I want to do, but I'm not sure if I can put them together.
Ultimately, I want a Python process to be able to create and manage Docker instances running on Azure.
This link shows that you can use the Docker API to fire up instances on Azure: https://docs.docker.com/engine/context/aci-integration/. It's in beta, but I've been able to run my own container on Azure after logging in, using something like this:
docker --context myacicontext run hello-world
The second half of my problem is to call this from docker-py. The vanilla usage of docker-py is nice and straightforward, but I can't find any reference to the "--context" flag in the docker-py docs (https://docker-py.readthedocs.io/en/stable/).
Is there a way to configure docker-py so that it provides a --context?
EDIT:
Thanks to @CharlesXu for pointing me in the right direction, I have now found that the following docker-py command does have an effect:
docker.context.ContextAPI.set_current_context("myacicontext")
This changes the default context used by the docker cmd line interface, so
C:\Users\MikeSadler>docker ps -a
will subsequently list the containers running in Azure, and not locally.
However, the docker.from_env().containers.list(all=True) command stubbornly continues to return the local containers. This is true even if you restart the Python session and create a new client from a completely fresh start.
CONCLUSION:
Having spoken with the Docker developers: as of October 2020, docker-py officially does not support cloud connections. They are currently using gRPC protocols to manage cloud runs, and this may be incorporated into docker-py in the future.
I'm afraid there is no way to do what the command docker --context myacicontext run hello-world does, and there is also no parameter like --context in the SDK. As far as I know, you can set the current context using the SDK like this:
import docker

# Persists the context name into the Docker CLI's config file
docker.context.config.write_context_name_to_docker_config('context_name')
But when you use the code:
client = docker.from_env()
client.containers.run('xxx')
Then it resets the context to default, which means you cannot run the containers on ACI. I think it may be a bug that needs to be fixed. I'm not entirely sure, but that's how things stand right now.
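Until that is fixed, a pragmatic workaround is to shell out to the docker CLI for the ACI-bound work and keep docker-py for the local daemon. A minimal sketch, reusing the context name from the question:

import subprocess

# Run a container in the ACI context via the docker CLI
result = subprocess.run(
    ["docker", "--context", "myacicontext", "run", "hello-world"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)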
I am trying to develop an AWS Lambda that performs a rollout restart of a deployment using the Python client. I cannot find any implementation in the GitHub repo or references, and using -v with kubectl rollout restart is not giving me enough hints to continue with the development.
Anyway, it is more related to the Python client:
https://github.com/kubernetes-client/python
Any ideas? Perhaps I am missing something.
The Python client interacts directly with the Kubernetes API, similar to what kubectl does. However, kubectl adds some utility commands whose logic is not contained in the Kubernetes API itself. Rollout is one of those utilities.
In this case, that means you have two approaches. You could reverse-engineer the API calls that kubectl rollout restart makes. Pro tip: with Go you can actually import internal kubectl behaviour and libraries, making this quite easy, so consider writing your Lambda in Go.
Alternatively, you can have your Lambda call the kubectl binary (using Python's subprocess machinery), as in the sketch below. However, this does mean you need to include the binary in your Lambda in some way, either by uploading it with your Lambda or by building a Lambda layer containing kubectl.
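A minimal sketch of that second approach, assuming kubectl ships in a Lambda layer at /opt/kubectl and that the deployment name is a placeholder:

import subprocess

def rollout_restart(deployment: str, namespace: str = "default") -> str:
    # Shell out to the bundled kubectl binary, exactly as the CLI would
    result = subprocess.run(
        ["/opt/kubectl", "rollout", "restart",
         f"deployment/{deployment}", "-n", namespace],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout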
@Andre Pires, it can be done like this:
data := fmt.Sprintf(`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"%s","maxSurge":"%s"}}}`,
    time.Now().String(), "25%", "25%")

newDeployment, err := clientImpl.ClientSet.AppsV1().
    Deployments(item.Pod.Namespace).
    Patch(context.Background(), deployment.Name, types.StrategicMergePatchType,
        []byte(data), metav1.PatchOptions{FieldManager: "kubectl-rollout"})
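Since the question was about the Python client, the same strategic-merge patch can be expressed with the official kubernetes package. A minimal sketch (deployment and namespace names are placeholders):

from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
apps = client.AppsV1Api()

# kubectl's trick: changing a pod-template annotation makes the
# Deployment roll all of its pods
body = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt":
                        datetime.now(timezone.utc).isoformat()
                }
            }
        }
    }
}

apps.patch_namespaced_deployment(
    name="my-deployment",  # placeholder
    namespace="default",   # placeholder
    body=body,
)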
I know there are a ton of articles, blog posts, and SO questions about running Python applications inside a Docker container. But I am looking for information on doing the opposite: I have a distributed application with a lot of independent components, and I have put each of these components inside a Docker container that I can run manually via
docker run -d <MY_IMAGE_ID> mycommand.py
But I am trying to find a way (a Pythonic, object-oriented way) of running these containerized applications from a Python script on my "master" host machine.
I can obviously wrap the command-line calls in subprocess.Popen(), but I'm looking to see if something a bit more managed exists.
I have seen docker-py, but I'm not sure whether it is what I need; I couldn't find a way to use it to simply run a container, capture output, and so on.
If anyone has any experience with this, I would appreciate any pointers.
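For reference, docker-py's high-level API does cover this use case. A minimal sketch, with a placeholder image name and the command from above:

import docker

client = docker.from_env()

# Blocking run: returns the container's stdout once it exits,
# roughly equivalent to `docker run my-image mycommand.py`
output = client.containers.run("my-image", "python mycommand.py")
print(output.decode())

# Detached run: returns a Container object you can manage
container = client.containers.run("my-image", "python mycommand.py", detach=True)
container.wait()                  # block until the container exits
print(container.logs().decode())  # capture its output afterwards
container.remove()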
A bit lost on where to start after exploring DigitalOcean/AWS.
I have looked at the documentation for Docker and boto3, and Docker seems to be the direction I want to go in (Docker for AWS), but I am unsure whether these are mutually exclusive solutions.
From what I understand, the following workflow is possible:
Write code locally in Python (almost any language would do, but I am using Python)
Deploy (upload) the local code to a server
Call that code from my local machine, with some argument(s), via a script leveraging some cloud API (boto3/Docker?)
Grab the finished result file (JSON/CSV etc. containing my results) from the cloud using an API (boto3/Docker?)
I thought this would be way easier to get up and running (maybe it is, and I am just missing something).
I feel like I am hitting my head against a wall on something that is not intended to be so tough.
Any pointers/guidance are hugely appreciated.
Thank you!
boto3 is an interface to AWS.
Docker is a software tool for managing images and deploying them as containers.
You can use boto3 to create your Amazon machine, then install Docker on that machine and pull containers from a Docker repository to run them, as sketched below.
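A minimal sketch of that first step with boto3 (the AMI ID and image name are placeholder assumptions; the user-data script installs Docker at first boot):

import boto3

ec2 = boto3.resource("ec2")

# cloud-init user data: install and start Docker, then run an image
user_data = """#!/bin/bash
yum install -y docker
systemctl start docker
docker run -d my-repo/my-image
"""

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
print(instances[0].id)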
There are also solutions like docker-machine (Docker Toolbox for Windows/Mac) that can be used to create machines on Amazon and then run your containers directly on that machine from your local Docker repository.