I have a script that uses the docker Python library (Docker Client API). I would like to limit each Docker container to use only 10 CPUs (out of 30 CPUs on the instance), but I couldn't find a way to achieve that.
I know Docker has a --cpus flag, but the Python library only documents cpu_shares (int): CPU shares (relative weight). Does anyone have experience setting a limit on CPU usage through the library?
import docker
client = docker.DockerClient(base_url='unix://var/run/docker.sock')
container = client.containers.run(my_docker_image, mem_limit='30g')
Edit:
I tried nano_cpus as suggested here, e.g. client.containers.run(my_docker_image, nano_cpus=10000000000) to set 10 CPUs. When I inspected the container, it did show "NanoCpus": 10000000000. However, if I run R in the container and call parallel::detectCores(), it still shows 30, which confuses me. I have also added the R tag now.
Thank you!
Everything I can find about this is at the operating-system level. Is there a reason you can't just set these limits on the system itself? Just asking for clarification here.
If you can use the command line, then you can use this command:
sudo docker run -it --cpus=1.0 alpine:latest /bin/sh
Setting nano_cpus works, and you can use parallelly::availableCores() in R to detect the CPUs allotted by the cgroup. nano_cpus imposes a CPU quota on the container's cgroup rather than hiding CPUs, which is why parallel::detectCores() still reports all 30 cores of the host.
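Putting it together, a minimal sketch with the docker Python library; the image name is a placeholder, and nano_cpus is expressed in billionths of a CPU, so 10 CPUs is 10 * 10^9:

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')

# Limit the container to 10 CPUs and 30 GB of memory; the image name is a placeholder.
container = client.containers.run(
    'my_docker_image',
    nano_cpus=10_000_000_000,   # equivalent to `docker run --cpus=10`
    mem_limit='30g',
    detach=True,
)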
I'm looking for the OS version (such as Ubuntu 20.04.1 LTS) of the server that my containers run on in Kubernetes. I mean, I need the OS of the server on which Kubernetes runs, with its number of pods (and containers).
I saw there is a library called "kubernetes", but I didn't find any relevant info on this specific subject.
Is there a way to get this info with Python?
Many thanks for the help!
If you need to get the OS version of a running container, you should read
https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/
As described there, you can get a shell into your running pod with:
kubectl exec --stdin --tty <pod_name> -- /bin/bash
Then just run cat /etc/os-release and you will see the OS your pod is running on. In most cases containers run on a Linux distribution, and this shows the current pod's OS.
You can also install Python or anything else inside your pod, but I don't recommend it. Containers ship the minimum needed to make your app work. For a quick check it is fine, but afterwards just deploy a new container.
You can get the OS image from the info of the node on which the pod is running, via kubectl. In the command below, replace <PODNAME> with your pod name.
kubectl get node $(kubectl get pod <PODNAME> -o jsonpath='{.spec.nodeName}') -o jsonpath='{.status.nodeInfo.osImage}'
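Since the question asks for a Python way, here is a minimal sketch with the official kubernetes Python client that mirrors the kubectl command above; the pod name and namespace are placeholders:

from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# Placeholder pod name and namespace.
pod = v1.read_namespaced_pod(name="my-pod", namespace="default")
node = v1.read_node(pod.spec.node_name)

print(node.status.node_info.os_image)   # e.g. "Ubuntu 20.04.1 LTS"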
I am trying to run my Streamlit app via Docker. Since I eventually want to run the code on a Linux system, I am first checking whether it runs on my Windows system.
So I ran my container and a command that gave me two URLs, but neither of the URLs is working.
This is the terminal result:
This is the result in the browser:
Do I have to specify a port number? And if so, how do I find my local system's port?
Thanks in advance
I think you are using the wrong port number. Please add -p 8501:8501 to your docker run command and then go to localhost:8501 in your browser.
First check whether you are able to ping the internal IP of the Docker container from the host machine.
Then export multiple ports using the following arguments:
docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2>
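If you start the container through the docker Python library rather than the CLI, the same port publishing can be expressed with the ports argument; a minimal sketch, assuming a placeholder image name and Streamlit's default port 8501:

import docker

client = docker.from_env()

# Publish container port 8501 on host port 8501; add more entries for more ports.
container = client.containers.run(
    'my_streamlit_image',
    ports={'8501/tcp': 8501},
    detach=True,
)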
I am trying to build a simple Python-based Docker container. I am working at a corporation behind a proxy, on Windows 10. Below is my Dockerfile:
FROM python:3.7.9-alpine3.11
WORKDIR /app
# copy requirements.txt in first so pip install can find it
COPY requirements.txt .
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "application.py"]
But it gives me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10, or maybe I'm not looking in the right place.
Also, I'm not sure this is only a proxy issue, since I've seen people run into this error without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took me 10 minutes to do on my Mac... :(
Thank you
You're talking about the most basic Docker functionality: normally it has to connect to Docker Hub on the internet to pull base images. If you can't make this work with your proxy, you can either
preload your local cache with the necessary images (see the sketch after this list), or
set up a Docker registry inside your firewall that contains all the images you'll need
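For the preloading option, one approach is to pull the image on a machine that can reach Docker Hub, save it to a tarball, and load that tarball on the machine behind the firewall (the docker save / docker load CLI commands do the same thing). A rough sketch with the docker Python library; the file name is arbitrary:

import docker

client = docker.from_env()

# On a machine that CAN reach Docker Hub: pull the base image and save it to a tarball.
image = client.images.pull('python', tag='3.7.9-alpine3.11')
with open('python-alpine.tar', 'wb') as f:
    for chunk in image.save():
        f.write(chunk)

# On the machine behind the firewall: load the tarball into the local image cache.
with open('python-alpine.tar', 'rb') as f:
    client.images.load(f.read())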
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, might this help? - https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its configuration GUI.
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image:
https://docs.docker.com/engine/reference/builder/#predefined-args
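On the command line these are passed with docker build --build-arg HTTP_PROXY=... --build-arg HTTPS_PROXY=... . If you build the image through the docker Python library instead, the same values can be passed via buildargs; a rough sketch, with the proxy URL and tag as placeholders:

import docker

client = docker.from_env()

# Equivalent of: docker build --build-arg HTTP_PROXY=... --build-arg HTTPS_PROXY=... -t myapp .
image, build_logs = client.images.build(
    path='.',
    tag='myapp',
    buildargs={
        'HTTP_PROXY': 'http://XXXXXXX:8080',   # placeholder proxy, as in the question
        'HTTPS_PROXY': 'http://XXXXXXX:8080',
    },
)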
I do not see any runtime dependency of your container on the internet, so running the container should work without an issue.
I am running PyTorch training of CycleGAN inside a Docker container.
I want to use visdom to show the progress of the training (it is also recommended by the CycleGAN project).
I can start a visdom.server inside the Docker container and access it from outside the container. But when I try to use the basic visdom example inside a bash session of the same container that is running visdom.server, I get connection-refused errors such as The requested URL could not be retrieved.
I think I need to configure visdom.Visdom() in the example in some custom way to be able to send the data to the server.
Thankful for any help!
Notes
When I start visdom.server it says You can navigate to http://c4b7a2be26c4:8097, while all the examples mention localhost:8097.
I am trying to do this behind a proxy.
I realised that, in order to curl localhost:8097, I need to use curl --noproxy localhost localhost:8097. So I will have to do something similar inside visdom.
When setting http_proxy inside a docker container, you need to set no_proxy=localhost,127.0.0.1 as well in order to allow connections to localhost.
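A minimal sketch of what that can look like on the Python side, assuming the default visdom server address and port; the important part is that the proxy bypass is in place before the client is created:

import os
import visdom

# Make sure requests to the local visdom.server bypass the corporate proxy.
os.environ["no_proxy"] = "localhost,127.0.0.1"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

viz = visdom.Visdom(server="http://localhost", port=8097)
assert viz.check_connection(), "could not reach visdom.server"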
I got the same problem, and I found that when you use a docker container to connect to the server, you cannot use that same docker container to run your code.
I want to manage virtual machines (of any flavor) using Python scripts: for example, create a VM, start it, stop it, and access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a piece of software called libvirt. Do any of you think this would help me?
Any insights on how to do this? Thanks for your help.
For AWS, use boto.
For GCE, use the Google API Python Client Library.
For OpenStack, use the python-openstackclient and import its methods directly.
For VMware, Google it.
For Opsware, abandon all hope, as their API is undocumented and has about 12 years of accumulated abandoned methods to dig through, and an equally insane data model backing it.
For direct libvirt control there are Python bindings for libvirt (see the sketch after this list). They work very well and closely mimic the C libraries.
I could go on.
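For the libvirt route, a minimal sketch with the libvirt Python bindings; the connection URI and domain name are placeholders, so substitute the URI of your hypervisor driver:

import libvirt

# Placeholder URI; "qemu:///system" is the usual KVM/QEMU URI, and the
# VirtualBox driver uses its own URI scheme.
conn = libvirt.open("qemu:///system")

# List all defined VMs and whether they are running.
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "stopped")

# Start and later stop a VM (placeholder domain name).
dom = conn.lookupByName("kali")
if not dom.isActive():
    dom.create()      # start the VM
# ... do some work ...
dom.shutdown()        # ask the guest OS to shut down

conn.close()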
Follow the directions here to install Docker: https://docs.docker.com/windows/ (it includes Oracle VirtualBox, if you don't already have it).
#grab the image
docker pull kalilinux/kali-linux-docker
#run a specific command
docker run kalilinux/kali-linux-docker <some_command>
#open an interactive terminal in the container
docker run -t -i kalilinux/kali-linux-docker /bin/bash
if you want to mount a local volume you can use the `-v <host_src>:<container_dst>` switch in your run command
#mount the local training/webapp directory into the kali image at /webapp (the host path must be absolute)
docker run -v /path/to/training/webapp:/webapp kalilinux/kali-linux-docker <some_command>
note that these are run from the regular Windows prompt; to use them from Python you would need to wrap them in subprocess calls ...
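For example, a rough sketch of such a wrapper; a non-interactive command is used on purpose, since interactive -it sessions don't map well onto subprocess:

import subprocess

# Run a one-off command in the Kali image and capture its output.
result = subprocess.run(
    ["docker", "run", "--rm", "kalilinux/kali-linux-docker", "uname", "-a"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)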