I'm looking to get the OS version (such as Ubuntu 20.04.1 LTS) from a container that runs on a Kubernetes server. I mean, I need the OS of the server on which I have Kubernetes running a number of pods (and containers).
I saw there is a library called "kubernetes", but I didn't find any relevant info on this specific subject.
Is there a way to get this info with Python?
Many thanks for the help!
If you need to get the OS version of a running container, you should read
https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/
As described there, you can get shell access to your running pod with this command:
kubectl exec --stdin --tty <pod_name> -- /bin/bash
Then just run cat /etc/os-release and you will see the OS info your pod is running on. In most cases containers run on Linux systems, so this will show you the current pod's OS.
You can also install Python or anything else inside your pod, but I do not recommend doing that. Containers carry the minimum needed to make your app work. For a quick check it is fine, but afterwards just deploy a new container.
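If you want to read that file from Python instead of eyeballing it, /etc/os-release is a simple KEY=value format. A minimal parser sketch (the sample string below is made up for illustration; feed it the real file contents from your pod):

```python
def parse_os_release(text):
    """Parse /etc/os-release contents (KEY=value lines) into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

sample = 'NAME="Ubuntu"\nVERSION="20.04.1 LTS (Focal Fossa)"\nPRETTY_NAME="Ubuntu 20.04.1 LTS"'
print(parse_os_release(sample)["PRETTY_NAME"])  # Ubuntu 20.04.1 LTS
```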
You can also get the info of the node on which the pod is running via kubectl. In the command below, replace <PODNAME> with your pod name.
kubectl get node $(kubectl get pod <PODNAME> -o jsonpath='{.spec.nodeName}') -o jsonpath='{.status.nodeInfo.osImage}'
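To answer the Python part of the question: the official kubernetes client can do the same lookup as that jsonpath. A sketch (assumes pip install kubernetes and a working kubeconfig; the function name is mine):

```python
def get_pod_node_os_image(pod_name, namespace="default"):
    """Return the OS image (e.g. 'Ubuntu 20.04.1 LTS') of the node a pod runs on."""
    from kubernetes import client, config  # pip install kubernetes

    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    node = v1.read_node(name=pod.spec.node_name)
    return node.status.node_info.os_image
```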
Related
I have a script using the docker Python library (Docker Client API). I would like to limit each docker container to use only 10 CPUs (out of 30 CPUs on the instance), but I couldn't find a way to achieve that.
I know docker has a --cpus flag, but the library only seems to offer a cpu_shares (int): CPU shares (relative weight) parameter. Does anyone have experience setting a limit on CPU usage with the docker library?
import docker
client = docker.DockerClient(base_url='unix://var/run/docker.sock')
container = client.containers.run(my_docker_image, mem_limit='30g')
Edit:
I tried nano_cpus as suggested here, like client.containers.run(my_docker_image, nano_cpus=10000000000), to set 10 CPUs. When I inspected the container, it did show "NanoCpus": 10000000000. However, if I run R in the container and call parallel::detectCores(), it still shows 30, which confuses me. I have also added the R tag now.
Thank you!
Everything I can find about this is at the operating-system level. Is there a reason you can't just set the limit on the system itself? Just asking for clarification here.
If you can use the command line, then you can use this command:
sudo docker run -it --cpus=1.0 alpine:latest /bin/sh
Setting nano_cpus works, and you can use parallelly::availableCores() to detect the CPUs set by cgroups in R.
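Since NanoCpus is just the CPU count times 10^9, a tiny helper avoids miscounting zeros (a sketch; my_docker_image is the placeholder from the question):

```python
def cpus_to_nano_cpus(cpus):
    """Convert a CPU count (fractional allowed, like --cpus=1.5) to docker-py's nano_cpus."""
    return int(cpus * 1_000_000_000)

# Usage with docker-py (needs a running Docker daemon):
# client.containers.run(my_docker_image, nano_cpus=cpus_to_nano_cpus(10))
print(cpus_to_nano_cpus(10))  # 10000000000
```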
I am experimenting with running Jenkins on a Kubernetes cluster. I have achieved running Jenkins on the cluster using a helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.
In my JenkinsFile, I have tried the following
1.
withPythonEnv('python3.9') {
    pysh 'pip3 install pytest'
}
stage('Test') {
    sh 'python --version'
}
But it fails with java.io.IOException: error=2, No such file or directory.
It is not feasible to always run the Python install command hardcoded into the JenkinsFile. After some research I found out that I have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for the pod; there are only PostStart and PreStop.
I'm not sure how to install Python and MongoDB and use that as a template for the Kubernetes pods.
This is the default YAML file that I used for the helm chart - jenkins-values.yaml
Also I'm not sure if I need to use helm.
You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this:
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y <appname>
USER jenkins
Then build the container image, push it to a container registry, and replace the image: jenkins/jenkins in your helm chart with the name of the image you built plus the registry you uploaded it to. With this, your applications are installed in your container every time it runs.
The second way, which works but isn't perfect, is to run environment commands, with something like what is described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The issue with this method is that some deployments already use the startup commands, and by redefining the entrypoint you can prevent the container's original start command from ever running, thus causing the container to fail.
(This should work if added to the helm chart in the deployment section, as they should share roughly the same format)
Otherwise, there's a really improper way of installing programs in a running pod: use kubectl exec -it deployment.apps/jenkins -- bash, then run your installation commands in the pod itself.
That said, this is a poor idea, because if the pod restarts it will revert to the original image without the required applications installed, whereas with a new container image your apps remain installed each time the pod restarts. It should basically never be used, unless it is a temporary pod serving as a testing environment.
I'm running a Python script in a docker container using docker-compose on an Ubuntu 20.04 server. I'm looking for a way to automatically delete old docker-compose logs. I specified the following configuration, but the script hangs after about a week of work:
logging:
  driver: "json-file"
  options:
    max-size: "200k"
    max-file: "10"
It seems to me that this is happening because the space for logs is running out. Is this possible, or is there another reason? And if this is the reason, how can I solve it? Thanks.
You can do this with an external script.
First, find the container's log file location with this command:
docker container inspect --format='{{.LogPath}}' [CONTAINER ID/NAME]
Then, truncate the log file with:
truncate -s 0 /path/to/logfile
You can put that in a bash script and have it run automatically with cron.
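Those two steps can also be wrapped in one small Python script for cron (a sketch; assumes the docker CLI is on PATH and the script has permission to write the log file, which normally means root):

```python
import os
import subprocess

def docker_log_path(container):
    """Ask Docker where a container's json-file log lives."""
    result = subprocess.run(
        ["docker", "container", "inspect", "--format", "{{.LogPath}}", container],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def truncate_log(path):
    """Equivalent of `truncate -s 0 <path>`."""
    os.truncate(path, 0)

# Example usage ("myapp" is a hypothetical container name):
# truncate_log(docker_log_path("myapp"))
```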
I'm using nginx to serve some of my docs. I have a Python script that processes these docs for me. I don't want to pre-process the docs and add them in before the docker container is built, since these docs can grow to be pretty big and they increase in number. What I want is to run my Python (and bash) scripts inside the nginx container and have nginx just serve those docs. Is there a way to do this without pre-processing the docs before building the container?
I've attempted to execute RUN python3 process_docs.py, but I keep seeing the following error:
/bin/sh: 1: python: not found
The command '/bin/sh -c python process_docs.py' returned a non-zero code: 127
Is there a way to get python3 onto the Nginx docker container? I was thinking of installing python3 using:
apt-get update -y
apt-get install python3.6 -y
but I'm not sure that this would be good practice. Please let me know the best way to run my pre-processing script.
You can use a bind mount to inject data from your host system into the container. This will automatically update itself when the host data changes. If you're running this in Docker Compose, the syntax looks like
version: '3.8'
services:
nginx:
image: nginx
volumes:
- ./html:/usr/share/nginx/html
- ./data:/usr/share/nginx/html/data
ports:
- '8000:80' # access via http://localhost:8000
In this sample setup, the html directory holds your static assets (checked into source control) and the data directory holds the generated data. You can regenerate the data from the host, outside Docker, the same way you would if Docker weren't involved:
# on the host
. venv/bin/activate
./regenerate_data.py --output-directory ./data
You should not need docker exec in normal operation, though it can be extremely useful as a debugging tool. Conceptually it might help to think of a container as identical to a process; if you ask "can I run this Python script inside the Nginx process", no, you should generally run it somewhere else.
I want to manage virtual machines (any flavor) using Python scripts. Example, create VM, start, stop and be able to access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a software called libvirt. Do any of you think this would help me ?
Any insights on how to do this? Thanks for your help.
For AWS use boto.
For GCE use the Google API Python Client Library.
For OpenStack use python-openstackclient and import its methods directly.
For VMWare, google it.
For Opsware, abandon all hope, as their API is undocumented and has about 12 years of accumulated abandoned methods to dig through, and an equally insane data model backing it.
For direct libvirt control there are Python bindings for libvirt. They work very well and closely mimic the C libraries.
I could go on.
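To give a flavour of the libvirt bindings mentioned above, here is a sketch (assumes pip install libvirt-python and a local hypervisor such as QEMU/KVM; the connection URI is an example):

```python
def list_vms(uri="qemu:///system"):
    """Map VM names to a running/stopped flag via libvirt."""
    import libvirt  # pip install libvirt-python

    conn = libvirt.open(uri)
    try:
        return {dom.name(): bool(dom.isActive()) for dom in conn.listAllDomains()}
    finally:
        conn.close()
```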
Follow the directions here to install Docker: https://docs.docker.com/windows/ (it includes Oracle VirtualBox, if you don't already have it).
# grab the image
docker pull kalilinux/kali-linux-docker
# run a specific command
docker run kalilinux/kali-linux-docker <some_command>
# open an interactive terminal in the docker image
docker run -t -i kalilinux/kali-linux-docker /bin/bash
If you want to mount a local volume you can use the `-v <host_src>:<container_dst>` switch in your run command (the host path must be absolute, and the flag goes before the image name):
# mount the local training/webapp directory into the kali image at /webapp
docker run -v /path/to/training/webapp:/webapp kalilinux/kali-linux-docker <some_command>
Note that these are run from the regular Windows prompt; to use them from Python you would need to wrap them in subprocess calls.
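For that subprocess wrapping, one approach is to assemble the argv first (a sketch; build_docker_run_argv is my own helper, not a library function, and the paths are examples):

```python
import subprocess

def build_docker_run_argv(image, command, volumes=None):
    """Assemble a `docker run` argv; volumes maps host paths to container paths."""
    argv = ["docker", "run"]
    for host_dir, container_dir in (volumes or {}).items():
        argv += ["-v", f"{host_dir}:{container_dir}"]
    argv.append(image)
    argv.extend(command)
    return argv

argv = build_docker_run_argv(
    "kalilinux/kali-linux-docker", ["ls", "/webapp"],
    volumes={"C:/training/webapp": "/webapp"},
)
print(argv)
# Actually running it requires Docker to be installed:
# subprocess.run(argv, check=True)
```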