Installing packages in a Kubernetes Pod - python

I am experimenting with running Jenkins on a Kubernetes cluster. I have managed to get Jenkins running on the cluster using a Helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.
In my JenkinsFile, I have tried the following
1.
withPythonEnv('python3.9') {
    pysh 'pip3 install pytest'
}
stage('Test') {
    sh 'python --version'
}
But it says java.io.IOException: error=2, No such file or directory.
It is not feasible to always run the Python install command hardcoded into the Jenkinsfile. After some research I found out that I have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for the pod; there is only PostStart and PreStop.
I'm not sure how to install Python and MongoDB and use that as a template for the Kubernetes pods.
This is the default YAML file that I used for the helm chart - jenkins-values.yaml
Also I'm not sure if I need to use helm.

You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this (the Jenkins image runs as a non-root user, so switch to root for the install and back afterwards):
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y appname
USER jenkins
Then build the image, push it to a container registry, and replace the image: jenkins/jenkins reference in your Helm chart with the name of the image you built, prefixed with the registry you pushed it to. With this, your applications are installed in the container every time it runs.
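As a rough sketch of that override, assuming the official Jenkins Helm chart, jenkins-values.yaml might contain something like the following (the exact key names vary by chart version, and myregistry.example.com/my-jenkins is a placeholder for the image you pushed):
# jenkins-values.yaml (sketch -- key names depend on the chart version)
controller:
  image: myregistry.example.com/my-jenkins  # custom image built from the Dockerfile above
  tag: "latest"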
The second way, which works but isn't perfect, is to run environment commands, with something like what is described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The issue with this method is that some deployments already use the startup commands, and by redefining the entrypoint you can stop the container's original start command from ever running, causing the container to fail.
(This should work if added to the helm chart in the deployment section, as they should share roughly the same format)
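For reference, a minimal sketch of that command/args override in a pod spec, in the spirit of the page linked above (a generic debian pod here, not the Jenkins chart, and python3 is just an example package); note that command/args replace the image's own ENTRYPOINT/CMD, which is exactly the pitfall described above:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: demo
    image: debian
    # command/args replace the image's ENTRYPOINT/CMD
    command: ["/bin/sh", "-c"]
    args: ["apt-get update && apt-get install -y python3 && sleep infinity"]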
Otherwise, there's a really improper way of installing programs in a running pod: use kubectl exec -it deployment.apps/jenkins -- bash, then run your installation commands in the pod itself.
That being said, it's a poor idea to do this because if the pod restarts, it will revert to the original image without the required applications installed. If you build a new container image, your apps will remain installed each time the pod restarts. This should basically never be used, unless it is a temporary pod used as a testing environment.

Related

html to pdf on Azure using pdfkit with wkhtmltopdf

I'm attempting to write an Azure function which converts an html input to pdf and either writes this to a blob and/or returns the pdf to the client. I'm using the pdfkit python library. This requires the wkhtmltopdf executable to be available.
To test this locally on my windows machine, I installed the windows version of wkhtmltopdf and this works completely fine.
When I deployed this function to a Linux App Service on Azure, I could only execute the function successfully after running the sudo command in the Kudu tools to install wkhtmltopdf on the App Service.
sudo apt-get install wkhtmltopdf
I'm also aware that I can write this as a startup script on the App Service itself.
My question is: is there something I can do on my local Windows machine so I can just deploy the Azure function along with the Linux version of wkhtmltopdf directly from VS Code, without having to execute another script on the App Service itself?
Setting the below commands in the App configuration will make this work.
Thanks to #pamelafox for the comments.
Commands
PRE_BUILD_COMMAND or POST_BUILD_COMMAND
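As an illustration only, a hedged sketch of setting one of these through the Azure CLI (the resource group, app name, and install command are placeholders; whether a build-time apt-get carries over to the runtime container depends on your plan and setup):
az functionapp config appsettings set \
  --resource-group my-resource-group \
  --name my-function-app \
  --settings POST_BUILD_COMMAND="apt-get install -y wkhtmltopdf"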
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Create python virtual environment if specified by VIRTUALENV_NAME.
Run python -m pip install --cache-dir /usr/local/share/pip-cache --prefer-binary -r requirements.txt if requirements.txt exists in the root of repo or specified by CUSTOM_REQUIREMENTSTXT_PATH.
Run python setup.py install if setup.py exists.
Run python package commands and determine python package wheel.
If manage.py is found in the root of the repo manage.py collectstatic is run. However, if DISABLE_COLLECTSTATIC is set to true this step is skipped.
Compress virtual environment folder if specified by compress_virtualenv property key.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Build Conda environment and Python JupyterNotebook
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Set up the Conda virtual environment: conda env create --file $envFile.
If requirements.txt exists in the root of the repo or is specified by CUSTOM_REQUIREMENTSTXT_PATH, activate the environment (conda activate $environmentPrefix) and run pip install --no-cache-dir -r requirements.txt.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Package manager
The latest version of pip is used to install dependencies.
Run
The following process is applied to determine how to start an app.
If the user has specified a start script, run it.
Else, find a WSGI module and run with gunicorn.
Look for and run a directory containing a wsgi.py file (for Django).
Look for the following files in the root of the repo and an app class within them (for Flask and other WSGI frameworks).
application.py
app.py
index.py
server.py
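To make the detection step concrete, here is a minimal app.py that this lookup would pick up; Flask is just one example of a WSGI framework, and the module-level app object is what the step above refers to as an "app class":
# app.py -- minimal Flask app exposing a module-level WSGI app object
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from gunicorn"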
Gunicorn multiple workers support
To enable running gunicorn with multiple workers strategy and fully utilize the cores to improve performance and prevent potential timeout/blocks from sync workers, add and set the environment variable PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true into the app settings.
In Azure Web Apps the version of the Python runtime which runs your app is determined by the value of LinuxFxVersion in your site config. See ../base_images.md for how to modify this.
References taken from
Python runtime on App Service

Running a python script inside an nginx docker container

I'm using nginx to serve some of my docs. I have a python script that processes these docs for me. I don't want to pre-process the docs and then add them in before the docker container is built since these docs can grow to be pretty big and they increase in number. What I want is to run my python (and bash) scripts inside the nginx container and have nginx just serve those docs. Is there a way to do this without pre-processing the docs before building the container?
I've attempted to execute RUN python3 process_docs.py, but I keep seeing the following error:
/bin/sh: 1: python: not found
The command '/bin/sh -c python process_docs.py' returned a non-zero code: 127
Is there a way to get python3 onto the Nginx docker container? I was thinking of installing python3 using:
apt-get update -y
apt-get install python3.6 -y
but I'm not sure that this would be good practice. Please let me know the best way to run my pre-processing script.
You can use a bind mount to inject data from your host system into the container. This will automatically update itself when the host data changes. If you're running this in Docker Compose, the syntax looks like
version: '3.8'
services:
  nginx:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html
      - ./data:/usr/share/nginx/html/data
    ports:
      - '8000:80' # access via http://localhost:8000
In this sample setup, the html directory holds your static assets (checked into source control) and the data directory holds the generated data. You can regenerate the data from the host, outside Docker, the same way you would if Docker weren't involved:
# on the host
. venv/bin/activate
./regenerate_data.py --output-directory ./data
You should not need docker exec in normal operation, though it can be extremely useful as a debugging tool. Conceptually it might help to think of a container as identical to a process; if you ask "can I run this Python script inside the Nginx process", no, you should generally run it somewhere else.

How to use data in a docker container?

After installing Docker and googling for hours now, I can't figure out how to place data in a Docker container; it seems to become more complex by the minute.
What I did: installed Docker and ran the image that I want to use (kaggle/python). I also read several tutorials about managing and sharing data in Docker containers, but no success so far...
What I want: for now, I simply want to be able to download GitHub repositories and other data to a Docker container. Where and how do I need to store these files? I prefer using a GUI or even my GitHub GUI, but simple commands would also be fine I suppose... Is it also possible to place data in or access data from a Docker container that is currently not running?
Note that I also assume you are using Linux containers. This works on all platforms, but on Windows you need to tell your Docker process that you are dealing with Linux containers. (It's a dropdown in the tray.)
It takes a bit of work to understand docker and the only way to understand it is to get your hands dirty. I recommend starting with making an image of an existing project. Make a Dockerfile and play with docker build . etc.
To cover the docker basics (fast version) first.
In order to run something in docker we first need to build an image
An image is a collection of files
You can add files to an image by making a Dockerfile
Using the FROM keyword on the first line you extend an image,
adding new files to it and creating a new image
When starting a container we need to tell it what image it should use,
and all the files in the image are copied into the container's storage
The simplest way to get files inside a container:
Create your own image using a Dockerfile and copy in the files
Map a directory on your computer/server into the container
You can also use docker cp to copy files to and from a container (example below),
but that's not very practical in the long run.
(docker-compose automates a lot of these things for you, but you should probably also play around with the docker command to understand how things work. A compose file is basically a format that stores arguments to the docker command so you don't have to write commands that are multiple lines long)
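As a quick example of the docker cp option mentioned above (the container and file names are placeholders):
docker cp ./notes.txt my_container:/tmp/notes.txt       # host -> container
docker cp my_container:/tmp/results.csv ./results.csv   # container -> host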
A "simple" way to configure multiple projects in docker in local development.
In your project directory, add a docker-dev folder (or whatever you want to call it) that contains an environment file and a compose file. The compose file is responsible for telling docker how it should run your projects. You can of course make a compose file for each project, but this way you can run them easily together.
projects/
  docker-dev/
    .env
    docker-compose.yml
  project_a/
    Dockerfile
    # .. all your project files
  project_b/
    Dockerfile
    # .. all your project files
The values in .env are passed as variables to the compose file. We simply add the full path to the project directory for now.
PROJECT_ROOT=/path/to/your/project/dir
The compose file will describe each of your projects as a "service". We are using compose file version 2 here.
version: '2'
services:
  project_a:
    # Assuming this is a Django project and we override the command
    build: ${PROJECT_ROOT}/project_a
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      # Map the local source inside the container
      - ${PROJECT_ROOT}/project_a:/srv/project_a/
    ports:
      # Map port 8000 in the container to your computer at port 8000
      - "8000:8000"
  project_b:
    # Assuming this is also a Django project, using the image's default command
    build: ${PROJECT_ROOT}/project_b
    volumes:
      # Map the local source inside the container
      - ${PROJECT_ROOT}/project_b:/srv/project_b/
This will tell docker how to build and run the two projects. We are also mapping the source on your computer into the container so you can work on the project locally and see instant updates in the container.
Now we need to create a Dockerfile for each of our projects, or docker will not know how to build the image for the project.
Example of a Dockerfile:
FROM python:3.6
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
# Copy the project into the image
# We don't need that now because we are mapping it from the host
# COPY . /srv/project_a
# If we need to expose a network port, make sure we specify that
EXPOSE 8000
# Set the current working directory
WORKDIR /srv/project_a
# Assuming we run django here
CMD python manage.py runserver 0.0.0.0:8000
Now we enter the docker-dev directory and try things out. Try to build a single project at a time.
docker-compose build project_a
docker-compose build project_b
To start the project in background mode.
docker-compose up -d project_a
Jumping inside a running container
docker-compose exec project_a bash
Just run the container in the foreground:
docker-compose run project_a
There is a lot of ground to cover, but hopefully this can be useful.
In my case I run a ton of web servers of different kinds. This gets really frustrating if you don't set up a proxy in docker so you can reach each container using a virtual host. You can for example use jwilder-nginx (https://hub.docker.com/r/jwilder/nginx-proxy/) to solve this in a super-easy way. You can edit your own hosts file and make fake name entries for each container (just add a .dev suffix so you don't override real DNS names).
The jwilder-nginx container will automagically send you to a specific container based on a virtualhost name you decide. Then you no longer need to map ports to your local computer except for the nginx container that maps to port 80.
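A rough compose sketch of that setup, assuming the jwilder/nginx-proxy image and a made-up project_a.dev hostname; the VIRTUAL_HOST variable is how the proxy decides where to route requests:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # The proxy watches the Docker socket to discover containers
      - /var/run/docker.sock:/tmp/docker.sock:ro
  project_a:
    build: ${PROJECT_ROOT}/project_a
    environment:
      # Route http://project_a.dev (from your hosts file) to this container
      - VIRTUAL_HOST=project_a.dev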
For others who prefer using GUI, I ended up using portainer.
After installing portainer (which is done by using one simple command), you can open the UI by browsing to where it is running, in my case:
http://127.0.1.1:9000
There you can create a container. First specify a name and an image, then scroll down to 'Advanced container options' > Volumes > map additional volume. Click the 'Bind' button, specify a path in the container (e.g. '/home') and the path on your host, and you're done!
Add files to this host directory while your container is not running, then start the container and you will see your files in there. The other way around, accessing files created by the container, is also possible while the container is not running.
Note: I'm not sure whether this is the correct way of doing things. I will, however, edit this post as soon as I encounter any problems.
After pulling the image, you can use code like this in the shell:
docker run --rm -it -p 8888:8888 -v d:/Kaggles:/d kaggle/python
Run jupyter notebook inside the container
jupyter notebook --ip=0.0.0.0 --no-browser
This mounts the local directory into the container so it has access to it.
Then, go to the browser and hit http://localhost:8888; when I open a new kernel it's with Python 3.5. I don't recall doing anything special when pulling the image or setting up Docker.
You can find more information from here.
You can also try using datmo to easily set up an environment and track machine learning projects, making experiments reproducible. You can run the datmo task command as follows to set up a Jupyter notebook:
datmo task run 'jupyter notebook' --port 8888
It sets up your project and files inside the environment to keep track of your progress.

Controlling VMs using Python scripts

I want to manage virtual machines (any flavor) using Python scripts. For example: create a VM, start it, stop it, and be able to access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a piece of software called libvirt. Do any of you think this would help me?
Any insights on how to do this? Thanks for your help.
For aws use boto.
For GCE use Google API Python Client Library
For OpenStack use the python-openstackclient and import its methods directly.
For VMWare, google it.
For Opsware, abandon all hope as their API is undocumented and has like 12 years of accumulated abandoned methods to dig through and an equally insane datamodel back ending it.
For direct libvirt control there are Python bindings for libvirt. They work very well and closely mimic the C libraries (see the sketch below).
I could go on.
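To give an idea of the libvirt route mentioned above, a minimal sketch with the libvirt Python bindings (this assumes libvirt is installed and a qemu:///system hypervisor; the VirtualBox driver uses a different connection URI such as vbox:///session, and 'kali' is a placeholder VM name):
# Sketch: basic VM control with the libvirt Python bindings
import libvirt

conn = libvirt.open('qemu:///system')  # e.g. 'vbox:///session' for VirtualBox

# List all defined VMs and whether they are running
for dom in conn.listAllDomains():
    print(dom.name(), 'running' if dom.isActive() else 'stopped')

dom = conn.lookupByName('kali')  # placeholder VM name
if not dom.isActive():
    dom.create()     # start the VM
# dom.shutdown()     # ACPI shutdown
# dom.destroy()      # hard power-off

conn.close()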
Follow the directions here to install Docker: https://docs.docker.com/windows/ (it includes Oracle VirtualBox, if you don't already have it).
#grab the image
docker pull kalilinux/kali-linux-docker
#run a specific command
docker run kalilinux/kali-linux-docker <some_command>
#open an interactive terminal in the container
docker run -t -i kalilinux/kali-linux-docker /bin/bash
if you want to mount a local volume you can use the `-v <host_path>:<container_path>` switch in your run command
#mount the local ./training/webapp directory into the kali image at /webapp
docker run -v <absolute_path_to>/training/webapp:/webapp kalilinux/kali-linux-docker <some_command>
Note that these are run from the regular Windows prompt; to use Python you would need to wrap them in subprocess calls, as sketched below.
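A small sketch of that wrapping, using only the Python standard library (the image and command are the ones shown above):
# Sketch: driving the docker CLI from Python with subprocess
import subprocess

def docker_run(image, *args):
    """Run `docker run <image> <args...>` and return its stdout."""
    result = subprocess.run(
        ["docker", "run", "--rm", image, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(docker_run("kalilinux/kali-linux-docker", "uname", "-a"))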

How do you manage development tasks when setting up your development environment using docker containers?

I have been researching docker and understand almost everything I have read so far. I have built a few images, linked containers together, mounted volumes, and even got a sample django app running.
The one thing I cannot wrap my head around is setting up a development environment. The whole point of Docker is to be able to take your environment anywhere so that everything you do is portable and consistent. If I am running a Django app in production served by gunicorn, for example, I need to restart the server in order for my code changes to take effect; this is not ideal when you are working on your project in your local laptop environment. If I make a change to my models or views I don't want to have to attach to the container, stop gunicorn, and then restart it every time I make a code change.
I am also not sure how I would run management commands. python manage.py syncdb would require me to get inside of the container and run commands. I also use south to manage data and schema migrations python manage.py migrate. How are others dealing with this issue?
Debugging is another issue. Would I have to somehow get all my logs to save somewhere so I can look at things? I usually just look at the django development server's output to see errors and prints.
It seems that I would have to make a special dev-environment container that had a bunch of workarounds; that seems like it completely defeats the purpose of the tool in the first place though. Any suggestions?
Update after doing more research:
Thanks for the responses. They set me on the right path.
I ended up discovering fig (http://www.fig.sh/). It lets you orchestrate the linking of containers and mounting of volumes, and you can run commands, e.g. fig run container_name python manage.py syncdb. It seems pretty nice and I have been able to set up my dev environment using it.
I made a diagram of how I set it up using Vagrant (https://www.vagrantup.com/).
I just run
fig up
in the same directory as my fig.yml file and it does everything needed to link the containers and start the server. I am just running the development server when working on my Mac so that it restarts when I change Python code.
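For reference, a minimal fig.yml along the lines of what I'm describing (service names, paths and the postgres image are placeholders; the volume mount is what lets the development server pick up code changes):
# fig.yml (sketch)
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    # Mount the source so the dev server reloads on code changes
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres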
At my current gig we set up a bash script called django_admin. You run it like so:
django_admin <management command>
Example:
django_admin syncdb
The script looks something like this:
docker run -it --rm \
-e PYTHONPATH=/var/local \
-e DJANGO_ENVIRON=LOCAL \
-e LC_ALL=en_US.UTF-8 \
-e LANG=en_US.UTF-8 \
-v /src/www/run:/var/log \
-v /src/www:/var/local \
--link mysql:db \
localhost:5000/www:dev /var/local/config/local/django-admin "$@"
I'm guessing you could also hook something up like this to manage.py
I normally wrap my actual CMD in a script that launches a bash shell. Take a look at Docker-Jetty container as an example. The final two lines in the script are:
/opt/jetty/bin/jetty.sh restart
bash
This will start jetty and then open a shell.
Now I can use the following command to enter a shell inside the container and run any commands or look at logs. Once I am done I can use Ctrl-p + Ctrl-q to detach from the container.
docker attach CONTAINER_NAME
