Adding python3 to PATH on an Ubuntu 22.04 AWS EC2 instance - python

I am currently following a docker tutorial and am using a VM to follow along. Essentially the tutorial is about creating docker images from existing containers. The task involves setting the container to use python as the executable.
docker commit --change='CMD ["python", "-c", "import this"]' 07a3f5a8b944 ubuntu_python
Whilst I am actually using python3, the error repeats regardless of which version of python I state in the CMD.
I then attempt to run an image based on the container using
docker run ubuntu_python
but I will get this error
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "python": executable file not found in $PATH: unknown.
It looks like I need to add python to the PATH inside the container (the error comes from the container, not from the VM itself), but Google searching is not yielding anything useful.
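As a sanity check, this should show which PATH the image uses and whether python or python3 exists on it at all (a generic check, not something from the tutorial):
docker run --rm ubuntu_python sh -c 'echo $PATH; command -v python python3'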
Any suggestions would be much appreciated. Thanks!

Related

Receiving OSError: [Errno 8] Exec format error in app running in Docker Container

I have a React/Flask app running within a Docker container. I have no issue building the project with docker-compose and running the app itself in the container. Where I run into issues is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return it to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try to encrypt through the app running in Docker, I am given the following error message:
OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe'
I know the following command works in the CLI if invoked outside of the Docker container (invoked at the same directory level as it would be in the app):
./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr
I am using the following Python code to invoke the command in the API route which is where it fails:
proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True)
The Dockerfile for my backend is pasted below:
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
I have tried to tackle the issue a few different ways:
1. By default my Docker container was built with an ARM64 architecture. I read that the OSError was caused by the architecture not being AMD64, so I rebuilt the container with AMD64, and it gave me the same error.
2. In case this was a permissions error, I ran chmod +rwx on encrypt.exe through the Dockerfile when building the container. Pretty sure it has nothing to do with permissions, especially as it still failed.
3. I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile.
At the end of the day I know I am failing when using subprocess.Popen, so I am positive I must be missing something when invoking the script through Python, or there is a configuration in my Docker container that is preventing this functionality. My machine is a MacBook Pro, on which the script runs fine. The script has also been used successfully on a machine running Linux.
Any chances folks have seen similar issues arise with this error? Thanks in advance!
So thanks to David Maze's comment on this, I followed the lead that the executable I wanted to run might need to be built within the Dockerfile. I destroyed my original container, added a step that runs the Makefile which generates the executable, and then ran the program through the app in the Docker container. This did the trick! The likely explanation is that an Exec format error means the binary was compiled for a different architecture or OS than the one executing it; a binary built on the Mac host won't necessarily run inside a Linux container, while compiling it inside the image guarantees it matches the container's platform.
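Based on that, the amended Dockerfile might look roughly like this (a sketch: the Makefile location ./app/Crypto is an assumption based on the path of encrypt.exe, and the full python:3 image already ships with make and gcc):
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# build the C helper inside the image so the binary matches the container's platform
RUN make -C ./app/Crypto
CMD ["python", "main.py"]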

Installing packages in a Kubernetes Pod

I am experimenting with running Jenkins on a Kubernetes cluster. I have gotten Jenkins running on the cluster using a Helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.
In my Jenkinsfile, I have tried the following:
1.
withPythonEnv('python3.9') {
    pysh 'pip3 install pytest'
}
stage('Test') {
    sh 'python --version'
}
But it says java.io.IOException: error=2, No such file or directory.
It is not feasible to always run the Python install command hardcoded into the Jenkinsfile. After some research I found out that I would have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for pods; there are only PostStart and PreStop.
I'm not sure how to install Python and MongoDB and use that as a template for the Kubernetes pods.
This is the default YAML file that I used for the helm chart - jenkins-values.yaml
Also I'm not sure if I need to use helm.
You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this:
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y appname
USER jenkins
Then build the image, push it to a container registry, and replace the image: jenkins/jenkins reference in your Helm chart with the name of the image you built plus the registry you uploaded it to. With this, your applications are installed in your container every time it runs. (Note the USER root line above: the jenkins/jenkins image runs as a non-root user by default, so package installs need root, and apt needs an apt-get update first because the image ships without package lists.)
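That override might sit in jenkins-values.yaml roughly like this (a sketch; the exact key names depend on the chart version, with older charts using master: instead of controller:, and registry.example.com/my-jenkins is a placeholder for the image built above):
controller:
  image: registry.example.com/my-jenkins
  tag: "1.0"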
The second way, which works but isn't perfect, is to run environment commands, with something like what is described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The issue with this method is that some images already rely on their startup command, and by redefining the entrypoint you can stop the container's original command from ever running, causing the container to fail.
(This should work if added to the helm chart in the deployment section, as they should share roughly the same format)
Otherwise, there's a really improper way of installing programs in a running pod: use kubectl exec -it deployment.apps/jenkins -- bash, then run your installation commands in the pod itself.
That being said, it's a poor idea because if the pod restarts, it reverts to the original image without the required applications installed; if you build a new container image instead, your apps remain installed each time the pod restarts. This should basically never be used, unless it is a temporary pod serving as a testing environment.

docker build image not working in PyCharm

When attempting to build an image from a Dockerfile in PyCharm, the following error is shown:
Deploying 'test1 Dockerfile: Dockerfile'...
Building image...
Failed to deploy 'test1 Dockerfile: Dockerfile': com.github.dockerjava.api.exception.DockerClientException: Failed to parse docker configuration file
caused by: java.io.IOException: Failed to parse docker config.json
Building outside of PyCharm from the command line works fine.
I assume I'm meant to set the location of the .docker/config.json file, but I don't see where in PyCharm to do that.
Versions
PyCharm Community 2018.2.1
PyCharm Docker plugin 182.3684.90
Docker version 18.03.1-ce, build 9ee9f40
Ubuntu 16.04
Something could have tampered with ~/.docker/config.json. Try deleting it and restarting the Docker service afterwards.
You might want to docker login again in case you were logged in to some private registries.
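A minimal sketch of those steps (moving the file aside instead of deleting it is just a precaution; systemctl applies to the Ubuntu 16.04 setup above):
# move the possibly corrupted config out of the way, then restart Docker
mv ~/.docker/config.json ~/.docker/config.json.bak
sudo systemctl restart docker
# log back in to any private registries
docker login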

I'm having trouble using docker-py in a development environment on OSX

I am creating Python code that will be built into a docker image.
My intent is that the docker image will have the capability of running other docker images on the host.
Let's call these docker containers "daemon" and "workers," respectively.
I've proven that this concept works by running "daemon" using
-v /var/run/docker.sock:/var/run/docker.sock
I'd like to be able to write the code so that it will work anywhere that there exists a /var/run/docker.sock file.
Since I'm working on an OSX machine, I have to use the Docker Quickstart terminal. As such, on my system there is no docker.sock file.
The docker-py documentation shows this as the way to capture the docker client:
from docker import Client
cli = Client(base_url='unix://var/run/docker.sock')
Is there some hackery I can do on my system so that I can instantiate the docker client that way?
Can I create the docker.sock file on my file system and have it symlinked to the VM docker host?
I really don't want to have to build my docker image every time I want to test a single-line code change... help!!!
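One approach that should work in both environments: docker-py can assemble its connection settings from the DOCKER_HOST / DOCKER_CERT_PATH / DOCKER_TLS_VERIFY variables that the Docker Quickstart terminal exports. A sketch, assuming those variables are set in the shell Python runs from (docker-py 1.x API, matching the Client class above):
import docker
from docker.utils import kwargs_from_env

# kwargs_from_env() reads DOCKER_HOST / DOCKER_CERT_PATH / DOCKER_TLS_VERIFY;
# with no such variables set (e.g. on a Linux host), Client() falls back to
# the local unix socket, so the same code runs in both places.
cli = docker.Client(**kwargs_from_env(assert_hostname=False))
print(cli.version())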

Docker from python

Please be gentle, I am new to docker.
I'm trying to run a docker container from within Python but am running into some trouble due to environment variables not being set.
For example I run
import os
os.popen('docker-machine start default').read()
os.popen('eval "$(docker-machine env default)"').read()
which will start the machine but does not set the environment variables, so I cannot then issue a docker run command.
Ideally it would be great if I did not need to run the eval "$(docker-machine env default)". I'm not really sure why I can't set these to something static every time I start the machine.
So I am trying to set them using the bash command, but Python just returns an empty string and then gives an error if I try to do docker run my_container.
Error:
Post http:///var/run/docker.sock/v1.20/containers/create: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
I'd suggest running these two steps to start the machine in a bash script first; each os.popen call spawns its own short-lived shell, so environment variables set there never reach the Python process. That same bash script can then call your Python script, which can access Docker with docker-py:
import os
import docker

# DOCKER_HOST was exported by `eval "$(docker-machine env default)"` before Python started
docker_host = os.environ['DOCKER_HOST']
client = docker.Client(base_url=docker_host)
client.create_container(...)
...
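The wrapper script itself might look something like this (a sketch; my_script.py is a placeholder for your own script):
#!/bin/bash
# start the VM and export DOCKER_HOST / DOCKER_CERT_PATH / DOCKER_TLS_VERIFY
docker-machine start default
eval "$(docker-machine env default)"
# the Python process inherits the variables exported above
python my_script.py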
