How to run Python scripts after CMD in a Dockerfile?

I have a Docker image that exposes port 9000 for a server. After the server is running, I need to execute three Python scripts that depend on the server, so they can only run after server.py is up. However, everything after the CMD command never gets executed and the container remains stuck. What are the possible ways to run the three scripts in the same container?
FROM python:3.7.3 as build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# CMD [ "python", "./server.py" ] (The following 3 scripts depends on server.py for execution)
RUN python /app/script1.py
RUN python /app/script2.py
RUN python /app/script3.py
EXPOSE 9000
CMD [ "python", "./server.py" ]

As written in the Dockerfile reference:
There can only be one CMD instruction in a Dockerfile
The CMD instruction tells the container what its entry point is, and when running the container, that is what will be run.
If running python ./server.py is a blocking call (which I'm assuming it is, since it's called a server, and most likely responds to some kind of requests), then this won't be possible.
Instead, try restructuring your scripts so that they are run when the server is run, by doing everything you do in script1.py, script2.py, script3.py after the server has been started inside of server.py.
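For example, a minimal sketch of that restructuring (assuming server.py exposes a blocking run_server() and each script exposes a main(); these names are assumptions, not the asker's actual API):
import threading

import script1
import script2
import script3

def run_server():
    ...  # blocking server loop listening on port 9000

if __name__ == "__main__":
    # Start the server in a background thread, then run the dependent
    # scripts; in practice you may also want to wait until the port
    # actually accepts connections before running them.
    threading.Thread(target=run_server).start()
    script1.main()
    script2.main()
    script3.main()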
If instead this is about script1.py... sending requests to the server, I'd recommend not including those in the container. Instead, you can simply run those scripts, manually, from the terminal while the server container is running.

You can just execute those scripts from the command line using docker exec after the container has started. You'll just need to know the container's name:
docker exec <CONTAINER NAME> python /app/script1.py
docker exec <CONTAINER NAME> python /app/script2.py
docker exec <CONTAINER NAME> python /app/script3.py
Or just make a bash script, say my_script.sh, to run them all, and execute that:
#!/usr/bin/env bash
docker exec <CONTAINER NAME> python /app/script1.py
docker exec <CONTAINER NAME> python /app/script2.py
docker exec <CONTAINER NAME> python /app/script3.py
And then run it from the host (the script itself calls docker exec, so it is not meant to be run inside the container):
./my_script.sh
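Since the scripts depend on the server being up, you may also want to wait for port 9000 before running them. A hedged Python sketch, run on the host (the container name and the published port are assumptions):
import socket
import subprocess
import time

CONTAINER = "my-container"  # hypothetical name; find yours with docker ps

def wait_for_port(host, port, timeout=30):
    # Poll until the server accepts TCP connections or we time out.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

if wait_for_port("localhost", 9000):  # assumes docker run -p 9000:9000
    for script in ("script1.py", "script2.py", "script3.py"):
        subprocess.run(
            ["docker", "exec", CONTAINER, "python", f"/app/{script}"],
            check=True,
        )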

Related

How to restart a Python Docker container from inside

My Objective: I want to be able to restart a container based on the official Python Image using some command inside the container.
My system: I have my own Docker image based on the official Python image, which looks like this:
FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]
As you can see, the image does not execute a single Python file; instead it executes a script called start.sh, which looks like this:
#!/bin/bash
echo "Starting"
echo "Env: $ENTORNO"
exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &
All of this works perfectly, but I want the entire container based on this image to get restarted if, for example, script 3 fails.
My approach: I had two ideas for this problem. First, trying to execute a reboot command from the python3 script, something like this:
from subprocess import call
[...]
call(["reboot"])
This does not work inside the Python Debian image, because of the error:
reboot: command not found
The other approach was to mount the docker.sock inside the container, but the error this time is:
root@MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission denied
I don't know if I'm approaching these two ideas correctly, or if anyone has another idea, but any help would be very appreciated.
Update
After thinking about it, I realised you could send a signal to PID 1 (your entrypoint), trap it, and use a handler to exit with an appropriate code so that Docker will reschedule it.
Here's an MRE:
Dockerfile
FROM python:3.9
WORKDIR /app
COPY ./ /app
ENTRYPOINT ["./start.sh"]
start.sh
#!/usr/bin/env bash
python script.py &
# This traps user defined signal and kills the last command
# (`tail -f /dev/null`) before exiting with code 1.
trap 'kill ${!}; echo "Killed by backgrounded process"; exit 1' USR1
# Launches `tail` in the background and sets this program to wait
# for it to finish, so that it does not block execution
tail -f /dev/null & wait $!
script.py
import os
import signal
# Process 1 will be your entrypoint if you declared it in `exec-form`*
print("Sending signal to stop container")
os.kill(1, signal.SIGUSR1)
* exec form: the JSON-array ENTRYPOINT syntax used in the Dockerfile above.
Testing it
> docker build . -t test
> docker run test
Sending signal to stop container
Killed by backgrounded process
> docker inspect $(docker container ls -n 1 -q) --format='{{.State.ExitCode}}'
1
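In a real script you would presumably only signal PID 1 when something actually fails; a hedged variant of script.py (do_work is a placeholder for the real work):
import os
import signal

def do_work():
    raise RuntimeError("simulated failure")  # placeholder for the real work

try:
    do_work()
except Exception as exc:
    print(f"Fatal error, asking the entrypoint to exit: {exc}")
    # SIGUSR1 is trapped by start.sh above, which then exits with code 1.
    os.kill(1, signal.SIGUSR1)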
Original post
I think the safest bet would be to instruct Docker to restart your container when there's some failure. Then you'd only have to exit your program with a non-zero code (i.e. run exit 1 from your start.sh) and Docker will restart it from scratch.
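For instance, a minimal sketch of exiting non-zero on failure, so the restart policies below kick in (the failing work is a placeholder):
import sys

def main():
    raise RuntimeError("simulated failure")  # placeholder for the real work

try:
    main()
except Exception:
    # A non-zero exit code is what `--restart on-failure` reacts to.
    sys.exit(1)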
Option 1: docker run --restart
Related documentation
docker run --restart on-failure <image>
Option 2: Using docker-compose
Version 3
In your docker-compose.yml you can set a restart_policy for the service you're interested in restarting (in version 3 it is nested under the deploy key), e.g.:
version: "3"
services:
  app:
    ...
    deploy:
      restart_policy:
        condition: on-failure
    ...
Version 2
Before version 3, the same policy could be applied with the restart directive, which allows for less configuration.
version: "2"
services:
app:
...
restart: "on-failure"
...
Is there any reason why you are running three processes in the same container? As per microservice architecture basics, only one process should run in a container, so you should run three containers for the three scripts. All three scripts should have the logic that if one of the three containers is not reachable, it should get killed.
Well, in the end the solution was much simpler than I expected.
I started from the point where I mount the Docker socket inside the container (I know this practice is not recommended but, in my case, I know it does not pose security problems), using this in docker-compose:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Then it was as simple as using the Docker library for Python, which provides a complete SDK through that socket and allowed me to restart the container from inside the Python script in an ultra-simple way.
import docker
[...]
docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart()
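If you'd rather not hard-code the container name, a container's hostname defaults to its own short container ID, so (assuming the hostname hasn't been overridden) the container can look itself up:
import socket

import docker

docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
# By default a container's hostname is its short container ID, which
# containers.get() accepts as an identifier.
docker_client.containers.get(socket.gethostname()).restart()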

How to create a virtual machine programmatically?

I'm trying to find a way to run a .exe application from Python (I mean making an isolated box where you can run .exe programs), so that when you run the application it only affects the folder where the Python script is.
Dockerfile
FROM python:3
ADD main.py .
ADD the.exe /
CMD [ "python", "main.py"]
main.py
import os
os.startfile("/the.exe")
Build (note that image names must be lowercase)
docker build -t isolatedexe:latest .
Run
docker run isolatedexe:latest
Next, interact with the running container using
docker exec -i -t <container> /bin/bash
Note: find the container name or id with docker ps

How to run a simple Python script without writing a complete Dockerfile?

I have set up Docker Toolbox on a Win 10 machine. I have some simple single-file Python scripts that I want to run in Docker, just for learning purposes.
Started learning Docker today, and Python 3 days ago.
I assume I have set up Docker correctly, I can run the example hello-world image. No error messages during setup.
I am following an instruction from here https://runnable.com/docker/python/dockerize-your-python-application,
which says:
If you only need to run a simple script (with a single file), you can avoid writing a complete Dockerfile. In the examples below, assume you store my_script.py in /usr/src/widget_app/, and you want to name the container my-first-python-script:
docker run -it --rm --name my-first-python-script -v "$PWD":/usr/src/widget_app python:3 python my_script.py
If I type pwd, it shows:
/c/Program Files/Docker Toolbox
And the script I want to run is located here:
C:\Docker\Python\my_script.py
This is what I think should work:
docker run -it --rm --name my-first-python-script -v "$PWD":/c/Docker/Python python:3 python my_script.py
No matter how I try to specify the file directory, I get an error:
python: can't open file 'my_script.py': [Errno 2] No such file or directory
When you run -v "$PWD":/c/Docker/Python, you are saying you want to mount your current working directory at the path /c/Docker/Python inside the container, which isn't what you want. What you are trying to do is mount C:\Docker\Python on your host at the container folder /usr/src/widget_app. (Also note that Docker Toolbox only shares paths under C:\Users with its VM by default, so a path like C:\Docker may first need to be added as a shared folder in VirtualBox.)
This command will put your script inside the container path /usr/src/widget_app, then run it:
docker run -it --rm --name my-first-python-script -v /c/Docker/Python:/usr/src/widget_app python:3 python /usr/src/widget_app/my_script.py

Run Python from Docker

I'm trying out Docker these days and I want to create Python virtual environments in Docker. I downloaded Miniconda3 from Docker Hub and tested it with a basic hello world program written in Python.
I ran:
docker run -it continuumio/miniconda3 /bin/bash
Then on another terminal I ran:
docker exec laughing_wing "python ~/Documents/Test/hello_world.py"
Where the name of docker container is laughing_wing, and my hello_world.py is in Documents/Test directory.
But running the second command I get:
"OCI runtime exec failed: exec failed: container_linux.go:344:
starting container process caused "exec: \"python
~/Documents/Test/hello_world.py\": stat python
~/Documents/Test/hello_world.py: no such file or directory": unknown"
I'm confused about this.
Looks like you're trying to have the Docker container run a Python file from your machine. The Docker container is isolated from its host, so you need to either create your own Docker image in which you add the file, or mount the ~/Documents/Test directory into your Docker container. Something like this:
docker run -it -v ~/Documents/Test:/Test continuumio/miniconda3 /bin/bash
docker exec <container_name> python /Test/hello_world.py

How to execute a local Python script inside a Docker container from another Python script?

Let me clarify what I want to do.
I have a Python script on my local machine that performs a lot of stuff, and at a certain point it has to call another Python script that must be executed inside a Docker container. That script takes some input arguments and returns some results.
So i want to figure out how to do that.
Example:
def function():
    # do stuff
    ...
    # do more stuff
    # call another local script that must be executed inside a docker container
    result = execute_python_script_into_a_docker(script_arguments)
The container has been launched in a terminal as:
docker run -it -p 8888:8888 my_docker
You can add your file inside the Docker container thanks to the -v option (note that the host path must be absolute, as explained below):
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
And execute your Python file inside the container with:
python /myFile.py
or directly from the host:
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker python /myFile.py
And even if your container is already running:
docker exec -ti docker_name python /myFile.py
docker_name can be found with the docker ps command.
Or you can specify a name in the run command, like:
docker run -it --name docker_name -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
The pattern is:
-v absoluteHostPath:absoluteRemotePath
You can mount a folder too, in the same way:
-v $(pwd)/myFolder:/customPath/myFolder
More details at docker documentation.
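Since the question asks to do this from another Python script, the same docker exec call can also be driven with subprocess (the container name and arguments are assumptions):
import subprocess

# Runs /myFile.py inside the already-running container and captures its output.
result = subprocess.run(
    ["docker", "exec", "docker_name", "python", "/myFile.py", "arg1"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)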
You can use Docker's Python SDK library. First you need to move your script into the container; I recommend you do it when you create the container or when you start it, as Callmemath mentioned:
docker run -it -v myFile.py:/myFile.py -p 8888:8888 my_docker
Then to run the script using the library:
import docker
...
client = docker.client.from_env()  # equivalent to docker.from_env()
container = client.containers.get(CONTAINER_ID)
exit_code, output = container.exec_run("python your_script.py script_args")
...
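A self-contained sketch of the same approach (the container name, script path, and arguments are assumptions):
import docker

client = docker.from_env()
container = client.containers.get("docker_name")  # hypothetical container name
# exec_run returns the command's exit code and its combined output as bytes.
exit_code, output = container.exec_run(["python", "/myFile.py", "arg1"])
print(exit_code, output.decode())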
You have to use docker exec -it container_name python /filename
Note: to use docker exec, the container must already be running (i.e. started with docker run).
