I am new to Docker and am trying to run multiple Python processes in one container.
Though it's not recommended, it should work, as suggested here: https://docs.docker.com/engine/admin/multi-service_container/
My Dockerfile:
FROM custom_image
MAINTAINER Shubham
RUN apt-get update -y
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]
start.sh:
nohup python flask-app.py &
nohup python sink.py &
nohup python faceConsumer.py &
nohup python classifierConsumer.py &
nohup python demo.py &
echo lastLine
Run command:
docker run --runtime=nvidia -p 5000:5000 out_image
The same shell script works when I run it from a terminal.
I tried without nohup; it didn't work.
I also tried starting the other Python processes with Python's subprocess module; that didn't work either.
Is it possible to run multiple processes without supervisord or docker-compose?
Update: I am not getting any error; only "lastLine" is printed and the Docker container exits.
The Docker docs have examples of how to do this. If you're using Python, then supervisord is a good option.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
The advantage of this over running a bunch of background processes is you get better job control and processes that exit prematurely will be restarted.
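For reference, the supervisord.conf that the Dockerfile above copies could look roughly like this; it is a minimal sketch, and the program names and paths are assumptions based on the COPY lines above:
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:my_first_process]
command=/my_first_process
autorestart=true

[program:my_second_process]
command=/my_second_process
autorestart=true
The nodaemon=true setting keeps supervisord itself in the foreground, so the container stays alive while it manages the two programs.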
Your problem is putting everything in the background. Your container starts, executes all the commands, and then exits as soon as the CMD process finishes, regardless of any background processes still running; Docker does not know about them.
You could try running everything else in the background but leaving the last command,
python demo.py
as it is, in the foreground. This would keep the container alive, assuming demo.py does not exit; see the sketch below.
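A revised start.sh along those lines might look like this (a sketch, assuming demo.py is the long-running process and the other scripts are genuinely meant to run in the background):
#!/bin/bash
# background workers; their output goes to nohup.out files
nohup python flask-app.py &
nohup python sink.py &
nohup python faceConsumer.py &
nohup python classifierConsumer.py &
# keep the last process in the foreground so the container does not exit;
# exec makes demo.py the container's main process so it receives stop signals
exec python demo.py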
You can also run the container in detached mode, or redirect nohup's output to a file such as log/nohup.out; since Docker runs the command over a socket rather than an attached terminal by default, the usual nohup.out redirection does not happen.
Related
I have set up Flask on my Raspberry Pi and I am using it for the sole purpose of acting as a server for an XML file, which I created with a Python script to pass data to an iPad app (iRule).
My RPI is set up as headless and my access is with Windows 10 using PuTTY, WinSCP and TightVNC Viewer.
I run the server by opening a terminal window and running the following command:
sudo python app1c.py
This sets up the server and I can access my XML file quite well. However, when I turn off the Windows machine and close the PuTTY session, the Flask server shuts down!
How can I set it up so that the Flask server continues even when the Windows machine is turned off?
I read in the Flask documentation:
While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well and by default serves only one request at a time.
Then they go on to give examples of how to deploy your Flask application to a WSGI server! Is this necessary given the simple application I am dealing with?
Use:
$ sudo nohup python app1c.py > log.txt 2>&1 &
nohup allows you to run a command, process, or shell script that can continue running in the background after you log out of the shell.
> log.txt: forwards the output to this file.
2>&1: redirects all stderr to stdout, so errors also end up in log.txt.
The final & allows you to run the command/process in the background of the current shell.
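To convince yourself the server survives the logout, after reconnecting you can (for example) check that the process is still there and follow the log; pgrep and tail are standard tools for this:
$ pgrep -af app1c.py
$ tail -f log.txt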
Install the Node package forever from https://www.npmjs.com/package/forever
Then use
forever start -c python your_script.py
to start your script in the background. Later you can use
forever stop your_script.py
to stop the script.
You have multiple options:
Easy: detach the process with &, for example:
$ sudo python app1c.py &
Medium: install tmux with apt-get install tmux,
launch tmux, start your app as before, and detach with Ctrl+B followed by D.
More complex:
Run your Flask script with a WSGI server such as uWSGI or Gunicorn, typically behind nginx (see the sketch below).
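For the WSGI route, a minimal Gunicorn invocation could look like the following; it assumes app1c.py defines a Flask object called app, so adjust the module and variable names to your script:
$ pip install gunicorn
$ gunicorn --bind 0.0.0.0:8000 --daemon app1c:app
The --daemon flag detaches Gunicorn from the terminal, so it keeps serving after you close the PuTTY session.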
I'd been wrestling with this lately, so I went for a more robust option.
pm2 start app.py --interpreter python3
Use PM2 for things like this. I also use it to run a Node.js app and a Python app on a single server.
Use:
$ sudo python app1c.py >> log.txt 2>&1 &
">> log.txt" appends all stdout to the log.txt file (you can check the application logs in it).
"2>&1" redirects all stderr into log.txt as well (so all the error logs end up there too).
"&" at the end makes it run in the background.
You get the process ID immediately after executing this command, and can use it to monitor or verify the process:
$ sudo ps -ef | grep <process-id>
Hope it helps!
You can always use nohup to run any scripts as background process.
nohup python script.py
This will run your script in the background and also append its logs to a nohup.out file, which will be located in the directory from which you run the command.
Make sure you close the terminal rather than pressing Ctrl+C; this allows it to keep running in the background even after you log out.
To stop it, SSH into the Pi again, run ps -ef | grep script.py, and then kill -9 XXXXX,
where XXXXX is the PID you get from the ps command.
I've always found a detached screen session to be best for use cases such as these.
Run:
screen -m -d sudo python app1c.py
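You can check on it later by listing and reattaching to the session, which is standard screen usage:
$ screen -ls
$ screen -r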
I was trying to run my Flask app for testing in my GitHub CI, and the step where I was running the app was getting stuck forever. The reason was that it never released the command line.
The best solution I found was a combination of two other answers here:
nohup python script.py &
My existing shell script is trying to:
Start a Docker container,
Change the permission of the Python file in it,
Run the while-looped (streaming) Python job in Docker (not wait for it),
Run a Python job outside (on the local machine) that feeds data to the Docker Python job, which waits for that data.
#!/bin/bash
clear
docker run -d -v /home/ubuntu/Downloads/docker_work/test_py_app/app:/workspace/app -p 8881:8888 -p 5002:5002 --gpus all --name pytorch nvcr.io/nvidia/pytorch:server-v1.0 tail -f /dev/null
sudo docker exec -it pytorch chmod 777 /workspace/server/server.py
sudo docker exec -it pytorch python /workspace/server/server.py
python /home/ubuntu/PycharmProjects/test/pipeline/client.py
exit
If the two Python programs are run from two different shells, it works perfectly fine, but the problem with the existing script is that the shell gets stuck at the server Python file running inside Docker. How can I fire it in an async fashion, so that immediately after triggering the first Python job it moves on and starts the local machine's Python job?
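One way to make the server step non-blocking, sketched against the script above, is to drop the -it flags for that step and use docker exec's detached mode (or simply append & to background it):
#!/bin/bash
# fire the streaming server inside the container without waiting for it
sudo docker exec -d pytorch python /workspace/server/server.py
# the local client can now start immediately and feed data to the server
python /home/ubuntu/PycharmProjects/test/pipeline/client.py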
I am trying to host a Python HTTP server, and the following works fine.
FROM python:latest
COPY index.html /
CMD python3 -m http.server
But when trying with a Python virtualenv, I am facing issues.
FROM python:3
COPY index.html .
RUN pip install virtualenv
RUN virtualenv --python="python3" .virtualenv
RUN .virtualenv/bin/pip install boto3
RUN python3 -m http.server &
CMD ["/bin/bash"]
Please help.
I just want to point out that using virtualenv within a Docker container might be redundant.
With docker, you are encapsulating your one specific application along with its dependencies (libraries, frameworks, boto3 in your case), as opposed to your local machine where you might have several applications being developed, each with different dependencies.
Thus, I would recommend considering again the necessity of virtualenv within docker.
Second, the command:
RUN python3 -m http.server &
is also wrong here: RUN executes at image build time, and putting the server in the background is bad practice in any case. You want to start it with the CMD instruction in the foreground, so it runs as the first process (PID 1). It will then receive all Docker signals and start automatically when the container starts:
CMD ["virtualenv/bin/python3", "-m", "http.server"]
I feel there is a subtle difference between the --tty and --interactive switches of the docker run command that I don't quite grasp:
--interactive, -i: Keep STDIN open even if not attached
--tty , -t: Allocate a pseudo-TTY
So I decided to run some tests.
First I created a basic Python script, which continuously prints a string.
Then I created a basic docker image, which will run this script when a container is started.
my_script.py
import time

while True:
    time.sleep(1)
    print('still running...')
Dockerfile
FROM python:3.8.1-buster
COPY my_script.py /
CMD [ "python3", "/my_script.py"]
Built using command:
docker build --tag pytest .
Test 1
I run docker run --name pytest1 -i pytest, to test the interactive behaviour of the container.
Nothing is printed to the console, but when I press Control+C the python script is interrupted and the container stops running.
This confirms my thinking that stdin was open on the container and my keyboard input entered the container.
Test 2
I run docker run --name pytest1 -t pytest, to test the pseudo-tty behaviour of the container. It repeatedly prints still running... to the console, ánd when I press Control+C the python script is interrupted and the container stops running.
Test 3
I run docker run --name pytest1 -it pytest, to test the combined behaviour. The behaviour is the same as in Test 2.
Questions
What are the nuances I'm missing here?
Why would one use the combined -it switches, as you often see, if there is no benefit to the -t switch?
Does the --tty switch just keep bóth stdin and stdout open?
The -t option is needed if you want to interact with a shell like /bin/sh, for instance. The shell works by controlling a TTY; no TTY available, no shell.
We use -i in combination with -t to be able to type commands into the shell we opened.
A few tests you can reproduce to understand:
docker run alpine /bin/sh: the container exits; the shell needs stdin to wait on.
docker run -i alpine /bin/sh: the container stays up, but the shell won't start; we cannot type commands.
docker run -t alpine /bin/sh: the shell starts, but we are stuck; the keys we press are not interpreted.
docker run -it alpine /bin/sh: yeah, our shell is working.
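As a side note on -i without -t: it is handy when you want to pipe scripted input into the container rather than type interactively, for example:
$ echo "echo hello from the container" | docker run -i --rm alpine /bin/sh
Here the shell reads its commands from the pipe, so no pseudo-TTY is needed.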
I am learning how to develop Django application in docker with this official tutorial: https://docs.docker.com/compose/django/
I have successfully run through the tutorial, and
docker-compose run web django-admin.py startproject composeexample . creates the image
docker-compose up runs the application
The question is:
I often use python manage.py shell to run Django in shell mode, but I do not know how to achieve that with docker.
I use this command (when run with compose)
docker-compose run <service_name> python manage.py shell
where <service_name> is the name of the Docker service (in docker-compose.yml).
So, in your case the command will be
docker-compose run web python manage.py shell
https://docs.docker.com/compose/reference/run/
When running with a plain Dockerfile (without Compose):
docker exec -it <container_id> python manage.py shell
Run docker exec -it --user desired_user your_container bash. Running this command has a similar effect to SSHing into a remote server: after you run it, you will be inside the container's bash terminal and can run all of Django's manage.py commands.
Inside your container just run python manage.py shell
You can use docker exec on the container to run commands like the one below.
docker exec -it container_id python manage.py shell
If you're using docker-compose, you shouldn't always run additional containers when it's not needed, as each run starts a new container and wastes a lot of disk space; you can end up running multiple containers when you don't have to. Basically, it's better to:
Start your services once with docker-compose up -d
Execute (instead of running) your commands:
docker-compose exec web ./manage.py shell
Or, if you don't want to start all services (because, for example, you want to run only one Django command), you should pass the --rm flag to docker-compose run, so the container is removed right after the command finishes.
docker-compose run --rm web ./manage.py shell
In this case, when you exit the shell, the container created by the run command is destroyed, so you save a lot of disk space.
If you're using Docker Compose (the docker compose up command) to spin up your applications, then after running it you can open an interactive shell in the container with the following command:
docker compose exec <container id or name of your Django app> python3 <path to your manage.py file, for example, src/manage.py> shell
Keep in mind the above is using Python version 3+ with python3.
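For example, if the Compose service is named web and manage.py sits at src/manage.py (both names are assumptions; substitute your own), the command would be:
docker compose exec web python3 src/manage.py shell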