How does ENTRYPOINT logging work with Flask inside Docker (Python)?

My Docker container runs gunicorn, which points to myapp.py, which uses Flask.
cat Dockerfile:
FROM python:3.7
<snip, not important>
USER nobody
ENTRYPOINT ["/usr/sbin/flask-docker-entrypoint.sh"]
EXPOSE 8000
flask-docker-entrypoint.sh:
#!/bin/bash
/usr/local/bin/gunicorn myapp:app -c /local/gunicorn.conf.py
All of this works!
The Docker daemon's logging driver is set to json-file. I tell gunicorn to log to stdout (in version 20 that is the default). Simple logging statements in myapp.py show up in docker logs. Why is this?
ps -ef
UID PID PPID C STIME TTY TIME CMD
nobody 1 0 0 22:01 ? 00:00:00 /bin/bash /usr/sbin/flask-docker-entrypoint.sh
nobody 12 1 0 22:01 ? 00:00:00 /usr/local/bin/python /usr/local/bin/gunicorn myapp:app -c /external/
nobody 15 12 0 22:01 ? 00:00:00 /usr/local/bin/python /usr/local/bin/gunicorn myapp:app -c /external/
nobody 57 0 7 22:44 pts/0 00:00:00 bash
nobody 62 57 0 22:44 pts/0 00:00:00 ps -e
flask-docker-entrypoint.sh is PID 1, so it logs to stdout; I get that. Do all children of PID 1 also inherit the ability to log to stdout? Neither gunicorn nor myapp.py is PID 1, yet both log to stdout?
Thank you

The default behavior in Unix-like environments is for a process to inherit its parent's stdout (and stdin and stderr). You can demonstrate this in your local shell easily enough:
#!/bin/sh
# This is script1
./script2
#!/bin/sh
# This is script2
echo hi there
$ ./script1
hi there
$ ./script1 > log
$ cat log
hi there
In the last example, whether script1's output goes to the console or is redirected into a log file, script2 inherits that same stdout when it runs as a subprocess.
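The same inheritance is why logging statements in myapp.py reach docker logs: gunicorn inherits the entrypoint script's stdout, and its workers inherit gunicorn's. A minimal Python sketch of the same rule (the file name log is just an example):
import subprocess

# By default the child inherits this process's stdin/stdout/stderr, so
# its output goes wherever ours goes: the console, a redirected file,
# or Docker's json-file log driver.
subprocess.run(["echo", "hi there from the child"])

# Only pass stdout explicitly when you want something different:
with open("log", "w") as fh:
    subprocess.run(["echo", "hi there"], stdout=fh)  # child writes into 'log'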
The reason gunicorn isn't PID 1 is that you have a shell wrapper. You can use the exec shell built-in to replace the shell process with the thing it needs to run:
#!/bin/sh
exec /usr/local/bin/gunicorn myapp:app -c /local/gunicorn.conf.py
Unless you need to do more setup, it might be simpler to put the command directly into the Dockerfile. (CMD is easier to override at runtime to do things like get debugging shells if you need to; this replaces your existing ENTRYPOINT line.)
CMD ["gunicorn", "myapp:app", "-c", "/local/gunicorn.conf.py"]

Related

Unexpected output of bash 'ps -p $$' command returned by 'subprocess.run()'

I am running Linux Mint 18.1 and Python 3.9. To find out which shell is executing my shell commands I have started using ps -p $$, which is expected to report the shell as the value of CMD.
When using subprocess.run() in Python, not specifying the shell or specifying executable='sh' gives a CMD value of sh for both commands I pass (see code below), but when I specify executable='bash' I get different results (ps and bash).
The GNOME Terminal, which runs bash, prints bash as the CMD value when running ps -p $$.
Why does the code below print ps as CMD in the case of ps -p $$ but bash in the case of ps -p $$;echo $0?
from subprocess import run

print(run('ps -p $$ ', capture_output=True, shell=True,
          encoding='utf-8', executable='bash').stdout)
print(run('ps -p $$;echo $0', capture_output=True, shell=True,
          encoding='utf-8', executable='bash').stdout)
which prints:
PID TTY TIME CMD
22928 ? 00:00:00 ps
PID TTY TIME CMD
22929 ? 00:00:00 bash
bash
UPDATE in response to the given answer and comments:
@Charles Duffy: YES, without Python involved, running the commands in GNOME Terminal with bash -c gives the same behavior as subprocess.run() in Python, but ... I don't get it when running without the preliminary bash -c.
@Barmar: To check the explanation in your answer I introduced a third command, echo $0;ps -p $$, to see if the last command in the sequence would give a CMD value of ps. Below is the result of a terminal session:
$ bash -c 'ps -p $$'
PID TTY TIME CMD
23386 pts/1 00:00:00 ps
$ bash -c 'ps -p $$; echo $0'
PID TTY TIME CMD
23388 pts/1 00:00:00 bash
bash
$ bash -c 'echo $0;ps -p $$'
bash
PID TTY TIME CMD
23395 pts/1 00:00:00 bash
What have I misunderstood in your answer, given that I expected the third command to give ps as the CMD value?
This is a bash optimization. If the command line is just a single command, it is implemented by simply calling execv() rather than forking a child to execute the command. This replaces the shell process with the ps program, keeping the same PID. It's as if you executed:
print(run('exec ps -p $$', ...))
You don't see it in the second attempt because ps is not the last command in the sequence. It has to fork a child process, while the shell keeps running to wait for it to exit and execute the following commands.
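You can watch the optimization flip on and off from Python by forcing a second command; here the trailing : is just a no-op that keeps the shell alive (an illustrative variation on the question's own snippet):
from subprocess import run

# A single simple command: bash execs ps in place, so the PID that $$
# expanded to is now ps's own PID, and CMD reads ps.
print(run('ps -p $$', capture_output=True, shell=True,
          encoding='utf-8', executable='bash').stdout)

# With another command queued after ps, bash must fork a child for ps
# and keep running itself, so $$ still names bash and CMD reads bash.
print(run('ps -p $$; :', capture_output=True, shell=True,
          encoding='utf-8', executable='bash').stdout)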

Start a remote script from a Mac OS X machine via SSH command

I am trying to start a python script on my VM from my local Mac OS
I did
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server;pkill -f server.py;./server.py;"
Result
It SSHes in, quickly runs those commands, and then quickly logs me out. I was expecting the SSH session to stay open.
My script is NOT running ...
ps -aux | grep python
root 901 0.0 0.2 553164 18584 ? Ssl Jan19 20:37 /usr/bin/python -Es /usr/sbin/tuned -l -P
root 15444 0.0 0.0 112648 976 pts/0 S+ 19:16 0:00 grep --color=auto python
But if I do this:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server"
and then run
./server.py
by hand, it works.
Am I missing anything?
You might need to state the shell that starts your script, i.e. /bin/bash server.py:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; /bin/bash ./server.py;"
If you would like to start the script and leave it running even after you end your ssh session, you can use nohup. Notice that you need to put the process in the background and redirect stdin, stdout and stderr to completely detach from the remote process:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; nohup /bin/bash ./server.py < /dev/null > std.out 2> std.err &"
It seems the reason your ssh command returns immediately is that the call to pkill -f server.py also terminates the ssh session itself, since its command line also contains server.py.
I don't have my regular MacBook Pro here to test with, but I think that adding another semicolon and ending the command line with /bin/bash might do it.
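If you ever launch this from Python on the remote host rather than a raw ssh command line, the same detach can be sketched with subprocess (paths are the question's own; treat this as illustrative, not tested):
import subprocess

# start_new_session=True detaches the child from the controlling
# terminal (much like nohup + &), and redirecting all three standard
# streams lets the parent return immediately.
with open("/root/Server/std.out", "wb") as out, \
     open("/root/Server/std.err", "wb") as err:
    subprocess.Popen(
        ["/bin/bash", "./server.py"],
        cwd="/root/Server",
        stdin=subprocess.DEVNULL,
        stdout=out,
        stderr=err,
        start_new_session=True,
    )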

`ps -ef` shows running process twice if started with `subprocess.Popen`

I use the following snippet in a larger Python program to spawn a process in background:
import subprocess
command = "/media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
After that I wanted to check whether the process was running when my Python program returned.
Output of ps -ef | grep -v grep | grep FOOBAR:
ap 3396 937 0 16:08 pts/16 00:00:00 /bin/sh -c /media/sf_SharedDir/FOOBAR
ap 3397 3396 0 16:08 pts/16 00:00:00 /bin/sh /media/sf_SharedDir/FOOBAR
I was surprised to see two lines of output, and they have different PIDs. Are two processes running? Is there something wrong with my Popen call?
FOOBAR Script:
#!/bin/bash
while :
do
echo "still alive"
sleep 1
done
EDIT: When I start the script in a terminal, ps displays only one process.
Started via ./FOOBAR
ap#VBU:/media/sf_SharedDir$ ps -ef | grep -v grep | grep FOOBAR
ap 4115 3463 0 16:34 pts/5 00:00:00 /bin/bash ./FOOBAR
EDIT: shell=True is causing this issue (if it is one). But how would I fix it if I require shell to be True to run bash commands?
There is nothing wrong, what you see is perfectly normal. There is no "fix".
Each of your processes has a distinct function. The top-level process is running the python interpreter.
The second process, /bin/sh -c /media/sf_SharedDir/FOOBAR, is the shell that interprets the command line (you specified shell=True because you want |, *, or $HOME to be interpreted).
The third process, /bin/sh /media/sf_SharedDir/FOOBAR, is the FOOBAR command itself. The /bin/sh comes from the #! line inside your FOOBAR program. If it were a C program, you'd just see /media/sf_SharedDir/FOOBAR here. If it were a python program, you'd see /usr/bin/python /media/sf_SharedDir/FOOBAR.
If you are really bothered by the second process, you could modify your python program like so:
command = "exec /media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
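Alternatively, if this particular command needs none of those shell features, you can drop shell=True and the intermediate shell never exists:
import subprocess

# Passing a list (and no shell=True) runs the program directly, so ps
# shows only the interpreter line from FOOBAR's own #! header.
subprocess.Popen(["/media/sf_SharedDir/FOOBAR"])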

How to kill Django runserver sub processes from a bash script?

I'm working on a Django website where I have various compilation programs that need to run (Compass/Sass, coffeescript, hamlpy), so I made this shell script for convenience:
#!/bin/bash
SITE=/home/dev/sites/rmx
echo "RMX using siteroot=$SITE"
$SITE/rmx/manage.py runserver &
PIDS[0]=$!
compass watch $SITE/media/compass/ &
PIDS[1]=$!
coffee -o $SITE/media/js -cw $SITE/media/coffee &
PIDS[2]=$!
hamlpy-watcher $SITE/templates/hamlpy $SITE/templates/templates &
PIDS[3]=$!
trap "echo PIDS: ${PIDS[*]} && kill ${PIDS[*]}" SIGINT
wait
Everything except the Django server shuts down nicely on Ctrl+C, because the PID of the server process isn't the PID of the python manage.py runserver command. That means every time I stop the script, I have to find the running process's PID and shut it down by hand.
Here's an example:
$> ./compile.sh
RMX using siteroot....
...
[ctrl+c]
PIDS: 29725 29726 29728 29729
$> ps -A | grep python
29732 pts/2 00:00:00 python
The first PID, 29725, is the initial python manage.py runserver call, but 29732 is the actual dev server process.
edit: Looks like this is due to Django's auto-reload feature, which can be disabled with the --noreload flag. Since I'd like to keep auto-reload, the question now becomes: how do I kill the child processes from the bash script? I would have thought killing the initial python runserver command would do it...
SOLVED
Thanks to this SO question, I've changed my script to this:
#!/bin/bash
SITE=/home/dev/sites/rmx
echo "RMX using siteroot=$SITE"
$SITE/rmx/manage.py runserver &
compass watch $SITE/media/compass/ &
coffee -o $SITE/media/js -cw $SITE/media/coffee &
hamlpy-watcher $SITE/templates/hamlpy $SITE/templates/templates &
trap "kill -TERM -$$" SIGINT
wait
A dash before the PID makes kill operate on the whole process group, and $$ refers to the PID of the bash script itself.
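For reference, the same group kill can be expressed from Python (an illustrative sketch, not part of the script above; sleep 60 stands in for any long-running child):
import os
import signal
import subprocess

# Start the child in a new session so it becomes a process-group
# leader; one killpg() then takes out the child and anything it spawned.
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # like kill -TERM -PGID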
Thanks for the help, me!
No problem, self, and hey -- you're awesome.
You can also kill whatever process or server is listening on a given port; set PORT to the port number:
$ netstat -tulpn | grep PORT | awk '{print $7}' | cut -d/ -f 1 | xargs kill
OR
$ sudo lsof -i tcp:PORT
$ sudo lsof -t -i tcp:PORT | xargs kill

Changing Process Name using Shell for nagios monitoring with check_procs

I have a python script that starts a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my Ubuntu EC2 instance, it shows the process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply the process name, which here is just 'python'.
It returns output saying that one python process is running. This is fine when I'm running one python process. But if I'm running multiple python scripts and want to monitor only a few of them, I have to give the particular process name, and if I give the python script name in the above command, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name that I can give to check_procs.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions which suggest changing the python process name using setproctitle, but I want to do it from the shell.
Regards,
Sanket
You can use the check_procs command to match on arguments, which include the module name. The following command will let you know whether the python module module.py is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range: 1:1 will trigger a critical status if more or fewer than 1 matching process is running.
The -a argument will filter based on processes that contain the args 'module.py' (change it to the name of the module you want to monitor)
The -C argument will make sure that the process is a python process
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
REFERENCE:
check_procs plugin manpage
http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is make your program a daemon and use a pidfile. Have a look at the Python daemon API and its implementation, python-daemon.
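A minimal sketch of the python-daemon approach (main stands in for your own long-running code):
import daemon  # pip install python-daemon

def main():
    ...  # your long-running work goes here

# DaemonContext forks into the background, detaches from the terminal,
# and closes the standard file descriptors; combine it with a pidfile
# so a monitoring tool can find the process afterwards.
with daemon.DaemonContext():
    main()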
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter and jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output! The latter case will not be listed by check_procs -C python!
If you run your scripts explicitly via python, as in python <filename.py>, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pidfile and use that file for the process lookup with Nagios.
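A minimal sketch of the pidfile side (the path is only an example):
import os

# Record our PID at startup so the monitoring check can look up this
# exact process later instead of matching on the command name.
with open("/var/run/myscript.pid", "w") as f:
    f.write(str(os.getpid()))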
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using this link, e.g.:
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the executable name, but you'd have to use a wrapper script (or some weird fork magic). The following code causes ps to list the executable as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')
