Systemd + non-root Gunicorn service = defunct subprocess

I'm following this document to set up a systemd socket and service for my gunicorn server. The setup:
- Systemd starts gunicorn as www-data
- gunicorn forks itself (default behavior)
- the server starts a subprocess with subprocess.Popen()
- the subprocess finishes without an error, but the parent keeps getting None from p.poll() instead of an exit code (see the sketch below)
- the subprocess ends up defunct
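For reference, the parent/child interaction is roughly this pattern (a minimal sketch of what's described above; the child command is hypothetical):
import subprocess

# the server launches a worker; the actual command is hypothetical
p = subprocess.Popen(["python", "some_job.py"])

# later, polled from the server's main loop:
code = p.poll()  # keeps returning None even though the child has
                 # exited and shows up as <defunct> in ps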
Here's the process hierarchy:
$ ps eauxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
...
www-data 14170 0.0 0.2 65772 20452 ? Ss 10:57 0:00 /usr/bin/python /usr/bin/gunicorn digits.webapp:app --pid /run/digits/pid --config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
www-data 14176 0.8 3.4 39592776 283124 ? Sl 10:57 0:05 \_ /usr/bin/python /usr/bin/gunicorn digits.webapp:app --pid /run/digits/pid --config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
www-data 14346 5.0 0.0 0 0 ? Z 11:07 0:01 \_ [python] <defunct>
Here's the kicker: when I run the service as root instead of www-data, everything works as expected. The subprocess finishes and the parent immediately gets the child's return code.
/lib/systemd/system/digits.service
[Unit]
Description=DIGITS daemon
Requires=digits.socket
After=local-fs.target network.target
[Service]
PIDFile=/run/digits/pid
User=www-data
Group=www-data
Environment="DIGITS_JOBS_DIR=/var/lib/digits/jobs"
Environment="DIGITS_LOGFILE_FILENAME=/var/log/digits/digits.log"
ExecStart=/usr/bin/gunicorn digits.webapp:app \
--pid /run/digits/pid \
--config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
/lib/systemd/system/digits.socket
[Unit]
Description=DIGITS socket
[Socket]
ListenStream=/run/digits/socket
ListenStream=0.0.0.0:34448
[Install]
WantedBy=sockets.target
/usr/lib/tmpfiles.d/digits.conf
d /run/digits 0755 www-data www-data -

I ran into the same issue today on CentOS 7. I finally overcame it by ignoring the instructions in this document, which say to create the socket under the /run/ hierarchy, and using /tmp/ instead. That worked.
Note that my PID file is still placed underneath /run/ (no issues there).
In summary: instead of placing your socket somewhere underneath /run/..., try placing it somewhere underneath /tmp/... It worked for me on CentOS 7 with systemd.
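Applied to the unit files above, only the socket path in digits.socket changes (a minimal sketch; the exact /tmp path name is illustrative):
[Socket]
ListenStream=/tmp/digits.socket
ListenStream=0.0.0.0:34448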

Related

gunicorn processes won't shut down

I am trying to kill my gunicorn processes on my server.
When I run kill {id} they seem to shut down for maybe a second, and then they start back up.
$ ps ax | grep gunicorn
42898 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
42924 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
then I run
pkill -f gunicorn
the processes go away for maybe a second and then start back up with new process IDs
43170 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
43171 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
I have also tried killing them individually with the kill command.
I have also tried a server restart, but that does not work either; the gunicorn processes start up again when the server is back online.
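Processes that come back after a kill and survive a reboot are typically being respawned by a process manager such as systemd or supervisor; that is an assumption here, since the question doesn't say how gunicorn was started. One way to check, using a PID from the listing above:
ps -o ppid= -p 43170     # who is the parent of the respawned worker?
systemctl status 43170   # systemd can map a PID back to the unit that owns it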

How I understand ENTRYPOINT logging using Flask inside Docker

My Docker container runs gunicorn, which points to myapp.py, which uses Flask.
cat Dockerfile:
FROM python:3.7
<snip, not important>
USER nobody
ENTRYPOINT ["/usr/sbin/flask-docker-entrypoint.sh"]
EXPOSE 8000
flask-docker-entrypoint.sh:
#!/bin/bash
/usr/local/bin/gunicorn myapp:app -c /local/gunicorn.conf.py
All works well!
The Docker daemon's logging is set to 'json-file'. I tell gunicorn to log to stdout (in version 20 that is the default). I can send logs from myapp.py to 'docker logs' with simple logging statements. Why is this?
ps -ef
UID PID PPID C STIME TTY TIME CMD
nobody 1 0 0 22:01 ? 00:00:00 /bin/bash /usr/sbin/flask-docker-entrypoint.sh
nobody 12 1 0 22:01 ? 00:00:00 /usr/local/bin/python /usr/local/bin/gunicorn myapp:app -c /external/
nobody 15 12 0 22:01 ? 00:00:00 /usr/local/bin/python /usr/local/bin/gunicorn myapp:app -c /external/
nobody 57 0 7 22:44 pts/0 00:00:00 bash
nobody 62 57 0 22:44 pts/0 00:00:00 ps -e
flask-docker-entrypoint.sh is PID 1, so it logs to stdout. I get that. Do all children of PID 1 also inherit the ability to log to stdout? It seems gunicorn is not PID 1 and myapp.py is not PID 1, but both log to stdout?
Thank you
The default behavior in Unix-like environments is for a process to inherit its parent's stdout (and stdin and stderr). You can demonstrate this in your local shell easily enough:
#!/bin/sh
# This is script1
./script2
#!/bin/sh
# This is script2
echo hi there
$ ./script1
hi there
$ ./script1 > log
$ cat log
hi there
Whether script1's output goes to the console (first example) or is redirected into a log file (second example), when it runs script2 as a subprocess, script2 inherits that same stdout.
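The same holds for Python: a child started with the subprocess module inherits the parent's stdout unless you redirect it (a minimal sketch, not from the question's code):
import subprocess
import sys

# The child inherits this process's stdout by default, so its print()
# lands wherever the parent's output goes: console, file, or docker logs.
subprocess.run([sys.executable, "-c", "print('hi from the child')"])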
The reason gunicorn isn't PID 1 is that you have a shell wrapper. You can use the exec shell built-in to replace the shell process with the thing it wants to run:
#!/bin/sh
exec /usr/local/bin/gunicorn myapp:app -c /local/gunicorn.conf.py
Unless you need to do more setup, it might be simpler to put the command directly into the Dockerfile. (CMD is easier to override at runtime to do things like get debugging shells if you need to; this replaces your existing ENTRYPOINT line.)
CMD ["gunicorn", "myapp:app", "-c", "/local/gunicorn.conf.py"]

systemd service failed to start bash script

I am running a bash script as a systemd service, but it is giving me this error:
Failed at step EXEC spawning /home/pipeline/entity-extraction/start_consumer.sh: Permission denied
Feb 8 11:59:58 irum systemd[1]: ee-consumer.service: main process exited, code=exited, status=203/EXEC
Feb 8 11:59:58 irum systemd[1]: Unit ee-consumer.service entered failed state.
My bash script runs two Python scripts, and it works fine when I run it from the terminal as
sudo bash start_consumer.sh
start_consumer.sh
while true
do
echo "starting FIRST Consumer.py : $(date +"%T")"
python3 /home/irum/Desktop/Marketsyc/Consumer.py &
pid=$!
echo "pid:$pid"
sleep 60
echo "starting SECOND Consumer.py : $(date +"%T")"
python3 /home/irum/Desktop/Marketsyc/Consumer.py &
new_pid=$!
echo "new_pid:$new_pid"
# Here I want to kill FIRST Consumer.py
echo "killing first consumer"
kill "$pid"
sleep 60
# Here I want to kill SECOND Consumer.py
echo "killing second consumer"
kill "$new_pid"
done
The code of my systemd service, ee-consumer.service:
[Unit]
Description=Entity extraction - consumer
After=default.target
[Service]
Type=simple
Restart=always
User=pipeline
ExecStart=/home/pipeline/entity-extraction/start_consumer.sh
How can I resolve this issue?
You have to set the shebang line and the execute permission on the script for systemd to run it.
Add #!/bin/bash at the top of the bash script, and then run:
chmod 755 /home/pipeline/entity-extraction/start_consumer.sh
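With the fix applied, start_consumer.sh begins like this (only the first line is new; the rest stays as posted):
#!/bin/bash
while true
do
...
Alternatively (a common workaround, not part of the answer above), you can point systemd at the interpreter explicitly, which sidesteps both the shebang and the execute permission:
ExecStart=/bin/bash /home/pipeline/entity-extraction/start_consumer.sh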

Start a remote script from a Mac OS X machine via SSH command

I am trying to start a Python script on my VM from my local Mac OS.
I did
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; ./server.py;"
Result
It SSHes in, quickly runs those commands, and then quickly logs me out. I was expecting the SSH session to stay open.
My script is NOT running ...
ps -aux | grep python
root 901 0.0 0.2 553164 18584 ? Ssl Jan19 20:37 /usr/bin/python -Es /usr/sbin/tuned -l -P
root 15444 0.0 0.0 112648 976 pts/0 S+ 19:16 0:00 grep --color=auto python
However, if I run
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server"
and then
./server.py;
by hand, it works.
Am I missing anything?
You might need to state the shell starting your script, i.e. /bin/bash server.py:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; /bin/bash ./server.py;"
If you would like to start the script and leave it running even after you end your ssh session, you can use nohup. Notice that you need to put the process in the background and redirect stdin, stdout, and stderr to completely detach from the remote process:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; nohup /bin/bash ./server.py < /dev/null > std.out 2> std.err &"
It seems the reason your ssh command returns immediately is that the call to pkill -f server.py also terminates the ssh session itself, since its command line also contains server.py.
I don't have my regular MacBook Pro here to test with, but I think that adding another semicolon and ending the command line with /bin/bash might do it.
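That pkill behavior is easy to see: pkill -f matches against the full command line, and the remote shell running the quoted command has server.py in its own command line, so it matches too. A common workaround (an assumption, not part of the answer above) is a pattern that cannot match its own command line:
pkill -f '[s]erver.py'   # matches "server.py" but not the literal string "[s]erver.py"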

uwsgi: why are two processes loaded per app? [closed]

root@www:~# ps aux | grep uwsgi
root 4660 0.0 0.0 10620 892 pts/1 S+ 19:13 0:00 grep --color=auto uwsgi
root 19372 0.0 0.6 51228 6628 ? Ss 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
root 19373 0.0 0.1 40420 1292 ? S 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
www-data 19374 0.0 1.9 82640 20236 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19375 0.0 2.4 95676 25324 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
www-data 19385 0.0 2.1 90772 22248 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19389 0.0 2.0 95676 21244 ? S 06:41 0:00 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
Above is the ps output for the uwsgi processes. The strange thing is that for each ini file there are two instances loaded, and I even have two uwsgi masters. Is this normal?
The deployment strategy for uwsgi is:
- the Emperor is managed by upstart
- the Emperor searches for each uwsgi.ini in the apps folder
uwsgi.conf for upstart:
# simple uWSGI script
description "uwsgi tiny instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --master --die-on-term --emperor "/var/www/*/uwsgi.ini"
uwsgi.ini (I have two apps, and both have the same ini except for the app number):
[uwsgi]
# variables
uid = www-data
gid = www-data
projectname = myproject
projectdomain = www.myproject.com
base = /var/www/app2
# config
enable-threads
protocol = uwsgi
venv = %(base)/
pythonpath = %(base)/
wsgi-file = %(base)/app.wsgi
socket = /tmp/%(projectdomain).sock
logto = %(base)/logs/uwsgi.log
You started it with the --master option, which spawns a master process to control the workers.
From the official documentation (https://uwsgi-docs.readthedocs.org/en/latest/Glossary.html?highlight=master):
master
uWSGI’s built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments it’s not really a good idea not to use master mode.
You should read http://uwsgi-docs.readthedocs.org/en/latest/Options.html#master
And also this thread might have some info for you. uWSGI: --master with --emperor spawns two emperors
It is generally not recommended to use --master and --emperor together.
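Following that advice, the upstart stanza would drop the --master flag and let the Emperor manage its vassals (a minimal sketch based on the uwsgi.conf above):
exec uwsgi --die-on-term --emperor "/var/www/*/uwsgi.ini"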
My educated guess on this topic is that it should indeed be transferred to Server Fault.
But here is the answer:
You must have started the upstart script twice ;-)
Just try to kill the main root process with a SIGTERM and see if the child processes die too.
If you have run the upstart script twice, you will have one root process and two children remaining.
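Using the PIDs from the ps output above, the test looks like this (PIDs are from the question's listing):
kill -TERM 19372       # the first root uwsgi master
ps aux | grep uwsgi    # its children should be gone; anything left belongs to the other instance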
