gunicorn processes won't shut down - python

I am trying to kill my gunicorn processes on my server.
When I run kill {id}, they seem to shut down for maybe a second and then they start back up.
$ ps ax | grep gunicorn
42898 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
42924 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
then I run
pkill -f gunicorn
the processes go away for maybe a second and then start back up with new process IDs:
43170 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
43171 ? S 0:00 /usr/bin/python3 /usr/bin/gunicorn cms_project.wsgi -b 0.0.0.0:8000 -w 1 --timeout 90
I have also tried killing them individually with the kill command.
I have also tried a server restart, but that does not help either: the gunicorn processes start right back up once the server is back online.
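When killed processes reappear within a second under new PIDs, something is almost certainly supervising and respawning them (systemd, supervisor, upstart, or a similar watchdog). A quick way to find out is to look at the gunicorn master's parent process. A minimal sketch (shown against the current shell's own PID, $$, since the gunicorn PID varies):

```shell
# Read the parent PID (PPID) of a process; substitute the gunicorn
# master's PID for $$ below.
PARENT=$(ps -o ppid= -p $$ | tr -d ' ')
ps -o pid,comm -p "$PARENT"
# If the parent turns out to be systemd/init (PID 1) or supervisord,
# that manager is configured to respawn gunicorn, so plain kill/pkill
# will never stick. Stop it through the manager instead, e.g.:
#   systemctl stop <unit>         # systemd (unit name is yours to fill in)
#   supervisorctl stop <program>  # supervisor
```

The commented `systemctl`/`supervisorctl` lines are placeholders; the actual unit or program name depends on how the service was installed.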

Related

Gunicorn has more than one thread despite --thread 1

I don't really understand yet how gunicorn works. What I currently see is that if I start:
/usr/bin/python3 /usr/bin/gunicorn -k eventlet --timeout 60 --log-level debug --workers=1 -b 0.0.0.0:5001 flaskVideoClient2:create_app(5001,10)
and then run
ps -aux | grep flaskVideo
I get this response
user 0.0 0.0 13464 1096 pts/14 S+ 10:33 0:00 grep --color=auto flaskVideo
user 13684 0.0 0.4 95624 34796 pts/7 S+ 10:20 0:00 /usr/bin/python3 /usr/bin/gunicorn -k eventlet --timeout 60 --log-level debug --workers=1 -b 0.0.0.0:5001 flaskVideoClient2:create_app(5001,10)
user 13698 0.4 0.5 199228 45696 pts/7 S+ 10:20 0:03 /usr/bin/python3 /usr/bin/gunicorn -k eventlet --timeout 60 --log-level debug --workers=1 -b 0.0.0.0:5001 flaskVideoClient2:create_app(5001,10)
so it seems that there is more than one thread running.
How should I interpret the two running threads?
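Those two entries are separate processes, not threads: gunicorn always runs a master process that supervises N workers, so --workers=1 still yields two gunicorn lines (the master plus one worker). One way to check, assuming gunicorn is running on a Linux box with procps:

```shell
# A master/worker pair, not threads: the worker's PPID equals the
# master's PID, and NLWP shows how many threads each process really has.
ps -o pid,ppid,nlwp,cmd -C gunicorn || echo "no gunicorn process found"
```

If gunicorn is up, the second line's PPID column should point back at the first line's PID.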

Cannot restart gunicorn via supervisor

When I run "supervisorctl status hitbot", I get this error:
FATAL Exited too quickly (process log may have details)
#/bin/gunicorn_start
Here BIND=ip_address:port
/etc/supervisor/conf.d/hitbot.conf
But when I type these commands
In the log file
But when I test gunicorn_start with "bash /bin/gunicorn_start", it works fine.
Try this command: pkill -HUP gunicorn
Gunicorn docs: http://docs.gunicorn.org/en/stable/faq.html
"You can gracefully reload by sending HUP signal to gunicorn: $ kill -HUP masterpid"
or with the full command line:
pkill -HUP -f '/usr/bin/python /usr/bin/gunicorn -w 5 -b 127.0.0.1:5000 myapp:app'
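If gunicorn was started with --pid, a sketch that reloads via the pidfile instead (the path below is hypothetical; use whatever you passed to --pid):

```shell
PIDFILE=/tmp/gunicorn.pid   # hypothetical path; match your --pid option
if [ -f "$PIDFILE" ]; then
    kill -HUP "$(cat "$PIDFILE")"   # graceful reload of the master
else
    echo "pidfile not found: $PIDFILE"
fi
```

Note that HUP only reloads the workers; to actually stop gunicorn, send TERM to the master for a graceful shutdown (or QUIT for a quick one), per the gunicorn signal documentation.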

How to use gunicorn with swagger_server on Flask

I'm trying to start the swagger server using gunicorn on an EC2 instance.
I tried:
gunicorn -w 4 -b 0.0.0.0:8080 -p pidfile -D swagger_server:app
and this:
gunicorn -w 4 -b 0.0.0.0:8080 -p pidfile -D "python3 -m swagger_server":app
and even this:
gunicorn -w 4 -b 0.0.0.0:8080 -p pidfile -D __main__:app
How can I get it to work?
The raw Python command which works: python3 -m swagger_server
What you are trying to do is equivalent to:
from swagger_server.__main__ import main
For this to work with gunicorn, try:
gunicorn "swagger_server.__main__:main" -w 4 -b 0.0.0.0:8080
In case you have the error:
ImportError: No module named swagger_server
add the PYTHONPATH to gunicorn command:
gunicorn "swagger_server.__main__:main" -w 4 -b 0.0.0.0:8080 --pythonpath path_to_swagger_server
gunicorn -b 0.0.0.0:8080 main:app --reload
This should be the correct syntax; obviously, make sure you're in the correct directory and have sourced your virtualenv.
Isn't your application looking for a configuration file with a section like [app:main]?
This one worked for me:
gunicorn "swagger_server.__main__:app" -w 4 -b 0.0.0.0:8080

Two Celery Processes Running

I am debugging an issue where every scheduled task is run twice. I saw two processes named celery. Is it normal for two celery tasks to be running?
$ ps -ef | grep celery
hgarg 303 32764 0 17:24 ? 00:00:00 /home/hgarg/.pythonbrew/venvs/Python-2.7.3/hgarg_env/bin/python /data/hgarg/current/manage.py celeryd -B -s celery -E --scheduler=djcelery.schedulers.DatabaseScheduler -P eventlet -c 1000 -f /var/log/celery/celeryd.log -l INFO --pidfile=/var/run/celery/celeryd.pid --verbosity=1 --settings=settings
hgarg 307 21179 0 17:24 pts/1 00:00:00 grep celery
hgarg 32764 1 4 17:24 ? 00:00:00 /home/hgarg/.pythonbrew/venvs/Python-2.7.3/hgarg_env/bin/python /data/hgarg/current/manage.py celeryd -B -s celery -E --scheduler=djcelery.schedulers.DatabaseScheduler -P eventlet -c 1000 -f /var/log/celery/celeryd.log -l INFO --pidfile=/var/run/celery/celeryd.pid --verbosity=1 --settings=settings
There were two pairs of Celery processes; the older pair shouldn't have been there. Killing them all and restarting celery seems to have fixed it. Since there were no other recent changes, it's unlikely that anything else caused it.
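A sketch of that cleanup (the command names assume the celeryd invocation shown in the ps output above; the destructive steps are left commented so nothing is killed by accident):

```shell
# List every celery process with its full command line:
pgrep -af celeryd || echo "no celeryd processes found"
# If duplicates show up, stop them all, clear the stale pidfile so a
# second -B (embedded beat) instance cannot linger, then start once:
#   pkill -f celeryd
#   rm -f /var/run/celery/celeryd.pid
#   python manage.py celeryd ...   # the full command from the ps output
```

Running two workers with -B is a known way to get double-executed scheduled tasks, since each embedded beat schedules every task independently.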

Systemd + non-root Gunicorn service = defunct subprocess

I'm following this document to setup a Systemd socket and service for my gunicorn server.
Systemd starts gunicorn as www-data
gunicorn forks itself (default behavior)
the server starts a subprocess with subprocess.Popen()
the subprocess finishes without an error, but the parent keeps getting None from p.poll() instead of an exit code
the subprocess ends up defunct
Here's the process hierarchy:
$ ps eauxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
...
www-data 14170 0.0 0.2 65772 20452 ? Ss 10:57 0:00 /usr/bin/python /usr/bin/gunicorn digits.webapp:app --pid /run/digits/pid --config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
www-data 14176 0.8 3.4 39592776 283124 ? Sl 10:57 0:05 \_ /usr/bin/python /usr/bin/gunicorn digits.webapp:app --pid /run/digits/pid --config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
www-data 14346 5.0 0.0 0 0 ? Z 11:07 0:01 \_ [python] <defunct>
Here's the kicker: when I run the service as root instead of www-data, everything works as expected. The subprocess finishes and the parent immediately gets the child's return code.
/lib/systemd/system/digits.service
[Unit]
Description=DIGITS daemon
Requires=digits.socket
After=local-fs.target network.target
[Service]
PIDFile=/run/digits/pid
User=www-data
Group=www-data
Environment="DIGITS_JOBS_DIR=/var/lib/digits/jobs"
Environment="DIGITS_LOGFILE_FILENAME=/var/log/digits/digits.log"
ExecStart=/usr/bin/gunicorn digits.webapp:app \
--pid /run/digits/pid \
--config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
/lib/systemd/system/digits.socket
[Unit]
Description=DIGITS socket
[Socket]
ListenStream=/run/digits/socket
ListenStream=0.0.0.0:34448
[Install]
WantedBy=sockets.target
/usr/lib/tmpfiles.d/digits.conf
d /run/digits 0755 www-data www-data -
I ran into the same issue today on CentOS 7. I finally overcame it by ignoring the part of this document that says to create the socket under the /run/ hierarchy, and I used /tmp/ instead. That worked.
Note that my PID file is still placed underneath /run/ (no issues there).
In summary, instead of placing your socket underneath /run/..., try placing it underneath /tmp/... instead. That worked for me on CentOS 7 with systemd.
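For reference, the only change involved is the socket path in the unit. A sketch of the workaround variant (the digits names come from the files quoted above; the /tmp path is the workaround described here, not an official recommendation):

```ini
# /lib/systemd/system/digits.socket -- workaround variant
[Unit]
Description=DIGITS socket

[Socket]
ListenStream=/tmp/digits/socket
ListenStream=0.0.0.0:34448

[Install]
WantedBy=sockets.target
```

Remember to point any clients at the new socket path and run `systemctl daemon-reload` before restarting the socket and service units.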
