root@www:~# ps aux | grep uwsgi
root 4660 0.0 0.0 10620 892 pts/1 S+ 19:13 0:00 grep --color=auto uwsgi
root 19372 0.0 0.6 51228 6628 ? Ss 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
root 19373 0.0 0.1 40420 1292 ? S 06:41 0:03 uwsgi --master --die-on-term --emperor /var/www/*/uwsgi.ini
www-data 19374 0.0 1.9 82640 20236 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19375 0.0 2.4 95676 25324 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
www-data 19385 0.0 2.1 90772 22248 ? S 06:41 0:03 /usr/local/bin uwsgi --ini /var/www/app2/uwsgi.ini
www-data 19389 0.0 2.0 95676 21244 ? S 06:41 0:00 /usr/local/bin uwsgi --ini /var/www/app3/uwsgi.ini
Above is the ps output of my uwsgi processes. The strange thing is that two instances are loaded for each ini file, and I even have two uwsgi masters. Is this normal?
The deployment strategy for uwsgi is:
have the Emperor managed by upstart;
the Emperor searches for each uwsgi.ini in the apps folder.
uwsgi.conf for upstart:
# simple uWSGI script
description "uwsgi tiny instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --master --die-on-term --emperor "/var/www/*/uwsgi.ini"
uwsgi.ini (I have two apps, and both have the same ini except for the app# numbering):
[uwsgi]
# variables
uid = www-data
gid = www-data
projectname = myproject
projectdomain = www.myproject.com
base = /var/www/app2
# config
enable-threads = true
protocol = uwsgi
venv = %(base)/
pythonpath = %(base)/
wsgi-file = %(base)/app.wsgi
socket = /tmp/%(projectdomain).sock
logto = %(base)/logs/uwsgi.log
You started it with the --master option, which spawns a master process to control the workers.
From the official documentation: https://uwsgi-docs.readthedocs.org/en/latest/Glossary.html?highlight=master
master
uWSGI’s built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments it’s not really a good idea not to use master mode.
You should read http://uwsgi-docs.readthedocs.org/en/latest/Options.html#master
This thread might also have some info for you: uWSGI: --master with --emperor spawns two emperors
It is generally not recommended to use --master and --emperor together.
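If you let the Emperor alone manage the vassals, the upstart exec line from above would become something like this (a sketch, keeping the original paths; each uwsgi.ini should then map to a single instance rather than two):

exec uwsgi --die-on-term --emperor "/var/www/*/uwsgi.ini"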
My educated guess is that this question should indeed be moved to Server Fault.
But here is the answer:
You have probably started the upstart script twice ;-)
Try killing the main root process with a SIGTERM and see if the child processes die too.
If you have run the upstart script twice, you will have one root process and two children remaining.
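For example, using the master PID from the ps output in the question:

kill -TERM 19372
ps aux | grep uwsgi    # check whether the children went away too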
Related
Any way to hot reload python modules for a running python process? Normally we can run kill -HUP <pid> for servers like squid, nginx, gunicorn. My running processes are:
root 6 0.6 0.9 178404 39116 ? S 14:21 0:00 python3 ./src/app.py --config ./conf/config.yml
root 7 0.0 1.0 501552 43404 ? Sl 14:21 0:00 python3 ./src/app.py --config ./conf/config.yml
root 8 0.0 1.0 501808 43540 ? Sl 14:21 0:00 python3 ./src/app.py --config ./conf/config.yml
Is the question about reloading a Sanic app? If yes, then there is a hot reload built into the server.
app.run(debug=True)
Or, if you want the reload without debugging:
app.run(auto_reload=True)
See docs
Or, if this is a general question, check out aoiklivereload
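If you want to wire up kill -HUP yourself instead, here is a minimal sketch (mymodule is a hypothetical placeholder; note that importlib.reload only swaps the module's code, so objects created before the reload keep the old behavior):

import importlib
import signal

import mymodule  # hypothetical module to reload on SIGHUP

def reload_handler(signum, frame):
    # Re-execute the module's source in place
    importlib.reload(mymodule)

signal.signal(signal.SIGHUP, reload_handler)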
I have a basic Django REST application on my DigitalOcean server (Ubuntu 16.04) with a local virtual environment.
The basic wsgi.py is:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "workout_rest.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
I have followed this tutorial step by step:
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
When I test Gunicorn's ability to serve the project with this command:
gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application
All works well.
So I've tried to set up Gunicorn with a systemd service file.
My /etc/systemd/system/gunicorn.service file is:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ben
Group=www-data
WorkingDirectory=/home/ben/myproject
ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/myproject/myproject.sock myproject.wsgi:application
[Install]
WantedBy=multi-user.target
My Nginx configuration is:
server {
listen 8000;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/ben/myproject;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/ben/myproject/myproject.sock;
}
}
I've changed the listen port from 80 to 8000 because 80 gave me an err_connection_refused error.
After starting the server with this command:
sudo systemctl restart nginx
When I try to load my website, I get a 502 Bad Gateway error.
I've tried these commands (found on the tutorial comments):
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
sudo systemctl restart nginx
but nothing changes.
When I take a look at the Nginx logs with this command:
sudo tail -f /var/log/nginx/error.log
I can see that the sock file doesn't exist:
2016/10/07 09:00:18 [crit] 24974#24974: *1 connect() to unix:/home/ben/myproject/myproject.sock failed (2: No such file or directory) while connecting to upstream, client: 86.197.20.27, server: 139.59.150.116, request: "GET / HTTP/1.1", upstream: "http://unix:/home/ben/myproject/myproject.sock:/", host: "server_ip_adress:8000"
Why isn't this sock file created? How can I configure django/gunicorn to create it?
I have added gunicorn to INSTALLED_APPS in my Django project, but it doesn't change anything.
EDIT:
When I test the nginx config file with nginx -t I get an error: open() "/run/nginx.pid" failed (13: Permission denied).
But if I run the command with sudo (sudo nginx -t), the test is successful. Does that mean I have to allow the 'ben' user to run Nginx?
As for the gunicorn logfiles, I cannot find a way to read them. Where are they stored?
When I check whether gunicorn is running by using ps aux | grep gunicorn:
ben 26543 0.0 0.2 14512 1016 pts/0 S+ 14:52 0:00 grep --color=auto gunicorn
Here is what happens when I run the systemctl enable and start commands for gunicorn:
sudo systemctl enable gunicorn
Synchronizing state of gunicorn.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable gunicorn
sudo systemctl start gunicorn
I get no output with this command
sudo systemctl is-active gunicorn
active
sudo systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2016-10-06 15:40:29 UTC; 23h ago
Oct 06 15:40:29 DevUsine systemd[1]: Started gunicorn.service.
Oct 06 18:52:56 DevUsine systemd[1]: Started gunicorn.service.
Oct 06 20:55:05 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 20:55:17 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:07:36 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:16:42 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:21:38 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:25:28 DevUsine systemd[1]: Started gunicorn daemon.
Oct 07 08:58:43 DevUsine systemd[1]: Started gunicorn daemon.
Oct 07 15:01:22 DevUsine systemd[1]: Started gunicorn daemon.
I had to change the ownership of my sock folder:
sudo chown ben:www-data /home/ben/myproject/
Another thing is that I have changed the sock location after reading in many posts that it's not good practice to keep the sock file in the django project.
My new location is:
/home/ben/run/
Don't forget to change permissions:
sudo chown ben:www-data /home/ben/run/
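The ExecStart line in gunicorn.service then has to point at the new location, and proxy_pass in the Nginx config must match; a sketch, assuming the socket filename stays the same:

ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/run/myproject.sock myproject.wsgi:application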
To be sure that gunicorn is refreshed, run these commands:
pkill gunicorn
sudo systemctl daemon-reload
sudo systemctl start gunicorn
That will kill the gunicorn processes and start new ones.
You can run this command to make the process start at server boot:
sudo systemctl enable gunicorn
All works well now.
While the accepted answer works, there is one (imo major) issue with it, which is that the gunicorn web server is (probably) running as root, which is not recommended. The reason you end up needing to chown the socket is that it is owned by root:root, since that is the user/group your init job assumes by default. There are multiple ways to get your job to assume another role. As of this time (with gunicorn 19.9.0), in my opinion, the simplest is to use the --user and --group flags provided as part of the gunicorn command. This means your server can start with the user/group you specify. In your case:
exec gunicorn --user ben --group www-data --bind unix:/home/ben/myproject/myproject.sock -m 007 wsgi
will start gunicorn under the ben:www-data user and group, and create a socket owned by ben:www-data with permissions 770, i.e. read/write/execute for the user ben and the group www-data on the socket, which is exactly what you need in this case.
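Alternatively, systemd itself can drop privileges and control the socket permissions; a sketch of the relevant [Service] lines (UMask=0007 mirrors gunicorn's -m 007):

[Service]
User=ben
Group=www-data
UMask=0007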
I set the path of the sock file outside my project. I just needed to create the directory so that gunicorn could create the file inside it, since I had mentioned that path in the .service file. Basically, I made sure all the directories in the path from the .service file existed. No need to change permissions or ownership.
Try running:
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl status gunicorn.service
The last line helped me re-create the .sock file.
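To confirm the socket actually exists afterwards, something like this (path assumed from the earlier answers):

ls -l /home/ben/run/myproject.sock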
I'm following this document to set up a systemd socket and service for my gunicorn server.
Systemd starts gunicorn as www-data
gunicorn forks itself (default behavior)
the server starts a subprocess with subprocess.Popen()
the subprocess finishes without an error, but the parent keeps getting None from p.poll() instead of an exit code
the subprocess ends up defunct
Here's the process hierarchy:
$ ps eauxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
...
www-data 14170 0.0 0.2 65772 20452 ? Ss 10:57 0:00 /usr/bin/python /usr/bin/gunicorn digits.webapp:app --pid /run/digits/pid --config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
www-data 14176 0.8 3.4 39592776 283124 ? Sl 10:57 0:05 \_ /usr/bin/python /usr/bin/gunicorn digits.webapp:app --pid /run/digits/pid --config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
www-data 14346 5.0 0.0 0 0 ? Z 11:07 0:01 \_ [python] <defunct>
Here's the kicker: when I run the service as root instead of www-data, everything works as expected. The subprocess finishes and the parent immediately gets the child's return code.
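A minimal sketch of the pattern described above (the sleep command is a stand-in for the real child; in the failing setup the loop never ends because p.poll() keeps returning None and the child stays defunct):

import subprocess
import time

p = subprocess.Popen(["sleep", "1"])  # stand-in for the real subprocess

while p.poll() is None:  # poll() returns the exit code once the child has been reaped
    time.sleep(0.5)

print("exit code:", p.returncode)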
/lib/systemd/system/digits.service
[Unit]
Description=DIGITS daemon
Requires=digits.socket
After=local-fs.target network.target
[Service]
PIDFile=/run/digits/pid
User=www-data
Group=www-data
Environment="DIGITS_JOBS_DIR=/var/lib/digits/jobs"
Environment="DIGITS_LOGFILE_FILENAME=/var/log/digits/digits.log"
ExecStart=/usr/bin/gunicorn digits.webapp:app \
--pid /run/digits/pid \
--config /usr/lib/python2.7/dist-packages/digits/gunicorn_config.py
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
/lib/systemd/system/digits.socket
[Unit]
Description=DIGITS socket
[Socket]
ListenStream=/run/digits/socket
ListenStream=0.0.0.0:34448
[Install]
WantedBy=sockets.target
/usr/lib/tmpfiles.d/digits.conf
d /run/digits 0755 www-data www-data -
I ran into the same issue today on CentOS-7. I finally overcame it by ignoring the instructions in this document -- which say to create the socket within the /run/ hierarchy -- and using /tmp/ instead. That worked.
Note that my PID file is still placed underneath /run/ (no issues there).
In summary, instead of placing your socket somewhere underneath /run/..., try placing it somewhere underneath /tmp/... It worked for me on CentOS-7 with systemd.
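Applied to the digits.socket file above, the change is just the first ListenStream line (the exact /tmp path here is an assumption):

[Socket]
ListenStream=/tmp/digits.socket
ListenStream=0.0.0.0:34448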
I am running a uwsgi/flask python app in a conda virtual environment using python 2.7.11.
I am moving from CentOS 6 to CentOS 7 and want to make use of systemd to run my app as a service. Everything (config and code) works fine if I manually call the start script for my app (sh start-foo.sh), but when I try to start it as a systemd service (sudo systemctl start foo), it starts the app and then fails right away with the following error:
WSGI app 0 (mountpoint='') ready in 8 seconds on interpreter 0x14c38d0 pid: 3504 (default app)
mountpoint already configured. skip.
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 3504)
emperor_notify_ready()/write(): Broken pipe [core/emperor.c line 2463]
VACUUM: pidfile removed.
Here is my systemd Unit file:
[Unit]
Description=foo
[Service]
ExecStart=/bin/bash /app/foo/bin/start-foo.sh
ExecStop=/bin/bash /app/foo/bin/stop-foo.sh
[Install]
WantedBy=multi-user.target
Not sure if necessary, but here are my uwsgi emperor and vassal configs:
Emperor
[uwsgi]
emperor = /app/foo/conf/vassals/
daemonize = /var/log/foo/emperor.log
Vassal
[uwsgi]
http-timeout = 500
chdir = /app/foo/scripts
pidfile = /app/foo/scripts/foo.pid
#socket = /app/foo/scripts/foo.soc
http = :8888
wsgi-file = /app/foo/scripts/foo.py
master = 1
processes = %(%k * 2)
threads = 1
module = foo
callable = app
vacuum = True
daemonize = /var/log/foo/uwsgi.log
I tried Googling this issue but can't seem to find anything related. I suspect this has something to do with running uwsgi in a virtual environment and using systemctl to start it. I'm a systemd n00b, so let me know if I'm doing something wrong in my Unit file.
This is not a blocker because I can still start/stop my app by executing the scripts manually, but I would like to be able to run it as a service and automatically launch it on startup using systemd.
Following the instructions in uwsgi's documentation on setting up a systemd service fixed the problem.
Here is what I changed:
Removed daemonize from both Emperor and Vassal configs.
Took the Unit file from the link above and modified it slightly to work with my app:
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/app/foo/bin/uwsgi /app/foo/conf/emperor.ini
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
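After swapping in the new unit file, reload systemd and restart the service (assuming the unit is installed as foo.service):

sudo systemctl daemon-reload
sudo systemctl restart foo
sudo systemctl status foo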
How do you ensure celeryd only runs as a single process? When I run manage.py celeryd --concurrency=1 and then ps aux | grep celery I see 3 instances running:
www-data 8609 0.0 0.0 20744 1572 ? S 13:42 0:00 python manage.py celeryd --concurrency=1
www-data 8625 0.0 1.7 325916 71372 ? S 13:42 0:01 python manage.py celeryd --concurrency=1
www-data 8768 0.0 1.5 401460 64024 ? S 13:42 0:00 python manage.py celeryd --concurrency=1
I've noticed a similar problem with celerybeat, which always runs as 2 processes.
As per this link, the number of processes would be 4: one main process, two child processes, and one celerybeat process.
Also, if you're using FORCE_EXECV, another process is started to clean up semaphores.
If you use celery+django-celery in development with RabbitMQ or Redis as a broker, it shouldn't use more than one extra thread (none if CELERY_DISABLE_RATE_LIMITS is set).
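For reference, disabling rate limits is a single line in old-style celery/django-celery settings (a sketch):

# settings.py
CELERY_DISABLE_RATE_LIMITS = True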