I have a problem running uWSGI.
I run my application (Pyramid with zerorpc and gevent) under uWSGI, and some requests fail.
Python writes this error:
Assertion failed: ok (bundled/zeromq/src/mailbox.cpp:79)
Aborted
uWSGI worker 1 screams: UAAAAAAH my master disconnected: i will kill myself !!!
Why might this problem occur?
uwsgi config:
[uwsgi]
socket = /tmp/sock.sock
chmod-socket = 666
master = true
processes = 1
vacuum = true
I run it like this:
uwsgi --ini-paste development.ini
The whole zeromq magic is managed by a background thread. A property of threads is that they "disappear" after fork(), so zeromq will not work in your uWSGI worker. Just add
lazy-apps = true
to your uWSGI options to load zeromq (read: your app) after each fork().
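For example, with the config from the question, a sketch of the updated file just appends the option (all other options unchanged):

[uwsgi]
socket = /tmp/sock.sock
chmod-socket = 666
master = true
processes = 1
vacuum = true
lazy-apps = true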
Related
I have had trouble killing uwsgi processes.
Whenever I use the commands below to shut down uwsgi, new uwsgi workers respawn after a few seconds. So after shutting down the uwsgi process and starting the uwsgi server again, uwsgi processes continue to accumulate in memory.
sudo pkill -f uwsgi -9
kill -9 `cat /tmp/MyProject.pid`
uwsgi --stop /tmp/MyProject.pid
I want to know how to kill the uwsgi processes without the workers respawning.
I attach my ini file below.
[uwsgi]
chdir = /home/data/MyProject/
module = MyProject.wsgi
home = /home/data/anaconda3/envs/env1
master = true
pidfile = /tmp/MyProject.pid
single-interpreter = true
die-on-term = true
processes = 4
socket = /home/data/MyProject/MyProject.sock
chmod-socket = 666
vacuum = true
Thank you in advance!!
First:
uwsgi is a binary protocol that uWSGI uses to communicate with other servers. uWSGI is an application server.
You should avoid (1, 2) using -9 (SIGKILL) to stop a process if you care about its child processes (workers).
What I can say based on your information:
Whenever I use below commands to shutdown uwsgi, another uwsgi workers
respawned after a few seconds
It looks like you are trying to kill a worker (child) process rather than the uWSGI application server (master process). You have processes = 4 in your config, so the application server (master process) watches the minimum number of running workers (child processes). If one of them exits (by a KILL signal or a source-code exception, it does not matter), the application server starts a new fourth process.
So after shutdown uwsgi process and start up uwsgi server again
If the uwsgi process is a worker (child process) - see the answer above.
If the uwsgi process is the application server (master process) - there is another problem. You are using the KILL (-9) signal to stop the server, and that does not allow the application to exit properly (see the second point under "First"). So when your application server is killed unexpectedly, it leaves all 4 child processes running without a parent (master) process (say hello to orphaned processes).
You should use SIGTERM instead of SIGKILL. Do you understand the meaning of die-on-term = true? Yes! It means: please stop the whole stack of processes on SIGTERM.
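For example, with the pidfile path from your config, a graceful shutdown could look like this (a sketch; with die-on-term = true the whole stack exits on SIGTERM):

kill -TERM $(cat /tmp/MyProject.pid)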
uwsgi --stop /tmp/MyProject.pid
This command should stop all processes properly. There is not enough information in your question to determine what the problem may be...
I have three guesses:
a problem in the web application source: the exit operation is not handled properly
die-on-term = true inverts the meanings of SIGTERM and SIGQUIT for uWSGI, so maybe stop works like reload in this case? Not sure.
some kind of misunderstanding
Update 1: how to control the number of child processes dynamically
In addition, you may check this helpful article to understand how to scale child processes in and out dynamically, so that all 4 processes run only when they are needed and only 1 process runs when the service is idle.
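As a rough sketch (not necessarily what the article describes, just one possible illustration using uWSGI's built-in cheaper subsystem), the worker count can be made elastic like this:

[uwsgi]
# keep the rest of your config as-is
# maximum number of workers
workers = 4
# minimum number of workers kept alive when the service is idle
cheaper = 1
# number of workers started at boot
cheaper-initial = 1
# scaling algorithm and how many workers to spawn at a time
cheaper-algo = spare
cheaper-step = 1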
Update 2: Check if the behaviour is reproducible
I created a simple uWSGI application to check the behaviour:
opt
`-- wsgi_example
    |-- app.py
    `-- wsgi.ini
wsgi.ini
[uwsgi]
chdir = /opt/wsgi_example
module = app
master = true
pidfile = /opt/wsgi_example/app.pid
single-interpreter = true
die-on-term = true
processes = 2
socket = /opt/wsgi_example/app.sock
chmod-socket = 666
vacuum = true
app.py
def application(env, start_response):
start_response('200 OK', [('Content-Type','text/html')])
return [b"Hello World"]
I run this application with uwsgi wsgi.ini
I stop this application with uwsgi --stop app.pid
The output is:
spawned uWSGI master process (pid: 15797)
spawned uWSGI worker 1 (pid: 15798, cores: 1)
spawned uWSGI worker 2 (pid: 15799, cores: 1)
SIGINT/SIGQUIT received...killing workers...
worker 1 buried after 1 seconds
worker 2 buried after 1 seconds
goodbye to uWSGI.
VACUUM: pidfile removed.
VACUUM: unix socket /opt/wsgi_example/app.sock removed.
Everything works properly. All processes are stopped.
Your problem is not reproducible.
Look for the problem inside your application code or in your individual infrastructure configuration.
※ checked with uWSGI 2.0.8 and 2.0.19.1 and python 3.6.7
I have a Flask application running inside a container, where I have set up logging with a StreamHandler(), so the logs are sent to stdout.
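Roughly, the logging setup looks like this (a simplified sketch, not the exact code; the format string and level are placeholders):

import logging
import sys

from flask import Flask

app = Flask(__name__)

# Send application logs to stdout so the container runtime can collect them.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s (%(process)d) - %(levelname)s - %(message)s"))
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)

app.logger.info("Initial starting of app...")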
When my uwsgi.ini file includes a statement to redirect the logs to a file (using logto), the logs from the application appear in the log file, intermixed with the uWSGI logs (as expected).
But when I remove logto from the uwsgi.ini (because I want those logs sent to the Docker container stdout), only the uWSGI logs are visible in the Docker container logs; the application logs are not. (The uWSGI logs were there even before.)
uwsgi.ini:
[uwsgi]
base = /app_home/app
wsgi-file = /app_home/app/wsgi.py
callable = app
socket = /tmp/uwsgi.sock
chmod-socket = 666
# Log directory - we needed this to be turned off, so the logs are sent to STDOUT
# logto = /var/log/uwsgi/app.log
vacuum = true
master = true
processes = 3
enable-threads = true
uid = app
gid = app
master-fifo = /tmp/fifo0
master-fifo = /tmp/fifo1
chdir = /app_home/app
When logto is enabled, the log file includes the app's logs (as it should):
[2019-03-05 17:19:05,415] INFO in __init__: Initial starting of app...
2019-03-05 17:19:05,415 (9) - INFO - Initial starting of app...- [in ./app/__init__.py:128, function:create_app]
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x1ad49b0 pid: 9 (default app)
*** uWSGI is running in multiple interpreter mode ***
gracefully (RE)spawned uWSGI master process (pid: 9)
spawned uWSGI worker 1 (pid: 32, cores: 1)
Once logto is disabled, there is no log file (as expected), but there are no app logs in the Docker container log either. The Docker container log looks exactly the same as before:
2019-03-05T22:19:09.956784133Z 2019-03-05 17:19:09,956 CRIT Supervisor running as root (no user in config file)
2019-03-05T22:19:09.959701644Z 2019-03-05 17:19:09,959 INFO supervisord started with pid 1
2019-03-05T22:19:10.961366502Z 2019-03-05 17:19:10,961 INFO spawned: 'nginx' with pid 9
2019-03-05T22:19:10.963312945Z 2019-03-05 17:19:10,962 INFO spawned: 'uwsgi' with pid 10
2019-03-05T22:19:12.928470278Z 2019-03-05 17:19:12,928 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-03-05T22:19:12.928498809Z 2019-03-05 17:19:12,928 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
The uWSGI documentation shows that logs are sent to stdout/stderr by default (see https://uwsgi-docs.readthedocs.io/en/latest/Logging.html), so I don't really see why the app's logs would not be sent to stdout along with uWSGI's own logs, yet do end up in a file with logto.
There are two issues:
As uWSGI is running in the Docker container under Supervisor, you need to make Supervisor redirect the stdout of uwsgi to its own stdout.
supervisord.conf:
[program:uwsgi]
command=/usr/local/bin/uwsgi --ini /app_home/app/uwsgi.ini --uid app --gid app
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
While this has worked with other applications (like nginx), it was not enough for uWSGI; see step #2:
A special flag (--log-master) needs to be added to uWSGI to delegate the logging to the master process:
[program:uwsgi]
command=/usr/local/bin/uwsgi --ini /app_home/app/uwsgi.ini --uid app --gid app --log-master
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
And then, the logs from the Flask applications are visible in the Docker container log.
This answer was posted as an edit to the question UWSGI does not redirect application stdout to stdout, only to file by the OP Zoltan Fedor under CC BY-SA 4.0.
I've been trying to run my applications in emperor mode and got it working, but the problem is that the moment I run emperor mode my computer slows down like crazy and I can't do anything. My configuration files for both applications are similar.
[uwsgi]
module = Restful
chdir = path
processes = 2
http-socket = :5001
chmod-socket = 660
vacuum = true
die-on-term = true
[uwsgi]
module = Flaskapp
chdir = /home/muba/PycharmProjects/Work/
wsgi-file = Work/wsgi.py
processes = 2
http-socket = :5000
chmod-socket = 660
vacuum = true
die-on-term = true
The command I run is:
uwsgi --emperor vassals --uid http --gid http --master
It works and I see that both my apps are running at the same time, but a few seconds later my laptop slows down. Am I doing anything wrong? It was working the first time I tried; after that it slowed down. I also made an emperor.ini file in my vassals directory.
Wait, you have an emperor.ini file in the vassals directory? If it has configuration similar to what you put on your uwsgi command line, then it is probably being picked up as a vassal and run twice, which could be what is slowing down your computer.
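One way to avoid that (a sketch; the paths and file names below are hypothetical): keep only the application configs inside the vassals directory, and store the emperor options, if you want them in a file, somewhere else:

# /etc/uwsgi/emperor.ini (kept outside the vassals directory)
[uwsgi]
emperor = /path/to/vassals
uid = http
gid = http
master = true

Then start everything with uwsgi --ini /etc/uwsgi/emperor.ini and leave only the two application configs (for example Restful.ini and Flaskapp.ini) inside the vassals directory.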
I am running a uwsgi/Flask Python app in a conda virtual environment using Python 2.7.11.
I am moving from CentOS 6 to CentOS 7 and want to use systemd to run my app as a service. Everything (config and code) works fine if I manually call the start script for my app (sh start-foo.sh), but when I try to start it as a systemd service (sudo systemctl start foo) it starts the app and then fails right away with the following error:
WSGI app 0 (mountpoint='') ready in 8 seconds on interpreter 0x14c38d0 pid: 3504 (default app)
mountpoint already configured. skip.
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 3504)
emperor_notify_ready()/write(): Broken pipe [core/emperor.c line 2463]
VACUUM: pidfile removed.
Here is my systemd Unit file:
[Unit]
Description=foo
[Service]
ExecStart=/bin/bash /app/foo/bin/start-foo.sh
ExecStop=/bin/bash /app/foo/bin/stop-foo.sh
[Install]
WantedBy=multi-user.target
Not sure if necessary, but here are my uwsgi emperor and vassal configs:
Emperor
[uwsgi]
emperor = /app/foo/conf/vassals/
daemonize = /var/log/foo/emperor.log
Vassal
[uwsgi]
http-timeout = 500
chdir = /app/foo/scripts
pidfile = /app/foo/scripts/foo.pid
#socket = /app/foo/scripts/foo.soc
http = :8888
wsgi-file = /app/foo/scripts/foo.py
master = 1
processes = %(%k * 2)
threads = 1
module = foo
callable = app
vacuum = True
daemonize = /var/log/foo/uwsgi.log
I tried to Google this issue but can't seem to find anything related. I suspect it has something to do with running uwsgi in a virtual environment and using systemctl to start it. I'm a systemd n00b, so let me know if I'm doing something wrong in my Unit file.
This is not a blocker because I can still start/stop my app by executing the scripts manually, but I would like to be able to run it as a service and automatically launch it on startup using systemd.
Following the instructions here in uWSGI's documentation on setting up a systemd service fixed the problem.
Here is what I changed:
Removed daemonize from both Emperor and Vassal configs.
Took the Unit file from the link above and modified it slightly to work with my app:
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/app/foo/bin/uwsgi /app/foo/conf/emperor.ini
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
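With the unit saved (for example as /etc/systemd/system/uwsgi.service; the file name here is just an assumption), the usual systemd workflow applies:

sudo systemctl daemon-reload
sudo systemctl enable uwsgi.service
sudo systemctl start uwsgi.service
journalctl -u uwsgi.service -f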
I have a flask server running on top of uWSGI, with the following configuration:
[uwsgi]
http-socket = :9000
plugin = python
wsgi-file = /.../whatever.py
enable-threads = true
The flask server has a background thread which makes periodic calls to another server, using the following command:
r = requests.get(...)
I've added logging before and after this command, and it seems that the command never returns, and the thread just stops there.
Any idea why the background thread is hanging? Note that I've added enable-threads = true to the configuration.
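Roughly, the background thread looks like this (a simplified sketch; the URL, polling interval, and print-based logging are placeholders, not the real code):

import threading
import time

import requests

POLL_URL = "http://example.com/status"  # placeholder target


def poll_forever():
    while True:
        # No timeout here, so this call can block indefinitely if the
        # remote server accepts the connection but never responds.
        r = requests.get(POLL_URL)
        print("poll status:", r.status_code)
        time.sleep(60)


t = threading.Thread(target=poll_forever)
t.daemon = True
t.start()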
Updates
I've added a timeout parameter to requests.get(). Now the behaviour is unexpected: the background thread works on one server, but fails on another.
Killing all the uWSGI instances and restarting them using sudo service uwsgi restart solved the problem.
It seems that sudo service uwsgi stop does not actually stop all the instances of uwsgi.
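If in doubt, you can check for leftover instances and stop them via their pidfiles (a sketch; the pidfile path depends on your setup):

pgrep -af uwsgi
uwsgi --stop /path/to/instance.pid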