I've been trying to run my applications in Emperor mode and got it working, but the moment I start Emperor mode my computer slows down so much that I can't do anything. The configuration files for both applications are similar.
[uwsgi]
module = Restful
chdir = path
processes = 2
http-socket = :5001
chmod-socket = 660
vacuum = true
die-on-term = true
[uwsgi]
module = Flaskapp
chdir = /home/muba/PycharmProjects/Work/
wsgi-file = Work/wsgi.py
processes = 2
http-socket = :5000
chmod-socket = 660
vacuum = true
die-on-term = true
The command I run is
uwsgi --emperor vassals --uid http --gid http --master
It works and I can see both my apps running at the same time, but a few seconds later my laptop slows down. Am I doing anything wrong? It worked the first time I tried; after that it slowed down. I also made an emperor.ini file in my vassals directory.
Wait, you have an emperor.ini file inside your vassals directory? The Emperor treats every .ini file in that directory as a vassal to spawn, so if emperor.ini contains configuration similar to what you pass on the command line, it is probably being run a second time (possibly spawning Emperors recursively) and slowing down your computer.
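If that is the cause, keep only the application configs in the vassals directory and move everything else out of it. A minimal sketch, with hypothetical file names:
vassals
|-- restful.ini
`-- flaskapp.ini
uwsgi --emperor vassals --uid http --gid http --master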
Related
I have had trouble killing uwsgi processes.
Whenever I use the commands below to shut down uwsgi, new uwsgi workers respawn after a few seconds. So after I shut down the uwsgi process and start the uwsgi server again, uwsgi processes keep accumulating in memory.
sudo pkill -f uwsgi -9
kill -9 $(cat /tmp/MyProject.pid)
uwsgi --stop /tmp/MyProject.pid
I want to know how to kill the uwsgi processes without the workers respawning.
I attach my ini file below.
[uwsgi]
chdir = /home/data/MyProject/
module = MyProject.wsgi
home = /home/data/anaconda3/envs/env1
master = true
pidfile = /tmp/MyProject.pid
single-interpreter = true
die-on-term = true
processes = 4
socket = /home/data/MyProject/MyProject.sock
chmod-socket = 666
vacuum = true
Thank you in advance!!
First of all:
uwsgi is a binary protocol that uWSGI uses to communicate with other servers; uWSGI itself is an application server.
You should avoid (1, 2) using -9 (SIGKILL) to stop a process if you care about its child processes (workers).
What I can say based on your information:
Whenever I use the commands below to shut down uwsgi, new uwsgi workers respawn after a few seconds
It looks like you are trying to kill a worker (child) process, not the uWSGI application server (master process). You have processes = 4 in your config, so the application server (master process) watches to keep that minimum number of workers (child processes) running. If one of them exits (whether from a KILL signal or a source-code exception, it does not matter), the application server starts a new process to get back to 4.
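You can watch this happen in the master's log. Assuming a worker pid of 12345 purely for illustration, killing it produces output along these lines:
kill -9 12345
DAMN ! worker 1 (pid: 12345) died, killed by signal 9 :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 12346)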
So after I shut down the uwsgi process and start up the uwsgi server again
If the uwsgi process is a worker (child process), see the answer above.
If the uwsgi process is the application server (master process), there is another problem. You are using the KILL (-9) signal to stop the server, which does not allow the application to exit properly (see the second point under "First of all" above). So when your application server is killed unexpectedly, it leaves all 4 child processes running without a parent (master) process (say hello to orphaned processes).
You should use SIGTERM instead of SIGKILL. Do you understand the meaning of die-on-term = true? Yeah! It means: please stop the whole stack of processes on SIGTERM.
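For example, a graceful stop using the pidfile path from your own config:
kill -TERM $(cat /tmp/MyProject.pid)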
uwsgi --stop /tmp/MyProject.pid
This command should stop all processes properly. There is not enough information in your question to decide what the problem may be...
I have three guesses:
a problem in the web application source: the exit operation is not handled properly
die-on-term = true inverts the meanings of SIGTERM and SIGQUIT for uWSGI, so maybe stop behaves like reload in this case? I am not sure.
some kind of misunderstanding
Update 1: how to control the number of child processes dynamically
In addition, you may check this helpful article to understand how to scale child processes in and out dynamically, so that all 4 processes run only when they are needed and a single process runs while the service is idle.
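The built-in mechanism for this is uWSGI's cheaper subsystem. A minimal sketch of the relevant options (the values are illustrative, not a recommendation):
[uwsgi]
; upper bound: at most 4 workers under load
processes = 4
; keep a single worker alive while the service is idle
cheaper = 1
; number of workers to start at boot
cheaper-initial = 1
; spawn workers one at a time as load grows
cheaper-step = 1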
Update 2: check if the behaviour is reproducible
I created a simple uWSGI application to check the behaviour:
opt
`-- wsgi_example
|-- app.py
`-- wsgi.ini
wsgi.ini
[uwsgi]
chdir = /opt/wsgi_example
module = app
master = true
pidfile = /opt/wsgi_example/app.pid
single-interpreter = true
die-on-term = true
processes = 2
socket = /opt/wsgi_example/app.sock
chmod-socket = 666
vacuum = true
app.py
def application(env, start_response):
start_response('200 OK', [('Content-Type','text/html')])
return [b"Hello World"]
I start this application with uwsgi wsgi.ini
I stop it with uwsgi --stop app.pid
The output is:
spawned uWSGI master process (pid: 15797)
spawned uWSGI worker 1 (pid: 15798, cores: 1)
spawned uWSGI worker 2 (pid: 15799, cores: 1)
SIGINT/SIGQUIT received...killing workers...
worker 1 buried after 1 seconds
worker 2 buried after 1 seconds
goodbye to uWSGI.
VACUUM: pidfile removed.
VACUUM: unix socket /opt/wsgi_example/app.sock removed.
Everything works properly and all processes are stopped.
Your problem is not reproducible.
Search for the problem inside your application code or your individual infrastructure configuration.
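To double-check that nothing is left behind after a stop, you can list any surviving processes (plain Linux tooling, nothing uWSGI-specific):
pgrep -af uwsgi
No output means no uwsgi processes are left running.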
※ checked with uWSGI 2.0.8 and 2.0.19.1 and python 3.6.7
I am trying to set up the uwsgi.ini file so that it works with a Docker container.
In the Dockerfile, I have exposed port 8888. Below are the pieces of the Dockerfile that are related to this problem:
Dockerfile
EXPOSE 8888
ENV DOCKER_CONTAINER=1
#CMD ["uwsgi", "--ini", "/code/uwsgi.ini"] <<< right now, this is commented out
CMD ["/bin/bash"]
Above, the CMD to run the uwsgi.ini file is commented out because it did not initially work for me. I changed the CMD to "/bin/bash" so that I could log in at the OS level of the container. After doing so, I ran the command below:
uwsgi --http 923b235d270e:8888 --chdir=/code/backendworkproj --module=backendworkproj.wsgi:application --env DJANGO_SETTINGS_MODULE=backendworkproj.settings --master --pidfile=/tmp/backendworkproj-master.pid --socket=127.0.0.1:49152 --processes=5 --uid=1000 --gid=2000 --harakiri=20 --max-requests=5000 --vacuum
Once complete, I was able to go to port 8888 on the machine and see the website.
So, in short, everything worked.
The problem I am facing now is converting the command above into something that will work in the uwsgi.ini file.
If you look at part of the command above, I used:
--http 923b235d270e:8888
to specify the port. 923b235d270e is the container's ID/hostname (since 127.0.0.1 did not work).
How can I represent this (and env variables like DJANGO_SETTINGS_MODULE ) properly in the uwsgi file so that the server will work? Below is the .ini file I have.
TIA
uwsgi.ini
[uwsgi]
--http 923b235d270e:8888
chdir=/code/backendworkproj
module=backendworkproj.wsgi:application
--env DJANGO_SETTINGS_MODULE=backendworkproj.settings
master=True
pidfile=/tmp/backendworkproj-master.pid
socket=127.0.0.1:49152
processes=5
uid=1000
gid=2000
harakiri=20
max-requests=5000
vacuum=True
Never mind. This configuration worked.
[uwsgi]
http-socket = :8888
chdir = /code/backendworkproj
module = backendworkproj.wsgi:application
env = DJANGO_SETTINGS_MODULE=backendworkproj.settings
master = True
pidfile = /tmp/backendworkproj-master.pid
socket = 127.0.0.1:49152
processes = 5
uid = 1000
gid = 2000
harakiri = 20
max-requests = 5000
vacuum = True
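For completeness: with a working ini file, the originally commented-out CMD from the Dockerfile should presumably work again (assuming the ini ends up at /code/uwsgi.ini, as in the question):
CMD ["uwsgi", "--ini", "/code/uwsgi.ini"]
Note that http-socket = :8888 binds to all interfaces, which sidesteps hard-coding the container ID/hostname the way --http 923b235d270e:8888 did.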
I am running a uwsgi/flask python app in a conda virtual environment using python 2.7.11.
I am moving from CentOS 6 to CentOS 7 and want to make use of systemd to run my app as a service. Everything (config and code) works fine if I manually call the start script for my app (sh start-foo.sh), but when I try to start it as a systemd service (sudo systemctl start foo) it starts the app and then fails right away with the following error:
WSGI app 0 (mountpoint='') ready in 8 seconds on interpreter 0x14c38d0 pid: 3504 (default app)
mountpoint already configured. skip.
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 3504)
emperor_notify_ready()/write(): Broken pipe [core/emperor.c line 2463]
VACUUM: pidfile removed.
Here is my systemd Unit file:
[Unit]
Description=foo
[Service]
ExecStart=/bin/bash /app/foo/bin/start-foo.sh
ExecStop=/bin/bash /app/foo/bin/stop-foo.sh
[Install]
WantedBy=multi-user.target
Not sure if necessary, but here are my uwsgi emperor and vassal configs:
Emperor
[uwsgi]
emperor = /app/foo/conf/vassals/
daemonize = /var/log/foo/emperor.log
Vassal
[uwsgi]
http-timeout = 500
chdir = /app/foo/scripts
pidfile = /app/foo/scripts/foo.pid
#socket = /app/foo/scripts/foo.soc
http = :8888
wsgi-file = /app/foo/scripts/foo.py
master = 1
processes = %(%k * 2)
threads = 1
module = foo
callable = app
vacuum = True
daemonize = /var/log/foo/uwsgi.log
I tried to Google this issue but can't seem to find anything related. I suspect it has something to do with running uwsgi in a virtual environment and starting it with systemctl. I'm a systemd n00b, so let me know if I'm doing something wrong in my Unit file.
This is not a blocker because I can still start/stop my app by executing the scripts manually, but I would like to be able to run it as a service and automatically launch it on startup using systemd.
Following the instructions here in uWSGI's documentation regarding setting up a systemd service fixed the problem.
Here is what I changed:
Removed daemonize from both the Emperor and Vassal configs (systemd expects the process to stay in the foreground; daemonizing breaks the Type=notify handshake, which is likely what the emperor_notify_ready()/write(): Broken pipe error was hinting at).
Took the Unit file from the link above and modified it slightly to work with my app:
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/app/foo/bin/uwsgi /app/foo/conf/emperor.ini
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
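To activate it, assuming the unit file is saved as /etc/systemd/system/uwsgi.service (the path and name are an assumption, adjust to your setup):
sudo systemctl daemon-reload
sudo systemctl enable uwsgi
sudo systemctl start uwsgi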
I have a problem running uwsgi.
I run an application (Pyramid with zerorpc, gevent) in uwsgi, and some requests fail.
Python writes this error:
Assertion failed: ok (bundled/zeromq/src/mailbox.cpp:79)
Aborted
uWSGI worker 1 screams: UAAAAAAH my master disconnected: i will kill myself !!!
Why might there be such a problem?
uwsgi config:
[uwsgi]
socket = /tmp/sock.sock
chmod-socket = 666
master = true
processes = 1
vacuum = true
I run it like this:
uwsgi --ini-paste development.ini
The whole zeromq magic is managed by a background thread. A property of threads is that they "disappear" after fork(), so zeromq will not work in your uWSGI worker. Just add
lazy-apps = true
to your uWSGI options to load zeromq (read: your app) after each fork().
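Applied to the config from the question, that would look roughly like this:
[uwsgi]
socket = /tmp/sock.sock
chmod-socket = 666
master = true
processes = 1
vacuum = true
; load the app after each fork() so zeromq's background thread is created inside the worker
lazy-apps = true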
This is my uwsgi config:
[uwsgi]
uid = 500
listen=200
master = true
profiler = true
processes = 8
logdate = true
socket = 127.0.0.1:8000
module = www.wsgi
pythonpath = /root/www/
pythonpath = /root/www/www
pidfile = /root/www/www.pid
daemonize = /root/www/www.log
enable-threads = true
memory-report = true
limit-as = 6048
This is my Nginx config:
server {
    listen 80;
    server_name 119.254.35.221;

    location / {
        uwsgi_pass 127.0.0.1:8000;
        include uwsgi_params;
    }
}
Django works OK, but modified pages can't be seen unless I restart uwsgi. (What's more, since I configured 8 worker processes, I can sometimes see the modified page when I press Ctrl+F5 repeatedly; it seems only certain workers read and respond with the modified page, while the others still show the old one. Who caches the old page? I didn't configure anything about caching.)
I didn't configure anything special in Django, and it works well with "python manage.py runserver ...", but I have this problem when working with nginx+uwsgi.
(The nginx and uwsgi installations are both new; I'm sure nothing else is configured here.)
uwsgi does not reload your code automatically; only the development server does
runserver is for debug purposes; uwsgi and nginx are for production
in production you can restart uwsgi with service uwsgi restart or via an init.d script
there is an even better way to reload uwsgi, using touch-reload (see the sketch after this list)
usually there is no need to clean up .pyc files; that is only necessary when the timestamps on the files are wrong (I've seen it only a couple of times in my entire career)
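A minimal touch-reload sketch (the trigger file path is arbitrary, pick your own):
[uwsgi]
; gracefully reload the whole stack whenever this file's mtime changes
touch-reload = /root/www/reload.trigger
Then, after deploying new code, trigger a reload without a full restart:
touch /root/www/reload.trigger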
This is normal behavior. uwsgi will not re-read your code unless you restart it (it does not work like runserver with DEBUG=True).
If you have updated your code, restarted uwsgi, and cleared your browser cache, and it still doesn't reflect your changes, then you should delete the *.pyc files from your source directory.
I typically use this:
find . -name "*.pyc" -exec rm {} \;
Roughly speaking, a .pyc file is the "compiled" version of your code. Python will load this optimized version if it doesn't detect a change in the source. If you delete these files, it will re-read your source files.