uwsgi+flask to start|stop a python daemon process - python

I have an app written in Python with Flask, deployed using uWSGI + nginx. Here is my uwsgi config:
[uwsgi]
master=true
socket = :8223
chdir= /SWS/swdzweb
wsgi-file = manage.py
callable = app
processes = 4
threads = 2
My app responds to a request that starts or stops a daemon process, also written in Python, as below.
In the request function I do:
os.system("python /SWS/webservice.py %s" % cmd)
where cmd is start|stop. My daemon process is single-process and single-threaded, and it catches SIGTERM and exits, like this:
signal(SIGTERM, lambda signo, frame: sys.exit(0))
But when I start this daemon process through uwsgi in my request function, I cannot stop it, for example with
kill -15 pid or python /SWS/webservice.py stop
It is as if the SIGTERM signal is never delivered to my daemon process.
However, when I configure uwsgi with 4 processes and 1 thread, this works fine. The config looks like this:
[uwsgi]
master=true
socket = :8223
chdir= /SWS/swdzweb
wsgi-file = manage.py
callable = app
processes = 4
threads = 1
I cannot figure out the reason, so I have to ask for help.
Thanks!
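Not from the question, but one possible workaround sketch: launch the daemon via subprocess in its own session instead of os.system, so it is fully detached from the uwsgi worker and SIGTERM delivery does not depend on the worker's thread setup. The sleep command below is just a stand-in for /SWS/webservice.py.

```python
import os
import signal
import subprocess

def start_daemon(args):
    # start_new_session=True runs setsid() in the child, so the daemon gets
    # its own session and process group, detached from the uwsgi worker.
    return subprocess.Popen(args, start_new_session=True)

def stop_daemon(pid):
    # Deliver SIGTERM straight to the daemon; its handler can sys.exit(0).
    os.kill(pid, signal.SIGTERM)
```

With this, kill -15 <pid> reaches the daemon directly, regardless of how many threads the uwsgi workers run.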

Related

Django and Subprocess: Development Server is Terminated on signal.SIGTERM

I am using subprocess.Popen in a Django application and store the pid in a database. The user can then send a REST request to /api/task/task.id/stop/ to kill the task.
This all works fine, except that when I kill the subprocess, the Django development server is also terminated. How do I kill the subprocess without killing the dev server?
I start my process like this:
process = subprocess.Popen(["nohup {} {} {}".format(settings.PYTHON, "task.py", task.id)], shell=True)
task.pid = process.pid
task.save()
And I am terminating it like this:
os.killpg(os.getpgid(task.pid), signal.SIGTERM)
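A hedged sketch of one common fix (an assumption, not part of the original post): start the task in its own process group with preexec_fn=os.setsid, so os.killpg signals only the task's group and not the group the dev server lives in. The sleep command stands in for the real task.py invocation.

```python
import os
import signal
import subprocess

def start_task(args):
    # os.setsid in the child makes it a new session/process-group leader,
    # so its pgid is its own pid, separate from the Django server's group.
    return subprocess.Popen(args, preexec_fn=os.setsid)

def stop_task(pid):
    # Signals only the task's process group; the dev server is unaffected.
    os.killpg(os.getpgid(pid), signal.SIGTERM)
```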

uwsgi subprocess creates a new process, the socket stays open (close-wait)

I use bottle and uwsgi.
uwsgi config:
[uwsgi]
http-socket = :8087
processes = 4
workers=4
master = true
file=app.py
app.py:
import bottle
import os

application = bottle.app()

@bottle.route('/test')
def test():
    os.system('./restart_ssdb.sh')

if __name__ == '__main__':
    bottle.run()
restart_ssdb.sh(just restart a service and do not care about what the service is):
./ssdb-server -d ssdb.conf -s restart
Then I start uwsgi and it works well.
Then I access the url: 127.0.0.1/test
The screenshot shows that one of the uwsgi processes has become the ssdb server.
Then I stop uwsgi:
Port 8087 now belongs to ssdb. This prevents the uwsgi server from restarting, because the port is already in use.
What causes the problem in Figure 2 to appear?
I just want to execute the shell script (restart the ssdb server), but it must be guaranteed not to affect the uwsgi server. What can I do?
http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
I solved it by setting the close-on-exec option in my uwsgi config.
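A minimal sketch of what that might look like, assuming the original settings from the question:

```ini
[uwsgi]
http-socket = :8087
master = true
processes = 4
file = app.py
; mark uwsgi's file descriptors close-on-exec, so processes spawned via
; exec (here: ssdb-server) do not inherit the listening socket
close-on-exec = true
```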

Background threads not starting before server shutdown

I'm having some trouble getting simple multi-threading functionality up and running in my web application.
I'm using Flask, uwsgi and nginx on Ubuntu 12.04.
Each time I start a new thread, it will not execute before I shut down the uwsgi server. It's very odd!
If it's a simple task (e.g. printing), it will execute as expected 9/10 times. If it's a heavy computing job (e.g. OCR on a file), it will always start executing only when the server is restarting (shutting down).
Any idea why my code does not perform as expected?
Code:
def hello_world(world):
    print "Hello, " + world  # this gets printed only when the uwsgi server restarts

def thread_test():
    x = "World!"
    t = threading.Thread(target=hello_world, args=(x,))
    t.start()

@application.route('/api/test')
def test():
    thread_test()
    return "Hello, World!", 200
EDIT 1:
My uwsgi configuration looks like this:
[uwsgi]
chdir = /Users/vingtoft/Documents/Development/archii/server/archii2/
pythonpath = /Users/vingtoft/Documents/Development/archii/server/archii2/
pythonpath = /Users/vingtoft/Documents/Development/archii/server/ml/
module = app.app:application
master = True
vacuum = True
socket = /tmp/archii.sock
processes = 4
pidfile = /Users/vingtoft/Documents/Development/archii/server/archii2/uwsgi.pid
daemonize = /Users/vingtoft/Documents/Development/archii/server/archii2/uwsgi.log
virtualenv = /Users/vingtoft/Documents/Development/virtualenv/flask/
wsgi-file = /Users/vingtoft/Documents/Development/archii/server/archii2/app/app.py
ssl = True
By default, the uWSGI server disables thread support for a small performance gain, but you can enable it again using either:
threads = 2  # or any greater number
or
enable-threads = true
Be warned that the first method tells uWSGI to create 2 threads for each of your workers, so with 4 workers you will end up with 8 actual threads.
Those threads work as separate workers, so they are not available for your background jobs; but using any number of threads greater than one enables thread support in the uWSGI server, so you can then create more threads of your own for background tasks.
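For reference, a minimal sketch of the relevant addition to the config above (all other options as in the question):

```ini
[uwsgi]
; enable the Python GIL / threading machinery in the workers without
; creating extra request-handling threads ...
enable-threads = true
; ... or, alternatively, give each worker request threads, which also
; enables threading support as a side effect:
; threads = 2
```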

Single apscheduler instance in Flask application

Setup:
Flask application running in Apache's httpd via wsgi
Single wsgi process with 25 threads: WSGIDaemonProcess myapp threads=25
apscheduler to run jobs (send emails)
RethinkDB as the backend for the job store
I'm trying to prevent apscheduler from running the same job multiple times by preventing multiple instances of apscheduler from starting. Currently I'm using the following code to ensure the scheduler is only started once:
if 'SCHEDULER' not in app.config or app.config['SCHEDULER'] is None:
    logger.info("Configuring scheduler")
    app.config['SCHEDULER'] = scheduler.configure()
However, when I look at my logs, I see the scheduler being started twice:
[07:07:56.796001 pid 24778 INFO] main.py 57:Configuring scheduler
[07:07:56.807977 pid 24778 INFO] base.py 132:Scheduler started
[07:07:56.812253 pid 24778 DEBUG] base.py 795:Looking for jobs to run
[07:07:56.818019 pid 24778 DEBUG] base.py 840:Next wakeup is due at-10-14 11:30:00+00:00 (in 1323.187678 seconds)
[07:07:57.919869 pid 24777 INFO] main.py 57:Configuring scheduler
[07:07:57.930654 pid 24777 INFO] base.py 132:Scheduler started
[07:07:57.935212 pid 24777 DEBUG] base.py 795:Looking for jobs to run
[07:07:57.939795 pid 24777 DEBUG] base.py 840:Next wakeup is due at-10-14 11:30:00+00:00 (in 1322.064753 seconds)
As can be seen from the pids, two processes are being started somewhere/somehow. How can I prevent this? Where is this configured in httpd?
Say I did want two processes running: I could use flock to prevent apscheduler from starting twice. However, this won't work, because the process that does NOT start apscheduler won't be able to add/remove jobs, since app.config['SCHEDULER'] is not set in that process.
What is the best way to configure/set up a Flask web app with multiple processes that can all add/remove jobs, yet prevent the scheduler from running each job multiple times?
I finally settled on using a file-based lock to ensure that the task doesn't run twice:
from fcntl import flock, LOCK_EX, LOCK_NB, LOCK_UN
from time import sleep

def get_lock(name):
    fd = open('/tmp/' + name, 'w')
    try:
        flock(fd, LOCK_EX | LOCK_NB)  # acquire an exclusive, non-blocking lock
        return fd
    except IOError:
        logger.warn('Could not get the lock for ' + str(name))
        fd.close()
        return None

def release_lock(fd):
    sleep(2)  # hold the lock a bit longer in the hope that it blocks the other proc
    flock(fd, LOCK_UN)
    fd.close()
It's a bit of a hack, but it seems to be working...
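A self-contained sketch of the same pattern (the names here are illustrative, not the author's): wrap a job so that only the process that wins the lock actually runs it, and everyone else skips it.

```python
from fcntl import flock, LOCK_EX, LOCK_NB, LOCK_UN

def run_exclusively(name, job):
    # Only one process at a time can hold the lock file; any other
    # process skips the job instead of running it a second time.
    fd = open('/tmp/' + name + '.lock', 'w')
    try:
        flock(fd, LOCK_EX | LOCK_NB)
    except IOError:
        fd.close()
        return False  # another process already holds the lock
    try:
        job()
        return True
    finally:
        flock(fd, LOCK_UN)
        fd.close()
```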

Where to place register code to zookeeper when using nd_service_registry with uwsgi+Django stack?

I'm using nd_service_registry to register my django service to zookeeper; the service is launched with uwsgi.
versions:
uWSGI==2.0.10
Django==1.7.5
My question is: where is the correct place to put the nd_service_registry.set_node code so the service registers itself with the zookeeper server, avoiding duplicate registration or deregistration?
My uwsgi config ini, with processes = 2, enable-threads = true, threads = 2:
[uwsgi]
chdir = /data/www/django-proj/src
module = settings.wsgi:application
env = DJANGO_SETTINGS_MODULE=settings.test
master = true
pidfile = /tmp/uwsgi-proj.pid
socket = /tmp/uwsgi_proj.sock
processes = 2
threads = 2
harakiri = 20
max-requests = 50000
vacuum = true
home = /data/www/django-proj/env
enable-threads = true
buffer-size = 65535
chmod-socket=666
register code:
from nd_service_registry import KazooServiceRegistry
nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})
I've tested the following cases and both worked as expected: the django service registered only once, at uwsgi master process startup.
place the code in settings.py
place the code in wsgi.py
Even if I kill a uwsgi worker process (the master process then relaunches another worker) or let the uwsgi harakiri option kill and restart a worker, no new register action is triggered.
So my question is whether my register code is correct for django + uwsgi with processes and threads enabled, and where to place it.
The problem happens when you run uwsgi in master/worker mode. When the uwsgi master process spawns workers, the zookeeper connection, which is maintained by a thread inside the zookeeper client, is not copied to the workers correctly. So in a uwsgi application you should use the uwsgidecorators.postfork decorator to run your register code: a function decorated with @postfork is called each time a new worker is spawned.
Hope it helps.
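A sketch of what the answer describes. This assumes the uwsgi runtime, since uwsgidecorators is only importable inside uwsgi; the node path and values are taken from the question, and the settings import location is hypothetical.

```python
# wsgi.py -- runs inside uwsgi only; uwsgidecorators is provided by uwsgi itself
from uwsgidecorators import postfork

@postfork
def register_service():
    # Runs in every freshly spawned worker, so each worker builds its own
    # zookeeper connection instead of inheriting a broken one from the master.
    from nd_service_registry import KazooServiceRegistry
    from settings import ZOOKEEPER_SERVER_URL  # hypothetical location
    nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
    nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})
```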
