Background threads not starting before server shutdown - python

I'm having some trouble getting simple multi-threading functionality up and running in my web application.
I'm using Flask, uWSGI and nginx on Ubuntu 12.04.
Each time I start a new thread, it will not execute until I shut down the uWSGI server. It's very odd!
If I'm doing a simple task (e.g. printing), it will execute as expected 9/10 times. If I do a heavy computing job (e.g. OCR on a file), it will always start executing only when the server is restarting (doing a shutdown).
Any idea why my code does not perform as expected?
Code:
import threading

def hello_world(world):
    print "Hello, " + world  # This will get printed when the uwsgi server restarts

def thread_test():
    x = "World!"
    t = threading.Thread(target=hello_world, args=(x,))
    t.start()

@application.route('/api/test')
def test():
    thread_test()
    return "Hello, World!", 200
EDIT 1:
My uwsgi configuration looks like this:
[uwsgi]
chdir = /Users/vingtoft/Documents/Development/archii/server/archii2/
pythonpath = /Users/vingtoft/Documents/Development/archii/server/archii2/
pythonpath = /Users/vingtoft/Documents/Development/archii/server/ml/
module = app.app:application
master = True
vacuum = True
socket = /tmp/archii.sock
processes = 4
pidfile = /Users/vingtoft/Documents/Development/archii/server/archii2/uwsgi.pid
daemonize = /Users/vingtoft/Documents/Development/archii/server/archii2/uwsgi.log
virtualenv = /Users/vingtoft/Documents/Development/virtualenv/flask/
wsgi-file = /Users/vingtoft/Documents/Development/archii/server/archii2/app/app.py
ssl = True

The uWSGI server disables thread support by default for a small performance gain, but you can enable it again using either:
threads = 2 # or any greater number
or
enable-threads = true
But be warned that the first method will tell uWSGI to create 2 threads for each of your workers, so with 4 workers you will end up with 8 actual threads.
Those threads will work as separate workers, so they are not available for your background jobs; but using any number of threads greater than one enables thread support in the uWSGI server, so you can then create more threads of your own for background tasks.
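Applied to the configuration in the question, the second method is one extra line in the ini file (a sketch keeping only the relevant options):

```ini
[uwsgi]
master = True
processes = 4
; let application code spawn its own background threads,
; without adding extra worker threads per process
enable-threads = true
```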

Related

Trouble getting Liveness Probe working for GKE non-http Docker build

I have a process that builds successfully with Docker but always fails the deployment step.
The process is a non-http quickly-running sweep of some files ... I tried adding a TCP liveness and readiness probe to the deploy.yaml in the /kubernetes directory for the GKE automated deployment setup.
I also: reversed the exit codes (it was returning 1 on success, so I made this 0 as Kubernetes expects) ...
Started with two threads: one a TCP server that does serve_forever at the end, and the other the real work process, with extra sleep to let Kubernetes catch up ...
from threading import Thread

if __name__ == '__main__':
    t = Thread(target=main, args=(None, None))
    t2 = Thread(target=tcpserve, args=([1]), daemon=True)
    t.start()
    t2.start()
I'm just about out of arrows on this; any suggestions?
I found it!
The TCP server I was using was started like this:
aServer = socketserver.TCPServer(("127.0.0.1", 8080), MyTCPRequestHandler)
But instead it needed to be this:
aServer = socketserver.TCPServer(("0.0.0.0", 8080), MyTCPRequestHandler)
splut ... I should have seen this earlier!!!
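The difference can be checked with the standard library alone (a sketch; MyTCPRequestHandler is a stand-in, and port 0 asks the OS for a free port just for the demo):

```python
import socketserver

class MyTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"ok")

# 127.0.0.1 only accepts connections from inside the container
# itself; kubelet's TCP probe connects from outside on the pod's
# IP, so the probe server must listen on all interfaces (0.0.0.0).
server = socketserver.TCPServer(("0.0.0.0", 0), MyTCPRequestHandler)
host, port = server.server_address
print(host)   # 0.0.0.0 — reachable on every interface
server.server_close()
```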

uwsgi+flask to start|stop a python daemon process

I have an app written in Python with Flask, deployed using uwsgi + nginx. Here is my uwsgi config:
[uwsgi]
master=true
socket = :8223
chdir= /SWS/swdzweb
wsgi-file = manage.py
callable = app
processes = 4
threads = 2
My app responds to a request that starts or stops a daemon process, also written in Python, as below.
In the request function I do:
os.system("python /SWS/webservice.py %s" % cmd)
where cmd is start|stop. My daemon is single-process and single-threaded, and it captures SIGTERM and exits, like this:
signal(SIGTERM, lambda signo, frame: sys.exit(0))
But when I start this daemon process through uwsgi in my request function, I can't stop it. For example:
kill -15 pid or python /SWS/webservice.py stop
It is as if the SIGTERM signal is never delivered to my daemon process.
However, when I configure uwsgi with 4 processes and 1 thread, this works fine. Config like this:
[uwsgi]
master=true
socket = :8223
chdir= /SWS/swdzweb
wsgi-file = manage.py
callable = app
processes = 4
threads = 1
I cannot figure out the reason, so I have to ask for help.
Thanks!

uwsgi subprocess create a new process,the socket close wait

I use bottle and uwsgi.
uwsgi config:
[uwsgi]
http-socket = :8087
processes = 4
workers=4
master = true
file=app.py
app.py:
import bottle
import os

application = bottle.app()

@bottle.route('/test')
def test():
    os.system('./restart_ssdb.sh')

if __name__ == '__main__':
    bottle.run()
restart_ssdb.sh (it just restarts a service; which service does not matter):
./ssdb-server -d ssdb.conf -s restart
Then I start uwsgi and it works well.
Then I access the url 127.0.0.1/test.
A process listing (first screenshot, omitted here) shows that one of the uwsgi processes has become the ssdb server.
Then I stop uwsgi:
Port 8087 now belongs to ssdb, which prevents uwsgi from restarting because the port is already in use.
What causes the problem in the second screenshot to appear?
I just want to execute the shell script (restart the ssdb server), but it must be guaranteed not to affect the uwsgi server. What can I do?
http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
I solved it by setting the
close-on-exec
option in my uwsgi config.
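What that option does can be sketched with the standard library (an illustration, not uWSGI's actual code): a listening socket whose FD is inheritable is kept open by anything the worker exec()s, such as the ssdb restart script, so the child holds the port after uWSGI stops. close-on-exec marks the FD to be closed across exec().

```python
import socket

# A throwaway listening socket bound to a free port.
s = socket.socket()
s.bind(("127.0.0.1", 0))
s.listen(1)

s.set_inheritable(True)    # inherited FD: a child process would hold the port
inherited = s.get_inheritable()

s.set_inheritable(False)   # the effect close-on-exec = true arranges
released = s.get_inheritable()

print(inherited, released)  # True False
s.close()
```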

Clients keeps waiting for RabbitMQ response

I am using RabbitMQ to launch processes on remote hosts located in other parts of the world. E.g., RabbitMQ is running on an Oregon host, and it receives a client message to launch processes in Ireland and California.
Most of the time the processes are launched and, when they finish, RabbitMQ returns the output to the client. But sometimes the jobs finish successfully yet RabbitMQ hasn't returned the output, and the client keeps hanging, waiting for the response. These processes can take 10 minutes to execute, so the client hangs for 10 minutes waiting for the response.
I am using Celery to connect to RabbitMQ, and the client calls are blocking, using task.get(). In other words, the client hangs until it receives the response to its call. I would like to understand why the client did not get the response if the jobs have finished. How can I debug this problem?
Here is my celeryconfig.py
import os
import sys
# add hadoop python to the env, just for the running
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
# broker configuration
# medusa-rabbitmq is the name of the hosts where rabbitmq is running
BROKER_URL = "amqp://celeryuser:celery@medusa-rabbitmq/celeryvhost"
CELERY_RESULT_BACKEND = "amqp"
TEST_RUNNER = 'celery.contrib.test_runner.run_tests'
# for debug
# CELERY_ALWAYS_EAGER = True
# module loaded
CELERY_IMPORTS = ("medusa.mergedirs", "medusa.medusasystem",
"medusa.utility", "medusa.pingdaemon", "medusa.hdfs", "medusa.vote.voting")

Where to place register code to zookeeper when using nd_service_registry with uwsgi+Django stack?

I'm using nd_service_registry to register my Django service with ZooKeeper; the service is launched with uWSGI.
versions:
uWSGI==2.0.10
Django==1.7.5
My question is: where is the correct place to put the nd_service_registry.set_node code that registers the service with the ZooKeeper server, avoiding duplicate registration or deregistration?
my uwsgi config ini, with processes=2, enable-threads=true, threads=2:
[uwsgi]
chdir = /data/www/django-proj/src
module = settings.wsgi:application
env = DJANGO_SETTINGS_MODULE=settings.test
master = true
pidfile = /tmp/uwsgi-proj.pid
socket = /tmp/uwsgi_proj.sock
processes = 2
threads = 2
harakiri = 20
max-requests = 50000
vacuum = true
home = /data/www/django-proj/env
enable-threads = true
buffer-size = 65535
chmod-socket=666
register code:
from nd_service_registry import KazooServiceRegistry
nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})
I've tested both of these cases, and both worked as expected: the Django service registered only once, at uwsgi master process startup:
place the code in settings.py
place the code in wsgi.py
Even if I kill uwsgi worker processes (the master then relaunches another worker), or let a worker be killed and restarted by the uwsgi harakiri option, no new register action is triggered.
So my question is whether my register code is correct for django+uwsgi with processes and threads enabled, and where to place it.
The problem happens when you use uwsgi in master/worker mode. When the uwsgi master process spawns workers, the connection to ZooKeeper maintained by a thread inside the ZooKeeper client cannot be copied to the workers correctly. So in a uwsgi application you should run the registration code with the uwsgi decorator uwsgidecorators.postfork: a function decorated with @postfork is called each time a new worker is spawned.
Hope it helps.
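The fork problem behind this can be demonstrated with the standard library alone (a sketch, POSIX only; heartbeat stands in for the Kazoo client's background connection thread):

```python
import os
import threading
import time

def heartbeat():
    # Stand-in for the ZooKeeper client's connection thread.
    while True:
        time.sleep(0.05)

threading.Thread(target=heartbeat, daemon=True).start()
assert threading.active_count() == 2  # main + heartbeat, in the "master"

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child ("worker"): only the thread that called fork() survives,
    # so the client's connection thread is silently gone here.
    os.write(w, str(threading.active_count()).encode())
    os._exit(0)
os.close(w)
count_in_child = int(os.read(r, 16).decode())
os.waitpid(pid, 0)
print(count_in_child)  # 1: no background threads exist in the worker
```

This is why the registration has to be re-run per worker after the fork boundary, which is exactly what a @postfork-decorated function provides.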
