So I'm building a long-running-query web app for internal use.
My goal is to have a Flask app with a daemon process that starts when the server starts and updates a global dictionary object.
I don't necessarily have any sample code to post, as I've tried to accomplish this many ways and none have been successful.
The daemon will create a worker pool (multiprocessing.Pool) to loop through all database instances and run a couple of queries on them.
It seems that no matter how I try to implement this (right now, using the Flask development server), it locks up the app and nothing else can be done while it's running. I have tried reading through a bunch of documentation, but as usual a lot of other knowledge is assumed and I end up overwhelmed.
I'm wondering if anyone can offer some guidance, even if it's just places I can look, because I have searched all over for 'flask startup routine' and similar, and have found nothing of use. It seems that when I deploy this to our server, I may be able to define some startup daemons in my .wsgi file, but until then is there any way to do this locally? Is that even the right approach when I do push it out for general use?
Otherwise, I was thinking of setting up a cron job that continuously runs a python script that does the queries I need and dumps the results to a MongoDB instance or something, so that the clients can simply pull from that (since doing all of the queries on the server side of the Flask app just locks up the server, so nothing else can be done with it -- i.e. I can't take action on the info, kill spids, etc.). Something like the rough sketch below.
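Roughly, I picture something like this (pymongo; the instance names, query function, and db/collection names are illustrative):
# collector.py - rough sketch of the cron-driven approach
from multiprocessing import Pool
from pymongo import MongoClient

INSTANCES = ['db1', 'db2']  # illustrative database instance names

def run_queries(instance):
    # ... run the real queries against this instance ...
    return {'_id': instance, 'status': 'ok'}

if __name__ == '__main__':
    pool = Pool(processes=4)
    results = pool.map(run_queries, INSTANCES)
    coll = MongoClient().monitoring.results  # illustrative db/collection
    for doc in results:
        # upsert so each instance keeps exactly one current document
        coll.replace_one({'_id': doc['_id']}, doc, upsert=True)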
Any help with this would help majorly; my brain has been spinning for days.
from flask import Flask
from celery import Celery
app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'amqp://guest@localhost//'
app.config['CELERY_RESULT_BACKEND'] = 'amqp://guest@localhost//'
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
output = 0
@app.before_first_request
def init():
    # kick off the background task before the first request is served
    task = my_task.apply_async()

@app.route('/')
def hello_world():
    global output
    return 'Hello World! - ' + str(output)

@celery.task
def my_task():
    global output
    result = 0
    for i in range(100):
        result += i
    output = result

if __name__ == '__main__':
    app.run()
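For this to do anything, a Celery worker also has to be running against the same module, started along the lines of celery worker -A yourmodule.celery (module name illustrative). Note as well that my_task executes in the worker process, so the global output in the web process will never actually change; the result has to travel through something shared, such as the configured result backend or a database.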
Depending on how complex your query is, you could consider running it in a second thread. Because of the GIL, individual operations on shared built-in objects (such as updating a dictionary) are effectively atomic, so you generally don't need extra locking for a simple case like this. A nice thing about threads is that even though there's a GIL, they're generally good about not blocking each other during intense I/O (such as a thread running queries). Trivial example:
import threading
import time
import random
from flask import Flask

app = Flask(__name__)
data_store = {'a': 1}

def interval_query():
    # refresh the shared dictionary once a second, forever
    while True:
        time.sleep(1)
        vals = {'a': random.randint(0, 100)}
        data_store.update(vals)

thread = threading.Thread(name='interval_query', target=interval_query)
thread.daemon = True
thread.start()

@app.route('/')
def hello_world():
    return str(data_store['a'])

if __name__ == "__main__":
    app.run()
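One caveat with this pattern on the Flask development server: with the debug reloader enabled, the module is imported twice, so the background thread can be started twice. Running with app.run(use_reloader=False), or guarding the thread start, avoids that.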
Well, first of all: don't try to solve this problem by yourself; don't use threads or any kind of multiprocessing. Why? Because later on you will want to scale up, and the best way is to leave this up to the server - gunicorn, uwsgi. If you tried to handle this by yourself it would very likely collide with how these servers work.
Instead, what you should do is use one service for processing requests and a message queue with a worker process that handles asynchronous tasks. This approach scales better.
From your question it seems that you are not looking for an answer but rather for guidance; have a look here: http://flask.pocoo.org/docs/0.10/patterns/celery/ and this https://www.quora.com/Celery-distributed-task-queue-What-is-the-difference-between-workers-and-processes
The advantage here is that the web worker / task worker / Celery solution scales much better than the alternatives, as the only bottleneck is the database.
Related
I have a python program. It spawns a new thread that runs an API server. It works in the test environment. However, I want to use a WSGI server (e.g. gunicorn) in the production environment. How can I do this without major changes?
There are many reasons why I want to use WSGI, but one big reason is that I want to set up a timeout in order to kill open connections for very slow requests. I am facing a performance issue where one very slow request eventually affects other requests.
*files
├── server.py
└── api.py
*server.py
import time
import api

api_server = api.APIServer()
api_server.daemon = True
api_server.start()

while True:
    time.sleep(1)
*api.py
import threading
from flask import Flask

class APIServer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        app = Flask(__name__)

        @app.route('/')
        def index():
            return 'hello'

        app.run(host="127.0.0.1", port=5000, threaded=True)
Update 1
The real code is here; I extracted the snippet above from it.
https://github.com/CounterpartyXCP/counterparty-lib/blob/master/counterpartylib/server.py#L403
vidstige's approach is absolutely right, but I don't want to change much if possible.
The tricky part for you is to make sure you run your other thread smoothly. I suggest doing it in two steps:
Change the threads around so that the other work is started in a background thread.
Then switch to gunicorn; you can start the background thread using the post_worker_init server hook, as sketched below.
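A minimal sketch of such a hook, assuming the thread is started by a hypothetical start_background_thread() helper in your own module:
# gunicorn.conf.py
def post_worker_init(worker):
    # called in each worker process right after it has been initialized
    from server import start_background_thread  # hypothetical helper
    start_background_thread()
Launch it with gunicorn -c gunicorn.conf.py server:app.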
Your thread is not really needed here. You can just switch straight to gunicorn. Just install it (pip install gunicorn) and move the app out to e.g. server.py. This is very important.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'hello'
Now you can run it without threads like so: FLASK_APP=server.py flask run. Once you have it at this point, you can switch to a production WSGI server such as gunicorn. Just launch it like this instead.
gunicorn server:app
Once you can run using gunicorn, you can configure it to use multiple threads, and even processes. If your work is IO-bound I recommend using gevent instead.
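For example (worker and thread counts are illustrative):
gunicorn --workers 4 --threads 8 server:app
gunicorn --worker-class gevent server:app
The second form requires gevent to be installed (pip install gevent).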
Related
I wrote a Django website that handles concurrent database requests and subprocess calls perfectly well if I just run python manage.py runserver.
This is my model:
import subprocess

class MyModel:
    ...
    def foo(self):
        args = [......]
        pipe = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
In my view:
def call_foo(request):
    my_model = MyModel()
    my_model.foo()
However, after I wrap it in a Tornado server, it's no longer able to handle concurrent requests. When I click something on my website that sends an async GET request to this call_foo() function, the app seems unable to handle other requests. For example, if I open the home page URL, it keeps waiting and won't display until the above subprocess call in foo() has finished.
If I do not use Tornado, everything works fine.
Below is my code to start the Tornado server. Is there anything I did wrong?
import logging
import os
import signal
import time

import django
import django.core.handlers.wsgi
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.wsgi
from tornado.options import options, parse_command_line

MAX_WAIT_SECONDS_BEFORE_SHUTDOWN = 5

def sig_handler(sig, frame):
    logging.warning('Caught signal: %s', sig)
    tornado.ioloop.IOLoop.instance().add_callback(force_shutdown)

def force_shutdown():
    logging.info("Stopping tornado server")
    server.stop()
    logging.info('Will shutdown in %s seconds ...', MAX_WAIT_SECONDS_BEFORE_SHUTDOWN)
    io_loop = tornado.ioloop.IOLoop.instance()
    deadline = time.time() + MAX_WAIT_SECONDS_BEFORE_SHUTDOWN

    def stop_loop():
        now = time.time()
        if now < deadline and (io_loop._callbacks or io_loop._timeouts):
            io_loop.add_timeout(now + 1, stop_loop)
        else:
            io_loop.stop()
            logging.info('Force Shutdown')

    stop_loop()

def main():
    parse_command_line()
    logging.info("starting tornado web server")
    os.environ['DJANGO_SETTINGS_MODULE'] = 'mydjango.settings'
    django.setup()
    wsgi_app = tornado.wsgi.WSGIContainer(django.core.handlers.wsgi.WSGIHandler())
    tornado_app = tornado.web.Application([
        (r'/(favicon\.ico)', tornado.web.StaticFileHandler, {'path': "static"}),
        (r'/static/(.*)', tornado.web.StaticFileHandler, {'path': "static"}),
        ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
    ])
    global server
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    signal.signal(signal.SIGTERM, sig_handler)
    signal.signal(signal.SIGINT, sig_handler)
    tornado.ioloop.IOLoop.instance().start()
    logging.info("Exit...")

if __name__ == '__main__':
    main()
There is nothing wrong with your set-up. This is by design.
So, the WSGI protocol (and hence Django) uses a synchronous model. It means that when your app starts processing a request, it takes control and gives it back only when the request is finished. That's why it can process only a single request at a time. To allow simultaneous requests, one usually launches the WSGI application in multithreaded or multiprocess mode.
The Tornado server, on the other side, uses an asynchronous model. The idea here is to have its own scheduler instead of the OS scheduler that works with threads and processes. So your code runs some logic, then launches some long task (a DB call, a URL fetch), sets up what to run when the task finishes, and gives control back to the scheduler.
Giving control back to the scheduler is the crucial part; it allows an async server to work fast, because it can start processing a new request while a previous one is waiting for data.
This answer explains sync/async in detail. It focuses on the client side, but I think you can see the idea.
So what's wrong with your code: Popen does not give control to the IOLoop. Python does nothing else until your subprocess is finished, and so cannot process other requests, not even Django's requests. runserver "works" here because it's multithreaded, so while one thread is entirely locked, other threads can still process requests.
For this reason it's usually not recommended to run WSGI apps under an async server like Tornado. The docs claim it will be less scalable, but you can see the problem in your own code. So if you need both servers (e.g. Tornado for sockets and Django for the main site), I'd suggest running both behind nginx and using uwsgi or gunicorn to run Django. Or take a look at the django-channels app instead of Tornado.
Besides, while it works in the test environment, I guess it's not a recommended way to do what you're trying to achieve. It's hard to suggest a solution, as I don't know what you call with Popen, but it seems to be something long-running. Maybe you should take a look at the Celery project; it's a package for running long-running background jobs.
However, back to running subprocesses. In Tornado you can use tornado.process.Subprocess. It's a wrapper over Popen that allows it to work with the IOLoop. Unfortunately I don't know if you can use it in the WSGI part under Tornado. There are some projects I remember, like django-futures, but they seem to be abandoned.
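For illustration, a rough sketch of a native Tornado handler using it (coroutine style, Tornado 4.2+; the sleep command is a stand-in for the real work):
import tornado.gen
import tornado.process
import tornado.web

class RunHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        # Subprocess hooks the child's exit into the IOLoop, so this wait
        # yields control instead of blocking other requests
        proc = tornado.process.Subprocess(["sleep", "3"])
        yield proc.wait_for_exit()
        self.write("done")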
As another quick and dirty fix, you can run Tornado with several processes. Check this example on how to fork the server. But I won't recommend using this in production anyway (forking is OK; running the WSGI fallback is not).
So to summarize, I would rewrite your code to do one of the following:
Run the Popen call in some background queue, like Celery.
Process such views with Tornado and use the tornado.process module to run the subprocess.
And overall, I'd look for a different deployment infrastructure, and would not run Django under Tornado.
I need to start a long-running background process with subprocess when someone visits a particular view.
My code:
from flask import Flask
import subprocess

app = Flask(__name__)

@app.route("/")
def index():
    subprocess.Popen(["sleep", "10"])
    return "hi\n"

if __name__ == "__main__":
    app.run(debug=True)
This works great, for the most part.
The problem is that when the process (sleep) ends, ps -Af | grep sleep shows it as [sleep] <defunct>.
From what I've read, this is because I still have a reference to the process in Flask.
Is there a way to drop this reference after the process exits?
I tried doing g.subprocess = subprocess.Popen(["sleep", "10"]), and waiting for the process to end in an @app.after_request handler so I can use del on it, but this prevents Flask from returning the response until the subprocess exits - I need it to return the response before the subprocess exits.
Note:
I need the subprocess.Popen operation to be non-blocking - this is important.
As I've suggested in the comments, one of the cleanest and most robust ways of achieving this kind of thing in Python is by using celery.
Celery requires a broker transport for messaging, for which rabbitmq is the default, and at least one process with workers running. However, the thing that increases readability and maintainability is that the worker code can co-exist in the same file or files as your server app. You invoke the remote procedures as though they were simple function calls.
Celery can handle retries, post-task events, and lots of other things for free, everything with mature code hardened by years of use in production.
This is your example after rewriting it for use with Celery:
from flask import Flask
from celery import Celery
import subprocess
app = Flask(__name__)
celery_app = Celery("test")
@celery_app.task
def run_process():
    subprocess.Popen(["sleep", "5"])

@app.route("/")
def index():
    run_process.delay()
    return "hi\n"

if __name__ == "__main__":
    app.run(debug=True, port=8080)
This code works on a system with the rabbitmq server running with default options (I just installed the package and started the service, with no configuration whatsoever; in production you would of course have to tune that, but if everything is on the same server it may not even be needed).
With rabbitmq in place, one starts the worker process with a command line like celery worker -A bla1.celery_app -D (pip install celery in the same virtualenv where you have your Flask). Then just launch the flask server and see it working.
Of course this has even more advantages if you are doing more work in Python itself than just calling an external process. It can have access to your database models, and you can perform asynchronous actions that modify objects there (and eventually trigger responses for the user, such as "flash" messages in the user session, or e-mails).
I've seen a lot of "poor man's parallel processing" using subprocess.Popen and letting it run freely, but that often leads to zombie problems as you noted.
You could run your process in a thread (in that case, there's no need for Popen; just use call, or check_call if you want an exception raised when the process fails). call and check_call (and run since Python 3.5) wait for the process to complete, so no zombies, and since you're running them in a thread you're not blocked:
import subprocess
import threading

def in_background():
    subprocess.call(["sleep", "10"])

@app.route("/")
def index():
    t = threading.Thread(target=in_background)
    t.start()
    return "hi\n"
Note: to wait for thread completion you'd have to use t.join(), and for that you'd have to keep a reference to the t thread object.
By the way, I suppose your real process isn't sleep, or this isn't very useful and time.sleep(10) does the same thing (always in a thread, of course!).
I have a simple function that goes over a list of URLs, using GET to retrieve some information and update the DB (PostgreSQL) accordingly. The function works perfectly. However, going over each URL one at a time takes too much time.
Using python, I'm able to do the following to parallelize these tasks:
from multiprocessing import Pool

def updateDB(ip):
    ...  # code goes here

if __name__ == '__main__':
    pool = Pool(processes=4)  # one process per core
    pool.map(updateDB, ip)
This is working pretty well. However, I'm trying to find out how to do the same in a django project. Currently I have a function (view) that goes over each URL to get the information and updates the DB.
The only thing I could find is using Celery, but this seems a bit overpowered for the simple task I want to perform.
Is there anything simple that I can do, or do I have to use Celery?
Currently I have a function (view) that go over each URL to get the information, and update the DB.
It means response time does not matter to you, and instead of doing it in the background (asynchronously), you are OK with doing it in the foreground if your response time is cut by 4 (using 4 sub-processes/threads). If that is the case, you can simply put your sample code in your view, like this:
from multiprocessing import Pool
from django.http import HttpResponse

def updateDB(ip):
    ...  # code goes here

def my_view(request):
    pool = Pool(processes=4)  # one process per core
    pool.map(updateDB, ip)
    return HttpResponse("SUCCESS")
But if you want to do it asynchronously in the background, then you should use Celery or follow one of @BasicWolf's suggestions.
Though using Celery may seem like overkill, it is a well-known way of doing asynchronous tasks. Essentially, Django serves the WSGI request-response cycle, which knows nothing of multiprocessing or background tasks.
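For illustration, a minimal sketch of the Celery route, assuming a configured Celery app (updateDB and the list of addresses are the names from the question):
from celery import shared_task
from django.http import HttpResponse

@shared_task
def update_db_task(ip):
    updateDB(ip)  # the existing function from the question

def my_view(request):
    for ip in ips:  # queue one background task per address
        update_db_task.delay(ip)
    return HttpResponse("queued")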
Here are alternative options:
Django background tasks - might fit your case better.
Redis queue
I recommend using gevent for a multithreading-style solution instead of multiprocessing. Multiprocessing can cause problems in production environments where spawning new processes is restricted.
Example code:
from django.shortcuts import HttpResponse
from gevent.pool import Pool

def square(number):
    return number * number

def home(request):
    pool = Pool(50)
    numbers = [1, 3, 5]
    results = pool.map(square, numbers)
    return HttpResponse(results)
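One caveat, assuming the real work is network I/O through blocking libraries: for the pool to actually overlap requests, gevent normally needs the standard library patched very early in the process, otherwise the greenlets run the blocking calls one after another:
from gevent import monkey
monkey.patch_all()  # must run before the blocking libraries are imported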
I have developed a rather extensive http server written in python utilizing tornado. Without setting anything special, the server blocks on requests and can only handle one at a time. The requests basically access data (mysql/redis) and print it out in json. These requests can take upwards of a second in the worst case. The problem is that when a request comes in that takes a long time (3s), and an easy request that would take 5ms to handle comes in immediately after, the second one doesn't start until the first one is done, so it takes >3s to be handled.
How can I make this situation better? I need that second simple request to begin executing regardless of other requests. I'm new to python, and more experienced with apache/php, where there is no notion of two separate requests blocking each other. I've looked into mod_python to emulate the php example, but that seems to block as well. Can I change my tornado server to get the functionality that I want? Everywhere I read, it says that tornado is great at handling multiple simultaneous requests.
Here is the demo code I'm working with. I have a sleep command which I'm using to test if the concurrency works. Is sleep a fair way to test concurrency?
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.gen
import time

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def handlePing1(self):
        time.sleep(4)  # simulating an expensive mysql call
        self.write("response to browser ....")
        self.finish()

    def get(self):
        start = time.time()
        self.handlePing1()
        # response = yield gen.Task(handlePing1)  # I see tutorials around that suggest using something like this ...
        print "done with request ...", self.request.path, round((time.time() - start), 3)

application = tornado.web.Application([
    (r"/.*", MainHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    port = 8833
    http_server.listen(port)
    print "listening on " + str(port)
    tornado.ioloop.IOLoop.instance().start()
Thanks for any help!
Edit: remember that Redis is also single-threaded, so even if you have concurrent requests, your bottleneck will be Redis. You won't be able to process more requests because Redis won't be able to process them.
Tornado is a single-threaded, event-loop-based server.
From the documentation:
By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Concurrency in tornado is achieved through asynchronous callbacks. The idea is to do as little as possible in the main event loop (single-threaded) to avoid blocking, and to defer I/O operations through callbacks.
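For illustration, a minimal non-blocking sketch of the handler from the question (Tornado 4.x coroutine style; gen.sleep stands in for a real non-blocking database call):
import tornado.gen
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        # yields control back to the IOLoop for 4 seconds instead of
        # blocking it, so other requests can be served in the meantime
        yield tornado.gen.sleep(4)
        self.write("response to browser ....")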
If using asynchronous operations doesn't work for you (e.g. there is no async driver for MySQL or Redis), your only way of handling more concurrent requests is to run multiple processes.
The easiest way is to front your tornado processes with a reverse-proxy like HAProxy or Nginx. The tornado doc recommends Nginx: http://www.tornadoweb.org/en/stable/overview.html#running-tornado-in-production
You basically run multiple versions of your app on different ports. Ex:
python app.py --port=8000
python app.py --port=8001
python app.py --port=8002
python app.py --port=8003
A good rule of thumb is to run 1 process for each core on your server.
Nginx will take care of balancing each incoming request across the different backends. So if one of the requests is slow (~3s), you have n-1 other processes listening for incoming requests. It is possible – and very likely – that all processes will end up busy processing a slow-ish request, in which case incoming requests will be queued and processed when any process becomes free, i.e. has finished its current request.
I strongly recommend you start with Nginx before trying HAProxy, as the latter is a little more advanced and thus a bit more complex to set up properly (lots of switches to tweak).
Hope this helps. Key take-away: Tornado is great for async I/O, less so for CPU heavy workloads.
I had the same problem, but with no tornado and no mysql.
Do you have one database connection shared with the whole server?
I created a multiprocessing.Pool. Each worker has its own db connection provided by an init function. I wrap the slow code in a function and map it onto the Pool, so I have no shared variables or connections.
Sleep does not block other threads, but a DB transaction may block threads.
You need to set up the Pool at the top of your code:
def spawn_pool(fishes=None):
    global pool
    from multiprocessing import Pool

    def init():
        from storage import db  # private connections
        db.connect()  # connections stored in the db framework; global in each process

    pool = Pool(processes=fishes, initializer=init)

if __name__ == "__main__":
    spawn_pool(8)

from storage import db  # shared connection for quick-type requests

# code here

if __name__ == "__main__":
    start_server()
Many concurrent quick requests may slow down one big request, but this concurrency will be placed on the database server only.