Stopping threads spawned by BaseHTTPServer using ThreadingMixIn - python

I have read (in another post here) that using ThreadingMixIn (from the SocketServer module), you can create a threaded server with BaseHTTPServer. I have tried it, and it does work. However, how can I stop active threads spawned by the server (for example, during a server shutdown)? Is this possible?

The simplest solution is to just use daemon_threads. The short version is: just set this to True, and don't worry about it; when you quit, any threads still working will stop automatically.
As the ThreadingMixIn docs say:
When inheriting from ThreadingMixIn for threaded connection behavior, you should explicitly declare how you want your threads to behave on an abrupt shutdown. The ThreadingMixIn class defines an attribute daemon_threads, which indicates whether or not the server should wait for thread termination. You should set the flag explicitly if you would like threads to behave autonomously; the default is False, meaning that Python will not exit until all threads created by ThreadingMixIn have exited.
Further details are available in the threading docs:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
Sometimes this isn't appropriate, because you want to shut down without quitting, or because your handlers may have cleanup that needs to be done. But when it is appropriate, you can't get any simpler.
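For example, a minimal sketch of the daemon_threads approach, using the Python 2 module names used elsewhere on this page:
import SocketServer
import BaseHTTPServer

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

class Server(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
    # Worker threads are daemonized: the process can exit even if a
    # handler thread is still in the middle of a request.
    daemon_threads = True

server = Server(('', 8080), Handler)
server.serve_forever()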
If all you need is a way to shut down without quitting, and you don't need guaranteed cleanup, you may be able to use platform-specific thread-cancellation APIs via ctypes or win32api. This is generally a bad idea, but occasionally it's what you want.
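For reference, the CPython-specific flavor of this usually goes through PyThreadState_SetAsyncExc. A hedged sketch of that (discouraged) technique follows; note it only takes effect at bytecode boundaries, cannot interrupt a thread blocked inside a C call, and skips any cleanup:
import ctypes

def async_raise(thread_ident, exc_type=SystemExit):
    # Ask the interpreter to raise exc_type in the thread with the
    # given ident. CPython-only; use at your own risk.
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread_ident), ctypes.py_object(exc_type))
    if res > 1:
        # We hit more than one thread state; undo the damage.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(thread_ident), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")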
If you need clean shutdown, you need to build your own machinery for that, where the threads cooperate. For example, you could create a global "quit flag" variable protected by a threading.Condition, and have your handle function check this periodically.
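A minimal sketch of that idea (do_one_small_piece_of_work is a hypothetical helper standing in for your actual request handling):
import threading

quit_flag = False
quit_lock = threading.Lock()

def should_quit():
    with quit_lock:
        return quit_flag

def handle(self):
    # Inside your request handler: break the work into pieces and
    # check the flag between pieces.
    while not should_quit():
        do_one_small_piece_of_work()   # hypothetical; must return quickly

def request_shutdown():
    global quit_flag
    with quit_lock:
        quit_flag = True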
This is great if the threads are just doing slow, non-blocking work that you can break up into smaller pieces. For example, if the handle function always checks the quit flag at least once every 5 seconds, you can guarantee being able to shut down the threads within 5 seconds. But what if the threads are doing blocking work—as they probably are, because the whole reason you used ThreadingMixIn was to let you make blocking calls instead of writing select loops or using asyncore or the like?
Well, there is no good answer. Obviously if you just need the shutdown to happen "eventually" rather than "within 5 seconds" (or if you're willing to abandon clean shutdown after 5 seconds, and revert to either using platform-specific APIs or daemonizing the threads), you can just put the checks before and after each blocking call, and it will "often" work. But if that's not good enough, there's really nothing you can do.
If you need this, the best answer is to change your architecture to use a framework that has ways to do this. The most popular choices are Twisted, Tornado, and gevent. In the future, PEP 3156 will bring similar functionality into the standard library, and there's a partly-complete reference implementation tulip that's worth playing with if you're not trying to build something for the real world that has to be ready soon.

Here's example code showing how to use threading.Event to shut down the server on any POST request:
import SocketServer
import BaseHTTPServer
import threading

quit_event = threading.Event()

class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    """This handler fires the quit event on POST."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

    def do_POST(self):
        quit_event.set()
        self.send_response(200)
        self.end_headers()

class MyThreadingHTTPServer(
        SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
    pass

server = MyThreadingHTTPServer(('', 8080), MyRequestHandler)
threading.Thread(target=server.serve_forever).start()
quit_event.wait()
server.shutdown()
server.server_close()  # release the listening socket
The server is shut down cleanly, so you can immediately restart it and the port is available, rather than getting "Address already in use".

Related

Non-blocking, non-concurrent tasks in Python

I am working on an implementation of a very small library in Python that has to be non-blocking.
Somewhere in production code, a call into this library will be made, and the library needs to do its own work; in its simplest form, it would be a callable that needs to pass some information to a service.
This "passing information to a service" is a non-intensive task, probably sending some data to an HTTP service or something similar. It also doesn't need to be concurrent or to share information, however it does need to terminate at some point, possibly with a timeout.
I have used the threading module before and it seems the most appropriate thing to use, but the application where this library will be used is so big that I am worried about hitting the threading limit.
On local testing I was able to hit that limit at around ~2500 threads spawned.
There is a good possibility (given the size of the application) that I could hit that limit easily. It also makes me wary of using a Queue, given the memory implications of placing tasks in it at a high rate.
I have also looked at gevent, but I couldn't see an example of spawning something that would do some work and terminate without joining. The examples I went through were calling .join() on a spawned greenlet or on an array of greenlets.
I don't need to know the result of the work being done! It just needs to fire off and try to talk to the HTTP service and die with a sensible timeout if it didn't.
Have I misinterpreted the guides/tutorials for gevent? Is there any other possibility to spawn a callable in a fully non-blocking fashion that can't hit a ~2500 limit?
This is a simple example using threading that works as I would expect:
from threading import Thread

class Synchronizer(Thread):
    def __init__(self, number):
        self.number = number
        Thread.__init__(self)

    def run(self):
        # Simulating some work
        import time
        time.sleep(5)
        print self.number

for i in range(4000):  # totally doesn't get past 2,500
    sync = Synchronizer(i)
    sync.setDaemon(True)
    sync.start()
    print "spawned a thread, number %s" % i
And this is what I've tried with gevent, where it obviously blocks at the end to see what the workers did:
import gevent

def task(pid):
    """
    Some non-deterministic task
    """
    gevent.sleep(1)
    print('Task', pid, 'done')

for i in range(100):
    gevent.spawn(task, i)
EDIT:
My problem stemmed from my lack of familiarity with gevent. While the Thread code was indeed spawning threads, it also prevented the script from terminating while it did some work.
gevent doesn't really do that in the code above, unless you add a .join(). All I had to do to see the gevent code do some work with the spawned greenlets was to make it a long-running process. This definitely fixes my problem, as the code that needs to spawn the greenlets runs within a framework that is itself a long-running process.
Nothing requires you to call join in gevent, if you're expecting your main thread to last longer than any of your workers.
The only reason for the join call is to make sure the main thread lasts at least as long as all of the workers (so that the program doesn't terminate early).
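For instance, a minimal sketch (the final gevent.sleep stands in for whatever long-lived work the main greenlet actually does):
import gevent

def task(pid):
    gevent.sleep(1)           # simulated work
    print('Task', pid, 'done')

for i in range(100):
    gevent.spawn(task, i)

# No join() needed: as long as the main greenlet yields and outlives
# the workers, the spawned greenlets get scheduled and complete.
gevent.sleep(5)  # stand-in for the framework's own long-running loop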
Why not spawn a subprocess with a connected pipe or similar, and then, instead of a callable, just drop your data on the pipe and let the subprocess handle it completely out of band?
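A rough sketch of that idea with the multiprocessing primitives (send_to_service is a hypothetical helper standing in for the HTTP call):
from multiprocessing import Process, Pipe

def worker(conn):
    # Runs in the child: drain the pipe and talk to the HTTP service
    # out of band, e.g. with a sensible timeout per request.
    while True:
        data = conn.recv()
        if data is None:         # sentinel: time to shut down
            break
        send_to_service(data)    # hypothetical helper

parent_conn, child_conn = Pipe()
p = Process(target=worker, args=(child_conn,))
p.daemon = True
p.start()

# In the production code, instead of calling the library directly:
parent_conn.send({'event': 'something happened'})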
As explained in Understanding Asynchronous/Multiprocessing in Python, the asyncoro framework supports asynchronous, concurrent processes. You can run tens or hundreds of thousands of concurrent processes; for reference, running 100,000 simple processes takes about 200 MB. If you want, you can mix threads in the rest of the system with coroutines in asyncoro (provided the threads and coroutines don't share variables, but use coroutine interface functions to send messages, etc.).

is twisted incompatible with multiprocessing events and queues?

I am trying to simulate a network of applications that run using twisted. As part of my simulation I would like to synchronize certain events and be able to feed each process large amounts of data. I decided to use multiprocessing Events and Queues. However, my processes are getting hung.
I wrote the example code below to illustrate the problem. Specifically (about 95% of the time on my Sandy Bridge machine), the 'run_in_thread' function finishes; however, the 'print_done' callback is not called until after I press Ctrl-C.
Additionally, I can change several things in the example code to make this work more reliably such as: reducing the number of spawned processes, calling self.ready.set from reactor_ready, or changing the delay of deferLater.
I am guessing there is a race condition somewhere between the twisted reactor and blocking multiprocessing calls such as Queue.get() or Event.wait()?
What exactly is the problem I am running into? Is there a bug in my code that I am missing? Can I fix this or is twisted incompatible with multiprocessing events/queues?
Secondly, would something like spawnProcess or Ampoule be the recommended alternative? (as suggested in Mix Python Twisted with multiprocessing?)
Edits (as requested):
I've run into problems with all the reactors I've tried: glib2reactor, selectreactor, pollreactor, and epollreactor. The epollreactor seems to give the best results and seems to work fine for the example given below, but it still gives me the same (or a similar) problem in my application. I will continue investigating.
I'm running Gentoo Linux kernel 3.3 and 3.4, python 2.7, and I've tried Twisted 10.2.0, 11.0.0, 11.1.0, 12.0.0, and 12.1.0.
In addition to my Sandy Bridge machine, I see the same issue on my dual-core AMD machine.
#!/usr/bin/python
# -*- coding: utf-8 -*-
from twisted.internet import reactor
from twisted.internet import threads
from twisted.internet import task
from multiprocessing import Process
from multiprocessing import Event

class TestA(Process):
    def __init__(self):
        super(TestA, self).__init__()
        self.ready = Event()
        self.ready.clear()
        self.start()

    def run(self):
        reactor.callWhenRunning(self.reactor_ready)
        reactor.run()

    def reactor_ready(self, *args):
        task.deferLater(reactor, 1, self.node_ready)
        return args

    def node_ready(self, *args):
        print 'node_ready'
        self.ready.set()
        return args

def reactor_running():
    print 'reactor_running'
    df = threads.deferToThread(run_in_thread)
    df.addCallback(print_done)

def run_in_thread():
    print 'run_in_thread'
    for n in processes:
        n.ready.wait()

def print_done(dfResult=None):
    print 'print_done'
    reactor.stop()

if __name__ == '__main__':
    processes = [TestA() for i in range(8)]
    reactor.callWhenRunning(reactor_running)
    reactor.run()
The short answer is yes, Twisted and multiprocessing are not compatible with each other, and you cannot reliably use them as you are attempting to.
On all POSIX platforms, child process management is closely tied to SIGCHLD handling. POSIX signal handlers are process-global, and there can be only one per signal type.
Twisted and stdlib multiprocessing cannot both have a SIGCHLD handler installed. Only one of them can. That means only one of them can reliably manage child processes. Your example application doesn't control which of them will win that ability, so I would expect there to be some non-determinism in its behavior arising from that fact.
However, the more immediate problem with your example is that you load Twisted in the parent process and then use multiprocessing to fork and not exec all of the child processes. Twisted does not support being used like this. If you fork and then exec, there's no problem. However, the lack of an exec of a new process (perhaps a Python process using Twisted) leads to all kinds of extra shared state which Twisted does not account for. In your particular case, the shared state that causes this problem is the internal "waker fd" which is used to implement deferToThread. With the fd shared between the parent and all the children, when the parent tries to wake up the main thread to deliver the result of the deferToThread call, it most likely wakes up one of the child processes instead. The child process has nothing useful to do, so that's just a waste of time. Meanwhile the main thread in the parent never wakes up and never notices your threaded task is done.
It's possible you can avoid this issue by not loading any of Twisted until you've already created the child processes. This would turn your usage into a single-process use case as far as Twisted is concerned (in each process, it would be initially loaded, and then that process would not go on to fork at all, so there's no question of how fork and Twisted interact anymore). This means not even importing Twisted until after you've created the child processes.
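A sketch of that ordering (the structure is illustrative; the key point is simply that the multiprocessing children are created before any Twisted module is imported in the parent):
from multiprocessing import Process, Event

def child_main(ready):
    # Each child may import and use Twisted freely: it never forks
    # again, so this is an ordinary single-process use of the reactor.
    from twisted.internet import reactor
    reactor.callWhenRunning(ready.set)
    reactor.run()

if __name__ == '__main__':
    events = [Event() for _ in range(8)]
    for e in events:
        p = Process(target=child_main, args=(e,))
        p.daemon = True  # so the children don't outlive the parent
        p.start()

    # Only now, with all children created, load Twisted in the parent.
    from twisted.internet import reactor, threads

    def wait_for_children():
        for e in events:
            e.wait()

    d = threads.deferToThread(wait_for_children)
    d.addCallback(lambda ignored: reactor.stop())
    reactor.run()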
Of course, this only helps you out as far as Twisted goes. Any other libraries you use could run into similar trouble (you mentioned glib2, that's a great example of another library that will totally choke if you try to use it like this).
I highly recommend not using the multiprocessing module at all. Instead, use any multi-process approach that involves fork and exec, not fork alone. Ampoule falls into that category.

Aborting HTTP request cross-thread

I'm porting one of my projects from C# and am having trouble solving a multithreading issue in Python. The problem relates to a long-lived HTTP request, which is expected (the request will respond when a certain event occurs on the server). Here's the summary:
I send the request using urllib2 on a separate thread. When the request returns or times out, the main thread is notified. This works fine. However, there are cases where I need to abort this outstanding request and switch to a different URL. There are four solutions that I can consider:
Abort the outstanding request. C# has WebRequest.Abort(), which I can call cross-thread to abort the request. Python urllib2.Request appears to be a pure data class, in that instances only store request information; responses are not connected to Request objects. So I can't do this.
Interrupt the thread. C# has Thread.Interrupt(), which will raise a ThreadInterruptedException in the thread if it is in a wait state, or the next time it enters such a state. (Waiting on a monitor and file/socket I/O are both waiting states.) Python doesn't seem to have anything comparable; there does not appear to be a way to wake up a thread that is blocked on I/O.
Set a low timeout on the request. On a timeout, check an "aborted" flag. If it's false, restart the request.
Similar to option 3, add an "aborted" flag to the state object so that when the request does finally end in one way or another, the thread knows that the response is no longer needed and just shuts itself down.
Options 3 and 4 seem to be the only ones supported by Python, but option 3 is a horrible solution and 4 will keep open a connection I don't need. I am hoping to be a good netizen and close this connection when I no longer need it. Is there any way to actually abort the outstanding request, one way or another?
Consider using gevent. Gevent uses non-thread cooperating units of execution called greenlets. Greenlets can "block" on IO, which really means "go to sleep until the IO is ready". You could have a requester greenlet that owns the socket and a main greenlet that decides when to abort. When you want to abort and switch URLs the main greenlet kills the requester greenlet. The requester catches the resulting exception, closes its socket/urllib2 request, and starts over.
Edited to add: Gevent is not compatible with threads, so be careful with that. You'll have to either use gevent all the way or threads all the way. Threads in python are kinda lame anyway because of the GIL.
Similar to Spike Gronim's answer, but even more heavy-handed.
Consider rewriting this in twisted. You probably would want to subclass twisted.web.http.HTTPClient, in particular implementing handleResponsePart to do your client interaction (or handleResponseEnd if you don't need to see it before the response ends). To close the connection early, you just call the loseConnection method on the client protocol.
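A rough sketch of that shape (the path and hostname are placeholders; handleResponsePart and loseConnection are the real twisted.web.http.HTTPClient hooks named above, and you'd wire the protocol up with a ClientFactory and reactor.connectTCP in the usual way):
from twisted.web.http import HTTPClient

class LongPollClient(HTTPClient):
    def connectionMade(self):
        self.sendCommand('GET', '/events')        # placeholder path
        self.sendHeader('Host', 'example.com')    # placeholder host
        self.endHeaders()

    def handleResponsePart(self, data):
        # Client interaction goes here, as the body streams in.
        print 'got', len(data), 'bytes'

    def abort(self):
        # To abandon the outstanding request, just drop the connection.
        self.transport.loseConnection()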
Maybe this snippet of "killable thread" could be useful to you if you have no other choice. But I would have the same opinion as Spike Gronim and recommend using gevent.
I found this question using google and used Spike Gronim's answer to come up with:
from gevent import monkey
monkey.patch_all()

import gevent
import requests

def post(*args, **kwargs):
    if 'stop_event' in kwargs:
        stop_event = kwargs['stop_event']
        del kwargs['stop_event']
    else:
        stop_event = None
    req = gevent.spawn(requests.post, *args, **kwargs)
    while req.value is None:
        req.join(timeout=0.1)
        if stop_event and stop_event.is_set():
            req.kill()
            break
    return req.value
I thought it might be useful for other people as well.
It works just like a regular request.post, but takes an extra keyword argument 'stop_event'. This is a threading.Event. The request will abort if the stop_event gets set.
Use with caution, because if it's not waiting on either the connection or the communication, it can hold the GIL and block other threads (as mentioned). gevent does seem compatible with threading these days (through monkey patching).
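Usage might look like this (the URL is a placeholder; note that with monkey.patch_all() in effect, threading.Event is gevent-backed):
import threading

stop_event = threading.Event()

# Elsewhere (another thread/greenlet, shutdown code, ...), abort the
# outstanding request by setting the event:
#     stop_event.set()

response = post('http://example.com/wait', data={'k': 'v'},
                stop_event=stop_event)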

Pattern for a background Twisted server that fills an incoming message queue and empties an outgoing message queue?

I'd like to do something like this:
twistedServer.start() # This would be a nonblocking call
while True:
    while twistedServer.haveMessage():
        message = twistedServer.getMessage()
        response = handleMessage(message)
        twistedServer.sendResponse(response)
    doSomeOtherLogic()
The key thing I want to do is run the server in a background thread. I'm hoping to do this with a thread instead of through multiprocessing/queues because I already have one layer of messaging for my app and I'd like to avoid two. I'm bringing this up because I can already see how to do this in a separate process, but what I'd like to know is how to do it in a thread, or whether I can at all. Or perhaps there is some other pattern that accomplishes the same thing, like writing my own reactor.run method. Thanks for any help. :)
The key thing I want to do is run the server in a background thread.
You don't explain why this is key, though. Generally, things like "use threads" are implementation details. Perhaps threads are appropriate, perhaps not, but the actual goal is agnostic on the point. What is your goal? To handle multiple clients concurrently? To handle messages of this sort simultaneously with events from another source (for example, a web server)? Without knowing the ultimate goal, there's no way to know if an implementation strategy I suggest will work or not.
With that in mind, here are two possibilities.
First, you could forget about threads. This would entail defining your event handling logic above as only the event handling parts. The part that tries to get an event would be delegated to another part of the application, probably something ultimately based on one of the reactor APIs (for example, you might set up a TCP server which accepts messages and turns them into the events you're processing, in which case you would start off with a call to reactor.listenTCP of some sort).
So your example might turn into something like this (with some added specificity to try to increase the instructive value):
from twisted.internet import reactor

class MessageReverser(object):
    """
    Accept messages, reverse them, and send them onwards.
    """
    def __init__(self, server):
        self.server = server

    def messageReceived(self, message):
        """
        Callback invoked whenever a message is received. This implementation
        will reverse and re-send the message.
        """
        self.server.sendMessage(message[::-1])
        doSomeOtherLogic()

def main():
    twistedServer = ...
    twistedServer.start(MessageReverser(twistedServer))
    reactor.run()

main()
Several points to note about this example:
I'm not sure how your twistedServer is defined. I'm imagining that it interfaces with the network in some way. Your version of the code would have had it receiving messages and buffering them until they were removed from the buffer by your loop for processing. This version would probably have no buffer, but instead just call the messageReceived method of the object passed to start as soon as a message arrives. You could still add buffering of some sort if you want, by putting it into the messageReceived method.
There is now a call to reactor.run which will block. You might instead write this code as a twistd plugin or a .tac file, in which case you wouldn't be directly responsible for starting the reactor. However, someone must start the reactor, or most APIs from Twisted won't do anything. reactor.run blocks, of course, until someone calls reactor.stop.
There are no threads used by this approach. Twisted's cooperative multitasking approach to concurrency means you can still do multiple things at once, as long as you're mindful to cooperate (which usually means returning to the reactor once in a while).
The exact times at which the doSomeOtherLogic function is called change slightly, because there's no notion of "the buffer is empty for now" separate from "I just handled a message". You could change this so that the function is instead called once a second, or after every N messages, or whatever is appropriate.
The second possibility would be to really use threads. This might look very similar to the previous example, but you would call reactor.run in another thread, rather than the main thread. For example,
from Queue import Queue
from threading import Thread

from twisted.internet import reactor

class MessageQueuer(object):
    def __init__(self, queue):
        self.queue = queue

    def messageReceived(self, message):
        self.queue.put(message)

def main():
    queue = Queue()
    twistedServer = ...
    twistedServer.start(MessageQueuer(queue))

    Thread(target=reactor.run, args=(False,)).start()

    while True:
        message = queue.get()
        response = handleMessage(message)
        reactor.callFromThread(twistedServer.sendResponse, response)

main()
This version assumes a twistedServer which works similarly, but uses a thread to let you have the while True: loop. Note:
You must invoke reactor.run(False) if you use a thread, to prevent Twisted from trying to install any signal handlers, which Python only allows to be installed in the main thread. This means the Ctrl-C handling will be disabled and reactor.spawnProcess won't work reliably.
MessageQueuer has the same interface as MessageReverser, only its implementation of messageReceived is different. It uses the threadsafe Queue object to communicate between the reactor thread (in which it will be called) and your main thread where the while True: loop is running.
You must use reactor.callFromThread to send the message back to the reactor thread (assuming twistedServer.sendResponse is actually based on Twisted APIs). Twisted APIs are typically not threadsafe and must be called in the reactor thread. This is what reactor.callFromThread does for you.
You'll want to implement some way to stop the loop and the reactor, one supposes. The python process won't exit cleanly until after you call reactor.stop.
Note that while the threaded version gives you the familiar, desired while True loop, it doesn't actually do anything much better than the non-threaded version. It's just more complicated. So, consider whether you actually need threads, or if they're merely an implementation technique that can be exchanged for something else.

Putting a thread to sleep until event X occurs

I'm writing to many files in a threaded app, and I'm creating one handler per file. I have a HandlerFactory class that manages the distribution of these handlers. What I'd like to happen is this:
thread A requests and gets foo.txt's file handle from the HandlerFactory class
thread B requests foo.txt's file handler
handler class recognizes that this file handle has been checked out
handler class puts thread B to sleep
thread A closes file handle using a wrapper method from HandlerFactory
HandlerFactory notifies sleeping threads
thread B wakes and successfully gets foo.txt's file handle
This is what I have so far:
def get_handler(self, file_path, type):
    self.lock.acquire()
    if file_path not in self.handlers:
        self.handlers[file_path] = open(file_path, type)
    elif not self.handlers[file_path].closed:
        time.sleep(1)
    self.lock.release()
    return self.handlers[file_path][type]
I believe this covers the sleeping and handler retrieval successfully, but I am unsure how to wake up all threads, or even better wake up a specific thread.
What you're looking for is known as a condition variable; see threading.Condition in the Python 2 and Python 3 library references.
Looks like you want a threading.Semaphore associated with each handler (other synchronization objects like Events and Conditions are also possible, but a Semaphore seems simplest for your needs). (Specifically, use a BoundedSemaphore: for your use case, that will raise an exception immediately for programming errors that erroneously release the semaphore more times than they acquire it -- and that's exactly the reason for being of the bounded version of semaphores;-).
Initialize each semaphore to a value of 1 when you build it (so that means the handler is available). Each using-thread calls acquire on the semaphore to get the handler (that may block it), and release on it when it's done with the handler (that will unblock exactly one of the waiting threads). That's simpler than the acquire/wait/notify/release lifecycle of a Condition, and more future-proof too, since as the docs for Condition say:
The current implementation wakes up exactly one thread, if any are waiting. However, it's not safe to rely on this behavior. A future, optimized implementation may occasionally wake up more than one thread.
while with a Semaphore you're playing it safe (the semantics whereof are safe to rely on: if a semaphore is initialized to N, there are at all times between 0 and N (inclusive) threads that have successfully acquired the semaphore and not yet released it).
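A minimal sketch of the semaphore-per-handler idea (the class and method names here are illustrative, not from the question's code):
import threading

class HandlerFactory(object):
    def __init__(self):
        self.lock = threading.Lock()
        self.handlers = {}   # file_path -> (file object, semaphore)

    def get_handler(self, file_path, mode):
        with self.lock:
            if file_path not in self.handlers:
                self.handlers[file_path] = (
                    open(file_path, mode), threading.BoundedSemaphore(1))
            handler, sem = self.handlers[file_path]
        # Acquire outside the factory lock, so a waiting thread doesn't
        # block everyone else's access to the factory.
        sem.acquire()   # blocks until the current holder releases it
        return handler

    def release_handler(self, file_path):
        with self.lock:
            _, sem = self.handlers[file_path]
        sem.release()   # wakes exactly one waiting thread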
You do realize that Python has a global interpreter lock (the GIL), so you don't get most of the benefits of multi-threading, right?
Unless there is some reason for the master thread to do something with the results of each worker, you may wish to consider just forking off another process for each request. You won't have to deal with locking issues then. Have the children do what they need to do, then die. If they do need to communicate back, do it over a pipe, with XMLRPC, or through a sqlite database (which is threadsafe).
