I have some Python code which uses threading.Timer to implement a 60-second timeout for an operation.
The problem is that this code runs in a job-control environment where it may get pre-empted by a higher priority job. In this case it will be sent SIGSTOP, and then some time later, SIGCONT. I need a way to somehow notice that this has happened and reset the timeout: obviously the operation hasn't really timed out if it's been suspended for the whole 60 seconds.
I tried adding a signal handler for SIGCONT, but it seems to run only after the callback supplied to threading.Timer has already fired.
Is there some way to achieve this?
A fairly simple answer that occurred to me after posting this is to break the timer up into multiple sub-timers, e.g. ten 6-second timers, each of which starts the next one in a chain. That way, if I get suspended, I only lose one of the timers and still get most of the wait before timing out.
This is of course not foolproof, especially if I get repeatedly suspended and restarted, but it's easy to do and seems like it might be good enough.
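A minimal sketch of this chaining idea (TOTAL_TIMEOUT, CHUNKS and on_timeout are illustrative names, not from the original code): each sub-timer that fires starts the next one, so a suspension costs at most one chunk of the total wait.

    import threading

    TOTAL_TIMEOUT = 60.0   # the overall timeout
    CHUNKS = 10            # number of sub-timers in the chain

    def on_timeout():
        print("operation timed out")

    def start_chain(remaining):
        # Each link waits one chunk; if the process is SIGSTOPped,
        # at most one chunk of the total wait is lost.
        if remaining <= 0:
            on_timeout()
            return
        threading.Timer(TOTAL_TIMEOUT / CHUNKS, start_chain,
                        args=(remaining - 1,)).start()

    start_chain(CHUNKS)

In a real program you would keep a reference to the current Timer somewhere so the chain can be cancelled when the operation completes.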
You need to rethink what you're asking for: a timeout reflects elapsed (wall-clock) time, but what you actually want is the time used by your process.
Fortunately you can measure this with getrusage: http://docs.python.org/library/resource.html
You'll still need to set a timeout; when it fires, measure the increase in user or system time since the start of the operation and terminate the operation if it exceeds the limit, else reschedule the timeout appropriately.
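A rough sketch of that rescheduling loop (CPU_LIMIT, cpu_time and check_timeout are illustrative names): it measures user plus system time with getrusage and only terminates the operation once the CPU-time budget is really spent.

    import resource
    import threading

    CPU_LIMIT = 60.0  # budget in CPU seconds, not wall seconds

    def cpu_time():
        ru = resource.getrusage(resource.RUSAGE_SELF)
        return ru.ru_utime + ru.ru_stime   # user + system time

    start = cpu_time()

    def check_timeout():
        used = cpu_time() - start
        if used >= CPU_LIMIT:
            print("operation exceeded its CPU budget")  # terminate the operation here
        else:
            # The process was suspended (or idle) for part of the wait;
            # reschedule for whatever budget remains.
            threading.Timer(CPU_LIMIT - used, check_timeout).start()

    threading.Timer(CPU_LIMIT, check_timeout).start()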
If your application is multi-threaded, the docs say that:
only the main thread can set a new signal handler, and the main thread will be the only one to receive signals
Make sure you are handling your signals from the main thread.
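A tiny demonstration of that rule (SIGUSR1 is chosen arbitrarily; POSIX only): no matter which thread is busy when the signal arrives, the handler executes in the main thread.

    import os
    import signal
    import threading

    def handler(signum, frame):
        # Prints 'MainThread' even if worker threads are running.
        print("handler ran in:", threading.current_thread().name)

    # signal.signal() itself must be called from the main thread.
    signal.signal(signal.SIGUSR1, handler)
    os.kill(os.getpid(), signal.SIGUSR1)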
Related
I am trying to wait for any of multiple multiprocessing events at the same time, so I came up with code like this:
if e1.wait(timeout) or e2.wait(timeout):
    # this part will be reached if either of the two
    # events is set or the wait timed out
It works like the comment says. But how does this work? Is the if polling both methods all the time? Or is it called as soon as one event gets set?
Bonus question: Is there some clever way to adjust the code to wait for any number of events, i.e. a list of events? if True in [e1.wait(timeout),e2.wait(timeout)] does not work as expected.
It only waits for the first one. This is due to Python's short-circuit evaluation.
Wait on an event is blocking, so it will block the current thread from going further until the timeout expires or the event is set. And, as simonzack said, or short-circuits in Python, which means that if the first wait returns true, the second one will not be called.
Waiting on a number of threads would be kinda hard to implement and maintain. I would suggest using message passing instead: get each process to send a message to a Queue when it is finished. This way you can just check whether you have received n messages, where n is the number of threads/processes. See more here: Queues in multiprocessing.
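A sketch of this message-passing approach (worker and done_queue are illustrative names): every worker announces itself on one shared queue, so the parent reacts to whichever finishes first instead of waiting on N events in a fixed order.

    import multiprocessing as mp

    def worker(ident, done_queue):
        # ... do the real work here ...
        done_queue.put(ident)   # announce completion

    if __name__ == "__main__":
        done_queue = mp.Queue()
        procs = [mp.Process(target=worker, args=(i, done_queue))
                 for i in range(4)]
        for p in procs:
            p.start()
        for _ in procs:
            finished = done_queue.get()   # blocks until *any* worker reports in
            print("worker", finished, "finished")
        for p in procs:
            p.join()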
In my (python) code I have a thread listening for changes from a couchdb feed (continuous changes). The changes request has a timeout parameter which is too big in certain circumstances (for example when a user wants to interrupt the program manually with ^C).
How can I abort a long-running blocking http request?
Is this possible, or do I need to reduce the timeout to make my program more responsive?
This would be unfortunate, because having a timeout small enough to make the program really responsive (say, 1 s) means creating lots of connections (one per second!), which defeats the purpose of listening for changes and makes it very difficult to be sure we are not missing any (changes can indeed slip through during the re-connecting timespan, so special code is needed to handle that case).
The other option is to forcefully abort the thread, but that is not really an option in Python.
If I understand correctly, it looks like you are waiting too long between requests before deciding whether or not to respond to the users. You are right that continuously closing and creating new connections will defeat the purpose of the changes feed.
A solution could be to use the heartbeat query parameter, with which CouchDB will keep sending newlines to tell the client that the connection is still alive.
http://localhost:5984/hello/_changes?feed=continuous&heartbeat=1000&include_docs=true
As long as you are getting heartbeats (newlines) you can be sure the connection is alive. A bare newline indicates that no changes have occurred, whereas an actual change will be reported back as data. There is no need to close the connection: just respond to your clients whenever resp != "\n".
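A rough sketch of reading such a feed (this assumes the third-party requests library and the example database above; error handling omitted): with the heartbeat set, an empty line means "still alive, nothing new", and anything else is a change.

    import json
    import requests

    url = "http://localhost:5984/hello/_changes"
    params = {"feed": "continuous", "heartbeat": 1000, "include_docs": "true"}

    with requests.get(url, params=params, stream=True) as resp:
        for line in resp.iter_lines():
            if not line:                    # bare newline: just a heartbeat
                continue
            change = json.loads(line)       # an actual change document
            print("change:", change)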
Blocking the thread's execution generally prevents the thread from being terminated; you need to wait until the request times out. But this is already clear.
Using a library that supports non-blocking requests might be a solution, but I don't know of one.
Anyway... you've mentioned that reducing the timeout leads to more connections. I'd suggest implementing a waiting loop between requests that can be interrupted by an external signal to terminate the thread. With this loop you can control the number of requests independently of the timeout.
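A sketch of such an interruptible waiting loop (stop_event, poll_changes and the 5-second pause are illustrative names): Event.wait() blocks like a sleep but returns immediately when another thread sets the event.

    import threading

    stop_event = threading.Event()

    def poll_changes():
        while not stop_event.is_set():
            # ... issue a changes request with a short timeout here ...
            # Pause between requests, but wake at once if asked to stop.
            if stop_event.wait(timeout=5.0):
                break

    t = threading.Thread(target=poll_changes)
    t.start()
    # later, e.g. when the user hits ^C:
    # stop_event.set(); t.join()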
While programming I had to make a choice between:
while not cont_flag:
    pass
and using an Event object:
if not cont_flag.is_set():
    cont_flag.wait()
I want to know if there is a difference in performance between the two methods.
There is. The first method is called busy waiting and is very different from blocking. In busy waiting, the CPU is being used constantly as the while loop is executed. In blocking, the thread is actually suspended until a wake-up condition is met.
See also this discussion:
What is the difference between busy-wait and polling?
The first one is referred to as busy waiting; it will eat up 100% of the CPU time while waiting. It's much better practice to have some signalling mechanism to communicate events (e.g. that something is done).
Python (CPython, because of the GIL) only allows a single thread to execute at a time, regardless of how many CPUs your system may have. If multiple threads are ready to run, Python will switch among them periodically. If you "busy wait" as in your first example, that while loop will eat up much of the time that your other threads could use for their work. While the second solution is far superior, if you end up using the first one, add a modest sleep to it:
import time

while not cont_flag:
    time.sleep(0.1)  # yields the GIL so other threads can run
I am fairly new to Python programming and threads aren't my area of expertise. I have a problem I hope people here can help me out with.
Task: as part of my master's thesis, I need to make a mixed-reality game with multiplayer capability. In my game design, each player can set a number of traps, each of which is active for a specific time period, e.g. 30 secs. In order to maintain a consistent game state across all the players, all the time checks need to be done on the server side, which is implemented in Python.
I decided to start a Python thread every time a new trap is laid by a player and run a timer on that thread. This part works fine, but the real problem arises when I need to notify the main thread that the time is up for a particular trap, so that I can communicate this to the client (an Android device).
I tried creating a queue and inserting information into it when the task is done, but I can't do a queue.join(), since that would put the main thread on hold until the task is done. This is not what I need, nor is it ideal in my case: the main thread is constantly communicating with the clients, and if it is halted, all communication with the players comes to a standstill.
I need the secondary thread, which is running a timer, to tell the main thread as soon as the time runs out, and to send the ID of the trap, so that I can pass this information to the Android client to remove it. How can I achieve this?
Any other suggestions on how this task can be achieved without starting a gazillion threads are also welcome. :) :)
Thanks in advance for the help..
Cheers
I have finally found a nice little task scheduler written in Python. It is quite lightweight and handy for scheduling events for a later time or date, with a callback mechanism that lets the child thread pass a value back to the main thread, notifying it of its status and whether the job was done successfully.
People out there who need similar functionality to the one in the question and don't want to wrestle with threads can use this scheduler for their events and get a callback when an event is done.
Here is the link to APScheduler.
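A minimal sketch using APScheduler's 3.x BackgroundScheduler API (trap_expired and the trap id are illustrative names): each trap becomes a one-shot "date" job.

    from datetime import datetime, timedelta
    from apscheduler.schedulers.background import BackgroundScheduler

    def trap_expired(trap_id):
        # Runs on a scheduler thread; notify the client from here,
        # or put trap_id on a queue for the main thread to pick up.
        print("trap", trap_id, "expired")

    scheduler = BackgroundScheduler()
    scheduler.start()
    scheduler.add_job(trap_expired, "date",
                      run_date=datetime.now() + timedelta(seconds=30),
                      args=["trap-42"])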
It may be easier to have all the timers handled in the main thread: keep a list of timers that you append new ones to. Each timer doesn't actually do anything; it just has a time when it goes off - which is easier if you work in arbitrary 'rounds' than in real time, but still doable. Each interval, the main loop should check all of them and see whether it is time (or past time) for them to expire - if it is, remove them from the list (of course, be careful about removing items from a list you're iterating over - it mightn't do what you expect).
If you have a lot of timers, and profiling shows that running through all of them every interval is costing you too much time, a simple optimisation is to keep them in a heapq - it will keep them sorted for you, so after the first timer that hasn't expired yet you know none of the rest have either. Something like:
while q:
    timer = heapq.heappop(q)
    if timer.expiry <= currenttime:
        trigger_events(timer)        # hypothetical: fire whatever this timer does
    else:
        heapq.heappush(q, timer)     # not expired yet; put it back and stop
        break
This still costs you one unnecessary pop/push pair, but it's hard to see how you would do better - again, doing something like:
for timer in q:
    if timer.expiry <= currenttime:
        heapq.heappop(q)             # remove it, then trigger its events
    else:
        break
Can have subtle bugs, because list iterators (the functions in heapq work on sequences and use side effects, rather than there being a full-fledged heapq class, for some reason) work by keeping track of which index they're up to - so if you remove the current element, everything after it shifts one index to the left and you end up skipping the next one.
The only important thing is that currenttime is consistently updated each interval in the main loop (or, if your heart is set on having it in real time, based on the system clock), and that timer.expiry is measured in the same units - if you have a concept of 'rounds' and a trap lasts six rounds, then when it is placed you would do heapq.heappush(q, Timer(expiry=currenttime + 6)).
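Note that the snippets above assume Timer objects can be ordered by heapq; a minimal illustrative class (not from the original answer) would be:

    import heapq

    class Timer:
        def __init__(self, expiry):
            self.expiry = expiry

        def __lt__(self, other):            # heapq compares items with <
            return self.expiry < other.expiry

    q = []
    currenttime = 0
    heapq.heappush(q, Timer(expiry=currenttime + 6))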
If you do want to do it the multithreaded way, your producer/consumer queue for cleanup will work - you just need to not use Queue.join(). Instead, as the timer in a thread runs out, it calls q.put() and then dies. The main loop would use q.get(False), which avoids blocking, or q.get(True, 0.1), which blocks for at most 0.1 seconds - the timeout can be any positive number; tune it carefully for the best tradeoff between blocking so long that clients notice and having events fire late because they just missed being in the queue on time.
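A sketch of that non-blocking consumer (lay_trap and the trap id are illustrative names; the modern queue module stands in for Python 2's Queue): each trap's timer thread just puts its id on the queue and dies, and the main loop polls with a short timeout.

    import queue
    import threading

    expired = queue.Queue()

    def lay_trap(trap_id, lifetime):
        # The timer thread's whole job is one put(), then it dies.
        threading.Timer(lifetime, expired.put, args=(trap_id,)).start()

    lay_trap("trap-42", 3.0)

    while True:
        # ... talk to the clients here ...
        try:
            trap_id = expired.get(True, 0.1)   # block for at most 0.1 s
        except queue.Empty:
            continue
        print("tell clients to remove", trap_id)
        break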
The main thread creates a queue and a bunch of worker threads that pull tasks from it. As long as the queue is empty, all the worker threads block and do nothing. When a task is put into the queue, one of the worker threads acquires it, does its job, and goes back to sleep as soon as it is done. That way you can reuse a thread over and over again without creating new worker threads.
When you need to stop the threads, you put a kill object into the queue that tells the thread to shut down instead of blocking on the queue.
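A sketch of that pattern (KILL and the worker count are illustrative names): note how the sentinel is re-queued so every worker eventually sees it.

    import queue
    import threading

    KILL = object()           # the "kill object" described above
    tasks = queue.Queue()

    def worker():
        while True:
            task = tasks.get()        # blocks while the queue is empty
            if task is KILL:
                tasks.put(KILL)       # pass the sentinel on to the next worker
                break
            print("processing", task)

    pool = [threading.Thread(target=worker) for _ in range(4)]
    for t in pool:
        t.start()

    for n in range(10):
        tasks.put(n)
    tasks.put(KILL)                   # ask the pool to shut down

    for t in pool:
        t.join()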
In my program I have a bunch of threads running and I'm trying to interrupt the main thread to get it to do something asynchronously. So I set up a handler and send the main process a SIGUSR1 - see the code below:
def SigUSR1Handler(signum, frame):
    self._logger.debug('Received SIGUSR1')
    return

signal.signal(signal.SIGUSR1, SigUSR1Handler)
# alternative: signal.signal(signal.SIGUSR1, signal.SIG_IGN)
In the above case, all the threads and the main process stop - from a C point of view this was unexpected - I want the threads to continue as they were before the signal. If I put the SIG_IGN in instead, everything continues fine.
Can somebody tell me how to do this? Maybe I have to do something with the 'frame' manually to get back to where it was... just a guess though.
Thanks in advance.
Thanks for your help on this.
To explain a bit more: I have thread instances writing string information to a socket, which is also output to a file. These threads run their own timers, so they independently write their output to the socket. When the program runs I also see their output on stdout, but it all stops as soon as I see the debug line from the signal handler.
I need the threads to keep sending this info constantly, but I need the main program to take a command so it also starts doing something else (in parallel) for a while. I thought I'd just be able to send a signal from the command line to trigger this.
Mixing signals and threads is always a little precarious. What you describe should not happen, however. Python only handles signals in the main thread: if the OS delivers the signal to another thread, that thread may be briefly interrupted (when it's performing, say, a system call), but it won't execute the signal handler. The main thread will be asked to execute the signal handler at the next opportunity.
What are your threads (including the main thread) actually doing when you send the signal? How do you notice that they all 'stop'? Is it a brief pause (easily explained by the fact that the main thread will need to acquire the GIL before handling the signal) or does the process break down entirely?
I'll sort-of answer my own question:
In my first attempt at this I was using time.sleep(run_time) in the main thread to control how long the threads ran until they were stopped. By adding debug output I could see that the sleep seemed to exit as soon as the signal handler returned, so everything was shutting down normally, but early!
I've replaced the sleep with a while loop, which doesn't bail out after the signal handler returns, so my threads keep running. That solves the problem, but I'm still a bit puzzled about sleep()'s behaviour.
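For what it's worth, time.sleep() could return early when a signal handler ran (PEP 475 changed this in Python 3.5). A deadline loop like the following sketch (run_for is an illustrative name) keeps waiting regardless of early wake-ups:

    import time

    def run_for(run_time):
        deadline = time.monotonic() + run_time
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            # Even if this sleep is cut short by a signal, the loop
            # re-checks the clock and goes back to sleep.
            time.sleep(min(remaining, 1.0))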
You should probably use a threading.Condition variable instead of sending signals. Have your main thread check it every loop and perform its special operation if it's been set.
If you insist on using signals, you'll want to move to using subprocess instead of threads, as your problem is likely due to the GIL.
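A sketch of the first suggestion, using a threading.Event rather than a full Condition for simplicity (on_usr1 and the 0.5-second poll interval are illustrative):

    import signal
    import threading

    do_special = threading.Event()

    def on_usr1(signum, frame):
        do_special.set()          # just record that the signal arrived

    signal.signal(signal.SIGUSR1, on_usr1)   # installed in the main thread

    while True:
        # ... normal main-thread work here ...
        if do_special.wait(timeout=0.5):     # wakes early once the flag is set
            do_special.clear()
            print("performing the special operation")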
Watch this presentation by David Beazley.
http://blip.tv/file/2232410
It also explains some quirky behavior related to threads and signals (Python specific, not the general quirkiness of the subject :-) ).
http://pyprocessing.berlios.de/ Pyprocessing is a neat library that makes it easier to work with separate processes in Python.