Both threads work when run on their own, but when they are executed together, halt_listener monopolizes the resources and import_1 never gets to execute. The end goal is to have halt_listener listen for a kill message and then set a run variable to false. This worked when I was sending a pipe to halt_listener, but I would prefer a queue.
Here is my code:
import multiprocessing
import time
from threading import Thread

class test_imports:#Test classes remove
    alive = {'import_1': True, 'import_2': True};

    def halt_listener(self, control_Queue, thread_Name, kill_command):
        while True:
            print ("Checking queue for kill")
            isAlive = control_queue.get()
            print ("isAlive", isAlive)
            if isAlive == kill_command:
                print ("kill listener triggered")
                self.alive[thread_Name] = False;
                return

    def import_1(self, control_Queue, thread_Number):
        print ("Import_1 number %d started") % thread_Number
        halt = test_imports()
        t = Thread(target=halt.halt_listener, args=(control_Queue, 'import_1', 't1kill'))
        count = 0
        t.run()
        global alive
        run = test_imports.alive['import_1'];
        while run:
            print ("Thread type 1 number %d run count %d") % (thread_Number, count)
            count = count + 1
            print ("Test Import_1 ", run)
            run = self.alive['import_1'];
        print ("Killing thread type 1 number %d") % thread_Number
Am I missing something?
The problem is that you're calling t.run(). run isn't a method that starts a thread; run is the actual code that's meant to run on the thread. By calling it directly, you're running it on your thread, and waiting for it to finish.
What you want is t.start().
See the documentation on threading.Thread for details.
While we're at it, there are a few other problems with your code.
First, you don't have a lock around self.alive. You can't change a value (except for a small number of automatically-self-synchronized types like Queue) in one thread and access it in another without a lock. You will often get away with it, but "often" in a multithreaded program just means it won't fail until your big demonstration, and it will then take weeks to figure out how to reproduce before you can even begin fixing it… (In this case, a Condition might make more sense than a Lock, but either way, you need to synchronize on something.)
Meanwhile, looping as fast as possible to poll self.alive['import_1'] is going to burn 100% CPU for no good reason. There's almost always a better way to wait on something (e.g., in this case, if you used a Condition for synchronization, you could also use it for waiting here); in the rare cases when there isn't, you should at least sleep every time through the loop.
alive is actually a class attribute rather than an instance attribute. That's usually not what you want. In fact, you try to access both test_imports.alive and self.alive, but both of those will end up being the class attribute as long as you never assign to it, which makes it more confusing. And then, on top of that, you have a global with the same name, which is just a recipe for extreme confusion.
Also, this looks like Python 2 code, but you're using print as if it were a function in some cases—e.g., print ("isAlive", isAlive). This isn't going to do what you want—instead of printing something like isAlive command, it's going to print something like ('isAlive', 'command'), which is not very pretty. And meanwhile, the extra parentheses in expressions like ("Import_1 number %d started") % thread_Number mean that someone has to read that over a few times to convince themselves that the parentheses aren't actually doing anything.
Finally, why are you creating a separate test_imports instance to call halt_listener on? Clearly the two methods are trying to communicate through attributes on self, but they're not going to do that if they're called on two different objects. Why not just target=self.halt_listener?
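Putting those fixes together, here's a minimal sketch (Python 3 print syntax, simplified names; the Condition both guards the alive flag and replaces the busy loop, as suggested above). Treat it as one way to wire things up, not the only one; a threading.Event would work just as well as the flag-plus-Condition pair:

import threading
import queue

class TestImports:
    def __init__(self):
        self.alive = True                  # instance attribute, not class-level
        self.cond = threading.Condition()  # guards self.alive

    def halt_listener(self, control_queue, kill_command):
        while True:
            msg = control_queue.get()      # blocks; no busy loop needed
            if msg == kill_command:
                with self.cond:
                    self.alive = False
                    self.cond.notify_all() # wake the waiting worker
                return

    def import_1(self, control_queue, thread_number):
        print("Import_1 number %d started" % thread_number)
        # target=self.halt_listener, so both methods share one instance
        t = threading.Thread(target=self.halt_listener,
                             args=(control_queue, 't1kill'))
        t.start()                          # start(), not run()
        count = 0
        with self.cond:
            while self.alive:
                print("Thread type 1 number %d run count %d"
                      % (thread_number, count))
                count += 1
                self.cond.wait(timeout=1.0)  # wait instead of spinning
        print("Killing thread type 1 number %d" % thread_number)
        t.join()

if __name__ == '__main__':
    import time
    q = queue.Queue()
    worker = threading.Thread(target=TestImports().import_1, args=(q, 1))
    worker.start()
    time.sleep(3)
    q.put('t1kill')   # tell the listener to stop the worker
    worker.join()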
I know there are a few questions and answers related to hanging threads in Python, but my situation is slightly different as the script is hanging AFTER all the threads have been completed. The threading script is below, but obviously the first 2 functions are simplified massively.
When I run the script shown, it works. When I use my real functions, the script hangs AFTER THE LAST LINE. So, all the scenarios are processed (and a message printed to confirm), logStudyData() then collates all the results and writes to a csv. "Script Complete" is printed. And THEN it hangs.
The script with threading functionality removed runs fine.
I have tried enclosing the main script in try...except but no exception gets logged. If I use a debugger with a breakpoint on the final print and then step it forward, it hangs.
I know there is not much to go on here, but short of including the whole 1500-line script, I don't know what else to do. Any suggestions welcome!
def runScenario(scenario):
    # Do a bunch of stuff
    with lock:
        # access global variables
        pass
    pass

def logStudyData():
    # Combine results from all scenarios into a df and write to csv
    pass

def worker():
    global q
    while True:
        next_scenario = q.get()
        if next_scenario is None:
            break
        runScenario(next_scenario)
        print(next_scenario , " is complete")
        q.task_done()

import threading
from queue import Queue

global q, lock
q = Queue()
threads = []
scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']
num_worker_threads = 6
lock = threading.Lock()

for i in range(num_worker_threads):
    print("Thread number ",i)
    this_thread = threading.Thread(target=worker)
    this_thread.start()
    threads.append(this_thread)

for scenario_name in scenario_list:
    q.put(scenario_name)

q.join()
print("q.join completed")
logStudyData()
print("script complete")
As the docs for Queue.get say:
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
In other words, there is no way get can ever return None, except by you calling q.put(None) on the main thread, which you don't do.
Notice that the example directly below those docs does this:
for i in range(num_worker_threads):
    q.put(None)
for t in threads:
    t.join()
The second loop isn't strictly necessary; you can usually get away without joining the workers, because the interpreter joins non-daemon threads at exit anyway.
But the first one is absolutely necessary. You need to either do this, or come up with some other mechanism to tell your workers to quit. Without that, your main thread just tries to exit, which means it tries to join every worker, but those workers are all blocked forever on a get that will never happen, so your program hangs forever.
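Applied to your script, the shutdown sequence might look something like this sketch (it reuses the names from your code):

q.join()                     # wait for every scenario to be processed
for i in range(num_worker_threads):
    q.put(None)              # one sentinel per worker, so every get() returns
for t in threads:
    t.join()                 # each worker has now seen its sentinel and exited
print("q.join completed")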
Building a thread pool may not be rocket science (if only because rocket scientists tend to need their calculations to be deterministic and hard real-time…), but it's not trivial, either, and there are plenty of things you can get wrong. You may want to consider using one of the two already-built threadpools in the Python standard library, concurrent.futures.ThreadPoolExecutor or multiprocessing.dummy.Pool. This would reduce your entire program to:
import concurrent.futures

def work(scenario):
    runScenario(scenario)
    print(scenario , " is complete")

scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']

with concurrent.futures.ThreadPoolExecutor(max_workers=6) as x:
    results = list(x.map(work, scenario_list))
print("q.join completed")
logStudyData()
print("script complete")
Obviously you'll still need a lock around any mutable variables you change inside runScenario—although if you're only using a mutable variable there because you couldn't figure out how to return values to the main thread, that's trivial with an Executor: just return the values from work, and then you can use them like this:
for result in x.map(work, scenario_list):
    do_something(result)
I've been trying to read up on threading and multiprocessing, but all the examples are too intricate and advanced for my level of Python/programming knowledge. I want to run a function which consists of a while loop, and while that loop runs I want to continue with the program and eventually change the condition for the while loop and end that process. This is the code:
import time

class Example():
    def __init__(self):
        self.condition = False

    def func1(self):
        self.condition = True
        while self.condition:
            print "Still looping"
            time.sleep(1)
        print "Finished loop"

    def end_loop(self):
        self.condition = False
Then I make the following function calls:
ex = Example()
ex.func1()
time.sleep(5)
ex.end_loop()
What I want is for func1 to run for 5 s before end_loop() is called, which changes the condition and ends the loop, and thus also the function. That is, I want execution to "split" when it arrives at func1: one process enters the function while the other continues down the program and starts the time.sleep(5) execution.
This must be the most basic example of multiprocessing, yet I've had trouble finding a simple way to do it!
Thank you
EDIT1: regarding do_something. In my real problem, do_something is replaced by some code that communicates with another program via a socket, receives packages with coordinates every 0.02 s, and stores them in member variables of the class. I want this constant updating of the coordinates to start, and then be able to read the coordinates via other functions at the same time.
However that is not so relevant. What if do_something is replaced by:
time.sleep(1)
print "Still looping"
How do I solve my problem then?
EDIT2: I have tried multiprocessing like this:
from multiprocessing import Process
ex = Example()
p1 = Process(target=ex.func1())
p2 = Process(target=ex.end_loop())
p1.start()
time.sleep(5)
p2.start()
When I ran this, I never got to p2.start(), so that did not help. Even if it had, this is not really what I'm looking for either. What I want would be just to start the process p1, and then continue with time.sleep and ex.end_loop().
The first problem with your code are the calls
p1 = Process(target=ex.func1())
p2 = Process(target=ex.end_loop())
With ex.func1() you're calling the function and passing its return value as the target parameter. Since the function doesn't return anything, you're effectively calling
p1 = Process(target=None)
p2 = Process(target=None)
which makes, of course, no sense.
After fixing that, the next problem will be shared data: when using the multiprocessing package, you implement concurrency using multiple processes which, by default, cannot simply share data afaik. Have a look at Sharing state between processes in the package's documentation to read about this. Especially take the first sentence into account: "when doing concurrent programming it is usually best to avoid using shared state as far as possible"!
So you might want to also have a look at Exchanging objects between processes to read about how to send/receive data between two different processes. So, instead of simply setting a flag to stop the loop, it might be better to send a message to signal the loop should be terminated.
Also note that processes are a heavyweight form of multiprocessing: they spawn multiple OS processes, which comes with a relatively big overhead. multiprocessing's main purpose is to avoid problems imposed by Python's Global Interpreter Lock (google about this to read more...). If your problem isn't much more complex than what you've told us, you might want to use the threading package instead: threads come with less overhead than processes and also allow access to the same data (although you really should read about synchronization when doing this...).
I'm afraid multiprocessing is an inherently complex subject, so I think you will need to advance your programming/Python skills to successfully use it. But I'm sure you'll manage; the Python documentation about this is comprehensive, and there are a lot of other resources about it.
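For instance, here is a minimal threading-based sketch of your Example class. It uses a threading.Event in place of the plain boolean (an assumption on my part, but it sidesteps the synchronization questions):

import threading
import time

class Example:
    def __init__(self):
        self.stop_event = threading.Event()

    def func1(self):
        while not self.stop_event.is_set():
            print("Still looping")
            time.sleep(1)
        print("Finished loop")

    def end_loop(self):
        self.stop_event.set()

ex = Example()
t = threading.Thread(target=ex.func1)   # note: no parentheses after func1
t.start()
time.sleep(5)
ex.end_loop()
t.join()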
To tackle your EDIT2 problem, you could try using the shared memory map Value.
import time
from multiprocessing import Process, Value

class Example():
    def func1(self, cond):
        while (cond.value == 1):
            print('do something')
            time.sleep(1)
        return

if __name__ == '__main__':
    ex = Example()
    cond = Value('i', 1)
    proc = Process(target=ex.func1, args=(cond,))
    proc.start()
    time.sleep(5)
    cond.value = 0
    proc.join()
(Note the target=ex.func1 without the parentheses and the comma after cond in args=(cond,).)
But look at the answer provided by MartinStettner to find a good solution.
This has been discussed many, many times, but I still don't have a good grasp on how to best accomplish this.
Suppose I have two threads: a main app thread and a worker thread. The main app thread (say it's a WXWidgets GUI thread, or a thread that is looping and accepting user input at the console) could have a reason to stop the worker thread - the user's closing the application, a stop button was clicked, some error occurred in the main thread, whatever.
Commonly suggested is to set up a flag that the thread checks frequently to determine whether to exit. I have two problems with the suggested ways to approach this, however:
First, writing constant checks of a flag into my code makes my code really ugly, and it's very, very prone to problems due to the huge amount of code duplication. Take this example:
def WorkerThread():
    while (True):
        doOp1() # assume this takes say 100ms.
        if (exitThread == True):
            safelyEnd()
            return
        doOp2() # this one also takes some time, say 200ms
        if (exitThread == True):
            safelyEnd()
            return
        if (somethingIsTrue == True):
            doSomethingImportant()
            if (exitThread == True): return
            doSomethingElse()
            if (exitThread == True): return
        doOp3() # this blocks for an indeterminate amount of time - say, it's waiting on a network response
        if (exitThread == True):
            safelyEnd()
            return
        doOp4() # this is doing some math
        if (exitThread == True):
            safelyEnd()
            return
        doOp5() # This calls a buggy library that might block forever. We need a way to detect this and kill this thread if it's stuck for long enough...
        saveSomethingToDisk() # might block while the disk spins up, or while a network share is accessed...whatever
        if (exitThread == True):
            safelyEnd()
            return

def safelyEnd():
    cleanupAnyUnfinishedBusiness() # do whatever is needed to get things to a workable state even if something was interrupted
    writeWhatWeHaveToDisk() # it's OK to wait for this since it's so important
If I add more code or change code, I have to make sure I'm adding those check blocks all over the place. If my worker thread is a very lengthy thread, I could easily have tens or even hundreds of those checks. Very cumbersome.
Think of the other problems. If doOp4() does accidentally deadlock, my app will spin forever and never exit. Not a good user experience!
Using daemon threads isn't really a good option either because it denies me the opportunity to execute the safelyEnd() code. This code might be important - flushing disk buffers, writing log data for debugging purposes, etc.
Second, my code might call functions that block where I don't have the opportunity to check frequently. Let's say this function exists but it's in code that I don't have access to - say part of a library:
def doOp4():
    time.sleep(60) # imagine this is a network call that waits 60 seconds for a reply before returning
If that timeout is 60 seconds, even if my main thread gives the signal for the thread to end, it still might sit there for 60 seconds, when it would be perfectly reasonable for it to just stop waiting for a network response and exit. If that code is part of a library I didn't write, however, I have no control over how that works.
Even if I did write the code for a network check, I'd basically have to refactor it so that rather than waiting 60 seconds, it loops 60 times and waits 1 second between checks of the exit flag! Again, very messy!
The upshot of all of this, is it feels like a good way to be able to implement this easily would be to somehow cause an exception on a specific thread. If I could do that, I could wrap the entire worker thread's code in a try block, and put the safelyEnd() code in the exception handler, or even a finally block.
Is there a way to either accomplish this, or refactor this code with a different technique that will make things work? The thing is, ideally, when the user requests a quit, we want to make them wait the minimum possible amount. It seems that there has to be a simple way to accomplish this, as this is a very common thing in apps!
Most of the thread communication objects don't allow for this type of setup. They might allow for a cleaner way to have an exit flag, but it still doesn't eliminate the need to constantly check that exit flag, and it still won't deal with the thread blocking because of an external call or because it's simply in a busy loop.
The biggest thing for me is really that if I have a long worker thread procedure I have to litter it with hundreds of checks of the flag. This just seems way too messy and doesn't feel like it's very good coding practice. There has to be a better way...
Any advice would be greatly appreciated.
First, you can make this a lot less verbose and repetitive by using an exception, without needing the ability to raise exceptions into the thread from outside, or any other new tricks or language features:
def WorkerThread():
    class ExitThreadError(Exception):
        pass

    def CheckEnd():
        if exitThread:
            raise ExitThreadError()

    try:
        while True:
            doOp1() # assume this takes say 100ms.
            CheckEnd()
            doOp2() # this one also takes some time, say 200ms
            CheckEnd()
            # etc.
    except ExitThreadError:
        safelyEnd()
Note that you really ought to be guarding exitThread with a Lock or Condition—which is another good reason to wrap up the check, so you only need to fix that in one place.
Anyway, I've taken out some excessive parentheses, == True checks, etc. that added nothing to the code; hopefully you can still see how it's equivalent to the original.
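For what it's worth, a guarded version of the check might look like this sketch (exit_lock is an assumed name for a lock created wherever exitThread lives):

import threading

exit_lock = threading.Lock()     # created alongside exitThread

def CheckEnd():
    with exit_lock:              # guard every read of the shared flag
        if exitThread:
            raise ExitThreadError()

# and in the controlling thread, guard the write the same way:
# with exit_lock:
#     exitThread = True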
You can take this even farther by restructuring your function into a simple state machine; then you don't even need an exception. I'll show a ridiculously trivial example, where every state always implicitly transitions to the next state no matter what. For this case, the refactor is obviously reasonable; whether it's reasonable for your real code, only you can really tell.
def WorkerThread():
    states = (doOp1, doOp2, doOp3, doOp4, doOp5)
    current = 0
    while not exitThread:
        states[current]()
        current = (current + 1) % len(states)  # wrap around to doOp1 again
    safelyEnd()
Neither of these does anything to help you interrupt in the middle of one of your steps.
If you have some function that takes 60 seconds and there's not a damn thing you can do about it, then there's no way to cancel your thread during those 60 seconds and there's not a damn thing you can do about it. That's just the way it is.
But usually, things that take 60 seconds are really doing something like blocking on a select, and there is something you can do about that—create a pipe, stick its read end in the select, and write on the other end to wake up the thread.
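Here's a sketch of that self-pipe trick (sock and the worker loop are hypothetical; handle() stands in for the real work):

import os
import select

wake_r, wake_w = os.pipe()       # the read end goes into the select

def worker(sock):
    while True:
        readable, _, _ = select.select([sock, wake_r], [], [])
        if wake_r in readable:
            os.read(wake_r, 1)   # drain the wakeup byte
            return               # we were told to exit
        handle(sock.recv(4096))

# from the main thread, to wake and stop the worker:
# os.write(wake_w, b'x')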
Or, if you're feeling hacky, just closing/deleting/etc. a file or other object that the function is waiting on/processing/otherwise using will often guarantee that it fails quickly with an exception. Of course sometimes it guarantees a segfault, or corrupted data, or a 50% chance of exiting and a 50% chance of hanging forever, or… So, even if you can't control that doOp4 function, you'd better be able to analyze its source and/or whitebox test it.
If worst comes to worst, then yes, you do have to change that one 60-second timeout into 60 1-second timeouts. But usually it won't come to that.
Finally, if you really do need to be able to kill a thread, don't use a thread, use a child process. Those are killable.
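A minimal sketch of that idea (worker here is just a stand-in for your real ops):

import multiprocessing
import time

def worker():
    while True:
        time.sleep(1)            # stand-in for doOp1()..doOp5()

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(5)
    p.terminate()                # hard kill: the child gets no chance to clean up
    p.join()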
Just make sure that your process is always in a state where it's safe to kill it—or, if you only care about Unix, use a USR signal and mask it out when the process isn't in a safe-to-kill state.
But if it's not safe to kill your process in the middle of that 60-second doOp4 call, this isn't really going to help you, because you still won't be able to kill it during those 60 seconds.
In some cases, you can have the child process arrange for the parent to clean up for it if it gets killed unexpectedly, or even arrange for it to be cleaned up on the next run (e.g., think of a typical database journal).
But what you're asking for is ultimately a contradiction: you want to hard-kill a thread without giving it a chance to finish what it's doing, yet you want to guarantee that it finishes what it's doing, and you don't want to rewrite the code to make that possible. So you need to rethink your design so that it no longer requires something impossible.
If you do not mind your code running about ten times slower, you can use the Thread2 class implemented below. An example follows that shows how calling the new stop method should kill the thread on the next bytecode instruction. Implementing a cleanup system is left as an exercise for the reader.
import threading
import sys

class StopThread(StopIteration):
    pass

threading.SystemExit = SystemExit, StopThread

class Thread2(threading.Thread):
    def stop(self):
        self.__stop = True

    def _bootstrap(self):
        if threading._trace_hook is not None:
            raise ValueError('Cannot run thread with tracing!')
        self.__stop = False
        sys.settrace(self.__trace)
        super()._bootstrap()

    def __trace(self, frame, event, arg):
        if self.__stop:
            raise StopThread()
        return self.__trace

class Thread3(threading.Thread):
    def _bootstrap(self, stop_thread=False):
        def stop():
            nonlocal stop_thread
            stop_thread = True
        self.stop = stop

        def tracer(*_):
            if stop_thread:
                raise StopThread()
            return tracer
        sys.settrace(tracer)
        super()._bootstrap()
################################################################################
import time

def main():
    test = Thread2(target=printer)
    test.start()
    time.sleep(1)
    test.stop()
    test.join()

def printer():
    while True:
        print(time.time() % 1)
        time.sleep(0.1)

if __name__ == '__main__':
    main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I am using this code:
def startThreads(arrayofkeywords):
    global i
    i = 0
    while len(arrayofkeywords):
        try:
            if i<maxThreads:
                keyword = arrayofkeywords.pop(0)
                i = i+1
                thread = doStuffWith(keyword)
                thread.start()
        except KeyboardInterrupt:
            sys.exit()
    thread.join()
This is for threading in Python. I have almost everything done, but I don't know how to manage the results of each thread. On each thread I have an array of strings as the result; how can I join all those arrays into one safely? Because if I try writing into a global array, two threads could be writing at the same time.
First, you actually need to save all those thread objects to call join() on them. As written, you're saving only the last one of them, and then only if there isn't an exception.
An easy way to do multithreaded programming is to give each thread all the data it needs to run, and then have it not write to anything outside that working set. If all threads follow that guideline, their writes will not interfere with each other. Then, once a thread has finished, have the main thread aggregate the results into a global array. This is known as "fork/join parallelism."
If you subclass the Thread object, you can give it space to store that return value without interfering with other threads. Then you can do something like this:
class MyThread(threading.Thread):
    def __init__(self, ...):
        self.result = []
        ...

def main():
    # doStuffWith() returns a MyThread instance
    threads = [ doStuffWith(k) for k in arrayofkeywords[:maxThreads] ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
        ret = t.result
        # process return value here
Edit:
After looking around a bit, it seems like the above method isn't the preferred way to do threads in Python. The above is more of a Java-esque pattern for threads. Instead you could do something like:
def handler(outList):
    ...
    # Modify existing object (important!)
    outList.append(1)
    ...

def doStuffWith(keyword):
    ...
    result = []
    thread = Thread(target=handler, args=(result,))
    return (thread, result)

def main():
    threads = [ doStuffWith(k) for k in arrayofkeywords[:maxThreads] ]
    for t in threads:
        t[0].start()
    for t in threads:
        t[0].join()
        ret = t[1]
        # process return value here
Use a Queue.Queue instance, which is intrinsically thread-safe. Each thread can .put its results to that global instance when it's done, and the main thread (when it knows all working threads are done, by .joining them for example as in #unholysampler's answer) can loop .getting each result from it, and use each result to .extend the "overall result" list, until the queue is emptied.
Edit: there are other big problems with your code -- if the maximum number of threads is less than the number of keywords, it will never terminate: you're trying to start a thread per keyword, never fewer, but once you've started the max number you loop forever to no further purpose.
Consider instead using a threading pool, kind of like the one in this recipe, except that in lieu of queueing callables you'll queue the keywords -- since the callable you want to run in the thread is the same in each thread, just varying the argument. Of course that callable will be changed to peel something from the incoming-tasks queue (with .get) and .put the list of results to the outgoing-results queue when done.
To terminate the N threads you could, after all keywords, .put N "sentinels" (e.g. None, assuming no keyword can be None): a thread's callable will exit if the "keyword" it just pulled is None.
More often than not, Queue.Queue offers the best way to organize threading (and multiprocessing!) architectures in Python, be they generic like in the recipe I pointed you to, or more specialized like I'm suggesting for your use case in the last two paragraphs.
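Here's a sketch of that architecture (Python 2 names to match the question; process() is a hypothetical stand-in for whatever per-keyword work doStuffWith does, returning a list of strings):

import threading
from Queue import Queue          # 'queue' in Python 3

def worker(tasks, results):
    while True:
        keyword = tasks.get()
        if keyword is None:      # sentinel: no more work
            return
        results.put(process(keyword))

tasks, results = Queue(), Queue()
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(maxThreads)]
for t in threads:
    t.start()
for keyword in arrayofkeywords:
    tasks.put(keyword)
for _ in threads:
    tasks.put(None)              # one sentinel per worker
for t in threads:
    t.join()

overall = []
while not results.empty():       # safe: all workers have already exited
    overall.extend(results.get())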
You need to keep pointers to each thread you make. As is, your code only ensures the last created thread finishes. This does not imply that all the ones you started before it have also finished.
def startThreads(arrayofkeywords):
    global i
    i = 0
    threads = []
    while len(arrayofkeywords):
        try:
            if i<maxThreads:
                keyword = arrayofkeywords.pop(0)
                i = i+1
                thread = doStuffWith(keyword)
                thread.start()
                threads.append(thread)
        except KeyboardInterrupt:
            sys.exit()
    for t in threads:
        t.join()
    # process results stored in each thread
This also solves the problem of write access, because each thread will store its data locally. Then after all of them are done, you can do the work to combine each thread's local data.
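For example, assuming each thread stores its strings in a result attribute as in the MyThread sketch above, the combining step is a plain single-threaded loop:

overall = []
for t in threads:
    overall.extend(t.result)   # safe: every t has already been join()ed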
I know that this question is a little bit old, but the best way to do this is not to harm yourself too much with the approaches proposed in the other answers :)
Please read the reference on Pool. This way you will fork-join your work:
from multiprocessing import Pool  # assumed import; the original answer omitted it

def doStuffWith(keyword):
    return keyword + ' processed in thread'

def startThreads(arrayofkeywords):
    pool = Pool(processes=maxThreads)
    result = pool.map(doStuffWith, arrayofkeywords)
    print result
Writing into a global array is fine if you use a semaphore to protect the critical section. You 'acquire' the lock when you want to append to the global array, then 'release' when you are done. This way, only one thread is ever appending to the array.
Check out http://docs.python.org/library/threading.html and search for semaphore for more info.
sem = threading.Semaphore()
...
sem.acquire()
# do dangerous stuff
sem.release()
Try some of the semaphore's methods, like acquire and release.
http://docs.python.org/library/threading.html
Is there any way to make a function (like the simple ones I've made which generate the Fibonacci sequence from 0 to a point, and all the primes between two points) run indefinitely? E.g., until I press a certain key or until a time has passed, rather than until a number reaches a certain point?
Also, if it is based on time, is there any way I could just extend the time and start it going from that point again, rather than having to start again from 0? I am aware there is a time module; I just don't know much about it.
The simplest way is just to write a program with an infinite loop, and then hit control-C to stop it. Without more description it's hard to know if this works for you.
If you do it time-based, you don't need a generator. You can just have it pause for user input, something like a "Continue? [y/n]", read from stdin, and depending on what you get either exit the loop or not.
If you really want your function to keep running while still accepting user (or system) input, you have two solutions:
multi-thread
multi-process
It depends on how fine-grained the interaction needs to be. If you just want to interrupt the function and don't care about a clean exit, multi-process is fine.
In both cases, you can rely on some shared resource (a file or shared memory for multi-process, a variable with an associated mutex for multi-thread) and check the state of that resource regularly in your function. If it is set up to tell you to quit, just do it.
Example on multi-thread:
from threading import Thread, Lock
from time import sleep

class MyFct(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.mutex = Lock()
        self._quit = False

    def stopped(self):
        self.mutex.acquire()
        val = self._quit
        self.mutex.release()
        return val

    def stop(self):
        self.mutex.acquire()
        self._quit = True
        self.mutex.release()

    def run(self):
        i = 1
        j = 1
        print i
        print j
        while True:
            if self.stopped():
                return
            i,j = j,i+j
            print j

def main_fct():
    t = MyFct()
    t.start()
    sleep(1)
    t.stop()
    t.join()
    print "Exited"

if __name__ == "__main__":
    main_fct()
You could use a generator for this:
def finished():
    "Define your exit condition here"
    return ...

def count(i=0):
    while not finished():
        yield i
        i += 1

for i in count():
    print i
If you want to change the exit condition you could pass a value back into the generator function and use that value to determine when to exit.
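A sketch of that idea using generator.send() (the protocol here is made up: sending a truthy value tells the generator to exit):

def count(i=0):
    while True:
        stop = yield i           # receives whatever is passed via send()
        if stop:
            return
        i += 1

gen = count()
print(next(gen))                 # 0 (primes the generator)
print(gen.send(False))           # 1 (keep going)
gen.send(True)                   # generator returns, raising StopIteration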
As in almost all languages:
while True:
    # check what you want and eventually break
    print nextValue()
The second part of your question is more interesting:
Also, if it is based on time then is there anyway I could just extend the time and start it going from that point again rather than having to start again from 0
You can use yield instead of return in the function nextValue().
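For example, a resumable Fibonacci sketch (fib is an illustrative name): each time you come back to the generator, it continues from where it stopped instead of restarting from 0.

import time

def fib():
    a, b = 0, 1
    while True:
        yield a                  # pauses here; resumes on the next call
        a, b = b, a + b

gen = fib()
stoptime = time.time() + 1
while time.time() < stoptime:
    print(next(gen))

# later: extend the deadline and pick up exactly where we left off
stoptime = time.time() + 1
while time.time() < stoptime:
    print(next(gen))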
If you use a child thread to run the function while the main thread waits for character input it should work. Just remember to have something that stops the child thread (in the example below the global runthread)
For example:
import threading, time

runthread = 1

def myfun():
    while runthread:
        print "A"
        time.sleep(.1)

t = threading.Thread(target=myfun)
t.start()
raw_input("")
runthread = 0
t.join()
The raw_input call does just that: the main thread blocks until you press Enter, then clears the flag and joins the child thread.
If you want to exit based on time, you can use the signal module's alarm(time) function and then catch SIGALRM - here's an example: http://docs.python.org/library/signal.html#example
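A sketch of that approach (Unix-only; generate_values is a hypothetical stand-in for your Fibonacci or prime generator):

import signal

class TimeUp(Exception):
    pass

def handler(signum, frame):
    raise TimeUp()

signal.signal(signal.SIGALRM, handler)
signal.alarm(5)                  # ask the OS to deliver SIGALRM in 5 seconds
try:
    for value in generate_values():
        print(value)
except TimeUp:
    print("time's up")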
You can let the user interrupt the program in a sane manner by catching KeyboardInterrupt. Simply catch the KeyboardInterrupt exception from outside you main loop, and do whatever cleanup you want.
If you want to continue later where you left off, you will have to add some sort of persistence. I would pickle a data structure to disk that you could read back in to continue the operations.
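A sketch of that checkpointing idea (the state layout and file name are made up):

import os
import pickle

STATE_FILE = 'fib_state.pkl'

# load previous progress if it exists, else start fresh
if os.path.exists(STATE_FILE):
    with open(STATE_FILE, 'rb') as f:
        fib = pickle.load(f)
else:
    fib = [1, 1]

try:
    while True:
        fib.append(fib[-1] + fib[-2])
except KeyboardInterrupt:
    with open(STATE_FILE, 'wb') as f:
        pickle.dump(fib, f)      # the next run resumes from here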
I haven't tried anything like this, but you could look into using something like memoizing, and caching to the disk.
You could do something like this to generate fibonnacci numbers for 1 second then stop.
import time

fibonnacci = [1,1]
stoptime = time.time() + 1 # set stop time to 1 second in the future
while time.time() < stoptime:
    fibonnacci.append(fibonnacci[-1]+fibonnacci[-2])

print "Generated %s numbers, the last one was %s." % (len(fibonnacci),fibonnacci[-1])
I'm not sure how efficient it is to call time.time() in every loop - depending on what you are doing inside the loop, it might end up taking a lot of the performance away.
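If that overhead ever matters, one common trick is to check the clock only every N iterations (N = 1000 here is arbitrary):

import time

fibonnacci = [1, 1]
stoptime = time.time() + 1
count = 0
while True:
    fibonnacci.append(fibonnacci[-1] + fibonnacci[-2])
    count += 1
    if count % 1000 == 0 and time.time() >= stoptime:
        break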