making a programme run indefinitely in python

Is there any way to make a function (the ones I'm thinking of are in the style of the simple ones I've made, which generate the Fibonacci sequence from 0 up to a point, and all the primes between two points) run indefinitely, e.g. until I press a certain key or until some time has passed, rather than until a number reaches a certain point?
Also, if it is based on time, is there any way I could just extend the time and start it going again from that point, rather than having to start again from 0? I am aware there is a time module, I just don't know much about it.

The simplest way is just to write a program with an infinite loop, and then hit control-C to stop it. Without more description it's hard to know if this works for you.
If you do it time-based, you don't need a generator. You can just have it pause for user input (something like a "Continue? [y/n]" prompt), read from stdin, and depending on what you get, either exit the loop or not.
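For example, a minimal sketch of that prompt-driven loop (Python 3 syntax; the fib generator here is just a stand-in for whatever you are computing):
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

gen = fib()
while True:
    for _ in range(10):                    # emit a batch of values
        print(next(gen))
    if input("Continue? [y/n] ").strip().lower().startswith("n"):
        break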

If you really want your function to keep running and still accept user (or system) input, you have two solutions:
multi-thread
multi-process
Which one is right will depend on how fine-grained the interaction needs to be. If you just want to interrupt the function and don't care about a clean exit, multi-process is fine.
In both cases, you can rely on a shared resource (a file or shared memory for multi-process, a variable with an associated mutex for multi-thread) and check the state of that resource regularly in your function. If it is set to tell you to quit, just do it.
Example with multi-threading:
from threading import Thread, Lock
from time import sleep

class MyFct(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.mutex = Lock()
        self._quit = False

    def stopped(self):
        # read the quit flag under the lock
        self.mutex.acquire()
        val = self._quit
        self.mutex.release()
        return val

    def stop(self):
        # set the quit flag under the lock
        self.mutex.acquire()
        self._quit = True
        self.mutex.release()

    def run(self):
        # generate Fibonacci numbers until asked to stop
        i = 1
        j = 1
        print i
        print j
        while True:
            if self.stopped():
                return
            i, j = j, i + j
            print j

def main_fct():
    t = MyFct()
    t.start()
    sleep(1)      # let the worker run for a second
    t.stop()
    t.join()
    print "Exited"

if __name__ == "__main__":
    main_fct()

You could use a generator for this:
def finished():
    "Define your exit condition here"
    return ...

def count(i=0):
    while not finished():
        yield i
        i += 1

for i in count():
    print i
If you want to change the exit condition you could pass a value back into the generator function and use that value to determine when to exit.
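For instance, a rough sketch of that idea using the generator's send() method; the limit parameter is just an illustrative stand-in for your real exit condition:
def count(i=0, limit=10):
    while i < limit:
        new_limit = yield i
        if new_limit is not None:   # the caller sent us a new exit condition
            limit = new_limit
        i += 1

gen = count()
print(next(gen))       # 0
print(gen.send(100))   # extend the limit to 100 and get the next value
for i in gen:
    print(i)           # keeps going until i reaches the new limit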

As in almost all languages:
while True:
    # check what you want and eventually break
    print nextValue()
The second part of your question is more interesting:
Also, if it is based on time then is there any way I could just extend the time and start it going from that point again rather than having to start again from 0
You can use yield instead of return in the function nextValue(); that turns it into a generator, which keeps its state between values, so you can stop consuming it and resume later from where it left off.
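For example, a hedged sketch of nextValue() as a Fibonacci generator; because the generator keeps its local state between values, you can stop consuming it and pick up again later without restarting from 0:
def nextValue():
    a, b = 0, 1
    while True:
        yield a          # execution pauses here and resumes on the next request
        a, b = b, a + b

values = nextValue()
for value in values:
    # check what you want and eventually break
    if value > 1000:
        break
    print(value)

# later, iterating over the same `values` object resumes where it left off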

If you use a child thread to run the function while the main thread waits for character input, it should work. Just remember to have something that stops the child thread (in the example below, the global runthread).
For example:
import threading, time

runthread = 1

def myfun():
    while runthread:
        print "A"
        time.sleep(.1)

t = threading.Thread(target=myfun)
t.start()
raw_input("")
runthread = 0
t.join()
does just that

If you want to exit based on time, you can use the signal module's alarm(time) function and then catch SIGALRM; there's an example at http://docs.python.org/library/signal.html#example
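For instance, a minimal sketch of the alarm-based approach (Unix only; run_forever() and save_results() are placeholders, and the TimeUp exception is just something for the handler to raise):
import signal

class TimeUp(Exception):
    pass

def on_alarm(signum, frame):
    raise TimeUp()

signal.signal(signal.SIGALRM, on_alarm)   # install the handler
signal.alarm(60)                          # ask the OS to deliver SIGALRM in 60 seconds
try:
    run_forever()                         # the open-ended computation
except TimeUp:
    save_results()                        # clean up when time runs out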
You can let the user interrupt the program in a sane manner by catching KeyboardInterrupt: simply catch the KeyboardInterrupt exception outside your main loop and do whatever cleanup you want.
If you want to continue later where you left off, you will have to add some sort of persistence. I would pickle a data structure to disk that you could read back in to continue the operations.
I haven't tried anything like this, but you could look into something like memoization and caching to disk.
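Putting those two ideas together, here is a rough sketch of what that could look like; generate_more() and the results list are made-up placeholders for whatever your program actually computes:
import pickle

results = []
try:
    while True:
        results.append(generate_more())   # your open-ended computation goes here
except KeyboardInterrupt:
    with open("results.pickle", "wb") as f:
        pickle.dump(results, f)           # save what we have so far

# later, to pick up where you left off:
with open("results.pickle", "rb") as f:
    results = pickle.load(f)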

You could do something like this to generate Fibonacci numbers for 1 second and then stop.
import time

fibonnacci = [1, 1]
stoptime = time.time() + 1  # set stop time to 1 second in the future
while time.time() < stoptime:
    fibonnacci.append(fibonnacci[-1] + fibonnacci[-2])
print "Generated %s numbers, the last one was %s." % (len(fibonnacci), fibonnacci[-1])
I'm not sure how efficient it is to call time.time() on every iteration; depending on what you are doing inside the loop, it might end up eating a lot of the performance.
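If that turns out to matter, one option is to look at the clock only every few hundred iterations, at the cost of overshooting the deadline by up to one batch; a rough sketch:
import time

fibonnacci = [1, 1]
stoptime = time.time() + 1
while time.time() < stoptime:
    for _ in range(500):                   # only check the clock once per 500 appends
        fibonnacci.append(fibonnacci[-1] + fibonnacci[-2])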

Related

Python script is hanging AFTER multithreading

I know there are a few questions and answers related to hanging threads in Python, but my situation is slightly different as the script is hanging AFTER all the threads have been completed. The threading script is below, but obviously the first 2 functions are simplified massively.
When I run the script shown, it works. When I use my real functions, the script hangs AFTER THE LAST LINE. So, all the scenarios are processed (and a message printed to confirm), logStudyData() then collates all the results and writes to a csv. "Script Complete" is printed. And THEN it hangs.
The script with threading functionality removed runs fine.
I have tried enclosing the main script in try...except but no exception gets logged. If I use a debugger with a breakpoint on the final print and then step it forward, it hangs.
I know there is not much to go on here, but short of including the whole 1500-line script, I don't know what else to do. Any suggestions welcome!
def runScenario(scenario):
    # Do a bunch of stuff
    with lock:
        # access global variables
        pass
    pass

def logStudyData():
    # Combine results from all scenarios into a df and write to csv
    pass

def worker():
    global q
    while True:
        next_scenario = q.get()
        if next_scenario is None:
            break
        runScenario(next_scenario)
        print(next_scenario, " is complete")
        q.task_done()

import threading
from queue import Queue

global q, lock

q = Queue()
threads = []
scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']
num_worker_threads = 6
lock = threading.Lock()

for i in range(num_worker_threads):
    print("Thread number ", i)
    this_thread = threading.Thread(target=worker)
    this_thread.start()
    threads.append(this_thread)

for scenario_name in scenario_list:
    q.put(scenario_name)

q.join()
print("q.join completed")
logStudyData()
print("script complete")
As the docs for Queue.get say:
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
In other words, there is no way get can ever return None, except by you calling q.put(None) on the main thread, which you don't do.
Notice that the example directly below those docs does this:
for i in range(num_worker_threads):
    q.put(None)

for t in threads:
    t.join()
The second loop isn't strictly necessary; you can usually get away with not joining the workers. But the first one is absolutely necessary. You need to either do this, or come up with some other mechanism to tell your workers to quit. Without that, your main thread just tries to exit, which means it tries to join every worker, but those workers are all blocked forever on a get that will never return, so your program hangs forever.
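Concretely, a sketch of how the end of the original script could look with that shutdown mechanism added:
q.join()                          # wait until every scenario has been processed
for i in range(num_worker_threads):
    q.put(None)                   # one sentinel per worker, so each blocked get() returns
for t in threads:
    t.join()                      # now the workers have actually exited
logStudyData()
print("script complete")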
Building a thread pool may not be rocket science (if only because rocket scientists tend to need their calculations to be deterministic and hard real-time…), but it's not trivial, either, and there are plenty of things you can get wrong. You may want to consider using one of the two already-built threadpools in the Python standard library, concurrent.futures.ThreadPoolExecutor or multiprocessing.dummy.Pool. This would reduce your entire program to:
import concurrent.futures

def work(scenario):
    runScenario(scenario)
    print(scenario, " is complete")

scenario_list = ['s1','s2','s3','s4','s5','s6','s7','s8','s9','s10','s11','s12']

with concurrent.futures.ThreadPoolExecutor(max_workers=6) as x:
    results = list(x.map(work, scenario_list))
print("q.join completed")
logStudyData()
print("script complete")
Obviously you'll still need a lock around any mutable variables you change inside runScenario—although if you're only using a mutable variable there because you couldn't figure out how to return values to the main thread, that's trivial with an Executor: just return the values from work, and then you can use them like this:
for result in x.map(work, scenario_list):
    do_something(result)

User input to break infinite loop?

I'm working on calculating a bunch of triangles with special properties for a Number Theorist friend. There are infinitely many of these triangles, but they require a lot of computational power to find.
We've got an infinite loop running through different b,d combinations. When the program ends, it calls the go(dict) function to export the triangles it found. Currently, we tell the program at the start what interval of time to run for. This is causing problems when we realize we need the computing power for something else, but the program still has hours to run and we don't want to lose the triangles it has already calculated by exiting the program without running go(dict).
Ideally, we want some user input to cause the program to break the loop, run go(dict) with whatever current version of the dictionary it is holding in memory, then exit. Trying with atexit.register(go, dict) was unsuccessful, as it is called many times within the loop and runs many times when the program is terminated.
(See the abbreviated loop code below)
interval = eval(input("How many hours shall I run for? ")) * 3600
starttime = time.time()
dict = {}
b = start_value
while True:
    for d in range(1, b):
        compute stuff
        if (condition):
            add triangle to dict
    if (time.time() - starttime) > interval:
        go(dict)
        return
    b += 1
This is what exceptions can be used for: you press Ctrl+C to interrupt the process, and your code handles it by saving the results:
while True:
    try:
        # your code here
    except KeyboardInterrupt:
        go(dict)
        break
Note that you can't return from a loop that isn't inside a function, but you can break out of it.
One thing you can do is take over Ctrl+C using except KeyboardInterrupt: when you send an interrupt to the script, that block runs, and in it you can put code to exit cleanly.
Here is an example:
i = 0
try:
    while True:
        i += 1
except KeyboardInterrupt:
    print 'caught INT'
    print i
Using Signals:
import signal

interrupted = False  # Used to break the loop when we send SIGINT

# When SIGINT is received, set interrupted to True
def signal_handler(signal, frame):
    global interrupted
    interrupted = True

# Sets signal_handler to run if a SIGINT was received
signal.signal(signal.SIGINT, signal_handler)

interval = eval(input("How many hours shall I run for? ")) * 3600
starttime = time.time()
dict = {}
b = start_value
while True:
    for d in range(1, b):
        compute stuff
        if (condition):
            add triangle to dict
    if (time.time() - starttime) > interval:
        go(dict)
        break
    if interrupted:
        go(dict)
        break
    b += 1
Now when we hit Ctrl+C, the handler sets interrupted to True, and the next time the loop checks it, it runs go(dict) and breaks out.

python, calling method on main thread from timer callback

I'm very new to Python development, and I need to call a function every x seconds.
So I'm trying to use a timer for that, something like:
def start_working_interval():
    def timer_tick():
        do_some_work()  # needs to be called on the main thread
        timer = threading.Timer(10.0, timer_tick)
        timer.start()
    timer = threading.Timer(10.0, timer_tick)
    timer.start()
The do_some_work() method needs to be called on the main thread, and I think using the timer causes it to execute on a different thread.
So my question is: how can I call this method on the main thread?
I'm not sure what you're trying to achieve, but I played with your code and did this:
import threading
import datetime

def do_some_work():
    print datetime.datetime.now()

def start_working_interval():
    def timer_tick():
        do_some_work()
        timer = threading.Timer(10.0, timer_tick)
        timer.start()
    timer_tick()

start_working_interval()
So basically what I did was start the Timer inside timer_tick(), so it calls itself again after 10 seconds and so on, and I removed the second timer.
I needed to do this too, here's what I did:
import time
import threading

MAXBLOCKINGSECONDS = 5  # maximum time that a new task will have to wait before its presence in the queue gets noticed

class repeater:
    repeatergroup = []  # our only static data member; it holds the current list of the repeaters that need to be serviced

    def __init__(self, callback, interval):
        self.callback = callback
        self.interval = abs(interval)  # because negative makes no sense; probably an assert would be better
        self.reset()
        self.processing = False

    def reset(self):
        self.nextevent = time.time() + self.interval

    def whennext(self):
        return self.nextevent - time.time()  # time until next event

    def service(self):
        if time.time() >= self.nextevent:
            if self.processing:  # or however you want to be re-entrant safe or thread safe
                return 0
            self.processing = True
            self.callback(self)  # just stuff all your args into the class and pull them back out?
            # use this calculation if you don't want slew
            self.nextevent += self.interval
            # use this calculation if you do want slew/don't want backlog
            # self.reset()
            # or put it just before the callback
            self.processing = False
            return 1
        return 0

    # this is the transition code between class and classgroup
    # I had these three as a property getter and setter but it was behaving badly/oddly
    def isenabled(self):
        return self in self.repeatergroup

    def start(self):
        if not (self in self.repeatergroup):
            self.repeatergroup.append(self)
            # another logical place to call reset if you don't want backlog:
            # self.reset()

    def stop(self):
        if self in self.repeatergroup:
            self.repeatergroup.remove(self)

    # group calls; in c++ I'd make these static
    def serviceall(self):  # the VB hacker in me wants to name this doevents(), the c hacker in me wants to name this probe
        ret = 0
        for r in self.repeatergroup:
            ret += r.service()
        return ret

    def minwhennext(self, max):  # this should probably be hidden
        ret = max
        for r in self.repeatergroup:
            ret = min(ret, r.whennext())
        return ret

    def sleep(self, seconds):
        if not isinstance(threading.current_thread(), threading._MainThread):
            # if we're not on the main thread, don't process handlers, just sleep
            time.sleep(seconds)
            return
        endtime = time.time() + seconds   # record when the caller wants control back
        while time.time() <= endtime:     # spin until then
            while self.serviceall() > 0:  # service each member of the group until none need service
                if time.time() >= endtime:
                    return  # break out of the service loop if the caller needs control back already
            # done with servicing for a while; yield control to the os until we have
            # another repeater to service or it's time to return control to the caller.
            # smaller of the caller's requested blocking time and our sanity number
            # (1 min might be fine for some systems, 5 seconds is good for others, 0.25 to 0.03 might be
            # better if there could be video refresh code waiting, 0.15-0.3 seems a common range for
            # software debouncing of hardware buttons)
            minsleeptime = min(endtime - time.time(), MAXBLOCKINGSECONDS)
            minsleeptime = self.minwhennext(minsleeptime)
            time.sleep(max(0, minsleeptime))

###################################################################
# and now some demo code:

def handler1(repeater):
    print("latency is currently {0:0.7}".format(time.time() - repeater.nextevent))
    repeater.count += repeater.interval
    print("Seconds: {0}".format(repeater.count))

def handler2(repeater):  # or self if you prefer
    print("Timed message is: {0}".format(repeater.message))
    if repeater.other.isenabled():
        repeater.other.stop()
    else:
        repeater.other.start()
    repeater.interval += 1

def demo_main():
    counter = repeater(handler1, 1)
    counter.count = 0                  # I'm still new enough to python
    counter.start()
    greeter = repeater(handler2, 2)
    greeter.message = "Hello world."   # that this feels like cheating
    greeter.other = counter            # but it simplifies everything.
    greeter.start()
    print("Currently {0} repeaters in service group.".format(len(repeater.repeatergroup)))
    print("About to yield control for a while")
    greeter.sleep(10)
    print("Got control back, going to do some processing")
    time.sleep(5)
    print("About to yield control for a while")
    counter.sleep(20)  # you can use any repeater to access sleep() but
                       # it will only service those currently enabled.
    # notice how it gets behind but tries to catch up; we could add repeater.reset()
    # at the beginning of a handler to make it ignore missed events, or at the
    # end to let the timing slide, depending on what kind of processing we're doing
    # and what sort of sensitivity there is to time.

# now just replace all your main thread's calls to time.sleep() with calls to mycounter.sleep()
# now just add a repeater.sleep(.01) or a while repeater.serviceall(): pass to any loop that will take too long

demo_main()
There are a couple of odd things left to consider:
Would it be better to separate handlers that you'd prefer to run on the main thread from handlers you don't care about? I later went on to add a threadingstyle property which, depending on its value, would run the handler on the main thread only, on either the main thread or a shared/group thread, or standalone on its own thread. That way longer or more time-sensitive tasks could run without slowing the other threads down as much, or while staying closer to their scheduled time.
I wonder whether, depending on the implementation details of threading, my 'if not main thread: time.sleep(seconds); return' makes it sufficiently more likely to be the main thread's turn that I shouldn't worry about the difference.
(It seems like adding our MAXBLOCKINGSECONDS as the 3rd arg to the sched library could fix its notorious issue of not servicing new events once older events further in the future have already hit the front of the queue.)

Python thread execution blocking other threads

Both threads work when run on their own, but when they are executed together, halt_listener monopolizes the resources and import_1 never gets to execute. The end goal is to have halt_listener listen for a kill message and then set a run variable to false. This worked when I was passing a pipe to halt_listener, but I would prefer a queue.
Here is my code:
import multiprocessing
import time
from threading import Thread

class test_imports:  # Test classes remove
    alive = {'import_1': True, 'import_2': True};

    def halt_listener(self, control_Queue, thread_Name, kill_command):
        while True:
            print ("Checking queue for kill")
            isAlive = control_queue.get()
            print ("isAlive", isAlive)
            if isAlive == kill_command:
                print ("kill listener triggered")
                self.alive[thread_Name] = False;
                return

    def import_1(self, control_Queue, thread_Number):
        print ("Import_1 number %d started") % thread_Number
        halt = test_imports()
        t = Thread(target=halt.halt_listener, args=(control_Queue, 'import_1', 't1kill'))
        count = 0
        t.run()
        global alive
        run = test_imports.alive['import_1'];
        while run:
            print ("Thread type 1 number %d run count %d") % (thread_Number, count)
            count = count + 1
            print ("Test Import_1 ", run)
            run = self.alive['import_1'];
        print ("Killing thread type 1 number %d") % thread_Number
Am I missing something?
The problem is that you're calling t.run(). run isn't a method that starts a thread; run is the actual code that's meant to run on the thread. By calling it directly, you're running it on your thread, and waiting for it to finish.
What you want is t.start().
See the documentation on threading.Thread for details.
While we're at it, there are a few other problems with your code.
First, you don't have a lock around self.alive. You can't change a value (except for a small number of automatically-self-synchronized types like Queue) in one thread and access it in another without a lock. You will often get away with it, but "often" in a multithreaded program just means it won't fail until your big demonstration, and it will then take weeks to figure out how to reproduce before you can even begin fixing it… (In this case, a Condition might make more sense than a Lock, but either way, you need to synchronize on something.)
Meanwhile, looping as fast as possible to poll self.alive['import_1'] is going to burn 100% CPU for no good reason. There's almost always a better way to wait on something (e.g., in this case, if you used a Condition for synchronization, you could also use it for waiting here); in the rare cases when there isn't, you should at least sleep every time through the loop.
alive is actually a class attribute rather than an instance attribute. That's usually not what you want. In fact, you try to access both test_imports.alive and self.alive, but both of those will end up being the class attribute as long as you never assign to it, which makes it more confusing. And then, on top of that, you have a global with the same name, which is just a recipe for extreme confusion.
Also, this looks like Python 2 code, but you're using print as if it were a function in some cases—e.g., print ("isAlive", isAlive). This isn't going to do what you want—instead of printing something like isAlive command, it's going to print something like ('isAlive', 'command'), which is not very pretty. And meanwhile, the extra parentheses in expressions like ("Import_1 number %d started") % thread_Number mean that someone has to read that over a few times to convince themselves that the parentheses aren't actually doing anything.
Finally, why are you creating a separate test_imports instance to call halt_listener on? Clearly the two methods are trying to communicate through attributes on self, but they're not going to do that if they're called on two different objects. Why not just target=self.halt_listener?
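For what it's worth, here is a rough sketch of how those pieces could fit together using t.start() and a threading.Event in place of the shared alive dict; the names are adapted from the question, and this is not a drop-in replacement for the original code:
import time
from threading import Thread, Event

class test_imports:
    def __init__(self):
        self.stop_event = Event()            # thread-safe flag; no manual lock needed

    def halt_listener(self, control_queue, kill_command):
        while True:
            if control_queue.get() == kill_command:
                self.stop_event.set()        # tell import_1 to stop
                return

    def import_1(self, control_queue, thread_number):
        print("Import_1 number %d started" % thread_number)
        t = Thread(target=self.halt_listener, args=(control_queue, 't1kill'))
        t.start()                            # start(), not run(), so it runs concurrently
        count = 0
        while not self.stop_event.is_set():
            count += 1
            time.sleep(0.1)                  # don't spin at 100% CPU while polling
        print("Killing thread type 1 number %d" % thread_number)
        t.join()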

How to read users input when in loop (and without blocking work in this loop)?

I want to do some basic stuff, like toggling a DEBUG variable, printing the values of some variables, etc., on specific keys the user presses, but my program runs in a constant loop, and this loop fires off other threads. How can I do this?
Use threads:
import threading
import time

value = 3

def process():
    while True:
        print(value)
        time.sleep(1)

thread = threading.Thread(target=process)
thread.start()

while True:
    value = input('Enter value: ')
(Output gets kind of messed up here because of both loops printing stuff to the terminal but I think the idea should be clear.)
