To update a widget periodically I use the .after() method, usually in the following form:
def update():
    do_something()
    <widget>.after(<delay>, update)
It is my understanding that the widget waits for a certain amount of time and then executes the update() function, at the end of which the widget waits once again before re-executing the function and so on.
This seems to me a lot like recursion. So, the question is: Does .after() actually work by means of recursion or not?
If it does, then there is a limit to the depth of recursion, but the following example should prove that such limit is never reached:
from tkinter import *

counter = 0

def count():
    global counter
    counter += 1
    lbl.config(text=counter)
    root.after(10, count)

root = Tk()
lbl = Label(root, text='0')
lbl.pack()
Button(root, text='Start count', command=count).pack()
root.mainloop()
On my system the recursion depth limit is 1000, but in this example the counter goes far beyond that value within a few seconds and keeps running until I stop it.
Recursion means that the current instance of a function is placed on hold and a new instance is created and run. after works differently, and is not recursion.
You can think of the mainloop as an infinite loop that maintains a todo list. The list has functions and the time that they ought to be run. The mainloop constantly checks the todo list and if an item in the todo list is due to be run, then the mainloop removes the item from the list and runs it. When it's done, it goes back to looping and checking the list. The after method just adds a function to this todo list along with a time to run it.
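A rough sketch of that idea (purely illustrative, not Tk's real implementation) could look like this: after just appends an entry to the list, and the loop runs due entries one at a time, so the call stack never grows no matter how often a callback reschedules itself.

import time

# Illustrative toy "todo list" loop in the spirit of mainloop.
# Each entry is (due_time, function); "after" just appends an entry.
todo = []

def after(delay_ms, func):
    todo.append((time.time() + delay_ms / 1000.0, func))

def mainloop():
    while True:
        now = time.time()
        due = [item for item in todo if item[0] <= now]
        for item in due:
            todo.remove(item)   # pop it off the list...
            item[1]()           # ...and run it once; it may call after() again
        time.sleep(0.001)       # keep checking the list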
It is my understanding that the widget waits for a certain amount of time and then executes the update() function, at the end of which the widget waits once again before re-executing the function and so on.
The quoted understanding is false. after simply places the function on a queue. It doesn't re-execute anything. mainloop simply pops things off of the "after" queue and runs them once.
So, the question is: Does .after() actually work by means of recursion or not?
No. after should have been named add_job_to_queue. It isn't recursion, it simply places a job on a queue.
If it does, then there is a limit to the depth of recursion, but the following example should prove that such limit is never reached:
def count():
    global counter
    counter += 1
    lbl.config(text=counter)
    root.after(10, count)
The reason no limit is reached is, again, because it's not recursion. When you call count by clicking on a button, it does some work and then it adds one item to the "after" queue. The length of the queue is now one.
When the time comes, mainloop will pop that item off of the queue, making the queue have a length of zero. Then, your code adds itself to the queue, making the length one. When the time comes, mainloop will pop that item off the queue, making the queue have a length of zero. Then, ...
There's no recursion at all in your example, since count() is not called from itself (you're just telling Tk that it needs to call your function after 10ms) but invoked by Tk's main loop ;).
In my program

keyboard = tk.Tk()

def readsm_s():
    ...
    keyboard.after(30, readsm_s)

readsm_s() is called again many times, and after a while I get the error 'maximum recursion depth exceeded while calling a Python object'.
I found that Python's recursion depth is limited by default (the default is 1000).
https://www.codestudyblog.com/cs2112pya/1208015041.html
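For reference, that limit can be inspected and, with care, raised through the standard sys module. It only matters for genuinely recursive calls; a callback rescheduled with after never deepens the call stack.

import sys

print(sys.getrecursionlimit())   # typically 1000 by default
sys.setrecursionlimit(2000)      # raise it, at the cost of allowing a deeper Python stack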
I took a look at the Python source code, and I don't think .after works recursively. It starts new threads using the Tcl library.
def after(self, ms, func=None, *args):
    """Call function once after given time.

    MS specifies the time in milliseconds. FUNC gives the
    function which shall be called. Additional parameters
    are given as parameters to the function call. Return
    identifier to cancel scheduling with after_cancel."""
    if not func:
        # I'd rather use time.sleep(ms*0.001)
        self.tk.call('after', ms)
    else:
        def callit():
            try:
                func(*args)
            finally:
                try:
                    self.deletecommand(name)
                except TclError:
                    pass
        callit.__name__ = func.__name__
        name = self._register(callit)
        return self.tk.call('after', ms, name)
Modules/_tkinter.c registers the function and calls it. The Tk class is a builtin class also located in the same file. The API works by calling Tcl library functions.
The function bound to tk.call is Tkapp_Call:
{"call", Tkapp_Call, METH_VARARGS},
The comments for this function explain that this just calls Tcl functions.
/* This is the main entry point for calling a Tcl command.
It supports three cases, with regard to threading:
1. Tcl is not threaded: Must have the Tcl lock, then can invoke command in
the context of the calling thread.
2. Tcl is threaded, caller of the command is in the interpreter thread:
Execute the command in the calling thread. Since the Tcl lock will
not be used, we can merge that with case 1.
3. Tcl is threaded, caller is in a different thread: Must queue an event to
the interpreter thread. Allocation of Tcl objects needs to occur in the
interpreter thread, so we ship the PyObject* args to the target thread,
and perform processing there. */
Additionally, the arguments are freed at the end of the function by the call Tkapp_CallDeallocArgs(objv, objStore, objc); so if the arguments were used recursively, they could not have been freed after a single call.
Related
I'm very new to python development, I need to call a function every x seconds.
So I'm trying to use a timer for that, something like:
def start_working_interval():
    def timer_tick():
        do_some_work()  # needs to be called on the main thread
        timer = threading.Timer(10.0, timer_tick)
        timer.start()
    timer = threading.Timer(10.0, timer_tick)
    timer.start()
The do_some_work() method needs to be called on the main thread, and I think using the timer causes it to execute on a different thread.
So my question is: how can I call this method on the main thread?
I'm not sure what you're trying to achieve, but I played with your code and did this:
import threading
import datetime

def do_some_work():
    print datetime.datetime.now()

def start_working_interval():
    def timer_tick():
        do_some_work()
        timer = threading.Timer(10.0, timer_tick)
        timer.start()
    timer_tick()

start_working_interval()
So basically what I did was to keep the Timer inside timer_tick(), so it calls itself again after 10 seconds and so on, but I removed the second timer.
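Note that the Timer callback still runs on a worker thread, not the main thread. One rough sketch of getting the work back onto the main thread (not from the original answer; only do_some_work comes from the question, the jobs queue and the polling loop are assumed) is to have the timer only post a message on a queue and let the main thread consume it:

import threading
import Queue  # 'queue' in Python 3
import datetime

jobs = Queue.Queue()

def do_some_work():
    print datetime.datetime.now()

def timer_tick():
    jobs.put(do_some_work)                      # hand the work to the main thread
    threading.Timer(10.0, timer_tick).start()   # reschedule the next tick

timer_tick()

while True:         # main thread's loop
    job = jobs.get()  # blocks until a tick arrives
    job()             # runs on the main thread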
I needed to do this too, here's what I did:
import time
import threading

MAXBLOCKINGPERIOD = 5  # maximum time that a new task will have to wait before its presence in the queue gets noticed.

class repeater:
    repeatergroup = []  # our only static data member; it holds the current list of the repeaters that need to be serviced

    def __init__(self, callback, interval):
        self.callback = callback
        self.interval = abs(interval)  # because negative makes no sense; probably assert would be better.
        self.reset()
        self.processing = False

    def reset(self):
        self.nextevent = time.time() + self.interval

    def whennext(self):
        return self.nextevent - time.time()  # time until next event

    def service(self):
        if time.time() >= self.nextevent:
            if self.processing:  # or however you want to be re-entrant safe or thread safe
                return 0
            self.processing = True
            self.callback(self)  # just stuff all your args into the class and pull them back out?
            # use this calculation if you don't want slew
            self.nextevent += self.interval
            # use this calculation if you do want slew/don't want backlog
            #self.reset()
            # or put it just before the callback
            self.processing = False
            return 1
        return 0

    # this is the transition code between class and classgroup
    # I had these three as a property getter and setter but it was behaving badly/oddly
    def isenabled(self):
        return (self in self.repeatergroup)

    def start(self):
        if not (self in self.repeatergroup):
            self.repeatergroup.append(self)
            # another logical place to call reset if you don't want backlog:
            #self.reset()

    def stop(self):
        if (self in self.repeatergroup):
            self.repeatergroup.remove(self)

    # group calls; in C++ I'd make these static
    def serviceall(self):  # the VB hacker in me wants to name this doevents(), the C hacker in me wants to name it probe
        ret = 0
        for r in self.repeatergroup:
            ret += r.service()
        return ret

    def minwhennext(self, max):  # this should probably be hidden
        ret = max
        for r in self.repeatergroup:
            ret = min(ret, r.whennext())
        return ret

    def sleep(self, seconds):
        if not isinstance(threading.current_thread(), threading._MainThread):
            # if we're not on the main thread, don't process handlers, just sleep.
            time.sleep(seconds)
            return
        endtime = time.time() + seconds  # record when the caller wants control back
        while time.time() <= endtime:  # spin until then
            while self.serviceall() > 0:  # service each member of the group until none need service
                if (time.time() >= endtime):
                    return  # break out of the service loop if the caller needs control back already
            # done with servicing for a while, yield control to the os until we have
            # another repeater to service or it's time to return control to the caller
            minsleeptime = min(endtime - time.time(), MAXBLOCKINGPERIOD)  # smaller of the caller's requested blocking time and our sanity number (1 min might be fine for some systems, 5 seconds is good for others, 0.25 to 0.03 might be better if there could be video refresh code waiting, 0.15-0.3 seems a common range for software debouncing of hardware buttons)
            minsleeptime = self.minwhennext(minsleeptime)
            time.sleep(max(0, minsleeptime))

###################################################################
# and now some demo code:

def handler1(repeater):
    print("latency is currently {0:0.7}".format(time.time() - repeater.nextevent))
    repeater.count += repeater.interval
    print("Seconds: {0}".format(repeater.count))

def handler2(repeater):  # or self if you prefer
    print("Timed message is: {0}".format(repeater.message))
    if repeater.other.isenabled():
        repeater.other.stop()
    else:
        repeater.other.start()
    repeater.interval += 1

def demo_main():
    counter = repeater(handler1, 1)
    counter.count = 0                 # I'm still new enough to python
    counter.start()
    greeter = repeater(handler2, 2)
    greeter.message = "Hello world."  # that this feels like cheating
    greeter.other = counter           # but it simplifies everything.
    greeter.start()
    print("Currently {0} repeaters in service group.".format(len(repeater.repeatergroup)))
    print("About to yield control for a while")
    greeter.sleep(10)
    print("Got control back, going to do some processing")
    time.sleep(5)
    print("About to yield control for a while")
    counter.sleep(20)  # you can use any repeater to access sleep() but
                       # it will only service those currently enabled.
    # notice how it gets behind but tries to catch up; we could add repeater.reset()
    # at the beginning of a handler to make it ignore missed events, or at the
    # end to let the timing slide, depending on what kind of processing we're doing
    # and what sort of sensitivity there is to time.

# now just replace all your main thread's calls to time.sleep() with calls to mycounter.sleep()
# now just add a repeater.sleep(.01) or a "while repeater.serviceall(): pass" to any loop that will take too long.

demo_main()
There are a couple of odd things left to consider:
Would it be better to separate handlers that you'd prefer to run on the main thread from handlers you don't care about? I later went on to add a threadingstyle property which, depending on its value, would run a handler on the main thread only, on either the main thread or a shared/group thread, or standalone on its own thread. That way longer or more time-sensitive tasks could run without slowing the other threads down as much, or closer to their scheduled time.
I wonder whether, depending on the implementation details of threading, my 'if not main thread: time.sleep(seconds); return' effectively makes it sufficiently more likely to be the main thread's turn that I shouldn't worry about the difference.
(It seems like adding our MAXBLOCKINGPERIOD as the 3rd arg to the sched library could fix its notorious issue of not servicing new events once older events further in the future have already hit the front of the queue.)
I am using a thread in my Python card application.
Whenever I press the refresh button I call a function on a thread; when that function is called, another function is called from inside the main function.
What I want is for the thread to be killed or stopped when the child function ends, without closing the application or pressing Ctrl+C.
I started the thread like this:
def on_refresh_mouseClick(self, event):
    thread.start_new_thread(self.readalways, ())
in the "readalways" function i am using while loop, in that while loop whenever the condition satisfies it will call continuousread() function. check it:
def readalways(self):
    while 1:
        cardid = self.parent.uhf.readTagId()
        print "the tag id is", cardid
        self.status = self.parent.db.checktagid(cardid)
        if len(self.status) != 0:
            break
    print "the value is", self.status[0]['id']
    self.a = self.status[0]['id']
    self.continuesread()

def continuesread(self):
    .......
    .......
After this continuesread function finishes, the values held by the thread should be cleared.
Because if I click the refresh button again, a new thread is started but some of the values are coming from the old thread.
So I want to kill the old thread when it completes the continuesread function.
Please note that different threads from the same process share their memory; e.g. when you access self.status, you (probably) manipulate an object shared within the whole process. Thus, even if your threads are killed when finishing continuesread (which they probably are), the manipulated object's state will still remain the same.
You could either
hold the status in a local variable instead of an attribute of self,
initialize those attributes when entering readalways,
or save this state in the local storage of a thread object, which is not shared (see documentation).
The first one seems to be the best as far as I can see; a minimal sketch of it follows.
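For illustration, here is a rough sketch of that first option applied to the readalways method from the question. Passing the value to continuesread as an argument is my own assumption; the original continuesread takes no parameters.

def readalways(self):
    status = []  # local to this call, so a later click starts with a clean slate
    while 1:
        cardid = self.parent.uhf.readTagId()
        print "the tag id is", cardid
        status = self.parent.db.checktagid(cardid)
        if len(status) != 0:
            break
    print "the value is", status[0]['id']
    self.continuesread(status[0]['id'])  # assumed signature: pass the value instead of storing it on self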
On which thread does the callback function get executed after every "interval" milliseconds when we schedule a function using the following method?
def glib.timeout_add(interval, callback, ...)
https://developer.gnome.org/pygobject/stable/glib-functions.html#function-glib--timeout-add
In the thread which is running the default main loop.
If it's not documented, you'll either have to read the source code, or you can print out the return value from thread.get_ident() from inside the callback function and compare it to values printed from inside known threads within your code.
It's possible that the ident won't match any of the other threads, in which case it will be a thread created internally just for the purposes of the callbacks.
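A quick way to check this yourself, as a sketch assuming PyGObject under Python 3 (GLib.timeout_add and GLib.MainLoop correspond to the glib functions linked above), is to print the thread ident from both places and compare:

import threading
from gi.repository import GLib

print("main thread ident: {0}".format(threading.get_ident()))

def callback():
    # runs in whichever thread is iterating the default main loop
    print("callback thread ident: {0}".format(threading.get_ident()))
    return True  # keep the timeout active; return False to stop it

GLib.timeout_add(1000, callback)
GLib.MainLoop().run()  # run the default main loop on this (the main) thread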
I've got an issue with working with the threading class within a Tkinter GUI. On initiating the Tkinter GUI, I create new Threading & Queue objects with a daemon and start it. In the Tkinter GUI, I have a button that calls an internal method. This method then calls put on the Queue object and is posted below. The Threading object performs all the necessary actions that I expect.
def my_method_threaded(self, my_name):
    try:
        self.queue.put(("test", dict(name=my_name)))
        self.label_str_var.set('')
        self.queue.join()
    except:
        self.error_out(msg=traceback.format_exc())
However, I am encountering an issue AFTER it has finished. If I call self.queue.join(), then the set call is never executed and the app freezes after the thread has completed its task. If I comment out the join() command, the set call IS executed, but the button only works the first time; after that it does nothing (I am tracking what the run() method is doing using a logger; it is only ever called the first time).
I am assuming there is an issue with calling join() and the Tkinter loop, which is why the first issue occurs. Can anyone shed any light on the second issue? If you need more code, then let me know.
Edit: A second issue I've just noticed is that the while True loop executes my action twice even though I have called self.queue.task_done(). Code for the run method is below:
def run(self):
    args = self._queue.get()
    my_name = args[1]["name"]
    while True:
        if my_name == "Barry":
            # calls a static method elsewhere
            self.queue.task_done()
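For what it's worth, a common way to avoid this kind of freeze is to never call queue.join() on the Tk thread and instead poll for results with after(), echoing the first question above. A rough sketch follows; self.root, self.results and poll_results are names I am assuming, while self.queue, self.label_str_var and my_method_threaded come from the question.

import queue  # the results queue below is assumed to be a queue.Queue created alongside self.queue

def my_method_threaded(self, my_name):
    self.queue.put(("test", dict(name=my_name)))  # hand the work to the worker thread
    self.label_str_var.set('')
    self.root.after(100, self.poll_results)       # poll from the Tk loop instead of blocking on queue.join()

def poll_results(self):
    try:
        result = self.results.get_nowait()        # hypothetical second queue filled by the worker's run()
    except queue.Empty:
        self.root.after(100, self.poll_results)   # nothing yet; check again in 100 ms
    else:
        self.label_str_var.set(str(result))       # safe: this runs on the Tk thread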
I've got three different generators, which yield data from the web. Therefore, each iteration may take a while until it's done.
I want to mix the calls to the generators, and thought about roundrobin (Found here).
The problem is that every call is blocked until it's done.
Is there a way to loop through all the generators at the same time, without blocking?
You can do this with the iter() method on my ThreadPool class.
pool.iter() yields threaded function return values until all of the decorated+called functions finish executing. Decorate all of your async functions, call them, then loop through pool.iter() to catch the values as they happen.
Example:
import time
from threadpool import ThreadPool

pool = ThreadPool(max_threads=25, catch_returns=True)

# decorate any functions you need to aggregate
# if you're pulling a function from an outside source
# you can still say 'func = pool(func)' or 'pool(func)()'
@pool
def data(ID, start):
    for i in xrange(start, start + 4):
        yield ID, i
        time.sleep(1)

# each of these calls will spawn a thread and return immediately
# make sure you do either pool.finish() or pool.iter()
# otherwise your program will exit before the threads finish
data("generator 1", 5)
data("generator 2", 10)
data("generator 3", 64)

for value in pool.iter():
    # this will print the generators' return values as they yield
    print value
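If the threadpool module above isn't available, roughly the same effect can be had with just the standard library: run each generator on its own thread and funnel whatever they yield into a single queue. This is a sketch under that assumption, reusing the data generator from the example above.

import threading
import Queue  # 'queue' in Python 3
import time

def data(ID, start):
    for i in xrange(start, start + 4):
        yield ID, i
        time.sleep(1)

results = Queue.Queue()
_done = object()  # sentinel marking one generator as exhausted

def drain(gen):
    for value in gen:
        results.put(value)
    results.put(_done)

gens = [data("generator 1", 5), data("generator 2", 10), data("generator 3", 64)]
for g in gens:
    threading.Thread(target=drain, args=(g,)).start()

finished = 0
while finished < len(gens):
    value = results.get()  # values arrive as soon as any generator yields
    if value is _done:
        finished += 1
    else:
        print value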
In short, no: there's no good way to do this without threads.
Sometimes ORMs are augmented with some kind of peek function or callback that will signal when data is available. Otherwise, you'll need to spawn threads in order to do this. If threads are not an option, you might try switching out your database library for an asynchronous one.