On which thread does the callback function get executed every "interval" milliseconds when we schedule a function using the following method?
def glib.timeout_add(interval, callback, ...)
https://developer.gnome.org/pygobject/stable/glib-functions.html#function-glib--timeout-add
In the thread which is running the default main loop.
If it's not documented, you'll either have to read the source code, or print the return value of thread.get_ident() from inside the callback function and compare it to the values printed from inside known threads in your code.
It's possible that the ident won't match any of the other threads, in which case it will be a thread created internally just for the purposes of the callbacks.
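The ident-comparison technique can be sketched with plain threading (this uses the Python 3 spelling `threading.get_ident()`; in Python 2 it was `thread.get_ident()`, and here an ordinary Thread stands in for whatever mechanism, such as a GLib main loop, actually invokes the callback):

```python
import threading

main_ident = threading.get_ident()
seen = {}

def callback():
    # Record which thread runs the callback so it can be
    # compared against known thread idents afterwards.
    seen["callback_ident"] = threading.get_ident()

t = threading.Thread(target=callback)
t.start()
t.join()

# The callback ran in the worker thread, not the main thread.
print(seen["callback_ident"] == main_ident)  # False
```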
I have an AWS Lambda function that is invoked from another function. The first function processes some data and invokes the second when it is finished; n instances of the second function should run at the same time.
For example, the second function takes about 5 seconds per invocation; I want all the invocations to run at the same time they are invoked, for a total run time of about 5 seconds.
Instead, it takes longer than that: each invocation runs one at a time, waiting until the prior one is finished, so the process takes 5*n seconds.
I see that I can scale the function to run up to 1,000 concurrent executions in my region, as stated by AWS. How can I make these invocations run concurrently? I don't need a code example, just a general approach I can look into to fix the problem.
The first function header looks like this: (I have other code that gets the json_file that I left out)
def lambda_handler(event=None, context=None):
    for n in range(len(json_file)):
        response = client.invoke(
            FunctionName='docker-selenium-lambda-prod-demo',
            InvocationType='RequestResponse',
            Payload=json.dumps(json_file[n])
        )
        responseJson = json.load(response['Payload'])
where json_file[n] is being sent to the other function to run.
As you can see in the boto3 docs for the invoke function:
Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType to Event .
If you are using RequestResponse, your code waits until the invoked Lambda has terminated.
You can either change InvocationType to Event, or use something like ThreadPoolExecutor and wait until all executions are finished.
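A minimal sketch of the ThreadPoolExecutor approach; `invoke_one` below is a stand-in for the real boto3 call, which would need your actual client, function name, and payloads:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def invoke_one(payload):
    # Stand-in for the real call:
    # client.invoke(FunctionName='docker-selenium-lambda-prod-demo',
    #               InvocationType='RequestResponse',
    #               Payload=json.dumps(payload))
    return {"ok": True, "echo": payload}

json_file = [{"n": i} for i in range(10)]

# All invocations run in parallel instead of one at a time,
# so total wall time is roughly one invocation, not n of them.
with ThreadPoolExecutor(max_workers=10) as pool:
    responses = list(pool.map(invoke_one, json_file))

print(len(responses))  # 10
```

Note that pool.map preserves input order, so responses[0] corresponds to json_file[0] even though the calls overlap in time.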
In Python, I have two methods. In method A, I receive parameters and append them to a parameter list. In method B, I process the entries of the parameter list and append the results to a log list. At the end of method A, I want to read the log list in a while loop and pick out the result for the parameters that were just passed to A. My question is how to pause A partway through so that method B gets a chance to run; otherwise A loops endlessly.
I added a sleep call, expecting A to yield and B to execute, but it has no effect.
def A():
    try:
        datas = request.get_data()
        data = json.loads(datas)
        global queque_list, log_list
        queque_list.append({"data": data})
    finally:
        while 1:
            sleep(3)
            if len(log_list) > 0:
                for logdata in log_list:
                    if logdata.get('uuid') == uuid:
                        return logdata.get('msg')

def B(task):
    try:
        do(task)
    finally:
        log_list.append({"uuid": uuid, "msg": msg})

def C():
    while True:
        if len(queque_list) > 0:
            task = queque_list.pop(0)
            B(task)

t = threading.Thread(target=C)
t.start()
I expect method A to pause while executing the finally block and wait for method B to finish before continuing. But as it is, method A enters the finally block, method B never executes, and the method loops endlessly.
You can use queue.Queue to send messages between the threads, specifically the put() method to send a message and the get() method to wait for a message in another thread. With this, you can get the threads to work in lock-step.
I'm not sure what you are trying to do, but perhaps you can get away with doing all the work in a single thread for simplicity.
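A minimal sketch of the lock-step pattern with queue.Queue (the names here are illustrative, not taken from the question's code):

```python
import threading
import queue

tasks = queue.Queue()
results = queue.Queue()

def worker():
    # Plays the role of B/C: wait for a task, process it,
    # send the result back.
    while True:
        task = tasks.get()   # blocks until a task arrives
        if task is None:     # sentinel value: shut down cleanly
            break
        results.put({"uuid": task["uuid"], "msg": task["data"].upper()})

t = threading.Thread(target=worker)
t.start()

# Plays the role of A: send a task, then block until the
# matching result arrives -- no sleep/poll loop needed.
tasks.put({"uuid": 1, "data": "hello"})
msg = results.get()["msg"]
print(msg)  # HELLO

tasks.put(None)
t.join()
```

The key point is that get() blocks, so A genuinely waits for B instead of busy-looping over a shared list.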
I am using Pools to kick off worker processes in python3.6. The workers will return True or False after completion, and I was wondering what the difference is between using the AsyncResult returned object or using a callback function to check if the worker returned True or False. From my understanding the callback is called in the main process, the same place I would do the checking anyway.
# Using the AsyncResult way
def check_result(result):
    if result:
        pass  # Successful: do something
    else:
        pass  # Failed

with Pool() as pool:
    result = pool.apply_async(upload, (args, ))
    check_result(result.get())
# Using callbacks
def check_result(result):
    if result:
        pass  # Successful: do something

def err_result(result):
    pass  # Do something

with Pool() as pool:
    pool.apply_async(upload, (args,), callback=check_result, error_callback=err_result)
I see that in python3.6 they allow error_callback, so are these two bits of code equivalent? What are the pros and cons of both?
Thanks
The comparison between AsyncResult and callback is somewhat misleading.
Note that you only have callbacks available for asynchronous methods (returning AsyncResult objects), so there is no 'versus' in this story regarding these things.
When you write check_result(result.get()), you don't pass some AsyncResult-object into check_result, but an already awaited normal result, in your case a boolean value (if not an exception). So it's not a difference between AsyncResult and callback, but between manually calling check_result on a result or registering a callback beforehand.
I see that in python3.6 they allow error_callback, so are these two bits of code equivalent? What are the pros and cons of both?
No, these two snippets are not equivalent. error_callback is an exception handler, your possible False-result won't trigger that, but an exception will.
Your result argument within err_result will be filled with an exception instance in such a case. The difference from your first snippet is that an exception there will blow up in your face as soon as you call result.get(), unless you have enclosed it within a try-except block.
The obvious 'pro' of an error_callback is the omitted try-except block; the 'pro' of the regular callback is likewise reduced code length. Use both only for immediately returning tasks like checking and logging, to avoid blocking the result-handler thread your pool runs.
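A self-contained sketch of the difference (using multiprocessing.pool.ThreadPool here only so the demo runs anywhere without pickling concerns; multiprocessing.Pool behaves the same way, and `upload` is a hypothetical stand-in worker):

```python
from multiprocessing.pool import ThreadPool

def upload(ok):
    # Stand-in worker: returns True on success, raises on failure.
    if not ok:
        raise ValueError("upload failed")
    return True

results, errors = [], []

with ThreadPool(2) as pool:
    # Success path: callback fires with the return value.
    pool.apply_async(upload, (True,), callback=results.append,
                     error_callback=errors.append)
    # Failure path: error_callback fires with the exception instance.
    # With the manual style, result.get() would raise this instead.
    pool.apply_async(upload, (False,), callback=results.append,
                     error_callback=errors.append)
    pool.close()
    pool.join()

# results holds [True]; errors holds the ValueError instance.
```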
To update a widget in time I use the .after() method, usually in the following form:
def update():
    do_something()
    <widget>.after(<delay>, update)
It is my understanding that the widget waits for a certain amount of time and then executes the update() function, at the end of which the widget waits once again before re-executing the function and so on.
This seems to me a lot like recursion. So, the question is: Does .after() actually work by means of recursion or not?
If it does, then there is a limit to the depth of recursion, but the following example should prove that such limit is never reached:
from tkinter import *

counter = 0

def count():
    global counter
    counter += 1
    lbl.config(text=counter)
    root.after(10, count)

root = Tk()
lbl = Label(root, text='0')
lbl.pack()
Button(root, text='Start count', command=count).pack()
root.mainloop()
In my system the limit to the depth of recursion is 1000, but this example goes far beyond that value in a few seconds until I stop it.
Recursion means that the current instance of a function is placed on hold and a new instance is created and run. after works differently, and is not recursion.
You can think of the mainloop as an infinite loop that maintains a todo list. The list has functions and the time that they ought to be run. The mainloop constantly checks the todo list and if an item in the todo list is due to be run, then the mainloop removes the item from the list and runs it. When it's done, it goes back to looping and checking the list. The after method just adds a function to this todo list along with a time to run it.
It is my understanding that the widget waits for a certain amount of time and then executes the update() function, at the end of which the widget waits once again before re-executing the function and so on.
The highlighted section is false. after simply places the function on a queue. It doesn't re-execute anything. mainloop simply pops things off of the "after" queue and runs them once.
So, the question is: Does .after() actually work by means of recursion or not?
No. after should have been named add_job_to_queue. It isn't recursion, it simply places a job on a queue.
If it does, then there is a limit to the depth of recursion, but the following example should prove that such limit is never reached:
def count():
    global counter
    counter += 1
    lbl.config(text=counter)
    root.after(10, count)
The reason no limit is reached is, again, because it's not recursion. When you call count by clicking on a button, it does some work and then it adds one item to the "after" queue. The length of the queue is now one.
When the time comes, mainloop will pop that item off of the queue, making the queue have a length of zero. Then, your code adds itself to the queue, making the length one. When the time comes, mainloop will pop that item off the queue, making the queue have a length of zero. Then, ...
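The todo-list model described above can be sketched as a toy event loop. This is an illustration of the idea, not Tk's actual implementation:

```python
import heapq
import time

# Each entry is (due_time, sequence, function); `after` only appends
# to the list, it never calls the function itself.
todo = []
seq = 0

def after(ms, func):
    global seq
    seq += 1
    heapq.heappush(todo, (time.monotonic() + ms / 1000, seq, func))

calls = 0

def count():
    global calls
    calls += 1
    if calls < 5:
        after(0, count)  # reschedule: queue length goes back to 1

after(0, count)  # like clicking the Start button

# The "mainloop": pop due items and run each once.
# The Python call stack never deepens, so no recursion limit applies.
while todo:
    due, _, func = heapq.heappop(todo)
    time.sleep(max(0.0, due - time.monotonic()))
    func()

print(calls)  # 5
```

Note that count returns completely before the loop ever pops the next entry, which is exactly why the queue oscillates between length one and zero instead of a call stack growing.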
There's no recursion at all in your example, since count() is not called from itself (you're just telling Tk that it needs to call your function after 10ms) but invoked by Tk's main loop ;).
In my program:
keyboard = tk.Tk()

def readsm_s():
    ...
    keyboard.after(30, readsm_s)
readsm_s() is re-called many times, and after that there is an error: 'maximum recursion depth exceeded while calling a Python object'.
I found that Python's default recursion depth is limited (the default is 1000).
https://www.codestudyblog.com/cs2112pya/1208015041.html
I took a look at the Python source code, and I don't think .after works recursively. It schedules the callback through the Tcl library.
def after(self, ms, func=None, *args):
    """Call function once after given time.

    MS specifies the time in milliseconds. FUNC gives the
    function which shall be called. Additional parameters
    are given as parameters to the function call. Return
    identifier to cancel scheduling with after_cancel."""
    if not func:
        # I'd rather use time.sleep(ms*0.001)
        self.tk.call('after', ms)
    else:
        def callit():
            try:
                func(*args)
            finally:
                try:
                    self.deletecommand(name)
                except TclError:
                    pass
        callit.__name__ = func.__name__
        name = self._register(callit)
        return self.tk.call('after', ms, name)
Modules/_tkinter.c registers the function and calls it. The Tk class is a builtin class also located in the same file. The API works by calling Tcl library functions.
The function bound to tk.call is Tkapp_Call:
{"call", Tkapp_Call, METH_VARARGS},
The comments for this function explain that this just calls Tcl functions.
/* This is the main entry point for calling a Tcl command.
It supports three cases, with regard to threading:
1. Tcl is not threaded: Must have the Tcl lock, then can invoke command in
the context of the calling thread.
2. Tcl is threaded, caller of the command is in the interpreter thread:
Execute the command in the calling thread. Since the Tcl lock will
not be used, we can merge that with case 1.
3. Tcl is threaded, caller is in a different thread: Must queue an event to
the interpreter thread. Allocation of Tcl objects needs to occur in the
interpreter thread, so we ship the PyObject* args to the target thread,
and perform processing there. */
Additionally, the arguments are freed when this is called at the end of the function: Tkapp_CallDeallocArgs(objv, objStore, objc);, so if the arguments are recursively used, they would not have been freed after 1 call.
I am using a thread in my Python card application.
Whenever I press the refresh button, I call a function in a thread; that main function in turn calls another function inside it.
What I want is that whenever the child function ends, the thread is killed or stopped, without closing the application or pressing Ctrl+C.
I started the thread like this:
def on_refresh_mouseClick(self, event):
    thread.start_new_thread(self.readalways, ())
In the "readalways" function I use a while loop; whenever the condition is satisfied, it calls the continuesread() function. Check it:
def readalways(self):
    while 1:
        cardid = self.parent.uhf.readTagId()
        print "the tag id is", cardid
        self.status = self.parent.db.checktagid(cardid)
        if len(self.status) != 0:
            break
    print "the value is", self.status[0]['id']
    self.a = self.status[0]['id']
    self.continuesread()

def continuesread(self):
    .......
    .......
After this continuesread function, the values in the thread should be cleared, because if I click the refresh button again a new thread starts, but some of the values still come from the old thread.
So I want to kill the old thread once it completes the continuesread function.
Please note that different threads from the same process share their memory, e.g. when you access self.status, you (probably) manipulate an object shared within the whole process. Thus, even if your threads are killed when finishing continuesread (which they probably are), the manipulated object's state will still remain the same.
You could either
hold the status in a local variable instead of an attribute of self,
initialize those attributes when entering readalways,
or save this state in the thread-local storage of a thread object, which is not shared (see the threading.local documentation).
The first one seems to be the best as far as I can see.
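For completeness, a sketch of the third option, thread-local storage: each thread sees its own copy of the attribute, so a new click cannot observe values left over from an earlier thread (the names here are illustrative, not from the question's code):

```python
import threading

local = threading.local()
observed = {}

def readalways(name, value):
    # Each thread gets its own `local.status`; assignments here are
    # invisible to every other thread, unlike attributes on self.
    local.status = value
    observed[name] = local.status

t1 = threading.Thread(target=readalways, args=("first", "old"))
t2 = threading.Thread(target=readalways, args=("second", "new"))
t1.start(); t1.join()
t2.start(); t2.join()

print(observed)  # {'first': 'old', 'second': 'new'}
```

The second thread never sees the first thread's "old" value, which is exactly the leakage the question describes with shared attributes on self.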