dictionary changed size during iteration in multithreading app - python

I do not know how to solve this problem:
Traceback (most recent call last):
  File "/usr/local/cabinet_dev/cabinet/lib/python3.4/site-packages/eventlet/hubs/hub.py", line 458, in fire_timers
    timer()
  File "/usr/local/cabinet_dev/cabinet/lib/python3.4/site-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/usr/local/cabinet_dev/cabinet/lib/python3.4/site-packages/eventlet/greenthread.py", line 218, in main
    result = function(*args, **kwargs)
  File "./monitor.py", line 148, in caughtBridge
    for call in self.active.keys():
RuntimeError: dictionary changed size during iteration
In the code below:
def caughtBridge(self):
    while True:
        event = self.bridgeQueue.get()
        uniqueid1 = str(event.headers.get('Uniqueid1'))
        uniqueid2 = str(event.headers.get('Uniqueid2'))
        for call in self.active.keys():
            if self.active[call]['uniqueid'] == uniqueid1:
                self.active[call]['uniqueid2'] = uniqueid2
            if self.active[call]['uniqueid'] == uniqueid1:
                for listener in self.listeners:
                    for number in listener.getNumbers():
                        if number == self.active[call]['exten']:
                            if not self.active[call]['answered']:
                                self.sendEvent({"status": "bridge",
                                                "id": self.active[call]['uniqueid'],
                                                "number": self.active[call]['exten']},
                                               listener.getRoom())
                                self.__callInfo(self.active[call], listener.getRoom())
                                self.active[call]['answered'] = True
        self.bridgeQueue.task_done()

Use a copy of self.active.keys(), for example:
for call in list(self.active.keys()):
It isn't clear from your code whether you add or remove dict entries elsewhere.
In case of adding, the iterating thread will not see the new entries, since it walks the copied key list.
In case of removing, the current thread can fail with a KeyError, which you have to guard against.
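To see both effects in isolation, a quick toy demonstration (my own, not the asker's code):
>>> d = {'a': 1}
>>> for k in d:          # iterating the live view
...     d['b'] = 2       # adding while iterating
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
>>> d = {'a': 1, 'b': 2}
>>> for k in list(d):    # iterating a snapshot is safe
...     if k in d:       # guard: another thread may have removed k already
...         del d[k]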
For example:
for call in list(self.active.keys()):
    # <lock that call to prevent removal>
    if call in self.active:
        ...
        self.active[call]['answered'] = True
    else:
        # call removed, do nothing
        pass
    # <unlock that call so other threads can do whatever they need>
self.bridgeQueue.task_done()
Read about lock objects in the Python 3 documentation: https://docs.python.org/3/library/threading.html#lock-objects
Basically, implement a pair of methods self.lock(call) and self.unlock(call), for instance:
Untested code!
To prevent deadlocks you have to guarantee that self.unlock(call) will always be reached!
import threading

class xxx:
    def __init__(self, ...):
        self._lock = threading.Lock()
        # init all entries with self.active[call]['lock'] = False

    def lock(self, call):
        # self._lock is a threading.Lock instance
        # it has to be the same object for all threads
        with self._lock:
            if call in self.active and not self.active[call]['lock']:
                self.active[call]['lock'] = True
                return True
            else:
                return False

    def unlock(self, call):
        with self._lock:
            if call in self.active:  # the call may have been removed meanwhile
                self.active[call]['lock'] = False
# Usage:
for call in list(self.active.keys()):
    if self.lock(call):
        ...
        self.active[call]['answered'] = True
        self.unlock(call)
    else:
        # call removed, do nothing
        pass
self.bridgeQueue.task_done()
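Putting the pieces together, a self-contained sketch of the whole idiom (my own illustration with a hypothetical CallMonitor class, not the asker's monitor code):
import threading

class CallMonitor(object):
    def __init__(self):
        self._lock = threading.Lock()   # one lock shared by all threads
        self.active = {}                # call id -> call state

    def add_call(self, call, uniqueid):
        with self._lock:
            self.active[call] = {'uniqueid': uniqueid, 'answered': False, 'lock': False}

    def remove_call(self, call):
        with self._lock:
            self.active.pop(call, None)

    def lock(self, call):
        with self._lock:
            if call in self.active and not self.active[call]['lock']:
                self.active[call]['lock'] = True
                return True
            return False

    def unlock(self, call):
        with self._lock:
            if call in self.active:
                self.active[call]['lock'] = False

    def mark_answered(self, uniqueid):
        # iterate over a snapshot; entries may appear or vanish concurrently
        for call in list(self.active.keys()):
            if self.lock(call):
                try:
                    if self.active[call]['uniqueid'] == uniqueid:
                        self.active[call]['answered'] = True
                finally:
                    self.unlock(call)   # always reached, so no deadlock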


Purpose of python yield when not used in iterator

I've inherited some fairly buggy code from another project. One of the functions is a callback (the draw_ui method) from a library that has a yield statement in it. I'm wondering what the purpose of a yield in Python is if you're not using it in an iterator context to return a value. What possible benefit could it have?
def draw_ui(self, graphics):
    self._reset_components()
    imgui.set_next_window_size(200, 200, imgui.ONCE)
    if imgui.begin("Entity"):
        if not self._selected:
            imgui.text("No entity selected")
        else:
            imgui.text(self._selected.name)
            yield
    imgui.end()  # end entity window
When a function has an empty yield statement, it returns None on the first iteration, so you can say the function acts as a generator that can be iterated only once and yields None:
def foo():
    yield

>>> f = foo()
>>> print(next(f))
None
>>> print(next(f))
Traceback (most recent call last):
  File "<input>", line 1, in <module>
StopIteration
That's what an empty yield does. When a function has an empty yield between two blocks of code, the code before the yield executes on the first iteration, and the code after the yield executes on the second iteration:
def foo():
    print('--statement before yield--')
    yield
    print('--statement after yield--')

>>> f = foo()
>>> next(f)
--statement before yield--
>>> next(f)
--statement after yield--
Traceback (most recent call last):
  File "<input>", line 1, in <module>
StopIteration
So it allows you to pause the execution of a function in the middle. However, it raises a StopIteration exception on the second iteration, because the function doesn't actually yield anything then; to avoid this, you can pass a default value to next():
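For example (completing the example above with the two-argument form of next()):
>>> f = foo()
>>> next(f, None)
--statement before yield--
>>> next(f, None)
--statement after yield--
Neither call raises StopIteration: once the generator is exhausted, the default None is returned instead.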
Looking at your code, your function is also doing the same thing
def draw_ui(self, graphics):
    self._reset_components()
    imgui.set_next_window_size(200, 200, imgui.ONCE)
    if imgui.begin("Entity"):
        if not self._selected:
            imgui.text("No entity selected")
        else:
            imgui.text(self._selected.name)
            yield        # <--------------
    imgui.end()
So when calling the function draw_ui, if control goes into the else block, the line outside it, i.e. imgui.end(), is not executed until the second iteration.
This type of implementation is generally meant to be driven by a context manager; compare the following snippet, copied from the contextlib.contextmanager documentation:
from contextlib import contextmanager

@contextmanager
def managed_resource(*args, **kwds):
    # Code to acquire resource, e.g.:
    resource = acquire_resource(*args, **kwds)
    try:
        yield resource
    finally:
        # Code to release resource, e.g.:
        release_resource(resource)

>>> with managed_resource(timeout=3600) as resource:
...     # Resource is released at the end of this block,
...     # even if code in the block raises an exception
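Relating that back to draw_ui (a sketch of my own, not from the original answer — it assumes the same imgui binding and invents an entity_window helper): the begin/end pairing could be wrapped as a context manager so that imgui.end() always runs:
from contextlib import contextmanager

@contextmanager
def entity_window():
    # hypothetical helper mirroring draw_ui's structure
    imgui.set_next_window_size(200, 200, imgui.ONCE)
    imgui.begin("Entity")
    try:
        yield  # the caller's drawing code runs here
    finally:
        imgui.end()  # always close the window, even on exceptions

# usage:
# with entity_window():
#     imgui.text("No entity selected")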

How to add optional arguments in a python class?

I am trying to call different functions based on the value of rb_selection: func1 if rb_selection is 0, and func2 if it is 1. The two functions take different sets of arguments.
I do not need the folder argument (func2's value) when I call func1, and likewise I do not need the batch and term arguments (func1's values) when I call func2.
When I try to call the second function it throws the error below, because the values for batch and term were not passed.
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\Himajak\Anaconda3\lib\tkinter\__init__.py", line 1705, in __call__
    return self.func(*args)
  File "<ipython-input-13-02b5f954b815>", line 122, in tb_click
    ThreadedTask(self.queue,self.batch_name,self.term_name,self.course,self.rb_selection,self.folder).start()
AttributeError: 'GUI' object has no attribute 'batch_name'
Code looks similar to this:
class class1():
    def def1(self):
        self.queue = queue.Queue()
        ThreadedTask(self.queue, self.rb_selection, self.batch_name, self.folder).start()
        # self.master.after(10, self.process_queue)

class class2(threading.Thread):
    def __init__(self, queue, rb_selection, batch_name, term_name, folder):
        threading.Thread.__init__(self)
        self.queue = queue
        self.rb_selection = rb_selection
        self.batch = batch_name
        self.term = term_name
        self.folder = folder

    def func1(self, batch, term):
        time.sleep(5)
        print("Function 1 reached")
        print(self.batch, self.term)

    def func2(self, folder):
        time.sleep(5)
        print("Function 2 reached")
        print(self.folder)

    def run(self):
        time.sleep(0)  # Simulate long running process
        if self.rb_selection == '0':
            self.func1(self.batch, self.term)
        elif self.rb_selection == '1':
            self.func2(self.folder)
        self.queue.put("Task finished")
Please suggest on how to resolve this issue, thanks in advance!
There is no separate concept of optional arguments in Python; you give a parameter a default value when defining the function, like:
def __init__(self, queue, rb_selection, term_name, folder, batch_name="default batch name"):
That way you need not pass batch_name when creating the instance.
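A minimal sketch of that idea applied to the class above (my own outline with trimmed, partly hypothetical names — not your exact code): give every branch-specific parameter a default, and let run() read only the values its branch needs.
import queue
import threading

class ThreadedTask(threading.Thread):
    # branch-specific parameters all get defaults, so callers
    # pass only the ones their branch actually needs
    def __init__(self, q, rb_selection, batch_name=None, term_name=None, folder=None):
        threading.Thread.__init__(self)
        self.queue = q
        self.rb_selection = rb_selection
        self.batch = batch_name
        self.term = term_name
        self.folder = folder

    def run(self):
        if self.rb_selection == '0':
            print(self.batch, self.term)   # func1 path: needs batch/term only
        elif self.rb_selection == '1':
            print(self.folder)             # func2 path: needs folder only
        self.queue.put("Task finished")

q = queue.Queue()
ThreadedTask(q, '1', folder="C:/data").start()                 # no batch/term required
ThreadedTask(q, '0', batch_name="B1", term_name="T1").start()  # no folder required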

How to multithread call another endpoint method in python?

I have an API with two endpoints: one is a simple POST receiving a JSON, and the other calls the first one multiple times, depending on the length of a list of JSONs, and saves the returned values to a list.
First method
@app.route('/getAudience', methods=['POST', 'OPTIONS'])
def get_audience(audience_=None):
    try:
        if audience_:
            audience = audience_
        else:
            audience = request.get_json()
    except (BadRequest, ValueError):
        return make_response(jsonify(exception_response), 500)
    return get_audience_response(audience, exception_response)
Second method
@app.route('/getMultipleAudience', methods=['POST', 'OPTIONS'])
def get_multiple_audience():
    try:
        audiences = request.json
    except (BadRequest, ValueError):
        return make_response(jsonify(exception_response), 500)
    response = []
    for audience in audiences:
        new_resp = json.loads(get_audience(audience).data)
        response.append(new_resp)
    return make_response(jsonify(response))
I wanted to call the first method with one thread per object in the second method's list, so I tried this:
def get_multiple_audience():
    with app.app_context():
        try:
            audiences = request.get_json()
        except (BadRequest, ValueError):
            return make_response(jsonify(exception_response), 500)
        for audience in audiences:
            thread = Thread(target=get_audience, args=audience)
            thread.start()
        thread.join()
        return make_response(jsonify(response))
And got this error:
Exception in thread Thread-6:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\Python27\lib\site-packages\flask_cors\decorator.py", line 123, in wrapped_function
    options = get_cors_options(current_app, _options)
  File "C:\Python27\lib\site-packages\flask_cors\core.py", line 286, in get_cors_options
    options.update(get_app_kwarg_dict(appInstance))
  File "C:\Python27\lib\site-packages\flask_cors\core.py", line 299, in get_app_kwarg_dict
    app_config = getattr(app, 'config', {})
  File "C:\Python27\lib\site-packages\werkzeug\local.py", line 347, in __getattr__
    return getattr(self._get_current_object(), name)
  File "C:\Python27\lib\site-packages\werkzeug\local.py", line 306, in _get_current_object
    return self.__local()
  File "C:\Python27\lib\site-packages\flask\globals.py", line 51, in _find_app
    raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
So then I tried to modify the first method like this:
@app.route('/getAudience', methods=['POST', 'OPTIONS'])
def get_audience(audience_=None):
    with app.app_context():
        try:
            ...
And got the same error. Can anyone give me a hint, advice, best practice or solution?
There are multiple problems here. Firstly, here:
for audience in audiences:
    thread = Thread(target=get_audience, args=audience)
    thread.start()
thread.join()
You are only waiting for the last thread to complete. You should have a list of all the threads, and wait for all of them to complete.
threads = []
for audience in audiences:
    # note: args must be a tuple of positional arguments
    thread = Thread(target=get_audience, args=(audience,))
    threads.append(thread)
    thread.start()
for thread in threads:
    thread.join()
The second problem is that you are returning a single response which isn't even set anywhere, and that's not how multithreading works: you get one result per thread and have to keep track of all of them. So create a results list to hold each thread's return value. Here I will use a simple sum function as an example.
results = []
threads = []

def sum(a, b):
    results.append(a + b)

@app.route("/test")
def test():
    with app.app_context():
        for i in range(5):
            t = Thread(target=sum, args=(1, 2))
            threads.append(t)
            t.start()
        for t in threads:
            t.join()
        return jsonify(results)
This will happily work, and it will return the result of all the calls to the sum() function.
Now if I change sum to:
#app.route("/mysum/a/b")
def sum(a, b):
results.append(a + b)
return jsonify(a + b)
I will get a similar error to the one you were getting earlier, namely RuntimeError: Working outside of request context., even though the return value would still be correct: [3, 3, 3, 3, 3]. What's happening here is that your sum function now tries to return a Flask response, but it resides inside its own temporary thread and doesn't have access to any of Flask's internal contexts. So never return a value inside a temporary worker thread; instead, store the results in a shared pool for future reference.
But this doesn't mean you can't have a /mysum route. Indeed, you can, but the logic has to be separated. To put it all together:
results = []
threads = []

def sum(a, b):
    return a + b

def sum_worker(a, b):
    results.append(sum(a, b))

@app.route("/mysum/<int:a>/<int:b>")
def mysum(a, b):
    return jsonify(sum(a, b))

@app.route("/test")
def test():
    with app.app_context():
        for i in range(5):
            t = Thread(target=sum_worker, args=(1, 2))
            threads.append(t)
            t.start()
        for t in threads:
            t.join()
        return jsonify(results)
Note that this code is very crude and only for demonstration purposes; I can't recommend scattering global variables throughout your app, so some cleanup is required.
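One possible cleanup (my suggestion, not part of the answer above): concurrent.futures.ThreadPoolExecutor collects each worker's return value for you, so the global results and threads lists disappear entirely:
from concurrent.futures import ThreadPoolExecutor

from flask import Flask, jsonify

app = Flask(__name__)

def sum(a, b):
    return a + b

@app.route("/test")
def test():
    # pool.map returns the workers' results in submission order;
    # no shared globals are needed
    with ThreadPoolExecutor(max_workers=5) as pool:
        results = list(pool.map(lambda args: sum(*args), [(1, 2)] * 5))
    return jsonify(results)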

AssertionError: None is not callable

I'm in the process of learning how to program in Twisted, and I'm going through Dave Peticolas' tutorial (http://krondo.com/wp-content/uploads/2009/08/twisted-intro.html). I'm trying to solve the suggested exercise at the end of Part 3: having multiple independent countdowns going on in countdown.py. Here is my code, and the error I'm getting:
#!/usr/bin/python

class countdown(object):
    def __init__(self):
        self.timer = 0

    def count(self, timer):
        if self.timer == 0:
            reactor.stop()
        else:
            print self.timer, '...'
            self.timer -= 1
            reactor.callLater(1, self.count)

from twisted.internet import reactor

obj = countdown()
obj.timer = 10
reactor.callWhenRunning(obj.count(obj.timer))
print 'starting...'
reactor.run()
print 'stopped.'
When executed:
$ ./countdown.py
10 ...
Traceback (most recent call last):
  File "./countdown.py", line 21, in <module>
    reactor.callWhenRunning(obj.count(obj.timer))
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 666, in callWhenRunning
    _callable, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 645, in addSystemEventTrigger
    assert callable(_f), "%s is not callable" % _f
AssertionError: None is not callable
I assume I'm not leveraging an object variable properly, though I'm not sure what I'm doing wrong.
You are calling your callable before passing it in. The result of the obj.count(obj.timer) call is None, and None is not callable.
You need to pass in the method itself, not the result of calling it; any positional arguments for the method (here just obj.timer) go after the callable:
reactor.callWhenRunning(obj.count, obj.timer)
On closer inspection, you don't even need to pass in obj.timer as an argument. You can just access it on self after all; there is no need to pass it in separately:
class countdown(object):
    def __init__(self):
        self.timer = 0

    def count(self):
        if self.timer == 0:
            reactor.stop()
        else:
            print self.timer, '...'
            self.timer -= 1
            reactor.callLater(1, self.count)
and adjust your callWhenRunning() call accordingly:
reactor.callWhenRunning(obj.count)
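For the actual exercise — several independent countdowns — a sketch along the same lines (my own, in Python 2 to match the tutorial; the class-level counter is just one way to know when the last countdown has finished):
from twisted.internet import reactor

class countdown(object):
    running = 0  # how many countdowns are still active

    def __init__(self, timer):
        self.timer = timer
        countdown.running += 1

    def count(self):
        if self.timer == 0:
            countdown.running -= 1
            if countdown.running == 0:
                reactor.stop()  # stop only when the last countdown finishes
        else:
            print self.timer, '...'
            self.timer -= 1
            reactor.callLater(1, self.count)

for start in (3, 5, 10):
    reactor.callWhenRunning(countdown(start).count)

print 'starting...'
reactor.run()
print 'stopped.'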

Trap exception, try again decorator in Python

I have little experience with decorators in Python, but I'd like to write a function decorator that runs the function, catches a specific exception, and if the exception is caught then re-tries the function a certain number of times. That is, I'd like to do this:
@retry_if_exception(BadStatusLine, max_retries=2)
def thing_that_sometimes_fails(self, foo):
    foo.do_something_that_sometimes_raises_BadStatusLine()
I assume this kind of thing is easy with decorators, but I'm not clear about how exactly to go about it.
from functools import wraps

def retry_if_exception(ex, max_retries):
    def outer(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            assert max_retries > 0
            x = max_retries
            while x:
                try:
                    return func(*args, **kwargs)
                except ex:
                    x -= 1
                    if not x:
                        raise  # retries exhausted: re-raise instead of returning None
        return wrapper
    return outer
See why you should use @wraps.
I think you're basically wanting something like this:
def retry_if_exception(exception_type=Exception, max_retries=1):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for i in range(max_retries + 1):
                print('Try #', i + 1)
                try:
                    return fn(*args, **kwargs)
                except exception_type as e:
                    print('wrapper exception:', i + 1, e)
        return wrapper
    return decorator

@retry_if_exception()
def foo1():
    raise Exception('foo1')

@retry_if_exception(ArithmeticError)
def foo2():
    x = 1/0

@retry_if_exception(Exception, 2)
def foo3():
    raise Exception('foo3')
The following seems to do what you've described:
def retry_if_exception(exception, max_retries=2):
    def _retry_if_exception(method_fn):
        # method_fn is the function that gives rise
        # to the method that you've decorated,
        # with signature (slf, foo)
        from functools import wraps

        def method_deco(slf, foo):
            tries = 0
            while True:
                try:
                    return method_fn(slf, foo)
                except exception:
                    tries += 1
                    if tries > max_retries:
                        raise
        return wraps(method_fn)(method_deco)
    return _retry_if_exception
Here's an example of it in use:
d = {}

class Foo():
    def usually_raise_KeyError(self):
        print("d[17] = %s" % d[17])

foo1 = Foo()

class A():
    @retry_if_exception(KeyError, max_retries=2)
    def something_that_sometimes_fails(self, foo):
        print("About to call foo.usually_raise_KeyError()")
        foo.usually_raise_KeyError()

a = A()
a.something_that_sometimes_fails(foo1)
This gives:
About to call foo.usually_raise_KeyError()
About to call foo.usually_raise_KeyError()
About to call foo.usually_raise_KeyError()
Traceback (most recent call last):
  File " ......... TrapRetryDeco.py", line 39, in <module>
    a.something_that_sometimes_fails(foo1)
  File " ......... TrapRetryDeco.py", line 15, in method_deco
    return method_fn(slf, foo)
  File " ......... TrapRetryDeco.py", line 36, in something_that_sometimes_fails
    foo.usually_raise_KeyError()
  File " ......... TrapRetryDeco.py", line 28, in usually_raise_KeyError
    print("d[17] = %s" % d[17])
KeyError: 17
I assume that by "2 retries" you mean the operation will be attempted 3 times all told. Your example has a couple of complications which may obscure the basic setup: it seems you want a method decorator, since your function/method's first parameter is self; however, that method immediately delegates to some bad method of its foo parameter. I preserved these complications :)
As an outline, you would do something along these lines:
import random

def shaky():
    1 / random.randint(0, 1)

def retry_if_exception(f):
    def inner(retries=2):
        for retry in range(retries):
            try:
                return f()
            except ZeroDivisionError:
                print 'try {}'.format(retry)
        raise  # all retries failed: re-raise the last exception (Python 2 bare raise)
    return inner

@retry_if_exception
def thing_that_may_fail():
    shaky()

thing_that_may_fail()
Each call to shaky() fails about half the time, so with two attempts the decorated call still fails about a quarter of the time. When it does fail, it prints:
try 0
try 1
Traceback (most recent call last):
  File "Untitled 2.py", line 23, in <module>
    thing_that_may_fail()
  File "Untitled 2.py", line 10, in inner
    return f()
  File "Untitled 2.py", line 21, in thing_that_may_fail
    shaky()
  File "Untitled 2.py", line 4, in shaky
    1/random.randint(0,1)
ZeroDivisionError: integer division or modulo by zero
You could adapt this structure to many different types of errors.
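As one possible adaptation (a sketch of my own, not from the answers above), the same decorator shape extends naturally to a tuple of exception types and a delay between attempts:
import time
from functools import wraps

def retry_if_exception(exceptions, max_retries=2, delay=0.5):
    # exceptions: a single exception class or a tuple of classes
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == max_retries:
                        raise          # out of attempts: propagate the last error
                    time.sleep(delay)  # brief pause before the next try
        return wrapper
    return decorator

@retry_if_exception((IOError, OSError), max_retries=3, delay=1.0)
def flaky_io():
    pass  # some I/O that occasionally raises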
