Conditional execution without having to check repeatedly for a condition - python

I have a class with code that fits into the following template:
class aClass:
    def __init__(self, switch=False):
        self.switch = switch

    def f(self):
        done = False
        while not done:
            # a dozen lines of code
            if self.switch:
                # a single line of code
            # another dozen lines of code
So the single line of code in the if statement will either never be executed, or it will be executed in all iterations. And this is actually known as soon as the object is initialized.
When self.switch is True, I would like the single line of code to be executed without having to check for self.switch at every single iteration. And when self.switch is False, I would like the single line of code to be ignored, again without having to repeatedly check for self.switch.
I have of course considered writing two versions of f and selecting the appropriate one in __init__ according to the value of the switch, but duplicating all this code except for a single line doesn't feel right.
Can anyone suggest an elegant way to solve this problem? Perhaps a way to generate the appropriate version of the f method at initialization?

That's a completely valid ask, if not for performance then for readability.
Extract the three pieces of logic (before, inside, and after your condition) into three separate methods, and in f() just write two versions of the big loop:
def first(self):
    pass

def second(self):
    pass

def third(self):
    pass

def f(self):
    if self.switch:
        while ...:
            self.first()
            self.second()
            self.third()
    else:
        while ...:
            self.first()
            self.third()
If you want something more elegant (though it's a matter of taste), extract the two branches of my f() into two methods, first_loop and second_loop, and then in __init__ assign self.f = self.first_loop or self.f = self.second_loop depending on the switch:
class SuperUnderperformingAccordingToManyYetReadable(object):
    def __init__(self, switch):
        self.switch = switch
        if switch:
            self.f = self._second_loop
        else:
            self.f = self._first_loop

    def _first(self):
        pass

    def _second(self):
        pass

    def _third(self):
        pass

    def _first_loop(self):
        while ...:
            self._first()
            self._third()

    def _second_loop(self):
        while ...:
            self._first()
            self._second()
            self._third()
You may need to do some extra work to manage breaking out of the while loop.
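A minimal runnable sketch of this select-in-__init__ pattern (the class name, method names, and the list-building bodies below are illustrative stand-ins for the real loop):

```python
class Worker:
    def __init__(self, switch=False):
        self.switch = switch
        # Choose the loop implementation once, at construction time;
        # f() itself never tests the switch again.
        self.f = self._loop_with_extra if switch else self._loop_plain

    def _loop_plain(self):
        # stands in for the dozen-line body without the optional line
        return [i for i in range(3)]

    def _loop_with_extra(self):
        # stands in for the body plus the "single line of code"
        return [i * 10 for i in range(3)]

print(Worker(switch=False).f())  # [0, 1, 2]
print(Worker(switch=True).f())   # [0, 10, 20]
```

Each instance ends up carrying its own bound method in the instance attribute f, shadowing nothing on the class.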

If the .switch attribute is not supposed to change, you can select the loop body dynamically in the __init__() method:
def __init__(self, switch=False):
    self.switch = switch
    self.__fBody = self.__fSwitchTrue if switch else self.__fSwitchFalse

def f(self):
    self.__done = False
    while not self.__done:
        self.__fBody()

def __fSwitchTrue(self):
    self.__fBodyStart()
    ...  # a single line of code
    self.__fBodyEnd()

def __fSwitchFalse(self):
    self.__fBodyStart()
    self.__fBodyEnd()

def __fBodyStart(self):
    ...  # a dozen lines of code

def __fBodyEnd(self):
    ...  # another dozen lines of code
Remember to promote any value used by more than one of the defined methods to an attribute (as done becomes self.__done above).

In a comment to my original question, JohnColeman suggested using exec and provided a link to another relevant question.
That was an excellent suggestion, and the solution I was led to is:
_template_pre = """\
def f(self):
    for i in range(5):
        print("Executing code before the optional segment.")
"""

_template_opt = """\
        print("Executing the optional segment")
"""

_template_post = """\
        print("Executing code after the optional segment.")
"""

class aClass:
    def __init__(self, switch=False):
        if switch:
            fdef = _template_pre + _template_opt + _template_post
        else:
            fdef = _template_pre + _template_post
        exec(fdef, globals(), self.__dict__)
        # bind the function
        self.f = self.f.__get__(self)
You can verify this actually works:
aClass(switch = False).f()
aClass(switch = True).f()
Before jumping to conclusions as to how "pythonic" this is, let me point out that such an approach is employed in a couple of metaclass recipes I have encountered and even in the Python Standard Library (check the implementation of namedtuple, to name one example).
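For comparison, here is a hedged, exec-free sketch of the same idea: compose the loop body from a list of callables chosen once in __init__ (the class name, the log list, and the fixed two-iteration loop are illustrative stand-ins):

```python
class AClass:
    def __init__(self, switch=False):
        self.log = []
        # Select the steps once; f() never tests the switch again.
        steps = [lambda: self.log.append("pre")]
        if switch:
            steps.append(lambda: self.log.append("opt"))
        steps.append(lambda: self.log.append("post"))
        self._steps = steps

    def f(self):
        for _ in range(2):  # stands in for the real while loop
            for step in self._steps:
                step()

a = AClass(switch=True)
a.f()
print(a.log)  # ['pre', 'opt', 'post', 'pre', 'opt', 'post']
```

The per-iteration cost is an indirect call per step rather than a branch, so whether this beats the exec version is a profiling question, not a given.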

Related

Detect the last inline method call of a class

Let's say we have a python class with methods intended to be called one or more times inline. My goal is to make methods behave differently when they are invoked last in an inline chain of calls.
Example:
class InlineClass:
    def __init__(self):
        self.called = False

    def execute(self):
        if self.called:
            print('do something with awareness of prior call')
        else:
            print('first call detected')
            self.called = True
        return self

    def end(self):
        print('last call detected, do something with awareness this is the last call')
        self.called = False
        return self

x = InlineClass()
x.execute().execute().execute().end()  # runs 3 execute calls inline and end
The example above only knows it has reached the last inline call once the end method is invoked. What I would like to do, in essence, is to make that step redundant.
QUESTION
Keeping in mind that this class's methods are always meant to be called one or more times inline, is there an elegant way to structure the class so it is aware it has reached its last inline method call, without necessitating the end call as in the example above?
Instead of chaining the functions, you can try creating a function that handles passing different parameters depending on how many times the function has been / will be called.
Here is some example code:
class Something:
    def repeat(self, function, count):
        for i in range(count):
            if i == 0:
                function("This is the first time")
            elif i == count - 1:
                function("This is the last time")
            else:
                function("This is somewhere in between")

    def foo_function(self, text):
        print(text)

foo = Something()
foo.repeat(foo.foo_function, 5)
foo.repeat(foo.foo_function, 2)
foo.repeat(foo.foo_function, 6)
foo.repeat(foo.foo_function, 8)
Output:
This is the first time
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is the last time
This is the first time
This is the last time
This is the first time
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is the last time
This is the first time
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is the last time
You need to return modified copies instead of self; here is an example with the behaviour you described:
class InlineClass:
    def __init__(self, counter=0):
        self.counter = counter

    def execute(self):
        return InlineClass(self.counter + 1)

    def __str__(self):
        return f'InlineClass<counter={self.counter}>'

x = InlineClass()
print(x)
# => InlineClass<counter=0>
y = x.execute().execute().execute()
print(y)
# => InlineClass<counter=3>
print(x.execute().execute().execute())
# => InlineClass<counter=3>
print(y.execute().execute().execute())
# => InlineClass<counter=6>

Unit testing a method that iterates through loops

I am trying to come up with a good way to test a method that goes through some loops and then gets an object and calls one of its methods. That class has its own tests so I'm not sure exactly what to test here. I have written it to have different behavior in the test case, which is obviously not ideal.
I'm looking for suggestions on how to improve this to test without the conditionals.
def direct(self, test=False):
    """
    routes data in self.data_groups to
    consumers in self.consumers_list
    """
    if test:
        output_list = []
    for data_type, group in self.data_groups.items():
        if test:
            output_list.append(data_type)
            output_list.append(group)
        for consumer_name in self.consumers_list[data_type]:
            for record in group:
                if test:
                    output_list.append(record.values()[0])
                else:
                    consumer = self.get_consumer(consumer_name,
                                                 record)
                    consumer_output = consumer.do_something()
    if test:
        return output_list
    return True
Unfortunately, I'm not sure that what you are talking about is possible. You could use a decorator, but that would be useless without overriding direct completely. I'm not sure if this is any better for you, but you could simply override the method in a test-specific subclass. That would make it look a lot cleaner, and you could group your code more intuitively.
Do something like:
class DirectClass:
    def __init__(self):
        self.data_groups = {}
        self.consumers_list = {}

    def direct(self):
        """
        routes data in self.data_groups to
        consumers in self.consumers_list
        """
        for data_type, group in self.data_groups.items():
            for consumer_name in self.consumers_list[data_type]:
                for record in group:
                    consumer = self.get_consumer(consumer_name,
                                                 record)
                    consumer_output = consumer.do_something()
        return True

class TestDirect(DirectClass):
    def __init__(self):
        DirectClass.__init__(self)

    def direct(self):
        output_list = []
        for data_type, group in self.data_groups.items():
            output_list.append(data_type)
            output_list.append(group)
            for consumer_name in self.consumers_list[data_type]:
                for record in group:
                    output_list.append(record.values()[0])
        return output_list
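A hedged alternative that keeps direct() entirely free of test flags: stub out get_consumer with unittest.mock so the production code path runs unchanged. The Router class below is a simplified, illustrative stand-in for the class under test, not the asker's actual code:

```python
from unittest import mock

class Router:
    # simplified stand-in for the class under test
    def __init__(self, data_groups, consumers_list):
        self.data_groups = data_groups
        self.consumers_list = consumers_list

    def get_consumer(self, consumer_name, record):
        raise NotImplementedError("real consumer lookup lives here")

    def direct(self):
        """routes data in self.data_groups to consumers in self.consumers_list"""
        for data_type, group in self.data_groups.items():
            for consumer_name in self.consumers_list[data_type]:
                for record in group:
                    consumer = self.get_consumer(consumer_name, record)
                    consumer.do_something()
        return True

router = Router({"a": [{"k": 1}]}, {"a": ["c1"]})
with mock.patch.object(Router, "get_consumer") as fake_lookup:
    assert router.direct() is True
    # the stubbed consumer's do_something ran once, for the single record
    fake_lookup.return_value.do_something.assert_called_once()
```

The test then asserts on how the stub was called (call counts, arguments) instead of collecting values through the method itself.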
I settled on sub-classing the object returned by consumer so that I could call the same code and have the overridden run (formerly do_something) method return test data. There are still more ifs than I want, but it achieves most of my objective. Credit to @user2916286 for getting me thinking about subclassing in this case.
def direct(self, test=False):
    """
    routes data in self.data_groups to
    consumers in self.consumers_list
    """
    if test:
        output_list = []
    for data_type, group in self.data_groups.items():
        if test:
            output_list.append(data_type)
            output_list.append(group)
        for consumer_name in self.consumers_list[data_type]:
            for record in group:
                consumer = self.get_consumer(consumer_name,
                                             record, test=test)
                consumer_output = consumer.run()
                if not consumer_output:
                    raise Exception('consumer failed')
                if test:
                    output_list.append(consumer_output)
    if test:
        return output_list
    return True

How to encapsulate handlers in an efficient manner in Python?

I am making up a handler to handle different types of data. Here is my current solution:
def get_handler_by_type(type):
    def handler_for_type_A():
        ...
        # code for processing data type A
    def handler_for_type_B():
        ...
        # code for processing data type B
    def handler_for_type_C():
        ...
        # code for processing data type C
    handler_map = {type_A: handler_for_type_A,
                   type_B: handler_for_type_B,
                   type_C: handler_for_type_C,
                   }
    return handler_map[type]
However, this seems quite inefficient as I will call get_handler_by_type frequently and every time it gets called, the dictionary will be constructed again.
I know I could do this instead:
def handler_for_type_A():
    ...
    # code for processing data type A

def handler_for_type_B():
    ...
    # code for processing data type B

def handler_for_type_C():
    ...
    # code for processing data type C

handler_map = {type_A: handler_for_type_A,
               type_B: handler_for_type_B,
               type_C: handler_for_type_C,
               }

def get_handler_by_type(type, handler_map=handler_map):
    return handler_map[type]
But this is pretty ugly in my opinion, because the handler_for_type_X functions and handler_map pollute the global namespace. Is there a way of doing this both efficiently and elegantly?
Thanks for any input.
One way is to look the handler up dynamically (if you have a consistent naming convention)
return vars()['handler_for_'+type]
Another way is to store the map as an attribute of the function
def get_handler_by_type(type):
    def handler_for_type_A():
        ...
        # code for processing data type A
    def handler_for_type_B():
        ...
        # code for processing data type B
    def handler_for_type_C():
        ...
        # code for processing data type C
    if not hasattr(get_handler_by_type, 'handler_map'):
        get_handler_by_type.handler_map = {'type_A': handler_for_type_A,
                                           'type_B': handler_for_type_B,
                                           'type_C': handler_for_type_C,
                                           }
    return get_handler_by_type.handler_map[type]
This way will encapsulate it:
def _handler_helper():
    def fna():
        print("a")
    def fnb():
        print("b")
    m = {"a": fna, "b": fnb}
    return lambda x: m[x]

get_handler_by_type = _handler_helper()
You may want to use def if you want to have a docstring, but this works.
Another option might be to have a more OOP approach:
class _HandlerHelper:
    def fna(self):
        print('a')

    def fnb(self):
        print('b')

    # __call__ is a magic method which lets you treat the object as a function
    def __call__(self, fn):
        return getattr(self, 'fn' + fn)

get_handler_by_type = _HandlerHelper()
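On Python 3.7+, functools.singledispatch offers a standard-library take on the same dispatch idea, keyed on the argument's type rather than on a type tag (the handlers below are illustrative):

```python
from functools import singledispatch

@singledispatch
def handle(value):
    # fallback when no handler is registered for the value's type
    raise TypeError(f"no handler for {type(value).__name__}")

@handle.register
def _(value: int):
    return f"int handler: {value}"

@handle.register
def _(value: str):
    return f"str handler: {value}"

print(handle(3))     # int handler: 3
print(handle("hi"))  # str handler: hi
```

The registry lives on the handle function itself, so nothing extra pollutes the module namespace and the dispatch table is built only once.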

How to pass a list of parameters to a function in Python

I wrapped a class in this way:
import queue
import threading

class MyThread:
    def __init__(self, t_num, content, t_func):
        self.q = queue.Queue()
        self.result = {}
        self.t_num = t_num
        self.t_func = t_func
        for item in content:
            self.q.put(item)

    def start(self):
        for i in range(self.t_num):
            t = threading.Thread(target=self.worker)
            t.daemon = True
            t.start()
        self.q.join()
        return self.result

    def worker(self):
        while True:
            item = self.q.get()
            value = self.t_func(item)
            self.result[item] = value
            self.q.task_done()

x = [5, 6, 7, 8, 9]

def func(i):
    return i + 1

m = MyThread(4, x, func)
print(m.start())
It works well. If I design the function func with 2 or more parameters and pass those parameters to the class in a list, how can I call func properly in worker?
E.g.:
def __init__(self, t_num, content, t_func, t_func_p):
    for item in content:
        self.q.put(item)
    self.t_num = t_num
    self.t_func = t_func
    self.t_func_p = t_func_p

def func(i, j, k):
    ...

m = MyThread(4, x, func, [j, k])
You need to use *args and **kwargs to pass any number of parameters to a function.
Here is more info: http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/
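Applied to the class in the question, one hedged sketch: store the extra arguments once and splat them after the queue item when calling t_func. The class below is a simplified, thread-free stand-in that focuses only on the argument forwarding:

```python
class MyThreadSketch:
    # thread-free stand-in; only the argument forwarding matters here
    def __init__(self, t_func, t_func_p=()):
        self.t_func = t_func
        self.t_func_p = tuple(t_func_p)

    def worker(self, item):
        # the queue item goes first, the stored extras follow
        return self.t_func(item, *self.t_func_p)

def func(i, j, k):
    return i + j + k

m = MyThreadSketch(func, [10, 100])
print(m.worker(5))  # 115
```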
Maybe this might help:
def __init__(self, t_num, content, func, *params):
    func(*params)  # params is a tuple here: (param1, param2, param3, ...)

def func(param1, param2, param3):
    ...
# or
def func(*params):  # for an arbitrary number of params
    ...

m = MyThread(4, x, func, param1, param2, param3, ...)
As a general rule, if you are going to be passing many parameters to a particular function, consider wrapping them in a simple object. The reasons:
If you ever need to add or remove parameters, you only modify the object and the function itself; the method signature (and all its references) remains untouched.
When working with objects, you always know what your function receives (this is especially useful if you are working on a team, where more people will use that function).
Finally, because you control the creation of the object in its constructor, you can ensure that the values associated with it are correct (for example, the constructor can check that there are no empty values, or that the types are correct).
If you still want to go with multiple parameters, check out *args and **kwargs, although I personally do not like that approach, as it can end up forcing people to read the function's source in order to use it.
Good luck :)
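The parameter-object advice above can be sketched with a dataclass, where __post_init__ is the natural place for the constructor-time validation the answer mentions (the names TaskParams and func are illustrative):

```python
from dataclasses import dataclass

@dataclass
class TaskParams:
    i: int
    j: int
    k: int

    def __post_init__(self):
        # constructor-time validation, as suggested above
        if self.i < 0:
            raise ValueError("i must be non-negative")

def func(p: TaskParams):
    return p.i + p.j + p.k

print(func(TaskParams(1, 2, 3)))  # 6
```

Adding a field later means touching only TaskParams and func's body; every call site keeps the same single-argument signature.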

conditional python with

I have five or six resources that have nice 'with' handlers, and normally I'd do this:
with res1, res2, res3, res4, res5, res6:
    do1
    do2
However, sometimes one or more of these resources should not be activated. Which leads to very ugly repetitive code:
with res1, res3, res4, res6:  # these always acquired
    if res2_enabled:
        with res2:
            if res5_enabled:
                with res5:
                    do1
                    do2
            else:
                do1
                do2
    elif res5_enabled:
        with res5:
            ...
There must be clean easy ways to do this surely?
You could create a wrapper object that supports the with statement, and do the checking in there. Something like:
with wrapper(res1), wrapper(res2), wrapper(res3):
    ...
or a wrapper that handles all of them:
with wrapper(res1, res2, res3):
    ...
The definition for your wrapper would be:
class wrapper(object):
    def __init__(self, *objs):
        ...

    def __enter__(self):
        # initialize objs here
        ...

    def __exit__(self, exc_type, exc_value, traceback):
        # release objects here
        ...
If I understand you correctly you can do this:
from contextlib import contextmanager, nested

def enabled_resources(*resources):
    return nested(*(res for res, enabled in resources if enabled))

# just for testing
@contextmanager
def test(n):
    print n, "entered"
    yield

resources = [(test(n), n % 2) for n in range(10)]
# you want
# resources = [(res1, res1_enabled), ... ]

with enabled_resources(*resources):
    # do1, do2
    pass
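Note that contextlib.nested was deprecated in Python 2.7 and removed in Python 3; on modern Pythons, contextlib.ExitStack gives the same "enter only the enabled resources" behaviour. A sketch with illustrative resource names and a log list to make the enter/exit order visible:

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def res(name, log):
    # toy resource that records its lifecycle
    log.append(f"enter {name}")
    yield
    log.append(f"exit {name}")

log = []
resources = [(res("r1", log), True), (res("r2", log), False), (res("r3", log), True)]

with ExitStack() as stack:
    for cm, enabled in resources:
        if enabled:
            stack.enter_context(cm)
    log.append("body")

print(log)  # ['enter r1', 'enter r3', 'body', 'exit r3', 'exit r1']
```

ExitStack unwinds in reverse order on exit, just as nested with statements would, and the disabled r2 is never entered at all.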
Original Poster here; here is my approach refined so far:
I can add (or monkey-patch) the boolean operator __nonzero__ (__bool__ in Python 3) onto the with objects, returning whether they are enabled. Then, when objects are mutually exclusive, I can have:
with res1 or res2 or res3 or res4:
    ...
When a resource is togglable, I can create an empty withable that is a no-op; wither seems a nice name for it:
class sither:
    @classmethod
    def __enter__(cls): pass
    @classmethod
    def __exit__(cls, *args): pass
...
with res1 or wither, res2 or wither:
    ...
I can also use this to keep the toggling out of the withable objects:
with res1 if res1enabled else wither, res2 if res2enabled else wither:
    ...
Finally, for the resources I have most control over, I can integrate the enabled check into the class itself, so that when used while not enabled they are no-ops:
with res1, res2, res3:
    ...
The with statement is absolutely adorable; it just seems a bit unentrenched yet. It will be interesting to see what finesse others come up with in this regard...