Detect the last inline method call of a class - python

Let's say we have a python class with methods intended to be called one or more times inline. My goal is to make methods behave differently when they are invoked last in an inline chain of calls.
Example:
class InlineClass:
    def __init__(self):
        self.called = False

    def execute(self):
        if self.called:
            print('do something with awareness of prior call')
            return self
        else:
            print('first call detected')
            self.called = True
            return self

    def end(self):
        print('last call detected, do something with awareness this is the last call')
        self.called = False
        return self

x = InlineClass()
x.execute().execute().execute().end()  # runs 3 execute calls inline and end.
The example above only knows it has reached the last inline call once the end method is invoked. What I would like to do, in essence, is make that final step redundant.
QUESTION
Keeping in mind the intent for this class's methods to always be called one or more times inline, is there an elegant way to structure the class so it is aware it has reached its last inline method call, without necessitating the end call as in the example above?

Instead of chaining the functions, you can try creating a function that handles passing different parameters depending on how many times the function has been / will be called.
Here is some example code:
class Something:
    def repeat(self, function, count):
        for i in range(count):
            if i == 0:
                function("This is the first time")
            elif i == count - 1:
                function("This is the last time")
            else:
                function("This is somewhere in between")

    def foo_function(self, text):
        print(text)

foo = Something()
foo.repeat(foo.foo_function, 5)
foo.repeat(foo.foo_function, 2)
foo.repeat(foo.foo_function, 6)
foo.repeat(foo.foo_function, 8)
Output:
This is the first time
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is the last time
This is the first time
This is the last time
This is the first time
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is the last time
This is the first time
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is somewhere in between
This is the last time

You need to return modified copies instead of self; here is an example with the behaviour you described:
class InlineClass:
    def __init__(self, counter=0):
        self.counter = counter

    def execute(self):
        return InlineClass(self.counter + 1)

    def __str__(self):
        return f'InlineClass<counter={self.counter}>'

x = InlineClass()
print(x)
# => InlineClass<counter=0>
y = x.execute().execute().execute()
print(y)
# => InlineClass<counter=3>
print(x.execute().execute().execute())
# => InlineClass<counter=3>
print(y.execute().execute().execute())
# => InlineClass<counter=6>
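Neither answer above does this, but another way to get "last call" behaviour without an explicit end() is a context manager: the cleanup logic runs automatically when the with block exits. This is a sketch with illustrative names, not something from the question's code:

```python
class InlineClass:
    """Sketch: the with-block exit plays the role of the end() call."""
    def __init__(self):
        self.calls = 0

    def execute(self):
        self.calls += 1
        print(f'call #{self.calls}')
        return self

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs once the with-block ends, i.e. after the last inline call.
        print(f'last call detected after {self.calls} executions')
        self.calls = 0
        return False

with InlineClass() as x:
    x.execute().execute().execute()
```

The trade-off is that callers must use a with block, but in exchange the "last call" step can never be forgotten.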

Related

get iterator value from itertool created variable python

I created an iterator to increment the figure number in various plotting function calls:
figndx=itertools.count()
I then proceed to call these throughout my code, passing next(figndx) as an argument to increment the value I use for the figure number, for example:
an.plotimg(ref_frame,next(figndx),'Ref Frame')
an.plotimg(new_frame,next(figndx),'New Frame')
etc...
After some particular function call, I want to read back the figndx value and store it in a variable for later use. However, when I look at figndx, it returns count(7), for example. How do I extract the '7' from this?
I've tried :
figndx
figndx.__iter__()
and I can't find anything else in the 'suggested' methods (when I type the dot (.)) that will get the actual iterator value. Can this be done?
Just wrap a count object
import itertools

class MyCount:
    def __init__(self, *args, **kwargs):
        self._c = itertools.count(*args, **kwargs)
        self._current = next(self._c)

    def __next__(self):
        current = self._current
        self._current = next(self._c)
        return current

    def __iter__(self):
        return self

    def peek(self):
        return self._current
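Usage might look like this (the wrapper is repeated so the snippet runs on its own):

```python
import itertools

class MyCount:
    """Wrap itertools.count, keeping the next value around so it can be peeked."""
    def __init__(self, *args, **kwargs):
        self._c = itertools.count(*args, **kwargs)
        self._current = next(self._c)

    def __next__(self):
        current = self._current
        self._current = next(self._c)
        return current

    def __iter__(self):
        return self

    def peek(self):
        return self._current

figndx = MyCount()
print(next(figndx))   # 0
print(next(figndx))   # 1
print(figndx.peek())  # 2 -- the value the next next() call will return
print(next(figndx))   # 2
```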
You can create yourself a peeker, using itertools.tee, and encapsulate the peek:
from itertools import count, tee

def peek(iterator):
    iterator, peeker = tee(iterator)
    return iterator, next(peeker)
Then you can call it like
figndx = count(1)
next(figndx)
next(figndx)
figndx, next_value = peek(figndx)
next_value
# 3

wrapping generators to have a single `next` call instead of two steps ( __iter__ + __next__ )

I'm receiving an unknown number of records for background processing from generators. If there is a more important job, I have to stop to release the process.
The main process is best described as:
def main():
    generator_source = generator_for_test_data()  # 1. contact server to get data.
    uw = UploadWrapper(generator_source)          # 2. wrap the data.
    while not interrupt():                        # 3. check for interrupts.
        row = next(uw)
        if row is None:
            return
        print(long_running_job(row))              # 4. do the work.
Is there a way to get to __next__ without having to plug __iter__?
Having two steps - (1) make an iterator, then (2) iterate over it, just seems clumsy.
There are many cases where I'd prefer to submit a function to a function manager (mapreduce style), but in this case I need an instantiated class with some settings. Registering a single function can therefore only work if that function alone is __next__.
class UploadWrapper(object):
    def __init__(self, generator):
        self.generator = generator
        self._iterator = None

    def __iter__(self):
        for page in self.generator:
            yield from page.data

    def __next__(self):
        if self._iterator is None:  # ugly bit.
            self._iterator = self.__iter__()
        try:
            return next(self._iterator)
        except StopIteration:
            return None
Q: Is there a simpler way?
Working sample added for completeness:
import time
import random

class Page(object):
    def __init__(self, data):
        self.data = data

def generator_for_test_data():
    for t in range(10):
        page = Page(data=[(t, i) for i in range(100, 110)])
        yield page

def long_running_job(row):
    time.sleep(random.randint(1, 10) / 100)
    assert len(row) == 2
    assert row[0] in range(10)
    assert row[1] in range(100, 110)
    return row

def interrupt():  # interrupt check
    if random.randint(1, 50) == 1:
        print("INTERRUPT SIGNAL!")
        return True
    return False

class UploadWrapper(object):
    def __init__(self, generator):
        self.generator = generator
        self._iterator = None

    def __iter__(self):
        for ft in self.generator:
            yield from ft.data

    def __next__(self):
        if self._iterator is None:
            self._iterator = self.__iter__()
        try:
            return next(self._iterator)
        except StopIteration:
            return None

def main():
    gen = generator_for_test_data()
    uw = UploadWrapper(gen)
    while not interrupt():  # check for job interrupt.
        row = next(uw)
        if row is None:
            return
        print(long_running_job(row))

if __name__ == "__main__":
    main()
Your UploadWrapper seems overly complex; there is more than one simpler solution.
My first thought is to ditch the class altogether and just use a function instead:
def uploadwrapper(page_gen):
    for page in page_gen:
        yield from page.data
Just replace uw = UploadWrapper(gen) with uw = uploadwrapper(gen), and that'll work.
If you insist on the class, you can just get rid of the __next__() and replace uw = UploadWrapper(gen) with uw = iter(UploadWrapper(gen)), and it'll work.
In either case, you must also catch the StopIteration in the caller. __next__() is supposed to raise StopIteration when it's done, and not return None, like yours does. Otherwise, it won't work with things expecting a well-behaving iterator, eg. for loops.
I think you might have some misconceptions about how it all is supposed to fit together, so I'll try my best to explain how it's supposed to work, to the best of my knowledge:
The point of __iter__() is that if you have e.g. a list, you can get multiple independent iterators by calling iter(). When you have a for loop, you're essentially first getting an iterator with iter() and then calling next() on it on every loop iteration. If you have two nested loops that use the same list, the iterators and their positions are still separate, so there's no conflict.

__iter__() is supposed to return an iterator for the container it's on, or, if it's called on an iterator, to just return self. In that sense, it's kind of wrong for UploadWrapper not to return self in __iter__(), since it wraps a generator and so can't really give independent iterators.

As for why leaving out __next__() works: when you define a generator (i.e. use yield in a function), the generator has an __iter__() (that returns self, as it should) and a __next__() that does what you'd expect.

In your original code, you're not really using __iter__() at all for what it's supposed to be used for: the code works even if you rename it to something else! This is because you never call iter() on the instance, and just directly call next().
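A quick, runnable illustration of that container/iterator split:

```python
nums = [10, 20, 30]

it1 = iter(nums)   # each iter() call on a list gives an independent iterator
it2 = iter(nums)

print(next(it1))   # 10
print(next(it1))   # 20
print(next(it2))   # 10 -- it2 keeps its own position

# An iterator's __iter__ returns itself, so iter(it1) is it1:
print(iter(it1) is it1)  # True
```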
If you wanted to do it "properly" as a class, I think something like this might suffice:
class UploadWrapper(object):
    def __init__(self, generator):
        self.generator = generator
        self.subgen = iter(next(generator).data)

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            try:
                return next(self.subgen)
            except StopIteration:
                self.subgen = iter(next(self.generator).data)
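For completeness, here is that class exercised end-to-end; Page and gen_pages are small stand-ins for the question's test data:

```python
class Page:
    def __init__(self, data):
        self.data = data

def gen_pages():
    yield Page([1, 2])
    yield Page([3, 4])

class UploadWrapper:
    def __init__(self, generator):
        self.generator = generator
        self.subgen = iter(next(generator).data)

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            try:
                return next(self.subgen)
            except StopIteration:
                # Current page exhausted: advance to the next page.
                # When the page generator itself is exhausted, the
                # StopIteration from next(self.generator) propagates,
                # correctly ending iteration.
                self.subgen = iter(next(self.generator).data)

print(list(UploadWrapper(gen_pages())))  # [1, 2, 3, 4]
```

Because __next__ raises StopIteration properly instead of returning None, this version also works with for loops and list().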

In my LinkedList, head_pop() and append() take the same time?

I want to test whether deleting a linked list's head element is faster than adding an element at its end.
This is my linked list's main code:
class LNode:
    def __init__(self, elem, next_=None):
        self.elem = elem
        self.next = next_

class LinkedListUnderflow(Exception):  # exception used below (definition not shown in the original)
    pass

class LinkList:
    def __init__(self):
        self.__head = None

    # delete head element
    def head_pop(self):
        if self.__head is None:
            raise LinkedListUnderflow("in pop")
        e = self.__head.elem
        self.__head = self.__head.next
        return e

    # add an element at end
    def append(self, elem):
        if self.__head is None:
            self.__head = LNode(elem)
            return
        p = self.__head
        while p.next is not None:
            p = p.next
        p.next = LNode(elem)

import time

# test time
def timetest(f):
    start = time.clock()
    for a in range(0, 1000000):
        f
    end = time.clock()
    print("times:" + str(end - start))
Then I try this:
llist = LinkList()

def append():
    llist.append(666)

def head_pop():
    llist.head_pop()

timetest(append())
timetest(head_pop())
Output:
times:0.029582597002445254
times:0.03032071299821837
As you can see, they cost the same time.
But I think it should be O(n) versus O(1).
What you're doing is passing the result of append() to your time test function, whereas you want to pass the function itself!
Change your time-test to call the f function:
def timetest(f):
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    for a in range(0, 1000000):
        f()  # <- note the () here
    end = time.perf_counter()
    print("times:" + str(end - start))
Then use this to test:
timetest(append)
timetest(head_pop)
As you can see, we're passing in the function for the test to call, instead of the RESULT of the function (being called once!)
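The difference can be seen in a minimal, runnable sketch; the work counter and the smaller loop count here are illustrative, and time.perf_counter replaces time.clock, which was removed in Python 3.8:

```python
import time

def timetest(f):
    start = time.perf_counter()
    for _ in range(1000):
        f()  # call the function on every iteration
    end = time.perf_counter()
    print("times:" + str(end - start))

calls = 0

def work():
    global calls
    calls += 1

timetest(work)       # passes the function itself: work() runs 1000 times
print(calls)         # 1000
# timetest(work())   # wrong: passes work's return value (None), which is not callable
```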

Conditional execution without having to check repeatedly for a condition

I have a class with code that fits into the following template:
class aClass:
    def __init__(self, switch=False):
        self.switch = switch

    def f(self):
        done = False
        while not done:
            # a dozen lines of code
            if self.switch:
                # a single line of code
            # another dozen lines of code
So the single line of code in the if statement will either never be executed, or it will be executed in all iterations. And this is actually known as soon as the object is initialized.
When self.switch is True, I would like the single line of code to be executed without having to check for self.switch at every single iteration. And when self.switch is False, I would like the single line of code to be ignored, again without having to repeatedly check for self.switch.
I have of course considered writing two versions of f and selecting the appropriate one in __init__ according to the value of the switch, but duplicating all this code except for a single line doesn't feel right.
Can anyone suggest an elegant way to solve this problem? Perhaps a way to generate the appropriate version of the f method at initialization?
That's a completely valid ask. If not for performance then for readability.
Extract the three pieces of logic (before, inside, and after your condition) into three separate methods, and in f() just write two implementations of the big loop:
def first(self):
    pass

def second(self):
    pass

def third(self):
    pass

def f(self):
    if self.switch:
        while ...:
            self.first()
            self.second()
            self.third()
    else:
        while ...:
            self.first()
            self.third()
If you want it more elegant (although that depends on taste), you can extract the two branches of my f() into two methods, first_loop and second_loop, and then in __init__ assign self.f = self.first_loop or self.f = self.second_loop depending on the switch:
class SuperUnderperformingAccordingToManyYetReadable(object):
    def __init__(self, switch):
        self.switch = switch
        if switch:
            self.f = self._first_loop
        else:
            self.f = self._second_loop

    def _first(self):
        pass

    def _second(self):
        pass

    def _third(self):
        pass

    def _first_loop(self):
        while ...:
            self._first()
            self._second()
            self._third()

    def _second_loop(self):
        while ...:
            self._first()
            self._third()
You may need to do some extra work to manage breaking out of the while loop.
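A minimal runnable sketch of the same dispatch-in-__init__ idea; the class and method names here are illustrative, and a log list stands in for the real loop body:

```python
class Worker:
    def __init__(self, switch=False):
        # Pick the loop implementation once, instead of testing switch per iteration.
        self.f = self._loop_with_extra if switch else self._loop_plain

    def _step(self, log):
        log.append('step')       # stands in for the dozens of common lines

    def _extra(self, log):
        log.append('extra')      # stands in for the single conditional line

    def _loop_plain(self):
        log = []
        for _ in range(2):
            self._step(log)
        return log

    def _loop_with_extra(self):
        log = []
        for _ in range(2):
            self._step(log)
            self._extra(log)
        return log

print(Worker(switch=False).f())  # ['step', 'step']
print(Worker(switch=True).f())   # ['step', 'extra', 'step', 'extra']
```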
If the .switch attribute is not supposed to change, you can select the loop body dynamically in the __init__() method:
def __init__(self, switch=False):
    self.switch = switch
    self.__fBody = self.__fSwitchTrue if switch else self.__fSwitchFalse

def f(self):
    self.__done = False
    while not self.__done:
        self.__fBody()

def __fSwitchTrue(self):
    self.__fBodyStart()
    ...  # a single line of code
    self.__fBodyEnd()

def __fSwitchFalse(self):
    self.__fBodyStart()
    self.__fBodyEnd()

def __fBodyStart(self):
    ...  # a dozen lines of code

def __fBodyEnd(self):
    ...  # another dozen lines of code
Remember to change values used by more than one of the defined methods into attributes (like done, which becomes self.__done).
In a comment to my original question, JohnColeman suggested using exec and provided a link to another relevant question.
That was an excellent suggestion, and the solution I was led to is:
_template_pre = """\
def f(self):
    for i in range(5):
        print("Executing code before the optional segment.")
"""
_template_opt = """\
        print("Executing the optional segment")
"""
_template_post = """\
        print("Executing code after the optional segment.")
"""

class aClass:
    def __init__(self, switch=False):
        if switch:
            fdef = _template_pre + _template_opt + _template_post
        else:
            fdef = _template_pre + _template_post
        exec(fdef, globals(), self.__dict__)
        # bind the function
        self.f = self.f.__get__(self)
You can verify this actually works:
aClass(switch = False).f()
aClass(switch = True).f()
Before jumping to conclusions as to how "pythonic" this is, let me point out that such an approach is employed in a couple of metaclass recipes I have encountered and even in the Python Standard Library (check the implementation of namedtuple, to name one example).

Decrementing Function Arguments (PYTHON)

I'm calling functions similar to those that follow, inside a loop:
def bigAnim(tick, firstRun):
    smallAnim(x, y, duration)
    # more anims and logic...

def smallAnim(x, y, duration):
    duration -= 1
    if duration != 0:
        Anim.blit(screen, (x, y))
        Anim.play()
    else:
        Anim.stop()
        loopedOnce = True
        return loopedOnce
Now say I were to call the smallAnim inside the big anim as follows:
def bigAnim(tick, firstRun):
    smallAnim(0, 50, 5)
smallAnim is now being called indefinitely, as duration will never go lower than 4 (being reset to 5 every time it's called in the loop). What would be the best way to solve this problem?
You need to do the counting in bigAnim and only call smallAnim() when the value is greater than zero.
Or you can return the current duration:
def bigAnim(tick, firstRun):
    duration = smallAnim(x, y, duration)
    # more anims and logic...

def smallAnim(x, y, duration):
    duration -= 1
    if duration > 0:
        Anim.blit(screen, (x, y))
        Anim.play()
    return duration
Your underlying problem is that Python passes references to objects, but integers are immutable, so rebinding a parameter inside a function does not affect the caller.
This is a little easier to understand with strings:
The function
def foo(s):
    s = " world"

will only rebind s locally inside the function when you call foo("hello"). The typical pattern you'll see instead is:
def foo(s):
    return s + " world"

And then call it with print(foo("hello")).
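A runnable version of that contrast; foo_rebind and foo_return are illustrative names:

```python
def foo_rebind(s):
    s = s + " world"   # rebinds the local name only; the caller is unaffected

def foo_return(s):
    return s + " world"

greeting = "hello"
foo_rebind(greeting)
print(greeting)              # hello  (unchanged)
print(foo_return(greeting))  # hello world
```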
