Pythonic enumeration of while loop - python

Python has an elegant way of automatically generating a counter variable in for loops: the enumerate function. This removes the need to initialize and increment a counter variable. Counter variables are also ugly because they are often useless once the loop is finished, yet their scope is not limited to the loop, so they clutter the namespace without need (although I am not sure whether enumerate actually solves this).
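For instance, a minimal illustration (in the same Python 2 style as the code below):
for i, ch in enumerate('abc'):
    print i, ch  # prints 0 a, then 1 b, then 2 c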
My question is whether there is a similar pythonic solution for while loops. enumerate won't work for while loops, since enumerate returns an iterator. Ideally, the solution should be "pythonic" and not require function definitions.
For example:
x = 0
c = 0
while x < 10:
    x = int(raw_input())
    print x, c
    c += 1
In this case we would want to avoid initializing and incrementing c.
Clarification:
This can be done with an endless for loop with manual termination, as some have suggested, but I am looking for a solution that makes the code clearer, and I don't think that approach does in this case.

Improvement (in readability, I'd say) to Ignacio's answer:
import itertools

x = 0
for c in itertools.takewhile(lambda c: x < 10, itertools.count()):
    x = int(raw_input())
    print x, c
Advantages:
Only the loop condition appears in the loop header, not the side-effecting raw_input call.
The loop condition can depend on any condition that a normal while loop could. It's not necessary to "import" the variables referenced into the takewhile, as they are already visible in the lambda scope. Additionally it can depend on the count if you want, though not in this case.
Simplified: enumerate no longer appears at all.

Again with the itertools...
import itertools

for c, x in enumerate(
        itertools.takewhile(lambda v: v < 10,
                            (int(raw_input()) for z in itertools.count()))):
    print c, x

If you want zero initialization before the while loop, you can use a Singleton with a counter:
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super(Singleton, cls).__new__(
                cls, *args, **kwargs)
            cls.count = 0
        else:
            cls.count += 1
        return cls._instance
Then there will only ever be one instance of Singleton, and each additional instantiation just adds one to the count:
>>> Singleton().count # initial instance
0
>>> Singleton().count
1
>>> Singleton().count
2
>>> Singleton().count
3
Then your while loop becomes:
while Singleton():
    x = int(raw_input('x: '))
    if x > 10: break
print 'While loop executed', Singleton().count, 'times'
Entering 1,2,3,11 it prints:
x: 1
x: 2
x: 3
x: 11
While loop executed 4 times
If you do not mind a single line of initialization before the while loop, you can just subclass an iterator:
import collections

class WhileEnum(collections.Iterator):  # 'collections.abc.Iterator' on Python 3
    def __init__(self, stop=None):
        self.stop = stop
        self.count = 0

    def next(self):  # '__next__' on Py 3, 'next' on Py 2
        if self.stop is not None:
            self.remaining = self.stop - self.count
            if self.count >= self.stop:
                return False
        self.count += 1
        return True

    def __call__(self):
        return self.next()
Then your while loop becomes:
enu = WhileEnum()
while enu():
    i = int(raw_input('x: '))
    if i > 10: break
print enu.count
I think the second is the far better approach. You can have multiple enumerators and you can also set a limit on how many loops to go:
limited_enum = WhileEnum(5)
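For instance, using that cap (a quick sketch in the same Python 2 style):
while limited_enum():
    print 'iteration', limited_enum.count  # the body runs exactly 5 times; count goes 1 through 5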

I don't think it's possible to do what you want in the exact way you want it. If I understand right, you want a while loop that increments a counter each time through, without actually exposing a visible counter outside the scope of the loop. I think the way to do this would be to rewrite your while loop as a nonterminating for loop, and check the end condition manually. For your example code:
import itertools

x = 0
for c in itertools.count():
    x = int(raw_input())
    print x, c
    if x >= 10:
        break
The problem is that fundamentally you're doing iteration, with the counter. If you don't want to expose that counter, it needs to come from the loop construct. Without defining a new function, you're stuck with a standard loop and an explicit check.
On the other hand, you could probably also define a generator for this. You'd still be iterating, but you could at least wrap the check up in the loop construct.
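For example, a minimal sketch of such a generator (the name while_enumerate is my own, not a standard helper; it assumes the predicate reads a variable the loop body rebinds, as with the module-level x here):
def while_enumerate(predicate):
    # yield 0, 1, 2, ... for as long as predicate() stays true
    c = 0
    while predicate():
        yield c
        c += 1

x = 0
for c in while_enumerate(lambda: x < 10):
    x = int(raw_input())
    print x, c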

Related

Iteration for the last value of iteration in Python

How can I define a function in Python in such a way that it takes the previous value of my iteration, where I define the initial value?
My function is defined as follows:
def Deulab(c, yh1, a, b):
    Deulab = c - (EULab(c, yh1, a, b) - 1) * 0.3
    return (Deulab, yh1, a, b)
Output is
Deulab(1.01, 1, 4, 2)
0.9964391705626454
Now I want to iterate, keeping yh1, a and b fixed, starting with c0=1 and iterating recursively for c.
The most pythonic way of doing this is to define an iterating generator:
def iterates(f, x):
    while True:
        yield x
        x = f(x)

# test:
def f(x):
    return 3.2*x*(1-x)

orbit = iterates(f, 0.1)
for _ in range(10):
    print(next(orbit))
Output:
0.1
0.2880000000000001
0.6561792000000002
0.7219457839595519
0.6423682207442558
0.7351401271107676
0.6230691859914625
0.7515327214700762
0.5975401280955426
0.7695549549155365
You can use the generator until some stop criterion is met. For example, in fixed-point iteration you might iterate until two successive iterates are within some tolerance of each other. The generator itself will go on forever, so when you use it you need to make sure that your code doesn't go into an infinite loop (e.g. don't simply assume convergence).
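For example, a fixed-point loop over the generator might look like this (a sketch; I use math.cos here because that iteration actually converges, whereas the logistic map above settles into a 2-cycle):
import math

def iterates(f, x):
    while True:
        yield x
        x = f(x)

orbit = iterates(math.cos, 1.0)
prev = next(orbit)
for cur in orbit:
    if abs(cur - prev) < 1e-10:
        break
    prev = cur
print(cur)  # ~0.739085, the fixed point of cos(x) = x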
It sounds like you are after recursion.
Here is a basic example
def f(x):
    x += 1
    if x < 10:
        x = f(x)
    return x

print(f(4))
In this example a function calls itself until a criterion is met.
CodeCupboard has supplied an example which should fit your needs.
This is a slightly more persistent version of that, which allows you to pick up where you were across multiple separate function calls:
class classA:
    # Declare initial values for class variables here
    fooResult = 0  # Say, taking 0 as an initial value, not unreasonable!

    def myFoo1(x):
        y = 2*x + classA.fooResult  # A simple example function
        classA.fooResult = y  # Update the class variable, so the next call uses it when computing y
        return y  # Return the calculation to wherever you called it from

# Example call
rtn = classA.myFoo1(5)
# rtn will be 10, as this is the first call to the function, so the class variable had its initial state of 0

# Example call 2
rtn2 = classA.myFoo1(3)
# rtn2 will be 16, as the class variable held 10 when you called classA.myFoo1()
So if you were working with a dataset where you didn't know what the second call would be (i.e. the 3 in call2 above was unknown), then you can revisit the function without having to worry about handling the data retention in your top level code. Useful for a niche case.
Of course, you could use it as per:
list1 = [1, 2, 3, 4, 5]
for i in list1:
    rtn = classA.myFoo1(i)
Which would give you a final rtn value of 30 when you exit the for loop.

Intermediate results from recursion

I have a problem where I need to produce something which is naturally computed recursively, but where I also need to be able to interrogate the intermediate steps in the recursion if needed.
I know I can do this by passing and mutating a list or similar structure. However, this looks ugly to me and I'm sure there must be a neater way, e.g. using generators. What I would ideally love to be able to do is something like:
intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
in an efficient way. While my solution is not performance critical, I can't justify the massive amount of redundant effort in that first line. It looks to me like a generator would be perfect for this except for the fact that f is fundamentally much more suited to recursion in my case (which at least in my mind is the complete opposite of a generator, but maybe I'm just not thinking far enough outside of the box).
Is there a neat Pythonic way of doing something like this that I just don't know about, or do I just need to just capitulate and pollute my function f by passing it an intermediate_results list which I then mutate as a side-effect?
I have a generic solution for you using a decorator. We create a Memoize class which stores the results of previous executions of the function (including those made in recursive calls). If the arguments given have already been seen, the cached version is used to quickly look up the result.
The custom class has the benefit over an lru_cache that you can inspect the stored results.
from functools import wraps

class Memoize:
    def __init__(self):
        self.store = {}

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            if args not in self.store:
                self.store[args] = fun(*args)
            return self.store[args]
        return wrapper

m = Memoize()

@m.save
def fibo(n):
    if n <= 0: return 0
    elif n == 1: return 1
    else: return fibo(n-1) + fibo(n-2)
Then after running different things you can see what the cache contains. When you run future function calls, m.store will be used as a lookup so calculation doesn't need to be redone.
>>> fibo(8)
21
>>> m.store
{(1,): 1,
(0,): 0,
(2,): 1,
(3,): 2,
(4,): 3,
(5,): 5,
(6,): 8,
(7,): 13,
(8,): 21}
You could modify the save function to use the name of the function and the args as the key, so that multiple function results can be stored in the same Memoize class.
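For example, a sketch of that variant (a hypothetical drop-in replacement for Memoize.save above):
def save(self, fun):
    @wraps(fun)
    def wrapper(*args):
        key = (fun.__name__,) + args  # the function name becomes part of the cache key
        if key not in self.store:
            self.store[key] = fun(*args)
        return self.store[key]
    return wrapper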
You can use your existing solution that makes many "redundant" calls to f, but employ function caching to save the results of previous calls to f.
In other words, when f(x1) is called, its input arguments and corresponding return value are saved, and the next time it is called with the same arguments, the result is simply pulled from the cache.
see functools.lru_cache for the standard library solution to this
ie:
from functools import lru_cache

@lru_cache(maxsize=None)
def f(x):
    ...  # your recursive definition, now cached

intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
Note, however, f must be a pure function (no side-effects, and a deterministic mapping from arguments to result) for this to work properly.
Having considered your comments, I'll now try to give another perspective on the problem.
So, let's consider a concrete example:
def f(x):
    a = 2
    return g(x) + a if x != 0 else 0

def g(x):
    b = 1
    return h(x) - b

def h(x):
    c = 1/2
    return f(x-1)*(1+c)
I
First of all, it should be mentioned that (in our particular case) the algorithm has form of: f(x) = p(f(x - 1)) for some p. It follows that f(x) = p^x(f(0)) = p^x(0). That means we should just apply p to 0 x times to get the desired result, which can be done in an iterative process, so this can be written without recursion. Though I believe that your real case is much harder. Moreover, it would be too boring and uninformative to stop here)
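For this particular f, g, h, the iterative rewrite might look like the following sketch (the composed step works out to p(v) = 1.5*v + 1; the names are my own):
def p(v):
    # one unrolled recursion step: f(x) = (f(x-1)*1.5 - 1) + 2 = 1.5*f(x-1) + 1
    return 1.5 * v + 1

def f_iterative(x):
    val = 0  # f(0) == 0
    for _ in range(x):
        val = p(val)
    return val

print(f_iterative(4))  # 8.125, matching the recursive definition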
II
Generally speaking, we can divide all possible solutions into two groups: the ones that require refactoring (i.e. rewriting functions f, g, h) and the ones that do not. I have little to offer in the latter group (and I don't think anyone does). Consider the following, however:
def fk(x, k):
    a = 2
    return k(gk(x, k) + a if x != 0 else 0)

def gk(x, k):
    b = 1
    return k(hk(x, k) - b)

def hk(x, k):
    c = 1/2
    return k(fk(x-1, k)*(1+c))

def printret(x):
    print(x)
    return x

fk(4, printret)  # see what happens
Inspired by continuation-passing style, but that's totally not it.
What's the point? It's something between your idea of passing a list to write down all the computations and memoizing. This k carries additional behavior with it, such as printing or writing to a list (you can make a function that writes to some list, why not?). But if you look carefully you'll see that it leaves the inner code of these functions practically untouched (only the input to and output from each function are affected), so one can produce a decorator associated with a function like printret that does essentially the same thing for f, g, h; a sketch follows after the pros and cons.
Pros: no need to modify code, much more flexible than passing a list, no additional work (like in memoizing).
Cons: Impure (printing or modifying sth), not so flexible as we would like.
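Here is a rough sketch of what such a decorator could look like (my own naming; it works because decorating f, g, h rebinds the module-level names, so the recursive calls also go through the wrapper):
def with_k(k):
    def deco(fun):
        def wrapper(*args):
            return k(fun(*args))  # pass every return value through k
        return wrapper
    return deco

@with_k(printret)
def f(x):
    a = 2
    return g(x) + a if x != 0 else 0

# decorating g and h the same way reproduces the fk/gk/hk behavior
# without touching the function bodies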
III
Now let's see how modifying function bodies can help. Don't be afraid of what's written below, take your time and play with that thing a little.
class Logger:
    def __init__(self, lst, cur_val):
        self.lst = lst
        self.cur_val = cur_val

    def bind(self, f):
        res = f(self.cur_val)
        return Logger([self.cur_val] + res.lst + self.lst, res.cur_val)

    def __repr__(self):
        return "Logger( " + repr({'value': self.cur_val, 'lst': self.lst}) + " )"

def unit(x):
    return Logger([], x)
# you can also play with lala
def lala(x):
    if x <= 0:
        return unit(1)
    else:
        return lala(x - 1).bind(lambda y: unit(2*y))

def f(x):
    a = 2
    if x == 0:
        return unit(0)
    else:
        return g(x).bind(lambda y: unit(y + a))

def g(x):
    b = 1
    return h(x).bind(lambda y: unit(y - b))

def h(x):
    c = 1/2
    return f(x-1).bind(lambda y: unit(y*(1+c)))

f(4)  # see for yourself
Logger is called a monad. I'm not very familiar with this concept myself, but I guess I'm doing everything right) f, g, h are functions that take a number and return a Logger instance. Logger's bind takes in a function (like f) and returns a Logger with the new value (computed by f) and updated 'logs'. The key point - as I see it - is the ability to do whatever we want with the collected intermediate values, in the order they were calculated.
Afterword
I'm not at all some kind of 'guru' of functional programming, and I believe I'm missing a lot of things here. But what I've understood is that functional programming is about inverting the flow of the program. That's why, for instance, I totally agree with your opinion about generators being opposed to functional programming. When we use generator gen in, say, function func, we yield values one by one to func, and func does something with them in e.g. a loop. The functional approach would be to make gen a function taking func as a parameter and make func perform its computations on the 'yielded' values. It's like gen and func exchanged their places. So the flow is inverted! And there are plenty of other ways of inverting the flow. Monads are one of them.
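A tiny illustration of that inversion (my own example): instead of func pulling values out of gen, gen pushes each value into func:
def gen(func):
    # the producer drives the flow and pushes values into the consumer
    for v in (1, 2, 3):
        func(v)

gen(print)  # prints 1, 2, 3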
itertools.islice takes an iterable plus start and stop values, and gives you the elements between start and stop as a generator. If islice is not clear, you can check the docs here: https://docs.python.org/3/library/itertools.html
import itertools

intermediate_result = map(f, range(T))  # lazy in Python 3, so no redundant work up front
final_result = next(itertools.islice(intermediate_result, T-1, T))

python newbie: for j in range

I am new to Python and learning it these days. While reading a book, I found a line of code that I can't understand.
Please see line 46, in the print_progression() method: print(' '.join(str(next(self)) for j in range(n))).
class Progression:
    '''Iterator producing a generic progression.

    Default iterator produces the whole numbers, 0, 1, 2, ...
    '''
    def __init__(self, start=0):
        '''Initialize current to the first value of the progression.'''
        self._current = start

    def _advance(self):
        '''Update self._current to a new value.

        This should be overridden by a subclass to customize progression.
        By convention, if current is set to None, this designates the
        end of a finite progression.
        '''
        self._current += 1

    def __next__(self):
        '''Return the next element, or else raise StopIteration error.'''
        # Our convention to end a progression
        if self._current is None:
            raise StopIteration()
        else:
            # record current value to return
            answer = self._current
            # advance to prepare for next time
            self._advance()
            # return the answer
            return answer

    def __iter__(self):
        '''By convention, an iterator must return itself as an iterator.'''
        return self

    def print_progression(self, n):
        '''Print next n values of the progression.'''
        print(' '.join(str(next(self)) for j in range(n)))
class ArithmeticProgression(Progression):  # inherit from Progression
    pass

if __name__ == '__main__':
    print('Default progression:')
    Progression().print_progression(10)

'''Output is
Default progression:
0 1 2 3 4 5 6 7 8 9'''
I have no idea how next(self) and j work.
I think it should be str(Progression.next()). (solved)
I cannot find j anywhere. What is j for? Why not use a while loop, such as while Progression.next() <= range(n)?
For my final thought, it should be
print(' '.join(str(next(self)) while next(self) <= range(n)))
Save this newbie.
Thanks in advance!
I think @csevier added a reasonable discussion about your first question, but I'm not sure the second question is answered as clearly for you based on your comments, so I'm going to try a different angle.
Let's say you did:
for x in range(10):
    print(x)
That's reasonably understandable - you created a list [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] and you printed each of the values in that list in turn. Now let's say that we wanted to just print "hello" 10 times; well we could modify our existing code very simply:
for x in range(10):
    print(x)
    print('hello')
Umm, but now the x is messing up our output. There isn't a:
do this 10 times:
    print('hello')
syntax. We could use a while loop but that means defining an extra counter:
loop_count = 0
while loop_count < 10:
    print('hello')
    loop_count += 1
That's baggage. So, the better way would be just to use for x in range(10): and just not bother doing print(x); the value is there to make our loop work, not because it's actually useful in any other way. This is the same for j (though I've used x in my examples because I think you're more likely to encounter it in tutorials, but you could use almost any name you want). Also, while loops are generally used for loops that can run indefinitely, not for iterating over an object with fixed size: see here.
Welcome to the python community! This is a great question. In python, as in other languages, there are many ways to do things. But when you follow a convention that the python community does, that is often referred to as a "pythonic" solution. The method print_progression is a common pythonic solution to iteration of a user defined data structure. In the case above, lets explain first how the code works and then why we would do it that way.
Your print_progression method takes advantage of the fact that your Progression class implements the iteration protocol via the __next__ and __iter__ dunder/magic methods. Because those are implemented, you can iterate your class instance both internally, as next(self) does, and externally, as next(Progression()) would, which is exactly what you were getting at with your number 1. Because this protocol is implemented, this class can be used in any builtin iterator and generator context by any client! That's a polymorphic solution. It's just used internally as well because you don't need to do it in two different ways.
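For instance, a quick sketch of the external use (assuming the Progression class above):
p = Progression()
print(next(p))                           # 0: calling the protocol directly
print([v for _, v in zip(range(3), p)])  # [1, 2, 3]: any iterator context works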
Now for the unused j variable. They are just using that so they can use the for loop. Just writing range(n) on its own would return an iterable but not iterate over it. I don't quite agree with the author's use of the variable name j; it's more common to denote an unused variable, one that exists only because it needs to, with a single underscore. I like this a little better:
print(' '.join(str(next(self)) for _ in range(n)))

python generator endless stream without using yield

I'm trying to generate an endless stream of results given a function f and an initial value x,
so the first call should give the initial value, the second call should give f(x), the third call gives f(x2), where x2 is the previous result of f(x), and so on.
What I have come up with:
def generate(f, x):
    return itertools.repeat(lambda x: f(x))
which does not seem to work. Any ideas? (I can't use yield in my code, and I can't use more than one line of code for this problem.) Any help would be appreciated.
Also note that in a previous exercise I was asked to use yield, with no problems:
while True:
    yield x
    x = f(x)
This works fine, but now I have no clue how to do it without yield.
In Python 3.3, you can use itertools.accumulate:
import itertools

def generate(f, x):
    return itertools.accumulate(itertools.repeat(x), lambda v, _: f(v))

for i, val in enumerate(generate(lambda x: 2*x, 3)):
    print(val)
    if i == 10:
        break
I think this works:
import itertools as it

def g(f, x):
    return it.chain([x], (setattr(g, 'x', f(getattr(g, 'x', x))) or getattr(g, 'x')
                          for _ in it.count()))

def f(x):
    return x + 1

gen = g(f, 1)
print next(gen)
print next(gen)
print next(gen)
print next(gen)
Of course, it relies on some sketchy behavior where I actually add an attribute to the function itself to keep the state. Basically, this function will only work the first time you call it. After that, all bets are off.
If we want to relax that restriction, we can use a temporary namespace. The problem is that to get a temporary namespace we need a unique class instance (or a class, but an instance is cleaner and only requires one extra set of parentheses). To make that happen in one line, we need to create a new function inline and give it that instance as a default argument:
import itertools as it

def g(f, x):
    return (lambda f, x, ns=type('foo', (object,), {})():
                it.chain([x],
                         (setattr(ns, 'x', f(getattr(ns, 'x', x))) or getattr(ns, 'x')
                          for _ in it.count()))
            )(f, x)

def f(x):
    return x + 1

gen = g(f, 1)
print next(gen) == 1
print next(gen) == 2
print next(gen) == 3
print next(gen) == 4
print "first worked?"
gen2 = g(f, 2)
print next(gen2) == 2
print next(gen2) == 3
print next(gen2) == 4
I've broken it into a few lines, for readability, but it's a 1-liner at heart.
A version without any imports
(and the most robust one yet I believe).
def g(f, x):
    return iter(lambda f=f, x=x, ns=type('foo', (object,), {'x': x}):
                    ((getattr(ns, 'x'), setattr(ns, 'x', f(getattr(ns, 'x'))))[0]),
                object())
One trick here is the same as before. We create a lambda function with a mutable default argument to keep the state. Inside the function, we build a tuple. The first item is what we actually want; the second item is the return value of the setattr function, which is used to update the state. In order to get rid of the itertools.chain, we set the initial value on the namespace to the value of x so the class is already initialized to have the starting state. The second trick is that we use the two-argument form of iter to get rid of it.count(), which was only used to create an infinite iterable before. iter keeps calling the function you give it as the first argument until the return value is equal to the second argument. However, since my second argument is an instance of object, nothing returned from our function will ever be equal to it, so we've effectively created an infinite iterable without itertools or yield! Come to think of it, I believe this last version is the most robust, too. Previous versions had a bug where they relied on the truthiness of the return value of f. I think they might have failed if f returned 0. This last version fixes that bug.
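If the two-argument form of iter is unfamiliar, here is a standalone demonstration (my own example, unrelated to the generator above):
import random

# iter(callable, sentinel) keeps calling the callable until it returns the sentinel
rolls = iter(lambda: random.randint(1, 6), 6)
print list(rolls)  # die rolls up to, but not including, the first 6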
I'm guessing this is some sort of homework or assignment? As such, I'd say you should take a look at generator expressions. Though I agree with the other commenters that this seems an exercise of dubious value...

python expression that evaluates to True n times

I have a function
def f():
    while True:
        blah
I want to alter f in such a way that the caller could control the number of times the while loop in f runs, without altering much of the code in f (specially not adding a counter in f). Something like
def f(num_executions=True):
    while num_executions:
        blah()
f() will run an infinite loop
but f(an_expression_that_evaluates_to_true_n_times) will run the while loop n times.
What could such an expression be?
UPDATE:
I know, there are plenty of way to control how many times a loop will run, but the real question here is -
Can an expression in python evaluate to True for configurable number of times?
Some ideas I am toying with
-making an expression out of list = list[:-1]
-modifying default parameters of a function within a function
No need for a while-loop. Use a for-loop:
>>> def f(n):
...     for _ in range(n):
...         dostuff()
_ is conventionally used as a placeholder name for a loop variable that is never read. This loop runs n times, so f(5) would loop five times.
While I agree with the others that this is a bad idea, it is entirely (and easily) possible:
class BoolChange:
    def __init__(self):
        self.count = 0

    def __bool__(self):
        self.count += 1
        return self.count <= 5

x = BoolChange()
while x:
    print("Running")
This outputs Running five times, then exits.
The main reason this is a bad idea is that it means checking the state of the object modifies it, which is weird behaviour people won't expect. I can't imagine a good use case for this.
You can't do exactly what you describe. What is passed in Python is not an expression, but a value - an object. An immutable object will in general evaluate to either True or to False and will not change during the loop. A mutable object can change its truth value, but you can't make an arbitrary object change during a general loop (which does not touch it in any way). In general, as has been said here, you really need to use a for statement, or pass in a callable object (say, a function):
def f(is_true=lambda: True):
    while is_true():
        blah()
Note that the reason the callable solution is acceptable, while the "hiding in boolean" @Lattyware demonstrated is not, is that the additional computation here is explicit - the () tells the reader that almost anything can happen here, depending on the object passed, and you don't need to know that the __bool__ method of this object is silently called and is expected to have side effects.
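If you really do want something that evaluates to True exactly n times, one explicit way is a callable factory (a sketch; true_n_times is my own name):
def true_n_times(n):
    state = {'left': n}
    def check():
        state['left'] -= 1
        return state['left'] >= 0
    return check

f(true_n_times(3))  # blah() runs exactly three times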
def f(c=-1):
    while c:
        print 'blah'
        if c > 0: c -= 1
How about using a generator as a coroutine?
def blah():
    print 'hi'

def f():
    while True:
        blah()
        yield

x = f()
next(x)  # "hi"
The generator here isn't used for what it yields; rather, you get to control how many times it blahs externally, because it yields control every time it blahs.
for i in range(3):
    next(x)  # blah blah blah
This will also work -
def foo(n=[1, 2, 3]):
    # Python 2 only: func_defaults is spelled __defaults__ on Python 3
    foo.func_defaults = (foo.func_defaults[0][:-1],)
    return n

while foo():
    blah()
