I intend to make a while loop inside a defined function, and I want to return a value on every iteration. Yet Python doesn't let me keep iterating: the loop stops as soon as it returns.
Here is the plan:
def func(x):
    n = 3
    while n > 0:
        x = x + 1
        return x

print(func(6))
I know the reason for this issue: the return statement breaks the loop.
Still, I insist on using a defined function. Is there a way to keep iterating while handing back a value on each pass, given that the code is inside a defined function?
When you want to return a value and continue the function in the next call at the point where you returned, use yield instead of return.
Technically this produces a so-called generator, which gives you the return values one by one. With next() you can iterate over the values. You can also convert it into a list or some other data structure.
Your original function would look like this:
def foo(n):
    for i in range(n):
        yield i
And to use it:
gen = foo(100)
print(next(gen))
or
gen = foo(100)
l = list(gen)
print(l)
Keep in mind that the generator calculates the results 'on demand', so it does not allocate memory to store all the results. When converting it into a list, all results are calculated and stored in memory, which causes problems for large n.
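To make the memory point concrete, here is a small sketch (the sizes are approximate and vary across Python versions):

import sys

gen = foo(1_000_000)        # nothing is computed yet
lst = list(foo(1_000_000))  # all million values are computed and stored

print(sys.getsizeof(gen))   # ~200 bytes: just the generator's paused state
print(sys.getsizeof(lst))   # ~8 MB: one pointer per stored element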
Depending on your use case, you may simply use print(x) inside the loop and then return the final value.
If you actually need to return intermediate values to the caller, you can use yield, which turns the function into a generator and lets you hand back values one at a time.
Example:
def func(x):
    n = 3
    while n > 0:
        x = x + 1
        n = n - 1  # decrement the counter so the generator eventually stops
        yield x
func_call = func(6) # create generator
print(next(func_call)) # 7
print(next(func_call)) # 8
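The remaining value can be pulled the same way, or a fresh generator can be consumed all at once:

print(next(func_call))  # 9
print(list(func(6)))    # [7, 8, 9]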
Related
How can I define a function in Python in such a way that it takes the previous value of my iteration, where I define the initial value?
My function is defined as follows:
def Deulab(c, yh1, a, b):
    Deulab = c - (EULab(c, yh1, a, b) - 1) * 0.3
    return (Deulab, yh1, a, b)
Output is

>>> Deulab(1.01, 1, 4, 2)
0.9964391705626454
Now I want to iterate, keeping yh1, a, b fixed, starting with c0 = 1, and iterating recursively on c.
The most pythonic way of doing this is to define an iterating generator:
def iterates(f, x):
    while True:
        yield x
        x = f(x)

# test:
def f(x):
    return 3.2 * x * (1 - x)

orbit = iterates(f, 0.1)
for _ in range(10):
    print(next(orbit))
Output:
0.1
0.2880000000000001
0.6561792000000002
0.7219457839595519
0.6423682207442558
0.7351401271107676
0.6230691859914625
0.7515327214700762
0.5975401280955426
0.7695549549155365
You can use the generator until some stop criterion is met. For example, in fixed-point iteration you might iterate until two successive iterates are within some tolerance of each other. The generator itself will go on forever, so when you use it you need to make sure that your code doesn't go into an infinite loop (e.g. don't simply assume convergence).
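For illustration, here is a minimal sketch of such a stopping rule (the name fixed_point, the tolerance, and the step cap are illustrative choices, not part of the answer above):

def fixed_point(f, x0, tol=1e-9, max_steps=1000):
    # consume the iterates until two successive values are within tol,
    # with a step cap so a non-converging orbit cannot loop forever
    orbit = iterates(f, x0)
    prev = next(orbit)
    for _ in range(max_steps):
        cur = next(orbit)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
    raise RuntimeError("no convergence within max_steps")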
It sounds like you are after recursion.
Here is a basic example:
def f(x):
    x += 1
    if x < 10:
        x = f(x)
    return x

print(f(4))
In this example a function calls itself until a criterion is met.
CodeCupboard has supplied an example which should fit your needs.
This is a bit more persistent version of that, which allows you to pick up where you left off across multiple separate function calls:
class classA:
    # Declare initial values for class variables here
    fooResult = 0  # say, taking 0 as an initial value, not unreasonable!

    @staticmethod
    def myFoo1(x):
        y = 2 * x + classA.fooResult  # a simple example function
        classA.fooResult = y  # update the class variable, so the next call uses it when calculating y
        return y  # return the calculation to wherever you called it from

# Example call
rtn = classA.myFoo1(5)
# rtn will be 10, as this is the first call to the function, so the class variable had its initial state of 0

# Example call 2
rtn2 = classA.myFoo1(3)
# rtn2 will be 16, as the class variable had a state of 10 when you called classA.myFoo1()
So if you were working with a dataset where you didn't know what the second call would be (i.e. the 3 in call2 above was unknown), then you can revisit the function without having to worry about handling the data retention in your top level code. Useful for a niche case.
Of course, you could use it as per:
list1 = [1, 2, 3, 4, 5]
for i in list1:
    rtn = classA.myFoo1(i)
Which would give you a final rtn value of 30 when you exit the for loop (assuming fooResult starts from its initial state of 0).
This function returns an array of n-digit numbers, but it uses a lot of memory. How can I improve this function to reduce its memory use?
def think(n=5):
    if n == 1:
        return [str(i) for i in range(1, 10)]
    else:
        result = []
        for i in think(n - 1):
            for j in range(10):
                result.append(i + str(j))
        return result
You can start by moving the result list out of the recursion. As written, every recursive call creates a new intermediate list. Instead, create the list once before calling the recursive function and keep passing it down as a function parameter, as sketched below.
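Here is a sketch of that idea (the prefix and result parameters are illustrative; a generator with yield would be another way to cut memory further):

def think(n=5, prefix='', result=None):
    # create the shared result list exactly once, on the outermost call
    if result is None:
        result = []
    if n == 0:
        result.append(prefix)
    else:
        start = 1 if prefix == '' else 0  # no leading zero, as in the original
        for d in range(start, 10):
            think(n - 1, prefix + str(d), result)
    return result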
I have the following function:
def infinite_sequence(starting_value, function):
    value = starting_value
    while True:
        yield value
        value = function(value)
Is it possible to express this as a generator comprehension? If we were dealing with a fixed range, instead of an infinite sequence, it could be dealt with like so: (Edit: actually that's wrong)
(function(value) for value in range(start, end))
But since we're dealing with an infinite sequence, is it possible to express this using a generator comprehension?
You would need some sort of recursive generator expression:
infinite_sequence = map(f, itertools.chain([x], __currentgenerator__))
where __currentgenerator__ is a hypothetical magic reference to the generator expression it occurs in. (Note that the issue is not that you want an infinite sequence, but that the sequence is defined recursively in terms of itself.)
Unfortunately, Python does not have such a feature. Haskell is an example of a language that does, due to its lazy argument evaluation:
infinite_sequence = map f (x : infinite_sequence)
You can, however, achieve something similar in Python 3 while still using a def statement, by defining a recursive generator.
import itertools

def infinite_sequence(f, sv):
    x = f(sv)
    yield from itertools.chain([x], infinite_sequence(f, x))
(itertools.chain isn't strictly necessary; you could use
def infinite_sequence(f, sv):
    x = f(sv)
    yield x
    yield from infinite_sequence(f, x)
but I was attempting to preserve the flavor of the Haskell expression x:infinite_sequence.)
This is itertools.accumulate, using repeat(start) only for its first value and ignoring every repeated value after that:
from itertools import accumulate, repeat

def function(x):
    return x * 2

start = 1
seq = accumulate(repeat(start), lambda last, _: function(last))
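For example, the first few values can be drawn off with itertools.islice:

from itertools import islice
print(list(islice(seq, 5)))  # [1, 2, 4, 8, 16]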
Just write it out in full, though.
Yes, just use an infinite iterator such as itertools.count:
(function(value) for value in itertools.count(starting_value))
Note, however, that this applies function to the successive integers starting_value, starting_value + 1, ..., rather than to its own previous output, so it is not equivalent to the original generator in general.
What is lazy evaluation in Python?
One website said :
In Python 3.x the range() function returns a special range object which computes elements of the list on demand (lazy or deferred evaluation):
>>> r = range(10)
>>> print(r)
range(0, 10)
>>> print(r[3])
3
What is meant by this?
The object returned by range() (or xrange() in Python 2.x) is known as a lazy iterable.
Instead of storing the entire range, [0, 1, 2, ..., 9], in memory, it stores a definition for (i=0; i<10; i+=1) and computes the next value only when needed (a.k.a. lazy evaluation).
Essentially, a generator allows you to return a list-like structure lazily, but here are some differences (strictly speaking, range itself is not a generator: it supports indexing, as r[3] above shows, and can be iterated more than once, but the lazy principle is the same):
A list stores all elements when it is created. A generator generates the next element when it is needed.
A list can be iterated over as much as you need; a generator can only be iterated over exactly once (demonstrated below).
A list can get elements by index, a generator cannot -- it only generates values once, from start to end.
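A quick demonstration of the one-shot behaviour:

gen = (x for x in range(3))
print(list(gen))  # [0, 1, 2]
print(list(gen))  # [] - the generator is already exhausted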
A generator can be created in two ways:
(1) Very similar to a list comprehension:
# this is a list, create all 5000000 x/2 values immediately, uses []
lis = [x/2 for x in range(5000000)]
# this is a generator, creates each x/2 value only when it is needed, uses ()
gen = (x/2 for x in range(5000000))
(2) As a function, using yield to return the next value:
# this is also a generator: it runs until a yield occurs and hands back that result.
# on the next call it picks up where it left off and continues until the next yield...
def divby2(n):
    num = 0
    while num < n:
        yield num / 2
        num += 1

# same as (x/2 for x in range(5000000))
print(divby2(5000000))
Note: even though range(5000000) is a lazy sequence in Python 3.x, [x/2 for x in range(5000000)] is still a list. range(...) does its job and generates x one at a time, but the entire list of x/2 values is computed when the list is created.
In a nutshell, lazy evaluation means that the object is evaluated when it is needed, not when it is created.
In Python 2, range returns a list - this means that if you give it a large number, it will calculate the entire list and return it at the time of creation:
>>> i = range(100)
>>> type(i)
<type 'list'>
In Python 3, however you get a special range object:
>>> i = range(100)
>>> type(i)
<class 'range'>
Only when you consume it, will it actually be evaluated - in other words, it will only return the numbers in the range when you actually need them.
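A small sketch makes this concrete:

r = range(10**12)   # created instantly; no numbers are materialized
print(r[10**11])    # 100000000000, computed on demand
print(10**11 in r)  # True, decided arithmetically rather than by scanning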
A GitHub repo named python-patterns and Wikipedia tell us what lazy evaluation is:
Delays the eval of an expr until its value is needed and avoids repeated evals.
range in Python 3 is not complete lazy evaluation, because it doesn't avoid repeated evaluation.
A more classic example for lazy evaluation is cached_property:
import functools
class cached_property(object):
def __init__(self, function):
self.function = function
functools.update_wrapper(self, function)
def __get__(self, obj, type_):
if obj is None:
return self
val = self.function(obj)
obj.__dict__[self.function.__name__] = val
return val
cached_property (a.k.a. lazy_property) is a decorator which converts a function into a lazily evaluated property: the first time the property is accessed, the function is called to compute the result, and that value is reused on every later access. For example:
class LogHandler:
    def __init__(self, file_path):
        self.file_path = file_path

    @cached_property
    def load_log_file(self):
        with open(self.file_path) as f:
            # the file is so big that it takes about 2s to read it all
            return f.read()

log_handler = LogHandler('./sys.log')
# only the first call costs 2s:
print(log_handler.load_log_file)
# the return value is cached on the log_handler object:
print(log_handler.load_log_file)
To use the proper term, a lazy Python object like range is designed around the call-by-need pattern rather than full lazy evaluation.
I have a class where each instance is basically a bunch of nested lists, each of which holds a number of integers, or another list containing integers, or a list of lists, etc., like so:
class Foo(list):
    def __init__(self):
        self.extend(
            [[1], [2], [3], range(5), [range(3), range(2)]]
        )
I want to define a method to walk the nested lists and give me
one integer at a time, not unlike os.walk. I tried this:
def _walk(self):
    def kids(node):
        for x in node:
            try:
                for y in kids(x):
                    yield y
            except TypeError:
                yield x
    return kids(x)
But it immediately raises a StopIteration error. If I add a print statement to print each "node" in the first for loop, the function appears to iterate over the whole container in the way I want, but without yielding each node; it just prints them all the first time I call next on the generator.
I'm stumped. Please help!
It works if you change return kids(x) to return kids(self)
Here's a function that is a simpler version of your _walk method that does what you want on an arbitrary iterable. The internal kids function is not required.
def walk(xs):
    for x in xs:
        try:
            # recurse into anything iterable
            for y in walk(x):
                yield y
        except TypeError:
            # not iterable, so it is a leaf value
            yield x
This could be trivially adapted to work as a method on your Foo object.
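For example, using the (corrected) Foo class from the question:

foo = Foo()
print(list(walk(foo)))  # [1, 2, 3, 0, 1, 2, 3, 4, 0, 1, 2, 0, 1]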