I think my question should be more clearly understood by this short code:
fs = []
for k in range(0, 10):
    def f(x):
        return x + 2*k
    fs.append(f)

fs[0](1)
# expecting 1, but get 19 (= 1 + 2*9)
How do I instead make f return what I want? Please note that f cannot receive k as an argument.
(What I'm actually trying to do is prepare multiple constraint functions that are fed to scipy.optimize.minimize)
The typical way to fix this is to do something like:
def f(x, k=k):
    return x + 2*k
For the most part, this shouldn't affect your "f cannot receive k as an argument" condition because it isn't a required argument.
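For instance, dropped back into the loop from the question, that looks roughly like this (a sketch):

fs = []
for k in range(0, 10):
    def f(x, k=k):  # the default value of k is evaluated now, so each f keeps its own copy
        return x + 2*k
    fs.append(f)

fs[0](1)  # 1, as expected
fs[9](1)  # 19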
A related, but different approach would be to define f outside the loop.
def f(k, x):
    return x + 2*k
Then in the loop use functools.partial.
import functools

fs = []
for k in range(10):
    fs.append(functools.partial(f, k))
In this approach, your function won't accept a value for k even if you try to pass one.
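A quick check of that behaviour (my own illustration, reusing f and fs from above):

fs[0](1)       # 1, because k was fixed to 0 when the partial was created
fs[9](1)       # 19
fs[0](1, k=5)  # TypeError: f() got multiple values for argument 'k'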
Basically the problem is that the variable k, in this case, continually changes as the loop iterates. Every function that refers to the name "k" looks it up when it is called, so they all end up seeing the same (final) value.
There are a couple of ways to solve this. This is perhaps the most common.
def f(x, k=k):
    # This sets k as a locally bound variable which is evaluated
    # at the time the function is created.
    return x + 2*k
The drawback is that this solution allows a later caller to invoke the newly created functions with a different value of k. This means you could call f("cat", "dog") and get "catdogdog" as a return value. While this is not the end of the world, it certainly isn't intended.
However, you could also do something like this:
def f_maker(k):
    # Create a new function whose variable "k" does not exist in outside scope.
    def f(x):
        return x + 2*k
    return f

fs = []
for k in range(0, 10):
    fs.append(f_maker(k))

fs[0](1)
How can I define a function in Python in such a way that it uses the previous value of my iteration, where I define the initial value?
My function is defined as follows:
def Deulab(c, yh1, a, b):
    Deulab = c - (EULab(c, yh1, a, b) - 1)*0.3
    return (Deulab, yh1, a, b)
Output is
Deulab(1.01, 1, 4, 2)
0.9964391705626454
Now I want to iterate, keeping yh1, a, b fixed, starting with c0 = 1, and iterating recursively for c.
The most Pythonic way of doing this is to define an iterating generator:
def iterates(f, x):
    while True:
        yield x
        x = f(x)

# test:
def f(x):
    return 3.2*x*(1-x)

orbit = iterates(f, 0.1)
for _ in range(10):
    print(next(orbit))
Output:
0.1
0.2880000000000001
0.6561792000000002
0.7219457839595519
0.6423682207442558
0.7351401271107676
0.6230691859914625
0.7515327214700762
0.5975401280955426
0.7695549549155365
You can use the generator until some stop criterion is met. For example, in fixed-point iteration you might iterate until two successive iterates are within some tolerance of each other. The generator itself will go on forever, so when you use it you need to make sure that your code doesn't go into an infinite loop (e.g. don't simply assume convergence).
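For example, a minimal sketch of such a stopping rule, assuming the iterates generator above (fixed_point, tol and max_steps are names I made up for illustration):

def fixed_point(f, x0, tol=1e-9, max_steps=1000):
    orbit = iterates(f, x0)
    prev = next(orbit)
    for _ in range(max_steps):
        cur = next(orbit)
        if abs(cur - prev) < tol:  # two successive iterates are close enough
            return cur
        prev = cur
    raise RuntimeError("no convergence within max_steps")

# For the question's update rule this would be something like:
# c = fixed_point(lambda c: c - (EULab(c, yh1, a, b) - 1)*0.3, 1.0)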
It sounds like you are after recursion.
Here is a basic example
def f(x):
    x += 1
    if x < 10:
        x = f(x)
    return x

print(f(4))
In this example a function calls itself until a criterion is met.
CodeCupboard has supplied an example which should fit your needs.
This is a bit of a more persistent version of that, which would allow you to go back to where you were with multiple separate function calls
class classA:
    # Declare initial values for class variables here
    fooResult = 0  # Say, taking 0 as an initial value, not unreasonable!

    @staticmethod
    def myFoo1(x):
        y = 2*x + classA.fooResult  # A simple example function
        classA.fooResult = y  # This line updates the class variable, so next time you come in, you'll be using it as part of calculating y
        return y  # and this will return the calculation back up to wherever you called it from

# Example call
rtn = classA.myFoo1(5)
# rtn will be 10, as this is the first call to the function, so the class variable had its initial state of 0

# Example call 2
rtn2 = classA.myFoo1(3)
# rtn2 will be 16, as the class variable had a state of 10 when you called classA.myFoo1()
So if you were working with a dataset where you didn't know what the second call would be (i.e. the 3 in call2 above was unknown), then you can revisit the function without having to worry about handling the data retention in your top level code. Useful for a niche case.
Of course, you could use it as per:
list1 = [1, 2, 3, 4, 5]
for i in list1:
    rtn = classA.myFoo1(i)
Which would give you a final rtn value of 30 when you exit the for loop (assuming fooResult starts from its initial value of 0).
I have a problem where I need to produce something which is naturally computed recursively, but where I also need to be able to interrogate the intermediate steps in the recursion if needed.
I know I can do this by passing and mutating a list or similar structure. However, this looks ugly to me and I'm sure there must be a neater way, e.g. using generators. What I would ideally love to be able to do is something like:
intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
in an efficient way. While my solution is not performance critical, I can't justify the massive amount of redundant effort in that first line. It looks to me like a generator would be perfect for this except for the fact that f is fundamentally much more suited to recursion in my case (which at least in my mind is the complete opposite of a generator, but maybe I'm just not thinking far enough outside of the box).
Is there a neat Pythonic way of doing something like this that I just don't know about, or do I just need to just capitulate and pollute my function f by passing it an intermediate_results list which I then mutate as a side-effect?
I have a generic solution for you using a decorator. We create a Memoize class which stores the results of previous times the function is executed (including in recursive calls). If the arguments given have already been seen, the cached versions are used to quickly lookup the result.
The custom class has the benefit over an lru_cache in that you can see the results.
from functools import wraps

class Memoize:
    def __init__(self):
        self.store = {}

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            if args not in self.store:
                self.store[args] = fun(*args)
            return self.store[args]
        return wrapper

m = Memoize()

@m.save
def fibo(n):
    if n <= 0: return 0
    elif n == 1: return 1
    else: return fibo(n-1) + fibo(n-2)
Then after running different things you can see what the cache contains. When you run future function calls, m.store will be used as a lookup so calculation doesn't need to be redone.
>>> fibo(8)
21
>>> m.store
{(1,): 1,
(0,): 0,
(2,): 1,
(3,): 2,
(4,): 3,
(5,): 5,
(6,): 8,
(7,): 13,
(8,): 21}
You could modify the save function to use the name of the function and the args as the key, so that multiple function results can be stored in the same Memoize class.
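For instance, such a variant might look like this (my sketch; MultiMemoize is a made-up name):

from functools import wraps

class MultiMemoize:
    """Like Memoize, but keys the cache on (function name, args), so several
    decorated functions can share one store."""
    def __init__(self):
        self.store = {}

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            key = (fun.__name__,) + args
            if key not in self.store:
                self.store[key] = fun(*args)
            return self.store[key]
        return wrapper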
You can use your existing solution that makes many "redundant" calls to f, but employ function caching to save the results of previous calls to f.
In other words, when f(x1) is called, its input arguments and the corresponding return value are saved, and the next time it is called with the same arguments, the result is simply pulled from the cache.
See functools.lru_cache for the standard library solution to this, i.e.:
from functools import lru_cache

@lru_cache
def f(x):
    ...  # your recursive function, unchanged apart from the decorator

intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
Note, however, that f must be a pure function (no side effects, output determined only by its arguments) and its arguments must be hashable for this to work properly.
Having considered your comments, I'll now try to give another perspective on the problem.
So, let's consider a concrete example:
def f(x):
    a = 2
    return g(x) + a if x != 0 else 0

def g(x):
    b = 1
    return h(x) - b

def h(x):
    c = 1/2
    return f(x-1)*(1+c)
I
First of all, it should be mentioned that (in our particular case) the algorithm has the form f(x) = p(f(x - 1)) for some p. It follows that f(x) = p^x(f(0)) = p^x(0). That means we should just apply p to 0 x times to get the desired result, which can be done in an iterative process, so this can be written without recursion (see the sketch below). Though I believe that your real case is much harder. Moreover, it would be too boring and uninformative to stop here)
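A minimal sketch of that iterative rewrite for the concrete f, g, h above (my own illustration): unrolling the calls gives f(x) = 1.5*f(x-1) + 1 for x != 0, i.e. p(y) = 1.5*y + 1.

def f_iterative(x):
    val = 0                # f(0) = 0
    for _ in range(x):
        val = 1.5*val + 1  # apply p once per step
    return val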
II
Generally speaking, we can divide all possible solutions into two groups: the ones that require refactoring (i.e. rewriting functions f, g, h) and the ones that do not. I have little to offer from the latter one (and I don't think anyone can). Consider the following, however:
def fk(x, k):
    a = 2
    return k(gk(x, k) + a if x != 0 else 0)

def gk(x, k):
    b = 1
    return k(hk(x, k) - b)

def hk(x, k):
    c = 1/2
    return k(fk(x-1, k)*(1+c))

def printret(x):
    print(x)
    return x

fk(4, printret)  # see what happens
Inspired by continuation-passing style, but that's totally not it.
What's the point? It's something between your idea of passing a list to write down all the computations and memoizing. This k carries additional behavior with it, such as printing or writing to a list (you can make a function that writes to some list, why not?). But if you look carefully you'll see that it leaves the inner code of these functions practically untouched (only the input and output of each function are affected), so one can produce a decorator associated with a function like printret that does essentially the same thing for f, g, h.
Pros: no need to modify code, much more flexible than passing a list, no additional work (like in memoizing).
Cons: impure (printing or modifying something), not as flexible as we would like.
III
Now let's see how modifying function bodies can help. Don't be afraid of what's written below, take your time and play with that thing a little.
class Logger:
    def __init__(self, lst, cur_val):
        self.lst = lst
        self.cur_val = cur_val

    def bind(self, f):
        res = f(self.cur_val)
        return Logger([self.cur_val] + res.lst + self.lst, res.cur_val)

    def __repr__(self):
        return "Logger( " + repr({'value': self.cur_val, 'lst': self.lst}) + " )"

def unit(x):
    return Logger([], x)

# you can also play with lala
def lala(x):
    if x <= 0:
        return unit(1)
    else:
        return lala(x - 1).bind(lambda y: unit(2*y))

def f(x):
    a = 2
    if x == 0:
        return unit(0)
    else:
        return g(x).bind(lambda y: unit(y + a))

def g(x):
    b = 1
    return h(x).bind(lambda y: unit(y - b))

def h(x):
    c = 1/2
    return f(x-1).bind(lambda y: unit(y*(1+c)))

f(4)  # see for yourself
Logger is called a monad. I'm not very familiar with this concept myself, but I guess I'm doing everything right) f, g, h are functions that take a number and return a Logger instance. Logger's bind takes in a function (like f) and returns a Logger with a new value (computed by f) and updated 'logs'. The key point - as I see it - is the ability to do whatever we want with the collected values, in the order the resulting value was calculated.
Afterword
I'm not at all some kind of 'guru' of functional programming, I believe I'm missing a lot of things here. But what I've understood is that functional programming is about inverting the flow of the program. That's why, for instance, I totally agree with your opinion about generators being opposed to functional programming. When we use a generator gen in, say, a function func, we yield values one by one to func and func does something with them in e.g. a loop. The functional approach would be to make gen a function taking func as a parameter and make func perform computations on the 'yielded' values. It's like gen and func exchanged their places. So the flow is inverted! And there are plenty of other ways of inverting the flow. Monads are one of them.
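A tiny sketch of that inversion (my own illustration, not part of the discussion above):

# Pull style: the consumer drives the loop and pulls values out of the producer.
def consume_pull(values):
    for v in values:
        print(v)

# Push style (the inverted flow): the producer drives the loop and calls the consumer.
def produce_push(consumer, n):
    for v in range(n):
        consumer(v)

consume_pull(range(3))  # the consumer is in control
produce_push(print, 3)  # the producer is in control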
itertools.islice takes an iterable plus start and stop values and gives you the elements between start and stop as an iterator. If islice is not clear you can check the docs here: https://docs.python.org/3/library/itertools.html
import itertools

intermediate_result = map(f, range(T))
final_result = next(itertools.islice(intermediate_result, T-1, T))  # islice takes start/stop positionally
I am reading Hackers and Painters and am confused by a problem mentioned by the author to illustrate the power of different programming languages.
The problem is:
We want to write a function that generates accumulators—a function that takes a number n, and returns a function that takes another number i and returns n incremented by i. (That’s incremented by, not plus. An accumulator has to accumulate.)
The author mentions several solutions with different programming languages. For example, Common Lisp:
(defun foo (n)
  (lambda (i) (incf n i)))
and JavaScript:
function foo(n) { return function (i) { return n += i } }
However, when it comes to Python, the following codes do not work:
def foo(n):
    s = n
    def bar(i):
        s += i
        return s
    return bar

f = foo(0)
f(1)  # UnboundLocalError: local variable 's' referenced before assignment
A simple modification will make it work:
def foo(n):
    s = [n]
    def bar(i):
        s[0] += i
        return s[0]
    return bar
I am new to Python. Why does the first solution not work while the second one does? The author mentions lexical variables but I still don't get it.
s += i is just sugar for s = s + i.*
This means you assign a new value to the variable s (instead of mutating it in place). When you assign to a variable, Python assumes it is local to the function. However, before assigning it needs to evaluate s + i, but s is local and still unassigned -> Error.
In the second case s[0] += i you never assign to s directly, but only ever access an item from s. So Python can clearly see that it is not a local variable and goes looking for it in the outer scope.
Finally, a nicer alternative (in Python 3) is to explicitly tell it that s is not a local variable:
def foo(n):
    s = n
    def bar(i):
        nonlocal s
        s += i
        return s
    return bar
(There is actually no need for s - you could simply use n instead inside bar.)
*The situation is slightly more complex, but the important issue is that computation and assignment are performed in two separate steps.
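For completeness, that simplified version (the parenthetical above) would look roughly like this:

def foo(n):
    def bar(i):
        nonlocal n
        n += i
        return n
    return bar

f = foo(0)
f(1)  # 1
f(2)  # 3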
An infinite generator is one implementation. You can call __next__ on a generator instance (or, equivalently, pass it to the built-in next()) to extract successive results iteratively.
def incrementer(n, i):
    while True:
        n += i
        yield n

g = incrementer(2, 5)
print(g.__next__())  # 7
print(g.__next__())  # 12
print(g.__next__())  # 17
If you need a flexible incrementer, one possibility is an object-oriented approach:
class Inc(object):
    def __init__(self, n=0):
        self.n = n

    def incrementer(self, i):
        self.n += i
        return self.n

g = Inc(2)
g.incrementer(5)  # 7
g.incrementer(3)  # 10
g.incrementer(7)  # 17
In Python, when you rebind a plain variable inside a function (as s += i does, since it is an assignment to s), you only ever create or update a local name; the change is not reflected in the enclosing scope.
But when you use a list and mutate it in place (as s[0] += i does), you modify the object itself, and that change is visible to every scope holding a reference to that list.
And this is the reason the second option works and the first option doesn't.
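A quick illustration of the difference (my own example):

def rebind(s):
    s = s + 1        # rebinds the local name only

def mutate(s):
    s[0] = s[0] + 1  # mutates the shared list object

x = 1
rebind(x)
print(x)  # 1 - unchanged

y = [1]
mutate(y)
print(y)  # [2] - the change is visible outside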
So I'm making a simple program that takes two functions (a and k) and one integer value (b). It should look at the formal parameter of the two functions, which is "x", apply the condition x < b, and based on that condition call either a or k. But when I run the program it gives an error that x is not defined in the global frame. I want it to get "x" from the formal parameter of the functions a and k and then apply the condition based on that.
Here's my code
def square(x):
    return x * x

def increment(x):
    return x + 1

def piecewise(a, k, b):
    if x < b:
        return a
    else:
        return k

mak = piecewise(increment, square, 3)
print(mak(1))
I guess you want to do something like this:
def piecewise(a, k, b):
    def f(x):
        if x < b:
            return a(x)
        else:
            return k(x)
    return f
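For example, with the square and increment functions from the question (my own quick check):

mak = piecewise(increment, square, 3)
print(mak(1))  # 2, because 1 < 3, so increment is applied
print(mak(5))  # 25, because 5 >= 3, so square is applied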
However, I am not sure whether it is good practice. So, I leave my answer here to see the comments and learn if there is any problem with it.
I would like to perform a calculation in Python where the current value (i) of the equation is based on the previous value of the equation (i-1). This is really easy to do in a spreadsheet, but I would rather learn to code it.
I have noticed that there is plenty of information on finding the previous value from a list, but I don't have a list, I need to create it! My equation is shown below.
h = (2*b) - h[i-1]
Can anyone tell me a method to do this?
I tried this sort of thing, but it will not work: when I try to do the equation I'm calling a value I haven't created yet, and if I set h=0 then I get an error that I am out of index range.
i = 1
for i in range(1, len(b)):
    h = []
    h = (2*b) - h[i-1]
    x += 1
h = [b[0]]
for val in b[1:]:
    h.append(2 * val - h[-1])  # As you add to h, you keep up with its tail
For a large b list (brr, one-letter identifier), to avoid creating a large slice:
from itertools import islice  # For a big list it will keep the code less wasteful
for val in islice(b, 1, None):
    ...
As pointed out by @pad, you simply need to handle the base case of receiving the first sample.
However, your equation makes no use of i other than to retrieve the previous result. It's looking more like a running filter than something which needs to maintain a list of past values (with an array which might never stop growing).
If that is the case, and you only ever want the most recent value, then you might want to go with a generator instead.
def gen():
    def eqn(b):
        eqn.h = 2*b - eqn.h
        return eqn.h
    eqn.h = 0
    return eqn
And then use it thus:
>>> f = gen()
>>> f(2)
4
>>> f(3)
2
>>> f(2)
2
>>>
The same effect could be achieved with a true generator using yield and send, for example:
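A minimal sketch of that yield/send variant (my own illustration, not part of the original answer):

def eqn_gen(h0=0):
    h = h0
    b = yield       # prime with next() before sending the first value
    while True:
        h = 2*b - h
        b = yield h

g = eqn_gen()
next(g)           # prime the generator
print(g.send(2))  # 4
print(g.send(3))  # 2
print(g.send(2))  # 2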
First off, do you need all the intermediate values? That is, do you want a list h from 0 to i? Or do you just want h[i]?
If you just need the i-th value you could use recursion:
def get_h(i):
    if i > 0:
        return (2*b) - get_h(i-1)
    else:
        return h_0
But be aware that this will not work for large i, as it will exceed the maximum recursion depth. (Thanks for pointing this out, kdopen.) In that case a simple for-loop or a generator (sketched below) is better.
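A sketch of the generator alternative mentioned above (my own illustration, assuming the same h_0 and constant b):

def h_values(h_0, b):
    h = h_0
    while True:
        yield h
        h = 2*b - h

gen = h_values(h_0=0, b=10)
print([next(gen) for _ in range(4)])  # [0, 20, 0, 20]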
Even better is to use a (mathematically) closed form of the equation (for your example that is possible, it might not be in other cases):
def get_h(i):
    if i % 2 == 0:
        return h_0
    else:
        return (2*b) - h_0
In both cases h_0 is the initial value that you start out with.
h = []
for i in range(len(b)):
    if i > 0:
        h.append(2*b - h[i-1])
    else:
        h.append(h_0)  # handle the i=0 case here with your initial value
You are successively applying a function (equation) to the result of a previous application of that function - the process needs a seed to start it. Your result looks like this: [seed, f(seed), f(f(seed)), f(f(f(seed))), ...]. This concept is function composition. You can create a generalized function that will do this for any sequence of functions; in Python functions are first-class objects and can be passed around just like any other object. If you need to preserve the intermediate results, use a generator.
def composition(functions, x):
    """ yields f(x), f(f(x)), f(f(f(x))) ....
        for each f in functions
        functions is an iterable of callables taking one argument
    """
    for f in functions:
        x = f(x)
        yield x
Your specs require a seed and a constant,
seed = 0
b = 10
The equation/function,
def f(x, b=b):
    return 2*b - x
f is applied b times.
functions = [f]*b
Usage
print(list(composition(functions, seed)))
If the intermediate results are not needed composition can be redefined as
def composition(functions, x):
    """ Returns f(x), g(f(x)), h(g(f(x))) ....
        for each function in functions
        functions is an iterable of callables taking one argument
    """
    for f in functions:
        x = f(x)
    return x
print(composition(functions, seed))
Or more generally, with no limitations on call signature:
from functools import reduce  # needed in Python 3

def compose(funcs):
    '''Return a callable composed of successive application of functions
    funcs is an iterable producing callables
    for [f, g, h] returns f(g(h(*args, **kwargs)))
    '''
    def outer(f, g):
        def inner(*args, **kwargs):
            return f(g(*args, **kwargs))
        return inner
    return reduce(outer, funcs)
def plus2(x):
    return x + 2

def times2(x):
    return x * 2

def mod16(x):
    return x % 16

funcs = (mod16, plus2, times2)
eq = compose(funcs)  # mod16(plus2(times2(x)))
print(eq(15))
While the process definition appears to be recursive, I resisted the temptation so I could stay out of maximum recursion depth hades.
I got curious, searched SO for function composition and, of course, there are numerous relevant Q&As.