I'm trying to write a function that generates the restrictions of a function g at a given point p.
For example, let's say g(x, y, z) = 2x + 3y + z and p = (5, 10, 15). I'm trying to create a function that would return [lambda x : g(x, 10, 15), lambda y: g(5, y, 15), lambda z: g(5, 10, z)]. In other words, I want to take my multivariate function and return a list of univariate functions.
I wrote some Python to describe what I want, but I'm having trouble figuring out how to pass the right inputs from p into the lambda properly.
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        restriction = lambda x: g(p[0], p[1], ..., p[i-1], x, p[i+1], ..., p[-1])
        restrictions.append(restriction)
    return restrictions
Purpose: I wrote a short function to estimate the derivative of a univariate function, and I'm trying to extend it to compute the gradient of a multivariate function by computing the derivative of each restriction function in the list returned by restriction_generator.
Apologies if this question has been asked before. I couldn't find anything after some searching, but I'm having trouble articulating my problem without all of this extra context. Another title for this question would probably be more appropriate.
Since #bandicoot12 requested some more solutions, I will try to fix up your proposed code. I'm not familiar with the ... notation, but I think this slight change should work:
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        # bind the current i as a default argument, so each lambda keeps
        # its own index instead of sharing the loop's final value of i
        restriction = lambda x, i=i: g(*p[:i], x, *p[i+1:])
        restrictions.append(restriction)
    return restrictions
Although I am not familiar with the ... notation, if I had to guess, your original code doesn't work because it probably always inputs p[0]. Maybe it can be fixed by changing p[0], p[1], ..., p[i-1] to p[0], ..., p[i-1]. Note also the i=i default argument above: without it, every lambda would close over the same i and use its final loop value.
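A quick sanity check of the fixed version, using the g and p from the question:

g = lambda x, y, z: 2*x + 3*y + z
p = (5, 10, 15)

rx, ry, rz = restriction_generator(g, p)
print(rx(5), ry(10), rz(15))  # 55 55 55 -- each equals g(5, 10, 15)
print(rx(0))                  # 45 -- g(0, 10, 15)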
Try something like this:
def foo(fun, p, i):
    def bar(x):
        # work on a copy so the original point p is not mutated
        q = list(p)
        q[i] = x
        return fun(*q)
    return bar
and
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        restrictions.append(foo(g, p, i))
    return restrictions
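A quick check with the question's example (since bar copies p, it may be a tuple or a list):

g = lambda x, y, z: 2*x + 3*y + z
p = (5, 10, 15)
rx, ry, rz = restriction_generator(g, p)
print(ry(0))   # 25 -- g(5, 0, 15)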
I came across a script like this:
add_numbers = lambda x, y: x+y
add_five = lambda y: add_numbers(5,y)
It derives a new function of one variable, add_five, that adds 5 to its argument. From this point, the script introduces functools:
In [9]: from functools import partial
In [10]: add_five = partial(add_numbers, 5)
In [11]: add_five(7)
Out[11]: 12
As a novice, I guess the same thing can easily be achieved by
add_five = lambda y: 5+y
add_six = lambda y: 6+y
I am confused: what's the benefit of partial if we can define add_five in such a straightforward way?
The utility of partial is to easily create specialised versions of functions from a general definition.
The case of adding numbers illustrates this: here add_numbers is the general case.
from functools import partial

def add_numbers(x, y):
    return x + y

add5 = partial(add_numbers, 5)
Here add5 is a specialised case of add_numbers, roughly equivalent to

def add5(y):
    return add_numbers(5, y)
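A quick check that the bound argument goes into the first position:

>>> add5(3)
8
>>> add5(10)
15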
Adding numbers is a very trivial example and does not really show the utility of partial.
The following is a simple example that may better show the utility of partial.
Consider writing a procedure to compute the square root of a number using the Babylonian method.
def square_root(x, tolerance, convergence_test):
    y = 1
    while not convergence_test(x, y, tolerance):
        y = (y + x/y)/2
    return y
For most numbers, the convergence test can simply check that the difference between y squared and x is within the tolerance. Let's call this the absolute error of the estimate:
def absolute_error(x, y, tolerance):
    return abs(x - y**2) <= tolerance
For very large and small numbers, using absolute error of the estimate can lead to wrong answers for various reasons. In those cases, it is better to use the relative error:
def relative_error(x, y, tolerance):
    return abs(x/(y**2) - 1) <= tolerance
With partial, we can easily create specialised functions that use either the absolute or the relative error:
sqrt_rel_err = partial(square_root, convergence_test=relative_error)
sqrt_abs_err = partial(square_root, convergence_test=absolute_error)
Now using either is trivial
>>> sqrt_rel_err(2, 0.00001)
1.4142156862745097
>>> sqrt_abs_err(2, 0.00001)
1.4142156862745097
And for small numbers, we see that using the absolute error gives the wrong answer (especially when the tolerance is greater than the number whose square root we want):
>>> x = sqrt_abs_err(1e-6, 0.00001)
>>> x**2
4.4981362843183905e-06
Whilst the relative error method yields a more accurate answer.
>>> x = sqrt_rel_err(1e-6, 0.00001)
>>> x**2
1.0000003066033492e-06
I have a rather lengthy equation that I need to integrate over using scipy.integrate.quad and was wondering if there is a way to add lambda functions to each other. What I have in mind is something like this
y = lambda u: u**(-2) + 8
x = lambda u: numpy.exp(-u)
f = y + x
int = scipy.integrate.quad(f, 0, numpy.inf)
The equations that I am really using are far more complicated than I am hinting at here, so for readability it would be useful to break up the equation into smaller, more manageable parts.
Is there a way to do this with lambda functions? Or perhaps another way which does not even require lambda functions but will give the same output?
In Python, you'll normally only use a lambda for very short, simple functions that easily fit inside the line that's creating them. (Some languages have other opinions.)
As #DSM hinted in their comment, lambdas are essentially a shortcut to creating functions when it's not worth giving them a name.
If you're doing more complex things, or if you need to give the code a name for later reference, a lambda expression won't be much of a shortcut for you -- instead, you might as well define a plain old function.
So instead of assigning the lambda expression to a variable:
y = lambda u: u**(-2) + 8
You can define that variable to be a function:
def y(u):
    return u**(-2) + 8
Which gives you room to explain a bit, or be more complex, or whatever you need to do:
def y(u):
    """
    Bloopinate the input.

    u should be a positive integer for fastest results.
    """
    offset = 8
    bloop = u ** (-2)
    return bloop + offset
Functions and lambdas are both "callable", which means they're essentially interchangeable as far as scipy.integrate.quad() is concerned.
To combine callables, you can use several different techniques.
def triple(x):
    return x * 3

def square(x):
    return x * x

def triple_square(x):
    return triple(square(x))

def triple_plus_square(x):
    return triple(x) + square(x)

def triple_plus_square_with_explaining_variables(x):
    tripled = triple(x)
    squared = square(x)
    return tripled + squared
There are more advanced options that I would only consider if it makes your code clearer (which it probably won't). For example, you can put the callables in a list:
all_the_things_i_want_to_do = [triple, square]
Once they're in a list, you can use list-based operations to work on them (including applying them in turn to reduce the list down to a single value).
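For example, a minimal sketch that sums the outputs of every callable in the list, using the names defined above:

def sum_of_list(x):
    return sum(f(x) for f in all_the_things_i_want_to_do)

print(sum_of_list(2))   # triple(2) + square(2) = 6 + 4 = 10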
But if your code is like most code, regular functions that just call each other by name will be the simplest to write and easiest to read.
There's no built-in functionality for that, but you can implement it quite easily (with some performance hit, of course):
import numpy

class Lambda:
    def __init__(self, func):
        self._func = func

    def __add__(self, other):
        return Lambda(
            lambda *args, **kwds: self._func(*args, **kwds) + other._func(*args, **kwds))

    def __call__(self, *args, **kwds):
        return self._func(*args, **kwds)

y = Lambda(lambda u: u**(-2) + 8)
x = Lambda(lambda u: numpy.exp(-u))
print((x + y)(1))
Other operators can be added in a similar way.
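For example, subtraction could be supported by adding a __sub__ method inside the same class, following the same pattern (a sketch):

    def __sub__(self, other):
        return Lambda(
            lambda *args, **kwds: self._func(*args, **kwds) - other._func(*args, **kwds))

With that method added, (x - y)(1) works just like (x + y)(1).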
With sympy you can do function operations like this:
>>> import numpy
>>> from sympy.utilities.lambdify import lambdify, implemented_function
>>> from sympy.abc import u
>>> y = implemented_function('y', lambda u: u**(-2) + 8)
>>> x = implemented_function('x', lambda u: numpy.exp(-u))
>>> f = lambdify(u, y(u) + x(u))
>>> f(numpy.array([1,2,3]))
array([ 9.36787944, 8.13533528, 8.04978707])
Use the code below to achieve the same result while writing as little code as possible:
y = lambda u: u**(-2) + 8
x = lambda u: numpy.exp(-u)
f = lambda u, x=x, y=y: x(u) + y(u)
result = scipy.integrate.quad(f, 0, numpy.inf)  # avoid naming this int: that would shadow the built-in
As a functional programmer, I suggest generalizing the solutions to an applicative combinator:
In [1]: def lift2(h, f, g): return lambda x: h(f(x), g(x))
In [2]: from operator import add
In [3]: from math import exp
In [4]: y = lambda u: u**(-2) + 8
In [5]: x = lambda u: exp(-u)
In [6]: f = lift2(add, y, x)
In [7]: [f(u) for u in range(1,5)]
Out[7]: [9.367879441171443, 8.385335283236612, 8.160898179478975, 8.080815638888733]
Using lift2, you can combine the outputs of two functions with an arbitrary binary function in a point-free way. Most of the functions in operator should be enough for typical mathematical combinations, avoiding having to write any lambdas.
In a similar fashion, you might want to define lift1 and maybe lift3, too.
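For instance, sketches of lift1 and lift3 along the same lines:

def lift1(h, f): return lambda x: h(f(x))
def lift3(h, f, g, k): return lambda x: h(f(x), g(x), k(x))

from operator import neg
neg_y = lift1(neg, y)   # u -> -(u**(-2) + 8)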
I am now programming the BFGS algorithm, where I need to create a function with a double sum. I need to return a FUNCTION, not a number, so something like sum += is not acceptable.
def func(X, W):
    return a function of the double sum of X, W
An illustrative example:
X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
I want to get a function that, for each instance X[i] in X and for each W[j] in W, returns a function of the sum of numpy.dot(X[i], W[j]). For example, X[1] dot W[2] should be 2*3+2*3+2*3+2*3.
----------This content is edited by me:-------------
When I saw the answers provided below, I think my question is not clear enough. Actually, I want to get a function:
Func = X[0]W[0] + X[0]W[1] + X[0]W[2] +
       X[1]W[0] + X[1]W[1] + X[1]W[2] +
       X[2]W[0] + X[2]W[1] + X[2]W[2] +
       X[3]W[0] + X[3]W[1] + X[3]W[2] +
       X[4]W[0] + X[4]W[1] + X[4]W[2]
-------------------end the edited content--------------
If W only had one dimension, the problem would be easy using numpy.sum(X, W).
However, how can I return a function of two sums with Python?
If you want to return the function f(i,j) -> X[i].W[j]:
def func(X, W):
    def f(i, j):
        return np.dot(X[i], W[j])
    return f
will work.
EDIT:
The VALUE you name Func in your edit is computed by sum([np.dot(x, w) for x in X for w in W]) or, more efficiently, np.einsum('ij,kj->', X, W).
If you want to return the FUNCTION that returns Func, you can do it like this:
def func(X, W):
    Func = np.einsum('ij,kj->', X, W)
    return lambda: Func
Then f=func(X,W); print(f()) will print 360, the value named Func in your example.
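A quick check that the comprehension and the einsum expression agree on the question's X and W:

>>> print(sum(np.dot(x, w) for x in X for w in W))
360
>>> print(np.einsum('ij,kj->', X, W))
360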
If I got your question right, this should do exactly what you want (python-2.7):
import numpy as np

def sample_main():
    X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
    W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
    # on Python 3 you would also need: from functools import reduce
    f = lambda i, j: reduce(lambda a, b: a + b, map(lambda x, w: x*w, X[i], W[j]), 0)
    return f

if __name__ == '__main__':
    f = sample_main()
    print(f(0, 0))
Just replace the sample_main function with your function that takes X and W.
Actually, I want to implement the L-BFGS algorithm in my Python code. Inspired by the two answers provided by #B.M. and #siebenschlaefer, I figured out how to implement it in my code:
func = np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i],w[t],gamma[t]))+log_p_z_x(alpha, beta, X[i]) for t in range(3)) for i in range (5))
Please do not mind the details of the formula; what I want to say is that I use two sums here, just using i in range(5) and t in range(3) to tell the code to do the sums.
Thanks again for the answers provided by #B.M. and #siebenschlaefer!!
Say I have the following code
def myfunc(x):
    return monsterMathExpressionOf(x)
and I would like to find numerically the solution of myfunc(x) == y for diverse values of y. If y == 0 then there are a lot of root finding procedures available, e.g. from scipy. However, if I'd like to find the solution for e.g. y==1 it seems I have to define a new function
def myfunc1(x):
    return myfunc(x) - 1
and then find its root using available procedures. This way does not work for me, as I need to find a lot of solutions by running a loop, and I don't want to redefine the function in each step of the loop. Is there a neater solution?
You don't have to redefine a function for every value of y: just define a single function of y that returns a function of x, and use that function inside your loop:
def wrapper(y):
    def myfunc(x):
        return monsterMathExpressionOf(x) - y
    return myfunc

for y in y_values:
    f = wrapper(y)
    find_root(f, starting_point, ...)
You can also use functools.partial, which may be more to your liking:
from functools import partial

def f(x, y):
    return monsterMathExpressionOf(x) - y

for y in y_values:
    g = partial(f, y=y)
    find_root(g, starting_point, ...)
Read the documentation to see how partial is roughly implemented behind the scenes; you'll see it may not be too different compared to the first wrapper implementation.
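Roughly, partial boils down to something like this (a simplified sketch of what the functools documentation describes; the real object also exposes attributes such as func, args, and keywords):

def my_partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        # pre-bound positional args come first, then the call-time ones;
        # call-time keywords override the pre-bound keywords
        return func(*args, *fargs, **{**keywords, **fkeywords})
    return newfunc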
#Evert's answer shows how you can do this by using either a closure or by using functools.partial, which are both fine solutions.
Another alternative is provided by many numerical solvers. Consider, for example, scipy.optimize.fsolve. That function provides the args argument, which allows you to pass additional fixed arguments to the function to be solved.
For example, suppose myfunc is x**3 + x
def myfunc(x):
    return x**3 + x
Define one additional function that includes the parameter y as an argument:
def myfunc2(x, y):
    return myfunc(x) - y
To solve, say, myfunc(x) = 3, you can do this:
from scipy.optimize import fsolve
x0 = 1.0 # Initial guess
sol = fsolve(myfunc2, x0, args=(3,))
Instead of defining myfunc2, you could use an anonymous function as the first argument of fsolve:
sol = fsolve(lambda x, y: myfunc(x) - y, x0, args=(3,))
But then you could accomplish the same thing using
sol = fsolve(lambda x: myfunc(x) - 3, x0)
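Putting this together for the original use case, a loop over several y values might look like this (the y values below are illustrative):

from scipy.optimize import fsolve

def myfunc(x):
    return x**3 + x

x0 = 1.0  # initial guess
for y in [1.0, 2.0, 3.0]:
    sol = fsolve(lambda x, y: myfunc(x) - y, x0, args=(y,))
    print(y, sol[0])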
I am using scipy.optimize.minimize() to get the minimum value and the x, y at which it occurs:
def fun(self):
    cols = self.maintablewidget.columnCount() - 1
    for k in range(3, cols):
        for i in range(1, k):
            d = string.atof(self.maintablewidget.item(i-1, k-1).text())
            xi = string.atof(self.xytablewidget.item(i-1, 0).text())
            yi = string.atof(self.xytablewidget.item(i-1, 1).text())
            f = lambda x, y: np.sum((np.sqrt((x-xi)**2 + (y-yi)**2) - d)**2)
            res = optimize.minimize(f, 0, 0)  # I do not know how to give optimize.minimize's parameters
            print(res['x'][0])
    print(res['x'], res['fun'])
I do not know how to pass optimize.minimize's parameters. Can someone explain how I can do this?
Take a look at the documentation. Essentially, if your function depends on two parameters, you need to pass them as x[0] and x[1] instead of x and y, so in the end your function depends on a single vector parameter x. For example:
f = lambda x: np.sum((np.sqrt((x[0]-xi)**2+(x[1]-yi)**2)-d)**2)
res = optimize.minimize(f, (initial_x, initial_y))
The minimum will be in res.x and will have the form of a vector [x, y].
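A minimal, self-contained sketch with made-up data (the xi, yi, and d arrays here are illustrative, not values from the tables above):

import numpy as np
from scipy import optimize

xi = np.array([0.0, 2.0, 0.0])
yi = np.array([0.0, 0.0, 2.0])
d = np.array([1.0, 1.0, 1.0])

# sum of squared differences between point-to-(xi, yi) distances and d
f = lambda x: np.sum((np.sqrt((x[0]-xi)**2 + (x[1]-yi)**2) - d)**2)
res = optimize.minimize(f, (0.5, 0.5))  # second argument is the initial guess for (x, y)
print(res.x, res.fun)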