I'm trying to write a function that generates the restrictions of a function g at a given point p.
For example, let's say g(x, y, z) = 2x + 3y + z and p = (5, 10, 15). I'm trying to create a function that would return [lambda x : g(x, 10, 15), lambda y: g(5, y, 15), lambda z: g(5, 10, z)]. In other words, I want to take my multivariate function and return a list of univariate functions.
I wrote some Python to describe what I want, but I'm having trouble figuring out how to pass the right inputs from p into the lambda properly.
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        restriction = lambda x : g(p[0], p[1], ..., p[i-1], p[x], p[i+1], .... p[-1])
        restrictions.append(restriction)
    return restrictions
Purpose: I wrote a short function to estimate the derivative of a univariate function, and I'm trying to extend it to compute the gradient of a multivariate function by computing the derivative of each restriction function in the list returned by restriction_generator.
Apologies if this question has been asked before. I couldn't find anything after some searching, but I'm having trouble articulating my problem without all of this extra context. Another title for this question would probably be more appropriate.
Since @bandicoot12 requested some more solutions, I will try to fix up your proposed code. I'm not familiar with the ... notation, but I think this slight change should work:
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        # bind the current value of i as a default argument; otherwise every
        # lambda would see the final value of i after the loop finishes
        restriction = lambda x, i=i: g(*p[:i], x, *p[i+1:])
        restrictions.append(restriction)
    return restrictions
Although I am not familiar with the ... notation, if I had to guess, your original code doesn't work because it probably always passes p[0]. Maybe it can be fixed by changing it from p[0], p[1], ..., p[i-1] to p[0], ..., p[i-1].
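As a quick sanity check, here is a small usage sketch with the g and p from the question (assuming the restriction_generator defined just above):
def g(x, y, z):
    return 2*x + 3*y + z

restrictions = restriction_generator(g, (5, 10, 15))
# each restriction varies one coordinate and holds the others at p
print(restrictions[0](1))   # g(1, 10, 15) = 2*1 + 3*10 + 15 = 47
print(restrictions[1](1))   # g(5, 1, 15)  = 2*5 + 3*1  + 15 = 28
print(restrictions[2](1))   # g(5, 10, 1)  = 2*5 + 3*10 + 1  = 41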
Try something like this:
def foo(fun, p, i):
    def bar(x):
        q = list(p)   # work on a copy so the original point p is not modified
        q[i] = x
        return fun(*q)
    return bar
and
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        restrictions.append(foo(g, p, i))
    return restrictions
In my code I need to create a lambda that computes a*x1 + ... + z*x100, where a, ..., z are known parameters. I need to put a for loop inside a lambda expression, to realize a function like this:
x = lambda x: 5*x[0] + 20*x[1] + ... + 21*x[99]
I wonder how to realize this if the number of my variables is 1 million. I do not know how to make it happen. Please help, thank you so much!
If you need to pass both the parameters, you could make a lambda to accept both lists, like so:
a = [1,2,3,4,5]
x = [6,7,8,9,0]
sum_of_products = lambda _a,_x: sum(y*z for y, z in zip(_a, _x))
print(sum_of_products(a,x))
80
Alternatively, and preferably, you can also just define a normal function for this and achieve the same result:
def sum_of_products(a, x):
    return sum(y*z for y, z in zip(a, x))
Once you've written the function, you can also pass it around just like a lambda, so if you were going to assign it to a variable to begin with, it might be easier to read if you just def your function in the normal way.
a = [1,2,3,4,5]
x = [6,7,8,9,0]
def sum_of_products(_a, _x):
    return sum(y*z for y, z in zip(_a, _x))
my_function = sum_of_products
print(my_function(a, x))
80
Try something like this:
lambda x: sum(a * b for a, b in zip(x, [5, 20, ..., 21]))
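If the coefficient vector really has on the order of a million entries, summing a generator term by term will be slow; a NumPy dot product computes the same thing much faster. A small sketch with made-up coefficients (the names coeffs and f are just for illustration):
import numpy as np

coeffs = np.random.rand(1_000_000)      # the known parameters a, ..., z (made up here)
f = lambda x: float(np.dot(coeffs, x))  # f(x) = coeffs[0]*x[0] + ... + coeffs[999999]*x[999999]

x = np.ones(1_000_000)
print(f(x))                             # equals coeffs.sum() for an all-ones input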
I am trying to solve a nonlinear system. Here is the code for a toy problem.
import collections.abc
import numpy as np
import scipy.optimize

def flat(x):
    ''' flattens a shallow list
    ex: [[1,2,3],[4,5],[6]] ----> flattens to [1,2,3,4,5,6]
    numpy flatten does not work on lists.
    '''
    if isinstance(x, collections.abc.Iterable):
        return [a for i in x for a in flat(i)]
    else:
        return [x]

def func(X):
    '''sets up the matrix dynamic equation and the set of constraints
    '''
    A = [[0,1,0,1],[2,1,0,4],[1,4,1,3],[3,2,1,0]]
    A1 = [[1,0,1,-1],[0,-1,2,1],[1,2,0,1],[1,2,0,-2]]
    x = X[:-1]
    alpha = X[-1]
    x0 = [1,2,3,4]
    y = x - x0
    # x[0] = 0.5
    # x[3] = 0.3
    dyneqn = np.dot(A,y) + alpha * np.dot(A1,x)
    cons = (1/2.0)*np.dot(x.T,np.dot(A1,x)) + np.dot([-1,1,2,-3], x) + 0.5
    return flat([dyneqn, cons])

sol = scipy.optimize.root(func,[1,-1,2,0,-1])
sol.x
Problem Statement
The argument X of the objective function f has five unknowns that we are solving for. I want to set the first parameter, i.e., X[0] = 0.5, and the fourth parameter, i.e., X[3] = 0.3, and solve for the remaining 3 unknowns. Let us assume for simplicity that such a solution exists and my initial guess is somehow a good one.
Attempt:
I know I should probably pass these arguments to the args=() argument in scipy.optimize.root. I tried setting
args = (X[0]=0.5, X[3]=0.3)
init_guess = [0.5,-1,2,0.3,-1]
scipy.optimize.root(func,init_guess, args=args)
This is obviously wrong.
Question: How can I fix this?
Note: I added the flat function so that the code is self-contained. It has nothing to do with this question.
Typically, with scipy functions like root, minimize, etc.,
root(func, x0, args=(a, b, c, ...))
requires a func that accepts:
def func(x0, a, b, c, ...):
    # do something with those arguments
    return value
x0 is the value that root varies; a, b, c are the args values that are passed unchanged to your function. Depending on the problem, x0 may be an array. The nature of the args is entirely up to you.
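As a minimal illustration of that mechanism (a toy scalar equation, not the system from the question):
from scipy.optimize import root

def f(x, a):
    # solve x**2 - a = 0; a is passed through unchanged via args
    return x**2 - a

sol = root(f, x0=1.0, args=(2.0,))
print(sol.x)   # approximately sqrt(2)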
From your example I reconstruct that you want to solve for the second and third components of some vector x as well as the parameter alpha. With the args keyword of scipy.optimize.root that would look something like
def func(x_solve, x0, x3):
    # x_solve.size should be 3
    x = np.empty(4)
    x[0], x[3] = x0, x3
    x[1:3] = x_solve[:2]
    alpha = x_solve[2]
    ...
scipy.optimize.root(func, [-1,2,-1], args=(.5, .3))
As Azat and kazemakase pointed out, I'm also not sure if you actually want to use root, but the usage of scipy.optimize.minimize is pretty much the same.
Edit: It should be possible to have a flexible set of fixed variables by using a dictionary as an additional argument which specifies those:
def func(x_solve, fixed):
    x = x_solve[:-1]  # last value is alpha
    for idx in fixed.keys():  # overwrite fixed entries
        x[idx] = fixed[idx]
    alpha = x_solve[-1]

# fixed variables, key is the index
fixed_vars = {0: .5, 3: .3}

# find roots
scipy.optimize.root(func,
                    [.5, -1, 2, .3, -1],
                    args=(fixed_vars,))
That way, when the optimizer in root numerically evaluates the Jacobian it obtains zero for the fixed variables and should therefore leave those invariant. However, that might lead to complications in the convergence of the algorithm.
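For completeness, here is a runnable sketch that plugs the toy system from the question into this pattern (same A, A1, and x0 as above; whether the solver actually converges is subject to the Jacobian caveat just mentioned):
import numpy as np
from scipy.optimize import root

A  = np.array([[0,1,0,1],[2,1,0,4],[1,4,1,3],[3,2,1,0]], dtype=float)
A1 = np.array([[1,0,1,-1],[0,-1,2,1],[1,2,0,1],[1,2,0,-2]], dtype=float)
x0 = np.array([1,2,3,4], dtype=float)

def func(x_solve, fixed):
    x = np.array(x_solve[:-1], dtype=float)  # first four entries are x, the last is alpha
    for idx, val in fixed.items():           # overwrite the fixed entries
        x[idx] = val
    alpha = x_solve[-1]
    dyneqn = A.dot(x - x0) + alpha * A1.dot(x)
    cons = 0.5 * x.dot(A1.dot(x)) + np.dot([-1, 1, 2, -3], x) + 0.5
    return np.append(dyneqn, cons)

fixed_vars = {0: 0.5, 3: 0.3}
sol = root(func, [0.5, -1, 2, 0.3, -1], args=(fixed_vars,))
print(sol.message, sol.x)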
I am now programming the BFGS algorithm, where I need to create a function with a double sum. I need to return a FUNCTION, not a number, so something like sum += is not acceptable.
def func(X, W):
    return a function of double sum of X, W
An illustrative example:
X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
I want to get a function that, for each instance X[i] in X and for each W[j] in W, returns a function of the sum of numpy.dot(X[i], W[j]). For example, X[1] dot W[2] should be 2*3 + 2*3 + 2*3 + 2*3.
----------This content is edited by me:-------------
When I saw the answers provided below, I realized my question was not clear enough. Actually, I want to get a function:
Func = X[0]W[0]+X[0]W[1]+X[0]W[2]+ X[1]W[0]+X[1]W[1]+X[1]W[2]+
X[2]W[0]+X[2]W[1]+X[2]W[2]+ X[3]W[0]+X[3]W[1]+X[3]W[2] +
X[4]W[0]+X[4]W[1]+X[4]W[2]
-------------------end the edited content--------------
If W had only one dimension, the problem would be easy by using numpy.sum(X, W).
However, how can I return a function of the two sums in Python?
If you want to return the function f(i, j) -> X[i].W[j]:
def func(X, W):
    def f(i, j):
        return np.dot(X[i], W[j])
    return f
will work.
EDIT:
The VALUE you name Func in your edit is computed by sum([np.dot(x, w) for x in X for w in W]) or, more efficiently, by np.einsum('ij,kj->', X, W).
If you want to return a FUNCTION that returns Func, you can do it like this:
def func(X, W):
    Func = np.einsum('ij,kj->', X, W)
    return lambda: Func
Then f = func(X, W); print(f()) will print 360, the value named Func in your example.
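A quick self-contained check with the arrays from the question confirms that the comprehension, the einsum expression, and the returned function all agree:
import numpy as np

X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])

print(sum(np.dot(x, w) for x in X for w in W))  # 360
print(np.einsum('ij,kj->', X, W))               # 360

f = func(X, W)  # func as defined above
print(f())      # 360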
If I got your question right, this should do exactly what you want (python-2.7):
import numpy as np

def sample_main():
    X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
    W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
    # reduce is a builtin in Python 2.7; in Python 3 import it from functools
    f = lambda i, j: reduce(lambda a, b: a + b, map(lambda x, w: x * w, X[i], W[j]), 0)
    return f

if __name__ == '__main__':
    f = sample_main()
    print(f(0, 0))
Just replace the sample_main function with your function that takes X and W.
Actually, I want to implement the L-BFGS algorithm in my Python code. Inspired by the two answers provided by @B.M. and @siebenschlaefer, I figured out how to implement it in my code:
func = np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i], w[t], gamma[t])) + log_p_z_x(alpha, beta, X[i]) for t in range(3)) for i in range(5))
Please do not mind the details of the formula; what I want to say is that I use two sums here, just using i in range(5) and t in range(3) to tell the code to do the sums.
Thanks again for the answers provided by @B.M. and @siebenschlaefer!!
Say I have the following code
def myfunc(x):
    return monsterMathExpressionOf(x)
and I would like to find numerically the solution of myfunc(x) == y for diverse values of y. If y == 0 then there are a lot of root finding procedures available, e.g. from scipy. However, if I'd like to find the solution for e.g. y==1 it seems I have to define a new function
def myfunc1(x):
    return myfunc(x) - 1
and then find its root using the available procedures. This approach does not work for me, as I will need to find a lot of solutions by running a loop, and I don't want to redefine the function in each step of the loop. Is there a neater solution?
You don't have to redefine a function for every value of y: just define a single function of y that returns a function of x, and use that function inside your loop:
def wrapper(y):
    def myfunc(x):
        return monsterMathExpressionOf(x) - y
    return myfunc

for y in y_values:
    f = wrapper(y)
    find_root(f, starting_point, ...)
You can also use functools.partial, which may be more to your liking:
from functools import partial

def f(x, y):
    return monsterMathExpressionOf(x) - y

for y in y_values:
    g = partial(f, y=y)
    find_root(g, starting_point, ...)
Read the documentation to see how partial is roughly implemented behind the scenes; you'll see it may not be too different compared to the first wrapper implementation.
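Roughly speaking, partial behaves like this simplified pure-Python sketch (along the lines of what the functools documentation shows, not the actual implementation; partial_sketch is just an illustrative name):
def partial_sketch(func, *args, **keywords):
    # return a callable with some of func's arguments pre-filled
    def newfunc(*more_args, **more_keywords):
        return func(*args, *more_args, **{**keywords, **more_keywords})
    return newfunc

# behaves like partial(f, y=y) from the loop above
g = partial_sketch(lambda x, y: x - y, y=1.0)
print(g(3.0))   # 2.0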
@Evert's answer shows how you can do this by using either a closure or functools.partial, which are both fine solutions.
Another alternative is provided by many numerical solvers. Consider, for example, scipy.optimize.fsolve. That function provides the args argument, which allows you to pass additional fixed arguments to the function to be solved.
For example, suppose myfunc is x**3 + x
def myfunc(x):
    return x**3 + x
Define one additional function that includes the parameter y as an argument:
def myfunc2(x, y):
    return myfunc(x) - y
To solve, say, myfunc(x) = 3, you can do this:
from scipy.optimize import fsolve
x0 = 1.0 # Initial guess
sol = fsolve(myfunc2, x0, args=(3,))
Instead of defining myfunc2, you could use an anonymous function as the first argument of fsolve:
sol = fsolve(lambda x, y: myfunc(x) - y, x0, args=(3,))
But then you could accomplish the same thing using
sol = fsolve(lambda x: myfunc(x) - 3, x0)
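Tying this back to the original loop in the question, here is a short sketch that solves myfunc(x) = y for several values of y with a single function definition (using the toy myfunc from above):
from scipy.optimize import fsolve

def myfunc(x):
    return x**3 + x

def myfunc2(x, y):
    return myfunc(x) - y

x0 = 1.0
for y in [0.0, 1.0, 3.0, 10.0]:
    sol = fsolve(myfunc2, x0, args=(y,))
    print(y, sol[0])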