Realizing a matrix of equations: Using a loop to define functions (Python) - python

For context, I am essentially using a numerical integrator that takes in a set of differential equations defined as functions. A large set of these functions follows a regular pattern, and I would like to define them in a loop (or whatever the most suitable way is). For example:
# system coordinates
s = [y1, y2]

# system equations
def e1(s):
    x1 = s[1]**2 + 1
    return x1

def e2(s):
    x1 = s[2]**2 + 2
    return x1

# equations of motion
eom = [e1, e2]
Not all of the functions will follow the exact same pattern, but for those that do I ideally need something like:
def en(s):
    x1 = s[n]**2 + n
    return x1
where it is possible to iterate over a range of 'n' values. Thanks for any advice.

Why not simply use a second parameter in your function, like so:
def en(s, n):
    x1 = s[n]**2 + n
    return x1

result = []
for i in range(100):          # 100 is just for illustration purposes
    result.append(en(s, i))   # you do not have to store them in a list, just an example
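If the integrator ultimately needs single-argument functions of s, as the eom list in the question suggests, another option is a small factory function that closes over n. The following is only a sketch of that idea, not part of the answer above:
def make_en(n):
    # returns a function of s alone, with n baked in
    def en(s):
        return s[n]**2 + n
    return en

# build as many equations as needed and call one of them
eom = [make_en(n) for n in range(2)]
print(eom[1]([3.0, 4.0]))   # s[1]**2 + 1 = 17.0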

I would use functools.partial, which binds values to a function's arguments:
import functools

def e1(s, n, v1, v2):
    x1 = s[n]**v1 + v2
    return x1

# this was your first example
[functools.partial(e1, n=i, v1=2, v2=1) for i in range(10)]

# your second example
[functools.partial(e1, n=n, v1=2, v2=n) for n in range(10)]
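For completeness, here is a short usage sketch (my own example, assuming the integrator calls each function with the coordinate list s as its only argument). Each partial object behaves like a one-argument function of s, so the resulting list can be used directly as the eom list from the question:
import functools

def e1(s, n, v1, v2):
    return s[n]**v1 + v2

# build the list of single-argument functions and call one of them
s = [3.0, 4.0]
eom = [functools.partial(e1, n=n, v1=2, v2=n) for n in range(2)]
print(eom[1](s))   # s[1]**2 + 1 = 17.0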

Related

How to define a function that has a condition as input?

I need a function that takes a rule/condition as an input. For example, given an array of integers, detect all the numbers that are greater than two, and all the numbers greater than four. I know this can be achieved easily without a function, but I need this to be inside a function. The function I would like to have is like:
def _select(x, rule):
    outp = rule(x)
    return outp

L = np.round(np.random.normal(2, 4, 50), decimals=2)
y = _select(x=L, rule=(>2))
y1 = _select(x=L, rule=(>4))
How should I code a function like this?
Functions are first-class objects, meaning you can treat them like any other variable.
import numpy as np

def _select(x, rule):
    outp = rule(x)
    return outp

def rule_2(val):
    return val > 2

def rule_4(val):
    return val > 4

L = np.round(np.random.normal(2, 4, 50), decimals=2)
y = _select(x=L, rule=rule_2)
print(y)
y1 = _select(x=L, rule=rule_4)
print(y1)
In your example, the condition you want to use can be expressed as a simple expression. The Python lambda keyword lets you define expressions as anonymous functions inside other statements and expressions, so you can replace the explicit def of the rule functions:
import numpy as np

def _select(x, rule):
    outp = rule(x)
    return outp

L = np.round(np.random.normal(2, 4, 50), decimals=2)
y = _select(x=L, rule=lambda val: val > 2)
print(y)
y1 = _select(x=L, rule=lambda val: val > 4)
print(y1)

Sum of Functions in Python

I have a function f(x,a) where 'x' is a variable and 'a' is a parameter. I want to create a function F(x) that is a sum of f(x,a) over a range of parameter values 'a', for instance:
F(x) = f(x,a1) + f(x,a2) + f(x,a3) + ... + f(x,aN)
Since I have a large range for 'a' (a=[a1,a2,a3,...,aN]), I want to write a program for this, but I don't know how. For instance:
import numpy as np

# Black-body radiation equation: 'x' is related to frequency and 'a' is related to temperature
def f(x, a):
    return x**3/(np.exp(a*x) - 1)

# range for parameter a:
a = [1000, 2000, 3000, 4000, 5000, 6000]

# superposition of spectra
def F(x):
    return f(x,a[0]) + f(x,a[1]) + f(x,a[2]) + f(x,a[3]) + f(x,a[4]) + f(x,a[5])
The last line of F(x) isn't very smart, so I tried to write the sum above as a loop using the sum() function:
def F(x):
    spectrum = []
    for i in a:
        spectrum = sum(f(x, i))
    return spectrum
But as I don't have much experience with Python, this doesn't work and I get the error:
import matplotlib.pyplot as plt

x = np.linspace(0, 100, 500)
plt.plot(x, F(x))
plt.show()
# ValueError: x and y must have same first dimension, but have shapes (500,) and (1,)
Does anyone know how to do this? Thank you very much.
From what I understand, this should do the job:
def F(x):
    return sum(f(x, _a) for _a in a)
The thing passed to the sum() function is a generator expression (closely related to a list comprehension); feel free to look up this Python feature if you are interested in Python coding: it is very powerful.
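Putting the pieces together, here is a minimal runnable sketch (the x range is my own choice, picked only to keep a*x small and to avoid the division by zero at x = 0; it is not taken from the question):
import numpy as np

# f and a reused from the question
def f(x, a):
    return x**3 / (np.exp(a*x) - 1)

a = [1000, 2000, 3000, 4000, 5000, 6000]

def F(x):
    # sum() adds the arrays f(x, a_i) element-wise, so F(x) keeps the shape of x
    return sum(f(x, a_i) for a_i in a)

x = np.linspace(1e-4, 0.02, 500)
print(F(x).shape)   # (500,), same first dimension as x, so plt.plot(x, F(x)) works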

passing in initial/boundary conditions for a function in scipy.optimize.root as the args argument

I am trying to solve a non-linear system. Here is the code for a toy problem.
import collections.abc
import numpy as np
import scipy.optimize

def flat(x):
    ''' flattens a shallow list
    ex: [[1,2,3],[4,5],[6]] ----> flattens to [1,2,3,4,5,6]
    numpy flatten does not work on lists.
    '''
    if isinstance(x, collections.abc.Iterable):
        return [a for i in x for a in flat(i)]
    else:
        return [x]

def func(X):
    '''sets up the matrix dynamic equation and the set of constraints
    '''
    A = [[0,1,0,1],[2,1,0,4],[1,4,1,3],[3,2,1,0]]
    A1 = [[1,0,1,-1],[0,-1,2,1],[1,2,0,1],[1,2,0,-2]]
    x = X[:-1]
    alpha = X[-1]
    x0 = [1,2,3,4]
    y = x - x0
    # x[0] = 0.5
    # x[3] = 0.3
    dyneqn = np.dot(A, y) + alpha * np.dot(A1, x)
    cons = (1/2.0)*np.dot(x.T, np.dot(A1, x)) + np.dot([-1,1,2,-3], x) + 0.5
    return flat([dyneqn, cons])

sol = scipy.optimize.root(func, [1,-1,2,0,-1])
sol.x
Problem Statement
The argument X of the objective function func has five unknowns that we are solving for. I want to fix the first parameter, i.e., X[0] = 0.5, and the fourth parameter, i.e., X[3] = 0.3, and solve for the remaining 3 unknowns. Let us assume for simplicity that such a solution exists and that my initial guess is somehow a good one.
Attempt:
I know I should probably pass these arguments to the args=() argument in scipy.optimize.root. I tried setting
args = (X[0]=0.5, X[3]=0.3)
init_guess = [0.5,-1,2,0.3,-1]
scipy.optimize.root(func,init_guess, args=args)
This is obviously wrong.
Question: How can I fix this?
Note: I added the flat function so that the code is self-contained. It has nothing to do with this question.
Typically, with scipy functions like root, minimize, etc.,
root(func, x0, args=(a, b, c, ...))
requires a func that accepts
def func(x0, a, b, c, ...):
    # do something with those arguments
    return value
x0 is the value that root varies; a, b, c are the args values that are passed unchanged to your function. Depending on the problem, x0 may be an array. The nature of the args is entirely up to you.
From your example I reconstruct that you want to solve for the second and third components of some vector x, as well as for the parameter alpha. With the args keyword of scipy.optimize.root that would look something like
def func(x_solve, x0, x3):
    # x_solve.size should be 3
    x = np.empty(4)
    x[0], x[3] = x0, x3
    x[1:3] = x_solve[:2]
    alpha = x_solve[2]
    ...

scipy.optimize.root(func, [-1, 2, -1], args=(.5, .3))
As Azat and kazemakase pointed out, I'm also not sure if you actually want to use root, but the usage of scipy.optimize.minimize is pretty much the same.
Edit: It should be possible to have a flexible set of fixed variables by using a dictionary as an additional argument which specifies those:
def func(x_solve, fixed):
    x = x_solve[:-1]             # last value is alpha
    for idx in fixed.keys():     # overwrite fixed entries
        x[idx] = fixed[idx]
    alpha = x_solve[-1]

# fixed variables, key is the index
fixed_vars = {0: .5, 3: .3}

# find roots
scipy.optimize.root(func,
                    [.5, -1, 2, .3, -1],
                    args=(fixed_vars,))
That way, when the optimizer in root numerically evaluates the Jacobian it obtains zero for the fixed variables and should therefore leave those invariant. However, that might lead to complications in the convergence of the algorithm.
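For concreteness, here is a sketch of that dictionary approach applied to the toy problem from the question. The function body is reconstructed from the question's func, and whether root actually converges despite the zero Jacobian columns is not guaranteed, as noted above:
import numpy as np
import scipy.optimize

def func(x_solve, fixed):
    A = np.array([[0,1,0,1],[2,1,0,4],[1,4,1,3],[3,2,1,0]])
    A1 = np.array([[1,0,1,-1],[0,-1,2,1],[1,2,0,1],[1,2,0,-2]])
    x = np.array(x_solve[:-1], dtype=float)
    for idx, val in fixed.items():   # overwrite the fixed entries
        x[idx] = val
    alpha = x_solve[-1]
    x0 = np.array([1, 2, 3, 4])
    y = x - x0
    dyneqn = A.dot(y) + alpha * A1.dot(x)
    cons = 0.5 * x.dot(A1.dot(x)) + np.dot([-1, 1, 2, -3], x) + 0.5
    return np.append(dyneqn, cons)

fixed_vars = {0: 0.5, 3: 0.3}   # key is the index of the fixed entry
sol = scipy.optimize.root(func, [0.5, -1, 2, 0.3, -1], args=(fixed_vars,))
print(sol.x)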

Double sum in Python

I am now programming the BFGS algorithm, where I need to create a function with a double sum. I need to return a FUNCTION, not a number, so something like sum += is not acceptable.
def func(X, W):
    return a function of the double sum of X, W
An illustrative example:
X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
I want to get a function that, for each instance X[i] in X and for each W[j] in W, returns a function of the sum of numpy.dot(X[i], W[j]). For example, X[1] dot W[2] should be 2*3 + 2*3 + 2*3 + 2*3.
----------This content is edited by me:-------------
When I saw the answers provided below, I think my question is not clear enough. Actually, I want to get a function:
Func = X[0]W[0]+X[0]W[1]+X[0]W[2]+ X[1]W[0]+X[1]W[1]+X[1]W[2]+
X[2]W[0]+X[2]W[1]+X[2]W[2]+ X[3]W[0]+X[3]W[1]+X[3]W[2] +
X[4]W[0]+X[4]W[1]+X[4]W[2]
-------------------end the edited content--------------
If I only had one dimension of W, the problem would be easy using numpy.sum(X,W).
However, how can I return a function of two sums in Python?
If you want to return the function f(i,j) -> X[i].W[j]:
def func(X, W):
    def f(i, j):
        return np.dot(X[i], W[j])
    return f
will work.
EDIT:
The VALUE you name Func in your edit is computed by sum([np.dot(x, w) for x in X for w in W]) or, more efficiently, np.einsum('ij,kj->', X, W).
If you want to return the FUNCTION that returns Func, you can do it like this:
def func(X, W):
    Func = np.einsum('ij,kj->', X, W)
    return lambda: Func
Then f = func(X, W); print(f()) will print 360, the value named Func in your example.
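As a quick check (my own snippet, not part of the answer), the einsum expression does match the explicit double sum for the arrays from the question:
import numpy as np

X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])

explicit = sum(np.dot(x, w) for x in X for w in W)   # the double sum written out
via_einsum = np.einsum('ij,kj->', X, W)              # same value in one call
print(explicit, via_einsum)                          # 360 360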
If I got your question right, this should do exactly what you want (Python 2.7):
import numpy as np

def sample_main():
    X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
    W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
    f = lambda i, j: reduce(lambda a, b: a + b, map(lambda x, w: x*w, X[i], W[j]), 0)
    return f

if __name__ == '__main__':
    f = sample_main()
    print(f(0, 0))
Just replace the sample_main function with your function that takes X and W.
Actually, I want to implement the L-BFGS algorithm in my Python code. Inspired by the two answers provided by #B.M. and #siebenschlaefer, I figured out how to implement it in my code:
func = np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i], w[t], gamma[t])) + log_p_z_x(alpha, beta, X[i]) for t in range(3)) for i in range(5))
Please do not mind the details of the formula; what I want to say is that I use two sums here, using i in range(5) and t in range(3) to tell the code to do the sums.
Thanks again for the answers provided by #B.M. and #siebenschlaefer!

Multiprocessing in Python: how to implement a loop over "apply_async" as "map_async" using a callback function

I would like to integrate a system of differential equations for several parameter combinations using Python's multiprocessing module. The system should be integrated, and for each run the parameter combination should be stored, together with its index and the final value of one of the variables.
While that works fine when I use apply_async - which is already faster than doing it in a simple for loop - I fail to implement the same thing using map_async, which seems to be faster than apply_async. The callback function is never called and I have no clue why. Could anyone explain why this happens and how to get the same output using map_async instead of apply_async?
Here is my code:
from pylab import *
import multiprocessing as mp
from scipy.integrate import odeint
import time

# my system of differential equations
def myODE(yn, tvec, allpara):
    (x, y, z) = yn
    a, b = allpara['para']
    dx = -x + a*y + x*x*y
    dy = b - a*y - x*x*y
    dz = x*y
    return (dx, dy, dz)

# returns the index of the parameter combination, the parameters and the integrated solution
# this way I know which parameter combination belongs to which outcome in the async case
def runMyODE(yn, tvec, allpara):
    return allpara['index'], allpara['para'], transpose(odeint(myODE, yn, tvec, args=(allpara,)))

# for reproducibility
seed(0)

# time settings for integration
dt = 0.01
tmax = 50
tval = arange(0, tmax, dt)

numVar = 3    # number of variables (x, y, z)
numPar = 2    # number of parameters (a, b)
numComb = 5   # number of parameter combinations

INIT = zeros((numComb, numVar))   # initial conditions will be stored here
PARA = zeros((numComb, numPar))   # parameter combinations for a and b will be stored here

# create some initial conditions and random parameters
for combi in range(numComb):
    INIT[combi,:] = append(10*rand(2), 0)   # initial conditions for x and y are randomly chosen, z is 0
    PARA[combi,:] = 10*rand(2)              # parameters a and b are chosen randomly

################# using a loop over apply_async #################
# results will be stored in here
asyncResultsApply = []

# my callback function
def saveResultApply(result):
    # storing the index, a, b and the final value of z
    asyncResultsApply.append((result[0], result[1], result[2][2,-1]))

# start the multiprocessing part
pool = mp.Pool(processes=4)
for combi in range(numComb):
    pool.apply_async(runMyODE, args=(INIT[combi,:], tval, {'para': PARA[combi,:], 'index': combi}), callback=saveResultApply)
pool.close()
pool.join()

for res in asyncResultsApply:
    print res[0], res[1], res[2]   # printing the index, a, b and the final value of z

################# using map_async #################
# the only difference is that the for loop is replaced by a "map_async" call
print "\n\nnow using map\n\n"

asyncResultsMap = []

# my callback function which is never called
def saveResultMap(result):
    # storing the index, a, b and the final value of z
    asyncResultsMap.append((result[0], result[1], result[2][2,-1]))

pool = mp.Pool(processes=4)
pool.map_async(lambda combi: runMyODE(INIT[combi,:], tval, {'para': PARA[combi,:], 'index': combi}), range(numComb), callback=saveResultMap)
pool.close()
pool.join()

# this does not work yet
for res in asyncResultsMap:
    print res[0], res[1], res[2]   # printing the index, a, b and the final value of z
If I understood you correctly, it stems from something that confuses people quite often. apply_async's callback is called once per submitted task, but map_async's is not: it does not call the callback on each element, but rather a single time with the entire result list.
You are correct in noting that map is faster than apply_async. If you want something to happen after each result, there are a few ways to go:
You can effectively add the callback to the operation you want to be performed on each element, and map using that.
You could use imap (or imap_unordered) in a loop, and do the callback within the loop body. Of course, this means that all will be performed in the parent process, but the nature of stuff written as callbacks means that's usually not a problem (it tends to be cheap functions). YMMV.
For example, suppose you have the functions f and cb, and you'd like to map f over es with cb called for each result. Then you could either do:
def look_ma_no_cb(e):
    r = f(e)
    cb(r)
    return r

p = multiprocessing.Pool()
p.map(look_ma_no_cb, es)
or
for r in p.imap(f, es):
    cb(r)
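Applied to the question's setup, the second option could look roughly like the sketch below. The module-level helper runCombi is my own name, replacing the lambda from the question, since functions passed to imap must be picklable; runMyODE, INIT, PARA, tval, numComb and saveResultMap are assumed to be defined as in the question:
def runCombi(combi):
    return runMyODE(INIT[combi,:], tval, {'para': PARA[combi,:], 'index': combi})

pool = mp.Pool(processes=4)
for result in pool.imap(runCombi, range(numComb)):
    saveResultMap(result)   # the "callback" logic now runs in the parent process, once per result
pool.close()
pool.join()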
