Defining a global function in a Python script

I'm new to Python. I am writing a script that will numerically integrate a set of ordinary differential equations using a Runge-Kutta method. Since the Runge-Kutta method is a useful mathematical algorithm, I've put it in its own .py file, rk4.py.
def rk4(x, dt):
    # classical fourth-order Runge-Kutta step
    k1 = diff(x) * dt
    k2 = diff(x + k1/2) * dt
    k3 = diff(x + k2/2) * dt
    k4 = diff(x + k3) * dt
    return x + (k1 + 2*k2 + 2*k3 + k4) / 6
The method needs to know the set of equations the user is working with in order to perform the algorithm, so it calls a function diff(x) that gives rk4 the derivatives it needs. Since the equations change from problem to problem, I want diff() to be defined in the script that runs the particular problem. In this case the problem is the orbit of Mercury, so I wrote mercury.py. (This isn't how it will look in the end, but I've simplified it for the sake of figuring out what I'm doing.)
from rk4 import rk4
import numpy as np

def diff(x):
    return x

def mercury(u0, phi0, dphi):
    x = np.array([u0, phi0])
    dt = 2
    x = rk4(x, dt)
    return x

mercury(1, 1, 2)
When I run mercury.py, I get an error:
File "PATH/mercury.py", line 10, in mercury
x=rk4(x,dt)
File "PATH/rk4.py", line 2, in rk4
k1=diff(x)*dt
NameError: global name 'diff' is not defined
I take it that since diff() is not a global function, rk4 knows nothing about diff when it runs. Obviously rk4 is a small piece of code and I could just shove it into whatever script I'm using at the time, but I think a Runge-Kutta integrator is a fundamental mathematical tool, just like the arrays defined in NumPy, so it makes sense to make it a function that is called rather than one that is redefined in every script that uses it (which may be many). But I also can't go telling rk4.py to import a particular diff from a particular .py file, because that ruins the generality of rk4 that I wanted in the first place.
Is there a way to define diff globally within a script like mercury.py so that when rk4 is called, it will know about diff?

Accept the function as an argument:
def rk4(diff, x, dt):  # accept the function to call as an argument
    k1 = diff(x) * dt
    k2 = diff(x + k1/2) * dt
    k3 = diff(x + k2/2) * dt
    k4 = diff(x + k3) * dt
    return x + (k1 + 2*k2 + 2*k3 + k4) / 6
Then, when you call rk4, simply pass in the function to be executed:
from rk4 import rk4
import numpy as np

def diff(x):
    return x

def mercury(u0, phi0, dphi):
    x = np.array([u0, phi0])
    dt = 2
    x = rk4(diff, x, dt)  # here we pass the function to rk4
    return x

mercury(1, 1, 2)
It might be a good idea for mercury to accept diff as an argument too, rather than picking it up from the surrounding module scope. You then have to pass it in as usual - your call to mercury in the last line would read mercury(diff, 1, 1, 2), as sketched below.
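That variant would look like this:

def mercury(diff, u0, phi0, dphi):  # diff is now an explicit parameter
    x = np.array([u0, phi0])
    dt = 2
    x = rk4(diff, x, dt)
    return x

mercury(diff, 1, 1, 2)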
Functions are 'first-class citizens' in Python (as is nearly everything, including classes and modules), in the sense that they can be used as arguments, be held in lists, be assigned to names in namespaces, etc etc.
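A tiny illustration of that (hypothetical names, just to show the idea):

def double(x):
    return 2 * x

funcs = [double, abs]          # functions held in a list
twice = double                 # a function assigned to another name
print(twice(5), funcs[1](-3))  # prints: 10 3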

diff is already a global in the module mercury.py. But in order to use it in rk4.py you would need to import it like this:
from mercury import diff
That's the direct answer to your question.
However, passing the diff function to rk4 as suggested by @poorsod is much more elegant and also avoids a circular dependency between mercury.py and rk4.py, so I suggest you do it that way.

Related

How do I use Newton's method in Python to solve a system of equations?

I have an assignment where I need to write a single function that runs Newton's method, so that I can plug other functions into it and it will solve them. I wrote one that works for equations with one variable. I only need to solve for one variable from the system, but I don't know how to do that in code without solving for all four of them.
The function I wrote to run Newton's method is this:
def fnewton(function, dx, x, n):
    # define the functions to be evaluated so that this code
    # can be applied to any function I pass in when I call fnewton
    def f(x):
        return eval(function)

    # eval is used to evaluate whatever I put in the function place
    # when I call fnewton; this won't work without eval
    def df(x):
        return eval(dx)

    # this is literally just Newton's method;
    # putting n in the range makes it count iterations
    for intercept in range(1, n):
        x = x - (f(x) / df(x))
    print('root is at')
    print(x)
    print('after this many iterations:')
    print(n)
My current system-of-equations function looks like this:
def c(x):
    T = x[0]
    y = x[1]
    nl = x[2]
    nv = x[3]
    RLN = .63*Antoine(An, Bn, Cn, T) - y*760
    RLH = (1-.63)*Antoine(Ah, Bh, Ch, T) - (1-y)*760
    N = .63*nl + y*nv - 50
    H = (1-.63)*nl + (1-y)*nv - 50
    return [RLN, RLH, N, H]
To use my function to solve this I've entered multiple variations of:
fnewton("c(x)","dcdx(x)", (2,2,2,2), 10)
Do I need to change the system of equations into one equation somehow? I don't know how to manipulate my code to make this work while still working for equations with only one variable.
To perform Newton's method in multiple dimensions, you must replace the simple derivative by a Jacobian matrix, that is, a matrix that holds the derivatives of all components of your multidimensional function with respect to all of the variables. This is described here: https://en.wikipedia.org/wiki/Newton%27s_method#Systems_of_equations
(or, perhaps more helpful, here: https://web.mit.edu/18.06/www/Spring17/Multidimensional-Newton.pdf in Sec. 1.4)
Instead of f(x)/f'(x), you work with the inverse of the Jacobian matrix times the vector function f, i.e. the update is x_{n+1} = x_n - J(x_n)^{-1} f(x_n). So the formula is actually quite similar!
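A minimal NumPy sketch of that update; the functions F and J below are hypothetical stand-ins for your system, and the linear system is solved at each step rather than inverting the Jacobian:

import numpy as np

def newton_system(F, J, x, n):
    # multidimensional Newton: repeatedly solve J(x) * step = F(x)
    for _ in range(n):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
    return x

# example: intersection of the unit circle with the line y = x
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton_system(F, J, np.array([1.0, 0.5]), 10))  # approx [0.7071, 0.7071]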

How to define a python function 'on the fly' for use with pymanopt/autodifferentiation

I had no idea how to phrase the title of this question, so apologies for any confusion there. I am using the pymanopt package for optimization and would like to be able to create some sort of a function/method that allows for a generalized input (a variable number of input arrays). To use pymanopt, one has to provide a cost function, defined in terms of the arrays that are to be optimized, which the solver minimizes.
For example, a cost function could be:
@pymanopt.function.Autograd
def f(A, B):
    return ((X - A @ B.T)**2).sum()
To do the optimization, the variable X is defined prior to f, then f is supplied as the cost function to the pymanopt solver. Optimization is done with respect to the arguments of f and these arrays are returned by pymanopt with values that minimize the cost function.
Ideally, I would like to be able to do this definition more dynamically. So instead of defining a function in terms of hard coded arrays, to be able to supply a list of variables to be optimized. So if my cost function was instead:
@pymanopt.function.Autograd
def f(L):
    return ((X - np.linalg.multi_dot(L))**2).sum()
Where the arrays A,B,...,C would be stored in a list, L. However, as far as I can tell, the variables to be optimized have to be directly defined as individual arrays in the cost function supplied to the solver.
The only thing I can think of doing is to define the cost function by creating a string that contains the 'hard coded' function and executing it via exec() with something like this:
args = ','.join(['A{}'.format(i) for i in range(len(L))])
exec('@pymanopt.function.Autograd\ndef f({}):\n\treturn ((X - np.linalg.multi_dot([{}]))**2).sum()'.format(args, args))
but I understand that using this method should be avoided if possible. Any advice for navigating this sort of problem is greatly appreciated - thanks! Please let me know if anything is unclear/doesn't make sense.
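One exec-free alternative is to close over X and give the cost function a variable-length signature. Whether pymanopt's decorator and argument inspection accept a *args function is an assumption you would need to verify against the docs for your version, but the closure technique itself is plain Python:

import numpy as np

def make_cost(X):
    # build a cost function over any number of factor matrices
    def cost(*matrices):
        return ((X - np.linalg.multi_dot(matrices))**2).sum()
    return cost

# hypothetical usage with three factors
X = np.random.rand(4, 4)
f = make_cost(X)
A, B, C = np.eye(4), np.eye(4), np.eye(4)
print(f(A, B, C))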

Can I pass additional arguments into a scipy bvp function or boundary conditions? / SciPy questions from MATLAB

I'm working on converting a code that solves a BVP (Boundary Value Problem) from MATLAB to Python (SciPy). However, I'm having a little bit of trouble. I wanted to pass a few arguments into the function and the boundary conditions; so in MATLAB, it's something like:
solution = bvp4c(@(x,y)fun(x,y,args), @(ya,yb)boundarycond(ya,yb,arg1,args), solinit);
Where arg1 is a value and args is a structure or a class instance. So I've been trying to do this in SciPy, and the closest I've gotten is something like:
solBVP = solve_bvp(func(x,y,args), boundC(x,y,arg1,args), x, y)
But then it errors out saying that y is not defined (which it isn't, because it's the vector of first order DEs).
Has anyone tried to pass additional arguments into solve_bvp? If so, how did you manage to do it? I have a workaround right now essentially putting args and arg1 as global variables, but that seems incredibly sketchy if I want to make this code somewhat modular.
Thank you!
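The usual SciPy idiom mirrors the MATLAB anonymous functions: wrap your callbacks in lambdas that capture the extra arguments and hand the wrappers to solve_bvp. A sketch using the names from the question (func, boundC, arg1, args, plus the mesh x and initial guess y), noting that the boundary-condition callback receives (ya, yb):

from scipy.integrate import solve_bvp

solBVP = solve_bvp(lambda x, y: func(x, y, args),
                   lambda ya, yb: boundC(ya, yb, arg1, args),
                   x, y)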

How can I use a decorator to wrap the result of my function inside of multiple external library functions?

I've only recently learned about decorators, and despite reading nearly every search result I can find about this question, I cannot figure this out. All I want to do is define some function "calc(x,y)", and wrap its result with a series of external functions, without changing anything inside of my function, nor its calls in the script, such as:
@tan
@sqrt
def calc(x, y):
    return (x + y)

### calc(x,y) == tan(sqrt(calc(x,y)))
### Goal is to have every call of calc in the script automatically nest like that.
After reading about decorators for almost 10 hours yesterday, I got the strong impression this is what they are used for. I do understand that there are various ways to modify how the functions are passed to one another, but I can't find any obvious guide on how to achieve this. I read that maybe functools.wraps can be used for this purpose, but I cannot figure that out either.
Most of the desire here is to be able to quickly and easily test how different functions modify the results of others, without having to tediously wrap function calls in parentheses... That is, to avoid having to mess with parentheses at all, by having my modifier test functions defined on their own lines.
A decorator is simply a function that takes a function and returns another function.
def tan(f):
    import math
    def g(x, y):
        return math.tan(f(x, y))
    return g
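The sqrt decorator is analogous, and stacking the two applies the innermost one first. A self-contained sketch completing the example:

import math

def tan(f):
    def g(x, y):
        return math.tan(f(x, y))
    return g

def sqrt(f):
    def g(x, y):
        return math.sqrt(f(x, y))
    return g

@tan
@sqrt
def calc(x, y):
    return x + y

print(calc(1, 3))  # tan(sqrt(1 + 3)) = tan(2.0)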

Can I pass the objective and derivative functions to scipy.optimize.minimize as one function?

I'm trying to use scipy.optimize.minimize to minimize a complicated function. I noticed in hindsight that the minimize function takes the objective and derivative functions as separate arguments. Unfortunately, I've already defined a function which returns the objective function value and first-derivative values together -- because the two are computed simultaneously in a for loop. I don't think there is a good way to separate my function into two without the program essentially running the same for loop twice.
Is there a way to pass this combined function to minimize?
(FYI, I'm writing an artificial neural network backpropagation algorithm, so the for loop is used to loop over training data. The objective and derivatives are accumulated concurrently.)
Yes, you can pass them in a single function:
import numpy as np
from scipy.optimize import minimize

def f(x):
    # returns (objective value, gradient) as one tuple
    return np.sin(x) + x**2, np.cos(x) + 2*x

sol = minimize(f, [0], jac=True, method='L-BFGS-B')
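Here jac=True tells minimize that f returns the objective and its gradient together, so no separate derivative callback is needed. A quick check of the example above:

print(sol.x)    # minimizer, approx [-0.4502]
print(sol.fun)  # objective value at that point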
Something else that might work: you can memoize the function, meaning that if it is called with the same inputs a second time, it simply returns the cached outputs for those inputs without doing the actual work again. In a nonlinear program there could be thousands of calls, which suggests a large cache, but most memoizers let you specify a cache limit, with the population managed FIFO. In other words, you still benefit fully in your particular case, because the inputs repeat only when you need the function value and the derivative around the same point in time, so a small cache should suffice.
You don't say whether you are using Python 2 or 3. In Python 3.2+, you can use functools.lru_cache as a decorator to provide this memoization. Then you write your code like this:
import functools

@functools.lru_cache()
def original_fn(x):
    blah
    return fnvalue, fnderiv

def new_fn_value(x):
    fnvalue, fnderiv = original_fn(x)
    return fnvalue

def new_fn_deriv(x):
    fnvalue, fnderiv = original_fn(x)
    return fnderiv
Then you pass each of the new functions to minimize. You still have a penalty because of the second call, but it will do no work if x is unchanged. You will need to research what unchanged means in the context of floating point numbers, particularly since the change in x will fall away as the minimization begins to converge.
There are lots of recipes for memoization in py2.x if you look around a bit.
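A minimal hand-rolled version of such a recipe, for illustration:

def memoize(fn):
    cache = {}
    def wrapper(x):
        if x not in cache:  # compute only on a cache miss
            cache[x] = fn(x)
        return cache[x]
    return wrapper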
Did I make any sense at all?
