I am trying to implement a numerical integration. Specifically, I want to evaluate an integral of the form A(t) = integral from 0 to t of exp(-gamma*(t-s)) * B(s) ds, and applying the trapezoidal rule gives an analytical expression for the discrete sum. Here is my attempt at implementing it numerically:
import numpy as np

gamma = 0.1   # parameter
Tmax = 100
N = 100000    # number of steps
t = np.linspace(0, Tmax, N)
B = np.random.rand(len(t))

def A(t):
    A = np.zeros(len(t))   # intended output array
    trapezi = 0
    # sum over the interior points of the trapezoidal rule
    for i in range(1, N-1):
        trapezi += np.exp(-gamma*(Tmax - t[i])) * B[i]
    h = Tmax/N
    # add the two half-weighted endpoints; this overwrites A with a single scalar
    A = h*(trapezi + 1/2*(B[-1] + B[0]*np.exp(-gamma*Tmax)))
    return A
where I have created a random array B.
The problem is that right now the output of A(t) is just a single value (i.e. for this t I get A(t)=0.47), but what I need is an array of the function going from t=0 up to each t. This is my main problem: how do I use the variable t both as a parameter inside the integration and as the variable that A depends on?
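To make it concrete, I think something along these lines may be what I am after, using scipy.integrate.cumulative_trapezoid (cumtrapz in older SciPy versions) and pulling the t-dependence out of the integral via exp(-gamma*(t-s)) = exp(-gamma*t)*exp(gamma*s), but I am not sure this is the right approach:

from scipy.integrate import cumulative_trapezoid

# gamma, t and B as defined above
# inner[i] approximates the integral from 0 to t[i] of exp(gamma*s) * B(s) ds
inner = cumulative_trapezoid(np.exp(gamma * t) * B, t, initial=0)
# multiply by exp(-gamma*t) to recover A at every grid point
A_array = np.exp(-gamma * t) * inner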
Any advice would be highly welcomed.
I have an assignment where I need to write a single function that runs Newton's method, so that I can plug other defined functions into it and it will solve them all. I wrote one that works for equations with one variable. I only need to solve for one variable from the system, but I don't know how to do that in code without solving for all four of them.
The function I wrote to run Newton's method is this:
def fnewton(function, dx, x, n):
    # defined the functions that need to be evaluated so that this code can be applied to any function I call
    def f(x):
        f = eval(function)
        return f
    # eval is used to evaluate whatever I put in the function place when I recall fnewton
    # this won't work without eval to run the functions
    def df(x):
        df = eval(dx)
        return df
    for intercept in range(1, n):
        i = x - (f(x)/df(x))
        x = i
        # this is literally just Newton's method
        # to find an intercept you can literally input intercept in a for loop and it'll do it for you
        # I just found this out
        # putting n in the range makes it count iterations
    print('root is at')
    print(x)
    print('after this many iterations:')
    print(n)
My current system-of-equations function looks like this:
def c(x):
    T = x[0]
    y = x[1]
    nl = x[2]
    nv = x[3]
    RLN = .63*Antoine(An, Bn, Cn, T) - y*760
    RLH = (1-.63)*Antoine(Ah, Bh, Ch, T) - (1-y)*760
    N = .63*nl + y*nv - 50
    H = (1-.63)*nl + (1-y)*nv - 50
    return [RLN, RLH, N, H]
To use my function to solve this I've entered multiple variations of:
fnewton("c(x)", "dcdx(x)", (2, 2, 2, 2), 10)
Do I need to change the system of equations into one equation somehow? I don't know how to manipulate my code to make this work while still working for equations with only one variable.
To perform Newton's method in multiple dimensions, you must replace the simple derivative by a Jacobian matrix, that is, a matrix that has the derivatives of all components of your multidimensional function with respect to all given variables. This is described here: https://en.wikipedia.org/wiki/Newton%27s_method#Systems_of_equations
(or, perhaps more helpfully, here: https://web.mit.edu/18.06/www/Spring17/Multidimensional-Newton.pdf in Sec. 1.4).
Instead of f(x)/f'(x), you need to work with the inverse of the Jacobian matrix times the vector function f. So the formula is actually quite similar!
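As a rough sketch of what that looks like in NumPy (the toy system and the helper newton_system below are just illustrations; for your problem you would pass in c and its Jacobian):

import numpy as np

def newton_system(F, J, x0, n=20, tol=1e-10):
    # F returns the residual vector at x, J returns the Jacobian matrix at x
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        step = np.linalg.solve(J(x), F(x))   # solve J(x) @ step = F(x) instead of inverting J
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# toy system: x0**2 + x1**2 = 1 and x0 = x1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton_system(F, J, [2.0, 2.0]))   # converges to roughly [0.7071, 0.7071]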
I’m trying to solve a simple ODE to visualise the temporal response, which works well for constant input conditions using the new solve_ivp integration API in SciPy. For example:
from scipy.integrate import solve_ivp

def dN1_dt_simple(t, N1):
    return -100 * N1

# N0 is the initial condition, defined elsewhere
sol = solve_ivp(fun=dN1_dt_simple, t_span=[0, 100e-3], y0=[N0])
However, I wonder whether it is possible to plot the response to a time-varying input. For instance, rather than having y0 fixed at N0, can I find the response to a simple sinusoid?
Is there a compatible way to pass time-varying input conditions into the API?
The function you pass to solve_ivp has a t in its signature for exactly this reason. You can do with it whatever you like¹. For example, to get a smooth, one-time pulse, you can do:
from numpy import pi, cos
from scipy.integrate import solve_ivp

def fun(t, N1):
    # smooth one-time pulse, non-zero only for 0 < t < 2*pi
    drive = 1 - cos(t) if 0 < t < 2*pi else 0
    return -100*N1 + drive

sol = solve_ivp(fun=fun, t_span=[0, 20], y0=[N0])
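For the sinusoid you ask about, the same pattern applies; a quick sketch (the value of N0 and the 50 Hz drive are placeholders, not anything from your system):

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

N0 = 1.0   # placeholder initial condition

def fun_sine(t, N1):
    drive = np.sin(2*np.pi*50*t)   # sinusoidal forcing instead of a pulse
    return -100*N1 + drive

sol = solve_ivp(fun_sine, t_span=[0, 100e-3], y0=[N0], max_step=1e-4)
plt.plot(sol.t, sol.y[0])
plt.show()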
Note that y0 is not the input in your use of the term, but the initial condition. It is defined and makes sense for one point in time only – where you start your integration/simulation.
With ODEs, you typically model external inputs as forces or similar (affecting the time derivative of the system like in the above example) rather than direct changes to the state.
With this approach and in your context of an excitable system, N0 is already the outcome of some external input.
¹ As long as it is sufficiently smooth for the needs of the respective integrator, usually continuously differentiable (C¹). If you want to implement a step, better use a very sharp sigmoid instead.
I just wanted to ask you all what fitfunc and errfunc, followed by scipy.optimize.leastsq, are doing, intuitively. I am not really used to Python, but I would like to understand this. Here is the code that I am trying to understand.
def optimize_parameters2(p0, mz):
    fitfunc = lambda p, p0, mz: calculate_sp2(p, p0, mz)
    errfunc = lambda p, p0, mz: exp - fitfunc(p, p0, mz)
    return scipy.optimize.leastsq(errfunc, p0, args=(p0, mz))
Can someone please explain what this code is saying narratively word by word?
Sorry for being so specific but I really do have trouble understanding what it's saying.
This particular code snippet is implementing nonlinear least-squares regression to find the parameters of a curve function (this is the fitfunc, here) that best fit a set of data (exp, probably an abbreviation for "experimental data"). leastsq() is a somewhat more general routine for doing nonlinear least-squares optimization, not just curve-fitting. It requires a function (named errfunc, here) that is given a vector of parameters (p) and returns an array. It will attempt to find the parameter vector that minimizes the square of the returned array. In order to implement "fitting a curve to data" with leastsq, you have to provide an errfunc that evaluates the curve (fitfunc) at the given trial parameter vector and then subtracts it from the data (i.e. calculate the "error" or sometimes called the "residuals").
Just to be clear, none of these names are important. I'm just using them to refer to specific parts of the code snippet you provided. You will find other code that uses leastsq() for curve-fitting that names and organizes the code a little bit differently, but now that you know the general scheme, you should be able to follow along.
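To make the scheme concrete, here is a small self-contained version of the same pattern (the model and the fake "experimental" data are made up for the example):

import numpy as np
from scipy.optimize import leastsq

# fake "experimental" data: an exponential decay plus a little noise
xdata = np.linspace(0, 10, 50)
exp_data = 3.0 * np.exp(-0.5 * xdata) + 0.05 * np.random.randn(50)

fitfunc = lambda p, x: p[0] * np.exp(-p[1] * x)   # the model curve
errfunc = lambda p, x, y: y - fitfunc(p, x)       # residuals: data minus model

p0 = [1.0, 1.0]                                   # initial guess for the parameters
p_best, ier = leastsq(errfunc, p0, args=(xdata, exp_data))
print(p_best)                                     # should come out near [3.0, 0.5]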
Python supports the creation of anonymous functions (i.e. functions that are not bound to a name) at runtime, using a construct called lambda. In your example, fitfunc and errfunc are two such lambda functions.
I believe calculate_sp2 and exp are simply defined elsewhere in the code, but you didn't provide them in the example. So, in short, fitfunc calls the calculate_sp2 function with three parameters (p, p0, mz) and returns the value that calculate_sp2 returns; errfunc works in a similar manner.
As mentioned in the official documentation of scipy.optimize.leastsq, leastsq() minimizes the sum of squares of a set of equations; you can learn about its parameters there as well.
I am giving a simple example to illustrate how a lambda function works.
def add(x, y):
    return x + y

def subtract(x, y):
    return x - y if x > y else y - x

def main(x, y):
    addition = lambda x, y: add(x, y)
    subtraction = lambda x, y: subtract(x, y)
    return addition(x, y) * subtraction(x, y)

print(main(7, 4))   # prints 33, which is equal to (7+4)*(7-4)
EDIT: looks like this was already answered before here
It didn't show up in my searches because I didn't know the right nomenclature. I'll leave the question here for now in case someone arrives here because of the constraints.
I'm trying to optimize a function which is flat at almost all points (a "step function", but in a higher dimension).
The objective is to optimize a set of weights, that must sum to one, and are the parameters of a function which I need to minimize.
The problem is that, as the function is flat at most points, gradient techniques fail because they immediately converge on the starting "guess".
My hypothesis is that this could be solved with (a) annealing or (b) genetic algorithms. SciPy sends me to basinhopping. However, I cannot find any way to use the constraint (the weights must sum to 1) or the ranges (weights must be between 0 and 1) with SciPy.
Actual question: How can I solve a minimization problem without gradients, and also use constraints and ranges for the input variables?
The following is a toy example (evidently this one could be solved using the gradient):
# import minimize
from scipy.optimize import minimize

# define a toy function to minimize
def my_small_func(g):
    x = g[0]
    y = g[1]
    return x**2 - 2*y + 1

# define the starting guess
start_guess = [.5, .5]

# define the acceptable ranges (for [g1, g2] respectively)
my_ranges = ((0, 1), (0, 1))

# define the constraint (they must always sum to 1)
def constraint(g):
    return g[0] + g[1] - 1

cons = {'type': 'eq', 'fun': constraint}

# minimize
minimize(my_small_func, x0=start_guess, method='SLSQP',
         bounds=my_ranges, constraints=cons)
I usually use R so maybe this is a bad answer, but anyway here goes.
You can solve optimization problems like this using a global optimizer. An example of this is Differential Evolution. The linked method does not use gradients. As for constraints, I usually build them in manually. That looks something like this:
# some dummy function to minimize
def objective_function(a, b):
    if a + b != 1:
        # if the constraint is not met, return a very high value,
        # indicating a very bad fit
        return 1e90
    else:
        # do the actual stuff of interest
        return fit_value
Then you simply feed this function to the differential evolution routine and that should do the trick. Methods like differential evolution are made in particular to solve very high-dimensional problems. However, the constraint you mentioned can be a problem, as it will likely result in very many invalid parameter configurations. This is not necessarily a problem for the algorithm, but it means you need to do a lot of tweaking and should expect a lot of waiting time. Depending on your problem, you could try optimizing weights/parameters in blocks. That means: optimize the parameters given a set of weights, then optimize the weights given the previous set of parameters, and repeat that many times.
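In SciPy the same idea might look roughly like this, using scipy.optimize.differential_evolution on the toy function from the question. Here the constraint is added as a penalty proportional to the violation rather than a flat rejection value, which gives the optimizer something to follow:

import numpy as np
from scipy.optimize import differential_evolution

def penalized_objective(g):
    x, y = g
    penalty = 1e6 * abs(x + y - 1)     # grows with the size of the constraint violation
    return x**2 - 2*y + 1 + penalty    # toy objective from the question plus penalty

bounds = [(0, 1), (0, 1)]              # each weight must lie between 0 and 1
result = differential_evolution(penalized_objective, bounds, seed=0)
print(result.x, result.fun)            # expect x near 0 and y near 1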
Hope this helps :)
I'm trying to use scipy.optimize.minimize to minimize a complicated function. I noticed in hindsight that the minimize function takes the objective and derivative functions as separate arguments. Unfortunately, I've already defined a function which returns the objective function value and first-derivative values together -- because the two are computed simultaneously in a for loop. I don't think there is a good way to separate my function into two without the program essentially running the same for loop twice.
Is there a way to pass this combined function to minimize?
(FYI, I'm writing an artificial neural network backpropagation algorithm, so the for loop is used to loop over training data. The objective and derivatives are accumulated concurrently.)
Yes, you can pass them in a single function:
import numpy as np
from scipy.optimize import minimize
def f(x):
    # return the objective value and its gradient as a single tuple
    return np.sin(x) + x**2, np.cos(x) + 2*x

sol = minimize(f, [0], jac=True, method='L-BFGS-B')
Something that might work: you can memoize the function, meaning that if it gets called with the same inputs a second time, it will simply return the same outputs corresponding to those inputs without doing any actual work the second time. What is happening behind the scenes is that the results are getting cached. In the context of a nonlinear program, there could be thousands of calls, which implies a large cache. Often with memoizers you can specify a cache limit and the population will be managed FIFO. In other words, you still benefit fully in your particular case, because the inputs will be the same only when you need to return the function value and the derivative around the same point in time. So what I'm getting at is that a small cache should suffice.
You don't say whether you are using py2 or py3. In Py 3.2+, you can use functools.lru_cache as a decorator to provide this memoization. Then, you write your code like this:
import functools

@functools.lru_cache()
def original_fn(x):
    blah
    return fnvalue, fnderiv

def new_fn_value(x):
    fnvalue, fnderiv = original_fn(x)
    return fnvalue

def new_fn_deriv(x):
    fnvalue, fnderiv = original_fn(x)
    return fnderiv
Then you pass each of the new functions to minimize. You still have a penalty because of the second call, but it will do no work if x is unchanged. You will need to research what unchanged means in the context of floating point numbers, particularly since the change in x will fall away as the minimization begins to converge.
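A rough, self-contained sketch of the whole pattern (with a made-up quadratic objective; note that lru_cache cannot hash NumPy arrays, so this variant uses a simple one-entry cache keyed on the last x instead):

import numpy as np
from scipy.optimize import minimize

def original_fn(x):
    # made-up combined objective/gradient: f(x) = sum(x**2), grad = 2*x
    return np.sum(x**2), 2*x

_last = {"x": None, "result": None}

def cached_fn(x):
    # recompute only when x actually changes
    if _last["x"] is None or not np.array_equal(x, _last["x"]):
        _last["x"] = np.array(x, copy=True)
        _last["result"] = original_fn(x)
    return _last["result"]

def new_fn_value(x):
    return cached_fn(x)[0]

def new_fn_deriv(x):
    return cached_fn(x)[1]

sol = minimize(new_fn_value, x0=[1.0, -2.0], jac=new_fn_deriv)
print(sol.x)   # should end up close to [0, 0]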
There are lots of recipes for memoization in py2.x if you look around a bit.
Did I make any sense at all?