Pyomo DAE optimal control problem with FREE end points - python

I use pyomo.dae to solve a differential-equation optimization problem.
I defined a set
m.e = ContinuousSet(bounds=(e0, ef))
But I want ef to be a free variable; this is a free-end-point differential problem. How can I accomplish that?

You change your differential equation
y'(x) = f(x,y(x))
over a flexible interval [e0, ef]
to a version over the standard interval [0,1] via
u'(s) = T*f(e0 + T*s, u(s)),   with u(s) = y(e0 + T*s),
where e0 and T = ef - e0 can now be treated like any other decision variables in the problem.
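For illustration, a minimal pyomo.dae sketch of this rescaling might look like the following; the right-hand side f(x, y) = -y, the objective of minimizing T, and the Ipopt solver are placeholders, not part of the original question:
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           NonNegativeReals, TransformationFactory, SolverFactory)
from pyomo.dae import ContinuousSet, DerivativeVar

m = ConcreteModel()
e0 = 0.0

m.s = ContinuousSet(bounds=(0, 1))                   # fixed standard interval
m.T = Var(within=NonNegativeReals, initialize=1.0)   # free horizon length T = ef - e0
m.u = Var(m.s)                                       # u(s) = y(e0 + T*s)
m.duds = DerivativeVar(m.u, wrt=m.s)

def _ode(m, s):
    if s == 0:
        return Constraint.Skip
    # u'(s) = T * f(e0 + T*s, u(s)); here the placeholder f(x, y) = -y
    return m.duds[s] == m.T * (-m.u[s])
m.ode = Constraint(m.s, rule=_ode)

m.u[0].fix(1.0)                                      # initial condition y(e0) = 1
m.obj = Objective(expr=m.T)                          # illustrative: minimize the free end point

TransformationFactory('dae.collocation').apply_to(m, nfe=20, ncp=3)
SolverFactory('ipopt').solve(m)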

Get previous values when solving an ODE in Gekko

I'm trying to solve a DAE in Gekko where some of the components depend on the solution of a convolution integral. This requires a constant dt, but I'm sure that's available somewhere in the options. What I want to do is use a function to record the current value of a state variable in an array and return the sum up to that point. Here is my attempt, using one of the simple ODE examples:
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as plt

m = GEKKO()

class adder:
    def __init__(self):
        self.i = m.Array(m.Param, 10)
        self.count = 0
    def f(self, y):
        self.i[self.count] = y
        self.count += 1
        return sum(self.i)

a = adder()
y = m.Var(value=0)  # state variable (y must be a Gekko Var for y.dt() to work)
m.Equation(y.dt() == -y + 1 + a.f(y))
m.time = np.linspace(0, 5)
m.options.IMODE = 4
m.solve()
but (1) the solution looks incorrect, (2) I can't print anything about the solution from the a.f() function, and (3) even when I set the size of the self.i array to 1 it doesn't throw an out-of-bounds error, so I suspect it's not being called the way I think. I've also seen people suggest using the delay() function, but I don't know how to access which timestep I'm currently in or how to loop over each previous timestep.
Mixing continuous differential equations with discrete equations can be a challenge. Two recommendations are to either (1) convert the DAEs to a discrete form (such as with Orthogonal Collocation on Finite Elements) or (2) use a continuous integral form of the convolution integral. The m.integral() function is available in Gekko to help with any integral expressions. See How to solve integral in Python Gekko? and Integral Objective (Luus) for two examples of solving problems with continuous integrals.
Gekko calls functions only once when building the model. The model file is found in m.path as gk0_model.apm (text file) or by opening the folder with m.open_folder(). The model must be expressed symbolically so that it can be compiled for automatic differentiation.
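For illustration, a minimal sketch of the continuous-integral route with m.integral() might look like this; it replaces the running sum in the question with a plain integral of y over time, not the actual convolution from the original problem:
import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
m.time = np.linspace(0, 5, 51)

y = m.Var(value=0)
I = m.integral(y)                  # I(t) = integral of y from 0 to t
m.Equation(y.dt() == -y + 1 + I)   # integro-differential equation, expressed symbolically

m.options.IMODE = 4                # dynamic simulation
m.solve(disp=False)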

Python Sympy Solve - how can I call a number of equations dynamically?

I was working on some linear-equation problems involving spline functions, which already exist in Python libraries, but my professor asked me to work out every coefficient of the spline function myself so that I can fully understand its mathematical structure.
As a result, I came up with a set of simultaneous linear equations; the number of equations depends on the user input.
So I allocate my variables dynamically: if the user declares 3 points it prints out 3 equations, with 4 points it prints 4 equations, and so on.
Now I have to solve these equations without knowing in advance how many there will be.
I searched for ways to solve linear equations with SymPy's solve, but none of them showed how to pass the equations dynamically, depending on the user input.
All the articles say something like: oh, it's easy, you can just write:
solve((eq1, eq2), dict=True)
but mine will be like :
solve((eq1, eq2, eq3, eq4, eq5, eq6, ... )
and the variables will be also like (a_0, a_1, a_2, a_3, ...)
I tried using 'eq{}'.format(i) to refer to all the eq variables I made, but it consistently failed for reasons I don't understand.
How can I pass all the equations and variables I made to the solve method dynamically?
I'm a newbie to Python; please help me.
If there are no symbolic variables other than the ones you want to solve for, then my favorite way of getting the symbols is:
from sympy import Tuple, solve

def neq(n):
    # return a list or tuple containing the n equations
    ...

eqs = Tuple(*neq(n))
syms = eqs.free_symbols
sol = solve(eqs, syms)
You will get back a single dictionary (or a list of dictionaries) mapping each symbol to its solution.
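For illustration, a concrete sketch might look like this; the equations are made up just to show the pattern of building them dynamically, and a_0, a_1, ... stand in for the spline coefficients:
from sympy import symbols, Eq, Tuple, solve

n = 3                                           # number of points from user input
a = symbols(f'a_0:{n}')                         # (a_0, a_1, a_2)

# build the equations in a list instead of separate names eq1, eq2, ...
eqs = [Eq(a[j] + (j + 1) * a[(j + 1) % n], j) for j in range(n)]

system = Tuple(*eqs)
sol = solve(system, system.free_symbols)
print(sol)                                      # {a_0: ..., a_1: ..., a_2: ...}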

Python: Solving a second order differential equation with complex initial conditions

I want to solve a second order differential equation with variable coefficients by using something like odeint. The problem with this one is that it doesn't work if the initial conditions are complex (which is the case now).
Do you know a way to solve the aforementioned equation with something similar to odeint?
odeint does not accept complex variables. You could use: the newer solver, solve_ivp; the older ode class with the "zvode" integrator; or odeintw, a wrapper of odeint that I wrote that handles complex-valued and matrix-valued differential equations.
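For example, a minimal sketch with solve_ivp and complex initial conditions might look like this, using a toy equation z'' = -z as a stand-in for the actual variable-coefficient equation:
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    z, dz = u
    d2z = -z                       # replace with the actual f(t, z, dz)
    return [dz, d2z]

z0, dz0 = 1 + 1j, 0 + 0.5j         # complex initial conditions
sol = solve_ivp(rhs, (0, 10), [z0, dz0], t_eval=np.linspace(0, 10, 201))
print(sol.y[0][:5])                # complex-valued z(t)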
You could always work with the real and imaginary components separately (using the odeint argument convention):
def odesys(u, t):
    z = u[0] + 1j*u[1]
    dz = u[2] + 1j*u[3]
    d2z = f(t, z, dz)
    return [dz.real, dz.imag, d2z.real, d2z.imag]
where f stands for the explicit form of the second order ODE.
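Once f is defined, a possible call (with made-up complex initial values) could look like:
import numpy as np
from scipy.integrate import odeint

t = np.linspace(0, 10, 201)
z0, dz0 = 1 + 1j, 0 + 0.5j
u = odeint(odesys, [z0.real, z0.imag, dz0.real, dz0.imag], t)
z = u[:, 0] + 1j*u[:, 1]           # reassemble the complex solution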
If I remember correctly, one of the methods ("zvode") that you can use in scipy.integrate.ode works directly with complex state variables.

How can I minimize a function in Python, without using gradients, and using constraints and ranges?

EDIT: looks like this was already answered before here
It didn't show up in my searches because I didn't know the right nomenclature. I'll leave the question here for now in case someone arrives here because of the constraints.
I'm trying to optimize a function that is flat at almost all points (a "step function", but in higher dimensions).
The objective is to optimize a set of weights, that must sum to one, and are the parameters of a function which I need to minimize.
The problem is that, as the function is flat at most points, gradient techniques fail because they immediately converge on the starting "guess".
My hypothesis is that this could be solved with (a) annealing or (b) genetic algorithms. SciPy points me to basinhopping. However, I cannot find any way to apply the constraint (the weights must sum to 1) or the ranges (the weights must be between 0 and 1) with SciPy.
Actual question: How can I solve a minimization problem without gradients, and also use constraints and ranges for the input variables?
The following is a toy example (evidently this one could be solved using the gradient):
# import minimize
from scipy.optimize import minimize

# define a toy function to minimize
def my_small_func(g):
    x = g[0]
    y = g[1]
    return x**2 - 2*y + 1

# define the starting guess
start_guess = [.5, .5]

# define the acceptable ranges (for [g1, g2] respectively)
my_ranges = ((0, 1), (0, 1))

# define the constraint (they must always sum to 1)
def constraint(g):
    return g[0] + g[1] - 1
cons = {'type': 'eq', 'fun': constraint}

# minimize
minimize(my_small_func, x0=start_guess, method='SLSQP',
         bounds=my_ranges, constraints=cons)
I usually use R, so maybe this is a bad answer, but here goes anyway.
You can solve optimization problems like this using a global optimizer, for example differential evolution. The linked method does not use gradients. As for constraints, I usually build them in manually. That looks something like this:
# some dummy function to minimize
def objective_function(a, b):
    if a + b != 1:      # if some constraint is not met
        # return a very high value, indicating a very bad fit
        return 1e90
    else:
        # do actual stuff of interest
        return fit_value   # placeholder for the actual objective value
Then you simply feed this function to the differential-evolution routine and that should do the trick. Methods like differential evolution are designed to handle very high-dimensional problems in particular. However, the constraint you mention can be a problem, as it will likely produce very many invalid parameter configurations. That is not necessarily a problem for the algorithm, but it means you need to do a lot of tweaking and expect long run times. Depending on your problem, you could try optimizing weights/parameters in blocks: optimize the parameters given a set of weights, then optimize the weights given the previous set of parameters, and repeat many times.
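For illustration, a sketch with SciPy's differential_evolution, reusing the toy function from the question; the hard "return a huge value" penalty is softened here to one proportional to the violation, which tends to behave better for an equality constraint like "the weights must sum to 1":
from scipy.optimize import differential_evolution

def penalized(g):
    x, y = g
    penalty = 1e6 * abs(x + y - 1)    # punish violations of the sum-to-one constraint
    return x**2 - 2*y + 1 + penalty

bounds = [(0, 1), (0, 1)]             # each weight between 0 and 1
result = differential_evolution(penalized, bounds, seed=0)
print(result.x, result.fun)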
Hope this helps :)

fsolve problems with the starting point

I'm using fsolve to solve a nonlinear equation. My problem is that, depending on the starting point, the solutions change, and I am not sure that the ones I found are the most reasonable.
This is the code
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, brentq, newton

A = np.arange(0.05, 0.95, 0.01)
PHI = np.deg2rad(np.arange(0, 90, 1))

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

B = np.zeros((len(A), len(PHI)))
for i in range(len(A)):
    for j in range(len(PHI)):
        a = A[i]          # globals read inside f
        Phi = PHI[j]
        b = fsolve(f, 1)
        B[i, j] = b
I fixed x0 = 1 because it seems to give the most reasonable values. But sometimes I think the method doesn't converge, and the resulting values are too big.
What can I do to find the best solution?
Many thanks!
The eternal issue with turning nonlinear solvers loose is that you need a really good understanding of your function, your initial guess, the solver itself, and the problem you are trying to address.
I note that there are many (a,Phi) combinations where your function does not have real roots. You should do some math, directed by the actual problem you are trying to solve, and determine where the function should have roots. Not knowing the actual problem, I can't do that for you.
Also, as noted in a (since deleted) answer, this is cyclical in b, so using a bounded solver (such as scipy.optimize.minimize with method='L-BFGS-B') might help keep things under control. Note that to find roots with a minimizer you minimize the square of your function. If the minimum found is not close to zero (with "close" defined by you, based on the problem), the actual roots might be a complex-conjugate pair.
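For illustration, a sketch of that bounded-minimizer idea for a single (a, Phi) pair from the grids above, minimizing f(b)**2 with L-BFGS-B and restricting b to one period:
import numpy as np
from scipy.optimize import minimize

a, Phi = 0.5, np.deg2rad(30.0)     # one combination from the grids above

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

res = minimize(lambda b: f(b[0])**2, x0=[1.0],
               method='L-BFGS-B', bounds=[(0, 2*np.pi)])
print(res.x, res.fun)              # if res.fun is not near zero, there may be no real root here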
Good luck.
