Newton-Raphson linearization? Second-order nonlinear ODE (numpy/scipy, Python)

I have been trying to solve the following equation for more than a week:
I have to use the Newton-Raphson method to get an approximate solution for u. I have a script that does that, but I need to linearize this nonlinear ODE. The coefficients k1-k4 are not constants: at each grid point (x = 1-100) they take a different, computed value. The initial condition is u(0) = 0.

Is this a homework assignment?
Also, is it a boundary value problem or an initial value problem? From what you write, it sounds like a BVP. Also, your boundary condition at u(0) alone is not enough for a second-order equation.
If it's a BVP, you can just use scikits.bvp_solver or scikits.bvp1lg, which do the difficult parts for you.
If it's an initial value problem, write the problem as a first-order system and use scipy.integrate.odeint or scipy.integrate.ode.
Regarding linearization (assuming this is a BVP): in practice it is usually enough to compute the partial derivatives required for the Newton method via numerical differentiation.
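As a minimal sketch of that approach (the discretized equation, grid, and boundary values below are placeholders, not your actual k1-k4 problem), the Jacobian needed by Newton's method can be built column by column with forward differences:

```python
import numpy as np

def newton(residual, u0, tol=1e-10, max_iter=50, eps=1e-8):
    """Solve residual(u) = 0 by Newton's method with a forward-difference Jacobian."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F, np.inf) < tol:
            break
        J = np.empty((u.size, u.size))
        for j in range(u.size):            # one extra residual evaluation per column
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (residual(u + du) - F) / eps
        u -= np.linalg.solve(J, F)         # Newton update
    return u

# Placeholder problem: u'' + u**2 = 1 on [0, 1] with u(0) = u(1) = 0,
# discretized with central differences on n interior grid points.
n, h = 99, 1.0 / 100

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))   # boundary values
    return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 + u**2 - 1.0

u = newton(residual, np.zeros(n))
```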


Solving first-order ODE, which contains another ODE (odeint / solve_ivp in Python)

I'm trying to set up a fast numerical solver in Python for a differential problem of the form:
where r is some constant.
I want to integrate A over some time period t of interest. However, this is complicated by the fact that the dA/dt equation includes another variable B, which is itself described by an ODE dB/dt. B is actually a vector, but I've simplified the expression to try and highlight my problems more clearly.
I currently have a solution using a manual Euler method: i.e. compute dB/dt, then use B = B_previous + dB/dt * dt, stepping along manually with a fixed time step size dt. However, this is slow and unreliable. I imagine it would be far better to use the built-in ODE solvers in SciPy, but I'm not sure this is possible given the coupled nature of the problem I'm trying to solve.
Is this possible using SciPy's odeint or solve_ivp? If so, can anyone suggest any pointers? Thanks.
What you have is a coupled system of differential equations, which is standard to solve using Runge-Kutta, Euler, and many other methods. You can use this example to guide you in writing your Python code:
https://scipy-cookbook.readthedocs.io/items/CoupledSpringMassSystem.html
Keep in mind that not all equations can be solved with odeint. If your ODE is "stiff", you will have to choose your algorithm carefully. Stiffness has no single precise definition, but stiff ODEs often arise when the equation contains large or non-integral powers of the dependent variable.
The first step in solving a coupled ODE, though, is to use the standard methods. If they don't work, then look into something else.
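As a minimal sketch with scipy.integrate.solve_ivp (the right-hand sides, the value of r, and the length of B below are made-up placeholders for the real dA/dt and dB/dt):

```python
import numpy as np
from scipy.integrate import solve_ivp

r = 0.5  # the constant from the problem statement; value assumed for illustration

def rhs(t, y):
    """Right-hand side of the coupled system, packed as y = [A, B_1, ..., B_n]."""
    A, B = y[0], y[1:]
    dA = -r * A + B.sum()   # hypothetical dA/dt that couples A to B
    dB = -B + A             # hypothetical vector-valued dB/dt
    return np.concatenate(([dA], dB))

y0 = np.concatenate(([1.0], np.zeros(2)))               # A(0) = 1, B(0) = 0
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="LSODA",   # LSODA switches stiff/non-stiff
                dense_output=True)
A_of_t = sol.sol(np.linspace(0.0, 10.0, 200))[0]        # A evaluated on a fine output grid
```

If the system turns out to be stiff, method="Radau" or method="BDF" are the explicitly stiff choices; LSODA detects stiffness and switches automatically.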

Computing intersection of a function with a specific interval using scipy

I'm stuck trying to find existing functions in scipy (or sympy) for the following task:
Suppose we are given the following function:
f(A,B,C) = k1-A*sin(B*k2-C)
For each of the axes A, B, C of the space we have a specific interval, [a_lb, a_ub], [b_lb, b_ub], [c_lb, c_ub], and there is also a target interval [d_lb, d_ub] for the function value.
Which functions in scipy can be used to compute whether the region encompassed by these boundaries is intersected by the given function? I thought of, e.g., computing the Hessian matrix.
Thank you for any hints.
Best regards
If I understand correctly, what you are looking for is whether f(A,B,C), with (A,B,C) restricted to the domain [a_l,a_u]x[b_l,b_u]x[c_l,c_u], takes a value within [d_l,d_u]. You can try using scipy.optimize.minimize for this.
If you run scipy.optimize.minimize on f with the bounds [a_l,a_u]x[b_l,b_u]x[c_l,c_u], you should get the minimal value of f in the domain. Similarly, minimizing -f will give you the maximal value of f in the domain. f intersects the given boundary if and only if the interval [fmin, fmax] intersects the interval [d_l,d_u].
Note that scipy.optimize.minimize is a non-linear optimization and therefore requires an initial guess. The middle point of the domain box is a natural choice, but since the non-linear optimization may encounter a local minimum (or not converge), you may want to try several other initial guesses as well. scipy.optimize.minimize has many (optional) parameters so I recommend you read its documentation and play with them to fine-tune your usage to your needs.
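A rough sketch of that recipe (the k1 and k2 values, the box bounds, and the target interval [d_l, d_u] below are made-up numbers):

```python
import numpy as np
from scipy.optimize import minimize

k1, k2 = 1.0, 2.0                                   # placeholder constants in f

def f(x):
    A, B, C = x
    return k1 - A * np.sin(B * k2 - C)

bounds = [(0.0, 2.0), (-1.0, 1.0), (0.0, np.pi)]    # [a_l,a_u] x [b_l,b_u] x [c_l,c_u]
x0 = [np.mean(b) for b in bounds]                   # middle of the box as initial guess

fmin = minimize(f, x0, bounds=bounds).fun                   # minimum of f over the box
fmax = -minimize(lambda x: -f(x), x0, bounds=bounds).fun    # maximum of f over the box

d_l, d_u = 0.0, 0.5                                 # placeholder target interval
intersects = (fmin <= d_u) and (fmax >= d_l)        # does [fmin, fmax] meet [d_l, d_u]?
```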

FiPy: spatially varying coefficient outside of gradient?

This may be a simple question, but what is the correct FiPy syntax to use if I want to solve a PDE with a spatially varying coefficient that sits outside of the gradient? All the examples I've seen so far only discuss coefficients inside the gradient.
For example:
d/dt(Sigma) = (1/r) d/dr (r^0.5 d/dr(nu Sigma r^0.5))
(I'm ignoring numerical factors)
and I want to solve for Sigma(t,r). How do I handle (1/r) in front of d/dr?
I know this simple equation can be massaged so that I don't need to worry about a spatially varying coefficient outside of the gradient (or I could simply move the coefficient inside the time-derivative term), but I'll have to add more terms for the actual problem I'm trying to solve, and then the trick will no longer be valid. For instance, what should I do if my equation looks like:
d/dt (var) = f(r) d^2/dr^2 (var) + g(r) d/dr (var)
Any help would be greatly appreciated!
"This may be a simple question, but what is the correct FiPy syntax to use if I want to solve a PDE with a spatially varying coefficient that sits outside of the gradient?"
I think the general equation can always be rewritten in such a way that FiPy can still operate implicitly on the solution variable. In most cases this requires an extra source term.
"I know this simple equation can be massaged so that I don't need to worry about a spatially varying coefficient outside of the gradient (or I could simply move the coefficient inside the time-derivative term), but I'll have to add more terms for the actual problem I'm trying to solve, and then the trick will no longer be valid."
The example equation,

d/dt(Sigma) = (1/r) d/dr (r^0.5 d/dr(nu Sigma r^0.5))

can be represented as

d/dt(r Sigma) = d/dr (nu r d/dr(Sigma)) + d/dr (r^0.5 d/dr(nu r^0.5) Sigma)

in FiPy (a TransientTerm with coefficient r, a DiffusionTerm with coefficient nu*r, and a ConvectionTerm), which doesn't add any extra terms. For the general case,

d/dt(phi) = f(r) d^2/dr^2 (phi) + g(r) d/dr (phi)

the equation can be represented as

(1/f) d/dt(phi) = d^2/dr^2 (phi) + d/dr ((g/f) phi) - d/dr(g/f) phi

Although there is an extra term, the terms are still implicit in phi, and the extra term can be represented as an implicit source term for phi depending on the sign of its coefficient, d/dr(g/f). FiPy should handle the sign issue just by using the ImplicitSourceTerm.
Note that r often occurs outside of the operator when using cylindrical coordinates. FiPy has a mesh class, CylindricalGrid1D, that handles cylindrical coordinates while using the standard terms (no need to include the r spatial variable explicitly in the equations).
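As a rough sketch of how the rewritten general equation could be assembled (the choices f(r) = 1 + r and g(r) = 2r, the boundary values, and the grid parameters are arbitrary placeholders; a Cartesian Grid1D is used because the rewritten equation is stated with plain d/dr operators):

```python
import numpy as np
from fipy import (Grid1D, CellVariable, FaceVariable, TransientTerm,
                  DiffusionTerm, PowerLawConvectionTerm, ImplicitSourceTerm)

# (1/f) d(phi)/dt = d2(phi)/dr2 + d/dr((g/f) phi) - d/dr(g/f) phi
mesh = Grid1D(nx=100, dx=0.01)          # use CylindricalGrid1D when the equation really
r_cell = mesh.cellCenters.value[0]      # contains the cylindrical (1/r) d/dr(r ...) operator
r_face = mesh.faceCenters.value[0]

phi = CellVariable(name="phi", mesh=mesh, value=0.0, hasOld=True)
phi.constrain(1.0, mesh.facesLeft)      # placeholder boundary conditions
phi.constrain(0.0, mesh.facesRight)

f = lambda r: 1.0 + r                   # hypothetical f(r)
g = lambda r: 2.0 * r                   # hypothetical g(r)
w_face = g(r_face) / f(r_face)          # w = g/f on the faces (convective velocity)
w_prime = 2.0 / (1.0 + r_cell)**2       # d(g/f)/dr for these particular f and g

conv_coeff = FaceVariable(mesh=mesh, value=w_face[np.newaxis, :], rank=1)

eq = (TransientTerm(coeff=CellVariable(mesh=mesh, value=1.0 / f(r_cell)))
      == DiffusionTerm(coeff=1.0)
      + PowerLawConvectionTerm(coeff=conv_coeff)
      + ImplicitSourceTerm(coeff=CellVariable(mesh=mesh, value=-w_prime)))

dt = 1.0e-4
for _ in range(100):
    phi.updateOld()
    eq.solve(var=phi, dt=dt)
```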

Using a solution to a PDE to define another PDE - FEniCS

I am currently trying to solve the Monge-Ampère equation in FEniCS by implementing a non-standard boundary condition.
The boundary condition, requires that the gradient of the solution must map the boundary of the original domain to another prescribed domain.
When the target domain is prescribed to be the unit circle, the implementation is quite simple, and I have tackled it by putting the following into my system:
+ (dot(grad(uh), grad(uh)) - 1)*vh*ds    (1)
where uh is a trial function and vh is a test function.
When considering a more complex target space, such as the square [−1,1]×[−1,1], things become more difficult, since it is not so simple to solve by hand, so my idea is to use the distance function.
To do this I have solved a stabilized version of the Eikonal equation, whose solution is the signed distance function; my idea was then to replace (1) with:
+E(grad(uh))*vh*ds
where E is the solution of the Eikonal equation. But when I try to implement this, I get an error stating that the function expected scalar arguments.
Is there a way to program the solution so that it accepts grad(uh) as an input in a second variational form?
Thank you all for your time!
You'd have to specify Neumann conditions (gradient vector) on the common boundary instead of Dirichlet (potential scalar).
If I were modeling a physics problem of conduction/diffusion between two dissimilar regions, conservation of energy would demand that the fluxes on either side of the boundary must balance. How would you express that boundary condition in your equation?
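For reference, here is a minimal sketch (legacy FEniCS/dolfin API, with Poisson as a stand-in problem rather than Monge-Ampère) of how a prescribed flux enters the weak form as a ds boundary term instead of a DirichletBC; the source f and flux g below are placeholders:

```python
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, Constant, Expression, dot, grad, dx, ds, solve)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)                           # placeholder source term
g = Expression("sin(5*x[0])", degree=2)     # placeholder prescribed normal flux on the boundary

# The u*v*dx reaction term keeps the flux-only (Neumann) problem well posed.
a = dot(grad(u), grad(v)) * dx + u * v * dx
L = f * v * dx + g * v * ds                 # the flux data appears only in the boundary integral

uh = Function(V)
solve(a == L, uh)                           # no DirichletBC on the potential is needed
```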

Solving a system of coupled iterative equations containing lots of heavy integrals and derivatives

I am trying to solve a system of coupled iterative equations, each of which contains many integrations and derivatives.
First I used Maxima (embedded in Sage) to solve it analytically, but the solution depended too much on the initial guesses I made for my unknown functions: constant initial guesses gave an answer back almost immediately, while symbolic functions used as initial guesses sent the system deep into calculations, sometimes seemingly never-ending ones.
However, what I tried with Sage was actually a simplified version of my original equations, so I thought I might have no choice but to treat the integrations and derivatives numerically. However, I ran into some problems I couldn't ignore:
Integrations were only allowed to have numerical limits, and variables could not be used as, e.g., their upper limits. (I thought a numerical algorithm might still be faster than the analytic one even if I leave a variable or parameter in the calculation, but it just didn't work that way.)
Integrands also couldn't accept extra variables and parameters that are not being integrated over.
The derivative function was itself a big obstacle, as I wasn't able to compute partial derivatives or to use a derivative inside the integrand of an integral.
To get rid of the problems with the numerical derivative I substituted the symbolic diff() function for it, and the speed improvement was still promising, but the problems with numerical integration persist.
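For concreteness, the kind of thing I need looks like the following sketch, written with scipy.integrate.quad just to illustrate (the integrand, the parameters, and the limits are placeholders): extra parameters go in through args, a variable upper limit through an ordinary closure, and a derivative can sit inside the integrand.

```python
import numpy as np
from scipy.integrate import quad

def f(y, a, b):
    # integrand that also depends on parameters a and b that are not integrated over
    return np.exp(-a * y) * np.cos(b * y)

def F(x, a, b):
    # integral of f from 0 up to a *variable* upper limit x
    value, _ = quad(f, 0.0, x, args=(a, b))
    return value

def d_dy(func, y, h=1.0e-6):
    # simple central-difference derivative
    return (func(y + h) - func(y - h)) / (2.0 * h)

def integrand_with_derivative(y, a, b):
    # numerical d/dy of f appearing inside another integrand
    return d_dy(lambda t: f(t, a, b), y)

outer, _ = quad(integrand_with_derivative, 0.0, 2.0, args=(1.0, 3.0))
print(F(2.0, 1.0, 3.0), outer)
```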
Now I have three questions:
a- Is it right to conclude that I have no other choice than to discretize the equations and do a completely numerical treatment instead of a mixed one?
b- If so, is there any way to do this automatically? My equations are not differential equations that I could hand to odeint or the like; they are iterative equations, and I only use the integrations and derivatives to update my unknowns to their newer values at each step.
c- If my calculations are so huge in size, is there any suggestion about switching from Python to Fortran or something like that as well?
Best Regards
