Using a solution to a PDE to define another PDE - FEniCS - Python

I am currently trying to solve the Monge-Ampère equation in FEniCS by implementing a non-standard boundary condition.
The boundary condition requires that the gradient of the solution map the boundary of the original domain to another prescribed domain.
When the target domain is prescribed to be the unit circle, the implementation is quite simple, and I have tackled it by putting the following into my system:
+ (dot(grad(uh), grad(uh)) - 1)*vh*ds    (1)
where uh is a trial function and vh is a test function.
When considering a more complex target domain, such as the square [−1,1]×[−1,1], things become more difficult, since the condition is not so simple to work out by hand, so my idea is to use the distance function.
To do this, I have solved a stabilized version of the Eikonal equation, whose solution is the signed distance function. My idea was then to replace (1) with:
+E(grad(uh))*vh*ds
where E is the solution of the Eikonal equation. But when I try to implement this, I get an error stating that the function expected scalar arguments.
Is there a way to program the solution to accept grad(uh) as an input in a second variational form?
Thank you all for your time!

You'd have to specify Neumann conditions (gradient vector) on the common boundary instead of Dirichlet (potential scalar).
If I were modeling a physics problem of conduction/diffusion between two dissimilar regions, conservation of energy would demand that the fluxes on either side of the boundary must balance. How would you express that boundary condition in your equation?
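As an aside: for the square target [−1,1]×[−1,1], the signed distance function is actually available in closed form, so it could be evaluated directly instead of (or as a check on) an Eikonal solve. A minimal numpy sketch (illustrative only, not FEniCS/UFL code):

```python
import numpy as np

def sdf_square(p, half_width=1.0):
    """Signed Euclidean distance from point p to the boundary of the
    square [-half_width, half_width]^2: negative inside, positive outside."""
    q = np.abs(np.asarray(p, dtype=float)) - half_width
    outside = np.linalg.norm(np.maximum(q, 0.0))   # distance when outside
    inside = min(max(q[0], q[1]), 0.0)             # -distance when inside
    return outside + inside

print(sdf_square([0.0, 0.0]))   # -1.0 (centre is one unit from each side)
print(sdf_square([2.0, 0.5]))   #  1.0 (one unit past the right edge)
```

Composing this with grad(uh) inside a UFL form would still require expressing the max/min/abs operations with UFL's conditional operators, but it avoids interpolating a separately computed Eikonal solution at non-mesh points.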


Computing intersection of a function with a specific interval using scipy

I'm stuck trying to find existing functions in scipy (or sympy) for the following task:
Suppose we are given the following function:
f(A,B,C) = k1-A*sin(B*k2-C)
For each of the axes A, B, C of the space we have a specific interval, [a_lb, a_ub], [b_lb, b_ub], [c_lb, c_ub], plus a target interval [d_lb, d_ub] for the function value.
Which scipy functions can be used to compute whether the region encompassed by those boundaries is intersected by the given function? I thought of, e.g., computing the Hessian matrix.
Thank you for any hints
Best regards
If I understand correctly, what you are looking for is whether f(A,B,C), restricted to the domain [a_l,a_u]x[b_l,b_u]x[c_l,c_u], takes a value within [d_l,d_u]. You can try using scipy.optimize.minimize for this.
If you run scipy.optimize.minimize on f with the bounds [a_l,a_u]x[b_l,b_u]x[c_l,c_u], you should get the minimal value of f in the domain. Similarly, minimizing -f will give you the maximal value of f in the domain. f intersects the given boundary if and only if the interval [fmin, fmax] intersects the interval [d_l,d_u].
Note that scipy.optimize.minimize is a non-linear optimization and therefore requires an initial guess. The middle point of the domain box is a natural choice, but since the non-linear optimization may encounter a local minimum (or not converge), you may want to try several other initial guesses as well. scipy.optimize.minimize has many (optional) parameters so I recommend you read its documentation and play with them to fine-tune your usage to your needs.
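A sketch of that approach, with made-up constants k1, k2, made-up bounds, and several starting points (box centre plus all corners) to reduce the risk of getting stuck in a local minimum:

```python
import itertools
import numpy as np
from scipy.optimize import minimize

k1, k2 = 1.0, 2.0                                   # made-up constants
f = lambda v: k1 - v[0] * np.sin(v[1] * k2 - v[2])  # v = (A, B, C)

bounds = [(0.0, 2.0), (0.0, 1.0), (0.0, 1.0)]       # the (A, B, C) box
d_l, d_u = 2.5, 3.0                                 # target interval for f

# several initial guesses: the box centre plus every corner
starts = [[(lo + hi) / 2.0 for lo, hi in bounds]]
starts += list(itertools.product(*bounds))

fmin = min(minimize(f, s, bounds=bounds).fun for s in starts)
fmax = -min(minimize(lambda v: -f(v), s, bounds=bounds).fun for s in starts)

# f takes a value in [d_l, d_u] iff the intervals [fmin, fmax] and [d_l, d_u] overlap
intersects = (fmin <= d_u) and (d_l <= fmax)
print(fmin, fmax, intersects)
```

With bounds given, minimize defaults to L-BFGS-B; any bound-constrained method would do.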

FiPy: spatially varying coefficient outside of gradient?

This may be a simple question, but what is the correct FiPy syntax to use if I want to solve a PDE with a spatially varying coefficient that is outside of the gradient? All the examples I've seen so far only deal with coefficients inside the gradient.
For example:
d/dt(Sigma) = (1/r) d/dr (r^0.5 d/dr(nu Sigma r^0.5))
(I'm ignoring numerical factors)
and I want to solve for Sigma(t,r). How do I handle (1/r) in front of d/dr?
I know this simple equation can be massaged so that I don't need to worry about a spatially varying coefficient outside the gradient (or I can simply move the coefficient inside the time-derivative term), but I'll have to add more terms for the actual problem I'm trying to solve, and then the trick will no longer be valid. For instance, what should I do if my equation looks like:
d/dt (var) = f(r) d^2/dr^2 (var) + g(r) d/dr (var)
Any help would be greatly appreciated!
This may be a simple question but what is the correct FiPy syntax to use if I want to solve PDE with spatially varying coefficient that is outside of gradient?
I think the general equation can always be rewritten in such a way that FiPy can still operate implicitly on the solution variable. This does require an extra source term in most cases.
I know this simple equation can be massaged so that I don't need to worry about spatially varying coefficient that is outside of gradient (or simply move the coefficient inside the time derivative term) but I'll have to add in more terms for the actual problem I'm trying to solve and the trick will no longer be valid.
The example equation,
d/dt(Sigma) = (1/r) d/dr (r^0.5 d/dr(nu Sigma r^0.5))
can, for constant nu, be rewritten in terms of phi = nu Sigma r^0.5 as
(r^0.5 / nu) d/dt(phi) = d/dr (r^0.5 d/dr(phi))
which is just a transient term with a spatial coefficient balanced by a diffusion term, so it doesn't add any extra terms. For the general case,
d/dt(var) = f(r) d^2/dr^2(var) + g(r) d/dr(var)
dividing through by f(r) gives
(1/f) d/dt(var) = d^2/dr^2(var) + d/dr((g/f) var) - (d/dr(g/f)) var
Although there is an extra term, the terms are still implicit in var, and the extra term can be represented as an implicit source term for var whose sign depends on the sign of d/dr(g/f). FiPy should handle the sign issue just by using the ImplicitSourceTerm.
Note that r often occurs outside the operator when using cylindrical coordinates. FiPy has a mesh class, CylindricalGrid1D, that handles cylindrical coordinates while using the standard terms (no need to include the r spatial variable in the equations by hand).
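The decomposition used for the general case (diffusion plus convection of (g/f) var plus an implicit source) is just the product rule; a quick sympy check of the identity (illustrative, not FiPy code):

```python
import sympy as sp

r = sp.symbols('r')
f, g, u = (sp.Function(name)(r) for name in ('f', 'g', 'u'))

# lhs: the general equation's right-hand side after dividing by f
lhs = u.diff(r, 2) + (g / f) * u.diff(r)
# rhs: pure diffusion + convection of (g/f)*u + implicit source -(g/f)' * u
rhs = u.diff(r, 2) + ((g / f) * u).diff(r) - (g / f).diff(r) * u

print(sp.simplify(lhs - rhs))  # 0
```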

Newton-Raphson linearization? Second-Order Nonlinear ODE numpy-scipy Python

I have been trying to solve the following equation for more than a week:
I have to use the Newton-Raphson method to get an approximate solution for u. I have a script to do that, but I need to "linearize" this nonlinear ODE. The coefficients k1-k4 are not constants: on each grid point (x = 1-100) they take a different, separately calculated value. The initial condition is u(0) = 0.
Is this a homework assignment?
Also, is it a boundary value problem or an initial value problem? From what you write, it sounds like a BVP, in which case your single boundary condition at u(0) is not enough.
If BVP, you can just use scikits.bvp_solver or scikits.bvp1lg which do the difficult parts for you.
If ODE, write the problem as a first order system, and use scipy.integrate.odeint or scipy.integrate.ode.
Regarding linearization (assuming this is a BVP): in practice it is usually enough to compute the partial derivatives required for the Newton method via numerical differentiation.
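To illustrate, here is a minimal finite-difference Newton iteration where the Jacobian (the "linearization") is built by numerical differentiation. It uses a made-up test problem u'' = 6u², u(0) = 1, u(1) = 1/4, whose exact solution is 1/(1+x)² — not the asker's equation, which isn't shown:

```python
import numpy as np

def solve_bvp_newton(f, a, b, ua, ub, n=51, tol=1e-10, max_iter=25):
    """Solve u'' = f(u), u(a) = ua, u(b) = ub by central differences
    plus Newton's method with a numerically differentiated Jacobian."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    ui = np.linspace(ua, ub, n)[1:-1]          # interior unknowns, linear guess

    def residual(v):
        full = np.concatenate(([ua], v, [ub]))
        return (full[:-2] - 2*full[1:-1] + full[2:]) / h**2 - f(full[1:-1])

    for _ in range(max_iter):
        F = residual(ui)
        if np.max(np.abs(F)) < tol:
            break
        J = np.empty((ui.size, ui.size))        # Jacobian, column by column
        eps = 1e-7
        for j in range(ui.size):
            v = ui.copy()
            v[j] += eps
            J[:, j] = (residual(v) - F) / eps   # forward-difference derivative
        ui = ui - np.linalg.solve(J, F)         # Newton step
    return x, np.concatenate(([ua], ui, [ub]))

x, u = solve_bvp_newton(lambda u: 6*u**2, 0.0, 1.0, 1.0, 0.25)
print(np.max(np.abs(u - 1.0/(1.0 + x)**2)))    # small discretization error
```

Spatially varying coefficients like k1-k4 would simply enter residual() as precomputed arrays on the grid.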

Solving a system of coupled iterative equations containing lots of heavy integrals and derivatives

I am trying to solve a system of coupled iterative equations, each of which contains lots of integrations and derivatives.
First I used Maxima (embedded in Sage) to solve it analytically, but the solution was too dependent on the initial guesses I had made for my unknown functions: constant initial guesses yielded an answer almost immediately, while symbolic functions used as initial guesses sent the system deep into calculations, sometimes seemingly never-ending ones.
However, what I tried with Sage was actually a simplified version of my original equations, so I thought I might have no choice but to treat the integrations and derivatives numerically. That, however, brought problems I could not ignore:
integrations were only allowed to have numerical limits; variables were not allowed as, e.g., their upper limits (I thought a numerical algorithm might be faster than the analytic one even if I left a variable or parameter in its calculations, but it just didn't work that way).
integrands also couldn't contain extra variables and parameters that are not integrated over.
the derivative function was itself a big obstacle, as I wasn't able to compute partial derivatives or to use a derivative in the integrand of an integral.
To get rid of all the problems with numerical differentiation, I substituted the symbolic diff() function for it; the speed improvement was still hopeful, but the problems with numerical integration persist.
Now I have three questions:
a- Is it right to conclude there is no way other than to discretize the equations and do a completely numerical treatment instead of a mixed one?
b- If so, is there any way to do this automatically? My equations are not differential equations that I could hand to odeint or the like; they are iterative equations, and I use the integrations and derivatives only to update my unknowns to their newer values at each step.
c- If my calculations turn out to be huge, is there any suggestion on switching from Python to Fortran or the like?
Best Regards
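For reference, a fully discretized treatment of an iterative equation can be quite compact once everything lives on a fixed grid. A minimal numpy/scipy sketch for a made-up scalar example, the fixed-point equation u(x) = 1 + (1/2) * integral of u from 0 to x, whose solution is exp(x/2):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0.0, 1.0, 1001)   # the grid replaces the symbolic variable
u = np.ones_like(x)               # constant initial guess

for _ in range(200):
    # integral of u from 0 to each grid point, by the trapezoidal rule;
    # a variable upper limit is free once the grid is fixed
    integral = cumulative_trapezoid(u, x, initial=0.0)
    u_new = 1.0 + 0.5 * integral  # the iterative update
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

print(np.max(np.abs(u - np.exp(x / 2))))  # small quadrature error
```

Derivatives can be handled on the same grid with np.gradient, and extra parameters are just additional arrays, which sidesteps the limitations listed above.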

Modeling a bounded function

I'm trying to model a monotonic function that is bounded by y_min and y_max values and satisfies two value pairs (x0,y0), (x1,y1) within that range.
Is there some kind of package in python that might help in solving for the parameters for such a function?
Alternatively, if someone knows of a good source or paper on modeling such a function, I'd be much obliged...
(for anyone interested, I'm actually trying to model a market response curve of the number of buyers vs. the price of a product; this is bounded by zero and the maximal demand)
well it's still not clear to me what you want, but the classic function of that form is the sigmoid (it's used in neural nets, for example). in a more flexible form that becomes the general logistic function which will fit your (x,y) constraints with suitable parameters. there's also the gompertz curve, which is similar.
however, those are all defined over an open domain, and i doubt you have negative numbers of buyers. if that's an issue (you may not care, as they get very close to zero) you could try transforming the number of buyers (taking the log kind of works, but only if you can have a fraction of a buyer...).
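If you take the logistic route with the asymptotes pinned at y_min and y_max, the two remaining parameters can be solved in closed form from the two value pairs — no fitting package needed. A sketch (the parameter names k and m are my own):

```python
import numpy as np

def fit_bounded_logistic(y_min, y_max, p0, p1):
    """Fit y(x) = y_min + (y_max - y_min) / (1 + exp(-k*(x - m)))
    exactly through two points (x0, y0), (x1, y1), with y_min < y_i < y_max."""
    (x0, y0), (x1, y1) = p0, p1

    def logit(y):
        s = (y - y_min) / (y_max - y_min)   # rescale into (0, 1)
        return np.log(1.0 / s - 1.0)

    k = (logit(y0) - logit(y1)) / (x1 - x0)  # steepness (negative if decreasing)
    m = x0 + logit(y0) / k                   # midpoint
    return k, m

# made-up demand curve: bounded by 0 and 100 buyers,
# passing through (price=1, 80 buyers) and (price=3, 20 buyers)
k, m = fit_bounded_logistic(0.0, 100.0, (1.0, 80.0), (3.0, 20.0))
y = lambda x: 100.0 / (1.0 + np.exp(-k * (x - m)))
print(y(1.0), y(3.0))  # approximately 80.0 and 20.0
```

The result is monotone, stays strictly inside (y_min, y_max), and reproduces both points exactly; whether the logistic shape is the right demand model is a separate question.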
