I'm stuck trying to find existing functions in scipy (or sympy) for the following task:
Suppose we are given the following function:
f(A,B,C) = k1-A*sin(B*k2-C)
For each of the axes A, B, C of the space we have a specific interval, [a_lb, a_ub], [b_lb, b_ub], [c_lb, c_ub], plus a fourth interval [d_lb, d_ub].
Which scipy functions can be used to determine whether the space enclosed by these boundaries is intersected by the given function? I thought of, e.g., computing the Hessian matrix.
Thank you for hints
Best regards
If I understand correctly, what you are looking for is whether f(A,B,C), restricted to the domain [a_l,a_u]x[b_l,b_u]x[c_l,c_u], takes a value within [d_l,d_u]. You can try using scipy.optimize.minimize for this.
If you run scipy.optimize.minimize on f with the bounds [a_l,a_u]x[b_l,b_u]x[c_l,c_u], you should get the minimal value of f in the domain. Similarly, minimizing -f will give you the maximal value of f in the domain. f intersects the given boundary if and only if the interval [fmin, fmax] intersects the interval [d_l,d_u].
Note that scipy.optimize.minimize is a non-linear optimization and therefore requires an initial guess. The middle point of the domain box is a natural choice, but since the non-linear optimization may encounter a local minimum (or not converge), you may want to try several other initial guesses as well. scipy.optimize.minimize has many (optional) parameters so I recommend you read its documentation and play with them to fine-tune your usage to your needs.
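As a concrete sketch (k1, k2 and all interval endpoints below are made-up example values, since none are given above), the check could look like this:

import numpy as np
from scipy.optimize import minimize

k1, k2 = 1.0, 2.0                                  # example constants (assumption)

def f(v):
    A, B, C = v
    return k1 - A * np.sin(B * k2 - C)

bounds = [(0.0, 1.0), (0.0, np.pi), (0.0, 1.0)]    # [a_l,a_u] x [b_l,b_u] x [c_l,c_u]
x0 = [0.5 * (lo + hi) for lo, hi in bounds]        # midpoint of the box as initial guess

fmin = minimize(f, x0, bounds=bounds).fun
fmax = -minimize(lambda v: -f(v), x0, bounds=bounds).fun

d_l, d_u = 0.0, 0.5                                # example target interval
intersects = (fmin <= d_u) and (fmax >= d_l)

Restarting both minimizations from a few additional points inside the box makes the result more robust against local minima.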
Given a set of 2D points, I would like to fit the optimal spline to this data with a given number of internal knots.
I have seen that scipy's LSQUnivariateSpline lets us specify the number and positions of the knots, but it does not allow us to specify only the number of knots.
From the UnivariateSpline documentation, it seems implied that they have a method for fitting the spline with a given number of knots, as the documentation for the smoothing factor s states (emphasis mine):
Positive smoothing factor used to choose the number of knots. Number of knots will be increased until the smoothing condition is satisfied...
So while I could approach this backwards and search over smoothing factors until one yields a spline with the desired number of knots, this seems rather ridiculous from a computational-efficiency standpoint: two extra search steps happen just to cancel each other out and recover a result that was already computed directly at the start.
I've searched around but haven't found a function to access this spline interpolation with a given number of knots directly. I'm not sure if I've missed something simple, or if it's hidden deeper down somewhere and/or not available in the API.
Note: a scipy solution is not required, any python libraries or handcrafted python code is fine (I am using scipy here just because that's where all of my searches about spline interpolation in python have landed me).
Unfortunately, it looks like the UnivariateSpline constructor passes the computational work off to the routine dfitpack.curf0, which is implemented in Fortran.
Therefore, although the documentation indicates that the smoothing requirement is met by adjusting the number of knots, the routine that fits a spline for a given number of knots cannot be accessed directly from the Python API.
In light of this, it looks like one may need to look to another library or write the algorithm oneself, if avoiding the roundabout double search method is desired. However, in many cases, it may be acceptable to simply run a binary search for the desired number of knots by adjusting the smoothing parameter.
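For example, a minimal sketch of that binary search (the helper name and the heuristic upper bracket for s are mine, not scipy's):

import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_with_n_knots(x, y, n_knots, max_iter=60):
    # Larger s means fewer knots, so bisect on s until the knot count matches.
    s_lo, s_hi = 0.0, len(x) * np.var(y) * 10.0    # heuristic bracket (assumption)
    spl = UnivariateSpline(x, y, s=s_hi)
    for _ in range(max_iter):
        s_mid = 0.5 * (s_lo + s_hi)
        spl = UnivariateSpline(x, y, s=s_mid)
        k = len(spl.get_knots())
        if k == n_knots:
            break
        if k > n_knots:
            s_lo = s_mid    # too many knots: smooth harder
        else:
            s_hi = s_mid    # too few knots: smooth less
    return spl

Note that the knot count is a step function of s, so an exact match may not exist for every target; the loop then returns the closest bracketed fit.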
SciPy does not have smoothing splines with a fixed number of knots: you either provide the knots yourself (LSQUnivariateSpline), or let FITPACK select them via the smoothing-condition knob (UnivariateSpline).
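Providing the knots yourself is straightforward, e.g. (synthetic data for illustration):

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.default_rng(1).normal(size=x.size)

t = np.linspace(x[0], x[-1], 7)[1:-1]   # 5 interior knots, evenly spaced
spl = LSQUnivariateSpline(x, y, t)      # least-squares cubic spline on those knots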
I have a polynomial function for which I would like to find all local extrema. I can evaluate the polynomial via P(x) and to its derivative via d_P(x).
My first thought was to use minimize_scalar; however, this does not seem to be able to take advantage of the fact that I can evaluate the derivative. Alternatively, I can use the more general minimize function and provide the gradient.
Is there a rule of thumb about which method will work better, or is this something where I should just test both and see? Since the function I am optimizing is a polynomial (well behaved), I wonder if it really matters which I use, but if someone has more background on this, that would be great.
In particular, P(x) is the (unique) polynomial of degree n which alternately attains the values 1 and -1 on a set of n-1 points.
Here is a sample plot of P(x), scaled so that P(0) = 1. Note that the y axis is plotted on a symlog scale.
Since you have a continuous scalar function of one variable, minimize_scalar is the natural choice; its methods do not use gradient information, so noise, discontinuities, or discreteness in the objective will not cause trouble. If you instead use minimize in conjunction with a gradient-based method, such irregularities can hurt convergence.
If the objective function is first-order continuous, both minimize and minimize_scalar should yield the same solution for a given bound.
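For illustration, here are both call styles on a stand-in polynomial (the coefficients are made up, since the real P(x) is not given):

import numpy as np
from numpy.polynomial import Polynomial
from scipy.optimize import minimize, minimize_scalar

P = Polynomial([1.0, 0.0, -3.0, 0.0, 1.0])     # stand-in: P(x) = 1 - 3x^2 + x^4
dP = P.deriv()

# Derivative-free, bracketed search inside a known interval:
res_scalar = minimize_scalar(P, bounds=(0.5, 2.0), method="bounded")

# Gradient-based search from an initial guess, using the exact derivative:
res_grad = minimize(lambda v: P(v[0]), x0=[1.0],
                    jac=lambda v: np.array([dP(v[0])]))

Also worth noting: every local extremum of a polynomial is a real root of P', so when the coefficients are available, dP.roots() enumerates all candidates at once without any iterative optimization.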
Is there a more intelligent function than scipy.optimize.curve_fit in Python?
I also need to define a function to fit data with.
I've spent ages trying to fit data with it. I can fit only basic functions; fitting two lines with a piecewise function seems impossible when the y values are small (around 0.01-0.05) while the x values are around 20-60.
I know I have to plug in initial values, but it still takes too much time and sometimes it does not work.
EDIT
I added a graph showing the data I fitted, where you can see the effect of changing the bounds passed to scipy.optimize.curve_fit.
The function I fit with is this one:
import numpy as np

def abslines(x, a, b, c, d):
    # two straight lines joined at the break point x = -b/a
    return np.piecewise(x, [x < -b/a, x >= -b/a],
                        [lambda x: a*x + b + d, lambda x: c*(x + b/a) + d])
The initial guess is the same every time, and I think it is close enough:
p0=[-0.001,0.2,0.005,0.]
because the best-fit parameter values are:
[-0.00411946 0.19895546 0.00817832 0.00758401]
The bounds I tried were (starting with no bounds and ending with the best ones):
1. no bounds;
2. bounds=([-1.,0.,0.,0.],[0.,1.,1.,1.])
3. bounds=([-0.5,0.01,0.0001,0.],[-0.001,0.5,0.5,1.])
4. bounds=([-0.1,0.01,0.0001,0.],[-0.001,0.5,0.1,1.])
5. bounds=([-0.01,0.1,0.0001,0.],[-0.001,0.5,0.1,1.])
Even so, I think this takes too much time and that curve_fit should be able to do better. As it stands, I practically have to specify the solution myself; it feels like I am doing the fitting by hand-tuning parameters rather than curve_fit doing it.
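For concreteness, the full call with this p0 and the last set of bounds looks like this (synthetic data standing in for my measurements):

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(20, 60, 200)
y = abslines(x, -0.0041, 0.199, 0.0082, 0.0076) + rng.normal(0, 0.001, x.size)

p0 = [-0.001, 0.2, 0.005, 0.0]
bounds = ([-0.01, 0.1, 0.0001, 0.0], [-0.001, 0.5, 0.1, 1.0])
popt, pcov = curve_fit(abslines, x, y, p0=p0, bounds=bounds)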
Without knowing exactly which regression algorithm is used under the hood, it is hard to give a definitive answer. The calculation is probably iterative and requires initial guesses, which are likely derived from the specified bounds, so the bounds have an indirect effect on convergence and on the results.
I suggest trying a simpler algorithm (non-iterative, no initial guess) from this paper: https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf
The code is easy to write in any computer language, so this can certainly be done in Python as well.
The piecewise function to be fitted consists of two straight lines that switch at x = a1: y = p1*x + q1 on one side and y = p2*x + q2 on the other. The parameters to be computed are a1, p1, q1, p2 and q2.
The result is shown on the next figure, with the approximate values of the parameters.
Thus no bounds need to be specified, and consequently there are no bounds-related problems.
NOTE: The method is based on fitting a convenient integral equation, as shown in the referenced paper. The numerical computation of the integral is subject to deviations if the number of points is too small. In the present case there is a large number of points, so even though the data are scattered, this is a favourable case for the practical application of the method.
The algorithms behind curve_fit expect differentiable model functions, so the fit can go wrong when given a non-differentiable one (your piecewise model has a kink at x = -b/a).
For a more powerful interface to curve fitting, have a look at lmfit.
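If you go that route, a minimal lmfit sketch for the same model might look like this (reusing abslines and the x, y data from the question; the starting values are the question's p0):

from lmfit import Model

model = Model(abslines)
params = model.make_params(a=-0.001, b=0.2, c=0.005, d=0.0)
params['a'].set(max=-1e-6)          # keep the first slope negative, as in the bounds above
result = model.fit(y, params, x=x)
print(result.fit_report())

lmfit reports per-parameter uncertainties and lets you fix or bound parameters by name, which tends to be less error-prone than positional bound tuples.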
I am currently trying to solve the Monge-Ampère equation in FEniCS by implementing a non-standard boundary condition.
The boundary condition requires that the gradient of the solution map the boundary of the original domain onto another prescribed domain.
When the target domain is prescribed to be the unit circle, the implementation is quite simple, and I have tackled it by adding the following term to my system:
+ (dot(grad(uh), grad(uh)) - 1)*vh*ds    (1)
where uh is a trial function and vh is a test function.
When considering a more complex target space, such as the square [−1,1]×[−1,1], things become more difficult, since the condition is not so simple to work out by hand, so my idea is to use the distance function.
To do this I have solved a stabilized version of the Eikonal equation, whose solution is the signed distance function; my idea was then to replace (1) with:
+E(grad(uh))*vh*ds
where E is the solution of the Eikonal equation. But when I try to implement this, I get an error stating that the function expected scalar arguments.
Is there a way to program the solution E so that it accepts grad(uh) as an input inside the variational form?
Thank you all for your time!
You'd have to specify Neumann conditions (gradient vector) on the common boundary instead of Dirichlet (potential scalar).
If I were modeling a physics problem of conduction/diffusion between two dissimilar regions, conservation of energy would demand that the fluxes on either side of the boundary must balance. How would you express that boundary condition in your equation?
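As a further thought: for the specific square target [−1,1]×[−1,1], the Eikonal solve might be avoidable altogether, since max(|y1|, |y2|) − 1 is a closed-form defining function that vanishes exactly on the square's boundary. A sketch of that idea in UFL (untested; it reuses uh, vh, ds and grad from the question, and max_value is UFL's binary maximum):

from ufl import max_value

g = grad(uh)
# zero exactly when grad(uh) lies on the boundary of [-1,1]x[-1,1]
bc_term = (max_value(abs(g[0]), abs(g[1])) - 1.0) * vh * ds   # candidate replacement for term (1)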
I have been trying to solve the following equation for more than a week:
I have to use the Newton-Raphson method to get an approximate solution for u. I have the script to do that, but I need to linearize this nonlinear ODE. The coefficients k1-k4 are not constants: at each grid point (x = 1-100) they take a different value, which is computed separately. The initial condition is u(0) = 0.
Is this a homework assignment?
Also, is it a boundary value problem or a plain initial value ODE? From what you write, it sounds like a BVP. Note also that the single condition u(0) = 0 is not enough for a BVP.
If BVP, you can just use scikits.bvp_solver or scikits.bvp1lg which do the difficult parts for you.
If ODE, write the problem as a first order system, and use scipy.integrate.odeint or scipy.integrate.ode.
Regarding linearization (assuming this is a BVP): in practice it is usually enough to compute the partial derivative required for the Newton method via numerical differentiation.
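And if it is in fact a plain initial value problem, here is a minimal sketch of the first-order-system route (the right-hand side below is a placeholder, since the actual equation is not shown above):

import numpy as np
from scipy.integrate import odeint

def rhs(y, x, k):
    u, du = y                  # first-order system: y = [u, u']
    return [du, -k(x) * u]     # placeholder for u'' = f(x, u, u')

k = lambda x: 1.0 + 0.01 * x   # stand-in for the grid-dependent k1..k4
x = np.linspace(0.0, 100.0, 101)
sol = odeint(rhs, [0.0, 1.0], x, args=(k,))   # u(0) = 0 plus an assumed u'(0) = 1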