Finding the minimum distance from a point to a curve - python

I need to find the minimum distance from a point (X, Y) to a curve defined by four coefficients C0, C1, C2, C3, i.e. y = C0 + C1*x + C2*x^2 + C3*x^3.
I have used a numerical approach: np.linspace and np.polyval to generate discrete (x, y) points on the curve, then shapely's Point, MultiPoint and nearest_points to find the nearest point, and finally np.linalg.norm to compute the distance.
This is a numerical approach by discretizing the curve.
My question is how can I find the distance by analytical methods and code it?

Problem definition
For the sake of simplicity let's use P for the point and Px and Py for the coordinates. Let's call the function f(x).
Another way to look at your problem is that you're trying to find an x that minimizes the distance between P and the point (x, f(x)).
The problem can then be formulated as a minimization problem.
Find x that minimizes (x-Px)² + (f(x)-Py)²
(Note that we can drop the square root that would otherwise be there, because the square root is a monotonic function and doesn't change where the optimum is. Some details here.)
Analytical solution
The fully analytical way to solve this would be a pen-and-paper approach. You can expand the expression and compute its derivative, then see where the derivative vanishes to find the extrema (this is a lengthy process to do analytically; @Yves Daoust addresses it in his answer. Either do that or use a numerical solver for this part, e.g. a version of Newton's method). Then check whether each extremum is a maximum or a minimum by computing the point and sampling a few points around it to see how the function evolves. From this you can find the global minimum, which gives you the x you're looking for. But developing this further is content probably better suited for mathematics.
Optimization approach
So instead I'm going to suggest a solution that uses numerical minimization without relying on sampling. You can use the minimize function from scipy to solve the minimization problem.
from scipy.optimize import minimize

# Define the curve y = f(x)
C0 = -1
C1 = 5
C2 = -5
C3 = 6
f = lambda x: C0 + C1 * x + C2 * x**2 + C3 * x**3

# Define the (squared) distance to minimize
p_x = 12
p_y = -7
min_f = lambda x: (x - p_x)**2 + (f(x) - p_y)**2

# Minimize
min_res = minimize(min_f, 0)  # the starting point doesn't really matter here

# Show result
print("Closest point is x=", min_res.x[0], " y=", f(min_res.x[0]))
Here I used your function with dummy values but you could use any function you want with this approach.

You need to differentiate (x - X)² + (C0 + C1 x + C2 x² + C3 x³ - Y)² and find the roots of the derivative. But that derivative is a quintic polynomial (fifth degree) with general coefficients, so the Abel-Ruffini theorem fully applies, meaning that there is no solution in radicals.
There is a known solution anyway, by reducing the equation (via a lengthy substitution process) to the form x^5 - x + t = 0, known as the Bring–Jerrard normal form, and getting the solutions (called ultraradicals) by means of the elliptic functions of Hermite or by evaluating the roots via Taylor series.
Personal note:
This approach is virtually foolish, as ready-made numerical polynomial root-finders exist and the ultraradical function is not easy to evaluate.
Anyway, looking at the plot of x^5 - x, one can see that it is intersected once or three times by any horizontal line, and finding an interval with a change of sign is easy. With that, you can obtain an accurate root by dichotomy (and, far from the extrema, Newton will converge easily).
After having found this root, you can deflate the polynomial to a quartic, for which explicit formulas by radicals are known.
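As a quick sketch of this route in code (my own addition, using numpy's generic polynomial root-finder rather than ultraradicals): build the quintic derivative of the squared distance, take its real roots, and keep the candidate that gives the smallest distance.
import numpy as np

C0, C1, C2, C3 = -1, 5, -5, 6      # example coefficients
X, Y = 12, -7                      # the point (X, Y)

f = np.polynomial.Polynomial([C0, C1, C2, C3])
dist2 = np.polynomial.Polynomial([-X, 1])**2 + (f - Y)**2   # squared distance to (X, Y)
quintic = dist2.deriv()                                     # degree-5 derivative

roots = quintic.roots()
real_roots = roots[np.abs(roots.imag) < 1e-9].real          # at least one real root is guaranteed
best = min(real_roots, key=lambda x: dist2(x))
print(best, f(best), np.sqrt(dist2(best)))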

Related

Finding all roots of a transcendental equation in Python (eg. Bound state solutions of the Finite Square Well potential)

So I'm trying to solve for the eigenvalues of bound state solutions to the finite square well potential which involves solving the equation:
I was able to solve it graphically by plotting the two functions and figuring out where they intersect, then using the intersection as an initial guess in scipy.optimize.root. However, this seems like a fairly cumbersome process. From what I understand, root-finding methods usually require an initial guess and find the solution closest to that guess.
What I'm wondering is whether there is a way to find all the roots or all local minima in a given range in Python without necessarily providing an initial guess. From the web searches I've done, there seem to be some methods in Mathematica, but in general, finding all roots within a good tolerance is impossible except for specific functions. What I'm wondering, though, is whether my equation happens to fall under those specific situations? Below is a graph of the two functions and the code I used with scipy.optimize.root:
import numpy as np
from scipy import optimize as opt

def func(z, z0):
    return np.tan(z) - np.sqrt((z0/z)**2 - 1)

# finding z0
a = 3
V0 = 3
h_bar = 1
m = 1
z0 = a*np.sqrt(2*m*V0)/h_bar

# calculating roots
res = opt.root(func, [1.4, 4.0, 6.2], args=(z0,))
res.x
# output
array([1.38165158, 4.11759958, 6.70483966])
There is no general way.
In this specific case, all roots are well localized due to the periodicity of the tangent function, so you can simply run a root finder, e.g. brentq, on each of these intervals.
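For example, here is a sketch of that bracketing for the question's func and z0 (the interval endpoints and tolerances are my own choices):
import numpy as np
from scipy.optimize import brentq

a, V0, h_bar, m = 3, 3, 1, 1
z0 = a * np.sqrt(2 * m * V0) / h_bar

def func(z):
    return np.tan(z) - np.sqrt((z0 / z)**2 - 1)

roots = []
n = 0
while True:
    # tan(z) sweeps 0 -> +inf on (n*pi, n*pi + pi/2), so each such interval
    # lying inside (0, z0) brackets at most one root
    lo = n * np.pi + 1e-6
    hi = min(n * np.pi + np.pi / 2 - 1e-6, z0 - 1e-9)
    if lo >= z0:
        break
    if func(lo) * func(hi) < 0:
        roots.append(brentq(func, lo, hi))
    n += 1
print(roots)   # close to [1.3817, 4.1176, 6.7048]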
This is more of a hack than an answer, but one may succeed in making the process of root finding a bit less tedious by providing a grid of initial guesses and taking the set of solutions found across all the attempts.
Using sympy (whose defaults in nsolve may provide a more robust solver) you could do
from sympy.solvers import nsolve
from sympy import tan, sqrt, symbols
import numpy as np
a = 3
V0 = 3
h_bar = 1
m = 1
z0 = a*np.sqrt(2*m*V0)/h_bar
z = symbols('z')
expr = tan(z) - sqrt((z0/z)**2 - 1)
# try out a grid of initial conditions and get set of found solutions
# this may fail with some other choice of interval
initial_guesses = np.linspace(0.2, 7, 100)
# candidate solutions
set([nsolve(expr, z, guess) for guess in initial_guesses])

Is there a way to generate random solutions to non-square linear equations, preferably in python?

First of all, I know that these threads exist! So bear with me, my question is not fully answered by them.
As an example assume we are in a 4-dimensional vector space, i.e R^4. We are looking at the two linear equations:
3*x1 - 2* x2 + 7*x3 - 2*x4 = 6
1*x1 + 3* x2 - 2*x3 + 5*x4 = -2
The actual questions is: Is there a way to generate a number N of points that solve both of these equations making use of the linear solvers from NumPy etc?
The main problem with all the Python libraries I have tried so far is that they need n equations for an n-dimensional space.
Solving the problem is very easy for one equation, since you can simply use n-1 randomly generated values and adapt the last one so that the vector solves the equation.
My expected result would be a list of N "randomly" generated points that solve k linear equations in an n-dimensional space, where k<n.
A system of linear equations with more variables than equations is known as an underdetermined system.
An underdetermined linear system has either no solution or infinitely many solutions.
...
There are algorithms to decide whether an underdetermined system has solutions, and if it has any, to express all solutions as linear functions of k of the variables (same k as above). The simplest one is Gaussian elimination.
As you say, many functions available in libraries (e.g. np.linalg.solve) require a square matrix (i.e. n equations for n unknowns); what you are looking for is an implementation of Gaussian elimination for non-square linear systems.
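For instance, a sketch using sympy (my own addition): linsolve performs the elimination and returns the full solution set parametrised by the free variables, here x3 and x4.
from sympy import symbols, linsolve

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
eqs = [3*x1 - 2*x2 + 7*x3 - 2*x4 - 6,
       1*x1 + 3*x2 - 2*x3 + 5*x4 + 2]
print(linsolve(eqs, [x1, x2, x3, x4]))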
This isn't 'random', but np.linalg.lstsq (least squares) will solve non-square matrices:
Return the least-squares solution to a linear matrix equation.
Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2. The equation may be under-, well-, or over- determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.
For more info, see:
solving Ax =b for a non-square matrix A using python
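A minimal lstsq sketch for the 2x4 system from the question (my own illustration; lstsq returns one particular solution, the minimum-norm one):
import numpy as np

A = np.array([[3., -2.,  7., -2.],
              [1.,  3., -2.,  5.]])
b = np.array([6., -2.])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)        # one particular solution
print(A @ x)    # reproduces b up to round-off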
Since you have an underdetermined system of equations (too few constraints for your solutions, or fewer equations than variables) you can just pick some arbitrary values for x3 and x4 and solve the system in x1, x2 (this has 2 variables/2 equations).
You will just need to check that the resulting system is not inconsistent (i.e. it admits no solution) and that there are no duplicate solutions.
You could, for instance, fix x3=0 and, choosing random values of x4, generate solutions for your equations in x1, x2.
Here's an example generating 10 "random" solutions
import numpy as np

n = 10
x3 = 0
# coefficient matrix for the remaining unknowns x1, x2
a = np.array([[3, -2],
              [1,  3]])
X = []
for x4 in np.random.choice(1000, n):
    b = np.array([[6 - 7*x3 + 2*x4], [-2 + 2*x3 - 5*x4]])
    x = np.linalg.solve(a, b)
    X.append(np.append(x, [x3, x4]))

# check solution nr. 3
[x1, x2, x3, x4] = X[3]
3*x1 - 2*x2 + 7*x3 - 2*x4
# output: 6.0
1*x1 + 3*x2 - 2*x3 + 5*x4
# output: -2.0
Thanks for the answers, which both helped me and pointed me in the right direction.
I now have an easy step-by-step solution to my problem for arbitrary k<n.
1. Find one solution to all the equations given. This can be done with
solution_vec = numpy.linalg.lstsq(A, b, rcond=None)[0]
This gives a particular solution, as seen in ukemi's answer. In my example above, the matrix A holds the coefficients of the equations on the left-hand side and b represents the vector on the right-hand side.
2. Determine the null space of your matrix A.
These are all vectors v such that the scalar product v*A_i = 0 for every(!) row A_i of A. The following function, adapted from the one found in this thread, can be used to get representatives of the null space of A:
import numpy as np
import scipy.linalg

def nullSpaceOfMatrix(A, eps=1e-15):
    u, s, vh = scipy.linalg.svd(A)
    # pad s with zeros: vh has n rows, but only min(m, n) singular values exist
    padding = max(0, A.shape[1] - s.shape[0])
    null_mask = np.concatenate((s <= eps, np.ones(padding, dtype=bool)))
    null_space = vh[null_mask]
    return null_space.T
3. Generate as many (N) "random" linear combinations (meaning with random coefficients) of solution_vec and the null-space vectors of the matrix as you want. This works because the scalar product is additive and the null-space vectors have a scalar product of 0 with every row of A. Such a linear combination must always contain solution_vec, as in:
linear_combination = solution_vec + a*nullspacevec_1 + b*nullspacevec_2 + ...
where a and b can be chosen randomly.
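Putting the three steps together, here is a sketch for the 2-equation, 4-variable example above; scipy.linalg.null_space is used in place of the helper from step 2 (it does the same job in modern SciPy), and the coefficient range is an arbitrary choice:
import numpy as np
from scipy.linalg import null_space

A = np.array([[3., -2.,  7., -2.],
              [1.,  3., -2.,  5.]])
b = np.array([6., -2.])

solution_vec = np.linalg.lstsq(A, b, rcond=None)[0]   # step 1: one particular solution
null_vecs = null_space(A)                             # step 2: shape (4, 2), two free directions

N = 5                                                 # step 3: random linear combinations
for _ in range(N):
    coeffs = np.random.uniform(-10, 10, null_vecs.shape[1])
    x = solution_vec + null_vecs @ coeffs
    print(x, A @ x)   # A @ x stays (6, -2) up to round-off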

Find global minimum using scipy.optimize.minimize

Given a 2D point p, I'm trying to calculate the smallest distance between that point and a functional curve, i.e., find the point on the curve which gives me the smallest distance to p, and then calculate that distance. The example function that I'm using is
f(x) = 2*sin(x)
My distance function for the distance between some point p and a provided function is
import numpy as np

def dist(p, x, func):
    x = np.append(x, func(x))
    return sum((i - j)**2 for i, j in zip(x, p))
It takes as input, the point p, a position x on the function, and the function handle func. Note this is a squared Euclidean distance (since minimizing in Euclidean space is the same as minimizing in squared Euclidean space).
The crucial part of this is that I want to be able to provide bounds for my function so really I'm finding the closest distance to a function segment. For this example my bounds are
bounds = [0, 2*np.pi]
I'm using the scipy.optimize.minimize function to minimize my distance function, using the bounds. A result of the above process is shown in the graph below.
This is a contour plot showing distance from the sin function. Notice how there appears to be a discontinuity in the contours. For convenience, I've plotted a few points around that discontinuity and the "closest" points on the curve that they map to.
What's actually happening here is that the scipy function is finding a local minimum (given some initial guess) but not the global one, and that is causing the discontinuity. I know finding the global minimum of an arbitrary function is impossible in general, but I'm looking for a more reliable way to find the global minimum.
Possible methods for finding a global minimum would be
Choose a smart initial guess, but this amounts to knowing approximately where the global minimum is to begin with, which is using the solution of the problem to solve it.
Use multiple initial guesses and choose the answer which reaches the best minimum. This however seems like a poor choice, especially when my functions get more complicated (and higher dimensional).
Find the minimum, then perturb the solution and find the minimum again, hoping that I may have knocked it into a better minimum. I'm hoping there is some way to do this simply without invoking some complicated MCMC algorithm or something like that. Speed counts for this process.
Any suggestions about the best way to go about this, or possibly directions to useful functions that may tackle this problem would be great!
As suggested in a comment, you could try a global optimization algorithm such as scipy.optimize.differential_evolution. However, in this case, where you have a well-defined and analytically tractable objective function, you can employ a semi-analytical approach, taking advantage of the first-order necessary conditions for a minimum.
In the following, the first function is the distance metric and the second function is its derivative w.r.t. x, which should be zero if a minimum occurs at some 0 < x < 2*np.pi.
import numpy as np

def d(x, p):
    return np.sum((p - np.array([x, 2*np.sin(x)]))**2)

def diff_d(x, p):
    return -2 * p[0] + 2 * x - 4 * p[1] * np.cos(x) + 4 * np.sin(2*x)
Now, given a point p, the only potential minimizers of d(x,p) are the roots of diff_d(x,p) (if any), as well as the boundary points x=0 and x=2*np.pi. It turns out that diff_d may have more than one root. Noting that the derivative is a continuous function, the pychebfun library offers a very efficient method for finding all the roots, avoiding cumbersome approaches based on the scipy root-finding algorithms.
The following function provides the minimum of d(x, p) for a given point p:
import pychebfun

def min_dist(p):
    f_cheb = pychebfun.Chebfun.from_function(lambda x: diff_d(x, p), domain=(0, 2*np.pi))
    potential_minimizers = np.r_[0, f_cheb.roots(), 2*np.pi]
    return np.min([d(x, p) for x in potential_minimizers])
Here is the result:

Find intersection of A(x) and B(y) in complex plane plus corr. x and y

Suppose I have the following problem:
I have a complex function A(x) and a complex function B(y). I know these functions cross in the complex plane. I would like to find the corresponding x and y of this intersection point, numerically (and/or graphically). What is the cleverest way of doing that?
This is my starting point:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi

x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)

def A_(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )
A = np.vectorize(A_)

def B_(y):
    return 3/(1j*y*(1+1j*y))
B = np.vectorize(B_)

real_A = np.real(A(x))
imag_A = np.imag(A(x))
real_B = np.real(B(y))
imag_B = np.imag(B(y))

plt.plot(real_A, imag_A, color='blue')
plt.plot(real_B, imag_B, color='red')
plt.show()
I don't have to plot it necessarily. I just need x_intersection and y_intersection (with some error that depends on x and y).
Thanks a lot in advance!
EDIT:
I should have used different variable names. To clarify what I need:
x and y are numpy arrays and I need the index of the intersection point in each array plus the corresponding x and y values (which again are not the intersection point itself, but entries of the arrays x and y).
Here I find the minimum of the distance between the two curves. Also, I cleaned up your code a bit (e.g., vectorize wasn't doing anything useful).
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
from scipy import optimize

def A(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )

def B(y):
    return 3/(1j*y*(1+1j*y))

# The next three lines find the intersection
def dist(x):
    return abs(A(x[0])-B(x[1]))

sln = optimize.minimize(dist, [1, 1])
# plotting everything....
a0, b0 = A(sln.x[0]), B(sln.x[1])
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
a, b = A(x), B(y)
plt.plot(a.real, a.imag, color='blue')
plt.plot(b.real, b.imag, color='red')
plt.plot(a0.real, a0.imag, "ob")
plt.plot(b0.real, b0.imag, "xr")
plt.show()
The specific x and y values at the intersection point are sln.x[0] and sln.x[1], since A(sln.x[0])=B(sln.x[1]). If you need the index, as you also mention in your edit, I'd use, for example, numpy.searchsorted(x, sln.x[0]), to find where the values from the fit would insert into your x and y arrays.
I think what's a bit tricky with this problem is that the space for graphing where the intersection is (ie, the complex plane) does not show the input space, but one has to optimize over the input space. It's useful for visualizing the solution, then, to plot the distance between the curves over the input space. That can be done like this:
X, Y = np.meshgrid(x, y)   # grid over the input space (same meshgrid as in the last snippet below)
data = dist((X, Y))
fig, ax = plt.subplots()
im = ax.imshow(data, cmap=plt.cm.afmhot, interpolation='none',
               extent=[min(x), max(x), min(y), max(y)], origin="lower")
cbar = fig.colorbar(im)
plt.plot(sln.x[0], sln.x[1], "xw")
plt.title("abs(A(x)-B(y))")
From this it seems much clearer how optimize.minimize is working -- it just rolls down the slope to find the minimum distance, which is zero in this case. But still, there's no obvious single visualization that one can use to see the whole problem.
For other intersections one has to dig a bit more. That is, @emma asked about other roots in the comments, and there I mentioned that there's no generally reliable way to find all roots to arbitrary equations, but here's how I'd go about looking for other roots. Here I won't lay out the complete program, but just list the changes and plots as I go along.
First, it's obvious that for the domain shown in my first plot there's only one intersection, and that there are no intersections in the region to the left. The only place there could be another intersection is to the right, but for that I'll need to allow the sqrt in the def of A to get a negative argument without throwing an exception. An easy way to do this is to add 0j to the argument of the sqrt, like this: sqrt(1+0j-(1/x)**2). Then the plot with the intersection becomes
I plotted this over a broader range (x=np.linspace(-10, 10, 10000) and y=np.linspace(-400, 400, 10000)) and the above is the zoom of the only place where anything interesting is going on. This shows the intersection found above, plus the point where it looks like the two curves might touch (where the red curve, B, comes to a point nearly meeting the blue curve A going upward), so that's the new interesting thing, and the thing I'll look for.
A bit of playing around with limits, etc., shows that B comes to a point asymptotically, and from the equation of B it is obvious that it goes to 0 + 0j for large +/- y, so that's about all there is to say for B.
It's difficult to understand A from the above plot, so I'll look at the real and imaginary parts independently:
So it's not a crazy looking function, and the jumping between Re=const and Im=const is just the nature of sqrt(1 - x**-2), which is purely imaginary for abs(x)<1 and purely real for abs(x)>1.
It's pretty clear now that the other time the curves are equal is at y= +/-inf and x=0. And, quick look at the equations show that A(0)=0+0j and B(+/- inf)=0+0j, so this is another intersection point (though since it occurs at B(+/- inf), it's sort-of ambiguous on whether or not it would be called an intersection).
So that's about it. One other point to mention is that if these didn't have such an easy analytic solution, like it wasn't clear what B was at inf, etc, one could also graph/minimize, etc, by looking at B(1/y), and then go from there, using the same tools as above to deal with the infinity. So using:
def dist2(x):
    return abs(A(x[0])-B(1./x[1]))
Where the min on the right is the one initially found, and the zero, now at x=-0 and 1./y=0 is the other one (which, again, isn't interesting enough to apply an optimizer here, but it could be interesting in other equations).
Of course, it's also possible to estimate this by just finding the minimum of the data that goes into the above graph, like this:
X, Y = np.meshgrid(x, y)
data = dist((X, Y))
r = np.unravel_index(data.argmin(), data.shape)
print(x[r[1]], y[r[0]])
# 2.06306306306 1.8008008008    # min approach gave 2.05973231 1.80069353
But this is only approximate (to the resolution of data) and involved many more calculations (1M compared to a few hundred). I only post this because I think it might be what the OP originally had in mind.
Briefly, two analytic solutions are derived for the roots of the problem. The first solution removes the parametric representation of x and solves for the roots directly in the (u, v) plane, where for example A(x): u(x) + i v(y) gives v(u) = f(u). The second solution uses a polar representation, e.g. A(x) is given by r(x) exp(i theta(x)), and offers a better understanding of the behavior of the square root as x passes through unity towards zero. Possible solutions occurring at the singular points are explored. Finally, a bisection root finding algorithm is constructed as a Python iterator to invert certain solutions. Summarizing, the one real root can be found as a solution to either of the following equations:
and gives:
x0 = -2.059732
y0 = +1.800694
A(x0) = B(y0) = (-0.707131, -i 0.392670)
As in most problems there are a number of ways to proceed. One can use a "black box" and hopefully find the root one is looking for. Sometimes an answer is all that is desired, and with a little understanding of the functions this may be an adequate way forward. Unfortunately, it is often true that such an approach will provide less insight about the problem than others.
For example, algorithms can have difficulty locating roots in the global space. Local roots may be found while other roots lying close by remain undiscovered. Consequently, the question arises: "Are all the roots accounted for?" A more complete understanding of the functions, e.g. asymptotic behaviors, branch cuts, singular points, can provide the global perspective to better answer this, as well as other important questions.
So another possible solution would be building one's own "black box." A simple bisection routine might be a starting point. Robust if the root lies in the initial interval and fairly efficient. This encourages us to look at the global behavior of the functions. As the code is structured and debugged the various functions are explored, new insights are gained, and the algorithm has become a tool towards a more complete solution to the problem. Perhaps, with some patience, a closed-form solution can be found. A Python iterator is constructed and listed below implementing a bisection root finding algorithm.
Begin by putting the functions A(x) and B(x) in a more standard form:
C(x) = u(x) + i v(x)
and here the complex number i is brought out of the denominator and into the numerator, casting the problem into the form of functions of a complex variable. The new representation simplifies the original functions considerably. The real and imaginary parts are now clearly separated. An interesting graph is to plot A(x) and B(x) in the 3-dimensional space (u, v, x) and then visualize the projection into the u-v plane.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

# f, g are the complex-valued curves (A and B) and (a, b) is the parameter range,
# assumed to be defined elsewhere
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
s = np.linspace(a, b, 1000)
z = s  # the parametric variable supplies the third axis
ax.plot(f(s).real, f(s).imag, z, color='blue')
ax.plot(g(s).real, g(s).imag, z, color='red')
ax.plot(f(s).real, f(s).imag, 0, color='black')  # projection onto the u-v plane
ax.plot(g(s).real, g(s).imag, 0, color='black')
The question arises: "Can the parametric representation be replaced so that a relationship such as,
A(x): u(x) + i v(x) gives v(u) = f(u)
is obtained?" This will provide A(x) as a function v(u) = f(u) in the u-v plane. Then, if for
B(x): u(x) + i v(x) gives v(u) = g(u)
a similar relationship can be found, the solutions can be set equal to one another,
f(u) = g(u)
and the root(s) computed. In fact, it is convenient to look for a solution in the square of the above equation. The worst case is that an algorithm will have to be built to find the root, but at this point the behavior of our functions is better understood. For example, if f(u) and g(u) are polynomials of degree n then it is known that there are n roots. The best case is that a closed-form solution might be a reward for our determination.
Here is more detail to the solution. For A(x) the following is derived:
and v(u) = f(u) is just v(u) = constant. Similarly for B(x) a slightly more complex form is required:
Look at the function g(u) for B(x). It is imaginary if u > 0, but the root must be real since f(u) is real. This means that u must be less than 0, and there is both a positive and a negative real branch to the square root. The sign of f(u) then allows one to pick the negative branch as the solution for the root. So the fact that the solution must be real is determined by the sign of u, and the fact that the real root is negative specifies which branch of the square root to choose.
In the following plot both the real (u < 0) and complex (u > 0) solutions are shown.
The camera looks toward the origin in the back corner, where the red and blue curves meet. The z-axis is the magnitude of f(u) and g(u). The x and y axes are the real/complex values of u respectively. The blue curves are the real solution with (3 - |u|). The red curves are the complex solution with (3 + |u|). The two sets meet at u = 0. The black curve is f(u) equal to (-pi/8).
There is a divergence in g(u) at |u| = 3 and this is associated with x = 0. It is far removed from the solution and will not be considered further.
To obtain the roots to f = g it is easier to square f(u) and equate the two functions. When the function g(u) is squared the branches of the square root are lost, much like squaring the solutions for x**2 = 4. In the end the appropriate root will be chosen by the sign of f(u) and so this is not an issue.
So by looking at the dependence of A and B, with respect to the parametric variable x, a representation for these functions was obtained where v is a function of u and the roots found. A simpler representation can be obtained if the term involving c in the square root is ignored.
The answer gives all the roots to be found. A cubic equation has at most three roots and one is guaranteed to be real. The other two may be imaginary or real. In this case the real root has been found and the other two roots are complex. Interestingly, as c changes these two complex roots may move into the real plane.
In the above figure the x-axis is u and the y axis is the evaluated cubic equation with constant c. The blue curve has c as (pi/8) squared. The red curve uses a larger and negative value for c, and has been translated upwards for purposes of demonstration. For the blue curve there is an inflection point near (0, 0.5), while the red curve has a maximum at (-0.9, 2.5) and a minimum at (0.9, -0.3).
The intersection of the cubic with the black line represents the roots given by: u**3 + c u + 3c = 0. For the blue curve there is one intersection and one real root with two roots in the complex plane. For the red curve there are three intersections, and hence 3 roots. As we change the value of the constant c (blue to red) the one real root undergoes a "pitchfork" bifurcation, and the two roots in the complex plane move into the real space.
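As a quick numerical check of that cubic (a sketch, not part of the original derivation), one can feed it to np.roots for the blue curve's value of c and confirm that the single real root matches the real part of the intersection quoted earlier:
import numpy as np

c = (np.pi / 8)**2
roots = np.roots([1.0, 0.0, c, 3.0 * c])      # u**3 + c*u + 3*c = 0
real_roots = roots[np.abs(roots.imag) < 1e-12].real
print(roots)         # one real root, two complex-conjugate roots for this c
print(real_roots)    # approximately -0.707, the real part of A(x0) above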
Although the root (u_t, v_t) has been located, obtaining the value for x requires that (u, v) be inverted. In the present example this is a trivial matter, but if not, a bisection routine can be used to avoid the algebraic difficulties.
The parametric representation also leads to a solution for the real root, and it rounds out the analysis with an independent verification of the first result. Second, it answers the question about what happens at the singularity in the square root. Third, it gives a greater understanding of the multiplicity of roots.
The steps are: (1) convert A(x) and B(x) into polar form, (2) equate the modulus and argument giving two equations in two unknowns, (3) make a substitution for z = x**2. Converting A(x) to polar form:
Absolute value bars are not indicated, and it should be understood that the moduli r(x) and s(x) are positive definite as their names imply. For B(x):
The two equations in two unknowns:
Finally, the cubic solution is sketched out here where the substitution z = x**2 has been made:
The solution for z = x**2 gives x directly, which allows one to substitute into both A(x) and B(x). This is an exact solution if all terms are maintained in the cubic solution, and there is no error in x0, y0, A(x0), or B(x0). A simpler representation can be found by considering terms proportional to 1/d as small.
Before leaving the polar representation consider the two singular points where: (1) abs(x) = 1, and (2) x = 0. A complicating factor is that the arguments behave something like 1/x instead of x. It is worthwhile to look at a plot of the arctan(a) and then ask yourself how that changes if its argument is replaced by 1/a. The following graphs will then look less foreign.
Consider the polar representation of B(x). As x approaches 0 the modulus and argument tend toward infinity, i.e. the point is infinitely far from the origin and lies along the y-axis. Approaching 0 from the negative direction the point lies along the negative y-axis with varphi = (-pi/2), while approaching from the other direction the point lies along the positive y-axis with varphi = (+pi/2).
A somewhat more complicated behavior is exhibited by A(x). A(x) is even in x since the modulus is positive definite and the argument involves only x**2. There is a symmetry across the y-axis that allows us to only consider the x > 0 plane.
At x = 1 the modulus is just (pi/8), and as x continues to approach 0 so does r(x). The behavior of the argument is more complex. As x approaches unity from large positive values the argument is diverging towards +inf and so theta is approaching (+pi/2). As x passes through 1 the argument becomes complex. At x equals 0 the argument has reached its minimum value of -i. For complex arguments the arctan is given by:
The following are plots of the arguments for A(x) and B(x). The x-axis is the value of x, and the y-axis is the value of the angle in units of pi. In the first plot theta is shown in blue curves, and as x approaches 1 the angle approaches (+pi/2). Theta is real because abs(x) >= 1, and notice it is symmetric across the y-axis. The black curve is varphi and as x approaches 0 it approaches plus or minus (pi/2). Notice it is an odd function in x.
In the second plot A(x) is shown where abs(x) < 1 and the argument becomes complex. Near x = 1 theta is equal to (+pi/2), the blue curve, minus a small imaginary part, the red curve. As x approaches zero theta is equal to (+pi/2) minus a large imaginary part. At x equals 0 the argument is equal to -i and theta = (+pi/2) minus an infinite imaginary part, i.e ln(0) = -inf:
The values for x0 and y0 are determined by the set of equations that equate modulus and argument of A(x) and B(x), and there are no other roots. If x0 = 0 was a root, then it would fall out of these equations. The same holds for x0 = 1. In fact, if one uses approximations in the argument of A(x) about these points, and then substitutes into the equation for the modulus, the equality cannot be maintained there.
Here is another perspective: consider the set of equations where x is assumed large and call it x_inf. The equation for the argument then gives x_inf = y_inf, where 1 is neglected with respect to x_inf squared. Upon substitution into the second equation a cubic is obtained in x_inf. Will this give the correct answer? Yes, if x0 is actually large, and in this case you might get away with it since x0 is approximately 2. The difference between the sqrt(4) and the sqrt(5) is around 10%. But does this mean that x_inf = 100 is a solution? No it does not: x_inf is only a solution if it equals x0.
The initial reason for examining the problem in the first place was to find a context for building a root-finding bisection routine as a Python iterator. This can be used to find any of the roots discussed here, and looks something like this:
class Bisection:
    def __init__(self, a, b, func, max_iter):
        self.max_iter = max_iter
        self.count_iter = 0
        self.a = a
        self.b = b
        self.func = func
        fa = func(self.a)
        fb = func(self.b)
        if fa*fb >= 0.0:
            raise ValueError

    def __iter__(self):
        self.x1 = self.a
        self.x2 = self.b
        self.xmid = self.x1 + ((self.x2 - self.x1)/2.0)
        return self

    def __next__(self):
        f1 = self.func(self.x1)
        f2 = self.func(self.x2)
        error = abs(f1 - f2)
        fmid = self.func(self.xmid)
        if fmid == 0.0:
            return self.xmid
        if f1*fmid < 0:
            self.x2 = self.xmid
        else:
            self.x1 = self.xmid
        self.xmid = self.x1 + ((self.x2 - self.x1)/2.0)
        f1 = self.func(self.x1)
        fmid = self.func(self.xmid)
        self.count_iter += 1
        if self.count_iter >= self.max_iter:
            raise StopIteration
        return self.xmid
The routine does only a minimal amount in the way of catching exceptions and was used to find x for the given solution in the u-v plane. The arguments a and b give the lower and upper brackets for the root to be found. The argument func is the function whose root is to be found. This might look like: u0 - B(x).real. The argument max_iter tells the iterator to stop after a given number of bisections has been attempted.
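A hypothetical usage sketch (my own addition), inverting the real part of B(y) from the earlier question for the u0 found above; the bracket [1, 3] was chosen by inspecting the plots:
def B(y):
    return 3 / (1j * y * (1 + 1j * y))

u0 = -0.707131                    # real part of the intersection quoted above
bisect = Bisection(1.0, 3.0, lambda y: u0 - B(y).real, max_iter=60)
y_root = None
for y_root in bisect:
    pass                          # iterate until max_iter bisections have run
print(y_root, B(y_root))          # y_root is close to 1.800694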

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a product of simple factors in the denominator as a sum of simpler fractions with constant numerators (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs over a common denominator and comparing the constant and first-order y terms on both sides. In this case it is simple: A = B = 1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to give -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at exactly 0 or 1, dy/dt = 0 and y never changes, so the target can never be reached.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that because of the logarithm, y will reach 0.8 within the given range of t unless the initial value of y is extremely small. It is of course also straightforward to rearrange the equation above to express y in terms of t, and then you can plot the function as well.
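For completeness, a small sketch of that last step in code (the initial value y(0) = 0.5 is my own assumption, since the question doesn't give one):
import numpy as np

y0 = 0.5
y_target = 0.8

C = np.log(y0 / (1 - y0))                                # from the initial condition
t = (np.log(y_target / (1 - y_target)) - C) / 0.01       # solve 0.01*t + C = ln(y/(1-y))
print(t)                                                 # about 138.6 for y0 = 0.5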
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
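Here is a minimal Euler sketch along those lines for dy/dt = 0.01*y*(1-y) (the initial value 0.5 and the step size are my own assumptions):
def euler_time_to_target(y0=0.5, target=0.8, dt=0.01, t_max=3000.0):
    t, y = 0.0, y0
    while t < t_max:
        y_new = y + 0.01 * y * (1 - y) * dt          # one Euler step
        if y_new >= target:
            # linear interpolation between the last two points
            return t + dt * (target - y) / (y_new - y)
        t, y = t + dt, y_new
    return None                                      # target not reached within t_max

print(euler_time_to_target())   # roughly 138.6 for y0 = 0.5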
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)   # y(0) = 0.5; the initial time defaults to 0

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value; note that y0 = 1 is a fixed point and would never move
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts and ys match). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolate function on the ys array, so that you get a result at a precise time t.
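For instance, one way to extract the crossing time from the odeint output with plain numpy (np.interp here is my own substitution for the scipy interpolation mentioned above; it is valid while ys is monotonically increasing):
t_cross = np.interp(0.8, ys.ravel(), ts)
print(t_cross)   # about 138.6 for y0 = 0.5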
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; Odeint on the scipy website has examples for systems such as coupled springs that can be solved for, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. These exist, and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
