What do extra results of numpy.polyfit mean? - python

When creating a line of best fit with numpy's polyfit, you can specify the parameter full to be True. This returns four extra values in addition to the coefficients. What do these values mean and what do they tell me about how well the function fits my data?
https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html
What I'm doing is:
bestFit = np.polyfit(x_data, y_data, deg=1, full=True)
and I get the result:
(array([0.00062008, 0.00328837]), array([0.00323329]), 2, array([1.30236506, 0.55122159]), 1.1102230246251565e-15)
The documentation says that the four extra pieces of information are: residuals, rank, singular_values, and rcond.
Edit:
I am looking for a further explanation of how rcond and singular_values describe goodness of fit.
Thank you!

how rcond and singular_values describe goodness of fit.
Short answer: they don't.
They do not describe how well the polynomial fits the data; that is what the residuals are for. They describe how numerically robust the computation of that polynomial was.
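To make that concrete, here is a minimal sketch of mine (assuming x_data and y_data are the arrays from the question) of how the returned residuals relate to a conventional goodness-of-fit number such as R^2:

import numpy as np

# assuming x_data and y_data are the 1-D arrays from the question
coeffs, residuals, rank, sv, rcond = np.polyfit(x_data, y_data, deg=1, full=True)

ss_res = residuals[0]                           # sum of squared residuals of the fit (empty only in degenerate cases)
ss_tot = np.sum((y_data - y_data.mean())**2)    # total sum of squares
r_squared = 1 - ss_res / ss_tot                 # the usual R^2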
rcond
The value of rcond is not really about the quality of the fit; it describes the process by which the fit was obtained, namely a least-squares solution of a linear system. Most of the time the user of polyfit does not provide this parameter, so a suitable value is picked by polyfit itself and then returned to the user for their information.
rcond is used for truncation in ill-conditioned matrices. A least-squares solver does two things:
Finds x that minimizes the norm of the residuals Ax - b
If multiple x achieve this minimum, returns the x with the smallest norm among those.
The second clause kicks in when some changes of x do not affect the right-hand side at all. But since floating-point computations are imperfect, what usually happens is that some changes of x affect the right-hand side very little. And this is where rcond is used to decide when "very little" should be considered "zero up to noise".
For example, consider the system
x1 = 1
x1 + 0.0000000001 * x2 = 2
This one can be solved exactly: x1 = 1 and x2 = 10000000000. But... that tiny coefficient (which, in reality, came out of some matrix manipulations) has some numeric error in it; for all we know it could be negative, or zero. Should we let it have such a huge influence on the solution?
So, in such a situation the matrix (specifically its singular values) gets truncated at level rcond. This leaves
x1 = 1
x1 = 2
for which the least-squares solution is x1 = 1.5, x2 = 0. Note that this solution is robust: no huge numbers from tiny fluctuations of coefficients.
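Here is a minimal sketch of mine showing the same effect with np.linalg.lstsq (which is what polyfit uses under the hood); the rcond cutoff decides whether the tiny coefficient is kept or treated as noise:

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1e-10]])
b = np.array([1.0, 2.0])

# keep every singular value: the "exact" but wildly unstable solution
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)   # ~ [1, 1e10]

# truncate singular values below 1e-8 * largest: the robust solution
x_robust, *_ = np.linalg.lstsq(A, b, rcond=1e-8)  # ~ [1.5, 0]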
Singular values
When one solves a linear system Ax = b in the least squares sense, the singular values of A determine how numerically tricky this is. Specifically, large disparity between largest and smallest singular values is problematic: such systems are ill-conditioned. An example is
0.835*x1 + 0.667*x2 = 0.168
0.333*x1 + 0.266*x2 = 0.067
The exact solution is (1, -1). But if the right-hand side is changed from 0.067 to 0.066, the solution is (-666, 834) -- totally different. The problem is that the singular values of A are (roughly) 1 and 1e-6; this magnifies any changes on the right-hand side by a factor of about 1e6.
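A quick numerical illustration of that sensitivity (my own sketch, not part of the original answer):

import numpy as np

A = np.array([[0.835, 0.667],
              [0.333, 0.266]])
b1 = np.array([0.168, 0.067])
b2 = np.array([0.168, 0.066])   # right-hand side perturbed by 0.001

print(np.linalg.svd(A, compute_uv=False))  # roughly [1, 1e-6]
print(np.linalg.solve(A, b1))              # ~ [ 1, -1]
print(np.linalg.solve(A, b2))              # ~ [-666, 834]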
Unfortunately, polynomial fitting often results in ill-conditioned matrices. For example, fitting a polynomial of degree 24 to 25 equally spaced data points is inadvisable.
import numpy as np
x = np.arange(25)
np.polyfit(x, x, 24, full=True)
The singular values are
array([4.68696731e+00, 1.55044718e+00, 7.17264545e-01, 3.14298605e-01,
1.16528492e-01, 3.84141241e-02, 1.15530672e-02, 3.20120674e-03,
8.20608411e-04, 1.94870760e-04, 4.28461687e-05, 8.70404409e-06,
1.62785983e-06, 2.78844775e-07, 4.34463936e-08, 6.10212689e-09,
7.63709211e-10, 8.39231664e-11, 7.94539407e-12, 6.32326226e-13,
4.09332903e-14, 2.05501534e-15, 7.55397827e-17, 4.81104905e-18,
8.98275758e-20]),
which, with the default value of rcond (5.55e-15 here), gets four of them truncated to 0.
The difference in magnitude between smallest and largest singular values indicates that perturbing the y-values by numbers of size 1e-15 can result in changes of about 1 to the coefficients. (Not every perturbation will do that, just some that happen to align with a singular vector for a small singular value).
Rank
The effective rank is just the number of singular values above the rcond threshold. In the above example it's 21. This means that even though the fit is for 25 points, and we get a polynomial with 25 coefficients, there are only 21 degrees of freedom in the solution.
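As a quick check (a snippet of mine, using the degree-24 example above), the effective rank is simply the third element of the full=True return:

import numpy as np

x = np.arange(25)
# numpy emits a RankWarning here, which is exactly the point
coeffs, residuals, rank, sv, rcond = np.polyfit(x, x, 24, full=True)
print(rank)                            # 21 in the example above
print(np.sum(sv > rcond * sv.max()))   # the same count, computed by hand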

Related

scipy bvp solver for four differential equation systems

I am trying to solve this system, but I get an error. I have to define y3d = 0 because y3' = 0 in the equation system, but when I do this the program can't solve it; if I set y3d = y[3] the program runs.
The equation system I have to solve is:
dy1/dx = y2
dy2/dx = -y3*y1
dy3/dx = 0
dy4/dx = y1**2 + y2**2
with boundary conditions y1(0) = y1(1) = 0 and y4(0) = 0, y4(1) = 1.
Can scipy handle this?
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

def eqts(x, y):
    y1d = y[1]
    y2d = -y[2]*y[0]
    y3d = 0
    y4d = y[0]**2 + y[1]**2
    return np.vstack((y1d, y2d, y3d, y4d))

def bc(ya, yb):
    return np.array([ya[0], yb[3], ya[0], yb[3]-1])

x = np.linspace(0, 1, 10)
y = np.zeros((4, x.size))
y[2,:] = 1
sol = solve_bvp(eqts, bc, x, y)
Unfortunately I get the following error message:
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 10 and the array at index 2 has size 1
Well, first of all, in your script your boundary conditions are overdetermined: nowhere is it said that y3(0) = 0 or y3(1) = 0. In fact y3(t) is a constant, but it is not zero; if you impose the condition y3(t) = 0, things will not work at all. On top of that, this system looks non-linear (quadratic) but is actually a linear system, and you can solve it explicitly without Python. If I am not mistaken, the only way you can have a solution is when y3 > 0, which gives you
y1(t) = B * sin(k*pi*t)
y2(t) = k*pi*B * cos(k*pi*t)
y3(t) = k^2*pi^2
y4(t) = t + (k^2*pi^2 - 1) * B^2 * sin(2*k*pi*t) / (4*k*pi)
where B = sqrt( 2*pi*k / (k^2*pi^2 + 1) )
and k is an arbitrary non-zero integer
or at least something along those lines.
The main issue is here
y1d=y[1]
y2d=-y[2]*y[0]
y3d=0
y4d=y[0]**2+y[1]**2
In your implementation y1d, y2d, and y4d are all vectors, but y3d is a scalar!
You may use y3d = np.zeros_like(y1d).
solve_bvp requires the right-hand side to have the same shape as y; see https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html?highlight=solve_bvp#scipy.integrate.solve_bvp
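Putting that fix in place, the right-hand side could look like this (a sketch based on the suggestion above; the boundary conditions would still need the corrections discussed in the first answer):

def eqts(x, y):
    y1d = y[1]
    y2d = -y[2]*y[0]
    y3d = np.zeros_like(y1d)   # broadcast the constant derivative over the whole grid
    y4d = y[0]**2 + y[1]**2
    return np.vstack((y1d, y2d, y3d, y4d))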
In scipy's solve_bvp there is the possibility to treat constant components as parameters that are to be fitted. Thus one can define the system in dimension 3 as
def eqn(t, x, p):
    return x[1], -p[0]*x[0], x[0]**2 + x[1]**2

def bc(x0, x1, p):
    return x0[0], x1[0], x0[2], x1[2] - 1

t_init = np.linspace(0, 1, 6)
x_init = np.zeros([3, len(t_init)])
x_init[0,1] = 1   # break the symmetry that gives a singular Jacobian
Another variant to get a general initial guess would be to fill it with random noise.
To get different solutions the initial conditions need to be different, one point of influence is the frequency square. Setting it close to the expected values gives indeed different solutions.
for w0 in np.arange(1, 10)*3:
    res = solve_bvp(eqn, bc, t_init, x_init, p=[w0**2])
    print(res.message, res.p[0])
This gives as output
The algorithm converged to the desired accuracy. 9.869869348810965
The algorithm converged to the desired accuracy. 39.47947747757308
The algorithm converged to the desired accuracy. 88.82805487260174
The algorithm converged to the desired accuracy. 88.8280548726001
The algorithm converged to the desired accuracy. 157.91618155379464
The algorithm converged to the desired accuracy. 157.91618155379678
The algorithm converged to the desired accuracy. 355.31352482772894
The algorithm converged to the desired accuracy. 355.3135248277307
The algorithm converged to the desired accuracy. 157.91618163528014
As one can see, at the higher frequencies the given initial frequency gets balanced against the lower-frequency behaviour of the initial guess functions. This is a general problem: if not forced to stay orthogonal to the lower-frequency solutions, the solver tends towards the smoother, lower frequencies.
Adding plot commands such as plt.plot(res.x, res.y[0]) shows the expected sinusoidal solutions.
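For completeness, one way the plotting might look (a sketch of mine, reusing the loop and definitions above):

import matplotlib.pyplot as plt

for w0 in np.arange(1, 10)*3:
    res = solve_bvp(eqn, bc, t_init, x_init, p=[w0**2])
    plt.plot(res.x, res.y[0], label="w0=%d" % w0)   # y1(t) for each starting frequency
plt.legend()
plt.show()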

numpy roots() returns false roots

I'm trying to use numpy to find the roots of some polynomials, but I am getting some erroneous results:
>>> poly = np.polynomial.Polynomial([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, -7.99360578e-15, -1.89182003e-13])
What is up with the false root -1.64593692e+09 which results in -3.74803539e+23? This is clearly not a root.
Is this the result of floating-point errors, or something else?
And more importantly:
Is there a way to get around it?
Perhaps something I can tweak, or a different function I can use? Any help is much appreciated.
I found this and this previous question which seemed to be related, but after reading them and the answers/comments I don't think that they are the same problem.
First of all, computing the roots of a polynomial is a classically ill-conditioned problem, meaning (roughly) that no matter what algorithm you use to solve it, small changes in the coefficients of many polynomials can lead to huge changes in their roots. That means we should be a little careful not to place an extraordinary amount of faith in root-finding results in general, and that perhaps that we shouldn't be too surprised when a root finder gives weird results. There's a pretty good example on Wikipedia, Wilkinson's polynomial, that shows how things can go wrong.
In this instance, the coefficients of the polynomial of interest are of such different magnitudes that it's not surprising that the results seem poor. But consider this: if our original polynomial is p() and it has a root x, then p(x) = 0, but also c*p(x) = 0 for any constant c. In other words, we can scale the coefficients without changing the roots, so what happens if we normalize the polynomial by dividing by the coefficient of largest magnitude, about 7e25?
Original polynomial: p(x) = 4.4 + 2.3e+14*x - 7.0e25*x**2 - 4.3e16*x**3
Scaled polynomial: p(x) = 6.3e-26 + 3.2e-12*x - x**2 - 6.1e-10*x**3
So for this polynomial, the largest coefficient ~7e25 is so huge that the smallest coefficient ~4.4 is essentially negligible. That should give us a hint that what counts as zero in a root finding iteration isn't what we would normally consider "small."
The short answer is that the root calculated by NumPy isn't perfect, but it is an estimate of an actual root. Here's some code to convince us.
>>> import numpy as np
>>> coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> coefs_normed = coefs / np.abs(coefs).max()
>>> coefs_normed
array([ 6.25524549e-26, 3.24916108e-12, -1.00000000e+00, -6.07556697e-10])
>>> poly = np.polynomial.Polynomial(coefs)
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, 8.43769499e-14, -1.89182003e-13])
>>> poly_normed = np.polynomial.Polynomial(coefs_normed)
>>> roots_normed = poly_normed.roots()
>>> roots_normed
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly_normed(roots_normed)
array([-5.34791419e-03, 1.20534089e-39, -2.11221641e-39])
Now, -5e-03 is not very close to machine epsilon, but that should convince us that maybe the calculated root isn't quite as bad as it seemed at first.
A final point: the np.polynomial.Polynomial class has domain and window arguments that determine how it does its computations. Since polynomials get absolutely huge as the domain tends to +infinity or -infinity, it's unrealistic to expect accurate calculations for a value around 10^9.
The root appears real:
x = np.linspace(-2e9, 1000, 10000)
plt.plot(x, poly(x))
The problem is that the scale of the data is very large: -3e23 is tiny compared to, say, 6e43. The discrepancy is caused by roundoff error. Third-order polynomials have an analytical solution, but it's not going to be numerically stable when your domain is on the order of 1e9.
You can try to use the domain and window parameters to introduce some numerical stability. For example, a common choice of domain is something that envelops your entire dataset. You would have to adjust the coefficients to compensate, since those values are usually used for fitting data.
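To make the scale argument concrete, here is a small sketch of mine that prints the size of each term of the original polynomial at the suspicious root:

import numpy as np

coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
poly = np.polynomial.Polynomial(coefs)
x0 = poly.roots()[0]                      # the large root near -1.6e9, first in the output shown above
terms = coefs * x0**np.arange(len(coefs))
print(terms)        # the quadratic and cubic terms are each of order 1e44
print(terms.sum())  # they nearly cancel; what is left is many orders of magnitude smaller than either term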

What does np.polyfit do and return?

I went through the docs but I'm not able to interpret them correctly.
In my code, I wanted to find a line that goes through two points (x1, y1), (x2, y2), so I've used
np.polyfit((x1,x2),(y1,y2),1)
since it's a degree-1 polynomial (a straight line).
It returns [ -1.04 727.2 ].
Though my code (which is a much larger file) runs properly and does what it is intended to do, I want to understand what this is returning.
I'm assuming polyfit returns a line (curved, straight, whatever) that satisfies (goes through) the points given to it, so how can a line be represented by the two values it returns?
From the numpy.polyfit documentation:
Returns:
p : ndarray, shape (deg + 1,) or (deg + 1, K)
Polynomial coefficients, highest power first. If y was 2-D, the coefficients for k-th data set are in p[:,k].
So these numbers are the coefficients of your polynomial. Thus, in your case:
y = -1.04*x + 727.2
By the way, numpy.polyfit will only return an equation that goes through all the points (say you have N) if the degree of the polynomial is at least N-1. Otherwise, it will return a best fit that minimises the squared error.
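As a small illustration (mine, with made-up points chosen so the output matches the numbers above), the returned coefficients can be turned into a callable line:

import numpy as np

x1, y1, x2, y2 = 0.0, 727.2, 100.0, 623.2      # hypothetical points, for illustration only
coeffs = np.polyfit((x1, x2), (y1, y2), 1)     # -> approximately [-1.04, 727.2]
line = np.poly1d(coeffs)                       # y = coeffs[0]*x + coeffs[1]
print(line(x1), line(x2))                      # reproduces y1 and y2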
These are essentially the beta and alpha values for the given data, where beta is the slope (in finance terms, a measure of volatility) and alpha is the intercept.

Find intersection of A(x) and B(y) in complex plane plus corr. x and y

Suppose I have the following problem:
I have a complex function A(x) and a complex function B(y). I know these functions cross in the complex plane. I would like to find the corresponding x and y of this intersection point, numerically (and/or graphically). What is the most clever way of doing that?
This is my starting point:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
def A_(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )
A = np.vectorize(A_)

def B_(y):
    return 3/(1j*y*(1+1j*y))
B = np.vectorize(B_)
real_A = np.real(A(x))
imag_A = np.imag(A(x))
real_B = np.real(B(y))
imag_B = np.imag(B(y))
plt.plot(real_A, imag_A, color='blue')
plt.plot(real_B, imag_B, color='red')
plt.show()
I don't have to plot it necessarily. I just need x_intersection and y_intersection (with some error that depends on x and y).
Thanks a lot in advance!
EDIT:
I should have used different variable names. To clarify what I need:
x and y are numpy arrays and I need the index of the intersection point in each array, plus the corresponding x and y value (which again is not the intersection point itself, but some value of the arrays x and y).
Here I find the minimum of the distance between the two curves. Also, I cleaned up your code a bit (eg, vectorize wasn't doing anything useful).
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
from scipy import optimize
def A(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )

def B(y):
    return 3/(1j*y*(1+1j*y))

# The next three lines find the intersection
def dist(x):
    return abs(A(x[0])-B(x[1]))
sln = optimize.minimize(dist, [1, 1])
# plotting everything....
a0, b0 = A(sln.x[0]), B(sln.x[1])
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
a, b = A(x), B(y)
plt.plot(a.real, a.imag, color='blue')
plt.plot(b.real, b.imag, color='red')
plt.plot(a0.real, a0.imag, "ob")
plt.plot(b0.real, b0.imag, "xr")
plt.show()
The specific x and y values at the intersection point are sln.x[0] and sln.x[1], since A(sln.x[0])=B(sln.x[1]). If you need the index, as you also mention in your edit, I'd use, for example, numpy.searchsorted(x, sln.x[0]), to find where the values from the fit would insert into your x and y arrays.
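That is, something along these lines:

ix = np.searchsorted(x, sln.x[0])   # first grid index with x[ix] >= sln.x[0]
iy = np.searchsorted(y, sln.x[1])   # same for the y grid
print(ix, x[ix], iy, y[iy])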
I think what's a bit tricky with this problem is that the space for graphing where the intersection is (ie, the complex plane) does not show the input space, but one has to optimize over the input space. It's useful for visualizing the solution, then, to plot the distance between the curves over the input space. That can be done like this:
X, Y = np.meshgrid(x, y)   # evaluate the distance on the full x-y grid
data = dist((X, Y))
fig, ax = plt.subplots()
im = ax.imshow(data, cmap=plt.cm.afmhot, interpolation='none',
               extent=[min(x), max(x), min(y), max(y)], origin="lower")
cbar = fig.colorbar(im)
plt.plot(sln.x[0], sln.x[1], "xw")
plt.title("abs(A(x)-B(y))")
From this it seems much clearer how optimize.minimize is working -- it just rolls down the slope to find the minimum distance, which is zero in this case. But still, there's no obvious single visualization that one can use to see the whole problem.
For other intersections one has to dig a bit more. That is, #emma asked about other roots in the comments, and there I mentioned that there's no generally reliable way to find all roots to arbitrary equations, but here's how I'd go about looking for other roots. Here I won't lay out the complete program, but just list the changes and plots as I go along.
First, it's obvious that for the domain shown in my first plot there's only one intersection, and that there are no intersections in the region to the left. The only place there could be another intersection is to the right, but for that I'll need to allow the sqrt in the def of A to get a negative argument without throwing an exception. An easy way to do this is to add 0j to the argument of the sqrt, like this: sqrt(1+0j-(1/x)**2). Then the plot with the intersection becomes
I plotted this over a broader range (x=np.linspace(-10, 10, 10000) and y=np.linspace(-400, 400, 10000)) and the above is the zoom of the only place where anything interesting is going on. This shows the intersection found above, plus the point where it looks like the two curves might touch (where the red curve, B, comes to a point nearly meeting the blue curve A going upward), so that's the new interesting thing, and the thing I'll look for.
A bit of playing around with limits, etc, show that B is coming to a point asymptotically, and the equation of B is obvious that it will go to 0 + 0j for large +/- y, so that's about all there is to say for B.
It's difficult to understand A from the above plot, so I'll look at the real and imaginary parts independently:
So it's not a crazy looking function, and the jumping between Re=const and Im=const is just the nature of sqrt(1-(1/x)**2), which is purely imaginary for abs(x)<1 and purely real for abs(x)>1.
It's pretty clear now that the other time the curves are equal is at y= +/-inf and x=0. And, quick look at the equations show that A(0)=0+0j and B(+/- inf)=0+0j, so this is another intersection point (though since it occurs at B(+/- inf), it's sort-of ambiguous on whether or not it would be called an intersection).
So that's about it. One other point to mention is that if these didn't have such an easy analytic solution, like it wasn't clear what B was at inf, etc, one could also graph/minimize, etc, by looking at B(1/y), and then go from there, using the same tools as above to deal with the infinity. So using:
def dist2(x):
    return abs(A(x[0])-B(1./x[1]))
Where the min on the right is the one initially found, and the zero, now at x=-0 and 1./y=0 is the other one (which, again, isn't interesting enough to apply an optimizer here, but it could be interesting in other equations).
Of course, it's also possible to estimate this by just finding the minimum of the data that goes into the above graph, like this:
X, Y = np.meshgrid(x, y)
data = dist((X, Y))
r = np.unravel_index(data.argmin(), data.shape)
print(x[r[1]], y[r[0]])
# 2.06306306306 1.8008008008 # min approach gave 2.05973231 1.80069353
But this is only approximate (to the resolution of data) and involved many more calculations (1M compared to a few hundred). I only post this because I think it might be what the OP originally had in mind.
Briefly, two analytic solutions are derived for the roots of the problem. The first solution removes the parametric representation of x and solves for the roots directly in the (u, v) plane, where for example A(x): u(x) + i v(x) gives v(u) = f(u). The second solution uses a polar representation, e.g. A(x) is given by r(x) exp(i theta(x)), and offers a better understanding of the behavior of the square root as x passes through unity towards zero. Possible solutions occurring at the singular points are explored. Finally, a bisection root-finding algorithm is constructed as a Python iterator to invert certain solutions. Summarizing, the one real root can be found as a solution to either of the following equations:
and gives:
x0 = -2.059732
y0 = +1.800694
A(x0) = B(y0) = (-0.707131, -i 0.392670)
As in most problems there are a number of ways to proceed. One can use a "black box" and hopefully find the root they are looking for. Sometimes an answer is all that is desired, and with a little understanding of the functions this may be an adequate way forward. Unfortunately, it is often true that such an approach will provide less insight about the problem than others.
For example, algorithms find it difficult to locate roots in the global space. A local root may be found while other roots lying close by remain undiscovered. Consequently, the question arises: "Are all the roots accounted for?" A more complete understanding of the functions, e.g. asymptotic behaviors, branch cuts, singular points, can provide the global perspective to better answer this, as well as other important questions.
So another possible solution would be building one's own "black box." A simple bisection routine might be a starting point. Robust if the root lies in the initial interval and fairly efficient. This encourages us to look at the global behavior of the functions. As the code is structured and debugged the various functions are explored, new insights are gained, and the algorithm has become a tool towards a more complete solution to the problem. Perhaps, with some patience, a closed-form solution can be found. A Python iterator is constructed and listed below implementing a bisection root finding algorithm.
Begin by putting the functions A(x) and B(x) in a more standard form:
C(x) = u(x) + i v(x)
and here the complex number i is brought out of the denominator and into the numerator, casting the problem into the form of functions of a complex variable. The new representation simplifies the original functions considerably. The real and imaginary parts are now clearly separated. An interesting graph is to plot A(x) and B(x) in the 3-dimensional space (u, v, x) and then visualize the projection into the u-v plane.
import numpy as np
from numpy import real, imag
import matplotlib.pyplot as plt

# f, g are assumed to be the parametric curves (e.g. f = A, g = B above),
# [a, b] the parameter range, and z the height coordinate (taken here to be s itself)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
s = np.linspace(a, b, 1000)
z = s
ax.plot(f(s).real, f(s).imag, z, color='blue')
ax.plot(g(s).real, g(s).imag, z, color='red')
ax.plot(f(s).real, f(s).imag, 0, color='black')   # projection into the u-v plane
ax.plot(g(s).real, g(s).imag, 0, color='black')
The question arises: "Can the parametric representation be replaced so that a relationship such as,
A(x): u(x) + i v(x) gives v(u) = f(u)
is obtained?" This will provide A(x) as a function v(u) = f(u) in the u-v plane. Then, if for
B(x): u(x) + i v(x) gives v(u) = g(u)
a similar relationship can be found, the solutions can be set equal to one another,
f(u) = g(u)
and the root(s) computed. In fact, it is convenient to look for a solution in the square of the above equation. The worst case is that an algorithm will have to be built to find the root, but at this point the behavior of our functions is better understood. For example, if f(u) and g(u) are polynomials of degree n then it is known that there are n roots. The best case is that a closed-form solution might be a reward for our determination.
Here is more detail to the solution. For A(x) the following is derived:
and v(u) = f(u) is just v(u) = constant. Similarly for B(x) a slightly more complex form is required:
Look at the function g(u) for B(x). It is imaginary if u > 0, but the root must be real since f(u) is real. This means that u must be less than 0, and there is both a positive and a negative real branch to the square root. The sign of f(u) then allows one to pick the negative branch as the solution for the root. So the fact that the solution must be real is determined by the sign of u, and the fact that the real root is negative specifies which branch of the square root to choose.
In the following plot both the real (u < 0) and complex (u > 0) solutions are shown.
The camera looks toward the origin in the back corner, where the red and blue curves meet. The z-axis is the magnitude of f(u) and g(u). The x and y axes are the real/complex values of u respectively. The blue curves are the real solution with (3 - |u|). The red curves are the complex solution with (3 + |u|). The two sets meet at u = 0. The black curve is f(u) equal to (-pi/8).
There is a divergence in g(u) at |u| = 3 and this is associated with x = 0. It is far removed from the solution and will not be considered further.
To obtain the roots to f = g it is easier to square f(u) and equate the two functions. When the function g(u) is squared the branches of the square root are lost, much like squaring the solutions for x**2 = 4. In the end the appropriate root will be chosen by the sign of f(u) and so this is not an issue.
So by looking at the dependence of A and B, with respect to the parametric variable x, a representation for these functions was obtained where v is a function of u and the roots found. A simpler representation can be obtained if the term involving c in the square root is ignored.
The answer gives all the roots to be found. A cubic equation has at most three roots and one is guaranteed to be real. The other two may be imaginary or real. In this case the real root has been found and the other two roots are complex. Interestingly, as c changes these two complex roots may move into the real plane.
In the above figure the x-axis is u and the y axis is the evaluated cubic equation with constant c. The blue curve has c as (pi/8) squared. The red curve uses a larger and negative value for c, and has been translated upwards for purposes of demonstration. For the blue curve there is an inflection point near (0, 0.5), while the red curve has a maximum at (-0.9, 2.5) and a minimum at (0.9, -0.3).
The intersection of the cubic with the black line represents the roots given by: u**3 + c u + 3c = 0. For the blue curve there is one intersection and one real root with two roots in the complex plane. For the red curve there are three intersections, and hence 3 roots. As we change the value of the constant c (blue to red) the one real root undergoes a "pitchfork" bifurcation, and the two roots in the complex plane move into the real space.
Although the root (u_t, v_t) has been located, obtaining the value for x requires that (u, v) be inverted. In the present example this is a trivial matter, but if not, a bisection routine can be used to avoid the algebraic difficulties.
The parametric representation also leads to a solution for the real root, and it rounds out the analysis with an independent verification of the first result. Second, it answers the question about what happens at the singularity in the square root. Third, it gives a greater understanding of the multiplicity of roots.
The steps are: (1) convert A(x) and B(x) into polar form, (2) equate the modulus and argument giving two equations in two unknowns, (3) make a substitution for z = x**2. Converting A(x) to polar form:
Absolute value bars are not indicated, and it should be understood that the moduli r(x) and s(x) are positive definite as their names imply. For B(x):
The two equations in two unknowns:
Finally, the cubic solution is sketched out here where the substitution z = x**2 has been made:
The solution for z = x**2 gives x directly, which allows one to substitute into both A(x) and B(x). This is an exact solution if all terms are maintained in the cubic solution, and there is no error in x0, y0, A(x0), or B(x0). A simpler representation can be found by considering terms proportional to 1/d as small.
Before leaving the polar representation consider the two singular points where: (1) abs(x) = 1, and (2) x = 0. A complicating factor is that the arguments behave something like 1/x instead of x. It is worthwhile to look at a plot of the arctan(a) and then ask yourself how that changes if its argument is replaced by 1/a. The following graphs will then look less foreign.
Consider the polar representation of B(x). As x approaches 0 the modulus and argument tend toward infinity, i.e. the point is infinitely far from the origin and lies along the y-axis. Approaching 0 from the negative direction the point lies along the negative y-axis with varphi = (-pi/2), while approaching from the other direction the point lies along the positive y-axis with varphi = (+pi/2).
A somewhat more complicated behavior is exhibited by A(x). A(x) is even in x since the modulus is positive definite and the argument involves only x**2. There is a symmetry across the y-axis that allows us to only consider the x > 0 plane.
At x = 1 the modulus is just (pi/8), and as x continues to approach 0 so does r(x). The behavior of the argument is more complex. As x approaches unity from large positive values the argument is diverging towards +inf and so theta is approaching (+pi/2). As x passes through 1 the argument becomes complex. At x equals 0 the argument has reached its minimum value of -i. For complex arguments the arctan is given by:
The following are plots of the arguments for A(x) and B(x). The x-axis is the value of x, and the y-axis is the value of the angle in units of pi. In the first plot theta is shown in blue curves, and as x approaches 1 the angle approaches (+pi/2). Theta is real because abs(x) >= 1, and notice it is symmetric across the y-axis. The black curve is varphi and as x approaches 0 it approaches plus or minus (pi/2). Notice it is an odd function in x.
In the second plot A(x) is shown where abs(x) < 1 and the argument becomes complex. Near x = 1 theta is equal to (+pi/2), the blue curve, minus a small imaginary part, the red curve. As x approaches zero theta is equal to (+pi/2) minus a large imaginary part. At x equals 0 the argument is equal to -i and theta = (+pi/2) minus an infinite imaginary part, i.e ln(0) = -inf:
The values for x0 and y0 are determined by the set of equations that equate modulus and argument of A(x) and B(x), and there are no other roots. If x0 = 0 was a root, then it would fall out of these equations. The same holds for x0 = 1. In fact, if one uses approximations in the argument of A(x) about these points, and then substitutes into the equation for the modulus, the equality cannot be maintained there.
Here is another perspective: consider the set of equations where x is assumed large and call it x_inf. The equation for the argument then gives x_inf = y_inf, where 1 is neglected with respect to x_inf squared. Upon substitution into the second equation a cubic is obtained in x_inf. Will this give the correct answer? Yes, if x0 is actually large, and in this case you might get away with it since x0 is approximately 2. The difference between the sqrt(4) and the sqrt(5) is around 10%. But does this mean that x_inf = 100 is a solution? No it does not: x_inf is only a solution if it equals x0.
The initial reason for examining the problem in the first place was to find a context for building a root-finding bisection routine as a Python iterator. This can be used to find any of the roots discussed here, and looks something like this:
class Bisection:
    def __init__(self, a, b, func, max_iter):
        self.max_iter = max_iter
        self.count_iter = 0
        self.a = a
        self.b = b
        self.func = func
        fa = func(self.a)
        fb = func(self.b)
        if fa*fb >= 0.0:
            raise ValueError

    def __iter__(self):
        self.x1 = self.a
        self.x2 = self.b
        self.xmid = self.x1 + ((self.x2 - self.x1)/2.0)
        return self

    def __next__(self):
        f1 = self.func(self.x1)
        f2 = self.func(self.x2)
        error = abs(f1 - f2)
        fmid = self.func(self.xmid)
        if fmid == 0.0:
            return self.xmid
        if f1*fmid < 0:
            self.x2 = self.xmid
        else:
            self.x1 = self.xmid
        self.xmid = self.x1 + ((self.x2 - self.x1)/2.0)
        f1 = self.func(self.x1)
        fmid = self.func(self.xmid)
        self.count_iter += 1
        if self.count_iter >= self.max_iter:
            raise StopIteration
        return self.xmid
The routine does only a minimal amount in the way of catching exceptions and was used to find x for the given solution in the u-v plane. The arguments a and b give the lower and upper brackets for the root to be found. The argument func is the function whose root is to be found; this might look like: u0 - B(x).real. The constant max_iter tells the iterator to stop after the given number of bisections has been attempted.
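A hypothetical usage sketch (mine, not part of the answer): run the iterator for max_iter bisections and keep the last midpoint as the root estimate. Here the bracketed function is a simple stand-in, f(x) = x**2 - 2 on [1, 2]:

f = lambda x: x**2 - 2.0
root = None
for xmid in Bisection(1.0, 2.0, f, max_iter=60):
    root = xmid
print(root)   # ~1.41421356..., i.e. sqrt(2)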

Difference between fitting algorithms in scipy

I have a question about the fit algorithms used in scipy. In my program, I have a set of x and y data points with y errors only, and want to fit a function
f(x) = (a[0] - a[1])/(1+np.exp(x-a[2])/a[3]) + a[1]
to it.
The problem is that I get absurdly high errors on the parameters and also different values and errors for the fit parameters using the two scipy fit routines scipy.odr.ODR (with least-squares algorithm) and scipy.optimize.leastsq. I'll give my example:
Fit with scipy.odr.ODR, fit_type=2
Beta: [ 11.96765963 68.98892582 100.20926023 0.60793377]
Beta Std Error: [ 4.67560801e-01 3.37133614e+00 8.06031988e+04 4.90014367e+04]
Beta Covariance: [[ 3.49790629e-02 1.14441187e-02 -1.92963671e+02 1.17312104e+02]
[ 1.14441187e-02 1.81859542e+00 -5.93424196e+03 3.60765567e+03]
[ -1.92963671e+02 -5.93424196e+03 1.03952883e+09 -6.31965068e+08]
[ 1.17312104e+02 3.60765567e+03 -6.31965068e+08 3.84193143e+08]]
Residual Variance: 6.24982731975
Inverse Condition #: 1.61472215874e-08
Reason(s) for Halting:
Sum of squares convergence
and then the fit with scipy.optimize.leastsquares:
Fit with scipy.optimize.leastsq
beta: [ 11.9671859 68.98445306 99.43252045 1.32131099]
Beta Std Error: [0.195503 1.384838 34.891521 45.950556]
Beta Covariance: [[ 3.82214235e-02 -1.05423284e-02 -1.99742825e+00 2.63681933e+00]
[ -1.05423284e-02 1.91777505e+00 1.27300761e+01 -1.67054172e+01]
[ -1.99742825e+00 1.27300761e+01 1.21741826e+03 -1.60328181e+03]
[ 2.63681933e+00 -1.67054172e+01 -1.60328181e+03 2.11145361e+03]]
Residual Variance: 6.24982904455 (calculated by me)
My Point is the third fit parameter: The results are
scipy.odr.ODR, fit_type=2:
C = 100.209 +/- 80600
scipy.optimize.leastsq:
C = 99.432 +/- 12.730
I don't know why the first error is so much higher. Even better: If I put exactly the same data points with errors into Origin 9 I get
C = x0 = 99,41849 +/- 0,20283
and again exactly the same data into C++ ROOT (CERN):
C = 99.85 +/- 1.373
even though I used exactly the same initial variables for ROOT and Python. Origin doesn't need any.
Do you have any clue why this happens and which is the best result?
I added the code for you at pastebin:
Data
C++ code
Python code: http://pastebin.com/jZVyzMkS
Thank you for helping!
EDIT: here's the plot related to SirJohnFranklin's post:
Did you actually try plotting the ODR and leastsq fits side by side? They look basically identical:
Consider what the parameters correspond to - the step function described by beta[0] and beta[1], the initial and final values, explains by far the majority of the variance in your data. By contrast, small changes in beta[2] and beta[3], the inflexion point and slope, will have comparatively little effect on the overall shape of the curve and therefore the residual variance for the fit. It's therefore no surprise that these parameters have high standard errors, and are fitted slightly differently by the two algorithms.
The overall greater standard errors reported by ODR are due to the fact that this model incorporates errors in the y-values whereas the ordinary least squares fit does not - errors in the measured y-values ought to reduce our confidence in the estimated fit parameters.
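For reference, here is a rough sketch of mine showing how the two fits are typically set up side by side; the data arrays, the noise level, and the starting values beta0 are all synthetic placeholders for illustration, not the original measurements:

import numpy as np
from scipy import odr
from scipy.optimize import curve_fit

def f(beta, x):
    # the model from the question, in the (beta, x) form scipy.odr expects
    return (beta[0] - beta[1])/(1 + np.exp(x - beta[2])/beta[3]) + beta[1]

# placeholder data and starting values -- not the original measurements
rng = np.random.default_rng(0)
x = np.linspace(80, 120, 60)
yerr = np.full_like(x, 0.5)
y = f([12.0, 69.0, 100.0, 1.0], x) + rng.normal(0, yerr)
beta0 = [10.0, 70.0, 95.0, 2.0]

# explicit ODR run in ordinary-least-squares mode (fit_type=2)
job = odr.ODR(odr.RealData(x, y, sy=yerr), odr.Model(f), beta0=beta0)
job.set_job(fit_type=2)
res_odr = job.run()
print(res_odr.beta, res_odr.sd_beta)

# the same model through curve_fit (a wrapper around leastsq), weighted by the y errors
popt, pcov = curve_fit(lambda x, *a: f(a, x), x, y, p0=beta0,
                       sigma=yerr, absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))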
(Sadly, I can't upload the fit, because I need more reputation. I'll give the plot to Captain Sandwich, so he can upload it for me.)
I'm in the same workgroup as the person who started the thread, but I did this plot.
So, I added x-errors to the data, because I hadn't gotten that far the last time. The error obtained through the ODR is still absurdly high (4.18550164e+04 on beta[2]). In the plot, I show you what the fit from ROOT (CERN) gives, now with x and y errors. Here, x0 is beta[2].
The red and the green curve have a different beta[2]: the left one is minus the fit error of 3.430 obtained by ROOT, and the right one plus that error. I think this makes much more sense than the error of 0.2 given by the fit of Origin 9 (which can only handle y-errors, I think) or the error of about 40k given by the ODR, which also includes x and y errors.
Maybe, because ROOT is mostly used by astrophysicists who need very robust fitting algorithms, it can handle much more difficult fits, but I don't know enough about the robustness of fitting algorithms.
