I'm pretty new to Python and am trying to write some code to solve a given quadratic equation. I'm having some trouble with rounding errors in floats, I think because I am subtracting two numbers that are very large with a very small difference between them. (Also, I'm assuming all inputs have real solutions for now.) I've written two different versions of the quadratic formula to show my problem. It works fine for most inputs, but when I try a = .001, b = 1000, c = .001 I get two answers that differ significantly. Here is my code:
from math import sqrt
a = float(input("Enter a: "))
b = float(input("Enter b: "))
c = float(input("Enter c: "))
# Standard quadratic formula: -b and sqrt(b**2 - 4*a*c) nearly cancel
# when 4*a*c is tiny compared to b**2
xp = (-b+sqrt(b**2-4*a*c))/(2*a)
xn = (-b-sqrt(b**2-4*a*c))/(2*a)
print("The solutions are: x = ",xn,", ",xp,sep = '')
# Equivalent rationalized form: the cancellation moves to the other root
xp = (2*c)/(-b-sqrt(b**2-4*a*c))
xn = (2*c)/(-b+sqrt(b**2-4*a*c))
print("The solutions are: x = ",xn,", ",xp,sep = '')
I'm no expert in the maths field, but I believe you should use numpy (a Python module for maths): due to the internal representation of numbers on computers, your calculations will not match real math (floating point arithmetic).
http://docs.python.org/2/tutorial/floatingpoint.html
Check this out; it is almost exactly what you want:
http://www.annigeri.in/2012/02/python-class-for-quadratic-equations.html
To get more precise results with floating point, be careful not to subtract similar quantities. For the quadratic x^2 + a x + b = 0 you know that the roots x1 and x2 satisfy
b = x1 * x2
Compute the root with the larger absolute value first, and get the other one from this relation.
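A minimal sketch of this approach for the monic quadratic x^2 + a*x + b = 0 (the function name and structure are mine, not from the answer):

from math import sqrt

def roots_monic(a, b):
    # Real roots of x**2 + a*x + b = 0 (real roots assumed)
    d = sqrt(a*a - 4*b)
    # Choose the sign so that -a and d add instead of cancel:
    x1 = (-a - d)/2 if a >= 0 else (-a + d)/2   # larger-magnitude root
    x2 = b/x1                                   # other root, from x1 * x2 == b
    return x1, x2

print(roots_monic(1000, 1e-6))   # tiny root ~ -1e-9 recovered accurately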
Solutions:
Numpy, as suggested by user dhunter, is usually the best solution for math in Python. The numpy libraries are capable of doing quick and accurate math in a number of different fields.
Decimal data types were added in Python 2.4. If you do not want to download an external library and do not anticipate doing many long or complex calculations, decimal data types may fit the bill.
Simply add:
from decimal import *
to the top of your code and then replace instances of float with Decimal (note the uppercase "D"), preferably constructing the Decimal from a string so the decimal digits are preserved exactly.
Ex: Decimal('1.1047262519') as opposed to float(1.1047262519)
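A quick check of the difference (the float-constructed version drags the binary representation error along; the exact digits of its long expansion depend on the nearest double):

from decimal import Decimal

print(Decimal('1.1047262519'))   # exactly 1.1047262519
print(Decimal(1.1047262519))     # long decimal expansion of the nearest binary double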
Theory:
Float arithmetic is based on binary math and is therefore not always exactly what a user would expect. An excellent description of the float vs. decimal types is located here.
The previously mentioned numpy module is not particularly relevant to the rounding error mentioned in the question. On the other hand, the decimal module can be used in a brute-force manner to get accurate computations. The following snippet from an IPython session illustrates its use (with the default 28-digit accuracy), and also shows that the corresponding floating-point calculation only has about 5 decimal places of accuracy.
In [180]: from decimal import Decimal
In [181]: a=Decimal('0.001'); b=Decimal('1000'); c=Decimal('0.001')
In [182]: (b*b - 4*a*c).sqrt()
Out[182]: Decimal('999.9999999979999999999980000')
In [183]: b-(b*b - 4*a*c).sqrt()
Out[183]: Decimal('2.0000000000020000E-9')
In [184]: import math; a = .001; b = 1000; c = .001
In [185]: math.sqrt(b*b - 4*a*c)
Out[185]: 999.999999998
In [186]: b-math.sqrt(b*b - 4*a*c)
Out[186]: 1.999978849198669e-09
In [187]: 2*a*c/b
Out[187]: 1.9999999999999997e-09
A Taylor series for the square root offers an alternative method to use when 4*a*c is tiny compared to b**2. In this case, √(b*b-4*a*c) ≈ b - 4*a*c/(2*b), whence b - √(b*b-4*a*c) ≈ 2*a*c/b. As the In [187] entry above shows, the Taylor series computation gives a 12-digit-accurate result while using floating point instead of Decimal. Using another Taylor series term would add a couple more digits of accuracy.
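In code, with the question's inputs (a sketch; the -a*c**2/b**3 correction is the next term of the same series):

from math import sqrt

a, b, c = 0.001, 1000.0, 0.001

naive = (-b + sqrt(b*b - 4*a*c)) / (2*a)   # catastrophic cancellation
taylor = -c/b - a*c**2/b**3                # from sqrt(b*b - 4*a*c) ≈ b - 2*a*c/b - 2*(a*c)**2/b**3

print(naive)    # only ~5 accurate digits
print(taylor)   # ≈ -1.000000000001e-06, accurate in double precision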
There are special cases that you should deal with (a combined sketch follows the links below):
a == 0 means a linear equation and one root: x = -c/b
b == 0 means two roots of the form x1, x2 = ±sqrt(-c/a)
c == 0 means two roots, but one of them is zero: x*(ax+b) = 0
If the discriminant is negative, you have two complex conjugate roots.
I'd recommend calculating the square root of the discriminant this way, which avoids forming the potentially huge b*b:
sqrt_disc = abs(b)*sqrt(1.0 - 4.0*(a/b)*(c/b))
I'd also recommend reading this:
https://math.stackexchange.com/questions/187242/quadratic-equation-error
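A combined sketch of the special cases and the cancellation-free formulas above (the names are mine; cmath covers the negative-discriminant case):

import cmath
import math

def solve_quadratic(a, b, c):
    # Roots of a*x**2 + b*x + c = 0, covering the special cases above
    if a == 0:
        return (-c/b,)                      # linear equation, one root (b != 0 assumed)
    if c == 0:
        return (0.0, -b/a)                  # x*(a*x + b) = 0
    disc = b*b - 4*a*c
    sq = cmath.sqrt(disc) if disc < 0 else math.sqrt(disc)
    # Pick the sign so that b and the square root add instead of cancel:
    q = -(b + sq)/2 if b >= 0 else -(b - sq)/2
    return (q/a, c/q)                       # second root from x1 * x2 == c/a

print(solve_quadratic(0.001, 1000, 0.001))
# ≈ (-999999.999999, -1.000000000001e-06)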
Related
I wish to calculate the natural logarithm of a value that is very close to 1, but not exactly one.
For example, np.log(1 + 1e-22) is 0 and not some non-zero value. However, np.log(1 + 1e-13) is not zero, and calculated to be 9.992007221625909e-14.
How can I understand this tradeoff of precision while using a numpy function vs. defining a numpy array with dtype as np.longdouble?
Floating precision information of numpy (v1.22.2) on the system I am using:
>>> np.finfo(np.longdouble)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)
>>> 1 + np.finfo(np.longdouble).eps
1.0000000000000002
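For context, the cutoff seen here is set by the machine epsilon of float64 (a quick check):

import numpy as np

# eps is the gap between 1.0 and the next representable double (~2.2e-16),
# so adding anything much smaller than eps to 1.0 leaves it unchanged
print(np.finfo(np.float64).eps)   # 2.220446049250313e-16
print(1.0 + 1e-22 == 1.0)         # True
print(1.0 + 1e-13 == 1.0)         # False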
To complete the good solution of @yut23 using numpy: if you need to deal with very small floats that do not fit in the native types defined by numpy, or with numbers close to 1 at a precision of more than ~10 digits, then you can use the decimal package. It is slower than native floats, but it gives you arbitrary precision. The catch is that it does not support the natural logarithm (i.e. log) function directly, since that is based on the transcendental Euler number (i.e. e), which can hardly be computed with arbitrary precision (at least not when the precision is huge). Fortunately, you can compute the natural logarithm from the base-10 logarithm and a precomputed Euler number taken from existing number databases like this one (I guess 10000 digits should be enough for your needs ;) ). Here is an example:
import decimal
from decimal import Decimal
decimal.setcontext(decimal.Context(prec=100)) # 100 digits of precision
e = Decimal('2.71828182845904523536028747135266249775724709369995957496696762772407663035354759457138217852516643')
result = (Decimal("1")+Decimal("1e-13")).log10() / e.log10()
# result = 9.999999999999500000000000033333333333330833333333333533333333333316666666666668095238095237970238087E-14
The result is accurate to 99 digits (only the last digit is incorrect).
For practical usage, take a look at np.log1p(x), which computes log(1 + x) without the roundoff error that comes from 1 + x. From the docs:
For real-valued input, log1p is accurate also for x so small that 1 + x == 1 in floating-point accuracy.
Even the seemingly working example of log(1 + 1e-13) differs from the true value at the 3rd decimal place with 64-bit floats, and at the 6th with 128-bit floats (true value is from WolframAlpha):
>>> (1 + 1e-13) - 1
9.992007221626409e-14
>>> np.log(1 + 1e-13)
9.992007221625909e-14
>>> np.log(1 + np.array(1e-13, dtype=np.float128))
9.999997791637127032e-14
>>> np.log1p(1e-13)
9.9999999999995e-14
>>> 9.999999999999500000000000033333333333330833333333333533333*10**-14
9.9999999999995e-14
The fundamental theorem of algebra entails the existence of n complex roots of z^n = a, where a is a real number, n is a positive integer, and z is a complex number. Some of the roots may also be real (i.e. a+bi where b=0).
One example with multiple real roots is z^2 = 1, where we obtain z = ±sqrt(1) = ±1. The solution z = 1 is immediate; the solution z = -1 also satisfies the equation, since (-1)^2 = 1.
In Python/NumPy (as well as many other programming languages and packages) only a single value is returned. Here are two examples for 5^{1/3}, which has 3 roots.
>>> 5 ** (1 / 3)
1.7099759466766968
>>> import numpy as np
>>> np.power(5, 1/3)
1.7099759466766968
It is not a problem for my use case that only one of the possible roots is returned, but it would be informative to know which root is systematically calculated in Python and NumPy. Perhaps there is an (ISO) standard stating which root should be returned, or perhaps there is a commonly used algorithm that happens to return a specific root. I've imagined an equivalence class such as "the maximum of the real-valued solutions", but I do not know.
Question: When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?
Since the identity xᵃ = exp(a⋅log(x)) is typically used to define the general power, you'll get the root corresponding to the chosen branch cut of the complex logarithm.
With regards to this, the numpy documentation says:
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
So for example, np.power(-1 +0j, 1/3) = 0.5 + 0.866j = np.exp(np.log(-1+0j)/3).
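A quick REPL check of that example (the last digits may vary slightly by platform):

>>> import numpy as np
>>> np.power(-1 + 0j, 1/3)
(0.5000000000000001+0.8660254037844386j)
>>> np.exp(np.log(-1 + 0j) / 3)
(0.5000000000000001+0.8660254037844386j)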
I have instances in my code where two complex numbers (using the cmath module) that should be exactly the same do not cancel out, because the floating point precision of the base-2 representation causes the numbers to deviate from each other by a small amount at some nth decimal place.
If they were ordinary floats, it would be a simple matter of just rounding them to a decimal place where the value difference no longer exists.
How could I do the same for the real and imaginary parts of complex numbers represented using the cmath module?
e.g. The following two complex numbers should be exactly the same, how could I implement some code to ensure that the real and imaginary components of some complex number are rounded to the nearest ith decimal place of my choice?
(0.6538461538461539-0.2692307692307693j)
(0.6538461538461539-0.26923076923076916j)
One possible solution, recommended by jonrsharpe:
if abs(a - b) < threshold:
    a = b
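The standard library can also do this comparison directly: cmath.isclose checks two complex numbers against a relative tolerance (a sketch using the question's two values):

import cmath

a = 0.6538461538461539 - 0.2692307692307693j
b = 0.6538461538461539 - 0.26923076923076916j

# True: the values agree to far better than the default rel_tol of 1e-09
print(cmath.isclose(a, b))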
"round" does not work directly on complex numbers, but it does work separately on the real resp. imaginary part of the number, e.g. rounding on 4 digits:
x = 0.6538461538461539-0.2692307692307693j
x_real = round(x.real, 4)
x_imag = round(x.imag, 4)
x = x_real + x_imag * 1j
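Packaged as a small helper (round_complex is a hypothetical name, not a built-in):

def round_complex(z, ndigits=4):
    # Round the real and imaginary parts of z separately
    return complex(round(z.real, ndigits), round(z.imag, ndigits))

print(round_complex(0.6538461538461539 - 0.2692307692307693j))
# (0.6538-0.2692j)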
NumPy is the de facto standard for numerical computing in Python. Try np.round():
import numpy as np
x = 0.6538461538461539-0.2692307692307693j
y = 0.6538461538461539-0.26923076923076916j
print(np.round(x,decimals=3))                            # (0.654-0.269j)
print(np.round(x,decimals=3)==np.round(y,decimals=3))    # True: the two values agree after rounding
print(np.round(x,decimals=3)==np.round(x,decimals=4))    # False: different precisions differ
Is it possible to do a higher-precision matrix exponential in Python? I mean, obtain higher precision than double-precision floating point numbers.
I have the following testing code:
import sympy
from sympy import N
import random
n = 100
#A = sympy.Matrix([[random.random(),random.random()],
# [random.random(),random.random()]])
A = sympy.Matrix([[1,2],[3,4]])
dlt = 1000
e1 = A.exp()
e1 = N(e1, n)
ee2 = (A/dlt).exp()
ee2 = N(ee2, n)
e2 = sympy.eye(2)
for i in range(dlt):
    e2 = e2*ee2
print(N(max(e1-e2)))
Theoretically, the final result should be zero. With scipy, the error is about 1e-14.
With sympy, if the matrix is exactly [[1,2],[3,4]], the output of the previous code is about 1e-98. However, for a random matrix, the error is around 1e-14.
Is it possible to get results like 1e-100 for random matrices?
Speed is not a concern.
Once you use N, you are in the realm of floating-point operations, and as such you can never assume that you will reach exact zero. This is the case with all floating-point arithmetic, as discussed here and in many other places. The only reliable solution is to include a suitably chosen eps variable and a function to check against it.
So instead of checking result == 0, define isZero = lambda val: abs(val) < eps and check isZero(result).
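For instance (the eps value must be matched to the working precision; a sketch):

eps = 1e-90   # a suitable threshold when computing with ~100 significant digits
isZero = lambda val: abs(val) < eps

print(isZero(1e-98))   # True: treat as zero
print(isZero(1e-14))   # False: a genuine residual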
This is a universal problem in floating-point operations. In principle, using sympy, you can find exact zeros because it is an algebra library, not a floating-point math library. However, in the example you gave, not using N (which is what switches to float arithmetic) makes the computation extremely slow.
I made a mistake when first trying mpmath. I have tried mpmath again and it's a perfect solution for this problem.
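A minimal sketch of the mpmath version of the test from the question (expm, eye, and mp.dps are mpmath's APIs; the digit count is my choice):

from mpmath import mp, matrix, expm, eye

mp.dps = 120                      # work with 120 significant digits
A = matrix([[1, 2], [3, 4]])
dlt = 1000

e1 = expm(A)
ee2 = expm(A / dlt)
e2 = eye(2)
for _ in range(dlt):
    e2 = e2 * ee2

# Largest element-wise error; should come out near 1e-100 or smaller
print(max(abs(e1[i, j] - e2[i, j]) for i in range(2) for j in range(2)))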
For 1-D numpy arrays, these two expressions should (theoretically) yield the same result:
(a*b).sum()/a.sum()
dot(a, b)/a.sum()
The latter uses dot() and is faster. But which one is more accurate? Why?
Some context follows.
I wanted to compute the weighted variance of a sample using numpy.
I found the dot() expression in another answer, with a comment stating that it should be more accurate. However no explanation is given there.
Numpy's dot is one of the routines that calls the BLAS library you link against at compile time (or the one numpy builds itself). The importance of this is that the BLAS library can make use of multiply-accumulate operations (usually fused multiply-add, FMA), which limit the number of roundings that the computation performs.
Take the following:
>>> a=np.ones(1000,dtype=np.float128)+1E-14
>>> (a*a).sum()
1000.0000000000199948
>>> np.dot(a,a)
1000.0000000000199948
Not exact, but close enough.
>>> a=np.ones(1000,dtype=np.float64)+1E-14
>>> np.dot(a,a)
1000.0000000000176 #off by 2.3948e-12
>>> (a*a).sum()
1000.0000000000059 #off by 1.40948e-11
np.dot(a, a) will be the more accurate of the two, as it uses approximately half the number of floating-point roundings that the naive (a*a).sum() does.
A book by Nvidia has the following example for 4 digits of precision, where rn stands for rounding to the nearest 4 digits:
x = 1.0008
x^2 = 1.00160064                  # true value
rn(x^2 − 1) = 1.6006 × 10^-4      # fused multiply-add
rn(rn(x^2) − 1) = 1.6000 × 10^-4  # multiply, then add
Of course floating point numbers are not rounded to the 16th decimal place in base 10, but you get the idea.
Writing np.dot(a,a) in the above notation, with some additional pseudocode:
out = 0
for x in a:
    out = rn(x*x + out)   # fused multiply-add
While (a*a).sum() is:
arr = np.zeros(a.shape[0])
for x in range(len(arr)):
    arr[x] = rn(a[x]*a[x])
out = 0
for x in arr:
    out = rn(x + out)
From this it's easy to see that the number is rounded twice as many times using (a*a).sum() compared to np.dot(a,a). These small differences, summed, can change the answer minutely. Additional examples can be found here.
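A quick way to see this on your own machine (math.fsum performs correctly rounded summation and serves as a reference; the exact digits vary with the BLAS build):

import math
import numpy as np

a = np.ones(1000, dtype=np.float64) + 1e-14

naive = (a * a).sum()                            # round each product, then each addition
dot = np.dot(a, a)                               # BLAS path: fewer roundings (often FMA)
ref = math.fsum(float(x) * float(x) for x in a)  # one rounding per product, exact summation

print(abs(naive - ref), abs(dot - ref))          # dot is typically closer to ref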