import sympy as sym
gamma, omega = sym.symbols('gamma, omega')
((gamma-(gamma**2-omega**2)**0.5)*(gamma+(gamma**2-omega**2)**0.5)).simplify()
The output is:
gamma**2 - (gamma**2 - omega**2)**1.0
However, I expected the result to be omega**2. I know the SymPy docs warn about being careful with floating point numbers, but I was under the impression that integers and exponents like 0.5 (which can be represented exactly in binary) were fine.
The following code correctly reproduces omega^2:
((gamma-(gamma**2-omega**2)**sym.Rational(1,2))*(gamma+(gamma**2-omega**2)**sym.Rational(1,2))).simplify()
Why does the first code not produce the expected result?
SymPy draws a distinction between exact and inexact numbers. In this context, floats like 0.5 and 1.0 are considered inexact, so it is not clear whether x**1.0 is really equal to x or to something slightly different, say x**1.00000000000000000000001. That is because floats usually arise from floating point calculations, which can have rounding errors. In your example the result is:
In [5]: from sympy import *
In [6]: gamma, omega = symbols('gamma, omega')
In [7]: e = ((gamma-(gamma**2-omega**2)**0.5)*(gamma+(gamma**2-omega**2)**0.5)).simplify()
In [8]: e
Out[8]: gamma**2 - (gamma**2 - omega**2)**1.0
If you want to tell SymPy that the 1.0 should be treated as an exact 1 then you can use SymPy's nsimplify function:
In [9]: nsimplify(e)
Out[9]: omega**2
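If you want to avoid the issue up front, keep the exponent exact from the start. A minimal sketch (equivalent to the Rational(1, 2) version in the question, just written with sqrt):
from sympy import symbols, sqrt
gamma, omega = symbols('gamma, omega')
r = sqrt(gamma**2 - omega**2)  # exact exponent 1/2, unlike **0.5
print(((gamma - r)*(gamma + r)).simplify())  # omega**2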
I am using Python 3.7.7 and numpy 1.19.1. This is the code:
import numpy as np
a = 55.74947517067784019673 + 0j
print(f'{-a == -1 * a}, {np.angle(-a)}, {np.angle(-1 * a)}')
and this is the output:
True, -3.141592653589793, 3.141592653589793
I have two questions:
Why does the angle function give different outputs for the same input?
According to the documentation, the angle output range is (-pi, pi], so why is one of the outputs -np.pi?
If you look at the source of np.angle, it uses the function np.arctan2. According to the numpy docs, np.arctan2 uses the underlying C library, which follows this rule:
Note that +0 and -0 are distinct floating point numbers, as are +inf and -inf.
which results in different behavior when calculating with +/-0. So, in this case, the rule is:
y: +/- 0
x: <0
angle: +/- pi
Now, if you try:
a = 55.74947517067784019673
print(f'{-a == -1 * a}, {np.angle(-a)}, {np.angle(-1 * a)}')
#True, 3.141592653589793, 3.141592653589793
and if you try:
a = 55.74947517067784019673 + 0j
print(-a)
#(-55.74947517067784-0j)
print(-1*a)
#(-55.74947517067784+0j)
print(f'{-a == -1 * a}, {np.angle(-a)}, {np.angle(-1 * a)}')
#True, -3.141592653589793, 3.141592653589793
which is in line with the documented behavior of the underlying C library.
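The signed-zero rule can also be checked directly with np.arctan2, independent of np.angle (a minimal illustration):
import numpy as np
# arctan2(y, x) with x < 0: the sign of the zero y selects +pi or -pi
print(np.arctan2(0.0, -1.0))   # 3.141592653589793
print(np.arctan2(-0.0, -1.0))  # -3.141592653589793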
As for your second question, I guess it is a typo/mistake since the np.arctan2 doc says:
Array of angles in radians, in the range [-pi, pi]. This is a scalar if both x1 and x2 are scalars.
Explanation of -a vs. -1*a:
To start with, 55.74947517067784019673 + 0j is NOT the construction of a complex number; it is merely the addition of a float to a complex literal (to construct a complex number explicitly, use complex(55.74947517067784019673, 0.0), and beware that only floats have signed zeros; integers do not). -a simply flips the sign and is self-explanatory. Let's see what happens when we calculate -1*a:
For simplicity assume a = 55.5 + 0j
First, a = 55.5+0j becomes complex(55.5, 0.0).
Second, for the multiplication, -1 is converted to complex(-1.0, 0.0).
Then complex(-1.0, 0.0)*complex(55.5, 0.0) equals complex((-1.0*55.5 - 0.0*0.0), (-1.0*0.0 + 0.0*55.5)), which equals complex((-55.5 - 0.0), (-0.0 + 0.0)), which equals complex(-55.5, 0.0).
Note that -0.0 + 0.0 equals 0.0, and the sign rule only applies to multiplication and division, as mentioned in this link and quoted in the comments below. To better understand it, see this:
print(complex(-1.0, -0.0)*complex(55.5, 0.0))
#(-55.5-0j)
where the imaginary part is (-1.0*0.0 + (-0.0)*55.5) = (-0.0 + -0.0) = -0.0
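Putting the two cases side by side, here is a small check of the signed-zero bookkeeping described above:
import numpy as np
a = complex(55.5, 0.0)
print((-a).imag)      # -0.0: negation flips the sign of the zero imaginary part
print((-1 * a).imag)  # 0.0: the imaginary part is -1.0*0.0 + 0.0*55.5 = -0.0 + 0.0 = +0.0
print(np.angle(-a), np.angle(-1 * a))  # -3.141592653589793 3.141592653589793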
For 1) print -a and -1*a, you'll see they are different.
-a
Out[4]: (-55.74947517067784-0j)
-1*a
Out[5]: (-55.74947517067784+0j) # note +0j not -0j
Without knowing the details of the numpy implementation, the sign of the imaginary part is probably used to compute the angle... which could explain why this degenerate case gives different results.
For 2) this looks like a bug or a documentation mistake to me then...
I’m slightly disappointed that np.inf // 2 evaluates to np.nan and not to np.inf, as is the case for normal division.
Is there a reason I’m missing why nan is a better choice than inf?
I'm going to be the person who just points at the C level implementation without any attempt to explain intent or justification:
*mod = fmod(vx, wx);
div = (vx - *mod) / wx;
It looks like, in order to calculate divmod for floats (which is what floor division goes through), it first calculates the modulus, and float('inf') % 2 only makes sense as NaN; so when it calculates vx - mod it gets NaN, and the NaN propagates the rest of the way.
So in short: since the implementation of floor division uses the modulus in the calculation, and that modulus is NaN, the result of the floor division also ends up NaN.
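You can reproduce that propagation from plain Python, without touching the C source; a hedged sketch of the same steps using the % operator on floats:
inf = float('inf')
mod = inf % 2          # nan: there is no meaningful remainder of inf divided by 2
div = (inf - mod) / 2  # inf - nan is nan, so nan propagates into the quotient
print(mod, div)        # nan nan
print(inf // 2)        # nan, matching the derivation above
print(inf / 2)         # inf: true division does not go through fmod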
Floor division is defined in relation to modulo, both forming one part of the divmod operation.
Binary arithmetic operations
The floor division and modulo operators are connected by the following
identity: x == (x//y)*y + (x%y). Floor division and modulo are also
connected with the built-in function divmod(): divmod(x, y) == (x//y, x%y).
This equivalence cannot hold for x = inf: the remainder inf % y is undefined, which makes inf // y ambiguous, so nan is at least as good a result as inf. For simplicity, CPython's float type computes // by calling the divmod implementation and keeping only the quotient part, which means // inherits the nan that divmod produces.
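A quick check of that identity breaking down at infinity (this only shows the observable Python behavior, not the C implementation):
import math
x, y = math.inf, 2
print(divmod(x, y))   # (nan, nan)
print(x // y, x % y)  # nan nan
# No float value of x % y can make x == (x//y)*y + (x%y) hold when x is inf,
# so nan is at least as defensible a result as inf.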
Infinity is not a number. For example, you can't even say that infinity - infinity is zero. So you're going to run into limitations like this because NumPy is a numerical math package. I suggest using a symbolic math package like SymPy which can handle many different expressions using infinity:
import sympy as sp
sp.floor(sp.oo/2)  # oo
sp.oo - 1          # oo
sp.oo + sp.oo      # oo
While studying Python's built-in float, I read the floating point tutorial doc and came away with some understanding:
A float's real (stored) value differs from its displayed value; for example, 0.1's real value is 0.1000000000000000055511151231257827021181583404541015625.
Any float in Python has a fixed value determined by IEEE-754.
math.fsum gives us the closest exactly representable value to the exact mathematical sum of the inputs.
But after doing a bunch of experiments, I still encounter some unsolved doubts.
Doubt1
The tutorial doc I mentioned in the first paragraph gives this example:
>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True
From the doc's instructions, I got the impression that math.fsum will give us a more accurate result when doing float summation.
But within range(20) I found a special case: sum([0.1] * 12) == 1.2 evaluates to True, while math.fsum([0.1] * 12) == 1.2 evaluates to False, which leaves me perplexed.
Why does this happen?
And what's the mechanism of sum when doing float summation?
Doubt2
I found that for some float computations, repeated addition has the same effect as the equivalent multiplication: for example, 0.1+0.1+0.1+0.1+0.1 equals 0.1*5. But in some cases they are not equivalent: adding up 0.1 twelve times is not equal to 0.1*12. This really confuses me, since a float is a fixed value determined by the IEEE-754 standard, and by ordinary math such an addition should equal the corresponding multiplication. The only explanation is that Python is not applying pure math here; something subtle happens in the arithmetic.
But what's the mechanism and details of this tricky stuff?
In [64]: z = 0
In [65]: 0.1*12 == 1.2
Out[65]: False
In [66]: for i in range(12):
...: z += 0.1
...:
In [67]: z == 1.2
Out[67]: True
In [71]: 0.1*5 == 0.5
Out[71]: True
In [72]: z = 0
In [73]: for i in range(5):
...: z += 0.1
...:
In [74]: z == 0.5
Out[74]: True
When .1 is converted to 64-bit binary IEEE-754 floating-point, the result is exactly 0.1000000000000000055511151231257827021181583404541015625. When you add this individually 12 times, various rounding errors occur during the additions, and the final sum is exactly 1.1999999999999999555910790149937383830547332763671875.
Coincidentally, when 1.2 is converted to floating-point, the result is also exactly 1.1999999999999999555910790149937383830547332763671875. This is a coincidence because some of the rounding errors in adding .1 rounded up and some rounded down, with the net result that 1.1999999999999999555910790149937383830547332763671875 was produced.
However, if .1 is converted to floating-point and then added 12 times using exact mathematics, the result is exactly 1.20000000000000006661338147750939242541790008544921875. Python’s math.fsum may produce this value internally, but it does not fit in 64-bit binary floating-point, so it is rounded to 1.20000000000000017763568394002504646778106689453125.
As you can see, the more accurate value 1.20000000000000017763568394002504646778106689453125 differs from the result of converting 1.2 directly to floating-point, 1.1999999999999999555910790149937383830547332763671875, so the comparison reports they are unequal.
In this answer, I step through several additions of .1 to examine the rounding errors in detail.
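You can watch the same numbers from Python itself. A small sketch using Decimal(float), which displays the exact binary value of a double, and Fraction, which forms the exact mathematical sum:
import math
from decimal import Decimal
from fractions import Fraction
s = 0.0
for _ in range(12):
    s += 0.1                      # each addition rounds to the nearest double
print(Decimal(s))                 # 1.1999999999999999555910790149937383830547332763671875
print(Decimal(1.2))               # the same double, so s == 1.2 is True
print(float(12 * Fraction(0.1)))  # exact sum rounded once: 1.2000000000000002
print(math.fsum([0.1] * 12))      # 1.2000000000000002, so fsum([0.1] * 12) == 1.2 is False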
floor(-1e-14 % 2)
Out[1]: 1.0
floor(-1e-16 % 2)
Out[2]: 2.0
I understand that -1e-16 may be too close to 0, but in no way should the result of floor after a % 2 operation be 2 (it should be 0 or 1)!
It is not a bug in floor. Check the result of %:
In [61]: -1e-16 % 2
Out[61]: 2.0
In [62]: -1e-14 % 2
Out[62]: 1.99999999999999
You may read What Every Computer Scientist Should Know About Floating-Point Arithmetic to know more on why % is behaving so.
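The reason the modulo itself comes out as exactly 2.0 is ordinary rounding: the mathematical result of -1e-16 % 2 is 2 - 1e-16, and that value is closer to the double 2.0 than to the next smaller double, so it rounds to 2.0 (whereas 2 - 1e-14 does not):
import sys
print(2.0 - 1e-16)             # 2.0: the exact result rounds up to the double 2.0
print(2.0 - 1e-14)             # 1.99999999999999
print(sys.float_info.epsilon)  # 2.220446049250313e-16, the spacing of doubles just below 2.0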
The decimal module provides support for decimal floating point arithmetic. It offers several advantages over the float datatype, so it is an option when you need precise math on decimal values.
sys.float_info
For detailed information about the float type one may use sys.float_info.
sys.float_info.dig shows the maximum number of decimal digits that can be faithfully represented in a float; for calculations involving values with more digits, you should not expect accurate results at that precision.
This is what I get:
In [217]: import sys
In [218]: sys.float_info.dig
Out[218]: 15
You're right that the result of -1e-16 % 2 should not mathematically be 2.0, but floats are weird and infamously less than precise. The specification of the % operator states:
While abs(x%y) < abs(y) is true mathematically, for floats it may not be true numerically due to roundoff. For example, and assuming a platform on which a Python float is an IEEE 754 double-precision number, in order that -1e-100 % 1e100 have the same sign as 1e100, the computed result is -1e-100 + 1e100, which is numerically exactly equal to 1e100. The function math.fmod() returns a result whose sign matches the sign of the first argument instead, and so returns -1e-100 in this case. Which approach is more appropriate depends on the application.
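Those two behaviors are easy to see side by side (a minimal illustration of the quoted paragraph):
import math
print(-1e-100 % 1e100)            # 1e+100: % takes the sign of the divisor; -1e-100 + 1e100 rounds to 1e100
print(math.fmod(-1e-100, 1e100))  # -1e-100: fmod takes the sign of the dividend
print(-1e-16 % 2)                 # 2.0, the same rounding effect as in the question
print(math.fmod(-1e-16, 2))       # -1e-16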
I'm pretty new to Python and am trying to write some code to solve a given quadratic equation. I'm having some trouble with rounding errors in floats, I think because I am dividing two numbers that are very large with a very small difference. (Also, I'm assuming all inputs have real solutions for now.) I've included two different versions of the quadratic formula to show my problem. It works fine for most inputs, but when I try a = .001, b = 1000, c = .001 I get two answers that differ significantly. Here is my code:
from math import sqrt
a = float(input("Enter a: "))
b = float(input("Enter b: "))
c = float(input("Enter c: "))
xp = (-b+sqrt(b**2-4*a*c))/(2*a)
xn = (-b-sqrt(b**2-4*a*c))/(2*a)
print("The solutions are: x = ",xn,", ",xp,sep = '')
xp = (2*c)/(-b-sqrt(b**2-4*a*c))
xn = (2*c)/(-b+sqrt(b**2-4*a*c))
print("The solutions are: x = ",xn,", ",xp,sep = '')
I'm no expert in the maths field, but I believe you should use numpy (a Python module for maths); due to the internal number representation on computers, your calculations will not match real math (floating point arithmetic).
http://docs.python.org/2/tutorial/floatingpoint.html
Check this, it is almost exactly what you want:
http://www.annigeri.in/2012/02/python-class-for-quadratic-equations.html
To get more precise results with floating point, be careful not to subtract nearly equal quantities. For the monic quadratic x^2 + a x + b = 0 you know that the roots x1 and x2 satisfy
b = x1 * x2
Compute the root with the larger absolute value directly, and get the other one from this relation, as in the sketch below.
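Here is a minimal sketch of that idea for the general form a*x**2 + b*x + c = 0 (stable_roots is a hypothetical helper name; it assumes a != 0 and a non-negative discriminant):
from math import sqrt, copysign
def stable_roots(a, b, c):
    d = sqrt(b*b - 4*a*c)
    # q has the same sign as b, so b and copysign(d, b) add rather than cancel
    q = -0.5 * (b + copysign(d, b))
    x1 = q / a   # the root with the larger absolute value
    x2 = c / q   # recovered from the product of roots: x1*x2 == c/a
    return x1, x2
print(stable_roots(0.001, 1000, 0.001))  # approximately (-999999.999999, -1.000000000001e-06)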
Solutions:
Numpy as suggested by user dhunter is usually the best solution for math in python. The numpy libraries are capable of doing quick and accurate math in a number of different fields.
Decimal data types were added in Python 2.4. If you do not want to download an external library and do not anticipate doing many long or complex calculations, the Decimal datatype may fit the bill.
Simply add:
from decimal import *
to the top of your code and then replace instances of float with Decimal (note the uppercase "D"), preferably passing the value as a string so it is not rounded to binary floating point first.
Ex: Decimal('1.1047262519') as opposed to float(1.1047262519)
Theory:
Float arithmetic is based on binary math and is therefore not always exactly what a user would expect. An excellent description of the float vs. Decimal types is located here.
The previously-mentioned numpy module is not particularly relevant to the rounding error mentioned in the question. On the other hand, the decimal module can be used in a brute-force manner to get accurate computations. The following snippet from an ipython interpreter session illustrates its use (with default 28-digit accuracy), and also shows that the corresponding floating-point calculation only has 5 decimal places of accuracy.
In [180]: from decimal import Decimal
In [181]: a=Decimal('0.001'); b=Decimal('1000'); c=Decimal('0.001')
In [182]: (b*b - 4*a*c).sqrt()
Out[182]: Decimal('999.9999999979999999999980000')
In [183]: b-(b*b - 4*a*c).sqrt()
Out[183]: Decimal('2.0000000000020000E-9')
In [184]: a = .001; b = 1000; c = .001
In [185]: math.sqrt(b*b - 4*a*c)
Out[185]: 999.999999998
In [186]: b-math.sqrt(b*b - 4*a*c)
Out[186]: 1.999978849198669e-09
In [187]: 2*a*c/b
Out[187]: 1.9999999999999997e-09
Taylor series for the square root offers an alternative method to use when 4ac is tiny compared to b**2. In this case, √(b*b-4*a*c) ≈ b - 4*a*c/(2*b), whence b - √(b*b-4*a*c) ≈ 2*a*c/b. As can be seen in the line [187] entries above, Taylor series computation gives a 12-digits-accurate result while using floating point instead of Decimal. Using another Taylor series term might add a couple more digits of accuracy.
There are special cases that you should deal with:
a == 0 means a linear equation and one root: x = -c/b
b == 0 means two roots of the form x1, x2 = ±sqrt(-c/a)
c == 0 means two roots, but one of them is zero: x*(ax+b) = 0
If the discriminant is negative, you have two complex conjugate roots.
I'd recommend calculating the discriminant this way:
discriminant = abs(b)*sqrt(1.0 - 4.0*a*c/(b*b))  # equivalent to sqrt(b*b - 4*a*c) when b != 0
I'd also recommend reading this:
https://math.stackexchange.com/questions/187242/quadratic-equation-error