Is there any difference between the infinities returned by the math module and cmath module?
Does the complex infinity have an imaginary component of 0?
Any difference?
No, there is no difference. According to the docs, both math.inf and cmath.inf are equivalent to float('inf'), or floating-point infinity.
If you want a truly complex infinity that has a real component of infinity and an imaginary component of 0, you have to build it yourself: complex(math.inf, 0)
There is, however, cmath.infj, if you want 0 as the real component and infinity as the imaginary component.
Constructing imaginary infinity
As others have pointed out, math.inf + 0j is a bit faster to construct than complex(math.inf, 0). We're talking on the order of nanoseconds, though.
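For reference, a quick sketch that ties these pieces together (plain Python, nothing beyond the standard library):

import math
import cmath

# math.inf and cmath.inf are the same plain float
assert math.inf == cmath.inf == float('inf')

# complex infinity with an infinite real part and zero imaginary part
z1 = complex(math.inf, 0)
z2 = math.inf + 0j           # equivalent, marginally faster to construct
assert z1 == z2

# cmath.infj is the mirror image: zero real part, infinite imaginary part
assert cmath.infj == complex(0, math.inf)
print(z1, cmath.infj)        # (inf+0j) infj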
I wish to calculate the natural logarithm of a value that is very close to 1, but not exactly one.
For example, np.log(1 + 1e-22) is 0 and not some non-zero value. However, np.log(1 + 1e-13) is not zero, and calculated to be 9.992007221625909e-14.
How can I understand this tradeoff of precision when using a NumPy function versus defining a NumPy array with dtype np.longdouble?
Floating-point precision information of NumPy (v1.22.2) on the system I am using:
>>> np.finfo(np.longdouble)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)
>>> 1 + np.finfo(np.longdouble).eps
1.0000000000000002
To complement the good solution of @yut23 using NumPy: if you need to deal with very small floats that do not fit in the native types defined by NumPy, or with numbers close to 1 at a precision of more than ~10 digits, then you can use the decimal package. It is slower than native floats, but it gives you arbitrary precision. The catch is that it does not support the natural logarithm (i.e. log) function directly, since that is based on the transcendental Euler number (i.e. e), which can hardly be computed with arbitrary precision (at least not when the precision is huge). Fortunately, you can compute the natural logarithm from the base-10 logarithm and a precomputed Euler number taken from existing number databases like this one (I guess 10000 digits should be enough for your needs ;) ). Here is an example:
import decimal
from decimal import Decimal
decimal.setcontext(decimal.Context(prec=100)) # 100 digits of precision
e = Decimal('2.71828182845904523536028747135266249775724709369995957496696762772407663035354759457138217852516643')
result = (Decimal("1")+Decimal("1e-13")).log10() / e.log10()
# result = 9.999999999999500000000000033333333333330833333333333533333333333316666666666668095238095237970238087E-14
The result is accurate to 99 digits (only the last digit is incorrect).
For practical usage, take a look at np.log1p(x), which computes log(1 + x) without the roundoff error that comes from 1 + x. From the docs:
For real-valued input, log1p is accurate also for x so small that 1 + x == 1 in floating-point accuracy.
Even the seemingly working example of log(1 + 1e-13) differs from the true value in the 3rd significant digit with 64-bit floats, and in the 6th with 128-bit floats (the true value is from WolframAlpha):
>>> (1 + 1e-13) - 1
9.992007221626409e-14
>>> np.log(1 + 1e-13)
9.992007221625909e-14
>>> np.log(1 + np.array(1e-13, dtype=np.float128))
9.999997791637127032e-14
>>> np.log1p(1e-13)
9.9999999999995e-14
>>> 9.999999999999500000000000033333333333330833333333333533333*10**-14
9.9999999999995e-14
The fundamental theorem of algebra entails the existence of n complex roots of the equation z^n = a, where a is a real number, n is a positive integer, and z is a complex number. Some of these roots will also be real in addition to complex (i.e. a+bi where b=0).
One example where there are multiple real roots is z^2 = 1, where we obtain z = ±sqrt(1) = ±1. The solution z = 1 is immediate. The solution z = -1 is obtained by z = sqrt(1) = sqrt(-1 * -1) = sqrt(-1) * sqrt(-1) = I * I = -1, where I is the imaginary unit.
In Python/NumPy (as well as many other programming languages and packages) only a single value is returned. Here are two examples for 5^{1/3}, which has 3 roots.
>>> 5 ** (1 / 3)
1.7099759466766968
>>> import numpy as np
>>> np.power(5, 1/3)
1.7099759466766968
It is not a problem for my use case that only one of the possible roots is returned, but it would be informative to know which root is systematically calculated in the contexts of Python and NumPy. Perhaps there is an (ISO) standard stating which root should be returned, or perhaps there is a commonly used algorithm that happens to return a specific root. I could imagine a rule such as "the maximum of the real-valued solutions", but I do not know.
Question: When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?
Since typically the identity xᵃ = exp(a⋅log(x)) is used to define the general power, you'll get the root corresponding to the chosen branch cut of the complex logarithm.
With regards to this, the numpy documentation says:
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
So for example, np.power(-1 +0j, 1/3) = 0.5 + 0.866j = np.exp(np.log(-1+0j)/3).
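To make this concrete, here is a small sketch (not part of the quoted documentation) that reproduces the principal root and enumerates the other branches by shifting the logarithm by multiples of 2πi:

import numpy as np

z = -1 + 0j

# The principal root comes from the principal branch of the complex log
print(np.power(z, 1/3))        # approx (0.5+0.866j), not -1
print(np.exp(np.log(z) / 3))   # same value, computed explicitly

# All three cube roots: shift the logarithm by 2*pi*i*k before dividing by 3
k = np.arange(3)
print(np.exp((np.log(z) + 2j * np.pi * k) / 3))   # [0.5+0.866j, -1+0j, 0.5-0.866j]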
I’m slightly disappointed that np.inf // 2 evaluates to np.nan and not to np.inf, as is the case for normal division.
Is there a reason I’m missing why nan is a better choice than inf?
I'm going to be the person who just points at the C level implementation without any attempt to explain intent or justification:
/* compute the remainder first, then derive the floor quotient from it */
*mod = fmod(vx, wx);
div = (vx - *mod) / wx;
It looks like, in order to calculate divmod for floats (which is what gets called when you do floor division), it first calculates the modulus, and float('inf') % 2 only makes sense to be NaN. So when it calculates vx - *mod it ends up with NaN, and the NaN propagates the rest of the way.
So in short: since the implementation of floor division uses the modulus in its calculation, and that modulus is NaN, the result of the floor division also ends up NaN.
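A quick check at the Python level shows the same propagation for plain floats:

import math

x = math.inf
print(x % 2)          # nan: the remainder of infinity has no meaningful value
print(x // 2)         # nan: (inf - nan) / 2 propagates the nan
print(divmod(x, 2))   # (nan, nan)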
Floor division is defined in relation to modulo, both forming one part of the divmod operation.
Binary arithmetic operations
The floor division and modulo operators are connected by the following identity: x == (x//y)*y + (x%y). Floor division and modulo are also connected with the built-in function divmod(): divmod(x, y) == (x//y, x%y).
This equivalence cannot hold for x = inf: the remainder inf % y is undefined, which makes inf // y ambiguous. This means nan is at least as good a result as inf. For simplicity, CPython actually only implements divmod and derives both // and % by dropping a part of the result, so // inherits the nan from divmod.
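As a rough illustration of why no better answer exists, the identity holds exactly for finite values but cannot be satisfied once x is infinite:

x, y = 7.5, 2.0
assert x == (x // y) * y + (x % y)   # identity from the docs holds for finite x

x = float('inf')
print(x // y, x % y)                 # nan nan: no quotient/remainder pair satisfies it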
Infinity is not a number. For example, you can't even say that infinity - infinity is zero. So you're going to run into limitations like this because NumPy is a numerical math package. I suggest using a symbolic math package like SymPy which can handle many different expressions using infinity:
import sympy as sp
sp.floor(sp.oo/2)   # oo
sp.oo - 1           # oo
sp.oo + sp.oo       # oo
I have instances in my code where two complex numbers (using the cmath module) that should be exactly the same do not cancel out, because binary floating-point precision causes the values to deviate from each other by a small difference at some nth decimal place.
If they were ordinary floating-point numbers, it would be a simple matter of rounding them to a decimal place where the difference no longer exists.
How could I do the same for the real and imaginary parts of complex numbers represented using the cmath module?
For example, the following two complex numbers should be exactly the same. How could I implement some code to ensure that the real and imaginary components of a complex number are rounded to the ith decimal place of my choice?
(0.6538461538461539-0.2692307692307693j)
(0.6538461538461539-0.26923076923076916j)
One possible solution, recommended by jonrsharpe:
if abs(a - b) < threshold:
    a = b
"round" does not work directly on complex numbers, but it does work separately on the real resp. imaginary part of the number, e.g. rounding on 4 digits:
x = 0.6538461538461539-0.2692307692307693j
x_real = round(x.real, 4)
x_imag = round(x.imag, 4)
x = x_real + x_imag * 1j
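If this is needed in several places, the same idea can be wrapped in a small helper (round_complex is just an illustrative name, not a built-in):

def round_complex(z, ndigits=4):
    # round the real and imaginary parts independently
    return complex(round(z.real, ndigits), round(z.imag, ndigits))

a = 0.6538461538461539 - 0.2692307692307693j
b = 0.6538461538461539 - 0.26923076923076916j
print(round_complex(a) == round_complex(b))   # True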
NumPy is the de facto standard for numerical computing in Python. Try np.round():
import numpy as np
x = 0.6538461538461539-0.2692307692307693j
print(np.round(x,decimals=3))
print(np.round(x,decimals=3)==np.round(x,decimals=3)) # True
print(np.round(x,decimals=3)==np.round(x,decimals=4)) # False
The following code causes the print statements to be executed:
import numpy as np
import math
foo = np.array([1/math.sqrt(2), 1/math.sqrt(2)], dtype=np.complex_)
total = complex(0, 0)
one = complex(1, 0)
for f in foo:
    total = total + pow(np.abs(f), 2)

if(total != one):
    print str(total) + " vs " + str(one)
    print "NOT EQUAL"
However, with my input of [1/math.sqrt(2), 1/math.sqrt(2)] the total prints as one, yet the NOT EQUAL branch is still taken:
(1+0j) vs (1+0j) NOT EQUAL
Is it something to do with mixing NumPy with Python's complex type?
When using floating-point numbers it is important to keep in mind that working with them is never exact, so computations are always subject to rounding errors. This is caused by the design of floating-point arithmetic, which is currently the most practical way to do fast numerical computation on computers with limited resources, and it means you have practically no alternative. You can't compute exactly using floats: your numbers have to be cut off somewhere to fit in a reasonable amount of memory (in most cases at most 64 bits), and this cut-off is done by rounding (see below for an example).
To deal correctly with these shortcomings you should never compare floats for equality, but for closeness. NumPy provides two functions for that: np.isclose for comparing single values (or an element-wise comparison for arrays) and np.allclose for whole arrays. The latter is equivalent to np.all(np.isclose(a, b)), so you get a single boolean for an array.
>>> np.isclose(np.float32('1.000001'), np.float32('0.999999'))
True
But sometimes the rounding works in our favour and matches our analytical expectation, see for example:
>>> np.float(1) == np.square(np.sqrt(1))
True
After squaring, the value is rounded again to fit into the available memory, and in this case it rounds to exactly what we would expect.
Both functions have built-in absolute and relative tolerances (you can also pass them as parameters) that are used to compare the two values. By default they are rtol=1e-05 and atol=1e-08.
Also, don't mix different packages and their types. If you use NumPy, use NumPy types and NumPy functions. This will also reduce your rounding errors.
Btw: rounding errors have an even bigger impact when working with numbers whose exponents differ widely.
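Applied to the code from the question (a minimal sketch, keeping everything in NumPy types), the comparison then behaves as expected:

import numpy as np

foo = np.array([1/np.sqrt(2), 1/np.sqrt(2)], dtype=np.complex128)
total = np.sum(np.abs(foo)**2)
print(total == 1.0)             # may be False because of accumulated rounding
print(np.isclose(total, 1.0))   # True: equal within the default rtol/atol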
I guess the same considerations as for real numbers apply: never assume they are exactly equal, but only close enough:
eps = 0.000001
if abs(a - b) < eps:
    print "Equal"