A function returns a list containing float values. When I plot this list, I see that some of the values are equal to -1.#IND. I also checked the type of those -1.#IND values, and they are of float type too.
But how should I interpret these -1.#IND values? What do they represent or stand for?
-1.#IND means "indefinite": the result of a floating point operation that has no well-defined value. On other platforms you'd get NaN instead, meaning "not a number"; -1.#IND is specific to Windows. On Python 2.5 on Linux I get:
>>> 1e300 * 1e300 * 0
-nan
You'll only find this on Python versions 2.5 and earlier, on Windows platforms. The float() code was improved in Python 2.6 and consistently uses float('nan') for such results, mostly because there was no way to turn 1.#INF and -1.#IND back into an actual float() instance again:
>>> inf = 1e300 * 1e300
>>> repr(inf)
'1.#INF'
>>> float(_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): 1.#INF
>>> nan = inf * 0
>>> repr(nan)
'-1.#IND'
>>> float(_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): -1.#IND
On versions 2.6 and newer this has all been cleaned up and made consistent:
>>> 1e300 * 1e300 * 0
nan
>>> 1e300 * 1e300
inf
>>> 1e300 * 1e300 * -1
-inf
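If you just want to filter these values out of your list before plotting, test for NaN explicitly. A minimal sketch (the values list is a stand-in for your function's result; math.isnan() needs Python 2.6+, while the v != v trick works on older versions too, because NaN never compares equal to anything, including itself):
>>> import math
>>> values = [1.5, float('nan'), 2.0]
>>> [v for v in values if not math.isnan(v)]
[1.5, 2.0]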
In the following code, both coeff1 and coeff2 are Decimal objects. When I check their type using type(coeff1), I get <class 'decimal.Decimal'>, but when I made a test script and checked Decimal objects there, I get decimal.Decimal, without the word class.
coeff1 = system[i].normal_vector.coordinates[i]
coeff2 = system[m].normal_vector.coordinates[i]
x = coeff2/coeff1
print(type(x))
system.xrow_add_to_row(x,i,m)
Another issue is that when I change the first input to the function xrow_add_to_row to negative x:
system.xrow_add_to_row(-x,i,m)
I get an invalid operation error at a line above the changed code:
<ipython-input-11-ce84b250bafa> in compute_triangular_form(self)
93 coeff1 = system[i].normal_vector.coordinates[i]
94 coeff2 = system[m].normal_vector.coordinates[i]
---> 95 x = coeff2/coeff1
96 print(type(coeff1))
97 system.xrow_add_to_row(-x,i,m)
InvalidOperation: [<class 'decimal.DivisionUndefined'>]
But then again, in a test script I use negative numbers with Decimal objects and it works fine. Any idea what the problem might be? Thanks.
decimal.DivisionUndefined is raised when you attempt to divide zero by zero. It's a bit confusing, as you get a different exception when only the divisor is zero (decimal.DivisionByZero):
>>> from decimal import Decimal as D
>>> D(0) / D(0)
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
D(0) / D(0)
decimal.InvalidOperation: [<class 'decimal.DivisionUndefined'>]
>>> D(1) / D(0)
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
D(1) / D(0)
decimal.DivisionByZero: [<class 'decimal.DivisionByZero'>]
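Given the traceback, the likely cause is that both coeff2 and coeff1 are zero for some row, so the pivot itself is zero. A minimal sketch of a guard, using a hypothetical safe_ratio helper around the division (how you then skip or swap pivot rows is up to your algorithm):
def safe_ratio(coeff2, coeff1):
    # A zero pivot makes the division signal DivisionUndefined (0/0)
    # or DivisionByZero (nonzero/0), so test before dividing; the
    # caller can then choose a different pivot row instead.
    if coeff1 == 0:
        return None
    return coeff2 / coeff1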
I have a sympy poly that looks like:
Poly(0.764635937801645*I**4 + 7.14650839258644*I**3 - 0.667712176660315*I**2 - 2.81663805543677*I - 0.623299856233272, I, domain='RR')
I'm converting to mpc using the following code:
a = val.subs('I',1.0j)
b = sy.re(a)
c = sy.im(a)
d = mpmath.mpc(b,c)
Two questions.
Assuming my mpc and sympy types have equal precision (of e.g. 100 dps), is there a precision loss using this conversion from a to d?
Is there a better way to convert?
Aside: sympy seems to treat I just like a symbol here. How do I get sympy to simplify this polynomial?
Edit: I've also noticed that the following works in place of a above:
a = val.args[0]
Strings and expressions
The root cause of the issue is visible in val.subs('I', 1.0j): you appear to be passing strings as arguments to SymPy functions. There are some valid uses for this (such as creating high-precision floats), but where symbols are concerned, using strings is a recipe for confusion. The string 'I' gets implicitly converted to the SymPy expression Symbol('I'), which is different from the SymPy expression I. So the answer to
How do I get sympy to simplify this polynomial?
is to revisit the process of creating that polynomial and fix it there. If you really need to create it from a string, then use the locals parameter:
>>> S('3.3*I**2 + 2*I', locals={'I': I})
-3.3 + 2*I
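You can see the distinction between the symbol and the imaginary unit directly (a quick check):
>>> from sympy import Symbol, I
>>> Symbol('I') == I
False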
Polynomials and expressions
If the Poly structure is not needed, use Poly's as_expr() method to get a plain expression from it. For example (a small sketch with a polynomial in x):
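>>> from sympy import Poly, symbols
>>> x = symbols('x')
>>> Poly(x**2 + 2*x + 1, x).as_expr()
x**2 + 2*x + 1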
Conversion to mpmath and precision loss
is there a precision loss using this conversion from a to d?
Yes, splitting into real and imaginary parts and then recombining can lead to precision loss. Pass a SymPy object directly to mpc if you know it's a complex number, or to mpmathify if you want mpmath to decide what type it should have. An example:
>>> val = S('1.111111111111111111111111111111111111111111111111')*I**3 - 2
>>> val
-2 - 1.111111111111111111111111111111111111111111111111*I
>>> import mpmath
>>> mpmath.mp.dps = 40
>>> mpmath.mpc(val)
mpc(real='-2.0', imag='-1.111111111111111111111111111111111111111111')
>>> mpmath.mpmathify(val)
mpc(real='-2.0', imag='-1.111111111111111111111111111111111111111111')
>>> mpmath.mpc(re(val), im(val))
mpc(real='-2.0', imag='-1.111111111111111111111111111111111111111114')
Observations:
When I is the actual imaginary unit, I**3 evaluates to -I; you don't have to do anything for that to happen.
A string representation of a high-precision decimal is used to create such a float in SymPy. Here S stands for sympify. One can also be more direct and use Float('1.1111111111111111111111111').
Direct conversion of a SymPy complex number to an mpmath complex number is preferable to splitting into real and imaginary parts and recombining.
Conclusion
Most of the above is just talking around an XY problem. Your expression with I was not what you thought it was, so you tried to do strange things that were not needed, and my answer is mostly a waste of time.
I'm adding my own answer here, as FTP's answer, although relevant and very helpful, did not (directly) resolve my issue (which, to be honest, wasn't that clear from the question). When I ran the code in his example I got the following:
>>> from sympy import *
>>> import mpmath
>>> val = S('1.111111111111111111111111111111111111111111111111')*I**3 - 2
>>> val
-2 - 1.111111111111111111111111111111111111111111111111*I
>>> mpmath.mp.dps = 40
>>> mpmath.mpc(val)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\mpmath\ctx_mp_python.py", line 373, in __new__
real = cls.context.mpf(real)
File "C:\Python27\lib\site-packages\mpmath\ctx_mp_python.py", line 77, in __new__
v._mpf_ = mpf_pos(cls.mpf_convert_arg(val, prec, rounding), prec, rounding)
File "C:\Python27\lib\site-packages\mpmath\ctx_mp_python.py", line 96, in mpf_convert_arg
raise TypeError("cannot create mpf from " + repr(x))
TypeError: cannot create mpf from -2 - 1.111111111111111111111111111111111111111111111111*I
>>> mpmath.mpmathify(val)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\mpmath\ctx_mp_python.py", line 662, in convert
return ctx._convert_fallback(x, strings)
File "C:\Python27\lib\site-packages\mpmath\ctx_mp.py", line 614, in _convert_fallback
raise TypeError("cannot create mpf from " + repr(x))
TypeError: cannot create mpf from -2 - 1.111111111111111111111111111111111111111111111111*I
>>> mpmath.mpc(re(val), im(val))
mpc(real='-2.0', imag='-1.111111111111111111111111111111111111111114')
Updating my sympy (1.0->1.1.1) and mpmath (0.19->1.0.0) fixed the exceptions. I did not test which of these upgrades actually resolved the issue.
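If you hit the same TypeError, checking which versions are actually installed is a reasonable first step (the output below assumes the upgraded versions mentioned above):
>>> import sympy, mpmath
>>> sympy.__version__
'1.1.1'
>>> mpmath.__version__
'1.0.0'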
I want to normalize floating-point numbers to nn.nn strings, and to do some special handling if the number is out of range.
try:
    norm = '{:5.2f}'.format(f)
except ValueError:
    norm = 'BadData'  # actually a bit more complex than this
except it doesn't work: .format silently overflows the 5-character width. Obviously I could length-check norm and raise my own ValueError, but have I missed any way to force format (or the older % formatting) to raise an exception on field-width overflow?
You cannot achieve this with format() alone. You have to create a custom formatter that raises the exception. For example:
def format_float(num, max_int=5, decimal=2):
    # Count the digits before the decimal point, ignoring any sign
    if len(str(num).split('.')[0].lstrip('-')) > max_int:
        raise ValueError('Integer part of float can have maximum {} digits'.format(max_int))
    return "{:.{}f}".format(num, decimal)
Sample run:
>>> format_float(123.456)
'123.46'
>>> format_float(123.4)
'123.40'
>>> format_float(123789.456)  # Error: integer part has more than 5 digits
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in format_float
ValueError: Integer part of float can have maximum 5 digits
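Alternatively, if the total field width (rather than the digit count) is what matters, you can keep the original '{:5.2f}' spec and length-check the result, as the question itself anticipates. A minimal sketch with a hypothetical strict_format helper:
def strict_format(f, width=5, precision=2):
    norm = '{:{w}.{p}f}'.format(f, w=width, p=precision)
    # format() pads the result to at least `width` characters but never
    # truncates, so a longer string means the value overflowed the field
    if len(norm) > width:
        raise ValueError('field width overflow: {!r}'.format(norm))
    return norm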
Given (Python 3):
>>> float('inf') == Decimal('inf')
True
>>> float('-inf') <= float('nan') <= float('inf')
False
>>> float('-inf') <= Decimal(1) <= float('inf')
True
Why are the following invalid? I have read the Special values section of the decimal documentation.
Invalid
>>> Decimal('-inf') <= Decimal('nan') <= Decimal('inf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
>>> Decimal('-inf') <= float('nan') <= Decimal('inf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
>>> float('-inf') <= Decimal('nan') <= float('inf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
From the decimal.py source code:
# Note: The Decimal standard doesn't cover rich comparisons for
# Decimals. In particular, the specification is silent on the
# subject of what should happen for a comparison involving a NaN.
# We take the following approach:
#
# == comparisons involving a quiet NaN always return False
# != comparisons involving a quiet NaN always return True
# == or != comparisons involving a signaling NaN signal
# InvalidOperation, and return False or True as above if the
# InvalidOperation is not trapped.
# <, >, <= and >= comparisons involving a (quiet or signaling)
# NaN signal InvalidOperation, and return False if the
# InvalidOperation is not trapped.
#
# This behavior is designed to conform as closely as possible to
# that specified by IEEE 754.
And from the Special values section you say you read:
An attempt to compare two Decimals using any of the <, <=, > or >= operators will raise the InvalidOperation signal if either operand is a NaN, and return False if this signal is not trapped.
Note that IEEE 754 uses NaN as a floating point exception value; i.e. you did something that cannot be computed and you got an exception value instead. It is a signal value and should be treated as an error, not something to compare other floats against, which is why in the IEEE 754 standard it compares unequal to everything, including itself.
Moreover, the Special values section mentions:
Note that the General Decimal Arithmetic specification does not specify the behavior of direct comparisons; these rules for comparisons involving a NaN were taken from the IEEE 854 standard (see Table 3 in section 5.7).
and looking at IEEE 854 section 5.7 we find:
In addition to the true/false response, an invalid operation exception (see 7.1) shall be signaled when, as indicated in the last column of Table 3, “unordered” operands are compared using one of the predicates involving “<” or “>” but not “?”. (Here the symbol “?” signifies “unordered.”)
with comparisons with NaN classified as unordered.
By default InvalidOperation is trapped, so a Python exception is raised when using <= and >= against Decimal('NaN'). This is a logical extension: Python has actual exceptions, so if you compare against the NaN exception value, you can expect an exception to be raised.
You could disable trapping by using decimal.localcontext():
>>> from decimal import localcontext, Decimal, InvalidOperation
>>> with localcontext() as ctx:
... ctx.traps[InvalidOperation] = 0
... Decimal('-inf') <= Decimal('nan') <= Decimal('inf')
...
False
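As an aside, if you only need to detect a NaN without triggering the signal, Decimal offers an explicit test:
>>> from decimal import Decimal
>>> Decimal('nan').is_nan()
True
>>> Decimal('1.5').is_nan()
False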
I just tried
>>> 2.17 * 10**27
2.17e+27
>>> str(2.17 * 10**27)
'2.17e+27'
>>> "%i" % 2.17 * 10**27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'long' into an index-sized integer
>>> "%f" % 2.17 * 10**27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'long' into an index-sized integer
>>> "%l" % 2.17 * 10**27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: incomplete format
Now I've run out of ideas. I want to get
2170000000000000000000000000
How can I print such big numbers? (I don't care if it's a Python 2.7+ solution or a Python 3.X solution)
You are getting your operator precedence wrong: % and * have equal precedence and group left to right, so you are formatting 2.17 first, then multiplying the resulting string by a long integer:
>>> r = "%f" % 2.17
>>> r
'2.170000'
>>> r * 10 ** 27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'long' into an index-sized integer
Put parentheses around the multiplication:
>>> "%f" % (2.17 * 10**27)
'2169999999999999971109634048.000000'
This is one of the drawbacks of overloading the modulus operator for string formatting; the newer Format String Syntax used by the str.format() method, and the Format Specification Mini-Language it employs (also usable with the format() function), neatly sidestep the issue. I'd use format() for this case:
>>> format(2.17 * 10**27, 'f')
'2169999999999999971109634048.000000'
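Note that the trailing ...971109634048 digits are inherent to binary floats: 2.17 * 10**27 cannot be represented exactly. If you need the exact digits 2170000000000000000000000000, keep the arithmetic in integers (a sketch of the idea):
>>> str(217 * 10**25)
'2170000000000000000000000000'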