How can I future-proof the `round` function in Python 2?

When round is imported from the future library, it does not behave the same as the Python 3 round function. Specifically, it does not support rounding to a negative number of digits.
In Python3:
>>> round(4781, -2)
4800
In Python2:
>>> from builtins import round
>>> round(4781, -2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/future/builtins/newround.py", line 33, in newround
raise NotImplementedError('negative ndigits not supported yet')
NotImplementedError: negative ndigits not supported yet
Possible solutions include error handling for negative rounding, writing my own round function, etc. How should this be handled? (Implicitly, I'm asking for best practices, most Pythonic, accepted by community, etc.)

I was going to suggest your custom-function idea so you can ensure it always does what you want, but this appears to be a special (weird) case: if I don't use future.builtins.round() I get
Python 3.6:
>>> round(4781, -2)
4800
and Python 2.7:
>>> round(4781, -2)
4800.0
It appears that just the future.builtins implementation is somehow broken; in this case, you should avoid using the future.builtins.round() function. Note that the Python 3 round() returns an integer while the Python 2 round() returns a float, but that seems to be the only difference between the two stock implementations (for simple rounding operations such as the examples given).
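One pragmatic middle ground is a small compatibility wrapper that handles negative ndigits itself and defers to the stock round() otherwise. This is only a sketch (round_compat is a made-up name; it assumes integer inputs for the negative-digits case and returns an int there, as Python 3 does):

```python
def round_compat(number, ndigits=None):
    """Round like Python 3's round(), including negative ndigits.

    Sketch only: for negative ndigits the value is scaled down,
    rounded with the host interpreter's round(), and scaled back up,
    so tie-breaking follows whichever Python is running.
    """
    if ndigits is not None and ndigits < 0:
        factor = 10 ** -ndigits
        return int(round(float(number) / factor)) * factor
    if ndigits is None:
        return round(number)
    return round(number, ndigits)

print(round_compat(4781, -2))  # 4800
```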

Related

How to use the self-documenting equals (debugging) specifier with str.format()?

Python 3.8 introduced the = specifier in f-strings (see this issue and pull request).
It lets you quickly show both the name and the value of a variable:
from math import pi as π
f'{π=}'
# 'π=3.141592653589793'
I would like to use this feature on a pre-defined string with str.format():
'{π=}'.format(π=π)
However, it raises an exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'π='
Is there a way to make it work (e.g. with a special dunder method)?
Why might it be useful?
one could have a programmatic template for multiple values of the same variable (in a loop)
in contrast, f-strings have to be hardcoded; think about internationalization
one could reference constants defined in a module in its docstring (module.__doc__.format(**vars(module)));
workaround: define an f-string variable at the end of the module and overwrite module.__doc__ at runtime.
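There is no dunder hook for this, but str.format's machinery is extensible: string.Formatter lets you intercept field lookup. A sketch (SelfDocFormatter is a hypothetical name) that treats a trailing '=' in the field name as a request for "name=repr(value)", matching the f-string default of repr:

```python
import string
from math import pi as π

class SelfDocFormatter(string.Formatter):
    """Sketch: emulate the f-string '=' specifier in pre-defined templates."""

    def get_field(self, field_name, args, kwargs):
        if field_name.endswith('='):
            name = field_name[:-1]
            # Look up the real field, then render it as "name=repr(value)"
            obj, used_key = super().get_field(name, args, kwargs)
            return '{}={!r}'.format(name, obj), used_key
        return super().get_field(field_name, args, kwargs)

print(SelfDocFormatter().format('{π=}', π=π))  # π=3.141592653589793
```

Fields without the trailing '=' fall through to the normal lookup, so the same template string can mix both styles.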

Difference between passing '1/2' and str(1/2) to decimal.Decimal in python

I am learning about the decimal type in Python and I ran into this doubt.
Passing str(1/2) to decimal.Decimal() returns Decimal('0.5')
>>>import decimal
>>>decimal.Decimal(str(1/2))
>>>Decimal('0.5')
But when I pass '1/2' as argument it returns error:
>>>import decimal
>>> decimal.Decimal('1/2')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.ConversionSyntax'>]
Can anyone please explain the reason behind this?
Thanks in advance.
str(1/2) is first evaluated to str(0.5) and then to '0.5'. In your second example, passing the string '1/2' raises an error because the Decimal constructor does not support evaluating expressions.
The division 1/2 is evaluated to the float 0.5 at run time, and str() turns it into '0.5' long before it is passed to the Decimal() constructor (which expects a number or a string representation of one). Decimal() doesn't try to evaluate a string that is passed to it.
The point is, when you pass 1/2 to str(), it gets calculated. So str(1/2) makes absolutely no difference from str(0.5).
So the question becomes decimal.Decimal('0.5') versus decimal.Decimal('1/2'). When initializing a Decimal instance from a string, the string must be a valid numeric literal. The string 1/2 is not: float('1/2') likewise raises ValueError.
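If you do want to accept '1/2'-style input, you have to split and divide explicitly. A sketch (decimal_ratio is a made-up helper, assuming at most one '/' in the string):

```python
from decimal import Decimal

def decimal_ratio(text):
    """Sketch: parse 'a/b' by dividing two Decimals.

    Decimal() itself never evaluates expressions, so the
    division is done explicitly here.
    """
    numerator, sep, denominator = text.partition('/')
    if sep:
        return Decimal(numerator) / Decimal(denominator)
    return Decimal(text)

print(decimal_ratio('1/2'))  # 0.5
```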

When does python raise a FloatingPointError?

Python documentation says that FloatingPointError is raised when a float calculation fails. But what is exactly meant here by "a float calculation"?
I tried adding, multiplying and dividing floats but never managed to raise this specific error. Instead, I got a TypeError:
10/'a'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for /: 'int' and 'str'
Can someone help me understand when a FloatingPointError is raised in Python?
It is part of the fpectl module. FloatingPointError won't be raised unless you explicitly turn it on with fpectl.turnon_sigfpe().
However mind the note:
The fpectl module is not built by default, and its usage is discouraged and may be dangerous except in the hands of experts. See also the section fpectl-limitations on limitations for more details.
Update: The fpectl module has been removed as of Python 3.7.
Even with FloatingPointErrors turned on, 10/'a' will never raise one. It will always raise a TypeError. A FloatingPointError will only be raised for operations that reach the point of actually performing floating-point math, like 1.0/0.0. 10/'a' doesn't get that far.
You can also trigger a FloatingPointError within numpy, by setting the appropriate numpy.seterr (or numpy.errstate context manager) flag. For an example taken from the documentation:
>>> np.sqrt(-1)
nan
>>> with np.errstate(invalid='raise'):
... np.sqrt(-1)
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
FloatingPointError: invalid value encountered in sqrt
Interestingly, it also raises FloatingPointError when all operands are integers:
>>> old_settings = np.seterr(all='warn', over='raise')
>>> np.int16(32000) * np.int16(3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FloatingPointError: overflow encountered in short_scalars
The documentation notes the conditions under which the FloatingPointError will be raised:
The floating-point exceptions are defined in the IEEE 754 standard [1]:
Division by zero: infinite result obtained from finite numbers.
Overflow: result too large to be expressed.
Underflow: result so close to zero that some precision was lost.
Invalid operation: result is not an expressible number, typically indicates that a NaN was produced.
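Each of those IEEE exception classes can be promoted from a warning to a raised FloatingPointError independently; a minimal sketch using the divide class:

```python
import numpy as np

# By default, dividing by zero only warns and yields inf
with np.errstate(divide='ignore'):
    print(np.float64(1.0) / 0.0)  # inf

# Promote 'divide' to an exception: the same operation now raises
with np.errstate(divide='raise'):
    try:
        np.float64(1.0) / 0.0
    except FloatingPointError as exc:
        print('raised:', exc)
```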

().is_integer() not working

What's wrong with this code:
n = 10
((n/3)).is_integer()
I do not understand why I cannot set n = any number and check if it is an integer or not.
Thanks for your help!
Python 2.7.4
error:
Traceback (most recent call last):
File "/home/userh/Arbeitsfläche/übung.py", line 2, in <module>
print ((n/3)).is_integer()
AttributeError: 'int' object has no attribute 'is_integer'
The reason you get this error is because you divide the integer 10 by 3 using integer division, getting the integral number 3 in the form of an int instance as a result. You then try to call the method is_integer() on that result but that method is in the float class and not in the int class, just as the error message says.
A quick fix would be to change your code and divide by 3.0 instead of 3 which would result in floating point division and give you a float instance on which you can call the is_integer() method like you are trying to. Do this:
n = 10
((n/3.0)).is_integer()
You are using Python 2.7. Unless you use from __future__ import division, dividing two integers will return an integer. is_integer exists only on float, hence your error.
the other answers say this but aren't very clear (imho).
in python 2, the / sign means "integer division" when the arguments are integers. that gives you just the integer part of the result:
>>> 10/3
3
which means that in (10/3).is_integer() you are calling is_integer() on 3, which is an integer. and that doesn't work:
>>> (3.0).is_integer()
True
>>> (3).is_integer()
AttributeError: 'int' object has no attribute 'is_integer'
what you probably want is to change one of the numbers to a float:
>>> (10/3.0).is_integer()
False
this is fixed in python 3, by the way (which is the future, and a nicer language in many small ways).
If your value is a string, you can use isdigit, a built-in method provided by Python itself.
You can refer to the documentation here: https://docs.python.org/2/library/stdtypes.html#str.isdigit
if token.isdigit():
return int(token)
...
When I wrote this answer there was no information about the language version.
But in Python 2 you can use the following to check whether a value is an integer:
isinstance( <var>, ( int, long ) )
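If the underlying goal is just to check whether the division comes out even, the modulo operator sidesteps the int/float question entirely and behaves the same in Python 2 and 3; a sketch:

```python
def is_divisible(n, d):
    # Pure integer arithmetic: no float conversion, no is_integer() needed
    return n % d == 0

print(is_divisible(10, 3))  # False
print(is_divisible(10, 5))  # True
```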

Changing floating point behavior in Python to Numpy style

Is there a way to make Python floating point numbers follow numpy's rules regarding +/- Inf and NaN? For instance, making 1.0/0.0 = Inf.
>>> from numpy import *
>>> ones(1)/0
array([ Inf])
>>> 1.0/0.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: float division
Numpy's divide function gives divide(1.0, 0.0) = Inf; however, it is not clear whether it can be enabled globally, similar to from __future__ import division.
You should have a look at how Sage does it. IIRC they wrap the Python REPL in their own preprocessor.
I tried to do something similar, and I never figured out how to do it nicely. But, I can tell you a few things I tried, that didn't work:
Setting float = numpy.float -- Python still uses the old float
Trying to replace float's division with a user-defined function -- "TypeError: can't set attributes of built-in/extension type 'float'". Also, Python doesn't like you mucking with the __dict__ of built-in types.
I decided to go in and change the actual cpython source code to have it do what I wanted, which is obviously not practical, but it worked.
I think the reason why something like this is not possible is that float/int/list are implemented in C in the background, and their behavior cannot be changed cleanly from inside the language.
You could wrap all your floats in numpy.float64, which is the numpy float type.
from numpy import float64

a = float64(1.)
a / 0  # inf
In fact, you only need to wrap the float on one side of each arithmetic operation.
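That wrapping trick can be sketched as follows; because NumPy scalars also implement the reflected operators, it works no matter which operand is the float64 (shown with warnings suppressed, since by default NumPy emits a RuntimeWarning rather than raising):

```python
import numpy as np

# Suppress the RuntimeWarning that NumPy emits on division by zero
with np.errstate(divide='ignore', invalid='ignore'):
    print(np.float64(1.0) / 0.0)  # inf  (float64 on the left)
    print(1.0 / np.float64(0.0))  # inf  (float64 on the right, via __rtruediv__)
    print(np.float64(0.0) / 0.0)  # nan
```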
