Using locale format without default rounding off - python

I am trying to convert 123456789.123456789 into 123,456,789.123456789.
Say
In:
f=123456789.123456789
"{:0,f}".format(f)
Out:
'123,456,789.123457'
How do I use format without it automatically rounding off at the millionths place?

Try something like:
>>> '{:,.8f}'.format(f)
'123,456,789.12345679'
This rounds the number to 8 decimal places.
Note that for this specific case the float itself already holds only '123456789.12345679' (that is what str(f) gives), so some rounding is inevitable unless you use Decimal.
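For example, a minimal sketch using Decimal, constructed from a string so no digits are lost along the way:
>>> from decimal import Decimal
>>> f = Decimal('123456789.123456789')
>>> '{:,}'.format(f)
'123,456,789.123456789'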

Related

Cast float to int in Python results in wrong answer [duplicate]

This question already has answers here:
Is floating point math broken?
I have an algorithm that is calculating:
result = int(14949283383840498/5262*27115)
The correct result should be 77033412951888085, but Python3.8 gives me 77033412951888080
I also have tried the following:
>>> result = 77033412951888085
>>> print(result)
77033412951888085
>>> print(int(result))
77033412951888085
>>> print(float(result))
7.703341295188808e+16
>>> print(int(float(result)))
77033412951888080
It seems the problem occurs when I cast the float to int. What am I missing?
PS: I have found that using result = 14949283383840498//5262*27115 I get the right answer!
Casting is not the issue. Floating-point arithmetic has limitations with respect to precision. See https://docs.python.org/3/tutorial/floatingpoint.html
You need to either use integer division or the decimal module, which defaults to 28 significant digits of precision.
Using integer division
result = 14949283383840498 // 5262 * 27115
print(result)
Output:
77033412951888085
Using decimal module
from decimal import Decimal
result = Decimal(14949283383840498) / 5262 * 27115
print(result)
Output:
77033412951888085
It is a precision limitation:
result = 14949283383840498/5262*27115
result
7.703341295188808e+16
In this case, result is a float.
You can see that the precision is about 15 significant digits.
Converting that to int, you see that the last non-zero digit is 8, which matches what the float shows when printed.
Try the following:
import sys
print(sys.float_info.dig)
15
dig is the maximum number of decimal digits that can be faithfully represented in a float.
A very good explanation regarding this issue is available here.
But there are ways to do better in Python; see the Python docs:
For use cases which require exact decimal representation, try using the decimal module which implements decimal arithmetic suitable for accounting applications and high-precision applications.
Another form of exact arithmetic is supported by the fractions module which implements arithmetic based on rational numbers (so the numbers like 1/3 can be represented exactly).
If you are a heavy user of floating point operations you should take a look at the NumPy package and many other packages for mathematical and statistical operations supplied by the SciPy project.
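As a quick illustration of the fractions module mentioned in that quote (my own sketch, not from the docs):
from fractions import Fraction
third = Fraction(1, 3)   # stored exactly as the rational number 1/3
print(third)             # 1/3
print(third * 3 == 1)    # True: no rounding error anywhere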

How to convert exponent in Python and get rid of the 'e+'?

I'm starting with Python and I recently came across a dataset with big values.
One of my fields has a list of values that looks like this: 1.3212724310201994e+18 (note the e+18 at the end of the number).
How can I convert it to a floating point number and remove the exponent without affecting the value?
First of all, the number is already a floating point number, and you do not need to change this. The only issue is that you want to have more control over how it is converted to a string for output purposes.
By default, floating point numbers above a certain size are converted to strings using exponential notation (with "e" representing "*10^"). However, if you want to convert it to a string without exponential notation, you can use the f format specifier, for example:
a = 1.3212724310201994e+18
print("{:f}".format(a))
gives:
1321272431020199424.000000
or using "f-strings" in Python 3:
print(f"{a:f}")
Here the first f tells it to use an f-string and the :f is the floating point format specifier.
You can also specify the number of decimal places that should be displayed, for example:
>>> print(f"{a:.2f}") # 2 decimal places
1321272431020199424.00
>>> print(f"{a:.0f}") # no decimal places
1321272431020199424
Note that the internal representation of a floating-point number in Python uses 53 binary digits of accuracy (approximately one part in 10^16), so a number with a magnitude around 10^18 is not stored accurately down to the nearest integer, let alone any decimal places. However, the above gives the general principle of how you control the formatting used for string conversion.
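As a rough check of that 53-bit limit (my own addition, not part of the original answer): above 2**53, consecutive integers can no longer be told apart once stored as floats:
>>> float(2**53)
9007199254740992.0
>>> float(2**53 + 1)
9007199254740992.0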
You can use Decimal from the decimal module for each element of your data:
from decimal import Decimal
s = 1.3212724310201994e+18
print(Decimal(s))
Output:
1321272431020199424

How to avoid modulus floating point error?

I'm using the modulus operator and I'm getting some floating-point errors. For example,
>>> 7.2%3
1.2000000000000002
Is my only recourse to handle this by using the round function? E.g.
>>> round(7.2%3, 1)
1.2
I don't know a priori how many digits I'm going to need to round to, so I'm wondering if there's a better solution?
If you want arbitrary precision, use the decimal module:
>>> import decimal
>>> decimal.Decimal('7.2') % decimal.Decimal('3')
Decimal('1.2')
Please read the documentation carefully.
Notice I used a str as an argument to Decimal. Look what happens if I didn't:
>>> decimal.Decimal(7.2) % decimal.Decimal(3)
Decimal('1.200000000000000177635683940')
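As an alternative (my own suggestion, not part of the answer above), the fractions module gives the same exact behaviour if you again start from a string:
>>> from fractions import Fraction
>>> Fraction('7.2') % 3
Fraction(6, 5)
>>> float(Fraction('7.2') % 3)
1.2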

Python Shell - "Extras" in float subtraction [duplicate]

Possible Duplicate:
Floating Point Limitations
Using Python 2.7 here.
Can someone explain why this happens in the shell?
>>> 5.2-5.0
0.20000000000000018
Searching yielded things about different scales of numbers not producing the right results (a very small number and a very large number), but that seemed pretty general, and considering the numbers I'm using are of the same scale, I don't think that's why this happens.
EDIT: I suppose I didn't make clear that the "thing happening" I meant is that it returns 0.20000000000000018 instead of simply 0.2. I get that print rounds, and I removed the print from the code snippet, as that was misleading.
You need to understand that 5.2-5.0 really is 0.20000000000000018, not 0.2. The standard explanation for this is found in What Every Computer Scientist Should Know About Floating-Point Arithmetic.
If you don't want to read all of that, just accept that 5.2, 5.0, and 0.20000000000000018 are all just approximations, as close as the computer can get to the numbers you really want.
Python has some tricks to allow you to not know what every computer scientist should know and still get away with it. The main trick is that str(f)—that is, the human-readable rendition of a floating-point number—is truncated to 12 significant digits, so str(5.2-5.0) is "0.2", not "0.20000000000000018". But sometimes you need all the precision you can get, so repr(f)—that is, the machine-readable rendition—is not truncated, so repr(5.2-5.0) is "0.20000000000000018".
Now the only thing left to understand is what the interpreter shell does. As Ashwini Chaudhary explains, just evaluating something in the shell prints out its repr, while the print statement prints out its str.
The shell uses repr():
In [1]: print repr(5.2-5.0)
0.20000000000000018
In [2]: print str(5.2-5.0)
0.2
In [3]: print 5.2-5.0
0.2
The default implementation of float.__str__ limits the output to 12 digits only.
Thus, the least significant digits are dropped and what is left is the value 0.2.
To print more digits (if available), use string formatting:
print '%f' % result # prints 0.200000
That defaults to 6 digits, but you can specify more precision:
print '%.16f' % result # prints 0.2000000000000002
Alternatively, Python offers a newer string formatting method too:
print '{0:.16f}'.format(result) # prints 0.2000000000000002
Why Python produces the 'imprecise' result in the first place has everything to do with the imprecise nature of floating-point arithmetic. Use the decimal module instead if you need more predictable precision:
>>> from decimal import *
>>> getcontext().prec = 1
>>> Decimal(5.2) - Decimal(5.0)
Decimal('0.2')
Python has two different ways of converting an object to a string, the __str__ and __repr__ methods. __str__ is meant to be a normal string output and is used by print; __repr__ is meant to be a more exact representation and is what is displayed when you don't use print, or when you print the contents of a list or dictionary. __str__ rounds floating-point values.
As for why the actual result of the subtraction is 0.20000000000000018 rather than 0.2 exactly, it has to do with the internal representation of floating point. It's impossible to represent 5.2 exactly because it's an infinitely repeating binary number. The closest that you can come is approximately 5.20000000000000018.
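If you are curious what value really is stored (my own addition, not part of that answer), you can hand the float straight to Decimal, which converts it exactly:
from decimal import Decimal
print(Decimal(5.2))                # the exact stored value: a long decimal just above 5.2
print(Decimal(5.2) - Decimal(5))   # the difference: roughly 0.20000000000000018, not 0.2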

Significant figures in the decimal module

So I've decided to try to solve my physics homework by writing some python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example this handles significant figures properly:
>>> from decimal import Decimal
>>> Decimal('1.0') + Decimal('2.0')
Decimal("3.0")
But this doesn't:
>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.3333333333333333333333333333")
So two questions:
Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math?
Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
Changing the decimal working precision to 2 digits is not a good idea, unless you absolutely only are going to perform a single operation.
You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits.
Once you've arrived at the final result, what you probably want is quantize:
>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001"))
Decimal("0.333")
You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.
You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?
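A rough sketch of that directed-rounding idea with the decimal module (my own illustration; the div_interval helper is hypothetical and real interval libraries do much more):
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def div_interval(a, b):
    # Lower and upper 28-digit bounds that bracket the true value of a / b.
    with localcontext() as ctx:
        ctx.rounding = ROUND_FLOOR
        lower = Decimal(a) / Decimal(b)
        ctx.rounding = ROUND_CEILING
        upper = Decimal(a) / Decimal(b)
    return lower, upper

print(div_interval(1, 3))   # the true 1/3 lies between the two printed bounds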
Decimals won't throw away decimal places like that. If you really want to limit the working precision to two digits then try
decimal.getcontext().prec=2
EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the two decimal places).
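For example, a small sketch of that approach (my own illustration, using 0.01 as the quantize target):
from decimal import Decimal

TWO_PLACES = Decimal('0.01')

ratio = (Decimal('1.00') / Decimal('3.00')).quantize(TWO_PLACES)   # 0.33
total = ratio + Decimal('2.00')   # addition keeps the two places: 2.33
print(ratio, total)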
Just out of curiosity...is it necessary to use the decimal module? Why not floating point with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (like when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try:
def lround(x, leadingDigits=0):
    """Return x either as 'print' would show it (the default)
    or rounded to the specified digit as counted from the leftmost
    non-zero digit of the number, e.g. lround(0.00326,2) --> 0.0033
    """
    assert leadingDigits >= 0
    if leadingDigits == 0:
        return float(str(x))  # just give it back like 'print' would give it
    return float('%.*e' % (int(leadingDigits), x))  # give it back as rounded by the %e format
The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:
>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))
(0.33000000000000002, '0.33', '0.3333')
Decimal defaults to 28 significant digits of precision.
The only way to limit the number of digits it returns is by altering the precision.
What's wrong with floating point?
>>> "%8.2e"% ( 1.0/3.0 )
'3.33e-01'
It was designed for scientific-style calculations with a limited number of significant digits.
If I understand Decimal correctly, the "precision" is the number of digits after the decimal point in decimal notation.
You seem to want something else: the number of significant digits. That is one more than the number of digits after the decimal point in scientific notation.
I would be interested in learning about a Python module that does significant-digits-aware floating-point computations.
