>>> str(1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702)
'1.41421356237'
Is there a way I can make str() record more digits of the number into the string? I don't understand why it truncates by default.
Python's floating point numbers use double precision only, which is 64 bits. They simply cannot represent (significantly) more digits than you're seeing.
If you need more, have a look at the built-in decimal module, or the mpmath package.
Try this:
>>> from decimal import *
>>> Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
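decimal can also compute values at high precision itself, not just store digits you type in. As a sketch, here is √2 computed to 50 significant digits with the standard-library decimal module:

```python
from decimal import Decimal, getcontext

# Raise the working precision to 50 significant digits.
getcontext().prec = 50

root2 = Decimal(2).sqrt()
print(root2)  # 1.4142135623730950488016887242096980785696718753769
```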
The float literal itself is rounded to the nearest representable double before str ever sees it (i.e. the truncation is not caused by str):
>>> 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702
1.4142135623730951
If you need more decimal places use decimal instead.
The Python compiler is truncating; your float literal has more precision than can be represented in a C double. Express the number as a string in the first place if you need more precision.
That's because it's converting to a float. It's not the conversion to the string that's causing it.
You should use decimal.Decimal for representing such high precision numbers.
I'm starting with Python and I recently came across a dataset with big values.
One of my fields has a list of values that looks like this: 1.3212724310201994e+18 (note the e+18 by the end of the number).
How can I convert it to a floating-point number and remove the exponent without affecting the value?
First of all, the number is already a floating point number, and you do not need to change this. The only issue is that you want to have more control over how it is converted to a string for output purposes.
By default, floating point numbers above a certain size are converted to strings using exponential notation (with "e" representing "*10^"). However, if you want to convert it to a string without exponential notation, you can use the f format specifier, for example:
a = 1.3212724310201994e+18
print("{:f}".format(a))
gives:
1321272431020199424.000000
or using "f-strings" in Python 3:
print(f"{a:f}")
here the first f tells it to use an f-string and the :f is the floating point format specifier.
You can also specify the number of decimal places that should be displayed, for example:
>>> print(f"{a:.2f}") # 2 decimal places
1321272431020199424.00
>>> print(f"{a:.0f}") # no decimal places
1321272431020199424
Note that the internal representation of a floating-point number in Python uses 53 binary digits of accuracy (approximately one part in 10^16), so in this case, the value of your number of magnitude approximately 10^18 is not stored with accuracy down to the nearest integer, let alone any decimal places. However, the above gives the general principle of how you control the formatting used for string conversion.
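You can see how coarse the representation is at this magnitude with math.ulp (Python 3.9+), which gives the spacing between adjacent representable floats; near 1.3e18 the gap is 256, so nothing finer than that, let alone decimal places, can be stored:

```python
import math

a = 1.3212724310201994e+18
# Spacing between a and the next representable float: 2**8 at this magnitude.
print(math.ulp(a))  # 256.0
```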
You can use Decimal from the decimal module for each element of your data:
from decimal import Decimal
s = 1.3212724310201994e+18
print(Decimal(s))
Output:
1321272431020199424
Can someone explain why the following three examples are not all equal?
ipdb> Decimal(71.60) == Decimal(71.60)
True
ipdb> Decimal('71.60') == Decimal('71.60')
True
ipdb> Decimal(71.60) == Decimal('71.60')
False
Is there a general 'correct' way to create Decimal objects in Python? (ie, as strings or as floats)
Floating-point numbers, which Python uses by default, are stored in base 2. 71.6 can't be represented exactly in base 2 (think of numbers like 1/3 in base 10).
Because of this, the literal is rounded to the nearest value the floating-point format can hold. The base-2 expansion of 71.6 goes on forever, and since you almost certainly don't have infinite memory to play with, the computer is told to represent it in a fixed number of bits.
If you use a string instead, the program can convert it exactly, rather than starting from the already-rounded floating-point value.
>>> decimal.Decimal(71.6)
Decimal('71.599999999999994315658113919198513031005859375')
Compared to
>>> decimal.Decimal("71.6")
Decimal('71.6')
However, if your number is representable exactly as a float, it is just as accurate as a string
>>> decimal.Decimal(71.5)
Decimal('71.5')
Normally Decimal is used to avoid the floating point precision problem. For example, the float literal 71.60 isn't mathematically 71.60, but a number very close to it.
As a result, using float to initialize Decimal won't avoid the problem. In general, you should use strings to initialize Decimal.
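If you are handed a float and still want a sensibly rounded Decimal, one common approach (a sketch, not the only option) is to go through str, or to quantize the float-based Decimal to the number of places you care about:

```python
from decimal import Decimal

x = 71.60  # a float; not exactly 71.60 internally

# Route 1: convert via str, which gives the shortest repr of the float.
print(Decimal(str(x)))                       # 71.6

# Route 2: take the exact float value, then round to 2 places.
print(Decimal(x).quantize(Decimal("0.01")))  # 71.60
```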
When printing a floating point variable in Python 3 like this:
str(1318516946165810000000000.123123123)
The output is:
1.31851694616581e+24
Is there a simple way in the standard lib (not Numpy) to print the same thing with only 32 bit float precision? (or more general any precision)
Be aware that precision != decimal places (as with Decimal).
EDIT
The result should be a string, like str produces, but with limited precision, for example:
32 bit representation of the above float:
1.31851e+24
I may have misunderstood, but is using format with a suitable precision modifier what you are asking for?
>>> "{0:.6g}".format(1.31851694616581e24)
'1.31852e+24'
Change the 6 to control the number of significant figures. (Note the dot in .6g: a bare 6g would set the field width, not the precision.)
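The same works with f-strings, and the precision can itself be a variable (nested braces), so a small hypothetical helper for "format with n significant figures" might look like:

```python
def sig(x, n=6):
    """Format x with n significant figures using the g specifier."""
    return f"{x:.{n}g}"

print(sig(1.31851694616581e24))     # 1.31852e+24
print(sig(1.31851694616581e24, 3))  # 1.32e+24
```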
I need to write a simple program that calculates a mathematical formula.
The only problem here is that one of the variables can take the value 10^100.
Because of this I can not write this program in C++/C (I can't use external libraries like gmp).
Few hours ago I read that Python is capable of calculating such values.
My question is:
Why
print("%.10f"%(10.25**100))
is returning the number "118137163510621843218803309161687290343217035128100169109374848108012122824436799009169146127891562496.0000000000"
instead of
"118137163510621850716311252946961817841741635398513936935237985161753371506358048089333490072379307296.453937046171461"?
By default, Python uses a fixed precision floating-point data type to represent fractional numbers (just like double in C). You can work with precise rational numbers, though:
>>> from fractions import Fraction
>>> Fraction("10.25")
Fraction(41, 4)
>>> x = Fraction("10.25")
>>> x**100
Fraction(189839102486063226543090986563273122284619337618944664609359292215966165735102377674211649585188827411673346619890309129617784863285653302296666895356073140724001, 1606938044258990275541962092341162602522202993782792835301376)
You can also use the decimal module if you want arbitrary precision decimals (only numbers that are representable as finite decimals are supported, though):
>>> from decimal import *
>>> getcontext().prec = 150
>>> Decimal("10.25")**100
Decimal('118137163510621850716311252946961817841741635398513936935237985161753371506358048089333490072379307296.453937046171460995169093650913476028229144848989')
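The two approaches also combine: if you started from the exact Fraction, you can render it as a decimal string yourself by dividing numerator by denominator under a high-precision decimal context (a sketch):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 150
x = Fraction("10.25") ** 100

# Exact rational -> decimal, rounded to the current context precision.
d = Decimal(x.numerator) / Decimal(x.denominator)
print(d)
```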
Python is capable of handling arbitrarily large integers, but not floating point values. They can get pretty large, but as you noticed, you lose precision in the low digits.
I have a string:
x = "12.000"
And I want to convert it to a number. However, I have used int, float, and others, but I only get 12.0, and I want to keep all the zeroes. Please help!
I want x = 12.000 as a result.
decimal.Decimal allows you to use a specific precision.
>>> decimal.Decimal('12.000')
Decimal('12.000')
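Decimal also carries significance through arithmetic, so the trailing zeroes survive calculations (a small sketch):

```python
from decimal import Decimal

x = Decimal("12.000")
print(x)                    # 12.000
print(x * 2)                # 24.000
print(x + Decimal("1.5"))   # 13.500
```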
If you really want to perform calculations that take precision into account, the easiest way is probably to use the uncertainties module. Here is an example:
>>> import uncertainties
>>> x = uncertainties.ufloat_fromstr("12.000")
>>> x
12.0+/-0.001
>>> print(2*x)
24.0+/-0.002
The uncertainties module transparently handles uncertainties (precision) for you, whatever the complexity of the mathematical expressions involved.
The decimal module, on the other hand, does not handle uncertainties, but instead sets the number of digits after the decimal point: you can't trust all the digits given by the decimal module. Thus,
>>> 100*decimal.Decimal('12.1')
Decimal('1210.0')
whereas 100*(12.1±0.1) = 1210±10 (not 1210.0±0.1):
>>> 100*uncertainties.ufloat_fromstr("12.1")
1210.0+/-10.0
Thus, the decimal module gives '1210.0' even though the precision on 100*(12.1±0.1) is 100 times larger than 0.1.
So, if you want numbers that have a fixed number of digits after the decimal point (like for accounting applications), the decimal module is good; if you instead need to perform calculations with uncertainties, then the uncertainties module is appropriate.
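For the fixed-decimal-places case (e.g. money), quantize is the usual tool, with an explicit rounding mode; a minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("19.99")
total = (price * 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(total)  # 59.97
```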
(Disclaimer: I'm the author of the uncertainties module.)
You may be interested in the decimal Python lib.
You can set the precision with getcontext().prec.
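For example, setting the context precision controls how many significant digits arithmetic carries:

```python
from decimal import Decimal, getcontext

getcontext().prec = 30  # carry 30 significant digits
print(Decimal(1) / Decimal(7))  # 0.142857142857142857142857142857
```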