I have a string that contains a decimal value in the form "ab.xy" (for example "32.15"). I need to convert this to a decimal number, i.e. ab.xy (32.15).
Should I do
float(number_string)
or should I do
decimal.Decimal(number_string)
We don't want the integrity of the decimal number represented in the string to be changed; that is, we want the number in the string to be converted exactly as written. According to the blog post Decimal vs float in Python, decimal.Decimal is better than float. Can you please weigh in?
A value like this cannot be represented exactly as a float. I would use decimal or fractions.
Alternatively, convert the strings to integers and only at the last possible stage convert back to two decimal places. This is my preferred option if I have to do calculations and comparisons with the numbers.
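A minimal sketch of the three options (the integer variant assumes the string always has exactly two decimal places):

from decimal import Decimal

number_string = "32.15"

# float() parses into binary floating point; the stored value is the nearest
# representable double, not exactly 32.15:
print(Decimal(float(number_string)))   # a long digit string close to, but not exactly, 32.15

# Decimal() keeps the digits exactly as written in the string:
print(Decimal(number_string))          # Decimal('32.15')

# Integer alternative: work in hundredths and only format back at the end.
whole, frac = number_string.split(".")
hundredths = int(whole) * 100 + int(frac)             # 3215
print(f"{hundredths // 100}.{hundredths % 100:02d}")  # 32.15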
Is it possible to store more than 15 digits in a float?
For example, I have this float phi:
phi = 21.618033988749894848204586834365 # 30 decimal digits
print(phi) # -> ~21.618033988749893, 15 imprecise decimal digits
How can I guarantee I get more than 15 accurate digits from this float, without using the Decimal class?
NOTE: I know this restriction doesn't really apply, since Decimal is built into Python. I am leaving the question open for reference, for others in a similar situation who don't realize Decimal is a builtin.
Floats are fixed-precision. They physically cannot store the amount of data you're trying to stuff into them. You will have to use an arbitrary-precision data type; decimal.Decimal is one, but there are others that may be more useful depending on what you're trying to do.
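A minimal sketch with the built-in decimal module (the question rules it out, but it is the arbitrary-precision type the standard library provides):

from decimal import Decimal, getcontext

# Raise the context precision so arithmetic keeps the extra digits; the
# value must start life as a string, never as a float literal.
getcontext().prec = 50
phi = Decimal("21.618033988749894848204586834365")
print(phi)       # 21.618033988749894848204586834365
print(phi + 1)   # 22.618033988749894848204586834365 -- arithmetic keeps the digits too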
I'm starting with Python and I recently came across a dataset with big values.
One of my fields has a list of values that look like this: 1.3212724310201994e+18 (note the e+18 at the end of the number).
How can I convert it to a floating point number and remove the exponent without affecting the value?
First of all, the number is already a floating point number, and you do not need to change this. The only issue is that you want to have more control over how it is converted to a string for output purposes.
By default, floating point numbers above a certain size are converted to strings using exponential notation (with "e" representing "*10^"). However, if you want to convert it to a string without exponential notation, you can use the f format specifier, for example:
a = 1.3212724310201994e+18
print("{:f}".format(a))
gives:
1321272431020199424.000000
or using "f-strings" in Python 3:
print(f"{a:f}")
Here the first f tells Python to use an f-string and :f is the floating-point format specifier.
You can also specify the number of decimal places that should be displayed, for example:
>>> print(f"{a:.2f}") # 2 decimal places
1321272431020199424.00
>>> print(f"{a:.0f}") # no decimal places
1321272431020199424
Note that the internal representation of a floating-point number in Python uses 53 binary digits of accuracy (approximately one part in 10^16), so in this case, the value of your number of magnitude approximately 10^18 is not stored with accuracy down to the nearest integer, let alone any decimal places. However, the above gives the general principle of how you control the formatting used for string conversion.
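As a small illustration of that spacing (a sketch; math.ulp requires Python 3.9 or later):

import math

a = 1.3212724310201994e+18
print(f"{a:.0f}")     # 1321272431020199424
print(math.ulp(a))    # 256.0 -- adjacent doubles are 256 apart at this magnitude
print(a + 1 == a)     # True: adding 1 rounds straight back to the same double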
You can use Decimal from the decimal module for each element of your data:
from decimal import Decimal
s = 1.3212724310201994e+18
print(Decimal(s))
Output:
1321272431020199424
I am trying to return a number with 6 decimal places, regardless of what the number is.
For example:
>>> a = 3/6
>>> a
0.5
How can I take a and make it 0.500000 while preserving its type as a float?
I've tried
'{0:.6f}'.format(a)
but that returns a string. I'd like something that accomplishes this same task, but returns a float.
In the computer's memory, the float is stored as an IEEE 754 value; it is just a bunch of binary data laid out in a given format that looks nothing like the string of the number as you write it.
So while you manipulate it, it is still a float and has no particular number of decimals after the dot. It only gains them when you display it, and whatever you do, displaying it means converting it to a string.
It is during that conversion to a string that you can specify the number of decimals to show, and you do that with the string formatting you already wrote.
This question shows a slight misunderstanding on the nature of data types such as float and string.
A float in a computer has a binary representation, not a decimal one. The decimal rendering Python gives you in the console is produced when the value is converted to a string for printing, even if that conversion is implicit in the print function. There is no difference between how 0.5 and 0.500000 are stored as a float in its binary representation.
When you are writing application code, it is best not to worry about the presentation until it gets to the end user where it must, somehow, be converted to a string if only implicitly. At that point you can worry about decimal places, or even whether you want it shown in decimal at all.
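A quick sketch to convince yourself of this:

a = 3 / 6

# The float itself carries no notion of "6 decimal places"; 0.5 and 0.500000
# parse to exactly the same double, so round-tripping gains nothing:
formatted = f"{a:.6f}"        # '0.500000' -- a string, for display only
print(float(formatted) == a)  # True
print(round(a, 6) == a)       # True: rounding to 6 places is also a no-op here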
Can someone explain why the following three examples are not all equal?
ipdb> Decimal(71.60) == Decimal(71.60)
True
ipdb> Decimal('71.60') == Decimal('71.60')
True
ipdb> Decimal(71.60) == Decimal('71.60')
False
Is there a general 'correct' way to create Decimal objects in Python? (ie, as strings or as floats)
Floating point numbers, which are used by default, are stored in base 2, and 71.6 can't be represented exactly in base 2 (think of numbers like 1/3 in base 10).
Because of this, the value is rounded to as many binary places as the floating point format can hold. The digits of 71.6 in base 2 would go on forever, and since you almost certainly don't have infinite memory to play with, the computer is told to represent it in a fixed number of bits.
If you use a string instead, the program can convert it exactly, instead of starting from the already rounded floating point number.
>>> decimal.Decimal(71.6)
Decimal('71.599999999999994315658113919198513031005859375')
Compared to
>>> decimal.Decimal("71.6")
Decimal('71.6')
However, if your number is representable exactly as a float, it is just as accurate as a string:
>>> decimal.Decimal(71.5)
Decimal('71.5')
Normally Decimal is used to avoid the floating point precision problem. For example, the float literal 71.60 isn't mathematically 71.60, but a number very close to it.
As a result, using float to initialize Decimal won't avoid the problem. In general, you should use strings to initialize Decimal.
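As a small sketch contrasting the two (Decimal(str(x)) is shown only as a common fallback when the value already arrived as a float, not as something the decimal docs prescribe):

from decimal import Decimal

# Initialising from a string keeps the intended decimal value, and
# arithmetic stays exact:
price = Decimal("71.60")
print(price * 3)         # Decimal('214.80')

# If the value already arrived as a float, Decimal(str(x)) rounds it back
# to its short repr, while Decimal(x) preserves the binary noise:
x = 71.60
print(Decimal(str(x)))   # Decimal('71.6')
print(Decimal(x))        # Decimal('71.599999999999994315658113919198513031005859375')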
>>> str(1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702)
'1.41421356237'
Is there a way I can make str() record more digits of the number into the string? I don't understand why it truncates by default.
Python's floating point numbers use double precision only, which is 64 bits. They simply cannot represent (significantly) more digits than you're seeing.
If you need more, have a look at the built-in decimal module, or the mpmath package.
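If the built-in decimal module isn't a fit, a sketch with mpmath looks like this (assuming the third-party package is installed, e.g. via pip install mpmath):

from mpmath import mp, mpf

mp.dps = 60  # work with 60 significant decimal digits
x = mpf("1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702")
print(x)     # keeps far more digits than a 64-bit double can hold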
Try this:
>>> from decimal import *
>>> Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
The float literal is truncated by default to fit in the space made available for it (i.e. it's not because of str):
>>> 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702
1.4142135623730951
If you need more decimal places use decimal instead.
The Python compiler is truncating; your float literal has more precision than can be represented in a C double. Express the number as a string in the first place if you need more precision.
That's because the literal is converted to a float first; it's not the conversion to the string that's causing it.
You should use decimal.Decimal for representing such high precision numbers.
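Since the literal in that session happens to be sqrt(2), here is a small sketch of how decimal can compute, not just store, such a value to whatever precision you ask for:

from decimal import Decimal, getcontext

getcontext().prec = 60     # 60 significant digits
print(Decimal(2).sqrt())   # 1.41421356237309504880168872420969807856967187537694...

If the digits are already in hand, passing them as a string to Decimal, as shown above, preserves them exactly.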