I was working on a project to compute the Leibniz approximation for pi with the below code:
def pi(precision):
    sign = True
    ret = 0
    for i in range(1, precision + 1):
        odd = 2 * i - 1
        if sign:
            ret += 1.0 / odd
        else:
            ret -= 1.0 / odd
        sign = not sign
    return ret
However, the output value was always 12 digits long. How can I increase the precision (i.e. get more digits) of the calculation? Does Python support more precise floating-point numbers, or will I have to use some external library?
Try using Decimal.
Read Arbitrary-precision elementary mathematical functions (Python) for more information.
Python's float type maps to whatever your platform's C compiler calls a double (see http://en.wikipedia.org/wiki/IEEE_floating_point_number).
The Python standard library also comes with an arbitrary-precision decimal module, called decimal: http://docs.python.org/2/library/decimal.html
The Leibniz formula converges extremely slowly; honestly, you won't live long enough for it to get 12 digits of accuracy. Series acceleration techniques (such as Euler's transformation of alternating series) can speed it up enormously.
With Python's float, you get 15–17 digits of precision (if you are seeing fewer, you may need to use a different format specifier when printing).
If you need more, you'll need to use a different method (one that only uses integer arithmetic), or a different way to represent floating-point numbers.
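As a rough sketch of the integer-only route (not from the answers above; arctan_inv and machin_pi are illustrative names), here is Machin's formula pi/4 = 4*arctan(1/5) - arctan(1/239) evaluated entirely with Python ints scaled by a power of ten:

def arctan_inv(x, scale):
    # arctan(1/x) * scale, summing (-1)**k / ((2k+1) * x**(2k+1)) in integers
    power = scale // x          # scale // x**(2k+1), starting at k = 0
    total = power
    divisor = 1
    sign = 1
    while power:
        power //= x * x
        divisor += 2
        sign = -sign
        total += sign * (power // divisor)
    return total

def machin_pi(digits):
    scale = 10 ** (digits + 10)   # ten guard digits to absorb truncation error
    pi_scaled = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi_scaled // 10 ** 10  # drop the guard digits

print(machin_pi(50))  # 314159265358979323846... (floor of pi * 10**50)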
See Is floating point arbitrary precision available?
So I was trying to minimize floating-point errors when doing arithmetic in Python, and I stumbled upon Python's decimal module. It worked great at first, up until this operation.
from decimal import *
getcontext().prec = 100
test_x = Decimal(str(3.25)).quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)
test_y = Decimal(str(2196.646351)).quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)
print((test_y)*(test_x**Decimal('2')))
The above code outputs 23202.077082437500000000 instead of 23202.07708, which is what a conventional calculator gives. How can I get output like a calculator's, rounded to 6 decimal places? Also, are there better ways to do arithmetic calculations in Python?
I have tried Python's round() function, but that is off limits for me because I am dealing with very large numbers that reach the maximum length of numbers that round() supports.
Adding further context to the code: I can't change the value of getcontext().prec or the .quantize(Decimal('0.000001')) because I am dealing with numbers like 109796940503037.6545639765, and it gives me errors if I don't set getcontext().prec to a high number.
I can't change getcontext().prec to, say, 6 because it always gives the error:
InvalidOperation: [<class 'decimal.InvalidOperation'>]
If you do Decimal(str(123312.12321221332)), you convert a float to a string and then pass that to Decimal, and you lose precision during that conversion.
Do Decimal('123312.12321221332') instead.
Also keep in mind that:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem
https://docs.python.org/3/library/decimal.html
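Putting that together for the question above, a minimal sketch: keep the working precision high, build the Decimals from string literals, and quantize only the final result (values reused from the question):

from decimal import Decimal, ROUND_HALF_UP, getcontext

getcontext().prec = 100        # high working precision, as the asker requires
test_x = Decimal('3.25')       # string literal: no float round-trip
test_y = Decimal('2196.646351')
result = (test_y * test_x ** 2).quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)
print(result)                  # 23202.077082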
I have an algorithm that is calculating:
result = int(14949283383840498/5262*27115)
The correct result should be 77033412951888085, but Python3.8 gives me 77033412951888080
I also have tried the following:
>>> result = 77033412951888085
>>> print(result)
77033412951888085
>>> print(int(result))
77033412951888085
>>> print(float(result))
7.703341295188808e+16
>>> print(int(float(result)))
77033412951888080
It seems the problem occurs when I cast the float to int. What am I missing?
PS: I have found that using result = 14949283383840498//5262*27115 I get the right answer!
Casting is not the issue. Floating-point arithmetic has limitations with respect to precision. See https://docs.python.org/3/tutorial/floatingpoint.html
You need to either use integer division or use the decimal module, which defaults to 28 places of precision.
Using integer division
result = 14949283383840498 // 5262 * 27115
print(result)
Output:
77033412951888085
Using decimal module
from decimal import Decimal
result = Decimal(14949283383840498) / 5262 * 27115
print(result)
Output:
77033412951888085
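The fractions module is another option; it keeps the quotient as an exact rational, so no precision setting is involved at all. Assuming the division is exact here (which the integer-division result above indicates), the result prints as a plain integer:
from fractions import Fraction
result = Fraction(14949283383840498, 5262) * 27115
print(result)
Output:
77033412951888085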
It is a precision limitation:
>>> result = 14949283383840498/5262*27115
>>> result
7.703341295188808e+16
In this case, result is a float.
You can see that the precision is 15 digits.
Converting that to int, you see that the last non-zero digit is 8, consistent with what the float shows when printed.
Try the following:
>>> import sys
>>> sys.float_info.dig
15
dig is the maximum number of decimal digits that can be faithfully represented in a float.
A very good explanation regarding this issue is available in the Python tutorial's chapter on floating point arithmetic, linked above.
But there are ways to do better with Python, see from the Python's doc:
For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
Another form of exact arithmetic is supported by the fractions module
which implements arithmetic based on rational numbers (so the numbers
like 1/3 can be represented exactly).
If you are a heavy user of floating point operations you should take a
look at the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project.
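A quick illustration of the difference between binary floats and the exact fractions type mentioned in that quote:
>>> from fractions import Fraction
>>> 0.1 + 0.1 + 0.1 == 0.3
False
>>> Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10) == Fraction(3, 10)
True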
I'd like to calculate ⌊2^(1918)*π⌋ + 124476 in Python, but I get this error when I do it using the following code:
import math

b = math.floor((2**1918) * math.pi) + 124476
print(b)
OverflowError: int too large to convert to float
How can you get this to work? In the end I'd just like to have it all as hexadecimal (if that helps with answering my question), but I was actually only trying to get it as an integer first :)
The right solution really depends on how precise the result needs to be. Since 2^1918 is already far too large for hardware integer and floating point containers, it is not possible to get away with direct calculations without losing all the precision below ~ 10^300.
In order to compute the desired result, you should use arbitrary-precision calculation techniques. You can implement the algorithms yourself or use one of the available libraries.
Assuming you are looking for an integer part of your expression, it will take about 600 decimal places to store the results precisely. Here is how you can get it using mpmath:
from mpmath import mp
mp.dps = 600
print(mp.floor(mp.power(2, 1918)*mp.pi + 124476))
74590163000744215664571428206261183464882552592869067139382222056552715349763159120841569799756029042920968184704590129494078052978962320087944021101746026347535981717869532122259590055984951049094749636380324830154777203301864744802934173941573749720376124683717094961945258961821638084501989870923589746845121992752663157772293235786930128078740743810989039879507242078364008020576647135087519356182872146031915081433053440716531771499444683048837650335204793844725968402892045220358076481772902929784589843471786500160230209071224266538164123696273477863853813807997663357545.0
Next, all you have to do is to convert it to hex representation (or extract hex from its internal binary form), which is a matter for another subject :)
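For what it's worth, that last step is short, reusing the mpmath setup from above (600 decimal digits of working precision hold the 578-digit integer exactly, so going through int is lossless):
from mpmath import mp
mp.dps = 600
n = int(mp.floor(mp.power(2, 1918) * mp.pi + 124476))
print(hex(n))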
The basic problem is what the message says. Python integers can be arbitrarily large, larger even than the range of a float. 2**1918 in decimal contains 578 significant digits and is way bigger than the biggest float your IEEE754 hardware can represent. So the call just fails.
You could try looking at the mpmath module. It is designed for floating point arithmetic outside the bounds of what ordinary hardware can handle.
I think the problem can be solved without resorting to high-precision arithmetic. floor(n.something + m) where m and n are integers is equal to floor(n.something) + m. So in this case you are looking for floor(2**1918 * pi) plus an integer (namely 124476). floor(2**whatever * pi) is just the first whatever + 2 bits of pi. So just look up the first 1920 bits of pi, add the bits for 124476, and output as hex digits.
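A small sanity check of that claim at a manageable size, using mpmath (an assumption; any high-precision source of pi would do). The binary expansion of pi begins 11.00100100001111110110..., and floor(2**n * pi) reproduces its first n + 2 bits:
>>> from mpmath import mp
>>> mp.prec = 80          # 80 bits of working precision, plenty for n = 20
>>> n = 20
>>> bin(int(mp.floor(mp.pi * 2 ** n)))
'0b1100100100001111110110'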
A spigot algorithm can generate digits of pi without using arbitrary precision. A quick web search seems to find some Python implementations for generating digits in base 10. I didn't see anything about base 2, but Plouffe's formula generates base 16 digits if I am not mistaken.
The problem is that (2**1918) * math.pi attempts to convert the integer to 64-bit floating point precision, which is insufficiently large. You can convert math.pi to a fraction to use arbitrary precision.
>>> import fractions, math
>>> math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
74590163000744212756918704280961225881025315246628098737524697383138220762542289800871336766911957454080350508173317171375032226685669280397906783245438534131599390699781017605377332298669863169044574050694427882869191541933796848577277592163846082732344724959222075452985644173036076895843129191378853006780204194590286508603564336292806628948212533561286572102730834409985441874735976583720122784469168008083076285020654725577288682595262788418426186598550864392013191287665258445673204426746083965447956681216069719524525240073122409298640817341016286940008045020172328756796
Note that arbitrary precision applies to the calculation; math.pi is defined only with 64-bit floating point precision. Use an external library, such as mpmath, if you need the exact value.
To convert this to a hexadecimal string, use hex or a string format:
>>> hex(math.floor((2**1918) * fractions.Fraction(math.pi) + 124476))
'0xc90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> '%x' % math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
'c90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> f'{math.floor((2**1918) * fractions.Fraction(math.pi) + 124476):X}'
'C90FDAA22168C0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001E63C'
For string formats, x produces lower-case hex whereas X produces upper-case.
I'm wondering what causes this behaviour. I haven't been able to find an answer that covers this. It is probably something simple and obvious, but it is not to me. I am using Python 2.7.3 on Ubuntu.
In [1]: 2 == 1.9999999999999999
Out[1]: True
In [2]: 2 == 1.999999999999999
Out[2]: False
EDIT:
To clarify my question: is there a documented maximum number of 9's at which Python will evaluate the expression above as being equal to 2?
Python uses floating point representation
What a floating point value actually is, is a fixed-width binary number (called the "significand") plus a small integer to tell you how many powers of two to shift that value by (the "exponent"). Plus a sign bit. Just like scientific notation, but in base 2 instead of 10.
The closest 64 bit floating point value to 1.9999999999999999 is 2.0, because 64 bit floating point values (so-called "double precision") use a 53 bit significand (52 bits stored explicitly), which is equivalent to about 15 decimal places. So the literal 1.9999999999999999 is just another way of writing 2.0. However, the closest value to 1.999999999999999 is less than 2.0 (I think it's 1.9999999999999988897769753748434595763683319091796875 exactly, but I'm too lazy to check that's correct, I'm just relying on Python's formatting code to be exact).
I don't actually know whether the use specifically of 64 bit floats is required by the Python language, or is an implementation detail of CPython. But whatever size is used, the important thing is not specifically the number of decimal places, it is where the closest floating-point value of that size lies to your decimal literal. It will be closer for some literals than others.
Hence, 1.9999999999999999 == 2 for the same reason that 2.0 == 2 (Python allows mixed-type numeric operations including comparison, and the integer 2 is equal to the float 2.0). Whereas 1.999999999999999 != 2.
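You can check the exact values yourself, since Decimal converts a float exactly:
>>> from decimal import Decimal
>>> Decimal(1.9999999999999999)   # the literal already rounded to the float 2.0
Decimal('2')
>>> Decimal(1.999999999999999)
Decimal('1.9999999999999988897769753748434595763683319091796875')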
Type coercion
>>> 2 == 2.0
True
And a consequence of the maximum number of digits that can be represented in Python:
>>> import sys
>>> sys.float_info.dig
15
>>> 1.9999999999999999
2.0
More from the docs:
>>> float('9876543211234567')
9876543211234568.0
Note the ..68 at the end instead of the expected ..67.
This is due to the way floats are implemented in Python. To keep it short and simple: since floats are almost always an approximation, and thus have more digits than most people find useful, the Python interpreter displays a rounded value.
In more detail, floats are stored in binary. This means that they're stored as fractions to the base 2, unlike decimal notation, where you can display a float as fractions to the base 10. However, most decimal fractions don't have an exact representation in binary. Because of that, they are typically stored with a precision of 53 bits. This can make them unsuitable for more complex arithmetic operations, since you'll run into some strange problems, e.g.:
>>> 0.1 + 0.2
0.30000000000000004
>>> round(2.675, 2)
2.67
See the docs on floats as well.
Mathematically speaking, 2.0 does equal 1.9999... forever. They are two different ways of writing the same number.
However, in software, it's important to never compare two floats or decimals for equality - instead, subtract them, take the absolute value, and verify that the (always positive) difference is sufficiently low for your purposes.
EG:
if abs(value1 - value2) < 1e-10:
    pass  # they are close enough
else:
    pass  # they are not
You probably should set EPSILON = 1e-10 and use the symbolic constant instead of scattering 1e-10 throughout your code, or better still, use a comparison function.
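On Python 3.5 and later, the standard library does this for you: math.isclose compares with a relative tolerance (rel_tol, defaulting to 1e-09) plus an optional absolute tolerance (abs_tol) for values near zero:
>>> import math
>>> math.isclose(2.0, 1.9999999999999)
True
>>> math.isclose(0.1 + 0.2, 0.3)
True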
So I've decided to try to solve my physics homework by writing some Python scripts to solve problems for me. One problem I'm running into is that significant figures don't always seem to come out properly. For example, this handles significant figures properly:
>>> from decimal import Decimal
>>> Decimal('1.0') + Decimal('2.0')
Decimal("3.0")
But this doesn't:
>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.3333333333333333333333333333")
So two questions:
Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math?
Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
Changing the decimal working precision to 2 digits is not a good idea, unless you are absolutely only going to perform a single operation.
You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits.
Once you've arrived at the final result, what you probably want is quantize:
>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001"))
Decimal("0.333")
You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.
You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?
Decimals won't throw away decimal places like that. If you really want to limit the precision to 2 significant digits, then try
decimal.getcontext().prec=2
EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 dps).
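For example, with the working precision lowered to two significant digits, the division from the question rounds immediately:
>>> import decimal
>>> decimal.getcontext().prec = 2
>>> decimal.Decimal('1.00') / decimal.Decimal('3.00')
Decimal('0.33')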
Just out of curiosity...is it necessary to use the decimal module? Why not floating point with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (like when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try:
def lround(x, leadingDigits=0):
    """Return x either as 'print' would show it (the default)
    or rounded to the specified digit as counted from the leftmost
    non-zero digit of the number, e.g. lround(0.00326,2) --> 0.0033
    """
    assert leadingDigits >= 0
    if leadingDigits == 0:
        return float(str(x))  # just give it back like 'print' would give it
    # %e keeps one digit before the point plus the format precision after it,
    # so leadingDigits significant digits needs a precision of leadingDigits - 1
    return float('%.*e' % (int(leadingDigits) - 1, x))
The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:
>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))
(0.33000000000000002, '0.33', '0.3333')
Decimal defaults to 28 places of precision.
The only way to limit the number of digits it returns is by altering the precision.
What's wrong with floating point?
>>> "%8.2e"% ( 1.0/3.0 )
'3.33e-01'
It was designed for scientific-style calculations with a limited number of significant digits.
If I understand Decimal correctly, the "precision" is the number of digits after the decimal point in decimal notation.
You seem to want something else: the number of significant digits. That is one more than the number of digits after the decimal point in scientific notation.
I would be interested in learning about a Python module that does significant-digits-aware floating point computations.