Set Python calculation precision higher than 15 digits

For a more detailed overview of my problem, please look here:
Polygonally calculate Pi
I used mpmath, decimal and some other modules to display more than the standard number of digits, but only the first 15 of them are correct, meaning that when I run my program to calculate pi, everything after the first 15 digits is inaccurate.
Or, if I write mpf(1/3), it prints 0.3333333333333331574563241647.... Is there any way to let Python calculate with exact numbers? This is very important for me, because my method will produce angles like 0.0000000000000000000000245331°, for example, and if these get rounded I have a problem.
Another example I tried was calculating 0.0001 + 0.0002 - 0.0003 with and without the decimal module. Without it, the result printed was 0.0, and with it I got something like 1.5223456224E-31. I assume this problem comes from Python's binary floating-point arithmetic?
Do you know a language that can calculate this exactly, or, better, how to achieve it in Python?
PS: Where does the lack of precision after the first 15 digits come from?

The problem is how you create your variables. When you write mpf(1/3), Python first evaluates 1/3 as a 64-bit float approximation, and only then does mpmath convert that approximation to higher precision, so the error is already baked in. You need to initialize your variables from exact arguments: mpf(1)/3 should give the expected result.
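For illustration, here is a minimal sketch of the difference, assuming mpmath is installed and Python 3 division (the 50-digit working precision is my own choice):
from mpmath import mp, mpf
mp.dps = 50   # work with 50 significant decimal digits
# 1/3 is rounded to a 64-bit float first, so the error survives the conversion
print(mpf(1/3))    # 0.33333333333333331... (only the first ~16 digits match 1/3)
# dividing the exact mpf(1) by the exact integer 3 keeps the full working precision
print(mpf(1) / 3)  # 0.33333333333333333333... (correct to the working precision)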

Related

Encoding float constants as extremely long binary strings

Recently, I've been trying to implement the 15 tests for randomness described in NIST SP800-22. As a check of my function implementations, I have been running the examples that the NIST document provides for each of its tests. Some of these tests require bit strings that are very long (up to a million bits). For example, in one of the examples the input is "the first 100,000 bits of e." That brings up the question: how do I generate a bit representation of a float value that exceeds the precision available for floating point numbers in Python?
I have found articles converting integers to binary strings (the bin() function), and converting floating point fractions to binary (repeated division by 2 (slow!) and limited by floating point precision). I've considered constructing it iteratively in some way using $e=\sum_{n=0}^{\infty}\frac{2n+2}{(2n+1)!}$, calculating the next portion value, converting it to a binary representation, and somehow adding it to the cumulative representation (still thinking through how to do this). However, I've hit the same wall going down this path: the precision of the floating point values as I go farther out on this sum.
Does anyone have some suggestions on creating arbitrarily long bit strings from arbitrarily precise floating point values?
PS - Also, is there any way to get my Markdown math equation above to render properly here? :-)
I maintain the gmpy2 library and it supports arbitrary-precision binary arithmetic. Here is an example of generating the first 100 bits of e.
>>> import gmpy2
>>> gmpy2.get_context().precision=100
>>> gmpy2.exp(1).digits(2)[0]
'1010110111111000010101000101100010100010101110110100101010011010101011111011100010101100010000000010'
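To scale the same approach up to the 100,000 bits mentioned in the question, one could raise the precision and truncate; the extra guard bits below are my own precaution against the final rounded bit, not part of the original answer:
import gmpy2
gmpy2.get_context().precision = 100000 + 64   # request some guard bits beyond what we keep
mantissa = gmpy2.exp(1).digits(2)[0]          # binary digits of e, most significant first
bits = mantissa[:100000]                      # first 100,000 bits of e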

Python cosine function precision [duplicate]

Duplicate of: Why does floating-point arithmetic not give exact results when adding decimal fractions?
From mathematics we know that the cosine of a 90-degree angle is 0, but Python says it's a bit more than that.
>>> import math
>>> math.cos(math.radians(90))
6.123233995736766e-17
What's the matter between Python and the number "0"?
Repeat after me:
Computers cannot process real numbers.
Python uses double precision IEEE floats, which round to 53 binary digits of precision and have limits on range. Since π/2 is an irrational number, the computer rounds it to the nearest representable number (or to a close representable number — some operations have exact rounding, some have error greater than 1/2 ULP).
Therefore, you never asked the computer to compute cos(π/2), you really asked it to compute cos(π/2+ε), where ε is the roundoff error for computing π/2. The result is then rounded again.
Why does Excel (or another program) show the correct result?
Possibility 1: The program does symbolic computations, not numeric ones. This applies to programs like Mathematica and Maxima, not Excel.
Possibility 2: The program is hiding the data (most likely). Excel will only show you the digits you ask for, e.g.,
>>> '%.10f' % math.cos(math.radians(90))
'0.0000000000'
Python has a finely tuned function for printing out floats so that they survive a round trip to text and back. This means that Python prints more digits by default than, for example, printf.
Possibility 3: The program you are using had two round-off errors that canceled.
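To make the roundoff term ε visible, here is a small sketch assuming mpmath (mentioned in the first question above) is available; the 30-digit working precision is an arbitrary choice:
import math
from mpmath import mp, mpf, pi
mp.dps = 30                          # enough digits to see the tiny error
eps = mpf(math.pi) / 2 - pi / 2      # roundoff in the double approximation of pi/2
print(eps)                           # about -6.12e-17: math.pi/2 is slightly below the true pi/2
print(math.cos(math.radians(90)))    # 6.123233995736766e-17, roughly -eps, since cos(pi/2 + eps) ≈ -eps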
As Dietrich points out, the included math module uses numerical approximations to calculate trig functions, and pi itself is only stored to float precision. But there are good Python packages for doing symbolic calculations too; SymPy is an easy way to do exact calculations, if you'd like.
Consider:
import math
math.cos( 3*math.pi/2 )
# Outputs -1.8369701987210297e-16
as opposed to
import sympy
sympy.cos( 3*sympy.pi/2 )
# Outputs 0
There aren't a lot of cases where this makes a difference, and sympy is considerably slower. I tested how many cosine calculations my computer could do in five seconds with math and with sympy, and it did 38 times more calculations with math. It depends on what you're looking for.

Miscalculation of big floating point numbers in Gcc, Python and Google calculator

Why should the results of these two expressions, (3 + 1e20) - 1e20 and 3 + (1e20 - 1e20), be different?
The same thing happens in GCC and Python. What is happening here? Is there any way to prevent it?
Floating point numbers have limited precision. If you add a small number (3) to a large number (1e20), the result often is the same as the large number. That is the case here, hence
(3 + 1e20) - 1e20 = 1e20 - 1e20 = 0
The precision of a double is roughly 15 decimal digits; a float has about 7 decimal digits of precision.
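A quick way to reproduce this in Python (plain floats, no extra libraries):
a = (3.0 + 1e20) - 1e20   # the 3 is absorbed by 1e20 before the subtraction
b = 3.0 + (1e20 - 1e20)   # the large terms cancel first, so the 3 survives
print(a, b)               # 0.0 3.0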
Although it's related to timestamps, the article “Don't store that in a float” gives a rough overview of the pitfalls you can run into when using floating point arithmetic, most importantly:
This real example demonstrates a few things:
Any time you add or subtract floats of widely varying magnitudes you need to watch for loss of precision
Sometimes using ‘double’ instead of ‘float’ is the correct solution, but often a more stable algorithm is more important
In the case where 10²⁰ is added to 3 first, the magnitudes differ so widely that, given the limited precision of doubles (about 15-16 significant decimal digits; about 7 for four-byte single-precision floats), the 3 simply gets lost in the result. If you instead first subtract 10²⁰ from itself, you get zero, and adding 3 to that loses nothing.
Such slight differences in operation ordering can become important in certain calculations and are something one should always bear in mind when dealing with floating point numbers on IEEE platforms. A simulation that ran fine for hours suddenly breaking for no apparent reason, or only when something specific happens, can easily be caused by floating point arithmetic.
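As for preventing it: one standard-library option (my suggestion, not something from the answers above) is math.fsum, which sums a sequence of floats while tracking the intermediate rounding errors, so the ordering problem disappears:
import math
print(math.fsum([3.0, 1e20, -1e20]))   # 3.0
print((3.0 + 1e20) - 1e20)             # 0.0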

Python Division Rounding

In Python, is there a good way to divide 1 over 0.99 immediately resulting in 1.01?
In other words, I am dividing a lot (and by a lot I mean millions) of doubles that are rounded to the second place after the decimal. If I divide 1 over 0.99 in Python I get 1.0101010101010102. All I need is the 1.01 portion of this number.
I am aware of the fact that I can round the result, but this is super slow in the context of my application. Is there a faster way to divide two numbers and get a result that is rounded already?
Thanks!
Use the built-in decimal module for fixed-point math.
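For example, a minimal sketch with decimal; the two-decimal quantize step and the rounding mode are my own choices, adjust as needed:
from decimal import Decimal, ROUND_HALF_UP
x = Decimal("1")
y = Decimal("0.99")
# quantize rounds the quotient to two decimal places in a single step
result = (x / y).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(result)   # 1.01
Note that Decimal arithmetic is usually slower than plain float math, so it is worth benchmarking it against round(x / y, 2) for your particular workload.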

Accurate trig in python [duplicate]

Duplicate of: Why does floating-point arithmetic not give exact results when adding decimal fractions?
Straight to the chase: I want to create a function which calculates the length of a step.
This is what I have:
def humanstep(angle):
    global human
    human[1] += math.sin(math.radians(angle)) * 0.06
    human[2] += math.cos(math.radians(angle)) * 0.06
So if the angle is 90 then the x value (human[1]) should equal 0.06 and the y value (human[2]) should equal 0.
Instead the conversion between radians and degrees is not quite perfect and these values are returned.
[5.99999999999999, 3.6739403974420643e-16]
Is there any way to fix this?
This is representation error due to how floating point arithmetic works. See the following page from the Python documentation: Floating Point Arithmetic: Issues and Limitations.
From the article:
Note that this is in the very nature of binary floating-point: this is not a bug in Python, and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).
For further reading, see the following pages:
The Perils of Floating Point
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Exactly how accurate do you want it? The above is accurate to 15 decimal places.
If you want accurate results, you are doing it correctly.
If you want mathematically exact results like [6, 0], use a symbolic math library such as sympy
Notice that these are very different goals.
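For instance, a small SymPy sketch for the 90° case in the question (keeping the 0.06 step as an exact Rational is my own choice):
import sympy
angle = 90
step = sympy.Rational(6, 100)            # 0.06 kept as the exact fraction 3/50
x = sympy.sin(sympy.rad(angle)) * step   # sin(pi/2) is exactly 1
y = sympy.cos(sympy.rad(angle)) * step   # cos(pi/2) is exactly 0
print(x, y)                              # 3/50 0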
You should read up on floating-point numbers, as these calculations are naturally imperfect and some numbers cannot be represented accurately using Python's floating point numbers. (Obviously, a fixed number of bits cannot represent the infinitely many real numbers.)
The short answer is no. You can round if you want.
As the others have said, it's floating point error. You can use the Decimal module, which can give you arbitrary-precision math.
If you want to avoid the representation issues inherent in floating-point numbers, you can use a Decimal, but you will need to implement your own trigonometric functions. This will get you arbitrary precision but it will be rather slow.
As everyone has said, this is due to binary representation issues: 0.06 is not a "round" figure in binary (just like 1/3 is not in decimal). It has nothing to do with the trig functions. If you drop them out and just do:
human[1]+=0.06
human[2]+=0.0
and look at the results you will see the same.
However, from Python 2.7 onwards the display of floating point numbers has been improved: it now shows the shortest decimal string that maps back to the same binary value. In this case I think it would display the answer you expect (I'm only running 2.5 here, so I can't quickly test). See this article for more information.
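To see the exact binary value hiding behind 0.06, one option (assuming Python 2.7+ or 3, where Decimal accepts a float directly) is:
from decimal import Decimal
# Decimal(float) prints the exact value the binary double stores; for 0.06 it is
# slightly below 0.06 (0.0599999999999999977...), which is why repeated additions drift.
print(Decimal(0.06))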
