Fixed-point arithmetic - Python

Does anyone know of a library for doing fixed-point arithmetic in Python?
Or does anyone have sample code?

If you are interested in doing fixed-point arithmetic, the Python standard library has a decimal module that can do it.
In fact, it has a more flexible floating-point capability than the built-in float, too. By flexible I mean that it:
Has "signals" for various exceptional conditions (these can be set to do a variety of things on signaling)
Has positive and negative infinities, as well as NaN (not a number)
Can differentiate between positive and negative zero
Allows you to set different rounding schemes
Allows you to set your own minimum and maximum exponent values
All in all, it is handy for a million household uses.
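A minimal sketch of those features, using only the standard library's decimal module:
from decimal import Decimal, getcontext, ROUND_HALF_EVEN, Inexact

ctx = getcontext()
ctx.prec = 6                     # working precision in significant digits
ctx.rounding = ROUND_HALF_EVEN   # one of several selectable rounding schemes
ctx.Emax, ctx.Emin = 99, -99     # user-settable exponent limits

print(Decimal(1) / Decimal(3))   # 0.333333 (6 significant digits)
print(ctx.flags[Inexact])        # True: the Inexact "signal" fired above
print(Decimal('-0'))             # -0 (negative zero is preserved)
print(Decimal('-Infinity') < 0)  # True: signed infinities compare sensibly
print(Decimal('NaN').is_nan())   # True

# Fixed-point style: quantize to a fixed number of decimal places
print(Decimal('7.12501').quantize(Decimal('0.01')))  # 7.13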

The deModel package sounds like what you're looking for.

Another option worth considering, if you want to simulate the behaviour of binary fixed-point numbers beyond simple arithmetic operations, is the spfpm module. It will allow you to calculate square roots, powers, logarithms and trigonometric functions using fixed numbers of bits. It's a pure-Python module, so it doesn't offer the ultimate performance, but it can do hundreds of thousands of arithmetic operations per second on 256-bit numbers.
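A brief sketch of how that looks; the import and class names below (FXfamily, FXnum) follow spfpm's documented API, but treat the exact names as an assumption if your version differs:
from FixedPoint import FXfamily, FXnum  # spfpm's module is named FixedPoint

fam = FXfamily(n_bits=64)     # a family with 64 fraction bits
x = FXnum(2, fam)
print(x.sqrt())               # square root of 2 in 64-bit fixed point
print(FXnum(0.5, fam).exp())  # e**0.5, likewise in fixed point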

I've recently been working on a similar project: https://numfi.readthedocs.io/en/latest/
>>> from numfi import numfi
>>> import numpy as np
>>> x = numfi(0.68751,1,6,3)
>>> x + 1/3
numfi([1.125]) s7/3-r/s
>>> np.sin(x)
numfi([0.625 ]) s6/3-r/s

Related

Python interval arithmetic library

I would like to know if there exists a Python interval arithmetic library with the following features:
If x and y are both intervals, e.g. [1.2, 2.1] and [3.8, 9.9], then their sum/product/ratio is an interval, and in particular, any real number in the first interval plus/times/over any real number in the second interval will be contained in the resulting interval.
In particular, I would like to know of such a library that does this and accounts for floating point error, so that the results can be used in mathematical proofs. I have found some libraries, such as https://pythonhosted.org/uncertainties/, that account for error on real numbers; however, they don't do so in the stricter sense that I require.
Thanks to TimPeters for suggesting this library:
https://mpmath.org/doc/current/contexts.html#arbitrary-precision-interval-arithmetic-iv
I think this should be the correct tool to use.
As suggested in comments, the OP appeared to be happy with the basic interval arithmetic facilities (binary floating point, with user-settable precision, emulated in software) supplied by the widely used mpmath library.
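A quick sketch of mpmath's interval context (iv); endpoints are rounded outwards, so the true result is always contained in the computed interval:
from mpmath import iv

iv.dps = 30                # working precision in decimal digits
x = iv.mpf([1.2, 2.1])
y = iv.mpf([3.8, 9.9])
print(x + y)               # contains a + b for every a in x, b in y
print(x * y)
print(x / y)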

Encoding float constants as extremely long binary strings

Recently, I've been trying to implement the 15 tests for randomness described in NIST SP800-22. As a check of my implementation, I have been running the examples that the NIST document provides for each of its tests. Some of these tests require bit strings that are very long (up to a million bits). For example, on one of the examples, the input is "the first 100,000 bits of e". That brings up the question: how do I generate a bit representation of a float value that exceeds the precision available for floating point numbers in Python?
I have found articles converting integers to binary strings (the bin() function), and converting floating point fractions to binary (repeated division by 2 (slow!) and limited by floating point precision). I've considered constructing it iteratively in some way using $e=\sum_{n=0}^{\infty}\frac{2n+2}{(2n+1)!}$, calculating the next portion value, converting it to a binary representation, and somehow adding it to the cumulative representation (still thinking through how to do this). However, I've hit the same wall going down this path: the precision of the floating point values as I go farther out on this sum.
Does anyone have some suggestions on creating arbitrarily long bit strings from arbitrarily precise floating point values?
PS - Also, is there any way to get my Markdown math equation above to render properly here? :-)
I maintain the gmpy2 library and it supports arbitrary-precision binary arithmetic. Here is an example of generating the first 100 bits of e.
>>> import gmpy2
>>> gmpy2.get_context().precision=100
>>> gmpy2.exp(1).digits(2)[0]
'1010110111111000010101000101100010100010101110110100101010011010101011111101110001010110001000000010'
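The same idea scales to the NIST example's "first 100,000 bits of e"; a sketch, with a few guard bits of extra precision as a precaution:
>>> import gmpy2
>>> gmpy2.get_context().precision = 100000 + 10  # guard bits
>>> bits = gmpy2.exp(1).digits(2)[0][:100000]
>>> len(bits)
100000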

Floating point subtraction in Python [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
I'm trying to subtract two floating point numbers in Python.
I have the values
a = 1460356156116843.000000, b = 2301.93138123
When I try to print a - b, it gives 1460356156114541.000000, contrary to the actual value 1460356156114541.06861877.
What are the limitations of floating point arithmetic in Python? Is there any way I can get the actual result of this subtraction?
Python has the same limitations for floating point arithmetic as all the other languages. You can use Decimal to get the accurate result:
from decimal import Decimal
a = Decimal('1460356156116843.000000')
b = Decimal('2301.93138123')
print(a - b)  # 1460356156114541.06861877
Python uses IEEE 754 doubles for its floats. So you should treat anything after 15 significant figures or so as science fiction. And that's just for a freshly-initialised number. When you start doing operations with floats you can lose more precision, especially doing addition or subtraction between numbers that differ significantly in absolute magnitude.
OTOH, doing subtraction between numbers very close to each other in magnitude can lead to catastrophic cancellation.
If you are careful, you can reduce the impact of these problems, but you do need a good understanding of how floating-point arithmetic works, and well-behaved data.
Alternatively, you can work with a library that provides higher precision, e.g. Python's Decimal module. You still need to take care to avoid catastrophic cancellation and the other problems that lead to loss of significance, but at least you've got more significant digits to play with.
The Decimal module just provides basic arithmetic operations. If you need advanced mathematical functions like trig and exponential functions, take a look at the excellent 3rd-party arbitrary precision mathematics module mpmath. It can handle complex numbers, solve equations, and provides some calculus operations.
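Since mpmath came up, here is a quick sketch of the same subtraction at higher precision (30 significant digits is an arbitrary choice here):
from mpmath import mp, mpf

mp.dps = 30                        # 30 significant decimal digits
a = mpf('1460356156116843.000000')
b = mpf('2301.93138123')
print(a - b)                       # 1460356156114541.06861877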
Using Decimal is convenient. But for the sake of demonstrating the importance of keeping significant digits, let me throw in this example.
import sys
print(sys.maxsize)  # 9223372036854775807 on a 64-bit machine; Python ints can grow beyond this as needed
So for the above case you can do the computation in two steps.
1460356156116842 - 2301 = 1460356156114541  # all integer digits preserved (one unit borrowed from a to cover the fraction)
1 - 0.93138123 = 0.06861877  # all significant fractional digits preserved
So the answer would be the sum of the two. But if you add them back into a single float, you lose the fractional digits: a 64-bit float's 53-bit mantissa is not wide enough to keep all the digits at once.
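A minimal sketch of that two-step idea in code (the borrow of one unit from a's integer part is spelled out in the comments):
int_part = 1460356156116842 - 2301  # exact: Python ints have arbitrary precision
frac_part = 1 - 0.93138123          # small floats keep their leading significant digits
print(int_part)                     # 1460356156114541
print(frac_part)                    # approximately 0.06861877
# Recombining into one float rounds the fraction away again:
print(int_part + frac_part)         # 1460356156114541.0 (float spacing here is 0.25)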

Using 'Decimal' numbers with scipy?

I have numbers too large to use with Python's inbuilt types and as such am using the decimal library. I want to use scipy.optimize.brentq with a function that operates on Decimals, but when the function returns a Decimal it obviously cannot be used with the optimisation function's float-based internals. How can I get around this? How can I use scipy optimisation techniques with the Decimal class for big numbers?
You can't. Scipy heavily relies on numerical algorithms that only deal with true numerical data types, and can't deal with the decimal class.
As a general rule of thumb: If your problem is well-defined and well-conditioned (that's something that numerical mathematicians define), you can just scale it so that it fits into normal python floats, and then you can apply scipy functionality to it.
If your problem, however, involves very small numbers as well as numbers that can't fit into float, there's little you can numerically do about that problem usually: It's hard to find a good solution.
If, however, your function only returns values that would fit into float, then you could just use
lambda x: float(your_function(x))
instead of your_function in brentq.
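Concretely, a small sketch of that wrapper; my_decimal_func here is a hypothetical stand-in for the OP's Decimal-returning function:
from decimal import Decimal, getcontext
from scipy.optimize import brentq

getcontext().prec = 50

def my_decimal_func(x):
    # Toy example computed in Decimal: a root of x**2 - 2
    d = Decimal(x)
    return d * d - 2

# brentq hands the function plain floats and expects floats back,
# so convert at the boundary:
root = brentq(lambda x: float(my_decimal_func(x)), 0.0, 2.0)
print(root)  # approximately 1.4142135623730951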

Accurate trig in Python [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 8 years ago.
Straight to the chase: I want to create a function which calculates the length of a step.
This is what I have:
import math

def humanstep(angle):
    global human
    human[1] += math.sin(math.radians(angle)) * 0.06
    human[2] += math.cos(math.radians(angle)) * 0.06
So if the angle is 90 then the x value (human[1]) should equal 0.06 and the y value equal 0.
Instead the conversion between radians and degrees is not quite perfect and these values are returned:
[5.99999999999999, 3.6739403974420643e-16]
Is there any way to fix this?
This is representation error due to how floating point arithmetic works. See the following page from the Python documentation: Floating Point Arithmetic: Issues and Limitations.
From the article:
Note that this is in the very nature of binary floating-point: this is not a bug in Python, and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).
For further reading, see the following pages:
The Perils of Floating Point
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Exactly how accurate do you want? The above is accurate to 15 decimal places.
If you want accurate results, you are doing it correctly.
If you want mathematically exact results like [6, 0], use a symbolic math library such as sympy
Notice that these are very different goals.
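For example, a short sketch with sympy, where the pi-based degree-to-radian conversion keeps everything symbolic and exact:
import sympy as sp

angle = 90
step = sp.Rational(6, 100)              # 0.06 as an exact rational
x = sp.sin(angle * sp.pi / 180) * step  # sin(pi/2) = 1, exactly
y = sp.cos(angle * sp.pi / 180) * step  # cos(pi/2) = 0, exactly
print(x, y)                             # 3/50 0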
You should read up on floating-point numbers, as these calculations are naturally imperfect and some numbers cannot be represented accurately using Python's floating point numbers. (Obviously, a fixed number of bits cannot represent the infinitely many real numbers.)
The short answer is no. You can round if you want.
As the others have said, it's floating point error. You can use the Decimal module, which can give you arbitrary-precision math.
If you want to avoid the representation issues inherent in floating-point numbers, you can use a Decimal, but you will need to implement your own trigonometric functions. This will get you arbitrary precision but it will be rather slow.
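For instance, a sine function built from its Taylor series, adapted from the recipe in the decimal module's documentation; slow, but as precise as the context allows:
from decimal import Decimal, getcontext

def dec_sin(x):
    # Sine of Decimal x (in radians) via its Taylor series.
    getcontext().prec += 2  # guard digits during the loop
    i, last, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != last:        # stop once the sum no longer changes
        last = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += sign * num / fact
    getcontext().prec -= 2
    return +s               # unary plus rounds to the restored precision

print(dec_sin(Decimal('1.5707963267948966192313216916')))  # approximately 1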
As everyone has said, this is to do with binary representation issues - 0.06 is not a "round" figure in binary (just like 1/3 is not in decimal). It has nothing to do with the trig functions. If you drop them out and just do:
human[1] += 0.06
human[2] += 0.0
and look at the results, you will see the same.
However, from Python 2.7 onwards, the representation of floating point numbers has been improved. Python will now display the shortest decimal number that maps to that binary value - in this case I think it would display the answer you expect (I'm only running 2.5 here, so I can't quickly test). See this article for more information.
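To illustrate that repr change on a modern Python (the exact digits of the long form depend on the stored binary value):
x = 0.0
x += 0.06
print(x)                  # 0.06 - the shortest string that round-trips
print(format(x, '.20f'))  # reveals the underlying binary approximation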
