Python interval arithmetic library

I would like to know if there exists a python interval arithmetic library with the following features:
If x and y are both intervals, e.g. [1.2, 2.1] and [3.8, 9.9], then their sum/product/ratio should be an interval, and in particular, any real number in the first interval plus/times/over any real number in the second interval must be contained in the resulting interval.
In particular, I would like to know of such a library that does this while accounting for floating-point error, so that the results can be used in mathematical proofs. I have found some libraries, such as this one: https://pythonhosted.org/uncertainties/, that account for error on real numbers; however, they don't do so in the stricter sense that I require.
Thanks to TimPeters for suggesting this library:
https://mpmath.org/doc/current/contexts.html#arbitrary-precision-interval-arithmetic-iv
I think this should be the correct tool to use.

As suggested in comments, the OP appeared to be happy with the basic interval arithmetic facilities (binary floating point, with user-settable precision, emulated in software) supplied by the widely used mpmath extension library.
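For reference, a minimal sketch of mpmath's iv interval context on the question's example intervals (the precision setting here is an arbitrary choice):
from mpmath import iv

iv.dps = 15                    # working precision, in decimal digits
x = iv.mpf([1.2, 2.1])         # endpoints are rounded outward on conversion
y = iv.mpf([3.8, 9.9])
print(x + y)                   # encloses a + b for every a in x, b in y
print(x * y)                   # likewise for products
print(x / y)                   # and quotients (y must not contain 0)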

Related

How to use trig functions for very high accuracy in GMP/MPFR in Python and C?

I need to evaluate the tangent of angles between 0 and pi in steps of pi/20 with a precision of 40 digits, using gmpy2. I don't think importing pi from numpy or the standard math library is sufficient. I need 40 digits, so 133 bits of precision. I set gmpy2.get_context().precision=133
import gmpy2
from gmpy2 import mpfr, tan
from numpy import pi   # numpy's pi is only a 53-bit double

gmpy2.get_context().precision = 133
for i in range(1, 11):
    print(tan(mpfr(i * pi / 20)))
Well, this seems to be a school assignment, so I'll just give you some hints about how to solve the problem.
There is neither gmp_sin(3), gmp_cos(3) nor gmp_tan(3) in the GNU GMP library, so you'll have to implement those yourself. You can use a Taylor series approximation; you will need to work out how many terms are required to keep the error within your specification.
You can then use your gmp_sin(3) to compute an approximation of pi that is accurate enough.
Once you have this, producing your table should present no problem.
This is a good programming challenge for calculus students; don't hesitate to work on it, and you'll be satisfied with the result if you see it through.
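As an illustration of the Taylor approach only (the term count and guard bits below are rough choices, not derived error bounds), using gmpy2's mpfr type:
import gmpy2
from gmpy2 import mpfr

gmpy2.get_context().precision = 150        # guard bits beyond the 133 needed

def taylor_sin(x, terms=60):
    # sin(x) = sum_{n>=0} (-1)**n * x**(2n+1) / (2n+1)!
    x = mpfr(x)
    total = mpfr(0)
    term = x                               # the n = 0 term is x itself
    for n in range(terms):
        total += term
        # ratio of consecutive terms: -x**2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total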
You cannot use numpy.pi because it does not have the precision you need.
Luckily, there is a const_pi function in gmpy2 (https://gmpy2.readthedocs.io/en/latest/mpfr.html#mpfr-functions).
const_pi(...)
    const_pi([precision=0]) returns the constant pi using the specified
    precision. If no precision is specified, the default precision is used.
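Putting the pieces together, the question's loop might then look like this (the guard-bit margin is my own choice):
import gmpy2
from gmpy2 import const_pi, tan

gmpy2.get_context().precision = 140   # a few guard bits beyond the 133 needed
pi = const_pi()                       # pi computed at the full working precision
for i in range(1, 11):
    print(tan(i * pi / 20))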

Encoding float constants as extremely long binary strings

Recently, I've been trying to implement the 15 tests for randomness described in NIST SP800-22. As a check of my function implementations, I have been running the examples that the NIST document provides for each of its tests. Some of these tests require bit strings that are very long (up to a million bits). For example, in one of the examples, the input is "the first 100,000 bits of e." That brings up the question: how do I generate a bit representation of a float value that exceeds the precision available for floating point numbers in Python?
I have found articles converting integers to binary strings (the bin() function), and converting floating point fractions to binary (repeated division by 2 (slow!) and limited by floating point precision). I've considered constructing it iteratively in some way using $e=\sum_{n=0}^{\infty}\frac{2n+2}{(2n+1)!}$, calculating the next portion value, converting it to a binary representation, and somehow adding it to the cumulative representation (still thinking through how to do this). However, I've hit the same wall going down this path: the precision of the floating point values as I go farther out on this sum.
Does anyone have some suggestions on creating arbitrarily long bit strings from arbitrarily precise floating point values?
PS - Also, is there any way to get my Markdown math equation above to render properly here? :-)
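For what it's worth, the exact-rational route sketched above can be carried out with fractions.Fraction, reading bits off the fractional part by repeated doubling (the term count below is a loose choice, far more than 100 bits need):
from fractions import Fraction
from math import factorial

def e_as_fraction(terms=40):
    # e = sum_{n>=0} (2n+2)/(2n+1)!, summed exactly as a rational number
    return sum(Fraction(2 * n + 2, factorial(2 * n + 1)) for n in range(terms))

def fractional_bits(x, nbits):
    # Binary digits of the fractional part of x, by repeated doubling.
    x -= int(x)
    out = []
    for _ in range(nbits):
        x *= 2
        bit = int(x)
        out.append(str(bit))
        x -= bit
    return ''.join(out)

print(fractional_bits(e_as_fraction(), 100))   # bits of e after the binary point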
I maintain the gmpy2 library, and it supports arbitrary-precision binary arithmetic. Here is an example of generating the first 100 bits of e.
>>> import gmpy2
>>> gmpy2.get_context().precision=100
>>> gmpy2.exp(1).digits(2)[0]
'1010110111111000010101000101100010100010101110110100101010011010101011111101110001010110001000000010'
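Scaling the same approach up to the question's "first 100,000 bits of e" might look like this (the guard-bit margin is my own rough choice):
import gmpy2

gmpy2.get_context().precision = 100_000 + 64     # target bits plus some guard bits
bits = gmpy2.exp(1).digits(2)[0][:100_000]       # first 100,000 mantissa bits of e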

Python's decimal module and the table maker's dilemma

According to the documentation, the .exp() operation in
Python's decimal module "is correctly rounded using ...".
Because of the table maker's dilemma, I hope that's not guaranteed, since I'd prefer a guarantee
that its computation on a normal-looking input with moderately low precision won't take, e.g., a year.
How does Python address this?
(Is it different between versions?)
The exp() and pow() functions are different.
The "table-maker's dilemma" explanation you link to states that xy cannot be correctly rounded by any known algorithm with a bounded amount of time. However, this is obviously not the case for all subsets of its domain. If we restrict the domain to x=3 and y=2, then I can tell you what the correctly-rounded answer is.
A quick Google search turns up Correctly-Rounded Exponential Function in Double-Precision Arithmetic, by David Defour, Florent de Dinechin, Jean-Michel Muller (CiteSeer, PDF). The article provides an algorithm for computing a correctly-rounded exp() and provides the worst-case bound on its running time.
This is not the radix=10 case, but it shows how the table-maker's dilemma does not necessarily apply to the exp() function.
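For what it's worth, exercising decimal's exp() directly is easy (the precision value here is arbitrary):
from decimal import Decimal, getcontext

getcontext().prec = 50              # 50 significant digits
print(Decimal(1).exp())             # e, correctly rounded to the context precision
print(Decimal('0.5').exp())         # exp(0.5) at the same precision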

Deep zoom on a function such as Weierstrass functions - working example or library or starter code for very-high-precision floating point math?

This link shows information, including a "zoom in" on the detail of a subsection of a Weierstrass function. It stops, and the notes say that the software used to analyze the values of f(x) is hitting the limits of (I'm guessing) the most precise floating point type on the system.
My question is: Is there any Python code that can create a "deep zoom" plot of a function and go beyond the limitations of the floating point type (even if it's very very slow)? What would I use? BigNum? Something else? I'm hoping someone has already made a "plot tool" that could analyze a Weierstrass function at a resolution finer than floating point math.
Since the function is an endless series, one would have to evaluate it up to X terms of the expression, so the deeper you zoom in, the more terms have to be evaluated. Secondly, if it is possible (with some careful work) to make a special-case evaluator that plots the Weierstrass function shown above to any arbitrary precision without requiring specialized data types (that is, working within floating point limits), then that would also be pretty great.
Try gmpy, which provides Python bindings to the GNU multiprecision library. It can handle the range "roughly 2^-68719476768 to 2^68719476736" on a 32 bit system.
Python's Decimal type will give you arbitrary precision floats. matplotlib will allow you to create plots.
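As a rough sketch of the slow-but-arbitrarily-precise route (using mpmath rather than Decimal; the parameters a and b, the digit count, and the truncation point are all illustrative choices), here is the classical series W(x) = sum_{n>=0} a^n * cos(b^n * pi * x):
from mpmath import mp, mpf, cos, pi

# Generous precision: cos(b**n * pi * x) loses roughly n*log10(b) digits
# of argument accuracy, so the working precision must cover that.
mp.dps = 120

def weierstrass(x, a=mpf('0.5'), b=3, terms=220):
    # Truncated series; with a = 0.5 the n = 220 term is below 1e-66,
    # so the truncation error is negligible at this precision.
    x = mpf(x)
    return sum(a**n * cos(b**n * pi * x) for n in range(terms))

# "Deep zoom" is just sampling on a very narrow window, e.g. width 1e-30:
center, width = mpf('0.5'), mpf('1e-30')
ys = [weierstrass(center + width * k / 100) for k in range(-50, 51)]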

Fixed-point arithmetic

Does anyone know of a library for doing fixed-point arithmetic in Python?
Or does anyone have sample code?
If you are interested in doing fixed-point arithmetic, the Python standard library has a decimal module that can do it.
Actually, it has a more flexible floating-point capability than the built-in float type, too. By flexible I mean that it:
Has "signals" for various exceptional conditions (these can be set to do a variety of things on signaling)
Has positive and negative
infinities, as well as NaN (not a
number)
Can differentiate between positive
and negative 0
Allows you to set different rounding
schemes.
Allows you to set your own min and
max values.
All in all, it is handy for a million household uses.
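A minimal sketch of the fixed-point pattern with decimal (the two-decimal-place grid and the names here are just illustrative):
from decimal import Decimal, ROUND_HALF_UP

CENTS = Decimal('0.01')                       # fixed-point step: two decimal places

def to_fixed(x):
    # Round any intermediate result back onto the 0.01 grid.
    return x.quantize(CENTS, rounding=ROUND_HALF_UP)

price = Decimal('19.99')
tax = to_fixed(price * Decimal('0.0825'))     # 1.65, rounded half up
total = to_fixed(price + tax)                 # 21.64
print(tax, total)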
The deModel package sounds like what you're looking for.
Another option worth considering if you want to simulate the behaviour of binary fixed-point numbers beyond simple arithmetic operations, is the spfpm module. That will allow you to calculate square-roots, powers, logarithms and trigonometric functions using fixed numbers of bits. It's a pure-python module, so doesn't offer the ultimate performance but can do hundreds of thousands of arithmetic operations per second on 256-bit numbers.
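If memory serves, the spfpm API looks roughly like the sketch below; treat the exact module and class names (FixedPoint, FXfamily, FXnum) as assumptions to check against the spfpm docs:
from FixedPoint import FXfamily, FXnum   # assumption: spfpm installs a module named FixedPoint

fam = FXfamily(256)            # a family with 256 fraction bits
root2 = FXnum(2, fam).sqrt()   # sqrt(2) computed in fixed point
print(root2)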
Recently I've been working on a similar project: https://numfi.readthedocs.io/en/latest/
>>> import numpy as np
>>> from numfi import numfi
>>> x = numfi(0.68751,1,6,3)
>>> x + 1/3
numfi([1.125]) s7/3-r/s
>>> np.sin(x)
numfi([0.625 ]) s6/3-r/s
