Python's decimal module and the table maker's dilemma

According to the documentation, the .exp() operation in
Python's decimal module "is correctly rounded using ...".
Because of the table maker's dilemma, I hope that's not guaranteed, since I'd prefer a guarantee
that its computation on a normal-looking input with moderately low precision won't take, e.g., a year.
How does Python address this?
(Is it different between versions?)

The exp() and pow() functions are different.
The "table-maker's dilemma" explanation you link to states that xy cannot be correctly rounded by any known algorithm with a bounded amount of time. However, this is obviously not the case for all subsets of its domain. If we restrict the domain to x=3 and y=2, then I can tell you what the correctly-rounded answer is.
A quick Google search turns up Correctly-Rounded Exponential Function in Double-Precision Arithmetic, by David Defour, Florent de Dinechin, Jean-Michel Muller (CiteSeer, PDF). The article provides an algorithm for computing a correctly-rounded exp() and provides the worst-case bound on its running time.
This is not the radix=10 case, but it shows how the table-maker's dilemma does not necessarily apply to the exp() function.
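As a quick illustration of what "correctly rounded" means in practice for the decimal module (a small sketch; the digits printed are just the usual decimal expansion of e, rounded to the context precision):

from decimal import Decimal, getcontext

# Decimal.exp() is documented as correctly rounded: the result is as if
# computed exactly and then rounded once to the context precision.
getcontext().prec = 28          # the default precision
print(Decimal(1).exp())         # 2.718281828459045235360287471

# Raising the precision only changes where that single rounding happens.
getcontext().prec = 50
print(Decimal(1).exp())         # e correctly rounded to 50 digits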

Related

python interval arithmetic library

I would like to know if there exists a python interval arithmetic library with the following features:
If x and y are both intervals, e.g. [1.2,2.1] and [3.8,9.9], then their sum/product/ratio is an interval, and in particular, any real number in the first interval plus/times/over any real number in the second interval will be contained in the resulting interval.
In particular, I would like to know of such a library that does this and accounts for floating point error so that the results can be used in mathematical proofs. I have found some libraries such as this one: https://pythonhosted.org/uncertainties/ that account for error on real numbers, however they don't do so in this stricter sense that I require.
Thanks to TimPeters for suggesting this library:
https://mpmath.org/doc/current/contexts.html#arbitrary-precision-interval-arithmetic-iv
I think this should be the correct tool to use.
As suggested in comments, the OP appeared to be happy with the basic interval arithmetic facilities (binary floating point, with user-settable precision, emulated in software) supplied by the widely used mpmath extension library.
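For illustration, here is a minimal sketch of mpmath's interval context applied to the intervals from the question; the printed bounds are approximate, since the endpoints are rounded outward:

from mpmath import iv

iv.dps = 30                    # working precision in decimal digits

x = iv.mpf([1.2, 2.1])         # the interval [1.2, 2.1]
y = iv.mpf([3.8, 9.9])         # the interval [3.8, 9.9]

# Endpoints are rounded outward, so each result is guaranteed to contain
# (a op b) for every a in x and every b in y.
print(x + y)                   # roughly [5.0, 12.0]
print(x * y)                   # roughly [4.56, 20.79]
print(x / y)                   # roughly [0.1212..., 0.5527...]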

Norms in Python for floating point vs. Decimal (fixed-point)

Is it recommended to use Python's native floating point implementation, or its decimal implementation for use-cases where precision is important?
I thought this question would be easy to answer: if accumulated error has significant implications, e.g. perhaps in calculating orbital trajectories or the like, then an exact representation might make more sense.
I'm unsure what the norms are for run-of-the-mill deep learning use-cases, for scientific computing generally (e.g. many people use numpy or scikit-learn, which I think use floating-point implementations), and for financial computing (e.g. trading strategies).
Does anyone know the norms for floating point vs. Decimal use in python for these three areas?
Finance (Trading Strategies)
Deep Learning
Scientific Computing
Thanks
N.B.: This is /not/ a question about the difference between floating point and fixed-point representations, or why floating point arithmetic produces surprising results. This is a question about what norms are.
I know more about Deep Learning and Scientific Computing, but since my family runs a financing business, I think I can answer the question.
First and foremost, floating-point numbers are not evil; all you need to do is understand how much precision your project needs.
Finance
In finance, depending on the usage, you can use decimal or float numbers; also, different banks have different requirements. Generally, if you are dealing with cash or cash equivalents, you may use decimal, since the fractional monetary unit is known. For example, for dollars the fractional monetary unit is 0.01, so you can store the value as a decimal, and in the database you can use NUMBER(20,2) (Oracle) or something similar. The precision is enough, because banks had systematic ways to minimize errors from day one, even before computers appeared. The programmers only need to correctly implement what the bank's guidelines say.
For other things in finance, like analysis and interest rates, using double is enough. Here precision is not important, but simplicity matters. CPUs are optimized for floating-point numbers, so no special methods are needed for float arithmetic. Since computer arithmetic is a huge topic, using an optimized and well-tested way to perform a calculation is much safer than inventing your own. Plus, one or two float calculations will not have a huge impact on precision. For example, banks usually store a value as a decimal, multiply it by a float interest rate, and then convert back to decimal, as in the sketch below; this way, errors do not accumulate. Considering we only need two digits to the right of the decimal point, float precision is quite enough for such a computation.
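A minimal sketch of that pattern (the interest rate and the rounding mode here are made-up examples):

from decimal import Decimal, ROUND_HALF_EVEN

balance = Decimal("1234.56")   # cash stored exactly, two decimal places
rate = 0.0375                  # hypothetical float interest rate

# Multiply, then immediately quantize back to the fractional monetary
# unit, so no binary rounding error is carried forward.
interest = (balance * Decimal(str(rate))).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(interest)                # 46.30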
I have heard that investment banks use double in all of their systems, since they deal with very large amounts of cash. Thus in these banks, simplicity and performance matter more than precision.
Deep Learning
Deep Learning is one of the fields that does not need high precision but does need high performance. A neural network can have millions of parameters, so the precision of a single weight or bias will not affect the network's predictions. Instead, the network needs to compute very fast, so that it can train on a given dataset and produce predictions in a reasonable time. Plus, many accelerators can accelerate a specific type of float: half precision, i.e. fp16. To reduce the size of the network in memory and to speed up training and prediction, many neural networks therefore run in mixed precision. The framework and the accelerator driver decide which parameters can be computed in fp16 with minimal overflow and underflow risk, since fp16 has a pretty small range (roughly 10^-8 to 65504); the other parameters are still computed in fp32. On some edge devices the usable memory is very small (for example, the K210 and the Edge TPU have only about 8 MB of on-board SRAM), so neural networks need to use 8-bit fixed-point numbers to fit on them. Fixed-point numbers, unlike floating-point numbers, have a fixed number of digits after the (binary) point; they usually appear in the system as int8 or uint8.
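The range limits are easy to demonstrate with numpy's float16 (a small sketch):

import numpy as np

info = np.finfo(np.float16)
print(info.max)                # 65504.0, the largest finite fp16 value
print(info.tiny)               # ~6.1e-05, the smallest normal fp16 value

print(np.float16(70000.0))     # inf -> overflow
print(np.float16(1e-8))        # 0.0 -> underflow (below the subnormal range)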
Scientific Computation
The double type (i.e. a 64-bit floating-point number) usually meets scientists' needs in scientific computation. In addition, IEEE 754 also defines quad precision (128-bit) to facilitate scientific computation. Intel's x86 processors also have an 80-bit extended-precision format.
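On many x86 Linux builds, numpy exposes that extended format as np.longdouble (a sketch; what you actually get is platform-dependent):

import numpy as np

# On x86 Linux this is typically the 80-bit extended format with a
# 64-bit mantissa; on Windows it is usually just a 64-bit double.
info = np.finfo(np.longdouble)
print(info.precision)          # ~18 decimal digits for 80-bit extended
print(info.max)                # ~1.19e+4932 for 80-bit extended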
However, some scientific computations need arbitrary-precision arithmetic. For example, computing pi or running astronomical simulations requires very high precision. For that, something different is needed: arbitrary-precision floating-point numbers. One of the most famous libraries supporting them is the GNU Multiple Precision Arithmetic Library (GMP), which stores each number's digits across multiple machine words and implements the arithmetic on them in software.
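In Python, GMP is available through the gmpy/gmpy2 bindings, and the pure-Python mpmath library (mentioned elsewhere on this page) offers similar arbitrary-precision floats. A tiny sketch with mpmath:

from mpmath import mp

mp.dps = 50                    # 50 significant decimal digits
print(mp.pi)                   # 3.1415926535897932384626433832795028841971693993751
print(mp.sqrt(2))              # sqrt(2) to 50 digits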
In general, standard floating-point numbers are designed fairly well and elegantly. As long as you understand your needs, floating-point numbers are adequate for most uses.

Using 'Decimal' numbers with scipy?

I have numbers too large to use with Python's built-in types and as such am using the decimal library. I want to use scipy.optimize.brentq with a function that operates on Decimals, but when the function returns a Decimal it obviously cannot be used with the optimisation function's float-based internals. How can I get around this: how can I use scipy optimisation techniques with the Decimal class for big numbers?
You can't. Scipy relies heavily on numerical algorithms that only deal with true numeric data types, and it can't handle the Decimal class.
As a general rule of thumb: If your problem is well-defined and well-conditioned (that's something that numerical mathematicians define), you can just scale it so that it fits into normal python floats, and then you can apply scipy functionality to it.
If your problem, however, involves very small numbers as well as numbers that can't fit into a float, there's usually little you can do about it numerically: it's hard to find a good solution.
If, however, your function only returns values that would fit into float, then you could just use
lambda x: float(your_function(x))
instead of your_function in brentq.
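For example, a minimal sketch with a hypothetical Decimal-based function (here f(x) = x^2 - 2, so the root is sqrt(2)):

from decimal import Decimal, getcontext
from scipy.optimize import brentq

getcontext().prec = 50

def f(x):
    # hypothetical function that does its work in Decimal internally
    return Decimal(x) ** 2 - Decimal(2)

# brentq only ever sees floats; the Decimal result is converted back
# to float at the boundary.
root = brentq(lambda x: float(f(x)), 0.0, 2.0)
print(root)                    # ~1.4142135623730951

Note that the conversion caps the achievable accuracy at float precision, which is the point of the caveats above.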

Deep zoom on a function such as Weierstrass functions - working example or library or starter code for very-high-precision floating point math?

This link shows information, including a "zoom into" the detail of a subsection of a Weierstrass function. The zoom stops, and the notes say that the software used to evaluate f(x) is hitting the limits of (I'm guessing) the most precise floating-point type on the system.
My question is: Is there any Python code that can create a "deep zoom" plot of a function and go beyond the limitations of the floating point type (even if it's very very slow)? What would I use? BigNum? Something else? I'm hoping someone has already made a "plot tool" that could analyze a Weierstrass function at a resolution finer than floating point math.
Since the function is an endless series, one would have to evaluate it up to X terms of the expression, so the deeper you zoom in, the more terms have to be evaluated. Secondly, if it is possible (with some careful work) to make a special-case evaluator that plots the Weierstrass function shown above to any arbitrary precision without requiring specialized data types (that is, working within floating-point limits), then that would also be pretty great.
Try gmpy, which provides Python bindings to the GNU multiple-precision library. It can handle the range "roughly 2^-68719476768 to 2^68719476736" on a 32-bit system.
Python's Decimal type will give you arbitrary precision floats. matplotlib will allow you to create plots.
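As a starting point, here is a sketch that evaluates a truncated Weierstrass series f(x) = sum of a^n * cos(b^n * pi * x) with mpmath at arbitrary precision; the parameters a, b and the term count are arbitrary illustrative choices:

from mpmath import mp, mpf, cos, pi

# The working precision must comfortably exceed the number of digits in
# b**terms, or the cosine arguments lose their fractional part.
mp.dps = 80

def weierstrass(x, a=mpf("0.5"), b=3, terms=100):
    # Truncated series: f(x) = sum of a**n * cos(b**n * pi * x).
    # The deeper the zoom, the more terms (and digits) are needed.
    x = mpf(x)
    return sum(a**n * cos(b**n * pi * x) for n in range(terms))

print(weierstrass("0.25"))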

Fixed-point arithmetic

Does anyone know of a library for doing fixed-point arithmetic in Python?
Or does anyone have sample code?
If you are interested in doing fixed point arithmetic, the Python Standard Library has a decimal module that can do it.
Actually, it has a more flexible floating-point ability than the built-in float type, too. By flexible I mean that it:
Has "signals" for various exceptional conditions (these can be set to do a variety of things on signaling)
Has positive and negative
infinities, as well as NaN (not a
number)
Can differentiate between positive
and negative 0
Allows you to set different rounding
schemes.
Allows you to set your own min and
max values.
All in all, it is handy for a million household uses.
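A minimal sketch of emulating two-decimal-place fixed-point arithmetic with decimal's quantize (the rounding mode here is just an example):

from decimal import Decimal, ROUND_HALF_UP

TWOPLACES = Decimal("0.01")

def fix(x):
    # Round every intermediate result to two decimal places, which is
    # what a fixed-point representation does implicitly.
    return x.quantize(TWOPLACES, rounding=ROUND_HALF_UP)

a = fix(Decimal("3.14159"))    # 3.14
b = fix(Decimal("2.71828"))    # 2.72
print(fix(a * b))              # 8.54 (3.14 * 2.72 = 8.5408)
print(a + b)                   # 5.86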
The deModel package sounds like what you're looking for.
Another option worth considering, if you want to simulate the behaviour of binary fixed-point numbers beyond simple arithmetic operations, is the spfpm module. It will allow you to calculate square roots, powers, logarithms and trigonometric functions using fixed numbers of bits. It's a pure-Python module, so it doesn't offer the ultimate performance, but it can do hundreds of thousands of arithmetic operations per second on 256-bit numbers.
Recently I have been working on a similar project: https://numfi.readthedocs.io/en/latest/
>>> import numpy as np
>>> from numfi import numfi
>>> x = numfi(0.68751,1,6,3)
>>> x + 1/3
numfi([1.125]) s7/3-r/s
>>> np.sin(x)
numfi([0.625 ]) s6/3-r/s
