sympy.Rational vs fractions.Fraction Comparison - python

Are there differences? If so, what are they? What are the advantages and disadvantages? If there are other implementations of rational numbers in Python, feel free to include those in the comparison.

If you are just looking for an implementation of rational numbers for ordinary numeric calculations, then it is better to use fractions.Fraction, which is intended purely for that case. SymPy's Rational class is somewhat different because it is part of a framework of symbolic expressions, and that carries some overhead compared to a plain implementation of simple numbers. Compared to the alternatives, it only really makes sense to use SymPy's Rational as part of symbolic calculations with SymPy.
Internally, SymPy itself will use its QQ type for more efficient numeric calculations with rational numbers. If gmpy2 is installed, then QQ refers to gmpy2.mpq, which is much more efficient than either Rational or Fraction. If gmpy2 is not installed, then QQ refers to SymPy's internal PythonMPQ type, which is similar to Fraction.
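For illustration, a minimal sketch of all three in use (only the imports shown are assumed; QQ(1, 3) gives a gmpy2.mpq if gmpy2 is installed and a PythonMPQ otherwise, so the last repr depends on your environment):
>>> from fractions import Fraction
>>> from sympy import Rational, QQ, symbols
>>> Fraction(1, 3) + Fraction(1, 6)         # plain exact arithmetic, no symbolic overhead
Fraction(1, 2)
>>> x = symbols('x')
>>> Rational(1, 3) * x + Rational(1, 6)     # stays exact inside a symbolic expression
x/3 + 1/6
>>> QQ(1, 3) + QQ(1, 6)                     # SymPy's internal rational domain element
mpq(1,2)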

Related

Encoding float constants as extremely long binary strings

Recently, I've been trying to implement the 15 tests for randomness described in NIST SP800-22. As a check of my function implementations, I have been running the examples that the NIST document provides for each of its tests. Some of these tests require bit strings that are very long (up to a million bits). For example, for one of the examples, the input is "the first 100,000 bits of e." That brings up the question: how do I generate a bit representation of a float value that exceeds the precision available for floating point numbers in Python?
I have found articles on converting integers to binary strings (the bin() function), and on converting floating point fractions to binary (repeated division by 2 (slow!) and limited by floating point precision). I've considered constructing it iteratively using $e=\sum_{n=0}^{\infty}\frac{2n+2}{(2n+1)!}$: calculating the value of the next term, converting it to a binary representation, and somehow adding it to the cumulative representation (still thinking through how to do this). However, I've hit the same wall going down this path: the precision of the floating point values as I go farther out in this sum.
Does anyone have some suggestions on creating arbitrarily long bit strings from arbitrarily precise floating point values?
PS - Also, is there any way to get my Markdown math equation above to render properly here? :-)
I maintain the gmpy2 library, and it supports arbitrary-precision binary arithmetic. Here is an example of generating the first 100 bits of e.
>>> import gmpy2
>>> gmpy2.get_context().precision=100
>>> gmpy2.exp(1).digits(2)[0]
'1010110111111000010101000101100010100010101110110100101010011010101011111101110001010110001000000010'
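The same idea scales up to the bit counts in the NIST examples. A sketch (how many guard bits to add beyond the 100,000 you keep is a judgment call):
>>> gmpy2.get_context().precision = 100_050      # some guard precision beyond the 100,000 bits wanted
>>> bits = gmpy2.exp(1).digits(2)[0][:100_000]   # first 100,000 mantissa bits of e
>>> len(bits)
100000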

floating point subtraction in python [duplicate]

This question already has answers here: Is floating point math broken? (31 answers). Closed 6 years ago.
I'm trying to subtract two floating point numbers in Python.
I have the values
a = 1460356156116843.000000, b = 2301.93138123
When I try to print a - b, the result is 1460356156114541.000000 rather than the actual value 1460356156114541.06861877.
What are the limitations of floating point arithmetic in Python? Is there any way in Python to get the actual result of this subtraction?
Python has the same limitations for floating point arithmetic as other languages that use IEEE 754 doubles. You can use Decimal to get the exact result:
from decimal import Decimal
a = Decimal('1460356156116843.000000')
b = Decimal('2301.93138123')
print(a - b)  # 1460356156114541.06861877
Python uses IEEE 754 doubles for its floats, so you should treat anything after 15 significant figures or so as science fiction. And that's just for a freshly-initialised number. When you start doing operations with floats you can lose more precision, especially when adding or subtracting numbers that differ significantly in absolute magnitude.
On the other hand, subtracting numbers that are very close to each other in magnitude can lead to catastrophic cancellation.
If you are careful, you can reduce the impact of these problems, but you do need a good understanding of how floating-point arithmetic works, and well-behaved data.
Alternatively, you can work with a library that provides higher precision, e.g. Python's decimal module. You still need to take care to avoid catastrophic cancellation and the other problems that lead to loss of significance, but at least you have more significant digits to play with.
The decimal module just provides basic arithmetic operations. If you need advanced mathematical functions like trig and exponential functions, take a look at the excellent third-party arbitrary-precision mathematics module mpmath. It can handle complex numbers, solve equations, and provides some calculus operations.
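As a hedged sketch of what mpmath looks like for the subtraction above (mp.dps sets the working precision in decimal digits):
from mpmath import mp, mpf
mp.dps = 30                      # work with 30 significant decimal digits
a = mpf('1460356156116843.0')
b = mpf('2301.93138123')
print(a - b)                     # 1460356156114541.06861877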
Using Decimal is convenient, but for the sake of demonstrating the importance of keeping significant digits, let me throw in this example.
import sys
print(sys.maxsize)
9223372036854775807  # the largest machine-word integer on a 64-bit machine; Python ints themselves can grow beyond this as needed.
So for the above case you can do the computation in two steps.
1460356156116842 - 2301 = 1460356156114541 # all integer digits preserved
1 - .93138123 = 0.06861877 # all significant float digits preserved.
The answer is the sum of those two results. But if you add them back together as a single float, you lose the fractional digits again: a 64-bit double is not wide enough to keep all of them.
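A minimal sketch of that two-step computation in plain Python (the split of a and b into integer and fractional parts is done by hand here):
a_int, a_frac = 1460356156116842, 1.0     # a = 1460356156116843.0
b_int, b_frac = 2301, 0.93138123          # b = 2301.93138123
int_part = a_int - b_int                  # 1460356156114541, exact: Python ints are arbitrary precision
frac_part = a_frac - b_frac               # ~0.06861877, small enough to keep its significant digits
print(int_part, frac_part)                # combine as a string or Decimal, not as a single float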

Using 'Decimal' numbers with scipy?

I have numbers too large to use with Python's built-in types and as such am using the decimal library. I want to use scipy.optimize.brentq with a function that operates on Decimals, but when the function returns a Decimal it obviously cannot be used with the optimisation function's float-based internals. How can I get around this? How can I use scipy optimisation techniques with the Decimal class for big numbers?
You can't. Scipy relies heavily on numerical algorithms that only deal with true numerical data types, and it can't deal with the Decimal class.
As a general rule of thumb: If your problem is well-defined and well-conditioned (that's something that numerical mathematicians define), you can just scale it so that it fits into normal python floats, and then you can apply scipy functionality to it.
If your problem, however, involves very small numbers as well as numbers that can't fit into a float, there is usually little you can do about it numerically: it's hard to find a good solution.
If, however, your function only returns values that would fit into float, then you could just use
lambda x: float(your_function(x))
instead of your_function in brentq.
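A minimal, self-contained sketch of that wrapper (the Decimal-based function here is just a hypothetical stand-in for yours):
from decimal import Decimal
from scipy.optimize import brentq

def decimal_function(x):
    # stand-in for a function that does its internal work with Decimal
    return Decimal(x) * Decimal(x) - Decimal(2)

# brentq only ever sees plain floats going in and coming out
root = brentq(lambda x: float(decimal_function(x)), 0.0, 2.0)
print(root)   # ~1.4142135623...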

Python's decimal module and the table maker's dilemma

According to the documentation, the .exp() operation in
Python's decimal module "is correctly rounded using ...".
Because of the table maker's dilemma, I hope that's not guaranteed, since I'd prefer a guarantee that its computation on a normal-looking input with moderately low precision won't take, e.g., a year.
How does Python address this?
(Is it different between versions?)
The exp() and pow() functions are different.
The "table-maker's dilemma" explanation you link to states that xy cannot be correctly rounded by any known algorithm with a bounded amount of time. However, this is obviously not the case for all subsets of its domain. If we restrict the domain to x=3 and y=2, then I can tell you what the correctly-rounded answer is.
A quick Google search turns up Correctly-Rounded Exponential Function in Double-Precision Arithmetic, by David Defour, Florent de Dinechin, Jean-Michel Muller (CiteSeer, PDF). The article provides an algorithm for computing a correctly-rounded exp() and provides the worst-case bound on its running time.
This is not the radix=10 case, but it shows how the table-maker's dilemma does not necessarily apply to the exp() function.

Fixed-point arithmetic

Does anyone know of a library to do fixed point arithmetic in Python?
Or, does anyone have sample code?
If you are interested in doing fixed point arithmetic, the Python Standard Library has a decimal module that can do it.
Actually, it also provides a more flexible floating-point capability than the built-in float. By flexible I mean that it:
- Has "signals" for various exceptional conditions (these can be set to do a variety of things when they are raised)
- Has positive and negative infinities, as well as NaN (not a number)
- Can differentiate between positive and negative 0
- Allows you to set different rounding schemes
- Allows you to set your own min and max values
All in all, it is handy for a million household uses.
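For example, a minimal sketch of fixed-point style arithmetic with decimal, keeping two decimal places by quantizing after each operation (the values here are made up):
from decimal import Decimal, ROUND_HALF_UP

TWO_PLACES = Decimal('0.01')

price = Decimal('19.99')
tax_rate = Decimal('0.0735')
# quantize forces the result back onto a fixed two-decimal-place grid
total = (price * (1 + tax_rate)).quantize(TWO_PLACES, rounding=ROUND_HALF_UP)
print(total)   # 21.46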
The deModel package sounds like what you're looking for.
Another option worth considering, if you want to simulate the behaviour of binary fixed-point numbers beyond simple arithmetic operations, is the spfpm module. It will allow you to calculate square roots, powers, logarithms and trigonometric functions using fixed numbers of bits. It's a pure-Python module, so it doesn't offer the ultimate performance, but it can do hundreds of thousands of arithmetic operations per second on 256-bit numbers.
Recently I've been working on a similar project: https://numfi.readthedocs.io/en/latest/
>>> import numpy as np
>>> from numfi import numfi
>>> x = numfi(0.68751,1,6,3)
>>> x + 1/3
numfi([1.125]) s7/3-r/s
>>> np.sin(x)
numfi([0.625 ]) s6/3-r/s
