calculating n-th roots using Python 3's decimal module

Is there a built-in way to calculate the correctly rounded n-th root of a Python 3 decimal object?

According to the documentation, there is a function power(x,y) :
With two arguments, compute x**y. If x is negative then y must be
integral. The result will be inexact unless y is integral and the
result is finite and can be expressed exactly in ‘precision’ digits.
The result should always be correctly rounded, using the rounding mode
of the current thread’s context
This implies that power(x, 1.0/n) should give you what you want.
You can also take the nth root with
nthRoot = Decimal(x) ** (Decimal(1.0) / Decimal(n))
Not sure if you consider either of these "built in" as you have to compute the reciprocal of n explicitly to get the nth root.
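For instance, a minimal sketch (the precision and values here are arbitrary):
from decimal import Decimal, getcontext

getcontext().prec = 50                  # work with 50 significant digits

x, n = Decimal(5), 3
root = x ** (Decimal(1) / Decimal(n))   # the exponent 1/n is itself rounded to prec digits,
print(root)                             # so the result is close to, but not guaranteed to be,
                                        # the correctly rounded cube root of 5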

Related

When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?

Entailed by the fundamental theorem of algebra is the existence of n complex roots for the formula z^n=a where a is a real number, n is a positive integer, and z is a complex number. Some roots will also be real in addition to complex (i.e. a+bi where b=0).
One example where there are multiple real roots is z^2=1, where we obtain z = ±sqrt(1) = ±1. The solution z = 1 is immediate. The solution z = -1 is obtained by z = sqrt(1) = sqrt(-1 * -1) = i * i = -1, where i is the imaginary unit.
In Python/NumPy (as well as many other programming languages and packages) only a single value is returned. Here are two examples for 5^{1/3}, which has 3 roots.
>>> 5 ** (1 / 3)
1.7099759466766968
>>> import numpy as np
>>> np.power(5, 1/3)
1.7099759466766968
It is not a problem for my use case that only one of the possible roots is returned, but it would be informative to know 'which' root is systematically calculated in the contexts of Python and NumPy. Perhaps there is an (ISO) standard stating which root should be returned, or perhaps there is a commonly-used algorithm that happens to return a specific root. I've imagined an equivalence class such as "the maximum of the real-valued solutions", but I do not know.
Question: When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?
Since typically the identity xᵃ = exp(a⋅log(x)) is used to define the general power, you'll get the root corresponding to the chosen branch cut of the complex logarithm.
With regards to this, the numpy documentation says:
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
So for example, np.power(-1 +0j, 1/3) = 0.5 + 0.866j = np.exp(np.log(-1+0j)/3).
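For instance, a quick check (an illustrative sketch, not part of the original answer):
import numpy as np

print(np.power(-1 + 0j, 1/3))        # principal root, roughly 0.5 + 0.866j
print(np.exp(np.log(-1 + 0j) / 3))   # same value via the identity above
print(np.cbrt(-1.0))                 # the real cube root, -1.0
print(np.power(-1.0, 1/3))           # nan: real-valued input has no real principal root here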

Why do Python's built-in numeric types and the decimal module differ on the same operation? [duplicate]

With simple ints:
>>> -45 % 360
315
Whereas, using a decimal.Decimal:
>>> from decimal import Decimal
>>> Decimal('-45') % 360
Decimal('-45')
I would expect to get Decimal('315').
Is there any reason for this? Is there a way to get a consistent behaviour (without patching decimal.Decimal)? (I did not change the context, and cannot find how it could be changed to solve this situation).
After a long search (because searching on "%", "mod", "modulo" etc. gives thousands of results), I finally found that, surprisingly, this is intended:
There are some small differences between arithmetic on Decimal objects
and arithmetic on integers and floats. When the remainder operator %
is applied to Decimal objects, the sign of the result is the sign of
the dividend rather than the sign of the divisor:
>>> (-7) % 4
1
>>> Decimal(-7) % Decimal(4)
Decimal('-3')
I don't know the reason for this, but it looks like it's not possible to change this behaviour (without patching).
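If you need the int-style result (remainder taking the sign of the divisor) without patching anything, one option is a small wrapper; a minimal sketch, where mod_floor is a made-up name:
from decimal import Decimal

def mod_floor(a, m):
    # Remainder with the sign of the divisor, as with Python ints (hypothetical helper).
    r = a % m
    # Decimal's % keeps the sign of the dividend; shift by one modulus when the signs disagree.
    return r + m if r and (r < 0) != (m < 0) else r

print(mod_floor(Decimal('-45'), Decimal('360')))   # Decimal('315')
print(-45 % 360)                                   # 315, for comparison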
Python behaves according to IBM's General Decimal Arithmetic Specification.
The remainder is defined as:
remainder takes two operands; it returns the remainder from integer division. […]
the result is the residue of the dividend after the operation of calculating integer division as described for divide-integer, rounded to precision digits if necessary. The sign of the result, if non-zero, is the same as that of the original dividend.
So because Decimal('-45') // Decimal('360') is Decimal('-0'), the remainder can only be Decimal('-45').
Though why is the quotient 0 and not -1? The specification says:
divide-integer takes two operands; it divides two numbers and returns the integer part of the result. […]
the result returned is defined to be that which would result from repeatedly subtracting the divisor from the dividend while the dividend is larger than or equal to the divisor. During this subtraction, the absolute values of both the dividend and the divisor are used: the sign of the final result is the same as that which would result if normal division were used. […]
Notes: […]
The divide-integer and remainder operations are defined so that they may be calculated as a by-product of the standard division operation (described above). The division process is ended as soon as the integer result is available; the residue of the dividend is the remainder.
How many times can you subtract 360 from 45? 0 times. Is an integer result available? It is. Then the quotient is 0 with a minus sign because the divide operation says that
The sign of the result is the exclusive or of the signs of the operands.
As for why the Decimal Specification goes this route, instead of doing it as in math where the remainder is always positive, I'm speculating that it could be for the simplicity of the subtraction algorithm: no need to check the sign of the operands in order to compute the absolute value of the quotient. Modern implementations probably use more complicated algorithms anyway, but simplicity could have been an important factor back in the days when the standard was taking form and hardware was simpler (way fewer transistors). Fun fact: Intel switched from radix-2 integer division to radix-16 only in 2007 with the release of Penryn.
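For reference, this is what the spec-mandated behaviour looks like in a session with the default context (nothing patched):
>>> from decimal import Decimal
>>> Decimal('-45') // Decimal('360')    # divide-integer: zero subtractions, sign from the xor of signs
Decimal('-0')
>>> Decimal('-45') % Decimal('360')     # the residue of the dividend
Decimal('-45')
>>> divmod(Decimal('-45'), Decimal('360'))
(Decimal('-0'), Decimal('-45'))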

Python Numerical Differentiation and the minimum value for h

I calculate the first derivative using the following code:
import numpy as np

h = 1e-5          # step size; the question varies this down to around 10E-14

def f(x):
    return np.exp(x)

def dfdx(x):
    Df = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    return Df
For example, for x == 10 this works fine. But when I set h to around 10E-14 or below, Df starts
to get values that are really far away from the expected value f(10) and the relative error between the expected value and Df becomes huge.
Why is that? What is happening here?
The evaluation of f(x) has, at best, a rounding error of |f(x)|*mu where mu is the machine constant of the floating point type. The total error of the central difference formula is thus approximately
2*|f(x)|*mu/(2*h) + |f'''(x)|/6 * h^2
In the present case, the exponential function is equal to all of its derivatives, so that the error is proportional to
mu/h + h^2/6
which has a minimum at h = (3*mu)^(1/3), which for the double format with mu=1e-16 is around h=1e-5.
The precision is increased if instead of 2*h the actual difference (x+h)-(x-h) between the evaluation points is used in the denominator. This can be seen in the following loglog plot of the distance to the exact derivative.
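The plot itself is not reproduced here, but a quick sweep over h (an illustrative sketch, not part of the original answer) shows the same behaviour: the error shrinks until roughly h = 1e-5 and then grows again as rounding error takes over.
import numpy as np

f = np.exp
x = 10.0
exact = np.exp(x)                      # d/dx exp(x) = exp(x)

for h in [1e-2, 1e-4, 1e-5, 1e-6, 1e-8, 1e-11, 1e-14]:
    df = (f(x + h) - f(x - h)) / (2 * h)
    print(f"h = {h:.0e}   relative error = {abs(df - exact) / exact:.2e}")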
You are probably encountering some numerical instability, as for x = 10 and h =~ 1E-13, the argument for np.exp is very close to 10 whether h is added or subtracted, so small approximation errors in the value of np.exp are scaled significantly by the division with the very small 2 * h.
In addition to the answer by @LutzL, I will add some info from a great book, Numerical Recipes 3rd Edition: The Art of Scientific Computing, chapter 5.7 on Numerical Derivatives, especially about the choice of the optimal h value for a given x:
Always choose h so that x and x+h differ by an exactly representable number. Funny stuff like 1/3 should be avoided, except when x is equal to something along the lines of 14.3333333.
Round-off error is approximately epsilon * |f(x) / h|, where epsilon is the floating-point accuracy; Python represents floating-point numbers in double precision, so it's about 1e-16. It may differ for more complicated functions (where precision errors accumulate further), though that's not your case.
Choice of optimal h: not getting into details, it would be sqrt(epsilon) * x for the simple forward case, except when your x is near zero (you will find more information in the book). You may want to use higher x values in such cases; a complementary answer is already provided. In the case of f(x+h) - f(x-h), as in your example, it would amount to epsilon ** (1/3) * x, so approximately 5e-6 times x, which might be a little difficult to choose for small values like yours. Quite close (if one can say so, bearing floating-point arithmetic in mind...) to the practical results posted by @LutzL, though.
You may use other derivative formulas besides the symmetric one you are using. You may want to use forward or backward evaluation (if the function is costly to evaluate and you have calculated f(x) beforehand). If your function is cheap to evaluate, you may want to evaluate it multiple times using higher-order methods to make the precision error smaller (see the five-point stencil on Wikipedia, as provided in the comment to your question, and the sketch below).
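A minimal sketch of the five-point stencil mentioned above (not from the book; the step size is arbitrary):
import numpy as np

def dfdx_five_point(f, x, h):
    # Five-point stencil for the first derivative; truncation error is O(h**4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

print(dfdx_five_point(np.exp, 10.0, 1e-3))   # close to the exact value exp(10)
print(np.exp(10.0))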
This Python tutorial explains the reason behind the limited precision. In summary, decimals are ultimately represented in binary and the precision is about 17 significant digits. So, you are right that it gets fuzzy beyond 10E-14.

Python gmpy2 f_divmod function confusion

I'm pretty new to Python and I just started playing with gmpy2, but I'm a little confused about one of the functions, and gmpy's documentation isn't helpful in this regard:
I'd like to do division with a modulus (as well as a floor), so I found the f_divmod() function:
f_divmod(...) f_divmod(x, y) returns the quotient and remainder of x
divided by y. The quotient is rounded towards -Inf (floor rounding)
and the remainder will have the same sign as y. x and y must be
integers.
However, if this does what I think it should do (and that is probably my mistake), it should do: x / y % m, and I see no way to provide an m. Is this the wrong function for that, or do I need to somehow define a modulus elsewhere?
I see my alternative being:
c = gmpy2.f_div(a, b) % m
Thanks in advance!
Note: I maintain gmpy2.
gmpy2.f_divmod() (along with gmpy2.c_divmod(), gmpy2.t_divmod(), and gmpy2.divmod()) is patterned after the builtin divmod(). All the functions return the quotient and remainder, but each function uses a slightly different rule to compute them. The names are meant to imply that the functions return the tuple (a // b, a % b). They don't do division followed by mod.
If you want to calculate the quotient using floor division, and then reduce that result modulo another number, then your alternative is correct.
Slightly off-topic hint: You should get into the habit of using // for integer division. In Python 3, / becomes floating point division. // is integer division in both Python 2 and 3.
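A short session illustrating the difference (the values are arbitrary):
>>> import gmpy2
>>> gmpy2.f_divmod(17, 5)      # floor quotient and remainder of 17 / 5
(mpz(3), mpz(2))
>>> gmpy2.f_div(17, 5) % 7     # floor-divide first, then reduce modulo 7
mpz(3)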

How to implement division with round-towards-infinity in Python

I want 3/2 to equal 2 not 1.5
I know there's a mathematical term for that operation (not called rounding up), but I can't recall it right now.
Anyway, how do I do that without having to use two functions?
ex of what I do NOT want:
answer = 3/2 then math.ceil(answer) = 2 (why does math.ceil(3/2) give 1?)
ex of what I DO want:
"function"(3/2) = 2
To give a short answer...
Python only offers native operators for two types of division: "true" division, and "round down" division. So what you want isn't available as a single function. However, it is possible to easily implement a number of different types of division-with-rounding using some short expressions.
Per the title's request: given strictly integer inputs, "round up" division can be implemented using (a+(-a%b))//b, and "round away from zero" division can be implemented using the more complex a//b if a*b<0 else (a+(-a%b))//b. One of those is probably what you want. As to why...
To give a longer answer...
First, let me answer the subquestion about why 3/2==1 and math.ceil(3/2)==1.0, by way of explaining how the Python division operator works. There are two main issues at play...
float vs int division: Under Python 2, division behaves differently depending on the type of the inputs. If both a and b are integers, a/b performs "round down" or "floor integer" division (eg 3/2==1, but -3/2==-2). This is equivalent to int(math.floor(float(a)/b)).
But if at least one of a and b is a float, Python performs "true" division and gives you a float result (eg 3.0/2==1.5, and -3.0/2==-1.5). This is why you'll sometimes see the construction float(a)/b: it's being used to force true division even when both inputs are integers (eg float(3)/2==1.5). This is why your example math.ceil(3/2) returns 1.0, whereas math.ceil(float(3)/2) returns 2.0. The result has already been rounded down before it even reaches math.ceil().
"true division" by default: In 2001, it was decided (PEP 238) that Python's division operator should be changed so that it always performs "true" division, regardless of whether the inputs are floats or integers (eg, this would make 3/2==1.5). In order to not break existing scripts, the change in default behavior was deferred until Python 3.0; in order to get this behavior under Python 2.x, you have to enable it per-file by adding from __future__ import division to the top of the file. Otherwise the old type-dependant behavior is used.
But "round down" division is still frequently needed, so the PEP didn't do way with it entirely. Instead, it introduced a new division operator: a//b, which always performs round down division, even if the inputs include floats. This can be used without doing anything special under both Python 2.2+ and 3.x.
With that out of the way, division-with-rounding:
In order to simplify things, the following expressions all use the a//b operator when working on integers, since it will behave the same under all python versions. As well, I'm making an assumption that 0<=a%b<b if b is positive, and b<=a%b<=0 if b is negative. This is how Python behaves, but other languages may have slightly different modulus operators.
The four basic types of integer division with rounding:
"round down" aka "floor integer" aka "round to minus infinity" divsion: python offers this natively via a//b.
"round up" aka "ceiling integer" aka "round to positive infinity" division: this can be achieved via int(math.ceil(float(a)/b)) or (a+(-a%b))//b. The latter equation works because -a%b is 0 if a is a multiple of b, and is otherwise the amount we need to add to a to get to the next highest multiple.
"round towards zero" aka "truncated" division - this can be achieved via int(float(a)/b). Doing this without using floating point is trickier... since Python only offers round-down integer division, and the % operator has a similar round-down bias, we don't have any non-floating-point operators which round symmetrically about 0. So the only way I can think of is to construct a piecewise expression out of round-down and round-up: a//b if a*b>0 else (a+(-a%b))//b.
"round away from zero" aka "round to (either) infinity" division - unfortunately, this is even trickier than round-towards-zero. We can't leverage the truncating behavior of the int operator anymore, so I can't think of a simple expression even when including floating-point ops. So I have to go with the inverse of the round-to-zero expression, and use a//b if a*b<0 else (a+(-a%b))//b.
Note that if you're only using positive integers, (a+b-1)//b provides round up / away from zero even more efficiently than any of the above solutions, but falls apart for negatives.
Hope that helps... and happy to make edits if anyone can suggest better equations for round to/away from zero. I find the ones I have particularly unsatisfactory.
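Collected into small helper functions (a sketch of the expressions above; integer operands assumed, and the function names are made up here):
def div_floor(a, b):
    # round towards minus infinity (Python's native //)
    return a // b

def div_ceil(a, b):
    # round towards positive infinity
    return (a + (-a % b)) // b

def div_trunc(a, b):
    # round towards zero
    return a // b if a * b > 0 else (a + (-a % b)) // b

def div_away(a, b):
    # round away from zero
    return a // b if a * b < 0 else (a + (-a % b)) // b

for a, b in [(3, 2), (-3, 2), (3, -2), (-3, -2)]:
    print(a, b, div_floor(a, b), div_ceil(a, b), div_trunc(a, b), div_away(a, b))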
Integral division in Python 3:
3 // 2 == 1
Non-integral division in Python 3:
3 / 2 == 1.5
What you're talking about is not a division by any means.
The intent of the OP's question is "How to implement division with round-towards-infinity in Python" (suggest you change the title).
This is a perfectly legitimate rounding mode as per the IEEE-754 standard (read this overview), and the term for it is "round towards infinity" (or "round away from zero"). Most of the 9 downvotes were beating up on the OP unfairly. Yes, there is no single-function way to do this in native Python, but we can use round(float(a)/b) or else subclass numbers.Number and override __div__().
The OP would need to clarify whether they want -3/2 to round to -2 or -1 (or don't-care for negative operands). Since they already said they don't want round-upwards, we can infer -3/2 should round to -2.
Enough theory. For implementations:
If you just want the fast-and-dirty one-line solution for round-towards-infinity, use round(float(a)/b)
math.ceil(float(a)/b) gives you round-upwards, which you said you don't want
But if this is your default division operation, or you are doing a lot of this, then do something like the pseudocode below: inherit from one of the subclasses of numbers.Number (Real, Rational or Integral, new in 2.6), redefine __div__(), or else define a non-default alternative __divra__() operation. You could define a class member or classmethod rounding_mode and look it up during divisions. Be careful of __rdiv__() and mixing with ordinary floats, though.
import numbers

class NumberWithRounding(numbers.Integral):
    # Here you could implement a classmethod setRoundingMode() or a member rounding_mode
    def __div__(self, other):
        # Here you could consult rounding_mode, or else hardwire it like:
        return round(float(self) / other)
    # You also have to raise NotImplementedError / pass / or implement the other
    # abstract methods (__abs__(), ..., __xor__()). Just shortcut that for now...
When you divide two integers in Python 2, the result is an integer.
3 / 2 equals 1, not 1.5.
See the documentation, note 1:
For (plain or long) integer division, the result is an integer. The result is always rounded towards minus infinity: 1/2 is 0, (-1)/2 is -1, 1/(-2) is -1, and (-1)/(-2) is 0. Note that the result is a long integer if either operand is a long integer, regardless of the numeric value.
Once you get 1 from the division, there is no way to turn that into 2.
To get 1.5, you need floating-point division: 3.0 / 2.
You can then call math.ceil to get 2.
You are mistaken; there is no mathematical function that divides, then rounds up.
The best you can do is write your own function that takes two floats and calls math.ceil.
What you probably want is something like:
math.ceil(3.0/2.0)
# or
math.ceil(float(3)/float(2))
You could also do an import from future:
from __future__ import division
math.ceil(3/2) # == 2
But, if you do this, to get the current behavior of integer division you need to use the double slash:
3 // 2 == 1 # True
Integer division with ceiling rounding (to +Inf), floor rounding (to -Inf), and truncation (to 0) is available in gmpy2.
>>> gmpy2.c_div(3,2)
mpz(2)
>>> help(gmpy2.c_div)
Help on built-in function c_div in module gmpy2:
c_div(...)
c_div(x,y): returns the quotient of x divided by y. The quotient
is rounded towards +Inf (ceiling rounding). x and y must be integers.
>>> help(gmpy2.f_div)
Help on built-in function f_div in module gmpy2:
f_div(...)
f_div(x,y): returns the quotient of x divided by y. The quotient
is rounded towards -Inf (floor rounding). x and y must be integers.
>>> help(gmpy2.t_div)
Help on built-in function t_div in module gmpy2:
t_div(...)
t_div(x,y): returns the quotient of x divided by y. The quotient
is rounded towards 0. x and y must be integers.
>>>
gmpy2 is available at http://code.google.com/p/gmpy/
(Disclaimer: I'm the current maintainer of gmpy and gmpy2.)
I think that what you're looking for is this:
assuming you have x (3) and y (2),
result = (x + y - 1) // y
this is the equivalent of a ceiling without the use of floating points.
Of course, y cannot be 0.
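A couple of quick checks (illustrative only):
print((3 + 2 - 1) // 2)       # 2  == ceil(3/2)
print((7 + 3 - 1) // 3)       # 3  == ceil(7/3)
print((-3 + 2 - 1) // 2)      # -1 == ceil(-3/2), still fine with a positive divisor
print((3 + (-2) - 1) // -2)   # 0, but ceil(3 / -2) is -1: the trick breaks for a negative divisor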
Firstly, you want to be using floating-point division in the arguments. Use:
from __future__ import division
If you always want to round up, so f(3/2)==2 and f(1.4)==2, then you want f to be math.trunc(math.ceil(x)).
If you want to get the closest integer, but have ties round up, then you want math.trunc(x + 0.5). That way f(3/2)==2 and f(1.4)==1.
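Put together (a sketch; round_up and round_half_up are made-up names):
from __future__ import division   # only needed on Python 2
import math

def round_up(x):
    # always round up: round_up(3 / 2) == 2 and round_up(1.4) == 2
    return math.trunc(math.ceil(x))

def round_half_up(x):
    # nearest integer, ties rounding up: round_half_up(3 / 2) == 2 and round_half_up(1.4) == 1
    return math.trunc(x + 0.5)

print(round_up(3 / 2), round_up(1.4))             # 2 2
print(round_half_up(3 / 2), round_half_up(1.4))   # 2 1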
