I need to calculate the square root of some numbers, for example √9 = 3 and √2 = 1.4142. How can I do it in Python?
The inputs will probably be all positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break?
Related
Integer square root in python
How to find integer nth roots?
Is there a short-hand for nth root of x in Python?
Difference between **(1/2), math.sqrt and cmath.sqrt?
Why is math.sqrt() incorrect for large numbers?
Python sqrt limit for very large numbers?
Which is faster in Python: x**.5 or math.sqrt(x)?
Why does Python give the "wrong" answer for square root? (specific to Python 2)
calculating n-th roots using Python 3's decimal module
How can I take the square root of -1 using python? (focused on NumPy)
Arbitrary precision of square roots
Note: This is an attempt at a canonical question after a discussion on Meta about an existing question with the same title.
Option 1: math.sqrt()
The math module from the standard library has a sqrt function to calculate the square root of a number. It takes any type that can be converted to float (which includes int) as an argument and returns a float.
>>> import math
>>> math.sqrt(9)
3.0
Option 2: Fractional exponent
The power operator (**) or the built-in pow() function can also be used to calculate a square root. Mathematically speaking, the square root of a equals a to the power of 1/2.
The power operator requires numeric types and matches the conversion rules for binary arithmetic operators, so in this case it will return either a float or a complex number.
>>> 9 ** (1/2)
3.0
>>> 9 ** .5 # Same thing
3.0
>>> 2 ** .5
1.4142135623730951
(Note: in Python 2, 1/2 is truncated to 0, so you have to force floating point arithmetic with 1.0/2 or similar. See Why does Python give the "wrong" answer for square root?)
This method can be generalized to nth root, though fractions that can't be exactly represented as a float (like 1/3 or any denominator that's not a power of 2) may cause some inaccuracy:
>>> 8 ** (1/3)
2.0
>>> 125 ** (1/3)
4.999999999999999
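If you need perfect nth powers to come out exact, one workaround (a hypothetical helper, not part of the original answer) is to round the float result and verify it:

def nth_root(x, n):
    # Hypothetical helper: round the float result and verify, so perfect
    # nth powers come back as exact ints; otherwise fall back to the float.
    r = round(x ** (1 / n))
    return r if r ** n == x else x ** (1 / n)

With this, nth_root(125, 3) returns exactly 5, while non-perfect powers fall back to the float approximation.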
Edge cases
Negative and complex
Exponentiation works with negative numbers and complex numbers, though the results have some slight inaccuracy:
>>> (-25) ** .5 # Should be 5j
(3.061616997868383e-16+5j)
>>> 8j ** .5 # Should be 2+2j
(2.0000000000000004+2j)
Note the parentheses on -25! Otherwise it's parsed as -(25**.5), because exponentiation binds more tightly than unary negation.
Meanwhile, math is only built for floats: for x < 0, math.sqrt(x) will raise ValueError: math domain error, and for complex x, it'll raise TypeError: can't convert complex to float. Instead, you can use cmath.sqrt(x), which is more accurate than exponentiation (and will likely be faster too):
>>> import cmath
>>> cmath.sqrt(-25)
5j
>>> cmath.sqrt(8j)
(2+2j)
Precision
Both options involve an implicit conversion to float, so floating point precision is a factor. For example:
>>> n = 10**30
>>> x = n**2
>>> root = x**.5
>>> n == root
False
>>> n - root # how far off are they?
0.0
>>> int(root) - n # how far off is the float from the int?
19884624838656
Very large numbers might not even fit in a float and you'll get OverflowError: int too large to convert to float. See Python sqrt limit for very large numbers?
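For exact integer square roots, math.isqrt() (Python 3.8+) avoids the float round-trip entirely, so it has no such size limit:

>>> import math
>>> n = 10**30
>>> math.isqrt(n**2) == n
True
>>> math.isqrt(2)  # rounds down to the integer part of the root
1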
Other types
Let's look at Decimal for example:
Exponentiation fails unless the exponent is also Decimal:
>>> import decimal
>>> decimal.Decimal('9') ** .5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for ** or pow(): 'decimal.Decimal' and 'float'
>>> decimal.Decimal('9') ** decimal.Decimal('.5')
Decimal('3.000000000000000000000000000')
Meanwhile, math and cmath will silently convert their arguments to float and complex respectively, which could mean loss of precision.
decimal also has its own .sqrt(). See also calculating n-th roots using Python 3's decimal module
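For example (a quick sketch; Decimal.sqrt() honours the context precision):

>>> import decimal
>>> decimal.getcontext().prec = 50
>>> decimal.Decimal(2).sqrt()
Decimal('1.4142135623730950488016887242096980785696718753769')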
SymPy
Depending on your goal, it might be a good idea to delay the calculation of square roots for as long as possible. SymPy might help.
SymPy is a Python library for symbolic mathematics.
import sympy
sympy.sqrt(2)
# => sqrt(2)
This doesn't seem very useful at first.
But sympy can give more information than floats or Decimals:
sympy.sqrt(8) / sympy.sqrt(27)
# => 2*sqrt(6)/9
Also, no precision is lost. (√2)² is still an integer:
s = sympy.sqrt(2)
s**2
# => 2
type(s**2)
#=> <class 'sympy.core.numbers.Integer'>
In comparison, floats and Decimals would return a number which is very close to 2 but not equal to 2:
(2**0.5)**2
# => 2.0000000000000004
from decimal import Decimal
(Decimal('2')**Decimal('0.5'))**Decimal('2')
# => Decimal('1.999999999999999999999999999')
SymPy also understands more complex examples, like the Gaussian integral:
from sympy import Symbol, integrate, pi, sqrt, exp, oo
x = Symbol('x')
integrate(exp(-x**2), (x, -oo, oo))
# => sqrt(pi)
integrate(exp(-x**2), (x, -oo, oo)) == sqrt(pi)
# => True
Finally, if a decimal representation is desired, it's possible to ask for more digits than will ever be needed:
sympy.N(sympy.sqrt(2), 1_000_000)
# => 1.4142135623730950488016...........2044193016904841204
NumPy
>>> import numpy as np
>>> np.sqrt(25)
5.0
>>> np.sqrt([2, 3, 4])
array([1.41421356, 1.73205081, 2. ])
Negative
For negative reals, np.sqrt() returns nan (with a RuntimeWarning); np.emath.sqrt() is available for that case.
>>> a = np.array([4, -1, np.inf])
>>> np.sqrt(a)
<stdin>:1: RuntimeWarning: invalid value encountered in sqrt
array([ 2., nan, inf])
>>> np.emath.sqrt(a)
array([ 2.+0.j, 0.+1.j, inf+0.j])
Another option, of course, is to convert to complex first:
>>> a = a.astype(complex)
>>> np.sqrt(a)
array([ 2.+0.j, 0.+1.j, inf+0.j])
Newton's method
A simple and accurate way to compute square roots is Newton's method.
You have a number whose square root you want to compute (num), and a guess at that square root (estimate). The estimate can be any number greater than 0, but a sensible starting value shortens the recursion depth significantly.
new_estimate = (estimate + num/estimate) / 2
This line computes a more accurate estimate from those two parameters. You can feed the new_estimate value back into the function to compute another, still more accurate estimate, or you can write a recursive definition like this:
import math

def newtons_method(num, estimate):
    # Compute a new estimate
    new_estimate = (estimate + num/estimate) / 2
    print(new_estimate)
    # Base case: compare our estimate with the built-in function's value
    if new_estimate == math.sqrt(num):
        return True
    else:
        return newtons_method(num, new_estimate)
For example, say we need to find the square root of 30. We know that the result is between 5 and 6.
newtons_method(30,5)
num is 30 and estimate is 5. The results from each recursive call are:
5.5
5.477272727272727
5.4772255752546215
5.477225575051661
The last result is the most accurate computation of the square root of num. It is the same value as the one returned by the built-in function math.sqrt().
This answer was originally posted by gunesevitan, but is now deleted.
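As a variation (a sketch, not from the original answer): in practice you'd stop when successive estimates agree to within a tolerance, rather than comparing against math.sqrt, which defeats the purpose of computing the root yourself.

def newton_sqrt(num, tolerance=1e-12):
    # Hypothetical tolerance-based variant: stop once the estimates converge.
    estimate = num if num >= 1 else 1.0  # any positive starting guess works
    while True:
        new_estimate = (estimate + num / estimate) / 2
        if abs(new_estimate - estimate) < tolerance:
            return new_estimate
        estimate = new_estimate

For example, newton_sqrt(30) returns 5.477225575051661, matching math.sqrt(30).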
Python's fractions module and its class, Fraction, implement arithmetic with rational numbers. The Fraction class doesn't implement a square root operation, because most square roots are irrational numbers. However, it can be used to approximate a square root with arbitrary accuracy, because a Fraction's numerator and denominator are arbitrary-precision integers.
The following method takes a positive number x and a number of iterations, and returns upper and lower bounds for the square root of x.
from fractions import Fraction

def sqrt(x, n):
    x = x if isinstance(x, Fraction) else Fraction(x)
    upper = x + 1
    for i in range(0, n):
        upper = (upper + x/upper) / 2
    lower = x / upper
    if lower > upper:
        raise ValueError("Sanity check failed")
    return (lower, upper)
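For example, a quick check of the bounds:

>>> lower, upper = sqrt(2, 10)
>>> float(lower), float(upper)
(1.4142135623730951, 1.4142135623730951)
>>> upper - lower < Fraction(1, 10**50)  # the exact bounds are far tighter than a float
True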
See the reference below for details on this operation's implementation. It also shows how to implement other operations with upper and lower bounds (although there is apparently at least one error with the log operation there).
Daumas, M., Lester, D., Muñoz, C., "Verified Real Number Calculations: A Library for Interval Arithmetic", arXiv:0708.3721 [cs.MS], 2007.
Alternatively, using Python's math.isqrt, we can calculate a square root to arbitrary precision:
Square root of i within 1/2**n of the correct value, where i is an integer: Fraction(math.isqrt(i * 2**(n*2)), 2**n).
Square root of i within 1/10**n of the correct value, where i is an integer: Fraction(math.isqrt(i * 10**(n*2)), 10**n).
Square root of x within 1/2**n of the correct value, where x is a multiple of 1/2**n: Fraction(math.isqrt(int(x * 2**(n*2))), 2**n).
Square root of x within 1/10**n of the correct value, where x is a multiple of 1/10**n: Fraction(math.isqrt(int(x * 10**(n*2))), 10**n).
In the foregoing, i or x must be 0 or greater.
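For example, the second recipe with i = 2 and n = 50 yields √2 to 50 decimal places:

>>> import math
>>> from fractions import Fraction
>>> n = 50
>>> num = math.isqrt(2 * 10**(n*2))  # 1 followed by the first 50 decimals of sqrt(2)
>>> num
141421356237309504880168872420969807856967187537694
>>> root = Fraction(num, 10**n)      # sqrt(2) to within 1/10**50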
Binary search
Disclaimer: this is for a more specialised use-case. This method might not be practical in all circumstances.
Benefits:
can find integer values (i.e. which integer is the root?)
no need to convert to float, so better precision
I personally implemented this one for a crypto CTF challenge (RSA cube root attack), where I needed a precise integer value.
The general idea can be extended to any other root.
def int_squareroot(d: int) -> tuple[int, bool]:
    """Try calculating integer squareroot and return if it's exact"""
    left, right = 1, (d+1)//2
    while left < right-1:
        x = (left+right)//2
        if x**2 > d:
            left, right = left, x
        else:
            left, right = x, right
    return left, left**2 == d
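For example:

>>> int_squareroot(10)
(3, False)
>>> int_squareroot(999999999999999999999999**2)
(999999999999999999999999, True)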
EDIT:
As @wjandrea has also pointed out, this example code can NOT compute inexact (non-integer) square roots. This is a side-effect of the fact that it does not convert anything into floats, so no precision is lost. If the root is an integer, you get it back exactly. If it's not, you get the biggest integer whose square is smaller than your number. I updated the code so that it also returns a bool indicating whether the value is exact, and also fixed an issue causing it to loop infinitely (also pointed out by @wjandrea). This implementation of the general method still works somewhat oddly for smaller numbers, but above 10 I had no problems with it.
Overcoming the issues and limits of this method/implementation:
For smaller numbers, you can just use all the other methods from other answers. They generally use floats, which might mean a loss of precision, but for small integers that is no problem at all. All of the methods that use floats share the same (or nearly the same) limit.
If you still want to use this method and get float results, it should be trivial to convert it to use floats too. Note that doing so will reintroduce precision loss, which negates this method's unique benefit over the others; in that case you might as well use any of the other answers. I think the Newton's method version converges a bit faster, but I'm not sure.
For larger numbers, where loss of precision with floats comes into play, this method can give results closer to the actual answer (depending on how big the input is). If you want to work with non-integers in this range, you can use other types, for example fixed-precision numbers, in this method too.
Edit 2, on other answers:
Currently, and afaik, the only other answer that has similar or better precision for large numbers than this implementation is the one that suggests SymPy, by Eric Duminil. That version is also easier to use and works for any kind of number; the only downside is that it requires SymPy. My implementation is free from any huge dependencies, if that is what you are looking for.
Arbitrary precision square root
This variation uses string manipulations to convert a string which represents a decimal floating-point number to an int, calls math.isqrt to do the actual square root extraction, and then formats the result as a decimal string. math.isqrt rounds down, so all produced digits are correct.
The input string, num, must use plain float format: 'e' notation is not supported. The num string can be a plain integer, and leading zeroes are ignored.
The digits argument specifies the number of decimal places in the result string, i.e., the number of digits after the decimal point.
from math import isqrt

def str_sqrt(num, digits):
    """ Arbitrary precision square root

        num arg must be a string
        Return a string with `digits` after
        the decimal point

        Written by PM 2Ring 2022.01.26
    """
    int_part, _, frac_part = num.partition('.')
    num = int_part + frac_part
    # Determine the required precision
    width = 2 * digits - len(frac_part)
    # Truncate or pad with zeroes
    num = num[:width] if width < 0 else num + '0' * width
    s = str(isqrt(int(num)))
    if digits:
        # Pad, if necessary
        s = '0' * (1 + digits - len(s)) + s
        s = f"{s[:-digits]}.{s[-digits:]}"
    return s
Test
print(str_sqrt("2.0", 30))
Output
1.414213562373095048801688724209
For small numbers of digits, it's faster to use decimal.Decimal.sqrt. Around 32 digits or so, str_sqrt is roughly the same speed as Decimal.sqrt. But at 128 digits, str_sqrt is 2.2× faster than Decimal.sqrt, at 512 digits, it's 4.3× faster, at 8192 digits, it's 7.4× faster.
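For reference, a Decimal.sqrt() version of the same computation (note that prec counts significant digits, not digits after the point, and that Decimal rounds where str_sqrt truncates, hence the differing last digit):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 31  # 1 integer digit + 30 fractional digits
>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724210')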
Here's a live version running on the SageMathCell server.
find square-root of a number
while True:
    num = int(input("Enter a number:\n>>"))
    for i in range(2, num):
        if num % i == 0:
            if i*i == num:
                print("Square root of", num, "==>", i)
                break
    else:
        kd = num**0.5  # or num**(1/2)
        print("Square root of", num, "==>", kd)
Output:
Enter a number: 24
Square root of 24 ==> 4.898979485566356
Enter a number: 36
Square root of 36 ==> 6
Enter a number: 49
Square root of 49 ==> 7
In Python, suppose the code is:
import math

a = math.sqrt(2.0)
if a * a == 2.0:
    x = 2
else:
    x = 1
Why doesn't this set x to 2?
This is a variant of "Floating Point Numbers are Approximations -- Not Exact".
Mathematically speaking, you are correct that sqrt(2) * sqrt(2) == 2. But sqrt(2) cannot be exactly represented as a native datatype (read: floating point number). (Heck, sqrt(2) is irrational, so its decimal expansion is guaranteed to be infinite and non-repeating!) It can get really close, but not exact:
>>> import math
>>> math.sqrt(2)
1.4142135623730951
>>> math.sqrt(2) * math.sqrt(2)
2.0000000000000004
Note the result is, in fact, not exactly 2.
If you want the x = 2 branch to execute, you will need to use an epsilon value of "is the result close enough?":
epsilon = 1e-6 # 0.000001
if abs(2.0 - a*a) < epsilon:
x = 2
else:
x = 1
Numbers with decimals are stored as floating point numbers and they can only be an approximation to the real number in some cases.
So your comparison needs to be not "are these two numbers exactly equal (==)" but "are they sufficiently close as to be considered equal".
Fortunately, the math library has a function to do that conveniently. Using math.isclose(), you can compare with a defined tolerance. The function isn't too complicated; you could write it yourself.
>>> math.isclose(a*a, 2, abs_tol=0.0001)
True
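Per the formula documented for math.isclose(), a hand-rolled version would look roughly like this (a sketch, not the stdlib implementation):

def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    # Close if the difference is within the relative tolerance of the
    # larger magnitude, or within the absolute tolerance.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)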
So I have to approximate pi in the following way: 4*(1-1/3+1/5-1/7+1/9-...). It should also be based on a number of iterations. So the function should behave like this:
>>> piApprox(1)
4.0
>>> piApprox(10)
3.04183961893
>>> piApprox(300)
3.13825932952
But it works like this:
>>> piApprox(1)
4.0
>>> piApprox(10)
2.8571428571428577
>>> piApprox(300)
2.673322240709928
What am I doing wrong? Here is the code:
def piApprox(num):
    pi = 4.0
    k = 1.0
    est = 1.0
    while 1 < num:
        k += 2
        est = est - (1/k) + 1/(k+2)
        num = num - 1
    return pi*est
This is what you're computing:
4*(1-1/3+1/5-1/5+1/7-1/7+1/9...)
You can fix it just by adding a k += 2 at the end of your loop:
def piApprox(num):
    pi = 4.0
    k = 1.0
    est = 1.0
    while 1 < num:
        k += 2
        est = est - (1/k) + 1/(k+2)
        num = num - 1
        k += 2
    return pi*est
Also, the way you're counting your iterations is wrong, since you're adding two terms at a time.
This is a cleaner version that returns the output that you expect for 10 and 300 iterations:
def approximate_pi(rank):
    value = 0
    for k in range(1, 2*rank+1, 2):
        sign = -(k % 4 - 2)
        value += float(sign) / k
    return 4 * value
Here is the same code but more compact:
def approximate_pi(rank):
    return 4 * sum(-float(k%4 - 2) / k for k in range(1, 2*rank+1, 2))
Important edit:
For whoever expects this approximation to yield π, a quote from Wikipedia:
It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of π.
Original answer:
This is an educational example. You try to use a shortcut and attempt to implement the "oscillating" sign of the summands by handling two steps for k in the same iteration. However, you adjust k only by one step per iteration.
Usually, in math at least, an oscillating sign is achieved with (-1)**i. So, I have chosen this for a more readable implementation:
def pi_approx(num_iterations):
    k = 3.0
    s = 1.0
    for i in range(num_iterations):
        s = s - ((1/k) * (-1)**i)
        k += 2
    return 4 * s
As you can see, I have changed your approach a bit, to improve readability. There is no need for you to check num in a while loop, and there is no particular need for your pi variable. Your est is actually a sum that grows step by step, so why not call it s (sum is a built-in function in Python, so best not to shadow it). Just multiply the sum by 4 at the end, according to your formula.
Test:
>>> pi_approx(100)
3.1514934010709914
The convergence, however, is not especially good:
>>> pi_approx(100) - math.pi
0.009900747481198291
Your expected output is flaky somehow, because your piApprox(300) (which should be 3.13825932952, according to you) is too far away from π. How did you come up with that? Is it possibly affected by an accumulated numerical error?
Edit
I would not trust the book too much in regard to what the function should return after 10 and 300 iterations. The intermediate result, after 10 steps, should be rather free of numerical errors, indeed. There, it actually makes a difference whether you take two steps of k at a time or not. So this is most likely the difference between my pi_approx(10) and the book's. For 300 iterations, numerical error might have severely affected the result in the book. If this is an old book, and they implemented their example in C, possibly using single precision, then a significant portion of the result may be due to accumulated numerical error (note: this is a prime example of how badly you can be affected by numerical errors: a repeated sum of small and large values, it hardly gets worse than that!).
What counts is that you have looked at the math (the formula for PI), and you have implemented a working Python version of approximating that formula. That was the learning goal of the book, so go ahead and tackle the next problem :-).
def piApprox(num):
    pi = 4.0
    k = 3.0
    est = 1.0
    while 1 < num:
        est = est - (1/k) + 1/(k+2)
        num = num - 1
        k += 4
    return pi*est
Also, for real tasks, use math.pi.
Here is a slightly simpler version:
def pi_approx(num_terms):
    sign = 1.  # +1. or -1.
    pi_by_4 = 1.  # first term
    for div in range(3, 2 * num_terms, 2):  # 3, 5, 7, ...
        sign = -sign  # flip sign
        pi_by_4 += sign / div  # add next term
    return 4. * pi_by_4
which gives
>>> for n in [1, 10, 300, 1000, 3000]:
... print(pi_approx(n))
4.0
3.0418396189294032
3.1382593295155914
3.140592653839794
3.1412593202657186
While all of these answers are perfectly good approximations, if you are using the Madhava–Leibniz series then you should arrive at "an approximation of π correct to 11 decimal places as 3.14159265359" within the first 21 terms, according to Wikipedia: https://en.wikipedia.org/wiki/Approximations_of_%CF%80
Therefore, a more accurate solution could be any variation of this:
import math

def estimate_pi(terms):
    ans = 0.0
    for k in range(terms):
        ans += (-1.0/3.0)**k / (2.0*k + 1.0)
    return math.sqrt(12) * ans

print(estimate_pi(21))
Output: 3.141592653595635
I've been looking through other answers and I still don't understand the modulo for negative numbers in Python.
For example the answer by df
x == (x/y)*y + (x%y)
so it makes sense that (-2)%5 = -2 - (-2/5)*5 = 3
Doesn't (-2 - (-2/5)*5) = 0, or am I just crazy?
Modulus operation with negatives values - weird thing?
Same with this
negative numbers modulo in python
Where did he get -2 from?
Lastly if the sign is dependent on the dividend why don't negative dividends have the same output as their positive counterparts?
For instance the output of
print([8%5,-8%5,4%5,-4%5])
is
[3, 2, 4, 1]
In Python, modulo is calculated according to two rules:
(a // b) * b + (a % b) == a, and
a % b has the same sign as b.
Combine this with the fact that integer division rounds down (towards −∞), and the resulting behavior is explained.
If you do -8 // 5, you get -1.6 rounded down, which is -2. Multiply that by 5 and you get -10; 2 is the number that you'd have to add to that to get -8. Therefore, -8 % 5 is 2.
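You can see both rules at work directly:

>>> -8 // 5  # floor(-1.6)
-2
>>> (-8 // 5) * 5 + (-8 % 5)  # the identity holds
-8
>>> -8 % 5  # same sign as the divisor
2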
In Python, a // b is defined as floor(a/b), as opposed to most other languages where integer division is defined as trunc(a/b). There is a corresponding difference in the interpretation of a % b = a - (a // b) * b.
The reason for this is that Python's definition of the % operator (and divmod) is generally more useful than that of other languages. For example:
def time_of_day(seconds_since_epoch):
    minutes, seconds = divmod(seconds_since_epoch, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return '%02d:%02d:%02d' % (hours, minutes, seconds)
With this function, time_of_day(12345) returns '03:25:45', as you would expect.
But what time is it 12345 seconds before the epoch? With Python's definition of divmod, time_of_day(-12345) correctly returns '20:34:15'.
What if we redefine divmod to use the C definition of / and %?
def divmod(a, b):
    q = int(a / b)  # I'm using 3.x
    r = a - b * q
    return (q, r)
Now, time_of_day(-12345) returns '-3:-25:-45', which isn't a valid time of day. If the standard Python divmod function were implemented this way, you'd have to write special-case code to handle negative inputs. But with floor-style division, like my first example, it Just Works.
The rationale behind this is really the mathematical definition of least residue. Python respects this definition, whereas in most other programming languages the modulus operator is really more like a 'remainder after division' operator. To compute the least residue of -5 % 11, simply add 11 to -5 until you obtain a positive integer in the range [0, 10]; the result is 6.
When you divide ints, (-2/5)*5 does not evaluate to -2, as it would in the algebra you're used to. Try breaking it down into two steps, first evaluating the part in the parentheses.
(-2/5) * 5 = (-1) * 5
(-1) * 5 = -5
The reason for step 1 is that you're doing int division, which in Python 2.x returns the equivalent of the float division result rounded down to the nearest integer.
In Python 3 and higher, 2/5 will return a float; see PEP 238.
Check out this BetterExplained article and look # David's comment (No. 6) to get what the others are talking about.
Since we're working w/ integers, we do int division which, in Python, floors the answer as opposed to C. For more on this read Guido's article.
As for your question:
>>> 8 % 5   # because (5*1) + 3 = 8
3
>>> -8 % 5  # because (5*-2) + 2 = -8
2
Hope that helped. It confused me in the beginning too (it still does)! :)
Say -a % b needs to be computed, for example:
r = -11 % 10
Find the next number above 11 that is perfectly divisible by 10, i.e. the number which, on dividing by 10, gives remainder 0. In the above case that's 20, which divided by 10 gives 0.
Hence, 20 - 11 = 9 is the number that needs to be added to 11 to reach 20, and that is the result: -11 % 10 is 9.
The concept: if 60 marbles need to be equally divided among 8 people, what you actually get after dividing 60/8 is 7.5. Since you can't halve the marbles, the next value after 60 that is perfectly divisible by 8 is 64. Hence 4 more marbles need to be added to the lot so that everybody shares the same joy of marbles.
This is how Python does it when negative numbers are divided using the modulus operator.
Can you please tell me how much is (-2) % 5?
According to my Python interpreter it's 3, but do you have a wise explanation for this?
I've read that in some languages the result can be machine-dependent, but I'm not sure though.
By the way: most programming languages would disagree with Python and give the result -2. Depending on the interpretation of modulus this is correct. However, the most agreed-upon mathematical definition states that the modulus of a and b is the non-negative remainder r of the division a / b. More precisely, 0 <= r < b by definition.
The result of the modulus operation on negatives seems to be programming language dependent and here is a listing http://en.wikipedia.org/wiki/Modulo_operation
Your Python interpreter is correct.
One (stupid) way of calculating a modulus is to subtract or add the modulus until the resulting value is between 0 and (modulus − 1).
e.g.:
13 mod 5 = (13 − 5) mod 5 = (13 − 10) mod 5 = 3
or in your case: −2 mod 5 = (−2 + 5) mod 5 = 3
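In code, that "stupid way" looks like this (a sketch that assumes a positive modulus):

def naive_mod(a, m):
    # Shift a into the range [0, m) by repeatedly adding or subtracting m.
    # Assumes m > 0; matches Python's % for that case, e.g. naive_mod(-2, 5) == 3.
    while a < 0:
        a += m
    while a >= m:
        a -= m
    return a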
Like the documentation says in Binary arithmetic operations, Python assures that:
The integer division and modulo operators are connected by the following identity: x == (x/y)*y + (x%y). Integer division and modulo are also connected with the built-in function divmod(): divmod(x, y) == (x/y, x%y).
And truly,
>>> divmod(-2, 5)
(-1, 3)
Another way to visualize the uniformity of this method is to calculate divmod for a small sequence of numbers:
>>> for number in range(-10, 10):
...     print(divmod(number, 5))
...
(-2, 0)
(-2, 1)
(-2, 2)
(-2, 3)
(-2, 4)
(-1, 0)
(-1, 1)
(-1, 2)
(-1, 3)
(-1, 4)
(0, 0)
(0, 1)
(0, 2)
(0, 3)
(0, 4)
(1, 0)
(1, 1)
(1, 2)
(1, 3)
(1, 4)
Well, 0 % 5 should be 0, right?
-1 % 5 should be 4 because that's the next allowed value going in the reverse direction (i.e., it can't be 5, since that's out of range).
And following that logic, -2 % 5 must be 3.
The easiest way to think of how it will work is that you keep adding or subtracting 5 until the number falls between 0 (inclusive) and 5 (exclusive).
I'm not sure about machine dependence - I've never seen an implementation that was, but I can't say it's never done.
As explained in other answers, there are many choices for a modulo operation with negative values. In general different languages (and different machine architectures) will give a different result.
According to the Python reference manual,
The modulo operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is strictly smaller than the absolute value of the second operand.
is the choice taken by Python. Basically modulo is defined so that this always holds:
x == (x/y)*y + (x%y)
so it makes sense that (-2)%5 = -2 - (-2/5)*5 = 3
Well, -2 divided by 5 would be 0 with a remainder of 3. I don't believe that should be very platform dependent, but I've seen stranger things.
It is indeed 3. In modular arithmetic, a modulus is simply the remainder of a division, and the remainder of -2 divided by 5 is 3.
The result depends on the language. Python returns the sign of the divisor, whereas for example C# returns the sign of the dividend (i.e. -2 % 5 returns -2 in C#).
One explanation might be that negative numbers are stored using 2's complement. When the python interpreter tries to do the modulo operation it converts to unsigned value. As such instead of doing (-2) % 5 it actually computes 0xFFFF_FFFF_FFFF_FFFD % 5 which is 3.
Be careful not to rely on this mod behavior in C/C++ on all OSes and architectures. If I recall correctly, I tried to rely on C/C++ code like
int x2 = x % n;
to keep x2 in the range from 0 to n-1 but negative numbers crept in when I would compile on one OS, but things would work fine on another OS. This made for an evil time debugging since it only happened half the time!
There seems to be a common confusion between the terms "modulo" and "remainder".
In math, a remainder should always be defined consistent with the quotient, so that if a / b == c rem d then (c * b) + d == a. Depending on how you round your quotient, you get different remainders.
However, modulo should always give a result 0 <= r < divisor, which is only consistent with round-to-minus-infinity division if you allow negative integers. If division rounds towards zero (which is common), modulo and remainder are only equivalent for non-negative values.
Some languages (notably C and C++) don't define the required rounding/remainder behaviours and % is ambiguous. Many define rounding as towards zero, yet use the term modulo where remainder would be more correct. Python is relatively unusual in that it rounds to negative infinity, so modulo and remainder are equivalent.
Ada rounds towards zero IIRC, but has both mod and rem operators.
The C policy is intended to allow compilers to choose the most efficient implementation for the machine, but IMO is a false optimisation, at least these days. A good compiler will probably be able to use the equivalence for optimisation wherever a negative number cannot occur (and almost certainly if you use unsigned types). On the other hand, where negative numbers can occur, you almost certainly care about the details - for portability reasons you have to use very carefully designed overcomplex algorithms and/or checks to ensure that you get the results you want irrespective of the rounding and remainder behaviour.
In other words, the gain for this "optimisation" is mostly (if not always) an illusion, whereas there are very real costs in some cases - so it's a false optimisation.