Can you please tell me how much is (-2) % 5?
According to my Python interpreter it's 3, but do you have a wise explanation for this?
I've read that in some languages the result can be machine-dependent, but I'm not sure.
By the way: most programming languages would disagree with Python and give the result -2. Depending on the interpretation of modulus this is correct. However, the most agreed-upon mathematical definition states that the modulus of a and b is the (non-negative) remainder r of the division a / b. More precisely, 0 <= r < b by definition.
The result of the modulus operation on negatives seems to be programming-language dependent; here is a listing: http://en.wikipedia.org/wiki/Modulo_operation
Your Python interpreter is correct.
One (stupid) way of calculating a modulus is to subtract or add the modulus until the resulting value is between 0 and (modulus − 1).
e.g.:
13 mod 5 = (13 − 5) mod 5 = (13 − 10) mod 5 = 3
or in your case: −2 mod 5 = (−2 + 5) mod 5 = 3
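Here's that idea as a minimal sketch (assuming a positive modulus m):

def naive_mod(x, m):
    # Shift x by m until it lands in the range [0, m).
    while x < 0:
        x += m
    while x >= m:
        x -= m
    return x

print(naive_mod(-2, 5))  # 3
print(naive_mod(13, 5))  # 3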
Like the documentation says in Binary arithmetic operations, Python assures that:
The integer division and modulo operators are connected by the following identity: x == (x/y)*y + (x%y). Integer division and modulo are also connected with the built-in function divmod(): divmod(x, y) == (x/y, x%y).
(That wording is from Python 2; in Python 3 the identity is written with floor division: x == (x//y)*y + (x%y).)
And truly,
>>> divmod(-2, 5)
(-1, 3)
Another way to visualize the uniformity of this method is to calculate divmod for a small sequence of numbers:
>>> for number in xrange(-10, 10):
... print divmod(number, 5)
...
(-2, 0)
(-2, 1)
(-2, 2)
(-2, 3)
(-2, 4)
(-1, 0)
(-1, 1)
(-1, 2)
(-1, 3)
(-1, 4)
(0, 0)
(0, 1)
(0, 2)
(0, 3)
(0, 4)
(1, 0)
(1, 1)
(1, 2)
(1, 3)
(1, 4)
Well, 0 % 5 should be 0, right?
-1 % 5 should be 4 because that's the next allowed value going in the reverse direction (i.e., it can't be 5, since that's out of range).
And following along by that logic, -2 must be 3.
The easiest way to think of how it will work is that you keep adding or subtracting 5 until the number falls between 0 (inclusive) and 5 (exclusive).
I'm not sure about machine dependence - I've never seen an implementation that was, but I can't say it's never done.
As explained in other answers, there are many choices for a modulo operation with negative values. In general different languages (and different machine architectures) will give a different result.
According to the Python reference manual,
The modulo operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is strictly smaller than the absolute value of the second operand.
is the choice taken by Python. Basically modulo is defined so that this always holds:
x == (x/y)*y + (x%y)
so it makes sense that (-2) % 5 = -2 - ((-2)//5)*5 = -2 - (-1)*5 = 3 (remember that integer division rounds toward negative infinity, so (-2)//5 is -1).
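You can verify each step in the interpreter (Python 3 syntax, where // is floor division):

>>> -2 // 5
-1
>>> -2 - (-2 // 5) * 5
3
>>> -2 % 5
3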
Well, -2 divided by 5 (rounding down) would be -1 with a remainder of 3. I don't believe that should be very platform-dependent, but I've seen stranger things.
It is indeed 3. In modular arithmetic, a modulus is simply the remainder of a division, and the remainder of -2 divided by 5 is 3.
The result depends on the language. Python returns the sign of the divisor, whereas for example C# returns the sign of the dividend (i.e., -2 % 5 returns -2 in C#).
One tempting explanation involves 2's complement: interpreted as an unsigned 64-bit value, -2 has the bit pattern 0xFFFF_FFFF_FFFF_FFFD, and 0xFFFF_FFFF_FFFF_FFFD % 5 is indeed 3. But this only works here because 2**64 happens to be congruent to 1 modulo 5; Python's ints are arbitrary-precision, and the interpreter never actually converts them to unsigned values. The real reason is floor division, as described in the other answers.
Be careful not to rely on this mod behavior in C/C++ on all OSes and architectures. If I recall correctly, I tried to rely on C/C++ code like
int x2 = x % n;
to keep x2 in the range from 0 to n-1, but negative numbers crept in when I would compile on one OS, while things would work fine on another OS. This made for an evil time debugging, since it only happened half the time!
There seems to be a common confusion between the terms "modulo" and "remainder".
In math, a remainder should always be defined consistent with the quotient, so that if a / b == c rem d then (c * b) + d == a. Depending on how you round your quotient, you get different remainders.
However, modulo should always give a result 0 <= r < divisor; keeping that consistent with the identity above for negative dividends requires division that rounds towards minus infinity. If division rounds towards zero (which is common), modulo and remainder are only equivalent for non-negative values.
Some languages (notably C and C++) don't define the required rounding/remainder behaviours and % is ambiguous. Many define rounding as towards zero, yet use the term modulo where remainder would be more correct. Python is relatively unusual in that it rounds to negative infinity, so modulo and remainder are equivalent.
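You can see both conventions from within Python itself, since math.fmod implements the truncating, C-style remainder:

>>> import math
>>> -7 % 3            # floor convention: result takes the sign of the divisor
2
>>> math.fmod(-7, 3)  # truncation convention: result takes the sign of the dividend
-1.0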
Ada rounds towards zero IIRC, but has both mod and rem operators.
The C policy is intended to allow compilers to choose the most efficient implementation for the machine, but IMO it is a false optimisation, at least these days. A good compiler will probably be able to use the equivalence for optimisation wherever a negative number cannot occur (and almost certainly if you use unsigned types). On the other hand, where negative numbers can occur, you almost certainly care about the details - for portability reasons you have to use very carefully designed, overly complex algorithms and/or checks to ensure that you get the results you want irrespective of the rounding and remainder behaviour.
In other words, the gain for this "optimisation" is mostly (if not always) an illusion, whereas there are very real costs in some cases - so it's a false optimisation.
I've been playing with Python's hash function. For small integers, it appears hash(n) == n always. However this does not extend to large numbers:
>>> hash(2**100) == 2**100
False
I'm not surprised, I understand hash takes a finite range of values. What is that range?
I tried using binary search to find the smallest number such that hash(n) != n:
>>> import codejamhelpers # pip install codejamhelpers
>>> help(codejamhelpers.binary_search)
Help on function binary_search in module codejamhelpers.binary_search:
binary_search(f, t)
Given an increasing function :math:`f`, find the greatest non-negative integer :math:`n` such that :math:`f(n) \le t`. If :math:`f(n) > t` for all :math:`n \ge 0`, return None.
>>> f = lambda n: int(hash(n) != n)
>>> n = codejamhelpers.binary_search(f, 0)
>>> hash(n)
2305843009213693950
>>> hash(n+1)
0
What's special about 2305843009213693951? I note it's less than sys.maxsize == 9223372036854775807
Edit: I'm using Python 3. I ran the same binary search on Python 2 and got a different result 2147483648, which I note is sys.maxint+1
I also played with [hash(random.random()) for i in range(10**6)] to estimate the range of hash function. The max is consistently below n above. Comparing the min, it seems Python 3's hash is always positively valued, whereas Python 2's hash can take negative values.
2305843009213693951 is 2^61 - 1. It's the largest Mersenne prime that fits into 64 bits.
If you have to make a hash just by taking the value mod some number, then a large Mersenne prime is a good choice -- it's easy to compute and ensures an even distribution of possibilities. (Although I personally would never make a hash this way)
It's especially convenient to compute the modulus for floating point numbers. They have an exponential component that multiplies the whole number by 2^x. Since 2^61 = 1 mod 2^61-1, you only need to consider the (exponent) mod 61.
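For instance, on a 64-bit CPython 3 build you can watch the reduction modulo 2**61 - 1 directly (a quick check; the exact values assume the default 64-bit hash parameters):

>>> P = 2**61 - 1
>>> hash(P - 1), hash(P), hash(P + 1)
(2305843009213693950, 0, 1)
>>> hash(2**64) == (2**64) % P
True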
See: https://en.wikipedia.org/wiki/Mersenne_prime
Based on the Python documentation in the pyhash.c file:
For numeric types, the hash of a number x is based on the reduction
of x modulo the prime P = 2**_PyHASH_BITS - 1. It's designed so that
hash(x) == hash(y) whenever x and y are numerically equal, even if
x and y have different types.
So for a 64/32 bit machine, the reduction would be 2**_PyHASH_BITS - 1, but what is _PyHASH_BITS?
You can find it in pyhash.h header file which for a 64 bit machine has been defined as 61 (you can read more explanation in pyconfig.h file).
#if SIZEOF_VOID_P >= 8
# define _PyHASH_BITS 61
#else
# define _PyHASH_BITS 31
#endif
So first of all, it's based on your platform. For example, on my 64-bit Linux platform the reduction is 2**61 - 1, which is 2305843009213693951:
>>> 2**61 - 1
2305843009213693951
Also, you can use math.frexp in order to get the mantissa and exponent of sys.maxint, which for a 64-bit machine shows that max int is 2**63:
>>> import math, sys
>>> math.frexp(sys.maxint)
(0.5, 64)
And you can see the difference on either side of that modulus with a simple test:
>>> hash(2**60) == 2**60
True
>>> hash(2**61) == 2**61
False
Read the complete documentation about Python's hashing algorithm here: https://github.com/python/cpython/blob/master/Python/pyhash.c#L34
As mentioned in the comments, you can use sys.hash_info (in Python 3.x), which gives you a struct sequence of the parameters used for computing hashes.
>>> sys.hash_info
sys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003, algorithm='siphash24', hash_bits=64, seed_bits=128, cutoff=0)
Alongside the modulus that I've described in preceding lines, you can also get the inf value as following:
>>> hash(float('inf'))
314159
>>> sys.hash_info.inf
314159
The hash function returns a plain int, which means that the returned value is greater than -sys.maxint and lower than sys.maxint; that means that if you pass sys.maxint + x to it, the result would be -sys.maxint + (x - 2).
hash(sys.maxint + 1) == sys.maxint + 1 # False
hash(sys.maxint + 1) == - sys.maxint -1 # True
hash(sys.maxint + sys.maxint) == -sys.maxint + sys.maxint - 2 # True
Meanwhile, 2**200 is n times greater than sys.maxint - my guess is that hash would go over the range -sys.maxint..+sys.maxint n times until it stops on a plain integer in that range, like in the code snippets above.
So generally, for any n <= sys.maxint:
hash(sys.maxint*n) == -sys.maxint*(n%2) + 2*(n%2)*sys.maxint - n/2 - (n + 1)%2 ## True
Note: this is true for python 2.
The implementation for the int type in cpython can be found here.
It just returns the value, except for -1, where it returns -2:
static long
int_hash(PyIntObject *v)
{
    /* XXX If this is changed, you also need to change the way
       Python's long, float and complex types are hashed. */
    long x = v->ob_ival;
    if (x == -1)
        x = -2;
    return x;
}
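You can see that special case from any CPython interpreter; -1 is reserved because the C-level hash functions use -1 as an error indicator:

>>> hash(-1)
-2
>>> hash(-2)
-2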
I need to calculate the square root of some numbers, for example √9 = 3 and √2 = 1.4142. How can I do it in Python?
The inputs will probably be all positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break?
Related
Integer square root in python
How to find integer nth roots?
Is there a short-hand for nth root of x in Python?
Difference between **(1/2), math.sqrt and cmath.sqrt?
Why is math.sqrt() incorrect for large numbers?
Python sqrt limit for very large numbers?
Which is faster in Python: x**.5 or math.sqrt(x)?
Why does Python give the "wrong" answer for square root? (specific to Python 2)
calculating n-th roots using Python 3's decimal module
How can I take the square root of -1 using python? (focused on NumPy)
Arbitrary precision of square roots
Note: This is an attempt at a canonical question after a discussion on Meta about an existing question with the same title.
Option 1: math.sqrt()
The math module from the standard library has a sqrt function to calculate the square root of a number. It takes any type that can be converted to float (which includes int) as an argument and returns a float.
>>> import math
>>> math.sqrt(9)
3.0
Option 2: Fractional exponent
The power operator (**) or the built-in pow() function can also be used to calculate a square root. Mathematically speaking, the square root of a equals a to the power of 1/2.
The power operator requires numeric types and matches the conversion rules for binary arithmetic operators, so in this case it will return either a float or a complex number.
>>> 9 ** (1/2)
3.0
>>> 9 ** .5 # Same thing
3.0
>>> 2 ** .5
1.4142135623730951
(Note: in Python 2, 1/2 is truncated to 0, so you have to force floating point arithmetic with 1.0/2 or similar. See Why does Python give the "wrong" answer for square root?)
This method can be generalized to nth root, though fractions that can't be exactly represented as a float (like 1/3 or any denominator that's not a power of 2) may cause some inaccuracy:
>>> 8 ** (1/3)
2.0
>>> 125 ** (1/3)
4.999999999999999
Edge cases
Negative and complex
Exponentiation works with negative numbers and complex numbers, though the results have some slight inaccuracy:
>>> (-25) ** .5 # Should be 5j
(3.061616997868383e-16+5j)
>>> 8j ** .5 # Should be 2+2j
(2.0000000000000004+2j)
Note the parentheses on -25! Otherwise it's parsed as -(25**.5) because exponentiation is more tightly binding than unary negation.
Meanwhile, math is only built for floats, so for x<0, math.sqrt(x) will raise ValueError: math domain error and for complex x, it'll raise TypeError: can't convert complex to float. Instead, you can use cmath.sqrt(x), which is more accurate than exponentiation (and will likely be faster too):
>>> import cmath
>>> cmath.sqrt(-25)
5j
>>> cmath.sqrt(8j)
(2+2j)
Precision
Both options involve an implicit conversion to float, so floating point precision is a factor. For example:
>>> n = 10**30
>>> x = n**2
>>> root = x**.5
>>> n == root
False
>>> n - root # how far off are they?
0.0
>>> int(root) - n # how far off is the float from the int?
19884624838656
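If the input is a perfect square of an int, math.isqrt (available since Python 3.8, and used by other answers below) avoids the float round-trip entirely:

>>> import math
>>> n = 10**30
>>> math.isqrt(n**2) == n
True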
Very large numbers might not even fit in a float and you'll get OverflowError: int too large to convert to float. See Python sqrt limit for very large numbers?
Other types
Let's look at Decimal for example:
Exponentiation fails unless the exponent is also Decimal:
>>> import decimal
>>> decimal.Decimal('9') ** .5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for ** or pow(): 'decimal.Decimal' and 'float'
>>> decimal.Decimal('9') ** decimal.Decimal('.5')
Decimal('3.000000000000000000000000000')
Meanwhile, math and cmath will silently convert their arguments to float and complex respectively, which could mean loss of precision.
decimal also has its own .sqrt(). See also calculating n-th roots using Python 3's decimal module
SymPy
Depending on your goal, it might be a good idea to delay the calculation of square roots for as long as possible. SymPy might help.
SymPy is a Python library for symbolic mathematics.
import sympy
sympy.sqrt(2)
# => sqrt(2)
This doesn't seem very useful at first.
But sympy can give more information than floats or Decimals:
sympy.sqrt(8) / sympy.sqrt(27)
# => 2*sqrt(6)/9
Also, no precision is lost. (√2)² is still an integer:
s = sympy.sqrt(2)
s**2
# => 2
type(s**2)
#=> <class 'sympy.core.numbers.Integer'>
In comparison, floats and Decimals would return a number which is very close to 2 but not equal to 2:
(2**0.5)**2
# => 2.0000000000000004
from decimal import Decimal
(Decimal('2')**Decimal('0.5'))**Decimal('2')
# => Decimal('1.999999999999999999999999999')
Sympy also understands more complex examples like the Gaussian integral:
from sympy import Symbol, integrate, pi, sqrt, exp, oo
x = Symbol('x')
integrate(exp(-x**2), (x, -oo, oo))
# => sqrt(pi)
integrate(exp(-x**2), (x, -oo, oo)) == sqrt(pi)
# => True
Finally, if a decimal representation is desired, it's possible to ask for more digits than will ever be needed:
sympy.N(sympy.sqrt(2), 1_000_000)
# => 1.4142135623730950488016...........2044193016904841204
NumPy
>>> import numpy as np
>>> np.sqrt(25)
5.0
>>> np.sqrt([2, 3, 4])
array([1.41421356, 1.73205081, 2. ])
docs
Negative
For negative reals, it'll return nan, so np.emath.sqrt() is available for that case.
>>> a = np.array([4, -1, np.inf])
>>> np.sqrt(a)
<stdin>:1: RuntimeWarning: invalid value encountered in sqrt
array([ 2., nan, inf])
>>> np.emath.sqrt(a)
array([ 2.+0.j, 0.+1.j, inf+0.j])
Another option, of course, is to convert to complex first:
>>> a = a.astype(complex)
>>> np.sqrt(a)
array([ 2.+0.j, 0.+1.j, inf+0.j])
Newton's method
A simple and accurate way to compute a square root is Newton's method.
You have a number whose square root you want to compute (num), and you have a guess of its square root (estimate). The estimate can be any number bigger than 0, but a number that makes sense shortens the recursive call depth significantly.
new_estimate = (estimate + num/estimate) / 2
This line computes a more accurate estimate from those two parameters. You can pass the new_estimate value back into the function and compute another new_estimate that is more accurate than the previous one, or you can make a recursive function definition like this:
import math

def newtons_method(num, estimate):
    # Computing a new_estimate
    new_estimate = (estimate + num/estimate) / 2
    print(new_estimate)
    # Base case: comparing our estimate with the built-in function's value
    if new_estimate == math.sqrt(num):
        return True
    else:
        return newtons_method(num, new_estimate)
For example, suppose we need to find the square root of 30. We know that the result is between 5 and 6.
newtons_method(30,5)
The number is 30 and the estimate is 5. The results from each recursive call are:
5.5
5.477272727272727
5.4772255752546215
5.477225575051661
The last result is the most accurate computation of the square root of number. It is the same value as the built-in function math.sqrt().
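A variant of the same iteration that doesn't rely on math.sqrt as a stopping condition is to stop once the estimates stop decreasing (a minimal sketch; after the first step the sequence is non-increasing, up to last-bit wobble):

def newtons_method_iter(num, estimate):
    estimate = (estimate + num / estimate) / 2
    while True:
        new_estimate = (estimate + num / estimate) / 2
        if new_estimate >= estimate:  # converged (or hit a two-value cycle)
            return estimate
        estimate = new_estimate

print(newtons_method_iter(30, 5))  # 5.477225575051661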
This answer was originally posted by gunesevitan, but is now deleted.
Python's fractions module and its class, Fraction, implement arithmetic with rational numbers. The Fraction class doesn't implement a square root operation, because most square roots are irrational numbers. However, it can be used to approximate a square root with arbitrary accuracy, because a Fraction's numerator and denominator are arbitrary-precision integers.
The following method takes a positive number x and a number of iterations, and returns upper and lower bounds for the square root of x.
from fractions import Fraction

def sqrt(x, n):
    x = x if isinstance(x, Fraction) else Fraction(x)
    upper = x + 1
    for i in range(0, n):
        upper = (upper + x/upper) / 2
    lower = x / upper
    if lower > upper:
        raise ValueError("Sanity check failed")
    return (lower, upper)
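A usage sketch: by construction the true root is bracketed between the two returned Fractions, and the interval narrows quickly as n grows.

>>> lower, upper = sqrt(2, 6)
>>> lower**2 <= 2 <= upper**2
True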
See the reference below for details on this operation's implementation. It also shows how to implement other operations with upper and lower bounds (although there is apparently at least one error with the log operation there).
Daumas, M., Lester, D., Muñoz, C., "Verified Real Number Calculations: A Library for Interval Arithmetic", arXiv:0708.3721 [cs.MS], 2007.
Alternatively, using Python's math.isqrt, we can calculate a square root to arbitrary precision:
Square root of i within 1/2**n of the correct value, where i is an integer: Fraction(math.isqrt(i * 2**(n*2)), 2**n).
Square root of i within 1/10**n of the correct value, where i is an integer: Fraction(math.isqrt(i * 10**(n*2)), 10**n).
Square root of x within 1/2**n of the correct value, where x is a multiple of 1/2**n: Fraction(math.isqrt(int(x * 2**(n*2))), 2**n).
Square root of x within 1/10**n of the correct value, where x is a multiple of 1/10**n: Fraction(math.isqrt(int(x * 10**(n*2))), 10**n).
In the foregoing, i or x must be 0 or greater.
Binary search
Disclaimer: this is for a more specialised use-case. This method might not be practical in all circumstances.
Benefits:
can find integer values (i.e. which integer is the root?)
no need to convert to float, so better precision (the method can be adapted to floats as well)
I personally implemented this one for a crypto CTF challenge (RSA cube root attack), where I needed a precise integer value.
The general idea can be extended to any other root.
def int_squareroot(d: int) -> tuple[int, bool]:
    """Try calculating integer squareroot and return if it's exact"""
    # The +1 on the upper bound keeps small inputs (e.g. 4) inside the bracket.
    left, right = 1, (d+1)//2 + 1
    while left < right-1:
        x = (left+right)//2
        if x**2 > d:
            right = x
        else:
            left = x
    return left, left**2 == d
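A few example calls:

>>> int_squareroot(64)
(8, True)
>>> int_squareroot(17)
(4, False)
>>> int_squareroot(1 << 100)  # still exact far beyond float range
(1125899906842624, True)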
EDIT:
As @wjandrea has also pointed out, this example code can NOT compute non-integer square roots. This is a side-effect of the fact that it does not convert anything into floats, so no precision is lost. If the root is an integer, you get that back. If it's not, you get the biggest number whose square is smaller than your number. I updated the code so that it also returns a bool indicating whether the value is exact, and also fixed an issue causing it to loop infinitely (also pointed out by @wjandrea). This implementation of the general method still works kind of weird for smaller numbers, but above 10 I had no problems with it.
Overcoming the issues and limits of this method/implementation:
For smaller numbers, you can just use all the other methods from other answers. They generally use floats, which might mean a loss of precision, but for small integers that is no problem at all. All of those float-based methods share the same (or nearly the same) limit.
If you still want to use this method and get float results, it should be trivial to convert it to use floats too. Note that doing so will reintroduce precision loss, which is this method's unique benefit over the others; in that case you might as well use any of the other answers. I think the Newton's method version converges a bit faster, but I'm not sure.
For larger numbers, where loss of precision with floats comes into play, this method can give results closer to the actual answer (depending on how big the input is). If you want to work with non-integers in this range, you can use other types, for example fixed-precision numbers, in this method too.
Edit 2, on other answers:
Currently, and AFAIK, the only other answer that has similar or better precision for large numbers than this implementation is the one that suggests SymPy, by Eric Duminil. That version is also easier to use and works for any kind of number; the only downside is that it requires SymPy. My implementation is free from any huge dependencies, if that is what you are looking for.
Arbitrary precision square root
This variation uses string manipulations to convert a string which represents a decimal floating-point number to an int, calls math.isqrt to do the actual square root extraction, and then formats the result as a decimal string. math.isqrt rounds down, so all produced digits are correct.
The input string, num, must use plain float format: 'e' notation is not supported. The num string can be a plain integer, and leading zeroes are ignored.
The digits argument specifies the number of decimal places in the result string, i.e., the number of digits after the decimal point.
from math import isqrt

def str_sqrt(num, digits):
    """ Arbitrary precision square root

        num arg must be a string
        Return a string with `digits` after
        the decimal point

        Written by PM 2Ring 2022.01.26
    """
    int_part, _, frac_part = num.partition('.')
    num = int_part + frac_part
    # Determine the required precision
    width = 2 * digits - len(frac_part)
    # Truncate or pad with zeroes
    num = num[:width] if width < 0 else num + '0' * width
    s = str(isqrt(int(num)))
    if digits:
        # Pad, if necessary
        s = '0' * (1 + digits - len(s)) + s
        s = f"{s[:-digits]}.{s[-digits:]}"
    return s
Test
print(str_sqrt("2.0", 30))
Output
1.414213562373095048801688724209
For small numbers of digits, it's faster to use decimal.Decimal.sqrt. Around 32 digits or so, str_sqrt is roughly the same speed as Decimal.sqrt. But at 128 digits, str_sqrt is 2.2× faster than Decimal.sqrt, at 512 digits, it's 4.3× faster, at 8192 digits, it's 7.4× faster.
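For comparison, a quick Decimal.sqrt call at 32 significant digits (assuming the default context rounding):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 32
>>> Decimal(2).sqrt()
Decimal('1.4142135623730950488016887242097')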
Here's a live version running on the SageMathCell server.
find square-root of a number
while True:
    num = int(input("Enter a number:\n>>"))
    for i in range(2, num):
        if num % i == 0:
            if i*i == num:
                print("Square root of", num, "==>", i)
                break
    else:
        kd = (num**0.5)  # (num**(1/2))
        print("Square root of", num, "==>", kd)
OUTPUT:
Enter a number: 24
Square root of 24 ==> 4.898979485566356
Enter a number: 36
Square root of 36 ==> 6
Enter a number: 49
Square root of 49 ==> 7
I would like to generate a random number n such that n is in the range (a,b) or (a,b], where a < b. Is this possible in Python? It seems the only choices are a + random.random()*(b-a), which includes [a,b), or random.uniform(a,b), which includes [a,b], so neither meets my needs.
Computer generation of "random" numbers is tricky, and especially of "random" floats. You need to think long & hard about what you really want. In the end, you'll need to build something on top of integers, not directly out of floats.
Under the covers, in Python (and every other language using the Mersenne Twister's source code), generating a "random" IEEE-754 double (Python's basic random.random()) really works by generating a random 53-bit integer, then dividing by 2**53:
randrange(2**53) / 9007199254740992.0
That's why the output range is [0.0, 1.0), but not all representable floats in that range are equally likely. Only the ones that can be expressed in the form I/2**53 for an integer 0 <= I < 2**53. For example, the float 1.0 / 2**60 can never be returned.
There are no "real numbers" here, just representable binary-floating-point numbers, so to answer your question first requires that you specify the exact set of those from which you're trying to pick.
If the answer is that you don't want to get that picky, then the distinction between open and closed is also too picky to bother with. If you can specify the precise set, then the solution is to generate more-or-less obvious random integers that map to your output set.
For example, if you want to pick "random" floats from [3.0, 6.0] with just 2 bits after the radix point, there are 13 possible outputs. So the first step is
i = random.randrange(13)
Then map to the range of interest:
return 3.0 + i / 4.0
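Putting those two steps together (a sketch of the same idea, with the [3.0, 6.0] range and 0.25 grid spacing as assumed examples):

import random

def random_on_grid(a=3.0, b=6.0, step=0.25):
    # Pick uniformly from the closed set {a, a+step, ..., b}.
    n = round((b - a) / step)
    return a + random.randrange(n + 1) * step

print(random_on_grid())  # e.g. 4.25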
EDIT: USELESS BUT EDUCATIONAL ;-)
As noted in the comments, picking uniformly from all representable floats x with 0.0 < x < 1.0 can be done, but is very far from being uniformly distributed across that range. There are, for example, 2**52 representable floats in [0.5, 1.0), but also 2**52 representable floats in [0.25, 0.5), and ... in [2.0**-i, 2.0**(1-i)) for increasing i until the number of representable floats starts shrinking when we hit the subnormal range, eventually falling to none when we underflow to 0 completely.
As bit patterns they're very simple, though: the set of representable IEEE-754 doubles (Python floats on almost all platforms) in (0, 1) consists of, when viewing the bit patterns as integers, simply
range(1, 0x3ff0000000000000)
So a function to generate each of those with equal likelihood is straightforward to write using bit-fiddling tricks:
from struct import unpack
from random import randrange
def gen01():
i = randrange(1, 0x3ff0000000000000)
as_bytes = i.to_bytes(8, "big")
return unpack(">d", as_bytes)[0]
Just run that a few times to see why it's useless - it's very heavily skewed toward the 0.0 end of the range:
>>> for i in range(10):
... print(gen01())
9.796357610869274e-104
4.125848254595866e-197
1.8114434720880952e-253
1.4937625148849258e-285
1.0537573744489343e-304
2.79008159472542e-58
4.718459887295062e-217
2.7996009087703915e-295
3.4129442284798105e-170
2.299402306630583e-115
random.randint(a, b) returns a random integer N with a <= N <= b, so it covers the closed range [a, b] for integers only - it doesn't produce floats or an open/half-open range. https://docs.python.org/2/library/random.html
Though a bit tricky, you may use np.random.rand to generate a random number in (a, b]:
import numpy as np
size = 10 # No. of random numbers to be generated
a, b = 0, 10 # Can be any values
rand_num = np.random.rand(size) # [0, 1)
rand_num *= -1 # (-1, 0]
rand_num += 1 # (0, 1]
rand_num = a + rand_num * (b - a) # (a, b]
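The same flip works without NumPy (a minimal sketch; note that floating-point rounding can, in rare cases, still round the result down to exactly a):

import random

def uniform_half_open(a, b):
    # random.random() is in [0, 1), so b - u*(b-a) is in (a, b].
    return b - random.random() * (b - a)

print(uniform_half_open(0, 10))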
I've been looking through other answers and I still don't understand the modulo for negative numbers in Python.
For example, the answer by df:
x == (x/y)*y + (x%y)
so it makes sense that (-2)%5 = -2 - (-2/5)*5 = 3
Doesn't this (-2 - (-2/5)*5) equal 0, or am I just crazy?
Modulus operation with negatives values - weird thing?
Same with this
negative numbers modulo in python
Where did he get -2 from?
Lastly, if the sign is dependent on the dividend, why don't negative dividends have the same output as their positive counterparts?
For instance the output of
print([8%5,-8%5,4%5,-4%5])
is
[3, 2, 4, 1]
In Python, modulo is calculated according to two rules:
(a // b) * b + (a % b) == a, and
a % b has the same sign as b.
Combine this with the fact that integer division rounds down (towards −∞), and the resulting behavior is explained.
If you do -8 // 5, you get -1.6 rounded down, which is -2. Multiply that by 5 and you get -10; 2 is the number that you'd have to add to that to get -8. Therefore, -8 % 5 is 2.
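Checking each step in the interpreter:

>>> -8 // 5
-2
>>> (-8 // 5) * 5
-10
>>> -8 - (-10)
2
>>> -8 % 5
2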
In Python, a // b is defined as floor(a/b), as opposed to most other languages where integer division is defined as trunc(a/b). There is a corresponding difference in the interpretation of a % b = a - (a // b) * b.
The reason for this is that Python's definition of the % operator (and divmod) is generally more useful than that of other languages. For example:
def time_of_day(seconds_since_epoch):
    minutes, seconds = divmod(seconds_since_epoch, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return '%02d:%02d:%02d' % (hours, minutes, seconds)
With this function, time_of_day(12345) returns '03:25:45', as you would expect.
But what time is it 12345 seconds before the epoch? With Python's definition of divmod, time_of_day(-12345) correctly returns '20:34:15'.
What if we redefine divmod to use the C definition of / and %?
def divmod(a, b):
    q = int(a / b)  # I'm using 3.x
    r = a - b * q
    return (q, r)
Now, time_of_day(-12345) returns '-3:-25:-45', which isn't a valid time of day. If the standard Python divmod function were implemented this way, you'd have to write special-case code to handle negative inputs. But with floor-style division, like my first example, it Just Works.
The rationale behind this is really the mathematical definition of least residue. Python respects this definition, whereas in most other programming languages the modulus operator is really more like a 'remainder after division' operator. To compute the least residue of -5 % 11, simply add 11 to -5 until you obtain a positive integer in the range [0,10]; the result is 6.
When you divide ints, (-2/5)*5 does not evaluate to -2, as it would in the algebra you're used to. Try breaking it down into two steps, first evaluating the part in the parentheses.
(-2/5) * 5 = (-1) * 5
(-1) * 5 = -5
The reason for step 1 is that you're doing int division, which in python 2.x returns the equivalent of the float division result rounded down to the nearest integer.
In python 3 and higher, 2/5 will return a float, see PEP 238.
Check out this BetterExplained article and look # David's comment (No. 6) to get what the others are talking about.
Since we're working w/ integers, we do int division which, in Python, floors the answer as opposed to C. For more on this read Guido's article.
As for your question:
>>> 8 % 5 #B'coz (5*1) + *3* = 8
3
>>> -8 % 5 #B'coz (5*-2) + *2* = -8
2
Hope that helped. It confused me in the beginning too (it still does)! :)
Say -a % b needs to be computed, for example:
r = -11 % 10
Take the absolute value 11 and find the next number after it that is perfectly divisible by 10, i.e. a number which, on dividing by 10, gives remainder 0.
In the above case it's 20, which on dividing by 10 gives 0.
Hence, 20 - 11 = 9 is the result of -11 % 10.
The concept: if 60 marbles need to be equally divided among 8 people, what you actually get after dividing 60/8 is 7.5; since you can't halve the marbles, the next value after 60 that is perfectly divisible by 8 is 64. Hence 4 more marbles need to be added to the lot so that everybody shares the same joy of marbles.
This is how Python does it when negative numbers are divided using the modulus operator.
Exactly how does the % operator work in Python, particularly when negative numbers are involved?
For example, why does -5 % 4 evaluate to 3, rather than, say, -1?
Unlike C or C++, Python's modulo operator (%) always returns a number having the same sign as the denominator (divisor). Your expression yields 3 because
(-5) / 4 = -1.25 --> floor(-1.25) = -2
(-5) % 4 = 3, since -5 = (-2) × 4 + 3.
It is chosen over the C behavior because a nonnegative result is often more useful. An example is to compute week days. If today is Tuesday (day #2), what is the week day N days before? In Python we can compute with
return (2 - N) % 7
but in C, if N ≥ 3, we get a negative number, which is not a valid day number, and we need to manually fix it up by adding 7:
int result = (2 - N) % 7;
return result < 0 ? result + 7 : result;
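Back in Python, a quick check of the weekday formula for a few offsets (today = Tuesday = day #2):

>>> [(2 - N) % 7 for N in range(5)]
[2, 1, 0, 6, 5]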
(See http://en.wikipedia.org/wiki/Modulo_operator for how the sign of result is determined for different languages.)
Here's an explanation from Guido van Rossum:
http://python-history.blogspot.com/2010/08/why-pythons-integer-division-floors.html
Essentially, it's so that a/b = q with remainder r preserves the relationships b*q + r = a and 0 <= r < b.
In Python, the modulo operator works like this:
mod = n - math.floor(n/base) * base
so the result is (for your case):
mod = -5 - floor(-1.25) * 4
mod = -5 - (-2*4)
mod = 3
whereas other languages such as C, Java, and JavaScript use truncation instead of floor:
mod = n - int(n/base) * base
which results in:
mod = -5 - int(-1.25) * 4
mod = -5 - (-1*4)
mod = -1
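Both formulas are easy to turn into runnable helpers (a small sketch of the definitions above):

import math

def floor_mod(n, base):   # what Python's % does
    return n - math.floor(n / base) * base

def trunc_mod(n, base):   # C-style remainder
    return n - int(n / base) * base

print(floor_mod(-5, 4), trunc_mod(-5, 4))  # 3 -1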
If you need more information about rounding in python, read this.
Other answers, especially the selected one, have clearly answered this question quite well. But I would like to present a graphical approach that might be easier to understand, along with Python code to perform normal mathematical modulo in Python.
Python Modulo for Dummies
The modulo function is directional: it describes how much further or back we have to move after the jumps we take along the number line (the X-axis of infinite numbers) during division.
So let's say you were doing 7%3
So in the forward direction, your answer would be +1, but in the backward direction your answer would be -2. Both of these are correct mathematically.
Similarly, you would have 2 moduli for negative numbers as well. For example, -7 % 3 can result in either -1 or +2, as shown:
[Figures: number-line jumps in the forward direction and in the backward direction]
In mathematics, we choose inward jumps, i.e. forward direction for a positive number and backward direction for negative numbers.
But in Python, we jump in the forward direction for every modulo operation with a positive divisor. Hence your confusion:
>>> -5 % 4
3
>>> 5 % 4
1
Here is the Python code for inward-jump-type modulo:
def newMod(a, b):
    res = a % b
    return res if not res else res - b if a < 0 else res
which would give -
>>> newMod(-5,4)
-1
>>> newMod(5,4)
1
Many people would oppose the inward-jump method, but my personal opinion is that this one is better!
As pointed out, Python modulo makes a well-reasoned exception to the conventions of other languages.
This gives negative numbers a seamless behavior, especially when used in combination with the // integer-divide operator, as % modulo often is (as in the built-in divmod()):
for n in range(-8, 8):
    print n, n//4, n%4
Produces:
-8 -2 0
-7 -2 1
-6 -2 2
-5 -2 3
-4 -1 0
-3 -1 1
-2 -1 2
-1 -1 3
0 0 0
1 0 1
2 0 2
3 0 3
4 1 0
5 1 1
6 1 2
7 1 3
Python % always outputs zero or positive*
Python // always rounds toward negative infinity
* ... as long as the right operand is positive. On the other hand 11 % -10 == -9
There is no one best way to handle integer division and mods with negative numbers. It would be nice if a/b was the same magnitude and opposite sign of (-a)/b. It would be nice if a % b was indeed a modulo b. Since we really want a == (a/b)*b + a%b, the first two are incompatible.
Which one to keep is a difficult question, and there are arguments for both sides. C and C++ round integer division towards zero (so a/b == -((-a)/b)), and apparently Python doesn't.
You can use:
result = numpy.fmod(x, y)
It will keep the sign of x (the dividend); see the numpy.fmod() documentation.
It's also worth mentioning that division in Python is different from C:
Consider
>>> x = -10
>>> y = 37
in C you expect the result
0
what is x/y in python?
>>> print x/y
-1
and % is modulo - not the remainder! While x%y in C yields
-10
Python yields:
>>> print x%y
27
You can get both behaviors as in C:
The division:
>>> from math import trunc
>>> d = trunc(float(x)/y)
>>> print d
0
And the remainder (using the division from above):
>>> r = x - d*y
>>> print r
-10
This calculation is maybe not the fastest, but it works for any sign combination of x and y and achieves the same results as in C, while avoiding conditional statements.
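Wrapped up as a reusable helper (a sketch of the recipe above):

from math import trunc

def c_divmod(x, y):
    # C-style: the quotient truncates toward zero; the remainder takes x's sign.
    d = trunc(float(x) / y)
    return d, x - d * y

print(c_divmod(-10, 37))  # (0, -10)
print(c_divmod(10, -37))  # (0, 10)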
That's what modulo is used for. If you take the modulo of a series of numbers, it produces a repeating cycle of values. Say:
ans = num % 3
num   ans
  3     0
  2     2
  1     1
  0     0
 -1     2
 -2     1
 -3     0
I also thought it was a strange behavior of Python. It turns out that I was not solving the division well (on paper); I was giving a value of 0 to the quotient and a value of -5 to the remainder. Terrible... I forgot the geometric representation of integers numbers. By recalling the geometry of integers given by the number line, one can get the correct values for the quotient and the remainder, and check that Python's behavior is fine. (Although I assume that you have already resolved your concern a long time ago).
@Deekshant has explained it well using visualisation. Another way to understand % (modulo) is to ask a simple question:
What is the nearest smaller number to the dividend that is divisible by the divisor on the X-axis?
Let's have a look at few examples.
5 % 3
5 is the dividend, 3 is the divisor. If you ask the question above, 3 is the nearest smaller number that is divisible by the divisor, so the answer is 5 - 3 = 2. For a positive dividend, the nearest smaller multiple always lies to its left on the number line.
-5 % 3
The nearest smaller number that is divisible by 3 is -6, so the answer is -5 - (-6) = 1.
-5 % 4
The nearest smaller number that is divisible by 4 is -8, so the answer is -5 - (-8) = 3.
Python answers every modulo expression with this method. Hopefully that makes it clear how such expressions will execute; the sketch below spells out the idea in code.
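That "nearest smaller number" is exactly what floor division finds (a minimal sketch, assuming a positive divisor):

def nearest_smaller_multiple(dividend, divisor):
    # The largest multiple of divisor that is <= dividend.
    return (dividend // divisor) * divisor

print(nearest_smaller_multiple(5, 3))   # 3  -> 5 % 3 == 2
print(nearest_smaller_multiple(-5, 3))  # -6 -> -5 % 3 == 1
print(nearest_smaller_multiple(-5, 4))  # -8 -> -5 % 4 == 3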
I attempted to write a general answer covering all input cases, because many people ask about various special cases (not just the one in OP, but also especially about negative values on the right-hand side) and it's really all the same question.
What does a % b actually give us in Python, explained in words?
Assuming that a and b are either float and/or int values, finite (not math.inf, math.nan etc.) and that b is not zero....
The result c is the unique number with the sign of b, such that a - c is divisible by b and abs(c) < abs(b). It will be an int when a and b are both int, and a float (even if it is exactly equal to an integer) when either a or b is a float.
For example:
>>> -9 % -5
-4
>>> 9 % 5
4
>>> -9 % 5
1
>>> 9 % -5
-1
The sign preservation also works for floating-point numbers; even when a is divisible by b, it is possible to get distinct 0.0 and -0.0 results (recalling that zero is signed in floating-point), and the sign will match b.
Proof of concept:
import math

def sign(x):
    return math.copysign(1, x)

def test(a: [int, float], b: [int, float]):
    c = a % b
    if isinstance(a, int) and isinstance(b, int):
        assert isinstance(c, int)
        assert c * b >= 0  # same sign or c == 0
    else:
        assert isinstance(c, float)
        assert sign(c) == sign(b)
    assert abs(c) < abs(b)
    assert math.isclose((a - c) / b, round((a - c) / b))
It's a little hard to phrase this in a way that covers all possible sign and type combinations and accounts for floating-point imprecision, but I'm pretty confident in the above. One specific gotcha for floats is that, because of that floating-point imprecision, the result for a % b might sometimes appear to give b rather than 0. In fact, it simply gives a value very close to b, because the result of the division wasn't quite exact:
>>> # On my version of Python
>>> 3.5 % 0.1
0.09999999999999981
>>> # On some other versions, it might appear as 0.1,
>>> # because of the internal rules for formatting floats for display
What if abs(a) < abs(b)?
A lot of people seem to think this is a special case, or for some reason have difficulty understanding what happens. But there is nothing special here.
For example: consider -1 % 3. How much, as a positive quantity (because 3 is positive), do we have to subtract from -1, in order to get a result divisible by 3? -1 is not divisible by 3; -1 - 1 is -2, which is also not divisible; but -1 - 2 is -3, which is divisible by 3 (dividing in exactly -1 times). By subtracting 2, we get back to divisibility; thus 2 is our predicted answer - and it checks out:
>>> -1 % 3
2
What about with b equal to zero?
It will raise ZeroDivisionError, regardless of whether b is integer zero, floating-point positive zero, or floating-point negative zero. In particular, it will not result in a NaN value.
What about special float values?
As one might expect, nan and signed infinity values for a produce a nan result, as long as b is not zero (which overrides everything else). nan values for b result in nan as well. NaN cannot be signed, so the sign of b is irrelevant in these cases.
Also as one might expect, inf % inf gives nan, regardless of the signs. If we are sharing out an infinite amount of as to an infinite amount of bs, there's no way to say "which infinity is bigger" or by how much.
The only slightly confusing cases are when b is a signed infinity value:
>>> from math import inf
>>> 0 % inf
0.0
>>> 0 % -inf
-0.0
>>> 1 % inf
1.0
>>> 1 % -inf
-inf
As always, the result takes the sign of b. 0 is divisible by anything (except NaN), including infinity. But nothing else divides evenly into infinity. If a has the same sign as b, the result is simply a (as a floating-point value); if the signs differ, it will be b. Why? Well, consider -1 % inf. There isn't a finite value we can subtract from -1, in order to get to 0 (the unique value that we can divide into infinity). So we have to keep going, to infinity. The same logic applies to 1 % -inf, with all the signs reversed.
What about other types?
It's up to the type. For example, the Decimal type overloads the operator so that the result takes the sign of the numerator, even though it functionally represents the same kind of value that a float does. And, of course, strings use it for something completely different.
Why not always give a positive result, or take the sign of a?
The behaviour is motivated by integer division. While % happens to work with floating-point numbers, it's specifically designed to handle integer inputs, and the results for floats fall in line to be consistent with that.
After making the choice for a // b to give a floored division result, the % behaviour preserves a useful invariant:
>>> def check_consistency(a, b):
...     assert (a // b) * b + (a % b) == a
...
>>> for a in range(-10, 11):
...     for b in range(-10, 11):
...         if b != 0:
...             check_consistency(a, b)  # no assertion is raised
...
In other words: adding the modulus value back, corrects the error created by doing an integer division.
(This, of course, lets us go back to the first section, and say that a % b simply computes a - ((a // b) * b). But that just kicks the can down the road; we still need to explain what // is doing for signed values, especially for floats.)
One practical application for this is when converting pixel coordinates to tile coordinates. // tells us which tile contains the pixel coordinate, and then % tells us the offset within that tile. Say we have 16x16 tiles: then the tile with x-coordinate 0 contains pixels with x-coordinates 0..15 inclusive, tile 1 corresponds to pixel coordinate values 16..31, and so on. If the pixel coordinate is, say, 100, we can easily calculate that it is in tile 100 // 16 == 6, and offset 100 % 16 == 4 pixels from the left edge of that tile.
We don't have to change anything in order to handle tiles on the other side of the origin. The tile at coordinate -1 needs to account for the next 16 pixel coordinates to the left of 0 - i.e., -16..-1 inclusive. And indeed, we find that e.g. -13 // 16 == -1 (so the coordinate is in that tile), and -13 % 16 == 3 (that's how far it is from the left edge of the tile).
By setting the tile width to be positive, we defined that the within-tile coordinates progress left-to-right. Therefore, knowing that a point is within a specific tile, we always want a positive result for that offset calculation. Python's % operator gives us that, on both sides of the y-axis.
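The same numbers as a tiny runnable sketch (16-pixel tiles, as assumed above):

TILE = 16

def tile_and_offset(pixel_x):
    # Which tile contains pixel_x, and how far is it from that tile's left edge?
    return pixel_x // TILE, pixel_x % TILE

print(tile_and_offset(100))  # (6, 4)
print(tile_and_offset(-13))  # (-1, 3)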
What if I want it to work another way?
math.fmod will take the sign of the numerator. It will also return a floating-point result, even for two integer inputs, and raises an exception for signed-infinity a values with non-nan b values:
>>> math.fmod(-13, 16)
-13.0
>>> math.fmod(13, -16)
13.0
>>> math.fmod(1, -inf) # not -inf
1.0
>>> math.fmod(inf, 1.0) # not nan
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: math domain error
It otherwise handles special cases the same way - a zero value for b raises an exception; otherwise any nan present causes a nan result.
If this also doesn't suit your needs, then carefully define the exact desired behaviour for every possible corner case, figure out where they differ from the built-in options, and make a wrapper function.