I was trying to understand the math behind calculations using the /, // and % operators by doing some trials, and I found the results match a calculator only when using Decimal(); without it the results are kind of confusing. I added the comment #No Idea to my code to mark the points I don't understand. For example:
In this trial for the % operator, applying signed and unsigned numbers, with and without Decimal(), the results are:
>>> 9%5 #This result will be the remainder
4
>>> (-9)%5 #No Idea
1
>>> Decimal(9)% Decimal(5) #This result will be the remainder
Decimal('4')
>>> Decimal(-9)% Decimal(5) #The result will be the signed remainder
Decimal('-4')
In this trial for the // operator, using signed and unsigned numbers with and without Decimal(), the results are:
>>> 9//5 #int result
1
>>> -9//5 #No Idea
-2
>>> Decimal(9)/Decimal(5) #Same result as using calculator
Decimal('1.8')
>>> Decimal(-9)//Decimal(5) #No Idea
Decimal('-1')
Please consider that this question is not a duplicate. I have done some research, but the answered questions I found only explain the // operator with positive numbers; they don't cover negative numbers or Decimal(), and they say nothing about the % operator.
So it would be helpful if someone could explain why the results differ and how they are calculated.
Explanation for the behaviour of integers
From the Python documentation:
Division of integers yields a float, while floor division of integers
results in an integer; the result is that of mathematical division
with the ‘floor’ function applied to the result.
Therefore, an integer division (//) of a negative and a positive number works as follows:
-9 // 5 == floor(-9 / 5) == floor(-1.8) == -2
The modulo operator gives the remainder of that floor division, i.e. x % y == x - (x // y) * y. In your example:
-9 % 5 == -9 - (-9 // 5 * 5) == (-9) - (-2 * 5) == (-9) - (-10) == 1
The documentation also says:
The modulo operator always yields a result with the same sign as its
second operand (or zero); the absolute value of the result is strictly
smaller than the absolute value of the second operand.
But that comes naturally from the formula above, e.g.:
9 % -5 == 9 - (9 // (-5) * (-5)) == 9 - (-2 * (-5)) == 9 - 10 == -1
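A quick way to convince yourself of both the identity and the sign rule is to loop over the four sign combinations (a small sketch of my own, not taken from the documentation):
for x, y in [(9, 5), (-9, 5), (9, -5), (-9, -5)]:
    q, r = x // y, x % y
    assert q * y + r == x                 # x == (x // y) * y + (x % y)
    assert r == 0 or (r > 0) == (y > 0)   # remainder takes the sign of y
    print(x, y, q, r)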
decimal.Decimal is different
The documentation explains the difference well:
There are some small differences between arithmetic on Decimal objects
and arithmetic on integers and floats. When the remainder operator %
is applied to Decimal objects, the sign of the result is the sign of
the dividend rather than the sign of the divisor:
>>> (-7) % 4
1
>>> Decimal(-7) % Decimal(4)
Decimal('-3')
The integer division operator // behaves analogously, returning the
integer part of the true quotient (truncating towards zero) rather
than its floor, so as to preserve the usual identity x == (x // y) * y
+ x % y:
>>> -7 // 4
-2
>>> Decimal(-7) // Decimal(4)
Decimal('-1')
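To see the two behaviours side by side, here is a minimal comparison sketch (divmod is used only to show quotient and remainder together; both types still satisfy x == (x // y) * y + x % y):
from decimal import Decimal

for x, y in [(-9, 5), (9, -5)]:
    print(x, y, divmod(x, y), divmod(Decimal(x), Decimal(y)))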
As I understand the question, the OP is asking about the difference in behavior between Python integers and Decimals. I don't think there is any good reason for it. Both choices are possible, but it is a bit confusing for the user that they differ.
Let's call the numerator n and the denominator d, and split the result into the integer quotient i and the remainder r. This means that
n // d = i
n % d = r
For the operations to make sense, we need
i * d + r == n
For n = -9 and d = 5 we see that this holds both for i = -1, r = -4 and for i = -2, r = 1, as can be seen by
(i = -1, r = -4) => -1 * 5 + -4 == -9
(i = -2, r = 1) => -2 * 5 + 1 == -9
Now, Python's integer division is defined to always round towards minus infinity (down), whereas the Decimal implementation has chosen to round towards zero. Rounding towards zero means positive quotients are rounded down, whereas negative quotients are rounded up.
Rounding towards zero is also the choice made in the C language. However, my personal opinion is that the Python choice is much saner, especially coming from a hardware background. And given that this is the choice made in Python, I find it strange (and unfortunate) that Decimal has chosen to follow the C behaviour.
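To make the two valid decompositions concrete, here is a small sketch showing that both pairs satisfy i * d + r == n; Python's ints pick one pair and Decimal picks the other:
from decimal import Decimal

n, d = -9, 5
print(divmod(n, d))                    # ints floor the quotient: (-2, 1)
print(divmod(Decimal(n), Decimal(d)))  # Decimal truncates toward zero: (Decimal('-1'), Decimal('-4'))
assert (-2) * d + 1 == n and (-1) * d + (-4) == n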
This question already has answers here:
How to find length of digits in an integer?
The question asks:
<< BACKGROUND STORY:
Suppose we’re designing a point-of-sale and order-tracking system for a new burger
joint. It is a small joint and it only sells 4 options for combos: Classic Single
Combo (hamburger with one patty), Classic Double With Cheese Combo (2 patties),
and Classic Triple with Cheese Combo (3 patties), Avant-Garde Quadruple with
Guacamole Combo (4 patties). We shall encode these combos as 1, 2, 3, and 4
respectively. Each meal can be biggie sized to acquire a larger box of fries and
drink. A biggie sized combo is represented by 5, 6, 7, and 8 respectively, for the
combos 1, 2, 3, and 4 respectively. >>
Write an iterative function called order_size which takes an order and returns the number of combos in the order. For example, order_size(237) -> 3.
Whereby I should have
order_size(0) = 0
order_size(6) = 1
order_size(51) = 2
order_size(682) = 3
My code is:
def order_size(order):
    # Fill in your code here
    if order > 0:
        size = 0
        while order > 0:
            size += 1
            order = order // 10
            return size
        else:
            return 0
But I don't get the order // 10 portion. I'm guessing it's wrong, but I can't think of anything to substitute for it.
No need for an iterative function; you can measure the length of the number by "turning" it into a string:
num = 127
order = len(str(num))
print(order) # prints 3
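If you want that idea in the shape the exercise asks for, here is a minimal sketch (my own, assuming a nonnegative integer input and the order_size(0) == 0 requirement from the question):
def order_size(order):
    # string-based variant of the same idea
    return len(str(order)) if order > 0 else 0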
But if you really want to do it iteratively:
def order(num):
    res = 0
    while num > 0:
        num = int(num / 10)
        res += 1
    return res

print(order(127)) # prints 3
How about this:
from math import log

def order_size(order):
    if order <= 0: return 0
    return int(log(order, 10) + 1)
Some samples (left column order, right column order size):
0 0
5 1
10 2
15 2
20 2
100 3
893 3
10232 5
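If you want to sanity-check the log-based version against the string length, a quick loop like this (my own sketch; the function is repeated so the snippet is self-contained) does it:
from math import log

def order_size(order):
    if order <= 0: return 0
    return int(log(order, 10) + 1)

for n in [0, 5, 15, 37, 893, 10232]:
    assert order_size(n) == (len(str(n)) if n > 0 else 0)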
There are a couple errors in your suggested answer.
The else statement and both return statements should be indented a level less.
Your tester questions indicate you are supposed to count the digits for nonnegative integers, not just positive ones (i.e. your algorithm must work on 0).
Here is my suggested alternative based on yours and the criteria of the task.
def order_size(order):
    # Fill in your code here
    if order >= 0:
        size = 0
        while order > 0:
            size += 1
            order = order // 10
        return size
    else:
        return 0
Notice that
By using an inclusive inequality in the if condition, I am allowing 0 to reach the counting logic, as I would any other nonnegative number; for 0 the while loop body never runs, so size stays 0, matching the required order_size(0) == 0.
By pushing the first return statement back, it executes after the while loop. Thus after the order is counted in the variable size, it is returned.
By pushing the else: back, it executes in the event the if condition is not met (i.e. when the number passed to order_size(n) is negative).
By pushing the second return back, it is syntactically correct, and contained in the else block, as it should be.
Now that's taken care of, let me address this:
But I don't get the order // 10 portion.
In Python 3, // is the floor division (a.k.a. integer division) operator.
It effectively performs a standard division, then rounds down (towards negative infinity) to the nearest integer.
Here are some examples to help you out. Pay attention to the last one especially.
10 // 2 # Returns 5 since 10/2 = 5, rounded down is 5
2 // 2 # Returns 1 since 2/2 = 1, rounded down is 1
11 // 2 # Returns 5 since 11/2 = 5.5, rounded down is 5
4 // 10 # Returns 0 since 4/10 = 0.4, rounded down is 0
(-4) // 10 # Returns -1 since (-4)/10 = -0.4, rounded down is -1
For nonnegative numerator n, n // d can be seen as the number of times d fits into n whole.
So for a number like n = 1042, n // 10 would give you how many whole times 10 fits into 1042.
This is 104 (since 1042/10 = 104.2, and rounded down we have 104).
Notice how we've effectively knocked off a digit?
Let's have a look at your while loop.
while order > 0:
    size += 1
    order = order // 10
Every time a digit is "knocked off" order, the size counter is incremented, thus counting how many digits you can knock off before you hit your terminating step.
Termination occurs when you knock off the final (single) digit. For example, say you reduced order to 1 (from 1042); then 1 // 10 returns 0.
So once all the digits are "knocked off" and counted, your order will have a value of 0. The while loop will then terminate, and your size counter will be returned.
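Here is a small trace of that loop for order = 1042, just to illustrate the knocking-off (my own sketch):
order, size = 1042, 0
while order > 0:
    order, size = order // 10, size + 1
    print(order, size)   # prints 104 1, then 10 2, then 1 3, then 0 4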
Hope this helps!
Disclaimer: Perhaps this isn't what you want to hear, but many Universities consider copying code from the Internet and passing it off as your own to be plagiarism.
This question already has answers here:
How does the modulo (%) operator work on negative numbers in Python?
I've been looking through other answers and I still don't understand modulo for negative numbers in Python.
For example the answer by df
x == (x/y)*y + (x%y)
so it makes sense that (-2)%5 = -2 - (-2/5)*5 = 3
Doesn't (-2 - (-2/5)*5) equal 0, or am I just crazy?
Modulus operation with negatives values - weird thing?
Same with this
negative numbers modulo in python
Where did he get -2 from?
Lastly, if the sign is dependent on the dividend, why don't negative dividends have the same output as their positive counterparts?
For instance the output of
print([8%5,-8%5,4%5,-4%5])
is
[3, 2, 4, 1]
In Python, modulo is calculated according to two rules:
(a // b) * b + (a % b) == a, and
a % b has the same sign as b.
Combine this with the fact that integer division rounds down (towards −∞), and the resulting behavior is explained.
If you do -8 // 5, you get -1.6 rounded down, which is -2. Multiply that by 5 and you get -10; 2 is the number that you'd have to add to that to get -8. Therefore, -8 % 5 is 2.
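You can check both rules directly against the values from the question (a small sketch of my own):
for a in [8, -8, 4, -4]:
    b = 5
    assert (a // b) * b + (a % b) == a                # rule 1
    assert (a % b) == 0 or ((a % b) > 0) == (b > 0)   # rule 2
    print(a, a // b, a % b)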
In Python, a // b is defined as floor(a/b), as opposed to most other languages where integer division is defined as trunc(a/b). There is a corresponding difference in the interpretation of a % b = a - (a // b) * b.
The reason for this is that Python's definition of the % operator (and divmod) is generally more useful than that of other languages. For example:
def time_of_day(seconds_since_epoch):
    minutes, seconds = divmod(seconds_since_epoch, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return '%02d:%02d:%02d' % (hours, minutes, seconds)
With this function, time_of_day(12345) returns '03:25:45', as you would expect.
But what time is it 12345 seconds before the epoch? With Python's definition of divmod, time_of_day(-12345) correctly returns '20:34:15'.
What if we redefine divmod to use the C definition of / and %?
def divmod(a, b):
    q = int(a / b)  # I'm using 3.x
    r = a - b * q
    return (q, r)
Now, time_of_day(-12345) returns '-3:-25:-45', which isn't a valid time of day. If the standard Python divmod function were implemented this way, you'd have to write special-case code to handle negative inputs. But with floor-style division, like my first example, it Just Works.
The rationale behind this is really the mathematical definition of least residue. Python respects this definition, whereas in most other programming languages the modulo operator is really more like a 'remainder after division' operator. To compute the least residue of -5 % 11, simply add 11 to -5 until you obtain a positive integer in the range [0, 10]; the result is 6.
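Python's % gives exactly that least residue when the divisor is positive:
>>> -5 % 11
6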
When you divide ints, (-2/5)*5 does not evaluate to -2, as it would in the algebra you're used to. Try breaking it down into two steps, first evaluating the part in the parentheses.
(-2/5) * 5 = (-1) * 5
(-1) * 5 = -5
The reason for step 1 is that you're doing int division, which in Python 2.x returns the equivalent of the float division result rounded down to the nearest integer.
In Python 3 and higher, 2/5 returns a float; see PEP 238.
Check out this BetterExplained article and look at David's comment (No. 6) to get what the others are talking about.
Since we're working with integers, we do int division, which in Python floors the answer, as opposed to C. For more on this, read Guido's article.
As for your question:
>>> 8 % 5 #because (5*1) + *3* = 8
3
>>> -8 % 5 #because (5*-2) + *2* = -8
2
Hope that helped. It confused me in the beginning too (it still does)! :)
Say -a % b needs to be computed, for example:
r = -11 % 10
Find the next number after 11 that is perfectly divisible by 10, i.e. the number which, when divided by 10, gives remainder 0.
In the above case it is 20, which on dividing by 10 gives 0.
Hence, 20 - 11 = 9 is the amount that needs to be added to 11 to reach it, and that is the result: -11 % 10 == 9.
The concept: if 60 marbles need to be equally divided among 8 people, what you actually get from 60/8 is 7.5, but since you can't halve the marbles, the next value after 60 that is perfectly divisible by 8 is 64. Hence 4 more marbles need to be added to the lot so that everybody shares the same joy of marbles.
This is how Python does it when negative numbers are divided using the modulo operator.
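In Python terms, the marble shortfall is what the modulo of the negated count gives (a small illustration; -60 is not in the original question, just the marble story expressed as code):
>>> -60 % 8    # 4 more marbles reach 64, the next multiple of 8
4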
Exactly how does the % operator work in Python, particularly when negative numbers are involved?
For example, why does -5 % 4 evaluate to 3, rather than, say, -1?
Unlike C or C++, Python's modulo operator (%) always returns a number having the same sign as the denominator (divisor). Your expression yields 3 because
(-5) / 4 = -1.25 --> floor(-1.25) = -2
(-5) % 4 = (-2 × 4 + 3) % 4 = 3.
It is chosen over the C behavior because a nonnegative result is often more useful. An example is computing weekdays. If today is Tuesday (day #2), what is the weekday N days before? In Python we can compute it with
return (2 - N) % 7
but in C, if N ≥ 3, we get a negative number, which is not a valid weekday, and we need to manually fix it up by adding 7:
int result = (2 - N) % 7;
return result < 0 ? result + 7 : result;
(See http://en.wikipedia.org/wiki/Modulo_operator for how the sign of result is determined for different languages.)
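As a small sketch of that weekday example in Python (the function name and the day numbering, with Tuesday as day 2, are my own choices):
def weekday_n_days_before(today, n):
    return (today - n) % 7

print(weekday_n_days_before(2, 5))   # 5 days before Tuesday (2) is Thursday (4)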
Here's an explanation from Guido van Rossum:
http://python-history.blogspot.com/2010/08/why-pythons-integer-division-floors.html
Essentially, it's so that a/b = q with remainder r preserves the relationships b*q + r = a and 0 <= r < b.
In Python, the modulo operator works like this:
>>> mod = n - math.floor(n/base) * base
so the result is (for your case):
mod = -5 - floor(-1.25) * 4
mod = -5 - (-2*4)
mod = 3
whereas other languages such as C, Java and JavaScript use truncation instead of floor:
>>> mod = n - int(n/base) * base
which results in:
mod = -5 - int(-1.25) * 4
mod = -5 - (-1*4)
mod = -1
If you need more information about rounding in Python, read this.
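Wrapping the two formulas into helper functions makes the contrast easy to play with (the function names are my own, not standard library ones):
import math

def floor_mod(n, base):    # Python-style
    return n - math.floor(n / base) * base

def trunc_mod(n, base):    # C/Java/JavaScript-style
    return n - int(n / base) * base

print(floor_mod(-5, 4), trunc_mod(-5, 4))   # 3 -1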
Other answers, especially the selected one, have already answered this question quite well. But I would like to present a graphical approach that might be easier to understand, along with Python code to perform normal mathematical modulo in Python.
Python Modulo for Dummies
The modulo function is a directional function that describes how much we have to move forward or backward after the mathematical jumps we take during division along our X-axis of infinite numbers.
So let's say you were doing 7 % 3.
In the forward direction your answer would be +1, but in the backward direction your answer would be -2. Both are correct mathematically.
Similarly, you have two moduli for negative numbers as well. For example, -7 % 3 can result in either -1 or +2, as shown:
(Number-line figures omitted: one showing the forward-direction jump, one the backward-direction jump.)
In mathematics, we choose inward jumps, i.e. forward direction for a positive number and backward direction for negative numbers.
But in Python, we use the forward direction for all modulo operations with a positive divisor. Hence, your confusion -
>>> -5 % 4
3
>>> 5 % 4
1
Here is the Python code for the inward-jump type of modulo:
def newMod(a, b):
    res = a % b
    return res if not res else res - b if a < 0 else res
which would give -
>>> newMod(-5,4)
-1
>>> newMod(5,4)
1
Many people would oppose the inward-jump method, but my personal opinion is that this one is better!!
As pointed out, Python modulo makes a well-reasoned exception to the conventions of other languages.
This gives negative numbers a seamless behavior, especially when used in combination with the // integer-divide operator, as % modulo often is (as in divmod):
for n in range(-8, 8):
    print(n, n // 4, n % 4)
Produces:
-8 -2 0
-7 -2 1
-6 -2 2
-5 -2 3
-4 -1 0
-3 -1 1
-2 -1 2
-1 -1 3
0 0 0
1 0 1
2 0 2
3 0 3
4 1 0
5 1 1
6 1 2
7 1 3
Python % always outputs zero or positive*
Python // always rounds toward negative infinity
* ... as long as the right operand is positive. On the other hand 11 % -10 == -9
There is no one best way to handle integer division and mods with negative numbers. It would be nice if a/b was the same magnitude and opposite sign of (-a)/b. It would be nice if a % b was indeed a modulo b. Since we really want a == (a/b)*b + a%b, the first two are incompatible.
Which one to keep is a difficult question, and there are arguments for both sides. C and C++ round integer division towards zero (so a/b == -((-a)/b)), and apparently Python doesn't.
You can use:
result = numpy.fmod(x, y)
It will keep the sign of x; see the numpy fmod() documentation.
It's also worth mentioning that division in Python is different from C:
Consider
>>> x = -10
>>> y = 37
in C you expect the result
0
What is x // y in Python?
>>> x // y
-1
and % is modulo, not the remainder! While x % y in C yields
-10
Python yields:
>>> x % y
27
You can get both as in C
The division:
>>> from math import trunc
>>> d = trunc(float(x) / y)
>>> d
0
And the remainder (using the division from above):
>>> r = x - d*y
>>> r
-10
This calculation is perhaps not the fastest, but it works for any sign combination of x and y, achieves the same results as in C, and avoids conditional statements.
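If you want that bundled up, a hypothetical little helper (my own sketch, not a standard function) could look like this:
from math import trunc

def c_divmod(x, y):
    # C-style quotient (truncated toward zero) and matching remainder
    q = trunc(x / y)
    return q, x - q * y

print(c_divmod(-10, 37))   # (0, -10), the C results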
That's what modulo is used for. If you apply modulo across a series of numbers, it gives a cycle of values, say ans = num % 3:
num    ans
  3      0
  2      2
  1      1
  0      0
 -1      2
 -2      1
 -3      0
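The same cycle in one line, if you want to reproduce it quickly:
>>> [num % 3 for num in range(3, -4, -1)]
[0, 2, 1, 0, 2, 1, 0]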
I also thought it was strange behavior of Python. It turns out that I was not solving the division well (on paper); I was giving a value of 0 to the quotient and a value of -5 to the remainder. Terrible... I had forgotten the geometric representation of integers. By recalling the geometry of integers given by the number line, one can get the correct values for the quotient and the remainder, and check that Python's behavior is fine. (Although I assume that you resolved your concern a long time ago.)
@Deekshant has explained it well using visualisation. Another way to understand % (modulo) is to ask a simple question:
What is the nearest smaller number to the dividend that is divisible by the divisor on the X-axis?
Let's have a look at few examples.
5 % 3
5 is the dividend, 3 is the divisor. If you ask the question above, 3 is the nearest smaller number that is divisible by the divisor, so the answer would be 5 - 3 = 2. For a positive dividend, that nearest smaller multiple always lies to the left of the dividend on the number line.
-5 % 3
The nearest smaller number that is divisible by 3 is -6, so the answer would be -5 - (-6) = 1.
-5 % 4
The nearest smaller number that is divisible by 4 is -8, so the answer would be -5 - (-8) = 3.
Python answers every modulo expression with this method. I hope this helps you understand how such expressions execute.
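The "nearest smaller multiple" reading translates directly into code (the helper name is my own):
def nearest_smaller_multiple(a, b):
    return (a // b) * b

for a in (5, -5):
    m = nearest_smaller_multiple(a, 3)
    print(a, m, a - m)   # 5 3 2, then -5 -6 1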
I attempted to write a general answer covering all input cases, because many people ask about various special cases (not just the one in the OP, but especially about negative values on the right-hand side), and it's really all the same question.
What does a % b actually give us in Python, explained in words?
Assuming that a and b are either float and/or int values, finite (not math.inf, math.nan etc.) and that b is not zero....
The result c is the unique number with the sign of b, such that a - c is divisible by b and abs(c) < abs(b). It will be an int when a and b are both int, and a float (even if it is exactly equal to an integer) when either a or b is a float.
For example:
>>> -9 % -5
-4
>>> 9 % 5
4
>>> -9 % 5
1
>>> 9 % -5
-1
The sign preservation also works for floating-point numbers; even when a is divisible by b, it is possible to get distinct 0.0 and -0.0 results (recalling that zero is signed in floating-point), and the sign will match b.
Proof of concept:
import math

def sign(x):
    return math.copysign(1, x)

def test(a: [int, float], b: [int, float]):
    c = a % b
    if isinstance(a, int) and isinstance(b, int):
        assert isinstance(c, int)
        assert c * b >= 0  # same sign or c == 0
    else:
        assert isinstance(c, float)
        assert sign(c) == sign(b)
    assert abs(c) < abs(b)
    assert math.isclose((a - c) / b, round((a - c) / b))
It's a little hard to phrase this in a way that covers all possible sign and type combinations and accounts for floating-point imprecision, but I'm pretty confident in the above. One specific gotcha for floats is that, because of that floating-point imprecision, the result for a % b might sometimes appear to give b rather than 0. In fact, it simply gives a value very close to b, because the result of the division wasn't quite exact:
>>> # On my version of Python
>>> 3.5 % 0.1
0.09999999999999981
>>> # On some other versions, it might appear as 0.1,
>>> # because of the internal rules for formatting floats for display
What if abs(a) < abs(b)?
A lot of people seem to think this is a special case, or for some reason have difficulty understanding what happens. But there is nothing special here.
For example: consider -1 % 3. How much, as a positive quantity (because 3 is positive), do we have to subtract from -1, in order to get a result divisible by 3? -1 is not divisible by 3; -1 - 1 is -2, which is also not divisible; but -1 - 2 is -3, which is divisible by 3 (dividing in exactly -1 times). By subtracting 2, we get back to divisibility; thus 2 is our predicted answer - and it checks out:
>>> -1 % 3
2
What about with b equal to zero?
It will raise ZeroDivisionError, regardless of whether b is integer zero, floating-point positive zero, or floating-point negative zero. In particular, it will not result in a NaN value.
What about special float values?
As one might expect, nan and signed infinity values for a produce a nan result, as long as b is not zero (which overrides everything else). nan values for b result in nan as well. NaN cannot be signed, so the sign of b is irrelevant in these cases.
Also as one might expect, inf % inf gives nan, regardless of the signs. If we are sharing out an infinite amount of as to an infinite amount of bs, there's no way to say "which infinity is bigger" or by how much.
The only slightly confusing cases are when b is a signed infinity value:
>>> 0 % inf
0.0
>>> 0 % -inf
-0.0
>>> 1 % inf
1.0
>>> 1 % -inf
-inf
As always, the result takes the sign of b. 0 is divisible by anything (except NaN), including infinity. But nothing else divides evenly into infinity. If a has the same sign as b, the result is simply a (as a floating-point value); if the signs differ, it will be b. Why? Well, consider -1 % inf. There isn't a finite value we can subtract from -1, in order to get to 0 (the unique value that we can divide into infinity). So we have to keep going, to infinity. The same logic applies to 1 % -inf, with all the signs reversed.
What about other types?
It's up to the type. For example, the Decimal type overloads the operator so that the result takes the sign of the numerator, even though it functionally represents the same kind of value that a float does. And, of course, strings use it for something completely different.
Why not always give a positive result, or take the sign of a?
The behaviour is motivated by integer division. While % happens to work with floating-point numbers, it's specifically designed to handle integer inputs, and the results for floats fall in line to be consistent with that.
After making the choice for a // b to give a floored division result, the % behaviour preserves a useful invariant:
>>> def check_consistency(a, b):
...     assert (a // b) * b + (a % b) == a
...
>>> for a in range(-10, 11):
...     for b in range(-10, 11):
...         if b != 0:
...             check_consistency(a, b)  # no assertion is raised
...
In other words: adding the modulus value back corrects the error created by doing an integer division.
(This, of course, lets us go back to the first section, and say that a % b simply computes a - ((a // b) * b). But that just kicks the can down the road; we still need to explain what // is doing for signed values, especially for floats.)
One practical application for this is when converting pixel coordinates to tile coordinates. // tells us which tile contains the pixel coordinate, and then % tells us the offset within that tile. Say we have 16x16 tiles: then the tile with x-coordinate 0 contains pixels with x-coordinates 0..15 inclusive, tile 1 corresponds to pixel coordinate values 16..31, and so on. If the pixel coordinate is, say, 100, we can easily calculate that it is in tile 100 // 16 == 6, and offset 100 % 16 == 4 pixels from the left edge of that tile.
We don't have to change anything in order to handle tiles on the other side of the origin. The tile at coordinate -1 needs to account for the next 16 pixel coordinates to the left of 0 - i.e., -16..-1 inclusive. And indeed, we find that e.g. -13 // 16 == -1 (so the coordinate is in that tile), and -13 % 16 == 3 (that's how far it is from the left edge of the tile).
By setting the tile width to be positive, we defined that the within-tile coordinates progress left-to-right. Therefore, knowing that a point is within a specific tile, we always want a positive result for that offset calculation. Python's % operator gives us that, on both sides of the y-axis.
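The tile example in numbers (TILE is just an illustrative constant):
>>> TILE = 16
>>> divmod(100, TILE)    # pixel 100 -> tile 6, offset 4
(6, 4)
>>> divmod(-13, TILE)    # pixel -13 -> tile -1, offset 3
(-1, 3)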
What if I want it to work another way?
math.fmod will take the sign of the numerator. It will also return a floating-point result, even for two integer inputs, and raises an exception for signed-infinity a values with non-nan b values:
>>> math.fmod(-13, 16)
-13.0
>>> math.fmod(13, -16)
13.0
>>> math.fmod(1, -inf) # not -inf
1.0
>>> math.fmod(inf, 1.0) # not nan
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: math domain error
It otherwise handles special cases the same way - a zero value for b raises an exception; otherwise any nan present causes a nan result.
If this also doesn't suit your needs, then carefully define the exact desired behaviour for every possible corner case, figure out where they differ from the built-in options, and make a wrapper function.
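For example, if you want a remainder that is always nonnegative regardless of the divisor's sign, one possible wrapper (a sketch, not a built-in) is:
def nonneg_mod(a, b):
    return a % abs(b)

print(nonneg_mod(-7, 4), nonneg_mod(-7, -4), nonneg_mod(7, -4))   # 1 1 3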