Check if float is an integer: is_integer() vs. modulo 1

I've seen a number of questions asking how to check if a float is an integer. The majority of answers seem to recommend using is_integer():
(1.0).is_integer()   # True
(1.55).is_integer()  # False
I have also occasionally seen math.floor() being used:
import math
1.0 == math.floor(1.0)    # True
1.55 == math.floor(1.55)  # False
I'm wondering why % 1 is rarely used or recommended?
1.0 % 1 == 0   # True
1.55 % 1 == 0  # False
Is there a problem with using modulo for this purpose? Are there edge cases that this doesn't catch? Performance issues for really large numbers?
If % 1 is a fine alternative, then I'm also wondering why is_integer() was introduced to the standard library?
It seems that % is much more flexible. For example, it's common to use % 2 to check if a number is odd or even, or % n to check if something is a multiple of n. Given this flexibility, why introduce a new method (is_integer) that does the same thing, or use math.floor, both of which require knowing/remembering that they exist and knowing how to use them? I know that math.floor has uses beyond integer checking, but still...

All are valid for the purpose. The math.floor option requires an exact match between a specific value and the result of the floor function, which is not very convenient if you want to encapsulate it in a generic method. So it boils down to the first and third options. Both are valid and will do the job, so the key difference is simple: performance:
from timeit import Timer

def with_isint(num):
    return num.is_integer()

def with_mod(num):
    return num % 1 == 0
Timer(lambda: with_isint(10.0)).timeit(number=10000000)
#output: 2.0617980659008026
Timer(lambda: with_mod(10.0)).timeit(number=10000000)
#output: 2.6560597440693527
Naturally this is a simple operation so you'd need a lot of calls in order to see a considerable difference, as you can see in the example.
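On the edge-case part of the question: for the special values inf and nan the two checks happen to agree, while math.floor does not (Python 3 assumed; a quick sketch):
import math

for x in (float('inf'), float('nan')):
    print(x.is_integer())  # False for both
    print(x % 1 == 0)      # also False for both, since x % 1 is nan

# math.floor, by contrast, rejects special values outright:
# math.floor(float('inf'))  raises OverflowError
# math.floor(float('nan'))  raises ValueError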

One soft reason is definitely: readability
If a function called is_integer() returns True, it is obvious what you have been testing.
However, with the modulo solution, one has to think through the expression to see that it is actually testing whether a float is an integer. If you wrap the modulo check in a function with an obvious name such as simon_says_its_an_integer(), I think it's just as fine (apart from needlessly reimplementing an already existing function).

Related

Need help finding GCD (noob approach)

I am currently going through the book Math Adventures with Python by Peter Farrell. I am simply trying to improve my math skills while learning Python in a fun way. So we made a factors function, as seen below:
def factors(num):
    factorList = []
    for i in range(1, num+1):
        if num % i == 0:
            factorList.append(i)
    return factorList
Exercise 3-1 asks for a GCF (Greatest Common Factor) function. All the answers I've found use built-in Python modules, recursion, or Euclid's algorithm. I have no clue what any of these things mean, let alone how to try them on this assignment. I came up with the following solution using the above function:
def gcFactor(num1, num2):
    fnum1 = factors(num1)
    fnum2 = factors(num2)
    gcf = list(set(fnum1).intersection(fnum2))
    return max(gcf)

print(gcFactor(28, 21))
Is this the best way of doing it? Using the .intersection() function seems a little cheaty to me.
What I wanted to do instead was use a loop to go through the values in fnum1 and fnum2, compare them, and return the value that matches (making it a common factor) and is greatest (making it the GCF).
The idea behind your algorithm is sound, but there are a few problems:
In your original version, you used gcf[-1] to get the greatest factor, but that will not always work, since converting a set to a list does not guarantee that the elements will be in sorted order, even if they were sorted before converting to a set. Better to use max (you already changed that).
Using set.intersection is definitely not "cheating" but just making good use of what the language provides. It might be considered cheating to just use math.gcd, but not basic set or list functions.
Your algorithm is rather inefficient. I don't know the book, but I suspect the factors function was just an exercise to teach you things like loops and modulo, not something you should actually use to calculate the GCF. Consider two very different numbers as inputs, say 23764372 and 6: you'd calculate all the factors of 23764372 first, before testing the very few values that could actually be common factors. Instead of using factors directly, try to rewrite your gcFactor function to test which values up to the min of the two numbers are factors of both numbers.
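A minimal sketch of that idea, keeping the gcFactor name from the question:
def gcFactor(num1, num2):
    gcf = 1
    for i in range(1, min(num1, num2) + 1):
        # a common factor must divide both numbers
        if num1 % i == 0 and num2 % i == 0:
            gcf = i
    return gcf

print(gcFactor(28, 21))  # 7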
Even then, your algorithm will not be very efficient. I would suggest reading up on Euclid's algorithm and trying to implement that next. If you're unsure whether you did it right, you can use your first function as a reference for testing, and to see the difference in performance.
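For reference, an iterative form of Euclid's algorithm is only a few lines; a sketch:
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) == gcd(b, a % b), repeated until b is 0
    while b:
        a, b = b, a % b
    return a

print(gcd(28, 21))  # 7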
About your factors function itself: note that there is a symmetry: if i is a factor of num, then so is num // i. If you use this, you do not have to test all the values up to num but just up to sqrt(num). That reduces the running time from O(n) to O(sqrt(n)), the same kind of speed-up as going from O(n²) to O(n).
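A sketch of that symmetric version (math.isqrt requires Python 3.8+; on older versions, int(num ** 0.5) can stand in for small inputs):
import math

def factors(num):
    found = set()
    for i in range(1, math.isqrt(num) + 1):
        if num % i == 0:
            found.add(i)         # i is a factor...
            found.add(num // i)  # ...and so is its complement num // i
    return sorted(found)

print(factors(28))  # [1, 2, 4, 7, 14, 28]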

Cython returns 0 for expression that should evaluate to 0.5?

For some reason, Cython is returning 0 on a math expression that should evaluate to 0.5:
print(2 ** (-1)) # prints 0
Oddly enough, mix variables in and it'll work as expected:
i = 1
print(2 ** (-i)) # prints 0.5
Vanilla CPython returns 0.5 for both cases. I'm compiling for 37m-x86_64-linux-gnu, and language_level is set to 3.
What is this witchcraft?
It's because it's using C ints rather than Python integers, so it matches C behaviour rather than Python behaviour. I'm relatively sure this used to be documented as a limitation somewhere, but I can't find it now. If you want to report it as a bug then go to https://github.com/cython/cython/issues, but I suspect this is a deliberate trade-off, sacrificing Python compatibility for speed.
The code gets translated to
__Pyx_pow_long(2, -1L)
where __Pyx_pow_long is a function of type static CYTHON_INLINE long __Pyx_pow_long(long b, long e).
The easiest way to fix it is to change one/both of the numbers to be a floating point number
print(2. ** (-1))
As a general comment on the design choice: people from the C world generally expect int operator int to return an int, and this option is fastest. Python tried this in the past with the Python 2 division behaviour (but inconsistently: a negative power always returned a floating-point number).
Cython generally tries to follow Python behaviour. However, a lot of people are using it for speed so they also try to fall back to quick, C-like operations especially when people specify types (since those people want speed). I think what's happened here is that it's been able to infer the types automatically, and so defaulted to C behaviour. I suspect ideally it should distinguish between specified types and types that it's inferred. However, it's also probably too late to start changing that.
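To gather the reported behaviours in one place, a hypothetical demo.pyx sketch (the printed results are those reported in the question and this answer, not independently re-verified here):
# demo.pyx, compiled with language_level=3
def pow_demo():
    print(2 ** (-1))    # both literals inferred as C long: prints 0
    print(2. ** (-1))   # a float operand forces C double math: prints 0.5
    i = 1
    print(2 ** (-i))    # with a runtime variable, prints 0.5 as the question reports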
It looks like Cython is incorrectly inferring the final data type as int rather than float when only numeric literals are involved.
The following code works as expected:
print(2.0 ** (-1))
See this link for a related discussion: https://groups.google.com/forum/#!topic/cython-users/goVpote2ScY

Python :: Iteration vs Recursion on string manipulation

In the examples below, both functions have roughly the same number of procedures.
def lenIter(aStr):
    count = 0
    for c in aStr:
        count += 1
    return count
or
def lenRecur(aStr):
    if aStr == '':
        return 0
    return 1 + lenRecur(aStr[1:])
Is picking between the two techniques a matter of style, or is one method more efficient than the other?
Python does not perform tail call optimization, so the recursive solution will hit Python's recursion limit on long strings (raising RecursionError). The iterative method does not have this flaw.
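For example, using the lenRecur definition from the question (CPython's default recursion limit is typically 1000):
import sys

print(sys.getrecursionlimit())  # usually 1000
lenRecur('a' * 5000)            # raises RecursionError: maximum recursion depth exceeded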
That said, len(str) is faster than both methods.
This is not correct: 'functions have roughly the same number of procedures'. You probably mean 'these procedures require the same number of operations', or, more formally, 'they have the same computational time complexity'.
While both have the same computational time complexity, the recursive one requires additional CPU instructions: creating a new stack frame for every call, switching contexts, and cleaning up after every return. These operations do not increase the theoretical computational complexity, but in most real-life implementations they add significant overhead.
The recursive method also has higher space complexity, as each new instance of the recursively-called procedure needs new storage for its data.
The first approach is also the more optimized one: Python doesn't have to perform a function call and a string slice for every character, and each of those operations carries real interpreter overhead. The repeated slicing also copies the remainder of the string on every call, which causes problems with long strings.
The more Pythonic way, though, is to use the built-in len() function to get the length of a string.
You can also inspect the code objects to see the required stack size for each function:
>>> lenRecur.__code__.co_stacksize
4
>>> lenIter.__code__.co_stacksize
3

Use of bitwise operations instead of testing for even/odd

I'm trying to understand this particular solution to prime decomposition (taken from http://rosettacode.org/wiki/Prime_decomposition#Python:_Using_floating_point), and am a bit puzzled by the use of bitwise operators in the definition of step:
from math import floor, sqrt  # needed by the code below (Python 2: uses long)

def fac(n):
    step = lambda x: 1 + (x<<2) - ((x>>1)<<1)
    maxq = long(floor(sqrt(n)))
    d = 1
    q = n % 2 == 0 and 2 or 3
    while q <= maxq and n % q != 0:
        q = step(d)
        d += 1
    return q <= maxq and [q] + fac(n//q) or [n]
I understand what it does (multiply x by 3, then add 1 if x is even or 2 if x is odd), but I don't quite see why one would resort to bitwise operations in this context. Is there a reason, besides the obvious succinctness of this formulation, for the use of bitwise operators instead of a more explicit solution:
mystep = lambda x: (3 * x) + 1 if (x % 2 == 0) else (3 * x) + 2
If there is a good reason (say, (x>>1)<<1 being more efficient than modulo arithmetic, as suggested here), is there a general strategy for extracting the underlying logic from an expression with several bitwise operators?
UPDATE
Following the suggestions in the answers, I timed both the version with step and the one with mystep, and the difference is imperceptible:
%timeit fac(600851475143)
1000 loops, best of 3: 306 µs per loop
%timeit fac2(600851475143)
1000 loops, best of 3: 307 µs per loop
This could be an attempt to optimize around branch misprediction. Modern CPUs are massively pipelined; they speculatively execute 10 or more instructions ahead. A conditional branch that near-randomly goes one way half the time and the other way half the time means the CPU will have to throw out 10 instructions worth of work half the time, making your work 5x as slow. At least with CPython, much of the cost of branch mispredictions is hidden in the overhead, but you can still easily find cases where they increase time by at least 12%, if not the 500% you can expect in C.
The alternative is that the author is optimizing for something even less relevant. On 70s and 80s hardware, replacing arithmetic operations with bitwise operations often led to huge speedups, just because the ALUs were simple and the compilers didn't optimize much. Even people who don't actually expect to get the same speedups today have internalized all the standard bit-twiddling hacks and use them without thinking. (Or, of course, the author could have just ported some code over from C or Scheme or some other language without really thinking about it, and that code could have been written decades ago when this optimization made a big difference.)
At any rate, this code is almost certainly optimizing in the wrong place. Defining a function to call every time in your inner loop, instead of just inlining the one-liner expression there, is adding far more overhead than 12%. And the fact that the code uses step = lambda x: … instead of def step(x): … implies pretty strongly that the author isn't comfortable in Python and doesn't know how to optimize for it. If you really want to make this go faster, there are almost certainly a lot of things that would make a whole lot more difference than which implementation you use for step.
That being said, the right thing to do with any optimization that you're not sure about is to test it. Implement it both ways, use timeit to see the difference, and if you don't understand the results, use a Python-level profiler or hardware-level performance counters (e.g., via cachegrind) or something else to get more information. From a very quick test of the original code against your alternative, throwing various numbers at it with IPython's %timeit, I got results ranging from .92x to 1.08x time for your version. In other words, it seems to be a wash…
In theory, three bit shifts are more efficient than a single multiplication and a single division. In practice, such code should be profiled to ensure that the resulting optimization provides a sufficient speed boost to justify the loss of readability.
Any code that resorts to such optimizations should clearly document what the code does along with why the optimization was deemed useful, if only for the sake of future maintainers who may be tempted to replace the code with something more readable.
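As for a general strategy for extracting the underlying logic: rewrite each bitwise idiom as its arithmetic equivalent and simplify. Here (x << 2) is 4*x, and ((x >> 1) << 1) clears the lowest bit, i.e. it equals x - (x % 2). Substituting gives 1 + 4*x - (x - x % 2) = 3*x + 1 + (x % 2), which matches the even/odd description in the question. A quick sanity check:
step = lambda x: 1 + (x << 2) - ((x >> 1) << 1)

for x in range(100):
    assert ((x >> 1) << 1) == x - (x % 2)  # clearing the lowest bit
    assert step(x) == 3 * x + 1 + (x % 2)  # the simplified arithmetic form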

get the flagged bit index in pure python

There are two questions wrapped up here: one is 'how', and the second is 'does this solution sound OK?'
The situation is this: I have an object with an int value that stores the ids of all the persons who used that object. It's done using a flagging technique (person ids are 0-10).
I got to a situation where, if this value is flagged with only one id, I want to get that id.
For the first test I used value & (value - 1), which is nice, but as for the second part, I started to wonder what's the best way to do it (the reason I wonder is that this calculation happens at least 300 times a second in a critical place).
The 1st way I thought about is using math.log(x, 2), but I feel a little uncomfortable with this solution since it involves "hard" math on a value instead of a very simple bit operation, and I feel like I'm missing something.
The 2nd way I thought about is to count how many value >> 1 shifts it takes to reach 1, but as you can see in the benchmark test below, it was just worse.
The 3rd way I implemented is a non-calculating way, and was the fastest: using a dictionary with all the possible values for ids 0-10.
So, like I said before: is there a 'right' way of doing this in pure Python?
Is a dictionary-based solution a 'legitimate' solution? (Readability, or any other reason why not?)
import math
import time

def find_bit_using_loop(num, _):
    c = 0
    while num != 1:
        c += 1
        num = num >> 1
    return c

def find_bit_using_dict(num, _):
    return options[num]

def get_bit_idx(num, func):
    t = time.time()
    for i in xrange(100000):
        a = func(num, 2)
    t = time.time() - t
    #print a
    return t

options = {}
for i in xrange(20):
    options[1 << i] = i

num = 256
print "time using log:", get_bit_idx(num, math.log)
print "time using loop:", get_bit_idx(num, find_bit_using_loop)
print "time using dict:", get_bit_idx(num, find_bit_using_dict)
output:
time using log: 0.0450000762939
time using loop: 0.156999826431
time using dict: 0.0199999809265
(There's a very similar question here: return index of least significant bit in Python, but first, in this case I know there's only 1 flagged bit, and second, I want to keep it a pure Python solution.)
If you are using Python 2.7 or 3.1 and above, you can use the bit_length method on integers. It returns the number of bits necessary to represent the integer, i.e. one more than the index of the most significant bit:
>>> (1).bit_length()
1
>>> (4).bit_length()
3
>>> (32).bit_length()
6
This is probably the most Pythonic solution as it's part of the standard library. If you find that dict performs better and this is a performance bottleneck, I see no reason not to use it though.
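Applied to the question's setup, where exactly one bit is known to be set, the index is just bit_length() minus one. flagged_bit_index is a name made up here for illustration:
def flagged_bit_index(value):
    # assumes exactly one bit is set, as in the question
    return value.bit_length() - 1

print(flagged_bit_index(256))  # 8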
