Need help finding GCD (noob approach) - python

I am currently going through the book Math Adventures with Python by Peter Farrell. I am simply trying to improve my math skills while learning Python in a fun way. So we made a factors function, as seen below:
def factors(num):
    factorList = []
    for i in range(1, num+1):
        if num % i == 0:
            factorList.append(i)
    return factorList
Exercise 3-1 asks us to make a GCF (Greatest Common Factor) function. All the answers here show how to use built-in Python modules, recursion, or Euclid's algorithm. I have no clue what any of these things mean, let alone how to apply them to this assignment. I came up with the following solution using the above function:
def gcFactor(num1, num2):
    fnum1 = factors(num1)
    fnum2 = factors(num2)
    gcf = list(set(fnum1).intersection(fnum2))
    return max(gcf)

print(gcFactor(28, 21))
Is this the best way of doing it? Using the .intersection() method seems a little cheaty to me.
What I wanted to do instead is use a loop to go through the values in fnum1 and fnum2, compare them, and then return the value that matches (which would make it a common factor) and is greatest (which would be the GCF).

The idea behind your algorithm is sound, but there are a few problems:
In your original version, you used gcf[-1] to get the greatest factor, but that will not always work, since converting a set to a list does not guarantee that the elements will be in sorted order, even if they were sorted before the conversion to a set. Better to use max (which you have already changed).
Using set.intersection is definitely not "cheating" but just making good use of what the language provides. It might be considered cheating to just use math.gcd, but not basic set or list functions.
Your algorithm is rather inefficient, though. I don't know the book, but I don't think you are actually meant to use the factors function to calculate the GCF; it was just an exercise to teach you things like loops and modulo. Consider two very different numbers as inputs, say 23764372 and 6. You'd calculate all the factors of 23764372 first, before testing the very few values that could actually be common factors. Instead of using factors directly, try to rewrite your gcFactor function to test which values up to the min of the two numbers are factors of both numbers.
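For illustration, here is a minimal sketch of that idea (my own code, not the book's solution): test every candidate up to the smaller number and keep the last one that divides both.
def gcFactor(num1, num2):
    gcf = 1
    # only values up to the smaller number can be common factors
    for i in range(1, min(num1, num2) + 1):
        if num1 % i == 0 and num2 % i == 0:
            gcf = i
    return gcf

print(gcFactor(28, 21))  # 7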
Even then, your algorithm will not be very efficient. I would suggest reading up on Euclid's Algorithm and trying to implement that next. If you are unsure whether you did it right, you can use your first function as a reference for testing, and to see the difference in performance.
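For reference, a minimal sketch of Euclid's algorithm (my own version, not from the book): repeatedly replace the pair (a, b) with (b, a % b) until the remainder is zero.
def gcd_euclid(a, b):
    # when b reaches 0, a holds the greatest common factor
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_euclid(28, 21))  # 7, same answer as gcFactor(28, 21)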
About your factors function itself: note that there is a symmetry: if i is a factor, so is num // i. If you use this, you do not have to test all the values up to num but just up to sqrt(num), which reduces the running time from O(n) to O(sqrt(n)).
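A rough sketch of that symmetry trick (my own code, with factors_fast as a made-up name, assuming Python 3.8+ for math.isqrt):
import math

def factors_fast(num):
    factorList = []
    for i in range(1, math.isqrt(num) + 1):
        if num % i == 0:
            factorList.append(i)
            if i != num // i:          # don't add a perfect-square root twice
                factorList.append(num // i)
    return sorted(factorList)

print(factors_fast(28))  # [1, 2, 4, 7, 14, 28]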

Related

Converting recursive function to completely iterative function without using extra space

Is it possible to convert a recursive function like the one below to a completely iterative function?
def fact(n):
    if n <= 1:
        return
    for i in range(n):
        fact(n-1)
        doSomethingFunc()
It seems pretty easy to do given extra space like a stack or a queue, but I was wondering if we can do this in O(1) space complexity?
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Well, generally speaking, no.
I mean, the space taken on the stack by recursive functions is not just an inconvenience of this programming style. It is the memory needed for the computation.
So, sure, for a lot of algorithms that space is unnecessary and could be spared. For a classical factorial, for example,
def fact(n):
    if n <= 1:
        return 1
    else:
        return n * fact(n-1)
the stacking of all the n, n-1, n-2, ..., 1 arguments is not really necessary.
So, sure, you can find an implementation that gets rid of it. But that is an optimization (for example, in the specific case of tail recursion; though I am pretty sure you added that doSomething to make clear that you don't want to focus on that specific case).
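For the factorial above, that optimization is simple enough to sketch (my own code): the recursion collapses into a loop that needs only O(1) extra space.
def fact_iter(n):
    result = 1
    # multiply result by n, n-1, ..., 2 instead of stacking recursive calls
    while n > 1:
        result *= n
        n -= 1
    return result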
You cannot assume in general that an algorithm that doesn't need all those values exists, recursive or iterative. Otherwise, you would be saying that every algorithm has an O(1) space complexity version.
Example: base representation of a positive integer
def baseRepr(num, base):
    if num >= base:
        s = baseRepr(num//base, base)
    else:
        s = ''
    return s + chr(48 + num % base)
Not claiming it is optimal, or even well written.
But here the stacking of the arguments is needed. It is the way you implicitly store the digits, which you compute in reverse order.
An iterative function would also need some memory to store those digits, since you have to compute the last one first.
Well, I am pretty sure that for this simple example you could find a way to compute the digits from left to right, for example by using a logarithm to know the number of digits in advance. But that's not the point. Just imagine that no algorithm is known other than the one computing digits from right to left. Then you need to store them: either implicitly on the stack using recursion, or explicitly in allocated memory. So again, memory used on the stack is not just an inconvenience of recursion. It is the way a recursive algorithm stores things that an iterative algorithm would otherwise have to store explicitly.
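To make that trade-off concrete, here is a possible iterative counterpart of baseRepr (my own sketch, not from the answer): the digits are still stored, just in an explicit list instead of on the call stack.
def baseReprIter(num, base):
    digits = []
    while True:
        digits.append(chr(48 + num % base))  # store each digit explicitly
        num //= base
        if num == 0:
            break
    return ''.join(reversed(digits))         # last computed digit comes first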
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Yes.
I was wondering if we can do this in O(1) space complexity?
So, no.

How to use a random seed value in order to unittest a PRNG in Python?

I'm still pretty new to programming and just learning how to unit test. I need to test a function that returns a random value. So far I've found answers suggesting the use of a specific seed value so that the 'random' sequence is constant and can be compared. This is what I've got so far:
This is the function I want to test:
import random

def roll():
    '''Returns a random number in the range 1 to 6, inclusive.'''
    return random.randint(1, 6)
And this is my unittest:
import unittest

class Tests(unittest.TestCase):
    def test_random_roll(self):
        random.seed(900)
        seq = random.randint(1, 6)
        self.assertEqual(roll(), seq)
How do I set the corresponding seed value for the PRNG in the function so that it can be tested without writing it into the function itself? Or is this completely the wrong way to go about testing a random number generator?
Thanks
The other answers are correct as far as they go. Here I'm answering the deeper question of how to test a random number generator:
Your provided function is not really a random number generator, as its entire implementation depends on a provided random number generator. In other words, you are trusting that Python provides you with a sensible random generator. For most purposes, this is a good thing to do. If you are writing cryptographic primitives, you might want to do something else, and at that point you would want some really robust test strategies (but they will never be enough).
Testing that a function returns a specific sequence of numbers tells you virtually nothing about its correctness in terms of "producing random numbers". A predefined sequence of numbers is the opposite of a random sequence.
So, what do you actually want to test? For the roll function, I think you'd like to test:
1. That given 'enough' rolls it produces all the numbers between 1 and 6, preferably in 'approximately' equal proportions.
2. That it doesn't produce anything else.
The problem with 1. is that your function is defined to produce a random sequence, so there is always a non-zero chance that any hard limits you put in to define 'enough' or 'approximately equal' will occasionally fail. You could do some calculations to pick limits that make your test unlikely to fail more than, say, 1 in a billion runs, or you could add a random.seed() call, which means it will never fail once it has passed (unless the underlying implementation in Python changes).
Item 2. could be 'tested' more easily: generate some large number N of rolls and check that all of them are within the expected outcomes.
For all of this, however, I'd ask what value the unit tests actually provide. You literally cannot write a test to check whether something is 'random' or not. To see whether the function has a reasonable source of randomness and uses it correctly, tests are useless; you have to inspect the code. Once you have done that, it's clear that your function is correct (provided Python supplies a decent random number generator).
In short, this is one of those cases where unit tests provide extremely little value. I would probably just write one test (item 2 above, as sketched below) and leave it at that.
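That one test could look roughly like this (my own sketch, reusing the roll function and the unittest import from the question):
class RollRangeTest(unittest.TestCase):
    def test_roll_stays_in_range(self):
        # a large-ish N; every roll must be one of 1..6
        for _ in range(10000):
            self.assertIn(roll(), {1, 2, 3, 4, 5, 6})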
By seeding the PRNG with a known seed, you know which sequence it will produce, so you can test for this sequence:
class Tests(unittest.TestCase):
    def test_random_roll(self):
        random.seed(900)
        self.assertEqual(roll(), 6)
        self.assertEqual(roll(), 2)
        self.assertEqual(roll(), 5)

When Is Recursion Useful? [duplicate]

This question already has answers here:
Recursion or Iteration?
(31 answers)
Closed 5 years ago.
I am relatively new to python and have recently learned about recursion. When tasked to find the factorial of a number, I used this:
def factorial(n):
    product = 1
    for z in xrange(1, n+1):
        product *= z
    return product

if __name__ == "__main__":
    n = int(raw_input().strip())
    result = factorial(n)
    print result
Then, because the task was to use recursion, I created a solution that used recursion:
def factorial(n):
    if n == 1:
        current = 1
    else:
        current = n * factorial(n-1)
    return current

if __name__ == "__main__":
    n = int(raw_input().strip())
    result = factorial(n)
    print result
Both seem to produce the same result. My question is why would I ever use recursion, if I could use a for loop instead? Are there situations where I cannot just create for loops instead of using recursion?
For every solution that you find with recursion there is an iterative solution, because you can, for example, simulate the recursion using a stack.
The factorial example uses a type of recursion called tail recursion, and such cases have an easy iterative implementation, but here the recursive solution is closer to the mathematical definition. However, there are other problems where finding an iterative solution is difficult and the recursive solution is more powerful and more expressive. One example is the Towers of Hanoi (see this question for more information: Tower of Hanoi: Recursive Algorithm); the iterative solution to this problem is very tedious and in the end has to simulate the recursion, while the recursive one is a few lines, as sketched below.
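As an illustration (my own sketch, not from the linked question), the recursive Towers of Hanoi solution is: move n-1 disks out of the way, move the biggest disk, then move the n-1 disks back on top of it.
def hanoi(n, source, target, spare):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # move the smaller tower out of the way
    print("move disk %d from %s to %s" % (n, source, target))
    hanoi(n - 1, spare, target, source)   # move it back on top of the big disk

hanoi(3, 'A', 'C', 'B')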
There are also problems, like the Fibonacci sequence, whose definition is recursive and for which it is easy to write a recursive solution:
def fibonacci(n):
    if (n == 1) or (n == 2):
        return 1
    else:
        return fibonacci(n-2) + fibonacci(n-1)
This solution is straightforward, but it recalculates fibonacci(n-2) many times unnecessarily; drawing the call tree of fibonacci(7) makes this easy to see.
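One common way around that repeated work (my addition, not part of the original answer) is to cache results that have already been computed:
def fibonacci_memo(n, cache={1: 1, 2: 1}):
    # look the value up before recursing, so each n is computed only once
    if n not in cache:
        cache[n] = fibonacci_memo(n - 2) + fibonacci_memo(n - 1)
    return cache[n]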
So you can sometimes see recursion as syntactic sugar; depending on what you want, you need to decide whether to use it or not. When you program in a low-level programming language recursion is not commonly used, and on a microprocessor it can be a big mistake, but in other cases a recursive solution is better for understanding your code.
Hope this helps, but you will need to go deeper by reading books.

Inaccurate Large Fibonacci Numbers in Python

I am currently implementing this simple code trying to find the n-th element of the Fibonacci sequence using Python 2.7:
import numpy as np
def fib(n):
    F = np.empty(n+2)
    F[1] = 1
    F[0] = 0
    for i in range(2, n+1):
        F[i] = F[i-1] + F[i-2]
    return int(F[n])
This works fine for n < 79, but after that I get wrong numbers. For example, according to Wolfram Alpha F79 should be equal to 14472334024676221, but fib(79) gives me 14472334024676220. I think this could be caused by the way Python deals with integers, but I have no idea what exactly the problem is. Any help is greatly appreciated!
The default data type of a numpy array is a fixed-width machine type (np.empty gives you 64-bit floats here; integer arrays are 64- or 32-bit depending on architecture), not an arbitrary-precision number.
Pure Python would let you have arbitrarily long integers; numpy does not.
So it's more the way numpy deals with numbers; pure Python would do just fine.
Python will deal with integers perfectly fine here. Indeed, that is the beauty of python. numpy, on the other hand, introduces ugliness and just happens to be completely unnecessary, and will likely slow you down. Your implementation will also require much more space. Python allows you to write beautiful, readable code. Here is Raymond Hettinger's canonical implementation of iterative fibonacci in Python:
def fib(n):
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x
That is O(n) time and constant space. It is beautiful, readable, and succinct. It will also give you the correct integer as long as you have memory to store the number on your machine. Learn to use numpy when it is the appropriate tool, and as importantly, learn to not use it when it is inappropriate.
Unless you want to generate a list of all the Fibonacci numbers up to Fn, there is no need to use a list, numpy, or anything else like that; a simple loop and two variables are enough, since you only really need to know the two previous values:
def fib(n):
    Fk, Fk1 = 0, 1
    for _ in range(n):
        Fk, Fk1 = Fk1, Fk + Fk1
    return Fk
Of course, there are better ways to do it using the mathematical properties of the Fibonacci numbers; from those we know that there is a matrix that gives us the right result:
import numpy
def fib_matrix(n):
    mat = numpy.matrix([[1, 1], [1, 0]], dtype=object) ** n
    return mat[0, 1]
I assume numpy has an optimized matrix exponentiation, which makes this more efficient than the previous method.
Using the properties of the underlying Lucas sequences, it is possible to do this without the matrix, just as efficiently as exponentiation by squaring and with as few variables as the simple loop, but it is a little harder to understand at first glance because it requires more mathematics. A sketch is shown below.
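One concrete form of that idea is the so-called "fast doubling" method (my own sketch, not necessarily the exact formulation the answer had in mind). It computes F(n) in O(log n) steps using the identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2, and stays exact because it only uses Python integers:
def fib_fast_doubling(n):
    def pair(k):
        # returns the pair (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a)    # F(2m), where m = k // 2
        d = a * a + b * b      # F(2m + 1)
        if k % 2 == 0:
            return (c, d)
        return (d, c + d)
    return pair(n)[0]

print(fib_fast_doubling(79))  # 14472334024676221, exact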
The closed form, the one with the golden ratio, will give you the result even faster, but it carries the risk of being inaccurate because it uses floating-point arithmetic.
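To illustrate that risk (my own example), here is Binet's closed form with ordinary floats; it matches the integer result for small n, but on typical 64-bit doubles the rounding error grows and the returned integer becomes wrong somewhere around n = 70.
import math

def fib_binet(n):
    # exact in real arithmetic, only approximate in floating point
    phi = (1 + math.sqrt(5)) / 2
    return int(round(phi ** n / math.sqrt(5)))

print(fib_binet(10))  # 55, still exact for small n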
As an additional note to the previous answer by hiro protagonist: if using numpy is a requirement, you can solve your issue very easily by replacing:
F = np.empty(n+2)
with
F = np.empty(n+2, dtype=object)
but it will not do anything more than transfer the computation back to pure Python.

Python :: Iteration vs Recursion on string manipulation

In the examples below, both functions have roughly the same number of procedures.
def lenIter(aStr):
    count = 0
    for c in aStr:
        count += 1
    return count
or
def lenRecur(aStr):
    if aStr == '':
        return 0
    return 1 + lenRecur(aStr[1:])
Is picking between the two techniques just a matter of style, or is one of them more efficient?
Python does not perform tail-call optimization, so the recursive solution can hit the recursion limit on long strings. The iterative method does not have this flaw.
That said, the built-in len() is faster than both methods.
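A quick way to see the difference (my own check, reusing the two functions from the question and assuming CPython's default recursion limit of 1000):
long_string = 'x' * 5000

print(lenIter(long_string))   # 5000
print(len(long_string))       # 5000
try:
    lenRecur(long_string)
except RecursionError:        # Python 3.5+; older versions raise RuntimeError
    print("lenRecur exceeded the recursion limit")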
This is not correct: "functions have roughly the same number of procedures". You probably mean "these procedures require the same number of operations", or, more formally, "they have the same computational time complexity".
While both have the same computational time complexity, the recursive one requires additional CPU instructions to create a new stack frame for each recursive call, to switch contexts, and to clean up after returning from every call. While these operations do not increase the theoretical computational complexity, in most real-life implementations they add significant overhead.
Also, the recursive method has higher space complexity, as each new instance of the recursively called procedure needs new storage for its data.
Surely the first approach is more optimized, as Python doesn't have to do a lot of function calls and string slicing; each of those operations involves further work that is costly for the Python interpreter, and they can cause problems when dealing with long strings.
As a more Pythonic way, you are better off using the len() function to get the length of a string.
You can also inspect the code object to see the required stack size for each function:
>>> lenRecur.__code__.co_stacksize
4
>>> lenIter.__code__.co_stacksize
3
