The recursive formula for computing the number of ways of choosing k items out of a set of n items, denoted C(n,k), is:
         { 1                        if k = 0
C(n,k) = { 0                        if n < k
         { C(n-1,k-1) + C(n-1,k)    otherwise
I’m trying to write a recursive function C that computes C(n,k) using this recursive formula. As far as I can tell the code should work, but it doesn’t give me the correct answers.
This is my code:
def combinations(n, k):
    # base case
    if k == 0:
        return 1
    elif n < k:
        return 0
    # recursive case
    else:
        return combinations(n - 1, k - 1) + combinations(n - 1, k)
The answers should look like this:
>>> c(2, 1)
0
>>> c(1, 2)
2
>>> c(2, 5)
10
but I get other numbers, and I don’t see where the problem is in my code.
I would try reversing the arguments, because as written n < k.
I think you mean this:
>>> c(2, 1)
2
>>> c(5, 2)
10
Your call c(2, 5) means that n=2 and k=5 (as per your definition of C at the top of your question). So n < k, and as such the result should be 0, and that’s exactly what your implementation returns. All the other examples yield the correct results for the arguments actually given as well.
Are you sure that the arguments of your example test cases are in the correct order? They all look like c(k, n) calls. So either those calls are wrong, or the argument order in your definition of C is off.
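To make the ordering concrete, here is the recursive function again with a few assertion-style checks; the calls pass n first, as in the definition at the top of the question (the assert lines are mine, not from the original post):

```python
def combinations(n, k):
    if k == 0:
        return 1
    elif n < k:
        return 0
    else:
        return combinations(n - 1, k - 1) + combinations(n - 1, k)

# n comes first, matching the definition C(n, k):
assert combinations(2, 1) == 2   # two ways to choose 1 item out of 2
assert combinations(5, 2) == 10  # ten ways to choose 2 items out of 5
assert combinations(2, 5) == 0   # n < k, so there are no ways at all
```

With the arguments in this order all three checks pass, which supports the argument-order explanation above.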
This is one of those times where you really shouldn't be using a recursive function. Computing combinations is very simple to do directly. For some things, like a factorial function, recursion is no big deal, because a single chain of recursive calls can be rewritten as (or, in some languages, optimized into) a simple loop.
Here's the reason why:
Why do we never use this definition for the Fibonacci sequence when we are writing a program?
def fibonacci(idx):
    if idx < 2:
        return idx
    else:
        return fibonacci(idx - 1) + fibonacci(idx - 2)
The reason is that the repeated recursive calls make it prohibitively slow: each call spawns two more, so the work grows exponentially. Multiple separate recursive calls should be avoided where possible, for the same reason.
If you do insist on using recursion, I would recommend reading this page first. A better recursive implementation will require only one recursive call each time. Rosetta code seems to have some pretty good recursive implementations as well.
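As a sketch of the direct, non-recursive approach mentioned above, the multiplicative formula C(n, k) = (n/1) * ((n-1)/2) * ... can be computed in a single loop; the name combinations_direct is mine, and math.comb (available since Python 3.8) is used only as a cross-check:

```python
from math import comb  # Python 3.8+

def combinations_direct(n, k):
    # Multiplicative formula: multiply in one numerator factor and
    # divide by one factor of k! at each step.
    if k < 0 or k > n:
        return 0
    result = 1
    for i in range(1, k + 1):
        # result * (n - k + i) is a product of i consecutive integers,
        # so the division by i is always exact.
        result = result * (n - k + i) // i
    return result

assert combinations_direct(5, 2) == comb(5, 2) == 10
```

This does O(k) arithmetic operations with no recursion at all.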
I have written the following recursive algorithm:
p = [2, 3, 2, 1, 4]

def fn(c, i):
    if c < 0 or i < 0:
        return 0
    if c == 0:
        return 1
    return fn(c, i - 1) + fn(c - p[i - 1], i - 1)
It's a solution to a problem where you have c coins, and you have to find out how many ways you can spend your c coins on beers. There are n different beers, only one of each beer.
i denotes the i'th beer, with price p[i]; the prices are stored in the array p.
The algorithm calls itself recursively, and if c == 0 it returns 1, as it has found a valid combination. If c or i is less than 0, it returns 0, as it is not a valid combination: it either exceeds the amount of coins available or runs out of beers.
Now I need to rewrite the algorithm as a Memoized algorithm. This is my first time trying this, so I'm a little confused on how to do it.
I've been trying different things; my latest attempt is the following code:
import numpy as np

p = [2, 3, 2, 1, 4]
prev = np.empty([5, 5])

def fni(c, i):
    if prev[c][i] != None:
        return prev[c][i]
    if c < 0 or i < 0:
        prev[c][i] = 0
        return 0
    if c == 0:
        prev[c][i] = 1
        return 1
    prev[c][i] = fni(c, i - 1) + fni(c - p[i - 1], i - 1)
    return prev[c][i]
"Surprisingly" it doesn't work, and I'm sure it's completely wrong. My thought was to save the results of the recursive calls in a 5x5 2D array, check at the start whether the result is already saved in the array, and if so just return it.
I only provided my attempt above to show something, so don't take the code too seriously.
My prev array comes out as all 0's, whereas I wanted it to hold null values, so just ignore that.
My task is actually only to solve it as pseudocode, but I thought it would be easier to write it as code to make sure that it would actually work, so pseudo code would help as well.
I hope I have provided enough information, else feel free to ask!
EDIT: I forgot to mention that I have 5 coins, and 5 different beers (one of each beer). So c = 5, and i = 5
First, np.empty() by default gives an array of uninitialized values, not Nones, as the documentation points out:
>>> np.empty([2, 2])
array([[ -9.74499359e+001, 6.69583040e-309],
[ 2.13182611e-314, 3.06959433e-309]]) #uninitialized
Secondly, although this is more subjective, you should default to using dictionaries for memoization in Python. Arrays may be more efficient if you know you'll actually memoize most of the possible values, but it can be hard to tell that ahead of time. At the very least, make sure your array values are initialized. It's good that you're using numpy-- that will help you avoid the common beginner mistake of writing memo = [[0]*5]*5.
Thirdly, you should perform checks for out-of-bounds or negative parameters (c < 0 or i < 0) before you use them to access an array, as in prev[c][i] != None. Negative indices in Python count from the end of the array, so they could map you to a different memoized parameter's value.
Besides those details, your memoization code and strategy is sound.
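Putting those points together, a dictionary-based sketch of the memoized version might look like this (the (c, i) tuple keys and the fni_memo name are my choices, not from the question):

```python
p = [2, 3, 2, 1, 4]

def fni_memo(c, i, memo=None):
    if memo is None:
        memo = {}          # fresh dictionary per top-level call
    if c < 0 or i < 0:     # bounds checks BEFORE any memo access
        return 0
    if c == 0:
        return 1
    if (c, i) not in memo:
        memo[(c, i)] = fni_memo(c, i - 1, memo) + fni_memo(c - p[i - 1], i - 1, memo)
    return memo[(c, i)]
```

With c = 5 coins and all i = 5 beers available, this counts the subsets of p that sum to exactly 5.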
I wrote a recursive solution for something today, and as it goes, curiosity led me down a weird path. I wanted to see how an optimized recursive solution compares to an iterative solution so I chose a classic, the Nth Fibonacci to test with.
I was surprised to find that the recursive solution with memoization is much faster than the iterative solution and I would like to know why.
Here is the code (using python3):
import time
import sys

sys.setrecursionlimit(10000)

## recursive:
def fibr(n, memo = {}):
    if n <= 1:
        return n
    if n in memo:
        return memo[n]
    memo[n] = fibr(n-1, memo) + fibr(n-2, memo)
    return memo[n]

## iterative:
def fibi(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

rstart = time.time()
for n in range(10000):
    fibr(n)
rend = time.time()

istart = time.time()
for n in range(10000):
    fibi(n)
iend = time.time()

print(f"recursive: {rend-rstart}")
print(f"iterative: {iend-istart}")
My results:
recursive: 0.010010004043579102
iterative: 6.274333238601685
Unless I'm mistaken, both the recursive solution and the iterative solution are about as optimized as they can get? If I'm wrong about that, I'd like to know why.
If not, what would cause the iterative solution to be so much slower? It seems to be slower for all values of n, but harder to notice when n is something more reasonable, like <1000. (I'm using 10000 as you can see above)
Some things I've tried:
I thought it might be the magic swapping in the iterative solution a, b = b, a + b, so I tried replacing it with a more traditional "swap" pattern:
tmp = a + b
a = b
b = tmp
#a, b = b, a + b
But the results are basically the same, so that's not the problem there.
I also re-arranged the code so that the iterative solution runs first, just to see if there was some weird cache issue at the OS level. It doesn't change the results, so that's probably not it.
My understanding here (and it might be wrong) is that the recursive solution with memoization is O(n). And the iterative solution is also O(n) simply because it iterates from 0..n.
Am I missing something really obvious? I feel like I must be missing something here.
You might expect that def fibr(n, memo = {}): — when called without a supplied memo — turns into something translated a bit like:
def _fibr_without_defaults(n, memo):
    ...

def fibr(n):
    return _fibr_without_defaults(n, {})
That is, if memo is missing it implicitly gets a blank dictionary per your default.
In actuality, it translates into something more like:
def _fibr_without_defaults(n, memo):
    ...

_fibr_memo_default = {}

def fibr(n):
    return _fibr_without_defaults(n, _fibr_memo_default)
That is, a default argument value is not "use this to construct a default value" but instead "use this actual value by default". Every single call to fibr you make (without supplying memo) is sharing a default memo dictionary.
That means in:
for n in range(10000):
    fibr(n)
The prior iterations of your loop are filling out memo for the future iterations. When n is 1000, for example, all the work performed by n<=999 is still stored.
By contrast, the iterative version always starts iterating from 0, no matter what work prior iterative calls performed.
If you perform the translation above by hand, so that you really do get a fresh empty memo for each call, you'll see the iterative version is faster. (Makes sense; inserting things into a dictionary and retrieving them just to do the same work as simple iteration will be slower.)
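A sketch of that hand translation, so every top-level call really does start from an empty memo (the fibr_fresh name is mine):

```python
def fibr(n, memo):
    if n <= 1:
        return n
    if n in memo:
        return memo[n]
    memo[n] = fibr(n - 1, memo) + fibr(n - 2, memo)
    return memo[n]

def fibr_fresh(n):
    return fibr(n, {})  # brand-new memo dictionary per call
```

Benchmarking fibr_fresh against fibi (with a recursion limit high enough for the chosen n) should show the iterative version winning, since it does the same O(n) additions without any dictionary traffic.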
They are not the same.
The recursive version uses the memoization pattern, calculating the result of fibr(n) only once and storing/caching it for instant return if it is needed again. Across the whole benchmark loop that is O(n) work in total.
The iterative version calculates everything from scratch on every call, so over the benchmark loop it performs 0 + 1 + ... + (n-1) iterations, which is O(n^2) in total.
I have this function:
def rec(lst):
    n = len(lst)
    if n <= 1:
        return 1
    return rec(lst[n // 2:]) + rec(lst[:n // 2])
How can I find the time complexity of this function?
Usually in such problems drawing the recursion tree helps.
If you draw it for this function, note how each level of the tree sums up to n (since the slicing is what does the work here), and the depth of the tree is log n (this is easy to show, since we halve the input each time; you can find an explanation here). So what we have is the function doing O(n) work at each of the log n levels, which means in total we have O(n log n).
Now another way of understanding this is using the "Master Theorem" (I encourage you to look it up and learn about it)
We have here T(n) = 2T(n/2) + O(n), so according to the theorem a = 2 and b = 2, which makes log_b(a) = 1. Therefore we have (according to the 2nd case of the theorem):
T(n) = O(n^(log_b a) * log n) = O(n log n)
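You can also check the bound empirically by counting how many elements the slices copy in total; rec_cost below is my instrumented variant of the function, not part of the original question:

```python
def rec_cost(lst):
    """Return (rec's result, total elements copied by slicing)."""
    n = len(lst)
    if n <= 1:
        return 1, 0
    r1, c1 = rec_cost(lst[n // 2:])
    r2, c2 = rec_cost(lst[:n // 2])
    # The two slices above copy n elements between them at this call.
    return r1 + r2, c1 + c2 + n

# For n = 1024 the copying cost is n * log2(n) = 1024 * 10 = 10240.
result, cost = rec_cost(list(range(1024)))
```

The cost satisfies cost(n) = 2 * cost(n/2) + n with cost(1) = 0, which solves to exactly n * log2(n) when n is a power of two.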
Hello, I am relatively new to Python! Is there a way to do this using for loops in Python?
This is a Java implementation of something I want to do in Python:
for (i = 1; i < 20; i *= 2) {
    System.out.println(i);
}
Solution with a while loop in Python:

i = 1
while i < 20:
    print i
    i *= 2
I cannot figure out a way to do this using for loops. I implemented it with a while loop, obviously, but I'm still curious to know whether there is a method to do so.
There are lots of ways to do this, e.g.
for i in range(5):
    i = 2 ** i
    print i
or using generators
from itertools import count, takewhile

def powers_of_two():
    for i in count():
        yield 2 ** i

for i in takewhile(lambda x: x < 20, powers_of_two()):
    print i
But in the end, it depends on your use case which version gives the clearest and most readable code. In most cases, you would probably just use a while loop, since it's simple and does the job.
You are thinking of for loops as they work in other languages, like C, C++, Java, and JavaScript.
Python for loops are different; they work on iterables, and you always have to read them like:
for element in iterable
instead of the C'ish
for(start_condition; continue_condition; step_statement)
Hence, you would need an iterable that generates your products.
I like readability, so here's how I'd do it:
for a in (2**i for i in range(5)):
    print a
But that mainly works because we mathematically know that the i'th element of your sequence is going to be 2**i.
There is no real way to do this in Python. If you wanted to mimic the logic of that for loop exactly, then a manual while loop would definitely be the way to go.
Otherwise, in Python, you would try to find a generator or generator expression that produces the values of i. Depending on the complexity of your post-loop expression, this may require an actual function.
In your case, it’s a bit simpler because the numbers you are looking for are the following:
1 = 2 ** 0
2 = 2 ** 1
4 = 2 ** 2
8 = 2 ** 3
...
So you can generate the numbers using a generator expression (2 ** k for k in range(x)). The problem here is that you would need to specify a value x which happens to be math.floor(math.log2(20)) + 1 (because you are looking for the largest number k for which 2 ** k < 20 is true).
So the full expression would be this:
import math

for i in (2 ** k for k in range(math.floor(math.log2(20)) + 1)):
    print(i)
… which is a bit messy, so if you don’t necessarily need the i to be those values, you could move it inside the loop body:
for k in range(math.floor(math.log2(20)) + 1):
    i = 2 ** k
    print(i)
But this still only fits your purpose. If you wanted a “real” C-for loop expression, you could write a generator function:
def classicForLoop(init, stop, step):
    i = init
    while i < stop:
        yield i
        i = step(i)
Used like this:
for i in classicForLoop(1, 20, lambda x: x * 2):
    print(i)
Of course, you could also modify the generator function to take lambdas as the first and second parameter, but it’s a bit simpler like this.
Use the range() function to define the iteration length. You can use print() directly instead of System.out.println.
Alexander mentioned it already, so re-iterating:

for i in range(5):
    print(2 ** i)
You can also consider a while loop here:

i = 1
while i < 20:
    print(i)
    i *= 2
Remember that indentation matters in Python.
Here are the best implementations I could find for lazy infinite sequences of Fibonacci numbers in both Clojure and Python:
Clojure:
(def fib-seq (lazy-cat [0 1]
                       (map + fib-seq (rest fib-seq))))
sample usage:
(take 5 fib-seq)
Python:
def fib():
    a = b = 1
    while True:
        yield a
        a, b = b, a + b
sample usage:
for i in fib():
    if i > 100:
        break
    else:
        print i
Obviously the Python code is much more intuitive.
My question is: Is there a better (more intuitive and simple) implementation in Clojure ?
Edit: I'm opening a follow-up question at Clojure Prime Numbers.
I agree with Pavel, what is intuitive is subjective. Because I'm (slowly) starting to grok Haskell, I can tell what the Clojure code does, even though I've never written a line of Clojure in my life. So I would consider the Clojure line fairly intuitive, because I've seen it before and I'm adapting to a more functional mindset.
Let's consider the mathematical definition, shall we?
{ 0 if x = 0 }
F(x) = { 1 if x = 1 }
{ F(x - 1) + F(x - 2) if x > 1 }
This is less than ideal, formatting wise - those three brackets lined up should be one giant bracket - but who's counting? This is a pretty clear definition of the Fibonacci sequence to most people with a mathematical background. Let's look at the same thing in Haskell, because I know it better than Clojure:
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)
This is a function, fib, that returns the nth Fibonacci number. Not exactly what we had in Python or Clojure, so let's fix that:
fibs = map fib [0..]
This makes fibs an infinite list of Fibonacci numbers. fibs !! 1 is 1, fibs !! 2 is 1, fibs !! 10 is 55, and so on. However, this is probably quite inefficient, even in a language that relies on heavily optimized recursion such as Haskell. Let's look at the Clojure definition in Haskell:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
The first couple of characters are pretty simple: 0 : 1 : makes a list with elements 0 and 1, and then some more. But what's all the rest of that? Well, fibs is the list we've already got, and tail fibs calls the tail function on our list so far, which returns the list starting at the 2nd element (sort of like in Python saying fibs[1:]). So we take these two lists - fibs and tail fibs - and we zip them together with the + function (operator) - that is, we add the matching elements of each. Let's look:
fibs = 0 : 1 : ...
tail fibs = 1 : ...
zip result = 1 : ...
So our next element is 1! But then we add that back onto our fibs list, and look what we get:
fibs = 0 : 1 : 1 : ...
tail fibs = 1 : 1 : ...
zip result = 1 : 2 : ...
What we have here is a recursive list definition. As we add more elements to the end of fibs with our zipWith (+) fibs (tail fibs) bit, more elements become available for us to work with when adding elements. Note that Haskell by default is lazy, so just making an infinite list like that won't crash anything (just don't try to print it).
So while this is perhaps theoretically the same as our mathematical definition before, it saves the results in our fibs list (sort of an auto-memoization) and we rarely have the problems that might be experienced in a naive solution. For completeness, let's define our fib function in terms of our new fibs list:
fib n = fibs !! n
If I didn't lose you yet, that's good, because that means you understand the Clojure code. Look:
(def fib-seq (lazy-cat [0 1]
                       (map + fib-seq (rest fib-seq))))
We make a list, fib-seq. It starts with two elements, [0 1], just like our Haskell example. We do a lazy concatenation of these two initial elements with (map + fib-seq (rest fib-seq)) - assuming rest does the same thing that tail does in Haskell, we're just combining our list with itself at a lower offset, and then combining these two lists with the + operator/function.
After working this through your head a few times, and exploring some other examples, this method of generating fibonacci series becomes at least semi-intuitive. It's at least intuitive enough for me to spot it in a language I don't know.
I like:
(def fibs
  (map first
       (iterate
         (fn [[a, b]]
           [b, (+ a b)])
         [0, 1])))
Which seems to have some similarities to the python/generator version.
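The parallel with the Python generator version can be made explicit; itertools has no iterate, so the helper below (my naming) stands in for Clojure's:

```python
from itertools import islice

def iterate(f, x):
    # Yields x, f(x), f(f(x)), ... like Clojure's iterate.
    while True:
        yield x
        x = f(x)

fibs = (a for a, b in iterate(lambda pair: (pair[1], pair[0] + pair[1]), (0, 1)))

first_ten = list(islice(fibs, 10))
print(first_ten)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

As in the Clojure version, the state is a pair, and taking the first element of each pair yields the sequence.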
If you didn't know any imperative languages, would this be intuitive for you?
a = a + 5
WTF? a clearly isn't the same as a + 5.
if a = a + 5, is a + 5 = a?
Why doesn't this work???
if (a = 5) { // should be == in most programming languages
    // do something
}
There are a lot of things that aren't clear unless you've seen them before somewhere else and understood their purpose. For a long time I didn't know the yield keyword, and as a result I couldn't figure out what code using it did.
You think the imperative approach is more legible because you are used to it.
The Clojure code is intuitive to me (because I know Clojure). If you want something that might look more like something you're familiar with, you can try this, an efficient and overly-verbose recursive version:
(use 'clojure.contrib.def) ; SO's syntax-highlighting still sucks

(defn-memo fib [n]
  (cond (= n 0) 0
        (= n 1) 1
        :else (+ (fib (- n 1))
                 (fib (- n 2)))))
(def natural-numbers (iterate inc 0))

(def all-fibs
  (for [n natural-numbers]
    (fib n)))
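For comparison, the same memoized-function-plus-lazy-sequence shape in Python, with functools.lru_cache standing in for defn-memo (an analogy of mine, not part of the original answer):

```python
from functools import lru_cache
from itertools import count, islice

@lru_cache(maxsize=None)   # memoizes fib, like defn-memo
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

all_fibs = (fib(n) for n in count())  # lazy "infinite" sequence

print(list(islice(all_fibs, 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The generator expression over count() plays the role of the for comprehension over natural-numbers.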
But to someone who doesn't know what recursion or memoization are, that isn't going to be intuitive either. The very idea of "infinite lazy sequences" probably isn't intuitive to the majority of programmers. I can't guess what's in your brain so I don't know what a more intuitive Clojure function would look like to you, other than "looks more like Python".
To someone who doesn't know programming, all of this stuff is going to look like gibberish. What's a loop? What's a function? What is this yield thing? That's where we all start. Intuitiveness is a function of what you've learned so far. Non-intuitive code is code you aren't familiar with yet. Extrapolating from "I know this" to "It's inherently more intuitive" is wrong.
The wiki has an in depth treatment of various Fibonacci implementations in Clojure which may interest you if you haven't seen it already.
You should always use a language that fits the problem*. Your Python example is just lower level than the Clojure one -- easier to understand for beginners, but more tedious to write and parse for someone who knows the fitting higher level concepts.
* By the way, this also means that it is always nice to have a language that you can grow to fit.
Here is one solution.
(defn fib-seq [a b]
  (cons (+ a b) (lazy-seq (fib-seq (+ a b) a))))

(def fibs (concat [1 1] (fib-seq 1 1)))

user=> (take 10 fibs)
(1 1 2 3 5 8 13 21 34 55)
Think about how you would write lazy-cat with recur in Clojure.
(take 5 fibs)
Seems about as intuitive as it could possibly get. I mean, that is exactly what you're doing. You don't even need to understand anything about the language, or even know what language that is, in order to know what should happen.