Could someone please help with finding the complexity of the following code?
def mystery(n):
    sum = 0
    if n % 2 == 0:
        for i in range(n + 10000):  # len() dropped: range() takes the integer directly
            sum += 1
    elif n % 3 == 0:
        i, j = 0, 0
        while i <= n:
            while j <= n:
                sum += j - 1
                j += 1
            i += 1
    else:
        sum = n**3
Would the time complexity of this code be O(n^2)? In the worst case the elif branch executes, so the outer while loop runs n times, while the nested while loop runs its n iterations only once, because we never reset j. That would give O(n^2 + n), and because the leading term is n^2, the complexity would be O(n^2)?
In the elif section:
when i = 0, the while j <= n: loop is O(n);
when i > 0, j is already n + 1, so the while j <= n: loop is O(1).
So the elif section is O(n) + O(n·1) = O(n + n) = O(n).
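You can confirm this empirically by counting how often the inner loop body actually runs (a quick sketch replicating just the elif branch, using n values divisible by 3 but not 2 so they would hit that branch):

def count_elif_iterations(n):
    # replicate the elif branch, counting executions of the inner loop body
    iterations = 0
    i, j = 0, 0
    while i <= n:
        while j <= n:
            iterations += 1
            j += 1
        i += 1
    return iterations

print(count_elif_iterations(9))    # 10  (= n + 1): the inner loop runs fully only once
print(count_elif_iterations(99))   # 100 (= n + 1): growth is linear, not quadratic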
I have the following recursive function to get the mth term of an n-bonacci sequence (full code shown below). My problem is that the use of for and while loops is completely banned, so I need to get rid of
for i in range(1, n+1):
    temp += n_bonacci(n,m-i)
and convert the code into something that is not a loop but nevertheless achieves the same effect. Among the things I can use (this is not an exhaustive list) are: (1) built-in functions like sum() and .join(), and (2) list comprehensions.
The full code is as follows:
def n_bonacci(n,m): #n is the number of initial terms; m is the mth term
    if (m < n-1):
        return 0
    elif (m == n-1):
        return 1
    else:
        temp = 0
        #[temp += n_bonacci(n,m-i) for i in range(n)] #an attempt at a list comprehension (invalid syntax: += is a statement)
        for i in range(1, n+1):
            temp += n_bonacci(n,m-i)
        return temp

print("n_bonacci:",n_bonacci(2,10))
print("n_bonacci:",n_bonacci(7,20))
Here's a solution that avoids any type of loop, including loops hidden inside comprehensions:
def n_bonacci_loopless(n, m):
    def inner(i, c):
        if i == m:
            return sum(c)
        else:
            return inner(i+1, c[-(n-1):] + [sum(c)])
    if m < n-1:
        return 0
    elif (m == n-1):
        return 1
    else:
        return inner(n, [0] * (n-1) + [1])
The base cases are the same, but for recursive cases it initialises a list of collected results c with n-1 zeroes, followed by a one, the sum of which would be the correct answer for m == n.
For m > n, inner calls itself as long as i < m, each time summing the list and appending that sum to the last n-1 elements of the list so far.
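A quick check with the question's own test calls (the expected values below assume the original loop-based implementation is correct):

print("n_bonacci:", n_bonacci_loopless(2, 10))  # 55, the 10th Fibonacci number here
print("n_bonacci:", n_bonacci_loopless(7, 20))  # same value as n_bonacci(7, 20)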
If you are allowed to use comprehensions (strictly speaking, the code below uses a generator expression), the answer is trivial:
def n_bonacci(n,m):
    if (m < n-1):
        return 0
    elif (m == n-1):
        return 1
    else:
        return sum(n_bonacci(n, m-i) for i in range(1, n+1))
To go beyond @Grismar's answer, you can write your own version of sum which doesn't use loops.
def n_bonacci_loopless(n, m):
    def recsum(l, s=0):
        if not l:  # base case added: an empty list sums to the accumulator
            return s
        return recsum(l[1:], s + l[0])
    def inner(i, c):
        if i == m:
            return recsum(c)
        else:
            return inner(i+1, c[-(n-1):] + [recsum(c)])
    if m < n-1:
        return 0
    elif (m == n-1):
        return 1
    else:
        return inner(n, [0] * (n-1) + [1])
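One caveat: both inner and recsum add one stack frame per step, so large m (or long lists) can exceed CPython's default recursion limit of roughly 1000 frames. A small sketch of the usual workaround:

import sys

# inner() recurses about m - n times and recsum() once per list element,
# so deep sequences need a higher recursion limit than CPython's default
sys.setrecursionlimit(100000)
print(n_bonacci_loopless(7, 20))  # matches the loop-based n_bonacci(7, 20)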
What is the time complexity of this short program? I'm thinking it's linear, but then again, in the if statement there's a Python slice. I'm wondering how it would contribute to the complexity:
def count_s(string, sub):
    i = 0
    j = len(sub) - 1
    count = 0
    while j < len(string):
        if string[i:j+1] == sub:
            count = count + 1
            j = j + 1
            i = i + 1
        else:
            i = i + 1
            j = j + 1
    return count
Python's slicing (and the string comparison that follows it) is O(k), where k is the length of the slice, so your function is O(n * k), which is not linear, though still far from exponential. On the other hand, you could use Python's built-in count method, as in string.count(sub); just note that str.count counts non-overlapping occurrences, while your sliding window counts overlapping ones, so the results can differ.
Or, if you're set on using your own code, it could be slimmed down to this for readability's sake:
def count_s(string, sub):
    i = count = 0
    j = len(sub) - 1
    while j < len(string):
        count += string[i:j + 1] == sub
        j += 1
        i += 1
    return count
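For example (an arbitrary test string), the two approaches disagree on overlapping patterns:

s, sub = "aaaa", "aa"
print(count_s(s, sub))  # 3: the sliding window counts overlapping matches
print(s.count(sub))     # 2: str.count only counts non-overlapping matches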
I'm trying to solve a HackerRank problem:

You are given Q queries. Each query consists of a single number N. You can perform one of 2 operations on N in each move: if N = a×b (a ≠ 1, b ≠ 1), change N to max(a, b), or decrease the value of N by 1.

Determine the minimum number of moves required to reduce the value of N to 0. For example, N = 4 can be reduced in 3 moves: 4 → max(2, 2) = 2 → 1 → 0.
I have used a BFS approach to solve this:

a. generating all prime numbers with a sieve,
b. using the prime table to skip the factor search for primes,
c. enqueueing N-1 along with all the factors to get to zero,
d. using previous results (memoization) so as not to enqueue already-encountered values.

This is still giving me "time limit exceeded". Any ideas? I have also added comments in the code.
import math

#find all the prime numbers up to 1000000
primes = [1]*(1000000+1)
primes[0] = 0
primes[1] = 0
for i in range(2, 1000000+1):
    if primes[i] == 1:
        j = 2
        while i*j <= 1000000:  # <= so that 1000000 itself gets marked composite
            primes[i*j] = 0
            j += 1

n = int(input())
for i in range(n):
    memoize = [-1 for i in range(1000000)]
    count = 0
    n = int(input())
    queue = []
    queue.append((n, count))
    while len(queue):
        data, count = queue.pop(0)
        if data <= 1:
            count += 1
            break
        #if it is a prime number then just enqueue data-1
        if primes[data] == 1 and memoize[data-1] == -1:
            queue.append((data-1, count+1))
            memoize[data-1] = 1
            continue
        #enqueue data-1 along with all the factors
        queue.append((data-1, count+1))
        sqr = int(math.sqrt(data))
        for i in range(sqr, 1, -1):
            if data%i == 0:
                div = max(int(data/i), i)
                if memoize[div] == -1:
                    memoize[div] = 1
                    queue.append((div, count+1))
    print(count)
There are two large causes of slowness with this code.
Clearing an array is slower than clearing a set
The first problem is this line:
memoize = [-1 for i in range(1000000)]
This builds a list of 1 million integers and is executed for each of your 1000 test cases. A faster approach is to simply use a Python set to record which values have already been visited.
Unnecessary loop being executed
The second problem is this line:
if primes[data] == 1 and memoize[data-1] == -1:
If the number is prime but data-1 has already been visited, this condition is false, so the code falls through and runs the slow loop searching for factors, which will never find any (because the number is prime).
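A minimal fix for that branch (sketched against the question's list-based memoization, replacing the corresponding lines inside the while loop) is to make every prime take the continue, and only gate the enqueue on the memo check:

# a prime has no factor pairs, so never fall through to the factor loop below
if primes[data] == 1:
    if memoize[data-1] == -1:
        memoize[data-1] = 1
        queue.append((data-1, count+1))
    continue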
Faster code
In fact, the improvement due to using sets is so much that you don't even need your prime testing code and the following code passes all tests within the time limit:
import math

n = int(input())
for i in range(n):
    memoize = set()
    count = 0
    n = int(input())
    queue = []
    queue.append((n, count))
    while len(queue):
        data, count = queue.pop(0)
        if data <= 1:
            if data == 1:
                count += 1
            break
        if data-1 not in memoize:
            memoize.add(data-1)
            queue.append((data-1, count+1))
        sqr = int(math.sqrt(data))
        for i in range(sqr, 1, -1):
            if data%i == 0:
                div = max(int(data/i), i)
                if div not in memoize:
                    memoize.add(div)
                    queue.append((div, count+1))
    print(count)
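One further micro-optimisation, independent of the fixes above: queue.pop(0) is itself O(n) in the queue length, while collections.deque pops from the left in O(1). A drop-in sketch of the idiom:

from collections import deque

queue = deque()
queue.append((27, 0))          # same (value, count) pairs as above
data, count = queue.popleft()  # O(1); list.pop(0) shifts every remaining element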
Alternatively, there's an O(n*sqrt(n)) time and O(n) space complexity solution that passes all the test cases just fine.

The idea is to cache the minimum counts for every non-negative integer up to 1,000,000 (the maximum possible input in the question) BEFORE running any query. After doing so, each query just returns the minimum count for the given number stored in the cache, so retrieving a result is O(1) per query.
To find the minimal counts for each number (let's call the table down2ZeroCounts), we should consider several cases:

0 and 1 have minimal counts 0 and 1 respectively.
A prime number p has no factors other than 1 and itself, hence its minimal count is 1 plus the minimal count of p - 1, or more formally down2ZeroCounts[p] = down2ZeroCounts[p - 1] + 1.
For a composite number num it's a bit more complicated. For any pair of factors a > 1, b > 1 such that num = a*b, the allowed move goes to max(a, b), so the minimal count of num is the minimum of down2ZeroCounts[max(a, b)] + 1 over all such pairs, and down2ZeroCounts[num - 1] + 1.

So we can gradually build the minimal counts for each number in ascending order: the minimal count of each consecutive number is derived from the optimal counts of lower numbers, and in the end a complete table of optimal counts has been built.
To better understand the approach please check the code:
from __future__ import print_function

import os
import sys

maxNumber = 1000000
down2ZeroCounts = [None] * 1000001

def cacheDown2ZeroCounts():
    down2ZeroCounts[0] = 0
    down2ZeroCounts[1] = 1
    currentNum = 2
    while currentNum <= maxNumber:
        if down2ZeroCounts[currentNum] is None:
            down2ZeroCounts[currentNum] = down2ZeroCounts[currentNum - 1] + 1
        else:
            down2ZeroCounts[currentNum] = min(down2ZeroCounts[currentNum - 1] + 1, down2ZeroCounts[currentNum])
        for i in xrange(2, currentNum + 1):
            product = i * currentNum
            if product > maxNumber:
                break
            elif down2ZeroCounts[product] is not None:
                down2ZeroCounts[product] = min(down2ZeroCounts[product], down2ZeroCounts[currentNum] + 1)
            else:
                down2ZeroCounts[product] = down2ZeroCounts[currentNum] + 1
        currentNum += 1

def downToZero(n):
    return down2ZeroCounts[n]

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')
    q = int(raw_input())
    cacheDown2ZeroCounts()
    for q_itr in xrange(q):
        n = int(raw_input())
        result = downToZero(n)
        fptr.write(str(result) + '\n')
    fptr.close()
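As a quick sanity check of the table (a hypothetical standalone call, bypassing the HackerRank I/O scaffolding; note the code above is Python 2):

cacheDown2ZeroCounts()
print(downToZero(4))  # 3 moves: 4 -> 2 -> 1 -> 0
print(downToZero(1))  # 1 move: 1 -> 0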
Will it be O(n) or greater? Here n is the length of the list, and a is a very long list of integers:
final_count = 0
while n > 1:
    i = 0
    j = 1
    while a[i+1] == a[i]:
        i = i+1
        j = j+1
        if i == n-1:
            break
    for k in range(j):
        a.pop(0)
    final_count = final_count + j*(n-j)
    n = n-j
The way I see it, your code would be O(n) if it weren't for the a.pop(0) part. Since lists are implemented as arrays in memory, removing an element at the front means that all remaining elements need to be moved as well, so removing from the front of a list is O(n). You do that in a loop over j, and as far as I can tell the sum of all js adds up to n, so you end up removing an item from the list n times in total, making this part quadratic (O(n²)).
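If you want to see that quadratic cost directly, here is a small timing sketch (list size chosen arbitrarily):

import timeit

# pop(0) shifts every remaining element, so draining a list from the front
# is quadratic overall; deque.popleft() does the same job in O(1) per pop
print(timeit.timeit("while a: a.pop(0)", setup="a = list(range(50000))", number=1))
print(timeit.timeit("while a: a.popleft()",
                    setup="from collections import deque; a = deque(range(50000))",
                    number=1))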
You can avoid this, though, by not modifying your list and just keeping track of the initial index instead. This not only removes the need for the pop, but also the loop over j, making the complexity calculation a bit more straightforward:
final_count = 0
offset = 0
while n > 1:
    i = 0
    j = 1
    while a[offset + i + 1] == a[offset + i]:
        i += 1
        j += 1
        if i == n - 1:
            break
    offset += j
    final_count += j * (n - j)
    n = n - j
Btw, it's not exactly clear to me why you keep track of j, since j = i + 1 at all times, so you can get rid of it, making the code a bit simpler. And at the very end, j * (n - j) is exactly j * n if you adjust n first:
final_count = 0
offset = 0
while n > 1:
    i = 0
    while a[offset + i + 1] == a[offset + i]:
        i += 1
        if i == n - 1:
            break
    offset += i + 1
    n = n - (i + 1)
    final_count += (i + 1) * n
One final note: as you may notice, offset is now counting from zero up to the length of your list, while n does the reverse, counting from the length of your list down to zero. You could probably combine the two so you don't need both — see the sketch below.
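A minimal sketch of that combination (assuming, as above, that a is the list and n = len(a); pos is a single cursor replacing both offset and the shrinking n):

final_count = 0
pos = 0
while n - pos > 1:
    run = 1
    # advance past the run of equal elements starting at pos
    while pos + run < n and a[pos + run] == a[pos + run - 1]:
        run += 1
    final_count += run * (n - pos - run)
    pos += run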
I wrote some code for Project Euler Problem 35:
#Project Euler: Problem 35
import time
start = time.time()

def sieve_erat(n):
    '''creates list of all primes < n'''
    x = range(2,n)
    b = 0
    while x[b] < int(n ** 0.5) + 1:
        x = filter(lambda y: y % x[b] != 0 or y == x[b], x)
        b += 1
    else:
        return x

def circularPrimes(n):
    '''returns # of circular primes below n'''
    count = 0
    primes = sieve_erat(n)
    b = set(primes)
    for prime in primes:
        inc = 0
        a = str(prime)
        while inc < len(a):
            if int(a) not in b:
                break
            a = a[-1] + a[0:len(a) - 1]
            inc += 1
        else:
            count += 1
    else:
        return count

print circularPrimes(1000000)
elapsed = (time.time() - start)
print "Found in %s seconds" % elapsed
I am wondering why this code (above) runs so much faster when I set b = set(primes) in the circularPrimes function. The running time for this code is about 8 seconds. Initially, I did not set b = set(primes) and my circularPrimes function was this:
def circularPrimes(n):
    '''returns # of circular primes below n'''
    count = 0
    primes = sieve_erat(n)
    for prime in primes:
        inc = 0
        a = str(prime)
        while inc < len(a):
            if int(a) not in primes:
                break
            a = a[-1] + a[0:len(a) - 1]
            inc += 1
        else:
            count += 1
    else:
        return count
My initial code (without b = set(primes)) ran so long that I didn't wait for it to finish. I am curious about the large discrepancy in running time between the two pieces of code, as I do not believe primes had any duplicates that would have made iterating through it take so much longer than iterating through set(primes). Maybe my idea of set() is wrong. Any help is welcome.
I believe the culprit here is if int(a) not in b:. Sets are implemented internally as hash tables, so checking for membership is an average O(1) hash lookup, which is significantly less expensive than the O(n) linear scan that the same check performs on a list.
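A quick way to see the difference (sizes and repetition count chosen arbitrarily):

import timeit

# set membership is an average O(1) hash lookup;
# list membership is an O(n) linear scan
print(timeit.timeit("99999 in s", setup="s = set(range(100000))", number=100))
print(timeit.timeit("99999 in l", setup="l = list(range(100000))", number=100))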