I'm currently doing some work around Big-O complexity and calculating the complexity of algorithms.
I seem to be struggling to work out the steps to calculate the complexity and was looking for some help to tackle this.
The function:
index = 0
while index < len(self.items):
    if self.items[index] == item:
        self.items.pop(index)
    else:
        index += 1
The actual challenge is to rewrite this function so that it has O(n) worst-case complexity.
My problem with this is that, as far as I thought, assignment statements and if statements have a complexity of O(1), whereas the while loop has a complexity of O(n), and in the worst case any statement within the while loop could execute n times. So I work this out as 1 + n + 1 = 2 + n = O(n).
I figure I must be working this out incorrectly as there'd be no point in rewriting the function otherwise.
Any help with this is greatly appreciated.
If self.items is a list, pop(index) is not a constant-time operation: its cost grows with the number of elements that have to be shifted after the removal. So the pop call is the only reason this function is not O(N).
Probably the exercise wants you to use some other method of iterating over and removing from the list.
To make it O(N) you can do:
self.items = [x for x in self.items if x != item]
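For context, a minimal sketch of how that rewrite might look inside a class like the one in the question (the class name Bag and method name remove_all are made up for illustration):
class Bag:
    def __init__(self, items=None):
        # hypothetical container holding the list the question calls self.items
        self.items = list(items or [])

    def remove_all(self, item):
        # single pass over the list, so O(n) worst case
        self.items = [x for x in self.items if x != item]

b = Bag([1, 2, 3, 2, 2])
b.remove_all(2)
print(b.items)  # [1, 3]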
If you are using Python's built-in list data structure, the pop(index) operation is not constant time in the worst case; it is O(N). So your overall complexity is O(N^2). You will need to use some other data structure, such as a linked list, if you cannot use auxiliary space.
With no argument, pop() is O(1).
With an argument to pop:
Average time complexity: O(k) (k is the index passed to pop)
Amortized worst-case time complexity: O(k)
Worst-case time complexity: O(n)
Time Complexity - Python Wiki
So to make your code efficient, pop from the end of the list where possible, for example:
def pop(self):
    return self.items.pop(-1)  # popping the last element is O(1)
Since you are passing an index to self.items.pop(index), it's NOT O(1).
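To see the difference in practice, here is a rough timing sketch (not a rigorous benchmark) comparing pop from the end with pop from the front:
import timeit

# Popping from the end is O(1); popping from the front shifts every
# remaining element, so each call is O(n).
setup = "items = list(range(100_000))"
print(timeit.timeit("items.pop()", setup=setup, number=50_000))
print(timeit.timeit("items.pop(0)", setup=setup, number=50_000))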
I need to check if a number and its double exist in an array. This code uses a set to solve it. However, I am not sure the time complexity is better than O(N^2). I use a for loop and if 2 * item in s as below. Isn't checking whether an item is in a collection another O(N) operation, which would mean O(N^2) in total? If it is optimal, how can I implement the code in C without using a nested loop?
Thanks a lot!
def checkIfExist(arr) -> bool:
    s = set(arr)
    for item in s:
        if 2 * item in s and item != 0:
            return True
    if arr.count(0) >= 2:
        return True
    return False
The time complexity of the 'in' operator for sets in Python is O(1) on average and only O(N) in the worst case, since Python sets use a hash table internally.
So your function's time complexity should be O(N) on average and only O(N^2) in the worst case, where N is the length of the array.
More here https://wiki.python.org/moin/TimeComplexity
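If you want a single pass that also handles the double-zero case without the separate arr.count(0) scan, one possible sketch (the function name is just for illustration, and the average O(N) still relies on set lookups) is:
def check_if_exist(arr) -> bool:
    seen = set()
    for num in arr:
        # a previously seen value is either the double or the half of num
        if 2 * num in seen or (num % 2 == 0 and num // 2 in seen):
            return True
        seen.add(num)
    return False

print(check_if_exist([10, 2, 5, 3]))   # True: 5 and 10
print(check_if_exist([0, 0]))          # True: 0 is its own double
print(check_if_exist([3, 1, 7, 11]))   # False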
I know there are many other questions out there asking for general guidance on how to calculate time complexity, such as this one.
From them I have learnt that when there is a loop, such as the (for... if...) in my Python programme, the time complexity is N * N, where N is the size of the input. (Please correct me if this is also wrong.) (Edited once after being corrected by an answer.)
# greatest common divisor of two integers
a, b = map(int, input().split())
list = []
for i in range(1, a+b+1):
    if a % i == 0 and b % i == 0:
        list.append(i)
n = len(list)
print(list[n-1])
However, do other parts of the code also contribute to the time complexity, making it more than a simple O(n) = N^2? For example, in the second loop, where I am finding the common divisors of both a and b (a % i == 0), is there a way to know how many machine instructions the computer will execute in finding all the divisors, and the resulting time complexity of this specific loop?
I hope the question makes sense; apologies if it is not clear enough.
Thanks for answering
First, a few hints:
In your code there is no nested loop. The if-statement does not constitute a loop.
Not all nested loops have quadratic time complexity.
Writing O(n) = N*N doesn't make any sense: what is n and what is N? Why does n appear on the left but N is on the right? You should expect your time complexity function to be dependent on the input of your algorithm, so first define what the relevant inputs are and what names you give them.
Also, O(n) is a set of functions (namely those asymptotically bounded from above by the function f(n) = n), whereas f(N) = N*N is a single function. By abuse of notation we conventionally write n*n = O(n) to mean n*n ∈ O(n) (which, in this particular case, is a false statement), but switching the sides (O(n) = n*n) is simply undefined. A mathematically correct statement would be n = O(n*n).
You can assume all (fixed bit-length) arithmetic operations to be O(1), since there is a constant upper bound to the number of processor instructions needed. The exact number of processor instructions is irrelevant for the analysis.
Let's look at the code in more detail and annotate it:
a, b = map(int, input().split())    # O(1)
list = []                           # O(1)
for i in range(1, a+b+1):           # O(a+b) iterations, multiplied by the cost of the loop body
    if a % i == 0 and b % i == 0:   # O(1)
        list.append(i)              # O(1) (amortized)
n = len(list)                       # O(1)
print(list[n-1])                    # O(log(a+b))
So what's the overall complexity? The dominating part is indeed the loop (the stuff before and after it is negligible, complexity-wise), so it's O(a+b), if you take a and b to be the input parameters. (If you instead take the length N of the string returned by input() as the input parameter, it would be O(2^N), since a+b grows exponentially with respect to N.)
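To make the a+b dependence concrete, here is a small instrumentation sketch (the helper name is made up) that counts the loop iterations for given a and b:
def count_iterations(a, b):
    iterations = 0
    divisors = []
    for i in range(1, a + b + 1):
        iterations += 1                      # one unit of work per loop pass
        if a % i == 0 and b % i == 0:
            divisors.append(i)
    return iterations, divisors[-1]

print(count_iterations(12, 18))  # (30, 6): a + b = 30 iterations, gcd is 6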
One thing to keep in mind, and you have the right idea, is that terms of higher degree take precedence. So you can have a step that is constant, O(1), but if it happens n times, that is O(N) repetitions, the total is O(1) * O(N) = O(N).
Your program is O(N) because the only thing really affecting the time complexity is the loop, and as you know a simple loop like that is O(N) because it increases linearly as n increases.
Now if you had a nested loop that had both loops increasing as n increased, then it would be O(n^2).
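For contrast, a minimal sketch of a genuinely nested loop, where both loops grow with n and the total work is therefore quadratic:
def count_pairs(items):
    n = len(items)
    pairs = 0
    for i in range(n):          # runs n times
        for j in range(n):      # runs n times for every i
            pairs += 1
    return pairs                # n * n increments in total: O(n^2)

print(count_pairs([1, 2, 3]))   # 9 == 3 * 3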
def hasPairWithSum(arr, target):
    for i in range(len(arr)):
        if (target - arr[i]) in arr[i+1:]:
            return True
    return False
In Python, is the time complexity of this function O(n) or O(n^2)? In other words, is if (target - arr[i]) in arr[i+1:] effectively another for loop or not?
Also, what about the following function, is it also O(n^2)? If not, why?
def hasPairWithSum2(arr, target):
    seen = set()
    for num in arr:
        num2 = target - num
        if num2 in seen:
            return True
        seen.add(num)
    return False
Thanks!
The first version has an O(n²) time complexity:
Indeed, arr[i+1:] already creates a new list with n-1-i elements, which is not a constant-time operation. The in operator then scans that new list and, in the worst case, visits every value in it.
If we count the elements copied into the new list created by arr[i+1:] and sum those counts over the iterations of the outer loop, we get:
(n-1) + (n-2) + (n-3) + ... + 1 + 0
This is a triangular number and equals n(n-1)/2, which is O(n²).
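You can sanity-check that count with a couple of lines (illustrative only):
n = 10
copied = sum(n - 1 - i for i in range(n))   # elements copied by arr[i+1:] in each iteration
print(copied, n * (n - 1) // 2)             # both print 45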
Second version
The second version, using a set, runs in O(n) average time complexity.
There is no list slicing here, and the in operator on a set has -- contrary to a list argument -- an average constant time complexity. So now every action within the loop has an (average) constant time complexity, giving the algorithm an average time complexity of O(n).
According to the Python wiki linked above, the in operator on a set can have a worst-case time complexity of O(n). So in the worst case you would still get a time complexity of O(n²) for your algorithm.
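If you want to see the difference empirically, here is a rough timing sketch (not a rigorous benchmark, and it assumes the two functions from the question are already defined) on an input where no pair exists, so both functions do their maximal amount of work:
import timeit

arr = list(range(5_000))
target = -1   # no two elements sum to -1, so neither version can return early

print(timeit.timeit(lambda: hasPairWithSum(arr, target), number=1))
print(timeit.timeit(lambda: hasPairWithSum2(arr, target), number=1))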
It's O(n^2).
The outer loop runs n times, and for each iteration the in check scans the remaining n - m elements (where m is the current index). So the whole thing does on the order of n(n - m) = n^2 - nm operations, and since m < n, the complexity is O(n^2).
But if it's critical to something you might consult someone who is better at that.
I have two functions, both of which flatten an arbitrarily nested list of lists in Python.
I am trying to figure out the time complexity of both, to see which is more efficient, but I haven't found anything definitive on SO so far. There are lots of questions about lists of lists, but not about nesting to the nth degree.
function 1 (iterative)
def flattenIterative(arr):
    i = 0
    while i < len(arr):
        while isinstance(arr[i], list):
            if not arr[i]:
                arr.pop(i)
                i -= 1
                break
            else:
                arr[i: i + 1] = arr[i]
        i += 1
    return arr
function 2 (recursive)
def flattenRecursive(arr):
    if not arr:
        return arr
    if isinstance(arr[0], list):
        return flattenRecursive(arr[0]) + flattenRecursive(arr[1:])
    return arr[:1] + flattenRecursive(arr[1:])
My thoughts are below:
function 1 complexity
I think that the time complexity of the iterative version is O(n * m), where n is the length of the initial array and m is the amount of nesting. I think the space complexity is O(n), where n is the length of the initial array.
function 2 complexity
I think that the time complexity of the recursive version will be O(n), where n is the length of the input array. I think the space complexity is O(n * m), where n is the length of the initial array and m is the depth of nesting.
summary
So, to me it seems that the iterative function is slower, but more efficient with space. Conversely, the recursive function is faster, but less efficient with space. Is this correct?
I don't think so. There are N elements, so you will need to visit each element at least once. Overall, your algorithm will run for O(N) iterations. The deciding factor is what happens per iteration.
Your first algorithm has two loops, but if you observe carefully, it still does only a constant amount of bookkeeping per element per step. However, as @abarnert pointed out, the slice assignment arr[i: i + 1] = arr[i] moves every element from arr[i+1:] up, which is O(N) again.
Your second algorithm is similar, but you are adding lists in this case (in the previous case, it was a simple slice assignment), and unfortunately, list addition is linear in complexity.
In summary, both your algorithms are quadratic.
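If linear time is the goal, one common alternative (a sketch, not the only option; note the recursion depth matches the nesting depth) is to append leaves to a single output list instead of slicing or concatenating, so each element is copied exactly once:
def flatten_linear(arr):
    out = []
    def walk(node):
        for x in node:
            if isinstance(x, list):
                walk(x)           # descend into the nested list
            else:
                out.append(x)     # each leaf is appended exactly once
    walk(arr)
    return out

print(flatten_linear([1, [2, [3, []]], 4]))   # [1, 2, 3, 4]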
def firstMissingPositive(self, A):
    least = 1024*1024*1024
    # worst case n ops
    for i in A:
        if i < least and i > 0:
            least = i
    if least > 1:
        return 1
    else:
        # should be O(n)
        A_set = set(A)
        # worst case n ops again
        while True:
            least += 1
            if least not in A_set:
                return least
I only ask because it seemed too simple for the given problem, which most people here may have seen on Leetcode or someplace similar. Is there something I'm not aware of in Python's implementation of set or dict? Lookup should be O(1) and converting a list to a set should be O(n) from what I understand.
It's not O(1) space, it's O(n) space, since you need to build the set.
As for the time, according to the Python wiki, set containment takes O(n) in the worst case, so your algorithm is O(n^2) in the worst case. Note this is the worst case; the average time for set containment is O(1), and so the average time complexity of your algorithm is indeed O(n).
You can get the worst-case down to O(n log n) by using an ordered set, but then the average-time will be O(n log n) as well.
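Python has no built-in ordered set, but the same O(n log n) worst-case bound can be sketched in plain Python by sorting and scanning instead (the name and approach here are just one illustration, not the only way):
def first_missing_positive_sorted(A):
    # O(n log n): sort, then walk the positive values in increasing order.
    expected = 1
    for x in sorted(A):
        if x == expected:
            expected += 1
        # duplicates and non-positive values are simply skipped
    return expected

print(first_missing_positive_sorted([3, 4, -1, 1]))   # 2
print(first_missing_positive_sorted([1, 2, 0]))       # 3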