Testing complexity using the time module - Python

I have a doubt regarding time complexity with recursion.
Let's say I need to find the largest number in a list using recursion. What I came up with is this:
def maxer(s):
    if len(s) == 1:
        return s.pop(0)
    else:
        if s[0] > s[1]:
            s.pop(1)
        else:
            s.pop(0)
        return maxer(s)
Now to test the function with many inputs and find out its time complexity, I called the function as follows:
import time
import matplotlib.pyplot as plt

def timer_v3(fx, n):
    x_axis = []
    y_axis = []
    for x in range(1, n):
        z = [x for x in range(x)]
        start = time.time()
        fx(z)
        end = time.time()
        x_axis.append(x)
        y_axis.append(end - start)
    plt.plot(x_axis, y_axis)
    plt.show()
Is there a fundamental flaw in checking complexity like this as a rough estimate? If so, how can we rapidly check the time complexity?

Assuming s is a list, then your function's time complexity is O(n²). When you pop from the start of the list, the remaining elements have to be shifted left one space to "fill in" the gap; that takes O(n) time, and your function pops from the start of the list O(n) times. So the overall complexity is O(n * n) = O(n²).
Your graph doesn't look like a quadratic function, though, because the definition of O(n²) means that it only has to have quadratic behaviour for n > n₀, where n₀ is an arbitrary number. 1,000 is not a very large number, especially in Python, because running times for smaller inputs are mostly interpreter overhead, and the O(n) pop operation is actually very fast because it's written in C. So it's not only possible, but quite likely that n < 1,000 is too small to observe quadratic behaviour.
The problem is, your function is recursive, so it cannot necessarily be run for large enough inputs to observe quadratic running time. Too-large inputs will overflow the call stack, or use too much memory. So I converted your recursive function into an equivalent iterative function, using a while loop:
def maxer(s):
    while len(s) > 1:
        if s[0] > s[1]:
            s.pop(1)
        else:
            s.pop(0)
    return s.pop(0)
This is strictly faster than the recursive version, but it has the same time complexity. Now we can go much further; I measured the running times up to n = 3,000,000.
The resulting graph looks a lot like a quadratic function. At this point you might be tempted to say, "ah, kaya3 has shown me how to do the analysis right, and now I see that the function is O(n²)." But that is still wrong. Measuring the actual running times - i.e. dynamic analysis - still isn't the right way to analyse the time complexity of a function. However large an n we test, n₀ could still be bigger, and we'd have no way of knowing.
So if you want to find the time complexity of an algorithm, you have to do it by static analysis, like I did (roughly) in the first paragraph of this answer. You don't save yourself time by doing a dynamic analysis instead; it takes less than a minute to read your code and see that it does an O(n) operation O(n) times, if you have the knowledge. So, it is definitely worth developing that knowledge.
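To illustrate what that static analysis buys you: the same reading of the code tells you that a version which never pops runs in O(n), because it does a constant amount of work per element. A minimal sketch of my own, not part of the original answer:
import random

def maxer_linear(s):
    # One comparison per element and no pops, so no O(n) shifting: O(n) overall.
    best = s[0]
    for x in s:
        if x > best:
            best = x
    return best

assert maxer_linear(random.sample(range(100), 100)) == 99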

Related

Converting recursive function to completely iterative function without using extra space

Is it possible to convert a recursive function like the one below to a completely iterative function?
def fact(n):
    if n <= 1:
        return
    for i in range(n):
        fact(n-1)
        doSomethingFunc()
It seems pretty easy to do given extra space like a stack or a queue, but I was wondering if we can do this in O(1) space complexity?
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Well, generally speaking, no.
I mean, the space taken on the stack by recursive functions is not just an inconvenience of this programming style. It is memory needed for the computation.
So, sure, for a lot of algorithms that space is unnecessary and could be spared. For a classical factorial, for example:
def fact(n):
    if n <= 1:
        return 1
    else:
        return n * fact(n-1)
the stacking of all the n, n-1, n-2, ..., 1 arguments is not really necessary.
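For instance, an implementation that keeps only a running product needs O(1) extra space, ignoring the growing size of the integer result itself (this is my sketch, not the poster's code):
def fact_iter(n):
    # Accumulate the product in one variable instead of stacking
    # n, n-1, ..., 1 as recursive call arguments.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result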
So, sure, you can find an implementation that gets rid of it, such as the iterative one sketched above. But that is an optimization (for example, for the specific case of tail recursion; though I am pretty sure you added that "doSomething" precisely to make clear that you don't want to focus on that specific case).
You cannot assume in general that an algorithm that doesn't need all those values exists, recursive or iterative. Otherwise, you would be saying that every algorithm has an O(1) space complexity version.
Example: base representation of a positive integer
def baseRepr(num, base):
    if num >= base:
        s = baseRepr(num // base, base)
    else:
        s = ''
    return s + chr(48 + num % base)
Not claiming it is optimal, or even well written.
But, the stacking of the arguments is needed. It is the way you implicitly store the digits that you compute in the reverse order.
An iterative function would also need some memory to store those digits, since you have to compute the last one first.
Well, I am pretty sure that for this simple example you could find a way to compute the digits from left to right, for example by using a log computation to know the number of digits in advance, or something like that. But that's not the point. Just imagine that no algorithm is known other than the one computing digits from right to left. Then you need to store them: either implicitly on the stack using recursion, or explicitly in allocated memory. So again, memory used on the stack is not just an inconvenience of recursion. It is the way a recursive algorithm stores things that an iterative algorithm would have to store some other way.
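To make that concrete, here is one possible iterative counterpart (my sketch, not the answerer's code). The digits that the recursive version keeps implicitly on the call stack are stored explicitly in a list, so the O(number of digits) memory does not go away; it just moves:
def baseReprIter(num, base):
    # Same right-to-left digit computation as the recursive version,
    # but the digits live in an explicit list instead of stack frames.
    digits = []
    while True:
        digits.append(chr(48 + num % base))
        num //= base
        if num == 0:
            break
    return ''.join(reversed(digits))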
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Yes.
I was wondering if we can do this in O(1) space complexity?
So, no.

What is the computational complexity of the function pow in Python? [duplicate]

This question already has an answer here:
Why is time complexity O(1) for pow(x,y) while it is O(n) for x**y?
I want to know the computational complexity of pow in Python, for the two-argument form (plain exponentiation).
I have this code, and I know that the computational complexity of the for loop is O(n), but I don't know whether the pow call affects that complexity.
def function(alpha, beta, p):
    for x in range(1, p):
        beta2 = (pow(alpha, x)) % p
        if beta == beta2:
            return x
        else:
            print("no existe")
As mentioned in the comments, the official Python interpreter does a lot of optimization for its internal math functions. The usual pow operation written as A ** B has to evaluate two variable pointers (all Python variables are really pointers to objects, which is why data types never need to be declared), and that is a slow process.
By contrast, the interpreter can optimize the data passed to pow, fixing the variable types as int, and thereby reduce the overhead.
You can also read this answer, which should fully explain your question
Why is time complexity O(1) for pow(x,y) while it is O(n) for x**y?
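If you want to sanity-check that claim on your own machine, a rough timeit comparison is easy to write (this is just a sketch of mine, and the numbers depend heavily on the interpreter version and on the size of the operands):
import timeit

# Compare the two spellings on the same operands; variables are used in the
# setup so the interpreter cannot constant-fold the ** expression away.
setup = "a, b = 3, 64"
print("pow(a, b):", timeit.timeit("pow(a, b)", setup=setup, number=1_000_000))
print("a ** b:   ", timeit.timeit("a ** b", setup=setup, number=1_000_000))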
Oh, now that you have posted the code, that clarifies the problem. Usually, in algorithm analysis, we treat the time complexity of a single operation as O(1); it doesn't matter how many such operations you have inside the loop body, because that is the definition of O() notation.
For a usual program, only the loops matter, so for your program the complexity should be just O(n):
def function(alpha, beta, p):
    for x in range(1, p):  # Only one loop
        beta2 = (pow(alpha, x)) % p
        if beta == beta2:
            return x
        else:
            print("no existe")

Recursion in python with list slicing vs indexes

Is there any (time/space complexity) disadvantage in writing recursive functions in python using list slicing?
From what I've seen on the internet, people tend to use lists with low/high index variables in recursive functions, but to me it seems more natural to call the function recursively with sliced lists.
Here are two implementations of binary search as examples of the what I'm describing:
List slicing
def binSearch(arr, k):
    if len(arr) < 1:
        return -1
    mid = len(arr) // 2
    if arr[mid] == k:
        return mid
    elif arr[mid] < k:
        val = binSearch(arr[mid+1:], k)
        if val == -1:
            return -1
        else:
            return mid + 1 + val
    else:
        return binSearch(arr[:mid], k)
Indexes
def binSearch2(arr, k, low, high):
    if low > high:
        return -1
    mid = (high + low) // 2
    if arr[mid] == k:
        return mid
    elif arr[mid] < k:
        return binSearch2(arr, k, mid+1, high)
    else:
        return binSearch2(arr, k, low, mid-1)
Slices plus recursion is, in general, a double-whammy of undesirability in Python. In this case, recursion is acceptable, but the slicing isn't. If you're in a rush, scroll to the bottom of this post and look at the benchmark.
Let's talk about recursion first. Python wasn't designed to support recursion well, or at least not to the extent of functional languages, which use a "natural" head/tail decomposition (car/cdr in Lisp) as their approximation of slicing. The same goes for any imperative language without tail-call support or first-class linked lists that allow accessing the tail in O(1).
Recursion is inappropriate for any linear algorithm in Python because the default CPython recursion limit is around 1000 frames, meaning that if the structure you're processing has more than 1000 elements (a very small number), your program will fail. There are dangerous hacks to increase the limit, but this just kicks the can to other trivially small limits and risks ungraceful interpreter termination.
For a binary search, recursion is fine, because you have an O(log(n)) algorithm, so you can comfortably handle pretty much any size structure. See this answer for a deeper treatment of when recursion is and isn't appropriate in Python and why it's a second-class citizen by design. Python is not a functional language, and never will be, according to its creator.
There are also few problems that actually require recursion. In this case, Python has a builtin that should cover the rare cases where you need a binary search. For the times bisect doesn't fit your needs, writing your algorithm iteratively is arguably no less intuitive than recursion (and, I'd argue, fits more naturally into the Python iteration-first paradigm).
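For reference, here is roughly what those two alternatives look like; this is a sketch of mine (with the hypothetical names bin_search_iter and bin_search_bisect), not code from the question:
import bisect

def bin_search_iter(lst, k):
    # Iterative binary search: same O(log n) time as the recursive version,
    # O(1) extra space, and no recursion depth to worry about.
    low, high = 0, len(lst) - 1
    while low <= high:
        mid = (low + high) // 2
        if lst[mid] == k:
            return mid
        elif lst[mid] < k:
            low = mid + 1
        else:
            high = mid - 1
    return -1

def bin_search_bisect(lst, k):
    # The builtin mentioned above: bisect_left returns the insertion point,
    # so we only need to check whether k is actually there.
    i = bisect.bisect_left(lst, k)
    return i if i < len(lst) and lst[i] == k else -1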
Moving on to slicing: although binary search is one of the rare cases where recursion is acceptable, slices are absolutely not appropriate here. Slicing the list is an O(n) copy operation, which totally defeats the purpose of binary searching. You might as well use in, which does a linear search for the same complexity cost as a single slice. Adding slicing makes the code easier to write, but drags the overall time complexity up to O(n): the copies cost roughly n/2 + n/4 + ... ≈ n work in total, which dwarfs the O(log n) comparisons.
Slicing also incurs a totally unnecessary O(n) space cost, not to mention garbage collection and memory allocation activity, a potentially painful constant-time cost.
Let's benchmark and see for ourselves. I used this boilerplate with one change to the namespace:
dict(arr=random.sample(range(n), k=n), k=n, low=0, high=n-1)
$ py test.py --n 1000000
------------------------------
n = 1000000, 100 trials
------------------------------
def binSearch(arr,k):
time (s) => 17.658957500000042
------------------------------
def binSearch2(arr,k,low,high):
time (s) => 0.01235299999984818
------------------------------
So for n=1000000 (not a large number at all), slicing is about 1400 times slower than indices. It just gets worse on larger numbers.
Minor nitpicks:
Use snake_case, not camelCase per PEP-8. Format your code with black.
Arrays in Python refer to something other than the type([]) => <class 'list'> you're probably using. I suggest lst or it if it's a generic list or iterable parameter.

How is the space complexity of the following algorithm O(logn)?

The following is Python code, from the book by Goodrich and Tamassia, to find the sum of a list of elements using binary recursion.
def binary_sum(S, start, stop):
    """Return the sum of the numbers in implicit slice S[start:stop]."""
    if start >= stop:              # zero elements in slice
        return 0
    elif start == stop - 1:        # one element in slice
        return S[start]
    else:                          # two or more elements in slice
        mid = (start + stop) // 2
        return binary_sum(S, start, mid) + binary_sum(S, mid, stop)
So it is stated in the book that:
"The size of the range is divided in half at each recursive call, and
so the depth of the recursion is 1+logn. Therefore, binary sum uses O(logn)
amount of additional space. However, the running time of
binary sum is O(n), as there are 2n−1 function calls, each requiring constant time."
From what I understand, it is saying that the space complexity of the algorithm is O(log n). But since it is making 2n − 1 function calls, wouldn't Python have to keep 2n − 1 different activation records, one for each call? And therefore, shouldn't the space complexity be O(n)? What am I missing?
it is making 2n-1 function calls
Not all of them at the same time. A function call has a beginning and an end.
wouldn't python have to keep 2n-1 different activation records for each function?
Only active activation records need to occupy space. There are O(recursion_depth) of those at any given time.
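One way to convince yourself of that (a sketch of mine, not part of the original answer) is to instrument binary_sum so that it counts both the total number of calls and the maximum number of frames alive at any one time:
def binary_sum_instrumented(S):
    stats = {"calls": 0, "max_depth": 0}

    def go(start, stop, depth):
        stats["calls"] += 1
        stats["max_depth"] = max(stats["max_depth"], depth)
        if start >= stop:
            return 0
        if start == stop - 1:
            return S[start]
        mid = (start + stop) // 2
        return go(start, mid, depth + 1) + go(mid, stop, depth + 1)

    return go(0, len(S), 1), stats

print(binary_sum_instrumented(list(range(1024))))
# (523776, {'calls': 2047, 'max_depth': 11}): 2n - 1 calls in total,
# but only 1 + log2(n) of them are ever on the stack at the same time.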
There is a very good explanation of this question at Space complexity analysis of binary recursive sum algorithm.
The space complexity of a recursive algorithm depends on the depth of the recursion, which here is log(n):
Why does the depth of recursion affect the space required by an algorithm? Each recursive function call usually needs to allocate some additional memory (for temporary data) to process its arguments. At least, each such call has to store some information about its parent call, just to know where to return after finishing. Let's imagine you are performing a task, and you need to perform a sub-task inside this first task, so you need to remember (or write down on a paper) where you stopped in the first task to be able to continue it after you finish the sub-task. And so on, a sub-sub-task inside a sub-task... So, a recursive algorithm will require space O(depth of recursion).
The space complexity of a recursive algorithm does not depend on the number of function calls; it depends on the depth of the recursion. In the above algorithm the depth of the recursion is O(log n).

What is the time complexity of these three solutions?

I have these three solutions to a Leetcode problem and do not really understand the difference in time complexity here. Why is the last function twice as fast as the first one?
68 ms
def numJewelsInStones(J, S):
    count = 0
    for s in S:
        if s in J:
            count += 1
    return count
40ms
def numJewelsInStones(J, S):
    return sum(s in J for s in S)
32ms
def numJewelsInStones(J, S):
    return len([x for x in S if x in J])
Why is the last function twice as fast as the first one?
The analytical time complexity in terms of big-O notation looks the same for all of them, but each is subject to different constant factors. That is, O(n) really means O(c*n); the c is ignored by convention when comparing time complexities.
Each of your functions has a different c. In particular
loops in general are slower than generators
sum of a generator is likely executed in C code (the sum part, adding numbers)
len is a simple attribute "single operation" lookup on the array, which can be done in constant time, whereas sum takes n add operations.
Thus c(for) > c(sum) > c(len) where c(f) is the hypothetical fixed-overhead measurement of function/statement f.
You could check my assumptions by disassembling each function.
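That disassembly check can be done with the standard dis module; here is a sketch (v1, v2, v3 are just stand-ins for the three functions from the question):
import dis

def v1(J, S):
    count = 0
    for s in S:
        if s in J:
            count += 1
    return count

def v2(J, S):
    return sum(s in J for s in S)

def v3(J, S):
    return len([x for x in S if x in J])

for f in (v1, v2, v3):
    print(f.__name__)
    dis.dis(f)  # compare how much bytecode each version runs per element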
Other than that your measurements are likely influenced by variation due to other processes running in your system. To remove these influences from your analysis, take the average of execution times over at least 1000 calls to each function (you may find that perhaps c is less than this variation though I don't expect that).
what is the time complexity of these functions?
Note that while all functions share the same big O time complexity, the latter will be different depending on the data type you use for J, S. If J, S are of type:
dict, the complexity of your functions will be in O(n)
set, the complexity of your functions will be in O(n)
list, the complexity of your functions will be in O(n*m), where n,m are the sizes of the J, S variables, respectively. Note if n ~ m this will effectively turn into O(n^2). In other words, don't use list.
Why is the data type important? Because Python's in operator is really just a proxy to membership testing implemented for a particular type. Specifically, dict and set membership testing works in O(1) that is in constant time, while the one for list works in O(n) time. Since in the list case there is a pass on every member of J for each member of S, or vice versa, the total time is in O(n*m). See Python's TimeComplexity wiki for details.
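A rough way to see that difference in practice (a sketch with made-up sizes, not from the original answer) is to time the same membership test against a list and a set holding the same elements:
import timeit

n = 100_000
data_list = list(range(n))
data_set = set(data_list)

# Looking up an element near the "end": O(n) scan for the list,
# O(1) hash lookup for the set.
print("list:", timeit.timeit(lambda: (n - 1) in data_list, number=1000))
print("set: ", timeit.timeit(lambda: (n - 1) in data_set, number=1000))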
With time complexity, big O notation describes how the solution grows as the input set grows. In other words, how they are relatively related. If your solution is O(n) then as the input grows then the time to complete grows linearly. More concretely, if the solution is O(n) and it takes 10 seconds when the data set is 100, then it should take approximately 100 seconds when the data set is 1000.
Your first solution is O(n); we know this because of the for loop, for s in S, which iterates through the entire data set once. The check s in J, assuming J is a set or a dictionary, will likely be constant time, O(1); the reasoning behind this is a bit beyond the scope of the question. As a result, the first solution is O(n) overall, linear time.
The nuanced differences in time between the other solutions is very likely negligible if you ran your tests on multiple data sets and averaged them out over time, accounting for startup time and other factors that impact the test results. Additionally, Big O notation discards coefficients, so for example, O(3n) ~= O(n).
You'll notice in all of the other solutions you have the same concept, loop over the entire collection and check for the existence in the set or dict. As a result, all of these solutions are O(n). The differences in time can be attributed to other processes running at the same time, the fact that some of the built-ins used are pure C, and also to differences as a result of insufficient testing.
Well, the second function is faster than the first because it uses a generator expression instead of an explicit loop. The third function is faster than the second because the second sums the generator's output (which yields something like a list of booleans), while the third just builds the list and takes its length.
