Experimentally determining computing complexity of matrix determinant - python

I need help determining, experimentally, the computational complexity of the determinant of an n x n matrix.
My code:
import numpy as np
import time
import timeit

t0 = time.time()
for n in range(1, 10):
    A = np.random.rand(n, n)
    det = np.linalg.slogdet(A)
    t = timeit.timeit(lambda: det)  # times the lambda, which just returns the already-computed det
    print(t)
But I get the same time for every n, hence, computing complexity: O(N) which is not correct as it is meant to be O(N^3). Any help would be much appreciated.

For what it's worth, any meaningful benchmarking typically requires sufficiently large N to give the computer something to chew on. A 10x10 matrix is not nearly large enough to start seeing the asymptotic complexity. Start throwing in numbers like 100, 1000, 10000, etc., and then you'll see your scaling.
For example, if I slightly modify your code:
import numpy as np
import time

for n in range(1, 14):
    t0 = time.time()
    p = 2**n
    A = np.random.rand(p, p)
    det = np.linalg.slogdet(A)
    print('N={:04d} : {:.2e}s'.format(p, time.time() - t0))
This results in
N=0002 : 4.35e-02s
N=0004 : 0.00e+00s
N=0008 : 0.00e+00s
N=0016 : 5.02e-04s
N=0032 : 0.00e+00s
N=0064 : 5.02e-04s
N=0128 : 5.01e-04s
N=0256 : 1.50e-03s
N=0512 : 8.00e-03s
N=1024 : 3.95e-02s
N=2048 : 2.05e-01s
N=4096 : 1.01e+00s
N=8192 : 7.14e+00s
You can see that for very small values of N, some small-value optimizations and tricks make it hard to see O() complexity, but as the values of N grow, you can start to see the scaling.

There are some possible reasons:
The computer used to generate these numbers was busy doing something else when the "slow" runs such as n=2 or n=16 were executed.
Particularly for n=2, some caching may have happened after the first loop iteration, speeding up subsequent runs.
You'd also typically expect n=1 to have the worst ratio of running time to n, simply because of constant overhead like variable initialization.
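If you want to go one step further and extract an exponent from measurements like these, a common approach (a sketch of my own, not from the original answer) is to fit a straight line to log(time) versus log(N); the slope estimates the exponent, which should come out near 3 for the LAPACK-based determinant:
import numpy as np
import timeit

sizes = [256, 512, 1024, 2048]            # large enough that constant overhead is negligible
times = []
for p in sizes:
    A = np.random.rand(p, p)
    # time the actual computation (the lambda calls slogdet on every run)
    times.append(timeit.timeit(lambda: np.linalg.slogdet(A), number=3) / 3)

# slope of the log-log fit approximates the scaling exponent
slope, intercept = np.polyfit(np.log(sizes), np.log(times), 1)
print('estimated exponent: {:.2f}'.format(slope))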

Related

How do I calculate Time Complexity for this particular algorithm?

I know there are many other questions out there asking for a general guide on how to calculate time complexity, such as this one.
From them I have learnt that when there is a loop, such as the (for... if...) in my Python programme, the time complexity is N * N where N is the size of the input. (Please correct me if this is also wrong.) (Edited once after being corrected by an answer.)
# greatest common divisor of two integers
a, b = map(int, input().split())
list = []
for i in range(1, a+b+1):
    if a % i == 0 and b % i == 0:
        list.append(i)
n = len(list)
print(list[n-1])
However, do other parts of the code also contribute to the time complexity, making it more than a simple O(n) = N^2? For example, in the second loop, where I am finding the common divisors of both a and b (a % i == 0), is there a way to know how many machine instructions the computer will execute in finding all the divisors, and the consequent time complexity, in this specific loop?
I hope the question makes sense; apologies if it is not clear enough.
Thanks for answering.
First, a few hints:
In your code there is no nested loop. The if-statement does not constitute a loop.
Not all nested loops have quadratic time complexity.
Writing O(n) = N*N doesn't make any sense: what is n and what is N? Why does n appear on the left but N is on the right? You should expect your time complexity function to be dependent on the input of your algorithm, so first define what the relevant inputs are and what names you give them.
Also, O(n) is a set of functions (namely those asymptotically bounded from above by the function f(n) = n), whereas f(N) = N*N is one function. By abuse of notation we conventionally write f(n) = O(g(n)) to mean f(n) ∈ O(g(n)); under that convention n*n = O(n) would mean n*n ∈ O(n), which is a mathematically false statement, but switching the sides (O(n) = n*n) is simply undefined. A mathematically correct statement would be n = O(n*n).
You can assume all (fixed bit-length) arithmetic operations to be O(1), since there is a constant upper bound to the number of processor instructions needed. The exact number of processor instructions is irrelevant for the analysis.
Let's look at the code in more detail and annotate it:
a, b = map(int, input().split())     # O(1)
list = []                            # O(1)
for i in range(1, a+b+1):            # O(a+b) iterations, multiplied by the cost of the loop body
    if a % i == 0 and b % i == 0:    # O(1)
        list.append(i)               # O(1) (amortized)
n = len(list)                        # O(1)
print(list[n-1])                     # O(log(a+b)), the number of digits printed
So what's the overall complexity? The dominating part is indeed the loop (the stuff before and after is negligible, complexity-wise), so it's O(a+b), if you take a and b to be the input parameters. (If you instead wanted to take the length N of your input input() as the input parameter, it would be O(2^N), since a+b grows exponentially with respect to N.)
One thing to keep in mind, and you have the right idea, is that the highest-degree term takes precedence. So you can have a step that is constant, O(1), but if it happens n times, the total is O(1) * O(n) = O(n).
Your program is O(N) because the only thing really affecting the time complexity is the loop, and a simple loop like that is O(N): its cost increases linearly as N increases.
Now if you had a nested loop where both loops grew as n increased, it would be O(n^2).
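If you want to check the O(a+b) result empirically, you can count the loop iterations for inputs of increasing size and watch the count grow in step with a + b. A minimal sketch (the helper name gcd_divisors and the sample inputs are mine, added purely for illustration):
def gcd_divisors(a, b):
    # same trial-division loop as in the question, plus an iteration counter
    divisors = []
    iterations = 0
    for i in range(1, a + b + 1):
        iterations += 1
        if a % i == 0 and b % i == 0:
            divisors.append(i)
    return divisors, iterations

for a, b in [(12, 18), (120, 180), (1200, 1800)]:
    divisors, iterations = gcd_divisors(a, b)
    # the iteration count equals a + b exactly, i.e. the loop is linear in a + b
    print(a + b, iterations, divisors[-1])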

What would be the worst case run time of my two algorithms

I'm having a really hard time understanding how to calculate worst-case run times, and run times in general. Since there is a while loop, would the run time have to be n+1 because the while loop must run one additional time to check whether the condition is still valid? I've also been searching online for a good explanation of, and practice with, how to calculate these run times, but I can't seem to find anything good. A link to something like this would be very much appreciated.
def reverse1(lst):
    rev_lst = []
    i = 0
    while i < len(lst):
        rev_lst.insert(0, lst[i])
        i += 1
    return rev_lst

def reverse2(lst):
    rev_lst = []
    i = len(lst) - 1
    while i >= 0:
        rev_lst.append(lst[i])
        i -= 1
    return rev_lst
Constant factors or added values don't matter for big-O run times, so you're over-complicating this. The run time is O(n) (linear) for reverse2, and O(n**2) (quadratic) for reverse1 (because list.insert(0, x) is itself a O(n) operation, performed O(n) times).
Big-O runtime calculations are about how the algorithm behaves as the input size increases towards infinity, and the smaller factors don't matter here; O(n + 1) is the same as O(n) (as is O(5n) for that matter; as n increases, the constant multiplier of 5 is irrelevant to the change in runtime), O(n**2 + n) is still just O(n**2), etc.
Since the number of iterations is fixed for any given size of the input list for both functions, the "worst" time complexity would be the same as the "best" and the average here.
In reverse1, inserting an item into a list at index 0 costs O(n), because all the existing items have to be shifted one position over. Coupled with the while loop, which iterates once per element of the input list, the time complexity of reverse1 is O(n^2).
There's no such issue in reverse2, however, since the append method costs just O(1) (amortized) per call, so its overall time complexity is O(n).
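To see that difference empirically, you can time both functions at a few sizes; reverse1 should roughly quadruple when n doubles, while reverse2 should roughly double. A small sketch (the sizes are arbitrary, and it assumes the reverse1/reverse2 definitions from the question are in scope):
import timeit

for n in [1000, 2000, 4000, 8000]:
    lst = list(range(n))
    t1 = timeit.timeit(lambda: reverse1(lst), number=10)  # insert(0, ...) inside the loop: quadratic
    t2 = timeit.timeit(lambda: reverse2(lst), number=10)  # append inside the loop: linear
    print(n, t1, t2)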
I'm going to give you a mathematical explanation of why extra iterations and constant-time operations don't matter.
By the definition of Big-O, f(n) ∈ O(g(n)) means there exist constants k and N such that f(n) < k*g(n) for all n > N.
Consider an algorithm with runtime represented as f(n) = 10000n + 15000000. One way to simplify this is to factor out the n: f(n) = n(10000 + 15000000/n). For the worst-case runtime you only care about the behaviour for very large values of n, and in that factored form, as n gets really big, 15000000/n approaches 0, so the coefficient of n approaches 10000. Therefore, for a large enough threshold N there exists a constant k such that f(n) < k*n for all n > N, for example k = 10001. Hence f(n) ∈ O(n): the algorithm has linear runtime.
With that being said, you don't need to worry about constant differences in your runtime, even if you loop n + 1 times. The only part that matters (for polynomial runtimes) is the highest power of n. Both of your functions loop O(n) times, and even if you iterated n + 1000 times, the loop count would still be O(n).
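As a quick numerical check of that argument, you can watch f(n)/n approach the leading coefficient 10000 as n grows (the function below is just the example runtime from this answer, not real measured data):
def f(n):
    return 10000 * n + 15000000

for n in [10**3, 10**5, 10**7, 10**9]:
    # the ratio tends to 10000, so f(n) < 10001*n once n is large enough
    print(n, f(n) / n)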

Python exponentiation of two lists performance

Is there a way to compute the Cobb-Douglas utility function faster in Python? I run it millions of times, so a speed increase would help. The function raises the elements of quantities_list to the power of the corresponding elements of exponents_list, and then multiplies all the resulting elements together.
import time

n = 10
quantities = range(n)
exponents = range(n)

def Cobb_Douglas(quantities_list, exponents_list):
    number_of_variables = len(quantities_list)
    value = 1
    for variable in xrange(number_of_variables):
        value *= quantities_list[variable] ** exponents_list[variable]
    return value

t0 = time.time()
for i in xrange(100000):
    Cobb_Douglas(quantities, exponents)
t1 = time.time()
print t1 - t0
Iterators are your friend. I got a 28% speedup on my computer by switching your loop to this:
for q, e in itertools.izip(quantities_list, exponents_list):
    value *= q ** e
I also got similar results when switching your loop to a functools.reduce call, so it's not worth providing a code sample.
In general, numpy is the right choice for fast arithmetic operations, but numpy's largest integer type is 64 bits, which won't hold the result for your example. If you're using a different numeric range or arithmetic type, numpy is king:
import numpy as np

quantities = np.array(quantities, dtype=np.int64)
exponents = np.array(exponents, dtype=np.int64)

def Cobb_Douglas(quantities_list, exponents_list):
    return np.product(np.power(quantities_list, exponents_list))
# result: 2649120435010011136
# actual: 21577941222941856209168026828800000
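For reference, the exact value quoted as "actual" above can be reproduced with plain Python integers, which have arbitrary precision; a small sanity-check sketch (mine, not part of the original answer, and not a speed suggestion):
from functools import reduce

quantities = range(10)
exponents = range(10)

# plain Python ints never overflow, so this matches the "actual" value above
exact = reduce(lambda acc, qe: acc * qe[0] ** qe[1],
               zip(quantities, exponents), 1)
print(exact)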
Couple of suggestions:
Use Numpy
Vectorize your code
If quantities are large and nothing's going to be zero or negative, work in log-space.
I got about a 15% speedup locally using:
def np_Cobb_Douglas(quantities_list, exponents_list):
    return np.product(np.power(quantities_list, exponents_list))
And about 40% using:
def np_log_Cobb_Douglas(quantities_list, exponents_list):
    # prod(q ** e) == exp(sum(e * log(q))); only the quantities are logged
    return np.exp(np.dot(np.log(quantities_list), exponents_list))
Last but not least, there should be some scaling of your Cobb-Douglas parameters so you don't run into overflow errors (if I'm remembering my intro macro correctly).
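As an illustration of the log-space suggestion, here is a hedged sketch that normalizes the exponents to sum to 1 (a common Cobb-Douglas convention and my assumption here, not something stated in the question) so the result stays well inside floating-point range:
import numpy as np

quantities = np.arange(1, 11, dtype=float)   # must be strictly positive for log-space
exponents = np.arange(1, 11, dtype=float)
exponents = exponents / exponents.sum()      # normalize the exponents to sum to 1

def log_space_cobb_douglas(q, e):
    # prod(q_i ** e_i) == exp(sum(e_i * log(q_i)))
    return np.exp(np.dot(np.log(q), e))

print(log_space_cobb_douglas(quantities, exponents))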

Fourier Transform: Computational Scaling as a function of vector length

If I were to use the following function to compute a discrete Fourier transform, how would I show that the computation scales as O(N^2) as a function of the vector length?
import numpy as np

def dft(y):
    N = len(y)
    c = np.zeros(N//2+1, complex)
    for k in range(N//2+1):
        for n in range(N):
            c[k] += y[n]*np.exp(-2j*np.pi*k*n/N)   # k-th Fourier coefficient
    return c
From what I understand, an algorithm that scales as O(N^2) is quadratic, and the run time of the loops is proportional to the square of N. If N were doubled, then the run time would increase by N*N.
My first thought would be to run a program where I transform an array of values whose length is equal to N, and then double these values (doubling N) and show the run time difference between the two is N^2. Does this make any sense (or is there a different/better way)? If so, how would I measure the run time in Python?
Thank you.
The runtime? You could just make a counter and increase it by 1 each time something is done. So, inside your second for loop just increment the counter, and when the function finishes print it. That shows the number of calculations needed.
def dft(y):
    N = len(y)
    c = np.zeros(N//2+1, complex)
    count = 0                       # counts how many times the coefficient line runs
    for k in range(N//2+1):
        for n in range(N):
            c[k] += y[n]*np.exp(-2j*np.pi*k*n/N)
            count += 1
    print(count)
    return c
Depending a little on exactly what time you want to measure, you could use time.clock (which I think is closest to what you want here: it measures the processor time your program actually got) or datetime.datetime.now.
You just get the time before and after your calculation is done. Something like:
import time

t0 = time.clock()
dft(y)
t1 = time.clock()
print("Time elapsed: {0}".format(t1-t0))
Note that what you're looking for when doubling N is a quadrupling of the time.
The line computing the coefficients is repeated (N/2 + 1) * N times, which is roughly N^2/2. Then you need to show there is a constant M and a value N0 such that (N/2 + 1) * N <= M * N^2 for all N > N0. Then you've shown the run time is O(N^2).
The timeit library is made for exactly this purpose.
https://docs.python.org/2/library/timeit.html
from timeit import timeit

for i in [10, 100, 1000]:        # extend with larger sizes as needed
    y = generate_array(i)        # however you build an input vector of length i
    print(i, timeit(lambda: dft(y), number=10))
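To tie this back to the "doubling N should quadruple the time" check mentioned above, here is a small sketch of my own (arbitrary sizes, assuming the dft from the question is defined) that prints the ratio of run times as N doubles:
import timeit
import numpy as np

previous = None
for N in [64, 128, 256, 512]:
    y = np.random.rand(N)
    t = timeit.timeit(lambda: dft(y), number=3) / 3
    if previous is not None:
        # for an O(N^2) algorithm this ratio should approach 4
        print('N={}: {:.4f}s, ratio vs previous: {:.1f}'.format(N, t, t / previous))
    previous = t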

Python Time Complexity (run-time)

def f2(L):
    sum = 0
    i = 1
    while i < len(L):
        sum = sum + L[i]
        i = i * 2
    return sum
Let n be the size of the list L passed to this function. Which of the following most accurately describes how the runtime of this function grows as n grows?
(a) It grows linearly, like n does.
(b) It grows quadratically, like n^2 does.
(c) It grows less than linearly.
(d) It grows more than quadratically.
I don't understand how you figure out the relationship between the runtime of the function and the growth of n. Can someone please explain this to me?
ok, since this is homework:
this is the code:
def f2(L):
    sum = 0
    i = 1
    while i < len(L):
        sum = sum + L[i]
        i = i * 2
    return sum
It obviously depends on len(L).
So let's see what each line costs:
sum = 0
i = 1
# [...]
return sum
Those are obviously constant time, independent of L.
In the loop we have:
sum = sum + L[i] # time to lookup L[i] (`timelookup(L)`) plus time to add to the sum (obviously constant time)
i = i * 2 # obviously constant time
And how many times is the loop executed?
It's obviously dependent on the size of L. Let's call that loops(L).
So we get an overall complexity of
loops(L) * (timelookup(L) + const)
Being the nice guy I am, I'll tell you that list lookup is constant time in Python, so it boils down to
O(loops(L)) (constant factors ignored, as the big-O convention implies)
And how often do you loop, based on the len() of L?
(a) as often as there are items in the list, (b) quadratically as often as there are items in the list,
(c) less often than there are items in the list, or (d) more often than (b)?
I am not a computer science major and I don't claim to have a strong grasp of this kind of theory, but I thought it might be relevant for someone from my perspective to try and contribute an answer.
Your function will always take time to execute, and if it is operating on a list argument of varying length, then the time it takes to run that function will be relative to how many elements are in that list.
Let's assume it takes 1 unit of time to process a list of length 1. What the question is asking is the relationship between the list getting bigger and the increase in the time it takes this function to execute.
This link breaks down some basics of Big O notation: http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
If it were O(1) complexity (which is not actually one of your A-D options), it would mean the complexity never grows regardless of the size of L. Obviously, in your example, the while loop depends on a counter i that grows in relation to the length of L. I would focus on the fact that i is being multiplied, which indicates the relationship between how long it takes to get through that while loop and the length of L. Basically, try to compare how many iterations the while loop needs at various values of len(L); that will determine your complexity. One unit of time can be one iteration through the while loop.
Hopefully I have made some form of contribution here, with my own lack of expertise on the subject.
Update
To clarify based on the comment from ch3ka, if you were doing more than what you currently have inside your while loop, then you would also have to consider the added complexity of each iteration. But because your list lookup L[i] is constant complexity, as is the math that follows it, we can ignore those in terms of the overall complexity.
Here's a quick-and-dirty way to find out:
import matplotlib.pyplot as plt

def f2(L):
    sum = 0
    i = 1
    times = 0
    while i < len(L):
        sum = sum + L[i]
        i = i * 2
        times += 1          # track how many times the loop gets called
    return times

def main():
    i = range(1200)
    f_i = [f2([1]*n) for n in i]
    plt.plot(i, f_i)
    plt.show()

if __name__ == "__main__":
    main()
...which results in a plot whose horizontal axis is the size of L and whose vertical axis is how many times the function loops; the big-O behaviour should be pretty obvious from it.
Consider what happens with an input of length n=10. Now consider what happens if the input size is doubled to 20. Will the runtime double as well? Then it's linear. If the runtime grows by factor 4, then it's quadratic. Etc.
When you look at the function, you have to determine how the size of the list affects the number of loop iterations.
In your specific situation, let's increment n and see how many times the while loop runs:
n = 0, loop = 0 times
n = 1, loop = 0 times
n = 2, loop = 1 time
n = 3, loop = 2 times
n = 4, loop = 2 times
See the pattern? Now answer your question, does it:
(a) It grows linearly, like n does. (b) It grows quadratically, like n^2 does.
(c) It grows less than linearly. (d) It grows more than quadratically.
Check out Hugh's answer for an empirical result :)
It's O(log(len(L))), as list lookup is a constant-time operation, independent of the size of the list.
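To double-check the logarithmic growth empirically, you can count the loop iterations as n grows and see that doubling n adds only one more iteration; a small sketch (it mirrors the loop from f2 rather than calling it):
import math

def loop_count(n):
    # mirrors `while i < len(L)` from f2, with len(L) == n
    count, i = 0, 1
    while i < n:
        i *= 2
        count += 1
    return count

for n in [1, 2, 4, 8, 16, 1024, 2048]:
    # doubling n adds one iteration, i.e. the count grows like log2(n)
    print(n, loop_count(n), math.ceil(math.log2(n)))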
