Is there any (time/space complexity) disadvantage in writing recursive functions in python using list slicing?
From what I've seen on the internet, people tend to use lists and low/high variables in recursive functions, but to me it seems more natural to call a function recursively with sliced lists.
Here are two implementations of binary search as examples of what I'm describing:
List slicing
def binSearch(arr,k):
    if len(arr) < 1:
        return -1
    mid = len(arr) // 2
    if arr[mid] == k:
        return mid
    elif arr[mid] < k:
        val = binSearch(arr[mid+1:],k)
        if val == -1:
            return -1
        else:
            return mid + 1 + val
    else:
        return binSearch(arr[:mid],k)
Indexes
def binSearch2(arr,k,low,high):
    if low > high:
        return -1
    mid = (high+low) // 2
    if arr[mid] == k:
        return mid
    elif arr[mid] < k:
        return binSearch2(arr,k,mid+1,high)
    else:
        return binSearch2(arr,k,low,mid-1)
Slices plus recursion is, in general, a double-whammy of undesirability in Python. In this case, recursion is acceptable, but the slicing isn't. If you're in a rush, scroll to the bottom of this post and look at the benchmark.
Let's talk about recursion first. Python wasn't designed to support recursion well, or at least not to the extent that functional languages do, where a "natural" head/tail decomposition (car/cdr in Lisp) takes the place of slicing. This generalizes to any imperative language without tail call support or first-class linked lists that allow accessing the tail in O(1).
Recursion is inappropriate for any linear algorithm in Python because the default CPython recursion limit is around 1000, meaning that if the structure you're processing has more than 1000 elements (a very small number), your program will fail. There are dangerous hacks to increase the limit, but this just kicks the can down the road to other trivially small limits and risks ungraceful interpreter termination.
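The "dangerous hacks" referred to here are, presumably, calls to sys.setrecursionlimit(); a minimal sketch of what that looks like, and why it only postpones the problem:

import sys

print(sys.getrecursionlimit())  # typically 1000 on CPython

# Raising the limit lets deeper recursion run, but a deep enough call
# chain can still exhaust the underlying C stack and crash the
# interpreter instead of raising a clean RecursionError.
sys.setrecursionlimit(10_000)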
For a binary search, recursion is fine, because you have an O(log(n)) algorithm, so you can comfortably handle pretty much any size structure. See this answer for a deeper treatment of when recursion is and isn't appropriate in Python and why it's a second-class citizen by design. Python is not a functional language, and never will be, according to its creator.
There are also few problems that actually require recursion. In this case, Python has a builtin module, bisect, that should cover the rare cases where you need a binary search. For the times bisect doesn't fit your needs, writing your algorithm iteratively is arguably no less intuitive than recursion (and, I'd argue, fits more naturally into Python's iteration-first paradigm).
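As a rough sketch of what the bisect approach could look like for the search in the question (assuming arr is sorted, as binary search requires; the function name is mine):

from bisect import bisect_left

def bin_search(arr, k):
    """Return the index of k in the sorted list arr, or -1 if absent."""
    i = bisect_left(arr, k)  # leftmost position where k could be inserted
    if i < len(arr) and arr[i] == k:
        return i
    return -1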
Moving on to slicing: although binary search is one of the rare cases where recursion is acceptable, slices are absolutely not appropriate here. Slicing the list is an O(n) copy operation, which totally defeats the purpose of binary searching. You might as well use in, which does a linear search for the same complexity cost as a single slice. Adding slicing here makes the code easier to write, but drags the time complexity back up to O(n).
Slicing also incurs a totally unnecessary O(n) space cost, not to mention garbage collection and memory allocation overhead, a potentially painful constant-factor cost.
Let's benchmark and see for ourselves. I used this boilerplate with one change to the namespace:
dict(arr=random.sample(range(n), k=n), k=n, low=0, high=n-1)
$ py test.py --n 1000000
------------------------------
n = 1000000, 100 trials
------------------------------
def binSearch(arr,k):
time (s) => 17.658957500000042
------------------------------
def binSearch2(arr,k,low,high):
time (s) => 0.01235299999984818
------------------------------
So for n=1000000 (not a large number at all), slicing is about 1400 times slower than indices. It just gets worse on larger numbers.
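The linked boilerplate isn't reproduced here, but a rough timeit-based approximation of the same comparison (my own sketch, not the original harness) would be something like:

import random
import timeit

n = 10**6
arr = random.sample(range(n), k=n)
k = n  # worst case: k is not in arr

print(timeit.timeit(lambda: binSearch(arr, k), number=100))
print(timeit.timeit(lambda: binSearch2(arr, k, 0, n - 1), number=100))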
Minor nitpicks:
Use snake_case, not camelCase per PEP-8. Format your code with black.
Arrays in Python refer to something other than the type([]) => <class 'list'> you're probably using here. I suggest the name lst for a generic list parameter, or it for a generic iterable.
Related
Is it possible to convert a recursive function like the one below to a completely iterative function?
def fact(n):
    if n <= 1:
        return
    for i in range(n):
        fact(n-1)
        doSomethingFunc()
It seems pretty easy to do given extra space like a stack or a queue, but I was wondering if we can do this in O(1) space complexity?
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Well, generally speaking, no.
I mean, the space taken on the stack by recursive functions is not just an inconvenience of this programming style. It is the memory needed for the computation.
So, sure, for a lot of algorithms, that space is unnecessary and could be spared. For a classical factorial, for example,
def fact(n):
    if n <= 1:
        return 1
    else:
        return n*fact(n-1)
the stacking of all the n, n-1, n-2, ..., 1 arguments is not really necessary.
So, sure, you can find an implementation that gets rid of it. But that is an optimization (for example, in the specific case of tail recursion; but I am pretty sure you added that doSomethingFunc to make clear that you don't want to focus on that specific case).
You cannot assume in general that an algorithm that doesn't need all those values exists, recursive or iterative. Otherwise, that would be saying that every algorithm has an O(1) space complexity version.
Example: base representation of a positive integer
def baseRepr(num, base):
    if num >= base:
        s = baseRepr(num//base, base)
    else:
        s = ''
    return s + chr(48+num%base)
Not claiming it is optimal, or even well written.
But the stacking of the arguments is needed. It is how you implicitly store the digits, which you compute in reverse order.
An iterative function would also need some memory to store those digits, since you have to compute the last one first.
Well, I am pretty sure that for this simple example you could find a way to compute from left to right, for example using a log computation to know the number of digits in advance, or something like that. But that's not the point. Just imagine that no algorithm is known other than the one computing digits from right to left. Then you need to store them: either implicitly on the stack using recursion, or explicitly in allocated memory. So again, memory used on the stack is not just an inconvenience of recursion. It is the way a recursive algorithm stores things that would otherwise be stored explicitly in an iterative algorithm.
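To make that concrete, here is a minimal iterative sketch of the same right-to-left algorithm (my own version, not part of the question): the digits that the recursive version keeps on the call stack now live in an explicit list, so the O(number of digits) space is still there.

def baseReprIter(num, base):
    # Compute digits right to left, storing them explicitly
    # instead of implicitly on the call stack.
    digits = []
    while num >= base:
        digits.append(chr(48 + num % base))
        num //= base
    digits.append(chr(48 + num % base))
    return ''.join(reversed(digits))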
Note, we cannot do something like:

def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()

since it takes a non-constant amount of memory to store the result of factorial(n).
Yes.
I was wondering if we can do this in O(1) space complexity?
So, no.
I have a doubt regarding time complexity with recursion.
Let's say I need to find the largest number in a list using recursion. What I came up with is this:
def maxer(s):
    if len(s) == 1:
        return s.pop(0)
    else:
        if s[0] > s[1]:
            s.pop(1)
        else:
            s.pop(0)
        return maxer(s)
Now to test the function with many inputs and find out its time complexity, I called the function as follows:
import time
import matplotlib.pyplot as plt

def timer_v3(fx, n):
    x_axis = []
    y_axis = []
    for x in range(1, n):
        z = [x for x in range(x)]
        start = time.time()
        fx(z)
        end = time.time()
        x_axis.append(x)
        y_axis.append(end - start)
    plt.plot(x_axis, y_axis)
    plt.show()
Is there a fundamental flaw in checking complexity like this as a rough estimate? If so, how can we rapidly check the time complexity?
Assuming s is a list, your function's time complexity is O(n²). When you pop from the start of the list, the remaining elements have to be shifted left one space to "fill in" the gap; that takes O(n) time, and your function pops from the start of the list O(n) times. So the overall complexity is O(n * n) = O(n²).
Your graph doesn't look like a quadratic function, though, because the definition of O(n²) means that it only has to have quadratic behaviour for n > n₀, where n₀ is an arbitrary number. 1,000 is not a very large number, especially in Python, because running times for smaller inputs are mostly interpreter overhead, and the O(n) pop operation is actually very fast because it's written in C. So it's not only possible, but quite likely that n < 1,000 is too small to observe quadratic behaviour.
The problem is, your function is recursive, so it cannot necessarily be run for large enough inputs to observe quadratic running time. Too-large inputs will overflow the call stack, or use too much memory. So I converted your recursive function into an equivalent iterative function, using a while loop:
def maxer(s):
    while len(s) > 1:
        if s[0] > s[1]:
            s.pop(1)
        else:
            s.pop(0)
    return s.pop(0)
This is strictly faster than the recursive version, but it has the same time complexity. Now we can go much further; I measured the running times up to n = 3,000,000.
This looks a lot like a quadratic function. At this point you might be tempted to say, "ah, @kaya3 has shown me how to do the analysis right, and now I see that the function is O(n²)." But that is still wrong. Measuring the actual running times - i.e. dynamic analysis - still isn't the right way to analyse the time complexity of a function. However large an n we test, n₀ could still be bigger, and we'd have no way of knowing.
So if you want to find the time complexity of an algorithm, you have to do it by static analysis, like I did (roughly) in the first paragraph of this answer. You don't save yourself time by doing a dynamic analysis instead; it takes less than a minute to read your code and see that it does an O(n) operation O(n) times, if you have the knowledge. So, it is definitely worth developing that knowledge.
The following is Python code to find the sum of a list of elements using binary recursion, from the book by Goodrich and Tamassia.
def binary_sum(S, start, stop):
    """Return the sum of the numbers in implicit slice S[start:stop]."""
    if start >= stop:            # zero elements in slice
        return 0
    elif start == stop-1:        # one element in slice
        return S[start]
    else:                        # two or more elements in slice
        mid = (start + stop) // 2
        return binary_sum(S, start, mid) + binary_sum(S, mid, stop)
So it is stated in the book that:
"The size of the range is divided in half at each recursive call, and
so the depth of the recursion is 1+logn. Therefore, binary sum uses O(logn)
amount of additional space. However, the running time of
binary sum is O(n), as there are 2n−1 function calls, each requiring constant time."
From what I understand it is saying that the space complexity of the algorithm is O(logn). But since it is making 2n-1 function calls, wouldn't python have to keep 2n-1 different activation records for each function? And therefore, the space complexity should be O(n). What am I missing?
it is making 2n-1 function calls
Not all of them at the same time. A function call has a beginning and an end.
wouldn't python have to keep 2n-1 different activation records for each function?
Only active activation records need to occupy space. There are O(recursion_depth) of those at any given time.
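To see this for yourself, you could instrument the function with a simple depth counter (a rough sketch, not from the book) and compare the peak depth against the total number of calls:

calls = 0
depth = 0
max_depth = 0

def binary_sum_traced(S, start, stop):
    global calls, depth, max_depth
    calls += 1
    depth += 1
    max_depth = max(max_depth, depth)  # activation records alive right now
    try:
        if start >= stop:
            return 0
        elif start == stop - 1:
            return S[start]
        else:
            mid = (start + stop) // 2
            return binary_sum_traced(S, start, mid) + binary_sum_traced(S, mid, stop)
    finally:
        depth -= 1  # this activation record is gone once the call returns

binary_sum_traced(list(range(1024)), 0, 1024)
print(calls)      # 2*1024 - 1 = 2047 calls in total
print(max_depth)  # but only 1 + log2(1024) = 11 frames live at any one time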
There is a very good explanation of this question on Space complexity analysis of binary recursive sum algorithm
The space complexity of a recursive algorithm depends on the depth of the recursion, which here is log(n):
Why does the depth of recursion affect the space required by an algorithm? Each recursive function call usually needs to allocate some additional memory (for temporary data) to process its arguments. At the very least, each such call has to store some information about its parent call, just to know where to return after finishing. Let's imagine you are performing a task, and you need to perform a sub-task inside this first task, so you need to remember (or write down on a piece of paper) where you stopped in the first task to be able to continue it after you finish the sub-task. And so on: a sub-sub-task inside a sub-task... So, a recursive algorithm will require space O(depth of recursion).
The space complexity of an algorithm does not depend on the number of function calls; it depends on the depth of the recursion. In the above algorithm, the depth of the recursion is O(log n).
I'm trying to get the 15 most relevant items for each user, but every function I tried took an eternity (more than 6 hours; I shut it down after that...).
I have 418 unique users and 3718 unique items.
The U2tfifd dict also has 418 entries, and there are 32645 words in tfidf_feature_names.
The shape of my interactions_full_df is (40733, 3).
I tried:
def index_tfidf_users(user_id):
    return [users for users in U2tfifd[user_id].flatten().tolist()]

def get_relevant_items(user_id):
    return sorted(zip(tfidf_feature_names, index_tfidf_users(user_id)), key=lambda x: -x[1])[:15]

def get_tfidf_token(user_id):
    return [words for words, values in get_relevant_items(user_id)]
then:

interactions_full_df["tags"] = interactions_full_df["user_id"].apply(lambda x: get_tfidf_token(x))
or
def get_tfidf_token(user_id):
    tags = []
    v = sorted(zip(tfidf_feature_names, U2tfifd[user_id].flatten().tolist()), key=lambda x: -x[1])[:15]
    for words, values in v:
        tags.append(words)
    return tags
or
def get_tfidf_token(user_id):
    v = sorted(zip(tfidf_feature_names, U2tfifd[user_id].flatten().tolist()), key=lambda x: -x[1])[:15]
    tags = [words for words in v]
    return tags
U2tfifd is a dict with keys = user_id, values = an array
There are several things going on which could cause poor performance in your code. The impact of each of these will depend on things like your Python version (2.x or 3.x), your RAM speed, and whatnot. You'll need to experiment and benchmark the various potential improvements yourself.
1. TFIDF Sparsity (~10x speedup depending on sparsity)
One glaring potential problem is that TFIDF naturally returns sparse data (e.g. a paragraph doesn't use anywhere near as many unique words as an entire book), and working with dense structures like numpy arrays is a strange choice when the data is probably zero almost everywhere.
If you'll be doing this same analysis in the future, it might be helpful to make/use a version of TFIDF with sparse array outputs so that when you extract your tokens you can skip over the zero values. This would likely have the secondary benefit of the entire sparse array for each user fitting in the cache and preventing costly RAM access in your sorts and other operations.
It might be worth sparsifying your data anyway. On my potato, a quick benchmark on data which should be similar to yours indicates that the process can be done in ~30s. The process replaces much of the work you're doing with a highly optimized routine coded in C and wrapped for use in Python. The only real cost is the second pass through the non-zero entries, but unless that pass is pretty efficient to begin with you should be better off working with sparse data.
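As a rough sketch of what "sparsifying" could look like here (assuming scipy is available and each U2tfifd[user_id] is a dense 1-D numpy array; the function and variable names are mine, for illustration only):

import numpy as np
from scipy import sparse

def top_nonzero_tokens(user_vector, feature_names, k=15):
    # Convert the dense TFIDF row to CSR so only non-zero entries are touched.
    row = sparse.csr_matrix(user_vector.reshape(1, -1))
    data, cols = row.data, row.indices  # non-zero values and their column indices
    if data.size == 0:
        return []
    top = cols[np.argsort(data)[::-1][:k]]  # columns of the k largest non-zero values
    return [feature_names[i] for i in top]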
2. Duplicated Efforts and Memoization (~100x speedup)
If U2tfifd has 418 entries and interactions_full_df has 40733 rows then at least 40315 (or 99.0%) of your calls to get_tfidf_token() are wasted since you've already computed the answer. There are tons of memoization decorators out there, but you don't need anything very complicated for your use case.
def memoize(f):
    _cache = {}
    def _f(arg):
        if arg not in _cache:
            _cache[arg] = f(arg)
        return _cache[arg]
    return _f

@memoize
def get_tfidf_token(user_id):
    ...
Breaking this down, the function memoize() returns another function. The behavior of that function is to check a local cache for the expected return value before computing it and storing it if necessary.
The syntax @memoize... is short for something like the following.
def uncached_get_tfidf_token(user_id):
    ...

get_tfidf_token = memoize(uncached_get_tfidf_token)
The @ symbol is used to signify that we want the modified, or decorated, version of get_tfidf_token() instead of the original. Depending on your application, it might be beneficial to chain decorators together.
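For what it's worth, the standard library also ships a general-purpose memoization decorator, functools.lru_cache, which would likely work here too (a sketch, assuming user_id values are hashable):

from functools import lru_cache

@lru_cache(maxsize=None)  # cache the result for every distinct user_id seen
def get_tfidf_token(user_id):
    ...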
3. Vectorized Operations (varying speedup, benchmarking necessary)
Python doesn't really have a notion of primitive types like other languages, and even integers take 24 bytes in memory on my machine. Lists aren't usually packed, so you can incur costly cache misses as you're plowing through them. No matter how little work the CPU is doing for sorting and whatnot, clobbering a whole new chunk of memory to turn your array into a list and only using that brand new, expensive memory once is going to incur a performance hit.
Many of the things you are trying to do have fast (SIMD vectorized, parallelized, memory-efficient, packed memory, and other fun optimizations) numpy equivalents AND avoid unnecessary array copies and type conversions. It seems you're already using numpy anyway, so you won't have any extra imports or dependencies.
As one example, zip() creates another list in memory in Python 2.x and still does unnecessary work in Python 3.x when you really only care about the indices of tfidf_feature_names. To compute those indices, you can use something like the following, which avoids an unnecessary list creation and uses an optimized routine with slightly better asymptotic complexity as an added bonus.
def get_tfidf_token(user_id):
    temp = U2tfifd[user_id].flatten()
    ind = np.argpartition(temp, len(temp) - 15)[-15:]
    return tfidf_feature_names[ind]  # works if tfidf_feature_names is a numpy array
    # return [tfidf_feature_names[i] for i in ind]  # always works
Depending on the shape of U2tfifd[user_id], you could avoid the costly .flatten() computation by passing an axis argument to np.argsort() and flattening the 15 obtained indices instead.
4. Bonus
The sorted() function supports a reverse argument so that you can avoid extra computations like throwing a negative on every value. Simply use
sorted(..., reverse=True)
Even better, since you really don't care about the sort itself but just the 15 largest values, you can get away with
sorted(...)[-15:]
to index the largest 15 instead of reversing the sort and taking the smallest 15. That doesn't really matter if you're using a better function for the application like np.argpartition(), but it could be helpful in the future.
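A tiny illustration of the two forms (note that the tail slice comes back in ascending order, so reverse it if the ordering matters):

vals = [5, 3, 9, 1, 7]

sorted(vals, key=lambda x: -x)[:3]  # [9, 7, 5], negating every value
sorted(vals, reverse=True)[:3]      # [9, 7, 5], same result without the negations
sorted(vals)[-3:]                   # [5, 7, 9], same values, ascending order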
You can also avoid some function calls by replacing .apply(lambda x : get_tfidf_token(x)) with .apply(get_tfidf_token) since get_tfidf_token is already a function which has the intended behavior. You don't really need the extra lambda.
As far as I can see though, most additional gains are fairly nitpicky and system-dependent. You can make most things faster with Cython or straight C with enough time for example, but you already have reasonably fast routines which do what you want out of the box. The extra engineering effort probably isn't worth any potential gains.
In the examples below, both functions have roughly the same number of procedures.
def lenIter(aStr):
    count = 0
    for c in aStr:
        count += 1
    return count
or
def lenRecur(aStr):
    if aStr == '':
        return 0
    return 1 + lenRecur(aStr[1:])
Is picking between the two techniques a matter of style, or is there a more efficient method here?
Python does not perform tail call optimization, so the recursive solution can hit a stack overflow on long strings. The iterative method does not have this flaw.
That said, len(str) is faster than both methods.
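As a quick illustration of that limit (exact numbers depend on your CPython build and recursion limit):

# With CPython's default recursion limit (around 1000), the recursive
# version fails on strings that are not even particularly long.
try:
    lenRecur('a' * 5000)
except RecursionError:
    print("lenRecur blew the recursion limit")

print(lenIter('a' * 5000))  # 5000
print(len('a' * 5000))      # 5000, and faster than both hand-written versions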
This is not correct: 'functions have roughly the same number of procedures'. You probably mean that these functions require roughly the same number of operations or, more formally, that they have the same computational time complexity.
While both have the same computational time complexity, the recursive one requires additional CPU instructions to create a new procedure instance (stack frame) for each call, to switch contexts, and to clean up after every return. These operations do not increase the theoretical computational complexity, but in most real-life implementations they add significant overhead.
The recursive method will also have higher space complexity, as each new instance of the recursively called procedure needs new storage for its data.
Surely the first approach is more efficient, as Python doesn't have to perform a lot of function calls and string slicing; each of those operations involves further work that is costly for the Python interpreter and may cause problems when dealing with long strings.
The more Pythonic way is to use the built-in len() function to get the length of a string.
You can also inspect the code object to see the stack size required by each function:
>>> lenRecur.__code__.co_stacksize
4
>>> lenIter.__code__.co_stacksize
3