I've had the following assignment: Given a list of n integers, each integer in the list is unique and larger than 0. I am also given a number K - which is an integer larger than 0.
List slicing of any kind is not allowed
I need to check whether there is a subset that sums up to K.
e.g., for the list [1,4,8] and K=5, I return True, because we have the subset {1,4}.
Now I need to implement this using recursion:
So I did, but then I needed to add memoization:
And I wonder what the difference is between these two functions:
I mean, both seem to implement memoization, and the second should work better, but it doesn't. I'd really appreciate some help :)
def subsetsum_mem(L, k):
    '''
    fill-in your code below here according to the instructions
    '''
    sum_dict={}
    return s_rec_mem(L,0,k,sum_dict)

def s_rec_mem(L, i, k, d):
    '''
    fill-in your code below here according to the instructions
    '''
    if(k==0):
        return True
    elif(k<0 or i==len(L)):
        return False
    else:
        if k not in d:
            res_k=s_rec_mem(L,i+1,k-L[i],d) or s_rec_mem(L,i+1,k,d)
            d[k]=res_k
            return res_k
def subsetsum_mem2(L, k):
    '''
    fill-in your code below here according to the instructions
    '''
    sum_dict={}
    return s_rec_mem2(L,0,k,sum_dict)

def s_rec_mem2(L, i, k, d):
    '''
    fill-in your code below here according to the instructions
    '''
    if(k==0):
        return True
    elif(k<0 or i==len(L)):
        return False
    else:
        if k not in d:
            res_k=s_rec_mem2(L,i+1,k-L[i],d) or s_rec_mem2(L,i+1,k,d)
            d[k]=res_k
            return res_k
        else:
            return d[k]
You have two problems with your memoization.
First, you're using just k as the cache key. But the function does different things for different values of i, and you're ignoring i in the cache, so you're going to end up returning the value computed for (L, 9, 1, d) when called with (L, 1, 1, d).
Second, in s_rec_mem only, you never return d[k]; if the key is already present, you just fall off the end and return None (which is falsey).
So, the second one does come closer to working—but it still doesn't actually work.
You could fix it like this:
def s_rec_mem2(L, i, k, d):
    '''
    fill-in your code below here according to the instructions
    '''
    if(k==0):
        return True
    elif(k<0 or i==len(L)):
        return False
    else:
        if (i, k) not in d:
            res_k=s_rec_mem2(L,i+1,k-L[i],d) or s_rec_mem2(L,i+1,k,d)
            d[i, k]=res_k
            return res_k
        else:
            return d[i, k]
… or by just using lru_cache, either by passing down tuple(L) instead of L (because tuples, unlike lists, can be hashed as keys, and your recursive function doesn't care what kind of sequence it gets), or by making it a local function that sees L via closure instead of getting passed it as a parameter.
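For instance, here is a minimal sketch of the closure variant (my own illustration of the idea, assuming Python 3's functools; the name subsetsum_mem3 is hypothetical):

from functools import lru_cache

def subsetsum_mem3(L, k):
    @lru_cache(maxsize=None)   # caches on (i, k) automatically
    def rec(i, k):
        if k == 0:
            return True
        elif k < 0 or i == len(L):
            return False
        return rec(i + 1, k - L[i]) or rec(i + 1, k)
    return rec(0, k)

Because rec is a local function, L is captured by closure and never needs to be hashable.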
Finally, from a quick glance at your code:
It looks like you're only ever going to evaluate s_rec_mem at most twice on each set of arguments (assuming you correctly cache on i, k rather than just k), which means memoization can only provide a 2x constant speedup at best. To get any more than that, you need to rethink your caching or your algorithm.
You're only memoizing within each separate top-level run, but switching to lru_cache on tuple(L), i, k means you're memoizing across all runs until you restart the program (or REPL session)—so the first test may take a long time, but subsequent runs on the same L (even with a different k) could benefit from previous caching.
You seem to be trying to solve a minor variation of the subset sum problem. That problem in the general case is provably NP-complete, which means it's guaranteed to take exponential time. And your variation seems to be if anything harder than the standard problem, not easier. If so, not only is your constant-factor speedup not going to have much benefit, there's really nothing you can do that will do better. In real life, people who solve equivalent problems usually use either optimization (e.g., via dynamic programming) or approximation to within an arbitrary specified limit, both of which allow for polynomial (or at least pseudo-polynomial) solutions in most cases, but can't solve all cases. Or, alternatively, there are subclasses of inputs for which you can solve the problem in polynomial time, and if you're lucky, you can prove your inputs fall within one of those subclasses. If I've guessed right on what you're doing, and you want to pursue any of those three options, you'll need to do a bit of research. (Maybe Wikipedia is good enough, or maybe there are good answers here or on the compsci or math SE sites to get you started.)
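For reference, a minimal sketch of the pseudo-polynomial dynamic-programming approach mentioned above (the standard subset-sum DP, not the recursive formulation your assignment asks for; the function name is mine):

def subsetsum_dp(L, k):
    # reachable holds every subset sum seen so far, capped at k;
    # runtime is O(len(L) * k), i.e. pseudo-polynomial in k.
    reachable = {0}
    for x in L:
        reachable |= {s + x for s in reachable if s + x <= k}
    return k in reachable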
Related
Is it possible to convert a recursive function like the one below to a completely iterative function?
def fact(n):
    if n <= 1:
        return
    for i in range(n):
        fact(n-1)
        doSomethingFunc()
It seems pretty easy to do given extra space like a stack or a queue, but I was wondering if we can do this in O(1) space complexity?
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Well, generally speaking no.
I mean, the space taken in the stack by recursive functions is not just an inconvenience of this programming style. It is the memory needed for the computation.
So, sure, for a lot of algorithms that space is unnecessary and could be spared. For a classical factorial, for example,
def fact(n):
    if n<=1:
        return 1
    else:
        return n*fact(n-1)
the stacking of all the n, n-1, n-2, ..., 1 arguments is not really necessary.
So, sure, you can find an implementation that gets rid of it. But that is an optimization (for example, in the specific case of tail recursion; though I am pretty sure you added that doSomethingFunc to make it clear that you don't want to focus on that specific case).
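To make that concrete, here is a minimal iterative sketch of that special case (my own illustration, assuming the plain factorial above without the doSomethingFunc side effect):

def fact_iter(n):
    # Same result as the recursive fact(), but in O(1) extra space:
    # a running product replaces the stacked arguments.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result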
You cannot assume in general that an algorithm that doesn't need all those values exists, recursive or iterative. Otherwise, that would be saying that every algorithm has an O(1) space complexity version.
Example: base representation of a positive integer
def baseRepr(num, base):
    if num>=base:
        s=baseRepr(num//base, base)
    else:
        s=''
    return s+chr(48+num%base)
Not claiming it is optimal, or even well written.
But, the stacking of the arguments is needed. It is the way you implicitly store the digits that you compute in the reverse order.
An iterative function would also need some memory to store those digits, since you have to compute the last one first.
Well, I am pretty sure that for this simple example you could find a way to compute from left to right, for example using a log computation to know the number of digits in advance, or something like that. But that's not the point. Just imagine that the only known algorithm is the one computing digits from right to left. Then you need to store them: either implicitly in the stack using recursion, or explicitly in allocated memory. So again, memory used in the stack is not just an inconvenience of recursion. It is the way a recursive algorithm stores things that an iterative algorithm would have to store elsewhere.
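To make that concrete, here is a minimal iterative sketch (my own illustration, not part of the answer above) that stores the digits in an explicit list instead of on the call stack:

def base_repr_iter(num, base):
    # The digits still have to be stored somewhere: this explicit
    # list plays the role the call stack plays in baseRepr().
    digits = []
    while True:
        digits.append(chr(48 + num % base))
        num //= base
        if num == 0:
            break
    return ''.join(reversed(digits))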
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Yes.
I was wondering if we can do this in O(1) space complexity?
So, no.
My aim is to improve the speed of my Python code that has been successfully accepted in a leetcode problem, Course Schedule.
I am aware of the algorithm, but even though I am using data structures with O(1) lookups, my runtime is still poor: around 200 ms.
My code uses dictionaries and sets:
from collections import defaultdict

class Solution:
    def canFinish(self, numCourses: int, prerequisites: List[List[int]]) -> bool:
        course_list = []
        pre_req_mapping = defaultdict(list)
        visited = set()
        stack = set()

        def dfs(course):
            if course in stack:
                return False
            stack.add(course)
            visited.add(course)
            for neighbor in pre_req_mapping.get(course, []):
                if neighbor in visited:
                    no_cycle = dfs(neighbor)
                    if not no_cycle:
                        return False
            stack.remove(course)
            return True

        # for course in range(numCourses):
        #     course_list.append(course)

        for pair in prerequisites:
            pre_req_mapping[pair[1]].append(pair[0])

        for course in range(numCourses):
            if course in visited:
                continue
            no_cycle = dfs(course)
            if not no_cycle:
                return False
        return True
What else can I do to improve the speed?
You are calling dfs() for a given course multiple times.
But its return value won't change.
So we have an opportunity to memoize it.
Change your algorithmic approach (here, to dynamic programming)
for the big win.
It's a space vs time tradeoff.
EDIT:
Hmmm, you are already memoizing most of the computation
with visited, so lru_cache would mostly improve clarity
rather than runtime.
It's just a familiar idiom for caching a result.
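For what it's worth, here is a tiny sketch of that idiom on a hypothetical pure helper (the toy graph and chain_length are my own illustration, not part of the question's code); note that lru_cache only applies cleanly when the result depends solely on the hashable arguments, which is not quite true of dfs() above, since it also consults the mutable stack set:

from functools import lru_cache

toy_prereqs = {0: [1, 2], 1: [2], 2: []}

@lru_cache(maxsize=None)
def chain_length(course):
    # Length of the longest prerequisite chain starting at `course`,
    # computed only once per course thanks to the cache.
    return 1 + max((chain_length(n) for n in toy_prereqs[course]), default=0)

print(chain_length(0))  # 3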
It would be helpful to add a # comment citing a reference
for the algorithm you implemented.
This is a very nice expression, with defaulting:
pre_req_mapping.get(course, [])
If you use timeit you may find that the generated bytecode
for an empty tuple () is a tiny bit more efficient than that
for an empty list [], as it involves fewer allocations.
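For example, a quick sketch of that measurement (exact numbers will vary by CPython version and machine):

import timeit

# Compare the two empty-default spellings on a missing key.
print(timeit.timeit("d.get('x', ())", setup="d = {}"))
print(timeit.timeit("d.get('x', [])", setup="d = {}"))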
Ok, some style nits follow, unrelated to runtime.
As an aside, youAreMixingCamelCase and_snake_case.
PEP-8 asks you to please stick with just snake_case.
This is a fine choice of identifier name:
for pair in prerequisites:
But instead of the cryptic [0], [1] dereferences,
it would be easier to read a tuple unpack:
for course, prereq in prerequisites:
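Concretely, the mapping loop from the question could then read:

for course, prereq in prerequisites:
    pre_req_mapping[prereq].append(course)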
if not no_cycle: is clumsy.
Consider inverting the meaning of dfs' return value,
or rephrasing the assignment as:
cycle = not dfs(course)
I think that you are doing it in a good way, but since Python is an interpreted language, it's normal to have a slow runtime compared with compiled languages like C/C++ and Java, especially for large inputs.
Try to write the same code in C/C++ for example and compare the speed between them.
So, busy with some code, and I have a function which basically takes a dictionary where each value is a list, and returns the key with the largest list.
I wrote the following:
def max_list(dic):
    if dic:
        l1 = dic.values()
        l1 = map(len, l1)
        l2 = dic.keys()
        return l2[l1.index(max(l1))]
    else:
        return None
Someone else wrote the following:
def max_list(dic):
    result = None
    maxValue = 0
    for key in dic.keys():
        if len(dic[key]) >= maxValue:
            result = key
            maxValue = len(dic[key])
    return result
Which would be the 'correct' way to do this, if there is one? I hope this is not regarded as community wiki (even though the code works); I'm trying to figure out which would be the best pattern for this problem.
Another valid option:
maxkey,maxvalue = max(d.items(),key=lambda x: len(x[1]))
Of the two above, I would probably prefer the explicit for loop as you don't generate all sorts of intermediate objects just to throw them away.
As a side note, this solution doesn't work particularly well for empty dicts (it raises a ValueError). Since I expect that is an unusual case rather than the norm, it shouldn't hurt to enclose it in a try-except ValueError block.
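A minimal sketch of that suggestion, reusing the question's max_list name (my own illustration):

def max_list(dic):
    # Key whose value (a list) is longest; None for an empty dict.
    try:
        key, _ = max(dic.items(), key=lambda item: len(item[1]))
        return key
    except ValueError:  # raised by max() on an empty dict
        return None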
the most pythonic would be max(dic,key=lambda x:len(dic[x])) ... at least I would think ...
maximizing readability and minimizing lines of code is pythonic ... usually
I think the question you should ask yourself is: what do you think is more important, code maintainability or computation speed?
As the other answers point out, this problem has a very concise solution using map. For most people that implementation would probably be easier to read than the implementation with a loop.
In terms of computational speed, the map solution would be less efficient, but it would still be of the same computational order of magnitude.
Therefore, I think it is unlikely that the map method would ever have noticeably worse performance. I would suggest running a profiler after your program is finished, so you can be sure where the real problem lies if your program turns out to run slower than desired.
I've just run into a tricky issue. The following code is supposed to split words into chunks of length numOfChar. The function calls itself, which makes it impossible to have the resulting list (res) inside the function. But if I keep it outside as a global variable, then every subsequent call of the function with different input values leads to a wrong result because res doesn't get cleared.
Can anyone help me out?
Here's the code
(in case you are interested, this is problem 7-23 from PySchools.com):
res = []

def splitWord(word, numOfChar):
    if len(word) > 0:
        res.append(word[:numOfChar])
        splitWord(word[numOfChar:], numOfChar)
    return res
print splitWord('google', 2)
print splitWord('google', 3)
print splitWord('apple', 1)
print splitWord('apple', 4)
A pure recursive function should not modify the global state; that counts as a side effect.
Instead of appending-and-recursion, try this:
def splitWord(word, numOfChar):
    if len(word) > 0:
        return [word[:numOfChar]] + splitWord(word[numOfChar:], numOfChar)
    else:
        return []
Here, you chop the word into pieces one piece at a time, on every call while going down, and then rebuild the pieces into a list while going up.
This is a common recursion pattern (strictly speaking it is not tail recursion, since the concatenation happens after the recursive call returns).
P.S. As #e-satis notes, recursion is not an efficient way to do this in Python. See also #e-satis's answer for a more elaborate example of tail recursion, along with a more Pythonic way to solve the problem using generators.
Recursion is completely unnecessary here:
def splitWord(word, numOfChar):
    return [word[i:i+numOfChar] for i in xrange(0, len(word), numOfChar)]
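For the question's own test inputs this gives, for example:

print splitWord('google', 2)   # ['go', 'og', 'le']
print splitWord('apple', 4)    # ['appl', 'e']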
If you insist on a recursive solution, it is a good idea to avoid global variables (they make it really tricky to reason about what's going on). Here is one way to do it:
def splitWord(word, numOfChar):
    if len(word) > 0:
        return [word[:numOfChar]] + splitWord(word[numOfChar:], numOfChar)
    else:
        return []
To elaborate on #Helgi's answer, here is a more performant recursive implementation. It updates the list instead of summing two lists (which results in the creation of a new object every time).
This pattern forces you to pass a list object as third parameter.
def split_word(word, num_of_chars, tail):
    if len(word) > 0:
        tail.append(word[:num_of_chars])
        return split_word(word[num_of_chars:], num_of_chars, tail)
    return tail

res = split_word('fdjskqmfjqdsklmfjm', 3, [])
Another advantage of this form is that it allows tail-recursion optimisation. It's useless in Python, because Python doesn't perform such optimisation, but if you translate this code into Erlang or Lisp, you will get it for free.
Remember, in Python you are limited by the recursion stack, and there is no way out of it. This is why recursion is not the preferred method.
You would most likely use generators, using yield and itertools (a module to manipulate generators). Here is a very good example of a function that can split any iterable in chunks:
from itertools import chain, islice

def chunk(seq, chunksize, process=iter):
    it = iter(seq)
    while True:
        yield process(chain([it.next()], islice(it, chunksize - 1)))
Now, it's a bit complicated if you are just starting to learn Python, so I'm not expecting you to fully get it now, but it's good that you can see this and know it exists. You'll come back to it later (we all did; Python's iteration tools are overwhelming at first).
The benefits of this approach are:
It can chunk ANY iterable, not just strings, but also lists, dictionaries, tuples, streams, files, sets, querysets, you name it...
It accepts iterables of any length, even of unknown length (think of a byte stream here).
It uses very little memory: the best thing about generators is that they produce the values on the fly, one by one, and they don't keep the previous results around before computing the next.
It returns chunks of any nature, meaning you can have chunks of x letters, lists of x items, or even generators spitting out x items (which is the default).
It returns a generator, and therefore can be used in a flow of other generators. Piping data from one generator to the other, bash style, is a wonderful Python ability.
To get the same result as with your function, you would do:
In [17]: list(chunk('fdjskqmfjqdsklmfjm', 3, ''.join))
Out[17]: ['fdj', 'skq', 'mfj', 'qds', 'klm', 'fjm']
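For instance (still Python 2, as in the session above), the same function chunks a plain list into sub-lists:

In [18]: list(chunk(range(10), 4, list))
Out[18]: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]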
Which user-made list-comprehension construction in Python is the most useful?
I have created the following two quantifiers, which I use to do different verification operations:
def every(f, L): return not (False in [f(x) for x in L])
def some(f, L): return True in [f(x) for x in L]
An optimized version (requires Python 2.5+) was proposed below:
def every(f, L): return all(f(x) for x in L)
def some(f, L): return any(f(x) for x in L)
So, how does it work?
"""For all x in [1,4,9] there exists such y from [1,2,3] that x = y**2"""
answer = every(lambda x: some(lambda y: y**2 == x, [1,2,3]), [1,4,9])
Using such operations, you can easily do smart verifications, like:
"""There exists at least one bot in a room which has a life below 30%"""
answer = some(lambda x: x.life < 0.3, bots_in_this_room)
And so on; you can answer even very complicated questions using such quantifiers. Of course, there are no infinite lists in Python (hey, it's not Haskell :) ), but Python's list comprehensions are very practical.
Do you have your own favourite list-comprehension constructions?
PS: I wonder why most people tend not to answer the question but instead criticize the presented examples? The question is actually about favourite list-comprehension constructions.
any and all are part of standard Python from 2.5. There's no need to make your own versions of these. Also, the official versions of any and all short-circuit the evaluation if possible, giving a performance improvement. Your versions always iterate over the entire list.
If you want a version that accepts a predicate, use something like this that leverages the existing any and all functions:
def anyWithPredicate(predicate, l): return any(predicate(x) for x in l)
def allWithPredicate(predicate, l): return all(predicate(x) for x in l)
I don't particularly see the need for these functions though, as it doesn't really save much typing.
Also, hiding existing standard Python functions with your own functions that have the same name but different behaviour is a bad practice.
There aren't all that many cases where a list comprehension (LC for short) will be substantially more useful than the equivalent generator expression (GE for short, i.e., using round parentheses instead of square brackets, to generate one item at a time rather than "all in bulk at the start").
Sometimes you can get a little extra speed by "investing" the extra memory to hold the list all at once, depending on vagaries of optimization and garbage collection on one or another version of Python, but that hardly amounts to substantial extra usefulness of LC vs GE.
Essentially, to get substantial extra use out of the LC as compared to the GE, you need use cases which intrinsically require "more than one pass" on the sequence. In such cases, a GE would require you to generate the sequence once per pass, while, with an LC, you can generate the sequence once, then perform multiple passes on it (paying the generation cost only once). Multiple generation may also be problematic if the GE / LC are based on an underlying iterator that's not trivially restartable (e.g., a "file" that's actually a Unix pipe).
For example, say you are reading a non-empty open text file f which has a bunch of (textual representations of) numbers separated by whitespace (including newlines here and there, empty lines, etc). You could transform it into a sequence of numbers with either a GE:
G = (float(s) for line in f for s in line.split())
or a LC:
L = [float(s) for line in f for s in line.split()]
Which one is better? Depends on what you're doing with it (i.e., the use case!). If all you want is, say, the sum, sum(G) and sum(L) will do just as well. If you want the average, sum(L)/len(L) is fine for the list, but won't work for the generator -- given the difficulty in "restarting f", to avoid an intermediate list you'll have to do something like:
tot = 0.0
for i, x in enumerate(G): tot += x
return tot/(i+1)
nowhere near as snappy, fast, concise and elegant as return sum(L)/len(L).
Remember that sorted(G) does return a list (inevitably), so L.sort() (which is in-place) is the rough equivalent in this case -- sorted(L) would be supererogatory (as now you have two lists). So when sorting is needed a generator may often be preferred simply due to conciseness.
All in all, since L is identically equivalent to list(G), it's hard to get very excited about the ability to express it via punctuation (square brackets instead of round parentheses) instead of a single, short, pronounceable and obvious word like list;-). And that's all a LC is -- punctuation-based syntax shortcut for list(some_genexp)...!
This solution shadows builtins, which is generally a bad idea. However, the usage feels fairly pythonic, and it preserves the original functionality.
Note there are several ways to potentially optimize this based on testing, including moving the imports out to module level, and changing f's default to None and testing for it instead of using a default lambda as I did.
def any(l, f=lambda x: x):
    from __builtin__ import any as _any
    return _any(f(x) for x in l)

def all(l, f=lambda x: x):
    from __builtin__ import all as _all
    return _all(f(x) for x in l)
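Usage would then look something like this (my own sketch, Python 2 to match the __builtin__ import above):

print any([0, 0, 3])                   # True: plain builtin behaviour
print any([1, 2, 3], lambda x: x > 2)  # True: with a predicate
print all([1, 2, 3], lambda x: x > 0)  # True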
Just putting that out there for consideration and to see what people think of doing something so potentially dirty.
For your information, the documentation for the itertools module in Python 3.x lists some pretty nice generator functions.