Ok, I understand why, in languages like C++, calling a virtual method defined in a class is slower than calling a non-virtual method (you have to go through the dynamic dispatch table to look up the correct implementation to call).
But in Python, if I have:
list_of_sets = generate_a_list_containg_a_bunch_of_sets()
intersection_of_all = reduce(list_of_sets[0].intersection, list_of_sets)
This is dramatically (in my experiments about 40%) slower than:
list_of_sets = generate_a_list_containg_a_bunch_of_sets()
intersection_of_all = reduce(set.intersection, list_of_sets)
What I don't get is why that should be so much slower. The method lookup (I would think) would happen on the call to reduce, so inside reduce, where the intersection method is actually called, it shouldn't have to be looked up again (it would just reuse the same method reference).
Could someone illuminate where my understanding is flawed?
This has nothing to do with the overhead of method lookup or binding. The first version computes the intersection of three sets in each iteration, because the bound method list_of_sets[0].intersection implicitly supplies list_of_sets[0] as an extra operand on every call, while the second version only intersects two sets. This is easy to see if we write out the equivalent explicit loops.
Variant 1:
intersection = list_of_sets[0]
for s in list_of_sets[1:]:
    intersection = list_of_sets[0].intersection(intersection, s)
Variant 2:
intersection = list_of_sets[0]
for s in list_of_sets[1:]:
    intersection = set.intersection(intersection, s)
(Would you agree now that Guido has a point?)
Note that this will probably be even faster:
intersection = list_of_sets[0]
for s in list_of_sets[1:]:
    intersection.intersection_update(s)
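Note that intersection_update mutates the first set in list_of_sets in place. A minimal variant (not in the original answer), assuming you want to keep the original sets untouched, is to copy that set first:
intersection = set(list_of_sets[0])  # copy, so the original set is not modified
for s in list_of_sets[1:]:
    intersection.intersection_update(s)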
Let d be a large (but still fitting in memory) Python dictionary where we do not know what the keys are. What is the most efficient way to get a key of d (where it does not matter which key you get), such that d is unchanged in both content and order (for newer versions of Python) once you are done? Here "efficient" should mean something like: the memory used to perform the task is small compared to the size of the dictionary, and the speed should be at least as fast as any of the methods below. This question is not about readability but about the Python dictionary objects. For example, two methods are:
Method 1: convert the dict to a list
any_key = list(d)[0]
Method 2: use popitem and put the item back
any_key, y = d.popitem()
d[any_key] = y
So both methods essentially implement a peekkey() operation. My basic timeit analysis shows that method 2 is much faster than method 1, and I assume that method 2 uses a lot less memory (but I do not really know if this is true yet). Is method 2 "best", or is there something better?
Extra brownie points if you get a fast and readable method using only Python. Even more points for a C/Python method that accesses the dictionary object directly, if that method is significantly faster than the best Python method.
If you do not care about which key you get, and you don't mean "sample" in the random sense, then just grab the first key using next
key = next(iter(d.keys()))
which, for brevity, is the same as
key = next(iter(d))
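If there is any chance that d could be empty, next() on an exhausted iterator raises StopIteration; a small hedge (not part of the original answer) is to pass a default value:
key = next(iter(d), None)  # returns None instead of raising if d is empty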
Just to test performance, if I generate a dict with 1000 elements
d = {k:k for k in range(1000)}
then benchmarking these two methods, the next approach is about 95% faster
>>> timeit.timeit('sample_key = list(d)[0]', setup='d = {k:k for k in range(1000)}')
5.3303698
>>> timeit.timeit('next(iter(d.keys()))', setup='d = {k:k for k in range(1000)}')
0.18915620000001354
Let me illustrate this with an example my students and I came across:
>>> a_lot = (i for i in range(10**50))
>>> twice_a_lot = map(lambda x: 2*x, a_lot)
>>> next(a_lot)
0
>>> next(a_lot)
1
>>> next(a_lot)
2
>>> next(twice_a_lot)
6
So somehow these iterators share their current state, as crazy and uncomfortable as it sounds...
Any hints as to the model Python uses behind the scenes?
This may be surprising at first but upon a little reflection, it should seem obvious.
When you create an iterator from another iterator, there is no way to recover the original state of whatever underlying container or generator you are iterating over (in this case, the generator over the range object). At least not in general.
Consider the simplest case of this: iter(something).
When something is an iterator, then according to the iterator protocol specification, iterator.__iter__ must:
Return the iterator object itself
In other words, if you've implemented the protocol correctly, then the following identity will always hold:
iter(iterator) is iterator
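A quick REPL illustration of that identity (a minimal sketch, not from the original answer): calling iter() on an iterator hands back the same object, while calling it on the list itself creates a fresh iterator each time.
>>> lst = [1, 2, 3]
>>> it = iter(lst)
>>> iter(it) is it
True
>>> iter(lst) is iter(lst)
False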
Of course, map could have some convention that would allow it to recover and create an independent iterator, but there is no such convention. In general, if you want to create independent iterators, you need to create them from the source.
And of course, there are iterators where this really is not possible without storing all previous results. Consider:
import random
def random_iterator():
    while True:
        yield random.random()
In that case, how should map behave with the following?
iterator = random_iterator()
twice = map(lambda x: x*2, iterator)
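As an aside (not part of the original answer), if you really do need two independent iterators derived from a single one, the standard library provides itertools.tee; it buffers whatever one branch has consumed and the other has not, so it is only cheap when the branches stay roughly in step.
import itertools

a_lot = (i for i in range(10**50))
branch_1, branch_2 = itertools.tee(a_lot, 2)  # two independent views over the same stream

print(next(branch_1))  # 0
print(next(branch_1))  # 1
print(next(branch_2))  # 0 -- branch_2 has not advanced; tee buffered the gap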
Ok, thanks to all the comments received (in less than 5 minutes!) I understood two related things: if I want two independent iterators, I won't use map to compute the second from the first:
>>> a_lot = (i for i in range(10**50))
>>> twice_a_lot = (2*i for i in range(10**50))
and I'll remember that map is lazy, because there's no other way that could make sense.
That was a nice SO lesson.
Thanks!
I'm trying to get the 15 most relevant items for each user, but every function I tried takes an eternity (more than 6 hours; I shut it down after that...).
I have 418 unique users, 3718 unique items.
The U2tfifd dict also has 418 entries, and there are 32645 words in tfidf_feature_names.
The shape of my interactions_full_df is (40733, 3).
I tried:
def index_tfidf_users(user_id):
    return [users for users in U2tfifd[user_id].flatten().tolist()]

def get_relevant_items(user_id):
    return sorted(zip(tfidf_feature_names, index_tfidf_users(user_id)), key=lambda x: -x[1])[:15]

def get_tfidf_token(user_id):
    return [words for words, values in get_relevant_items(user_id)]
then interactions_full_df["tags"] = interactions_full_df["user_id"].apply(lambda x: get_tfidf_token(x))
or
def get_tfidf_token(user_id):
    tags = []
    v = sorted(zip(tfidf_feature_names, U2tfifd[user_id].flatten().tolist()), key=lambda x: -x[1])[:15]
    for words, values in v:
        tags.append(words)
    return tags
or
def get_tfidf_token(user_id):
    v = sorted(zip(tfidf_feature_names, U2tfifd[user_id].flatten().tolist()), key=lambda x: -x[1])[:15]
    tags = [words for words, values in v]
    return tags
U2tfifd is a dict with keys = user_id, values = an array
There are several things going on which could cause poor performance in your code. The impact of each of these will depend on things like your Python version (2.x or 3.x), your RAM speed, and whatnot. You'll need to experiment and benchmark the various potential improvements yourself.
1. TFIDF Sparsity (~10x speedup depending on sparsity)
One glaring potential problem is that TFIDF naturally returns sparse data (e.g. a paragraph doesn't use anywhere near as many unique words as an entire book), and working with dense structures like numpy arrays is a strange choice when the data is probably zero almost everywhere.
If you'll be doing this same analysis in the future, it might be helpful to make/use a version of TFIDF with sparse array outputs so that when you extract your tokens you can skip over the zero values. This would likely have the secondary benefit of the entire sparse array for each user fitting in the cache and preventing costly RAM access in your sorts and other operations.
It might be worth sparsifying your data anyway. On my potato, a quick benchmark on data which should be similar to yours indicates that the process can be done in ~30s. The process replaces much of the work you're doing with a highly optimized routine coded in C and wrapped for use in Python. The only real cost is the second pass through the non-zero entries, but unless that pass is pretty efficient to begin with you should be better off working with sparse data.
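For instance, a hedged sketch of what a sparse variant might look like (this assumes scipy is available and that each U2tfifd[user_id] is a dense 1 x n_words array; it is not the original poster's code):
import numpy as np
from scipy.sparse import csr_matrix

def get_tfidf_token_sparse(user_id, k=15):
    row = csr_matrix(U2tfifd[user_id])   # keep only the non-zero TFIDF entries
    data, cols = row.data, row.indices   # non-zero values and their word (column) indices
    if len(data) <= k:
        top = cols                       # fewer than k non-zeros: take them all
    else:
        top = cols[np.argpartition(data, len(data) - k)[-k:]]  # columns of the k largest values
    return [tfidf_feature_names[i] for i in top]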
2. Duplicated Efforts and Memoization (~100x speedup)
If U2tfifd has 418 entries and interactions_full_df has 40733 rows then at least 40315 (or 99.0%) of your calls to get_tfidf_token() are wasted since you've already computed the answer. There are tons of memoization decorators out there, but you don't need anything very complicated for your use case.
def memoize(f):
    _cache = {}
    def _f(arg):
        if arg not in _cache:
            _cache[arg] = f(arg)
        return _cache[arg]
    return _f

@memoize
def get_tfidf_token(user_id):
    ...
Breaking this down, the function memoize() returns another function. The behavior of that function is to check a local cache for the expected return value before computing it and storing it if necessary.
The syntax @memoize is short for something like the following.
def uncached_get_tfidf_token(user_id):
    ...

get_tfidf_token = memoize(uncached_get_tfidf_token)
The @ symbol is used to signify that we want the modified, or decorated, version of get_tfidf_token() instead of the original. Depending on your application, it might be beneficial to chain decorators together.
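A standard-library alternative (not mentioned in the original answer) is functools.lru_cache, which provides the same behaviour without a hand-written cache:
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, like the hand-written memoize above
def get_tfidf_token(user_id):
    ...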
3. Vectorized Operations (varying speedup, benchmarking necessary)
Python doesn't really have a notion of primitive types like other languages, and even integers take 24 bytes in memory on my machine. Lists aren't usually packed, so you can incur costly cache misses as you're plowing through them. No matter how little work the CPU is doing for sorting and whatnot, clobbering a whole new chunk of memory to turn your array into a list and only using that brand new, expensive memory once is going to incur a performance hit.
Many of the things you are trying to do have fast (SIMD vectorized, parallelized, memory-efficient, packed memory, and other fun optimizations) numpy equivalents AND avoid unnecessary array copies and type conversions. It seems you're already using numpy anyway, so you won't have any extra imports or dependencies.
As one example, zip() creates another list in memory in Python 2.x and still does unnecessary work in Python 3.x when you really only care about the indices of tfidf_feature_names. To compute those indices, you can use something like the following, which avoids an unnecessary list creation and uses an optimized routine with slightly better asymptotic complexity as an added bonus.
import numpy as np

def get_tfidf_token(user_id):
    temp = U2tfifd[user_id].flatten()
    ind = np.argpartition(temp, len(temp) - 15)[-15:]  # indices of the 15 largest values
    return tfidf_feature_names[ind]                    # works if tfidf_feature_names is a numpy array
    # return [tfidf_feature_names[i] for i in ind]     # alternative that always works
Depending on the shape of U2tfifd[user_id], you could avoid the costly .flatten() computation by passing an axis argument to np.argpartition() and flattening only the 15 obtained indices instead.
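A hedged sketch of that idea, assuming each U2tfifd[user_id] has shape (1, n_words) (adjust the axis if your arrays are laid out differently):
import numpy as np

row = U2tfifd[user_id]                   # assumed shape: (1, n_words)
k = 15
ind = np.argpartition(row, row.shape[1] - k, axis=1)[:, -k:]  # top-k column indices per row
top_indices = ind.ravel()                # flatten just the 15 indices, not the whole row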
4. Bonus
The sorted() function supports a reverse argument so that you can avoid extra computations like throwing a negative on every value. Simply use
sorted(..., reverse=True)
Even better, since you really don't care about the sort itself but just the 15 largest values you can get away with
sorted(...)[-15:]
to index the largest 15 instead of reversing the sort and taking the smallest 15. That doesn't really matter if you're using a better function for the application like np.argpartition(), but it could be helpful in the future.
You can also avoid some function calls by replacing .apply(lambda x : get_tfidf_token(x)) with .apply(get_tfidf_token) since get_tfidf_token is already a function which has the intended behavior. You don't really need the extra lambda.
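For example, using the column names from the question:
interactions_full_df["tags"] = interactions_full_df["user_id"].apply(get_tfidf_token)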
As far as I can see though, most additional gains are fairly nitpicky and system-dependent. You can make most things faster with Cython or straight C with enough time for example, but you already have reasonably fast routines which do what you want out of the box. The extra engineering effort probably isn't worth any potential gains.
I've been trying to optimize the performance of a BFS implementation in Python. My original implementation used a deque to store the queue of nodes to expand and a dict storing the same nodes, so that I would have an efficient lookup to see whether a node is already open.
I attempted to optimize (for simplicity and efficiency) by moving to an OrderedDict. However, this takes significantly more time: 400 sample searches take 2 seconds with deque/dict and 3.5 seconds with just an OrderedDict.
My question is, if OrderedDict provides the same functionality as the two original data structures, shouldn't it at least be similar in performance? Or am I missing something here? Code examples below.
Using just an OrderedDict:
open_nodes = OrderedDict()
closed_nodes = {}
current = Node(start_position, None, 0)
open_nodes[current.position] = current

while open_nodes:
    current = open_nodes.popitem(False)[1]
    closed_nodes[current.position] = current
    if goal(current.position):
        return trace_path(current, open_nodes, closed_nodes)
    # Nodes bordering current
    for neighbor in self.environment.neighbors[current.position]:
        new_node = Node(neighbor, current, current.depth + 1)
        open_nodes[new_node.position] = new_node
Using both a deque and a dictionary:
open_queue = deque()
open_nodes = {}
closed_nodes = {}
current = Node(start_position, None, 0)
open_queue.append(current)
open_nodes[current.position] = current

while open_queue:
    current = open_queue.popleft()
    del open_nodes[current.position]
    closed_nodes[current.position] = current
    if goal_function(current.position):
        return trace_path(current, open_nodes, closed_nodes)
    # Nodes bordering current
    for neighbor in self.environment.neighbors[current.position]:
        new_node = Node(neighbor, current, current.depth + 1)
        open_queue.append(new_node)
        open_nodes[new_node.position] = new_node
Both deque and dict are implemented in C and will run faster than OrderedDict, which is implemented in pure Python.
The advantage of OrderedDict is that it has O(1) getitem, setitem, and delitem, just like regular dicts. This means that it scales very well, despite the slower pure-Python implementation.
Competing implementations using deques, lists, or binary trees usually forgo fast big-Oh times in one of those categories in order to get a speed or space benefit in another category.
Update: Starting with Python 3.5, OrderedDict() has a C implementation, and though it hasn't been as highly optimized as some of the other containers, it runs much faster than the pure Python implementation. Then, starting with Python 3.6, regular dictionaries became ordered (an implementation detail in 3.6 that became an official language guarantee in 3.7). Those should run faster still :-)
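A minimal, hedged micro-benchmark of the two pop patterns (not from the original answers; absolute numbers will vary with Python version and machine):
from timeit import timeit

setup = "from collections import OrderedDict, deque; items = [(i, i) for i in range(10000)]"

# Drain a deque from the left vs. drain an OrderedDict from its oldest entry.
t_deque = timeit("d = deque(items)\nwhile d: d.popleft()", setup=setup, number=100)
t_odict = timeit("od = OrderedDict(items)\nwhile od: od.popitem(last=False)", setup=setup, number=100)
print(f"deque:       {t_deque:.3f}s")
print(f"OrderedDict: {t_odict:.3f}s")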
Like Sven Marnach said, OrderedDict is implemented in Python; I want to add that it is implemented using a dict and a list.
dict in Python is implemented as a hash table. I am not sure how deque is implemented, but the documentation says that deque is optimized for quickly adding or accessing the first/last elements, so I guess that deque is implemented as a linked list.
I think that when you pop from an OrderedDict, Python does a hash-table look-up, which is slower than following a linked list that has direct pointers to the first and last elements. Adding an element to the end of a linked list is also faster than adding to a hash table.
So the primary reason the OrderedDict in your example is slower is that it is faster to access the first/last element of a linked list than to access an arbitrary element through a hash table.
My thoughts are based on information from the book Beautiful Code, which describes the implementation details behind dict. However, I do not know many details behind list and deque; this answer is just my intuition of how things work, so in case I am wrong, I really deserve down-votes for talking about things I am not sure about. Why do I talk about things I am not sure about? Because I want to test my intuition :)
Let's say that I have a graph and want to see if b in N[a]. Which is the faster implementation, and why?
a, b = range(2)
N = [set([b]), set([a,b])]
OR
N= [[b],[a,b]]
This is obviously oversimplified, but imagine that the graph becomes really dense.
Membership testing in a set is vastly faster, especially for large sets. That is because the set uses a hash function to map to a bucket. Since Python implementations automatically resize that hash table, the speed can be constant (O(1)) no matter the size of the set (assuming the hash function is sufficiently good).
In contrast, to evaluate whether an object is a member of a list, Python has to compare every single member for equality, i.e. the test is O(n).
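A small, hedged timing sketch to make that concrete (illustrative only; absolute numbers depend on the machine):
import timeit

# Membership tests: the set lookup stays roughly flat as n grows,
# while the list lookup scales with n (worst case: the target is at the end).
for n in (1_000, 100_000):
    setup = f"s = set(range({n})); l = list(range({n})); target = {n} - 1"
    t_set = timeit.timeit("target in s", setup=setup, number=1_000)
    t_list = timeit.timeit("target in l", setup=setup, number=1_000)
    print(f"n={n}: set {t_set:.5f}s, list {t_list:.5f}s")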
It all depends on what you're trying to accomplish. Using your example verbatim, it's faster to use lists, as you don't have to go through the overhead of creating the sets:
import timeit

def use_sets(a, b):
    return [set([b]), set([a, b])]

def use_lists(a, b):
    return [[b], [a, b]]

t = timeit.Timer("use_sets(a, b)", """from __main__ import use_sets
a, b = range(2)""")
print("use_sets()", t.timeit(number=1000000))

t = timeit.Timer("use_lists(a, b)", """from __main__ import use_lists
a, b = range(2)""")
print("use_lists()", t.timeit(number=1000000))
Produces:
use_sets() 1.57522511482
use_lists() 0.783344984055
However, for reasons already mentioned here, you benefit from using sets when you are searching large sets. It's impossible to tell by your example where that inflection point is for you and whether or not you'll see the benefit.
I suggest you test it both ways and go with whatever is faster for your specific use-case.
A set (I mean a hash-based set, like HashSet) is much faster than a list for looking up a value. A list has to scan sequentially to find out whether the value exists. A hash-based set can jump directly to the right bucket and look up a value in almost constant time.