Time complexity of Python set operations?

What is the time complexity of each of Python's set operations in Big-O notation?
I am using Python's set type for an operation on a large number of items. I want to know how each operation's performance will be affected by the size of the set. For example, add and the test for membership:
myset = set()
myset.add('foo')
'foo' in myset
Googling around hasn't turned up any resources, but it seems reasonable that the time complexity for Python's set implementation would have been carefully considered.
If it exists, a link to something like this would be great. If nothing like this is out there, then perhaps we can work it out?
Extra marks for finding the time complexity of all set operations.

According to the Python wiki's Time complexity page, set is implemented as a hash table, so you can expect lookup/insert/delete to be O(1) on average -- unless your hash table's load factor is too high, in which case you face collisions and O(n).
P.S. For some reason they claim O(n) for the delete operation, which looks like a typo.
P.P.S. This is true for CPython; PyPy is a different story.
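A rough way to check the O(1) average behaviour empirically is to time membership tests on sets of very different sizes; the sizes below are arbitrary, so treat this as a sketch rather than a benchmark:
import timeit

# If lookups are O(1) on average, membership tests should take roughly
# the same time regardless of how large the set is.
for size in (1_000, 1_000_000):
    s = set(range(size))
    t = timeit.timeit('size - 1 in s', globals={'s': s, 'size': size}, number=1_000_000)
    print(f"size={size:>9}: {t:.3f}s for one million lookups")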

The other answers do not talk about two crucial operations on sets: unions and intersections. A union takes O(n+m), whereas an intersection takes O(min(n, m)), provided that there are not many elements in the sets with the same hash. A list of time complexities of common operations can be found here: https://wiki.python.org/moin/TimeComplexity
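The O(min(n, m)) bound for intersection comes from iterating over the smaller set and probing the larger one with average O(1) lookups; a pure-Python sketch of that strategy (not CPython's actual C implementation) might look like this:
def intersection_sketch(a, b):
    # Iterate over the smaller set and test membership in the larger:
    # len(smaller) iterations, each an average O(1) hash lookup.
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    return {x for x in small if x in large}

print(intersection_sketch({1, 2, 3, 4}, {3, 4, 5}))  # {3, 4}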

The operation in should be independent of the size of the container, i.e. O(1) -- given an optimal hash function. This should be nearly true for Python strings. Hashing strings is always critical; Python is clever there, so you can expect near-optimal results.

Related

Does python use the best possible algorithms in order to save the most time it can?

I have a question that might be very simple to answer.
I couldn't find the answer anywhere.
Does python use the best possible algorithms in order to save the most time it can?
I just saw on some website that, for example, the max method on lists is O(n) in Python, even though better time complexities exist, as you know.
Is it true?
Should I use algorithms that I know perform better in order to save time, or does Python already do this for me in its methods?
the max method on lists is O(n) in Python, even though better time complexities exist, as you know. Is it true?
No, this is not true. Finding the maximum value in a list requires that all values in the list be inspected, hence O(n).
You may be confused with lists that have been prepared in some way. For instance:
You have a list that is already sorted (which is an O(n log n) process). In that case you can of course get the maximum in constant time, since you know its index. If the list is sorted in ascending order, it would be unwise to call max on it, as that would indeed be a waste of time. You may know the list is sorted, but Python will not assume this, and will still scan the whole list.
You have a list that has been heapified into a max-heap (which is an O(n) process). Again, in that case you can get the maximum in constant time, since it is stored at index 0. Lists can be heapified with heapq, which builds min-heaps, so a max-heap is typically simulated by negating the values (a short sketch follows below).
So, if you know nothing about your list, then you will have to inspect all values to be sure to identify the maximum. That is what max() does. In case you do know something more that could help to identify the maximum without having to look at all values, then use another, more appropriate method.
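For instance, a short sketch with the standard-library heapq module (the values are arbitrary): building the heap is O(n), after which the smallest element can be read at index 0 in O(1), and a max-heap is simulated by negating the values:
import heapq

values = [7, 2, 9, 4, 1]
heapq.heapify(values)   # O(n); rearranges the list into a min-heap in place
print(values[0])        # 1 -- the minimum, readable in O(1)

# heapq only builds min-heaps; negate the values to simulate a max-heap.
neg = [-v for v in [7, 2, 9, 4, 1]]
heapq.heapify(neg)
print(-neg[0])          # 9 -- the maximum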
Should I use algorithms that I know perform better in order to save time, or does Python already do this for me in its methods?
You should use the algorithms that you know can perform better (based on what you know about a data structure). In many cases such a better algorithm is available via a Python library. For example, to find a particular value in a sorted list, use bisect.bisect_left and not list.index.
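A short sketch of that difference (the list contents are arbitrary):
from bisect import bisect_left

sorted_values = [1, 3, 5, 7, 9, 11]
i = bisect_left(sorted_values, 7)          # O(log n) binary search
if i < len(sorted_values) and sorted_values[i] == 7:
    print("found at index", i)             # found at index 3

# sorted_values.index(7) returns the same index, but scans in O(n).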
Look at a more complex example. Say you have written code that can generate chess moves and simulate a game of chess. You have good ideas about evaluation functions, alpha-beta pruning, killer moves, lookup tables, and a bunch of other optimisation techniques. You cannot expect Python to get smart when you issue a naive max on "all" evaluated chess states. You need to implement the complex algorithm to efficiently search and filter the right states, to get the "best" chess move out of that forest of information without wasting time on the less promising moves.
A Python list is a sequential and contiguous container. That means that finding the ith element takes constant time, and adding to the end is cheap if no reallocation is required.
Finding a value takes on average n/2 comparisons, i.e. O(n), and finding the min or max is O(n).
If you want a list whose minimum value can be found in O(1), the heapq module, which maintains a binary heap on top of a list, is available.
But Python offers few specialized containers in its standard library.
In terms of complexity, you'll find that Python almost always uses algorithms with the best possible complexity. Performance may still vary depending on constant factors, and Python is simply not the fastest language compared to C or C++.
In this case, if you're looking for the maximum value in a list, there is no better solution: to find the maximum, you have to check every value, so the solution is O(n). That's just how lists work -- a list is a plain sequence of values. If you were to use some other structure, e.g. a sorted list, accessing the maximum value would take O(1), but you would pay for this low complexity with a higher complexity for adding and deleting values.
It differs from library to library.
The default Python library functions, like sort (if no algorithm is explicitly selected), use the most efficient algorithm by default.
Sadly, Python is quite slow in general compared to languages like C, C++ or Java.
This is because Python is interpreted: the interpreter reads your script and executes it live, whereas C, C++ and Java are compiled ahead of time before executing.

Python complexity reference?

Is there any Python complexity reference? In cppreference, for example, for many functions (such as std::array::size or std::array::fill) there's a complexity section which describes their running complexity as, for instance, linear in the size of the container, or constant.
I would expect the same information to appear on the Python website, at least for the CPython implementation. For example, in the list reference, under list.insert I would expect to see "complexity: linear"; I know this case (and many other container-related operations) is covered here, but many other cases are not. Here are a few examples:
What is the complexity of tuple.__le__? It seems like when comparing two tuples of size n, k, the complexity is about O(min(n,k)) (however, for small n's it looks different).
What is the complexity of random.shuffle? It appears to be O(n). It also appears that the complexity of random.randint is O(1).
What is the complexity of the __format__ method of strings? It appears to be linear in the size of the input string; however, it also grows as the number of relevant arguments grows (compare ("{0}"*100000).format(*(("abc",)*100000)) with ("{}"*100000).format(*(("abc",)*100000))).
I'm aware that (a) each of these questions may be answered by itself, (b) one may look at the code of these modules (even though some are written in C), and (c) StackExchange is not a python mailing list for user requests. So: this is not a doc-feature request, just a question of two parts:
Do you know if such a resource exists?
If not, do you know what is the place to ask for such, or can you suggest why I don't need such?
CPython is pretty good about its algorithms, and the time complexity of an operation is usually just the best you would expect of a good standard library.
For example:
Tuple ordering has to be O(min(n, m)), because it works by comparing element-wise (a sketch follows after this list).
random.shuffle is O(n), because that's the complexity of the modern Fisher–Yates shuffle.
.format I imagine is linear, since it only requires one scan through the template string. As for the difference you see, CPython might just be clever enough to cache the same format code used twice.
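To make the tuple-ordering point above concrete, here is a pure-Python sketch of the lexicographic comparison that tuple ordering performs (CPython does this in C); it stops at the first unequal element, hence O(min(n, m)):
def tuple_le_sketch(a, b):
    # Compare element-wise; stop at the first pair that differs.
    for x, y in zip(a, b):
        if x != y:
            return x < y
    # All shared positions are equal: the shorter (or equal) tuple compares <=.
    return len(a) <= len(b)

print(tuple_le_sketch((1, 2, 3), (1, 2, 4)))  # True, decided at index 2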
The docs do mention time complexity, but generally only when it's not what you would expect — for example, because a deque is implemented with a doubly-linked list, it's explicitly mentioned as having O(n) for indexing in the middle.
Would the docs benefit from having time complexity called out everywhere it's appropriate? I'm not sure. The docs generally present builtins by what they should be used for and have implementations optimized for those use cases. Emphasizing time complexity seems like it would either be useless noise or encourage developers to second-guess the Python implementation itself.

Time differences when popping out items from dictionary of different lengths

I am designing software in Python and I got a little curious about whether there is any time difference between popping items from a very small dictionary and popping items from a very large one, or whether it is the same in all cases.
You can easily answer this question for yourself using the timeit module. But the entire point of a dictionary is near-instant access to any desired element by key, so I would not expect to have a large difference between the two scenarios.
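For instance, a minimal timeit sketch (the sizes and the pop-and-reinsert pattern are just illustrative) comparing the two scenarios:
import timeit

# Pop a key and put it back, on a tiny dict and on a large one.
for size in (10, 1_000_000):
    d = dict.fromkeys(range(size))
    t = timeit.timeit('d[0] = d.pop(0)', globals={'d': d}, number=1_000_000)
    print(f"size={size:>9}: {t:.3f}s")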
Check out this article on Python TimeComplexity:
The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys.
Note that there is a fast-path for dicts that (in practice) only deal with str keys; this doesn't affect the algorithmic complexity, but it can significantly affect the constant factors: how quickly a typical program finishes.
According to this article, for a 'Get Item' operation the average case is O(1), with a worst case of O(n). In other words, the worst case is that the time increases linearly with size. See Big O Notation on Wikipedia for more information.
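That O(n) worst case can be provoked deliberately: if every key hashes to the same value, all entries collide and each lookup degrades to a linear scan. A contrived sketch (BadKey is a made-up class for illustration):
class BadKey:
    # Deliberately terrible hash: every instance collides.
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42
    def __eq__(self, other):
        return self.n == other.n

d = {BadKey(i): i for i in range(1000)}
print(d[BadKey(999)])  # still correct, but each lookup probes a long chain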

Time complexity of accessing a Python dict

I am writing a simple Python program.
My program seems to suffer from linear access to dictionaries:
its run-time grows exponentially even though the algorithm is quadratic.
I use a dictionary to memoize values. That seems to be a bottleneck.
The values I'm hashing are tuples of points.
Each point is: (x,y), 0 <= x,y <= 50
Each key in the dictionary is: A tuple of 2-5 points: ((x1,y1),(x2,y2),(x3,y3),(x4,y4))
The keys are read many times more often than they are written.
Am I correct that python dicts suffer from linear access times with such inputs?
As far as I know, sets have guaranteed logarithmic access times.
How can I simulate dicts using sets (or something similar) in Python?
Edit: As per request, here's a (simplified) version of the memoization function:
def memoize(fun):
    memoized = {}                    # cache: argument tuple -> computed result
    def memo(*args):
        key = args                   # the args tuple itself serves as the dict key
        if key not in memoized:
            memoized[key] = fun(*args)
        return memoized[key]
    return memo
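For reference, a quick usage sketch of the decorator above; score is a hypothetical stand-in for the real computation, called with tuple-of-points arguments like those described:
@memoize
def score(*points):
    # stand-in for the real, expensive computation
    return sum(x + y for x, y in points)

print(score((1, 2), (3, 4)))  # computed, then cached
print(score((1, 2), (3, 4)))  # served from the cache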
See Time Complexity. The Python dict is a hash map, so its worst case is O(n) if the hash function is bad and results in a lot of collisions. However, that is a very rare case, where every item added has the same hash and so is added to the same chain, which for a major Python implementation would be extremely unlikely. The average time complexity is, of course, O(1).
The best approach would be to check the hashes of the objects you are using. The CPython dict uses int PyObject_Hash(PyObject *o), which is the equivalent of hash(o).
After a quick check, I have not yet managed to find two tuples that hash to the same value, which would indicate that the lookup is O(1).
l = set()                        # hashes seen so far
for x in range(0, 50):
    for y in range(0, 50):
        h = hash((x, y))
        if h in l:               # a collision would be reported here
            print("Fail:", (x, y))
        l.add(h)
print("Test Finished")
You are not correct. dict access is unlikely to be your problem here. It is almost certainly O(1), unless you have some very weird inputs or a very bad hashing function. Paste some sample code from your application for a better diagnosis.
It would be easier to make suggestions if you provided example code and data.
Accessing the dictionary is unlikely to be a problem, as that operation is O(1) on average and O(n) in the worst case. It's possible that the built-in hashing functions are experiencing collisions for your data. If you're having problems with the built-in hashing function, you can provide your own.
Python's dictionary implementation reduces the average complexity of dictionary lookups to O(1) by requiring that key objects provide a "hash" function. Such a hash function takes the information in a key object and uses it to produce an integer, called a hash value. This hash value is then used to determine which "bucket" this (key, value) pair should be placed into.
You can overwrite the __hash__ method in your class to implement a custom hash function like this:
def __hash__(self):
    return hash(str(self))
Depending on what your data actually looks like, you might be able to come up with a faster hash function that has fewer collisions than the standard function. However, this is unlikely. See the Python Wiki page on Dictionary Keys for more information.
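Note that a custom __hash__ should be paired with a matching __eq__, since the dict uses equality to resolve collisions. A minimal sketch with a hypothetical Point class:
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    def __hash__(self):
        # Delegate to the tuple hash, which is cheap and well distributed.
        return hash((self.x, self.y))

d = {Point(1, 2): "a"}
print(d[Point(1, 2)])  # "a" -- equal points land in the same bucket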
To answer your specific questions:
Q1:
"Am I correct that python dicts suffer from linear access times with such inputs?"
A1: If you mean that the average lookup time is O(N), where N is the number of entries in the dict, then it is highly likely that you are wrong. If you are correct, the Python community would very much like to know under what circumstances you are correct, so that the problem can be mitigated or at least warned about. Neither "sample" code nor "simplified" code is useful. Please show actual code and data that reproduce the problem. The code should be instrumented with things like the number of dict items and the number of dict accesses for each P, where P is the number of points in the key (2 <= P <= 5).
Q2:
"As far as I know, sets have guaranteed logarithmic access times.
How can I simulate dicts using sets (or something similar) in Python?"
A2: Sets have guaranteed logarithmic access times in what context? There is no such guarantee for Python implementations. Recent CPython versions in fact use a cut-down dict implementation (keys only, no values), so the expectation is average O(1) behaviour. How can you simulate dicts with sets or something similar in any language? Short answer: with extreme difficulty, if you want any functionality beyond dict.has_key(key).
As others have pointed out, accessing dicts in Python is fast. They are probably the best-oiled data structure in the language, given their central role. The problem lies elsewhere.
How many tuples are you memoizing? Have you considered the memory footprint? Perhaps you are spending all your time in the memory allocator or paging memory.
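A rough way to check the footprint, bearing in mind that sys.getsizeof measures only the dict's own hash table, not the keys and values it references (the key shape below mimics the question's tuples of points):
import sys

memoized = {((x, y), (x + 1, y + 1)): x * y for x in range(50) for y in range(50)}
print(len(memoized), "entries")
print(sys.getsizeof(memoized), "bytes for the dict's own table")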
My program seems to suffer from linear access to dictionaries: its run-time grows exponentially even though the algorithm is quadratic.
I use a dictionary to memoize values. That seems to be a bottleneck.
This is evidence of a bug in your memoization method.

Performance of list(...).insert(...)

I thought about the following question about computer architecture. Suppose I do this in Python:
from bisect import bisect
index = bisect(x, a) # O(log n) (also, shouldn't it be a standard list function?)
x.insert(index, a) # O(1) + memcpy()
which takes O(log n), plus, if I understand it correctly, a memory copy operation for x[index:]. Now, I recently read that the bottleneck is usually in the communication between the processor and the memory, so the memory copy could be done by the RAM quite fast. Is that how it works?
Python is a language. Multiple implementations exist, and they may have different implementations for lists. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances.
My bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements behind the inserted element to be moved. This may be done efficiently by the hardware, but the complexity would still be O(n).
For small lists the bisect operation may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you must consider using a different data structure.
Use the blist module if you need a list with better insert performance.
CPython lists are contiguous arrays. Which of the O(log n) bisect and the O(n) insert dominates your performance profile depends on the size of your list and also on the constant factors inside the O(). In particular, the comparison function invoked by bisect can be expensive, depending on the type of objects in the list.
If you need to hold potentially large mutable sorted sequences, then the linear array underlying Python's list type isn't a good choice. Depending on your requirements, heaps, trees or skip-lists might be appropriate.
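A small sketch of the trade-off, assuming front insertion as the worst case (the list size and iteration counts are arbitrary); bisect.insort combines the two steps from the question:
import timeit
from bisect import insort

# Inserting at the front of a big list forces a shift of every element.
setup = "xs = list(range(1_000_000))"
print(timeit.timeit("xs.insert(0, -1); xs.pop(0)", setup=setup, number=100))
print(timeit.timeit("xs.append(1); xs.pop()", setup=setup, number=100))

xs = [1, 3, 5, 7]
insort(xs, 4)   # bisect + insert in one call
print(xs)       # [1, 3, 4, 5, 7]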
