Time differences when popping out items from dictionary of different lengths - python

I am designing a piece of software in Python and I got a little curious about whether there is any difference in the time it takes to pop items from a very small dictionary versus a very large one, or whether it is the same in all cases.

You can easily answer this question for yourself using the timeit module. But the entire point of a dictionary is near-instant access to any element by key, so I would not expect a large difference between the two scenarios.
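For instance, a quick comparison along those lines might look like this (an illustrative sketch; the dictionary sizes and the choice of key are arbitrary):

import timeit

# Build a small and a large dictionary with integer keys (sizes chosen arbitrarily).
small_setup = "d = {i: i for i in range(10)}"
large_setup = "d = {i: i for i in range(1_000_000)}"

# Pop a key and put it back, so every timed iteration performs exactly one pop.
stmt = "d[5] = d.pop(5)"

print("small dict:", timeit.timeit(stmt, setup=small_setup, number=1_000_000))
print("large dict:", timeit.timeit(stmt, setup=large_setup, number=1_000_000))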

Check out this article on Python TimeComplexity:
The Average Case times listed for dict objects assume that the hash
function for the objects is sufficiently robust to make collisions
uncommon. The Average Case assumes the keys used in parameters are
selected uniformly at random from the set of all keys.
Note that there is a fast-path for dicts that (in practice) only deal
with str keys; this doesn't affect the algorithmic complexity, but it
can significantly affect the constant factors: how quickly a typical
program finishes.
According to this article, for a 'Get Item' operation the average case is O(1), with a worst case of O(n). In other words, the worst case is that the time increases linearly with size. See Big O Notation on Wikipedia for more information.

Related

Python data structures - sorting big O complexity implementation

We have all been told that the well-known theoretical limit for sorting arbitrary objects in the general case is O(n*log(n)), in many languages.
Let's say we have a list:
lst = [1,1,2,3,4,5,3,2,3,4,2,1,2,3]
In Python, I was recently introduced to some additional benefits of using Counter (from collections import Counter) over a plain dictionary whose keys are the list's values and whose values are their occurrence counts.
coun = Counter(lst)
print(coun) # ==> Counter({2: 4, 3: 4, 1: 3, 4: 2, 5: 1})
It was shown several times (What is the time complexity of collections.Counter() in Python?) that constructing a Counter takes O(n) and, unlike a standard dict, Counter() has some additional space overhead to store the frequency of each element.
When you work with a Counter, it often seems to return its output in sorted order:
.items() or .keys(). Maybe for the sake of convenience it applies an O(n*logn) sort before giving you the result, but that sounds unexpectedly bad when you use it in a simple traversal:
for i in range(len(lst)):
    if lst[i] not in coun.keys():
        print("element", lst[i], "not found!")
You would naturally expect the complexity of the above to be O(n), as with a standard dictionary (a presence check is O(1), over n loop iterations).
So without peeking into the code, let's just assume that lst[i] not in coun.keys() is implemented with O(1) complexity, using some space overhead.
Is it theoretically possible that, during Counter construction, this additional space overhead (potentially prohibitively large for really big lists with many unique values) gives us an edge on small and medium-sized lists (length < 1000), i.e. an O(n) sorting advantage at the cost of extra space?
If the above is possible, I assume that behind the scenes there is a mechanism that stops counting every single element and placing it into the correct sorted position once the memory footprint exceeds some defined value (like 1 MB), at which point lst[i] not in coun.keys() becomes O(logn).
Just thinking out loud here, since in reality a lot of the lists we work with are actually shorter than 1000 elements.
Afterthought 1:
On the other hand, you probably wouldn't care much about O(n) vs O(nlogn) when n < 1000; the time gain would be barely noticeable, at a potentially huge space-overhead price.
Afterthought 2:
It appears that .keys() preserves insertion order, which just happened to coincide with sorted order because of my poorly chosen initial data set.
Nevertheless, is it possible to implement a data structure that places the counted objects in their correct sorted positions at the moment they are added?
The O(n*log(n)) lower bound on sorting algorithms only applies to algorithms that can sort arbitrary objects by comparing them to one another. If you know that your data is from a limited domain, you can use more efficient algorithms. For example, if the values are all small integers you can use a counting sort to efficiently sort the data in O(n) time.
Here's an example that can sort sequences that only contain integers from the domain 0-5, like in your example.
def sort_0_to_5(data):
    counts = [0, 0, 0, 0, 0, 0]
    for val in data:
        counts[val] += 1
    return [val for val in range(len(counts)) for _ in range(counts[val])]
This runs in O(n) time and uses only constant space. This is a very basic counting sort; a fancier version could sort arbitrary objects as long as they have integer keys within the domain. (You just need a couple of extra passes over the data to build cumulative counts and then to construct the output in the right order.)
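As a rough sketch of that fancier version (the key_func argument and the domain bound k are my own names, not from any library), a stable counting sort by integer key could look like this:

def counting_sort_by_key(items, key_func, k):
    """Stable counting sort for items whose key_func(item) is an int in range(k)."""
    counts = [0] * k
    for item in items:
        counts[key_func(item)] += 1
    # Turn the counts into cumulative starting positions for each key.
    positions = [0] * k
    total = 0
    for key in range(k):
        positions[key] = total
        total += counts[key]
    # Place each item directly into its output slot, preserving input order.
    output = [None] * len(items)
    for item in items:
        output[positions[key_func(item)]] = item
        positions[key_func(item)] += 1
    return output

# Example: sort pairs by their first component (domain 0-5).
pairs = [(3, 'a'), (1, 'b'), (3, 'c'), (0, 'd')]
print(counting_sort_by_key(pairs, key_func=lambda p: p[0], k=6))
# [(0, 'd'), (1, 'b'), (3, 'a'), (3, 'c')]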
More sophisticated algorithms like radix sort can handle much larger domains in quasi-linear time. Accounting for the time gets tricky, though: once the domain becomes comparable to the size of the data set, the parts of the code that deal with the domain size are less and less "constant". Radix sort, for example, takes O(n*log(k)) time, where k is the size of the domain.
I'd note, however, that even if you can find a sorting algorithm with a better time complexity than the standard comparison sorts, that does not necessarily mean it will be faster on your actual data. Unless your data set is huge, the constant factors that asymptotic analysis excludes are likely to matter quite a lot. You may well find that a well-implemented O(n*log(n)) sort (like the one behind Python's sorted) performs better than an O(n) sort you've coded up by hand.
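If you want to check that on your own data, a rough comparison along these lines (reusing the sort_0_to_5 function above; the list size and repeat count are arbitrary) can give you an idea:

import random
import timeit

data = [random.randrange(6) for _ in range(1000)]

# Compare the built-in Timsort against the hand-rolled counting sort above.
print("sorted():     ", timeit.timeit(lambda: sorted(data), number=1000))
print("counting sort:", timeit.timeit(lambda: sort_0_to_5(data), number=1000))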

Is my understanding of Hashsets correct?(Python)

I'm teaching myself data structures through this Python book and I'd appreciate it if someone could correct me if I'm wrong, since a hash set seems to be extremely similar to a hash map.
Implementation:
A hash set is a list [] or array where each index points to the head of a linked list.
So hash(some_item) --> key, then list[key], then add to the head of that linked list. This occurs in O(1) time.
When removing a value from the linked list, in Python we replace it with a placeholder because hash sets are not allowed to have Null/None values, correct?
When the list [] gets over a certain % of load/fullness, we copy it over to another, larger list.
Regarding Time Complexity Confusion:
So one question is: why is the average search/access O(1) if there can be a linked list of N items at a given index?
Wouldn't the average case be that the search item is in the middle of its indexed linked list, so it should be O(n/2) -> O(n)?
Also, when removing an item, if we replace it with a placeholder value, isn't that a waste of memory if the placeholder is never used again?
And finally, what is the difference between this and a HashMap, other than that HashMaps can have nulls, and that HashMaps are key/value while hash sets are just values?
For your first question - why is the average time complexity of a lookup O(1)? - this statement is in general only true if you have a good hash function. An ideal hash function is one that causes a nice spread on its elements. In particular, hash functions are usually chosen so that the probability that any two elements collide is low. Under this assumption, it's possible to formally prove that the expected number of elements to check is O(1). If you search online for "universal family of hash functions," you'll probably find some good proofs of this result.
As for using placeholders - there are several different ways to implement a hash table. The approach you're using is called "closed addressing" or "hashing with chaining," and in that approach there's little reason to use placeholders. However, other hashing strategies exist as well. One common family of approaches is called "open addressing" (the most famous of which is linear probing hashing), and in those setups placeholder elements are necessary to avoid false negative lookups. Searching online for more details on this will likely give you a good explanation about why.
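For a concrete picture of why open addressing needs placeholders, here is a toy linear-probing set with a tombstone marker (a sketch for illustration only, with no resizing; it is not how CPython actually implements sets):

_TOMBSTONE = object()  # placeholder left behind by deletions

class ProbingSet:
    """Toy hash set using linear probing; no resizing, so keep it well under capacity."""

    def __init__(self, capacity=16):
        self._slots = [None] * capacity

    def _probe(self, value):
        # Returns (index, found): the slot holding value, or the slot to insert into.
        i = hash(value) % len(self._slots)
        first_free = None
        while self._slots[i] is not None:
            if self._slots[i] is _TOMBSTONE:
                if first_free is None:
                    first_free = i            # reusable slot, but keep probing
            elif self._slots[i] == value:
                return i, True
            i = (i + 1) % len(self._slots)
        return (first_free if first_free is not None else i), False

    def add(self, value):
        i, found = self._probe(value)
        if not found:
            self._slots[i] = value

    def __contains__(self, value):
        return self._probe(value)[1]

    def discard(self, value):
        i, found = self._probe(value)
        if found:
            # Writing None here would cut the probe chain and hide later entries;
            # the tombstone keeps lookups for those entries working.
            self._slots[i] = _TOMBSTONE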
As for how this differs from HashMap, the HashMap is just one possible implementation of a map abstraction backed by a hash table. Java's HashMap does support nulls, while other approaches don't.
The lookup time wouldn't be O(n), because not all items need to be searched; it also depends on the number of buckets. More buckets decrease the probability of a collision and reduce the chain length.
The number of buckets can be kept as a constant factor of the number of entries by resizing the hash table as needed. Along with a hash function that evenly distributes the values, this keeps the expected chain length bounded, giving constant time lookups.
The hash tables used by hashmaps and hashsets are the same except they store different values. A hashset will contain references to a single value, and a hashmap will contain references to a key and a value. Hashsets can be implemented by delegating to a hashmap where the keys and values are the same.
A lot has been written here about open hash tables, but some fundamental points are missed.
Practical implementations generally have O(1) lookup and delete because they guarantee buckets won't contain more than a fixed number of items (the load factor). But this means they can only achieve amortized O(1) time for insert because the table needs to be reorganized periodically as it grows.
(Some may opt to reorganize on delete as well, to shrink the table when the load factor reaches some bottom threshold, but this only affects space, not asymptotic run time.)
Reorganization means increasing (or decreasing) the number of buckets and re-assigning all elements into their new bucket locations. There are schemes, e.g. extensible hashing, to make this a bit cheaper. But in general it means touching each element in the table.
Reorganization, then, is O(n). How can insert be O(1) when any given one may incur this cost? The secret is amortization and the power of powers. When the table is grown, it must be grown by a factor greater than one, two being most common. If the table starts with 1 bucket and doubles each time the load factor reaches F, then the cost of N reorganizations is
F + 2F + 4F + 8F ... (2^(N-1))F = (2^N - 1)F
At this point the table contains (2^(N-1))F elements, the number in the table during the last reorganization. I.e. we have done (2^(N-1))F inserts, and the total cost of reorganization is as shown on the right. The interesting part is the average cost per element in the table (or insert, take your pick):
(2^N - 1)F / (2^(N-1))F ~= 2^N / 2^(N-1) = 2
That's where the amortized O(1) comes from.
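A tiny simulation of that sum (with an arbitrary load-factor threshold F) shows the ratio settling at 2:

F = 8                              # load-factor threshold, chosen arbitrarily
buckets, reorg_cost = 1, 0
for _ in range(20):                # simulate 20 doublings
    inserts = buckets * F          # elements in the table when it is reorganized
    reorg_cost += inserts          # a reorganization touches every element
    buckets *= 2
print(reorg_cost / inserts)        # -> approaches 2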
One additional point is that for modern processors, linked lists aren't a great idea for the bucket lists. With 8-byte pointers, the overhead is meaningful. More importantly, heap-allocated nodes in a single list will almost never be contiguous in memory. Traversing such a list kills cache performance, which can slow things down by orders of magnitude.
Arrays (with an integer count for number of data-containing elements) are likely to work out better. If the load factor is small enough, just allocate an array equal in size to the load factor at the time the first element is inserted in the bucket. Otherwise, grow these element arrays by factors the same way as the bucket array! Everything will still amortize to O(1).
To delete an item from such a bucket, don't mark it deleted. Just copy the last array element to the location of the deleted one and decrement the element count. Of course this won't work if you allow external pointers into the hash buckets, but that's a bad idea anyway.
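A minimal sketch of that swap-with-last deletion for an array bucket (the names here are illustrative, not from any particular implementation):

def bucket_remove(bucket, count, index):
    """Remove bucket[index] from an array bucket with `count` live elements.

    The last live element is copied over the removed slot, so the live
    elements stay contiguous and no placeholder is needed. Returns the
    new element count; use it as: count = bucket_remove(bucket, count, i)
    """
    count -= 1
    bucket[index] = bucket[count]
    bucket[count] = None            # optional: release the reference
    return count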

Time complexity of iterating through python's Collections.counter [duplicate]

When working with dictionaries in Python, this page says that the time complexity of iterating through the element of the dictionary is O(n), where n is the largest size the dictionary has been.
However, I don't think that there is an obvious way to iterate through the elements of a hash table. Can I assume good performance of dict.iteritems() when iterating through element of a hash table, without too much overhead?
Since dictionaries are used a lot in Python, I assume that this is implemented in some smart way. Still, I need to make sure.
If you look at the notes on Python's dictionary source code, I think the relevant points are the following:
Those methods (iteration and key listing) loop over every potential entry
How many potential entries will there be, as a function of largest size (largest number of keys ever stored in that dictionary)? Look at the following two sections in the same document:
Maximum dictionary load in PyDict_SetItem. Currently set to 2/3
Growth rate upon hitting maximum load. Currently set to *2.
This would suggest that the fill fraction of a dictionary is going to be somewhere around 1/3 to 2/3 (unless the growth rate is set to *4, in which case it's 1/6 to 2/3). So basically you're going to be checking up to 3 (or 6 if *4) potential entries for every key.
Of course, whether it's 3 entries or 1000, it's still O(n) but 3 seems like a pretty acceptable constant factor.
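If you want to sanity-check that constant factor yourself, a rough timing of full iteration at a few sizes (sizes arbitrary; using Python 3's d.items() rather than iteritems()) should grow roughly linearly with the number of keys:

import timeit

for n in (10_000, 100_000, 1_000_000):
    setup = "d = {i: i for i in range(%d)}" % n
    t = timeit.timeit("for k, v in d.items(): pass", setup=setup, number=100)
    print(n, t)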
Incidentally here are the rest of the source & documentation, including that of the DictObject:
http://svn.python.org/projects/python/trunk/Objects/

Time complexity of python set operations?

What is the time complexity of each of Python's set operations in Big O notation?
I am using Python's set type for an operation on a large number of items. I want to know how each operation's performance will be affected by the size of the set. For example, add, and the test for membership:
myset = set()
myset.add('foo')
'foo' in myset
Googling around hasn't turned up any resources, but it seems reasonable that the time complexity for Python's set implementation would have been carefully considered.
If it exists, a link to something like this would be great. If nothing like this is out there, then perhaps we can work it out?
Extra marks for finding the time complexity of all set operations.
According to Python wiki: Time complexity, set is implemented as a hash table. So you can expect lookup/insert/delete to take O(1) on average. Unless your hash table's load factor is too high, in which case you face collisions and O(n).
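For example, a rough membership-test timing at two very different set sizes (sizes arbitrary) should come out roughly flat:

import timeit

for n in (1_000, 1_000_000):
    setup = "s = set(range(%d))" % n
    # Time a hit (element present) and a miss (element absent) separately.
    hit = timeit.timeit("0 in s", setup=setup, number=1_000_000)
    miss = timeit.timeit("-1 in s", setup=setup, number=1_000_000)
    print(n, hit, miss)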
P.S. For some reason they claim O(n) for the delete operation, which looks like a typo.
P.P.S. This is true for CPython, pypy is a different story.
The other answers do not talk about two crucial operations on sets: unions and intersections. In the worst case, a union will take O(n+m), whereas an intersection will take O(min(n,m)), provided that there are not many elements in the sets with the same hash. A list of time complexities of common operations can be found here: https://wiki.python.org/moin/TimeComplexity
The in operation should be independent of the size of the container, i.e. O(1) -- given an optimal hash function. This should be nearly true for Python strings. Hashing strings is always critical; Python should be clever there, and thus you can expect near-optimal results.

Time complexity of accessing a Python dict

I am writing a simple Python program.
My program seems to suffer from linear access to dictionaries,
its run-time grows exponentially even though the algorithm is quadratic.
I use a dictionary to memoize values. That seems to be a bottleneck.
The values I'm hashing are tuples of points.
Each point is: (x,y), 0 <= x,y <= 50
Each key in the dictionary is: A tuple of 2-5 points: ((x1,y1),(x2,y2),(x3,y3),(x4,y4))
The keys are read many times more often than they are written.
Am I correct that python dicts suffer from linear access times with such inputs?
As far as I know, sets have guaranteed logarithmic access times.
How can I simulate dicts using sets(or something similar) in Python?
edit As per request, here's a (simplified) version of the memoization function:
def memoize(fun):
    memoized = {}
    def memo(*args):
        key = args
        if key not in memoized:
            memoized[key] = fun(*args)
        return memoized[key]
    return memo
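It's applied as a decorator, roughly like this (score here is just an illustrative stand-in for the real memoized function):

@memoize
def score(points):
    # stand-in for the real computation; points is a tuple of (x, y) pairs
    return sum(x * y for x, y in points)

print(score(((1, 2), (3, 4))))   # computed and cached
print(score(((1, 2), (3, 4))))   # returned from the cache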
See Time Complexity. The Python dict is a hash map, so its worst case is O(n) if the hash function is bad and results in a lot of collisions. However, that is a very rare case, where every item added has the same hash and so is added to the same chain, which for a major Python implementation would be extremely unlikely. The average time complexity is of course O(1).
The best method would be to check the hashes of the objects you are using. The CPython dict uses int PyObject_Hash(PyObject *o), which is the equivalent of hash(o).
After a quick check, I have not managed to find two tuples that hash to the same value, which would indicate that the lookup is O(1):
l = []
for x in range(0, 50):
    for y in range(0, 50):
        if hash((x, y)) in l:
            print("Fail: ", (x, y))
        l.append(hash((x, y)))
print("Test Finished")
You are not correct. dict access is unlikely to be your problem here. It is almost certainly O(1), unless you have some very weird inputs or a very bad hashing function. Paste some sample code from your application for a better diagnosis.
It would be easier to make suggestions if you provided example code and data.
Accessing the dictionary is unlikely to be a problem, as that operation is O(1) on average and O(N) in the amortized worst case. It's possible that the built-in hashing functions are experiencing collisions for your data. If you're having problems with the built-in hash function, you can provide your own.
Python's dictionary implementation reduces the average complexity of dictionary lookups to O(1) by requiring that key objects provide a "hash" function. Such a hash function takes the information in a key object and uses it to produce an integer, called a hash value. This hash value is then used to determine which "bucket" this (key, value) pair should be placed into.
You can overwrite the __hash__ method in your class to implement a custom hash function like this:
def __hash__(self):
    return hash(str(self))
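In context, that method lives on the class of the key objects; a minimal sketch (the PointGroup class is made up purely for illustration) might look like:

class PointGroup:
    def __init__(self, points):
        self.points = tuple(points)

    def __str__(self):
        return str(self.points)

    # Keys used in a dict also need __eq__ consistent with __hash__.
    def __eq__(self, other):
        return isinstance(other, PointGroup) and self.points == other.points

    def __hash__(self):
        return hash(str(self))

memo = {PointGroup([(1, 2), (3, 4)]): 42}
print(memo[PointGroup([(1, 2), (3, 4)])])   # 42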
Depending on what your data actually looks like, you might be able to come up with a faster hash function that has fewer collisions than the standard function. However, this is unlikely. See the Python Wiki page on Dictionary Keys for more information.
To answer your specific questions:
Q1:
"Am I correct that python dicts suffer from linear access times with such inputs?"
A1: If you mean that the average lookup time is O(N), where N is the number of entries in the dict, then it is highly likely that you are wrong. If you are correct, the Python community would very much like to know under what circumstances, so that the problem can be mitigated or at least warned about. Neither "sample" code nor "simplified" code is useful. Please show actual code and data that reproduce the problem. The code should be instrumented with things like the number of dict items and the number of dict accesses for each P, where P is the number of points in the key (2 <= P <= 5).
Q2:
"As far as I know, sets have guaranteed logarithmic access times.
How can I simulate dicts using sets(or something similar) in Python?"
A2: Sets have guaranteed logarithmic access times in what context? There is no such guarantee for Python implementations. Recent CPython versions in fact use a cut-down dict implementation (keys only, no values), so the expectation is average O(1) behaviour. How can you simulate dicts with sets or something similar in any language? Short answer: with extreme difficulty, if you want any functionality beyond dict.has_key(key).
As others have pointed out, accessing dicts in Python is fast. They are probably the best-oiled data structure in the language, given their central role. The problem lies elsewhere.
How many tuples are you memoizing? Have you considered the memory footprint? Perhaps you are spending all your time in the memory allocator or paging memory.
My program seems to suffer from linear access to dictionaries, its run-time grows exponentially even though the algorithm is quadratic.
I use a dictionary to memoize values. That seems to be a bottleneck.
This is evidence of a bug in your memoization method.
