Time complexity of iterating through Python's collections.Counter [duplicate] - python

When working with dictionaries in Python, this page says that the time complexity of iterating through the elements of a dictionary is O(n), where n is the largest size the dictionary has been.
However, it is not obvious to me how one iterates through the elements of a hash table. Can I assume good performance of dict.iteritems() when iterating over a dictionary's elements, without too much overhead?
Since dictionaries are used a lot in Python, I assume that this is implemented in some smart way. Still, I need to make sure.

If you look at the notes on Python's dictionary source code, I think the relevant points are the following:
Those methods (iteration and key listing) loop over every potential entry
How many potential entries will there be, as a function of largest size (largest number of keys ever stored in that dictionary)? Look at the following two sections in the same document:
Maximum dictionary load in PyDict_SetItem. Currently set to 2/3
Growth rate upon hitting maximum load. Currently set to *2.
This would suggest that the fill ratio (load factor) of a dictionary stays somewhere between 1/3 and 2/3 (unless the growth rate is set to *4, in which case it is between 1/6 and 2/3). So basically you're going to be checking up to 3 (or 6 if *4) potential entries for every key.
Of course, whether it's 3 entries or 1000, it's still O(n) but 3 seems like a pretty acceptable constant factor.
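For what it's worth, a quick way to sanity-check this yourself is to time a full iteration at a few sizes and see that it grows roughly linearly. This is just an illustrative Python 3 sketch, not a measurement from the notes above, and the absolute numbers will depend on your machine:
import timeit

for n in (10_000, 100_000, 1_000_000):
    d = dict.fromkeys(range(n))
    # time 10 complete passes over the dictionary's keys
    t = timeit.timeit(lambda: sum(1 for _ in d), number=10)
    print(f"{n:>9} keys: {t:.4f} s for 10 full iterations")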
Incidentally, here is the rest of the source and documentation, including that of the dict object:
http://svn.python.org/projects/python/trunk/Objects/

Related

What is the time complexity of pop() for the set in Python?

I know that popping the last element of a list takes O(1).
And after reading this post
What is the time complexity of popping elements from list in Python?
I noticed that popping an element from an arbitrary position in a list takes O(n), since all the elements after it need to shift one position.
But a set has no order and no index, so I am not sure whether the same kind of shifting applies.
If not, is pop() for a set O(1)?
Thanks.
On modern CPython implementations, pop takes amortized constant-ish time (I'll explain further). On Python 2, it's usually the same, but performance can degrade heavily in certain cases.
A Python set is based on a hash table, and pop has to find an occupied entry in the table to remove and return. If it searched from the start of the table every time, this would take time proportional to the number of empty leading entries, and it would get slower with every pop.
To avoid this, the standard CPython implementation tries to remember the position of the last popped entry, to speed up sequences of pops. CPython 3.5+ has a dedicated finger member in the set memory layout to store this position, but earlier versions abuse the hash field of the first hash table entry to store this index.
On any Python version, removing all elements from a set with a sequence of pop operations will take time proportional to the size of the underlying hash table, which is usually within a small constant factor of the original number of elements (unless you'd already removed a bunch of elements). Mixing insertions and pops can interfere heavily with this on Python 2 if the inserted elements land in hash table index 0, trashing the search finger; this is much less of an issue on Python 3.
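As a rough, hedged illustration of the "proportional to the table size" point (CPython 3, timings machine-dependent), you can drain sets of different sizes and check that the average per-pop cost stays roughly flat:
import timeit

def drain(s):
    # pop() until the set is empty
    while s:
        s.pop()

for n in (10_000, 100_000, 1_000_000):
    s = set(range(n))
    t = timeit.timeit(lambda: drain(s), number=1)
    print(f"{n:>9} elements: {t / n * 1e9:.1f} ns per pop on average")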

Does python use the best possible algorithms in order to save the most time it can?

I have a question that might be very simple to answer.
I couldn't find the answer anywhere.
Does python use the best possible algorithms in order to save the most time it can?
I just saw on some website that, for example, the max method on lists is O(n) in Python, while there are approaches with better time complexity, as you know.
Is it true?
Should I use algorithms that I know can perform better in order to save time, or does Python already do this for me in its methods?
max method on lists is O(n) in Python, while there are approaches with better time complexity, as you know. Is it true?
No this is not true. Finding the maximum value in a list will require that all values in the list are inspected, hence O(n).
You may be confused with lists that have been prepared in some way. For instance:
You have a list that is already sorted (which is an O(n log n) process). In that case you can of course get the maximum in constant time, since you know its index. If the list is sorted in ascending order, it would be unwise to call max on it, as that would indeed be a waste of time. You may know the list is sorted, but Python will not assume this, and will still scan the whole list.
You have a list that has been heapified into a max-heap (which is an O(n) process). Again, in that case you can get the maximum in constant time, since it is stored at index 0. Lists can be heapified with heapq, although its public API only builds min-heaps; the usual trick for a max-heap is to store negated values.
So, if you know nothing about your list, then you will have to inspect all values to be sure to identify the maximum. That is what max() does. In case you do know something more that could help to identify the maximum without having to look at all values, then use another, more appropriate method.
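To make the distinction concrete, here is a small illustrative sketch (Python 3; the sample list is made up, and the negation trick is the usual workaround since heapq only builds min-heaps):
import heapq

data = [7, 3, 9, 1, 4]

# No structure known: max() scans everything, O(n).
print(max(data))            # 9

# Already sorted ascending: the maximum is simply the last element, O(1).
s = sorted(data)            # the sort itself costs O(n log n)
print(s[-1])                # 9

# Max-heap via negated values: the maximum sits at the root, O(1) to read.
neg = [-x for x in data]
heapq.heapify(neg)          # heapify is O(n)
print(-neg[0])              # 9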
Should I use algorithms that I know can perform better in order to save time, or does Python already do this for me in its methods?
You should use the algorithms that you know can perform better (based on what you know about the data structure). In many cases such a better algorithm is available via a Python library. For example, to find a particular value in a sorted list, use bisect.bisect_left rather than list.index.
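A minimal sketch of that bisect point (Python 3; the list and the searched value are made up for illustration):
import bisect

values = [2, 5, 5, 8, 13, 21]            # must already be sorted
i = bisect.bisect_left(values, 8)        # O(log n) binary search
if i < len(values) and values[i] == 8:
    print("found 8 at index", i)         # found 8 at index 3
# values.index(8) would find the same position, but by scanning in O(n)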
Look at a more complex example. Say you have written code that can generate chess moves and simulate a game of chess. You have good ideas about evaluation functions, alpha-beta pruning, killer moves, lookup tables, ... and a bunch of other optimisation techniques. You cannot expect Python to get smart when you issue a naive max on "all" evaluated chess states. You need to implement the complex algorithm to efficiently search and filter the right states, to get the "best" chess move out of that forest of information without wasting time on less promising moves.
A Python list is a sequential and contiguous container. That means that accessing the i-th element takes constant time, and appending to the end is cheap if no reallocation is required.
Finding a value is O(n) (about n/2 comparisons on average), and finding the min or max is also O(n).
If you want a list-like structure in which the minimum value can be found in O(1), the heapq module, which maintains a binary heap inside a list, is available.
But Python offers only a few specialized containers in its standard library.
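A minimal heapq sketch of that O(1)-minimum point (Python 3; the values are arbitrary):
import heapq

values = [5, 1, 9, 3]
heapq.heapify(values)            # O(n), rearranges the list in place
print(values[0])                 # 1 -- the current minimum, read in O(1)
heapq.heappush(values, 0)        # O(log n)
print(heapq.heappop(values))     # 0 -- pops the minimum in O(log n)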
In terms of complexity, you'll find that python almost always uses solutions based on algorithms with best complexity. Performance may vary depending on constants, and python is just not the fastest language compared to C or C++.
In this case, if you're looking for max value from a list, there is no better solution - to find maximum value, you have to check every value, meaning solution is O(n). That's just how lists work - it's just list with values. If you were to use some other structure, e.g. sorted list - accessing max value would take O(1) - but you would pay for this low complexity with higher complexity of adding/deleting values.
It differs from library to library.
The default Python library routines, like the sort function (when you don't pick an algorithm yourself), use an efficient algorithm by default (Timsort, in the case of sorting).
Sadly, Python is quite slow in general compared to languages like C, C++ or Java.
This is because Python is interpreted: the interpreter reads your script and executes it live, whereas C and C++ compile to native binaries before executing, and Java compiles to bytecode that the JVM runs.
//SW

Is my understanding of Hashsets correct?(Python)

I'm teaching myself data structures through this Python book, and I'd appreciate it if someone could correct me if I'm wrong, since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So hash(some_item) --> index, then list[index], and then add to the head of a linked list. This occurs in O(1) time.
When removing a value from the linkedlist, in python we replace it with a placeholder because hashsets are not allowed to have Null/None values, correct?
When the list[] gets over a certain % of load/fullness, we copy it over to another list
Regarding Time Complexity Confusion:
So one question is, why is Average search/access O(1) if there can be a list of N items at the linkedlist at a given index?
Wouldn't the average case be that the search item is in the middle of the linked list at its index, so it should be O(n/2) -> O(n)?
Also, when removing an item, if we are replacing it with a placeholder value, isn't this considered a waste of memory if the placeholder is never used?
And finally, what is the difference between this and a HashMap other than HashMaps can have nulls? And HashMaps are key/value while Hashsets are just value?
For your first question - why is the average time complexity of a lookup O(1)? - this statement is in general only true if you have a good hash function. An ideal hash function is one that causes a nice spread on its elements. In particular, hash functions are usually chosen so that the probability that any two elements collide is low. Under this assumption, it's possible to formally prove that the expected number of elements to check is O(1). If you search online for "universal family of hash functions," you'll probably find some good proofs of this result.
As for using placeholders - there are several different ways to implement a hash table. The approach you're using is called "closed addressing" or "hashing with chaining," and in that approach there's little reason to use placeholders. However, other hashing strategies exist as well. One common family of approaches is called "open addressing" (the most famous of which is linear probing hashing), and in those setups placeholder elements are necessary to avoid false negative lookups. Searching online for more details on this will likely give you a good explanation about why.
As for how this differs from HashMap, the HashMap is just one possible implementation of a map abstraction backed by a hash table. Java's HashMap does support nulls, while other approaches don't.
The lookup time wouldn't be O(n) because not all items need to be searched, it also depends on the number of buckets. More buckets would decrease the probability of a collision and reduce the chain length.
The number of buckets can be kept as a constant factor of the number of entries by resizing the hash table as needed. Along with a hash function that evenly distributes the values, this keeps the expected chain length bounded, giving constant time lookups.
The hash tables used by hashmaps and hashsets are the same except they store different values. A hashset will contain references to a single value, and a hashmap will contain references to a key and a value. Hashsets can be implemented by delegating to a hashmap where the keys and values are the same.
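As a hedged illustration of that last point, here is a toy set type (the class and its names are made up for illustration) backed by a plain dict, storing each element as both key and value:
class ToySet:
    def __init__(self, iterable=()):
        self._d = {x: x for x in iterable}

    def add(self, x):
        self._d[x] = x

    def discard(self, x):
        self._d.pop(x, None)

    def __contains__(self, x):
        return x in self._d       # average O(1), same as the underlying dict

    def __len__(self):
        return len(self._d)

s = ToySet([1, 2, 3])
s.add(4)
s.discard(2)
print(3 in s, 2 in s, len(s))     # True False 3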
A lot has been written here about open hash tables, but some fundamental points are missed.
Practical implementations generally have O(1) lookup and delete because they guarantee buckets won't contain more than a fixed number of items on average (the load factor). But this means they can only achieve amortized O(1) time for insert, because the table needs to be reorganized periodically as it grows.
(Some may opt to reorganize on delete as well, to shrink the table when the load factor reaches some bottom threshold, but this only affects space, not asymptotic run time.)
Reorganization means increasing (or decreasing) the number of buckets and re-assigning all elements into their new bucket locations. There are schemes, e.g. extensible hashing, to make this a bit cheaper. But in general it means touching each element in the table.
Reorganization, then, is O(n). How can insert be O(1) when any given one may incur this cost? The secret is amortization and the power of powers. When the table is grown, it must be grown by a factor greater than one, two being most common. If the table starts with 1 bucket and doubles each time the load factor reaches F, then the cost of N reorganizations is
F + 2F + 4F + 8F ... (2^(N-1))F = (2^N - 1)F
At this point the table contains (2^(N-1))F elements, the number in the table during the last reorganization. I.e. we have done (2^(N-1))F inserts, and the total cost of reorganization is as shown on the right. The interesting part is the average cost per element in the table (or insert, take your pick):
(2^N - 1)F / (2^(N-1))F  ≈  2^N / 2^(N-1)  =  2
That's where the amortized O(1) comes from.
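One way to watch the geometric growth behind that amortization in practice is to observe when a CPython set reallocates its table as elements are added. This is implementation- and version-dependent, so treat it as a rough sketch rather than a specification:
import sys

s = set()
last = sys.getsizeof(s)
for i in range(200_000):
    s.add(i)
    size = sys.getsizeof(s)
    if size != last:              # the underlying table was reallocated
        print(f"grew at {len(s):>7} elements: {last} -> {size} bytes")
        last = size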
One additional point is that for modern processors, linked lists aren't a great idea for the bucket lists. With 8-byte pointers, the overhead is meaningful. More importantly, heap-allocated nodes in a single list will almost never be contiguous in memory. Traversing such a list kills cache performance, which can slow things down by orders of magnitude.
Arrays (with an integer count for number of data-containing elements) are likely to work out better. If the load factor is small enough, just allocate an array equal in size to the load factor at the time the first element is inserted in the bucket. Otherwise, grow these element arrays by factors the same way as the bucket array! Everything will still amortize to O(1).
To delete an item from such a bucket, don't mark it deleted. Just copy the last array element to the location of the deleted one and decrement the element count. Of course this won't work if you allow external pointers into the hash buckets, but that's a bad idea anyway.
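A tiny sketch of that swap-delete trick (the bucket here is just an illustrative Python list standing in for the element array):
def bucket_delete(bucket, index):
    # Overwrite the deleted slot with the last element, then drop the tail.
    bucket[index] = bucket[-1]
    bucket.pop()                  # removing the last element is O(1)

b = ['a', 'b', 'c', 'd']
bucket_delete(b, 1)               # delete 'b'
print(b)                          # ['a', 'd', 'c'] -- order is not preserved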

Time differences when popping out items from dictionary of different lengths

I am designing a piece of software in Python, and I got a little curious about whether there is any time difference between popping items from a dictionary of very small length and popping items from a dictionary of very large length, or whether it is the same in all cases.
You can easily answer this question for yourself using the timeit module. But the entire point of a dictionary is near-instant access to any desired element by key, so I would not expect to have a large difference between the two scenarios.
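For example, a rough timeit sketch along those lines (Python 3; it times building plus popping, and absolute numbers will vary by machine):
import timeit

def build_and_drain(n):
    d = dict.fromkeys(range(n))
    for k in range(n):
        d.pop(k)

for n in (100, 1_000_000):
    reps = 1_000_000 // n          # keep the total amount of work comparable
    t = timeit.timeit(lambda: build_and_drain(n), number=reps)
    print(f"{n:>9} items: {t / (n * reps) * 1e9:.1f} ns per insert+pop")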
Check out this article on Python TimeComplexity:
The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys.
Note that there is a fast-path for dicts that (in practice) only deal with str keys; this doesn't affect the algorithmic complexity, but it can significantly affect the constant factors: how quickly a typical program finishes.
According to this article, for a 'Get Item' operation the average case is O(1), with a worst case of O(n). In other words, the worst case is that the time increases linearly with size. See Big O Notation on Wikipedia for more information.

Is it possible to give a python dict an initial capacity (and is it useful)

I am filling a Python dict with around 10,000,000 items. My understanding of dicts (or hash tables) is that when too many elements get into them, they need to resize, an operation that costs quite some time.
Is there a way to say to a python dict that you will be storing at least n items in it, so that it can allocate memory from the start? Or will this optimization not do any good to my running speed?
(And no, I have not checked that the slowness of my small script is because of this; I actually wouldn't know how to do that. This is, however, something I would do in Java: set the initial capacity of the HashSet right.)
First off, I've heard rumor that you can set the size of a dictionary at initialization, but I have never seen any documentation or PEP describing how this would be done.
With this in mind, I ran an analysis on your quantity of items, described below. While it may take some time to resize the dictionary each time, I would recommend moving ahead without worrying about it, at least until you can test its performance.
The two rules that concern us in determining resizing is number of elements and factor of resizing. A dictionary will resize itself when it is 2/3 full on the addition of the element putting it over the 2/3 mark. Below 50,000 elements it will increase by a factor of 4, above that amount by a factor of 2. Using your estimate of 10,000,000 elements (between 2^23 and 2^24) your dictionary will resize itself 15 times (7 times below 50k, 8 times above). Another resize would occur just past 11,100,000.
Resizing and replacing the current elements in the hashtable does take some time, but I wonder if you'd notice it with whatever else you have going on in the code nearby. I just put together a timing suite comparing inserts at five places along each boundary from dictionary sizes of 2^3 through 2^24, and the "border" additions average 0.4 nanoseconds longer than the "non-border" additions. This is 0.17% longer... probably acceptable. The minimum for all operations was 0.2085 microseconds, and max was 0.2412 microseconds.
Hope this is insightful, and if you do check the performance of your code, please follow up with an edit! My primary resource for dictionary internals was the splendid talk given by Brandon Rhodes at PyCon 2010: The Mighty Dictionary.
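If you want to see where the resizes happen on your own interpreter, here is a rough CPython-specific sketch (the exact thresholds are implementation details and have changed across versions, so treat the output as indicative only):
import sys

d = {}
last = sys.getsizeof(d)
for i in range(1_000_000):
    d[i] = None
    size = sys.getsizeof(d)
    if size != last:              # the underlying table was reallocated
        print(f"resized at {len(d):>8} keys: {last} -> {size} bytes")
        last = size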
Yes you can, and here is a solution I found in another person's question that is related to yours:
import itertools

d = {}
for i in xrange(4000000):
    d[i] = None
# 722 ms

d = dict(itertools.izip(xrange(4000000), itertools.repeat(None)))
# 634 ms

dict.fromkeys(xrange(4000000))
# 558 ms

s = set(xrange(4000000))
dict.fromkeys(s)
# Not including set construction: 353 ms
Those are different ways to build a dictionary of a given size up front, with the timings shown.
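Note that the snippet above is Python 2 (xrange, itertools.izip). A hedged Python 3 rewrite of the same comparisons, using timeit so you can reproduce the numbers on your own machine:
import itertools
import timeit

N = 4_000_000

def loop_fill():
    # plain loop, one insertion at a time
    d = {}
    for i in range(N):
        d[i] = None

print(timeit.timeit(loop_fill, number=1))
print(timeit.timeit(lambda: dict(zip(range(N), itertools.repeat(None))), number=1))
print(timeit.timeit(lambda: dict.fromkeys(range(N)), number=1))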
