Time complexity of min, max on sets - python

min and max have O(N) time complexity on lists and strings because they have to loop over the whole sequence and check every element to find the min/max. But I am wondering: what is the time complexity of min/max when used on a set? For example:
s = {1, 2, 3, 4}  # s is a set
Using min/max we get:
min(s)  # returns 1
max(s)  # returns 4
Since sets do not use indices like lists and strings, but instead store elements in buckets that can be accessed directly, does the time complexity of min/max differ from the general case?
Thank you!

As pointed out in the comments above, Python is a well-documented language and one should always refer to the docs first.
Answering the question, according to the docs,
A set object is an unordered collection of distinct hashable objects.
Being unordered means that evaluating the maximum or minimum among all the elements by any means (built-in or not) requires looking at each element at least once, which means O(n) complexity at best.
On top of that, Python's max and min functions iterate over every element and are O(n) in all cases.
You can always look up the source code yourself.
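For intuition, here is a minimal sketch (not CPython's actual implementation) of the linear scan that min performs over any iterable, a set included:

    def my_min(iterable):
        # One pass, one comparison per element -- O(n) regardless of the container type.
        it = iter(iterable)
        try:
            smallest = next(it)
        except StopIteration:
            raise ValueError("my_min() arg is an empty sequence")
        for value in it:
            if value < smallest:
                smallest = value
        return smallest

    s = {1, 2, 3, 4}
    print(my_min(s))  # 1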

Related

Does python use the best possible algorithms in order to save the most time it can?

I have a question that might be very simple to answer.
I couldn't find the answer anywhere.
Does python use the best possible algorithms in order to save the most time it can?
I just saw on some website that, for example, the time complexity of the max method on lists is O(n) in Python, while there are better time orders, as you know.
Is it true?
Should I use the algorithms that I know can perform better in order to save more time, or does Python already do this for me in its methods?
the time complexity of the max method on lists is O(n) in Python, while there are better time orders, as you know. Is it true?
No, this is not true. Finding the maximum value in a list requires that all values in the list be inspected, hence O(n).
You may be confused with lists that have been prepared in some way. For instance:
You have a list that is already sorted (which is an O(n log n) process). In that case you can of course get the maximum in constant time, since you know its index. If the list is sorted in ascending order, it would be unwise to call max on it, as that would indeed be a waste of time. You may know the list is sorted, but Python will not assume this, and will still scan the whole list.
You have a list that has been heapified to a max-heap (which is a O(n) process). Again, in that case you can get the maximum in constant time, since it is stored at index 0. Lists can be heapified with heapq -- the default being a min-heap.
So, if you know nothing about your list, then you will have to inspect all values to be sure to identify the maximum. That is what max() does. In case you do know something more that could help to identify the maximum without having to look at all values, then use another, more appropriate method.
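As a small illustration of the two "prepared list" cases above (a hedged sketch; the preparation cost is what buys the O(1) lookup):

    import heapq

    data = [3, 1, 4, 1, 5, 9]

    # Sorted list: the maximum is known by position, no scan needed.
    data.sort()                 # O(n log n) preparation
    maximum = data[-1]          # O(1): last element of an ascending sort

    # Heapified list: heapq builds a min-heap, so heap[0] is the minimum in O(1).
    # A max-heap can be simulated by negating the values.
    neg = [-x for x in data]
    heapq.heapify(neg)          # O(n) preparation
    maximum_via_heap = -neg[0]  # O(1)

    print(maximum, maximum_via_heap)  # 9 9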
Should I use the algorithms that I know can perform better in order to save more time, or does Python already do this for me in its methods?
You should use the algorithms that you know can perform better (based on what you know about a data structure). In many cases such a better algorithm is available via a Python library. For example, to find a particular value in a sorted list, use bisect.bisect_left and not index.
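For instance, a quick sketch of the difference on a sorted list (the values here are made up):

    import bisect

    sorted_values = [2, 5, 7, 11, 13, 17]

    # list.index: linear scan, O(n); it cannot exploit the sorted order.
    linear_pos = sorted_values.index(11)

    # bisect_left: binary search, O(log n); valid only because the list is sorted.
    binary_pos = bisect.bisect_left(sorted_values, 11)

    print(linear_pos, binary_pos)  # 3 3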
Look at a more complex example. Say you have written code that can generate chess moves and simulate a game of chess. You have good ideas about evaluation functions, alpha-beta pruning, killer moves, lookup tables, ... and a bunch of other optimisation techniques. You cannot expect Python to get smart when you issue a naive max on "all" evaluated chess states. You need to implement the complex algorithm to efficiently search and filter the right states, to get the "best" chess move out of that forest of information without wasting time on less promising moves.
A Python list is a sequential and contiguous container. That means that finding the i-th element takes constant time, and adding to the end is cheap if no reallocation is required.
Finding a given value requires scanning about half the list on average, and finding min or max is O(n): every element must be inspected.
If you want a list whose minimum value can be read in O(1), the heapq module, which maintains a binary heap inside a list, is available.
But Python offers few specialized containers in its standard library.
In terms of complexity, you'll find that Python almost always uses solutions based on algorithms with the best complexity. Performance may vary depending on constant factors, and Python is simply not the fastest language compared to C or C++.
In this case, if you're looking for the max value of a list, there is no better solution: to find the maximum value, you have to check every value, so the solution is O(n). That's just how lists work: a list is just a sequence of values. If you were to use some other structure, e.g. a sorted list, accessing the max value would take O(1), but you would pay for this low complexity with a higher complexity of adding/deleting values.
It differs from library to library.
The default Python library functions, like the sort function (when no algorithm is hand-picked), use an efficient algorithm (Timsort) by default.
Sadly, Python is quite slow in general compared to languages like C, C++ or Java.
This is because Python is interpreted: the interpreter reads your script and executes it at run time.
C and C++ compile to native binaries before executing, and Java compiles to bytecode that the JVM then runs (and JIT-compiles).

Sample one element from a set with O(1) time complexity

I have a set in Python and I want to sample one element from it, like with the random.sample() method. The problem is that sample() converts the set to a tuple internally, which is O(n), and I have to do this in the most optimal way.
Is there a function that I can use to sample an element from a set with O(1) time complexity, or is the only way to do this to create your own implementation of a set?
Because the data layout of a hash-based set is irregular, it is impossible to sample uniformly from it in O(1), except by preprocessing it into some sort of array, which only pays off if you will make ω(n) queries. (Such an array could be maintained while building the set, of course, but that is not the starting point given here and is no faster to build than the tuple conversion.)
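If you control how the collection is built, one common workaround is to keep a plain list alongside the hash lookup, so uniform O(1) sampling stays possible. The class below is a hypothetical sketch, not a drop-in replacement for set:

    import random

    class RandomChoiceSet:
        # Hypothetical sketch: set-like membership plus O(1) uniform sampling,
        # achieved by keeping a dense list of members next to an index dict.
        def __init__(self):
            self._items = []    # dense array of members
            self._index = {}    # value -> position in _items

        def add(self, value):
            if value not in self._index:
                self._index[value] = len(self._items)
                self._items.append(value)

        def discard(self, value):
            pos = self._index.pop(value, None)
            if pos is not None:
                last = self._items.pop()
                if pos < len(self._items):     # move the last element into the hole
                    self._items[pos] = last
                    self._index[last] = pos

        def sample(self):
            return random.choice(self._items)  # O(1) uniform choice

    s = RandomChoiceSet()
    for x in range(10):
        s.add(x)
    print(s.sample())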

find minimal and maximum element in Python

Suppose I am maintaining a set of integers (elements will be added/removed dynamically, and the integers might have duplicate values), and I need to efficiently find the max/min element of the current set. Are there any better solutions?
My current solution is to maintain a max-heap and a min-heap. I am using Python 2.7.x and am open to any 3rd-party pip packages that fit my problem.
Just use the min and max functions. There is no point in maintaining a heap. You only need a min/max heap if you want to perform this action (getting min/max) many times while adding and removing elements.
Do not forget that building a heap takes O(n) time (with a constant of about 2, as far as I remember). Only then can you peek at its minimum in O(1) and pop it in O(log(n)).
P.S. OK, now that you have said that you have to call min/max many times, you should make two heaps (a min-heap and a max-heap). It will take O(n) to construct each heap; after that, adding an element costs O(log(n)) per heap, peeking at the min/max of the corresponding heap is O(1), and removing an arbitrary element needs extra bookkeeping (e.g. lazy deletion) to stay around O(log(n)).
Whereas if you go with the min/max functions over a plain list, you do not need to construct anything: add takes O(1), remove O(n), and min/max O(n), which is much worse than the heaps when these queries happen many times.
P.P.S. Python has heaps in the standard library: see the heapq module.
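A minimal sketch of the two-heap idea with heapq (the max-heap is simulated by negating values; removing arbitrary elements would need extra bookkeeping such as lazy deletion, which is omitted here):

    import heapq

    values = [7, 2, 9, 4, 2]

    min_heap = list(values)
    max_heap = [-v for v in values]   # negate to simulate a max-heap
    heapq.heapify(min_heap)           # O(n)
    heapq.heapify(max_heap)           # O(n)

    def add(value):
        heapq.heappush(min_heap, value)    # O(log n)
        heapq.heappush(max_heap, -value)   # O(log n)

    add(11)
    print(min_heap[0], -max_heap[0])       # 2 11 -- current min and max in O(1)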
min(list_of_ints)
will yield the minimum of the list, and...
max(list_of_ints)
will yield the maximum of the list.
Hope that helps....
Using a sorted list may help. The first element will always be the minimum and the last element will always be the maximum, with O(1) complexity to get either value.
Using binary search (bisect) when adding/removing keeps the list sorted with an O(log(n)) search, though the underlying list insertion/deletion still shifts elements, so those operations are O(n) in the worst case.
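A small sketch of that approach with the standard bisect module (note the caveat above about the O(n) element shift on insert):

    import bisect

    data = []
    for value in [5, 1, 9, 3]:
        bisect.insort(data, value)   # O(log n) search; the list shift makes the insert O(n)

    current_min, current_max = data[0], data[-1]   # both O(1)
    print(current_min, current_max)  # 1 9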

Time complexity of iterating through python's Collections.counter [duplicate]

When working with dictionaries in Python, this page says that the time complexity of iterating through the elements of the dictionary is O(n), where n is the largest size the dictionary has been.
However, I don't see an obvious way to iterate through the elements of a hash table. Can I assume good performance of dict.iteritems() when iterating through the elements of a hash table, without too much overhead?
Since dictionaries are used a lot in Python, I assume that this is implemented in some smart way. Still, I need to make sure.
If you look at the notes on Python's dictionary source code, I think the relevant points are the following:
Those methods (iteration and key listing) loop over every potential entry
How many potential entries will there be, as a function of largest size (largest number of keys ever stored in that dictionary)? Look at the following two sections in the same document:
Maximum dictionary load in PyDict_SetItem. Currently set to 2/3
Growth rate upon hitting maximum load. Currently set to *2.
This would suggest that the sparsity of a dictionary is going to be somewhere around 1/3~2/3 (unless the growth rate is set to *4, then it's 1/6~2/3). So basically you're going to be checking up to 3 (or 6 if *4) potential entries for every key.
Of course, whether it's 3 entries or 1000, it's still O(n) but 3 seems like a pretty acceptable constant factor.
Incidentally, here is the rest of the source & documentation, including that of the DictObject:
http://svn.python.org/projects/python/trunk/Objects/
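If you want to convince yourself empirically, here is a rough timing sketch (the sizes are arbitrary, and Python 3's items() stands in for the iteritems() mentioned in the question); the time per full iteration should grow roughly linearly with the number of keys:

    from collections import Counter
    import timeit

    for size in (10_000, 100_000, 1_000_000):
        counter = Counter(range(size))
        elapsed = timeit.timeit(lambda: sum(1 for _ in counter.items()), number=10)
        print(size, "keys:", round(elapsed, 3), "seconds for 10 full iterations")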

Time differences when popping out items from dictionary of different lengths

I am designing a piece of software in Python and I got a little curious about whether there is any time difference between popping items from a dictionary of very small length and popping items from a dictionary of very large length, or whether it is the same in all cases.
You can easily answer this question for yourself using the timeit module. But the entire point of a dictionary is near-instant access to any desired element by key, so I would not expect a large difference between the two scenarios.
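For instance, a quick timeit sketch (the dictionary sizes are chosen arbitrarily) that pops a key and immediately puts it back so every run has something to pop:

    import timeit

    stmt = "d.pop(5); d[5] = 5"   # pop a key, then restore it for the next run
    small = timeit.timeit(stmt, setup="d = {i: i for i in range(10)}", number=100_000)
    large = timeit.timeit(stmt, setup="d = {i: i for i in range(1_000_000)}", number=100_000)
    print("small dict:", round(small, 4), "s  large dict:", round(large, 4), "s")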
Check out this article on Python TimeComplexity:
The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys.
Note that there is a fast-path for dicts that (in practice) only deal with str keys; this doesn't affect the algorithmic complexity, but it can significantly affect the constant factors: how quickly a typical program finishes.
According to this article, for a 'Get Item' operation the average case is O(1), with a worst case of O(n). In other words, the worst case is that the time increases linearly with size. See Big O Notation on Wikipedia for more information.
