Find the points with the steepest slope - python

I have a list of float points such as [x1,x2,x3,x4,....xn] that are plotted as a line graph. I would like to find the set of points where the slope is the steepest.
Right now, I'm calculating the difference between consecutive points in a loop and using the max() function to find the maximum.
Any other elegant way of doing this?

Assuming points is the list of your values, you can calculate the differences in a single line using:
max_slope = max(z - x for x, z in zip(points[:-1], points[1:]))
(Use abs(z - x) instead if a steep drop should count as well as a steep rise.) But what you gain in compactness, you probably lose in readability.
What happens in this list comprehension is the following:
Two lists are created based on the original one, namely points[:-1] and points[1:]. The slice points[:-1] starts from the beginning of the original list and goes to the second-to-last item (inclusive). The slice points[1:] starts from the second item and goes all the way to the last item (again inclusive).
Example
example_list = [1, 2, 3, 4, 5]
ex_a = example_list[:-1] # [1, 2, 3, 4]
ex_b = example_list[1:] # [2, 3, 4, 5]
Then you zip the two lists, creating an object from which you can draw x, z pairs to calculate your differences. Note that zip does not return a list in Python 3, so you need to pass its return value to the list constructor if you want to inspect the pairs.
Like:
example_list = [1, 2, 3, 4, 5]
ex_a = example_list[:-1] # [1, 2, 3, 4]
ex_b = example_list[1:] # [2, 3, 4, 5]
print(list(zip(ex_a, ex_b))) # [(1, 2), (2, 3), (3, 4), (4, 5)]
Finally, you calculate the differences using the created pairs, store the results in a list and get the maximum value.
If the location of the max slope is also interesting, you can get the index from the created list by using the .index() method. In that case, though, it would be better to keep the list created by the comprehension around rather than consuming it inline, as sketched below.
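For example, a minimal sketch (assuming points holds your values) that keeps the intermediate list so both the value and the position are available:
points = [1.0, 3.5, 2.0, 7.0]  # example values
diffs = [z - x for x, z in zip(points[:-1], points[1:])]
max_slope = max(diffs)
max_index = diffs.index(max_slope)  # index of the segment's left endpoint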

Numpy has a number of tools for working with arrays. For example, you could:
import numpy as np
xx = np.array([x1, x2, x3, x4, ...]) # your list of values goes in there
print(np.argmax(xx[1:] - xx[:-1])) # index where the steepest rise starts
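NumPy's built-in np.diff computes the same consecutive differences and may read more clearly; a small sketch with example values:
import numpy as np
xx = np.array([1.0, 3.5, 2.0, 7.0])  # example values
slopes = np.diff(xx)                 # equivalent to xx[1:] - xx[:-1]
print(np.argmax(slopes))             # index where the steepest rise starts
print(np.argmax(np.abs(slopes)))     # steepest segment in either direction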

Related

Create random sequence of comparison pairs (x, y) so that subsequent x and y values are not repeated

I have the following list:
item_list = [1, 2, 3, 4, 5]
I want to compare each item in the list to the other items to generate comparison pairs, such that the same comparisons (x, y) and (y, x) are not repeated (i.e. I don't want both [1, 5] and [5, 1]). For the 5 items in the list, this would generate a total of 10 comparison pairs (n*(n-1)/2). I also want to randomize the order of the pairs such that both x- and y-values aren't the same as the adjacent x- and y-values.
For example, this is fine:
[1, 5]
[3, 2]
[5, 4]
[4, 2]
...
But not this:
[1, 5]
[1, 4] <-- here the x-value is the same as the previous x-value
[2, 4] <-- here the y-value is the same as the previous y-value
[5, 3]
...
I have only been able to come up with a method in which I manually create the pairs by zipping two lists together (example below), but this is obviously very time-consuming (and would be even more so if I wanted to increase the list of items to 10, which would generate 45 pairs). I also can't randomize the order each time, otherwise I could get repetitions of the same x- or y-values.
x_list = [1, 4, 1, 3, 1, 4, 1, 2, 5, 3]
y_list = [2, 5, 3, 5, 4, 2, 5, 3, 2, 4]
zip_list = zip(x_list, y_list)
paired_list = list(zip_list)
print(paired_list)
I hope that makes sense. I am very new to coding, so any help would be much appreciated!
Edit: For context, my experiment involves displaying two images next to each other on the screen. I have a total of 5 images (labeled 1-5), hence the 5-item list. For each image pair, the participant must select one of the two images, which is why I don't want the same image displayed at the same time (e.g. [1, 1]), and I don't need the same pair repeated (e.g. [1, 5] and [5, 1]). I also want to make sure that each time the screen displays a new pair of images, both images, in their respective positions on the screen, change. So it doesn't matter if an image repeats in the sequence, so as long as it changes position (e.g. [4, 3] followed by [5, 4] is ok).
carrvo's answer is good, but doesn't guarantee the requirement that each iteration-step causes the x-value to change and the y-value to change.
(I'm also not a fan of mutability, such as shuffling in place, but in some contexts it's more performant.)
I haven't thought of an elegant, concise implementation, but I do see a slightly clever trick: Because each pair appears only once, we're already guaranteed to have either x or y change, so if we see a pair for which they don't both change, we can just swap them.
I haven't tested this.
from itertools import combinations
from random import sample  # not cryptographically secure.

def human_random_pairs(items):
    n = len(items)
    random_pairs = sample(list(combinations(items, 2)),
                          n * (n - 1) // 2)  # sample needs a sequence and an integer count
    def generator():
        old = random_pairs[0]
        yield old
        for new in random_pairs[1:]:
            # or you can use any, a comprehension, and zip; your choice.
            collision = old[0] == new[0] or old[1] == new[1]
            old = tuple(reversed(new)) if collision else new
            yield old
    return tuple(generator())
This wraps the output in a tuple; you can use a list if you like, or depending on your usage you can probably unwrap the inner function and just yield directly from human_random_pairs, in which case it will "return" the iterable/generator.
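For instance, the unwrapped version might look like this (an untested sketch, same caveats as above):
from itertools import combinations
from random import sample

def human_random_pairs(items):
    n = len(items)
    random_pairs = sample(list(combinations(items, 2)), n * (n - 1) // 2)
    old = random_pairs[0]
    yield old
    for new in random_pairs[1:]:
        collision = old[0] == new[0] or old[1] == new[1]
        old = tuple(reversed(new)) if collision else new
        yield old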
Oh, actually we can use itertools.accumulate:
from itertools import accumulate, combinations, starmap
from operator import eq
from random import sample  # not cryptographically secure.

def human_random_pairs(items):
    n = len(items)
    def maybe_flip_second(fst, snd):
        return tuple(reversed(snd)) if any(starmap(eq, zip(fst, snd))) else snd
    return tuple(  # this outer wrapper is optional
        accumulate(sample(list(combinations(items, 2)), n * (n - 1) // 2),  # n * (n - 1) // 2 == C(n, 2), the number of pairs
                   maybe_flip_second)
    )
I had to look up how to generate combinations and random orderings because I have not used them so often, but you should be looking for something like the following:
from itertools import combinations
from random import shuffle
item_list = range(1, 6) # [1, 2, 3, 4, 5]
paired_list = list(combinations(item_list, 2))
shuffle(paired_list)
print(paired_list)
Thank you for the contributions! I'm posting the solution I ended up using below for anyone who might be interested; it uses carrvo's code for generating random comparisons and the pair-reversal idea from ShapeOfMatter. Overall it does not look very elegant and can likely be simplified, but at least it generates the desired output.
from itertools import combinations
import random

# Create image pair comparisons and randomize order
no_of_images = 5
image_list = range(1, no_of_images + 1)
pairs_list = list(combinations(image_list, 2))
random.shuffle(pairs_list)
print(pairs_list)

# Create new comparisons sequence with no x- or y-value repeats, by reversing pairs that clash
trial_list = []
trial_list.append(pairs_list[0])  # append first image pair
binary_list = [0]  # track whether preceding pairs have been reversed (0 = not reversed, 1 = reversed)

# For subsequent pairs, if x- or y-values are repeated, reverse the pair
for i in range(len(pairs_list) - 1):
    # if previous pair was reversed, check against the new (reversed) pair
    if binary_list[i] == 1:
        if trial_list[i][0] == pairs_list[i + 1][0] or trial_list[i][1] == pairs_list[i + 1][1]:
            trial_list.append(tuple(reversed(pairs_list[i + 1])))  # if x- or y-value repeats, reverse pair
            binary_list.append(1)  # flag reversal
        else:
            trial_list.append(pairs_list[i + 1])
            binary_list.append(0)
    # if previous pair was not reversed, check against the old pair
    elif binary_list[i] == 0:
        if pairs_list[i][0] == pairs_list[i + 1][0] or pairs_list[i][1] == pairs_list[i + 1][1]:
            trial_list.append(tuple(reversed(pairs_list[i + 1])))  # if x- or y-value repeats, reverse pair
            binary_list.append(1)  # flag reversal
        else:
            trial_list.append(pairs_list[i + 1])
            binary_list.append(0)

print(trial_list)

Find mapping that translates one list of clusters to another in Python

I am using scikit-learn to cluster some data, and I want to compare the results of different clustering techniques. I am immediately faced with the issue that the labels for the clusters are different for different runs, so even if they are clustered exactly the same the similarity of the lists is still very low.
Say I have
list1 = [1, 1, 0, 5, 5, 1, 8, 1]
list2 = [3, 3, 1, 2, 2, 3, 8, 3]
I would (ideally) like a function that returns the best mapping in the form of a translation dictionary like this:
findMapping(list1, list2)
>>> {0:1, 1:3, 5:2, 8:8}
And I said "best mapping" because let's say list3 = [3, 3, 1, 2, 2, 3, 8, 4] then findMapping(list1, list3) would still return the same mapping even though the final 1 turns into a 4 instead of a 3.
So the best mapping is the one that minimizes the number of differences between the two lists. I think that's a good criterion, but there may be a better one.
I could write a trial-and-error optimization algorithm to do this, but I'm hardly the first person to want to compare the results of clustering algorithms. I expect something like this already exists and I just don't know what it's called. But I searched around and didn't find any answers.
The point is that after applying the best translation I will measure the difference between the lists, so maybe there is a way to measure the difference between lists of numbers indexed differently without finding the translation as an intermediate step, and that's good too.
===================================
Based on Pallie's answer I was able to create the findMapping function, and then I took it one step further to create a translation function that returns the second list converted to the labels of the first list.
from sklearn.metrics.cluster import contingency_matrix
import munkres

def translateLabels(masterList, listToConvert):
    contMatrix = contingency_matrix(masterList, listToConvert)
    labelMatcher = munkres.Munkres()
    labelTranslator = labelMatcher.compute(contMatrix.max() - contMatrix)
    # unique labels in sorted order, matching the contingency matrix rows/columns
    uniqueLabels1 = sorted(set(masterList))
    uniqueLabels2 = sorted(set(listToConvert))
    translatorDict = {}
    for thisPair in labelTranslator:
        translatorDict[uniqueLabels2[thisPair[1]]] = uniqueLabels1[thisPair[0]]
    return [translatorDict[label] for label in listToConvert]
Even with this conversion (which I needed for consistent plotting of cluster colors), using the Rand index and/or normalized mutual information does seem like a good way to compare the differences that don't require a shared labeling.
I also like the idea of first sorting both lists according the values in the data, but that may not work when comparing clusters from very different data.
You could try calculating the adjusted Rand index between two results. This gives a score between -1 and 1, where 1 is a perfect match.
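For example, a quick sketch with scikit-learn, using the lists from the question:
from sklearn.metrics import adjusted_rand_score

list1 = [1, 1, 0, 5, 5, 1, 8, 1]
list2 = [3, 3, 1, 2, 2, 3, 8, 3]
print(adjusted_rand_score(list1, list2))  # 1.0, since the partitions are identical up to relabeling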
Or by taking the argmax of the contingency matrix:
import numpy as np
from sklearn.metrics.cluster import contingency_matrix
list1 = ['a', 'a', 'b', 'c', 'c', 'a', 'd', 'a']
list2 = [3, 3, 1, 2, 2, 3, 8, 3]
np.argmax(contingency_matrix(list1, list2), axis=1)
array([2, 0, 1, 3])
The first entry, 2, means that row 0 (the cluster "a") best matches column 2 (the label 3). The next entry, 0, means row 1 ("b") best matches column 0 (the label 1), and so on.
For the Hungarian method:
from munkres import Munkres
m = Munkres()
contmat = contingency_matrix(list1, list2)
m.compute(contmat.max() - contmat)
[(0, 2), (1, 0), (2, 1), (3, 3)]
using: https://github.com/bmc/munkres
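If you would rather avoid the extra dependency, SciPy ships the same Hungarian algorithm as linear_sum_assignment; a sketch, assuming the list1, list2, and contingency matrix from above:
from scipy.optimize import linear_sum_assignment
from sklearn.metrics.cluster import contingency_matrix

contmat = contingency_matrix(list1, list2)
row_ind, col_ind = linear_sum_assignment(contmat.max() - contmat)
print(list(zip(row_ind, col_ind)))  # [(0, 2), (1, 0), (2, 1), (3, 3)]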

Check if two nested lists are equivalent upon substitution

For some context, I'm trying to enumerate the number of unique situations that can occur when calculating the Banzhaf power indices for four players, when there is no dictator and there are either four or five winning coalitions.
I am using the following code to generate a set of lists that I want to iterate over.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(map(list, combinations(s, r)) for r in range(2, len(s) + 1))

def superpowerset(iterable):
    s = powerset(iterable)
    return chain.from_iterable(map(list, combinations(s, r)) for r in range(4, 6))

set_of_lists = superpowerset([1, 2, 3, 4])
However, two lists in this set shouldn't be considered unique if they are equivalent under remapping.
Using the following list as an example:
[[1, 2], [1, 3], [2, 3], [1, 2, 4]]
If each element 2 is renamed to 3 and vice-versa, we would get:
[[1, 3], [1, 2], [3, 2], [1, 3, 4]]
The order within each sub-list is unimportant, and the order of the sub-lists is also unimportant. Thus, the swapped list can be rewritten as:
[[1, 2], [1, 3], [2, 3], [1, 3, 4]]
There are 4 values, so there are P(4,4)=24 possible remappings that could occur (including the trivial mapping).
Is there any way to check this easily? Or, even better, is there are way to avoid generating these lists to begin with?
I'm not even sure how I would go about transforming the first list into the second list (but could brute force it from there). Also, I'm not restricted to data type (to a certain extent) and using frozenset would be fine.
Edit: The solution offered by tobias_k answers the "checking" question but, as noted in the comments, I think I have the wrong approach to this problem.
This is probably no complete solution yet, but it might show you a direction to investigate further.
You could map each element to some characteristics concerning the "topology", how it is "connected" with other elements. You have to be careful not to take the ordering in the sets into account, or -- obviously -- the element itself. You could, for example, consider how often the element appears, in what sized groups it appears, and something like this. Combine those metrics to a key function, sort the elements by that key, and assign them new names in that order.
import itertools

def normalize(lists):
    items = set(x for y in lists for x in y)
    counter = itertools.count()
    sorter = lambda x: sorted(len(y) for y in lists if x in y)
    mapping = {k: next(counter) for k in sorted(items, key=sorter)}
    return tuple(sorted(tuple(sorted(mapping[x] for x in y)) for y in lists))
This maps your two example lists to the same "normalized" list:
>>> normalize([[1, 2], [1, 3], [2, 3], [1, 2, 4]])
((0, 1), (0, 2), (1, 2), (1, 2, 3))
>>> normalize([[1, 3], [1, 2], [3, 2], [1, 3, 4]])
((0, 1), (0, 2), (1, 2), (1, 2, 3))
When applied to all the lists, it gets the count down from 330 to 36. I don't know if this is minimal, but it looks like a good start.
>>> normalized = set(map(normalize, set_of_lists))
>>> len(normalized)
36
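For reference, checking the "equivalent under remapping" relation directly is also feasible here, since there are only 24 permutations of four players. A brute-force sketch (equivalent and universe are just illustrative names):
from itertools import permutations

def equivalent(lists_a, lists_b, universe=(1, 2, 3, 4)):
    # Is lists_b equal to lists_a under some renaming of the players?
    target = frozenset(frozenset(c) for c in lists_b)
    for perm in permutations(universe):
        mapping = dict(zip(universe, perm))
        remapped = frozenset(frozenset(mapping[x] for x in c) for c in lists_a)
        if remapped == target:
            return True
    return False

print(equivalent([[1, 2], [1, 3], [2, 3], [1, 2, 4]],
                 [[1, 3], [1, 2], [3, 2], [1, 3, 4]]))  # True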

How does this code snippet rotating a matrix work?

While looking for a pythonic way to rotate a matrix, I came across this answer. However there is no explanation attached to it. I copied the snippet here:
rotated = zip(*original[::-1])
How does it work?
>>> lis = [[1,2,3], [4,5,6], [7,8,9]]
[::-1] reverses the list:
>>> rev = lis[::-1]
>>> rev
[[7, 8, 9], [4, 5, 6], [1, 2, 3]]
Now we use zip on all the items of rev and append each returned tuple to rotated:
>>> rotated = []
>>> for item in zip(rev[0],rev[1],rev[2]):
... rotated.append(item)
...
>>> rotated
[(7, 4, 1), (8, 5, 2), (9, 6, 3)]
zip picks items at the same index from each of the iterables passed to it (it runs only up to the shortest one) and returns them as a tuple.
What is *?
* is used for unpacking all the items of rev to zip, so instead of manually typing
rev[0], rev[1], rev[2], we can simply do zip(*rev).
The above zip loop could also be written as:
>>> rev = [[7, 8, 9], [4, 5, 6], [1, 2, 3]]
>>> min_length = min(len(x) for x in rev)  # find the min length among all items
>>> rotated = []
>>> for i in xrange(min_length):  # xrange is Python 2; use range in Python 3
...     items = tuple(x[i] for x in rev)  # collect items at the same index from each list inside rev
...     rotated.append(items)
...
>>> rotated
[(7, 4, 1), (8, 5, 2), (9, 6, 3)]
Complementary to the explanations by Ashwini and HennyH, here's an illustration of the process.
First, the [::-1] slice operator reverses the list of lists, taking the entire list (thus the first two arguments can be omitted) and using a step of -1.
Second, the zip function takes a number of lists and effectively returns a new list with rows and columns transposed. The * says that the list of lists is unpacked into several list arguments.
Combined, these two operations rotate the matrix.
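To make the two steps concrete (shown in Python 3, where zip returns an iterator):
original = [[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]]
reversed_rows = original[::-1]       # [[7, 8, 9], [4, 5, 6], [1, 2, 3]]
rotated = list(zip(*reversed_rows))  # [(7, 4, 1), (8, 5, 2), (9, 6, 3)]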
For my explanation:
>>> m = [['a','b','c'],[1,2,3]]
Which when pretty-printed would be:
>>> pprint(m)
['a', 'b', 'c']
[1, 2, 3]
Firstly, zip(*m) will create a list of all the columns in m. As demonstrated by:
>>> zip(*m)
[('a', 1), ('b', 2), ('c', 3)]
The way this works is that zip takes n sequences, gets the i-th element of each one, and adds them to a tuple. Translated to our matrix m, where each row is represented by a list contained within m, we essentially pass each row to zip, which then gets the 1st element from each row and puts all of them into a tuple, then gets every 2nd element from each row, etc., ultimately producing every column in m, i.e.:
>>> zip(['row1column1','row1column2'],['row2column1','row2column2'])
[('row1column1', 'row2column1'), ('row1column2', 'row2column2')]
Notice that each tuple contains all the elements in a specific column
Now that would look like:
>>> pprint(zip(*m))
('a', 1)
('b', 2)
('c', 3)
So effectively, each column in m is now a row. However, it isn't in the correct order (try to imagine rotating m in your head to get the matrix above; it can't be done). This is why it's necessary to 'flip' the original matrix:
>>> pprint(zip(*m[::-1]))
(1, 'a')
(2, 'b')
(3, 'c')
Which results in a matrix that is the equivalent of m rotated by 90 degrees clockwise.
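For completeness, doing the reversal after the transpose instead of before rotates the other way; a quick sketch:
m = [['a', 'b', 'c'], [1, 2, 3]]
clockwise = list(zip(*m[::-1]))         # [(1, 'a'), (2, 'b'), (3, 'c')]
counterclockwise = list(zip(*m))[::-1]  # [('c', 3), ('b', 2), ('a', 1)]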

How to get indices of a sorted array in Python

I have a numerical list:
myList = [1, 2, 3, 100, 5]
Now if I sort this list, I obtain [1, 2, 3, 5, 100].
What I want is the indices of the elements from the original list in the sorted order, i.e. [0, 1, 2, 4, 3] --- ala MATLAB's sort function that returns both values and indices.
If you are using numpy, you have the argsort() function available:
>>> import numpy
>>> numpy.argsort(myList)
array([0, 1, 2, 4, 3])
http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html
This returns the arguments that would sort the array or list.
Something like the following:
>>> myList = [1, 2, 3, 100, 5]
>>> [i[0] for i in sorted(enumerate(myList), key=lambda x:x[1])]
[0, 1, 2, 4, 3]
enumerate(myList) gives you a list containing tuples of (index, value):
[(0, 1), (1, 2), (2, 3), (3, 100), (4, 5)]
You sort the list by passing it to sorted and specifying a function to extract the sort key (the second element of each tuple; that's what the lambda is for). Finally, the original index of each sorted element is extracted using the [i[0] for i in ...] list comprehension.
myList = [1, 2, 3, 100, 5]
sorted(range(len(myList)),key=myList.__getitem__)
[0, 1, 2, 4, 3]
I did a quick performance check on these with perfplot (a project of mine) and found that it's hard to recommend anything else but
np.argsort(x)
[benchmark plot omitted; note the log scale]
Code to reproduce the plot:
import perfplot
import numpy as np

def sorted_enumerate(seq):
    return [i for (v, i) in sorted((v, i) for (i, v) in enumerate(seq))]

def sorted_enumerate_key(seq):
    return [x for x, y in sorted(enumerate(seq), key=lambda x: x[1])]

def sorted_range(seq):
    return sorted(range(len(seq)), key=seq.__getitem__)

b = perfplot.bench(
    setup=np.random.rand,
    kernels=[sorted_enumerate, sorted_enumerate_key, sorted_range, np.argsort],
    n_range=[2 ** k for k in range(15)],
    xlabel="len(x)",
)
b.save("out.png")
The answers with enumerate are nice, but I personally don't like the lambda used to sort by the value. The following just reverses the index and the value, and sorts that. So it'll first sort by value, then by index.
sorted((e,i) for i,e in enumerate(myList))
Updated answer with enumerate and itemgetter:
sorted(enumerate(a), key=lambda x: x[1])
# [(0, 1), (1, 2), (2, 3), (4, 5), (3, 100)]
enumerate effectively zips an index to each value: the first element in each tuple will be the index, the second is the value (we then sort using the second element of the tuple, x[1], where x is the tuple).
Or using itemgetter from the operator module:
from operator import itemgetter
sorted(enumerate(a), key=itemgetter(1))
Essentially you need to do an argsort, what implementation you need depends if you want to use external libraries (e.g. NumPy) or if you want to stay pure-Python without dependencies.
The question you need to ask yourself is: Do you want the
indices that would sort the array/list
indices that the elements would have in the sorted array/list
Unfortunately the example in the question doesn't make it clear what is desired because both will give the same result:
>>> arr = np.array([1, 2, 3, 100, 5])
>>> np.argsort(np.argsort(arr))
array([0, 1, 2, 4, 3], dtype=int64)
>>> np.argsort(arr)
array([0, 1, 2, 4, 3], dtype=int64)
Choosing the argsort implementation
If you have NumPy at your disposal you can simply use the function numpy.argsort or method numpy.ndarray.argsort.
An implementation without NumPy was mentioned in some other answers already, so I'll just recap the fastest solution according to the benchmark answer here
def argsort(l):
    return sorted(range(len(l)), key=l.__getitem__)
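For instance, the pure-Python version behaves like its NumPy counterpart:
>>> argsort([3, 1, 2, 4])
[1, 2, 0, 3]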
Getting the indices that would sort the array/list
To get the indices that would sort the array/list you can simply call argsort on the array or list. I'm using the NumPy versions here but the Python implementation should give the same results
>>> arr = np.array([3, 1, 2, 4])
>>> np.argsort(arr)
array([1, 2, 0, 3], dtype=int64)
The result contains the indices that are needed to get the sorted array.
Since the sorted array would be [1, 2, 3, 4] the argsorted array contains the indices of these elements in the original.
The smallest value is 1 and it is at index 1 in the original so the first element of the result is 1.
The 2 is at index 2 in the original so the second element of the result is 2.
The 3 is at index 0 in the original so the third element of the result is 0.
The largest value is 4 and it is at index 3 in the original, so the last element of the result is 3.
Getting the indices that the elements would have in the sorted array/list
In this case you would need to apply argsort twice:
>>> arr = np.array([3, 1, 2, 4])
>>> np.argsort(np.argsort(arr))
array([2, 0, 1, 3], dtype=int64)
In this case:
the first element of the original is 3, which is the third-smallest value, so it would have index 2 in the sorted array/list, so the first element is 2.
the second element of the original is 1, which is the smallest value so it would have index 0 in the sorted array/list so the second element is 0.
the third element of the original is 2, which is the second-smallest value so it would have index 1 in the sorted array/list so the third element is 1.
the fourth element of the original is 4 which is the largest value so it would have index 3 in the sorted array/list so the last element is 3.
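The same double application works with the pure-Python argsort given earlier; a quick sketch:
def argsort(l):
    return sorted(range(len(l)), key=l.__getitem__)

arr = [3, 1, 2, 4]
print(argsort(argsort(arr)))  # [2, 0, 1, 3]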
If you do not want to use numpy,
sorted(range(len(seq)), key=seq.__getitem__)
is fastest, as demonstrated here.
The other answers are WRONG.
Running argsort once is not the solution.
For example, the following code:
import numpy as np
x = [3,1,2]
np.argsort(x)
yields array([1, 2, 0], dtype=int64) which is not what we want.
The answer should be to run argsort twice:
import numpy as np
x = [3,1,2]
np.argsort(np.argsort(x))
gives array([2, 0, 1], dtype=int64) as expected.
The easiest way is to use the NumPy package for this purpose:
import numpy
s = numpy.array([2, 3, 1, 4, 5])
sort_index = numpy.argsort(s)
print(sort_index)
But if you want your code to use only basic Python:
s = [2, 3, 1, 4, 5]
li = []
for i in range(len(s)):
    li.append([s[i], i])
li.sort()
sort_index = []
for x in li:
    sort_index.append(x[1])
print(sort_index)
We will create another array of indexes from 0 to n-1, then zip this to the original array and sort it on the basis of the original values:
ar = [1, 2, 3, 4, 5]
new_ar = list(zip(ar, range(len(ar))))  # (value, index) pairs
new_ar.sort()  # sorts by value first, then by index
sort_index = [i for _, i in new_ar]
s = [2, 3, 1, 4, 5]
print([sorted(s, reverse=False).index(val) for val in s])
For a list with duplicate elements, it will return the rank without ties, e.g.
s = [2, 2, 1, 4, 5]
print([sorted(s, reverse=False).index(val) for val in s])
returns
[1, 1, 0, 3, 4]
import numpy as np
For the indices:
S = [11, 2, 44, 55, 66, 0, 10, 3, 33]
r = np.argsort(S)
# output: array([5, 1, 7, 6, 0, 8, 2, 3, 4])
argsort returns the indices of S in sorted order.
For the values:
np.sort(S)
# output: array([ 0,  2,  3, 10, 11, 33, 44, 55, 66])
Try this; it worked for me.
First, take your list:
myList = [1, 2, 3, 100, 5]
Then add an index to each of the list's items:
myList = [[0, 1], [1, 2], [2, 3], [3, 100], [4, 5]]
Next:
sorted(myList, key=lambda k:k[1])
result:
[[0, 1], [1, 2], [2, 3], [4, 5], [3, 100]]
A variant on RustyRob's answer (which is already the most performant pure Python solution) that may be superior when the collection you're sorting either:
Isn't a sequence (e.g. it's a set, and there's a legitimate reason to want the indices corresponding to how far an iterator must be advanced to reach the item), or
Is a sequence without O(1) indexing (among Python's included batteries, collections.deque is a notable example of this)
Case #1 is unlikely to be useful, but case #2 is more likely to be meaningful. In either case, you have two choices:
Convert to a list/tuple and use the converted version, or
Use a trick to assign keys based on iteration order
This answer provides the solution to #2. Note that it's not guaranteed to work by the language standard; the language says each key will be computed once, but not the order they will be computed in. On every version of CPython, the reference interpreter, to date, it's precomputed in order from beginning to end, so this works, but be aware it's not guaranteed. In any event, the code is:
sizediterable = ...
sorted_indices = sorted(range(len(sizediterable)), key=lambda _, it=iter(sizediterable): next(it))
All that does is provide a key function that ignores the value it's given (an index) and instead provides the next item from an iterator preconstructed from the original container (cached as a defaulted argument to allow it to function as a one-liner). As a result, for something like a large collections.deque, where using its .__getitem__ involves O(n) work (and therefore computing all the keys would involve O(n²) work), sequential iteration remains O(1), so generating the keys remains just O(n).
If you need something guaranteed to work by the language standard, using built-in types, Roman's solution will have the same algorithmic efficiency as this solution (as neither of them rely on the algorithmic efficiency of indexing the original container).
To be clear, for the suggested use case with collections.deque, the deque would have to be quite large for this to matter; deques have a fairly large constant divisor for indexing, so only truly huge ones would have an issue. Of course, by the same token, the cost of sorting is pretty minimal if the inputs are small/cheap to compare, so if your inputs are large enough that efficient sorting matters, they're large enough for efficient indexing to matter too.
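A quick usage sketch with collections.deque (hypothetical data, just to show the shape of the call; the CPython caveat above applies):
from collections import deque

d = deque([30, 10, 20])
sorted_indices = sorted(range(len(d)), key=lambda _, it=iter(d): next(it))
print(sorted_indices)  # [1, 2, 0] when keys are computed in iteration order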