Algorithm optimization - Find the unique number in a list - python

The goal is to find the unique number in an array in which all numbers are identical except one. Speed is of the essence, as arrays can be huge. My code below works for smaller arrays but times out for large ones. How can I improve the algorithm? See the input/output example below:
Input = [1, 1, 1, 1, 2, 1, 1, 1, 1]
Output = 2
def find_uniq(arr):
    result = [x for x in arr if arr.count(x) == 1]
    return result[0]

Your current solution is quadratic!
You can bring this down to linear time using collections.Counter in combination with next (handy when you don't want an entire list built only to be thrown away). The counts are precomputed in one pass, and then the first unique value found is returned.
from collections import Counter
def find_uniq(arr):
    c = Counter(arr)
    return next(x for x in arr if c[x] == 1)
next shines here because the goal is to return the first unique value found. next works with a generator, so only the first matching item is computed and returned; further computation halts.
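If the array might contain no unique value at all, the generator is exhausted and next raises StopIteration; next accepts a default as a second argument, so a sketch of a safer variant (the function name is illustrative) is:

```python
from collections import Counter

def find_uniq_or_none(arr):
    # Same one-pass counting idea; returns None instead of raising
    # StopIteration when every value is repeated.
    c = Counter(arr)
    return next((x for x in arr if c[x] == 1), None)
```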

Using NumPy:
np.where(np.bincount(arr) == 1)[0][0]
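A self-contained sketch of that NumPy idea, assuming the values are small non-negative integers (a requirement of np.bincount):

```python
import numpy as np

def find_uniq(arr):
    counts = np.bincount(arr)                # counts[v] = occurrences of value v
    return int(np.where(counts == 1)[0][0])  # the value whose count is exactly 1
```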

Sometimes the most naive solution is not the worst:
def find_uniq(arr):
    count = arr.count
    for x in arr:
        if count(x) == 1:
            return x
    return None
Depending on where the first unique item is, this can be faster or slower than Coldspeed's Counter-based solution (which seems mostly stable). In the worst case it's only marginally faster than the OP's solution, but it eats less memory. In the best case it's obviously the winner ;)

I think this is the best solution because it iterates over the Counter dictionary rather than the original array (which is likely longer) when looking for the unique number or numbers. It works with a Python list of integers or floats as the function input.
from collections import Counter
def find_uniq(arr):
    return next(k for k, v in Counter(arr).items()
                if v == 1)
or if you may have more than one number with a count of 1:
from collections import Counter
def find_uniq(arr):
    return [k for k, v in Counter(arr).items()
            if v == 1]

Here's a way to find it by counting each distinct value only once (note that arr.count is still a linear scan, so this only helps when there are few distinct values):
def find_uniq(arr):
    members = set(arr)
    for member in members:
        if arr.count(member) == 1:
            return member
    return None  # if there aren't any uniques

You already have @cᴏʟᴅsᴘᴇᴇᴅ's answer, which is the fastest generic solution to this problem: effectively linear for any sorted or unsorted array.
from collections import Counter
def find_uniq(arr):
    c = Counter(arr)
    return next(x for x in arr if c[x] == 1)
Since the question is tagged algorithm, I want to show how you can still solve the problem with linear time complexity without any module, just your own algorithm:
def find_uniq(arr):
    i = 0
    while i < len(arr):
        j = i + 1
        pivot = arr[i]  # store arr[i] so you don't have to read it from the array again
        while j < len(arr) and pivot == arr[j]:
            j += 1
        if j - i == 1:  # if the inner loop never advanced, arr[i] is unique
            return pivot
        i = j
    return None
About its complexity: this is not only O(n), it's also Θ(n), since i = j ensures no array element is visited more than once. Space complexity is O(1).
About its correctness: it works only if the input array is sorted! That's the big caveat. If you can build your array already sorted, this algorithm is faster than Counter and anything else. The problem is that if the array is not already sorted, you'll have to sort it first: arr.sort() uses the Timsort algorithm, which is O(n log n) in the worst case (and O(n) on already-sorted input), with O(n) worst-case space. Therefore, in the worst case (array not already sorted), the whole code is O(n log n). Still better than O(n²), though.

R = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8]
R_dic = {i: R.count(i) for i in R}  # dict comprehension (note: R.count makes this quadratic)
print(R_dic)
output:
{1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2, 7: 2, 8: 2, 9: 1}
Now check the dictionary for the key whose count matches your condition (here, a count of 1).

Related

most efficient way to iterate over a large array looking for a missing element in Python

I was trying an online test. The test asked to write a function that, given a list of up to 100000 integers in the range 1 to 100000, would find the first missing integer.
For example, if the list is [1,4,5,2] the output should be 3.
I iterated over the list as follows:
def find_missing(num):
    for i in range(1, 100001):
        if i not in num:
            return i
The feedback I received is that the code is not efficient in handling big lists.
I am quite new and I could not find an answer; how can I iterate more efficiently?
The first improvement would be to make yours linear by using a set for the repeated membership test:
def find_missing(nums):
    s = set(nums)
    for i in range(1, 100001):
        if i not in s:
            return i
Given how C-optimized Python sorting is, you could also do something like:
def find_missing(nums):
    s = sorted(set(nums))
    return next(i for i, n in enumerate(s, 1) if i != n)
But both of these are fairly space inefficient as they create a new collection. You can avoid that with an in-place sort:
from itertools import groupby
def find_missing(nums):
    nums.sort()  # in-place
    return next(i for i, (k, _) in enumerate(groupby(nums), 1) if i != k)
For any range of numbers, the sum is given by Gauss's formula:
# sum of all numbers up to and including nums[-1] minus
# sum of all numbers up to but not including nums[-1]
expected = nums[-1] * (nums[-1] + 1) // 2 - nums[0] * (nums[0] - 1) // 2
If a number is missing, the actual sum will be
actual = sum(nums)
The difference is the missing number:
result = expected - actual
This computation is O(n), which is as efficient as you can get: expected is an O(1) computation, while actual has to actually add up the elements.
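Putting the pieces above together, a sketch (assuming nums is sorted and exactly one number is missing from the run):

```python
def find_missing(nums):
    # Gauss's formula: sum of nums[0]..nums[-1] inclusive
    expected = nums[-1] * (nums[-1] + 1) // 2 - nums[0] * (nums[0] - 1) // 2
    actual = sum(nums)        # O(n) pass over the elements
    return expected - actual  # the single missing number
```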
A somewhat slower but similar complexity approach would be to step along the sequence in lockstep with either a range or itertools.count:
for a, e in zip(nums, range(nums[0], len(nums) + nums[0])):
    if a != e:
        return e  # or break if not in a function
Notice the difference between a single comparison a != e, vs a linear containment check like e in nums, which has to iterate on average through half of nums to get the answer.
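Wrapped into a complete function, the lockstep scan might look like this (again assuming nums is sorted; the function name is illustrative):

```python
def find_missing_lockstep(nums):
    # Walk the list and the expected arithmetic sequence side by side.
    for a, e in zip(nums, range(nums[0], nums[0] + len(nums))):
        if a != e:
            return e          # first place the two sequences disagree
    return nums[-1] + 1       # no gap inside the run
```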
You can use Counter to count every occurrence in your list; the minimum number that does not appear will be your output. For example:
from collections import Counter
def find_missing(your_list):
    count = Counter(your_list)
    keys = count.keys()                 # the distinct elements present
    main_list = list(range(1, 100001))  # the values from 1 to 100k
    missing_numbers = list(set(main_list) - set(keys))
    your_output = min(missing_numbers)
    return your_output

Fastest union for multiple sorted lists removing duplicates and get an ordered result

When having a list of lists in which all the sublists are ordered e.g.:
[[1,3,7,20,31], [1,2,5,6,7], [2,4,25,26]]
what is the fastest way to get the union of these lists without having duplicates in it and also get an ordered result?
So the resulting list should be:
[1,2,3,4,5,6,7,20,25,26,31].
I know I could just union them all without duplicates and then sort them but are there faster ways (like: do the sorting while doing the union) built into python?
EDIT:
Is the proposed answer faster than executing the following algorithm pairwise with all the sublists?
You can use heapq.merge for this:
def mymerge(v):
    from heapq import merge
    last = None
    for a in merge(*v):
        if a != last:  # remove duplicates
            last = a
            yield a
print(list(mymerge([[1,3,7,20,31], [1,2,5,6,7], [2,4,25,26]])))
# [1, 2, 3, 4, 5, 6, 7, 20, 25, 26, 31]
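An equivalent sketch using itertools.groupby to do the de-duplication, since the merged stream is already sorted and equal values come out adjacent:

```python
from heapq import merge
from itertools import groupby

def mymerge(v):
    # groupby collapses each run of equal values to a single key
    return (k for k, _ in groupby(merge(*v)))

print(list(mymerge([[1, 3, 7, 20, 31], [1, 2, 5, 6, 7], [2, 4, 25, 26]])))
# [1, 2, 3, 4, 5, 6, 7, 20, 25, 26, 31]
```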
(EDITED)
The asymptotically best approach to the problem is to use a priority queue, like the one implemented in heapq.merge() (thanks to @kaya3 for pointing this out).
However, in practice a number of things can go wrong. For example, the constant factors in the complexity analysis may be large enough that a theoretically optimal approach is slower in real-life scenarios.
This is fundamentally implementation-dependent; for example, Python suffers some speed penalty for explicit looping.
So, let's consider a couple of approaches and how they perform on some concrete inputs.
Approaches
Just to give you some idea of the numbers we are discussing, here are a few approaches:
merge_sorted(), which uses the most naive approach: flatten the sequences, reduce them to a set() (removing duplicates), and sort the result as required
import itertools
def merge_sorted(seqs):
    return sorted(set(itertools.chain.from_iterable(seqs)))
merge_heapq(), which is essentially @arshajii's answer. Note that the itertools.groupby() variation is slightly (less than ~1%) faster.
import heapq
def i_merge_heapq(seqs):
    last_item = None
    for item in heapq.merge(*seqs):
        if item != last_item:
            yield item
            last_item = item

def merge_heapq(seqs):
    return list(i_merge_heapq(seqs))
merge_bisect_set() is substantially the same algorithm as merge_sorted() except that the result is now constructed explicitly using the efficient bisect module for sorted insertions. Since sorted() is doing fundamentally the same thing but looping in Python, this is not going to be faster.
import itertools
import bisect
def merge_bisect_set(seqs):
    result = []
    for item in set(itertools.chain.from_iterable(seqs)):
        bisect.insort(result, item)
    return result
merge_bisect_cond() is similar to merge_bisect_set() but now the non-repeating constraint is explicitly done using the final list. However, this is much more expensive than just using set() (in fact it is so slow that it was excluded from the plots).
def merge_bisect_cond(seqs):
    result = []
    for item in itertools.chain.from_iterable(seqs):
        if item not in result:
            bisect.insort(result, item)
    return result
merge_pairwise() explicitly implements the theoretically efficient algorithm, similar to what you outlined in your question.
def join_sorted(seq1, seq2):
    result = []
    i = j = 0
    len1, len2 = len(seq1), len(seq2)
    while i < len1 and j < len2:
        if seq1[i] < seq2[j]:
            result.append(seq1[i])
            i += 1
        elif seq1[i] > seq2[j]:
            result.append(seq2[j])
            j += 1
        else:  # seq1[i] == seq2[j]
            result.append(seq1[i])
            i += 1
            j += 1
    if i < len1:
        result.extend(seq1[i:])
    elif j < len2:
        result.extend(seq2[j:])
    return result

def merge_pairwise(seqs):
    result = []
    for seq in seqs:
        result = join_sorted(result, seq)
    return result
merge_loop() implements a generalization of the above, where a single pass is done over all sequences at once, instead of merging pairwise.
def merge_loop(seqs):
    result = []
    lengths = list(map(len, seqs))
    idxs = [0] * len(seqs)
    while any(idx < length for idx, length in zip(idxs, lengths)):
        item = min(
            seq[idx]
            for idx, seq, length in zip(idxs, seqs, lengths) if idx < length)
        result.append(item)
        for i, (idx, seq, length) in enumerate(zip(idxs, seqs, lengths)):
            if idx < length and seq[idx] == item:
                idxs[i] += 1
    return result
Benchmarks
By generating the input using:
import random

def gen_input(n, m=100, a=None, b=None):
    if a is None and b is None:
        b = 2 * n * m
        a = -b
    return tuple(
        tuple(sorted(set(random.randint(int(a), int(b)) for _ in range(n))))
        for __ in range(m))
one can plot the timings for varying n.
Note that, in general, the performances will vary for different values of n (the size of each sequence) and m (the number of sequences), but also of a and b (the minimum and the maximum number generated).
For brevity, this was not explored in this answer, but feel free to play around with it here, which also includes some other implementations, notably some tentative speed-ups with Cython that were only partially successful.
You can make use of sets in Python 3.
mylist = [[1,3,7,20,31], [1,2,5,6,7], [2,4,25,26]]
mynewlist = mylist[0] + mylist[1] + mylist[2]
print(sorted(set(mynewlist)))
Output:
[1, 2, 3, 4, 5, 6, 7, 20, 25, 26, 31]
First merge all the sub-lists using list addition.
Then convert the result into a set, which removes all the duplicates. Note that a set itself has no guaranteed order, so pass it through sorted() to get the ascending result.
Hope it answers your question.

Recursive function that returns combinations of size n chosen from list

I am trying to write a recursive function which takes as its inputs an integer n, and a list l, and returns a list of all the combinations of size n that can be chosen from the elements in l. I'm aware that I can just use itertools, but I want to become better at writing recursive functions, and I believe writing my own function will help me.
So for example, if you input:
n = 3
l = [1, 2, 3, 4]
I want the output to be:
[ [1, 2, 3], [1, 3, 4], [2, 3, 4], [1, 2, 4] ]
So far I've written this code:
def get_combinations(l, n):  # returns the possible combinations of size n from a list of chars
    if len(l) == 0:
        return []
    elif n == 1:
        return [l[0]]
    newList = list()
    for i in range(len(l)):
        sliced_list = l[i:]
        m = n - 1
        # lost here; believe I need to make a recursive call somewhere with get_combinations(sliced_list, m)
    return newList
I found this example of such a function for permutations helpful, but I'm struggling to implement something similar.
To clarify, I set up my base cases in the way I did because I'm expecting to pass sliced_list and m in my recursive call, and if you imagine the situation when i = 3, you'll have an empty list for sliced_list, and m will be 1 when you've gone deep enough to build up a combination. But I'm not married to these base cases.
Let me try to summarize the questions I have:
How do I produce a final result which is exactly a list of lists, instead of a list of lists of lists of lists ... (depth = n)?
What should my recursive call look like?
Am I going about this problem in exactly the wrong way?
First I would recommend that you layout your function like:
def get_combinations(array, n):
    solutions = []
    # all the code goes here
    return solutions
That way if there's a case that's a problem, like n == 0, you can just ignore it and get a valid (empty) result. The next issue is this line is wrong:
elif n == 1:
    return [array[0]]
The correct thing to do if n == 1 is to return an array with every element of the array wrapped in a list:
if n == 1:
    solutions = [[element] for element in array]
This is your base case as the next layer up in the recursion will build on this. What comes next is the heart of the problem. If n > 1 and the array has content, then we need to loop over the indexes of the array. For each we'll call ourself recursively on everything past the current index and n - 1:
sub_solutions = get_combinations(array[index + 1:], n - 1)
This returns partial solutions. We need to stuff the element at the current index, ie. array[index], onto the front of each sub_solution in sub_solutions, and add each augmented sub_solution to our list of solutions that we return at the end of the function:
solutions.append([array[index]] + sub_solution)
And that's it!
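Putting the steps together, a complete sketch of the function described above (the structure follows the answer; the n > len(array) guard is an addition for the empty case):

```python
def get_combinations(array, n):
    """Return all combinations of size n from array, preserving order."""
    solutions = []
    if n == 0 or n > len(array):
        return solutions  # no valid combinations
    if n == 1:
        return [[element] for element in array]
    for index in range(len(array)):
        # recurse on everything past the current index with n - 1,
        # then prepend array[index] to each partial solution
        for sub_solution in get_combinations(array[index + 1:], n - 1):
            solutions.append([array[index]] + sub_solution)
    return solutions
```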

A fast way to find the number of elements in list intersection (Python)

Is there any faster way to calculate this value in Python:
len([x for x in my_list if x in other_list])
I tried to use sets, since the lists' elements are unique, but I noticed no difference.
len(set(my_list).intersection(set(other_list)))
I'm working with big lists, so even the slightest improvement counts.
Thanks
A simple way is to find the shorter of the two lists, then use that with set.intersection(), e.g.:
a = range(100)
b = range(50)
fst, snd = (a, b) if len(a) < len(b) else (b, a)
len(set(fst).intersection(snd))
A generator expression avoids building the intermediate list, though membership tests against a plain list are still linear each:
sum(1 for i in my_list if i in other_list)
Otherwise a set intersection is about as fast as it will get
len(set(my_list).intersection(other_list))
From https://wiki.python.org/moin/TimeComplexity, set intersection for two sets s and t has time complexity:
Average: O(min(len(s), len(t)))
Worst: O(len(s) * len(t))
len([x for x in my_list if x in other_list]) has complexity O(len(my_list) * len(other_list)), which matches the worst case for set.intersection().
If you use set.intersection() you only need to convert one of the lists to a set first:
So len(set(my_list).intersection(other_list)) should on average be faster than the list comprehension.
You could try using the filter function. Since you mentioned you're working with huge lists, ifilter from the itertools module (Python 2; in Python 3 the built-in filter is already lazy) would be a good option:
from itertools import ifilter
my_set = set(range(100))
other_set = set(range(50))
for item in ifilter(lambda x: x in other_set, my_set):
    print item
The idea is to sort the two lists first and then traverse them like we want to merge them, in order to find the elements in first list belonging also to second list. This way we have an O(n logn) algorithm.
def mycount(l, m):
    l.sort()
    m.sort()
    i, j, counter = 0, 0, 0
    while i < len(l) and j < len(m):
        if l[i] == m[j]:
            counter += 1
            i += 1
        elif l[i] < m[j]:
            i += 1
        else:
            j += 1
    return counter
From local tests it's 100 times faster than len([x for x in a if x in b]) when working with lists of 10000 elements.
EDIT:
Considering that the list elements are unique, the common elements will have a frequency two in the union of the two lists. Also they will be together when we sort this union. So the following is also valid:
def mycount(l, m):
    s = sorted(l + m)
    return sum(s[i] == s[i + 1] for i in xrange(len(s) - 1))
Similarly, we can use a counter:
from collections import Counter
def mycount(l, m):
    c = Counter(l)
    c.update(m)
    return sum(v == 2 for v in c.itervalues())

Find the smallest positive number not in list

I have a list in python like this:
myList = [1,14,2,5,3,7,8,12]
How can I easily find the first unused value? (in this case '4')
I came up with several different ways:
Iterate the first number not in set
I didn't want to get the shortest code (which might be the set-difference trickery) but something that could have a good running time.
This might be one of the best proposed here, my tests show that it might be substantially faster - especially if the hole is in the beginning - than the set-difference approach:
from itertools import count, filterfalse # ifilterfalse on py2
A = [1,14,2,5,3,7,8,12]
print(next(filterfalse(set(A).__contains__, count(1))))
The array is turned into a set, whose __contains__(x) method corresponds to x in A. count(1) creates a counter that starts counting from 1 to infinity. Now, filterfalse consumes the numbers from the counter, until a number is found that is not in the set; when the first number is found that is not in the set it is yielded by next()
Timing for len(a) = 100000, randomized and the sought-after number is 8:
>>> timeit(lambda: next(filterfalse(set(a).__contains__, count(1))), number=100)
0.9200698399945395
>>> timeit(lambda: min(set(range(1, len(a) + 2)) - set(a)), number=100)
3.1420603669976117
Timing for len(a) = 100000, ordered and the first free is 100001
>>> timeit(lambda: next(filterfalse(set(a).__contains__, count(1))), number=100)
1.520096342996112
>>> timeit(lambda: min(set(range(1, len(a) + 2)) - set(a)), number=100)
1.987783643999137
(note that this is Python 3 and range is the py2 xrange)
Use heapq
The asymptotically good answer: heapq with enumerate
from heapq import heapify, heappop
from functools import partial
# A = [1,2,3] also works
A = [1,14,2,5,3,7,8,12]
end = 2 ** 61 # these are different and neither of them can be the
sentinel = 2 ** 62 # first gap (unless you have 2^64 bytes of memory).
heap = list(A)
heap.append(end)
heapify(heap)
print(next(n for n, v in enumerate(
    iter(partial(heappop, heap), sentinel), 1) if n != v))
Now, the one above could be the preferred solution if written in C, but heapq is written in Python and most probably slower than many other alternatives that mainly use C code.
Just sort and enumerate to find the first not matching
Or the simple answer with good constants for O(n lg n)
next(i for i, e in enumerate(sorted(A) + [ None ], 1) if i != e)
This might be fastest of all if the list is almost sorted because of how the Python Timsort works, but for randomized the set-difference and iterating the first not in set are faster.
The + [ None ] is necessary for the edge cases of there being no gaps (e.g. [1,2,3]).
This makes use of the property of sets
>>> l = [1,2,3,5,7,8,12,14]
>>> m = range(1, len(l) + 2)  # +2 so a gapless list like [1,2,3] still yields a result
>>> min(set(m)-set(l))
4
I would suggest you use a generator and enumerate to determine the missing element (note that this assumes the list is sorted):
>>> next(a for a, b in enumerate(myList, myList[0]) if a != b)
4
enumerate maps an index onto each element, so your goal is to find the element that differs from its index.
Note, I am also assuming that the elements may not start with a definite value, in this case 1; if they do, you can simplify the expression further as
>>> next(a for a, b in enumerate(myList, 1) if a != b)
4
A for loop with the list will do it.
l = [1,14,2,5,3,7,8,12]
for i in range(1, max(l) + 2):  # +2 covers a gapless list
    if i not in l: break
print(i)  # result 4
Don't know how efficient, but why not use an xrange as a mask and use set minus?
>>> myList = [1,14,2,5,3,7,8,12]
>>> min(set(xrange(1, len(myList) + 1)) - set(myList))
4
You're only creating a set as big as myList, so it can't be that bad :)
This won't work for "full" lists:
>>> myList = range(1, 5)
>>> min(set(xrange(1, len(myList) + 1)) - set(myList))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: min() arg is an empty sequence
But the fix to return the next value is simple (add one more to the masked set):
>>> min(set(xrange(1, len(myList) + 2)) - set(myList))
5
import itertools as it
next(i for i in it.count() if i not in mylist)
I like this because it reads very closely to what you're trying to do: "start counting, keep going until you reach a number that isn't in the list, then tell me that number". However, this is quadratic since testing i not in mylist is linear.
Solutions using enumerate are linear, but rely on the list being sorted and no value being repeated. Sorting first makes it O(n log n) overall, which is still better than quadratic. However, if you can assume the values are distinct, then you could put them into a set first:
myset = set(mylist)
next(i for i in it.count() if i not in myset)
Since set containment checks are roughly constant time, this will be linear overall.
I just solved this in a probably non pythonic way
def solution(A):
    # Const-ish to improve readability
    MIN = 1
    if not A: return MIN
    # Save re-computing MAX
    MAX = max(A)
    # Loop over all entries with minimum of 1 starting at 1
    for num in range(1, MAX):
        # going for greatest missing number return optimistically (minimum)
        # If order needs to switch, then use max as start and count backwards
        if num not in A: return num
    # In case the max is < 0 double wrap max with minimum return value
    return max(MIN, MAX + 1)
I think it reads quite well
My effort, no itertools. Sets current to be one less than the value you are expecting. (Note that this assumes the list is sorted; the variable is also renamed from list to avoid shadowing the built-in.)
nums = [1,2,3,4,5,7,8]
current = nums[0] - 1
for i in nums:
    if i != current + 1:
        print(current + 1)
        break
    current = i
The naive way is to traverse the list, which is an O(n) solution. However, since the list is sorted, you can use this feature to perform a modified binary search. Basically, you are looking for the last occurrence of A[i] == i.
The pseudo algorithm will be something like:
binarysearch(A):
    start = 0
    end = len(A) - 1
    result = 0  # initialize in case no A[mid] == mid is found
    while start <= end:
        mid = (start + end) // 2
        if A[mid] == mid:
            result = A[mid]
            start = mid + 1
        else:  # A[mid] > mid, since A[mid] can never be less than mid
            end = mid - 1
    return result + 1
This is an O(log n) solution. I assumed lists are one indexed. You can modify the indices accordingly
EDIT: if the list is not sorted, you can use the heapq python library and store the list in a min-heap and then pop the elements one by one
pseudo code
H = heapify(A)  # Assuming A is the list
count = 1
for i in range(len(A)):
    if H.pop() != count: return count
    count += 1
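A runnable version of that pseudocode (note that Python's heapq module exposes functions rather than a heap object with a .pop() method; the function name is illustrative):

```python
import heapq

def first_missing(A):
    heap = list(A)       # copy so the caller's list isn't mutated
    heapq.heapify(heap)  # O(n) min-heap construction
    count = 1
    while heap:
        if heapq.heappop(heap) != count:
            return count
        count += 1
    return count         # the list was a full run 1..n
```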
sort + reduce to the rescue!
from functools import reduce # python3
myList = [1,14,2,5,3,7,8,12]
res = 1 + reduce(lambda x, y: x if y-x>1 else y, sorted(myList), 0)
print(res)
Unfortunately, it won't stop after the match is found and will iterate the whole list.
Faster (but less fun) is to use for loop:
myList = [1,14,2,5,3,7,8,12]
res = 0
for num in sorted(myList):
    if num - res > 1:
        break
    res = num
res = res + 1
print(res)
you can try this
for i in range(1, max(arr1) + 2):
    if i not in arr1:
        print(i)
        break
Easy to read, easy to understand, gets the job done:
def solution(A):
    smallest = 1
    unique = set(A)
    for value in sorted(unique):  # sets are unordered, so sort before scanning
        if value == smallest:
            smallest += 1
    return smallest
Keep incrementing a counter in a loop until you find the first positive integer that's not in the list.
def getSmallestIntNotInList(number_list):
    """Returns the smallest positive integer that is not in a given list"""
    i = 0
    while True:
        i += 1
        if i not in number_list:
            return i

print(getSmallestIntNotInList([1,14,2,5,3,7,8,12]))
# 4
I found that this had the fastest performance compared to other answers on this post. I tested using timeit in Python 3.10.8. My performance results can be seen below:
import timeit

def findSmallestIntNotInList(number_list):
    # Infinite while-loop until first number is found
    i = 0
    while True:
        i += 1
        if i not in number_list:
            return i

t = timeit.Timer(lambda: findSmallestIntNotInList([1,14,2,5,3,7,8,12]))
print('Execution time:', t.timeit(100000), 'seconds')
# Execution time: 0.038100800011307 seconds

import timeit

def findSmallestIntNotInList(number_list):
    # Loop with a range to len(number_list)+1
    for i in range(1, len(number_list) + 1):
        if i not in number_list:
            return i

t = timeit.Timer(lambda: findSmallestIntNotInList([1,14,2,5,3,7,8,12]))
print('Execution time:', t.timeit(100000), 'seconds')
# Execution time: 0.05068870005197823 seconds

import timeit

def findSmallestIntNotInList(number_list):
    # Loop with a range to max(number_list) (by silgon)
    # https://stackoverflow.com/a/49649558/3357935
    for i in range(1, max(number_list)):
        if i not in number_list:
            return i

t = timeit.Timer(lambda: findSmallestIntNotInList([1,14,2,5,3,7,8,12]))
print('Execution time:', t.timeit(100000), 'seconds')
# Execution time: 0.06317249999847263 seconds

import timeit
from itertools import count, filterfalse

def findSmallestIntNotInList(number_list):
    # iterate the first number not in set (by Antti Haapala -- Слава Україні)
    # https://stackoverflow.com/a/28178803/3357935
    return next(filterfalse(set(number_list).__contains__, count(1)))

t = timeit.Timer(lambda: findSmallestIntNotInList([1,14,2,5,3,7,8,12]))
print('Execution time:', t.timeit(100000), 'seconds')
# Execution time: 0.06515420007053763 seconds

import timeit

def findSmallestIntNotInList(number_list):
    # Use property of sets (by Bhargav Rao)
    # https://stackoverflow.com/a/28176962/3357935
    m = range(1, len(number_list))
    return min(set(m) - set(number_list))

t = timeit.Timer(lambda: findSmallestIntNotInList([1,14,2,5,3,7,8,12]))
print('Execution time:', t.timeit(100000), 'seconds')
# Execution time: 0.08586219989228994 seconds
The easiest way would be just to loop through the sorted list and check whether each index equals its value, and if not, report that index as the solution.
This has complexity O(n log n) because of the sorting (note that enumerate starts at 1 to match the values, and the comparison uses != rather than the identity check is not):
for index, value in enumerate(sorted(myList), 1):
    if index != value:
        print(index)
        break
Another option is to use Python sets, which are somewhat like dictionaries without values, just keys. In a set you can look up a key in constant time, which makes the whole solution linear, O(n) (the range starts at 1 since we want the smallest positive number, and runs one past the set's size to handle gapless lists):
mySet = set(myList)
for i in range(1, len(mySet) + 2):
    if i not in mySet:
        print(i)
        break
Edit:
If the solution should also deal with lists where no number is missing (e.g. [0,1]) and output the next following number and should also correctly consider 0, then a complete solution would be:
def find_smallest_positive_number_not_in_list(myList):
    mySet = set(myList)
    for i in range(1, max(mySet) + 2):
        if i not in mySet:
            return i
A solution that returns all those values is
free_values = set(range(1, max(L))) - set(L)
it does a full scan, but those loops are implemented in C and unless the list or its maximum value are huge this will be a win over more sophisticated algorithms performing the looping in Python.
Note that if this search is needed to implement "reuse" of IDs, then keeping a free list around and maintaining it up to date (i.e. adding numbers to it when deleting entries and picking from it when reusing entries) is often a good idea.
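A sketch of that free-list idea (the class and method names are illustrative, not from any library): an allocator that hands out the smallest never-used id, and prefers released ids when any are available:

```python
import heapq

class IdAllocator:
    """Hand out positive ids, reusing released ones first (illustrative sketch)."""
    def __init__(self):
        self._next = 1   # smallest never-used id
        self._free = []  # min-heap of released ids

    def acquire(self):
        if self._free:
            return heapq.heappop(self._free)  # smallest released id
        n = self._next
        self._next += 1
        return n

    def release(self, i):
        heapq.heappush(self._free, i)
```

Each acquire/release is O(log n), which avoids rescanning the whole id list every time.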
The following solution loops all numbers in between 1 and the length of the input list and breaks the loop whenever a number is not found inside it. Otherwise the result is the length of the list plus one.
listOfNumbers = [1,14,2,5,3,7,8,12]
for i in range(1, len(listOfNumbers) + 1):
    if not i in listOfNumbers:
        nextNumber = i
        break
else:
    nextNumber = len(listOfNumbers) + 1
