Is there a way to continue calling a function only if a condition is met without using loops/recursion/comprehension? I can only use map and filter.
I am bubble sorting a list and I need to reiterate until there is one whole pass without any swap. I use a counter for this.
This is what I have so far:
def sort(l, i, cnt):
    if i < len(l) - 1 and l[i] > l[i+1]:
        l[i], l[i+1] = l[i+1], l[i]
        cnt += 1
    return l[i]

def main(l):
    cnt = 0
    l = list(map(lambda i: sort(l, i, cnt), range(len(l))))
I'm not sure how to continue calling sort only if cnt != 0. Any help is appreciated.
It is an uncommon requirement, but it is possible, provided you are allowed to use other functions to actually execute the iterators. Since map and filter only return lazy iterators, you will have to use sum, list, or tuple, for example, to actually make the iterators produce their values.
Here I would use a function that compares two consecutive elements of a list: it returns 0 if they are in increasing order, and swaps them and returns 1 if they are not. Using sum on a map of this function gives the number of swaps in a pass:
def sort2(l, i):
    if l[i] > l[i+1]:
        l[i], l[i+1] = l[i+1], l[i]
        return 1
    return 0
You can then execute a pass with:
sum(map(lambda i: sort2(l, i), range(len(l) -1)))
And you execute a full bubble sort by using a second map for all the passes:
sum(map(lambda j: sum(map(lambda i: sort2(l, i), range(len(l) -j))), range(1, len(l))))
In order to stop as soon as one pass results in 0 swaps, I would filter with a function that raises a StopIteration when it gets 0: because we use sum, a pass returns 0 as soon as the list is sorted, and the StopIteration will gently stop the iterators:
def stop(x):
    # print(x)  # uncomment to check the actual calls
    if x == 0:
        raise StopIteration
    return True
Let us combine everything:
tuple(filter(stop, map(lambda j: sum(map(lambda i: sort2(l, i), range(len(l) - j))),
                       range(1, len(l)))))
Demo:
With l = [1, 3, 5, 2, 4], this gives (with the print uncommented in stop):
2
1
0
(2, 1)
So we have correctly got:
first pass caused 2 swaps ((5,2) and (5,4))
second pass caused 1 swap ((3,2))
third pass caused 0 swaps, and the stop filter actually stopped the iterators.
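For completeness, here is the whole thing as one runnable snippet, reusing sort2 and stop exactly as defined above (with the print left commented out):

l = [1, 3, 5, 2, 4]
passes = tuple(filter(stop, map(lambda j: sum(map(lambda i: sort2(l, i),
                                                  range(len(l) - j))),
                                range(1, len(l)))))
print(passes)  # (2, 1)
print(l)       # [1, 2, 3, 4, 5]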
That being said, it is a nice exercise, but I would never do that in real world programs...
You might want to check itertools.takewhile; it seems like it is the one you are looking for. See the documentation for itertools for details.
takewhile(lambda x: x<5, [1,4,6,4,1])
# --> 1 4
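For the bubble-sort use case in the question, a possible sketch looks like this; bubble_pass is a hypothetical helper of my own (equivalent to sort2 in the other answer), not something provided by itertools:

from itertools import takewhile

def bubble_pass(l, i):
    # compares one adjacent pair, swaps it if out of order,
    # and returns 1 if a swap happened, else 0
    if l[i] > l[i + 1]:
        l[i], l[i + 1] = l[i + 1], l[i]
        return 1
    return 0

l = [1, 3, 5, 2, 4]
# keep consuming passes while the previous pass still produced swaps;
# list() forces the lazy map/takewhile pipeline to actually run
swap_counts = list(takewhile(lambda swaps: swaps != 0,
                             map(lambda j: sum(map(lambda i: bubble_pass(l, i),
                                                   range(len(l) - 1))),
                                 range(len(l)))))
print(swap_counts)  # [2, 1]
print(l)            # [1, 2, 3, 4, 5]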
According to the comments, your goal is to sort a list of numbers between 1 and 50 using map and filter without loops, comprehensions or recursion. Therefore your actual question (about a do/while equivalent for implementing bubble sort) is an XY problem because the correct solution is not to use bubble sort at all. When asking questions in future, please say what your actual problem is - don't just ask how to fix your attempted solution.
The counting sort algorithm is suitable here; something akin to counting sort can be implemented using map and filter without side-effects:
def counting_sort(numbers):
    min_x, max_x = min(numbers), max(numbers)
    r_x = range(min_x, max_x + 1)
    r_i = range(len(numbers))
    count_ge = list(map(sum, map(lambda x: map(x.__ge__, numbers), r_x)))
    return list(map(lambda i: min_x + sum(map(i.__ge__, count_ge)), r_i))
Note that this is a very inefficient variant of counting sort; it runs in O(nk) time instead of O(n + k) time. But it meets the requirements.
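For example:

print(counting_sort([4, 1, 3, 1]))    # [1, 1, 3, 4]
print(counting_sort([3, 50, 1, 27]))  # [1, 3, 27, 50]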
Related
I have a solution for this problem on codewars.com that works when I run it in Sublime, but when I try to submit, I get this error:
Process was terminated. It took longer than 12000ms to complete
Why did my code time out?
Our servers are configured to only allow a certain amount of time for your code to execute. In rare cases the server may be taking on too much work and simply wasn't able to run your code efficiently enough. Most of the time though this issue is caused by inefficient algorithms. If you see this error multiple times you should try to optimize your code further.
The goal of the function is to find the next biggest number after a given number that you can make by rearranging the digits of a given number. For example, if I was given 216, I would need to return 261.
This is the code I have now:
import itertools

def next_bigger(n):
    # takes a number like 472 and puts it in a list like so: [4, 7, 2]
    num_arr = [int(x) for x in str(n)]
    perms = []
    total = ''
    # x would be a permutation of num_arr, like [7, 2, 4]
    for x in itertools.permutations(num_arr):
        for y in x:
            total += str(y)
        perms.append(int(total))
        total = ''
    # bigger is all permutations that are bigger than n,
    # so bigger[0] is the next biggest number.
    # if there are no bigger permutations, the function returns -1
    bigger = sorted([x for x in perms if x > n])
    return bigger[0] if bigger else -1
I'm new to coding in Python, so is there some mistake I am making which causes my code to be extremely inefficient? Any suggestions are welcome.
Thanks for all the help you guys gave me. I ended up finding a solution from here using the Next Lexicographical Permutation Algorithm.
This is my tidied up version of the solution provided here:
def next_bigger(n):
    # https://www.nayuki.io/res/next-lexicographical-permutation-algorithm/nextperm.py
    # https://www.nayuki.io/page/next-lexicographical-permutation-algorithm
    # Find non-increasing suffix
    arr = [int(x) for x in str(n)]
    i = len(arr) - 1
    while i > 0 and arr[i - 1] >= arr[i]:
        i -= 1
    if i <= 0:
        return -1
    # Find successor to pivot
    j = len(arr) - 1
    while arr[j] <= arr[i - 1]:
        j -= 1
    arr[i - 1], arr[j] = arr[j], arr[i - 1]
    # Reverse suffix
    arr[i:] = arr[len(arr) - 1 : i - 1 : -1]
    return int(''.join(str(x) for x in arr))
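For example:

print(next_bigger(216))        # 261
print(next_bigger(897654321))  # 912345678
print(next_bigger(21))         # -1 (digits already in descending order)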
Why are you getting TLE (time limit exceeded)?
Because your algorithm has the wrong complexity. How many permutations will you find for a list with 3 elements? Only 6. But what about a list with 23 elements? 25852016738884976640000.
That is far too many for the time limit.
So, if you want to solve this problem, you have to find a solution that does not generate permutations. Think about how numbers are written: 271 is bigger than 216 because the digit in the second position has a bigger value (7 > 1).
So your solution has to find two digits and swap their positions. The digit on the left has to be smaller than the one on the right.
For example, for 111115444474444 you should find 5 and 7.
Then you swap them, and sort the sublist to the right of the swapped position.
For example, after swapping the values (111117444454444) you have to sort (444454444) -> (444444445). Now merge everything, and you have the solution.
import functools

def next_bigger(a):
    a = list(map(int, str(a)))  # list() is needed in Python 3, where map is lazy
    tmp = list(reversed(a))
    for i, item_a in enumerate(reversed(a)):
        for j in range(i):
            if item_a < tmp[j]:
                # you found the index of the number to swap
                tmp[i] = tmp[j]
                print(list(reversed(tmp[i:])))  # debug output
                tmp[j] = item_a
                fin = list(reversed(tmp[i:])) + sorted(tmp[:i])
                return functools.reduce(lambda x, y: x * 10 + y, fin)
    return -1
A simple backtracking approach is to consider the digits one at a time. Starting from the most significant digit, pick the smallest number you have left that doesn't prevent the new number from exceeding the input. This will always start by reproducing the input, then will have to backtrack to the next-to-last digit (because there aren't any other choices for the last digit). For inputs like 897654321, the backtracking will immediately cascade to the beginning because there are no larger digits left to try in any of the intermediate slots.
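A minimal sketch of that backtracking idea might look like this (next_bigger_backtrack, place and tight are names I made up for illustration; this is not code from the answer above):

def next_bigger_backtrack(n):
    target = str(n)
    digits = sorted(target)  # pool of digits still available, in ascending order
    chosen = []

    def place(pos, tight):
        # tight means the prefix chosen so far is exactly equal to the input's prefix
        if pos == len(target):
            return not tight  # a result equal to the input is not "bigger"
        for i in range(len(digits)):
            d = digits[i]
            if i > 0 and digits[i - 1] == d:
                continue  # skip duplicate digit values at this position
            if tight and d < target[pos]:
                continue  # this digit could never make the result exceed the input
            digits.pop(i)
            chosen.append(d)
            if place(pos + 1, tight and d == target[pos]):
                return True
            chosen.pop()
            digits.insert(i, d)  # restore the pool, then try a larger digit
        return False

    return int(''.join(chosen)) if place(0, True) else -1

print(next_bigger_backtrack(216))        # 261
print(next_bigger_backtrack(897654321))  # 912345678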
You could sort num_arr in descending order and combine the result into a number, but that would give the largest value. Since the OP wants the next largest, you need to check, starting from the right, which digit is larger than the digit immediately to its left, and swap their positions.
Here is the final code:
def next_bigger(n):
    num_arr = [int(x) for x in str(n)]
    i = len(num_arr) - 1
    while i > 0:
        if num_arr[i] > num_arr[i-1]:
            a = num_arr[i]
            num_arr[i] = num_arr[i-1]
            num_arr[i-1] = a
            break
        else:
            i = i - 1
    newbig = "".join(str(e) for e in num_arr)
    return int(newbig)
I have now edited it to calculate the next bigger element.
def perms(s):
    if len(s) == 1:
        return [s]
    result = []
    for i, v in enumerate(s):
        result += [v + p for p in perms(s[:i] + s[i+1:])]
    return result

a = input()
b = sorted(set(perms(str(a))))  # sort (and deduplicate), so the entry after a is the next bigger permutation
if len(b) != 1:
    for i in range(0, len(b)):
        if b[i] == a:
            if i + 1 < len(b):
                print(b[i + 1])
            else:
                print("-1")  # a is already the largest permutation
            break
else:
    print("-1")
I would like to write a function which performs this "strange" sort efficiently (I am sorry for this pseudocode, it seems to me to be the clearest way to introduce the problem):
l = [[A,B,C,...]]
while some list in l is not sorted (increasingly) do
    find a non-sorted list (say A) in l
    find the first two non-sorted elements of A (i.e. A=[...,b,a,...] with b>a)
    l = [[...,a,b,...],[...,b+a,...],B,C,...]
Two important things should be mentioned:
The sorting depends on the choice of the first two non-sorted elements: if A=[...,b,a,r,...] with r<a<b and we choose to sort with respect to (a,r), then the final result won't be the same. This is why we fix the first two non-sorted elements of A.
Sorting this way always comes to an end.
An example:
In: Sort([[4,5,3,10]])
Out: [[3,4,5,10],[5,7,10],[10,12],[22],[4,8,10]]
since
(a,b)=(5,3): [4,5,3,10]->[[4,3,5,10],[4,8,10]]
(a,b)=(4,3): [[4,3,5,10],[4,8,10]]->[[3,4,5,10],[7,5,10],[4,8,10]]
(a,b)=(7,5): [[3,4,5,10],[7,5,10],[4,8,10]]->[[3,4,5,10],[5,7,10],[12,10],[4,8,10]]
(a,b)=(12,10): [[3,4,5,10],[5,7,10],[12,10],[4,8,10]]->[[3,4,5,10],[5,7,10],[10,12],[22],[4,8,10]]
Thank you for your help!
EDIT
Why am I considering this problem:
I am trying to do some computations with the Universal Enveloping Algebra of a Lie algebra. This is a mathematical object generated by products of some generators x_1, ..., x_n. We have a nice description of a generating set (it amounts to the ordered lists in the question), but when exchanging two generators, we need to take into account the commutator of these two elements (this is the sum of the elements in the question). I haven't given a solution to this question because it would be close to the worst one you can think of. I would like to know how you would implement this in a good way, so that it is pythonic and fast. I am not asking for a complete solution, only some clues; I am willing to solve it by myself.
Here's a simple implementation that could use some improvement:
def strange_sort(lists_to_sort):
    # reverse so pop and append can be used
    lists_to_sort = lists_to_sort[::-1]
    sorted_list_of_lists = []
    while lists_to_sort:
        l = lists_to_sort.pop()
        i = 0
        # l[:i] is sorted
        while i < len(l) - 1:
            if l[i] > l[i + 1]:
                # add list with element sum to stack
                lists_to_sort.append(l[:i] + [l[i] + l[i + 1]] + l[i + 2:])
                # reverse elements
                l[i], l[i + 1] = l[i + 1], l[i]
                # go back if necessary
                if i > 0 and l[i - 1] > l[i]:
                    i -= 1
                    continue
            # move on to next index
            i += 1
        # done sorting list
        sorted_list_of_lists.append(l)
    return sorted_list_of_lists

print(strange_sort([[4, 5, 3, 10]]))  # [[3, 4, 5, 10], [5, 7, 10], [10, 12], [22], [4, 8, 10]]
This keeps track of which lists are left to sort by using a stack. The time complexity is pretty good, but I don't think it's ideal.
First you would have to implement a while loop which checks whether all of the numbers inside the lists are sorted. I will be using all, which checks whether all the objects in a sequence are truthy.
def a_sorting_function_of_some_sort(list_to_sort):
    while not all([all([number <= numbers_list[numbers_list.index(number) + 1]
                        for number in numbers_list
                        if not number == numbers_list[-1]])
                   for numbers_list in list_to_sort]):
        for numbers_list in list_to_sort:
            # There's nothing to do if the list contains just one number
            if len(numbers_list) > 1:
                for number in numbers_list:
                    number_index = numbers_list.index(number)
                    try:
                        next_number_index = number_index + 1
                        next_number = numbers_list[next_number_index]
                    # If IndexError is raised here, it means we don't have any other numbers to check against,
                    # so we break this numbers iteration to go to the next list iteration
                    except IndexError:
                        break
                    if not number < next_number:
                        numbers_list_index = list_to_sort.index(numbers_list)
                        list_to_sort.insert(numbers_list_index + 1,
                                            [*numbers_list[:number_index], number + next_number,
                                             *numbers_list[next_number_index + 1:]])
                        numbers_list[number_index] = next_number
                        numbers_list[next_number_index] = number
                        # We also need to break after parsing unsorted numbers
                        break
    return list_to_sort
Is there any faster way to calculate this value in Python:
len([x for x in my_list if x in other_list])
I tried to use sets, since the lists' elements are unique, but I noticed no difference.
len(set(my_list).intersection(set(other_list)))
I'm working with big lists, so even the slightest improvement counts.
Thanks
A simple way is to find the shorter of the two lists and use it with set.intersection, e.g.:
a = range(100)
b = range(50)
fst, snd = (a, b) if len(a) < len(b) else (b, a)
len(set(fst).intersection(snd))
I think a generator expression like so would be fast
sum(1 for i in my_list if i in other_list)
Otherwise, a set intersection is about as fast as it will get:
len(set(my_list).intersection(other_list))
From https://wiki.python.org/moin/TimeComplexity, set intersection for two sets s and t has time complexity:
Average - O(min(len(s), len(t)))
Worst - O(len(s) * len(t))
len([x for x in my_list if x in other_list]) has complexity O(n^2), which is equivalent to the worst case for set.intersection().
If you use set.intersection() you only need to convert one of the lists to a set first, so len(set(my_list).intersection(other_list)) should on average be faster than the nested list comprehension.
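If you want to check this on data close to yours, a rough comparison with timeit on made-up data (the sizes and value ranges here are arbitrary assumptions) might look like:

import random
import timeit

my_list = random.sample(range(10**6), 10**4)
other_list = random.sample(range(10**6), 10**4)

# nested list comprehension vs. set intersection
print(timeit.timeit(lambda: len([x for x in my_list if x in other_list]), number=1))
print(timeit.timeit(lambda: len(set(my_list).intersection(other_list)), number=1))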
You could try using the filter function. Since you mentioned you're working with huge lists, ifilter of the itertools module would be a good option:
from itertools import ifilter

my_set = set(range(100))
other_set = set(range(50))

for item in ifilter(lambda x: x in other_set, my_set):
    print item
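Note that itertools.ifilter only exists in Python 2; in Python 3 the built-in filter is already lazy, so getting the count the question asks for would look like:

my_set = set(range(100))
other_set = set(range(50))
print(sum(1 for _ in filter(lambda x: x in other_set, my_set)))  # 50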
The idea is to sort the two lists first and then traverse them as if merging them, in order to find the elements of the first list that also belong to the second. This way we get an O(n log n) algorithm.
def mycount(l, m):
    l.sort()
    m.sort()
    i, j, counter = 0, 0, 0
    while i < len(l) and j < len(m):
        if l[i] == m[j]:
            counter += 1
            i += 1
        elif l[i] < m[j]:
            i += 1
        else:
            j += 1
    return counter
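For example (note that mycount sorts both input lists in place):

print(mycount([5, 1, 3, 2], [2, 4, 3, 6]))  # 2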
From local tests it's 100 times faster than len([x for x in a if x in b]) when working with lists of 10000 elements.
EDIT:
Considering that the list elements are unique, the common elements will have a frequency of two in the union of the two lists. Also, they will be adjacent when we sort this union. So the following is also valid:
def mycount(l, m):
    s = sorted(l + m)
    return sum(s[i] == s[i + 1] for i in xrange(len(s) - 1))
Similarly, we can use a Counter:
from collections import Counter

def mycount(l, m):
    c = Counter(l)
    c.update(m)
    return sum(v == 2 for v in c.itervalues())  # Python 2; use c.values() in Python 3
How can I tell if a list (or iterable) of numbers all have the same sign?
Here's my first (naive) draft:
def all_same_sign(list):
    negative_count = 0
    for x in list:
        if x < 0:
            negative_count += 1
    return negative_count == 0 or negative_count == len(list)
Is there a more pythonic and/or correct way of doing this? First thing that comes to mind is to stop iterating once you have opposite signs.
Update
I like the answers so far although I wonder about performance. I'm not a performance junkie but I think when dealing with lists it's reasonable to consider the performance. For my particular use-case I don't think it will be a big deal but for completeness of this question I think it's good to address it. My understanding is the min and max functions have O(n) performance. The two suggested answers so far have O(2n) performance whereas my above routine adding a short circuit to quit once an opposite sign is detected will have at worst O(n) performance. Thoughts?
You can make use of the all function:
>>> x = [1, 2, 3, 4, 5]
>>> all(item >= 0 for item in x) or all(item < 0 for item in x)
True
Don't know whether it's the most pythonic way.
How about:
same_sign = not min(l) < 0 < max(l)
Basically, this checks whether the smallest element of l and the largest element straddle zero.
This doesn't short-circuit, but does avoid Python loops. Only benchmarking can tell whether this is a good tradeoff for your data (and whether the performance of this piece even matters).
Instead of all you could use any, as it short-circuits on the first true item as well:
same = lambda s: any(i >= 0 for i in s) ^ any(i < 0 for i in s)
Similarly to using all, you can use any, which has the benefit of better performance, as it will break the loop on the first occurrence of a different sign:
def all_same_sign(lst):
    if lst[0] >= 0:
        return not any(i < 0 for i in lst)
    else:
        return not any(i >= 0 for i in lst)
It would be a little trickier if you want to consider 0 as belonging to both groups:
def all_same_sign(lst):
    first = 0
    i = 0
    while first == 0:
        first = lst[i]
        i += 1
    if first > 0:
        return not any(i < 0 for i in lst)
    else:
        return not any(i > 0 for i in lst)
In any case, you iterate the list once instead of twice as in other answers. Your code has the drawback of iterating the loop in Python, which is much less efficient than using built-in functions.
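A quick check with either version above:

print(all_same_sign([3, 1, 4]))   # True
print(all_same_sign([3, -1, 4]))  # False
print(all_same_sign([-3, -1]))    # True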
I have been working on generating all possible submodels for a biological problem. I have a working recursion for generating a big list of all the submodels I want. However, the lists get unmanageably large pretty fast (N=12 is just possible in the example below, N>12 uses too much memory). So I wanted to convert it to a generator function using yield instead, but I'm stuck.
My working recursive function looks like this:
def submodel_list(result, pat, current, maxn):
    '''result is a list to append to
    pat is the current pattern (starts as an empty list)
    current is the current number of the pattern
    maxn is the number of items in the pattern
    '''
    if pat:
        curmax = max(pat)
    else:
        curmax = 0
    for i in range(current):
        if i - 1 <= curmax:
            newpat = pat[:]
            newpat.append(i)
            if current == maxn:
                result.append(newpat)
            else:
                submodel_list(result, newpat, current + 1, maxn)

result = []
submodel_list(result, [], 1, 5)
That gives me the expected list of submodels for my purposes.
Now, I want to get that same list using a generator. Naively, I thought I could just switch out my result.append() for a yield statement, and the rest would work OK. So I tried this:
def submodel_generator(pat, current, maxn):
    '''same as submodel_list but yields instead'''
    if pat:
        curmax = max(pat)
    else:
        curmax = 0
    for i in range(current):
        print i, current, maxn
        if i - 1 <= curmax:
            print curmax
            newpat = pat[:]
            newpat.append(i)
            if current == maxn:
                yield newpat
            else:
                submodel_generator(newpat, current + 1, maxn)
b = submodel_generator([], 1, 5)
for model in b: print model
But now I get nothing. A (very dumb) bit of digging tells me the function gets to the final else statement once, then stops - i.e. the recursion no longer works.
Is there a way to turn my first, clunky, list-making function into a nice neat generator function? Is there something silly I've missed here? All help hugely appreciated!
You should change this:
submodel_generator(newpat, current+1, maxn)
to this:
for b in submodel_generator(newpat, current+1, maxn):
    yield b
This will recursively yield the value from successive calls to the function.
[Update]: Note that as of Python 3.3, you can use the new yield from syntax:
yield from submodel_generator(newpat, current+1, maxn)
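Putting the fix together, a sketch of the full generator in Python 3 (debug prints dropped) would be:

def submodel_generator(pat, current, maxn):
    '''same as submodel_list, but yields each pattern instead of appending it'''
    curmax = max(pat) if pat else 0
    for i in range(current):
        if i - 1 <= curmax:
            newpat = pat[:]
            newpat.append(i)
            if current == maxn:
                yield newpat
            else:
                # delegate to the recursive call's generator
                yield from submodel_generator(newpat, current + 1, maxn)

for model in submodel_generator([], 1, 5):
    print(model)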