Conversion of iterative algorithm to recursive - python

Hi, I am trying to convert my iterative algorithm to a recursive solution so that I can apply dynamic programming once that's done (do suggest other ways to reduce the time complexity of this triple-nested iteration). I am not good with recursion; I tried to convert it, but it gives me index out of range errors.
Iterative Approach:
def foo(L):
    L = sorted(L)
    A = 0
    for i, x in enumerate(L):
        for j, y in enumerate(L):
            if x != y:
                if y % x == 0:
                    for k, z in enumerate(L):
                        if y != z:
                            if (z % y == 0) and (y % x == 0):
                                A = A + 1
    return A
Recursive Approach:
A = i = j = k = 0  # Initializing globals

def foo(L):
    L = sorted(L)
    global A, i, j, k
    x = y = z = L
    luckyOne(x, y, z)
    return A

def luckyOne(x, y, z):
    global i, j, k
    while i < len(x) and j < len(y) and k < len(z):
        while x[i] != y[j]:
            luckyTwo(x[i:], y[j:], z[k:])
            i += 1
            luckyOne(x[i:], y[j:], z[k:])
            # i += 1
            # return A
        i += 1
        luckyOne(x[i:], y[j:], z[k:])
    return 0

def luckyTwo(x, y, z):
    global i, j, k
    while i < len(x) and j < len(y) and k < len(z):
        while y[j] % x[i] == 0:
            luckyThree(x[i:], y[j:], z[k:])
            j += 1
            luckyTwo(x[i:], y[j:], z[k:])
        j += 1
        luckyTwo(x[i:], y[j:], z[k:])
    return 0

def luckyThree(x, y, z):
    global A, i, j, k
    while i < len(x) and j < len(y) and k < len(z):
        while y[j] != z[k]:
            while (z[k] % y[j] == 0) and (y[j] % x[i] == 0):
                A += 1
                print('idr aya')
                k += 1
            luckyThree(x[i:], y[j:], z[k:])
        k += 1
        luckyThree(x[i:], y[j:], z[k:])
    return 0
The input should be like L=['1','2','3']

This is the fastest version I can come up with:
def foo(lst):
    edges = {x: [y for y in lst if x != y and y % x == 0] for x in set(lst)}
    return sum(len(edges[y]) for x in lst for y in edges[x])
This should be significantly faster (1/7th the time in my test of lists with 100 elements).
The algorithm is essentially to build a directed graph where the nodes are the numbers in the list. Edges go from node A to node B iff the integer values of those nodes are different and A divides evenly into B.
Then traverse the graph. For each starting node A, find all the nodes B where there's an edge from A to B. On paper, we would then go to all the next nodes C, but we don't need to... we can just count how many edges are leaving node B and add that to our total.
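For a small concrete example (my addition, not from the original answer; the exact dict ordering may vary because it iterates over a set):
lst = [1, 2, 3, 4]
edges = {x: [y for y in lst if x != y and y % x == 0] for x in set(lst)}
print(edges)     # e.g. {1: [2, 3, 4], 2: [4], 3: [], 4: []}
print(foo(lst))  # 1 -- the single chain 1 -> 2 -> 4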
EDIT
Depending on the distribution of values in the list, this is probably faster:
from collections import Counter

def foo(lst):
    counts = Counter(lst)
    edges = {x: [y for y in counts if x != y and y % x == 0] for x in counts}
    return sum(counts[x] * counts[y] * sum(counts[z] for z in edges[y]) for x in counts for y in edges[x])
Here, you can think of nodes as having a numeric value and a count. This avoids duplicate nodes for duplicate values in the input. Then we basically do the same thing but multiply by the appropriate count at each step.
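As a small illustration of the counting (again my own example, assuming the Counter-based foo above), a duplicated middle value is handled by the counts rather than by duplicate nodes:
from collections import Counter

lst = [1, 2, 2, 4]
print(Counter(lst))  # Counter({2: 2, 1: 1, 4: 1})
# edges over unique values: {1: [2, 4], 2: [4], 4: []}
print(foo(lst))      # 2 -- the chain 1 -> 2 -> 4, counted once per copy of 2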
EDIT 2
import collections
import itertools

def foo(lst):
    counts = collections.Counter(lst)
    edges = collections.defaultdict(list)
    for x, y in itertools.combinations(sorted(counts), 2):
        if y % x == 0:
            edges[x].append(y)
    return sum(counts[x] * counts[y] * sum(counts[z] for z in edges[y]) for x in counts for y in edges[x])
Slight improvement thanks to @Blckknght. Sorting the unique values first saves some time in enumeration.
EDIT 3
See comments, but the original code in this question was actually wrong. Here's code that (I think) does the right thing based on the problem description which can be found in the comments:
def foo3(lst):
    count = 0
    for x, y, z in itertools.combinations(lst, 3):
        if y % x == 0 and z % y == 0:
            count += 1
    return count

print(foo3([1, 2, 3, 4, 5, 6]))  # 3
print(foo3([6, 5, 4, 3, 2, 1]))  # 0
EDIT 4
Much faster version of the previous code:
def foo4(lst):
    edges = [[] for _ in range(len(lst))]
    for i, j in itertools.combinations(range(len(lst)), 2):
        if lst[j] % lst[i] == 0:
            edges[i].append(j)
    return sum(len(edges[j]) for i in range(len(lst)) for j in edges[i])
EDIT 5
More compact version (seems to run in about the same amount of time):
def foo5(lst):
    edges = [[j for j in range(i + 1, len(lst)) if lst[j] % lst[i] == 0] for i in range(len(lst))]
    return sum(len(edges[j]) for i in range(len(lst)) for j in edges[i])

Here's how I'd solve your problem. It should use O(N**2) time.
import collections
import itertools

def count_triple_multiples(lst):
    count = collections.Counter(lst)
    double_count = collections.defaultdict(int)
    for x, y in itertools.combinations(sorted(count), 2):
        if y % x == 0:
            double_count[y] += count[x] * count[y]
    triple_count = 0
    for x, y in itertools.combinations(sorted(double_count), 2):
        if y % x == 0:
            triple_count += double_count[x] * count[y]
    return triple_count
My algorithm is very similar to the one smarx is using in his answer, but I keep a count of the number of edges incident to a given value rather than a list.
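A quick cross-check against the combinations-based foo3 above (assuming both functions are in scope):
print(count_triple_multiples([1, 2, 3, 4, 5, 6]))  # 3, same as foo3([1, 2, 3, 4, 5, 6])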

As far as speeding things up goes (instead of going recursive): testing with 1000 entries, slicing the sorted list at each level cut the time by more than half for me (it gets rid of numbers less than y and z at their respective levels):
def foo(L):
    assert 0 not in L
    L = sorted(L)
    A = 0
    for i, x in enumerate(L):
        for j, y in enumerate(L[i + 1:]):
            if x != y and not y % x:
                for k, z in enumerate(L[i + j + 2:]):
                    if y != z and not z % y:
                        A = A + 1
    return A
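A quick sanity check with the same sample used for foo3 above (note the assert: the input must not contain 0):
print(foo([1, 2, 3, 4, 5, 6]))  # 3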


I am merging two lists in ascending order

If either i or j reaches the end of its list's range, how do I copy the remainder of the other
list to the merged list?
See the linked image if the question is unclear: https://ibb.co/m9JzBYp
def list_merge(x, y):
    merge = []
    i = 0
    j = 0
    total = len(x) + len(y)
    while len(merge) != total:
        if x[i] < y[j]:
            merge.append(x[i])
            i += 1
            if i >= len(x):
                # how do I copy the remainder here?
        else:
            merge.append(y[j])
            j += 1
            if j >= len(y):
                # how do I copy the remainder here?
    return merge
EDIT: The OP wanted the code structured in a specific way; please see the second snippet.
Don't overcomplicate your code; just write it the way you would do the merge by hand.
def list_merge(x, y):
    merged_list = []
    i = 0
    j = 0
    # when one of them is exhausted, break out
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            merged_list.append(x[i])
            i += 1
        else:
            merged_list.append(y[j])
            j += 1
    # if you are here, that means one of the lists is done,
    # so check the bounds of both lists and append the one which is not fully traversed.
    # x may have some elements left to add;
    # check how extend works... it makes your code clean
    if i != len(x):
        merged_list.extend(x[i:])
    else:
        merged_list.extend(y[j:])
    return merged_list
a = [1,3,5,7,10]
b = [2,4,6,8,100]
print(list_merge(a,b))
Output
[1, 2, 3, 4, 5, 6, 7, 8, 10, 100]
What OP needed
def list_merge(x, y):
    merge = []
    i = 0
    j = 0
    total = len(x) + len(y)
    while len(merge) != total:
        if x[i] < y[j]:
            merge.append(x[i])
            i += 1
            if i >= len(x):
                merge.extend(y[j:])
        else:
            merge.append(y[j])
            j += 1
            if j >= len(y):
                merge.extend(x[i:])
    return merge
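Using the same test lists as above, this version should give the same result:
a = [1, 3, 5, 7, 10]
b = [2, 4, 6, 8, 100]
print(list_merge(a, b))  # [1, 2, 3, 4, 5, 6, 7, 8, 10, 100]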
A quite simple form:
def merge(*lists_in):
    list_in = []
    for l in lists_in:
        list_in += l
    i = 0
    while True:
        if i == len(list_in) - 1:
            break
        if list_in[i] > list_in[i + 1]:
            list_in[i], list_in[i + 1] = list_in[i + 1], list_in[i]
            i = 0
        else:
            i += 1
    return list_in
Testing it:
list1 = [1,4]
list2 = [1,5,6]
list3 = [3,7,9,10]
print(merge(list1, list2, list3))
Out:
[1, 1, 3, 4, 5, 6, 7, 9, 10]
This can be solved rather nicely using deques, which allow you to efficiently look at and remove the first element.
from collections import deque

# A generator function that interleaves two sorted deques
# into a single sorted sequence.
def merge_deques(x, y):
    while x and y:
        yield (x if x[0] <= y[0] else y).popleft()
    # When we reach this point, one of x or y is empty,
    # so one of these doesn't yield any values.
    yield from x
    yield from y

# Makes a list by consuming the merged sequence of two deques.
def list_merge(x, y):
    return list(merge_deques(deque(x), deque(y)))
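A small usage check (my addition), using the same sample lists as earlier:
print(list_merge([1, 3, 5, 7, 10], [2, 4, 6, 8, 100]))
# [1, 2, 3, 4, 5, 6, 7, 8, 10, 100]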

find the number of subarrays of an array with XOR sum

You are given the following array A. We need to calculate the total number of sub-arrays whose XOR sum X satisfies the condition (X+1) = (X^1). Here is my solution:
def getTotalXorOfSubarrayXors(arr, N):
    X = 0
    count = 0
    for i in range(0, N):
        for j in range(i, N):
            for k in range(i, j + 1):
                X = X ^ arr[k]
            if (X + 1) == (X ^ 1):
                count += 1
            X = 0
    return count
arr = [3, 5, 2, 4, 6]
N = len(arr)
print(getTotalXorOfSubarrayXors(arr, N))
But this solution has a time complexity of O(n^3), which exceeds my time limit for large arrays. Is there any way I can optimize this code to reduce the time complexity?
The condition (X+1) = (X^1) just means X must be even. So just count the even xors by using prefix-xor-counts. Takes O(n) time and O(1) space.
def getTotalXorOfSubarrayXors(A, _):
    X = 0
    counts = [1, 0]
    total = 0
    for a in A:
        X ^= a & 1
        total += counts[X]
        counts[X] += 1
    return total
Operation X ^ 1 changes the last bit of a number. So ****1 becomes ****0 and vice versa.
So we can see that for odd values of X, X ^ 1 is one less than X, but for even values of X, X ^ 1 is one larger than X - just what we need.
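A quick throwaway check of that claim (not part of the original answer):
for X in range(6):
    print(X, X + 1, X ^ 1)
# X + 1 == X ^ 1 for X = 0, 2, 4 (even); they differ for X = 1, 3, 5 (odd)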
Now we can count the subarrays with an even xor-sum. Note that we keep track of how many odd and even xor-sums we have already seen for prefixes starting at index zero:
def Xors(arr, N):
    oddcnt = 0
    evencnt = 0
    res = 0
    x = 0
    for p in arr:
        x ^= p
        if x % 2:
            res += oddcnt
            oddcnt += 1
        else:
            evencnt += 1
            res += evencnt
    return res
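As a quick sanity check on the question's sample array (assuming Xors as defined above; both O(n) versions should agree with the brute force):
arr = [3, 5, 2, 4, 6]
print(Xors(arr, len(arr)))  # 10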

Finding first pair of numbers in array that sum to value

I'm trying to solve the following Codewars problem: https://www.codewars.com/kata/sum-of-pairs/train/python
Here is my current implementation in Python:
def sum_pairs(ints, s):
    right = float("inf")
    n = len(ints)
    m = {}
    dup = {}
    for i, x in enumerate(ints):
        if x not in m.keys():
            m[x] = i  # Track first index of x using hash map.
        elif x in m.keys() and x not in dup.keys():
            dup[x] = i
    for x in m.keys():
        if s - x in m.keys():
            if x == s - x and x in dup.keys():
                j = m[x]
                k = dup[x]
            else:
                j = m[x]
                k = m[s - x]
            comp = max(j, k)
            if comp < right and j != k:
                right = comp
    if right > n:
        return None
    return [s - ints[right], ints[right]]
The code seems to produce correct results; however, the input can consist of arrays with up to 10,000,000 elements, so the execution times out for large inputs. I need help optimizing/modifying the code so that it can handle sufficiently large arrays.
Your code is inefficient for large list test cases, so it gives a timeout error. Instead, you can do:
def sum_pairs(lst, s):
    seen = set()
    for item in lst:
        if s - item in seen:
            return [s - item, item]
        seen.add(item)
We put the values in seen until we find a value that produces the specified sum with one of the seen values.
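For example, a quick check with a small sample (my own, assuming the function above):
print(sum_pairs([4, 3, 2, 3, 4], 6))  # [4, 2]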
Maybe this code:
def sum_pairs(lst, s):
    c = 0
    while c < len(lst) - 1:
        if c != len(lst) - 1:
            x = lst[c]
            spam = c + 1
            while spam < len(lst):
                nxt = lst[spam]
                if nxt + x == s:
                    return [x, nxt]
                spam += 1
        else:
            return None
        c += 1
lst = [5, 6, 5, 8]
s = 14
print(sum_pairs(lst, s))
Output:
[6, 8]
This answer unfortunately still times out, even though it's supposed to run in O(n log n) (since it is dominated by the sort, with the rest of the algorithm running in O(n)). I'm not sure how you can obtain better than this complexity, but I thought I might put this idea out there.
def sum_pairs(ints, s):
    ints_with_idx = enumerate(ints)
    # Sort the array of ints
    ints_with_idx = sorted(ints_with_idx, key=lambda idx_num: idx_num[1])
    diff = 1000000
    l = 0
    r = len(ints) - 1
    # Indexes of the sum operands in sorted array
    lSum = 0
    rSum = 0
    while l < r:
        # Compute the absolute difference between the current sum and the desired sum
        sum = ints_with_idx[l][1] + ints_with_idx[r][1]
        absDiff = abs(sum - s)
        if absDiff < diff:
            # Update the best difference
            lSum = l
            rSum = r
            diff = absDiff
        elif sum > s:
            # Decrease the large value
            r -= 1
        else:
            # Test to see if the indexes are better (more to the left) for the same difference
            if absDiff == diff:
                rightmostIdx = max(ints_with_idx[l][0], ints_with_idx[r][0])
                if rightmostIdx < max(ints_with_idx[lSum][0], ints_with_idx[rSum][0]):
                    lSum = l
                    rSum = r
            # Increase the small value
            l += 1
    # Retrieve indexes of sum operands
    aSumIdx = ints_with_idx[lSum][0]
    bSumIdx = ints_with_idx[rSum][0]
    # Retrieve values of operands for sum in correct order
    aSum = ints[min(aSumIdx, bSumIdx)]
    bSum = ints[max(aSumIdx, bSumIdx)]
    if aSum + bSum == s:
        return [aSum, bSum]
    else:
        return None

shorten a list of integers by sums of contiguous positive or negative numbers

I would like to write a function to process a list of integers; the best way to explain is with an example:
input [0,1,2,3, -1,-2,-3, 0,1,2,3, -1,-2,-3] will return [6,-6,6,-6]
I have a draft here that will actually work:
def group_pos_neg_list(nums):
    p_nums = []
    # to determine if the first element >= 0 or < 0
    # create pos_combined and neg_combined as lists to check the length in the future
    if nums[0] >= 0:
        pos_combined, neg_combined = [nums[0]], []
    elif nums[0] < 0:
        pos_combined, neg_combined = [], [nums[0]]
    # loop over each element from position 1 to the end
    # accumulate pos nums and neg nums and reset if the next element is different
    index = 1
    while index < len(nums):
        if nums[index] >= 0 and nums[index-1] >= 0:  # both positive
            pos_combined.append(nums[index])
            index += 1
        elif nums[index] < 0 and nums[index-1] < 0:  # both negative
            neg_combined.append(nums[index])
            index += 1
        else:
            if len(pos_combined) > 0:
                p_nums.append(sum(pos_combined))
                pos_combined, neg_combined = [], [nums[index]]
            elif len(neg_combined) > 0:
                p_nums.append(sum(neg_combined))
                pos_combined, neg_combined = [nums[index]], []
            index += 1
    # finish the last combined group
    if len(pos_combined) > 0:
        p_nums.append(sum(pos_combined))
    elif len(neg_combined) > 0:
        p_nums.append(sum(neg_combined))
    return p_nums
But I am not quite happy with it, because it looks a bit complicated.
Especially since there is a repeating piece of code:
if len(pos_combined) > 0:
    p_nums.append(sum(pos_combined))
    pos_combined, neg_combined = [], [nums[index]]
elif len(neg_combined) > 0:
    p_nums.append(sum(neg_combined))
    pos_combined, neg_combined = [nums[index]], []
I have to write this twice, as the final group of integers will not be counted in the loop, so an extra step is needed.
Is there any way to simplify this?
Using groupby
No need to make it that complex: we can first group the elements by sign (with groupby), and then calculate the sum of each group, so:
from itertools import groupby
[sum(g) for _, g in groupby(data, lambda x: x >= 0)]
This then produces:
>>> from itertools import groupby
>>> data = [0,1,2,3, -1,-2,-3, 0,1,2,3, -1,-2,-3]
>>> [sum(g) for _, g in groupby(data, lambda x: x >= 0)]
[6, -6, 6, -6]
So groupby produces tuples with the "key" (the part we calculate with the lambda), and an iterable of the "burst" (a continuous subsequence of elements with the same key). We are only interested in the latter g, and then calculate sum(g) and add that to the list.
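To see what groupby actually yields for this data (an illustrative peek, assuming data as above):
>>> [(k, list(g)) for k, g in groupby(data, lambda x: x >= 0)]
[(True, [0, 1, 2, 3]), (False, [-1, -2, -3]), (True, [0, 1, 2, 3]), (False, [-1, -2, -3])]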
Custom made algorithm
We can also write our own version, by using:
swap_idx = [0]
swap_idx += [i + 1 for i, (v1, v2) in enumerate(zip(data, data[1:]))
             if (v1 >= 0) != (v2 >= 0)]
swap_idx.append(None)
our_sums = [sum(data[i:j]) for i, j in zip(swap_idx, swap_idx[1:])]
Here we first construct a list swap_idx that stores the indices of the elements where the sign changes. So for your sample data that is:
>>> swap_idx
[0, 4, 7, 11, None]
The 0 and None are added by the code explicitly. Now that we have identified the points where the sign changes, we can sum the subsequences between them with sum(data[i:j]). We use zip(swap_idx, swap_idx[1:]) to obtain pairs of consecutive indices and sum each corresponding slice.
More verbose version
The above is not very readable: yes it works, but it requires some reasoning. We can also produce a more verbose version, and make it even more generic, for example:
def groupby_aggregate(iterable, key=lambda x: x, aggregate=list):
    itr = iter(iterable)
    nx = next(itr)
    kx = kxcur = key(nx)
    current = [nx]
    try:
        while True:
            nx = next(itr)
            kx = key(nx)
            if kx != kxcur:
                yield aggregate(current)
                current = [nx]
                kxcur = kx
            else:
                current.append(nx)
    except StopIteration:
        yield aggregate(current)
We can then use it like:
list(groupby_aggregate(data, lambda x: x >= 0, sum))
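which, assuming data as defined earlier, should again give:
>>> list(groupby_aggregate(data, lambda x: x >= 0, sum))
[6, -6, 6, -6]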
You can use itertools.groupby, utilizing a key to group by all the values greater than or equal to zero:
import itertools
s = [0,1,2,3, -1,-2,-3, 0,1,2,3, -1,-2,-3]
new_s = [sum(b) for a, b in itertools.groupby(s, key=lambda x: x >=0)]
Output:
[6, -6, 6, -6]
Here is a way to do it without any external imports, using only reduce() (which lives in functools in Python 3):
from functools import reduce

def same_sign(a, b):
    """Returns True if a and b have the same sign"""
    return (a*b > 0) or (a >= 0 and b >= 0)

l = [0,1,2,3, -1,-2,-3, 0,1,2,3, -1,-2,-3]

reduce(
    lambda x, y: (x+y if same_sign(x, y) else [x, y]) if not isinstance(x, list) else x[:-1] + [x[-1] + y] if same_sign(x[-1], y) else x + [y],
    l
)
# [6, -6, 6, -6]
Explanation
This is a bit hard to explain, but I'll try.
From the docs calling reduce() will:
Apply function of two arguments cumulatively to the items of iterable, from left to right
In this case I take two values (x and y) from your list and do the following:
If x is not a list:
If x and y have the same sign (product >=0), sum them
Otherwise return a list [x, y]
If x is a list, only modify the last element of x.
If the signs match, add y.
Otherwise append a new element to the list x
Note
You probably shouldn't do it this way because the code is hard to read and understand. I just wanted to show that it was possible.
Update
A more readable version of the same code above:
def same_sign(a, b):
    """Returns True if a and b have the same sign"""
    return (a*b > 0) or (a >= 0 and b >= 0)

l = [0,1,2,3, -1,-2,-3, 0,1,2,3, -1,-2,-3]

def reducer(x, y):
    if isinstance(x, list):
        if same_sign(x[-1], y):
            return x[:-1] + [x[-1] + y]
        else:
            return x + [y]
    else:
        if same_sign(x, y):
            return x + y
        else:
            return [x, y]

reduce(reducer, l)
# [6, -6, 6, -6]

Combining two lists into one list where they share the same values and removing duplicates using list comprehensions

I am trying to combine two lists:
One holds Square numbers.
The other stores Pentagonal Numbers.
def pentaSquares():
    l = []
    n = 0
    squares = lambda x: [x*x for x in range(n)]
    penta = lambda y: [y*(3*y-1)//2 for y in range(n)]
    while l.index < 4:
        l = [i for i in squares for j in penta if squares == penta]
        n = n + 1
    return l
I must merge these lists using List Comprehensions where their values match until there are 4 elements in the list.
If somebody could point me in the right direction, that would be much appreciated.
I am currently getting this error: TypeError: unorderable types: builtin_function_or_method() < int()
Using a pair of generators should give you this answer without taking up all the memory in the world. This should work nicely (though perhaps take a very long time) for any resultant list size.
import itertools

squares = (x*x for x in itertools.count(0))
pentas = (y * (3*y-1) // 2 for y in itertools.count(0))
results = []
cur_s, cur_p = next(squares), next(pentas)  # prime the pump
while len(results) < 4:
    if cur_s == cur_p:
        # success
        results.append(cur_s)
    # advance the generator with the smaller current result
    if cur_s > cur_p:
        cur_p = next(pentas)
    else:
        cur_s = next(squares)
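If you then print results, you should see the first four pentagonal square numbers:
print(results)  # [0, 1, 9801, 94109401]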
There's no reason to use list comprehensions for this task, but if you had to, you could use the list -> set and set-intersection approach in cricket_007's now-deleted answer:
for n in itertools.count(0):
    squares = [x * x for x in range(n)]
    pentas = [y * (3*y-1) // 2 for y in range(n)]
    result = set(squares).intersection(set(pentas))
    if len(result) >= 4:
        break
def pentaSquares(n):
    squarlist = [x*x for x in range(n)]
    pentalist = [y * (3*y-1) // 2 for y in range(n)]
    l = [x for x in squarlist if x in pentalist]
    return l

>>> pentaSquares(10000)
[0, 1, 9801, 94109401]
EDIT 1 (to address the OP's requirements)
def pentaSquares(n):
    squarlist = []
    pentalist = []
    squares = lambda x: x*x
    penta = lambda y: y*(3*y-1)//2
    for i in range(n):
        squarlist.append(squares(i))
        pentalist.append(penta(i))
    l = [x for x in squarlist if x in pentalist]
    if len(l) < 4:
        print('there are less than 4 values, input a larger n')
    return l
