How to generate natural products in order? - python

You want a list of the products n x m, in order, where both n and m are natural numbers and 1 < (n x m) < upper_limit, say upper_limit = 100. Additionally, neither n nor m can be bigger than the square root of the upper limit (therefore n <= 10 and m <= 10).
The most straightforward thing to do would be to generate all the products with a list comprehension and then sort the result.
sorted(n*m for n in range(1, 11) for m in range(1, n + 1))
However, when upper_limit becomes very big this is not very efficient, especially if the objective is to find only one number matching certain criteria (e.g. find the max product such that ... -> I would want to generate the products in descending order, test them, and stop the whole process as soon as I find the first one that satisfies the criteria).
So, how can I generate these products in order?
The first thing I did was to start from upper_limit and go backwards one by one, performing a double test:
- checking if the number can be a product of n and m
- checking for the criteria
Again, this is not very efficient ...
Any algorithm that solves this problem?

I found a slightly more efficient solution to this problem.
For a and b natural numbers, let:
S = a + b
D = abs(a - b)
If S is held constant, the smaller D is, the bigger a*b is, since a*b = (S^2 - D^2) / 4.
For each S (taken in decreasing order) it is therefore possible to iterate through all the possible tuples (a, b) with increasing D.
First I test the external condition; if the product a*b satisfies it, I keep iterating through (a, b) tuples with smaller S and increasing D, to check whether some other pair satisfies the same condition with a bigger a*b. I stop the iteration as soon as I accept a tuple with D == 0 or 1 (because in that case no tuple with smaller S can have a higher product).
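
As an aside, a lazy max-heap merge is another common way to get the products in strictly descending order, so a search can stop at the first product that passes a test. A minimal sketch, using the n, m <= sqrt(upper_limit) convention from the question:

import heapq

def products_descending(limit):
    """Yield every product n*m with 1 <= m <= n <= limit, largest first."""
    # One heap entry per n, holding its largest unseen product n*m
    # (a max-heap simulated by negating the keys).
    heap = [(-n * n, n, n) for n in range(1, limit + 1)]
    heapq.heapify(heap)
    while heap:
        neg, n, m = heapq.heappop(heap)
        yield -neg
        if m > 1:
            # Replace the popped entry with the next-smaller product for this n.
            heapq.heappush(heap, (-n * (m - 1), n, m - 1))

# Example with a hypothetical criterion: the largest product divisible by 7.
print(next(p for p in products_descending(10) if p % 7 == 0))  # 70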

The following code will check all the possible combinations without repetition and stop when the condition is met (required_condition below is a placeholder for your actual test). It relies on Python's for ... else: if the break in the inner loop executes, the else clause is skipped and the outer break executes as well; otherwise the continue statement runs and the outer loop moves on.
from math import sqrt

n = m = round(sqrt(int(input("Enter upper limit"))))
for i in range(n, 0, -1):
    for j in range(i - 1, 0, -1):
        if required_condition(i, j):  # placeholder: substitute your actual test
            n = i
            m = j
            break
    else:
        continue
    break


Trying to understand the time complexity of this dynamic recursive subset sum

# Returns true if there exists a subsequence of `A[0…n]` with the given sum
def subsetSum(A, n, k, lookup):
    # return true if the sum becomes 0 (subset found)
    if k == 0:
        return True
    # base case: no items left, or sum becomes negative
    if n < 0 or k < 0:
        return False
    # construct a unique key from dynamic elements of the input
    key = (n, k)
    # if the subproblem is seen for the first time, solve it and
    # store its result in a dictionary
    if key not in lookup:
        # Case 1. Include the current item `A[n]` in the subset and recur
        # for the remaining items `n-1` with the decreased total `k-A[n]`
        include = subsetSum(A, n - 1, k - A[n], lookup)
        # Case 2. Exclude the current item `A[n]` from the subset and recur for
        # the remaining items `n-1`
        exclude = subsetSum(A, n - 1, k, lookup)
        # assign true if we get subset by including or excluding the current item
        lookup[key] = include or exclude
    # return solution to the current subproblem
    return lookup[key]

if __name__ == '__main__':
    # Input: a set of items and a sum
    A = [7, 3, 2, 5, 8]
    k = 14
    # create a dictionary to store solutions to subproblems
    lookup = {}
    if subsetSum(A, len(A) - 1, k, lookup):
        print('Subsequence with the given sum exists')
    else:
        print('Subsequence with the given sum does not exist')
It is said that the complexity of this algorithm is O(n * sum), but I can't understand how or why;
can someone help me? A wordy explanation or a recurrence relation, anything is fine.
The simplest explanation I can give is to realize that when lookup[(n, k)] has a value, it is True or False and indicates whether some subset of A[:n+1] sums to k.
Imagine a naive algorithm that just fills in all the elements of lookup row by row.
lookup[(0, i)] (for 0 ≤ i ≤ total) has just two true elements, i = A[0] and i = 0; all the other elements are false.
lookup[(1, i)] (for 0 ≤ i ≤ total) is true if lookup[(0, i)] is true, or if i ≥ A[1] and lookup[(0, i - A[1])] is true. I can reach the sum i either by using A[1] or not, and I've already calculated both of those.
...
lookup[(r, i)] (for 0 ≤ i ≤ total) is true if lookup[(r - 1, i)] is true, or if i ≥ A[r] and lookup[(r - 1, i - A[r])] is true.
Filling in the table this way, it is clear that we can completely fill the lookup table for rows 0 ≤ row < len(A) in time len(A) * total, since filling in each element takes constant time. And our final answer is just checking whether lookup[(len(A) - 1, sum)] is True.
Your program is doing the exact same thing, but calculating the value of entries of lookup as they are needed.
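To make the row-by-row picture concrete, here is a minimal bottom-up sketch of that naive table-filling algorithm (the function name is illustrative):

def subset_sum_table(A, total):
    n = len(A)
    # table[r][i] is True iff some subset of A[:r+1] sums to i
    table = [[False] * (total + 1) for _ in range(n)]
    table[0][0] = True                  # the empty subset
    if A[0] <= total:
        table[0][A[0]] = True           # the subset {A[0]}
    for r in range(1, n):
        for i in range(total + 1):
            # reach sum i either without A[r], or by using it
            table[r][i] = table[r - 1][i] or (i >= A[r] and table[r - 1][i - A[r]])
    return table[n - 1][total]

print(subset_sum_table([7, 3, 2, 5, 8], 14))  # True (7 + 5 + 2 == 14)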
Sorry for submitting two answers. I think I came up with a slightly simpler explanation.
Take your code and imagine putting the three lines inside if key not in lookup: into a separate function, calculateLookup(A, n, k, lookup). Define the cost of calling calculateLookup for a specific n and k to be the total time spent in the call to calculateLookup(A, n, k, lookup), excluding any recursive calls to calculateLookup.
The key insight is that, with this definition, the cost of calling calculateLookup() for any n and k is O(1). Since we exclude recursive calls from the cost, and there are no for loops, the cost of calculateLookup is just the cost of executing a few tests.
The entire algorithm does a fixed amount of work, calls calculateLookup, and then does a small amount of work. Hence the time spent in our code comes down to asking: how many times do we call calculateLookup?
Now we're back to the previous answer. Because of the lookup table, every call to calculateLookup is made with a different value of (n, k). We also know that we check the bounds of n and k before each call, so 1 ≤ k ≤ sum and 0 ≤ n < len(A). So calculateLookup is called at most len(A) * sum times.
In general, for these algorithms that use memoization/caching, the easiest thing to do is to separately calculate and then add up:
- how long things take assuming all values you need are cached
- how long it takes to fill the cache
The algorithm you presented is just filling up the lookup cache. It's doing it in an unusual order, and it's not filling every entry in the table, but that's all it's doing.
The code would be slightly faster with
lookup[key] = subsetSum(A, n - 1, k - A[n], lookup) or subsetSum(A, n - 1, k, lookup)
because or short-circuits: the second recursive call is skipped whenever the first one already returned True. It doesn't change the O() of the code in the worst case, but it can avoid some unnecessary calculations.

Fast Python algorithm for random partitioning with subset sums equal or close to given ratios

This question is an extension of my previous question: Fast python algorithm to find all possible partitions from a list of numbers that has subset sums equal to a ratio. I want to divide a list of numbers so that the ratios of the subset sums equal given values. The difference is that now I have a long list of 200 numbers, so that enumeration is infeasible. Note that although there are of course equal numbers in the list, every element is distinguishable.
import random
lst = [random.randrange(10) for _ in range(200)]
In this case, I want a function to stochastically sample a certain number of partitions with subset sums equal or close to the given ratios. This means the solution can be sub-optimal, but I need the algorithm to be fast enough. I guess a greedy algorithm will do. With that being said, of course it would be even better if there is a relatively fast algorithm that can give the optimal solution.
For example, I want to sample 100 partitions, all with subset sum ratios of 4 : 3 : 3. Duplicate partitions are allowed but should be very unlikely for such a long list. The function should be used like this:
partitions = func(numbers=lst, ratios=[4, 3, 3], num_gen=100)
To test the solution, you can do something like:
from math import isclose

eps = 0.05
assert all([isclose(ratios[i] / sum(ratios), sum(x) / sum(lst), abs_tol=eps)
            for part in partitions for i, x in enumerate(part)])
Any suggestions?
You can use a greedy heuristic where you generate each partition from one of num_gen random permutations of the list. Each random permutation is partitioned into len(ratios) contiguous sublists. The fact that the partition subsets are sublists of a permutation makes enforcing the ratio condition very easy to do during sublist generation: as soon as the sum of the sublist we are currently building reaches one of the ratios, we "complete" the sublist, add it to the partition and start creating a new sublist. We can do this in one pass through the entire permutation, giving us the following algorithm of time complexity O(num_gen * len(lst)).
M = 100
N = len(lst)
P = len(ratios)
R = sum(ratios)
S = sum(lst)
partitions = []
for _ in range(M):
    # get a new random permutation
    random.shuffle(lst)
    partition = []
    # starting index (in the permutation) of the current sublist
    lo = 0
    # permutation partial sum
    s = 0
    # index of sublist we are currently generating (i.e. what ratio we are on)
    j = 0
    # ratio partial sum
    rs = ratios[j]
    for i in range(N):
        s += lst[i]
        # if ratio of permutation partial sum exceeds ratio of ratio partial sum,
        # the current sublist is "complete"
        if s / S >= rs / R:
            partition.append(lst[lo:i + 1])
            # start creating new sublist from next element
            lo = i + 1
            j += 1
            if j == P:
                # done with partition
                # remaining elements will always all be zeroes
                # (i.e. assert should never fail)
                assert all(x == 0 for x in lst[i + 1:])
                partition[-1].extend(lst[i + 1:])
                break
            rs += ratios[j]
    # collect the finished partition
    partitions.append(partition)
Note that the outer loop can be redesigned to loop indefinitely until num_gen good partitions are generated (rather than just looping num_gen times) for more robustness. This algorithm is expected to produce M good partitions in O(M) iterations (provided random.shuffle is sufficiently random) if the number of good partitions is not too small compared to the total number of partitions of the same size, so it should perform well for most inputs. For an (almost) uniformly random list like [random.randrange(10) for _ in range(200)], every iteration produces a good partition with eps = 0.05, as is evident by running the example below. Of course, how well the algorithm performs will also depend on the definition of 'good': the stricter the closeness requirement (i.e. the smaller the epsilon), the more iterations it will take to find a good partition. The algorithm will work for any input (assuming random.shuffle eventually produces all permutations of the input list).
You can find a runnable version of the code (with asserts to test how "good" the partitions are) here.
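
For completeness, the loop above can be packaged to match the call signature requested in the question (a sketch; func is the name the question uses):

import random

def func(numbers, ratios, num_gen):
    lst = list(numbers)
    P, R, S = len(ratios), sum(ratios), sum(lst)
    partitions = []
    for _ in range(num_gen):
        random.shuffle(lst)
        partition, lo, s, j, rs = [], 0, 0, 0, ratios[0]
        for i, x in enumerate(lst):
            s += x
            if s / S >= rs / R:  # current sublist is "complete"
                partition.append(lst[lo:i + 1])
                lo, j = i + 1, j + 1
                if j == P:
                    partition[-1].extend(lst[i + 1:])  # trailing zeroes, if any
                    break
                rs += ratios[j]
        partitions.append(partition)
    return partitions

partitions = func(numbers=[random.randrange(10) for _ in range(200)],
                  ratios=[4, 3, 3], num_gen=100)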

Fastest way to find sub-lists of a fixed length, from a given list of values, whose elements sum equals a defined number

In Python 3.6, suppose that I have a list of numbers L, and that I want to find all possible sub-lists S of a given pre-chosen length |S|, such that:
- any S has to have length smaller than L, that is |S| < |L|
- any S can only contain numbers present in L
- numbers in S do not have to be unique (they can appear repeatedly)
- the sum of all numbers in S should be equal to a pre-determined number N
A trivial solution for this can be found using the Cartesian Product with itertools.product. For example, suppose L is a simple list of all integers between 1 and 10 (inclusive) and |S| is chosen to be 3. Then:
import itertools
L = range(1,11)
N = 8
Slength = 3
result = [list(seq) for seq in itertools.product(L, repeat=Slength) if sum(seq) == N]
However, as larger lists L are chosen, and/or larger |S|, the above approach becomes extremely slow. In fact, even for L = range(1,101) with |S| = 5 and N = 80, the computer almost freezes and it takes approximately an hour to compute the result.
My take is that:
- there are a lot of unnecessary computations going on under the hood, given the condition that sub-lists should sum to N
- there are a ton of cache misses due to iterating over possibly millions of lists generated by itertools.product, just to keep very few of them
So, my question/challenge is: is there a way I can do this in a more computationally efficient way? Unless we are talking hundreds of Gigabytes, speed to me is more critical than memory, so the challenge focuses more on speed, even if considerations for memory efficiency are a welcome bonus.
So, given an input list and a target length and sum, you want all the permutations of the numbers in the input list such that:
- the sum equals the target sum
- the length equals the target length
The following code should be faster:
# Input
input_list = range(1, 101)

# Targets
target_sum = 15
target_length = 5

# Available numbers
numbers = set(input_list)

# Initialize the stack
stack = [[num] for num in numbers]
result = []

# Loop until we run out of permutations
while stack:
    # Get a permutation from the stack
    current = stack.pop()
    # If it's too short
    if len(current) < target_length:
        # And the sum is too small
        if sum(current) < target_sum:
            # Then for each available number
            for num in numbers:
                # Append said number and put the resulting permutation back into the stack
                stack.append(current + [num])
    # If it's not too short and the sum equals the target, add to the result!
    elif sum(current) == target_sum:
        result.append(current)

print(len(result))
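
To sanity-check the search, it can be wrapped in a function (the name is illustrative) and compared against the brute-force itertools.product version on the small instance from the question:

import itertools

def tuples_with_sum(values, target_length, target_sum):
    """Iterative DFS with sum-based pruning; assumes all values are positive,
    so a branch whose sum already reached the target cannot be extended."""
    numbers = set(values)
    stack = [[num] for num in numbers]
    result = []
    while stack:
        current = stack.pop()
        if len(current) < target_length:
            if sum(current) < target_sum:
                for num in numbers:
                    stack.append(current + [num])
        elif sum(current) == target_sum:
            result.append(current)
    return result

# Compare against the brute-force version from the question:
L, N, Slength = range(1, 11), 8, 3
brute = [list(seq) for seq in itertools.product(L, repeat=Slength) if sum(seq) == N]
assert sorted(brute) == sorted(tuples_with_sum(L, Slength, N))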

Finding maximum sum of occurrences of one element in two attempts from a list

Best explained by example. Suppose a Python list is:
[[0,1,2,0,4],
 [0,1,2,0,2],
 [1,0,0,0,1],
 [1,0,0,1,0]]
I want to select the two sub-lists which yield the max sum of occurrences of zeros, where the sum is to be calculated as below:
SUM = number of zeros present in the first selected sub-list + number of zeros present in the second selected sub-list that were not present (i.e. not at the same index) in the first selected sub-list.
In this case the answer is 5 (the first or the second sub-list, together with the last sub-list). (Note that the third sub-list is not to be selected, because its zero at index 3 coincides with a zero in the first/second sub-list; selecting it would give a sum of 4, which is not the maximum once we consider the last sub-list.)
What kind of algorithm is best suited if we were to apply this to a big input? Is there a way to do this in better than O(n^2) time?
Binary operations are fairly useful for this task:
1. Convert each sublist to a binary number, where a 0 is turned into a 1 bit and other numbers are turned into a 0 bit.
For example, [0,1,2,0,4] would be turned into 10010, which is 18.
2. Eliminate duplicate numbers.
3. Create all pairs of the remaining numbers.
4. Combine each pair with a binary OR.
5. Find the combined number with the most 1 bits.
The code:
import itertools

lists = [[0,1,2,0,4],
         [0,1,2,0,2],
         [1,0,0,0,1],
         [1,0,0,1,0]]

def to_binary(lst):
    num = ''.join('1' if n == 0 else '0' for n in lst)
    return int(num, 2)

def count_ones(num):
    return bin(num).count('1')

# Step 1 & 2: Convert to binary and remove duplicates
binary_numbers = {to_binary(lst) for lst in lists}

# Step 3: Create pairs
combinations = itertools.combinations(binary_numbers, 2)

# Step 4 & 5: Compute binary OR and count 1 digits
zeros = (count_ones(a | b) for a, b in combinations)

print(max(zeros))  # output: 5
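
For instance, the winning pair from the example can be traced by hand with the helpers above:

a = to_binary([0, 1, 2, 0, 4])  # zeros at indices 0 and 3 -> 0b10010 == 18
b = to_binary([1, 0, 0, 1, 0])  # zeros at indices 1, 2, 4 -> 0b01101 == 13
print(bin(a | b), count_ones(a | b))  # 0b11111 5: the OR covers every index once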
The efficiency of the naive algorithm is O(n(n-1)*m) ~ O(n^2 * m), where n is the number of lists and m is the length of each list. When n and m are comparable in magnitude, this equates to O(n^3).
It might be helpful to observe that naive matrix multiplication is also O(n^3). This might lead us to the following algorithm:
1. Rewrite each list with only 1's and 0's, where a 1 indicates a non-zero entry.
2. Arrange these lists as the rows of a matrix A.
3. Compute the product M = A A^T.
4. Find the minimum element in M; its row and column correspond to the lists which produce the maximum number of non-overlapping zeros.
Here, step 3 is the limiting step of the algorithm. Asymptotically, depending on your matrix multiplication algorithm, you can achieve a complexity down to roughly O(n^2.4).
An example Python implementation would look like:
import numpy as np

lists = [[0,1,2,0,4],
         [0,1,2,0,2],
         [1,0,0,0,1],
         [1,0,0,1,0]]

# steps 1 & 2: convert to 0/1 indicators and deduplicate
filtered = list(set(tuple(1 if e else 0 for e in sub) for sub in lists))
A = np.array(filtered)

# step 3: D[i, j] = number of positions where rows i and j are both non-zero
D = np.einsum('ik,jk->ij', A, A)

# step 4: the minimum element gives the best pair
indices = np.unravel_index(np.argmin(D), D.shape)
print(f'{indices}: {len(lists[0]) - D[indices]}')  # e.g. (0, 2): 5 -- indices refer to rows of `filtered`
Note that this algorithm on its own has the fundamental inefficiency that it calculates both the lower-triangular and upper-triangular halves of the dot-product matrix. However, the numpy speed-up will probably offset this relative to the combinations approach. See the timing results below:
import itertools
import random
from time import time

def numpy_approach(lists):
    filtered = list(set(tuple(1 if e else 0 for e in sub) for sub in lists))
    A = np.array(filtered, dtype=bool).astype(int)
    D = np.einsum('ik,jk->ij', A, A)
    return len(lists[0]) - D.min()

def itertools_approach(lists):
    binary_numbers = {int(''.join('1' if n == 0 else '0' for n in lst), 2)
                      for lst in lists}
    combinations = itertools.combinations(binary_numbers, 2)
    zeros = (bin(a | b).count('1') for a, b in combinations)
    return max(zeros)

N = 1000
lists = [[random.randint(0, 5) for _ in range(10)] for _ in range(100)]

for name, function in {
        'numpy approach': numpy_approach,
        'itertools approach': itertools_approach
}.items():
    start = time()
    for _ in range(N):
        function(lists)
    print(f'{name}: {time() - start}')

# numpy approach: 0.2698099613189697
# itertools approach: 0.9693171977996826
The algorithm should look something like the following (with Haskell code as an example, so as not to make the process trivial for you in Python):
turn each sublist into "Is zero" or "Isn't zero"
map (map (\x -> if x==0 then 1 else 0)) bigList
Enumerate the list so you can keep indices
enumList = zip [0..] bigList
Compare each sublist with its successive sublists
myCompare = concat . go
  where
    go [] = []
    go ((ix, xs):xss) = [((ix, iy), zipWith (.|.) xs ys) | (iy, ys) <- xss] : go xss
Calculate your maxes
best = maximumBy (compare `on` (sum . snd)) $ myCompare enumList
Pull out the indices
result = fst best

Number of multiples less than the max number

For the following problem on SingPath:
Given an input of a list of numbers and a high number, return the number of multiples of each of those numbers that are less than the maximum number.
For this case the list will contain a maximum of 3 numbers that are all relatively prime to each other.
Here is my code:
def countMultiples(l, max_num):
    counting_list = []
    for i in l:
        for j in range(1, max_num):
            if (i * j < max_num) and (i * j) not in counting_list:
                counting_list.append(i * j)
    return len(counting_list)
Although my algorithm works okay, it gets stuck when the maximum number is way too big:
>>> countMultiples([3],30)
9 #WORKS GOOD
>>> countMultiples([3,5],100)
46 #WORKS GOOD
>>> countMultiples([13,25],100250)
Line 5: TimeLimitError: Program exceeded run time limit.
How to optimize this code?
3 and 5 have some common multiples, like 15.
You should count those multiples only once, and you will get the right answer.
Also, you should check the inclusion-exclusion principle: https://en.wikipedia.org/wiki/Inclusion-exclusion_principle#Counting_integers
EDIT:
The problem can be solved in constant time (for a fixed-size list). As previously linked, the solution is in the inclusion-exclusion principle.
Say you want the number of multiples of 3 that are less than 100: that is floor(99/3) = 33. The same applies for 5: floor(99/5) = 19.
Now, to count the numbers less than 100 that are multiples of 3 or 5, you add the two counts and subtract the numbers counted twice, i.e. the multiples of both, which are the multiples of 15.
So the answer for multiples of 3 and 5 that are less than 100 is floor(99/3) + floor(99/5) - floor(99/15) = 33 + 19 - 6 = 46, matching the expected output above.
If you have more than 2 numbers it gets a bit more complicated, but the same approach applies; for more, check https://en.wikipedia.org/wiki/Inclusion-exclusion_principle#Counting_integers
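A minimal sketch of this counting formula (math.prod needs Python 3.8+; it relies on the problem's guarantee that the numbers are pairwise coprime, so each combination's lcm is just its product):

from itertools import combinations
from math import prod

def count_multiples_ie(l, max_num):
    """Count integers in [1, max_num) divisible by at least one element of l.
    Assumes the elements of l are pairwise coprime, as the problem guarantees,
    so the lcm of any combination is simply its product."""
    total = 0
    for r in range(1, len(l) + 1):
        sign = 1 if r % 2 == 1 else -1  # inclusion-exclusion alternating sign
        for combo in combinations(l, r):
            total += sign * ((max_num - 1) // prod(combo))
    return total

print(count_multiples_ie([3], 30))           # 9
print(count_multiples_ie([3, 5], 100))       # 46
print(count_multiples_ie([13, 25], 100250))  # 11412, computed instantly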
EDIT2:
The loop variant can also be sped up.
Your current algorithm appends each multiple to a list and does a linear membership test on that list, which is very slow.
You can instead switch the inner and outer for loops: iterate over the candidate numbers and check whether any of the divisors divides the current number.
So just add a boolean variable which tells you whether any of your divisors divides the number, and count the times the variable is true.
It would look like this:
def countMultiples(l, max_num):
    nums = 0
    for j in range(1, max_num):
        isMultiple = False
        for i in l:
            if j % i == 0:
                isMultiple = True
        if isMultiple:
            nums += 1
    return nums

print(countMultiples([13, 25], 100250))
If the length of the list is all you need, you'd be better off with a tally and a set (for O(1) membership tests) instead of creating another list.
def countMultiples(l, max_num):
    count = 0
    seen = set()  # multiples already counted; set lookups are O(1)
    for i in l:
        for j in range(1, max_num):
            if (i * j < max_num) and (i * j) not in seen:
                seen.add(i * j)
                count += 1
    return count
