finding the length of the longest subsequence - python

This piece of code prints the length of the longest subsequence of a sequence that is first increasing and then decreasing, or vice versa.
For example:
Input: 1, 11, 2, 10, 4, 5, 2, 1
Output: 6 (a longest such subsequence is 1, 2, 10, 4, 2, 1)
But how can I make it work with three monotonic (increasing or decreasing) regions,
i.e. increasing-decreasing-increasing or decreasing-increasing-decreasing?
example:
input: 7 16 1 6 20 17 7 18 25 1 25 21 11 5 29 11 3 3 26 19
output: 12
(largest subsequence: 7 1 6 17 18 25 25 21 11 5 3 3), which can be split into three regions:
7,1 / 6,17,18,25,25 / 21,11,5,3,3
arr = list(map(int, input().split()))

def lbs(arr):
    n = len(arr)
    lis = [1 for i in range(n + 1)]
    for i in range(1, n):
        for j in range(0, i):
            if arr[i] > arr[j] and lis[i] < lis[j] + 1:
                lis[i] = lis[j] + 1
    lds = [1 for i in range(n + 1)]
    for i in reversed(range(n - 1)):
        for j in reversed(range(i - 1, n)):
            if arr[i] > arr[j] and lds[i] < lds[j] + 1:
                lds[i] = lds[j] + 1
    maximum = lis[0] + lds[0] - 1
    for i in range(1, n):
        maximum = max(lis[i] + lds[i] - 1, maximum)
    return maximum

print("Length of LBS is", lbs(arr))

I've come up with an O(n^2 log n) idea.
You want to divide the whole sequence into three parts: the first containing an increasing subsequence, the second a decreasing one, and the last an increasing one again.
First of all, let's choose a prefix of the sequence as the first part (O(n) possibilities). To minimize the number of checked intervals, you can pick only prefixes whose last element is in their longest increasing subsequence. (In other words, when choosing the range [1, x], a_x should be in its longest increasing subsequence.)
Now you have a problem similar to the one you've already solved: finding a decreasing, then increasing subsequence (I'd use binary search instead of the for loop you used, by the way). The only difference is that the decreasing subsequence must start from values smaller than the last element of the chosen prefix (just ignore any larger or equal values). You can do this in O(n log n).
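For the binary-search part, here is a minimal sketch (names are mine, not from the post) of computing, for every index, the length of the longest strictly increasing subsequence ending there in O(n log n) with bisect; the decreasing pass and the bounded suffix passes can be built on the same idea by negating values or filtering them first.

import bisect

def lis_ending_at_each(arr):
    """lengths[i] = length of the longest strictly increasing subsequence
    of arr[:i+1] that ends exactly at arr[i] (patience-sorting idea)."""
    tails = []      # tails[k] = smallest possible tail of an increasing run of length k+1
    lengths = []
    for x in arr:
        pos = bisect.bisect_left(tails, x)   # first tail >= x
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
        lengths.append(pos + 1)
    return lengths

# lis_ending_at_each([1, 11, 2, 10, 4, 5, 2, 1]) -> [1, 2, 2, 3, 3, 4, 2, 1]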

Related

Given 3 lists, find which two elements in the first two lists sum as close as possible to each value in the third list

I am given 3 lists, and I have to find two values in the first two lists whose sum is as close as possible to each value in the third list, and I have to return their indices (one-based indexing). If multiple solutions are equally close, either one may be returned.
I have a working solution; it was fine on medium-sized inputs, but it is too slow for large inputs (e.g. all 3 lists of length 10000).
So the question is basically: how can you find an exact solution to this problem, without having to calculate every possible combination of list1 and list2?
Sample input:
3
2 2 5
1.000002 0.000002
0.500000 -0.500000
0.500001 0.500002 0.500003 1.000000 0.000001
2 2 5
1.000002 0.000001
0.500000 -0.500000
0.500001 0.500002 0.500003 1.000000 0.000001
5 4 7
0.000001 0.000002 0.000003 0.000004 0.000005
0.000002 0.000010 0.000001 -0.000001
0.000001 0.000002 0.000100 0.000005 0.000020 0.000010 0.000003
Sample output (added newlines for readability, so not present in script output):
2 1
2 1
2 1
2 1
2 2
2 1
1 2
1 2
1 2
2 2
2 4
3 4
5 2
4 3
5 2
1 2
4 4
My current solution:
"""Given an input file, which contains multiple sets of 3 lists each,
find two values in list1 and list2 whose sum is as close as possible to each
element in list3"""
from sys import argv
from time import time
start = time()
def parser(file):
"""Reads a file, returns it as a list of reads, where each read contains
an info line, list1, list2, list3"""
lines = open(argv[1], 'r').readlines()
read = []
tests = int(lines.pop(0))
for line in lines:
read.append(line.strip())
reads = []
for n in range(tests):
reads.append(read[4 * n:4*(n+1)])
return reads
def dict_of_sums(list1, list2):
"""Creates a dict, whose keys are the sums of all values in list1 and
list2, and whose values are the indices of those values in list1 and
list2"""
sums = {}
m = len(list1)
k = len(list2)
for a in range(m):
for b in range(k):
combination = str(a + 1) + ' ' + str(b + 1)
sum = float(list1[a]) + float(list2[b])
sum = round(sum, 6)
sums[sum] = combination
return sums
def find_best_combination(ordered, list3, c):
"""Finds the best combination using binary search: takes a number c,
and searches through the ordered list to find the closest sum.
Returns that sum"""
num = float(list3[c])
lower, upper = 0, len(ordered)
while True:
idx = (lower + upper) // 2
value = ordered[idx]
if value == num:
return value
if value > num:
upper = idx
elif value < num:
lower = idx
if lower + 1 == upper:
for z in [-1, 0, 1]:
totest = idx + z
if z == -1:
delta = (ordered[totest] - num) ** 2
best = totest
else:
deltanew = (ordered[totest] - num) ** 2
if deltanew < delta:
delta = deltanew
best = totest
return ordered[best]
reads = parser(argv[1])
for i in reads:
m, k, n = i.pop(0).split()
m, k, n = int(m), int(k), int(n)
list1, list2, list3 = i[0].split(), i[1].split(), i[2].split()
results = dict_of_sums(list1, list2)
ordered = []
# Create an ordered list of all possible sums of the values in list1 and
# list2
for k in results.keys():
ordered.append(k)
ordered = sorted(ordered)
# Loops over list3, searching the closest sum. Prints the indices of its
# constituent numbers in list1 and list2
for c in range(n):
res = find_best_combination(ordered, list3, c)
results[res]
end = time()
print(end - start)
Your current solution is O(n^2 log(n)) time, and O(n^2) memory. The reason is that your ordered is a list of size n^2 that you then sort, and do lots and lots of binary searches on. This gives you much poorer constants, and a chance of going into swap.
In your case of 10,000 each, you have a dictionary with 100,000,000 keys that you then sort and walk through. That is billions of operations and GB of data. If your machine winds up in swap, those operations will slow down a lot and you have a problem.
I would suggest that you sort lists 2 and 3. For each l1 in list 1, this lets you walk through l1 + l2 in parallel with walking through list 3, finding the best approximation of each l3. Here it is in pseudo-code:
record best for every element in list 3 to be list1[1] + list2[1]
foreach l1 in list 1:
    start l2 at start of list2
    start l3 at start of list3
    while can advance in both list 2 and list 3:
        if advancing in list2 improves l1 + l2 as approx of l3:
            advance in list 2
        else:
            if l1 + l2 is better than best recorded approx of l3:
                record l1 + l2 as best for l3
            advance in list 3
    while can advance in list 3:
        if l1 + l2 is better than best recorded approx of l3:
            record l1 + l2 as best for l3
        advance in list 3
    if l1 + l2 is better than best recorded approx of l3:
        record l1 + l2 as best for l3
This requires sorted versions of list2 and list3, and a lookup from list3 to the best approximation found so far. In your example of 10,000 items each, you have two data structures of size 10,000 and have to do roughly 200,000,000 operations. Better than billions, and no problem with pressing against memory limits.
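For what it's worth, here is one way the pseudo-code might translate to Python. It is only a sketch under my own assumptions (names are mine; ties are resolved arbitrarily and no rounding is applied): it returns a 1-based (i, j) index pair for every element of list3, in O(len1 * (len2 + len3)) after sorting.

def closest_pairs(list1, list2, list3):
    """For each target in list3, find 1-based indices (i, j) such that
    list1[i-1] + list2[j-1] is as close as possible to the target."""
    order2 = sorted(range(len(list2)), key=lambda j: list2[j])   # list2 positions by value
    order3 = sorted(range(len(list3)), key=lambda c: list3[c])   # list3 positions by value

    # best[c] = (error, i, j), initialised with the first pair
    best = [(abs(list1[0] + list2[0] - t), 1, 1) for t in list3]

    for i, l1 in enumerate(list1, start=1):
        p2 = 0                              # pointer into sorted list2
        for c in order3:                    # walk targets in increasing order
            target = list3[c]
            # advance while the next l2 is at least as good an approximation
            while (p2 + 1 < len(order2) and
                   abs(l1 + list2[order2[p2 + 1]] - target)
                   <= abs(l1 + list2[order2[p2]] - target)):
                p2 += 1
            err = abs(l1 + list2[order2[p2]] - target)
            if err < best[c][0]:
                best[c] = (err, i, order2[p2] + 1)
    return [(i, j) for _, i, j in best]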

Google Kickstart Round E 2020 Longest Arithmetic Runtime Error

I tried solving the challenge below, but I got a runtime error. I used Python.
Problem
An arithmetic array is an array that contains at least two integers and the differences between consecutive integers are equal. For example, [9, 10], [3, 3, 3], and [9, 7, 5, 3] are arithmetic arrays, while [1, 3, 3, 7], [2, 1, 2], and [1, 2, 4] are not arithmetic arrays.
Sarasvati has an array of N non-negative integers. The i-th integer of the array is Ai. She wants to choose a contiguous arithmetic subarray from her array that has the maximum length. Please help her to determine the length of the longest contiguous arithmetic subarray.
Input:
The first line of the input gives the number of test cases, T. T test cases follow. Each test case begins with a line containing the integer N. The second line contains N integers. The i-th integer is Ai.
Output:
For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the length of the longest contiguous arithmetic subarray.
Limits
Time limit: 20 seconds per test set.
Memory limit: 1GB.
1 ≤ T ≤ 100.
0 ≤ Ai ≤ 10^9.
Test Set 1
2 ≤ N ≤ 2000.
Test Set 2
2 ≤ N ≤ 2 × 10^5 for at most 10 test cases.
For the remaining cases, 2 ≤ N ≤ 2000.
Sample Input
4
7
10 7 4 6 8 10 11
4
9 7 5 3
9
5 5 4 5 5 5 4 5 6
10
5 4 3 2 1 2 3 4 5 6
Output
Case #1: 4
Case #2: 4
Case #3: 3
Case #4: 6
Here's my python3 solution which gives run time error
t = int(input())
for t_case in range(t):
    n = int(input())
    arr = list(map(int, input().split()))
    x = []
    for i in range(n - 1):
        x.append(arr[i] - arr[i + 1])
    ans, temp = 1, 1
    j = len(x)
    for i in range(1, j):
        if x[i] == x[i - 1]:
            temp = temp + 1
        else:
            ans = max(ans, temp)
            temp = 1
    ans = max(ans, temp)
    print(f"Case #{t_case+1}: {ans+1}")
Could anyone please help me out.
As of now, Kickstart is using Python 3.5, which does not support f-strings (they were added in Python 3.6). Try replacing them with str.format.
t = int(input())
for test in range(t):
    n = int(input())
    arr = list(map(int, input().split()))
    x = []
    for i in range(n - 1):
        x.append(arr[i] - arr[i + 1])
    ans, temp = 1, 1
    j = len(x)
    for i in range(1, j):
        if x[i] == x[i - 1]:
            temp = temp + 1
        else:
            ans = max(ans, temp)
            temp = 1
    ans = max(ans, temp)
    print('Case #{0}: {1}'.format(test + 1, ans + 1))

Generating combinations of values with a limit on the number of appearances of each value

My task is to:
generate a list of unique combinations where each combination has a certain length (var com_len) and contains values (ints) from the given list (var values),
each combination is created by taking random values from the given list (randomness is very important!),
each combination must have unique values inside; no value can be repeated within a combination,
values in the combination must be sorted,
the appearance of each value across the whole set of combinations is counted (var counter),
each value must appear in the whole dataset as close to the given number of times as possible (var counter_expected). "As close as possible" means: count each appearing value as the script goes, and if there are no more combinations left to create, just end the script.
For example, I need to generate a list of combinations where each combination has a length of 3, has unique sorted values inside, each value is from range(256), and each value appears across all of the combinations generated so far as close to 100 times as possible.
My problem is: how do I efficiently detect that there are no more unique combinations of the values left to create, so that I can stop the loop?
The problem appears when the script is nearing the end and there are still available values left with len(available_values) > com_len, but no new unique combination that hasn't appeared yet can be created.
The code created so far:
import numpy as np
import random

com_len = 3
length = 256
counter = np.zeros(length)
values = range(length)
exclude_values = []
counter_expected = 100
done = []
mask = np.ones(len(np.array(values)), np.bool)
mask[exclude_values] = False
available_values = set(values) - set(exclude_values)
available_values = list(available_values)
ii = 0
while True:
    """print progress"""
    ii = ii + 1
    if not ii % 1000: print('.', end='')
    #POSSIBLE CONDITION HERE
    ex = random.sample(available_values, k=com_len)
    ex.sort()
    if ex in done: continue
    done.append(ex)
    counter[ex] = counter[ex] + 1
    for l_ in ex:
        if counter[l_] == counter_expected:
            del available_values[available_values.index(l_)]
    if len(available_values) < com_len: break
    if all(counter[mask] == counter_expected): break
    #OR HERE
NOTE: The script usually ends successfully because either the len(available_values) < com_len or the all(counter[mask] == counter_expected) condition breaks the while loop. Run the script several times and at some point you'll observe it going into an infinite loop: len(available_values) >= com_len, but there are no new unique combinations left to create, so the counter never increases.
I need an efficient condition to stop the script. Using itertools.combinations here is not an option because the available_values list may be long, e.g. 10k elements, at the beginning.
The brute force would be to use itertools.combinations once len(available_values) reaches a certain level and check whether there are any combinations that haven't been created yet, but that's an ugly solution.
There might be a better way which doesn't occur to me but may to you. I'll be grateful for your help.
UPDATED FINAL ANSWER:
I've come up with the following code that matches my needs.
NOTES:
The function is not the best in many respects, but it does its job very well!
The function has 3 modes of data generation: generating a total number of combinations, generating combinations where each value appears a minimal number of times across all combinations, and generating combinations where each value appears a "max" number of times across all combinations ("max" meaning "as close as possible to the max value").
The function allows changing the length of combinations on the fly, either within a selected range or as a specific number.
Depending on the params, the function can do a redundant number of iterations, as shown by 'Total number of errors'. But...
It's FAST! It uses sets and tuples for great performance. The only problem could happen when itertools.combinations fires and returns tons (millions) of combinations but, in my case, that has never happened so far.
The code:
import numpy as np
import random
import itertools
from decimal import Decimal
def get_random_combinations(values, exclude, length, mode, limit, min_=0, max_=0):
    done = set()
    try:
        """Creating counter"""
        counter = np.zeros(len(values), np.uint)
        """Create a mask for excluded values"""
        """https://stackoverflow.com/questions/25330959/how-to-select-inverse-of-indexes-of-a-numpy-array"""
        mask = np.ones(len(np.array(values)), np.bool)
        mask[exclude] = False
        """available values to create combinations"""
        values_a = set(values) - set(exclude)
        values_a = list(values_a)
        if length == 1:
            if mode == 'total':
                """generate just data_number of examples"""
                for ii in range(limit):
                    comb = random.sample(values_a, 1)[0]
                    del values_a[values_a.index(comb)]
                    done.add(tuple([comb]))
            else:
                """generate one example for each comb"""
                for comb in values_a: done.add(tuple([comb]))
        else:
            """total number of combinations"""
            if isinstance(length, str): rr = np.mean([min_, max_])
            else: rr = length
            nn = len(values_a)
            comb_max = int(Decimal(np.math.factorial(nn)) / Decimal(np.math.factorial(rr) * np.math.factorial(nn - rr)))
            err_limit = int(comb_max * 0.01)
            if err_limit > 10000: err_limit = 10000
            """initiate variables"""
            # should itertools be used to generate the rest of the combinations
            gen_comb = False
            # have all combinations generated by itertools been used up
            comb_left_0 = False
            # has the limit of errors been reached, triggering itertools generation
            err_limit_reached = False
            # previous combination length
            ll_prev = 0
            dd = 0  # done counter
            comb_left = set()  # itertools combinations
            err = 0  # errors counter
            """variables for statistics"""
            err_t = 0  # total number of errors
            gen = 0  # total number of generations of itertools.combinations
            ii = 0  # total number of iterations
            print('GENERATING LIST OF COMBINATIONS')
            while True:
                """print progress"""
                ii = ii + 1
                if not dd % 1000: print('.', end='')
                """check if length of combs is random or not"""
                if isinstance(length, str):
                    """cap max_ length of combinations at
                    the number of available values"""
                    if len(values_a) < max_: max_ = len(values_a)
                    ll = random.randint(min_, max_)
                else: ll = length
                if ll != ll_prev: gen_comb = True
                """generate combinations only when the err limit is reached or
                the length of combinations has changed"""
                if err_limit_reached and gen_comb:
                    gen = gen + 1
                    """after reaching the max number of consecutive errors, start generating combinations via itertools"""
                    """generation is delayed to this point to prevent generation for a very long list"""
                    """generation is also done when the length of a combination changes"""
                    comb_left = set(itertools.combinations(values_a, ll)) - done
                    """break if there are no elements left"""
                    if not len(comb_left): break
                """if combinations have already been generated, use them"""
                if comb_left:
                    """take a random sample from the set"""
                    comb = random.sample(comb_left, 1)[0]
                    """remove it from the set"""
                    comb_left.remove(comb)
                    """check if it was the last combination, to break the loop at the end"""
                    if not len(comb_left): comb_left_0 = True
                else:
                    """generate a random combination"""
                    comb = tuple(sorted(random.sample(values_a, ll)))
                """set previous length"""
                ll_prev = ll
                """reset gen_comb"""
                gen_comb = False
                """check if combination is new"""
                if comb not in done: found = True
                else:
                    """otherwise, count errors"""
                    err = err + 1
                    err_t = err_t + 1
                    found = False
                    if err > err_limit: err_limit_reached = gen_comb = True
                if found:
                    """reset err"""
                    err = 0
                    dd = dd + 1
                    """add combination to done"""
                    done.add(comb)
                    """increase counter for the combs"""
                    counter[list(comb)] = counter[list(comb)] + 1
                    """check if seeking the max number of combinations or min"""
                    if mode == 'max':
                        """for max, we must remove the elements which reached the limit"""
                        for l_ in list(comb):
                            if counter[l_] == limit:
                                del values_a[values_a.index(l_)]
                        """if the number of available elements is smaller than the possible length of the combinations"""
                        if isinstance(length, str):
                            """for random length, compare against the minimal length"""
                            if len(values_a) < min_: break
                        else:
                            if len(values_a) < ll: break
                """if all elements reached the limit"""
                if mode == 'total':
                    if len(done) >= limit: break
                else:  # min, max
                    if all(counter[mask] >= limit): break
                """if the number of consecutive errors reached
                the total number of combinations, break as you may never
                draw a valid combination"""
                if err > comb_max: break
                """if it was the last combination left"""
                if comb_left_0: break
    except Exception as e: print(e)
    print('')
    print('Combinations generated: ' + str(dd))
    print('Total number of iterations: ' + str(ii))
    print('Final value of err: ' + str(err))
    print('Total number of errors: ' + str(err_t))
    print('How many times itertools.combinations was used: ' + str(gen))
    return done
"""range of values to create combinations"""
values = range(256)
"""values excluded from the combinations"""
exclude = [0,255]
"""length of combinations, if is string, the number of layers
is generated randomly withing (min_, max_) range """
length = 'r'
"""mode of how the combinations are generated:
min: minimal number of times the value appears across all combinations
(limited down by the limit value)
max: max number of times the value appears across all combinations (limited
max by the limit value)
total: total number of combinations (limited the limit value)"""
mode = 'max'
"""limit used for the mode combinations' generation"""
limit = 1000
"""min_ and max_ are used when length is string,
length is generated randomly within (min_, max_) range"""
min_ = 4
max_ = 12
done = get_random_combinations(values, exclude, length, mode, limit, min_, max_)
There are a total of n choose k possible combinations meeting your criteria, with n = length and k = com_len. n choose k evaluates to n! / (k! * (n - k)!). If you generate all distinct possibilities, each of the n values appears (n - 1)! / ((k - 1)! * (n - k)!) times (https://math.stackexchange.com/q/26619/295281). You should be able to solve this, assuming that z <= (n - 1)! / ((k - 1)! * (n - k)!), where z = counter_expected.
For your example:
n = 256
k = 3
z = 100 <= 32385
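As a quick check of that bound with the standard library (math.comb requires Python 3.8+):

import math

n, k = 256, 3
# each value appears C(n - 1, k - 1) times if every combination is generated exactly once
print(math.comb(n - 1, k - 1))   # 32385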
One common method of generating combinations in general is to step k bits through a boolean array of length n, always incrementing the lowest possible bit. Whenever a higher bit gets incremented, all the ones below it get reset to their initial positions. Here is a sample sequence:
0 0 0 0 3 2 1
0 0 0 3 0 2 1
0 0 0 3 2 0 1
0 0 0 3 2 1 0
0 0 3 0 0 2 1
...
3 2 0 0 1 0 0
3 2 0 1 0 0 0
3 2 1 0 0 0 0
I've labeled the positions so that you can see that if the values are sorted to begin with, the combinations will always come out sorted. Keep in mind that you can implement this as an array of n booleans or k indices. Both have advantages and disadvantages.
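As an illustration only (my own formulation, not the answer's code), the same stepping idea written with k indices in lexicographic order looks roughly like this:

def step_combinations(n, k):
    """Yield all sorted k-index combinations of range(n) by repeatedly
    advancing the rightmost index that can still move, then resetting
    the indices to its right."""
    idx = list(range(k))
    while True:
        yield tuple(idx)
        j = k - 1
        while j >= 0 and idx[j] == n - k + j:   # find an index that can still advance
            j -= 1
        if j < 0:
            return
        idx[j] += 1
        for m in range(j + 1, k):               # reset everything to its right
            idx[m] = idx[m - 1] + 1

# list(step_combinations(4, 2)) -> [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]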
For your particular use-case, there's a twist. You don't use a bit once its count has exceeded a certain amount. There are a number of ways of stepping through the bits, but they all boil down to having a size-n counter array.
If n * z is a multiple of k, you will automatically be able to get exact counts in all of the bins. Neither n nor z itself actually has to be a multiple of k. If n * z is not a multiple of k, however, you will inevitably have underflow or overflow. Intuitively, you want to generate a target of n * z total values, k at a time. It's pretty clear that the former has to be a multiple of the latter to make this possible.
You can have two types of exit criteria. Given the total accumulated count of all the bits, s,
s >= n * z: all bits have a count of at least z. At most k - 1 bits have a count of z + 1.
s > n * z - k: all bits have a count of z, except at most k - 1 bits, so adding one more combination would cause condition 1.
One final design choice to discuss is the order in which bits move. Since generating a series of combinations exhausts a bin, I'd like the exhausted bins to accumulate sequentially, in a predictable order, on one side of the bucket array. This removes a lot of checks from the algorithm. So instead of incrementing the lowest possible bit, I will increment the highest possible bit, and increment the one below it whenever it resets. In that case, the exhausted buckets will always be the lowest bits.
So let's finally stop making unproven mathy-sounding statements and show an implementation:
import numpy as np

def generate_combos(n, k, z):
    full_locs = np.arange(k + 1, dtype=np.uint)
    full_locs[k] = n                       # makes partial vectorization easier
    locs = full_locs[:k]                   # bit indices
    counts = np.zeros(n, dtype=np.uint)    # counter buckets
    values = np.arange(n, dtype=np.uint)   # values
    min_index = 0                          # index of lowest non-exhausted bin
    for _ in range((n * z) // k):
        counts[locs] += 1
        yield values[locs]
        if counts[min_index] == z:
            # if lowest bin filled, shift and reset
            min_index += np.argmax(counts[min_index:] < z)
            locs[:] = min_index + np.arange(k)
        else:
            # otherwise, increment highest available counter
            i = np.flatnonzero(np.diff(full_locs) > 1)
            if i.size:
                i = i[-1]
                locs[i] += 1
                # reset the remainder
                locs[i + 1:] = locs[i] + np.arange(1, k - i)
            else:
                break
This uses condition 2. If you want condition 1, add the following lines after the loop (inside the generator):
if counts[-1] < z:
    yield values[-k:]
Changing the loop to something like for _ in range(-((n * z) // -k)): (courtesy of https://stackoverflow.com/a/54585138/2988730) won't help because the counters aren't designed to handle it.
Here is sample output (originally shown via an IDEOne link) with the first hundred elements of generate_combos(256, 3, 10):
[0 1 2]
[0 1 3]
[0 1 4]
[0 1 5]
[0 1 6]
[0 1 7]
[0 1 8]
[0 1 9]
[ 0 1 10]
[ 0 1 11]
[2 3 4]
[2 3 5]
[2 3 6]
[2 3 7]
[2 3 8]
[2 3 9]
[ 2 3 10]
[ 2 3 11]
[ 2 3 12]
[4 5 6]
[4 5 7]
[4 5 8]
[4 5 9]
[ 4 5 10]
[ 4 5 11]
[ 4 5 12]
[ 4 5 13]
[6 7 8]
[6 7 9]
[ 6 7 10]
[ 6 7 11]
[ 6 7 12]
[ 6 7 13]
[ 6 7 14]
[ 8 9 10]
[ 8 9 11]
[ 8 9 12]
[ 8 9 13]
[ 8 9 14]
[ 8 9 15]
[10 11 12]
[10 11 13]
[10 11 14]
[10 11 15]
[10 11 16]
[12 13 14]
[12 13 15]
[12 13 16]
[12 13 17]
[12 13 18]
[13 14 15]
[14 15 16]
[14 15 17]
[14 15 18]
[14 15 19]
[14 15 20]
[15 16 17]
[16 17 18]
[16 17 19]
[16 17 20]
[16 17 21]
[16 17 22]
[16 17 23]
[17 18 19]
[18 19 20]
[18 19 21]
[18 19 22]
[18 19 23]
[18 19 24]
[18 19 25]
[19 20 21]
[20 21 22]
[20 21 23]
[20 21 24]
[20 21 25]
[20 21 26]
[20 21 27]
[21 22 23]
[22 23 24]
[22 23 25]
[22 23 26]
[22 23 27]
[22 23 28]
[22 23 29]
[24 25 26]
[24 25 27]
[24 25 28]
[24 25 29]
[24 25 30]
[24 25 31]
[24 25 32]
[26 27 28]
[26 27 29]
[26 27 30]
[26 27 31]
[26 27 32]
[26 27 33]
[26 27 34]
[28 29 30]
[28 29 31]
...
Notice that after the first 10 elements, both 0 and 1 have appeared 10 times. 2 and 3 appeared once, so they get used up after only 9 more iterations, and so forth.
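If you want to sanity-check that behaviour yourself (assuming generate_combos from above is defined and numpy is available), a quick tally is enough; most of the 256 values should end up at or very close to z occurrences:

from collections import Counter

tally = Counter(int(v) for combo in generate_combos(256, 3, 10) for v in combo)
print(sorted(tally.values())[:5], sorted(tally.values())[-5:])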
Here is another answer focusing on the random aspect.
You have n choose k possibilities that you want to randomly sample to get approximately z occurrences of each value (using my notation from the other answer). I posit that if you take (n * z) // k samples of size k, and your random number generator is actually uniform, you will automatically get approximately z occurrences of each element. In your example, with n=256, k=3 and z=100, it's plausible that among 8533 samples the distribution will indeed be fairly uniform across the 256 bins.
If you are willing to accept some level of imperfection in the uniformity, python's random.sample is a good choice. The population is all integers from zero to n choose k.
n choose k in this case is 256 * 255 * 254 / 6 = 2763520, which fits comfortably even in a signed 32-bit integer. Better yet, you can simply use Python's infinite-precision integers.
The trick is to map these numbers to a unique combination of values. This is done with a combinatorial number system, as described here.
from random import sample
from scipy.special import comb

def gen_samples(n, k, z):
    codes = sample(range(comb(n, k, exact=True)), (n * z) // k)
    return [unrank(n, k, code) for code in codes]

def unrank(n, k, i):
    """
    Implementation of Lehmer's greedy algorithm left as an
    exercise for the reader
    """
    # return k-element sequence
See here for hints on unranking.
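For reference, here is a minimal sketch of one way such an unrank could look, using the combinatorial number system greedily (math.comb requires Python 3.8+; this is my own illustration, not the answer's implementation):

from math import comb

def unrank(n, k, rank):
    """Map rank in [0, comb(n, k)) to a sorted k-element combination of range(n)."""
    picked = []
    for kk in range(k, 0, -1):
        # greedily take the largest c with comb(c, kk) <= rank
        c = kk - 1
        while comb(c + 1, kk) <= rank:
            c += 1
        picked.append(c)
        rank -= comb(c, kk)
    return picked[::-1]

# unrank(256, 3, 0) -> [0, 1, 2]; unrank(256, 3, 1) -> [0, 1, 3]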

Algorithm to evenly split number with factors while maximizing the lowest weight

Given a number and a list of factors, what is the most efficient way to split this number across its given factors so as to maximize the minimum weight (the weight being the multiplier of a particular factor)?
>>> number = 32
>>> given_factors = [1,2,4]
>>> evenly_split_a_number_with_factors(number, given_factors)
[6,5,4]
# note: 6*1 + 5*2 + 4*4 == 32
Another way to think of it:
Given:
    w1*f1 + w2*f2 + ... + wN*fN = Z
Where:
    f1, f2, f3, ..., fN are ascending-order factors of a
    given positive number Z
Find: w1, w2, w3, ..., wN, which are the corresponding factors' non-zero positive weights,
    with the weights approximately evenly distributed
Example
e.g. Given: a + 2b + 4c = 32, find largest together possible a,b,c
1 2 4
a b c
32 00 00
00 16 00
00 00 08
08 04 04
06 05 04 <- should be the outcome of this algorithm
Possible approach: a good solution should contain some portion with equal weights.
Start with the largest possible weight Kmax = N div SumOfFactors and split the rest of the number.
If splitting is not possible, decrement the weight and repeat.
This approach tries to reduce the problem size, which matters for larger sums and larger numbers of summands.
For your example, a good solution should look like
32 = K * (1 + 2 + 4) + Split_of_(32 - 7 * K)
Kmax = 32 div 7 = 4
Rest = 32 - 4 * 7 = 4
Variants of splitting the rest 4 into factors:
4 = 4 gives weights 4 4 5
4 = 2+2 gives weights 4 6 4
4 = 2+1+1 gives weights 6 5 4
4 = 1+1+1+1 gives weights 8 4 4
The best variant for you is 2+1+1 (perhaps the one with the most distinct factors), while I think that the solution 4 4 5 (not listed in your example) is quite good too.
Case when Kmax is not suitable: split 120 into (2, 7, 11, 19)
sum = 39, k = 3, rest = 3; it is impossible to make 3
k = 2, rest = 42; we can make partitions:
42 = 3*2 + 2*7 + 2*11, possible solution is 5,4,4,2
42 = 2*2 + 2*19, possible solution is 4,2,2,4
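A minimal sketch of that idea (my own names and a naive recursive splitter, so only the approach is MBo's): give every factor the same base weight K, try to split the remainder into non-negative multiples of the factors, and lower K if that fails. It returns the first feasible split it finds (4, 4, 5 for the 32 example), not necessarily the variant using the most distinct factors.

def split_with_factors(number, factors):
    """Return weights w with sum(w[i] * factors[i]) == number, giving every
    factor at least the base weight K, with K as large as feasible."""
    total = sum(factors)
    for k in range(number // total, 0, -1):     # Kmax, Kmax-1, ...
        extra = split_rest(number - k * total, factors)
        if extra is not None:
            return [k + e for e in extra]
    return None

def split_rest(rest, factors):
    """Express rest as non-negative multiples of factors (simple recursion)."""
    if rest == 0:
        return [0] * len(factors)
    if not factors:
        return None
    f = factors[-1]
    for m in range(rest // f, -1, -1):
        sub = split_rest(rest - m * f, factors[:-1])
        if sub is not None:
            return sub + [m]
    return None

# split_with_factors(32, [1, 2, 4]) -> [4, 4, 5]
# split_with_factors(120, [2, 7, 11, 19]) -> [4, 2, 2, 4]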
I implemented @MBo's answer (selecting his answer as he told me about the logic). Feel free to comment if I missed a use case (I deliberately am not accounting for reducing the k, since the purpose of this function is to get the maximum minimum weights for a given set of factors).
import logging

def evenly_weight_a_number_with_factors(number, factors):
    """
    Args:
        number (int): Number to evenly split using `factors`
        factors (list): list of ints

    >>> evenly_weight_a_number_with_factors(32, [1,2,4])
    6,5,4

    Given:
        w1*f1 + w2*f2 + ... + wN*fN = Z
    Where:
        f1,f2,f3...fN are ascending order factors of a
        given positive number Z
    Find: w1,w2,w3...wN which are the corresponding factors' non-zero positive
        weights, with the weights approximately evenly distributed

    Example
        e.g. Given: a + 2b + 4c = 32, find largest together possible a,b,c

        1   2   4
        a   b   c
        32  00  00
        00  16  00
        00  00  08
        08  04  04
        06  05  04  <- should be the outcome of this algorithm
    """
    log = logging.getLogger(evenly_weight_a_number_with_factors_logger_name)

    # return True if all numbers in `_factors` divide the number `n`
    are_all_factors = lambda n, _factors: all(n % f == 0 for f in _factors)

    def weighted_sum(__weights, __factors):
        return sum([wt * factor for wt, factor in zip(__weights, __factors)])

    def add_variant_wt(__weights, i, _remainder_weight):
        old__weights = __weights[:]
        if _remainder_weight < factors[i]:
            log.warn('skipping add_variant_wt _remainder_weight: {} < factor: {}'.format(_remainder_weight, factors[i]))
            return []
        variant_wt = _remainder_weight // factors[i]
        variant_wt_rem = _remainder_weight % factors[i]
        log.debug('add_variant_wt: weights, i, remainder_weight, variant_wt, remain: {}'
                  .format((__weights, i, _remainder_weight, variant_wt, variant_wt_rem)))
        if variant_wt_rem:
            __weights[i] += variant_wt
            if i + 1 >= len(factors):
                return add_variant_wt(__weights, i - 1, variant_wt_rem)
            return add_variant_wt(__weights, i + 1, variant_wt_rem)
        __weights[i] += variant_wt
        log.debug('add_variant_wt i: {} before: {} after: {}'.format(i, old__weights, __weights))
        return __weights

    assert list(sorted(factors)) == factors, "Given factors {} are not sorted".format(factors)
    assert are_all_factors(number, factors) == True, "All numbers in {} are not factors of number: {}".format(factors, number)

    sum_of_all_factors = sum(factors)
    largest_possible_weight = number // sum_of_all_factors
    remainder_weight = number % sum_of_all_factors
    variant_weight_sets = []
    tmp_weights = []
    for _ in factors:
        tmp_weights.append(largest_possible_weight)
    log.debug('tmp_weights: {} remainder_weight: {}'.format(tmp_weights, remainder_weight))
    for i, _ in enumerate(factors):
        _weights = add_variant_wt(tmp_weights[:], i, remainder_weight)
        if _weights:
            variant_weight_sets.append(_weights)
    weights = variant_weight_sets[-1]  # pick the variant where the largest factor gets the biggest weight
    log.debug('variant_weight_sets: {}'.format(variant_weight_sets))
    sum_weighted = weighted_sum(weights, factors)
    assert sum_weighted == number, "sum_weighted: {} != number: {}".format(sum_weighted, number)
    return weights
Result looks like:
>>> evenly_weight_a_number_with_factors(32, [1,2,4])
[4, 4, 5]
>>> evenly_weight_a_number_with_factors(32, [1,2,8])
[2, 3, 3]
>>> evenly_weight_a_number_with_factors(32, [1,2,2])
[6, 6, 7]
>>> evenly_weight_a_number_with_factors(100, [1,2,4,4,100])
[0, 0, 0, 0, 1]
>>> evenly_weight_a_number_with_factors(100, [1,2,4,4])
[10, 9, 9, 9]
>>>

Write a program to compute the sum of the terms of the series

Write a program to compute the sum of the terms of the series: 4 - 8 + 12 - 16 + 20 -
24 + 28 - 32 + .... +/- n, where n is an input. Consider that n is always valid (which
means it follows the series pattern).
n = int(input("Enter n: "))
sum = 0
for i in range(4, n + 4, 4):
    sum += i - (i+2)
print("The sum of %s first terms is: %s" % (n, sum))
Can't seem to find the issue that I have.
How about an explicit formula?
def sumSeries(n):
    if n / 4 % 2 == 0:
        return -n / 2
    else:
        return (n + 4) / 2
The series doesn't do anything too interesting, it just keeps adding +4 every two steps, and flips the sign in even steps:
4 = 4
4 - 8 = -4
4 - 8 + 12 = 8
4 - 8 + 12 - 16 = -8
...
Some examples:
for n in range(4, 100, 4):
    print("%d -> %d" % (n, sumSeries(n)))
Output:
4 -> 4
8 -> -4
12 -> 8
16 -> -8
20 -> 12
24 -> -12
28 -> 16
32 -> -16
36 -> 20
40 -> -20
44 -> 24
48 -> -24
52 -> 28
56 -> -28
60 -> 32
64 -> -32
First of all, know that your series sum has a closed form.
def series_sum(n):
    sign = 1 if n % 2 else -1
    value = (n - 1) // 2 * 4 + 4
    return sign * value

series_sum(1)  # 4
series_sum(2)  # -4
series_sum(3)  # 8
But in general, infinite series are a good use case for generators.
def series():
    value = 0
    sign = -1
    while True:
        value += 4
        sign *= -1
        yield sign * value

s = series()
next(s)  # 4
next(s)  # -8
next(s)  # 12
Thus for getting the sum you can do this.
s = series()

def sum_series(n, s):
    return sum(next(s) for _ in range(n))

sum_series(5, s)  # 12
An interesting question asked in the comments is: given some value, how can we recover the sum up until that value is reached in the series? The generator approach is well suited to this kind of problem.
from itertools import takewhile

def sum_until(val):
    return sum(x for x in takewhile(lambda x: -val <= x <= val, series()))

sum_until(12)  # 8
Python can be used to easily compute mathematical sequences and series.
We find the sum of all values computed up to and including n
Given
the following mathematical components:
generating function (A)
sample alternating arithmetic sequence (B)
summation equation (C)
We now implement two approaches A and C verified by B.
Code
import itertools as it
n = 8
Generating Function, A
seq = [(-1)**(i + 1)*(4 * i) for i in range(1, n + 1)]
sum(seq)
# -16
Summation Equation, C
def f(n):
    if n == 1:
        return 1
    elif n % 2 == 0:
        return -n // 2
    else:
        return (n + 1) // 2

4*f(n)
# -16
# -16
Details
Generating Function
This first approach simply sums an arithmetic sequence generated by a list comprehension. The signs of values alternate by the expression (-1)**(i + 1):
seq
# [4, -8, 12, -16, 20, -24, 28, -32]
Similarly, an infinite sequence can be made using a generator expression and itertools.count:
inf_seq = ((-1)**(i + 1)*(4 * i) for i in it.count(1))
sum(it.islice(inf_seq, n))
# -16
Here the sum is returned for a slice of n values. Note, we can use the take itertools recipe and itertools.accumulate to compute some arbitrary number of summations, e.g. 10 sums (see also itertools.takewhile).
def take(n, iterable):
    "Return first n items of the iterable as a list"
    return list(it.islice(iterable, n))

inf_seq = ((-1)**(i + 1)*(4 * i) for i in it.count(1))
list(take(10, it.accumulate(inf_seq)))
# [4, -4, 8, -8, 12, -12, 16, -16, 20, -20]
Summation Equation
The second approach comes from inspection, where a pattern is determined from the outputs of a sample sequence:
 n     4n   f(n)    4f(n)
---   ----  ----    -----
 1      4     1  ->    4
 2     -8    -1  ->   -4
 3     12     2  ->    8
 4    -16    -2  ->   -8
 5     20     3  ->   12
 6    -24    -3  ->  -12
 7     28     4  ->   16
 8    -32    -4  ->  -16
 9     36     5  ->   20
10    -40    -5  ->  -20
For an arbitrary final value n, a value of the sequence is generated (4n). When multiplied with some unknown function, f(n), a resultant sum is computed (4f(n)). We determine a pattern for f(n) by deducing the relationship between the sequence values and expected sums. Once determined, we directly implement a function that computes our desired sums.
Highlights
Mathematical sequences can be generated from list comprehensions.
Infinite sequences can be made from generator expressions.
Mathematical series/generating functions can be computed using reducing functions, e.g. sum(), operator.mul(), etc. applied to sequences.
General summation equations can be implemented as simple Python functions.
As @John Coleman pointed out, sum += i - (i+2) does not produce the result you expected: each term just adds -2.
Below is my solution:
Use if/else to determine the sign, then sum up; finally, put it into another loop to create the series you'd like.
n = 9
print('N='+str(n), [sum([index*4 if index%2 else index*-4 for index in range(1, num+1)]) for num in range(1, n+1)])
n = 8
print('N='+str(n), [sum([index*4 if index%2 else index*-4 for index in range(1, num+1)]) for num in range(1, n+1)])
Output:
N=9 [4, -4, 8, -8, 12, -12, 16, -16, 20]
N=8 [4, -4, 8, -8, 12, -12, 16, -16]
[Finished in 0.178s]
