Related
Given an integer array sequence a_n of length N, cut the sequence into several parts such that each one is a consecutive subsequence of the original sequence.
Every part must satisfy the following:
The sum of each part is not greater than a given integer M.
Find a cut that minimizes the sum of the maximum integer of each part.
For example:
input: n = 8, m = 17, arr = [2, 2, 2, 8, 1, 8, 2, 1]
output = 12
explanation: subarrays = [2, 2, 2], [8, 1, 8], [2, 1]
sum = 2 + 8 + 2 = 12
0 <= N <= 100000
each integer is between 0 and 1000000
If no such cut exists, return -1
I believe this is a dynamic programming question, but I am not sure how to approach this.
I am relatively new to coding, and came across this question in an interview which I could not do. I would like to know how to solve it for future reference.
Here's what I tried:
n = 8
m = 17
arr = [2, 2, 2, 8, 1, 8, 2, 1]
biggest_sum, i = 0, 0
while i < len(arr):
    seq_sum = 0
    biggest_in_seq = -1
    while seq_sum <= m and i < len(arr):
        if seq_sum + arr[i] <= m:
            seq_sum += arr[i]
            if arr[i] > biggest_in_seq:
                biggest_in_seq = arr[i]
            i += 1
        else:
            break
    biggest_sum += biggest_in_seq
if biggest_sum == 0:
    print(-1)
else:
    print(biggest_sum)
This gives the result 16, and the subsequences are: [[2, 2, 2, 8, 1], [8, 2, 1]]
The problem is that you are filling every sequence from left to right up to the maximum allowed value m. You should evaluate different options for the sequence lengths and minimize the result, which in the example means that the two 8 values must be in the same sequence.
A possible solution could be:
n = 8
m = 17
arr = [2, 2, 2, 8, 1, 8, 2, 1]

def find_solution(arr, m, n):
    if max(arr) > m:
        return -1
    optimal_seq_length = [0] * n
    optimal_max_sum = [0] * n
    for seq_start in reversed(range(n)):
        seq_len = 0
        seq_sum = 0
        seq_max = 0
        while True:
            seq_len += 1
            seq_end = seq_start + seq_len
            if seq_end > n:
                break
            last_value_in_seq = arr[seq_end - 1]
            seq_sum += last_value_in_seq
            if seq_sum > m:
                break
            seq_max = max(seq_max, last_value_in_seq)
            max_sum_from_next_seq_on = 0 if seq_end >= n else optimal_max_sum[seq_end]
            max_sum = max_sum_from_next_seq_on + seq_max
            if seq_len == 1 or max_sum < optimal_max_sum[seq_start]:
                optimal_max_sum[seq_start] = max_sum
                optimal_seq_length[seq_start] = seq_len
    # create solution list of lists
    solution = []
    seg_start = 0
    while seg_start < n:
        seg_length = optimal_seq_length[seg_start]
        solution.append(arr[seg_start:seg_start + seg_length])
        seg_start += seg_length
    return solution

print(find_solution(arr, m, n))
# [[2, 2, 2], [8, 1, 8], [2, 1]]
Key aspects of my proposal:
- Start from a small array (only the last element), and grow the problem array towards the front:
    [1]
    [2, 1]
    [8, 2, 1]
  etc.
- For each of these problem arrays, store:
  - the optimal sum of the maximum of each sequence (optimal_max_sum), which is the value to be minimized
  - the length of the first sequence (optimal_seq_length) that achieves this optimal value
- Do this as follows: for each allowed length of the first sequence of the problem array:
  - calculate the new max_sum value by adding this sequence's maximum to the previously calculated optimal_max_sum for the part after this sequence
  - keep the smallest max_sum, store it in optimal_max_sum, and store the associated sequence length in optimal_seq_length
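The same recurrence can also be written top-down with memoization. This is a sketch of my own (the function name is mine), returning only the minimal sum of maxima rather than the cut itself:

```python
from functools import lru_cache

def min_max_sum(arr, m):
    # f(i) = best value for the suffix arr[i:]; f(n) = 0.
    n = len(arr)
    if n == 0:
        return 0
    if max(arr) > m:
        return -1  # some element alone already exceeds m

    @lru_cache(maxsize=None)
    def best(i):
        if i == n:
            return 0
        seg_sum = 0
        seg_max = 0
        result = float('inf')
        for j in range(i, n):  # try every end for the part starting at i
            seg_sum += arr[j]
            if seg_sum > m:
                break
            seg_max = max(seg_max, arr[j])
            result = min(result, seg_max + best(j + 1))
        return result

    return best(0)

print(min_max_sum([2, 2, 2, 8, 1, 8, 2, 1], 17))  # 12
```

For N up to 100000, recursion depth makes the bottom-up version above the safer choice; this sketch mainly shows the recurrence explicitly.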
Note: almost a duplicate of Numpy vectorization: Find intersection between list and list of lists.
Differences:
- I am focused on efficiency when the lists are large.
- I'm searching for the largest intersections.
x = [500 numbers between 1 and N]
y = [[1, 2, 3], [4, 5, 6, 7], [8, 9], [10, 11, 12], etc. up to N]
Here are some assumptions:
y is a list of ~500,000 sublists of ~500 elements each
each sublist in y is a range, so y is characterized by the last element of each sublist; in the example: 3, 7, 9, 12, ...
x is not sorted
y contains each number between 1 and ~500000*500 exactly once
y is sorted in the sense that, as in the example, the sublists are sorted and the first element of one sublist is the successor of the last element of the previous sublist
y is known long in advance, even before compile time
My purpose is to know, among the sublists of y, which have at least 10 intersections with x.
I can obviously make a loop:
def find_best(x, y):
    result = []
    for index, sublist in enumerate(y):
        intersection = set(x).intersection(set(sublist))
        if len(intersection) >= 2:  # in real life: >= 10
            result.append(index)
    return result
x = [1, 2, 3, 4, 5, 6]
y = [[1, 2, 3], [4], [5, 6], [7], [8, 9, 10, 11]]
res = find_best(x, y)
print(res) # [0, 2]
Here the result is [0, 2] because the first and third sublists of y have at least 2 elements in common with x.
Another method would be to parse y only once and count the intersections:
def find_intersec2(x, y):
    n_sublists = len(y)
    res = {num: 0 for num in range(n_sublists)}
    for list_no, sublist in enumerate(y):
        for num in sublist:
            if num in x:
                x.remove(num)
                res[list_no] += 1
    return [n for n in range(n_sublists) if res[n] >= 2]
This second method makes more use of the hypotheses.
Questions:
- What optimizations are possible?
- Is there a completely different approach? Indexing, a k-d tree? In my use case, the large list y is known days before the actual run, so I'm not afraid of building an index or whatever from y. The small list x is only known at runtime.
Since y contains disjoint ranges and the union of them is also a range, a very fast solution is to first perform a binary search on y and then count the resulting indices and only return the ones that appear at least 10 times. The complexity of this algorithm is O(Nx log Ny) with Nx and Ny the number of items in respectively x and y. This algorithm is nearly optimal (since x needs to be read entirely).
Actual implementation
First of all, you need to transform your current y into a Numpy array containing the beginning value of all ranges (in increasing order), with N as the last value (assuming N is excluded from the ranges of y, or N+1 otherwise). This part can be considered free since y can be computed at compile time in your case. Here is an example:
import numpy as np
y = np.array([1, 4, 8, 10, 13, ..., N])
Then, you need to perform the binary search and check that the values fit in the range of y:
indices = np.searchsorted(y, x, 'right')
# The `0 < indices < len(y)` check should not be needed regarding the input.
# If so, you can use only `indices -= 1`.
indices = indices[(0 < indices) & (indices < len(y))] - 1
Then you need to count the indices and keep only the ones that appear at least 10 times:
uniqueIndices, counts = np.unique(indices, return_counts=True)
result = uniqueIndices[counts >= 10]
Here is an example based on your input:
x = np.array([1, 2, 3, 4, 5, 6])
# [[1, 2, 3], [4], [5, 6], [7], [8, 9, 10, 11]]
y = np.array([1, 4, 5, 7, 8, 12])
# Actual simplified version of the above algorithm
indices = np.searchsorted(y, x, 'right') - 1
uniqueIndices, counts = np.unique(indices, return_counts=True)
result = uniqueIndices[counts >= 2]
# [0, 2]
print(result.tolist())
It runs in less than 0.1 ms on my machine on a random input based on your input constraints.
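For reference, the same idea also works without NumPy, using the standard bisect module on the simplified example (a small sketch, assuming every x value falls inside the covered range):

```python
import bisect
from collections import Counter

# Beginning value of each range, same convention as the NumPy version above.
starts = [1, 4, 5, 7, 8, 12]  # ranges [1..3], [4], [5..6], [7], [8..11]
x = [1, 2, 3, 4, 5, 6]

# For each value, find which range it belongs to via binary search.
indices = [bisect.bisect_right(starts, v) - 1 for v in x]
counts = Counter(indices)
result = sorted(i for i, c in counts.items() if c >= 2)
print(result)  # [0, 2]
```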
Turn y into 2 dicts.
index = { # index to count map
0 : 0,
1 : 0,
2 : 0,
3 : 0,
4 : 0
}
y = { # elem to index map
1: 0,
2: 0,
3: 0,
4: 1,
5: 2,
6: 2,
7: 3,
8 : 4,
9 : 4,
10 : 4,
11 : 4
}
Since you know y in advance, I don't count the above operations into the time complexity. Then, to count the intersection:
x = [1, 2, 3, 4, 5, 6]
for e in x: index[y[e]] += 1
Since you mentioned x is small, I try to make the time complexity depend only on the size of x (in this case O(n)).
Finally, the answer is the list of keys in index dict where the value is >= 2 (or 10 in real case).
answer = [i for i in index if index[i] >= 2]
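Putting those pieces together, a self-contained sketch (building the element-to-index map from y rather than writing it out by hand, and using collections.Counter in place of the pre-zeroed index dict):

```python
from collections import Counter

def build_elem_to_index(y):
    # Precompute once, since y is known in advance.
    return {e: i for i, sub in enumerate(y) for e in sub}

def find_hits(x, elem_to_index, threshold):
    # O(len(x)): one dict lookup per element of x.
    counts = Counter(elem_to_index[e] for e in x)
    return sorted(i for i, c in counts.items() if c >= threshold)

y = [[1, 2, 3], [4], [5, 6], [7], [8, 9, 10, 11]]
x = [1, 2, 3, 4, 5, 6]
print(find_hits(x, build_elem_to_index(y), 2))  # [0, 2]
```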
This uses y to create a linear array mapping every int to 1 plus the index of the range (or subgroup) the int is in, called x2range_counter.
x2range_counter uses a 32 bit array.array type to save memory, and can be cached and reused for calculations of all x on the same y.
Calculating the hits in each range for a particular x is then just indirect array indexing to increment a counter, in function count_ranges.
import array
from typing import List

y = [[1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11, 12]]
x = [5, 3, 1, 11, 8, 10]

range_counter_max = len(y)
extent = y[-1][-1] + 1  # min in y must be 1, not 0, remember.
x2range_counter = array.array('L', [0] * extent)  # efficient 32 bit array storage

# Map any int in any x to the appropriate range counter.
for range_counter_index, rng in enumerate(y, start=1):
    for n in rng:
        x2range_counter[n] = range_counter_index
print(x2range_counter)  # array('L', [0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4])
# x2range_counter can be saved for this y and any x on this y.

def count_ranges(x: List[int]) -> List[int]:
    "Number of x-hits on each y subgroup, in order"
    # Note: count[0] catches errors. count[1..] counts x's in y ranges [0..].
    count = array.array('L', [0] * (range_counter_max + 1))
    for xx in x:
        count[x2range_counter[xx]] += 1
    assert count[0] == 0, "x values must all exist in a y range and y must have all ints in its range."
    return count[1:]

print(count_ranges(x))  # array('L', [1, 2, 1, 2])
I created a class for this, with extra functionality such as returning the ranges rather than the indices; all ranges hit >= M times; and (range, hit-count) tuples sorted most-hit first.
Range calculations for different x are proportional to len(x) and are simple array lookups rather than any hashing of dicts.
What do you think?
Let's say I have a NumPy array:
x = np.array([3, 9, 2, 1, 5, 4, 7, 7, 8, 6])
If I sum up this array, I get 52. What I need is a way to split it up starting from left to right into roughly n chunks where n is chosen by the user. Essentially, the splits occur in a greedy fashion. So, for some number of chunks n, the first n - 1 chunks must each sum up to at least 52/n and they must be consecutive indices taken from left to right.
So, if n = 2 then the first chunk would consist of the first 7 elements:
chunk[0] = x[:7] # [3, 9, 2, 1, 5, 4, 7], sum = 31
chunk[1] = x[7:] # [7, 8, 6], sum = 21
Notice that the first chunk wouldn't consist of the first 6 elements only, since the sum would be 24, which is less than 52/2 = 26. Also, notice that the number of elements in each chunk is allowed to vary as long as the sum criterion is met. Finally, it is perfectly fine for the last chunk to not be close to 52/2 = 26 since the other chunk(s) may take more.
However, the output that I need is a two column array that contains the start index in the first column and the (exclusive) stop index in the second column:
[[0, 7],
[7, 10]]
If n = 4, then the first 3 chunks need to each sum up to at least 52/4 = 13 and would look like this:
chunk[0] = x[:3] # [3, 9, 2], sum = 14
chunk[1] = x[3:7] # [1, 5, 4], sum = 17
chunk[2] = x[7:9] # [7, 8], sum = 15
chunk[3] = x[9:] # [6], sum = 6
And the output that I need would be:
[[0, 3],
[3, 7],
[7, 9],
[9, 10]]
So, one naive approach using for loops might be:
ranges = np.zeros((n_chunks, 2), np.int64)
ranges_idx = 0
range_start_idx = 0
total = 0
for i in range(x.shape[0]):
    total += x[i]
    if total > x.sum() / n_chunks:
        ranges[ranges_idx, 0] = range_start_idx
        ranges[ranges_idx, 1] = min(i + 1, x.shape[0])  # Exclusive stop index
        # Reset and update
        range_start_idx = i + 1
        ranges_idx += 1
        total = 0
# Handle final range outside of the for loop
ranges[ranges_idx, 0] = range_start_idx
ranges[ranges_idx, 1] = x.shape[0]
# Fill any unused trailing rows with degenerate [n, n] ranges
if ranges_idx < n_chunks - 1:
    ranges[ranges_idx + 1:] = x.shape[0]
return ranges
I am looking for a nicer vectorized solution.
I found inspiration in a similar question that was answered:
def func(x, n):
    out = np.zeros((n, 2), np.int64)
    cum_arr = x.cumsum() / x.sum()
    idx = 1 + np.searchsorted(cum_arr, np.linspace(0, 1, n, endpoint=False)[1:])
    out[1:, 0] = idx  # Fill the first column with start indices
    out[:-1, 1] = idx  # Fill the second column with exclusive stop indices
    out[-1, 1] = x.shape[0]  # Handle the stop index for the final chunk
    return out
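As a quick sanity check on both of the question's test cases (the function is repeated here so the snippet runs standalone):

```python
import numpy as np

def func(x, n):
    out = np.zeros((n, 2), np.int64)
    cum_arr = x.cumsum() / x.sum()
    idx = 1 + np.searchsorted(cum_arr, np.linspace(0, 1, n, endpoint=False)[1:])
    out[1:, 0] = idx
    out[:-1, 1] = idx
    out[-1, 1] = x.shape[0]
    return out

x = np.array([3, 9, 2, 1, 5, 4, 7, 7, 8, 6])
print(func(x, 2).tolist())  # [[0, 7], [7, 10]]
print(func(x, 4).tolist())  # [[0, 3], [3, 7], [7, 9], [9, 10]]
```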
Update
To cover the pathological case, we need to be a little more precise and do something like:
def func(x, n, truncate=False):
    out = np.zeros((n, 2), np.int64)
    cum_arr = x.cumsum() / x.sum()
    idx = 1 + np.searchsorted(cum_arr, np.linspace(0, 1, n, endpoint=False)[1:])
    out[1:, 0] = idx  # Fill the first column with start indices
    out[:-1, 1] = idx  # Fill the second column with exclusive stop indices
    out[-1, 1] = x.shape[0]  # Handle the stop index for the final chunk
    # Handle pathological case
    diff_idx = np.diff(idx)
    if np.any(diff_idx == 0):
        row_truncation_idx = np.argmin(diff_idx) + 2
        out[row_truncation_idx:, 0] = x.shape[0]
        out[row_truncation_idx - 1:, 1] = x.shape[0]
        if truncate:
            out = out[:row_truncation_idx]
    return out
Here is a solution that doesn't iterate over all elements:
def fun2(array, n):
    min_sum = np.sum(array) / n
    cumsum = np.cumsum(array)
    i = -1
    count = min_sum
    out = []
    while i < len(array) - 1:
        j = np.searchsorted(cumsum, count)
        out.append([i + 1, j + 1])
        i = j
        if i < len(array):
            count = cumsum[i] + min_sum
    out[-1][1] -= 1
    return np.array(out)
For the two test cases it produces the results you expected. HTH
I need to create a random array of 6 integers between 1 and 5 in Python, but I also have other data, say a = [2, 2, 3, 1, 2], which can be considered as the capacities. It means 1 can occur no more than 2 times, 3 can occur no more than 3 times, and so on.
I need to set up a counter for each integer from 1 to 5 to make sure each integer is not generated by the random function more than a[i] times.
Here is the initial array I created in Python, but I need to find out how to enforce the condition described above. For example, I don't want a solution like [2, 1, 5, 4, 5, 4], where 4 appears twice, or [2, 2, 2, 2, 1, 2].
solution = np.array([np.random.randint(1,6) for i in range(6)])
Even if I can add probability, that should work. Any help is appreciated on this.
You can create a pool of data in which each value is repeated up to its capacity, and then pick from there:
import numpy as np
a = [2, 2, 3, 1, 2]
data = [i + 1 for i, e in enumerate(a) for _ in range(e)]
print(data)
result = np.random.choice(data, 6, replace=False)
print(result)
Output
[1, 1, 2, 2, 3, 3, 3, 4, 5, 5]
[1 3 2 2 3 1]
Note that data is a list in which each element appears the specified number of times; we then pick randomly from data without replacement, which ensures that no element occurs more often than its specified count.
UPDATE
If you need each number to appear at least one time, you can start with a list of each of the numbers, sample the rest from the remaining pool, and then shuffle:
import numpy as np
result = [1, 2, 3, 4, 5]
a = [1, 1, 2, 0, 1]
data = [i + 1 for i, e in enumerate(a) for _ in range(e)]
print(data)
result = result + np.random.choice(data, 1, replace=False).tolist()
np.random.shuffle(result)
print(result)
Output
[1, 2, 3, 3, 5]
[3, 4, 2, 5, 1, 2]
Notice that I subtracted 1 from each of the original values of a; also, the original 6 was changed to 1 because you already have 5 numbers in the variable result.
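A possible generalization of both variants (the helper name and signature are my own; it assumes sum(capacities) >= k, and each capacity >= 1 when require_each is set):

```python
import numpy as np

def constrained_sample(capacities, k, require_each=False):
    # Values are 1..len(capacities); value v may appear at most capacities[v-1] times.
    values = list(range(1, len(capacities) + 1))
    result = values[:] if require_each else []
    remaining = [c - 1 for c in capacities] if require_each else list(capacities)
    # Pool with each value repeated up to its remaining capacity.
    pool = [v for v, c in zip(values, remaining) for _ in range(c)]
    extra = np.random.choice(pool, k - len(result), replace=False).tolist()
    result = result + extra
    np.random.shuffle(result)
    return result

print(constrained_sample([2, 2, 3, 1, 2], 6, require_each=True))
```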
You could test your count against a dictionary:
import random

a = [2, 2, 3, 1, 2]
d = {idx: item for idx, item in enumerate(a, start=1)}
l = []
while len(set(l) ^ set(range(1, 6))) > 0:
    l = []
    while len(l) != 6:
        x = random.randint(1, 5)
        while l.count(x) == d[x]:
            x = random.randint(1, 5)
        l.append(x)
print(l)
I have a fixed amount of int arrays of the form:
[a,b,c,d,e]
for example:
[2,2,1,1,2]
where a and b can be ints from 0 to 2, c and d can be 0 or 1, and e can be ints from 0 to 2.
Therefore there are 3 * 3 * 2 * 2 * 3 = 108 possible arrays of this form.
I would like to assign to each of those arrays a unique integer code from 0 to 107.
I am stuck. I thought of adding the numbers in each array, but two arrays such as
[0,0,0,0,1] and [1,0,0,0,0]
would both sum to 1.
Any suggestion?
Thank you.
You could use np.ravel_multi_index:
>>> np.ravel_multi_index([1, 2, 0, 1, 2], (3, 3, 2, 2, 3))
65
Validation:
>>> {np.ravel_multi_index(j, (3, 3, 2, 2, 3)) for j in itertools.product(*map(range, (3,3,2,2,3)))} == set(range(np.prod((3, 3, 2, 2, 3))))
True
Going back the other way:
>>> np.unravel_index(65, (3, 3, 2, 2, 3))
(1, 2, 0, 1, 2)
Just another way, similar to Horner's method for polynomials:
>>> from functools import reduce
>>> array = [1, 2, 0, 1, 2]
>>> ranges = (3, 3, 2, 2, 3)
>>> reduce(lambda i, ar: i * ar[1] + ar[0], zip(array, ranges), 0)
65
Unrolled, that's ((((0 * 3 + 1) * 3 + 2) * 2 + 0) * 2 + 1) * 3 + 2 = 65.
This is a little like converting digits from a varying-size number base to a standard integer. In base-10, you could have five digits, each from 0 to 9, and then you would convert them to a single integer via i = a*10000 + b*1000 + c*100 + d*10 + e*1.
Equivalently, for the decimal conversion, you could write i = np.dot([a, b, c, d, e], bases), where bases = [10*10*10*10, 10*10*10, 10*10, 10, 1].
You can do the same thing with your bases, except that your positions introduce multipliers of [3, 3, 2, 2, 3] instead of [10, 10, 10, 10, 10]. So you could set bases = [3*2*2*3, 2*2*3, 2*3, 3, 1] (=[36, 12, 6, 3, 1]) and then use i = np.dot([a, b, c, d, e], bases). Note that this will always give answers in the range of 0 to 107 if a, b, c, d, and e fall in the ranges you specified.
To convert i back into a list of digits, you could use something like this:
digits = []
remainder = i
for base in bases:
    digit, remainder = divmod(remainder, base)
    digits.append(digit)
On the other hand, to keep your life simple, you are probably better off using Paul Panzer's answer, which pretty much does the same thing. (I never thought of an n-digit number as the coordinates of a cell in an n-dimensional grid before, but it turns out they're mathematically equivalent, and np.ravel_multi_index is an easy way to assign a serial number to each cell.)
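A small sketch making the mixed-radix idea explicit (computing the place values from the ranges with a running product instead of hardcoding them; the helper names are my own):

```python
def make_bases(ranges):
    # Place value of each position: product of all range sizes to its right.
    bases = []
    prod = 1
    for r in reversed(ranges):
        bases.append(prod)
        prod *= r
    return bases[::-1]

def encode(digits, bases):
    return sum(d * b for d, b in zip(digits, bases))

def decode(i, bases):
    digits = []
    remainder = i
    for base in bases:
        digit, remainder = divmod(remainder, base)
        digits.append(digit)
    return digits

ranges = (3, 3, 2, 2, 3)
bases = make_bases(ranges)             # [36, 12, 6, 3, 1]
print(encode([2, 2, 1, 1, 2], bases))  # 107
print(decode(65, bases))               # [1, 2, 0, 1, 2]
```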
This data is small enough that you may simply enumerate them:
>>> L = [[a,b,c,d,e] for a in range(3) for b in range(3) for c in range(2) for d in range(2) for e in range(3)]
>>> L[0]
[0, 0, 0, 0, 0]
>>> L[107]
[2, 2, 1, 1, 2]
If you need to go the other way (from the array to the integer) make a lookup dict for it so that you will get O(1) instead of O(n):
>>> lookup = {tuple(x): i for i, x in enumerate(L)}
>>> lookup[1,1,1,1,1]
58
Getting the dot product of your vectors with powers of 10, as follows:
In [210]: a1
Out[210]: array([2, 2, 1, 1, 2])
In [211]: a2
Out[211]: array([1, 0, 1, 1, 0])
In [212]: a1.dot(np.power(10, np.arange(5,0,-1)))
Out[212]: 221120
In [213]: a2.dot(np.power(10, np.arange(5,0,-1)))
Out[213]: 101100
should produce 108 unique numbers - use their indices...
If the array length is not very large, you can calculate the weights first, then use a simple math formula to get the ID.
The code will be like:
# Test cases
test1 = [2, 2, 1, 1, 2]
test2 = [0, 2, 1, 1, 2]
test3 = [0, 0, 0, 0, 2]

def getUniqueID(target):
    # Calculate the weights first:
    # when Index == 0, Weight[0] = 1;
    # when Index > 0, Weight[Index] = Weight[Index-1] * (count of possible values for the previous index)
    weight = [1, 3, 9, 18, 36]
    return (target[0] * weight[0] + target[1] * weight[1] + target[2] * weight[2]
            + target[3] * weight[3] + target[4] * weight[4])

print('Test Case 1:', getUniqueID(test1))
print('Test Case 2:', getUniqueID(test2))
print('Test Case 3:', getUniqueID(test3))
# Output
# Test Case 1: 107
# Test Case 2: 105
# Test Case 3: 72