I have a very large ndarray A, and a sorted list of points k (a small list, about 30 points).
For every element of A, I want to determine the closest element in the list of points k, together with the index. So something like:
>>> A = np.asarray([3, 4, 5, 6])
>>> k = np.asarray([4.1, 3])
>>> values, indices
[3, 4.1, 4.1, 4.1], [1, 0, 0, 0]
Now, the problem is that A is very, very large, so I can't do something memory-inefficient like adding one dimension to A, taking the absolute difference with k, and then taking the minimum of each column.
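Just to be explicit, a sketch of that approach (it builds a (len(k), len(A)) matrix, which is exactly what I cannot afford):
diff = np.abs(k[:, None] - A[None, :])   # shape (len(k), len(A)) -- far too big for my real A
indices = diff.argmin(axis=0)            # index into k of the closest point, per element of A
values = k[indices]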
For now I have been using np.searchsorted, as shown in the second answer here: Find nearest value in numpy array, but even this is too slow. This is the code I used (modified to work with multiple values):
def find_nearest(A, k):
    indicesClosest = np.searchsorted(k, A)
    flagToReduce = indicesClosest == k.shape[0]
    modifiedIndicesToAvoidOutOfBoundsException = indicesClosest.copy()
    modifiedIndicesToAvoidOutOfBoundsException[flagToReduce] -= 1

    flagToReduce = np.logical_or(flagToReduce,
                                 np.abs(A - k[indicesClosest-1]) <
                                 np.abs(A - k[modifiedIndicesToAvoidOutOfBoundsException]))
    flagToReduce = np.logical_and(indicesClosest > 0, flagToReduce)
    indicesClosest[flagToReduce] -= 1

    valuesClosest = k[indicesClosest]
    return valuesClosest, indicesClosest
I then thought of using scipy.spatial.KDTree:
>>> d = scipy.spatial.KDTree(k)
>>> d.query(A)
This turns out to be much slower than the searchsorted solution.
On the other hand, the array A is always the same, only k changes. So it would be beneficial to build some auxiliary structure (like an "inverse KDTree") on A once, and then query it with the small array k.
Is there something like that?
Edit
At the moment I am using a variant of np.searchsorted that requires the array A to be sorted. We can do this in advance as a pre-processing step, but we still have to restore the original order after computing the indices. This variant is about twice as fast as the one above.
A = np.random.random(3000000)
k = np.random.random(30)

indices_sort = np.argsort(A)
sortedA = A[indices_sort]
inv_indices_sort = np.argsort(indices_sort)
k.sort()

def find_nearest(sortedA, k):
    midpoints = k[:-1] + np.diff(k)/2
    idx_aux = np.searchsorted(sortedA, midpoints)

    idx = []
    count = 0
    final_indices = np.zeros(sortedA.shape, dtype=int)
    old_obj = None
    for obj in idx_aux:
        if obj != old_obj:
            idx.append((obj, count))
            old_obj = obj
        count += 1

    old_idx = 0
    for idx_A, idx_k in idx:
        final_indices[old_idx:idx_A] = idx_k
        old_idx = idx_A
    final_indices[old_idx:] = len(k) - 1

    indicesClosest = final_indices[inv_indices_sort]  # <- this takes 90% of the time
    return k[indicesClosest], indicesClosest
The line that takes so much time is the line that brings the indices back to their original order.
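For reference, the same un-sorting can also be written as a scatter instead of a gather, which avoids inv_indices_sort altogether; a sketch (I have not verified whether it is actually faster, since it moves the same amount of data):
# equivalent to final_indices[inv_indices_sort], written as a scatter
indicesClosest = np.empty_like(final_indices)
indicesClosest[indices_sort] = final_indices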
Update:
The builtin function numpy.digitize can actually do exactly what you need. Only a small trick is required: digitize assigns values to bins. We can convert k to bins by sorting the array and setting the bin borders exactly in the middle between adjacent elements.
import numpy as np
A = np.asarray([3, 4, 5, 6])
k = np.asarray([4.1, 3, 1]) # added another value to show that sorting/binning works
ki = np.argsort(k)
ks = k[ki]
i = np.digitize(A, (ks[:-1] + ks[1:]) / 2)
indices = ki[i]
values = ks[i]
print(values, indices)
# [ 3. 4.1 4.1 4.1] [1 0 0 0]
Old answer:
I would take a brute-force approach to perform one vectorized pass over A for each element in k and update those locations where the current element improves the approximation.
import numpy as np
A = np.asarray([3, 4, 5, 6])
k = np.asarray([4.1, 3])
err = np.zeros_like(A) + np.inf # keep track of error over passes
values = np.empty_like(A, dtype=k.dtype)
indices = np.empty_like(A, dtype=int)
for i, v in enumerate(k):
    d = np.abs(A - v)
    mask = d < err  # only update where v is closer to A
    values[mask] = v
    indices[mask] = i
    err[mask] = d[mask]
print(values, indices)
# [ 3. 4.1 4.1 4.1] [1 0 0 0]
This approach requires three temporary variables of same size as A, so it will fail if not enough memory is available.
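If memory is too tight even for those, the same passes could be run over chunks of A so the temporaries stay chunk-sized; a rough sketch of that idea (the chunk size is arbitrary):
chunk = 1_000_000
values = np.empty_like(A, dtype=k.dtype)
indices = np.empty_like(A, dtype=int)
for start in range(0, len(A), chunk):
    a = A[start:start + chunk]
    err = np.full(a.shape, np.inf)
    for i, v in enumerate(k):
        d = np.abs(a - v)
        mask = d < err                           # only update where v is closer
        indices[start:start + chunk][mask] = i
        values[start:start + chunk][mask] = v
        err[mask] = d[mask]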
So, after some work and an idea from the scipy mailing list, I think that in my case (with a constant A and slowly varying k), the best way to do this is to use the following implementation.
class SearchSorted:
    def __init__(self, tensor, use_k_optimization=True):
        '''
        use_k_optimization requires storing 4x the size of the tensor.
        If use_k_optimization is True, the class will assume that successive calls will be made with similar k.
        When this happens, we can cut the running time significantly by storing additional variables. If it won't be
        called with similar k, set the flag to False, as it would otherwise just consume more memory for no
        good reason.
        '''
        self.indices_sort = np.argsort(tensor)
        self.sorted_tensor = tensor[self.indices_sort]
        self.inv_indices_sort = np.argsort(self.indices_sort)
        self.use_k_optimization = use_k_optimization

        self.previous_indices_results = None
        self.prev_idx_A_k_pair = None

    def query(self, k):
        midpoints = k[:-1] + np.diff(k) / 2
        idx_count = np.searchsorted(self.sorted_tensor, midpoints)

        idx_A_k_pair = []
        count = 0
        old_obj = 0
        for obj in idx_count:
            if obj != old_obj:
                idx_A_k_pair.append((obj, count))
                old_obj = obj
            count += 1

        if not self.use_k_optimization or self.previous_indices_results is None:
            # creates the index matrix in the sorted case
            final_indices = self._create_indices_matrix(idx_A_k_pair, self.sorted_tensor.shape, len(k))
            # and now unsort it to match the original tensor position
            indicesClosest = final_indices[self.inv_indices_sort]
            if self.use_k_optimization:
                self.prev_idx_A_k_pair = idx_A_k_pair
                self.previous_indices_results = indicesClosest
            return indicesClosest

        old_indices_unsorted = self._create_indices_matrix(self.prev_idx_A_k_pair, self.sorted_tensor.shape, len(k))
        new_indices_unsorted = self._create_indices_matrix(idx_A_k_pair, self.sorted_tensor.shape, len(k))
        mask = new_indices_unsorted != old_indices_unsorted

        self.prev_idx_A_k_pair = idx_A_k_pair
        self.previous_indices_results[self.indices_sort[mask]] = new_indices_unsorted[mask]
        indicesClosest = self.previous_indices_results

        return indicesClosest

    @staticmethod
    def _create_indices_matrix(idx_A_k_pair, matrix_shape, len_quant_points):
        old_idx = 0
        final_indices = np.zeros(matrix_shape, dtype=int)
        for idx_A, idx_k in idx_A_k_pair:
            final_indices[old_idx:idx_A] = idx_k
            old_idx = idx_A
        final_indices[old_idx:] = len_quant_points - 1
        return final_indices
The idea is to sort the array A beforehand, then use searchsorted of A on the midpoints of k. This gives the same information as before, in that it tells us exactly which points of A are closer to which points of k. The method _create_indices_matrix creates the full indices array from this information, and then we unsort it to recover the original order of A. To take advantage of slowly varying k, we save the last indices and determine which indices have to change; we then change only those. For slowly varying k, this produces superior performance (at a considerably higher memory cost, however).
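A minimal usage sketch (names are only for illustration; A is the constant array and k must be sorted before each query):
searcher = SearchSorted(A, use_k_optimization=True)   # pre-process the constant A once

k.sort()                        # query() assumes a sorted k
indices = searcher.query(k)     # indices into k, in A's original order
values = k[indices]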
For a random array A of 5 million elements and a k of about 30 elements, repeating the experiment 60 times, we get
Function search_sorted1; 15.72285795211792s
Function search_sorted2; 13.030786037445068s
Function query; 2.3306031227111816s <- the one with use_k_optimization = True
Function query; 4.81286096572876s <- with use_k_optimization = False
scipy.spatial.KDTree.query is far too slow, so I didn't time it (it takes more than a minute, though). This is the code used for the timing; it also contains the implementations of search_sorted1 and search_sorted2.
import numpy as np
import scipy
import scipy.spatial
import time
A = np.random.rand(10000*500) #5 million elements
k = np.random.rand(32)
k.sort()
#first attempt, detailed in the answer, too
def search_sorted1(A, k):
indicesClosest = np.searchsorted(k, A)
flagToReduce = indicesClosest == k.shape[0]
modifiedIndicesToAvoidOutOfBoundsException = indicesClosest.copy()
modifiedIndicesToAvoidOutOfBoundsException[flagToReduce] -= 1
flagToReduce = np.logical_or(flagToReduce,
np.abs(A-k[indicesClosest-1]) <
np.abs(A - k[modifiedIndicesToAvoidOutOfBoundsException]))
flagToReduce = np.logical_and(indicesClosest > 0, flagToReduce)
indicesClosest[flagToReduce] -= 1
return indicesClosest
# taken from @Divakar's answer linked in the comments under the question
def search_sorted2(A, k):
    indicesClosest = np.searchsorted(k, A, side="left").clip(max=k.size - 1)
    mask = (indicesClosest > 0) & \
           ((indicesClosest == len(k)) | (np.fabs(A - k[indicesClosest - 1]) < np.fabs(A - k[indicesClosest])))
    indicesClosest = indicesClosest - mask

    return indicesClosest
def kdquery1(A, k):
    # cKDTree expects 2-D arrays of points, hence the reshapes
    d = scipy.spatial.cKDTree(k.reshape(-1, 1), compact_nodes=False, balanced_tree=False)
    _, indices = d.query(A.reshape(-1, 1))
    return indices
# After an idea from the scipy mailing list
class SearchSorted:
    def __init__(self, tensor, use_k_optimization=True):
        '''
        Using this requires storing 4x the size of the tensor.
        If use_k_optimization is True, the class will assume that successive calls will be made with similar k.
        When this happens, we can cut the running time significantly by storing additional variables. If it won't be
        called with similar k, set the flag to False, as it would otherwise just consume more memory for no
        good reason.
        '''
        self.indices_sort = np.argsort(tensor)
        self.sorted_tensor = tensor[self.indices_sort]
        self.inv_indices_sort = np.argsort(self.indices_sort)
        self.use_k_optimization = use_k_optimization

        self.previous_indices_results = None
        self.prev_idx_A_k_pair = None

    def query(self, k):
        midpoints = k[:-1] + np.diff(k) / 2
        idx_count = np.searchsorted(self.sorted_tensor, midpoints)

        idx_A_k_pair = []
        count = 0
        old_obj = 0
        for obj in idx_count:
            if obj != old_obj:
                idx_A_k_pair.append((obj, count))
                old_obj = obj
            count += 1

        if not self.use_k_optimization or self.previous_indices_results is None:
            # creates the index matrix in the sorted case
            final_indices = self._create_indices_matrix(idx_A_k_pair, self.sorted_tensor.shape, len(k))
            # and now unsort it to match the original tensor position
            indicesClosest = final_indices[self.inv_indices_sort]
            if self.use_k_optimization:
                self.prev_idx_A_k_pair = idx_A_k_pair
                self.previous_indices_results = indicesClosest
            return indicesClosest

        old_indices_unsorted = self._create_indices_matrix(self.prev_idx_A_k_pair, self.sorted_tensor.shape, len(k))
        new_indices_unsorted = self._create_indices_matrix(idx_A_k_pair, self.sorted_tensor.shape, len(k))
        mask = new_indices_unsorted != old_indices_unsorted

        self.prev_idx_A_k_pair = idx_A_k_pair
        self.previous_indices_results[self.indices_sort[mask]] = new_indices_unsorted[mask]
        indicesClosest = self.previous_indices_results

        return indicesClosest

    @staticmethod
    def _create_indices_matrix(idx_A_k_pair, matrix_shape, len_quant_points):
        old_idx = 0
        final_indices = np.zeros(matrix_shape, dtype=int)
        for idx_A, idx_k in idx_A_k_pair:
            final_indices[old_idx:idx_A] = idx_k
            old_idx = idx_A
        final_indices[old_idx:] = len_quant_points - 1
        return final_indices
mySearchSorted = SearchSorted(A, use_k_optimization=True)
mySearchSorted2 = SearchSorted(A, use_k_optimization=False)
allFunctions = [search_sorted1, search_sorted2,
                mySearchSorted.query,
                mySearchSorted2.query]

print(np.array_equal(mySearchSorted.query(k), kdquery1(A, k)))
print(np.array_equal(mySearchSorted.query(k), search_sorted2(A, k)))
print(np.array_equal(mySearchSorted2.query(k), search_sorted2(A, k)))

if __name__ == '__main__':
    num_to_average = 3
    for func in allFunctions:
        if func.__name__ == 'search_sorted3':
            indices_sort = np.argsort(A)
            sA = A[indices_sort].copy()
            inv_indices_sort = np.argsort(indices_sort)
        else:
            sA = A.copy()
        if func.__name__ != 'query':
            func_to_use = lambda x: func(sA, x)
        else:
            func_to_use = func
        k_to_use = k
        start_time = time.time()
        for idx_average in range(num_to_average):
            for idx_repeat in range(10):
                k_to_use += (2*np.random.rand(*k.shape)-1)/100  # uniform in (-1/100, 1/100)
                k_to_use.sort()

                indices = func_to_use(k_to_use)
                if func.__name__ == 'search_sorted3':
                    indices = indices[inv_indices_sort]
                val = k[indices]
        end_time = time.time()
        total_time = end_time - start_time
        print('Function {}; {}s'.format(func.__name__, total_time))
I'm sure it is still possible to do better (I use a lot of space for the SearchSorted class, so we could probably save something). If you have any ideas for an improvement, please let me know!
Related
The answer for three matrices was given in this question, but I'm not sure how to apply this logic to an arbitrary number of pairwise connected matrices:
f(i, j, k, l, ...) = min(A(i, j), B(i,k), C(i,l), D(j,k), E(j,l), F(k,l), ...)
where A, B, ... are matrices and i, j, ... are indices that range up to the respective dimensions of the matrices. If we consider n indices, there are n(n-1)/2 pairs and thus n(n-1)/2 matrices. I would like to find (i, j, k, ...) such that f(i, j, k, l, ...) is maximized. I am currently doing that as follows:
import numpy as np
import itertools

#             i   j   k   l ...
dimensions = [50, 50, 50, 50]
n_dims = len(dimensions)
pairs = list(itertools.combinations(range(n_dims), 2))

# Construct the matrices A(i,j), B(i,k), ...
matrices = []
for pair in pairs:
    matrices.append(np.random.rand(dimensions[pair[0]], dimensions[pair[1]]))

# All the different i,j,k,l... combinations
combinations = itertools.product(*list(map(np.arange, dimensions)))
combinations = np.asarray(list(combinations))

# Find the maximum minimum
vals = []
for i in range(len(pairs)):
    pair = pairs[i]
    matrix = matrices[i]
    vals.append(matrix[combinations[:, pair[0]], combinations[:, pair[1]]])
f = np.min(vals, axis=0)

best_indices = combinations[np.argmax(f)]
print(best_indices, np.max(f))
[5 17 17 18] 0.932985854758534
This is faster than iterating over all (i, j, k, l, ...), but a lot of time is spent constructing the combinations and vals matrices. Is there an alternative way to do this where (1) the speed of numpy's matrix computation can be preserved and (2) I don't have to construct the memory-intensive vals matrices?
Here is a generalisation of the 3D solution. I assume there are other (better?) ways of organising the recursion, but this works well enough. It does a 6D example (product of dims 9x10^6) in under 10 ms.
Sample run below. Note that occasionally the indices returned by the two methods do not match: they are not always unique, since different index combinations can yield the same maximum of minima. Also note that at the very end we do a single run of a huge 6D example with 9x10^12 combinations; brute force is no longer viable on that, while the branch-and-cut method takes about 10 seconds.
trial 1
results identical True
results compatible True
brute force 276.8830654968042 ms
branch cut 9.971900499658659 ms
trial 2
results identical True
results compatible True
brute force 273.444719001418 ms
branch cut 9.236706099909497 ms
trial 3
results identical True
results compatible True
brute force 274.2998780013295 ms
branch cut 7.31226220013923 ms
trial 4
results identical True
results compatible True
brute force 273.0268925006385 ms
branch cut 6.956217200058745 ms
HUGE (100, 150, 200, 100, 150, 200) 9000000000000
branch cut 10246.754082996631 ms
Code:
import numpy as np
import itertools as it
import functools as ft

def bf(dims, pairs):
    dims, pairs = np.array(dims), np.array(pairs, object)
    n, m = len(dims), len(pairs)

    IDX = np.empty((m, n), object)
    Y, X = np.triu_indices(n, 1)
    IDX[np.arange(m), Y] = slice(None)
    IDX[np.arange(m), X] = slice(None)

    idx = np.unravel_index(
        ft.reduce(np.minimum, (p[(*i,)] for p, i in zip(pairs, IDX))).argmax(), dims)
    return ft.reduce(np.minimum, (
        p[I] for p, I in zip(pairs, it.combinations(idx, 2)))), idx

def cut(dims, pairs, offs=None):
    n = len(dims)
    if n < 3:
        if n == 2:
            A = pairs[0] if offs is None else np.minimum(
                pairs[0], np.minimum.outer(offs[0], offs[1]))
            idx = np.unravel_index(A.argmax(), dims)
            return A[idx], idx
        else:
            idx = offs[0].argmax()
            return offs[0][idx], (idx,)

    gmx = min(map(np.min, pairs))
    gidx = n * (0,)
    A = pairs[0] if offs is None else np.minimum(
        pairs[0], np.minimum.outer(offs[0], offs[1]))
    Y, X = np.unravel_index(A.argsort(axis=None)[::-1], dims[:2])
    for y, x in zip(Y, X):
        if A[y, x] <= gmx:
            return gmx, gidx
        coffs = [np.minimum(p1[y], p2[x])
                 for p1, p2 in zip(pairs[1:n-1], pairs[n-1:])]
        if not offs is None:
            coffs = [*map(np.minimum, coffs, offs[2:])]
        cmx, cidx = cut(dims[2:], pairs[2*n-3:], coffs)
        if cmx >= A[y, x]:
            return A[y, x], (y, x, *cidx)
        if gmx < cmx:
            gmx = min(A[y, x], cmx)
            gidx = y, x, *cidx
    return gmx, gidx
from timeit import timeit

IDX = 10, 15, 20, 10, 15, 20

for rep in range(4):
    print("trial", rep+1)
    pairs = [np.random.rand(i, j) for i, j in it.combinations(IDX, 2)]
    print("results identical", cut(IDX, pairs) == bf(IDX, pairs))
    print("results compatible", cut(IDX, pairs)[0] == bf(IDX, pairs)[0])
    print("brute force", timeit(lambda: bf(IDX, pairs), number=2)*500, "ms")
    print("branch cut", timeit(lambda: cut(IDX, pairs), number=10)*100, "ms")

IDX = 100, 150, 200, 100, 150, 200
pairs = [np.random.rand(i, j) for i, j in it.combinations(IDX, 2)]
print("HUGE", IDX, np.prod(IDX))
print("branch cut", timeit(lambda: cut(IDX, pairs), number=1)*1000, "ms")
TL;DR: if I were to explain the problem in short:
I have signals:
np.random.seed(42)
x = np.random.randn(1000)
y = np.random.randn(1000)
z = np.random.randn(1000)
and human readable string tuple logic like :
entry_sig_ = ((x,y,'crossup',False),)
exit_sig_ = ((x,z,'crossup',False), 'or_',(x,y,'crossdown',False))
where:
'entry_sig_' means the output will be 1 when the time series unfolds from left to right and 'entry_sig_' is hit. (x,y,'crossup',False) means: x crossed y upwards at a particular time i, and False means the signal doesn't have "memory"; otherwise the number of hits accumulates.
'exit_sig_' means the output will again become '0' when the 'exit_sig_' is hit.
The output is generated through:
@njit
def run(x, entry_sig, exit_sig):
    '''
    x: np.array
    entry_sig, exit_sig: homogeneous tuples of tuple signals
    Returns: sequence of 0 and 1 satisfying entry and exit sigs
    '''
    L = x.shape[0]
    out = np.empty(L)
    out[0] = 0.0
    out[-1] = 0.0
    i = 1
    trade = True
    while i < L-1:
        out[i] = 0.0
        if reduce_sig(entry_sig, i) and i < L-1:
            out[i] = 1.0
            trade = True
            while trade and i < L-2:
                i += 1
                out[i] = 1.0
                if reduce_sig(exit_sig, i):
                    trade = False
        i += 1
    return out
reduce_sig(sig,i) is a function (see definition below) that parses the tuple and returns resulting output for a given point in time.
Question:
As of now, an object of the SingleSig class is instantiated from scratch in the loop for every point in time and thus has no "memory", which cancels the merits of having a class at all; a bare function would do. Does there exist a workaround (a different class template, a different approach, etc.) so that:
the combined tuple signal can be queried for its value at a particular point in time i, and
the "memory" can be reset, e.g. MultiSig(sig_tuple).memory_field can be set to 0 at the level of the constituent signals?
The following code adds a memory to the signals, which can be wiped using MultiSig.reset() to reset the count of all signals to 0. The memory can be queried using MultiSig.query_memory(key) to return the number of hits for that signal at that time.
For the memory function to work, I had to add unique keys to the signals to identify them.
from numba import njit, int64, float64, types
from numba.types import Array, string, boolean
from numba import jitclass
import numpy as np

np.random.seed(42)

x = np.random.randn(1000000)
y = np.random.randn(1000000)
z = np.random.randn(1000000)

# Example of "human-readable" signals
entry_sig_ = ((x,y,'crossup',False),)
exit_sig_ = ((x,z,'crossup',False), 'or_',(x,y,'crossdown',False))

# Turn signals into homogeneous tuples
# entry_sig_
entry_sig = (((x,y,'crossup',False),'NOP','1'),)
# exit_sig_
exit_sig = (((x,z,'crossup',False),'or_','2'),((x,y,'crossdown',False),'NOP','3'))

@njit
def cross(x, y, i):
    '''
    x,y: np.array
    i: int - point in time
    Returns: 1 or 0 when condition is met
    '''
    if (x[i - 1] - y[i - 1])*(x[i] - y[i]) < 0:
        out = 1
    else:
        out = 0
    return out

kv_ty = (types.string, types.int64)

spec = [
    ('memory', types.DictType(*kv_ty)),
]

@njit
def single_signal(x, y, how, acc, i):
    '''
    i: int - point in time
    Returns either signal or accumulator
    '''
    if cross(x, y, i):
        if x[i] < y[i] and how == 'crossdown':
            out = 1
        elif x[i] > y[i] and how == "crossup":
            out = 1
        else:
            out = 0
    else:
        out = 0
    return out

@jitclass(spec)
class MultiSig:
    def __init__(self, entry, exit):
        '''
        initialize memory at single signal level
        '''
        memory_dict = {}
        for i in entry:
            memory_dict[str(i[2])] = 0
        for i in exit:
            memory_dict[str(i[2])] = 0

        self.memory = memory_dict

    def reduce_sig(self, sig, i):
        '''
        Parses multisignal
        sig: homogeneous tuple of tuples ("human-readable" signal definition)
        i: int - point in time
        Returns: resulting value of multisignal
        '''
        L = len(sig)
        out = single_signal(*sig[0][0], i)
        logic = sig[0][1]
        if out:
            self.update_memory(sig[0][2])
        for cnt in range(1, L):
            s = single_signal(*sig[cnt][0], i)
            if s:
                self.update_memory(sig[cnt][2])
            out = out | s if logic == 'or_' else out & s
            logic = sig[cnt][1]
        return out

    def update_memory(self, key):
        '''
        update memory
        '''
        self.memory[str(key)] += 1

    def reset(self):
        '''
        reset memory
        '''
        dicti = {}
        for i in self.memory:
            dicti[i] = 0
        self.memory = dicti

    def query_memory(self, key):
        '''
        return number of hits on signal
        '''
        return self.memory[str(key)]

@njit
def run(x, entry_sig, exit_sig):
    '''
    x: np.array
    entry_sig, exit_sig: homogeneous tuples of tuples
    Returns: sequence of 0 and 1 satisfying entry and exit sigs
    '''
    L = x.shape[0]
    out = np.empty(L)
    out[0] = 0.0
    out[-1] = 0.0
    i = 1
    multi = MultiSig(entry_sig, exit_sig)
    while i < L-1:
        out[i] = 0.0
        if multi.reduce_sig(entry_sig, i) and i < L-1:
            out[i] = 1.0
            trade = True
            while trade and i < L-2:
                i += 1
                out[i] = 1.0
                if multi.reduce_sig(exit_sig, i):
                    trade = False
        i += 1
    return out

run(x, entry_sig, exit_sig)
To reiterate what I said in the comments: | and & are bitwise operators, not logical operators. 1 & 2 evaluates to 0 (falsy), which is not what I believe you want this to evaluate to, so I made sure that out and s can only be 0 or 1 in order for this to produce the expected output.
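A quick illustration of that pitfall:
print(1 & 2)                      # 0 -> bitwise AND of 0b01 and 0b10, which is falsy
print(1 | 2)                      # 3 -> bitwise OR, not a clean 0/1 flag
print(int(bool(1) and bool(2)))   # 1 -> what a logical "and" of two truthy values gives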
You are aware that, because of:
out = out | s if logic == 'or_' else out & s
the order of the time-series inside entry_sig and exit_sig matters?
Let (output, logic) be tuples where output is 0 or 1 according to how crossup and crossdown would evaluate the passed information of the tuple, and logic is or_ or and_.
tuples = ((0,'or_'),(1,'or_'),(0,'and_'))

out = tuples[0][0]
logic = tuples[0][1]
for i in range(1, len(tuples)):
    s = tuples[i][0]
    out = out | s if logic == 'or_' else out & s
    out = s
    logic = tuples[i][1]
print(out)
0
changing the order of the tuple yields the other signal:
tuples = ((0,'or_'),(0,'and_'),(1,'or_'))

out = tuples[0][0]
logic = tuples[0][1]
for i in range(1, len(tuples)):
    s = tuples[i][0]
    out = out | s if logic == 'or_' else out & s
    out = s
    logic = tuples[i][1]
print(out)
1
The performance hinges on how many times the count needs to be updated. Using n=1,000,000 for all three time series, your code had a mean run-time of 0.6 s on my machine; mine had 0.63 s.
I then changed the crossing logic a bit to save on if/else branches, so that the nested if/else is only evaluated if the time series actually crossed, which can be checked with a single comparison (roughly as sketched below). This further halved the difference in run-time, so the above code now sits at about 2.5% longer run-time than your original code.
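The restructured check looks roughly like the sketch below (only an illustration of the idea; the version I actually timed may differ in details):
@njit
def single_signal(x, y, how, acc, i):
    # one product/comparison decides whether a crossing happened at all;
    # only then do we look at its direction (how is assumed to be
    # either 'crossup' or 'crossdown')
    if (x[i - 1] - y[i - 1]) * (x[i] - y[i]) < 0:
        if how == 'crossup':
            return 1 if x[i] > y[i] else 0
        return 1 if x[i] < y[i] else 0
    return 0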
Is there a way to make collections.Counter not count / ignore a given value (here 0)?
from collections import Counter
import numpy as np
idx = np.random.randint(4, size=(100,100))
most_common = np.zeros(100)
num_most_common = np.zeros(100)
for i in range(100):
    most_common[i], num_most_common[i] = Counter(idx[i, :]).most_common(1)[0]
So if 0 is the most common value it should give the second most common value. In addition, is there a way to avoid the for loop in this case?
For non-negative numbers, we can use a vectorized 2D bincount, bincount2D_vectorized -
# https://stackoverflow.com/a/46256361/ @Divakar
def bincount2D_vectorized(a):
    N = a.max()+1
    a_offs = a + np.arange(a.shape[0])[:,None]*N
    return np.bincount(a_offs.ravel(), minlength=a.shape[0]*N).reshape(-1,N)
# Get binned counts per row, with each number representing a bin
c = bincount2D_vectorized(idx)
# Skip the first element, as that represents counts for 0s.
# Get most common element and count per row
most_common = c[:,1:].argmax(1)+1
num_most_common = c[:,1:].max(1)
# faster : num_most_common = c[np.arange(len(most_common)),most_common]
For generic int numbers, we could extend like so -
s = idx.min()
c = bincount2D_vectorized(idx-s)
c[:,-s] = 0
most_common = c.argmax(1)
num_most_common = c[np.arange(len(most_common)),most_common]
most_common += s
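A quick sanity check of this generic-int version on a tiny example with negative values (the _t names are just for this illustration):
idx_test = np.array([[-2, -2, 0, 3],
                     [ 0,  0, 1, 1]])
s_t = idx_test.min()
c_t = bincount2D_vectorized(idx_test - s_t)
c_t[:, -s_t] = 0                              # drop the counts of value 0
mc_t = c_t.argmax(1)
n_t = c_t[np.arange(len(mc_t)), mc_t]
mc_t += s_t
print(mc_t, n_t)                              # [-2  1] [2 2]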
You can do the following, using a generator to only count something if it is not 0.
most_common = np.array([Counter(x for x in r if x).most_common(1)[0][0] for r in idx])
num_most_common = np.array([Counter(x for x in r if x).most_common(1)[0][1] for r in idx])
or even
count = np.array([Counter(x for x in r if x).most_common(1)[0] for r in idx])
most_common = count[:,0]
num_most_common = count[:,1]
I am currently using Python and numpy to calculate correlations between two lists, data_0 and data_1. Each list contains sorted times t0 and t1 respectively.
I want to calculate all the events where 0 < t1 - t0 < t_max.
for time_0 in np.nditer(data_0):
    delta_time = np.subtract(data_1, np.full(data_1.size, time_0))
    delta_time = delta_time[delta_time >= 0]
    delta_time = delta_time[delta_time < time_max]
Doing so, as the lists are sorted, I am selecting a subarray of data_1 of the form data_1[index_min: index_max].
So in fact I need to find two indices to get what I want.
What's interesting is that when I go to the next time_0, as data_0 is also sorted, I just need to find the new index_min / index_max such that new_index_min >= index_min and new_index_max >= index_max,
meaning that I don't need to scan all of data_1 again from scratch.
I have implemented such a solution without the numpy methods (just with while loops) and it gives me the same results as before, but it is much slower (15 times longer!).
I think that, since it normally requires less computation, there should be a way to make it faster using numpy methods, but I don't know how to do it.
Does anyone have an idea?
I am not sure if I am being super clear, so if you have any questions, do not hesitate to ask.
Thank you in advance,
Paul
Here is a vectorized approach using argsort. It uses a strategy similar to your avoid-full-scan idea:
import numpy as np

def find_gt(ref, data, incl=True):
    out = np.empty(len(ref) + len(data) + 1, int)
    total = (data, ref) if incl else (ref, data)
    out[1:] = np.argsort(np.concatenate(total), kind='mergesort')
    out[0] = -1
    split = (out < len(data)) if incl else (out >= len(ref))
    if incl:
        out[~split] -= len(data)
    split[0] = False
    return np.maximum.accumulate(np.where(split, -1, out))[split] + 1

def find_intervals(ref, data, span, incl=(True, True)):
    index_min = find_gt(ref, data, incl[0])
    index_max = len(ref) - find_gt(-ref[::-1], -span-data[::-1], incl[1])[::-1]
    return index_min, index_max

ref = np.sort(np.random.randint(0, 20000, (10000,)))
data = np.sort(np.random.randint(0, 20000, (10000,)))
span = 2

idmn, idmx = find_intervals(ref, data, span, (True, True))

print('checking')
for d, mn, mx in zip(data, idmn, idmx):
    assert mn == len(ref) or ref[mn] >= d
    assert mn == 0 or ref[mn-1] < d
    assert mx == len(ref) or ref[mx] > d+span
    assert mx == 0 or ref[mx-1] <= d+span
print('ok')
It works by:
indirectly sorting both sets together,
finding, for each time in one set, the preceding time in the other (this is done using maximum.accumulate),
and applying the preceding steps twice; the second time, the times in one set are shifted by span.
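For comparison, when both arrays are already sorted the two boundary indices described in the question can also be obtained with two plain np.searchsorted calls; a minimal sketch using data_0, data_1 and time_max as in the question (whether it beats the approach above will depend on the data):
# first index with t1 >= t0, and first index with t1 >= t0 + t_max,
# so data_1[lo[i]:hi[i]] are the events with 0 <= t1 - t0 < t_max
lo = np.searchsorted(data_1, data_0, side='left')
hi = np.searchsorted(data_1, data_0 + time_max, side='left')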
I am new to Julia and have been experimenting with it because I heard it has amazing performance. But I have yet to experience that promised performance. I have tried many of the methods for enhancing performance described in the book "Julia High Performance", which has made the code a little less readable. Still, my Python code is much faster than my Julia code, at least 3x faster for the benchmark case.
Either I am doing something very wrong with the code (which must be a sin in Julia), or Julia just can't do it. Please prove me wrong about the latter.
What I am trying to do in the code is assign distinct balls into distinct boxes with a maximum and minimum limit to the capacity of each box. The order in which balls are placed in the box also matters. I need to generate all possible assignments with the given constraints in minimum possible time.
PYTHON CODE:
import itertools
import time

max_balls = 5
min_balls = 0

def get_assignments(balls, boxes, assignments=[[]]):
    all_assignments = []
    upper_ball_limit = len(balls)
    if upper_ball_limit > max_balls:
        upper_ball_limit = max_balls

    n_boxes = len(boxes)
    lower_ball_limit = len(balls) - upper_ball_limit * (n_boxes - 1)
    if lower_ball_limit < min_balls:
        lower_ball_limit = min_balls

    if len(boxes) == 0:
        raise Exception("No delivery boys found")

    elif len(boxes) == 1:
        for strategy in itertools.permutations(balls, upper_ball_limit):
            # valid = evaluate_strategy(strategy, db_id)
            for subplan in assignments:
                subplan_copy = subplan[:]
                box_id = boxes[0]
                subplan_copy.append((box_id, strategy))
                all_assignments.append(subplan_copy)
        return all_assignments

    else:
        box_id = boxes[0]
        for i in range(lower_ball_limit, upper_ball_limit + 1):
            for strategy in itertools.permutations(balls, i):
                temp_plans = []
                for subplan in assignments:
                    subplan_copy = subplan[:]
                    subplan_copy.append((box_id, strategy))
                    temp_plans.append(subplan_copy)

                remaining_balls = set(balls).difference(strategy)
                remaining_boxes = list(set(boxes).difference([box_id]))
                if remaining_balls:
                    all_assignments.extend(get_assignments(remaining_balls, remaining_boxes, temp_plans))
                else:
                    all_assignments.extend(temp_plans)
        return all_assignments

balls = range(1, 9)
boxes = [1, 2, 3, 4]

t = time.time()
all_assignments = get_assignments(balls, boxes)
print('Number of assignments: %s' % len(all_assignments))
print('Time taken: %s' % (time.time() - t))
And here is my attempt at writing the JULIA CODE for the above.
#!/usr/bin/env julia

using Combinatorics

const max_balls = 5
const min_balls = 0

function plan_assignments(balls::Vector{Int32}, boxes; plans=[Vector{Tuple{Int32,Array{Int32,1}}}(length(boxes))])

    const n_boxes = length(boxes)
    const n_balls = length(balls)
    const n_plans = length(plans)

    if n_boxes*max_balls < n_balls
        print("Invalid Inputs: Number of balls exceed the number of boxes.")
    end

    all_plans = Vector{Tuple{Int32,Array{Int32,1}}}[]

    upper_box_limit = n_balls
    if upper_box_limit > max_balls
        upper_box_limit = max_balls
    end

    lower_box_limit = n_balls - upper_box_limit * (n_boxes-1)
    if lower_box_limit < min_balls
        lower_box_limit = min_balls
    end

    if n_boxes == 1
        box_id = boxes[1]
        @inbounds for strategy in Combinatorics.permutations(balls, upper_box_limit)
            @inbounds for subplan in plans
                subplan = subplan[:]
                subplan[tn_boxes - n_boxes + 1] = (box_id, strategy)
                all_plans = push!(all_plans, subplan)
            end
        end
        return all_plans

    else
        box_id = boxes[1]
        @inbounds for i in lower_box_limit:upper_box_limit
            @inbounds for strategy in Combinatorics.permutations(balls, i)
                temp_plans = Array{Vector{Tuple{Int32,Array{Int32,1}}},1}(n_plans)
                # temp_plans = []
                @inbounds for (i,subplan) in zip(1:n_plans, plans)
                    subplan = subplan[:]
                    subplan[tn_boxes - n_boxes + 1] = (box_id, strategy)
                    temp_plans[i] = subplan
                    # subplan = push!(subplan, (db_id, strategy))
                    # temp_plans = push!(temp_plans, subplan)

                    remaining_balls = filter((x) -> !(x in strategy), balls)
                    remaining_boxes = filter((x) -> x != box_id, boxes)

                    if length(remaining_balls) > 0
                        @inbounds for plan in plan_assignments(remaining_balls, remaining_boxes, plans=temp_plans)
                            push!(all_plans, plan)
                        end
                        # append!(all_plans, plan_assignments(remaining_orders, remaining_delivery_boys, plans=temp_plans))
                    else
                        @inbounds for plan in temp_plans
                            push!(all_plans, plan)
                        end
                        # append!(all_plans, temp_plans)
                    end
                end
            end
        end
        return all_plans
    end
end

balls = Int32[1,2,3,4,5,6,7,8]
boxes = Int32[1,2,3,4]
const tn_boxes = length(boxes)

@timev all_plans = plan_assignments(balls, boxes)
print(length(all_plans))
My benchmark timings are as follows:
For Python:
Number of assignments: 5040000
Time taken: 22.5003659725
For Julia: (This is while discounting the compilation time.)
76.986338 seconds (122.94 M allocations: 5.793 GB, 77.01% gc time)
elapsed time (ns): 76986338257
gc time (ns): 59287603693
bytes allocated: 6220111360
pool allocs: 122932049
non-pool GC allocs:10587
malloc() calls: 11
realloc() calls: 18
GC pauses: 270
full collections: 28
This is another version in Julia, a little bit more idiomatic and modified to avoid recursion and some allocations:
using Iterators
using Combinatorics

histograms(n,m) = [diff([0;x;n+m]).-1 for x in combinations([1:n+m-1;],m-1)]
good_histograms(n,m,minval,maxval) =
    [h for h in histograms(n,m) if all(maxval.>=h.>=minval)]

typealias PlanGrid Matrix{SubArray{Int,1,Array{Int,1},Tuple{UnitRange{Int}},true}}

function assignmentplans(balls, boxes, minballs, maxballs)
    nballs, nboxes = length(balls), length(boxes)
    nperms = factorial(nballs)
    partslist = good_histograms(nballs, nboxes, minballs, maxballs)
    plans = PlanGrid(nboxes, nperms*length(partslist))
    permutationvector = vec([balls[p[i]] for i=1:nballs, p in permutations(balls)])
    i1 = 1
    for parts in partslist
        i2 = 0
        breaks = [(x[1]+1:x[2]) for x in partition(cumsum([0;parts]),2,1)]
        for i=1:nperms
            for j=1:nboxes
                plans[j,i1] = view(permutationvector, breaks[j]+i2)
            end
            i1 += 1
            i2 += nballs
        end
    end
    return plans
end
For timing we get:
julia> assignmentplans([1,2],[1,2],0,2) # a simple example
2×6 Array{SubArray{Int64,1,Array{Int64,1},Tuple{UnitRange{Int64}},true},2}:
Int64[] Int64[] [1] [2] [1,2] [2,1]
[1,2] [2,1] [2] [1] Int64[] Int64[]
julia> @time plans = assignmentplans([1:8;],[1:4;],0,5);
8.279623 seconds (82.28 M allocations: 2.618 GB, 14.07% gc time)
julia> size(plans)
(4,5040000)
julia> plans[:,1000000] # sample ball configuration
4-element Array{SubArray{Int64,1,Array{Int64,1},Tuple{UnitRange{Int64}},true},1}:
Int64[]
[7,3,8,2,5]
[4]
[6,1]
Timings, of course, vary per setup, but this should be much faster. It is not exactly an apples-to-apples comparison, but it calculates the same stuff. Timings on the poster's (or others') machines are welcome in the comments.
I made a few minor changes to the code of Dan Getz in order to make it type-stable. The main problem was that Combinatorics.permutations and Iterators.partition are not type-stable, so I had to write type-stable versions of these as well (my change to Combinatorics.combinations was actually unnecessary)
import Base: start, next, done, eltype, length, size
import Base: iteratorsize, SizeUnknown, IsInfinite, HasLength
import Combinatorics: factorial, combinations
# Copied from Combinatorics
# Only one small change in the `nextpermutation` method
immutable Permutations{T}
    a::T
    t::Int
end

eltype{T}(::Type{Permutations{T}}) = Vector{eltype(T)}

length(p::Permutations) = (0 <= p.t <= length(p.a))?factorial(length(p.a), length(p.a)-p.t):0

"""
Generate all permutations of an indexable object. Because the number of permutations can be very large, this function returns an iterator object. Use `collect(permutations(array))` to get an array of all permutations.
"""
permutations(a) = Permutations(a, length(a))

"""
Generate all size t permutations of an indexable object.
"""
function permutations(a, t::Integer)
    if t < 0
        t = length(a) + 1
    end
    Permutations(a, t)
end
start(p::Permutations) = [1:length(p.a);]
next(p::Permutations, s) = nextpermutation(p.a, p.t ,s)
function nextpermutation(m, t, state)
    s = copy(state)              # was s = copy(s) a few lines down
    perm = [m[s[i]] for i in 1:t]
    n = length(s)
    if t <= 0
        return (perm, [n+1])
    end
    if t < n
        j = t + 1
        while j <= n && s[t] >= s[j]; j += 1; end
    end
    if t < n && j <= n
        s[t], s[j] = s[j], s[t]
    else
        if t < n
            reverse!(s, t+1)
        end
        i = t - 1
        while i >= 1 && s[i] >= s[i+1]; i -= 1; end
        if i > 0
            j = n
            while j > i && s[i] >= s[j]; j -= 1; end
            s[i], s[j] = s[j], s[i]
            reverse!(s, i+1)
        else
            s[1] = n + 1
        end
    end
    return (perm, s)
end
done(p::Permutations, s) = !isempty(s) && max(s[1], p.t) > length(p.a) || (isempty(s) && p.t > 0)
# Copied from Iterators
# Returns an `Array` of `Array`s instead of `Tuple`s now
immutable Partition{I}
    xs::I
    step::Int
    n::Int
end

iteratorsize{T<:Partition}(::Type{T}) = SizeUnknown()
eltype{I}(::Type{Partition{I}}) = Vector{eltype(I)}

function partition{I}(xs::I, n::Int, step::Int = n)
    Partition(xs, step, n)
end
function start{I}(it::Partition{I})
    N = it.n
    p = Vector{eltype(I)}(N)
    s = start(it.xs)
    for i in 1:(N - 1)
        if done(it.xs, s)
            break
        end
        (p[i], s) = next(it.xs, s)
    end
    (s, p)
end

function next{I}(it::Partition{I}, state)
    N = it.n
    (s, p0) = state
    (x, s) = next(it.xs, s)
    ans = p0; ans[end] = x

    p = similar(p0)
    overlap = max(0, N - it.step)
    for i in 1:overlap
        p[i] = ans[it.step + i]
    end

    # when step > n, skip over some elements
    for i in 1:max(0, it.step - N)
        if done(it.xs, s)
            break
        end
        (x, s) = next(it.xs, s)
    end

    for i in (overlap + 1):(N - 1)
        if done(it.xs, s)
            break
        end
        (x, s) = next(it.xs, s)
        p[i] = x
    end

    (ans, (s, p))
end
done(it::Partition, state) = done(it.xs, state[1])
# Copied from the answer of Dan Getz
# Added types to comprehensions and used Vector{Int} instead of Int in vcat
typealias PlanGrid Matrix{SubArray{Int,1,Array{Int,1},Tuple{UnitRange{Int}},true}}
histograms(n,m) = [diff(vcat([0],x,[n+m])).-1 for x in combinations([1:n+m-1;],m-1)]
good_histograms(n,m,minval,maxval) =
Vector{Int}[h for h in histograms(n,m) if all(maxval.>=h.>=minval)]
minballs = 0
maxballs = 5
function assignmentplans(balls, boxes, minballs, maxballs)
    nballs, nboxes = length(balls), length(boxes)
    nperms = factorial(nballs)
    partslist = good_histograms(nballs, nboxes, minballs, maxballs)
    plans = PlanGrid(nboxes, nperms*length(partslist))
    permutationvector = vec([balls[p[i]] for i=1:nballs, p in permutations(balls)])
    i1 = 1
    for parts in partslist
        i2 = 0
        breaks = UnitRange{Int64}[(x[1]+1:x[2]) for x in partition(cumsum(vcat([0],parts)),2,1)]
        for i=1:nperms
            for j=1:nboxes
                plans[j,i1] = view(permutationvector, breaks[j]+i2)
            end
            i1 += 1
            i2 += nballs
        end
    end
    return plans
end
@time plans = assignmentplans(1:8, 1:4, 0, 5)
The result of the first run is (the timings vary a lot because of the gc)
1.589867 seconds (22.02 M allocations: 1.127 GB, 46.50% gc time)
4×5040000 Array{SubArray{Int64,1,Array{Int64,1},Tuple{UnitRange{Int64}},true},2}:
Int64[] Int64[] Int64[] Int64[] Int64[] Int64[] … [8,7,6,5,4] [8,7,6,5,4] [8,7,6,5,4] [8,7,6,5,4] [8,7,6,5,4]
Int64[] Int64[] Int64[] Int64[] Int64[] Int64[] [1,3,2] [2,1,3] [2,3,1] [3,1,2] [3,2,1]
[1,2,3] [1,2,3] [1,2,3] [1,2,3] [1,2,3] [1,2,3] Int64[] Int64[] Int64[] Int64[] Int64[]
[4,5,6,7,8] [4,5,6,8,7] [4,5,7,6,8] [4,5,7,8,6] [4,5,8,6,7] [4,5,8,7,6] Int64[] Int64[] Int64[] Int64[] Int64[]
I didn't test the changes thoroughly. Also, I don't understand why s = copy(s) works in combinations but not in permutations. Interestingly, there is only a negligible improvement if I try to make the original version type-stable (still > 40 s with 85% gc time).