I have a large two-dimensional array arr that I would like to bin over the second axis using numpy. Because np.histogram flattens the array, I'm currently using a for loop:
import numpy as np
arr = np.random.randn(100, 100)
nbins = 10
binned = np.empty((arr.shape[0], nbins))
for i in range(arr.shape[0]):
    binned[i, :] = np.histogram(arr[i, :], bins=nbins)[0]
I feel like there should be a more direct and more efficient way to do that within numpy but I failed to find one.
You could use np.apply_along_axis:
x = np.array([range(20), range(1, 21), range(2, 22)])
nbins = 2
>>> np.apply_along_axis(lambda a: np.histogram(a, bins=nbins)[0], 1, x)
array([[10, 10],
       [10, 10],
       [10, 10]])
The main advantage (if any) is that it's slightly shorter, but I wouldn't expect much of a performance gain. It's possibly marginally more efficient in the assembly of the per-row results.
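If you want to check that claim on your own machine, a minimal timing sketch (using the arr and nbins from the question; exact numbers will vary) could look like this:
import timeit
import numpy as np

arr = np.random.randn(100, 100)
nbins = 10

def loop_version():
    # the original for-loop approach
    binned = np.empty((arr.shape[0], nbins))
    for i in range(arr.shape[0]):
        binned[i, :] = np.histogram(arr[i, :], bins=nbins)[0]
    return binned

def apply_version():
    # the apply_along_axis approach from above
    return np.apply_along_axis(lambda a: np.histogram(a, bins=nbins)[0], 1, arr)

print(timeit.timeit(loop_version, number=100))
print(timeit.timeit(apply_version, number=100))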
I was a bit confused by the lambda in Ami's solution, so I expanded it out to show what it's doing:
def hist_1d(a):
    return np.histogram(a, bins=nbins)[0]

counts = np.apply_along_axis(hist_1d, axis=1, arr=x)
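One thing worth noting: with an integer bins argument, np.histogram computes the bin edges from each row's own min and max, so every row ends up binned against different edges. If you want the same edges for every row, one way (a sketch, using the arr and nbins from the question) is to compute shared edges once from the whole array:
edges = np.histogram_bin_edges(arr.ravel(), bins=nbins)  # shared edges for all rows

def hist_1d_shared(a):
    return np.histogram(a, bins=edges)[0]

counts = np.apply_along_axis(hist_1d_shared, axis=1, arr=arr)
print(counts.shape)  # (100, 10) for the question's 100 x 100 arr with nbins = 10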
To bin a numpy array along any axis, you may use:
def bin_nd_data(arr, bin_n=2, axis=-1):
    """ bin a nD array along one specific axis, to check.."""
    ss = list(arr.shape)
    if ss[axis] % bin_n == 0:
        ss[axis] = int(ss[axis] / bin_n)
        print('ss is ', ss)
        if axis == -1:
            ss.append(bin_n)
            return np.mean(np.reshape(arr, ss, order='F'), axis=-1)
        else:
            ss.insert(axis + 1, bin_n)
            return np.mean(np.reshape(arr, ss, order='F'), axis=axis + 1)
    else:
        print('bin nd data, not divisible bin given : array shape :', arr.shape, ' bin ', bin_n)
        return None
It is a slight bother to take into account the case 'axis=-1'.
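For the 100 x 100 array from the question, a quick way to exercise it (a sketch; note that this reduces by taking means over each bin rather than building histogram counts like np.histogram):
arr = np.random.randn(100, 100)
out = bin_nd_data(arr, bin_n=10, axis=-1)   # prints 'ss is  [100, 10]'
print(out.shape)  # (100, 10)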
You can use numpy.histogramdd, which is meant specifically for this kind of problem.
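For completeness, one way that could look for the question's setup (a sketch; it assumes you are happy with the same bin edges for every row, unlike per-row np.histogram):
import numpy as np

arr = np.random.randn(100, 100)
nbins = 10

# Treat the row index as the first histogram dimension.
rows = np.repeat(np.arange(arr.shape[0]), arr.shape[1])

# One bin per row along the first axis, nbins bins along the value axis.
binned, edges = np.histogramdd(
    (rows, arr.ravel()),
    bins=[np.arange(arr.shape[0] + 1), nbins],
)
print(binned.shape)  # (100, 10)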
I have two 1D arrays
x = np.random.rand(100)
alpha = np.array([2, 3, 4])
I will refer to the elements of x as x_0, x_1, etc.
How, in the fastest way possible, can I create a sort of 'sliding dot product' from this, more specifically the following 1D array:
array([2*x_0  + 3*x_1  + 4*x_2,
       2*x_1  + 3*x_2  + 4*x_3,
       2*x_2  + 3*x_3  + 4*x_4,
       ...,
       2*x_97 + 3*x_98 + 4*x_99])
I can't think of a way that doesn't use for loops. I'm sure there's a more elegant way.
That's called convolution; in your case you want "valid" mode so that it doesn't pad with zeros. One caveat: np.convolve flips the kernel, so to get exactly the sums written above you either reverse alpha or use np.correlate, which doesn't flip:
import numpy as np

x = np.random.rand(100)
alpha = np.array([2, 3, 4])

# np.convolve reverses the kernel, so flip alpha to get 2*x_i + 3*x_(i+1) + 4*x_(i+2)
res = np.convolve(x, alpha[::-1], mode="valid")
# equivalently: res = np.correlate(x, alpha, mode="valid")
print(len(res))  # 98, you can count it yourself on paper
I have an n row, m column numpy array, and would like to create a new k x m array by selecting k random elements from each column of the array. I wrote the following python function to do this, but would like to implement something more efficient and faster:
import random
import numpy as np

def sample_array_cols(MyMatrix, nelements):
    vmat = []
    TempMat = MyMatrix.T
    for v in TempMat:
        v = np.ndarray.tolist(v)
        subv = random.sample(v, nelements)
        vmat = vmat + [subv]
    return np.array(vmat).T
One question is whether there's a way to loop over each column without transposing the array (and then transposing back). More importantly, is there some way to map the random sample onto each column that would be faster than having a for loop over all columns? I don't have that much experience with numpy objects, but I would guess that there should be something analogous to apply/mapply in R that would work?
One alternative is to randomly generate the indices first, and then use take_along_axis to map them to the original array:
arr = np.random.randn(1000, 5000) # arbitrary
k = 10 # arbitrary
n, m = arr.shape
idx = np.random.randint(0, n, (k, m))
new = np.take_along_axis(arr, idx, axis=0)
Output (shape):
In [215]: new.shape
Out[215]: (10, 5000)  # (k x m)
To sample each column without replacement, just like your original solution does (the randint approach above samples with replacement):
import numpy as np
matrix = np.arange(4*3).reshape(4,3)
matrix
Output
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
k = 2
np.take_along_axis(matrix, np.random.rand(*matrix.shape).argsort(axis=0)[:k], axis=0)
Output
array([[ 9,  1,  2],
       [ 3,  4, 11]])
I would:
1. Pre-allocate the result array and fill in columns, and
2. Use numpy's integer array indexing.
def sample_array_cols(matrix, n_result):
    (n, m) = matrix.shape
    vmat = np.empty((n_result, m), dtype=matrix.dtype)
    for c in range(m):
        random_indices = np.random.randint(0, n, n_result)
        vmat[:, c] = matrix[random_indices, c]
    return vmat
Not quite fully vectorized, but better than building up a list, and the code scans just like your description. Note that np.random.randint samples indices with replacement; if you need each column sampled without replacement, like random.sample in your version, use np.random.choice(n, n_result, replace=False) inside the loop instead.
I am generating a random matrix with
np.random.randint(2, size=(5, 3))
that outputs something like
[0,1,0],
[1,0,0],
[1,1,1],
[1,0,1],
[0,0,0]
How do I create the random matrix with the condition that each row cannot contain all 1's? That is, each row can be [1,0,0] or [0,0,0] or [1,1,0] or [1,0,1] or [0,0,1] or [0,1,0] or [0,1,1] but cannot be [1,1,1].
Thanks for your answers
Here's an interesting approach:
rows = np.random.randint(7, size=(6, 1), dtype=np.uint8)
np.unpackbits(rows, axis=1)[:, -3:]
Essentially, you are choosing an integer 0-6 for each row, i.e. 000-110 in binary; 7 would be 111 (all 1's). You just need to extract the binary digits as columns and take the last 3 (your 3 columns), since the output of unpackbits is 8 digits.
Output:
array([[1, 0, 1],
       [1, 0, 0],
       [1, 0, 0],
       [1, 0, 0],
       [0, 1, 1],
       [0, 0, 0]], dtype=uint8)
If you always have 3 columns, one approach is to explicitly list the acceptable rows and then randomly choose among them for as many rows as you need:
import numpy as np
# every acceptable row
choices = np.array([
    [1, 0, 0],
    [0, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [0, 1, 0],
    [0, 1, 1]
])
n_rows = 5
# randomly pick which type of row to use for each row needed
idx = np.random.choice(range(len(choices)), size=n_rows)
# make an array by using the chosen rows
array = choices[idx]
If this needs to generalize to a large number of columns, it won't be practical to explicitly list all choices (even if you create the choices programmatically, the memory is still an issue; the number of possible rows grows exponentially in the number of columns). Instead, you can create an initial matrix and then just resample any unacceptable rows until there are none left. I'm assuming that a row is unacceptable if it consists only of 1s; it would be easy to adapt this to the case where the threshold is any number of 1s, though.
n_rows = 5
n_cols = 4
array = np.random.randint(2, size=(n_rows, n_cols))
all_1s_idx = array.sum(axis=-1) == n_cols
while all_1s_idx.any():
    array[all_1s_idx] = np.random.randint(2, size=(all_1s_idx.sum(), n_cols))
    all_1s_idx = array.sum(axis=-1) == n_cols
Here we just keep resampling all unacceptable rows until there are none left. Because all of the necessary rows are resampled at once, this should be quite efficient. Additionally, as the number of columns grows larger, the probability of a row having all 1s decreases exponentially (it is 1/2**n_cols for any given row), so efficiency shouldn't be a problem.
@busybear beat me to it, but I'll post it anyway, as it is a bit more general:
import sys
import numpy as np

def not_all(m, k):
    if k > 64 or sys.byteorder != 'little':
        raise NotImplementedError
    sample = np.random.randint(0, 2**k - 1, (m,), dtype='u8').view('u1').reshape(m, -1)
    sample[:, k//8] <<= -k % 8
    return np.unpackbits(sample).reshape(m, -1)[:, :k]
For example:
>>> sample = not_all(1000000, 11)
# sanity checks
>>> unq, cnt = np.unique(sample, axis=0, return_counts=True)
>>> len(unq) == 2**11-1
True
>>> unq.sum(1).max()
10
>>> cnt.min(), cnt.max()
(403, 568)
And while I'm at it, hijacking other people's answers, here is a streamlined version of @Nathan's acceptance-rejection method.
def accrej(m, k):
    sample = np.random.randint(0, 2, (m, k), bool)
    all_ones, = np.where(sample.all(1))
    while all_ones.size:
        resample = np.random.randint(0, 2, (all_ones.size, k), bool)
        sample[all_ones] = resample
        all_ones = all_ones[resample.all(1)]
    return sample.view('u1')
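A quick usage check (a sketch, mirroring the sanity checks above):
sample = accrej(1000000, 11)
assert sample.shape == (1000000, 11)
assert not sample.all(axis=1).any()   # no all-ones rows remain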
Try this solution using sum():
import numpy as np
array = np.random.randint(2, size=(5, 3))
for i, entry in enumerate(array):
    if entry.sum() == 3:
        while True:
            new = np.random.randint(2, size=(1, 3))
            if new.sum() == 3:
                continue
            break
        array[i] = new
print(array)
Good luck my friend!
I have an array with 5 columns: 4 value columns and one index column. I sort and split the array along the index, which leaves me with sub-matrices of different lengths. From there I want to calculate, for every split, the mean and variance of the fourth column and the covariance of the first 3 columns. My current approach works with a for loop, which I would like to replace with matrix operations, but I am struggling with the different sizes of my matrices.
import numpy as np
A = np.random.rand(10,5)
A[:,-1] = np.random.randint(4, size=10)
sorted_A = A[np.argsort(A[:,4])]
splits = np.split(sorted_A, np.where(np.diff(sorted_A[:,4]))[0]+1)
My current for loop looks like this:
result = np.zeros((len(splits), 5))
for idx, values in enumerate(splits):
    if len(values) > 0:
        result[idx, 0] = np.mean(values[:, 3])
        result[idx, 1] = np.var(values[:, 3])
        result[idx, 2:5] = np.cov(values[:, 0:3].transpose(), ddof=0).diagonal()
    else:
        result[idx, 0] = values[:, 3]
I tried to work with masked arrays without success, since I couldn't load the matrices into the masked arrays in a proper form. Maybe someone knows how to do this or has a different suggestion.
You can use np.add.reduceat as follows:
>>> idx = np.concatenate([[0], np.where(np.diff(sorted_A[:,4]))[0]+1, [A.shape[0]]])
>>> result2 = np.empty((idx.size-1, 5))
>>> result2[:, 0] = np.add.reduceat(sorted_A[:, 3], idx[:-1]) / np.diff(idx)
>>> result2[:, 1] = np.add.reduceat(sorted_A[:, 3]**2, idx[:-1]) / np.diff(idx) - result2[:, 0]**2
>>> result2[:, 2:5] = np.add.reduceat(sorted_A[:, :3]**2, idx[:-1], axis=0) / np.diff(idx)[:, None]
>>> result2[:, 2:5] -= (np.add.reduceat(sorted_A[:, :3], idx[:-1], axis=0) / np.diff(idx)[:, None])**2
>>>
>>> np.allclose(result, result2)
True
Note that the diagonal of the covariance matrix is just the per-variable variances, which simplifies this vectorization quite a bit.
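To see why, here is a minimal check (a sketch with arbitrary data) that the diagonal of np.cov with ddof=0 matches the per-variable variances:
X = np.random.rand(3, 50)   # 3 variables, 50 observations each
print(np.allclose(np.cov(X, ddof=0).diagonal(), X.var(axis=1)))  # True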
Is it possible to simplify this:
import numpy as np
a = np.random.random_sample((40, 3))
data_base = np.random.random_sample((20, 3))
mean = np.random.random_sample((40,))
data = []
for s in data_base:
    data.append(mean + np.dot(a, s))
data should be of size (20, 40). I was wondering if I could do some broadcasting instead of the loop. I was not able to do it with np.add and some [:, None]; I'm certainly not using it correctly.
Your loop creates a (20, 40) array:
In [385]: len(data)
Out[385]: 20
In [386]: data = np.array(data)
In [387]: data.shape
Out[387]: (20, 40)
The straightforward application of dot produces the same thing:
In [388]: M2=mean+np.dot(data_base, a.T)
In [389]: np.allclose(M2,data)
Out[389]: True
The matmul operator also works with these arrays (no need to expand and squeeze):
M3 = data_base @ a.T + mean
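And a quick sanity check (a sketch continuing the session above) that the matmul form matches the looped result as well:
np.allclose(M3, data)   # True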