Apply custom function on numpy matrices - python

Given a function like my_function(x, y) that takes two ndarrays x and y as input and outputs a scalar:
def my_function(x, y):
    perm = np.take(x, y)
    return np.sum((np.power(2, perm) - 1) / (np.log2(np.arange(3, k + 3))))
I want to find a way to apply it to two matrices r and p
r = np.asarray([[5,6,7],[8,9,10]])
p = np.asarray([[2,1,0],[0,2,1]])
in such a way that an ndarray is returned with the values
np.asarray([my_function([5,6,7],[2,1,0]), my_function([8,9,10],[0,2,1])])

You can slightly modify your function to use take_along_axis instead of take, which allows it to handle the 2D case.
def my_function_2d(x, y, k=1):
    t = np.take_along_axis(x, y, -1)
    u = np.power(2, t) - 1
    v = np.log2(np.arange(3, k + 3))
    return (u / v).sum(-1)

my_function_2d(r, p, k=1)
array([ 139.43547554, 1128.73332914])
Validation
In [96]: k = 1
In [97]: my_function([5,6,7],[2,1,0])
Out[97]: 139.4354755392921
In [98]: my_function([8,9,10],[0,2,1])
Out[98]: 1128.7333291393375
This will also still work on the 1D case:
In [145]: my_function_2d(r[0], p[0], k=1)
Out[145]: 139.4354755392921
This approach generalizes to the N-dimensional case:
In [157]: r = np.random.randint(1, 5, (2, 2, 2, 2, 2, 3))
In [158]: p = np.random.randint(0, r.shape[-1], r.shape)
In [159]: my_function_2d(r, p, k=3)
Out[159]:
array([[[[[ 8.34718483, 14.25597598],
          [12.25597598, 19.97868221]],
         [[12.97868221,  4.68481893],
          [ 2.42295943,  1.56160631]]],
        [[[23.42409467,  9.82346582],
          [10.93124418, 16.42409467]],
         [[23.42409467,  1.56160631],
          [ 3.68481893, 10.68481893]]]],
       [[[[15.97868221, 10.93124418],
          [ 5.40752517, 14.93124418]],
         [[ 4.14566566,  6.34718483],
          [14.93124418,  3.68481893]]],
        [[[ 9.20853795, 13.39462286],
          [23.42409467,  3.82346582]],
         [[23.42409467,  9.85293763],
          [ 4.56160631, 10.93124418]]]]])
Note that this approach does not work for arbitrary inputs and values of k: there are some shape requirements. In particular, np.take_along_axis requires the index array to have the same number of dimensions as the data array, and np.log2(np.arange(3, k + 3)) has length k, so it must broadcast against the last axis of the permuted values.
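For instance, here is a minimal sketch of the first requirement (np.take accepts a 1-D index into a 2-D array, np.take_along_axis does not):

import numpy as np

x = np.array([[5, 6, 7], [8, 9, 10]])
y1 = np.array([2, 1, 0])                  # 1-D: fewer dimensions than x
y2 = np.array([[2, 1, 0], [0, 2, 1]])     # 2-D: same number of dimensions as x

print(np.take_along_axis(x, y2, -1))      # fine: permutes within each row
try:
    np.take_along_axis(x, y1, -1)
except ValueError as e:
    print(e)                              # complains about the dimension mismatch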

You can try either map or a list comprehension with zip, as shown below (a map-based variant follows the comprehension). Note that I set k = 1 inside the function to have runnable code, since you did not specify k.
def my_function(x, y):
    k = 1
    perm = np.take(x, y)
    return np.sum((np.power(2, perm) - 1) / (np.log2(np.arange(3, k + 3))))

r = np.asarray([[5,6,7],[8,9,10]])
p = np.asarray([[2,1,0],[0,2,1]])
result = np.asarray([my_function(i, j) for i, j in zip(r, p)])
print(result)
# [ 139.43547554 1128.73332914]
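Since map was mentioned as an alternative, here is the equivalent map-based version (same setup; k is still fixed to 1 inside my_function). Iterating over a 2-D array yields its rows, so map pairs the rows of r and p just as zip does:

result = np.asarray(list(map(my_function, r, p)))
print(result)
# [ 139.43547554 1128.73332914]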

You can use np.vectorize with the signature keyword:
k = 3
np.vectorize(my_function, signature='(i),(i)->()')(r, p)
# array([124.979052 , 892.46280834])
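Note that with a signature, np.vectorize maps my_function over all leading dimensions, passing 1-D slices of the last axis as the '(i)' core dimension, so it also handles stacked inputs. It still loops in Python under the hood, so it is a convenience rather than a speedup. A small sketch, assuming the original my_function with k defined globally as above:

vf = np.vectorize(my_function, signature='(i),(i)->()')
r5 = np.random.randint(1, 5, (2, 2, 3))   # hypothetical stacked input
p5 = np.random.randint(0, 3, r5.shape)
print(vf(r5, p5).shape)                   # (2, 2)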

Related

How do I iterate over the vectors of all but one axis of a numpy array (tensor)?

Suppose p has shape (4, 3, 2). I want to iterate 12 times over arrays of shape (2,):
q = np.empty_like(p)
op_axes = [list(range(len(p.shape) - 1)) + [-1]] * 2
it = np.nditer([p, q],
               op_axes=op_axes,
               op_flags=[['readonly'], ['writeonly', 'allocate']])
with it:
    for this_p, this_q in it:
        print(this_p.shape)  # I want this to have shape (2,)
        this_q[...] = some_function_of(this_p)
What am I doing wrong?
Best I can do:
q = np.empty_like(p)
for i in np.ndindex(p.shape[:-1]):
    this_p = p[i]
    ...
    q[i] = solution.x
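For completeness, here is a self-contained sketch of that pattern, with a hypothetical stand-in for the real computation (here called some_function_of, as in the question):

import numpy as np

def some_function_of(vec):              # placeholder: reverse each vector
    return vec[::-1]

p = np.arange(24).reshape(4, 3, 2)
q = np.empty_like(p)
for i in np.ndindex(p.shape[:-1]):      # 4 * 3 = 12 iterations over the leading axes
    q[i] = some_function_of(p[i])       # p[i] has shape (2,)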

sample n zeros from a sparse.coo_matrix

How do I (efficiently) sample zero values from a scipy.sparse.coo_matrix?
>>> import numpy as np
>>> from scipy.sparse import coo_matrix
>>> # create sparse array
>>> X = np.array([[1., 0.], [2., 1.], [0., 0.]])
>>> X_sparse = coo_matrix(X)
>>> # randomly sample 0's from X_sparse, retrieving as [(row, col), (row, col), ...]
>>> def sample_zeros(sp_arr, n, replacement=False):
...     # ???
...     return negs
>>> zero_indices = sample_zeros(X_sparse, n=3, replacement=False)
>>> print(zero_indices)
[(0, 1), (2, 0), (2, 1)]
Efficiency is important here, since I will be doing this in an iterator that feeds a neural network.
Since you know the shape of X, you could use np.random.choice to generate
random (row, col) locations in X:
h, w = X.shape
rows = np.random.choice(h, size=n)
cols = np.random.choice(w, size=n)
The main difficulty is how to check if a (row, col) is a non-zero location in X.
Here's a way to do that: Make a new sparse X which equals 1 wherever X is nonzero.
Next, create a new sparse matrix, Y, with non-zero values at the random locations generated above. Then subtract:
Y = Y - X.multiply(Y)
This sparse matrix Y will be zero wherever X is nonzero.
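Here is a tiny sketch of that masking step on the 3x2 example from the question (the variable names are just for illustration):

import numpy as np
import scipy.sparse as sparse

X = sparse.coo_matrix(np.array([[1., 0.], [2., 1.], [0., 0.]]))
h, w = X.shape
n = 3
rows = np.random.choice(h, size=n)
cols = np.random.choice(w, size=n)

X_ones = sparse.coo_matrix((np.ones(X.size), (X.row, X.col)), shape=X.shape)
Y = sparse.coo_matrix((np.ones(n), (rows, cols)), shape=X.shape)
Y = sparse.coo_matrix(Y - X_ones.multiply(Y))   # nonzero only where X is zero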
So if we've managed to generate enough nonzero values in Y, then we can use their (row, col) locations as the return value for sample_negs:
import unittest
import sys
import numpy as np
import scipy.sparse as sparse

def sample_negs(X, n=3, replace=False):
    N = np.prod(X.shape)
    m = N - X.size
    if n == 0:
        result = []
    elif (n < 0) or (not replace and m < n) or (replace and m == 0):
        raise ValueError("{n} samples from {m} locations do not exist"
                         .format(n=n, m=m))
    elif n/m > 0.5:
        # Y (in the else clause, below) would be pretty dense so there would be no point
        # trying to use sparse techniques. So let's use hpaulj's idea
        # (https://stackoverflow.com/a/53577267/190597) instead.
        import warnings
        warnings.filterwarnings("ignore", category=sparse.SparseEfficiencyWarning)
        Y = sparse.coo_matrix(X == 0)
        rows = Y.row
        cols = Y.col
        idx = np.random.choice(len(rows), size=n, replace=replace)
        result = list(zip(rows[idx], cols[idx]))
    else:
        X_row, X_col = X.row, X.col
        X_data = np.ones(X.size)
        X = sparse.coo_matrix((X_data, (X_row, X_col)), shape=X.shape)
        h, w = X.shape
        Y = sparse.coo_matrix(X.shape)
        Y_size = 0
        while Y_size < n:
            m = n - Y.size
            Y_data = np.concatenate([Y.data, np.ones(m)])
            Y_row = np.concatenate([Y.row, np.random.choice(h, size=m)])
            Y_col = np.concatenate([Y.col, np.random.choice(w, size=m)])
            Y = sparse.coo_matrix((Y_data, (Y_row, Y_col)), shape=X.shape)
            # Remove values in Y where X is nonzero.
            # This also consolidates (row, col) duplicates.
            Y = sparse.coo_matrix(Y - X.multiply(Y))
            if replace:
                Y_size = Y.data.sum()
            else:
                Y_size = Y.size
        if replace:
            rows = np.repeat(Y.row, Y.data.astype(int))
            cols = np.repeat(Y.col, Y.data.astype(int))
            idx = np.random.choice(rows.size, size=n, replace=False)
            result = list(zip(rows[idx], cols[idx]))
        else:
            rows = Y.row
            cols = Y.col
            idx = np.random.choice(rows.size, size=n, replace=False)
            result = list(zip(rows[idx], cols[idx]))
    return result

class Test(unittest.TestCase):
    def setUp(self):
        import warnings
        warnings.filterwarnings("ignore", category=sparse.SparseEfficiencyWarning)
        self.ncols, self.nrows = 100, 100
        self.X = sparse.random(self.ncols, self.nrows, density=0.05, format='coo')
        Y = sparse.coo_matrix(self.X == 0)
        self.expected = set(zip(Y.row, Y.col))

    def test_n_too_large(self):
        self.assertRaises(ValueError, sample_negs, self.X, n=100*100+1, replace=False)
        X_dense = sparse.coo_matrix(np.ones((4, 2)))
        self.assertRaises(ValueError, sample_negs, X_dense, n=1, replace=True)

    def test_no_replacement(self):
        for m in range(100):
            negative_list = sample_negs(self.X, n=m, replace=False)
            negative_set = set(negative_list)
            self.assertEqual(len(negative_list), m)
            self.assertLessEqual(negative_set, self.expected)

    def test_no_repeats_when_replace_is_false(self):
        negative_list = sample_negs(self.X, n=10, replace=False)
        self.assertEqual(len(negative_list), len(set(negative_list)))

    def test_dense_replacement(self):
        N = self.ncols * self.nrows
        m = N - self.X.size
        for i in [-1, 0, 1]:
            negative_list = sample_negs(self.X, n=m+i, replace=True)
            negative_set = set(negative_list)
            self.assertEqual(len(negative_list), m+i)
            self.assertLessEqual(negative_set, self.expected)

    def test_sparse_replacement(self):
        for m in range(100):
            negative_list = sample_negs(self.X, n=m, replace=True)
            negative_set = set(negative_list)
            self.assertEqual(len(negative_list), m)
            self.assertLessEqual(negative_set, self.expected)

if __name__ == '__main__':
    sys.argv.insert(1, '--verbose')
    unittest.main(argv=sys.argv)
Since sample_negs is rather complicated, I've included some unit tests
to hopefully verify reasonable behavior.
I don't think there's an efficient way that takes advantage of the sparse matrix structure:
In [197]: X = np.array([[1., 0.], [2., 1.], [0., 0.]])
     ...: X_sparse = sparse.coo_matrix(X)
In [198]: X_sparse
Out[198]:
<3x2 sparse matrix of type '<class 'numpy.float64'>'
        with 3 stored elements in COOrdinate format>
In [199]: print(X_sparse)
  (0, 0)    1.0
  (1, 0)    2.0
  (1, 1)    1.0
With the dense array you could do something like:
In [204]: zeros = np.argwhere(X == 0)
In [205]: zeros
Out[205]:
array([[0, 1],
       [2, 0],
       [2, 1]])
In [206]: idx = np.random.choice(3, 3, replace=False)
In [207]: idx
Out[207]: array([0, 2, 1])
In [208]: zeros[idx, :]
Out[208]:
array([[0, 1],
       [2, 1],
       [2, 0]])
We could ask for all 0s of the sparse matrix:
In [209]: X_sparse==0
/usr/local/lib/python3.6/dist-packages/scipy/sparse/compressed.py:214: SparseEfficiencyWarning: Comparing a sparse matrix with 0 using == is inefficient, try using != instead.
", try using != instead.", SparseEfficiencyWarning)
Out[209]:
<3x2 sparse matrix of type '<class 'numpy.bool_'>'
        with 3 stored elements in Compressed Sparse Row format>
In [210]: print(_)
  (0, 1)    True
  (2, 0)    True
  (2, 1)    True
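Putting the dense approach together, a minimal sample_zeros could look like this (a sketch; it densifies the matrix, so it is only reasonable when a dense copy fits in memory):

def sample_zeros(sp_arr, n, replacement=False):
    zeros = np.argwhere(sp_arr.toarray() == 0)   # all (row, col) zero locations
    idx = np.random.choice(len(zeros), size=n, replace=replacement)
    return [tuple(rc) for rc in zeros[idx]]

zero_indices = sample_zeros(X_sparse, n=3, replacement=False)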

How to scan through all the elements of a matrix with theano?

TL;DR:
What is the theano.scan equivalent of:
M = np.arange(9).reshape(3, 3)
for i in range(M.shape[0]):
    for j in range(M.shape[1]):
        M[i, j] += 5
M
possibly (if doable) without using nested scans?
Note that this question is not specifically about how to apply an operation elementwise to a matrix, but more generally about how to implement a nested looping construct like the one above with theano.scan.
Long version:
theano.scan (or equivalently in this case, theano.map) lets you map a function over multiple indices by providing sequences of elements to the sequences argument, with something like
import numpy as np
import theano
import theano.tensor as T

M = T.dmatrix('M')

def map_func(i, j, matrix):
    return matrix[i, j] + i * j

results, updates = theano.scan(map_func,
                               sequences=[T.arange(M.shape[0]), T.arange(M.shape[1])],
                               non_sequences=[M])
f = theano.function(inputs=[M], outputs=results)
f(np.arange(9).reshape(3, 3))
# array([  0.,   5.,  12.])
which is roughly equivalent to a python loop of the form:
M = np.arange(9).reshape(3, 3)
for i, j in zip(np.arange(M.shape[0]), np.arange(M.shape[1])):
    M[i, j] += 5
M
which increases by 5 all the elements on the diagonal of M: both sequences advance in lockstep, so only the index pairs (0, 0), (1, 1), (2, 2) are visited.
But what if I want to find the theano.scan equivalent of:
M = np.arange(9).reshape(3, 3)
for i in range(M.shape[0]):
    for j in range(M.shape[1]):
        M[i, j] += 5
M
possibly without nesting scan?
One way is of course to flatten the matrix, scan through the flattened elements, and then reshape it to the original shape, with something like
import theano
import theano.tensor as T

M = T.dmatrix('M')

def map_func(i, X):
    return X[i] + .5

M_flat = T.flatten(M)
results, updates = theano.map(map_func,
                              sequences=T.arange(M.shape[0] * M.shape[1]),
                              non_sequences=M_flat)
final_M = T.reshape(results, M.shape)
f = theano.function([M], final_M)
f([[1, 2], [3, 4]])
# array([[ 1.5,  2.5],
#        [ 3.5,  4.5]])
but is there a better way that doesn't involve explicitly flattening and reshaping the matrix?
Here is an example of how this kind of thing can be achieved using nested theano.scan calls.
In this example we add the number 3.141 to every element of a matrix, effectively simulating in a convoluted way the output of H + 3.141:
H = T.dmatrix('H')

def fn2(col, row, matrix):
    return matrix[row, col] + 3.141

def fn(row, matrix):
    res, updates = theano.scan(fn=fn2,
                               sequences=T.arange(matrix.shape[1]),
                               non_sequences=[row, matrix])
    return res

results, updates = theano.scan(fn=fn,
                               sequences=T.arange(H.shape[0]),
                               non_sequences=[H])
f = theano.function([H], results)
f([[0, 1], [2, 3]])
# array([[ 3.141,  4.141],
#        [ 5.141,  6.141]])
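For comparison, when the operation really is elementwise, no scan is needed at all; the nested construct above collapses to a single symbolic expression (a sketch using the same H):

g = theano.function([H], H + 3.141)
g([[0, 1], [2, 3]])
# array([[ 3.141,  4.141],
#        [ 5.141,  6.141]])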
As another example, let us add to each element of a matrix the product of its row and column indices:
H = T.dmatrix('H')

def fn2(col, row, matrix):
    return matrix[row, col] + row * col

def fn(row, matrix):
    res, updates = theano.scan(fn=fn2,
                               sequences=T.arange(matrix.shape[1]),
                               non_sequences=[row, matrix])
    return res

results, updates = theano.scan(fn=fn,
                               sequences=T.arange(H.shape[0]),
                               non_sequences=[H])
f = theano.function([H], results)
f(np.arange(9).reshape(3, 3))
# array([[  0.,   1.,   2.],
#        [  3.,   5.,   7.],
#        [  6.,   9.,  12.]])

NumPy indexing with varying position

I have an array input_data of shape (A, B, C) and an array ind of shape (B,). I want to loop over the B axis and, for each index i, sum the two elements at positions ind[i] and ind[i]+1 along the C axis. The desired output has shape (A, B). The following code works, but it feels inefficient because of the index-based loop over the B axis. Is there a more efficient method?
import numpy as np
input_data = np.random.rand(2, 6, 10)
ind = [2, 3, 5, 6, 5, 4]
out = np.zeros((input_data.shape[0], input_data.shape[1]))
for i in range(len(ind)):
    d = input_data[:, i, ind[i]:ind[i]+2]
    out[:, i] = np.sum(d, axis=1)
Edited based on Divakar's answer:
import timeit
import numpy as np

N = 1000
input_data = np.random.rand(10, N, 5000)
ind = (4999 * np.random.rand(N)).astype(int)

def test_1():  # Old loop-based method
    out = np.zeros((input_data.shape[0], input_data.shape[1]))
    for i in range(len(ind)):
        d = input_data[:, i, ind[i]:ind[i]+2]
        out[:, i] = np.sum(d, axis=1)
    return out

def test_2():  # Vectorized method
    extent = 2  # Comes from 2 in "ind[i]:ind[i]+2"
    m, n, r = input_data.shape
    idx = (np.arange(n)*r + ind)[:, None] + np.arange(extent)
    out1 = input_data.reshape(m, -1)[:, idx].reshape(m, n, -1).sum(2)
    return out1

print(timeit.timeit(stmt=test_1, number=1000))
print(timeit.timeit(stmt=test_2, number=1000))
print(np.all(test_1() == test_2(), keepdims=True))
# 7.70429363482
# 0.392034666757
# [[ True]]
Here's a vectorized approach using linear indexing with some help from broadcasting. We merge the last two axes of the input array, calculate the linear indices corresponding to the last two axes, perform slicing and reshape back to a 3D shape. Finally, we do summation along the last axis to get the desired output. The implementation would look something like this -
extent = 2  # Comes from 2 in "ind[i]:ind[i]+2"
m, n, r = input_data.shape
idx = (np.arange(n)*r + ind)[:, None] + np.arange(extent)
out1 = input_data.reshape(m, -1)[:, idx].reshape(m, n, -1).sum(2)
If the extent is always going to be 2 as stated in the question - "... sum of elements C[B[i]] and C[B[i]+1]", then you could simply do -
m, n, r = input_data.shape
ind_arr = np.array(ind)
axis1_r = np.arange(n)
out2 = input_data[:, axis1_r, ind_arr] + input_data[:, axis1_r, ind_arr+1]
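As a quick sanity check (a sketch, reusing out from the question's original loop):

assert np.allclose(out, out1) and np.allclose(out1, out2)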
You could also use integer array indexing combined with basic slicing:
import numpy as np

m, n, r = 2, 6, 10
input_data = np.arange(2*6*10).reshape(m, n, r)
ind = np.array([2, 3, 5, 6, 5, 4])

out = np.zeros((input_data.shape[0], input_data.shape[1]))
for i in range(len(ind)):
    d = input_data[:, i, ind[i]:ind[i]+2]
    out[:, i] = np.sum(d, axis=1)

out2 = input_data[:, np.arange(n)[:, None], np.add.outer(ind, range(2))].sum(axis=-1)
print(out2)
# [[  5  27  51  73  91 109]
#  [125 147 171 193 211 229]]
assert np.allclose(out, out2)

dimension error while converting code from matlab to python

I am working on converting code from MATLAB to Python. For the following values:
N = 100
V = np.array([[ -7.94627203e+01, -1.81562235e+02, -3.05418070e+02, -2.38451033e+02],
              [  9.43740653e+01,  1.69312771e+02,  1.68545575e+01, -1.44450299e+02],
              [  5.61599000e+00,  8.76135909e+01,  1.18959245e+02, -1.44049237e+02]])
V is a numpy array (the MATLAB code below refers to it as v).
for i = 1:N
    L(i) = sqrt(norm(v(:,i)));
    if L(i) > 0.0001
        q(:,i) = v(:,i)/L(i);
    else
        q(:,i) = v(:,i)*0.0001;
    end
end
I have converted this code to:
L = []
q = []
for i in range(1, N + 1):
    L.insert(i - 1, np.sqrt(np.linalg.norm(v[:, i - 1])))
    if L[i - 1] > 0.0001:
        q.insert(i - 1, (v[:, i - 1] / L[i - 1]).tolist())
    else:
        q.insert(i - 1, (v[:, i - 1] * 0.0001).tolist())
q = np.array(q)
return q, len_
But in MATLAB the resulting dimensions are 3 x 4, while in Python I am getting 4 x 3. Can anyone tell me where I am making a mistake?
You are inserting lists of length 3 into q. When you finish the loop that creates q, q is a list of 4 items, where each item is a list of length 3. So np.array(q) creates an array with shape 4x3. You could change the second-to-last line to this:
q = np.array(q).T
Or, you can use numpy more effectively to eliminate all the explicit for loops. For example, if you are using numpy 1.8, the norm function accepts an axis argument.
Here's vectorized version of your code.
First, some setup for this example.
In [152]: np.set_printoptions(precision=3)
In [153]: np.random.seed(111)
Create some data to work with.
In [154]: v = 5e-9 * np.random.randint(0, 3, size=(3, 4))
In [155]: v
Out[155]:
array([[  0.000e+00,   0.000e+00,   0.000e+00,   0.000e+00],
       [  1.000e-08,   5.000e-09,   1.000e-08,   1.000e-08],
       [  1.000e-08,   0.000e+00,   5.000e-09,   0.000e+00]])
Compute the square root of the norms of the columns by using the argument axis=0 in numpy.linalg.norm.
In [156]: L = np.sqrt(np.linalg.norm(v, axis=0))
In [157]: L
Out[157]: array([ 1.189e-04, 7.071e-05, 1.057e-04, 1.000e-04])
Use numpy.where to select the values by which the columns of v are to be divided to create q. Note that multiplying by 0.0001 is the same as dividing by 10000, so the fallback divisor is 10000.0:
In [158]: scale = np.where(L > 0.0001, L, 10000.0)
In [159]: scale
Out[159]: array([  1.189e-04,   1.000e+04,   1.057e-04,   1.000e+04])
v has shape (3, 4) and scale has shape (4,), so we can use broadcasting to divide each column of v by the corresponding value in scale.
In [160]: q = v / scale
In [161]: q
Out[161]:
array([[  0.000e+00,   0.000e+00,   0.000e+00,   0.000e+00],
       [  8.409e-05,   5.000e-13,   9.457e-05,   1.000e-12],
       [  8.409e-05,   0.000e+00,   4.729e-05,   0.000e+00]])
Repeated here are the three lines of the vectorized code:
L = np.sqrt(np.linalg.norm(v, axis=0))
scale = np.where(L > 0.0001, L, 10000.0)
q = v / scale
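For completeness, here is a sketch applying those three lines to the V from the question (assuming v = V); the result keeps MATLAB's 3 x 4 orientation:

import numpy as np

v = np.array([[-7.94627203e+01, -1.81562235e+02, -3.05418070e+02, -2.38451033e+02],
              [ 9.43740653e+01,  1.69312771e+02,  1.68545575e+01, -1.44450299e+02],
              [ 5.61599000e+00,  8.76135909e+01,  1.18959245e+02, -1.44049237e+02]])

L = np.sqrt(np.linalg.norm(v, axis=0))
scale = np.where(L > 0.0001, L, 10000.0)
q = v / scale
print(q.shape)   # (3, 4), matching MATLAB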
