Numpy apply function elementwise based on position - python

I have a numpy array A with shape (M,N). I want to create a new array B with shape (M,N,3) where the result would be the same as the following:
import numpy as np

def myfunc(A, sx=1.5, sy=3.5):
    M, N = A.shape
    B = np.zeros((M, N, 3))
    for i in range(M):
        for j in range(N):
            B[i, j, 0] = i*sx
            B[i, j, 1] = j*sy
            B[i, j, 2] = A[i, j]
    return B

A = np.array([[1, 2, 3], [9, 8, 7]])
print(myfunc(A))
Giving the result:
[[[0.  0.  1. ]
  [0.  3.5 2. ]
  [0.  7.  3. ]]

 [[1.5 0.  9. ]
  [1.5 3.5 8. ]
  [1.5 7.  7. ]]]
Is there a way to do it without the loops? I was wondering whether numpy can apply a function element-wise using the indexes of the array. Something like:
def myfuncEW(indx, value, out, vars):
    out[0] = indx[0]*vars[0]
    out[1] = indx[1]*vars[1]
    out[2] = value

M, N = A.shape
B = np.zeros((M, N, 3))
np.applyfunctionelementwise(myfuncEW, A, B, (sx, sy))

You could use mgrid and moveaxis:
>>> sx, sy = 1.5, 3.5
>>> M, N = A.shape
>>> I, J = np.mgrid[:M, :N] * np.array((sx, sy))[:, None, None]
>>> np.moveaxis((I, J, A), 0, -1)
array([[[ 0. ,  0. ,  1. ],
        [ 0. ,  3.5,  2. ],
        [ 0. ,  7. ,  3. ]],

       [[ 1.5,  0. ,  9. ],
        [ 1.5,  3.5,  8. ],
        [ 1.5,  7. ,  7. ]]])
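Equivalently, np.stack can place the new axis at the end directly, avoiding the separate moveaxis step (continuing the session above):

>>> np.stack((I, J, A), axis=-1)

This yields the same (M, N, 3) array.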

You could use meshgrid and dstack, like this:
import numpy as np

def myfunc(A, sx=1.5, sy=3.5):
    M, N = A.shape
    J, I = np.meshgrid(range(N), range(M))
    return np.dstack((I*sx, J*sy, A))

A = np.array([[1, 2, 3], [9, 8, 7]])
print(myfunc(A))
# array([[[ 0. ,  0. ,  1. ],
#         [ 0. ,  3.5,  2. ],
#         [ 0. ,  7. ,  3. ]],
#
#        [[ 1.5,  0. ,  9. ],
#         [ 1.5,  3.5,  8. ],
#         [ 1.5,  7. ,  7. ]]])

By preallocating the 3d array B, you save about half the time compared to stacking I, J and A.
def myfunc(A, sx=1.5, sy=3.5):
    M, N = A.shape
    B = np.zeros((M, N, 3))
    B[:, :, 0] = np.arange(M)[:, None]*sx
    B[:, :, 1] = np.arange(N)[None, :]*sy
    B[:, :, 2] = A
    return B
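A minimal timing sketch of that claim (the names myfunc_dstack and myfunc_prealloc are just labels introduced here for the two variants above; the actual speedup depends on array size and hardware):

import timeit
import numpy as np

def myfunc_dstack(A, sx=1.5, sy=3.5):
    # the meshgrid/dstack variant above
    M, N = A.shape
    J, I = np.meshgrid(range(N), range(M))
    return np.dstack((I*sx, J*sy, A))

def myfunc_prealloc(A, sx=1.5, sy=3.5):
    # the preallocated variant above
    M, N = A.shape
    B = np.zeros((M, N, 3))
    B[:, :, 0] = np.arange(M)[:, None]*sx
    B[:, :, 1] = np.arange(N)[None, :]*sy
    B[:, :, 2] = A
    return B

A = np.random.rand(500, 600)
print(timeit.timeit(lambda: myfunc_dstack(A), number=100))
print(timeit.timeit(lambda: myfunc_prealloc(A), number=100))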

Related

Matrix element repetition bug

I'm trying to create a matrix that reads:
[0,1,2]
[3,4,5]
[6,7,8]
However, my elements keep repeating. How do I fix this?
import numpy as np
n = 3
X = np.empty(shape=[0, n])
for i in range(3):
    for j in range(1, 4):
        for k in range(1, 7):
            X = np.append(X, [[(3*i), ((3*j)-2), ((3*k)-1)]], axis=0)
print(X)
Results:
[[ 0.  1.  2.]
 [ 0.  1.  5.]
 [ 0.  1.  8.]
 [ 0.  1. 11.]
 [ 0.  1. 14.]
 [ 0.  1. 17.]
 [ 0.  4.  2.]
 [ 0.  4.  5.]
 ...
I'm not really sure how you think your code was supposed to work. You are appending a row to X on every iteration, 3 * 3 * 6 = 54 times in total, so you end up with a 54 x 3 matrix.
I think maybe you meant to do:
for i in range(3):
    X = np.append(X, [[3*i, 3*i+1, 3*i+2]], axis=0)
Just so you know, appending to a NumPy array in a loop is usually discouraged (build a list of lists instead, then convert it to a NumPy array at the end), as shown below.
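A minimal sketch of that approach:

import numpy as np

# Collect rows in a plain Python list, then convert once at the end;
# this avoids reallocating the array on every np.append call.
rows = [[3*i, 3*i + 1, 3*i + 2] for i in range(3)]
X = np.array(rows, dtype=float)
print(X)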
You could also do
>>> np.arange(9).reshape((3, 3))
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])

numpy array with diagonal equal to zero and [x,y] = -[y,x]

I want to create an N x N array in numpy such that the diagonal is zero and [x,y] = -[y,x].
For example:
np.array([[  0, 12,  2],
          [-12,  0,  3],
          [ -2, -3,  0]])
The values inside the array can be any float.
One way would be with scipy.spatial.distance.squareform -
import numpy as np
from scipy.spatial.distance import squareform

def diag_inverted(n):
    l = n*(n-1)//2
    out = squareform(np.random.randn(l))
    out[np.tri(len(out), k=-1, dtype=bool)] *= -1
    return out
Another with array-assignment and masking -
def diag_inverted_v2(n):
    l = n*(n-1)//2
    m = np.tri(n, k=-1, dtype=bool)
    out = np.zeros((n, n), dtype=float)
    out[m] = np.random.randn(l)
    out[m.T] = -out.T[m.T]
    return out
Sample runs -
In [148]: diag_inverted(2)
Out[148]:
array([[ 0.        , -0.97873798],
       [ 0.97873798,  0.        ]])

In [149]: diag_inverted(3)
Out[149]:
array([[ 0.        , -2.2408932 , -1.86755799],
       [ 2.2408932 ,  0.        ,  0.97727788],
       [ 1.86755799, -0.97727788,  0.        ]])

In [150]: diag_inverted(4)
Out[150]:
array([[ 0.        , -0.95008842,  0.15135721, -0.4105985 ],
       [ 0.95008842,  0.        ,  0.10321885, -0.14404357],
       [-0.15135721, -0.10321885,  0.        , -1.45427351],
       [ 0.4105985 ,  0.14404357,  1.45427351,  0.        ]])
Here you go (the difference of a matrix and its transpose is antisymmetric by construction, so the diagonal is automatically zero):

size = 3
a = np.random.normal(0, 1, (size, size))
ret = (a - a.transpose())/2
Output (random):
array([[ 0.        ,  0.11872306,  0.46792054],
       [-0.11872306,  0.        ,  0.12530741],
       [-0.46792054, -0.12530741,  0.        ]])
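A quick sanity check of the two required properties:

import numpy as np

size = 3
a = np.random.normal(0, 1, (size, size))
ret = (a - a.transpose())/2

assert np.allclose(ret, -ret.T)            # ret[x, y] == -ret[y, x]
assert np.allclose(np.diagonal(ret), 0.0)  # zero diagonal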

bool masking in numpy array matrices

I have the following program:
import numpy as np
arr = np.random.randn(3, 4)
print(arr)
regArr = (arr > 0.8)
print(regArr)
print(arr[regArr].reshape(arr.shape))
output:
[[ 0.37182134  1.4807685   0.11094223  0.34548185]
 [ 0.14857641 -0.9159358  -0.37933393 -0.73946522]
 [ 1.01842304 -0.06714827 -1.22557205  0.45600827]]
I want an output the same shape as arr where values greater than 0.8 are kept and all other values are zero. I tried boolean masking as shown above, but I am not able to solve this. Kindly help.
I'm not entirely sure what exactly you want to achieve, but note that boolean indexing returns a flattened 1-D array containing only the selected elements, so in general it cannot be reshaped back to the original shape. This is what I did to filter:
arr = np.random.randn(3, 4)
array([[-0.04790508, -0.71700005,  0.23204224, -0.36354634],
       [ 0.48578236,  0.57983561,  0.79647091, -1.04972601],
       [ 1.15067885,  0.98622772, -0.7004639 , -1.28243462]])
arr[arr < 0.8] = 0
array([[0.        , 0.        , 0.        , 0.        ],
       [0.        , 0.        , 0.        , 0.        ],
       [1.15067885, 0.98622772, 0.        , 0.        ]])
Thanks to user3053452, I have added one more solution, in which the original data is not changed:
arr = np.random.randn(3, 4)
array([[ 0.4297907 ,  0.38100702,  0.30358291, -0.71137138],
       [ 1.15180635, -1.21251676,  0.04333404,  1.81045931],
       [ 0.17521058, -1.55604971,  1.1607159 ,  0.23133528]])
new_arr = np.where(arr < 0.8, 0, arr)
array([[0.        , 0.        , 0.        , 0.        ],
       [1.15180635, 0.        , 0.        , 1.81045931],
       [0.        , 0.        , 1.1607159 , 0.        ]])
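For reference, a minimal demonstration of why the original arr[regArr].reshape(arr.shape) attempt fails:

import numpy as np

arr = np.random.randn(3, 4)
picked = arr[arr > 0.8]   # 1-D array whose length equals the number of matches
print(picked.shape)       # e.g. (2,), so it cannot be reshaped to (3, 4)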

Printing numpy with different position in the column

I have the following numpy array:
import numpy as np
np.random.seed(20)
np.random.rand(20).reshape(5, 4)
array([[ 0.5881308 ,  0.89771373,  0.89153073,  0.81583748],
       [ 0.03588959,  0.69175758,  0.37868094,  0.51851095],
       [ 0.65795147,  0.19385022,  0.2723164 ,  0.71860593],
       [ 0.78300361,  0.85032764,  0.77524489,  0.03666431],
       [ 0.11669374,  0.7512807 ,  0.23921822,  0.25480601]])
I would like to slice each column starting at the corresponding position:
position_for_slicing=[0, 3, 4, 4]
So I will get following array:
array([[ 0.5881308 ,  0.85032764,  0.23921822,  0.25480601],
       [ 0.03588959,  0.7512807 ,  0.        ,  0.        ],
       [ 0.65795147,  0.        ,  0.        ,  0.        ],
       [ 0.78300361,  0.        ,  0.        ,  0.        ],
       [ 0.11669374,  0.        ,  0.        ,  0.        ]])
Is there a fast way to do this? I know I can use a for loop over the columns, but I was wondering if there is a more elegant way to do this.
If "elegant" means "no loop" the following would qualify, but probably not under many other definitions (arr is your input array):
m, n = arr.shape
arrf = np.asanyarray(arr, order='F')
padded = np.r_[arrf, np.zeros_like(arrf)]
assert padded.flags['F_CONTIGUOUS']
expnd = np.lib.stride_tricks.as_strided(padded, (m, m+1, n), padded.strides[:1] + padded.strides)
expnd[:, [0,3,4,4], range(4)]
# array([[ 0.5881308 ,  0.85032764,  0.23921822,  0.25480601],
#        [ 0.03588959,  0.7512807 ,  0.        ,  0.        ],
#        [ 0.65795147,  0.        ,  0.        ,  0.        ],
#        [ 0.78300361,  0.        ,  0.        ,  0.        ],
#        [ 0.11669374,  0.        ,  0.        ,  0.        ]])
The trick: expnd[i, p, j] is a strided view of padded[i + p, j], so indexing with the position list selects each column shifted up by its offset, with the zero padding supplying the tail. Please note that order='C' (and then 'C_CONTIGUOUS' in the assertion) also works. My hunch is that 'F' could be a bit faster because the indexing then operates on contiguous slices.
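For comparison, a more readable loop-free sketch of the same column-shifting operation, using fancy indexing with a validity mask (src, valid and out are names introduced here for illustration):

import numpy as np

np.random.seed(20)
arr = np.random.rand(20).reshape(5, 4)
pos = np.array([0, 3, 4, 4])

m, n = arr.shape
src = np.arange(m)[:, None] + pos[None, :]   # source row for each output cell
valid = src < m                              # cells whose source row exists
out = np.zeros_like(arr)
out[valid] = arr[src[valid], np.nonzero(valid)[1]]
print(out)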

Error in scipy sparse diags matrix construction

When using scipy.sparse.spdiags or scipy.sparse.diags I have noticed what I consider to be a bug in the routines, e.g.
scipy.sparse.spdiags([1.1,1.2,1.3],1,4,4).toarray()
returns
array([[ 0. ,  1.2,  0. ,  0. ],
       [ 0. ,  0. ,  1.3,  0. ],
       [ 0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ]])
That is, for positive diagonals it drops the first k data values. One might argue that there is some grand programming reason for this and that I just need to pad with zeros. OK, annoying as that may be, one can use scipy.sparse.diags, which gives the correct result. However, this routine has a bug that can't be worked around:
scipy.sparse.diags([1.1,1.2],0,(4,2)).toarray()
gives
array([[ 1.1,  0. ],
       [ 0. ,  1.2],
       [ 0. ,  0. ],
       [ 0. ,  0. ]])
nice, and
scipy.sparse.diags([1.1,1.2],-2,(4,2)).toarray()
gives
array([[ 0. ,  0. ],
       [ 0. ,  0. ],
       [ 1.1,  0. ],
       [ 0. ,  1.2]])
but
scipy.sparse.diags([1.1,1.2],-1,(4,2)).toarray()
gives an error saying ValueError: Diagonal length (index 0: 2 at offset -1) does not agree with matrix size (4, 2). Obviously the answer is
array([[ 0. ,  0. ],
       [ 1.1,  0. ],
       [ 0. ,  1.2],
       [ 0. ,  0. ]])
and for extra random behaviour we have
scipy.sparse.diags([1.1],-1,(4,2)).toarray()
giving
array([[ 0. ,  0. ],
       [ 1.1,  0. ],
       [ 0. ,  1.1],
       [ 0. ,  0. ]])
Anyone know if there is a function for constructing diagonal sparse matrices that actually works?
Executive summary: spdiags works correctly, even if the matrix input isn't the most intuitive. diags has a bug that affects some offsets in rectangular matrices; there is a bug fix on the scipy github. (The length-1 case above is not actually random behaviour: a single-element diagonal is deliberately broadcast along the whole diagonal, like a scalar fill.)
The example for spdiags is:
>>> data = array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
>>> diags = array([0, -1, 2])
>>> spdiags(data, diags, 4, 4).todense()
matrix([[1, 0, 3, 0],
        [1, 2, 0, 4],
        [0, 2, 3, 0],
        [0, 0, 3, 4]])
Note that the 3rd column of data always appears in the 3rd column of the sparse matrix. The other columns also line up, but they are omitted where they 'fall off the edge'. The input to this function is a matrix, while the input to diags is a ragged list. The diagonals of the sparse matrix all have different numbers of values, so the specification has to accommodate this one way or another: spdiags does so by ignoring some values, diags by taking a ragged list input.
The sparse.diags([1.1,1.2],-1,(4,2)) error is puzzling.
The spdiags equivalent does work:
In [421]: sparse.spdiags([[1.1, 1.2]], -1, 4, 2).A
Out[421]:
array([[ 0. ,  0. ],
       [ 1.1,  0. ],
       [ 0. ,  1.2],
       [ 0. ,  0. ]])
The error is raised in this block of code:
for j, diagonal in enumerate(diagonals):
    offset = offsets[j]
    k = max(0, offset)
    length = min(m + offset, n - offset)
    if length <= 0:
        raise ValueError("Offset %d (index %d) out of bounds" % (offset, j))
    try:
        data_arr[j, k:k+length] = diagonal
    except ValueError:
        if len(diagonal) != length and len(diagonal) != 1:
            raise ValueError(
                "Diagonal length (index %d: %d at offset %d) does not "
                "agree with matrix size (%d, %d)." % (
                    j, len(diagonal), offset, m, n))
        raise
The actual matrix constructor in diags is:
dia_matrix((data_arr, offsets), shape=(m, n))
This is the same constructor that spdiags uses, but without any manipulation.
In [434]: sparse.dia_matrix(([[1.1, 1.2]], -1), shape=(4, 2)).A
Out[434]:
array([[ 0. ,  0. ],
       [ 1.1,  0. ],
       [ 0. ,  1.2],
       [ 0. ,  0. ]])
In dia format, the inputs are stored exactly as given by spdiags (complete with that matrix with extra values):
In [436]: M.data
Out[436]: array([[ 1.1, 1.2]])
In [437]: M.offsets
Out[437]: array([-1], dtype=int32)
As @user2357112 points out, length = min(m + offset, n - offset) is wrong, producing 3 in the test case (min(4 - 1, 2 + 1) = 3, while the diagonal only has 2 entries). Changing it to length = min(m + k, n - k) makes all the cases for this (4, 2) matrix work, but it fails for the transpose, diags([1.1, 1.2], 1, (2, 4)).
The correction, as of Oct 5, for this issue is:
https://github.com/pv/scipy-work/commit/529cbde47121c8ed87f74fa6445c05d71353eb6c
length = min(m + offset, n - offset, min(m,n))
With this fix, diags([1.1,1.2], 1, (2, 4)) works.
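Until a fixed scipy is available, a workaround is to build the dia_matrix directly, as the In [434] example above already shows; a minimal sketch:

import numpy as np
from scipy.sparse import dia_matrix

# In dia format, data[k, j] is placed at (j - offsets[k], j), so for
# offset -1 the value at column j lands in row j + 1.
data = np.array([[1.1, 1.2]])
offsets = np.array([-1])
M = dia_matrix((data, offsets), shape=(4, 2))
print(M.toarray())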
