I have big 3D matrices indicating the positions of agents in a 3D space. The value of a cell is 0 if there is no agent in it and 1 if there is an agent in it.
My problem is that I want the agents to 'grow', in the sense that I want each of them to be represented by, let's say, a cube (3x3x3) of ones. I've already got a way to do it, but I'm having trouble when the agent is close to the borders.
For example, with a 100x100x100 positions matrix, if I know my agent is at position (x, y, z) I will do:
positions_matrix = numpy.zeros((100, 100, 100))
positions_matrix[x - 1: x + 2, y - 1: y + 2, z - 1: z + 2] += numpy.ones((3, 3, 3))
Of course in my real code I'm looping over more positions, but this is basically it. It works, but the problem comes when the agent is too close to the border: the sum can't be made because the matrix resulting from the slice is smaller than the ones matrix.
Any idea how to solve this, or whether NumPy or any other package has an implementation for it? I couldn't manage to find one, although I'm pretty sure I'm not the first to run into this.
A slightly more programmatic way of solving the problem:
import numpy as np
m = np.zeros((100, 100, 100))
slicing = tuple(
    # slice stops are exclusive, so clip the stop to the full dimension size d
    slice(max(0, x_i - 1), min(x_i + 2, d))
    for x_i, d in zip((x, y, z), m.shape))
ones_shape = tuple(s.stop - s.start for s in slicing)
m[slicing] += np.ones(ones_shape)
But it is otherwise the same as the accepted answer.
You should cut at the lower and upper bounds, using something like:
import numpy as np
m = np.zeros((100, 100, 100))
# slice stops are exclusive, so clip the upper bounds to the full shape
x_min, x_max = np.max([0, x-1]), np.min([x+2, m.shape[0]])
y_min, y_max = np.max([0, y-1]), np.min([y+2, m.shape[1]])
z_min, z_max = np.max([0, z-1]), np.min([z+2, m.shape[2]])
m[x_min:x_max, y_min:y_max, z_min:z_max] += np.ones((x_max-x_min, y_max-y_min, z_max-z_min))
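If you want this to work for arbitrary stamp sizes, one option is to clip both the target window and the matching part of the stamp with paired slices, so the shapes always agree at the borders. A minimal sketch of that idea, with a hypothetical helper name add_block:
import numpy as np

def add_block(grid, center, block):
    # Build one slice pair per axis: where the window is cut off by the
    # border, cut the same amount off the block.
    grid_slices, block_slices = [], []
    for c, size, b in zip(center, grid.shape, block.shape):
        start = c - b // 2
        lo, hi = max(0, start), min(size, start + b)
        grid_slices.append(slice(lo, hi))
        block_slices.append(slice(lo - start, hi - start))
    grid[tuple(grid_slices)] += block[tuple(block_slices)]

m = np.zeros((100, 100, 100))
add_block(m, (0, 99, 50), np.ones((3, 3, 3)))  # agent on an edge, no error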
There is a solution using np.put, and its 'clip' option.
It just requires a little gymnastics because the function requires indices in the flattened matrix; fortunately, the function np.ravel_multi_index does the job:
import itertools
import numpy as np
x, y, z = 2, 0, 4
positions_matrix = np.zeros((100,100,100))
indices = np.array( list( itertools.product( (x-1, x, x+1), (y-1, y, y+1), (z-1, z, z+1)) ))
flat_indices = np.ravel_multi_index(indices.T, positions_matrix.shape, mode='clip')
positions_matrix.put(flat_indices, 1+positions_matrix.take(flat_indices))
# positions_matrix[2,1,4] is now 1.0
The nice thing about this solution is that you can play with other modes, for instance 'wrap' (if your agents live on a donut ;-) or in a periodic space).
I'll explain how it works on a smaller 2D matrix:
import itertools
import numpy as np
positions_matrix = np.zeros((8,8))
ones = np.ones((3,3))
x, y = 0, 4
indices = np.array( list( itertools.product( (x-1, x, x+1), (y-1, y, y+1) )))
# array([[-1, 3],
# [-1, 4],
# [-1, 5],
# [ 0, 3],
# [ 0, 4],
# [ 0, 5],
# [ 1, 3],
# [ 1, 4],
# [ 1, 5]])
flat_indices = np.ravel_multi_index(indices.T, positions_matrix.shape, mode='clip')
# array([ 3, 4, 5, 3, 4, 5, 11, 12, 13])
positions_matrix.put(flat_indices, ones, mode='clip')
# positions_matrix is now:
# array([[0., 0., 0., 1., 1., 1., 0., 0.],
# [0., 0., 0., 1., 1., 1., 0., 0.],
# [0., 0., 0., 0., 0., 0., 0., 0.],
# [ ...
By the way, in this case mode='clip' was redundant for put.
Well, I just cheated: put does an assignment. The += 1 requires both take and put:
positions_matrix.put(flat_indices, ones.flat + positions_matrix.take(flat_indices))
# notice that ones has to be flattened; alternatively the result of take could be reshaped to (3, 3)
# positions_matrix is now:
# array([[0., 0., 0., 2., 2., 2., 0., 0.],
# [0., 0., 0., 2., 2., 2., 0., 0.],
# [0., 0., 0., 0., 0., 0., 0., 0.],
# [ ...
There is one important difference in this solution compared to the others: the ones matrix always keeps its full shape, (3, 3) here (or (3, 3, 3) in 3D), which may or may not be an advantage.
The trick is in this flat_indices list, which has repeating entries (a result of the clipping).
It may thus require some precautions if you add a non-constant sub-matrix at max indices:
x, y = 1, 7
values = 1 + np.arange(9)
indices = np.array( list( itertools.product( (x-1, x, x+1), (y-1, y, y+1) )))
flat_indices = np.ravel_multi_index(indices.T, positions_matrix.shape, mode='clip')
positions_matrix.put(flat_indices, values, mode='clip')
# positions_matrix is now:
# array([[0., 0., 0., 2., 2., 2., 1., 3.],
# [0., 0., 0., 2., 2., 2., 4., 6.],
# [0., 0., 0., 0., 0., 0., 7., 9.],
... you were probably expecting the last column to be 2 5 8.
Currently, you could work on flat_indices, for example by putting -1 in the out-of-bounds locations.
But it'd all be easier if np.put accepted non-flat indices, or if there was a clip mode='ignore'.
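As a quick illustration of the 'wrap' mode mentioned above, here is a sketch on an 8x8 grid; note that with wrapping the nine indices are all distinct, so the take/put increment is safe:
import itertools
import numpy as np

grid = np.zeros((8, 8))
x, y = 0, 7  # agent near a corner
indices = np.array(list(itertools.product((x - 1, x, x + 1), (y - 1, y, y + 1))))
# out-of-range indices wrap around instead of being clipped to the border
flat = np.ravel_multi_index(indices.T, grid.shape, mode='wrap')
grid.put(flat, 1 + grid.take(flat))
# the 3x3 stamp now covers rows 7, 0, 1 and columns 6, 7, 0 of the torus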
I am trying to understand the signature functionality in numpy.vectorize. I have some examples, but they did not help much with my understanding.
>>> import scipy.stats
>>> pearsonr = np.vectorize(scipy.stats.pearsonr, signature='(n),(n)->(),()')
>>> pearsonr([[0, 1, 2, 3]], [[1, 2, 3, 4], [4, 3, 2, 1]])
(array([ 1., -1.]), array([ 0., 0.]))
>>> convolve = np.vectorize(np.convolve, signature='(n),(m)->(k)')
>>> convolve(np.eye(4), [1, 2, 1])
array([[1., 2., 1., 0., 0., 0.],
[0., 1., 2., 1., 0., 0.],
[0., 0., 1., 2., 1., 0.],
[0., 0., 0., 1., 2., 1.]])
>>> import numpy as np
>>> qr = np.vectorize(np.linalg.qr, signature='(m,n)->(m,k),(k,n)')
>>> qr(np.random.normal(size=(1, 3, 2)))
(array([[-0.31622777, -0.9486833 ],
[-0.9486833 , 0.31622777]]),
array([[-3.16227766, -4.42718872, -5.69209979],
[ 0. , -0.63245553, -1.26491106]]))
>>> import scipy
>>> logm = np.vectorize(scipy.linalg.logm, signature='(m,m)->(m,m)')
>>> logm(np.random.normal(size=(1, 3, 2)))
array([[[ 1.08226288, -2.29544602],
[ 2.12599894, -1.26335203]]])
Can someone please explain the functionality and syntax of the signatures
signature='(n),(n)->(),()'
signature='(n),(m)->(k)'
signature='(m,n)->(m,k),(k,n)'
signature='(m,m)->(m,m)'
used in the aforementioned examples? If we didn't use the signatures, how would the examples have been implemented in a simpler, more naive way?
Any help is highly appreciated.
The aforementioned examples can be found here and here.
I think the explanation would be clearer if we knew the 'signature' of the individual functions - what they expect, and what they produce. But I can make some deductions from the code you show.
>>> pearsonr = np.vectorize(scipy.stats.pearsonr, signature='(n),(n)->(),()')
>>> pearsonr([[0, 1, 2, 3]], [[1, 2, 3, 4], [4, 3, 2, 1]])
(array([ 1., -1.]), array([ 0., 0.]))
This is called with (1,4) and (2,4) arrays (well, lists that become such arrays). They broadcast together to (2,4). The stats function is then called twice, once for each row of the pair, getting two (4,) arrays each time and returning 2 scalar values (the correlation coefficient and the p-value).
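To answer the "naive way" part for this case: a rough loop equivalent (a sketch, not the exact machinery np.vectorize uses) broadcasts the inputs and calls pearsonr row by row:
import numpy as np
import scipy.stats

a = np.broadcast_to([[0, 1, 2, 3]], (2, 4))
b = np.array([[1, 2, 3, 4], [4, 3, 2, 1]])
# one pearsonr call per row pair, then collect the two scalar outputs
r, p = zip(*(scipy.stats.pearsonr(ra, rb) for ra, rb in zip(a, b)))
print(np.array(r), np.array(p))  # [ 1. -1.] [0. 0.]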
>>> convolve = np.vectorize(np.convolve, signature='(n),(m)->(k)')
>>> convolve(np.eye(4), [1, 2, 1])
array([[1., 2., 1., 0., 0., 0.],
[0., 1., 2., 1., 0., 0.],
[0., 0., 1., 2., 1., 0.],
[0., 0., 0., 1., 2., 1.]])
This is called with (4,4) and (3,) arrays. convolve gets called 4 times, once for each row of the eye, getting the same [1, 2, 1] each time. The result is a 4-row array (with 6 columns, determined by convolve itself, not by vectorize).
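Again, a hedged naive equivalent: one np.convolve call per row of the identity matrix, stacked into the (4, 6) result.
import numpy as np

rows = [np.convolve(row, [1, 2, 1]) for row in np.eye(4)]
print(np.stack(rows))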
>>> import numpy as np
>>> qr = np.vectorize(np.linalg.qr, signature='(m,n)->(m,k),(k,n)')
>>> qr(np.random.normal(size=(1, 3, 2)))
(array([[-0.31622777, -0.9486833 ],
[-0.9486833 , 0.31622777]]),
array([[-3.16227766, -4.42718872, -5.69209979],
[ 0. , -0.63245553, -1.26491106]]))
Signature: np.linalg.qr(a, mode='reduced')
a : array_like, shape (M, N)
'reduced' : returns q, r with dimensions (M, K), (K, N) (default)
The vectorize signature just repeats the information in the docs.
a is a (1,3,2) shape array, so qr is called once (over the 1st dimension) with a (3,2) array. The result is 2 arrays with (m,k) = (3,2) and (k,n) = (2,2) shapes. When I run it I get an added size-1 dimension, giving (1,3,2) and (1,2,2). Different numbers because of random:
In [120]: qr = np.vectorize(np.linalg.qr, signature='(m,n)->(m,k),(k,n)')
...: qr(np.random.normal(size=(1, 3,2)))
Out[120]:
(array([[[-0.61362528, 0.09161174],
[ 0.63682861, -0.52978942],
[-0.46681188, -0.84316692]]]),
array([[[-0.65301725, -1.00494992],
[ 0. , 0.8068886 ]]]))
>>> import scipy
>>> logm = np.vectorize(scipy.linalg.logm, signature='(m,m)->(m,m)')
>>> logm(np.random.normal(size=(1, 3, 2)))
array([[[ 1.08226288, -2.29544602],
[ 2.12599894, -1.26335203]]])
scipy.linalg.logm expects a square array and returns one of the same shape.
Calling logm with a (1,3,2) array produces an error, because (3,2) is not square:
ValueError: inconsistent size for core dimension 'm': 2 vs 3
Calling scipy.linalg.logm directly produces the same error, worded differently:
linalg.logm(np.random.normal(size=(3, 2)))
ValueError: expected square array_like input
When I say the function is called twice, or something like that, I'm ignoring the test call that's used to determine the return dtype.
I am working with a thematic raster of land use classes. The goal is to split the raster into smaller tiles of a given size. For example, I have a raster 1490 pixels wide and I want to split it into tiles of 250x250 pixels. To get tiles of equal size, I would want to increase the width of the raster to 1500 pixels so it fits exactly 6 tiles, which means increasing the size of the raster by 10 pixels.
I am currently opening the raster with the rasterio library, which returns a NumPy ndarray. Is there a function to add a buffer around this array? The goal would be something like this:
import numpy as np
a = np.array([
[1,4,5],
[4,5,5],
[1,2,2]
])
a_with_buffer = a.buffer(a, 1) # 2nd argument refers to the buffer size
Then a_with_buffer would look as follows:
[[0,0,0,0,0],
 [0,1,4,5,0],
 [0,4,5,5,0],
 [0,1,2,2,0],
 [0,0,0,0,0]]
You can use np.pad:
>>> np.pad(a, 1)
array([[0, 0, 0, 0, 0],
[0, 1, 4, 5, 0],
[0, 4, 5, 5, 0],
[0, 1, 2, 2, 0],
[0, 0, 0, 0, 0]])
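For the original tiling use case, np.pad also accepts per-axis (before, after) widths, so you can grow only the bottom/right edges up to the next multiple of the tile size. A sketch with the 1490-pixel example (the zeros array stands in for the band read with rasterio):
import numpy as np

tile = 250
raster = np.zeros((1490, 1490))          # stand-in for the rasterio array
pad_rows = (-raster.shape[0]) % tile     # 10 pixels to reach 1500
pad_cols = (-raster.shape[1]) % tile
padded = np.pad(raster, ((0, pad_rows), (0, pad_cols)))  # pad bottom and right only
print(padded.shape)                      # (1500, 1500)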
You can create an array of zeros with np.zeros and then insert a at the index you want, like below.
Try this:
>>> a = np.array([[1,4,5],[4,5,5],[1,2,2]])
>>> b = np.zeros((5,5))
>>> b[1:1+a.shape[0],1:1+a.shape[1]] = a
>>> b
array([[0., 0., 0., 0., 0.],
[0., 1., 4., 5., 0.],
[0., 4., 5., 5., 0.],
[0., 1., 2., 2., 0.],
[0., 0., 0., 0., 0.]])
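The same idea generalizes to any buffer width; a small sketch with k = 2:
import numpy as np

a = np.array([[1, 4, 5], [4, 5, 5], [1, 2, 2]])
k = 2  # buffer width
b = np.zeros((a.shape[0] + 2 * k, a.shape[1] + 2 * k))
b[k:k + a.shape[0], k:k + a.shape[1]] = a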
So maybe this is a basic question about numpy, but I can't see how to do it. Let's say I have a 2D numpy array like this:
import numpy as np
arr = np.array([[ 0., 460., 166., 167., 123.],
[ 0., 0., 0., 0., 0.],
[ 0., 81., 0., 21., 0.],
[ 0., 128., 23., 0., 12.],
[ 0., 36., 0., 13., 0.]])
And I want the coordinates from the subarray
[[ 0., 21.,  0.],
 [23.,  0., 12.],
 [ 0., 13.,  0.]]
I tried slicing my original array and then finding the coordinates using np.argwhere, like this:
newarr = np.argwhere(arr[2:, 2:] != 0)
#output
#[[0 1]
# [1 0]
# [1 2]
# [2 1]]
These are indeed the coordinates relative to the subarray, but I was expecting the coordinates corresponding to my original array. The desired output is:
[[2 3]
[3 2]
[3 4]
[4 3]]
If I use np.argwhere on my original array I get a bunch of coordinates that I don't need, so I can't figure out how to get what I want. Any help, or a pointer in the right direction, would be great. Thank you!
Assume the origin is at the top-left corner of the matrix, with the matrix itself placed in the 4th quadrant of Cartesian space: the horizontal axis holds the column indices and the vertical axis, going down, holds the row indices.
You will see that the whole sub-matrix is shifted by (2, 2) relative to that origin. The coordinates you get are relative to the sub-matrix's own origin, so to map them back to the original array, just add (2, 2) to every element:
>>> np.argwhere(arr[2:, 2:] != 0) + [2, 2]
array([[2, 3],
[3, 2],
[3, 4],
[4, 3]])
For other examples:
>>> col_shift, row_shift = 3, 2
>>> arr[row_shift:, col_shift:]
array([[21., 0.],
[ 0., 12.],
[13., 0.]])
>>> np.argwhere(arr[row_shift:, col_shift:] != 0) + [row_shift, col_shift]
array([[2, 3],
[3, 4],
[4, 3]])
For a sub-matrix fully inside the array, you can also bound the rows and columns:
>>> col_shift, row_shift = 0, 1
>>> col_bound, row_bound = 4, 4
>>> arr[row_shift:row_bound, col_shift:col_bound]
array([[ 0., 0., 0., 0.],
[ 0., 81., 0., 21.],
[ 0., 128., 23., 0.]])
>>> np.argwhere(arr[row_shift:row_bound, col_shift:col_bound] != 0) + [row_shift, col_shift]
array([[2, 1],
[2, 3],
[3, 1],
[3, 2]])
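If this comes up often, the idea can be packaged in a small helper (argwhere_window is a hypothetical name): run argwhere inside the window and shift the result back to full-array coordinates.
import numpy as np

def argwhere_window(arr, row_slice, col_slice):
    # shift the window-relative coordinates back by the slice starts
    offsets = [row_slice.start or 0, col_slice.start or 0]
    return np.argwhere(arr[row_slice, col_slice] != 0) + offsets

arr = np.array([[0., 460., 166., 167., 123.],
                [0., 0., 0., 0., 0.],
                [0., 81., 0., 21., 0.],
                [0., 128., 23., 0., 12.],
                [0., 36., 0., 13., 0.]])
argwhere_window(arr, slice(2, None), slice(2, None))
# array([[2, 3],
#        [3, 2],
#        [3, 4],
#        [4, 3]])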
You have moved two rows down and two columns to the right in the array. All that remains is to add those offsets back to the coordinates:
row_offset = 2
col_offset = 2
newarr = np.argwhere(arr[row_offset:, col_offset:] != 0)
rows = (newarr[:, 0] + row_offset).reshape(-1, 1)
cols = (newarr[:, 1] + col_offset).reshape(-1, 1)
print(np.concatenate((rows, cols), axis=1))
I'm trying to pad a numpy array, and I cannot seem to find the right approach in the numpy documentation. I have an array:
a = array([2, 1, 3, 5, 7])
This represents the indices of an array I wish to create. So at index 2, 1, 3, etc. I would like to have a one in the target array, and everywhere else it should be padded with zeros, sort of like an array mask. I would also like to specify the overall length of the target array, l. So my ideal function would look something like:
>>> foo(a, l)
array([0, 1, 1, 1, 0, 1, 0, 1, 0, 0])
where l=10 for the above example.
EDIT:
So I wrote this function:
def padwithones(a, l):
    p = np.zeros(l)
    for i in a:
        p = np.insert(p, i, 1)
    return p
Which gives:
Out[19]:
array([ 0., 1., 0., 1., 1., 1., 0., 1., 0., 0., 0., 0., 0.,
0., 0.])
Which isn't correct!
What you're looking for is basically a one-hot array:
def onehot(foo, l):
    a = np.zeros(l, dtype=np.int32)
    a[foo] = 1
    return a
Example:
In [126]: onehot([2, 1, 3, 5, 7], 10)
Out[126]: array([0, 1, 1, 1, 0, 1, 0, 1, 0, 0])
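If the index list can contain repeats and you want counts rather than a 0/1 mask, np.bincount is an alternative worth knowing (minlength pads the result out to l):
import numpy as np

np.bincount([2, 1, 3, 5, 7, 7], minlength=10)
# array([0, 1, 1, 1, 0, 1, 0, 2, 0, 0])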
I have a two-dimensional numpy array called meta with 3 columns. What I want to do is:
check if the first two columns are ZERO
check if the third column is smaller than X
Return only those rows that match the condition
I made it work, but the solution seems very contrived:
meta[ np.logical_and( np.all( meta[:,0:2] == [0,0],axis=1 ) , meta[:,2] < 20) ]
Can you think of a cleaner way? It seems hard to have multiple conditions at once ;(
Thanks.
Sorry, the first time I copied the wrong expression... corrected.
You can use multiple filters in a slice, something like this:
x = np.arange(90.).reshape(30, 3)
# set the first two columns of the first 10 rows to zero
x[0:10, 0:2] = 0.0
x[(x[:,0] == 0.) & (x[:,1] == 0.) & (x[:,2] > 10)]
#should give only a few rows
array([[ 0., 0., 11.],
[ 0., 0., 14.],
[ 0., 0., 17.],
[ 0., 0., 20.],
[ 0., 0., 23.],
[ 0., 0., 26.],
[ 0., 0., 29.]])
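If the number of columns that must be zero grows, the same filter can be written with np.all over those columns; a sketch on the same data:
import numpy as np

x = np.arange(90.).reshape(30, 3)
x[0:10, 0:2] = 0.0
# np.all collapses the "first two columns are zero" test into a single mask
x[np.all(x[:, :2] == 0, axis=1) & (x[:, 2] > 10)]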
How about this: multiply the boolean mask from np.all into the threshold, so the comparison only passes where the first two columns are zero (this relies on the third column being non-negative) -
meta[meta[:,2] < X * np.all(meta[:,0:2]==0, 1), :]
Sample run -
In [89]: meta
Out[89]:
array([[ 1, 2, 3, 4],
[ 0, 0, 2, 0],
[ 9, 0, 11, 12]])
In [90]: X
Out[90]: 4
In [91]: meta[meta[:,2]<X * np.all(meta[:,0:2]==0,1),:]
Out[91]: array([[0, 0, 2, 0]])