Map arrays with duplicate indexes? - python

Assume three arrays in numpy:
a = np.zeros(5)
b = np.array([3,3,3,0,0])
c = np.array([1,5,10,50,100])
b can now be used as an index for a and c. For example:
In [142]: c[b]
Out[142]: array([50, 50, 50, 1, 1])
Is there any way to add up the values connected to the duplicate indexes with this kind of slicing? With
a[b] = c
only the last values are stored:
array([ 100., 0., 0., 10., 0.])
I would like something like this:
a[b] += c
which would give
array([ 150., 0., 0., 16., 0.])
I'm mapping very large vectors onto 2D matrices and would really like to avoid loops...

The += operator for NumPy arrays simply doesn't work the way you are hoping: the fancy-indexed assignment is buffered, so each duplicate index is written only once, and I'm not aware of a way of making it work the way you want. As a work-around I suggest using numpy.bincount():
>>> numpy.bincount(b, c)
array([ 150., 0., 0., 16.])
Just append zeros as needed.
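For completeness, a short sketch of both routes, assuming a NumPy recent enough to have bincount's minlength argument and np.add.at (the variable names mirror the question):
import numpy as np

a = np.zeros(5)
b = np.array([3, 3, 3, 0, 0])
c = np.array([1, 5, 10, 50, 100])

# minlength pads the sums with zeros out to the full length of a
a[:] = np.bincount(b, c, minlength=len(a))

# np.add.at applies += unbuffered, so duplicate indices accumulate
a2 = np.zeros(5)
np.add.at(a2, b, c)  # a2 is now [150., 0., 0., 16., 0.]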

You could do something like:
def sum_unique(label, weight):
    # sort the rows lexicographically so duplicate labels become adjacent
    order = np.lexsort(label.T)
    label = label[order]
    weight = weight[order]
    # mark the last occurrence of each distinct label
    unique = np.ones(len(label), 'bool')
    unique[:-1] = (label[1:] != label[:-1]).any(-1)
    # cumulative sum sampled at group ends; differencing gives per-group totals
    totals = weight.cumsum()
    totals = totals[unique]
    totals[1:] = totals[1:] - totals[:-1]
    return label[unique], totals
And use it like this:
In [110]: coord = np.random.randint(0, 3, (10, 2))
In [111]: coord
Out[111]:
array([[0, 2],
       [0, 2],
       [2, 1],
       [1, 2],
       [1, 0],
       [0, 2],
       [0, 0],
       [2, 1],
       [1, 2],
       [1, 2]])
In [112]: weights = np.ones(10)
In [113]: uniq_coord, sums = sum_unique(coord, weights)
In [114]: uniq_coord
Out[114]:
array([[0, 0],
       [1, 0],
       [2, 1],
       [0, 2],
       [1, 2]])
In [115]: sums
Out[115]: array([ 1.,  1.,  2.,  3.,  3.])
In [116]: a = np.zeros((3,3))
In [117]: x, y = uniq_coord.T
In [118]: a[x, y] = sums
In [119]: a
Out[119]:
array([[ 1.,  0.,  3.],
       [ 1.,  0.,  3.],
       [ 0.,  2.,  0.]])
I just thought of this, it might be easier:
In [120]: flat_coord = np.ravel_multi_index(coord.T, (3,3))
In [121]: sums = np.bincount(flat_coord, weights)
In [122]: a = np.zeros((3,3))
In [123]: a.flat[:len(sums)] = sums
In [124]: a
Out[124]:
array([[ 1.,  0.,  3.],
       [ 1.,  0.,  3.],
       [ 0.,  2.,  0.]])
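If bincount's minlength argument is available (an assumption about the NumPy version in use), the zero array and the flat assignment can be folded into a single call:
flat_coord = np.ravel_multi_index(coord.T, (3, 3))
# minlength pads the sums out to all 9 cells, so the result can be
# reshaped directly into the (3, 3) target
a = np.bincount(flat_coord, weights, minlength=3 * 3).reshape(3, 3)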

Related

Getting coordinates from a numpy array

So maybe this is a basic question about numpy, but I can't see how to do it. Let's say I have a 2D numpy array like this:
import numpy as np
arr = np.array([[  0., 460., 166., 167., 123.],
                [  0.,   0.,   0.,   0.,   0.],
                [  0.,  81.,   0.,  21.,   0.],
                [  0., 128.,  23.,   0.,  12.],
                [  0.,  36.,   0.,  13.,   0.]])
And I want the coordinates from the subarray
[[ 0., 21.,  0.],
 [23.,  0., 12.],
 [ 0., 13.,  0.]]
I tried slicing my original array and then finding the coordinates using np.argwhere, like this:
newarr = np.argwhere(arr[2:, 2:] != 0)
# output:
# [[0 1]
#  [1 0]
#  [1 2]
#  [2 1]]
Which are indeed the coordinates of the subarray, but I was expecting the coordinates relative to my original array. The desired output is:
[[2 3]
 [3 2]
 [3 4]
 [4 3]]
If I use np.argwhere on my original array I get a bunch of coordinates that I don't need, and I can't figure out how to get only the ones I want. Any help, or a pointer in the right direction, would be great. Thank you!
Assume the origin is at the top-left corner of the matrix, with the matrix itself placed in the fourth quadrant of Cartesian space: the horizontal axis carries the column indices, and the vertical axis, pointing down, carries the row indices.
The whole sub-matrix is then just the origin shifted to coordinate (2, 2). Since the coordinates you get are relative to the sub-matrix's origin, shifting them back means simply adding (2, 2) to every element:
>>> np.argwhere(arr[2:, 2:] != 0) + [2, 2]
array([[2, 3],
       [3, 2],
       [3, 4],
       [4, 3]])
For other examples:
>>> col_shift, row_shift = 3, 2
>>> arr[row_shift:, col_shift:]
array([[21.,  0.],
       [ 0., 12.],
       [13.,  0.]])
>>> np.argwhere(arr[row_shift:, col_shift:] != 0) + [row_shift, col_shift]
array([[2, 3],
       [3, 4],
       [4, 3]])
For a sub-matrix fully inside the array, you can bound the columns and rows as well:
>>> col_shift, row_shift = 0, 1
>>> col_bound, row_bound = 4, 4
>>> arr[row_shift:row_bound, col_shift:col_bound]
array([[  0.,   0.,   0.,   0.],
       [  0.,  81.,   0.,  21.],
       [  0., 128.,  23.,   0.]])
>>> np.argwhere(arr[row_shift:row_bound, col_shift:col_bound] != 0) + [row_shift, col_shift]
array([[2, 1],
       [2, 3],
       [3, 1],
       [3, 2]])
You have moved two steps down and two steps to the right in the array. All that remains is to add the number of steps taken along X and along Y back to the coordinates:
y = 2  # rows skipped
x = 2  # columns skipped
newarr = np.argwhere(arr[y:, x:] != 0)
# argwhere returns (row, col) pairs; adding x to the rows and y to the
# columns only works as written here because x == y
X = (newarr[0:, 0] + x).reshape(4, 1)
Y = (newarr[0:, 1] + y).reshape(4, 1)
print(np.concatenate((X, Y), axis=1))
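Both answers boil down to the same shift recipe, so if this comes up often it may be worth a small helper. A sketch; the function name argwhere_shifted and its interface are hypothetical, not from either answer:
import numpy as np

def argwhere_shifted(arr, rows, cols):
    # hypothetical helper: nonzero coordinates of arr[rows, cols],
    # expressed in the coordinate system of the parent array arr;
    # rows and cols are slice objects, e.g. slice(2, None)
    sub = arr[rows, cols]
    offset = [rows.start or 0, cols.start or 0]
    return np.argwhere(sub != 0) + offset

# same result as np.argwhere(arr[2:, 2:] != 0) + [2, 2]
coords = argwhere_shifted(arr, slice(2, None), slice(2, None))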

numpy arange for list (vectorized calculation)

I want to create a 2d matrix b from an array a, where a contains range_stop values for each matrix column.
For example, with a = [2,3], I want to obtain
b = [[0,   0],
     [1,   1],
     [2,   2],
     [NaN, 3]]
What's the most efficient way (for vectorized calculation) to do it? My current code is:
a = [2,3]
b = np.zeros((max(a)+1, len(a)))
b.fill(np.nan)
for i, ai in enumerate(a):
    b[:ai, i] = np.arange(ai)
You can first create the 2D arange using repeat:
a = np.asarray([2, 3])
b = np.repeat(np.arange(np.max(a) + 1, dtype=float)[:, None], len(a), axis=1)
# array([[0., 0.],
#        [1., 1.],
#        [2., 2.],
#        [3., 3.]])
and then compare each column with a (broadcast across the columns) to fill in np.nan wherever a value exceeds its column's range stop:
b[b > a] = np.nan
# array([[ 0.,  0.],
#        [ 1.,  1.],
#        [ 2.,  2.],
#        [nan,  3.]])
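The same result can also be sketched as a single broadcast expression with np.where, avoiding the in-place masking (equivalent to the repeat-then-mask version above):
a = np.asarray([2, 3])
col = np.arange(a.max() + 1, dtype=float)[:, None]  # shape (4, 1)
# col <= a broadcasts to (4, 2); entries past each range_stop become NaN
b = np.where(col <= a, col, np.nan)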

Create a new sparse matrix from the operations of different rows of a given large sparse matrix in python

Let's say S is a large scipy CSR (sparse) matrix, and D is a dictionary whose key is the index (position) of a row vector A in S, and whose value is a list of the indices (positions) of other row vectors l in S. For each index in l you subtract that row from A; each resulting vector becomes a row of the new sparse matrix.
For a dictionary of the form -> { 1 : [4 , 5 ,... ,63] }, we then have to create a new sparse matrix with:
new_row_vector_1 -> S_vec1 - S_vec4
new_row_vector_2 -> S_vec1 - S_vec5
...
new_row_vector_n -> S_vec1 - S_vec63
where S_vecX is the Xth row vector of matrix S.
Numpy Example:
>>> import numpy as np
>>> s = np.array([[1,5,3,4],[3,0,12,7],[5,6,2,4],[4,6,6,4],[7,12,5,67]])
>>> s
array([[ 1,  5,  3,  4],
       [ 3,  0, 12,  7],
       [ 5,  6,  2,  4],
       [ 4,  6,  6,  4],
       [ 7, 12,  5, 67]])
>>> index_dictionary = {0: [2, 4], 1: [3, 4], 2: [1], 3: [1, 2], 4: [1, 3, 2]}
>>> n = np.zeros((10,4))  # rows = sum of the lengths of all value lists in index_dictionary; columns same as s
>>> n
array([[ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.]])
>>> idx = 0
>>> for index in index_dictionary:
...     for k in index_dictionary[index]:
...         n[idx] = s[index] - s[k]
...         idx += 1
...
>>> n
array([[ -4.,  -1.,   1.,   0.],
       [ -6.,  -7.,  -2., -63.],
       [ -1.,  -6.,   6.,   3.],
       [ -4., -12.,   7., -60.],
       [  2.,   6., -10.,  -3.],
       [  1.,   6.,  -6.,  -3.],
       [ -1.,   0.,   4.,   0.],
       [  4.,  12.,  -7.,  60.],
       [  3.,   6.,  -1.,  63.],
       [  2.,   6.,   3.,  63.]])
n is what I want.
Here's a simple demonstration of what I think you are trying to do:
First the numpy array version:
In [619]: arr=np.arange(12).reshape(4,3)
In [620]: arr[[1,0,2,3]]-arr[0]
Out[620]:
array([[3, 3, 3],
       [0, 0, 0],
       [6, 6, 6],
       [9, 9, 9]])
Now the sparse equivalent:
In [622]: M=sparse.csr_matrix(arr)
csr implements row indexing:
In [624]: M[[1,0,2,3]]
Out[624]:
<4x3 sparse matrix of type '<class 'numpy.int32'>'
with 11 stored elements in Compressed Sparse Row format>
In [625]: M[[1,0,2,3]].A
Out[625]:
array([[ 3,  4,  5],
       [ 0,  1,  2],
       [ 6,  7,  8],
       [ 9, 10, 11]], dtype=int32)
But not broadcasting:
In [626]: M[[1,0,2,3]]-M[0]
....
ValueError: inconsistent shapes
So we can use an explicit form of broadcasting
In [627]: M[[1,0,2,3]]-M[[0,0,0,0]] # or M[[0]*4]
Out[627]:
<4x3 sparse matrix of type '<class 'numpy.int32'>'
with 9 stored elements in Compressed Sparse Row format>
In [628]: _.A
Out[628]:
array([[3, 3, 3],
       [0, 0, 0],
       [6, 6, 6],
       [9, 9, 9]], dtype=int32)
This may not be the fastest or most efficient, but it's a start.
I found in a previous SO question ("Sparse matrix slicing using list of int") that M[[1,0,2,3]] indexing is performed with a matrix multiplication, in this case the equivalent of:
idxM = sparse.csr_matrix(([1,1,1,1], ([0,1,2,3], [1,0,2,3])), (4,4))
M1 = idxM * M
So my difference expression requires two such multiplications along with the subtraction.
We could try a row-by-row iteration and build a new matrix from the result, but there's no guarantee that it will be faster. Depending on the arrays, converting to dense and back might even be faster.
=================
I can imagine two ways of applying this to the dictionary.
One is to iterate through the dictionary (in what order?), perform this difference for each key, collect the results in a list (a list of sparse matrices), and use sparse.bmat to join them into one matrix.
Another is to collect two lists of indexes and apply the above indexed difference just once.
In [8]: index_dictionary = {0: [2, 4], 1: [3, 4], 2: [1], 3: [1, 2], 4: [1, 3, 2]}
In [10]: alist = []
    ...: for index in index_dictionary:
    ...:     for k in index_dictionary[index]:
    ...:         alist.append((index, k))
In [11]: idx = np.array(alist)
In [12]: idx
Out[12]:
array([[0, 2],
       [0, 4],
       [1, 3],
       [1, 4],
       [2, 1],
       [3, 1],
       [3, 2],
       [4, 1],
       [4, 3],
       [4, 2]])
applied to your dense s:
In [15]: s = np.array([[1,5,3,4],[3,0,12,7],[5,6,2,4],[4,6,6,4],[7,12,5,67]])
In [16]: s[idx[:,0]]-s[idx[:,1]]
Out[16]:
array([[ -4,  -1,   1,   0],
       [ -6,  -7,  -2, -63],
       [ -1,  -6,   6,   3],
       [ -4, -12,   7, -60],
       [  2,   6, -10,  -3],
       [  1,   6,  -6,  -3],
       [ -1,   0,   4,   0],
       [  4,  12,  -7,  60],
       [  3,   6,  -1,  63],
       [  2,   6,   3,  63]])
and to the sparse equivalent
In [19]: arr= sparse.csr_matrix(s)
In [20]: arr
Out[20]:
<5x4 sparse matrix of type '<class 'numpy.int32'>'
with 19 stored elements in Compressed Sparse Row format>
In [21]: res=arr[idx[:,0]]-arr[idx[:,1]]
In [22]: res
Out[22]:
<10x4 sparse matrix of type '<class 'numpy.int32'>'
with 37 stored elements in Compressed Sparse Row format>
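The index pairs themselves can also be collected without the Python double loop, at least when every value list in the dictionary is non-empty (an assumption of this sketch):
keys = list(index_dictionary)
counts = [len(index_dictionary[k]) for k in keys]
left = np.repeat(keys, counts)                               # the A rows
right = np.concatenate([index_dictionary[k] for k in keys])  # the l rows
res = arr[left] - arr[right]  # same 10x4 sparse result as above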

Specifying dtype=object for numpy.gradient

Is there a way to specify the dtype for numpy.gradient?
I'm using an array of subarrays and it's throwing the following error:
ValueError: setting an array element with a sequence.
Here is an example:
import numpy as np
a = np.empty([3, 3], dtype=object)
it = np.nditer(a, flags=['multi_index', 'refs_ok'])
while not it.finished:
    i = it.multi_index[0]
    j = it.multi_index[1]
    a[it.multi_index] = np.array([i, j])
    it.iternext()
print(a)
which outputs
[[array([0, 0]) array([0, 1]) array([0, 2])]
 [array([1, 0]) array([1, 1]) array([1, 2])]
 [array([2, 0]) array([2, 1]) array([2, 2])]]
I would like print(np.gradient(a)) to return
array(
  [[array([[1, 0], [0, 1]]), array([[1, 0], [0, 1]]), array([[1, 0], [0, 1]])],
   [array([[1, 0], [0, 1]]), array([[1, 0], [0, 1]]), array([[1, 0], [0, 1]])],
   [array([[1, 0], [0, 1]]), array([[1, 0], [0, 1]]), array([[1, 0], [0, 1]])]],
  dtype=object)
Notice that, in this case, the gradient of the vector field is an identity tensor field.
Why are you working with an array of dtype object? That's more work than using a 2D array.
e.g.
In [53]: a1=np.array([[1,2],[3,4],[5,6]])
In [54]: a1
Out[54]:
array([[1, 2],
       [3, 4],
       [5, 6]])
In [55]: np.gradient(a1)
Out[55]:
[array([[ 2.,  2.],
        [ 2.,  2.],
        [ 2.,  2.]]),
 array([[ 1.,  1.],
        [ 1.,  1.],
        [ 1.,  1.]])]
or working column by column, or row by row
In [61]: [np.gradient(i) for i in a1.T]
Out[61]: [array([ 2., 2., 2.]), array([ 2., 2., 2.])]
In [62]: [np.gradient(i) for i in a1]
Out[62]: [array([ 1., 1.]), array([ 1., 1.]), array([ 1., 1.])]
dtype=object only makes sense if the subarrays/lists differ in type and/or shape. And even then it doesn't add much over a regular Python list.
==============================
I can take your 2d a, and make a 3d array with:
In [126]: a1=np.zeros((3,3,2),int)
In [127]: a1.flat[:]=[i for i in a.flatten()]
In [128]: a1
Out[128]:
array([[[0, 0],
        [0, 1],
        [0, 2]],

       [[1, 0],
        [1, 1],
        [1, 2]],

       [[2, 0],
        [2, 1],
        [2, 2]]])
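As an aside, that copy can, I believe, also be written with np.array over tolist(), since tolist() on the object array yields nested lists whose leaves are the length-2 vectors (a sketch, not part of the original session):
# a.tolist() gives a 3x3 nested list of length-2 arrays, which
# np.array assembles into the same (3, 3, 2) numeric array as a1
a1 = np.array(a.tolist())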
Or I could produce the same thing with meshgrid:
In [129]: X,Y=np.meshgrid(np.arange(3),np.arange(3),indexing='ij')
In [130]: a2=np.array([Y,X]).T
When I apply np.gradient to that I get 3 arrays, each (3,3,2) in shape.
In [136]: ga1=np.gradient(a1)
In [137]: len(ga1)
Out[137]: 3
In [138]: ga1[0].shape
Out[138]: (3, 3, 2)
It looks like the first two arrays have the values you want, so it's just a matter of rearranging them.
In [141]: np.array(ga1[:2]).shape
Out[141]: (2, 3, 3, 2)
In [143]: gga1=np.array(ga1[:2]).transpose([1,2,0,3])
In [144]: gga1.shape
Out[144]: (3, 3, 2, 2)
In [145]: gga1[0,0]
Out[145]:
array([[ 1., -0.],
       [-0.,  1.]])
If they must go back into a (3,3) object array, I could do:
In [146]: goa1=np.empty([3,3],dtype=object)
In [147]: for i in range(3):
   .....:     for j in range(3):
   .....:         goa1[i,j] = gga1[i,j]
   .....:
In [148]: goa1
Out[148]:
array([[array([[ 1., -0.],
               [-0.,  1.]]),
        array([[ 1., -0.],
               [ 0.,  1.]]),
        array([[ 1., -0.],
        ...
               [ 0.,  1.]]),
        array([[ 1.,  0.],
               [ 0.,  1.]])]], dtype=object)
I still wonder what the point is of working with an object array.

Convolving a periodic image with python

I want to convolve an n-dimensional image which is conceptually periodic.
What I mean is the following: if I have a 2D image
>>> image2d = [[0,0,0,0],
...            [0,0,0,1],
...            [0,0,0,0]]
and I want to convolve it with this kernel:
>>> kernel = [[1,1,1],
...           [1,1,1],
...           [1,1,1]]
then I want the result to be:
>>> result = [[1,0,1,1],
...           [1,0,1,1],
...           [1,0,1,1]]
How to do this in python/numpy/scipy?
Note that I am not interested in creating the kernel, but mainly in the periodicity of the convolution, i.e. the three leftmost ones in the resulting image (if that makes sense).
This is already built in: scipy.signal.convolve2d takes an optional boundary='wrap' argument, which gives periodic boundary conditions as padding for the convolution. The mode option is 'same' here to make the output size match the input size.
In [1]: image2d = [[0,0,0,0],
   ...:            [0,0,0,1],
   ...:            [0,0,0,0]]
In [2]: kernel = [[1,1,1],
   ...:           [1,1,1],
   ...:           [1,1,1]]
In [3]: from scipy.signal import convolve2d
In [4]: convolve2d(image2d, kernel, mode='same', boundary='wrap')
Out[4]:
array([[1, 0, 1, 1],
       [1, 0, 1, 1],
       [1, 0, 1, 1]])
The only disadvantage here is that you cannot use scipy.signal.fftconvolve, which is often faster.
image2d = [[0,0,0,0,0],
           [0,0,0,1,0],
           [0,0,0,0,0],
           [0,0,0,0,0]]
kernel = [[1,1,1],
          [1,1,1],
          [1,1,1]]
image2d = np.asarray(image2d)
kernel = np.asarray(kernel)
img_f = np.fft.fft2(image2d)
krn_f = np.fft.fft2(kernel, s=image2d.shape)
conv = np.fft.ifft2(img_f * krn_f).real
>>> conv.round()
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 1.,  0.,  0.,  1.,  1.],
       [ 1.,  0.,  0.,  1.,  1.],
       [ 1.,  0.,  0.,  1.,  1.]])
Note that the kernel is placed with its top left corner at the position of the 1 in the image. You would need to roll the result to get what you are after:
k_rows, k_cols = kernel.shape
conv2 = np.roll(np.roll(conv, -(k_cols // 2), axis=-1),
                -(k_rows // 2), axis=-2)
>>> conv2.round()
array([[ 0.,  0.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.],
       [ 0.,  0.,  0.,  0.,  0.]])
This kind of 'periodic convolution' is better known as circular or cyclic convolution; see http://en.wikipedia.org/wiki/Circular_convolution.
For an n-dimensional image, as in this question, you can use the scipy.ndimage.convolve function. It has a mode parameter, which can be set to 'wrap' for a circular convolution.
result = scipy.ndimage.convolve(image, kernel, mode='wrap')
>>> import numpy as np
>>> image = np.array([[0, 0, 0, 0],
...                   [0, 0, 0, 1],
...                   [0, 0, 0, 0]])
>>> kernel = np.array([[1, 1, 1],
...                    [1, 1, 1],
...                    [1, 1, 1]])
>>> from scipy.ndimage import convolve
>>> convolve(image, kernel, mode='wrap')
array([[1, 0, 1, 1],
       [1, 0, 1, 1],
       [1, 0, 1, 1]])
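Since the question asks about n-dimensional images, note that scipy.ndimage.convolve is not limited to 2D. A quick sketch with a made-up 3D image (the array contents here are illustrative, not from the question):
import numpy as np
from scipy.ndimage import convolve

image3d = np.zeros((3, 4, 4))
image3d[1, 1, 3] = 1           # a single spike next to a periodic edge
kernel3d = np.ones((3, 3, 3))

# mode='wrap' applies circular boundary conditions in every dimension
result3d = convolve(image3d, kernel3d, mode='wrap')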