Mirror a numpy ndarray - python

I have a numpy ndarray of shape (N, N, M) and want to mirror it over the main diagonal of the first two axes efficiently.
For the 2D case I did the following:
A = np.array([[1, 0, 6, 5], [0, 2, 0, 0], [1, 0, 2, 0], [0, 1, 0, 3]])
A = np.tril(A) + np.triu(A.T, 1)
'''
From:
array([[1, 0, 6, 5],
       [0, 2, 0, 0],
       [1, 0, 2, 0],
       [0, 1, 0, 3]])
To:
array([[1, 0, 1, 0],
       [0, 2, 0, 1],
       [1, 0, 2, 0],
       [0, 1, 0, 3]])
'''
However, this (np.tril and np.triu) doesn't work for higher dimensions, e.g.
A = np.array([[[1], [0], [6], [5]], [[0], [2], [0], [0]], [[1], [0], [2], [0]], [[0], [1], [0], [3]]])  # (4, 4, 1)
A = np.array([[[1, 2], [0, 3], [6, 5], [5, 6]], [[0, 3], [2, 2], [0, 1], [0, 3]], [[1, 5], [0, 2], [2, 1], [0, 9]], [[0, 1], [1, 2], [0, 2], [3, 4]]])  # (4, 4, 2)
Any ideas how to do this efficiently (without for loops)? I don't mind whether you mirror the bottom or the top triangle of the matrix.

This is a simple way to do that:
import numpy as np
# Example data, shape (4, 4, 2)
a = np.array([[[1, 2], [0, 3], [6, 5], [5, 6]],
              [[0, 3], [2, 2], [0, 1], [0, 3]],
              [[1, 5], [0, 2], [2, 1], [0, 9]],
              [[0, 1], [1, 2], [0, 2], [3, 4]]])
# Lower triangle of ones, shape (4, 4, 1)
tril = np.tril(np.ones(a.shape[:-1], a.dtype))[..., np.newaxis]
# Eye matrix with extra dimension, shape (4, 4, 1)
eye = np.eye(a.shape[0], dtype=a.dtype)[..., np.newaxis]
# Lower triangle
atril = a * tril
# Add upper triangle and remove diagonal that was added twice
result = atril + atril.swapaxes(0, 1) - a * eye
# Check result
print(result[..., 0])
# [[1 0 1 0]
#  [0 2 0 1]
#  [1 0 2 0]
#  [0 1 0 3]]
print(result[..., 1])
# [[2 3 5 1]
#  [3 2 2 2]
#  [5 2 1 2]
#  [1 2 2 4]]
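An equivalent way (a small sketch, not part of the answer above) is to build a boolean lower-triangle mask once and let np.where choose between the array and its transpose. The mask broadcasts over the trailing M axis, so no multiplication or diagonal correction is needed; it reuses the array a defined above:
# Boolean mask, True on and below the diagonal, shape (4, 4, 1)
mask = np.tril(np.ones(a.shape[:2], dtype=bool))[..., np.newaxis]
# Keep the lower triangle and mirror it into the upper triangle
mirrored = np.where(mask, a, a.swapaxes(0, 1))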

Related

Downsampling 3D array with numpy

Given array:
A = array([[[1, 2, 3, 1],
            [4, 5, 6, 2],
            [7, 8, 9, 3]]])
I obtain the following array in the forward pass with a downsampling factor of k-1:
k = 3
B = A[..., ::k]
# output
array([[[1, 1],
        [4, 2],
        [7, 3]]])
In the backward pass I want to be able to come back to my original shape, with an output of:
array([[[1, 0, 0, 1],
        [4, 0, 0, 2],
        [7, 0, 0, 3]]])
You can use numpy.zeros to initialize the output, then fill it with strided indexing:
shape = list(B.shape)
# The last kept index is k*(m-1), so the restored length is k*(m-1)+1
shape[-1] = k*(shape[-1]-1)+1
# [1, 3, 4]
A2 = np.zeros(shape, dtype=B.dtype)
A2[..., ::k] = B
print(A2)
output:
array([[[1, 0, 0, 1],
        [4, 0, 0, 2],
        [7, 0, 0, 3]]])
using A:
A2 = np.zeros_like(A)
A2[..., ::k] = B
# or directly
# A2[..., ::k] = A[..., ::k]
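For reuse, the same idea can be wrapped in a small helper (a sketch assuming only B and k are known, not A; the function name is hypothetical):
import numpy as np

def upsample_last_axis(B, k):
    # Scatter B onto a zero array along the last axis with step k
    shape = list(B.shape)
    shape[-1] = k * (shape[-1] - 1) + 1
    out = np.zeros(shape, dtype=B.dtype)
    out[..., ::k] = B
    return out

B = np.array([[[1, 1], [4, 2], [7, 3]]])
print(upsample_last_axis(B, 3))
# [[[1 0 0 1]
#   [4 0 0 2]
#   [7 0 0 3]]]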

Repeat Numpy array by a sliding window

From the following array of shape (6, 3):
>>> arr
[
    [1, 0, 1],
    [0, 0, 2],
    [1, 2, 0],
    [0, 1, 3],
    [2, 2, 1],
    [2, 0, 2]
]
I'd like to repeat the values according to a sliding window of n=4, giving a new array of shape (6-n+1, n, 3):
>>> new_arr
[
    [
        [1, 0, 1],
        [0, 0, 2],
        [1, 2, 0],
        [0, 1, 3]
    ],
    [
        [0, 0, 2],
        [1, 2, 0],
        [0, 1, 3],
        [2, 2, 1]
    ],
    [
        [1, 2, 0],
        [0, 1, 3],
        [2, 2, 1],
        [2, 0, 2]
    ]
]
It is relatively straightforward using a loop, but it gets extremely slow with several million values (instead of 6 in this example) in the initial array.
Is there a faster way to get to new_arr using Numpy primitives?
You can use sliding_window_view from NumPy (available since NumPy 1.20.0):
from numpy.lib.stride_tricks import sliding_window_view
new_arr = sliding_window_view(arr, (n, arr.shape[1])).squeeze()
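If sliding_window_view is not available (NumPy < 1.20), a hedged alternative is to build the window row indices explicitly and use fancy indexing; unlike the strided view this copies the data, but it needs no Python loop:
import numpy as np

arr = np.array([[1, 0, 1],
                [0, 0, 2],
                [1, 2, 0],
                [0, 1, 3],
                [2, 2, 1],
                [2, 0, 2]])
n = 4
# Row index of every window element, shape (len(arr) - n + 1, n)
idx = np.arange(len(arr) - n + 1)[:, None] + np.arange(n)[None, :]
new_arr = arr[idx]  # shape (3, 4, 3), same values as sliding_window_view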

Map a 2d 2-channel numpy array to a 2d 1-channel numpy array

Suppose I have a 2d 2-channel (3d) numpy array:
[[[-1, -1], [0, -1], [1, -1]],
 [[-1, 0], [0, 0], [1, 0]],
 [[-1, 1], [0, 1], [1, 1]]]
I want to map this to a 2d 1-channel (3d) numpy array:
[[[0], [1], [2]],
 [[3], [4], [5]],
 [[6], [7], [8]]]
So, for example, if I had the following array:
[[[-1, -1], [0, 0], [1, 1]],
 [[ 0, 0], [1, 0], [1, 1]]]
then after applying the mapping I should get:
[[[0], [4], [8]],
 [[4], [5], [8]]]
since [-1, -1] maps to [0], [0, 0] maps to [4], and so on.
I am writing a Python program to preprocess images in CIELAB space. The L* channel has been stripped off, leaving me with 'ab'. I want to convert individual ab pixels to classes.
Let's generate your lookup arrays to get a hint. First the template:
ROWS = 3
COLS = 3
template = np.arange(ROWS * COLS).reshape(ROWS, COLS, 1)
This is equivalent to
template = np.array([[[0], [1], [2]],
                     [[3], [4], [5]],
                     [[6], [7], [8]]])
Then the input grid:
ROW_OFFSET = -1
COL_OFFSET = -1
grid = np.stack(np.mgrid[ROW_OFFSET:ROWS + ROW_OFFSET,
                         COL_OFFSET:COLS + COL_OFFSET], 2)
This is equivalent to
grid = np.array([[[-1, -1], [-1, 0], [-1, 1]],
                 [[ 0, -1], [ 0, 0], [ 0, 1]],
                 [[ 1, -1], [ 1, 0], [ 1, 1]]])
Given how we made grid, it should be clear that the "channels" are the row and column index, up to the offset. So given an index array, you can map it into template using fancy indexing:
index = np.array([[[-1, -1], [0, 0], [1, 1]],
                  [[ 0, 0], [1, 0], [1, 1]]])
result = template[index[:, :, 0] - ROW_OFFSET, index[:, :, 1] - COL_OFFSET, :]
If your template always fits the pattern shown above, you don't need indexing at all. You can just generate the result directly from COLS and the grid offsets:
result = (index[:, :, 0] - ROW_OFFSET) * COLS + index[:, :, 1] - COL_OFFSET
import numpy as np
a = np.array([[[-1, -1], [0, -1], [1, -1]],
              [[-1, 0], [0, 0], [1, 0]],
              [[-1, 1], [0, 1], [1, 1]]])
b = np.array([[[-1, -1], [0, 0], [1, 1]],
              [[ 0, 0], [1, 0], [1, 1]]])
width = len(a[0])
start_row = a[0][0][0]
start_col = a[0][0][1]
result = []
for rows in b:
    line = []
    for d in rows:
        # Recover the flat template index from the pair's offsets
        n = (d[1] - start_col) * width + d[0] - start_row
        line.append([n])
    result.append(line)
result = np.asarray(result)
Is this what you mean?
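The same mapping can also be written without Python loops (a small sketch reusing a, b, width, start_row and start_col from the snippet above); the arithmetic mirrors the loop body:
# Vectorized equivalent of the double loop, result shape (2, 3, 1)
result = ((b[..., 1] - start_col) * width + b[..., 0] - start_row)[..., np.newaxis]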

How can I store index pairs using True values from a boolean-like square symmetric numpy array?

I have a Numpy array with integer values 1 or 0 (it can be cast as booleans if necessary). The array is square and symmetric (see note below) and I want a list of the indices where a 1 appears.
Note that array[i][j] == array[j][i] and array[i][i] == 0 by design. Also I cannot have any duplicates.
import numpy as np
array = np.array([
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0, 1],
    [0, 1, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 0]
])
I would like a result that is like this (order of each sub-list is not important, nor is the order of each element within the sub-list):
[
    [0, 2],
    [0, 4],
    [0, 6],
    [1, 2],
    [1, 3],
    [1, 5],
    [2, 6],
    [3, 4],
    [3, 5],
    [4, 6]
]
Another point: I would prefer not to loop over all indices with the condition j < i because the size of my array can be large, but I am aware that this is a possibility. I have written an example of this using two for loops:
import pandas as pd

result = []
for i in range(array.shape[0]):
    for j in range(i):
        if array[i][j]:
            result.append([i, j])
print(pd.DataFrame(result).sort_values(1).values)
# using dataframes and arrays for formatting, but looking for
# 'result', which is a list
# Returns (same as above but columns are the opposite way round):
[[2 0]
 [4 0]
 [6 0]
 [2 1]
 [3 1]
 [5 1]
 [6 2]
 [4 3]
 [5 3]
 [6 4]]
idx = np.argwhere(array)
idx = idx[idx[:,0]<idx[:,1]]
Another way:
idx = np.argwhere(np.triu(array))
output:
[[0 2]
 [0 4]
 [0 6]
 [1 2]
 [1 3]
 [1 5]
 [2 6]
 [3 4]
 [3 5]
 [4 6]]
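Since the question ultimately wants a plain Python list, the array can be converted with tolist() (a small addition, not part of the original answer):
result = np.argwhere(np.triu(array)).tolist()
# [[0, 2], [0, 4], [0, 6], [1, 2], [1, 3], [1, 5], [2, 6], [3, 4], [3, 5], [4, 6]]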
Comparison:
# @bousof's solution
def method1(array):
    return np.vstack(np.where(np.logical_and(array, np.diff(np.ogrid[:array.shape[0], :array.shape[0]])[0] >= 0))).transpose()[:, ::-1]

# Also mentioned by @hpaulj
def method2(array):
    return np.argwhere(np.triu(array))

def method3(array):
    idx = np.argwhere(array)
    return idx[idx[:, 0] < idx[:, 1]]

# The original method in the question, by the OP (d-man)
def method4(array):
    result = []
    for i in range(array.shape[0]):
        for j in range(i):
            if array[i][j]:
                result.append([i, j])
    return result

# Suggested by @bousof in the comments
def method5(array):
    return np.vstack(np.where(np.triu(array))).transpose()
inputs = [np.random.randint(0,2,(n,n)) for n in [10,100,1000,10000]]
Method1, method2 and method5 seem slightly faster for large arrays, while method3 is faster for smaller cases.
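A minimal sketch of how these could be timed (exact numbers vary by machine; the largest input is skipped here because the pure-Python method4 is very slow on it):
import timeit

for a_in in inputs[:3]:
    times = [timeit.timeit(lambda m=method, x=a_in: m(x), number=3)
             for method in (method1, method2, method3, method4, method5)]
    print(a_in.shape, ["%.4fs" % t for t in times])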
In [249]: arr = np.array([
     ...:     [0, 0, 1, 0, 1, 0, 1],
     ...:     [0, 0, 1, 1, 0, 1, 0],
     ...:     [1, 1, 0, 0, 0, 0, 1],
     ...:     [0, 1, 0, 0, 1, 1, 0],
     ...:     [1, 0, 0, 1, 0, 0, 1],
     ...:     [0, 1, 0, 1, 0, 0, 0],
     ...:     [1, 0, 1, 0, 1, 0, 0]
     ...: ])
The most common way of getting indices on non-zeros (True) is with np.nonzero (aka np.where):
In [250]: idx = np.nonzero(arr)
In [251]: idx
Out[251]:
(array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6]),
array([2, 4, 6, 2, 3, 5, 0, 1, 6, 1, 4, 5, 0, 3, 6, 1, 3, 0, 2, 4]))
This is a tuple - 2 arrays for a 2d array. It can be used directly to index the array (or anything like it): arr[idx] will give all 1s.
Apply np.transpose to that (which is what np.argwhere does) and you get an array of 'pairs':
In [252]: np.argwhere(arr)
Out[252]:
array([[0, 2],
       [0, 4],
       [0, 6],
       [1, 2],
       [1, 3],
       [1, 5],
       [2, 0],
       [2, 1],
       [2, 6],
       [3, 1],
       [3, 4],
       [3, 5],
       [4, 0],
       [4, 3],
       [4, 6],
       [5, 1],
       [5, 3],
       [6, 0],
       [6, 2],
       [6, 4]])
Using such an array to index arr is harder - requiring a loop and conversion to tuple.
To weed out the symmetric duplicates we could make a tri-lower array:
In [253]: np.tril(arr)
Out[253]:
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0, 0],
       [1, 0, 0, 1, 0, 0, 0],
       [0, 1, 0, 1, 0, 0, 0],
       [1, 0, 1, 0, 1, 0, 0]])
In [254]: np.argwhere(np.tril(arr))
Out[254]:
array([[2, 0],
       [2, 1],
       [3, 1],
       [4, 0],
       [4, 3],
       [5, 1],
       [5, 3],
       [6, 0],
       [6, 2],
       [6, 4]])
You can use numpy.where:
>>> np.vstack(np.where(np.logical_and(array, np.diff(np.ogrid[:array.shape[0],:array.shape[0]])[0]<=0))).transpose()
array([[2, 0],
       [2, 1],
       [3, 1],
       [4, 0],
       [4, 3],
       [5, 1],
       [5, 3],
       [6, 0],
       [6, 2],
       [6, 4]])
np.diff(np.ogrid[:array.shape[0],:array.shape[0]])[0]<=0 is true only on the lower part of the matrix. If the order is important, you can get the same order as in the question using:
>>> np.vstack(np.where(np.logical_and(array, np.diff(np.ogrid[:array.shape[0],:array.shape[0]])[0]>=0))).transpose()[:,::-1]
array([[2, 0],
       [4, 0],
       [6, 0],
       [2, 1],
       [3, 1],
       [5, 1],
       [6, 2],
       [4, 3],
       [5, 3],
       [6, 4]])
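The lower-triangle mask used in those expressions can also be built by broadcasting the ogrid components directly, which may be easier to read (a small sketch, not the answer's original code):
import numpy as np

n = 4
rows, cols = np.ogrid[:n, :n]   # shapes (n, 1) and (1, n)
mask = (cols - rows) <= 0       # True on and below the diagonal
print(mask.astype(int))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]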

Numpy.select from 3D array

Suppose I have the following numpy arrays:
>>> a
array([[0, 0, 2],
       [2, 0, 1],
       [2, 2, 1]])
>>> b
array([[2, 2, 0],
       [2, 0, 2],
       [1, 1, 2]])
that I then stack along a third axis:
c = np.dstack((a, b))
resulting in:
>>> c
array([[[0, 2],
        [0, 2],
        [2, 0]],
       [[2, 2],
        [0, 0],
        [1, 2]],
       [[2, 1],
        [2, 1],
        [1, 2]]])
From this I wish to check, for each pair along the third dimension of c, which combination from a list is present, and then number it accordingly with the index of the list match. I've tried the following, but it is not working. The algorithm is simple enough with double for-loops, but because c is very large, it is prohibitively slow.
classes = [(0, 0), (2, 1), (2, 2)]
out = np.select([h == c for h in classes], range(len(classes)), default=-1)
My desired output would be
out = [[-1, -1, -1],
       [ 3,  1, -1],
       [ 2,  2, -1]]
How about this:
(np.array([np.array(h)[..., :] == c for h in classes]).all(axis=-1) *
 (2 + np.arange(len(classes)))[:, None, None]).max(axis=0) - 1
It returns what you actually need:
array([[-1, -1, -1],
       [ 3,  1, -1],
       [ 2,  2, -1]])
You can test the a and b arrays separately like this:
clsa = (0, 2, 2)
clsb = (0, 1, 2)
np.select([(ca == a) & (cb == b) for ca, cb in zip(clsa, clsb)], range(3), default=-1)
which gets your desired result (except it returns 0, 1, 2 instead of 1, 2, 3).
Here is another way to get what you want, thought I would post it in case it's useful to anyone.
import numpy as np
a = np.array([[0, 0, 2],
              [2, 0, 1],
              [2, 2, 1]])
b = np.array([[2, 2, 0],
              [2, 0, 2],
              [1, 1, 2]])
classes = [(0, 0), (2, 1), (2, 2)]

# Pack a and b into one structured array so each (a, b) pair is a single element
c = np.empty(a.shape, dtype=[('a', a.dtype), ('b', b.dtype)])
c['a'] = a
c['b'] = b

classes = np.array(classes, dtype=c.dtype)
classes.sort()
out = classes.searchsorted(c)
out = np.where(c == classes[out], out + 1, -1)
print(out)
# array([[-1, -1, -1],
#        [ 3,  1, -1],
#        [ 2,  2, -1]])
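For completeness, a broadcasting-based alternative (a sketch, not taken from the answers above) compares the stacked array from the question against all classes at once and picks the first matching class:
# Reusing a and b from above; rebuild the stacked array and class list from the question
c = np.dstack((a, b))                                    # shape (3, 3, 2)
cls = np.array([(0, 0), (2, 1), (2, 2)])                 # shape (n_classes, 2)
matches = (c[None, ...] == cls[:, None, None, :]).all(axis=-1)   # (n_classes, 3, 3)
out = np.where(matches.any(axis=0), matches.argmax(axis=0) + 1, -1)
# array([[-1, -1, -1],
#        [ 3,  1, -1],
#        [ 2,  2, -1]])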
