Adjacent cells of multiple cell patches in a numpy array - python

This is a follow-up question arising from this solution.
The solution for counting adjacent cells works pretty well unless there are multiple patches in the array.
So this time the array, for instance, looks like this:
import numpy
from scipy import ndimage
s = ndimage.generate_binary_structure(2,2)
a = numpy.zeros((6,6), dtype=numpy.int) # example array
a[1:3, 1:3] = 1;a[2:4,4:5] = 1
print a
[0 0 0 0 0 0]
[0 1 1 0 0 0]
[0 1 1 0 1 0]
[0 0 0 0 1 0]
[0 0 0 0 0 0]
[0 0 0 0 0 0]
# Number of nonoverlapping cells
c = ndimage.binary_dilation(a,s).astype(a.dtype)
b = c - a
numpy.sum(b) # returns 19
# However, the correct number of non-overlapping cells should be 22 (12+10);
# the three cells bordering both patches are only counted once here
Is there any smart solution to solve this dilemma without using any loops or iterating through the array? The reason is that the array could be quite big.
Idea 1:
I just thought it over, and a way to do it might be to check for more than one patch within the dilation. For the total count to be correct, the cells below that border both patches would have to equal 2 (or more) in the dilation. Does anyone have an idea how to turn this thought into code? (One possible sketch follows the example array below.)
[1 1 1 1 0 0]
[1 0 0 2 1 1]
[1 0 0 2 0 1]
[1 1 1 2 0 1]
[0 0 0 1 1 1]
[0 0 0 0 0 0]
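A minimal sketch of that idea, assuming ndimage.label is acceptable: label the patches, dilate each one separately and sum the dilations, so a background cell bordering two patches contributes 2 to the total. It still iterates once per patch, but not per cell:
import numpy
from scipy import ndimage

s = ndimage.generate_binary_structure(2, 2)
a = numpy.zeros((6, 6), dtype=int)
a[1:3, 1:3] = 1
a[2:4, 4:5] = 1

labels, n_patches = ndimage.label(a, structure=s)
# Dilate every patch on its own; summing the dilations lets a shared
# neighbour cell count once per patch instead of once overall.
counts = sum(ndimage.binary_dilation(labels == i, s).astype(int)
             for i in range(1, n_patches + 1))
total = int(counts[a == 0].sum())
print(total)  # 22 for the example above (12 + 10)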

You can use label from ndimage to segment each patch of ones.
Then you just ask where the returned array equals 1, 2, 3, etc., and perform your algorithm on it (or you use ndimage.distance_transform_cdt, but with your foreground/background inverted for each labelled segment).
Edit 1:
This code will take your array a and do what you ask:
b, c = ndimage.label(a)
e = numpy.zeros(a.shape)
for i in xrange(c):
    # cells at chessboard distance 1 from patch i (foreground/background inverted)
    e += ndimage.distance_transform_cdt((b == i + 1) == 0) == 1
print e
I realize it is a bit ugly with all the comparisons in there, but it outputs:
In [41]: print e
[[ 1. 1. 1. 1. 0. 0.]
[ 1. 0. 0. 2. 1. 1.]
[ 1. 0. 0. 2. 0. 1.]
[ 1. 1. 1. 2. 0. 1.]
[ 0. 0. 0. 1. 1. 1.]
[ 0. 0. 0. 0. 0. 0.]]
Edit 2 (Alternative solution):
This code should do the same thing and hopefully faster (however, it will not find the cells where
two patches only touch at corners).
b = ndimage.binary_closing(a) - a                 # cells filled in by the closing
b = ndimage.binary_dilation(b.astype(bool))       # widen that region by one cell
c = ndimage.distance_transform_cdt(a == 0) == 1   # background cells adjacent to a patch
e = c.astype(numpy.int) * b + c                   # adjacent cells inside that region count twice
print e

Related

How to change the value of some m x m submatrices from an NxN matrix with numpy?

The general problem I'm facing could be stated as follows:
Build an NxN matrix in which m x m sub-matrices of values x or y alternate regularly.
For example, say I need a 6x6 matrix in which 2x2 sub-matrices of 0s and 1s alternate; the result should be the matrix below:
0 0 1 1 0 0
0 0 1 1 0 0
1 1 0 0 1 1
1 1 0 0 1 1
0 0 1 1 0 0
0 0 1 1 0 0
For now, I only managed to get this:
0 0 1 0 0 0
0 0 1 0 0 0
1 1 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
with the code :
b = numpy.zeros((6, 6))
b[:2, 2:6:4] = 1
b[2:6:4, :2] = 1
print(b)
I managed to find a solution, but it has four for-loops, so it is hard to read and takes a bit of time. The code for this possible answer is:
c = np.array([])
for k in range(3):
    for l in range(2):
        for i in range(3):
            for j in range(2):
                if (k+i) % 2 == 0:
                    c = np.append(c, 0)
                else:
                    c = np.append(c, 1)
print("c = ", np.reshape(c, (6, 6)))
Isn't there a better way to get the expected output without loops, or with one or two loops at most?
import numpy as np

m = 8
n = 4
c = np.zeros(shape=(m, m))
assert not m % n  # m must be divisible by n
for row_i in range(m):
    for col_i in range(m):
        if (row_i//n + col_i//n) % 2:
            c[row_i][col_i] = 1
print(c)
[[0. 0. 0. 0. 1. 1. 1. 1.]
[0. 0. 0. 0. 1. 1. 1. 1.]
[0. 0. 0. 0. 1. 1. 1. 1.]
[0. 0. 0. 0. 1. 1. 1. 1.]
[1. 1. 1. 1. 0. 0. 0. 0.]
[1. 1. 1. 1. 0. 0. 0. 0.]
[1. 1. 1. 1. 0. 0. 0. 0.]
[1. 1. 1. 1. 0. 0. 0. 0.]]
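For what it's worth, the same floor-division idea can also be written without the Python loops; a minimal sketch, assuming the same m and n as above:
import numpy as np

m, n = 8, 4
idx = np.arange(m)
# (row block index + column block index) % 2, broadcast over the full grid
c = ((idx[:, None] // n + idx[None, :] // n) % 2).astype(float)
print(c)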
I think you're on the right track with using NumPy array slicing. Here is an example for 2x2 sub-matrices (it works with any square matrix b of even size).
# first block for submatrices starting at column and row index 0
# 0::4 - every 4th column/row starting from column 0
# so this results in 1 0 0 0 1 0 and so on
b[0::4, 0::4] = 1
# 1::4 - every 4th column starting from column 1
# so this results in 0 1 0 0 0 1 and so on
b[0::4, 1::4] = 1
b[1::4, 0::4] = 1
b[1::4, 1::4] = 1
# second block for submatrices starting from column and row index 2
b[2::4, 2::4] = 1
b[2::4, 3::4] = 1
b[3::4, 2::4] = 1
b[3::4, 3::4] = 1
Now for larger sub-matrices you just have to increase the distance between the entries. For sub-matrices of size n, the distance must be 2 * n, because that is the scheme for the repetition of 1s in the matrix. Each block then has size n. Try to write a procedure; if you do not succeed, I will help further.
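A minimal sketch of such a procedure, assuming the matrix size is a multiple of the block size n (the name checkerboard_slices is just for illustration); it places the 1-blocks where the row-block and column-block indices have different parity, matching the question's expected output:
import numpy as np

def checkerboard_slices(size, n):
    # Only slice assignments: each of the n offsets inside a block is
    # strided by 2*n, the repeat distance of the pattern.
    b = np.zeros((size, size), dtype=int)
    for i in range(n):
        for j in range(n):
            b[i::2*n, n + j::2*n] = 1      # even row block, odd column block
            b[n + i::2*n, j::2*n] = 1      # odd row block, even column block
    return b

print(checkerboard_slices(6, 2))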
Here is one take using np.repeat and np.tile:
import numpy as np
def tiled_matrix(n, m):
    row = np.tile(np.repeat([1, 0], m), (n // m) // 2 + 1)[:n]
    row_block = np.tile(row, (m, 1))
    two_rows_block = np.vstack((row_block, 1 - row_block))
    arr = np.tile(two_rows_block, ((n // m) // 2 + 1, 1))[:n]
    return arr
print(tiled_matrix(10, 4))
# [[1 1 1 1 0 0 0 0 1 1]
# [1 1 1 1 0 0 0 0 1 1]
# [1 1 1 1 0 0 0 0 1 1]
# [1 1 1 1 0 0 0 0 1 1]
# [0 0 0 0 1 1 1 1 0 0]
# [0 0 0 0 1 1 1 1 0 0]
# [0 0 0 0 1 1 1 1 0 0]
# [0 0 0 0 1 1 1 1 0 0]
# [1 1 1 1 0 0 0 0 1 1]
# [1 1 1 1 0 0 0 0 1 1]]
Here is another take with np.meshgrid and xor:
def meshgrid_mod(n, m):
    ix = np.arange(n, dtype=np.uint8)
    xs, ys = np.meshgrid(ix, ix)
    arr = ((xs % (2*m)) < m) ^ ((ys % (2*m)) < m)
    return arr.astype(int)
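A quick check against the 6x6 / 2x2 case from the question (assuming the pattern with zeros in the top-left block is the intended one):
print(meshgrid_mod(6, 2))
# [[0 0 1 1 0 0]
#  [0 0 1 1 0 0]
#  [1 1 0 0 1 1]
#  [1 1 0 0 1 1]
#  [0 0 1 1 0 0]
#  [0 0 1 1 0 0]]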
The simplest solution is probably to use skimage.util.view_as_blocks():
import numpy as np
import skimage.util
a = np.zeros((6, 6))
b = skimage.util.view_as_blocks(a, (2, 2))
# b is a view of a, so changing values in b will also change values in a
b[::2, ::2, ...] = 1
b[1::2, 1::2, ...] = 1
a
# array([[1., 1., 0., 0., 1., 1.],
# [1., 1., 0., 0., 1., 1.],
# [0., 0., 1., 1., 0., 0.],
# [0., 0., 1., 1., 0., 0.],
# [1., 1., 0., 0., 1., 1.],
# [1., 1., 0., 0., 1., 1.]])

Numpy matrix with values equal to offset from central row/column

For a given odd value a, I want to generate two matrices whose values represent the offset from the central row/column in the x or y direction. Example for a=5:
    | -2 -1  0  1  2 |        | -2 -2 -2 -2 -2 |
    | -2 -1  0  1  2 |        | -1 -1 -1 -1 -1 |
X = | -2 -1  0  1  2 |    Y = |  0  0  0  0  0 |
    | -2 -1  0  1  2 |        |  1  1  1  1  1 |
    | -2 -1  0  1  2 |        |  2  2  2  2  2 |
What is the easiest way to achieve this with Numpy?
Try meshgrid:
n=5
X,Y = np.meshgrid(np.arange(n),np.arange(n))
X -= n//2
Y -= n//2
Or
n = 5
range_ = np.arange(-(n//2), n-n//2)
X,Y = np.meshgrid(range_, range_)
Also check out ogrid.
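For completeness, a minimal ogrid sketch, assuming the same n as above; np.ogrid returns open, broadcastable grids:
import numpy as np

n = 5
# Open grids: Y_o has shape (n, 1), X_o has shape (1, n)
Y_o, X_o = np.ogrid[-(n//2):n//2 + 1, -(n//2):n//2 + 1]
X = np.broadcast_to(X_o, (n, n))  # column offsets
Y = np.broadcast_to(Y_o, (n, n))  # row offsets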
np.arange and np.repeat will do:
a = 5
limits = -(a//2), a//2 + 1
col = np.c_[np.arange(*limits)]
Y = np.repeat(col, repeats=a, axis=1)
X = Y.T
Just use the fancy indexing technique of the NumPy module. The following code demonstrates the solution for a 5x5 matrix:
import numpy as np

if __name__ == '__main__':
    A = np.zeros((5, 5))
    A[np.arange(5), :] = np.arange(5)//2 - np.arange(5)[::-1]//2
    B = np.zeros((5, 5))
    B[:, np.arange(5)] = np.arange(5)//2 - np.arange(5)[::-1]//2
    B = B.T
    print(A)
    print(B)
Output
[[-2. -1. 0. 1. 2.]
[-2. -1. 0. 1. 2.]
[-2. -1. 0. 1. 2.]
[-2. -1. 0. 1. 2.]
[-2. -1. 0. 1. 2.]]
[[-2. -2. -2. -2. -2.]
[-1. -1. -1. -1. -1.]
[ 0. 0. 0. 0. 0.]
[ 1. 1. 1. 1. 1.]
[ 2. 2. 2. 2. 2.]]
Cheers.

How to get all possible array attributions of numpy arrays?

Python: get all possible array attributions of nd arrays. Use itertools.product?
If so, how?
In Python, I have two n-dimensional NumPy arrays A and B (B is an array of zeros),
such that A.shape[i] <= B.shape[i] for every i between 0 and n-1.
I want to create a for loop such that on each iteration A is assigned to a different sub-block of B, so that every possible position is covered by the end of the loop.
For instance, with A = np.array([[1,1,1],[1,1,1]]) and B = np.zeros((3,4)), I would get these (one per iteration):
1 1 1 0    0 1 1 1    0 0 0 0    0 0 0 0
1 1 1 0    0 1 1 1    1 1 1 0    0 1 1 1
0 0 0 0    0 0 0 0    1 1 1 0    0 1 1 1
For a fixed number of dimensions it is trivial: just use nested for loops, one per dimension.
However, I want it for a generic number of dimensions n.
My approach was to use itertools.product to get all combinations of indexes.
In the above example, product([0,1], [0,1]) would iterate over (0,0), (0,1), (1,0), (1,1), and I would have my indexes.
However, I don't know how to pass the parameters to the product function for a generic n.
Any idea? Are there better ways of doing this?
itertools product should work.
import numpy as np
from itertools import product

A = np.ones((2,3))
B = np.zeros((3,4))

r_rng = range(B.shape[0]-A.shape[0]+1)
c_rng = range(B.shape[1]-A.shape[1]+1)

for i, j in product(r_rng, c_rng):
    C = B.copy()
    C[i:i+A.shape[0], j:j+A.shape[1]] = A
    print(C, '\n')
Output:
[[1. 1. 1. 0.]
[1. 1. 1. 0.]
[0. 0. 0. 0.]]
[[0. 1. 1. 1.]
[0. 1. 1. 1.]
[0. 0. 0. 0.]]
[[0. 0. 0. 0.]
[1. 1. 1. 0.]
[1. 1. 1. 0.]]
[[0. 0. 0. 0.]
[0. 1. 1. 1.]
[0. 1. 1. 1.]]
Here is an example. You can use the * operator to unpack a variable number of arguments from a list and give them to itertools.product():
import itertools
import numpy as np

size1 = (3,5,6)
size2 = (2,2,2)
N = len(size1)
coords = []
for i in range(N):
    delta = size1[i]-size2[i]
    coords.append(list(range(delta)))
print(coords)
it = itertools.product(*coords)
arr = np.array(list(it))
print(arr)
Output:
[[0 0 0]
[0 0 1]
[0 0 2]
[0 0 3]
[0 1 0]
[0 1 1]
[0 1 2]
[0 1 3]
[0 2 0]
[0 2 1]
[0 2 2]
[0 2 3]]
I'm going to post the solution I obtained:
import numpy as np
from itertools import product

A = np.ones((2,3,2))
B = np.zeros((3,4,4))

coords = []
for i in range(len(B.shape)):
    delta = B.shape[i]-A.shape[i]+1
    coords.append(list(range(delta)))
print(coords)

for start_idx in product(*coords):
    idx = tuple(slice(start_idx[i], start_idx[i]+A.shape[i]) for i in range(len(A.shape)))
    m = np.zeros(B.shape)
    m[idx] = A  # equivalent to m.__setitem__(idx, A)
    print(m)
PS: Indexing the nd arrays was very tricky.

Initializing a matrix with alternating 0s and 1s in TensorFlow

I am trying to create an n-by-m matrix of 0s and 1s with a very simple structure:
[[1 0 0 0 0 0 0 ...],
[1 1 0 0 0 0 0 ...],
[1 1 1 0 0 0 0 ...],
[1 1 1 1 0 0 0 ...],
[0 1 1 1 1 0 0 ...],
[0 1 1 1 1 1 0 ...],
...
[... 0 0 0 1 1 1 1],
[... 0 0 0 0 1 1 1],
[... 0 0 0 0 0 1 1],
[... 0 0 0 0 0 0 1]]
However, I don't want to start writing loops, as this is probably achievable using something built in: A = tf.constant(???, shape=(n,m))
Note that after the first 3 rows there is simply a repetition of four 1s, followed by m-3 0s, until the last 3 rows.
So I am thinking something along the lines of a repeat of repeat, but I have no idea what syntax to use.
You're looking for tf.matrix_band_part(). As per the manual, its function is to
Copy a tensor setting everything outside a central band in each innermost matrix to zero.
So in your case you'd create a matrix with ones, and then take a 4-wide band like this:
tf.matrix_band_part( tf.ones( shape = ( 1, n, m ) ), 3, 0 )
Tested code:
import tensorflow as tf
x = tf.ones( shape = ( 1, 9, 6 ) )
y = tf.matrix_band_part( x, 3, 0 )
with tf.Session() as sess:
    res = sess.run( y )
    print ( res )
Output:
[[[1. 0. 0. 0. 0. 0.]
[1. 1. 0. 0. 0. 0.]
[1. 1. 1. 0. 0. 0.]
[1. 1. 1. 1. 0. 0.]
[0. 1. 1. 1. 1. 0.]
[0. 0. 1. 1. 1. 1.]
[0. 0. 0. 1. 1. 1.]
[0. 0. 0. 0. 1. 1.]
[0. 0. 0. 0. 0. 1.]]]
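As a side note, in recent TensorFlow versions the same op is exposed as tf.linalg.band_part and runs eagerly; a minimal TF 2.x sketch of the equivalent call:
import tensorflow as tf

x = tf.ones((9, 6))
y = tf.linalg.band_part(x, 3, 0)  # keep 3 sub-diagonals and 0 super-diagonals
print(y.numpy())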

Tensorflow Non-Maximum Suppression

NOTE: tf.image.non_max_suppression does NOT do what I'm looking for!
I'm trying to perform non-maximum suppression (NMS) similar to the Canny edge detector. Specifically, NMS on a 2D array will keep a value if it is the maximum within a window, and otherwise suppress it (set it to 0).
For example, consider the matrix
[[3 2 1 4 2 3]
[1 4 2 1 5 2]
[2 2 3 2 1 3]]
If we consider a window size of 3 x 3, then the result should be
[[0 0 0 0 0 0]
[0 4 0 0 5 0]
[0 0 0 0 0 0]]
I've searched around and couldn't find anything that performs this operation in tf.image and tf.nn. Is there code somewhere that performs NMS? If not, how can I efficiently implement NMS in Tensorflow (Python)?
Thanks!
EDIT: I came up with one way to solve this but I'm not sure if there are better ways: take a max pool with stride 1 (i.e. no downsampling) and the given window size, then use tf.where to check whether each value equals the max-pooled value and set it to 0 if not. Is there a better way?
Answering my own question (though open to better solutions):
import tensorflow as tf
import numpy as np
def non_max_suppression(input, window_size):
    # input: B x W x H x C
    pooled = tf.nn.max_pool(input, ksize=[1, window_size, window_size, 1], strides=[1,1,1,1], padding='SAME')
    output = tf.where(tf.equal(input, pooled), input, tf.zeros_like(input))
    # NOTE: if input has negative values, the suppressed values can be higher than original
    return output # output: B X W X H x C

sess = tf.InteractiveSession()
x = np.array([[3,2,1,4,2,3],[1,4,2,1,5,2],[2,2,3,2,1,3]], dtype=np.float32).reshape([1,3,6,1])
inp = tf.Variable(x)
out = non_max_suppression(inp, 3)
sess.run(tf.global_variables_initializer())
print(out.eval().reshape([3,6]))
'''
[[ 0. 0. 0. 0. 0. 0.]
[ 0. 4. 0. 0. 5. 0.]
[ 0. 0. 0. 0. 0. 0.]]
'''
sess.close()
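For what it's worth, the same idea ports to TensorFlow 2.x with eager execution; a minimal sketch using tf.nn.max_pool2d (the name non_max_suppression_v2 is just for illustration):
import tensorflow as tf
import numpy as np

def non_max_suppression_v2(x, window_size):
    # x: B x H x W x C float tensor; keep a value only where it equals the window maximum
    pooled = tf.nn.max_pool2d(x, ksize=window_size, strides=1, padding='SAME')
    return tf.where(tf.equal(x, pooled), x, tf.zeros_like(x))

x = np.array([[3, 2, 1, 4, 2, 3],
              [1, 4, 2, 1, 5, 2],
              [2, 2, 3, 2, 1, 3]], dtype=np.float32).reshape(1, 3, 6, 1)
print(non_max_suppression_v2(tf.constant(x), 3).numpy().reshape(3, 6))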
