big_array = np.array((
[0,1,0,0,1,0,0,1],
[0,1,0,0,0,0,0,0],
[0,1,0,0,1,0,0,0],
[0,0,0,0,1,0,0,0],
[1,0,0,0,1,0,0,0]))
print(big_array)
[[0 1 0 0 1 0 0 1]
[0 1 0 0 0 0 0 0]
[0 1 0 0 1 0 0 0]
[0 0 0 0 1 0 0 0]
[1 0 0 0 1 0 0 0]]
Is there a way to iterate over this numpy array and, for each 2x2 cluster of 0s, set all values within that cluster to 5? This is what the output would look like:
[[0 1 5 5 1 5 5 1]
[0 1 5 5 0 5 5 0]
[0 1 5 5 1 5 5 0]
[0 0 5 5 1 5 5 0]
[1 0 5 5 1 5 5 0]]
My thought is to use advanced indexing to set each 2x2 block to 5, but I think it would be really slow to simply iterate like this (a naive version of that loop is sketched after the list):
1) check if array[x][y] is 0
2) check if adjacent array elements are 0
3) if all elements are 0, set all those values to 5.
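A naive version of that loop would look something like this; note it treats every sliding 2x2 window as a candidate, and checks each window against a copy of the original array so that earlier reassignments don't hide later all-zero windows:
# big_array is the array defined above
orig = big_array.copy()
# visit every possible top-left corner of a 2x2 window
for x in range(orig.shape[0] - 1):
    for y in range(orig.shape[1] - 1):
        # if the window was all zeros in the original array, reassign it to 5
        if not orig[x:x+2, y:y+2].any():
            big_array[x:x+2, y:y+2] = 5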
big_array = [1, 7, 0, 0, 3]
i = 0
p = 0
while i <= len(big_array) - 1 and p <= len(big_array) - 2:
    # if this element and its neighbour are both 0, replace them with 5
    if big_array[i] == 0 and big_array[p + 1] == 0:
        big_array[i] = 5
        big_array[p + 1] = 5
        print(big_array)
    i = i + 1
    p = p + 1
Output:
[1, 7, 5, 5, 3]
This is just an example, not complete, correct code.
Here's a solution by viewing the array as blocks.
First you need to define this function rolling_window from here https://gist.github.com/seberg/3866040/revisions
Then break the array big, your starting array, into 2x2 blocks using this function.
Also generate an array which has indices of every element in big and break it similarly into 2x2 blocks.
Then generate a boolean mask where the 2x2 blocks of big are all zero, and use the index array to get those elements.
blks = rolling_window(big,window=(2,2)) # 2x2 blocks of original array
inds = np.indices(big.shape).transpose(1,2,0) # array of indices into big
blkinds = rolling_window(inds,window=(2,2,0)).transpose(0,1,4,3,2) # 2x2 blocks of indices into big
mask = blks == np.zeros((2,2)) # generate a mask of every 2x2 block which is all zero
mask = mask.reshape(*mask.shape[:-2],-1).all(-1) # still generating the mask
# now blks[mask] is every block which is zero..
# but you actually want the original indices in the array 'big' instead
inds = blkinds[mask].reshape(-1,2).T # indices into big where elements need replacing
big[inds[0],inds[1]] = 5 #reassign
You need to test this: I did not. But the idea is to break the array into blocks, and an array of indices into the same blocks, then develop a boolean condition on the blocks, use it to pick out the indices, and then reassign.
An alternative would be to iterate through blkinds as defined here, then test the 2x2 block obtained from big at each element and reassign if necessary.
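If you are on NumPy 1.20 or newer, np.lib.stride_tricks.sliding_window_view gives a similar 2x2 block view without the external gist. A rough sketch of the same idea (like the other block-based answers, it fills every cell covered by any all-zero 2x2 window):
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

big = np.array([
    [0,1,0,0,1,0,0,1],
    [0,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,0,0],
    [0,0,0,0,1,0,0,0],
    [1,0,0,0,1,0,0,0]])

blks = sliding_window_view(big, (2, 2))   # shape (4, 7, 2, 2): every 2x2 window
mask = ~blks.any(axis=(-2, -1))           # True where a window is all zero
rows, cols = np.nonzero(mask)             # top-left corners of those windows
# expand each corner to the four cells of its window and reassign
rr = rows[:, None] + np.array([0, 0, 1, 1])
cc = cols[:, None] + np.array([0, 1, 0, 1])
big[rr, cc] = 5
print(big)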
This is my attempt to help you solve your problem. My solution may be subject to fair criticism.
import numpy as np
from itertools import product
m = np.array((
[0,1,0,0,1,0,0,1],
[0,1,0,0,0,0,0,0],
[0,1,0,0,1,0,0,0],
[0,0,0,0,1,0,0,0],
[1,0,0,0,1,0,0,0]))
h = 2
w = 2
rr, cc = tuple(d + 1 - q for d, q in zip(m.shape, (h, w)))
slices = [(slice(r, r + h), slice(c, c + w))
          for r, c in product(range(rr), range(cc))
          if not m[r:r + h, c:c + w].any()]
for s in slices:
    m[s] = 5
print(m)
[[0 1 5 5 1 5 5 1]
[0 1 5 5 0 5 5 5]
[0 1 5 5 1 5 5 5]
[0 5 5 5 1 5 5 5]
[1 5 5 5 1 5 5 5]]
Note that this fills every cell covered by any all-zero 2x2 window, overlapping windows included, which is why more cells become 5 than in the expected output shown in the question.
So I have a program that at some point creates random arrays, and I have to perform an operation that adds rows while replacing other rows, based on the values found in the rows. One of the random arrays will look something like this, but keep in mind that it could randomly vary in size, ranging from 3x3 up to 10x10:
0 2 0 1
1 0 0 1
1 0 2 1
2 0 1 2
For every row that has at least one value equal to 2, I need to remove/replace the row and add some more rows. The number of rows added depends on the number of possible combinations of 0s and 1s whose number of digits equals the number of 2s counted in that row. Each added row introduces one of these combinations in the positions where the 2s are located. The result I'm looking for will look like this:
0 1 0 1 # First combination to replace 0 2 0 1
0 0 0 1 # Second combination to replace 0 2 0 1 (Only 2 combinations, only one 2)
1 0 0 1 # Stays the same
1 0 1 1 # First combination to replace 1 0 2 1
1 0 0 1 # Second combination to replace 1 0 2 1 (Only 2 combinations, only one 2)
0 0 1 0 # First combination to replace 2 0 1 2
0 0 1 1 # Second combination to replace 2 0 1 2
1 0 1 1 # Third combination to replace 2 0 1 2
1 0 1 0 # Fourth combination to replace 2 0 1 2 (4 combinations, there are two 2s)
If you know a Numpy way of accomplishing this I will be grateful.
You can try the following. Create a sample array:
import numpy as np
np.random.seed(5)
a = np.random.randint(0, 3, (4, 4))
print(a)
This gives:
[[2 1 2 2]
[0 1 0 0]
[2 0 2 0]
[0 1 1 0]]
Compute the output array:
ts = (a == 2).sum(axis=1)  # number of 2s in each row
# all 0/1 combinations for every row that contains 2s, flattened in replacement order
r = np.hstack([np.array(np.meshgrid(*[[0, 1]] * t)).reshape(t, -1).T.ravel() for t in ts if t])
out = np.repeat(a, 2**ts, axis=0)  # repeat each row once per combination (2**0 == 1 keeps rows without 2s)
out[out == 2] = r  # overwrite the 2s, row by row, with the combinations
print(out)
Result:
[[0 1 0 0]
[0 1 0 1]
[1 1 0 0]
[1 1 0 1]
[0 1 1 0]
[0 1 1 1]
[1 1 1 0]
[1 1 1 1]
[0 1 0 0]
[0 0 0 0]
[1 0 0 0]
[0 0 1 0]
[1 0 1 0]
[0 1 1 0]]
Not the prettiest code but it does the job. You could clean up the itertools calls but this lets you see how it works.
import numpy as np
import itertools

X = np.array([[0, 2, 0, 1],
              [1, 0, 0, 1],
              [1, 0, 2, 1],
              [2, 0, 1, 2]])

def add(X_, Y):
    if Y.size == 0:
        Y = X_
    else:
        Y = np.vstack((Y, X_))
    return Y

Y = np.array([])
for i in range(len(X)):
    if 2 not in X[i, :]:
        Y = add(X[i, :], Y)
    else:
        a = np.where(X[i, :] == 2)[0]
        n = [[i for i in itertools.chain([1, 0])] for _ in range(len(a))]
        m = list(itertools.product(*n))
        for j in range(len(m)):
            M = 1 * X[i, :]
            u = list(m[j])
            for k in range(len(a)):
                M[a[k]] = u[k]
            Y = add(M, Y)
print(Y)
#[[0 1 0 1]
# [0 0 0 1]
# [1 0 0 1]
# [1 0 1 1]
# [1 0 0 1]
# [1 0 1 1]
# [1 0 1 0]
# [0 0 1 1]
# [0 0 1 0]]
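For reference, the same idea with the itertools calls cleaned up could look roughly like this (a sketch, not part of the original answer):
import numpy as np
from itertools import product

X = np.array([[0, 2, 0, 1],
              [1, 0, 0, 1],
              [1, 0, 2, 1],
              [2, 0, 1, 2]])

rows = []
for row in X:
    twos = np.where(row == 2)[0]                 # positions of the 2s in this row
    if twos.size == 0:
        rows.append(row)                         # no 2s: keep the row as-is
        continue
    for combo in product([1, 0], repeat=len(twos)):
        new_row = row.copy()
        new_row[twos] = combo                    # substitute one 0/1 combination
        rows.append(new_row)

Y = np.vstack(rows)
print(Y)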
arr = [1,2,3,4]
k = 4  # can be different
The result should be a 2-D array with k rows, where row i is arr shifted right by i positions and padded with zeros so that every row has k-1 zeros in total. How can I do this without using any loop, and without hard-coding k? Both k and arr can vary with the input, and the solution must use numpy.pad.
[[1,2,3,4,0,0,0], #k-1 zeros
[0,1,2,3,4,0,0],
[0,0,1,2,3,4,0],
[0,0,0,1,2,3,4]]
If you really have to do it without a loop (for educational purposes):
np.pad(np.tile(arr,[k,1]), [(0,0),(0,k)]).reshape(-1)[:-k].reshape(k,-1)
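For reference, a complete runnable version of that one-liner, using the arr and k from the question:
import numpy as np

arr = np.array([1, 2, 3, 4])
k = 4

# tile k copies of arr, pad k zeros on the right of each row,
# then flatten, drop the last k zeros and reshape to get the staggered rows
out = np.pad(np.tile(arr, [k, 1]), [(0, 0), (0, k)]).reshape(-1)[:-k].reshape(k, -1)
print(out)
# [[1 2 3 4 0 0 0]
#  [0 1 2 3 4 0 0]
#  [0 0 1 2 3 4 0]
#  [0 0 0 1 2 3 4]]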
Using a list comprehension as a one-liner:
import numpy as np
arr = np.array([1,2,3,4])
k = 4
print(np.array([np.pad(arr, (i, k - 1 - i)) for i in range(k)]))
Output:
[[1 2 3 4 0 0 0]
[0 1 2 3 4 0 0]
[0 0 1 2 3 4 0]
[0 0 0 1 2 3 4]]
I am just starting off with numpy and am trying to create a function that takes in an array (x), converts this into a np.array, and returns a numpy array with 0,0,0,0 added after each element.
It should look like so:
input array: [4,5,6]
output: [4,0,0,0,0,5,0,0,0,0,6,0,0,0,0]
I have tried the following:
import numpy as np
x = np.asarray([4,5,6])
y = np.array([])
for index, value in enumerate(x):
    y = np.insert(x, index+1, [0,0,0,0])
    print(y)
which returns:
[4 0 0 0 0 5 6]
[4 5 0 0 0 0 6]
[4 5 6 0 0 0 0]
So basically I need to combine the output into one single numpy array rather than three separate arrays.
Would anybody know how to solve this?
Many thanks!
Use the numpy .zeros function!
import numpy as np
inputArray = [4,5,6]
newArray = np.zeros(5*len(inputArray),dtype=int)
newArray[::5] = inputArray
In effect, you 'force' the values at indices 0, 5, and 10 to become 4, 5, and 6.
so _____[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
becomes [4 0 0 0 0 5 0 0 0 0 6 0 0 0 0]
>>> newArray
array([4, 0, 0, 0, 0, 5, 0, 0, 0, 0, 6, 0, 0, 0, 0])
I haven't used numpy to solve this problem, but this code seems to return your required output:
a = [4,5,6]
b = [0,0,0,0]
c = []
for x in a:
    c = c + [x] + b
print(c)
I hope this helps!
I have 2 matrices and I want to perform a 'cell-wise' addition; however, the matrices aren't the same size. I want to preserve the cells' relative positions during the calculation (i.e. their 'coordinates' from the top left), so a simple (if maybe not the best) solution seems to be to pad the smaller matrix's x and y with zeros.
This thread has a perfectly satisfactory answer for concatenating vertically, and this does work with my data. Following the suggestion in the answer, I also threw in the hstack, but at the moment it's complaining that the dimensions (excluding the concatenation axis) need to match exactly. Perhaps hstack doesn't work as I anticipate, or not exactly equivalently to vstack, but I'm at a bit of a loss now.
This is what hstack throws at me, while vstack seems to have no problem:
ValueError: all the input array dimensions except for the concatenation axis must match exactly
Essentially the code checks which of a pair of matrices is the shorter and/or wider, and then pads the smaller matrix with zeros to match.
Here's the code I have:
import numpy as np

A = np.random.randint(2, size = (3, 7))
B = np.random.randint(2, size = (5, 10))

# If the arrays have different row numbers:
if A.shape[0] < B.shape[0]:    # Is A shorter than B?
    A = np.vstack((A, np.zeros((B.shape[0] - A.shape[0], A.shape[1]))))
elif A.shape[0] > B.shape[0]:  # or is A longer than B?
    B = np.vstack((B, np.zeros((A.shape[0] - B.shape[0], B.shape[1]))))

# If they have different column numbers
if A.shape[1] < B.shape[1]:    # Is A narrower than B?
    A = np.hstack((A, np.zeros((B.shape[1] - A.shape[1], A.shape[0]))))
elif A.shape[1] > B.shape[1]:  # or is A wider than B?
    B = np.hstack((B, np.zeros((A.shape[1] - B.shape[1], B.shape[0]))))
It's getting late, so it's possible I've just missed something obvious with hstack, but I can't see my logic error at the moment.
Just use np.pad:
np.pad(A,((0,2),(0,3)),'constant') # 2 is 5-3, 3 is 10-7
[[0 1 1 0 1 0 0 0 0 0]
[1 0 0 1 0 1 0 0 0 0]
[1 0 1 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]]
But the four pad widths must be computed, so another simple method to pad the two arrays in the general case is:
A = np.ones((3, 7),int)
B = np.ones((5, 2),int)
ma,na = A.shape
mb,nb = B.shape
m,n = max(ma,mb) , max(na,nb)
newA = np.zeros((m,n),A.dtype)
newA[:ma,:na]=A
newB = np.zeros((m,n),B.dtype)
newB[:mb,:nb]=B
Which gives:
[[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
[1 1 1 1 1 1 1]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]]
[[1 1 0 0 0 0 0]
[1 1 0 0 0 0 0]
[1 1 0 0 0 0 0]
[1 1 0 0 0 0 0]
[1 1 0 0 0 0 0]]
I think your hstack lines should be of the form
np.hstack((A, np.zeros((A.shape[0], B.shape[1] - A.shape[1]))))
You seem to have the rows and columns swapped.
Yes, indeed. You should swap (B.shape[1] - A.shape[1], A.shape[0]) to (A.shape[0], B.shape[1] - A.shape[1]) and so on, because you need the same number of rows to stack them horizontally.
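For completeness, a version of the question's padding code with the hstack shapes corrected as described above (a sketch, not the original poster's final code):
import numpy as np

A = np.random.randint(2, size=(3, 7))
B = np.random.randint(2, size=(5, 10))

# Pad rows first, exactly as in the question:
if A.shape[0] < B.shape[0]:
    A = np.vstack((A, np.zeros((B.shape[0] - A.shape[0], A.shape[1]))))
elif A.shape[0] > B.shape[0]:
    B = np.vstack((B, np.zeros((A.shape[0] - B.shape[0], B.shape[1]))))

# Then pad columns, with (rows, columns) in the right order:
if A.shape[1] < B.shape[1]:
    A = np.hstack((A, np.zeros((A.shape[0], B.shape[1] - A.shape[1]))))
elif A.shape[1] > B.shape[1]:
    B = np.hstack((B, np.zeros((B.shape[0], A.shape[1] - B.shape[1]))))

print(A.shape, B.shape)  # both (5, 10), so A + B now works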
Try b[:a.shape[0], :a.shape[1]] = b[:a.shape[0], :a.shape[1]] + a, where b is the larger array.
Example below
import numpy as np
a = np.arange(12).reshape(3, 4)
print("a\n", a)
b = np.arange(16).reshape(4, 4)
print("b original\n", b)
b[:a.shape[0], :a.shape[1]] = b[:a.shape[0], :a.shape[1]]+a
print("b new\n",b)
output
a
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
b original
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
b new
[[ 0 2 4 6]
[ 8 10 12 14]
[16 18 20 22]
[12 13 14 15]]
It's past midnight and maybe someone has an idea how to tackle a problem of mine. I want to count the number of adjacent cells (meaning the number of array fields with other values, e.g. zeroes, in the vicinity of the array values) as a sum over all valid values.
Example:
import numpy
from scipy import ndimage

s = ndimage.generate_binary_structure(2,2)  # Structure can vary
a = numpy.zeros((6,6), dtype=int)           # Example array
a[2:4, 2:4] = 1; a[2,4] = 1                 # with example value structure
print(a)
[[0 0 0 0 0 0]
 [0 0 0 0 0 0]
 [0 0 1 1 1 0]
 [0 0 1 1 0 0]
 [0 0 0 0 0 0]
 [0 0 0 0 0 0]]
# The value at position [2,4] is surrounded by 6 zeros, while the one at
# position [2,2] has 5 zeros in the vicinity if 's' is the assumed binary structure.
# The total sum of surrounding zeroes is therefore 5+4+6+4+5 == 24.
How can I count the number of zeroes in this way if the structure of my values varies?
I somehow believe I must make use of SciPy's binary_dilation function, which is able to enlarge the value structure, but simple counting of overlaps can't lead me to the correct sum, or can it?
print(ndimage.binary_dilation(a,s).astype(a.dtype))
[[0 0 0 0 0 0]
[0 1 1 1 1 1]
[0 1 1 1 1 1]
[0 1 1 1 1 1]
[0 1 1 1 1 0]
[0 0 0 0 0 0]]
Use a convolution to count neighbours:
import numpy
import scipy.signal

a = numpy.zeros((6,6), dtype=int)  # Example array
a[2:4, 2:4] = 1; a[2,4] = 1        # with example value structure
b = 1 - a                          # 1 wherever the original array is zero
# each element of c is the number of zeros of 'a' in its 3x3 neighbourhood
c = scipy.signal.convolve2d(b, numpy.ones((3,3)), mode='same')
print(numpy.sum(c * a))  # 24 for the example above
b = 1-a allows us to count each zero while ignoring the ones.
We convolve with a 3x3 all-ones kernel, which sets each element to the sum of it and its 8 neighbouring values (other kernels are possible, such as the + kernel for only orthogonally adjacent values). With these summed values, we mask off the zeros in the original input (since we don't care about their neighbours), and sum over the whole array.
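Since the question mentions that the structure s can vary, the structuring element itself can be used as the kernel. A sketch with the same s and a as in the question (for a non-symmetric structure you would want correlation rather than convolution, or flip the kernel first):
import numpy as np
from scipy import ndimage, signal

s = ndimage.generate_binary_structure(2, 2)  # the (possibly varying) structure
a = np.zeros((6, 6), dtype=int)
a[2:4, 2:4] = 1
a[2, 4] = 1

b = 1 - a                                    # 1 where the original array is zero
# for every cell, count the zeros of 'a' covered by the structuring element
c = signal.convolve2d(b, s.astype(int), mode='same')
print(np.sum(c * a))                         # 24 for this example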
I think you have already got it: after dilation, the number of 1s is 19; subtract the 5 of the starting shape and you have 14, which is the number of distinct zeros surrounding your shape. Your total of 24 counts overlaps.
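A quick sketch to check that count, assuming the same a and s as in the question:
import numpy as np
from scipy import ndimage

s = ndimage.generate_binary_structure(2, 2)
a = np.zeros((6, 6), dtype=int)
a[2:4, 2:4] = 1
a[2, 4] = 1

dilated = ndimage.binary_dilation(a, s)
print(dilated.sum() - a.sum())  # 19 - 5 = 14 distinct surrounding zeros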