Convert C-order index into F-order index in Python

I am trying to find a solution to the following problem. I have an index in C-order and I need to convert it into F-order.
To explain simply my problem, here is an example:
Let's say we have a matrix x as:
x = np.arange(1,5).reshape(2,2)
print(x)
array([[1, 2],
       [3, 4]])
Then the flattened matrix in C order is:
flat_c = x.ravel()
print(flat_c)
array([1, 2, 3, 4])
Now, the value 3 is at index 2 of the flat_c vector, i.e. flat_c[2] is 3.
If I flatten the matrix x using F-order instead, I get:
flat_f = x.ravel(order='f')
array([1, 3, 2, 4])
Now, the value 3 is at index 1 of the flat_f vector, i.e. flat_f[1] is 3.
I am trying to find a way to get the F-order index knowing the dimension of the matrix and the corresponding index in C-order.
I tried using np.unravel_index but this function returns the matrix positions...

We can use a combination of np.ravel_multi_index and np.unravel_index for a solution that works on ndarrays of any number of dimensions. Given the shape s of the input array a and a C-order index c_idx, it would be -
s = a.shape
f_idx = np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
So, the idea is pretty simple. Use np.unravel_index to get the n-dimensional indices from the C-order linear index, then feed those indices, flipped, together with the flipped shape into np.ravel_multi_index to get the flattened linear index in Fortran order.
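For convenience, the same expression can be wrapped in a small helper (the function name here is just for illustration) and checked against ravel:
import numpy as np

def c_index_to_f_index(c_idx, shape):
    # map a C-order flat index to the equivalent F-order flat index
    return np.ravel_multi_index(np.unravel_index(c_idx, shape)[::-1], shape[::-1])

x = np.arange(1, 5).reshape(2, 2)
print(c_index_to_f_index(2, x.shape))  # 1
print(x.ravel(order='F')[1])           # 3, the same element as x.ravel()[2]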
Sample runs on 2D -
In [321]: a
Out[321]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
In [322]: s = a.shape
In [323]: c_idx = 6
In [324]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[324]: 4
In [325]: c_idx = 12
In [326]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[326]: 8
Sample run on 3D array -
In [336]: a
Out[336]:
array([[[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14]],

       [[15, 16, 17, 18, 19],
        [20, 21, 22, 23, 24],
        [25, 26, 27, 28, 29]]])
In [337]: s = a.shape
In [338]: c_idx = 21
In [339]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[339]: 9
In [340]: a.ravel('F')[9]
Out[340]: 21

Suppose your matrix is of shape (nrow, ncol). Then the 1D index of the (irow, icol) entry, when the matrix is flattened in C order, is given by
idxc = ncol*irow + icol
In the above equation, you know idxc. Then,
icol = idxc % ncol
Now you can find irow
irow = (idxc - icol) / ncol
Now you know both irow and icol. You can use them to get the F index. I think the F index will be given by
idxf = nrow*icol + irow
Please double-check my math, I might have got something wrong...
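As a quick check of these 2D formulas (a minimal sketch, using integer division so the result stays an index):
import numpy as np

nrow, ncol = 2, 2
x = np.arange(1, 5).reshape(nrow, ncol)

idxc = 2                       # C-order index of the value 3
icol = idxc % ncol             # 0
irow = (idxc - icol) // ncol   # 1
idxf = nrow * icol + irow      # 1

print(x.ravel()[idxc], x.ravel(order='F')[idxf])  # 3 3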
For the 3D case, if your array has dimensions [n1][n2][n3], then the unraveled C-index for [i1][i2][i3] is
idxc = n2*n3*i1 + n3*i2+i3
Using modulo operations similar to the 2D case, we can recover i1,i2,i3 and then convert to unraveled F index, i.e.
n3*i2 + i3 = idxc % (n2*n3)
i3 = (n3*i2 + i3) % n3
i2 = ((n3*i2 + i3) - i3) / n3
i1 = (idxc - (n3*i2 + i3)) / (n2*n3)
The F index would be:
idxf = i1 + n1*i2 + n1*n2*i3
Please check my math.
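A similar sketch checks the 3D equations, applying the modulo steps in order on a small (2, 3, 5) array like the one in the earlier sample run:
import numpy as np

n1, n2, n3 = 2, 3, 5
a = np.arange(n1 * n2 * n3).reshape(n1, n2, n3)

idxc = 21
rem = idxc % (n2 * n3)          # this is n3*i2 + i3
i3 = rem % n3
i2 = (rem - i3) // n3
i1 = (idxc - rem) // (n2 * n3)
idxf = i1 + n1 * i2 + n1 * n2 * i3

print(idxf)                                       # 9
print(a.ravel()[idxc], a.ravel(order='F')[idxf])  # 21 21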

In simple cases you may also get away with transposing and ravelling the array:
import numpy as np
x = np.arange(2 * 2).reshape(2, 2)
print(x)
# [[0 1]
#  [2 3]]
print(x.ravel())
# [0 1 2 3]
print(x.transpose().ravel())
# [0 2 1 3]
x = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print(x)
# [[[ 0  1  2  3]
#   [ 4  5  6  7]
#   [ 8  9 10 11]]
#
#  [[12 13 14 15]
#   [16 17 18 19]
#   [20 21 22 23]]]
print(x.ravel())
# [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
print(x.transpose().ravel())
# [ 0 12 4 16 8 20 1 13 5 17 9 21 2 14 6 18 10 22 3 15 7 19 11 23]
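Building on the same idea, one way to get an explicit mapping from C-order positions to F-order positions is to flatten a Fortran-ordered index grid in C order (a quick sketch):
import numpy as np

x = np.arange(1, 5).reshape(2, 2)

# mapping[c_idx] is the F-order position of the element found at C-order position c_idx
mapping = np.arange(x.size).reshape(x.shape, order='F').ravel(order='C')
print(mapping)                          # [0 2 1 3]
print(mapping[2])                       # 1
print(x.ravel(order='F')[mapping[2]])   # 3, the same element as x.ravel()[2]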

Related

Dividing numpy 2d array into equal sections

I have a 30*30px image and I converted it to a NumPy array. Now I want to divide this 30*30 image into 9 equal pieces (imagine a tic-tac-toe board). I wrote the code below for that purpose, but the problem with my code is that it has two nested loops, and in Python that means a straight ticket to lower-performance town (especially for a large amount of data). So is there a better way of doing this using NumPy and NumPy indexing?
import numpy as np

# factor says that the image should be divided into 9 sections, 3*3 = 9 (kinda like 3 rows, 3 columns)
def section(img, factor=3):
    secs = []
    # This basically tests if the image can actually be divided into equal sections
    if img.shape[0] % factor != 0:
        return False
    # number of pixels in each row and column of the sections
    pix_num = int(img.shape[0] / factor)
    ptr_x_a = 0
    ptr_x_b = pix_num - 1
    for i in range(factor):
        ptr_y_a = 0
        ptr_y_b = pix_num - 1
        for j in range(factor):
            secs.append(img[ptr_x_a:ptr_x_b + 1, ptr_y_a:ptr_y_b + 1])
            ptr_y_a += pix_num
            ptr_y_b += pix_num
        ptr_x_a += pix_num
        ptr_x_b += pix_num
    return np.array(secs, dtype="int16")
P.S: Don't mind reading the whole code, just know that it uses pointers to select different areas of the image.
P.S2: See the image below to get an idea of what's happening. It is a 6*6 image divided into 9 pieces (factor = 3)
If you have an array of shape (K * M, K * N), you can transform it into something of shape (K * K, M, N) using reshape and transpose. For example, if you have K = M = N = 3, you want to transform
>>> a = np.arange(81).reshape(9, 9)
into
[[[ 0,  1,  2],
  [ 9, 10, 11],
  [18, 19, 20]],

 [[ 3,  4,  5],
  [12, 13, 14],
  [21, 22, 23]],

 [[ 6,  7,  8],
  [15, 16, 17],
  [24, 25, 26]],

 ...
]]]
The idea is that you need to get the elements lined up in memory in the order shown here (i.e. 0, 1, 2, 9, 10, 11, 18, ...). You can do this by adding the appropriate auxiliary dimensions and transposing:
b = a.reshape(K, M, K, N)
c = b.transpose(0, 2, 1, 3)
d = c.reshape(-1, M, N)
As a one-liner:
a.reshape(K, M, K, N).transpose(0, 2, 1, 3).reshape(-1, M, N)
The order of the transpose determines the order of the blocks. The first two dimensions, 0, 2, represent the fact that your inner loop iterates the columns faster than the rows. If you wanted to arrange the blocks by column (iterate the rows faster), you could do
c = b.transpose(2, 0, 1, 3)
Reshaping alone does not change the order of the elements in memory; it is the transpose that changes the logical order, and the final reshape then copies the data because the transposed view is no longer contiguous.
In your particular example, K = 3 and M = N = 10. The code above does not change in any way besides that.
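For instance, under the question's sizes (a 30x30 image cut into a 3x3 grid of 10x10 blocks), the sketch would be:
import numpy as np

img = np.arange(30 * 30).reshape(30, 30)  # stand-in for the 30x30 image
K, M, N = 3, 10, 10

secs = img.reshape(K, M, K, N).transpose(0, 2, 1, 3).reshape(-1, M, N)
print(secs.shape)  # (9, 10, 10)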
As an aside, your loops can be improved by making the ranges run directly over the indices you want rather than over auxiliary quantities, as well as pre-allocating the output:
result = np.zeros((factor * factor, pix_num, pix_num))
n = 0
for r in range(0, img.shape[0], pix_num):
    for c in range(0, img.shape[1], pix_num):
        result[n, :, :] = img[r:r + pix_num, c:c + pix_num]
        n += 1
a = np.arange(36)
a.resize(6, 6)
print(a)
b = list(map(lambda x: np.array_split(x, 3, axis=1), np.array_split(a, 3, axis=0)))
print(np.array(b).reshape(9,2,2))
[[ 0  1  2  3  4  5]
 [ 6  7  8  9 10 11]
 [12 13 14 15 16 17]
 [18 19 20 21 22 23]
 [24 25 26 27 28 29]
 [30 31 32 33 34 35]]
[[[ 0  1]
  [ 6  7]]

 [[ 2  3]
  [ 8  9]]

 [[ 4  5]
  [10 11]]

 [[12 13]
  [18 19]]

 [[14 15]
  [20 21]]

 [[16 17]
  [22 23]]

 [[24 25]
  [30 31]]

 [[26 27]
  [32 33]]

 [[28 29]
  [34 35]]]
You can do something like this to get one section:
sec = img[:10]
sec = list(zip(*sec))[:10]
sec = list(zip(*sec))
This will pick out the first 10x10 section.

Subtracting minimum of row from the row

I know that
a - a.min(axis=0)
will subtract the minimum of each column from every element in the column. I want to subtract the minimum in each row from every element in the row. I know that
a.min(axis=1)
specifies the minimum within a row, but how do I tell the subtraction to go by rows instead of columns? (How do I specify the axis of the subtraction?)
edit: For my question, a is a 2d array in NumPy.
Assuming a is a numpy array, you can use this:
new_a = a - np.min(a, axis=1)[:,None]
Try it out:
import numpy as np
a = np.arange(24).reshape((4,6))
print (a)
new_a = a - np.min(a, axis=1)[:,None]
print (new_a)
Result:
[[ 0  1  2  3  4  5]
 [ 6  7  8  9 10 11]
 [12 13 14 15 16 17]
 [18 19 20 21 22 23]]
[[0 1 2 3 4 5]
 [0 1 2 3 4 5]
 [0 1 2 3 4 5]
 [0 1 2 3 4 5]]
Note that np.min(a, axis=1) returns a 1d array of row-wise minimum values.
We then add an extra dimension to it using [:,None]. It then looks like this 2d array:
array([[ 0],
       [ 6],
       [12],
       [18]])
When this 2d array participates in the subtraction, it gets broadcast to a shape of (4,6), which looks like this:
array([[ 0,  0,  0,  0,  0,  0],
       [ 6,  6,  6,  6,  6,  6],
       [12, 12, 12, 12, 12, 12],
       [18, 18, 18, 18, 18, 18]])
Now, element-wise subtraction happens between the two (4,6) arrays.
Specify keepdims=True to preserve a length-1 dimension in place of the dimension that min collapses, allowing broadcasting to work out naturally:
a - a.min(axis=1, keepdims=True)
This is especially convenient when the axis is determined at runtime, but it is arguably clearer than manually reintroducing the collapsed dimension even when the axis is fixed.
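For example, the same line handles whichever axis is chosen at runtime:
import numpy as np

a = np.arange(24).reshape(4, 6)
for axis in (0, 1):
    shifted = a - a.min(axis=axis, keepdims=True)
    print(axis, shifted.shape)  # (4, 6) in both cases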
If you want to use only pandas, you can apply a lambda that subtracts min(row) row-wise, column by column:
new_df = pd.DataFrame()
for i, col in enumerate(df.columns):
    new_df[col] = df.apply(lambda row: row.iloc[i] - min(row), axis=1)
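A vectorized alternative (a sketch, if the explicit loop is not required) is DataFrame.sub, which aligns the row minima on the index:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(24).reshape(4, 6))
new_df = df.sub(df.min(axis=1), axis=0)  # subtract each row's minimum from that row
print(new_df)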

concatenate images from a matrix of images

I have 4 images, each have width and height of 8. They belong inside a vector with shape [4,8,8]. I reshape the vector of images to become a matrix of images with shape [2,2,8,8].
How can I concatenate the images from inside the matrix to produce a single image so that the shape becomes [16,16]? I want the images to be concatenated so that their x,y position from the matrix are maintained - essentially just stitching separate images together into a single image.
I have a feeling this could easily be done in numpy, maybe even tensorflow, but I'm open to any nice solution in python.
You can use numpy.concatenate with different axes. Here is a reduced example using 4 images of shape [2, 2], which produces a [4, 4] resulting image:
import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
c = np.array([[9, 10], [11, 12]])
d = np.array([[13, 14], [15, 16]])
ab = np.concatenate((a, b), axis=1)
cd = np.concatenate((c, d), axis=1)
abcd = np.concatenate((ab, cd), axis=0)
>>> print(abcd)
array([[ 1,  2,  5,  6],
       [ 3,  4,  7,  8],
       [ 9, 10, 13, 14],
       [11, 12, 15, 16]])
>>> print(abcd.shape)
(4, 4)
Just adapt this code to yours: instead of using a, b, c, d, concatenate the images along the first dimension of your tensor, with something similar to np.concatenate((t[0], t[1]), axis=1), where t is your tensor of shape [4, 8, 8].
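For the shapes in the question, that adaptation could look roughly like this (a sketch; t is assumed to hold the four 8x8 images in row-major grid order):
import numpy as np

t = np.arange(4 * 8 * 8).reshape(4, 8, 8)      # stand-in for the [4, 8, 8] tensor
top = np.concatenate((t[0], t[1]), axis=1)     # first row of the 2x2 grid
bottom = np.concatenate((t[2], t[3]), axis=1)  # second row of the 2x2 grid
stitched = np.concatenate((top, bottom), axis=0)
print(stitched.shape)  # (16, 16)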
Otherwise, as other answers suggested, you can apply the numpy.hstack function twice, but I think its behaviour is not as easy to read, even though it is less code.
You can use np.hstack twice like so (slightly smaller arrays to make them printable):
import numpy as np
original = np.array([[np.arange(16).reshape(4,-1)]*2]*2)
combined = np.hstack(np.hstack(original))
print(combined)
which gives:
[[ 0  1  2  3  0  1  2  3]
 [ 4  5  6  7  4  5  6  7]
 [ 8  9 10 11  8  9 10 11]
 [12 13 14 15 12 13 14 15]
 [ 0  1  2  3  0  1  2  3]
 [ 4  5  6  7  4  5  6  7]
 [ 8  9 10 11  8  9 10 11]
 [12 13 14 15 12 13 14 15]]
I wouldn't recommend doing this, because it's hardly readable, but just for the sake of numpy exercise you can make a grid of images with just reshape() and transpose() methods.
import numpy as np
w_img = 3
h_img = 2
n_img = 4
images = (np.array([a + b for a in 'abcd' for b in '123456'])
          .reshape(n_img, h_img, w_img))
# A batch of 4 images 3x2
print(images)
# [[['a1' 'a2' 'a3']
#   ['a4' 'a5' 'a6']]
#
#  [['b1' 'b2' 'b3']
#   ['b4' 'b5' 'b6']]
#
#  [['c1' 'c2' 'c3']
#   ['c4' 'c5' 'c6']]
#
#  [['d1' 'd2' 'd3']
#   ['d4' 'd5' 'd6']]]
# Making 2x2 grid
w_grid = 2
h_grid = 2
grid = images.reshape(h_grid, w_grid, h_img, w_img) # axes: h_grid, w_grid, h_img, w_img
grid = grid.transpose([0, 2, 1, 3]) # axes: h_grid, h_img, w_grid, w_img
grid = grid.reshape(4, 6) # axes: (h_grid * h_img), (w_grid * w_img)
print(grid)
#[['a1' 'a2' 'a3' 'b1' 'b2' 'b3']
# ['a4' 'a5' 'a6' 'b4' 'b5' 'b6']
# ['c1' 'c2' 'c3' 'd1' 'd2' 'd3']
# ['c4' 'c5' 'c6' 'd4' 'd5' 'd6']]
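Applied to the question's [2, 2, 8, 8] matrix of images, the same reshape/transpose pattern would be (a sketch):
import numpy as np

imgs = np.arange(2 * 2 * 8 * 8).reshape(2, 2, 8, 8)  # stand-in for the image matrix
big = imgs.transpose(0, 2, 1, 3).reshape(16, 16)     # grid rows/cols interleaved with image rows/cols
print(big.shape)  # (16, 16)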

Intersect multiple 2D np arrays for determining zones

Using this small reproducible example, I've so far been unable to generate a new integer array from 3 arrays that contains unique groupings across all three input arrays.
The arrays are related to topographic properties:
import numpy as np
asp = np.array([8,1,1,2,7,8,2,3,7,6,4,3,6,5,5,4]).reshape((4,4)) #aspect
slp = np.array([9,10,10,9,9,12,12,9,10,11,11,9,9,9,9,9]).reshape((4,4)) #slope
elv = np.array([13,14,14,13,14,15,16,14,14,15,16,14,13,14,14,13]).reshape((4,4)) #elevation
The idea is that the geographic contours are broken into 3 different properties using GIS routines:
1-8 for aspect (1=north facing, 2=northeast facing, etc.)
9-12 for slope (9=gentle slope...12=steepest slope)
13-16 for elevation (13=lowest elevations...16=highest elevations)
The small graphic below attempts to depict the kind of result I'm after (array shown in lower left). Note, the "answer" given in the graphic is but one possible answer. I'm not concerned about the final arrangement of integers in the resulting array so long as the final array contains an integer at each row/column index that identifies unique groupings.
For example, the array indexes at [0,1] and [0,2] have the same aspect, slope, and elevation and therefore receive the same integer identifier in the resulting array.
Does numpy have a built in routine for this kind of thing?
Each location in the grid is associated with a tuple composed of one value from
asp, slp and elv. For example, the upper left corner has tuple (8,9,13).
We would like to map this tuple to a number which uniquely identifies this tuple.
One way to do that would be to think of (8,9,13) as the index into the 3D array
np.arange(9*13*17).reshape(9,13,17). This particular array was chosen
to accommodate the largest values in asp, slp and elv:
In [107]: asp.max()+1
Out[107]: 9
In [108]: slp.max()+1
Out[108]: 13
In [110]: elv.max()+1
Out[110]: 17
Now we can map the tuple (8,9,13) to the number 1934:
In [113]: x = np.arange(9*13*17).reshape(9,13,17)
In [114]: x[8,9,13]
Out[114]: 1934
If we do this for each location in the grid, then we get a unique number for each location.
We could end right here, letting these unique numbers serve as labels.
Or, we can generate smaller integer labels (starting at 0 and increasing by 1)
by using np.unique with
return_inverse=True:
uniqs, labels = np.unique(vals, return_inverse=True)
labels = labels.reshape(vals.shape)
So, for example,
import numpy as np
asp = np.array([8,1,1,2,7,8,2,3,7,6,4,3,6,5,5,4]).reshape((4,4)) #aspect
slp = np.array([9,10,10,9,9,12,12,9,10,11,11,9,9,9,9,9]).reshape((4,4)) #slope
elv = np.array([13,14,14,13,14,15,16,14,14,15,16,14,13,14,14,13]).reshape((4,4)) #elevation
x = np.arange(9*13*17).reshape(9,13,17)
vals = x[asp, slp, elv]
uniqs, labels = np.unique(vals, return_inverse=True)
labels = labels.reshape(vals.shape)
yields
array([[11,  0,  0,  1],
       [ 9, 12,  2,  3],
       [10,  8,  5,  3],
       [ 7,  6,  6,  4]])
The above method works fine as long as the values in asp, slp and elv are small integers. If the integers were too large, the product of their maximums could overflow the maximum allowable value one can pass to np.arange. Moreover, generating such a large array would be inefficient.
If the values were floats, then they could not be interpreted as indices into the 3D array x.
So to address these problems, use np.unique to convert the values in asp, slp and elv to unique integer labels first:
indices = [ np.unique(arr, return_inverse=True)[1].reshape(arr.shape) for arr in [asp, slp, elv] ]
M = np.array([item.max()+1 for item in indices])
x = np.arange(M.prod()).reshape(M)
vals = x[tuple(indices)]
uniqs, labels = np.unique(vals, return_inverse=True)
labels = labels.reshape(vals.shape)
which yields the same result as shown above, but works even if asp, slp, elv were floats and/or large integers.
Finally, we can avoid the generation of np.arange:
x = np.arange(M.prod()).reshape(M)
vals = x[tuple(indices)]
by computing vals as a product of indices and strides:
M = np.r_[1, M[:-1]]
strides = M.cumprod()
indices = np.stack(indices, axis=-1)
vals = (indices * strides).sum(axis=-1)
So putting it all together:
import numpy as np
asp = np.array([8,1,1,2,7,8,2,3,7,6,4,3,6,5,5,4]).reshape((4,4)) #aspect
slp = np.array([9,10,10,9,9,12,12,9,10,11,11,9,9,9,9,9]).reshape((4,4)) #slope
elv = np.array([13,14,14,13,14,15,16,14,14,15,16,14,13,14,14,13]).reshape((4,4)) #elevation
def find_labels(*arrs):
    indices = [np.unique(arr, return_inverse=True)[1] for arr in arrs]
    M = np.array([item.max()+1 for item in indices])
    M = np.r_[1, M[:-1]]
    strides = M.cumprod()
    indices = np.stack(indices, axis=-1)
    vals = (indices * strides).sum(axis=-1)
    uniqs, labels = np.unique(vals, return_inverse=True)
    labels = labels.reshape(arrs[0].shape)
    return labels

print(find_labels(asp, slp, elv))
# [[ 3  7  7  0]
#  [ 6 10 12  4]
#  [ 8  9 11  4]
#  [ 2  5  5  1]]
This can be done using numpy.unique() and then a mapping like:
Code:
combined = 10000 * asp + 100 * slp + elv
unique = dict(((v, i + 1) for i, v in enumerate(np.unique(combined))))
combined_unique = np.vectorize(unique.get)(combined)
Test Code:
import numpy as np
asp = np.array([8, 1, 1, 2, 7, 8, 2, 3, 7, 6, 4, 3, 6, 5, 5, 4]).reshape((4, 4)) # aspect
slp = np.array([9, 10, 10, 9, 9, 12, 12, 9, 10, 11, 11, 9, 9, 9, 9, 9]).reshape((4, 4)) # slope
elv = np.array([13, 14, 14, 13, 14, 15, 16, 14, 14, 15, 16, 14, 13, 14, 14, 13]).reshape((4, 4))
combined = 10000 * asp + 100 * slp + elv
unique = dict(((v, i + 1) for i, v in enumerate(np.unique(combined))))
combined_unique = np.vectorize(unique.get)(combined)
print(combined_unique)
Results:
[[12  1  1  2]
 [10 13  3  4]
 [11  9  6  4]
 [ 8  7  7  5]]
This seems like a similar problem to labeling unique regions in an image. This is a function I've written to do this, though you would first need to concatenate your 3 arrays to 1 3D array.
import numpy
import scipy.ndimage

def labelPix(pix):
    height, width, _ = pix.shape
    pixRows = numpy.reshape(pix, (height * width, 3))
    unique, counts = numpy.unique(pixRows, return_counts=True, axis=0)
    unique = [list(elem) for elem in unique]
    labeledPix = numpy.zeros((height, width), dtype=int)
    offset = 0
    for index, zoneArray in enumerate(unique):
        index += offset
        zone = list(zoneArray)
        zoneArea = (pix == zone).all(-1)
        elementsArray, numElements = scipy.ndimage.label(zoneArea)
        elementsArray[elementsArray != 0] += offset
        labeledPix[elementsArray != 0] = elementsArray[elementsArray != 0]
        offset += numElements
    return labeledPix
This will label unique 3-value combinations, while also assigning separate labels to zones which have the same 3-value combination, but are not in contact with one another.
asp = numpy.array([8,1,1,2,7,8,2,3,7,6,4,3,6,5,5,4]).reshape((4,4)) #aspect
slp = numpy.array([9,10,10,9,9,12,12,9,10,11,11,9,9,9,9,9]).reshape((4,4)) #slope
elv = numpy.array([13,14,14,13,14,15,16,14,14,15,16,14,13,14,14,13]).reshape((4,4)) #elevation
pix = numpy.zeros((4,4,3))
pix[:,:,0] = asp
pix[:,:,1] = slp
pix[:,:,2] = elv
print(labelPix(pix))
returns:
[[ 0  1  1  2]
 [10 12  3  4]
 [11  9  6  4]
 [ 8  7  7  5]]
Here's a plain Python technique using itertools.groupby. It requires the input to be 1D lists, but that shouldn't be a major issue. The strategy is to zip the lists together, along with an index number, then sort the resulting columns. We then group identical columns together, ignoring the index number when comparing columns. Then we gather the index numbers from each group, and use them to build the final output list.
from itertools import groupby
def show(label, seq):
    print(label, ' '.join(['{:2}'.format(u) for u in seq]))
asp = [8, 1, 1, 2, 7, 8, 2, 3, 7, 6, 4, 3, 6, 5, 5, 4]
slp = [9, 10, 10, 9, 9, 12, 12, 9, 10, 11, 11, 9, 9, 9, 9, 9]
elv = [13, 14, 14, 13, 14, 15, 16, 14, 14, 15, 16, 14, 13, 14, 14, 13]
size = len(asp)
a = sorted(zip(asp, slp, elv, range(size)))
groups = sorted([u[-1] for u in g] for _, g in groupby(a, key=lambda t:t[:-1]))
final = [0] * size
for i, g in enumerate(groups, 1):
    for j in g:
        final[j] = i
show('asp', asp)
show('slp', slp)
show('elv', elv)
show('out', final)
output
asp  8  1  1  2  7  8  2  3  7  6  4  3  6  5  5  4
slp  9 10 10  9  9 12 12  9 10 11 11  9  9  9  9  9
elv 13 14 14 13 14 15 16 14 14 15 16 14 13 14 14 13
out  1  2  2  3  4  5  6  7  8  9 10  7 11 12 12 13
There's no need to do that second sort; we could just use a plain list comp
groups = [[u[-1] for u in g] for _, g in groupby(a, key=lambda t:t[:-1])]
or generator expression
groups = ([u[-1] for u in g] for _, g in groupby(a, key=lambda t:t[:-1]))
I only did it so that my output matches the output in the question.
Here's one way to solve this problem using a dictionary based lookup.
import numpy as np

group_dict = {}
labels = []
# iterate the three grids together, assigning a new label the first time a
# given (aspect, slope, elevation) combination is seen
for a, s, e in np.nditer((asp, slp, elv)):
    key = (a.item(), s.item(), e.item())
    if key not in group_dict:
        group_dict[key] = len(group_dict) + 1
    labels.append(group_dict[key])

result = np.array(labels).reshape(asp.shape)
print(result)
# result
# [[ 1  2  2  3]
#  [ 4  5  6  7]
#  [ 8  9 10  7]
#  [11 12 12 13]]

Slicing python matrix into quadrants

Suppose I have the following matrix in python:
[[1,2,3,4],
[5,6,7,8],
[9,10,11,12],
[13,14,15,16]]
I want to slice it into the following matrices (or quadrants/corners):
[[1,2], [5,6]]
[[3,4], [7,8]]
[[9,10], [13,14]]
[[11,12], [15,16]]
Is this supported with standard slicing operators in python or is it necessary to use an extended library like numpy?
If you are always working with a 4x4 matrix:
a = [[ 1,  2,  3,  4],
     [ 5,  6,  7,  8],
     [ 9, 10, 11, 12],
     [13, 14, 15, 16]]
top_left = [a[0][:2], a[1][:2]]
top_right = [a[0][2:], a[1][2:]]
bot_left = [a[2][:2], a[3][:2]]
bot_right = [a[2][2:], a[3][2:]]
You could also do the same for an arbitrary size matrix:
h = len(a)
w = len(a[1])
top_left = [a[i][:w // 2] for i in range(h // 2)]
top_right = [a[i][w // 2:] for i in range(h // 2)]
bot_left = [a[i][:w // 2] for i in range(h // 2, h)]
bot_right = [a[i][w // 2:] for i in range(h // 2, h)]
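If NumPy is available, the same quadrants also fall out of plain 2D slicing (a quick sketch):
import numpy as np

a = np.array([[ 1,  2,  3,  4],
              [ 5,  6,  7,  8],
              [ 9, 10, 11, 12],
              [13, 14, 15, 16]])
h, w = a.shape
top_left = a[:h // 2, :w // 2]
top_right = a[:h // 2, w // 2:]
bot_left = a[h // 2:, :w // 2]
bot_right = a[h // 2:, w // 2:]
print(top_right)  # [[3 4]
                  #  [7 8]]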
The question is already answered, but I think this solution is more general.
You can also use numpy.split and a list comprehension in the following way:
import numpy as np
A = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
B = [M for SubA in np.split(A,2, axis = 0) for M in np.split(SubA,2, axis = 1)]
Getting:
>>>[array([[1, 2],[5, 6]]),
array([[3, 4],[7, 8]]),
array([[ 9, 10],[13, 14]]),
array([[11, 12],[15, 16]])]
Now if you want to have them assigned into different variables, just:
C1,C2,C3,C4 = B
Have a look to numpy.split doc.
Changing the parameter indices_or_sections you can get a higher number of splits.
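For example, a quick sketch splitting an 8x8 array into a 4x4 grid of 2x2 blocks:
import numpy as np

A = np.arange(64).reshape(8, 8)
B = [M for SubA in np.split(A, 4, axis=0) for M in np.split(SubA, 4, axis=1)]
print(len(B), B[0].shape)  # 16 (2, 2)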
>>> a = [[1,2,3,4], [5,6,7,8], [9,10,11,12], [13,14,15,16]]
>>> x = list(map(lambda row: row[:2], a))
>>> x
[[1, 2], [5, 6], [9, 10], [13, 14]]
>>> y = list(map(lambda row: row[2:], a))
>>> y
[[3, 4], [7, 8], [11, 12], [15, 16]]
>>> x[:2] + y[:2] + x[2:] + y[2:]
[[1, 2], [5, 6], [3, 4], [7, 8], [9, 10], [13, 14], [11, 12], [15, 16]]
Although the other answers provide the required solution, they are not applicable to arrays of different sizes. If you have a NumPy array of size (6x7), these methods will not produce a solution. I have prepared a solution for myself and want to share it here.
Note: In my solution, there will be overlaps due to the different axis sizes.
I have created this solution to divide an astronomical image into four quadrants. I, then, use these quadrants to calculate the mean and median in an annulus.
import numpy as np
def quadrant_split2d(array):
    """Example function for identifying the elements of quadrants in an array.
    array:
        A 2D NumPy array.
    Returns:
        The quadrants. True for the members of the quadrants, False otherwise.
    """
    Ysize = array.shape[0]
    Xsize = array.shape[1]
    y, x = np.indices((Ysize, Xsize))
    if not (Xsize == Ysize) & (Xsize % 2 == 0):
        print('There will be overlaps')
    sector1 = (x < Xsize/2) & (y < Ysize/2)
    sector2 = (x > Xsize/2 - 1) & (y < Ysize/2)
    sector3 = (x < Xsize/2) & (y > Ysize/2 - 1)
    sector4 = (x > Xsize/2 - 1) & (y > Ysize/2 - 1)
    sectors = (sector1, sector2, sector3, sector4)
    return sectors
You can test the function with different types of arrays.
For example:
test = np.arange(42).reshape(6,7)
print ('Original array:\n', test)
sectors = quadrant_split2d(test)
print ('Sectors:')
for ss in sectors: print (test[ss])
This will give us the following sectors:
[ 0 1 2 3 7 8 9 10 14 15 16 17]
[ 3 4 5 6 10 11 12 13 17 18 19 20]
[21 22 23 24 28 29 30 31 35 36 37 38]
[24 25 26 27 31 32 33 34 38 39 40 41]
