Python numpy: reshape list into repeating 2D array

I'm new to python and I have a question about numpy.reshape. I currently have 2 lists of values like this:
x = [0,1,2,3]
y = [4,5,6,7]
And I want them to be in separate 2D arrays, where each item is repeated for the length of the original lists, like this:
xx = [[0,0,0,0]
[1,1,1,1]
[2,2,2,2]
[3,3,3,3]]
yy = [[4,5,6,7]
[4,5,6,7]
[4,5,6,7]
[4,5,6,7]]
Is there a way to do this with numpy.reshape, or is there a better method I could use? I would very much appreciate a detailed explanation. Thanks!

numpy.meshgrid will do this for you.
N.B. From your requested output, it looks like you want 'ij' indexing, not the default 'xy'.
from numpy import meshgrid
x = [0,1,2,3]
y = [4,5,6,7]
xx, yy = meshgrid(x, y, indexing='ij')
print(xx)
[[0 0 0 0]
[1 1 1 1]
[2 2 2 2]
[3 3 3 3]]
print(yy)
[[4 5 6 7]
[4 5 6 7]
[4 5 6 7]
[4 5 6 7]]
For reference, here's xy indexing
xx, yy = meshgrid(x, y, indexing='xy')
print(xx)
[[0 1 2 3]
[0 1 2 3]
[0 1 2 3]
[0 1 2 3]]
print(yy)
[[4 4 4 4]
[5 5 5 5]
[6 6 6 6]
[7 7 7 7]]
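If you'd rather build the two arrays directly (closer to the reshape idea in the question), np.repeat and np.tile give the same result; a small sketch, assuming the inputs are first converted to NumPy arrays:
import numpy as np
x = np.array([0, 1, 2, 3])
y = np.array([4, 5, 6, 7])
xx = np.repeat(x, len(y)).reshape(len(x), len(y))  # each x value repeated across a row
yy = np.tile(y, (len(x), 1))                       # y repeated as rows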

Related

I have a 2d array. I need a loop that replaces the first 2 rows with ones, then the next 2 rows, and so on, printing the array after each replacement until the loop ends

b = np.random.randint(0,10, (6,3))
I tried this code, but it gives `ValueError: operands could not be broadcast together with shapes (2,3) () (6,3)`:
step = 2
r1 = 0
r2 = 2
while r2 <= len(b):
    c = np.where(b[r1:r2] >= 0, 1, b)
    print(c)
    r1+ = step
    r2+ = step
I think the problem is in the condition of np.where: it creates an array with a shape that is incompatible with the b array.
What I need is for the code to take array b and return 3 arrays of the same size as b, but with two rows substituted by 1's, like this:
[[1 1 1]
[1 1 1]
[6 3 4]
[2 9 3]
[6 9 2]
[8 1 0]]
[[3 2 8]
[3 8 5]
[1 1 1]
[1 1 1]
[6 9 2]
[8 1 0]]
[[3 2 8]
[3 8 5]
[6 3 4]
[2 9 3]
[1 1 1]
[1 1 1]]
My tutor told me to try it with the np.where function, but it seems that this function doesn't support the type of condition I'm trying to feed to it. Maybe there is another way to get the desired output. All examples I googled work with random values of the array, not with specific rows. In pandas it is easier, but I need NumPy code to feed the output to a neural network. The ones will be treated by it as empty values, but the size of the array will always stay the same, thus not producing errors.
You are getting a ValueError because the shape of b[0:2] is not the same as the shape of b.
print(b.shape)
# (6, 3)
print(b[0:2].shape)
# (2, 3)
The documentation for numpy.where states that the way the condition works is "Where True, yield x, otherwise yield y." Thus, you need to be able to broadcast x and y onto the shape of your condition. In your example, you can't broadcast (6,3) onto (2,3), hence the error.
You need the shapes to be compatible. For example, c = np.where(b[0:2] >= 0, 1, b[0:2]) would not give you an error.
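As a quick illustration of that broadcasting rule (a minimal sketch with a small example array, not from the original post):
import numpy as np
b = np.arange(18).reshape(6, 3)
# condition has shape (2,3), but the "otherwise" array has shape (6,3):
# np.where(b[0:2] >= 0, 1, b)        # ValueError: shapes cannot be broadcast together
np.where(b[0:2] >= 0, 1, b[0:2])     # fine: condition, 1 and b[0:2] all broadcast to (2,3)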
However, if you want to step through your array b, then you need something other than b[0:2]. Otherwise it will just keep repeating that first part of your array. I think you probably want b[r1:r2].
Also, I notice that you have r1+ = step instead of r1 += step, which will also spit out an error. Note that you don't actually need both r1 and r2 since their offset is step.
Putting all this together, we can adjust your code to give you something that works:
import numpy as np
b = np.random.randint(0,5, (6,3))
step = 2
r1 = 0
while r1 <= len(b) - step:
    c = np.copy(b)
    c[r1:r1+step] = np.where(b[r1:r1+step] >= 0, 1, b[r1:r1+step])
    print(c)
    r1 += step
Or you could do it with a for loop instead of a while loop:
import numpy as np
b = np.random.randint(0,5, (6,3))
step = 2
for r1 in range(0, len(b), step):
    c = np.copy(b)
    c[r1:r1+step] = np.where(b[r1:r1+step] >= 0, 1, b[r1:r1+step])
    print(c)
Resulting output:
[[1 1 1]
[1 1 1]
[3 2 2]
[1 1 2]
[3 3 0]
[3 2 2]]
[[4 0 2]
[4 0 0]
[1 1 1]
[1 1 1]
[3 3 0]
[3 2 2]]
[[4 0 2]
[4 0 0]
[3 2 2]
[1 1 2]
[1 1 1]
[1 1 1]]
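As a side note, since np.random.randint(0, 5, ...) only produces non-negative values, the condition >= 0 is always True, so the np.where call is equivalent to assigning 1 directly; a slightly simpler sketch of the same loop:
import numpy as np
b = np.random.randint(0, 5, (6, 3))
step = 2
for r1 in range(0, len(b), step):
    c = np.copy(b)
    c[r1:r1+step] = 1   # overwrite the current block of rows with ones
    print(c)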

An efficient way to concatenate rows of a 2-dim array according to a given list of pairs of indexes

Suppose I have a 2 dimensional array with a very large number of rows, and a list of pairs of indexes of that array. I want to create a new 2 dim array, whose rows are concatenations of the rows of the original array, made according to the list of pairs of indexes. For example:
a =
1 2 3
4 5 6
7 8 9
0 0 0
indexes = [[0,0], [0,1], [2,3]]
the returned array should be:
1 2 3 1 2 3
1 2 3 4 5 6
7 8 9 0 0 0
Obviously I can iterate the list of indexes, but my question is whether there is a more efficient way of doing this. I should say that the list of indexes is also very large.
First convert indexes to a Numpy array:
ind = np.array(indexes)
Then generate your result as:
result = np.concatenate([a[ind[:,0]], a[ind[:,1]]], axis=1)
The result is:
array([[1, 2, 3, 1, 2, 3],
[1, 2, 3, 4, 5, 6],
[7, 8, 9, 0, 0, 0]])
Another possible formula (with the same result):
result = np.concatenate([ a[ind[:,i]] for i in range(ind.shape[1]) ], axis=1)
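As a quick usage check (a sketch reusing the example data, with a hypothetical index list of triples), the general formula also works when each index tuple has more than two entries:
import numpy as np
a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9],
              [0, 0, 0]])
ind = np.array([[0, 0, 1],
                [2, 3, 0]])   # hypothetical triples of row indices
result = np.concatenate([a[ind[:, i]] for i in range(ind.shape[1])], axis=1)
# result.shape is (2, 9): three rows of a concatenated per output row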
You can do this in one line using NumPy as:
a = np.arange(12).reshape(4, 3)
print(a)
b = [[0, 0], [1, 1], [2, 3]]
b = np.array(b)
print(b)
c = a[b.reshape(-1)].reshape(-1, a.shape[1]*b.shape[1])
print(c)
'''
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
[[0 0]
[1 1]
[2 3]]
[[ 0 1 2 0 1 2]
[ 3 4 5 3 4 5]
[ 6 7 8 9 10 11]]
'''
You can use horizontal stacking with np.hstack:
c = np.array(indexes)
np.hstack((a[c[:,0]],a[c[:,1]]))
output:
[[1 2 3 1 2 3]
[1 2 3 4 5 6]
[7 8 9 0 0 0]]

(Inverse-) Sorting 2d numpy array column-wise

The following code sorts a 2d numpy array column-wise and then restores the original order.
import numpy as np
#Column-wise sort and inverse sort of image (2d array)
nrows = 10
ncols = 5
a = np.random.randint(nrows, size=(nrows, ncols))
a_sorted = np.sort(a, axis=0)
ori_indices = np.zeros_like(a)
for c in range(ncols):
    ori_indices[:,c] = np.argsort(np.argsort(a[:,c]))
#Do some work on sorted array, like e.g row-wise filtering
#After processing sorted array, move it back to original order
a_backsorted = np.zeros_like(a)
for c in range(ncols):
    a_backsorted[:,c] = a_sorted[:,c][ori_indices[:,c]]
print (a); print ()
print (a_backsorted); print ()
print (a_sorted); print ()
The code works as is, but I guess there is a more efficient implementation without the for loops (using fancy indexing).
You can try a_sorted[::-1] to reverse the array
print (a_sorted); print ()
print (a_sorted[::-1])
[[0 0 0 2 0]
[2 0 0 2 2]
[4 0 2 6 4]
[4 2 3 7 5]
[4 4 4 7 6]
[5 5 4 8 7]
[6 5 4 8 7]
[7 6 8 9 8]
[8 7 9 9 9]
[8 8 9 9 9]]
[[8 8 9 9 9]
[8 7 9 9 9]
[7 6 8 9 8]
[6 5 4 8 7]
[5 5 4 8 7]
[4 4 4 7 6]
[4 2 3 7 5]
[4 0 2 6 4]
[2 0 0 2 2]
[0 0 0 2 0]]
#Column-wise sort and inverse sort of image (2d array)
import numpy as np
#Define random array and sort it
nrows = 10
ncols = 5
a = np.random.randint(nrows, size=(nrows, ncols))
a_sorted = np.sort(a, axis=0)
#Save original order of columns
ori_indices = np.argsort(np.argsort(a, axis=0), axis=0)
#Do some work on sorted array, like e.g row-wise filtering.
#....
#After processing sorted array, move it back to original order:
c = np.array([[i] for i in range(ncols)]).T
a_backsorted = a_sorted[ori_indices, c]
#Check results
print (a); print ()
print (a_backsorted); print ()
print (a_sorted); print ()
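A minor variation (not from the original answer): the column-index helper c can also be written with np.arange, which broadcasts against ori_indices in the same way:
ci = np.arange(ncols)                      # shape (ncols,), broadcasts over the rows
a_backsorted = a_sorted[ori_indices, ci]   # same result as using c above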
import numpy as np
nrows = 10; ncols = 5
a = np.random.randint(nrows, size=(nrows, ncols))
a_sorted = np.sort(a, axis=0)
a_backsorted = np.zeros_like(a)
c = np.array([[i] for i in range(ncols)]).T
a_backsorted[np.argsort(a, axis=0), c] = a_sorted
The reverting of the column-wise sorting is done by inserting the values of the sorted array at the argsorted positions in the backsorted array. Since this is done column-wise, the argsorted positions are paired with the column indices held in the c array.
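If your NumPy version has np.take_along_axis (1.15 or newer), the whole sort/back-sort round trip can be written without the column-index helper; a short sketch along those lines:
import numpy as np
a = np.random.randint(10, size=(10, 5))
order = np.argsort(a, axis=0)                    # column-wise sort order
a_sorted = np.take_along_axis(a, order, axis=0)  # same as np.sort(a, axis=0)
#Do some work on sorted array...
inv = np.argsort(order, axis=0)                  # inverse permutation per column
a_backsorted = np.take_along_axis(a_sorted, inv, axis=0)
assert (a_backsorted == a).all()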

How to get indexes of top 2 values of each row in a 2-D numpy array, but with a specific area excluded?

I have a 2-D array for example:
p = np.array([[21,2,3,1,12,13],
[4,5,6,14,15,16],
[7,8,9,17,18,19]])
b = np.argpartition(p, np.argmin(p, axis=1))[:, -2:]
com = np.ones([3,6], dtype=int)
com[np.arange(com.shape[0])[:,None],b] = 0
print(com)
b holds the indices of the top 2 values of each row in p:
b = [[0 5]
[4 5]
[4 5]]
com is an array of ones with the same shape as p; the elements whose indices appear in b are set to 0.
So the result is:
com = [[0 1 1 1 1 0]
[1 1 1 1 0 0]
[1 1 1 1 0 0]]
Now I have one more constraint:
p[0:2,0:2] = [[21 2]
[4 5]]
The numbers in this area [0:2, 0:2] should not be considered, so the result should be:
b = [[4 5]
[4 5]
[4 5]]
com = [[1 1 1 1 0 0]
[1 1 1 1 0 0]
[1 1 1 1 0 0]]
How can I do this? Should I use a mask or something similar?
Thanks in advance!
Just set the values in that slice to a low value, ensuring that they won't be among the two largest, and then use argpartition:
out = p.astype(float)      # cast to float so -np.inf can be assigned
out[0:2,0:2] = -np.inf
np.argpartition(out, [-2,-1])[:, -2:]
array([[4, 5],
[4, 5],
[4, 5]])
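To also rebuild the com mask from the question with the excluded block applied, one possible sketch (reusing p from the question):
import numpy as np
p = np.array([[21, 2, 3,  1, 12, 13],
              [ 4, 5, 6, 14, 15, 16],
              [ 7, 8, 9, 17, 18, 19]])
out = p.astype(float)                        # cast to float so -np.inf can be assigned
out[0:2, 0:2] = -np.inf                      # exclude the top-left 2x2 block
b = np.argpartition(out, [-2, -1])[:, -2:]   # indices of the two largest values per row
com = np.ones(p.shape, dtype=int)
com[np.arange(p.shape[0])[:, None], b] = 0
print(com)
# [[1 1 1 1 0 0]
#  [1 1 1 1 0 0]
#  [1 1 1 1 0 0]]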

Delete specific Values of an Array: Python

I have an array of the shape (1179648, 909).
The problem is that some rows are filled with 0's only. I am checking for this as follows:
for i in range(spectra1Only.shape[0]):
    for j in range(spectra1Only.shape[1]):
        if spectra1Only[i,j] == 0:
I now want to remove the whole row [i] if any 0 appears in it, so that I end up with a smaller array containing only the data needed.
My question is: what would be the best method to do so? Remove? Del? numpy.delete? Or any other method?
You can use Boolean indexing with np.any along axis=1:
spectra1Only = spectra1Only[~(spectra1Only == 0).any(1)]
Here's a demonstration:
A = np.random.randint(0, 9, (5, 5))
print(A)
[[5 0 3 3 7]
[3 5 2 4 7]
[6 8 8 1 6]
[7 7 8 1 5]
[8 4 3 0 3]]
print(A[~(A == 0).any(1)])
[[3 5 2 4 7]
[6 8 8 1 6]
[7 7 8 1 5]]
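An equivalent way to phrase the same filter is to keep only the rows in which every entry is nonzero; a one-line sketch using np.all:
spectra1Only = spectra1Only[(spectra1Only != 0).all(axis=1)]   # keep rows with no zeros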
