I see no Fortran order in:
import numpy as np
In [143]: np.array([[1,2],[3,4]],order='F')
Out[143]:
array([[1, 2],
[3, 4]])
But in the following it works:
In [139]: np.reshape(np.arange(9),newshape=(3,3),order='F')
Out[139]:
array([[0, 3, 6],
[1, 4, 7],
[2, 5, 8]])
So what am I doing wrong in the first one?
When you call numpy.array to create an array from an existing Python object, it gives you an object with whatever shape the original Python object has. So,
np.array([[1,2],[3,4]], ...)
will always give you
np.array([[1, 2],
[3, 4]])
which is exactly what you typed in, so it should not come as a surprise. Fortran order and C order do not describe the shape of the data; they describe the memory layout. When you print an array, NumPy doesn't show you the memory layout, it only shows you the shape.
You can verify that the array really is stored in Fortran order by flattening it with order "K", which returns the elements in the order they occur in memory:
>>> a = np.array([[1,2],[3,4]], order="F")
>>> a.flatten(order="K")
array([1, 3, 2, 4])
This is what truly distinguishes Fortran from C order: the memory layout. Most NumPy functions do not force you to consider memory layout; instead, different layouts are handled transparently.
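Another quick way to check the layout, continuing with the a defined above, is to look at the array's flags:
>>> a.flags['F_CONTIGUOUS'], a.flags['C_CONTIGUOUS']
(True, False)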
It sounds like what you want is to transpose, reversing the axis order. This can be done simply:
>>> b = numpy.transpose(a)
>>> b
array([[1, 3],
[2, 4]])
This does not copy the data; it creates a new view of the same array:
>>> b.base is a
True
If you want the data to have the memory layout 1 2 3 4 and a Fortran-order view of it as [[1, 3], [2, 4]], the efficient way is to store the existing array with C order and then transpose it. This results in a Fortran-order array with the desired contents and requires no extra copies.
>>> a = np.array([[1, 2], [3, 4]]).transpose()
>>> a.flatten(order="K")
array([1, 2, 3, 4])
>>> a
array([[1, 3],
[2, 4]])
If you store the original with Fortran order, the transposition will result in C order, so you don't want that (or maybe all you care about is the transposition, and memory order is not important?). In either case, the array will look the same in NumPy.
>>> a = np.array([[1, 2], [3, 4]], order="F").transpose()
>>> a.flatten(order="K")
array([1, 3, 2, 4])
>>> a
array([[1, 3],
[2, 4]])
Your two means of constructing the 2D array are not at all equivalent. In the first, you specified the structure of the array. In the second, you formed an array and then reshaped to your liking.
>>> np.reshape([1,2,3,4],newshape=(2,2),order='F')
array([[1, 3],
[2, 4]])
Again, for comparison, even if you ask reshape for Fortran order on the nested input, you'll still get the structure you specified:
>>> np.reshape([[1,2],[3,4]],newshape=(2,2),order='F')
array([[1, 2],
[3, 4]])
I have an array: [1, 2, 3, 4, 5, 6]. I would like to use the numpy.reshape() function so that I end up with this array:
[[1, 4],
[2, 5],
[3, 6]
]
I'm not sure how to do this. I keep ending up with this, which is not what I want:
[[1, 2],
[3, 4],
[5, 6]
]
These do the same thing:
In [57]: np.reshape([1,2,3,4,5,6], (3,2), order='F')
Out[57]:
array([[1, 4],
[2, 5],
[3, 6]])
In [58]: np.reshape([1,2,3,4,5,6], (2,3)).T
Out[58]:
array([[1, 4],
[2, 5],
[3, 6]])
Normally values are 'read' across the rows in Python/NumPy. This is called row-major or 'C' order. Reading down the columns is 'F' (Fortran) order, which is common in MATLAB, which has Fortran roots.
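As a small illustration (sketch only, with a throwaway array m), raveling the same 2-D array in both orders shows the two reading directions:
>>> m = np.array([[1, 4], [2, 5], [3, 6]])
>>> m.ravel(order='C')   # read across the rows
array([1, 4, 2, 5, 3, 6])
>>> m.ravel(order='F')   # read down the columns
array([1, 2, 3, 4, 5, 6])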
If you take the 'F'-order result, make a new copy, and string it out, you'll get a different element order:
In [59]: np.reshape([1,2,3,4,5,6], (3,2), order='F').copy().ravel()
Out[59]: array([1, 4, 2, 5, 3, 6])
You can set the order in np.reshape; in your case you can use 'F'. See the docs for details.
>>> arr
array([1, 2, 3, 4, 5, 6])
>>> arr.reshape(-1, 2, order = 'F')
array([[1, 4],
[2, 5],
[3, 6]])
The reason you are getting that particular result is that arrays are normally allocated in C order. That means that reshaping by itself is not sufficient: you also have to tell NumPy in which order to step along the array's elements. Any number of operations will allow you to do that:
Set the element read order to 'F'. F is for Fortran, which, like MATLAB, conventionally uses column-major order:
a.reshape(3, 2, order='F')
Swap the axes after reshaping:
np.swapaxes(a.reshape(2, 3), 0, 1)
Transpose the result:
a.reshape(2, 3).T
Roll the second axis forward:
np.rollaxis(a.reshape(2, 3), 1)
Notice that all but the first case require you to reshape to the transposed shape first.
You can even manually arrange the data:
np.stack((a[:3], a[3:]), axis=1)
Note that this makes unnecessary copies. If you actually want the data copied, just do
a.reshape(3, 2, order='F').copy()
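As a quick sanity check (assuming a = np.array([1, 2, 3, 4, 5, 6])), all the reshaping approaches above give the same result:
>>> a = np.array([1, 2, 3, 4, 5, 6])
>>> b = a.reshape(3, 2, order='F')
>>> np.array_equal(b, a.reshape(2, 3).T)
True
>>> np.array_equal(b, np.swapaxes(a.reshape(2, 3), 0, 1))
True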
I have a large NumPy array which I want to fill with new data on each iteration of a loop. The array is filled with data repeated along axis 0, for example:
[[1, 5],
[1, 5],
[1, 5],
[1, 5]]
I know how to create this array from scratch in each iteration:
x = np.repeat([[1, 5]], 4, axis=0)
However, I don't want to create a new array every time, because it's a very large array (much larger than 4x2). Instead, I want to create the array in advance using the above code, and then just fill the array with new data on each iteration.
But np.repeat() returns a new array, rather than acting on an existing array. Is there an equivalent of np.repeat() for filling an existing array?
As we noted in comments, you can use a broadcasting assignment to fill your 2d array with a 1d array-like of the appropriate size:
x[...] = [1, 5]
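For example, a minimal sketch of the preallocate-and-refill pattern (the per-iteration row values here are made up purely for illustration):
x = np.empty((4, 2), dtype=int)        # allocated once, before the loop
for row in ([1, 5], [2, 6], [3, 7]):   # hypothetical new data each iteration
    x[...] = row                       # broadcasting assignment fills x in place
    # ... use x here ...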
If by any chance your large array always contains the same items in each row (i.e. you won't change these preset values later), you can almost certainly use broadcasting in later parts of your code and just work with an initial x such as
x = np.array([[1, 5]])
This array has shape (1, 2) which is broadcast-compatible with other arrays of shape (4, 2) you might have in the above example.
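For instance (a quick sketch with a made-up second array y), broadcasting expands the single row automatically in arithmetic:
>>> y = np.arange(8).reshape(4, 2)   # some other (4, 2) array, for illustration
>>> x + y
array([[ 1,  6],
       [ 3,  8],
       [ 5, 10],
       [ 7, 12]])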
If you always need the same values in each row and for some reason you can't use broadcasting (both cases are highly unlikely), you can use broadcast_to to create an array with an explicit 2d shape without copying memory:
x_bc = np.broadcast_to([1, 5], (4, 2)) # broadcast 1d [1, 5] to shape (4, 2)
This might work for you, since it has the right shape while storing only 2 unique elements in memory:
>>> x_bc
array([[1, 5],
[1, 5],
[1, 5],
[1, 5]])
>>> x_bc.strides
(0, 8)
However you can't mutate it, because it's a read-only view:
>>> x_bc[0, :] = [2, 4]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-35-ae12ecfe3c5e> in <module>
----> 1 x_bc[0, :] = [2, 4]
ValueError: assignment destination is read-only
So, if you only need the same values in each row and you can't use broadcasting and you want to mutate those same rows later, you can use stride tricks to map the same 1d data to a 2d array:
>>> x_in = np.array([1, 5])
... x_strided = np.lib.stride_tricks.as_strided(x_in, shape=(4,) + x_in.shape,
... strides=(0,) + x_in.strides[-1:])
>>> x_strided
array([[1, 5],
[1, 5],
[1, 5],
[1, 5]])
>>> x_strided[0, :] = [2, 4]
>>> x_strided
array([[2, 4],
[2, 4],
[2, 4],
[2, 4]])
This gives you a 2d array of fixed shape that always contains one unique row; mutating any of the rows mutates all of them (since the underlying data is a single row). Handle with care, because if you ever want two different rows you'll have to do something else.
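If you do eventually need rows that can change independently of each other, you're back to making a real copy, e.g. the np.repeat call from the question, or np.tile:
>>> np.tile([1, 5], (4, 1))   # a real (4, 2) copy; rows are independent
array([[1, 5],
       [1, 5],
       [1, 5],
       [1, 5]])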
How can I perform, for example, a matrix transpose without making a copy of the matrix object? Likewise for other matrix operations (subtracting one matrix from another, ...). Is it beneficial to do that?
Taking the transpose of an array does not make a copy:
>>> a = np.arange(9).reshape(3,3)
>>> b = np.transpose(a)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> b
array([[0, 3, 6],
[1, 4, 7],
[2, 5, 8]])
>>> b[0,1] = 100
>>> b
array([[ 0, 100, 6],
[ 1, 4, 7],
[ 2, 5, 8]])
>>> a
array([[ 0, 1, 2],
[100, 4, 5],
[ 6, 7, 8]])
The same applies to a numpy.matrix object.
This can be beneficial when you want to avoid unnecessarily consuming a lot of memory by copying very large arrays. But you also have to be careful to avoid unintentionally modifying the original array (if you still need it) when you modify the transpose.
A number of numpy functions accept an optional "out" keyword (e.g., numpy.dot) to write the output to an existing array. For example, to take the matrix product of a with itself and write the output to an existing array c:
numpy.dot(a, a, out=c)
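A minimal sketch of that pattern, assuming you preallocate c yourself with a compatible shape and dtype:
>>> a = np.arange(9).reshape(3, 3)
>>> c = np.empty_like(a)        # preallocated output buffer
>>> np.dot(a, a, out=c)         # result is written into c, no new array is allocated
array([[ 15,  18,  21],
       [ 42,  54,  66],
       [ 69,  90, 111]])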
A transpose operation such as b = a.T creates a shallow copy (a view): a and b will share the same data.
For arithmetic operations see: - vs -= operators with numpy
Shallow copies are good for using memory efficiently. However, you have to keep in mind that changing a value will affect all of the views sharing that data.
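Regarding the arithmetic operations, here is a short sketch (with made-up arrays) of in-place subtraction, which modifies the array's memory directly instead of allocating a temporary result:
>>> a = np.arange(6.).reshape(2, 3)
>>> b = np.ones((2, 3))
>>> a -= b          # in place: no temporary array is created
>>> a
array([[-1.,  0.,  1.],
       [ 2.,  3.,  4.]])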
Is it possible to perform min/max in-place assignment with NumPy multi-dimensional arrays without an extra copy?
Say, a and b are two 2D numpy arrays and I would like to have a[i,j] = min(a[i,j], b[i,j]) for all i and j.
One way to do this is:
a = numpy.minimum(a, b)
But according to the documentation, numpy.minimum creates and returns a new array:
numpy.minimum(x1, x2[, out])
Element-wise minimum of array elements.
Compare two arrays and returns a new array containing the element-wise minima.
So in the code above, it will create a new temporary array (the minimum of a and b), then assign it to a and dispose of the temporary, right?
Is there any way to do something like a.min_with(b) so that the min-result is assigned back to a in-place?
numpy.minimum() takes an optional third argument, which is the output array. You can specify a there to have it modified in place:
In [9]: a = np.array([[1, 2, 3], [2, 2, 2], [3, 2, 1]])
In [10]: b = np.array([[3, 2, 1], [1, 2, 1], [1, 2, 1]])
In [11]: np.minimum(a, b, a)
Out[11]:
array([[1, 2, 1],
[1, 2, 1],
[1, 2, 1]])
In [12]: a
Out[12]:
array([[1, 2, 1],
[1, 2, 1],
[1, 2, 1]])
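Equivalently, you can pass the output array by keyword, which reads a little more explicitly:
In [13]: np.minimum(a, b, out=a)
Out[13]:
array([[1, 2, 1],
[1, 2, 1],
[1, 2, 1]])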
Given:
test = numpy.array([[1, 2], [3, 4], [5, 6]])
test[i] gives the ith row (e.g. [1, 2]). How do I access the ith column? (e.g. [1, 3, 5]). Also, would this be an expensive operation?
To access column 0:
>>> test[:, 0]
array([1, 3, 5])
To access row 0:
>>> test[0, :]
array([1, 2])
This is covered in Section 1.4 (Indexing) of the NumPy reference. This is quick, at least in my experience. It's certainly much quicker than accessing each element in a loop.
>>> test[:,0]
array([1, 3, 5])
this command gives you a 1-D array (a "row vector"). That's fine if you just want to loop over it, but if you want to hstack it with some other array of dimension 3xN, you will get
ValueError: all the input arrays must have same number of dimensions
while
>>> test[:,[0]]
array([[1],
[3],
[5]])
gives you a column vector (shape (3, 1)), so that you can concatenate or hstack it.
e.g.
>>> np.hstack((test, test[:,[0]]))
array([[1, 2, 1],
[3, 4, 3],
[5, 6, 5]])
And if you want to access more than one column at a time you could do:
>>> test = np.arange(9).reshape((3,3))
>>> test
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> test[:,[0,2]]
array([[0, 2],
[3, 5],
[6, 8]])
You could also transpose and return a row:
In [4]: test.T[0]
Out[4]: array([1, 3, 5])
Although the question has been answered, let me mention some nuances.
Let's say you are interested in column 1 of the array
arr = numpy.array([[1, 2],
[3, 4],
[5, 6]])
As you already know from other answers, to get it in the form of "row vector" (array of shape (3,)), you use slicing:
arr_col1_view = arr[:, 1] # creates a view of column 1 of arr
arr_col1_copy = arr[:, 1].copy() # creates a copy of column 1 of arr
To check if an array is a view or a copy of another array you can do the following:
arr_col1_view.base is arr # True
arr_col1_copy.base is arr # False
see ndarray.base.
Besides the obvious difference between the two (modifying arr_col1_view will affect arr), the number of bytes stepped over when traversing each of them is different:
arr_col1_view.strides[0] # 8 bytes (one full row, assuming 4-byte integers)
arr_col1_copy.strides[0] # 4 bytes (one element)
see strides and this answer.
Why is this important? Imagine that you have a very big array A instead of arr:
A = np.random.randint(2, size=(10000, 10000), dtype='int32')
A_col1_view = A[:, 1]
A_col1_copy = A[:, 1].copy()
and you want to compute the sum of all the elements of that column, i.e. A_col1_view.sum() or A_col1_copy.sum(). Using the copied version is much faster:
%timeit A_col1_view.sum() # ~248 µs
%timeit A_col1_copy.sum() # ~12.8 µs
This is due to the different strides mentioned before:
A_col1_view.strides[0] # 40000 bytes
A_col1_copy.strides[0] # 4 bytes
Although it might seem that using column copies is better, that is not always true, since making a copy takes time and uses more memory (in this case it took me approx. 200 µs to create A_col1_copy). However, if we needed the copy in the first place, or we need to do many different operations on a specific column and we are OK with sacrificing memory for speed, then making a copy is the way to go.
If we are mostly interested in working with columns, it could be a good idea to create our array in column-major ('F') order instead of the default row-major ('C') order, and then slice as before to get a column without copying it:
A = np.asfortranarray(A) # or np.array(A, order='F')
A_col1_view = A[:, 1]
A_col1_view.strides[0] # 4 bytes
%timeit A_col1_view.sum() # ~12.6 µs vs ~248 µs
Now, performing the sum operation (or any other) on a column-view is as fast as performing it on a column copy.
Finally, note that transposing an array and using row slicing is the same as using column slicing on the original array, because transposing just swaps the shape and the strides of the original array. Using the original C-order A:
A[:, 1].strides[0] # 40000 bytes
A.T[1, :].strides[0] # 40000 bytes
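You can see that swap, and confirm that the transpose is only a view, with a small self-contained sketch (a fresh example array B, 32-bit integers assumed):
>>> B = np.array([[0, 1, 2], [3, 4, 5]], dtype='int32')
>>> B.shape, B.strides
((2, 3), (12, 4))
>>> B.T.shape, B.T.strides
((3, 2), (4, 12))
>>> B.T.base is B      # the transpose shares B's memory
True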
To get several independent columns, just:
>>> test[:, [0, 2]]
and you will get columns 0 and 2.
>>> test
array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
>>> ncol = test.shape[1]
>>> ncol
5
Then you can select the 2nd through 4th columns this way:
>>> test[0:, 1:(ncol - 1)]
array([[1, 2, 3],
[6, 7, 8]])
This is just a 2-dimensional array, nothing more exotic, so you can slice out whichever columns you want:
test = numpy.array([[1, 2], [3, 4], [5, 6]])
test[:, a:b]  # provide column indices in place of a and b