As stated in the SciPy lecture notes, this will not work as expected:
a = np.random.randint(0, 10, (1000, 1000))
a += a.T
assert np.allclose(a, a.T)
But why? How does a.T being a view affect this behavior?
a += a.T
performs the summation in place (it reads from the view a.T while a is being modified), so you end up with a non-symmetric matrix.
You can easily check this; for example, I got:
In [3]: a
Out[3]:
array([[ 6, 15, 7, ..., 8, 9, 2],
[15, 6, 9, ..., 14, 9, 7],
[ 7, 9, 0, ..., 9, 5, 8],
...,
[ 8, 23, 15, ..., 6, 4, 10],
[18, 13, 8, ..., 4, 2, 6],
[ 3, 9, 9, ..., 16, 8, 4]])
You can see it's not symmetric, right? (Compare the top-right and bottom-left entries.)
If you make a real copy:
a += np.array(a.T)
it works fine, for example:
In [6]: a
Out[6]:
array([[ 2, 11, 8, ..., 9, 15, 5],
[11, 4, 14, ..., 10, 3, 13],
[ 8, 14, 14, ..., 10, 9, 3],
...,
[ 9, 10, 10, ..., 16, 7, 6],
[15, 3, 9, ..., 7, 14, 1],
[ 5, 13, 3, ..., 6, 1, 2]])
To better understand why this happens, imagine writing the loop yourself, as follows:
In [8]: for i in xrange(1000):
   ...:     for j in xrange(1000):
   ...:         a[j,i] += a[i,j]
   ...:
In [9]: a
Out[9]:
array([[ 4, 5, 14, ..., 12, 16, 13],
[ 3, 2, 16, ..., 16, 8, 8],
[ 9, 12, 10, ..., 7, 7, 23],
...,
[ 8, 10, 6, ..., 14, 13, 23],
[10, 4, 6, ..., 9, 16, 21],
[11, 8, 14, ..., 16, 12, 12]])
It adds a[999,0] when computing a[0,999], but at that point a[999,0] already holds the sum a[999,0] + a[0,999] -- so above the main diagonal the values get added twice.
This problem is due to NumPy's internal design.
It basically boils down to the fact that the in-place operator changes the values as it goes, and those changed values are then used where you actually intended the original values to be used.
This is discussed in this bug report, and it does not seem to be fixable.
The reason it works for smaller arrays seems to be related to how the data is buffered while being worked on.
To understand exactly why the issue crops up, I'm afraid you would have to dig into NumPy's internals.
For large arrays, the in-place operation applies the addition using values from the operand that is currently being modified. For example:
>>> a = np.random.randint(0, 10, (1000, 1000))
>>> a
array([[9, 4, 2, ..., 7, 0, 6],
[8, 4, 1, ..., 3, 5, 9],
[6, 4, 9, ..., 6, 9, 7],
...,
[6, 2, 5, ..., 0, 4, 6],
[5, 7, 9, ..., 8, 0, 5],
[2, 0, 1, ..., 4, 3, 5]])
Notice that the top-right and bottom-left elements are 6 and 2.
>>> a += a.T
>>> a
array([[18, 12, 8, ..., 13, 5, 8],
[12, 8, 5, ..., 5, 12, 9],
[ 8, 5, 18, ..., 11, 18, 8],
...,
[19, 7, 16, ..., 0, 12, 10],
[10, 19, 27, ..., 12, 0, 8],
[10, 9, 9, ..., 14, 11, 10]])
Now notice that the top-right element is correct (8 = 6 + 2). However, the bottom-left element is the result not of 6 + 2, but of 8 + 2. In other words, the addition applied to the bottom-left element used the top-right element of the array after the addition. You'll notice this is true for all the other elements below the first row as well.
I imagine it works this way so that you do not need to make a copy of your array. (Though it then looks like it does make a copy, or at least buffer the data, when the array is small.)
assert np.allclose(a, a.T)
I only now understood that you are generating a symmetric matrix by summing a with its transpose a.T,
which makes us rightfully expect np.allclose(a, a.T) to return True (the resulting matrix is symmetric, so it should equal its transpose).
a += a.T  # How does a.T being a view affect this behavior?
I've just narrowed it down to this
TL;DR: a = a + a.T is fine for larger matrices, while a += a.T gives strange results starting from size 91x91.
>>> a = np.random.randint(0, 10, (1000, 1000))
>>> a += a.T # using the += operator
>>> np.allclose(a, a.T)
False
>>> a = np.random.randint(0, 10, (1000, 1000))
>>> a = a + a.T # using the + operator
>>> np.allclose(a, a.T)
True
I get the same cut-off at size 90x90 as @Hannes (I am on Python 2.7 and NumPy 1.11.0, so there are at least two environments that can reproduce this).
>>> a = np.random.randint(0, 10, (90, 90))
>>> a += a.T
>>> np.allclose(a, a.T)
True
>>> a = np.random.randint(0, 10, (91, 91))
>>> a += a.T
>>> np.allclose(a, a.T)
False
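A plausible explanation for that exact cutoff (my own guess, not something confirmed in the bug report) is the ufunc buffer size: np.getbufsize() reports the buffer size in elements, 8192 by default on the installations I've checked, and 90*90 = 8100 fits into a single buffer while 91*91 = 8281 does not:
>>> np.getbufsize()    # ufunc buffer size in elements (default on my install)
8192
>>> 90 * 90, 91 * 91   # 8100 fits in one buffer; 8281 does not
(8100, 8281)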
Related
Let's say I have an array that contains two 3x3 arrays:
a = np.arange(2 * 3 * 3).reshape(2, 3, 3)
print(a)
Output:
array([[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8]],
[[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]]])
Now I would like to get the upper triangle of each of the two arrays. I know I can achieve it with either of the following two approaches:
np.array([aa[np.triu_indices(3)] for aa in a])
# or
a.T[np.tril_indices(3)].T
Output:
array([[ 0, 1, 2, 4, 5, 8],
[ 9, 10, 11, 13, 14, 17]])
However, I know that list comprehension is slow so I'd rather not use it. And the transpose + tril makes it difficult to understand what it does at first sight. I had hoped that one of the following options would work, but none of them did:
a[:, np.triu_indices(3)] # totally different output
a[np.arange(len(a)), np.triu_indices(3)] # error
a[np.indices(a.shape)[0], np.triu_indices(3)] # error
Is there an elegant and fast way to do it?
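One vectorized option (a sketch, not from the thread itself): unpack the tuple returned by np.triu_indices so that the two index arrays apply to the last two axes; broadcasting then pairs every leading index with every (row, col) pair.
import numpy as np

a = np.arange(2 * 3 * 3).reshape(2, 3, 3)
# Unpack the (row, col) index arrays so they index the last two axes.
rows, cols = np.triu_indices(3)
a[:, rows, cols]
# array([[ 0,  1,  2,  4,  5,  8],
#        [ 9, 10, 11, 13, 14, 17]])
The earlier attempt a[:, np.triu_indices(3)] fails because the whole tuple is treated as a single index array instead of a separate pair of row and column indices.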
I am trying to shift a 2D square matrix in both x and y directions at the same time (diagonally). Is there any way in Python to do this? Please see the attached figure.
You can use np.roll(), passing both axes at once:
>>> x = np.arange(1, 17).reshape(4, 4)
>>> x
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]])
>>> np.roll(x, 2, axis=(0,1))
array([[11, 12, 9, 10],
[15, 16, 13, 14],
[ 3, 4, 1, 2],
[ 7, 8, 5, 6]])
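If the two axes need different shifts, a tuple of shifts can be passed as well (assuming a NumPy version recent enough to accept tuple shifts, 1.12+ as far as I know):
>>> np.roll(x, (1, 2), axis=(0, 1))   # shift rows by 1, columns by 2
array([[15, 16, 13, 14],
       [ 3,  4,  1,  2],
       [ 7,  8,  5,  6],
       [11, 12,  9, 10]])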
I thought I understood the reshape function in Numpy until I was messing around with it and came across this example:
a = np.arange(16).reshape((4,4))
which returns:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
This makes sense to me, but then when I do:
a.reshape((2,8), order = 'F')
it returns:
array([[0, 8, 1, 9, 2, 10, 3, 11],
[4, 12, 5, 13, 6, 14, 7, 15]])
I would expect it to return:
array([[0, 4, 8, 12, 1, 5, 9, 13],
[2, 6, 10, 14, 3, 7, 11, 15]])
Can someone please explain what is happening here?
The elements of a in order 'F'
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
are [0,4,8,12,1,5,9 ...]
Now rearrange them in a (2,8) array.
I think the reshape docs talk about raveling the elements and then reshaping them. Evidently the ravel is done first.
Experiment with a.ravel(order='F').reshape(2,8).
Oops, I get what you expected:
In [208]: a = np.arange(16).reshape(4,4)
In [209]: a
Out[209]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
In [210]: a.ravel(order='F')
Out[210]: array([ 0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15])
In [211]: _.reshape(2,8)
Out[211]:
array([[ 0, 4, 8, 12, 1, 5, 9, 13],
[ 2, 6, 10, 14, 3, 7, 11, 15]])
OK, I have to keep the 'F' order during the reshape as well:
In [214]: a.ravel(order='F').reshape(2,8, order='F')
Out[214]:
array([[ 0, 8, 1, 9, 2, 10, 3, 11],
[ 4, 12, 5, 13, 6, 14, 7, 15]])
In [215]: a.ravel(order='F').reshape(2,8).flags
Out[215]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
...
In [216]: a.ravel(order='F').reshape(2,8, order='F').flags
Out[216]:
C_CONTIGUOUS : False
F_CONTIGUOUS : True
From the np.reshape docs:
You can think of reshaping as first raveling the array (using the given
index order), then inserting the elements from the raveled array into the
new array using the same kind of index ordering as was used for the
raveling.
The notes on order are fairly long, so it's not surprising that the topic is confusing.
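To make that concrete, the one-step reshape can be checked directly against the explicit ravel-then-reshape, keeping 'F' order in both steps; a quick sanity check:
import numpy as np

a = np.arange(16).reshape(4, 4)
# reshape(order='F') behaves like: ravel in 'F' order, then refill the
# new shape in 'F' order -- exactly what the docs describe.
assert np.array_equal(a.reshape(2, 8, order='F'),
                      a.ravel(order='F').reshape(2, 8, order='F'))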
Given a matrix A with dimensions axa and a matrix B with dimensions bxb, where a is a multiple of b. B is a submatrix of A starting at (0,0) and tiled until the dimensions axa are met.
A = array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
An example of a submatrix might be:
B = array([[10, 11],
[14, 15]])
Where the number 15 is in position (1, 1) with respect to B's coordinates.
How could I return a view on the array A, for a particular position in B? For example for position (1,1) in B, I want to get all such values from A:
C = array([[5, 7],
[13, 15]])
The reason I want a view is that I wish to update multiple positions in A at once; for example, assigning
C[:] = 20
results in
A = array([[ 0, 1, 2, 3],
[ 4, 20, 6, 20],
[ 8, 9, 10, 11],
[12, 20, 14, 20]])
You can obtain this as follows:
>>> A = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
>>> A[np.ix_([1,3],[1,3])] = 20
>>> A
array([[ 0, 1, 2, 3],
[ 4, 20, 6, 20],
[ 8, 9, 10, 11],
[12, 20, 14, 20]])
For more info about np.ix_, you can review the NumPy documentation.
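Note that np.ix_ uses fancy (advanced) indexing, so reading A[np.ix_([1,3],[1,3])] gives a copy rather than a view, although assigning through it updates A as shown above. If you really need a view, basic slicing with a step equal to the tile size provides one; a sketch for position (1, 1) with 2x2 tiles (in general, A[i::b, j::b] for position (i, j) and bxb tiles):
>>> A = np.array([[ 0,  1,  2,  3],
...               [ 4,  5,  6,  7],
...               [ 8,  9, 10, 11],
...               [12, 13, 14, 15]])
>>> C = A[1::2, 1::2]   # basic slicing -> a genuine view into A
>>> C
array([[ 5,  7],
       [13, 15]])
>>> C[...] = 20         # writing through the view updates A in place
>>> A
array([[ 0,  1,  2,  3],
       [ 4, 20,  6, 20],
       [ 8,  9, 10, 11],
       [12, 20, 14, 20]])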
Numpy ravel works well if I need to create a vector by reading by rows or by columns. However, I would like to transform a matrix to a 1d array, by using a method that is often used in image processing. This is an example with initial matrix A and final result B:
A = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
B = np.array([ 0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15])
Is there an existing function already that could help me with that? If not, can you give me some hints on how to solve this problem? PS. the matrix A is NxN.
I've been using numpy for several years, and I've never seen such a function.
Here's one way you could do it (not necessarily the most efficient):
In [47]: a
Out[47]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
In [48]: np.concatenate([np.diagonal(a[::-1,:], k)[::(2*(k % 2)-1)] for k in range(1-a.shape[0], a.shape[0])])
Out[48]: array([ 0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15])
Breaking down the one-liner into separate steps:
a[::-1, :] reverses the rows:
In [59]: a[::-1, :]
Out[59]:
array([[12, 13, 14, 15],
[ 8, 9, 10, 11],
[ 4, 5, 6, 7],
[ 0, 1, 2, 3]])
(This could also be written a[::-1] or np.flipud(a).)
np.diagonal(a, k) extracts the kth diagonal, where k=0 is the main diagonal. So, for example,
In [65]: np.diagonal(a[::-1, :], -3)
Out[65]: array([0])
In [66]: np.diagonal(a[::-1, :], -2)
Out[66]: array([4, 1])
In [67]: np.diagonal(a[::-1, :], 0)
Out[67]: array([12, 9, 6, 3])
In [68]: np.diagonal(a[::-1, :], 2)
Out[68]: array([14, 11])
In the list comprehension, k gives the diagonal to be extracted. We want to reverse the elements in every other diagonal. The expression 2*(k % 2) - 1 gives the values 1, -1, 1, ... as k varies from -3 to 3. Indexing with [::1] leaves the order of the array being indexed unchanged, and indexing with [::-1] reverses the order of the array. So np.diagonal(a[::-1, :], k)[::(2*(k % 2)-1)] gives the kth diagonal, but with every other diagonal reversed:
In [71]: [np.diagonal(a[::-1,:], k)[::(2*(k % 2)-1)] for k in range(1-a.shape[0], a.shape[0])]
Out[71]:
[array([0]),
array([1, 4]),
array([8, 5, 2]),
array([ 3, 6, 9, 12]),
array([13, 10, 7]),
array([11, 14]),
array([15])]
np.concatenate() puts them all into a single array:
In [72]: np.concatenate([np.diagonal(a[::-1,:], k)[::(2*(k % 2)-1)] for k in range(1-a.shape[0], a.shape[0])])
Out[72]: array([ 0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15])
I found discussion of zigzag scan for MATLAB, but not much for numpy. One project appears to use a hardcoded indexing array for 8x8 blocks
https://github.com/lot9s/lfv-compression/blob/master/scripts/our_mpeg/zigzag.py
ZIG = np.array([[ 0,  1,  5,  6, 14, 15, 27, 28],
                [ 2,  4,  7, 13, 16, 26, 29, 42],
                [ 3,  8, 12, 17, 25, 30, 41, 43],
                [ 9, 11, 18, 24, 31, 40, 44, 53],
                [10, 19, 23, 32, 39, 45, 52, 54],
                [20, 22, 33, 38, 46, 51, 55, 60],
                [21, 34, 37, 47, 50, 56, 59, 61],
                [35, 36, 48, 49, 57, 58, 62, 63]])
Apparently it's used in JPEG and MPEG compression.
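For what it's worth, such an index table doesn't have to be hard-coded: the anti-diagonal traversal from the answer above can be inverted to build it for any block size. A sketch (assuming the same zigzag convention as the quoted ZIG table):
import numpy as np

def zigzag_table(n):
    # Flat indices of an n x n block visited in zigzag order, using the
    # diagonal-based traversal from the answer above.
    a = np.arange(n * n).reshape(n, n)
    order = np.concatenate(
        [np.diagonal(a[::-1, :], k)[::(2 * (k % 2) - 1)]
         for k in range(1 - n, n)])
    # Invert the permutation: entry (i, j) holds the scan position of (i, j).
    table = np.empty(n * n, dtype=int)
    table[order] = np.arange(n * n)
    return table.reshape(n, n)

print(zigzag_table(8))  # should reproduce the hard-coded ZIG array above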