Let's say I have:
one = np.array([[2, 3, np.array([[1, 2], [7, 3]])],
                [4, 5, np.array([[11, 12], [14, 15]])]],
               dtype=object)
two = np.array([[1, 2], [7, 3],
                [11, 12], [14, 15]])
I want to compare the inner arrays stored in the one array with the values of the two array. I am talking about the
[1, 2], [7, 3],
[11, 12], [14, 15]
values. So, I want to check if they are the same, one by one.
Probably like:
for idx, x in np.ndenumerate(one):
    for idy, y in np.ndenumerate(two):
        print(y)
which gives all the elements of two.
I can't figure out how to access all the relevant elements of one at once (only the last element of each row) and compare them with two.
The problem is that they don't have the same dimensions.
This works
np.r_[tuple(one[:, 2])] == two
Output:
array([[ True,  True],
       [ True,  True],
       [ True,  True],
       [ True,  True]], dtype=bool)
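A quick end-to-end sketch of that comparison, rebuilding the arrays from the question:

```python
import numpy as np

one = np.array([[2, 3, np.array([[1, 2], [7, 3]])],
                [4, 5, np.array([[11, 12], [14, 15]])]],
               dtype=object)
two = np.array([[1, 2], [7, 3],
                [11, 12], [14, 15]])

# one[:, 2] is the object column holding the inner 2x2 arrays;
# np.r_ stacks them vertically into a (4, 2) array matching `two`
stacked = np.r_[tuple(one[:, 2])]
print(stacked.shape)           # (4, 2)
print((stacked == two).all())  # True
```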
In a comment, #George tried to work with:
In [246]: a
Out[246]: array([1, [2, [33, 44, 55, 66]], 11, [22, [77, 88, 99, 100]]], dtype=object)
In [247]: a.shape
Out[247]: (4,)
This is a 4 element array. If we reshape it, we can isolate an inner layer
In [257]: a.reshape(2,2)
Out[257]:
array([[1, [2, [33, 44, 55, 66]]],
       [11, [22, [77, 88, 99, 100]]]], dtype=object)
In [258]: a.reshape(2,2)[:,1]
Out[258]: array([[2, [33, 44, 55, 66]], [22, [77, 88, 99, 100]]], dtype=object)
This last case is (2,) - 2 lists. We can isolate the 2nd item in each list with a comprehension, and create an array from the resulting lists:
In [260]: a1=a.reshape(2,2)[:,1]
In [261]: [i[1] for i in a1]
Out[261]: [[33, 44, 55, 66], [77, 88, 99, 100]]
In [263]: np.array([i[1] for i in a1])
Out[263]:
array([[ 33,  44,  55,  66],
       [ 77,  88,  99, 100]])
Nothing fancy here - just paying attention to array shapes, and using list operations where arrays don't work.
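The steps above can be condensed into one runnable sketch:

```python
import numpy as np

# Reshape, slice out the object column, then pull the innermost
# lists out with a comprehension and stack them into a 2D array
a = np.array([1, [2, [33, 44, 55, 66]], 11, [22, [77, 88, 99, 100]]],
             dtype=object)
a1 = a.reshape(2, 2)[:, 1]            # the two [x, [...]] lists
inner = np.array([i[1] for i in a1])  # stack the innermost lists
print(inner.shape)                    # (2, 4)
```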
Given this numpy array
[[200. 202.08165 ]
[189.60295 190.32434 ]
[189.19751 188.7867 ]
[162.15639 164.05934 ]]
I want to get this array
[[200. 190.32434 ]
[189.60295 188.7867 ]
[189.19751 164.05934 ]]
The same for 3 columns, given this array
[[200. 202.08165 187.8392 ]
[189.60295 190.32434 167.93082]
[189.19751 188.7867 199.2839 ]
[162.15639 164.05934 200.92 ]]
I want to get this array
[[200. 190.32434 199.2839 ]
[189.60295 188.7867 200.92 ]]
Any vectorized way to achieve this for any number of columns and rows? np.diag and np.diagonal only seem to give me a single diagonal, but I need all of them stacked up.
Well it seems like a specialized case of keeping diagonal elements. Here's one vectorized solution using masking -
def keep_diag(a):
    m, n = a.shape
    i, j = np.ogrid[:m, :n]
    mask = (i >= j) & ((i - m + n) <= j)
    return a.T[mask.T].reshape(n, -1).T
Most of the trick is in the mask creation: indexing the input array with that mask pulls out exactly the required elements.
Sample runs -
In [105]: a
Out[105]:
array([[ 0, 16],
       [11, 98],
       [81, 63],
       [83, 20]])
In [106]: keep_diag(a)
Out[106]:
array([[ 0, 98],
       [11, 63],
       [81, 20]])
In [102]: a
Out[102]:
array([[10,  2, 66],
       [44, 18, 35],
       [70,  8, 31],
       [12, 27, 86]])
In [103]: keep_diag(a)
Out[103]:
array([[10, 18, 31],
       [44,  8, 86]])
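To see why this works, here is what the mask looks like for the (4, 2) input above, a small sketch printing it on its own:

```python
import numpy as np

# True marks the elements that belong to one of the stacked diagonals
m, n = 4, 2
i, j = np.ogrid[:m, :n]
mask = (i >= j) & ((i - m + n) <= j)
print(mask)
# [[ True False]
#  [ True  True]
#  [ True  True]
#  [False  True]]
```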
You can still use np.diagonal():
import numpy as np
b = np.array([[200.     , 202.08165, 187.8392 ],
              [189.60295, 190.32434, 167.93082],
              [189.19751, 188.7867 , 199.2839 ],
              [162.15639, 164.05934, 200.92   ]])
diags = np.asarray([b[i:,:].diagonal() for i in range(b.shape[0]-b.shape[1]+1)])
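A quick sketch checking that this comprehension reproduces the target array from the question:

```python
import numpy as np

b = np.array([[200.     , 202.08165, 187.8392 ],
              [189.60295, 190.32434, 167.93082],
              [189.19751, 188.7867 , 199.2839 ],
              [162.15639, 164.05934, 200.92   ]])
# Each i shifts the starting row down by one, so .diagonal() walks
# a different diagonal; stacking them gives the desired result
diags = np.asarray([b[i:, :].diagonal()
                    for i in range(b.shape[0] - b.shape[1] + 1)])
# diags[0] -> [200., 190.32434, 199.2839]
# diags[1] -> [189.60295, 188.7867, 200.92]
```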
I have an array of shape (9,1,3):
array([[[  6,  12, 108]],
       [[122, 112,  38]],
       [[ 57, 101,  62]],
       [[119,  76, 177]],
       [[ 46,  62,   2]],
       [[127,  61, 155]],
       [[  5,   6, 151]],
       [[  5,   8, 185]],
       [[109, 167,  33]]])
I want to find the index of the subarray whose third-dimension values contain the overall maximum; in this case the maximum is 185, so the index is 7.
I guess the solution is linked to reshaping but I can't wrap my head around it. Thanks for any help!
I'm not sure what's tricky about it. But one way to get the index of the subarray holding the greatest element would be by using np.max and np.argmax like:
# find `max` element along last axis
# and get the index using `argmax` where `arr` is your array
In [53]: np.argmax(np.max(arr, axis=2))
Out[53]: 7
Alternatively, as #PaulPanzer suggested in his comments, you could use:
In [63]: np.unravel_index(np.argmax(arr), arr.shape)
Out[63]: (7, 0, 2)
In [64]: arr[(7, 0, 2)]
Out[64]: 185
You may have to do it like this:
data = np.array([[[  6,  12, 108]],
                 [[122, 112,  38]],
                 [[ 57, 101,  62]],
                 [[119,  76, 177]],
                 [[ 46,  62,   2]],
                 [[127,  61, 155]],
                 [[  5,   6, 151]],
                 [[  5,   8, 185]],
                 [[109, 167,  33]]])
np.argmax(data[:,0][:,2])
7
I am trying to solve the following problem. I have two matrices A and B, and I want to create a new matrix C consisting of rows taken from A and B depending on a condition encoded in the array v: if the i'th entry of v is a one, I want the i'th row of C to be the i'th row of B, and if it is a zero, it should be the i'th row of A. I came up with the following solution
C = np.choose(v, [A.T, B.T]).T
but it is too slow. One obvious bad thing is the two transposes, but since np.choose does not take an axis argument I don't know how to get rid of them. Any ideas for a fast solution to this problem?
For example, let
A = np.arange(20).reshape([4,5])
and
B = 10 - A
Then one could imagine wanting the matrix C to be built from whichever of the two rows has the smaller sum. So we let
v = np.sum(A,axis=1)<np.sum(B,axis=1)
and then C is the matrix
C = np.choose(v,[A.T,B.T]).T
which is
array([[10,  9,  8,  7,  6],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]])
Seems like a good setup to use np.where to do the choosing operation based on the mask/binary input data -
C = np.where(v[:,None],B,A)
That v[:,None] part extends v to a shape broadcastable against A and B, letting the choosing work along the appropriate axis - axis=0 in this case for the two 2D arrays.
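A minimal shape check with made-up arrays (the names here are just for illustration):

```python
import numpy as np

# v is (n,); v[:, None] is (n, 1), which broadcasts against the
# (n, m) arrays row by row inside np.where
v = np.array([1, 0, 1])
A = np.full((3, 2), 7)
B = np.full((3, 2), 9)
print(v[:, None].shape)             # (3, 1)
print(np.where(v[:, None], B, A))
# [[9 9]
#  [7 7]
#  [9 9]]
```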
Sample run -
In [58]: A
Out[58]:
array([[82, 78, 57],
       [14, 97, 32],
       [72, 11, 49],
       [98, 34, 41],
       [89, 71, 52],
       [34, 51, 55],
       [26, 92, 59]])
In [59]: B
Out[59]:
array([[55, 67, 50],
       [49, 64, 21],
       [34, 18, 72],
       [24, 61, 65],
       [56, 59, 23],
       [44, 77, 13],
       [56, 55, 58]])
In [62]: v
Out[62]: array([1, 0, 0, 0, 0, 1, 1])
In [63]: np.where(v[:,None],B,A)
Out[63]:
array([[55, 67, 50],
       [14, 97, 32],
       [72, 11, 49],
       [98, 34, 41],
       [89, 71, 52],
       [44, 77, 13],
       [56, 55, 58]])
If v doesn't strictly consist of 0s and 1s only, use v[:,None]==1 as the first argument with np.where.
Another approach would be with boolean-indexing -
C = A.copy()
mask = v==1
C[mask] = B[mask]
Note: if v is already a boolean array, skip the comparison against 1 when creating the mask.
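A minimal, self-contained sketch of this boolean-indexing route with made-up 0/1 data:

```python
import numpy as np

A = np.zeros((4, 3), dtype=int)
B = np.ones((4, 3), dtype=int)
v = np.array([1, 0, 1, 0])

# Start from a copy of A, then overwrite the rows where v == 1 with B's rows
C = A.copy()
mask = v == 1
C[mask] = B[mask]
print(C)
# [[1 1 1]
#  [0 0 0]
#  [1 1 1]
#  [0 0 0]]
```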
Runtime test -
In [77]: A = np.random.randint(11,99,(10000,3))
In [78]: B = np.random.randint(11,99,(10000,3))
In [79]: v = np.random.rand(A.shape[0])>0.5
In [82]: def choose_rows_copy(A, B, v):
    ...:     C = A.copy()
    ...:     C[v] = B[v]
    ...:     return C
    ...:
In [83]: %timeit np.where(v[:,None],B,A)
10000 loops, best of 3: 107 µs per loop
In [84]: %timeit choose_rows_copy(A, B, v)
1000 loops, best of 3: 226 µs per loop
I saw the function numpy.fill_diagonal, which assigns the same value to all diagonal elements. But I want to assign a different random value to each diagonal element. How can I do it in Python? Maybe using scipy or other libraries?
That the docs call the fill value a scalar is an existing documentation bug. In fact, any value that can be broadcast here is OK.
Fill diagonal works fine with array-likes:
>>> a = np.arange(1,10).reshape(3,3)
>>> a
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
>>> np.fill_diagonal(a, [99, 42, 69])
>>> a
array([[99,  2,  3],
       [ 4, 42,  6],
       [ 7,  8, 69]])
It's a stride trick, since the diagonal elements are regularly spaced by the array's width + 1.
From the docstring, that's a better implementation than using np.diag_indices too:
Notes
-----
.. versionadded:: 1.4.0
This functionality can be obtained via `diag_indices`, but internally
this version uses a much faster implementation that never constructs the
indices and uses simple slicing.
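That "width + 1" spacing can be seen directly with flat indexing; a small sketch for a square array (this illustrates the idea behind the fast path, not necessarily fill_diagonal's exact implementation):

```python
import numpy as np

a = np.zeros((3, 3), dtype=int)
n = a.shape[1]
# In row-major flat order, every (n + 1)-th element is a diagonal element
a.flat[::n + 1] = [99, 42, 69]
print(a)
# [[99  0  0]
#  [ 0 42  0]
#  [ 0  0 69]]
```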
You can use np.diag_indices to get those indices and then simply index into the array with those and assign values.
Here's a sample run to illustrate it -
In [86]: arr # Input array
Out[86]:
array([[13, 69, 35, 98, 16],
       [93, 42, 72, 51, 65],
       [51, 33, 96, 43, 53],
       [15, 26, 16, 17, 52],
       [31, 54, 29, 95, 80]])
# Get row, col indices
In [87]: row,col = np.diag_indices(arr.shape[0])
# Assign values, let's say from an array to illustrate
In [88]: arr[row,col] = np.array([100,200,300,400,500])
In [89]: arr
Out[89]:
array([[100,  69,  35,  98,  16],
       [ 93, 200,  72,  51,  65],
       [ 51,  33, 300,  43,  53],
       [ 15,  26,  16, 400,  52],
       [ 31,  54,  29,  95, 500]])
You can also use np.diag_indices_from, which would probably be more idiomatic, like so -
row, col = np.diag_indices_from(arr)
Note: the function you tried would work just fine. This is discussed in a previous Q&A - Numpy modify ndarray diagonal too.
Create an identity matrix with n dimensions (take input from the user). Fill the diagonals of that matrix with the multiples of the number provided by the user.
arr = np.eye(4)
j = 3
np.fill_diagonal(arr, 6)
for i, x in zip(range(4), range(1, 5)):
    arr[i, i] = arr[i, i] * x
    arr[i, j] = 6 * (j + 1)
    j -= 1
arr
output:
array([[ 6.,  0.,  0., 24.],
       [ 0., 12., 18.,  0.],
       [ 0., 12., 18.,  0.],
       [ 6.,  0.,  0., 24.]])
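The same construction can be sketched without the loop; a hedged alternative with n and k standing in for the user's inputs:

```python
import numpy as np

n, k = 4, 6
arr = np.zeros((n, n))
mult = k * np.arange(1, n + 1)                       # multiples [6 12 18 24]
np.fill_diagonal(arr, mult)                          # main diagonal
arr[np.arange(n), np.arange(n)[::-1]] = mult[::-1]   # anti-diagonal, mirrored
print(arr)
```

For even n the two diagonals never overlap, so the assignment order doesn't matter.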
I am trying to implement a function, which can split a 3 dimensional numpy array in to 8 pieces, whilst keeping the order intact. Essentially I need the splits to be:
G[:21, :18, :25]
G[21:, :18, :25]
G[21:, 18:, :25]
G[:21, 18:, :25]
G[:21, :18, 25:]
G[21:, :18, 25:]
G[21:, 18:, 25:]
G[:21, 18:, 25:]
Where the original shape of this particular array was (42, 36, 50). How is it possible to generalise these 8 "slices" so I do not have to hardcode all of them? Essentially, move the : into every possible position.
Thanks!
You could apply a 1d split to successive dimensions.
With a smaller 3d array
In [147]: X=np.arange(4**3).reshape(4,4,4)
A compound list comprehension produces a nested list. Here I'm using the simplest double split
In [148]: S=[np.split(z,2,0) for y in np.split(X,2,2) for z in np.split(y,2,1)]
In this case, all sublists have the same size, so I can convert it to an array for convenient viewing:
In [149]: SA=np.array(S)
In [150]: SA.shape
Out[150]: (4, 2, 2, 2, 2)
There are your 8 subarrays, but grouped (4,2).
In [153]: SAA = SA.reshape(8,2,2,2)
In [154]: SAA[0]
Out[154]:
array([[[ 0,  1],
        [ 4,  5]],
       [[16, 17],
        [20, 21]]])
In [155]: SAA[1]
Out[155]:
array([[[32, 33],
        [36, 37]],
       [[48, 49],
        [52, 53]]])
Is the order right? I can change it by changing the axis in the 3 split operations.
Another approach is to write your indexing expressions as tuples
In [156]: x,y,z = 2,2,2 # define the split points
In [157]: ind = [(slice(None,x), slice(None,y), slice(None,z)),
     ...:        (slice(x,None), slice(None,y), slice(None,z)),]
# and so on
In [158]: S1=[X[i] for i in ind]
In [159]: S1[0]
Out[159]:
array([[[ 0,  1],
        [ 4,  5]],
       [[16, 17],
        [20, 21]]])
In [160]: S1[1]
Out[160]:
array([[[32, 33],
        [36, 37]],
       [[48, 49],
        [52, 53]]])
Looks like the same order I got before.
That ind list of tuples can be produced with some sort of iteration and/or list comprehension. Maybe even using itertools.product or np.mgrid to generate the permutations.
An itertools.product version could look something like
In [220]: def foo(i):
     ...:     return tuple(slice(None,x) if j else slice(x,None)
     ...:                  for j,x in zip(i,[2,2,2]))
In [221]: SAA = np.array([X[foo(i)] for i in
     ...:                 itertools.product(range(2),range(2),range(2))])
In [222]: SAA[-1]
Out[222]:
array([[[ 0,  1],
        [ 4,  5]],
       [[16, 17],
        [20, 21]]])
product iterates the last value fastest, so the list is reversed (compared to your target).
To generate a particular order it may be easier to list the tuples explicitly, e.g.:
In [227]: [X[foo(i)] for i in [(1,1,1),(0,1,1),(0,0,1)]]
Out[227]:
[array([[[ 0,  1],
         [ 4,  5]],
        [[16, 17],
         [20, 21]]]), array([[[32, 33],
         [36, 37]],
        [[48, 49],
         [52, 53]]]), array([[[40, 41],
         [44, 45]],
        [[56, 57],
         [60, 61]]])]
This highlights the fact that there are 2 distinct issues - generating the iteration pattern, and splitting the array based on this pattern.
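Both pieces can be combined into one sketch that generalises to any number of dimensions (split_corners is a made-up helper name; here each axis is split at its midpoint rather than at arbitrary points):

```python
import numpy as np
from itertools import product

def split_corners(G):
    """Split an n-d array into 2**n corner blocks, one slice pair per axis."""
    halves = [(slice(None, s // 2), slice(s // 2, None)) for s in G.shape]
    # product enumerates every combination of first/second half per axis
    return [G[ix] for ix in product(*halves)]

X = np.arange(4 ** 3).reshape(4, 4, 4)
parts = split_corners(X)
print(len(parts))        # 8
print(parts[0].shape)    # (2, 2, 2)
```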