How to create a specific upper triangular matrix? - python

I would like to create in Python (using NumPy) an upper triangular matrix of the form:

[[ 1, c, c^2],
 [ 0, 1, c  ],
 [ 0, 0, 1  ]]

where c is a rational number and the size of the matrix may vary (2, 3, 4, ...). Is there any smarter way to do it than creating rows and stacking them?

import numpy as np

r = 3
c = 3
i, j = np.indices((r, r))
np.triu(float(c)**(j - i))

Result:

array([[1., 3., 9.],
       [0., 1., 3.],
       [0., 0., 1.]])
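
Since the size may vary, the same idea wraps naturally into a function; a minimal sketch (the helper name is just for illustration):

import numpy as np

def unit_upper_triangular(c, n):
    # Illustrative helper: n x n unit upper triangular matrix whose
    # entry at (i, j) above the diagonal is c**(j - i).
    i, j = np.indices((n, n))
    return np.triu(float(c)**(j - i))

print(unit_upper_triangular(3, 4))
# [[ 1.  3.  9. 27.]
#  [ 0.  1.  3.  9.]
#  [ 0.  0.  1.  3.]
#  [ 0.  0.  0.  1.]]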

There are probably more straightforward solutions, but this is what I came up with:

import numpy as np
c = 5
m = np.triu(c**np.triu(np.ones((3, 3)), 1).cumsum(axis=1))
print(m)

output:
[[ 1.  5. 25.]
 [ 0.  1.  5.]
 [ 0.  0.  1.]]

Related

How to compute this kind of distance matrix with vectorization

I have a NumPy array A of shape 4 x 3 x 2. Each row below is the 2D coordinate of a node (every three nodes form a triangle in my finite element analysis):
array([[[0., 2.],    # node00
        [2., 2.],    # node01
        [1., 1.]],   # node02
       [[0., 2.],    # node10
        [1., 1.],    # node11
        [0., 0.]],   # node12
       [[2., 2.],    # node20
        [1., 1.],    # node21
        [2., 0.]],   # node22
       [[0., 0.],    # node30
        [1., 1.],    # node31
        [2., 0.]]])  # node32
I have another NumPy array B with the coordinates of pre-computed "centers":
array([[1.        , 1.66666667],   # center0
       [0.33333333, 1.        ],   # center1
       [1.66666667, 1.        ],   # center2
       [1.        , 0.33333333]])  # center3
How can I efficiently calculate a matrix C of Euclidean distances like this:

dist(center0, node00)  dist(center0, node01)  dist(center0, node02)
dist(center1, node10)  dist(center1, node11)  dist(center1, node12)
dist(center2, node20)  dist(center2, node21)  dist(center2, node22)
dist(center3, node30)  dist(center3, node31)  dist(center3, node32)

where dist is a Euclidean distance function such as math.dist or numpy.linalg.norm? That is, element (i, j) of the result matrix is the distance between center-i and node-ij.
Vectorized code instead of loops is needed, as my actual data (from medical imaging) is very large. With a nested loop, one can obtain the expected output as follows:

In [63]: for i in range(4):
    ...:     for j in range(3):
    ...:         C[i,j] = math.dist(A[i,j], B[i])

In [67]: C
Out[67]:
array([[1.05409255, 1.05409255, 0.66666667],
       [1.05409255, 0.66666667, 1.05409255],
       [1.05409255, 0.66666667, 1.05409255],
       [1.05409255, 0.66666667, 1.05409255]])
[Edit] This is a different question from Pairwise operations (distance) on two lists in numpy, as things like indexing need to be properly addressed here.
# Flatten the nodes to (12, 2) and repeat each center three times so the rows line up.
a = np.reshape(A, [12, 2])
b = B[np.repeat(np.arange(4), 3)]
# Row-wise Euclidean norms, reshaped back to the (4, 3) layout.
c = np.reshape(np.linalg.norm(a - b, axis=-1), (4, 3))
c
# array([[1.05409255, 1.05409255, 0.66666667],
#        [1.05409255, 0.66666667, 1.05409255],
#        [1.05409255, 0.66666667, 1.05409255],
#        [1.05409255, 0.66666667, 1.05409255]])
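
The explicit repeat can also be avoided with broadcasting; a short sketch, assuming the same A (shape (4, 3, 2)) and B (shape (4, 2)) as above:

# B[:, None, :] has shape (4, 1, 2), so the subtraction broadcasts center i
# against each of row i's three nodes; the norm over the last axis yields (4, 3).
C = np.linalg.norm(A - B[:, None, :], axis=-1)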

Depthwise stacking with NumPy

I am using the following code and getting an output NumPy ndarray of size (2, 9) that I am then trying to reshape into size (3, 3, 2). My hope was that calling reshape with (3, 3, 2) as the dimensions of the new array would take each row of the 2x9 array, shape it into a 3x3 array, and wrap those two 3x3 arrays in another array.
For instance, when I index the result I would like the following behavior:
input:  print(result)
output: [[ 2.  2.  1.  0.  8.  5.  2.  4.  5.]
         [ 4.  7.  5.  6.  4.  3. -3.  2.  1.]]

result = result.reshape((3,3,2))

DESIRED NEW BEHAVIOR

input:  print(result[:,:,0])
output: [[2. 2. 1.]
         [0. 8. 5.]
         [2. 4. 5.]]
input:  print(result[:,:,1])
output: [[ 4.  7.  5.]
         [ 6.  4.  3.]
         [-3.  2.  1.]]

ACTUAL NEW BEHAVIOR

input:  print(result[:,:,0])
output: [[2. 1. 8.]
         [2. 5. 7.]
         [6. 3. 2.]]
input:  print(result[:,:,1])
output: [[ 2.  0.  5.]
         [ 4.  4.  5.]
         [ 4. -3.  1.]]
Is there a way to tell reshape that I would like to go row by row along the depth dimension? I'm very confused as to why NumPy makes the default choice it does for reshape.

Here is the code I am using to produce the result matrix; it may or may not be necessary for analyzing my issue (I suspect it is not, but I am including it for completeness):
import numpy as np

# im2col implementation assuming width/height dimensions of filter and input_vol
# are the same (i.e. input_vol_width is equal to input_vol_height and the same
# for the filter spatial dimensions, although input_vol_width need not equal
# filter_vol_width)
def im2col(input, filters, input_vol_dims, filter_size_dims, stride):
    receptive_field_size = 1
    for dim in filter_size_dims:
        receptive_field_size *= dim
    output_width = output_height = int((input_vol_dims[0] - filter_size_dims[0]) / stride + 1)
    X_col = np.zeros((receptive_field_size, output_width * output_height))
    W_row = np.zeros((len(filters), receptive_field_size))
    pos = 0
    for i in range(0, input_vol_dims[0] - 1, stride):
        for j in range(0, input_vol_dims[1] - 1, stride):
            X_col[:, pos] = input[i:i+stride+1, j:j+stride+1, :].ravel()
            pos += 1
    for i in range(len(filters)):
        W_row[i, :] = filters[i].ravel()
    bias = np.array([[1], [0]])
    result = np.dot(W_row, X_col) + bias
    print(result)

if __name__ == '__main__':
    x = np.zeros((7, 7, 3))
    x[:,:,0] = np.array([[0,0,0,0,0,0,0],
                         [0,1,1,0,0,1,0],
                         [0,2,2,1,1,1,0],
                         [0,2,0,2,1,0,0],
                         [0,2,0,0,1,0,0],
                         [0,0,0,1,1,0,0],
                         [0,0,0,0,0,0,0]])
    x[:,:,1] = np.array([[0,0,0,0,0,0,0],
                         [0,2,0,1,0,2,0],
                         [0,0,1,2,1,0,0],
                         [0,2,0,0,2,0,0],
                         [0,2,1,0,0,0,0],
                         [0,1,2,2,2,0,0],
                         [0,0,0,0,0,0,0]])
    x[:,:,2] = np.array([[0,0,0,0,0,0,0],
                         [0,0,0,2,1,1,0],
                         [0,0,0,2,2,0,0],
                         [0,2,1,0,2,2,0],
                         [0,0,1,2,1,2,0],
                         [0,2,0,0,2,1,0],
                         [0,0,0,0,0,0,0]])
    w0 = np.zeros((3,3,3))
    w0[:,:,0] = np.array([[1,1,0],
                          [1,-1,1],
                          [-1,1,1]])
    w0[:,:,1] = np.array([[-1,-1,0],
                          [1,-1,1],
                          [1,-1,-1]])
    w0[:,:,2] = np.array([[0,0,0],
                          [0,0,1],
                          [1,0,1]])
    w1 = np.zeros((3,3,3))
    w1[:,:,0] = np.array([[0,-1,1],
                          [1,1,0],
                          [1,1,0]])
    w1[:,:,1] = np.array([[-1,-1,1],
                          [1,0,1],
                          [0,1,1]])
    w1[:,:,2] = np.array([[-1,-1,0],
                          [1,-1,0],
                          [1,1,0]])
    filters = np.array([w0, w1])
    im2col(x, filters, x.shape, w0.shape, 2)
Let's reshape a bit differently and then do a depth-wise dstack:

arr = np.dstack(result.reshape((-1, 3, 3)))

arr[..., 0]
array([[2., 2., 1.],
       [0., 8., 5.],
       [2., 4., 5.]])
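
An equivalent sketch, assuming the same result array: np.stack along a new last axis makes the join explicit:

import numpy as np

result = np.array([[2., 2., 1., 0., 8., 5., 2., 4., 5.],
                   [4., 7., 5., 6., 4., 3., -3., 2., 1.]])
# reshape(-1, 3, 3) yields two 3x3 blocks; stacking them along a new
# last axis gives the desired (3, 3, 2) layout.
arr = np.stack(result.reshape((-1, 3, 3)), axis=-1)
print(arr[:, :, 0])  # the first 3x3 block, as above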
Reshape keeps the original order of the elements:

In [215]: x = np.array(x)
In [216]: x.shape
Out[216]: (2, 9)

Reshaping the size-9 dimension into 3x3 keeps the element order that you want:

In [217]: x.reshape(2,3,3)
Out[217]:
array([[[ 2.,  2.,  1.],
        [ 0.,  8.,  5.],
        [ 2.,  4.,  5.]],
       [[ 4.,  7.,  5.],
        [ 6.,  4.,  3.],
        [-3.,  2.,  1.]]])

But you have to index it with [0,:,:] to see one of those blocks.

To see the same blocks with [:,:,0], you have to move the size-2 dimension to the end. COLDSPEED's dstack does that by iterating on the first dimension and joining the two 3x3 blocks on a new third dimension. Another way is to use transpose to reorder the dimensions:

In [218]: x.reshape(2,3,3).transpose(1,2,0)
Out[218]:
array([[[ 2.,  4.],
        [ 2.,  7.],
        [ 1.,  5.]],
       [[ 0.,  6.],
        [ 8.,  4.],
        [ 5.,  3.]],
       [[ 2., -3.],
        [ 4.,  2.],
        [ 5.,  1.]]])

In [219]: y = _
In [220]: y.shape
Out[220]: (3, 3, 2)

In [221]: y[:,:,0]
Out[221]:
array([[2., 2., 1.],
       [0., 8., 5.],
       [2., 4., 5.]])
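
For what it's worth, np.moveaxis expresses the same reordering a little more directly; a one-line sketch assuming the same x:

y = np.moveaxis(x.reshape(2, 3, 3), 0, -1)  # move the size-2 axis to the end; shape (3, 3, 2)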

Is there a built-in/easy LDU decomposition method in Numpy?

I see the Cholesky decomposition in numpy.linalg.cholesky, but I could not find an LDU decomposition. Can anyone suggest a function to use?

SciPy has an LU decomposition function: scipy.linalg.lu. Note that it also introduces a permutation matrix P into the mix. This answer gives a nice explanation of why that happens.

If you specifically need LDU, then you can just normalize the U matrix to pull out D.
Here's how you might do it:
>>> import numpy as np
>>> import scipy.linalg as la
>>> a = np.array([[2, 4, 5],
...               [1, 3, 2],
...               [4, 2, 1]])
>>> (P, L, U) = la.lu(a)
>>> P
array([[ 0.,  1.,  0.],
       [ 0.,  0.,  1.],
       [ 1.,  0.,  0.]])
>>> L
array([[ 1.        ,  0.        ,  0.        ],
       [ 0.5       ,  1.        ,  0.        ],
       [ 0.25      ,  0.83333333,  1.        ]])
>>> U
array([[ 4. ,  2. ,  1. ],
       [ 0. ,  3. ,  4.5],
       [ 0. ,  0. , -2. ]])
>>> D = np.diag(np.diag(U))    # D is just the diagonal of U
>>> U /= np.diag(U)[:, None]   # Normalize rows of U so its diagonal is all ones
>>> P.dot(L.dot(D.dot(U)))     # Check: should reconstruct a
array([[ 2.,  4.,  5.],
       [ 1.,  3.,  2.],
       [ 4.,  2.,  1.]])
Try this:

import numpy as np

A = np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]])
U = np.triu(A, 1)        # strictly upper triangular part
L = np.tril(A, -1)       # strictly lower triangular part
D = np.tril(np.triu(A))  # diagonal part
# Note: this is the additive split A = L + D + U, not the
# multiplicative LDU factorization shown above.
print(A)
print(L)
print(D)
print(U)

Matrix multiplication issue for LU decomposition?

I'm trying to solve Ax = b using LU decomposition, but somehow I can't recover A by multiplying L*U. Here are the code and the results:

from numpy import array, dot
from numpy.linalg import inv
from scipy.linalg import lu

A = array([2, 3, 5, 4]).reshape(2, 2)
b = array([4, 3])
P, L, U = lu(A)
And the results for L and U:

L:
array([[ 1. ,  0. ],
       [ 0.4,  1. ]])
U:
array([[ 5. ,  4. ],
       [ 0. ,  1.4]])

Result for L*U:

dot(L, U):
array([[ 5.,  4.],
       [ 2.,  3.]])

So instead of ((2, 3), (5, 4)) I'm getting ((5., 4.), (2., 3.)), and as a result I can't solve Ax = b. What is the reason for getting such an L*U result?
Oh, it seems I totally forgot about the permutation matrix P. Multiplying the inverse of P with L*U solves the problem:

dot(inv(P), dot(L, U)):
array([[ 2.,  3.],
       [ 5.,  4.]])
According to Wikipedia, PA = LU. So, if you want A = LU, you can pass permute_l=True to the lu function:

>>> a = np.array([2, 3, 5, 4]).reshape(2, 2)
>>> l, u = scipy.linalg.lu(a, permute_l=True)
>>> l.dot(u)
array([[ 2.,  3.],
       [ 5.,  4.]])
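
Since the original goal was solving Ax = b, it may also be worth noting that scipy.linalg.lu_factor and lu_solve take care of the permutation bookkeeping internally; a minimal sketch:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([2, 3, 5, 4]).reshape(2, 2)
b = np.array([4, 3])

lu_piv = lu_factor(A)         # packed LU factors plus pivot indices
x = lu_solve(lu_piv, b)       # solves A x = b using the factorization
print(np.allclose(A @ x, b))  # True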

How to split an array based on minimum row value using vectorization

I am trying to figure out how to vectorize the following for loop, which splits an array based on the index of the lowest value in each row. I've looked at this link and have been trying to use the numpy.where function, but so far without success.

For example, if an array has n columns, then all the rows where col[0] has the lowest value go into one array, all the rows where col[1] has the lowest value go into another, and so on.
Here's the code using a for loop:

import numpy
a = numpy.array([[ 0. 1. 3.]
                 [ 0. 1. 3.]
                 [ 0. 1. 3.]
                 [ 1. 0. 2.]
                 [ 1. 0. 2.]
                 [ 1. 0. 2.]
                 [ 3. 1. 0.]
                 [ 3. 1. 0.]
                 [ 3. 1. 0.]])
result_0 = []
result_1 = []
result_2 = []
for value in a:
    if value[0] <= value[1] and value[0] <= value[2]:
        result_0.append(value)
    elif value[1] <= value[0] and value[1] <= value[2]:
        result_1.append(value)
    else:
        result_2.append(value)
print(result_0)
>> [array([ 0. 1. 3.]), array([ 0. 1. 3.]), array([ 0. 1. 3.])]
print(result_1)
>> [array([ 1. 0. 2.]), array([ 1. 0. 2.]), array([ 1. 0. 2.])]
print(result_2)
>> [array([ 3. 1. 0.]), array([ 3. 1. 0.]), array([ 3. 1. 0.])]
First, use argsort to see where the lowest value in each row is:

>>> a.argsort(axis=1)
array([[0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [1, 0, 2],
       [1, 0, 2],
       [1, 0, 2],
       [2, 1, 0],
       [2, 1, 0],
       [2, 1, 0]])

The first column gives, for each row, the index of that row's smallest element.
Now you can build the results:

>>> sortidx = a.argsort(axis=1)
>>> [a[sortidx[:, 0] == i] for i in range(a.shape[1])]
[array([[ 0.,  1.,  3.],
        [ 0.,  1.,  3.],
        [ 0.,  1.,  3.]]),
 array([[ 1.,  0.,  2.],
        [ 1.,  0.,  2.],
        [ 1.,  0.,  2.]]),
 array([[ 3.,  1.,  0.],
        [ 3.,  1.,  0.],
        [ 3.,  1.,  0.]])]

So it is done with only a single loop over the columns, which gives a huge speedup when the number of rows is much larger than the number of columns.
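
Equivalently, since only the first column of the argsort result is used, np.argmin yields the same row labels directly; a short sketch assuming the same a:

idx = a.argmin(axis=1)  # column index of each row's minimum
buckets = [a[idx == i] for i in range(a.shape[1])]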
This is not the best solution since it relies on plain Python loops and is not very efficient on large data sets, but it should get you started.

The idea is to create a list of "buckets", one per column of the longest row. Then, for each row of a, find the offset of its smallest element and append the row to the corresponding bucket. Finally, print the buckets in the last loop.
Solution using loops:

import numpy

# random data set
a = numpy.array([[0, 1, 3],
                 [0, 1, 3],
                 [0, 1, 3],
                 [1, 0, 2],
                 [1, 0, 2],
                 [1, 0, 2],
                 [3, 1, 0],
                 [3, 1, 0],
                 [3, 1, 0]])

# create a list of result buckets, one per element of the longest entry
results = list()
for l in range(max(len(i) for i in a)):
    results.append(list())
# don't build it as results = [[]] * n: every entry would reference
# the same list and you would get duplicates

for value in a:
    res_offset, _val = min(enumerate(value), key=lambda x: x[1])  # offset of the min value
    results[res_offset].append(value)  # store the original row in the correct bucket

# print for visualization
for c, r in enumerate(results):
    print("result_%s: %s" % (c, r))
Outputs:
result_0: [array([0, 1, 3]), array([0, 1, 3]), array([0, 1, 3])]
result_1: [array([1, 0, 2]), array([1, 0, 2]), array([1, 0, 2])]
result_2: [array([3, 1, 0]), array([3, 1, 0]), array([3, 1, 0])]
I found a much easier way to do this. I hope that I am interpreting the OP correctly.

My sense is that the OP wants to create a slice of the larger array based upon some set of conditions.

Note that the code above used to create the array does not seem to work, at least in Python 3.5. I generated the array as follows:

a = np.array([0., 1., 3., 0., 1., 3., 0., 1., 3.,
              1., 0., 2., 1., 0., 2., 1., 0., 2.,
              3., 1., 0., 3., 1., 0., 3., 1., 0.]).reshape([9, 3])
Next, I sliced the original array into smaller arrays; NumPy has builtins to help with this:

result_0 = a[np.logical_and(a[:,0] <= a[:,1], a[:,0] <= a[:,2])]
result_1 = a[np.logical_and(a[:,1] <= a[:,0], a[:,1] <= a[:,2])]
result_2 = a[np.logical_and(a[:,2] <= a[:,0], a[:,2] <= a[:,1])]

This generates new NumPy arrays that match the given conditions (note that a row tied for its minimum across two columns satisfies both masks and will appear in more than one result).
Note that if the user wants to convert these individual rows into a list of arrays, he/she can just enter the following code to obtain the result:

result_0 = [np.array(x) for x in result_0.tolist()]
result_1 = [np.array(x) for x in result_1.tolist()]
result_2 = [np.array(x) for x in result_2.tolist()]

This should generate the outcome requested in the OP.
