I want to solve the linear equation Ax = B, where each A is a 2D slice of a 3D array. For example:
In Ax = B,
suppose A.shape is (2,3,3),
i.e. A = [[[1,2,3],[1,2,3],[1,2,3]], [[1,2,3],[1,2,3],[1,2,3]]],
and B.shape is (3,1),
i.e. [1,2,3]^T.
I want to find each 3-vector x = (x_1, x_2, x_3) satisfying Ax = B.
What comes to mind is to multiply B by np.ones((2,3)) and use np.dot with the inverse of each 3x3 slice of A (e.g. each row A[i][j] = [1,2,3]). But that needs a loop, which consumes a lot of time as the matrix size grows.
How can I solve many Ax = B equations without a loop?
I made the elements of A and B identical, but as you probably know, that is just for the example.
For invertible matrices, we could use np.linalg.inv on the 3D array A and then use tensor matrix-multiplication with B so that we lose the last and first axes of those two arrays respectively, like so -
np.tensordot( np.linalg.inv(A), B, axes=((-1),(0)))
Sample run -
In [150]: A
Out[150]:
array([[[ 0.70454189, 0.17544101, 0.24642533],
[ 0.66660371, 0.54608536, 0.37250876],
[ 0.18187631, 0.91397945, 0.55685133]],
[[ 0.81022308, 0.07672197, 0.7427768 ],
[ 0.08990586, 0.93887203, 0.01665071],
[ 0.55230314, 0.54835133, 0.30756205]]])
In [151]: B = np.array([[1],[2],[3]])
In [152]: np.linalg.solve(A[0], B)
Out[152]:
array([[ 0.23594665],
[ 2.07332454],
[ 1.90735086]])
In [153]: np.linalg.solve(A[1], B)
Out[153]:
array([[ 8.43831557],
[ 1.46421396],
[-8.00947932]])
In [154]: np.tensordot( np.linalg.inv(A), B, axes=((-1),(0)))
Out[154]:
array([[[ 0.23594665],
[ 2.07332454],
[ 1.90735086]],
[[ 8.43831557],
[ 1.46421396],
[-8.00947932]]])
Alternatively, the tensor matrix-multiplication could be replaced by np.matmul, like so -
np.matmul(np.linalg.inv(A), B)
On Python 3.x, we could use the @ operator for the same functionality -
np.linalg.inv(A) @ B
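Worth noting: in reasonably recent NumPy, np.linalg.solve itself broadcasts over stacked matrices (its gufunc signature is (m,m),(m,n)->(m,n)), so we could skip the explicit inverse entirely, like so -
x = np.linalg.solve(A, B)   # shape (2, 3, 1), same values as the tensordot result
Avoiding the explicit inverse is generally both faster and numerically more stable.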
I am new to python/numpy.
I need to do the following calculation:
for an array of discrete times t, calculate $e^{At}$ for a $2\times 2$ matrix $A$
What I did:
def calculate(t_, x_0, v_0, omega_0, c):
    # define A
    a_11, a_12, a_21, a_22 = 0, 1, -omega_0^2, -c
    A = np.matrix([[a_11, a_12], [a_21, a_22]])
    print A
    # use vectorization
    temps = np.array(t_)
    A_ = np.array([A for k in range(1, n+1, 1)])
    temps*A_
    x_ = scipy.linalg.expm(temps*A)
    v_ = A*scipy.linalg.expm(temps*A)
    return x_, v_
n=10
omega_0=1
c=1
x_0=1
v_0=1
t_ = [float(5*k*np.pi/n) for k in range (1,n+1,1)]
x_, v_ = calculate(t_,x_0,v_0,omega_0,c)
However, I get this error when multiplying A_ (the array containing n copies of A) and temps (containing the times for which I want to calculate exp(At)):
ValueError: operands could not be broadcast together with shapes (10,) (10,2,2)
As I understand vectorization, each element in A_ would be multiplied by the element at the same index of temps; but I think I don't get it right.
Any help/comments much appreciated.
A pure numpy calculation of t_ is (creates an array instead of a list):
In [254]: t = 5*np.arange(1,n+1)*np.pi/n
In [255]: t
Out[255]:
array([ 1.57079633, 3.14159265, 4.71238898, 6.28318531, 7.85398163,
9.42477796, 10.99557429, 12.56637061, 14.13716694, 15.70796327])
In [256]: a_11,a_12, a_21, a_22=0,1,-omega_0^2,-c
In [257]: a_11
Out[257]: 0
In [258]: A = np.array([[a_11,a_12], [a_21, a_22]])
In [259]: A
Out[259]:
array([[ 0, 1],
[-3, -1]])
In [260]: t.shape
Out[260]: (10,)
In [261]: A.shape
Out[261]: (2, 2)
In [262]: A_ = np.array([A for k in range (1,n+1,1)])
In [263]: A_.shape
Out[263]: (10, 2, 2)
A_ is an np.ndarray. I made A an np.ndarray as well; yours is np.matrix, but your A_ will still be an np.ndarray, since np.matrix can only be 2d, whereas A_ is 3d. (Note also that -omega_0^2 uses ^, Python's bitwise XOR, which is why a_21 came out as -3; you presumably meant -omega_0**2.)
So t * A_ attempts elementwise array multiplication, hence the broadcasting error with shapes (10,) and (10,2,2).
To do that elementwise multiplication right you need something like
In [264]: result = t[:,None,None]*A[None,:,:]
In [265]: result.shape
Out[265]: (10, 2, 2)
But if you want matrix multiplication of the (10,) with (10,2,2), then einsum does it easily:
In [266]: result1 = np.einsum('i,ijk', t, A_)
In [267]: result1
Out[267]:
array([[ 0. , 86.39379797],
[-259.18139392, -86.39379797]])
np.dot can't do it because its rule is 'last with 2nd to last'. tensordot can, but I'm more comfortable with einsum.
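For completeness, a tensordot version of that same contraction (summing t against the first axis of A_) should be -
result2 = np.tensordot(t, A_, axes=(0, 0))   # same (2, 2) result as the einsum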
But that einsum expression makes it obvious (to me) that I can get the same thing from the elementwise *, by summing on the 1st axis:
In [268]: (t[:,None,None]*A[None,:,:]).sum(axis=0)
Out[268]:
array([[ 0. , 86.39379797],
[-259.18139392, -86.39379797]])
Or (t[:,None,None]*A[None,:,:]).cumsum(axis=0) to get a 2x2 for each time.
This is what I would do.
import numpy as np
from scipy.linalg import expm
A = np.array([[1, 2], [3, 4]])
for t in np.linspace(0, 5*np.pi, 20):
    print(expm(t*A))
No attempt at vectorization here. The expm function applies to one matrix at a time, and it surely takes the bulk of computation time. No need to worry about the cost of multiplying A by a scalar.
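Tying this back to the question's setup, a minimal sketch (using the question's n, omega_0 and c, and ** for the square, since ^ is bitwise XOR in Python) that collects one matrix exponential per time step might look like -
import numpy as np
from scipy.linalg import expm

n = 10
omega_0, c = 1, 1
A = np.array([[0.0, 1.0], [-omega_0**2, -c]])   # note ** (power), not ^ (XOR)
t_ = 5 * np.arange(1, n + 1) * np.pi / n
expAt = np.array([expm(t * A) for t in t_])     # shape (n, 2, 2), one exp(A*t) per time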
Using the numpy gradient function, one obtains a list of arrays, e.g. in 3 dimensions, 3 arrays corresponding to the x, y, z axes. I would like to normalize the gradient at each element.
What I have right now is:
gradient = np.gradient(self.image)
gradient_norm = np.sqrt(sum(x**2 for x in gradient))
for dim in gradient:
    np.divide(dim, gradient_norm, out=dim)
    np.nan_to_num(dim, copy=False)
It seems highly verbose and inelegant for something which I think is not an exotic problem. Also the above does quite a bit of copying which I would like to avoid (as a bonus).
Compute the norm with np.linalg.norm and simply divide iteratively -
norms = np.linalg.norm(gradient,axis=0)
gradient = [np.where(norms==0,0,i/norms) for i in gradient]
Alternatively, if you don't mind an (n+1)-dim array as output -
out = np.where(norms==0,0,gradient/norms)
np.linalg.norm can broadcast with the keepdims=True keyword arg:
g = (np.arange(9) - 4).reshape((3, 3))
g
Out[215]:
array([[-4, -3, -2],
[-1, 0, 1],
[ 2, 3, 4]])
col_norm = g/np.linalg.norm(g, axis=0, keepdims=True)
col_norm
Out[217]:
array([[-0.87287156, -0.70710678, -0.43643578],
[-0.21821789, 0. , 0.21821789],
[ 0.43643578, 0.70710678, 0.87287156]])
row_norm = g/np.linalg.norm(g, axis=1, keepdims=True)
row_norm
Out[219]:
array([[-0.74278135, -0.55708601, -0.37139068],
[-0.70710678, 0. , 0.70710678],
[ 0.37139068, 0.55708601, 0.74278135]])
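Applied to the gradient question above, a sketch along these lines (assuming image is the N-d array being processed, and using np.errstate to silence the 0/0 warnings at flat points) could be -
grad = np.stack(np.gradient(image))                  # shape (ndim,) + image.shape
norm = np.linalg.norm(grad, axis=0, keepdims=True)
with np.errstate(invalid='ignore', divide='ignore'):
    unit = np.where(norm == 0, 0, grad / norm)       # unit-length gradient vectors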
I have a set of data in Python like:
x y angle
I want to calculate the distance between every possible pair of points and plot those distances against the difference between the two angles.
x, y, a = np.loadtxt('w51e2-pa-2pk.log', unpack=True)
n = 0
f=(((x[n])-x[n+1:])**2+((y[n])-y[n+1:])**2)**0.5
d = a[n]-a[n+1:]
plt.scatter(f,d)
There are 255 points in my data.
f is the distance and d is the difference between two angles.
My question is: can I set n = [1,2,3,...,255] and redo the calculation to get the f and d of all possible pairs?
You can obtain the pairwise distances through broadcasting by considering it as an outer operation on the array of 2-dimensional vectors as follows:
vecs = np.stack((x, y)).T
np.linalg.norm(vecs[np.newaxis, :] - vecs[:, np.newaxis], axis=2)
For example,
In [1]: import numpy as np
...: x = np.array([1, 2, 3])
...: y = np.array([3, 4, 6])
...: vecs = np.stack((x, y)).T
...: np.linalg.norm(vecs[np.newaxis, :] - vecs[:, np.newaxis], axis=2)
...:
Out[1]:
array([[ 0. , 1.41421356, 3.60555128],
[ 1.41421356, 0. , 2.23606798],
[ 3.60555128, 2.23606798, 0. ]])
Here, the (i, j)'th entry is the distance between the i'th and j'th vectors.
The case of the pairwise differences between angles is similar, but simpler, as you only have one dimension to deal with:
In [2]: a = np.array([10, 12, 15])
...: a[np.newaxis, :] - a[: , np.newaxis]
...:
Out[2]:
array([[ 0, 2, 5],
[-2, 0, 3],
[-5, -3, 0]])
Moreover, plt.scatter does not care that the results are given as matrices, and putting everything together using the notation of the question, you can obtain the plot of angles by distances by doing something like
vecs = np.stack((x, y)).T
f = np.linalg.norm(vecs[np.newaxis, :] - vecs[:, np.newaxis], axis=2)
d = angle[np.newaxis, :] - angle[: , np.newaxis]
plt.scatter(f, d)
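If SciPy is available, the condensed pairwise form avoids building full N x N matrices; a sketch, assuming x, y, a are as loaded in the question and matplotlib is imported as plt -
from scipy.spatial.distance import pdist
vecs = np.column_stack((x, y))
f = pdist(vecs)                        # each unordered pair exactly once
i, j = np.triu_indices(len(a), k=1)    # index pairs in the same order as pdist
d = a[i] - a[j]
plt.scatter(f, d)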
You have to use a for loop and range() to iterate over n, e.g. like this:
n = len(x)
for i in range(n):
    # do something with the current index
    # e.g. print the points
    print x[i]
    print y[i]
But note that if you use i+1 inside the last iteration, this will already be outside of your list.
Also be careful with (x[n])-x[n+1:]: x[n] is a single value while x[n+1:] is the slice starting from the (n+1)'th element, and you cannot subtract a sequence from a single value if x is a plain Python list. It only works in your snippet because np.loadtxt returns NumPy arrays, which broadcast the subtraction element-wise.
Maybe you will have to even use two nested loops to do what you want. I guess that you want to calculate the distance between each point so a two dimensional array may be the data structure you want.
If you are interested in all combinations of the points in x and y, I suggest using itertools, which will give you all possible combinations. Then you can do it as follows:
import itertools
f = [((x[i]-x[j])**2 + (y[i]-y[j])**2)**0.5 for i,j in itertools.product(range(255), range(255)) if i!=j]
# and similar for the angles
But maybe there is even an easier way...
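One such easier way, assuming x and y as loaded in the question, is itertools.combinations, which yields each unordered pair exactly once, so the if i!=j guard becomes unnecessary:
import itertools
f = [((x[i]-x[j])**2 + (y[i]-y[j])**2)**0.5
     for i, j in itertools.combinations(range(len(x)), 2)]
# and similarly for the angle differences:
# d = [a[i]-a[j] for i, j in itertools.combinations(range(len(a)), 2)]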
Suppose array_1 and array_2 are two arrays of matrices of the same sizes. Is there any vectorised way of multiplying, element-wise, the elements of these two arrays (whose elements' multiplication is well defined)?
The dummy code:
def mat_multiply(array_1, array_2):
    size = np.shape(array_1)[0]
    result = np.array([])
    for i in range(size):
        result = np.append(result, np.dot(array_1[i], array_2[i]), axis=0)
    return np.reshape(result, (size, 2))
example input:
a=[[[1,2],[3,4]],[[1,2],[3,4]]]
b=[[1,3],[4,5]]
output:
[[ 7. 15.]
[ 14. 32.]]
Contrary to your first sentence, a and b are not the same size. But let's focus on your example.
So you want this - 2 dot products, one for each row of a and b
np.array([np.dot(x,y) for x,y in zip(a,b)])
or to avoid appending
X = np.zeros((2,2))
for i in range(2):
    X[i,...] = np.dot(a[i],b[i])
the dot product can be expressed with einsum (matrix index notation) as
[np.einsum('ij,j->i',x,y) for x,y in zip(a,b)]
so the next step is to index that first dimension:
np.einsum('kij,kj->ki',a,b)
I'm quite familiar with einsum, but it still took a bit of trial and error to figure out what you want. Now that the problem is clear I can compute it in several other ways
A, B = np.array(a), np.array(b)
np.multiply(A,B[:,np.newaxis,:]).sum(axis=2)
(A*B[:,None,:]).sum(2)
np.dot(A,B.T)[0,...].T             # note: works here only because a[0] == a[1]
np.tensordot(b,a,(-1,-1))[:,0,:]   # likewise relies on the repeated matrix
I find it helpful to work with arrays that have different sizes. For example if A were (2,3,4) and B (2,4), it would be more obvious the dot sum has to be on the last dimension.
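To make that concrete, a quick shape check with such mismatched (hypothetical) sizes -
A2 = np.arange(24).reshape(2, 3, 4)
B2 = np.arange(8).reshape(2, 4)
np.einsum('kij,kj->ki', A2, B2).shape   # (2, 3): the contraction is clearly over the last axis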
Another numpy iteration tool is np.nditer. einsum uses this (in C).
http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html
it = np.nditer([A, B, None], flags=['external_loop'],
               op_axes=[[0,1,2], [0,-1,1], None])
for x,y,w in it:
    # x, y are shape (2,)
    w[...] = np.dot(x,y)
it.operands[2][...,0]
Avoiding that [...,0] step, requires a more elaborate setup.
C = np.zeros((2,2))
it = np.nditer([A, B, C], flags=['external_loop','reduce_ok'],
               op_axes=[[0,1,2], [0,-1,1], [0,1,-1]],
               op_flags=[['readonly'],['readonly'],['readwrite']])
for x,y,w in it:
    w[...] = np.dot(x,y)
    # w[...] += x*y
print C
# array([[ 7., 15.],[ 14., 32.]])
There's one more option that @hpaulj left out in his extensive and comprehensive list of options:
>>> a = np.array(a)
>>> b = np.array(b)
>>> from numpy.core.umath_tests import matrix_multiply
>>> matrix_multiply.signature
'(m,n),(n,p)->(m,p)'
>>> matrix_multiply(a, b[..., np.newaxis])
array([[[ 7],
[15]],
[[14],
[32]]])
>>> matrix_multiply(a, b[..., np.newaxis]).shape
(2L, 2L, 1L)
>>> np.squeeze(matrix_multiply(a, b[..., np.newaxis]), axis=-1)
array([[ 7, 15],
[14, 32]])
The nice thing about matrix_multiply is that, it being a gufunc, it will work not only with 1D arrays of matrices, but also with broadcastable arrays. As an example, if instead of multiplying the first matrix with the first vector, and the second matrix with the second vector, you wanted to compute all possible multiplications, you could simply do:
>>> a = np.arange(8).reshape(2, 2, 2) # to have different matrices
>>> np.squeeze(matrix_multiply(a[...,np.newaxis, :, :],
... b[..., np.newaxis]), axis=-1)
array([[[ 3, 11],
[ 5, 23]],
[[19, 27],
[41, 59]]])
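As a side note, in current NumPy the same gufunc behaviour is built into np.matmul (the @ operator), so an equivalent of the earlier call, shown here with the original a from the example, would be:
>>> a = np.array([[[1, 2], [3, 4]], [[1, 2], [3, 4]]])
>>> np.squeeze(np.matmul(a, b[..., np.newaxis]), axis=-1)
array([[ 7, 15],
       [14, 32]])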