I am looking for a way to achieve the following in Python and can't figure out how to do it:
a=[[0,1],[1,0],[1,1]]
b=[1,0,5]
c=hocuspocus(a,b)
--> c=[[0,1],[0,0],[5,5]]
So basically I would like to multiply the different matrix rows in a with the list b.
Thanks a lot in advance!
hocuspocus = lambda a,b: [[r*q for r in p] for p, q in zip(a,b)]
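For example, a quick check with the lists from the question:
a = [[0, 1], [1, 0], [1, 1]]
b = [1, 0, 5]
print(hocuspocus(a, b))  # [[0, 1], [0, 0], [5, 5]]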
Use NumPy: its arrays support elementwise multiplication with broadcasting, and it has many other useful tools for matrices.
import numpy as np

a = np.array([[0, 1], [1, 0], [1, 1]])
b = np.array([1, 0, 5])
prod = a * b[:, None]  # scale each row of a by the matching entry of b
Python lists don't support that behaviour directly, but NumPy arrays do elementwise multiplication with broadcasting (and various other matrix operations you might want) directly:
>>> a
array([[0, 1, 1],
       [1, 0, 1]])
>>> b
array([1, 0, 5])
>>> a * b
array([[0, 0, 5],
       [1, 0, 5]])
I've been trying to visualize QR decomposition in a step-by-step fashion, but I'm not getting the expected results. I'm new to numpy, so it'd be nice if an expert eye could spot what I might be missing:
import numpy as np
from scipy import linalg

A = np.array([[12, -51, 4],
              [6, 167, -68],
              [-4, 24, -41]])

# Givens
v = np.array([12, 6])
vnorm = np.linalg.norm(v)
W_12 = np.array([[v[0]/vnorm, v[1]/vnorm, 0],
                 [-v[1]/vnorm, v[0]/vnorm, 0],
                 [0, 0, 1]])
W_12 * A  # this should return a matrix such that [1,0] = 0

# gram-schmidt
A[:, 0]
v = np.linalg.norm(A[:, 0]) * np.array([1, 0, 0])
u = (A[:, 0] - v)
u = u / np.linalg.norm(u)
W1 = np.eye(3) - 2 * np.outer(u, u.transpose())
W1 * A  # this matrix's first column should look like [a, 0, 0]
Any help clarifying why these intermediate results don't show the properties they are supposed to would be greatly appreciated.
NumPy is designed to work with homogeneous multi-dimensional arrays; it is not specifically a linear algebra package. So, by design, the * operator is elementwise multiplication, not the matrix product.
If you want to get the matrix product, there are a few ways:
You can create np.matrix objects, rather than np.ndarray objects, for which the * operator is the matrix product.
You can also use the @ operator (Python 3.5+), as in W_12 @ A, which is the matrix product.
Or you can use np.dot(W_12, A) or W_12.dot(A), which for 2-D arrays also computes the matrix product.
Any one of these, using the data you give, returns the following for the Givens rotation:
>>> np.dot(W_12, A)[1, 0]
-2.2204460492503131e-16
And this for the Gram-Schmidt step:
>>> (W1.dot(A))[:, 0]
array([ 1.40000000e+01, -4.44089210e-16, 4.44089210e-16])
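Putting it together, here is a small sketch (assuming Python 3.5+ for the @ operator) that reruns both of the question's checks with a true matrix product:
import numpy as np

A = np.array([[12, -51, 4],
              [6, 167, -68],
              [-4, 24, -41]])

# Givens rotation: the [1, 0] element should vanish
v = np.array([12, 6])
vnorm = np.linalg.norm(v)
W_12 = np.array([[v[0]/vnorm, v[1]/vnorm, 0],
                 [-v[1]/vnorm, v[0]/vnorm, 0],
                 [0, 0, 1]])
print((W_12 @ A)[1, 0])   # ~0, up to floating-point error

# 'Gram-Schmidt' (Householder) step: first column becomes [a, 0, 0]
u = A[:, 0] - np.linalg.norm(A[:, 0]) * np.array([1, 0, 0])
u = u / np.linalg.norm(u)
W1 = np.eye(3) - 2 * np.outer(u, u)
print((W1 @ A)[:, 0])     # ~[14, 0, 0]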
I have two arrays that are like this:
x = [a,b]
y = [p,q,r]
I need to multiply these together to get a product c, which should look like this:
c = [a*p, a*q, a*r, b*p, b*q, b*r]
However x*y gives the following error,
ValueError: operands could not be broadcast together with shapes (2,) (3,)
I can do something like this:
c = []
for i in range(len(x)):
    for t in range(len(y)):
        c.append(x[i] * y[t])
But the lengths of my x and y are quite large, so what's the most efficient way to do this multiplication without looping?
You can use NumPy broadcasting for pairwise elementwise multiplication between x and y and then flatten with .ravel(), like so -
(x[:,None]*y).ravel()
Or use outer product and then flatten -
np.outer(x,y).ravel()
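For instance, a quick check of both approaches on small arrays (assuming x and y are NumPy arrays; plain lists would need np.asarray first):
import numpy as np

x = np.array([2, 3])
y = np.array([10, 20, 30])

print((x[:, None] * y).ravel())  # [20 40 60 30 60 90]
print(np.outer(x, y).ravel())    # [20 40 60 30 60 90]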
Use Numpy dot...
>>> import numpy as np
>>> a=np.arange(1,3)# [1,2]
>>> b=np.arange(1,4)# [1,2,3]
>>> np.dot(a[:,None],b[None])
array([[1, 2, 3],
       [2, 4, 6]])
>>> np.dot(a[:,None],b[None]).ravel()
array([1, 2, 3, 2, 4, 6])
I wish to initialise a matrix A using the equation A[i, j] = f(i, j) for some f (it's not important what this is).
How can I do so concisely avoiding a situation where I have two for loops?
numpy.fromfunction fits the bill here.
Example from doc:
>>> import numpy as np
>>> np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int)
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
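Note that np.fromfunction calls the function once with whole index arrays (not once per element), so f has to work elementwise on arrays; the lambda above does, since + is elementwise on NumPy arrays. For instance:
>>> np.fromfunction(lambda i, j: (i + 1) * (j + 1), (3, 3), dtype=int)
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])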
One could also get the indices of your array with numpy.indices and then apply the function f in a vectorized fashion:
import numpy as np
shape = 1000, 1000
Xi, Yj = np.indices(shape)
A = (2*Xi + 3*Yj).astype(int)  # or any other function f(Xi, Yj)
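As a quick sanity check (a small sketch using the same f as above), both routes give the same matrix:
import numpy as np

shape = (1000, 1000)
Xi, Yj = np.indices(shape)
A_indices = 2*Xi + 3*Yj
A_fromfunc = np.fromfunction(lambda i, j: 2*i + 3*j, shape, dtype=int)
print(np.array_equal(A_indices, A_fromfunc))  # True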
I have an object which is described by two quantities, A and B (in the real case there can be more than two). Objects are correlated depending on the values of A and B. In particular, I know the correlation matrix for A and for B. Just as an example:
a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
b = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
na = a.shape[0]
nb = b.shape[0]
Correlation for A:
So if one element has A == 0.5 and the other has A == 1.5, they are fully correlated (red). Otherwise, if one element has A == 0.5 and the second has A == 3.5, they are uncorrelated (blue).
Similarly for B:
Now I want to multiply the two correlation matrices, but I want to obtain as the final result a matrix with two axes, where each new axis is a folded version of the original axes:
def get_folded_bin(ia, ib):
    return ia * nb + ib
Here is what I am doing:
result = np.swapaxes(np.tensordot(a, b, axes=0), 1, 2).reshape(na* nb, na * nb)
visually:
and in particular this must hold:
for ia1 in xrange(na):
    for ia2 in xrange(na):
        for ib1 in xrange(nb):
            for ib2 in xrange(nb):
                assert(a[ia1, ia2] * b[ib1, ib2] == result[get_folded_bin(ia1, ib1), get_folded_bin(ia2, ib2)])
Actually, my problem is to do this with more quantities (A, B, C, ...) in a general way. Maybe there is also a simpler function within numpy to do that.
np.einsum lets you simplify the tensordot expression a bit:
result = np.einsum('ij,kl->ikjl',a,b).reshape(-1, na * nb)
I don't think there's a way of eliminating the reshape.
It may also be easier to generalize to more arrays, though I wouldn't get carried away with too many iteration variables in one einsum expression.
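For example, here is a sketch of the three-array case (c is a hypothetical third correlation matrix; the folded index generalizes to (ia * nb + ib) * nc + ic):
import numpy as np

a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
b = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
c = np.eye(2)   # hypothetical third correlation matrix
na, nb, nc = a.shape[0], b.shape[0], c.shape[0]

result3 = np.einsum('ij,kl,mn->ikmjln', a, b, c).reshape(na * nb * nc, -1)

# spot-check against the folded-index definition
for ia1, ib1, ic1 in [(0, 0, 0), (3, 2, 1)]:
    for ia2, ib2, ic2 in [(1, 1, 0), (2, 0, 1)]:
        r1 = (ia1 * nb + ib1) * nc + ic1
        r2 = (ia2 * nb + ib2) * nc + ic2
        assert a[ia1, ia2] * b[ib1, ib2] * c[ic1, ic2] == result3[r1, r2]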
I think I have finally found a solution:
np.kron(a,b)
and then I can compose with
np.kron(np.kron(a,b), c)
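For what it's worth, a quick check (using the matrices from the question) that np.kron agrees with the einsum/tensordot construction above:
import numpy as np

a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
b = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
na, nb = a.shape[0], b.shape[0]

via_einsum = np.einsum('ij,kl->ikjl', a, b).reshape(na * nb, na * nb)
print(np.array_equal(np.kron(a, b), via_einsum))  # True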
How can I calculate the 1-norm of the difference of two vectors, ||a - b||_1 = sum(|a_i - b_i|) in Python?
a = [1,2,3,4]
b = [2,3,4,5]
||a - b||_1 = 4
Python has powerful built-in types, but Python lists are not mathematical vectors or matrices. You could do this with lists, but it will likely be cumbersome for anything more than trivial operations.
If you find yourself needing vector or matrix arithmetic often, the standard in the field is NumPy, which probably already comes packaged for your operating system the way Python also was.
I share the confusion of others about exactly what it is you're trying to do, but perhaps the numpy.linalg.norm function will help:
>>> import numpy
>>> a = numpy.array([1, 2, 3, 4])
>>> b = numpy.array([2, 3, 4, 5])
>>> numpy.linalg.norm((a - b), ord=1)
4.0
To show how that's working under the covers:
>>> a
array([1, 2, 3, 4])
>>> b
array([2, 3, 4, 5])
>>> (a - b)
array([-1, -1, -1, -1])
>>> numpy.linalg.norm((a - b))
2.0
>>> numpy.linalg.norm((a - b), ord=1)
4.0
In NumPy, for two vectors a and b, this is just
numpy.linalg.norm(a - b, ord=1)
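For example, with the vectors from the question:
>>> import numpy as np
>>> a = np.array([1, 2, 3, 4])
>>> b = np.array([2, 3, 4, 5])
>>> np.linalg.norm(a - b, ord=1)
4.0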
You appear to be asking for the sum of the differences between the paired components of the two arrays:
>>> A=[1,2,3,4]
>>> B=[2,3,4,5]
>>> sum(abs(a - b) for a, b in zip(A, B))
4
It is not clear exactly what is required here, but here is my guess:
a = [1, 2, 3, 4]
b = [2, 3, 4, 5]

def a_b(a, b):
    return sum(map(lambda pair: abs(pair[0] - pair[1]), zip(a, b)))

print(a_b(a, b))
Using Numpy you can calculate any norm between two vectors using the linear algebra package.
import numpy as np
a = np.array([2,3,4])
b = np.array([0,-1,7])
# L1 Norm
np.linalg.norm(a-b, ord=1)
# L2 Norm
np.linalg.norm(a-b, ord=2)
# L3 Norm
np.linalg.norm(a-b, ord=3)
# Ln Norm
np.linalg.norm(a-b, ord=n)
Example:
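A quick run with the vectors above (approximate outputs shown as comments):
import numpy as np

a = np.array([2, 3, 4])
b = np.array([0, -1, 7])

print(np.linalg.norm(a - b, ord=1))  # 9.0
print(np.linalg.norm(a - b, ord=2))  # ~5.385  (sqrt(29))
print(np.linalg.norm(a - b, ord=3))  # ~4.626  (99 ** (1/3))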