I have two matrices, A (N by K) and B (N by M), and I would like to combine A and B into a tensor C (N by K by M) where C(n,k,m) = A(n,k) * B(n,m). I know how to do it in Python like this:
C = B[:,numpy.newaxis,:] * A[:,:,numpy.newaxis]
Can anyone please tell me the MATLAB code that does the same thing efficiently?
Take advantage of the implicit expansion feature of bsxfun. Use permute to have your B as an Nx1xM matrix:
C = bsxfun(@times, A, permute(B, [1, 3, 2]));
And from MATLAB R2016b onward, you can get the same result in this way:
C = A .* permute(B, [1, 3, 2]);
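For a quick cross-check on the NumPy side, the same tensor can also be built with einsum; here is a small sketch with made-up sizes (N, K, M are arbitrary):

import numpy as np

N, K, M = 4, 3, 2
A = np.random.rand(N, K)
B = np.random.rand(N, M)

C_broadcast = B[:, np.newaxis, :] * A[:, :, np.newaxis]  # shape (N, K, M)
C_einsum = np.einsum('nk,nm->nkm', A, B)                  # C[n,k,m] = A[n,k] * B[n,m]

print(np.allclose(C_broadcast, C_einsum))  # True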
I don't understand how the following code produces this transformation of dimensions: the shape of c is (2, 3, 3, 4). How can I perform the same operation without the einsum function?
import numpy as np
a = np.random.randint(0, 10, (2,3,4))
b = np.random.randint(0, 10, (3, 6, 4))
c = np.einsum('bld,hid->blhd', a, b)
You can find more details in the Wikipedia article on Einstein notation.
This means that you have indices b, l, h, i, d.
Conceptually, einsum iterates over these indices to cover all the inputs and build the output.
I will use capital letters for the arrays here to distinguish them from the indices.
C[b,l,h,d] += A[b,l,d] * B[h,i,d]
The shape of the output can be determined as follows.
You take the index of each output axis and look for the same index in the inputs. For instance, the first axis of C is indexed with b, which is also used to index the first axis of A, thus assert C.shape[0] == A.shape[0]. Repeating for the other axes we have assert C.shape[1] == A.shape[1], assert C.shape[2] == B.shape[0], and assert C.shape[3] == A.shape[2], which must also equal B.shape[2].
Notice that the index i does not affect where the term is added, so each element of the output can be written as
C[b,l,h,d] = sum(A[b,l,d] * B[h,i,d] for i in range(B.shape[1]))
Notice also that i is not used to index A, so this can also be written as
C[b,l,h,d] = A[b,l,d] * B[h,:,d].sum();
Or, if you want to use vectorized operations:
First expanding, then reducing:
C = (A[:,:,None,None,:] * B[None,None,:,:,:]).sum(-2)
Or reducing first, then expanding, which is possible because A does not use i:
C = A[:,:,None,:] * B.sum(-2)[None,None,:,:]
To answer your first question,
c = np.einsum('bld,hid->blhd', a, b)
implements the formula c[b,l,h,d] = sum_i a[b,l,d] * b[h,i,d],
which, if you don't want to use einsum, you can achieve using
c = a[:, :, None, :] * b.sum(-2)[None, None, :, :]
# b l (h) d i (b) (l) h d
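As a quick check, both forms produce identical results (a small sketch reusing the shapes from the question):

import numpy as np

a = np.random.randint(0, 10, (2, 3, 4))
b = np.random.randint(0, 10, (3, 6, 4))

c_einsum = np.einsum('bld,hid->blhd', a, b)
c_manual = a[:, :, None, :] * b.sum(-2)[None, None, :, :]

print(c_einsum.shape)                      # (2, 3, 3, 4)
print(np.array_equal(c_einsum, c_manual))  # True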
We have a list, say [1, 2, 3, 4], and I want the difference between all combinations, i.e.
for [1, 2, 3, 4] -> 1, 1, 2, 1, 2, 3
that is (2-1), (3-2), (3-1), (4-3), (4-2) and (4-1)
I have already written an inefficient solution with high complexity. I need a more efficient, lower-complexity solution in C++ or Python.
The result has n*(n-1)/2 elements, where n is the list size, so the two-for-loop solution is already optimal:
diffs = []
for i in range(len(A)):
    for j in range(i + 1, len(A)):
        diffs.append(A[j] - A[i])
You can simplify your code by using itertools.combinations, which is designed to do exactly what you're looking for:
diffs = [y - x for x, y in itertools.combinations(lst, 2)]
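For example, with the input from the question (the same differences come out, just in a different order, grouped by the first element of each pair):

import itertools

lst = [1, 2, 3, 4]
diffs = [y - x for x, y in itertools.combinations(lst, 2)]
print(diffs)  # [1, 2, 3, 1, 2, 1]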
There is no asymptotically faster way to produce the full list: any algorithm has to emit each pairwise difference, and there are n * (n - 1) / 2 such pairs, so it is inherently quadratic.
============== EDITED FOR LINEAR SOLUTION ====
Okay. Just confirming that you're specifically asking for the difference, not the absolute value of the difference. If you're looking for the sum of the differences, rather than the full list, you can do some math.
Look at a list [a, b, c, d]. You want
(b - a) + (c - a) + (d - a) + (c - b) + (d - b) + (d-c)
which simplifies to -3*a - b + c + 3*d
It's pretty easy to generalize and see that for a list of n elements the multipliers are -(n-1), -(n-3), ..., n-3, n-1, so you have:
n = len(lst)
sum(value * multiplier for value, multiplier in zip(lst, range(-n + 1, n, 2)))
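As a quick sanity check, the closed form agrees with the brute-force pairwise sum (a small sketch):

lst = [1, 2, 3, 4]
n = len(lst)

brute = sum(lst[j] - lst[i] for i in range(n) for j in range(i + 1, n))
closed = sum(value * multiplier for value, multiplier in zip(lst, range(-n + 1, n, 2)))

print(brute, closed)  # 10 10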
Test Sample:
a = [0.1357678, 0.27303184, -0.75600229]
b = [0.3813097, -0.72613616, 0.18361217]
I would like to implement SUMXMY2(a, b) in Python without for loops
How can I do this?
As far as I know, - is not a valid operator for lists, so I would use a list comprehension. It does technically use a for loop, but I'd call it "elegant enough".
c = [(b[i] - a[i]) ** 2 for i in range(len(b))]
result = sum(c)
To make it more compact but less readable:
c = sum([(b[i] - a[i]) ** 2 for i in range(len(b))])
If you're dealing with lists of different lengths, use this:
c = [(b[i] - a[i]) ** 2 for i in range(min(len(b), len(a)))]
result = sum(c)
Squared difference is given by:
c = ((a - b) ** 2)
The sum is then simply given by
c = c.sum()
If a and b are lists, you can convert them to a pandas Series first:
a = pd.Series(a)
or to a NumPy array with:
a = np.asarray(a)
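Putting the pieces together, a NumPy version of SUMXMY2 for the sample above might look like this (just a sketch; SUMXMY2 sums the squared differences, and (a - b) ** 2 equals (b - a) ** 2, so the order does not matter):

import numpy as np

a = np.asarray([0.1357678, 0.27303184, -0.75600229])
b = np.asarray([0.3813097, -0.72613616, 0.18361217])

result = ((a - b) ** 2).sum()
print(result)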
I have some simple MATLAB code which I am trying to translate to Python, but I am stuck on a simple for loop:
Here is the situation:
Matlab
f0 = constant
fn = (nx1) matrix
b = (nx1) matrix
d and x are constant
mthd = 1 or 2
s = 1:-0.1:0.1;
for i = 1:10
    f = fn * s(i)
    switch mthd
        case 1
            v(:,i) = d *(1 + 1./b.*(f0./f)).^x
        case 2
            v(:,i) = log(f0./f)./b;
            v(:,i) = v./(1+v)
    end
    v(1,:) = min(vp(2,:));
In MATLAB, the output v is an n-by-1 matrix.
Assuming it is a simple equation with element-wise operations in MATLAB,
I went ahead and wrote the following Python code:
s = np.linspace(1,0.1,num=10)
for i in range(1,11)
    f = fn * s[i]
    if mthd ==1:
        v = d *(1 + 1/b*(f0/f))^x
    elif mthd ==2:
        v = log(f0/f)/b;
        v = v/(1+v)
Clearly this is not right, and I get stuck right at f = fn * s[i].
Any suggestions on this conversion would be of great help.
Thank you
Clearly this is not right, and I get stuck right at f = fn * s[i]
What error message are you getting here? Make sure your vectors fn and b are numpy arrays and not lists.
for i in range(1,11)
Python uses zero indexing, whereas Matlab uses 1-indexing. Therefore your for loop should use for i in range(10), which iterates from 0 to 9 instead of 1 to 10.
v = d *(1 + 1/b*(f0/f))^x
Assuming fn and b are numpy arrays in your Python implementation, if you really want this to mirror the MATLAB code you can still use indexing such as v[:,i]; however, you need to initialize v as a numpy array of the correct size first. Also note that ^ is bitwise XOR in Python, so use ** (or np.power) for exponentiation.
v = log(f0/f)/b;
You probably want np.log here.
Hopefully this is helpful; let me know if you still have questions. You may also find this website helpful.
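Putting those suggestions together, a rough sketch of the corrected loop could look like the following. The values of f0, fn, b, d and x are placeholders, not from the question, and the final v(1,:) = min(vp(2,:)) line is left out because vp is not defined there.

import numpy as np

f0, d, x = 5.0, 4.0, 8.0
fn = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # placeholder n x 1 data
b = np.array([9.0, 8.0, 7.0, 6.0, 5.0])
mthd = 1

s = np.linspace(1, 0.1, num=10)
v = np.zeros((fn.size, s.size))            # preallocate so v[:, i] works

for i in range(s.size):                    # 0-based indexing
    f = fn * s[i]
    if mthd == 1:
        v[:, i] = d * (1 + 1 / b * (f0 / f)) ** x   # ** instead of ^
    elif mthd == 2:
        tmp = np.log(f0 / f) / b                    # np.log, not log
        v[:, i] = tmp / (1 + tmp)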
The code block below should be closer to what you want. Here are a few things to look out for:
Python arrays are indexed from 0. In base Python you handle powers with **, e.g. 2 ** 2 equals 4.
When performing scalar multiplication and division of arrays, it is better to use np.multiply and np.divide.
Use np.log for logarithm and np.power for exponentiation with numpy matrices.
Use np.add to add a scalar to a numpy array.
import numpy as np

f0 = 5                                     # constant
fn = np.matrix([[5], [4], [3], [2], [1]])  # 5 x 1 matrix
b = np.matrix([[9], [8], [7], [6], [5]])   # 5 x 1 matrix

# d and x are constant
d = 4
x = 8

# mthd = 1 or 2
mthd = 1

s = np.linspace(1, 0.1, num=10)

# python arrays are indexed from 0
for i in range(0, len(s)):
    f = fn * s[i]
    if mthd == 1:
        # MATLAB: d * (1 + (1./b) .* (f0./f)) .^ x, since .^ binds before *
        v = np.multiply(d, np.power(1 + np.multiply(np.divide(1., b), np.divide(f0, f)), x))
    elif mthd == 2:
        v = np.divide(np.log(np.divide(f0, f)), b)
        v = np.divide(v, np.add(1, v))
def determinant(M):
    """
    Finds the determinant of matrix M.
    """
    if dimension(M)[0] != dimension(M)[1]:
        print("This matrix is not a square matrix and therefore cannot have a determinant!")
        return
    elif dimension(M)[0] == dimension(M)[1]:
        if dimension(M) == (2, 2):
            return (M[0][0]*M[1][1]) - (M[0][1]*M[1][0])
        else:
            return (M[0][0]*determinant(reduce_matrix(M,1,1))) - (M[0][1]*determinant(reduce_matrix(M,1,2))) + (M[0][2]*determinant(reduce_matrix(M,1,3)))
EDIT: This code here is capable of finding the determinant of 3x3 matrices, but ONLY 3x3 matrices. How can I edit this in order to find the determinant of ANY size square matrix?
You can use list comprehensions to apply an expression to each element of an input list, like so:
[n ** 2 for n in [1, 2, 3]] == [1, 4, 9]
I assume you'd like to accumulate the results, in which case you can use the sum function.
sum([1, 2, 3]) == 6
By applying both you end up with an expression like this:
sum([((-1) ** i) * (M[0][i] * determinant(reduce_matrix(M, 1, i + 1))) for i in range(0, dimension(M)[1])])
Note that range excludes its end value.
Also be cautious of operator precedence:
-1 ** 2 != (-1) ** 2
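Putting the pieces together, here is a minimal self-contained sketch of the generalized function. The question's dimension and reduce_matrix helpers are not shown, so plausible stand-ins are assumed: dimension(M) returns (rows, cols), and reduce_matrix(M, i, j) removes the 1-based row i and column j.

def dimension(M):
    # Assumed stand-in: (number of rows, number of columns) of a list-of-lists matrix.
    return (len(M), len(M[0]))

def reduce_matrix(M, row, col):
    # Assumed stand-in: the minor of M with 1-based row `row` and column `col` removed.
    return [
        [val for j, val in enumerate(r) if j != col - 1]
        for i, r in enumerate(M) if i != row - 1
    ]

def determinant(M):
    rows, cols = dimension(M)
    if rows != cols:
        raise ValueError("Only square matrices have a determinant.")
    if rows == 1:
        return M[0][0]
    if rows == 2:
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    # Cofactor expansion along the first row works for any size.
    return sum(
        ((-1) ** i) * M[0][i] * determinant(reduce_matrix(M, 1, i + 1))
        for i in range(cols)
    )

# Example: the determinant of the 4x4 identity matrix is 1.
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(determinant(I4))  # 1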