Conjugate transpose of self using numpy syntax - python

I am trying to translate this MATLAB code into Python.
The following is the code:
Y=C*Up(:,1:p-1)'*Y;
And this is my translation thus far:
Y = C * Up[:, 1:p-1] * Y
I am having trouble with the syntax for the conjugate transpose of self that is used in the MATLAB code. I am not certain that my first idea:
Y = C * Up[:, 1:p-1].getH() * Y
would be correct.
Does anyone have any ideas?

I am not very experienced with numpy, but based on the comments of @hpaulj I can suggest the following:
If you don't want to be subject to the limitations of numpy.matrix objects (see the warning here), you can define your own function for doing a conjugate transpose. All you need to do is transpose your array, then subtract 2j times the imaginary part of the result from the result. I am not sure how computationally efficient this is, but it should definitely give the correct result.
I'd expect something like this to work:
Y = C * ctranspose(Up[:, 0:p-1]) * Y
...
def ctranspose(arr: np.ndarray) -> np.ndarray:
    # Explanation of the math involved:
    # x == Real(X) + j*Imag(X)
    # conj_x == Real(X) - j*Imag(X)
    # conj_x == Real(X) + j*Imag(X) - 2j*Imag(X) == x - 2j*Imag(X)
    tmp = arr.transpose()
    return tmp - 2j*tmp.imag
(Solution is for Python 3)
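As a quick sanity check, the hand-rolled function should agree with chaining numpy's built-in .conj() and .T (the test matrix below is just an arbitrary complex example of mine):

import numpy as np

def ctranspose(arr: np.ndarray) -> np.ndarray:
    # conj(x) == x - 2j*Imag(x), applied to the transpose
    tmp = arr.transpose()
    return tmp - 2j*tmp.imag

A = np.array([[1+2j, 3-1j],
              [0+4j, 5+0j]])
assert np.allclose(ctranspose(A), A.conj().T)  # both give the conjugate transpose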
A more elegant solution based on the comment by @AndrasDeak:
Y = C * Up[:, 0:p-1].conj().T * Y
Note also two differences related to indexing between Python and MATLAB:
Python is 0-based (i.e. the first index of an array is 0, unlike in MATLAB where it's 1)
Slicing in Python is inclusive:exclusive, unlike in MATLAB where it's inclusive:inclusive.
Therefore, when we want to access the first 3 elements of a vector in MATLAB we'd write:
res = vec(1:3);
In Python we'd write:
res = vec[0:3] # or [:3]
(Again, credits to @Andras for this explanation)

Use arr.conj().T to get the conjugate transpose of a matrix.
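For example, on a small complex matrix (values chosen arbitrarily for illustration):

import numpy as np

A = np.array([[1+2j, 3-1j],
              [0+1j, 2+0j]])
A_H = A.conj().T  # conjugate (Hermitian) transpose
# A_H == [[1-2j, 0-1j],
#         [3+1j, 2+0j]]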


Move from Matlab to Python numpy

I am new to Python, so be gentle with me. I am trying to convert code from MATLAB to Python with numpy, and I am working with matrices.
I have some basic questions (whose answers I didn't find on Google):
What is the equivalent of the ' operator, for example: H', H = H*H'?
What is the equivalent of the / (mrdivide) operator, for example: H = H/A?
Thanks,
MAK
' (transpose) means the conjugate transpose of a matrix. For real matrices, it is given by np.transpose(arr) or the shorthand arr.T. For complex matrices, you need the more complicated arr.conj().T.
/ (mrdivide) solves the equation x A = b, i.e. x = b / A, using least squares (np.linalg.lstsq). This is equivalent to (x A)^T = b^T, i.e. A^T x^T = b^T, which can be done using np.linalg.lstsq(A.T, b.T)[0].T.
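A minimal sketch putting both together (H and A are placeholder matrices of my own; rcond=None just silences a warning in newer numpy versions):

import numpy as np

H = np.array([[1., 2.], [3., 4.]])
A = np.array([[2., 0.], [1., 3.]])

Ht  = H.T                 # MATLAB: H'  (use H.conj().T for complex H)
HHt = H @ Ht              # MATLAB: H*H'
X = np.linalg.lstsq(A.T, H.T, rcond=None)[0].T  # MATLAB: H/A
assert np.allclose(X, H @ np.linalg.inv(A))     # matches H*inv(A) for invertible A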

clean summation involving index of numpy arrays

I've occasionally but not frequently used numpy. I'm now needing to do some summations where the sums involve the row/column indices.
I have an m x n array S. I want to create a new m x n array whose (s, i) entry is
-c*i*S[s,i] + g*(i+1)*S[s,i+1] + (s+1)*S[s+1,i-1]
So say S=np.array([[1,2],[3,4], [5,6]]) the result I want is
-c*np.array([[0*1, 1*2],[0*3, 1*4],[0*5, 1*6]])
+ g*np.array([[1*2, 2*0],[1*4, 2*0],[1*6, 2*0]])
+ np.array([[1*0, 1*3],[2*0, 2*5],[3*0, 3*0]])
(that's not all the terms in my equation, but I feel like knowing how to do this would be enough to complete what I'm after).
I think what I will need to do is create a new array whose rows are just the index of the rows, and another corresponding array for the columns. Then do some component-wise multiplication. But this is well outside what I normally do in my research, so I've taken a few wrong steps already.
Note: it is understood that where an index refers to something outside my array, the value is zero.
Is there a clean way to do the summation I've described above?
I would do it in several steps, due to your possible out-of-bounds indexing:
import numpy as np
S = np.array([[1,2],[3,4], [5,6]])
c = np.random.rand()
g = np.random.rand()
m,n = S.shape
Stmp1 = S*np.arange(0,n) # i*S[s,i]
Stmp2 = S*np.arange(0,m)[:,None] # s*S[s,i]
# the answer:
Sout = -c*Stmp1
Sout[:,:-1] = Sout[:,:-1] + g*Stmp1[:,1:]
Sout[:-1,1:] = Sout[:-1,1:] + Stmp2[1:,:-1]
# only for control:
Sout2 = -c*np.array([[0*1, 1*2],[0*3, 1*4],[0*5, 1*6]]) \
+ g*np.array([[1*2, 2*0],[1*4, 2*0],[1*6, 2*0]]) \
+ np.array([[1*0, 1*3],[2*0, 2*5],[3*0, 3*0]])
Check:
In [431]: np.all(Sout==Sout2)
Out[431]: True
I introduced auxiliary arrays for i*S[s,i] and s*S[s,i]. While this is clearly not necessary, it makes the code easier to read. We could've easily sliced into the np.arange(0,n) calls directly, but unless memory is an issue, I find this approach much more straightforward.
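For completeness, the index arrays the asker had in mind can also be built explicitly with np.indices; this is just an alternative sketch of the same auxiliary arrays, not necessarily faster:

import numpy as np

S = np.array([[1,2],[3,4], [5,6]])
s_idx, i_idx = np.indices(S.shape)  # row index grid and column index grid

Stmp1 = i_idx*S  # same as S*np.arange(0,n):         i*S[s,i]
Stmp2 = s_idx*S  # same as S*np.arange(0,m)[:,None]: s*S[s,i]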

Code for this expression in python

I'm trying to code this expression in python but I'm having some difficulty.
This is the code I have so far and wanted some advice.
x = 1x2 vector
mu = 1x2 vector
Sigma = 2x2 matrix
xT = (x-mu).transpose()
sig = Sigma**(-1)
dotP = dot(xT ,sig )
dotdot = dot(dotP, (x-mu))
E = exp( (-1/2) dotdot )
Am I on the right track? Any suggestions?
Sigma ** (-1) isn't what you want. That would raise each element of Sigma to the -1 power, i.e. 1 / Sigma, whereas in the mathematical expression it means the inverse, which is written in Python as np.linalg.inv(Sigma).
(-1/2) dotdot is a syntax error; in Python, you need to always include * for multiplication, or just do - dotdot / 2. Since you're probably using python 2, division is a little wonky; unless you've done from __future__ import division (highly recommended), 1/2 will actually be 0, because it's integer division. You can use .5 to get around that, though like I said I do highly recommend doing the division import.
This is pretty trivial, but you're doing the x-mu subtraction twice when it's only necessary to do it once; doing it once could save a little time if your vectors are big. (Of course, here you're working in two dimensions, so this doesn't matter at all.)
Rather than calling the_array.transpose() (which is fine), it's often nicer to use the_array.T, which is the same thing.
I also wouldn't use the name xT; it implies to me that it's the transpose of x, which is false.
I would probably combine it like this:
# near the top of the file
# you probably did some kind of `from somewhere import *`.
# most people like to only import specific names and/or do imports like this,
# to make it clear where your functions are coming from.
import numpy as np
centered = x - mu
prec = np.linalg.inv(Sigma)
E = np.exp(-.5 * np.dot(centered.T, np.dot(prec, centered)))
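For a concrete run (the numbers below are arbitrary illustrative values, not from the question):

import numpy as np

x = np.array([1.0, 2.0])
mu = np.array([0.5, 1.5])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 2.0]])

centered = x - mu
prec = np.linalg.inv(Sigma)
E = np.exp(-.5 * np.dot(centered.T, np.dot(prec, centered)))
# E is the exponential factor of a multivariate Gaussian evaluated at x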

Python: Conditional elements in array

A question from a complete Python novice.
I have a column array where I need to force certain values to zero depending on a conditional statement applied to another array. I have found two solutions, which both provide the correct answer. But they are both quite time consuming for the larger arrays I typically need (>1E6 elements) - also I suspect that it is poor programming technique. The two versions are:
from numpy import zeros, abs, multiply, array, reshape

def testA(y, f, FC1, FC2):
    c = zeros((len(f), 1))
    for n in xrange(len(f)):
        if abs(f[n,0]) >= FC1 and abs(f[n,0]) <= FC2:
            c[n,0] = 1.
    w = multiply(c, y)
    return w

def testB(y, f, FC1, FC2):
    z = [(abs(f[n,0])>=FC1 and abs(f[n,0])<=FC2) for n in xrange(len(f))]
    z = multiply(array(z, dtype=float).reshape(len(f), 1), y)
    return z
The input arrays are column arrays as this matches the post processing to be done. The test can be done like:
>>> from numpy import arange
>>> from numpy.random import normal as randn
>>> fs, N = 1.E3, 2**22
>>> f = fs/N*arange(N).reshape((N,1))
>>> x = randn(size=(N,1))
>>> w1 = testA(x,f,200.,550.)
>>> z1 = testB(x,f,200.,550.)
On my laptop testA takes 18.7 seconds and testB takes 19.3 seconds, both for N=2**22. In testB I also tried to include "z = [None]*len(f)" to preallocate, as suggested in another thread, but this doesn't really make any difference.
I have two questions, which I hope to have the same answer:
What is the "correct" Python solution to this problem?
Is there anything I can do to get the answer faster?
I have deliberately not used any time at all using compiled Python for example - I wanted to have some working code first. Hopefully also something, which is good Python style. I hope to be able to get the execution time for N=2**22 below two seconds or so. This particular operation will be used many times so the execution time does matter.
I apologize in advance if the question is stupid - I haven't been able to find an answer in the overwhelming amount of not always easily accessible Python documentation or in another thread.
Use a boolean array to select the elements of y:
def testC(y, f, FC1, FC2):
    f2 = abs(f)
    idx = (f2 >= FC1) & (f2 <= FC2)  # boolean mask of the elements to keep
    y[~idx] = 0
    return y
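Note that testC zeroes elements of y in place, so the caller's array is modified. If that is undesirable, a non-mutating variant of the same idea (my own sketch, using np.where) is:

import numpy as np

def testD(y, f, FC1, FC2):
    f2 = np.abs(f)
    idx = (f2 >= FC1) & (f2 <= FC2)
    return np.where(idx, y, 0.0)  # leaves y untouched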
All of these are slower than HYRY's solution by a large factor:
How about
( x[1] if FC1<=abs(x[0])<=FC2 else 0 for x in itertools.izip(f,x) )
If you need to do random access (very slow)
[ x[1] if FC1<=abs(x[0])<=FC2 else 0 for x in itertools.izip(f,x) ]
or you can also use map
map(lambda x: x[1] if FC1<=abs(x[0])<=FC2 else 0, itertools.izip(f,x))
or using vectorize (faster than A and B but much much slower than C)
b1v = np.vectorize(lambda a,b: a if 200<=abs(b)<=550 else 0)
b1 = b1v(f,x)

mrdivide function in MATLAB: what is it doing, and how can I do it in Python?

I have this line of MATLAB code:
a/b
I am using these inputs:
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9]
b = ones(25, 18)
This is the result (a 1x25 matrix):
[5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
What is MATLAB doing? I am trying to duplicate this behavior in Python, and the mrdivide documentation in MATLAB was unhelpful. Where does the 5 come from, and why are the rest of the values 0?
I have tried this with other inputs and receive similar results, usually just a different first element and zeros filling the remainder of the matrix. In Python when I use linalg.lstsq(b.T,a.T), all of the values in the first matrix returned (i.e. not the singular one) are 0.2. I have already tried right division in Python and it gives something completely off with the wrong dimensions.
I understand what a least square approximation is, I just need to know what mrdivide is doing.
Related:
Array division- translating from MATLAB to Python
MRDIVIDE or the / operator actually solves the xb = a linear system, as opposed to MLDIVIDE or the \ operator which will solve the system bx = a.
To solve a system xb = a with a non-symmetric, non-invertible matrix b, you can either rely on mrdivide(), which is done via factorization of b with Gauss elimination, or pinv(), which is done via Singular Value Decomposition and zeroing of the singular values below a (default) tolerance level.
Here is the difference (for the case of mldivide): What is the difference between PINV and MLDIVIDE when I solve A*x=b?
When the system is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)). MLDIVIDE will pick the solution with the least number of non-zero elements.
In your example:
% solve xb = a
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9];
b = ones(25, 18);
the system is underdetermined, and the two different solutions will be:
x1 = a/b; % MRDIVIDE: sparsest solution (min L0 norm)
x2 = a*pinv(b); % PINV: minimum norm solution (min L2)
>> x1 = a/b
Warning: Rank deficient, rank = 1, tol = 2.3551e-014.
x1 =
5.0000 0 0 ... 0
>> x2 = a*pinv(b)
x2 =
0.2 0.2 0.2 ... 0.2
In both cases the approximation error of xb-a is non-negligible (non-exact solution) and the same, i.e. norm(x1*b-a) and norm(x2*b-a) will return the same result.
What is MATLAB doing?
A great break-down of the algorithms (and checks on properties) invoked by the '\' operator, depending upon the structure of matrix b is given in this post in scicomp.stackexchange.com. I am assuming similar options apply for the / operator.
For your example, MATLAB is most probably doing Gaussian elimination, giving the sparsest solution amongst an infinitude of solutions (that's where the 5 comes from).
What is Python doing?
Python's linalg.lstsq uses the pseudo-inverse/SVD, as demonstrated above (that's why you get a vector of 0.2's). In effect, the following will both give you the same result as MATLAB's pinv():
from numpy import *
a = array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9])
b = ones((25, 18))
# xb = a: solve b.T x.T = a.T instead
x2 = linalg.lstsq(b.T, a.T)[0]
# or, equivalently, via the pseudo-inverse:
x2 = dot(a, linalg.pinv(b))
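As a quick check that the two routes agree and reproduce the 0.2's (a sketch of my own; rcond=None just silences a warning in newer numpy versions):

import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b = np.ones((25, 18))

x_lstsq = np.linalg.lstsq(b.T, a.T, rcond=None)[0]
x_pinv = a @ np.linalg.pinv(b)
assert np.allclose(x_lstsq, x_pinv)  # same minimum-norm solution
assert np.allclose(x_pinv, 0.2)      # every entry is 0.2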
TL;DR: A/B = np.linalg.solve(B.conj().T, A.conj().T).conj().T
I did not find that the earlier answers provided a satisfactory substitute, so I dug further into MATLAB's reference documents for mrdivide and found the solution. I cannot explain the actual mathematics here or take credit for coming up with the answer. I'm just following MATLAB's explanation. Additionally, I wanted to post the actual detail from MATLAB to give credit. If it's a copyright issue, someone tell me and I'll remove the actual text.
%/ Slash or right matrix divide.
% A/B is the matrix division of B into A, which is roughly the
% same as A*INV(B) , except it is computed in a different way.
% More precisely, A/B = (B'\A')'. See MLDIVIDE for details.
%
% C = MRDIVIDE(A,B) is called for the syntax 'A / B' when A or B is an
% object.
%
% See also MLDIVIDE, RDIVIDE, LDIVIDE.
% Copyright 1984-2005 The MathWorks, Inc.
Note that the ' symbol indicates the complex conjugate transpose. In Python with numpy, that requires .conj().T chained together.
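Wrapping that identity in a helper function (a sketch of my own, not MATLAB's actual algorithm: np.linalg.solve requires a square, non-singular B, so this falls back on least squares otherwise, which gives the minimum-norm rather than the sparsest solution for rank-deficient systems):

import numpy as np

def mrdivide(A, B):
    # Approximate MATLAB's A/B, i.e. solve x @ B = A via A/B = (B'\A')'
    BH, AH = B.conj().T, A.conj().T
    if B.shape[0] == B.shape[1]:
        return np.linalg.solve(BH, AH).conj().T
    return np.linalg.lstsq(BH, AH, rcond=None)[0].conj().T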
Per this handy "cheat sheet" of numpy for MATLAB users: linalg.lstsq(b, a) (linalg here is numpy.linalg, a light-weight version of the full scipy.linalg).
a/b finds the least squares solution to the system of linear equations xb = a.
If b is invertible, this is a*inv(b), but if it isn't, then it is the x which minimises norm(xb-a).
You can read more about least squares on wikipedia.
According to the MATLAB documentation, mrdivide will return at most k non-zero values, where k is the computed rank of b. My guess is that MATLAB in your case solves the least squares problem obtained by replacing b by b(1,:) (which has the same rank). In this case the Moore-Penrose inverse b2 = b(1,:); inv(b2*b2')*b2*a' is defined and gives the same answer.
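Translating that guess into numpy confirms the arithmetic (my own check, not from the original answer):

import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b = np.ones((25, 18))

b2 = b[:1, :]  # first row of b, which has the same rank as b
x_val = np.linalg.inv(b2 @ b2.T) @ (b2 @ a)  # MATLAB: inv(b2*b2')*b2*a'
# x_val == [5.0]: b2 @ b2.T == [[18.]], b2 @ a == [90.], and 90/18 == 5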
