Multiple coefficient sets for least squares fitting in numpy/scipy - python

Is there a way to perform multiple simultaneous (but unrelated) least-squares fits with different coefficient matrices in either numpy.linalg.lstsq or scipy.linalg.lstsq? For example, here is a trivial linear fit that I would like to be able to do with different x-values but the same y-values. Currently, I have to write a loop:
x = np.arange(12.0).reshape(4, 3)
y = np.arange(12.0, step=3.0)
# one (4, 2) design matrix [x-values, constant] per column of x, so m.shape == (3, 4, 2)
m = np.stack((x.T, np.ones_like(x.T)), axis=-1)
fit = np.stack(tuple(np.linalg.lstsq(w, y, rcond=-1)[0] for w in m), axis=-1)
This results in a set of fits with the same slope and different intercepts, such that fit[..., n] holds the (slope, intercept) coefficients for the design matrix m[n].
Linear least squares is not a great example since it is invertible, and both functions have an option for multiple y-values. However, it serves to illustrate my point.
Ideally, I would like to extend this to any "broadcastable" combination of a and b, where a.shape[-2] == b.shape[0] exactly, and the last dimensions have to either match or be one (or missing). I am not really hung up on which dimension of a is the one representing the different matrices: it was just convenient to make it the first one to shorten the loop.
Is there a built-in method in numpy or scipy to avoid the Python loop? I am very much interested in using lstsq rather than manually transposing, multiplying and inverting the matrices.

You could use scipy.sparse.linalg.lsqr together with scipy.sparse.block_diag. I'm just not sure it will be any faster.
Example:
>>> import numpy as np
>>> from scipy.sparse import block_diag
>>> from scipy.sparse import linalg as sprsla
>>>
>>> x = np.random.random((3,5,4))
>>> y = np.random.random((3,5))
>>>
>>> for A, b in zip(x, y):
...     print(np.linalg.lstsq(A, b))
...
(array([-0.11536962, 0.22575441, 0.03597646, 0.52014899]), array([0.22232195]), 4, array([2.27188101, 0.69355384, 0.63567141, 0.21700743]))
(array([-2.36307163, 2.27693405, -1.85653264, 3.63307554]), array([0.04810252]), 4, array([2.61853881, 0.74251282, 0.38701194, 0.06751288]))
(array([-0.6817038 , -0.02537582, 0.75882223, 0.03190649]), array([0.09892803]), 4, array([2.5094637 , 0.55673403, 0.39252624, 0.18598489]))
>>>
>>> sprsla.lsqr(block_diag(x), y.ravel())
(array([-0.11536962, 0.22575441, 0.03597646, 0.52014899, -2.36307163,
2.27693405, -1.85653264, 3.63307554, -0.6817038 , -0.02537582,
0.75882223, 0.03190649]), 2, 15, 0.6077437777160813, 0.6077437777160813, 6.226368324510392, 106.63227777368986, 1.3277892240815807e-14, 5.36589277249043, array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]))

Related

Python move point by matrix and then draw orbit

I know how to draw points moved by a matrix, like the code below:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

x = np.random.randn(2)      # 2*1 matrix (a point in the plane)
A = np.random.randn(2, 2)   # 2*2 matrix
print('the content of x:\n{}\nthe content of A:\n{}'.format(x, A))

def action(pt, n):
    record = [pt]
    for i in range(n):
        pt = A @ pt
        record = np.vstack([record, pt])
    plt.scatter(record[:, 1], record[:, 1])

action(x, 100)
the function "action" will draw something like a line, but I want to move points by matrix and then draw it like an orbit
SHORT ANSWER:
plt.scatter(record[:,1],record[:,1]) feeds the same values to both the x and y dimensions and hence will always produce a line. Replace it with:
X,Y = np.hsplit(record,2)
plt.scatter(X,Y)
LONG ANSWER:
The main cause of the plot coming out as a line is that you are generating it using two constant (although randomly generated) matrices. I will illustrate with the example below:
>>> c
array([[ 1.,  2.],
       [ 2.,  4.]])
>>> d
array([ 3.,  4.])
>>> d @ c
array([ 11.,  22.])
>>> d @ c @ c
array([  55.,  110.])
>>> d @ c @ c @ c
array([ 275.,  550.])
Notice how the repeated operation ends up just multiplying the coordinates by 5 at each stage.
How to get a non-linear plot?
Use the loop variable 'i', for example by raising it to a power of 2 (parabola) or higher.
Populate the two matrices with random numbers greater than 1; otherwise each operation either grows the magnitude in the negative direction or, if the values are between -1 and 1, shrinks it.
Use mathematical functions to introduce non-linearity. E.g.:
pt = pt + np.sin(pt)
Consider whether using two random matrices and looping over them is really the only way to achieve the curve. If this step is independent from your bigger programme, it may be simpler to use mathematical functions that generate the curve you want, as in the sketch below.
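For instance, to see a genuine curved orbit rather than a line, one option (a minimal sketch, not from the original question) is to iterate a slightly scaled rotation matrix and plot both coordinates of each point:
import numpy as np
import matplotlib.pyplot as plt

theta = 0.2
# a rotation scaled by 0.98: repeated application makes the point spiral inwards
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

pt = np.array([1.0, 0.0])
record = [pt]
for _ in range(200):
    pt = A @ pt
    record.append(pt)
record = np.array(record)

plt.scatter(record[:, 0], record[:, 1])  # plot x against y, not y against y
plt.show()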

Normalization of a matrix

I have a 150x4 matrix X which I created from a pandas dataframe using the following code:
X = df_new.as_matrix()
I have to normalize it using this function:
X̄_,j = (X_,j - μ_j) / σ_j
I know that μ_j is the mean value of j and σ_j is the standard deviation of j, but I don't understand what j is. I'm having a little trouble understanding what the bar on X is, and I'm confused by the commas in the equation (I don't know if they have any significance or not).
Can anyone help me understand what this equation means so I can then write the normalization using sklearn?
You don't actually need to write code for the normalization yourself - it comes ready with sklearn.preprocessing.scale.
Here is an example from the docs:
>>> from sklearn import preprocessing
>>> import numpy as np
>>> X_train = np.array([[ 1., -1.,  2.],
...                     [ 2.,  0.,  0.],
...                     [ 0.,  1., -1.]])
>>> X_scaled = preprocessing.scale(X_train)
>>> X_scaled
array([[ 0.  ..., -1.22...,  1.33...],
       [ 1.22...,  0.  ..., -0.26...],
       [-1.22...,  1.22..., -1.06...]])
When used with the default setting axis=0, the normalization happens column-wise (i.e. for each column j, as in your equation). As a result, it is easy to confirm that the scaled data has zero mean and unit variance:
>>> X_scaled.mean(axis=0)
array([ 0., 0., 0.])
>>> X_scaled.std(axis=0)
array([ 1., 1., 1.])
The indexes for matrix X are row (i) and column (j). Hence, X,j means column j of matrix X. I.e. normalize each column of matrix X to z-scores.
You can do that using pandas:
df_new_zscores = (df_new - df_new.mean()) / df_new.std()
I do not know pandas, but I think the equation means that the normalized matrix is given by X̄_,j = (X_,j - μ_j) / σ_j:
you subtract the empirical mean and divide by the empirical standard deviation per column.
You sometimes use this for Principal Component Analysis.
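If you prefer to stay in plain numpy rather than pandas or sklearn, a minimal sketch of the same column-wise z-scoring (assuming X is the (150, 4) array from the question) is:
import numpy as np

X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)  # subtract column means, divide by column stds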

implementing euclidean distance based formula using numpy

I am trying to implement this formula in Python using numpy:
value = sum over i of ( min over j of dist(C_j, x_i) )
X is a numpy matrix and each x_i is a vector with n dimensions; C is also a numpy matrix and each C_j is a vector with n dimensions too; dist(C_j, x_i) is the Euclidean distance between these two vectors.
I implemented this code in Python:
import math
import numpy as np

value = 0
for i in range(X.shape[0]):
    min_value = math.inf
    # this inner loop iterates k times (once per row of C)
    for j in range(C.shape[0]):
        distance = np.dot(X[i] - C[j], X[i] - C[j]) ** 0.5
        min_value = min(min_value, distance)
    value += min_value
fitnessValue = value
But my code's performance is not good enough. I'm looking for a faster way to calculate this formula in Python; any idea would be appreciated.
Generally, loops that run a large number of times should be avoided in Python when possible.
Here, there exists a scipy function, scipy.spatial.distance.cdist(C, X), which computes the pairwise distance matrix between C and X. That is to say, if you call distance_matrix = scipy.spatial.distance.cdist(C, X), you have distance_matrix[i, j] = dist(C_i, X_j).
Then, for each j, you want to compute the minimum of dist(C_i, X_j) over all i. You do not need a loop for this either! The function numpy.min does it for you, if you pass an axis argument.
And finally, the summation of all these minima is done by calling the numpy.sum function.
This gives code that is much more readable and faster:
import scipy.spatial.distance
import numpy as np

def your_function(C, X):
    distance_matrix = scipy.spatial.distance.cdist(C, X)
    minimum = np.min(distance_matrix, axis=0)
    return np.sum(minimum)
Which returns the same results as your function :)
Hope this helps!
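For example, with hypothetical shapes (100 points and 5 centres, each in 3 dimensions), it can be used like this:
X = np.random.rand(100, 3)
C = np.random.rand(5, 3)
print(your_function(C, X))   # same value as the double loop above, computed without Python loops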
einsum can also be called into play. Here is a simple small example of a pairwise distance calculation for a small set. Useful if you don't have scipy installed and/or wish to use numpy solely.
>>> a
array([[ 0.,  0.],
       [ 1.,  1.],
       [ 2.,  2.],
       [ 3.,  3.],
       [ 4.,  4.]])
>>> b = a.reshape(np.prod(a.shape[:-1]), 1, a.shape[-1])
>>> b
array([[[ 0.,  0.]],
       [[ 1.,  1.]],
       [[ 2.,  2.]],
       [[ 3.,  3.]],
       [[ 4.,  4.]]])
>>> diff = a - b; dist_arr = np.sqrt(np.einsum('ijk,ijk->ij', diff, diff)).squeeze()
>>> dist_arr
array([[ 0.     ,  1.41421,  2.82843,  4.24264,  5.65685],
       [ 1.41421,  0.     ,  1.41421,  2.82843,  4.24264],
       [ 2.82843,  1.41421,  0.     ,  1.41421,  2.82843],
       [ 4.24264,  2.82843,  1.41421,  0.     ,  1.41421],
       [ 5.65685,  4.24264,  2.82843,  1.41421,  0.     ]])
Array 'a' is a simple 2D array (shape (5, 2)); 'b' is just 'a' reshaped to (5, 1, 2) to facilitate the difference calculation for the cdist-style array. The terms are written verbosely since they were extracted from other code. The 'diff' variable is the difference array, and the dist_arr shown holds the 'euclidean' distances. Should you need squared Euclidean distances for 'closest' determinations, simply remove the np.sqrt call; the final squeeze just removes any dimensions of length 1 from the shape.
cdist is faster for much larger arrays (on the order of thousands of origins and destinations), but einsum is a nice alternative and well documented by others on this site.
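To tie this back to the original question, a pure-numpy version of the fitness value built on the same einsum trick (using broadcasting instead of the explicit reshape) might look like this; it is a sketch, assuming C and X are 2D arrays as in the question:
import numpy as np

def fitness_einsum(C, X):
    diff = C[:, None, :] - X[None, :, :]                  # pairwise differences, shape (k, N, n)
    dist = np.sqrt(np.einsum('ijk,ijk->ij', diff, diff))  # pairwise Euclidean distances, shape (k, N)
    return dist.min(axis=0).sum()                         # nearest centre per point, summed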

Normalise 2D Numpy Array: Zero Mean Unit Variance

I have a 2D Numpy array in which I want to normalise each column to zero mean and unit variance. Since I'm primarily used to C++, my current approach is to loop over the elements of a column, do the necessary operations, and then repeat this for all columns. I wanted to know about a Pythonic way to do this.
Let class_input_data be my 2D array. I can get the column mean as:
column_mean = numpy.sum(class_input_data, axis = 0)/class_input_data.shape[0]
I then subtract the mean from all columns by:
class_input_data = class_input_data - column_mean
By now, the data should be zero mean. However, the value of:
numpy.sum(class_input_data, axis = 0)
isn't equal to 0, implying that I have done something wrong in my normalisation. By "isn't equal to 0" I don't mean very small numbers that can be attributed to floating-point inaccuracies.
Something like:
import numpy as np
eg_array = 5 + (np.random.randn(10, 10) * 2)
normed = (eg_array - eg_array.mean(axis=0)) / eg_array.std(axis=0)
normed.mean(axis=0)
Out[14]:
array([  1.16573418e-16,  -7.77156117e-17,  -1.77635684e-16,
          9.43689571e-17,  -2.22044605e-17,  -6.09234885e-16,
         -2.22044605e-16,  -4.44089210e-17,  -7.10542736e-16,
          4.21884749e-16])
normed.std(axis=0)
Out[15]: array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])

Eigenvectors computed with numpy's eigh and svd do not match

Consider the singular value decomposition M = USV*. Then the eigenvalue decomposition of M*M gives M*M = VS*U*USV* = V(S*S)V*. I wish to verify this equality with numpy by showing that the eigenvectors returned by the eigh function are the same as those returned by the svd function:
import numpy as np
np.random.seed(42)
# create mean centered data
A=np.random.randn(50,20)
M= A-np.array(A.mean(0),ndmin=2)
# svd
U1,S1,V1=np.linalg.svd(M)
S1=np.square(S1)
V1=V1.T
# eig
S2,V2=np.linalg.eigh(np.dot(M.T,M))
indx=np.argsort(S2)[::-1]
S2=S2[indx]
V2=V2[:,indx]
# both Vs are in orthonormal form
assert np.all(np.isclose(np.linalg.norm(V1,axis=1), np.ones(V1.shape[0])))
assert np.all(np.isclose(np.linalg.norm(V1,axis=0), np.ones(V1.shape[1])))
assert np.all(np.isclose(np.linalg.norm(V2,axis=1), np.ones(V2.shape[0])))
assert np.all(np.isclose(np.linalg.norm(V2,axis=0), np.ones(V2.shape[1])))
assert np.all(np.isclose(S1,S2))
assert np.all(np.isclose(V1,V2))
The last assertion fails. Why?
Just play with small numbers to debug your problem.
Start with A=np.random.randn(3,2) instead of your much larger matrix with size (50,20)
In my random case, I find that
v1 = array([[-0.33872745,  0.94088454],
            [-0.94088454, -0.33872745]])
and for v2:
v2 = array([[ 0.33872745, -0.94088454],
            [ 0.94088454,  0.33872745]])
They only differ by a sign, and obviously, even when normalized to unit norm, an eigenvector is only determined up to a sign.
Now if you try the trick
assert np.all(np.isclose(V1,-1*V2))
for your original big matrix, it fails... again, this is OK. What happens is that some vectors have been multiplied by -1, some others haven't.
A correct way to check for equality between the vectors is:
assert np.allclose(np.abs((V1 * V2).sum(0)), 1.)
and indeed, to get a feeling of how this works you can print this quantity:
(V1*V2).sum(0)
that indeed is either +1 or -1 depending on the vector:
array([ 1., -1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,
        1., -1.,  1.,  1.,  1., -1., -1.])
EDIT: This will happen in most cases, especially if you start from a random matrix. Notice, however, that this test will likely fail if one or more eigenvalues has an eigenspace of dimension larger than 1, as pointed out by @Sven Marnach in his comment below:
There might be other differences than just vectors multiplied by -1.
If any of the eigenvalues has a multi-dimensional eigenspace, you
might get an arbitrary orthonormal basis of that eigenspace, and two
such bases might be rotated against each other by an arbitrary
unitary matrix.
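Since the mismatch here is only a per-column sign, you can also flip V2 to match V1 explicitly before comparing. This is a small sketch using the arrays from the question; it assumes no eigenvalue has a multi-dimensional eigenspace, which is exactly the caveat above:
signs = np.sign((V1 * V2).sum(axis=0))   # +1 or -1 for each eigenvector (column)
assert np.allclose(V1, V2 * signs)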
