Numpy.dot dot product function for statsmodels - python

I am learning the statsmodels.api module to use Python for regression analysis, so I started with the simple OLS model.
In econometrics, the model is written as: y = Xb + e
where X is N x K, b is K x 1, and e is N x 1, so y comes out N x 1. This is perfectly fine from a linear algebra point of view.
I followed the tutorial from statsmodels as follows:
import numpy as np
import statsmodels.api as sm

nsample = 100  # total number of observations is 100
x = np.linspace(0, 10, 100)  # np.linspace(start, stop, number)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)  # draw numbers from a normal distribution;
                                    # defaults are mu = 0 and std. dev. = 1, size is set by the user
# e is n x 1
# Now, we add the constant/intercept term to X
X = sm.add_constant(X)
# Now, we compute y
y = np.dot(X, beta) + e
So this generates the correct answer. But I have a question about the beta = np.array([1, 0.1, 10]). If we check this beta with:
beta.shape
(3,)
It has shape (3,); the same goes for y and e, but not for X:
X.shape
(100,3)
e.shape
(100,)
y.shape
(100,)
So I tried initializing arrays in the following three ways:
o = array([1,2,3])
o1 = array([[1],[2],[3]])
o2 = array([[1,2,3]])
print(o.shape)
print(o1.shape)
print(o2.shape)
----------------
(3,)
(3, 1)
(1, 3)
If I use beta = array([[1],[2],[3]]), which is (3, 1), then np.dot(X, beta) gives me a wrong answer, although the dimensions seem to work.
If I use array([[1,2,3]]), which is a row vector, the dimensions don't match for the dot product, neither in numpy nor in linear algebra.
So I am wondering: for an N x K times K x 1 dot product in numpy, why do we have to use a (N, K) dot (K,) instead of (N, K) dot (K, 1)? What makes np.array([1, 0.1, 10]) work with numpy.dot() while np.array([[1], [0.1], [10]]) doesn't?
Thank you very much.
Some update:
Sorry about the confusion. The data in the statsmodels code is randomly generated, so I fixed X and got the following:
f = array([[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15]])
o = array([1,2,3])
o1 = array([[1],[2],[3]])
o2 = array([[1,2,3]])
print(o.shape)
print(o1.shape)
print(o2.shape)
print("---------")
print(np.dot(f,o))
print(np.dot(f,o1))
r1 = np.dot(f,o)
r2 = np.dot(f,o1)
type1 = type(np.dot(f,o))
type2 = type(np.dot(f,o1))
tf = type1 is type2
tf2 = type1 == type2
print(type1)
print(type2)
print(tf)
print(tf2)
-------------------------
(3,)
(3, 1)
(1, 3)
---------
[14 32 50 68 86]
[[14]
[32]
[50]
[68]
[86]]
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
True
True
Sorry again for the confusion and inconvenience; they worked fine.

Python/NumPy is not a matrix-based language the way Matlab, Octave, or Scilab are. Those follow the rules of matrix multiplication strictly, and they have no 1-D arrays at all. So
np.dot(f, o1) ---------> f*o1 in Matlab/Octave/Scilab (strict matrix multiplication: (5,3) times (3,1) gives (5,1))
np.dot(f, o) ---------> has no direct Matlab/Octave/Scilab equivalent, because o is a 1-D array
When the second argument of np.dot is 1-D, NumPy treats it as a column vector but returns a 1-D result; that convention is specific to NumPy. Python/NumPy also has 'broadcasting', the rules by which arrays of different shapes are combined in elementwise operations. You will have to consult the docs for that.
In Python/NumPy, * is not a matrix operator; it performs elementwise multiplication with broadcasting. You can find out what broadcasting gives for
print(f*o)
print(f*o1)
print(f*o2)
Fairly recently (Python 3.5, PEP 465), Python/NumPy introduced the matrix-multiplication operator @. You might find out what happens with
print(f @ o)
print(f @ o1)
print(f @ o2)
Does this give you some idea of what is going on?
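To make the original question's pitfall concrete, here is a minimal sketch (my own illustration, not from the statsmodels tutorial). With a (3, 1) beta, np.dot(X, beta) has shape (100, 1), and adding the (100,)-shaped error term e then broadcasts the sum up to (100, 100), which is the "wrong answer" the question describes:
import numpy as np
X = np.random.randn(100, 3)
e = np.random.normal(size=100)
beta_1d = np.array([1, 0.1, 10])         # shape (3,)
beta_col = np.array([[1], [0.1], [10]])  # shape (3, 1)
y1 = np.dot(X, beta_1d) + e   # (100,) + (100,)   -> (100,)
y2 = np.dot(X, beta_col) + e  # (100, 1) + (100,) -> broadcasts to (100, 100)!
print(y1.shape)  # (100,)
print(y2.shape)  # (100, 100)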

Related

Ensuring same dimensions in Python

The dimensions of P are (2, 3, 3), but the dimensions of M are (3, 3). How can I ensure that both P and M have the same dimensions, i.e. (2, 3, 3)?
import numpy as np

P = np.array([[[128.22918457, 168.52413295, 209.72343319],
               [129.01598287, 179.03716051, 150.68633749],
               [131.00688309, 187.42601593, 193.68172751]],
              [[ 64.11459228,  84.26206648, 104.86171659],
               [ 64.50799144,  89.51858026,  75.34316875],
               [ 65.50344155,  93.71300796,  96.84086375]]])

for x in range(0, 2):
    M = P[x] + 1
    print(M)
Just do
M = P + 1
and that ensures M and P have the same dimensions: adding a scalar broadcasts over the whole array, so the shape is preserved.
I don't know why you need this and what for (why did you try to use M = P + 1 to make the shapes equal?). But you can ensure they have the same shape using assert:
assert a.shape == b.shape
It will raise an AssertionError when the shapes are not the same, so if it runs without error you can be sure the dimensions match.
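A minimal sketch putting both suggestions together (using a zero array as a stand-in for the question's P):
import numpy as np
P = np.zeros((2, 3, 3))    # stand-in for the question's P
M = P + 1                  # scalar broadcast: the shape is preserved
assert M.shape == P.shape  # passes; would raise AssertionError on a mismatch
print(M.shape)             # (2, 3, 3)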

Why does it work when columns are larger than rows in Python Sklearn (Linear Regression) [duplicate]

It's known that when the number of variables (p) is larger than the number of samples (n), the least squares estimator is not defined.
In sklearn I receive these values:
In [30]: lm = LinearRegression().fit(xx,y_train)
In [31]: lm.coef_
Out[31]:
array([[ 0.20092363, -0.14378298, -0.33504391, ..., -0.40695124,
          0.08619906, -0.08108713]])
In [32]: xx.shape
Out[32]: (1097, 3419)
Call [30] should return an error. How does sklearn work when p > n, as in this case?
EDIT:
It seems that the matrix is filled with some values:
if n > m:
    # need to extend b matrix as it will be filled with
    # a larger solution matrix
    if len(b1.shape) == 2:
        b2 = np.zeros((n, nrhs), dtype=gelss.dtype)
        b2[:m, :] = b1
    else:
        b2 = np.zeros(n, dtype=gelss.dtype)
        b2[:m] = b1
    b1 = b2
When the linear system is underdetermined, sklearn.linear_model.LinearRegression finds the minimum-L2-norm solution, i.e.
argmin_w ||w||_2  subject to  Xw = y
This is always well defined and obtainable by applying the pseudoinverse of X to y, i.e.
w = np.linalg.pinv(X).dot(y)
The specific implementation of scipy.linalg.lstsq, which LinearRegression uses, calls get_lapack_funcs(('gelss',), ...); gelss is precisely a solver that finds the minimum-norm solution via singular value decomposition (provided by LAPACK).
Check out this example
import numpy as np
rng = np.random.RandomState(42)
X = rng.randn(5, 10)
y = rng.randn(5)
from sklearn.linear_model import LinearRegression
lr = LinearRegression(fit_intercept=False)
coef1 = lr.fit(X, y).coef_
coef2 = np.linalg.pinv(X).dot(y)
print(coef1)
print(coef2)
And you will see that coef1 == coef2 (up to floating point). (Note that fit_intercept=False is specified in the constructor of the sklearn estimator, because otherwise it would subtract the mean of each feature before fitting the model, yielding different coefficients.)
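For the same reason, numpy's own least-squares solver gives an identical answer on this underdetermined system, since np.linalg.lstsq also returns the minimum-norm solution; a quick sketch reusing X and y from the example above:
coef3, residuals, rank, svals = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(coef3, coef2))  # True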

Scipy.optimize.linprog : Value error - Invalid input

I'm trying to solve a little problem, just to optimize some unit production in a game, where alpha is a variety coefficient (it sets how much the variables can differ from each other):
import numpy as np
import scipy.optimize as opti

alpha = 0.05
C = np.array([-1, -1, -1, -1, -15, -3, -3, -4, 0, 0, 0, 0, 0, 0])
B = np.array([1600, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
MatriceC = np.array([
    np.array([14-((1-alpha)*8), 7-((1-alpha)*8), 7-((1-alpha)*25), 18-((1-alpha)*12),
              30-((1-alpha)*30), 40-((1-alpha)*40), 18-((1-alpha)*1), 76-((1-alpha)*16),
              -1, 0, 0, 0, 0, 0]),
    np.array([14-((1+alpha)*8), 7-((1+alpha)*8), 7-((1+alpha)*25), 18-((1+alpha)*12),
              30-((1+alpha)*30), 40-((1+alpha)*40), 18-((1+alpha)*1), 76-((1+alpha)*16),
              0, -1, 0, 0, 0, 0]) * (-1),
    np.array([14-((1-alpha)*30), 7-((1-alpha)*2), 7-((1-alpha)*13), 18-((1-alpha)*7),
              30-((1-alpha)*30), 40-((1-alpha)*40), 18-((1-alpha)*24), 76-((1-alpha)*56),
              0, 0, -1, 0, 0, 0]),
    np.array([14-((1+alpha)*30), 7-((1+alpha)*2), 7-((1+alpha)*13), 18-((1+alpha)*7),
              30-((1+alpha)*30), 40-((1+alpha)*40), 18-((1+alpha)*24), 76-((1+alpha)*56),
              0, 0, 0, -1, 0, 0]) * (-1),
    np.array([8-((1-alpha)*30), 8-((1-alpha)*2), 25-((1-alpha)*13), 12-((1-alpha)*7),
              30-((1-alpha)*30), 40-((1-alpha)*40), 1-((1-alpha)*24), 16-((1-alpha)*56),
              0, 0, 0, 0, -1, 0]),
    np.array([8-((1+alpha)*30), 8-((1+alpha)*2), 25-((1+alpha)*13), 12-((1+alpha)*7),
              30-((1+alpha)*30), 40-((1+alpha)*40), 1-((1+alpha)*24), 16-((1+alpha)*56),
              0, 0, 0, 0, 0, -1]) * (-1),
])
# print(help(opti.linprog))
print(np.shape(MatriceC))
print(np.shape(B))
opti.linprog(C, A_eq=MatriceC, b_eq=B)  # This causes the error...
And I get as output:
(6, 14)
(14,)
ValueError: Invalid input for linprog with method = 'simplex'. The number of rows in A_eq must be equal to the number of values in b_eq
Considering the shapes I get, I don't understand what I'm doing wrong.
PS:
I have tried adding
MatriceC = MatriceC.T
just before the linprog call, and it still outputs the same error. It did change the (6, 14) shape into (14, 6) (which is logical).
Transpose your MatriceC with MatriceC.T before passing it to linprog.
According to the linprog docs:
Minimize: c^T * x
Subject to: A_ub * x <= b_ub
            A_eq * x == b_eq
For these equations to hold, the matrices' dimensions have to conform to each other: A_eq needs one row per entry of b_eq and one column per entry of c. Read about matrix multiplication.
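As a minimal sketch of consistent shapes (a hypothetical toy problem, not the game model from the question): with n variables and m equality constraints, c has length n, A_eq has shape (m, n), and b_eq has length m.
import numpy as np
from scipy.optimize import linprog
c = np.array([-1.0, -2.0])     # n = 2 variables (maximize x0 + 2*x1)
A_eq = np.array([[1.0, 1.0]])  # m = 1 constraint, shape (m, n) = (1, 2)
b_eq = np.array([10.0])        # length m = 1
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x)  # [ 0. 10.]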

Efficient way of computing the cross products between two sets of vectors numpy

I have two sets of 2000 3D vectors each, and I need to compute the cross product between each possible pair. I currently do it like this:
for tx in tangents_x:
    for ty in tangents_y:
        cross = np.cross(tx, ty)
        # ... do something with the cross variable ...
This works, but it's pretty slow. Is there a way to make it faster?
If I were interested in the element-wise product, I could just do the following:
# Define initial vectors
tx = np.array([np.random.randn(3) for i in range(2000)])
ty = np.array([np.random.randn(3) for i in range(2000)])
# Store them into matrices
X = np.array([tx for i in range(2000)])
Y = np.array([ty for i in range(2000)]).T
# Compute the element-wise product
ew = X * Y
# Use the element-wise product as usual
for i, tx in enumerate(tangents_x):
    for j, ty in enumerate(tangents_y):
        # ... use the element-wise product of tx and ty as ew[i, j] ...
How can I apply this to the cross product instead of the element-wise one? Or do you see another alternative?
Thanks much :)
Like many numpy functions, cross supports broadcasting, so you can simply do:
np.cross(tangents_x[:, None, :], tangents_y)
or, more verbose but maybe easier to read:
np.cross(tangents_x[:, None, :], tangents_y[None, :, :])
This reshapes tangents_x and tangents_y to shapes (2000, 1, 3) and (1, 2000, 3). By the rules of broadcasting, these are interpreted as two arrays of shape (2000, 2000, 3), where tangents_x is repeated along axis 1 and tangents_y is repeated along axis 0.
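A quick sketch (my own check, with small sizes) to confirm the shape and compare a sample entry against the naive loop:
import numpy as np
tangents_x = np.random.randn(4, 3)
tangents_y = np.random.randn(5, 3)
res = np.cross(tangents_x[:, None, :], tangents_y[None, :, :])
print(res.shape)  # (4, 5, 3)
# res[i, j] matches the pairwise cross product from the nested loop
assert np.allclose(res[1, 2], np.cross(tangents_x[1], tangents_y[2]))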
Just write it out and compile it:
import numpy as np
import numba as nb

@nb.njit(fastmath=True, parallel=True)
def calc_cros(vec_1, vec_2):
    res = np.empty((vec_1.shape[0], vec_2.shape[0], 3), dtype=vec_1.dtype)
    for i in nb.prange(vec_1.shape[0]):
        for j in range(vec_2.shape[0]):
            res[i, j, 0] = vec_1[i, 1] * vec_2[j, 2] - vec_1[i, 2] * vec_2[j, 1]
            res[i, j, 1] = vec_1[i, 2] * vec_2[j, 0] - vec_1[i, 0] * vec_2[j, 2]
            res[i, j, 2] = vec_1[i, 0] * vec_2[j, 1] - vec_1[i, 1] * vec_2[j, 0]
    return res
Performance
import time

# create data
tx = np.random.rand(3000, 3)
ty = np.random.rand(3000, 3)

# don't measure compilation overhead
comb = calc_cros(tx, ty)

t1 = time.time()
comb = calc_cros(tx, ty)
print(time.time() - t1)
This gives 0.08 s for the two (3000, 3) matrices.
np.dot is almost always going to be faster. So you could convert one of the vectors into a matrix:
def skew(x):
    return np.array([[0, -x[2], x[1]],
                     [x[2], 0, -x[0]],
                     [-x[1], x[0], 0]])
On my machine this runs faster:
import time

tx = np.array([np.random.randn(3) for i in range(100)])
ty = np.array([np.random.randn(3) for i in range(100)])

tt = time.perf_counter()  # time.clock() in the original; it was removed in Python 3.8
for x in tx:
    for y in ty:
        cross = np.cross(x, y)
print(time.perf_counter() - tt)
0.207 sec
tt = time.perf_counter()
for x in tx:
    m = skew(x)
    for y in ty:
        cross = np.dot(m, y)
print(time.perf_counter() - tt)
0.015 sec
This result may vary depending on the computer.
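As a sanity check (a small sketch of my own, reusing the skew function defined above): the matrix form reproduces np.cross exactly, because skew(x) @ y is the matrix expression of the cross product of x and y.
import numpy as np
x = np.random.randn(3)
y = np.random.randn(3)
assert np.allclose(np.dot(skew(x), y), np.cross(x, y))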
You could use np.meshgrid() to build the combination matrix and then decompose the cross product. The rest is fiddling around with the axes etc.:
# build two lists of 5 3D vectors as example values:
a_list = np.random.randint(0, 10, (5, 3))
b_list = np.random.randint(0, 10, (5, 3))
# here the original approach using slow list comprehensions:
slow = np.array([[np.cross(a, b) for a in a_list] for b in b_list])
# now the faster proposed version:
g = np.array([np.meshgrid(a_list[:, i], b_list[:, i]) for i in range(3)])
fast = np.array([g[1, 0] * g[2, 1] - g[2, 0] * g[1, 1],
                 g[2, 0] * g[0, 1] - g[0, 0] * g[2, 1],
                 g[0, 0] * g[1, 1] - g[1, 0] * g[0, 1]]).transpose(1, 2, 0)
I tested this with 10000×10000 elements (instead of the 5×5 in the example above) and it took 6.4 seconds with the fast version. The slow version already took 27 seconds for 500 elements.
For your 2000×2000 elements the fast version takes 0.23s on my computer. Fast enough for you?
Use a Cartesian product to get all possible pairs:
import itertools as it
all_pairs = it.product(tx, ty)
Then use map to loop over all pairs and compute the cross product (note that in Python 3, map is lazy, so you have to iterate over the result or materialize it with list() for the work to actually happen):
crosses = list(map(lambda p: np.cross(p[0], p[1]), all_pairs))
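If you need the result as a (len(tx), len(ty), 3) array, like the other answers produce, here is a small sketch (assuming tx and ty as in the question, just smaller):
import itertools as it
import numpy as np
tx = np.random.randn(4, 3)
ty = np.random.randn(5, 3)
pairs = it.product(tx, ty)
crosses = np.array([np.cross(a, b) for a, b in pairs]).reshape(len(tx), len(ty), 3)
print(crosses.shape)  # (4, 5, 3)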

Why does numpy.random.dirichlet() not accept multidimensional arrays?

On the numpy page they give the example of
s = np.random.dirichlet((10, 5, 3), 20)
which is all fine and great; but what if you want to generate random samples from a 2D array of alphas?
alphas = np.random.randint(10, size=(20, 3))
If you try np.random.dirichlet(alphas), np.random.dirichlet([x for x in alphas]), or np.random.dirichlet((x for x in alphas)), it results in a
ValueError: object too deep for desired array. The only thing that seems to work is:
y = np.empty(alphas.shape)
for i in range(len(alphas)):
    y[i] = np.random.dirichlet(alphas[i])
print(y)
...which is far from ideal for my code structure. Why is this the case, and can anyone think of a more "numpy-like" way of doing this?
Thanks in advance.
np.random.dirichlet is written to generate samples for a single Dirichlet distribution. That code is implemented in terms of the Gamma distribution, and the same implementation can be used as the basis for vectorized code that generates samples from many different distributions at once. In the following, dirichlet_sample takes an array alphas with shape (n, k), where each row is an alpha vector for a Dirichlet distribution. It returns an array, also with shape (n, k), each row being a sample of the corresponding distribution from alphas. When run as a script, it generates samples using both dirichlet_sample and np.random.dirichlet to verify that they produce the same samples (up to normal floating point differences).
import numpy as np

def dirichlet_sample(alphas):
    """
    Generate samples from an array of alpha distributions.
    """
    r = np.random.standard_gamma(alphas)
    return r / r.sum(-1, keepdims=True)

if __name__ == "__main__":
    alphas = 2 ** np.random.randint(0, 4, size=(6, 3))

    np.random.seed(1234)
    d1 = dirichlet_sample(alphas)
    print("dirichlet_sample:")
    print(d1)

    np.random.seed(1234)
    d2 = np.empty(alphas.shape)
    for k in range(len(alphas)):
        d2[k] = np.random.dirichlet(alphas[k])
    print("np.random.dirichlet:")
    print(d2)

    # Compare d1 and d2:
    err = np.abs(d1 - d2).max()
    print("max difference:", err)
Sample run:
dirichlet_sample:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
np.random.dirichlet:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
max difference: 5.55111512313e-17
I think you're looking for
y = np.array([np.random.dirichlet(x) for x in alphas])
for your list comprehension. Otherwise you're simply passing a Python list or tuple. I imagine the reason numpy.random.dirichlet does not accept your list of alpha values is that it's simply not set up to: it already accepts an array, which it expects to have dimension k, as per the documentation.
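Applied to the question's 2-D alphas, the gamma trick from the first answer is essentially a two-liner. A sketch; note that the alpha values must be strictly positive, so I draw from randint(1, 10) rather than the question's randint(10, ...), which can produce zeros:
import numpy as np
alphas = np.random.randint(1, 10, size=(20, 3))  # strictly positive alpha vectors
r = np.random.standard_gamma(alphas)             # shape (20, 3)
samples = r / r.sum(axis=-1, keepdims=True)      # each row is a Dirichlet sample
print(samples.shape)        # (20, 3)
print(samples.sum(axis=1))  # each row sums to 1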
