I have Mathematica code that calculates the 95% confidence intervals of a Cumulative Distribution Function (CDF) obtained from a specific Probability Density Function (PDF). The PDF is ugly, as it contains a Hypergeometric 2F1 function, and I need to calculate the 2-sigma error bars of a data set of 15 values.
I want to translate this code to Python, but I get a very significant divergence in the second half of the values.
Mathematica code
results are the lower and upper 2-sigma confidence levels for the values in xdata. That is, xdata should always fall between the two corresponding results values.
navs = {10, 10, 18, 30, 52, 87, 147, 245, 410, 684, 1141, 1903, 3173, 5290, 8816};
freqs = {0.00002, 0.00004, 0.0000666667, 0.000111111, 0.000185185, 0.000308642, 0.000514403, 0.000857339, 0.00142893, 0.00238166, 0.00396944, 0.00661594, 0.0165426, 0.0220568, 0.027571}
xdata = {0.578064980346793, 0.030812200935204, 0.316777979844816,
0.353718150091612, 0.287659600326548, 0.269254388840293,
0.16545714457921, 0.138759871084825, 0.0602382519940077,
0.10120771961, 0.065311134782518, 0.105235790998594,
0.124642033979457, 0.0271909963701794, 0.0686653810421847};
data = MapThread[{#1, #2, #3} &, {navs, freqs, xdata}]
post[x_, n_, y_] =
  (n - 1) (1 - x)^n (1 - y)^(n - 2) Hypergeometric2F1[n, n, 1, x*y]
integral = Map[
   (values = #; mesh = Subdivide[0, 1, 1000];
    Interpolation[
     DeleteDuplicates[
      {Map[SetPrecision[post[#, values[[1]], values[[3]]^2], 100] &, mesh] //
         (Accumulate[#] - #/2 - #[[1]]/2) & // #/#[[-1]] &,
       mesh}\[Transpose],
      (#1[[1]] == #2[[1]] &)],
     InterpolationOrder -> 1]) &,
   data];
results = MapThread[{Sqrt[#1[0.025]], Sqrt[#1[0.975]]} &, {integral, data}]
{{0.207919, 0.776508}, {0.0481485, 0.535278}, {0.0834002, 0.574447},
{0.137742, 0.551035}, {0.121376, 0.455097}, {0.136889, 0.403306},
{0.0674029, 0.279408}, {0.0612534, 0.228762}, {0.0158357, 0.134521},
{0.0525374, 0.156055}, {0.0270589, 0.108861}, {0.0740978, 0.137691},
{0.100498, 0.149646}, {0.00741129, 0.0525161}, {0.0507748, 0.0850961}}
Python code
Here's my translation; results holds the same quantity as before, truncated to the 7th digit for readability.
The results values I get start diverging from the 7th pair onward, and the last four points of xdata do not fall between the two corresponding results values.
import numpy as np
from scipy.integrate import cumtrapz
from scipy.interpolate import interp1d
from mpmath import mp, hyp2f1

mesh = list(np.linspace(0, 1, 1000))
navs = [10, 10, 18, 30, 52, 87, 147, 245, 410, 684, 1141, 1903, 3173, 5290, 8816]
freqs = [0.00002, 0.00004, 0.0000666667, 0.000111111, 0.000185185, 0.000308642, 0.000514403, 0.000857339, 0.00142893, 0.00238166, 0.00396944, 0.00661594, 0.0165426, 0.0220568, 0.027571]
xdata = [0.578064980346793, 0.030812200935204, 0.316777979844816,
         0.353718150091612, 0.287659600326548, 0.269254388840293,
         0.16545714457921, 0.138759871084825, 0.0602382519940077,
         0.10120771961, 0.065311134782518, 0.105235790998594,
         0.124642033979457, 0.0271909963701794, 0.0686653810421847]

def post(x, n, y):
    return (n - 1) * ((1 - x)**n) * ((1 - y)**(n - 2)) * hyp2f1(n, n, 1, x*y)

# setting the numeric precision to 100 digits, as in Mathematica,
# trying to get the most precise hypergeometric function values
mp.dps = 100
mp.pretty = True

results = []
for i in range(len(navs)):
    postprob = []
    for j in range(len(mesh)):
        posterior = post(mesh[j], navs[i], xdata[i]**2)
        postprob.append(posterior)
    # calculate the norm of the pdf for integration
    norm = np.trapz(np.array(postprob), mesh)
    # integrate pdf/norm to obtain the cdf
    integrate = list(np.unique(cumtrapz(np.array(postprob)/norm, mesh, initial=0)))
    mesh2 = list(np.linspace(0, 1, len(integrate)))
    # interpolate the inverse cdf to obtain the 2-sigma quantiles
    icdf = interp1d(integrate, mesh2, bounds_error=False, fill_value='extrapolate')
    results.append(list(np.sqrt(icdf([0.025, 0.975]))))
results
[[0.2079198, 0.7765088], [0.0481485, 0.5352773], [0.0834, 0.5744489],
[0.1377413, 0.5510352], [0.1218029, 0.4566994], [0.1399324, 0.4122767],
[0.0733743, 0.3041607], [0.0739691, 0.2762597], [0.0230135, 0.1954886],
[0.0871462, 0.2588804], [0.05637, 0.2268962], [0.1731199, 0.3217401],
[0.2665897, 0.3969059], [0.0315915, 0.2238736], [0.2224567, 0.3728803]]
Thanks to the comments on this question, I found out that the hypergeometric function gives different results in the two languages. With the same input values, Mathematica's Hypergeometric2F1 gives 1.0588267, while Python's mpmath.hyp2f1 gives 1.0588866. This is at the very second point of the mesh, and the difference is in the fifth decimal place.
Is there a better definition of this special function somewhere that I was not able to find?
I still don't know whether this is due only to the hypergeometric function or also to the integration method, but it is definitely a starting point.
(I am fairly new to Python, maybe the code is a bit naive)
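One check worth making (a minimal sketch, my addition): mp.dps only raises mpmath's working precision, while the inputs above are still double-precision Python floats; note also that Subdivide[0, 1, 1000] produces 1001 points with step 1/1000, whereas np.linspace(0, 1, 1000) produces 1000 points with step 1/999, so the "second point of the mesh" differs slightly between the two codes. Passing exact mpf values at a matched point rules out input rounding:
from mpmath import mp, mpf, hyp2f1

# hypothetical comparison point, chosen to mimic the second point of
# the Mathematica mesh; n and y are taken from the first data entry
mp.dps = 50
n = 10
x = mpf(1) / 1000                   # second point of Subdivide[0, 1, 1000]
y = mpf('0.578064980346793') ** 2   # first xdata value, squared
print(hyp2f1(n, n, 1, x * y))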
I'm attempting to write a custom Theano Op which numerically integrates a function between two values. The Op is a custom likelihood for PyMC3 which involves the numerical evaluation of some integrals. I can't simply use the @as_op decorator, as I need to use HMC for the MCMC step. Any help would be much appreciated, as this question seems to have come up several times but has never been solved (e.g. https://stackoverflow.com/questions/36853015/using-theano-with-numerical-integration, Theano: implementing an integral function).
Clearly one solution would be to write a numerical integrator within Theano, but this seems like a waste of effort when very good integrators are already available, for example through scipy.integrate.
To keep this as a minimal example, let's just try and integrate a function between 0 and 1 inside an Op. The following integrates a Theano function outside of an Op, and produces correct results as far as my testing has gone.
import theano
import theano.tensor as tt
from scipy.integrate import quad

x = tt.dscalar('x')
y = x**4  # integrand
f = theano.function([x], y)
print f(0)
print f(1)
ans = quad(f, 0, 1)[0]  # integrate f from 0 to 1
print ans
However, attempting to do integration within an Op appears much harder. My current best effort is:
import numpy as np
import theano
import theano.tensor as tt
from scipy import integrate

class IntOp(theano.Op):
    __props__ = ()

    def make_node(self, x):
        x = tt.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        x = inputs[0]
        z = output_storage[0]
        f_to_int = theano.function([x], x)
        z[0] = tt.as_tensor_variable(integrate.quad(f_to_int, 0, 1)[0])

    def infer_shape(self, node, i0_shapes):
        return i0_shapes

    def grad(self, inputs, output_grads):
        ans = integrate.quad(output_grads[0], 0, 1)[0]
        return [ans]

intOp = IntOp()
x = tt.dmatrix('x')
y = intOp(x)
f = theano.function([x], y)
inp = np.asarray([[2, 4], [6, 8]], dtype=theano.config.floatX)
out = f(inp)
print inp
print out
Which gives the following error:
Traceback (most recent call last):
  File "stackoverflow.py", line 35, in <module>
    out = f(inp)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 912, in rval
    r = p(n, [x[0] for x in i], o)
  File "stackoverflow.py", line 17, in perform
    f_to_int = theano.function([x], x)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 320, in function
    output_keys=output_keys)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 390, in pfunc
    for p in params]
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 489, in _pfunc_param_to_in
    raise TypeError('Unknown parameter type: %s' % type(param))
TypeError: Unknown parameter type: <type 'numpy.ndarray'>
Apply node that caused the error: IntOp(x)
Toposort index: 0
Inputs types: [TensorType(float64, matrix)]
Inputs shapes: [(2, 2)]
Inputs strides: [(16, 8)]
Inputs values: [array([[ 2., 4.],
[ 6., 8.]])]
Outputs clients: [['output']]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "stackoverflow.py", line 30, in <module>
y = intOp(x)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 611, in __call__
node = self.make_node(*inputs, **kwargs)
File "stackoverflow.py", line 11, in make_node
return theano.Apply(self, [x], [x.type()])
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
I'm surprised by this, especially the TypeError, as I thought I had converted the output_storage variable into a tensor, but it appears to believe that it is still an ndarray.
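For what it's worth, my reading of the error (a sketch of the distinction, not a confirmed fix): inside perform the inputs are already concrete numpy arrays, so theano.function cannot be built from them; perform is expected to compute with numpy values and store a plain numpy result, along these lines:
import numpy as np
from scipy import integrate

# hypothetical perform() body operating purely on numpy values;
# the integrand t**4 is a stand-in, not the real likelihood
def perform(self, node, inputs, output_storage):
    x = inputs[0]  # a concrete numpy ndarray at this point
    z = output_storage[0]
    val = integrate.quad(lambda t: t**4, 0, 1)[0]
    z[0] = np.asarray(val)  # store a numpy value, not a symbolic tensor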
I found your question because I'm trying to build a random variable in PyMC3 that represents a general point process (Hawkes, Cox, Poisson, etc.) and whose likelihood function involves an integral. I really want to be able to use Hamiltonian Monte Carlo or NUTS samplers, so I needed that integral with respect to time to be differentiable.
Starting from your attempt, I made an integrateOut Theano Op that seems to work correctly with the behavior I need. I've tested it on a few different inputs (not on my stats model just yet, but it appears promising!). I'm a total Theano n00b, so pardon any stupidity. I would greatly appreciate feedback if anyone has any. Not sure it's exactly what you're looking for, but here's my solution (example at the bottom and in the docstrings). EDIT: simplified some remnants of earlier experimentation.
import theano
import theano.tensor as T
from scipy.integrate import quad
class integrateOut(theano.Op):
    """
    Integrate out a variable from an expression, computing
    the definite integral w.r.t. the variable specified.
    !!! Only implemented for scalars !!!

    Parameters
    ----------
    f : scalar
        input 'function' to integrate
    t : scalar
        the variable to integrate out
    t0 : float
        lower integration limit
    tf : float
        upper integration limit

    Returns
    -------
    scalar
        a new scalar with the 't' integrated out

    Notes
    -----
    usage of this looks like:
    x = T.dscalar('x')
    y = T.dscalar('y')
    t = T.dscalar('t')
    z = (x**2 + y**2)*t
    # integrate z w.r.t. t as a function of (x,y)
    intZ = integrateOut(z, t, 0.0, 5.0)(x, y)
    gradIntZ = T.grad(intZ, [x, y])
    funcIntZ = theano.function([x, y], intZ)
    funcGradIntZ = theano.function([x, y], gradIntZ)
    """
    def __init__(self, f, t, t0, tf, *args, **kwargs):
        super(integrateOut, self).__init__()
        self.f = f
        self.t = t
        self.t0 = t0
        self.tf = tf

    def make_node(self, *inputs):
        self.fvars = list(inputs)
        # This will fail when taking the gradient... don't be concerned
        try:
            self.gradF = T.grad(self.f, self.fvars)
        except:
            self.gradF = None
        return theano.Apply(self, self.fvars, [T.dscalar().type()])

    def perform(self, node, inputs, output_storage):
        # everything else is an argument to the quad function
        args = tuple(inputs)
        # create a function to evaluate the integral
        f = theano.function([self.t] + self.fvars, self.f)
        # actually compute the integral
        output_storage[0][0] = quad(f, self.t0, self.tf, args=args)[0]

    def grad(self, inputs, grads):
        return [integrateOut(g, self.t, self.t0, self.tf)(*inputs)*grads[0]
                for g in self.gradF]
x = T.dscalar('x')
y = T.dscalar('y')
t = T.dscalar('t')
z = (x**2+y**2)*t
intZ = integrateOut(z,t,0,1)(x,y)
gradIntZ = T.grad(intZ,[x,y])
funcIntZ = theano.function([x,y],intZ)
funcGradIntZ = theano.function([x,y],gradIntZ)
print funcIntZ(2,2)
print funcGradIntZ(2,2)
SymPy is proving harder than anticipated, but in the meantime, in case anyone's finding this useful, I'll also point out how to modify this Op to allow for changing the final timepoint without creating a new Op. This can be useful if you have a point process, or if you have uncertainty in your time measurements. A usage sketch follows the class.
class integrateOut2(theano.Op):
    def __init__(self, f, int_var, *args, **kwargs):
        super(integrateOut2, self).__init__()
        self.f = f
        self.int_var = int_var

    def make_node(self, *inputs):
        tmax = inputs[0]
        self.fvars = list(inputs[1:])
        return theano.Apply(self, [tmax] + self.fvars, [T.dscalar().type()])

    def perform(self, node, inputs, output_storage):
        # everything but tmax is an argument to the quad function
        tmax = inputs[0]
        args = tuple(inputs[1:])
        # create a function to evaluate the integral
        f = theano.function([self.int_var] + self.fvars, self.f)
        # actually compute the integral
        output_storage[0][0] = quad(f, 0., tmax, args=args)[0]

    def grad(self, inputs, grads):
        tmax = inputs[0]
        param_grads = T.grad(self.f, self.fvars)
        ## Recall the fundamental theorem of calculus:
        ## d/dt \int_0^t f(x) dx = f(t)
        ## So sub tmax into the graph
        FTC_grad = theano.clone(self.f, {self.int_var: tmax})
        grad_list = [FTC_grad*grads[0]] + \
            [integrateOut2(grad_fn, self.int_var)(*inputs)*grads[0]
             for grad_fn in param_grads]
        return grad_list
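Here's a quick usage sketch with a toy integrand (my own example, not part of the original code): tmax is now a symbolic input, so the upper limit can change without rebuilding the Op.
x = T.dscalar('x')
t = T.dscalar('t')
tmax = T.dscalar('tmax')

z = x**2 * t
# integrate z w.r.t. t from 0 to tmax, as a function of (tmax, x)
intZ2 = integrateOut2(z, t)(tmax, x)
funcIntZ2 = theano.function([tmax, x], intZ2)
print funcIntZ2(1.0, 2.0)  # integral of 4*t from 0 to 1 = 2.0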
I always use the following code, where I generate B = 10000 samples of n = 30 observations from a normal distribution with µ = 1 and σ² = 2.25. For each sample, the parameters µ and σ are estimated and stored in a matrix. I hope this can help you.
loglik <- function(p,z){
  sum(dnorm(z,mean=p[1],sd=p[2],log=TRUE))
}
set.seed(45)
n <- 30
x <- rnorm(n,mean=1,sd=1.5)
optim(c(mu=0,sd=1),loglik,control=list(fnscale=-1),z=x)
B <- 10000
bootstrap.results <- matrix(NA,nrow=B,ncol=3)
colnames(bootstrap.results) <- c("mu","sigma","convergence")
for (b in 1:B){
  sample.b <- rnorm(n,mean=1,sd=1.5)
  m.b <- optim(c(mu=0,sd=1),loglik,control=list(fnscale=-1),z=sample.b)
  bootstrap.results[b,] <- c(m.b$par,m.b$convergence)
}
One can also obtain the ML estimate of λ and use the bootstrap to estimate the bias and the standard error of the estimate. First, calculate the MLE of λ; then estimate the bias and the standard error of λ̂ by a nonparametric bootstrap.
B <- 9999
lambda.B <- rep(NA,B)
n <- length(w.time)
for (b in 1:B){
  b.sample <- sample(1:n,n,replace=TRUE)
  lambda.B[b] <- 1/mean(w.time[b.sample])
}
bias <- mean(lambda.B-m$estimate)
sd(lambda.B)
In the second part we calculate a 95% confidence interval for the mean time between failures.
n <- length(w.time)
m <- mean(w.time)
se <- sd(w.time)/sqrt(n)
interval.1 <- m + se * qnorm(c(0.025,0.975))
interval.1
But we can also use the assumption that the data come from an exponential distribution. In that case we have Var(X̄) = 1/(nλ²) = θ²/n, which can be estimated by X̄²/n.
sd.m <- sqrt(m^2/n)
interval.2 <- m + sd.m * qnorm(c(0.025,0.975))
interval.2
We can also estimate the standard error of θ̂ by means of a bootstrap procedure. We use the nonparametric bootstrap, that is, we sample from the original sample with replacement.
B <- 9999
m.star <- rep(NA,B)
for (b in 1:B){
  m.star[b] <- mean(sample(w.time,replace=TRUE))
}
sd.m.star <- sd(m.star)
interval.3 <- m + sd.m.star * qnorm(c(0.025,0.975))
interval.3
An interval not based on the assumption of normality of θ̂ is obtained by the percentile method:
interval.4 <- quantile(m.star, probs=c(0.025,0.975))
interval.4
I have obtained the coefficients for the Legendre polynomial that best fits my data. Now I need to determine the value of that polynomial at each time-step of my data, so that I can subtract the fit from the data. I have looked at the documentation for the Legendre module, and I'm not sure if I just don't understand my options or if there isn't a native tool in place for what I want. If my data points were evenly spaced, linspace would be a good option, but that's not the case here. Does anyone have a suggestion for what to try?
For those who would like a minimal working example of code: just use a random array, get the coefficients, and tell me from there how you would proceed. The values themselves don't matter; it's the technique that I'm asking about here. Thanks.
To simplify Ahmed's example:
In [1]: import numpy as np; from numpy.polynomial import Polynomial, Legendre
In [2]: p = Polynomial([0.5, 0.3, 0.1])
In [3]: x = np.random.rand(10) * 10
In [4]: y = p(x)
In [5]: pfit = Legendre.fit(x, y, 2)
In [6]: plot(*pfit.linspace())
Out[6]: [<matplotlib.lines.Line2D at 0x7f815364f310>]
In [7]: plot(x, y, 'o')
Out[7]: [<matplotlib.lines.Line2D at 0x7f81535d8bd0>]
The Legendre functions are scaled and offset, as the data should be confined to the interval [-1, 1] to get any advantage over the usual power basis. If you want the coefficients for plain old Legendre functions:
In [8]: pfit.convert()
Out[8]: Legendre([ 0.53333333, 0.3 , 0.06666667], [-1., 1.], [-1., 1.])
But that isn't recommended.
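Note that to evaluate the fit at your own (unevenly spaced) points, you can simply call the fitted object, which applies the domain mapping for you. A continuation of the session above (my addition):
In [9]: pfit(x)  # evaluates the fitted series at the original x values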
Once you have a function, you can just generate a numpy array for the timepoints:
>>> import numpy as np
>>> timepoints = [1,3,7,15,16,17,19]
>>> myarray = np.array(timepoints)
>>> def mypolynomial(bins, pfinal): #pfinal is just the estimate of the final array (i'll do quadratic)
...     a,b,c = pfinal # obviously, for a*x^2 + b*x + c
...     return (a*bins**2) + b*bins + c
>>> mypolynomial(myarray, (1,1,0))
array([ 2, 12, 56, 240, 272, 306, 380])
It automatically evaluates the polynomial for each timepoint in the numpy array.
Now all you have to do is rewrite mypolynomial to go from a simple quadratic example to a proper one for a Legendre polynomial. Treat the function as if it were evaluating a float to return the value, and when called on the numpy array it will automatically evaluate it for each value. (A sketch using numpy's own Legendre evaluator follows.)
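For reference, numpy already provides this evaluator, so the rewrite can be a one-liner. A sketch, assuming coeffs holds Legendre-series coefficients in increasing degree (e.g. from np.polynomial.legendre.legfit):
import numpy as np
from numpy.polynomial import legendre

timepoints = np.array([1, 3, 7, 15, 16, 17, 19], dtype=float)  # unevenly spaced
coeffs = [0.5, 0.3, 0.1]  # hypothetical fitted coefficients
fit = legendre.legval(timepoints, coeffs)  # series value at each timepoint
# detrended = data - fit  # then subtract the fit from your data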
EDIT:
Let's say I wanted to generalize this to all standard polynomials:
>>> import numpy as np
>>> timepoints = [1,3,7,15,16,17,19]
>>> myarray = np.array(timepoints)
>>> def mypolynomial(bins, pfinal): #pfinal is just the estimate of the final array (i'll do quadratic)
...     hist = np.zeros(len(bins)) # define blank return
...     for i in range(len(pfinal)):
...         # fixed a typo here, was pfinal[-i] which would give -0 rather than -1, since negative indexing starts at -1, not -0
...         const = pfinal[-i-1] # negative index to go from 0 exponent to highest exponent
...         hist += const*(bins**i)
...     return hist
>>> mypolynomial(myarray, (1,1,0))
array([   2.,   12.,   56.,  240.,  272.,  306.,  380.])
EDIT2: Typo fix
EDIT3:
@Ahmed is perfectly right when he states that Horner's rule is good for numerical stability. The implementation here would be as follows:
>>> def horner(coeffs, x):
...     acc = 0
...     for c in coeffs:
...         acc = acc * x + c
...     return acc
>>> horner((1,1,0), myarray)
array([ 2, 12, 56, 240, 272, 306, 380])
Slightly modified to keep the same argument order as before, from the code here:
http://rosettacode.org/wiki/Horner%27s_rule_for_polynomial_evaluation#Python
When you're using a nice library to fit polynomials, the library will, in my experience, usually have a function to evaluate them. So I think it is useful to know how you're generating these coefficients.
In the example below, I used two functions in numpy, legfit and legval, which make it trivial both to fit and to evaluate Legendre polynomials without any need to invoke Horner's rule or do the bookkeeping yourself. (Though I do use Horner's rule to generate some example data.)
Here's a complete example where I generate some sparse data from a known polynomial, fit a Legendre polynomial to it, evaluate that polynomial on a dense grid, and plot. Note that the fitting and evaluating part takes three lines thanks to the numpy library doing all the heavy lifting.
It produces a figure plotting the original sparse data points together with the fitted curve.
import numpy as np

### Setup code
def horner(coeffs, x):
    """Evaluate a polynomial at a point or array"""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

x = np.random.rand(10) * 10
true_coefs = [0.1, 0.3, 0.5]
y = horner(true_coefs, x)

### Fit and evaluate
legendre_coefs = np.polynomial.legendre.legfit(x, y, 2)
new_x = np.linspace(0, 10)
new_y = np.polynomial.legendre.legval(new_x, legendre_coefs)

### Plotting only
try:
    import pylab
    pylab.ion()  # turn on interactive plotting
    pylab.figure()
    pylab.plot(x, y, 'o', new_x, new_y, '-')
    pylab.xlabel('x')
    pylab.ylabel('y')
    pylab.title('Fitting Legendre polynomials and evaluating them')
    pylab.legend(['original sparse data', 'fit'])
except ImportError:
    print("Can't start plots.")
Gridding data (d) on an irregular grid (x and y) using SciPy's griddata is time-consuming when there are many datasets. But the longitudes and latitudes (x and y) are always the same; only the data (d) change. In this case, once griddata has been used, how can the procedure be repeated with different d arrays to achieve faster results?
import numpy as np, matplotlib.pyplot as plt
from scipy.interpolate import griddata
x = np.array([110, 112, 114, 115, 119, 120, 122, 124]).astype(float)
y = np.array([60, 61, 63, 67, 68, 70, 75, 81]).astype(float)
d = np.array([4, 6, 5, 3, 2, 1, 7, 9]).astype(float)
ulx, lrx = np.min(x), np.max(x)
uly, lry = np.max(y), np.min(y)
xi = np.linspace(ulx, lrx, 15)
yi = np.linspace(uly, lry, 15)
grided_data = griddata((x, y), d, (xi.reshape(1,-1), yi.reshape(-1,1)), method='nearest',fill_value=0)
plt.imshow(grided_data)
plt.show()
The above code works for one array of d.
But I have hundreds of other arrays.
griddata with method='nearest' ends up using NearestNDInterpolator. That's a class that creates an interpolator, which is then called with the xi:
elif method == 'nearest':
    ip = NearestNDInterpolator(points, values, rescale=rescale)
    return ip(xi)
So you could create your own NearestNDInterpolator and call it multiple times with different xi.
But I think in your case you want to change the values. Looking at the code for that class, I see
self.tree = cKDTree(self.points)
self.values = y
the __call__ does:
dist, i = self.tree.query(xi)
return self.values[i]
I don't know the relative cost of creating the tree versus querying it.
So it should be easy to change values between uses of __call__. And it looks like values could have multiple columns, since it's just indexing on the 1st dimension.
This interpolator is simple enough that you could write your own using the same tree idea; a minimal sketch of reusing the stock interpolator this way is shown below.
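A minimal sketch of that reuse idea (my own example, using the question's data; note that assigning to .values relies on scipy internals rather than a documented API, so treat it as an assumption):
import numpy as np
from scipy.interpolate import NearestNDInterpolator

x = np.array([110, 112, 114, 115, 119, 120, 122, 124], dtype=float)
y = np.array([60, 61, 63, 67, 68, 70, 75, 81], dtype=float)
d1 = np.array([4, 6, 5, 3, 2, 1, 7, 9], dtype=float)
d2 = d1 * 10  # a second dataset on the same points

ip = NearestNDInterpolator(np.column_stack([x, y]), d1)  # tree built once
xi = np.linspace(x.min(), x.max(), 15).reshape(1, -1)
yi = np.linspace(y.max(), y.min(), 15).reshape(-1, 1)

grid1 = ip(xi, yi)  # first dataset
ip.values = d2      # swap in new data; the tree is reused
grid2 = ip(xi, yi)  # second dataset, no tree rebuild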
Here's a Nearest Interpolator that lets you repeat the interpolation for the same points, but with different z values. I haven't done timings yet to see how much time it saves.
class MyNearest(interpolate.NearestNDInterpolator):
    # normal interpolation, but returns the near neighbor indices as well
    def __call__(self, *args):
        xi = interpolate.interpnd._ndim_coords_from_arrays(args, ndim=self.points.shape[1])
        xi = self._check_call_shape(xi)
        xi = self._scale_x(xi)
        dist, i = self.tree.query(xi)
        return i, self.values[i]

def my_griddata(points, values, method='linear', fill_value=np.nan,
                rescale=False):
    points = interpolate.interpnd._ndim_coords_from_arrays(points)
    if points.ndim < 2:
        ndim = points.ndim
    else:
        ndim = points.shape[-1]
    assert(ndim == 2)
    # simplified call for 2d 'nearest'
    ip = MyNearest(points, values, rescale=rescale)
    return ip  # return the interpolator itself, not interpolated values
# xreg, yreg, z are the sample point coordinates and values (defined elsewhere)
ip = my_griddata((xreg, yreg), z, method='nearest', fill_value=0)
print(ip)
xi = (xi.reshape(1,-1), yi.reshape(-1,1))
I, data = ip(xi)
print(data.shape)
print(I.shape)
print(np.allclose(z[I],data))
z1 = xreg+yreg # new z data
data = z1[I] # should show diagonal color bars
So as long as z has the same shape as before (and as xreg), z[I] will return the nearest value for each xi.
And it can interpolate 2d data as well (e.g. (225, n) shaped)
z1 = np.array([xreg+yreg, xreg-yreg]).T
print(z1.shape) # (225,2)
data = z1[I]
print(data.shape) # (20,20,2)
I am trying to implement face recognition by Principal Component Analysis (PCA) using Python. I am following the steps in this tutorial: http://onionesquereality.wordpress.com/2009/02/11/face-recognition-using-eigenfaces-and-distance-classifiers-a-tutorial/
Here is my code:
import os
from PIL import Image
import numpy as np
import glob
import numpy.linalg as linalg
#Step 1: put database images into a 2D array
filenames = glob.glob('C:\\Users\\Karim\\Downloads\\att_faces\\New folder/*.pgm')
filenames.sort()
img = [Image.open(fn).convert('L').resize((90, 90)) for fn in filenames]
images = np.asarray([np.array(im).flatten() for im in img])
#Step 2: find the mean image and the mean-shifted input images
mean_image = images.mean(axis=0)
shifted_images = images - mean_image
#Step 3: Covariance
c = np.cov(shifted_images)
#Step 4: Sorted eigenvalues and eigenvectors
eigenvalues,eigenvectors = linalg.eig(c)
idx = np.argsort(-eigenvalues)
eigenvalues = eigenvalues[idx]
eigenvectors = eigenvectors[:, idx]
#Step 5: Only keep the top 'num_eigenfaces' eigenvectors
num_components = 20
eigenvalues = eigenvalues[0:num_components].copy()
eigenvectors = eigenvectors[:, 0:num_components].copy()
#Step 6: Finding weights
w = eigenvectors.T * np.asmatrix(shifted_images)
#Step 7: Input image
input_image = Image.open('C:\\Users\\Karim\\Downloads\\att_faces\\1.pgm').convert('L').resize((90, 90))
input_image = np.asarray(input_image).flatten()
#Step 8: get the normalized image, covariance, eigenvalues and eigenvectors for input image
shifted_in = input_image - mean_image
c = np.cov(input_image)
cmat = c.reshape(1,1)
eigenvalues_in, eigenvectors_in = linalg.eig(cmat)
#Step 9: Find weights of input image
w_in = eigenvectors_in.T * np.asmatrix(shifted_in)
print w_in
print w_in.shape
#Step 10: Euclidean distance
d = np.sqrt(np.sum((w - w_in)**2))
idx = np.argmin(d)
match = images[idx]
I am having a problem in Step 10, as I am getting this error:
Traceback (most recent call last):
  File "C:/Users/Karim/Desktop/Bachelor 2/New folder/new3.py", line 59, in <module>
    d = np.sqrt(np.sum((w - w_in)**2))
  File "C:\Python27\lib\site-packages\numpy\matrixlib\defmatrix.py", line 343, in __pow__
    return matrix_power(self, other)
  File "C:\Python27\lib\site-packages\numpy\matrixlib\defmatrix.py", line 160, in matrix_power
    raise ValueError("input must be a square array")
ValueError: input must be a square array
Can anyone help?
I think you want to change the line where you calculate d to something like this:
#Step 10: Euclidean distance
d = np.sqrt(np.sum(np.asarray(w - w_in)**2, axis=1))
This gives you a list of length M (the number of training images) of the squared, summed, rooted distances between each image's pixels. I believe that you don't want the matrix product; you want the elementwise square of each value, hence the np.asarray to make it not a matrix. This gives you the 'euclidean' difference between w_in and each of the w matrices.
When you compute (w - w_in), the result is not a square matrix. To raise a matrix to a power, it must be square (that's just a property of matrix multiplication). So either you've constructed your w and w_in matrices wrong, or what you actually meant to do is square each element in the matrix (w - w_in), which is a different operation. Search for element-wise multiplication to find the numpy syntax. (A small illustration follows.)
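A small illustration of the distinction, with a toy non-square matrix (my own example):
import numpy as np

m = np.asmatrix([[1., 2., 3.], [4., 5., 6.]])  # non-square np.matrix

print(np.asarray(m)**2)  # element-wise square: works for any shape
# print(m**2)  # matrix power on np.matrix: raises "input must be a square array"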