I'm trying to replicate this paper: "Global motion estimation from coarsely sampled motion vector field and the applications".
I need to find the parameters m0, m1, m2, ..., m7.
Given two images x1_1 and x1_2, the equations are:
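(The equation images aren't reproduced here; as far as I can tell the model is the standard eight-parameter perspective transform, which is what the code below tries to evaluate:)

x' = (m0*x + m1*y + m2) / (m6*x + m7*y + 1)
y' = (m3*x + m4*y + m5) / (m6*x + m7*y + 1)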
Example data.
Here x' (x1_2[1, :, :]) and y' (x1_2[0, :, :]) are the values from x1_2, and x, y are taken from x1_1 in the same way.
I have referred to the example in this post for this implementation.
Could anyone help me find these parameters?
I have made edits as per the comments and the leastsq example.
The changed program and its output are given below.
import os
import sys
import numpy as np
import scipy
from scipy.optimize import leastsq
def peval(inp_mat, p):
    m0,m1,m2,m3,m4,m5,m6,m7 = p
    out_mat = np.zeros(inp_mat.shape,dtype=np.float32)
    mid = inp_mat.shape[0]/2
    for xy in range(0,inp_mat.shape[0]):
        if (xy<(inp_mat.shape[0]/2)):
            out_mat[xy] = ( ( (inp_mat[xy+mid]*m0)+(inp_mat[xy]*m1)+ m2 ) /( (inp_mat[xy+mid]*m6)+(inp_mat[xy]*m7)+1 ) )
        else:
            out_mat[xy] = ( ( (inp_mat[xy]*m3)+(inp_mat[xy-mid]*m4)+ m5 ) /( (inp_mat[xy]*m6)+(inp_mat[xy-mid]*m7)+1 ) )
    return out_mat

def residuals(p, out_mat, inp_mat):
    m0,m1,m2,m3,m4,m5,m6,m7 = p
    err = np.zeros(inp_mat.shape,dtype=np.float32)
    if (out_mat.shape == inp_mat.shape):
        for xy in range(0,inp_mat.shape[0]):
            err[xy] = err[xy]+ (out_mat[xy] -inp_mat[xy])
    return err
f = open('/media/anilil/Data/Datasets/repo/txt_op/vid.txt','r')
x = np.loadtxt(f,dtype=np.int16,comments='#',delimiter='\t')
nof = x.shape[0]/72 # Find the number of frames
x1 = x.reshape(-1,60,40)
x1_1= x1[0,:,:].flatten()
x1_2= x1[1,:,:].flatten()
x= []
y= []
for xy in range(1,50,1):
    y.append(x1[xy,:,:].flatten())
    x.append(x1[xy-1,:,:].flatten())
x=np.array(x,dtype=np.float32)
y=np.array(y,dtype=np.float32)
length = x1_1.shape
p0 = np.array([1,1,1,1,1,1,1,1],dtype=np.float32) # initial guess
abc=leastsq(residuals, p0,args=(y,x))
print ('Size of first matrix is '+str(x1_1.shape))
print ('Size of first matrix is '+str(x1_2.shape))
print ("Done with program")
Output
ValueError: object too deep for desired array
Traceback (most recent call last):
File "/media/anilil/Data/charm/mv_clean/.idea/nose_reduction_mpeg.py", line 49, in <module>
abc=leastsq(residuals, p0,args=(y,x))
File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 378, in leastsq
gtol, maxfev, epsfcn, factor, diag)
minpack.error: Result from function call is not a proper array of floats.
Check out the documentation of leastsq and the example.
You need to define the objective function so that it takes all the parameters as the first argument, followed by other inputs:
def function(M, inp_mat):
    m0, m1, m2, m3, m4, m5, m6, m7 = M
    out_mat = np.zeros(inp_mat.shape)
    ...
The other parameters, in your case inp_mat, are passed as args to the optimization function:
result = opt.leastsq(function, x0, args=(inp_mat), Dfun=None, full_output=0, col_deriv=0, ftol=1.49012e-08, xtol=1.49012e-08, gtol=0.0, maxfev=0, epsfcn=None, factor=100, diag=None)
I don't know what inp_mat is supposed to be. Most likely it has something to do with the data, so args=(x1) is probably what you want.
Finally, you want to retrieve the results from the optimization and do something with them.
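Putting it together, the call and the retrieval would look roughly like this (an untested sketch using your variable names):

p_opt, ier = leastsq(residuals, p0, args=(y, x))  # args must be a tuple
print ('Fitted parameters m0..m7: ' + str(p_opt))
print ('leastsq flag (1-4 means a solution was found): ' + str(ier))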
Related
I'm trying to fit a lorentzian to one of the peaks in my dataset.
We were given the fit for a gaussian, and aside from the actual fit equation, the code is very similar, so I'm not sure where I am going wrong. I don't see why there is an issue with the dimensions when I'm using curve_fit.
Here are the relevant pieces of my code for a better idea of what I'm talking about.
Reading the CSV file in and trimming it
import csv
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from matplotlib.ticker import StrMethodFormatter
#reading in the csv file
with open("Data-Oscilloscope.csv") as csv_file:
csv_reader = csv.reader(csv_file, delimiter=",")
time =[]
voltage_raw = []
for row in csv_reader:
time.append(float(row[3]))
voltage_raw.append(float(row[4]))
print("voltage:", row[4])
#trimming the data
trim_lower_index = 980
trim_upper_index = 1170
time_trim = time[trim_lower_index:trim_upper_index]
voltage_trim = voltage_raw[trim_lower_index:trim_upper_index]
The Gaussian fit given
#fitting the gaussian function
def gauss_function(x, a, x0, sigma):
    return a*np.exp(-(x-x0)**2/(2*sigma**2))
popt, pcov = curve_fit(gauss_function, time_trim, voltage_trim, p0=[1,.4,0.1])
perr = np.sqrt(np.diag(pcov))
#plot of the gaussian fit
plt.figure(2)
plt.plot(time_trim, gauss_function(time_trim, *popt), label = "fit")
plt.plot(time_trim, voltage_trim, "-b")
plt.show()
My attempted Lorentzian fit
#x is just the x values, a is the amplitude, x0 is the central value, and f is the full width at half max
def lorentz_function(x, a, x0, f):
    w = f/2 #half width at half max
    return a*w/ [(x-x0)**2+w**2]
popt, pcov = curve_fit(lorentz_function, time_trim, voltage_trim, p0=[1,.4,0.1])
I get an error running this that states:
in leastsq raise TypeError('Improper input: N=%s must not exceed M=%s' % (n, m))
TypeError: Improper input: N=3 must not exceed M=1
I'm probably missing something very obvious but just can't see it.
Thanks in advance!
EDIT: I have taken a look at other, similar questions and went through their explanations, but can't see how those fit in with my code because the number of parameters and dimensions of my inputs should be fine, given that they worked for the gaussian fit.
You didn't show the full traceback/error so I can only guess where it's happening. It's probably looking at the result returned by lorentz_function, and finding the dimensions to be wrong. So while the error is produced by your function, the testing is in its caller (in this case a level or two down).
def optimize.curve_fit(
    f,
    xdata,
    ydata,
    p0=None, ...   # p0=[1,.4,0.1]
    ...
    res = leastsq(func, p0,
    ...
curve_fit passes the task to leastsq, which starts as:
def optimize.leastsq(
    func,
    x0,
    args=(), ...

    x0 = asarray(x0).flatten()
    n = len(x0)
    if not isinstance(args, tuple):
        args = (args,)
    shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
    m = shape[0]
    if n > m:
        raise TypeError('Improper input: N=%s must not exceed M=%s' % (n, m))
I'm guessing _check_func does
res = func(x0, *args) # call your func with initial values
and returns the shape of res. The error says there's a mismatch between what it expects based on the shape of x0 and the shape of the result of your func.
I'm guessing that with a 3 element p0, it's complaining that your function returned a 1 element result (due to the []).
lorentz_function is your function. It doesn't check its own output shape, so it can't raise this error itself; the check happens in its caller.
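So the likely fix is to use parentheses instead of the square brackets, so the function returns one value per element of x (an untested sketch, not your original code):

def lorentz_function(x, a, x0, f):
    w = f/2  # half width at half max
    return a*w/((x - x0)**2 + w**2)

popt, pcov = curve_fit(lorentz_function, time_trim, voltage_trim, p0=[1, .4, 0.1])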
I had a similar problem that yielded this error message.
In that case, the data arrays passed to optimize.leastsq were in matrix form; the data has to be a 1-row (that is, 1-D) array.
For example, the call to leastsq was
result = optimize.leastsq(fit_func, param0, args=(xdata, ydata, zdata))
xdata, ydata and zdata were in the [ 1 x num ] matrix form. It was
[[200. .... 350.]]
It should be
[200. .... 350.]
So I had to add a line for the conversion:
xdata = xdata[0, :]
The same goes for ydata and zdata.
I hope this helps.
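Alternatively, flattening with np.ravel should do the same thing without worrying about the exact shape (just a suggestion, assuming numpy is imported as np):

xdata = np.ravel(xdata)  # turns a [1 x num] matrix into a 1-D array of length num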
I'm attempting to write a custom Theano Op which numerically integrates a function between two values. The Op is a custom likelihood for PyMC3 which involves the numerical evaluation of some integrals. I can't simply use the @as_op decorator as I need to use HMC to do the MCMC step. Any help would be much appreciated, as this question seems to have come up several times but has never been solved (e.g. https://stackoverflow.com/questions/36853015/using-theano-with-numerical-integration, Theano: implementing an integral function).
Clearly one solution would be to write a numerical integrator within Theano, but this seems like a waste of effort when very good integrators are already available, for example through scipy.integrate.
To keep this as a minimal example, let's just try and integrate a function between 0 and 1 inside an Op. The following integrates a Theano function outside of an Op, and produces correct results as far as my testing has gone.
import theano
import theano.tensor as tt
from scipy.integrate import quad
x = tt.dscalar('x')
y = x**4 # integrand
f = theano.function([x], y)
print f(0)
print f(1)
ans = quad(f, 0, 1)[0]
print ans
However, attempting to do integration within an Op appears much harder. My current best effort is:
import numpy as np
import theano
import theano.tensor as tt
from scipy import integrate
class IntOp(theano.Op):
    __props__ = ()

    def make_node(self, x):
        x = tt.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        x = inputs[0]
        z = output_storage[0]
        f_to_int = theano.function([x], x)
        z[0] = tt.as_tensor_variable(integrate.quad(f_to_int, 0, 1)[0])

    def infer_shape(self, node, i0_shapes):
        return i0_shapes

    def grad(self, inputs, output_grads):
        ans = integrate.quad(output_grads[0], 0, 1)[0]
        return [ans]

intOp = IntOp()
x = tt.dmatrix('x')
y = intOp(x)
f = theano.function([x], y)
inp = np.asarray([[2, 4], [6, 8]], dtype=theano.config.floatX)
out = f(inp)

print inp
print out
Which gives the following error:
Traceback (most recent call last):
File "stackoverflow.py", line 35, in <module>
out = f(inp)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
outputs = self.fn()
File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 912, in rval
r = p(n, [x[0] for x in i], o)
File "stackoverflow.py", line 17, in perform
f_to_int = theano.function([x], x)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 320, in function
output_keys=output_keys)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 390, in pfunc
for p in params]
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 489, in _pfunc_param_to_in
raise TypeError('Unknown parameter type: %s' % type(param))
TypeError: Unknown parameter type: <type 'numpy.ndarray'>
Apply node that caused the error: IntOp(x)
Toposort index: 0
Inputs types: [TensorType(float64, matrix)]
Inputs shapes: [(2, 2)]
Inputs strides: [(16, 8)]
Inputs values: [array([[ 2., 4.],
[ 6., 8.]])]
Outputs clients: [['output']]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "stackoverflow.py", line 30, in <module>
y = intOp(x)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 611, in __call__
node = self.make_node(*inputs, **kwargs)
File "stackoverflow.py", line 11, in make_node
return theano.Apply(self, [x], [x.type()])
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
I'm surprised by this, especially the TypeError, as I thought I had converted the output_storage variable into a tensor but it appears to believe here that it is still an ndarray.
I found your question because I'm trying to build a random variable in PyMC3 that represents a general point process (Hawkes, Cox, Poisson, etc) and the likelihood function has an integral. I really want to be able to use Hamiltonian Monte Carlo or NUTS samplers, so I needed that integral with respect to time to be differentiable.
Starting off of your attempt, I made an integrateOut theano Op that seems to work correctly with the behavior I need. I've tested it out on a few different inputs (not on my stats model just yet, but it appears promising!). I'm a total theano n00b, so pardon any stupidity. I would greatly appreciate feedback if anyone has any. Not sure it's exactly what you're looking for, but here's my solution (example at the bottom and in the doc strings). *EDIT: simplified some remnants of screwing around with ways to do this.
import theano
import theano.tensor as T
from scipy.integrate import quad
class integrateOut(theano.Op):
    """
    Integrate out a variable from an expression, computing
    the definite integral w.r.t. the variable specified
    !!! Only implemented in this for scalars !!!

    Parameters
    ----------
    f : scalar
        input 'function' to integrate
    t : scalar
        the variable to integrate out
    t0: float
        lower integration limit
    tf: float
        upper integration limit

    Returns
    -------
    scalar
        a new scalar with the 't' integrated out

    Notes
    -----
    usage of this looks like:

    x = T.dscalar('x')
    y = T.dscalar('y')
    t = T.dscalar('t')
    z = (x**2 + y**2)*t

    # integrate z w.r.t. t as a function of (x,y)
    intZ = integrateOut(z,t,0.0,5.0)(x,y)
    gradIntZ = T.grad(intZ,[x,y])
    funcIntZ = theano.function([x,y],intZ)
    funcGradIntZ = theano.function([x,y],gradIntZ)
    """
    def __init__(self,f,t,t0,tf,*args,**kwargs):
        super(integrateOut,self).__init__()
        self.f = f
        self.t = t
        self.t0 = t0
        self.tf = tf

    def make_node(self,*inputs):
        self.fvars=list(inputs)
        # This will fail when taking the gradient... don't be concerned
        try:
            self.gradF = T.grad(self.f,self.fvars)
        except:
            self.gradF = None
        return theano.Apply(self,self.fvars,[T.dscalar().type()])

    def perform(self,node, inputs, output_storage):
        # Everything else is an argument to the quad function
        args = tuple(inputs)
        # create a function to evaluate the integral
        f = theano.function([self.t]+self.fvars,self.f)
        # actually compute the integral
        output_storage[0][0] = quad(f,self.t0,self.tf,args=args)[0]

    def grad(self,inputs,grads):
        return [integrateOut(g,self.t,self.t0,self.tf)(*inputs)*grads[0] \
                for g in self.gradF]

x = T.dscalar('x')
y = T.dscalar('y')
t = T.dscalar('t')
z = (x**2+y**2)*t

intZ = integrateOut(z,t,0,1)(x,y)
gradIntZ = T.grad(intZ,[x,y])
funcIntZ = theano.function([x,y],intZ)
funcGradIntZ = theano.function([x,y],gradIntZ)
print funcIntZ(2,2)
print funcGradIntZ(2,2)
SymPy is proving harder than anticipated, but in the meantime in case anyone's finding this useful, I'll also point out how to modify this Op to allow for changing the final timepoint without creating a new Op. This can be useful if you have a point process, or if you have uncertainty in your time measurements.
class integrateOut2(theano.Op):
    def __init__(self, f, int_var, *args,**kwargs):
        super(integrateOut2,self).__init__()
        self.f = f
        self.int_var = int_var

    def make_node(self, *inputs):
        tmax = inputs[0]
        self.fvars=list(inputs[1:])
        return theano.Apply(self, [tmax]+self.fvars, [T.dscalar().type()])

    def perform(self, node, inputs, output_storage):
        # Everything else is an argument to the quad function
        tmax = inputs[0]
        args = tuple(inputs[1:])
        # create a function to evaluate the integral
        f = theano.function([self.int_var]+self.fvars, self.f)
        # actually compute the integral
        output_storage[0][0] = quad(f, 0., tmax, args=args)[0]

    def grad(self, inputs, grads):
        tmax = inputs[0]
        param_grads = T.grad(self.f, self.fvars)
        ## Recall fundamental theorem of calculus
        ## d/dt \int^{t}_{0}f(x)dx = f(t)
        ## So sub in t_max to the graph
        FTC_grad = theano.clone(self.f, {self.int_var: tmax})
        grad_list = [FTC_grad*grads[0]] + \
            [integrateOut2(grad_fn, self.int_var)(*inputs)*grads[0] \
            for grad_fn in param_grads]
        return grad_list
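Usage mirrors the earlier example, with tmax now passed as the first input to the Op. I haven't run this exact snippet, so treat it as a sketch of the intended interface rather than tested code:

x = T.dscalar('x')
y = T.dscalar('y')
t = T.dscalar('t')
tmax = T.dscalar('tmax')
z = (x**2 + y**2)*t

# integrate z w.r.t. t from 0 to tmax, as a function of (tmax, x, y)
intZ = integrateOut2(z, t)(tmax, x, y)
gradIntZ = T.grad(intZ, [tmax, x, y])
funcIntZ = theano.function([tmax, x, y], intZ)
funcGradIntZ = theano.function([tmax, x, y], gradIntZ)
print funcIntZ(1.0, 2.0, 2.0)   # expect (2**2 + 2**2)*1**2/2 = 4.0
print funcGradIntZ(1.0, 2.0, 2.0)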
I always use the following code, where I generate B = 10000 samples of n = 30 observations from a normal distribution with µ = 1 and σ² = 2.25. For each sample, the parameters µ and σ are estimated and stored in a matrix. I hope this can help you.
loglik <- function(p,z){
  sum(dnorm(z,mean=p[1],sd=p[2],log=TRUE))
}
set.seed(45)
n <- 30
x <- rnorm(n,mean=1,sd=1.5)
optim(c(mu=0,sd=1),loglik,control=list(fnscale=-1),z=x)
B <- 10000
bootstrap.results <- matrix(NA,nrow=B,ncol=3)
colnames(bootstrap.results) <- c("mu","sigma","convergence")
for (b in 1:B){
  sample.b <- rnorm(n,mean=1,sd=1.5)
  m.b <- optim(c(mu=0,sd=1),loglik,control=list(fnscale=-1),z=sample.b)
  bootstrap.results[b,] <- c(m.b$par,m.b$convergence)
}
One can also obtain the ML estimate of λ and use the bootstrap to estimate the bias and the standard error of the estimate. First calculate the MLE of λ. Then we estimate the bias and the standard error of λ̂ by a nonparametric bootstrap.
B <- 9999
lambda.B <- rep(NA,B)
n <- length(w.time)
for (b in 1:B){
  b.sample <- sample(1:n,n,replace=TRUE)
  lambda.B[b] <- 1/mean(w.time[b.sample])
}
bias <- mean(lambda.B-m$estimate)
sd(lambda.B)
In the second part we calculate a 95% confidence interval for the mean time between failures.
n <- length(w.time)
m <- mean(w.time)
se <- sd(w.time)/sqrt(n)
interval.1 <- m + se * qnorm(c(0.025,0.975))
interval.1
But we can also use the assumption that the data are from an exponential distribution. In that case we have var(X̄) = 1/(nλ²) = θ²/n, which can be estimated by X̄²/n.
sd.m <- sqrt(m^2/n)
interval.2 <- m + sd.m * qnorm(c(0.025,0.975))
interval.2
We can also estimate the standard error of θ̂ by means of a bootstrap procedure. We use the nonparametric bootstrap, that is, we sample from the original sample with replacement.
B <- 9999
m.star <- rep(NA,B)
for (b in 1:B){
  m.star[b] <- mean(sample(w.time,replace=TRUE))
}
sd.m.star <- sd(m.star)
interval.3 <- m + sd.m.star * qnorm(c(0.025,0.975))
interval.3
An interval not based on the assumption of normality of θ̂ is obtained by the percentile method:
interval.4 <- quantile(m.star, probs=c(0.025,0.975))
interval.4
This is my code
import os
import sys
import numpy as np
import scipy
from scipy.optimize import leastsq
def peval(inp_mat, p):
    m0,m1,m2,m3,m4,m5,m6,m7 = p
    out_mat = np.array(np.zeros(inp_mat.shape,dtype=np.float32))
    mid = inp_mat.shape[0]/2
    for xy in range(0,inp_mat.shape[0]):
        if (xy<(inp_mat.shape[0]/2)):
            out_mat[xy] = ( ( (inp_mat[xy+mid]*m0)+(inp_mat[xy]*m1)+ m2 ) /( (inp_mat[xy+mid]*m6)+(inp_mat[xy]*m7)+1 ) )
        else:
            out_mat[xy] = ( ( (inp_mat[xy]*m3)+(inp_mat[xy-mid]*m4)+ m5 ) /( (inp_mat[xy]*m6)+(inp_mat[xy-mid]*m7)+1 ) )
    return np.array(out_mat)

def residuals(p, out_mat, inp_mat):
    m0,m1,m2,m3,m4,m5,m6,m7 = p
    err = np.array(np.zeros(inp_mat.shape,dtype=np.float32))
    if (out_mat.shape == inp_mat.shape):
        for xy in range(0,inp_mat.shape[0]):
            err[xy] = err[xy]+ (out_mat[xy] -inp_mat[xy])
    return np.array(err)
f = open('/media/anilil/Data/Datasets/repo/txt_op/vid.txt','r')
x = np.loadtxt(f,dtype=np.int16,comments='#',delimiter='\t')
nof = x.shape[0]/72 # Find the number of frames
x1 = x.reshape(-1,60,40)
x1_1= x1[0,:,:].flatten()
x1_2= x1[1,:,:].flatten()
x= []
y= []
for xy in range(1,50,1):
    y.append(x1[xy,:,:].flatten())
    x.append(x1[xy-1,:,:].flatten())
x=np.array(x,dtype=np.float32)
y=np.array(y,dtype=np.float32)
length = x1_1.shape
p0 = np.array([1,1,1,1,1,1,1,1],dtype=np.float32) # initial guess
abc=leastsq(residuals, p0,args=(y,x))
print ('Size of first matrix is '+str(x1_1.shape))
print ('Size of first matrix is '+str(x1_2.shape))
print ("Done with program")
I have tried adding np.array in most places, with no luck.
Could someone please help me?
Another question here: should residuals() return a single value by summing all the errors (np.sum(err, axis=1)), or should I leave it the way it is?
When I return np.sum(err, axis=1) from residuals(), there is no change in the initial guess; it just remains the same.
I.e., should the error be reported per item of the input/output mapping, or as one combined error overall?
Example data.
Output
ValueError: object too deep for desired array
Traceback (most recent call last):
File "/media/anilil/Data/charm/mv_clean/.idea/nose_reduction_mpeg.py", line 49, in <module>
abc=leastsq(residuals, p0,args=(y,x))
File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 378, in leastsq
gtol, maxfev, epsfcn, factor, diag)
minpack.error: Result from function call is not a proper array of floats.
leastsq requires a 1D array to be returned from your residuals function.
Currently you calculate the residuals for the whole image and return that as a 2D array.
The simple fix would be to flatten the array of residuals (turning your 2D array into a 1D one).
So instead of returning
return np.array(err)
Do this instead
return err.flatten()
Note that err is already a numpy array so doesn't need to be cast before the return (I guess that slipped in when you were trying to debug it!)
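To make the expected shapes concrete, here's a tiny self-contained toy example (my own made-up linear model, not your motion model): the residual function takes the parameter vector first and returns a flat 1-D float array, even when the underlying data is 2-D.

import numpy as np
from scipy.optimize import leastsq

def model(p, x):
    a, b = p
    return a*x + b

def residuals(p, y, x):
    return (y - model(p, x)).ravel()   # flatten the 2-D residual array to 1-D

x = np.linspace(0, 1, 200).reshape(10, 20)   # 2-D input, just to show the flattening
y = 3.0*x + 0.5
p0 = np.array([1.0, 0.0])
p_fit, ier = leastsq(residuals, p0, args=(y, x))
print (p_fit)   # should be close to [3.0, 0.5]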
I am new to python and trying to convert some matlab code as an exercise. One of the tasks involves finding the root, or minimum absolute value if no root exists, of a function. I can do that just fine, but I want to add some error checking to see that scipy.optimize.fsolve actually finds a solution. In particular, fsolve returns a parameter ier which is just a success flag and I want to read it.
My code is as follows:
import numpy as np
from scipy.optimize import fsolve, minimize
mu = -8
sigma = 4
mun = -2
ARL0 = 2000
hmin = 0.5
hmax = 100
f = lambda h: (np.exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun**2)-ARL0
if f(hmin)*f(hmax) < 0:
    opth, ier = fsolve(f,hmax)
    print ier
    print opth[0]
else:
    f = lambda h: np.abs((np.exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun**2)-ARL0)
    opth = minimize(f,hmax,bounds=((hmin,hmax),))
    print opth.success
    print opth.x[0]
The "else" bit works fine, it prints the solution and a true/false if it found one. The first if block doesn't: I get the following error when it runs:
line 14, in <module>
opth, ier = fsolve(f,hmax)
ValueError: need more than 1 value to unpack
I'm guessing it's just a syntax error but I haven't been able to find an example using fsolve to do this. Can someone point me in the right direction?
You need to add the argument full_output=True and there are actually 4 return values, looking at the docs, so you need to unpack all of them.
Here's the source for fsolve in my version of SciPy (0.14.0), you can see the two options for return values:
def fsolve(func, x0, args=(), fprime=None, full_output=0,
           col_deriv=0, xtol=1.49012e-8, maxfev=0, band=None,
           epsfcn=None, factor=100, diag=None):
    options = {'col_deriv': col_deriv,
               'xtol': xtol,
               'maxfev': maxfev,
               'band': band,
               'eps': epsfcn,
               'factor': factor,
               'diag': diag,
               'full_output': full_output}

    res = _root_hybr(func, x0, args, jac=fprime, **options)
    if full_output:
        x = res['x']
        info = dict((k, res.get(k))
                    for k in ('nfev', 'njev', 'fjac', 'r', 'qtf') if k in res)
        info['fvec'] = res['fun']
        return x, info, res['status'], res['message']
    else:
        return res['x']
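So in your case the unpacking would look like this (a sketch based on the return statement above):

opth, infodict, ier, mesg = fsolve(f, hmax, full_output=True)
print ier        # 1 means a solution was found
print opth[0]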
I seem to be getting an error when I use the root-finder in scipy. I was wondering if anyone could point out what I'm doing wrong.
The function I'm finding the root of is just an easy example, and not particularly important.
If I run this code with scipy 0.9.0:
import numpy as np
from scipy.optimize import fsolve
tmpFunc = lambda xIn: (xIn[0]-4)**2 + (xIn[1]-5)**2 + (xIn[2]-7)**3
x0 = [3,4,5]
xFinal = fsolve(tmpFunc, x0 )
print xFinal
I get the following error message:
Traceback (most recent call last):
File "tmpStack.py", line 7, in <module>
xFinal = fsolve(tmpFunc, x0 )
File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 115, in fsolve
_check_func('fsolve', 'func', func, x0, args, n, (n,))
File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 26, in _check_func
raise TypeError(msg)
TypeError: fsolve: there is a mismatch between the input and output shape of the 'func' argument '<lambda>'.
Well, it looks like I was trying to use this routine incorrectly. This routine requires the same number of equations as variables, versus the one equation with three variables I gave it. So if the input to the function passed to fsolve is a 3-element array, the output should also be a 3-element array. This code works:
import numpy as np
from scipy.optimize import fsolve
tmpFunc = lambda xIn: np.array( [ (xIn[0]-4)**2 + xIn[1], (xIn[1]-5)**2 - xIn[2], \
                                  (xIn[2]-7)**3 + xIn[0] ] )
x0 = [3,4,5]
xFinal = fsolve(tmpFunc, x0 )
print xFinal
Which represents solving three equations simultaneously.
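A quick sanity check is to plug the returned root back into the function; the residuals should all be close to zero:

print tmpFunc(xFinal)   # should print values near [0, 0, 0]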