Solving an elliptic PDE using FiPy - python

I'm trying to solve for an elliptic pde using FiPy and I'm running into some convergence problems. The equation I'm trying to solve is:
\[ \frac{\partial^2 \alpha}{\partial x^2} = \frac{\alpha - 1}{L^2} \]
where L = f(x), and I'm using a tuple of dx values since the solution for alpha depends on the mesh.
I've made the following script to solve the equation using FiPy:
from fipy import *
import numpy as np

deltax = tuple(np.genfromtxt('delx_eps.dat')[:, 0])
mesh = Grid1D(dx=deltax, nx=257)

# compute L^2
CL = 0.161
Ceta = 80.0
eps = np.genfromtxt('delx_eps.dat')[:, 1]
nu = np.empty_like(eps)
nu[:] = 1.0/590.0
L_sq = (CL*Ceta*(nu**3/eps)**(1/4))**2  # note: 1/4 == 0.25 under Python 3

coeff_L = CellVariable(mesh=mesh, value=L_sq, name='Lsquare')
alpha = CellVariable(mesh=mesh, name='Solution', value=0.0)

# Boundary conditions
alpha.constrain(0., where=mesh.facesLeft)
alpha.constrain(0., where=mesh.facesRight)

eq = DiffusionTerm(coeff=1.0, var=alpha) == (alpha - 1.0)/coeff_L

mySolver = LinearLUSolver(iterations=10, tolerance=5e-6)
res = 1e+100
while res > 1e-8:
    res = eq.sweep(var=alpha, solver=mySolver)
    print(res)
The solution diverges until res is "inf", resulting in the error:
/usr/local/lib/python3.8/dist-packages/fipy/variables/variable.py:1143: RuntimeWarning: overflow encountered in true_divide
  return self._BinaryOperatorVariable(lambda a, b: a / b, other)
/usr/local/lib/python3.8/dist-packages/fipy/variables/variable.py:1122: RuntimeWarning: invalid value encountered in multiply
  return self._BinaryOperatorVariable(lambda a, b: a*b, other)
Traceback (most recent call last):
  File "elliptic_shielding.py", line 73, in <module>
    res = eq.sweep(var=alpha, solver=mySolver)
  File "/usr/local/lib/python3.8/dist-packages/fipy/terms/term.py", line 232, in sweep
    solver._solve()
  File "/usr/local/lib/python3.8/dist-packages/fipy/solvers/scipy/scipySolver.py", line 26, in _solve
    self.var[:] = numerix.reshape(self._solve_(self.matrix, self.var.ravel(), numerix.array(self.RHSvector)), self.var.shape)
  File "/usr/local/lib/python3.8/dist-packages/fipy/solvers/scipy/linearLUSolver.py", line 31, in _solve_
    LU = splu(L.matrix.asformat("csc"), diag_pivot_thresh=1.,
  File "/usr/local/lib/python3.8/dist-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 337, in splu
    return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr,
RuntimeError: Factor is exactly singular
What I have noticed is that the solution converges well when the value of L^2 is constant. However, I've not been able to make it work with varying L.
How should I best go about solving this issue?
Any help or guidance is much appreciated.
Thanks in advance.
PS: The data used is available through this link.

If L^2 is small enough, the solution is unstable even when L^2 is constant. Written as (alpha - 1.0)/coeff_L on the right-hand side, the source is treated explicitly, so each sweep uses the previous iterate of alpha; when L^2 is small the source dominates the diffusion term and the fixed-point iteration diverges.
Changing to an implicit source seems to work, e.g.,
eq = DiffusionTerm(coeff=1.0, var=alpha) == ImplicitSourceTerm(coeff=1./coeff_L, var=alpha) - 1.0 / coeff_L
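For reference, here is a minimal self-contained sketch of the stabilized sweep loop. The constant dx and the constant L^2 value below are made up so the example runs without the data file; with the real mesh and L_sq profile the structure is the same.

from fipy import Grid1D, CellVariable, DiffusionTerm, ImplicitSourceTerm, LinearLUSolver

mesh = Grid1D(dx=1.0, nx=100)                  # stand-in for the non-uniform mesh
coeff_L = CellVariable(mesh=mesh, value=0.01)  # stand-in for the L^2 profile
alpha = CellVariable(mesh=mesh, name='Solution', value=0.0)
alpha.constrain(0., where=mesh.facesLeft)
alpha.constrain(0., where=mesh.facesRight)

# alpha/L^2 goes into the matrix diagonal via ImplicitSourceTerm instead of
# being lagged in the right-hand side
eq = (DiffusionTerm(coeff=1.0, var=alpha)
      == ImplicitSourceTerm(coeff=1./coeff_L, var=alpha) - 1.0/coeff_L)

mySolver = LinearLUSolver(iterations=10, tolerance=5e-6)
res = 1e+100
while res > 1e-8:
    res = eq.sweep(var=alpha, solver=mySolver)
    print(res)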


How do I construct design matrix in NumPy (for linear regression)?

For this lab I need to sample 150 x-values from a Normal distribution using a mean of 0 and standard deviation of 10, then from the x-values construct a design matrix using the features {1,x,x^2}.
We have to sample parameters and then use the design matrix to create y values for regression data.
The problem is that my design matrix isn't square, and the Moore-Penrose pseudoinverse needs square matrices, but I don't know how to get that to work given the earlier setup of the lab.
This is what I've done
# Linear Regression Lab
import numpy as np

data = np.random.normal(0, 10, 150)

design_matrix = np.zeros((150, 3))
for i in range(150):
    design_matrix[i][0] = 1
    design_matrix[i][1] = data[i]
    design_matrix[i][2] = pow(data[i], 2)

print("-------------------Design Matrix---------------------")
print("|--------1--------|-------x-------|--------x^2--------|")
print(design_matrix[:20])
#sampling paramters
theta_0 = np.random.uniform(low = -30, high = 20)
theta_1 = np.random.uniform(low = -30, high = 20)
theta_2 = np.random.uniform(low = -30, high = 20)
print(theta_0, theta_1, theta_2)
theta = np.array([theta_0, theta_1, theta_2])
theta = np.transpose(theta)
#moore penrose psuedo inverse
MPpi = np.linalg.pinv(design_matrix) ##problem here
y_values = np.linalg.inv(MPpi)
Feel free to edit this incomplete answer
After running this code on Repl, I got the following error message
Traceback (most recent call last):
  File "main.py", line 32, in <module>
    y_values = np.linalg.inv(MPpi)
  File "<__array_function__ internals>", line 5, in inv
  File "/home/runner/.local/share/virtualenvs/python3/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 542, in inv
    _assert_stacked_square(a)
  File "/home/runner/.local/share/virtualenvs/python3/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 213, in _assert_stacked_square
    raise LinAlgError('Last 2 dimensions of the array must be square')
numpy.linalg.LinAlgError: Last 2 dimensions of the array must be square
The error comes from taking the inverse of MPpi.
Looking at the docs, pinv swaps the last two dimensions (the pseudoinverse of an m x n matrix is n x m), so MPpi is 3 x 150 here; it is not square, and passing it to np.linalg.inv cannot work.
As far as the Moore-Penrose inverse (pinv) is concerned, this article suggests multiplying MPpi by the vector of observed targets, which yields x_0 (notation from Ross MacAusland), the least-squares best fit.
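To illustrate (a sketch with made-up names, not the lab's exact variables): the pseudoinverse of the non-square design matrix, applied to the observed targets, gives the least-squares parameter estimate directly, with no square-matrix inversion anywhere.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 10, 150)
X = np.column_stack([np.ones_like(x), x, x**2])  # features {1, x, x^2}

theta_true = rng.uniform(-30, 20, size=3)        # sampled parameters
y = X @ theta_true + rng.normal(0, 1, 150)       # synthetic regression targets

theta_hat = np.linalg.pinv(X) @ y                # least-squares fit; X is 150 x 3
print(theta_true)
print(theta_hat)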

How to work around the name 'nabla_div' is not defined error in Fenics Example ft06_elasticity.py?

I installed Fenics using Conda on Ubuntu 18.04, and received the following error while running their ft06_elasticity.py example.
I have tried to find a solution or workaround to this in the documentation, but I cannot even find a nabla_div() function description anywhere.
The Fenics documentation states the following:
nabla_grad
The gradient and divergence operators now have a prefix nabla_. This is strictly not necessary in the present problem, but recommended in general for vector PDEs arising from continuum mechanics, if you interpret ∇ as a vector in the PDE notation; see the box about nabla_grad in the section Variational formulation.
"""
FEniCS tutorial demo program: Linear elastic problem.
-div(sigma(u)) = f
The model is used to simulate an elastic beam clamped at
its left end and deformed under its own weight.
"""
from __future__ import print_function
from fenics import *
# Scaled variables
L = 1; W = 0.2
mu = 1
rho = 1
delta = W/L
gamma = 0.4*delta**2
beta = 1.25
lambda_ = beta
g = gamma
# Create mesh and define function space
mesh = BoxMesh(Point(0, 0, 0), Point(L, W, W), 10, 3, 3)
V = VectorFunctionSpace(mesh, 'P', 1)
# Define boundary condition
tol = 1E-14
def clamped_boundary(x, on_boundary):
return on_boundary and x[0] < tol
bc = DirichletBC(V, Constant((0, 0, 0)), clamped_boundary)
# Define strain and stress
def epsilon(u):
return 0.5*(nabla_grad(u) + nabla_grad(u).T)
#return sym(nabla_grad(u))
def sigma(u):
return lambda_*nabla_div(u)*Identity(d) + 2*mu*epsilon(u)
# Define variational problem
u = TrialFunction(V)
d = u.geometric_dimension() # space dimension
v = TestFunction(V)
f = Constant((0, 0, -rho*g))
T = Constant((0, 0, 0))
a = inner(sigma(u), epsilon(v))*dx
L = dot(f, v)*dx + dot(T, v)*ds
# Compute solution
u = Function(V)
solve(a == L, u, bc)
# Plot solution
plot(u, title='Displacement', mode='displacement')
# Plot stress
s = sigma(u) - (1./3)*tr(sigma(u))*Identity(d) # deviatoric stress
von_Mises = sqrt(3./2*inner(s, s))
V = FunctionSpace(mesh, 'P', 1)
von_Mises = project(von_Mises, V)
plot(von_Mises, title='Stress intensity')
# Compute magnitude of displacement
u_magnitude = sqrt(dot(u, u))
u_magnitude = project(u_magnitude, V)
plot(u_magnitude, 'Displacement magnitude')
print('min/max u:',
u_magnitude.vector().array().min(),
u_magnitude.vector().array().max())
# Save solution to file in VTK format
File('elasticity/displacement.pvd') << u
File('elasticity/von_mises.pvd') << von_Mises
File('elasticity/magnitude.pvd') << u_magnitude
# Hold plot
interactive()
Traceback (most recent call last):
  File "fenics_ft06_elasticity.py", line 48, in <module>
    a = inner(sigma(u), epsilon(v))*dx
  File "fenics_ft06_elasticity.py", line 40, in sigma
    return lambda_*nabla_div(u)*Identity(d) + 2*mu*epsilon(u)
NameError: name 'nabla_div' is not defined
I found that replacing 'nabla_div(u)' with just 'div(u)' solved that error. However, it did lead straight to the next error:
Traceback (most recent call last):
  File "fenics_ft06_elasticity.py", line 56, in <module>
    plot(u, title='Displacement', mode='displacement')
  File "/home/ron/miniconda3/envs/fenicsproject/lib/python3.7/site-packages/dolfin/common/plotting.py", line 438, in plot
    return _plot_matplotlib(object, mesh, kwargs)
  File "/home/ron/miniconda3/envs/fenicsproject/lib/python3.7/site-packages/dolfin/common/plotting.py", line 282, in _plot_matplotlib
    ax.set_aspect('equal')
  File "/home/ron/miniconda3/envs/fenicsproject/lib/python3.7/site-packages/matplotlib/axes/_base.py", line 1281, in set_aspect
    'It is not currently possible to manually set the aspect '
NotImplementedError: It is not currently possible to manually set the aspect on 3D axes
Simply add these two lines to the beginning of your code to use nabla_grad and nabla_div:
from ufl import nabla_grad
from ufl import nabla_div
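Presumably the fenics star import stopped re-exporting these UFL operators at some point, so they must be imported explicitly. A minimal sketch to confirm the names then resolve (the mesh size and element choice here are arbitrary):

from fenics import *
from ufl import nabla_grad, nabla_div

mesh = UnitCubeMesh(4, 4, 4)
V = VectorFunctionSpace(mesh, 'P', 1)
u = TrialFunction(V)
d = u.geometric_dimension()

# both operators now resolve without a NameError
expr = nabla_div(u)*Identity(d) + nabla_grad(u)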

Custom Theano Op to do numerical integration

I'm attempting to write a custom Theano Op which numerically integrates a function between two values. The Op is a custom likelihood for PyMC3 which involves the numerical evaluation of some integrals. I can't simply use the @as_op decorator as I need to use HMC to do the MCMC step. Any help would be much appreciated, as this question seems to have come up several times but has never been solved (e.g. https://stackoverflow.com/questions/36853015/using-theano-with-numerical-integration, Theano: implementing an integral function).
Clearly one solution would be to write a numerical integrator within Theano, but this seems like a waste of effort when very good integrators are already available, for example through scipy.integrate.
To keep this as a minimal example, let's just try and integrate a function between 0 and 1 inside an Op. The following integrates a Theano function outside of an Op, and produces correct results as far as my testing has gone.
import theano
import theano.tensor as tt
from scipy.integrate import quad

x = tt.dscalar('x')
y = x**4  # integrand
f = theano.function([x], y)
print(f(0))
print(f(1))
ans = quad(f, 0, 1)[0]
print(ans)
However, attempting to do integration within an Op appears much harder. My current best effort is:
import numpy as np
import theano
import theano.tensor as tt
from scipy import integrate

class IntOp(theano.Op):
    __props__ = ()

    def make_node(self, x):
        x = tt.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        x = inputs[0]
        z = output_storage[0]
        f_to_int = theano.function([x], x)
        z[0] = tt.as_tensor_variable(integrate.quad(f_to_int, 0, 1)[0])

    def infer_shape(self, node, i0_shapes):
        return i0_shapes

    def grad(self, inputs, output_grads):
        ans = integrate.quad(output_grads[0], 0, 1)[0]
        return [ans]

intOp = IntOp()
x = tt.dmatrix('x')
y = intOp(x)
f = theano.function([x], y)
inp = np.asarray([[2, 4], [6, 8]], dtype=theano.config.floatX)
out = f(inp)
print(inp)
print(out)
Which gives the following error:
Traceback (most recent call last):
  File "stackoverflow.py", line 35, in <module>
    out = f(inp)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 912, in rval
    r = p(n, [x[0] for x in i], o)
  File "stackoverflow.py", line 17, in perform
    f_to_int = theano.function([x], x)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 320, in function
    output_keys=output_keys)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 390, in pfunc
    for p in params]
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 489, in _pfunc_param_to_in
    raise TypeError('Unknown parameter type: %s' % type(param))
TypeError: Unknown parameter type: <type 'numpy.ndarray'>
Apply node that caused the error: IntOp(x)
Toposort index: 0
Inputs types: [TensorType(float64, matrix)]
Inputs shapes: [(2, 2)]
Inputs strides: [(16, 8)]
Inputs values: [array([[ 2.,  4.],
       [ 6.,  8.]])]
Outputs clients: [['output']]
Backtrace when the node is created (use Theano flag traceback.limit=N to make it longer):
  File "stackoverflow.py", line 30, in <module>
    y = intOp(x)
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 611, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "stackoverflow.py", line 11, in make_node
    return theano.Apply(self, [x], [x.type()])
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
I'm surprised by this, especially the TypeError, as I thought I had converted the output_storage variable into a tensor, but here it appears to still be an ndarray.
I found your question because I'm trying to build a random variable in PyMC3 that represents a general point process (Hawkes, Cox, Poisson, etc) and the likelihood function has an integral. I really want to be able to use Hamiltonian Monte Carlo or NUTS samplers, so I needed that integral with respect to time to be differentiable.
Starting from your attempt, I made an integrateOut theano Op that seems to work correctly with the behavior I need. I've tested it on a few different inputs (not on my stats model just yet, but it appears promising!). I'm a total theano n00b, so pardon any stupidity. I would greatly appreciate feedback if anyone has any. It may not be exactly what you're looking for, but here's my solution (example at the bottom and in the docstring). EDIT: simplified away some remnants of earlier experiments.
import theano
import theano.tensor as T
from scipy.integrate import quad

class integrateOut(theano.Op):
    """
    Integrate out a variable from an expression, computing
    the definite integral w.r.t. the variable specified.
    !!! Only implemented for scalars !!!

    Parameters
    ----------
    f : scalar
        input 'function' to integrate
    t : scalar
        the variable to integrate out
    t0 : float
        lower integration limit
    tf : float
        upper integration limit

    Returns
    -------
    scalar
        a new scalar with the 't' integrated out

    Notes
    -----
    usage of this looks like:
        x = T.dscalar('x')
        y = T.dscalar('y')
        t = T.dscalar('t')

        z = (x**2 + y**2)*t

        # integrate z w.r.t. t as a function of (x,y)
        intZ = integrateOut(z, t, 0.0, 5.0)(x, y)
        gradIntZ = T.grad(intZ, [x, y])

        funcIntZ = theano.function([x, y], intZ)
        funcGradIntZ = theano.function([x, y], gradIntZ)
    """
    def __init__(self, f, t, t0, tf, *args, **kwargs):
        super(integrateOut, self).__init__()
        self.f = f
        self.t = t
        self.t0 = t0
        self.tf = tf

    def make_node(self, *inputs):
        self.fvars = list(inputs)
        # This will fail when taking the gradient... don't be concerned
        try:
            self.gradF = T.grad(self.f, self.fvars)
        except:
            self.gradF = None
        return theano.Apply(self, self.fvars, [T.dscalar().type()])

    def perform(self, node, inputs, output_storage):
        # Everything else is an argument to the quad function
        args = tuple(inputs)
        # create a function to evaluate the integral
        f = theano.function([self.t] + self.fvars, self.f)
        # actually compute the integral
        output_storage[0][0] = quad(f, self.t0, self.tf, args=args)[0]

    def grad(self, inputs, grads):
        return [integrateOut(g, self.t, self.t0, self.tf)(*inputs)*grads[0]
                for g in self.gradF]

x = T.dscalar('x')
y = T.dscalar('y')
t = T.dscalar('t')

z = (x**2 + y**2)*t
intZ = integrateOut(z, t, 0, 1)(x, y)
gradIntZ = T.grad(intZ, [x, y])
funcIntZ = theano.function([x, y], intZ)
funcGradIntZ = theano.function([x, y], gradIntZ)
print(funcIntZ(2, 2))
print(funcGradIntZ(2, 2))
SymPy is proving harder than anticipated, but in the meantime, in case anyone finds this useful, I'll also point out how to modify this Op to allow changing the final timepoint without creating a new Op. This can be useful if you have a point process, or if you have uncertainty in your time measurements.
class integrateOut2(theano.Op):
    def __init__(self, f, int_var, *args, **kwargs):
        super(integrateOut2, self).__init__()
        self.f = f
        self.int_var = int_var

    def make_node(self, *inputs):
        tmax = inputs[0]
        self.fvars = list(inputs[1:])
        return theano.Apply(self, [tmax] + self.fvars, [T.dscalar().type()])

    def perform(self, node, inputs, output_storage):
        # Everything else is an argument to the quad function
        tmax = inputs[0]
        args = tuple(inputs[1:])
        # create a function to evaluate the integral
        f = theano.function([self.int_var] + self.fvars, self.f)
        # actually compute the integral
        output_storage[0][0] = quad(f, 0., tmax, args=args)[0]

    def grad(self, inputs, grads):
        tmax = inputs[0]
        param_grads = T.grad(self.f, self.fvars)
        ## Recall fundamental theorem of calculus
        ## d/dt \int^{t}_{0} f(x) dx = f(t)
        ## So sub in t_max to the graph
        FTC_grad = theano.clone(self.f, {self.int_var: tmax})
        grad_list = [FTC_grad*grads[0]] + \
                    [integrateOut2(grad_fn, self.int_var)(*inputs)*grads[0]
                     for grad_fn in param_grads]
        return grad_list
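Usage would then mirror the original Op, except the upper limit is passed as a symbolic input. A sketch under the same scalar-only assumptions as above:

x = T.dscalar('x')
t = T.dscalar('t')
tmax = T.dscalar('tmax')

z = x**2 * t  # integrand, integrated over t from 0 to tmax

intZ = integrateOut2(z, t)(tmax, x)
gradIntZ = T.grad(intZ, [tmax, x])  # d/dtmax comes from the FTC branch above

funcIntZ = theano.function([tmax, x], intZ)
print(funcIntZ(1.0, 2.0))           # integral of 4t from 0 to 1 = 2.0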
I always use the following code, where I generate B = 10000 samples of n = 30 observations from a normal distribution with µ = 1 and σ² = 2.25. For each sample, the parameters µ and σ are estimated and stored in a matrix. I hope this can help you.
loglik <- function(p, z){
  sum(dnorm(z, mean=p[1], sd=p[2], log=TRUE))
}

set.seed(45)
n <- 30
x <- rnorm(n, mean=1, sd=1.5)
optim(c(mu=0, sd=1), loglik, control=list(fnscale=-1), z=x)

B <- 10000
bootstrap.results <- matrix(NA, nrow=B, ncol=3)
colnames(bootstrap.results) <- c("mu", "sigma", "convergence")
for (b in 1:B){
  sample.b <- rnorm(n, mean=1, sd=1.5)
  m.b <- optim(c(mu=0, sd=1), loglik, control=list(fnscale=-1), z=sample.b)
  bootstrap.results[b,] <- c(m.b$par, m.b$convergence)
}
One can also obtain the ML estimate of λ and use the bootstrap to estimate the bias and the standard error of the estimate. First we calculate the MLE of λ; then we estimate the bias and the standard error of λ̂ by a nonparametric bootstrap.
B <- 9999
lambda.B <- rep(NA, B)
n <- length(w.time)
for (b in 1:B){
  b.sample <- sample(1:n, n, replace=TRUE)
  lambda.B[b] <- 1/mean(w.time[b.sample])
}
bias <- mean(lambda.B - m$estimate)
sd(lambda.B)
In the second part we calculate a 95% confidence interval for the mean time between failures.
n <- length(w.time)
m <- mean(w.time)
se <- sd(w.time)/sqrt(n)
interval.1 <- m + se * qnorm(c(0.025,0.975))
interval.1
But we can also use the assumption that the data are from an exponential distribution. In that case Var(X̄) = 1/(nλ²) = θ²/n, which can be estimated by X̄²/n.
sd.m <- sqrt(m^2/n)
interval.2 <- m + sd.m * qnorm(c(0.025,0.975))
interval.2
We can also estimate the standard error of θ̂ by means of a bootstrap procedure. We use the nonparametric bootstrap, that is, we sample from the original sample with replacement.
B <- 9999
m.star <- rep(NA, B)
for (b in 1:B){
  m.star[b] <- mean(sample(w.time, replace=TRUE))
}
sd.m.star <- sd(m.star)
interval.3 <- m + sd.m.star * qnorm(c(0.025,0.975))
interval.3
An interval not based on the assumption of normality of θ̂ is obtained by the percentile method:
interval.4 <- quantile(m.star, probs=c(0.025,0.975))
interval.4

'TypeError' while using NSGA2 to solve Multi-objective prob. from pyopt-sparse in OpenMDAO 1.x

I am trying to solve a multi-objective optimization problem using openMDAO's pyopt-sparse driver with NSGA2 algorithm. Following is the code:
from __future__ import print_function
from openmdao.api import IndepVarComp, Component, Problem, Group, pyOptSparseDriver

class Circ2(Component):

    def __init__(self):
        super(Circ2, self).__init__()
        self.add_param('x', val=10.0)
        self.add_param('y', val=10.0)
        self.add_output('f1', val=40.0)
        self.add_output('f2', val=40.0)

    def solve_nonlinear(self, params, unknowns, resids):
        x = params['x']
        y = params['y']
        unknowns['f1'] = (x - 0.0)**2 + (y - 0.0)**2
        unknowns['f2'] = (x - 1.0)**2 + (y - 1.0)**2

    def linearize(self, params, unknowns, resids):
        J = {}
        x = params['x']
        y = params['y']
        J['f1', 'x'] = 2*x
        J['f1', 'y'] = 2*y
        J['f2', 'x'] = 2*x - 2
        J['f2', 'y'] = 2*y - 2
        return J

if __name__ == "__main__":
    # Defining Problem & Root
    top = Problem()
    root = top.root = Group()

    # Adding independent variable values and model
    startVal = 50.0
    root.add('p1', IndepVarComp('x', startVal))
    root.add('p2', IndepVarComp('y', startVal))
    root.add('p', Circ2())

    # Making Connections
    root.connect('p1.x', 'p.x')
    root.connect('p2.y', 'p.y')

    # Configuring Driver
    top.driver = pyOptSparseDriver()
    top.driver.options['optimizer'] = 'NSGA2'

    # Setting bounds for the optimizer
    top.driver.add_desvar('p1.x', lower=-600, upper=600)
    top.driver.add_desvar('p2.y', lower=-600, upper=600)

    # Setting Objective Function(s)
    top.driver.add_objective('p.f1')
    top.driver.add_objective('p.f2')

    # # Setting up constraints
    # top.driver.add_constraint('con.c', lower=1.0)

    top.setup()
    top.run()
and I am getting the following error -
Traceback (most recent call last):
  File "/home/prasad/DivyaManglam/Python Scripts/Pareto Testing/Basic_Sphere.py", line 89, in <module>
    top.run()
  File "/usr/local/lib/python2.7/dist-packages/openmdao/core/problem.py", line 1038, in run
    self.driver.run(self)
  File "/usr/local/lib/python2.7/dist-packages/openmdao/drivers/pyoptsparse_driver.py", line 280, in run
    sol = opt(opt_prob, sens=self._gradfunc, storeHistory=self.hist_file)
  File "/usr/local/lib/python2.7/dist-packages/pyoptsparse/pyNSGA2/pyNSGA2.py", line 193, in __call__
    self.optProb.comm.bcast(-1, root=0)
  File "MPI/Comm.pyx", line 1276, in mpi4py.MPI.Comm.bcast (src/mpi4py.MPI.c:108819)
  File "MPI/msgpickle.pxi", line 620, in mpi4py.MPI.PyMPI_bcast (src/mpi4py.MPI.c:47164)
  File "MPI/msgpickle.pxi", line 143, in mpi4py.MPI.Pickle.load (src/mpi4py.MPI.c:41248)
TypeError: only length-1 arrays can be converted to Python scalars
Can you make anything of the above error, and how should I fix it?
Also, in what form will the solution of the multi-objective problem be returned? I intend to generate a Pareto-optimal front from it.
Thank you all.
It turns out this is not an OpenMDAO bug, but rather a bug in the Pyopt sparse wrapper for NSGA2. The fix wasn't terrible, and I've submitted a pull request to the pyoptsparse repo for it. In the meantime it would be easy enough to patch your local copy of pyoptsparse.
You asked how the results will be reported. The current NSGA2 wrapper doesn't do anything with the results. It just lets NSGA2 write them all out to a series of text files such as nsga2_best_pop.out that look like this:
# This file contains the data of final feasible population (if found)
# of objectives = 1, # of constraints = 0, # of real_var = 2, # of bits of bin_var = 0, constr_violation, rank, crowding_distance
-1.000000e+00 3.190310e+01 -2.413640e+02 0.000000e+00 1 1.000000e+14
-1.000000e+00 -5.309160e+02 2.449727e+02 0.000000e+00 1 0.000000e+00
-1.000000e+00 -3.995119e+02 -1.829071e+02 0.000000e+00 1 0.000000e+00
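If you want the front as arrays, one option is to parse those files with NumPy. This is a sketch that simply follows the header comment above, assuming the columns for your two-objective, two-variable problem come out as f1, f2, x, y, constr_violation, rank, crowding_distance:

import numpy as np

pop = np.loadtxt('nsga2_best_pop.out', comments='#')  # skips the '#' header lines
f1, f2 = pop[:, 0], pop[:, 1]

# rank-1 rows form the non-dominated set, i.e. the Pareto front
front = pop[pop[:, -2] == 1]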
As a side note: in your example you implemented a linearize method for the component. That method is used when you need to compute gradients, but NSGA2 is a gradient-free optimizer and won't use it at all. So unless you're also planning to test out some gradient-based methods, you can leave that method out of your components.

SciPy optimization for under-constrained system

I often have to solve nonlinear problems in which the number of variables exceeds the number of constraints (or sometimes the other way around). Usually some of the constraints or variables are redundant in a complicated way. Is there any way to solve such problems?
Most of the scipy solvers seem to assume that the number of constraints equals the number of variables, and that the Jacobian is nonsingular. leastsq works sometimes but it doesn't even try when the constraints are fewer than the number of variables. I realize that I could just run fmin on linalg.norm(F), but this is much less efficient than any method which makes use of the Jacobian.
Here is an example of a problem which demonstrates what I am talking about. It obviously has a solution, but leastsq gives an error. Of course, this example is easy to solve by hand, I just put it here to demonstrate the issue.
import numpy as np
import scipy.optimize

mat = np.random.randn(5, 7)

def F(x):
    y = np.dot(mat, x)
    return np.array([y[0]**2 + y[1]**3 + 12, y[2] + 17])

x0 = np.random.randn(7)
scipy.optimize.leastsq(F, x0)
The error message I get is:
Traceback (most recent call last):
  File "question.py", line 13, in <module>
    scipy.optimize.leastsq(F, x0)
  File "/home/dstahlke/apps/scipy/lib64/python2.7/site-packages/scipy/optimize/minpack.py", line 278, in leastsq
    raise TypeError('Improper input: N=%s must not exceed M=%s' % (n,m))
TypeError: Improper input: N=7 must not exceed M=2
I have scoured the net for an answer and have even asked on the SciPy mailing list, and got no response. For now I hacked the SciPy source so that the newton_krylov solver uses pinv(), but I don't think this is an optimal solution.
How about resizing the return array from F() to match the number of variables:
import numpy as np
import scipy.optimize

mat = np.random.randn(5, 7)

def F(x):
    y = np.dot(mat, x)
    return np.resize(np.array([y[0]**2 + y[1]**3 + 12, y[2] + 17]), 7)

while True:
    x0 = np.random.randn(7)
    r = scipy.optimize.leastsq(F, x0)
    err = F(r[0])
    norm = np.dot(err, err)
    if norm < 1e-6:
        break

print(err)
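A variant of the same trick that avoids re-weighting the residuals (np.resize tiles the two entries cyclically to length 7, so their squares are counted an unequal number of times in the objective) is to pad with zeros instead; a sketch:

import numpy as np
import scipy.optimize

mat = np.random.randn(5, 7)

def F(x):
    y = np.dot(mat, x)
    r = np.array([y[0]**2 + y[1]**3 + 12, y[2] + 17])
    return np.concatenate([r, np.zeros(5)])  # pad to len(x) so leastsq accepts it

x0 = np.random.randn(7)
sol, ier = scipy.optimize.leastsq(F, x0)
print(F(sol)[:2])  # near zero if this run converged; otherwise restart from a new x0, as in the loop above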
