This is a part of my code:
from gekko import GEKKO
for i in range(0,4,1):
    m = GEKKO()
    x = m.Var()
    e_i1 = R_n[0,0] * A_slice[i,0] + R_n[1,0] * A_slice[i,1]
    e_i2 = R_n[0,1] * A_slice[i,1] + R_n[1,1] * A_slice[i,1]
    r = get_determinante(R_n)
    d = P[0,i]
    m.Equation(e_i1*R_n[0,1]*x + e_i1*R_n[1,1]*A_slice[i,4] - e_i2*R_n[0,0]*x + e_i2*R_n[1,0]*A_slice[i,4] == r*d)
    m.solve()
    print(x)
However, it does not print the value of x. Any reasons?
Try printing the value of x as:
print(x.value)
The variable x is a Gekko variable type and the results are loaded into x.value as a numpy array. To access the value, you can print x.value or x.value[0] to get just the first element of the array.
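For example, here is a minimal, self-contained sketch of the difference (the equation is made up purely for illustration, since R_n, A_slice, and P aren't shown in the question):

from gekko import GEKKO

m = GEKKO(remote=False)    # solve locally; drop remote=False to use the public server
x = m.Var(value=1)
m.Equation(2*x + 1 == 7)   # toy equation with solution x = 3
m.solve(disp=False)

print(x.value)      # the result loaded back into the variable, e.g. [3.0]
print(x.value[0])   # just the first element, e.g. 3.0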
You are creating a new Gekko model, variables, and equations on each loop. You can also declare an x array of values with x=m.Array(m.Var,4) and solve all of the equations and variables simultaneously without a loop. This may help speed up your code. If you do this then you'll need to print each value of x as:
print([x[i].value[0] for i in range(4)])
Here is a self-contained example with array functions:
from gekko import GEKKO
m = GEKKO()
# variable array dimension
n = 3 # rows
p = 2 # columns
# create array
x = m.Array(m.Var,(n,p))
for i in range(n):
    for j in range(p):
        x[i,j].value = 2.0
        x[i,j].lower = -10.0
        x[i,j].upper = 10.0
# create parameter
y = m.Param(value = 1.0)
# sum columns
z = [None]*p
for j in range(p):
    z[j] = m.Intermediate(sum([x[i,j] for i in range(n)]))
# objective
m.Obj(sum([z[j]**2 + y for j in range(p)]))
# minimize objective
m.solve()
print(x)
I have fit a lognormal model, and I would now like to use the coefficients from that model to find the combination of variable values that will provide the greatest response. I intend to introduce some constraints later, but for now I would just like to get the optimization running. The complicating issue is that the model was a mixed methods model, so I have coefficients for each individual in the model. As a result, I need to optimize the variable values for each individual, given their individual coefficients.
My example:
import numpy as np
from scipy.optimize import minimize
def objective(x, beta):
    x1, x2 = x
    beta1, beta2 = beta
    return -1 * np.sum(np.exp(3 + x1*beta1 + x2*beta2))
# initial guesses for variables x1 and x2
n = 2
x1 = np.zeros(n)
x1[0] = 1.0
x1[1] = 2.0
x2 = np.zeros(n)
x2[0] = 3.0
x2[1] = 4.0
x0 = np.vstack((x1,x2))
# the coefficients (weights) for each of n individuals, in each variable
beta1 = np.zeros(n)
beta1[0] = 1.1
beta1[1] = 1.01
beta2 = np.zeros(n)
beta2[0] = 1.1
beta2[1] = 1.01
beta0 = np.vstack((beta1, beta2))
# show initial objective
print('Initial SSE Objective: ' + str(objective(x0, beta0))) # this works as intended
# but I'm not sure how to specify bounds given my shape
b = (1.0,5.0) #min = 1, max = 5 on any one channel for any one individual
bnds = (b, b)
# running without bounds
solution = minimize(objective, x0, method='SLSQP',
                    args=beta0)
...gives the following error:
File "<ipython-input-92-554f967ca90b>", line 2, in objective
x1, x2 = x
ValueError: too many values to unpack (expected 2)
Given that the objective function works when I pass it the args directly, is this a limitation of minimize()? Is it unable to take x with shape (2,2)?
Also, I'm having a hard time specifying the bounds correctly...
# running with bounds
solution = minimize(objective, x0, method='SLSQP',
                    args=beta0, bounds=bnds)
...gives the error:
ValueError: operands could not be broadcast together with shapes (4,) (2,) (2,)
I was able to do this with Gekko (DOCS HERE):
from gekko import GEKKO
m = GEKKO() # Initialize gekko
n = 2
# Init the coefficients for each HCP
alpha_list = np.random.normal(3, 0.1, n)
beta1_list = np.random.normal(1.01, 0.1, n)
beta2_list = np.random.normal(1.02, 0.1, n)
beta3_list = np.random.normal(1.04, 0.1, n)
# Initialize variables
x1 = [m.Var(value=1,lb=0,ub=4) for i in range(n)]
x2 = [m.Var(value=1,lb=0,ub=4) for i in range(n)]
x3 = [m.Var(value=1,lb=0,ub=4) for i in range(n)]
# Init the coefficients
alpha = [m.Const(value=alpha_list[i]) for i in range(n)]
beta1 = [m.Const(value=beta1_list[i]) for i in range(n)]
beta2 = [m.Const(value=beta2_list[i]) for i in range(n)]
beta3 = [m.Const(value=beta3_list[i]) for i in range(n)]
# Inequality constraints
m.Equation(m.sum(x1) + m.sum(x2) + m.sum(x3) <= n*10)
m.Obj(-1 * m.sum([m.exp(alpha[i] + x1[i]*beta1[i] + x2[i]*beta2[i] + x3[i]*beta3[i]) for i in range(n)])) # Objective
m.options.IMODE = 3 # Steady state optimization set to 3, change to 1 for integer
m.options.MAX_ITER = 1000
m.solve() # Solve
print('Results')
print('x1: ' + str(x1))
print('x2: ' + str(x2))
print('x3: ' + str(x3))
...but for my own learning I'd like to be able to do this with scipy too.
I'm reporting back in case someone else ever runs into the same problem. I think the limitation is that minimize passes the decision variables to the objective function as a flat 1-D array, so my (2,2) shape wouldn't work (correct me if I'm wrong, folks).
So a workaround is to pass in a flat array and then unpack it inside the objective function itself. If you hand the function the number of individuals to expect, it can reshape the arrays so that each channel gets a 1xN input to the regression equations we're trying to optimize.
import numpy as np
from scipy.optimize import minimize
def objective(x, beta, n):
    x1, x2 = x.reshape(2,n)
    beta1, beta2 = beta.reshape(2,n)
    return -1 * np.sum(np.exp(3 + x1*beta1 + x2*beta2))
# initial guesses for variables x1 and x2
n = 2
x1 = np.zeros(n)
x1[0] = 1.0
x1[1] = 2.0
x2 = np.zeros(n)
x2[0] = 3.0
x2[1] = 4.0
x0 = np.concatenate((x1,x2))
# the coefficients (weights) for each of n individuals, in each variable
beta1 = np.zeros(n)
beta1[0] = 1.1
beta1[1] = 1.01
beta2 = np.zeros(n)
beta2[0] = 1.1
beta2[1] = 1.01
beta0 = np.concatenate((beta1, beta2))
# show initial objective
print('Initial SSE Objective: ' + str(objective(x0, beta0, 2))) # this works as intended
# specifying bounds is much easier now we just have a flat array
b = (1.0,5.0) #min = 1, max = 5 on any one channel for any one individual
bnds = (b , b)*n
# running with bounds
solution = minimize(objective, x0, method='SLSQP',
                    args=(beta0,n), bounds=bnds)
x = solution.x
# show final objective
print('Final SSE Objective: ' + str(objective(x, beta0, n)))
# print solution
print('Solution')
x_sol = x.reshape(2,n)
print('x1 = ' + str(x_sol[0]))
print('x2 = ' + str(x_sol[1]))
As expected, without constraints, the minimize call just maxes out the value of each variable in the equation. My next step will be to put constraints on this function.
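For reference, here is one possible sketch of how a constraint could be added on the scipy side, mirroring the budget-style inequality (sum of all variables <= n*10) used in the Gekko version above; the limit is kept only for illustration, and objective, x0, beta0, n, and bnds are the names defined in the snippet above:

# SLSQP inequality constraints require fun(x) >= 0,
# so "sum of all x <= n*10" is written as "n*10 - sum(x) >= 0"
budget = {'type': 'ineq', 'fun': lambda x: n*10 - np.sum(x)}

solution = minimize(objective, x0, method='SLSQP',
                    args=(beta0, n), bounds=bnds,
                    constraints=[budget])
print('Constrained solution:', solution.x.reshape(2, n))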
I am trying to implement the GMRES algorithm with a right preconditioner P for solving the linear system Ax = b. The code runs without error; however, it gives me an imprecise result because the residual error is very large. For plain GMRES (without the preconditioning matrix, i.e. removing P from the algorithm), the error I get is around 1e-12 and it converges for the same matrix.
import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
from scipy.linalg import norm as norm
import scipy.sparse as sp
from scipy.sparse import diags
"""The program is to split the matrix into D-diagonal; L: strictly lower matrix; U strictly upper matrix
satisfying: A = D - L - U """
def splitMat(A):
n,m = A.shape
if (n == m):
diagval = np.diag(A)
D = diags(diagval,0).toarray()
L = (-1)*np.tril(A,-1)
U = (-1)*np.triu(A,1)
else:
print("A needs to be a square matrix")
return (L,D,U)
"""Preconditioned Matrix for symmetric successive over-relaxation (SSOR): """
def P_SSOR(A,w):
    ## Split up matrix A:
    L,D,U = splitMat(A)
    Comp1 = (D - w*U)
    Comp2 = (D - w*L)
    Comp1inv = np.linalg.inv(Comp1)
    Comp2inv = np.linalg.inv(Comp2)
    P = w*(2-w)*np.matmul(Comp1inv, np.matmul(D,Comp2inv))
    return P
"""GMRES_SSOR using right preconditioning P:
A - matrix of linear system Ax = b
x0 - initial guess
tol - tolerance
maxit - maximum iteration """
def myGMRES_SSOR(A, x0, b, tol, maxit):
    matrixSize = A.shape[0]
    e = np.zeros((maxit+1,1))
    rr = 1
    rstart = 2
    X = x0
    w = 1.9                ## in ssor
    P = P_SSOR(A,w)        ### preconditioned matrix
    ### Starting the GMRES ####
    for rs in range(0, rstart+1):
        ### first check the residual:
        if rr < tol:
            break
        else:
            r0 = (b - A.dot(x0))
            rho = norm(r0)
            e[0] = rho
            H = np.zeros((maxit+1, maxit))
            Qcol = np.zeros((matrixSize, maxit+1))
            Qcol[:, 0:1] = r0/rho
            for k in range(1, maxit+1):
                ### Arnoldi procedure ##
                Qcol[:,k] = np.matmul(np.matmul(A,P), Qcol[:,k-1])   ### This step applies P here:
                for j in range(0,k):
                    H[j,k-1] = np.dot(np.transpose(Qcol[:,k]), Qcol[:,j])
                    Qcol[:,k] = Qcol[:,k] - (np.dot(H[j,k-1], Qcol[:,j]))
                H[k,k-1] = norm(Qcol[:,k])
                Qcol[:,k] = Qcol[:,k]/H[k,k-1]
                ### QR decomposition step ###
                n = k
                Q = np.zeros((n+1, n))
                R = np.zeros((n, n))
                R[0, 0] = norm(H[0:n+2, 0])
                Q[:, 0] = H[0:n+1, 0] / R[0,0]
                for j in range(0, n+1):
                    t = H[0:n+1, j-1]
                    for i in range(0, j-1):
                        R[i, j-1] = np.dot(Q[:, i], t)
                        t = t - np.dot(R[i, j-1], Q[:, i])
                    R[j-1, j-1] = norm(t)
                    Q[:, j-1] = t / R[j-1, j-1]
                g = np.dot(np.transpose(Q), e[0:k+1])
                Y = np.dot(np.linalg.inv(R), g)
                Res = e[0:n] - np.dot(H[0:n, 0:n], Y[0:n])
                rr = norm(Res)
                #### second check on the residual ###
                if rr < tol:
                    break
            #### Updating the solution with the preconditioned matrix ####
            X = X + np.matmul(np.matmul(P, Qcol[:, 0:k]), Y)   ### This step applies P here:
    return X
######
A = np.random.rand(100,100)
x = np.random.rand(100,1)
b = np.matmul(A,x)
x0 = np.zeros((100,1))
maxit = 100
tol = 0.00001
x = myGMRES_SSOR(A,x0,b,tol,maxit)
res = b - np.matmul(A,x)
print(norm(res))
print("Solution with gmres\n", np.matmul(A,x))
print("---------------------------------------")
print("b matrix:", b)
I hope someone can help me figure this out!
I'm not sure where you got your symmetric successive over-relaxation (SSOR) code from, but it appears to be wrong. You also seem to be assuming that A is a symmetric matrix, but in your random test case it is not.
Following SSOR's Wikipedia entry, I replaced your P_SSOR function with
def P_SSOR(A,w):
    L,D,U = splitMat(A)
    # these are matrix products (not element-wise), per the Wikipedia formula
    P = 2/(2-w) * (1/w*D + L) @ np.linalg.inv(D) @ (1/w*D + L).T
    return P
and your test matrix with
A = np.random.rand(100,100)
A = A + A.T
and your code then works, with a residual error of roughly 1e-12 (about 12 digits).
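Putting the pieces together, here is a hedged sketch of the end-to-end check; it assumes the splitMat and myGMRES_SSOR functions from the question are already defined, with the replacement P_SSOR above:

import numpy as np
from scipy.linalg import norm

np.random.seed(0)            # only to make the run repeatable
A = np.random.rand(100, 100)
A = A + A.T                  # symmetric test matrix, as suggested above

x_true = np.random.rand(100, 1)
b = A @ x_true
x0 = np.zeros((100, 1))

x = myGMRES_SSOR(A, x0, b, tol=1e-5, maxit=100)
print("residual norm:", norm(b - A @ x))   # reported above to come out around 1e-12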
So I have this 3x3 G matrix (not shown here; it's irrelevant to my problem) that I created using the two variables u (a vector, x - y) and the scalar k. Here x_j = (x_1(j), x_2(j), x_3(j)) and y_j = (y_1(j), y_2(j), y_3(j)). alpha_j is a 3x3 matrix. The A matrix is a block diagonal matrix of size 3n x 3n. I am having trouble with the W matrix. How do I code a matrix of size 3n x 3n, where the (i, j)-th block is the 3x3 matrix given by alpha_i * G_ij * alpha_j? I am lost.
My alpha_j matrix also seems to be having some trouble. The loop keeps throwing the error, "only length-1 arrays can be converted to Python scalars." pls help :/
def W(x, y, k, alpha, A):
    u = x - y
    n = x.shape[0]
    W = np.zeros((3*n, 3*n))
    for i in range(0, n-1):
        for j in range(0, n-1):
            #u = -np.array([[x[i,0] - x[j,0]], [x[i,1] - x[j,1]], [0]]) ??
            W[i][j] = (alpha_j(alpha, A) * G(u, k) * alpha_j(alpha, A))
        W[i][i] = np.zeros((n, n))
    return W

def alpha_j(a, A):
    alph = np.array([[0,0,0],[0,0,0],[0,0,0]],complex)
    rho = np.random.rand(3,1)
    for i in range(0, 2):
        for j in range(0, 2):
            alph[i][j] = (rho[i] * a * A[i][j])
    return alph
#-------------------------------------------------------------------
x1 = np.array([[1], [2], [0]])
y1 = np.array([[4], [5], [0]])
# SYSTEM PARAMETERS
# incoming Wave angle
theta = 0 # can range from [0, 2pi)
# susceptibility
chi = 10 + 1j
# wavelength
lam = 0.5 # microns (values between .4-.7)
# frequency
k = (2 * np.pi)/lam # 1/microns
# volume
V_0 = (0.05)**3 # microns^3
# incoming wave vector
K = k * np.array([[0], [np.sin(theta)], [np.cos(theta)]])
# polarization vector
vecinc = np.array([[1], [0], [0]]) # (can choose any vector perpendicular to K)
# for the fixed alpha case
alpha = (V_0 * 3 * chi)/(chi + 3)
# 3 x 3 matrix
A = np.matlib.identity(3) # could be any symmetric matrix,
#-------------------------------------------------------------------
# TEST FUNCTIONS
test = G((x1-y1), k)
print(test)
w = W(x1, y1, k, alpha, A)
print(w)
Sometimes my W loop throws the error, "can't set an array element with a sequence." But I need to set each array element in this arbitrary matrix W to the 3x3 matrix created by multiplying alpha by G...
To your question of how to create a new array with a block for each element, the following should do the trick:
G = np.random.random([3,3])
result = np.zeros([9,9])
num_blocks = 3
a = np.random.random([3,3])
b = np.random.random([3,3])
for i in range(G.shape[0]):
    for j in range(G.shape[1]):
        block_result = a*G[i,j]*b
        for k in range(num_blocks):
            for l in range(num_blocks):
                result[3*i + k, 3*j + l] = block_result[k, l]
You should be able to generalize from there. I hope I've understood correctly.
EDIT: It looks like I haven't understood correctly. I'm leaving it in hopes it spurs you to an answer. The general idea is to generate ranges of indices to operate on, and then just operate on them directly. Slicing might be helpful, too.
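For instance, here is one sketch of the slicing idea, filling the (i, j) block of the big matrix with a single assignment; alpha_i, alpha_j and G_ij below are random stand-ins for whatever 3x3 blocks you actually compute:

import numpy as np

n = 4                                    # number of blocks per side (placeholder)
result = np.zeros((3*n, 3*n))

for i in range(n):
    for j in range(n):
        alpha_i = np.random.random((3, 3))   # stand-in for your alpha_i
        alpha_j = np.random.random((3, 3))   # stand-in for your alpha_j
        G_ij = np.random.random((3, 3))      # stand-in for your 3x3 G block
        block = alpha_i @ G_ij @ alpha_j     # 3x3 block via matrix products
        result[3*i:3*(i+1), 3*j:3*(j+1)] = block   # assign the whole block at once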
Ah, you asked how to create a diagonal filled with blocks. In that case:
num_diagonal_blocks = 3 # for example
for block_dim in range(num_diagonal_blocks):
    # do your block calculation for this diagonal position, e.g.
    block_result = a * G[block_dim, block_dim] * b
    for k in range(G.shape[0]):
        for l in range(G.shape[1]):
            result[3*block_dim + k, 3*block_dim + l] = block_result[k, l]
I think that's nearly it.
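As for the "can't set an array element with a sequence" error mentioned in the question: a single element of a plain 2-D numpy array holds one scalar, so a 3x3 block can't be assigned to it; the assignment target has to be a 3x3 slice. A minimal illustration (array sizes made up):

import numpy as np

W = np.zeros((6, 6))
block = np.ones((3, 3))

# W[0][0] = block    # raises "ValueError: setting an array element with a sequence."
W[0:3, 0:3] = block  # assigning to a 3x3 slice works
print(W)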
I am trying to code an RK4 method for my numerical analysis course. I am getting the error:
IndexError: index 1 is out of bounds for axis 0 with size 1
File "/Users/luciusanderson/Documents/Numerical Analysis II/RungeKutta4.py", line 62, in Runge_kutta4
k1 = h * f(t[i],y[i])
I am confused about what this actually means. I've tried googling, and the best I can tell so far is that the array I'm indexing has size 1 along axis 0, while I'm asking for index 1... but I'm not exactly sure.
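From my googling, this is the kind of minimal snippet that reproduces the same message, just to show what the error looks like in isolation (the array here is made up):

import numpy as np

t = np.zeros(1)   # axis 0 has size 1, so the only valid index is 0
print(t[1])       # IndexError: index 1 is out of bounds for axis 0 with size 1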
Here is my code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Feb 21 20:02:25 2018
#author: luciusanderson
"""
#import packages needed
from __future__ import division
import numpy as np
#==================================================================
#Runge Kutta 4th Order Approx Method
def Runge_kutta4(def_fn, a, b, N, ya):
    """
    Test the Runge Kutta 4th Order Approx method to solve
    initial value problem y'=f(t,y) with t in [a,b] and y(a) = ya.
    Step size h is computed according to input number of mesh points N
    """
    #initialization of inputs
    f = def_fn                  #intakes function to method to approximate
    h = (b-a)/N                 #creates step size based on input values of a, b, N
    t = np.arange(a, b+h, h)    #array initialized to hold mesh points t
    y = np.zeros((N+1,))        #array to hold RK4 approximated y values
    y[0] = ya                   #initial condition
    #step 1
    #here i am having trouble initializing t and y initial points
    t[0] = ya
    y[0] = ya
    # step 2
    # iterative method
    for i in range(1, N+1):
        #maybe trying this below
        # w = y[i] ?
        # or tau = t[i]
        #step 3
        k1 = h * f(t[i], y[i])
        k2 = h * f(t[i] + (h/2.0), y[i] + (k1/2.0))
        k3 = h * f(t[i] + (h/2.0), y[i] + (k2/2.0))
        k4 = h * f(t[i] + h, y[i] + k3)
        y[i + 1] = y[i] + (k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
    return (t, y)
I am performing a least squares regression as below (univariate). I would like to express the significance of the result in terms of R^2. Numpy returns a value of unscaled residual; what would be a sensible way of normalizing this?
field_clean,back_clean = rid_zeros(backscatter,field_data)
num_vals = len(field_clean)
x = field_clean[:,row:row+1]
y = 10*log10(back_clean)
A = hstack([x, ones((num_vals,1))])
soln = lstsq(A, y)
m, c = soln[0]
residues = soln[1]
print(residues)
See http://en.wikipedia.org/wiki/Coefficient_of_determination
Your R2 value =
1 - residual / sum((y - y.mean())**2)
which is equivalent to
1 - residual / (n * y.var())
As an example:
import numpy as np
# Make some data...
n = 10
x = np.arange(n)
y = 3 * x + 5 + np.random.random(n)
# Note that polyfit is an easier way to do this...
# It would just be "model, resid = np.polyfit(x,y,1,full=True)[:2]"
A = np.vstack((x, np.ones(n))).T
model, resid = np.linalg.lstsq(A, y)[:2]
r2 = 1 - resid / (y.size * y.var())
print(r2)