I have a code for doing constrained optimization in Matlab:
this one is the system of equations obtained by differentiating the objective function according to the constraints for this case:
function f=funcobj(X);
f=[(2-3*X(1)^2+X(3)*(1-X(1))+X(4)*(5+X(1)/5));
   (3-4*X(2)+3*X(3)+2*X(4));
   (X(1)+3*X(2)-X(1).^2/2-5.5);
   (5*X(1)+2*X(2)+X(1).^2/10-10)];
end
and this one is the Jacobian function
function [f0,jac]=jacobian(x);
h = 1.0e-4;
n = length(x);
jac = zeros(n,n);
f0 = funcobj(x);
for zz=1:n;
    temp = x(zz);
    x(zz)= temp + h;
    f1 = funcobj(x);
    x(zz)= temp;
    jac(:,zz) = (f1 - f0)/h;
    disp((f1 - f0)/h);
end
% disp(jac)
and this one is the main code
clc;close all;clear all
%initial
X(1,:)= [1 1 1 1]';
niter=30;tol=1e-6;
for ii=1:niter-1
    disp(X(ii,:));
    [f,dp]=jacobian(X(ii,:));
    dX=inv(dp)*f;
    X(ii+1,:)=X(ii,:)'-dX;
    fprintf('Iteration=%i Solution=%.4f \n',ii,X(ii+1))
    if abs(X(ii+1,:)-X(ii,:))<tol;
        r=X(ii+1,:);
        disp('The Solution is convergent')
        break
    end
end
x=r(1);y=r(2);lambda_1=r(3);lambda_2=r(4);
f = (2*x)+(3*y)-(x).^3-2*(y.^2);
disp('Case 1')
disp(['x=' num2str(x) ', y = ' num2str(y),',f = ' num2str(f)])
disp(['lambda_1 = ' num2str(lambda_1), ', lambda_2 = ' num2str(lambda_2)])
When I try to convert it to Python, I'm still confused about the X array and how to rewrite the Jacobian in Python. This is my attempt:
import numpy as np

def funcobj(z):
    f = np.array([[2-3*z[0]**2 + z[2]*(1-z[0])+z[3]*(5+z[0]/5)],
                  [(-4*z[1]+3*z[2]+2*z[3])],
                  [z[0]+3*z[1]-(z[2]**2)/2-5.5],
                  [5*z[0]+2*z[1]+(z[0]**2)/10-10]])
    print(f)
    return f

def jacobian(X):
    h = 1.0e-4
    n = len(X)
    print(n)
    jac = np.zeros([4,4])
    f0 = funcobj(X)
    for i in range(0,n):
        temp = X[i]
        X[i] = temp + h
        f1 = funcobj(X)
        X[i] = temp
        #print((f1-f0)/h)
        jac[0,i] = (f1-f0)/h
    return (f0, jac)

X=np.array([[1],[1],[1],[1]])
niter=30
tol=1e-6
for i in range(0,niter):
    jacobian(X[:,i])
    if abs(X[:,i]-X[:,i-1])<tol:
        r=X[:,i]
        print('The Solution is convergent')
        break
How can I fix this code? I still get errors in Python.
Your funcobj returns a np.ndarray with shape (n,1) instead of (n,). Note that, contrary to MATLAB, in numpy the former corresponds to a matrix while the latter corresponds to a vector. Next, in the line jac[0, i] = (f1-f0)/h you are trying to assign a np.ndarray to a single matrix element; it should be jac[:, i] instead. Note also that range starts at 0 by default, since Python uses 0-based indexing.
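For example, the shape difference is easy to see by printing it (just an illustrative snippet, not part of the fix itself):

import numpy as np
print(np.array([[1], [2], [3]]).shape)   # (3, 1) -- a column matrix, like in MATLAB
print(np.array([1, 2, 3]).shape)         # (3,)   -- a plain 1-D vector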
In code:
def funcobj(z):
    f1 = 2-3*z[0]**2 + z[2]*(1-z[0])+z[3]*(5+z[0]/5)
    f2 = (-4*z[1]+3*z[2]+2*z[3])
    f3 = z[0]+3*z[1]-(z[2]**2)/2-5.5
    f4 = 5*z[0]+2*z[1]+(z[0]**2)/10-10
    return np.array((f1, f2, f3, f4))

def jacobian(X):
    h = 1.0e-4
    n = len(X)
    jac = np.zeros([4,4])
    f0 = funcobj(X)
    for i in range(n):
        temp = X[i]
        X[i] = temp + h
        f1 = funcobj(X)
        X[i] = temp
        #print((f1-f0)/h)
        jac[:,i] = (f1-f0)/h
    return (f0, jac)

x0 = np.ones(4)
# works as expected
print(jacobian(x0))
Now it's your turn to go on from here and implement the main algorithm in python.
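For reference, here is a minimal sketch of how the Newton iteration from the MATLAB main script could look on top of these two functions (the stopping test and variable names are just illustrative, and np.linalg.solve is used instead of explicitly inverting the Jacobian):

import numpy as np

X = np.ones(4)            # initial guess, as in the MATLAB code
niter, tol = 30, 1e-6
for it in range(niter):
    f, jac = jacobian(X)
    dX = np.linalg.solve(jac, f)      # solve jac*dX = f rather than forming inv(jac)
    X_new = X - dX
    if np.all(np.abs(X_new - X) < tol):
        X = X_new
        print('The solution is convergent after', it + 1, 'iterations')
        break
    X = X_new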
I have an ordinary differential equation like this:
DiffEq = Eq(-ℏ*ℏ*diff(Ψ,x,2)/(2*m) + m*w*w*(x*x)*Ψ/2 - E*Ψ , 0)
I want to perform a change of variables:
sp.Eq(u , x*sqrt(m*w/ℏ))
sp.Eq(Ψ, H*exp(-u*u/2))
How can I do this with sympy?
Use the following function:
def variable_change(ODE, dependent_var,
                    independent_var,
                    new_dependent_var=None,
                    new_independent_var=None,
                    dependent_var_relation=None,
                    independent_var_relation=None,
                    order=2):
    if new_dependent_var == None:
        new_dependent_var = dependent_var
    if new_independent_var == None:
        new_independent_var = independent_var
    # change of the independent variable
    if new_independent_var != independent_var:
        for i in range(order, -1, -1):
            # replace the i-th derivative
            a = D(dependent_var, independent_var, i)
            ξ = Function("ξ")(independent_var)
            b = D(dependent_var.subs(independent_var, ξ), independent_var, i)
            rel = solve(independent_var_relation, new_independent_var)[0]
            for j in range(order, 0, -1):
                b = b.subs(D(ξ, independent_var, j), D(rel, independent_var, j))
            b = b.subs(ξ, new_independent_var)
            rel = solve(independent_var_relation, independent_var)[0]
            b = b.subs(independent_var, rel)
            ODE = ODE.subs(a, b)
        ODE = ODE.subs(independent_var, rel)
    # change of the dependent variable
    if new_dependent_var != dependent_var:
        ODE = (ODE.subs(dependent_var.subs(independent_var, new_independent_var),
                        (solve(dependent_var_relation, dependent_var)[0])))
        ODE = ODE.doit().expand()
    return ODE.simplify()
For the example posted:
from sympy import *
from sympy import diff as D

E, ℏ, w, m, x, u = symbols("E, ℏ, w, m, x, u")
Ψ, H = map(Function, ["Ψ", "H"])
Ψ, H = Ψ(x), H(u)

DiffEq = Eq(-ℏ*ℏ*D(Ψ,x,2)/(2*m) + m*w*w*(x*x)*Ψ/2 - E*Ψ, 0)
display(DiffEq)
display(Eq(u, x*sqrt(m*w/ℏ)))
display(Eq(Ψ, H*exp(-u*u/2)))

newODE = variable_change(ODE=DiffEq,
                         independent_var=x,
                         new_independent_var=u,
                         independent_var_relation=Eq(u, x*sqrt(m*w/ℏ)),
                         dependent_var=Ψ,
                         new_dependent_var=H,
                         dependent_var_relation=Eq(Ψ, H*exp(-u*u/2)),
                         order=2)
display(newODE)
Under this substitution, the resulting differential equation is:
Eq((-E*H + u*w*ℏ*D(H, u) + w*ℏ*H/2 - w*ℏ*D(H, (u, 2))/2)*exp(-u**2/2), 0)
If anyone is wondering how to do this in CoCalc notebooks (or anywhere else where Sage and Python can be mixed): here I define essentially the same variables and functions as in the accepted answer, and after the substitution the result is converted back to Sage:
# Sage objects
var("E w m x u")
var("h_bar", latex_name = r'\hbar')
Ψ = function("Ψ")(x)
H = function('H')(u)

DiffEq = (-h_bar*h_bar*Ψ.diff(x, 2)/(2*m) + m*w*w*(x*x)*Ψ/2 - E*Ψ == 0)
display(DiffEq)
display(u == x*sqrt(m*w/h_bar))
display(Ψ == H*exp(-u*u/2))

# Function is purely sympy
newODE = variable_change(
    ODE = DiffEq._sympy_(),
    independent_var = x._sympy_(),
    new_independent_var = u._sympy_(),
    independent_var_relation = (u == x*sqrt(m*w/h_bar))._sympy_(),
    dependent_var = Ψ._sympy_(),
    new_dependent_var = H._sympy_(),
    dependent_var_relation = (Ψ == H*exp(-u*u/2))._sympy_(),
    order = 2
)
display(newODE._sage_())
Note that the only difference is that everything is converted to SymPy when used as an argument to the function from the accepted answer (it will probably break if you don't!). Once you have called _sympy_() on a variable or expression, every SymPy object gains a _sage_() method to convert it back.
The result given was:
# Sage object again
1/2*(2*h_bar*u*w*diff(H(u), u) + h_bar*w*H(u) - h_bar*w*diff(H(u), u, u) - 2*E*H(u))*e^(-1/2*u^2) == 0
Which is just OP's result, but Sage handles operands a little bit differently.
Note: in order to avoid overriding things in Sage after importing everything from SymPy, you may want to import only diff as D, Function and solve from the main library. You might also want to rename SymPy's solve to something else to avoid shadowing Sage's own sage.symbolic.relation.solve.
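For instance, a selective import along those lines (the alias name is only a suggestion) could look like:

from sympy import diff as D, Function
from sympy import solve as sympy_solve   # keep Sage's own solve untouched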
I use the SymPy solve() function to solve a large number of equations. All variables in the equations are defined as symbols. Variables can start with the letter P or F. I use solve() to express one specific P variable (the one I observe) in terms of F variables only, i.e. solve() substitutes all other P variables with F variables. The sum of the coefficients of the F variables is ideally 1 or almost 1 (e.g. 0.99).
This produces good results up to a certain point, where both the number of equations and their length become quite large. There the SymPy solve() function starts to give me wrong results: the sum of the coefficients becomes negative (e.g. -7,...). It looks like solve() has problems substituting and carrying over all the variables and their coefficients.
Is there a way to correct this problem?
Dictionary of equations under link: https://drive.google.com/open?id=1VBQucrDU-o1diCd6i4rR3MlRh95qycmK
import json
from sympy import Symbol, Add, Eq, solve

# Get data
# data from link above
with open("C:\\Test\\dict.json") as f:
    equations = json.load(f)

comp = []
expressions = []
for p, equation_components in equations.items():
    p = Symbol(p)
    comp.append(p)
    expression = []
    for name, multiplier in equation_components.items():
        if type(multiplier) == float or type(multiplier) == int:
            expression.append(Symbol(name) * multiplier)
        else:
            expression.append(Symbol(name) * Symbol(multiplier))
    expressions.append(Eq(p, Add(*expression)))

# Solution for variable P137807
print("Solving...")

# Works for slice :364 !!!!!
solutions = solve(expressions[:364], comp[:364], simplify=False, rational=False)
# Gives wrong results for slice :366 and above !!!!!
# solutions = solve(expressions[:366], comp[:366], simplify=False, rational=False)

vm_symbol = Symbol("P137807")
solution_1 = solutions[vm_symbol]
print("\n")
print("Solution_1:")
print(solution_1)
print("\n")

# Sum of coefficients
list_sum = []
for i in solution_1.args:
    if str(i.args[1]) != "ANaN":
        list_sum.append(i.args[0])
coeff_sum = sum(list_sum)
print("Sum:")
print(coeff_sum)
...
I just wanted to mark the problem as solved and provide a reference to the solution. Please look at the SymPy issue "numerical instability when solving n=385 linear equations with Float coefficients" (#17136).
The solution that worked for me was to use the following solver instead of the SymPy solve() function:
def ssolve(eqs, syms):
    """return the solution of linear system of equations
    with symbolic coefficients and a unique solution.

    Examples
    ========

    >>> eqs=[x-1,x+2*y-z-2,x+z+w-6,2*y+z+x-2]
    >>> v=[x,y,z,w]
    >>> ssolve(eqs, v)
    {x: 1, z: 0, w: 5, y: 1/2}
    """
    from sympy import Add, ordered   # used below; not imported in the question's script
    from sympy.solvers.solveset import linear_coeffs
    v = list(syms)
    N = len(v)
    # convert equations to coefficient dictionaries
    print('checking linearity')
    d = []
    v0 = v + [0]
    for e in [i.rewrite(Add) for i in eqs]:
        co = linear_coeffs(e, *v)
        di = dict([(i, c) for i, c in zip(v0, co) if c or not i])
        d.append(di)
    print('forward solving')
    sol = {}
    impl = {}
    done = False
    while not done:
        # check for those that are done
        more = set([i for i, di in enumerate(d) if len(di) == 2])
        did = 0
        while more:
            di = d[more.pop()]
            c = di.pop(0)
            x = list(di)[0]
            a = di.pop(x)
            K = sol[x] = -c/a
            v.remove(x)
            changed = True
            did += 1
            # update everyone else
            for j, dj in enumerate(d):
                if x not in dj:
                    continue
                dj[0] += dj.pop(x)*K
                if len(dj) == 2:
                    more.add(j)
        if did: print('found', did, 'definitions')
        # solve implicitly for the next variable
        dcan = [i for i in d if len(i) > 2]
        if not dcan:
            done = True
        else:
            # take shortest first
            di = next(ordered(dcan, lambda i: len(i)))
            done = False
            x = next(ordered(i for i in di if i))
            c = di.pop(x)
            for k in di:
                di[k] /= -c
            impl[x] = di.copy()
            di.clear()
            v.remove(x)
            # update everyone else
            for j, dj in enumerate(d):
                if x not in dj:
                    continue
                done = False
                c = dj.pop(x)
                for k in impl[x]:
                    dj[k] = dj.get(k, 0) + impl[x][k]*c
    have = set(sol)
    sol[0] = 1
    while N - len(have):
        print(N - len(have), 'to backsub')
        for k in impl:
            if impl[k] and not set(impl[k]) - have - {0}:
                sol[k] = sum(impl[k][vi]*sol[vi] for vi in impl[k])
                impl[k].clear()
                have.add(k)
    sol.pop(0)
    return sol
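As a rough sketch of how this replaces the solve() call in the script above (using the same expressions and comp lists built from the JSON data):

solutions = ssolve(expressions, comp)
solution_1 = solutions[Symbol("P137807")]
print(solution_1)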
I have a class that takes in lists of 1's and 0's and performs GF(2) finite-field arithmetic operations. It used to work until I tried to make it take the input in polynomial format. As for how the finite-field arithmetic will be done after fixing the regex issue, I was thinking about overloading the operators.
The actual code in parsePolyToListInput(input) works when it is outside the class. The problem seems to be in the regex, which errors that it will only take in a string (this makes sense), but it does not seem to initialize with self.expr as a parameter (that's a problem). The #staticmethod just before the initialization was an attempt to salvage the unbound error as the polynomial was passed in, but this is apparently completely wrong. Just to save you time if you decide to look at any of the arithmetic operations: modular inverse does not work (it seems to be due to the formatting issue after every iteration of the while loop for division in that function and what the return type is):
import re

class gf2poly:
    #binary arithmetic on polynomials
    ##staticmethod
    def __init__(self,expr):
        self.expr = expr
        #self.expr = [int(i) for i in expr]
        self.expr = gf2poly.parsePolyToListInput(self.expr)
    def convert(self): #to clarify the input if necessary
        convertToString = str(self.expr)
        print "expression is %s"%(convertToString)
    def id(self): #returns modulus 2 (1,0,0,1,1,....) for input lists
        return [int(self.expr[i])%2 for i in range(len(self.expr))]
    def listToInt(self): #converts list to integer for later use
        result = gf2poly.id(self)
        return int(''.join(map(str,result)))
    def prepBinary(a,b): #converts to base 2 and orders min and max for use
        a = gf2poly.listToInt(a); b = gf2poly.listToInt(b)
        bina = int(str(a),2); binb = int(str(b),2)
        a = min(bina,binb); b = max(bina,binb);
        return a,b
    #staticmethod
    def outFormat(raw):
        raw = str(raw[::-1]); g = [] #reverse binary string for enumeration
        [g.append(i) for i,c in enumerate(raw) if c == '1']
        processed = "x**"+' + x**'.join(map(str, g[::-1]))
        if len(g) == 0: return 0 #return 0 if list empty
        return processed #returns result in gf(2) polynomial form
    def parsePolyToListInput(poly):
        c = [int(i.group(0)) for i in re.finditer(r'\d+', poly)] #re.finditer returns an iterator
        #m = max(c)
        return [1 if x in c else 0 for x in xrange(max(c), -1, -1)]
        #return d
    def add(self,other): #accepts 2 lists as parameters
        a = gf2poly.listToInt(self); b = gf2poly.listToInt(other)
        bina = int(str(a),2); binb = int(str(b),2)
        m = bina^binb; z = "{0:b}".format(m)
        return z #returns binary string
    def subtract(self,other): #basically same as add() but built differently
        result = [self.expr[i] ^ other.expr[i] for i in range(len(max(self.expr,other.expr)))]
        return int(''.join(map(str,result)))
    def multiply(a,b): #a,b are lists like (1,0,1,0,0,1,....)
        a,b = gf2poly.prepBinary(a,b)
        g = []; bitsa = "{0:b}".format(a)
        [g.append((b<<i)*int(bit)) for i,bit in enumerate(bitsa)]
        m = reduce(lambda x,y: x^y,g); z = "{0:b}".format(m)
        return z #returns product of 2 polynomials in gf2
    def divide(a,b): #a,b are lists like (1,0,1,0,0,1,....)
        a,b = gf2poly.prepBinary(a,b)
        bitsa = "{0:b}".format(a); bitsb = "{0:b}".format(b)
        difflen = len(str(bitsb)) - len(str(bitsa))
        c = a<<difflen; q=0
        while difflen >= 0 and b != 0: #a is divisor, b is dividend, b/a
            q+=1<<difflen; b = b^c # b/a because of sorting in prep
            lendif = abs(len(str(bin(b))) - len(str(bin(c))))
            c = c>>lendif; difflen -= lendif
        r = "{0:b}".format(b); q = "{0:b}".format(q)
        return r,q #returns r remainder and q quotient in gf2 division
    def remainder(a,b): #separate function for clarity when calling
        r = gf2poly.divide(a,b)[0]; r = int(str(r),2)
        return "{0:b}".format(r)
    def quotient(a,b): #separate function for clarity when calling
        q = gf2poly.divide(a,b)[1]; q = int(str(q),2)
        return "{0:b}".format(q)
    def extendedEuclideanGF2(a,b): # extended euclidean. a,b are GF(2) polynomials in list form
        inita,initb=a,b; x,prevx=0,1; y,prevy = 1,0
        while sum(b) != 0:
            q = gf2poly.quotient(a,b);
            q = list(q); q = [int(x) for x in q]
            #q = list(q);
            #q = tuple([int(i) for i in q])
            q = gf2poly(q)
            a,b = b,gf2poly.remainder(a,b);
            #a = map(list, a);
            #b = [list(x) for x in a];
            #a = [int(x) for x in a]; b = [int(x) for x in b];
            b = list(b); b = [int(x) for x in b]
            #b = list(b);
            #b = tuple([int(i) for i in b])
            b = gf2poly(b)
            #x,prevx = (prevx-q*x, x);
            #y,prevy=(prevy-q*y, y)
            print "types ",type(q),type(a),type(b)
            #q=a//b; a,b = b,a%b; x,prevx = (prevx-q*x, x); y,prevy=(prevy-q*y, y)
            #print("%d * %d + %d * %d = %d" % (inita,prevx,initb,prevy,a))
        return a,prevx,prevy # returns gcd of (a,b), and factors s and t
    def modular_inverse(a,mod): # where a,mod are GF(2) polynomials in list form
        gcd,s,t = gf2poly.extendedEuclideanGF2(a,mod); mi = gf2poly.remainder(s,mod)
        #gcd,s,t = ext_euc_alg_i(a,mod); mi = s%mod
        if gcd !=1: return False
        #print ("%d * %d mod %d = 1"%(a,mi,mod))
        return mi # returns modular inverse of a,mod
I usually test it with this input:
a = "x**14 + x**1 + x**0"
p1 = gf2poly(a)
b = "x**6 + x**2 + x**1"
p2 = gf2poly(b)
The first thing you might notice about my code is that it's not very good. There are 2 reasons for that:
1) I wrote it so that the 1st version could do work in the finite field GF(2), and output in polynomial format. Then the next versions were supposed to be able to take polynomial inputs, and also perform the crucial 'modular inverse' function which is not working as planned (this means it's actually not working at all).
2) I'm teaching myself Python (I'm actually teaching myself programming overall), so any constructive criticism from pro Python programmers is welcome as I'm trying to break myself of beginner habits as quickly as possible.
EDIT:
Maybe some more of the code I've been testing with will help clarify what works and what doesn't:
t1 = [1,1,1]; t2 = [1,0,1]; t3 = [1,1]; t4 = [1, 0, 1, 1, 1, 1, 1]
t5 = [1,1,1,1]; t6 = [1,1,0,1]; t7 = [1,0,1,1,0]
f1 = gf2poly(t1); f2 = gf2poly(t2); f3 = gf2poly(t3); f4 = gf2poly(t4)
f5 = gf2poly(t5);f6 = gf2poly(t6);f7 = gf2poly(t7)
##print "subtract: ",a.subtract(b)
##print "add: ",a.add(b)
##print "multiply: ",gf2poly.multiply(f1,f3)
##print "multiply: ",gf2poly.multiply(f1,f2)
##print "multiply: ",gf2poly.multiply(f3,f4)
##print "degree a: ",a.degree()
##print "degree c: ",c.degree()
##print "divide: ",gf2poly.divide(f1,b)
##print "divide: ",gf2poly.divide(f4,a)
##print "divide: ",gf2poly.divide(f4,f2)
##print "divide: ",gf2poly.divide(f2,a)
##print "***********************************"
##print "quotient: ",gf2poly.quotient(f2,f5)
##print "remainder: ",gf2poly.remainder(f2,f5)
##testq = gf2poly.quotient(f4,f2)
##testr = gf2poly.remainder(f4,f2)
##print "quotient: ",testq,type(testq)
##print "remainder: ",testr,type(testr)
##print "***********************************"
##print "outFormat testp: ",gf2poly.outFormat(testq)
##print "outFormat testr: ",gf2poly.outFormat(testr)
##print "***********************************"
#print "gf2poly.modular_inverse(): ",gf2poly.modular_inverse(f2,f3)
print "p1 ",p1 #,type(f2),type(f3)
#print "parsePolyToListInput ",gf2poly.parsePolyToListInput(a)
Part of your problem is that you haven't declared self as an argument for parsePolyToListInput. When you call a method, the instance you call it on is implicitly bound as the first argument. Naming the first argument self is a convention, not a strict requirement - the instance is being bound to poly, which you then try to run a regexp over.
It looks to me like there's some confusion in your design here about what is behavior of individual instances of the class and what is class-level or module-level behavior. In Python, it's perfectly acceptable to leave something that doesn't take an instance of a class as a parameter defined as a module-level function rather than shoehorning it in awkwardly. parsePolyToListInput might be one such function.
Your add implementation, similarly, has a comment saying it "accepts 2 lists as parameters". In fact, it's going to get a gf2poly instance as its first argument - this is probably right if you're planning to do operator overloading, but it means the second argument should also be a gf2poly instance as well.
EDIT:
Yeah, your example code shows a breakdown between class behavior and instance behavior. Either your multiply call should look something like this:
print "multiply: ",f1.multiply(f3)
Or multiply shouldn't be a method at all:
gfpoly.py:
def multiply(f1, f2):
    a,b = prepBinary(f1, f2)
    g = []; bitsa = "{0:b}".format(a)
    [g.append((b<<i)*int(bit)) for i,bit in enumerate(bitsa)]
    m = reduce(lambda x,y: x^y,g); z = "{0:b}".format(m)
    return z #returns product of 2 polynomials in gf2
That latter approach is, for instance, how the standard math library does things.
The advantage of defining a multiplication method is that you could name it appropriately (http://docs.python.org/2/reference/datamodel.html#special-method-names) and use it with the * operator:
print "multiply: ",f1 *f3
My code is working right except when I enter r1 into the function below:
def u(Substrate):
    return((u_max*ys[:,0])/(Ks+ys[:,0]))

biomass = ys[:,1]
u = u(ys[:,0])

def r1(u,biomass):
    r1 = u*biomass*YieldCO2_1
    return r1

r1 = r1(u,biomass)

def F(y,t):
    Ptot = 710
    Vgas = 2
    D = 0.00826*(273.15+Temp)
    Cstar_CO2 = KH_CO2 * y[2]
    Cstar_CH4 = KH_CH4 * y[3]
    TG_CO2 = KLa_CO2*(Cstar_CO2-y[0])
    TG_CH4 = KLa_CH4*(Cstar_CH4-y[1])
    Q_CO2 = -D*V*TG_CO2
    Q_CH4 = -D*V*TG_CH4
    Qgas = (Q_CO2+Q_CH4)+Q
    F = np.zeros(4)
    F[0] = Q/V * (CO2_To-y[0]) + TG_CO2 + r1
    F[1] = Q/V * (CH4_Do-y[1]) + TG_CH4
    F[2] = -Ptot*D*TG_CO2*(V/Vgas)-y[2]*(Qgas/Vgas)
    F[3] = -Ptot*D*TG_CH4*(V/Vgas)-y[3]*(Qgas/Vgas)
    return F

yinit = np.array([4,3,250,200])
ts = np.arange(0,4,0.4)
y = odeint(F,yinit,ts)
When r1 appears in equation F[0], I get the following error:
F[0] = Q/V * (CO2_To-y[0]) + TG_CO2 + r1
ValueError: setting an array element with a sequence.
odepack.error: Error occurred while calling the Python function named F
However, when I run the function without the r1 array, there is no error, so I assume something is wrong with putting the r1 array into the function.
If anyone could provide input on my problem, I would appreciate it.
F[0] = expression expects expression to be a number here, not an array. However, Q/V * (CO2_To-y[0]) + TG_CO2 + r1 is an array with the dimensions of r1. To see this, try evaluating the following line:
>>> 1 + numpy.array([1,2])
array([2, 3])
To get rid of the exception you should convert this expression to a number somehow, depending on what you are trying to achieve.
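For example, a minimal sketch of the difference (the r1 values here are made up; reducing the array to a single element or a summary value is just one possible fix, depending on what is meant physically):

import numpy as np

F = np.zeros(4)
r1 = np.array([0.1, 0.2, 0.3])       # an array, as in the question
# F[0] = 1.0 + r1                    # ValueError: setting an array element with a sequence
F[0] = 1.0 + r1[0]                   # a single element is a number, so this works
F[1] = 1.0 + r1.mean()               # or some scalar summary of the array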
How do I convert the following MATLAB code to Python? Here is my attempt, but it doesn't quite produce the same results. For example, f always seems to be positive in the MATLAB code, but in my Python code f also takes negative values.
Any ideas how to fix the program?
Mostly, I am concerned about these:
MATLAB:
for k = 1 : nx
    j = k+2;
Python:
for k in range(1,nx+1):
    j = k+2
MATLAB:
[V,D] = eig(A, B);
DD = diag(D);
keep_idxs = find( ~isinf(DD) );
D = diag( DD(keep_idxs) );
V = V(:, keep_idxs);
[lambda, idx] = min(diag(D));
f = V(:,idx);
Python:
w,vr = scipy.linalg.decomp.eig(A,B)
w = w.real
vr = vr.real
w = w[2:-1-2]
lambda_ = w.min()
idx = w.argmin()
f = vr[:,idx]
MATLAB:
f = f(3:end-2);
[nf, nf_idx] = max(abs(f)); % L_infty norm
n2 = f(nf_idx); % normalize sign away, too
f = f ./ n2;
Python:
f = f[2:-1-1]
nf = max(np.absolute(f))
nf_idx = np.absolute(f).argmax()
nf_idx = np.ma.argmax(f)
n2 = f[nf_idx]
f = f/n2
MATLAB:
xx = -kappa:h:kappa;
Python:
xx = np.arange(-kappa, kappa+h, h)
Are those equivalent to each other? If they are, then why don't they produce exactly the same results?
I don't know about MATLAB, but in Python the code
for k in range(1,nx+1):
    j = k+2
is the same as
j = nx+2
This isn't your main problem, but it's worrying.
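One other difference worth checking (a sketch only, assuming A and B are the same matrices as in the MATLAB code): the MATLAB version drops the infinite generalized eigenvalues with find(~isinf(DD)) before taking the minimum, whereas the Python attempt just slices w with w[2:-1-2], so something along these lines would be closer:

import numpy as np
from scipy import linalg

w, vr = linalg.eig(A, B)
keep = np.isfinite(w)            # analogue of MATLAB's ~isinf(DD)
w = w[keep].real
vr = vr[:, keep].real
idx = w.argmin()
lambda_ = w[idx]
f = vr[:, idx]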