I want to compute the double integral of a function, but I get different results from scipy.integrate's dblquad and from Matlab's dblquad. A Python implementation of the function I want to integrate is:
### Python implementation ###
import numpy as np
from scipy.integrate import dblquad
def InitialCondition(x_b, y_b, m10, m20, N0):
    IC = np.zeros((len(x_b) - 1, len(y_b) - 1))
    for i in range(len(x_b) - 1):
        for j in range(len(y_b) - 1):
            IC[i, j], abserr = dblquad(ExponenIC, x_b[i], x_b[i + 1],
                                       lambda x: y_b[j], lambda x: y_b[j + 1],
                                       args=(m10, m20, N0),
                                       epsabs=1.49e-15, epsrel=1.49e-15)
    return IC

def ExponenIC(x, y, m10, m20, N0):
    retVal = (16 * N0) / (m10 * m20) * (x / m10) * (y / m20) * np.exp(-2 * (x / m10) - 2 * (y / m20))
    return retVal

if __name__ == '__main__':
    x_min, x_max = 0.0004, 20.0676
    x_b = np.exp(np.linspace(np.log(x_min), np.log(x_max), 4))
    y_b = np.copy(x_b)
    m10, m20, N0 = 0.04, 0.04, 1
    print(InitialCondition(x_b, y_b, m10, m20, N0))
But if I repeat the same computation in Matlab, with an equivalent implementation and the same input, as shown below:
%%%Matlab equivalent%%%
function IC = test(x_b, y_b, m10, m20, N0)
    for i = 1:length(x_b)-1
        for j = 1:length(y_b)-1
            IC(i, j) = dblquad(@ExponenIC, x_b(i), x_b(i+1), y_b(j), y_b(j+1), 1e-6, @quad, m10, m20, N0);
        end
    end
return

function retVal = ExponenIC(x, y, m10, m20, N0)
    retVal = (16 * N0) / (m10*m20) * (x / m10) .* (y / m20) .* exp(-2*(x/m10) - 2 * (y/m20));
return
% for calling
x_min = 0.0004;
x_max = 20.0676;
x_b = exp(linspace(log(x_min), log(x_max), 4));
y_b = x_b;
m10 = 0.04;
m20 = 0.04;
N0 = 1;
I = test(x_b, y_b, m10, m20, N0)
Scipy dblquad returns:
[[ 2.84900512e-02 1.40266599e-01 7.34019842e-12]
[ 1.40266599e-01 6.90582083e-01 3.61383932e-11]
[ 7.28723691e-12 3.58776449e-11 1.89113430e-21]]
and Matlab dblquad returns:
IC =
28.4901e-003 140.2666e-003 144.9328e-012
140.2666e-003 690.5820e-003 690.9716e-012
144.9328e-012 690.9716e-012 737.2926e-021
I have tried changing the tolerances and the order of the inputs, but the two solutions always remain different. I cannot tell which one is accurate, and I would like to have the correct result in Python. Is this a bug in one of the dblquad solvers, or in my code somewhere?
At a glance, the repetition of 690 in the Matlab output (in places where Python has different results) casts doubt on Matlab's accuracy.
One of the problems with the (deprecated) Matlab function dblquad is that the tolerance you pass to it is absolute (to my understanding). This is why, when you specify 1e-6, the integrals of order 1e-11 come out wrong. If you replace it with 1e-12, the computation takes much longer (because the larger integrals must now be computed to great precision), yet the smallest integral, of size 1e-21, is still wrong.
Hence, you should use a routine that supports relative error tolerance, such as integral2.
Replacing the Matlab line containing dblquad with
IC(i, j) = integral2(@(x,y) ExponenIC(x, y, m10, m20, N0), x_b(i), x_b(i+1), y_b(j), y_b(j+1), 'RelTol', 1e-12);
I get
0.0284900512006556 0.14026659933722 7.10653215130477e-12
0.14026659933722 0.690582082532588 3.51109000906259e-11
7.10653215130476e-12 3.5110900090626e-11 1.78512164747727e-21
which roughly agrees with the Python output. Still, there remains a substantial difference. To settle the matter definitively, I calculated the integrals analytically. The exact result is
0.0284900512006717 0.140266599337199 7.28723691243472e-12
0.140266599337199 0.690582082532677 3.58776449039036e-11
7.28723691243472e-12 3.58776449039036e-11 1.86394265998016e-21
Neither package achieved the desired accuracy, but Python/scipy was much closer.
For completeness, the loop that outputs the analytic solution:
function IC = test(x_b, y_b, m10, m20, N0)
    F = @(x,a) -0.25*exp(-2*x/a)*(2*x+a);
    for i = 1:length(x_b)-1
        for j = 1:length(y_b)-1
            IC(i,j) = (16 * N0) / (m10*m20) * (F(x_b(i+1),m10)-F(x_b(i),m10)) * (F(y_b(j+1),m20)-F(y_b(j),m20));
        end
    end
end
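For reference, the same analytic check can be written compactly in Python as well (a sketch; analytic_IC is my own name, and the np.diff/np.outer vectorization is equivalent to the Matlab loop above):

import numpy as np

def analytic_IC(x_b, y_b, m10, m20, N0):
    # antiderivative of (x/a)*exp(-2*x/a) is F(x, a) = -0.25*exp(-2*x/a)*(2*x + a)
    F = lambda x, a: -0.25 * np.exp(-2 * x / a) * (2 * x + a)
    Fx = np.diff(F(x_b, m10))   # definite integral over each x-cell
    Fy = np.diff(F(y_b, m20))   # definite integral over each y-cell
    return (16 * N0) / (m10 * m20) * np.outer(Fx, Fy)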
I am running scipy.optimize.minimize, trying to maximize the likelihood of left-truncated data on a Gompertz distribution. Since the data is left-truncated at 1, I get this log-likelihood:
# for a single point x_i, the left-truncated log-likelihood is:
# ln(tau) + tau*(ln(theta) - ln(x_i)) - (theta / x_i) ** tau - ln(x_i) - ln(1 - exp(-(theta / d) ** tau))
def to_minimize(args, data, d=1):
    theta, tau = args
    if tau <= 0 or theta <= 0 or theta / d < 0 or np.exp(-(theta / d) ** tau) >= 1:
        print('ERROR')
    term1 = len(data) * (np.log(tau) + tau * np.log(theta) - np.log(1 - np.exp(-(theta / d) ** tau)))
    term2 = 0
    for x in data:
        term2 += (-(tau + 1) * np.log(x)) - (theta / x) ** tau
    return term1 + term2
This will fail in all instances where the if statement is true. In other words, tau and theta have to be strictly positive, and (theta / d) ** tau must be sufficiently far away from 0 so that np.exp(-(theta / d) ** tau) is "far enough away" from 1, since otherwise the logarithm will be undefined.
These are the constraints which I thus defined. I used the notation with a dict instead of a NonlinearConstraint object since it seems that this method accepts strict inequality (np.exp(-x[0] ** x[1]) must be strictly less than 1). Maybe I have misunderstood the documentation on this.
def constraints(x):
    return [1 - np.exp(-(x[0]) ** x[1])]
To maximize the likelihood, I minimize the negative likelihood.
opt = minimize(lambda args: -to_minimize(args, data),
               x0=np.array((1, 1)),
               constraints={'type': 'ineq', 'fun': constraints},
               bounds=np.array([(1e-15, 10), (1e-15, 10)]))
As I take it, the two arguments should then never be chosen in a way such that my code fails. Yet, the algorithm tries to move theta very close to its lower bound and tau very close to its upper bound so that the logarithm becomes undefined.
What makes my code fail?
Both forms of constraints, i.e. NonlinearConstraint and dict constraints, don't support strict inequalities. Typically, one therefore uses g(x) >= c + Ɛ to model the strict inequality g(x) > c, where Ɛ is a sufficiently small number.
Note also that it is not guaranteed that each iterate lies inside the feasible region. Internally, most of the methods try to bring it back into the feasible region by simple clipping at the bounds. In cases where this doesn't work, you can try NonlinearConstraint's keep_feasible option and then use the trust-constr method:
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

def con_fun(x):
    return 1 - np.exp(-(x[0]) ** x[1])

# 1.0e-8 <= con_fun <= np.inf
con = NonlinearConstraint(con_fun, 1.0e-8, np.inf, keep_feasible=True)
x0 = np.array((1., 1.))
bounds = np.array([(1e-5, 10), (1e-5, 10)])
opt = minimize(lambda args: -to_minimize(args, data),
               x0=x0, constraints=(con,),
               bounds=bounds, method="trust-constr")
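To try this end to end you need a data sample; the snippet below uses a synthetic stand-in (the Weibull draw and its parameters are my assumptions, not the original data set):

rng = np.random.default_rng(0)
data = 1.0 + rng.weibull(1.5, size=500)   # hypothetical stand-in for the real, left-truncated data
opt = minimize(lambda args: -to_minimize(args, data),
               x0=x0, constraints=(con,),
               bounds=bounds, method="trust-constr")
print(opt.x)   # fitted (theta, tau)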
I am simulating a chemical reaction of the form A --> B --> C using a chemical batch reactor model. The corresponding ODE system is as follows:
dcA/dt = - kA * cA(t) ** nA1
dcB/dt = kA * cA(t) ** nA1 - kB * cB(t) **nB2
dcC/dt = kB * cB(t) ** nB2
Pyomo solves the ODE system fine if the exponents nA1 and nB2 are 1 or higher. But in my case they are below 1, and as the components' concentrations approach zero the ODE integration fails, producing only NaNs. The reason is that once the concentrations approach zero, they numerically take on small negative values such as cA(t) = -10e-20, and then the expression cA(t)**nA1 can no longer be evaluated.
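The failure mode is easy to reproduce with plain numpy (a minimal sketch):

import numpy as np
print(np.float64(-1e-20) ** 0.5)   # nan: fractional power of a negative float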
I tried to implement a workaround of the form:
if cA < 0:
    R1 = 0
else:
    R1 = kA * cA(t) ** nA1
but I wasn't able to do it properly, as I had a hard time with the Pyomo syntax.
This is the minimal working example:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from pyomo.environ import *
from pyomo.dae import *

V = 40      # l
kA = 0.5    # 1/min
kB = 0.1    # 1/min
nA1 = 0.5
nB2 = 0.5
cAf = 2.0   # mol/l

def batch_plot(t, y):
    plt.plot(t, y[:, 0], label="cA")
    plt.plot(t, y[:, 1], label="cB")
    plt.plot(t, y[:, 2], label="cC")
    plt.legend()

def batch():
    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0, 500))
    m.cA = Var(m.t, domain=NonNegativeReals)
    m.cB = Var(m.t, domain=NonNegativeReals)
    m.cC = Var(m.t, domain=NonNegativeReals)
    m.dcA = DerivativeVar(m.cA, wrt=m.t)
    m.dcB = DerivativeVar(m.cB, wrt=m.t)
    m.dcC = DerivativeVar(m.cC, wrt=m.t)
    m.cA[0] = cAf
    m.cB[0] = 0
    m.cC[0] = 0
    R1 = lambda m, t: kA * m.cA[t] ** nA1
    R2 = lambda m, t: kB * m.cB[t] ** nB2
    m.odeA = Constraint(m.t, rule=lambda m, t: m.dcA[t] == -R1(m, t))
    m.odeB = Constraint(m.t, rule=lambda m, t: m.dcB[t] == R1(m, t) - R2(m, t))
    m.odeC = Constraint(m.t, rule=lambda m, t: m.dcC[t] == R2(m, t))
    return m

tsim, profiles = Simulator(batch(), package="scipy").simulate(numpoints=100)
batch_plot(tsim, profiles)
I expect the ODE integration to work even with reaction orders below 1.
Does anybody have an idea on how to achieve this?
There are two aims in modifying the power function x^n:
extend to negative x in a smooth way so that the numerical method does not hiccup close to x=0 and
have a small slope for small x so that the numerical integration for very small x has a greater chance to be stable.
The first aim is satisfied by constructs like
x*max(eps,abs(x))^(n-1),
x*(eps+abs(x-eps))^(n-1), or
x*(eps^2+abs(x-eps)^2)^(0.5*(n-1)),
which all take exactly the value x^n for x>eps and are continuous and piecewise smooth. But the slope at x=0 is of size eps^(n-1), which will require very small step sizes even after the system stabilizes.
The solution is to extract even more integer powers from the rational power, in the form
x*abs(x) * max(eps,abs(x))^(n-2)
or one of the other variants for the last factor. For 0<x<eps and n=0.5 this results in the value r(x) = x^2 * eps^(-1.5), so that the equation x' = -k*r(x) has the solution x(t) = x1/(1 + x1*k*eps^(-1.5)*(t-t1)) once the state has fallen to a point 0<x1<eps at time t=t1. The slope of r vanishes at x=0 (instead of being of size eps^(n-1)), which is nice for numerical integrators.
This was implemented for scipy.integrate.solve_ivp, using method LSODA and rather strict tolerances, with the ODE right-hand side function
# your original function, stabilized at negative values
power0 = lambda x, n: max(0, x) ** n

# linear at x=0, needs small step sizes
def power1(x, n):
    eps = 1e-4
    return x * max(eps, abs(x)) ** (n - 1)

def power2(x, n):
    eps = 1e-4
    return x * (eps**2 + (x - eps)**2) ** (0.5 * (n - 1))

# quadratic at x=0, allows large step sizes on the tail
eps = 1e-8
power3 = lambda x, n: x * abs(x) * max(eps, abs(x)) ** (n - 2)
power4 = lambda x, n: x * abs(x) * (eps**2 + (x - eps)**2) ** (0.5 * n - 1)

# select the power approximation used
power = power3

def model(t, u):
    cA, cB, cC = u
    R1 = kA * power(cA, nA1)
    R2 = kB * power(cB, nB2)
    return [-R1, R1 - R2, R2]
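The driver call is not shown above; here is a minimal sketch consistent with the description (method LSODA with strict tolerances; the exact tolerance values are my assumption, and the constants and time span are taken from the question):

from scipy.integrate import solve_ivp

kA, kB, nA1, nB2, cAf = 0.5, 0.1, 0.5, 0.5, 2.0
sol = solve_ivp(model, (0, 500), [cAf, 0.0, 0.0],
                method="LSODA", rtol=1e-10, atol=1e-12)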
The integration runs successfully, using step sizes of 20-30 in the tail end. The resulting plot looks qualitatively correct, and when zoomed in on small values it is smooth and remains positive.
I'm desperately trying to solve (and plot) a system of nine nonlinear differential equations which models the path of a boomerang. The system is the following (given as an image in the original post): all the letters on the left-hand sides are variables; the others are either constants or known functions depending on v_G and w_z.
I have tried scipy.odeint with no conclusive results (I ran into this issue, but the workaround did not work).
I am beginning to think that the problem is linked to the fact that these equations are nonlinear, or that a function appearing in a denominator might cause a singularity that the scipy solver is simply unable to handle. However, I am not familiar with this sort of mathematics.
What possibilities python-wise do I have to solve this set of equations?
EDIT: Sorry if I was not clear enough. Since it models the path of a boomerang, my goal is not to solve this system analytically (i.e. I don't care about the mathematical expression of each function), but rather to get the values of each function over a specific time range (say, from t1 = 0 s to t2 = 15 s with an interval of 0.01 s between values) in order to display the graph of each function and the graph of the boomerang's center of mass (X, Y, Z are its coordinates).
Here is the code I tried:
import scipy.integrate as spi
import numpy as np

# Constants
I3 = 10**-3
lamb = 1
L = 5*10**-1
mu = I3
m = 0.1
Cz = 0.5
rho = 1.2
S = 0.03*0.4
Kz = 1/2*rho*S*Cz
g = 9.81

# Initial conditions
omega0 = 20*np.pi
V0 = 25
Psi0 = 0
theta0 = np.pi/2
phi0 = 0
psi0 = -np.pi/9
X0 = 0
Y0 = 0
Z0 = 1.8
INPUT = (omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0)  # initial conditions

def diff_eqs(t, INP):
    '''The main set of equations'''
    Y = np.zeros((9))
    Y[0] = (1/I3) * (Kz*L*(INP[1]**2+(L*INP[0])**2))
    Y[1] = -(lamb/m)*INP[1]
    Y[2] = -(1/(m * INP[1])) * ( Kz*L*(INP[1]**2+(L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    Y[3] = (1/(I3*INP[0]))*(-mu*INP[0]*np.sin(INP[6]))
    Y[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    Y[5] = -np.cos(INP[3])*Y[4]
    Y[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    Y[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    Y[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return Y  # for odeint

t_start = 0.0
t_end = 20
t_step = 0.01
t_range = np.arange(t_start, t_end, t_step)
RES = spi.odeint(diff_eqs, INPUT, t_range)
However, I keep getting the same problem as shown here, and especially the error message:
Excess work done on this call (perhaps wrong Dfun type)
I am not quite sure what it means, but it looks like the solver has trouble solving the system. In any case, when I try to display the 3D path from the XYZ coordinates, I just get 3 or 4 points where there should be something like 2000.
So my questions are:
- Am I doing something wrong in my code?
- If not, is there another, maybe more sophisticated, tool to solve this system?
- If not, is it even possible to get what I want from this system of ODEs?
Thanks in advance
There are several issues:
- if I copy the code, it does not run
- the workaround you mention does not work with odeint; the given solution uses ode
- the scipy reference for odeint says: "For new code, use scipy.integrate.solve_ivp to solve a differential equation."
- the call RES = spi.odeint(diff_eqs, INPUT, t_range) must be consistent with the function head def diff_eqs(t, INP). By default odeint expects the signature func(y, t), so either swap the first two arguments of diff_eqs or pass tfirst=True: RES = spi.odeint(diff_eqs, INPUT, t_range, tfirst=True)
There are some issues with the mathematical formulas too:
- have a look at the 3rd formula in your picture: it has no tendency term, it starts with a zero. What does that mean?
- it's hard to check whether you have translated the formulas correctly into code, since the code does not follow the formulas strictly.
Below I tried a solution with scipy's solve_ivp. In case A I'm able to run a pendulum, but in case B no meaningful solution for the boomerang can be found. So check the maths; I guess there is some error in the mathematical expressions.
For the graphics use pandas to plot all variables together (see code below).
import scipy.integrate as spi
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def diff_eqs_boomerang(t, Y):
    INP = Y
    dY = np.zeros((9))
    dY[0] = (1/I3) * (Kz*L*(INP[1]**2+(L*INP[0])**2))
    dY[1] = -(lamb/m)*INP[1]
    dY[2] = -(1/(m * INP[1])) * ( Kz*L*(INP[1]**2+(L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    dY[3] = (1/(I3*INP[0]))*(-mu*INP[0]*np.sin(INP[6]))
    dY[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    dY[5] = -np.cos(INP[3])*INP[4]
    dY[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    dY[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    dY[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return dY

def diff_eqs_pendulum(t, Y):
    dY = np.zeros((3))
    dY[0] = Y[1]
    dY[1] = -Y[0]
    dY[2] = Y[0]*Y[1]
    return dY

t_start, t_end = 0.0, 12.0

case = 'A'
if case == 'A':    # pendulum
    Y = np.array([0.1, 1.0, 0.0])
    Yres = spi.solve_ivp(diff_eqs_pendulum, [t_start, t_end], Y,
                         method='RK45', max_step=0.01)
if case == 'B':    # boomerang
    Y = np.array([omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0])
    print('Y initial:'); print(Y); print()
    Yres = spi.solve_ivp(diff_eqs_boomerang, [t_start, t_end], Y,
                         method='RK45', max_step=0.01)

#---- graphics ---------------------
yy = pd.DataFrame(Yres.y).T
tt = np.linspace(t_start, t_end, yy.shape[0])
with plt.style.context('fivethirtyeight'):
    plt.figure(1, figsize=(20, 5))
    plt.plot(tt, yy, lw=8, alpha=0.5)
    plt.grid(axis='y')
    for j in range(3):
        plt.fill_between(tt, yy[j], 0, alpha=0.2, label='y['+str(j)+']')
    plt.legend(prop={'size': 20})
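For the boomerang case, once case B produces a valid solution, the 3D path of the center of mass could be drawn from components 6-8 of the state (a sketch; it assumes matplotlib 3.2+ for the '3d' projection):

ax = plt.figure(2).add_subplot(projection='3d')
ax.plot(Yres.y[6], Yres.y[7], Yres.y[8])   # X, Y, Z of the center of mass
ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z')
plt.show()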
I'm trying to approximate an empirical cumulative distribution function (ECDF) with a smooth function (with fewer than 5 parameters), such as the generalized logistic function.
However, using scipy.optimize.curve_fit, the fitting operation gives really bad approximations, or it doesn't work at all (depending on the initial values). The variable series represents my data, stored as a pandas.Series.
import numpy as np
from scipy.optimize import curve_fit

def fit_ecdf(x):
    x = np.sort(x)
    def result(v):
        return np.searchsorted(x, v, side='right') / x.size
    return result

ecdf = fit_ecdf(series)

def genlogistic(x, B, M, Q, v):
    return 1 / (1 + Q * np.exp(-B * (x - M))) ** (1 / v)

params = curve_fit(genlogistic, xdata=series, ydata=ecdf(series),
                   p0=(0.1, 10.0, 0.1, 0.1))[0]
Should I use another type of function for the fit?
Are there any code mistakes?
UPDATE - 1
As asked, I link to a csv containing the data.
UPDATE - 2
After a lot of searching and trial and error, I found this function:
f(x; a, b, c) = 1 - 1 / (1 + (x / b) ** a) ** c
with a = 4.61320000, b = 2.94570952, c = 0.5886922,
which fits a lot better than the other one. The only problem is the little step that the ECDF shows near x = 1. How can I modify f to improve the quality of the fit? I was thinking of adding some sort of function that is "relevant" only around those points. (In the plot from the original post, the solid blue line is the ECDF and the dotted line shows the (x, f(x)) points.)
I found out how to deal with that little step near x=1. As suggested in the question, adding some sort of function that is significant only in that interval was the game changer.
The "step" ends at about (1.7, 0.04), so I needed a function that flattens out for x > 1.7 and has y = 0.04 as an asymptote. The natural choice (just to stay on point) was a function like f(x) = 1/exp(x).
Thanks to JamesPhillips, I also picked the proper data for the regression (no duplicate values = no overweighted points).
Python Code
import numpy as np
from scipy.optimize import curve_fit

def fit_ecdf(x):
    x = np.sort(x)
    def result(v):
        return np.searchsorted(x, v, side='right') / x.size
    return result

ecdf = fit_ecdf(series)
unique_series = series.unique().tolist()

def cdf_interpolation(x, a, b, c, d):
    f_1 = 0.95 + (0 - 0.95) / (1 + (x / b) ** a) ** c + 0.05
    f_2 = (0 - 0.05) / (np.exp(d * x))
    return f_1 + f_2

params = curve_fit(cdf_interpolation,
                   xdata=unique_series,
                   ydata=ecdf(unique_series),
                   p0=(6.0, 3.0, 0.4, 1.0))[0]
Parameters
a = 6.03256462
b = 2.89418871
c = 0.42997956
d = 1.06864006
Graphical results
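The original plot is not reproduced here; a quick way to regenerate a comparison plot (a sketch; the plotting choices are my own, not from the original answer):

import matplotlib.pyplot as plt
xs = np.linspace(min(unique_series), max(unique_series), 500)
plt.step(sorted(unique_series), ecdf(sorted(unique_series)), label='ECDF')
plt.plot(xs, cdf_interpolation(xs, *params), 'k--', label='fitted curve')
plt.legend()
plt.show()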
I got an OK fit for a 5-parameter logistic equation (see the code below) using unique values; I'm not sure whether the low-end curve is sufficient for your needs, please check.
import numpy as np

def Sigmoidal_FiveParameterLogistic_model(x_in):  # from zunzun.com
    # coefficients
    a = 9.9220221252324947E-01
    b = -3.1572339989462903E+00
    c = 2.2303376075685142E+00
    d = 2.6271495036080207E-02
    f = 3.4399008905318986E+00
    return d + (a - d) / np.power(1.0 + np.power(x_in / c, b), f)
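For example, to inspect this model over the data range (a sketch; the evaluation grid is an assumption, since the actual range comes from the linked csv):

import matplotlib.pyplot as plt
xs = np.linspace(0.1, 10.0, 400)   # hypothetical grid; adjust to the data range
plt.plot(xs, Sigmoidal_FiveParameterLogistic_model(xs), 'r--', label='5PL fit')
plt.legend()
plt.show()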
I'm trying to implement Euler's method to approximate the value of e in Python. This is what I have so far:
def Euler(f, t0, y0, h, N):
    t = t0 + arange(N+1)*h
    y = zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] + h*f(t[n], y[n])
    f = (1+(1/N))^N
    return y
However, when I try to call the function, I get the error "ValueError: shape <= 0". I suspect this has something to do with how I defined f. I tried passing f directly when Euler is called, but that gave me errors about variables not being defined. I also tried defining f as its own function, which gave me a division by zero error.
def f(N):
    for n in range(N):
        return (1+(1/n))^n
(not sure if N was the appropriate variable to use here...)
The formula you are trying to use is not Euler's method, but rather the limit definition of e as n approaches infinity:

$e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n$
Euler's method is used to solve first order differential equations.
Here are two guides that show how to implement Euler's method to solve a simple test function: beginner's guide and numerical ODE guide.
To answer the title of this post, rather than the question you are asking, I've used Euler's method to solve the usual exponential decay equation:
$\frac{dN}{dt} = -\lambda N$
which has the solution
$N(t) = N_0 e^{-\lambda t}$
Code:
import numpy as np
import matplotlib.pyplot as plt
from __future__ import division
# Concentration over time
N = lambda t: N0 * np.exp(-k * t)
# dN/dt
def dx_dt(x):
return -k * x
k = .5
h = 0.001
N0 = 100.
t = np.arange(0, 10, h)
y = np.zeros(len(t))
y[0] = N0
for i in range(1, len(t)):
# Euler's method
y[i] = y[i-1] + dx_dt(y[i-1]) * h
max_error = abs(y-N(t)).max()
print 'Max difference between the exact solution and Euler's approximation with step size h=0.001:'
print '{0:.15}'.format(max_error)
Output:
Max difference between the exact solution and Euler's approximation with step size h=0.001:
0.00919890254720457
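Since Euler's method is first-order accurate, halving h should roughly halve this error. A quick check, reusing the definitions above (a sketch):

for h in (0.002, 0.001, 0.0005):
    t = np.arange(0, 10, h)
    y = np.zeros(len(t))
    y[0] = N0
    for i in range(1, len(t)):
        y[i] = y[i-1] + dx_dt(y[i-1]) * h
    print(h, abs(y - N(t)).max())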
Are you sure you are not trying to implement Newton's method? Newton's method is used to approximate the roots of a function.
In case you decide to go with Newton's method, here is a slightly changed version of your code that approximates the square root of 2. You can replace f(x) and fp(x) with the function and its derivative for whatever root you want to approximate.
import numpy as np

def f(x):
    return x**2 - 2

def fp(x):
    return 2*x

def Newton(f, y0, N):
    y = np.zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] - f(y[n])/fp(y[n])
    return y

print(Newton(f, 1, 10))
gives
[ 1. 1.5 1.41666667 1.41421569 1.41421356 1.41421356
1.41421356 1.41421356 1.41421356 1.41421356 1.41421356]
which are the initial value and the first ten iterations converging to the square root of two.
Besides this, a big problem was the use of ^ instead of ** for powers; ^ is legal Python, but it is a totally different operation (bitwise XOR).
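For example:

print(3 ** 2)   # 9, exponentiation
print(3 ^ 2)    # 1, bitwise XOR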