Executing Monte Carlo method for finding the minimum of f - python

I am considering a function:
f(x) = 1 − exp(−(x + 3)²) − 2 exp(−(x − 2)²).
My question is:
Write a Python function that takes a function f, an initial guess x,
and a number of iterations N, and does the following. Starting with
the initial x, for N iterations:
generate a normally distributed random number δx
find the cost f(x + δx). If this is less than the present cost f(x), update x with x + δx. Otherwise leave x unchanged.
output the final x.
I want to try different initial values of x and see where the algorithm ends up.
This is my code:
import numpy as np

fun = lambda x: 1 - np.exp(-(x + 3)**2) - 2*np.exp(-(x - 2)**2)

def monteCarlo(costFun=fun, x=0., N=100, sigma=1.):
    '''Find a minimum of a function approximately using simulated annealing.'''
    cost = costFun(x)
    for j in range(N):
        T = 1. - float(j)/N  # temperature, decreasing linearly towards 0
        dx = np.random.normal(scale=sigma)
        newx = x + dx
        newcost = costFun(newx)
        p = np.random.random()
        # accept if the cost improves, or with Metropolis probability exp(-(dcost)/T)
        if newcost < cost or p < np.exp(-(newcost - cost)/T):
            cost = newcost
            x = newx
        print('point: ' + str(x) + ', cost = ' + str(cost) + ', T = ' + str(T))
    return x
Note: I haven't been able to list my code properly, but it starts from: 'import numpy'. (I am relatively new to this forum).
I believe my Python code has no bugs, but I am having difficulty with the final stage: varying the initial values of x.
Any help would be so appreciated. Thank you in advance.
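A minimal sketch of that final stage, assuming the fun and monteCarlo definitions above (the starting values below are just arbitrary examples):

# Run the same search from several starting points and compare where each one ends up.
for x0 in [-6., -3., 0., 2., 5.]:
    xfinal = monteCarlo(costFun=fun, x=x0, N=100, sigma=1.)
    print('start:', x0, '-> end:', xfinal, ', cost:', fun(xfinal))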

Related

How to implement least-squares polynomial fitting with no built-in methods using Python?

I am currently running into a problem solving this.
The objective of the exercise is to find a polynomial of a given degree from a dataset of points (which can be noisy) and to best fit it using the least-squares method.
I don't understand the steps that lead to the linear equations.
What are the steps, or could anyone provide a Python program, that lead to the matrix I pass as an argument to my decomposition program?
Note: I have Python programs for cubic splines and LU decomposition/Gaussian elimination.
Thanks.
I tried to apply Gaussian/LU decomposition straight away to the dataset, but I understand there are more steps to the solution...
I don't understand how cubic splines add to the mix either.
Edit:
Gaussian elimination:
import numpy as np
import math

def swapRows(v, i, j):
    if len(v.shape) == 1:
        v[i], v[j] = v[j], v[i]
    else:
        v[[i,j],:] = v[[j,i],:]

def swapCols(v, i, j):
    v[:,[i,j]] = v[:,[j,i]]

def gaussPivot(a, b, tol=1.0e-12):
    n = len(b)
    # Set up scale factors
    s = np.zeros(n)
    for i in range(n):
        s[i] = max(np.abs(a[i,:]))
    for k in range(0, n-1):
        # Row interchange, if needed
        p = np.argmax(np.abs(a[k:n,k])/s[k:n]) + k
        if abs(a[p,k]) < tol: raise ValueError('Matrix is singular')
        if p != k:
            swapRows(b, k, p)
            swapRows(s, k, p)
            swapRows(a, k, p)
        # Elimination
        for i in range(k+1, n):
            if a[i,k] != 0.0:
                lam = a[i,k]/a[k,k]
                a[i,k+1:n] = a[i,k+1:n] - lam*a[k,k+1:n]
                b[i] = b[i] - lam*b[k]
    if abs(a[n-1,n-1]) < tol: raise ValueError('Matrix is singular')
    # Back substitution
    b[n-1] = b[n-1]/a[n-1,n-1]
    for k in range(n-2, -1, -1):
        b[k] = (b[k] - np.dot(a[k,k+1:n], b[k+1:n]))/a[k,k]
    return b

def polyFit(xData, yData, m):
    # Build the normal equations for a degree-m least-squares fit
    a = np.zeros((m+1, m+1))
    b = np.zeros(m+1)
    s = np.zeros(2*m+1)
    for i in range(len(xData)):
        temp = yData[i]
        for j in range(m+1):
            b[j] = b[j] + temp
            temp = temp*xData[i]
        temp = 1.0
        for j in range(2*m+1):
            s[j] = s[j] + temp
            temp = temp*xData[i]
    for i in range(m+1):
        for j in range(m+1):
            a[i,j] = s[i+j]
    return gaussPivot(a, b)

degree = 10  # can be any degree
polyFit(xData, yData, degree)
I was under the impression that the code above takes a dataset of points and a degree, and that the output should be the coefficients of a polynomial that fits those points. However, I have a grader that was provided by my professor, and according to it the returned polynomial has a large error.
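One quick sanity check (a sketch with made-up, noise-free data, so the answer is known) is to compare polyFit against np.polyfit on the same points; note that np.polyfit returns coefficients highest power first, while polyFit returns them constant term first:

import numpy as np

xTest = np.linspace(-2, 2, 20)
yTest = 1.0 - 2.0*xTest + 0.5*xTest**2 + 3.0*xTest**3   # an exact cubic

print(polyFit(xTest, yTest, 3))           # expect approximately [ 1.  -2.   0.5  3. ]
print(np.polyfit(xTest, yTest, 3)[::-1])  # reversed to match the ordering above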
After that I tried the following LU decomposition instead:
import numpy as np

def swapRows(v, i, j):
    if len(v.shape) == 1:
        v[i], v[j] = v[j], v[i]
    else:
        v[[i,j],:] = v[[j,i],:]

def swapCols(v, i, j):
    v[:,[i,j]] = v[:,[j,i]]

def LUdecomp(a, tol=1.0e-9):
    n = len(a)
    seq = np.array(range(n))
    # Set up scale factors
    s = np.zeros(n)
    for i in range(n):
        s[i] = max(abs(a[i,:]))
    for k in range(0, n-1):
        # Row interchange, if needed
        p = np.argmax(np.abs(a[k:n,k])/s[k:n]) + k
        if abs(a[p,k]) < tol: raise ValueError('Matrix is singular')
        if p != k:
            swapRows(s, k, p)
            swapRows(a, k, p)
            swapRows(seq, k, p)
        # Elimination
        for i in range(k+1, n):
            if a[i,k] != 0.0:
                lam = a[i,k]/a[k,k]
                a[i,k+1:n] = a[i,k+1:n] - lam*a[k,k+1:n]
                a[i,k] = lam
    return a, seq

def LUsolve(a, b, seq):
    n = len(a)
    # Rearrange constant vector; store it in [x]
    x = b.copy()
    for i in range(n):
        x[i] = b[seq[i]]
    # Forward and back substitution
    for k in range(1, n):
        x[k] = x[k] - np.dot(a[k,0:k], x[0:k])
    x[n-1] = x[n-1]/a[n-1,n-1]
    for k in range(n-2, -1, -1):
        x[k] = (x[k] - np.dot(a[k,k+1:n], x[k+1:n]))/a[k,k]
    return x
The results were a bit better, but still nowhere near what they should be.
Edit 2:
I tried the Chebyshev method suggested in the comments and came up with:
import numpy as np

def chebyshev_transform(x, n):
    """
    Transforms x-coordinates to Chebyshev coordinates
    """
    return np.cos(n * np.arccos(x))

def chebyshev_design_matrix(x, n):
    """
    Constructs the Chebyshev design matrix
    """
    x_cheb = chebyshev_transform(x, n)
    T = np.zeros((len(x), n+1))
    T[:,0] = 1
    T[:,1] = x_cheb
    for i in range(2, n+1):
        T[:,i] = 2 * x_cheb * T[:,i-1] - T[:,i-2]
    return T

degree = 10
f = lambda x: np.cos(x)
xdata = np.linspace(-1, 1, num=100)
ydata = np.array([f(i) for i in xdata])
M = chebyshev_design_matrix(xdata, degree)
D_x, D_y = np.linalg.qr(M)
D_x, seq = LUdecomp(D_x)
A = LUsolve(D_x, D_y, seq)
I can't use linalg.qr in my program; it was just for checking how it works. In addition, I didn't get the 'slow way' of the formula that was in the comments.
The program can't handle an x point that is not between -1 and 1. Is there any way around that, some normalization?
Thanks a lot.
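One common way around it is an affine change of variable that maps the data interval [a, b] onto [-1, 1]; a small sketch using the xdata above (a and b here are just the endpoints of whatever range the data actually spans):

a, b = xdata.min(), xdata.max()
t = 2.0*(xdata - a)/(b - a) - 1.0       # forward map: [a, b] -> [-1, 1]
M = chebyshev_design_matrix(t, degree)  # build the design matrix in the mapped variable
# To evaluate the fitted polynomial at a new point xq, apply the same map first:
# tq = 2.0*(xq - a)/(b - a) - 1.0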
Hints:
You are probably asked for an unsophisticated method. If the degree of the polynomial remains low, you can use the straightforward approach below. For the sake of the explanation, I'll use a cubic model.
Assume that you want to fit your data to this polynomial, by observing that it seems to follow a cubic behavior:
ax³ + bx² + cx + d ~ y
[All x and y should be understood with an index i which is omitted for notational convenience.]
If there are more than four data points, you get an overdetermined system of equations, usually with no solution. The trick is to consider the error on the individual equations, e = ax³ + bx² + cx + d - y, and to minimize the total error. As the error is a signed number, negative errors would make minimization impossible. Instead, we minimize the sum of squared errors. (The sum of absolute errors is another option but it unfortunately leads to a much harder problem.)
Min(a, b, c, d) Σ(ax³ + bx² + cx + d - y)²
As the unknown parameters are unconstrained, it suffices to look for a stationary point, i.e. cancel the gradient of the total error. By differentiation on the unknowns a, b, c and d, we obtain
2Σ(ax³x³ + bx²x³ + cxx³ + dx³ - yx³) = 0
2Σ(ax³x² + bx²x² + cxx² + dx² - yx²) = 0
2Σ(ax³x + bx²x + cxx + dx - yx ) = 0
2Σ(ax³ + bx² + cx + d - y ) = 0
As you can recognize, this is a square linear system of equations.
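A minimal numpy sketch of this construction for the cubic case (the function name and test data below are my own illustration; np.linalg.solve is used for brevity, but the gaussPivot above would work just as well):

import numpy as np

def cubic_least_squares(x, y):
    # Build the 4x4 normal equations obtained by setting the gradient to zero.
    # Unknowns are ordered (a, b, c, d) for a*x**3 + b*x**2 + c*x + d.
    powers = [3, 2, 1, 0]
    A = np.array([[np.sum(x**(p + q)) for q in powers] for p in powers])
    rhs = np.array([np.sum(y * x**p) for p in powers])
    return np.linalg.solve(A, rhs)

# Example: exact cubic data should be recovered up to round-off.
x = np.linspace(-2, 2, 30)
y = 2*x**3 - x**2 + 0.5*x + 4
print(cubic_least_squares(x, y))   # approximately [ 2.  -1.   0.5  4. ]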

How to use Newton's method with a given interval

I can calculate the root of a function using Newton's method by subtracting the old x-value from the new one and checking the convergence criterion. Is there a way of doing it when given a closed interval? E.g.,
given a function and the interval [a, b] = [0.1, 3.0], the convergence criterion would be calculated by checking whether b - a = 3.0 - 0.1 < 0.000001.
The code I provided calculates the convergence criterion using the x-values. I'm trying to figure out whether there is a way I can use the interval instead of the x-values.
from math import *

x = 1.0  # initial value
for j in range(1, 101):
    xnew = (x**2 + cos(x)**2 - 4*x)/(2*(x - cos(x)*sin(x) - 2))
    if abs(xnew - x) < 0.000001:
        break
    x = xnew

print('Root = %0.6f ' % xnew)
print('Number of iterations = %d' % j)
It sounds like you want to guarantee that the root is found within a given interval (which is not something Newton-Raphson can guarantee). You could use bisection for this. If you know the function changes sign in the given interval (and is continuous on it), then something like the following works:
>>> from sympy.abc import x
>>> from sympy import nsolve, cos
>>> ivl = 0,3
>>> expr = (x**2 + cos(x)**2 -4*x)
>>> nsolve(expr, x, ivl)
0.250324492526265
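If you would rather avoid sympy, plain bisection is easy to write yourself; here is a minimal sketch (assuming only that f is continuous and changes sign on [a, b]):

from math import cos

def bisect(f, a, b, tol=1e-6):
    # Shrink [a, b] until its width is below tol; f(a) and f(b) must have opposite signs.
    fa = f(a)
    while b - a > tol:
        m = 0.5*(a + b)
        fm = f(m)
        if fa*fm <= 0:
            b = m            # the sign change, hence the root, lies in [a, m]
        else:
            a, fa = m, fm    # the root lies in [m, b]
    return 0.5*(a + b)

f = lambda x: x**2 + cos(x)**2 - 4*x
print(bisect(f, 0.1, 3.0))   # approximately 0.250324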
But it also looks like you might have some variables mixed up in what you are trying with the NR method. The xnew you are calculating looks very much like f(x)/f'(x) which is dx in xnew = x - dx. So if you write:
for j in range(1, 101):
    dx = (x**2 + cos(x)**2 - 4*x)/(2*(x - cos(x)*sin(x) - 2))
    if abs(dx) < 0.000001:
        break
    x = x - dx

print('Root = %0.6f ' % x)
print('Number of iterations = %d' % j)
you will get
Root = 0.250324
Number of iterations = 4

fmin_slsqp returns the initial guess when finding the minimum of a cubic spline

I am trying to find the minimum of a natural cubic spline. I have written the following code to compute the natural cubic spline. (I have been given test data and have confirmed this method is correct.) Now I cannot figure out how to find the minimum of this function.
This is the data
xdata = np.linspace(0.25, 2, 8)
ydata = 10**(-12) * np.array([1,2,1,2,3,1,1,2])
This is the function
import scipy as sp
import numpy as np
import math
from numpy.linalg import inv
from scipy.optimize import fmin_slsqp
from scipy.optimize import minimize, rosen, rosen_der

def phi(x, xd, yd):
    n = len(xd)
    h = np.array(xd[1:n] - xd[0:n-1])
    f = np.divide(yd[1:n] - yd[0:(n-1)], h)
    q = [0]*(n-2)
    for i in range(n-2):
        q[i] = 3*(f[i+1] - f[i])
    A = np.zeros(((n-2), (n-2)))
    # define A for j = 0
    A[0,0] = 2*(h[0] + h[1])
    A[0,1] = h[1]
    # define A for j = n-2
    A[-1,-2] = h[-2]
    A[-1,-1] = 2*(h[-2] + h[-1])
    # define A for j in the middle
    for j in range(1, (n-3)):
        A[j,j-1] = h[j]
        A[j,j] = 2*(h[j] + h[j+1])
        A[j,j+1] = h[j+1]
    Ainv = inv(A)
    B = Ainv.dot(q)
    b = n*[0]
    b[1:(n-1)] = B
    # now we find a, b, c and d
    a = [0]*(n-1)
    c = [0]*(n-1)
    d = [0]*(n-1)
    s = [0]*(n-1)
    for r in range(n-1):
        a[r] = 1/(3*h[r]) * (b[r+1] - b[r])
        c[r] = f[r] - h[r]*((2*b[r] + b[r+1])/3)
        d[r] = yd[r]
    # solution 1 start
    for m in range(n-1):
        if xd[m] <= x <= xd[m+1]:
            s = a[m]*(x - xd[m])**3 + b[m]*(x - xd[m])**2 + c[m]*(x - xd[m]) + d[m]
    return s
    # solution 1 end
I want to find the minimum on the domain of my xdata, so fmin didn't work, as you cannot define bounds there. I tried both fmin_slsqp and minimize. They are not compatible with the phi function I wrote, so I rewrote phi(x, xd, yd) and added an extra variable, making it phi(x, xd, yd, m). Here m indicates in which piece of the spline we are evaluating the solution (from x_m to x_(m+1)). In the code we replaced #solution 1 with the following:
# solution 2 start
return(a[m]*(x - xd[m])**3 + b[m]*(x-xd[m])**2 + c[m]*(x-xd[m]) + d[m])
# solution 2 end
To find the minimum on a subdomain x_m to x_(m+1), we use the following code (shown here for m = 0, so x from 0.25 to 0.5, with initial guess 0.3):
fmin_slsqp(phi, x0 = 0.3, bounds=([(0.25,0.5)]), args=(xdata, ydata, 0))
What I would then do (I know it's crude) is iterate this with a for loop to find the minimum on all subdomains and then take the overall minimum. However, fmin_slsqp constantly returns the initial guess as the minimum, so there is something wrong that I do not know how to fix. If you could help me, it would be greatly appreciated. Thanks for reading this far.
When I plot your function phi and the data you feed in, I see that its range is of the order of 1e-12. However, fmin_slsqp is unable to handle that level of precision and fails to find any change in your objective.
The solution I propose is to scale the return value of your objective up by that same order of magnitude, like so:
return(s*1e12)
Then you get good results.
>>> sol = fmin_slsqp(phi, x0=0.3, bounds=([(0.25, 0.5)]), args=(xdata, ydata))
>>> print(sol)
Optimization terminated successfully. (Exit mode 0)
Current function value: 1.0
Iterations: 2
Function evaluations: 6
Gradient evaluations: 2
[ 0.25]
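The crude loop over the subintervals that the question describes could then look roughly like this (a sketch, assuming the three-argument phi above with the 1e12 scaling applied, and the original xdata and ydata; iprint=0 only silences the solver's per-call output):

import numpy as np
from scipy.optimize import fmin_slsqp

best_x, best_val = None, np.inf
for m in range(len(xdata) - 1):
    lo, hi = xdata[m], xdata[m+1]
    x0 = 0.5*(lo + hi)   # start in the middle of the current spline piece
    sol = fmin_slsqp(phi, x0=x0, bounds=[(lo, hi)], args=(xdata, ydata), iprint=0)
    val = phi(sol[0], xdata, ydata)
    if val < best_val:
        best_x, best_val = sol[0], val

print('overall minimum at x =', best_x, 'with (scaled) phi =', best_val)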

Solving a non-linear system of equations in Python using Newton's Method

I am trying to solve this exercise for college. I have already submitted the code below. However, I am not completely satisfied with it.
The task is to build an implementation of Newton's method to solve the following non-linear system of equations (the same system encoded in function_exercise below): x + y + z = 3, x² + y² + z² = 5, exp(x) + xy − xz = 1.
In order to learn Newton's method, besides the classes, I watched this YouTube video: https://www.youtube.com/watch?v=zPDp_ewoyhM
The person in the video explained the math behind Newton's method and did two iterations manually.
I wrote a Python implementation for that, and the code worked for the example in the video. However, the example in the video has 2 variables and my homework has 3 variables, so I adapted it.
That's the code:
import numpy as np

#### example from youtube https://www.youtube.com/watch?v=zPDp_ewoyhM
def jacobian_example(x, y):
    return [[1, 2], [2*x, 8*y]]

def function_example(x, y):
    return [(-1)*(x + (2*y) - 2), (-1)*((x**2) + (4*(y**2)) - 4)]

####################################################################
### now with the data from the exercise
def jacobian_exercise(x, y, z):
    return [[1, 1, 1], [2*x, 2*y, 2*z], [np.exp(x), x, -x]]

#print (jacobian_exercise(1,2,3))
jotinha = (jacobian_exercise(1, 2, 3))

def function_exercise(x, y, z):
    return [x + y + z - 3, (x**2) + (y**2) + (z**2) - 5, (np.exp(x)) + (x*y) - (x*z) - 1]

#print (function_exercise(1,2,3))
bezao = (function_exercise(1, 2, 3))

def x_delta_by_gauss(J, b):
    return np.linalg.solve(J, b)

print(x_delta_by_gauss(jotinha, bezao))
x_delta_test = x_delta_by_gauss(jotinha, bezao)

def x_plus_1(x_delta, x_previous):
    x_next = x_previous + x_delta
    return x_next

print(x_plus_1(x_delta_test, [1, 2, 3]))

def newton_method(x_init):
    first = x_init[0]
    second = x_init[1]
    third = x_init[2]
    jacobian = jacobian_exercise(first, second, third)
    vector_b_f_output = function_exercise(first, second, third)
    x_delta = x_delta_by_gauss(jacobian, vector_b_f_output)
    x_plus_1 = x_delta + x_init
    return x_plus_1

def iterative_newton(x_init):
    counter = 0
    x_old = x_init
    print("x_old", x_old)
    x_new = newton_method(x_old)
    print("x_new", x_new)
    diff = np.linalg.norm(x_old - x_new)
    print(diff)
    while diff > 0.000000000000000000000000000000000001:
        counter += 1
        print("x_old", x_old)
        x_new = newton_method(x_old)
        print("x_new", x_new)
        diff = np.linalg.norm(x_old - x_new)
        print(diff)
        x_old = x_new
    convergent_val = x_new
    print(counter)
    return convergent_val

#print (iterative_newton([1,2]))
print(iterative_newton([0, 1, 2]))
I am pretty sure this code is not totally wrong.
If I input the initial values as the vector [0,1,2], my code returns [0,1,2]. This is a correct answer; it solves the three equations above.
Moreover, if I input [0,2,1], a slightly different vector, the code also works and the answer it returns is also correct.
However, if I change my initial value to something like [1,2,3], I get a weird result: 527.7482, -1.63 and 2.14.
This result does not make any sense. Look at the first equation: if you plug in these values, you can easily see that (527) + (-1.63) + (2.14) does not equal 3.
If I change the input to something close to a correct solution, like [0.1,1.1,2.1], it also fails.
OK, Newton's method does not guarantee correct convergence. I know. It depends on the initial value, among other things.
Is my implementation wrong in any way? Or is the vector [1,2,3] just a "bad" initial value?
Thanks.
To make your code more readable, I would suggest reducing the number of function definitions. They obscure the relatively simple computations that are happening.
I wrote my own version:
def iter_newton(X, function, jacobian, imax=1e6, tol=1e-5):
    for i in range(int(imax)):
        J = jacobian(X)               # calculate jacobian J = df(X)/dY(X)
        Y = function(X)               # calculate function Y = f(X)
        dX = np.linalg.solve(J, Y)    # solve for increment from J dX = Y
        X -= dX                       # step X by dX
        if np.linalg.norm(dX) < tol:  # break if converged
            print('converged.')
            break
    return X
I don't find the same behavior:
>>>X_0 = np.array([1,2,3],dtype=float)
>>>iter_newton(X_0,function_exercise,jacobian_exercise)
converged.
array([9.26836542e-18, 2.00000000e+00, 1.00000000e+00])
even works for far worse guesses
>>>X_0 = np.array([13.4,-2,31],dtype=float)
>>>iter_newton(X_0,function_exercise,jacobian_exercise)
converged.
array([1.59654153e-18, 2.00000000e+00, 1.00000000e+00])
The people who answered this question helped me, but in the end modifying one line of code made everything work in my implementation.
Since I am using the approach described in the YouTube video I mentioned, I need to multiply the vector-valued function by (-1), which flips the sign of each element of the vector.
I did this for function_example. However, when I coded function_exercise, the one I needed to solve for my homework, I left out the negative sign.
Now it is fixed and works fully, even with very different starting vectors.
import numpy as np

#### example from youtube https://www.youtube.com/watch?v=zPDp_ewoyhM
def jacobian_example(x, y):
    return [[1, 2], [2*x, 8*y]]

def function_example(x, y):
    return [(-1)*(x + (2*y) - 2), (-1)*((x**2) + (4*(y**2)) - 4)]

####################################################################
### now with the data from the exercise
def jacobian_exercise(x, y, z):
    return [[1, 1, 1], [2*x, 2*y, 2*z], [np.exp(x), x, -x]]

#print (jacobian_exercise(1,2,3))
jotinha = (jacobian_exercise(1, 2, 3))

def function_exercise(x, y, z):
    return [(-1)*(x + y + z - 3), (-1)*((x**2) + (y**2) + (z**2) - 5), (-1)*((np.exp(x)) + (x*y) - (x*z) - 1)]

#print (function_exercise(1,2,3))
bezao = (function_exercise(1, 2, 3))

def x_delta_by_gauss(J, b):
    return np.linalg.solve(J, b)

print(x_delta_by_gauss(jotinha, bezao))
x_delta_test = x_delta_by_gauss(jotinha, bezao)

def x_plus_1(x_delta, x_previous):
    x_next = x_previous + x_delta
    return x_next

print(x_plus_1(x_delta_test, [1, 2, 3]))

def newton_method(x_init):
    first = x_init[0]
    second = x_init[1]
    third = x_init[2]
    jacobian = jacobian_exercise(first, second, third)
    vector_b_f_output = function_exercise(first, second, third)
    x_delta = x_delta_by_gauss(jacobian, vector_b_f_output)
    x_plus_1 = x_delta + x_init
    return x_plus_1

def iterative_newton(x_init):
    counter = 0
    x_old = x_init
    #print ("x_old", x_old)
    x_new = newton_method(x_old)
    #print ("x_new", x_new)
    diff = np.linalg.norm(x_old - x_new)
    #print (diff)
    while diff > 0.0000000000001:
        counter += 1
        #print ("x_old", x_old)
        x_new = newton_method(x_old)
        #print ("x_new", x_new)
        diff = np.linalg.norm(x_old - x_new)
        #print (diff)
        x_old = x_new
    convergent_val = x_new
    #print (counter)
    return convergent_val

#print (iterative_newton([1,2]))
print(list(map(float, (iterative_newton([100, 200, 3])))))
I tried to rewrite your code in a more Pythonic way. I hope it helps. Maybe the error is the sign of vector_b_f_output in x_delta_by_gauss(jacobian, vector_b_f_output), or some missing term in the Jacobian?
import numpy as np

# Example from the video:
# from youtube https://www.youtube.com/watch?v=zPDp_ewoyhM
def jacobian_example(xy):
    x, y = xy
    return [[1, 2],
            [2*x, 8*y]]

def function_example(xy):
    x, y = xy
    return [x + 2*y - 2, x**2 + 4*y**2 - 4]

# From the exercise:
def function_exercise(xyz):
    x, y, z = xyz
    return [x + y + z - 3,
            x**2 + y**2 + z**2 - 5,
            np.exp(x) + x*y - x*z - 1]

def jacobian_exercise(xyz):
    x, y, z = xyz
    return [[1, 1, 1],
            [2*x, 2*y, 2*z],
            [np.exp(x) + y - z, x, -x]]

def iterative_newton(fun, x_init, jacobian):
    max_iter = 50
    epsilon = 1e-8
    x_last = x_init
    for k in range(max_iter):
        # Solve J(xn)*( xn+1 - xn ) = -F(xn):
        J = np.array(jacobian(x_last))
        F = np.array(fun(x_last))
        diff = np.linalg.solve(J, -F)
        x_last = x_last + diff
        # Stop condition:
        if np.linalg.norm(diff) < epsilon:
            print('converged, iterations:', k)
            break
    else:  # only if the for loop ends 'naturally'
        print('not converged')
    return x_last

# For the exercise:
x_sol = iterative_newton(function_exercise, [2.0, 1.0, 2.0], jacobian_exercise)
print('solution exercise:', x_sol)
print('F(sol)', function_exercise(x_sol))

# For the example:
x_sol = iterative_newton(function_example, [1.0, 2.0], jacobian_example)
print('solution example:', x_sol)
print(function_example(x_sol))
If you want to verify using fsolve:
# Verification using fsolve from scipy
from scipy.optimize import fsolve

x0 = [2, 2, 2]
sol = fsolve(function_exercise, x0, fprime=jacobian_exercise, full_output=1)
print('solution exercise fsolve:', sol)

How can I check to see the number of iterations Newton's method takes to run?

So basically I want to grab the number of iterations it takes my Newton's method to find the root, and then apply that number to my color scheme: the more iterations it takes, the darker the color, and the fewer, the fuller the color.
So here's my code:
from numpy import *
import pylab as pl

def myffp(x):
    return x**3 - 1, 3*(x**2)

def newton(ffp, x, nits):
    for i in range(nits):
        #print i,x
        f, fp = ffp(x)
        x = x - f/fp
    return x

q = sqrt(3)/2

def leggo(xmin=-1, xmax=1, jmin=-1, jmax=1, pts=1000, nits=30):
    x = linspace(xmin, xmax, pts)
    y = linspace(jmin, jmax, pts)*complex(0, 1)
    x1, y1 = meshgrid(x, y)
    n = newton(myffp, x1 + y1, nits)  # **here is where I wanna see the number of iterations Newton's method takes to find my root**
    r1 = complex(1, 0)
    r2 = complex(-.5, q)
    r3 = complex(-.5, -q)
    data = zeros((pts, pts, 3))
    data[:,:,0] = abs(n - r1)  # **and apply it here**
    data[:,:,2] = abs(n - r2)
    data[:,:,1] = abs(n - r3)
    pl.show(pl.imshow(data))

leggo()
The main problem is finding the number of iterations. I can then figure out how to apply that to darkening the color, but for now the task is finding the number of iterations it takes for each value run through Newton's method.
Perhaps the simplest way is to just refactor your newton function so that it keeps track of the total iterations and then returns it (along with the result, of course), e.g.,
def newton(ffp, x, nits):
    c = 0                  # initialize iteration counter
    for i in range(nits):
        c += 1             # increment counter for each iteration
        f, fp = ffp(x)
        x = x - f/fp
    return x, c            # return the counter when the function is called
So in the main body of your code, change your call to newton like so:
res, tot_iter = newton(myffp, x, nits)
The number of iterations used in that call to newton is then stored in tot_iter.
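Note that in your case x is a whole grid of complex points, so a single counter only records how many loop passes were made in total. To get a per-point count (which is what the darkening scheme needs), a vectorized variant along these lines could be used; this is only a sketch, newton_counts is a made-up name, and it assumes the same from numpy import * as above:

def newton_counts(ffp, x, nits, tol=1e-6):
    # Track, for every grid point, the first iteration at which the Newton step
    # became smaller than tol; points that never converge keep the value nits.
    counts = zeros(x.shape, dtype=int) + nits
    active = ones(x.shape, dtype=bool)
    for i in range(nits):
        f, fp = ffp(x)
        step = f/fp
        x = x - step
        done = active & (abs(step) < tol)
        counts[done] = i + 1
        active &= ~done
    return x, counts

# Possible use inside leggo(): darken each channel by the per-pixel count, e.g.
# n, counts = newton_counts(myffp, x1 + y1, nits)
# shade = 1.0 - counts/float(nits)
# data[:,:,0] = abs(n - r1)*shade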
As an aside, your implementation of Newton's method seems to be incomplete.
For instance, it's missing a test against some convergence criterion.
Here's a simple implementation in Python that works:
def newtons_method(x_init, fn, max_iter=100):
    """
    returns: approx. val of root of the function passed in, fn;
    pass in: x_init, initial value for the root;
             max_iter, total iteration count not exceeded;
             fn, a function of the form:
                 def f(x): return x**3 - 2*x
    """
    x = x_init
    eps = .0001
    # set initial value different from x_init so at least 1 loop runs
    x_old = x + 10 * eps
    step = .1
    c = 0
    # (x - x_old) is the convergence criterion
    while (abs(x - x_old) > eps) and (c < max_iter):
        c += 1
        fval = fn(x)
        dfdx = (fn(x + step) - fn(x)) / step   # forward-difference approximation of f'(x)
        x_old = x
        x = x_old - fval / dfdx
    return x, c
The code you're currently using for newton() has a fixed number of iterations (nits - which is being passed in as 30), so the results would be kind of trivial and uninteresting.
It looks like you're trying to generate a Newton fractal -- the method you're trying to use is incorrect; the typical coloring mode is based on the output of the function, not the number of iterations. See the Wikipedia article for a full explanation.
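For instance, a root-based coloring along these lines (a sketch meant to replace the data assignments inside leggo, reusing n, r1, r2, r3 and pts from there) assigns each pixel a pure channel according to which root it ended up closest to:

roots = array([r1, r2, r3])
idx = argmin(abs(n[..., None] - roots), axis=-1)  # index (0, 1 or 2) of the nearest root per pixel
img = zeros((pts, pts, 3))
for k in range(3):
    img[:, :, k] = (idx == k)                     # one pure channel per root basin
pl.show(pl.imshow(img))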
