I have a differential equation:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# function that returns dy/dt
def model(y,t):
    k = 0.3
    dydt = -k * y
    return dydt
# initial condition
y0 = 5
# time points
t = np.linspace(0,10)
t1=2
# solve ODE
y = odeint(model,y0,t)
And I want to evaluate the solution of this differential equation on two different points. For example I want y(t=2) and y(t=3).
I can solve the problem in the following way:
Suppose that you need y(2). Then you define
t = np.linspace(0,2)
and just print
print(y[-1])
to get the value of y(2). However, I think this procedure is slow, since I need to do the same thing again to calculate y(3), and again for every additional point. Is there a faster way to do this?
isn't this just:
y = odeint(model, y0, [0, 2, 3])[1:]
i.e. the third parameter just specifies the values of t that you want back.
as an example of printing the results out, we'd just follow the above with:
print(f'y(2) = {y[0,0]}')
print(f'y(3) = {y[1,0]}')
which gives me:
y(2) = 2.7440582441900494
y(3) = 2.032848408317066
which matches the analytical solution:
5 * np.exp(-0.3 * np.array([2,3]))
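For completeness, a minimal, self-contained version of that check could look like this (same model, k and y0 as above; the np.allclose comparison is my own addition):
import numpy as np
from scipy.integrate import odeint

def model(y, t):
    k = 0.3
    return -k * y

y0 = 5
t = [0, 2, 3]                      # t[0] must be the initial time
y = odeint(model, y0, t)[1:, 0]    # drop the initial row and flatten the column

analytical = y0 * np.exp(-0.3 * np.array([2, 3]))
print(y)                           # numerical values at t=2 and t=3
print(analytical)                  # closed-form values y(t) = y0*exp(-k*t)
print(np.allclose(y, analytical))  # should print True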
You can get exactly what you want if you use solve_ivp with the dense_output option:
from scipy.integrate import solve_ivp
# function that returns dy/dt
def model(t,y):
    k = 0.3
    dydt = -k * y
    return dydt
# initial condition
y0 = [5]
# solve ODE
res = solve_ivp(model,[0,10],y0,dense_output=True)
y = lambda t: res.sol(t)[0]
for t in [2,3,3.4]:
    print(f'y({t}) = {y(t)}')
with the output
y(2) = 2.743316182689662
y(3) = 2.0315223673200338
y(3.4) = 1.802238620366918
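If you only ever need a fixed set of points, solve_ivp can also report exactly those points via the t_eval argument instead of dense_output; a minimal sketch, reusing model and y0 from above:
res = solve_ivp(model, [0, 10], y0, t_eval=[2, 3, 3.4])
print(res.t)     # the requested time points
print(res.y[0])  # the corresponding solution values
The points passed to t_eval have to lie inside the integration interval.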
I am trying to solve this exercise for college. I have already submitted the code below. However, I am not completely satisfied with it.
The task is to build an implementation of Newton's method to solve the following non-linear system of equations:
x + y + z = 3
x^2 + y^2 + z^2 = 5
exp(x) + x*y - x*z = 1
In order to learn Newton's method, besides the classes, I watched this YouTube video: https://www.youtube.com/watch?v=zPDp_ewoyhM
The guy on the video explained the math process behind Newton's method and did, manually, two iterations.
I did a Python implementation for that and the code went fine for the example on the video. Nonetheless, the example on the video deals with 2 variables and my homework deals with 3 variables. Hence, I adapted it.
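For reference, the update rule behind this (the standard Newton step, written here in my own notation) is to solve the linear system
J(x_n) * dx_n = -F(x_n)
for the step dx_n and then set x_(n+1) = x_n + dx_n; this is what the code below tries to do.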
That's the code:
import numpy as np
#### example from youtube https://www.youtube.com/watch?v=zPDp_ewoyhM
def jacobian_example(x,y):
    return [[1,2],[2*x,8*y]]

def function_example(x,y):
    return [(-1)*(x+(2*y)-2),(-1)*((x**2)+(4*(y**2))-4)]

####################################################################
### now with the data from the exercise
def jacobian_exercise(x,y,z):
    return [[1,1,1],[2*x,2*y,2*z],[np.exp(x),x,-x]]

#print (jacobian_exercise(1,2,3))
jotinha = (jacobian_exercise(1,2,3))

def function_exercise(x,y,z):
    return [x+y+z-3, (x**2)+(y**2)+(z**2)-5,(np.exp(x))+(x*y)-(x*z)-1]
#print (function_exercise(1,2,3))
bezao = (function_exercise(1,2,3))
def x_delta_by_gauss(J,b):
    return np.linalg.solve(J,b)

print (x_delta_by_gauss(jotinha, bezao))
x_delta_test = x_delta_by_gauss(jotinha,bezao)

def x_plus_1(x_delta,x_previous):
    x_next = x_previous + x_delta
    return x_next

print (x_plus_1(x_delta_test,[1,2,3]))

def newton_method(x_init):
    first = x_init[0]
    second = x_init[1]
    third = x_init[2]
    jacobian = jacobian_exercise(first, second, third)
    vector_b_f_output = function_exercise(first, second, third)
    x_delta = x_delta_by_gauss(jacobian, vector_b_f_output)
    x_plus_1 = x_delta + x_init
    return x_plus_1
def iterative_newton(x_init):
    counter = 0
    x_old = x_init
    print ("x_old", x_old)
    x_new = newton_method(x_old)
    print ("x_new", x_new)
    diff = np.linalg.norm(x_old-x_new)
    print (diff)
    while diff>0.000000000000000000000000000000000001:
        counter += 1
        print ("x_old", x_old)
        x_new = newton_method(x_old)
        print ("x_new", x_new)
        diff = np.linalg.norm(x_old-x_new)
        print (diff)
        x_old = x_new
    convergent_val = x_new
    print (counter)
    return convergent_val
#print (iterative_newton([1,2]))
print (iterative_newton([0,1,2]))
I am pretty sure this code is definitely not totally wrong.
If I input the initial values as the vector [0,1,2], my code returns [0,1,2] as the output. This is a correct answer; it solves the three equations above.
Moreover, if I input [0,2,1], a slightly different vector, the code also works and the answer it returns is also correct.
However, if I change my initial value to something like [1,2,3], I get a weird result: 527.7482, -1.63 and 2.14.
This result does not make any sense. Look at the first equation: if you plug in these values, you can easily see that (527.7482)+(-1.63)+(2.14) does not equal 3.
If I change the input value to something close to a correct solution, like [0.1,1.1,2.1], it also crashes.
OK, Newton's method does not guarantee convergence, I know; it depends on the initial value, among other things.
Is my implementation wrong in any way? Or is the vector [1,2,3] just a "bad" initial value?
Thanks.
To make your code more readable, I would suggest reducing the number of function definitions. They obscure the relatively simple computations which are happening.
I rewrote my own version:
def iter_newton(X,function,jacobian,imax = 1e6,tol = 1e-5):
    for i in range(int(imax)):
        J = jacobian(X)             # calculate Jacobian J = dF(X)/dX
        Y = function(X)             # calculate function Y = F(X)
        dX = np.linalg.solve(J,Y)   # solve for increment from J dX = Y
        X -= dX                     # step X by dX
        if np.linalg.norm(dX)<tol:  # break if converged
            print('converged.')
            break
    return X
I don't find the same behavior:
>>>X_0 = np.array([1,2,3],dtype=float)
>>>iter_newton(X_0,function_exercise,jacobian_exercise)
converged.
array([9.26836542e-18, 2.00000000e+00, 1.00000000e+00])
It even works for far worse guesses:
>>>X_0 = np.array([13.4,-2,31],dtype=float)
>>>iter_newton(X_0,function_exercise,jacobian_exercise)
converged.
array([1.59654153e-18, 2.00000000e+00, 1.00000000e+00])
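Note that this assumes function_exercise and jacobian_exercise each take a single array argument X rather than three separate scalars; if you keep your original three-argument definitions, a thin adapter (my addition, not part of your code) is enough:
iter_newton(X_0, lambda X: function_exercise(*X), lambda X: jacobian_exercise(*X))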
The guys that answered this question helped me. However, modifying one line of code made everything work in my implementation.
Since I am using the approach described in the YouTube video that I mentioned, I need to multiply the vector-valued function by (-1), which changes the sign of each element of the vector.
I did this for function_example. However, when I coded function_exercise, the one I needed to solve for my homework, I left out the negative sign. I missed it.
Now, it is fixed and it works fully, even with very diverse starting vectors.
import numpy as np
#### example from youtube https://www.youtube.com/watch?v=zPDp_ewoyhM
def jacobian_example(x,y):
    return [[1,2],[2*x,8*y]]

def function_example(x,y):
    return [(-1)*(x+(2*y)-2),(-1)*((x**2)+(4*(y**2))-4)]

####################################################################
### now with the data from the exercise
def jacobian_exercise(x,y,z):
    return [[1,1,1],[2*x,2*y,2*z],[np.exp(x),x,-x]]

#print (jacobian_exercise(1,2,3))
jotinha = (jacobian_exercise(1,2,3))

def function_exercise(x,y,z):
    return [(-1)*(x+y+z-3),(-1)*((x**2)+(y**2)+(z**2)-5),(-1)*((np.exp(x))+(x*y)-(x*z)-1)]
#print (function_exercise(1,2,3))
bezao = (function_exercise(1,2,3))
def x_delta_by_gauss(J,b):
    return np.linalg.solve(J,b)

print (x_delta_by_gauss(jotinha, bezao))
x_delta_test = x_delta_by_gauss(jotinha,bezao)

def x_plus_1(x_delta,x_previous):
    x_next = x_previous + x_delta
    return x_next

print (x_plus_1(x_delta_test,[1,2,3]))

def newton_method(x_init):
    first = x_init[0]
    second = x_init[1]
    third = x_init[2]
    jacobian = jacobian_exercise(first, second, third)
    vector_b_f_output = function_exercise(first, second, third)
    x_delta = x_delta_by_gauss(jacobian, vector_b_f_output)
    x_plus_1 = x_delta + x_init
    return x_plus_1
def iterative_newton(x_init):
    counter = 0
    x_old = x_init
    #print ("x_old", x_old)
    x_new = newton_method(x_old)
    #print ("x_new", x_new)
    diff = np.linalg.norm(x_old-x_new)
    #print (diff)
    while diff>0.0000000000001:
        counter += 1
        #print ("x_old", x_old)
        x_new = newton_method(x_old)
        #print ("x_new", x_new)
        diff = np.linalg.norm(x_old-x_new)
        #print (diff)
        x_old = x_new
    convergent_val = x_new
    #print (counter)
    return convergent_val
#print (iterative_newton([1,2]))
print (list(map(float,(iterative_newton([100,200,3])))))
I tried to rewrite your code in a more Pythonic way. I hope it helps. Maybe the error is the sign of vector_b_f_output in x_delta_by_gauss(jacobian, vector_b_f_output), or some missing term in the Jacobian.
import numpy as np
# Example from the video:
# from youtube https://www.youtube.com/watch?v=zPDp_ewoyhM
def jacobian_example(xy):
    x, y = xy
    return [[1, 2],
            [2*x, 8*y]]

def function_example(xy):
    x, y = xy
    return [x + 2*y - 2, x**2 + 4*y**2 - 4]
# From the exercise:
def function_exercise(xyz):
    x, y, z = xyz
    return [x + y + z - 3,
            x**2 + y**2 + z**2 - 5,
            np.exp(x) + x*y - x*z - 1]

def jacobian_exercise(xyz):
    x, y, z = xyz
    return [[1, 1, 1],
            [2*x, 2*y, 2*z],
            [np.exp(x) + y - z, x, -x]]
def iterative_newton(fun, x_init, jacobian):
    max_iter = 50
    epsilon = 1e-8

    x_last = x_init

    for k in range(max_iter):
        # Solve J(xn)*( xn+1 - xn ) = -F(xn):
        J = np.array(jacobian(x_last))
        F = np.array(fun(x_last))

        diff = np.linalg.solve( J, -F )
        x_last = x_last + diff

        # Stop condition:
        if np.linalg.norm(diff) < epsilon:
            print('convergence!, nb iter:', k )
            break

    else: # only if the for loop ends 'naturally'
        print('not converged')

    return x_last
# For the exercise:
x_sol = iterative_newton(function_exercise, [2.0,1.0,2.0], jacobian_exercise)
print('solution exercise:', x_sol )
print('F(sol)', function_exercise(x_sol) )

# For the example:
x_sol = iterative_newton(function_example, [1.0,2.0], jacobian_example)
print('solution example:', x_sol )
print( function_example(x_sol) )
If you want to verify using fsolve:
# Verification using fsolve from SciPy
from scipy.optimize import fsolve

x0 = [2, 2, 2]
sol = fsolve(function_exercise, x0, fprime=jacobian_exercise, full_output=1)
print('solution exercise fsolve:', sol)
I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u in the code. In the code shown below, I arbitrarily provided u = 5.
2 dx(t)/dt = -x(t) + u(t)
5 dy(t)/dt = -y(t) + x(t)
u = 2 S(t-5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

def model(x,t,u):
    dxdt = (-x+u)/2
    return dxdt

def model2(y,x,t):
    dydt = -(y+x)/5
    return dydt
x0 = 0
y0 = 0
u = 5
t = np.linspace(0,40)
x = odeint(model,x0,t,args=(u,))
y = odeint(model2,y0,t,args=(u,))
plt.plot(t,x,'r-')
plt.plot(t,y,'b*')
plt.show()
I do not know the SciPy library very well, but based on the example in the documentation I would try something like this:
def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT regarding the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x               # Split the state into two variables
    u = K if t >= PT else 0  # This is the system input
    # Here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx
x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0,40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of it. You can define a step function with a simple lambda and then simply capture it from the outer scope without even passing it to your function. Because sometimes that won't be the case, we'll be explicit and pass it.
Your next problem is the order of arguments in the function you integrate. As per the docs, its signature must be func(y, t, ...): first the state, then the time, then any extra arguments passed via args. So for the first part we get:
u = lambda t : 2 if t>5 else 0
def model(x,t,u):
    dxdt = (-x+u(t))/2
    return dxdt
x0 = 0
y0 = 0
t = np.linspace(0,40)
x = odeint(model,x0,t,args=(u,))
Moving to the next part, the trouble is, you can't feed x as an arg to y because it's a vector of values for x(t) for particular times and so y+x doesn't make sense in the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires that you interpolate the x values using the specific time values you are interested in (which scipy can handle, no problem):
from scipy.interpolate import interp1d
xfunc = interp1d(t.flatten(),x.flatten(),fill_value="extrapolate")
# flatten because the shapes don't quite line up; extrapolate because odeint will step outside the time range

def model2(y,t,x):
    dydt = -(y+x(t))/5
    return dydt

y = odeint(model2,y0,t,args=(xfunc,))
Then you get the step-response plot (not reproduced here).
Sven's answer is more idiomatic for vector programming like scipy/numpy, but I hope my answer provides a clearer path from what you know already to a working solution.
I would like to reduce the computation time for the code posted below. In essence, the code below calculates the array Tf as the output of the following nested loop:
Af = lambda x: Approximationf(f, x)

for idxp, prior in enumerate(grid_prior):
    for idxy, y in enumerate(grid_y):
        posterior = lambda yPrime: updated_posterior(prior, y, yPrime)
        integrateL = integrate(lambda z: Af(np.array([y*np.exp(mu[0])*z,
                                                      posterior(y*np.exp(mu[0]) * z)])))
        integrateH = integrate(lambda z: Af(np.array([y*np.exp(mu[1])*z,
                                                      posterior(y * np.exp(mu[1])*z)])))
        Tf[idxy, idxp] = (h[idxy, idxp] +
                          beta * ((prior * integrateL) +
                                  (1-prior)*integrateH))
The objects posterior, integrate and Af are functions that are repeatedly called while iterating over the loop. The function posterior calculates a scalar called posterior. The function Af approximates the function f at sample points x and passes the result on to the function integrate, which calculates the conditional expectation of the function f.
The code posted below is a simplification of a more difficult problem. Instead of running the nested loop once, I have to run it multiple times to solve a fixed-point problem. The problem is initialized with an arbitrary function f, and an array Tf is created. This array is then used in the next iteration over the nested loop to calculate another array Tf. The process continues until convergence.
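Schematically, the outer fixed-point iteration looks roughly like the sketch below, where apply_T is just a placeholder name for the nested loop wrapped into a function (it is not part of the code I post):
f_old = f.copy()
for it in range(500):                      # cap on the number of fixed-point iterations
    Tf = apply_T(f_old)                    # the nested loop below, as a function of f_old
    if np.max(np.abs(Tf - f_old)) < 1e-8:  # stop once Tf is (numerically) a fixed point
        break
    f_old = Tf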
I decided not to report results from the cProfile module. If I profile only a single pass over the nested loop (without iterating to convergence), a lot of internal Python executions take a relatively long time. However, when iterating until convergence, these internal executions lose their relative importance and are relegated to lower positions in the cProfile output.
I tried to mimic different suggestions for lowering the computation time of loops that I found online for slightly modified problems. Unfortunately, I couldn't make them work and could not really figure out a common approach to tackle these problems. Does somebody have an idea how to lower the computation time of this loop? I am grateful for any help!
import numpy as np
from scipy import interpolate
from scipy.stats import lognorm
from scipy.integrate import fixed_quad
# == The following lines define the parameters for the problem == #
gamma, beta, sigma, mu = 2, 0.95, 0.0255, np.array([0.0113, -0.0016])
grid_y, grid_prior = np.linspace(7, 10, 15), np.linspace(0, 1, 5)
int_min, int_max = np.exp(- 7 * sigma), np.exp(+ 7 * sigma)
phi = lognorm(sigma)
f = np.array([[ 1.29824564, 1.29161017, 1.28379398, 1.2676886, 1.15320819],
[ 1.26290108, 1.26147364, 1.24755837, 1.23819851, 1.11912802],
[ 1.22847276, 1.23013194, 1.22128198, 1.20996971, 1.0864706 ],
[ 1.19528104, 1.19645792, 1.19056084, 1.17980572, 1.05532966],
[ 1.16344832, 1.16279841, 1.15997191, 1.15169942, 1.02564429],
[ 1.13301675, 1.13109952, 1.12883038, 1.1236645, 0.99730795],
[ 1.10398195, 1.10125013, 1.0988554, 1.09612933, 0.97019688],
[ 1.07630046, 1.07356297, 1.07126087, 1.06878758, 0.94417658],
[ 1.04989686, 1.04728542, 1.04514962, 1.04289665, 0.91910765],
[ 1.02467087, 1.0221532, 1.02011384, 1.01797238, 0.89485162],
[ 1.00050447, 0.99795025, 0.99576917, 0.99330549, 0.87127677],
[ 0.97726849, 0.97443288, 0.97190614, 0.96861352, 0.84826362],
[ 0.95482612, 0.94783816, 0.94340077, 0.93753641, 0.82569922],
[ 0.93302433, 0.91985497, 0.9059118, 0.88895196, 0.80348449],
[ 0.91165997, 0.88253486, 0.86126688, 0.84769975, 0.78147382]])
# == Calculate function h, Used in the loop below == #
E0 = np.exp((1-gamma)*mu + (1-gamma)**2*sigma**2/2)
h = np.outer(beta*grid_y**(1-gamma), grid_prior*E0[0] + (1-grid_prior)*E0[1])
def integrate(g):
    """
    This function is repeatedly called in the loop below
    """
    integrand = lambda z: g(z) * phi.pdf(z)
    result = fixed_quad(integrand, int_min, int_max, n=15)[0]
    return result
def Approximationf(f, x):
    """
    This function approximates the function f and is repeatedly called in
    the loop
    """
    # == simplify notation == #
    fApprox = np.empty((x.shape[1]))
    lower, middle = (x[0] < grid_y[0]), (x[0] >= grid_y[0]) & (x[0] <= grid_y[-1])
    upper = (x[0] > grid_y[-1])

    # == Calculate Polynomial == #
    y_tile = np.tile(grid_y, len(grid_prior))
    prior_repeat = np.repeat(grid_prior, len(grid_y))
    s = interpolate.SmoothBivariateSpline(y_tile, prior_repeat,
                                          f.T.flatten(), kx=5, ky=5)

    # == interpolation == #
    fApprox[middle] = s(x[0, middle], x[1, middle])[:, 0]

    # == Extrapolation == #
    if any(lower):
        s0 = s(lower[lower]*grid_y[0], x[1, lower])[:, 0]
        s1 = s(lower[lower]*grid_y[1], x[1, lower])[:, 0]
        slope_lower = (s0 - s1)/(grid_y[0] - grid_y[1])
        fApprox[lower] = s0 + slope_lower*(x[0, lower] - grid_y[0])
    if any(upper):
        sM1 = s(upper[upper]*grid_y[-1], x[1, upper])[:, 0]
        sM2 = s(upper[upper]*grid_y[-2], x[1, upper])[:, 0]
        slope_upper = (sM1 - sM2)/(grid_y[-1] - grid_y[-2])
        fApprox[upper] = sM1 + slope_upper*(x[0, upper] - grid_y[-1])

    return fApprox
def updated_posterior(prior, y, yPrime):
    """
    This function calculates the posterior weights put on each distribution.
    It is the third function repeatedly called in the loop below.
    """
    z_0 = yPrime/(y * np.exp(mu[0]))
    z_1 = yPrime/(y * np.exp(mu[1]))
    l0, l1 = phi.pdf(z_0), phi.pdf(z_1)
    posterior = l0*prior / (l0*prior + l1*(1-prior))
    return posterior
Tf = np.empty_like(f)
Af = lambda x: Approximationf(f, x)
# == Apply the T operator to f == #
for idxp, prior in enumerate(grid_prior):
    for idxy, y in enumerate(grid_y):
        posterior = lambda yPrime: updated_posterior(prior, y, yPrime)
        integrateL = integrate(lambda z: Af(np.array([y*np.exp(mu[0])*z,
                                                      posterior(y*np.exp(mu[0]) * z)])))
        integrateH = integrate(lambda z: Af(np.array([y*np.exp(mu[1])*z,
                                                      posterior(y * np.exp(mu[1])*z)])))
        Tf[idxy, idxp] = (h[idxy, idxp] +
                          beta * ((prior * integrateL) +
                                  (1-prior)*integrateH))
Some experience with multiprocessing
Following reptilicus' comment, I decided to investigate how to use the multiprocessing module. My idea was to begin by parallelizing the computation of the integrateL array. To do so, I fixed the outer loop to prior = 0.5 and wanted to iterate over the inner loop, grid_y. However, I still have to take into consideration that integrateL is a lambda function in z. I tried to follow the advice of the Stack Overflow question "How to let Pool.map take a lambda function" and wrote the following code:
from multiprocessing import Pool

prior = 0.5
Af = lambda x: Approximationf(f, x)

class Iteration(object):
    def __init__(self,state):
        self.y = state
    def __call__(self,z):
        Af(np.array([self.y*np.exp(mu[0])*z,
                     updated_posterior(prior,
                     self.y,self.y*np.exp(mu[0])*z)]))

with Pool(processes=4) as pool:
    out = pool.map(Iteration(y), np.nditer(grid_y))
Unfortunately, Python returns the following upon running the program:
IndexError: tuple index out of range
At first sight, this smells like a trivial error, but I cannot remedy it. Does somebody have an idea how to tackle the problem? Again, I'm grateful for any advice I receive!
I would target that nested loop, something like this. This is pseudo-code, but it should get you started.
import multiprocessing

def do_calc(idxp, idxy, y, prior):
    posterior = lambda yPrime: updated_posterior(prior, y, yPrime)
    integrateL = integrate(lambda z: Af(np.array([y*np.exp(mu[0])*z,
                                                  posterior(y*np.exp(mu[0]) * z)])))
    integrateH = integrate(lambda z: Af(np.array([y*np.exp(mu[1])*z,
                                                  posterior(y * np.exp(mu[1])*z)])))
    return (idxp, idxy, prior, integrateL, integrateH)

pool = multiprocessing.Pool(8)  # or however many cores you have

results = []
# This is the part that I would try to parallelize
for idxp, prior in enumerate(grid_prior):
    for idxy, y in enumerate(grid_y):
        results.append(pool.apply_async(do_calc, args=(idxp, idxy, y, prior)))

pool.close()
pool.join()

results = [r.get() for r in results]

for idxp, idxy, prior, integrateL, integrateH in results:
    Tf[idxy, idxp] = (h[idxy, idxp] +
                      beta * ((prior * integrateL) +
                              (1-prior)*integrateH))
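One practical caveat, not specific to your code: on platforms that start worker processes by spawning (Windows, and macOS on recent Python versions), the pool creation and the job submission should live under a main guard so the workers can import the module without re-running it:
if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    # ... submit jobs and collect the results as above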
So basically I want to grab the number of iterations it takes my Newton's method to find the root, and then apply that number to my color scheme: the more iterations, the darker the color, and the fewer, the fuller the color.
so here's my code
from numpy import *
import pylab as pl
def myffp(x):
    return x**3 - 1, 3*(x**2)

def newton( ffp, x, nits):
    for i in range(nits):
        #print i,x
        f,fp = ffp(x)
        x = x - f/fp
    return x
q = sqrt(3)/2
def leggo(xmin=-1,xmax=1,jmin=-1,jmax=1,pts=1000,nits=30):
    x = linspace(xmin, xmax, pts)
    y = linspace(jmin, jmax, pts)*complex(0,1)
    x1,y1 = meshgrid(x,y)
    n = newton(myffp,x1+y1,nits) #**here is where i wanna see the number of iterations newton's method takes to find my root**
    r1 = complex(1,0)
    r2 = complex(-.5, q)
    r3 = complex(-.5,-q)
    data = zeros((pts,pts,3))
    data[:,:,0] = abs(n-r1) #**and apply it here**
    data[:,:,2] = abs(n-r2)
    data[:,:,1] = abs(n-r3)
    pl.show(pl.imshow(data))
leggo()
The main problem is finding the number of iterations; I can then figure out how to apply that to darkening the color, but for now it's just about finding the number of iterations it takes for each value run through Newton's method.
Perhaps the simplest way is to just refactor your newton function so that it keeps track of the total iterations and then returns it (along with the result, of course), e.g.,
def newton( ffp, x, nits):
    c = 0  # initialize iteration counter
    for i in range(nits):
        c += 1  # increment counter for each iteration
        f, fp = ffp(x)
        x = x - f/fp
    return x, c  # return the counter when the function is called
so in the main body of your code, change your call to newton, like so:
res, tot_iter = newton(myffp, x, nits)
the number of iterations in the last call to newton is stored in tot_iter
As an aside, your implementation of Newton's method seems to be incomplete; for instance, it's missing a test against a convergence criterion.
Here's a simple implementation in python that works:
def newtons_method(x_init, fn, max_iter=100):
    """
    returns: approx. val of root of the function passed in, fn;
    pass in: x_init, initial value for the root;
             max_iter, total iteration count not exceeded;
             fn, a function of the form:
                 def f(x): return x**3 - 2*x
    """
    x = x_init
    eps = .0001
    # set initial value different from x_init so at least 1 loop runs
    x_old = x + 10 * eps
    step = .1
    c = 0

    # (x - x_old) is the convergence criterion
    while (abs(x - x_old) > eps) and (c < max_iter):
        c += 1
        fval = fn(x)
        # forward-difference approximation of the derivative
        dfdx = (fn(x + step) - fn(x)) / step
        x_old = x
        x = x_old - fval / dfdx

    return x, c
The code you're currently using for newton() has a fixed number of iterations (nits - which is being passed in as 30), so the results would be kind of trivial and uninteresting.
It looks like you're trying to generate a Newton fractal -- the method you're trying to use is incorrect; the typical coloring mode is based on the output of the function, not the number of iterations. See the Wikipedia article for a full explanation.
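If you do want a per-point iteration count anyway, one way to get it while keeping the vectorized grid from leggo is to record, for every grid point, the first iteration at which the Newton step drops below a tolerance. This is only a sketch; the tolerance and the shading factor are arbitrary choices of mine:
import numpy as np

def newton_with_counts(ffp, x, nits, tol=1e-6):
    # counts[i, j] = first iteration at which point (i, j) converged, or nits if it never did
    counts = np.full(x.shape, nits, dtype=int)
    for i in range(nits):
        f, fp = ffp(x)
        step = f / fp
        x = x - step
        newly = (np.abs(step) < tol) & (counts == nits)
        counts[newly] = i
    return x, counts

# In leggo you could then replace the call to newton with
#     n, counts = newton_with_counts(myffp, x1 + y1, nits)
# and darken each channel by a factor such as
#     data *= (1 - counts/nits)[:, :, None]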