Need help computing a derivative and integral using NumPy - Python

I need help computing the derivative and integral of a function using the finite difference method and NumPy, without using loops.
The whole task: tabulate the Gaussian function f(x) = (1./(sqrt(2.*pi)*s))*e**(-0.5*((x-m)/s)**2) on the interval [-10, 10] for m = 0 and s = [0.5, 5]. Compute the derivative and integral of the function using the finite difference method without loops. Create plots of the function and its derivative. Use NumPy and Matplotlib.
Here's the beginning of the program:

import numpy as np
from math import sqrt, pi, e

def f(x, s, m):
    return (1./(sqrt(2.*pi)*s))*e**(-0.5*((x-m)/s)**2)

def main():
    m = 0
    s = np.linspace(0.5, 5, 3)
    x = np.linspace(-10, 10, 20)
    for i in range(3):
        print('s = ', s[i])
        for j in range(20):
            print('x = ', x[j], ', y = ', f(x[j], s[i], m))

By using NumPy arrays you can apply that operation directly in algebraic notation:
result = (1./(np.sqrt(2.*np.pi)*s))*np.exp(-0.5*((x-m)/s)**2)
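The same expression also tabulates the function for every x at once, and with a small reshape for every s as well, which is what replaces the double loop in the question. A short sketch, assuming the arrays from main():

import numpy as np

m = 0
s = np.linspace(0.5, 5, 3)
x = np.linspace(-10, 10, 20)

# one value of s: y has shape (20,)
y = (1./(np.sqrt(2.*np.pi)*s[0]))*np.exp(-0.5*((x - m)/s[0])**2)

# all values of s at once: make x a column, y_all has shape (20, 3)
y_all = (1./(np.sqrt(2.*np.pi)*s))*np.exp(-0.5*((x[:, None] - m)/s)**2)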

The simplest way (without using SciPy) seems to me to be a direct sum for the integral and the central difference method for the derivative:
import numpy as np
import pylab

def gaussian(x, s, m):
    return 1./(np.sqrt(2.*np.pi)*s) * np.exp(-0.5*((x-m)/s)**2)

m = 0
s = np.linspace(0.5, 5, 3)
x, dx = np.linspace(-10, 10, 1000, retstep=True)
x = x[:, np.newaxis]    # column vector, so it broadcasts against the three values of s

y = gaussian(x, s, m)

# central difference for the derivative
h = 1.e-6
dydx = (gaussian(x+h, s, m) - gaussian(x-h, s, m))/2/h

# simple Riemann sum for the integral over [-10, 10]
int_y = np.sum(gaussian(x, s, m), axis=0) * dx
print(int_y)

pylab.plot(x, y)
pylab.plot(x, dydx)
pylab.show()
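For reference, NumPy also has helpers that do the same jobs on tabulated values; a short sketch reusing the x, dx and y arrays defined above (my addition, not part of the original answer):

# derivative of the tabulated values via central differences
dydx_tab = np.gradient(y, dx, axis=0)

# integral over [-10, 10] via the trapezoidal rule
int_trap = np.trapz(y, dx=dx, axis=0)
print(int_trap)  # close to 1, except for the widest Gaussian whose tails extend past +/-10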

Related

How to resolve integration function not integrating correctly?

I am trying to build a few simple operations, such as derivative and integral functions that operate on lambda functions, because sympy and scipy were struggling to integrate some of the things I was passing to them.
The derivative function gives me no issues and appears to return the derivative of the input function when plotted, but the integral function does not behave the same way and does not plot the correct integral of the input.
import matplotlib.pyplot as plt
import numpy as np
from phys_func import func
sr = [-10,10]
x = np.linspace(sr[0],sr[1], 100)
F = lambda x: x**2
f = func.I(F,x)
plt.plot(x,F(x), label = 'F(x)')
plt.plot(x,f(x), label = 'f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
The integration function that does not work:
def I(F, x):
    dx = (x[len(x)-1] - x[0])/len(x)
    return lambda x: 0.5*(F(x+dx) + F(x))*dx
The derivative function that works:
def d(f, x):
    dx = (x[len(x)-1] - x[0])/len(x)
    return lambda x: (f(x+dx) - f(x))/dx
Can anyone lend me a hand please?
You cannot find the antiderivative of a function numerically without knowing its value at a single point. If you fix the value of the antiderivative at x = a to be 0 (and the given function is continuous on [a, x]), then you can use definite integrals. For this function, let us take a = 0 (i.e. 0 is a root of the antiderivative), so you can compute a definite integral from 0 to x. Also, your integration function is wrong: it only evaluates a single trapezoid. You need to sum all the 0.5*(F(x+dx) + F(x))*dx elements from 0 to x to get the definite integral.
You can modify I(F, x) as follows:

def I(F1):
    # N is the number of intervals
    return lambda x, N: np.sum(0.5*(F1(np.linspace(0, x, num=N) + x/N)
                               + F1(np.linspace(0, x, num=N)))*(x/N))
In [1]: import numpy as np
In [2]: import matplotlib.pyplot as plt
In [3]: def I(F1):  # N is the number of intervals
   ...:     return lambda x, N: np.sum(0.5*(F1(np.linspace(0, x, num=N) + x/N)
   ...:                                + F1(np.linspace(0, x, num=N)))*(x/N))
   ...:
In [4]: F = lambda x: x**2
In [5]: x_ran = np.linspace(-10, 10, 100)
In [6]: y = I(F)
In [7]: y_ran = []
In [8]: for i in x_ran:
   ...:     y_ran.append(y(i, 100))
   ...:
In [9]: plt.plot(x_ran, y_ran)
In [10]: plt.show()
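If you want the whole antiderivative curve in one call instead of looping over x_ran, a cumulative trapezoidal sum works too. A minimal sketch of that variant (my own addition, not the answerer's code), anchored so the antiderivative is 0 at x = 0:

import numpy as np
import matplotlib.pyplot as plt

F = lambda x: x**2
x = np.linspace(-10, 10, 100)
dx = x[1] - x[0]

# cumulative trapezoidal sum over the grid
y = np.concatenate(([0.], np.cumsum(0.5*(F(x[1:]) + F(x[:-1]))*dx)))
y -= np.interp(0., x, y)   # shift so the curve passes through (0, 0)

plt.plot(x, y, label='numerical antiderivative of x**2')
plt.plot(x, x**3/3, '--', label='x**3/3')
plt.legend()
plt.show()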

How can I control odeint to stop integration when the result reaches a threshold?

Here is my code.
import math
import numpy as np
from scipy.integrate import odeint

# Constants
R0 = 1.475
gamma = 2.
ScaleMeVfm3toEskm3 = 8.92*np.power(10., -7.)

def EOSe(p):
    return np.power((p/450.785), (1./gamma))

def M(p, r):
    return (4./3.)*np.pi*np.power(r, 3.)*p

# function that returns dz/dr
def model(z, r):
    p, m = z
    dpdr = -((R0*EOSe(p)*m)/(np.power(r, 2.)))*(1 + (p/EOSe(p)))*(1 + ((4*math.pi*(np.power(r, 3))*p)/(m)))*((1 - ((2*R0)*m)/(r))**(-1.))
    dmdr = 4.*math.pi*(r**2.)*EOSe(p)
    dzdr = [dpdr, dmdr]
    return dzdr

# initial condition
r0 = 10.**-12.
p0 = 10**-6.
z0 = [p0, M(p0, r0)]

# radius
r = np.linspace(r0, 15, 100000)

# solve ODE
z = odeint(model, z0, r)
The result of z[:,0] keeps decreasing, as I expected, but I only want positive values. If you run the code and print(z[69306]), it shows [2.89636405e-11 5.46983202e-01]. That is the last point; I want odeint to stop the integration there.
Of course, the provided code shows
RuntimeWarning: invalid value encountered in power
return np.power((p/450.785),(1./gamma))
because p starts to become negative. For all further points, odeint yields [nan nan].
However, I could use np.nanmin() to find the minimum of z[:,0] that is not nan. But I have a whole set of p0 values for my work, so I would need to call odeint in a loop like

P = np.linspace(10**-8., 10**-2., 10000)
for p0 in P:
    # the code for solving the ODE provided above

which takes a long time. I think the execution time would be reduced if I could just stop the integration before z[:,0] becomes negative.
Here is the modified code using solve_ivp:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Constants
R0 = 1.475
gamma = 2.

def EOSe(p):
    # absolute value avoids "invalid value encountered in power" when p < 0
    return np.power(np.abs(p)/450.785, 1./gamma)

def M(p, r):
    return (4./3.)*np.pi*np.power(r, 3.)*p

# function that returns dz/dr
# note: the argument order is reversed compared to `odeint`
def model(r, z):
    p, m = z
    dpdr = -R0*EOSe(p)*m/r**2*(1 + p/EOSe(p))*(1 + 4*np.pi*r**3*p/m)*(1 - 2*R0*m/r)**(-1)
    dmdr = 4*np.pi*r**2*EOSe(p)
    dzdr = [dpdr, dmdr]
    return dzdr

# initial condition
r0 = 1e-3
r_max = 50
p0 = 1e-6
z0 = [p0, M(p0, r0)]

# Define the event function
# from the doc: "The solver will find an accurate value
# of t at which event(t, y(t)) = 0 using a root-finding algorithm."
def stop_condition(r, z):
    return z[0]
stop_condition.terminal = True

# solve ODE
r_span = (r0, r_max)
sol = solve_ivp(model, r_span, z0, events=stop_condition)

print(sol.message)
print('last p, m = ', sol.y[:, -1], 'for r_event=', sol.t_events[0][0])

r_sol = sol.t
p_sol = sol.y[0, :]
m_sol = sol.y[1, :]

# Graph
plt.subplot(2, 1, 1)
plt.plot(r_sol, p_sol, '.-b')
plt.xlabel('r'); plt.ylabel('p')
plt.subplot(2, 1, 2)
plt.plot(r_sol, m_sol, '.-r')
plt.xlabel('r'); plt.ylabel('m')
plt.show()
Note that using events does not by itself prevent the warning caused by negative p: the solver is going to evaluate the model for p < 0 anyway. A solution is to take the absolute value of p inside the power (as in the code above). Using np.sign(p)*np.power(np.abs(p)/450.785, 1./gamma) gives an interesting result too.
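For reference, that signed variant would look like this (my reading of the one-liner above, using the gamma defined in the answer's code; it is not part of the original answer):

def EOSe_signed(p):
    # keep the sign of p instead of folding negative pressures back to positive
    return np.sign(p)*np.power(np.abs(p)/450.785, 1./gamma)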

How do I convert the x and y values in polar form from these coupled ODEs to Cartesian form and graph them?

I have written this code to model the motion of a spring pendulum
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt

def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415+x)*(dydt)**2 - 50/1.006*x + 9.81*cos(y)
    dy2dt2 = (-9.81*1.006*sin(y) - 2*(dxdt)*(dydt))/(0.415+x)
    return np.array([x, y, dx2dt2, dy2dt2])

init = array([0, pi/18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)

def plot(h, t):
    n, u, x, y = h
    n = (0.4+x)*sin(y)
    u = (0.4+x)*cos(y)
    return np.array([n, u, x, y])

init2 = array([0.069459271, 0.393923101, 0, pi/18])
time2 = np.linspace(0.0, 10.0, 1000)
sol2 = odeint(plot, init2, time2)

plt.xlabel("x")
plt.ylabel("y")
plt.plot(sol2[:, 0], sol2[:, 1], label='hi')
plt.legend()
plt.show()
where x and y are the two solution variables. I'm trying to convert x and y from polar form to the Cartesian coordinates n (x-axis) and u (y-axis) and then graph u against n. However, when I plot the result of the code above it gives me:
Instead, I should be getting an image somewhat similar to this:
The first part of the code, from def deriv(z, t): to sol = odeint(deriv, ...), is where the values of x and y are generated; using those I can then turn them into rectangular coordinates and graph them. How do I change my code to do this? I'm new to Python, so I might not understand some of the terminology. Thank you!
The first solution should give you the expected result, but there is a mistake in the implementation of the ODE.
The function you pass to odeint should return an array containing the right-hand sides of a system of first-order differential equations.
In your case, because deriv returns [x, y, dx2dt2, dy2dt2], what you are actually solving is

dx/dt = x
dy/dt = y
d(dxdt)/dt = dx2dt2
d(dydt)/dt = dy2dt2

while instead you should be solving

dx/dt = dxdt
dy/dt = dydt
d(dxdt)/dt = dx2dt2
d(dydt)/dt = dy2dt2
In order to do so change your code to this
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt

def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x)*(dydt)**2 - 50/1.006*x + 9.81*cos(y)
    dy2dt2 = (-9.81*1.006*sin(y) - 2*(dxdt)*(dydt))/(0.415 + x)
    # return the derivatives of [x, y, dxdt, dydt]
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])

init = array([0, pi/18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)

plt.plot(sol[:, 0], sol[:, 1], label='hi')
plt.show()
The second part of the code looks like you are trying to do a change of coordinates.
I'm not sure why you try to solve the ODE again instead of just doing this:
x = sol[:, 0]
y = sol[:, 1]

def plot(h):
    x, y = h
    n = (0.4 + x)*sin(y)
    u = (0.4 + x)*cos(y)
    return np.array([n, u])

n, u = plot((x, y))
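To actually look at the trajectory in Cartesian coordinates you would then plot u against n, for example (my addition, not part of the original answer):

plt.plot(n, u)
plt.xlabel("n")
plt.ylabel("u")
plt.show()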
As of now, what you are doing in that second odeint call is solving this system:

dn/dt = (0.4 + x)*sin(y)
du/dt = (0.4 + x)*cos(y)
dx/dt = x
dy/dt = y

which leads to x = e^t and y = e^t, and hence n' = (0.4 + e^t)*sin(e^t) and u' = (0.4 + e^t)*cos(e^t).
Without going too much into the details, you can see intuitively that this leads to an attractor: the derivatives of n and u switch sign faster and with greater magnitude at an exponential rate, so n and u collapse onto the attractor shown by your plot.
If you are actually trying to solve a different differential equation, I would need to see it in order to help you further.
This is what happens if you do the transformation and set the time to 1000:

Plotting mathematical function in python

I'm trying to write a function called plotting which takes input parameters Z, p and q and plots the function
f(y) = det(Z − yI) on the interval [p, q]
(Note: I is the identity matrix, and det() is the determinant.)
For the determinant, numpy.linalg.det() can be used, and for the identity matrix, np.matlib.identity(n).
Is there a way to write such a function in Python and plot it?
import numpy as np

def f(y):
    I2 = np.matlib.identity(y)
    x = Z - yI2
    numpy.linalg.det(x)
    ....
Is what I am trying correct? Any alternative?
You could use the following implementation.
import numpy as np
import matplotlib.pyplot as plt

def f(y, Z):
    n, m = Z.shape
    assert(n == m)
    I = np.identity(n)
    x = Z - y*I
    return np.linalg.det(x)

Z = np.matrix('1 2; 3 4')
p = -15
q = 15

y = np.linspace(p, q)
w = np.zeros(y.shape)
for i in range(len(y)):
    w[i] = f(y[i], Z)

plt.plot(y, w)
plt.show()
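If you want to avoid the explicit loop, np.linalg.det also accepts a stack of matrices, so all the Z - y*I matrices can be built at once with broadcasting. A minimal sketch of that variant (my own addition, using np.array rather than np.matrix):

import numpy as np
import matplotlib.pyplot as plt

Z = np.array([[1., 2.], [3., 4.]])
y = np.linspace(-15, 15, 200)

# stack of shape (len(y), n, n): one matrix Z - y*I per value of y
A = Z - y[:, None, None]*np.identity(Z.shape[0])
w = np.linalg.det(A)   # determinant of each matrix in the stack

plt.plot(y, w)
plt.show()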

Python ODEINT problems with args

I am relatively new to Python and trying to use it to solve an integration problem
x' = -L * x
where L is the Laplacian matrix, i.e. a matrix representation of a graph. This is part of my code:
def integrate_cons(x, t, l):
    xdot = -l*x
    return xdot

t = np.linspace(0, 10, 101)

# laplacian is a 3x3 matrix
# initial_conditions is a vector
solution = odeint(integrate_cons, initial_conditions, t, args=(laplacian,))
print(solution)

I'm having problems passing a matrix as an argument to odeint. How can I solve this?
import numpy as np
from scipy.integrate import odeint

def integrate_cons(x, t, l):
    # unless you use np.matrix (which I never do), you have to use np.dot
    xdot = -np.dot(l, x)
    return xdot

t = np.linspace(0, 10, 101)

# just a random matrix standing in for the 3x3 Laplacian
l = np.random.rand(3, 3)

# initial conditions (a vector)
x0 = np.array([1, 1, 1])

solution = odeint(integrate_cons, x0, t, args=(l,))
print(solution)
Look at the scipy cookbook for examples.
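Since the question is specifically about the Laplacian of a graph, here is a minimal sketch of the same call using the Laplacian L = D - A of a 3-node path graph instead of a random matrix (the graph choice is just an illustration, not from the original answer):

import numpy as np
from scipy.integrate import odeint

# adjacency and degree matrices of the path graph 1-2-3
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A                       # graph Laplacian

def integrate_cons(x, t, l):
    # consensus dynamics x' = -L x
    return -np.dot(l, x)

t = np.linspace(0, 10, 101)
x0 = np.array([1., 0., -1.])    # arbitrary initial state
solution = odeint(integrate_cons, x0, t, args=(L,))
print(solution[-1])             # the states converge towards the average of x0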
