Programming a PDE in Python with a Brownian path

I'm currently trying to simulate a PDE that includes a Brownian path (one of the terms is weighted, when going one timestep 'dt' further, by a normally distributed variable with mean 0 and variance dt).
For this I used the Fast Fourier Transform to get a system of ODEs, which I can solve much more easily (at least that's what I thought). This led me to the following code.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
#Defining some parameters a, b, c, which are included in the PDE
a = 10
b = 1.5
c = 20
#Creating the mesh
L = 100
N = 100
dx = L/N
x = np.arange(0, L, dx)
dt=0.01
t = np.linspace(0, 1, 100)
#Frequency for the Fourier Transformation
kappa= 2*np.pi*np.fft.fftfreq(N, d=dx)
#Initial condition for function u and its Fast Fourier Transformation
u0= np.zeros_like(x)
u0[int((L/4-L/10)/dx):int((L/4+L/10)/dx)]=2.5
u0[int((3*L/4-L/10)/dx):int((3*L/4+L/10)/dx)]=2.5
u0hat = np.fft.fft(u0)
u0hat_ri= np.concatenate((u0hat.real, u0hat.imag))
#Define the function describing the Transformation from the PDE to the system of ODEs
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    #Define the weighted change by the Brownian path B
    mean = [0]*len(uhat)
    diag = [0.1] * len(uhat)
    cov = np.diag(diag)
    B = np.random.multivariate_normal(mean, cov)
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*(1j)*kappa*uhat*B
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri
#Solve the ODE with odeint
uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c))
uhat = uhat_ri[:, :N] + (1j) * uhat_ri[:, N:]
u = np.zeros_like(uhat)
#Inverse Transform the Solution
for k in range(len(t)):
    u[:, k] = np.fft.ifft(uhat[k, :])
u = u.real
This program works if I exclude the Brownian path B in func:
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*(1j)*kappa*uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri
But it takes a long time to execute when B is included, and it also tells me:
C:\Users\leo_h\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
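For reference, the quantitative information the warning mentions can be obtained by re-running the same call with full_output=1 (a minimal sketch using odeint's standard interface; the likely cause is that odeint's adaptive stepping assumes a smooth, deterministic right-hand side, and drawing a fresh random B on every call of func breaks its error control):
# Same call as above, but also returning the solver diagnostics
uhat_ri, info = odeint(func, u0hat_ri, t, args=(kappa, a, b, c), full_output=1)
# 'nst' is the cumulative number of internal steps, 'nfe' the cumulative number
# of function evaluations; both grow very quickly when the RHS is noisy
print(info['nst'][-1], info['nfe'][-1])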
EDIT/ANSWER:
I solved the problem by moving the generation of the Brownian path B out of func. I guess it was just too much for odeint to cope with inside the function (or it generated a new Brownian path for each t?)
mean = [0]*len(u0hat)
diag = [2] * len(u0hat)
cov = np.diag(diag)
B = np.random.multivariate_normal(mean, cov)
def func(uhat_ri, t, kappa, a, b, c, B):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    d_uhat = np.zeros_like(uhat)
    d_uhat = -a**2 * (np.power(kappa, 2)) * uhat - c * (1j) * kappa * uhat + b * B * (1j) * kappa * uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri
uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c, B))
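The solution can then be inverse-transformed exactly as before (this just mirrors the inverse-transform loop from the first listing, with the space-by-time shape made explicit):
uhat = uhat_ri[:, :N] + (1j) * uhat_ri[:, N:]
u = np.zeros((N, len(t)), dtype=complex)
for k in range(len(t)):
    u[:, k] = np.fft.ifft(uhat[k, :])
u = u.real
Note that with B drawn once outside func, a single realisation of the noise is used for the whole time interval; drawing a new increment at every step would be a different (stochastic) integration scheme.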

Related

solve two simultaneous equations: one contains a Python function

I have two known points (A and B) on a map, with their coordinates given as (longitude, latitude). I need to derive the coordinates of a point C which is on the line AB and is 100 kilometres away from A.
First I created a function to calculate the distances between two points in kilometres:
# pip install haversine
from haversine import haversine
def get_distance(lat_from, long_from, lat_to, long_to):
    distance_in_km = haversine((lat_from, long_from),
                               (lat_to, long_to),
                               unit='km')
    return distance_in_km
Then using the slope and the distance, the coordinates of point C should be the solution to the below equations:
# line segment AB and AC share the same slope, so
# (15.6-27.3)/(41.6-34.7) = (y-27.3)/(x-34.7)
# the distance between A and C is 100 km, so
# get_distance(y,x,27.3,34.7) = 100
Then I try to solve these two equations in Python:
from sympy import symbols, Eq, solve
slope = (15.6-27.3)/(41.6-34.7)
x, y = symbols('x y')
eq1 = Eq(y-slope*(x-34.7)-27.3)
eq2 = Eq(get_distance(y,x,34.7,27.3)-100)
solve((eq1,eq2), (x, y))
The error is TypeError: can't convert expression to float. I think I understand the error, because the get_distance function expects floats as inputs, while my x and y in eq2 are sympy.core.symbol.Symbol.
I tried to add np.float(x), but the same error remains.
Is there a way to solve equations like these? Or do you have better ways to achieve what is needed?
Many thanks!
# there is a simple example of solving equations:
from sympy import symbols, Eq, solve
x, y = symbols('x y')
eq1 = Eq(2*x - y, 0)
eq2 = Eq(x + 2 - y, 0)
solve((eq1,eq2), (x, y))
# output: {x: 2, y: 4}
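One way to stay on the symbolic route (an illustrative sketch added here, not part of the original post) is to write the haversine formula with sympy functions, so that it accepts symbols, and then solve the system numerically with sympy.nsolve. The Earth radius, the starting guess, and the helper name sym_haversine are assumptions for this sketch:
from sympy import symbols, sin, cos, asin, sqrt, pi, nsolve

R_EARTH = 6371.0  # assumed mean Earth radius in km
x, y = symbols('x y')  # x = longitude, y = latitude of point C, as in the question

def sym_haversine(lat1, lon1, lat2, lon2):
    # haversine distance written with sympy functions, so symbolic arguments are allowed
    p1, p2 = lat1 * pi / 180, lat2 * pi / 180
    dp = (lat2 - lat1) * pi / 180
    dl = (lon2 - lon1) * pi / 180
    return 2 * R_EARTH * asin(sqrt(sin(dp / 2)**2 + cos(p1) * cos(p2) * sin(dl / 2)**2))

slope = (15.6 - 27.3) / (41.6 - 34.7)
eq1 = y - slope * (x - 34.7) - 27.3          # same line condition as above
eq2 = sym_haversine(27.3, 34.7, y, x) - 100  # 100 km away from A
# nsolve needs a starting guess; starting near A picks the solution that lies towards B
print(nsolve((eq1, eq2), (x, y), (35.0, 26.5)))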
You can directly calculate that point. We can implement a Python version of the intermediate-point calculation for latitude/longitude.
Be aware that this calculation assumes the earth is a sphere and takes the curvature into account, i.e. it is not a Euclidean approximation like the one in your original post.
Say we have two (lat,long) points A and B;
import numpy as np
A = (52.234869, 4.961132)
B = (46.491267, 26.994655)
EARTH_RADIUS = 6371.009
We can then calculate the intermediate point fraction f by taking 100 / (distance between A and B in km):
from sklearn.neighbors import DistanceMetric
dist = DistanceMetric.get_metric('haversine')
point_1 = np.array([A])
point_2 = np.array([B])
delta = dist.pairwise(np.radians(point_1), np.radians(point_2) )[0][0]
f = 100 / (delta * EARTH_RADIUS)
phi_1, lambda_1 = np.radians(point_1)[0]
phi_2, lambda_2 = np.radians(point_2)[0]
a = np.sin((1-f) * delta) / np.sin(delta)
b = np.sin( f * delta) / np.sin(delta)
x = a * np.cos(phi_1) * np.cos(lambda_1) + b * np.cos(phi_2) * np.cos(lambda_2)
y = a * np.cos(phi_1) * np.sin(lambda_1) + b * np.cos(phi_2) * np.sin(lambda_2)
z = a * np.sin(phi_1) + b * np.sin(phi_2)
phi_n = np.arctan2(z, np.sqrt(x**2 + y**2) )
lambda_n = np.arctan2(y,x)
The point C, going from A to B, at 100 km distance from A, is then
C = np.degrees( phi_n ), np.degrees(lambda_n)
In this case
(52.02172458025681, 6.384361456573444)
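As a quick sanity check (added here, using the haversine package from the question), the distance from A to this point should come out very close to 100 km:
from haversine import haversine

A = (52.234869, 4.961132)
C = (52.02172458025681, 6.384361456573444)
print(haversine(A, C, unit='km'))  # expected to be approximately 100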

How to find 2 parameters with gradient descent method in Python?

I have a few lines of code which don't converge. If anyone has an idea why, I would greatly appreciate it. The original equation is written in def f(x, y, b, m) and I need to find the parameters b and m.
np.random.seed(42)
x = np.random.normal(0, 5, 100)
y = 50 + 2 * x + np.random.normal(0, 2, len(x))
def f(x, y, b, m):
    return (1/len(x))*np.sum((y - (b + m*x))**2)  # it is supposed to be a sum operator

def dfb(x, y, b, m):  # partial derivative with respect to b
    return b - m*np.mean(x)+np.mean(y)

def dfm(x, y, b, m):  # partial derivative with respect to m
    return np.sum(x*y - b*x - m*x**2)
b0 = np.mean(y)
m0 = 0
alpha = 0.0001
beta = 0.0001
epsilon = 0.01
while True:
    b = b0 - alpha * dfb(x, y, b0, m0)
    m = m0 - alpha * dfm(x, y, b0, m0)
    if np.sum(np.abs(m-m0)) <= epsilon and np.sum(np.abs(b-b0)) <= epsilon:
        break
    else:
        m0 = m
        b0 = b
print(m, f(x, y, b, m))
Both derivatives got some signs mixed up:
def dfb(x, y, b, m):  # partial derivative with respect to b
    # return b - m*np.mean(x)+np.mean(y)
    #          ^             ^------ these signs are incorrect
    return b + m*np.mean(x) - np.mean(y)

def dfm(x, y, b, m):  # partial derivative with respect to m
    #      v------ this should be negative
    return -np.sum(x*y - b*x - m*x**2)
In fact, these derivatives are still missing some constants:
- dfb should be multiplied by 2
- dfm should be multiplied by 2/len(x)
I imagine that's not too bad because the gradient is scaled by alpha anyway, but it could make the speed of convergence worse.
If you do use the correct derivatives, your code will converge after one iteration:
def dfb(x, y, b, m):  # partial derivative with respect to b
    return 2 * (b + m * np.mean(x) - np.mean(y))

def dfm(x, y, b, m):  # partial derivative with respect to m
    # Used `mean` here since (2/len(x)) * np.sum(...)
    # is the same as 2 * np.mean(...)
    return -2 * np.mean(x * y - b * x - m * x**2)
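If you want to convince yourself the corrected formulas are right, a quick finite-difference check (added here, reusing the x, y and f from the question together with the corrected dfb/dfm above) compares them with numerical derivatives of f at an arbitrary point:
b_test, m_test, eps_fd = 10.0, 1.0, 1e-6
# central differences of the cost f with respect to b and m
num_dfb = (f(x, y, b_test + eps_fd, m_test) - f(x, y, b_test - eps_fd, m_test)) / (2 * eps_fd)
num_dfm = (f(x, y, b_test, m_test + eps_fd) - f(x, y, b_test, m_test - eps_fd)) / (2 * eps_fd)
print(dfb(x, y, b_test, m_test), num_dfb)  # each pair should agree to several decimals
print(dfm(x, y, b_test, m_test), num_dfm)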

Problems minimizing a two variable function with "scipy.optimize.brute"

I'm trying to minimize a function of two variables using the scipy.optimize.brute algorithm, but I keep receiving the error TypeError: Fcone() missing 1 required positional argument: 'R'.
Here's my code:
import numpy as np
import scipy.optimize as opt
gamma = 17.
C = 45.
T = 12.
H = 1.2
R = 1.3
H1 = 0.57
def Fcone(alpha, beta, H1, gamma, C, T, H, R):
    return np.pi/((np.cos(alpha))**2 * (np.cos(beta))**2) * ((np.cos(alpha))**2
        * ((np.cos(beta))**2 * (gamma*H1**3 + 2*H1**2*(C-T-gamma*H) - H*H1*
        (2*C-2*T-gamma*H) + H*(gamma*R**2 + H*(C-T)-(gamma*H**2)/3)) -
        2*R*(H-H1)*np.cos(beta) * ((gamma*H1/2 - gamma*H/2 + C -T)*
        np.sin(beta) - C) + (C*np.sin(beta) + gamma*H/3 - gamma*H1/3
        -C + T)*(H-H1)**2) - 2*H1*np.cos(beta)*np.cos(alpha)*(
        R*np.cos(beta)*((gamma*H1/2 + C -T -gamma*H)*np.sin(alpha)
        - C) + np.sin(alpha)*(H-H1)*((gamma*H1/2 - gamma*H/2 + C -T)
        *np.sin(beta)-C)) + (np.cos(beta))**2 * H1**2 *
        (gamma*H + C*np.sin(alpha)
        -2*gamma*H1/3 -C+T))
ranges = (slice(0, np.pi/2, np.pi/10000), slice(0, np.pi/2, np.pi/10000))
res = opt.brute(Fcone, ranges, args=(H1, gamma, C, T, H, R), finish=None)
where gamma, C, T, H, R and H1 are given parameters. Despite the very large (and probably confusing) equation, how can I fix this problem and minimize the function over the two parameters alpha and beta? Thank you!
P.S.: The function alone is running fine when I assign all values for the parameters.
Amend Fcone() as follows...
def Fcone(alpha_beta, H1, gamma, C, T, H, R):
    alpha = alpha_beta[0]
    beta = alpha_beta[1]
    .....
The variables being optimized have to be packed into a single 1-D array that is passed as the first argument, check the docs here. With finish=None and the default full_output=False, res is that array of optimal values, i.e. alpha is res[0] and beta is res[1].
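Here is a minimal sketch of the calling convention with a hypothetical toy objective (toy and offset are made-up names, not your Fcone), just to show how the optimized variables arrive packed into the first argument:
import numpy as np
import scipy.optimize as opt

def toy(alpha_beta, offset):   # hypothetical example objective
    alpha, beta = alpha_beta   # unpack the 1-D array of optimization variables
    return (alpha - offset)**2 + (beta + offset)**2

ranges = (slice(-2, 2, 0.01), slice(-2, 2, 0.01))
res = opt.brute(toy, ranges, args=(0.5,), finish=None)
print(res)  # approximately [0.5, -0.5]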

Python integration over a large array of coefficients

I'm trying to optimize a simple integration in Python which looks something like this:
from scipy import integrate
import numpy as np
from scipy.special import kv
import time
#Example function
def integrand(x, a, b, c):
    return a * (x ** (-b)) * (np.sqrt(x ** (c) + 1) - 1)

#Real Function that I want to calculate
def Bes(xx):
    return integrate.quad(lambda x: kv(5./3., x), xx, np.inf)

def F(x, a, b, c, d, e, f):
    zx = 1/((x**2.+1)*a)
    feq = e*x**(f)
    if (x > c):
        feq *= c/x * np.exp(-(x/d)**2.)
    return b*Bes(zx)[0]*feq*x**2.  # quad returns (value, error); keep only the value
start = time.time()
array_length = 10
a = np.random.rand(array_length)+3.
b = np.random.rand(array_length)+1.
c = np.random.rand(array_length)
d = (np.random.rand(array_length)+1)*100.
e = np.random.rand(array_length)*100.
f = np.random.rand(array_length)
inte = np.array([])
for i in range(array_length):
    result = integrate.quad(lambda x: F(x, a[i], b[i], c[i], d[i], e[i], f[i]), 0.01, 100000.)
    inte = np.append(inte, result[0])
print("For array length = %i" % array_length)
print("Time = %.2f [sec]" %(time.time()-start))
But the problems that I'm facing are:
- a, b, c are arrays with length > 10^7 (all the same length)
- the integration range of x starts at 0.01 and extends to infinity
- the integration at small x (like [0.01, 1]) is very important and needs a small step
I want to integrate this function for each coefficient value and return the entire array of integrals (length ~ 10^7) as the result, efficiently.
What kind of tools should I use?
(+) I just changed my code from the simple example to the actual integration form that I need to solve. Sorry for the confusion.
I suspected that this integral would converge for certain values of b and c, so I tried to evaluate this using Sympy:
import sympy
sympy.init_printing()
a, b, c = sympy.symbols('a, b, c', positive=True)
x = sympy.Symbol('x', positive=True)
sympy.integrate(a*(x**(-b))*(sympy.sqrt(x**c+1)-1), (x, 0, sympy.oo))
Sympy returns a closed-form result, valid only under some convergence conditions on b and c. This means that you should be able to obtain the correct results with the code below, as long as your coefficients pass the check function.
from numpy import sqrt, pi
from scipy.special import gamma
def check(a, b, c):
    assert (-(-b + 1)/c < 1)
    assert (1/2 - (-b + 1)/c > 1)
    assert (1 - (-b + 1)/c > 1)

def result(a, b, c):
    return a*gamma(-b/c + 1 + 1/c)*gamma(b/c - 1/2 - 1/c)/(2*sqrt(pi)*(b - 1))
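As a quick sanity check (added here), the closed form can be compared against scipy's numerical quadrature for sample coefficients that pass check(); the values below are just an example:
from scipy import integrate
import numpy as np

a_s, b_s, c_s = 2.0, 2.5, 2.0  # sample values chosen so that check() passes
check(a_s, b_s, c_s)

g_int = lambda x: a_s * x**(-b_s) * (np.sqrt(x**c_s + 1) - 1)
# split the range at 1 so quad handles the integrable singularity at 0 separately
numeric = integrate.quad(g_int, 0, 1)[0] + integrate.quad(g_int, 1, np.inf)[0]
print(numeric, result(a_s, b_s, c_s))  # both should come out around 4.94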

Explicit Euler method doesn't behave how I expect

I have implemented the following explicit euler method in python:
def explicit_euler(df, x0, h, N):
    """Solves an ODE IVP using the Explicit Euler method.

    Keyword arguments:
    df - The derivative of the system you wish to solve.
    x0 - The initial value of the system you wish to solve.
    h  - The step size.
    N  - The number of steps.
    """
    x = np.zeros(N)
    x[0] = x0
    for i in range(0, N-1):
        x[i+1] = x[i] + h * df(x[i])
    return x
Following the article on Wikipedia I can plot the function and verify that I get the same plot, so I believe the method I have written is working correctly.
Next I tried to use it to solve the last system given on this page, and instead of the plot shown there I obtain something quite different.
I am not sure why my plot doesn't match the one shown on the webpage. The explicit Euler method seems to work fine when I use it to solve systems where the slope doesn't change, but for an oscillating function it never seems to mimic it at all, and it doesn't even show the expected error growth indicated on the linked webpage. I am not sure what is wrong with the method I have implemented.
Here is the code used for plotting and the derivative:
def g(t):
    return -0.5 * np.exp(t * 0.5) * np.sin(5 * t) + 5 * np.exp(t * 0.5) * np.cos(5 * t)
h = 0.001
x0 = 0
tn = 4
N = int(tn / h)
x = ee.explicit_euler(f, x0, h, N)
t = np.arange(0, tn, h)
fig = plt.figure()
plt.plot(t, x, label="Explicit Euler")
plt.plot(t, (np.exp(0.5 * t) * np.sin(5 * t)), label="Analytical solution")
#plt.plot(t, np.exp(0.5 * t), label="Analytical solution")
plt.xlabel('Timesteps t')
plt.ylabel('x(t)=e^(0.5*t) * sin(5*t)')
plt.legend()
plt.grid()
plt.show()
Edit:
As requested here is the current equation I am applying the method to:
y'-y=-0.5*e^(t/2)*sin(5t)+5e^(t/2)*cos(5t)
Where y(0)=0.
I would like to make clear, however, that this behaviour doesn't occur just for this equation but for all equations where the slope changes sign or oscillates.
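For reference, the labelled analytical solution does satisfy this problem: with y(t) = e^(t/2)*sin(5t) we get y'(t) = 0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t), so y' - y = -0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t) and y(0) = 0, matching the equation above.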
Edit 2:
Ok thanks. Yes the code below does indeed work. But I have one further question. In the simple example I had for the exponential function, I had defined a method:
def f(x):
    return x
for the system f'(x)=x. This gave the output of my first graph which looks correct. I then defined another function:
def k(x):
    return cos(x)
for the system f'(x)=cos(x); this does not give the expected output. But when I change the function definition to
def k(t, x):
    return cos(t)
I get the expected output. If I change my function
def f(t, x):
    return t
I get an incorrect output. Am I always actually evaluating the function at a time step and is it just by chance for the system x'=x that at each time step the value is just the value of x?
I had understood that the Euler method uses the previously calculated value in order to get the next value. But if I run the code below for my function k(x)=cos(x), using the updated code you provided, I get output which must be incorrect.
def k(t, x):
    return np.cos(x)

h = 0.1          # Step size
x0 = (0, 0)      # Initial point of iteration
tn = 10          # Time step to iterate to
N = int(tn / h)  # Number of steps
x = ee.explicit_euler(k, x0, h, N)
t = np.arange(0, tn, h)
The problem is that you have set up the function g incorrectly. You want to solve the equation
y' - y = -0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t)
from which we observe that:
y' = y - 0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t)
Then we define the function f(t, y) = y - 0.5*e^(t/2)*sin(5t) + 5*e^(t/2)*cos(5t) as:
def f(t, y):
    return y - 0.5 * np.exp(t * 0.5) * np.sin(5 * t) + 5 * np.exp(t * 0.5) * np.cos(5 * t)
The initial point of iteration is f0=(t(0), y(0)):
f0 = (0, 0)
Then from Euler's equations:
def explicit_euler(df, x0, h, N):
    """Solves an ODE IVP using the Explicit Euler method.

    Keyword arguments:
    df - The derivative of the system you wish to solve.
    x0 - The initial value of the system you wish to solve.
    h  - The step size.
    N  - The number of steps.
    """
    x = np.zeros(N)
    t, x[0] = x0
    for i in range(0, N-1):
        x[i+1] = x[i] + h * df(t, x[i])
        t += h
    return x
Complete Code:
import numpy as np
import matplotlib.pyplot as plt

def explicit_euler(df, x0, h, N):
    """Solves an ODE IVP using the Explicit Euler method.

    Keyword arguments:
    df - The derivative of the system you wish to solve.
    x0 - The initial value of the system you wish to solve.
    h  - The step size.
    N  - The number of steps.
    """
    x = np.zeros(N)
    t, x[0] = x0
    for i in range(0, N-1):
        x[i+1] = x[i] + h * df(t, x[i])
        t += h
    return x

def df(t, y):
    return -0.5 * np.exp(t * 0.5) * np.sin(5 * t) + 5 * np.exp(t * 0.5) * np.cos(5 * t) + y

h = 0.001
f0 = (0, 0)
tn = 4
N = int(tn / h)
x = explicit_euler(df, f0, h, N)
t = np.arange(0, tn, h)

fig = plt.figure()
plt.plot(t, x, label="Explicit Euler")
plt.plot(t, (np.exp(0.5 * t) * np.sin(5 * t)), label="Analytical solution")
#plt.plot(t, np.exp(0.5 * t), label="Analytical solution")
plt.xlabel('Timesteps t')
plt.ylabel('x(t)=e^(0.5*t) * sin(5*t)')
plt.legend()
plt.grid()
plt.show()
Isolate y', and whatever is on the right-hand side is what you should place in the df function.
We will rename the variables to keep the same convention: y will be the dependent variable and t the independent variable.
Equation 2: In this case the equation f'(x)=cos(x) will be rewritten to:
y'=cos(t)
Then:
def df(t, y):
    return np.cos(t)
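To see this in action, here is a short usage sketch (added here, reusing the explicit_euler and imports from the complete code above); the numerical solution of y' = cos(t) with y(0) = 0 should track sin(t):
h = 0.01
f0 = (0, 0)   # (t(0), y(0))
tn = 10
N = int(tn / h)

y_num = explicit_euler(df, f0, h, N)  # df as defined just above
t = np.arange(0, tn, h)

plt.plot(t, y_num, label="Explicit Euler")
plt.plot(t, np.sin(t), label="Analytical solution sin(t)")
plt.legend()
plt.show()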
In conclusion, if we have an equation of the following form:
y' = f(t, y)
Then:
def df(t, y):
    return f(t, y)
