This is a follow-up to How to set up and solve simultaneous equations in python, but I feel it deserves its own reputation points for any answer.
For a fixed integer n, I have a set of 2(n-1) simultaneous equations as follows.
M(p) = 1+((n-p-1)/n)*M(n-1) + (2/n)*N(p-1) + ((p-1)/n)*M(p-1)
N(p) = 1+((n-p-1)/n)*M(n-1) + (p/n)*N(p-1)
M(1) = 1+((n-2)/n)*M(n-1) + (2/n)*N(0)
N(0) = 1+((n-1)/n)*M(n-1)
M(p) is defined for 1 <= p <= n-1. N(p) is defined for 0 <= p <= n-2. Notice also that p is just a constant integer in every equation so the whole system is linear.
Some very nice answers were given for how to set up a system of equations in python. However, the system is sparse and I would like to solve it for large n. How can I instead use scipy's sparse matrix representation and, for example, http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html?
I wouldn't normally keep beating a dead horse, but it happens that my non-vectorized approach to solving your other question has some merit when things get big. Because I was actually filling the coefficient matrix one item at a time, it is very easy to translate into COO sparse matrix format, which can efficiently be transformed to CSC and solved. The following does it:
import numpy as np
import scipy.sparse
import scipy.sparse.linalg

def sps_solve(n):
    # Solution vector is [N[0], N[1], ..., N[n - 2], M[1], M[2], ..., M[n - 1]]
    n_pos = lambda p : p
    m_pos = lambda p : p + n - 2
    data = []
    row = []
    col = []
    # p = 0
    # n * N[0] + (1 - n) * M[n-1] = n
    row += [n_pos(0), n_pos(0)]
    col += [n_pos(0), m_pos(n - 1)]
    data += [n, 1 - n]
    for p in range(1, n - 1):
        # n * M[p] + (1 + p - n) * M[n - 1] - 2 * N[p - 1] +
        # (1 - p) * M[p - 1] = n
        row += [m_pos(p)] * (4 if p > 1 else 3)
        col += ([m_pos(p), m_pos(n - 1), n_pos(p - 1)] +
                ([m_pos(p - 1)] if p > 1 else []))
        data += [n, 1 + p - n, -2] + ([1 - p] if p > 1 else [])
        # n * N[p] + (1 + p - n) * M[n - 1] - p * N[p - 1] = n
        row += [n_pos(p)] * 3
        col += [n_pos(p), m_pos(n - 1), n_pos(p - 1)]
        data += [n, 1 + p - n, -p]
    if n > 2:
        # p = n - 1
        # n * M[n - 1] - 2 * N[n - 2] + (2 - n) * M[n - 2] = n
        row += [m_pos(n - 1)] * 3
        col += [m_pos(n - 1), n_pos(n - 2), m_pos(n - 2)]
        data += [n, -2, 2 - n]
    else:
        # p = 1
        # n * M[1] - 2 * N[0] = n
        row += [m_pos(n - 1)] * 2
        col += [m_pos(n - 1), n_pos(n - 2)]
        data += [n, -2]
    coeff_mat = scipy.sparse.coo_matrix((data, (row, col))).tocsc()
    return scipy.sparse.linalg.spsolve(coeff_mat,
                                       np.ones(2 * (n - 1)) * n)
It is of course much more verbose than building it from vectorized blocks, as TheodorosZelleke does, but an interesting thing happens when you time both approaches:
First, and this is (very) nice, time scales linearly in both solutions, as one would expect from using the sparse approach. But the solution I gave in this answer is always faster, and more so for larger n. Just for the fun of it, I also timed TheodorosZelleke's dense approach from the other question, which gives this nice graph showing the different scaling of both types of solutions, and how very early, somewhere around n = 75, the solution here should be your choice:
I don't know enough about scipy.sparse to really figure out why the two sparse approaches differ, although I strongly suspect the use of LIL-format sparse matrices. There may be a very marginal performance gain, plus a lot of compactness in the code, from turning TheodorosZelleke's answer into COO format. But that is left as an exercise for the OP!
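If you want to reproduce the timing comparison yourself, a rough sketch of such a timing loop could look like the following. This is only a sketch, not the exact benchmark behind the graphs: it assumes sps_solve from this answer and the solve function from the scipy.sparse answer below are both defined in the same session, and the n values and repeat count are just placeholders.

import timeit

# Rough sketch only, not the exact benchmark behind the graphs. It assumes
# sps_solve (this answer) and solve (the scipy.sparse answer below) are defined.
for n in (10, 100, 1000, 5000):
    t_coo = timeit.timeit(lambda: sps_solve(n), number=10)
    t_blocks = timeit.timeit(lambda: solve(n), number=10)
    print(n, t_coo, t_blocks)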
This is a solution using scipy.sparse. Unfortunately, the problem itself is not stated here, so in order to follow this solution, future visitors will first have to look up the problem under the link provided in the question.
Solution using scipy.sparse:
from scipy.sparse import spdiags, lil_matrix, vstack, hstack
from scipy.sparse.linalg import spsolve
import numpy as np
def solve(n):
    nrange = np.arange(n)
    diag = np.ones(n-1)
    # upper left block
    n_to_M = spdiags(-2. * diag, 0, n-1, n-1)
    # lower left block
    n_to_N = spdiags([n * diag, -nrange[-1:0:-1]], [0, 1], n-1, n-1)
    # upper right block
    m_to_M = lil_matrix(n_to_N)
    m_to_M[1:, 0] = -nrange[1:-1].reshape((n-2, 1))
    # lower right block
    m_to_N = lil_matrix((n-1, n-1))
    m_to_N[:, 0] = -nrange[1:].reshape((n-1, 1))
    # build A, combine all blocks
    coeff_mat = hstack(
        (vstack((n_to_M, n_to_N)),
         vstack((m_to_M, m_to_N))))
    # const vector, right side of eq.
    const = n * np.ones((2 * (n-1), 1))
    return spsolve(coeff_mat.tocsr(), const).reshape((-1, 1))
There's some code that I've looked at before here: http://jkwiens.com/heat-equation-using-finite-difference/ The function there implements a finite difference method to solve the heat equation using the scipy sparse matrix package.
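For readers who don't want to follow the link, here is a minimal sketch of the same idea, and not the linked code itself: build the 1D Laplacian as a sparse tridiagonal matrix and advance the heat equation u_t = alpha * u_xx with a backward Euler step.

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

def heat_step(u, alpha, dx, dt):
    # One backward Euler step of u_t = alpha * u_xx on a 1D grid, using a
    # sparse tridiagonal Laplacian (minimal sketch, not the linked code).
    n = u.size
    lap = scipy.sparse.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                             [-1, 0, 1], format='csc') / dx**2
    A = scipy.sparse.identity(n, format='csc') - dt * alpha * lap
    return scipy.sparse.linalg.spsolve(A, u)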
Related
This is more of a computational physics problem, and I've asked it on Physics Stack Exchange, but got no answers there. It is, I suppose, a mix of the disciplines here and there (and maybe even Mathematics Stack Exchange), so finding the right place to post is a task in and of itself, apparently...
I'm attempting to use Crank-Nicolson scheme to solve the TDSE in 1D. The initial wave is a real Gaussian that has been normalised wrt its probability density. As the solution evolves, a depression grows in the central peak of the real part of the wave, and the imaginary part's central trough is perhaps a bit higher than I expect (image below).
Does this behaviour seem reasonable? I have searched around and not seen questions/figures that are similar. I've tested another person's code from Github and it exhibits the same behaviour, which makes me feel a bit better. But I still think the center peak should just decrease in height and increase in width. The likelihood of me getting a physics-based explanation is relatively low here I'd assume, but a computational-based explanation on errors I may have made is more likely.
I'm happy to give more information, for example my code, or the matrices used in the scheme, etc. Thanks in advance!
Here's a link to a GIF of the time evolution:
And the part of my code relevant to solving the 1D TDSE:
(pretty much the entire thing except the plotting)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
# Define function for norm.
def normf(dxc, uc, ic):
    return sum(dxc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of position.
def xexpf(dxc, xc, uc, ic):
    return sum(dxc * xc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of squared position.
def xexpsf(dxc, xc, uc, ic):
    return sum(dxc * np.square(xc) * np.square(np.abs(uc[ic, :])))

# Define function for standard deviation.
def sdaf(xexpc, xexpsc, ic):
    return np.sqrt(xexpsc[ic] - np.square(xexpc[ic]))
# Time t: t0 <= t <= tf. Have N steps at which to evaluate the CN scheme. The
# time interval is dt. decp: variable for plotting to a certain number of decimal
# places.
t0 = 0
tf = 20
N = 200
dt = tf / N
t = np.linspace(t0, tf, num = N + 1, endpoint = True)
decp = str(dt)[::-1].find('.')
# Initialise array for filling with norm values at each time step.
norm = np.zeros(len(t))
# Initialise array for expectation value of position.
xexp = np.zeros(len(t))
# Initialise array for expectation value of squared position.
xexps = np.zeros(len(t))
# Initialise array for alternate standard deviation.
sda = np.zeros(len(t))
# Position x: -a <= x <= a. M is an even number. There are M + 1 total discrete
# positions, for the points to be symmetric and centred at x = 0.
a = 100
M = 1200
dx = (2 * a) / M
x = np.linspace(-a, a, num = M + 1, endpoint = True)
# The gaussian function u diffuses over time. sd sets the width of gaussian. u0
# is the initial gaussian at t0.
sd = 1
var = np.power(sd, 2)
mu = 0
u0 = np.sqrt(1 / np.sqrt(np.pi * var)) * np.exp(-np.power(x - mu, 2) / (2 * \
var))
u = np.zeros([len(t), len(x)], dtype = 'complex_')
u[0, :] = u0
# Normalise u.
u[0, :] = u[0, :] / np.sqrt(normf(dx, u, 0))
# Set coefficients of CN scheme.
alpha = dt * -1j / (4 * np.power(dx, 2))
beta = dt * 1j / (4 * np.power(dx, 2))
# Tridiagonal matrices Al and AR. Al to be solved using Thomas algorithm.
Al = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Al[i + 1, i] = alpha
    Al[i, i] = 1 - (2 * alpha)
    Al[i, i + 1] = alpha
# Corner elements for BC's.
Al[M, M], Al[0, 0] = 1 - alpha, 1 - alpha
Ar = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Ar[i + 1, i] = beta
    Ar[i, i] = 1 - (2 * beta)
    Ar[i, i + 1] = beta
# Corner elements for BC's.
Ar[M, M], Ar[0, 0] = 1 - 2*beta, 1 - beta
# Thomas algorithm variables. Following similar naming as in Wiki article.
a = np.diag(Al, -1)
b = np.diag(Al)
c = np.diag(Al, 1)
NT = len(b)
cp = np.zeros(NT - 1, dtype = 'complex_')
for n in range(0, NT - 1):
    if n == 0:
        cp[n] = c[n] / b[n]
    else:
        cp[n] = c[n] / (b[n] - (a[n - 1] * cp[n - 1]))
d = np.zeros(NT, dtype = 'complex_')
dp = np.zeros(NT, dtype = 'complex_')
# Iterate over each time step to solve CN method. Maintain boundary
# conditions. Keep track of standard deviation.
for i in range(0, N):
    # BC's.
    u[i, 0], u[i, M] = 0, 0
    # Find RHS.
    d = np.dot(Ar, u[i, :])
    for n in range(0, NT):
        if n == 0:
            dp[n] = d[n] / b[n]
        else:
            dp[n] = (d[n] - (a[n - 1] * dp[n - 1])) / (b[n] - (a[n - 1] * cp[n - 1]))
    nc = NT - 1
    while nc > -1:
        if nc == NT - 1:
            u[i + 1, nc] = dp[nc]
            nc -= 1
        else:
            u[i + 1, nc] = dp[nc] - (cp[nc] * u[i + 1, nc + 1])
            nc -= 1
    norm[i] = normf(dx, u, i)
    xexp[i] = xexpf(dx, x, u, i)
    xexps[i] = xexpsf(dx, x, u, i)
    sda[i] = sdaf(xexp, xexps, i)
# Fill in final norm value.
norm[N] = normf(dx, u, N)
# Fill in final position expectation value.
xexp[N] = xexpf(dx, x, u, N)
# Fill in final squared position expectation value.
xexps[N] = xexpsf(dx, x, u, N)
# Fill in final standard deviation value.
sda[N] = sdaf(xexp, xexps, N)
Essentially, I am trying to solve the Laplacian using a random walk numerical solution. My domain is a circle, and the boundary condition is some function: f(phi) = cos(2*phi). I am trying to take a point within the 2D domain of my (unit) circle and randomly walk it until it meets the edge of the circle, i.e. when x^2 + y^2 = 1. I then take the x and y coordinates, find theta, and plug theta into my function to obtain a value. I record and store this value, repeat the process 'NumbRepeats' times, and then take the average of these values. This should give an approximate solution to the equation.

I am not so concerned about the physics per se; I need help with the process I have described above. It may not necessarily be correct, but if I can get the program to do what I think is correct, then I will be happy. Thanks for the help. I will post my code below:
Note, my expertise is not coding, so apologies if this is difficult to understand. Any help is appreciated.
import matplotlib.pyplot as plt
import numpy as np
import random
#Python code for 2D random walk
# defining the number of steps
n = 100000
#creating two array for containing x and y coordinate
#of size equals to the number of size and filled up with 0's
x = np.zeros(n)
y = np.zeros(n)
NumbRepeats = 100
PotBoundVec = []
for j in range(NumbRepeats):
    PotBound = 0
    for i in range(1, n):
        val = random.randint(0, 1)
        if val == .25:
            x[i] = x[i - 1] + 1
            y[i] = y[i - 1]
        elif val == .5:
            x[i] = x[i - 1] - 1
            y[i] = y[i - 1]
        elif val == .75:
            x[i] = x[i - 1]
            y[i] = y[i - 1] + 1
        else:
            x[i] = x[i - 1]
            y[i] = y[i - 1] - 1
        if x[i]**2 + y[i]**2 == 1:
            Theta = np.tan(x[i]/y[i])
            PotBound = np.cos(2*Theta)
            PotBoundVec.append(PotBound)
            x = np.zeros(n)
            y = np.zeros(n)
        else:
            pass
print(PotBoundVec)
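For reference, here is a minimal sketch of the procedure described above, not a fix of the code itself; the lattice step h, the >= 1 stopping test and the use of arctan2 are my own assumptions. It steps a point on a small lattice until it leaves the unit circle, evaluates cos(2*theta) at the exit point, and averages over many walks.

import numpy as np

def laplace_walk_estimate(x0, y0, h=0.01, n_walks=1000, rng=None):
    # Monte Carlo estimate of the Laplace solution at (x0, y0) for the
    # boundary condition f(phi) = cos(2*phi) on the unit circle (sketch only).
    rng = np.random.default_rng() if rng is None else rng
    steps = np.array([[h, 0.0], [-h, 0.0], [0.0, h], [0.0, -h]])
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while x * x + y * y < 1.0:
            dx, dy = steps[rng.integers(4)]
            x += dx
            y += dy
        total += np.cos(2.0 * np.arctan2(y, x))
    return total / n_walks

# At the centre the exact solution is 0, so laplace_walk_estimate(0.0, 0.0)
# should come out close to zero.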
I have a square matrix of size 8x8. Some of the terms are functions of frequency (omega). I want to write a function which searches for eigenfrequencies in a given range, like 0 - 1 kHz.
I have included the function below. Here the terms 'tx', 'ki1' and 'ki2' are functions of omega. To find eigenfrequencies, the determinant of the matrix should be zero. But I can't find the determinant of the matrix if all values are not given.
Basically, I don't want to give a frequency and then get eigenvalues.
I want the matrix to be solved to give eigenvalues which will be eigenfrequencies.
Can you please suggest any method or function for that?
Any help or suggestion would be appreciated.
Thanks in advance!
import numpy as np
def mat(l1, l2, omega1):
    kmat = np.zeros((8, 8), dtype = complex)
    ki1 = omega1 / c1
    ki2 = omega1 / c2
    tx = 1 + n * np.exp(-1j * omega1 * tau)
    kmat[0][0] = -1
    kmat[0][1] = 1
    kmat[1][0] = -np.exp(-1j * ki1 * l1)  # simple duct
    kmat[1][2] = 1
    kmat[2][1] = -np.exp(1j * ki1 * l1)
    kmat[2][3] = 1
    kmat[3][2] = tx  # velocity coupling
    kmat[3][3] = -tx
    kmat[3][4] = -1
    kmat[3][5] = 1
    kmat[4][2] = 1
    kmat[4][3] = -1
    kmat[4][4] = -1
    kmat[4][5] = -1
    kmat[5][4] = -np.exp(-1j * ki2 * l2)
    kmat[5][6] = 1
    kmat[6][5] = -np.exp(1j * ki2 * l2)
    kmat[6][7] = 1
    kmat[7][6] = -1
    kmat[7][7] = -1
    return kmat
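For what it's worth, one direct way to carry out the search described in the question is to scan omega over the range of interest and flag local minima of |det(kmat(omega))| as candidate eigenfrequencies. The snippet below is only a sketch: it assumes l1, l2 and the constants c1, c2, n and tau used inside mat() are already defined, and it only brackets candidates rather than refining them.

import numpy as np

# Sketch: scan omega from (almost) 0 to 2*pi*1000 rad/s and flag local minima
# of the determinant magnitude (assumes mat(), l1, l2, c1, c2, n, tau exist).
omegas = np.linspace(1.0, 2 * np.pi * 1000.0, 5000)
dets = np.array([abs(np.linalg.det(mat(l1, l2, w))) for w in omegas])
is_min = (dets[1:-1] < dets[:-2]) & (dets[1:-1] < dets[2:])
print(omegas[1:-1][is_min] / (2 * np.pi))  # candidate eigenfrequencies in Hz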
Why not do a sort of bootstrapping to estimate the eigenvalues?
For each repetition, fill the elements of the matrix which are not constant with f(omega) and find the eigenvalues, then sort them from largest to smallest.
If the distribution of eigenvalues stays more or less the same across the repetitions, you have a fair enough estimate.
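A rough sketch of that idea, assuming mat() and the constants l1, l2, c1, c2, n and tau from the question are defined, and with the sample count and frequency range as placeholders:

import numpy as np

# Repeatedly draw omega from the range of interest, rebuild the matrix, and
# collect the sorted eigenvalue magnitudes; a distribution that is stable
# across repetitions suggests the estimate is fair (sketch only).
rng = np.random.default_rng(0)
spectra = []
for _ in range(200):
    omega = rng.uniform(0.0, 2 * np.pi * 1000.0)
    spectra.append(np.sort(np.abs(np.linalg.eigvals(mat(l1, l2, omega)))))
spectra = np.array(spectra)
print(spectra.mean(axis=0))
print(spectra.std(axis=0))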
You don't say which project you are looking at the documentation for, but sympy can do this:
In [1]: omega = Symbol('omega')
In [2]: M = Matrix([[1, omega], [omega, 1]])
In [3]: M
Out[3]:
⎡1 ω⎤
⎢ ⎥
⎣ω 1⎦
In [4]: M.eigenvals()
Out[4]: {1 - ω: 1, ω + 1: 1}
However you need to bear in mind that for a matrix larger than 4x4 with symbolic entries it is not always possible to obtain a "closed form" expression for the eigenvalues due to the Abel-Ruffini theorem:
https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem
Let's say we have an array sig:
sig = np.array([1,2,3,4,5])
Another array k which consists of indexes:
k = np.array([1,2,0,4])
I want to find an array that interpolates between sig[k[i]-1] and sig[k[i]] only if k[i] != 0 and k[i] != len(k), i.e.:
p = 2
result = np.zeros(len(k))
for i in range(len(k)):
    if k[i] == 0:
        result[i] = sig[k[i]]
    elif k[i] == len(k):
        result[i] = sig[k[i] - 1]
    else:
        result[i] = sig[k[i] - 1] + (sig[k[i]] - sig[k[i] - 1]) * (p - k[i - 1]) / (k[i] - k[i - 1])
How do I do this, without looping over len(k), by vectorization?
Expected: result = array([1.66666667, 3, 1, 4])
Because for k = 0 and k = 4 I did not interpolate; the values were returned as sig[0] and sig[3] respectively.
For a (very) limited number of cases like here, one approach to vectorize such code is to build a linear combination of each case and the corresponding calculation.
So, set up vectors
alpha = (k == 0) to match the first case,
beta = (k > 0) to match the second case, and
gamma = (k < len(k)) to match the third case.
Then, build up a proper linear combination like:
alpha * sig[k] + beta * sig[k-1] + gamma * (sig[k] - sig[k-1]) * (p - np.roll(k, 1)) / (k - np.roll(k, 1))
Note that, by the way beta and gamma are set up above, the calculations of the second and third cases can be combined. Also, we need np.roll here to get the proper k[i-1].
The final solution, minimized to a one-liner, looks like this:
import numpy as np
# Inputs
sig = np.array([1, 2, 3, 4, 5])
k = np.array([1, 2, 0, 4])
p = 2
# Original solution using loop
result = np.zeros(len(k))
for i in range(len(k)):
    if k[i] == 0:
        result[i] = sig[k[i]]
    elif k[i] == len(k):
        result[i] = sig[k[i] - 1]
    else:
        result[i] = sig[k[i] - 1] + (sig[k[i]] - sig[k[i] - 1]) * (p - k[i - 1]) / (k[i] - k[i - 1])
# Vectorized solution
res = (k == 0) * sig[k] + (k > 0) * sig[k-1] + (k < len(k)) * (sig[k] - sig[k-1]) * (p - np.roll(k, 1)) / (k - np.roll(k, 1))
# Outputs
print('Original solution using loop:\n ', result)
print('Vectorized solution:\n ', res)
The outputs are identical:
Original solution using loop:
[1.66666667 3. 1. 4. ]
Vectorized solution:
[1.66666667 3. 1. 4. ]
Hope that helps!
I will try and explain exactly what's going on and my issue.
This is a bit mathy and SO doesn't support latex, so sadly I had to resort to images. I hope that's okay.
I don't know why it's inverted, sorry about that.
At any rate, this is a linear system Ax = b where we know A and b, so we can find x, which is our approximation at the next time step. We continue doing this until time t_final.
This is the code
import numpy as np
tau = 2 * np.pi
tau2 = tau * tau
i = complex(0,1)
def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

for l in range(2, 12):
    N = 2 ** l  #number of grid points
    dx = 1.0 / N  #space between grid points
    dx2 = dx * dx
    dt = dx  #time step
    t_final = 1
    approximate_f = np.zeros((N, 1), dtype = np.complex)
    approximate_g = np.zeros((N, 1), dtype = np.complex)

    #Insert initial conditions
    for k in range(N):
        approximate_f[k, 0] = np.cos(tau * k * dx)
        approximate_g[k, 0] = -i * np.sin(tau * k * dx)

    #Create coefficient matrix
    A = np.zeros((2 * N, 2 * N), dtype = np.complex)

    #First row is special
    A[0, 0] = 1 - 3 * i * dt
    A[0, N] = ((2 * dt / dx2) + dt) * i
    A[0, N + 1] = (-dt / dx2) * i
    A[0, -1] = (-dt / dx2) * i

    #Last row is special
    A[N - 1, N - 1] = 1 - (3 * dt) * i
    A[N - 1, N] = (-dt / dx2) * i
    A[N - 1, -2] = (-dt / dx2) * i
    A[N - 1, -1] = ((2 * dt / dx2) + dt) * i

    #middle
    for k in range(1, N - 1):
        A[k, k] = 1 - (3 * dt) * i
        A[k, k + N - 1] = (-dt / dx2) * i
        A[k, k + N] = ((2 * dt / dx2) + dt) * i
        A[k, k + N + 1] = (-dt / dx2) * i

    #Bottom half
    A[N:, :N] = A[:N, N:]
    A[N:, N:] = A[:N, :N]

    Ainv = np.linalg.inv(A)

    #Advance through time
    time = 0
    while time < t_final:
        b = np.concatenate((approximate_f, approximate_g), axis = 0)
        x = np.dot(Ainv, b)  #Solve Ax = b
        approximate_f = x[:N]
        approximate_g = x[N:]
        time += dt
    approximate_solution = np.concatenate((approximate_f, approximate_g), axis=0)

    #Calculate the actual solution
    actual_f = np.zeros((N, 1), dtype = np.complex)
    actual_g = np.zeros((N, 1), dtype = np.complex)
    for k in range(N):
        actual_f[k, 0] = solution_f(t_final, k * dx)
        actual_g[k, 0] = solution_g(t_final, k * dx)
    actual_solution = np.concatenate((actual_f, actual_g), axis = 0)

    print(np.sqrt(dx) * np.linalg.norm(actual_solution - approximate_solution))
It doesn't work. At least not at the beginning; it shouldn't start off this slowly. The scheme should be unconditionally stable and converge to the right answer.
What's going wrong here?
The L2-norm can be a useful metric to test convergence, but it isn't ideal when debugging, as it doesn't explain what the problem is. Although your solution should be unconditionally stable, backward Euler won't necessarily converge to the right answer. Just as forward Euler is notoriously unstable (anti-dissipative), backward Euler is notoriously dissipative. Plotting your solutions confirms this: the numerical solutions converge to zero. For a higher-order approximation, Crank-Nicolson is a reasonable candidate. The code below contains the more general theta method so that you can tune the implicitness of the solution. theta=0.5 gives CN, theta=1 gives BE, and theta=0 gives FE.
A couple other things that I tweaked:
I selected a more appropriate time step of dt = (dx**2)/2 instead of dt = dx. The latter doesn't converge to the right solution using CN.
It's a minor note, but since t_final isn't guaranteed to be a multiple of dt, you weren't comparing solutions at the same time step.
With regards to your comment about it being slow: as you increase the spatial resolution, your time resolution needs to increase too. Even in your case with dt = dx, you have to perform a (1024 x 1024) * 1024 matrix multiplication 1024 times. I didn't find this to take particularly long on my machine. I removed some unneeded concatenation to speed it up a bit, but changing the time step to dt = (dx**2)/2 will really bog things down, unfortunately. You could try compiling with Numba if you are concerned with speed.
All that said, I didn't find tremendous success with the consistency of CN. I had to set N = 2^6 to get anything at t_final = 1. Increasing t_final makes this worse, decreasing t_final makes it better. Depending on your needs, you could look into implementing TR-BDF2 or other linear multistep methods to improve this.
The code with a plot is below:
import numpy as np
import matplotlib.pyplot as plt

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

l = 6
N = 2 ** l
dx = 1.0 / N
dx2 = dx * dx
dt = dx2 / 2
t_final = 1.
x_arr = np.arange(0, 1, dx)

approximate_f = np.cos(tau * x_arr)
approximate_g = -i * np.sin(tau * x_arr)

H = np.zeros([2 * N, 2 * N], dtype=np.complex)
for k in range(N):
    H[k, k] = -3 * i * dt
    H[k, k + N] = (2 / dx2 + 1) * i * dt
    if k == 0:
        H[k, N + 1] = -i / dx2 * dt
        H[k, -1] = -i / dx2 * dt
    elif k == N - 1:
        H[N - 1, N] = -i / dx2 * dt
        H[N - 1, -2] = -i / dx2 * dt
    else:
        H[k, k + N - 1] = -i / dx2 * dt
        H[k, k + N + 1] = -i / dx2 * dt

### Bottom half
H[N:, :N] = H[:N, N:]
H[N:, N:] = H[:N, :N]

### Theta method. 0.5 -> Crank Nicolson
theta = 0.5
A = np.eye(2 * N) + H * theta
B = np.eye(2 * N) - H * (1 - theta)

### Precompute for faster computations
mat = np.linalg.inv(A) @ B

t = 0
b = np.concatenate((approximate_f, approximate_g))
while t < t_final:
    t += dt
    b = mat @ b
approximate_f = b[:N]
approximate_g = b[N:]
approximate_solution = np.concatenate((approximate_f, approximate_g))

#Calculate the actual solution
actual_f = solution_f(t, np.arange(0, 1, dx))
actual_g = solution_g(t, np.arange(0, 1, dx))
actual_solution = np.concatenate((actual_f, actual_g))

plt.figure(figsize=(7, 5))
plt.plot(x_arr, actual_f.real, c="C0", label=r"$Re(f_\mathrm{true})$")
plt.plot(x_arr, actual_f.imag, c="C1", label=r"$Im(f_\mathrm{true})$")
plt.plot(x_arr, approximate_f.real, c="C0", ls="--", label=r"$Re(f_\mathrm{num})$")
plt.plot(x_arr, approximate_f.imag, c="C1", ls="--", label=r"$Im(f_\mathrm{num})$")
plt.legend(loc=3, fontsize=12)
plt.xlabel("x")
plt.savefig("num_approx.png", dpi=150)
I am not going to go through all of your math, but I'm going to offer a suggestion.
The use of a direct calculation for fxx and gxx seems like a good candidate for being numerically unstable. Intuitively, a first-order method should be expected to make second-order mistakes in the terms. Second-order mistakes in the individual terms, after passing through that formula, wind up as constant-order mistakes in the second derivative. Plus, when your step size gets small, you are going to find that a quadratic formula makes even small roundoff mistakes turn into surprisingly large errors.
Instead, I would suggest that you start by turning this into a first-order system of 4 functions, f, fx, g, and gx, and then proceed with backward Euler on that system. Intuitively, with this approach, a first-order method creates second-order mistakes, which pass through a formula that creates first-order mistakes from them. Now you are converging as you should from the start, and are also not as sensitive to the propagation of roundoff errors.
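Since the original equations only appear in the images, here is just a generic sketch of the structure being suggested; the matrix A is a placeholder that would come from your own spatial discretisation of the first-order system in f, fx, g and gx:

import numpy as np

def backward_euler_step(A, y, dt):
    # One backward Euler step for the linear system y' = A y, where y stacks
    # the first-order unknowns [f, fx, g, gx] at every grid point (A is a
    # placeholder here, not derived from the original equations).
    n = A.shape[0]
    return np.linalg.solve(np.eye(n, dtype=complex) - dt * A, y)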