I'm developing the code below to check the difference between the Euler and Enhanced Euler methods for the equation y' = y.
I see that as the iterations advance, the difference between the values produced by the two methods keeps growing. Does anyone know why?
def euler_explicit(y_0, h, n):
    steps = [None]*(n+1)
    steps[0] = y_0
    for k in range(1, n+1):
        steps[k] = steps[k-1] + h*steps[k-1]
    return steps

def euler_aprim(y_0, h, n):
    steps = [None]*(n+1)
    steps[0] = y_0
    for k in range(1, n+1):
        steps[k] = steps[k-1] + h*0.5*(steps[k-1] + steps[k-1] + h*steps[k-1])
    return steps

def main():
    t_f = 5
    n = 5
    y_0 = 1
    t_i = 0
    h = (t_f - t_i)/n
    euler_explicit_results = euler_explicit(y_0, h, n)
    euler_aprim_results = euler_aprim(y_0, h, n)
    for i in range(0, n+1):
        print("Diff in position "+str(i)+" is:"+str(euler_explicit_results[i]-euler_aprim_results[i]))

if __name__ == "__main__":
    main()
My output is:
Diff in position 0 is:0
Diff in position 1 is:-0.5
Diff in position 2 is:-2.25
Diff in position 3 is:-7.625
Diff in position 4 is:-23.0625
Diff in position 5 is:-65.65625
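For reference, with h = 1 both recurrences have simple closed forms that reproduce this output: the explicit method gives y_k = (1 + h)^k * y_0 = 2^k, while the enhanced method gives y_k = (1 + h + h^2/2)^k * y_0 = 2.5^k, so the printed difference is 2^k - 2.5^k.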
I'm writing a code that solves a heat equation, implementing an implicit method. The problem is that the values between the first and last layers of the matrix are NaNs. What could be the problem?
From my point of view, the main issue might be with the 105th line, which represents the conversion of the original function to the one that includes the boundary functions.
Boundary functions code:
def func(x, t):
    return x*(1 - x)*np.exp(-2*t)

# boundary functions for x = 0 and x = 1
def q0(t):
    return t*np.exp(-t/0.1)*np.cos(t)  # boundary condition at x = 0

def q1(t):
    return t*np.exp(-t/0.5)*np.cos(t)  # boundary condition at x = 1

def derivative(f, x0, step):
    return (f(x0+step) - f(x0))/step

# boundary function for t = 0
def u_x0(x):
    return (-x + 1)*x
Function that solves the tridiagonal matrix equation:
def solution(a, b):
    n = len(a)
    x = [0 for k in range(0, n)]
    # forward sweep
    v = [0 for k in range(0, n)]
    u = [0 for k in range(0, n)]
    # first row (t = 0)
    v[0] = a[0][1] / (-a[0][0])
    u[0] = (-b[0]) / (-a[0][0])
    for i in range(1, n - 1):
        v[i] = a[i][i+1] / (-a[i][i] - a[i][i-1]*v[i-1])
        u[i] = (a[i][i-1]*u[i-1] - b[i]) / (-a[i][i] - a[i][i-1]*v[i-1])
    # last row (t = 1)
    v[n-1] = 0
    u[n-1] = (a[n-1][n-2]*u[n-2] - b[n-1]) / (-a[n-1][n-1] - a[n-1][n-2]*v[n-2])
    # backward substitution
    x[n-1] = u[n-1]
    for i in range(n-1, 0, -1):
        x[i-1] = v[i-1] * x[i] + u[i-1]
    return x
Coefficient matrix values:
A = -t/h**2
B = 1 + 2*t/h**2
C = -t/h**2
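For reference, these are the coefficients of the standard implicit (backward Euler) scheme for u_t = u_xx, assuming t here denotes the time step τ:
-(τ/h²)·u_{i-1}^{n+1} + (1 + 2τ/h²)·u_i^{n+1} - (τ/h²)·u_{i+1}^{n+1} = u_i^n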
Code that actually solves the matrix:
i = 1
X = []
while i < 99:
    X = solution(cool_array, f)
    k = 0
    while k < len(x_i):
        #line-105
        X[k] += 0.01*(func(x_i[k], x_i[i]) - (1 - x_i[i])*derivative(q0, x_i[i], 0.01) - (x_i[i])*derivative(q1, x_i[i], 0.01))
        k += 1
    a = 1
    while a < 98:
        w_h_t[i][a] = X[a]
        a += 1
    f = X
    f[0] = w_h_t[i][0]
    f[99] = w_h_t[i][99]
    i += 1
print(w_h_t)
As far as I understand, the algorithm in solution(a, b) is written properly, so I guess the problem might be with the boundary functions or with the 105th line. The output I expect is at least an array of numbers, not NaNs.
I have a Python code that implements Dynamic Time Warping, which I use to compare a predicted curve to my actual curve. I care about the shape of the curve, but also about the distance between the two curves. I z-normalized the two curves before calling the function that returns the cost. However, I got weird results. For example:
I got a cost of 0.28 for this example:
While I got 0.38 for the example below:
In the first plot, the prediction is much farther away than in the second plot. I even got the same value of 0.28 with a prediction that is very far off, such as 5000 points further away. What is wrong here?
Below is my code from this source:
# Dynamic Time Warping algorithm
def dp(dist_mat):
    N, M = dist_mat.shape

    # Initialize the cost matrix
    cost_mat = numpy.zeros((N + 1, M + 1))
    for i in range(1, N + 1):
        cost_mat[i, 0] = numpy.inf
    for i in range(1, M + 1):
        cost_mat[0, i] = numpy.inf

    # Fill the cost matrix while keeping traceback information
    traceback_mat = numpy.zeros((N, M))
    for i in range(N):
        for j in range(M):
            penalty = [
                cost_mat[i, j],      # match (0)
                cost_mat[i, j + 1],  # insertion (1)
                cost_mat[i + 1, j]]  # deletion (2)
            i_penalty = numpy.argmin(penalty)
            cost_mat[i + 1, j + 1] = dist_mat[i, j] + penalty[i_penalty]
            traceback_mat[i, j] = i_penalty

    # Traceback from bottom right
    i = N - 1
    j = M - 1
    path = [(i, j)]  # the traceback below is commented out because I am not interested in the path
    # while i > 0 or j > 0:
    #     tb_type = traceback_mat[i, j]
    #     if tb_type == 0:
    #         # Match
    #         i = i - 1
    #         j = j - 1
    #     elif tb_type == 1:
    #         # Insertion
    #         i = i - 1
    #     elif tb_type == 2:
    #         # Deletion
    #         j = j - 1
    #     path.append((i, j))

    # Strip infinity edges from cost_mat before returning
    cost_mat = cost_mat[1:, 1:]
    return (path[::-1], cost_mat)
I use the above code as below:
z_actual = stats.zscore(actual)
z_pred = stats.zscore(mean_predictions)

N = actual.shape[0]
M = mean_predictions.shape[0]
dist_mat = numpy.zeros((N, M))
for i in range(N):
    for j in range(M):
        dist_mat[i, j] = abs(z_actual[i] - z_pred[j])

path, cost_mat = dp(dist_mat)
mape = cost_mat[N - 1, M - 1]/(N + M)
I want to solve the time-dependent Heisenberg equation of motion for a spin-lattice system using Runge-Kutta in Python, but I'm open to any suggestion on how to solve the equation.
I start with a system of N*N classical vectors on a triangular lattice in a 120° structure (every vector is rotated by 120° relative to its neighbors), with only nearest-neighbor interactions, by defining a matrix with (N, N, 3) entries.
Normally one would expect that exciting the spins (in the form of a time-dependent spin flip) would lead to some propagation over the lattice, but so far I see no movement of the vectors (or only a tiny bit). The equation is the classical equation of motion dS_i/dt = S_i x (-J * sum over nearest neighbors S_j + B), but it is in my case a bit simpler (source: https://arxiv.org/pdf/1711.03778.pdf): I use only one J and I have no magnetic field B. localField computes the first term of this equation:
def localField(J, S, i, j):
    n = len(S)
    h = []
    hx = -J*(S[(i - 1) % n, j][0] + S[(i + 1) % n, j][0] + S[i, (j - 1) % n][0] + S[(i - 1) % n, (j + 1) % n][0] + S[(i + 1) % n, (j - 1) % n][0] + S[i, (j + 1) % n][0])
    hy = -J*(S[(i - 1) % n, j][1] + S[(i + 1) % n, j][1] + S[i, (j - 1) % n][1] + S[(i - 1) % n, (j + 1) % n][1] + S[(i + 1) % n, (j - 1) % n][1] + S[i, (j + 1) % n][1])
    hz = -J*(S[(i - 1) % n, j][2] + S[(i + 1) % n, j][2] + S[i, (j - 1) % n][2] + S[(i - 1) % n, (j + 1) % n][2] + S[(i + 1) % n, (j - 1) % n][2] + S[i, (j + 1) % n][2])
    h.append(hx)
    h.append(hy)
    h.append(hz)
    return h

def HeisenEqM2(conf, J1):
    S = np.copy(conf)
    n = len(S)
    Snew = np.zeros((n, n, 3))
    for i in range(n):
        for j in range(n):
            localFie = localField(J1, S, i, j)
            Snew[i, j] += np.cross(S[i, j], localFie)
    return Snew
Then I start rotating one spin in the middle of the lattice and want to see if a wave propagates through the lattice (according to spin-wave theory).
def circ_ex_2(omega, t, config):  # referred to as circ_ex in the code below
    n = len(config)
    S = np.copy(config)
    i = int(n/2)
    j = int(n/2)
    S[i, j][0] += np.cos(omega*t)
    S[i, j][1] += np.sin(omega*t)
    S[i, j][2] += 0
    S = norm_rand(S)
    return S
My idea: This is my definition of the Runge-Kutta method. In the end I normalize my new spin configuration to ensure that the size of the vectors stays constant and calculate the energy to see what is happening:
def runge_kutta_4th(S, omega, J1, Tstart, Tmax, tsteps):
    n = len(S)
    S2 = np.copy(S)
    tt = np.linspace(Tstart, Tmax, tsteps)
    dt = (Tmax - Tstart)/tsteps
    en = []
    for i in range(tsteps):
        S1 = np.copy(circ_ex(omega, tt[i], S2))
        k1 = HeisenEqM2(S1, J1)
        k2 = HeisenEqM2(circ_ex(omega, tt[i] + dt/2, S1 + dt/2*k1), J1)
        k3 = HeisenEqM2(circ_ex(omega, tt[i] + dt/2, S1 + dt/2*k2), J1)
        k4 = HeisenEqM2(circ_ex(omega, tt[i] + dt, S1 + dt*k3), J1)
        S2 = S1 + dt*(1/6*k1 + 1/3*k2 + 1/3*k3 + 1/6*k4)
        S2 = norm_rand(S2)
        en.append(JEnergy(J1, S2)/n**2)
    return S2, en
To reproduce the code, you can start with the following spin-lattice system:
# defining a Neel lattice
def neelM(nsize):
    X = np.ones((nsize, nsize))*np.pi/2
    for i in range(nsize):
        for j in range(nsize):
            X[i, j] += 2*np.pi/3*(i + 2*j)
    P = polar(X)
    return P

# returning the Neel lattice as XY components
def neelMXY(M):
    n = len(M[0])
    v = np.zeros((int(n**2), 3))
    X = M[0].flatten()
    Y = M[1].flatten()
    for i in range(n**2):
        v[i, 0] += X[i]
        v[i, 1] += Y[i]
        v[i, 2] = 0
    return v
This is the Néel state, the ground state of the triangular Heisenberg antiferromagnet. You will also need a normalization of the spin vectors, defined very simply:
def norm_rand(M):
    n = len(M)
    Mnew = np.zeros(np.shape(M))
    for i in range(n):
        for j in range(n):
            norm = np.linalg.norm(M[i, j])
            Mnew[i, j] += M[i, j]/norm
    return Mnew
To see where the energy of the system is, I defined the Heisenberg energy, which is just the scalar product of each vector with all of its neighbors:
def JEnergy(J, S):
    n = np.shape(S[1])[0]
    H = 0
    for i in range(n):
        for j in range(n):
            H += 1/2*J*(np.dot(S[i, j], S[(i - 1) % n, j])
                        + np.dot(S[i, j], S[(i + 1) % n, j])
                        + np.dot(S[i, j], S[i, (j - 1) % n])
                        + np.dot(S[i, j], S[(i - 1) % n, (j + 1) % n])
                        + np.dot(S[i, j], S[(i + 1) % n, (j - 1) % n])
                        + np.dot(S[i, j], S[i, (j + 1) % n]))
    return H
After all you can run the following:
neel24 = np.reshape(neelMXY(neelM(24)), (24, 24, 3))
rk24 = runge_kutta_4th(neel24, 5*np.pi, 1, 0, 10, 10000)

Tt = np.linspace(0, 10, 10000)
list0 = []
for i in range(10000):
    list0.append(JEnergy(1, circ_ex(5*np.pi, Tt[i], iter24)))

plt.plot(np.linspace(0, 10, 10000), rk24[1])
plt.plot(np.linspace(0, 10, 10000), np.dot(list0, 1/24**2))
It shows that just flipping one spin in time has a larger effect than the Runge-Kutta evolution. But this cannot be the case, since through the cross product in the equation of motion the change of one spin should affect the other spins as well, and should therefore lead to a larger energy change. I have also plotted the vectors using quiver, and it shows that the vectors don't change much over time. This plot shows the energy:
It would be great if someone could help me. The problem should be solvable, since the paper above has done a similar thing, but with a more complicated system.
n = 5
k = 3

x = 1
for i in range(1, n + 1):
    x = x * i

y = 1
for i in range(1, k + 1):
    y = y * i

z = 1
for i in range(1, n - k + 1):
    z = z * i

c = x / (y * z)

p = 1
for i in range(k):
    p = p * (1 / 6)

q = 1
for i in range(n - k):
    q = q * (5 / 6)

result = c * p * q
So the code above calculates the probability of seeing exactly 3 sixes when throwing 5 dice. However, I'm unsure about the loops in this code.
I'm aware that:
n = number of trials
k = number of successes
and p/q are the success/failure probabilities.
But what are the loops doing? I'm also unsure about the variables x, y, z and c. Traditionally I would just use powers to get the answer for these types of questions, but I'm unsure about this method.
Thank you
It seems like c is the binomial coefficient used in this probability calculation, and the loops are used to calculate the required factorials (x is n!, y is k!, and z is (n-k)!).
The loops for p and q are indeed used to calculate the powers of the success/failure probabilities.
This code would be much nicer using pow and math.factorial.
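For example, a minimal equivalent version (assuming the same values n = 5, k = 3 and a success probability of 1/6):

import math

n, k = 5, 3   # number of trials, number of successes
p = 1 / 6     # probability of rolling a six

# binomial coefficient n! / (k! * (n - k)!)
c = math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

result = c * pow(p, k) * pow(1 - p, n - k)
print(result)  # ~0.0322

On Python 3.8+ you can also use math.comb(n, k) to get the binomial coefficient directly.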
I want to generate two linear chains of 20 monomers each, at some distance from each other. The following code generates a single chain. Could someone help me with how to generate the second chain?
The two chains are fixed to a surface, i.e. the first monomer of each chain is fixed and the rest of the monomers move freely in the x-y-z directions, but the z component of the monomers should stay positive.
Something like this:
import numpy as np
import numba as nb
#import pandas as pd
#nb.jit()
def gen_chain(N):
    x = np.zeros(N)
    y = np.zeros(N)
    z = np.linspace(0, N*0.9, num=N)
    return np.column_stack((x, y, z))
    # my attempt at returning a second chain as well; x1, y1, z1 are not defined yet:
    # return np.column_stack((x, y, z)), np.column_stack((x1, y1, z1))
#coordinates = np.loadtxt('2GN_50_T_10.txt', skiprows=199950)
#return coordinates
#nb.jit()
def lj(rij2):
    sig_by_r6 = np.power(sigma**2 / rij2, 3)
    sig_by_r12 = np.power(sigma**2 / rij2, 6)
    lje = 4 * epsilon * (sig_by_r12 - sig_by_r6)
    return lje

#nb.jit()
def fene(rij2):
    return (-0.5 * K * np.power(R, 2) * np.log(1 - ((np.sqrt(rij2) - r0) / R)**2))

#nb.jit()
def total_energy(coord):
    # Non-bonded energy.
    e_nb = 0.0
    for i in range(N):
        for j in range(i - 1):
            ri = coord[i]
            rj = coord[j]
            rij = ri - rj
            rij2 = np.dot(rij, rij)
            if (rij2 < rcutoff_sq):
                e_nb += lj(rij2)
    # Bonded FENE potential energy.
    e_bond = 0.0
    for i in range(1, N):
        ri = coord[i]
        rj = coord[i - 1]  # Can be [i+1] ??
        rij = ri - rj
        rij2 = np.dot(rij, rij)
        e_bond += fene(rij2)
    return e_nb + e_bond
#nb.jit()
def move(coord):
    trial = np.ndarray.copy(coord)
    for i in range(1, N):
        while True:
            delta = (2 * np.random.rand(3) - 1) * max_delta
            trial[i] += delta
            if trial[i, 2] > 0.0:
                break
            trial[i] -= delta
    return trial

#nb.jit()
def accept(delta_e):
    beta = 1.0 / T
    if delta_e < 0.0:
        return True
    random_number = np.random.rand(1)
    p_acc = np.exp(-beta * delta_e)
    if random_number < p_acc:
        return True
    return False
if __name__ == "__main__":

    # FENE potential parameters.
    K = 40.0
    R = 0.3
    r0 = 0.7

    # L-J potential parameters.
    sigma = 0.5716
    epsilon = 1.0

    # MC parameters.
    N = 20  # number of monomers
    rcutoff = 2.5 * sigma
    rcutoff_sq = rcutoff * rcutoff
    max_delta = 0.01
    n_steps = 100000
    T = 10

    # MAIN PART OF THE CODE
    coord = gen_chain(N)
    energy_current = total_energy(coord)

    traj = open('2GN_20_T_10.xyz', 'w')
    traj_txt = open('2GN_20_T_10.txt', 'w')

    for step in range(n_steps):
        if step % 1000 == 0:
            traj.write(str(N) + '\n\n')
            for i in range(N):
                traj.write("C %10.5f %10.5f %10.5f\n" % (coord[i][0], coord[i][1], coord[i][2]))
                traj_txt.write("%10.5f %10.5f %10.5f\n" % (coord[i][0], coord[i][1], coord[i][2]))
            print(step, energy_current)
        coord_trial = move(coord)
        energy_trial = total_energy(coord_trial)
        delta_e = energy_trial - energy_current
        if accept(delta_e):
            coord = coord_trial
            energy_current = energy_trial

    traj.close()
I expect the chain of particles to collapse into a globule.
There is a problem with the logic of the MC you are implementing.
To perform an MC step you need to attempt a move, evaluate the energy of the new state, and then accept/reject it according to a random number.
In your code there is not the slightest sign of an attempt to move a particle.
You need to move one (or more) of them, evaluate the energy, and then update your coordinates, roughly as in the sketch below.
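Schematically, one step looks like this (a minimal sketch; metropolis_step is an illustrative name, not from the posted code, and it relies on the move, total_energy and np names used elsewhere in this post):

def metropolis_step(coord, energy_current, T):
    trial = move(coord)                      # 1) attempt a move
    energy_trial = total_energy(trial)       # 2) evaluate the energy of the new state
    delta_e = energy_trial - energy_current
    # 3) accept or reject according to a random number
    if delta_e <= 0.0 or np.random.rand() < np.exp(-delta_e / T):
        return trial, energy_trial           # accepted: update the coordinates
    return coord, energy_current             # rejected: keep the old state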
By the way, I suppose this is not your entire code; there are several parameters that are not defined, like the K and the R0 in your FENE potential.
The FENE potential models bond interactions. What your code is saying is that all particles within the cutoff are bonded by FENE springs, and that the bonds are not fixed but rather defined by the cutoff. With r_cutoff = 3.0, larger than the equilibrium distance of the LJ well, you are essentially considering each particle as potentially bonded to many others. You are treating the FENE potential as a non-bonded one.
For the bond interactions you should ignore the cutoff and only evaluate the energy for the pairs that are actually bonded according to your topology, which means that you first need to define a topology. I suggest generating a linear molecule of N atoms in a box big enough to contain the whole stretched molecule, and considering the i-th atom as bonded to the (i-1)-th atom, with i = 2, ..., N. In this way the topology is well defined and persistent. Then consider the two interactions separately, non-bonded and bonded, and add them at the end.
Something like this, in pseudo-code:
e_nb = 0
for particle i = 1 to N:
    for particle j = 1 to i-1:
        if (dist(i, j) < rcutoff):
            e_nb += lj(i, j)

e_bond = 0
for particle i = 2 to N:
    e_bond += fene(i, i-1)

e_tot = e_nb + e_bond
Below you can find a modified version of your code. To make things simpler, in this version there is no box and no boundary conditions, just a chain in free space. The chain is initialized as a linear sequence of particles, each at a distance of 80% of R0 from the next, since R0 is the maximum length of the FENE bond. The code considers that particle i is bonded to particle i+1 and that this bond is never broken. This code is just a proof of concept.
#!/usr/bin/python
import numpy as np
def gen_chain(N, R):
    x = np.linspace(0, (N-1)*R*0.8, num=N)
    y = np.zeros(N)
    z = np.zeros(N)
    return np.column_stack((x, y, z))

def lj(rij2):
    sig_by_r6 = np.power(sigma/rij2, 3)
    sig_by_r12 = np.power(sig_by_r6, 2)
    lje = 4.0 * epsilon * (sig_by_r12 - sig_by_r6)
    return lje

def fene(rij2):
    return (-0.5 * K * R0**2 * np.log(1 - (rij2/R0**2)))
def total_energy(coord):
    # Non-bonded
    e_nb = 0
    for i in range(N):
        for j in range(i-1):
            ri = coord[i]
            rj = coord[j]
            rij = ri - rj
            rij2 = np.dot(rij, rij)
            if (rij2 < rcutoff):
                e_nb += lj(rij2)
    # Bonded
    e_bond = 0
    for i in range(1, N):
        ri = coord[i]
        rj = coord[i-1]
        rij = ri - rj
        rij2 = np.dot(rij, rij)
        e_bond += fene(rij2)
    return e_nb + e_bond
def move(coord):
    trial = np.ndarray.copy(coord)
    for i in range(N):
        delta = (2.0 * np.random.rand(3) - 1) * max_delta
        trial[i] += delta
    return trial

def accept(delta_e):
    beta = 1.0/T
    if delta_e <= 0.0:
        return True
    random_number = np.random.rand(1)
    p_acc = np.exp(-beta*delta_e)
    if random_number < p_acc:
        return True
    return False
if __name__ == "__main__":

    # FENE parameters
    K = 40
    R0 = 1.5

    # LJ parameters
    sigma = 1.0
    epsilon = 1.0

    # MC parameters
    N = 50  # number of particles
    rcutoff = 3.5
    max_delta = 0.01
    n_steps = 10000000
    T = 1.5

    coord = gen_chain(N, R0)
    energy_current = total_energy(coord)

    traj = open('traj.xyz', 'w')

    for step in range(n_steps):
        if step % 1000 == 0:
            traj.write(str(N) + '\n\n')
            for i in range(N):
                traj.write("C %10.5f %10.5f %10.5f\n" % (coord[i][0], coord[i][1], coord[i][2]))
            print(step, energy_current)
        coord_trial = move(coord)
        energy_trial = total_energy(coord_trial)
        delta_e = energy_trial - energy_current
        if accept(delta_e):
            coord = coord_trial
            energy_current = energy_trial

    traj.close()
The code prints the current configuration at each step; you can just load it up in VMD and see how it behaves. The bonds will not show correctly at first in VMD: you must use a bead representation for the particles and define the bonds manually or with a script within VMD. In any case, you don't need to see the bonds to notice that the chain does not collapse.
Please bear in mind that if you want to simulate a chain at a certain density, you need to be careful to generate the correct topology. I recommend the EMC package to efficiently generate polymers at the desired thermodynamic conditions. It is by no means a trivial problem, especially for larger chains.
By the way, your code had an error in the FENE energy evaluation: rij2 is already the squared distance, and you squared it again.
Below you can see how the total energy as a function of the number of steps behaves for T = 1.0, N = 20, rcutoff = 3.5, and also the last current configuration after 10 thousand steps.
And below for N = 50, T = 1.5, max_delta = 0.01, K = 40, R0 = 1.5, rcutoff = 3.5, and 10 million steps. This is the last current configuration.
The full "trajectory" (which isn't really a trajectory, since this is MC) can be found here (it's under 6 MB).