solving 1D Schrödinger equation with Numerov method (python)

Good evening.
I'm currently trying to solve the 1D Schrödinger eq. (time independent) with the Numerov method. The derivation of the method is clear to me but I have some problems with the implementation. I tried to look for solutions on google, and there are some (like this one or this one), but I don't really understand what they are doing in their codes...
The Problem:
With some math you can bring the equation into this form:
psi''(x) + k(x)^2 * psi(x) = 0
where k(x)^2 = 2m(E - V(x)) / hbar^2. To begin with, I'd like to look at the potential V(x) = 1 if -a < x < a and V(x) = 0 otherwise.
Since I don't have values for the energy or the first values of Psi (which are needed to start the algorithm) I just guessed some...
The code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import hbar
m= 1e-27
E= 0.5
def numerov_step(psi_1, psi_2, k1, k2, k3, h):
    # k1 = k_(n-1), k2 = k_n, k3 = k_(n+1)
    # psi_1 = psi_(n-1) and psi_2 = psi_n
    m = 2*(1 - 5/12. * h**2 * k2**2) * psi_2
    n = (1 + 1/12. * h**2 * k1**2) * psi_1
    o = 1 + 1/12. * h**2 * k3**2
    return (m - n)/o

def numerov(N, x0, xE, a):
    x, dx = np.linspace(x0, xE, N+1, retstep=True)

    def V(x, a):
        if np.abs(x) < a:
            return 1
        else:
            return 0

    k = np.zeros(N+1)
    for i in range(len(k)):
        k[i] = 2*m*(E - V(x[i], a))/hbar**2

    psi = np.zeros(N+1)
    psi[0] = 0
    psi[1] = 0.1

    for j in np.arange(2, N):
        psi[j+1] = numerov_step(psi[j], psi[j+1], k[j-1], k[j], k[j+1], dx)
    return psi
x0 =-10
xE = 10
N =1000
psi=numerov(N,x0,xE,3)
x = np.linspace(x0,xE,N+1)
plt.figure()
plt.plot(x,psi)
plt.show()
Since the plot doesn't look like a wavefunction at all, something has to be wrong, but I'm having trouble finding out what it is. It would be nice if someone could help a little.
Thanks Sito

Unfortunately I don't quite remember the quantum physics, so I don't understand some details. Still, I see some bugs in your code:
Why do you square k1, k2 and k3 inside numerov_step? Your array k already stores 2m(E - V)/hbar^2, which is k^2, so it gets squared a second time there.
In your main loop
for j in np.arange(2,N):
    psi[j+1] = numerov_step(psi[j], psi[j+1], k[j-1], k[j], k[j+1], dx)
the indices are mixed up (psi[j+1] is used as an input before it has been computed). It looks like this line should be
for j in np.arange(2, N):
    psi[j] = numerov_step(psi[j - 2], psi[j - 1], k[j - 2], k[j - 1], k[j], dx)
This is the part I don't really understand: looking at the animation in your first link, it seems this equation has well-behaved solutions only for certain combinations of V(x) and E, and in other cases it quickly goes wild. Both your V(x) and the ratio of E to hbar and V(x) are quite different from the referenced articles, and this might be one more reason why the solution goes wild.
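Putting both fixes together, here is a minimal corrected sketch. This is my reading of the intended algorithm, not the original poster's code: the array ksq holds k(x)^2 = 2m(E - V(x))/hbar^2 and is not squared again inside numerov_step, and natural units hbar = m = 1 are assumed so the magnitudes stay reasonable. E is still just a guess, so the result is not an eigenstate; one would still need e.g. a shooting loop over E.

import numpy as np
import matplotlib.pyplot as plt

hbar = 1.0   # natural units, an assumption for this sketch
m = 1.0
E = 0.5

def numerov_step(psi_1, psi_2, k1, k2, k3, h):
    # k1, k2, k3 are k^2 at x_(n-1), x_n, x_(n+1); psi_1 = psi_(n-1), psi_2 = psi_n
    t2 = 2*(1 - 5/12. * h**2 * k2) * psi_2
    t1 = (1 + 1/12. * h**2 * k1) * psi_1
    t3 = 1 + 1/12. * h**2 * k3
    return (t2 - t1) / t3

def numerov(N, x0, xE, a):
    x, dx = np.linspace(x0, xE, N + 1, retstep=True)
    V = np.where(np.abs(x) < a, 1.0, 0.0)   # square barrier of height 1
    ksq = 2*m*(E - V)/hbar**2               # this is k(x)^2, not k(x)
    psi = np.zeros(N + 1)
    psi[0], psi[1] = 0.0, 0.1
    for j in range(2, N + 1):
        psi[j] = numerov_step(psi[j-2], psi[j-1], ksq[j-2], ksq[j-1], ksq[j], dx)
    return x, psi

x, psi = numerov(1000, -10, 10, 3)
plt.plot(x, psi)
plt.show()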

Related

Avoiding divergent solutions with odeint? shooting method

I am trying to solve an equation in Python. Basically what I want to do is to solve the equation:
(1/x^2) * d(Gam*dL/dx)/dx + (a^2*x^2/Gam - m^2) * L = 0
This is the Klein-Gordon equation for a massive scalar field in a Schwarzschild spacetime. It is assumed that we know m and Gam = x^2 - 2*x. The initial/boundary conditions that I know are L(2+epsilon) = 1 and L(infty) = 0. Notice that the asymptotic behaviour of the equation is
L(x --> infty) --> Exp[sqrt(m^2 - a^2)*x]/x and Exp[-sqrt(m^2 - a^2)*x]/x
Then, if a^2 > m^2 we will have oscillatory solutions, while if a^2 < m^2 we will have a divergent and a decaying solution.
What I am interested in is the decaying solution. However, when I try to solve the above equation by transforming it into a system of first-order differential equations and using the shooting method to find the "a" that gives the behaviour I am interested in, I always get a divergent solution. I suppose this happens because odeint always finds the divergent asymptotic solution. Is there a way to avoid this, or to tell odeint that I am interested in the decaying solution? If not, do you know a way I could solve this problem? Maybe using another method for solving my system of differential equations? If yes, which method?
Basically what I am doing is adding a new system of equations for "a"
(d^2a/dx^2=0, da/dx(2+epsilon)=0,a(2+epsilon)=a_0)
in order to have "a" as a constant. Then I am considering different values for "a_0" and asking if my boundary conditions are fulfilled.
Thanks for your time. Regards,
Luis P.
I am incorporating the value at infinity by considering the asymptotic behaviour, which means that I have a relation between the field and its derivative. I will post the code in case it is helpful:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from math import *
from scipy.integrate import ode
These are the initial conditions for Schwarzschild. The field is invariant under rescaling, so I can use L(2+epsilon) = 1:
def init_sch(u_sch):
    om = u_sch[0]
    return np.array([1,0,om,0])  # conditions near the horizon, [L_c, dL/dx, a, da/dx]
This is our system of equations:
def F_sch(IC, r, rho_c, m, lam, l, j=0, mu=0):
    L = IC[0]
    ph = IC[1]
    om = IC[2]
    b = IC[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2.+lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.
    return [dR_dr, dph_dr, dom_dr, db_dr]
Then I try different values of "om" and check whether my boundary conditions are fulfilled. p_sch are the parameters of my model. In general what I want to do is a little more complicated and I will need more parameters than in the purely massive case. However, I need to start with the easiest case, which is what I am asking about here.
p_sch = (1,1,0,0) #[rho_c,m,lam,l], lam and l are for a more complicated case
ep = 0.2
ep_r = 0.01
r_end = 500
n_r = 500000
n_omega = 1000
omega = np.linspace(p_sch[1]-ep,p_sch[1],n_omega)
r = np.linspace(2+ep_r,r_end,n_r)
tol = 0.01
a = 0
for j in range(len(omega)):
    print('trying with $omega =$', omega[j])
    omeg = [omega[j]]
    ini = init_sch(omeg)
    Y = odeint(F_sch, ini, r, p_sch, mxstep=50000000)
    print(Y[-1,0])
    # Here I ask if my asymptotic behaviour is fulfilled or not. This should basically be my value at infinity
    if abs(Y[-1,0]*((p_sch[1]**2.-Y[-1,2]**2.)**(1/2.)+1./(r[-1]))+Y[-1,1]) < tol:
        print(j, 'times iterations in omega')
        print("R'(inf)) = ", Y[-1,0])
        print("\omega", omega[j])
        omega_1 = [omega[j]]
        a = 10
        break
    if a > 1:
        break
Basically what I want to do here is to solve the system of equations with different initial conditions and find a value of "a" (or "om" in the code) that comes close to satisfying my boundary conditions. I need this because afterwards I can give such an initial guess to a secant method and try to find a better value for "a". However, whenever I run this code I get divergent solutions, which is of course behaviour I am not interested in. I am trying the same thing with scipy.integrate.solve_bvp, but when I run the following code:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from math import *
from scipy.integrate import solve_bvp
def bc(ya, yb, p_sch):
    m = p_sch[1]
    om = p_sch[4]
    tol_s = p_sch[5]
    r_end = p_sch[6]
    return np.array([ya[0]-1, yb[0]-tol_s, ya[1], yb[1]+((m**2-yb[2]**2)**(1/2)+1/r_end)*yb[0], ya[2]-om, yb[2]-om, ya[3], yb[3]])

def fun(r, y, p_sch):
    rho_c = p_sch[0]
    m = p_sch[1]
    lam = p_sch[2]
    l = p_sch[3]
    L = y[0]
    ph = y[1]
    om = y[2]
    b = y[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2.+lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.*y[3]
    return np.vstack((dR_dr, dph_dr, dom_dr, db_dr))
eps_r=0.01
r_end = 500
n_r = 50000
r = np.linspace(2+eps_r,r_end,n_r)
y = np.zeros((4,r.size))
y[0]=1
tol_s = 0.0001
p_sch= (1,1,0,0,0.8,tol_s,r_end)
sol = solve_bvp(fun,bc, r, y, p_sch)
I am obtaining this error: ValueError: bc return is expected to have shape (11,), but actually has (8,).
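For what it's worth, the shape in that error comes from how solve_bvp treats its fifth argument: everything passed as p is taken to be an unknown parameter to solve for, so with the 7-element p_sch the boundary-condition function is expected to return 4 + 7 = 11 residuals, while bc returns 8. Below is a minimal sketch of one way around this, under the assumption that only om is treated as an unknown parameter, the fixed model constants live in plain Python variables, and the dummy equations for om are dropped (so the system has 2 components and bc returns 2 + 1 = 3 residuals). Convergence is of course not guaranteed:

import numpy as np
from scipy.integrate import solve_bvp

# fixed model constants (not unknowns, so they are not passed through p)
m, lam, l = 1.0, 0.0, 0.0
eps_r, r_end, tol_s = 0.01, 500.0, 1e-4

def fun(r, y, p):
    om = p[0]                       # the only unknown parameter
    L, ph = y
    Gam = r**2 - 2.0*r
    dL_dr = ph
    dph_dr = (1.0/Gam)*(2.0*(1.0 - r)*ph + L*l*(l + 1.0)
                        - om**2*r**4*L/Gam + (m**2 + lam*L**2)*r**2*L)
    return np.vstack((dL_dr, dph_dr))

def bc(ya, yb, p):
    om = p[0]
    kappa = np.sqrt(m**2 - om**2)   # decay rate of the wanted solution
    return np.array([ya[0] - 1.0,                        # L(2 + eps_r) = 1
                     yb[1] + (kappa + 1.0/r_end)*yb[0],  # pick the decaying asymptotics
                     yb[0] - tol_s])                     # L(r_end) ~ 0

r = np.linspace(2.0 + eps_r, r_end, 500)
y0 = np.zeros((2, r.size))
y0[0] = np.exp(-(r - r[0]))         # rough decaying initial guess
sol = solve_bvp(fun, bc, r, y0, p=[0.8], max_nodes=100000)
print(sol.status, sol.message, sol.p)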

Error from program using SymPy

I would like to use the SymPy packages to find the roots of a fourth-order polynomial equation. Subsequently I would like to plot these roots as a function of the parameters of the polynomial equations. I have written the piece of code below. It seems to calculate everything fine, but I cannot plot the results as I get the error "x and y are not of the same dimension". I think it has something to do with my usage of SymPy, because normally it always works like this.
from sympy import *
from math import *
from numpy import *
import pylab as lab
def RootFunc(root, m, c0, r, En):
    A = 2*(m**2 - 0.25 - c0**2)/r**2 + 4
    B = 8*En*c0/r
    C = -4 - 4*En**2 + ((c0**2 + m**2 - .25)/r**2 + 2)**2
    return root.subs([(a,A),(b,B),(c,C)])
# Define necessary symbols
x = symbols('x')
a, b, c = symbols('a b c')
En, r = symbols("En r")
# Fix constants
m = 0
c0 = -2
# Solve equation
eq = x**4 + a*x**2 + b*x + c
sol = solve(eq,x)
root1 = sol[0]
grid = linspace(1,10,10)
sol1 = [RootFunc(root1, m, c0, r, .5) for r in grid]
lab.figure(1)
lab.plot(grid,sol1)
lab.show()
Are you sure that you are running the same script that you've given us here?
I say this because I can copy and paste your example verbatim and it works with absolutely no issue.
Once you've checked, could you post which version of Python, SymPy, NumPy and Matplotlib you're using please?
Edit: I think something got slightly lost in translation when you put up your first minimal working example (MWE). The solution in your MWE was real-valued so it didn't have the same issue as your actual program. However, onto the solution:
Your main issue here is this line
sol1 = [RootFunc(root1, m, c0, help, .5) for help in grid]
RootFunc in this case returns a sympy.core.add.Add which pylab has no concept of and therefore can't plot. In your MWE you recognised that this was the issue and tried calling N() and real() on the return value. Unfortunately this just wraps the sympy.core.add.Add object in a NumPy array. When Pylab tries to plot this array it finds a sympy.core.add.Add object which it has no concept of and therefore just throws an error.
Fortunately SymPy allows you to turn a sympy.core.add.Add object into a number using int(), float() or complex(). Since your roots are complex you should use complex() on the return value and then to get the real component use .real.
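For illustration, here is a tiny standalone sketch of that conversion step (the expression is made up, it just shows the types involved):

import sympy as sy

a = sy.symbols('a')
expr = (a + sy.I).subs(a, 2)   # a sympy.core.add.Add with value 2 + I
z = complex(expr)              # plain Python complex: (2+1j)
print(type(expr), z, z.real)   # -> <class 'sympy.core.add.Add'> (2+1j) 2.0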
So to get it to work you should just change the above line to
sol1 = [complex(RootFunc(root1, m, c0, help, .5)).real for help in grid]
Edit2: Just a quick point about style. You're using a lot of wildcard imports in your code (e.g. from numpy import *), which is fine if you're the only person using the code; it does make it neater, after all.
However, if you're going to be posting on a forum like this please could you try to use qualified imports (like you've done for pylab) so that we don't have to go trudging through the documentation for all the modules you've used to try and figure out what you're doing.
One other thing: when you encounter a problem like this it really helps to execute it line by line in the python shell and examine the types (with type()) and values (with print() or repr()) of your variables. For this purpose I would strongly urge you to learn how to use IPython as it can really help.
You might be breaking some things with your imports. Can you try this:
import sympy as sy
import numpy as np
import pylab as lab
def RootFunc(root, A, B):
    return root.subs([(a,A),(b,B)])
# Define necessary symbols
x = sy.symbols('x')
a, b = sy.symbols('a b')
# Solve equation
eq = x**4 + a*x**2 + b*x
sol = sy.solve(eq,x)
root1 = sol[1] # first element is trivial solution, so take second one
grid = np.linspace(1,10,10)
sol1 = [np.real(sy.N(RootFunc(root1, 1, x))) for x in grid]
lab.figure(1)
lab.plot(grid,sol1)
lab.show()

Bifurcation diagram

I want to draw a Bifurcation diagram of quadratic map in python.
Basically it's a plot of x_{n+1} = x_n^2 - c and it should look like http://static.sewanee.edu/Physics/PHYSICS123/image99.gif
But I am a newbie, so I am not sure whether I am doing it right.
My code
import numpy as n
import scipy as s
import pylab as p
xa=0.252
xb=1.99
C=n.linspace(xa,0.001,xb)
iter=100
Y=n.zeros((len(X),iteracje))
i=1
Y0=1
for Y0 in iter:
    Y(i+1)=Y0^2-C
for Y0 in iter:
    Y(i+1)=Y0^2-C
p.plot(C,Y)
p.show()
My problem is that I don't know how to write these for loops properly.
Here is some modified code (partial explanation below)
import numpy as n
import scipy as s
import pylab as p
xa=0.252
xb=1.99
C=n.linspace(xa,xb,100)
print C
iter=1000
Y = n.ones(len(C))
for x in xrange(iter):
    Y = Y**2 - C  # get rid of early transients
for x in xrange(iter):
    Y = Y**2 - C
    p.plot(C, Y, '.', color='k', markersize=2)
p.show()
First, the linspace command had the wrong format. help(s.linspace) will give you insight into the syntax. The first two arguments are start and stop. The third is how many values. I then made Y a numpy array of the same length as C, but whose values were all 1. Your Y0 was simply the number 1, and it never changed. Then I did some iteration to get past the initial conditions. Then did more iteration plotting each value.
To really understand what I've done, you'll have to look at how numpy handles calculations with arrays.
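A tiny illustration of that point (made-up numbers, just to show that Y**2 - C acts elementwise on whole arrays at once):

import numpy as np

C = np.linspace(0.5, 1.5, 3)   # array([0.5, 1. , 1.5])
Y = np.ones(len(C))            # array([1., 1., 1.])
Y = Y**2 - C                   # elementwise: array([ 0.5,  0. , -0.5])
print(Y)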

Integrated for loop to calculate values over a grid/mesh

I am fairly new to python, and I am trying to plot a contour plot of water surface over a 2d mesh.
At the moment the code is running but I am not getting the right solution. I have checked the formula carefully and I am fairly confident that the issue is with my loops.
I want the code to run for each point on my mesh based on their x and y coordinates.
The mesh is 100 x 100 resulting in 10000 nodes. I have posted my code below, I believe the problem is with the integrated for loops. Any advice on what I might be able to try would be great.
Apologies for the length of code...
import numpy as np
import matplotlib.pyplot as plt
import math
import sys
from math import sqrt
import decimal
t=0
n=5
l=100000
d=100
g=9.81
nx, ny = (100,100)
x5 = np.linspace(-100000,100000,nx)
y5 = np.linspace(-100000,100000,ny)
xv,yv = np.meshgrid(x5,y5)
x = np.arange(-100000,100000,2000)
y = np.arange(-100000,100000,2000)
c=np.arange(len(x))
x2=np.arange(len(x))
y2=np.arange(len(x))
t59=np.arange (1,10001,1)
h=np.arange(len(t59))
om2=1.458*(10**-4.0)
phi=52
phirad=phi*(math.pi/180)
f=om2*math.sin(phirad)
A=(((d+n)**2.0)-(d**2.0))/(((d+n)**2.0)+(d**2.0))
w=(((8*g*d)/(l**2))+(f**2))**0.5
a=((1-(A**2.0))**0.5)/(1-(A*math.cos(w*t)))
b=(((1-(A**2.0))/(1-(A*math.cos(w*t)))**2.0)-1)
l2=l**2.0
for i in range(len(x)):
    for j in range(len(y)):
        h[i] = d*(a-1-((((x[i]**2.0)+(y[j]**2.0))/l2)*b))
h5=np.reshape(h,(100,100))
plt.figure(1)
plt.contourf(x5,y5,h5)
plt.colorbar()
plt.show()
OK, apologies, I didn't make myself very clear. I'm hoping to get a parabolic basin as output, with h values varying between roughly -10 and 10. Instead I am getting enormous values and completely the wrong shape. I thought the for loop needed to be more like:
for i in range(len(x)):
    for j in range(len(y)):
        h[i][j] = d*(a-1-((((x[i][j]**2.0)+(y[i][j]**2.0))/l2)*b))
Is that clearer? Let me know if not.
The first thing is that the complete loop is not necessary:
h = d * (a - 1 - (x[None,:]**2 + y[:,None]**2) / l2 * b)
Here the magic comes from the None in indexing: x[None, :] means "x as a row vector copied to as many rows as needed", and y[:, None] means "y as a column vector copied to as many columns as needed".
This might be easiest to understand with an example:
import numpy as np
x = np.arange(5)
y = np.arange(0,50,10)
print x, y, x[None,:] + y[:, None]
The one-liner above gives:
[0 1 2 3 4] [ 0 10 20 30 40] [[ 0  1  2  3  4]
 [10 11 12 13 14]
 [20 21 22 23 24]
 [30 31 32 33 34]
 [40 41 42 43 44]]
Some manual calculations show this should be rather ok.
d = 100
a = 1.05
b = 0.1025
For a corner point at (1e5, 1e5), we have 2e10 in the addition, so the values do not look badly off.
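Putting that broadcasting idea back into the original script gives something like the sketch below (keeping the poster's constants and t = 0; note that h is built directly as a 100 x 100 float array rather than the integer np.arange buffer of the original, which was another source of wrong values):

import numpy as np
import matplotlib.pyplot as plt

t, n, l, d, g = 0.0, 5.0, 100000.0, 100.0, 9.81
x5 = np.linspace(-100000, 100000, 100)
y5 = np.linspace(-100000, 100000, 100)

om2 = 1.458e-4
f = om2 * np.sin(np.radians(52))
A = ((d + n)**2 - d**2) / ((d + n)**2 + d**2)
w = ((8*g*d)/l**2 + f**2)**0.5
a = (1 - A**2)**0.5 / (1 - A*np.cos(w*t))
b = (1 - A**2) / (1 - A*np.cos(w*t))**2 - 1

# broadcasting: x5 as a row, y5 as a column -> a 100 x 100 surface
h = d * (a - 1 - ((x5[None, :]**2 + y5[:, None]**2) / l**2) * b)

plt.contourf(x5, y5, h)
plt.colorbar()
plt.show()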

Graphing n iterations of a function - Python

I'm studying dynamical systems, particularly the logistic family g(x) = cx(1-x), and I need to iterate this function an arbitrary amount of times to understand its behavior. I have no problem iterating the function given a specific point x_0, but again, I'd like to graph the entire function and its iterations, not just a single point. For plotting a single function, I have this code:
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
def logplot(c, n=10):
    dt = .001
    x = np.arange(0, 1.001, dt)
    y = c*x*(1-x)
    plt.plot(x, y)
    plt.axis([0, 1, 0, c*.25 + (1/10)*c*.25])
    plt.show()
I suppose I could tackle this by the lengthy/daunting method of explicitly creating a list of the range of each iteration using something like the following:
def log(c, x0):
    return c*x0*(1-x0)

def logiter(c, x0, n):
    i = 0
    y = []
    while i <= n:
        val = log(c, x0)
        y.append(val)
        x0 = val
        i += 1
    return y
But this seems really cumbersome and I was wondering if there were a better way. Thanks
Some different options
This is really a matter of style. Your solution works and is not very difficult to understand. If you want to go on on those lines, then I would just tweak it a bit:
def logiter(c, x0, n):
    y = []
    x = x0
    for i in range(n):
        x = c*x*(1-x)
        y.append(x)
    return np.array(y)
The changes:
for loop is easier to read than a while loop
x0 is not used in the iteration (this adds one more variable, but it is mathematically easier to understand; x0 is a constant)
the function is written out, as it is a very simple one-liner (if it weren't, its name should be changed to be something else than log, which is very easy to confuse with logarithm)
the result is converted into a numpy array. (Just what I usually do, if I need to plot something)
In my opinion the function is now legible enough.
You might also take an object-oriented approach and create a logistic function object:
class Logistics():
    def __init__(self, c, x0):
        self.x = x0
        self.c = c
    def next_iter(self):
        self.x = self.c * self.x * (1 - self.x)
        return self.x
Then you may use this:
def logiter(c, x0, n):
    l = Logistics(c, x0)
    return np.array([l.next_iter() for i in range(n)])
Or you may make it a generator:
def log_generator(c, x0):
    x = x0
    while True:
        x = c * x * (1-x)
        yield x

def logiter(c, x0, n):
    l = log_generator(c, x0)
    return np.array([l.next() for i in range(n)])
If you need performance and have large tables, then I suggest:
def logiter(c, x0, n):
    res = np.empty((n, len(x0)))
    res[0] = c * x0 * (1 - x0)
    for i in range(1, n):
        res[i] = c * res[i-1] * (1 - res[i-1])
    return res
This avoids the slowish conversion into np.array and some copying of stuff around. The memory is allocated only once, and the expensive conversion from a list into an array is avoided.
(BTW, if you returned an array with the initial x0 as the first row, the last version would look cleaner. Now the first one has to be calculated separately if copying the vector around is desired to be avoided.)
Which one is best? I do not know. IMO, all are readable and justified; it is a matter of style. However, I speak only very broken and poor Pythonic, so there may be good reasons why something else is still better or why something of the above is not good!
Performance
About performance: With my machine I tried the following:
logiter(3.2, linspace(0,1,1000), 10000)
For the first three approaches the time is essentially the same, approximately 1.5 s. For the last approach (preallocated array) the run time is 0.2 s. However, if the conversion from a list into an array is removed, the first one runs in 0.16 s, so the time is really spent in the conversion procedure.
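For reference, a minimal way to reproduce that kind of measurement (assuming one of the logiter variants above has been defined; the absolute numbers will of course differ from machine to machine):

import numpy as np
from timeit import default_timer as timer

x0 = np.linspace(0, 1, 1000)
start = timer()
logiter(3.2, x0, 10000)          # whichever variant is defined above
print(timer() - start, "seconds")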
Visualization
I can think of two useful but quite different ways to visualize the function. You mention that you will have, say, 100 or 1000 different x0's to start with. You do not mention how many iterations you want to have, but maybe we will start with just 100. So, let us create an array with 100 different x0's and 100 iterations at c = 3.6.
data = logiter(3.6, np.linspace(0,1,100), 100)
In a way a standard method to visualize the function is draw 100 lines, each of which represents one starting value. That is easy:
import matplotlib.pyplot as plt
plt.plot(data)
plt.show()
This gives:
Well, it seems that all values end up oscillating somewhere, but other than that we have only a mess of color. This approach may be more useful, if you use a narrower range of values for x0:
data = logiter(3.6, np.linspace(0.8,0.81,100), 100)
you may color-code the starting values by e.g.:
color1 = np.array([1,0,0])
color2 = np.array([0,0,1])
for i, k in enumerate(np.linspace(0, 1, data.shape[1])):
    plt.plot(data[:,i], '.', color=(1-k)*color1 + k*color2)
This plots the first columns (corresponding to x0 = 0.80) in red and the last columns in blue and uses a gradual color change in between. (Please note that the more blue a dot is, the later it is drawn, and thus blues overlap reds.)
However, it is possible to take a quite different approach.
data = logiter(3.6, np.linspace(0,1,1000), 50)
plt.imshow(data.T, cmap=plt.cm.bwr, interpolation='nearest', origin='lower',extent=[1,21,0,1], vmin=0, vmax=1)
plt.axis('tight')
plt.colorbar()
gives:
This is my personal favourite. I won't spoil anyone's joy by explaining it too much, but IMO this shows many peculiarities of the behaviour very easily.
Here's what I was aiming for; an indirect approach to understanding (by visualization) the behavior of initial conditions of the function g(c, x) = cx(1-x):
def jam(c, n):
    x = np.linspace(0, 1, 100)
    y = c*x*(1-x)
    for i in range(n):
        plt.plot(x, y)
        y = c*y*(1-y)
    plt.show()
