Boundary value problem for natural convection by shooting method - python

I am trying to solve a special case of a boundary value problem: the boundary layer equations for natural convection.
Thanks to a contributor of this forum, @LutzLehmann, this set of equations can be solved using the function solve_bvp from the SciPy library.
Unfortunately, I don't obtain the right result with these particular boundary conditions.
It seems that the initial guess doesn't suit the solver in this case, and I wonder what would work better.
Here is the code:
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

Pr = 5

def odesys(t, u):
    # u = [F, F', F'', theta, theta']
    F, dF, ddF, θ, dθ = u
    return [dF, ddF, θ - 0.25/Pr*(2*dF*dF - 3*F*ddF), dθ, 0.75*F*dθ]

def bcs(u0, u1):
    # F(0) = 0, F'(0) = 0, F'(inf) = 0, theta(0) = 1, theta(inf) = 0
    return [u0[0], u0[1], u1[1], u0[3]-1, u1[3]]

x = np.linspace(0, 8, 25)
u = [x*x, np.exp(-x), 0*x+1, 1-x, 0*x-1]   # initial guess on the mesh

res = solve_bvp(odesys, bcs, x, u, tol=1e-5)
print(res.message)

plt.subplot(2,1,1)
plt.plot(res.x, res.y[3], color='#801010', label=r'$\Delta T$')
plt.legend()
plt.grid()
plt.subplot(2,1,2)
plt.plot(res.x, res.y[1], '-', color='C0', label="$F'$")
plt.legend()
plt.grid()
plt.show()
And here are the wrong plots obtained:
Could someone help me by suggesting a better initial guess for this problem?
Thank you for your help.
PS: For the proposed guess
u = [0.5*x*x*np.exp(-x), x*np.exp(-x), np.exp(-x), np.exp(-x), -np.exp(-x)]
I got this:
Compared to the literature, I expect this result (where g corresponds to θ in the set of equations):
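One idea that might provide a better starting point is parameter continuation in Pr: solve the problem for an easier Prandtl number first, then reuse each converged solution as the initial guess for the next step. A rough, untested sketch (the Pr schedule, the finer mesh, and max_nodes are all guesses, not a verified fix):

import numpy as np
from scipy.integrate import solve_bvp

def make_odesys(Pr):
    # same right-hand side as odesys above, with Pr as a parameter
    def odesys(t, u):
        F, dF, ddF, th, dth = u
        return [dF, ddF, th - 0.25/Pr*(2*dF*dF - 3*F*ddF), dth, 0.75*F*dth]
    return odesys

def bcs(u0, u1):
    return [u0[0], u0[1], u1[1], u0[3]-1, u1[3]]

x = np.linspace(0, 8, 101)
u = np.array([0.5*x*x*np.exp(-x), x*np.exp(-x),
              np.exp(-x), np.exp(-x), -np.exp(-x)])

for Pr in [1.0, 2.0, 3.5, 5.0]:        # step Pr up towards the target
    res = solve_bvp(make_odesys(Pr), bcs, x, u, tol=1e-5, max_nodes=10000)
    x, u = res.x, res.y                # warm start for the next Pr
    print(Pr, res.status, res.message)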

Related

Scipy solve_ivp not giving expected solutions

I need to solve a 2nd-order ODE, which I have rewritten as two coupled first-order ODEs. I've tried solving it using solve_ivp, but it doesn't give the solution I expect. The code is below.
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import derivative   # note: removed in recent SciPy versions
from scipy.integrate import solve_ivp
from matplotlib import rc

rc('text', usetex=True)

V0 = 2*10**-10
A = 0.130383
f = 0.129576
phi_i_USR2 = 6.1
phi_Ni_USR2 = 1.2

def V_USR2(phi):
    return V0*(np.tanh(phi/np.sqrt(6)) + A*np.sin(1/f*np.tanh(phi/np.sqrt(6))))**2

def V_phi_USR2(phi):
    return derivative(V_USR2, phi)

N = np.linspace(0, 66, 1000)

def USR2(N, X):
    phi, g = X
    return [g, (g**2/2 - 3)*(V_phi_USR2(phi)/V_USR2(phi) + g)]

X0 = [phi_i_USR2, phi_Ni_USR2]
sol = solve_ivp(USR2, (0, 66), X0, method='LSODA', t_eval=N)
phi_USR2 = sol.y[0]
phi_N_USR2 = sol.y[1]
N_USR2 = sol.t

plt.plot(phi_USR2, phi_N_USR2)
plt.xlabel(r"$\phi$")
plt.ylabel(r"$\phi'$")
plt.title("Phase plot for USR2")
plt.show()
solve_ivp gives me the following plot:
The problem is that there is supposed to be an oscillation near the origin, which is not well captured by solve_ivp. However, for the same equation and initial conditions, Mathematica gives me exactly what I want:
I want the same plot in Python as well. I tried various methods in solve_ivp, such as RK45, LSODA, Radau and BDF, but all of them show the same problem (LSODA tries to depict an oscillation but fails; the other methods don't even move past the point where the oscillation starts). It would be great if someone could shed light on the problem. Thanks in advance.
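Two things that may be worth checking (assumptions on my part, not verified fixes): solve_ivp's default tolerances (rtol=1e-3, atol=1e-6) are often too loose to resolve fast oscillations, and scipy.misc.derivative defaults to dx=1.0, which is a very coarse finite-difference step for V'(phi). A sketch reusing V_USR2, USR2, X0, and N from the code above:

from scipy.misc import derivative
from scipy.integrate import solve_ivp

def V_phi_USR2(phi):
    # use a much smaller finite-difference step than the dx=1.0 default
    return derivative(V_USR2, phi, dx=1e-6)

# tighter tolerances so the integrator can follow the oscillation
sol = solve_ivp(USR2, (0, 66), X0, method='LSODA', t_eval=N,
                rtol=1e-10, atol=1e-12)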

How to apply Particle Swarm Optimization using SKO.pso python

I find a package on the Github, it contains various algorithms about evolution. The link is below:
https://github.com/guofei9987/scikit-opt
However, I got confused when using the package. Following the example, I set the dimension to one, the lower bound to zero and the upper bound to ten (to keep x non-negative), but I got the wrong answer. Here is my code:
def demo_func(x):
    x1 = x
    return ((10 - 4*x1)/(4*x1 + 3))*x1

from sko.PSO import PSO
pso = PSO(func=demo_func, n_dim=1, pop=40, max_iter=150, lb=[0], ub=[10], w=0.8, c1=0.5, c2=0.5)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)

import matplotlib.pyplot as plt
plt.plot(pso.gbest_y_hist)
plt.show()
The result is below:
best_x is [10.] best_y is [-6.97674419]
However, this is the wrong answer; the right x lies between 0.5 and 1. Does anybody have suggestions?
PSO minimizes the objective function; add a negative sign to the function to find its maximum.
def demo_func(x):
    x1, = x
    return -((10 - 4*x1)/(4*x1 + 3))*x1

from sko.PSO import PSO
pso = PSO(func=demo_func, n_dim=1, pop=40, max_iter=150, lb=[0], ub=[10], w=0.8, c1=0.5, c2=0.5)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)

import matplotlib.pyplot as plt
plt.plot(pso.gbest_y_hist)
plt.show()
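Note that with the negated objective, pso.gbest_y stores the minimum of -f(x), so flip the sign when reporting the maximum:

# gbest_y holds the minimized value of -f(x); negate it to read off max f
print('maximum is', -pso.gbest_y, 'at x =', pso.gbest_x)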

Fitting sin curve using python

I have two lists:
# on x-axis:
# list1:
[70.434654, 37.147266, 8.5787086, 161.40877, -27.31284, 80.429482, -81.918106, 52.320129, 64.064552, -156.40771, 12.37026, 15.599689, 166.40984, 134.93636, 142.55002, -38.073524, -38.073524, 123.88509, -82.447571, 97.934402, 106.28793]
# on y-axis:
# list2:
[86683.961, -40564.863, 50274.41, 80570.828, 63628.465, -87284.016, 30571.402, -79985.648, -69387.891, 175398.62, -132196.5, -64803.133, -269664.06, 36493.316, 22769.121, 25648.252, 25648.252, 53444.855, 684814.69, 82679.977, 103244.58]
I need to fit a sine curve, a + b*sin(2*3.14*x + c), to the data points obtained by plotting list1 (on the x-axis) against list2 (on the y-axis) using Python.
I am not able to get any good result. Can anyone help me with suitable code and an explanation?
Thanks!
This is my graph after plotting list1 (x-axis) against list2 (y-axis):
Well, if you used lmfit, setting up and running your fit would look like this:
xdeg = [70.434654, 37.147266, 8.5787086, 161.40877, -27.31284, 80.429482, -81.918106, 52.320129, 64.064552, -156.40771, 12.37026, 15.599689, 166.40984, 134.93636, 142.55002, -38.073524, -38.073524, 123.88509, -82.447571, 97.934402, 106.28793]
y = [86683.961, -40564.863, 50274.41, 80570.828, 63628.465, -87284.016, 30571.402, -79985.648, -69387.891, 175398.62, -132196.5, -64803.133, -269664.06, 36493.316, 22769.121, 25648.252, 25648.252, 53444.855, 684814.69, 82679.977, 103244.58]
import numpy as np
from lmfit import Model
import matplotlib.pyplot as plt
def sinefunction(x, a, b, c):
    x = np.asarray(x)   # accept plain lists as well as arrays
    return a + b * np.sin(x*np.pi/180.0 + c)
smodel = Model(sinefunction)
result = smodel.fit(y, x=xdeg, a=0, b=30000, c=0)
print(result.fit_report())
plt.plot(xdeg, y, 'o', label='data')
plt.plot(xdeg, result.best_fit, '*', label='fit')
plt.legend()
plt.show()
That is assuming your X data is in degrees, and that you really intended to convert that to radians (as numpy's sin() function requires).
But that just addresses the mechanics of how to do the fit (and I'll leave the display of results up to you - it seems like you may need the practice).
The fit result is terrible, because these data are not sinusoidal. They are also not well ordered, which isn't a problem for doing the fit, but does make it harder to see what is going on.
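For completeness (a sketch, not part of the original answer): the same model can be fitted with plain SciPy's curve_fit, reusing xdeg, y, and sinefunction from above, with starting values mirroring a=0, b=30000, c=0:

from scipy.optimize import curve_fit

# least-squares fit of the same sine model with the same starting guesses
popt, pcov = curve_fit(sinefunction, xdeg, y, p0=[0, 30000, 0])
print('a, b, c =', popt)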

Gradient of Sin(x)^4 is not same when calculated from its derivative

When I calculate and plot the derivative of sin(x)^4 by directly coding its analytic derivative, 4 sin(x)^3 cos(x), for x in [0, 180] degrees using the following code, I get the right result.
import numpy as np
import matplotlib.pyplot as plt
a = np.power( np.sin( np.deg2rad(range(0,180)) ),4 )
c = 4 * np.sin( np.deg2rad(range(0,180) ))**3 * np.cos(np.deg2rad(range(0,180)))
plt.plot(a)
plt.plot(c)
plt.show()
But when I try to do the same thing with NumPy's gradient function, it gives me a different result, i.e. the gradient looks almost like a straight line. For example, using the following code:
import numpy as np
import matplotlib.pyplot as plt

a = np.power(np.sin(np.deg2rad(range(0,180))), 4)
plt.plot(a)
plt.plot(np.gradient(a))
plt.show()
I am still unable to understand the reason for the difference. Could anyone please give me a clue as to why they are different?
Actually, in my simulation work I have a set of values in an array, and I need to calculate their derivative over phi = range(0, 180).
You are not correctly specifying the spacing between samples (which defaults to 1), so the answer you have is incorrectly scaled.
Try:
import numpy as np
import matplotlib.pyplot as plt

a = np.power(np.sin(np.deg2rad(range(0,180))), 4)
plt.plot(a)
plt.plot(np.gradient(a, np.deg2rad(1)))   # spacing between samples is 1 degree
plt.show()
Now c and np.gradient(a, np.deg2rad(1)) should be almost identical.
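Equivalently, since NumPy 1.13 np.gradient also accepts the sample coordinates themselves in place of a scalar spacing, which avoids hard-coding the step:

import numpy as np

phi = np.deg2rad(np.arange(180))   # actual sample positions in radians
a = np.sin(phi)**4
da = np.gradient(a, phi)           # same result as spacing np.deg2rad(1)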

Gradient in noisy data, python

I have an energy spectrum from a cosmic ray detector. The spectrum follows an exponential curve but it will have broad (and maybe very slight) lumps in it. The data, obviously, contains an element of noise.
I'm trying to smooth out the data and then plot its gradient.
So far I've been using the SciPy spline function to smooth it and then np.gradient().
As you can see from the picture, the gradient function's method is to find the differences between each point, and it doesn't show the lumps very clearly.
I basically need a smooth gradient graph. Any help would be amazing!
I've tried 2 spline methods:
import numpy as np
from scipy import interpolate
from scipy.interpolate import spline   # note: spline was removed in recent SciPy releases

def smooth_data(y, x, factor):
    print("smoothing data by interpolation...")
    xnew = np.linspace(min(x), max(x), factor*len(x))
    smoothy = spline(x, y, xnew)
    return smoothy, xnew

def smooth2_data(y, x, factor):
    xnew = np.linspace(min(x), max(x), factor*len(x))
    f = interpolate.UnivariateSpline(x, y)   # note: f is unused below
    g = interpolate.interp1d(x, y)           # plain interpolation, no smoothing
    return g(xnew), xnew
Edit: I tried numerical differentiation:
from scipy.integrate import simps   # renamed simpson in recent SciPy
from scipy.optimize import fmin

def smooth_data(y, x, factor):
    print("smoothing data by interpolation...")
    xnew = np.linspace(min(x), max(x), factor*len(x))
    smoothy = spline(x, y, xnew)
    return smoothy, xnew

def minim(u, f, k):
    """Functional to be minimised to find the optimum u; f is the original data, u the approximation."""
    integral1 = abs(np.gradient(u))
    part1 = simps(integral1)
    part2 = simps(u)
    integral2 = abs(part2 - f)**2.
    part3 = simps(integral2)
    F = k*part1 + part3
    return F

def fit(data_x, data_y, denoising, smooth_fac):
    smy, xnew = smooth_data(data_y, data_x, smooth_fac)
    y0, xnnew = smooth_data(smy, xnew, 1./smooth_fac)
    y0 = list(y0)
    data_y = list(data_y)
    data_fit = fmin(minim, y0, args=(data_y, denoising), maxiter=1000, maxfun=1000)
    return data_fit
However, it just returns the same graph again!
There is an interesting method published on this: Numerical Differentiation of Noisy Data. It should give you a nice solution to your problem. More details are given in another, accompanying paper. The author also gives Matlab code that implements it; an alternative implementation in Python is also available.
If you want to pursue the interpolation with splines method, I would suggest to adjust the smoothing factor s of scipy.interpolate.UnivariateSpline().
Another solution would be to smooth your function through convolution (say with a Gaussian).
The paper I linked to claims to prevent some of the artifacts that come up with the convolution approach (the spline approach might suffer from similar difficulties).
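To make the Gaussian-convolution idea concrete, here is a minimal sketch (the synthetic data and sigma are placeholders, not from the question): gaussian_filter1d with order=1 convolves the signal with the first derivative of a Gaussian, giving a smoothed per-sample derivative, which dividing by the spacing converts into dy/dx.

import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 10, 500)
y = np.exp(-x) + 0.01*np.random.randn(x.size)        # noisy stand-in spectrum

dx = x[1] - x[0]
dydx = gaussian_filter1d(y, sigma=10, order=1) / dx  # smoothed derivative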
I won't vouch for the mathematical validity of this; it looks like the paper from LANL that EOL cited would be worth looking into. Anyway, I’ve gotten decent results using SciPy’s splines’ built-in differentiation when using splev.
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from scipy.interpolate import splrep, splev
x = np.arange(0,2,0.008)
data = np.polynomial.polynomial.polyval(x,[0,2,1,-2,-3,2.6,-0.4])
noise = np.random.normal(0,0.1,250)
noisy_data = data + noise
f = splrep(x,noisy_data,k=5,s=3)
#plt.plot(x, data, label="raw data")
#plt.plot(x, noise, label="noise")
plt.plot(x, noisy_data, label="noisy data")
plt.plot(x, splev(x,f), label="fitted")
plt.plot(x, splev(x,f,der=1)/10, label="1st derivative")
#plt.plot(x, splev(x,f,der=2)/100, label="2nd derivative")
plt.hlines(0,0,2)
plt.legend(loc=0)
plt.show()
You can also use scipy.signal.savgol_filter.
Example
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal
from random import random

# generate noisy test data
x = np.array(range(100))/10
y = np.sin(x) + np.array([random()*0.25 for _ in x])
dydx = scipy.signal.savgol_filter(y, window_length=11, polyorder=2, deriv=1)

# plot result; dydx is per sample, so multiply by 1/dx = 10 to get dy/dx
plt.plot(x, y, label='Original signal')
plt.plot(x, dydx*10, label='1st Derivative')
plt.plot(x, np.cos(x), label='Expected 1st Derivative')
plt.legend()
plt.show()
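Alternatively (an equivalent variant, not in the original answer), savgol_filter takes the sample spacing through its delta argument, which makes the manual *10 rescaling unnecessary:

# delta is the x-spacing, so the result is dy/dx directly
dydx = scipy.signal.savgol_filter(y, window_length=11, polyorder=2,
                                  deriv=1, delta=0.1)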
