Result of coupled ODEs in Python code is different from Mathematica

As far as I can tell, my code is written correctly, but it is not giving me the correct solution (plot). When I solved the same system of ODEs in Mathematica I got the correct solution, and the two solutions are totally different. I am writing a research project, so I need working Python code. Could you please point out the mistake in my code?
[Plot: Python solution]
[Plot: Mathematica solution]
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as si

## Three system
def func(state, T):
    H = state[0]
    P = state[1]
    R = state[2]
    Hd = -(16./3.)*np.pi*P
    Pd = -4.*H*P
    Rd = H*R
    return Hd, Pd, Rd

T = np.linspace(0.1, 0.9, 50)
state0 = [1, 0.0001, 0.1]
s = si.odeint(func, state0, T)
h = np.transpose(s)
plt.plot(T, h[0])
plt.show()
Mathematica code
Clear[H,\[Rho],a]
Eq1=(H'[t] == -16 \[Pi] \[Rho][t]/3)
Eq2= (\[Rho]'[t] == -4 H[t] \[Rho][t])
Eq3 = (a'[t] == H[t] a[t])
sol=NDSolve[{Eq1,Eq2, Eq3,
H[0.1]==0.1, \[Rho][0.1]==0.1, a[0.1]==0.1},
{H[t],\[Rho][t],a[t]}, {t,0.1, 0.9}]
Plot[Evaluate[{H[t]}/.sol],{t,0.1,0.9}]

Both codes are correct. I just turned my laptop off and on again, and now the Python script gives me the correct result (the same as Mathematica).
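For anyone cross-checking the two programs, a minimal sketch (not part of the original post) is to feed odeint the same initial data and time window that the NDSolve call uses, i.e. H(0.1) = rho(0.1) = a(0.1) = 0.1, so the two H(t) curves can be compared point by point:

import numpy as np
import scipy.integrate as si
import matplotlib.pyplot as plt

def func(state, T):
    H, P, R = state
    return -(16./3.)*np.pi*P, -4.*H*P, H*R

# Same initial data as the NDSolve call: H(0.1) = rho(0.1) = a(0.1) = 0.1
T = np.linspace(0.1, 0.9, 50)
s = si.odeint(func, [0.1, 0.1, 0.1], T)
plt.plot(T, s[:, 0], label="H(t)")
plt.legend()
plt.show()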

Related

SymPy lambdify gives wrong result, while *.subs gives the accurate one

Sorry for bothering you with this. I have a serious issue and I am now on the clock to solve it, so here is my question.
When I lambdify a quantity, the result differs from the ".subs" result; sometimes it is way off, or it is a NaN where there is actually a real number (found by subs).
Here is a small MWE where you can see the issue. Thanks in advance for your time.
import sympy as sy
import numpy as np
##STACK
#some quantities needed before you see the problem
r = sy.Symbol('r', real=True)
th = sy.Symbol('th', real=True)
e_c = 1e51
lf0 = 100
A = 1.6726e-24
#here are some quantities I define to get to the problem
lfac = lf0+2
rd = 4*3.14/4/sy.pi/A/lfac**2
xi = r/rd #rescaled r
#now to the problem:
#QUANTITY
lfxi = xi**(-3)*(lfac+1)/2*(sy.sqrt( 1 + 4*lfac/(lfac+1)*xi**(3) + (2*xi**(3)/(lfac+1))**2) -1)
#RESULT WITH SUBS
print(lfxi.subs({th:1.00,r:1.00}).evalf())
#RESULT WITH LAMBDIFY
lfxi_l = sy.lambdify((r,th),lfxi)
lfxi_l(0.01,1.00)
##gives 0
The issue is that your mpmath precision needs to be set higher!
By default mpmath uses prec=53 and dps=15, but your expression requires much higher precision than this to evaluate correctly:
# print(lfxi)
3.0256512324559e+62*(sqrt(1.09235114769539e-125*pi**6*r**6 + 6.74235013645028e-61*pi**3*r**3 + 1) - 1)/(pi**3*r**3)
...
from mpmath import mp
lfxi_l = sy.lambdify((r,th),lfxi, modules=["mpmath"])
mp.dps = 125
print(lfxi_l(1.00,1.00))
# 101.999... result
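As a quick follow-up check (a sketch, not part of the original answer): the term under the square root differs from 1 by roughly 2e-59 * r**3 (see the printed expression above), far below double-precision resolution (about 2.2e-16 relative to 1), so the NumPy-backed lambdify rounds it away and returns 0. With the mpmath backend and mp.dps = 125 the same call at r = 0.01 returns a finite value close to lfac instead:

# Re-evaluate at the point that returned 0 with the default NumPy backend.
# With 125 decimal digits the ~1e-59 term under the square root survives.
print(lfxi_l(0.01, 1.00))   # finite value instead of 0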
Changing a couple of the constants to "modest" values:
In [89]: e_c=1; A=1
The different methods produce essentially the same thing:
In [91]: lfxi.subs({th:1.00,r:1.00}).evalf()
Out[91]: 1.00000000461176
In [92]: lfxi_l = sy.lambdify((r,th),lfxi)
In [93]: lfxi_l(1.0,1.00)
Out[93]: 1.000000004611762
In [94]: lfxi_m = sy.lambdify((r,th),lfxi, modules=["mpmath"])
In [95]: lfxi_m(1.0,1.00)
Out[95]: mpf('1.0000000046117619')

Solve coupled differential equation using the function scipy.integrate.RK45

x' = f(x,y,t)
y' = g(x,y,t)
Initial conditions have been given as x0 and y0 with t0. Find the solution graph in the range t0 to a.
I have tried doing this for non-coupled equations, but there seems to be a problem there as well. I have to solve this using exactly this function, so other functions are not an option.
from numpy import *
from matplotlib import pyplot as plt
import scipy
from scipy import integrate as inte

def f(t,x):
    return -x

solution = inte.RK45(f, 0, [1], 10, 1, 0.001, e**-6)
print (solution)
I expect the output to be an array of all the values.
But <scipy.integrate._ivp.rk.RK45 at 0x1988ba806d8> is what I get.
You need to collect the data by calling the step() function:
from math import e
from scipy import integrate as inte

def f(t,x):
    return -x

solution = inte.RK45(f, 0, [1], 10, 1, 0.001, e**-6)

# collect data
t_values = []
y_values = []
for i in range(100):
    # get solution step state
    solution.step()
    t_values.append(solution.t)
    y_values.append(solution.y[0])
    # break loop after modeling is finished
    if solution.status == 'finished':
        break

data = zip(t_values, y_values)
for point in data:
    print(point)
Output:
(0.12831714796342164, 0.879574381033538)
(1.1283171479634215, 0.3239765636806864)
(2.1283171479634215, 0.11933136762238628)
(3.1283171479634215, 0.043953720407578944)
(4.128317147963422, 0.01618962035012491)
(5.128317147963422, 0.005963176828962677)
(6.128317147963422, 0.002196436798667919)
(7.128317147963422, 0.0008090208875093502)
(8.128317147963422, 0.00029798936023261037)
(9.128317147963422, 0.0001097594143523445)
(10, 4.5927433621121034e-05)
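Since the question asks for a solution graph from t0 to a, a minimal plotting sketch (assuming the t_values and y_values lists collected in the loop above) could be:

import matplotlib.pyplot as plt

plt.plot(t_values, y_values, marker="o")
plt.xlabel("t")
plt.ylabel("y")
plt.show()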

Quadratic Programming in Python using Numpy?

I am in the process of translating some MATLAB code into Python. There is one line that is giving me a bit of trouble:
[q,f_dummy,exitflag, output] = quadprog(H,f,-A,zeros(p*N,1),E,qm,[],[],q0,options);
I looked up the documentation in MATLAB to find that the quadprog function is used for optimization (particularly minimization).
I attempted to find a similar function in Python (using numpy) and there does not seem to be any.
Is there a better way to translate this line of code into Python? Or are there other packages that can be used? Do I need to make a new function that accomplishes the same task?
Thanks for your time and help!
There is a library called CVXOPT that has quadratic programming in it.
import numpy
import quadprog

def quadprog_solve_qp(P, q, G=None, h=None, A=None, b=None):
    qp_G = .5 * (P + P.T)   # make sure P is symmetric
    qp_a = -q
    if A is not None:
        qp_C = -numpy.vstack([A, G]).T
        qp_b = -numpy.hstack([b, h])
        meq = A.shape[0]
    else:  # no equality constraint
        qp_C = -G.T
        qp_b = -h
        meq = 0
    return quadprog.solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]
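A quick usage sketch for this wrapper, with small hypothetical test data in the same shapes as the MATLAB quadprog call (it assumes the quadprog package is installed via pip install quadprog):

P = numpy.array([[4.0, 1.0], [1.0, 2.0]])   # must be positive definite
q = numpy.array([1.0, 1.0])
G = numpy.array([[1.0, 0.0], [0.0, 1.0]])
h = numpy.array([0.7, 0.7])
A = numpy.array([[1.0, 1.0]])
b = numpy.array([1.0])

x = quadprog_solve_qp(P, q, G, h, A, b)
print(x)   # minimizer of 0.5 x^T P x + q^T x  s.t.  G x <= h,  A x == b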
I will start by mentioning that quadratic programming problems are a subset of convex optimization problems which are a subset of optimization problems.
There are multiple python packages which solve quadratic programming problems, notably
cvxopt -- which solves all kinds of convex optimization problems (including quadratic programming problems). This is a python version of the previous cvx MATLAB package.
quadprog -- this is exclusively for quadratic programming problems but doesn't seem to have much documentation.
scipy.optimize.minimize -- this is a very general minimizer which can solve quadratic programming problems, as well as other optimization problems (convex and non-convex).
You might also benefit from looking at the answers to this stackoverflow post which has more details and references.
Note: The code snippet in user1911226's answer appears to come from this blog post:
https://scaron.info/blog/quadratic-programming-in-python.html
which compares some of these quadratic programming packages. I can't comment on their answer, but they claim to be mentioning the cvxopt solution, but the code is actually for the quadprog solution.
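For completeness, here is a minimal sketch of the scipy.optimize.minimize route mentioned above, applied to the small test problem that appears later in this thread (the choice of SLSQP is an assumption; any constrained method would do):

import numpy as np
from scipy.optimize import minimize

H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1.0])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([0.7, 0.7])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def objective(x):
    # 0.5 * x^T H x + f^T x
    return 0.5 * x @ H @ x + f @ x

constraints = [
    {"type": "ineq", "fun": lambda x: h - G @ x},  # G x <= h
    {"type": "eq",   "fun": lambda x: A @ x - b},  # A x == b
]
res = minimize(objective, x0=np.ones(2), method="SLSQP", constraints=constraints)
print(res.x)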
OSQP is a specialized free QP solver based on ADMM. I have adapted the OSQP documentation demo and the OSQP call in the qpsolvers repository for your problem.
Note that the matrices H and G are supposed to be sparse, in CSC format. Here is the script:
import numpy as np
import scipy.sparse as spa
import osqp

def quadprog(P, q, G=None, h=None, A=None, b=None,
             initvals=None, verbose=True):
    l = -np.inf * np.ones(len(h))
    if A is not None:
        qp_A = spa.vstack([G, A]).tocsc()
        qp_l = np.hstack([l, b])
        qp_u = np.hstack([h, b])
    else:  # no equality constraint
        qp_A = G
        qp_l = l
        qp_u = h
    model = osqp.OSQP()
    model.setup(P=P, q=q,
                A=qp_A, l=qp_l, u=qp_u, verbose=verbose)
    if initvals is not None:
        model.warm_start(x=initvals)
    results = model.solve()
    return results.x, results.info.status

# Generate problem data
n = 2  # Variables
H = spa.csc_matrix([[4, 1], [1, 2]])
f = np.array([1, 1])
G = spa.csc_matrix([[1, 0], [0, 1]])
h = np.array([0.7, 0.7])
A = spa.csc_matrix([[1, 1]])
b = np.array([1.])

# Initial point
q0 = np.ones(n)

x, status = quadprog(H, f, G, h, A, b, initvals=q0, verbose=True)
You could use the solve_qp function from qpsolvers. It can be installed, along with a starter kit of open source solvers, by pip install qpsolvers[open_source_solvers]. Then you can replace your line with:
from qpsolvers import solve_qp
solver = "proxqp" # or "osqp", "quadprog", "cvxopt", ...
x = solve_qp(H, f, G, h, A, b, initvals=q_0, solver=solver, **options)
There are many solvers available in Python, each coming with their pros and cons. Make sure you try different values for the solver keyword argument to find the one that fits your problem best.
Here is a standalone example based on your question and the other comments:
import numpy as np
from qpsolvers import solve_qp
H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([0.7, 0.7])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
q_0 = np.array([1.0, 1.0])
solver = "cvxopt" # or "osqp", "proxqp", "quadprog", ...
options = {"verbose": True}
x = solve_qp(H, f, G, h, A, b, initvals=q_0, solver=solver, **options)

Multi-objective optimisation using PyGMO

I am using the PyGMO package for Python, for multi-objective optimisation. I am unable to fix the dimension of the fitness function in the constructor, and the documentation is not very descriptive either. I am wondering if anyone here has had experience with PyGMO in the past: this could be fairly simple.
I try to construct a minimum example below:
from PyGMO.problem import base
from PyGMO import algorithm, population
import numpy as np
import matplotlib.pyplot as plt

class my_problem(base):
    def __init__(self, fdim=2):
        NUM_PARAMS = 4
        super(my_problem, self).__init__(NUM_PARAMS)
        self.set_bounds(0.01, 100)

    def _objfun_impl(self, K):
        E1 = K[0] + K[2]
        E2 = K[1] + K[3]
        return (E1, E2, )

if __name__ == '__main__':
    prob = my_problem()  # Create the problem
    print (prob)
    algo = algorithm.sms_emoa(gen=100)
    pop = population(prob, 50)
    pop = algo.evolve(pop)
    F = np.array([ind.cur_f for ind in pop]).T
    plt.scatter(F[0], F[1])
    plt.xlabel("$E_1$")
    plt.ylabel("$E_2$")
    plt.show()
fdim=2 above is a failed attempt to set the fitness dimension. The code fails with the following error:
ValueError: ..\..\src\problem\base.cpp,584: fitness dimension was changed inside objfun_impl().
I'd be grateful if someone can help figure this out. Thanks!
Are you looking at the correct documentation?
There is no fdim argument (and in your example it does nothing anyway, since it is never passed on to the base constructor). But there is n_obj:
n_obj: number of objectives. Defaults to 1
So, I think you want something like (corrected thanks to @Distopia):
#(...)
def __init__(self, fdim=2):
    NUM_PARAMS = 4
    super(my_problem, self).__init__(NUM_PARAMS, 0, fdim)
    self.set_bounds(0.01, 100)
#(...)
I modified their example and this seemed to work for me.
#(...)
def __init__(self, fdim=2):
    NUM_PARAMS = 4
    # We call the base constructor as a 'dim'-dimensional problem, with 0 integer parts and 2 objectives.
    super(my_problem, self).__init__(NUM_PARAMS, 0, fdim)
    self.set_bounds(0.01, 100)
#(...)

Estimate formants using LPC in Python

I'm new to signal processing (and numpy, scipy, and matlab for that matter). I'm trying to estimate vowel formants with LPC in Python by adapting this matlab code:
http://www.mathworks.com/help/signal/ug/formant-estimation-with-lpc-coefficients.html
Here is my code so far:
#!/usr/bin/env python
import sys
import numpy
import wave
import math
from scipy.signal import lfilter, hamming
from scikits.talkbox import lpc

"""
Estimate formants using LPC.
"""

def get_formants(file_path):
    # Read from file.
    spf = wave.open(file_path, 'r')  # http://www.linguistics.ucla.edu/people/hayes/103/Charts/VChart/ae.wav

    # Get file as numpy array.
    x = spf.readframes(-1)
    x = numpy.fromstring(x, 'Int16')

    # Get Hamming window.
    N = len(x)
    w = numpy.hamming(N)

    # Apply window and high pass filter.
    x1 = x * w
    x1 = lfilter([1., -0.63], 1, x1)

    # Get LPC.
    A, e, k = lpc(x1, 8)

    # Get roots.
    rts = numpy.roots(A)
    rts = [r for r in rts if numpy.imag(r) >= 0]

    # Get angles.
    angz = numpy.arctan2(numpy.imag(rts), numpy.real(rts))

    # Get frequencies.
    Fs = spf.getframerate()
    frqs = sorted(angz * (Fs / (2 * math.pi)))

    return frqs

print get_formants(sys.argv[1])
Using this file as input, my script returns this list:
[682.18960189917243, 1886.3054773107765, 3518.8326108511073, 6524.8112723782951]
I didn't even get to the last steps where they filter the frequencies by bandwidth because the frequencies in the list aren't right. According to Praat, I should get something like this (this is the formant listing for the middle of the vowel):
Time_s F1_Hz F2_Hz F3_Hz F4_Hz
0.164969 731.914588 1737.980346 2115.510104 3191.775838
What am I doing wrong?
Thanks very much
UPDATE:
I changed this
x1 = lfilter([1., -0.63], 1, x1)
to
x1 = lfilter([1], [1., 0.63], x1)
as per Warren Weckesser's suggestion and am now getting
[631.44354635609318, 1815.8629524985781, 3421.8288991389031, 6667.5030877036006]
I feel like I'm missing something since F3 is very off.
UPDATE 2:
I realized that the order being passed to scikits.talkbox.lpc was off due to a difference in sampling frequency. Changed it to:
Fs = spf.getframerate()
ncoeff = 2 + Fs / 1000
A, e, k = lpc(x1, ncoeff)
Now I'm getting:
[257.86573127888488, 774.59006835496086, 1769.4624576002402, 2386.7093679399809, 3282.387975973973, 4413.0428174593926, 6060.8150432549655, 6503.3090645887842, 7266.5069407315023]
Much closer to Praat's estimation!
The problem had to do with the order being passed to the lpc function. 2 + fs / 1000 where fs is the sampling frequency is the rule of thumb according to:
http://www.phon.ucl.ac.uk/courses/spsci/matlab/lect10.html
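For reference, the bandwidth-filtering step from the linked MATLAB example (the part of the question left unfinished) can be sketched as follows inside get_formants, reusing rts, angz, and Fs from the script above; the 90 Hz / 400 Hz thresholds are the ones used on the MathWorks page:

# Bandwidths from the magnitudes of the LPC roots.
bw = -1.0 / 2 * (Fs / (2 * math.pi)) * numpy.log(numpy.abs(rts))

# Keep only candidates above 90 Hz with bandwidth below 400 Hz.
cand = angz * (Fs / (2 * math.pi))
formants = [f for f, b in sorted(zip(cand, bw)) if f > 90 and b < 400]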
I have not been able to get the results you expect, but I do notice two things which might cause some differences:
Your code uses [1, -0.63] where the MATLAB code from the link you provided has [1 0.63].
Your processing is being applied to the entire x vector at once instead of smaller segments of it (see where the MATLAB code does this: x = mtlb(I0:Iend); ).
Hope that helps.
There are at least two problems:
According to the link, the "pre-emphasis filter is a highpass all-pole (AR(1)) filter". The signs of the coefficients given there are correct: [1, 0.63]. If you use [1, -0.63], you get a lowpass filter.
You have the first two arguments to scipy.signal.lfilter reversed.
So, try changing this:
x1 = lfilter([1., -0.63], 1, x1)
to this:
x1 = lfilter([1.], [1., 0.63], x1)
I haven't tried running your code yet, so I don't know if those are the only problems.
