Solving a nonlinear overdetermined system using Python

I'm trying to find a good way to solve a nonlinear overdetermined system with Python. I looked into the optimization tools at http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html but I can't figure out how to use them. What I have so far is:
#overdetermined nonlinear system that I'll be using
'''
a = cos(x)*cos(y)
b = cos(x)*sin(y)
c = -sin(y)
d = sin(z)*sin(y)*sin(x) + cos(z)*cos(y)
e = cos(x)*sin(z)
f = cos(z)*sin(x)*cos(z) + sin(z)*sin(x)
g = cos(z)*sin(x)*sin(y) - sin(z)*cos(y)
h = cos(x)*cos(z)
a-h will be random int values in the range 0-10 inclusive
'''
import math
from random import randint
import scipy.optimize
def system(p):
    x, y, z = p
    return (math.cos(x)*math.cos(y) - randint(0, 10),
            math.cos(x)*math.sin(y) - randint(0, 10),
            -math.sin(y) - randint(0, 10),
            math.sin(z)*math.sin(y)*math.sin(x) + math.cos(z)*math.cos(y) - randint(0, 10),
            math.cos(x)*math.sin(z) - randint(0, 10),
            math.cos(z)*math.sin(x)*math.cos(z) + math.sin(z)*math.sin(x) - randint(0, 10),
            math.cos(z)*math.sin(x)*math.sin(y) - math.sin(z)*math.cos(y) - randint(0, 10),
            math.cos(x)*math.cos(z) - randint(0, 10))

x = scipy.optimize.broyden1(system, [1, 1, 1], f_tol=1e-14)
Could you help me out a bit here?

If I understand you right, you want to find an approximate solution to the nonlinear system of equations f(x) = b, where b is the vector containing the random values b = [a, ..., h].
In order to do this you will first need to remove the random values from the system function, because otherwise the solver tries to solve a different equation system in each iteration. Moreover, I think the basic Broyden method only works for a system with as many unknowns as equations. Alternatively, for the overdetermined case you can use scipy.optimize.leastsq. A possible solution looks like this:
# I am using numpy because it's more convenient for the generation of
# random numbers.
import math
import numpy as np
from numpy.random import randint
import scipy.optimize

# Removed random right-hand side values and changed nomenclature a bit.
def f(x):
    x1, x2, x3 = x
    return np.asarray((math.cos(x1)*math.cos(x2),
                       math.cos(x1)*math.sin(x2),
                       -math.sin(x2),
                       math.sin(x3)*math.sin(x2)*math.sin(x1) + math.cos(x3)*math.cos(x2),
                       math.cos(x1)*math.sin(x3),
                       math.cos(x3)*math.sin(x1)*math.cos(x3) + math.sin(x3)*math.sin(x1),
                       math.cos(x3)*math.sin(x1)*math.sin(x2) - math.sin(x3)*math.cos(x2),
                       math.cos(x1)*math.cos(x3)))

# The second parameter is the right-hand-side vector; it is passed to
# leastsq via the args argument (note the one-element tuple).
def system(x, b):
    return f(x) - b

b = randint(0, 10, size=8)
x = scipy.optimize.leastsq(system, np.asarray((1, 1, 1)), args=(b,))[0]
I hope this is of help to you. Note, however, that it is extremely unlikely that you will find an exact solution: you generate random integers in the interval [0, 10], while the range of f is limited to [-2, 2].
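As a quick sanity check, one can evaluate the residual at the returned point to see how far the least-squares fit is from an exact solution (a small sketch reusing system, b, and x from the snippet above):
# Sketch: quantify the misfit of the least-squares solution.
residual = system(x, b)
print("residual vector:", residual)
print("residual norm:", np.linalg.norm(residual))
# With integer targets in [0, 10] and f mapping into [-2, 2],
# the norm will almost always remain far from zero.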

Related

Solving an ODE with scipy.integrate.solve_bvp, the return of my function is not able to be broadcast into the appropriate array shape for solve_bvp

I am trying to solve a second-order ODE with solve_bvp. I have split the second-order ODE into a system of two first-order ODEs. I have a changing set of constants that depends on the x (mesh) value, so I am passing these as an array of shape (N,) into my function numdens. While trying to run solve_bvp I get the error that the returns have different shapes, namely (N,) and (N-1,), and thus cannot be broadcast into one array. But when I check each return manually outside of the function it has the shape (N,).
If I run the solver without my changing constants I get a solution akin to the right one.
import numpy as np
from scipy.integrate import solve_bvp,odeint
import matplotlib.pyplot as plt
E_0 = 1 * 0.0000016021773 #erg: gcm^2/s^2
m_H = 1.6*10**(-24) #g
c = 3e11 #cm
sigma_c = 2*10**(-23)
n_0 = 1*10**(20) #1/cm^3
v_0 = (2*E_0/m_H)**(0.5) #cm/s
T = 10**7
b = 20.3
n_eq = b*T**3
n_s = 2.03*10**(19)
Q = 1
def velocity(v, x):
    dvdx = -sigma_c*n_0*v_0*((8*v_0*v - 7*v**2 - v_0**2)/(2*v*c))
    return dvdx
n_num = 100
x_num = np.linspace(-1*10**(6),3*10**(6), n_num)
sol_velo = odeint(velocity,0.999999999999*v_0,x_num)
sol_new = np.reshape(sol_velo,n_num)
def constants(v):
    D1 = (c*v/(3*n_0*v_0*sigma_c))
    D2 = ((v**2 - 8*v_0*v + v_0**2)/(6*v))
    D3 = sigma_c*n_0*v_0*((8*v_0*v - 7*v**2 - v_0**2)/(2*v*c))
    return D1, D2, D3
def numdens(x, y):
    v = sol_new
    D1, D2, D3 = constants(v)
    return np.vstack((y[1], (-D2*y[1] - D3*y[0] + Q*((1 - y[0])/n_eq))/D1))
def bc_num(ya, yb):
    return np.array([ya[0] - n_s, yb[0] - n_eq])
y_num = np.array([np.linspace(n_s, n_eq, n_num),np.linspace(n_s, n_eq, n_num)])
sol_num = solve_bvp(numdens, bc_num, x_num, y_num)
plt.plot(sol_num.x, sol_num.y[0], label='$n(x)$')
plt.plot(x_num, sol_velo-v_0/7, label='$v(x)$')
plt.yscale('log')
plt.grid(alpha=0.5)
plt.legend(framealpha=1)
plt.show()
You need to take into account that the BVP solver uses an adaptive mesh: after refining the initial guess on the initial grid, the solver identifies regions with overly large errors and creates new mesh nodes there. As far as I have seen, the opposite is not implemented, even though it might sometimes be sensible to reduce the number of mesh nodes on especially "nice" segments.
Thus what you are doing in the numdens function cannot work: it has to behave exactly like any other function that you would pass to an ODE solver, accepting whatever mesh x the solver supplies. If I had to propose a quick fix, without knowing what the underlying problem is that you want to solve, I would change the assignment of v to
v = np.interp(x, x_num, sol_new)
as that should at least produce an array of the correct shape (np.interp needs the flattened sol_new rather than the (N, 1)-shaped array that odeint returns).
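A minimal sketch of numdens along those lines (assuming that interpolating the precomputed velocity is acceptable for your problem; it reuses x_num, sol_new, constants, Q, and n_eq from your code):
# Sketch: evaluate the velocity on whatever mesh x the solver passes in,
# by interpolating the odeint result instead of indexing a fixed array.
def numdens(x, y):
    v = np.interp(x, x_num, sol_new)  # v now has the same shape as x
    D1, D2, D3 = constants(v)
    return np.vstack((y[1], (-D2*y[1] - D3*y[0] + Q*((1 - y[0])/n_eq))/D1))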

Avoiding divergent solutions with odeint? Shooting method

I am trying to solve an equation in Python. Basically what I want to do is to solve the equation:
(1/x^2)*d(Gam*dL/dx)/dx + (a^2*x^2/Gam - m^2)*L = 0
This is the Klein-Gordon equation for a massive scalar field in a Schwarzschild spacetime. Suppose that we know m and Gam = x^2 - 2*x. The initial/boundary conditions that I know are L(2+epsilon) = 1 and L(infty) = 0. Notice that the asymptotic behavior of the equation is
L(x --> infty) --> Exp[(m^2 - a^2)^(1/2)*x]/x and Exp[-(m^2 - a^2)^(1/2)*x]/x
Then, if a^2 > m^2 we will have oscillatory solutions, while if a^2 < m^2 we will have a divergent and a decaying solution.
I am interested in the decaying solution. However, when I transform the above equation into a system of first-order differential equations and use the shooting method to find the value of "a" that gives the behavior I am interested in, I always end up with a divergent solution. I suppose that happens because odeint always finds the divergent asymptotic solution. Is there a way to avoid this, or to tell odeint that I am interested in the decaying solution? If not, do you know a way I could solve this problem, perhaps using another method for solving my system of differential equations? If yes, which method?
Basically what I am doing is to add a new system of equations for "a",
(d^2a/dx^2 = 0, da/dx(2+epsilon) = 0, a(2+epsilon) = a_0)
in order to have "a" as a constant. Then I consider different values of "a_0" and check whether my boundary conditions are fulfilled.
Thanks for your time. Regards,
Luis P.
I am incorporating the value at infinity by considering the asymptotic behavior; this means that I have a relation between the field and its derivative. I will post the code in case it is helpful:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from math import *
from scipy.integrate import ode
These are the initial conditions for Schwarzschild. The field is invariant under rescaling, so I can use $L(2+\epsilon)=1$:
def init_sch(u_sch):
    om = u_sch[0]
    return np.array([1, 0, om, 0])  # conditions near the horizon: [L_c, dL/dx, a, da/dx]
This is our system of equations:
def F_sch(IC, r, rho_c, m, lam, l, j=0, mu=0):
    L = IC[0]
    ph = IC[1]
    om = IC[2]
    b = IC[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2. + lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.
    return [dR_dr, dph_dr, dom_dr, db_dr]
Then I try different values of "om" and check whether my boundary conditions are fulfilled. p_sch holds the parameters of my model. In general what I want to do is a little more complicated, and I will need more parameters than in the purely massive case. However, I need to start with the easiest case, which is what I am asking about here:
p_sch = (1,1,0,0) #[rho_c,m,lam,l], lam and l are for a more complicated case
ep = 0.2
ep_r = 0.01
r_end = 500
n_r = 500000
n_omega = 1000
omega = np.linspace(p_sch[1]-ep,p_sch[1],n_omega)
r = np.linspace(2+ep_r,r_end,n_r)
tol = 0.01
a = 0
for j in range(len(omega)):
    print('trying with omega =', omega[j])
    omeg = [omega[j]]
    ini = init_sch(omeg)
    Y = odeint(F_sch, ini, r, p_sch, mxstep=50000000)
    print(Y[-1, 0])
    # Here I check whether my asymptotic behavior is fulfilled.
    # This should basically be my value at infinity.
    if abs(Y[-1,0]*((p_sch[1]**2. - Y[-1,2]**2.)**(1/2.) + 1./(r[-1])) + Y[-1,1]) < tol:
        print(j, 'iterations in omega')
        print("R'(inf) =", Y[-1,0])
        print("omega =", omega[j])
        omega_1 = [omega[j]]
        a = 10
        break
    if a > 1:
        break
Basically what I want to do here is to solve the system of equations for different initial conditions and find a value of "a" (or "om" in the code) that comes close to satisfying my boundary conditions. I need this because afterwards I can feed such an initial guess into a secant method and try to find a better value of "a". However, whenever I run this code I get divergent solutions, which is of course behavior I am not interested in. I am trying the same with scipy.integrate.solve_bvp, but when I run the following code:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from math import *
from scipy.integrate import solve_bvp
def bc(ya, yb, p_sch):
    m = p_sch[1]
    om = p_sch[4]
    tol_s = p_sch[5]
    r_end = p_sch[6]
    return np.array([ya[0]-1, yb[0]-tol_s, ya[1],
                     yb[1] + ((m**2 - yb[2]**2)**(1/2) + 1/r_end)*yb[0],
                     ya[2]-om, yb[2]-om, ya[3], yb[3]])

def fun(r, y, p_sch):
    rho_c = p_sch[0]
    m = p_sch[1]
    lam = p_sch[2]
    l = p_sch[3]
    L = y[0]
    ph = y[1]
    om = y[2]
    b = y[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2. + lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.*y[3]
    return np.vstack((dR_dr, dph_dr, dom_dr, db_dr))
eps_r=0.01
r_end = 500
n_r = 50000
r = np.linspace(2+eps_r,r_end,n_r)
y = np.zeros((4,r.size))
y[0]=1
tol_s = 0.0001
p_sch= (1,1,0,0,0.8,tol_s,r_end)
sol = solve_bvp(fun,bc, r, y, p_sch)
I am obtaining this error: ValueError: bc return is expected to have shape (11,), but actually has (8,).
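For what it's worth, the shape (11,) comes from how solve_bvp treats unknown parameters: with a system of n = 4 components and k = 7 entries in the parameter vector p_sch, bc is expected to return n + k = 11 values. A possible way around this (a sketch on my part, not a tested solution) is to pass only om as an unknown parameter, keep the fixed constants in the enclosing scope, and drop om and b from the state vector; it reuses eps_r, r_end, and tol_s from the code above:
# Sketch: 2 state components (L and dL/dr) plus 1 unknown parameter (om),
# so bc must return 2 + 1 = 3 values.
rho_c, m, lam, l = 1, 1, 0, 0

def fun(r, y, p):
    om = p[0]
    L, ph = y[0], y[1]
    Gam_sch = r**2. - 2.*r
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.))
                           - om**2.*r**4.*L/Gam_sch
                           + (m**2. + lam*L**2.)*r**2.*L)
    return np.vstack((ph, dph_dr))

def bc(ya, yb, p):
    om = p[0]
    return np.array([ya[0] - 1,
                     yb[0] - tol_s,
                     yb[1] + ((m**2 - om**2)**(1/2) + 1/r_end)*yb[0]])

r = np.linspace(2 + eps_r, r_end, 101)  # a coarser initial mesh
y = np.zeros((2, r.size))
y[0] = 1
sol = solve_bvp(fun, bc, r, y, p=[0.8])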

Evaluate sum of step functions

I have a fairly large number (around 1000) of step functions, each with only two intervals. I'd like to sum them up and then find the maximum value. What is the best way to do this? I've tried sympy, with code as follows:
from sympy import Piecewise, piecewise_fold, evalf
from sympy.abc import x
from sympy.plotting import *
import numpy as np

S = 20
t = np.random.random(S)
sum_piecewise = None
for s in range(S):
    p = Piecewise((np.random.random(), x < t[s]), (np.random.random(), x >= t[s]))
    if sum_piecewise is None:
        sum_piecewise = p
    else:
        sum_piecewise += p
print(sum_piecewise.evalf(0.2))
However, this outputs a large symbolic expression and not an actual value, which is what I want.
Since you appear to be working with numerical functions, it is better (in terms of performance) to work with NumPy. Here's one approach:
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(10)
S = 20  # number of piecewise functions

# generate S function parameters.
# For example, the k-th function is defined as equal to
# p_values[k,0] when t < t_values[k] and equal to
# p_values[k,1] when t >= t_values[k]
t_values = np.random.random(S)
p_values = np.random.random((S, 2))

# define a piecewise function given the function's parameters
def p_func(t, t0, p0):
    return np.piecewise(t, [t < t0, t >= t0], p0)

# define a function that sums a set of piecewise functions corresponding to
# parameter arrays t_values and p_values
def p_sum(t, t_values, p_values):
    return np.sum([p_func(t, t0, p0) for t0, p0 in zip(t_values, p_values)])
Here is the plot of the sum of functions:
t_range = np.linspace(0,1,1000)
plt.plot(t_range, [p_sum(tt,t_values,p_values) for tt in t_range])
Clearly, in order to find the maximum it suffices to evaluate the sum at the S time instants contained in t_values (plus, strictly speaking, one point below the smallest of them, where every function still takes its first value). For this example,
np.max([p_sum(tt, t_values, p_values) for tt in t_values])
11.945901591934897
What about using substitution? Try replacing sum_piecewise.evalf(0.2) with sum_piecewise.subs(x, 0.2).
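For instance (a small sketch reusing sum_piecewise from the question):
# Sketch: substitute a concrete value for x, then convert to a float.
value = sum_piecewise.subs(x, 0.2)
print(float(value))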

Scipy odeint giving index out of bounds errors

I am trying to solve a differential equation in Python using SciPy's odeint function. The equation is of the form dy/dt = w(t), where w(t) = w1*(1 + A*sin(w2*t)) for some parameters w1, w2, and A. The code I've written works for some parameters, but for others I get index out of bounds errors.
Here's some example code that works
import numpy as np
import scipy.integrate as integrate
t = np.arange(1000)
w1 = 2*np.pi
w2 = 0.016*np.pi
A = 1.0
w = w1*(1+A*np.sin(w2*t))
def f(y, t0):
    return w[t0]

y = integrate.odeint(f, 0, t)
Here's some example code that doesn't work
import numpy as np
import scipy.integrate as integrate
t = np.arange(1000)
w1 = 0.3*np.pi
w2 = 0.005*np.pi
A = 0.15
w = w1*(1+A*np.sin(w2*t))
def f(y, t0):
    return w[t0]

y = integrate.odeint(f, 0, t)
The only thing that changes between these is that the three parameters w1, w2, and A are smaller in the second, but the second one always gives me the following error
line 13, in f
return w[t0]
IndexError: index 1001 is out of bounds for axis 0 with size 1000
This error persists even after restarting Python and running the second snippet first. I've tried other parameters; some seem to work, but others give me different index out of bounds errors. Some say 1001 is out of bounds, some say 1000, some say 1008, etc.
Changing the initial condition on y (the second input to odeint, which I set to 0 in the code above) also changes the number in the index error, so it might be that I'm misunderstanding what to put there. I wasn't told what the initial condition should be, other than that y is used as the phase of a signal, so I presumed it to be initially 0.
What you want to do is
def w(t):
    return w1*(1 + A*np.sin(w2*t))

def f(y, t0):
    return w(t0)
Array indices are typically integers; time arguments and values of solutions of differential equations are typically real numbers. Thus there is a conceptual difficulty in invoking w[t0]: the solver evaluates the right-hand side at arbitrary real times, not just at the integer grid points of t, which is also why indices beyond the length of the array appear.
You might also try to integrate the function w directly; there is no inherent difficulty in that for this example.
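Since the right-hand side does not depend on y, the solution is simply an antiderivative of w, which is available in closed form here (a sketch using the parameters defined above):
# Sketch: closed-form solution of dy/dt = w1*(1 + A*sin(w2*t)), y(0) = y0.
def y_exact(t, y0=0.0):
    return y0 + w1*(t + A*(1.0 - np.cos(w2*t))/w2)
This can serve as a reference to check the odeint result.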
As for coupled systems, you solve them as coupled systems:
def w(t):
    return w1*(1 + A*np.sin(w2*t))

def f(y, t):
    wt = w(t)
    return np.array([wt, wt*np.sin(y[1] - y[0])])
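This is then integrated in one call (a sketch with an assumed initial state of two zero phases):
# Sketch: integrate the coupled system; y0 is an assumed initial state.
y0 = [0.0, 0.0]
sol = integrate.odeint(f, y0, t)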

Symbolically computing the input of an interpolation function?

I have a rather complicated function H(x), and I'm trying to solve for the value of x such that H(x) = constant. I would like to do this with an interpolation object generated from a discrete interval and the corresponding output of H over that interval, with the other inputs held constant. I denote the interpolation object f.
My problem is that the call method of the interpolation object accepts an array_like, so passing a symbol to f(x) in order to use Sage's solver is out of the question. Any ideas how to get around this?
I have the interpolation function f. I would like to solve the equation f(x) == sageconstant for x.
from scipy.interpolate import InterpolatedUnivariateSpline as IUspline
import numpy as np
#Generating my interpolation object
xint = srange(30, 200, step=.1)
val = [H(i, 1, .1, 0, .2, .005, 40) for i in srange(30, 200, step=.1)]  # must match xint in length
f = IUspline(xint,val,k=4)
#This will yield a sage constant
eq_G(x) = freeB - x
#relation that I would like to solve
eq_m(x) = eq_G(39.9) == f(x)
m = solve(eq_m(x),x)
The above code (f(x), to be more specific) generates:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'.
Edit: any function H(x) will result in the same error, hence it doesn't matter what H(x) is. For simplicity (I wasn't kidding when I said H is complicated), try H(x) = x. Then the block reads:
from scipy.interpolate import InterpolatedUnivariateSpline as IUspline
import numpy as np
#Generating my interpolation object
xint = srange(30, 200, step=.1)
H(x) = x
val = [H(i) for i in srange(30, 200, step=.1)]
f = IUspline(xint,val,k=4)
#This will yield a sage constant
eq_G(x) = freeB - x
#relation that I would like to solve
eq_m(x) = eq_G(39.9) == f(x)
m = solve(eq_m(x),x)
When working with numpy and scipy, prefer Python types to Sage types.
Instead of Sage Integers and Reals, use Python ints and floats.
Maybe you can fix your code like this.
from scipy.interpolate import InterpolatedUnivariateSpline as IUspline
import numpy as np
# Generate interpolation object
xint = srange(30, 200, step=.1)
xint = [float(x) for x in xint]
val = [float(H(i, 1, .1, 0, .2, .005, 40)) for i in srange(30, 200, step=.1)]
f = IUspline(xint,val,k=4)
# This will yield a Sage constant
eq_G(x) = freeB - x
# relation that I would like to solve
eq_m(x) = eq_G(39.9) == f(x)
m = solve(eq_m(x),x)
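If the symbolic solve still proves troublesome, a purely numerical alternative (my suggestion, not part of the original answer) is to root-find on the spline directly with SciPy, assuming freeB is a numeric Sage constant:
# Sketch: solve f(x) == c numerically instead of with Sage's solve.
# Assumes f(t) - c changes sign somewhere on [30, 200].
from scipy.optimize import brentq
c = float(eq_G(39.9))
root = brentq(lambda tt: float(f(tt)) - c, 30.0, 200.0)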
