I am writing a program that uses integrals. I tried SymPy, but it was too slow, so I switched to SciPy's integrate module, which sped my script up considerably, but I ran into one problem.
In general, the code looks like this:
from scipy.integrate import quad
import sympy as S
x = S.Symbol('x')
y = S.Symbol('y')
xi = 0.75
a = 10; b = 10; f1 = 0.5; f2 = 0.5; f0 = f1+f2; al = -f1/f0; be = -f2/f0
F0 = f0*(al*(x**2/a**2)**xi+be*(y**2/b**2)**xi+1)
j2 = F0.diff(y,2)
jj2 = S.lambdify([y],j2,'scipy')
J2_ = quad(jj2,-a,a)
J2 = (J2_[0]*a**2)/f0
and it just crashed; here is the error:
File "C:\Users\Mikhail\Desktop\robpy\cyc.py", line 60, in raschet
J2_ = quad(jj2,-a,a)
File "C:\Users\Mikhail\AppData\Local\Programs\Python\Python310\lib\site-packages\scipy\integrate\_quadpack_py.py", line 463, in quad
retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
File "C:\Users\Mikhail\AppData\Local\Programs\Python\Python310\lib\site-packages\scipy\integrate\_quadpack_py.py", line 575, in _quad
return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit)
File "<lambdifygenerated-2>", line 2, in _lambdifygenerated
ZeroDivisionError: float division by zero
F0.diff(y,2) is equal to:
-0.0118585412256314*(y**2)**0.75/y**2
I suspect that the y**2 in the denominator is the cause of the error, but when I tried SymPy to do the same integration, like
jj2 = S.integrate(j2,(y,-a,a))
it works fine and evaluates it to about -0.15.
I can't just rebuild F0 using floats instead of symbolic variables, because the task assumes different initial data from the user.
How can I avoid this error? Thank you.
Thank you all for your help!
I solved the problem like this:
def jj2_safe(y):
    if y == 0:
        return 0
    else:
        return jj2(y)  # jj2 is the lambdified second derivative from above
J2_ = quad(jj2_safe, -a, a)
J2 = (J2_[0]*a**2)/f0
So it returns a complex value, but F0 is just one part of the integral I need, and it now solves, and solves correctly!
(Yay, I reduced the execution time from 240 seconds to 3 seconds!)
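As an alternative to guarding the integrand by hand, quad can be told where the integrand is singular, so that the adaptive rule never samples y = 0 exactly. The following is only a sketch of that idea, reusing the names from the question; the points=[0] argument is my suggestion rather than part of the original fix:
from scipy.integrate import quad
import sympy as S

x, y = S.symbols('x y')
xi = 0.75
a = 10; b = 10; f1 = 0.5; f2 = 0.5; f0 = f1 + f2; al = -f1/f0; be = -f2/f0
F0 = f0*(al*(x**2/a**2)**xi + be*(y**2/b**2)**xi + 1)
jj2 = S.lambdify([y], F0.diff(y, 2), 'scipy')

# splitting the interval at the known trouble spot keeps the Gauss-Kronrod
# nodes strictly inside (-a, 0) and (0, a), so jj2 is never called with y = 0
J2_ = quad(jj2, -a, a, points=[0])
J2 = (J2_[0]*a**2)/f0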
Related
I am trying to do an integral over the first derivative of the Fermi-Dirac function f(E) and a transmission function t(E) to find a value for the conductance G. I am having a problem with the fact that t(E) is a summation, so the integrand takes multiple (vectorized) values.
This is the code that I have produced; the error shows up in the integration call.
import numpy as np
import scipy.constants as phys
import scipy.integrate as integrate
import math
E = np.expand_dims(np.linspace(0, 15, 10000), 1)
n = np.arange(0, 6)
h = 1/(1 + np.exp(-2*np.pi * (E-(n-0.5)*3)))
fermi = 2.5
kT = 0.2
def fermi_integral(E, fermi, T, n):
    return (np.exp((E-fermi)/(kT))/((np.exp((E-fermi)/(kT)) + 1)**2) * 1/(kT)) * h
# above function is the integral part of G; df/dE * t(e)
result = integrate.quad(fermi_integral, 0, np.inf, args = (kT, fermi, E))
# integrating the function from 0 to infinity
print('Result of integral;', result)
G = (-(2*math.e**2)/phys.Planck) *np.array(result)
# multiplying the constants outside the integral in
print('Result for G:', G)
I am looking for multiple values but have not been able to produce any.
Any help would be appreciated.
Edit:
The error shows as follows:
Traceback (most recent call last):
File ~\OneDrive\Documents\BSc_Project\Fermi.py:27 in <module>
result = integrate.quad(fermi_integral, 0, np.inf, args = (kT, fermi, E))
File ~\anaconda3\lib\site-packages\scipy\integrate\quadpack.py:351 in quad
retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
File ~\anaconda3\lib\site-packages\scipy\integrate\quadpack.py:465 in _quad
return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit)
TypeError: only size-1 arrays can be converted to Python scalars
scipy.integrate.quad does not support integration of vector-valued functions. The vectorized version is scipy.integrate.quad_vec, and you need SciPy 1.8.0 or newer for the "args" parameter.
Simply replacing quad with quad_vec works, but results in NaNs, although I think that might be a flaw in the logic, and not the code itself.
The answer was based on this SO question.
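For reference, here is a minimal, self-contained sketch of the quad_vec approach. It is my own toy example rather than the question's exact integrand, and the sech² expression is a numerically stable rewrite of -df/dE that avoids the inf/inf NaNs the naive exponential form can produce at large E:
import numpy as np
from scipy.integrate import quad_vec

kT, fermi = 0.2, 2.5
n = np.arange(0, 6)                           # one transmission channel per entry

def integrand(E):
    # step-like transmission t_n(E): one value per n, so the integrand is vector-valued
    t = 1.0 / (1.0 + np.exp(-2*np.pi*(E - (n - 0.5)*3)))
    # -df/dE written as a sech^2, clipped so cosh never overflows
    u = np.clip((E - fermi)/(2*kT), -350, 350)
    dfdE = 0.25/kT / np.cosh(u)**2
    return dfdE * t

result, err = quad_vec(integrand, 0, np.inf)  # one integral per channel
print(result)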
I am computing a solution to the free-basis expansion of the Dirac equation for electron-positron pair production. For this I need to solve a system of equations that looks like this:
Equation for pair production, from Mocken et al.
EDIT: This has been solved by passing y0 as complex type into the solver. As is stated in this issue: https://github.com/scipy/scipy/issues/8453 I would definitely consider this a bug, but it seems to have gone unnoticed for at least 4 years.
For this I am using SciPy's solve_ivp integrator in the following way:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from scipy.integrate import solve_ivp
import scipy.constants as constants
#Impulse
px, py = 0 , 0
#physics constants
e = constants.e
m = constants.m_e # electronmass
c = constants.c
hbar = constants.hbar
#relativistic energy
E = np.sqrt(m**2 *c**4 + (px**2+py**2) * c**2) # E_p
#adiabatic parameter
xi = 1
#Parameter of the system
w = 0.840 #frequency in 1/m_e
N = 8 # amount of amplitudes in window
T = 2* np.pi/w
#unit system
c = 1
hbar = 1
m = 1
#strength of electric field
E_0 = xi*m*c*w/e
print(E_0)
#vectorpotential
A = lambda t,F: -E_0/w *np.sin(t)*F
def linearFenster2(t):
    conditions = [t <=0, (t/w>=0) and (t/w <= T/2), (t/w >= T/2) and (t/w<=T*(N+1/2)), (t/w>=T*(N+1/2)) and (t/w<=T*(N+1)), t/w>=T*(N+1)]
    funcs = [lambda t: 0, lambda t: 1/np.pi *t, lambda t: 1, lambda t: 1-w/np.pi * (t/w-T*(N+1/2)), lambda t: 0]
    return np.piecewise(t,conditions,funcs)
#Coefficient functions
nu = lambda t: -1j/hbar *e*A(w*t,linearFenster2(w*t)) *np.exp(2*1j/hbar * E*t) *(px*py*c**2 /(E*(E+m*c**2)) + 1j*(1- c**2 *py**2/(E*(E+m*c**2))))
kappa = lambda t: 1j*e*A(t,linearFenster2(w*t))* c*py/(E * hbar)
#System to solve
def System(t, y, nu, kappa):
    df = kappa(t) *y[0] + nu(t) * y[1]
    dg = -np.conjugate(nu(t)) * y[0] + np.conjugate(kappa(t))*y[1]
    return np.array([df,dg], dtype=np.cdouble)
def solver(tmin, tmax, teval=None, f0=0, g0=1):
    '''solves the system.
    #tmin: starttime
    #tmax: endtime
    #f0: starting percentage of already present electrons of positive energy, usually 0
    #g0: starting percentage of already present electrons of negative energy, usually 1, therefore full vacuum
    '''
    y0 = [f0, g0]
    tspan = np.array([tmin, tmax])
    koeff = np.array([nu, kappa])
    sol = solve_ivp(System, tspan, y0, t_eval=teval, args=koeff)
    return sol
#Plotting of windowfunction
amount = 10**2
t = np.arange(0, T*(N+1), 1/amount)
vlinearFenster2 = np.array([linearFenster2(w*a) for a in t ], dtype = float)
fig3, ax3 = plt.subplots(1,1,figsize=[24,8])
ax3.plot(t,E_0/w * vlinearFenster2)
ax3.plot(t,A(w*t,vlinearFenster2))
ax3.plot(t,-E_0 /w * vlinearFenster2)
ax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax3.set_xlabel("t in s")
ax3.grid(which = 'both')
plt.show()
sol = solver(0, 70,teval = t)
ts= sol.t
f=sol.y[0]
fsquared = 2* np.absolute(f)**2
plt.plot(ts,fsquared)
plt.show()
The plot for the window function looks like this (and is correct)
window function
However, the plot for the solution looks like this:
Plot of pair production probability
This is not correct based on the paper's graphs (and further testing using Mathematica instead).
When running the line 'sol = solver(..)' it says:
\numpy\core\_asarray.py:102: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
I simply do not know why solve_ivp discards the imaginary part. It's absolutely necessary.
Can someone who knows more enlighten me or spot the mistake?
According to the documentation, the y0 passed to solve_ivp must be of type complex in order for the integration to be over the complex domain. A robust way of ensuring this is to add the following to your code:
def solver(tmin, tmax, teval=None, f0=0, g0=1):
    '''solves the system.
    #tmin: starttime
    #tmax: endtime
    #f0: starting percentage of already present electrons of positive energy, usually 0
    #g0: starting percentage of already present electrons of negative energy, usually 1, therefore full vacuum
    '''
    f0 = complex(f0)  # <-- added
    g0 = complex(g0)  # <-- added
    y0 = [f0, g0]
    tspan = np.array([tmin, tmax])
    koeff = np.array([nu, kappa])
    sol = solve_ivp(System, tspan, y0, t_eval=teval, args=koeff)
    return sol
I tried the above, and it indeed made the warning disappear. However, the result of the integration seems to be the same regardless.
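To see the dtype behaviour in isolation, here is a tiny toy sketch of my own (not from the question) showing that solve_ivp only keeps the integration complex when y0 itself is complex:
import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda t, y: 1j*y                       # trivially complex right-hand side

sol_real = solve_ivp(rhs, (0, 1), [1.0])      # real y0: ComplexWarning, imaginary part discarded
sol_cplx = solve_ivp(rhs, (0, 1), [1.0 + 0j]) # complex y0: integration stays complex

print(sol_real.y.dtype)  # float64
print(sol_cplx.y.dtype)  # complex128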
I'm having some trouble with Python's complex_ode solver.
I'm trying to solve the following equation:
dy/dt = -i*A*y - i*cos(Omega*t)*B*y
where A and B are NxN arrays, the unknown y is an Nx1 array, i is the imaginary unit and Omega is a parameter.
Here's my code:
import numpy as np
from scipy.integrate import ode,complex_ode
N = 3 #linear matrix dim
Omega = 1.0 #parameter
# define symmetric matrices A and B
A = np.random.ranf((N,N))
A = (A + A.T)/2.0
B = np.random.ranf((N,N))
B = (B + B.T)/2.0
# define RHS of ODE
def f(t,y,Omega,A,B):
    return -1j*A.dot(y)-1j*np.cos(Omega*t)*B.dot(y)
# define list of parameter
params=[Omega,A,B]
# choose solver: need complex_ode for this ODE
#solver = ode(f)
solver = complex_ode(f)
solver.set_f_params(*params)
solver.set_integrator("dop853")
# set initial value
v0 = np.zeros((N,),dtype=np.float64)
v0[0] = 1.0
# check that the function f works properly
print f(0,v0,Omega,A,B)
# solve-check the ODE
solver.set_initial_value(v0)
solver.integrate(10.0)
print solver.successful()
Running this script produces the error
capi_return is NULL
Call-back cb_fcn_in___user__routines failed.
Traceback (most recent call last):
File "ode_test.py", line 37, in <module>
solver.integrate(10.0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 515, in integrate
y = ode.integrate(self, t, step, relax)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 388, in integrate
self.f_params, self.jac_params)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 946, in run
tuple(self.call_args) + (f_params,)))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 472, in _wrap
f = self.cf(*((t, y[::2] + 1j * y[1::2]) + f_args))
TypeError: f() takes exactly 5 arguments (2 given)
If instead I use solver = ode(f), i.e. the real-valued solver, it runs fine. Except that it doesn't solve the ODE I want, which is complex-valued :(
I then tried to reduce the number of parameters by making the matrices A and B global variables, so that the only parameter the function f accepts is Omega. The error changes to
capi_return is NULL
Call-back cb_fcn_in___user__routines failed.
Traceback (most recent call last):
File "ode_test.py", line 37, in <module>
solver.integrate(10.0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 515, in integrate
y = ode.integrate(self, t, step, relax)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 388, in integrate
self.f_params, self.jac_params)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 946, in run
tuple(self.call_args) + (f_params,)))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 472, in _wrap
f = self.cf(*((t, y[::2] + 1j * y[1::2]) + f_args))
TypeError: 'float' object has no attribute '__getitem__'
where I figured out that 'float' refers to the parameter Omega [by trying an integer]. Again, ode alone works in this case as well.
Last, I tried the same complex-valued equation, but now A and B are just numbers. I tried to pass them both as parameters, i.e. params = [Omega,A,B], as well as making them global variables, in which case params = [Omega]. The error is the same
TypeError: 'float' object has no attribute '__getitem__'
error as above (the full traceback is identical). And once again this problem does not occur for the real-valued ode.
I know zvode is an alternative, but it appears to become quite slow for large N. In the real problem I have, A is a diagonal matrix but B is a non-sparse full matrix.
Any insights are much appreciated! I'm interested both in (i) alternative ways to solve this complex-valued ODE with array-valued parameters, and (ii) how to get complex_ode to run :)
Thanks!
It seems like the link that Reti43 posted contains the answer, so let me put it here for the benefit of future users:
from scipy.integrate import complex_ode
import numpy as np
N = 3
Omega = 1.0;
class myfuncs(object):
    def __init__(self, f, fargs=[]):
        self._f = f
        self.fargs = fargs

    def f(self, t, y):
        return self._f(t, y, *self.fargs)

def f(t, y, Omega, A, B):
    return -1j*(A + np.cos(Omega*t)*B).dot(y)
A = np.random.ranf((N,N))
A = (A + A.T)/2.0
B = np.random.ranf((N,N))
B = (B + B.T)/2.0
v0 = np.zeros((N,),dtype=np.float64)
v0[0] = 1.0
t0 = 0
case = myfuncs(f, fargs=[Omega, A, B] )
solver = complex_ode(case.f)
solver.set_initial_value(v0, t0)
solver.integrate(10.0)
print solver.successful()
"""
t1 = 10
dt = 1
while solver.successful() and solver.t < t1:
solver.integrate(solver.t+dt)
print(solver.t, solver.y)
"""
Could someone maybe comment on why this trick does the job?
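For what it's worth, the wrapper works because Omega, A and B are bound inside the class instance, so complex_ode only ever has to call a two-argument f(t, y) and never needs to forward extra parameters itself. On current SciPy a simpler route is solve_ivp, which integrates complex systems directly when y0 is complex and accepts extra parameters via args (SciPy 1.4 or newer); the following is a sketch, not part of the original answer:
import numpy as np
from scipy.integrate import solve_ivp

N = 3
Omega = 1.0
A = np.random.ranf((N, N)); A = (A + A.T)/2.0
B = np.random.ranf((N, N)); B = (B + B.T)/2.0

def rhs(t, y, Omega, A, B):
    return -1j*(A + np.cos(Omega*t)*B).dot(y)

y0 = np.zeros(N, dtype=complex)   # complex dtype keeps the integration complex
y0[0] = 1.0

sol = solve_ivp(rhs, (0.0, 10.0), y0, args=(Omega, A, B))
print(sol.success, sol.y[:, -1])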
I'm doing a Newton iteration to find T_a. Everything seems fine in the code except in one of the very first definitions.
My rho(T_a) raises a division by zero (it behaves as if T_a were zero, while it's just a variable). If I change the T_a in the equation to something like 100, everything runs smoothly.
Any idea why it's raising a division by zero?
from numpy import *
import numpy as np
import pylab
import scipy
from scipy.optimize import leastsq
from math import *
import matplotlib.pyplot as plt
from scipy import integrate
# THETA NOTATION:
#pi/2: substellar point
#-pi/2: antistellar point
#0: terminators
#define constants used in equations:
alb = 0.2 #constant albedo
F = 866 #J/s*m**2
R = 287.0 #J/K*kg
U = 5.0 #m/s
C_p = 1000 #J/K*kg
C_d = 0.0015
p1 = 10**4
p2 = 10**5.0
p3 = 10**6.0 #Pa
sig = 5.67*10**-8.0 #J/s*m**2*K**4 #Stefan-Boltzmann cst
def rho(T_a):
    p1 = 10000.0
    R = 287.0 #J/K*kg
    return (p1/(T_a*R))

def a(T_a):
    U = 5 #m/s
    C_p = 1000 #J/K*kg
    C_d = 0.0015
    return rho(T_a)*C_p*C_d*U
#################################################
##### PART 2 : check integrals equality
#################################################
#define the RHS and LHS of integral equality
def LHS(theta):
    return (1-alb)*F*np.sin(theta)*np.cos(theta)
#define the result of each integral
Left = integrate.quad(lambda theta: LHS(theta), 0, pi/2)[0]
#define a function 1-(result LHS/result RHS) >>> We look for the zero of this
x0=130.0 #guess a value for T_a
#T_a = 131.0
#Python way of solving for the zero of the function
#Define T_g in function of T_a, have RHS(T_a) return T_g**4 etc, have result_RHS(T_a) return int.RHS(T_a),
#have func(T_a) return result_LHS/result_RHS
def T_g(T_a,theta):
    return np.roots(array([(sig),0,0,(a(T_a)),((-a(T_a)*T_a)-LHS(theta))]))[3]

def RHS(theta,T_a):
    return sig*T_g(T_a,theta)**4*np.cos(theta)

def result_RHS(T_a,theta):
    return integrate.quad(lambda theta: RHS(T_a,theta), -pi/2, pi/2)[0]

def function(T_a,theta):
    return 1-((Left/result_RHS(T_a,theta)))
theta = np.arange(-pi/2, pi/2, pi/20)
T_a_0 = scipy.optimize.newton(function,x0,fprime=None,args=(theta,),tol= (10**-5),maxiter=50000)
Output:
Traceback (most recent call last):
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 85, in <module>
T_a_0 = scipy.optimize.newton(function,x0,fprime=None,args=(theta,),tol=(10**-5),maxiter=50000)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/zeros.py", line 120, in newton
q0 = func(*((p0,) + args))
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 81, in function
return 1-((Left/result_RHS(T_a,theta)))
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 78, in result_RHS
return integrate.quad(lambda theta: RHS(T_a,theta), -pi/2, pi/2)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/integrate/quadpack.py", line 247, in quad
retval = _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/integrate/quadpack.py", line 312, in _quad
return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit)
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 78, in <lambda>
return integrate.quad(lambda theta: RHS(T_a,theta), -pi/2, pi/2)[0]
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 75, in RHS
return sig*T_g(T_a,theta)**4*np.cos(theta)
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 72, in T_g
return np.roots(array([(sig),0,0,(a(T_a)),((-a(T_a)*T_a)-LHS(theta))]))[3]
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 38, in a
return rho(T_a)*C_p*C_d*U
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 32, in rho
return (p1/(T_a*R))
ZeroDivisionError: float division by zero
Your RHS function is defined slightly differently to all the others, in that it has theta first and T_a as its second argument:
def RHS(theta,T_a):
    return sig*T_g(T_a,theta)**4*np.cos(theta)
I think that's why you passed the arguments in the wrong order here:
lambda theta: RHS(T_a,theta)
Get them in the right order and you should be OK.
As a side-note, some of your imports look like they could cause weird bugs:
from numpy import *
from math import *
Numpy and the math module have at least a few function names in common, like sqrt. It's safer to just do import math and import numpy as np, and access the functions through the module name. Otherwise what happens when you call sqrt could change depending on the order you do your imports in.
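As a tiny illustration of that hazard (my own example, not from the question): which sqrt you end up calling depends purely on import order, and the two behave differently on arrays and on negative numbers:
from numpy import *
from math import *            # math.sqrt now shadows numpy.sqrt

# sqrt(array([1.0, 4.0]))     # TypeError: math.sqrt cannot handle arrays
# sqrt(-1.0)                  # ValueError (math domain error); numpy.sqrt would return nan instead

import numpy as np            # safer: qualified access, no shadowing
import math
print(np.sqrt(np.array([1.0, 4.0])))
print(math.sqrt(4.0))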
You reversed your parameters:
In result_RHS you call: RHS(T_a,theta), but the parameter definition of RHS is def RHS(theta,T_a)
Swap those in the definition and the error no longer occurs. Your definition should look like this:
def RHS(T_a, theta)
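Concretely, either order works as long as the definition and every call agree. A sketch of one consistent choice, keeping T_a first everywhere (the lambda variable is renamed to avoid shadowing the outer theta):
def RHS(T_a, theta):
    return sig*T_g(T_a, theta)**4*np.cos(theta)

def result_RHS(T_a, theta):
    return integrate.quad(lambda th: RHS(T_a, th), -pi/2, pi/2)[0]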
I'm trying to back out Black-Scholes implied volatilities from financial options data. If the data contains options for which an implied volatility cannot be found, this will make all the results equal to the initial guess. See the following example
from scipy.optimize import fsolve
import numpy as np
from scipy.stats import norm
S = 1293.77
r = 0.05
K = np.array([1255, 1260, 1265, 1270, 1275])
T = 2./365
price = np.array([38.9, 34.35, 29.7, 25.35, 21.05])
def black_scholes(S, K, r, T, sigma):
    d1 = (np.log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
volatility = lambda x: black_scholes(S, K, r, T, x) - price
print fsolve(volatility, np.repeat(0.1, len(K)))
gives
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
[ 0.1 0.1 0.1 0.1 0.1]
By doing the same operation with Matlab or Maple I know that no solution can be found for the first option. If I exclude that one, such that
K = np.array([1260, 1265, 1270, 1275])
price = np.array([34.35, 29.7, 25.35, 21.05])
I do get the right result
[ 0.19557092 0.20618568 0.2174149 0.21533821]
Therefore if a solution cannot be found I would expect fsolve to return NaN instead of my initial guess and not mess up the rest of the solutions.
Use the full_output argument to tell fsolve to return more information, and check the value of ier on return. For example,
sol, info, ier, msg = fsolve(volatility, np.repeat(0.1, len(K)), full_output=True)
if ier != 1:
    print "ier = %d" % (ier,)
    print msg
else:
    print "sol =", sol
You said:
...if a solution cannot be found I would expect fsolve to return NaN instead of my initial guess and not mess up the rest of the solutions.
fsolve has no way of knowing that the problem you are solving is actually a collection of decoupled problems. You've given it a single n-dimensional problem. Either it succeeds or it fails to find a solution to that problem.
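If you do want per-option results with failures marked as NaN, one option is to treat each strike as its own scalar root-finding problem. The following is a sketch of that idea (my own follow-up, using brentq with an assumed volatility bracket of [1e-6, 5.0]), not part of the original answer:
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

S, r, T = 1293.77, 0.05, 2./365
K = np.array([1255., 1260., 1265., 1270., 1275.])
price = np.array([38.9, 34.35, 29.7, 25.35, 21.05])

def black_scholes(S, K, r, T, sigma):
    d1 = (np.log(S/K) + (r + sigma**2/2)*T) / (sigma*np.sqrt(T))
    d2 = d1 - sigma*np.sqrt(T)
    return S*norm.cdf(d1) - K*np.exp(-r*T)*norm.cdf(d2)

vols = np.full(len(K), np.nan)
for i in range(len(K)):
    try:
        # each strike is an independent scalar problem; a failure stays NaN
        vols[i] = brentq(lambda s: black_scholes(S, K[i], r, T, s) - price[i], 1e-6, 5.0)
    except ValueError:   # no sign change on the bracket -> no implied volatility
        pass
print(vols)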