Problem with integral calculation with OOP in Python

First, I would like to calculate the integral, and then plot the function F(x), but I get the following error:
F() missing 1 required positional argument: 't'
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

x = np.arange(-20, 20, 0.5)

class NLA():
    def __init__(self, b=-20*10**-15, E=200*10**-9, w0=18*10**-6, t=50*10**-15, s=1.76, A=0.2, L=0.1):
        self.b = b
        self.E = E
        self.w0 = w0
        self.t = t
        self.s = s
        self.A = A
        self.L = L
        self.I0 = self.s*self.E/(self.t*np.pi*self.w0**2)
        self.a0 = (1/self.L)*np.log(10)*self.A
        self.Leff = 0.01*(1-np.exp(-self.L*self.a0))/self.a0
        self.c = self.b*self.I0*self.Leff

    def f(self, t, x):
        return np.log(1+((self.c*np.exp(-t**2))/(1+self.x**2)))

    def F(self, x, t):
        return ((1+x**2)/(self.c*np.pi**(1/2)))*quad(self.f(t), -np.inf, np.inf, args=(x))[0]

nla = NLA()
T = nla.F(x)

Change

def F(self, x, t):
    return ((1+x**2)/(self.c*np.pi**(1/2)))*quad(self.f(t), -np.inf, np.inf, args=(x))[0]

to

def F(self, x):
    temp = quad(self.f, -np.inf, np.inf, args=(x,))[0]
    return ((1+x**2)/(self.c*np.pi**(1/2)))*temp

quad is supposed to get a function, in this case self.f. quad will call it with the integration variable t and the args tuple.
I split off temp so the quad call is easier to see (and, if needed, to debug).
F should not take t as an argument.
When using functions like quad, follow the documentation carefully. If anything is confusing, practice with its examples. I don't think the OOP is causing problems, except that all those "self" may be confusing you and obscuring the basic quad call.
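Putting it together, here is a minimal runnable sketch with that fix applied; it also assumes the self.x inside f was meant to be the x passed through args, since no self.x is ever set in __init__:

import numpy as np
from scipy.integrate import quad

class NLA():
    def __init__(self, b=-20*10**-15, E=200*10**-9, w0=18*10**-6,
                 t=50*10**-15, s=1.76, A=0.2, L=0.1):
        self.I0 = s*E/(t*np.pi*w0**2)
        self.a0 = (1/L)*np.log(10)*A
        self.Leff = 0.01*(1 - np.exp(-L*self.a0))/self.a0
        self.c = b*self.I0*self.Leff

    def f(self, t, x):
        # integrand; x arrives through quad's args tuple
        return np.log(1 + (self.c*np.exp(-t**2))/(1 + x**2))

    def F(self, x):
        temp = quad(self.f, -np.inf, np.inf, args=(x,))[0]
        return ((1 + x**2)/(self.c*np.pi**0.5))*temp

nla = NLA()
x = np.arange(-20, 20, 0.5)
T = np.array([nla.F(xi) for xi in x])  # quad needs a scalar x per call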

Related

Python / GPyOpt: Optimizing only one argument

I'm currently trying to find the minimum of some function f(arg1, arg2, arg3, ...) via Gaussian-process optimization using the GPyOpt module. While f(...) takes many input arguments, I only want to optimize a single one of them. How do you do that?
My current "solution" is to put f(...) in a dummy class and specify the not-to-be-optimized arguments while initializing it. While this is arguably the most Pythonic way of solving this problem, it's also way more complicated than it has any right to be.
Short working example for a function f(x, y, method) with fixed y (a numeric) and method (a string) while optimizing x:
import GPyOpt
import numpy as np

# dummy class
class TarFun(object):
    # fix y while initializing the object
    def __init__(self, y, method):
        self.y = y
        self.method = method

    # actual function to be minimized
    def f(self, x):
        if self.method == 'sin':
            return np.sin(x - self.y)
        elif self.method == 'cos':
            return np.cos(x - self.y)

# create TarFun object with y fixed to 2 and use 'sin' method
tarFunObj = TarFun(y=2, method='sin')
# describe properties of x
space = [{'name': 'x', 'type': 'continuous', 'domain': (-5, 5)}]
# create GPyOpt object that will only optimize x
optObj = GPyOpt.methods.BayesianOptimization(tarFunObj.f, space)
There definitely has to be a simpler way. But all the examples I found optimize all arguments, and I couldn't figure it out from reading the code on GitHub (I thought I would find the information in GPyOpt.core.task.space, but had no luck).
GPyOpt supports this natively with contexts. You describe the whole domain of your function, and then fix the values of some of the variables with a context dictionary when calling the optimization routine. The API looks like this:
myBopt.run_optimization(..., context={'var1': .3, 'var2': 0.4})
More details can be found in this tutorial notebook about contextual optimization.
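As a rough sketch of the context approach, adapting the sin example from the question (the two-variable objective and domains here are illustrative, not from the original post):

import GPyOpt
import numpy as np

def f(X):
    # GPyOpt evaluates the objective on a 2-D array, one row per point
    x, y = X[:, 0], X[:, 1]
    return np.sin(x - y).reshape(-1, 1)

# declare the full domain, including the variable that will be fixed
space = [{'name': 'x', 'type': 'continuous', 'domain': (-5, 5)},
         {'name': 'y', 'type': 'continuous', 'domain': (-5, 5)}]

opt = GPyOpt.methods.BayesianOptimization(f, domain=space)
# hold y at 2 for this run; only x is actually optimized
opt.run_optimization(max_iter=10, context={'y': 2})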
I would check out the partial function from the functools module in the standard library. It allows you to partially specify a function, fixing some of its arguments in advance. For example:
import GPyOpt
import numpy as np
from functools import partial

def f(x, y=0):
    return np.sin(x - y)

objective = partial(f, y=2)
space = [{'name': 'x', 'type': 'continuous', 'domain': (-5, 5)}]
opt = GPyOpt.methods.BayesianOptimization(objective, domain=space)

Optimizing root finding algorithm from scipy

I use the root function from scipy.optimize with the method "excitingmixing" in my code because other methods, like standard Newton, don't converge to the roots I am looking for.
However, I would like to optimize my code using numba, which doesn't support the scipy package. I tried to look up the "excitingmixing" algorithm in the documentation so I could program it myself:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html
I didn't find anything useful except the not really helpful statement that the method "uses a tuned diagonal Jacobian approximation".
I would be glad if someone could tell me something about the algorithm, or has an idea on how to optimize the scipy function in another way.
As requested here is a minimal code example:
import numpy as np
from scipy import optimize
from numba import jit

@jit(nopython=True)
def func(x):
    [a, b, c, d] = x
    da = a*(1-b)
    db = b*(1-c)
    dc = c
    dd = 1
    return [da, db, dc, dd]

@jit(nopython=True)
def getRoot(x0):
    solution = optimize.root(func, x0, method="excitingmixing")
    return solution.x

root = getRoot([0.1, 0.1, 0.2, 0.4])
print(root)
You can look at the scipy source code to see the implementation of the excitingmixing option:
https://github.com/scipy/scipy/blob/c948e96ebb3454f6a82e9d14021cc601d7ce7a85/scipy/optimize/nonlin.py#L1272
You're likely not going to want to reimplement the entire root-finding algorithm in numba. A better strategy to test is to use numba to optimize only the function that you pass to the scipy method. You'll still pay some overhead for scipy calling a function, but you might see a performance increase if the bottleneck is evaluating the function and that evaluation can be done faster with a numba-jitted version. I've found it best to just experiment with numba and measure with timeit.
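A minimal sketch of that strategy, adapted from the question's example (only the callback is jitted; it returns a NumPy array and uses indexing, which keeps it compatible with nopython mode):

import numpy as np
from numba import njit
from scipy import optimize

# jit only the callback; scipy still drives the root finding in plain Python
@njit
def func(x):
    out = np.empty(4)
    out[0] = x[0]*(1 - x[1])
    out[1] = x[1]*(1 - x[2])
    out[2] = x[2]
    out[3] = 1.0
    return out

sol = optimize.root(func, np.array([0.1, 0.1, 0.2, 0.4]), method="excitingmixing")
print(sol.x)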
I wrote a little wrapper around Minpack, called NumbaMinpack, which can be called within numba-compiled functions: https://github.com/Nicholaswogan/NumbaMinpack.
You should try the lmdif method if Newton's method is failing you.
from NumbaMinpack import lmdif, hybrd, minpack_sig
from numba import njit, cfunc
import numpy as np

@cfunc(minpack_sig)
def myfunc(x, fvec, args):
    fvec[0] = x[0]**2 - args[0]
    fvec[1] = x[1]**2 - args[1]

funcptr = myfunc.address  # pointer to myfunc
x_init = np.array([10.0, 10.0])  # initial conditions
neqs = 2  # number of equations
args = np.array([30.0, 8.0])  # data you want to pass to myfunc

@njit
def test():
    # solve with lmdif
    sol = lmdif(funcptr, x_init, neqs, args)
    # OR solve with hybrd
    sol = hybrd(funcptr, x_init, args)
    return sol

test()  # it works!

TypeError: deriv() takes 2 positional arguments but 4 were given

I'm getting the above error. I understand in principle what it means, but can't really see how it applies to my code:
# project starts here
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
from numpy import pi
from scipy.integrate import odeint

def deriv(cond, t):
    dydt = np.zeros(6*N)
    for q in range(0, N):
        i = 6*q
        dydt[i] = cond[i+3]
        dydt[i+1] = cond[i+4]
        dydt[i+2] = cond[i+5]
        r = np.sqrt((cond[i])**2 + (cond[i+1])**2 + (cond[i+2])**2)
        dydt[i+3] = -G*M*cond[i]/(r**3)
        dydt[i+4] = -G*M*cond[i+1]/(r**3)
        dydt[i+5] = -G*M*cond[i+2]/(r**3)
    return dydt

G = 1
M = 1
N = 12
vmag = ((G*M)/(2))**(0.5)
theta = np.linspace(0, 2*pi, N)
x = 2*np.cos(theta)
y = 2*np.sin(theta)
vx = -vmag*np.sin(theta)
vy = vmag*np.cos(theta)
z = np.zeros(N)
vz = np.zeros(N)
t = np.linspace(0, 30, 100)
cond = list(item for group in zip(x, y, z, vx, vy, vz) for item in group)
sln = odeint(deriv, cond, t, args=(G, M))
Any ideas where it is coming from? I feel like I have given the correct number of arguments.
You are sending 4 arguments to deriv. Per the odeint docs, your function deriv must take y and t as its first two arguments. When you call odeint(deriv, cond, t, ...), cond and t are automatically sent as the first two arguments to deriv, and everything in the args tuple is appended after them. All you need to do is define deriv as deriv(cond, t, G, M).
If you look at the documentation for odeint [1], you will see that the function you pass to odeint must be of the form func(y, t0, ...). So when you call odeint(deriv, cond, t, args=(G,M)), it actually calls your function as deriv(cond, t, G, M). But your function takes just 2 arguments.
[1] http://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.integrate.odeint.html
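A minimal sketch of the corrected function, assuming N stays a module-level global as in the question; G and M now arrive through the args tuple:

def deriv(cond, t, G, M):
    dydt = np.zeros(6*N)
    for q in range(N):
        i = 6*q
        dydt[i:i+3] = cond[i+3:i+6]  # position derivatives are the velocities
        r = np.sqrt(cond[i]**2 + cond[i+1]**2 + cond[i+2]**2)
        dydt[i+3:i+6] = -G*M*cond[i:i+3]/(r**3)  # gravitational acceleration
    return dydt

sln = odeint(deriv, cond, t, args=(G, M))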

Error: [only length-1 arrays can be converted to Python scalars] when changing variable order

Dear Stack Overflow community,
I am very new to Python and to programming in general, so please don't get mad if I don't get your answers and ask again.
I am trying to fit a curve to experimental data with scipy.optimize.curve_fit. This is my code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as nm
from __future__ import division
import cantera as ct
from matplotlib.backends.backend_pdf import PdfPages
import math as ma
import scipy.optimize as so

R = 8.314
T = nm.array([700, 900, 1100, 1300, 1400, 1500, 1600, 1700])
k = nm.array([289, 25695, 763059, 6358040, 14623536, 30098925, 56605969, 98832907])

def func(A, E, T):
    return A*ma.exp(-E/(R*T))

popt, pcov = so.curve_fit(func, T, k)
Now this code works for me, but if I change the function signature to:
def func(T, A, E)
and keep the rest, I get:
TypeError: only length-1 arrays can be converted to Python scalars
Also, I am not really convinced by the parameters found by the first version.
Can anyone tell me what happens when you change the variable order?
I had the same problem and found the cause and its solution:
The problem lies in the implementation of scipy. After the optimal parameters have been found, scipy calls your function with the input array xdata as the first argument, i.e. it calls func(xdata, *args), and the function complains with a TypeError because xdata is not a scalar. For example:

from math import erf
erf([1, 2])  # TypeError
erf(np.array([1, 2]))  # TypeError

To avoid the error, you can add custom code for supporting arrays, or better, as suggested in the answer of Joris, use numpy functions because they support both scalars and arrays.
If the math function is not in numpy, like erf or any custom function you coded, then instead of doing from math import erf, I recommend the following:

from math import erf as math_erf  # only supports scalars
import numpy as np
from scipy.optimize import curve_fit

erf = np.vectorize(math_erf)  # adds array support

def fit_func(t, s):
    return 0.5*(1.0 - erf(t/(np.sqrt(2)*s)))

X = np.linspace(-5, 5, 1000)
Y = np.array([fit_func(x, 1) for x in X])
curve_fit(fit_func, X, Y)
The curve_fit function from scipy does not handle functions from the math module well, since those only accept scalars. When you change the exponential to the numpy exponential function, you don't get the error:

def func(A, E, T):
    return A*np.exp(-E/(R*T))

I wonder whether your data shows an exponential decay of rate. The mathematical model may not be the most suitable one.
See the docstring of curve_fit:

f : callable
    The model function, f(x, ...). It must take the independent variable as the first argument and the parameters to fit as separate remaining arguments.

Since your formula is essentially k = A*ma.exp(-E/(R*T)), the right order of parameters in func should be (T, A, E) or (T, E, A).
Regarding the order of A and E, it doesn't really matter. If you flip them, the result gets flipped as well:

>>> def func(T, A, E):
...     return A*ma.exp(-E/(R*T))
>>> so.curve_fit(func, T, k)
(array([  8.21449078e+00,  -5.86499656e+04]),
 array([[  6.07720215e+09,   4.31864058e+12],
        [  4.31864058e+12,   3.07102992e+15]]))
>>> def func(T, E, A):
...     return A*ma.exp(-E/(R*T))
>>> so.curve_fit(func, T, k)
(array([ -5.86499656e+04,   8.21449078e+00]),
 array([[  3.07102992e+15,   4.31864058e+12],
        [  4.31864058e+12,   6.07720215e+09]]))

I didn't get your TypeError at all.

Too many arguments used by python scipy.optimize.curve_fit

I'm attempting to do some curve fitting within a class instance method, and the curve_fit function is giving my class instance method too many arguments.
The code is
class HeatData(hx.HX):
    """Class for handling data from heat exchanger experiments."""

then several lines of methods that work fine, and then my function:

def get_flow(pressure_drop, coeff):
    """Sets flow based on coefficient and pressure drop."""
    flow = coeff * pressure_drop**0.5
    return flow
and the curve_fit function call:

def set_flow_array(self):
    """Sets experimental flow rate through heat exchanger"""
    flow = self.flow_data.flow
    pressure_drop = self.flow_data.pressure_drop
    popt, pcov = spopt.curve_fit(self.get_flow, pressure_drop, flow)
    self.exh.flow_coeff = popt
    self.exh.flow_array = self.exh.flow_coeff * self.exh.pressure_drop**0.5
gives the error
get_flow() takes exactly 2 arguments (3 given)
I can make it work by defining get_flow outside of the class and calling it like this:
spopt.curve_fit(get_flow, pressure_drop, flow)
but that's no good because it really needs to be a method within the class to be as versatile as I want. How can I get this to work as a class instance method?
I'd also like to be able to pass self to get_flow to give it more parameters that are not fit parameters used by curve_fit. Is this possible?
Unlucky case, and maybe a bug in curve_fit. curve_fit uses inspect to determine the number of starting values, and that inspection gets confused or misled by the extra self.
So giving a starting value should avoid the problem, I thought. However, there is also an isscalar(p0) check in the condition; I have no idea why, and I think it would be good to report it as a problem or bug:

if p0 is None or isscalar(p0):
    # determine number of parameters by inspecting the function
    import inspect
    args, varargs, varkw, defaults = inspect.getargspec(f)

edit: avoiding a scalar as the starting value

>>> np.isscalar([2])
False

means that the example with only 1 parameter works if the starting value is defined as a list, e.g. similar to the example below:
mc.optimize([2])
An example with two arguments and a given starting value avoids the inspect call, and everything is fine:

import numpy as np
from scipy.optimize import curve_fit

class MyClass(object):
    def get_flow(self, pressure_drop, coeff, coeff2):
        """Sets flow based on coefficient and pressure drop."""
        flow = coeff * pressure_drop**0.5 + coeff2
        return flow

    def optimize(self, start_value=None):
        coeff = 1
        pressure_drop = np.arange(20.)
        flow = coeff * pressure_drop**0.5 + np.random.randn(20)
        return curve_fit(self.get_flow, pressure_drop, flow, p0=start_value)

mc = MyClass()
print mc.optimize([2, 1])

import inspect
args, varargs, varkw, defaults = inspect.getargspec(mc.get_flow)
print args, len(args)
EDIT: This bug has since been fixed, so bound methods can now be passed as the first argument to curve_fit, if you have a sufficiently new version of scipy.
Commit of the bug-fix submission on github
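With a scipy new enough to contain that fix, passing the bound method directly works; a minimal sketch with synthetic data (the class body and numbers here are illustrative):

import numpy as np
from scipy.optimize import curve_fit

class HeatData(object):
    def get_flow(self, pressure_drop, coeff):
        """Sets flow based on coefficient and pressure drop."""
        return coeff * pressure_drop**0.5

hd = HeatData()
pressure_drop = np.arange(1.0, 21.0)
flow = 3.0 * pressure_drop**0.5  # synthetic data generated with coeff = 3
popt, pcov = curve_fit(hd.get_flow, pressure_drop, flow)  # bound method as f
print(popt)  # approximately [3.]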
If you define get_flow inside your HeatData class, you'll have to have self as the first parameter: def get_flow(self, pressure_drop, coeff):
EDIT: after looking up the definition of curve_fit, I found that the prototype is curve_fit(f, xdata, ydata, p0=None, sigma=None, **kw), so the first arg must be a callable that will be called with the independent variable as its first argument.
Try a closure:
def set_flow_array(self):
    """Sets experimental flow rate through heat exchanger"""
    flow = self.flow_data.flow
    pressure_drop = self.flow_data.pressure_drop

    def get_flow(pressure_drop, coeff):
        """Sets flow based on coefficient and pressure drop."""
        # here you can use self.what_you_need
        # you can even call a self.get_flow(pressure_drop, coeff) method :)
        flow = coeff * pressure_drop**0.5
        return flow

    popt, pcov = spopt.curve_fit(get_flow, pressure_drop, flow)
    self.exh.flow_coeff = popt
    self.exh.flow_array = self.exh.flow_coeff * self.exh.pressure_drop**0.5
Try dropping the "self" and making the call: spopt.curve_fit(get_flow, pressure_drop, flow)
The first argument of an instance method definition should always be self. That gets passed automatically and refers to the class instance, so the method always receives one more argument than you pass when calling it.
The most Pythonic way to deal with this is to let Python know get_flow is a staticmethod: a function that you put in the class because conceptually it belongs there, but that doesn't need access to the instance and therefore doesn't take self.

@staticmethod
def get_flow(pressure_drop, coeff):
    """Sets flow based on coefficient and pressure drop."""
    flow = coeff * pressure_drop**0.5
    return flow

staticmethods can be recognized by the fact that self is not used in the function.
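As for the follow-up question about still giving get_flow access to extra non-fit parameters on self: one hypothetical workaround is to wrap the bound method in a lambda, so curve_fit only ever inspects a plain two-argument callable (self.what_you_need below is an illustrative attribute, not from the original code):

def set_flow_array(self):
    flow = self.flow_data.flow
    pressure_drop = self.flow_data.pressure_drop
    # the lambda closes over self, so get_flow may read e.g. self.what_you_need,
    # while curve_fit only sees the (xdata, coeff) wrapper signature
    fit_func = lambda pd, coeff: self.get_flow(pd, coeff)
    popt, pcov = spopt.curve_fit(fit_func, pressure_drop, flow)
    self.exh.flow_coeff = popt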
