Fitting hyperbolic and harmonic functions with curve_fit in Python

I have a problem working with the curve_fit function.
Here I have code with two functions to work with.
The first is a hyperbolic function.
The second is the same but with one parameter fixed at 1.
My problem is that fitting the first function with curve_fit works fine, but fitting the second doesn't.
I have a commercial program that generates correct solutions for both, so it is possible to find a solution for the second function (a particular case of the first one, as mentioned above).
Could someone give me an idea about what I am doing wrong?
Thanks!
Here is the code to run:
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt

def hypRegress(ptp, pir):
    xData = np.array(np.arange(len(ptp)), dtype=float)
    yData = np.array(pir, dtype=float)

    def funcHyp(x, qi, exp, di):
        return qi*(1 + exp*di*x)**(-1/exp)

    def errfuncHyp(p):
        return funcHyp(xData, p[0], p[1], p[2]) - yData

    trialX = np.linspace(xData[0], xData[-1], 1000)

    # Fit a hyperbolic
    popt, pcov = optimize.curve_fit(funcHyp, xData, yData)
    print('popt')
    print(popt)
    yHYP = funcHyp(trialX, *popt)

    # optimization, using the curve_fit result as initial values
    p1, success = optimize.leastsq(errfuncHyp, popt, maxfev=10000)
    print(p1)
    aaaa = funcHyp(trialX, *p1)

    plt.figure()
    plt.plot(xData, yData, 'r+', label='Data', marker='o')
    plt.plot(trialX, yHYP, 'r-', ls='--', label='Hyp Fit')
    plt.plot(trialX, aaaa, 'y', label='Optimized')
    plt.legend()
    plt.show(block=False)
    return p1

def harRegress(ptp, pir):
    xData = np.array(np.arange(len(ptp)), dtype=float)
    yData = np.array(pir, dtype=float)

    def funcHar(x, qi, di):
        return qi*(1 + di*x)**(-1)

    def errfuncHar(p):
        return funcHar(xData, p[0], p[1]) - yData

    trialX = np.linspace(xData[0], xData[-1], 1000)

    # Fit a harmonic
    popt, pcov = optimize.curve_fit(funcHar, xData, yData)
    print('popt')
    print(popt)
    yHAR = funcHar(trialX, *popt)

    # optimization, using the curve_fit result as initial values
    p1, success = optimize.leastsq(errfuncHar, popt, maxfev=1000)
    print(p1)
    aaaa = funcHar(trialX, *p1)

    plt.figure()
    plt.plot(xData, yData, 'r+', label='Data', marker='o')
    plt.plot(trialX, yHAR, 'r-', ls='--', label='Har Fit')
    plt.plot(trialX, aaaa, 'y', label='Optimized')
    plt.legend()
    plt.show(block=False)
    return p1

ptp = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
pir = [150, 85, 90, 50, 45, 60, 60, 40, 40, 30, 28, 30, 38, 30, 26]

hypRegress(ptp, pir)
harRegress(ptp, pir)
input('pause')

It's a classic problem. The curve_fit algorithm starts from an initial guess for the parameters to be optimized, which, if not supplied, is simply all ones.
That means, when you call
popt, pcov = optimize.curve_fit(funcHar, xData, yData)
the first attempt for the fitting routine will be to assume
funcHar(xData, qi=1, di=1)
If you haven't specified any of the other options, the fit will be poor, as evidenced by the large variances of the parameter estimates (check the diagonal of pcov and compare it to the actual values returned in popt).
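For instance, a quick sketch of that check (assuming popt and pcov from the curve_fit call above):

import numpy as np

# one-standard-deviation uncertainties of the fitted parameters;
# values that dwarf the entries of popt signal a poor fit
perr = np.sqrt(np.diag(pcov))
print(popt)
print(perr)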
In many cases, the situation is solved by supplying an intelligent guess. From your HAR model, I gather that the values around x == 0 are about the size of qi. So you could supply an initial guess of p0 = (pir[0], 1), which already leads to a satisfying solution. You could also call it with
popt, pcov = optimize.curve_fit(funcHar, ptp, pir, p0=(0, 1))
which leads to the same result. So the problem is just that the algorithm finds a local minimum.
An alternative would've been to supply a different factor, the "parameter determining the initial step bound":
popt, pcov = optimize.curve_fit(funcHar, ptp, pir, p0=(1, 1), factor=1)
In this case, even with the (default) initial guess of p0=(1,1), it gives the same resulting fit.
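Putting this together, a minimal sketch of the harmonic fit with an explicit initial guess (using the ptp and pir lists from the question):

import numpy as np
from scipy import optimize

def funcHar(x, qi, di):
    return qi*(1 + di*x)**(-1)

ptp = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
pir = [150, 85, 90, 50, 45, 60, 60, 40, 40, 30, 28, 30, 38, 30, 26]

xData = np.arange(len(ptp), dtype=float)
yData = np.array(pir, dtype=float)

# start qi near the first data value and di at 1
popt, pcov = optimize.curve_fit(funcHar, xData, yData, p0=(pir[0], 1))
print(popt)
print(np.sqrt(np.diag(pcov)))  # parameter uncertainties should now be modest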
Remember: fitting is an art, not a science. Often, by analyzing the model you want to fit, you can already come up with a good initial guess.
I can't speak for the algorithm used in the commercial program. If it is open-source (unlikely), you could have a look to see what they do.

Related

How to fit a model of Gaussian rise and exponential decay to data (lightcurves) in Python?

I am trying to fit a model (Gaussian rise before the peak, exponential decay after the peak; see image) to my lightcurve data.
How do I code this? Below is my initial attempt at fitting a single Gaussian,
and here is the curve (blue) I'm trying to fit.
How do I restrict the Gaussian function to before the peak and the exponential/power law from the peak to the end?
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt

# Function to calculate the exponential decay with constants a and b
def exponential(x, a, b):
    return a*np.exp(-b*x)

# Function to calculate the power-law decay with constants a and b
def power_law(x, a, b):
    return a*np.power(-x, b)

# Function to calculate the Gaussian rise with constants a, b, and c
def gaussian(x, a, b, c):
    return a*np.exp(-np.power(x - b, 2)/(2*np.power(c, 2)))

# x and y data points (t_slice is the lightcurve table defined elsewhere)
xData = t_slice['time']
yData = t_slice['flux']/10**38  # normalizing

# Plot data points
plt.plot(xData, yData, 'b.', label='SED')

# Fit Gaussian
pars, cov = curve_fit(gaussian, xdata=xData, ydata=yData, p0=[0, 0, 0], bounds=(-np.inf, np.inf))

# Get the standard deviations of the parameters
# (square roots of the diagonal of the covariance)
stdevs = np.sqrt(np.diag(cov))

print("parameters:", pars)
print("std dev:", stdevs)

# Plot the fit as an overlay on the scatter data
plt.scatter(xData, gaussian(xData, *pars), linestyle='--', linewidth=2, color='black')
plt.ylim(0, 1000)
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
I am guessing a bit, but what you show in the picture cannot be generated by the piecewise formula in the other picture. For t > t_peak the curve apparently shows a stretched exponential decay, not a simple exponential. Combining a Gaussian peak with an exponential decay would give a kink at t = t_peak.
Defining a piecewise function is easy, e.g.:
def piecewise(x, a, b, c, tau, beta):
    lower = a*np.exp(-np.power(x - b, 2)/(2*np.power(c, 2))) * (x < b)
    upper = a*np.exp(-np.power((x - b)/tau, beta)) * (x >= b)
    return lower + upper
(The key is to multiply with x < b and x >= b.)
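A minimal usage sketch with synthetic data standing in for the lightcurve (all values here are made up for illustration; the np.abs guard inside the decay term is an addition that keeps the masked-out branch finite for non-integer beta):

import numpy as np
from scipy.optimize import curve_fit

def piecewise(x, a, b, c, tau, beta):
    # Gaussian rise before the peak (x < b), stretched exponential decay after it.
    # np.abs keeps the masked-out branch finite; the (x >= b) mask then zeroes it,
    # so the fitted function is unchanged where it matters.
    lower = a*np.exp(-np.power(x - b, 2)/(2*np.power(c, 2))) * (x < b)
    upper = a*np.exp(-np.power(np.abs((x - b)/tau), beta)) * (x >= b)
    return lower + upper

# synthetic lightcurve: peak at t = 50, stretched-exponential decay afterwards
rng = np.random.default_rng(0)
t = np.linspace(0.0, 300.0, 300)
flux = piecewise(t, 800.0, 50.0, 10.0, 40.0, 0.7) + rng.normal(0.0, 5.0, t.size)

# starting values: peak height, peak position, and rough widths
p0 = (flux.max(), t[np.argmax(flux)], 10.0, 30.0, 1.0)
pars, cov = curve_fit(piecewise, t, flux, p0=p0)
print(pars)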

scipy.optimize.curve_fit() failed to fit an exponential function

I'm trying to fit an exponential function using scipy.optimize.curve_fit() (example data and code below), but it always fails with: RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 5000. I'm not sure where I'm going wrong.
import numpy as np
from scipy.optimize import curve_fit
x = np.arange(-1, 1, .01)
param1 = [-1, 2, 10, 100]
fit_func = lambda x, a, b, c, d: a * np.exp(b * x + c) + d
y = fit_func(x, *param1)
popt, _ = curve_fit(fit_func, x, y, maxfev=5000)
This is almost certainly due to the initial guess for the parameters.
You don't pass an initial guess to curve_fit, which means it defaults to a value of 1 for every parameter. Unfortunately, this is a terrible guess in your case. The function of interest is an exponential, one property of which is that the derivative is also an exponential. So all derivatives (first-order, second-order, etc.) will be not just wrong, but have the wrong sign. This means the optimizer will have a very difficult time making progress.
You can solve this by giving the optimizer just a smidge of help. Since you know all your data is negative, you could just pass -1 as an initial guess for the first parameter (the scale or amplitude of the function). This alone is enough for the optimizer to arrive at a reasonable result.
import matplotlib.pyplot as plt

p0 = (-1, 1, 1, 1)
popt, _ = curve_fit(fit_func, x, y, p0=p0, maxfev=5000)

fig, ax = plt.subplots()
ax.plot(x, y, label="Data", color="k")
ax.plot(x, fit_func(x, *popt), color="r", linewidth=3.0, linestyle=":", label="Fitted")
fig.tight_layout()
You should see something like this:

Curve fitting with conditional equation

Problem
I have created a curve fitting exercise (see functional code below), but I would like to add to the functionality.
I need to be able to define the following condition: slope at min(xdata) = 0.
(in words: I want the fitted curve to start out with horizontal gradient)
What I have tried
I have spent quite a bit of time researching scipy.optimize.curve_fit and evaluating other options (the lmfit package, and the scipy functions scipy.optimize.fmin_slsqp, scipy.optimize.minimize, etc.). lmfit only allows me to set a static condition on the parameters, such as p1 = 2 * p2 + 3. It does not allow me to address min(xdata) dynamically, and I cannot make use of the derivative in the constraint.
Scipy only allows me to minimize a function (find an optimal x when the parameters p are already known), or to define a specific range for the parameters. I was not able to define a second function that constrains the parameters during the curve fitting.
I need to be able to pass the condition directly to the curve fitting algorithm, rather than working it into the cubic_fit() equation (it seems possible to eliminate e.g. p3 and define it as a combination of the other parameters and min(xdata)). My actual fitting function is much more complex, and I need to run this script iteratively on a batch of data with varying min(xdata), so I cannot manually alter the fitting function each time.
I am grateful for any suggestions, maybe there are other packages out there that allow for a more complex definition of the curve fitting problem?
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import scipy.optimize
# generate dummy data - on which I will run a curve fit below
def cubic_fit_with_noise(x, p1, p2, p3, p4):
    return p1 + p2*x + p3*x**2 + p4*x**3 + np.random.rand()

xdata = [x * 0.1 for x in range(0, 100)]
ydata = np.array([cubic_fit_with_noise(x, 2, 0.4, -.2, 0.02) for x in xdata])

# now, run the curve-fit
# set up the fitting function:
def cubic_fit(x, p1, p2, p3, p4):
    return p1 + p2*x + p3*x**2 + p4*x**3
# define starting point:
s1 = 2.5
s2 = 0.2
s3 = -.2
s4 = 0.02
# scipy curve fitting:
popt, pcov = scipy.optimize.curve_fit(cubic_fit, xdata, ydata, p0=(s1,s2,s3,s4))
y_modelled = np.array([cubic_fit(x, popt[0], popt[1], popt[2], popt[3]) for x in xdata])
print(popt) # prints out the 4 parameters p1,p2,p3,p4 defined in curve-fitting
plt.plot(xdata, ydata, 'bo')
plt.plot(xdata, y_modelled, 'r-')
plt.show()
The above code runs with Python3 (fix the print statement if you have Python2).
As an addition, I want to bring in the derivative:
def cubic_fit_derivative(x, p1, p2, p3, p4):
    return p2 + 2.0 * p3 * x + 3 * p4 * x**2
and the constraint that cubic_fit_derivative(min(xdata), p1,p2,p3,p4) = 0.
Your condition that the derivative of your polynomial = 0 at xmin can be expressed as a simple constraint and means that the variables p2, p3, and p4 are not actually independent. The derivative condition is
p2 + 2*p3*xmin + 3*p4*xmin**2 = 0
where xmin is the minimum value of xdata. Since xmin will be known prior to the fit (if not necessarily when your script is written), you can use this to constrain one of the three parameters. Because xmin may be zero (in fact, it is in your case), the constraint should be that
p2 = - 2*p3*xmin - 3*p4*xmin**2
Using lmfit, the original, unconstrained fit would look like this (I cleaned it up a bit):
import numpy as np
from lmfit import Model
import matplotlib.pylab as plt
# the model function:
def cubic_poly(x, p1, p2, p3, p4):
    return p1 + p2*x + p3*x**2 + p4*x**3
xdata = np.arange(100) * 0.1
ydata = cubic_poly(xdata, 2, 0.4, -.2, 0.02)
ydata = ydata + np.random.normal(size=len(xdata), scale=0.05)
# make Model, create parameters, run fit, print results
model = Model(cubic_poly)
params = model.make_params(p1=2.5, p2=0.2, p3=-0.0, p4=0.0)
result = model.fit(ydata, params, x=xdata)
print(result.fit_report())
plt.plot(xdata, ydata, 'bo')
plt.plot(xdata, result.best_fit, 'r-')
plt.show()
which prints:
[[Model]]
Model(cubic_poly)
[[Fit Statistics]]
# function evals = 13
# data points = 100
# variables = 4
chi-square = 0.218
reduced chi-square = 0.002
Akaike info crit = -604.767
Bayesian info crit = -594.347
[[Variables]]
p1: 2.00924432 +/- 0.018375 (0.91%) (init= 2.5)
p2: 0.39427207 +/- 0.016155 (4.10%) (init= 0.2)
p3: -0.19902928 +/- 0.003802 (1.91%) (init=-0)
p4: 0.01993319 +/- 0.000252 (1.27%) (init= 0)
[[Correlations]] (unreported correlations are < 0.100)
C(p3, p4) = -0.986
C(p2, p3) = -0.967
C(p2, p4) = 0.914
C(p1, p2) = -0.857
C(p1, p3) = 0.732
C(p1, p4) = -0.646
and produces a plot of the data and best fit (image not shown here).
Now, to add your constraint condition, we add xmin as a fixed parameter and constrain p2 as above. Replace the parameter setup above with:
params = model.make_params(p1=2.5, p2=0.2, p3=-0.0, p4=0.0)
# add an extra parameter for `xmin`
params.add('xmin', min(xdata), vary=False)
# constrain p2 so that the derivative is 0 at xmin
params['p2'].expr = '-2*p3*xmin - 3*p4*xmin**2'
result = model.fit(ydata, params, x=xdata)
print(result.fit_report())
plt.plot(xdata, ydata, 'bo')
plt.plot(xdata, result.best_fit, 'r-')
plt.show()
which now prints
[[Model]]
Model(cubic_poly)
[[Fit Statistics]]
# function evals = 10
# data points = 100
# variables = 3
chi-square = 1.329
reduced chi-square = 0.014
Akaike info crit = -426.056
Bayesian info crit = -418.241
[[Variables]]
p1: 2.39001759 +/- 0.023239 (0.97%) (init= 2.5)
p2: 0 +/- 0 (nan%) == '-2*p3*xmin - 3*p4*xmin**2'
p3: -0.10858258 +/- 0.002372 (2.19%) (init=-0)
p4: 0.01424411 +/- 0.000251 (1.76%) (init= 0)
xmin: 0 (fixed)
[[Correlations]] (unreported correlations are < 0.100)
C(p3, p4) = -0.986
C(p1, p3) = -0.742
C(p1, p4) = 0.658
and a plot of the constrained fit (image not shown here).
If xmin had not been zero (say, xdata = np.linspace(-10, 10, 101)), the value and uncertainty of p2 would not be zero.
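A quick sketch of that case, reusing the model and constraint from above (only the changed lines are shown):

xdata = np.linspace(-10, 10, 101)
ydata = cubic_poly(xdata, 2, 0.4, -.2, 0.02)
ydata = ydata + np.random.normal(size=len(xdata), scale=0.05)

params = model.make_params(p1=2.5, p2=0.2, p3=-0.0, p4=0.0)
params.add('xmin', min(xdata), vary=False)       # xmin is now -10, not 0
params['p2'].expr = '-2*p3*xmin - 3*p4*xmin**2'

result = model.fit(ydata, params, x=xdata)
print(result.fit_report())   # p2 now comes out with a nonzero value and uncertainty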
As mentioned in my comment, you just have to fit the right function. I forgot the constant, though. So the function would be a*(x-xmin)**2*(x-xn)+c
As curve_fit does not take additional parameters the way e.g. leastsq does, the only trick is to pass xmin. I do that via a global variable (maybe not the nicest way, but it works; comments on how to do it better are welcome).
Eventually, you just need to add the following lines to your code:
def cubic_zero(x, a, xn, const):
    global xmin
    return a*(x - xmin)**2*(x - xn) + const
and
xmin=xdata[0]
popt2, pcov2 = scipy.optimize.curve_fit(cubic_zero, xdata, ydata)
y_modelled2 = np.array([cubic_zero(x, *popt2) for x in xdata])
print(popt2)
plt.plot(xdata, y_modelled2, color='#ee9900',linestyle="--")
providing
>>>[ 0.01429367 7.63190327 2.92604132]
and the corresponding plot (image not shown here).
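As a side note, one way to avoid the global variable would be to capture xmin in a closure (just a sketch, using the xdata and ydata from the question):

def make_cubic_zero(xmin):
    # returns a model function with xmin baked in, so no global is needed
    def cubic_zero(x, a, xn, const):
        return a*(x - xmin)**2*(x - xn) + const
    return cubic_zero

popt2, pcov2 = scipy.optimize.curve_fit(make_cubic_zero(xdata[0]), xdata, ydata)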
This solution uses scipy.optimize.leastsq. With a self-made residuals function, there is actually no need to pass xmin as an additional parameter to the fit. The fit function is the same as in the other post and therefore needs no constraints. It looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq

def cubic_fit_with_noise(x, p1, p2, p3, p4):
    return p1 + p2*x + p3*x**2 + p4*x**3 + .2*(1 - 2*np.random.rand())

def cubic_zero(x, a, xn, const, xmin):
    return a*(x - xmin)**2*(x - xn) + const

def residuals(params, dataX, dataY):
    a, xn, const = params
    xmin = dataX[0]
    dist = np.fromiter((y - cubic_zero(x, a, xn, const, xmin) for x, y in zip(dataX, dataY)), float)
    return dist

xdata = np.linspace(.5, 10.5, 100)
ydata = np.fromiter((cubic_fit_with_noise(x, 2, 0.4, -.2, 0.02) for x in xdata), float)

# scipy curve fitting with leastsq:
initialGuess = [.3, .3, .3]
popt2, pcov2, info2, msg2, ier2 = leastsq(residuals, initialGuess, args=(xdata, ydata), full_output=True)

fullparams = np.append(popt2, xdata[0])
y_modelled2 = np.array([cubic_zero(x, *fullparams) for x in xdata])

print(popt2)
print(pcov2)
# expand the constrained form back into the p1..p4 polynomial coefficients
print(np.array([-popt2[0]*xdata[0]**2*popt2[1] + popt2[2],
                popt2[0]*(xdata[0]**2 + 2*xdata[0]*popt2[1]),
                -popt2[0]*(2*xdata[0] + popt2[1]),
                popt2[0]]))

plt.plot(xdata, ydata, 'bo')
plt.plot(xdata, y_modelled2, 'r-')
plt.show()
and provides:
>>>[ 0.01710749 7.69369653 2.38986378]
>>>[[ 4.33308441e-06 5.61402017e-04 2.71819763e-04]
[ 5.61402017e-04 1.10367937e-01 5.67852980e-02]
[ 2.71819763e-04 5.67852980e-02 3.94127702e-02]]
>>>[ 2.35695882 0.13589672 -0.14872733 0.01710749]
Image upload does not work at the moment, for whatever reason, but the result is the same as in the other post.

SciPy Curve Fit Fails Power Law

So, I'm trying to fit a set of data with a power law of the following kind:
def f(x, N, a):  # Power law fit
    if a > 0:
        return N*x**(-a)
    else:
        return 10.**300
par,cov = scipy.optimize.curve_fit(f,data,time,array([10**(-7),1.2]))
where the else condition is just to force a to be positive. Using scipy.optimize.curve_fit yields an awful fit (green line), returning values of 1.2e+04 and 1.9e-07 for N and a, respectively, with absolutely no intersection with the data. From fits I've done manually, the values should land around 1e-07 and 1.2 for N and a, respectively, though putting those into curve_fit as initial parameters doesn't change the result. Removing the condition for a to be positive results in a worse fit, as it chooses a negative a, which leads to a fit with the wrong-sign slope.
I can't figure out how to get a believable, let alone reliable, fit out of this routine, and I can't find any other good Python curve fitting routines. Do I need to write my own least-squares algorithm, or is there something I'm doing wrong here?
UPDATE
In the original post, I showed a solution that uses lmfit, which allows you to assign bounds to your parameters. Starting with version 0.17, scipy also lets you assign bounds to your parameters directly (see the documentation). You can find this solution below, after the EDIT; it can hopefully serve as a minimal example of how to use scipy's curve_fit with parameter bounds.
Original post
As suggested by @Warren Weckesser, you could use lmfit to get this task done, which allows you to assign bounds to your parameters and avoids the 'ugly' if-clause.
Since you do not provide any data, I created some which are shown here:
They follow the law f(x) = 10.5 * x ** (-0.08)
I fit them - as suggested by @roadrunner66 - by transforming the power law into a linear function:
y = N * x ** a
ln(y) = ln(N * x ** a)
ln(y) = a * ln(x) + ln(N)
So I first use np.log on the original data and then do the fit. When I now use lmfit, I get the following output:
[[Variables]]
lN: 2.35450302 +/- 0.019531 (0.83%) (init= 1.704748)
a: -0.08035342 +/- 0.005158 (6.42%) (init=-0.5)
So a is pretty close to the original value and np.exp(2.35450302) gives 10.53 which is also very close to the original value.
The plot then looks as follows; as you can see the fit describes the data very well:
Here is the entire code with a couple of inline comments:
import numpy as np
import matplotlib.pyplot as plt
from lmfit import minimize, Parameters, Parameter, report_fit
# generate some data with noise
xData = np.linspace(0.01, 100., 50)
aOrg = 0.08
Norg = 10.5
yData = Norg * xData ** (-aOrg) + np.random.normal(0, 0.5, len(xData))
plt.plot(xData, yData, 'bo')
plt.show()
# transform data so that we can use a linear fit
lx = np.log(xData)
ly = np.log(yData)
plt.plot(lx, ly, 'bo')
plt.show()
def decay(params, x, data):
    lN = params['lN'].value
    a = params['a'].value
    # our linear model
    model = a * x + lN
    return model - data  # that's what you want to minimize
# create a set of Parameters
params = Parameters()
params.add('lN', value=np.log(5.5), min=0.01, max=100) # value is the initial value
params.add('a', value=-0.5, min=-1, max=-0.001) # min, max define parameter bounds
# do fit, here with leastsq model
result = minimize(decay, params, args=(lx, ly))
# write error report
report_fit(params)
# plot data
xnew = np.linspace(0., 100., 5000)
# plot the data
plt.plot(xData, yData, 'bo')
plt.plot(xnew, np.exp(result.values['lN']) * xnew ** (result.values['a']), 'r')
plt.show()
EDIT
Assuming that you have scipy 0.17 installed, you can also do the following using curve_fit. I show it for your original definition of the power law (red line in the plot below) as well as for the logarithmic data (black line in the plot below). The data is generated in the same way as above. The plot then looks as follows:
As you can see, the data is described very well. If you print popt and popt_log, you obtain array([ 10.47463426, 0.07914812]) and array([ 2.35158653, -0.08045776]), respectively (note: for the latter one you will have to take the exponential of the first argument - np.exp(popt_log[0]) = 10.502, which is close to the original value).
Here is the entire code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# generate some data with noise
xData = np.linspace(0.01, 100., 50)
aOrg = 0.08
Norg = 10.5
yData = Norg * xData ** (-aOrg) + np.random.normal(0, 0.5, len(xData))
# get logarithmic data
lx = np.log(xData)
ly = np.log(yData)
def f(x, N, a):
    return N * x ** (-a)

def f_log(x, lN, a):
    return a * x + lN
# optimize using the appropriate bounds
popt, pcov = curve_fit(f, xData, yData, bounds=(0, [30., 20.]))
popt_log, pcov_log = curve_fit(f_log, lx, ly, bounds=([0, -10], [30., 20.]))
xnew = np.linspace(0.01, 100., 5000)
# plot the data
plt.plot(xData, yData, 'bo')
plt.plot(xnew, f(xnew, *popt), 'r')
plt.plot(xnew, f(xnew, np.exp(popt_log[0]), -popt_log[1]), 'k')
plt.show()

Passing a variable from an array to scipy.integrate.quad() in Python

I'm using Python to fit a function to my dataset. My code worked and fitted the function with curve_fit before I added the integral scipy.integrate.quad() to the definition of the function. I looked into why it gives the error "Supplied function does not return a valid float.", and it turns out that the code works fine if I don't pass the variable from the dataset over which I'm fitting my curve. If I pass an arbitrary value like 5. to quad instead of Xi, it works perfectly again. Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
from scipy.optimize import curve_fit

calka = lambda z, vz, t: np.exp(-1.*0.001 - (z*0.001 + vz*t*2.48138957816e-05))**2*1./((20.)**2)*np.exp(-(1.*1.42060911e-05 - z*1.42060911e-05)**2*1./((20.)**2))

z_lin = np.linspace(0, zmax, npoints//2)   # zmax and npoints are defined elsewhere

def func(Xi, vx, vz):
    z = 0.0
    f = 0.
    t = Xi
    f = integrate.quad(calka, z_lin[0], z_lin[3998], args=(vz, Xi))[0]  # works fine with an arbitrarily set value instead of Xi
    q = (1.7*np.pi*4./1.569)**2*0.000024813
    v = (0.000024813)**2/((13.)**2)
    p = 2.*2.*np.pi/1.569*0.000024813
    return np.exp(-vx*v*Xi)*np.exp(-(Xi*q*1.3))*np.cos(p*Xi*vz)*f

xdata = np.linspace(0, 1000, 1000)
print(len(xdata))
ydata = autokowariancja[0, :]   # autokowariancja is defined elsewhere

popt, pcov = curve_fit(func, xdata, ydata)
print(popt, pcov)

plt.figure(figsize=(8, 5))
plt.plot(func(xdata, popt[0], popt[1]), 'b')
plt.plot(autokowariancja[0, :], 'r')
plt.legend(["NoWin", "Win"])
plt.show()
I solved the problem. Xi is an array and integrate.quad() takes a float only, so I split the Xi array by enumerating it: I created an array the size of Xi, calculated the integral for every element of Xi separately, and stored the results in that array:
def func(Xi, vx, vz):
    z = 0.0
    f = 0.
    t = Xi
    integral = np.zeros(len(Xi))   # one integral value per element of Xi
    for i, item in enumerate(Xi):
        integral[i] = integrate.quad(calka, z_lin[0], z_lin[3998], args=(vz, Xi[i]))[0]
    q = (1.7*np.pi*4./1.569)**2*0.000024813
    v = (0.000024813)**2/((13.)**2)
    p = 2.*2.*np.pi/1.569*0.000024813
    return np.exp(-vx*v*Xi)*np.exp(-(Xi*q*1.3))*np.cos(p*Xi*vz)*integral
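Equivalently (a small variation, not part of the original solution), the per-element integrals can be collected with a list comprehension instead of the explicit loop:

def func(Xi, vx, vz):
    # one quad() call per element of Xi
    integral = np.array([integrate.quad(calka, z_lin[0], z_lin[3998], args=(vz, xi))[0]
                         for xi in Xi])
    q = (1.7*np.pi*4./1.569)**2*0.000024813
    v = (0.000024813)**2/((13.)**2)
    p = 2.*2.*np.pi/1.569*0.000024813
    return np.exp(-vx*v*Xi)*np.exp(-(Xi*q*1.3))*np.cos(p*Xi*vz)*integral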
