I'm using scipy.integrate.ode and would like to know what happens internally when I get the message UserWarning: zvode: Excess work done on this call. (Perhaps wrong MF.) 'Unexpected istate=%s' % istate))
This appears when I call ode.integrate(t1) with too large a t1, so I'm forced to use a for loop and integrate my equation incrementally, which lowers the speed since the solver cannot use its adaptive step size effectively. I have already tried different methods and settings for the integrator. The maximum number of steps nsteps=100000 is already very large, but with this setting I still can't integrate up to 1000 in one call, which is what I would like to do.
The code I use is:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
h_bar=0.658212 #reduced Planck's constant (meV*ps)
m0=0.00568563 #free electron mass (meV*ps**2/nm**2)
m_e=0.067*m0 #effective electron mass (meV*ps**2/nm**2)
m_h=0.45*m0 #effective hole mass (meV*ps**2/nm**2)
m_reduced=1/((1/m_e)+(1/m_h)) #reduced mass of electron and holes combined
kB=0.08617 #Boltzmann's constant (meV/K)
mu_e=-50 #initial chemical potential for electrons
mu_h=-100 #initial chemical potential for holes
k_array=np.arange(0,1.5,0.02) #a list of different k-values
n_k=len(k_array) #number of k-values
def derivative(t,y_list,Gamma,g,kappa,k_list,n_k):
    #initialize output vector
    y_out=np.zeros(3*n_k+1,dtype=complex)
    y_out[0:n_k]=-g*g*2*np.real(y_list[2*n_k:3*n_k])/h_bar
    y_out[n_k:2*n_k]=-g*g*2*np.real(y_list[2*n_k:3*n_k])/h_bar
    y_out[2*n_k:3*n_k]=((-1.j*(k_list**2/(2*m_reduced))-(Gamma+kappa))*y_list[2*n_k:3*n_k]-y_list[-1]*(1-y_list[n_k:2*n_k]-y_list[0:n_k])+y_list[0:n_k]*y_list[n_k:2*n_k])/h_bar
    y_out[-1]=(2*np.real(g*g*sum(y_list[2*n_k:3*n_k]))-2*kappa*y_list[-1])/h_bar
    return y_out
def dynamics(t_list,N_ini=1e-3, T=300, Gamma=1.36,kappa=0.02,g=0.095):
    #initial values
    t0=0 #initial time
    y_initial=np.zeros(3*n_k+1,dtype=complex)
    y_initial[0:n_k]=1/(1+np.exp(((h_bar*k_array)**2/(2*m_e)-mu_e)/(kB*T))) #Fermi-Dirac distributions
    y_initial[n_k:2*n_k]=1/(1+np.exp(((h_bar*k_array)**2/(2*m_h)-mu_h)/(kB*T)))
    t_list=t_list[1:] #remove t=0 from list (not feasible for the integrator)
    r=ode(derivative).set_integrator('zvode',method='adams', atol=10**-6, rtol=10**-6,nsteps=100000) #define ode solver
    r.set_initial_value(y_initial,t0)
    r.set_f_params(Gamma,g,kappa,k_array,n_k)
    #create array for output (the +1 accounts for the values at t0=0)
    y_output=np.zeros((len(t_list)+1,len(y_initial)),dtype=complex)
    #insert initial data in output array
    y_output[0]=y_initial
    #perform integration for the time steps given by t_list (the i+1 index accounts for the initial values already in the array)
    for i in range(len(t_list)):
        print('t = %s' % t_list[i])
        r.integrate(t_list[i])
        if not r.successful():
            print('Integration not successful!!')
            break
        y_output[i+1]=r.y
    return y_output
t_list=np.arange(0,100,5)
data=dynamics(t_list,N_ini=1e-3, T=300, Gamma=1.36,kappa=0.02,g=1.095)
The message means that the method reached the number of steps specified by the nsteps parameter. Since you asked about internals, I looked into the Fortran source, which offers this explanation:
-1 means an excessive amount of work (more than MXSTEP steps) was done on this call, before completing the requested task, but the integration was otherwise successful as far as T. (MXSTEP is an optional input and is normally 500.)
The conditional statement that brings up the error is this "GO TO 500".
According to LutzL, for your ODE the solver chooses a step size of about 2e-4, which means about 5,000,000 steps to integrate up to 1000. Your options are (a rough sketch of the first two follows below):
try such a large value of nsteps (which translates to MXSTEP in the aforementioned Fortran routine)
relax the error tolerances (larger atol / rtol)
run a for loop, as you already do.
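As a rough sketch of the first two options applied to the code above (the nsteps value and the tolerances here are illustrative guesses, not tuned for this problem):
# inside dynamics(): allow far more internal steps and/or relax the tolerances,
# so that a single integrate() call can cover the whole interval
r = ode(derivative).set_integrator('zvode', method='adams',
                                   atol=1e-5, rtol=1e-5,  # looser than the original 1e-6
                                   nsteps=5000000)        # roughly the step count estimated above
r.set_initial_value(y_initial, t0)
r.set_f_params(Gamma, g, kappa, k_array, n_k)
r.integrate(1000)  # single call; check r.successful() afterwards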
Related
I have a system of ODEs where my state variables and independent variable span many orders of magnitude (initial values are around 0 at t=0 and are expected to grow to about 10¹⁰ by t=10¹⁷). I also want to ensure that my state variables remain positive.
According to this Stack Overflow post, one way to enforce positivity is to log-transform the ODEs, i.e. to solve for the evolution of the logarithm of a variable instead of the variable itself. However, when I try this with my ODEs, I get an overflow error, probably because of the huge dynamic range of my state variables and time variable. Am I doing something wrong, or is the log-transform just not applicable in my case?
Here is a minimal working example that is successfully solved by scipy.integrate.solve_ivp:
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t,y):
    print('Working on t=',t/3.15e16) # to check status of integration in billions of years
    # unpack state variables
    M1, M2 = y
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives
    M1dot = mdot_grow_now - mdot_convert
    M2dot = mdot_convert
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
And here is the same example but now log-transformed following the Stack Overflow post referenced above (since dlogx/dt = 1/x * dx/dt, we simply replace the LHS with x*dlogx/dt and divide both sides by x to isolate dlogx/dt on the LHS; and we make sure to use np.exp() on the state variables – now logx instead of x – within the integrator function):
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t,logy):
    print('Working on t=',t/3.15e16) # to check status of integration in billions of years
    # unpack state variables
    M1, M2 = np.exp(logy)
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives
    M1dot = (mdot_grow_now - mdot_convert) / M1
    M2dot = (mdot_convert) / M2
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
[…] is log-transform just not applicable in my case?
I don’t know where your transform went wrong, but it will certainly not achieve what you think it does. Log-transforming as a means to avoid negative values makes sense and works if and only if the following two conditions hold:
Condition 1: If the value of a dynamical variable approaches zero (from above), its derivative also approaches zero (from above) in your model.
Condition 2: Due to numerical noise, your derivative may turn negative though it actually isn't.
Conversely, it is not necessary or doesn’t work in the following cases:
If Condition 1 fails because your derivative never approaches zero in your model, but is strictly positive, you have no problem to begin with, as your derivative should not become negative in any reasonable implementation of your model. (You might make it happen by implementing some spectacular numerical annihilation, but that’s quite a difficult feat to achieve and not what I would consider a reasonable implementation.)
If Condition 1 fails because your derivative becomes truly negative in your model, logarithms won’t save you, because the dynamics wants to push the derivative below zero and the logarithms cannot represent this. You usually get an overflow error due to the logarithms becoming extremely negative or the adaptive integration fails.
Even if Condition 1 applies, Condition 2 can usually be handled by avoiding numerical annihilations and similar when implementing your model.
Unless I am mistaken, your model falls into the first category. If M1 goes to zero, mdot_convert goes towards zero and thus M1dot = mdot_grow_now - mdot_convert is strictly positive, because mdot_grow_now is. M2dot is strictly positive anyway. Thus, you gain nothing from log-transforming. In fact, in the vast majority of cases, your dynamical variables will quickly increase.
With all that being said, some things you might want to look into are:
Normalising your variables to be in the order of magnitude of 1 (see the sketch below).
Stochastic differential equations.
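As a rough illustration of the normalisation suggestion, here is a minimal sketch of the toy model rewritten in Gyr and solar masses; the constant inflow and conversion rates are invented stand-ins for the interpolated arrays in the question:
from scipy.integrate import solve_ivp

def integrator_scaled(t_gyr, y_msun, mdot_grow, convert_rate):
    # same toy model as above, but t in Gyr and M1, M2 in solar masses,
    # so time and state stay within a few orders of magnitude of 1
    M1, M2 = y_msun
    mdot_convert = convert_rate * M1  # Msun per Gyr converted from M1 to M2
    return [mdot_grow - mdot_convert, mdot_convert]

# constant, order-unity rates purely for illustration (Msun/Gyr and 1/Gyr); the real
# interp_grow/interp_convert data would be rescaled by 3.15e16 s/Gyr and 1.989e33 g/Msun
# before being interpolated on a Gyr time grid
sol = solve_ivp(integrator_scaled, (0.1, 10.0), [1.0, 1.0],
                args=(5.0, 0.05), method='RK23', dense_output=True)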
I am new to Stack Overflow and also quite new to Python, so I hope I am asking my question in an appropriate manner.
I am running Python code similar to the minimal example below, with an example function (a product of a Lorentzian and a cosine) that I want to integrate numerically:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
#minimal example:
omega_loc = 15
gamma = 5
def Lorentzian(w):
    #print(w)
    return (w**3)/((w/omega_loc) + 1)**2*(gamma/2)/((w-omega_loc)**2+(gamma/2)**2)
def intRe(t):
    return quad(lambda w: w**(-2)*Lorentzian(w)*(1-np.cos(w*t)),0,np.inf,limit=10000)[0]
plt.figure(1)
plot_range = np.linspace(0,100,1000)
plt.plot(plot_range, [intRe(t) for t in plot_range])
Independent of the upper limit of the integration, I never get the code to run through and give me a result.
When I enable the #print(w) line, it seems like the code just keeps on probing the integral at seemingly random values of w in an infinite loop (?). The console also reports that a roundoff error was detected.
Is there a different way to do numerical integration in Python that is better suited to this kind of function than quad, or did I make a more fundamental error?
Observations
Close to zero, (1 - cos(w*t)) / w**2 tends to 0/0; we can take the Taylor expansion t**2*(1/2 - (w*t)**2/24).
Going to infinity, the Lorentzian is a constant and the cosine term causes the integrand to oscillate indefinitely; the integral can be approximated by multiplying that term by a slowly decreasing factor.
You are using a linearly spaced scale with many points. It is easier to visualize with w in log scale.
The plot looks like this before damping the cosine term
I introduced two parameters to tune the attenuation of the oscillations:
def cosinus_term(w, t, damping=1e4*omega_loc):
    return np.where(abs(w*t) < 1e-6, t**2*(0.5 - (w*t)**2/24.0), (1-np.exp(-abs(w/damping))*np.cos(w*t))/w**2)
def intRe(t, damping=1e4*omega_loc):
    return quad(lambda w: cosinus_term(w, t, damping)*Lorentzian(w),0,np.inf,limit=10000)[0]
Plotting with the following code
plt.figure(1)
plot_range = np.logspace(-3,3,100)
plt.plot(plot_range, [intRe(t, 1e2*omega_loc) for t in plot_range])
plt.plot(plot_range, [intRe(t, 1e3*omega_loc) for t in plot_range])
plt.xscale('log')
It runs in less than 3 minutes here, and the two results are close to each other, especially for large w, suggesting that the damping doesn't affect the result too much.
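An alternative worth a try (not used above, only a sketch): quad exposes QUADPACK's dedicated Fourier-integral routine via weight='cos' with an infinite upper limit, so the original undamped integral can be split into a smooth part and an oscillatory part (reusing the question's Lorentzian):
import numpy as np
from scipy.integrate import quad

def smooth_part(w):
    # Lorentzian(w)/w**2 behaves like ~w near 0 and decays like 1/w**3,
    # so both integrals below converge; guard the removable 0/0 at w=0
    if w == 0.0:
        return 0.0
    return Lorentzian(w)/w**2

def intRe_fourier(t):
    plain = quad(smooth_part, 0, np.inf, limit=10000)[0]
    oscil = quad(smooth_part, 0, np.inf, weight='cos', wvar=t, limlst=1000)[0]
    return plain - oscil
This evaluates the original, undamped integral, so it can also serve as a cross-check on the damping approach.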
I have a problem I am trying to solve where I have 5 single-variable polynomials, each with a single peak in the range I'm concerned with. My goal is to find values of the variable for each polynomial (under certain min/max and sum-of-all-variables constraints) that maximize the value of these curves, each multiplied by a constant.
I've set up some code using the scipy.optimize package and numpy. It seems to reach a solution, but the solution it reaches does not appear to be anywhere close to optimal. For example, the trivial case is an input of 488 MW. This particular input value has a solution where each variable x0-x4 is at the peak of its function, which is as follows:
x0=90
x1=100
x2=93
x3=93
x4=112
The result it provides me with is:
x0=80
x1=97
x2=105
x3=80
x4=126
This does satisfy my constraint, but it does not appear to minimize the objective function.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
U1_Max=103
U1_Min=80
U2_Max=102
U2_Min=80
U3_Max=105
U3_Min=80
U4_Max=100
U4_Min=80
U5_Max=126
U5_Min=90
# Our whole goal here is to maximize the sum of several efficiency curves times the
# output MW of each unit. "The most efficiency where it matters the most"
# Assuming all units are available for assignment this would look something like:
#Where have the following efficiency curves:
#U1: Efficiency=-0.0231*(MW)^2+4.189*MW-102.39
#U2: Efficiency= -0.01*(MW)^2+1.978*MW-8.7451
#U3: Efficiency= -0.025*MW^2+4.5017*MW-115.37
#U4: Efficiency= -0.01*(MW)^2+1.978*MW-8.7451
#U5: Efficiency= -0.0005*(MW)^3+0.1395*(MW)^2-13.327*MW+503.41
#So I think we want to
#Maximize U1(x0)*U1_MAX+U2(x1)*U2_MAX+U3(x2)*U3_MAX+U4(x3)*U4_MAX+U5(x4)*U5_MAX
#I think this can also be stated as:
#Minimize (U1(x0)-100)*U1_MAX+(U2(x1)-100)*U2_MAX+(U3(x2)-100)*U3_MAX+(U4(x3)-100)*U4_MAX+(U5(x4)-100)*U5_MAX
#Which means 'minimize the sum of the products of the difference between 100% efficient and actual and the unit nameplates'
#By Choosing {x1, x2, x3, x4, x5}
#Such that x1+x2+x3+x4+x5=MW_Total
#Such that U1_Min<x1<U1Max
#Such that U2_Min<x2<U2Max
#Such that U3_Min<x3<U3Max
#Such that U4_Min<x4<U4Max
#Such that U5_Min<x5<U5Max
##so let's type that out....
#stack overflow says the optimizer does best if the function being optimized is around 1-5ish so we will get it there-ish.
def objective(x):
    return (
        (
            ((100-0.0231*x[0]**2+4.189*x[0]-102.39))*U1_Max
            +((100-0.01*x[1]**2+1.978*x[1]-8.7451))*U2_Max
            +((100-0.025*x[2]**2+4.5017*x[2]-115.37))*U3_Max
            +((100-0.01*x[3]**2+1.978*x[3]-8.7451))*U4_Max
            +((100-0.0005*x[4]**3+0.1395*x[4]**2-13.327*x[4]+503.41))*U5_Max
        )
    )
x=np.zeros(5)
print(
(
((100-0.0231*x[0]**2+4.189*x[0]-102.39))*U1_Max
+((100-0.01*x[1]**2+1.978*x[1]-8.7451))*U2_Max
+((100-0.025*x[2]**2+4.5017*x[2]-115.37))*U3_Max
+((100-0.01*x[3]**2+1.978*x[3]-8.7451))*U4_Max
+((100-0.0005*x[4]**3+0.1395*x[4]**2-13.327*x[4]+503.41))*U5_Max
)
)
#Now, let's formally define our constraints
#Note that this must be of a form that satisfies 'constraint equal to zero'
#First, the sum of all MW commands should be equal to the total MW commanded
def constraint1(x):
    return -x[0]-x[1]-x[2]-x[3]-x[4]+MW_Total
#Since this is a numeric process let's give it some starting 'guess' conditions.
n=5
x0=np.zeros(n)
x0[0]=90
x0[1]=100
x0[2]=93
x0[3]=93
x0[4]=112
# show initial starting guess
print('Start by guessing: ')
print(x0)
print('Which gives a scaled algorithm value of: ')
print(
(
((100-0.0231*x0[0]**2+4.189*x0[0]-102.39))*U1_Max
+((100-0.01*x0[1]**2+1.978*x0[1]-8.7451))*U2_Max
+((100-0.025*x0[2]**2+4.5017*x0[2]-115.37))*U3_Max
+((100-0.01*x0[3]**2+1.978*x0[3]-8.7451))*U4_Max
+((100-0.0005*x0[4]**3+0.1395*x0[4]**2-13.327*x0[4]+503.41))*U5_Max
)
)
print('Which gives actual MW total of: ')
print(x0[0]+x0[1]+x0[2]+x0[3]+x0[4])
#Next, Let's give it some bounds to operate in
U1_Bnds=(U1_Min, U1_Max)
U2_Bnds=(U2_Min, U2_Max)
U3_Bnds=(U3_Min, U3_Max)
U4_Bnds=(U4_Min, U4_Max)
U5_Bnds=(U5_Min, U5_Max)
Bnds=(U1_Bnds, U2_Bnds, U3_Bnds, U4_Bnds, U5_Bnds)
con1 = {'type': 'eq', 'fun': constraint1}
print('MW Generated is: ')
for i in range (410,536):
    MW_Total=i
    solution = minimize(objective,x0,method='SLSQP',bounds=Bnds,constraints=con1,options={'maxiter': 10000000, 'eps':1.4901161193847656e-10})
    x = solution.x
    print(solution.x[0],solution.x[1],solution.x[2],solution.x[3],solution.x[4])
I would expect that for my trivial case of 488 MW it would give me the optimal answer. What am I doing wrong?
By looking at your objective and constraint definitions, it looks like you are in the case of a quadratic objective function with a linear constraint.
The theory for this is well known and provides convergence guarantees; you can refer to the Wikipedia page on quadratic programming.
I don't know the scipy SLSQP interface all that well, but it looks like you are using less information than you could. Try to cast your problem in the form of a quadratic objective function with a linear constraint, and cast your constraint as a scipy.optimize.LinearConstraint object.
Also, please use function calls such as print(objective(x)) and print(solution.x) in your code; this would enhance readability.
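A minimal sketch of that suggestion, assuming the question's coefficients: the sum constraint becomes a scipy.optimize.LinearConstraint, trust-constr is used because it accepts LinearConstraint and Bounds objects directly, and the objective is written as the '(100 - efficiency) * nameplate' sum described in the question's comments (note the parentheses around each whole polynomial, which the posted objective omits):
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

U_max = np.array([103, 102, 105, 100, 126])
U_min = np.array([80, 80, 80, 80, 90])

def objective(x):
    eff = np.array([
        -0.0231*x[0]**2 + 4.189*x[0] - 102.39,
        -0.01*x[1]**2 + 1.978*x[1] - 8.7451,
        -0.025*x[2]**2 + 4.5017*x[2] - 115.37,
        -0.01*x[3]**2 + 1.978*x[3] - 8.7451,
        -0.0005*x[4]**3 + 0.1395*x[4]**2 - 13.327*x[4] + 503.41,
    ])
    # minimize the shortfall from 100 % efficiency, weighted by nameplate MW
    return np.sum((100.0 - eff) * U_max)

MW_Total = 488
sum_to_total = LinearConstraint(np.ones((1, 5)), MW_Total, MW_Total)  # x0+...+x4 == MW_Total
x0 = np.array([90.0, 100.0, 93.0, 93.0, 112.0])

res = minimize(objective, x0, method='trust-constr',
               bounds=Bounds(U_min, U_max), constraints=[sum_to_total])
print(res.x, res.fun)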
Ultimately I am trying to visualise the copula between two PDFs which are estimated from data (both via a KDE). Suppose, for one of the KDEs, I have discrete x,y data sorted in a tuple called data. I need to generate random variables with this distribution in order to perform the probability integral transform (and ultimately to obtain the uniform distribution). My methodology to generate random variables is as follows:
import scipy.stats as st
from scipy import interpolate, integrate
pdf1 = interpolate.interp1d(data[0], data[1])
class pdf1_class(st.rv_continuous):
    def _pdf(self,x):
        return pdf1(x)
pdf1_rv = pdf1_class(a = data[0][0], b= data[0][-1], name = 'pdf1_class')
pdf1_samples = pdf1_rv.rvs(size=10000)
However, this method is extremely slow. I also get the following warnings:
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
warnings.warn(msg, IntegrationWarning)
Is there a better way to generate the random variables?
As per the suggestion by @unutbu, I implemented _cdf and _ppf, which makes the calculation of 10000 samples instantaneous. To do this I added the following to the above code:
discrete_cdf1 = integrate.cumtrapz(y=data[1], x = data[0])
cdf1 = interpolate.interp1d(data[0][1:], discrete_cdf1)
ppf1 = interpolate.interp1d(discrete_cdf1, data[0][:-1])
I then add the following two methods to pdf1_class
def _cdf(self,x):
    return cdf1(x)
def _ppf(self,x):
    return ppf1(x)
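Putting the pieces together, a self-contained sketch of this approach (with synthetic data standing in for the KDE output, since the original data tuple is not posted; here the cumulative values are paired with data[0][1:], and the interpolators are given fill values so the end points don't raise range errors):
import numpy as np
import scipy.stats as st
from scipy import interpolate, integrate

# synthetic stand-in for the KDE output: a truncated Gaussian-shaped pdf on a grid
x_grid = np.linspace(-3, 3, 201)
y_grid = np.exp(-x_grid**2 / 2)
y_grid /= integrate.trapezoid(y_grid, x_grid)  # normalise so the pdf integrates to ~1
data = (x_grid, y_grid)

pdf1 = interpolate.interp1d(data[0], data[1])
discrete_cdf1 = integrate.cumulative_trapezoid(y=data[1], x=data[0])  # cumtrapz in older SciPy
cdf1 = interpolate.interp1d(data[0][1:], discrete_cdf1,
                            bounds_error=False, fill_value=(0.0, discrete_cdf1[-1]))
ppf1 = interpolate.interp1d(discrete_cdf1, data[0][1:],
                            bounds_error=False, fill_value=(data[0][1], data[0][-1]))

class pdf1_class(st.rv_continuous):
    def _pdf(self, x):
        return pdf1(x)
    def _cdf(self, x):
        return cdf1(x)
    def _ppf(self, q):
        return ppf1(q)

pdf1_rv = pdf1_class(a=data[0][0], b=data[0][-1], name='pdf1_class')
pdf1_samples = pdf1_rv.rvs(size=10000)  # fast: rvs() now goes straight through _ppf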
I am trying to perform a least squares fit in Python to a known function with three parameters. I am able to complete this task for randomly generated data with errors, but the actual data that I need to fit includes some data points that are upper limits on the values. The function describes the flux as a function of wavelength, but in some cases the flux measured at a given wavelength is not an absolute value with an error but rather a maximum value, with the real value being anything below that, down to zero.
Is there some way of telling the fitting task that some data points are upper limits? Additionally, I have to do this for a number of data sets, and the number of data points which could be upper limits is different for each one, so being able to do this automatically would be beneficial but not a necessity.
I apologise if any of this is unclear, I will endeavour to explain it more clearly if it is needed.
The code I am using to fit my data is included below.
import numpy as np
from scipy.optimize import leastsq
import math as math
import matplotlib.pyplot as plt
def f_all(x,p):
    return np.exp(p[0])/((x**(3+p[1]))*((np.exp(14404.5/((x*1000000)*p[2])))-1))
def residual(p,y,x,error):
    err=(y-(f_all(x,p)))/error
    return err
p0=[-30,2.0,35.0]
data=np.genfromtxt("./Data_Files/Object_001")
wavelength=data[:,0]
flux=data[:,1]
errors=data[:,2]
p,cov,infodict,mesg,ier=leastsq(residual, p0, args = (flux, wavelength, errors), full_output=True)
print(p)
scipy.optimize.leastsq is a convenient way to fit data, but the work underneath is the minimization of a function. scipy.optimize contains many minimization functions, some of them capable of handling constraints. Here I explain with fmin_slsqp, which I know; perhaps the others can do it as well (see the scipy.optimize documentation).
fmin_slsqp requires a function to minimize and an initial value for the parameters. The function to minimize is the sum of the squared residuals. For the parameters, I first perform a traditional leastsq fit and use the result as the initial value for the constrained minimization problem. There are then several ways to impose constraints (see the doc); the simplest is the f_ieqcons parameter: it requires a function which returns an array whose values must always be positive (those are the constraints). Here the function returns positive values if, for all points with maximal values, the fitted function lies below the point.
import numpy
import scipy.optimize as scimin
import matplotlib.pyplot as mpl
datax=numpy.array([1,2,3,4,5]) # data coordinates
datay=numpy.array([2.95,6.03,11.2,17.7,26.8])
constraintmaxx=numpy.array([0]) # list of maximum constraints
constraintmaxy=numpy.array([1.2])
# least square fit without constraints
def fitfunc(x,p): # model f(x) = a*x**2 + c
    a,c=p
    return c+a*x**2
def residuals(p): # array of residuals
    return datay-fitfunc(datax,p)
p0=[1,2] # initial parameters guess
pwithout,cov,infodict,mesg,ier=scimin.leastsq(residuals, p0,full_output=True) # traditional least squares fit
# least squares fit with constraints
def sum_residuals(p): # the function we want to minimize
    return sum(residuals(p)**2)
def constraints(p): # the constraints: all the values of the returned array will be >=0 at the end
    return constraintmaxy-fitfunc(constraintmaxx,p)
pwith=scimin.fmin_slsqp(sum_residuals,pwithout,f_ieqcons=constraints) # minimization with constraint
# plotting
ax=mpl.figure().add_subplot(1,1,1)
ax.plot(datax,datay,ls="",marker="x",color="blue",mew=2.0,label="Data")
ax.plot(constraintmaxx,constraintmaxy,ls="",marker="x",color="red",mew=2.0,label="Max points")
morex=numpy.linspace(0,6,100)
ax.plot(morex,fitfunc(morex,pwithout),color="blue",label="Fit without constraints")
ax.plot(morex,fitfunc(morex,pwith),color="red",label="Fit with constraints")
ax.legend(loc=2)
mpl.show()
In this example I fit an imaginary sample of points on a parabola. Here is the result, without and with the constraint (the red cross on the left):
I hope this will work for your data sample; otherwise, please post one of your data files so that we can try with real data. I know my example does not take care of error bars on the data, but you can easily handle them by modifying the residuals function.
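A hedged sketch of that modification, reusing the parabola example above and adding per-point error bars (the dataerr values are invented purely for illustration):
dataerr = numpy.array([0.1, 0.2, 0.3, 0.3, 0.4])  # hypothetical 1-sigma errors on datay

def residuals_weighted(p):  # residuals in units of sigma
    return (datay - fitfunc(datax, p)) / dataerr

def sum_residuals_weighted(p):  # the chi-square to minimize
    return sum(residuals_weighted(p)**2)

pwith_err = scimin.fmin_slsqp(sum_residuals_weighted, pwithout, f_ieqcons=constraints)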