I have one-dimensional numpy arrays x and y, and I would like to reproduce y with a known function to obtain "beta". Here is the code I am using:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
y = np.array([ 0.04022493, 0.04287536, 0.03983657, 0.0393201 , 0.03810298,
0.0363814 , 0.0331144 , 0.03074823, 0.02795767, 0.02413816,
0.02180802, 0.01861309, 0.01632699, 0.01368056, 0.01124232,
0.01005323, 0.00867196, 0.00940864, 0.00961282, 0.00892419,
0.01048963, 0.01199101, 0.01533408, 0.01855704, 0.02163586,
0.02630014, 0.02971127, 0.03511223, 0.03941218, 0.04280329,
0.04689105, 0.04960554, 0.05232003, 0.05487037, 0.05843364,
0.05120701])
x = np.array([ 0., 0.08975979, 0.17951958, 0.26927937, 0.35903916,
0.44879895, 0.53855874, 0.62831853, 0.71807832, 0.80783811,
0.8975979 , 0.98735769, 1.07711748, 1.16687727, 1.25663706,
1.34639685, 1.43615664, 1.52591643, 1.61567622, 1.70543601,
1.7951958 , 1.88495559, 1.97471538, 2.06447517, 2.15423496,
2.24399475, 2.33375454, 2.42351433, 2.51327412, 2.60303391,
2.6927937 , 2.78255349, 2.87231328, 2.96207307, 3.05183286,
3.14159265])
def func(x, beta):
    return 1.0/(4.0*np.pi)*(1 + beta*(3.0/2*np.cos(x)**2 - 1.0/2))
guesses = [20]
popt,pcov = curve_fit(func,x,y,p0=guesses)
y_fit = 1/(4.0*np.pi)*(1+popt[0]*(3.0/2*np.cos(x)**2-1.0/2))
plt.figure(1)
plt.plot(x,y,'ro',x,y_fit,'k-')
plt.show()
The code works but the fitting is completely off (see picture). Any idea why?
It looks like the formula to use contains an additional parameter, i.e. p
def func(x, beta, p):
    return p/(4.0*np.pi)*(1 + beta*(3.0/2*np.cos(x)**2 - 1.0/2))
guesses = [20,5]
popt,pcov = curve_fit(func,x,y,p0=guesses)
y_fit = func(x, *popt)
plt.figure(2)
plt.plot(x,y,'ro',x,y_fit,'k-')
plt.show()
print(popt)  # [ 1.23341604  0.27362069]
In popt, which one is beta and which one is p?
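For reference, curve_fit returns the fitted values in popt in the same order as the parameters appear in the function signature, so popt[0] is beta and popt[1] is p here:
beta_fit, p_fit = popt  # order follows func(x, beta, p)
print(beta_fit, p_fit)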
This is perhaps not what you want but, if you are just trying to get a good fit to the data, you could use np.polyfit:
fit = np.polyfit(x,y,4)
fit_fn = np.poly1d(fit)
plt.scatter(x,y,label='data',color='r')
plt.plot(x,fit_fn(x),color='b',label='fit')
plt.legend(loc='upper left')
Note that fit gives the coefficient values of, in this case, a 4th order polynomial:
>>> fit
array([-0.00877534, 0.05561778, -0.09494909, 0.02634183, 0.03936857])
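The coefficients run from the highest power down, so fit_fn(x) evaluates the same polynomial as np.polyval(fit, x):
# fit[0]*x**4 + fit[1]*x**3 + fit[2]*x**2 + fit[3]*x + fit[4]
assert np.allclose(fit_fn(x), np.polyval(fit, x))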
This is about as good as you can get (assuming the equation is right, as @mdurant suggested); an additional intercept term is required to further improve the fit:
def func(x, beta, icpt):
    return 1.0/(4.0*np.pi)*(1 + beta*(3.0/2*np.cos(x)**2 - 1.0/2)) + icpt
guesses = [20, 0]
popt,pcov = curve_fit(func,x,y,p0=guesses)
y_fit = func(x, *popt)
plt.figure(1)
plt.plot(x,y,'ro', x,y_fit,'k-')
print(popt)  # [ 0.33748816 -0.05780343]
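If you also want uncertainty estimates for beta and the intercept, the square roots of the diagonal of pcov give the one-sigma errors (a standard curve_fit idiom):
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties for [beta, icpt]
print(perr)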
I've been trying to fit some data I obtained from simulations. From the curve, I guess a logarithmic fit would be ideal. However, the fitted curve comes out looking quite funky. I've also tried higher-order polynomials and np.polyfit, but I couldn't get either to work.
Any help would be appreciated!
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
xdata=[9.24104360013e-06, 4.72619458107e-06, 4.03957328857e-06, 9.78301182748e-06, 1.36994566431e-05, 1.16294573409e-05, 7.70899546232e-06, 2.72587766232e-06, 2.19089955631e-06, 5.34851640035e-06, 7.84434545123e-06, 7.6524185787e-06, 1.00592536363e-05, 6.08711035578e-07, 4.08259572135e-07, 5.74424798328e-07, 6.20036326494e-07, 4.34755225756e-06, 4.72832211908e-06, 1.25156011417e-06, 1.44996714816e-05, 3.79992166335e-06, 4.45935911838e-06, 6.6307841155e-06, 2.38540191336e-06, 9.4649801666e-07, 9.11518608157e-06, 3.1944675219e-06, 5.32674287313e-06, 1.48463901861e-05, 3.41127723277e-06, 3.40027150288e-06, 3.33064781566e-06, 2.12828505238e-06, 7.22565690506e-06, 7.86527964811e-06, 2.25791582571e-06, 1.94875869207e-05, 1.54712884424e-05, 5.82300791075e-06, 9.5783833758e-06, 1.89519143607e-05, 1.03731970283e-05, 2.53090894753e-05, 9.26047056658e-06, 1.05428610146e-05, 2.89162870493e-05, 4.78624726782e-05, 1.00005855557e-05, 6.88617910928e-05]
ydata=[0.00281616449359, 0.00257023004939, 0.00250030932407, 0.00284317789756, 0.00300158447316, 0.00291690879783, 0.00274898865728, 0.0023625485679, 0.0023018015629, 0.00259860025555, 0.00269155777824, 0.00265941197135, 0.0028073724168, 0.00192920496041, 0.00182900945464, 0.00191452746379, 0.00193227563253, 0.00253266811688, 0.00255961306471, 0.00212426145702, 0.00285906942634, 0.00247877245272, 0.0025348504727, 0.00269881922057, 0.00232270371493, 0.00204672286703, 0.00281306442303, 0.00241938445736, 0.00261083321385, 0.00287440363274, 0.00244324770882, 0.00244364989768, 0.00244593671433, 0.00228714406931, 0.00263301289418, 0.00269385915315, 0.0022968948347, 0.00313898537645, 0.00305650121575, 0.00265291893623, 0.00278748794063, 0.00312801724905, 0.00289450806538, 0.00313176225397, 0.00284010926578, 0.0028957865422, 0.00335438183977, 0.00360421739757, 0.00270734995952, 0.00377301191882]
plt.plot(xdata,ydata,'o')
x = np.array(xdata, dtype=float) #transform your data in a numpy array of floats
y = np.array(ydata, dtype=float) #so the curve_fit can work
#def func(x,a,b,c):
# return a*x**2+ b*x +c
def func(x, a, b):
    return a*np.log(x) + b
popt, pcov = curve_fit(func, x, y)
plt.plot(x, func(x, *popt), label="Fitted Curve")
plt.show()
Sort x before plotting; plt.plot connects the points in the order they are given, so an unsorted x produces a jumbled line:
x_sorted = np.sort(x)
plt.plot(x_sorted, func(x_sorted, *popt), label="Fitted Curve")
plt.show()
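Equivalently, np.argsort gives an index order that keeps x and y paired, which is handy if you also want to plot the data points in the same order:
order = np.argsort(x)
plt.plot(x[order], y[order], 'o', label="data")
plt.plot(x[order], func(x[order], *popt), label="Fitted Curve")
plt.legend()
plt.show()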
I need to implement a chi-square function and got stuck because it always shows an invalid syntax error when I run it. How should I write the script? And how do I input "v"?
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
data = np.loadtxt("214 ohm.txt", skiprows=1)
xdata = [row[0] for row in data]  # x represents the voltage; unit is "V"
ydata = [row[1] for row in data]  # y represents the current; unit is "mA"
percision_error_V = np.array(xdata) * 0.0025  # precision error: 0.25% of the measured voltage (last digit of the reading)
accuracy_error_V = 0.01  # we are using DC voltage, so use the accuracy error provided online
erry = []
for i in range(len(percision_error_V)):
    # compare the precision and accuracy errors for the voltage and use the larger one
    erry.append(max(percision_error_V[i], accuracy_error_V))
def model_function(x, a, b):
    return a*x + b
p0 = [0., 0.]  # 214 ohm was measured with an ohmmeter
p_opt, p_cov = curve_fit(model_function, xdata, ydata, p0,
                         sigma=erry, absolute_sigma=True)
print(erry)
a_opt = p_opt[0]
b_opt = p_opt[1]
print(p_cov)
print("diagonal of P-cov is",np.diag(p_cov))
print("a_opt, b_opt is ",a_opt, b_opt)
xhat = np.arange(0, 16, 0.1)
plt.plot(xhat, model_function(xhat, a_opt, b_opt), 'r-', label="model function")
plt.errorbar(xdata, ydata,np.array(erry),linestyle="",marker='s', label="error bar")
plt.legend()
plt.ylabel('Current (mA)')
plt.xlabel('Voltage(V)')
plt.title("Voltage vs. Current with 220ohm Resistor")
plt.show()
p_sigma = np.sqrt(np.diag(p_cov))
print("p_sigma is" ,p_sigma)
for i in range(len(xdata)):
sum=sum((ydata[i]-model_function(xdata[i], a_opt, b_opt))
chi.append(sum)
This is the required function I'm supposed to implement in Python:
Thanks
My code is alright until the chi-square equation; how should I fix it?
You have an indentation error, a missing parenthesis, and variable naming issues in this sample of code:
FROM
for i in range(len(xdata)):
sum=sum((ydata[i]-model_function(xdata[i], a_opt, b_opt))
1.append(sum)
TO
for i in range(len(xdata)):
    sum = sum((ydata[i] - model_function(xdata[i], a_opt, b_opt)))
    a.append(sum)
Variable names cannot start with numbers such as 1, 2, 3; they must start with a letter or an underscore, e.g. a1, alfa, betta, s_t, _s.
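For completeness, the chi-square itself is just the sum of squared residuals weighted by the measurement errors. A minimal sketch using the arrays already defined in your code, assuming erry holds the one-sigma uncertainties on ydata and that the "v" in your formula is nu, the number of degrees of freedom:
residuals = np.array(ydata) - model_function(np.array(xdata), a_opt, b_opt)
chi_square = np.sum((residuals / np.array(erry)) ** 2)
dof = len(xdata) - 2  # nu: N data points minus the 2 fitted parameters a and b
print("chi-square =", chi_square)
print("reduced chi-square =", chi_square / dof)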
In the framework of my bachelor's thesis, I need to evaluate my data with Python. Unfortunately none of my fellow students has a suitable script yet, and I'm quite new to programming.
I have this data set and I'm trying to fit it with a gaussian by using scipy.optimize.curve_fit. Since there are a lot of unusable counts especially at the end of the axis, I'd like to confine the part that is to be fitted.
Picture: raw data
This is what I have so far:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x=np.arange(5120)
y = np.array([ 0.81434599, 1.17054264, 0.85279188, ..., 1.        ,
       1.        , 13.56291391])  # most of the data isn't interesting
                                  # to me; the part of interest is given below
def Gauss(x, a, x0, sigma):
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))
mean = sum(x * y) / sum(y)
sigma = np.sqrt(sum(y * (x - mean)**2) / sum(y))
popt,pcov = curve_fit(Gauss, x, y, p0=[max(y), mean, sigma],
maxfev=360000)
plt.plot(x,y,label='data')
plt.plot(x,Gauss(x, *popt), 'r-',label='fit')
On docs.scipy.org I've found a general description about curve_fit
If I try using
bounds=([2400,-np.inf, -np.inf],[2600, np.inf, np.inf]),
I'm getting the ValueError: x0 is infeasible. What is the problem here?
I also tried to confine it with
popt,pcov = curve_fit(Gauss, x[2400:2600], y[2400:2600], p0=[max(y), mean, sigma], maxfev=360000)
as suggested in a comment on the Stack Overflow question "Error when obtaining gaussian fit for graph".
In this case I only get a straight line though.
Picture: Confinement with x[2400:2600],y[2400:2600] as arguments of curve_fit
I really hope you can help me out here. I only need a way to fit a small part of my data. Thanks in advance!
interesting data:
y = np.array([ 0.93396226, 1.00884956, 1.15457413, 1.07590759,
0.88915094, 1.07142857, 1.10714286, 1.14171123, 1.06666667,
0.84975369, 0.95480226, 0.99388379, 1.01675978, 0.83967391,
0.9771987 , 1.02402402, 1.04531722, 1.07492795, 0.97135417,
0.99714286, 1.0248139 , 1.26223776, 1.1533101 , 0.99099099,
1.18867925, 1.15772871, 0.95076923, 1.03313253, 1.02278481,
0.93265993, 1.06705539, 1.00265252, 1.02023121, 0.92076503,
0.99728997, 1.03353659, 1.15116279, 1.04336043, 0.95076923,
1.05515588, 0.92571429, 0.93448276, 1.02702703, 0.90056818,
0.96068796, 1.08493151, 1.13584906, 1.1212938 , 1.0739645 ,
0.98972603, 0.94594595, 1.07913669, 0.98425197, 0.87762238,
0.96811594, 1.02710843, 0.99392097, 0.91384615, 1.09809264,
1.00630915, 0.93175074, 0.87572254, 1.00651466, 0.78772379,
1.12244898, 1.2248062 , 0.97109827, 0.94607843, 0.97900262,
0.97527473, 1.01212121, 1.16422287, 1.20634921, 0.97275204,
1.01090909, 0.99404762, 1.00561798, 1.01146132, 1.08695652,
0.97214485, 1.03525641, 0.99096386, 1.05135952, 1.16451613,
0.90462428, 0.76876877, 0.47701149, 0.27607362, 0.21580547,
0.20598007, 0.16766467, 0.15533981, 0.19745223, 0.15407855,
0.18925831, 0.26997245, 0.47603834, 0.596875 , 0.85126582, 0.96
, 1.06578947, 1.08761329, 0.89548023, 0.99705882, 1.07142857,
0.95677233, 0.86119874, 1.02857143, 0.98250729, 0.94214876,
1.04166667, 0.96024465, 1.07022472, 1.10344828, 1.04859335,
0.96655518, 1.06424581, 1.01754386, 1.03492063, 1.18627451,
0.91036415, 1.03355705, 1.09116809, 0.96083551, 1.01298701,
1.03691275, 1.02923977, 1.11612903, 1.01457726, 1.06285714,
0.98186528, 1.16470588, 0.86645963, 1.07317073, 1.09615385,
1.21192053, 0.94385027, 0.94244604, 0.88390501, 0.95718654,
0.9691358 , 1.01729107, 1.01119403, 1.20350877, 1.12890625,
1.06940063, 0.90410959, 1.14662757, 0.97093023, 1.03021148,
1.10629921, 0.97118156, 1.10693642, 1.07917889, 0.9484127 ,
1.07581227, 0.98006645, 0.98986486, 0.90066225, 0.90066225,
0.86779661, 0.86779661, 0.96996997, 1.01438849, 0.91186441,
0.91290323, 1.03745318, 1.0615942 , 0.97202797, 1.16608997,
0.94182825, 1.08333333, 0.9076087 , 1.18181818, 1.20618557,
1.01273885, 0.93606138, 0.87457627, 0.90575916, 1.09756098,
0.99115044, 1.13380282, 1.04333333, 1.04026846, 1.0297619 ,
1.04334365, 1.03395062, 0.92553191, 0.98198198, 1. ,
0.9439528 , 1.02684564, 1.1372549 , 0.96676737, 0.99649123,
1.07051282, 1.10367893, 1.0866426 , 1.15384615, 0.99667774])
You might find the lmfit module (https://lmfit.github.io/lmfit-py/) useful for this. It is designed to make curve fitting very easy, has built-in models for common peaks like Gaussian, and has many useful features such as allowing you to set bounds on parameters. A fit to your data with lmfit might look like this:
import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel, ConstantModel
y = np.array([.....]) # uses your shorter data range
x = np.arange(len(y))
# make a model that is a Gaussian + a constant:
model = GaussianModel(prefix='peak_') + ConstantModel()
# make parameters with starting values:
params = model.make_params(c=1.0, peak_center=90,
peak_sigma=5, peak_amplitude=-5)
# it's not really needed for this data, but you can put bounds on
# parameters like this (or set .vary=False to fix a parameter)
params['peak_sigma'].min = 0 # sigma > 0
params['peak_amplitude'].max = 0 # amplitude < 0
params['peak_center'].min = 80
params['peak_center'].max = 100
# run fit
result = model.fit(y, params, x=x)
# print, plot results
print(result.fit_report())
plt.plot(x, y)
plt.plot(x, result.best_fit)
plt.show()
This will print out
[[Model]]
(Model(gaussian, prefix='peak_') + Model(constant))
[[Fit Statistics]]
# function evals = 54
# data points = 200
# variables = 4
chi-square = 1.616
reduced chi-square = 0.008
Akaike info crit = -955.625
Bayesian info crit = -942.432
[[Variables]]
peak_sigma: 4.03660814 +/- 0.204240 (5.06%) (init= 5)
peak_center: 91.2246614 +/- 0.200267 (0.22%) (init= 90)
peak_amplitude: -9.79111362 +/- 0.445273 (4.55%) (init=-5)
c: 1.02138228 +/- 0.006796 (0.67%) (init= 1)
peak_fwhm: 9.50548558 +/- 0.480950 (5.06%) == '2.3548200*peak_sigma'
peak_height: -0.96766623 +/- 0.041854 (4.33%) == '0.3989423*peak_amplitude/max(1.e-15, peak_sigma)'
[[Correlations]] (unreported correlations are < 0.100)
C(peak_sigma, peak_amplitude) = -0.599
C(peak_amplitude, c) = -0.328
C(peak_sigma, c) = 0.196
and make a plot like this:
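If you would rather stay with plain scipy: note that bounds in curve_fit constrains the fit parameters (a, x0, sigma, in the order of the Gauss signature), not the x range, which is why [2400, 2600] was applied to the amplitude and made the initial guess infeasible. Restricting the data itself and fitting a Gaussian plus a constant baseline works as well; a rough sketch, assuming y is the 200-point array from the question:
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_const(x, a, x0, sigma, c):
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2)) + c

x = np.arange(len(y))
p0 = [y.min() - 1.0, np.argmin(y), 5.0, 1.0]  # negative dip on a baseline near 1
popt, pcov = curve_fit(gauss_plus_const, x, y, p0=p0)
print(popt)  # roughly [-0.97, 91.2, 4.0, 1.02], in line with the lmfit result above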
There is an equation for an exponentially truncated power law in the article below:
Gonzalez, M. C., Hidalgo, C. A., & Barabasi, A. L. (2008). Understanding individual human mobility patterns. Nature, 453(7196), 779-782.
like this:
P(rg) = (rg + rg0)**(-beta) * exp(-rg / K)
It is an exponentially truncated power law. There are three parameters to be estimated: rg0, beta and K. Now we have several users' radius of gyration (rg), uploaded to GitHub: radius of gyrations.txt
The following codes can be used to read data and calculate P(rg):
import numpy as np
# read radius of gyration from file
rg = []
with open('/path-to-the-data/radius of gyrations.txt', 'r') as f:
    for i in f:
        rg.append(float(i.strip('\n')))
# calculate P(rg)
rg = sorted(rg, reverse=True)
rg = np.array(rg)
prg = np.arange(len(rg)) / float(len(rg) - 1)
or you can use the rg and prg data directly, as follows:
rg = np.array([ 20.7863444 , 9.40547933, 8.70934714, 8.62690145,
7.16978087, 7.02575052, 6.45280959, 6.44755478,
5.16630287, 5.16092884, 5.15618737, 5.05610068,
4.87023561, 4.66753197, 4.41807645, 4.2635671 ,
3.54454372, 2.7087178 , 2.39016885, 1.9483156 ,
1.78393238, 1.75432688, 1.12789787, 1.02098332,
0.92653501, 0.32586582, 0.1514813 , 0.09722761,
0. , 0. ])
prg = np.array([ 0. , 0.03448276, 0.06896552, 0.10344828, 0.13793103,
0.17241379, 0.20689655, 0.24137931, 0.27586207, 0.31034483,
0.34482759, 0.37931034, 0.4137931 , 0.44827586, 0.48275862,
0.51724138, 0.55172414, 0.5862069 , 0.62068966, 0.65517241,
0.68965517, 0.72413793, 0.75862069, 0.79310345, 0.82758621,
0.86206897, 0.89655172, 0.93103448, 0.96551724, 1. ])
I can plot P(r_g) against r_g using the following Python script:
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(rg, prg, 'bs', alpha = 0.3)
# roughly estimated params:
# rg0=1.8, beta=0.15, K=5
plt.plot(rg, (rg+1.8)**-.15*np.exp(-rg/5))
plt.yscale('log')
plt.xscale('log')
plt.xlabel('$r_g$', fontsize = 20)
plt.ylabel('$P(r_g)$', fontsize = 20)
plt.show()
How can I use these rg data to estimate the three parameters above? I hope to solve it using Python.
Following @Michael's suggestion, we can solve the problem using scipy.optimize.curve_fit:
def func(rg, rg0, beta, K):
    return (rg + rg0) ** (-beta) * np.exp(-rg / K)
from scipy import optimize
popt, pcov = optimize.curve_fit(func, rg, prg, p0=[1.8, 0.15, 5])
print(popt)
print(pcov)
The results are given below:
[ 1.04303608e+03 3.02058550e-03 4.85784945e+00]
[[ 1.38243336e+18 -6.14278286e+11 -1.14784675e+11]
[ -6.14278286e+11 2.72951900e+05 5.10040746e+04]
[ -1.14784675e+11 5.10040746e+04 9.53072925e+03]]
Then we can inspect the results by plotting the fitted curve.
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(rg, prg, 'bs', alpha = 0.3)
plt.plot(rg, (rg+popt[0])**-(popt[1])*np.exp(-rg/popt[2]) )
plt.yscale('log')
plt.xscale('log')
plt.xlabel('$r_g$', fontsize = 20)
plt.ylabel('$P(r_g)$', fontsize = 20)
plt.show()
I am trying to fit a Morse potential using Python and scipy.
The Morse potential is defined as:
V = D*(exp(-2*m*(x-u)) - 2*exp(-m*(x-u)))
where D, m and u are the parameters I need to extract.
Unfortunately the fit is not satisfactory, as you can see below (sorry, I do not have 10 reputation, so the image has to be clicked). Could anyone help me, please? I must say I am not the best programmer with Python.
Here is my code:
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
xdata2=np.array([1.0 ,1.1 ,1.2 ,1.3 ,1.4 ,1.5 ,1.6 ,1.7 ,1.8 ,1.9 ,2.0 ,2.1 ,2.2 ,2.3 ,2.4 ,2.5 ,2.6 ,2.7 ,2.8 ,2.9 ,3.0 ,3.1 ,3.2 ,3.3 ,3.4 ,3.5 ,3.6 ,3.7 ,3.8 ,3.9 ,4.0 ,4.1 ,4.2 ,4.3 ,4.4 ,4.5 ,4.6 ,4.7 ,4.8 ,4.9 ,5.0 ,5.1 ,5.2 ,5.3 ,5.4 ,5.5 ,5.6 ,5.7 ,5.8 ,5.9])
ydata2=[-1360.121815,-1368.532641,-1374.215047,-1378.090480,-1380.648178,-1382.223113,-1383.091562,-1383.479384,-1383.558087,-1383.445803,-1383.220380,-1382.931531,-1382.609269,-1382.273574,-1381.940879,-1381.621299,-1381.319042,-1381.036231,-1380.772039,-1380.527051,-1380.301961,-1380.096257,-1379.907700,-1379.734621,-1379.575837,-1379.430693,-1379.299282,-1379.181303,-1379.077272,-1378.985220,-1378.903626,-1378.831588,-1378.768880,-1378.715015,-1378.668910,-1378.629996,-1378.597943,-1378.572742,-1378.554547,-1378.543296,-1378.539843,-1378.543593,-1378.554519,-1378.572747,-1378.597945,-1378.630024,-1378.668911,-1378.715015,-1378.768915,-1378.831593]
t=np.linspace(0.1,7)
def morse(q, m, u, x):
    return (q * (np.exp(-2*m*(x-u)) - 2*np.exp(-m*(x-u))))
popt, pcov = curve_fit(morse, xdata2, ydata2, maxfev=40000000)
yfit = morse(t,popt[0], popt[1], popt[2])
print(popt)
plt.plot(xdata2, ydata2,"ro")
plt.plot(t, yfit)
plt.show()
Old fit before gboffi's comment
I am guessing the exact depth of the Morse potential does not interest you overly much, so I added an additional parameter (v) to shift the Morse potential up and down; this also incorporates @gboffi's comment. Furthermore, the first argument of your function must be the independent variable, not the parameters you want to fit (see http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.optimize.curve_fit.html).
In addition, such fits are sensitive to your starting values. The following should give you what you want.
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
xdata2=np.array([1.0 ,1.1 ,1.2 ,1.3 ,1.4 ,1.5 ,1.6 ,1.7 ,1.8 ,1.9 ,2.0 ,2.1 ,2.2 ,2.3 ,2.4 ,2.5 ,2.6 ,2.7 ,2.8 ,2.9 ,3.0 ,3.1 ,3.2 ,3.3 ,3.4 ,3.5 ,3.6 ,3.7 ,3.8 ,3.9 ,4.0 ,4.1 ,4.2 ,4.3 ,4.4 ,4.5 ,4.6 ,4.7 ,4.8 ,4.9 ,5.0 ,5.1 ,5.2 ,5.3 ,5.4 ,5.5 ,5.6 ,5.7 ,5.8 ,5.9])
ydata2=[-1360.121815,-1368.532641,-1374.215047,-1378.090480,-1380.648178,-1382.223113,-1383.091562,-1383.479384,-1383.558087,-1383.445803,-1383.220380,-1382.931531,-1382.609269,-1382.273574,-1381.940879,-1381.621299,-1381.319042,-1381.036231,-1380.772039,-1380.527051,-1380.301961,-1380.096257,-1379.907700,-1379.734621,-1379.575837,-1379.430693,-1379.299282,-1379.181303,-1379.077272,-1378.985220,-1378.903626,-1378.831588,-1378.768880,-1378.715015,-1378.668910,-1378.629996,-1378.597943,-1378.572742,-1378.554547,-1378.543296,-1378.539843,-1378.543593,-1378.554519,-1378.572747,-1378.597945,-1378.630024,-1378.668911,-1378.715015,-1378.768915,-1378.831593]
t=np.linspace(0.1,7)
tstart = [1.e+3, 1, 3, 0]
def morse(x, q, m, u, v):
    return (q * (np.exp(-2*m*(x-u)) - 2*np.exp(-m*(x-u))) + v)
popt, pcov = curve_fit(morse, xdata2, ydata2, p0 = tstart, maxfev=40000000)
print(popt)  # [ 5.10155662 1.43329962 1.7991549 -1378.53461345]
yfit = morse(t,popt[0], popt[1], popt[2], popt[3])
plt.plot(xdata2, ydata2,"ro")
plt.plot(t, yfit)
plt.show()
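With this parametrization popt can be read off directly: q is the well depth D, m the width parameter, u the position of the minimum, and v the vertical offset, so the minimum of the fitted potential sits at x = u with value v - q:
q, m, u, v = popt
print("well depth D =", q)      # ~5.1
print("minimum at x =", u)      # ~1.8
print("V at minimum =", v - q)  # ~ -1383.6, close to the minimum of ydata2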