I have a function z(T,x,p). With my given data points I want to fit the function and obtain its coefficients. My constraint is that the derivative of z with respect to x should be positive everywhere, dz/dx > 0. But in the following code the constraint is not working and I do not know why.
import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt
T = np.array([262,257,253,261,260,243,300,283,282], dtype=float)
p = np.array([25,22,19,24,24,14,62,45,44], dtype=float)
x = np.array([0.1,0.1,0.2,0.2,0.3,0.3,1,0.3,0.2], dtype=float)
z = np.array([10,9,13,16,20,12,62,37,28], dtype=float)
def func(pars, T, x, p):  # my actual function
    a, b, c, d, e, f = pars
    return x * p + x * (1 - x) * (a + b * T + c * T ** 2 + d * x + e * x * T + f * x * T ** 2) * p

def resid(pars):  # residual function
    return ((func(pars, T, x, p) - z) ** 2).sum()

def der(pars):  # constraint function: derivative of func() with respect to x must be positive everywhere
    a, b, c, d, e, f = pars
    return p + p*(2*x*a + 2*x*b*T + 2*x*c*T**2 + 3*x**2*d + 3*x**2*e*T + 3*x**2*f*T**2) + p*(a + b*T + c*T**2 + 2*x*d + 2*e*x*T + 2*f*x*T**2)
con1 = (dict(type='ineq', fun=der))
pars0 = np.array([0,0,0,0,0,0])
res = minimize(resid, pars0, method='cobyla',options={'maxiter': 5000000}, constraints=con1)
print("a = %f , b = %f, c = %f, d = %f, e = %f, f = %f" % (res.x[0], res.x[1], res.x[2], res.x[3], res.x[4], res.x[5]))
Trying to plot an example:
x0 = np.linspace(0, 1, 100) # plot two example graphs z(x) for a certain T and p
fig, ax = plt.subplots()
fig.dpi = 80
ax.plot(x,z,'ro', label='data')
ax.plot(x0, func(res.x, 300, x0, 62), '-', label='fit T=300, p=62')
ax.plot(x0, func(res.x, 283, x0, 45), '-', label='fit T=283, p=45')
plt.xlabel('x')
plt.ylabel('z')
plt.legend()
plt.show()
As you can see, the derivative (gradient) is not positive everywhere. I do not know why the constraint is being ignored. Maybe someone can help me.
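One quick way to see what happened is a small diagnostic check (a sketch, assuming the variables defined above): print the optimizer's status and evaluate the constraint function at the returned parameters.
print(res.success, res.message)   # did COBYLA report success?
print(der(res.x))                 # constraint values at the data points; all should be > 0
print(np.min(der(res.x)))         # most negative value, if the constraint was violated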
I'm trying to plot a 3D superball in python matplotlib, where a superball is defined as a general mathematical shape that can be used to describe rounded cubes using a shape parameter p, where for p = 1 the shape is equal to that of a sphere.
This paper claims that the superball is defined by using modified spherical coordinates with:
x = r * cos(u)**(1/p) * sin(v)**(1/p)
y = r * sin(u)**(1/p) * sin(v)**(1/p)
z = r * cos(v)**(1/p)
with u = phi and v = theta.
I managed to get the code running, at least for p = 1 which generates a sphere - exactly as it should do:
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
r, p = 1, 1
# Make data
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
u, v = np.meshgrid(u, v)
x = r * np.cos(u)**(1/p) * np.sin(v)**(1/p)
y = r * np.sin(u)**(1/p) * np.sin(v)**(1/p)
z = r * np.cos(v)**(1/p)
# Plot the surface
ax.plot_surface(x, y, z)
plt.show()
This is a 3D plot of the code above for p = 1.
However, as I put in any other value for p, e.g. 2, it's giving me only a partial shape, while it should actually give me a full superball.
This is a 3D plot of the code above for p = 2.
I believe the fix is more of mathematical nature, but how can this be fixed?
The partial shape appears because, for p ≠ 1, the cosine and sine factors become negative over part of the domain, and NumPy returns NaN when a negative float is raised to a fractional power. To keep the sign, positive and negative bases have to be transformed differently:
Positives: x**0.5
Negatives: -1 * abs(x)**0.5
For the superball variants, apply the same logic using np.sign and np.abs:
power = lambda base, exp: np.sign(base) * np.abs(base)**exp
x = r * power(np.cos(u), 1/p) * power(np.sin(v), 1/p)
y = r * power(np.sin(u), 1/p) * power(np.sin(v), 1/p)
z = r * power(np.cos(v), 1/p)
Full example for p = 4:
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
r, p = 1, 4
# Make the data
u = np.linspace(0, 2 * np.pi)
v = np.linspace(0, np.pi)
u, v = np.meshgrid(u, v)
# Transform the coordinates
# Positives: base**exp
# Negatives: -abs(base)**exp
power = lambda base, exp: np.sign(base) * np.abs(base)**exp
x = r * power(np.cos(u), 1/p) * power(np.sin(v), 1/p)
y = r * power(np.sin(u), 1/p) * power(np.sin(v), 1/p)
z = r * power(np.cos(v), 1/p)
# Plot the surface
ax.plot_surface(x, y, z)
plt.show()
I have a function like the following:
q = 1 / sqrt( ((1+z)**2 * (1+0.01*o_m*z) - z*(2+z)*(1-o_m)) )
h = 5 * log10( (1+z)*q ) + 43.1601
I have experimental values for the above equation, and I must put my data into the function and evaluate the expression below:
chi=(q_exp-q_theo)**2/err**2 # one term of a sum: chi is summed over the data from z=0 to z=1.4 (in the data file)
z, err and q_exp are in the data file (2.txt). Now I have to scan o_m over a range (0.2 to 0.4) and find the o_m for which the chi function is minimized.
my code is:
from math import *
from scipy.integrate import quad
min = None
l = None
a = None
b = None
c = 0
def ant(z,om,od):
    return 1/sqrt( (1+z)**2 * (1+0.01*o_m*z) - z*(2+z)*o_d )

for o_m in range(20,40,1):
    o_d=1-0.01*o_m
    with open('2.txt') as fp:
        for line in fp:
            n = list( map(float, line.split()) )
            q = quad(ant,n[0],n[1],args=(o_m,o_d))[0]
            h = 5.0 * log10( (1+n[1])*q ) + 43.1601
            chi = (n[2]-h)**2 / n[3]**2
            c = c + chi
    if min is None or min>c:
        min = c
        l = o_m
print('chi=',q,'o_m=',0.01*l)
n[1], n[2], n[3], n[4] are z1, z2, q_exp and err, respectively, in the data file, and z1 and z2 are the integration range.
I need your help and I appreciate your time and your attention.
Please do not downvote; I need your answers.
Here is my understanding of the problem.
First I generate some data by the following code
import numpy as np
from scipy.integrate import quad
from random import random
def boxmuller(x0,sigma):
    u1=random()
    u2=random()
    ll=np.sqrt(-2*np.log(u1))
    z0=ll*np.cos(2*np.pi*u2)
    z1=ll*np.sin(2*np.pi*u2)  # second Box-Muller variate uses sin
    return sigma*z0+x0, sigma*z1+x0

def q_func(z, oM, oD):
    den= np.sqrt( (1.0 + z)**2 * (1+0.01 * oM * z) - z * (2+z) * (1-oD) )
    return 1.0/den

def h_func(z,q):
    out = 5 * np.log10( (1.0 + z) * q ) + .25  # 43.1601
    return out

def q_Int(z1,z2,oM,oD):
    out=quad(q_func, z1,z2,args=(oM,oD))
    return out
ooMM=0.3
ooDD=1.0-ooMM
dataList=[]
for z in np.linspace(.3,20,60):
    z1=.1+.1*z*.01*z**2
    z2=z1+3.0+.08+z**2
    q=q_Int(z1,z2,ooMM,ooDD)[0]
    h=h_func(z,q)
    sigma=np.fabs(.01*h)
    h=boxmuller(h,sigma)[0]
    dataList+=[[z,z1,z2,h,sigma]]
dataList=np.array(dataList)
np.savetxt("data.txt",dataList)
which I would then fit in the following way
import matplotlib
matplotlib.use('Qt5Agg')
from matplotlib import pyplot as plt
import numpy as np
from scipy.integrate import quad
from scipy.optimize import leastsq
def q_func(z, oM, oD):
    den= np.sqrt( (1.0 + z)**2 * (1+0.01 * oM * z) - z * (2+z) * (1-oD) )
    return 1.0/den

def h_func(z,q):
    out = 5 * np.log10( (1.0 + z) * q ) + .25  # 43.1601
    return out

def q_Int(z1,z2,oM,oD):
    out=quad(q_func, z1,z2,args=(oM,oD))
    return out

def residuals(parameters,data):
    om,od=parameters
    zList=data[:,0]
    yList=data[:,3]
    errList=data[:,4]
    qList=np.fromiter( (q_Int(z1,z2, om,od)[0] for z1,z2 in data[ :,[1,2] ]), float)
    hList=np.fromiter( (h_func(z,q) for z,q in zip(zList,qList)), float)
    diffList=np.fromiter( ( (y-h)/e for y,h,e in zip(yList,hList,errList) ), float)
    return diffList
dataList=np.loadtxt("data.txt")
###fitting
startGuess=[.4,.8]
bestFitValues, cov,info,mesg, ier = leastsq(residuals, startGuess , args=( dataList,),full_output=1)
print(bestFitValues, cov)
fig=plt.figure()
ax=fig.add_subplot(1,1,1)
ax.plot(dataList[:,0],dataList[:,3],marker='x')
###fitresult
fqList=[q_Int(z1,z2,bestFitValues[0], bestFitValues[1])[0] for z1,z2 in zip(dataList[:,1],dataList[:,2])]
fhList=[h_func(z,q) for z,q in zip(dataList[:,0],fqList)]
ax.plot(dataList[:,0],fhList,marker='+')
plt.show()
giving output
>>[ 0.31703574 0.69572673]
>>[[ 1.38135263e-03 -2.06088258e-04]
>> [ -2.06088258e-04 7.33485166e-05]]
and the graph
Note that for leastsq the covariance matrix is in reduced form and needs to be rescaled.
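A minimal sketch of that rescaling, assuming the residuals function, dataList, bestFitValues and cov from above (this is the standard reduced-chi-square scaling, not something leastsq returns by itself):
resid = residuals(bestFitValues, dataList)
dof = len(resid) - len(bestFitValues)        # degrees of freedom
s_sq = (resid**2).sum() / dof                # residual variance
cov_scaled = cov * s_sq                      # absolute covariance of the parameters
param_err = np.sqrt(np.diag(cov_scaled))     # 1-sigma parameter uncertainties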
Unconsciously, this question overlaps with my other question. The correct answer is:
from math import *
import numpy as np
from scipy.integrate import quad

min = l = a = b = chi = None
c = 0
z, mo, err = np.genfromtxt('Union2.1_z_dm_err.txt', unpack=True)

def ant(z, o_m):  # o_m is scanned in steps of 0.01 (passed as o_m/100 below)
    return 1/sqrt(((1+z)**2*(1+0.01*o_m*z)-z*(2+z)*(1-0.01*o_m)))

for o_m in range(20, 40):
    c = 0
    for i in range(len(z)):
        q = quad(ant, 0, z[i], args=(o_m,))[0]    # integration from 0 to z[i]
        h = 5*log10((1+z[i])*(299000/70)*q)+25    # distance modulus dL
        chi = (mo[i]-h)**2/err[i]**2              # chi^2 test function
        c = c + chi
    if min is None or c < min:                    # keep the best o_m so far
        min = c
        l = o_m

print('chi^2=', min, 'Om=', 0.01*l, 'OD=', 1-0.01*l)
I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic).
I use Python and Numpy and for polynomial fitting there is a function polyfit(). But I found no such functions for exponential and logarithmic fitting.
Are there any? Or how to solve it otherwise?
For fitting y = A + B log x, just fit y against (log x).
>>> x = numpy.array([1, 7, 20, 50, 79])
>>> y = numpy.array([10, 19, 30, 35, 51])
>>> numpy.polyfit(numpy.log(x), y, 1)
array([ 8.46295607, 6.61867463])
# y ≈ 8.46 log(x) + 6.62
For fitting y = A*e^(B*x), taking the logarithm of both sides gives log y = log A + B*x. So fit (log y) against x.
Note that fitting (log y) as if it were linear will emphasize small values of y, causing large deviations for large y. This is because polyfit (linear regression) works by minimizing ∑_i (ΔY_i)² = ∑_i (Y_i − Ŷ_i)². When Y_i = log y_i, the residuals are ΔY_i = Δ(log y_i) ≈ Δy_i / |y_i|. So even if polyfit makes a very bad decision for large y, the "divide-by-|y|" factor will compensate for it, causing polyfit to favor small values.
This could be alleviated by giving each entry a "weight" proportional to y. polyfit supports weighted-least-squares via the w keyword argument.
>>> x = numpy.array([10, 19, 30, 35, 51])
>>> y = numpy.array([1, 7, 20, 50, 79])
>>> numpy.polyfit(x, numpy.log(y), 1)
array([ 0.10502711, -0.40116352])
# y ≈ exp(-0.401) * exp(0.105 * x) = 0.670 * exp(0.105 * x)
# (^ biased towards small values)
>>> numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y))
array([ 0.06009446, 1.41648096])
# y ≈ exp(1.42) * exp(0.0601 * x) = 4.12 * exp(0.0601 * x)
# (^ not so biased)
Note that Excel, LibreOffice and most scientific calculators typically use the unweighted (biased) formula for the exponential regression / trend lines. If you want your results to be compatible with these platforms, do not include the weights even if it provides better results.
Now, if you can use scipy, you could use scipy.optimize.curve_fit to fit any model without transformations.
For y = A + B log x the result is the same as the transformation method:
>>> x = numpy.array([1, 7, 20, 50, 79])
>>> y = numpy.array([10, 19, 30, 35, 51])
>>> scipy.optimize.curve_fit(lambda t,a,b: a+b*numpy.log(t), x, y)
(array([ 6.61867467, 8.46295606]),
array([[ 28.15948002, -7.89609542],
[ -7.89609542, 2.9857172 ]]))
# y ≈ 6.62 + 8.46 log(x)
For y = A*e^(B*x), however, we can get a better fit, since curve_fit minimizes Δy directly instead of Δ(log y). But we need to provide an initial guess so curve_fit can reach the desired local minimum.
>>> x = numpy.array([10, 19, 30, 35, 51])
>>> y = numpy.array([1, 7, 20, 50, 79])
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y)
(array([ 5.60728326e-21, 9.99993501e-01]),
array([[ 4.14809412e-27, -1.45078961e-08],
[ -1.45078961e-08, 5.07411462e+10]]))
# oops, definitely wrong.
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y, p0=(4, 0.1))
(array([ 4.88003249, 0.05531256]),
array([[ 1.01261314e+01, -4.31940132e-02],
[ -4.31940132e-02, 1.91188656e-04]]))
# y ≈ 4.88 exp(0.0553 x). much better.
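If you prefer not to hand-pick p0, one option (a sketch, reusing the weighted log-linear fit from above) is to derive the starting guess from the transformation method and then let curve_fit refine it:
B0, logA0 = numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y))   # slope, intercept of the log-linear fit
p0 = (numpy.exp(logA0), B0)                                      # back-transform the intercept to A
popt, pcov = scipy.optimize.curve_fit(lambda t, a, b: a*numpy.exp(b*t), x, y, p0=p0)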
You can also fit a set of data to whatever function you like using curve_fit from scipy.optimize. For example, if you want to fit an exponential function (from the documentation):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
x = np.linspace(0,4,50)
y = func(x, 2.5, 1.3, 0.5)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
And then if you want to plot, you could do:
plt.figure()
plt.plot(x, yn, 'ko', label="Original Noised Data")
plt.plot(x, func(x, *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
(Note: the * in front of popt when you plot will expand out the terms into the a, b, and c that func is expecting.)
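For illustration, these two calls are equivalent (a tiny sketch using the func and popt defined above):
y_fit_a = func(x, *popt)                      # * unpacks popt into a, b, c
y_fit_b = func(x, popt[0], popt[1], popt[2])  # the same call written out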
I was having some trouble with this, so let me be very explicit so that noobs like me can understand.
Let's say that we have a data file or something like that:
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
import sympy as sym
"""
Generate some data, let's imagine that you already have this.
"""
x = np.linspace(0, 3, 50)
y = np.exp(x)
"""
Plot your data
"""
plt.plot(x, y, 'ro',label="Original Data")
"""
brute force to avoid errors
"""
x = np.array(x, dtype=float) #transform your data in a numpy array of floats
y = np.array(y, dtype=float) #so the curve_fit can work
"""
create a function to fit with your data. a, b, c and d are the coefficients
that curve_fit will calculate for you.
In this part you need to guess and/or use mathematical knowledge to find
a function that resembles your data
"""
def func(x, a, b, c, d):
    return a*x**3 + b*x**2 + c*x + d
"""
make the curve_fit
"""
popt, pcov = curve_fit(func, x, y)
"""
The result is:
popt[0] = a , popt[1] = b, popt[2] = c and popt[3] = d of the function,
so f(x) = popt[0]*x**3 + popt[1]*x**2 + popt[2]*x + popt[3].
"""
print "a = %s , b = %s, c = %s, d = %s" % (popt[0], popt[1], popt[2], popt[3])
"""
Use sympy to generate the LaTeX syntax of the function
"""
xs = sym.Symbol(r'\lambda')
tex = sym.latex(func(xs,*popt)).replace('$', '')
plt.title(r'$f(\lambda)= %s$' %(tex),fontsize=16)
"""
Print the coefficients and plot the function.
"""
plt.plot(x, func(x, *popt), label="Fitted Curve") #same as line above \/
#plt.plot(x, popt[0]*x**3 + popt[1]*x**2 + popt[2]*x + popt[3], label="Fitted Curve")
plt.legend(loc='upper left')
plt.show()
the result is:
a = 0.849195983017 , b = -1.18101681765, c = 2.24061176543, d = 0.816643894816
Here's a linearization option on simple data that uses tools from scikit-learn.
Given
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import FunctionTransformer
np.random.seed(123)
# General Functions
def func_exp(x, a, b, c):
    """Return values from a general exponential function."""
    return a * np.exp(b * x) + c

def func_log(x, a, b, c):
    """Return values from a general log function."""
    return a * np.log(b * x) + c
# Helper
def generate_data(func, *args, jitter=0):
    """Return a tuple of arrays with random data along a general function."""
    xs = np.linspace(1, 5, 50)
    ys = func(xs, *args)
    noise = jitter * np.random.normal(size=len(xs)) + jitter
    xs = xs.reshape(-1, 1)  # xs[:, np.newaxis]
    ys = (ys + noise).reshape(-1, 1)
    return xs, ys
transformer = FunctionTransformer(np.log, validate=True)
Code
Fit exponential data
# Data
x_samp, y_samp = generate_data(func_exp, 2.5, 1.2, 0.7, jitter=3)
y_trans = transformer.fit_transform(y_samp) # 1
# Regression
regressor = LinearRegression()
results = regressor.fit(x_samp, y_trans) # 2
model = results.predict
y_fit = model(x_samp)
# Visualization
plt.scatter(x_samp, y_samp)
plt.plot(x_samp, np.exp(y_fit), "k--", label="Fit") # 3
plt.title("Exponential Fit")
Fit log data
# Data
x_samp, y_samp = generate_data(func_log, 2.5, 1.2, 0.7, jitter=0.15)
x_trans = transformer.fit_transform(x_samp) # 1
# Regression
regressor = LinearRegression()
results = regressor.fit(x_trans, y_samp) # 2
model = results.predict
y_fit = model(x_trans)
# Visualization
plt.scatter(x_samp, y_samp)
plt.plot(x_samp, y_fit, "k--", label="Fit") # 3
plt.title("Logarithmic Fit")
Details
General Steps
Apply a log operation to data values (x, y or both)
Regress the data to a linearized model
Plot by "reversing" any log operations (with np.exp()) and fit to original data
Assuming our data follows an exponential trend, a general equation+ may be:
y = A * exp(B * x) + C
We can linearize the latter equation (e.g. y = intercept + slope * x) by taking the log (with C = 0):
ln(y) = ln(A) + B * x
Given a linearized equation++ and the regression parameters, we could calculate (a short sketch follows this list):
A via intercept (ln(A))
B via slope (B)
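A minimal sketch of that back-calculation, assuming the exponential-fit objects defined above (results = regressor.fit(x_samp, y_trans)) and C ≈ 0:
B = results.coef_[0][0]             # slope of the log-linear fit
A = np.exp(results.intercept_[0])   # the intercept is ln(A)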
Summary of Linearization Techniques
Relationship | Example | General Eqn. | Altered Var. | Linearized Eqn.
-------------|------------|----------------------|----------------|------------------------------------------
Linear | x | y = B * x + C | - | y = C + B * x
Logarithmic | log(x) | y = A * log(B*x) + C | log(x) | y = C + A * (log(B) + log(x))
Exponential | 2**x, e**x | y = A * exp(B*x) + C | log(y) | log(y-C) = log(A) + B * x
Power | x**2 | y = B * x**N + C | log(x), log(y) | log(y-C) = log(B) + N * log(x)
+Note: linearizing exponential functions works best when the noise is small and C=0. Use with caution.
++Note: while altering x data helps linearize exponential data, altering y data helps linearize log data.
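The Power row of the table works the same way; here is a short sketch (assuming C = 0, positive data, and made-up example values):
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 * x**2.5                               # y = B * x**N with B = 3, N = 2.5
N, logB = np.polyfit(np.log(x), np.log(y), 1)  # fit log(y) against log(x)
B = np.exp(logB)                               # recovers B ≈ 3.0 and N ≈ 2.5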
Well I guess you can always use:
np.log --> natural log
np.log10 --> base 10
np.log2 --> base 2
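The choice of base only rescales the fitted slope; a quick sketch with the sample data used earlier:
import numpy as np
x = np.array([1, 7, 20, 50, 79], dtype=float)
y = np.array([10, 19, 30, 35, 51], dtype=float)
m_e, c_e = np.polyfit(np.log(x), y, 1)    # natural log
m_2, c_2 = np.polyfit(np.log2(x), y, 1)   # base 2
# m_2 equals m_e * np.log(2) up to rounding; the intercepts agree.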
Slightly modifying IanVS's answer:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c):
    #return a * np.exp(-b * x) + c
    return a * np.log(b * x) + c
x = np.linspace(1,5,50) # changed boundary conditions to avoid division by 0
y = func(x, 2.5, 1.3, 0.5)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
plt.figure()
plt.plot(x, yn, 'ko', label="Original Noised Data")
plt.plot(x, func(x, *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
This results in the following graph:
We demonstrate features of lmfit while solving both problems.
Given
import lmfit
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(123)
# General Functions
def func_log(x, a, b, c):
    """Return values from a general log function."""
    return a * np.log(b * x) + c
# Data
x_samp = np.linspace(1, 5, 50)
_noise = np.random.normal(size=len(x_samp), scale=0.06)
y_samp = 2.5 * np.exp(1.2 * x_samp) + 0.7 + _noise
y_samp2 = 2.5 * np.log(1.2 * x_samp) + 0.7 + _noise
Code
Approach 1 - lmfit Model
Fit exponential data
regressor = lmfit.models.ExponentialModel() # 1
initial_guess = dict(amplitude=1, decay=-1) # 2
results = regressor.fit(y_samp, x=x_samp, **initial_guess)
y_fit = results.best_fit
plt.plot(x_samp, y_samp, "o", label="Data")
plt.plot(x_samp, y_fit, "k--", label="Fit")
plt.legend()
Approach 2 - Custom Model
Fit log data
regressor = lmfit.Model(func_log) # 1
initial_guess = dict(a=1, b=.1, c=.1) # 2
results = regressor.fit(y_samp2, x=x_samp, **initial_guess)
y_fit = results.best_fit
plt.plot(x_samp, y_samp2, "o", label="Data")
plt.plot(x_samp, y_fit, "k--", label="Fit")
plt.legend()
Details
Choose a regression class
Supply named initial guesses that respect the function's domain
You can determine the inferred parameters from the regressor object. Example:
regressor.param_names
# ['decay', 'amplitude']
To make predictions, use the ModelResult.eval() method.
model = results.eval
y_pred = model(x=np.array([1.5]))
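The fitted parameter values can also be read straight off the result object (a small sketch; both attributes are part of lmfit's ModelResult):
print(results.best_values)    # dict of fitted parameter values
print(results.fit_report())   # full report, including uncertainties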
Note: the ExponentialModel() follows a decay function, which accepts two parameters, one of which is negative.
See also ExponentialGaussianModel(), which accepts more parameters.
Install the library via > pip install lmfit.
Wolfram has a closed form solution for fitting an exponential. They also have similar solutions for fitting a logarithmic and power law.
I found this to work better than scipy's curve_fit. Especially when you don't have data "near zero". Here is an example:
import numpy as np
import matplotlib.pyplot as plt
# Fit the function y = A * exp(B * x) to the data
# returns (A, B)
# From: https://mathworld.wolfram.com/LeastSquaresFittingExponential.html
def fit_exp(xs, ys):
    S_x2_y = 0.0
    S_y_lny = 0.0
    S_x_y = 0.0
    S_x_y_lny = 0.0
    S_y = 0.0
    for (x, y) in zip(xs, ys):
        S_x2_y += x * x * y
        S_y_lny += y * np.log(y)
        S_x_y += x * y
        S_x_y_lny += x * y * np.log(y)
        S_y += y
    #end
    a = (S_x2_y * S_y_lny - S_x_y * S_x_y_lny) / (S_y * S_x2_y - S_x_y * S_x_y)
    b = (S_y * S_x_y_lny - S_x_y * S_y_lny) / (S_y * S_x2_y - S_x_y * S_x_y)
    return (np.exp(a), b)
xs = [33, 34, 35, 36, 37, 38, 39, 40, 41, 42]
ys = [3187, 3545, 4045, 4447, 4872, 5660, 5983, 6254, 6681, 7206]
(A, B) = fit_exp(xs, ys)
plt.figure()
plt.plot(xs, ys, 'o-', label='Raw Data')
plt.plot(xs, [A * np.exp(B *x) for x in xs], 'o-', label='Fit')
plt.title('Exponential Fit Test')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
Hi, I'm trying to get a plot of the trajectory of a mass under projectile motion: one with a force acting on the horizontal axis and one without (basically two sets of x values plotted against one set of y values). Here's what I have so far. I'm new to programming and I can't seem to figure out where this went wrong. Hope you guys can help me. Thank you!
import numpy as np
import matplotlib.pyplot as pl
def position(y0, v0, theta, g, t):
    y = y0 + v0*np.sin(theta)*t + (g*t**2)/2
    return y

def position2(x0, v0, theta, c, e, alpha, t):
    x1 = x0 + v0*(np.cos(theta))*t + c*(t*(e-1)+(2-2*e)/alpha)
    return x1

def position3(x0, v0, theta, t):
    x2 = x0 + v0*(np.cos(theta))*t
    return x2
t = np.linspace(0,10,1000)
#part1
m = 1
theta = 45
y0 = 2
x0 = 0
v0 = 3
k = 1
alpha = 0.5
g = -9.8
c = (-k/m)*(1/alpha**2)
e = -(np.e**(-alpha*t))
x1 = []
x2 = []
y = []
for a in t:
    x1_data = position2(x0, v0, theta, c, e, alpha, t)
    x1.append(x1_data)
    x2_data = position3(x0, v0, theta, t)
    x2.append(x2_data)
    y_data = position(y0, v0, theta, g, t)
    y.append(y_data)
    print x1_data
    print x2_data
    print y_data
pl.title('Constant and Time-Dependent Forces')
pl.xlabel(b'x-position')
pl.ylabel(b'y-position')
x1label = 'projectile 1'
x2label = "'normal' projectile"
plot1 = pl.plot(x1_data, y, 'r')
plot2 = pl.plot(x2_data, y, 'b')
pl.legend()
pl.show()
I went through your code since I am new to matplotlib and wanted to play a bit with it. The only mistake I found is in the for loop: you write for a in t: but end up passing t to the functions instead of a.
import numpy as np
import matplotlib.pyplot as pl
sin = np.sin
cos = np.cos
pi = np.pi
def y_position(y0, v0, phi, g, t):
    y_t = y0 + v0 * sin(phi) * t + (g * t**2) / 2
    return y_t

def x_position_force(x0, v0, phi, k, m, alpha, t):
    term1 = (-k / m) * (1 / alpha ** 2)
    term2 = -np.e ** (-alpha * t)
    x_t = x0 + v0 * cos(phi) * t + term1 * (t * (term2 - 1) + (2 - 2 * term2) / alpha)
    return x_t

def x_position_no_force(x0, v0, phi, t):
    x_t = x0 + v0 * cos(phi) * t
    return x_t
time = np.linspace(0, 10, 100)
#------------- I N P U T -------------#
x_init = 0
y_init = 2
v_init = 3
theta = 45
gravity = -9.8
m = 1
k = 1
alpha = 0.5
#------------- I N P U T -------------#
x_with_force = []
x_with_no_force = []
y = []
for time_i in time:
    x_with_force.append(x_position_force(x_init, v_init, theta, k, m, alpha, time_i))
    x_with_no_force.append(x_position_no_force(x_init, v_init, theta, time_i))
    y.append(y_position(y_init, v_init, theta, gravity, time_i))
    # print(x1_data)
    # print(x2_data)
    # print(y_data)
pl.subplot(211)
pl.title('Constant and Time-Dependent Forces')
pl.xlabel('time')
plot1 = pl.plot(time, x_with_force, 'r', label='x_coord_dynamicF')
plot2 = pl.plot(time, x_with_no_force, 'g', label='x_coord_staticF')
plot3 = pl.plot(time, y, 'b', label='y_coord')
pl.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode="expand", borderaxespad=0.)
pl.subplot(212)
pl.title('Trajectory (x,y)')
pl.xlabel('X')
pl.ylabel('Y')
plot4 = pl.plot(x_with_force, y, 'r^')
plot5 = pl.plot(x_with_no_force, y, 'b*')
pl.show()
I changed a number of things, though, to bring the code in line with PEP 8. In my opinion it is the use of poor variable names that led you to the mistake you made. So I would recommend taking the time to type those few extra characters that ultimately help you and the people reading your code.
Here is my code to solve the differential equation dy/dx = 2/sqrt(pi) * exp(-x*x) in order to plot erf(x).
import matplotlib.pyplot as plt
from scipy.integrate import odeint
import numpy as np
import math
def euler(df, f0, x):
    h = x[1] - x[0]
    y = [f0]
    for i in xrange(len(x) - 1):
        y.append(y[i] + h * df(y[i], x[i]))
    return y

def i(df, f0, x):
    h = x[1] - x[0]
    y = [f0]
    y.append(y[0] + h * df(y[0], x[0]))
    for i in xrange(1, len(x) - 1):
        fn = df(y[i], x[i])
        fn1 = df(y[i - 1], x[i - 1])
        y.append(y[i] + (3 * fn - fn1) * h / 2)
    return y

if __name__ == "__main__":
    df = lambda y, x: 2.0 / math.sqrt(math.pi) * math.exp(-x * x)
    f0 = 0.0
    x = np.linspace(-10.0, 10.0, 10000)
    y1 = euler(df, f0, x)
    y2 = i(df, f0, x)
    y3 = odeint(df, f0, x)
    plt.plot(x, y1, x, y2, x, y3)
    plt.legend(["euler", "modified", "odeint"], loc='best')
    plt.grid(True)
    plt.show()
And here is a plot:
Am I using odeint in a wrong way, or is it a bug?
Notice that if you change x to x = np.linspace(-5.0, 5.0, 10000), then your code works. Therefore, I suspect the problem has something to do with exp(-x*x) being extremely small when |x| is large. [Total speculation: perhaps the odeint (lsoda) algorithm adapts its step size based on values sampled around x = -10 and increases the step size in such a way that values around x = 0 are missed?]
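One way to probe that speculation (a sketch; hmax is odeint's keyword for capping the internal step size) is to force smaller steps and see whether the plateau disappears:
y3 = odeint(df, f0, x, hmax=0.1)   # cap the solver's step size so the region around x = 0 is not skipped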
The code can be fixed by using the tcrit parameter, which tells odeint to pay special attention around certain critical points.
So, by setting
y3 = integrate.odeint(df, f0, x, tcrit = [0])
we tell odeint to sample more carefully around 0.
import matplotlib.pyplot as plt
import scipy.integrate as integrate
import numpy as np
import math
def euler(df, f0, x):
    h = x[1] - x[0]
    y = [f0]
    for i in xrange(len(x) - 1):
        y.append(y[i] + h * df(y[i], x[i]))
    return y

def i(df, f0, x):
    h = x[1] - x[0]
    y = [f0]
    y.append(y[0] + h * df(y[0], x[0]))
    for i in xrange(1, len(x) - 1):
        fn = df(y[i], x[i])
        fn1 = df(y[i - 1], x[i - 1])
        y.append(y[i] + (3 * fn - fn1) * h / 2)
    return y

def df(y, x):
    return 2.0 / np.sqrt(np.pi) * np.exp(-x * x)

if __name__ == "__main__":
    f0 = 0.0
    x = np.linspace(-10.0, 10.0, 10000)
    y1 = euler(df, f0, x)
    y2 = i(df, f0, x)
    y3 = integrate.odeint(df, f0, x, tcrit=[0])
    plt.plot(x, y1)
    plt.plot(x, y2)
    plt.plot(x, y3)
    plt.legend(["euler", "modified", "odeint"], loc='best')
    plt.grid(True)
    plt.show()