I have a set of values for x and y, and I'm looking for a way to find the value of a parameter for a function.
I have a function y = Ax^{4/3}.
I was thinking about using curve_fit, but I'm not sure if this is the right way.
I tried to linearize the function and find the slope with polyfit, but the slope changes radically if I remove some points.
Edit: I tried curve_fit and something strange is happening. curve_fit gives me A=0.55, but this value doesn't fit at all. However, 0.055 seems to work.
Here's my code.
def func(A, x):
    return A * x**(4/3)
popt,pcov = curve_fit(func,x[:18], y[:18])
Any help will be appreciated.
Here is an example of how to fit data to your model:
Import relevant libraries:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
Define x values:
x = np.linspace(0, 10)
Define a function representing your model:
def f(x: np.ndarray, a: float) -> np.ndarray:
    return a * x ** (4/3)
Let's sample data from the above model and add noise:
y = f(x, a=16) * np.random.uniform(1, 2, len(x))
Perform the curve fitting:
popt, pcov = curve_fit(f, x, y)
Plot the results:
plt.scatter(x, y)
plt.plot(x, f(x, *popt), c="r")
plt.show()
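If you also want the uncertainty of the fitted parameter: popt holds the estimate of a and the diagonal of pcov its variance. A minimal sketch continuing from the code above (the names a_fit and a_err are just illustrative):

a_fit = popt[0]                    # fitted value of a
a_err = np.sqrt(np.diag(pcov))[0]  # its 1-sigma uncertainty
print(f"a = {a_fit:.2f} +/- {a_err:.2f}")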
I used sklearn in python to fit polynomial functions:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly = PolynomialFeatures(degree=2, include_bias=False)
poly_reg_model = LinearRegression()
poly_features = poly.fit_transform(xvalues.reshape(-1, 1))
poly_reg_model.fit(poly_features, y_values)
final_predicted = poly_reg_model.predict(poly_features)
...
Instead of only using x^n parts, I want to include a (1-x^2)^(1/2) part in the fit function.
Is this possible with sklearn?
I tried to define a feature which includes more complex terms, but I failed to achieve this.
No idea whether it is possible within scikit-learn - after all, a polynomial fit is constrained to specific polynomial formulations from a mathematical standpoint. If you want to fit a formula with some unknown parameters, you can use scipy.optimize.curve_fit. First, let us generate some dummy data with noise:
import numpy as np
from matplotlib import pyplot as plt
def f(x):
    return (1 - x**2)**(1/2)
xvalues = np.linspace(-1, 1, 30)
yvalues = [f(x) + np.random.randint(-10, 10)/100 for x in xvalues]
Then, we set up our function to be optimized:
from scipy.optimize import curve_fit
def f_opt(x, a, b):
    return (a - x**2)**(1/b)
popt, pcov = curve_fit(f_opt, xvalues, yvalues)
You can of course modify this function to be more flexible. Finally, we plot the results:
plt.scatter(xvalues, yvalues, label='data')
plt.plot(xvalues, f_opt(xvalues, *popt), 'r-', label='fit')
plt.legend()
So now you can use f_opt(new_x, *popt) to predict new points (alternatively, you can print the values and hard-code them). popt basically holds the parameters that you specified in f_opt, except x - for more details, check the scipy.optimize.curve_fit documentation.
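As one way of making f_opt more flexible (the extra amplitude parameter c below is my own addition, not part of the original answer), you could add an overall scale and refit; starting a slightly above 1 keeps the base of the fractional power positive over x in [-1, 1]:

def f_opt_scaled(x, a, b, c):
    # same shape as f_opt, with c scaling the whole curve
    return c * (a - x**2)**(1/b)

popt_s, pcov_s = curve_fit(f_opt_scaled, xvalues, yvalues, p0=[1.05, 2, 1])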
I'm new to sympy plotting. Every time I try to graph the solution set of a one-variable linear inequality like the one here, I can't figure it out. Can anyone help me with my problem?
My code is like this
from sympy import symbols, plot
from sympy.plotting import plot
from sympy import *
import numpy as np
x, y = symbols("x, y", real=True)
init_printing(use_unicode=True)
ekpr = (4*x - 2 <= 5 + 3*x)
pprint(ekpr)
xs = np.linspace(0, 10, 11)
yvals = [solve(ekpr, x, xi) for xi in xs]
sol = solve(ekpr, x)
print(sol)
plot = plot(ys, xlim=[0.0, 10.10],
            markers=[{'args': [sol, [0], xs, yvals, 'ro']}])
The result of the graph that I expect is like the image below.
[image: expected plot of the solution set]
It plots correctly if I delete the function problem2, but I need to plot both of them, and whenever I do, exp(x) is drawn as a straight line. I've printed f(x) to check whether it is getting the right values, and it does. problem2 is a differential equation that I solved using odeint; f(x) is the answer to problem2 given in the book, and I want to plot both together to see whether the solution given by Python is correct. problem20 = 10 is an initial value I picked randomly (could that also be a source of error? I fail to see how, if so).
import numpy as np
import scipy.integrate as sciint
import matplotlib.pyplot as plt
def problem2(y, x):
    return x + x * y

def f(x):
    return np.exp(x)

if __name__ == '__main__':
    x = np.linspace(0, 5, 50)
    problem20 = 10
    sol = sciint.odeint(problem2, problem20, x)
    plt.plot(x, sol, label="Calculated")
    print(f(x))
    plt.plot(x, f(x), label="Given in book")
    plt.legend()
    plt.show()
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
from ipywidgets.widgets import interact
sym.init_printing(use_latex="mathjax")
x, y, z, t = sym.symbols('x y z t')
We were given a function in class to write as code:
\begin{equation}
p_w(z,t)=\frac{1}{\sqrt{\pi \left(1-\exp\left[-2 t\right]\right)}}
\exp\left[-\frac{\left(z-\exp\left[-t\right]\right)^{2}}{1-
\exp\left[-2t\right]}\right]
\end{equation}
which I have written as this
p_w = (1/(sym.sqrt((sym.pi)*(1-(sym.exp(-2*t))))))*(sym.exp((-(z-sym.exp(-t))**2)/(1-sym.exp(-2*t))))
Then find the partial differential equation
\begin{equation}
\partial_t p_w(z,t) = \partial_z\left[z\,p_w(z,t)\right] + \frac{1}{2}\,\partial_z^2 p_w(z,t)
\end{equation}
which I have written as this:
LHS=sym.diff(p_w,t,1)
#differentiate once with respect to t
RHS=sym.diff(z*p_w,z,1)+((1/2)*(sym.diff(p_w,z,2)))
#now differentiate with respect to z
Now we need to plot it and can only use matplotlib/numpy/sympy libraries.
Plot p_w(z, t) for the three values t = 0.1, 1, 10 in a p_w(z, t) versus z diagram.
Here's what I've got so far:
t_points=[0.1,1,10]
#pw = sym.lambdify(t,p_w)
mytspace=np.linspace(0,10,200)
#myzspace=pw(mytspace)
plt.xlabel("t axis")
plt.ylabel("z axis")
plt.plot(t_array,np.zeros(3),'bs')
I haven't studied multivariable calculus before so I'm a bit lost!
Since one of your variables is given (you know t must be 0.1, 1, or 10), your only variable for plotting is z. I know you are using sympy for the calculations, but for plotting it may be simpler to just return p_w as a numpy array. You can define a function to return p_w like so:
import numpy as np
import matplotlib.pyplot as plt
def p_w(z, t):
    p_w = (1/(np.sqrt((np.pi)*(1-(np.exp(-2*t))))))*(np.exp((-(z-np.exp(-t))**2)/(1-np.exp(-2*t))))
    return p_w
This will give you a numpy array with the results of p_w(z, t) where z is an array and t is just one number. Then you can just iterate over the values of t that you need:
t_points=[0.1, 1, 10]
z = np.linspace(0,10,200)
for t in t_points:
    plt.plot(z, p_w(z, t), label='t = {0}'.format(t))

plt.legend()
plt.show()
Result: one curve of p_w(z, t) versus z for each value of t.
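If you would rather keep the sympy expression from the question (the commented-out sym.lambdify line was heading in that direction), a rough sketch of that route could look like this; note that lambdify needs to be given both symbols, z and t:

import numpy as np
import matplotlib.pyplot as plt
import sympy as sym

z, t = sym.symbols('z t', real=True)
p_w_expr = (1/sym.sqrt(sym.pi*(1 - sym.exp(-2*t))))*sym.exp(-(z - sym.exp(-t))**2/(1 - sym.exp(-2*t)))
# turn the symbolic expression into a numpy-aware function of (z, t)
p_w_num = sym.lambdify((z, t), p_w_expr, modules='numpy')

zs = np.linspace(0, 10, 200)
for t_val in [0.1, 1, 10]:
    plt.plot(zs, p_w_num(zs, t_val), label='t = {0}'.format(t_val))
plt.xlabel('z')
plt.ylabel('p_w(z, t)')
plt.legend()
plt.show()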
I have some data points which I was successfully able to graph, but now I would like to fit a curve to the data. I looked into other stackoverflow answers and found a few questions, but I can't seem to implement them. I know the function is a reverse sigmoid.
I would like to use this hill equation: https://imgur.com/rYqEASm
So far I tried to use the curve_fit() function from the scipy package to find the parameters but my code always breaks.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
x = np.array([1, 1.90, 7.70, 30.10, 120.40, 481.60, 1925.00, 7700.00])
y = np.array([4118.47, 4305.79, 4337.47, 4838.11, 2660.76, 1365.05, 79.21, -16.40])
def fit_hill(t, b, s, i, h):
    return b + ((t - b)/(1 + (((x * s)/i)**-h)))
plt.plot(x,y, 'o')
plt.xscale('log')
plt.show()
params = curve_fit(fit_hill, x, y)
[t,b,s,i,h] = params[0]
fit_hill should have 6 parameters, with the independent variable x first: fit_hill(x, t, b, s, i, h)
(see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html).
You should try to give an initial guess for the parameters. For example, in your model the value at x=0 is t, so you can use the value at x=0 as an estimate for t.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
x = np.array([1, 1.90, 7.70, 30.10, 120.40, 481.60, 1925.00])
y = np.array([4118.47, 4305.79, 4337.47, 4838.11, 2660.76, 1365.05, 79.21])
def fit_hill(x, t, b, s, i, h):
    return b + ((t - b)/(1 + (((x * s)/i)**-h)))
plt.plot(x,y, 'o')
popt,pcov = curve_fit(fit_hill, x, y,(4118,200,1,1900,-2))
plt.plot(x,fit_hill(x,*popt),'+')
plt.xscale('log')
plt.show()
Have you plotted your model to check whether it is suitable for your data?
s and i, which only appear together as the ratio s/i, could be replaced with a single parameter in your model.
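A small sketch of that reparameterization (the combined parameter k below stands in for i/s and is my own naming, not from the original post):

def fit_hill_reduced(x, t, b, k, h):
    # k replaces i/s, so (x*s/i)**-h becomes (x/k)**-h
    return b + ((t - b)/(1 + (x/k)**-h))

popt, pcov = curve_fit(fit_hill_reduced, x, y, p0=(4118, 200, 1900, -2))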