How to fit a non-linear function with Python?

I have the following code written in R to estimate three coefficients (a, b and c):
y <- c(120, 125, 158, 300, 350, 390, 2800, 5900, 7790)
t <- 1:9
fit <- nls(y ~ a * (((b + c)^2/b) * exp(-(b + c) * t))/(1 + (c/b) *
exp(-(b + c) * t))^2, start = list(a = 17933, b = 0.01, c = 0.31))
and I get this result:
> summary(fit)
Formula: y ~ a * (((b + c)^2/b) * exp(-(b + c) * t))/(1 + (c/b) * exp(-(b +
c) * t))^2
Parameters:
   Estimate Std. Error t value Pr(>|t|)
a 2.501e+04  2.031e+03  12.312 1.75e-05 ***
b 1.891e-05  1.383e-05   1.367    0.221
c 1.254e+00  1.052e-01  11.924 2.11e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 248.8 on 6 degrees of freedom
Number of iterations to convergence: 33
Achieved convergence tolerance: 6.836e-06
How can I do the same thing in Python?

You can use curve_fit, which gives you the same result:
import scipy.optimize as optimization
import numpy as np

y = np.array([120, 125, 158, 300, 350, 390, 2800, 5900, 7790])
t = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
start = np.array([17933, 0.01, 0.31])

def f(t, a, b, c):
    num = a * (np.exp(-t * (b + c)) * np.power(b + c, 2) / b)
    denom = np.power(1 + (c / b) * np.exp(-t * (b + c)), 2)
    return num / denom

print(optimization.curve_fit(f, t, y, start))
# (array([2.50111448e+04, 1.89129922e-05, 1.25426156e+00]),
#  array([[ 4.12657233e+06,  2.58151776e-02, -2.00881091e+02],
#         [ 2.58151776e-02,  1.91318685e-10, -1.44733425e-06],
#         [-2.00881091e+02, -1.44733425e-06,  1.10654268e-02]]))
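If you also want the standard errors that R's summary(fit) reports, they are the square roots of the diagonal of the covariance matrix that curve_fit returns (a small follow-up sketch, reusing f, t, y and start from above):
popt, pcov = optimization.curve_fit(f, t, y, start)
perr = np.sqrt(np.diag(pcov))  # standard errors, as in R's summary(fit)
for name, est, err in zip("abc", popt, perr):
    print(name, est, err)
# a: 2.50e+04 +/- 2.03e+03, b: 1.89e-05 +/- 1.38e-05, c: 1.25 +/- 1.05e-01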


Simple Linear Regression - what am I doing wrong?

I am new to ML and tried to build a linear regression model by myself. The objective is to predict Fahrenheit values from Celsius values.
This is my code:
import numpy as np

celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
inputs = celsius_q
output_expected = fahrenheit_a

# y = m * x + b
m = 100
b = 0
m_gradient = 0
b_gradient = 0
learning_rate = 0.00001

# Forward propagation
for i in range(10000):
    for i in range(len(inputs)):
        m_gradient += (m + (b * inputs[i] - output_expected[i]))
        b_gradient += inputs[i] * (m + (b * inputs[i]) - output_expected[i])
    m_new = m - learning_rate * (2 / len(inputs)) * m_gradient
    b_new = b - learning_rate * (2 / len(inputs)) * b_gradient
The code generates wrong weights for m and b, no matter how much I change the learning_rate and the number of epochs. The weights for the minimal loss function have to be:
b = 1.8
m = 32
What am I doing wrong?
The update of m and b needs to happen at every step, but that alone is not enough. You also need to increase your learning rate, here from 0.00001 to 0.0002:
import numpy as np

celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
inputs = celsius_q
output_expected = fahrenheit_a

# y = m * x + b
m_new = m = 100.0
b_new = b = 0.0
m_gradient = 0.0
b_gradient = 0.0
learning_rate = 0.0002

# Forward propagation
for i in range(10000):
    m_gradient, b_gradient = 0, 0
    for i in range(len(inputs)):
        m_gradient += (m_new + (b_new * inputs[i] - output_expected[i]))
        b_gradient += inputs[i] * (m_new + (b_new * inputs[i]) - output_expected[i])
    m_new -= learning_rate * m_gradient
    b_new -= learning_rate * b_gradient

print(m_new, b_new)
Getting:
31.952623523538897 1.7979482813813066
which is close to the expected 32 and 1.8.
You should continually update your parameters, in every step. Something like:
for i in range(10000):
    m_gradient, b_gradient = 0, 0
    for i in range(len(inputs)):
        m_gradient += (m + (b * inputs[i] - output_expected[i]))
        b_gradient += inputs[i] * (m + (b * inputs[i]) - output_expected[i])
    m -= learning_rate * m_gradient
    b -= learning_rate * b_gradient
(But I didn't check your math.)
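As a quick sanity check (an addition, not part of either answer): since this is an ordinary least-squares line, np.polyfit gives the optimum in closed form. Note that in the question's parameterization m plays the role of the intercept and b the slope:
import numpy as np

celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)

# Degree-1 polyfit returns [slope, intercept] of the least-squares line.
slope, intercept = np.polyfit(celsius_q, fahrenheit_a, 1)
print(intercept, slope)  # roughly 31.95 and 1.80, matching the fit above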

Gradient Descent Problem with smallest/simplest data on planet Earth

I want to implement the gradient descent algorithm on this simple data, but I am facing problems. It would be great if someone could point me in the right direction. The answer should be 7 for x = 6, but I'm not getting there.
X = [1, 2, 3, 4]
Y = [2, 3, 4, 5]
m_gradient = 0
b_gradient = 0
m, b = 0, 0
learning_rate = 0.1
N = len(Y)

for p in range(100):
    for idx in range(len(Y)):
        x = X[idx]
        y = Y[idx]
        hyp = (m * x) + b
        m_gradient += -(2 / N) * x * (y - hyp)
        b_gradient += -(2 / N) * (y - hyp)
    m = m - (m_gradient * learning_rate)
    b = b - (b_gradient * learning_rate)

print(b + m * 6)
You are calculating the gradients incorrectly for all but the first iteration. You need to set both gradients to 0 in the outer for loop.
X = [1, 2, 3, 4]
Y = [2, 3, 4, 5]
m_gradient = 0
b_gradient = 0
m, b = 0, 0
learning_rate = 0.1
N = len(Y)

for p in range(100):
    for idx in range(len(Y)):
        x = X[idx]
        y = Y[idx]
        hyp = (m * x) + b
        m_gradient += -(2 / N) * x * (y - hyp)
        b_gradient += -(2 / N) * (y - hyp)
    m = m - (m_gradient * learning_rate)
    b = b - (b_gradient * learning_rate)
    m_gradient, b_gradient = 0, 0

print(b + m * 6)
For example, consider b_gradient. Before the first iteration b_gradient = 0, and it is calculated as 0 - 0.5*(y0 - (m*x0 + b)) - 0.5*(y1 - (m*x1 + b)) - 0.5*(y2 - (m*x2 + b)) - 0.5*(y3 - (m*x3 + b)), where x0 and y0 are X[0] and Y[0], respectively.
After the first iteration the value of b_gradient is -7, which is correct.
The problem starts with the second iteration. Instead of calculating b_gradient as the sum of -0.5*(yn - (m*xn + b)) for 0 <= n <= 3, you calculated it as the previous value of b_gradient plus that sum.
After the second iteration the value of b_gradient is -2.6, which is incorrect: the correct value is 4.4, and indeed 4.4 - 7 = -2.6.
It seems you want the coefficients for linear regression using gradient descent. More data points, a slightly smaller learning rate, and training for more epochs while monitoring the loss will all help reduce the error.
As the input size gets larger, the code below will give slightly off results; the methods just mentioned, such as training for more epochs, will give correct results over a larger range of inputs.
Vectorized Version
import numpy as np

X = np.array([1, 2, 3, 4, 5, 6, 7])
Y = np.array([2, 3, 4, 5, 6, 7, 8])
w_gradient = 0
b_gradient = 0
w, b = 0.5, 0.5
learning_rate = .01
loss = 0
EPOCHS = 2000
N = len(Y)

for i in range(EPOCHS):
    # Predict
    Y_pred = (w * X) + b
    # Loss
    loss = np.square(Y_pred - Y).sum() / (2.0 * N)
    if i % 100 == 0:
        print(loss)
    # Backprop
    grad_y_pred = (2 / N) * (Y_pred - Y)
    w_gradient = (grad_y_pred * X).sum()
    b_gradient = (grad_y_pred).sum()
    # Optimize
    w -= (w_gradient * learning_rate)
    b -= (b_gradient * learning_rate)

print("\n\n")
print("LEARNED:")
print(w, b)
print("\n")
print("TEST:")
print(np.round(b + w * (-2)))
print(np.round(b + w * 0))
print(np.round(b + w * 1))
print(np.round(b + w * 6))
print(np.round(b + w * 3000))
# Expected: 30001, but gives 30002.
# Training for 3000 epochs will give the expected result.
# For a simple demo with little training data and a small input range, 2000 is enough.
print(np.round(b + w * 30000))
Output
LEARNED:
1.0000349103409163 0.9998271260509328
TEST:
-1.0
1.0
2.0
7.0
3001.0
30002.0
Loop Version
import numpy as np

X = np.array([1, 2, 3, 4, 5, 6, 7])
Y = np.array([2, 3, 4, 5, 6, 7, 8])
w_gradient = 0
b_gradient = 0
w, b = 0.5, 0.5
learning_rate = .01
loss = 0
EPOCHS = 2000
N = len(Y)

for i in range(EPOCHS):
    w_gradient = 0
    b_gradient = 0
    loss = 0
    for j in range(N):
        # Predict
        Y_pred = (w * X[j]) + b
        # Loss
        loss += np.square(Y_pred - Y[j]) / (2.0 * N)
        # Backprop
        grad_y_pred = (2 / N) * (Y_pred - Y[j])
        w_gradient += (grad_y_pred * X[j])
        b_gradient += (grad_y_pred)
    # Optimize
    w -= (w_gradient * learning_rate)
    b -= (b_gradient * learning_rate)
    # Print loss
    if i % 100 == 0:
        print(loss)

print("\n\n")
print("LEARNED:")
print(w, b)
print("\n")
print("TEST:")
print(np.round(b + w * (-2)))
print(np.round(b + w * 0))
print(np.round(b + w * 1))
print(np.round(b + w * 6))
print(np.round(b + w * 3000))
# Expected: 30001, but gives 30002.
# Training for 3000 epochs will give the expected result.
# For a simple demo with little training data and a small input range, 2000 is enough.
print(np.round(b + w * 30000))
Output
LEARNED:
1.0000349103409163 0.9998271260509328
TEST:
-1.0
1.0
2.0
7.0
3001.0
30002.0
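For completeness (an added cross-check, not part of the original answer): because the model is linear in w and b, the exact least-squares solution can be computed directly with np.linalg.lstsq, and the gradient-descent runs above should converge toward it:
import numpy as np

X = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
Y = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)

# Design matrix [X, 1] for the model Y = w*X + b.
A = np.column_stack([X, np.ones_like(X)])
(w, b), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(w, b)           # 1.0 1.0 (up to floating point), since Y = X + 1 exactly
print(b + w * 30000)  # 30001.0, the value the 2000-epoch run slightly misses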

How to solve complex matrix differential equations using solve_ivp?

I want to solve a complex matrix differential equation y' = Ay.
import numpy as np
from scipy.integrate import solve_ivp

def deriv(y, t, A):
    return np.dot(A, y)

A = np.array([[-0.25 + 0.14j, 0, 0.33 + 0.44j],
              [0.25 + 0.58j, -0.2 + 0.14j, 0],
              [0, 0.2 + 0.4j, -0.1 + 0.97j]])
time = np.linspace(0, 25, 101)
y0 = np.array([[2, 3, 4], [5, 6, 7], [9, 34, 78]])
result = solve_ivp(deriv, y0, time, args=(A,))
There already seem to be answers for the odeint case:
https://stackoverflow.com/a/45970853/7952027
https://stackoverflow.com/a/26320130/7952027
https://stackoverflow.com/a/26747232/7952027
https://stackoverflow.com/a/26582411/7952027
I am curious whether it can be done with any of the newer SciPy APIs?
I have updated your snippet; have a look below. You should check the docs carefully as, I believe, everything is well detailed there.
import numpy as np
from scipy.integrate import solve_ivp

def deriv_vec(t, y):
    return A @ y

def deriv_mat(t, y):
    return (A @ y.reshape(3, 3)).flatten()

A = np.array([[-0.25 + 0.14j, 0, 0.33 + 0.44j],
              [0.25 + 0.58j, -0.2 + 0.14j, 0],
              [0, 0.2 + 0.4j, -0.1 + 0.97j]])

result = solve_ivp(deriv_vec, [0, 25], np.array([10 + 0j, 20 + 0j, 30 + 0j]),
                   t_eval=np.linspace(0, 25, 101))
print(result.y[:, 0])
# [10.+0.j 20.+0.j 30.+0.j]
print(result.y[:, -1])
# [18.46+45.25j 10.01+36.23j -4.98+80.07j]

y0 = np.array([[2 + 0j, 3 + 0j, 4 + 0j],
               [5 + 0j, 6 + 0j, 7 + 0j],
               [9 + 0j, 34 + 0j, 78 + 0j]])
result = solve_ivp(deriv_mat, [0, 25], y0.flatten(),
                   t_eval=np.linspace(0, 25, 101))
print(result.y[:, 0].reshape(3, 3))
# [[ 2.+0.j  3.+0.j  4.+0.j]
#  [ 5.+0.j  6.+0.j  7.+0.j]
#  [ 9.+0.j 34.+0.j 78.+0.j]]
print(result.y[:, -1].reshape(3, 3))
# [[  5.67+12.07j  17.28+31.03j   37.83+63.25j]
#  [  3.39+11.82j  21.32+44.88j  53.17+103.80j]
#  [ -2.26+22.19j -15.12+70.19j -38.34+153.29j]]
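Because the system is linear, the integrator can be cross-checked against the exact solution y(t) = exp(At) y(0) (a verification sketch added here, reusing A from the snippet above):
from scipy.linalg import expm

# Exact solution of y' = A y at t = 25, for the vector case.
y0_vec = np.array([10 + 0j, 20 + 0j, 30 + 0j])
print(np.round(expm(25 * A) @ y0_vec, 2))
# Should agree with the vector-case result.y[:, -1] above, up to solver tolerance.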

How to implement cubic spline interpolation in 3 dimensions?

I am trying to implement cubic spline interpolation in 3 dimensions; however, I am unsure how to modify the code I have written to handle the z-axis. The purpose of this code is to calculate a trajectory between a start point and an end point that passes through several intermediate points. Any assistance would be greatly appreciated!
import sys
import numpy as np
import matplotlib.pyplot as plt

X = np.array([1, 5, 8, 12, 16, 20, 25, 30, 38], dtype=float)
Y = np.array([20, 14, 10, 7, 3, 8, 17, 5, 3], dtype=float)
N = len(X)  # number of knots
num_points = 1000

H_x = np.diff(X)
H_y = np.diff(Y)
H_n = N - 1
Alfa = 1 / H_x[1 : H_n - 1]
Gamma = 1 / H_x[1 : H_n - 1]
Beta = 2 * (1 / H_x[:H_n - 1] + 1 / H_x[1:])
dF = H_y / H_x
Delta = 3 * (dF[1:] / H_x[1:] + dF[:H_n - 1] / H_x[:H_n - 1])

TDM = np.diag(Alfa, k=-1) + np.diag(Beta, 0) + np.diag(Gamma, +1)
B = np.linalg.solve(TDM, Delta)
B = np.hstack([0, B, 0])
C = (3 * dF - B[1:] - 2 * B[:H_n]) / H_x
D = (B[:H_n] + B[1:] - 2 * dF) / (H_x ** 2)

x_step = (X[N - 1] - X[0]) / num_points
x_points = []
x_base = X[0]
for i in range(num_points):
    x_points.append(x_base + x_step * i)

y_points = []
for x_point in x_points:
    for i in range(N - 1):
        if (x_point >= X[i]) and (x_point <= X[i + 1]):
            y_point = Y[i] + B[i] * (x_point - X[i]) + C[i] * ((x_point - X[i]) ** 2) + D[i] * ((x_point - X[i]) ** 3)
            y_points.append(y_point)

spline, nodes = plt.plot(x_points, y_points, "-g", X, Y, "o")
plt.axis([X[0] - 3, X[N - 1] + 3, np.min(y_points) - 3, np.max(y_points) + 3])
plt.title(u'P(x)')
plt.xlabel(u'X')
plt.ylabel(u'Y')
plt.grid()
plt.savefig('cubic_spline.png', format='png')
plt.show()
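One way to extend this to three dimensions (a sketch, not a modification of the hand-rolled solver above): treat the trajectory parametrically. Parameterize the waypoints by cumulative chord length and fit one cubic spline per coordinate with scipy.interpolate.CubicSpline. The Z array below is hypothetical, added only to illustrate the third axis:
import numpy as np
from scipy.interpolate import CubicSpline

X = np.array([1, 5, 8, 12, 16, 20, 25, 30, 38], dtype=float)
Y = np.array([20, 14, 10, 7, 3, 8, 17, 5, 3], dtype=float)
Z = np.array([0, 2, 5, 7, 9, 10, 12, 13, 15], dtype=float)  # hypothetical z-waypoints

# Cumulative chord length: a monotonically increasing parameter along the path.
d = np.sqrt(np.diff(X)**2 + np.diff(Y)**2 + np.diff(Z)**2)
s = np.concatenate([[0.0], np.cumsum(d)])

# One cubic spline per coordinate, all sharing the parameter s.
sx, sy, sz = CubicSpline(s, X), CubicSpline(s, Y), CubicSpline(s, Z)

s_fine = np.linspace(0, s[-1], 1000)
trajectory = np.column_stack([sx(s_fine), sy(s_fine), sz(s_fine)])
print(trajectory.shape)  # (1000, 3): x, y, z points along the smooth path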

Explain why the error occurs: ValueError: all the input array dimensions except for the concatenation axis must match exactly

I'm optimizing a function in Python using scipy.optimize.minimize. I wrote the code following the example in the documentation:
import numpy as np
from scipy.optimize import minimize

# Bounds and constraints
bnds = ((10e-6, 2000), (10e-6, 16000), (10e-6, 120), (10e-6, 5000), (10e-6, 2000),
        (85, 93), (90, 95), (3, 12), (1.2, 4), (145, 162))
eq_cons = {'type': 'eq',
           'fun': lambda x: np.array([1.22 * x[3] - x[0] - x[4],
                                      (98000 * x[2]) / x[3] * x[8] + 1000 * x[2] - x[5],
                                      ((x[1] + x[4]) / x[0]) - x[7]]),
           'jac': lambda x: np.array([[1.22, -1, -1],
                                      [98000, 1, 1, 1000, -1],
                                      [1, 1, 1, -1]])}
ineq_cons = {'type': 'ineq',
             'fun': lambda x: np.array([x[0] * (1.12 + 0.13167 * x[7] - 0.0067 * x[7] ** 2) - 0.99 * x[3],
                                        -(x[0] * (1.12 + 0.13167 * x[7] - 0.0067 * x[7] ** 2) + (100 / 99) * x[3]),
                                        (86.35 + 1.098 * x[7] - 0.038 * x[7] ** 2 + 0.325 * (x[5] - 89)) - 0.99 * x[6],
                                        -(86.35 + 1.098 * x[7] - 0.038 * x[7] ** 2 + 0.325 * (x[5] - 89)) + (100 / 99) * x[6],
                                        (35.82 - 0.222 * x[9]) - 0.9 * x[8],
                                        -(35.82 - 0.222 * x[9]) + (10 / 9) * x[8],
                                        (-133 + 3 * x[6]) - 0.99 * x[9],
                                        -(-133 + 3 * x[6]) + (100 / 99) * x[9]]),
             'jac': lambda x: np.array([[1, 0, 0.13167, -0.0134 * x[7], -0.99],
                                        [-1, 0, -0.13167, 0.0134 * x[7], -(100 / 99)],
                                        [0, 1.098, -0.076 * x[7], 0.325, -0.99],
                                        [0, -1.098, 0.076 * x[7], -0.325, -(100 / 99)],
                                        [0, -0.222, -0.9],
                                        [0, 0.222, -(10 / 9)],
                                        [0, 3, -0.99],
                                        [0, -3, -(100 / 99)]])}

# Initial values of the variables
x0 = np.array([1745, 12000, 110, 3048, 1974, 89.2, 92.8, 8, 3.6, 145])

def f(x):
    return 0.063 * x[3] * x[6] - 5.04 * x[0] - 0.035 * x[1] - 10 * x[2] - 3.36 * x[4]

res = minimize(f, x0, method='SLSQP', constraints=[ineq_cons, eq_cons],
               bounds=bnds, options={'ftol': 1e-9, 'disp': True})
print(res.x)
Here is the traceback:
Traceback (most recent call last):
  File "C:\Users\user\Desktop\Python\optimize.py", line 41, in <module>
    res = minimize(f, x0, method='SLSQP', constraints=[ineq_cons, eq_cons], bounds=bnds, options={'ftol': 1e-9, 'disp': True})
  File "C:\Anaconda3\lib\site-packages\scipy\optimize\_minimize.py", line 611, in minimize
    constraints, callback=callback, **options)
  File "C:\Anaconda3\lib\site-packages\scipy\optimize\slsqp.py", line 422, in _minimize_slsqp
    a = vstack((a_eq, a_ieq))
  File "C:\Anaconda3\lib\site-packages\numpy\core\shape_base.py", line 234, in vstack
    return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
ValueError: all the input array dimensions except for the concatenation axis must match exactly
When I do not use jac in eq_cons and ineq_cons, the computation runs, but not correctly. I get this output:
C:\Anaconda3\python.exe C:\Users\user\Desktop\Python\optimize.py
Positive directional derivative for linesearch (Exit mode 8)
Current function value: 0.00610273865507208
Iterations: 82
Function evaluations: 1050
Gradient evaluations: 78
fun: 0.00610273865507208
jac: array([-5.04000000e+00, -3.49999999e-02, -1.00000000e+01, 5.98500000e+00,
-3.36000000e+00, 0.00000000e+00, 4.70466912e-04, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00])
message: 'Positive directional derivative for linesearch'
nfev: 1050
nit: 82
njev: 78
status: 8
success: False
x: array([3.91761754e-03, 3.69776009e-02, 1.00000000e-05, 7.46772900e-03,
5.19422947e-03, 9.30000000e+01, 9.50000000e+01, 1.10436932e+01,
1.56130612e+00, 1.53536361e+02])
What is wrong with the dimensions? I cannot understand which arrays' dimensions do not match, or how to fix this.
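The short explanation (with a sketch of the fix added here): SLSQP stacks the equality and inequality constraint Jacobians with vstack, as the traceback shows, so each 'jac' callable must return a 2-D array with one row per constraint and one column per variable: shape (3, 10) for eq_cons and (8, 10) for ineq_cons. The rows in the question list only the nonzero derivatives, so they have different lengths and the stack fails. A correctly shaped equality Jacobian would look like the following (eq_jac is a hypothetical name, and only the first constraint's row, for 1.22*x[3] - x[0] - x[4], is filled in):
import numpy as np

def eq_jac(x):
    # One row per equality constraint, one column per variable x[0]..x[9].
    J = np.zeros((3, 10))
    # Row 0: derivatives of 1.22*x[3] - x[0] - x[4]
    J[0, 0] = -1.0   # d/dx[0]
    J[0, 3] = 1.22   # d/dx[3]
    J[0, 4] = -1.0   # d/dx[4]
    # Rows 1 and 2 must be filled the same way; note the second equality
    # constraint is nonlinear, so its entries depend on x.
    return J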
