I'm working through a quiz where you have to implement some machine learning concepts from scratch. One question asks to implement linear interpolation in Python without any external libraries besides numpy. The question states:
Q: Given the input data of points [(1, 2), (3, 4), (5, 6), (8, 8), (7, -1)], fit a line of the form Y = A * X + B using gradient descent. Provide the implementation of your algorithm in the function provided, using no external libraries except for numpy. Then evaluate the linear regression fit on the provided point.
def linear_interpolate_point(data, x):
    """
    Fit a line to the provided data and then evaluate on the provided test point.
    :param data: Collection of points to fit provided as a list of tuples
    :param x: Point to interpolate using your fit line
    :return: The output of your point on the interpolated line
    """
    # fill in function below
I've tried a few different concepts but seem to be stumped on what the question is asking.
Any help is much appreciated.
A linear regression can be formalized as follows: f(x) = x*m + b, where b is a bias weight and m is a weight that is multiplied with the input. What gradient descent does is optimize m and b based on your data. So let's set up an initial model with your data.
import numpy as np
data = np.array([(1, 2), (3, 4), (5, 6), (8, 8), (7, -1)])
x = data[:, 0]
y = data[:, 1]
m = 0
b = 0
With the initial model we can run several optimization steps of gradient descent to optimize the weights. Gradient descent basically just computes in which direction to change the weights b and m. We also specify how many iterations we allow for optimization (max_iterations) and how far we move in the direction of the gradient (eta).
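For reference, the gradients used below come from the mean squared error loss L = (1/n) * sum((y_i - (m*x_i + b))^2). Differentiating with respect to each weight gives dL/dm = (-2/n) * sum(x_i * (y_i - y_hat_i)) and dL/db = (-2/n) * sum(y_i - y_hat_i), which is exactly what gradient_m and gradient_b compute in the function.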
def gradient_descent(x, y, m, b, eta, max_iterations):
    n = x.shape[0]
    for iteration in range(max_iterations):
        y_hat = x * m + b
        error = y - y_hat
        # Compute gradient for m and b
        gradient_m = (-2/n) * x.dot(error)
        gradient_b = (-2/n) * sum(error)
        # Update weights
        m = m - eta*gradient_m
        b = b - eta*gradient_b
    return m, b
m, b = gradient_descent(x, y, m, b, 1e-2, 1000)
Now plotting the data and the model with optimized weights.
import matplotlib.pyplot as plt  # needed for the plot below

plt.scatter(x, y)
plt.plot(np.linspace(1, 8, 100), np.linspace(1, 8, 100) * m + b)
plt.show()
For a complete explanation of how gradient descent works, a blog post is probably a better resource than Stack Overflow, e.g. Gradient Descent.
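To tie this back to the quiz's function signature, here is a minimal sketch of how the pieces above could be assembled into the required function (it reuses the gradient_descent helper and the hyperparameters from this answer; treat the exact values as a starting point rather than the official solution):
def linear_interpolate_point(data, x):
    """Fit a line to data with gradient descent, then evaluate it at x."""
    points = np.array(data, dtype=float)
    xs, ys = points[:, 0], points[:, 1]
    m, b = gradient_descent(xs, ys, m=0.0, b=0.0, eta=1e-2, max_iterations=1000)
    return m * x + b

# Example usage with the quiz data
print(linear_interpolate_point([(1, 2), (3, 4), (5, 6), (8, 8), (7, -1)], 4))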
I want to solve a 1D heat conduction problem using neural networks in PyTorch. The PDE representing the heat conduction is as follows:
du/dt = k * d2u/dx2
where k is a constant, u represents temperature and x is the spatial coordinate. I also include a boundary condition of 0 temperature at x=0 and an initial condition at t=0. I am quite new to the field of PINNs and can only solve normal ODEs with them. The following code tries to solve a very simple ODE, du/dx = x, with the boundary condition u=0 at x=0. The answer is simply u = x^2/2. The following code is a simple PINN that solves this ODE:
import torch
import torch.nn as nn
import numpy as np
# N is a Neural Network with three layers
N = nn.Sequential(nn.Linear(1, 50), nn.Sigmoid(), nn.Linear(50,1, bias=False))
BC = 0. # Boundary condition
g_f = lambda x: BC + x * N(x) # a general function satisfying the BC
f = lambda x: x  # ODE to solve: du/dx = x  ->  u = x^2/2
# The loss function
def loss(x):
    x.requires_grad = True
    outputs = g_f(x)
    grdnt = torch.autograd.grad(outputs, x, grad_outputs=torch.ones_like(outputs),
                                create_graph=True)[0]
    return torch.mean((grdnt - f(x)) ** 2)
optimizer = torch.optim.LBFGS(N.parameters())
x = torch.Tensor(np.linspace(-4, 4, 100)[:, None])
# Run the optimizer
def closure():
    optimizer.zero_grad()
    l = loss(x)
    l.backward()
    return l

for i in range(10):
    optimizer.step(closure)
x_test = np.linspace(-4, 4, 100)[:, None]
with torch.no_grad():
    y_test = g_f(torch.Tensor(x_test)).numpy()
How to replicate the code with 1D heat conduction?
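For what it's worth, here is one possible way to extend the same pattern to the heat equation. This is only a sketch under assumed choices: the domain is taken as x in [0, 1] and t in [0, 1], the boundary condition u(0, t) = 0 from the question is kept, an extra boundary u(1, t) = 0 and an initial condition u(x, 0) = sin(pi*x) are assumed, and the boundary/initial conditions are enforced as penalty terms in the loss rather than through a trial function like g_f above, so treat it as a starting point rather than a verified solution:
import torch
import torch.nn as nn
import numpy as np

k = 1.0  # assumed diffusivity
# Network takes (x, t) and returns u(x, t)
N = nn.Sequential(nn.Linear(2, 50), nn.Tanh(), nn.Linear(50, 50), nn.Tanh(), nn.Linear(50, 1))

def pde_residual(x, t):
    # Residual of du/dt - k * d2u/dx2 at the collocation points
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = N(torch.cat([x, t], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True)[0]
    return u_t - k * u_xx

# Random collocation points in the interior, on the two boundaries, and at t = 0
x_int, t_int = torch.rand(500, 1), torch.rand(500, 1)
t_bc = torch.rand(100, 1)
x_ic = torch.rand(100, 1)

optimizer = torch.optim.Adam(N.parameters(), lr=1e-3)
for i in range(5000):
    optimizer.zero_grad()
    res = pde_residual(x_int, t_int)
    u_left = N(torch.cat([torch.zeros_like(t_bc), t_bc], dim=1))   # u(0, t) should be 0
    u_right = N(torch.cat([torch.ones_like(t_bc), t_bc], dim=1))   # u(1, t) should be 0
    u_init = N(torch.cat([x_ic, torch.zeros_like(x_ic)], dim=1))   # u(x, 0) should match the assumed IC
    loss = (res ** 2).mean() + (u_left ** 2).mean() + (u_right ** 2).mean() \
           + ((u_init - torch.sin(np.pi * x_ic)) ** 2).mean()
    loss.backward()
    optimizer.step()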
I have to do Logistic regression using batch gradient descent.
import numpy as np

X = np.asarray([
    [0.50],[0.75],[1.00],[1.25],[1.50],[1.75],[1.75],
    [2.00],[2.25],[2.50],[2.75],[3.00],[3.25],[3.50],
    [4.00],[4.25],[4.50],[4.75],[5.00],[5.50]])
y = np.asarray([0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,1,1,1,1,1])
m = len(X)

def sigmoid(a):
    return 1.0 / (1 + np.exp(-a))

def gradient_Descent(theta, alpha, X, y):
    for i in range(0, m):
        cost = ((-y) * np.log(sigmoid(X[i]))) - ((1 - y) * np.log(1 - sigmoid(X[i])))
        grad = theta - alpha * (1.0/m) * (np.dot(cost, X[i]))
        theta = theta - alpha * grad

gradient_Descent(0.1, 0.005, X, y)
The way I have to do it is like this but I can't seem to understand how to make it work.
It looks like you have some stuff mixed up in here. It's critical when doing this that you keep track of the shapes of your vectors and make sure you're getting sensible results. For example, you are calculating cost with:
cost = ((-y) * np.log(sigmoid(X[i]))) - ((1 - y) * np.log(1 - sigmoid(X[i])))
In your case y is a vector with 20 items and X[i] is a single value. This makes your cost calculation a 20-item vector, which doesn't make sense. Your cost should be a single value. (You're also calculating this cost a bunch of times for no reason in your gradient descent function.)
Also, if you want this to be able to fit your data you need to add a bias term to X. So let's start there.
X = np.asarray([
    [0.50],[0.75],[1.00],[1.25],[1.50],[1.75],[1.75],
    [2.00],[2.25],[2.50],[2.75],[3.00],[3.25],[3.50],
    [4.00],[4.25],[4.50],[4.75],[5.00],[5.50]])
ones = np.ones(X.shape)
X = np.hstack([ones, X])
# X.shape is now (20, 2)
Theta will now need 2 values for each X. So initialize that and Y:
Y = np.array([0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,1,1,1,1,1]).reshape([-1, 1])
# reshape Y so it's column vector so matrix multiplication is easier
Theta = np.array([[0], [0]])
Your sigmoid function is good. Let's also make a vectorized cost function:
def sigmoid(a):
    return 1.0 / (1 + np.exp(-a))

def cost(x, y, theta):
    m = x.shape[0]
    h = sigmoid(np.matmul(x, theta))
    cost = (np.matmul(-y.T, np.log(h)) - np.matmul((1 - y.T), np.log(1 - h))) / m
    return cost
The cost function works because Theta has a shape of (2, 1) and X has a shape of (20, 2), so matmul(X, Theta) will be shaped (20, 1). We then matrix-multiply by the transpose of Y (y.T has shape (1, 20)), which results in a single value: our cost for a given value of Theta.
We can then write a function that performs a single step of batch gradient descent:
def gradient_Descent(theta, alpha, x, y):
    m = x.shape[0]
    h = sigmoid(np.matmul(x, theta))
    grad = np.matmul(x.T, (h - y)) / m
    theta = theta - alpha * grad
    return theta
Notice np.matmul(x.T, (h - y)) is multiplying shapes (2, 20) and (20, 1), which results in a shape of (2, 1), the same shape as Theta, which is what you want from your gradient. This allows you to multiply it by your learning rate and subtract it from the current Theta, which is what gradient descent is supposed to do.
So now you just write a loop for a number of iterations and update Theta until it looks like it converges:
n_iterations = 500
learning_rate = 0.5

for i in range(n_iterations):
    Theta = gradient_Descent(Theta, learning_rate, X, Y)
    if i % 50 == 0:
        print(cost(X, Y, Theta))
This will print the cost every 50 iterations resulting in a steadily decreasing cost, which is what you hope for:
[[ 0.6410409]]
[[ 0.44766253]]
[[ 0.41593581]]
[[ 0.40697167]]
[[ 0.40377785]]
[[ 0.4024982]]
[[ 0.40195]]
[[ 0.40170533]]
[[ 0.40159325]]
[[ 0.40154101]]
You can try different initial values of Theta and you will see it always converges to the same thing.
Now you can use your newly found values of Theta to make predictions:
h = sigmoid(np.matmul(X, Theta))
print((h > .5).astype(int) )
This prints what you would expect for a linear fit to your data:
[[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]
[1]]
I'm working through my Matlab code for the Andrew Ng Coursera course and turning it into Python. I am working on non-regularized logistic regression, and after writing my gradient and cost functions I needed something similar to fminunc; after some googling, I found a couple of options. They are both returning the same results, but they do not match what is in Andrew Ng's expected results code. Others seem to be getting this to work correctly, but I'm wondering why my specific code does not return the desired result when using scipy.optimize functions, even though the cost and gradient pieces earlier in the code do.
The data I'm using can be found at the link below;
ex2data1
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as op
#Machine Learning Online Class - Exercise 2: Logistic Regression
#Load Data
#The first two columns contains the exam scores and the third column contains the label.
data = pd.read_csv('ex2data1.txt', header = None)
X = np.array(data.iloc[:, 0:2]) #100 x 2
y = np.array(data.iloc[:,2]) #100 x 1
y.shape = (len(y), 1)
#Creating sub-dataframes for plotting
pos_plot = data[data[2] == 1]
neg_plot = data[data[2] == 0]
#==================== Part 1: Plotting ====================
#We start the exercise by first plotting the data to understand the
#the problem we are working with.
print('Plotting data with + indicating (y = 1) examples and o indicating (y = 0) examples.')
plt.plot(pos_plot[0], pos_plot[1], "+", label = "Admitted")
plt.plot(neg_plot[0], neg_plot[1], "o", label = "Not Admitted")
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
plt.show()
def sigmoid(z):
    '''
    SIGMOID Compute sigmoid function
    g = SIGMOID(z) computes the sigmoid of z.
    Instructions: Compute the sigmoid of each value of z (z can be a matrix,
    vector or scalar).
    '''
    g = 1 / (1 + np.exp(-z))
    return g
def costFunction(theta, X, y):
    '''
    COSTFUNCTION Compute cost and gradient for logistic regression
    J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
    parameter for logistic regression and the gradient of the cost
    w.r.t. to the parameters.
    '''
    m = len(y) #number of training examples
    h = sigmoid(X.dot(theta)) #logistic regression hypothesis
    J = (1/m) * np.sum((-y*np.log(h)) - ((1-y)*np.log(1-h)))
    #h is 100x1, y is 100x1; these end up as two vectors we subtract from each other
    #then we sum the values by rows
    #cost function for logistic regression
    return J
def gradient(theta, X, y):
    m = len(y)
    grad = np.zeros((theta.shape))
    h = sigmoid(X.dot(theta))
    for i in range(len(theta)): #number of rows in theta
        XT = X[:,i]
        XT.shape = (len(X),1)
        grad[i] = (1/m) * np.sum((h-y)*XT) #updating each row of the gradient
    return grad
#============ Part 2: Compute Cost and Gradient ============
#In this part of the exercise, you will implement the cost and gradient
#for logistic regression. You need to complete the code in costFunction.m
#Add intercept term to x and X_test
Bias = np.ones((len(X), 1))
X = np.column_stack((Bias, X))
#Initialize fitting parameters
initial_theta = np.zeros((len(X[0]), 1))
#Compute and display initial cost and gradient
(cost, grad) = costFunction(initial_theta, X, y), gradient(initial_theta, X, y)
print('Cost at initial theta (zeros): %f' % cost)
print('Expected cost (approx): 0.693\n')
print('Gradient at initial theta (zeros):')
print(grad)
print('Expected gradients (approx):\n -0.1000\n -12.0092\n -11.2628')
#Compute and display cost and gradient with non-zero theta
test_theta = np.array([[-24], [0.2], [0.2]]);
(cost, grad) = costFunction(test_theta, X, y), gradient(test_theta, X, y)
print('\nCost at test theta: %f' % cost)
print('Expected cost (approx): 0.218\n')
print('Gradient at test theta:')
print(grad)
print('Expected gradients (approx):\n 0.043\n 2.566\n 2.647\n')
result = op.fmin_tnc(func = costFunction, x0 = initial_theta, fprime = gradient, args = (X,y))
result[1]
Result = op.minimize(fun = costFunction,
                     x0 = initial_theta,
                     args = (X, y),
                     method = 'TNC',
                     jac = gradient, options={'gtol': 1e-3, 'disp': True, 'maxiter': 1000})
theta = Result.x
theta
test = np.array([[1, 45, 85]])
prob = sigmoid(test.dot(theta))
print('For a student with scores 45 and 85, we predict an admission probability of %f,' % prob)
print('Expected value: 0.775 +/- 0.002\n')
This was a very difficult problem to debug, and illustrates a poorly documented aspect of the scipy.optimize interface. The documentation vaguely indicates that theta will be passed around as a vector:
Minimization of scalar function of one or more variables.
In general, the optimization problems are of the form:
minimize f(x) subject to
g_i(x) >= 0, i = 1,...,m
h_j(x) = 0, j = 1,...,p
where x is a vector of one or more variables.
What's important is that they really mean vector in the most primitive sense, a 1-dimensional array. So you have to expect that whenever theta is passed into one of your callbacks, it will be passed in as a 1-d array. But in numpy, 1-d arrays sometimes behave differently from 2-d row arrays (and, obviously, from 2-d column arrays).
I don't know exactly why it's causing a problem in your case, but it's easily fixed regardless. You just have to add the following at the top of both your cost function and your gradient function:
theta = theta.reshape(-1, 1)
This guarantees that theta will be a 2-d column array, as expected. Once you've done this, the results are correct.
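For example, here is how that one-line fix slots into the two callbacks from the question (everything else is copied unchanged from the original code):
def costFunction(theta, X, y):
    theta = theta.reshape(-1, 1)  # scipy passes theta in as a flat (n,) array
    m = len(y)
    h = sigmoid(X.dot(theta))
    J = (1/m) * np.sum((-y*np.log(h)) - ((1-y)*np.log(1-h)))
    return J

def gradient(theta, X, y):
    theta = theta.reshape(-1, 1)  # same fix here
    m = len(y)
    grad = np.zeros((theta.shape))
    h = sigmoid(X.dot(theta))
    for i in range(len(theta)): #number of rows in theta
        XT = X[:,i]
        XT.shape = (len(X),1)
        grad[i] = (1/m) * np.sum((h-y)*XT)
    return grad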
I have had similar issues with Scipy dealing with the same problem as you. As senderle points out the interface is not the easiest to deal with, especially combined with the numpy array interface... Here is my implementation which works as expected.
Defining the cost and gradient functions
Note that initial_theta is passed as a simple array of shape (3,) and converted to a column vector of shape (3,1) within the function. The gradient function then returns the grad.ravel() which has shape (3,) again. This is important as doing otherwise caused an error message with various optimization methods in Scipy.optimize.
Note that different methods have different behaviours but returning .ravel() seems to fix most issues...
import pandas as pd
import numpy as np
import scipy.optimize as opt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def CostFunc(theta, X, y):
    #Initializing variables
    m = len(y)
    J = 0
    grad = np.zeros(theta.shape)
    #Vectorized computations
    z = X @ theta
    h = sigmoid(z)
    J = (1/m) * ((-y.T @ np.log(h)) - (1 - y).T @ np.log(1 - h))
    return J

def Gradient(theta, X, y):
    #Initializing variables
    m = len(y)
    theta = theta[:,np.newaxis]
    grad = np.zeros(theta.shape)
    #Vectorized computations
    z = X @ theta
    h = sigmoid(z)
    grad = (1/m) * (X.T @ (h - y))
    return grad.ravel() #<-- This is the trick
Initializing variables and parameters
Note that initial_theta.shape returns (3,)
X = data1.iloc[:,0:2].values
m,n = X.shape
X = np.concatenate((np.ones(m)[:,np.newaxis],X),1)
y = data1.iloc[:,-1].values[:,np.newaxis]
initial_theta = np.zeros((n+1))
Calling Scipy.optimize
model = opt.minimize(fun = CostFunc, x0 = initial_theta, args = (X, y), method = 'TNC', jac = Gradient)
Any comments from more knowledgeable people are welcome; this Scipy interface is a mystery to me. Thanks!
I'm trying to implement the gradient descent algorithm from scratch on a toy problem. My code always returns a vector of NaN's:
from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(45)
x = np.linspace(0, 1000, num=1000)
y = 3*x + 2 + np.random.randn(len(x))
# sklearn output - This works (returns intercept = 1.6, coef = 3)
lm = LinearRegression()
lm.fit(x.reshape(-1, 1), y.reshape(-1, 1))
print("Intercept = {:.2f}, Coef = {:.2f}".format(lm.coef_[0][0], lm.intercept_[0]))
# BGD output
theta = np.array((0, 0)).reshape(-1, 1)
X = np.hstack([np.ones_like(x.reshape(-1, 1)), x.reshape(-1, 1)]) # [1, x]
Y = y.reshape(-1, 1) # Column vector
alpha = 0.05
for i in range(100):
    # Update: theta <- theta - alpha * X.T (X theta - Y)
    h = np.dot(X, theta)  # Hypothesis
    loss = h - Y
    theta = theta - alpha*np.dot(X.T, loss)
theta
The sklearn part runs fine, so I must be doing something wrong in the for loop. I've tried various different alpha values and none of them converge.
The problem is theta keeps getting bigger and bigger throughout the loop, and eventually becomes too big for python to store.
Here's a contour plot of the cost function:
J = np.dot((np.dot(X, theta) - y).T, (np.dot(X, theta) - y))
plt.contour(J)
Clearly there's no minimum here. Where have I gone wrong?
Thanks
In the theta update, the second term should be divided by the size of the training set. More details here: gradient descent using python and numpy
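Concretely, the update inside the loop would look something like this (a sketch of the suggested fix; the much smaller learning rate is my own addition, since the x values here range up to 1000 and a large step will still blow up even with the 1/m scaling):
m = X.shape[0]   # number of training examples
alpha = 1e-6     # hypothetical, much smaller learning rate for this unscaled data

for i in range(1000):
    h = np.dot(X, theta)           # Hypothesis
    loss = h - Y
    grad = np.dot(X.T, loss) / m   # average the gradient over the training set
    theta = theta - alpha * grad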
I have some points and I am trying to fit a curve to them. I know that there exists a scipy.optimize.curve_fit function, but I do not understand the documentation, i.e. how to use this function.
My points: np.array([(1, 1), (2, 4), (3, 1), (9, 3)])
Can anybody explain how to do that?
I suggest you start with a simple polynomial fit; scipy.optimize.curve_fit tries to fit a function f, which you must define, to a set of points.
This is a simple 3rd-degree polynomial fit using numpy.polyfit and poly1d; the first performs a least-squares polynomial fit and the second evaluates the fitted polynomial at new points:
import numpy as np
import matplotlib.pyplot as plt
points = np.array([(1, 1), (2, 4), (3, 1), (9, 3)])
# get x and y vectors
x = points[:,0]
y = points[:,1]
# calculate polynomial
z = np.polyfit(x, y, 3)
f = np.poly1d(z)
# calculate new x's and y's
x_new = np.linspace(x[0], x[-1], 50)
y_new = f(x_new)
plt.plot(x,y,'o', x_new, y_new)
plt.xlim([x[0]-1, x[-1] + 1 ])
plt.show()
You'll first need to separate your numpy array into two separate arrays containing x and y values.
x = [1, 2, 3, 9]
y = [1, 4, 1, 3]
curve_fit also requires a function that provides the type of fit you would like. For instance, a linear fit would use a function like
def func(x, a, b):
return a*x + b
scipy.optimize.curve_fit(func, x, y) will return a tuple containing two arrays: the first will contain values for a and b that best fit your data, and the second will be the covariance of the optimal fit parameters.
Here's an example for a linear fit with the data you provided.
import numpy as np
from scipy.optimize import curve_fit
x = np.array([1, 2, 3, 9])
y = np.array([1, 4, 1, 3])
def fit_func(x, a, b):
return a*x + b
params = curve_fit(fit_func, x, y)
[a, b] = params[0]
This code will return a = 0.135483870968 and b = 1.74193548387
Here's a plot with your points and the linear fit... which is clearly a bad one, but you can change the fitting function to obtain whatever type of fit you would like.
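For instance, if you wanted a quadratic fit instead, you would only need to swap in a different model function (a hypothetical example, not part of the original answer):
def quad_func(x, a, b, c):
    return a*x**2 + b*x + c

quad_params, quad_cov = curve_fit(quad_func, x, y)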