I am watching some videos for Stanford CS231n: Convolutional Neural Networks for Visual Recognition, but do not quite understand how to calculate the analytical gradient for the softmax loss function using numpy.
From this stackexchange answer, the softmax gradient is calculated as:

\nabla_{w_j} L_i = (p_j - \mathbb{1}\{j = y_i\}) \, x_i, \qquad p_j = \frac{e^{f_j}}{\sum_k e^{f_k}}

The Python implementation for the above is:
num_classes = W.shape[0]
num_train = X.shape[1]
for i in range(num_train):
for j in range(num_classes):
p = np.exp(f_i[j])/sum_i
dW[j, :] += (p-(j == y[i])) * X[:, i]
Could anyone explain how the above snippet works? The detailed implementation of softmax is also included below.
def softmax_loss_naive(W, X, y, reg):
"""
Softmax loss function, naive implementation (with loops)
Inputs:
- W: C x D array of weights
- X: D x N array of data. Data are D-dimensional columns
- y: 1-dimensional array of length N with labels 0...K-1, for K classes
- reg: (float) regularization strength
Returns:
a tuple of:
- loss as single float
- gradient with respect to weights W, an array of same size as W
"""
# Initialize the loss and gradient to zero.
loss = 0.0
dW = np.zeros_like(W)
#############################################################################
# Compute the softmax loss and its gradient using explicit loops. #
# Store the loss in loss and the gradient in dW. If you are not careful #
# here, it is easy to run into numeric instability. Don't forget the #
# regularization! #
#############################################################################
# Get shapes
num_classes = W.shape[0]
num_train = X.shape[1]
for i in range(num_train):
# Compute vector of scores
f_i = W.dot(X[:, i]) # in R^{num_classes}
# Normalization trick to avoid numerical instability, per http://cs231n.github.io/linear-classify/#softmax
log_c = np.max(f_i)
f_i -= log_c
# Compute loss (and add to it, divided later)
# L_i = - f(x_i)_{y_i} + log \sum_j e^{f(x_i)_j}
sum_i = 0.0
for f_i_j in f_i:
sum_i += np.exp(f_i_j)
loss += -f_i[y[i]] + np.log(sum_i)
# Compute gradient
# dw_j = 1/num_train * \sum_i[x_i * (p(y_i = j)-Ind{y_i = j} )]
# Here we are computing the contribution to the inner sum for a given i.
for j in range(num_classes):
p = np.exp(f_i[j])/sum_i
dW[j, :] += (p-(j == y[i])) * X[:, i]
# Compute average
loss /= num_train
dW /= num_train
# Regularization
loss += 0.5 * reg * np.sum(W * W)
dW += reg*W
return loss, dW
Not sure if this helps, but:

\mathbb{1}\{y_i = j\}

is really the indicator function, as described here. This forms the expression (j == y[i]) in the code.

Also, the gradient of the loss with respect to the weights is:

\nabla_{w_j} L_i = (p_j - \mathbb{1}\{y_i = j\}) \, x_i

where

p_j = \frac{e^{f_j}}{\sum_k e^{f_k}}

which is the origin of the X[:, i] in the code.
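If it helps, you can also sanity-check that expression numerically. A minimal gradient-check sketch of my own (the helper name and the random entry sampling are my choices, not from the course code):

import numpy as np

def numerical_grad_check(W, X, y, reg, loss_fn, h=1e-5, trials=5):
    # Compare the analytic dW returned by loss_fn against centered
    # finite differences at a few randomly chosen entries of W.
    _, dW = loss_fn(W, X, y, reg)
    for _ in range(trials):
        ix = tuple(np.random.randint(n) for n in W.shape)
        old = W[ix]
        W[ix] = old + h
        loss_plus, _ = loss_fn(W, X, y, reg)
        W[ix] = old - h
        loss_minus, _ = loss_fn(W, X, y, reg)
        W[ix] = old  # restore the entry
        num_grad = (loss_plus - loss_minus) / (2 * h)
        print("analytic: %f, numerical: %f" % (dW[ix], num_grad))

# e.g.: numerical_grad_check(W, X, y, 0.0, softmax_loss_naive)

If the two columns agree to several decimal places, the (p-(j == y[i])) * X[:, i] term is doing the right thing.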
I know this is late, but here's my answer:

I'm assuming you are familiar with the cs231n Softmax loss function. We know that:

L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)

So, just as we did with the SVM loss function, the gradients are as follows:

\frac{\partial L_i}{\partial f_k} = p_k - \mathbb{1}\{y_i = k\}, \qquad p_k = \frac{e^{f_k}}{\sum_j e^{f_j}}

Hope that helped.
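To connect this to code: the rule above also vectorizes nicely. A sketch of my own (not the assignment's reference solution), keeping the W: C x D, X: D x N conventions from the question:

import numpy as np

def softmax_loss_vectorized(W, X, y, reg):
    # Vectorized sketch of softmax_loss_naive; W is C x D, X is D x N,
    # y holds labels 0..C-1 for each of the N columns of X.
    num_train = X.shape[1]
    f = W.dot(X)                           # C x N class scores
    f -= np.max(f, axis=0, keepdims=True)  # per-column shift for stability
    exp_f = np.exp(f)
    p = exp_f / np.sum(exp_f, axis=0, keepdims=True)  # C x N probabilities
    loss = -np.mean(np.log(p[y, np.arange(num_train)]))
    loss += 0.5 * reg * np.sum(W * W)
    p[y, np.arange(num_train)] -= 1.0      # p - indicator, all examples at once
    dW = p.dot(X.T) / num_train + reg * W  # sums (p - ind) * x_i over examples
    return loss, dW

The line p[y, np.arange(num_train)] -= 1.0 is exactly the (p-(j == y[i])) term from the loop, applied to every example in one shot.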
A supplement to this answer with a small example.

I came across this post and still was not 100% clear on how to arrive at the partial derivatives. For that reason I took another approach to get to the same results; maybe it is helpful to others too.
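The worked example itself did not survive here, so below is a small reconstruction in the same spirit, with numbers of my own choosing: one training point with a 2-dimensional input, three classes, scores f = [1, 2, 3], and true label y = 2.

import numpy as np

# One example, three classes; all numbers here are made up for illustration.
f = np.array([1.0, 2.0, 3.0])    # class scores
y = 2                            # true label
x = np.array([0.5, -1.0])        # a made-up 2-dimensional input

f_shift = f - np.max(f)          # [-2, -1, 0]: stability shift, as in the code
p = np.exp(f_shift) / np.sum(np.exp(f_shift))
print(p)                         # -> approx. [0.0900, 0.2447, 0.6652]

loss = -np.log(p[y])             # -> approx. 0.4076

dscores = p.copy()
dscores[y] -= 1                  # p - indicator -> [0.0900, 0.2447, -0.3348]
dW = np.outer(dscores, x)        # row j is (p_j - 1{j == y}) * x
print(loss, dW, sep="\n")

Note how the per-score gradient is positive for the wrong classes and negative for the correct one, which is exactly the (p-(j == y[i])) factor in the loop.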
I am new to PyTorch and I would like to implement linear regression partly with PyTorch and partly on my own. I want to use squared features for my regression:
import torch
# init
x = torch.tensor([1,2,3,4,5])
y = torch.tensor([[1],[4],[9],[16],[25]])
w = torch.tensor([[0.5], [0.5], [0.5]], requires_grad=True)
iterations = 30
alpha = 0.01
def forward(X):
    # feature transformation [1, x, x^2]
    psi = torch.tensor([[1.0, X[0], X[0]**2]])
    for i in range(1, len(X)):
        psi = torch.cat((psi, torch.tensor([[1.0, X[i], X[i]**2]])), 0)
    return torch.matmul(psi, w)
def loss(y, y_hat):
return ((y-y_hat)**2).mean()
for i in range(iterations):
y_hat = forward(x)
l = loss(y, y_hat)
l.backward()
with torch.no_grad():
w -= alpha * w.grad
w.grad.zero_()
if i%10 == 0:
print(f'Iteration {i}: The weight is:\n{w.detach().numpy()}\nThe loss is:{l}\n')
When I execute my code, the regression doesn't learn the correct features, and the loss keeps increasing. The output is the following:
Iteration 0: The weight is:
[[0.57 ]
[0.81 ]
[1.898]]
The loss is:25.450000762939453
Iteration 10: The weight is:
[[ 5529.5835]
[22452.398 ]
[97326.12 ]]
The loss is:210414632960.0
Iteration 20: The weight is:
[[5.0884394e+08]
[2.0662339e+09]
[8.9567642e+09]]
The loss is:1.7820802835250162e+21
Does somebody know, why my model is not learning?
UPDATE
Is there a reason why it performs so poorly? I thought it's because of the low number of training data. But also with 10 data points, it is not performing well :
You should normalize your data. Also, since you're trying to fit x -> ax² + bx + c, c is essentially the bias. It would be wiser to remove it from the training data (I'm referring to psi here) and use a separate parameter for the bias.
What could be done:
normalize your input data and targets with mean and standard deviation.
separate the parameters into w (a two-component weight tensor) and b (the bias).
you don't need to construct psi on every inference since x is identical.
you can build psi with torch.stack([torch.ones_like(x), x, x**2], 1), but here we won't need the ones, as we've essentially detached the bias from the weight tensor.
Here's how it would look:
x = torch.tensor([1,2,3,4,5]).float()
psi = torch.stack([x, x**2], 1).float()
psi = (psi - psi.mean(0)) / psi.std(0)
y = torch.tensor([[1],[4],[9],[16],[25]]).float()
y = (y - y.mean(0)) / y.std(0)
w = torch.tensor([[0.5], [0.5]], requires_grad=True)
b = torch.tensor([0.5], requires_grad=True)
iterations = 30
alpha = 0.02
def loss(y, y_hat):
return ((y-y_hat)**2).mean()
for i in range(iterations):
y_hat = torch.matmul(psi, w) + b
l = loss(y, y_hat)
l.backward()
with torch.no_grad():
w -= alpha * w.grad
b -= alpha * b.grad
w.grad.zero_()
b.grad.zero_()
if i%10 == 0:
print(f'Iteration {i}: The weight is:\n{w.detach().numpy()}\nThe loss is:{l}\n')
And the results:
Iteration 0: The weight is:
[[0.49954653]
[0.5004535 ]]
The loss is:0.25755801796913147
Iteration 10: The weight is:
[[0.49503425]
[0.5049657 ]]
The loss is:0.07994867861270905
Iteration 20: The weight is:
[[0.49056274]
[0.50943726]]
The loss is:0.028329044580459595
Question
In CS231n's "Computing the Analytic Gradient with Backpropagation", which first implements a Softmax classifier, the gradient from (softmax + log loss) is divided by the batch size (the number of data points used in one cycle of forward cost calculation and backpropagation during training).

Please help me understand why it needs to be divided by the batch size.

The chain rule to get the gradient should be as below. Where should I incorporate the division?

Derivative of the Softmax loss function:

\frac{\partial L_i}{\partial f_k} = p_k - \mathbb{1}\{y_i = k\}, \qquad p_k = \frac{e^{f_k}}{\sum_j e^{f_j}}
Code
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
#Train a Linear Classifier
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
for i in range(200):
# evaluate class scores, [N x K]
scores = np.dot(X, W) + b
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
correct_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
if i % 10 == 0:
print "iteration %d: loss %f" % (i, loss)
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples # <---------------------- Why?
# backpropagate the gradient to the parameters (W,b)
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # regularization gradient
# perform a parameter update
W += -step_size * dW
b += -step_size * db
It's because you are averaging the gradients instead of directly taking the sum of all the gradients. Concretely, data_loss is the sum of the per-example losses divided by num_examples, so the chain rule carries that 1/num_examples factor into dscores.

You could of course skip the division, but it has several advantages. The main reason is that it acts as a sort of regularization (to avoid overfitting): with smaller gradients, the weights cannot grow out of proportion.

This normalization also makes experiments with different batch sizes comparable (how could you compare two runs if the gradient magnitudes depended on the batch size?).

If you do divide the summed gradients by the batch size, it can be useful to work with larger learning rates to make the training faster.

This answer in the Cross Validated community is quite useful.
I later noticed that the dot product in dW = np.dot(X.T, dscores) for the gradient at W is itself a Σ over the num_examples training instances. Since dscores (the softmax output with 1 subtracted at the correct classes) had already been divided by num_examples, I did not see that this division was pre-normalizing the dot-product and sum parts later in the code. Now I understand that dividing by num_examples is required to average the gradient (it may still work without the normalization if the learning rate is tuned, though).

I believe the code below makes this clearer:
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
# backpropagate the gradient to the parameters (W,b)
dW = np.dot(X.T, dscores) / num_examples
db = np.sum(dscores, axis=0, keepdims=True) / num_examples
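To make the equivalence concrete: averaging the gradient gives the same parameter update as summing it and shrinking the learning rate by the batch size. A small sketch with made-up arrays:

import numpy as np

np.random.seed(0)
X = np.random.randn(100, 2)
dscores = np.random.randn(100, 3)  # stand-in for probs minus one-hot targets
step_size = 1e-0

# Averaged gradient with learning rate step_size ...
update_mean = step_size * (np.dot(X.T, dscores) / X.shape[0])

# ... equals the summed gradient with learning rate step_size / batch size.
update_sum = (step_size / X.shape[0]) * np.dot(X.T, dscores)

print(np.allclose(update_mean, update_sum))  # True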
I've been searching for how to perform matrix factorization for the very simple and basic case that I will show, but didn't find anything. I only found complex and long solutions, so I will present what I want to solve:

U x V = A

I would just like to know how to solve this equation in TensorFlow 2, where A is a known sparse matrix, and U and V are two randomly initialized matrices. I would like to find U and V so that their product is approximately equal to A.
For example, having these variables:
# I use this function to build a toy dataset for the sparse matrix
def build_rating_sparse_tensor(ratings):
indices = ratings[['U_num', 'V_num']].values
values = ratings['rating'].values
return tf.SparseTensor(
indices=indices,
values=values,
dense_shape=[ratings.U_num.max()+1, ratings.V_num.max()+1])
# here I create what will be the matrix A
ratings = (pd.DataFrame({'U_num': list(range(0,10_000))*30,
'V_num': list(range(0,60_000))*5,
'rating': np.random.randint(6, size=300_000)})
.sample(1000)
.drop_duplicates(subset=['U_num','V_num'])
.sort_values(['U_num','V_num'], ascending=[1,1]))
# Variables
A = build_rating_sparse_tensor(ratings)
# embeddings (the latent dimension) and init_stddev are assumed to be defined
U = tf.Variable(tf.random.normal(
    [A.shape[0], embeddings], stddev=init_stddev))
# this matrix would be transposed in the equation
V = tf.Variable(tf.random.normal(
    [A.shape[1], embeddings], stddev=init_stddev))
# loss function
def sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):
predictions = tf.reduce_sum(
tf.gather(user_embeddings, sparse_ratings.indices[:, 0]) *
tf.gather(movie_embeddings, sparse_ratings.indices[:, 1]),
axis=1)
loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)
return loss
Is it possible to do this with a particular loss function, optimizer and learning schedule?
Thank you very much.
A naive and straightforward approach using TensorFlow 2:
Note that rating was converted to float32. TensorFlow cannot compute gradients with respect to integer tensors; see https://github.com/tensorflow/tensorflow/issues/20524.
A = build_rating_sparse_tensor(ratings)
indices = ratings[["U_num", "V_num"]].values
embeddings = 3000
U = tf.Variable(tf.random.normal([A.shape[0], embeddings]), dtype=tf.float32)
V = tf.Variable(tf.random.normal([embeddings, A.shape[1]]), dtype=tf.float32)
optimizer = tf.optimizers.Adam()
trainable_weights = [U, V]
for step in range(100):
with tf.GradientTape() as tape:
A_prime = tf.matmul(U, V)
# indexing the result based on the indices of A that contain a value
A_prime_sparse = tf.gather(
tf.reshape(A_prime, [-1]),
indices[:, 0] * tf.shape(A_prime)[1] + indices[:, 1],
)
loss = tf.reduce_sum(tf.metrics.mean_squared_error(A_prime_sparse, A.values))
grads = tape.gradient(loss, trainable_weights)
optimizer.apply_gradients(zip(grads, trainable_weights))
if step % 20 == 0:
print(f"Training loss at step {step}: {loss:.4f}")
We take advantage of the sparsity of A by calculating the loss only over the actual values of A. However, we still have to allocate two really big dense tensors for the trainable weights U and V. For sizes like those in your example, you will probably encounter OOM errors.
It might be worth exploring another representation for your data.
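One such direction (a sketch of mine, with a small illustrative rank rather than embeddings = 3000): never materialize the dense A_prime at all, and instead gather only the embedding rows that participate in the observed entries, much like the sparse_mean_square_error in the question:

import tensorflow as tf

embeddings = 30  # illustrative; tune the rank for your data
U = tf.Variable(tf.random.normal([A.shape[0], embeddings]))
V = tf.Variable(tf.random.normal([A.shape[1], embeddings]))
optimizer = tf.optimizers.Adam()

for step in range(100):
    with tf.GradientTape() as tape:
        # Predict only the observed entries: row i of U dotted with row j of V.
        u_rows = tf.gather(U, A.indices[:, 0])
        v_rows = tf.gather(V, A.indices[:, 1])
        preds = tf.reduce_sum(u_rows * v_rows, axis=1)
        loss = tf.reduce_mean(tf.square(preds - A.values))
    grads = tape.gradient(loss, [U, V])
    optimizer.apply_gradients(zip(grads, [U, V]))
    if step % 20 == 0:
        print(f"Step {step}: loss {float(loss):.4f}")

This keeps the per-step memory cost proportional to the number of observed ratings instead of the full dense product.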
I have used linear regression via ML packages in Python, but for the sake of self-gratification, I coded it from scratch. The loss starts at around 0.90 and keeps increasing (not learning) for some reason. I do not understand what mistake I may have made.
Standardised the dataset as part of preprocessing
Initialise weight matrix with MLE estimate for parameter W i.e., (X^TX)^-1X^TY
Compute the output
Calculate gradient of loss function SSE (Sum of Squared Error) wrt param W and bias B
Use the gradients to update the parameters using gradient descent.
import preprocess as pre
import numpy as np
import matplotlib.pyplot as plt
data = pre.load_file('airfoil_self_noise.dat')
data = pre.organise(data,"\t","\r\n")
data = pre.standardise(data,data.shape[1])
t = np.reshape(data[:,5],[-1,1])
data = data[:,:5]
N = data.shape[0]
M = 5
lr = 1e-3
# W = np.random.random([M,1])
W = np.dot(np.dot(np.linalg.inv(np.dot(data.T,data)),data.T),t)
data = data.T # Examples are arranged in columns [features,N]
b = np.random.rand()
epochs = 1000000
loss = np.zeros([epochs])
for epoch in range(epochs):
if epoch%1000 == 0:
lr /= 10
# Obtain the output
y = np.dot(W.T,data).T + b
sse = np.dot((t-y).T,(t-y))
loss[epoch]= sse/N
var = sse/N
# log likelihood
ll = (-N/2)*(np.log(2*np.pi))-(N*np.log(np.sqrt(var)))-(sse/(2*var))
# Gradient Descent
W_grad = np.zeros([M,1])
B_grad = 0
for i in range(N):
err = (t[i]-y[i])
W_grad += err * np.reshape(data[:,i],[-1,1])
B_grad += err
W_grad /= N
B_grad /= N
W += lr * W_grad
b += lr * B_grad
print("Epoch: %d, Loss: %.3f, Log-Likelihood: %.3f"%(epoch,loss[epoch],ll))
plt.figure()
plt.plot(range(epochs),loss,'-r')
plt.show()
Now if you run the above code, you are likely not to find anything wrong, since I am doing W += lr * W_grad instead of W -= lr * W_grad. I would like to know why that is, because gradient descent is supposed to subtract the gradient from the old weight matrix. The error constantly increases when I do that. What is it that I am missing?
Found it. The problem was that I took the gradient of the loss function from a slide, and it was not what I assumed: it wasn't entirely wrong, but it was already pointing in the direction of steepest descent. So when I subtracted it from the weights, the update pointed in the direction of greatest increase, which is what gave rise to what I observed.

I did the partial derivative of the loss function myself to clarify, and got this:

W_grad += data[:,i].reshape([-1,1])*(y[i]-t[i]).reshape([])

This points in the direction of greatest increase, and when I multiply it by -lr it points in the direction of steepest descent, and the training started working properly.
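For completeness, a vectorized version of the corrected update; a sketch of mine that reuses the variables already defined above (data laid out as [features, N], t as [N, 1]):

# Vectorized gradient of the mean squared error; it points uphill,
# so gradient descent subtracts it.
y = np.dot(W.T, data).T + b          # predictions, shape [N, 1]
W_grad = np.dot(data, (y - t)) / N   # [M, 1], direction of greatest increase
B_grad = np.mean(y - t)
W -= lr * W_grad
b -= lr * B_grad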
I'm trying to implement Gradient Descent (GD) (not the stochastic version) for logistic regression in Python 3.x, and I'm having some trouble.
Logistic regression is defined as follows (1):

a(x) = \frac{1}{1 + \exp(-w_1 x_1 - w_2 x_2)}

Formulas for the gradient descent updates are defined as follows (2):

w_1 := w_1 + k \frac{1}{\ell} \sum_{i=1}^{\ell} y_i x_{i1} \left(1 - \frac{1}{1 + \exp(-y_i (w_1 x_{i1} + w_2 x_{i2}))}\right) - k C w_1

w_2 := w_2 + k \frac{1}{\ell} \sum_{i=1}^{\ell} y_i x_{i2} \left(1 - \frac{1}{1 + \exp(-y_i (w_1 x_{i1} + w_2 x_{i2}))}\right) - k C w_2
Description of data:
X is an (N x 2) matrix of objects (consisting of positive and negative floats)
y is an (N x 1) vector of class labels (-1 or +1)
Task:
Implement gradient descent 1) with L2-regularization; and 2) without regularization. Desired results: vectors of weights.
Parameters: regularization rate C=10 for regularized regression and C=0 for unregularized regression; gradient step k=0.1; max. number of iterations = 10000; tolerance = 1e-5.
Note: GD has converged if the distance between the weight vectors from the current and previous steps is less than the tolerance (1e-5).
Here is my implementation:
k - gradient step;
C - regularization rate.
import numpy as np
def sigmoid(z):
result = 1./(1. + np.exp(-z))
return result
def distance(vector1, vector2):
vector1 = np.array(vector1, dtype='f')
vector2 = np.array(vector2, dtype='f')
return np.linalg.norm(vector1-vector2)
def GD(X, y, C, k=0.1, tolerance=1e-5, max_iter=10000):
X = np.matrix(X)
y = np.matrix(y)
l=len(X)
w1, w2 = 0., 0. # weights (look formula (2) in the beginning of question)
difference = 1.
iteration = 1
while(difference > tolerance):
hypothesis = y*(X*np.matrix([w1, w2]).T)
w1_updated = w1 + (k/l)*np.sum(y*X[:,0]*(1.-(sigmoid(hypothesis)))) - k*C*w1
w2_updated = w2 + (k/l)*np.sum(y*X[:,1]*(1.-(sigmoid(hypothesis)))) - k*C*w2
difference = distance([w1, w2], [w1_updated, w2_updated])
w1, w2 = w1_updated, w2_updated
if(iteration >= max_iter):
break;
iteration = iteration + 1
return [w1_updated, w2_updated] #vector of weights
Respectively:
# call for UNregularized GD: C=0
w = GD(X, y, C=0., k=0.1)
and
# call for regularized GD: C=10
w_reg = GD(X, y, C=10., k=0.1)
Here are the results (weight vectors):
# UNregularized GD
[0.035736331265589463, 0.032464572442830832]
# regularized GD
[5.0979561973044096e-06, 4.6312243707352652e-06]
However, it should be (right answers for self-control):
# UNregularized GD
[0.28801877, 0.09179177]
# regularized GD
[0.02855938, 0.02478083]
Please, can you tell me what's going wrong here? I've been sitting with this problem for three days in a row and still have no idea.
Thank you in advance.
First of all, the sigmoid function should be
def sigmoid(Z):
A=1/(1+np.exp(-Z))
return A
Try to run it again with this formula. Then, what is l?
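For what it's worth, one more thing I would check (my own observation, not part of the answer above): with np.matrix, products such as y*(X*np.matrix([w1, w2]).T) and y*X[:,0] are matrix multiplications, not the elementwise products that formula (2) calls for, so the quantity being summed is not what the formula says. A sketch using plain arrays, where every * is elementwise:

import numpy as np

def GD(X, y, C, k=0.1, tolerance=1e-5, max_iter=10000):
    # X: (N, 2) array of objects, y: length-N array of labels in {-1, +1}.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float).ravel()
    l = len(y)
    w = np.zeros(2)
    for _ in range(max_iter):
        margin = y * (X @ w)  # elementwise: y_i * <w, x_i>
        coef = y * (1.0 - 1.0 / (1.0 + np.exp(-margin)))
        w_new = w + (k / l) * (X * coef[:, None]).sum(axis=0) - k * C * w
        if np.linalg.norm(w_new - w) < tolerance:
            return w_new
        w = w_new
    return w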