Vectorized SVM gradient - python

I was going through the code for the SVM loss and its derivative. I understood the loss, but I cannot understand how the gradient is computed in a vectorized manner:
def svm_loss_vectorized(W, X, y, reg):
    loss = 0.0
    dW = np.zeros(W.shape)  # initialize the gradient as zero
    num_train = X.shape[0]

    scores = X.dot(W)
    yi_scores = scores[np.arange(scores.shape[0]), y]
    margins = np.maximum(0, scores - np.matrix(yi_scores).T + 1)
    margins[np.arange(num_train), y] = 0
    loss = np.mean(np.sum(margins, axis=1))
    loss += 0.5 * reg * np.sum(W * W)
I understand the code up to here. After this point, I cannot understand why we sum the binary matrix row-wise and then place the negative of that sum at the correct-class positions:
    binary = margins
    binary[margins > 0] = 1
    row_sum = np.sum(binary, axis=1)
    binary[np.arange(num_train), y] = -row_sum.T
    dW = np.dot(X.T, binary)

    # Average
    dW /= num_train

    # Regularize
    dW += reg * W

    return loss, dW

Let us recap the scenario and the loss function first, so we are on the same page:
Given are P sample points in N-dimensional space in the form of a PxN matrix X, so the points are the rows of this matrix. Each point in X is assigned to one out of M categories. These are given as a vector Y of length P that has integer values between 0 and M-1.
The goal is to predict the classes of all points with M linear classifiers (one for each category), given in the form of a weight matrix W of shape NxM, so the classifiers are the columns of W. To predict the categories of all samples X, the scalar products between all points and all weight vectors are formed. This is the same as matrix-multiplying X and W, yielding a PxM score matrix Y0 whose rows are ordered like the elements of Y, so each row corresponds to one sample. The predicted category for each sample is simply the one with the largest score.
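For concreteness, here is a tiny sketch of this setup with toy shapes (the values are random and purely illustrative):

import numpy as np

P, N, M = 5, 3, 4                  # samples, features, categories (toy sizes)
X = np.random.randn(P, N)          # P x N sample matrix
W = np.random.randn(N, M)          # N x M weight matrix, one classifier per column
Y0 = X.dot(W)                      # P x M score matrix
predicted = Y0.argmax(axis=1)      # predicted category for each sample
print(predicted.shape)             # (P,)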
There are no bias terms, so I presume there is some kind of symmetry or zero-mean assumption.
Now, to find a good set of weights we want a loss function that is small for good predictions, large for bad predictions, and that lets us do gradient descent. One of the most straightforward ways is to punish, for each sample i, every score that is larger than the score of the correct category for that sample, and to let the penalty grow linearly with the difference. So if we write A[i] for the set of categories j that score more than the correct category, i.e. Y0[i, j] > Y0[i, Y[i]], the loss for sample i can be written as
sum_{j in A[i]} (Y0[i, j] - Y0[i, Y[i]])
or equivalently if we write #A[i] for the number of elements in A[i]
(sum_{j in A[i]} Y0[i, j]) - #A[i] Y0[i, Y[i]]
The partial derivatives with respect to the score are thus simply
                    | -#A[i]   if j == Y[i]
dloss / dY0[i, j] = {  1       if j in A[i]
                    |  0       else
which is precisely what the first four lines you say you don't understand compute.
The next line applies the chain rule dloss/dW = dloss/dY0 dY0/dW.
It remains to divide by the number of samples to get a per-sample loss, and to add the derivative of the regularization term, which is easy because the regularization is just a componentwise quadratic function.
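To make this concrete, here is a tiny worked example (toy numbers, not taken from the question) of what binary and row_sum look like for two samples and three classes:

import numpy as np

# toy margins for 2 samples and 3 classes, with correct classes y = [0, 2];
# the correct-class entries have already been zeroed, as in the question
margins = np.array([[0.0, 1.3, 0.0],
                    [2.0, 0.5, 0.0]])
y = np.array([0, 2])

binary = (margins > 0).astype(float)   # 1 where dloss/dY0[i, j] = 1
row_sum = binary.sum(axis=1)           # #A[i], the number of violating classes per sample
binary[np.arange(2), y] = -row_sum     # -#A[i] at the correct-class positions
print(binary)
# [[-1.  1.  0.]
#  [ 1.  1. -2.]]
# dW would then be X.T.dot(binary), averaged and regularized as in the question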

Personally, I found it much easier to understand the whole gradient calculation by looking at the analytic derivation of the loss function in more detail. To extend the given answer, I would like to point to the derivatives of the loss function with respect to the weights, as follows:
1) Loss gradient w.r.t. w_yi (correct class):
dL_i / dw_yi = - ( sum_{j != y_i} 1(w_j^T x_i - w_yi^T x_i + Delta > 0) ) * x_i        (1)
Hence, we count the cases where w_j does not meet the margin requirement and sum those cases up. This negative sum is then used as the weight at the position of the correct class w_yi. (We later need to multiply this value by x_i; this is what your code does in line 5, dW = np.dot(X.T, binary).)
2) Loss gradient w.r.t. w_j (incorrect classes):
dL_i / dw_j = 1(w_j^T x_i - w_yi^T x_i + Delta > 0) * x_i        (2)
where 1 is the indicator function, 1 if true, else 0.
In other words, "programmatically" we need to apply equation (2) in all cases where the margin requirement is not met, and add the negative sum of all unmet requirements to the true class column (as in (1)).
So what you do in the first lines of your code is determine the cases where the margin is not met and add the negative sum of those cases to the column of the correct class y_i. In the 5th line, you do the final step where you multiply by the x_i's, and this completes the gradient calculation as in (1) and (2).
I hope this makes it easier to understand; let me know if anything remains unclear.
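As a quick sanity check (not part of the original answer), you can compare the analytic dW against a centered finite difference; a minimal sketch, assuming svm_loss_vectorized from the question is defined in scope:

import numpy as np

np.random.seed(0)
X = np.random.randn(10, 4)             # 10 samples, 4 features
y = np.random.randint(0, 3, size=10)   # 3 classes
W = np.random.randn(4, 3) * 0.01
reg = 0.1

loss, dW = svm_loss_vectorized(W, X, y, reg)

# compare one arbitrary entry of dW against a centered finite difference
h = 1e-5
i, j = 2, 1
W_plus, W_minus = W.copy(), W.copy()
W_plus[i, j] += h
W_minus[i, j] -= h
num_grad = (svm_loss_vectorized(W_plus, X, y, reg)[0] -
            svm_loss_vectorized(W_minus, X, y, reg)[0]) / (2 * h)
print(num_grad, dW[i, j])              # the two values should agree closely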

Related

Python Linear Programming, Constrain Input to Zero or Range

In a linear program I am minimizing the distance between weighted input vectors and a target vector. I used SciPy to compute values for the weights I need. Currently they are between zero and one, but I'd like them to be zero if they are smaller than, say, .2, so each x_i should be 0 or in [.2; 1]. I was pointed to mixed integer linear programming, but I still can't find an approach for my problem. How can I fix this?
tl;dr: I want to use (0, 0) or (.3, 1) as the bounds for each x. How do I implement this?
Here is my SciPy code:
import numpy as np
from scipy.optimize import minimize

# minimize the distance between weighted input vectors and a target vector
def milp_objective_function(weights):
    scaled_matrix = input_matrix * weights[:, np.newaxis]  # scale input_matrix columns by weights
    sum_vector = sum(scaled_matrix)  # sum weighted_input_matrix columns
    difference_vector = sum_vector - target_vector
    return np.sqrt(difference_vector.dot(difference_vector))  # the distance between sum_vector and target_vector

# sum of weights should equal 100%
def milp_constraint(weights):
    return sum(weights) - 1

def main():
    # bounds should be 0 or [.2; 1] -> mixed integer linear programming?
    weight_bounds = tuple([(0, 1) for i in input_matrix])
    # random guess, will implement later
    initial_guess = milp_guess_weights()
    constraint_obj = {'type': 'eq', 'fun': milp_constraint}
    result = minimize(milp_objective_function, x0=initial_guess, bounds=weight_bounds,
                      constraints=constraint_obj)
Variables that are in {0} ∪ [L,U] are called semi-continuous variables. Advanced MIP solvers have built-in support for these types of variables.
Note that SciPy does not have a MIP solver at all.
I also want to note that if your MIP solver does not support semi-continuous variables you can simulate them with binary variables:
L ⋅ δ(i) ≤ x(i) ≤ U ⋅ δ(i)
δ(i) ∈ {0,1}
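For illustration, here is a minimal sketch of that binary-variable trick using PuLP (an assumption on my part; the question uses SciPy, which cannot express the binary variables), with a placeholder linear objective, since the original Euclidean-distance objective would itself need a linear reformulation:

from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

n = 3                      # toy number of weights
L, U = 0.2, 1.0            # each weight is either 0 or in [L, U]

prob = LpProblem("semi_continuous_weights", LpMinimize)
x = [LpVariable("x%d" % i, lowBound=0, upBound=U) for i in range(n)]
d = [LpVariable("d%d" % i, cat=LpBinary) for i in range(n)]

# link each weight to its on/off indicator: L*d(i) <= x(i) <= U*d(i)
for i in range(n):
    prob += x[i] >= L * d[i]
    prob += x[i] <= U * d[i]

# weights must sum to 100%
prob += lpSum(x) == 1

# placeholder linear objective; the distance objective from the question is not
# linear and would need its own reformulation (e.g. an L1 norm with auxiliaries)
prob += lpSum(x)

prob.solve()
print([v.value() for v in x], [v.value() for v in d])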

Constructing a Custom Loss Function in Keras

I am attempting to write a custom loss function in Keras from this paper. Namely, the loss I want to create is this:
This is a type of ranking loss for multi-class multi-label problems. Here are the details:
Y_i = set of positive labels for sample i
Y_i^bar = set of negative labels for sample i (complement of Y_i)
c_j^i = prediction on i^th sample at label j
In what follows, both y_true and y_pred are of dimension 18.
def multilabel_loss(y_true, y_pred):
    """ Multi-label loss function.
    More complete description here...
    """
    zero = K.tf.constant(0, dtype=tf.float32)
    where_one = K.tf.not_equal(y_true, zero)
    where_zero = K.tf.equal(y_true, zero)

    Y_p = K.tf.where(where_one)
    Y_n = K.tf.where(where_zero)

    n = K.tf.shape(y_true)[0]
    loss = 0

    for i in range(n):
        # Here i is the ith sample; for a specific i, I find all locations
        # where Y_p, Y_n belong to the ith sample; axis 0 denotes
        # the sample index space
        Y_p_i = K.tf.equal(Y_p[:, 0], K.tf.constant(i, dtype=tf.int64))
        Y_n_i = K.tf.equal(Y_n[:, 0], K.tf.constant(i, dtype=tf.int64))

        # Here I plug in those locations to get the values
        Y_p_i = K.tf.where(Y_p_i)
        Y_n_i = K.tf.where(Y_n_i)

        # Here I get the indices of the values above
        Y_p_ind = K.tf.gather(Y_p[:, 1], Y_p_i)
        Y_n_ind = K.tf.gather(Y_n[:, 1], Y_n_i)

        # Here I compute Y_i and its complement
        yi = K.tf.shape(Y_p_ind)[0]
        yi_not = K.tf.shape(Y_n_ind)[0]

        # The value to normalize the inner summation
        normalizer = K.tf.divide(1, K.tf.multiply(yi, yi_not))

        # This creates a matrix of all combinations of indices k, l from the
        # above equation; then it is reshaped
        prod = K.tf.map_fn(lambda x: K.tf.map_fn(lambda y: K.tf.stack([x, y]), Y_n_ind), Y_p_ind)
        prod = K.tf.reshape(prod, [-1, 2, 1])
        prod = K.tf.squeeze(prod)

        # Next, the indices are fed into the corresponding prediction
        # matrix, where the values are then exponentiated and summed
        y_pred_gather = K.tf.gather(y_pred[i, :].T, prod)
        s = K.tf.cast(K.sum(K.tf.exp(K.tf.subtract(y_pred_gather[:, 0], y_pred_gather[:, 1]))), tf.float64)
        loss = loss + K.tf.multiply(normalizer, s)

    return loss
My questions are the following:
When I go to compile my graph, I get an error revolving around n. Namely, TypeError: 'Tensor' object cannot be interpreted as an integer. I've looked around, but I can't find a way to stop this. My hunch is that I need to avoid a for loop altogether, which brings me to
How can I write this loss without for loops? I'm fairly new to Keras and have spent a solid few hours writing this custom loss myself. I'd love to write it more concisely. What's blocking me from using all matrices is the fact that Y_i and its complement can take on different sizes for each i.
Please let me know if you'd like me to elaborate more on my code. Happy to do so.
UPDATE 3
As per @Parag S. Chandakkar's suggestions, I have the following:
def multi_label_loss(y_true, y_pred):
    # set consistent casting
    y_true = tf.cast(y_true, dtype=tf.float64)
    y_pred = tf.cast(y_pred, dtype=tf.float64)

    # this gets all positive predictions and negative predictions;
    # it also exponentiates them in their respective Y_i classes
    PT = K.tf.multiply(y_true, tf.exp(-y_pred))
    PT_complement = K.tf.multiply((1 - y_true), tf.exp(y_pred))

    # this step gets the weight vector that we'll normalize by
    m = K.shape(y_true)[0]
    W = K.tf.multiply(K.sum(y_true, axis=1), K.sum(1 - y_true, axis=1))
    W_inv = 1. / W
    W_inv = K.reshape(W_inv, (m, 1))

    # this step computes the outer product of two tensors
    def outer_product(inputs):
        """
        inputs: list of two tensors (of equal dimensions),
        for which you need to compute the outer product
        """
        x, y = inputs
        batchSize = K.shape(x)[0]
        outerProduct = x[:, :, np.newaxis] * y[:, np.newaxis, :]
        outerProduct = K.reshape(outerProduct, (batchSize, -1))
        # returns a flattened batch-wise set of tensors
        return outerProduct

    # set up inputs to outer product
    inputs = [PT, PT_complement]

    # compute final loss
    loss = K.sum(K.tf.multiply(W_inv, outer_product(inputs)))

    return loss
This is not an answer but more like my thought process, which should help you write more concise code.
Firstly, I don't think you should worry about that error for now because by the time you eliminate for loops, your code may look very different.
Now, I haven't looked at the paper but the predictions c_j^i should be the raw values that come out of the last non-softmax layer (that is what I assume).
So you can add an additional exp layer and compute exp(c_j^i) for each prediction. Now, the for loop comes from the summation. If you look closely, all it is doing is first forming pairs of labels and then subtracting their corresponding predictions. First express the exponential of that difference as a product, exp(c_l^i) * exp(-c_k^i). To see what is happening, take a simple example.
import numpy as np
a = [1, 2, 3]
a = np.reshape(a, (3,1))
Following the above explanation, you want the following result:
r1 = sum([1 * 2, 1 * 3, 2 * 3]) = sum([2, 3, 6]) = 11
You can get the same result by matrix multiplication, which is a way to eliminate the for loops.
r2 = a * a.T
# r2 = array([[1, 2, 3],
# [2, 4, 6],
# [3, 6, 9]])
Extract the upper triangular part, i.e. 2, 3, 6, and sum the array to get 11, which is the result you want. Now, there may be some differences; for example, you may need to exhaustively form all the pairs. You should be able to convert it into the form of a matrix multiplication.
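For example, one way to do that extraction in NumPy (a small sketch continuing the toy example above):

import numpy as np

a = np.reshape([1, 2, 3], (3, 1))
r2 = a * a.T                    # the full outer-product matrix from above
upper = np.triu(r2, k=1)        # strictly upper triangular part: 2, 3, 6
print(upper.sum())              # 11, matching the loop-based r1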
Once you have taken care of the summation term, the normalization term can be computed easily if you pre-compute the quantities |Y_i| and |\bar{Y_i}| for each sample i. Pass them in as input arrays and feed them into the loss as part of y_pred. The final summation over i will be done by Keras.
Edit 1: Even if |Y_i| and |\bar{Y_i}| take on different values, you should be able to build a generic formula for extracting the upper triangular part irrespective of the matrix size, once you have pre-computed |Y_i| and |\bar{Y_i}|.
Edit 2: I don't think you understood me completely. In my opinion, NumPy shouldn't be used at all in the loss function. This is (mostly) doable using only Tensorflow. I will explain once more, while preserving my earlier explanation.
I now know that there is a cartesian product between the positive labels and the negative labels (i.e. Y_i and \bar{Y_i}, respectively). So first, put a layer of exp after the raw predictions (in TF, not in NumPy).
Now, you need to know which indices out of the 18 dimensions of y_true correspond to positive labels and which to negative ones. If you are using one-hot encoding, you can find this out on-the-fly by using tf.where and tf.gather (see here).
By now, you should know the indices j (in c_j^i) that correspond to positive and negative labels. All you need to do is compute \sum_{(k, l)} exp(c_k^i) * (1 / exp(c_l^i)) over the pairs (k, l): form one tensor consisting of exp(c_k^i) for all k (call it A) and another one consisting of exp(c_l^i) for all l (call it B), then compute sum(A * B^T). There is no need to extract the upper triangular part here, since you are taking a cartesian product rather than pairs from a single set. By now, you should have the result of the inner-most summation.
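Here is a single-sample sketch of that A and B construction in plain TensorFlow (toy values, written in eager/TF2 style rather than the K.tf backend calls above; the sign convention follows the asker's UPDATE 3 code, i.e. exp(-c) for positive labels and exp(c) for negative ones):

import tensorflow as tf

# toy single sample: c are the raw predictions, y the 0/1 labels
c = tf.constant([0.2, -1.3, 0.7, 0.1], dtype=tf.float32)
y = tf.constant([1.0, 0.0, 1.0, 0.0], dtype=tf.float32)

A = tf.boolean_mask(tf.exp(-c), tf.equal(y, 1.0))    # terms for positive labels k
B = tf.boolean_mask(tf.exp(c), tf.equal(y, 0.0))     # terms for negative labels l

inner = tf.reduce_sum(A[:, None] * B[None, :])       # sum over the full cartesian product
norm = tf.cast(tf.size(A) * tf.size(B), tf.float32)  # |Y_i| * |Y_i bar|
loss_i = inner / norm
print(float(loss_i))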
Contrary to what I said before, I think you could also compute the normalization factor on-the-fly from y_true.
You only have to figure out how to extend this to three dimensions to handle multiple samples.
Note: Using NumPy is probably possible via tf.py_func, but it does not seem necessary here. Just use TF functions.

Fitting a bias in a weight decay regression using least-squares

I'm calculating the weights for a linear regression with weight decay, i.e. normally I am trying to find beta = (X'X + lambda*I)^-1 X'Y, where X has n rows of D features each and Y is a vector of outputs, one for each row of X.
I've been fitting without a bias term by using:
def wd_fit(A, y, lamb=0):
    n_col = A.shape[1]
    return np.linalg.lstsq(A.T.dot(A) + lamb * np.identity(n_col), A.T.dot(y))
I'd also like to calculate a bias or intercept term for the fit, instead of having it pass through the origin. I'd like to keep the same call to lstsq, so if there's some matrix transform I can carry out, that would be ideal. My inclination is to append a column of 1s somewhere, so that X_mod, say, would then have D+1 features where the last one relates to the intercept value, but I'm not quite sure where that column should go or even if this is correct.
If you don't want to mean-center your variables, adding a column of ones will work and is a perfectly acceptable solution.
The bias term will just be the coefficient at the position of the added column.
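For illustration, here is a minimal sketch of wd_fit extended with a column of ones (zeroing the bias entry of the regularization matrix is an extra choice I'm assuming here, so the intercept itself is not shrunk; keeping the plain identity also works):

import numpy as np

def wd_fit_with_bias(A, y, lamb=0):
    # append a column of ones; the last coefficient then acts as the intercept
    A1 = np.hstack([A, np.ones((A.shape[0], 1))])
    n_col = A1.shape[1]
    # assumed choice: leave the bias unpenalized by zeroing its entry
    I = np.identity(n_col)
    I[-1, -1] = 0
    beta, *rest = np.linalg.lstsq(A1.T.dot(A1) + lamb * I, A1.T.dot(y), rcond=None)
    return beta          # beta[:-1] are the weights, beta[-1] is the bias

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0 + 0.01 * rng.normal(size=50)
print(wd_fit_with_bias(X, y, lamb=0.1))   # the last entry should be close to 3.0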

What determines whether my Python gradient descent algorithm converges?

I've implemented a single-variable linear regression model in Python that uses gradient descent to find the intercept and slope of the best-fit line (I'm using gradient descent rather than computing the optimal values for intercept and slope directly because I'd eventually like to generalize to multiple regression).
The data I am using are below. sales is the dependent variable (in dollars) and temp is the independent variable (degrees celsius) (think ice cream sales vs temperature, or something similar).
sales temp
215 14.20
325 16.40
185 11.90
332 15.20
406 18.50
522 22.10
412 19.40
614 25.10
544 23.40
421 18.10
445 22.60
408 17.20
And this is my data after it has been normalized:
sales temp
0.06993007 0.174242424
0.326340326 0.340909091
0 0
0.342657343 0.25
0.515151515 0.5
0.785547786 0.772727273
0.529137529 0.568181818
1 1
0.836829837 0.871212121
0.55011655 0.46969697
0.606060606 0.810606061
0.51981352 0.401515152
My code for the algorithm:
import numpy as np
import pandas as pd
from scipy import stats


class SLRegression(object):
    def __init__(self, learnrate=.01, tolerance=.000000001, max_iter=10000):
        # Initialize learnrate, tolerance, and max_iter.
        self.learnrate = learnrate
        self.tolerance = tolerance
        self.max_iter = max_iter

    # Define the gradient descent algorithm.
    def fit(self, data):
        # data : array-like, shape = [m_observations, 2_columns]
        # Initialize local variables.
        converged = False
        m = data.shape[0]
        # Track number of iterations.
        self.iter_ = 0
        # Initialize theta0 and theta1.
        self.theta0_ = 0
        self.theta1_ = 0
        # Compute the cost function.
        J = (1.0/(2.0*m)) * sum([(self.theta0_ + self.theta1_*data[i][1] - data[i][0])**2 for i in range(m)])
        print('J is: ', J)
        # Iterate over each point in data and update theta0 and theta1 on each pass.
        while not converged:
            diftemp0 = (1.0/m) * sum([(self.theta0_ + self.theta1_*data[i][1] - data[i][0]) for i in range(m)])
            diftemp1 = (1.0/m) * sum([(self.theta0_ + self.theta1_*data[i][1] - data[i][0]) * data[i][1] for i in range(m)])
            # Subtract the learnrate * partial derivative from theta0 and theta1.
            temp0 = self.theta0_ - (self.learnrate * diftemp0)
            temp1 = self.theta1_ - (self.learnrate * diftemp1)
            # Update theta0 and theta1.
            self.theta0_ = temp0
            self.theta1_ = temp1
            # Compute the updated cost function, given new theta0 and theta1.
            new_J = (1.0/(2.0*m)) * sum([(self.theta0_ + self.theta1_*data[i][1] - data[i][0])**2 for i in range(m)])
            print('New J is: %s' % new_J)
            # Test for convergence.
            if abs(J - new_J) <= self.tolerance:
                converged = True
                print('Model converged after %s iterations!' % self.iter_)
            # Set old cost equal to new cost and update iter.
            J = new_J
            self.iter_ += 1
            # Test whether we have hit max_iter.
            if self.iter_ == self.max_iter:
                converged = True
                print('Maximum iterations have been reached!')
        return self

    def point_forecast(self, x):
        # Given feature value x, returns the regression's predicted value for y.
        return self.theta0_ + self.theta1_ * x


# Run the algorithm on a data set.
if __name__ == '__main__':
    # Load in the .csv file.
    data = np.squeeze(np.array(pd.read_csv('sales_normalized.csv')))
    # Create a regression model with the default learning rate, tolerance, and maximum number of iterations.
    slregression = SLRegression()
    # Call the fit function and pass in the data.
    slregression.fit(data)
    # Print out the results.
    print('After %s iterations, the model converged on Theta0 = %s and Theta1 = %s.'
          % (slregression.iter_, slregression.theta0_, slregression.theta1_))
    # Compare our model to the scipy linregress model.
    slope, intercept, r_value, p_value, slope_std_error = stats.linregress(data[:, 1], data[:, 0])
    print('Scipy linear regression gives intercept: %s and slope = %s.' % (intercept, slope))
    # Test the model with a point forecast.
    print('As an example, our algorithm gives y = %s given x = .87.' % slregression.point_forecast(.87))  # Should be about .83.
    print('The true y-value for x = .87 is about .8368.')
I'm having trouble understanding exactly what allows the algorithm to converge versus return values that are completely wrong. Given learnrate = .01, tolerance = .0000000001, and max_iter = 10000, in combination with normalized data, I can get the gradient descent algorithm to converge. However, when I use the un-normalized data, the smallest I can make the learning rate without the algorithm returning NaN is .005. This brings changes in the cost function from iteration to iteration down to around 614, but I can't get it to go any lower.
Is it definitely a requirement of this type of algorithm to have normalized data, and if so, why? Also, what would be the best way to take a novel x-value in non-normalized form and plug it into the point forecast, given that the algorithm needs normalized values? For instance, if I were going to deliver this algorithm to a client so they could make predictions of their own (I'm not, but for the sake of argument..), wouldn't I want them to simply be able to plug in the un-normalized x-value?
All in all, playing around with the tolerance, max_iter, and learnrate gives me non-convergent results the majority of the time. Is this normal, or are there flaws in my algorithm that are contributing to this issue?
Given learnrate = .01, tolerance = .0000000001, and max_iter = 10000, in combination with normalized data, I can get the gradient descent algorithm to converge. However, when I use the un-normalized data, the smallest I can make the learning rate without the algorithm returning NaN is .005
That's kind of to be expected the way you have your algorithm set up.
The normalization of the data makes it so the y-intercept of the best fit is around 0.0. Otherwise, you could have a y-intercept thousands of units off of the starting guess, and you'd have to trek there before you ever really started the optimization part.
Is it definitely a requirement of this type of algorithm to have normalized data, and if so, why?
No, absolutely not, but if you don't normalize, you should pick a starting point more intelligently (you're starting at (m,b) = (0,0)). Your learnrate may also be too small if you don't normalize your data, and same with your tolerance.
Also, what would be the best way to take a novel x-value in non-normalized form and plug it into the point forecast, given that the algorithm needs normalized values?
Apply to your new x-value whatever transformation you applied to the original data to get the normalized data. (The code for normalization is outside of what you have shown.) If this test point falls within the (min_x, max_x) range of your original data, once transformed it should fall within 0 <= x <= 1. Once you have this normalized test point, plug it into your theta equation of a line (remember, your thetas are the m, b of the slope-intercept form of the equation of a line).
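For example, assuming the normalization was a min-max scaling (which matches the table in the question), here is a small sketch of storing the training min/max of the temperature column and reusing them for a new point:

import numpy as np

# the original (un-normalized) temperature column from the question
temp = np.array([14.2, 16.4, 11.9, 15.2, 18.5, 22.1, 19.4, 25.1, 23.4, 18.1, 22.6, 17.2])
x_min, x_max = temp.min(), temp.max()

def normalize_x(x_new):
    # apply the same min-max transform that produced the normalized data
    return (x_new - x_min) / (x_max - x_min)

print(normalize_x(23.4))   # ~0.8712, matching the normalized table above
# the forecast comes out in normalized sales units; map it back with
# y * (sales_max - sales_min) + sales_min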
All in all, playing around with the tolerance, max_iter, and learnrate gives me non-convergent results the majority of the time.
For a well-formed problem, if you're in fact diverging it often means your step size is too large. Try lowering it.
If it's simply not converging before it hits the maximum number of iterations, that could be due to a few issues:
Your step size is too small,
Your tolerance is too small,
Your max iterations is too small,
Your starting point is poorly chosen
In your case, using the non-normalized data results in your starting point of (0,0) being very far off (the (m,b) of the non-normalized data is around (-159, 30), while the (m,b) of your normalized data is (0.10, 0.79)), so most if not all of your iterations are spent just getting to the area of interest.
The problem is that increasing the step size to get to the area of interest faster also makes it less likely to find convergence once it gets there.
To account for this, some gradient descent algorithms have dynamic step size (or learnrate) such that large steps are taken at the beginning, and smaller ones as it nears convergence.
It may also be helpful to keep a history of the theta pairs throughout the algorithm and then plot them. You'll immediately see the difference between using normalized and non-normalized input data.
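As a small self-contained illustration of both points (a toy subset of the normalized data, and an assumed 1/t decay schedule that is not part of the original code):

import numpy as np
import matplotlib.pyplot as plt

# toy subset of the normalized data: column 0 = sales (y), column 1 = temp (x)
data = np.array([[0.070, 0.174],
                 [0.326, 0.341],
                 [0.000, 0.000],
                 [0.515, 0.500],
                 [1.000, 1.000],
                 [0.837, 0.871]])
y, x = data[:, 0], data[:, 1]

theta0, theta1 = 0.0, 0.0
base_rate = 0.5                           # assumed starting learning rate
history = []
for it in range(2000):
    err = theta0 + theta1 * x - y
    g0, g1 = err.mean(), (err * x).mean()
    rate = base_rate / (1 + 0.001 * it)   # simple 1/t decay (an assumed schedule)
    theta0 -= rate * g0
    theta1 -= rate * g1
    history.append((theta0, theta1))

t0, t1 = zip(*history)
plt.plot(t0, t1, marker='.', markersize=2)
plt.xlabel('theta0')
plt.ylabel('theta1')
plt.title('theta trajectory during gradient descent')
plt.show()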

Compute the gradient of the SVM loss function

I am trying to implement the SVM loss function and its gradient.
I found some example projects that implement these two, but I could not figure out how they can use the loss function when computing the gradient.
Here is the formula of the loss function:
L_i = sum_{j != y_i} max(0, w_j^T x_i - w_{y_i}^T x_i + Delta), with Delta = 1 in the code below.
What I cannot understand is how I can use the loss function's result while computing the gradient.
The example project computes the gradient as follows:
for i in xrange(num_train):
    scores = X[i].dot(W)
    correct_class_score = scores[y[i]]
    for j in xrange(num_classes):
        if j == y[i]:
            continue
        margin = scores[j] - correct_class_score + 1  # note delta = 1
        if margin > 0:
            loss += margin
            dW[:, j] += X[i]
            dW[:, y[i]] -= X[i]
Here dW holds the gradient result, and X is the array of training data.
But I don't understand how the derivative of the loss function leads to this code.
The way to calculate the gradient in this case is calculus (analytically, NOT numerically!). So we differentiate the loss function with respect to w_{y_i} like this:
dL_i / dw_{y_i} = - ( sum_{j != y_i} 1(w_j^T x_i - w_{y_i}^T x_i + Delta > 0) ) * x_i
and with respect to w_j, for j != y_i, it is:
dL_i / dw_j = 1(w_j^T x_i - w_{y_i}^T x_i + Delta > 0) * x_i
Here 1(...) is just the indicator function, so when the margin condition holds the term reduces to x_i, and otherwise to 0. When you write this in code, the example you provided is the answer.
Since you are using the cs231n example, you should definitely check the course notes and videos if needed.
Hope this helps!
If the subtraction is less than zero, the loss is zero, so the gradient of W is also zero. If the subtraction is larger than zero, the gradient of W is the partial derivative of the loss.
If we don't keep these two lines of code:
dW[:,j] += X[i]
dW[:,y[i]] -= X[i]
we only get the loss value, not the gradient.
