Forward Jacobian Of Neural Network in Pytorch is Slow - python

I am computing the forward Jacobian (derivative of outputs with respect to inputs) of a 2-layer feedforward neural network in PyTorch, and my results are correct but relatively slow. Given the nature of the calculation, I would expect it to be approximately as fast as a forward pass through the network (or maybe 2-3x as long), but an optimization step on this routine takes ~12x as long as one on the standard mean squared error (in my test example I just want the Jacobian to equal 1 at all points), so I assume I am doing something sub-optimally. I'm wondering if anyone knows a faster way to code this. My test network has 2 input nodes, followed by 2 hidden layers of 5 nodes each and an output layer of 2 nodes, and uses tanh activation functions on the hidden layers with a linear output layer.
The Jacobian calculations are based on the paper The Limitations of Deep Learning in Adversarial Settings, which gives a basic recursive definition of the forward derivative (essentially you multiply the derivative of your activation functions by the weights and the previous partial derivatives at each layer). This is very similar to forward propagation, which is why I would expect it to be faster than it is. The determinant of the 2x2 Jacobian at the end is then straightforward.
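For reference, the recursion I am implementing is roughly the following (my own notation, not the paper's): writing z_k = W_k x_{k-1} + b_k for the pre-activation of layer k and x_k = tanh(z_k) for its output, the Jacobian of layer k with respect to the network input is

J_0 = I
J_k = diag(1 - x_k^2) * W_k * J_{k-1}    (hidden layers, since tanh'(z) = 1 - tanh(z)^2)
J_out = W_out * J_L                      (linear output layer)

and 1 - x_k^2 is the tanh_deriv_tensor in the code below.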
Below is the code for the network and the jacobian
class Network(torch.nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.h_1_1 = torch.nn.Linear(input_1, hidden_1)
        self.h_1_2 = torch.nn.Linear(hidden_1, hidden_2)
        self.out = torch.nn.Linear(hidden_2, out_1)

    def forward(self, x):
        x = F.tanh(self.h_1_1(x))
        x = F.tanh(self.h_1_2(x))
        x = self.out(x)
        return x

    def jacobian(self, x):
        a = self.h_1_1.weight
        x = F.tanh(self.h_1_1(x))
        tanh_deriv_tensor = 1 - (x ** 2)
        expanded_deriv = tanh_deriv_tensor.unsqueeze(-1).expand(-1, -1, input_1)
        partials = expanded_deriv * a.expand_as(expanded_deriv)

        a = torch.matmul(self.h_1_2.weight, partials)
        x = F.tanh(self.h_1_2(x))
        tanh_deriv_tensor = 1 - (x ** 2)
        expanded_deriv = tanh_deriv_tensor.unsqueeze(-1).expand(-1, -1, out_1)
        partials = expanded_deriv * a

        partials = torch.matmul(self.out.weight, partials)
        determinant = partials[:, 0, 0] * partials[:, 1, 1] - partials[:, 0, 1] * partials[:, 1, 0]
        return determinant
and here are the two error functions being compared. Note that the first one requires an extra forward call through the network, to get the output values (labeled action) while the second function does not since it works on the input values.
def actor_loss_fcn1(action, target):
    loss = ((action - target) ** 2).mean()
    return loss

def actor_loss_fcn2(input):  # 12x slower
    jacob = model.jacobian(input)
    loss = ((jacob - 1) ** 2).mean()
    return loss
Any insight on this would be greatly appreciated

The second calculation of 'a' takes the most time on my machine (cpu).
# Here you increase the size of the matrix by a factor of "input_1"
expanded_deriv = tanh_deriv_tensor.unsqueeze(-1).expand(-1, -1, input_1)
partials = expanded_deriv * a.expand_as(expanded_deriv)

# Here torch.matmul() has to handle "input_1" times more computation than in a normal forward call
a = torch.matmul(self.h_1_2.weight, partials)
On my machine, computing the Jacobian takes roughly as long as it takes torch to compute
a = torch.rand(hidden_1, hidden_2)
b = torch.rand(n_inputs, hidden_1, input_1)
%timeit torch.matmul(a,b)
I don't think it is possible to speed this up computationally, unless you can move from CPU to GPU, since GPUs handle large matrices better.
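If it helps, a rough sketch of that suggestion (assuming the globals input_1 = 2, hidden_1 = hidden_2 = 5 and out_1 = 2 from the question are defined):

import torch

# move the model and a batch of inputs to the GPU if one is available;
# the large batched matmul inside jacobian() is where a GPU helps most
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Network().to(device)
inputs = torch.rand(1000, input_1, device=device)
loss = ((model.jacobian(inputs) - 1) ** 2).mean()   # same as actor_loss_fcn2
loss.backward()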

Related

Efficient batch derivative operations in PyTorch

I am using Pytorch to implement a neural network that has (say) 5 inputs and 2 outputs
class myNetwork(nn.Module):
    def __init__(self):
        super(myNetwork, self).__init__()
        self.layer1 = nn.Linear(5, 32)
        self.layer2 = nn.Linear(32, 2)

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = self.layer2(x)
        return x
Obviously, I can feed this an (N x 5) Tensor and get an (N x 2) result,
net = myNetwork()
nbatch = 100
inp = torch.rand([nbatch,5])
inp.requires_grad = True
out = net(inp)
I would now like to compute the derivatives of the NN output with respect to one element of the input vector (let's say the 5th element), for each example in the batch. I know I can calculate the derivatives of one element of the output with respect to all inputs using torch.autograd.grad, and I could use this as follows:
deriv = torch.zeros([nbatch, 2])
for i in range(nbatch):
    for j in range(2):
        deriv[i, j] = torch.autograd.grad(out[i, j], inp, retain_graph=True)[0][i, 4]
However, this seems very inefficient: it calculates the gradient of out[i,j] with respect to every single element in the batch, and then discards all except one. Is there a better way to do this?
Because of how backpropagation works, computing the gradient w.r.t. only a single input wouldn't necessarily save you much: you would only save some work in the first layer, since all subsequent layers have to be backpropagated through either way.
So this may not be the optimal way, but it doesn't actually create much overhead, especially if your network has many layers.
By the way, is there a reason that you need to loop over nbatch? If you wanted the gradient of each element of a batch w.r.t a parameter, I could understand that, because pytorch will lump them together, but you seem to be solely interested in the input...
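For what it's worth, since out[i, :] depends only on inp[i, :], you can drop the loop over nbatch by summing each output column over the batch before calling autograd.grad; a sketch reusing inp, out and nbatch from the question:

deriv = torch.zeros([nbatch, 2])
for j in range(2):
    # gradient of the summed j-th output w.r.t. all inputs; row i of the result
    # equals d out[i, j] / d inp[i, :] because the batch examples are independent
    g = torch.autograd.grad(out[:, j].sum(), inp, retain_graph=True)[0]
    deriv[:, j] = g[:, 4]  # keep only the derivative w.r.t. the 5th input element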

XOR neural network 2-1-1

I am trying to implement XOR in a neural network with a topology of 2 inputs, 1 element in the hidden layer, and 1 output. But the learning rate is really bad (0.5). I think it is because I am missing a connection between the inputs and the output, but I am not really sure how to do it. I have already added the bias connection so that learning is better. I am only using NumPy.
def sigmoid_output_to_derivative(output):
    return output * (1 - output)

a = 0.1
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
np.random.seed(1)
y = np.array([[0],
              [1],
              [1],
              [0]])
bias = np.ones(4)
X = np.c_[bias, X]
synapse_0 = 2 * np.random.random((3, 1)) - 1
synapse_1 = 2 * np.random.random((1, 1)) - 1

for j in (0, 600000):
    layer_0 = X
    layer_1 = sigmoid(np.dot(layer_0, synapse_0))
    layer_2 = sigmoid(np.dot(layer_1, synapse_1))
    layer_2_error = layer_2 - y
    if (j % 10000) == 0:
        print("Error after " + str(j) + " iterations:" + str(np.mean(np.abs(layer_2_error))))
    layer_2_delta = layer_2_error * sigmoid_output_to_derivative(layer_2)
    layer_1_error = layer_2_delta.dot(synapse_1.T)
    layer_1_delta = layer_1_error * sigmoid_output_to_derivative(layer_1)
    synapse_1 -= a * (layer_1.T.dot(layer_2_delta))
    synapse_0 -= a * (layer_0.T.dot(layer_1_delta))
You need to be careful with statements like
the learning rate is bad
Usually the learning rate is the step size that gradient descent takes in negative gradient direction. So, I'm not sure what you mean by a bad learning rate.
I'm also not sure if I understand your code correctly, but the forward step of a neural net is basically a matrix multiplication of the hidden layer's weight matrix with the input vector. This will (if you set everything up correctly) result in a vector whose size equals the size of your hidden layer. Now you can simply add the bias before applying your logistic function elementwise to this vector.
h_i = f(h_i+bias_in)
Afterwards you can do the same thing for the hidden layer times the output weights and apply its activation to get the outputs.
o_j = f(o_j+bias_h)
The backwards step is to calculate the deltas at output and hidden layer including another elementwise operation with your function
sigmoid_output_to_derivative(output)
and update both weight matrices using the gradients (here the learning rate is needed to define the step size). The gradients are simply the value of a corresponding node times its delta.
Note: The deltas are differently calculated for output and hidden nodes.
I'd advise you to keep separate variables for the biases, because modern approaches usually update a bias by summing up the deltas of its connected nodes, multiplying by a (possibly different) learning rate, and subtracting this product from that specific bias.
Take a look at the following tutorial (it uses numpy):
http://peterroelants.github.io/posts/neural_network_implementation_part04/
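For concreteness, here is a minimal sketch of the forward/backward steps described above with separate bias variables. It uses a 2-2-1 topology rather than your 2-1-1, since a single hidden unit cannot represent XOR without an extra input-to-output connection, and the learning rate and iteration count are arbitrary choices:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_output_to_derivative(output):
    return output * (1 - output)

np.random.seed(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

a = 0.5                                 # learning rate (step size)
W1 = 2 * np.random.random((2, 2)) - 1   # input -> hidden weights, mixed signs
b1 = np.zeros((1, 2))                   # hidden-layer bias, kept separate
W2 = 2 * np.random.random((2, 1)) - 1   # hidden -> output weights
b2 = np.zeros((1, 1))                   # output bias, kept separate

for j in range(60000):
    # forward step: weight matrix times input, add the bias, apply the logistic function
    h = sigmoid(X.dot(W1) + b1)
    o = sigmoid(h.dot(W2) + b2)
    # backward step: deltas are calculated differently for output and hidden nodes
    o_delta = (o - y) * sigmoid_output_to_derivative(o)
    h_delta = o_delta.dot(W2.T) * sigmoid_output_to_derivative(h)
    # gradient = value of the corresponding node times its delta
    W2 -= a * h.T.dot(o_delta)
    b2 -= a * o_delta.sum(axis=0, keepdims=True)
    W1 -= a * X.T.dot(h_delta)
    b1 -= a * h_delta.sum(axis=0, keepdims=True)

If the error gets stuck near 0.5, try a different seed or more iterations.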

Get probable input based on a given output in a neural network

I'm starting to learn neural networks, and I just made a program that learned how to recognize handwritten digits with pretty good accuracy (trained with backpropagation). Now I want to be able to see what the network thinks a perfect number looks like (essentially getting a pixel array which produces a desired number but is not from the dataset). My research has come up empty, but I posted on another site and was suggested to look at backpropagating to the input. I don't have much of a math background, so could someone point me in the right direction on how to implement that (or any other method of achieving my goal)?
You can get some idea of an "ideal" input for each of the classes in a multi-class classifier neural network (NN) by inverting the model and visualizing the weights for the output layer, as projected onto the pixels at the input layer.
Suppose you had a simple linear classifier NN that had 784 inputs (the number of pixels in an MNIST digit image) and 10 outputs (the number of digit classes) -- with no hidden layer. The activation z of the output layer given an input image x (a 784-element column vector) is given by z = f(x) = Wx + b where W is the 10 x 784 weight matrix and b is the 10-element bias vector.
You can do some algebra and invert this model to compute x given z: x = f^-1(z) = W^+ (z - b), where W^+ is the pseudo-inverse of W (W is 10 x 784, so it is not square and has no true inverse). Now let's say you wanted to see the optimal input for class 4. The target output for this class would be z = [0 0 0 0 1 0 0 0 0 0]^T; if we ignore the bias for the moment, then you would just need to compute the 4th column (counting from 0) of the pseudo-inverse of W, a 784-element column vector, rearrange it back into a 28 x 28 image, and view it. This is the optimal input because the output-layer activation is proportional to the dot product of the input and the weight vector for that class, so an input vector that is identical to the weight vector for class 4 will maximally activate that class at the output layer.
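A small sketch of that computation (the W and b here are random stand-ins for the trained parameters):

import numpy as np

W = np.random.randn(10, 784)   # stand-in for the trained 10 x 784 weight matrix
b = np.random.randn(10)        # stand-in for the trained bias
W_pinv = np.linalg.pinv(W)     # 784 x 10 pseudo-inverse, since W is not square
zstar = np.zeros(10)
zstar[4] = 1.0                 # target output for class 4
xstar = W_pinv.dot(zstar - b)  # x = W^+ (z - b)
image = xstar.reshape(28, 28)  # view with e.g. matplotlib's imshow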
Things get more complicated if you add more layers and nonlinearities to the model, but the general approach remains the same. You want some way to compute an optimal input x* given a target output z* for the model, but you only know the (possibly complicated) forward map z = f(x) from inputs to targets. You can treat this as an optimization problem: you're trying to compute x* = f^-1(z*) and you know f and z*. If your knowledge of f allows you to compute a symbolic inverse in closed form, then you can just plug z* in and get x*. If you can't do this, you can always use an iterative optimization procedure to compute successively better approximations x1, x2, ..., xn given a starting guess of x0. Here's some Python pseudocode for doing this using scipy.optimize:
import numpy as np
import scipy.optimize

# our forward model, paired layers of already-trained
# weights and biases.
weights = [np.array(...) ...]
biases = [np.array(...) ...]

def f(x):
    for W, b in zip(weights, biases):
        # relu activation.
        x = np.clip(np.dot(W, x) + b, 0, np.inf)
    return x

# set our sights on class #4.
zstar = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])

# the loss we want to optimize: minimize difference
# between zstar and f(x).
def loss(x):
    return abs(f(x) - zstar).sum()

x0 = np.zeros(784)
result = scipy.optimize.minimize(loss, x0)
Incidentally, this process is basically at the core of the recent "Inceptionism" images from Google -- the optimization process is attempting to determine the input pixels that replicate the state of a particular hidden layer in a complicated network. It's more complicated in this case because of the convolutions etc. but the idea is similar.

Why does this backpropagation implementation fail to train weights correctly?

I've written the following backpropagation routine for a neural network, using the code here as an example. The issue I'm facing is confusing me, and has pushed my debugging skills to their limit.
The problem I am facing is rather simple: as the neural network trains, its weights are being trained to zero with no gain in accuracy.
I have attempted to fix it many times, verifying that:
the training sets are correct
the target vectors are correct
the forward step is recording information correctly
the backward step deltas are recording properly
the signs on the deltas are correct
the weights are indeed being adjusted
the deltas of the input layer are all zero
there are no other errors or overflow warnings
Some information:
The training inputs are an 8x8 grid of [0,16) values representing an intensity; this grid represents a numeral digit (converted to a column vector)
The target vector is an output that is 1 in the position corresponding to the correct number
The original weights and biases are being assigned by Gaussian distribution
The activations are a standard sigmoid
I'm not sure where to go from here. I've verified that all things I know to check are operating correctly, and it's still not working, so I'm asking here. The following is the code I'm using to backpropagate:
def backprop(train_set, wts, bias, eta):
    learning_coef = eta / len(train_set[0])
    for next_set in train_set:
        # These record the sum of the cost gradients in the batch
        sum_del_w = [np.zeros(w.shape) for w in wts]
        sum_del_b = [np.zeros(b.shape) for b in bias]
        for test, sol in next_set:
            del_w = [np.zeros(wt.shape) for wt in wts]
            del_b = [np.zeros(bt.shape) for bt in bias]
            # These two helper functions take training set data and make them useful
            next_input = conv_to_col(test)
            outp = create_tgt_vec(sol)
            # Feedforward step
            pre_sig = []; post_sig = []
            for w, b in zip(wts, bias):
                next_input = np.dot(w, next_input) + b
                pre_sig.append(next_input)
                post_sig.append(sigmoid(next_input))
                next_input = sigmoid(next_input)
            # Backpropagation gradient
            delta = cost_deriv(post_sig[-1], outp) * sigmoid_deriv(pre_sig[-1])
            del_b[-1] = delta
            del_w[-1] = np.dot(delta, post_sig[-2].transpose())
            for i in range(2, len(wts)):
                pre_sig_vec = pre_sig[-i]
                sig_deriv = sigmoid_deriv(pre_sig_vec)
                delta = np.dot(wts[-i+1].transpose(), delta) * sig_deriv
                del_b[-i] = delta
                del_w[-i] = np.dot(delta, post_sig[-i-1].transpose())
            sum_del_w = [dw + sdw for dw, sdw in zip(del_w, sum_del_w)]
            sum_del_b = [db + sdb for db, sdb in zip(del_b, sum_del_b)]
        # Modify weights based on current batch
        wts = [wt - learning_coef * dw for wt, dw in zip(wts, sum_del_w)]
        bias = [bt - learning_coef * db for bt, db in zip(bias, sum_del_b)]
    return wts, bias
At Shep's suggestion, I checked what happens when training a network of shape [2, 1, 1] to always output 1, and indeed the network trains properly in that case. My best guess at this point is that the gradient adjusts too strongly for the 0s and too weakly for the 1s, resulting in a net decrease despite an increase at each step, but I'm not sure.
I suspect your problem is in the choice of initial weights and in the choice of weight-initialization algorithm. Jeff Heaton, author of Encog, claims that this kind of initialization usually performs worse than other initialization methods, and there are other published comparisons of weight-initialization performance as well. From my own experience, I also recommend initializing your weights with mixed signs. Even in cases where all of my outputs were positive, weights with mixed signs performed better than weights of the same sign.
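For example, a symmetric initialization like the following gives every layer weights of both signs (the 64-30-10 layer sizes and the scale are just illustrative, matching the 8x8-digit, 10-class setup):

import numpy as np

rng = np.random.RandomState(0)

def init_layer(n_out, n_in, scale=0.1):
    # uniform in [-scale, scale]: roughly half the weights start out negative
    W = rng.uniform(-scale, scale, size=(n_out, n_in))
    b = rng.uniform(-scale, scale, size=(n_out, 1))
    return W, b

layer_sizes = [64, 30, 10]   # 8x8 inputs, one hidden layer, 10 digit classes
wts, bias = [], []
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    W, b = init_layer(n_out, n_in)
    wts.append(W)
    bias.append(b)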

Issue with gradient calculation in a Neural Network (stuck at 7% error in MNIST)

Hi, I am having an issue with my gradient check when implementing a neural network in Python using NumPy.
I am using the MNIST dataset and mini-batch gradient descent.
I have checked the math and on paper it looks good, so maybe you can give me a hint of what's happening here:
EDIT: One answer made me realize that the cost function was indeed being calculated wrong. However, that does not explain the problem with the gradient, as it is calculated using back_prop. I get a 7% error rate using 300 units in the hidden layer, mini-batch gradient descent with RMSprop, 30 epochs and 100 batches (learning_rate = 0.001, small because of RMSprop).
Each input has 768 features, so for 100 samples I have a matrix. MNIST has 10 classes.
X = NoSamplesxFeatures = 100x768
Y = NoSamplesxClasses = 100x10
I am using a one-hidden-layer neural network with a hidden layer size of 300 when fully training. Another question I have is whether I should use a softmax output function for calculating the error... which I think not. But I am kind of a newbie at all of this, and the obvious might seem strange to me.
(NOTE: I know the code is ugly, but this is my first Python/Numpy code done under pressure, bear with me)
Here are back_prop and the activations:
def sigmoid(z):
    return np.true_divide(1, 1 + np.exp(-z))

# not calculated exactly - this is the fake version to make it faster.
def sigmoid_prime(a):
    return (a) * (1 - a)

def _back_prop(self, W, X, labels, f=sigmoid, fprime=sigmoid_prime, lam=0.001):
    """
    Calculate the partial derivatives of the cost function using backpropagation.
    """
    # Weights for the first layer and the hidden layer
    Wl1, bl1, Wl2, bl2 = self._extract_weights(W)
    # get the forward prop value
    layers_outputs = self._forward_prop(W, X, f)
    # from a number make a binary vector, for mnist 1x10 with all 0 but the number.
    y = self.make_1_of_c_encoding(labels)
    num_samples = X.shape[0]  # layers_outputs[-1].shape[0]
    # Dot product returns Numsamples (N) x Outputs (No Classes)
    # Y is N x No Classes
    big_delta = np.zeros(Wl2.size + bl2.size + Wl1.size + bl1.size)
    big_delta_wl1, big_delta_bl1, big_delta_wl2, big_delta_bl2 = self._extract_weights(big_delta)

    # calculate the gradient for each training sample in the batch and accumulate it
    for i, x in enumerate(X):
        # Error with respect to the output
        dE_dy = layers_outputs[-1][i, :] - y[i, :]
        # bias of the hidden layer
        big_delta_bl2 += dE_dy
        # get the error for the hidden layer
        dE_dz_out = dE_dy * fprime(layers_outputs[-1][i, :])
        # and for the input layer
        dE_dhl = dE_dy.dot(Wl2.T)
        # bias of the input layer
        big_delta_bl1 += dE_dhl
        small_delta_hl = dE_dhl * fprime(layers_outputs[-2][i, :])
        # here calculate the gradient for the weights in the hidden and first layer
        big_delta_wl2 += np.outer(layers_outputs[-2][i, :], dE_dz_out)
        big_delta_wl1 += np.outer(x, small_delta_hl)

    # divide by the number of samples in the batch (should be done here?)
    big_delta_wl2 = np.true_divide(big_delta_wl2, num_samples) + lam * Wl2 * 2
    big_delta_bl2 = np.true_divide(big_delta_bl2, num_samples)
    big_delta_wl1 = np.true_divide(big_delta_wl1, num_samples) + lam * Wl1 * 2
    big_delta_bl1 = np.true_divide(big_delta_bl1, num_samples)

    # return the flattened gradient
    return np.concatenate([big_delta_wl1.ravel(),
                           big_delta_bl1,
                           big_delta_wl2.ravel(),
                           big_delta_bl2.reshape(big_delta_bl2.size)])
Now the feed_forward:
def _forward_prop(self, W, X, transfer_func=sigmoid):
    """
    Return the outputs of the layers of the net; the last entry is
    Numsamples (N) x Outputs (No Classes).
    """
    # Hidden layer DxHLS
    weights_L1, bias_L1, weights_L2, bias_L2 = self._extract_weights(W)
    # Output layer HLSxOUT
    # A_2 = N x HLS
    A_2 = transfer_func(np.dot(X, weights_L1) + bias_L1)
    # A_3 = N x Outputs
    A_3 = transfer_func(np.dot(A_2, weights_L2) + bias_L2)
    # output layer
    return [A_2, A_3]
And the cost function for the gradient checking:
def cost_function(self, W, X, labels, reg=0.001):
    """
    reg: regularization term
    No weight decay term - let's leave it for later
    """
    outputs = self._forward_prop(W, X, sigmoid)[-1]  # take the last layer's output
    sample_size = X.shape[0]
    y = self.make_1_of_c_encoding(labels)
    e1 = np.sum((outputs - y) ** 2, axis=1) * 0.5
    # error = e1.sum(axis=1)
    error = e1.sum() / sample_size + 0.5 * reg * (np.square(W)).sum()
    return error
What kind of results are you getting when you run gradient checking? Often times you can tease out the nature of the implementation error by looking at the output of your gradient vs the output produced by gradient checking.
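If you don't have one already, a simple finite-difference check along these lines will show you which entries of the gradient disagree (my own generic sketch, assuming the cost_function and _back_prop signatures from your code and a flat weight vector W):

import numpy as np

def gradient_check(net, W, X, labels, eps=1e-5, n_checks=20):
    # analytic gradient from backprop, flattened the same way W is
    analytic = net._back_prop(W, X, labels)
    rng = np.random.RandomState(0)
    for idx in rng.randint(0, W.size, size=n_checks):
        W_plus, W_minus = W.copy(), W.copy()
        W_plus[idx] += eps
        W_minus[idx] -= eps
        numeric = (net.cost_function(W_plus, X, labels)
                   - net.cost_function(W_minus, X, labels)) / (2 * eps)
        print(idx, analytic[idx], numeric, abs(analytic[idx] - numeric))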
Furthermore, square error is usually a poor choice for a classification task such as MNIST and I would suggest using either a simple sigmoid top-layer or a softmax. With sigmoid the cross entropy function you want to use is:
L(h,Y) = -Y*log(h) - (1-Y)*log(1-h)
For a softmax
L(h,Y) = -sum(Y*log(h))
where Y is the target given as a 1x10 vector and h is your predicted value, but easily extends to arbitrary batch sizes.
In both cases the top-layer delta simply becomes:
delta = h - Y
And the top-layer gradient becomes:
grad = dot(delta, A_in)
Where A_in is the input into the top layer from the previous layer.
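In batched form, with the N x HLS hidden activations and HLS x 10 output weights from your code, that looks roughly like this (a sketch, not a drop-in replacement for your methods):

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # subtract the row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def top_layer_grads(A_in, W_l2, b_l2, Y):
    h = softmax(A_in.dot(W_l2) + b_l2)      # predictions, N x 10
    loss = -np.sum(Y * np.log(h)) / len(Y)  # L(h, Y) = -sum(Y * log(h)), averaged over the batch
    delta = h - Y                           # top-layer delta
    grad_W = A_in.T.dot(delta) / len(Y)     # batched version of dot(delta, A_in)
    grad_b = delta.sum(axis=0) / len(Y)
    return loss, grad_W, grad_b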
While I am having some trouble getting my head around your backprop routine, I suspect from your code that the error in gradient is due to the fact that you are not calculating the top-level dE/dw_l2 correctly when using square error, along with computing fprime on the incorrect input.
When using square error the top layer delta should be:
delta = (h - Y) * fprime(Z_l2)
Here Z_l2 is the input into your transfer function for layer 2. Similarly when computing fprime for the lower layers, you want to use the input to your transfer function (i.e. dot(X,weights_L1) + bias_L1)
Hope that helps.
EDIT:
As some added justification for using cross entropy error over square error I would suggest looking up Geoffrey Hinton's lectures on linear classification methods:
www.cs.toronto.edu/~hinton/csc2515/notes/lec3.ppt
EDIT2:
I ran some tests locally with my implementation of neural nets on the MNIST dataset with different parameters and 1 hidden layer using RMSPROP. Here are the results:
Test1
Epochs: 30
Hidden Size: 300
Learn Rate: 0.001
Lambda: 0.001
Train Method: RMSPROP with decrements=0.5 and increments=1.3
Train Error: 6.1%
Test Error: 6.9%
Test2
Epochs: 30
Hidden Size: 300
Learn Rate: 0.001
Lambda: 0.000002
Train Method: RMSPROP with decrements=0.5 and increments=1.3
Train Error: 4.5%
Test Error: 5.7%
It already appears that if you decrease your lambda parameter by a couple of orders of magnitude, you should end up with better performance.
