I am trying to implement the back-propagation algorithm using numpy in Python. I have been using this site to implement the matrix form of back-propagation. While testing this code on XOR, my network does not converge even after multiple runs of thousands of iterations. I think there is some sort of logic error. I would be very grateful if anyone would be willing to look it over. Fully runnable code can be found on GitHub.
import numpy as np

def backpropagate(network, tests, iterations=50):
    # convert tests into numpy matrices
    tests = [(np.matrix(inputs, dtype=np.float64).reshape(len(inputs), 1),
              np.matrix(expected, dtype=np.float64).reshape(len(expected), 1))
             for inputs, expected in tests]
    for _ in range(iterations):
        # accumulate the weight and bias deltas
        weight_delta = [np.zeros(matrix.shape) for matrix in network.weights]
        bias_delta = [np.zeros(matrix.shape) for matrix in network.bias]
        # iterate over the tests
        for potentials, expected in tests:
            # input the potentials into the network
            # calling the network with trace == True returns a list of matrices,
            # representing the potentials of each layer
            trace = network(potentials, trace=True)
            errors = [expected - trace[-1]]
            # iterate over the layers backwards
            for weight_matrix, layer in reversed(list(zip(network.weights, trace))):
                # compute the error vector for a layer
                errors.append(np.multiply(weight_matrix.transpose() * errors[-1],
                                          network.sigmoid.derivative(layer)))
            # remove the input layer
            errors.pop()
            errors.reverse()
            # compute the deltas for bias and weight
            for index, error in enumerate(errors):
                bias_delta[index] += error
                weight_delta[index] += error * trace[index].transpose()
        # apply the deltas
        for index, delta in enumerate(weight_delta):
            network.weights[index] += delta
        for index, delta in enumerate(bias_delta):
            network.bias[index] += delta
Additionally, here is the code that computes the output, along with my sigmoid function. It is less likely that the bug lies here; I was able to train a network to simulate XOR using simulated annealing.
# the call function of the neural network
def __call__(self, potentials, trace=True):
    # ensure the input is properly formatted
    potentials = np.matrix(potentials, dtype=np.float64).reshape(len(potentials), 1)
    # accumulate the trace
    trace = [potentials]
    # iterate over the weights
    for index, weight_matrix in enumerate(self.weights):
        potentials = weight_matrix * potentials + self.bias[index]
        potentials = self.sigmoid(potentials)
        trace.append(potentials)
    return trace
# The sigmoid function that is stored in the network
def sigmoid(x):
    return np.tanh(x)
sigmoid.derivative = lambda x: 1 - np.square(x)
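As a quick check that this derivative convention is right: trace stores post-activation values, so the derivative is expressed in terms of the output of tanh, i.e. d/dz tanh(z) = 1 - tanh(z)^2 = 1 - x^2 for x = tanh(z). A small finite-difference sketch confirms it:

import numpy as np

z = np.linspace(-2, 2, 9)
x = np.tanh(z)
eps = 1e-6
analytic = 1 - np.square(x)                             # derivative in terms of the output
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))               # on the order of 1e-10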
The problem is the missing step-size parameter. The gradient should additionally be scaled by a small learning rate, rather than applied as one full step in weight space at once. So instead of network.weights[index] += delta and network.bias[index] += delta it should be:
def backpropagate(network, tests, stepSize=0.01, iterations=50):
    #...
    network.weights[index] += stepSize * delta
    #...
    network.bias[index] += stepSize * delta
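With a small step size the same tanh/backprop arithmetic converges on XOR. Here is a minimal, self-contained sketch (not the asker's Network class; layer sizes, seed and the step value are illustrative):

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (3, 2)); b1 = np.zeros((3, 1))      # 2 inputs -> 3 hidden
W2 = rng.normal(0, 1, (1, 3)); b2 = np.zeros((1, 1))      # 3 hidden -> 1 output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float).T   # 2 x 4 inputs
Y = np.array([[0, 1, 1, 0]], float)                       # 1 x 4 targets

step = 0.1
for _ in range(5000):
    H = np.tanh(W1 @ X + b1)            # hidden activations
    O = np.tanh(W2 @ H + b2)            # output activations
    dO = (O - Y) * (1 - O ** 2)         # output-layer error
    dH = (W2.T @ dO) * (1 - H ** 2)     # error propagated back one layer
    W2 -= step * dO @ H.T; b2 -= step * dO.sum(axis=1, keepdims=True)
    W1 -= step * dH @ X.T; b1 -= step * dH.sum(axis=1, keepdims=True)

print(np.round(O, 2))   # should approach [[0. 1. 1. 0.]] as training converges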
EDIT: Problem solved, I just had to "transform" the input W using W.data ...
Hi guys,
In my code, I am trying to train a model so that it moves a given sample to a given target distribution. The next step is to introduce intermediate distributions and to use a loop so that the particles (the samples) are moved from one distribution to another iteratively. Unfortunately, at the second iteration I get the following error message when running my code:
"Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward"
I don't think that retain_graph = True fits my problem, since I would rather clear the model after every iteration than retain it. However, I gave it a shot; the result is the following error:
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 2]] is at version 2251; expected version 2250 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Here are the relevant parts of my code:
for k in range(1, K_intermediate + 1):
    flow = BasicFlow(dim=d, n_flows=n_flows, flow_layer=flow_layer)
    ldj = train_flow(
        flow, x, W, f_intermediate(x, k - 1),
        lambda x: f_intermediate(x, k), epochs=2500
    )
    x, xtransp = flow(x)
    x = xtransp.data
And the train_flow function:
def train_flow(flow, sample, weights, f0, f1, epochs=1000):
    optim = torch.optim.Adam(flow.parameters(), lr=1e-2)
    for i in range(epochs):
        x0, xtransp = flow(sample)
        ldj = accumulate_kl_div(flow).reshape(sample.size(0))
        loss = det_loss(
            x_0=x0,
            x_transp=xtransp,
            weights=weights,
            ldj=ldj,
            f0=f0,
            f1=f1
        )
        loss.backward(retain_graph=True)
        optim.step()
        optim.zero_grad()
        reset_kl_div(flow)
        if i % 250 == 0:
            if i > 0 and previous_loss - loss.item() < 1e-06:
                break
            print(loss.item())
            previous_loss = loss.item()
        if torch.isnan(loss):
            break
    return ldj
Note that the problem only arises once I start capturing the ldj value (the log-determinant of the Jacobian, for those who wonder). Since this value is crucial for further computations, I cannot just delete it.
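For readers hitting the same pair of errors: the edit above works because using .data cuts the carried-over sample out of the previous iteration's graph. A minimal sketch of the same pattern with the more explicit .detach() (the model and loss here are placeholders, not the actual flow):

import torch

model = torch.nn.Linear(2, 2)                 # placeholder for the flow model
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

x = torch.randn(16, 2)
for k in range(3):                            # outer loop over intermediate distributions
    for i in range(100):                      # inner training loop
        y = model(x)
        loss = (y ** 2).mean()                # placeholder loss
        opt.zero_grad()
        loss.backward()                       # fresh graph each step, no retain_graph needed
        opt.step()
    x = model(x).detach()                     # carry the sample forward, drop the old graph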
I am currently investigating the development of a convolutional neural network that operates efficiently on arrays of up to 5 or 6 dimensions.
I was aware that many of the tools used for convolutional neural networks do not really handle N-D convolutions, so I decided to try to write an implementation of helix convolution, whereby the convolution can be treated as one large 1-D convolution (see Reference 1, http://sepwww.stanford.edu/public/docs/sep95/jon1/paper_html/node2.html, and Reference 2, https://sites.ualberta.ca/~mostafan/Files/Papers/md_convolution_TLE2009.pdf, for details of the concept).
I did this under the (possibly incorrect) assumption that a large single-dimensional convolution was likely to be easier on a GPU than a multidimensional one, and that the method is trivially scalable to N dimensions.
In particular, a quote from Reference 2 states:
We have not found important gains in computational efficiency between N-D standard convolution versus using the algorithm described in the text. We have, however, found that writing codes for seismic data regularization with the described trick leads to algorithms that can easily handle regularization problems with any number of spatial dimensions (Naghizadeh and Sacchi, 2009).
I have written an implementation of the function below, which I compare against signal.fftconvolve. It is slower on the CPU than that function, but I would nonetheless like to see how it performs on the GPU in PyTorch as a forward convolutional layer.
Can someone kindly help me port this code to PyTorch so I can verify how it behaves?
"""
HELIX CONVOLUTION FUNCTION
Shrink:
CROPS THE SIZE OF THE CONVOLVED SIGNAL DOWN TO THE ORIGINAL SIZE OF THE ORIGINAL.
Pad:
PADS THE DIFFERENCE BETWEEN THE ORIGINAL SHAPE AND THE DESIRED, CONVOLVED SHAPE FOR KERNEL AND SIGNAL.
GetLength:
EXTRACTS THE LENGTH OF THE UNWOUND STRIP OF THE SIGNAL AND KERNEL THAT IS TO BE CONVOLVED.
FFTConvolve:
USES THE NUMPY FFT PACKAGE TO PERFORM FAST FOURIER CONVOLUTION ON THE SIGNALS
Convolve:
USES HELIX CONVOLUTION ON AN INPUT ARRAY AND KERNEL.
"""
import numpy as np
from scipy import signal
import operator
import time

class HelixCPU:
    @classmethod
    def Shrink(cls, array, bounding):
        start = tuple(map(lambda a, da: (a - da) // 2, array.shape, bounding))
        end = tuple(map(operator.add, start, bounding))
        slices = tuple(map(slice, start, end))
        return array[slices]

    @classmethod
    def Pad(cls, array, target_shape):
        diff = target_shape - array.shape
        padder = [(0, val) for val in diff]
        padded = np.pad(array, padder, 'constant')
        return padded

    @classmethod
    def GetLength(cls, array_shape, padded_shape):
        temp = 1
        steps = np.zeros_like(array_shape)
        for i, entry in enumerate(padded_shape[::-1]):
            if i == len(padded_shape) - 1:
                steps[i] = 1
            else:
                temp = entry * temp
                steps[i] = temp
        steps = np.roll(steps, 1)
        steps = steps[::-1]
        ones = np.ones_like(array_shape)
        ones[-1] = 0
        out = np.multiply(steps, array_shape - ones)
        length = np.sum(out)
        return length

    @classmethod
    def FFTConvolve(cls, in1, in2, len1, len2):
        s1 = len1
        s2 = len2
        shape = s1 + s2 - 1
        fsize = 2 ** np.ceil(np.log2(shape)).astype(int)
        fslice = slice(0, shape)
        conv = np.fft.ifft(np.fft.fft(in1, int(fsize)) * np.fft.fft(in2, int(fsize)))[fslice].copy()
        return conv

    @classmethod
    def Convolve(cls, array, kernel):
        m = array.shape
        n = kernel.shape
        mn = np.add(m, n)
        mn = mn - np.ones_like(mn)
        k_pad = cls.Pad(kernel, mn)
        a_pad = cls.Pad(array, mn)
        length_k = cls.GetLength(kernel.shape, k_pad.shape)
        length_a = cls.GetLength(array.shape, a_pad.shape)
        k_flat = k_pad.flatten()[0:length_k]
        a_flat = a_pad.flatten()[0:length_a]
        conv = cls.FFTConvolve(a_flat, k_flat, length_a, length_k)
        conv = np.resize(conv, mn)
        conv = cls.Shrink(conv, m)
        return conv
def main():
    array = np.random.rand(25, 25, 41, 51)
    kernel = np.random.rand(10, 10, 10, 10)
    start2 = time.process_time()
    test2 = HelixCPU.Convolve(array, kernel)
    end2 = time.process_time()
    start1 = time.process_time()
    test1 = signal.fftconvolve(array, kernel, "same")
    end1 = time.process_time()
    print("")
    print("========================")
    print("SOME LARGE CONVOLVED RANDOM ARRAYS.")
    print("========================")
    print("")
    print("Random Calorimeter Image of Size {0} Created".format(array.shape))
    print("Random Kernel of Size {0} Created".format(kernel.shape))
    print("")
    print("Value\tOriginal\tHelix")
    print("Time Taken [s]\t{0}\t{1}\t{2}".format((end1 - start1), (end2 - start2), (end2 - start2) / (end1 - start1)))
    print("Maximum Value\t{:03.2f}\t{:13.2f}".format(np.max(test1), np.max(test2)))
    print("Matrix Norm \t{:03.2f}\t{:13.2f}".format(np.linalg.norm(test1), np.linalg.norm(test2)))
    print("All Close?\t{0}".format(np.allclose(test1, test2)))
Sorry, I cannot add a comment due to low rep, so I am asking my question as an answer and hopefully can answer yours.
By helix convolution, do you mean defining the convolution operation as a single matrix multiplication? If so, I did try this in the past, but it is too memory-inefficient to be practical.
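For clarity, here is a toy sketch of what I mean by convolution as a single matrix multiplication (im2col; names and sizes are illustrative), which shows the memory blow-up:

import numpy as np

# im2col on a single-channel image: every k-by-k patch becomes a column,
# so the unrolled matrix uses roughly k*k times the memory of the image.
def im2col(img, k):
    H, W = img.shape
    cols = [img[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1)
            for j in range(W - k + 1)]
    return np.stack(cols, axis=1)   # shape (k*k, num_patches)

img = np.random.rand(64, 64)
kernel = np.random.rand(5, 5)
cols = im2col(img, 5)
out = kernel.ravel() @ cols         # sliding cross-correlation as one matmul
print(img.nbytes, cols.nbytes)      # cols is roughly 25x larger than img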
I am trying to implement Probabilistic Matrix Factorization with Stochastic Gradient Descent updates in theano, without using a for loop.
I have just started learning the basics of theano; unfortunately, in my experiment I get this error:
UnusedInputError: theano.function was asked to create a function computing outputs given certain inputs, but the provided input variable at index 0 is not part of the computational graph needed to compute the outputs: trainM.
The source code is the following:
def create_training_set_matrix(training_set):
    return np.array([
        [_i, _j, _Rij]
        for (_i, _j), _Rij
        in training_set
    ])

def main():
    R = movielens.small()
    U_values = np.random.random((config.K, R.shape[0]))
    V_values = np.random.random((config.K, R.shape[1]))
    U = theano.shared(U_values)
    V = theano.shared(V_values)
    lr = T.dscalar('lr')
    trainM = T.dmatrix('trainM')

    def step(curr):
        i = T.cast(curr[0], 'int32')
        j = T.cast(curr[1], 'int32')
        Rij = curr[2]
        eij = T.dot(U[:, i].T, V[:, j])
        T.inc_subtensor(U[:, i], lr * eij * V[:, j])
        T.inc_subtensor(V[:, j], lr * eij * U[:, i])
        return {}

    values, updates = theano.scan(step, sequences=[trainM])
    scan_fn = function([trainM, lr], values)
    print "training pmf..."
    for training_set in cftools.epochsloop(R, U_values, V_values):
        training_set_matrix = create_training_set_matrix(training_set)
        scan_fn(training_set_matrix, config.lr)
I realize that it's a rather unconventional way to use theano.scan: do you have a suggestion on how I could implement my algorithm better?
The main difficulty lies in the updates: a single update depends on possibly all of the previous updates. For this reason I defined the latent matrices U and V as shared variables (I hope I did that correctly).
The version of theano I am using is: 0.8.0.dev0.dev-8d6800181bedb03a4bced4f456338e5194524317
Any hint and suggestion is highly appreciated. I am available to provide further details.
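One likely fix, sketched under two assumptions: that scan's inner function may return an updates dictionary mapping shared variables to new expressions (which is what connects trainM to the computation and avoids the UnusedInputError), and that eij should be the residual Rij minus the prediction, as in the usual PMF SGD update. T.inc_subtensor returns a new expression rather than modifying U or V in place, so its result must be handed back as an update instead of being discarded:

from collections import OrderedDict

# Editorial sketch, placed inside main() after trainM is defined;
# eij is taken as the residual, as in the usual PMF update.
def step(curr):
    i = T.cast(curr[0], 'int32')
    j = T.cast(curr[1], 'int32')
    Rij = curr[2]
    eij = Rij - T.dot(U[:, i].T, V[:, j])
    new_U = T.inc_subtensor(U[:, i], lr * eij * V[:, j])
    new_V = T.inc_subtensor(V[:, j], lr * eij * U[:, i])
    # hand the new expressions back to scan as updates on the shared variables
    return OrderedDict([(U, new_U), (V, new_V)])

_, updates = theano.scan(step, sequences=[trainM])
scan_fn = theano.function([trainM, lr], [], updates=updates)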
I am trying to implement this algorithm to find the intercept and slope for a single variable:
Here is my Python code to update the intercept and slope, but it is not converging: RSS increases with each iteration rather than decreasing, and after some iterations it becomes infinite. I cannot find any error in my implementation of the algorithm. How can I solve this problem? I have attached the CSV file too.
Here is the code.
import pandas as pd
import numpy as np

# Defining gradient descent.
# This function takes the X values, the Y values and a vector of w0 (intercept), w1 (slope):
# INPUT FEATURES = X (sq. feet of house size)
# TARGET VALUE   = Y (price of house)
# W = np.array([w0, w1]).reshape(2, 1)
# W = [w0,
#      w1]
def gradient_decend(X, Y, W):
    intercept = W[0][0]
    slope = W[1][0]
    # Here I will get a list like this:
    # gd = [sum(y - (intercept + slope*x)),
    #       sum((y - (intercept + slope*x)) * x)]
    gd = [sum(y - (intercept + slope * x) for x, y in zip(X, Y)),
          sum(((y - (intercept + slope * x)) * x) for x, y in zip(X, Y))]
    return np.array(gd).reshape(2, 1)

# Defining the residual sum of squares
def RSS(X, Y, W):
    return sum((y - (W[0][0] + W[1][0] * x)) ** 2 for x, y in zip(X, Y))

# Reading the training data
training_data = pd.read_csv("kc_house_train_data.csv")

# Defining fixed parameters
# Learning rate
n = 0.0001
iteration = 1500
# Intercept
w0 = 0
# Slope
w1 = 0

# Creating a 2x1 vector of the w0, w1 parameters
W = np.array([w0, w1]).reshape(2, 1)

# Running gradient descent
for i in range(iteration):
    W = W + ((2 * n) * gradient_decend(training_data["sqft_living"], training_data["price"], W))
    print RSS(training_data["sqft_living"], training_data["price"], W)
Here is the CSV file.
Firstly, I find that when writing machine learning code, it's best NOT to use complex list comprehensions: anything you can iterate over is easier to read when written with normal loops and indentation, and/or can be done with numpy broadcasting.
Using proper variable names can also help you better understand the code. Using Xs, Ys and Ws as shorthand is nice only if you're good at math; personally, I don't use them in code, especially when writing in Python. From import this: explicit is better than implicit.
My rule of thumb is to remember that if I write code I can't read 1 week later, it's bad code.
First, let's decide what the input parameters for gradient descent are. You will need:
feature_matrix (The X matrix, type: numpy.array, a matrix of N * D size, where N is the no. of rows/datapoints and D is the no. of columns/features)
output (The Y vector, type: numpy.array, a vector of size N)
initial_weights (type: numpy.array, a vector of size D).
Additionally, to check for convergence you will need:
step_size (the magnitude of the change applied to the weights at each iteration; type: float, usually a small number)
tolerance (the criterion for breaking the iterations: when the gradient magnitude is smaller than the tolerance, assume that your weights have converged; type: float, usually a small number but much bigger than the step size).
Now to the code.
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False  # Set a boolean to check for convergence
    weights = np.array(initial_weights)  # make sure it's a numpy array
    while not converged:
        # compute the predictions based on feature_matrix and weights.
        # iterate through the rows and find the single scalar predicted
        # value for each weight * column.
        # hint: a dot product can solve this easily
        predictions = [??? for row in feature_matrix]
        # compute the errors as predictions - output
        errors = predictions - output
        gradient_sum_squares = 0  # initialize the gradient sum of squares
        # while we haven't reached the tolerance yet, update each feature's weight
        for i in range(len(weights)):  # loop over each weight
            # Recall that feature_matrix[:, i] is the feature column associated with weights[i]
            # compute the derivative for weight[i]:
            # Hint: the derivative is 2 * the dot product of feature_column and errors.
            derivative = 2 * ????
            # add the squared value of the derivative to the gradient magnitude (for assessing convergence)
            gradient_sum_squares += (derivative * derivative)
            # subtract the step size times the derivative from the current weight
            weights[i] -= (step_size * derivative)
        # compute the square root of the gradient sum of squares to get the gradient magnitude:
        gradient_magnitude = ???
        # Then check whether the magnitude is lower than the tolerance.
        if ???:
            converged = True
    # Once the while loop breaks, return the weights.
    return weights
I hope the extended pseudo-code helps you better understand the gradient descent. I won't fill in the ???, so as not to spoil your homework.
Note that your RSS code is also hard to read and maintain. It's easier to do just:
>>> import numpy as np
>>> prediction = np.array([1,2,3])
>>> output = np.array([1,1,5])
>>> residual = output - prediction
>>> RSS = sum(residual * residual)
>>> RSS
5
Going through the numpy basics will take you a long way in machine learning and matrix-vector manipulation without going nuts with iterations: http://docs.scipy.org/doc/numpy-1.10.1/user/basics.html
I have solved my own problem!
Here is how I solved it.
import numpy as np
import pandas as pd
import math
from sys import stdout

# This function takes the pandas dataframe, the input feature list and the target column name
def get_numpy_data(data, features, output):
    # Add a constant column with value 1 to the dataframe.
    data['constant'] = 1
    # Add the name of the constant column to the feature list.
    features = ['constant'] + features
    # Create the feature matrix (select the columns and convert to a matrix).
    features_matrix = data[features].as_matrix()
    # The target column is converted to a numpy array.
    output_array = np.array(data[output])
    return (features_matrix, output_array)

def predict_outcome(feature_matrix, weights):
    weights = np.array(weights)
    predictions = np.dot(feature_matrix, weights)
    return predictions

def errors(output, predictions):
    errors = predictions - output
    return errors

def feature_derivative(errors, feature):
    derivative = 2 * np.dot(feature, errors)
    return derivative

def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    # Initial weights are converted to a numpy array
    weights = np.array(initial_weights)
    while not converged:
        # compute the predictions based on feature_matrix and weights:
        predictions = predict_outcome(feature_matrix, weights)
        # compute the errors as predictions - output:
        error = errors(output, predictions)
        gradient_sum_squares = 0  # initialize the gradient
        # while not converged, update each weight individually:
        for i in range(len(weights)):
            # Recall that feature_matrix[:, i] is the feature column associated with weights[i]
            feature = feature_matrix[:, i]
            # compute the derivative for weight[i]:
            deriv = feature_derivative(error, feature)
            # add the squared derivative to the gradient magnitude
            gradient_sum_squares = gradient_sum_squares + (deriv ** 2)
            # update the weight based on the step size and derivative:
            weights[i] = weights[i] - (step_size * deriv)
        gradient_magnitude = math.sqrt(gradient_sum_squares)
        stdout.write("\r%d" % int(gradient_magnitude))
        stdout.flush()
        if gradient_magnitude < tolerance:
            converged = True
    return weights

# Example of use
# Importing training and testing data
# train_data = pd.read_csv("kc_house_train_data.csv")
# test_data = pd.read_csv("kc_house_test_data.csv")
# simple_features = ['sqft_living', 'sqft_living15']
# my_output = 'price'
# (simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
# initial_weights = np.array([-100000., 1., 1.])
# step_size = 7e-12
# tolerance = 2.5e7
# simple_weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
# print simple_weights
It is so simple:
def mean(values):
    return sum(values) / float(len(values))

def variance(values, mean):
    return sum([(x - mean) ** 2 for x in values])

def covariance(x, mean_x, y, mean_y):
    covar = 0.0
    for i in range(len(x)):
        covar += (x[i] - mean_x) * (y[i] - mean_y)
    return covar

def coefficients(dataset):
    x = []
    y = []
    for line in dataset:
        xi, yi = map(float, line.split(','))
        x.append(xi)
        y.append(yi)
    dataset.close()
    x_mean, y_mean = mean(x), mean(y)
    b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
    b0 = y_mean - b1 * x_mean
    return [b0, b1]

dataset = open('trainingdata.txt')
b0, b1 = coefficients(dataset)
n = float(raw_input())
print(b0 + b1 * n)
Reference: www.machinelearningmastery.com/implement-simple-linear-regression-scratch-python/
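As a quick sanity check of the closed-form coefficients (a sketch with made-up data), the result should agree with numpy's least-squares fit:

import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 4., 5., 4., 5.])

# b1 = covariance(x, y) / variance(x); b0 = mean(y) - b1 * mean(x)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(b0, b1)                       # closed-form intercept and slope
print(np.polyfit(x, y, 1)[::-1])    # numpy's fit, reordered to (intercept, slope)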
I've been banging my head against this for a while and can't figure out what I've done wrong (if anything) in implementing these RNNs. To spare you guys the forward phase, I can tell you that the two implementations compute the same outputs, so the forward phase is correct. The problem is in the backward phase.
Here is my Python backward code. It follows the style of Karpathy's neuraltalk quite closely, but not exactly:
def backward(self, cache, target, c=leastsquares_cost, dc=leastsquares_dcost):
    '''
    cache is from the forward pass
    c is a cost function
    dc is a function used as dc(output, target) which gives the gradient dc/doutput
    '''
    XdotW = cache['XdotW']  # num_time_steps x hidden_size
    Hin = cache['Hin']      # num_time_steps x hidden_size
    T = Hin.shape[0]
    Hout = cache['Hout']
    Xin = cache['Xin']
    Xout = cache['Xout']
    Oin = cache['Oin']      # num_time_steps x output_size
    Oout = cache['Oout']
    dcdOin = dc(Oout, target)  # this will be num_time_steps x num_outputs. these are dc/dO_j
    dcdWho = np.dot(Hout.transpose(), dcdOin)  # this is the sum of outer products for all time
    # bias term is added at the end with coefficient 1, hence the dot product is just the sum
    dcdbho = np.sum(dcdOin, axis=0, keepdims=True)  # this sums all the time steps
    dcdHout = np.dot(dcdOin, self.Who.transpose())  # dcdHout_ij should be the dot product of dcdOin and the i'th row of Who; this is only for the outputs
    # now go back in time
    dcdHin = np.zeros(dcdHout.shape)
    # for t=T we can ignore the other term (error from the next timestep). self.df is the derivative of the activation function (here, tanh):
    dcdHin[T-1] = self.df(Hin[T-1]) * dcdHout[T-1]  # because we don't need to worry about the next timestep, dcdHout is already correct for t=T
    for t in reversed(xrange(T-1)):
        # we need to add to dcdHout[t] the error from the next timestep
        dcdHout[t] += np.dot(dcdHin[t], self.Whh.transpose())
        # now we have the correct form for dcdHout[t]
        dcdHin[t] = self.df(Hin[t]) * dcdHout[t]
    # now we've gone through all t, and we can continue
    dcdWhh = np.zeros(self.Whh.shape)
    for t in range(T-1):  # skip T because dHdin[T+1] doesn't exist
        dcdWhh += np.outer(Hout[t], dcdHin[t+1])
    # and we can do the bias as well
    dcdbhh = np.sum(dcdHin, axis=0, keepdims=True)
    # now we need to go back to the embeddings
    dcdWxh = np.dot(Xout.transpose(), dcdHin)
    return {'dcdOin': dcdOin, 'dcdWxh': dcdWxh, 'dcdWhh': dcdWhh, 'dcdWho': dcdWho, 'dcdbhh': dcdbhh, 'dcdbho': dcdbho, 'cost': c(Oout, target)}
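When the analytic gradients disagree like this, a finite-difference check localizes exactly which entries are wrong. A sketch, assuming the forward/backward interface used above (cache = rnn.forward(X); rnn.backward(cache, target) returns a dict with a scalar 'cost' entry):

import numpy as np

def check_grad(rnn, X, target, W, dW_analytic, eps=1e-5, trials=10):
    # W must be the rnn's actual weight array (e.g. rnn.Whh) so that the
    # in-place perturbations below are visible to rnn.forward
    for _ in range(trials):
        idx = tuple(np.random.randint(0, s) for s in W.shape)
        orig = W[idx]
        W[idx] = orig + eps
        c_plus = rnn.backward(rnn.forward(X), target)['cost']
        W[idx] = orig - eps
        c_minus = rnn.backward(rnn.forward(X), target)['cost']
        W[idx] = orig
        numeric = (c_plus - c_minus) / (2 * eps)
        print(idx, numeric, dW_analytic[idx])   # the two columns should agree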
And here's the theano code (mainly copied from another implementation I found online). I initialize the weights to my pure-Python RNN's randomized weights so that everything is the same:
# input (where first dimension is time)
u = TT.matrix()
# target (where first dimension is time)
t = TT.matrix()
# initial hidden state of the RNN
h0 = TT.vector()
# learning rate
lr = TT.scalar()
# recurrent weights as a shared variable
W = theano.shared(rnn.Whh)
# input to hidden layer weights
W_in = theano.shared(rnn.Wxh)
# hidden to output layer weights
W_out = theano.shared(rnn.Who)
# bias 1
b_h = theano.shared(rnn.bhh[0])
# bias 2
b_o = theano.shared(rnn.bho[0])

# recurrent function (using tanh activation function) and linear output
# activation function
def step(u_t, h_tm1, W, W_in, W_out):
    h_t = TT.tanh(TT.dot(u_t, W_in) + TT.dot(h_tm1, W) + b_h)
    y_t = TT.dot(h_t, W_out) + b_o
    return h_t, y_t

# the hidden state `h` for the entire sequence, and the output for the
# entire sequence `y` (first dimension is always time)
[h, y], _ = theano.scan(step,
                        sequences=u,
                        outputs_info=[h0, None],
                        non_sequences=[W, W_in, W_out])
# error between output and target
error = (.5 * (y - t) ** 2).sum()
# gradients on the weights using BPTT
gW, gW_in, gW_out, gb_h, gb_o = TT.grad(error, [W, W_in, W_out, b_h, b_o])
# training function, that computes the error and updates the weights using
# SGD.
Now here's the crazy thing. If I run the following:
fn = theano.function([h0, u, t, lr],
                     [error, y, h, gW, gW_in, gW_out, gb_h, gb_o],
                     updates={W: W - lr * gW,
                              W_in: W_in - lr * gW_in,
                              W_out: W_out - lr * gW_out})
er, yout, hout, gWhh, gWhx, gWho, gbh, gbo = fn(numpy.zeros((n,)), numpy.eye(5), numpy.eye(5), .01)
cache = rnn.forward(np.eye(5))
bc = rnn.backward(cache, np.eye(5))

print "sum difference between gWho (theano) and bc['dcdWho'] (pure python):"
print np.sum(gWho - bc['dcdWho'])
print "sum difference between gWhh (theano) and bc['dcdWhh'] (pure python):"
print np.sum(gWhh - bc['dcdWhh'])
print "sum difference between gWhx (theano) and bc['dcdWxh'] (pure python):"
print np.sum(gWhx - bc['dcdWxh'])
print "sum difference between the last row of gWhx (theano) and the last row of bc['dcdWxh'] (pure python):"
print np.sum(gWhx[-1] - bc['dcdWxh'][-1])
I get the following output:
sum difference between gWho (theano) and bc['dcdWho'] (pure python):
-4.59268040265e-16
sum difference between gWhh (theano) and bc['dcdWhh'] (pure python):
0.120527063611
sum difference between gWhx (theano) and bc['dcdWxh'] (pure python):
-0.332613468652
sum difference between the last row of gWhx (theano) and the last row of bc['dcdWxh'] (pure python):
4.33680868994e-18
So, I'm getting the derivatives of the weight matrix between the hidden layer and the output right, but not the derivatives of the hidden -> hidden or input -> hidden weight matrices. The insane thing is that I ALWAYS get the LAST ROW of the input -> hidden weight matrix correct. This makes no sense to me; I have no idea what's happening here. Note that the last row of the input -> hidden weight matrix does NOT correspond to the last timestep or anything like that (which would be explained, for example, by my calculating the derivatives correctly for the last timestep but not propagating back through time correctly). dcdWxh is the sum over all time steps of the per-step dcdWxh -- so how can I get one row of it correct but none of the others???
Can anyone help? I'm all out of ideas here.
You should compute the sum of the pointwise absolute values of the difference of the two matrices. The plain sum could be close to zero simply because positive and negative entries cancel, regardless of the specific learning task (do you emulate the zero function? :).
The last row presumably implements the weights from a constant-on neuron, i.e. the bias, so you seem to always get the bias right (however, check the sum of absolute values).
It also looks like the row-major and column-major notations of the matrices are confused, as in
gWhx - bc['dcdWxh']
which reads like weights from "hidden to x" as opposed to "x to hidden".
I'd rather post this as a comment, but I lack the reputation to do so. Sorry!
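Concretely, the check suggested above might look like this (names are illustrative):

import numpy as np

def grad_diff(a, b):
    # sum of pointwise absolute differences, plus a scale-free relative error;
    # unlike a plain signed sum, cancellation cannot hide a mismatch here
    abs_diff = np.sum(np.abs(a - b))
    rel_err = abs_diff / (np.sum(np.abs(a)) + np.sum(np.abs(b)) + 1e-12)
    return abs_diff, rel_err

# e.g. grad_diff(gWhh, bc['dcdWhh']) instead of np.sum(gWhh - bc['dcdWhh'])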