How to train shared layers in PyTorch

I have the following code:
import torch
import torch.nn as nn
from torchviz import make_dot, make_dot_from_trace

class Net(nn.Module):
    def __init__(self, input, output):
        super(Net, self).__init__()
        self.fc = nn.Linear(input, output)

    def forward(self, x):
        x = self.fc(x)
        x = self.fc(x)
        return x

model = Net(12, 12)
print(model)

x = torch.rand(1, 12)
y = model(x)
make_dot(y, params=dict(model.named_parameters()))
Here I reuse self.fc twice in the forward pass.
The computational graph produced by make_dot looks like this (graph image omitted).
I am confused about this computational graph, and I am curious how to train this model with backpropagation. It seems to me that the gradient will loop forever. Thanks a lot.

There are no issues with your graph. You can train it the same way as any other feed-forward model.
Regarding looping: since it is a directed acyclic graph, there are no actual loops (check the directions of the arrows).
Regarding backprop: let's consider the fc.bias parameter. Since you are reusing the same layer twice, the bias has two outgoing arrows (it is used in two places in your net). During the backpropagation stage the direction is reversed: the bias will get gradients from two places, and these gradients will add up.
Regarding the graph: an FC layer can be represented as Addmm(bias, x, T(weight)), where T is transposition and Addmm is matrix multiplication plus adding a vector. So you can see how the data (weight, bias) is passed into the functions (Addmm, T).
https://pytorch.org/docs/stable/generated/torch.addmm.html
https://pytorch.org/docs/stable/generated/torch.t.html
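A minimal sketch (my own illustration, not from the original post) of what the gradient accumulation means in practice: the reused layer's parameters collect gradients from both uses, and a single optimizer step then trains them.

import torch
import torch.nn as nn

fc = nn.Linear(12, 12)
x = torch.rand(1, 12)

y = fc(fc(x))          # the same layer applied twice, as in the question
y.sum().backward()     # dummy scalar loss

# fc.bias.grad holds the sum of the gradients from both uses of the layer;
# an optimizer step then updates the shared parameters once with that sum.
print(fc.bias.grad.shape)   # torch.Size([12])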

Related

Creating custom layer as stack of individual neurons TensorFlow

So, I'm trying to create a custom layer in TensorFlow 2.4.1, using a function for a neuron I defined.
# NOTE: this is not the actual neuron I want to use,
# it's just a simple example.
def neuron(x, W, b):
    return W @ x + b
Here the W and b it receives would be of shape (1, x.shape[0]) and (1, 1) respectively, which makes it behave like a single neuron of a dense layer. So, I want to create a dense layer by stacking however many of these individual neurons I want.
class Layer(tf.keras.layers.Layer):
    def __init__(self, n_units=5):
        super(Layer, self).__init__()  # handles standard arguments
        self.n_units = n_units  # number of neurons to be in the layer

    def build(self, input_shape):
        # Create weights and biases for all neurons individually
        for i in range(self.n_units):
            # Create weights and bias for ith neuron
            ...

    def call(self, inputs):
        # Compute outputs for all neurons
        ...
        # Concatenate outputs to create layer output
        ...
        return output
How can I create a layer as a stack of individual neurons (also in a way it can train)? I have roughly outlined the idea for the layer in the above code, but the answer doesn't need to follow that as a blueprint.
Finally, yes, I'm aware that to create a dense layer you don't need to go about it in such a roundabout way (you just need one weight and bias matrix), but in my actual use case this is necessary. Thanks!
So, I'm the person who asked this question, and I have found a way to do it by dynamically creating variables and operations.
First, let's re-define the neuron to use TensorFlow operations:
def neuron(x, W, b):
    return tf.add(tf.matmul(W, x), b)
Then, let's create the layer (this uses the blueprint laid out in the question):
class Layer(tf.keras.layers.Layer):
    def __init__(self, n_units=5):
        super(Layer, self).__init__()
        self.n_units = n_units

    def build(self, input_shape):
        for i in range(self.n_units):
            exec(f'self.kernel_{i} = self.add_weight("kernel_{i}", shape=[1, int(input_shape[0])])')
            exec(f'self.bias_{i} = self.add_weight("bias_{i}", shape=[1, 1])')

    def call(self, inputs):
        for i in range(self.n_units):
            exec(f'out_{i} = neuron(inputs, self.kernel_{i}, self.bias_{i})')
        return eval(f'tf.concat([{", ".join([ f"out_{i}" for i in range(self.n_units) ])}], axis=0)')
As you can see, we're using exec and eval to dynamically create variables and perform operations.
That's it! We can perform a few checks to see if TensorFlow could use this:
# Check to see if it outputs the correct thing
layer = Layer(5)  # with 5 neurons, it should return a (5, 6)
print(layer(tf.zeros([10, 6])))

# Check to see if it has the right trainable parameters
print(layer.trainable_variables)

# Check to see if TensorFlow can find the gradients
layer = Layer(5)
x = tf.ones([10, 6])
with tf.GradientTape() as tape:
    z = layer(x)
print(f"Parameter: {layer.trainable_variables[2]}")
print(f"Gradient: {tape.gradient(z, layer.trainable_variables[2])}")
This solution works, but it's not very elegant... I wonder if there's a better way to do it, some magical TF method that can map the neuron to create a layer; I'm too inexperienced to know for the moment. So, please answer if you have a (better) answer, and I'll be happy to accept it :)
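One possible cleaner variant (an untested sketch of my own, assuming the neuron function and imports from above, and standard Keras variable tracking via add_weight) would be to use Python's setattr/getattr instead of exec/eval:

class Layer(tf.keras.layers.Layer):
    def __init__(self, n_units=5):
        super(Layer, self).__init__()
        self.n_units = n_units

    def build(self, input_shape):
        # create the per-neuron variables dynamically, without exec
        for i in range(self.n_units):
            setattr(self, f'kernel_{i}',
                    self.add_weight(f'kernel_{i}', shape=[1, int(input_shape[0])]))
            setattr(self, f'bias_{i}',
                    self.add_weight(f'bias_{i}', shape=[1, 1]))

    def call(self, inputs):
        # collect each neuron's output and stack them into the layer output
        outs = [neuron(inputs, getattr(self, f'kernel_{i}'), getattr(self, f'bias_{i}'))
                for i in range(self.n_units)]
        return tf.concat(outs, axis=0)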

Run multiple models of an ensemble in parallel with PyTorch

My neural network has the following architecture:
input -> 128x (separate fully connected layers) -> output averaging
I am using a ModuleList to hold the list of fully connected layers. Here's how it looks at this point:
class MultiHead(nn.Module):
    def __init__(self, dim_state, dim_action, hidden_size=32, nb_heads=1):
        super(MultiHead, self).__init__()

        self.networks = nn.ModuleList()
        for _ in range(nb_heads):
            network = nn.Sequential(
                nn.Linear(dim_state, hidden_size),
                nn.Tanh(),
                nn.Linear(hidden_size, dim_action)
            )
            self.networks.append(network)

        self.cuda()
        self.optimizer = optim.Adam(self.parameters())
Then, when I need to calculate the output, I use a for ... in construct to perform the forward and backward pass through all the layers:
q_values = torch.cat([net(observations) for net in self.networks])
# skipped code which ultimately computes the loss I need
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
This works! But I am wondering if I couldn't do this more efficiently. I feel like by doing a for...in, I am actually going through each separate FC layer one by one, while I'd expect this operation could be done in parallel.
In the case of Convnd in place of Linear you can use the groups argument for "grouped convolutions" (of which "depthwise convolutions" are a special case). This lets you handle all of your parallel networks simultaneously.
If you use a convolution kernel of size 1, then the convolution does nothing other than apply a Linear layer, where each channel is considered an input dimension. So the rough structure of your network would look like this:
Modify the input tensor of shape B x dim_state: replicate it nb_heads times and add a trailing dimension, going from B x dim_state to B x (dim_state * nb_heads) x 1.
Replace the two Linear layers with
nn.Conv1d(in_channels=dim_state * nb_heads, out_channels=hidden_size * nb_heads, kernel_size=1, groups=nb_heads)
and
nn.Conv1d(in_channels=hidden_size * nb_heads, out_channels=dim_action * nb_heads, kernel_size=1, groups=nb_heads)
We now have a tensor of size B x (dim_action * nb_heads) x 1; you can now reshape it to whatever shape you want (e.g. B x nb_heads x dim_action).
While CUDA natively supports grouped convolutions, there were some issues in PyTorch with the speed of grouped convolutions (see e.g. here), but I think that has been resolved by now.
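A rough sketch of these steps (my own illustration with made-up sizes, not code from the answer) might look like this:

import torch
import torch.nn as nn

B, dim_state, hidden_size, dim_action, nb_heads = 4, 8, 32, 3, 128

# one grouped "wide" network that holds all nb_heads sub-networks
net = nn.Sequential(
    nn.Conv1d(dim_state * nb_heads, hidden_size * nb_heads, kernel_size=1, groups=nb_heads),
    nn.Tanh(),
    nn.Conv1d(hidden_size * nb_heads, dim_action * nb_heads, kernel_size=1, groups=nb_heads),
)

x = torch.rand(B, dim_state)                 # B x dim_state
x = x.repeat(1, nb_heads).unsqueeze(-1)      # B x (dim_state * nb_heads) x 1
q = net(x).view(B, nb_heads, dim_action)     # B x nb_heads x dim_action
q_mean = q.mean(dim=1)                       # output averaging over the heads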

PyTorch: How to write a neural network that only returns the weights?

I'm training a neural network that learns some weights and, based on those weights, I compute transformations that produce the predicted model in combination with the weights. My network doesn't learn properly, so I'm writing a different network that does nothing but return the weights, independent of the input x (after normalization with softmax and a transpose). This way, I want to find out whether the problem lies in the network or in the transformation estimation outside the network. But this doesn't work. This is what I've got.
class DoNothingNet(torch.nn.Module):
    def __init__(self, n_vertices=6890, n_joints=14):
        super(DoNothingNet, self).__init__()
        self.weights = nn.parameter.Parameter(torch.randn(n_vertices, n_joints))

    def forward(self, x, indices):
        self.weights = F.softmax(self.weights, dim=1)
        return self.weights.transpose(0, 1)
But the line self.weights = F.softmax(self.weights, dim=1) doesn't work and produces the error TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weights' (torch.nn.Parameter or None expected). How do I fix this? And does the code even make sense?
nn.Module tracks all fields of type nn.Parameter for training. In your code, on every forward call you try to replace the parameter weights by assigning a plain Tensor to it, which is why the error occurs.
The following code outputs normalised weights without changing the stored ones. Hope this will help.
import torch
from torch import nn
from torch.nn import functional as F

class DoNothingNet(torch.nn.Module):
    def __init__(self, n_vertices=6890, n_joints=14):
        super(DoNothingNet, self).__init__()
        self.weights = nn.parameter.Parameter(torch.randn(n_vertices, n_joints))

    def forward(self, x, indices):
        output = F.softmax(self.weights, dim=1)
        return output.transpose(0, 1)
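A quick sanity check (my own addition, with small made-up sizes) that the stored parameter stays an nn.Parameter and still receives gradients through the softmax:

model = DoNothingNet(n_vertices=4, n_joints=3)

out = model(x=None, indices=None)   # x and indices are unused in this network
loss = (out ** 2).sum()             # dummy loss
loss.backward()

print(type(model.weights))          # <class 'torch.nn.parameter.Parameter'>
print(model.weights.grad.shape)     # torch.Size([4, 3])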

Output from LSTM not changing for different inputs

I have an LSTM implemented in PyTorch, as below.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class LSTM(nn.Module):
    """
    Defines an LSTM.
    """
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
        super(LSTM, self).__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)

    def forward(self, input_data):
        lstm_out_pre, _ = self.lstm(input_data)
        return lstm_out_pre

model = LSTM(input_dim=2, hidden_dim=2, output_dim=1, num_layers=8)

random_data1 = torch.Tensor(np.random.standard_normal(size=(1, 5, 2)))
random_data2 = torch.Tensor(np.random.standard_normal(size=(1, 5, 2)))

out1 = model(random_data1).detach().numpy()
out2 = model(random_data2).detach().numpy()

print(out1)
print(out2)
I am simply creating an LSTM network and passing two random inputs into it. The outputs do not make sense, because no matter what random_data1 and random_data2 are, out1 and out2 are always the same. This does not make any sense to me, as random inputs multiplied with random weights should give different outputs.
This does not seem to be the case if I use a smaller number of hidden layers. With num_layers=2, this effect seems to be nil, and as you increase it, out1 and out2 keep getting closer. This does not make sense to me, because with more LSTM layers stacked on top of each other we are multiplying the input with more random weights, which should magnify the differences in the input and give very different outputs.
Can someone please explain this behavior? Is there something wrong with my implementation?
In one particular run, random_data1 is
tensor([[[-2.1247, -0.1857],
[ 0.0633, -0.1089],
[-0.6460, -0.1079],
[-0.2451, 0.9908],
[ 0.4027, 0.3619]]])
random_data2 is
tensor([[[-0.9725, 1.2400],
[-0.4309, -0.7264],
[ 0.5053, -0.9404],
[-0.6050, 0.9021],
[ 1.4355, 0.5596]]])
out1 is
[[[0.12221643 0.11449362]
[0.18342148 0.1620608 ]
[0.2154751 0.18075559]
[0.23373817 0.18768947]
[0.24482158 0.18987371]]]
out2 is
[[[0.12221643 0.11449362]
[0.18342148 0.1620608 ]
[0.2154751 0.18075559]
[0.23373817 0.18768945]
[0.24482158 0.18987371]]]
EDIT:
I am running on the following configurations -
PyTorch - 1.0.1.post2
Python - 3.6.8 with GCC 7.3.0
OS - Pop!_OS 18.04 (Ubuntu 18.04, more-or-less)
CUDA - 9.1.85
Nvidia driver - 410.78
The initial weights of an LSTM are small numbers close to 0, and by adding more layers the initial weights and biases get smaller: all the weights and biases are initialized from -sqrt(k) to sqrt(k), where k = 1/hidden_size (https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM).
By adding more layers you effectively multiply the input by many small numbers, so the effect of the input is basically 0 and only the biases in the later layers matter.
If you try the LSTM with bias=False, you will see that the output gets closer and closer to 0 as you add more layers.
I tried changing the number of layers to a lower number and the values differ; it is because the values are getting multiplied by a small number over and over again, which reduces the significance of the input.
I initialized all the weights in the model using kaiming_normal and it works fine.
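For reference, a minimal sketch (my own, assuming the LSTM class from the question) of re-initializing the recurrent weight matrices with kaiming_normal:

import torch.nn as nn

model = LSTM(input_dim=2, hidden_dim=2, output_dim=1, num_layers=8)

# re-initialize every weight matrix of the stacked LSTM; biases keep their defaults
for name, param in model.lstm.named_parameters():
    if 'weight' in name:
        nn.init.kaiming_normal_(param)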

PyTorch element-wise filter layer

Hi, I want to add an element-wise multiplication layer that duplicates the input to multiple channels, as illustrated in this figure (so the input size M x N and the multiplication filter size M x N are the same).
I want to set a custom initialization value for the filter, and I also want it to receive gradients during training. However, I can't find an element-wise filter layer in PyTorch. Can I make one? Or is it just impossible in PyTorch?
In PyTorch you can always implement your own layers by making them subclasses of nn.Module. You can also have trainable parameters in your layer by using nn.Parameter.
A possible implementation of such a layer might look like this:
import torch
from torch import nn

class TrainableEltwiseLayer(nn.Module):
    def __init__(self, n, h, w):
        super(TrainableEltwiseLayer, self).__init__()
        self.weights = nn.Parameter(torch.Tensor(1, n, h, w))  # define the trainable parameter

    def forward(self, x):
        # assuming x is of size b-n-h-w
        return x * self.weights  # element-wise multiplication
You still need to worry about initializing the weights; look into nn.init for ways to do this. Usually, one initializes the weights of the entire net prior to training and prior to loading any stored model (so partially trained models can override the random init). Something like:
model = mymodel(*args, **kwargs)  # instantiate a model
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight.data)  # init for conv layers
    if isinstance(m, TrainableEltwiseLayer):
        nn.init.constant_(m.weights.data, 1)  # init your weights here...
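A short usage sketch (made-up sizes, my own addition) showing how the single 1 x n x h x w filter broadcasts over the batch dimension:

layer = TrainableEltwiseLayer(n=3, h=8, w=8)
nn.init.constant_(layer.weights.data, 1)   # custom initialization value

x = torch.rand(4, 3, 8, 8)   # batch of 4, size b-n-h-w
y = layer(x)                 # the 1-n-h-w filter broadcasts over the batch
print(y.shape)               # torch.Size([4, 3, 8, 8])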
