Python class calls a function without indicating the function name

I was learning PyTorch and I encountered a case where I could not understand what's happening. Here is a class called MLP, with an __init__ function and a forward function. When I pass X as a parameter to the MLP instance net, without calling net.forward(X), it seems the forward function is automatically called. Why is this the case?
import torch
from torch import nn
from torch.nn import functional as F

class MLP(nn.Module):
    def __init__(self):
        super().__init__()  # nn.Module's params
        self.hidden = nn.Linear(20, 256)
        self.out = nn.Linear(256, 10)

    def forward(self, X):
        return self.out(F.relu(self.hidden(X)))

X = torch.rand(2, 20)
net = MLP()
net(X)
"""
output of net(X)
tensor([[ 0.0614, -0.0143, -0.0546, 0.1173, -0.1838, -0.1843, 0.0861, 0.1152,
0.0990, 0.1818],
[-0.0483, -0.0196, 0.0720, 0.1243, 0.0261, -0.2727, -0.0480, 0.1391,
-0.0685, 0.2025]], grad_fn=<AddmmBackward0>)
"""
My initial guess was that forward is the only function in MLP that receives a parameter, but after I added another function that takes the same parameters, calling net(X) still seems to choose the forward function:
class MLP(nn.Module):
    def __init__(self):
        super().__init__()  # nn.Module's params
        self.hidden = nn.Linear(20, 256)
        self.out = nn.Linear(256, 10)

    def forward2(self, X):
        print("hello")
        return self.out(self.hidden(X))

    def forward(self, X):
        return self.out(F.relu(self.hidden(X)))

net = MLP()
net(X)
net.forward(X)
net.forward2(X)
Then I got:
>>> net.forward(X)
tensor([[-0.1273, -0.0338, -0.1412, -0.1321, -0.1213, 0.0589, 0.0752, 0.0066,
-0.0057, -0.1374],
[-0.1660, -0.0044, -0.1765, -0.0451, -0.0386, 0.0824, 0.0486, -0.1293,
0.0511, -0.1285]], grad_fn=<AddmmBackward0>)
>>> net.forward2(X)
hello
tensor([[-0.2027, -0.2304, -0.3597, -0.3741, -0.5000, -0.2698, 0.2464, 0.1709,
-0.2262, -0.1462],
[-0.1168, -0.0417, -0.3584, -0.3133, -0.2366, -0.1521, 0.2428, 0.0043,
-0.1296, -0.2021]], grad_fn=<AddmmBackward0>)
>>> net(X)
tensor([[-0.1273, -0.0338, -0.1412, -0.1321, -0.1213, 0.0589, 0.0752, 0.0066,
-0.0057, -0.1374],
[-0.1660, -0.0044, -0.1765, -0.0451, -0.0386, 0.0824, 0.0486, -0.1293,
0.0511, -0.1285]], grad_fn=<AddmmBackward0>)
What did I miss? I'd really appreciate any help!

Related

Clarification for self.forward function in Python

I am not able to understand sample_losses = self.forward(output, y), defined under the class Loss.
From which "forward" function is it taking input, given that a forward function is defined separately for all three classes, i.e. Layer_Dense, Activation_ReLU and Activation_Softmax?
import numpy as np  # needed by all the snippets below

class Layer_Dense:
    def __init__(self, n_inputs, n_neurons):
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))
        print(self.weights)

    def forward(self, inputs):
        self.output = np.dot(inputs, self.weights) + self.biases

class Activation_ReLU:
    def forward(self, inputs):
        self.output = np.maximum(0, inputs)

class Activation_Softmax:
    def forward(self, inputs):
        exp_values = np.exp(inputs - np.max(inputs, axis=1, keepdims=True))
        probabilities = exp_values / np.sum(exp_values, axis=1, keepdims=True)
        self.output = probabilities

class Loss:
    def calculate(self, output, y):
        sample_losses = self.forward(output, y)
        data_loss = np.mean(sample_losses)
        return data_loss
self.forward() is similar to the __call__ method, but with registered hooks. It is the method that gets run when you call the instance itself, e.g. net(X). In PyTorch this behavior is inherited from nn.Module.
https://gist.github.com/nathanhubens/5a9fc090dcfbf03759068ae0fc3df1c9
Or refer to the source code:
https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L485
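For intuition, here is a minimal sketch (my own simplification, not PyTorch's actual implementation) of how calling an instance can dispatch to forward(): the base class defines __call__, so instance(X) ends up running whatever forward() the subclass defines.

# Simplified illustration only: the real nn.Module.__call__ also runs
# registered forward/backward hooks, but the dispatch idea is the same.
class SimpleModule:
    def __call__(self, *args, **kwargs):
        # calling the instance delegates to the subclass's forward()
        return self.forward(*args, **kwargs)

class Double(SimpleModule):
    def forward(self, x):
        return 2 * x

net = Double()
print(net(21))  # 42 -- same result as net.forward(21)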

How to define the loss function using the output of intermediate layers?

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.encoder = nn.Linear(300, 100)
        self.dense1 = nn.Sequential(nn.Linear(100, 10), nn.ReLU())
        self.dense2 = nn.Sequential(nn.Linear(10, 5), nn.ReLU())
        self.dense3 = nn.Sequential(nn.Linear(5, 1))

    def forward(self, x):
        x = self.encoder(x)
        x = self.dense1(x)
        x = self.dense2(x)
        x = self.dense3(x)
        return x
I am working on a regression problem, and I need to use the output of the dense2 layer to calculate the loss.
The output of the dense2 layer is 5-dimensional (5x1).
I am using PyTorch.
Dataset: suppose I am using 300 features and I need to predict some score (a floating-point value).
Input: 300 features
Output: some floating-point value
In general, your nn.Module's forward can return as many elements as you like. Moreover, you don't have to use all of them anywhere; there is no mechanism that checks that. PyTorch's philosophy is to build the computational graph on the fly.
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.encoder = nn.Linear(300, 100)
        self.dense1 = nn.Sequential(nn.Linear(100, 10), nn.ReLU())
        self.dense2 = nn.Sequential(nn.Linear(10, 5), nn.ReLU())
        self.dense3 = nn.Sequential(nn.Linear(5, 1))

    def forward(self, x):
        enc_output = self.encoder(x)
        dense1_output = self.dense1(enc_output)
        dense2_output = self.dense2(dense1_output)
        dense3_output = self.dense3(dense2_output)
        return dense3_output, dense2_output
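As a usage sketch (my own snippet; the auxiliary penalty on dense2_output is made up purely for illustration), both returned tensors can then feed into the loss:

import torch
from torch import nn

model = Model()                  # the Model defined above
criterion = nn.MSELoss()

x = torch.rand(8, 300)           # batch of 8 samples, 300 features each
y = torch.rand(8, 1)             # target scores

pred, dense2_out = model(x)
# main regression loss plus an arbitrary example term on the dense2 output
loss = criterion(pred, y) + 0.1 * dense2_out.pow(2).mean()
loss.backward()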

How to set layer weights during training tensorflow

In every forward pass of the model, I want to apply L2 normalization to the softmax layer's columns and then set the weights back, as per the imprinted weights paper and this pytorch implementation. I am using layer.set_weights() to set the normalized weights inside the call() function of the model, but this only works with eager execution, as something goes wrong with layer.set_weights() when building the graph.
Here is the implementation of the model in TF 1.15:
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import Model
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense

class Extractor(Model):
    def __init__(self, input_shape):
        super(Extractor, self).__init__()
        self.basenet = ResNet50(include_top=False, weights="imagenet",
                                pooling="avg", input_shape=input_shape)

    def call(self, x):
        x = self.basenet(x)
        return x

class Embedding(Model):
    def __init__(self, num_nodes, norm=True):
        super(Embedding, self).__init__()
        self.fc = Dense(num_nodes, activation="relu")
        self.norm = norm

    def call(self, x):
        x = self.fc(x)
        if self.norm:
            x = tf.nn.l2_normalize(x)
        return x

class Classifier(Model):
    def __init__(self, n_classes, norm=True, bias=False):
        super(Classifier, self).__init__()
        self.n_classes = n_classes
        self.norm = norm
        self.bias = bias

    def build(self, inputs_shape):
        self.prediction = Dense(self.n_classes,
                                activation="softmax", use_bias=False)

    def call(self, x):
        if self.norm:
            w = self.prediction.trainable_weights
            if w:
                w = tf.nn.l2_normalize(w, axis=2)
                self.prediction.set_weights(w)
        x = self.prediction(x)
        return x

class Net(Model):
    def __init__(self, input_shape, n_classes, num_nodes, norm=True,
                 bias=False):
        super(Net, self).__init__()
        self.n_classes = n_classes
        self.num_nodes = num_nodes
        self.norm = norm
        self.bias = bias
        self.extractor = Extractor(input_shape)
        self.embedding = Embedding(self.num_nodes, norm=self.norm)
        self.classifier = Classifier(self.n_classes, norm=self.norm,
                                     bias=self.bias)

    def call(self, x):
        x = self.extractor(x)
        x = self.embedding(x)
        x = self.classifier(x)
        return x
The weight normalization can be found in the call() method of the Classifier class, where I call .set_weights() after normalizing the weights.
Creating the model with model = Net(input_shape,n_classes, num_nodes) and using model(x) works, but model.predict() and model.fit() give me errors about .get_weights(). I can train the model in eager mode with gradient tape, but it is extremely slow.
Is there another way I can set the weights of a Dense layer during training in each forward call that still lets me use the model outside of eager mode? When I say eager mode, I mean with tf.enable_eager_execution() at the start of the program.
Here is the error I get when I use model.predict(x) instead:
TypeError: len is not well defined for symbolic Tensors. (imprint_net_1/classifier/l2_normalize:0) Please call `x.shape` rather than `len(x)` for shape information.
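No accepted answer is reproduced here, but one graph-compatible workaround sketch (my own variant, under the assumption that the question's imports are in scope) is to avoid set_weights() entirely: create the kernel yourself in build() and use its L2-normalized columns directly in the matmul. Note this only uses the normalized values in the forward computation; it does not write them back into the stored variable.

class NormalizedClassifier(Model):
    def __init__(self, n_classes, norm=True):
        super(NormalizedClassifier, self).__init__()
        self.n_classes = n_classes
        self.norm = norm

    def build(self, inputs_shape):
        # create the kernel explicitly so it exists before the first call
        self.w = self.add_weight(name="w",
                                 shape=(int(inputs_shape[-1]), self.n_classes),
                                 initializer="glorot_uniform",
                                 trainable=True)

    def call(self, x):
        # normalize the columns on every forward pass instead of writing them back
        w = tf.nn.l2_normalize(self.w, axis=0) if self.norm else self.w
        return tf.nn.softmax(tf.matmul(x, w))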

How do I retrieve name of a layer inside hook function?

I have a neural network
class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.ModuleList()
        self.trunk.add_module('conv1', nn.Conv2d(3, 10, 3))
        self.classifier = nn.Linear(30, 2)

    def forward(self, x):
        out = self.classifier(self.trunk.conv1(x))
        return out

model = ConvNet()
I registered a forward hook:
def hook(module, input, output):
    print(module, input[0].shape, output.shape)

x = model.trunk.conv1.register_forward_hook(hook)
How do I retrieve the name of this layer, i.e. 'conv1', inside the hook function? module._get_name() returns Conv2d, and module.__class__ returns <class 'torch.nn.modules.conv.Conv2d'>; how do I get just 'conv1'?
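One common pattern (my own sketch, not taken from the question, and assuming the usual torch.nn import and the model instance above) is to register the hooks while iterating model.named_modules() and capture each name in a closure. Note that the captured name is the full dotted path, e.g. 'trunk.conv1'.

from torch import nn

def make_hook(name):
    def hook(module, input, output):
        # the captured name identifies the layer, e.g. 'trunk.conv1'
        print(name, input[0].shape, output.shape)
    return hook

handles = []
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        handles.append(module.register_forward_hook(make_hook(name)))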

How to implement current pytorch activation functions with parameters?

I am looking for a simple way to use an activation function that already exists in the PyTorch library, but with some sort of parameter, for example:
Tanh(x/10)
The only way I came up with while looking for a solution was implementing the custom function completely from scratch. Is there a better/more elegant way to do this?
Edit:
I am looking for some way to append to my model the function Tanh(x/10) rather than plain Tanh(x). Here is the relevant code block:
self.model = nn.Sequential()
for i in range(len(self.layers) - 1):
    self.model.add_module("linear_layer_" + str(i), nn.Linear(self.layers[i], self.layers[i + 1]))
    if activations == None:
        self.model.add_module("activation_" + str(i), nn.Tanh())
    else:
        if activations[i] == "T":
            self.model.add_module("activation_" + str(i), nn.Tanh())
        elif activations[i] == "R":
            self.model.add_module("activation_" + str(i), nn.ReLU())
        else:
            # no activation
            pass
Instead of defining it as a specific function, you could inline it in a custom layer.
For instance your solution could look like:
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(4, 10)
        self.fc2 = nn.Linear(10, 3)
        self.fc3 = nn.Softmax()

    def forward(self, x):
        return self.fc3(self.fc2(torch.tanh(self.fc1(x) / 10)))
where torch.tanh(self.fc1(x) / 10) is inlined in the forward function of your module.
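A quick sanity check of that inlined version (my own snippet, with input sizes matching the layer shapes above):

net = Net()
out = net(torch.rand(2, 4))   # 2 samples, 4 input features
print(out.shape)              # torch.Size([2, 3])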
You can create a layer with the multiplying parameter:
import torch
import torch.nn as nn

class CustomTanh(nn.Module):
    # the init method takes the parameter:
    def __init__(self, multiplier):
        super(CustomTanh, self).__init__()  # required for nn.Module subclasses
        self.multiplier = multiplier

    # the forward calls it:
    def forward(self, x):
        x = self.multiplier * x
        return torch.tanh(x)
Add it to your models with CustomTanh(1/10) instead of nn.Tanh().
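For example, plugged into a plain nn.Sequential (a minimal sketch of my own, with arbitrary layer sizes):

model = nn.Sequential(
    nn.Linear(4, 10),
    CustomTanh(1 / 10),   # tanh(x / 10)
    nn.Linear(10, 3),
)
out = model(torch.rand(2, 4))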
