Python: pass multiple classes to a function/method

I am trying to write the derivative function given in pseudocode in my MWE below. It is supposed to calculate the numerical derivative of the cost of my neural network's prediction with respect to a parameter of one of its layers.
My problem is that I don't know how to pass and access an instance of NeuralNetwork and an instance of Layer from within the function (or method?) at the same time.
Looking into e.g. "Passing a class to another class (Python)" did not answer my question.
import copy

class NeuralNetwork:
    def __init__(self):
        self.first_layer = Layer()
        self.second_layer = Layer()

    def cost(self):
        # not the actual cost, but that is not of interest here
        return self.first_layer.a + self.second_layer.a

class Layer:
    def __init__(self):
        self.a = 1
''' pseudocode
def derivative(NeuralNetwork, Layer):
    stepsize = 0.01
    cost_unchanged = NeuralNetwork.cost()
    NN_deviated = copy.deepcopy(NeuralNetwork)
    NN_deviated.Layer.a += stepsize
    cost_deviated = NN_deviated.cost()
    return (cost_deviated - cost_unchanged)/stepsize
'''
NN = NeuralNetwork()

''' pseudocode
derivative_first_layer = derivative(NN, first_layer)
derivative_second_layer = derivative(NN, second_layer)
'''
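A minimal sketch of one way to do this, reusing the import copy and the classes above and assuming each layer is stored as a plain instance attribute: pass the attribute name of the layer rather than the Layer instance itself, and look the layer up on the deep copy with getattr, so the perturbation lands on the copy and not on the original network.

def derivative(network, layer_name, stepsize=0.01):
    # layer_name is the attribute name of the layer, e.g. "first_layer"
    cost_unchanged = network.cost()
    network_deviated = copy.deepcopy(network)
    # perturb the parameter on the copied network only
    getattr(network_deviated, layer_name).a += stepsize
    cost_deviated = network_deviated.cost()
    return (cost_deviated - cost_unchanged) / stepsize

derivative_first_layer = derivative(NN, "first_layer")
derivative_second_layer = derivative(NN, "second_layer")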

Related

How to return extra loss from module forward function in PyTorch?

I made a module that needs an extra loss term, e.g.:
class MyModule:
    def forward(self, x):
        out = f(x)
        extra_loss = loss_f(self.parameters(), x)
        return out, extra_loss
I can't figure out how to make this module embeddable, for example, into a Sequential model: any regular module like Linear put after this one will fail because extra_loss causes the input to Linear to be a tuple, which Linear does not support.
So what I am looking for is a way to extract that extra loss after running the model forward:
my_module = MyModule()
model = Sequential(
    my_module,
    Linear(my_module_outputs, 1)
)
output = model(x)
my_module_loss = ????
loss = mse(label, output) + my_module_loss
Does module composability support this scenario?
IMHO, hooks are an overreaction here. Provided extra_loss is additive, we can use a class-level variable like this:
class MyModule:
    extra_loss = 0

    def forward(self, x):
        out = f(x)
        MyModule.extra_loss += loss_f(self.parameters(), x)
        return out

output = model(x)
loss = mse(label, output) + MyModule.extra_loss
MyModule.extra_loss = 0
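A hedged variant under the same additivity assumption: keeping the running term on the instance rather than the class avoids two instances of the module sharing (and double-counting) state. The inner Linear and the L2 penalty below are only placeholders standing in for the asker's f and loss_f.

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.inner = nn.Linear(n_in, n_out)  # placeholder for f
        self.extra_loss = torch.tensor(0.0)

    def forward(self, x):
        out = self.inner(x)
        # placeholder extra term: an L2 penalty on this module's weights
        self.extra_loss = self.extra_loss + sum(p.pow(2).sum() for p in self.parameters())
        return out

After the forward pass, read model[0].extra_loss, add it to the main loss, and reset it to torch.tensor(0.0) before the next batch.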
You can register a hook in this case. A hook can be registered on a Tensor or an nn.Module. A hook is a function that is executed when either forward or backward is called. Here, we want to attach a forward hook that does not detach the output from the graph, so that the backward pass can still happen.
import torch
import torch.nn as nn

act_out = {}

def get_hook(name):
    def hook(m, input, output):
        act_out[name] = output
    return hook

class MyModule(torch.nn.Module):
    def __init__(self, input, out, device=None):
        super().__init__()
        self.model = nn.Linear(input, out)

    def forward(self, x):
        return self.model(x), torch.sum(x)  # our extra loss

class MyModule1(torch.nn.Module):
    def __init__(self, input, out, device=None):
        super().__init__()
        self.model = nn.Linear(input, out)

    def forward(self, pair):
        x, loss = pair
        return self.model(x)

model = nn.Sequential(
    MyModule(5, 10),
    MyModule1(10, 1)
)

for name, module in model.named_children():
    print(name, module)
    if name == '0':
        module.register_forward_hook(get_hook(name))

x = torch.tensor([1, 2, 3, 4, 5]).float()
out = model(x)
print(act_out)
loss = myanotherloss(out) + act_out['0'][1]  # this is the extra loss
# further processing
Note: I am using name == '0' because this is the only module where I want to attach the hook.
Note: Another notable point is that nn.Sequential doesn't allow multiple inputs; the pair is simply passed along as a tuple, and the next module unpacks the input and the loss from that tuple.

Input parameters from a nested class to Pytorch Optimization Function

I have the following graph neural network model, and I am not able to get its learnable parameters into the optimizer.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parameter import Parameter

class Graphconvlayer(nn.Module):
    def __init__(self, adj, input_feature_neurons, output_neurons):
        super(Graphconvlayer, self).__init__()
        self.adj = adj
        self.input_feature_neurons = input_feature_neurons
        self.output_neurons = output_neurons
        self.weights = Parameter(torch.normal(mean=0.0, std=torch.ones(input_feature_neurons, output_neurons)))
        self.bias = Parameter(torch.normal(mean=0.0, std=torch.ones(output_neurons)))

    def forward(self, inputfeaturedata):
        output1 = torch.mm(self.adj, inputfeaturedata)
        print(output1.shape)
        print(self.weights.shape)
        print(self.bias.shape)
        output2 = torch.matmul(output1, self.weights) + self.bias
        return output2

class GCN(nn.Module):
    def __init__(self, adj, input_feature_neurons, output_neurons, lr, dropoutvalue, hidden, data):
        super(GCN, self).__init__()
        self.adj = adj
        self.input_feature_neurons = input_feature_neurons
        self.output_neurons = output_neurons
        self.lr = lr
        self.dropoutvalue = dropoutvalue
        self.hidden = hidden
        self.data = data
        self.gcn1 = Graphconvlayer(adj, input_feature_neurons, hidden)
        self.gcn2 = Graphconvlayer(adj, hidden, output_neurons)

    def forward(self, x):
        x = F.relu(self.gcn1(x))
        x = F.dropout(x, self.dropoutvalue)
        x = self.gcn2(x)
        return F.log_softmax(x, dim=1)
for n, p in a.named_parameters():
    print(n, p.shape)
>>>
gcn1.weights torch.Size([1433, 2708])
gcn1.bias torch.Size([2708])
gcn2.weights torch.Size([2708, 7])
gcn2.bias torch.Size([7])

optimizer = optim.Adam(a.named_parameters(), lr=0.001)
>>>
NameError: name 'optim' is not defined
When I pass it as dict(a.named_parameters()), I am able to print the values, but I cannot pass it to the optimization function. Can anyone guide me through this?
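A hedged sketch of the fix: the NameError just means torch.optim was never imported, and Adam expects an iterable of parameter tensors, which parameters() yields; named_parameters() yields (name, tensor) pairs, which is why passing it (or a dict of it) fails.

import torch.optim as optim

# `a` is the GCN instance constructed as in the question
optimizer = optim.Adam(a.parameters(), lr=0.001)  # parameters(), not named_parameters()

Per-parameter option groups are the one case where Adam takes dicts, and even then each dict's 'params' entry must contain tensors, not (name, tensor) pairs.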

How to implement a binary mask matrix in Keras?

I'm currently working on a project, part of which is reimplementing in Keras a model written for a paper in PyTorch. The overall model classifies proteins based on three elements of their properties: sequence, interaction with other proteins, and domains in their sequence (motifs). The part I'm recreating at the moment is the protein-protein interaction part. First, the input vectors simply go through some fully connected layers, which is easy enough to implement in Keras. However, the outputs of this model are fed into a 'weight classifier model', which applies a binary mask matrix to its inputs using a layer created specifically for this model with PyTorch's nn.functional API.
Here is the code I am struggling to implement in Keras:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class Weight_classifier(nn.Module):
    def __init__(self, func):
        super(Weight_classifier, self).__init__()
        # self.weight_layer = nn.Linear(OUT_nodes[func]*3, OUT_nodes[func])
        self.weight_layer = MaskedLinear(OUT_nodes[func]*3, OUT_nodes[func],
                                         'data/{}_maskmatrix.csv'.format(func)).cuda()
        self.outlayer = nn.Linear(OUT_nodes[func], OUT_nodes[func])

    def forward(self, weight_features):
        weight_out = self.weight_layer(weight_features)
        # weight_out = F.sigmoid(weight_out)
        weight_out = F.relu(weight_out)
        weight_out = F.sigmoid(self.outlayer(weight_out))
        return weight_out

class MaskedLinear(nn.Linear):
    def __init__(self, in_features, out_features, relation_file, bias=True):
        super(MaskedLinear, self).__init__(in_features, out_features, bias)
        mask = self.readRelationFromFile(relation_file)
        self.register_buffer('mask', mask)
        self.iter = 0

    def forward(self, input):
        masked_weight = self.weight * self.mask
        return F.linear(input, masked_weight, self.bias)

    def readRelationFromFile(self, relation_file):
        mask = []
        with open(relation_file, 'r') as f:
            for line in f:
                l = [int(x) for x in line.strip().split(',')]
                for item in l:
                    assert item == 1 or item == 0  # relation must be 0 or 1
                mask.append(l)
        return Variable(torch.Tensor(mask))
And this is the paper I am working from; it contains several diagrams and explanations of the models in case I have not explained the issue sufficiently.
Many thanks.
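For what it's worth, a minimal Keras sketch of the same idea, assuming the mask is loaded from the CSV into a NumPy array; MaskedDense and its argument names are hypothetical. Note that nn.Linear stores its weight as (out_features, in_features) while a Keras kernel is (in_features, out_features), so the mask from the PyTorch code would need transposing.

import numpy as np
import tensorflow as tf

class MaskedDense(tf.keras.layers.Layer):
    # Dense layer whose kernel is multiplied elementwise by a fixed 0/1 mask.
    def __init__(self, units, mask, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.mask_value = np.asarray(mask, dtype="float32")  # shape (in_features, units)

    def build(self, input_shape):
        self.kernel = self.add_weight(name="kernel",
                                      shape=(input_shape[-1], self.units),
                                      initializer="glorot_uniform")
        self.bias = self.add_weight(name="bias",
                                    shape=(self.units,),
                                    initializer="zeros")
        # fixed, non-trainable mask, analogous to register_buffer in PyTorch
        self.mask = tf.constant(self.mask_value)

    def call(self, inputs):
        # masked-out kernel entries stay zero, and their gradients are zero too
        return tf.matmul(inputs, self.kernel * self.mask) + self.bias

Usage would mirror the PyTorch version, e.g. mask = np.loadtxt('data/{}_maskmatrix.csv'.format(func), delimiter=',').T followed by MaskedDense(OUT_nodes[func], mask).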

Python classes and inheritance: calling a super method returns an error

I have the following Python code. The code analyses a digital signal for amplification processing; however, I am having trouble applying inheritance concepts to it for some reason.
I have a parent class and a child class that names the parent class as its base, so that the child inherits the parent's attributes and methods.
But when I try accessing a method of the parent class using OOP inheritance principles, I get an error. I originally tried extending the class without overriding dofilter(), but calling the inherited dofilter() method threw an error. However, if I override dofilter() in the child class and use super() to call the parent's version, there is no error, but it returns NaN, which clearly means I am still doing something wrong. I know data is being passed to the objects, so there should be no reason for it to return NaN. It has left me at a bit of an impasse. Can anyone please explain why this is not working?
What I have tried/my summarized script, to paint a better picture of the issue:
# -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
import numpy as np
import matplotlib.pyplot as plt

# =============================================================================
# CLASSES
# =============================================================================
class fir_filter:
    def __init__(self, coefficients):
        self.coefficients = coefficients
        self.buffer = np.zeros(taps)

    def dofilter(self, value, offset):
        result = 0
        # Splice coefficients and buffer arrays into smaller arrays
        buffer_newest = self.buffer[0:offset+1]
        buffer_oldest = self.buffer[offset+1:taps]
        coefficients1 = self.coefficients[0:offset+1]
        coefficients2 = self.coefficients[offset+1:taps]
        # Accumulate
        self.buffer[offset] = value
        for i in range(len(coefficients1)):
            result += coefficients1[i]*buffer_newest[offset-1-i]
        for i in range(len(coefficients2), 0, -1):
            result += coefficients2[len(coefficients2)-i] * buffer_oldest[i-1]
        return result

class matched_filter(fir_filter):
    def __init__(self, coefficients):
        self.coefficients = coefficients

    def dofilter(self, value, offset):
        super().dofilter(value, offset)
        return result

########################################
# START OF MAIN SCRIPT
########################################
# ...
# REMOVED - import data files, create vars, perform various calculations.
########################################
# ... RESUME CODE, pertinent variables here
m_wavelet = (2/(np.sqrt(3*a*(np.pi**(1/4)))))*(1-((n/a)**2))*np.exp((-n**2)/(2*(a**2)))
m_wavelet = m_wavelet[::-1]
result = np.zeros(l)
offset = 0
plt.figure(6)
plt.plot(m_wavelet)
plt.plot(ecg_3[8100:8800]/len(ecg_3)/300)
########################################
# instantiated object here, no errors thrown
########################################
new_filter = matched_filter(m_wavelet)
########################################
# issue below
########################################
for k in range(len(ecg_3)):
    result[k] = new_filter.dofilter(ecg_3[k], offset)  # <- every item in the array is NaN
    offset += 1
    if (offset == 2000):
        offset = 0
########################################
# More removed code / end of script
########################################
Important bits: the parent class fir_filter, the child class matched_filter, and the dofilter() call in the loop are all shown above.
How can I access "dofilter()" in "fir_filter", which "matched_filter" inherits?
Any insight would be much appreciated. I can provide more code if necessary.
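A hedged sketch of the likely fix, keeping the rest of the script unchanged. Two things look wrong in the child class: its __init__ never calls the parent's, so self.buffer is never allocated, and its dofilter() discards the value super().dofilter() returns and instead returns the global result array it is being assigned into. Both go away if the child delegates properly:

class matched_filter(fir_filter):
    def __init__(self, coefficients):
        # let the parent set both self.coefficients and self.buffer
        super().__init__(coefficients)

    def dofilter(self, value, offset):
        # capture and return the parent's result instead of the global `result`
        return super().dofilter(value, offset)

If the child adds nothing of its own, it can even drop both overrides and inherit fir_filter's methods unchanged.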

How to correctly deal with a changing property of a class?

I have the class below, where a method changes one of its properties. However, I also need the original, unaltered value. What's the idiomatic way to do this?
The class describes a Borrower. The borrower has a property called PMT, the borrower's amortized payment, which is calculated from the present value. However, the borrower also has a method that applies interest. This changes the present value, which in turn changes the PMT. I need the original PMT even after I have applied the interest. What's the best way to get around this?
Here's a sample of the code:
import numpy as np

class Borrower:
    def __init__(self, present_value, term, rate):
        self.present_value = present_value
        self.term = term
        self.rate = rate

    def pmt(self):
        return -np.pmt(self.rate/12, self.term, self.present_value)

    def apply_interest(self):
        self.present_value *= 1 + self.rate
Here's the problem:
b = Borrower(1000, 12, 0.1)
b.pmt()  # 87.91
b.apply_interest()
b.pmt()  # 96.70, but I need 87.91 here!
Should I create a Borrower with an initial pmt, like this?
class Borrower:
    def __init__(self, present_value, term, rate):
        self.present_value = present_value
        self.term = term
        self.rate = rate
        self.init_pmt = self.pmt()
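That works. A hedged alternative sketch: keep the original present value immutable and recompute the original PMT on demand, so nothing depends on the order in which methods were called. (Note that np.pmt was removed from NumPy in 1.20; it now lives in the numpy_financial package.)

import numpy_financial as npf  # np.pmt moved here in recent NumPy versions

class Borrower:
    def __init__(self, present_value, term, rate):
        self._initial_present_value = present_value  # never mutated
        self.present_value = present_value
        self.term = term
        self.rate = rate

    def pmt(self):
        # current PMT, reflects any interest applied so far
        return -npf.pmt(self.rate / 12, self.term, self.present_value)

    def initial_pmt(self):
        # PMT as of origination, unaffected by apply_interest()
        return -npf.pmt(self.rate / 12, self.term, self._initial_present_value)

    def apply_interest(self):
        self.present_value *= 1 + self.rate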
