Pytorch getting accuracy of train_loop doesn't work - python

I want to get the accuracy of the train section of my neural network, but I get this error:
correct += (prediction.argmax(1) == y).type(torch.float).item()
ValueError: only one element tensors can be converted to Python scalars
With this code:
def train_loop(dataloader, model, optimizer):
    model.train()
    size = len(dataloader.dataset)
    correct = 0, 0
    l_loss = 0
    for batch, (X, y) in enumerate(dataloader):
        prediction = model(X)
        loss = cross_entropy(prediction, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        correct += (prediction.argmax(1) == y).type(torch.float).sum().item()
        loss, current = loss.item(), batch * len(X)
        l_loss = loss
        print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
    correct /= size
    accu = 100 * correct
    train_loss.append(l_loss)
    train_accu.append(accu)
    print(f"Accuracy: {accu:>0.1f}%")
I don't understand why it is not working, because in my test section it works perfectly fine with exactly the same line of code.

The item() function is used to convert a one-element tensor to a standard Python number, as stated here. Please make sure that the result of sum() is a one-element tensor before using item():
x = torch.tensor([1.0, 2.0])  # a tensor containing 2 elements
x.item()
# error message: ValueError: only one element tensors can be converted to Python scalars
Try to use this:
prediction = prediction.argmax(1)
correct = prediction.eq(y)
correct = correct.sum()
print(correct)  # check that it is a one-element tensor
correct_sum += correct.item()
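Putting the pieces together, here is a minimal sketch of how the accuracy bookkeeping could look, assuming `cross_entropy` (the loss) is defined as in the question; this is an illustration, not the asker's exact code:

def train_loop(dataloader, model, optimizer):
    model.train()
    size = len(dataloader.dataset)
    correct = 0  # a single running counter
    for batch, (X, y) in enumerate(dataloader):
        prediction = model(X)
        loss = cross_entropy(prediction, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # sum() reduces the per-sample comparison to a one-element tensor,
        # so item() can safely convert it to a Python number
        correct += (prediction.argmax(1) == y).sum().item()
    accu = 100 * correct / size
    print(f"Accuracy: {accu:>0.1f}%")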

Related

normalize the input of relu function encountered a runtime error

When I try to pass the maximum activation value from the previous layer to normalize the input of the ReLU in the next layer, I encounter the runtime error below. However, when I pass a fixed value it works well without any error.
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 175, in backward allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved
tensors after they have already been freed). Saved intermediate values of the graph are freed
when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to
backward through the graph a second time or if you need to access saved tensors after calling
backward.
As you can see in the code below, I pass the argument prev_layer_max from the previous layer and encounter the error:
class th_norm_ReLU(nn.Module):
    def __init__(self, modify):
        super(th_norm_ReLU, self).__init__()
        self.therelu = F.relu

    def forward(self, input, prev_layer_max):
        output = input * (prev_layer_max / input.max())
        norm_output = self.therelu(output)
        return norm_output
But if I use a fixed value instead of the passed prev_layer_max argument, as in the code below where I make it equal to 1, it works normally without any error:
def forward(self, input, prev_layer_max=1):
    output = input * (1 / input.max())
    norm_output = self.therelu(output)
The training loop is as below:
for epoch in range(params.epochs):
    running_loss = 0
    start_time = time.time()
    for i, (images, labels) in enumerate(train_loader):
        model.train()
        model.zero_grad()
        optimizer.zero_grad()
        labels.to(device)
        images = images.float().to(device)
        outputs = model(images, epoch)
        loss = criterion(outputs.cpu(), labels)
        running_loss += loss.item()
        loss.backward()
        optimizer.step()
Here is the forward of the model, where I record the max of each layer in a list (thresh_list):
def forward(self, input, epoch):
    x = self.conv1(input)
    x = self.relu(x, 1)
    self.thresh_list[0] = max(self.thresh_list[0], x.max())  # to get the max activation
    x = self.conv_dropout(x)
    x = self.conv2(x)
    x = self.relu(x, self.thresh_list[0])
    self.thresh_list[1] = max(self.thresh_list[1], x.max())
    x = self.pool1(x)
    x = self.conv_dropout(x)
    x = self.conv3(x)
    x = self.relu(x, self.thresh_list[1])
    self.thresh_list[2] = max(self.thresh_list[2], x.max())
The ReLU function I call is:
self.relu = th_norm_ReLU(True)
and the th_norm_ReLU module is shown above.
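One assumption worth noting about the traceback: thresh_list keeps the raw x.max() tensors, which are still attached to the autograd graph of the batch that produced them, so reusing them in the next forward pass makes the new backward() traverse an already-freed graph. A minimal sketch of detaching the recorded maxima (a hypothetical modification, not from the original post):

self.thresh_list[0] = max(self.thresh_list[0], x.max().detach())  # store a value detached from the old graph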

LSTM always predicts the same class

I'm trying to solve an NLP classification problem with an LSTM. The code for the model is defined here:
class LSTM(nn.Module):
    def __init__(self, hidden_size, embedding_size=66):
        super().__init__()
        self.lstm = nn.LSTM(embedding_size, hidden_size, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, 2)

    def forward(self, input_seq):
        output, (hidden_state, cell_state) = self.lstm(input_seq)
        hidden_state = torch.cat((hidden_state[-1,:], hidden_state[-2,:]), -1)
        logits = self.fc(hidden_state)
        return nn.LogSoftmax(dim=1)(logits)
And the function I’m using to train this model is here:
def train_loop(dataloader, model, loss_fn, optimizer):
    loss_fn = loss_fn
    size = len(dataloader.dataset)
    model.train()
    zeros = 0
    for batch, (X, y) in enumerate(dataloader):
        # Transform string into tensor
        tensor = torch.zeros(1, len(X[0]), 66)
        for i in range(len(X[0])):
            tensor[0][i][ctoi[X[0][i]]] = 1
        pred = model(tensor)
        target = torch.zeros(2, dtype=torch.long)
        target[y] = 1
        if batch % 100 == 0:
            print(pred.squeeze(), target)
        loss = loss_fn(pred.squeeze(), target)
        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if pred.squeeze().argmax() == 0:
            zeros += 1
        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
    print(f'In trainning predicted {zeros} zeroes out of {size} samples')
The X's are still strings, which is why I need to convert them to tensors before running them through the model. The y's are either 0 or 1 (since it's a binary classification problem), which I need to convert to a tensor of shape (2,) to run through the loss function.
For some reason I keep getting the same class predicted for every input. The classes are not even that unbalanced (~45% to 55%), and I've tried changing the weights of the classes in the loss function with no improvement; it either converges to predicting always a 0 or always a 1. Most of the time it converges to predicting always a 0, which makes even less sense because usually class 0 has fewer samples than class 1.
Since you're training a binary classification model, your output dim should be 1 (corresponding to a single probability P(y|x)). This means that the y you're retrieving from your dataloader should be the y used in your loss function (assuming a cross-entropy loss). The predicted class is therefore y_hat = round(pred) (i.e., is the prediction >= 0.5).
As a point of clarity, it would be much easier to follow your logic if the one-hot encoding happened within your dataset (either in __getitem__ or __iter__). It's also worth noting that you don't use embeddings, so the code of your classifier is a bit misleading.
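To illustrate the suggestion above, here is a minimal sketch with a single output unit and BCEWithLogitsLoss, under the assumption that the inputs stay one-hot encoded character tensors as in the question (this is not the asker's original code):

import torch
import torch.nn as nn

class BinaryLSTM(nn.Module):
    def __init__(self, hidden_size, embedding_size=66):
        super().__init__()
        self.lstm = nn.LSTM(embedding_size, hidden_size, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, 1)  # one logit instead of two

    def forward(self, input_seq):
        _, (hidden_state, _) = self.lstm(input_seq)
        hidden_state = torch.cat((hidden_state[-1, :], hidden_state[-2, :]), -1)
        return self.fc(hidden_state)  # raw logit; the loss applies the sigmoid

loss_fn = nn.BCEWithLogitsLoss()
# in the loop: loss = loss_fn(pred.view(-1), y.float())
# predicted class: (pred >= 0).long()   # logit >= 0 corresponds to probability >= 0.5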

cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not numpy.ndarray

Data preparation, and building the model:
dataset = datasets.load_iris()
data = dataset.data
target = dataset.target
data_tensor = torch.from_numpy(data).float()
target_tensor = torch.from_numpy(target).long()

model = nn.Sequential(
    bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=4, out_features=100),
    nn.ReLU(),
    bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=100, out_features=3),
)

cross_entropy_loss = nn.CrossEntropyLoss()
klloss = bnn.BKLLoss(reduction='mean', last_layer_only=False)
klweight = 0.01
optimizer = optim.Adam(model.parameters(), lr=0.01)
Run the model: Error occurred
for step in range(3000):
    models = model(data_tensor)
    cross_entropy = cross_entropy_loss(models, target)
    kl = klloss(model)
    total_cost = cross_entropy + klweight * kl

    optimizer.zero_grad()
    total_cost.backward()
    optimizer.step()

    _, predicted = torch.max(models.data, 1)
    final = target_tensor.size(0)
    correct = (predicted == target_tensor).sum()
    print('- Accuracy: %f %%' % (100 * float(correct) / final))
    print('- CE : %2.2f, KL : %2.2f' % (cross_entropy.item(), kl.item()))
TypeError
----> 3 cross_entropy = cross_entropy_loss(models, target)
TypeError: cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not numpy.ndarray
I copied the code from a web tutorial, but it shows the attached error. Any suggestions? Many thanks!
You used the wrong variable for target.
cross_entropy_loss(models, target)
should be
cross_entropy_loss(models, target_tensor)
Tensors are more generalized vectors: every tensor can be represented as a multidimensional array or vector, but not every vector can be represented as a tensor. Hence NumPy arrays can easily be replaced with tensors, but tensors cannot be replaced with NumPy arrays.
CrossEntropyLoss expects a tensor to be passed as the target argument, but your target variable is a NumPy array, so it raises the error. Pass the target_tensor variable instead of target and it will solve the issue.
Read more about CrossEntropyLoss here.
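For illustration only, a small sketch of the conversion and the corrected call, assuming `dataset`, `models`, and `cross_entropy_loss` are defined as in the question:

# NumPy labels -> tensor labels (already done in the data preparation step)
target_tensor = torch.from_numpy(dataset.target).long()

# pass the tensor, not the NumPy array, to the loss
cross_entropy = cross_entropy_loss(models, target_tensor)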

Multiplying a tensor by a scalar gives a tensor of Nans

I am new to TensorFlow. I am trying to use the linear regression technique to train my model, but the function results in a tensor of NaNs! Here is the code.
This is how I read the dataset:
train_x = np.asanyarray(df[['Fat']]).astype(np.float32)
train_y = np.asanyarray(df[['Calories']]).astype(np.float32)
The weights initialization:
a = tf.Variable(20.0)
b = tf.Variable(10.0)
The linear regression function:
#tf.function
def h(x):
    y = a * x + b
    return y
The cost function:
#tf.function
def costFunc(y_predicted, train_y):
    return tf.reduce_mean(tf.square(y_predicted - train_y))
The model training:
learning_rate = 0.01
train_data = []
loss_values = []
a_values = []
b_values = []
# steps of looping through all your data to update the parameters
training_epochs = 200

# train model
for epoch in range(training_epochs):
    with tf.GradientTape() as tape:
        y_predicted = h(train_x)
        loss_value = loss_object(train_y, y_predicted)
        loss_values.append(loss_value)

    # get gradients
    gradients = tape.gradient(loss_value, [b, a])

    # compute and adjust weights
    a_values.append(a.numpy())
    b_values.append(b.numpy())
    b.assign_sub(gradients[0] * learning_rate)
    a.assign_sub(gradients[1] * learning_rate)
    if epoch % 5 == 0:
        train_data.append([a.numpy(), b.numpy()])
But when I print (a * train_x), the result is a tensor of NaNs.
UPDATE
I found that the problem is in the dataset: when I changed the dataset it gives a tensor of numbers, but I still don't know what the problem with the first dataset is.
I am sorry, the mistake is silly: I had to re-initialize the variables by re-running the cells each time I run the script, because the variables still held a value which results in infinite values.
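A minimal sketch of that fix, assuming the same notebook-style workflow: re-create (or reset) the variables before each training run so a diverged value from a previous run cannot carry over and overflow to inf/NaN.

import tensorflow as tf

a = tf.Variable(20.0)
b = tf.Variable(10.0)

def reset_parameters():
    # reset the weights to their starting values before every new training run
    a.assign(20.0)
    b.assign(10.0)

reset_parameters()
# ... then run the training loop from the question ...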

Convert 'int' to pytorch 'Variable' makes problems

First project with PyTorch, and I got stuck trying to convert an MNIST label 'int' into a torch 'Variable'. The debugger says it has no dimension?!
# numpy mnist data
X_train, Y_train = read_data("training")
X_test, Y_test = read_data("testing")

arr = np.zeros(5)
for i in range(5):
    # in your training loop:
    costs_ = 0
    for k in range(10000):
        optimizer.zero_grad()  # zero the gradient buffers
        a = torch.from_numpy(np.expand_dims(X_train[k].flatten(), axis=0)).float()
        b = torch.from_numpy(np.array(Y_train[k], dtype=np.float)).float()
        input = Variable(a)
        output = net(input)
        target = Variable(b)  # PROBLEM!!
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()  # Does the update
        costs_ += loss.data.numpy()
    arr[i] = costs_
    print(i)
The error thrown is: "RuntimeError: input and target have different number of elements: input[1 x 1] has 1 elements, while target[] has 0 elements at /b/wheel/pytorch-src/torch/lib/THNN/generic/MSECriterion.c:12"
The error is telling you exactly what is happening. Your target variable is empty.
Edit (after the comment below):
If Y_train[k] = 5, then np.array(Y_train[k], dtype=np.float).shape == (), and in turn Variable(b) becomes a tensor with no dimension.
In order to fix this you will need to pass a list to np.array() and not an integer or a float.
Like this:
b = torch.from_numpy(np.array([Y_train[k]], dtype=np.float)).float()
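A small shape check illustrates the difference, using a hypothetical label value of 5 as in the explanation above:

import numpy as np
import torch

a = np.array(5, dtype=np.float64)    # shape () -- zero-dimensional
b = np.array([5], dtype=np.float64)  # shape (1,) -- one element
print(torch.from_numpy(a).shape)     # torch.Size([])
print(torch.from_numpy(b).shape)     # torch.Size([1])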
