RuntimeError: Operation does not have identity in f-string statement - python

I am evaluating a PyTorch model. It gives results in the following manner:
results = model(batch)
# results is a list of dictionaries with 'boxes', 'labels' and 'scores' keys and torch tensor values
Then I try to print some of the values to check what is happening:
print(
    (
        f"{results[0]['boxes'].shape[0]}\n"  # how many boxes there are
        f"{results[0]['scores'].mean()}"     # mean confidence score of the boxes
    )
)
This results in the error:
Exception has occurred: RuntimeError: operation does not have identity
To make things more confusing, the print only fails some of the time. Why does this fail?

I had the same problem in my code. It turns out that calling reductions such as min() or mean() on an empty tensor raises the no-identity exception: a reduction over zero elements has no identity value to start from. (Reading shape is fine; it is the reductions that fail.)
Code to reproduce:
import torch

a = torch.arange(12)
mask = a > 100
b = a[mask]  # tensor([], dtype=torch.int64) -- an empty tensor
b.min()      # yields "RuntimeError: operation does not have an identity."
Figure out why your code returns empty tensors and this will solve the problem.
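For the model-evaluation case above, a minimal sketch of such a guard, assuming each result is a dict with 'boxes' and 'scores' tensors as described in the question:
import torch

def describe_result(result):
    # result is one dict from the model's output list
    num_boxes = result['boxes'].shape[0]  # shape is safe even on empty tensors
    if num_boxes > 0:
        print(f"{num_boxes}\n{result['scores'].mean()}")
    else:
        print(f"{num_boxes}\nno detections, so no mean score")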

Related

Iterating over tf.Tensor in graph execution mode

This slice of code is from a custom metric function in TensorFlow:
r = []
p = []
count = 0
for idx, elem in enumerate(tf.round(data[:, -1])):
    if elem == 1:
        count += 1
    r.append(count / (count_pos + 1e-6))
    p.append(count / (idx + 1))
data is a 2-dimensional tensor and count_pos a scalar.
When I run the metric as a stand-alone function, everything works fine. But when I pass it to model.compile, I get the following error, referencing the for-loop in the code snippet above, probably due to graph execution mode:
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
I know that similar questions related to this error message have been discussed here. However, they don't seem to help me in this particular situation, as I am not able to get rid of the for-loop.
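One way the loop can usually be eliminated, so that AutoGraph no longer has to iterate over a tensor, is to express the running count as a cumulative sum. A sketch under the assumptions stated in the question (the last column of data holds 0/1 labels and count_pos is the total number of positives):
import tensorflow as tf

def pr_points(data, count_pos):
    labels = tf.round(data[:, -1])   # 0/1 labels from the last column
    count = tf.cumsum(labels)        # running count of positives, replaces the loop
    idx = tf.cast(tf.range(1, tf.shape(labels)[0] + 1), labels.dtype)
    r = count / (count_pos + 1e-6)   # the values the loop appended to r
    p = count / idx                  # the values the loop appended to p
    return r, p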

Error when trying to simulate one FMU with 4 inputs from csv files [duplicate]

I have an FMU created in GT-SUITE. I am trying to work with it in Python using the PyFMI package.
My code:
from pyfmi import load_fmu
import numpy as np
model = load_fmu('AHUPIv2b.fmu')
t = np.linspace(0.,100.,100)
u = np.linspace(3.5,4.5,100)
v = np.linspace(900,1000,100)
u_traj = np.transpose(np.vstack((t,u)))
v_traj = np.transpose(np.vstack((t,v)))
input_object = (('InputVarI','InputVarP'),(u_traj,v_traj))
res = model.simulate(final_time=500, input=input_object, options={'ncp':500})
res = model.simulate(final_time=10)
model.simulate takes input as one of its parameters. The documentation says:
input --
Input signal for the simulation. The input should be a 2-tuple
consisting of first the names of the input variable(s) and then
the data matrix.
'InputVarI','InputVarP' are the input variables and u_traj,v_traj are data matrices.
My code gives an error:
TypeError: tuple indices must be integers or slices, not tuple
Is the input_object created incorrectly? Can someone help with how to create the input tuple correctly as per the documentation?
The input object is created incorrectly. The second element of the input tuple should be a single data matrix, not two data matrices: time goes in the first column, followed by one column per input variable.
The correct input should be:
data = np.transpose(np.vstack((t, u, v)))
input_object = (['InputVarI', 'InputVarP'], data)
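Putting it together, a sketch of the corrected run (the FMU file name, variable names, and simulation settings are taken from the question):
from pyfmi import load_fmu
import numpy as np

model = load_fmu('AHUPIv2b.fmu')

t = np.linspace(0., 100., 100)
u = np.linspace(3.5, 4.5, 100)
v = np.linspace(900, 1000, 100)

# Single data matrix: time in column 0, then one column per input variable,
# in the same order as the names in the input tuple.
data = np.transpose(np.vstack((t, u, v)))
input_object = (['InputVarI', 'InputVarP'], data)

res = model.simulate(final_time=500, input=input_object, options={'ncp': 500})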
See also pyFMI parameter change don't change the simulation output

TensorFlow strange (random?) output expected for RegisterGradient

I have created a custom op using the tutorial, and modified it a bit. I want to use this op as input for the compute_gradients method.
My op expects three inputs: target values, predicted values, and another matrix. It returns a new matrix with the same shape as the target values.
However, when I use @ops.RegisterGradient for this method, it expects strange return values, giving this message in one script:
ValueError: Num gradients 1 generated for op name: "SoftDtwJacobianSqEuc"
op: "SoftDtwJacobianSqEuc"
input: "targets"
input: "mul"
input: "strided_slice"
do not match num inputs 3
And this in another script:
ValueError: Num gradients 1 generated for op name: "SoftDtwJacobianSqEuc"
op: "SoftDtwJacobianSqEuc"
input: "decoder_targets"
input: "Reshape"
input: "strided_slice_8"
do not match num inputs 3
A snippet of the code I am running (full example below):
# Calling the op
backwards = soft_dtw_jacobian_sq_euc_module.soft_dtw_jacobian_sq_euc(
    decoder_targets, decoder_predictions, alignment_matrix[1:-1, 1:-1])

# Need to register, otherwise I get the error: No gradient defined for operation...
@ops.RegisterGradient("SoftDtwJacobianSqEuc")
def _soft_dtw_jacobian_sq_euc_grad(op, grad):
    # To generate the error which mentions the expected return values:
    return None
    # This kind of works in the first script, as I can 'mul'tiply the gradient
    # with the output of the op:
    # return (None, soft_dtw_jacobian_sq_euc_module.soft_dtw_jacobian_sq_euc(
    #     decoder_targets, decoder_predictions, alignment_matrix[1:-1, 1:-1]), None)

optimizer = tf.train.AdamOptimizer(0.02)
params = tf.trainable_variables()
gradients = optimizer.compute_gradients(backwards, params)
train_op = optimizer.apply_gradients(gradients)
Why is RegisterGradient expecting different return values? How does RegisterGradient determine these?
Preferably I would just return the output of the op (since that's what I made it for), but if I do not use RegisterGradient I get a "No gradient defined for operation..." error.
I have a complete working example here: python part and c++ op
Using TensorFlow 1.2.1 and Python 2.7.
I had the same error before. The error message is generated in the TensorFlow source file gradients_impl.py:
def _VerifyGeneratedGradients(grads, op):
  """Verify that gradients are valid in number and type.

  Args:
    grads: List of generated gradients.
    op: Operation for which the gradients where generated.

  Raises:
    ValueError: if sizes of gradients and inputs don't match.
    TypeError: if type of any gradient is not valid for its input.
  """
  if len(grads) != len(op.inputs):
    raise ValueError("Num gradients %d generated for op %s do not match num "
                     "inputs %d" % (len(grads), op.node_def, len(op.inputs)))
Now we can see what this error means: your op SoftDtwJacobianSqEuc takes three inputs, so TensorFlow expects the gradient function _soft_dtw_jacobian_sq_euc_grad to return three outputs, one gradient with respect to each input of the forward op.
Since you actually only want the gradient for one of the inputs, and the other gradients will never be used in the backpropagation, you can simply return None (or fake gradients of the right shape) for the remaining inputs.
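A sketch of such a three-output gradient function, mirroring the commented-out return in the question (which routes the incoming gradient through the op's own output); the None slots mark the inputs whose gradients are never consumed:
@ops.RegisterGradient("SoftDtwJacobianSqEuc")
def _soft_dtw_jacobian_sq_euc_grad(op, grad):
    # Three op inputs -> exactly three return values, in input order.
    jacobian = soft_dtw_jacobian_sq_euc_module.soft_dtw_jacobian_sq_euc(
        op.inputs[0], op.inputs[1], op.inputs[2])
    return None, grad * jacobian, None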
My answer may be a little late, but I still hope it helps!

Tensorflow Split Using Feed Dict Input Dimension

I'm trying to tf.split a tensor based on the dimension of an input fed in using feed_dict (dimension of input changes with each batch). Currently I keep getting an error saying that a tensor cannot be split with a "Dimension". Is there a way to get the value of the dimension and split using it?
Thanks!
input_d = tf.placeholder(tf.int32, [None, None], name="input_d")

# toy feed dict
feed = {
    input_d: [[20, 30, 40, 50, 60], [2, 3, 4, 5, -1]]  # document
}

W_embeddings = tf.get_variable(shape=[vocab_size, embedding_dim],
                               initializer=tf.random_uniform_initializer(-0.01, 0.01),
                               name="W_embeddings")
document_embedding = tf.gather(W_embeddings, input_d)

timesteps_d = document_embedding.get_shape()[1]
doc_input = tf.split(1, timesteps_d, document_embedding)
tf.split takes a Python integer for the num_split argument. However, document_embedding.get_shape() returns a TensorShape, and document_embedding.get_shape()[1] gives a Dimension instance, hence the error saying it "can't split with a Dimension".
Try timesteps_d = document_embedding.get_shape().as_list()[1]; this should give you a Python integer.
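Applied to the snippet above (a sketch; note that as_list() returns None for dimensions that are not statically known, so the split dimension must be fixed at graph-construction time):
# as_list() converts the TensorShape into a list of plain Python ints
timesteps_d = document_embedding.get_shape().as_list()[1]
doc_input = tf.split(1, timesteps_d, document_embedding)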
Here is the relevant documentation for tf.split and tf.Tensor.get_shape.

Sci-kit Learn SGD Classifier Partial_Fit Error

I'm using scikit-learn and the SGD classifier to train an SVM in mini-batches. Here's a little code snippet:
for row in reader:
    if row[0] in model.docvecs:
        TRAINING_X.append(model.docvecs[row[0]])
        TRAINING_Y.append(row[2])
    if count % 10000 == 0:
        np_x = np.asarray(TRAINING_X)
        np_y = np.asarray(TRAINING_Y)
        clf.partial_fit(np_x, np_y, np.unique(np.asarray))
        TRAINING_X = []
        TRAINING_Y = []
    count += 1
I'm using the partial_fit function to train on every 10,000 data points, and np.unique() to generate the class labels, as per the documentation.
However, when I run this, I get the following error:
ValueError: The number of class labels must be greater than one.
I'm a little confused. Am I generating class labels incorrectly?
The documentation for partial_fit says the classes argument holds the "classes across all calls to partial_fit", which "can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset".
You are passing np.unique(np.asarray), which is incorrect: np.asarray there is the function object itself, not your label array.
Going by the error thrown by the program, I think there is only one unique class in your target variable. Use np.unique(np_y) to check how many unique classes you are feeding into the model and ensure that there is more than one.
Also, the value of the classes argument should be the unique classes of your targets, np.unique(np_y) (ideally computed over the entire dataset's target vector, as the documentation says), instead of np.unique(np.asarray).
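A minimal sketch of the intended call, assuming the full set of labels can be collected up front (the toy data below is hypothetical):
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="hinge")  # hinge loss trains a linear SVM

# The class set must cover the whole dataset and stay identical across calls.
all_labels = np.array(["pos", "neg", "pos", "neg"])  # hypothetical y_all
classes = np.unique(all_labels)

# One mini-batch (hypothetical toy features and targets)
np_x = np.array([[0.1, 0.2], [0.3, 0.4]])
np_y = np.array(["pos", "neg"])
clf.partial_fit(np_x, np_y, classes=classes)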
