missing 1 required positional argument, in model summary - python

I tried to print the summary of the SinGAN model, but I got a "missing 1 required positional argument" error.
This is the code:
def init_models(opt):
    # generator initialization:
    netG = models.GeneratorConcatSkip2CleanAdd(opt).to(opt.device)
    netG.apply(models.weights_init)
    if opt.netG != '':
        netG.load_state_dict(torch.load(opt.netG))
    summary(netG, input_size=(3, 201, 256))
    print(netG)

    # discriminator initialization:
    netD = models.WDiscriminator(opt).to(opt.device)
    netD.apply(models.weights_init)
    if opt.netD != '':
        netD.load_state_dict(torch.load(opt.netD))
    print(netD)

    return netD, netG
The problem appears when I add this line:
summary(netG, input_size=(3, 201, 256))
I got the complete code from here.
So is my approach wrong? Should I pass a different variable as the model?

The forward function of your model expects two input images. In torchsummary.summary you are providing only one input shape, so it tries to call the model with a single input, leaving the second required argument unfilled and hence raising the error. Read here how to pass inputs to torchsummary.summary when the model expects multiple inputs in its forward method.
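As a rough sketch, torchsummary accepts a list of input sizes when the forward method takes several tensors. The second shape below is an assumption; the actual size of the extra input depends on your SinGAN configuration:

# Hedged sketch: pass one size per forward argument.
# The shape of the second input is assumed to match the first here.
from torchsummary import summary

summary(netG, input_size=[(3, 201, 256), (3, 201, 256)])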


ValueError: Missing data for input "input_2". You passed a data dictionary with keys ['y', 'x']. Expected the following keys: ['input_2']

Following the previous code here, I am in the process of evaluating the federated learning model and I ran into a couple of issues.
This is the evaluation code:
central_test = test.create_tf_dataset_from_all_clients()
test_data = central_test.map(reshape_data)

# Function that accepts a server state and uses
# Keras to evaluate on the test dataset.
def evaluate(server_state):
    keras_model = create_keras_model()
    keras_model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
    )
    keras_model.set_weights(server_state)
    keras_model.evaluate(central_test)

server_state = federated_algorithm.initialize()
evaluate(server_state)
This is the error message:
ValueError: Missing data for input "input_2". You passed a data dictionary with keys ['y', 'x']. Expected the following keys: ['input_2']
So what would be the problem here?
Also, is the method create_tf_dataset_from_all_clients used in the right place? As written in the tutorial, it is meant to create a centralized evaluation dataset, so why do we need a centralized dataset here?
The test dataset has a different format during evaluation: Keras expects (features, labels) pairs, while each element of your dataset is a dict with keys 'x' and 'y'. Map it into that form and evaluate on the mapped dataset instead of central_test:
test_data = test.create_tf_dataset_from_all_clients().map(reshape_data).batch(2)
test_data = test_data.map(lambda x: (x['x'], x['y']))
def evaluate(server_state):
    keras_model = create_keras_model()
    keras_model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
    )
    keras_model.set_weights(server_state)
    keras_model.evaluate(test_data)

server_state = federated_algorithm.initialize()
evaluate(server_state)
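As a quick sanity check (a hedged sketch that assumes TF 2.x eager execution and the mapped test_data from above), each element of the dataset should now be an (x, y) tuple, which is the format keras_model.evaluate accepts:

# Inspect one batch: expect a tuple of (features, labels) tensors.
sample = next(iter(test_data))
print(type(sample), sample[0].shape, sample[1].shape)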

PyTorch: Adding hooks to model for saving the intermediate layers output returns the features two times

I want to implement the Style Transfer paper using PyTorch and the VGG19 network.
For this I need the intermediate output features of some layers.
I named the convolutional modules ['conv_1', 'conv_2', ..., 'conv_16'].
To manage the hooks and features I use this method:
class SaveOutput:
    # Callable object for saving the layers' outputs
    def __init__(self):
        self.outputs = []

    def __call__(self, module, module_in, module_out):
        self.outputs.append(module_out)

    def clear(self):
        self.outputs = []

def addHooksToModel(model, layerNames, hookHandles):
    # Remove existing hooks
    for hook in hookHandles:
        hook.remove()
    hookHandles = []
    features = SaveOutput()
    for name, module in model.named_modules():
        if name in layerNames:
            hookHandles.append(module.register_forward_hook(features))
    return features, hookHandles
I want to store the features of the content and style separately:
CONTENT_LAYERS = ["conv_14"]
STYLE_LAYERS = ["conv_1","conv_3","conv_5","conv_9","conv_13"]
hook_handles_content = []
hook_handles_style = []
content_features, hook_handles_content = addHooksToModel(model, CONTENT_LAYERS, hook_handles_content)
style_features, hook_handles_style = addHooksToModel(model, STYLE_LAYERS, hook_handles_style)
I then pass the contentImage and styleImage through the network and I expect the content_features.outputs to contain 1 tensor and the style_features.outputs to contain 5 tensors.
model(contentImage)
contentImg_content = content_features.outputs
content_features.clear()
model(styleImage)
styleImg_style = style_features.outputs
style_features.clear()
But in reality I get 1 tensor in content_features.outputs (as expected) but 10 tensors in style_features.outputs (twice the expected number).
The same thing happens if I first pass the styleImage and then the contentImage: I get 5 tensors in style_features.outputs (as expected) but 2 tensors in content_features.outputs (twice the expected number).
Could somebody point me in the right direction? I know I'm missing something, probably in the way PyTorch hooks work, but I can't figure out what. Thank you!
I don't know whether I am the luckiest (for figuring out what's wrong) or the most distracted (for making the mistake in the first place) person.
A registered forward hook fires on every forward pass, so when passing the contentImage through the network the features get saved in both the content_features (1 tensor) and the style_features (5 tensors) objects. Then, when passing the styleImage through the network, I get an additional 5 tensors.
The solution was to clear both objects after passing the contentImage:
model(contentImage)
...
content_features.clear()
style_features.clear()
model(styleImage)
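For anyone hitting the same thing, here is a minimal, self-contained sketch (a toy two-layer model, not the VGG19 setup above) showing that every registered forward hook fires on every forward pass, so both recorders accumulate outputs unless they are cleared between passes:

# Toy demonstration: hooks fire on each forward pass regardless of which image is passed.
import torch
import torch.nn as nn

class SaveOutput:
    def __init__(self):
        self.outputs = []
    def __call__(self, module, module_in, module_out):
        self.outputs.append(module_out)
    def clear(self):
        self.outputs = []

model = nn.Sequential(nn.Conv2d(3, 4, 3), nn.Conv2d(4, 4, 3))
content_rec, style_rec = SaveOutput(), SaveOutput()
model[0].register_forward_hook(content_rec)
model[1].register_forward_hook(style_rec)

model(torch.randn(1, 3, 16, 16))   # "content" pass: both hooks fire
model(torch.randn(1, 3, 16, 16))   # "style" pass: both hooks fire again
print(len(content_rec.outputs), len(style_rec.outputs))  # 2 2 -> clear both between passes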

TensorFlow strange (random?) output expected for RegisterGradient

I have created a custom op using the tutorial and modified it a bit. I want to use this op as input for the compute_gradients method.
My op expects three inputs: target values, predicted values, and another matrix. It returns a new matrix with the same shape as the target values.
However, when I register a gradient for this op with @ops.RegisterGradient, it expects strange return values, giving this message in one script:
ValueError: Num gradients 1 generated for op name: "SoftDtwJacobianSqEuc"
op: "SoftDtwJacobianSqEuc"
input: "targets"
input: "mul"
input: "strided_slice"
do not match num inputs 3
And this in another script:
ValueError: Num gradients 1 generated for op name: "SoftDtwJacobianSqEuc"
op: "SoftDtwJacobianSqEuc"
input: "decoder_targets"
input: "Reshape"
input: "strided_slice_8"
do not match num inputs 3
A snippet of the code I am running (full example below):
# Calling the op
backwards = soft_dtw_jacobian_sq_euc_module.soft_dtw_jacobian_sq_euc(decoder_targets, decoder_predictions, alignment_matrix[1:-1,1:-1])

# Need to register, otherwise I get the error: No gradient defined for operation...
@ops.RegisterGradient("SoftDtwJacobianSqEuc")
def _soft_dtw_jacobian_sq_euc_grad(op, grad):
    # To generate the error which mentions the expected return values:
    return None
    # This kind of works in the first script, as I can 'mul'tiply the gradient with the output of the op:
    #return None, soft_dtw_jacobian_sq_euc_module.soft_dtw_jacobian_sq_euc(decoder_targets, decoder_predictions, alignment_matrix[1:-1,1:-1]), None

optimizer = tf.train.AdamOptimizer(0.02)
params = tf.trainable_variables()
gradients = optimizer.compute_gradients(backwards, params)
train_op = optimizer.apply_gradients(gradients)
Why is RegisterGradient expecting different return values? How does it determine them?
Preferably I would just return the output of the op (since that's what I made it for), but if I do not use RegisterGradient I get a "No gradient defined for operation..." error.
I have a complete working example here: python part and c++ op.
Using TensorFlow 1.2.1 and Python 2.7.
I had the same error before. The error message is generated in the TensorFlow function _VerifyGeneratedGradients in gradients_impl.py:
def _VerifyGeneratedGradients(grads, op):
  """Verify that gradients are valid in number and type.

  Args:
    grads: List of generated gradients.
    op: Operation for which the gradients were generated.

  Raises:
    ValueError: if sizes of gradients and inputs don't match.
    TypeError: if type of any gradient is not valid for its input.
  """
  if len(grads) != len(op.inputs):
    raise ValueError("Num gradients %d generated for op %s do not match num "
                     "inputs %d" % (len(grads), op.node_def, len(op.inputs)))
Now we can see what this error means: your op SoftDtwJacobianSqEuc has three inputs, so TensorFlow expects the gradient function _soft_dtw_jacobian_sq_euc_grad to return three outputs, one gradient with respect to each input of the forward op.
Note that you actually only want the gradient for one of the inputs; since the other two gradients will not be used in the back propagation, you can simply return fake gradients (None, or zeros of the right shape) for those slots.
Although my answer may be a bit late, I still hope it helps!
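A minimal sketch of what that could look like, reusing the names from the question. The gradient expression for the predictions input is just the commented-out idea from the snippet above, so treat it as an assumption rather than the mathematically verified gradient:

@ops.RegisterGradient("SoftDtwJacobianSqEuc")
def _soft_dtw_jacobian_sq_euc_grad(op, grad):
    # The op has three inputs, so return three gradients: one per input.
    targets, predictions, alignment = op.inputs
    # Gradient only flows to the predictions input; the other slots get None.
    pred_grad = grad * soft_dtw_jacobian_sq_euc_module.soft_dtw_jacobian_sq_euc(
        targets, predictions, alignment)
    return None, pred_grad, None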

Tensorflow Split Using Feed Dict Input Dimension

I'm trying to tf.split a tensor based on the dimension of an input fed in with feed_dict (the dimension of the input changes with each batch). Currently I keep getting an error saying that a tensor cannot be split with a "Dimension". Is there a way to get the value of the dimension as an integer and split using it?
Thanks!
input_d = tf.placeholder(tf.int32, [None, None], name="input_d")

# toy feed dict
feed = {
    input_d: [[20, 30, 40, 50, 60], [2, 3, 4, 5, -1]]  # document
}

W_embeddings = tf.get_variable(shape=[vocab_size, embedding_dim],
                               initializer=tf.random_uniform_initializer(-0.01, 0.01),
                               name="W_embeddings")
document_embedding = tf.gather(W_embeddings, input_d)
timesteps_d = document_embedding.get_shape()[1]
doc_input = tf.split(1, timesteps_d, document_embedding)
tf.split takes a Python integer for the num_split argument. However, document_embedding.get_shape() returns a TensorShape, and document_embedding.get_shape()[1] gives a Dimension instance, hence the error saying it "can't split with a Dimension".
Try timesteps_d = document_embedding.get_shape().as_list()[1]; this should give you a Python integer.
Here is the relevant documentation for tf.split and tf.Tensor.get_shape.
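A hedged sketch of the suggested change, using a toy placeholder with a fixed second dimension so that the static shape is actually known (with the original [None, None] placeholder, as_list()[1] would come back as None and you would need the dynamic shape instead):

import tensorflow as tf

input_d = tf.placeholder(tf.int32, [None, 5], name="input_d")
W_embeddings = tf.get_variable("W_embeddings", shape=[100, 8],
                               initializer=tf.random_uniform_initializer(-0.01, 0.01))
document_embedding = tf.gather(W_embeddings, input_d)       # static shape: [None, 5, 8]

timesteps_d = document_embedding.get_shape().as_list()[1]   # plain Python int: 5
doc_input = tf.split(1, timesteps_d, document_embedding)    # same (split_dim, num_split, value) ordering as the question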

Sci-kit Learn SGD Classifier Partial_Fit Error

I'm using scikit-learn and the SGD classifier to train an SVM in mini-batches. Here's a little code snippet:
for row in reader:
    if row[0] in model.docvecs:
        TRAINING_X.append(model.docvecs[row[0]])
        TRAINING_Y.append(row[2])
    if count % 10000 == 0:
        np_x = np.asarray(TRAINING_X)
        np_y = np.asarray(TRAINING_Y)
        clf.partial_fit(np_x, np_y, np.unique(np.asarray))
        TRAINING_X = []
        TRAINING_Y = []
    count += 1
I'm using the partial_fit function to train on a batch every 10,000 data points, and I'm using np.unique() to generate the class labels, as per the documentation.
However, when I run this, I get the following error:
raise ValueError("The number of class labels must be "
ValueError: The number of class labels must be greater than one.
I'm a little confused. Am I generating class labels incorrectly?
The documentation for partial_fit says: "Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset."
You are passing np.unique(np.asarray), which is incorrect: np.asarray is the function object itself, not your labels.
Going by the error thrown by the program, only one unique class is reaching the model. Use np.unique(np_y) to check how many unique classes you are feeding in and make sure it is more than one.
In short, the classes argument should be np.unique(np_y) (or, better, the unique labels of the entire dataset) instead of np.unique(np.asarray).
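A minimal runnable sketch of the corrected call, with toy data standing in for the doc2vec features and labels from the question; the important part is computing the full label set once and passing it to every partial_fit call:

import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy data standing in for the real features and labels (assumed names).
X_all = np.random.rand(100, 5)
y_all = np.random.choice(["pos", "neg"], size=100)

clf = SGDClassifier()
classes = np.unique(y_all)  # label set of the *entire* dataset, computed once

# Feed the data in mini-batches, passing the fixed class list every time.
for start in range(0, len(X_all), 20):
    np_x = X_all[start:start + 20]
    np_y = y_all[start:start + 20]
    clf.partial_fit(np_x, np_y, classes=classes)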
