Reduce the dimension of a tensor using max-pooling layer - python

My question is very simple:
How can I use a max-pooling layer to reduce a list or a tensor down to 512 elements?
I tried the following code:
input_ids = tokenizer.encode(question, text)
print(input_ids) # input_ids is a list of 700 elements
m = nn.AdaptiveMaxPool1d(512)
input_ids = m(torch.tensor([[input_ids]])) # convert the list to tensor and apply max-pooling layer
But I get the following error:
RuntimeError: "adaptive_max_pool2d_cpu" not implemented for 'Long'
Could you please help me figure out where the error is?

The problem is with your input_ids: you are passing a tensor of type long to AdaptiveMaxPool1d. Just convert it to float.
input_ids = tokenizer.encode(question, text)
print(input_ids) # input_ids is a list of 700 elements
m = nn.AdaptiveMaxPool1d(512)
input_ids = m(torch.tensor([[input_ids]]).float()) # cast the long tensor to float before pooling
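For reference, here is a minimal self-contained sketch of the same idea (the 700-element list is just a stand-in for the tokenizer output):
import torch
import torch.nn as nn

input_ids = list(range(700))  # stand-in for tokenizer.encode(question, text)
m = nn.AdaptiveMaxPool1d(512)
x = torch.tensor([[input_ids]], dtype=torch.float)  # shape (1, 1, 700); pooling needs a float dtype
pooled = m(x)                                       # shape (1, 1, 512)
print(pooled.shape)  # torch.Size([1, 1, 512])
Keep in mind that max-pooling raw token ids only shortens the sequence; the pooled values are no longer valid vocabulary indices.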

Related

TensorFlow 2.0 Layer with None type shape Tensor

I am attempting to call the function below inside a tf.keras.layers.Lambda(), in line with TF 2.0. The inputs and outputs tensors are two images of the same dimensions with 3 color channels. My goal is to extract a mask from the outputs tensor, apply it to the inputs tensor, and return the resulting tensor. The tensors are flattened because of the limitations of the tf.tensor_scatter_nd_update() function. When I construct the model, it fails to initialize updates since indices.shape[0] is None. If I call this layer outside the model with two tf.constant() tensors to initialize x, it runs perfectly fine in eager execution (since the x tensors have defined values). Unfortunately, when I call this function with tf.keras.layers.Lambda() I receive the following error:
TypeError: can't multiply sequence by non-int of type 'NoneType'
@tf.function
def applyMask(x):
    # Extract tensors
    inputs = x[0]
    outputs = x[1]
    # Flatten the outputs tensor and extract mask indices
    outputs = tf.reshape(outputs, (tf.size(outputs),))
    indices = tf.where(outputs == 1.)
    indices = tf.cast(indices, tf.int32)
    # Construct updates tensor from mask indices
    updates = tf.constant([1.] * indices.shape[0])
    # Flatten input tensor and apply mask
    out_dim = inputs.shape
    inputs = tf.reshape(inputs, (tf.size(inputs),))
    tensor = tf.tensor_scatter_nd_update(inputs, indices, updates)
    # Reconstruct tensor into the input's original shape
    tensor = tf.reshape(tensor, out_dim)
    return tensor
It doesn't need to be this complex. Simply do:
inp1 = Input(shape=(None, None, 3)) # Inputs
inp2 = Input(shape=(None, None, 3)) # Outputs
out = Lambda(lambda x: tf.where(tf.equal(x[1], 1), x[1], x[0]))([inp1, inp2])
You can even leave the height and width as None; as long as the parallel samples passed to inp1 and inp2 have exactly the same shape, tf.where will work fine.
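Here is a minimal end-to-end sketch of that suggestion (the model name mask_model and the random test arrays are illustrative assumptions, not part of the original code):
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

inp1 = Input(shape=(None, None, 3))  # "inputs" image
inp2 = Input(shape=(None, None, 3))  # "outputs" image carrying the mask
# wherever outputs == 1 keep the outputs value, otherwise keep the inputs value
out = Lambda(lambda x: tf.where(tf.equal(x[1], 1), x[1], x[0]))([inp1, inp2])
mask_model = Model([inp1, inp2], out)

a = np.random.rand(2, 64, 64, 3).astype("float32")
b = np.random.rand(2, 64, 64, 3).astype("float32")
print(mask_model.predict([a, b]).shape)  # (2, 64, 64, 3)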

BERT sentence embedding by summing last 4 layers

I used Chris McCormick's tutorial on BERT with pytorch-pretrained-bert to get a sentence embedding as follows:
tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)

# Holds the list of 12 layer embeddings for each token
# Will have the shape: [# tokens, # layers, # features]
token_embeddings = []
batch_i = 0  # there is only one sentence in the batch
# For each token in the sentence...
for token_i in range(len(tokenized_text)):
    # Holds 12 layers of hidden states for each token
    hidden_layers = []
    # For each of the 12 layers...
    for layer_i in range(len(encoded_layers)):
        # Look up the vector for `token_i` in `layer_i`
        vec = encoded_layers[layer_i][batch_i][token_i]
        hidden_layers.append(vec)
    token_embeddings.append(hidden_layers)
Now, I am trying to get the final sentence embedding by summing the last 4 layers as follows:
summed_last_4_layers = [torch.sum(torch.stack(layer)[-4:], 0) for layer in token_embeddings]
But instead of getting a single torch vector of length 768 I get the following:
[tensor([-3.8930e+00, -3.2564e+00, -3.0373e-01, 2.6618e+00, 5.7803e-01,
-1.0007e+00, -2.3180e+00, 1.4215e+00, 2.6551e-01, -1.8784e+00,
-1.5268e+00, 3.6681e+00, ...., 3.9084e+00]), tensor([-2.0884e+00, -3.6244e-01, ....2.5715e+00]), tensor([ 1.0816e+00,...-4.7801e+00]), tensor([ 1.2713e+00,.... 1.0275e+00]), tensor([-6.6105e+00,..., -2.9349e-01])]
What did I get here? How do I pool the sum of the last four layers?
Thank you!
You create a list using a list comprehension that iterates over token_embeddings. It is a list that contains one tensor per token - not one tensor per layer, as you probably thought (judging from your for layer in token_embeddings). You thus get a list with a length equal to the number of tokens. For each token, you have a vector that is the sum of the BERT embeddings from the last 4 layers.
It would be more efficient to avoid the explicit for loops and list comprehensions:
summed_last_4_layers = torch.stack(encoded_layers[-4:]).sum(0)
Now, variable summed_last_4_layers contains the same data, but in the form of a single tensor of dimension: length of the sentence × 768.
To get a single (i.e., pooled) vector, you can pool over the first dimension of the tensor. Max-pooling or average-pooling might make much more sense here than summing all the token embeddings: when you sum, the vectors of sentences of different lengths land in different ranges and are not really comparable.
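A minimal sketch of both pooling options, assuming encoded_layers is the list of 12 layer tensors of shape [1, num_tokens, 768] produced by the code above:
import torch

# sum the last 4 layers and drop the batch dimension -> [num_tokens, 768]
summed_last_4_layers = torch.stack(encoded_layers[-4:]).sum(0).squeeze(0)

# pool over the token dimension to get a single 768-dim sentence vector
max_pooled = summed_last_4_layers.max(dim=0).values  # max-pooling
mean_pooled = summed_last_4_layers.mean(dim=0)       # average-pooling
print(max_pooled.shape, mean_pooled.shape)           # torch.Size([768]) for both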

How to get output from randomly sampled k entries from a tensor

I have a keras/tf problem using sub-sampling of values from a tensor. My model is given below:
x_input = Input((input_size,))
enc1 = Dense(encoder_size[0], activation='relu')(x_input)
drop = Dropout(keep_prob)(enc1)
enc2 = Dense(encoder_size[1], activation='relu')(drop)
drop = Dropout(keep_prob)(enc2)
mu = Dense(latent_dim, activation='linear', name='encoder_mean')(drop)
encoder = Model(x_input,mu)
I want to sample from the input randomly and then get the encoded values of the input. The error I am getting is
ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, 13)
which I understand is because predict does not work on placeholders, but I am not sure what to pass instead to get the output for a placeholder.
# sample input randomly
sample_num = 500
idxs = tf.range(tf.shape(x_input)[0])
ridxs = tf.random_shuffle(idxs)[:sample_num]
sample_input = tf.gather(x_input, ridxs)
# get sample shape
sample_shape = K.shape(sample_input)
# sample from encoded value
sample_encoded = encoder.predict(sample_input) <----- Error
If you look at the documentation of the predict function, it does not accept a placeholder or a tensor node as input. You have to pass a NumPy array directly (in your case).
If you wish to perform some special data preprocessing that is not part of your regular model, you have to do it in NumPy and avoid tensor computations for it.
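A minimal sketch of the NumPy route, assuming x_data is the NumPy array that is normally fed to x_input (the name x_data is an assumption for illustration):
import numpy as np

sample_num = 500
# sample row indices without replacement and gather the rows in NumPy
ridxs = np.random.choice(x_data.shape[0], size=sample_num, replace=False)
sample_input = x_data[ridxs]

# predict accepts a NumPy array, so this works
sample_encoded = encoder.predict(sample_input)
print(sample_encoded.shape)  # (500, latent_dim)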

How to build an embedding layer in Tensorflow RNN?

I'm building an RNN LSTM network to classify texts based on the writers' age (binary classification - young / adult).
Seems like the network does not learn and suddenly starts overfitting:
[Training curves: red = train, blue = validation]
One possibility could be that the data representation is not good enough. I just sorted the unique words by their frequency and gave them indices. E.g.:
unknown -> 0
the -> 1
a -> 2
. -> 3
to -> 4
So I'm trying to replace that with word embedding.
I saw a couple of examples but I'm not able to implement it in my code. Most of the examples look like this:
embedding = tf.Variable(tf.random_uniform([vocab_size, hidden_size], -1, 1))
inputs = tf.nn.embedding_lookup(embedding, input_data)
Does this mean we're building a layer that learns the embedding? I thought one should download pre-trained Word2Vec or GloVe vectors and just use those.
Anyway let's say I want to build this embedding layer...
If I use these 2 lines in my code I get an error:
TypeError: Value passed to parameter 'indices' has DataType float32 not in list of allowed values: int32, int64
So I guess I have to change the input_data type to int32. So I do that (it's all indices after all), and I get this:
TypeError: inputs must be a sequence
I tried wrapping inputs (argument to tf.contrib.rnn.static_rnn) with a list: [inputs] as suggested in this answer, but that produced another error:
ValueError: Input size (dimension 0 of inputs) must be accessible via shape inference, but saw value None.
Update:
I was unstacking the tensor x before passing it to embedding_lookup. I moved the unstacking after the embedding.
Updated code:
MIN_TOKENS = 10
MAX_TOKENS = 30
x = tf.placeholder("int32", [None, MAX_TOKENS, 1])
y = tf.placeholder("float", [None, N_CLASSES]) # 0.0 / 1.0
...
seqlen = tf.placeholder(tf.int32, [None]) #list of each sequence length*
embedding = tf.Variable(tf.random_uniform([VOCAB_SIZE, HIDDEN_SIZE], -1, 1))
inputs = tf.nn.embedding_lookup(embedding, x) #x is the text after converting to indices
inputs = tf.unstack(inputs, MAX_POST_LENGTH, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, inputs, dtype=tf.float32, sequence_length=seqlen) #---> Produces error
*seqlen: I zero-padded the sequences so all of them have the same list size, but since the actual sizes differ, I prepared a list with each sequence's length without the padding.
New error:
ValueError: Input 0 of layer basic_lstm_cell_1 is incompatible with the layer: expected ndim=2, found ndim=3. Full shape received: [None, 1, 64]
64 is the size of each hidden layer.
It's obvious that I have a problem with the dimensions... How can I make the inputs fit the network after embedding?
From the tf.nn.static_rnn documentation, the inputs argument is expected to be:
A length T list of inputs, each a Tensor of shape [batch_size, input_size]
So your code should be something like:
x = tf.placeholder("int32", [None, MAX_TOKENS])
...
inputs = tf.unstack(inputs, axis=1)
Alternatively, tf.squeeze removes dimensions of size 1 from a tensor. If the end goal is to have inputs of shape [None, 64], then adding a line like inputs = tf.squeeze(inputs) would also fix your problem.
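A minimal sketch of the first suggestion, written against the TF 1.x API used in the question (the vocabulary size and the cell choice are placeholder assumptions):
import tensorflow as tf

MAX_TOKENS = 30
VOCAB_SIZE = 10000   # placeholder
HIDDEN_SIZE = 64

x = tf.placeholder(tf.int32, [None, MAX_TOKENS])    # token indices, no trailing dimension of 1
seqlen = tf.placeholder(tf.int32, [None])           # true lengths before padding

embedding = tf.Variable(tf.random_uniform([VOCAB_SIZE, HIDDEN_SIZE], -1, 1))
inputs = tf.nn.embedding_lookup(embedding, x)       # [None, MAX_TOKENS, HIDDEN_SIZE]
inputs = tf.unstack(inputs, axis=1)                 # list of MAX_TOKENS tensors of shape [None, HIDDEN_SIZE]

lstm_cell = tf.contrib.rnn.BasicLSTMCell(HIDDEN_SIZE)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, inputs, dtype=tf.float32, sequence_length=seqlen)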

TensorFlow bidirectional LSTM encoding of word embeddings

I have a word embedding matrix containing a vector for each word. I am trying to use TensorFlow to get the bidirectional LSTM encoding of each word given the embedding vectors. Unfortunately, I get the following error message:
ValueError: Shapes (1, 125) and () must have the same rank
Exception TypeError: TypeError("'NoneType' object is not callable",) in ignored
Here is the code I used:
# Declare max number of words in a sentence
self.max_len = 100
# Declare number of dimensions for word embedding vectors
self.wdims = 100
# Indices of words in the sentence
self.wrd_holder = tf.placeholder(tf.int32, [self.max_len])
# Embedding Matrix
wrd_lookup = tf.Variable(tf.truncated_normal([len(vocab)+3, self.wdims], stddev=1.0 / np.sqrt(self.wdims)))
# Declare forward and backward cells
forward = rnn_cell.LSTMCell(125, (self.wdims))
backward = rnn_cell.LSTMCell(125, (self.wdims))
# Perform lookup
wrd_embd = tf.nn.embedding_lookup(wrd_lookup, self.wrd_holder)
embd = tf.split(0, self.max_len, wrd_embd)
# run bidirectional LSTM
boutput = rnn.bidirectional_rnn(forward, backward, embd, dtype=tf.float32, sequence_length=self.max_len)
The sequence_length passed to the RNN must be a vector of length batch_size, not a scalar.
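A minimal sketch of that fix against the code above (here the batch size is 1, since tf.split produces a list of [1, wdims] tensors):
# sequence_length needs one entry per sequence in the batch, not the scalar self.max_len
seq_len_vec = tf.constant([self.max_len])  # batch size 1 -> a vector of length 1
boutput = rnn.bidirectional_rnn(forward, backward, embd,
                                dtype=tf.float32, sequence_length=seq_len_vec)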
