I'm quite new to TensorFlow and trying to do multitask classification with BERT (I have already done this with GloVe in another part of the project). My problem is with the concept of a placeholder in TensorFlow. I know that it is just a placeholder for some values that will be fed in later. Below is the part of my classification model that I have a problem with; I'll explain the exact problem underneath it.
def bert_emb_lookup(input_ids):
    # TODO: to be implemented;
    """
    input_ids is the input IDs, but a placeholder.
    """
    pass


class BertClassificationModel(object):
    def __init__(self, num_class, args):
        self.embedding_size = args.embedding_size
        self.num_layers = args.num_layers
        self.num_hidden = args.num_hidden

        self.input_ids = tf.placeholder(tf.int32, [None, args.max_document_len])
        self.Y1 = tf.placeholder(tf.int32, [None])
        self.Y2 = tf.placeholder(tf.int32, [None])
        self.dropout = tf.placeholder(tf.float64, [])

        self.input_len = tf.reduce_sum(tf.sign(self.input_ids), 1)

        with tf.name_scope("embedding"):
            self.input_emb = bert_emb_lookup(self.input_ids)
            ...
It was easy to get the word embeddings from GloVe; I first loaded the GloVe vectors and then simply used tf.nn.embedding_lookup(embeddings, self.input_ids) to fetch the embeddings.
So in the BERT classification model, I'm trying to do something similar by defining a function whose argument is input_ids, where I want to match the input IDs with their associated vocab entries (strings). Then I'll use an API (bert-as-service) that gives BERT embeddings for any given list of strings at the string/token level. The problem is that since self.input_ids is just a placeholder, it appears as a null object, so I can't look up the strings from it. Is there any workaround that would help me with this?
Thanks!
You cannot use bert-as-service as a tensor directly. So you have two options:
Use bert-as-service to look up the embeddings. You give the sentences as input and get a numpy array of embeddings as output. You then feed that numpy array of embeddings to a placeholder self.embeddings = tf.placeholder(tf.float32, [None, 768])
Then you use self.embeddings wherever you would have used tf.nn.embedding_lookup(embeddings, self.input_ids).
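A minimal sketch of this first option (it assumes a bert-serving-start server with a base 768-dimensional BERT model is already running, and the two example sentences and the dense layer are just stand-ins for your own data and model):

from bert_serving.client import BertClient
import tensorflow as tf

# assumes a bert-serving-start server is already running locally
bc = BertClient()

# sentence-level embeddings: a numpy array of shape (num_sentences, 768)
sentence_embeddings = bc.encode(["first document text", "second document text"])

# feed the precomputed embeddings through a placeholder instead of
# tf.nn.embedding_lookup(embeddings, self.input_ids)
embeddings_ph = tf.placeholder(tf.float32, [None, 768])
logits = tf.layers.dense(embeddings_ph, 2)  # stand-in for the rest of your model

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(logits, feed_dict={embeddings_ph: sentence_embeddings})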
The other option is probably overkill in this case, but it may give you some context. Here you do not use bert-as-service to get embeddings. Instead, you use the bert model graph directly. You would use BertModel from https://github.com/google-research/bert/blob/master/run_classifier.py to create a tensor which again can be used wherever you would use tf.nn.embedding_lookup(embeddings, self.input_ids).
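For the second option, a rough sketch using modeling.py from that repository (the config path and sequence length are placeholders of my own choosing, and restoring the pre-trained weights from a checkpoint is omitted here):

from bert import modeling  # modeling.py from the google-research/bert repo
import tensorflow as tf

bert_config = modeling.BertConfig.from_json_file("uncased_L-12_H-768_A-12/bert_config.json")

input_ids = tf.placeholder(tf.int32, [None, 128])    # plays the role of self.input_ids
input_mask = tf.cast(tf.sign(input_ids), tf.int32)   # 1 for real tokens, 0 for padding

bert = modeling.BertModel(
    config=bert_config,
    is_training=False,
    input_ids=input_ids,
    input_mask=input_mask)

# contextual embeddings, shape [batch_size, seq_len, hidden_size];
# usable wherever you would have used tf.nn.embedding_lookup(embeddings, self.input_ids)
input_emb = bert.get_sequence_output()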
I am trying to get the embeddings from pre-trained wav2vec2 models (e.g., from jonatasgrosman/wav2vec2-large-xlsr-53-german) using my own dataset.
My aim is to use these features for a downstream task (not specifically speech recognition). Namely, since the dataset is relatively small, I would train an SVM with these embeddings for the final classification.
So far I have tried this:
from transformers import Wav2Vec2Model, Wav2Vec2Processor

model_name = "facebook/wav2vec2-large-xlsr-53-german"
feature_extractor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

input_values = feature_extractor(train_dataset[:10]["speech"], return_tensors="pt", padding=True,
                                 feature_size=1, sampling_rate=16000).input_values
Then, I am not sure whether the embeddings here correspond to the sequence of last_hidden_states:
hidden_states = model(input_values).last_hidden_state
or to the sequence of features of the last conv layer of the model:
features_last_cnn_layer = model(input_values).extract_features
Also, is this the correct way to extract features from a pre-trained model?
How can one get embeddings from a specific layer?
P.S.: Posting here as the HuggingFace forum seems to be less active.
Just check the documentation:
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) – Sequence of extracted feature vectors of the last convolutional layer of the model.
The last_hidden_state vector represents so-called contextualized embeddings (i.e. every feature (CNN output) has a vector representation that is, to some extent, influenced by the other tokens of the sequence).
The extract_features vector represents the embeddings of your input (after the CNNs).
Also, is this the correct way to extract features from a pre-trained model?
Yes.
How can one get embeddings from a specific layer?
Set output_hidden_states=True:
o = model(input_values,output_hidden_states=True)
o.keys()
Output:
odict_keys(['last_hidden_state', 'extract_features', 'hidden_states'])
The hidden_states value contains the embeddings and the contextualized embeddings of each attention layer.
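For example, to pull out the embeddings of one specific transformer layer (the layer index below is arbitrary):

o = model(input_values, output_hidden_states=True)

# tuple with one entry per transformer layer (plus the initial feature projection);
# each entry has shape (batch_size, sequence_length, hidden_size)
all_layers = o.hidden_states

layer_9 = all_layers[9]   # contextualized embeddings after the 9th transformer layer
print(len(all_layers), layer_9.shape)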
P.S.: The jonatasgrosman/wav2vec2-large-xlsr-53-german model was trained with feat_extract_norm==layer. That means you should also pass an attention mask to the model:
model_name = "facebook/wav2vec2-large-xlsr-53-german"
feature_extractor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)
i = feature_extractor(train_dataset[:10]["speech"], return_tensors="pt", padding=True,
                      feature_size=1, sampling_rate=16000)
model(**i)
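Finally, since the stated goal is an SVM on top of these embeddings, one common (though not the only) choice is to mean-pool the contextualized embeddings over time to get one fixed-size vector per utterance. A sketch, where sklearn and train_labels are just illustrative assumptions, not part of the original setup:

import torch
from sklearn.svm import SVC

with torch.no_grad():
    out = model(**i)                                   # i from the snippet above
    # mean-pool over the time axis -> one fixed-size vector per utterance
    pooled = out.last_hidden_state.mean(dim=1).numpy()

clf = SVC()
clf.fit(pooled, train_labels[:10])                     # train_labels: your own labels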
Prior to passing my tokens through BERT, I would like to perform some processing on their embeddings (the result of the embedding lookup layer). The HuggingFace BERT TensorFlow implementation allows us to access the output of the embedding lookup using:
import tensorflow as tf
from transformers import BertConfig, BertTokenizer, TFBertModel
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input_ids = tf.constant(bert_tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]
attention_mask = tf.stack([tf.ones(shape=(len(sent),)) for sent in input_ids])
token_type_ids = tf.stack([tf.ones(shape=(len(sent),)) for sent in input_ids])
config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
bert_model = TFBertModel.from_pretrained('bert-base-uncased', config=config)
result = bert_model(inputs={'input_ids': input_ids,
'attention_mask': attention_mask,
'token_type_ids': token_type_ids})
inputs_embeds = result[-1][0] # output of embedding lookup
Subsequently, one can process inputs_embeds and then send this in as an input to the same model using:
inputs_embeds = process(inputs_embeds) # some processing on inputs_embeds done here (dimensions kept the same)
result = bert_model(inputs={'inputs_embeds': inputs_embeds,
'attention_mask': attention_mask,
'token_type_ids': token_type_ids})
output = result[0]
where output now contains the output of BERT for the modified input. However, this requires two full passes through BERT. Instead of running BERT all the way through just to perform embedding lookup, I would like to just get the output of the embedding lookup layer. Is this possible, and if so, how?
It is in fact incorrect to treat the first output result[-1][0] as the result of an embedding lookup. The raw embedding lookup is given by:
embeddings = bert_model.bert.get_input_embeddings()
word_embeddings = embeddings.word_embeddings
inputs_embeds = tf.gather(word_embeddings, input_ids)
while result[-1][0] gives the embedding lookup plus positional embeddings and token type embeddings. The above code does not require a full pass through BERT, and the result can be processed prior to feeding into the remaining layers of BERT.
EDIT: To get the result of adding positional and token type embeddings to an arbitrary inputs_embeds, one can use:
full_embeddings = embeddings(inputs=[None, None, token_type_ids, inputs_embeds])
Here, the call method for the embeddings object accepts a list which is fed into the _embeddings method. The first value is input_ids, the second position_ids, the third token_type_ids, and the fourth inputs_embeds. (See here for more details.) If you have multiple sentences in one input, you may need to set position_ids.
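Purely as an illustration (assuming the list interface described above; the exact call signature may differ between transformers versions), explicit position ids could be constructed and passed like this:

# hypothetical example: default 0..seq_len-1 position ids, one row per batch element
batch_size = tf.shape(inputs_embeds)[0]
seq_len = tf.shape(inputs_embeds)[1]
position_ids = tf.tile(tf.expand_dims(tf.range(seq_len), axis=0), tf.stack([batch_size, 1]))

full_embeddings = embeddings(inputs=[None, position_ids, token_type_ids, inputs_embeds])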
I'm trying to create document context vectors from sentence vectors via an LSTM using Keras (so each document consists of a sequence of sentence vectors).
My goal is to replicate the following blog post using keras: https://andriymulyar.com/blog/bert-document-classification
I have a (toy) tensor that looks like this: X = np.array(features).reshape(5, 200, 768). So: 5 documents, each a sequence of 200 sentence vectors, each sentence vector having 768 features.
So to get an embedding from my sentence vectors, I encoded my documents as one-hot-vectors to learn an LSTM:
y = [1,2,3,4,5] # 5 documents in toy-tensor
y = np.array(y)
yy = to_categorical(y)
yy = yy[0:5,1:6]
So far, my code looks like this:
inputs1=Input(shape=(200,768))
lstm1, states_h, states_c =LSTM(5,dropout=0.3,recurrent_dropout=0.2, return_state=True)(inputs1)
model1=Model(inputs1,lstm1)
model1.compile(loss='categorical_crossentropy',optimizer='rmsprop',metrics=['acc'])
model1.summary()
model1.fit(x=X,y=yy,batch_size=100,epochs=10,verbose=1,shuffle=True,validation_split=0.2)
When I print states_h I get a tensor of shape=(?, 5), and I don't really know how to access the vectors inside this tensor, which should represent my documents.
print(states_h)
Tensor("lstm_51/while/Exit_3:0", shape=(?, 5), dtype=float32)
Or am I doing something wrong? To my understanding, there should be 5 document vectors, e.g. doc1=[...]; ...; doc5=[...], so that I can reuse the document vectors for a classification task.
Well, printing a tensor shows exactly this: it's a tensor, it has that shape and that type.
If you want to see data, you need to feed data.
States are not weights, they are not persistent, they only exist with input data, just as any other model output.
You should create a model that outputs this information (yours doesn't) in order to grab it. You can have two models:
#this is the model you compile and train - exactly as you are already doing
training_model = Model(inputs1,lstm1)
#this is just for getting the states, nothing else, don't compile, don't train
state_getting_model = Model(inputs1, [lstm1, states_h, states_c])
(Don't worry, these two models will share the same weights and be updated together, even if you only train the training_model)
Now you can:
With eager mode off (and probably "on" too):
lstm_out, states_h_out, states_c_out = state_getting_model.predict(X)
print(states_h_out)
print(states_c_out)
With eager mode on:
lstm_out, states_h_out, states_c_out = state_getting_model(X)
print(states_h_out.numpy())
print(states_c_out.numpy())
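In either case, states_h_out is then a plain (num_documents, 5) array that you can reuse as document vectors for the downstream classification, for instance like this (sklearn and doc_labels are just illustrative, not part of the original setup):

from sklearn.linear_model import LogisticRegression

doc_vectors = states_h_out              # shape (5, 5): one vector per document
clf = LogisticRegression()
clf.fit(doc_vectors, doc_labels)        # doc_labels: your own document labels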
TF 1.x with tf.keras (Tested with TF 1.15)
Keras does operations using symbolic tensors. Therefore, print(states_h) won't give you anything unless you pass data to the placeholders states_h depends on (in this case inputs1). You can do that as follows.
import tensorflow.keras.backend as K
inputs1=Input(shape=(200,768))
lstm1, states_h, states_c =LSTM(5,dropout=0.3,recurrent_dropout=0.2, return_state=True)(inputs1)
model1=Model(inputs1,lstm1)
model1.compile(loss='categorical_crossentropy',optimizer='rmsprop',metrics=['acc'])
model1.summary()
model1.fit(x=X,y=yy,batch_size=100,epochs=10,verbose=1,shuffle=True,validation_split=0.2)
sess = K.get_session()
out = sess.run(states_h, feed_dict={inputs1:X})
Then out will be (batch_size, 5) sized output.
TF 2.x with tf.keras
The above code won't work as it is, and I still haven't found out how to get this to work with TF 2.0 (even though TF 2.0 will still produce a placeholder, according to the docs). I will edit my answer when I find out how to fix this for TF 2.x.
I've been recently trying to implement a multi-class classification LSTM architecture, based on this example: biLSTM example
After I changed
self.label = tf.placeholder(tf.int32, [None])
to
self.label = tf.placeholder(tf.int32, [None, self.n_class])
The model seems to train normally, yet I am having trouble with this step:
self.loss = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y_hat, labels=self.label))
# prediction
self.prediction = tf.argmax(tf.nn.softmax(y_hat), 1)
As, even though the model learns normally, the predictions do not seem to work for multiple variables. I was wondering how one should code the self.prediction object so that it emits a vector of predictions for individual instances?
Thank you very much.
I was wondering how one should code the self.prediction object so that it emits a vector of predictions for individual instances?
In general, tf.nn.softmax returns a vector of probabilities. You just can't see them because you are using tf.argmax, which returns the index of the largest value, so you only get one number. Just remove tf.argmax and you should be fine.
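Concretely, a sketch of what the prediction op could look like (predicted_class is just an extra name added for illustration, in case you still want hard labels somewhere):

# probability distribution over classes, shape [batch_size, n_class]
self.prediction = tf.nn.softmax(y_hat)

# optional: hard class labels, shape [batch_size]
self.predicted_class = tf.argmax(self.prediction, 1)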
I've recently reviewed an interesting implementation of convolutional text classification. However, all the TensorFlow code I've reviewed uses random (not pre-trained) embedding vectors like the following:
with tf.device('/cpu:0'), tf.name_scope("embedding"):
    W = tf.Variable(
        tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
        name="W")
    self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x)
    self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)
Does anybody know how to use the results of Word2vec or a GloVe pre-trained word embedding instead of a random one?
There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup().
1. Simply create W as a tf.constant() that takes embedding as its value:
W = tf.constant(embedding, name="W")
This is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples.
2. Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder():
W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),
trainable=False, name="W")
embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])
embedding_init = W.assign(embedding_placeholder)
# ...
sess = tf.Session()
sess.run(embedding_init, feed_dict={embedding_placeholder: embedding})
This avoids storing a copy of embedding in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the tf.Variable). Note that I've assumed that you want to hold the embedding matrix constant during training, so W is created with trainable=False.
3. If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following:
W = tf.Variable(...)
embedding_saver = tf.train.Saver({"name_of_variable_in_other_model": W})
# ...
sess = tf.Session()
embedding_saver.restore(sess, "checkpoint_filename.ckpt")
I use this method to load and share embeddings.
W = tf.get_variable(name="W", shape=embedding.shape, initializer=tf.constant_initializer(embedding), trainable=False)
2.0 Compatible Answer: There are many pre-trained embeddings developed by Google which have been open-sourced.
Some of them are Universal Sentence Encoder (USE), ELMo, BERT, etc., and it is very easy to reuse them in your code.
Code to reuse the pre-trained embedding Universal Sentence Encoder is shown below:
!pip install "tensorflow_hub>=0.6.0"
!pip install "tensorflow>=2.0.0"
import tensorflow as tf
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
embed = hub.KerasLayer(module_url)
embeddings = embed(["A long sentence.", "single-word",
                    "http://example.com"])
print(embeddings.shape)  # (3, 512)
For more information on the pre-trained embeddings developed and open-sourced by Google, refer to the TF Hub link.
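If you then want to plug these embeddings into a classifier, one possible pattern (a sketch, not the only way) is to use the hub layer directly inside a tf.keras model:

import tensorflow as tf
import tensorflow_hub as hub

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"

model = tf.keras.Sequential([
    hub.KerasLayer(module_url, input_shape=[], dtype=tf.string, trainable=False),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')   # e.g. two target classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()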
The answer of @mrry is not right, because it provokes the overwriting of the embedding weights each time the network is run, so if you are following a minibatch approach to train your network, you are overwriting the weights of the embeddings. So, in my point of view, the right way to use pre-trained embeddings is:
embeddings = tf.get_variable("embeddings", shape=[dim1, dim2], initializer=tf.constant_initializer(np.array(embeddings_matrix)))
With TensorFlow version 2 it's quite easy if you use the Embedding layer:
X = tf.keras.layers.Embedding(input_dim=vocab_size,
                              output_dim=300,
                              input_length=Length_of_input_sequences,
                              # wrap the pretrained matrix in a Constant initializer
                              embeddings_initializer=tf.keras.initializers.Constant(matrix_of_pretrained_weights)
                              )(ur_inp)
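For example, embedded in a minimal model (vocab_size, pretrained_matrix, and the pooling/output layers below are placeholders for your own setup):

import numpy as np
import tensorflow as tf

vocab_size, emb_dim, seq_len = 10000, 300, 50
pretrained_matrix = np.random.rand(vocab_size, emb_dim)   # stand-in for GloVe/word2vec weights

inp = tf.keras.Input(shape=(seq_len,), dtype=tf.int32)
x = tf.keras.layers.Embedding(
        input_dim=vocab_size,
        output_dim=emb_dim,
        embeddings_initializer=tf.keras.initializers.Constant(pretrained_matrix),
        trainable=False)(inp)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
out = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])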
I was also facing an embedding issue, so I wrote a detailed tutorial with a dataset.
Here I would like to add what I tried; you can also try this method:
import numpy as np
import tensorflow as tf

tf.reset_default_graph()

input_x = tf.placeholder(tf.int32, shape=[None, None])

# you have to edit the shape according to your embedding size
Word_embedding = tf.get_variable(name="W", shape=[400000, 100],
                                 initializer=tf.constant_initializer(np.array(word_embedding)),
                                 trainable=False)
embedding_lookup = tf.nn.embedding_lookup(Word_embedding, input_x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for ii in final_:  # final_: your list of token-id sequences
        print(sess.run(embedding_lookup, feed_dict={input_x: [ii]}))
Here is a working, detailed IPython tutorial example; if you want to understand it from scratch, take a look.