How to calculate the mean of words' GloVe embeddings in a sentence - Python

I have downloaded the pre-trained GloVe embedding matrix and used it in a Keras Embedding layer. However, I need a sentence embedding for another task.
I want to calculate the mean of all the word embeddings in each sentence.
What is the most efficient way to do that, given that there are about 25,000 sentences?
Also, I don't want to use a Lambda layer in Keras to compute the mean.

The best way to do this is to use a GlobalAveragePooling1D layer. It receives the token embeddings produced by the Embedding layer, with shape (n_sentence, n_token, emb_dim), and averages over the token dimension, so the result has shape (n_sentence, emb_dim).
Here is a code example:
import numpy as np
from tensorflow.keras.layers import Input, Embedding, GlobalAveragePooling1D
from tensorflow.keras.models import Model

embedding_dim = 128
vocab_size = 100
sentence_len = 20

# stand-ins for the pre-trained GloVe matrix and the tokenized sentences
embedding_matrix = np.random.uniform(-1, 1, (vocab_size, embedding_dim))
test_sentences = np.random.randint(0, vocab_size, (3, sentence_len))

inp = Input((sentence_len,))
embedder = Embedding(vocab_size, embedding_dim,
                     trainable=False, weights=[embedding_matrix])(inp)
avg = GlobalAveragePooling1D()(embedder)

model = Model(inp, avg)
model.summary()
model(test_sentences)  # the mean of all the word embeddings inside each sentence
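If you don't want to go through Keras at all, you can also compute the means directly with NumPy by indexing the embedding matrix with the token IDs. This is a minimal sketch reusing the embedding_matrix and test_sentences variables above; it assumes ID 0 is a padding token that should be excluded from the average.

# Look up all token embeddings at once: shape (n_sentence, n_token, emb_dim)
token_embs = embedding_matrix[test_sentences]

# Mask out padding tokens (assumed to be ID 0) before averaging
mask = (test_sentences != 0)[..., None]        # (n_sentence, n_token, 1)
sums = (token_embs * mask).sum(axis=1)         # (n_sentence, emb_dim)
counts = np.clip(mask.sum(axis=1), 1, None)    # avoid division by zero
sentence_embeddings = sums / counts            # (n_sentence, emb_dim)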

Related

How to resize the embedding vectors from HuggingFace BERT

I am trying to tokenize the sentences with the tokenizer and then mean pool over the attention mask to get a vector for each sentence. However, the default embedding size is 768 and I want to reduce it to 200 instead, but it fails. Below is my code.
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from the HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')
model.resize_token_embeddings(200)

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
Error:
2193 # Note [embedding_renorm set_grad_enabled]
2194 # XXX: equivalent to
2195 # with torch.no_grad():
2196 # torch.embedding_renorm_
2197 # remove once script supports set_grad_enabled
2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
My expected output when using:
print(len(sentence_embeddings[0]))
is:
200
I think you've misunderstood resize_token_embeddings. According to the docs, it
Resizes the input token embeddings matrix of the model if new_num_tokens != config.vocab_size.
Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.
In other words, it is used when you add or remove tokens from the vocabulary; "resizing" here refers to resizing the token-to-embedding dictionary.
I guess what you want to do is change the hidden_size of the BERT model. To do that you would have to change hidden_size in config.json, which re-initializes all weights, and you would have to re-train everything, i.e. it is very computationally expensive.
I think your best option is to add a linear layer of dimension (768 x 200) on top of BertModel and fine-tune it on your downstream task, as sketched below.
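A minimal sketch of that idea (not from the original answer): a small wrapper module that adds a trainable 768 -> 200 projection on top of BERT. It reuses the mean_pooling helper defined in the question.

import torch
import torch.nn as nn
from transformers import AutoModel

class BertWithProjection(nn.Module):
    """BERT followed by a trainable 768 -> 200 linear projection."""
    def __init__(self, model_name='sentence-transformers/bert-base-nli-mean-tokens', out_dim=200):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.projection = nn.Linear(self.bert.config.hidden_size, out_dim)

    def forward(self, input_ids, attention_mask):
        output = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = mean_pooling(output, attention_mask)  # (batch, 768), helper from the question
        return self.projection(pooled)                 # (batch, 200)

# Usage with the encoded_input from the question:
# model = BertWithProjection()
# sentence_embeddings = model(encoded_input['input_ids'], encoded_input['attention_mask'])
# print(len(sentence_embeddings[0]))  # -> 200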

How to get the word for an embedding vector from a pretrained Hugging Face model?

I use Hugging Face's pretrained BERT model to get a sentence embedding by pooling (i.e. tokenize the sentence and average the embedding vectors of all its tokens). My code is as follows. I want to get the word that the pooled vector refers to.
import torch
from transformers import BertModel, BertTokenizer
model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name)
# Load the model
model = BertModel.from_pretrained(model_name)
# input sentence
input_text = "Here is some text to encode"
# from tokenizer to token_id
input_ids = tokenizer.encode(input_text, add_special_tokens=True)
# input_ids: [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102]
input_ids = torch.tensor([input_ids])
# get the tensors
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # model outputs are now tuples
# sentence pooling
last_hidden_states = last_hidden_states.mean(1)
print(last_hidden_states)
# last_hidden_states.shape = [1,768]
After this, I want to get the word for this encoded vector ([1, 768]).
Theoretically, I should compare this vector against the embedding matrix (of size [dictionary_dimension, embedding_dimension]) and then use the result as an index into the dictionary.
How can I get the embedding_matrix of the embedding layer in Hugging Face, please?
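For reference, a minimal sketch of one way to access the input embedding matrix (this is not an answer from the thread; it uses the standard get_input_embeddings() accessor and a simple nearest-neighbour lookup against the pooled vector, which will not generally map a pooled sentence vector to a single meaningful word):

# The input embedding matrix: shape (vocab_size, hidden_size), e.g. (30522, 768)
embedding_matrix = model.get_input_embeddings().weight  # equivalently: model.embeddings.word_embeddings.weight

# Find the vocabulary entry whose embedding is most similar to the pooled vector
with torch.no_grad():
    scores = torch.matmul(last_hidden_states, embedding_matrix.T)  # (1, vocab_size)
    best_id = scores.argmax(dim=-1).item()
print(tokenizer.convert_ids_to_tokens([best_id]))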

Getting embeddings from wav2vec2 models in HuggingFace

I am trying to get the embeddings from pre-trained wav2vec2 models (e.g., from jonatasgrosman/wav2vec2-large-xlsr-53-german) using my own dataset.
My aim is to use these features for a downstream task (not specifically speech recognition). Namely, since the dataset is relatively small, I would train an SVM with these embeddings for the final classification.
So far I have tried this:
model_name = "facebook/wav2vec2-large-xlsr-53-german"
feature_extractor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)
input_values = feature_extractor(train_dataset[:10]["speech"], return_tensors="pt", padding=True,
feature_size=1, sampling_rate=16000 ).input_values
Then, I am not sure whether the embeddings here correspond to the sequence of last_hidden_states:
hidden_states = model(input_values).last_hidden_state
or to the sequence of features of the last conv layer of the model:
features_last_cnn_layer = model(input_values).extract_features
Also, is this the correct way to extract features from a pre-trained model?
How one can get embeddings from a specific layer?
P.S.: Posting here as the HuggingFace forum seems to be less active.
Just check the documentation:
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) – Sequence of extracted feature vectors of the last convolutional layer of the model.
The last_hidden_state vector represents the so-called contextualized embeddings (i.e. every feature (CNN output) has a vector representation that is, to some extent, influenced by the other tokens of the sequence).
The extract_features vector represents the embeddings of your input (after the CNNs).
Also, is this the correct way to extract features from a pre-trained model?
Yes.
How one can get embeddings from a specific layer?
Set output_hidden_states=True:
o = model(input_values,output_hidden_states=True)
o.keys()
Output:
odict_keys(['last_hidden_state', 'extract_features', 'hidden_states'])
The hidden_states value contains the initial embeddings and the contextualized embeddings of each attention layer, so you can index into it to pick a specific layer; see the sketch below.
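A minimal sketch of picking one specific layer from that tuple, reusing the output o from above (the layer index 9 is an arbitrary choice, not from the original answer):

# hidden_states is a tuple with one tensor per transformer layer (plus the initial embeddings),
# each of shape (batch_size, sequence_length, hidden_size)
print(len(o.hidden_states))           # number of transformer layers + 1

layer_9 = o.hidden_states[9]          # output of the 9th transformer layer (arbitrary choice)
pooled_layer_9 = layer_9.mean(dim=1)  # one vector per audio clip: (batch_size, hidden_size)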
P.S.: The jonatasgrosman/wav2vec2-large-xlsr-53-german model was trained with feat_extract_norm==layer. That means you should also pass an attention mask to the model:
model_name = "facebook/wav2vec2-large-xlsr-53-german"
feature_extractor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

i = feature_extractor(train_dataset[:10]["speech"], return_tensors="pt", padding=True,
                      feature_size=1, sampling_rate=16000)
model(**i)
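For the SVM mentioned in the question, one common approach (a sketch, not part of the original answer) is to pool the hidden states over time so that each clip becomes a single fixed-size feature vector:

import torch

with torch.no_grad():
    out = model(**i)

# Simple mean pooling over the time dimension gives one fixed-size vector per clip.
# With padded batches you may want to mask out padded frames before averaging.
clip_embeddings = out.last_hidden_state.mean(dim=1)  # (batch_size, hidden_size)

# clip_embeddings.numpy() can then be used as the feature matrix for an sklearn SVM.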

Using a pretrained gensim Word2Vec embedding in Keras

I have trained a Word2Vec model in gensim. In Keras, I want to use it to build the matrix of each sentence from those word embeddings. Storing the matrices of all the sentences explicitly is very space and memory inefficient, so I want to create an Embedding layer in Keras instead, so that it can be used in further layers (LSTM). Can you tell me in detail how to do this?
PS: It is different from other questions because I am using gensim for the Word2Vec training instead of Keras.
Let's say you have the following data that you need to encode:
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']
You must then tokenize it using the Tokenizer from Keras and find the vocab_size:
from keras.preprocessing.text import Tokenizer

t = Tokenizer()
t.fit_on_texts(docs)
vocab_size = len(t.word_index) + 1
You can then encode it into sequences:
encoded_docs = t.texts_to_sequences(docs)
print(encoded_docs)
You can then pad the sequences so that they all have a fixed length:
from keras.preprocessing.sequence import pad_sequences

max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
Then use the Word2Vec model to build the embedding matrix:
from numpy import asarray, zeros

# load embedding as a dict
def load_embedding(filename):
    # load embedding into memory, skip first line
    file = open(filename, 'r')
    lines = file.readlines()[1:]
    file.close()
    # create a map of words to vectors
    embedding = dict()
    for line in lines:
        parts = line.split()
        # key is string word, value is numpy array for vector
        embedding[parts[0]] = asarray(parts[1:], dtype='float32')
    return embedding

# create a weight matrix for the Embedding layer from a loaded embedding
def get_weight_matrix(embedding, vocab):
    # total vocabulary size plus 0 for unknown words
    vocab_size = len(vocab) + 1
    # define weight matrix dimensions with all 0
    weight_matrix = zeros((vocab_size, 100))
    # step through the vocab, storing vectors using the Tokenizer's integer mapping
    for word, i in vocab.items():
        # words missing from the embedding keep their all-zero row
        if word in embedding:
            weight_matrix[i] = embedding[word]
    return weight_matrix

# load embedding from file
raw_embedding = load_embedding('embedding_word2vec.txt')
# get vectors in the right order
embedding_vectors = get_weight_matrix(raw_embedding, t.word_index)
Once you have the embedding matrix you can use it in an Embedding layer like this:
from keras.layers import Embedding

e = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=4, trainable=False)
This layer can then be used when building a model:
from keras.models import Sequential
from keras.layers import Flatten, Dense

model = Sequential()
e = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=4, trainable=False)
model.add(e)
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
All the code is adapted from this awesome blog post; follow it to learn more about embeddings using GloVe.
For using Word2Vec, see this post.
With the newer Gensim versions this is pretty easy:
w2v_model.wv.get_keras_embedding(train_embeddings=False)
There you have your Keras embedding layer.
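For example, a minimal sketch of wiring that layer into a model (assuming a Gensim 3.x Word2Vec model called w2v_model and sequences padded to max_length; note that get_keras_embedding was removed in Gensim 4, where you would build the Embedding layer from w2v_model.wv.vectors instead):

from keras.layers import Input, GlobalAveragePooling1D, Dense
from keras.models import Model

max_length = 4  # assumed padded sequence length, as in the example above

emb_layer = w2v_model.wv.get_keras_embedding(train_embeddings=False)  # frozen word2vec weights

inp = Input(shape=(max_length,), dtype='int32')
x = emb_layer(inp)               # (batch, max_length, vector_size)
x = GlobalAveragePooling1D()(x)  # average of the word vectors in each sequence
out = Dense(1, activation='sigmoid')(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])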
My code for a gensim-trained w2v model. Assume all the words in the w2v model are now in a list variable called all_words.
from keras.preprocessing.text import Tokenizer
from keras.layers import Embedding
import gensim
import pandas as pd
import numpy as np
from itertools import chain

w2v = gensim.models.Word2Vec.load("models/w2v.model")
vocab = w2v.wv.vocab
t = Tokenizer()

vocab_size = len(all_words) + 1
t.fit_on_texts(all_words)

def get_weight_matrix():
    # define weight matrix dimensions with all 0
    weight_matrix = np.zeros((vocab_size, w2v.vector_size))
    # store each word's vector at row i + 1 (row 0 is reserved for padding)
    for i in range(len(all_words)):
        weight_matrix[i + 1] = w2v.wv[all_words[i]]
    return weight_matrix

embedding_vectors = get_weight_matrix()
emb_layer = Embedding(vocab_size, output_dim=w2v.vector_size, weights=[embedding_vectors],
                      input_length=FIXED_LENGTH, trainable=False)

Train only some word embeddings (Keras)

In my model, I use GloVe pre-trained embeddings. I wish to keep them non-trainable in order to decrease the number of model parameters and avoid overfitting. However, I have a special symbol whose embedding I do want to train.
Using the provided Embedding Layer, I can only use the parameter 'trainable' to set the trainability of all embeddings in the following way:
embedding_layer = Embedding(voc_size,
                            emb_dim,
                            weights=[embedding_matrix],
                            input_length=MAX_LEN,
                            trainable=False)
Is there a Keras-level solution to training only a subset of embeddings?
Please note:
There is not enough data to generate new embeddings for all words.
These answers only relate to native TensorFlow.
Found a nice workaround, inspired by Keith's two-embedding-layers answer.
Main idea:
Assign the special tokens (and the OOV token) the highest IDs. Generate a 'sentence' containing only the special tokens, 0-padded elsewhere. Then apply the non-trainable embeddings to the 'normal' sentence, and the trainable embeddings to the special tokens. Lastly, add both.
Works fine for me.
# Normal embs - '+2' for empty token and OOV token
embedding_matrix = np.zeros((vocab_len + 2, emb_dim))
# Special embs
special_embedding_matrix = np.zeros((special_tokens_len + 2, emb_dim))
# Here we may apply pre-trained embeddings to embedding_matrix

embedding_layer = Embedding(vocab_len + 2,
                            emb_dim,
                            mask_zero=True,
                            weights=[embedding_matrix],
                            input_length=MAX_SENT_LEN,
                            trainable=False)

special_embedding_layer = Embedding(special_tokens_len + 2,
                                    emb_dim,
                                    mask_zero=True,
                                    weights=[special_embedding_matrix],
                                    input_length=MAX_SENT_LEN,
                                    trainable=True)

valid_words = vocab_len - special_tokens_len

sentence_input = Input(shape=(MAX_SENT_LEN,), dtype='int32')

# Create a vector of special token IDs, e.g.: [0,0,1,0,3,0,0]
special_tokens_input = Lambda(lambda x: x - valid_words)(sentence_input)
special_tokens_input = Activation('relu')(special_tokens_input)

# Apply both 'normal' embeddings and special-token embeddings
embedded_sequences = embedding_layer(sentence_input)
embedded_special = special_embedding_layer(special_tokens_input)

# Add the matrices
embedded_sequences = Add()([embedded_sequences, embedded_special])
I haven't found a nice solution like a mask for the Embedding layer, but here's what I've been meaning to try:
Two embedding layers - one trainable and one not.
The non-trainable one has all the GloVe embeddings for in-vocab words and zero vectors for the others.
The trainable one only maps the OOV words and special symbols.
The output of these two layers is added (I was thinking of this like ResNet).
The Conv/LSTM/etc. below the embedding is unchanged.
That would get you a solution with a small number of free parameters allocated to those embeddings.
