I am trying to implement the type of character-level embeddings described in this paper in Keras. The character embeddings are calculated using a bidirectional LSTM.
To recreate this, I've first created a matrix containing, for each word, the indexes of the characters making up the word:
char2ind = {char: index for index, char in enumerate(chars)}
max_word_len = max([len(word) for sentence in sentences for word in sentence])
X_char = []
for sentence in X:
    for word in sentence:
        word_chars = []
        for character in word:
            word_chars.append(char2ind[character])
        X_char.append(word_chars)
X_char = sequence.pad_sequences(X_char, maxlen=max_word_len)
I then define a BiLSTM model with an embedding layer for the word-character matrix. I assume the input_dimension will have to be equal to the number of characters. I want a size of 64 for my character embeddings, so I set the hidden size of the BiLSTM to 32:
hidden_size = 32  # 32 units per direction -> 64-dimensional character representation
char_lstm = Sequential()
char_lstm.add(Embedding(len(char2ind) + 1, 64))
char_lstm.add(Bidirectional(LSTM(hidden_size, return_sequences=True)))
And this is where I get confused. How can I retrieve the embeddings from the model? I'm guessing I would have to compile the model, fit it, and then retrieve the weights to get the embeddings, but what parameters should I use to fit it?
Additional details:
This is for an NER task, so the dataset could technically be anything in the word-label format, although I am specifically working with the WikiGold CoNLL corpus available here: https://github.com/pritishuplavikar/Resume-NER/blob/master/wikigold.conll.txt
The expected output from the network are the labels (I-MISC, O, I-PER...)
I expect the dataset to be large enough to train character embeddings directly from it. All words are coded as the indexes of their constituent characters; the alphabet size is roughly 200 characters. The words are padded / cut to 20 characters. There are around 30,000 different words in the dataset.
I hope to be able to learn embeddings for each character based on the information from the different words. Then, as in the paper, I would concatenate the character embeddings with the word's GloVe embedding before feeding into a Bi-LSTM network with a final CRF layer.
I would also like to be able to save the embeddings so I can reuse them for other similar NLP tasks.
Generally speaking, Keras' approach to building models (even seemingly complex ones) is dead simple. For example, the kind of model you want to build would simply look like this (note this is for a binary classification problem):
model = Sequential()
model.add(Embedding(max_features, out_dims, input_length=maxlen))
model.add(Bidirectional(LSTM(32)))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
This is no different from a plain vanilla NN, with the exception of having the Embedding and Bidirectional layers in place of Dense layers. This is one of the things that makes Keras amazing.
Usually it's helpful to look for a working example (Keras has loads) that does more or less the same thing you are trying to do. In this case you could first look at this model and then "reverse engineer" its workings to answer your questions. Usually things come down to formatting the data in the right way, where a working example model works wonders, as you can carefully investigate the data format it's using.
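To answer the "how do I retrieve the embeddings" part directly: once you compile and fit the model on your NER labels, the learned character embeddings are simply the weights of the Embedding layer. A minimal sketch, assuming the char_lstm model from the question has been trained end-to-end:
import numpy as np
# The first layer of char_lstm is the Embedding layer; its single weight matrix
# has shape (len(char2ind) + 1, 64): one 64-dimensional vector per character index.
char_embeddings = char_lstm.layers[0].get_weights()[0]
# Save them so they can be reused for other NLP tasks (file name is just an example).
np.save('char_embeddings.npy', char_embeddings)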
I found this in an article here:
Decoder takes the hidden state of the last Encoder RNN cell as the initial state of its first RNN cell, along with the start-of-sequence token as the initial input, to produce an output sequence. We use Teacher Forcing for faster and more efficient training of the decoder. Teacher forcing is a method for quickly and efficiently training recurrent neural network models that uses the ground truth from a prior time step as input. In this method, the right answer is given during training so that the model trains quickly and efficiently.
Now, I am unable to use the Teacher Forcing Method for the decoder. Can someone help me out?
What I have done so far
Now, I am taking many English sentences and tokenizing them, then keeping them in a variable X. The shape of X is [7000, 5], which means that there are 7000 English sentences, each of which has 5 words in it. I am doing the same with the German sentences and keeping them in a variable Y, which has a shape of [7000, 10, 1], which means that there are 7000 German sentences, each having 10 words in it (I am not one-hot encoding them, as I am using sparse categorical cross-entropy as the loss function).
Then, I defined my model which is this:
Model = Sequential([
    Embedding(english_vocab_size, 256, input_length=english_max_len, mask_zero=True),
    LSTM(256, activation='relu'),
    RepeatVector(german_max_len),
    LSTM(256, activation='relu', return_sequences=True),
    Dense(german_vocab_size, activation='softmax')
])
english_vocab_size and german_vocab_size are the total numbers of English and German words present in the English and German vocabularies. english_max_len and german_max_len are the number of words each English and German sentence has in it (i.e. the maximum sentence lengths).
Now, from here what should I do to use the Teacher Forcing Technique?
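For what it's worth, here is a minimal sketch of how teacher forcing is usually wired up with the Keras functional API: the decoder receives the target sequence shifted right by one position as an extra input, and is trained to predict the same sequence shifted left by one. The layer sizes mirror the question; the shifted arrays Y_in/Y_out and the start token are assumptions you would have to create yourself.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model
# Encoder: encode the English sentence and keep its final states.
encoder_inputs = Input(shape=(english_max_len,))
enc_emb = Embedding(english_vocab_size, 256, mask_zero=True)(encoder_inputs)
_, state_h, state_c = LSTM(256, return_state=True)(enc_emb)
# Decoder: at training time it sees the ground-truth German sentence shifted
# right by one (starting with a start token) -- that shift is the teacher forcing.
decoder_inputs = Input(shape=(german_max_len,))
dec_emb = Embedding(german_vocab_size, 256, mask_zero=True)(decoder_inputs)
dec_out = LSTM(256, return_sequences=True)(dec_emb, initial_state=[state_h, state_c])
outputs = Dense(german_vocab_size, activation='softmax')(dec_out)
seq2seq = Model([encoder_inputs, decoder_inputs], outputs)
seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# seq2seq.fit([X, Y_in], Y_out, ...)  # Y_in: targets shifted right, Y_out: targets shifted left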
I have a movie review data set which has two columns: Review (sentences) and Sentiment (1 or 0).
I want to create a classification model using word2vec for the embedding and a CNN for the classification.
I've looked for tutorials on YouTube, but all they do is create vectors for every word and show me similar words, like this:
model = gensim.models.Word2Vec(cleaned_dataset, min_count=2, size=100, window=5)
words = model.wv.vocab
similar = model.wv.most_similar("bad")
I already have my dependent variable (y), which is my 'Sentiment' column; all I need is the independent variable (X), which I can pass on to my CNN model.
Before using word2vec I used the Bag of Words (BOW) model, which generated a sparse matrix that was my independent variable (X). How can I achieve something similar using word2vec?
Kindly correct me if I'm doing something wrong.
To get the word vector, you have to do this:
model['word_that_you_want']
You may also want to handle the KeyError that could arise if you don't find that given word in your model. You also might want to read about what an embedding layer is, which is usually used as the first layer of the neural network (for NLP generally) and is basically a lookup mapping of a word to its corresponding word vector.
To get the word vectors for an entire sentence, you need to first initialize a numpy array of zeros to the dimensions you want.
You might need other variables such as the length of the longest sentence so that you can pad all sentences to that length. The documentation of the pad_sequences method for Keras is here.
A simple example of getting a sentence of word vectors is:
import numpy as np
embedding_matrix = np.zeros((vocab_len, size_of_your_word_vector))
Then iterate over the words and their indexes, and fill in a row of embedding_matrix whenever you find a word vector in your model.
I use this resource, which has a lot of examples, and I have referenced some of the code there (which I have also used myself sometimes):
embedding_matrix = np.zeros((vocab_length, 100))
for word, index in word_tokenizer.word_index.items():
    try:
        embedding_vector = model[word]  # look the word up in your w2v model
    except KeyError:
        continue  # word not in the word2vec vocabulary; leave its row as zeros
    embedding_matrix[index] = embedding_vector
And in your model (I'm assuming TensorFlow with Keras):
embedding_layer = Embedding(vocab_length, 100, weights=[embedding_matrix], input_length=length_long_sentence, trainable=False)
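For completeness, a rough sketch of how that embedding layer could sit in front of a small CNN classifier; the filter count and kernel size here are illustrative assumptions, not tuned values, and X_padded is assumed to hold your reviews as padded index sequences:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense
cnn = Sequential()
cnn.add(embedding_layer)                     # the frozen word2vec embeddings from above
cnn.add(Conv1D(128, 5, activation='relu'))   # convolve over word positions
cnn.add(GlobalMaxPooling1D())                # collapse the sequence dimension
cnn.add(Dense(1, activation='sigmoid'))      # binary sentiment output
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# cnn.fit(X_padded, y, ...)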
I hope this helps.
Word2Vec doesn't inherently create vectors for a text (set of words) – just individual words.
But, sometimes a not-so-bad vector for a multi-word text is the average of all its word-vectors.
If list_of_words is a list of the words in your text, and all the words are in the Word2Vec model, a simple way to get the average of those words' vectors is:
avg_vector_of_words = model.wv[list_of_words].mean(axis=0)
(If some words aren't present, you'd need to filter them before attempting this to avoid KeyErrors. If you wanted to leave out some words, or use unit-normed word-vectors, or unit-normalize the final vector, you'd need more code.)
Then avg_vector_of_words is a small, dense/'embedded' feature vector for the list_of_words text.
You could pass these vectors, one per text, to another downstream classifier, like your CNN, exactly analogously to how you were previously using sparse BOW vectors.
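If some of the words might be missing from the model, a quick filter along the lines hinted at above could look like this (a sketch; the zero-vector fallback for an all-unknown text is my own assumption):
import numpy as np
known_words = [w for w in list_of_words if w in model.wv]  # drop out-of-vocabulary words
if known_words:
    text_vector = model.wv[known_words].mean(axis=0)
else:
    text_vector = np.zeros(model.vector_size)  # fallback when no word is known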
My problem is mainly theoretical. I would like to use an LSTM model to classify the sentiment of sentences in this way: 1 = positive, 0 = neutral and -1 = negative. I have a bag of words (BOW) that I would like to use to train the model. The BOW is a dataframe with two columns, like this:
Text          | Sentiment
hello dear... | 1
I hate you... | -1
...           | ...
According to the example proposed by Keras, I should transform the sentences of the 'Text' column of my BOW into numerical vectors where each number represents a word of the vocabulary.
Now my question is: how do I turn my sentences into vectors of numbers, and what are the best techniques for doing it?
For now my code is this; what am I doing wrong?
model = Sequential()
model.add(LSTM(units=50))
model.add(Dense(2, activation='softmax')) # 2 because I have 3 classes
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
X_train, X_test, y_train, y_test = train_test_split(df['Text'], df['Sentiment'], test_size=0.3, random_state=1)  # 'Sentiment' is capitalized for the other dataframe
clf = model.fit(X_train, y_train)
predicted = clf.predict(X_test)
print(predicted)
First of all, as Marat commented, you are not using the term Bag of Words (BOW) correctly here. What you are calling your BOW is simply a labeled dataset of sentences. While there are a lot of questions here, I will try to answer the first one: how to convert your sentences into vectors that can be used in an LSTM model.
The most basic way to do this is to create one-hot-encoding vectors for each word in each sentence. To create these, you first need to iterate through your dataset and assign a unique index to each word. So for example:
vocab = {
    'hello': 0,
    'dear': 1,
    ...
    'hate': 999
}
Once you have created this dictionary, you can then go through each sentence and assign each word in each sentence a vector of length len(vocab), with zeros at every index except for a 1 at the index corresponding to that word. For example, using the vocab above, dear would look like:
[0,1,0,0,0,...,0,0].
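As a brief sketch of the indexing and one-hot steps just described (assuming sentences is a list of already-tokenized sentences):
import numpy as np
# Assign a unique index to each word in the dataset.
vocab = {}
for sentence in sentences:
    for word in sentence:
        vocab.setdefault(word, len(vocab))
def one_hot(word):
    # A len(vocab) vector that is zero everywhere except at the word's index.
    vec = np.zeros(len(vocab))
    vec[vocab[word]] = 1.0
    return vec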
The pro of one-hot-encoding vectors is that they are easy to create and pretty simple to work with. The downside is that you can pretty quickly end up working with very high-dimensional vectors if you have a large vocabulary. That's where word embeddings come into play, and honestly they are the superior route compared to one-hot-encoding vectors. However, they are a bit more complex, and it is harder to understand what exactly they are doing behind the scenes. You can read more about them here if you want: https://towardsdatascience.com/what-the-heck-is-word-embedding-b30f67f01c81
You should first create an index of your vocabulary, i.e. assign an index to each token in your vocabulary, and then transform the text to a numeric form by replacing each token with its corresponding index. Your model should then be:
model = Sequential()
model.add(Embedding(len(vocab), 64, input_length=sent_len))
model.add(LSTM(units=50))
model.add(Dense(3, activation='softmax'))
Note that you need to pad your sentences to a common length before feeding them to the network. You can use np.pad to do so.
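A small sketch of the index-and-pad step, staying with np.pad as suggested (texts is assumed to be a list of tokenized sentences, each at most sent_len tokens long):
import numpy as np
# Replace each token by its vocabulary index
# (in practice you would usually start word indices at 1 so that 0 can serve as the padding value).
X_indexed = [[vocab[token] for token in text] for text in texts]
# Zero-pad every sequence at the end up to the common length sent_len.
X_padded = np.array([np.pad(seq, (0, sent_len - len(seq))) for seq in X_indexed])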
Another alternative is to use pre-trained word embeddings; you can download them from fastText.
P.S. You are misusing the term BOW; however, BOW is a good baseline model you can use for sentiment analysis.
I have data that has various conversations between two people. Each sentence has some type of classification. I am attempting to use an NLP net to classify each sentence of the conversation. I tried a convolutional net and got decent results (not groundbreaking though). I figured that since this is a back-and-forth conversation, an LSTM net may produce better results, because what was previously said may have a large impact on what follows.
If I follow the structure above, I would assume that I am doing many-to-many classification. My data looks like this:
X_train = [[sentence 1],
           [sentence 2],
           [sentence 3]]

Y_train = [[0],
           [1],
           [0]]
The data has been processed using word2vec. I then design my network as follows:
model = Sequential()
model.add(Embedding(len(vocabulary), embedding_dim,
                    input_length=X_train.shape[1]))
model.add(LSTM(88))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, Y_train, verbose=2, nb_epoch=3, batch_size=15)
I assume that this setup will feed one batch of sentences in at a time. However, if shuffle is not set to False in model.fit, it will receive shuffled batches, so why is an LSTM net even useful in this case? From my research on the subject, to achieve a many-to-many structure one would need to change the LSTM layer to
model.add(LSTM(88, return_sequences=True))
and the output layer would need to be...
model.add(TimeDistributed(Dense(1,activation='sigmoid')))
When switching to this structure I get an error on the input size. I'm unsure of how to reformat the data to meet this requirement, and also how to edit the embedding layer to receive the new data format.
Any input would be greatly appreciated. Or if you have any suggestions on a better method, I am more than happy to hear them!
Your first attempt was good. The shuffling takes place between sentences; it only shuffles the training samples so that they don't always come in the same order. The words inside sentences are not shuffled.
Or maybe I didn't understand the question correctly?
EDIT:
After a better understanding of the question, here is my proposal.
Data preparation: you slice your corpus into blocks of n sentences (they can overlap).
You should then have a shape like (number_blocks_of_sentences, n, number_of_words_per_sentence), so basically a list of 2D arrays which contain blocks of n sentences. n shouldn't be too big, because an LSTM can't handle a huge number of elements in a sequence during training (vanishing gradient).
Your targets should be an array of shape (number_blocks_of_sentences, n, 1), so also a list of 2D arrays containing the class of each sentence in your block of sentences.
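A rough sketch of that data preparation, assuming sentences is a 2D array of word indices of shape (num_sentences, n_words) and labels is the matching 1D array of classes; the sliding window with step 1 is my own choice to get overlapping blocks:
import numpy as np
def make_blocks(sentences, labels, n):
    X_blocks, Y_blocks = [], []
    for start in range(len(sentences) - n + 1):
        X_blocks.append(sentences[start:start + n])             # block of n consecutive sentences
        Y_blocks.append(labels[start:start + n].reshape(n, 1))  # matching class for each sentence
    # Shapes: (number_blocks_of_sentences, n, n_words) and (number_blocks_of_sentences, n, 1)
    return np.array(X_blocks), np.array(Y_blocks)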
Model:
n_sentences = X_train.shape[1]  # number of sentences in a sample (n)
n_words = X_train.shape[2]      # number of words in a sentence
model = Sequential()
# Reshape the input because Embedding only accepts shape (batch_size, input_length),
# so we just flatten the list of sentences into one long list of words
model.add(Reshape((n_sentences * n_words,), input_shape=(n_sentences, n_words)))
# Embedding layer - output shape will be (batch_size, n_sentences * n_words, embedding_dim),
# so each sample in the batch is a big 2D array of embedded words
model.add(Embedding(len(vocabulary), embedding_dim, input_length=n_sentences * n_words))
# Recreate the sentence-shaped array
model.add(Reshape((n_sentences, n_words, embedding_dim)))
# Encode each sentence - output shape is (batch_size, n_sentences, 88)
model.add(TimeDistributed(LSTM(88)))
# Go over sentences and output hidden states which contain info about previous sentences - output shape is (batch_size, n_sentences, hidden_dim)
model.add(LSTM(hidden_dim, return_sequences=True))
# Predict the output binary class - output shape is (batch_size, n_sentences, 1)
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
...
This should be a good start.
I hope this helps
I would like to train a model to generate text, similar to this blog post.
This model uses - as far as I understand it - the following architecture:
[Sequence of Word Indices] -> [Embedding] -> [LSTM] -> [1 Hot Encoded "next word"]
Basically, the author models the process as classification problem, where the output layer has as many dimensions as there are words in the corpus.
I would like to model the process as regression problem, by re-using the learned Embeddings and then minimising the distance between predicted and real embedding.
Basically:
[Sequence of Word Indices] -> [Embedding] -> [LSTM] -> [Embedding-Vector of the "next word"]
My problem is: as the model is learning the embeddings on the fly, how could I feed the output in the same way I feed the input (as word indices) and then just tell the model "But before you use the output, replace it by its embedding vector"?
Thank you very much for all help :-)
In the training phase:
You can use two inputs (one for the target, one for the input; there's an offset of 1 between these two sequences) and reuse the embedding layer.
If your input sentence is [1, 2, 3, 4], you can generate two sequences from it: in = [1, 2, 3], out = [2, 3, 4]. Then you can use Keras' functional API to reuse the embedding layer:
emb1 = Embedding(in)
emb2 = Embedding(out)
predict_emb = LSTM(emb1)
loss = mean_squared_error(emb2, predict_emb)
Note it's not Keras code, just pseudo code.
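To make the idea concrete, here is a hedged sketch of that pseudo code in the actual Keras functional API, with the embedding layer shared between the input and the shifted target and the MSE between predicted and true next-word embeddings attached via add_loss (vocab_size, emb_dim, seq_len and the shifted arrays X_in/X_out are assumptions):
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Embedding, LSTM
from tensorflow.keras.models import Model
seq_in = Input(shape=(seq_len,))    # e.g. [1, 2, 3]
seq_out = Input(shape=(seq_len,))   # e.g. [2, 3, 4]
shared_emb = Embedding(vocab_size, emb_dim)   # one embedding layer, reused for both inputs
emb_in = shared_emb(seq_in)
emb_out = shared_emb(seq_out)                 # target embeddings come from the same layer
pred_emb = LSTM(emb_dim, return_sequences=True)(emb_in)
model = Model([seq_in, seq_out], pred_emb)
model.add_loss(K.mean(K.square(pred_emb - emb_out)))  # MSE between predicted and true embeddings
model.compile(optimizer='adam')
# model.fit([X_in, X_out], epochs=...)  # no separate y needed, the loss is defined above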
In the testing phase:
Typically, you'll need to write your own decode function. First, you choose a word (or a few words) to start from. Then, feed this word (or short word sequence) to the network to predict the next word's embedding. At this step you can define your own sampling function: you may want to choose the word whose embedding is nearest to the predicted one as the next word, or you may want to sample the next word from a distribution in which words with embeddings nearer to the predicted embedding have a larger probability of being chosen. Once you choose the next word, feed it to the network and predict the next one, and so forth.
So, you need to generate one word (put another way, one embedding) at a time, rather than input a whole sequence to the network.
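As a small sketch of the "nearest embedding" variant of that sampling step (emb_matrix would be the learned embedding weights, pred the embedding the LSTM just predicted; cosine similarity is my own choice of distance):
import numpy as np
def nearest_word_index(pred, emb_matrix):
    # Cosine similarity between the predicted embedding and every word embedding.
    sims = emb_matrix @ pred / (np.linalg.norm(emb_matrix, axis=1) * np.linalg.norm(pred) + 1e-9)
    return int(np.argmax(sims))  # index of the word whose embedding is closest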
If the above statements are too abstract for you, here's a good example: https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py
Line 85 is the introduction part, which randomly chooses a small piece of text from the corpus to work on. From line 90 on there's a loop, in which each step samples a character (this is a char-rnn, so each timestep inputs a char; for your case, it should be a word, not a char): L95 predicts the next char's distribution, L96 samples from the distribution. Hope this is clear enough.