How to predict with Word2Vec? - python

I'm doing Arabic dialect text classification and I've used Word2Vec to train a model. This is what I have so far:
import logging
import gensim

def read_input(input_file):
    with open(input_file, 'rb') as f:
        for i, line in enumerate(f):
            yield gensim.utils.simple_preprocess(line)

# data_file: path to the training corpus (defined elsewhere in my script)
documents = list(read_input(data_file))
logging.info("Done reading data file")

model = gensim.models.Word2Vec(documents, size=150, window=10, min_count=2, workers=10)
model.train(documents, total_examples=len(documents), epochs=10)
What do I do now to predict a new text if it's of any of the 5 dialects I have?
Also, I looked around and found this code:
# load the pre-trained word-embedding vectors
embeddings_index = {}
for i, line in enumerate(open('w2vmodel.vec', encoding='utf-8')):
    values = line.split()
    embeddings_index[values[0]] = numpy.asarray(values[1:], dtype='float32')

# create a tokenizer
token = text.Tokenizer()
token.fit_on_texts(trainDF['text'])
word_index = token.word_index

# convert text to sequence of tokens and pad them to ensure equal length vectors
train_seq_x = sequence.pad_sequences(token.texts_to_sequences(train_x), maxlen=70)
valid_seq_x = sequence.pad_sequences(token.texts_to_sequences(valid_x), maxlen=70)

# create token-embedding mapping
embedding_matrix = numpy.zeros((len(word_index) + 1, 300))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
But it gives me this error when I run it and load my trained word2vec model:
ValueError: could not convert string to float: '\x00\x00\x00callbacksq\x04)X\x04\x00\x00\x00loadq\x05cgensim.utils'
Note:
There's actually another piece of code that I didn't post here. I wanted to use word2vec with a neural network; I have the code for the neural network, but I don't know how to use the features I got from word2vec as input to the neural net, with the labels as output. Is it possible to connect word2vec to a deep neural net, and how?

Word2vec alone isn't something that would classify texts into dialects, so you haven't sketched out a plausible full approach here.
What made you think word2vec could or should be used as part of this task? (If there's a motivating theory-of-operation, or some other published precedent for this approach that gave you the idea, that could help guide what else should be done.)
What is your training data like?
If it's a large number of example texts, with accurate labels as to which dialect each text is from, have you tried classifiers working on simple bag-of-words or bag-of-character-n-grams representations of the texts? (By discovering relationships between the dialects, and the exact words, groups-of-words, or word-fragments in the texts, such a classifier might work far better than word2vec. Word2vec ignores word fragments, and drives the vectors of similar-meaning words close together, obscuring small differences in word-spelling or word-choice.)
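As a rough sketch of such a baseline (the texts/labels variable names and the parameter choices here are illustrative, not from your question):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# texts: list of labeled example strings; labels: their 5 dialect labels
clf = make_pipeline(
    TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4)),  # bag-of-character-n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["some new dialect text"]))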
You might also try:
FastText in classification mode, in which the word-vectors (and optionally, word-fragment-vectors) are trained specifically to be good at classifying amongst a set of known labels (rather than just being good at predicting nearby words, as in classic word2vec); see the sketch after this list
the technique of using multiple word2vec-models (one per dialect) as classifiers, as demonstrated in a notebook included with gensim: Deep Inverse Regression with Yelp Reviews
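A minimal sketch of FastText's supervised mode, assuming the standalone fasttext Python package and a training file where each line is "__label__<dialect> <text>" (the file name and parameters are illustrative):

import fasttext

ft_model = fasttext.train_supervised('dialects.train.txt', wordNgrams=2, epoch=25)
labels, probs = ft_model.predict("some new dialect text")
print(labels, probs)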
(Separately regarding your shown code:
you don't need to call train(documents,...) if you already supplied documents to the class-instantiation call – that will have already done training, as enabling INFO logging and watching the logs should make clear
you shouldn't need code that tries to open/read your w2vmodel.vec file directly, because gensim includes methods for reading such files, such as .load_word2vec_format() or (if a full model was natively .save()d from gensim) just .load() – see the sketch after this list.
)
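A minimal loading sketch, reusing the file names from the question; which call applies depends on how the file was saved:

from gensim.models import Word2Vec, KeyedVectors

# If the full model was saved with model.save('w2vmodel'):
model = Word2Vec.load('w2vmodel')

# If w2vmodel.vec is in the plain word2vec text format:
wv = KeyedVectors.load_word2vec_format('w2vmodel.vec', binary=False)

# Either way, word-vectors are then looked up through the KeyedVectors interface:
print(model.wv['some_word'])   # or wv['some_word']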

Related

I want to Train 4 more Word2vec models and average the resulting embedding matrices

I wrote the code below. I used spaCy to restrict the words in the tweets to content words, i.e., nouns, verbs, and adjectives, transform the words to lower case, and append the POS tag with an underscore, e.g.:
love_VERB old-fashioneds_NOUN
Now I want to train 4 more Word2vec models and average the resulting embedding matrices, but I don't have any idea how to do that. Can you help me, please?
# Tokenization of each document
from gensim.models.word2vec import FAST_VERSION
from gensim.models import Word2Vec
import spacy
import pandas as pd
from zipfile import ZipFile
import wget

url = 'https://raw.githubusercontent.com/dirkhovy/NLPclass/master/data/reviews.full.tsv.zip'
wget.download(url, 'reviews.full.tsv.zip')

with ZipFile('reviews.full.tsv.zip', 'r') as zf:
    zf.extractall()

# nrows: max number of rows to read
df = pd.read_csv('reviews.full.tsv', sep='\t', nrows=100000)
documents = df.text.values.tolist()

nlp = spacy.load('en_core_web_sm')  # you can use other models

# included POS tags
included_tags = {"NOUN", "VERB", "ADJ"}

sentences = documents[:103]  # first 103 documents only
new_sentences = []
for sentence in sentences:
    new_sentence = []
    for token in nlp(sentence):
        if token.pos_ in included_tags:
            new_sentence.append(token.text.lower() + '_' + token.pos_)
    new_sentences.append(new_sentence)

vocab = [s for s in new_sentences]

# initialize model
w2v_model = Word2Vec(
    size=100,
    window=15,
    sample=0.0001,
    iter=200,
    negative=5,
    min_count=1,  # <-- it seems your min_count was too high
    workers=-1,
    hs=0,
)

w2v_model.build_vocab(vocab)
w2v_model.train(vocab,
                total_examples=w2v_model.corpus_count,
                epochs=w2v_model.epochs)

w2v_model.wv['car_NOUN']
There's no reason to average together vectors from multiple training runs; it is more likely to destroy any value from the individual runs than provide any benefit.
No one run creates the 'right' final positions, nor do they all approach some idealized positions. Rather, each just creates a set-of-vectors that is internally comparable to others in that same co-trained set. Comparisons or combinations with vectors from other, non-interleaved training runs are usually going to be nonsense.
Instead, aim for one adequate run. If vectors move around a lot, in repeated runs, that's normal. But each reconfiguration should be about as useful, if used for word-to-word comparisons, or analysis of word neighborhoods/directions, or as input to downstream algorithms. If they vary wildly in usefulness, there are likely other inadequacies in the data or model parameters. (For example: too little data – word2vec requires lots to give meaningful results – or a model that's too large for the data – making it prone to overfitting.)
Other observations about your setup:
Just 103 sentences/texts is tiny for word2vec; you shouldn't expect the vectors from such a run to have any of the value that the algorithm would usually provide. (Running such a tiny dataset might be helpful for verifying there are no halting errors in the process, or for familiarizing yourself with the steps/APIs, but the results will tell you nothing.)
min_count=1 is almost always a bad idea in word2vec and similar algorithms. Words that only appear once (or a few times) don't have the variety of subtly-different uses that are needed to train it into a balanced position against other words – so they wind up with weak/strange final positions, and the sheer number of such words dilutes the training effectiveness for other more-frequent words. The common practice of discarding rare words usually gets better results.
iter=200 is an extreme choice which is typically only valuable to try to squeeze results out of inadequate data. (In such a case, you might also have to reduce the vector-size from normal 100-plus dimensions.) So if you seem to need that, getting more data should be a top priority. (Using 20x more data is far, far better than using 20x more iterations on smaller data – but involves the same amount of training time.)
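As a point of reference, a more conventional single run over the full preprocessed corpus might look like the sketch below. The parameter values are illustrative defaults rather than tuned recommendations, and it assumes new_sentences covers all the reviews rather than just the first 103:

from gensim.models import Word2Vec

w2v_model = Word2Vec(
    new_sentences,   # supplying the corpus here builds the vocab and trains in one step
    size=100,
    window=5,
    min_count=5,     # discard very rare words instead of keeping them with min_count=1
    workers=4,       # a positive thread count; -1 is not a meaningful value here
    iter=5,          # the default number of epochs; raise only if evaluation shows it helps
)
print(w2v_model.wv.most_similar('car_NOUN'))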

Linear regression load model doesn't predict as expected

I have trained a linear regression model, with sklearn, for a 5-star rating and it's good enough. I used Doc2vec to create my vectors and saved that model. Then I saved the linear regression model to another file. What I'm trying to do is load the Doc2vec model and the linear regression model and predict another review.
There is something very strange about this prediction: whatever the input, it always predicts around 2.1-3.0.
The thing is, I suspected it was just predicting around the middle of the 1-5 range (about 2.5), but that's not the case: when training the model I printed the prediction value and the actual value of the test data, and they range normally from 1 to 5. So my idea is that there is something wrong with the loading part of the code. This is my load code:
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from bs4 import BeautifulSoup
from joblib import dump, load
import pickle
import re

model = Doc2Vec.load('../vectors/750000/doc2vec_model')

def cleanText(text):
    text = BeautifulSoup(text, "lxml").text
    text = re.sub(r'\|\|\|', r' ', text)
    text = re.sub(r'http\S+', r'<URL>', text)
    text = re.sub(r'[^\w\s]', '', text)
    text = text.lower()
    text = text.replace('x', '')
    return text

review = cleanText("Horrible movie! I don't recommend it to anyone!").split()
vector = model.infer_vector(review)

pkl_filename = "../vectors/750000/linear_regression_model.joblib"
with open(pkl_filename, 'rb') as file:
    linreg = pickle.load(file)

review_vector = vector.reshape(1, -1)
predict_star = linreg.predict(review_vector)
print(predict_star)
Your example code shows imports of both joblib.dump and joblib.load – even though neither is used in this excerpt. And, the suffix of your file is suggestive that the model may have originally been saved with joblib.dump(), not vanilla pickle.
But, this code shows the file being loaded only via plain pickle.load() – which may be the source of the error.
The joblib.load() docs suggest that its load() may do things like load numpy arrays from multiple separate files created by its own dump(). (Oddly, the dump() docs are less clear on this, but supposedly dump() has a return-value that may be a list of filenames.)
You can check where the file was saved for extra files that appear to be related, and try using joblib.load() rather than plain-pickle, to see if that loads a more-functional/more-complete version of your linreg object.
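A minimal sketch of that check, assuming the file really was written with joblib.dump() and reusing the vector already inferred in your code above:

from joblib import load

linreg = load("../vectors/750000/linear_regression_model.joblib")
review_vector = vector.reshape(1, -1)
print(linreg.predict(review_vector))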
(Update: I overlooked the .split() tokenization being done in the question code after cleanText(), so this isn't the real problem. But I'm keeping the answer up for reference, and because the real issue was discovered in the comments.)
Very commonly, users get mysteriously-weak results from Doc2Vec when they provide a plain string to infer_vector(). Doc2Vec infer_vector() requires a list-of-words, not a string.
If providing a string, the function will see it as a list-of-one-character words – per Python's modeling of strings as lists-of-characters, and type-conflation of characters and one-character-strings. Most of these one-character words probably aren't known by the model, and those that might be – 'i', 'a', etc – aren't very meaningful. So the inferred doc-vector will be weak & meaningless. (And, it isn't surprising such a vector, fed to your linear regression, always gives a middling predicted value.)
If you break the text into the expected list-of-words, your results should improve.
But more generally, the words provided to infer_vector() should be preprocessed and tokenized exactly however the training documents were.
(A fair sanity test of whether you're doing inference properly is to infer vectors for some of your training documents, then ask the Doc2Vec model for the doc-tags closest to these re-inferred vectors. In general, the same document's training-time tag/ID should be the top result, or at least one of the top few. If it isn't, there may be other problems in the data, model parameters, or inference.)
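A minimal sketch of that sanity check, assuming train_docs holds the TaggedDocument objects used at training time (that name is illustrative, not from your code):

doc = train_docs[0]
reinferred = model.infer_vector(doc.words)
# The document's own training-time tag should rank at or near the top of the neighbors.
print(doc.tags[0], model.docvecs.most_similar([reinferred], topn=3))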

Gensim's Doc2vec - inferred vector isn't similar

When I train Doc2vec (using Gensim's Doc2vec in Python) on a corpus of about 10k documents (each a few hundred words long) and then infer document vectors using the same documents, they are not at all similar to the trained document vectors. I would expect them to be at least somewhat similar.
That is, I do model.docvecs['some_doc_id'] and model.infer_vector(documents['some_doc_id']).
Cosine distances between trained and inferred vectors for the first few documents:
0.38277733326
0.284007549286
0.286488652229
0.173178792
0.370117008686
0.275438070297
0.377647638321
0.171194493771
0.350615143776
0.311795353889
0.342757165432
As you can see, they are not really similar. If the similarity is so terrible even for documents used for training, I can't even begin to try to infer unseen documents.
Training configuration:
model = Doc2Vec(documents=documents, dm=1, size=100, window=6, alpha=0.1, workers=4,
seed=44, sample=1e-5, iter=15, hs=0, negative=8, dm_mean=1, min_alpha=0.01, min_count=2)
Inferring:
model.infer_vector(tokens, steps=20, alpha=0.025)
Note on the side: Documents are always preprocessed the same way (I checked that the same list of tokens goes into training and into inferring).
Also, I played around with the parameters a bit, and the results were similar. So if your suggestion would be something like "try increasing or decreasing this or that training parameter", I've most likely tried it. Maybe I just didn't come across the 'correct' parameters, though.
Thanks for any suggestions as to what I can do to make it work better.
EDIT: I am willing and able to use any other available Python implementation of paragraph vectors (doc2vec). It doesn't have to be this one, if you know of another that can achieve better results.
EDIT: Minimal working example
import fnmatch
import os
from scipy.spatial.distance import cosine
from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedDocument
from keras.preprocessing.text import text_to_word_sequence

files = {}
folder = 'some path'  # each file contains a few regular sentences
for f in fnmatch.filter(os.listdir(folder), '*.sent'):
    files[f] = open(folder + '/' + f, 'r', encoding="UTF-8").read()

documents = []
for k, v in files.items():
    words = text_to_word_sequence(v, lower=True)  # converts string to list of words, removes commas etc.
    documents.append(TaggedDocument(tags=[k], words=words))

d2 = Doc2Vec(size=200, documents=documents)

for doc in documents:
    trained = d2.docvecs[doc.tags[0]]
    inferred = d2.infer_vector(doc.words, steps=50)
    print(cosine(trained, inferred))  # cosine distance from scipy
What is the type of your documents object, and are you sure that it is a multiply-iterable object, so that the model can do all of its 16 passes over the set of TaggedDocument-shaped text examples? That is, does iter(documents) always return a fresh iterator, with all items as TaggedDocument-shaped objects with the right list-of-words in words and list-of-tags in tags? (A common error is to supply a corpus that can be iterated over only once, and then ignore any logged hints/warnings that no real training has happened. The inference/similarity results from such a model will be essentially random.)
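A minimal sketch of a restartable corpus, reusing the folder/file pattern from your MWE (the class name is illustrative); each call to __iter__ returns a fresh iterator, so the model can make repeated passes:

import fnmatch
import os
from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedDocument
from keras.preprocessing.text import text_to_word_sequence

class SentCorpus:
    def __init__(self, folder):
        self.folder = folder
    def __iter__(self):
        for fname in fnmatch.filter(os.listdir(self.folder), '*.sent'):
            text = open(os.path.join(self.folder, fname), 'r', encoding="UTF-8").read()
            yield TaggedDocument(words=text_to_word_sequence(text, lower=True), tags=[fname])

d2 = Doc2Vec(size=200, documents=SentCorpus('some path'))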
Then for infer_vector(), does documents[tag] really return just the list-of-words it expects (not TaggedDocument or string)? (Users often supply strings, rather than lists-of-tokens, for training or inference words and get results that are just noise.)
Was there evaluation-guided reason for changing various defaults, either a little (window=6, negative=8) or a lot (alpha=0.1, min_count=2)? Such tweaks may not be a major factor in your problem, and there's nothing magical about the class defaults. But until you have the basics working, it's best to stick close to common configuration. (And then even after the basics are working, limit changes to those that can be demonstrated as better via a repeatable scoring process.)
Some report needing much higher steps values – 100 or more – to get better inference results, though that would be most crucial for very-small documents (of a handful to couple dozen words) rather than the few-hundred-words documents you describe.
A corpus of 10k documents is on the small side for Paragraph Vectors (Doc2Vec), but with your smallish vector-size (100) and larger number of iterations (15), it might be workable.
If you're still having problems, you should expand your question with more code showing how documents works, some suggestive example documents, and your cosine-similarity evaluation process – to see if there are any oversights at each of those steps.

Sentence matching with gensim word2vec: manually populated model doesn't work

I'm trying to solve a problem of sentence comparison using the naive approach of summing up word vectors and comparing the results. My goal is to match people by interest, so the dataset consists of names and short sentences describing their hobbies. The batches are fairly small, a few hundred people, so I wanted to give it a try before digging into doc2vec.
I prepare the data by cleaning it completely, removing stop words, tokenizing, and lemmatizing. I use a pre-trained model for word vectors, which returns adequate results when finding similarities for some test words. I also tried summing up the sentence words to find similarities in the original model – the matches do make sense; the similarities capture the general sense of the phrase.
For sentence matching I'm trying the following: create an empty model
b = gs.models.Word2Vec(min_count=1, size=300, sample=0, hs=0)
Build vocab out of names (or person id's), no training
# first create vocab with an empty vector
test = [['test']]
b.build_vocab(test)
b.wv.syn0[b.wv.vocab['test'].index] = b.wv.syn0[b.wv.vocab['test'].index] * 0

# populate vocab from an array
b.build_vocab([personIds], update=True)
Sum each sentence's word vectors and store the result in the model under the corresponding id
# sentences are pulled from pandas dataset df. 'a' is a pre-trained model I use to get vectors for each word
def summ(phrase, start_model):
    '''
    vector addition function
    '''
    # starting with a vector of 0's
    sum_vec = start_model.word_vec("cat_NOUN") * 0
    for word in phrase:
        sum_vec += start_model.word_vec(word)
    return sum_vec

for i, row in df.iterrows():
    try:
        personId = row["ID"]
        summVec = summ(df.iloc[i, 1], a)
        # updating syn0 for each name/id in vocabulary
        b.wv.syn0[b.wv.vocab[personId].index] = summVec
    except:
        pass
I understand that I shouldn't be expecting much accuracy here, but the t-SNE plot doesn't show any clustering whatsoever, and the most-similar lookups also fail to find matches (a similarity coefficient below 0.2 for basically everything).
I'm wondering if anyone has an idea of where I went wrong? Is my approach valid at all?
Your code, as shown, neither does any train() of word-vectors (using your local text), nor does it pre-load any vectors from elsewhere. So any vectors which do exist – created by the build_vocab() calls – will still just be in their randomly-initialized starting locations, and be useless for any semantic purposes.
Suggestions:
either (a) train your own vectors from your text, which makes sense if you have a good quantity of text; or (b) load vectors from elsewhere. But don't try to do both. (Or, in the case of the code above, neither.)
The update=True option for build_vocab() should be considered an expert, experimental option – only worth tinkering with if you've already had things working in simpler modes, and you're sure you need it and understand all the implications.
Normal use won't ever explicitly re-assign new values into the Word2Vec model's syn0 property - those are managed by the class's training routines, so you never need to zero them out or modify them. You should tally up your own text summary vectors, based on word-vectors, outside the model in your own data structures.
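A minimal sketch of that approach, reusing df and the pre-trained model a from your code (and the same word_vec() lookup your summ() function uses), with the summed vectors kept in a plain dict instead of inside a Word2Vec model:

import numpy as np

person_vectors = {}
for i, row in df.iterrows():
    vecs = []
    for word in df.iloc[i, 1]:
        try:
            vecs.append(a.word_vec(word))  # same per-word lookup used in the question
        except KeyError:
            pass                           # skip words missing from the pre-trained model
    if vecs:
        person_vectors[row["ID"]] = np.sum(vecs, axis=0)

# Compare any two people by cosine similarity of their summed vectors.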

Classifying text with scikit

I'm learning scikit-learn for a machine-learning project, and while I'm beginning to grasp the general process, the details are still a bit fuzzy.
Earlier I managed to build a classifier, train it, and test it with a test set. I saved it to disk with cPickle. Now I want to create a class which loads this classifier and lets the user classify single tweets with it.
I thought this would be trivial, but I seem to get ValueError('dimension mismatch') from the X_new_tfidf = self.tfidf_transformer.fit_transform(fitTweetVec) line with the following code:
class TweetClassifier:
    classifier = None
    vect = TfidfVectorizer()
    tfidf_transformer = TfidfTransformer()

    # open the classifier saved to disk to be utilized later
    def openClassifier(self, name):
        with open(name + '.pkl', 'rb') as fid:
            return cPickle.load(fid)

    def __init__(self, classifierName):
        self.classifier = self.openClassifier(classifierName)
        self.classifyTweet(np.array([u"Helvetin vittu miksi aina pitää sataa vettä???"]))

    def classifyTweet(self, tweetText):
        fitTweetVec = self.vect.fit_transform(tweetText)
        print self.vect.get_feature_names()
        X_new_tfidf = self.tfidf_transformer.fit_transform(fitTweetVec)
        print self.classifier.predict(X_new_tfidf)
What am I doing wrong here? I used similar code when I made the classifier and ran the test set for it. Have I forgotten some important step here?
Now, I admit that I don't fully understand the fitting and transforming here yet, since I found scikit-learn's tutorial a bit ambiguous about it. If someone knows of as clear an explanation of them as possible, I'm all for links :)
The problem is that your classifier was trained with a fixed number of features (the length of the vocabulary learned from your previous data), but now when you fit_transform the new tweet, the vectorizer learns a new vocabulary and a new number of features, and represents the new tweet in that space.
The solution is to also save the previously fitted vectorizer/transformer (which contains the old vocabulary), load it along with the classifier, and .transform (not fit_transform, because it was already fitted to the old data) the new tweet into that same representation.
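A minimal sketch of that fix, assuming the TfidfVectorizer fitted at training time was also pickled (the vectorizer.pkl and classifier.pkl file names are illustrative):

# At training time, after fitting on the training tweets:
# with open('vectorizer.pkl', 'wb') as fid:
#     cPickle.dump(vect, fid)

with open('vectorizer.pkl', 'rb') as fid:
    vect = cPickle.load(fid)           # the vectorizer fitted on the training data
with open('classifier.pkl', 'rb') as fid:
    classifier = cPickle.load(fid)

X_new = vect.transform([u"Helvetin vittu miksi aina pitää sataa vettä???"])
print(classifier.predict(X_new))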
You can also use a Pipeline that contains both the vectorizer and the classifier and pickle the Pipeline; this is easier and recommended.
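And a sketch of the Pipeline route (the train_texts/train_labels names and the MultinomialNB choice are illustrative, not from the question):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
import pickle

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('clf', MultinomialNB()),
])
pipeline.fit(train_texts, train_labels)

# Persist the whole fitted pipeline, vocabulary included.
with open('tweet_classifier.pkl', 'wb') as fid:
    pickle.dump(pipeline, fid)

# Later: load and classify a single tweet without re-fitting anything.
with open('tweet_classifier.pkl', 'rb') as fid:
    loaded = pickle.load(fid)
print(loaded.predict([u"Helvetin vittu miksi aina pitää sataa vettä???"]))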
