6 GB RAM fails when vectorizing text using Word2Vec - Python

I'm trying to do a basic tweet sentiment analysis using word2vec and TF-IDF scores on a dataset of 1.6M tweets, but my machine with a 6 GB Nvidia GeForce card fails to handle it. Since this is my first practice project in machine learning, I'm wondering what I'm doing wrong: the dataset is all text, so it shouldn't take this much RAM, yet my laptop freezes in the tweet2vec function or throws a MemoryError in the scaling part. Below is the part of my code where everything collapses.
One last thing: I've tried it with up to 1M rows of data and it worked, so I'm curious what causes the problem.
# --------------- calculating word weight for using later in word2vec model & bringing words together ---------------
def word_weight(data):
    vectorizer = TfidfVectorizer(sublinear_tf=True, use_idf=True)
    d = dict()
    for index in tqdm(data, total=len(data), desc='Assigning weight to words'):
        # --------- try/except catches the empty indexes ----------
        try:
            matrix = vectorizer.fit_transform([w for w in index])
            tfidf = dict(zip(vectorizer.get_feature_names(), vectorizer.idf_))
            d.update(tfidf)
        except ValueError:
            continue
    print("every word has weight now\n"
          "--------------------------------------")
    return d
# ------------------- bringing tokens with weight to recreate tweets ----------------
def tweet2vec(tokens, size, tfidf):
    count = 0
    for index in tqdm(tokens, total=len(tokens), desc='creating sentence vectors'):
        # ---------- size is the dimension of the word2vec model (200) ---------------
        vec = np.zeros(size)
        for word in index:
            try:
                vec += model[word] * tfidf[word]
            except KeyError:
                continue
        tokens[count] = vec.tolist()
        count += 1
    print("tweet vectors are ready for scaling for ML algorithm\n"
          "-------------------------------------------------")
    return tokens
dataset = read_dataset('training.csv', ['target', 't_id', 'created_at', 'query', 'user', 'text'])
dataset = delete_unwanted_col(dataset, ['t_id', 'created_at', 'query', 'user'])
dataset_token = [pre_process(t) for t in tqdm(map(lambda t: t, dataset['text']),
                                              desc='cleaning text', total=len(dataset['text']))]
print('pre_process completed, list of tweet tokens is returned\n'
      '--------------------------------------------------------')
X = np.array(tweet2vec(dataset_token, 200, word_weight(dataset_token)))
print('scaling vectors ...')
X_scaled = scale(X)
print('features scaled!')
The data given to the word_weight function is a list of shape (1599999, 200), where each index consists of pre-processed tweet tokens.
I appreciate your time and answers in advance, and of course I'd be glad to hear about better approaches for handling big datasets.

If I understood correctly, it works with 1M tweets but fails with 1.6M tweets? So you know the code is correct.
If the GPU is running out of memory when you think it shouldn't, memory may still be held by a previous process. Use nvidia-smi to check which processes are using the GPU and how much memory. If (before you run your code) you spot Python processes in there holding a big chunk, it could be a crashed process, a Jupyter window still open, etc.
I find it useful to watch nvidia-smi (not sure whether there is a Windows equivalent) to see how GPU memory changes as training progresses. Normally a chunk is reserved at the start and then it stays fairly constant. If you see it rising linearly, something could be wrong with the code (are you re-loading the model on each iteration, or something like that?).
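For example, a minimal sketch (assuming nvidia-smi is on the PATH; polling it with subprocess is just one convenient way to watch memory from Python, not something the original code does):

import subprocess
import time

def log_gpu_memory(interval_sec=5, iterations=12):
    # Print used/total GPU memory once per interval
    query = ["nvidia-smi",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader"]
    for _ in range(iterations):
        print(subprocess.check_output(query, text=True).strip())
        time.sleep(interval_sec)

log_gpu_memory()

Run it in a second terminal (or thread) while training to see whether usage stays flat or climbs.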

My problem was solved when I changed the code (the tweet2vec function) to this
(w is the word-weight dictionary):
def tweet2vec(tokens, size, tfidf):
    # ------------- size is the dimension of the word2vec model (200) ---------------
    vec = np.zeros(size).reshape(1, size)
    count = 0
    for word in tokens:
        try:
            vec += model[word] * tfidf[word]
            count += 1
        except KeyError:
            continue
    if count != 0:
        vec /= count
    return vec
X = np.concatenate([tweet2vec(token, 200, w)
                    for token in tqdm(map(lambda token: token, dataset_token),
                                      desc='creating tweet vectors',
                                      total=len(dataset_token))])
I have no idea why!!!!
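One plausible explanation (a guess, not verified on the original setup): the first version overwrote every entry of dataset_token with vec.tolist(), i.e. 1.6M separate Python lists of boxed floats, and np.array(...) then had to build the final matrix on top of all of that; the rewritten version keeps each tweet as one small NumPy row and np.concatenate assembles a single contiguous array. A rough back-of-envelope comparison (assumed 64-bit CPython object sizes, not measured):

import numpy as np

n_tweets, dim = 1_600_000, 200

# One contiguous float64 matrix, as produced by np.concatenate:
dense_bytes = n_tweets * dim * np.dtype(np.float64).itemsize
print(f"single numpy array    : {dense_bytes / 1e9:.2f} GB")   # ~2.56 GB

# 1.6M separate lists of Python floats (vec.tolist() per tweet):
# ~24 bytes per float object + 8 bytes per list slot + ~56 bytes of list header
list_bytes = n_tweets * (dim * (24 + 8) + 56)
print(f"list-of-lists estimate: {list_bytes / 1e9:.2f} GB")    # ~10 GB, before np.array() copies it again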

Related

NLP Bert Multiprocessing Text Summaries

I have a dataframe of over 1M articles that I have cleaned in order to train BERT as a text-summarization model, which in turn gets fed into a QA pipeline to ultimately train a T5 closed-book QA model. I used multithreading before to vastly improve scraping times. However, is there an equivalent of multiprocessing for BERT?
The current flow looks like this:
def longBatch(context):
    model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
    tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
    inputs_no_trunc = tokenizer(context, max_length=None, return_tensors='pt', truncation=False)
    chunk_start = 0
    chunk_end = tokenizer.model_max_length
    inputs_batch_lst = []
    while chunk_start <= len(inputs_no_trunc['input_ids'][0]):
        inputs_batch = inputs_no_trunc['input_ids'][0][chunk_start:chunk_end]
        inputs_batch = torch.unsqueeze(inputs_batch, 0)
        inputs_batch_lst.append(inputs_batch)
        chunk_start += tokenizer.model_max_length  # == 1024 for Bart
        chunk_end += tokenizer.model_max_length    # == 1024 for Bart
    # generate a summary on each chunk
    summary_ids_lst = [model.generate(inputs, num_beams=4, max_length=100, early_stopping=True) for inputs in inputs_batch_lst]
    summary_batch_lst = []
    for summary_id in summary_ids_lst:
        summary_batch = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_id]
        summary_batch_lst.append(summary_batch[0])
    summary_all = '\n'.join(summary_batch_lst)
    print(summary_all)
    return summary_all

listarray = []
for row in x["0"]:
    try:
        pd.DataFrame(listarray).to_csv("/drive/somepath/sum.csv")
        print(len(listarray))
        listarray.append(longBatch(row))
        print("Done with line")
    except:
        print("error:" + " " + row)
Now this works fine as is, with each article in its own row processed one at a time, but it is fairly slow. Multithreading took my scrape times into the realm of roughly 500,000 articles a day, but this current summarization setup peaks at roughly 5,500 a day. I've tried defining the processes and running them in a pool, but the time actually increased by 6 seconds per summarization. The system doesn't strain feeding one article at a time, so I'd imagine it could handle quite a bit more; how would I go about parallelizing the process?
For reference, I am testing each section of the script in Colab Pro+, which has a Tesla V100 GPU and an 8-core CPU, before moving to my local machine.
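Since the system doesn't strain feeding one article at a time, one possible direction is GPU batching rather than CPU multiprocessing: pad several 1024-token chunks together so a single model.generate call handles them. A rough sketch (an illustration only, assuming the model and tokenizer loaded above and enough GPU memory for the chosen batch size):

import torch

def generate_batched(inputs_batch_lst, batch_size=8):
    summaries = []
    pad_id = tokenizer.pad_token_id
    for i in range(0, len(inputs_batch_lst), batch_size):
        chunks = [t[0] for t in inputs_batch_lst[i:i + batch_size]]
        # Pad the chunks to a common length and build the matching attention mask
        padded = torch.nn.utils.rnn.pad_sequence(chunks, batch_first=True,
                                                 padding_value=pad_id)
        mask = (padded != pad_id).long()
        ids = model.generate(padded, attention_mask=mask,
                             num_beams=4, max_length=100, early_stopping=True)
        summaries += tokenizer.batch_decode(ids, skip_special_tokens=True,
                                            clean_up_tokenization_spaces=False)
    return summaries

Loading the model and tokenizer once, outside longBatch, also avoids re-initializing BART for every article.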

Solving memory issues when using Gensim LDA Multicore

For my project I am trying to use unsupervised learning to identify different topics from application descriptions, but I am running into a strange problem. I have 3 different datasets: one with 15k documents, another with 50k documents, and a last one with 2M documents. I am testing models with different numbers of topics (k), ranging from 5 to 100 with a step size of 5, in order to check which k results in the best model, assessed initially by the highest coherence score. For each k, I also build 3 different models with chunksize 10, 100 and 1000.
Now on to the problem I am having. My own machine is obviously too slow and does not have enough cores for this kind of computation, so I am using my university's server. The problem is that my program seems to be consuming too much memory, and I am unsure of the reason. I already made some adjustments so that the corpus is not loaded entirely into memory (or at least I think I did). The dataset with 50k entries has already consumed the allotted 100 GB of memory at iteration k=50 (so halfway), which seems huge.
I would appreciate any pointers in the right direction, and thanks for taking the time to look at this. Below is the code from my topic_modelling.py file. The comments in the file are a bit outdated, sorry about that.
class MyCorpus:
    texts: list
    dictionary: dict

    def __init__(self, descriptions, dictionary):
        self.texts = descriptions
        self.dictionary = dictionary

    def __iter__(self):
        for line in self.texts:
            try:
                # assume there's one document per line, tokens separated by whitespace
                yield self.dictionary.doc2bow(line)
            except StopIteration:
                pass

# Function that, given a dataframe, creates a dictionary and corpus.
# These are used to create an LDA model. Here we automatically use the Description column
# from each dataframe.
def create_dict_and_corpus(df):
    text_descriptions = remove_characters_and_create_list(df, 'Description')
    # print(text_descriptions)
    dictionary = gensim.corpora.Dictionary(text_descriptions)
    corpus = MyCorpus(text_descriptions, dictionary)
    return text_descriptions, dictionary, corpus

# Given a dataframe and a column name in the dataframe, extract all words and return a list.
# Also removes all characters that are not alphanumeric or spaces.
def remove_characters_and_create_list(df, column_name, split=True):
    df[column_name] = df[column_name].astype(str)
    texts = []
    for x in range(df[column_name].size):
        current_string = df[column_name][x]
        filtered_string = re.sub(r'[^A-Za-z0-9 ]+', '', current_string)
        if split:
            texts.append(filtered_string.split())
        else:
            texts.append(filtered_string)
    return texts

# This function, given the parameters, creates an LDA model for each number between
# the start limit and the end limit. After this the coherence and perplexity are calculated
# for each of those models and saved in a csv file to analyze later.
def test_lda_models(text, corpus, dictionary, start_limit, end_limit, path):
    results = []
    print("============Starting topic modelling============")
    for k in range(start_limit, end_limit+1, 5):
        for p in range(1, 4):
            chunk = pow(10, p)
            t0 = time.time()
            lda_model = gensim.models.ldamulticore.LdaMulticore(corpus,
                                                                num_topics=k,
                                                                id2word=dictionary,
                                                                passes=p,
                                                                chunksize=chunk)
            # To calculate the goodness of the model
            perplexity = lda_model.bound(corpus)
            coherence_model_lda = CoherenceModel(model=lda_model, texts=text, dictionary=dictionary, coherence='c_v')
            coherence_lda = coherence_model_lda.get_coherence()
            t1 = time.time()
            print(f"=====Done K={k} model with passes={p} and chunksize={chunk}, took {t1-t0} seconds=====")
            results.append((k, chunk, coherence_lda, perplexity))
    # Storing the results in a csv file, except the actual lda model (this would not make sense)
    path = make_dir_if_not_exists(path)
    list_tuples_to_csv(results, ['#OfTopics', 'ChunkSize', 'CoherenceScore', 'Perplexity'], f"{path}/K={start_limit}to{end_limit}.csv")
    return results

# Function to plot the visualization of an LDA model. This visualization is then
# saved as an html file inside the given path.
def single_lda_model_visualization(k, c, corpus, dictionary, lda_model, path):
    vis = gensimvis.prepare(lda_model, corpus, dictionary)
    pyLDAvis.save_html(vis, f"{path}/visualization.html")

# Given the results produced by test_lda_models, loop through the models and save the
# topic words of each model and the visualization of the topics in the given path.
def save_lda_result(k, c, lda_model, corpus, dictionary, path):
    list_tuples_to_csv(lda_model.print_topics(num_topics=k), ['Topic#', 'Associated Words'], f"{path}/associated_words.csv")
    single_lda_model_visualization(k, c, corpus, dictionary, lda_model, path)

# This is the entire pipeline that needs to be performed for a single dataset,
# which includes computing the LDA models from start to end limit and calculating
# and saving the topic words and visual graphs for the top n topics with the highest
# coherence score.
def perform_topic_modelling_single_df(df, start_limit, end_limit, path):
    # Extracting the necessary data required for LDA model computation
    text_descriptions, dictionary, corpus = create_dict_and_corpus(df)
    results_lda = test_lda_models(text_descriptions, corpus, dictionary, start_limit, end_limit, path)
    # Sorting the results based on the coherence value (index 2 of each tuple)
    results_lda.sort(key=lambda x: x[2], reverse=True)
    # Getting the top 5 results to pass to the save_lda_result function
    results = results_lda[:5]
    corpus_for_saving = [dictionary.doc2bow(text) for text in text_descriptions]
    texts = remove_characters_and_create_list(df, 'Description', split=False)
    # Perform application-to-topic modelling for the best lda models based on the
    # coherence score (TODO maybe test with other lda models?)
    print("getting descriptions for csv")
    for k, c, _, _ in results:
        dir_path = make_dir_if_not_exists(f"{path}/k={k}_chunk={c}")
        p = int(math.log10(c))
        lda_model = gensim.models.ldamulticore.LdaMulticore(corpus,
                                                            num_topics=k,
                                                            id2word=dictionary,
                                                            passes=p,
                                                            chunksize=c)
        print(f"=====REDOING K={k} model with passes={p} and chunksize={c}=====")
        save_lda_result(k, c, lda_model, corpus_for_saving, dictionary, dir_path)
        application_to_topic_modelling(df, k, c, lda_model, corpus_for_saving, texts, dir_path)

# Performs the whole topic modelling pipeline, taking different genre data sets
# and the entire dataset as a whole.
def perform_topic_modelling_pipeline(path_ex):
    # entire_df = pd.read_csv("../data/preprocessed_data/preprocessed_10000_trial.csv")
    entire_df = pd.read_csv(os.path.join(ROOT_DIR, f"data/preprocessed_data/preprocessedData_{path_ex}.csv"))
    print("size of df")
    print(entire_df.shape)
    # For the entire df, go from the start limit to nGenres to find the best LDA model
    nGenres = row_counter(os.path.join(ROOT_DIR, f"data/genre_wise_data/data{path_ex}/genre_frequency.csv"))
    nGenres_rounded = math.ceil(nGenres / 5) * 5
    print(f"Original number of genres should be {nGenres}, but we are rounding to {nGenres_rounded}")
    path = make_dir_if_not_exists(os.path.join(ROOT_DIR, f"results/data{path_ex}/aall_data"))
    perform_topic_modelling_single_df(entire_df, 5, 100, path)
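On the point about not loading the corpus entirely into memory: MyCorpus above still keeps all tokenized texts in a list and re-runs doc2bow on every pass, and corpus_for_saving materializes every bag-of-words vector at once. A hedged sketch of a fully disk-backed alternative (an illustration, not part of the original code), using gensim's Matrix Market serialization:

import gensim

def build_disk_corpus(text_descriptions, dictionary, path="corpus.mm"):
    # Write each doc2bow vector to a Matrix Market file as it is produced
    gensim.corpora.MmCorpus.serialize(
        path, (dictionary.doc2bow(doc) for doc in text_descriptions))
    # Reading it back gives a corpus that streams from disk on every pass
    return gensim.corpora.MmCorpus(path)

LdaMulticore and bound() can then iterate over the returned corpus without holding every document in RAM at once (the c_v CoherenceModel still needs the tokenized texts).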

Create a model to classify a sentence as logical or not

As the title mentions, how could I train a model to classify whether the following sentences are logical or illogical?
“He has two legs”–logical
“He has six legs”–illogical
Solutions I tried:
1: Train a classifier with a CNN.
I have done this before, and it works very well if you have enough data. The problem is that I do not have a large dataset with "logical"/"illogical" labels for this case.
2: Use a language model.
Train a language model (as introduced by gluonnlp) on a dataset like wiki and use it to compute the probability of each sentence. If the probability of a sentence is high, mark it as logical, and vice versa. The problem is that the results are not good.
The way I estimate the probability:
def __predict(self):
    lines = self.__text_edit_input.toPlainText().split("\n")
    result = ""
    for line in lines:
        result += str(self.__sentence_prob(line, 10)) + "\n"
    self.__text_edit_output.setPlainText(result)

def __prepare_sentence(self, text, max_len):
    result = mx.nd.zeros([max_len, 1], dtype='float32')
    max_len = min(len(text), max_len)
    i = max(max_len - len(text), 0)
    j = 0
    for index in range(i, max_len):
        result[index][0] = self.__vocab[text[j]]
        j = j + 1
    return result

def __sentence_prob(self, text, max_len):
    hiddens = self.__model.begin_state(1, func=mx.nd.zeros, ctx=self.__context)
    tokens = self.__tokenizer(text)
    data = self.__prepare_sentence(tokens, max_len)
    output, _ = self.__model(data, hiddens)
    prob = 0
    for i in range(max_len):
        total_prob = mx.nd.softmax(output[i][0])
        prob += total_prob[self.__vocab[i]].asscalar()
    return prob / max_len
Possible issues with the language model approach:
1. I am not splitting the sentences correctly (I am using jieba to split the Chinese sentences).
2. The vocab size is too small/big (tested 10000, 15000 and 30000).
3. The loss is too high (perplexity around 190) after 50 epochs.
4. The sentence length should be larger/smaller (tried 10, 20, 35).
5. The data I use does not meet my requirements (not every sentence is logical).
6. A language model is not appropriate for this task?
Any suggestions? Thanks.
Issue 6 ("a language model is not appropriate for this task?") is the main problem. Language models are built to make sense of input text with respect to language usage (syntax, semantics, etc.), not to draw logical conclusions. So you may not get good results even with a large amount of data or very deep models.
The problem you're trying to solve is extremely difficult. Something you may want to look at is Symbolic AI; there's a lot of ongoing research in this area.

Memory Error when training Multinomial Naive Bayes model

The training portion of my code can handle data on the order of 10^4 reviews, but given that my whole dataset consists of ~500,000 comments, I would like to train it with much more data. I seem to run out of memory when running the trainer with 100,000 reviews.
My get_features function seems to be the culprit.
data = get_data(limit=size)
data = clean_data(data)
all_words = [w.lower() for (comment, category) in data for w in comment]
word_features = []
for i in nltk.FreqDist(all_words).most_common(3000):
    word_features.append(i[0])
random.shuffle(data)

def get_features(comment):
    features = {}
    for word in word_features:
        features[word] = (word in set(comment))  # error here
    return features

# I can do it myself like this:
feature_set = [(get_features(comment), category) for
               (comment, category) in data]
# Or use nltk's LazyMap implementation, which arguably does the same thing:
# feature_set = nltk.classify.apply_features(get_features, data, labeled=True)
Running this for 100,000 reviews eats up all of my 32 GB of RAM and eventually crashes with a MemoryError at the features[word] = (word in set(comment)) line.
What can I do to alleviate this problem?
EDIT: I have significantly reduced the number of features: I now use only the top 3,000 most common words as features, which has significantly improved performance (for obvious reasons). I also corrected a small mistake pointed out by @Marat.
Disclaimer: this code has many potential flaws, so I expect a few iterations to get to the root cause.
Parameter mismatch:
# defined with one parameter
def get_features(comment):
    ...

# called with two
... get_features(comment, word_features), ...
Suboptimal word lookup:
# set(comment) executed on every iteration
for word in word_features:
    features[word] = (word in set(comment))

# can be transformed into something like:
word_set = set(comment)
for word in word_features:
    features[word] = word in word_set

# if typical comment length is < 30, list lookup is faster
for word in word_features:
    features[word] = word in comment
Suboptimal feature computing:
# it is cheaper to set a few positives than to check all word_features
# also MUCH more memory efficient
from collections import defaultdict
...

def get_features(comment):
    features = defaultdict(bool)
    for word in comment:
        features[word] = True
    return features
Suboptimal feature storage:
# a numpy array is much more efficient than a list of dicts
# .. and with pandas on top it's even nicer:
import pandas as pd
...

feature_set = pd.DataFrame(
    ({word: True for word in comment}
     for (comment, _) in data),
    columns=word_features
).fillna(False)
feature_set['category'] = [category for (_, category) in data]
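As a further illustration of the memory point, a hedged sketch that swaps in scikit-learn instead of nltk (so treat the surrounding pipeline as an assumption): a scipy sparse matrix stores only the words that are actually present, so 100,000 comments times 3,000 binary features stay small:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Comments are already tokenized lists, so bypass the built-in string analyzer
vectorizer = CountVectorizer(vocabulary=word_features, binary=True,
                             analyzer=lambda tokens: tokens)
X = vectorizer.fit_transform(comment for (comment, _) in data)   # sparse CSR matrix
y = [category for (_, category) in data]
clf = MultinomialNB().fit(X, y)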

Python nltk classify with large feature set (Replicate Go Et Al 2009)

I'm trying to replicate Go et al.'s Twitter sentiment analysis, which can be found here: http://help.sentiment140.com/for-students
The problem I'm having is that the number of features is 364,464. I'm currently using nltk and nltk.NaiveBayesClassifier to do this, where tweets holds a replication of the 1,600,000 tweets and their polarity:
for tweet in tweets:
    tweet[0] = extract_features(tweet[0], features)

classifier = nltk.NaiveBayesClassifier.train(training_set)
# print "NB Classified"
classifier.show_most_informative_features()
print(nltk.classify.util.accuracy(classifier, testdata))
Nothing takes very long apart from the extract_features function:
def extract_features(tweet, featureList):
    tweet_words = set(tweet)
    features = {}
    for word in featureList:
        features['contains(%s)' % word] = (word in tweet_words)
    return features
This is because, for each tweet, it creates a dictionary of size 364,464 to represent whether each feature is present or not.
Is there a way to make this faster or more efficient without reducing the number of features like in this paper?
Turns out there is a wonderful function called:
nltk.classify.util.apply_features()
which you can find here: http://www.nltk.org/api/nltk.classify.html
training_set = nltk.classify.apply_features(extract_features, tweets)
I had to change my extract_features function, but it now works with the huge sizes without memory issues.
Here's the lowdown from the function description:
The primary purpose of this function is to avoid the memory overhead involved in storing all the featuresets for every token in a corpus. Instead, these featuresets are constructed lazily, as-needed. The reduction in memory overhead can be especially significant when the underlying list of tokens is itself lazy (as is the case with many corpus readers).
and my changed function:
def extract_features(tweet):
    tweet_words = set(tweet)
    global featureList
    features = {}
    for word in featureList:
        features[word] = False
    for word in tweet_words:
        if word in featureList:
            features[word] = True
    return features
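One small follow-up tweak (not in the original answer, so treat it as a suggestion): membership tests against a 364,464-item list are linear scans, so converting featureList to a set once makes each lookup constant-time while producing the same features:

featureSet = set(featureList)

def extract_features(tweet):
    # Same keys and values as before, but lookups go through the set
    features = dict.fromkeys(featureList, False)
    for word in set(tweet):
        if word in featureSet:
            features[word] = True
    return features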
