Training time of gensim word2vec - python

I'm training word2vec from scratch on a 34 GB pre-processed MS MARCO corpus (the raw corpus is 22 GB; the pre-processed version is SentencePiece-tokenized, hence larger). I'm training my word2vec model using the following code:
from gensim.test.utils import common_texts, get_tmpfile
from gensim.models import Word2Vec

class Corpus():
    """Iterate over sentences from the corpus."""

    def __init__(self):
        self.files = [
            "sp_cor1.txt",
            "sp_cor2.txt",
            "sp_cor3.txt",
            "sp_cor4.txt",
            "sp_cor5.txt",
            "sp_cor6.txt",
            "sp_cor7.txt",
            "sp_cor8.txt",
        ]

    def __iter__(self):
        for fname in self.files:
            for line in open(fname):
                words = line.split()
                yield words

sentences = Corpus()
model = Word2Vec(sentences, size=300, window=5, min_count=1, workers=8, sg=1, hs=1, negative=10)
model.save("word2vec.model")
My model has now been running for more than 30 hours. This seems doubtful, since on my i5 laptop all 8 cores are at 100% at every moment. Moreover, my program appears to have read more than 100 GB from disk so far. I don't know if anything is wrong here, but the main reason for my doubt about the training is those 100 GB of disk reads: the whole corpus is 34 GB, so why has my code read 100 GB from disk? Does anyone know how long it should take to train word2vec on 34 GB of text, with 8 i5 cores running in parallel? Thank you. For more information, I'm also attaching a screenshot of my process from the system monitor.
I want to know why my model has read 112 GB from disk, when my corpus is only 34 GB in total. Will my training ever finish? I'm also a bit worried about the health of my laptop, since it has been running constantly at peak capacity for the last 30 hours and is really hot now.
Should I add any additional parameters to Word2Vec for quicker training without much performance loss?

Completing a model requires one pass over all the data to discover the vocabulary, then multiple passes, with a default of 5, to perform vector training. So, you should expect to see about 6x your data size in disk-reads, just from the model training.
(If your machine winds up needing to use virtual-memory swapping during the process, there could be more disk activity – but you absolutely do not want that to happen, as the random-access pattern of word2vec training is nearly a worst-case for virtual memory usage, which will slow training immensely.)
If you'd like to understand the code's progress, and be able to estimate its completion time, you should enable Python logging to at least the INFO level. Various steps of the process will report interim results (such as the discovered and surviving vocabulary size) and estimated progress. You can often tell if something is going wrong before the end of a run by studying the logging outputs for sensible values, and once the 'training' phase has begun the completion time will be a simple projection from the training completed so far.
I believe most laptops should throttle their own CPU if it's becoming so hot as to become unsafe or risk extreme wear on the CPU/components, but whether yours does, I can't say, and definitely make sure its fans work & vents are unobstructed.
I'd suggest you choose some small random subset of your data – maybe 1GB? – to be able to run all your steps to completion, becoming familiar with the Word2Vec logging output, resource usage, and results, and tinkering with settings to observe changes, before trying to run on your full dataset, which might require days of training time.
Some of your shown parameters aren't optimal for speedy training. In particular:
min_count=1 retains every word seen in the corpus-survey, including those with only a single occurrence. This results in a much, much larger model - potentially risking a model that doesn't fit into RAM, forcing disastrous swapping. But also, words with just a few usage examples can't possibly get good word vectors, as the process requires seeing many subtly-varied alternate uses. Still, via typical 'Zipfian' word-frequencies, the number of such words with just a few uses may be very large in total, so retaining all those words takes a lot of training time/effort, and even serves a bit like 'noise' making the training of other words, with plenty of usage examples, less effective. So for model size, training speed, and quality of the remaining vectors, a larger min_count is desirable. The default of min_count=5 is better for most projects than min_count=1 – this is a parameter that should only really be changed if you're sure you know the effects. And, when you have plentiful data – as with your 34GB – the min_count can go much higher to keep the model size manageable.
hs=1 should only be enabled if you want to use the 'hierarchical-softmax' training mode instead of 'negative-sampling' – and in that case, negative=0 should also be set to disable 'negative-sampling'. You probably don't want to use hierarchical-softmax: it's not the default for a reason, and it doesn't scale as well to larger datasets. But here you've enabled it in addition to negative-sampling, likely more-than-doubling the required training time.
Did you choose negative=10 because you had problems with the default negative=5? Because this non-default choice, again, would slow training noticeably. (But also, again, a non-default choice here would be more common with smaller datasets, while larger datasets like yours are more likely to experiment with a smaller negative value.)
The theme of the above observations is: "only change the defaults if you've already got something working, and you have a good theory (or way of testing) how that change might help".
With a large-enough dataset, there's another default parameter to consider changing to speed up training (& often improve word-vector quality, as well): sample, which controls how-aggressively highly-frequent words (with many redundant usage-examples) may be downsampled (randomly skipped).
The default value, sample=0.001 (aka 1e-03), is very conservative. A smaller value, such as sample=1e-05, will discard many-more of the most-frequent-words' redundant usage examples, speeding overall training considerably. (And, for a corpus of your size, you could eventually experiment with even smaller, more-aggressive values.)
Finally, to the extent all your data (for either a full run, or a subset run) can be in an already-space-delimited text file, you can use the corpus_file alternate method of specifying the corpus. Then, the Word2Vec class will use an optimized multithreaded IO approach to assign sections of the file to alternate worker threads – which, if you weren't previously seeing full saturation of all threads/CPU-cores, could increase your throughput. (I'd put this off until after trying other things, then check if your best setup still leaves some of your 8 threads often idle.)

processing text with nlp.pipe taking hours

I have 300,000 news articles in my dataset and I'm using en_core_web_sm to do POS tagging, parsing, and NER extraction. However, this is taking hours and hours and never seems to finish.
The code is working, but very slowly. When I take a sample of 650 articles it takes 37 seconds, and 6,500 articles take 6 minutes. However, I really need my full dataset of 300,000 articles to be completed in a reasonable amount of time...
texts = df["content"]
for doc in nlp.pipe(texts, n_process=2, batch_size=100, disable=["senter", "attribute_ruler", "lemmatizer"]):
    df["spacy_sm"] = texts
Is there a way to speed this up significantly, or am I missing something here?
On CPU you can usually use a much larger batch size and you can use a larger number of processes depending on your CPU. The default batch size for sm/md/lg models is currently 256 but you can increase it further as long as you aren't running out of RAM. A batch size of 1000 or higher would be totally reasonable on CPU for many tasks, but it really depends on the pipeline components and text lengths.
As long as your text lengths don't vary too much, you can figure out the approximate RAM required for a given batch size with one process and then you can estimate how many processes you can fit in the available RAM for that batch size.
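A hedged sketch of both suggestions. A blank pipeline is used here only so the snippet runs without a downloaded model (in practice you would spacy.load your en_core_web_sm pipeline), and the batch_size/n_process values are starting points to tune, not prescriptions:

```python
import spacy

# in practice: nlp = spacy.load("en_core_web_sm", disable=["lemmatizer"])
nlp = spacy.blank("en")

texts = ["Sample article one.", "Sample article two."]  # e.g. df["content"].tolist()

# on CPU, batch_size in the ~1000 range is often reasonable; raise n_process
# to 2-4+ on a multi-core machine, remembering each process loads the model
docs = list(nlp.pipe(texts, n_process=1, batch_size=1000))
```

Measuring RAM with one process at your chosen batch size, as suggested above, tells you how many processes will fit.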

Is it feasible to run a Support Vector Machine Kernel on a device with <= 1 MB RAM and <= 10 MB ROM?

Some preliminary testing shows that a project I'm working on could potentially benefit from the use of a support vector machine to solve a tricky problem. The concern I have is that there will be major memory constraints. Prototyping and testing are being done in Python with scikit-learn. The final version will be custom-written in C. The model would be pre-trained, and only the decision function would be stored on the final product. There would be <= 10 training features and <= 5000 training data points. I've been reading mixed things regarding SVM memory, and I know the default sklearn kernel cache is 200 MB (much larger than what I have available). Is this feasible? I know there are multiple different types of SVM kernel, and that kernels can also be custom-written. What kernel types could this potentially work with, if any?
If you're that strapped for space, you'll probably want to skip scikit and simply implement the math yourself. That way, you can cycle through the data in structures of your own choosing. Memory requirements depend on the class of SVM you're using; a two-class linear SVM can be done with a single pass through the data, considering only one observation at a time as you accumulate sum-of-products, so your command logic would take far more space than the data requirements.
If you need to keep the entire data set in memory for multiple passes, that's "only" 5000*10*8 bytes for floats, or about 400 KB of your 1 MB, which might leave enough room to do your manipulations. Also consider a slow training process, re-reading the data on each pass, as this reduces the 400 KB to a triviality at the cost of wall-clock time.
All of this is under your control if you look up a usable SVM implementation and alter the I/O portions as needed.
Does that help?
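One practical middle path, as a sketch: train a linear SVM in scikit-learn during prototyping, then export only the weights and the intercept for the C side, since a linear decision function is just a dot product plus a bias. The toy data here stands in for your real <=5000 x <=10 features:

```python
import numpy as np
from sklearn.svm import LinearSVC

# toy stand-in for the question's <=5000 points x <=10 features
rng = np.random.RandomState(0)
X = rng.randn(2000, 10)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# the entire deployed "model" is 10 weights + 1 bias: trivially under 1 MB
weights = clf.coef_.ravel()
bias = float(clf.intercept_[0])

def decide(x):
    # mirror of the decision function to hand-port to C:
    # one dot product, one add, one comparison
    return 1 if float(np.dot(weights, x)) + bias > 0 else 0
```

A kernel SVM, by contrast, must store its support vectors at inference time, which is where the memory pressure comes from.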

XGBoost - how should I set the nthread parameter?

I am trying to optimize my python training script (I need to run in multiple times, so it makes sense to try to speed up). I have a dataset composed of 9 months of data. The validation setup is a kind of "temporal validation" in which I leave one month out, I train on the remaining set of months (with different sampling methods) and I make a prediction over the "test month".
months = [...]  # set of months
for test_month in months:
    sample_list = generate_different_samples(months - {test_month})
    for sample in sample_list:
        xgb.train(sample)
        xgb.predict(test_month)
# evaluation afterwards
In practice, I have almost 100 different training samples for each month. I am running my code on a machine with 16 cores and 64 GB of RAM. Memory is not a problem (the dataset contains millions of instances but does not fill the memory). I am currently parallelizing at the "test_month" level, creating a ProcessPool that runs all 9 months together; however, I am struggling to set the nthread parameter of xgboost. At the moment it is 2, so that each thread runs on a single core, but I am reading different opinions online (https://github.com/dmlc/xgboost/issues/3042). Should I increase this number? I know the question may be a little vague, but I was looking for a systematic way to select the best value based on the dataset structure.
It will not come as a surprise, but there is no single golden-goose strategy for this. At least I have never bumped into one so far. If you establish one, please share it here – I will be interested to learn.
There is advice in lightgbm, a competing GBM tool, where they say:
for the best speed, set this to the number of real CPU cores, not the number of threads (most CPUs use hyper-threading to generate 2 threads per CPU core)
I'm not aware of a similar recommendation from the xgboost authors, but to a zeroth-order approximation I do not see a reason why the two implementations would scale differently.
The most in-depth benchmarking of GBM tools that I have seen is this one by Laurae. It shows, among other things, performance scaling as a function of the number of threads. Beware that it is really advanced, and conclusions from there might not directly apply unless one implements the same preparatory steps at the OS level.
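The core-budgeting arithmetic implied by that advice can be sketched with the question's numbers (16 cores, 9 concurrent month-level processes); the rule of thumb, not a guarantee, is that processes × nthread should not exceed the physical core count:

```python
# the question's setup: 16 physical cores, a ProcessPool of 9 month-level jobs
PHYSICAL_CORES = 16
concurrent_jobs = 9

# give each xgboost process an equal share of the cores
nthread = max(1, PHYSICAL_CORES // concurrent_jobs)  # 16 // 9 -> 1

# the current nthread=2 yields 9 * 2 = 18 threads on 16 cores, a mild
# oversubscription; with fewer concurrent jobs, nthread can grow accordingly
params = {"nthread": nthread, "objective": "binary:logistic"}
# booster = xgb.train(params, dtrain, num_boost_round=100)  # in each worker
```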

How to make word2vec model's loading time and memory use more efficient?

I want to use word2vec in a web server (production) in two different variants, where I fetch two sentences from the web and compare them in real time. For now, I am testing it on a local machine which has 16 GB of RAM.
Scenario:
w2v = load w2v model
if condition 1 is true:
    if normalized:
        reverse the normalization via w2v.init_sims(replace=False)  # not sure if it will work
    loop through some items:
        calculate their vectors using w2v
else if condition 2 is true:
    if not normalized:
        w2v.init_sims(replace=True)
    loop through some items:
        calculate their vectors using w2v
I have already read the solution about reducing the vocabulary size to a small size but I would like to use all the vocabulary.
Are there new workarounds on how to handle this? Is there a way to initially load a small portion of the vocabulary for first 1-2 minutes and in parallel keep loading the whole vocabulary?
As a one-time delay that you should be able to schedule to happen before any service-requests, I would recommend against worrying too much about the first-time load() time. (It's going to inherently take a lot of time to load a lot of data from disk to RAM – but once there, if it's being kept around and shared between processes well, the cost is not spent again for an arbitrarily long service-uptime.)
It doesn't make much sense to "load a small portion of the vocabulary for first 1-2 minutes and in parallel keep loading the whole vocabulary" – as soon as any similarity calculation is needed, the whole set of vectors needs to be accessed for any top-N results. (So the "half-loaded" state isn't very useful.)
Note that if you do init_sims(replace=True), the model's original raw vector magnitudes are clobbered with the new unit-normed (all-same-magnitude) vectors. So looking at your pseudocode, the only difference between the two paths is the explicit init_sims(replace=True). But if you're truly keeping the same shared model in memory between requests, as soon as condition 2 occurs, the model is normalized, and thereafter calls under condition 1 are also occurring with normalized vectors. And further, additional calls under condition 2 just redundantly (and expensively) re-normalize the vectors in-place. So if normalized-comparisons are your only focus, best to do one in-place init_sims(replace=True) at service startup - not at the mercy of order-of-requests.
If you've saved the model using gensim's native save() (rather than save_word2vec_format()), and as uncompressed files, there's the option to 'memory-map' the files on a future re-load. This means rather than immediately copying the full vector array into RAM, the file-on-disk is simply marked as providing the addressing-space. There are two potential benefits: (1) if you only ever access some limited ranges of the array, only those are loaded, on demand; (2) many separate processes all using the same mapped files will automatically reuse any shared ranges loaded into RAM, rather than potentially duplicating the same data.
(1) isn't much of an advantage as soon as you need a full sweep over the whole vocabulary – because all the vectors are brought into RAM then, at the moment of access (which adds more service lag than if you'd just pre-loaded them). But (2) is still an advantage in multi-process webserver scenarios. There's a lot more detail on how you might use memory-mapped word2vec models efficiently in a prior answer of mine, at How to speed up Gensim Word2vec model load time?

Memory Error Using scikit-learn on Predicting but Not Fitting

I'm classifying text data using TfidfVectorizer from scikit-learn.
I transform my dataset and it turns into a 75k by 172k sparse matrix with 865k stored elements. (I used ngrams ranges of 1,3)
Fitting my data takes a long time, but it does succeed.
However, when I try to predict the test set I get memory issues. Why is this? I would think that the most memory intensive part would be fitting not predicting?
I've tried doing a few things to circumvent this but have had no luck. First I tried dumping the data locally with joblib.dump, quitting python and restarting. This unfortunately didn't work.
Then I tried switching over to a HashingVectorizer but ironically, the hashing vectorizer causes memory issues on the same data set. I was under the impression a Hashing Vectorizer would be more memory efficient?
from sklearn.feature_extraction.text import HashingVectorizer, TfidfVectorizer

hashing = HashingVectorizer(analyzer='word', ngram_range=(1, 3))
tfidf = TfidfVectorizer(analyzer='word', ngram_range=(1, 3))

xhash = hashing.fit_transform(x)
xtfidf = tfidf.fit_transform(x)

pac.fit(xhash, y)   # causes memory error
pac.fit(xtfidf, y)  # works fine
I am using scikit-learn 0.15 (bleeding edge) on Windows 8.
I have 8 GB of RAM and a hard drive with 100 GB of free space. I set my virtual RAM to 50 GB for the purposes of this project, and can set it even higher if needed, but I'm trying to understand the problem before just brute-force trying solutions as I have been for the past couple of days... I've tried a few different classifiers: mostly PassiveAggressiveClassifier, Perceptron, MultinomialNB, and LinearSVC.
I should also note that at one point I was using a 350k by 472k sparse matrix with 12M stored elements. I was still able to fit the data despite it taking some time. However had memory errors when predicting.
The scikit-learn library is strongly optimized (it builds on NumPy and SciPy). TfidfVectorizer stores sparse matrices, which are relatively small compared with standard dense matrices.
If you think the issue is memory, you can set the max_features attribute when you create the TfidfVectorizer. It may be useful for checking your assumptions
(for more detail about TfidfVectorizer, see the documentation).
Also, I can recommend reducing the training set and checking again; that can also be useful for checking your assumptions.
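A minimal sketch of capping the vocabulary with max_features (the two-document corpus and the 50,000 cap are purely illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]  # stand-in corpus

# keep only the N most frequent terms instead of every 1-3-gram, which
# shrinks both the fitted vocabulary and the transformed matrix width
tfidf = TfidfVectorizer(analyzer="word", ngram_range=(1, 3), max_features=50000)
X = tfidf.fit_transform(docs)
```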
