Add progress bar (verbose) when creating gensim dictionary - python

I want to create a gensim dictionary from lines of a dataframe. Each entry in df.preprocessed_text is a list of words.
    from gensim.models.phrases import Phrases, Phraser
    from gensim.corpora.dictionary import Dictionary

    def create_dict(df, bigram=True, min_occ_token=3):
        token_ = df.preprocessed_text.values
        if not bigram:
            return Dictionary(token_)
        bigram = Phrases(token_,
                         min_count=3,
                         threshold=1,
                         delimiter=b' ')
        bigram_phraser = Phraser(bigram)
        bigram_token = []
        for sent in token_:
            bigram_token.append(bigram_phraser[sent])
        dictionary = Dictionary(bigram_token)
        dictionary.filter_extremes(no_above=0.8, no_below=min_occ_token)
        dictionary.compactify()
        return dictionary
I couldn't find a progress bar option for it, and callbacks don't seem to work for it either. Since my corpus is huge, I'd really appreciate a way to show the progress. Is there any?

I'd recommend against changing prune_at for monitoring purposes, as it changes the behavior around which bigrams/words are remembered, possibly discarding many more than is strictly required for capping memory usage.
Wrapping tqdm around the iterables used (including the token_ use in the Phrases constructor and the bigram_token use in the Dictionary constructor) should work.
Alternatively, enabling INFO or greater logging should display logging that, while not as pretty/accurate as a progress-bar, will give some indication of progress.
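For example, a minimal setup along these lines is usually enough to see gensim's per-chunk progress messages (the format string is just a common convention, not required):

    import logging
    logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',
                        level=logging.INFO)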
Further, if, as shown in the code, the use of bigram_token is only to support the next Dictionary, it need not be created as a full in-memory list. You should be able to just use layered iterators to transform the text and tally the Dictionary item-by-item, e.g.:

    # ...
    dictionary = Dictionary(tqdm(bigram_phraser[token_]))
    # ...
(Also, if you're only using the Phraser once, you may not be getting any benefit from creating it at all - it's an optional memory optimization for when you want to keep applying the same phrase-creation operation without the full overhead of the original Phrases survey object. But if the Phrases is still in-scope, and all of it will be discarded immediately after this step, it might be just as fast to use the Phrases object directly without ever taking a detour to create the Phraser - so give that a try.)
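Putting those suggestions together, a sketch of what the function might look like (assuming tqdm is installed; whether skipping the Phraser step is actually faster depends on your corpus, so it's worth benchmarking both ways):

    from tqdm import tqdm
    from gensim.models.phrases import Phrases
    from gensim.corpora.dictionary import Dictionary

    def create_dict(df, bigram=True, min_occ_token=3):
        token_ = df.preprocessed_text.values
        if not bigram:
            return Dictionary(tqdm(token_, desc='dictionary'))
        bigram = Phrases(tqdm(token_, desc='phrase survey'),
                         min_count=3, threshold=1, delimiter=b' ')
        # stream sentence-by-sentence through the Phrases object itself,
        # never materializing bigram_token as a full list
        dictionary = Dictionary(tqdm((bigram[sent] for sent in token_),
                                     total=len(token_), desc='dictionary'))
        dictionary.filter_extremes(no_above=0.8, no_below=min_occ_token)
        dictionary.compactify()
        return dictionary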

Related

Is there a way to stop creation of vocabulary in gensim.WikiCorpus when it reaches 2000000 tokens?

I downloaded the latest wiki dump multi-stream bz2. I call the WikiCorpus class from gensim corpora, and after 90000 documents the vocabulary reaches its maximum size (2000000 tokens).
I got this in the terminal:

    keeping 2000000 tokens which were in no less than 0 and no more than 580000 (=100.0%) documents
    resulting dictionary: Dictionary(2000000 unique tokens: ['ability', 'able', 'abolish', 'abolition', 'about']...)
    adding document #580000 to Dictionary(2000000 unique tokens: ['ability', 'able', 'abolish', 'abolition', 'about']...)
The WikiCorpus class continues to work until the end of the documents in my bz2.
Is there a way to stop it? Or to split the bz2 file into a sample?
Thanks for the help!
There's no specific parameter to cap the number of tokens. But when you use WikiCorpus.get_texts(), you don't have to read them all: you can stop at any time.
If, as suggested by another question of yours, you plan to use the article texts for Gensim Word2Vec (or a similar model), you don't need the constructor to do its own expensive full-scan vocabulary-discovery. If you supply any dummy object (such as an empty dict) as the optional dictionary parameter, it'll skip this unnecessary step. For example:

    wiki_corpus = WikiCorpus(filename, dictionary={})
If you also want to use some truncated version of the full set of articles, I'd suggest manually iterating over just a subset of the articles. For example if the subset will easily fit as a list in RAM, say 50000 articles, that's as simple as:
    import itertools
    subset_corpus = list(itertools.islice(wiki_corpus, 50000))
If you want to create a subset larger than RAM, iterate over the set number of articles, writing their tokenized texts to a scratch text file, one per line. Then use that file as your later input. (By spending the WikiCorpus extraction/tokenization effort only once, then reusing the file from disk, this can sometimes offer a performance boost even if you don't need to do it.)
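A rough sketch of that approach (the file name and the 500000-article cutoff are just placeholders; depending on your gensim version the tokens from get_texts() may be bytes and need decoding first):

    import itertools

    with open('wiki_subset.txt', 'w', encoding='utf-8') as out:
        for tokens in itertools.islice(wiki_corpus.get_texts(), 500000):
            out.write(' '.join(tokens) + '\n')   # one tokenized article per line

    # later, stream the scratch file back from disk, e.g. for Word2Vec
    from gensim.models.word2vec import LineSentence
    sentences = LineSentence('wiki_subset.txt')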

Normalize all words in a document

I need to normalize all words in a huge corpus. Any ideas how to optimize this code? It's too slow...
    texts = [[list(morph.normalize(word.upper()))[0] for word in document.split()]
             for document in documents]
documents is a list of strings, where each string is single book's text.
morph.normalize works only on upper case, so I apply .upper() to all words. Moreover, it returns a set with one element, which is the normalized word (a string).
The first and obvious thing I'd do would be to cache the normalized words in a local dict, as to avoid calling morph.normalize() more than once for a given word.
A second optimization is to alias methods to local variables - this avoids going through the whole attribute lookup + function descriptor invocation + method object instantiation on each turn of the loop.
Then, since it's a "huge" corpus, you probably want to avoid creating a full list of lists at once, which might eat all your RAM, make your computer start to swap (which is guaranteed to make it snail slow) and finally crash with a memory error. I don't know what you're supposed to do with this list of lists nor how huge each document is, but as an example I iterate over a per-document result and write it to stdout - what should really be done depends on the context and concrete use case.
NB: untested code, obviously, but at least this should get you started.
    def iterdocs(documents, morph):
        # keep track of already normalized words
        # beware this dict might get too big if you
        # have lots of different words. Depending on
        # your corpus, you may want to either use an LRU
        # cache instead and/or use a per-document cache
        # and/or any other appropriate caching strategy...
        cache = {}
        # aliasing methods as local variables
        # is faster for tight loops
        normalize = morph.normalize

        def norm(word):
            upw = word.upper()
            if upw in cache:
                return cache[upw]
            nw = cache[upw] = normalize(upw).pop()
            return nw

        for doc in documents:
            words = [norm(word) for word in doc.split() if word]
            yield words

    for text in iterdocs(documents, morph):
        # if you need all the texts for further use,
        # at least write them to disk or some other persistence
        # mechanism and re-read them when needed.
        # Here I just write them to sys.stdout as an example
        print(text)
Also, I don't know where you get your documents from but if they are text files, you may want to avoid loading them all in memory. Just read them one by one, and if they are themselves huge don't even read a whole file at once (you can iterate over a file line by line - the most obvious choice for text).
Finally, once you've made sure your code doesn't eat up too much memory for a single document, the next obvious optimization is parallelization - run a process per available core and split the corpus between processes (each writing its results to a given place). Then you just have to sum up the results if you need them all at once...
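A rough sketch of that with multiprocessing (assuming morph can be shared with, or re-created in, each worker process; the chunking scheme and output paths are just placeholders - in practice you'd probably split file paths rather than in-memory documents):

    import multiprocessing as mp

    def process_chunk(args):
        chunk_id, docs = args
        out_path = 'normalized_%d.txt' % chunk_id   # placeholder output location
        with open(out_path, 'w', encoding='utf-8') as out:
            for words in iterdocs(docs, morph):
                out.write(' '.join(words) + '\n')
        return out_path

    if __name__ == '__main__':
        n = mp.cpu_count()
        chunks = [(i, documents[i::n]) for i in range(n)]   # simple round-robin split
        with mp.Pool(n) as pool:
            results = pool.map(process_chunk, chunks)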
Oh, and yes: if that's still not enough, you may want to distribute the work with some map-reduce framework - your problem looks like a perfect fit for map-reduce.

Size issues with Python shelve module

I want to store a few dictionaries using the shelve module, however, I am running into a problem with the size. I use Python 3.5.2 and the latest shelve module.
I have a list of words and I want to create a map from the bigrams (character level) to the words. The structure will look something like this:
    'aa': ['aardvark', 'and', ...],
    'ab': ['absolute', 'dab', ...],
    ...
I read in a large file consisting of approximately 1.3 million words. So the dictionary gets pretty large. This is the code:
    self.bicharacters  # part of the class

    def _create_bicharacters(self):
        '''
        Creates a bicharacter index for calculating the Jaccard coefficient.
        '''
        with open('wordlist.txt', encoding='ISO-8859-1') as f:
            for line in f:
                word = line.split('\t')[2]
                for i in range(len(word) - 1):
                    bicharacter = (word[i] + word[i+1])
                    if bicharacter in self.bicharacters:
                        get = self.bicharacters[bicharacter]
                        get.append(word)
                        self.bicharacters[bicharacter] = get
                    else:
                        self.bicharacters[bicharacter] = [word]
When I ran this code using a regular Python dictionary, I did not run into issues, but I can't spare those kinds of memory resources due to the rest of the program also having quite a large memory footprint.
So I tried using the shelve module. However, when I run the code above using shelve the program stops after a while due to no more memory on disk, the shelve db that was created was around 120gb, and it had still not read even half the 1.3M word list from the file. What am I doing wrong here?
The problem here is not so much the number of keys, but that each key references a list of words.
While in memory as one (huge) dictionary, this isn't that big a problem as the words are simply shared between the lists; each list is simply a sequence of references to other objects and here many of those objects are the same, as only one string per word needs to be referenced.
In shelve, however, each value is pickled and stored separately, meaning that a concrete copy of the words in a list will have to be stored for each value. Since your setup ends up adding a given word to a large number of lists, this multiplies your data needs rather drastically.
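To make the duplication concrete, a tiny illustration (the file name is arbitrary): each assignment below pickles and stores its own full copy of the same list, so the on-disk size grows with every key even though there is only one list object in memory:

    import shelve

    words = ['aardvark'] * 1000          # one list object in memory

    with shelve.open('demo_shelf') as db:
        for i in range(100):
            db[str(i)] = words           # 100 independent pickled copies on disk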
I'd switch to using a SQL database here. Python comes bundled with sqlite3. If you create one table for individual words, a second table for each possible bigram, and a third that simply links between the two (a many-to-many mapping, linking bigram row id to word row id), this can be done very efficiently. You can then do very efficient lookups, as SQLite is quite adept at managing memory and indices for you.
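A minimal sketch of such a schema (table and column names are just illustrative): each word string is stored once, and the link table only holds pairs of integer ids:

    import sqlite3

    conn = sqlite3.connect('bicharacters.db')
    conn.executescript('''
        CREATE TABLE IF NOT EXISTS words (
            id INTEGER PRIMARY KEY,
            word TEXT UNIQUE);
        CREATE TABLE IF NOT EXISTS bigrams (
            id INTEGER PRIMARY KEY,
            bigram TEXT UNIQUE);
        CREATE TABLE IF NOT EXISTS bigram_words (
            bigram_id INTEGER REFERENCES bigrams(id),
            word_id INTEGER REFERENCES words(id),
            PRIMARY KEY (bigram_id, word_id));
    ''')

    # look up every word containing the bigram 'aa'
    rows = conn.execute('''
        SELECT w.word FROM words w
        JOIN bigram_words bw ON bw.word_id = w.id
        JOIN bigrams b ON b.id = bw.bigram_id
        WHERE b.bigram = ?''', ('aa',)).fetchall()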

Efficient and clean way of growing a tokenizer function in Python

I have a library that does some "translation" and uses the awesome tokenize.generate_tokens() function to do so.
It is pretty fast and I have things working correctly. But when translating, I've found that the function keeps growing with new tokens that I want to translate, and the if and elif conditions start to pop up all over. I also keep a few variables outside the generator that keep track of the "last keyword seen" and similar.
A good example of this is the actual Python documentation one seen here (at the bottom): http://docs.python.org/library/tokenize.html#tokenize.untokenize
Every time I add a new thing I need to translate, this function grows by a couple of conditionals. I don't think that a function with so many conditionals is the right way to go, or the proper way to pave the ground for growth.
Furthermore, I feel that the tokenizer consumes a lot of irrelevant lines that do not contain any of the keywords I am translating.
So 2 questions:
How can I avoid adding more and more conditional statements, so that this translation function stays easy/clean to keep growing (without a performance hit)?
How can I make it efficient for all the irrelevant lines I am not interested in?
You could use a dict dispatcher. For example, the code you linked to might look like this:
    def process_number(result, toknum, tokval):
        if '.' in tokval:
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            # integer literals pass through unchanged
            result.append((toknum, tokval))

    def process_default(result, toknum, tokval):
        result.append((toknum, tokval))

    dispatcher = {NUMBER: process_number, }

    result = []
    for toknum, tokval, _, _, _ in g:   # g is your token stream from generate_tokens()
        dispatcher.get(toknum, process_default)(result, toknum, tokval)
Instead of adding more if-blocks, you add key-value pairs to dispatcher.
This may be more efficient than evaluating a long list of if-else conditionals, since dict lookup is O(1), but it does require a function call. You'll have to benchmark to see how this compares to many if-else blocks.
I think its main advantage is that it keeps code organized in small(er), comprehensible units.
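To make that concrete, here is a short self-contained sketch of how the dispatcher above might be driven end-to-end with tokenize.generate_tokens() (the translate helper and the sample source line are just for illustration):

    from io import StringIO
    from tokenize import generate_tokens, untokenize, NUMBER, STRING, NAME, OP

    def translate(source):
        result = []
        for toknum, tokval, _, _, _ in generate_tokens(StringIO(source).readline):
            dispatcher.get(toknum, process_default)(result, toknum, tokval)
        return untokenize(result)

    print(translate('3.1415 * radius ** 2'))
    # prints something like: Decimal ('3.1415')* radius ** 2
    # (untokenize's spacing may differ from the original source)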

Pattern for associating pyparsing results with a linked-list of nodes

I have defined a pyparsing rule to parse this text into a syntax-tree...
TEXT COMMANDS:
add Iteration name = "Cisco 10M/half"
append Observation name = "packet loss 1"
assign Observation results_text = 0.0
assign Observation results_bool = True
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append Observation name = "packet loss 2"
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
SYNTAX TREE:
['add', 'Iteration', ['name', 'Cisco 10M/half']]
['append', 'Observation', ['name', 'packet loss 1']]
['assign', 'Observation', ['results_text', '0.0']]
['assign', 'Observation', ['results_bool', 'True']]
['append', 'DataPoint']
['assign', 'DataPoint', ['metric', 'txpackets']]
['assign', 'DataPoint', ['units', 'packets']]
...
I'm trying to associate all the nested key-value pairs in the syntax-tree above into a linked-list of objects... the hierarchy looks something like this (each word is a namedtuple... children in the hierarchy are on the parents' list of children):
    Log: [
        Iteration: [
            Observation: [DataPoint, DataPoint],
            Observation: [DataPoint, DataPoint]
        ]
    ]
The goal of all this is to build a generic test data-acquisition platform to drive the flow of tests against network gear, and record the results. After the data is in this format, the same data structure will be used to build a test report. To answer the question in the comments below, I chose a linked list because it seemed like the easiest way to sequentially dequeue the information when writing the report. However, I would rather not assign Iteration or Observation sequence numbers before finishing the tests... in case we find problems and insert more Observations in the course of conducting the test. My theory is that the position of each element in the list is sufficient, but I'm willing to change that if it's part of the problem.
The problem is that I'm getting lost trying to assign Key-Values to objects in the linked list after it's built. For instance, after I insert an Observation namedtuple into the first Iteration, I have trouble reliably handling the update of assign Observation results_bool = True in the example above.
Is there a generalized design pattern to handle this situation? I have googled this for a while, but I can't seem to make the link between parsing the text (which I can do) and managing the data-hierarchy (the main problem). Hyperlinks or small demo code are fine... I just need pointers to get on the right track.
I am not aware of an actual design pattern for what you're looking for, but I have a great passion for the issue at hand. I work heavily with network devices and parsing and organizing the data is a large ongoing challenge for me.
It's clear that the problem is not parsing the data, but what you do with it afterwards. This is where you need to think about the meaning you are attaching to the data you have parsed. The nested-list method might work well for you if the objects containing the lists are also meaningful.
Namedtuples are great for quick-and-dirty class-ish behavior, but they fall flat when you need them to do anything outside of basic attribute access, especially considering that as tuples they are immutable. It seems to me that you'll want to replace certain namedtuple objects with full-blown classes. This way you can highly customize the behavior and methods available.
For example, you know that an Iteration will always contain 1 or more Observation objects which will then contain 1 or more DataPoint objects. If you can accurately describe the relationships, this sets you on the path to handling them.
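For example, a minimal sketch of what replacing the namedtuples with small classes might look like (the names and the handle helper are illustrative, not a complete design): each node keeps a mutable children list plus attributes, and a "current node of each type" map lets a later assign statement find the object it should update:

    class Node:
        def __init__(self, **attrs):
            self.children = []
            self.attrs = dict(attrs)

        def append(self, child):
            self.children.append(child)
            return child        # so the caller keeps a handle on the "current" node

        def assign(self, key, value):
            self.attrs[key] = value

    class Log(Node): pass
    class Iteration(Node): pass
    class Observation(Node): pass
    class DataPoint(Node): pass

    # remember the most recently appended node of each type, so
    # "assign Observation results_bool = True" knows which Observation to update
    current = {'Log': Log()}

    def handle(verb, typename, kv=None):
        cls = {'Iteration': Iteration, 'Observation': Observation,
               'DataPoint': DataPoint}[typename]
        parent = {'Iteration': 'Log', 'Observation': 'Iteration',
                  'DataPoint': 'Observation'}[typename]
        if verb in ('add', 'append'):
            current[typename] = current[parent].append(cls())
        if kv:
            current[typename].assign(*kv)

    # usage, given the syntax tree shown above:
    #     for statement in syntax_tree:
    #         handle(*statement)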
I wound up using textfsm, which allows me to keep state between different lines while parsing the configuration file.
