Using sparse matrices/online learning in Naive Bayes (Python, scikit)

I'm trying to do Naive Bayes on a dataset that has over 6,000,000 entries, each entry with 150k features. I've tried to implement the code from the following link:
Implementing Bag-of-Words Naive-Bayes classifier in NLTK
The problem is (as I understand it) that when I try to run the train method with a dok_matrix as its parameter, it cannot find iterkeys (I've paired the rows with OrderedDict as labels):
Traceback (most recent call last):
File "skitest.py", line 96, in <module>
classif.train(add_label(matr, labels))
File "/usr/lib/pymodules/python2.6/nltk/classify/scikitlearn.py", line 92, in train
for f in fs.iterkeys():
File "/usr/lib/python2.6/dist-packages/scipy/sparse/csr.py", line 88, in __getattr__
return _cs_matrix.__getattr__(self, attr)
File "/usr/lib/python2.6/dist-packages/scipy/sparse/base.py", line 429, in __getattr__
raise AttributeError, attr + " not found"
AttributeError: iterkeys not found
My question is: is there a way to either avoid using a sparse matrix by teaching the classifier entry by entry (online), or is there a sparse matrix format I could use efficiently in this case instead of dok_matrix? Or am I missing something obvious?
Thanks for anyone's time. :)
EDIT, 6th sep:
Found the iterkeys, so at least the code runs. It's still too slow, as it has taken several hours with a dataset of size 32k and still hasn't finished. Here's what I've got at the moment:
matr = dok_matrix((6000000, 150000), dtype=float32)
labels = OrderedDict()
#collect the data into the matrix
pipeline = Pipeline([('nb', MultinomialNB())])
classif = SklearnClassifier(pipeline)
add_label = lambda lst, lab: [(lst.getrow(x).todok(), lab[x])
                              for x in xrange(lentweets - foldsize)]
classif.train(add_label(matr[:(lentweets-foldsize),0], labels))
readrow = [matr.getrow(x + foldsize).todok() for x in xrange(lentweets-foldsize)]
data = np.array(classif.batch_classify(readrow))
The problem might be that each row that is taken doesn't utilize the sparseness of the vector, but goes through each of the 150k entries. As a continuation of the issue, does anyone know how to use this Naive Bayes with sparse matrices, or is there any other way to optimize the above code?

Check out the document classification example in scikit-learn. The trick is to let the library handle the feature extraction for you. Skip the NLTK wrapper, as it's not intended for such large datasets. (*)
If you have the documents in text files, then you can just hand those text files to the TfidfVectorizer, which creates a sparse matrix from them:
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer(input='filename')
X = vect.fit_transform(list_of_filenames)
You now have a training set X in the CSR sparse matrix format that you can feed to a Naive Bayes classifier, if you also have a list of labels y (perhaps derived from the filenames, if you encoded the class in them):
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(X, y)
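To use the fitted model, transform new documents with the same vectorizer and call predict (the test file list here is a hypothetical placeholder):
X_test = vect.transform(list_of_test_filenames)
predicted = nb.predict(X_test)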
If it turns out this doesn't work because the set of documents is too large (unlikely since the TfidfVectorizer was optimized for just this number of documents), look at the out-of-core document classification example, which demonstrates the HashingVectorizer and the partial_fit API for minibatch learning. You'll need scikit-learn 0.14 for this to work.
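For reference, a minimal sketch of that out-of-core approach, assuming a hypothetical iter_minibatches() generator that yields (list-of-texts, list-of-labels) pairs:
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# The hashing trick needs no vocabulary in memory; non_negative=True is
# required because MultinomialNB cannot handle negative feature values.
vect = HashingVectorizer(non_negative=True, n_features=2 ** 18)
nb = MultinomialNB()
all_classes = np.array([0, 1])  # partial_fit must know all classes up front

for texts, labels in iter_minibatches():  # hypothetical batch generator
    X_batch = vect.transform(texts)       # stateless, so no fitting needed
    nb.partial_fit(X_batch, labels, classes=all_classes)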
(*) I know, because I wrote that wrapper. Like the rest of NLTK, it's intended for educational purposes. I also worked on performance improvements in scikit-learn, and some of the code I'm advertising is my own.

Related

FastText in Gensim

I am using Gensim to load my fastText .vec file as follows:
m = load_word2vec_format(filename, binary=False)
However, I am confused about whether I need to load the .bin file to run commands like m.most_similar("dog"), m.wv.syn0, m.wv.vocab.keys(), etc. If so, how do I do that?
Or is the .bin file not needed to perform this cosine similarity matching?
Please help me!
The following can be used:
from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('path/to/file.vec')
model.most_similar("summer")
model.similarity("summer", "winter")
There are many options for using the model now.
The gensim library has evolved, so some code fragments have been deprecated. This is an actual working solution:
import gensim.models.wrappers.fasttext
model = gensim.models.wrappers.fasttext.FastTextKeyedVectors.load_word2vec_format(Source + '.vec', binary=False, encoding='utf8')
word_vectors = model.wv
# -- this saves space, if you plan to use only, but not to train, the model:
del model
# -- do your work:
word_vectors.most_similar("etc")
If you want to be able to retrain the gensim model later with additional data, you should save the whole model like this: model.save("fasttext.model").
If you save just the word vectors with model.wv.save_word2vec_format(Path("vectors.txt")), you will still be able to use any of the functions that vectors provide - like similarity - but you will not be able to retrain the model with more data.
Note that if you are saving the whole model, you should pass a file name as a string instead of wrapping it in get_tmpfile, as suggested in the documentation here.
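For illustration, a minimal sketch of the two options, assuming model is a trained gensim FastText model (file names are placeholders):
from gensim.models import FastText
from gensim.models import KeyedVectors

# Option 1: save the whole model -- it can be retrained with more data later.
model.save("fasttext.model")
model = FastText.load("fasttext.model")

# Option 2: save only the vectors -- smaller, but training cannot continue.
model.wv.save_word2vec_format("vectors.txt")
word_vectors = KeyedVectors.load_word2vec_format("vectors.txt")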
Maybe I am late in answering this, but you can find your answer in the documentation: https://github.com/facebookresearch/fastText/blob/master/README.md#word-representation-learning
Example use cases
This library has two main use cases: word representation learning and text classification. These were described in the two papers 1 and 2.
Word representation learning
In order to learn word vectors, as described in 1, do:
$ ./fasttext skipgram -input data.txt -output model
where data.txt is a training file containing UTF-8 encoded text. By default the word vectors will take into account character n-grams from 3 to 6 characters. At the end of optimization the program will save two files: model.bin and model.vec. model.vec is a text file containing the word vectors, one per line. model.bin is a binary file containing the parameters of the model along with the dictionary and all hyperparameters. The binary file can be used later to compute word vectors or to restart the optimization.

Using sklearn on audio files of different lengths

I'm new to ML and wanted to try a project myself to learn, so please excuse any blatant mistakes. I'm trying to classify a few audio files (ringtones and such) using audiolab and sklearn in Python.
Here's the code:
from scikits.audiolab.pysndfile.matapi import oggread, wavread
import numpy as np
from sklearn import svm
files = ["Basic_Bell.ogg", "Beep-Beep.ogg", "Beep_Once.ogg", "Calling_You.ogg", "Time_Up.ogg"]
labels = [2,1,1,2,2]
train = []
for f in files:
    data, fs, enc = oggread("Tones/" + f)
    train.append(data)
clf = svm.SVC()
clf.fit(train, labels)
I'm getting an error message:
Traceback (most recent call last):
File "/home/athul/Projects/Audio Analysis/read.py", line 18, in <module>
clf.fit(train, labels)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 150, in fit
X = check_array(X, accept_sparse='csr', dtype=np.float64, order='C')
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 373, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence.
In my (limited) understanding this seems to be an issue because the train data has different sizes and hence numpy can't convert it into a matrix, so how do I fix this? Can I pad it? If so, what's the size I should use? Or is this a mistake on my part?
In my (limited) understanding this seems to be an issue because the train data has different sizes and hence numpy can't convert it into a matrix,
True, that's exactly the issue.
so how do I fix this? Can I pad it? If so, what's the size I should use? Or is this a mistake on my part?
There are two options I can think of: one is to use a network that supports different-sized inputs (Option 1); the other is to pad with zeros (Option 2). One could also change the length of the audio by other means (preserving the pitch or not), but I have not found that applied in any paper, so I will not post it as an option.
Option 1: Use a network that can deal with different sizes
Normally, one uses a Recurrent Neural Network (RNN), as it can cope with audio of different lengths.
Option 2: zero-pad / truncate
I honestly couldn't find a standard here. You could choose a fixed duration and then:
For shorter audios: Add silence at the end and/or start of the audio
For longer audios: Cut them
from pydub import AudioSegment

audio = AudioSegment.silent(duration=duration_ms)      # the fixed length you want
audio = audio.overlay(AudioSegment.from_wav(path))     # shorter clips get trailing silence
# for longer clips, slicing truncates instead: AudioSegment.from_wav(path)[:duration_ms]
raw = audio.split_to_mono()[0].get_array_of_samples()  # keep only the left channel
An example of this kind of application is the UrbanSound dataset. It's a dataset of audio clips of different lengths, and therefore any paper that uses it (with a non-RNN network) is forced to use this or another approach that converts the sounds to same-length vectors/matrices. I recommend the papers Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification and Environmental Sound Classification with Convolutional Neural Networks. The latter has its code open-sourced, and you can see that it uses (roughly) the method I explained in the function _load_audio in this notebook.
A bit off topic, but for this kind of application it is highly recommended to use a mel-spectrogram.
The standard (as far as I know) is to use a mel-spectrogram for these applications. You could use the Python library Essentia and follow this example, or use librosa like this:
import librosa

y, sr = librosa.load('your-wav-file.wav')
mel_spect = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=1024)
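Tying this back to the question's SVC example, a minimal zero-pad/truncate sketch (to_fixed_length is a hypothetical helper; it assumes each element of train is a 1-D numpy array, as oggread returns for mono files):
import numpy as np
from sklearn import svm

def to_fixed_length(clips, length):
    # Zero-pad short clips and truncate long ones to `length` samples.
    out = np.zeros((len(clips), length))
    for i, clip in enumerate(clips):
        n = min(len(clip), length)
        out[i, :n] = clip[:n]
    return out

X = to_fixed_length(train, length=44100)  # e.g. one second at 44.1 kHz
clf = svm.SVC()
clf.fit(X, labels)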

Training classifier with large data

I was trying two-class text classification. Usually I create pickle files of the trained model and load those pickles in the training phase to avoid retraining.
When I had 12,000 reviews plus more than 50,000 tweets for each class, the trained model's size grew to 1.4 GB.
Storing such a large model as a pickle and loading it back is really not feasible or advisable.
Is there any better alternative to this scenario?
Here is sample code. I tried multiple ways of pickling; here I have used the dill package:
def train(self):
    global pos, neg, totals
    retrain = False
    # Load counts if they already exist.
    if not retrain and os.path.isfile(CDATA_FILE):
        # pos, neg, totals = cPickle.load(open(CDATA_FILE))
        pos, neg, totals = dill.load(open(CDATA_FILE, 'rb'))
        return
    # Count words from the negative ("unsuspected") files.
    for file in os.listdir("./unsuspected/"):
        for word in set(self.negate_sequence(open("./unsuspected/" + file).read())):
            neg[word] += 1
            pos['not_' + word] += 1
    # Count words from the positive ("suspected") files.
    for file in os.listdir("./suspected/"):
        for word in set(self.negate_sequence(open("./suspected/" + file).read())):
            pos[word] += 1
            neg['not_' + word] += 1
    self.prune_features()
    totals[0] = sum(pos.values())
    totals[1] = sum(neg.values())
    countdata = (pos, neg, totals)
    dill.dump(countdata, open(CDATA_FILE, 'wb'))
UPDATE: The reason behind the large pickle is that the classification data is very large, and I have considered 1-4 grams for feature selection. The classification dataset itself is around 300 MB, so the multigram approach to feature selection creates a large training model.
Pickle is a very heavy format. It stores all the details of the objects.
It would be much better to store your data in an efficient format like HDF5.
If you are not familiar with HDF5, you can look into storing your data in simple flat text files. You can use CSV or JSON, depending on your data structure. You'll find that either is more efficient than pickle.
You can look at gzip to create and load compressed archives.
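A minimal sketch of that idea, assuming pos, neg, and totals are plain count dictionaries/lists as in the question's code (the file name is a placeholder):
import gzip
import json

# Plain count dictionaries serialize compactly as JSON, and gzip
# shrinks the repetitive keys considerably.
with gzip.open('counts.json.gz', 'wb') as f:
    f.write(json.dumps({'pos': pos, 'neg': neg, 'totals': totals}).encode('utf-8'))

with gzip.open('counts.json.gz', 'rb') as f:
    counts = json.loads(f.read().decode('utf-8'))
pos, neg, totals = counts['pos'], counts['neg'], counts['totals']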
The problem and solution are explained here. In short, the problem is that when doing featurization, e.g. using CountVectorizer, although you might ask for a small number of features, e.g. max_features=1000, the transformer still keeps a copy of all possible features for debugging purposes under the hood.
For instance, the CountVectorizer has the following attribute:
stop_words_ : set
Terms that were ignored because they either:
- occurred in too many documents (max_df)
- occurred in too few documents (min_df)
- were cut off by feature selection (max_features).
This is only available if no vocabulary was given.
and this causes the model size to become too large. To solve the issue, you can set stop_words_ to None before pickling your model (taken from the example at the link above; please check it for details):
import pickle
model_name = 'clickbait-model-sm.pkl'
cfr_pipeline.named_steps.vectorizer.stop_words_ = None
pickle.dump(cfr_pipeline, open(model_name, 'wb'), protocol=2)

Meanshift in scikit learn (python) doesn't understand datatype

I have a dataset which has 7265 samples and 132 features.
I want to use the meanshift algorithm from scikit learn but I ran into this error:
Traceback (most recent call last):
File "C:\Users\OJ\Dropbox\Dt\Code\visual\facetest\facetracker_video.py", line 130, in <module>
labels, centers = getClusters(data,clusters)
File "C:\Users\OJ\Dropbox\Dt\Code\visual\facetest\facetracker_video.py", line 34, in getClusters
ms.fit(np.array(dataarray))
File "C:\python2.7\lib\site-packages\sklearn\cluster\mean_shift_.py", line 280, in fit
cluster_all=self.cluster_all)
File "C:\python2.7\lib\site-packages\sklearn\cluster\mean_shift_.py", line 137, in mean_shift
nbrs = NearestNeighbors(radius=bandwidth).fit(sorted_centers)
File "C:\python2.7\lib\site-packages\sklearn\neighbors\base.py", line 642, in fit
return self._fit(X)
File "C:\python2.7\lib\site-packages\sklearn\neighbors\base.py", line 180, in _fit
raise ValueError("data type not understood")
ValueError: data type not understood
My code:
dataarray = np.array(data)
bandwidth = estimate_bandwidth(dataarray, quantile=0.2, n_samples=len(dataarray))
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
ms.fit(dataarray)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
If I check the datatype of the data variable I see:
print isinstance(dataarray, np.ndarray)
>>> True
The bandwidth is 0.925538333061 and the dataarray.dtype is float64
I'm using scikit learn 0.14.1
I can cluster with other algorithms in scikit-learn (I tried KMeans and DBSCAN). What am I doing wrong?
EDIT:
The data can be found here:
(pickle format) : http://ojtwist.be/datatocluster.p
and : http://ojtwist.be/datatocluster.npz
That's a bug in the scikit-learn project. It is documented here.
There is a float -> int casting during the fitting process that can crash in some cases (by making the seed points be placed at the corner of the bins instead of in the center). There is some code in the link to fix the problem.
If you don't want to dig into the scikit-learn code (and want to keep your code compatible with other machines), I suggest you normalize your data before passing it to MeanShift.
Try this:
>>> from sklearn import preprocessing
>>> data2 = preprocessing.scale(dataarray)
Then use data2 in your code.
It worked for me.
If you don't want to do either solution, it is a great opportunity to contribute to the project by making a pull request with the solution :)
Edit: You probably want to retain the information needed to "descale" the results of MeanShift. So use a StandardScaler object instead of using a function to scale.
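A minimal sketch of that variant (assuming dataarray as in the question):
from sklearn.cluster import MeanShift
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
data2 = scaler.fit_transform(dataarray)

ms = MeanShift(bin_seeding=True)
ms.fit(data2)

# Map the cluster centers back to the original units.
centers = scaler.inverse_transform(ms.cluster_centers_)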
Good luck!

Python NLTK Maximum Entropy Classifier Error

I'm currently using NLTK's Naive Bayes classifier; however, I also wanted to try out the MaxEnt classifier. It seems from the documentation that it should take the same format for the feature set as Naive Bayes, but for some reason I get this error when I try it:
File "/usr/lib/python2.7/site-packages/nltk/classify/maxent.py", line 323, in train
gaussian_prior_sigma, **cutoffs)
File "/usr/lib/python2.7/site-packages/nltk/classify/maxent.py", line 1453, in train_maxent_classifier_with_scipy
model.fit(algorithm=algorithm)
File "/usr/lib64/python2.7/site-packages/scipy/maxentropy/maxentropy.py", line 1026, in fit
return model.fit(self, self.K, algorithm)
File "/usr/lib64/python2.7/site-packages/scipy/maxentropy/maxentropy.py", line 226, in fit
callback=callback)
File "/usr/lib64/python2.7/site-packages/scipy/optimize/optimize.py", line 636, in fmin_cg
gfk = myfprime(x0)
File "/usr/lib64/python2.7/site-packages/scipy/optimize/optimize.py", line 176, in function_wrapper
return function(x, *args)
File "/usr/lib64/python2.7/site-packages/scipy/maxentropy/maxentropy.py", line 420, in grad
G = self.expectations() - self.K
ValueError: shape mismatch: objects cannot be broadcast to a single shape
I'm not sure what this means, but I am using the exact same input as when I run Naive Bayes, and that works. (Training data, represented as a list of pairs, the first member of which is a featureset and the second of which is a classification label.) Any ideas?
Thanks!
I also encountered this problem with NLTK. While I was unable to resolve it satisfactorily (i.e. get Maxent working using scipy), I was able to train a maxent classifier in NLTK when I used a different algorithm. Try training with
me_classifier = nltk.MaxentClassifier.train(trainset, algorithm="iis")
or one of the other acceptable values for algorithm, like "gis" or "megam".
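For context, a minimal end-to-end sketch with a made-up toy feature set (the feature names and labels here are purely illustrative):
import nltk

# Toy training data: (featureset, label) pairs -- the same format
# nltk.NaiveBayesClassifier.train accepts.
trainset = [
    ({'contains_free': True,  'contains_meeting': False}, 'spam'),
    ({'contains_free': False, 'contains_meeting': True},  'ham'),
    ({'contains_free': True,  'contains_meeting': True},  'spam'),
    ({'contains_free': False, 'contains_meeting': False}, 'ham'),
]

me_classifier = nltk.MaxentClassifier.train(trainset, algorithm="iis", trace=0)
print(me_classifier.classify({'contains_free': True, 'contains_meeting': False}))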
This issue is also dependent on what version of scipy you are using.
NLTK makes use of scipy.maxentropy which was deprecated in scipy 0.10 and removed in 0.11, see the docs for it: http://docs.scipy.org/doc/scipy-0.10.0/reference/maxentropy.html#
I did create an issue for that on github: https://github.com/nltk/nltk/issues/307
You must install NLTK; then you can classify.
Use the code below to classify using maximum entropy in Python:
me_classifier = nltk.MaxentClassifier.train(trainset, algorithm="gis")
print(me_classifier.classify(testing))
