I've used regression and classification in the past to train, test, and make predictions. Now, I am looking at some NLP sample code and everything is running fine, but at the end, I was hoping to make a prediction of a 'rating' score based on what is contained in a 'text' field. Maybe NLP can't do this, but it seems like it should be doable. Here is the code that I am testing.
from sklearn.feature_extraction.text import TfidfVectorizer
tf = TfidfVectorizer()
text_tf = tf.fit_transform(df['review_text'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(text_tf, df['reviews.rating'], test_size=0.3, random_state=123)
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
# Model Generation Using Multinomial Naive Bayes
clf = MultinomialNB().fit(X_train, y_train)
predicted = clf.predict(X_test)
print("MultinomialNB Accuracy:", metrics.accuracy_score(y_test, predicted))
# around 7% accurate...
Now, based on specific text, I want to predict the rating a customer will give.
y_predicted = clf.predict(text_tf["Didnt know how much i'd use a kindle so went for the lower end. im happy with it, even if its a little dark"])
Then I get this error: IndexError: Index dimension must be <= 2
The actual rating for this review is 4. I was expecting 'y_predicted' to show me a 4. Maybe there is some other library for this kind of thing; again, I think it should be doable. Thoughts? Suggestions?
I think the issue is what you're asking it to predict on.
text_tf is a matrix of size (n_samples, n_features). This is what you trained your model on; it doesn't contain any text anymore. What you want is to transform your test sample the same way you transformed your training samples, using the already-fitted TfidfVectorizer. Note that transform expects an iterable of documents, so wrap the single review in a list. Try the following:
y_predicted = clf.predict(tf.transform("Didnt know how much i'd use a kindle so went for the lower end. im happy with it, even if its a little dark"))
I am solving a classic regression problem using Python and the scikit-learn library. It's simple:
ml_model = GradientBoostingRegressor()
ml_params = {}
ml_model.fit(X_train, y_train)
where y_train is a one-dimensional array-like object.
Now I would like to extend the task so that I predict not a single target value but a set of them. The training set of samples X_train will remain the same.
An intuitive solution is to train several models, where X_train is the same for all of them but y_train is specific to each model. This would definitely work, but it seems inefficient to me.
When searching for alternatives, I came across the concept of multi-target regression. As I understand it, such functionality is not implemented in scikit-learn.
How can I solve a multi-target regression problem in Python in an efficient way? Thanks!
It depends on the problem you are solving, the training data you have, and the algorithm you choose. It's really hard to suggest anything without knowing all the details. You could try a random forest as a starting point: it's a powerful and robust algorithm, it is fairly resistant to overfitting when you don't have much data, and it can be used for multi-target regression. Here is a working example:
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
X, y = make_regression(n_targets=2)
print('Feature vector:', X.shape)
print('Target vector:', y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8)
print('Build and fit a regressor model...')
model = RandomForestRegressor()
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
print('Done. Score', score)
Output:
Feature vector: (100, 100)
Target vector: (100, 2)
Build and fit a regressor model...
Done. Score 0.4405974071273537
This algorithm natively supports multi-target regression. For estimators that don't, you can use the multi-output regressor wrapper, which simply fits one regressor per target.
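For example, a minimal sketch of the wrapper approach for an estimator that does not natively handle several targets (GradientBoostingRegressor is just an arbitrary choice here):

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

X, y = make_regression(n_targets=2)

# MultiOutputRegressor fits one independent GradientBoostingRegressor per target column of y
model = MultiOutputRegressor(GradientBoostingRegressor())
model.fit(X, y)
print(model.predict(X[:3]))  # one value per target for each of the three samples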
Another alternative to the random forest approach would be to use an adapted version of Support Vector Regression that handles multi-target regression problems. The advantage over fitting SVR with MultiOutputRegressor is that this method takes the underlying correlations between the multiple targets into account and hence should perform better.
A working implementation with a paper reference can be found here
I am following Andrew Ng's instructions for evaluating an algorithm in classification:
Compute the loss on the training set.
Compare it with the loss on the cross-validation set.
If both are close enough and small, go to the next step (otherwise, there is bias or variance, etc.).
Make a prediction on the test set, using the resulting thetas (i.e. weights) produced in the previous step, as a final confirmation.
I am trying to apply this using the scikit-learn library; however, I am really lost and am sure I am doing it totally wrong (I didn't find anything similar online):
from sklearn import model_selection, svm
from sklearn.metrics import make_scorer, log_loss
from sklearn import datasets
def main():
    iris = datasets.load_iris()
    kfold = model_selection.KFold(n_splits=10, random_state=42)
    model = svm.SVC(kernel='linear', C=1)
    results = model_selection.cross_val_score(estimator=model,
                                              X=iris.data,
                                              y=iris.target,
                                              cv=kfold,
                                              scoring=make_scorer(log_loss, greater_is_better=False))
    print(results)
Error
ValueError: y_true contains only one label (0). Please provide the true labels explicitly through the labels argument.
I am not even sure this is the right way to start. Any help is very much appreciated.
Given the clarifications you provide in the comments and that you are not particularly interested in the log loss itself, I think the most straightforward approach is to abandon log loss and go for the accuracy instead:
from sklearn import model_selection, svm
from sklearn import datasets
iris = datasets.load_iris()
kfold = model_selection.KFold(n_splits=10, random_state=42)
model= svm.SVC(kernel='linear', C=1)
results = model_selection.cross_val_score(estimator=model,
                                          X=iris.data,
                                          y=iris.target,
                                          cv=kfold,
                                          scoring="accuracy")  # change
As already mentioned in the comments, the inclusion of log loss in such situations still suffers from some unresolved issues in scikit-learn (see here and here).
For the purpose of estimating the generalization ability of your model, you will be fine with the accuracy metric.
This kind of error often appears when you do cross-validation.
Basically, your data is split into n_splits = 10 folds, and some classes are missing from some of these splits. For example, your 9th split may contain examples of only one class.
So when you evaluate your loss, the set of classes in your predictions and in the test fold does not match, and the loss cannot be computed; for instance, you cannot compute it when your model is trained to predict three classes but y_true in a fold contains only one of them.
What do you do in this case?
You have a few possibilities:
Shuffle your data: KFold(n_splits=10, random_state=42, shuffle=True)
Make n_splits bigger
Provide the list of labels explicitly to the loss function (see the sketch after this list), as follows:
args_loss = {"labels": [0, 1, 2]}
make_scorer(log_loss, greater_is_better=False, **args_loss)
Cherry-pick your splits so you make sure this doesn't happen. I don't think KFold allows this, but GridSearchCV does.
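A minimal sketch combining the first and third options above (shuffling plus an explicit labels list), written as an explicit loop instead of make_scorer to sidestep the scorer issues mentioned in the other answer; probability=True is my own addition so the SVC exposes predict_proba, and [0, 1, 2] assumes the three iris classes:

from sklearn import datasets, svm
from sklearn.model_selection import KFold
from sklearn.metrics import log_loss

iris = datasets.load_iris()
X, y = iris.data, iris.target

# shuffling makes it likely that every fold contains all three classes,
# and the explicit labels list stops log_loss from guessing the classes from y_true
kfold = KFold(n_splits=10, shuffle=True, random_state=42)
model = svm.SVC(kernel='linear', C=1, probability=True)

for train_idx, test_idx in kfold.split(X):
    model.fit(X[train_idx], y[train_idx])
    proba = model.predict_proba(X[test_idx])
    print(log_loss(y[test_idx], proba, labels=[0, 1, 2]))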
Just for future readers who are following Andrew's Course:
K-fold is not practically applicable for this purpose, because what we mainly want is to evaluate the thetas (i.e. weights) produced by a certain algorithm with some parameters on the cross-validation set, comparing the two cost functions J(train) vs J(CV) to determine whether the model suffers from bias, suffers from variance, or is O.K.
Nevertheless, k-fold is mainly for testing the predictions on the CV set using the weights produced by training the model on the training set.
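To make the J(train) vs J(CV) comparison concrete, here is a rough sketch under my own assumptions: a single held-out cross-validation split instead of k folds, and probability=True on the SVC so that log_loss can be computed from predicted probabilities.

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

iris = datasets.load_iris()

# hold out a cross-validation set; a test set would be split off the same way beforehand
X_train, X_cv, y_train, y_cv = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42, stratify=iris.target)

model = svm.SVC(kernel='linear', C=1, probability=True)
model.fit(X_train, y_train)

# compare J(train) vs J(CV): both small and close together suggests neither high bias nor high variance
j_train = log_loss(y_train, model.predict_proba(X_train), labels=[0, 1, 2])
j_cv = log_loss(y_cv, model.predict_proba(X_cv), labels=[0, 1, 2])
print('J(train):', j_train, 'J(CV):', j_cv)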
While reading documentation and procedures for machine-learning techniques, for both classification and regression, I came across a topic that is new to me. It seems that the recommended procedure is to split the data, before training and testing, into three different sets: training, validation, and testing. Since this procedure makes sense to me, I was wondering how I should proceed with it.
Let's say we split the data into these three sets, following an interesting approach like the one I found here:
Stratified Train/Validation/Test-split in scikit-learn
Taking this into account, let's say we want to build a classifier using LogisticRegression (any classifier, really). As far as I understand, the procedure should be something like this, right?
# train a logistic regression model on the training set
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
Now if we want to make predictions we could use:
# make class predictions for the testing set
y_pred_class = logreg.predict(X_test)
When one has to estimate the accuracy of the model, a common approach is:
# calculate accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
And here is where my question comes in. Should the validation set that was split off earlier be used for calculating accuracy, or for validating somehow using k-fold CV instead? For instance:
# Perform 10-fold cross validation
scores = cross_val_score(logreg, df, y, cv=10)
Any hint about the procedure with these three sets would be really appreciated. What I was thinking is that the validation set should be used together with the training set, but I don't really know in which way.
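For what it's worth, a common way to produce the three sets is two consecutive calls to train_test_split. This is only a sketch under my own assumptions, namely that X and y hold the full feature matrix and labels and that a roughly 60/20/20 split is wanted:

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

# first split off the test set, then carve the validation set out of the remainder
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=42, stratify=y_trainval)

logreg = LogisticRegression()
logreg.fit(X_train, y_train)

# tune and compare models on the validation set; report the final score once on the test set
print('validation accuracy:', metrics.accuracy_score(y_val, logreg.predict(X_val)))
print('test accuracy:', metrics.accuracy_score(y_test, logreg.predict(X_test)))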
I have data like that shown in the image; it is about 25,000 rows. The data contains details about 12 months for the past 4 years. I want to predict the client and the number of positions opened for a particular month and a particular job title.
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df_final['Clientname_numeric'] = le.fit_transform(df_final['ClientName'])
X = df_final[['MONTH','JobTitleID']]
y = df_final[['PositionsOpened','Clientname_numeric']]
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.05)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
clf = RandomForestClassifier()
clf.fit(x_train, y_train)
predictions = clf.predict(x_test)
predictions = predictions.astype(int)
accuracy = accuracy_score(y_test,predictions)
I am using the above code and getting this error:
ValueError: multiclass-multioutput is not supported
You could use the scikit-learn package and its random forest classifier. I should point out that I only have very superficial knowledge of machine learning, so this might be the wrong choice for your specific case. The RandomForestClassifier, however, allows you to predict multiple outputs at once.
In general, given your data, you would approach it like this (using scikit-learn):
Split the table into input columns and output columns. This can probably be done most easily using the pandas package. Then split those into training and test subsets; scikit-learn offers an off-the-shelf solution for this.
Create an instance of a classifier like RandomForestClassifier and train it using the input and output data from your training set (classifier.fit(inputs_train, outputs_train)).
Given the inputs of your test data, predict the outputs (classifier.predict(inputs_predict)). Decide whether you are satisfied with the predictive quality of your classifier.
For classifying multiple outputs, sklearn has this library; it expects a base estimator like random forests, gradient boosting, etc.
The library allows both multi-output regression and classification.
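One class along these lines is sklearn.multioutput.MultiOutputClassifier (I cannot be sure this is the exact wrapper linked above); a minimal sketch reusing x_train, y_train and x_test from the question:

from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# the wrapper fits one classifier per output column of y_train
multi_clf = MultiOutputClassifier(RandomForestClassifier())
multi_clf.fit(x_train, y_train)
predictions = multi_clf.predict(x_test)  # one predicted label per output per row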
Hope this helps!
I am working with the KNeighborsClassifier algorithm from the scikit-learn library in Python. I followed the basic instructions, e.g. split my data and labels into training and test sets, then trained my model on the training data. Now I am trying to compute the accuracy on the test data but get an error. Here is my code:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
data_train, data_test, label_train, label_test = train_test_split(df, labels,
                                                                   test_size=0.2,
                                                                   random_state=7)
mod = KNeighborsClassifier(n_neighbors=4)
mod.fit(data_train, label_train)
predictions = mod.predict(data_test)
print accuracy_score(label_train, predictions)
The error I get:
ValueError: Found arrays with inconsistent numbers of samples: [140 558]
140 and 558 are the sizes of the two parts produced by test_size=0.2 (my data set is 698 samples). I verified that the labels and the data set are both of size 698. However, I get this error, which basically comes from comparing the test data with the training data.
Does anyone know what is wrong here? What should I train my model against, and what should I use to compute the score?
Thanks!
You should calculate the accuracy_score with label_test, not label_train. You want to compare the actual labels of the test set, label_test, to the predictions from your model, predictions, for the test set.
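That is, the last line of your code becomes:
print(accuracy_score(label_test, predictions))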
Did you try to solve your issue via the following question?
sklearn: Found arrays with inconsistent numbers of samples when calling LinearRegression.fit()