How do I apply scikit-learn to a numpy array with 4 columns, each representing a different attribute?
Basically, I want to teach it to recognize a healthy patient from these 4 characteristics and then see if it can identify an abnormal one.
Thanks in advance!
A pipeline usually has the following steps:
Define a classifier/regressor
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100.)
Fit the data
clf.fit(X_train, y_train)
Here X_train will be your four-column feature array and y_train will be the labels indicating whether the patient is healthy.
Predict on new data
y_pred = clf.predict(X_test)
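Putting the steps together, a minimal sketch for your case (the random data is a hypothetical stand-in for your 4-column array and health labels):
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split

# Hypothetical: 100 patients, 4 attributes each, binary healthy/abnormal labels.
data = np.random.rand(100, 4)
labels = np.random.randint(0, 2, size=100)

X_train, X_test, y_train, y_test = train_test_split(data, labels, random_state=0)

clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)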
This tutorial is a great starting point for getting a basic idea of the pipeline.
Look into the pandas package, which allows you to import CSV files into a DataFrame. pandas data structures work directly with scikit-learn.
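For example (a minimal sketch; the file name and column names are hypothetical):
import pandas as pd

# Hypothetical CSV with four attribute columns and a health label.
df = pd.read_csv("patients.csv")

X = df[["attr1", "attr2", "attr3", "attr4"]].to_numpy()
y = df["healthy"].to_numpy()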
Please help, I am unable to split my data into training and test sets. Should we do it after importing the data or before?
You can use train_test_split to split your dataset. Do it after importing the data, since the split operates on the already-loaded arrays.
Try this:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
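As a fuller sketch, import first and then split (the file and column names are hypothetical):
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")       # 1. import the data
X = df.drop(columns=["target"])    # hypothetical target column
y = df["target"]

# 2. then split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)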
I have a question regarding the model.fit method from the scikit-learn library, and about overfitting.
Does the generic sklearn method model.fit(X, y) return the score after fitting the model to the specified training data?
Also, is it overfitting when performance on the test data degrades as more training data is used to learn the model?
model.fit(X, y) doesn't give you the score; it returns the fitted estimator itself, which holds all the learned parameters. You can get the score by calling model.score(X, y).
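As a quick illustration (a minimal sketch on the built-in iris dataset, which stands in for your own data):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

model = LogisticRegression(max_iter=1000)
fitted = model.fit(X, y)   # fit() returns the estimator itself
print(fitted is model)     # True
print(model.score(X, y))   # mean accuracy on the given data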
Overfitting, in simple words, is when your model has so much variance that it fits the training data (including its noise) closely but fails to generalize to unseen data. There are ways to reduce overfitting, like feature engineering, normalization, regularization, and ensemble methods.
I'm trying to build a classification model, and my target is not binary. The correlations of my features against my target are all weak (mostly around 0.1). I preprocessed my data and applied all the algorithms I chose to it: SVM, kNN, naive Bayes, logistic regression, decision tree, gradient boosting, and random forest. I evaluated all of the models with sklearn's metrics.accuracy_score just to see how well they perform on my data, but all of them scored around 0.1 to 0.2. The target is the productline column.
My questions:
How could this happen?
How to tackle this issue?
Is there any other algorithm that could achieve a better score?
What's the accuracy if you use a dummy classifier? The accuracy of the models you have tried should be at least equal to that of the dummy classifier.
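For instance, a minimal baseline sketch (the make_classification data is a hypothetical stand-in for your own features and productline target):
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split

# Hypothetical multi-class data; replace with your own X and y.
X, y = make_classification(n_samples=500, n_classes=5, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent")  # always predicts the majority class
baseline.fit(X_train, y_train)
print(baseline.score(X_test, y_test))  # the accuracy any real model should beat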
"How could this happen?" If there's no relationship between the features and the target variable, the model isn't going to return good results.
I'm not sure about the details of your dataset, but you can try to 1) get more data, 2) get more features, 3) do some feature engineering, or 4) clean your dataset if you haven't; there might be outliers or wrong inputs affecting your results.
I am working on a text classification problem. I have a huge amount of data, and when I try to fit the data into the machine learning model it causes a memory error. Is there any way I can fit the data in parts to avoid the memory error?
Additional information
I am using the LinearSVC model.
I have training data of 1.1 million rows.
I have vectorized the text data using TF-IDF.
The shape of the vectorized data, which has to be fitted into the model, is (1121063, 4235687).
Or is there any other way out of this problem?
Unfortunately, I don't have any reproducible code for the same.
Thanks in advance.
The simple answer is not to use what I assume is the scikit-learn implementation of LinearSVC, and instead use an algorithm/implementation that allows training in batches. The most common of these are neural networks, but several other algorithms exist. In scikit-learn, look for classifiers with the partial_fit method, which will allow you to fit your classifier in batches. See e.g. this list.
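A minimal sketch of that approach (the random data is a hypothetical stand-in for your TF-IDF matrix and labels):
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical stand-ins for your vectorized text data and labels.
X = np.random.rand(1000, 50)
y = np.random.randint(0, 2, size=1000)

clf = SGDClassifier()
classes = np.unique(y)  # partial_fit needs the full set of labels up front

batch_size = 100
for start in range(0, X.shape[0], batch_size):
    clf.partial_fit(X[start:start + batch_size], y[start:start + batch_size], classes=classes)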
You could also try what's suggested in the docstring of sklearn.svm.SVC (the second part; the first is using LinearSVC, which you did):
"For large datasets consider using sklearn.svm.LinearSVC or sklearn.linear_model.SGDClassifier instead, possibly after a sklearn.kernel_approximation.Nystroem transformer."
If you check SGDClassifier, you can set the parameter warm_start=True so that when you iterate through your dataset it won't lose its state:
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(warm_start=True)
for X_batch, y_batch in batches:  # "batches" is a placeholder for chunks of your data
    clf.fit(X_batch, y_batch)
Additionally, you could reduce the dimensionality of your dataset by removing some words from your TF-IDF model. Check the max_df and min_df parameters; they remove words whose document frequency is higher or lower than the given threshold, which can be a proportion or an absolute count.
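For example (a minimal sketch with a toy corpus and hypothetical thresholds; tune them to your data):
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; replace with your 1.1 million documents.
docs = [
    "this is a document",
    "this is another document",
    "completely different text",
]

# max_df=0.95: drop words appearing in more than 95% of documents.
# min_df=2: drop words appearing in fewer than 2 documents.
vectorizer = TfidfVectorizer(max_df=0.95, min_df=2)
X = vectorizer.fit_transform(docs)
print(X.shape)  # fewer columns than an unfiltered vocabulary would give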
I have a Python machine learning script which learns from firewall log data. I am using a Random Forest classifier in it. If I run it as a service, is it possible to train the model again each day, whenever the previous day's firewall logs are received?
The estimators in sklearn that support what you want to do are those with a partial_fit method, and unfortunately for you Random Forest isn't one of them, so you will have to refit on the full dataset every time you add data.
If you insist on sticking with Random Forest, one option would be to reduce the number of features (based on the classifier you currently have) to increase the speed of refitting your classifier.
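A minimal sketch of that daily refit loop (the CSV path and the label column are hypothetical placeholders for however you actually store your logs):
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical: all accumulated logs, including yesterday's, live in one CSV.
logs = pd.read_csv("all_firewall_logs.csv")

X = logs.drop(columns=["label"])  # hypothetical label column
y = logs["label"]

# Random Forest has no partial_fit, so refit from scratch on the full dataset each day.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X, y)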