from sklearn.linear_model import LinearRegression
X=data['reck']
y=data['price']
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=0)
linreg = LinearRegression().fit(X, y)
I wrote code for a linear regression problem, but when I try to see the result I get this error:
ValueError: Expected 2D array, got 1D array instead:
array=[122360. 122365. 49800. ... 2696. 2357. nan].
Reshape your data either using
array.reshape(-1, 1) if your data has a single feature or
array.reshape(1, -1) if it contains a single sample.
My model is just 1D. It tries to find a relation between the reception kilometers of cars and the price of the services they have received.
chasis number reck price
0 999.JACJ5AT.SPC00 122360.0 330000
1 999.JACJ5AT.SPC00 122365.0 385000
2 999.JACS5AT.SPC00 49800.0 753500
3 999.JACS5AT.SPC00 49805.0 1732500
4 999.JACS5AT.SPC00 49908.0 1375000
The problem is the way you are declaring X and y.
If you print the shape of X or y:
X.shape
it will come out as something like this:
(49,)
which says 49 rows, but the column dimension is missing.
To avoid this, you can edit your code like this:
X=data[['reck']]
y=data[['price']]
When you print the shape now:
X.shape
the value will come out like this:
(49,1)
When you pass these values to your model, the model will not throw any error.
PS: I am also a new contributor; I tried to explain this as well as I understand it, but there may be a more rigorous explanation.
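For reference, here is a minimal sketch of the shape difference, using a small made-up DataFrame since the full original data is not shown:
import pandas as pd
from sklearn.linear_model import LinearRegression

# small made-up frame standing in for the original data
data = pd.DataFrame({'reck': [122360.0, 122365.0, 49800.0],
                     'price': [330000, 385000, 753500]})

print(data['reck'].shape)    # (3,)   -> 1D Series, triggers the ValueError
print(data[['reck']].shape)  # (3, 1) -> 2D one-column DataFrame, accepted by fit

X = data[['reck']]  # double brackets keep the column dimension
y = data['price']   # the target can stay 1D
LinearRegression().fit(X, y)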
What about reshaping the array to 2D? (Note that the error message is verbose enough to propose it as well!)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = data['reck'].values.reshape(-1, 1)  # reshape the underlying NumPy array to (n_samples, 1)
y = data['price']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
linreg = LinearRegression().fit(X, y)
df = pd.read_csv('../input/etu-ai-club-competition-2/train.csv')
df.shape
(750000,77)
X = df.drop(columns = 'Target')
y = df['Target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
model = MLPRegressor(hidden_layer_sizes = 60, activation = "relu", solver = "adam")
model
model.fit(X_train, y_train)
pr = model.predict(X_test)
pr.shape
(187500,)
model.score(y_test, pr)
ValueError: Expected 2D array, got 1D array instead:
array=[-120.79511811 -394.11307519 -449.59524477 ... -432.46130084 -492.81440014
-753.02016315].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
Just started getting into ML. I don't really understand why I need a 2D array to get a score, or how to convert mine into one. I did try to reshape it as the error suggests, but when I do that I get the messages ValueError: X has 1 features, but MLPRegressor is expecting 76 features as input. and ValueError: X has 187500 features, but MLPRegressor is expecting 76 features as input. for reshaping into (-1, 1) and (1, -1) respectively.
The correct way to call the score method would be:
model.score(X_test, y_test)
Internally, it first computes the predictions and then passes the predictions to a scoring function.
If you want to pass the predictions directly, you need to use one of the scoring functions in the metrics package, as explained here:
https://scikit-learn.org/0.15/modules/model_evaluation.html
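For instance, a minimal sketch of both options (assuming the regression setting from the question, where score reports R^2):
from sklearn.metrics import r2_score

# Option 1: let the model predict and score in one call
score = model.score(X_test, y_test)

# Option 2: predict first, then score the predictions with a metrics function
pr = model.predict(X_test)
score = r2_score(y_test, pr)  # true values first, predictions second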
Note: you might also want to have a look at the example code in the MLPRegressor documentation:
https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html
While practicing a simple linear regression model I got this error.
I think there is something wrong with my data set.
(Screenshots of the data set, the independent variable X, the dependent variable Y, X_train, and Y_train were attached here and are omitted.)
This is the error body:
ValueError: Expected 2D array, got 1D array instead:
array=[ 7. 8.4 10.1 6.5 6.9 7.9 5.8 7.4 9.3 10.3 7.3 8.1].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
And this is my code:
import pandas as pd
import matplotlib as pt
#import data set
dataset = pd.read_csv('Sample-data-sets-for-linear-regression1.csv')
x = dataset.iloc[:, 1].values
y = dataset.iloc[:, 2].values
#Spliting the dataset into Training set and Test Set
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size= 0.2, random_state=0)
#linnear Regression
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train,y_train)
y_pred = regressor.predict(x_test)
Thank you
You need to give both the fit and predict methods 2D arrays. Your x_train and x_test are currently only 1-dimensional. What is suggested by the console should work:
x_train= x_train.reshape(-1, 1)
x_test = x_test.reshape(-1, 1)
This uses NumPy's reshape to transform your array. For example, x = [1, 2, 3] would be transformed to a matrix x' = [[1], [2], [3]] (-1 gives the row dimension of the matrix, inferred from the length of the array and the remaining dimensions; 1 is the column dimension, giving us an n x 1 matrix where n is the input length).
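A tiny standalone illustration of that reshape (NumPy only, independent of the data in the question):
import numpy as np

x = np.array([7.0, 8.4, 10.1])  # shape (3,)  -> 1D, rejected by fit/predict
x2 = x.reshape(-1, 1)           # shape (3, 1) -> one column, i.e. one feature
print(x.shape, x2.shape)        # (3,) (3, 1)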
Questions about reshape have been answered in the past; this one, for example, should explain what reshape(-1, 1) fully means: What does -1 mean in numpy reshape? (Some of the other answers below explain this very well too.)
A lot of times when doing linear regression problems, people like to envision a simple graph with a single input feature on the x-axis and the output on the y-axis.
On the input side, we have an X of X = [1, 2, 3, 4, 5].
However, many regression problems have multidimensional inputs. Consider the prediction of housing prices. It's not one attribute that determines housing prices; it's multiple features (e.g. number of rooms, location, etc.).
If you look at the documentation for fit, you will see that it expects X of shape (n_samples, n_features): the rows are the samples and the columns are the features.
However, consider what happens when we have one feature as our input. Then we need an n x 1 dimensional input, where n is the number of samples and the 1 column represents our only feature.
Why does the array.reshape(-1, 1) suggestion work? -1 tells NumPy to infer the number of rows from the array's length and the single column you asked for, which turns the flat array into exactly that n x 1 shape.
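A brief sketch with made-up numbers of what the (n_samples, n_features) layout looks like once there is more than one feature, for contrast with the single-feature n x 1 case:
import numpy as np

# 4 samples, 2 features (say, rooms and area): shape (4, 2)
X = np.array([[3, 70.0],
              [2, 55.0],
              [4, 95.0],
              [1, 30.0]])
y = np.array([300, 220, 410, 150])  # one target per sample: shape (4,)
print(X.shape, y.shape)             # (4, 2) (4,)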
If you look at the documentation of scikit-learn's LinearRegression:
fit(X, y, sample_weight=None)
X : numpy array or sparse matrix of shape [n_samples,n_features]
predict(X)
X : {array-like, sparse matrix}, shape = (n_samples, n_features)
As you can see, X has 2 dimensions, whereas your x_train and x_test clearly have one.
As suggested, add:
x_train = x_train.reshape(-1, 1)
x_test = x_test.reshape(-1, 1)
before fitting and predicting with the model.
Use
y_pred = regressor.predict(x_test.reshape(-1, 1))
I would suggest reshaping x at the beginning, before you split into the train and test datasets:
import pandas as pd
import matplotlib as pt
#import data set
dataset = pd.read_csv('Sample-data-sets-for-linear-regression1.csv')
x = dataset.iloc[:, 1].values
y = dataset.iloc[:, 2].values
# Here is the trick
x = x.reshape(-1,1)
#Splitting the dataset into Training set and Test Set
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
#Linear Regression
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train,y_train)
y_pred = regressor.predict(x_test)
This is what I use (when the train/test splits are still pandas objects, hence the .values):
X_train = X_train.values.reshape(-1, 1)
y_train = y_train.values.reshape(-1, 1)
X_test = X_test.values.reshape(-1, 1)
y_test = y_test.values.reshape(-1, 1)
This is the solution:
regressor.predict(x_test.reshape(-1, 1))
And for polynomial regression:
regressor_2.predict(poly_reg.fit_transform(x_test.reshape(-1, 1)))
Modify
regressor.fit(x_train,y_train)
y_pred = regressor.predict(x_test)
to
regressor.fit(x_train.values.reshape(-1,1),y_train)
y_pred = regressor.predict(x_test.values.reshape(-1,1))
#splitting the dataset into dependent(y) and independent variable(x)
x = training_data.iloc[:,[0,2,3,4,5,6,7]].values
y = training_data.iloc[:,1].values
from sklearn.model_selection import train_test_split
x_train,y_train,x_test,y_test = train_test_split(x,y,test_size = 0.3,random_state = 0)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(x_train,y_train)
I am trying to use logistic regression to train on the independent (x_train) and dependent (y_train) variables, but every time I run the code I see the error
ValueError: y should be a 1d array, got an array of shape (295, 7) instead.
I don't know what to do.
You have an error when making the train_test_split.
Be aware of the order of the output variables; the correct unpacking is as below:
X_train, X_test, y_train, y_test = train_test_split(x,y,test_size = 0.3,random_state=0)
Just by changing this line, your problem should disappear.
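As a hedged illustration of why the order matters (reusing the x and y from the question): train_test_split returns the X splits first, then the y splits, so swapping names silently hands the model mismatched arrays.
from sklearn.model_selection import train_test_split

# returned order: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

# With the original unpacking (x_train, y_train, x_test, y_test = ...), the second
# returned value, x_test of shape (295, 7), landed in the variable named y_train,
# which is exactly what "y should be a 1d array, got an array of shape (295, 7)" reports.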
I am new to this, anything will be helpful. The data size is large...
I am not sure where the error could be coming from. I don't even know if this is a good idea, haha; I am using longitude and latitude for my x and y.
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import pandas as pd
import numpy as np
df = pd.read_csv('aug.csv')
X = df.Lon
y = df.Lat
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier()
clf = clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
ValueError: Expected 2D array, got 1D array instead:
array=[-73.9713 -74.0635 -73.9881 ... -74.1777 -73.9923 -73.9661].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
Your X variable for the inputs needs to be a 2D array of features. You are selecting a single column from that csv, so it comes back as a 1D array. The error message you are getting is correct, so change the line X = df.Lon to:
X = df.Lon.values.reshape(-1, 1)
One thing to note: what you're doing doesn't make a ton of sense. What this code is trying to do is predict the Y (lat) given the X (lon). These really should be independent variables, so predicting one from the other will probably not yield any meaningful results.
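As a hedged aside, an equivalent way to keep the input two-dimensional without calling reshape is to select the column with double brackets, which yields a one-column DataFrame:
X = df[['Lon']]  # shape (n_samples, 1); accepted by fit and predict as-is
y = df.Lat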
I have generated features out of my data, to input into a learning algorithm.
I have worked with a lot of features before but never encountered this ValueError on the same dataset.
Error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
The structure of my data (features) is as follows:
p_s,ne_s,ng_s,su_s,val
90,2320,30,0
This was the issue: there are 5 column labels but only 4 values per row, so the last column (val) ends up as NaN.
on doing this:
print(np.where(np.isnan(X)))
I get:
(array([], dtype=int64), array([], dtype=int64))
I also tried:
np.isnan(X)
np.nan_to_num(X)
pd.DataFrame(X).fillna()
But nothing worked for me!
Code:
import pandas as pd
from sklearn.model_selection import train_test_split
data = pd.read_csv('data.csv')
X = data[['p_s','ne_s','ng_s','su_s']]
y = data['val']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
After this, I start fitting my data into algorithm.
---//some code//--
.fit(X_train, y_train)
---//some code//--
Expectations:
No ValueError. Also, why is this coming up in the first place? I did a lot of work using up to seven features and never encountered the same. Moreover, I cannot be doing anything wrong in the code, because there is no division or multiplication in the whole scenario.
Actual:
ValueError, as stated above.
Identified Issue:
When I print y_train and y_test, they contain NaN values, so it seems the data is not correct after all.
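A minimal sketch of how this could be confirmed and worked around, assuming the header/value mismatch simply leaves NaN in the val column (which the np.isnan(X) check above would not catch, since it only looks at the features):
# y is the 'val' Series read from data.csv
print(y.isna().sum())            # a non-zero count confirms missing targets
mask = y.notna()                 # keep only rows with a valid target
X_clean, y_clean = X[mask], y[mask]
X_train, X_test, y_train, y_test = train_test_split(X_clean, y_clean, random_state=0)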