I have a house price prediction dataset. I have to split the dataset into train and test.
I would like to know if it is possible to do this by using numpy or scipy?
I cannot use scikit learn at this moment.
I know your question was about doing a train_test_split with numpy or scipy only, but there is actually a very simple way to do it with pandas:
import pandas as pd
# Shuffle your dataset
shuffle_df = df.sample(frac=1)
# Define a size for your train set
train_size = int(0.7 * len(df))
# Split your dataset
train_set = shuffle_df[:train_size]
test_set = shuffle_df[train_size:]
For those who would like a fast and easy solution.
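If you also need separate feature and target frames, here is a minimal follow-up sketch (the column name 'price' is a placeholder for your target column, not something from the question):
X_train, y_train = train_set.drop(columns=['price']), train_set['price']
X_test, y_test = test_set.drop(columns=['price']), test_set['price']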
Although this is an old question, this answer might help.
This is essentially how sklearn implements train_test_split; the method below takes similar arguments to sklearn's.
import numpy as np
from itertools import chain
def _indexing(x, indices):
    """
    :param x: array from which indices have to be fetched
    :param indices: indices to be fetched
    :return: sub-array built from the given array and indices
    """
    # numpy-style array indexing
    if hasattr(x, 'shape'):
        return x[indices]
    # plain list indexing
    return [x[idx] for idx in indices]

def train_test_split(*arrays, test_size=0.25, shuffle=True, random_seed=1):
    """
    Splits arrays into train and test data.
    :param arrays: arrays to split into train and test
    :param test_size: size of the test set, in the range (0, 1)
    :param shuffle: whether to shuffle the arrays or not
    :param random_seed: random seed value
    :return: list of length 2 * len(arrays), alternating the train and test part of each array
    """
    # sanity checks
    assert 0 < test_size < 1
    assert len(arrays) > 0
    length = len(arrays[0])
    for i in arrays:
        assert len(i) == length

    n_test = int(np.ceil(length * test_size))
    n_train = length - n_test

    if shuffle:
        perm = np.random.RandomState(random_seed).permutation(length)
        test_indices = perm[:n_test]
        train_indices = perm[n_test:]
    else:
        train_indices = np.arange(n_train)
        test_indices = np.arange(n_train, length)

    return list(chain.from_iterable(
        (_indexing(x, train_indices), _indexing(x, test_indices)) for x in arrays
    ))
Of course sklearn's implementation supports stratified k-fold, splitting of pandas series etc. This one only works for splitting lists and numpy arrays, which I think will work for your case.
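A minimal usage sketch of the function above, with made-up toy arrays (the names and sizes are placeholders):
X = np.arange(20).reshape(10, 2)
y = list(range(10))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_seed=42)
print(len(X_train), len(X_test))  # 8 2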
This solution uses pandas and numpy only.
def split_train_valid_test(data, valid_ratio, test_ratio):
    shuffled_indices = np.random.permutation(len(data))
    valid_set_size = int(len(data) * valid_ratio)
    valid_indices = shuffled_indices[:valid_set_size]
    test_set_size = int(len(data) * test_ratio)
    test_indices = shuffled_indices[valid_set_size:valid_set_size + test_set_size]
    # the remaining indices form the training set
    train_indices = shuffled_indices[valid_set_size + test_set_size:]
    return data.iloc[train_indices], data.iloc[valid_indices], data.iloc[test_indices]

train_set, valid_set, test_set = split_train_valid_test(dataset, valid_ratio=0.2, test_ratio=0.2)
print(len(train_set), len(valid_set), len(test_set))
## out: 12384 4128 4128 (a 60/20/20 split of a 20640-row dataset)
This code should work (assuming X_data is a pandas DataFrame):
import numpy as np
num_of_rows = int(len(X_data) * 0.8)  # number of training rows (80%)
values = X_data.values
np.random.shuffle(values)  # shuffles the rows in place to make the split random
train_data = values[:num_of_rows]  # first 80% of rows for training
test_data = values[num_of_rows:]   # remaining rows for testing
Hope this helps!
import numpy as np
import pandas as pd
X_data = pd.read_csv('house.csv')
Y_data = X_data["prices"]
X_data.drop(["offers", "brick", "bathrooms", "prices"],
axis=1, inplace=True) # important to drop prices as well
# create random train/test split
indices = np.arange(X_data.shape[0])
num_training_indices = int(0.8 * X_data.shape[0])
np.random.shuffle(indices)
train_indices = indices[:num_training_indices]
test_indices = indices[num_training_indices:]
# split the actual data
X_data_train, X_data_test = X_data.iloc[train_indices], X_data.iloc[test_indices]
Y_data_train, Y_data_test = Y_data.iloc[train_indices], Y_data.iloc[test_indices]
This assumes you want a random split. We create an array of indices as long as the number of data points you have, i.e. the length of the first axis of X_data (or Y_data). We then put them in random order and take the first 80% of those shuffled indices as training data and the rest for testing; [:num_training_indices] just selects the first num_training_indices entries. After that you extract the rows from your data using the two index lists and your data is split. Remember to drop the prices from your X_data, and set a seed if you want the split to be reproducible (np.random.seed(some_integer) at the beginning).
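For example, a minimal sketch of the reproducible variant (the seed value 0 is arbitrary):
np.random.seed(0)  # fix the RNG so the shuffle, and hence the split, is repeatable
indices = np.arange(X_data.shape[0])
np.random.shuffle(indices)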
Related
I'm trying to work my way toward a slightly more flexible knn input script than the tutorials based on the iris dataset, but I'm having some trouble (I think) adding the matching 2nd dimension to the numpy array in #6, and again when I come to #11, the fitting.
File "G:\PROGRAMMERING\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 212, in check_consistent_length
" samples: %r" % [int(l) for l in lengths]) ValueError: Found input variables with inconsistent numbers of samples: [150, 1]
x is (150, 5) and y is (150, 1). 150 is the number of samples in both, but they differ in the number of fields. Is this the problem, and if so, how do I fix it?
#1. Loading the Pandas libraries as pd
import pandas as pd
import numpy as np
#2. Read data from the file 'custom.csv' placed in your code directory
data = pd.read_csv("custom.csv")
#3. Preview the first 5 lines of the loaded data
print(data.head())
print(type(data))
#4.Test the shape of the data
print(data.shape)
df = pd.DataFrame(data)
print(df)
#5. Convert non-numericals to numericals
print(df.dtypes)
# Any object should be converted to numerical
df['species'] = pd.Categorical(df['species'])
df['species'] = df.species.cat.codes
print("outcome:")
print(df.dtypes)
#6.Convert df to numpy.ndarray
np = df.to_numpy()
print(type(np)) #this should state <class 'numpy.ndarray'>
print(data.shape)
print(np)
x = np.data
y = [df['species']]
print(y)
#K-nearest neighbor (find closest) - search for the K nearest observations in the dataset
#The model calculates the distance to all, and selects the K nearest ones.
#8. Import the class you plan to use
from sklearn.neighbors import (KNeighborsClassifier)
#9. Pick a value for K
k = 2
#10. Instantiate the "estimator" (make an instance of the model)
knn = KNeighborsClassifier(n_neighbors=k)
print(knn)
#11. fit the model with data/model training
knn.fit(x, y)
#12. Predict the response for a new observation
print(knn.predict([[3, 5, 4, 2]]))
This is how I used the scikit-learn KNeighborsClassifier to fit the knn model:
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
df = datasets.load_iris()
X = pd.DataFrame(df.data)
y = df.target
knn = KNeighborsClassifier(n_neighbors = 2)
knn.fit(X,y)
print(knn.predict([[6, 3, 5, 2]]))
#prints output class [2]
print(knn.predict([[3, 5, 4, 2]]))
#prints output class [1]
You don't need to convert the DataFrame to a numpy array; you can fit the model directly on the DataFrame. Also, when you did convert the DataFrame to a numpy array, you named it np, which clashes with the numpy alias from import numpy as np at the top.
The prediction input has 4 columns, leaving the fifth column, 'species', out. Also, if 'species' is the target, it cannot be fed to the knn as an input feature at the same time. The pop below removes this column from the DataFrame df.
# npdf = df.to_numpy()
df = df.apply(lambda x: pd.Series(x))
y = np.asarray(df['species'])
# remove the target column from the samples
df.pop('species')
x = df.to_numpy()
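A hedged sketch of how the pieces above then feed the classifier (mirroring steps #10-#12 of the question):
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(x, y)                       # x: (150, 4) feature array, y: (150,) integer labels
print(knn.predict([[3, 5, 4, 2]]))  # class prediction for one new observation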
I've just seen this answer on SO which shows how to split data using numpy.
Assuming we're going to split the data as 0.8, 0.1, 0.1 for training, testing, and validation respectively, you do it this way:
train, test, val = np.split(df, [int(.8 * len(df)), int(.9 * len(df))])
I'm interested to know how I could stratify the data while splitting it with this method.
Stratifying means splitting the data while keeping the priors of each class present in the data. That is, if you're going to take 0.8 of the data for the training set, you take 0.8 from each class you have. The same goes for the test and validation sets.
I tried grouping the data first by class using:
grouped_df = df.groupby(class_col_name, group_keys=False)
But it did not show correct results.
Note: I'm familiar with train_test_split
Simply use your groupby object, grouped_df, which consists of one subsetted data frame per class, and run the needed np.split on each. Then concatenate all sampled data frames with pd.concat. Altogether, this would stratify according to your quoted message:
train_list = []; test_list = []; val_list = []
grouped_df = df.groupby(class_col_name)

# ITERATE THROUGH EACH SUBSET DF
for i, g in grouped_df:
    # STRATIFY THE g (CLASS) DATA FRAME
    train, test, val = np.split(g, [int(.8 * len(g)), int(.9 * len(g))])
    train_list.append(train); test_list.append(test); val_list.append(val)
final_train = pd.concat(train_list)
final_test = pd.concat(test_list)
final_val = pd.concat(val_list)
Alternatively, a short-hand version using list comprehensions:
# LIST OF ARRAYS
arr_list = [np.split(g, [int(.8 * len(g)), int(.9 * len(g))]) for i, g in grouped_df]
final_train = pd.concat([t[0] for t in arr_list])
final_test = pd.concat([t[1] for t in arr_list])
final_val = pd.concat([v[2] for v in arr_list])
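As a quick sanity check (not from the original answer), the class proportions in each split should roughly match the full data frame:
print(df[class_col_name].value_counts(normalize=True))
print(final_train[class_col_name].value_counts(normalize=True))
print(final_test[class_col_name].value_counts(normalize=True))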
This assumes you have already done the stratification, such that a category column (here 'headline') indicates which stratum each entry belongs to.
from collections import namedtuple
Dataset = namedtuple('Dataset', 'train test val')
grouped = df.groupby('headline')
splitted = {x: grouped.get_group(x).sample(frac=1) for x in grouped.groups}
datasets = {k:Dataset(*np.split(df, [int(.8 * len(df)), int(.9 * len(df))])) for k, df in splitted.items()}
This stores each stratified split by the category name assigned in df.
Each item in datasets is a Dataset namedtuple such that training, testing, and validation subsets are accessible by .train, .test, and .val respectively.
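A hedged usage sketch (the key 'sports' is a placeholder for whatever values appear in your 'headline' column):
sports = datasets['sports']  # placeholder key; use one of your actual categories
print(len(sports.train), len(sports.test), len(sports.val))
train_df = pd.concat(d.train for d in datasets.values())  # recombine the stratified training parts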
This is a follow on question from Subsetting Dask DataFrames. I wish to shuffle data from a dask dataframe before sending it in batches to a ML algorithm.
The answer in that question was to do the following:
for part in df.repartition(npartitions=100).to_delayed():
batch = part.compute()
However, even if I was to shuffle the contents of batch I'm a bit worried that it might not be ideal. The data is a time series set so datapoints would be highly correlated within each partition.
What I would ideally like is something along the lines of:
rand_idx = np.random.choice(len(df), batch_size, replace=False)
batch = df.iloc[rand_idx, :]
which would work on pandas but not dask. Any thoughts?
Edit 1: Potential Solution
I tried doing
len_df = len(df)
train_len = int(len_df * 0.8)
idx = np.random.permutation(len_df)
train_idx = idx[:train_len]
test_idx = idx[train_len:]
train_df = df.loc[train_idx]
test_df = df.loc[test_idx]
However, if I try doing train_df.loc[:5, :].compute() this returns a 124451-row dataframe, so I'm clearly using dask wrong.
I recommend adding a column of random data to your dataframe and then using that to set the index:
df = df.map_partitions(add_random_column_to_pandas_dataframe, ...)
df = df.set_index('name-of-random-column')
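The answer leaves add_random_column_to_pandas_dataframe unspecified; a minimal sketch of what it might look like (the column name 'shuffle_key' is my own placeholder):
import numpy as np

def add_random_column_to_pandas_dataframe(pdf):
    # pdf is the pandas DataFrame backing one dask partition
    return pdf.assign(shuffle_key=np.random.random(len(pdf)))

df = df.map_partitions(add_random_column_to_pandas_dataframe)
df = df.set_index('shuffle_key')  # expensive: setting the index shuffles data across partitions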
I encountered the same issue recently and came up with a different approach using dask array and shuffle_slice, introduced in this pull request. It shuffles the whole sample:
import numpy as np
from dask.array.slicing import shuffle_slice
d_arr = df.to_dask_array(True)
df_len = len(df)
np.random.seed(42)
index = np.random.choice(df_len, df_len, replace=False)
d_arr = shuffle_slice(d_arr, index)
and to transform back to dask dataframe
df = d_arr.to_dask_dataframe(df.columns)
For me it works well for large data sets.
If you're trying to separate your dataframe into training and testing subsets, that is exactly what sklearn.model_selection.train_test_split does, and it works with pandas.DataFrame (see its documentation for an example).
And for your case of using it with dask, you may be interested in the library dklearn, which seems to implement this function.
To do that, we can use the train_test_split function, which mirrors
the scikit-learn function of the same name. We'll hold back 20% of the
rows:
from dklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
More information here.
Note: I did not perform any tests with dklearn; this is just something I came across, but I hope it can help.
EDIT: what about dask.DataFrame.random_split?
Examples
50/50 split
>>> a, b = df.random_split([0.5, 0.5])
80/10/10 split, consistent random_state
>>> a, b, c = df.random_split([0.8, 0.1, 0.1], random_state=123)
Use for ML applications is illustrated here
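The linked illustration is not reproduced here, but a hedged sketch of random_split feeding a train/test workflow might look like this (the column name 'label' is a placeholder):
train, test = df.random_split([0.8, 0.2], random_state=123)
X_train, y_train = train.drop('label', axis=1), train['label']
X_test, y_test = test.drop('label', axis=1), test['label']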
For people here really just wanting to shuffle the rows as the title implies:
This is costly:
import numpy as np
random_idx = np.random.permutation(len(sd.index))
sd = sd.assign(random_idx=random_idx)  # assign returns a new dataframe, so keep the result
sd = sd.set_index('random_idx')        # setting the index is what shuffles the rows, and is the expensive step
I have the following dataset, with over 20,000 rows:
I want to use columns A through E to predict column X using a k-nearest neighbor algorithm. I have tried to use KNeighborsRegressor from sklearn, as follows:
import pandas as pd
import random
from numpy.random import permutation
import math
from sklearn.neighbors import KNeighborsRegressor
df = pd.read_csv("data.csv")
random_indices = permutation(df.index)
test_cutoff = int(math.floor(len(df)/5))
test = df.loc[random_indices[1:test_cutoff]]
train = df.loc[random_indices[test_cutoff:]]
x_columns = ['A', 'B', 'C', 'D', 'E']
y_column = ['X']
knn = KNeighborsRegressor(n_neighbors=100, weights='distance')
knn.fit(train[x_columns], train[y_column])
predictions = knn.predict(test[x_columns])
This only makes predictions on the test data which is a fifth of the original dataset. I also want prediction values for the training data.
To do this, I tried to implement my own k-nearest algorithm by calculating the Euclidean distance for each row from every other row, finding the k shortest distances, and averaging the X value from those k rows. This process took over 30 seconds for just one row, and I have over 20,000 rows. Is there a quicker way to do this?
Give this code a try:
import numpy as np
import pandas as pd
from sklearn.model_selection import ShuffleSplit
from sklearn.neighbors import KNeighborsRegressor
df = pd.read_csv("data.csv")
X = np.asarray(df.loc[:, ['A', 'B', 'C', 'D', 'E']])
y = np.asarray(df['X'])
rs = ShuffleSplit(n_splits=1, test_size=1./5, random_state=0)
train_indices, test_indices = next(rs.split(X))  # split() returns a generator
knn = KNeighborsRegressor(n_neighbors=100, weights='distance')
knn.fit(X[train_indices], y[train_indices])
predictions = knn.predict(X)
The main difference with respect to your solution is the use of ShuffleSplit.
Notes:
predictions contains the predicted values for all your data (test and train).
The proportion of test data can be adjusted through the parameter test_size (I used your setting, i.e. one fifth).
It is necessary to call next() on the generator returned by split() to obtain the train and test indices.
To do this, I tried to implement my own k-nearest algorithm by calculating the Euclidean distance for each row from every other row, finding the k shortest distances, and averaging the X value from those k rows. This process took over 30 seconds for just one row, and I have over 20,000 rows. Is there a quicker way to do this?
Yes, the problem is that loops in Python are extremely slow. What you can do is vectorize your computations. Say your data is a matrix X (n x d); then the matrix of pairwise squared distances D_ij = ||X_i - X_j||^2 expands to
D_ij = ||X_i||^2 + ||X_j||^2 - 2 * X_i . X_j
so in Python
D = (X ** 2).sum(1).reshape(-1, 1) + (X ** 2).sum(1).reshape(1, -1) - 2*X.dot(X.T)
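Building on that distance matrix, a hedged sketch of the full vectorized k-nearest-neighbour averaging (memory caveat: D is n x n, which is sizeable for 20,000 rows):
import numpy as np

def knn_predict_all(X, y, k=100):
    # pairwise squared distances D_ij = ||X_i - X_j||^2
    sq = (X ** 2).sum(1)
    D = sq.reshape(-1, 1) + sq.reshape(1, -1) - 2 * X.dot(X.T)
    np.fill_diagonal(D, np.inf)                     # exclude each row from its own neighbours
    nearest = np.argpartition(D, k, axis=1)[:, :k]  # indices of the k smallest distances per row
    return y[nearest].mean(axis=1)                  # average the targets of those neighbours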
You do not need to split the data into train and test if you want predictions on training data only.
You can just fit the original data then make predictions on it.
model.fit(X, y)                 # fit on the full original data and its target
predictions = model.predict(X)  # predictions for every row, including the training rows
I am trying to perform a speed comparison test of Python vs R and I am struggling with an issue: LinearRegression under sklearn with categorical variables.
Code R:
# Start the clock!
ptm <- proc.time()
ptm
test_data = read.csv("clean_hold.out.csv")
# Regression Model
model_liner = lm(test_data$HH_F ~ ., data = test_data)
# Stop the clock
new_ptm <- proc.time() - ptm
Code Python:
import pandas as pd
import time
from sklearn.linear_model import LinearRegression
from sklearn.feature_extraction import DictVectorizer
start = time.time()
test_data = pd.read_csv("./clean_hold.out.csv")
x_train = [col for col in test_data.columns[1:] if col != 'HH_F']
y_train = ['HH_F']
model_linear = LinearRegression(normalize=False)
model_linear.fit(test_data[x_train], test_data[y_train])
but it does not work for me:
return X.astype(np.float32 if X.dtype == np.int32 else np.float64)
ValueError: could not convert string to float: Bee True
I tried another approach:
test_data = pd.read_csv("./clean_hold.out.csv").to_dict()
v = DictVectorizer(sparse=False)
X = v.fit_transform(test_data)
However, I caught another error:
File "C:\Anaconda32\lib\site-packages\sklearn\feature_extraction\dict_vectorizer.py", line 258, in transform
    Xa[i, vocab[f]] = dtype(v)
TypeError: float() argument must be a string or a number
I don't understand how I should resolve these issues in Python...
Example of data:
http://screencast.com/t/hYyyu7nU9hQm
You have to do some encoding before using fit.
There are several classes that can be used:
LabelEncoder: turns your strings into incremental integer values
OneHotEncoder: uses a one-of-K (one-hot) scheme to transform your strings into binary indicator columns
I wanted to have a scalable solution but didn't get any answer, so I selected OneHotEncoder, which binarizes all the strings. It is quite effective, but if you have many different strings the matrix will grow very quickly and more memory will be required.
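As a concrete hedged sketch (using pd.get_dummies, which produces the same kind of one-hot columns as OneHotEncoder):
import pandas as pd
from sklearn.linear_model import LinearRegression

test_data = pd.read_csv("./clean_hold.out.csv")
y = test_data['HH_F']
X = pd.get_dummies(test_data.drop(columns=['HH_F']))  # object columns become 0/1 indicator columns

model = LinearRegression()
model.fit(X, y)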