When I run this code I get the error below. I have tried to solve it in other ways, but none of them worked cleanly.
The dataset looks like this:
(screenshot of the dataframe not included)
And my code:
from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(n_estimators = 10, random_state = 0)
regressor.fit(df_train, y_train)
Error trace:
File "C:\Users\Acer
15\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 433,
in check_array array = np.array(array, dtype=dtype, order=order,
copy=copy) ValueError: could not convert string to float: '15ML'
To replace '15ML' with '15' in a pandas dataframe:
df['Quantity'] = df['Quantity'].replace('15ML', '15')
I have assumed the column containing '15ML' is named Quantity; replace that with the actual column name in your dataframe. If you want to access the column by position instead (note that .ix is deprecated, .iloc is the current equivalent), you can use
df.iloc[:, 4] = df.iloc[:, 4].replace('15ML', '15')
I counted the index column shown in the image; the actual position may vary depending on how you load the data.
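A more general fix is to strip the unit suffix from the whole column and convert it to a numeric dtype, rather than replacing a single value at a time. A minimal sketch, assuming the column is called Quantity (a guess based on the screenshot) and every entry follows the '<number>ML' pattern:
# Assumes df_train from the question and a hypothetical 'Quantity' column;
# strip the 'ML' suffix and convert to float so scikit-learn gets numeric input.
df_train['Quantity'] = (
    df_train['Quantity']
    .astype(str)
    .str.replace('ML', '', regex=False)
    .astype(float)
)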
I have a node2vec embedding stored as a .csv file; the values form a square symmetric matrix. I have two versions of this, one with the node names in the first column and another with the node names in the first row. I would like to cluster this data with DBSCAN, but I can't figure out how to get the input right. I tried this:
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
input_file = "node2vec-labels-on-columns.emb"
# for tab delimited use:
df = pd.read_csv(input_file, header = 0, delimiter = "\t")
# put the original column names in a python list
original_headers = list(df.columns.values)
emb = df.as_matrix()
db = DBSCAN(eps=0.3, min_samples=10).fit(emb)
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print("Estimated number of clusters: %d" % n_clusters_)
print("Estimated number of noise points: %d" % n_noise_)
This leads to an error:
dbscan.py:14: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
emb = df.as_matrix()
Traceback (most recent call last):
File "dbscan.py", line 15, in <module>
db = DBSCAN(eps=0.3, min_samples=10).fit(emb)
File "C:\Python36\lib\site-packages\sklearn\cluster\_dbscan.py", line 312, in fit
X = self._validate_data(X, accept_sparse='csr')
File "C:\Python36\lib\site-packages\sklearn\base.py", line 420, in _validate_data
X = check_array(X, **check_params)
File "C:\Python36\lib\site-packages\sklearn\utils\validation.py", line 73, in inner_f
return f(**kwargs)
File "C:\Python36\lib\site-packages\sklearn\utils\validation.py", line 646, in check_array
allow_nan=force_all_finite == 'allow-nan')
File "C:\Python36\lib\site-packages\sklearn\utils\validation.py", line 100, in _assert_all_finite
msg_dtype if msg_dtype is not None else X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
I've tried other input methods that lead to the same error. All the tutorials I can find use datasets imported from sklearn, so they are no help in figuring out how to read from a file. Can anyone point me in the right direction?
The error does not come from reading the dataset from a file; it comes from the content of the dataset itself.
DBSCAN is meant to be used on numerical data. As the error states, it does not accept NaNs.
If what you actually want to cluster are strings or labels, you should look for a different model.
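If the NaNs are simply the node-name column being pulled into the numeric matrix, one way to check and work around it is sketched below (assuming the file version with node names in the first column and tab delimiters, as in the question):
import pandas as pd
from sklearn.cluster import DBSCAN

df = pd.read_csv("node2vec-labels-on-columns.emb", header=0, delimiter="\t")

# Keep only the numeric columns (this drops a node-name column if present)
# and check whether any NaNs remain in the embedding itself.
numeric = df.select_dtypes(include="number")
print(numeric.isna().sum().sum(), "NaN values in the numeric part")

emb = numeric.dropna().values   # .values replaces the deprecated .as_matrix()
db = DBSCAN(eps=0.3, min_samples=10).fit(emb)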
I am using the Melbourne Housing Dataset from Kaggle to fit a regression model, with Price as the target value. You can find the dataset here.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble.partial_dependence import partial_dependence, plot_partial_dependence
from sklearn.preprocessing import Imputer
cols_to_use = ['Distance', 'Landsize', 'BuildingArea']
data = pd.read_csv('data/melb_house_pricing.csv')
# drop rows where target is NaN
data = data.loc[~(data['Price'].isna())]
y = data.Price
X = data[cols_to_use]
my_imputer = Imputer()
imputed_X = my_imputer.fit_transform(X)
print(f"Contains NaNs in training data: {np.isnan(imputed_X).sum()}")
print(f"Contains NaNs in target data: {np.isnan(y).sum()}")
print(f"Contains Infinity: {np.isinf(imputed_X).sum()}")
print(f"Contains Infinity: {np.isinf(y).sum()}")
my_model = GradientBoostingRegressor()
my_model.fit(imputed_X, y)
# Here we make the plot
my_plots = plot_partial_dependence(my_model,
                                   features=[0, 2],  # column numbers of plots we want to show
                                   X=X,              # raw predictors data
                                   feature_names=['Distance', 'Landsize', 'BuildingArea'],  # labels on graphs
                                   grid_resolution=10)  # number of values to plot on x axis
Even after using the Imputer from sklearn, I get the following error -
Contains NaNs in training data: 0
Contains NaNs in target data: 0
Contains Infinity: 0
Contains Infinity: 0
/Users/adimyth/.local/lib/python3.7/site-packages/sklearn/utils/deprecation.py:85: DeprecationWarning: Function plot_partial_dependence is deprecated; The function ensemble.plot_partial_dependence has been deprecated in favour of sklearn.inspection.plot_partial_dependence in 0.21 and will be removed in 0.23.
warnings.warn(msg, category=DeprecationWarning)
Traceback (most recent call last):
File "partial_dependency_plots.py", line 29, in <module>
grid_resolution=10) # number of values to plot on x axis
File "/Users/adimyth/.local/lib/python3.7/site-packages/sklearn/utils/deprecation.py", line 86, in wrapped
return fun(*args, **kwargs)
File "/Users/adimyth/.local/lib/python3.7/site-packages/sklearn/ensemble/partial_dependence.py", line 286, in plot_partial_dependence
X = check_array(X, dtype=DTYPE, order='C')
File "/Users/adimyth/.local/lib/python3.7/site-packages/sklearn/utils/validation.py", line 542, in check_array
allow_nan=force_all_finite == 'allow-nan')
File "/Users/adimyth/.local/lib/python3.7/site-packages/sklearn/utils/validation.py", line 56, in _assert_all_finite
raise ValueError(msg_err.format(type_err, X.dtype))
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
As you can see, when I print the number of NaNs in imputed_X, I get 0. So why do I still get the ValueError? Any help?
The traceback points at the plot_partial_dependence call: it is passed X=X, the raw predictors that still contain NaNs, while the model was fit on imputed_X. Pass the imputed data to the plotting function as well:
my_plots = plot_partial_dependence(my_model,
                                   features=[0, 2],  # column numbers of plots we want to show
                                   X=imputed_X,      # imputed predictors data
                                   feature_names=['Distance', 'Landsize', 'BuildingArea'],  # labels on graphs
                                   grid_resolution=10)  # number of values to plot on x axis
It will work.
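As the DeprecationWarning in your traceback notes, ensemble.partial_dependence.plot_partial_dependence was deprecated in 0.21 in favour of sklearn.inspection.plot_partial_dependence. A sketch of the same call against the newer location, assuming scikit-learn >= 0.21 and the same imputed_X:
from sklearn.inspection import plot_partial_dependence

# Same plot via the non-deprecated function; the data passed in must again be
# the imputed, fully numeric matrix.
my_plots = plot_partial_dependence(my_model, imputed_X,
                                   features=[0, 2],
                                   feature_names=['Distance', 'Landsize', 'BuildingArea'],
                                   grid_resolution=10)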
I need help with this; I'm a beginner and quite confused. This is my code for the beginning of my preprocessing.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Import training set
dataset_train = pd.read_csv('Google_Stock_Price_Train.csv')
training_set = dataset_train.iloc[:, 1:6].values
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
With this dataset (not the full file; I've only included the first rows here, there are actually about 10,000):
Date, Open, High, Low, Close, Volume
1/3/2012,325.25,332.83,324.97,663.59,"7,380,500"
1/4/2012,331.27,333.87,329.08,666.45,"5,749,400"
1/5/2012,329.83,330.75,326.89,657.21,"6,590,300"
1/6/2012,328.34,328.77,323.68,648.24,"5,405,900"
1/9/2012,322.04,322.29,309.46,620.76,"11,688,800"
1/10/2012,313.7,315.72,307.3,621.43,"8,824,000"
1/11/2012,310.59,313.52,309.4,624.25,"4,817,800"
1/12/2012,314.43,315.26,312.08,627.92,"3,764,400"
1/13/2012,311.96,312.3,309.37,623.28,"4,631,800"
I get this error
Traceback (most recent call last):
File "<ipython-input-10-94c47491afd8>", line 3, in <module>
training_set_scaled = sc.fit_transform(training_set)
File "C:\Users\MAx\Anaconda3\lib\site-packages\sklearn\base.py", line 517, in fit_transform
return self.fit(X, **fit_params).transform(X)
File "C:\Users\MAx\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 308, in fit
return self.partial_fit(X, y)
File "C:\Users\MAx\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 334, in partial_fit
estimator=self, dtype=FLOAT_DTYPES)
File "C:\Users\MAx\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 433, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: could not convert string to float: '1,770,000'
Sample code showing how to fix this would be helpful.
You need to get rid of the commas in your numbers: float("7,380,500") fails.
I don't know how (or whether) you can change the data, but if you can, str.replace(',', '') removes all the commas from your number strings. Since your file is a CSV, you need to make sure the replacement only applies to the number columns, not to every comma in the file.
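For example, applied only to the Volume column after loading, a minimal sketch using the column names from the sample above:
import pandas as pd

dataset_train = pd.read_csv('Google_Stock_Price_Train.csv')

# Strip the thousands separators from the Volume column only,
# then convert it to float so MinMaxScaler receives numeric input.
dataset_train['Volume'] = (
    dataset_train['Volume']
    .str.replace(',', '', regex=False)
    .astype(float)
)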
You can use the thousands parameter of read_csv. This removes the commas between the digits in the 'Volume' column and parses it as int (the default), which can then easily be converted to float.
dataset_train = pd.read_csv('Google_Stock_Price_Train.csv', thousands=',')
dataset_train['Volume'].dtype
# Output: int64
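With the commas parsed away, the scaling step from the question should then run; a short sketch continuing from the read_csv call above:
from sklearn.preprocessing import MinMaxScaler

# Columns 1:6 (Open, High, Low, Close, Volume) are now all numeric,
# so the scaler no longer hits the string-to-float error.
training_set = dataset_train.iloc[:, 1:6].values
sc = MinMaxScaler(feature_range=(0, 1))
training_set_scaled = sc.fit_transform(training_set)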
I want to train a random forest on a bunch of matrices (first link below for an example). I want to classify them as either "g" or "b" (good or bad, a or b, 1 or 0, it doesn't matter).
I've called the script randfore.py. I am currently using 10 examples, but I will be using a much bigger data set once I actually get this up and running.
Here is the code:
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import os
import sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
working_dir = os.getcwd() # Grabs the working directory
directory = working_dir+"/fakesourcestuff/" ## The actual directory where the files are located
sources = list() # Just sets up a list here which is going to become the input for the random forest
for i in range(10):
    cutoutfile = pd.read_csv(directory + "image2_with_fake_geotran_subtracted_corrected_cutout_" + str(i) + ".dat", dtype=object)  # Where we get the input data for the random forest from
    sources.append(cutoutfile)  # add it to our sources list
targets = pd.read_csv(directory + "faketargets.dat",sep='\n',header=None, dtype=object) # Reads in our target data... either "g" or "b" (Good or bad)
sources = pd.DataFrame(sources) ## I convert the list to a dataframe to avoid the "ValueError: cannot copy sequence with size 99 to array axis with dimension 1" error. Necessary?
# Training sets
X_train = sources[:8] # Inputs
y_train = targets[:8] # Targets
# Random Forest
rf = RandomForestClassifier(n_estimators=10)
rf_fit = rf.fit(X_train, y_train)
Here is the current error output:
Traceback (most recent call last):
File "randfore.py", line 31, in <module>
rf_fit = rf.fit(X_train, y_train)
File "/home/ithil/anaconda2/envs/iraf27/lib/python2.7/site-packages/sklearn/ensemble/forest.py", line 247, in fit
X = check_array(X, accept_sparse="csc", dtype=DTYPE)
File "/home/ithil/anaconda2/envs/iraf27/lib/python2.7/site-packages/sklearn/utils/validation.py", line 382, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence.
I tried making the dtype object, but it hasn't helped. I'm just not sure what sort of manipulation I need to perform to make this work.
I think the problem is that the files I'm appending to sources aren't just numbers but a mix of numbers, commas, and various square brackets (each is basically a big matrix). Is there a natural way to import this? The square brackets in particular are probably an issue.
Before I converted sources to a DataFrame I was getting the following error:
ValueError: cannot copy sequence with size 99 to array axis with dimension 1
This is due to the dimensions of my input (100 lines long) and my target which has 10 rows and 1 column.
Here is the contents of the first file that's read into cutouts (they're all the exact same style) to be used as the input:
https://pastebin.com/tkysqmVu
And here is the contents of faketargets.dat, the targets:
https://pastebin.com/632RBqWc
Any ideas? Help greatly appreciated. I am sure there is a lot of fundamental confusion going on here.
Try writing:
X_train = sources.values[:8] # Inputs
y_train = targets.values[:8] # Targets
I hope this will solve your problem!
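If that alone doesn't help, the underlying problem is usually that each appended DataFrame ends up as a single object cell, so fit() sees sequences instead of plain numbers. A rough sketch of building one flat numeric row per cutout file, assuming each .dat file reduces to plain numbers once brackets and commas are stripped and that every file yields the same number of values (it reuses directory, targets and RandomForestClassifier from the question):
import numpy as np

rows = []
for i in range(10):
    path = directory + "image2_with_fake_geotran_subtracted_corrected_cutout_" + str(i) + ".dat"
    with open(path) as f:
        text = f.read().replace('[', ' ').replace(']', ' ').replace(',', ' ')
    # Parse every remaining token as a float and flatten into one feature vector
    rows.append(np.array([float(tok) for tok in text.split()]))

X = np.vstack(rows)                     # shape: (n_files, n_values_per_file)
y = targets.values.ravel()[:len(rows)]

rf = RandomForestClassifier(n_estimators=10)
rf.fit(X[:8], y[:8])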
I am trying to cluster over 200k points with the following:
km = KMeans(n_clusters=5)
km.fit_transform(ends)
But I get the following error (the matrix dimensions are 200k x 2):
km.fit_transform(ends)
File "/Users/fleh/anaconda/lib/python2.7/site-packages/sklearn/cluster/k_means_.py", line 814, in fit_transform
    X = self._check_fit_data(X)
...
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
As far as I can tell from inspecting the data, the numbers are not that large.
How do I fix this?
Thanks
If you use pandas for data handling, you can run this:
import numpy as np
import pandas as pd

df = pd.DataFrame(ends)
df = df.replace([np.inf, -np.inf], np.nan)  # replace() returns a new frame, so assign it back
df.info()
info() will then report the non-null count per column, which tells you whether the data contains any non-computable values (NaNs, or the infinities just converted to NaN).
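If that does reveal bad rows, one way to proceed is to drop them before clustering; a sketch reusing the df built above:
from sklearn.cluster import KMeans

# Drop every row containing NaN (including values that were +/-inf before the
# replace above) and fit on the cleaned array.
clean = df.dropna()
km = KMeans(n_clusters=5)
km.fit_transform(clean.values)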