I am executing the following code in the final block of my regression code:
steps = 50000
with tf.Session() as sess:
    sess.run(init)
    for i in range(steps):
        sess.run(train, feed_dict={X_data: X_train, y_target: y_train})
        if i % 500 == 0:
            rand_ind = np.random.random_integers(len(X_test) + 1)
            feed = {X_data: X_test.iloc[rand_ind:rand_ind+8, :], y_target: y_test.iloc[rand_ind:rand_ind+8, :]}
            loss = tf.reduce_sum(tf.square(y_target - y_output)) / 8
            print(sess.run(loss, feed_dict=feed))
Is this a good way to generate smaller batches from a pandas DataFrame or are there better ways to do so?
I am using iloc here because I was not able to index properly before. Yet I am getting the following warning:
DeprecationWarning: This function is deprecated. Please call randint(1, 6193 + 1) instead
  from ipykernel import kernelapp as app
If you want to select random rows from the dataframe you could use the following code:
import numpy as np
batch = df.loc[np.random.choice(df.index.values, sample_size)]
This code selects random row labels and then uses them to build the batch; replace sample_size with the size of the batch.
If you use it multiple times, you will effectively be sampling with replacement over your data.
If you don't want to reuse the same examples, you can use the following code to sample and then drop the sampled rows so they are not used again:
import numpy as np
sample = np.random.choice(df.index.values, sample_size, replace=False)
batch = df.loc[sample]
newdf = df.drop(sample, axis=0)
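Alternatively, pandas has a built-in sampler that does both steps in one call; a minimal sketch (assuming the same df and sample_size as above):
# Draw a batch without replacement, then drop those rows so later batches never reuse them
batch = df.sample(n=sample_size, replace=False)
newdf = df.drop(batch.index)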
I am running some regression models to predict performance.
After running the models I created variables to see the predictions (each y_pred_* holds 2567 values):
y_pred_LR = regressor.predict(X_test)
y_pred_SVR = regressor2.predict(X_test)
y_pred_RF = regressor3.predict(X_test)
The types of these predictions are arrays of float64, while y_test is a DataFrame.
I wanted to create a table with the results. I tried several different approaches (calling list, converting, selecting the values) and did not succeed so far; could anyone help?
My last attempt was as below:
comparison = pd.DataFrame({'Real': y_test, 'LR': y_pred_LR, 'RF': y_pred_RF, 'SVR': y_pred_SVR})
In this case the DataFrame is created but the values don't appear.
Additionally, I would like to create two new rows with the mean and standard deviation of the results, and these rows should be located at the beginning (as the first rows) of the DataFrame.
Thanks
import pandas as pd
import numpy as np

# Dummy data standing in for y_test and the three prediction arrays
real = np.array([2] * 10).reshape(-1, 1)
y_pred_LR = np.array([0] * 10)
y_pred_SVR = np.array([1] * 10)
y_pred_RF = np.array([5] * 10)

# Flatten the column vector so every column passed to the DataFrame is 1D
real = real.flatten()
comparison = pd.DataFrame({'real': real, 'y_pred_LR': y_pred_LR, 'y_pred_SVR': y_pred_SVR, 'y_pred_RF': y_pred_RF})

# Column-wise mean and standard deviation, stacked as two summary rows
Mean = comparison.mean(axis=0)
StD = comparison.std(axis=0)
Mean_StD = pd.concat([Mean, StD], axis=1).T

# Put the summary rows on top of the comparison table
result = pd.concat([Mean_StD, comparison], ignore_index=True)
print(result)
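If you would rather keep readable labels on the two summary rows instead of the 0 and 1 produced by ignore_index=True, a small variation (continuing from the same variables as above) is:
# Label the summary rows, then stack them on top of the comparison table
Mean_StD.index = ['mean', 'std']
result = pd.concat([Mean_StD, comparison])
print(result)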
I have a 2D numpy array with rows being time series of a feature, based on which I'm training a neural network. For generalisation purposes, I would like to subset these time series at random points. I'd like them to have a minimum subset length as well. However, the network requires fixed length time series, so I need to pre-pad the resulting subsets with zeroes.
Currently, I'm doing it using the code below, which includes a nasty for-loop, because I don't know how to use fancy indexing for this particular problem. As this piece of code is part of the network's data generator, it needs to be fast enough to keep pace with the data-hungry GPU. Does anyone know a numpy way of doing this without the for-loop?
import numpy as np
import matplotlib.pyplot as plt
# Amount of time series to consider
batchsize = 25
# Original length of the time series
timesteps = 150
# As an example, fill the 2D array with sine function time series
sinefunction = np.expand_dims(np.sin(np.arange(timesteps)), axis=0)
originalarray = np.repeat(sinefunction, batchsize, axis=0)
# Now the real thing, we want:
# - to start the time series at a random moment (between 0 and maxstart)
# - to end the time series at a random moment
# - however with a minimum length of the resulting subset time series (minlength)
maxstart = 50
minlength = 75
# get random starts
randomstarts = np.random.choice(np.arange(0, maxstart), size=batchsize)
# get random stops
randomstops = np.random.choice(np.arange(maxstart + minlength, timesteps), size=batchsize)
# determine the resulting random sizes of the subset time series
randomsizes = randomstops - randomstarts
# finally create a new 2D array with all the randomly subset time series, however pre-padded with zeros
# THIS IS THE FOR LOOP WE SHOULD TRY TO AVOID
cutarray = np.zeros_like(originalarray)
for i in range(batchsize):
    cutarray[i, -randomsizes[i]:] = originalarray[i, randomstarts[i]:randomstops[i]]
To show what goes in and out of the function:
# Show that it worked
f, ax = plt.subplots(2, 1)
ax[0].imshow(originalarray)
ax[0].set_title('original array')
ax[1].imshow(cutarray)
ax[1].set_title('zero-padded subset array')
Approach #1 : Views-based
We can leverage scikit-image's view_as_windows, which is built on np.lib.stride_tricks.as_strided, to get sliding windowed views into a zero-padded version of the input and assign into a zero-padded version of the output. All of that padding is needed for a vectorized solution because of the ragged nature of the subsets. The upside is that working on views keeps the solution efficient in both memory and performance.
The implementation would look something like this -
from skimage.util.shape import view_as_windows

# Window length is the longest subset; pad the input so every window fits
n = randomsizes.max()
max_extent = randomstarts.max() + n
padlen = max_extent - originalarray.shape[1]
p = np.zeros((originalarray.shape[0], padlen), dtype=originalarray.dtype)
a = np.hstack((originalarray, p))

# For every row, grab the length-n window that starts at its random start
w = view_as_windows(a, (1, n))[..., 0, :]
out_vals = w[np.arange(len(randomstarts)), randomstarts]

# Write each window into a padded output so the wanted part ends at the original right edge;
# anything beyond randomstops spills past that edge and is trimmed off below
out_starts = originalarray.shape[1] - randomsizes
out_extensions_max = out_starts.max() + n
out = np.zeros((originalarray.shape[0], out_extensions_max), dtype=originalarray.dtype)
w2 = view_as_windows(out, (1, n))[..., 0, :]
w2[np.arange(len(out_starts)), out_starts] = out_vals
cutarray_out = out[:, :originalarray.shape[1]]
Approach #2 : With masking
cutarray_out = np.zeros_like(originalarray)
r = np.arange(originalarray.shape[1])
# Mask of each row's randomly chosen subset in the input ...
m = (randomstarts[:, None] <= r) & (randomstops[:, None] > r)
# ... and mask of its right-aligned destination in the zero-padded output
s = originalarray.shape[1] - randomsizes
m2 = s[:, None] <= r
cutarray_out[m2] = originalarray[m]
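Either way, a quick sanity check against the loop from the question (a small sketch, assuming cutarray from above is still in scope) is:
# The vectorized result should reproduce the loop-based zero-padded array exactly
assert np.array_equal(cutarray, cutarray_out)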
Sup guys, I'm new to Python and new to neural networks as well. I'm trying to implement a neural network to predict the Close price of Bitcoin on a given day, based on the Open price on the same day. So I have a CSV file, and I'm trying to use the 'Open' column as input and the 'Close' column as target, as you can see in the code below:
from sklearn.neural_network import MLPClassifier
import numpy as np
import pandas as pd
dataset = pd.read_csv('BTC_USD.csv')
X = dataset['Open']
y = dataset['Close']
NeuralNetwork = MLPClassifier(verbose = True,
max_iter = 1000,
tol = 0,
activation = 'logistic')
NeuralNetwork.fit(X, y)
When I run the code I get this error:
ValueError: Expected 2D array, got 1D array instead:
array=[4.95100000e-02 4.95100000e-02 8.58400000e-02 ... 6.70745996e+03
6.66883984e+03 7.32675977e+03].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
After this error, I did some research here in stackoverflow, and I tried some solutions proposed in other posts, like this one:
from sklearn.neural_network import MLPClassifier
import numpy as np
import pandas as pd
dataset = pd.read_csv('BTC_USD.csv')
X = np.array(dataset[['Open']])
X = X.reshape(-1, 1)
y = np.array(dataset[['Close']])
y = y.reshape(-1, 1)
NeuralNetwork = MLPClassifier(verbose = True,
max_iter = 1000,
tol = 0,
activation = 'logistic')
NeuralNetwork.fit(X, y)
After running this code, I get this new error:
ValueError: Unknown label type: (array([4.95100000e-02, 8.58400000e-02, 8.08000000e-02, ...,
6.66883984e+03, 6.30685010e+03, 7.49379980e+03]),)
and this warning at the first line (which contains the directory):
DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
Could you help me, please? I tried many solutions, but none of them worked.
You should use the values attribute of the DataFrame to get the elements of one column. In addition, what you want to achieve is a regression, not a classification, so you must use a regressor such as MLPRegressor, as follows:
from sklearn.neural_network import MLPRegressor
import numpy as np
import pandas as pd
dataset = pd.read_csv('BTC_USD.csv')
X = dataset["Open"].values.reshape(-1, 1)
y = dataset["Close"].values
NeuralNetwork = MLPRegressor(verbose = True,
max_iter = 1000,
tol = 0,
activation = "logistic")
NeuralNetwork.fit(X, y)
The code works now, but the results will not be good until you work on the features and your network hyperparameters; that, however, is beyond the scope of SO.
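As an illustration of the kind of feature work meant here, a minimal sketch (the scaler and pipeline are my suggestion rather than part of the answer above, and the hyperparameters are placeholders) standardizes the input before fitting:
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
import pandas as pd

dataset = pd.read_csv('BTC_USD.csv')
X = dataset["Open"].values.reshape(-1, 1)
y = dataset["Close"].values

# Standardizing the single feature helps keep the logistic units out of saturation
model = make_pipeline(StandardScaler(), MLPRegressor(max_iter=1000, activation="logistic"))
model.fit(X, y)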
I have a matrix of data with 1024 columns of 100 values each where I'm trying to perform a Gaussian fit to each column and save the results in a new array. My code is the following:
from astropy.io import fits
from astropy.modeling import models, fitting
import numpy as np
Image1 = fits.open('Image.fits')
Image_data = Image1[0].data[:, :]  # assuming the image lives in the primary HDU
x = np.linspace(-50,50,50)
Gauss_Model = models.Gaussian1D(amplitude=1000., mean=0, stddev=1.)
Fitting_Model = fitting.LevMarLSQFitter()
Fit_Data = Fitting_Model(Gauss_Model, x, Image_data[:,0])
This code works just fine and gives a fit to the first column in Image_data, but I want it to perform a fit to all 1024 columns of data in Image_data and save the results in a new array. I tried to use a for-loop but it didn't work. I would very much appreciate some help with how to do this, thanks!
You should store the results in a list:
Fit_Data = []
for i in range(0, Image_data.shape[1]):
    Fit_Data.append(Fitting_Model(Gauss_Model, x, Image_data[:, i]))
To retrieve the fit results for a specific column, you can call Fit_Data[32], for example.
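If you then want the fitted parameters in a plain numpy array rather than a list of model objects, a small sketch (relying on each fitted Gaussian1D exposing its parameters attribute) is:
import numpy as np

# One row per column of Image_data: (amplitude, mean, stddev) of each fitted Gaussian
Fit_Params = np.array([fit.parameters for fit in Fit_Data])
print(Fit_Params.shape)  # expected (1024, 3)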
I have a time series with over a hundred million rows of data. I am trying to reshape it to include a time window. My sample data is of shape (79499, 9) and I am trying to reshape it to (79979, 10, 9). The following for loop works fine in numpy.
def munge(data, backprop_window):
    result = []
    for index in range(len(data) - backprop_window):
        result.append(data[index: index + backprop_window])
    return np.array(result)

X_train = munge(X_train, backprop_window)
I have tried a few variations with dask, but all of them seem to hang without giving any error messages, including this one:
import h5py
import dask.array as da
f1 = h5py.File("data.hdf5")
X_train = f1.create_dataset('X_train',data = X_train, dtype='float32')
x = da.from_array(X_train, chunks=(10000, X_train.shape[1]))
result = x.compute(munge(x, backprop_window))
Any wise thoughts appreciated.
This doesn't necessarily solve your dask issue, but as a much faster alternative to munge, you could instead use numpy's stride_tricks to create a rolling view into your data (based on example here).
def munge_strides(data, backprop_window):
    """ take a rolling view into the array by manipulating strides """
    from numpy.lib.stride_tricks import as_strided
    new_shape = (data.shape[0] - backprop_window,
                 backprop_window,
                 data.shape[1])
    new_strides = (data.strides[0], data.strides[0], data.strides[1])
    return as_strided(data, shape=new_shape, strides=new_strides)
X_train = np.arange(100).reshape(20, 5)
np.array_equal(munge(X_train, backprop_window=3),
munge_strides(X_train, backprop_window=3))
Out[112]: True
as_strided needs to be used very carefully: it is an 'advanced' feature, and incorrect parameters can easily lead you into segfaults; see its docstring.
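For what it's worth, newer numpy versions (1.20 and later) expose the same idea through a safer public helper, sliding_window_view; a sketch of an equivalent function (written to match munge, not taken from the answer above):
from numpy.lib.stride_tricks import sliding_window_view

def munge_view(data, backprop_window):
    # sliding_window_view puts the window axis last, so reorder to (n_windows, window, features)
    windows = sliding_window_view(data, backprop_window, axis=0)
    # drop the last window to match munge(), which stops at len(data) - backprop_window
    return windows[:-1].transpose(0, 2, 1)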