Reshape Keras Input for LSTM - python

I have two ndarrays, inputs and results, both consisting of multiple arrays looking like this:
inputs = [
[[1,2],[2,2],[3,2]],
[[2,1],[1,2],[2,3]],
[[2,2],[1,1],[3,3]],
...
]
results = [
[3,4,5],
[3,3,5],
[4,2,6],
...
]
I managed to split them up into train and test arrays, where train contains 66% of the arrays and test the remaining ones. Now I'd like to reshape them for further use in my LSTM, but my script fails when I pass them to the np.reshape() function.
split = int(round(0.66 * results.shape[0]))
train_results = results[:split, :]
train_inputs = inputs[:split, :]
test_results = results[split:, :]
test_inputs = inputs[split:, :]
X_train = np.reshape(train_inputs, (train_inputs.shape[0], train_inputs.shape[1], 1))
X_test = np.reshape(test_inputs, (test_inputs.shape[0], test_inputs.shape[1], 1))
Please tell me how to use np.reshape() correctly in this case.
Basically I am loosely following this tutorial: https://github.com/Vict0rSch/deep_learning/tree/master/keras/recurrent

You just pass a tuple to np.reshape.
For an LSTM layer, the input must be shaped as (NumberOfExamples, TimeSteps, FeaturesPerStep).
So we need to know how many steps your sequences have. By the looks of your inputs array, I'll suppose you have 3 steps and 2 features.
If that's the case:
X_train = train_inputs.reshape((split, 3, 2))
X_test = test_inputs.reshape((test_inputs.shape[0], 3, 2))
If, instead, you want 6 steps of one feature each, the shape is (split, 6, 1). Any shape works, as long as the product of the three elements in the shape stays equal to the total number of values.
As for the results: do you want them to form a sequence, one result matching each input step? Or are they just independent outputs (three independent outputs for the entire sequence)?
Since you've got 3 results, and I have assumed you have 3 time steps, I'll assume these 3 results are in sequence as well, so, I'll reshape them as:
Y_train = train_results.reshape((split,3,1)) #three steps, one result per step
#for this to work, your last LSTM layer should use `return_sequences=True`.
But if they are 3 independent results:
Y_train = train_results.reshape((split,3))
#for this to work, you must have 3 cells in the last layer, be it a Dense or an LSTM. But this LSTM must have `return_sequences=False`.
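To make the two options concrete, here is a minimal model sketch (the 3-steps/2-features assumption and the unit counts are mine, not from the question):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Option A: one result per time step -> 3D targets, last LSTM uses return_sequences=True
model_a = Sequential()
model_a.add(LSTM(16, input_shape=(3, 2), return_sequences=True))
model_a.add(Dense(1))   # applied per step -> output shape (batch, 3, 1)
model_a.compile(optimizer='adam', loss='mse')

# Option B: three independent results for the whole sequence -> 2D targets
model_b = Sequential()
model_b.add(LSTM(16, input_shape=(3, 2)))   # return_sequences=False is the default
model_b.add(Dense(3))                       # output shape (batch, 3)
model_b.compile(optimizer='adam', loss='mse')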

Related

Input Shape for Keras Model with multiple one-hot arrays

I don't quite understand how the input shape and dimensions of a Keras model work when trying to use multiple one-hot encoded arrays.
For example, this is my feature state containing 9 one-hot encoded arrays.
features = [[first_one_hot] + [second_one_hot] + \
            [third_one_hot] + [fourth_one_hot] + [sixt_one_hot] + [seventh_one_hot] + ...]
having a shape of: (3, 4, 4, 5, 5, 5, 10, 10, 10), where:
Shape: a shape (30,4,10) means an array or tensor with 3 dimensions, containing 30 elements in the first dimension, 4 in the second and 10 in the third, totaling 30 * 4 * 10 = 1200 elements or numbers.
If I just unpack each of my one-hot arrays into one flat vector, my model works given a shape of (1, 56) - but as far as I understand, the model does not quite know which values correspond to which one-hot when I do so.
Question 1
First of all, am I understanding right that each of the features concatenated in the array above should be separated, instead of using a (1, 56) array as I mentioned? Let's say, instead of:
[1,0,0,0,1,0,0,0,...] use:
[1,0,0], [1,0,0,0], ...
If so, how should I give the separate one-hots to the model? I'm new at machine learning, so this might be a strange question to ask.
Question 2
If so, what could be the advantage of also grouping thematically similar one-hots into separate input layers?
My build_model right now uses just one input dim with a (1, 56) layer size:
def _build_model(self, hl1_dims, hl2_dims, hl3_dims, input_layer_size, output_layer_size, optimizer, loss):
    model = Sequential()
    # My input_layer_size is set to 9, as I have 9 dimensions
    model.add(Dense(hl1_dims, input_dim=input_layer_size))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    # Second Hidden Layer
    ...
As far as I understand, I could also use multiple input layers, something like this:
input_3d = Input(shape=(3,))
input_4d = Input(shape=(4,))
input_5d = Input(shape=(5,))
input_10d = Input(shape=(10,))
# multiple branches, for example (the 4d/5d/10d branches are built the same way):
branch_3d = Dense(32, activation='relu')(input_3d)
branch_3d = Dense(32, activation='relu')(branch_3d)
m_3d = Model(inputs=input_3d, outputs=branch_3d)
# Combine all output of branches
combined = Concatenate(axis=1)([m_3d.output, m_4d.output, m_5d.output, m_10d.output])
# Apply FC Layer
out = Dense(16, activation='relu')(combined)
out = Dense(output_layer_size, activation='linear')(out)
# Model accepts inputs of all branches and output action space based on output_layer_size
model = Model(inputs=[m_3d.input, m_4d.input, m_5d.input, m_10d.input], outputs=out)
I tried the above implementation but never really got it to work; I mostly got errors like:
ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (1, 1, 9)
But as I said, I'm not even sure if you should split categorical inputs into separate layers or if it's best practice to just combine all categorical features into one shape. I would really appreciate any input on this.
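For what it's worth, here is a minimal runnable sketch of the multi-input approach described above (the one-hot lengths 3, 4, 5, 10, the layer sizes, and the output size of 2 are all assumptions): the key point is that a multi-input model expects a list of 2D arrays, one per Input, each shaped (batch_size, one_hot_length).
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, Concatenate

# one Input per one-hot group (lengths are illustrative)
inputs = [Input(shape=(n,)) for n in (3, 4, 5, 10)]
branches = [Dense(16, activation='relu')(x) for x in inputs]
combined = Concatenate(axis=1)(branches)
out = Dense(16, activation='relu')(combined)
out = Dense(2, activation='linear')(out)
model = Model(inputs=inputs, outputs=out)
model.compile(optimizer='adam', loss='mse')

# each input is its own 2D array of shape (batch_size, one_hot_length)
x = [np.eye(n)[[0]] for n in (3, 4, 5, 10)]   # batch of 1, first category of each group
model.predict(x)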

Cannot reshape array of size x into shape y

I'm following a tutorial to create an LSTM neural network using keras.
I have an array of 1270 rows and 26 features.
I split the data like this:
train_ind = int(0.8 * X.shape[0])
X_train = X[:train_ind]
X_test = X[train_ind:]
y_train = y[:train_ind]
y_test = y[train_ind:]
And I'm trying to reshape it for the LSTM using this:
num_steps = 4
X_train_shaped = np.reshape(X_train_sc, newshape=(-1, num_steps, 26))
y_train_shaped = np.reshape(y_train_sc, newshape=(-1, num_steps, 26))
assert X_train_shaped.shape[0] == y_train_shaped.shape[0]
However, i'm getting this error:
ValueError: cannot reshape array of size 1016 into shape (4,26)
Well, 4 x 26 = 104, and 1270 isn't divisible by 104, so np.reshape() can't choose an integer number of rows (the -1) in order to fit that into an array. You need to change either num_steps or num_features (26) so that num_steps * num_features evenly divides 1270. Unfortunately, this is impossible with num_features = 26, since 13 does not divide 1270. Your other option is to choose a different number of total rows, say 1040 or 1144, which are both divisible by 104.
So, instead of setting train_ind = int(0.8 * X.shape[0]), try train_ind = 1040 or a smaller multiple of 104. Note, however, that your test data will also have to have a suitable number of rows in order to reshape it in the same way.
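Following that suggestion, a small sketch of the arithmetic (picking the largest multiple of 104 that fits inside the original 80% split):
num_steps = 4
num_features = 26
window = num_steps * num_features            # 104 values per reshaped block

desired = int(0.8 * 1270)                    # 1016, the original split point
train_ind = (desired // window) * window     # 936, the largest multiple of 104 below 1016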
First of all, you don't need to reshape an array. The shape attribute of a numpy array simply determines how the underlying data is displayed to you and how the data is accessed; changing the shape doesn't actually move any data around.
Likewise, note that you cannot reshape an array into a shape with a different total number of elements. For example, an array of shape (100, 5, 6) cannot become (100, 5, 7): the axis sizes must multiply to the same total, and 100*5*6 ≠ 100*5*7.
In your case, it sounds like you want to work with an LSTM, which would normally mean that you simply want to add an additional axis so that you have input vectors of size 1. A new axis can be added with a None entry in numpy. Something like:
X_train = X[:train_ind,:,None] #The axes are Batch, Time, and the Input Vector.
Shape should now be (1016,26,1).
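As a quick sanity check of the None-axis trick with the question's numbers (1270 rows, 26 features):
import numpy as np

X = np.random.random((1270, 26))
train_ind = int(0.8 * X.shape[0])      # 1016
X_train = X[:train_ind, :, None]       # axes: batch, time, input vector
print(X_train.shape)                   # (1016, 26, 1)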

(Keras) Apply pad_sequences for deeper levels // Variable label length

I've got label data shaped (2000, 2, x), where x is between 100 and 250 for each of the 2000 sets and 2 is the x and y coordinates. To my understanding, fitting my model as in the code below would only pad along the coordinate dimension.
model.fit(
    x=train_data,
    y=keras.preprocessing.sequence.pad_sequences(train_labels, maxlen=250),
    epochs=EPOCHS,
    batch_size=BATCH_SIZE)
So how can I bring all of these labels to the same length since that seems necessary in order to use them to train the model?
I imagine the labels are going to be a somewhat sparse matrix with shape (2000, 2, 250) if you account for padding, right? And you're attempting to predict, for each example, a 2D matrix of shape (2, 250)?
Anyway, the padding you currently have will only affect the coordinates dimension.
A hack to get padding on the last dimension would be to permute the axis of the data and add padding then permute back to original shape:
perm_y = np.moveaxis(y, 1, 2)
padded_perm_y = keras.preprocessing.sequence.pad_sequences(
    perm_y, maxlen=250, padding='post', truncating='post')
padded_y = np.moveaxis(padded_perm_y, 2, 1)
It turned out that np.pad works here (while the np.moveaxis + pad_sequences approach didn't). So I'm iterating over my input twice: once to get the maximum length, and a second time to apply np.pad to a new array of shape (training_samples, coordinates, maximum_sequence_length).
While I don't know whether padding distorts the output of the CNN-LSTM, I'm glad that the above error doesn't arise any longer.
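A sketch of that two-pass np.pad approach (variable names are assumptions; train_labels is taken to be a list of 2000 arrays, each shaped (2, x)):
import numpy as np

max_len = max(label.shape[1] for label in train_labels)     # first pass: find the longest sequence
padded = np.zeros((len(train_labels), 2, max_len))          # (samples, coordinates, max_len)
for i, label in enumerate(train_labels):
    width = max_len - label.shape[1]
    padded[i] = np.pad(label, ((0, 0), (0, width)), mode='constant')   # zero-pad the last axis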
For padding sequences with deeper levels (list of lists of lists,..) you can use ragged tensors and convert to tensors/arrays. For example:
import tensorflow as tf
padded_y = tf.ragged.constant(train_labels).to_tensor(0.)
This pads with 0.
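A tiny toy example (the label values are made up) showing the zero-padding:
import tensorflow as tf

train_labels = [[[1., 2.], [3., 4.], [5., 6.]],
                [[7., 8.]]]                       # inner lengths 3 and 1
padded_y = tf.ragged.constant(train_labels).to_tensor(0.)
print(padded_y.shape)                             # (2, 3, 2); the short example is zero-padded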

How to reshape input for keras LSTM?

I have a numpy array of some 5000 rows and 4 columns (temp, pressure, speed, cost), so it has shape (5000, 4). Each row is an observation at a regular interval. This is the first time I'm doing time-series prediction, and I'm stuck on the input shape. I'm trying to predict a value 1 timestep beyond the last data point. How do I reshape the array into the 3D form for an LSTM model in Keras?
Also, it would be much more helpful if a small sample program were written. There doesn't seem to be any example/tutorial where the input has more than one feature (and that also isn't NLP).
The first question you should ask yourself is:
What is the timescale in which the input features encode relevant information for the value you want to predict?
Let's call this timescale prediction_context.
You can now create your dataset :
import numpy as np
recording_length = 5000
n_features = 4
prediction_context = 10 # Change here
# The data you already have
X_data = np.random.random((recording_length, n_features))
to_predict = np.random.random((5000,1))
# Make lists of training examples
X_in = []
Y_out = []
# Append examples to the lists (input and expected output)
for i in range(recording_length - prediction_context):
    X_in.append(X_data[i:i+prediction_context,:])
    Y_out.append(to_predict[i+prediction_context])
# Convert them to numpy array
X_train = np.array(X_in)
Y_train = np.array(Y_out)
At the end:
X_train.shape = (recording_length - prediction_context, prediction_context, n_features)
So you will need to make a trade-off between the length of your prediction context and the number of examples you will have to train your network.
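Since the question asks for a small sample program, here is a minimal model to go with the dataset built above (a sketch; the unit count and training settings are arbitrary choices):
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(32, input_shape=(prediction_context, n_features)))  # (timesteps, features)
model.add(Dense(1))                                                # one predicted value per window
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, Y_train, epochs=10, batch_size=32)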

Keras sequence prediction with multiple simultaneous sequences

My question is very similar to what it seems this post is asking, although that post doesn't pose a satisfactory solution. To elaborate, I am currently using keras with tensorflow backend and a sequential LSTM model. The end goal is I have n time-dependent sequences with equal time steps (the same number of points on each sequence and the points are all the same time apart) and I would like to feed all n sequences into the same network so it can use correlations between the sequences to better predict the next step for each sequence. My ideal output would be an n-element 1-D array with array[0] corresponding to the next-step prediction for sequence_1, array[1] for sequence_2, and so on.
My inputs are sequences of single values, so each of n inputs can be parsed into a 1-D array.
I was able to get a working model for each sequence independently using the code at the end of this guide by Jakob Aungiers, although my difficulty is adapting it to accept multiple sequences at once and correlate between them (i.e. be analyzed in parallel). I believe the issue is related to the shape of my input data, which is currently in the form of a 4-D numpy array because of how Jakob's Guide splits the inputs into sub-sequences of 30 elements each to analyze incrementally, although I could also be completely missing the target here. My code (which is mostly Jakob's, not trying to take credit for anything that isn't mine) presently looks like this:
As-is this complains with "ValueError: Error when checking target: expected activation_1 to have shape (None, 4) but got array with shape (4, 490)", I'm sure there are plenty of other issues but I'd love some direction on how to achieve what I'm describing. Anything stick out immediately to anyone? Any help you could give will be greatly appreciated.
Thanks!
-Eric
Keras is already prepared to work with batches containing many sequences, there is no secret at all.
There are two possible approaches, though:
You input your entire sequences (all steps at once) and predict n results
You input only one step of all sequences and predict the next step in a loop
Suppose:
nSequences = 30
timeSteps = 50
features = 1 #(as you said: single values per step)
outputFeatures = 1
First approach: stateful=False:
inputArray = arrayWithShape((nSequences,timeSteps,features))
outputArray = arrayWithShape((nSequences,outputFeatures))
input_shape = (timeSteps,features)
#use layers like this:
LSTM(units) #if the first layer in a Sequential model, add the input_shape
#if you want to return the same number of steps (like a new sequence parallel to the input), use return_sequences=True
Train like this:
model.fit(inputArray,outputArray,....)
Predict like this:
newStep = model.predict(inputArray)
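To make the first approach concrete, here is a runnable sketch with dummy data (the unit count and training settings are arbitrary choices, not from the answer):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

nSequences, timeSteps, features, outputFeatures = 30, 50, 1, 1
inputArray = np.random.random((nSequences, timeSteps, features))
outputArray = np.random.random((nSequences, outputFeatures))

model = Sequential()
model.add(LSTM(32, input_shape=(timeSteps, features)))  # one prediction per sequence
model.add(Dense(outputFeatures))
model.compile(optimizer='adam', loss='mse')
model.fit(inputArray, outputArray, epochs=2)

newStep = model.predict(inputArray)  # shape (nSequences, outputFeatures)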
Second approach: stateful=True:
inputArray = sameAsBefore
outputArray = inputArray[:,1:] #one step after input array
inputArray = inputArray[:,:-1] #eliminate the last step
batch_input_shape = (nSequences, 1, features) #stateful layers require the batch size in the input shape
#use layers like this:
LSTM(units, stateful=True) #if the first layer in a Sequential model, add batch_input_shape
Train like this:
model.reset_states() #you need this in stateful=True models
#if you don't reset states,
#the stateful model will think that your inputs are new steps of the same previous sequences
for step in range(inputArray.shape[1]): #for each time step
    model.fit(inputArray[:,step:step+1], outputArray[:,step:step+1], shuffle=False, ...)
Predict like this:
model.reset_states()
predictions = np.empty(inputArray.shape)
for step in range(inputArray.shape[1]): #for each time step
    predictions[:,step] = model.predict(inputArray[:,step:step+1])
