Cannot understand issue! Input tensor must be at least 2D

I have written this simple program to make a prediction (never mind that there is no train/test split). x is a 2D input array of shape (40000, 4).
import numpy as np
from numpy import load
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

tf.get_logger().setLevel('INFO')
tf.autograph.set_verbosity(1)

x = load('dataset/metadata/x.npy')
y = load('dataset/metadata/y.npy')

meta_model = keras.Sequential(
    [
        layers.Dense(3, activation='relu'),
        layers.Dense(2, activation='relu'),
        layers.Dense(1)
    ]
)

meta_model.compile(
    loss=keras.losses.MeanSquaredError(),
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    metrics=[tf.keras.metrics.MeanSquaredError()]
)

meta_model.fit(x, y, batch_size=25, epochs=10, verbose=2)

for i in range(10):
    print(y[i], " vs ", meta_model(x[i]))
In the final few lines I am attempting to make the model output a prediction (I am also aware that the prediction is happening on the same data the model was trained on; I am simply trying to get the model to work). I cannot understand why I am getting the following error (on the last line):
Input tensor must be at least 2D: [3]
Can anyone help explain what I am doing incorrectly?
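A hedged sketch of the likely fix (an assumption based on the error message, not a confirmed answer): Keras models expect a leading batch dimension, so a single row x[i] of shape (4,) needs to be reshaped to (1, 4) before being passed to the model:

# Hypothetical fix: add a batch dimension so each sample is 2D, with shape (1, 4).
for i in range(10):
    sample = np.expand_dims(x[i], axis=0)   # equivalently: x[i:i+1]
    print(y[i], " vs ", meta_model(sample))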

Related

ValueError: Error when checking input: expected simple_rnn_input to have 3 dimensions, but got array with shape (8, 50)

I'm trying to create this SimpleRNN architecture in TensorFlow.
I have input data of shape (8, 50) (rows and columns, respectively) and an output matrix of shape (8, 8).
My code:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2

model = keras.Sequential()
model.add(layers.SimpleRNN(
    units=8,
    activation='sigmoid',
    kernel_regularizer=l2(0.0),
    input_dim=8))

model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss=keras.losses.MeanSquaredError(),
              metrics=[keras.metrics.MeanSquaredError()],)

model.fit(
    x=trainingData,  # (8, 50)
    y=OutputData,    # (8, 8)
    batch_size=5,
    epochs=2000,
    verbose=0,)
I have looked through several Stack Overflow and GitHub suggestions for similar problems, but none of them works.
Could someone help me? I don't understand why all these methods failed.
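A hedged sketch of one common fix (inferred from the error message, not verified against the original data): SimpleRNN expects 3D input of shape (batch, timesteps, features), so the (8, 50) array can be given a trailing feature axis and the layer told to expect (50, 1):

# Hypothetical reshape: treat the 50 columns as 50 timesteps of 1 feature each.
trainingData3d = trainingData.reshape(8, 50, 1)

model = keras.Sequential()
model.add(layers.SimpleRNN(
    units=8,
    activation='sigmoid',
    kernel_regularizer=l2(0.0),
    input_shape=(50, 1)))  # (timesteps, features) instead of input_dim

model.fit(x=trainingData3d, y=OutputData, batch_size=5, epochs=2000, verbose=0)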

How to develop a neural network to predict joint angles from joint positions and orientation

I am new to neural networks. I have a dataset of 3D joint positions (6400×23×3) and orientations as quaternions (6400×23×4), and I want to predict the joint angles for all 22 joints in 3 motion planes (6400×22×3). I have tried to build a model, but it will not run because the input data don't match the output shape, and I can't figure out how to change it.
My code:
import scipy.io
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling2D, Dense, Flatten
from tensorflow.keras.utils import to_categorical

Jaload = scipy.io.loadmat('JointAnglesXsens11MovementsIforlængelse.mat')
Orload = scipy.io.loadmat('OrientationXsens11MovementsIforlængelse.mat')

Or = np.array((Orload['OR'][:, :]), dtype='float')
Ja = np.array((Jaload['JA'][:, :]), dtype='float')

Jalabel = np.array(Ja)
a = 0.6108652382
Jalabel[Jalabel < a] = 0
Jalabel[Jalabel > a] = 1

Ja3d = np.array(Jalabel.reshape(6814, 22, 3))  # there are 22 joint angles
Or3d = np.array(Or.reshape(6814, 23, 4))       # there are 23 segments

X_train = np.array(Or3d)
Y_train = np.array(Ja3d)

model = Sequential([
    Dense(64, activation='relu', input_shape=(23, 4)),
    Dense(64, activation='relu'),
    Dense(3, activation='softmax'),])

model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam')  # works
model.fit(
    X_train,
    to_categorical(Y_train),
    epochs=3,)
Running the model.fit returns with:
ValueError: A target array with shape (6814, 22, 3, 2) was passed for an output of shape (None, 3) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output.
Here are some suggestions that might get you further down the road:
(1) You might want to insert a "Flatten()" layer just before the final Dense. This will basically collapse the output from the previous layers into a single dimension.
(2) You might want to make the final Dense layer have 22*3=66 units as opposed to three. Each output unit will represent a particular joint angle.
(3) You might want to likewise collapse the Y_train to be (num_samples, 22*3) using the numpy reshape.
(4) You might want to make the final Dense layer have "linear" activation instead of "softmax" - softmax will force the outputs to sum to 1 as a probability.
(5) Don't convert the y_train to categorical. It is already in the correct format I believe (after you reshape it to match the revised output of the model).
(6) The loss to use is probably not "categorical_crossentropy" but rather "mse" (mean squared error).
Hopefully some of the above will move you in the right direction; a sketch combining these suggestions follows below.
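A minimal sketch pulling suggestions (1)-(6) together (the reshape of the raw Ja array is an assumption; treat this as illustrative rather than a verified solution):

# Regress directly on the raw joint angles, flattened to (samples, 66).
Y_train_flat = Ja.reshape(6814, 22 * 3)  # hypothetical: raw angles, no thresholding

model = Sequential([
    Dense(64, activation='relu', input_shape=(23, 4)),
    Dense(64, activation='relu'),
    Flatten(),                           # (1) collapse to a single dimension
    Dense(22 * 3, activation='linear'),  # (2) + (4) one linear unit per joint angle
])
model.compile(loss='mse', optimizer='adam')  # (6) regression loss
model.fit(X_train, Y_train_flat, epochs=3)   # (5) no to_categorical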

How to get data from within a Keras model for visualisation?

I am using TensorFlow 1.12, which has Keras integrated, together with Python 3.6.x.
I wish to use Keras for its simplicity of model building, but I would also like to use the data on the intermediate layers for visualization of feature maps and kernels, to better understand how machine learning works (even though this is admittedly not so evident).
I am using the MNIST database and a very basic Keras model to try to do what I want to do.
Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import keras

print(tf.VERSION)
print(tf.keras.__version__)
tf.keras.backend.clear_session()

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train_shaped = np.expand_dims(x_train, axis=3) / 255.0
x_test_shaped = np.expand_dims(x_test, axis=3) / 255.0

def create_model():
    model = tf.keras.models.Sequential([
        keras.layers.Conv2D(32, kernel_size=(4, 4), strides=(1, 1),
                            activation='relu', input_shape=(28, 28, 1)),
        keras.layers.Dropout(0.5),
        keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
        keras.layers.Conv2D(24, kernel_size=(8, 8), strides=(1, 1)),
        keras.layers.Flatten(),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(128, activation=tf.nn.relu),
        keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    return model
The above sets up the dataset and the model.
Next I define my session for TensorFlow and do the training.
This all works fine, but now I want to get the data for, say, the first layer out, ideally as a NumPy array on which I can do the visualization.
model.layers[0].output gives me a Tensor of shape (?, 25, 25, 32) as expected, and I then try eval() followed by the .numpy() method to get my result.
The error message is
You must feed a value for placeholder tensor 'conv2d_6_input' with dtype float and shape [?,28,28,1]
I am looking for help on how to get my data (32 feature maps of 25×25 pixels) out as a NumPy array for visualization.
sess = tf.Session(graph=tf.get_default_graph())
tf.keras.backend.set_session(sess)

with sess.as_default():
    model = create_model()
    model.summary()
    model.fit(x_train_shaped[:10000], y_train[:10000], epochs=2,
              batch_size=64, validation_split=.2,)
    model.layers[0].output
    print(model.layers[0].output.shape)
    my_array = model.layers[0].output
    my_array.eval()

tf.keras.backend.clear_session()
sess.close()
First of all, note that getting the output of a model or a layer only makes sense when you feed the input layers some data: you give the model something (i.e. input data) and you get something in return (i.e. an output, feature map, or activation map). That's why it produces the following error:
You must feed a value for placeholder tensor 'conv2d_6_input'
You haven't fed the baby, so it cries :)
Now, you may feel that building a new Keras model just for this is counterproductive: when you have a large model in the first place, you would rather plug in some ready-made code that extracts and visualizes the feature maps. But I think you are mistakenly assuming that when you construct a new model out of the layers of another model, a whole new model is cloned. That's not the case, since the parameters of the layers are shared.
Concretely, what you are looking for can be achieved like this:
from tensorflow.keras.models import Model

viz_conv = Model(model.input, model.layers[0].output)
conv_active = viz_conv(my_input_data)  # my_input_data is a numpy array of shape (num_samples, 28, 28, 1)
All the parameters of viz_conv are shared with model; they have not been copied. Under the hood, the two models use the same weight tensors.
Alternatively, you could define a backend function to do this:
from tensorflow.keras import backend as K

viz_func = K.function([model.input], [model.layers[0].output])  # list any layer outputs you like here
output = viz_func([my_input_data])
This has been covered in the Keras documentation as well, and I highly recommend reading it.
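As a follow-up to the K.function route, a small hedged sketch of the visualization step itself (matplotlib and the 4×8 grid layout are assumptions; output comes from the snippet above):

import matplotlib.pyplot as plt

feature_maps = output[0]  # numpy array of shape (num_samples, 25, 25, 32)
fig, axes = plt.subplots(4, 8, figsize=(16, 8))
for k, ax in enumerate(axes.flat):
    ax.imshow(feature_maps[0, :, :, k], cmap='gray')  # k-th feature map of the first sample
    ax.axis('off')
plt.show()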

Error in Keras while doing Multi-class classification

I am trying to do multi-class classification in Keras, using the CrowdFlower dataset. Here is my code:
import pandas as pd
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv('text_emotion.csv')
df.drop(['tweet_id', 'author'], axis=1, inplace=True)
df = df[~df['sentiment'].isin(['empty', 'enthusiasm', 'boredom', 'anger'])]
df = df.sample(frac=1).reset_index(drop=True)

labels = []
texts = []
for i, row in df.iterrows():
    texts.append(row['content'])
    labels.append(row['sentiment'])

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))

data = pad_sequences(sequences)

encoder = LabelEncoder()
encoder.fit(labels)
encoded_Y = encoder.transform(labels)
labels = np.asarray(encoded_Y)

print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)

indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
print(labels.shape)

model = Sequential()
model.add(Embedding(40000, 8, input_length=37))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(9, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data, labels, validation_split=0.2, epochs=150, batch_size=100)
I am getting this error:
ValueError: Error when checking target: expected dense_3 to have shape (9,) but got array with shape (1,)
Can someone please point out the fault in my logic? I understand my question is similar to Exception: Error when checking model target: expected dense_3 to have shape (None, 1000) but got array with shape (32, 2),
but I have not managed to find the bug.
You are making multiple mistakes in that code, and I will suggest some improvements:
Remove the for i, row in df.iterrows(): loop; you can directly use
labels = df['sentiment']
texts = df['content']
When creating the tokenizer, tokenizer = Tokenizer(5000) sets the max number of words; this is the vocabulary size.
When padding, data = pad_sequences(sequences, maxlen=37) provides the max length.
Don't convert the output to just an array of values with labels = np.asarray(encoded_Y); this is not a regression. You have to one-hot encode it:
from keras.utils import np_utils
labels = np_utils.to_categorical(encoded_Y)
In the embedding layer model.add(Embedding(40000, 8, input_length=37)), your vocab size is 40K and the embedding dimension is 8. That doesn't make much sense, as the dataset has close to 40K unique words, which can't all be given a proper embedding. Change to a more sensible vocab size, e.g. model.add(Embedding(5000, 30, input_length=37)). NOTE: if you want to keep 40000, update Tokenizer(5000) to the same number.
Use variables like embedding_dim = 8 and vocab_size = 40000, whatever the values might be.
Instead of model.add(Dense(9, activation='softmax')) as the final layer, use the following; it keeps the code clean:
model.add(Dense(labels.shape[1], activation='softmax'))
Final working code is attached at this Link
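For convenience, a condensed sketch applying the suggestions above (the vocab size, max length, and embedding dimension are illustrative assumptions, not values verified against the linked code):

from keras.utils import np_utils

vocab_size = 5000     # assumed; must match the Tokenizer argument
maxlen = 37
embedding_dim = 30    # assumed embedding size

texts = df['content']
tokenizer = Tokenizer(vocab_size)
tokenizer.fit_on_texts(texts)
data = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=maxlen)

encoder = LabelEncoder()
labels = np_utils.to_categorical(encoder.fit_transform(df['sentiment']))  # one-hot targets

model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(labels.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data, labels, validation_split=0.2, epochs=150, batch_size=100)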

Keras model.predict won't accept input of size one (scalar number)

I'm new to Keras and Python. I'm working with Keras to fit a model to some data and then use model.predict in an optimization; however, model.predict only accepts input as a numpy array of at least 2 elements.
My code is:
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
import numpy as np

x = np.arange(-2, 3.0, 0.01)
y = x**2 - 2*x + 1

model = Sequential()
model.add(Dense(50, activation='sigmoid',
                input_dim=1, init='uniform'))
model.add(Dense(1, activation='linear'))

sgd = SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=False)
model.compile(loss='mean_squared_error',
              optimizer=sgd,  # use the configured SGD instance rather than the 'sgd' string
              metrics=['accuracy'])
model.fit(x, y, nb_epoch=300, batch_size=5, verbose=0)
The code fits fine, but if I try to use model.predict on a scalar number it gives me an error:
(Pdb) model.predict(0.0)
*** Exception: Error when checking : data should be a Numpy array, or list/dict of Numpy arrays. Found: 0.0...
I forced it to be a numpy array but it still failed, saying the input needs to be 2-dimensional:
(Pdb) model.predict(np.asarray(0.0))
*** Exception: Error when checking : expected dense_input_1 to have 2 dimensions, but got array with shape ()
But if I input two numbers, then it gives me the answer:
(Pdb) model.predict([0.0,0.0])
array([[ 1.07415712],
       [ 1.07415712]], dtype=float32)
I need model.predict to take a single number as input for the optimization. I'm not sure which setting I've got wrong. Please help, thanks.
Try:
model.predict(np.asarray(0.0).reshape((1,1)))
In Keras, the first dimension always corresponds to the example number, so it must be provided.
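Equivalently (a usage note, not part of the original answer), any array of shape (num_examples, num_features) works, so a single scalar can be wrapped directly:

# One example with one feature: shape (1, 1).
pred = model.predict(np.array([[0.0]]))
print(pred)  # e.g. array([[ 1.074...]], dtype=float32)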
