weird behaviour when adding matrices [duplicate] - python

I recently started to follow along with Siraj Raval's Deep Learning tutorials on YouTube, but an error came up when I tried to run my code. The code is from the second episode of his series, How To Make A Neural Network. When I ran the code I got this error:
Traceback (most recent call last):
File "C:\Users\dpopp\Documents\Machine Learning\first_neural_net.py", line 66, in <module>
neural_network.train(training_set_inputs, training_set_outputs, 10000)
File "C:\Users\dpopp\Documents\Machine Learning\first_neural_net.py", line 44, in train
self.synaptic_weights += adjustment
ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
I checked multiple times with his code and couldn't find any differences, and even tried copying and pasting his code from the GitHub link. This is the code I have now:
from numpy import exp, array, random, dot

class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        random.seed(1)

        # We model a single neuron, with 3 input connections and 1 output connection.
        # We assign random weights to a 3 x 1 matrix, with values in the range -1 to 1
        # and mean 0.
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    # The Sigmoid function, which describes an S shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def __sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function.
    # This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error.
    # Adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in range(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # Calculate the error (The difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output

            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # This means inputs, which are zero, do not cause changes to the weights.
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        # Pass inputs through our neural network (our single neuron).
        return self.__sigmoid(dot(inputs, self.synaptic_weights))

if __name__ == '__main__':
    # Initialize a single neuron neural network
    neural_network = NeuralNetwork()
    print("Random starting synaptic weights:")
    print(neural_network.synaptic_weights)

    # The training set. We have 4 examples, each consisting of 3 input values
    # and 1 output value.
    training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_set_outputs = array([[0, 1, 1, 0]])

    # Train the neural network using a training set
    # Do it 10,000 times and make small adjustments each time
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    print("New Synaptic weights after training:")
    print(neural_network.synaptic_weights)

    # Test the neural net with a new situation
    print("Considering new situation [1, 0, 0] -> ?:")
    print(neural_network.think(array([[1, 0, 0]])))
Even after copying and pasting the same code that worked in Siraj's episode, I'm still getting the same error.
I just started looking into artificial intelligence, and I don't understand what the error means. Could someone please explain what it means and how to fix it? Thanks!

Change self.synaptic_weights += adjustment to
self.synaptic_weights = self.synaptic_weights + adjustment
self.synaptic_weights must have a shape of (3,1) and adjustment must have a shape of (3,4). The shapes are broadcastable for the addition itself, but NumPy cannot assign the (3,4) result back into an array of shape (3,1), which is exactly what the in-place += asks it to do.
>>> import numpy as np
>>> a = np.ones((3,1), dtype=int)
>>> b = np.random.randint(1,10, (3,4))
>>> a
array([[1],
[1],
[1]])
>>> b
array([[8, 2, 5, 7],
[2, 5, 4, 8],
[7, 7, 6, 6]])
>>> a + b
array([[9, 3, 6, 8],
[3, 6, 5, 9],
[8, 8, 7, 7]])
>>> b += a
>>> b
array([[9, 3, 6, 8],
[3, 6, 5, 9],
[8, 8, 7, 7]])
>>> a
array([[1],
[1],
[1]])
>>> a += b
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
a += b
ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
The same error occurs when using numpy.add and specifying a as the output array
>>> np.add(a,b, out = a)
Traceback (most recent call last):
File "<pyshell#31>", line 1, in <module>
np.add(a,b, out = a)
ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
>>>
A new array a needs to be created instead:
>>> a = a + b
>>> a
array([[10, 4, 7, 9],
[ 4, 7, 6, 10],
[ 9, 9, 8, 8]])
>>>
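The same thing happens in the posted network: training_set_outputs has shape (1, 4) while output has shape (4, 1), so error = training_set_outputs - output broadcasts to (4, 4), and adjustment = dot(training_set_inputs.T, ...) comes out as (3, 4), which += cannot write back into the (3, 1) weights. A quick shape trace (an illustrative REPL session, not from the original post):
>>> training_set_outputs.shape, output.shape
((1, 4), (4, 1))
>>> error = training_set_outputs - output
>>> error.shape    # (1, 4) and (4, 1) broadcast to (4, 4)
(4, 4)
>>> adjustment.shape    # dot of (3, 4) with (4, 4)
(3, 4)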

Hopefully by now you have got the code running, but the difference between his code and your code is this line:
training_output = np.array([[0,1,1,0]]).T
When transposing, don't forget the two pairs of square brackets. I had the same problem with the same code, and this worked for me.
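To see why the two pairs of brackets matter, here is a quick shape check (a REPL sketch, assuming numpy is imported as np):
>>> np.array([0, 1, 1, 0]).T.shape    # 1-D array: .T is a no-op
(4,)
>>> np.array([[0, 1, 1, 0]]).T.shape  # 2-D row matrix: .T gives a column
(4, 1)
With the (4, 1) column as training_set_outputs, error has shape (4, 1), adjustment becomes dot of (3, 4) with (4, 1), i.e. (3, 1), and the in-place += matches the weights again.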
Thanks

Related

tf slices with lambda layer only uses last index

TLDR
My Lambda layers for taking tensor slices only get the last column of the data.
I have a (Batch_size, R) shaped tensor that I will be running through an embedding layer for each of the R features separately. I wrote the following code to split the input (Batch_size, R) shaped tensor into R (None,) slices.
R = 2
inp = tf.keras.Input(shape=(R,), dtype=tf.int32)
SLICES = []
for i in range(R):
    slice_ = tf.keras.layers.Lambda(lambda a: a[:,i], name=f"slice_{i}", dtype=tf.int32)(inp)
    SLICES.append(slice_)
model = tf.keras.Model(inputs=inp, outputs=SLICES)
Running tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True) makes it appear that the code works. Running data through the model shows that there is a problem: the model takes the last feature and copies it for all outputs.
input_ = np.array([[1,2],[3,4],[5,6]])
model.predict(input_)
[array([2, 4, 6], dtype=int32), array([2, 4, 6], dtype=int32)]
Approach 1
I "fixed" the problem in the R=2 case by getting rid of the for loop and writing each layer by hand.
slice1 = tf.keras.layers.Lambda(lambda a: a[:,0], name=f"first_slice", dtype=tf.int32)(inp)
slice2 = tf.keras.layers.Lambda(lambda a: a[:,1], name=f"second_slice", dtype=tf.int32)(inp)
model = tf.keras.Model(inputs= inp, outputs = [slice1, slice2])
input_ = np.array([[1,2],[3,4],[5,6]])
model.predict(input_)
[array([1, 3, 5], dtype=int32), array([2, 4, 6], dtype=int32)]
This is clearly undesirable for any number of reasons.
Approach 2
Another approach is to do the embedding on the raw features. Unfortunately, I have a CutMix-like layer in front of the embedding operation, preventing me from embedding the raw features.
How can I get the for loop to correctly copy each slice of the tensor?
The reason your first block of code doesn't work is Python's late binding of closures: every lambda refers to the same variable i, so by the time the model runs they all see its final value. You need to write the lambda function like this instead, binding the current value as a default argument: lambda a,k=i: a[:,k]
R = 2
inp = tf.keras.Input(shape=(R,), dtype=tf.int32)
SLICES = []
for i in range(R):
    slice_ = tf.keras.layers.Lambda(lambda a, k=i: a[:,k], name=f"slice_{i}", dtype=tf.int32)(inp)
    SLICES.append(slice_)
model = tf.keras.Model(inp, SLICES)
input_ = np.array([[1,2],[3,4],[5,6]])
print(model.predict(input_))
Outputs:
[array([1, 3, 5], dtype=int32), array([2, 4, 6], dtype=int32)]
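For reference, the late-binding behaviour is plain Python and easy to reproduce without TensorFlow (a minimal sketch):
# All three lambdas close over the same variable i, which is 2 after the loop.
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])      # [2, 2, 2]

# A default argument evaluates i at definition time, capturing each value.
funcs = [lambda k=i: k for i in range(3)]
print([f() for f in funcs])      # [0, 1, 2]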

Proper way to input a scalar into a Tensorflow 2 model

In my Tensorflow 2 model, I want my batch size to be parametric, so that I can dynamically build tensors with the appropriate batch size. I have the following code:
batch_size_param = 128
tf_batch_size = tf.keras.Input(shape=(), name="tf_batch_size", dtype=tf.int32)
batch_indices = tf.range(0, tf_batch_size, 1)
md = tf.keras.Model(inputs={"tf_batch_size": tf_batch_size}, outputs=[batch_indices])
res = md(inputs={"tf_batch_size": batch_size_param})
The code throws an error in tf.range:
ValueError: Shape must be rank 0 but is rank 1
for 'limit' for '{{node Range}} = Range[Tidx=DT_INT32](Range/start, tf_batch_size, Range/delta)' with input shapes: [], [?], []
I think the problem is that tf.keras.Input automatically prepends a batch dimension: it expects the per-sample shape of the input without the batch size and attaches the batch size according to the shape of the input array, which in my case is a scalar. I could just feed the scalar value as a constant integer into tf.range, but then I wouldn't be able to change it after the model graph has been compiled.
Interestingly, I failed to find a proper way to input only a scalar into a TF-2 model even though I checked the documentation, too. So, what would be the best way to handle such a case?
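The rank mismatch is easy to confirm (an illustrative check, not part of the original post):
import tensorflow as tf

inp = tf.keras.Input(shape=(), dtype=tf.int32)
print(inp.shape)  # (None,) -- rank 1, while tf.range expects a rank-0 limit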
Don't use tf.keras.Input and just define the model by subclassing.
import tensorflow as tf

class ScalarModel(tf.keras.Model):
    def __init__(self):
        super().__init__()

    def call(self, x):
        return tf.range(0, x, 1)

print(ScalarModel()(10))
# tf.Tensor([0 1 2 3 4 5 6 7 8 9], shape=(10,), dtype=int32)
I'm not sure if this is actually a good idea, but you could use tf.squeeze like this:
inp = keras.Input(shape=(), dtype=tf.int32)
batch_indices = tf.range(tf.squeeze(inp))
model = keras.Model(inputs=inp, outputs=batch_indices)
so that
model(6)
gives
<tf.Tensor: shape=(6,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5])>
Edit:
Depending on what you want to achieve, it might also be worth looking into ragged tensors:
inp = keras.Input(shape=(), dtype=tf.int32)
batch_indices = tf.ragged.range(inp)
model = keras.Model(inputs=inp, outputs=batch_indices)
would make
model(np.array([6,7]))
return
<tf.RaggedTensor [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5, 6]]>

Why is pytorch softmax function not working?

So this is my code:
import torch.nn.functional as F
import torch
inputs = [1,2,3]
input = torch.tensor(inputs)
output = F.softmax(input, dim=1)
print(output)
Is the reason the code is not working the dim argument? The error is here:
File "c:\Users\user\Desktop\AI\pytorch_jovian\linear_reg.py", line 19, in <module>
output = F.softmax(input, dim=1)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py", line 1583, in softmax
ret = input.softmax(dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Apart from needing dim=0 (the tensor is one-dimensional, so the only valid dimensions are -1 and 0), there is another issue in your code: softmax doesn't work on a long tensor, so it should be converted to a float or double tensor first
>>> input = torch.tensor([1, 2, 3])
>>> input
tensor([1, 2, 3])
>>> F.softmax(input.float(), dim=0)
tensor([0.0900, 0.2447, 0.6652])
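Putting both fixes together, the original script could look like this (a sketch based on the snippet above):
import torch
import torch.nn.functional as F

inputs = [1, 2, 3]
# Build a float tensor directly so softmax can operate on it.
input = torch.tensor(inputs, dtype=torch.float)
# dim=0 is the only valid dimension for a 1-D tensor.
output = F.softmax(input, dim=0)
print(output)  # tensor([0.0900, 0.2447, 0.6652])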

Concatenate identity matrix to each vector

I want to modify my input by adding several different suffixes to the input vectors. For example, if the (single) input is [1, 5, 9, 3] I want to create three vectors (stored as matrix) like this:
[[1, 5, 9, 3, 1, 0, 0],
[1, 5, 9, 3, 0, 1, 0],
[1, 5, 9, 3, 0, 0, 1]]
Of course, this is just one observation so the input to the model is (None, 4) in this case. The simple way is to prepare the input data somewhere else (numpy most probably) and adjust the shape of input accordingly. That I can do but I would prefer doing it inside TensorFlow/Keras.
I have isolated the problem into this code:
import keras.backend as K
from keras import Input, Model
from keras.layers import Lambda

def build_model(dim_input: int, dim_eye: int):
    input = Input((dim_input,))
    concat = Lambda(lambda x: concat_eye(x, dim_input, dim_eye))(input)
    return Model(inputs=[input], outputs=[concat])

def concat_eye(x, dim_input, dim_eye):
    x = K.reshape(x, (-1, 1, dim_input))
    x = K.repeat_elements(x, dim_eye, axis=1)
    eye = K.expand_dims(K.eye(dim_eye), axis=0)
    eye = K.tile(eye, (-1, 1, 1))
    out = K.concatenate([x, eye], axis=2)
    return out

def main():
    import numpy as np

    n = 100
    dim_input = 20
    dim_eye = 3

    model = build_model(dim_input, dim_eye)
    model.compile(optimizer='sgd', loss='mean_squared_error')

    x_train = np.zeros((n, dim_input))
    y_train = np.zeros((n, dim_eye, dim_eye + dim_input))
    model.fit(x_train, y_train)

if __name__ == '__main__':
    main()
The problem seems to be the -1 in the shape argument of the tile function. I tried replacing it with 1 and with None. Each produces its own error:
-1: error during model.fit
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected multiples[0] >= 0, but got -1
1: error during model.fit
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [32,3,20] vs. shape[1] = [1,3,3]
None: error during build_model:
Failed to convert object of type <class 'tuple'> to Tensor. Contents: (None, 1, 1). Consider casting elements to a supported type.
You need to use K.shape() instead to get the symbolic shape of the input tensor. That's because the batch size is None, and therefore passing K.int_shape(x)[0] or None or -1 as part of the second argument of K.tile() would not work:
eye = K.tile(eye, (K.shape(x)[0], 1, 1))
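Applied to the isolated example, concat_eye could then look like this (a sketch with only the K.tile line changed):
def concat_eye(x, dim_input, dim_eye):
    x = K.reshape(x, (-1, 1, dim_input))
    x = K.repeat_elements(x, dim_eye, axis=1)
    eye = K.expand_dims(K.eye(dim_eye), axis=0)
    # Tile the identity matrix once per sample, using the symbolic batch size.
    eye = K.tile(eye, (K.shape(x)[0], 1, 1))
    out = K.concatenate([x, eye], axis=2)
    return out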

