I'm trying to do this operation between two tensors:
green_mat = sio.loadmat('green.mat')
green = np.array(green_mat['G2'])
green = tf.convert_to_tensor(green)
green = tf.cast(green, dtype='complex64') # >>>green.shape = TensorShape([64, 40000])
tensor = tf.ones(128)  # tensor.shape = TensorShape([128])

def mul_and_sum(tensor):
    real = tensor[0:64]
    imag = tensor[64:128]
    complex_tensor = tf.complex(real, imag)
    return tf.reduce_sum(tf.multiply(green, complex_tensor), 1)

res = mul_and_sum(tensor)
Basically, what I want to obtain in the end is a tensor with 40000 elements to use as a layer in a neural network, but when I run this function as a test I get this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [64,40000] vs. [64] [Op:Mul]
It's the first time I'm working with tensor operations and maybe I'm a bit confused about how to treat the dimensions; any suggestions? Thanks :)
EDIT: OK, I have understood the point; indeed, for the example that I provided everything works, but then I have another problem in my network:
def convolution(tensor):
    tf.cast(tensor, dtype='float64')  # note: tf.cast returns a new tensor; as written this result is discarded
    real = tensor[0:64]
    imag = tensor[64:128]
    complex_tensor = tf.complex(real, imag)
    a = tf.math.real(tf.reduce_sum(tf.multiply(green, complex_tensor), 0))
    return a
def get_model3(mask_kind):
    epochs = 200
    learning_rate = 0.1
    decay_rate = learning_rate / epochs

    inp_1 = keras.Input(shape=(64, 101, 129), name="RST_inputs")
    x = layers.Conv2D(1, kernel_size=(1, 1), strides=(1, 1), padding="valid", trainable=False)(inp_1)
    x = layers.Conv2D(256, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), strides=(3, 3), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(128, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), strides=(3, 3), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(64, kernel_size=(2, 2), kernel_regularizer=l2(1e-6), strides=(2, 2), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(32, kernel_size=(2, 2), kernel_regularizer=l2(1e-6), strides=(2, 2), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Dense(256)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    out1 = layers.Dense(128, name="ls_weights")(x)

    if mask_kind == 1:
        binary_mask = layers.Lambda(mask_layer1, name="lambda_layer", dtype='float64')(out1)
    elif mask_kind == 2:
        binary_mask = layers.Lambda(mask_layer2, name="lambda_layer", dtype='float64')(out1)
    else:
        binary_mask = out1

    # here the binary mask shape is [?, 128]
    binary_mask = tf.expand_dims(binary_mask, axis=2)  # here the shape is [?, 128, 1]
    binary_mask = tf.squeeze(binary_mask, axis=0)      # here the shape is [128, 1]
    print('binary shape:', binary_mask.shape)

    lambda_layer = layers.Lambda(convolution, name="convolutional_layer")(binary_mask)
    print(lambda_layer.shape)

    model3 = keras.Model(inp_1, lambda_layer, name="2_out_model")
    model3.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=decay_rate),  # if needed, set back to 0.001
                   loss="mean_squared_error")
    plot_model(model3, to_file='model.png', show_shapes=True, show_layer_names=True)
    model3.summary()

    return model3
and I get this error:
ValueError: Input 0 of layer sf_vec is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [40000]
I know it is because there is a dimension mismatch, but the shape of the output layer (tensor) should be [?, 40000] and instead I get only a tensor of shape [40000]. Any suggestion?
EDIT 2.0: I didn't notice that my output was already the lambda layer, so with the model written this way I get no errors, but from the summary the lambda layer's shape is (1, 40000) while it should usually be (None, 40000).
Where is the error?
If you want to do a multiplication between 2 tensors, they need to have compatible shapes, i.e., either the same shape, or shapes that are broadcastable. Quoting the NumPy documentation (TensorFlow follows the same broadcasting rules):
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
In your case, if you want to use tf.multiply, you need to add a dimension to your vector so that it has the same number of dimensions. You can do that by using tf.expand_dims or by using advanced indexing with tf.newaxis.
An example (using complex values, as in your question):
>>> a = tf.complex(tf.random.normal((64,128)),tf.random.normal((64,128)))
>>> a.shape
TensorShape([64, 128])
>>> b = tf.complex(tf.ones(64),tf.ones(64))
>>> b.shape
TensorShape([64])
To be able to use tf.multiply, you need to add a dimension to b:
>>> b_exp = tf.expand_dims(b, axis=1)
>>> b_exp.shape
TensorShape([64, 1])
>>> tf.multiply(a,b_exp).shape
TensorShape([64, 128])
Note: doing tf.reduce_sum on tf.multiply can be akin to just doing a matrix multiplication.
In your case, you can probably do something similar to:
>>> tf.matmul(b[tf.newaxis,:], a).shape
TensorShape([1, 128])
If the extra dimension bothers you, you can get rid of it with tf.squeeze.
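For instance, a minimal sketch applied to the shapes in the question (assuming green has shape [64, 40000] and complex_tensor has shape [64], both complex64):
row = tf.matmul(complex_tensor[tf.newaxis, :], green)  # shape [1, 40000]
vec = tf.squeeze(row, axis=0)                          # shape [40000]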
I'm completely new to Keras and AI. I have Keras 2.9 with Python 3.8.10 under Ubuntu 20.04. I have a model trained using 2 X inputs and a Y, and technically the training runs. Now I want to predict the Y using 2 inputs, but it fails. The training is done using this code fragment (I think only the input and output format is interesting here):
def generate(aBatchSize:int=32, aRepeatParameter:int=2, aPort:int=12345):
    dim = (512, 512)
    paraShape = (aRepeatParameter * 2,)

    def generator():
        while True:
            # fill variables
            yield ((xParameter, xImage), y)

    dataset = tensorflow.data.Dataset.from_generator(generator,
        output_signature=(
            (tensorflow.TensorSpec(shape=paraShape, dtype=tensorflow.float32),
             tensorflow.TensorSpec(shape=dim, dtype=tensorflow.float32)),
            tensorflow.TensorSpec(shape=(1), dtype=tensorflow.float32)
        ))
    dataset = dataset.batch(aBatchSize)
    return dataset

repeatParameter = 2
batchSize = 16
model.fit(landscapeGenerator.generate(batchSize, repeatParameter, port),
          validation_data=landscapeGenerator.generate(batchSize, repeatParameter, port),
          epochs=50, steps_per_epoch=math.ceil(sampleSize / batchSize),
          validation_steps=validationSize / batchSize)
Printing the model input and output from training code yields this:
model.input [<KerasTensor: shape=(None, 4) dtype=float32 (created by layer 'input_1')>, <KerasTensor: shape=(None, 512, 512, 1) dtype=float32 (created by layer 'input_2')>]
model.output KerasTensor(type_spec=TensorSpec(shape=(None, 1), dtype=tf.float32, name=None), name='dense_4/BiasAdd:0', description="created by layer 'dense_4'")
This is the failing inference code:
image = numpy.multiply(imageio.imread(filename), 1.0 / 255.0)
model = tensorflow.keras.models.load_model(modelDir)
repeatParameter = 2
paraShape = (repeatParameter * 2,)
parameter = numpy.empty(paraShape, dtype=float)
# fill parameters
tempDiff = 5.0 * model.predict((parameter, image))
It fails, because it does not understand that the model has 2 inputs of different sizes:
ValueError: Data cardinality is ambiguous:
x sizes: 4, 512
Make sure all arrays contain the same number of samples.
I also tried to make the prediction using a generator, because there I know how to provide shape info, but with no success:
def generate(aParameter, aParaShape, aImage):
    dim = (512, 512)

    def generator():
        while True:
            yield (aParameter, aImage)

    dataset = tensorflow.data.Dataset.from_generator(generator,
        output_signature=(
            tensorflow.TensorSpec(shape=paraShape, dtype=tensorflow.float32),
            tensorflow.TensorSpec(shape=dim, dtype=tensorflow.float32)
        ))
    dataset = dataset.batch(1)
    return dataset
image = numpy.multiply(imageio.imread(filename), 1.0 / 255.0)
model = tensorflow.keras.models.load_model(modelDir)
repeatParameter = 2
paraShape = (repeatParameter * 2,)
parameter = numpy.empty(paraShape, dtype=float)
# fill parameters
tempDiff = 5.0 * model.predict(generate(parameter, paraShape, image), batch_size=1, steps=1)
This one complains: ValueError: Layer "model_2" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 4) dtype=float32>]
EDIT: current model generation
def createMlp(aRepeatParameter:int):
    vectorSize = aRepeatParameter * 2
    inputs = Input(shape=(vectorSize,))
    x = inputs
    # do not process now, raw data are better
    # x = Dense(vectorSize, activation="relu")(x)
    return Model(inputs, x)

def createCnn():
    filters = (256, 64, 16)
    inputShape = (512, 512, 1)
    chanDim = -1

    inputs = Input(shape=inputShape)
    x = inputs
    for (i, f) in enumerate(filters):
        x = Conv2D(f, (3, 3), padding="same")(x)
        x = Activation("relu")(x)
        x = BatchNormalization(axis=chanDim)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)

    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(16, activation='relu')(x)
    x = BatchNormalization(axis=chanDim)(x)
    x = Dropout(0.5)(x)
    x = Dense(4)(x)
    x = Activation("relu")(x)
    return Model(inputs, x)

repeatParameter:int = 2
mlp = createMlp(repeatParameter)
cnn = createCnn()

combinedInput = Concatenate(axis=1)([mlp.output, cnn.output])
x = Dense(4, activation="relu")(combinedInput)
x = Dense(1, activation="linear")(x)
model = Model(inputs=[mlp.input, cnn.input], outputs=x)
It turned out I needed to reshape my inputs, and even that had a typo in it. The working solution is:
def generate(aParameter, aParaShape, aImage):
    dim = (512, 512)

    def generator():
        while True:
            yield (aParameter, aImage)

    dataset = tensorflow.data.Dataset.from_generator(generator,
        output_signature=(
            tensorflow.TensorSpec(shape=paraShape, dtype=tensorflow.float32),
            tensorflow.TensorSpec(shape=dim, dtype=tensorflow.float32)
        ))
    dataset = dataset.batch(1)
    return dataset
image = numpy.multiply(imageio.imread(filename), 1.0 / 255.0)
model = tensorflow.keras.models.load_model(modelDir)
repeatParameter = 2
paraShape = (repeatParameter * 2,)
parameter = numpy.empty(paraShape, dtype=float)
# fill it
parameter = numpy.reshape(parameter, (1, 4))
image = numpy.reshape(image, (1, 512, 512))
tempDiff = 5.0 * model.predict([parameter, image], batch_size=1, steps=1)
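For reference, the same batch axis can be added with NumPy's newaxis indexing instead of reshape (a small sketch, assuming the same parameter and image arrays as above):
parameter = parameter[numpy.newaxis, :]    # (4,)       -> (1, 4)
image = image[numpy.newaxis, :, :]         # (512, 512) -> (1, 512, 512)
tempDiff = 5.0 * model.predict([parameter, image], batch_size=1)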
I'm trying to train a CNN model for a speech emotion recognition task using spectrograms as input. I've reshaped the spectrograms to have the shape (num_frequency_bins, num_time_frames, 1), which I thought would be sufficient, but upon trying to fit the model to the dataset, which is stored in a TensorFlow dataset, I got the following error:
Input 0 of layer "sequential_12" is incompatible with the layer: expected shape=(None, 257, 1001, 1), found shape=(257, 1001, 1)
I tried reshaping the spectrograms to have the shape (1, num_frequency_bins, num_time_frames, 1), but that produced an error when creating the Sequential model:
ValueError: Exception encountered when calling layer "resizing_14" (type Resizing).
'images' must have either 3 or 4 dimensions.
Call arguments received:
• inputs=tf.Tensor(shape=(None, 1, 257, 1001, 1), dtype=float32)
So I passed in the shape as (num_frequency_bins, num_time_frames, 1) when creating the model, and then fitted the model to the training data with the 4-dimensional data, but that raised this error:
InvalidArgumentError: slice index 0 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
So I'm kind of at a loss now. I genuinely have no idea what to do and how I can go about fixing this. I've read around but haven't come across anything useful. Would really appreciate any help.
Here's some of the code for context.
dataset = [[specgram_files[i], labels[i]] for i in range(len(specgram_files))]
specgram_files_and_labels_dataset = tf.data.Dataset.from_tensor_slices((specgram_files, labels))

def read_npy_file(data):
    # 'data' stores the file name of the numpy binary file storing the features of a particular sound file.
    # item() returns the numpy array of size 1 as a suitable python scalar;
    # data.item() then returns the bytes string stored in the numpy array,
    # and decode() turns that bytes string into a regular string
    # so that it can be passed as a parameter to np.load().
    data = np.load(data.item().decode())
    # Shape of data is now (1, rows, columns).
    # It needs to be reshaped to (rows, columns, 1):
    data = np.reshape(data, (data.shape[1], data.shape[2], 1))
    return data.astype(np.float32)

specgram_dataset = specgram_files_and_labels_dataset.map(
    lambda file, label: tuple([tf.numpy_function(read_npy_file, [file], [tf.float32]), label]),
    num_parallel_calls=tf.data.AUTOTUNE)
num_files = len(train_df)
num_train = int(0.8 * num_files)
num_val = int(0.1 * num_files)
num_test = int(0.1 * num_files)
specgram_dataset.shuffle(buffer_size=1000)
specgram_train_ds = specgram_dataset.take(num_train)
specgram_test_ds = specgram_dataset.skip(num_train)
specgram_val_ds = specgram_test_ds.take(num_val)
specgram_test_ds = specgram_test_ds.skip(num_val)
batch_size = 32
specgram_train_ds.batch(batch_size)
specgram_val_ds.batch(batch_size)
specgram_train_ds = specgram_train_ds.cache().prefetch(tf.data.AUTOTUNE)
specgram_val_ds = specgram_val_ds.cache().prefetch(tf.data.AUTOTUNE)
for specgram, label in specgram_train_ds.take(1):
    input_shape = specgram.shape
num_emotions = len(train_df["emotion"].unique())
model = models.Sequential([
    layers.Input(shape=input_shape),
    # downsampling the input
    layers.Resizing(32, 128),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="softmax"),
    layers.Dense(num_emotions)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.01),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=["accuracy"]
)
EPOCHS = 10
model.fit(
    specgram_train_ds,
    validation_data=specgram_val_ds,
    epochs=EPOCHS,
    callbacks=tf.keras.callbacks.EarlyStopping(verbose=1, patience=2)
)
Assuming you know your input_shape, I would recommend first hard-coding it into your model:
model = models.Sequential([
    layers.Input(shape=(257, 1001, 1)),
    # downsampling the input
    layers.Resizing(32, 128),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="softmax"),
    layers.Dense(num_emotions)
])
Also, when using tf.data.Dataset.batch, you should assign the Dataset output to a variable:
batch_size = 32
specgram_train_ds = specgram_train_ds.batch(batch_size)
specgram_val_ds = specgram_val_ds.batch(batch_size)
Afterwards, make sure that specgram_train_ds really does have the correct shape:
specgrams, _ = next(iter(specgram_train_ds.take(1)))
assert specgrams.shape == (32, 257, 1001, 1)
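As a hedged sketch, the full input pipeline could look like this (every tf.data transformation returns a new dataset, so each step has to be reassigned; the shapes assume the (257, 1001, 1) spectrograms from the question):
batch_size = 32
specgram_train_ds = (specgram_train_ds
                     .shuffle(buffer_size=1000)
                     .batch(batch_size)          # adds the leading batch axis: (None, 257, 1001, 1)
                     .cache()
                     .prefetch(tf.data.AUTOTUNE))
specgram_val_ds = (specgram_val_ds
                   .batch(batch_size)
                   .cache()
                   .prefetch(tf.data.AUTOTUNE))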
I'm trying to implement Inception_resnet_v2 inside Faster-RCNN instead of using ResNet50, but when I try to run the code I get this TypeError:
TypeError: Inputs to a layer should be tensors
and this is the inputs part of the inception_resnet_v2 code (only the first & last few lines):
N.B.: if I only run InceptionResNetV2.py, then I can see the model output perfectly.
def InceptionResNetV2(inputs):
    img_input = inputs
    # img_input = Input(shape=inputs)
    # img_input = Input(shape=(600,600,3))
    # img_input = Input(shape=input_shape)

    # Stem block: 299,299,3 -> 35 x 35 x 192
    x = conv2d_bn(img_input, 32, 3, strides=2, padding='valid')
    x = conv2d_bn(x, 32, 3, padding='valid')
    x = conv2d_bn(x, 64, 3)
    x = MaxPooling2D(3, strides=2)(x)
    x = conv2d_bn(x, 80, 1, padding='valid')
    x = conv2d_bn(x, 192, 3, padding='valid')
    x = MaxPooling2D(3, strides=2)(x)

    ...

    x = Dense(6, activation='softmax', name='predictions')(x)

    inputs = img_input
    # model
    x = Model(inputs, x, name='inception_resnet_v2')
    return x
and this is where I combine base_layers with InceptionResNetV2(inputs):
def get_model(num_classes, backbone, num_anchors=9):
    # inputs = Input(shape=(600,600, 3))
    inputs = Input(shape=(None, None, 3))
    roi_input = Input(shape=(None, 4))
    if backbone == 'inception_resnet_v2':
        base_layers = InceptionResNetV2(inputs)
        rpn = get_rpn(base_layers, num_anchors)
        classifier = get_InceptionResNetV2_classifier(base_layers, roi_input, 14, num_classes)
Most probably I can't combine base_layers with InceptionResNetV2(inputs) like this.
I'd be very grateful if someone could help me out 🙏
The first step is base_layers.output.
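In other words (a sketch, under the assumption that InceptionResNetV2() here returns a keras.Model, as in the question's code): the RPN head should be fed the backbone's output tensor rather than the Model object itself.
backbone = InceptionResNetV2(inputs)   # a keras.Model, per the question's definition
base_layers = backbone.output          # the tensor that downstream layers expect
rpn = get_rpn(base_layers, num_anchors)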
Then, after your convolutional layer, you need to reshape x so that it matches the expected shape, something like
# x = tf.reshape(x, shape=(q, im_size, im_size, 3))
where you have to find the right q to match the expected size.
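As a hedged alternative sketch: rather than computing q by hand, tf.reshape accepts -1 for one dimension and infers it from the total element count (im_size here stands for whatever spatial size the downstream layer expects):
x = tf.reshape(x, shape=(-1, im_size, im_size, 3))   # -1 lets TensorFlow infer the leading dimension q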
I was doing the same, and got past this one, but then I ran into:
INVALID_ARGUMENT: logits and labels must be broadcastable: logits_size=[5,5] labels_size=[20,5]
I want to concatenate the output from an embedding layer with a custom tensor (myarr / myconst). I can specify everything with a fixed batch size as follows:
import numpy as np
import tensorflow as tf
BATCH_SIZE = 100
myarr = np.ones((10, 5))
myconst = tf.constant(np.tile(myarr, (BATCH_SIZE, 1, 1)))
# Model definition
inputs = tf.keras.layers.Input((10,), batch_size=BATCH_SIZE)
x = tf.keras.layers.Embedding(10, 5)(inputs)
x = tf.keras.layers.Concatenate(axis=1)([x, myconst])
model = tf.keras.models.Model(inputs=inputs, outputs=x)
However, if I don't specify batch size and tile my array, i.e. just the following...
myarr = np.ones((10, 5))
myconst = tf.constant(myarr)
# Model definition
inputs = tf.keras.layers.Input((10,))
x = tf.keras.layers.Embedding(10, 5)(inputs)
x = tf.keras.layers.Concatenate(axis=1)([x, myconst])
model = tf.keras.models.Model(inputs=inputs, outputs=x)
... I get an error specifying that shapes [(None, 10, 5), (10, 5)] can't be concatenated. Is there a way to add this None / batch_size axis to avoid tiling?
Thanks in advance
You want to concatenate a constant of shape (10, 5) with a 3D tensor of shape (batch, 10, 5) that carries a batch dimension. To do this, your constant must be 3D as well: you have to reshape it to (1, 10, 5) and repeat it along axis=0 so that it matches the shape (batch, 10, 5), and then concatenate.
We do this inside a Lambda layer:
X = np.random.randint(0, 10, (100, 10))
Y = np.random.uniform(0, 1, (100, 20, 5))

myarr = np.ones((1, 10, 5)).astype('float32')
myconst = tf.convert_to_tensor(myarr)

def repeat_const(tensor, myconst):
    shapes = tf.shape(tensor)
    return tf.repeat(myconst, shapes[0], axis=0)

inputs = tf.keras.layers.Input((10,))
x = tf.keras.layers.Embedding(10, 5)(inputs)
xx = tf.keras.layers.Lambda(lambda x: repeat_const(x, myconst))(x)
x = tf.keras.layers.Concatenate(axis=1)([x, xx])
model = tf.keras.models.Model(inputs=inputs, outputs=x)
model.compile('adam', 'mse')
model.fit(X, Y, epochs=3)
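The key point is that repeat_const uses tf.shape (the dynamic, runtime shape) rather than tensor.shape, so it works even though the batch size is None at build time. As a quick sanity check (a sketch; the shapes follow from the layers above), the output keeps the batch axis:
print(model.output_shape)   # expected: (None, 20, 5) -> 10 embedded rows + 10 constant rows, 5 features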