Is Keras based on closures in Python?

While working with Keras and TensorFlow, I found the following lines of code confusing:
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
    initial_value=w_init(shape=(input_dim, units), dtype='float32'),
    trainable=True)
Also, I have seen something like:
Dense(64, activation='relu')(x)
Therefore, if Dense(...) creates the object for me, how can I follow that with (x)?
Likewise for w_init above. How can I write something like this:
tf.random_normal_initializer()(shape=(input_dim, units), dtype='float32')
Does Python allow "ClassName()" to be followed by another "()" while creating an object such as a layer?
While looking into closures in Python, I found that a function can return another function. Is this what really happens in Keras?
Any help is much appreciated!!

These are two totally different ways to define models.
Keras
Keras works with the concept of layers. Each line defines a full layer of your network. What you are referring to specifically is Keras' functional API. The concept is to combine layers like this:
inp = Input(shape=(28, 28, 1))
x = Conv2D(32, (6, 6), strides=(1, 1), activation='relu')(inp)
# ... etc ...
x = Flatten()(x)
x = Dense(10, activation='softmax')(x)
model = Model(inputs=[inp], outputs=[x])
This way you've created a full CNN in just a few lines. Note that you never had to manually specify the shapes of the weights or the operations that are performed; these are inferred automatically by Keras.
Now, this just needs to be compiled through model.compile(...) and then you can train it through model.fit(...).
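For completeness, a minimal sketch of that last step (the optimizer, loss, and the x_train/y_train names are illustrative assumptions, not part of the original post):
# Hypothetical compile/fit step for the CNN above; data names are placeholders.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.1)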
Tensorflow
On the other hand, TensorFlow is a bit more low-level. This means that you have to define the variables and operations by hand. So in order to write, say, a convolutional and a fully-connected layer, you'd have to do the following:
# Input placeholders
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.float32, shape=(None, 10))
# Convolution layer
W1 = tf.Variable(tf.truncated_normal([6, 6, 1, 32], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, tf.float32, [32]))
z1 = tf.nn.conv2d(x, W1, strides=[1, 1, 1, 1], padding='SAME') + b1
c1 = tf.nn.relu(z1)
# ... etc ...
# Flatten
flat = tf.reshape(p2, [-1, ...]) # need to calculate the ... by ourselves
# Dense
W3 = tf.Variable(tf.truncated_normal([..., 10], stddev=0.1)) # same size as before
b3 = tf.Variable(tf.constant(0.1, tf.float32, [10]))
fc1 = tf.nn.relu(tf.matmul(flat, W3) + b3)
Two things to note here: there is no explicit definition of a model, and the graph has to be trained through a tf.Session with a feed_dict feeding the data to the placeholders. If you're interested, you'll find several guides online.
Closing notes...
TensorFlow has a much friendlier and easier way to define and train models through eager execution, which will be the default in TF 2.0! So the code you posted is, in a sense, the old way of doing things in TensorFlow. It's worth taking a look at TF 2.0, which actually recommends doing things the Keras way!
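As a rough illustration of what eager execution buys you (a sketch, not from the original answer): operations run immediately and return concrete values, with no Session or feed_dict involved.
import tensorflow as tf  # TF 2.x

print(tf.executing_eagerly())   # True by default in TF 2.x
x = tf.constant([[1.0, 2.0]])
w = tf.Variable([[3.0], [4.0]])
print(tf.matmul(x, w))          # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)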
Edit (after comment by OP):
No, a layer is not a closure. A Keras layer is a class that implements a __call__ method, which makes its instances callable. The way they did it, __call__ is a wrapper around the call method that users typically write.
You can take a look at the implementation in Keras' base Layer class.
Basically how this works is:
class MyClass:
    def __init__(self, param):
        self.p = param

    def call(self, x):
        print(x)
If you try to write c = MyClass(1)(3), you'll get a TypeError saying that the MyClass object is not callable. But if you write it like this:
class MyClass:
    def __init__(self, param):
        self.p = param

    def __call__(self, x):
        print(x)
It works now. Essentially keras does it like this:
class MyClass:
    def __init__(self, param):
        self.p = param

    def call(self, x):
        print(x)

    def __call__(self, x):
        self.call(x)
So when you write your own layer, you implement your own call method, and the __call__ method that wraps it is inherited from Keras' base Layer class.
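Tying this back to the snippet in the question, here is a minimal sketch (the Linear class and shapes are illustrative, loosely following Keras' custom-layer examples) showing both callables at work: the initializer returned by tf.random_normal_initializer() is called with a shape, and the layer instance is called with a tensor.
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    def __init__(self, units=32, input_dim=32):
        super().__init__()
        w_init = tf.random_normal_initializer()   # an initializer object, itself callable
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_dim, units), dtype='float32'),
            trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w)

layer = Linear(units=4, input_dim=2)   # "creating": runs __init__
y = layer(tf.ones((1, 2)))             # "calling": the inherited __call__ dispatches to call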

Just from the syntax, I would say that Dense(...) returns a callable (an object that can be called like a function). Similarly, w_init is a callable as well.

Related

Understanding Keras subclass method in Tensorflow's deep learning pipeline

I am trying to make a model in TensorFlow using the Keras subclassing method.
Q1) Am I correctly collecting the layers by writing layers = [] and then layers.append(GTLayer(...))?
Q2) Creating GTLayer in the __init__ of GTN will run the GTLayer class; will it call self.conv1 (which will return a tensor A from GTConv) and self.conv2 (which will again return a tensor A from GTConv), and then start the call method of GTLayer to get H and W? Am I right?
Q3) What happens to the H and W returned in Q2? Will they be stored in the layers list, and when we later call GTN's call method, will it bring up those layers? Am I correct?
Q4) Later, in GTN's call method, I had to implement linear layers, so I defined model = tf.keras.models.Sequential() and after that initialised self.linear1 and self.linear2; this way I have implemented both subclassing and Sequential. Is that correct?
Q5) I will finally get loss, y, Ws from calling GTN. Now, if I assign my model as model = GTN(arguments...), how will I do the training and back-propagation steps? Using an optimiser and a loss function? Will it follow model.compile() and model.fit(), or can it be done differently with the Keras subclassing method?
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class GTN(layers.Layer):
    def __init__(self, num_edge, num_channels, num_layers, norm):
        super(GTN, self).__init__()
        self.num_edge = num_edge
        self.num_channels = num_channels
        self.num_layers = num_layers
        self.is_norm = norm

        layers = []
        for i in tf.range(num_layers):
            if i == 0:
                layers.append(GTLayer(num_edge, num_channels, first=True))
            else:
                layers.append(GTLayer(num_edge, num_channels, first=False))

        model = tf.keras.models.Sequential()
        self.loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
        self.linear1 = model.add(tf.keras.layers.Dense(self.w_out, input_shape=(self.w_out*self.num_channels,), activation=None))
        self.linear2 = model.add(tf.keras.layers.Dense(self.num_class, input_shape=(self.w_out,), activation=None))

    def gcn_conv(self, X, H):
        X = tf.matmul(X, self.weight)
        H = self.norm(H, add=True)
        return tf.matmul(tf.transpose(H), X)

    def call(self, A, X, target_x, target):
        A = tf.expand_dims(A, 0)
        Ws = []
        for i in range(self.num_layers):
            H = self.normalization(H)
            H, W = self.layers[i](A, H)
            Ws.append(W)

        for i in range(self.num_channels):
            X_tmp = tf.nn.relu(self.gcn_conv(X, H[i])).numpy()
            X_ = tf.concat((X_, X_tmp), dim=1)

        X_ = self.linear1(X_)
        X_ = tf.nn.relu(X_).numpy()
        y = self.linear2(X_[target_x])
        loss = self.loss(y, target)
        return loss, y, Ws

class GTLayer(keras.layers.Layer):
    def __init__(self, in_channels, out_channels, first=True):
        super(GTLayer, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.conv1 = GTConv(in_channels, out_channels)
        self.conv2 = GTConv(in_channels, out_channels)

    def call(self, A, H_=None):
        a = self.conv1(A)
        b = self.conv2(A)
        H = tf.matmul(a, b)
        W = [tf.stop_gradient(tf.nn.softmax(self.conv1.weight, axis=1).numpy()),
             tf.stop_gradient(tf.nn.softmax(self.conv2.weight, axis=1).numpy())]
        return H, W

class GTConv(keras.layers.Layer):
    def __init__(self, in_channels, out_channels):
        super(GTConv, self).__init__()

    def call(self, A):
        A = tf.add_n(tf.nn.softmax(self.weight))
        return A
Q1
No. There are two possibilities here
1 - If you want to access a standard layers property of Keras models:
Only Model has a layers property, a keras.layers.Layer doesn't have this property
You are not supposed to mess with the layers property of a Model, you should just read it
The variable you are creating named layers is not a property of your class because you did not use self.layers.
2 - If you just want a list named layers for personal use in your class:
I recommend you don't use a standard name like this and change it to myLayers or something like that to avoid confusion.
The variable layers you created is not being used anywhere else in your code; you just created it and never used it.
Remember that layers = [] just creates a local variable, while self.layers = [] creates a property in your class that can be used in other methods inside your class
Q2
You are not "calling" GTLayer, you are "creating" GTLayer. This means that you are running GTLayer.__init__().
This distinction is important in Keras:
This is "creating" a layer: layer_instance = GTLayer(...), which runs __init__
This is "calling" a layer: layer_instance(input_tensors), which runs __call__ (which will eventually run call as defined by you)
You can do both in the same line as output_tensors = GTLayer(...)(input_tensors)
So, this is happening in GTN.__init__:
You are "creating" two instances of the GTLayer.
This runs GTLayer.__init__() for each instance
This hits the lines self.conv1 = GTConv(in_channels, out_channels) and self.conv2 = GTConv(in_channels, out_channels)
This is also "creating" (not "calling") GTConv.
self.conv1 and self.conv2 are "Layer" instances, not tensors.
Q3
No tensor is produced here because you never "called" any layer in GTN.__init__().
(And this is ok. Usually, you "create" layers inside __init__() and "call" layers inside call.)
Your layers local variable will have "instances of GTLayer".
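To make the pattern concrete, here is a minimal sketch (not the asker's GTN, just an illustration): sub-layers are created in __init__, stored on self under a non-reserved name, and only produce tensors when they are called inside call.
import tensorflow as tf

class StackOfDense(tf.keras.layers.Layer):
    def __init__(self, num_layers=2, units=16):
        super().__init__()
        # store the instances on self (and avoid the reserved name `layers`)
        self.my_layers = [tf.keras.layers.Dense(units, activation='relu')
                          for _ in range(num_layers)]

    def call(self, inputs):
        x = inputs
        for layer in self.my_layers:
            x = layer(x)   # "calling" each sub-layer produces tensors
        return x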
Q4
You mixed two approaches in a strange way.
You can, of course, use a Sequential model if you want, but it's not necessary, and you're not using it correctly.
If in call you are calling each layer (that is X_ = self.linear1(X_) and y = self.linear2(X_[target_x])), you don't need a Sequential model at all, and you can just have the following in GTN.__init__() (this is the best approach for subclassing):
self.linear1 = tf.keras.layers.Dense(self.w_out, input_shape=(self.w_out*self.num_channels,), activation=None)
self.linear2 = tf.keras.layers.Dense(self.num_class, input_shape=(self.w_out,), activation=None)
But you could have self.submodel = Sequential(...) and then use self.submodel in GTN.call(). But having a Model inside a layer sounds weird and might cause some strange behavior in specific cases. And, of course, the ReLUs should be a part of this submodel.
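If you do go the Sequential route, a sketch of that variant (illustrative names only, not the asker's GTN) would look like this, with the ReLUs folded into the submodel:
import tensorflow as tf

class Head(tf.keras.layers.Layer):
    def __init__(self, w_out, num_class):
        super().__init__()
        self.submodel = tf.keras.Sequential([
            tf.keras.layers.Dense(w_out, activation='relu'),
            tf.keras.layers.Dense(num_class),
        ])

    def call(self, x):
        return self.submodel(x)   # call the submodel like any other layer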
Q5
I will finally get loss, y, Ws from calling GTN
Returning the loss and weights from call is a very strange thing. I have never seen this and I don't understand why you're doing it this way. This is not standard use of Keras, and only in very specific and otherwise unsolvable cases would you try something like this. I cannot say it will work.
How will I do the training and back-propagation steps?
You should have implemented a keras.models.Model, not a keras.layers.Layer. Only models have the ability to compile and train.
Usually, you'd not create a loss in call, you'd create a loss in model.compile, unless you're dealing with unconventional losses, like weight or activity regularization, things that really depend on the layer and not on the model's inputs/outputs.
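For contrast, a minimal sketch of the conventional subclassing workflow (illustrative names, not the asker's GTN): subclass keras.Model, return predictions from call, and let compile/fit own the loss and the backpropagation.
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super().__init__()
        self.dense = tf.keras.layers.Dense(num_classes)

    def call(self, inputs):
        return self.dense(inputs)   # return outputs, not a loss

model = MyModel()
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
# model.fit(x_train, y_train, epochs=5)   # training then follows the usual path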
Extra tips
There is no need to create custom layers if you're not going to create custom trainable weights. It's not wrong, of course, but also not necessary. It can help organize your code, or just add extra complication.
You are trying to use weight from your layers, but you never defined any weight anywhere.
I'm pretty sure there is a better way to achieve what you want, but I don't know what you want (and that would be something for another question, I think...)
This might be a good reading for subclassing: https://www.tensorflow.org/guide/keras/custom_layers_and_models?hl=en

Creating custom layer as stack of individual neurons TensorFlow

So, I'm trying to create a custom layer in TensorFlow 2.4.1, using a function for a neuron I defined.
# NOTE: this is not the actual neuron I want to use,
# it's just a simple example.
def neuron(x, W, b):
    return W @ x + b
Where the W and b it gets would be of shape (1, x.shape[0]) and (1, 1) respectively. This means this is like a single neuron in a dense layer. So, I want to create a dense layer by stacking however many of these individual neurons I want.
class Layer(tf.keras.layers.Layer):
    def __init__(self, n_units=5):
        super(Layer, self).__init__()  # handles standard arguments
        self.n_units = n_units  # Number of neurons to be in the layer

    def build(self, input_shape):
        # Create weights and biases for all neurons individually
        for i in range(self.n_units):
            # Create weights and bias for ith neuron
            ...

    def call(self, inputs):
        # Compute outputs for all neurons
        ...
        # Concatenate outputs to create layer output
        ...
        return output
How can I create a layer as a stack of individual neurons (also in a way it can train)? I have roughly outlined the idea for the layer in the above code, but the answer doesn't need to follow that as a blueprint.
Finally, yes, I'm aware that to create a dense layer you don't need to go about it in such a roundabout way (you just need one weight matrix and one bias matrix), but in my actual use case this is necessary. Thanks!
So, as the person who asked this question, I have found a way to do it, by dynamically creating variables and operations.
First, let's re-define the neuron to use TensorFlow operations:
def neuron(x, W, b):
    return tf.add(tf.matmul(W, x), b)
Then, let's create the layer (this uses the blueprint laid out in the question):
class Layer(tf.keras.layers.Layer):
    def __init__(self, n_units=5):
        super(Layer, self).__init__()
        self.n_units = n_units

    def build(self, input_shape):
        for i in range(self.n_units):
            exec(f'self.kernel_{i} = self.add_weight("kernel_{i}", shape=[1, int(input_shape[0])])')
            exec(f'self.bias_{i} = self.add_weight("bias_{i}", shape=[1, 1])')

    def call(self, inputs):
        for i in range(self.n_units):
            exec(f'out_{i} = neuron(inputs, self.kernel_{i}, self.bias_{i})')
        return eval(f'tf.concat([{", ".join([ f"out_{i}" for i in range(self.n_units) ])}], axis=0)')
As you can see, we're using exec and eval to dynamically create variables and perform operations.
That's it! We can perform a few checks to see if TensorFlow could use this:
# Check to see if it outputs the correct thing
layer = Layer(5) # With 5 neurons, it should return a (5, 6)
print(layer(tf.zeros([10, 6])))
# Check to see if it has the right trainable parameters
print(layer.trainable_variables)
# Check to see if TensorFlow can find the gradients
layer = Layer(5)
x = tf.ones([10, 6])
with tf.GradientTape() as tape:
    z = layer(x)
print(f"Parameter: {layer.trainable_variables[2]}")
print(f"Gradient: {tape.gradient(z, layer.trainable_variables[2])}")
This solution works, but it's not very elegant... I wonder if there's a better way to do it, some magical TF method that can map the neuron to create a layer, I'm too inexperienced to know for the moment. So, please answer if you have a (better) answer, I'll be happy to accept it :)
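Not part of the original answer, but one possible cleaner variant (a sketch under the same assumptions as above): since add_weight tracks the created variables regardless of where you store them, plain Python lists can replace the exec/eval gymnastics.
import tensorflow as tf

def neuron(x, W, b):
    return tf.add(tf.matmul(W, x), b)

class Layer(tf.keras.layers.Layer):
    def __init__(self, n_units=5):
        super().__init__()
        self.n_units = n_units

    def build(self, input_shape):
        # one (1, input_dim) kernel and one (1, 1) bias per neuron, kept in lists
        self.kernels = [self.add_weight(f"kernel_{i}", shape=[1, int(input_shape[0])])
                        for i in range(self.n_units)]
        self.biases = [self.add_weight(f"bias_{i}", shape=[1, 1])
                       for i in range(self.n_units)]

    def call(self, inputs):
        outs = [neuron(inputs, k, b) for k, b in zip(self.kernels, self.biases)]
        return tf.concat(outs, axis=0)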

syntax pytorch.nn functions neural networks, class or functions?

I am puzzled by the syntax of the functions used in neural networks in pytorch.
Here is an example of how one can define a linear transformation layer: (cf. https://pytorch.org/docs/stable/generated/torch.nn.Linear.html)
m = nn.Linear(20, 30)
input = torch.randn(128, 20)
output = m(input)
print(output.size())
torch.Size([128, 30])
Can someone explain to me where the expression nn.Linear(20, 30)(input) comes from? It puzzles me a bit.
Indeed, one can define a neural network class with such a constructor (for example):
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size, bias=True)
        self.fc2 = nn.Linear(hidden_size, hidden_size, bias=True)
        self.fc3 = nn.Linear(hidden_size, hidden_size, bias=True)
        self.fc4 = nn.Linear(hidden_size, num_classes, bias=False)
        self.dropout = nn.Dropout(p=p)
and I was trying to write an attribute using a numpy function, like:
self.enter_reshape = np.reshape(-1, input_size * input_size)
self.exit_reshape = np.reshape(input_size, num_classes / input_size)
or, using the view function from pytorch:
self.reshape = view(-1, self.num_flat_features())
The closest thing I know of is partial functions and closures, where one could write f(z)(x)(y). I looked into the definition of Linear, and I saw that Linear is an object, but I don't see where they redefined the __call__ magic method, which I thought would be used here when one calls the object.
So basically, can someone explain what is going on with such writing, and also, would it be possible to give the neural network the numpy or view functions as attributes?
torch.nn.Linear inherits from torch.nn.Module (see source code), which in turn defines __call__ method.
You can see source code for torch.nn.Module here. This class allows users to make their own Modules by inheritance (as you did in your example) and is a base for all PyTorch defined modules like nn.Linear (see available methods, documentation here).
Its __call__ essentially calls forward, but also runs registered hooks (and registers them), checks torchscript, etc. (see the source code of __call__ and the line where forward is actually invoked).
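A quick sketch of what that dispatch looks like in practice (not from the original answer; shapes are arbitrary):
import torch
from torch import nn

linear = nn.Linear(20, 30)      # "creating": __init__ sets up weight and bias
x = torch.randn(128, 20)
out1 = linear(x)                # "calling": nn.Module.__call__ dispatches to forward
out2 = linear.forward(x)        # roughly equivalent, but skips hooks etc.
print(torch.equal(out1, out2))  # True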
would it be possible to give to the neural network the numpy or view
functions as attributes?
From the example you've given, what you are trying to do is probably a partial function (or a lambda, as in the example below) saved as an attribute (though that is pretty uncommon; I've honestly never seen it), like this:
import torch

class MyModule(torch.nn.Module):
    def __init__(self, shape: int = -1):
        super().__init__()  # required
        self.reshape = lambda tensor: torch.reshape(tensor, (shape,))

    def forward(self, tensor):
        return self.reshape(tensor)

module = MyModule()
module(torch.randn(4, 5, 6)).shape  # [120] shape
You shouldn't use numpy with pytorch unless you really need it and/or there is no sensible pytorch counterpart (although you can, if you convert the torch.Tensor to numpy). Also, you shouldn't do anything like the code above, as it's really confusing; just save plain attributes (anything like input_shape, hidden_dim, output_size) and use them in forward:
class MyModule(torch.nn.Module):
    def __init__(self, shape: int = -1):
        super().__init__()  # required
        self.shape = shape

    def forward(self, tensor):
        return torch.reshape(tensor, (self.shape,))
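Usage would be the same as before (a quick sketch):
module = MyModule(shape=-1)
print(module(torch.randn(4, 5, 6)).shape)   # torch.Size([120])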

TensorFlow 2.0 Keras layers with custom tensors as variables

In TF 1.x, it was possible to build layers with custom variables. Here's an example:
import numpy as np
import tensorflow as tf

def make_custom_getter(custom_variables):
    def custom_getter(getter, name, **kwargs):
        if name in custom_variables:
            variable = custom_variables[name]
        else:
            variable = getter(name, **kwargs)
        return variable
    return custom_getter

# Make a custom getter for the dense layer variables.
# Note: custom variables can result from arbitrary computation;
# for the sake of this example, we make them just constant tensors.
custom_variables = {
    "model/dense/kernel": tf.constant(
        np.random.rand(784, 64), name="custom_kernel", dtype=tf.float32),
    "model/dense/bias": tf.constant(
        np.random.rand(64), name="custom_bias", dtype=tf.float32),
}
custom_getter = make_custom_getter(custom_variables)

# Compute hiddens using a dense layer with custom variables.
x = tf.random.normal(shape=(1, 784), name="inputs")

with tf.variable_scope("model", custom_getter=custom_getter):
    Layer = tf.layers.Dense(64)
    hiddens = Layer(x)

print(Layer.variables)
The printed variables of the constructed dense layer will be custom tensors we specified in the custom_variables dict:
[<tf.Tensor 'custom_kernel:0' shape=(784, 64) dtype=float32>, <tf.Tensor 'custom_bias:0' shape=(64,) dtype=float32>]
This allows us to create layers/models that use provided tensors in custom_variables directly as their weights, so that we could further differentiate the output of the layers/models with respect to any tensors that custom_variables may depend on (particularly useful for implementing functionality in modulating sub-nets, parameter generation, meta-learning, etc.).
Variable scopes used to make it easy to nest all of the graph-building inside scopes with custom getters and build models on top of the provided tensors as their parameters. Since sessions and variable scopes are no longer advisable in TF 2.0 (and all of that low-level stuff is moved to tf.compat.v1), what would be the best practice to implement the above using Keras and TF 2.0?
(Related issue on GitHub.)
Answer based on the comment below
Given you have:
kernel = createTheKernelVarBasedOnWhatYouWant() #shape (784, 64)
bias = createTheBiasVarBasedOnWhatYouWant() #shape (64,)
Make a simple function copying the code from Dense:
from tensorflow.keras import backend as K

def custom_dense(x):
    inputs, kernel, bias = x
    outputs = K.dot(inputs, kernel)
    outputs = K.bias_add(outputs, bias, data_format='channels_last')
    return outputs
Use the function in a Lambda layer:
layer = Lambda(custom_dense)
hiddens = layer([x, kernel, bias])
Warning: kernel and bias must be produced by a Keras layer, or come from kernel = Input(tensor=the_kernel_var) and bias = Input(tensor=bias_var).
If the warning above is bad for you, you can always use kernel and bias "from outside", like:
def custom_dense(inputs):
    outputs = K.dot(inputs, kernel)  # where kernel is not part of the arguments anymore
    outputs = K.bias_add(outputs, bias, data_format='channels_last')
    return outputs

layer = Lambda(custom_dense)
hiddens = layer(x)
This last option makes it a bit more complicated to save/load models.
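As a rough eager-mode sanity check of the idea (a sketch, not from the original answer; the shapes and the theta variable are illustrative): gradients do flow back through the provided kernel tensor to whatever produced it.
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda

def custom_dense(t):
    inputs, kernel, bias = t
    return K.bias_add(K.dot(inputs, kernel), bias, data_format='channels_last')

theta = tf.Variable(np.random.rand(784, 64), dtype=tf.float32)
bias = tf.zeros((64,))
x = tf.random.normal((1, 784))

with tf.GradientTape() as tape:
    kernel = 2.0 * theta                               # custom tensor depending on theta
    hiddens = Lambda(custom_dense)([x, kernel, bias])  # layer called on eager tensors
    loss = tf.reduce_sum(hiddens)

print(tape.gradient(loss, theta).shape)                # (784, 64)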
Old answer
You should probably use a Keras Dense layer and set its weights in a standard way:
layer = tf.keras.layers.Dense(64, name='the_layer')
layer.set_weights([np.random.rand(784, 64), np.random.rand(64)])
If you need these weights to be non-trainable, set the following before compiling the Keras model:
model.get_layer('the_layer').trainable=False
If you want direct access to the variables as tensors, they are:
kernel = layer.kernel
bias = layer.bias
There are plenty of other options, but that depends on your exact intention, which is not clear in your question.
Below is a general-purpose solution that works with arbitrary Keras models in TF2.
First, we need to define an auxiliary function canonical_variable_name and a context manager custom_make_variable with the following signatures (see implementation in meta-blocks library).
def canonical_variable_name(variable_name: str, outer_scope: str):
    """Returns the canonical variable name: `outer_scope/.../name`."""
    # ...

@contextlib.contextmanager
def custom_make_variable(
    canonical_custom_variables: Dict[str, tf.Tensor], outer_scope: str
):
    """A context manager that overrides `make_variable` with a custom function.

    When building layers, Keras uses the `make_variable` function to create weights
    (kernels and biases for each layer). This function wraps `make_variable` with
    a closure that infers the canonical name of the variable being created (of the
    form `outer_scope/.../var_name`) and looks it up in the `custom_variables` dict
    that maps canonical names to tensors. The function adheres to the following logic:

    * If there is a match, it does a few checks (shape, dtype, etc.) and returns
      the found tensor instead of creating a new variable.
    * If there is a match but checks fail, it throws an exception.
    * If there are no matching `custom_variables`, it calls the original
      `make_variable` utility function and returns a newly created variable.
    """
    # ...
Using these functions, we can create arbitrary Keras models with custom tensors used as variables:
import numpy as np
import tensorflow as tf
canonical_custom_variables = {
"model/dense/kernel": tf.constant(
np.random.rand(784, 64), name="custom_kernel", dtype=tf.float32),
"model/dense/bias": tf.constant(
np.random.rand(64), name="custom_bias", dtype=tf.float32),
}
# Compute hiddens using a dense layer with custom variables.
x = tf.random.normal(shape=(1, 784), name="inputs")
with custom_make_variable(canonical_custom_variables, outer_scope="model"):
Layer = tf.layers.Dense(64)
hiddens = Layer(x)
print(Layer.variables)
Not entirely sure I understand your question correctly, but it seems to me that it should be possible to do what you want with a combination of custom layers and the Keras functional API.
Custom layers allow you to build any layer you want in a way that is compatible with Keras, e.g.:
class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs, activation=None):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        self.kernel = self.add_weight("kernel",
                                      shape=[int(input_shape[-1]),
                                             self.num_outputs],
                                      initializer='normal')
        self.bias = self.add_weight("bias",
                                    shape=[self.num_outputs, ],
                                    initializer='normal')

    def call(self, inputs):
        return self.activation(tf.matmul(inputs, self.kernel) + self.bias)
and the functional API allows you to access the outputs of said layers and re-use them:
inputs = keras.Input(shape=(784,), name='img')
x1 = MyDenseLayer(64, activation='relu')(inputs)
x2 = MyDenseLayer(64, activation='relu')(x1)
outputs = MyDenseLayer(10, activation='softmax')(x2)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
Here x1 and x2 can be connected to other subnets.

How to create composable Keras layers while maintaining access to child layers?

I'm trying to create a custom Keras Layer that is composed of a variable number of other Keras layers. I want access to both the grouped layers treated as a single layer with an input and output as well as access to the individual sublayers.
I'm just wondering, what is the "proper" method for performing this form of layer abstraction in Keras?
I'm not sure if this should be done through Keras's Layer API, or if I can work with name scopes and wrapping layers in a function and then select all layers under some name scope. (e.g. model.get_layer('custom1/*'), but I don't think that works, and name_scope doesn't appear to apply to layer names)
One of the other issues with using Keras Layers for this is that you need to assign the child variables as direct attributes (I assume it makes use of the descriptor API) to the class in order for the trainable weights to be added to the model (which is messier when we don't have a fixed number of layers, meaning we'd need to hack something together with setattr(), which, ew).
Btw, this isn't my actual code, but is a barebones simplification of what I'm trying to accomplish.
class CustomLayer(Layer):
    def __init__(self, *a, n_layers=2, **kw):
        self.n_layers = n_layers
        super().__init__(*a, **kw)

    def call(self, x):
        for i in range(self.n_layers):
            x = Dense(64, activation='relu')(x)
            x = Dropout(0.2)(x)
        return x

input_shape, n_classes = (None, 100), 10

x = input = Input(input_shape)
x = CustomLayer()(x)
x = CustomLayer()(x)
x = Dense(n_classes, activation='softmax')(x)
model = Model([input], [x])
'''
So this works fine
'''
print([l.name for l in model.layers])
print(model.layers[1].name)
print(model.layers[1].output)
# Output: ['input', 'custom_1', 'custom_2', 'dense']
# Output: 'custom_1'
# Output: the custom_1 output tensor
'''
But I'm not sure how to do something to the effect of this ...
'''
print(model.layers[1].layers[0].name)
print(model.layers[1].layers[0].output)
# Output: custom_1/dense
# Output: custom_1/dense output tensor
