How do I implement a Gaussian blurring layer in Keras? - python

I have an autoencoder, and I need to add a Gaussian noise layer after my output. I need a custom layer to do this, but I really do not know how to build it; I need to build it using tensors.
What should I do if I want to implement the above equation in the call part of the following code?
class SaltAndPepper(Layer):

    def __init__(self, ratio, **kwargs):
        super(SaltAndPepper, self).__init__(**kwargs)
        self.supports_masking = True
        self.ratio = ratio

    # the definition of the call method of custom layer
    def call(self, inputs, training=None):
        def noised():
            shp = K.shape(inputs)[1:]
            **what should I put here????**
            return out
        return K.in_train_phase(noised(), inputs, training=training)

    def get_config(self):
        config = {'ratio': self.ratio}
        base_config = super(SaltAndPepper, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
I also tried to implement this using a Lambda layer, but it does not work.

If you are looking for additive or multiplicative Gaussian noise, then they have already been implemented as layers in Keras: GaussianNoise (additive) and GaussianDropout (multiplicative).
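For reference, a minimal sketch of what the noised() body in the question could look like for additive Gaussian noise (this assumes the layer stores a stddev parameter instead of ratio; it is an illustration, not the built-in GaussianNoise implementation):

def noised():
    # add zero-mean Gaussian noise with the given standard deviation
    return inputs + K.random_normal(shape=K.shape(inputs),
                                    mean=0.,
                                    stddev=self.stddev)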
However, if you are specifically looking for the blurring effect as in Gaussian blur filters in image processing, then you can simply use a depthwise convolution layer (to apply the filter on each input channel independently) with fixed weights to get the desired output. Note that you need to generate the weights of the Gaussian kernel to set them as the weights of the DepthwiseConv2D layer; for that you can use the function introduced in this answer:
import numpy as np
from keras.layers import DepthwiseConv2D
kernel_size = 3 # set the filter size of Gaussian filter
kernel_weights = ... # compute the weights of the filter with the given size (and additional params)
# assuming that the shape of `kernel_weights` is `(kernel_size, kernel_size)`
# we need to modify it to make it compatible with the number of input channels
in_channels = 3 # the number of input channels
kernel_weights = np.expand_dims(kernel_weights, axis=-1)
kernel_weights = np.repeat(kernel_weights, in_channels, axis=-1) # apply the same filter on all the input channels
kernel_weights = np.expand_dims(kernel_weights, axis=-1) # for shape compatibility reasons
# define your model...
# somewhere in your model you want to apply the Gaussian blur,
# so define a DepthwiseConv2D layer and set its weights to kernel weights
g_layer = DepthwiseConv2D(kernel_size, use_bias=False, padding='same')
g_layer_out = g_layer(the_input_tensor_for_this_layer) # apply it on the input Tensor of this layer
# the rest of the model definition...
# do this BEFORE calling `compile` method of the model
g_layer.set_weights([kernel_weights])
g_layer.trainable = False # the weights should not change during training
# compile the model and start training...

After a while trying to figure out how to do this with the code @today has provided, I have decided to share my final code with anyone possibly needing it in the future. I have created a very simple model that only applies the blurring to the input data:
import numpy as np
from keras.layers import DepthwiseConv2D
from keras.layers import Input
from keras.models import Model

def gauss2D(shape=(3,3), sigma=0.5):
    m, n = [(ss-1.)/2. for ss in shape]
    y, x = np.ogrid[-m:m+1, -n:n+1]
    h = np.exp(-(x*x + y*y) / (2.*sigma*sigma))
    h[h < np.finfo(h.dtype).eps*h.max()] = 0
    sumh = h.sum()
    if sumh != 0:
        h /= sumh
    return h

def gaussFilter():
    kernel_size = 3
    kernel_weights = gauss2D(shape=(kernel_size, kernel_size))
    in_channels = 1  # the number of input channels
    kernel_weights = np.expand_dims(kernel_weights, axis=-1)
    kernel_weights = np.repeat(kernel_weights, in_channels, axis=-1)  # apply the same filter on all the input channels
    kernel_weights = np.expand_dims(kernel_weights, axis=-1)  # for shape compatibility reasons
    inp = Input(shape=(3,3,1))
    g_layer = DepthwiseConv2D(kernel_size, use_bias=False, padding='same')(inp)
    model_network = Model(inputs=inp, outputs=g_layer)
    model_network.layers[1].set_weights([kernel_weights])
    model_network.trainable = False  # can be applied to a given layer only as well
    return model_network

a = np.array([[[1, 2, 3], [4, 5, 6], [4, 5, 6]]])
filt = gaussFilter()
print(a.reshape((1,3,3,1)))
print(filt.predict(a.reshape(1,3,3,1)))
For testing purposes the data are only of shape (1,3,3,1). The function gaussFilter() creates a very simple model with only an input and one convolution layer that provides Gaussian blurring with the weights defined in the function gauss2D(). You can add parameters to the function to make it more dynamic, e.g. shape, kernel size, channels. According to my findings, the weights can only be applied after the layer has been added to the model.

Regarding the error AttributeError: 'float' object has no attribute 'dtype': just change K.sqrt to math.sqrt and it will work.

Related

How to make my customized Keras layer converge correctly? Gradient issue?

Background:
I am trying to use an HW classification engine that performs as follows:
The input is a 12x12 image.
On each 4x4 block (there are 3x3 blocks) it applies a transform of the form Out_4x4 = Weight_4x4 x In_4x4 x Weight_4x4' (a matrix representation of a 4x4 transform). By default it uses a 4x4 DCT, but this kernel is programmable.
As a result, there are 3 x 3 x 16 "DCT coefficients"; each group of 3x3 coefficients is weighted-averaged to produce 16 "DCT coefficients".
A complex classification process decides which of the 2 classes this image belongs to.
What I tried:
I wanted to find the "best" transform instead of using the default DCT.
I tried creating a CNN using Keras and TensorFlow to simulate the feature extraction process. I read examples and looked at the Keras code of DepthwiseConv2D, for example, and created the following customized Keras layer:
class ChildDenseTensor(keras.layers.Layer):
    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        self.activation = activation

    def get_config(self):
        config = super().get_config()
        config.update({
            "units": self.units,
            "activation": self.activation,
        })
        return config

    def build(self, input_shape):
        input_dim = input_shape[-1]
        self.W = self.add_weight(shape=(1, 4, 4, self.units), initializer='random_normal')

    def call(self, inputs):
        input_dim = inputs.shape[0]
        if input_dim is not None:
            input_dim = input_dim
        else:
            input_dim = 1
        out = tf.zeros((input_dim, 12, 12))
        tmp_img_ts = tf.reshape(tf.convert_to_tensor(()), (0, 0, 0))
        for img in range(0, input_dim):  # quite a lot of work here to get the correct formatting
            for i in [0, 4, 8]:
                for j in [0, 4, 8]:
                    y = tf.matmul(tf.matmul(tf.reshape(self.W[0], (4, 4)), tf.reshape(inputs[img, i:(i+4), j:(j+4)], (4, 4))), tf.reshape(self.W, (4, 4)), transpose_b=True)
                    if j == 0:
                        dim0 = tf.reshape(y, [1, 16])
                    else:
                        dim0 = tf.concat([dim0, tf.reshape(y, [1, 16])], 0)
                if i == 0:
                    dim1 = tf.reshape(dim0, [1, 3, 16])
                else:
                    dim1 = tf.concat([dim1, tf.reshape(dim0, [1, 3, 16])], 0)
            if img == 0:
                dim2 = tf.reshape(dim1, [1, 3, 3, 16])
            else:
                dim2 = tf.concat([dim2, tf.reshape(dim1, [1, 3, 3, 16])], 0)
        return dim2
and built the following model:
model = tf.keras.Sequential([
    layers.Input((12,12,1)),
    ChildDenseTensor(units=1, activation=tf.nn.relu),  # input_shape=(12,12,1), output_shape=(3,3,16)
    tf.keras.layers.MaxPooling2D(pool_size=(3,3)),  # -> (1,1,16), averaging the "coefficients"; I also tried 3x3 conv averages
    tf.keras.layers.Conv2D(hparams[HP_FILTER_NUM], 1, activation='relu'),  # classification layers - I also tried more layers and various numbers of filters, but with no substantial improvement
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2)
])
I also tried more complex models, but in all of them the training is very slow and does not converge to reasonable results.
My questions:
Am I required to produce the gradients somehow? I did not see this being done in the examples or in the Keras code, but if I should, then how? And if I don't need to, does it compute them by some approximation?
Is there an easier solution to my problem? Is there something incorrect in my implementation?
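One note that may help here: gradients for tf.matmul, tf.reshape and tf.concat are produced automatically by TensorFlow autodiff, so no hand-written gradient is required; the slow training is more likely caused by the Python loops in call, which unroll into a very large graph. A minimal sketch of a vectorized alternative (not the original code, just an illustration of the same Out = W·In·W' block transform with no loops):

import tensorflow as tf

class BlockTransform(tf.keras.layers.Layer):
    # sketch: applies a learnable 4x4 transform W @ block @ W^T to each of the
    # 3x3 non-overlapping 4x4 blocks of a 12x12 single-channel input
    def build(self, input_shape):
        self.W = self.add_weight(shape=(4, 4), initializer='random_normal', name='W')

    def call(self, inputs):
        x = tf.reshape(inputs, (-1, 3, 4, 3, 4))              # split 12x12 into blocks
        x = tf.transpose(x, (0, 1, 3, 2, 4))                  # (batch, 3, 3, 4, 4)
        y = tf.einsum('ij,bhwjk,lk->bhwil', self.W, x, self.W)  # W @ block @ W^T
        return tf.reshape(y, (-1, 3, 3, 16))                  # 16 coefficients per block

Because everything is expressed with differentiable TF ops, TensorFlow computes exact gradients automatically (no approximation involved).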

How to use the batch_size of a Keras tensor at model building time?

I want to use an external program as a custom operation.
Because automatic gradients would not be available, I wrote code that provides gradients using numerical methods. However, because it has to compute batch_size derivatives,
I wrote it to get batch_size from the shape of x.
The following is an example using a numpy function as the external program:
f(x) = np.sum(x**2)
(In fact, for this simple numpy function, no loop over batch_size is necessary. But it is written this way for a general external function.)
@tf.custom_gradient
def custom_op(x):
    # without using numpy, use external function
    # assume x shape = (batch_size, 3)
    batch_size = x.shape[0]
    input_length = x.shape[1]
    # assert input_length == 3
    yout = []  # shape should be (batch_size, 1)
    gout = []  # shape should be (batch_size, 3)
    for i in range(batch_size):
        inputs = x[i, :]  # shape (3,)
        y = np.sum(inputs**2)  # scalar
        yout.append(y)
        # compute differences
        dy = []
        for j in range(len(inputs)):
            delta = np.zeros_like(inputs)
            delta[j] = np.abs(inputs[j]) * 0.001
            yplus = np.sum((inputs + delta)**2)  # change only j-th input
            grad = (yplus - y) / delta[j]  # scalar
            dy.append(grad)
        gout.append(dy)
    yout = tf.convert_to_tensor(yout, dtype='float32')  # (batch_size,)
    yout = tf.reshape(yout, shape=(batch_size, 1))  # (batch_size, 1)
    gout = tf.convert_to_tensor(gout, dtype='float32')  # (batch_size, 3)
    gout = tf.reshape(gout, shape=(batch_size, input_length))  # (batch_size, 3)
    def grad(upstream):
        return upstream * gout
    return yout, grad

x = tf.Variable([[1., 2., 3.], [2., 3., 4.]], dtype='float32')
with tf.GradientTape() as tape:
    y = custom_op(x)
tape.gradient(y, x)
and found that it works.
However, when I tried to use it in a Keras model, for example,
def construct_model():
    inputs = tf.keras.Input(shape=(3,))  # input array
    x = tf.keras.layers.Dense(1)(inputs)
    outputs = custom_op(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    optimizer = 'adam'
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model
model = construct_model()
it gives errors because the KerasTensor "inputs" does not have a specified batch_size.
I tried to specify the batch_size as "tf.keras.Input(shape=(3,), batch_size=2)".
However, that also raises errors because of the use of KerasTensors.
How should I change custom_op to be compatible with Keras?
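One way to sidestep the problem, at least when the operation can be expressed with batch-agnostic TensorFlow ops, is to avoid needing a concrete batch_size at all. A minimal sketch for this particular f(x) (an illustration, not a general solution for external programs):

import tensorflow as tf

@tf.custom_gradient
def custom_op(x):
    # batch-size-agnostic: works for any batch size, including symbolic Keras tensors
    y = tf.reduce_sum(x**2, axis=1, keepdims=True)  # (batch_size, 1)
    def grad(upstream):
        return upstream * 2.0 * x  # analytic gradient of sum(x**2)
    return y, grad

For a genuinely external function, the usual pattern is to wrap the per-batch loop in tf.py_function inside the op, so it runs at execution time, when the batch size is concrete, rather than at model-building time.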

How can I specify the input dimension in the neural_tangents.stax framework?

I have code defining the structure of a model:
from neural_tangents import stax
from neural_tangents.stax import Dense
from jax import jit

def model(
    W_std,
    b_std,
    width,
    depth,
    activation,
    parameterization
):
    """Construct fully connected NN model and infinite width NTK & NNGP kernel
    function.

    Args:
        W_std (float): Weight standard deviation.
        b_std (float): Bias standard deviation.
        width (int): Hidden layer width.
        depth (int): Number of hidden layers.
        activation (string): Activation function string, 'erf' or 'relu'.
        parameterization (string): Parameterization string, 'ntk' or 'standard'.

    Returns:
        `(init_fn, apply_fn, kernel_fn)`
    """
    act = activation_fn(activation)
    layers_list = [Dense(width, W_std, b_std, parameterization=parameterization)]

    def layer_block():
        return stax.serial(act(), Dense(width, W_std, b_std, parameterization=parameterization))

    for _ in range(depth-1):
        layers_list += [layer_block()]
    layers_list += [act(), Dense(1, W_std, b_std, parameterization=parameterization)]
    # print (f"---- layer list is {layers_list} ------")
    init_fn, apply_fn, kernel_fn = stax.serial(*layers_list)
    apply_fn = jit(apply_fn)
    return init_fn, apply_fn, kernel_fn
I can't see where I can set the input dimension. By default it is 1, but I need to adapt this structure to inputs of higher dimension. The width parameter in Dense specifies only the output dimension. How can I change the input dimension?
The code is from here.
The key is that Dense doesn't require an input dimension; it is specified through the init_fn function:
init_fn, apply_fn, kernel_fn = model(
    W_std,
    b_std,
    width,
    depth,
    activation,
    parameterization
)
_, init_params = init_fn(key, input.shape)
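To make the shape explicit, a short sketch (the sample and feature counts are illustrative): for d-dimensional inputs, the last axis of the input shape passed to init_fn is d.

from jax import random

key = random.PRNGKey(0)
# e.g. 128 samples of 5-dimensional inputs
_, init_params = init_fn(key, (128, 5))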

Replace BackPropagation functionality of Keras Layers

I am using the YOLO v3 Keras implementation and wish to add a guided backpropagation module to further analyse the behaviour of the network.
Inside the yolo.py file, I have added this function in an attempt to compute backprop:
def propb_guided(self, image, layer_index):
    from tensorflow.python.ops import nn_ops, gen_nn_ops
    from tensorflow.python.framework import ops

    box_confidence = self.yolo_model.layers[layer_index].output[..., 4:5]
    box_class_probs = self.yolo_model.layers[layer_index].output[..., 5:]
    #box_class_probs = tf.Print(box_class_probs, [tf.shape(box_class_probs)], message="box class probs")
    scores = tf.reduce_sum(box_class_probs[0] * box_confidence[0], axis=2)
    #scores = tf.contrib.layers.flatten(scores)
    print(self.yolo_model.input.get_shape(), "input blabal")
    scores = tf.Print(scores, [tf.shape(scores)], message="scores")
    #grads = tf.map_fn(lambda x: tf.gradients(x, image)[0], tf.contrib.layers.flatten(scores), dtype=tf.float32)

    # gradients have the 1,1,w,h,c shape, c = 3 because RGB
    grads = tf.reduce_mean(tf.gradients(scores, self.yolo_model.input)[0][0], axis=2)
    grads = tf.Print(grads, [tf.shape(grads)], message="grad shape")

    # prepare image for forward prop
    if self.model_image_size != (None, None):
        assert self.model_image_size[0] % 32 == 0, 'Multiples of 32 required'
        assert self.model_image_size[1] % 32 == 0, 'Multiples of 32 required'
        boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
    else:
        new_image_size = (image.width - (image.width % 32),
                          image.height - (image.height % 32))
        boxed_image = letterbox_image(image, new_image_size)
    image_data = np.array(boxed_image, dtype='float32')
    print(image_data.shape)
    image_data /= 255.
    image_data = np.expand_dims(image_data, 0)  # Add batch dimension.

    # replace all relu layers with guided relu:
    # TODO replacing backprop layers doesn't work. The gradient override map doesn't work in this case
    @ops.RegisterGradient("GuidedReluBP")
    def _GuidedReluGrad(op, grad):
        #return tf.where(0. < grad, gen_nn_ops.relu_grad(grad, op.outputs[0]), tf.zeros(tf.shape(grad)))
        return 100000000

    tf_graph = K.get_session().graph
    layers = [op.name for op in tf_graph.get_operations() if op.type == "Maximum"]
    print(layers, "layers are")

    with tf_graph.gradient_override_map({'Maximum': 'GuidedReluBP'}):
        activation = K.get_session().run(grads, feed_dict={
            self.yolo_model.input: image_data,
            self.input_image_shape: [image.size[1], image.size[0]],
            K.learning_phase(): 0})
    return activation
Right now, the function takes the last layer (shape (?, ?, 255)) and multiplies the 5th filter (box confidence) with the class logits (filters 6 to 80, called box_class_probs). It then sums the products over all the filters and stores the result in the scores tensor.
It then calculates the gradient of scores with respect to each pixel of the input image and stores it in grads (at the tf.Print, grads has shape (416,416), which is the width and height of the input image).
At the end (where the comment says 'replace all relu layers with guided relu'), I want to get all of the LeakyReLU Keras layers and replace their backpropagation mechanism. I noticed that the Keras LeakyReLU layer has a 'Maximum' operation at the end, so I tried to replace each Maximum operation with the guided relu operation I have made. However, this method does not work: the activation variable returned by this function is unaffected, whether I include the RegisterGradient code or not.
How do I replace the backpropagation mechanism of each LeakyReLU inside a Keras graph? This is so that I can implement guided backprop inside the Keras YOLO v3 implementation.
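One detail worth noting: in TF1, gradient_override_map only affects ops created inside the with block, so wrapping only the sess.run call around an already-built graph has no effect; tf.gradients must itself be called under the override. A minimal sketch of the usual guided-backprop pattern (create_model, scores and the override scope are illustrative; for a LeakyReLU realized as a Maximum op the same idea applies to the 'Maximum' op type, but the registered gradient must then return one gradient per input of the op):

from tensorflow.python.framework import ops
import tensorflow as tf

@ops.RegisterGradient("GuidedRelu")
def _guided_relu_grad(op, grad):
    # pass the gradient only where both the incoming gradient
    # and the forward activation are positive
    gate_g = tf.cast(grad > 0., grad.dtype)
    gate_y = tf.cast(op.outputs[0] > 0., grad.dtype)
    return gate_g * gate_y * grad

graph = K.get_session().graph
with graph.as_default(), graph.gradient_override_map({'Relu': 'GuidedRelu'}):
    # the model and tf.gradients(...) must be (re)built inside this block
    # for the override to take effect
    model = create_model()  # hypothetical rebuild of the network
    grads = tf.gradients(scores, model.input)[0]  # `scores` built from `model` as above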

Keras Conv2D custom kernel initialization

I need to initialize custom Conv2D kernels with weights
W = a1*b1 + a2*b2 + ... + an*bn
where W = the custom weight of the Conv2D layer to initialise with
a = random weight tensors, as keras.backend.variable(np.random.uniform()), shape=(64, 1, 10)
b = fixed basis filters, defined as keras.backend.constant(...), shape=(10, 11, 11)
W = K.sum(a[:, :, :, None, None] * b[None, None, :, :, :], axis=2) #shape=(64, 1, 11, 11)
I want my model to update the W values by changing only the a's while keeping the b's constant.
I pass the custom W's as
Conv2D(64, kernel_size=(11, 11), activation='relu', kernel_initializer=kernel_init_L1)(img)
where kernel_init_L1 returns keras.backend.variable(K.reshape(w_L1, (11, 11, 1, 64))).
Problem:
I am not sure if this is the correct way to do this. Is it possible to specify in Keras which weights are trainable and which are not? I know that layers can be set to trainable = True, but I am not sure about individual weights.
I think the implementation is incorrect because I get similar results from my model with or without the custom initializations.
It would be immensely helpful if someone could point out any mistakes in my approach or provide a way to verify it.
A warning about your shapes: if your kernel size is (11,11), and assuming you have 64 input channels and 1 output channel, your final kernel shape must be (11,11,64,1).
You should probably be going for a[None,None] and b[:,:,:,None,None].
class CustomConv2D(Conv2D):
    def __init__(self, filters, kernel_size, kernelB=None, **kwargs):
        super(CustomConv2D, self).__init__(filters, kernel_size, **kwargs)
        self.kernelB = kernelB

    def build(self, input_shape):
        #use the input_shape to calculate the shapes of A and B
        #if needed, pay attention to the "data_format" used.

        #this is an actual weight, because it uses `self.add_weight`
        self.kernelA = self.add_weight(
            shape=shape_of_kernel_A + (1,1), #or (1,1) + shape_of_A
            initializer='glorot_uniform', #or select another
            name='kernelA',
            regularizer=self.kernel_regularizer,
            constraint=self.kernel_constraint)

        #this is an ordinary var that will participate in the calculation
        #not a weight, not updated
        if self.kernelB is None:
            self.kernelB = K.constant(....)
            #use the shape already containing the new axes

        #in the original conv layer, this property would be the actual kernel,
        #now it's just a var that will be used in the original's "call" method
        self.kernel = K.sum(self.kernelA * self.kernelB, axis=2)
        #important: the resulting shape should be:
        #(kernelSizeX, kernelSizeY, input_channels, output_channels)

        #the following are remains of the original code for "build" in Conv2D
        #use_bias is True by default
        if self.use_bias:
            self.bias = self.add_weight(shape=(self.filters,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.bias = None

        # Set input spec.
        self.input_spec = InputSpec(ndim=self.rank + 2,
                                    axes={channel_axis: input_dim})
        self.built = True
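A hypothetical usage sketch (basis_with_axes and img are illustrative names, not defined above; kernelB must already include the extra axes so that kernelA * kernelB broadcasts and the K.sum over axis 2 yields the (kernel_x, kernel_y, input_channels, output_channels) shape):

# hypothetical usage: `basis_with_axes` is the fixed basis tensor with the
# extra broadcast axes added, `img` is the input tensor
kernelB = K.constant(basis_with_axes)
out = CustomConv2D(filters=64, kernel_size=(11, 11), kernelB=kernelB,
                   activation='relu')(img)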
Hints for custom layers
When you create a custom layer from zero (derived from Layer), you should have these methods:
__init__(self, ... parameters ...) - this is the creator, it's called when you create a new instance of your layer. Here, you store the values the user passed as parameters. (In a Conv2D, the init would have the "filters", "kernel_size", etc.)
build(self, input_shape) - this is where you should create the weights (all learnable vars are created here, based on the input shape)
compute_output_shape(self,input_shape) - here you return the output shape based on the input shape
call(self,inputs) - Here you perform the actual layer calculations
Since we're not creating this layer from zero, but deriving it from Conv2D, everything is ready, all we did was to "change" the build method and replace what would be considered the kernel of the Conv2D layer.
More on custom layers: https://keras.io/layers/writing-your-own-keras-layers/
The call method for conv layers is here, in class _Conv(Layer).
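To make the four methods above concrete, a minimal from-zero skeleton (a sketch only; the weight shape and computation are placeholders):

class MyLayer(Layer):
    def __init__(self, units, **kwargs):
        super(MyLayer, self).__init__(**kwargs)
        self.units = units  # store the user's parameters

    def build(self, input_shape):
        # create the learnable weights based on the input shape
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='glorot_uniform',
                                      name='kernel')
        super(MyLayer, self).build(input_shape)

    def compute_output_shape(self, input_shape):
        return input_shape[:-1] + (self.units,)

    def call(self, inputs):
        # the actual layer calculation
        return K.dot(inputs, self.kernel)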
