Pixel-wise loss weight for image segmentation in Keras - python

I am currently using a modified version of the U-Net (https://arxiv.org/pdf/1505.04597.pdf) to segment cell organelles in microscopy images. Since I am using Keras, I took the code from https://github.com/zhixuhao/unet. However, this version does not implement the weight map that forces the network to learn the border pixels.
The results I have obtained so far are quite good, but the network fails to separate objects that are close to each other. So I want to try and make use of the weight map mentioned in the paper. I have been able to generate the weight map (based on the given formula) for each label image, but I was unable to find out how to use this weight map to train my network and thus solve the above-mentioned problem.
Do weight maps and label images have to be combined somehow, or is there a Keras function that will allow me to make use of the weight maps? I am a biologist who only recently started working with neural networks, so my understanding is still limited. Any help or advice would be greatly appreciated.

In case it is still relevant: I needed to solve this recently. You can paste the code below into a Jupyter notebook to see how it works.
%matplotlib inline
import numpy as np
from skimage.io import imshow
from skimage.measure import label
from scipy.ndimage.morphology import distance_transform_edt

def generate_random_circles(n=100, d=256):
    circles = np.random.randint(0, d, (n, 3))
    x = np.zeros((d, d), dtype=int)
    f = lambda x, y: ((x - x0)**2 + (y - y0)**2) <= (r / d * 10)**2
    for x0, y0, r in circles:
        x += np.fromfunction(f, x.shape)
    x = np.clip(x, 0, 1)
    return x
def unet_weight_map(y, wc=None, w0=10, sigma=5):
    """
    Generate the weight map for a binary mask as specified in the U-Net paper,
    "U-Net: Convolutional Networks for Biomedical Image Segmentation"
    (https://arxiv.org/pdf/1505.04597.pdf).

    Parameters
    ----------
    y: numpy array
        2D array of shape (image_height, image_width) representing the binary
        mask of objects.
    wc: dict
        Dictionary of class weights.
    w0: int
        Border weight parameter.
    sigma: int
        Border width parameter.

    Returns
    -------
    numpy array
        Training weights. A 2D array of shape (image_height, image_width).
    """
    labels = label(y)
    no_labels = labels == 0
    label_ids = sorted(np.unique(labels))[1:]

    if len(label_ids) > 1:
        distances = np.zeros((y.shape[0], y.shape[1], len(label_ids)))

        # distance of every pixel to each labelled object
        for i, label_id in enumerate(label_ids):
            distances[:, :, i] = distance_transform_edt(labels != label_id)

        distances = np.sort(distances, axis=2)
        d1 = distances[:, :, 0]  # distance to the nearest object
        d2 = distances[:, :, 1]  # distance to the second-nearest object
        w = w0 * np.exp(-1/2 * ((d1 + d2) / sigma)**2) * no_labels
    else:
        w = np.zeros_like(y)

    if wc:
        class_weights = np.zeros_like(y)
        for k, v in wc.items():
            class_weights[y == k] = v
        w = w + class_weights
    return w
y = generate_random_circles()

wc = {
    0: 1,  # background
    1: 5   # objects
}

w = unet_weight_map(y, wc)
imshow(w)
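The question also asks how to actually feed such a weight map into training. A minimal sketch of one option, which is not part of the original answer (it assumes tf.keras with a single-channel sigmoid output, and the names masks, weight_maps and x_train are hypothetical), is to stack the binary mask and the weight map into y_true and weight the per-pixel binary cross-entropy in a custom loss:
import numpy as np
from tensorflow.keras import backend as K

def weighted_binary_crossentropy(y_true_and_weights, y_pred):
    # y_true_and_weights has shape (batch, H, W, 2):
    # channel 0 holds the binary mask, channel 1 the output of unet_weight_map
    y_true = y_true_and_weights[..., 0:1]
    weights = y_true_and_weights[..., 1:2]
    per_pixel = K.binary_crossentropy(y_true, y_pred)  # shape (batch, H, W, 1)
    return K.mean(per_pixel * weights)

# Usage sketch (hypothetical variable names):
# y_train = np.stack([masks, weight_maps], axis=-1)   # masks, weight_maps: (N, H, W)
# model.compile(optimizer='adam', loss=weighted_binary_crossentropy)
# model.fit(x_train, y_train, batch_size=4, epochs=10)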

I think you want to use class_weight in Keras. This is actually simple to introduce in your model if you have already calculated the class weights.
Create a dictionary with your class labels and their associated weights, for example:
class_weight = {0: 10.9,
                1: 20.8,
                2: 1.0,
                3: 50.5}
Or create a 1D Numpy array of the same length as your number of classes, for example:
class_weight = np.array([10.9, 20.8, 1.0, 50.5])
Pass this parameter during training in your model.fit or model.fit_generator call:
model.fit(x, y, batch_size=batch_size, epochs=num_epochs, verbose=1, class_weight=class_weight)
You can look up the Keras documentation for more details here.


How to efficiently draw a plot of a torch.nn model?

I'm exploring neural networks, and I want to model some pictures with a neural network. A picture is a function that maps pixel coordinates to color, so I build my network with 2 input variables (x, y) and 1 (shade) to 3 (R, G, B) output values. For example, like this:
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(2, 2),
    nn.Sigmoid(),
    nn.Linear(2, 1),
)
Now, I plot it like this:
import torch
import matplotlib.pyplot as plt
import numpy as np

def draw_image1(f):
    image = []
    y = 1
    delta = 0.005
    while y > 0:
        x = 0
        row = []
        while x < 1:
            row.append(f(x, y))
            x += delta
        image.append(row)
        y -= delta
    plt.imshow(image, extent=[0, 1, 0, 1], cmap='winter')
    plt.draw()

draw_image1(lambda x, y: net(torch.Tensor([x, y])).item())
But it looks ugly and is slow because it uses Python lists instead of numpy arrays or tensors.
I have another version of code that draws images from functions, which looks better and is 100x faster:
def draw_image2(f):
    x = np.linspace(0, 1, num=200)
    y = np.linspace(0, 1, num=200)
    X, Y = np.meshgrid(x, y)
    image = f(X, Y)
    plt.imshow(image, extent=[0, 1, 0, 1], cmap='winter')
    plt.draw()
It works for functions that use numpy operations (like lambda x, y: x + y), but when I plug in my net in the same way as for the previous function (draw_image2(lambda x, y: net(torch.Tensor([x, y])).item())), I get RuntimeError: mat1 and mat2 shapes cannot be multiplied (400x200 and 2x2), which I understand as my neural net complaining that it wants to be fed data in smaller pieces.
Is there any proper way to plot pytorch neural network output?
To feed a whole batch into nn.Linear(i, o), the input typically has the shape (b, i), where b is the size of the batch. If you take a look at the documentation, you can actually use additional "batch" dimensions in front of the feature dimension. Since PyTorch was primarily made for deep learning based on stochastic gradient descent, pretty much all of its modules expect at least one batch dimension.
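For example, a quick check (not part of the original answer) showing that nn.Linear happily accepts extra leading dimensions and only requires the last dimension to match its input size:
import torch
import torch.nn as nn

lin = nn.Linear(2, 1)
print(lin(torch.randn(5, 2)).shape)         # torch.Size([5, 1])        -- one batch dimension
print(lin(torch.randn(200, 200, 2)).shape)  # torch.Size([200, 200, 1]) -- extra leading dims are fine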
So you could easily modify your second plotting function to something like:
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

net = nn.Sequential(
    nn.Linear(2, 2),
    nn.Sigmoid(),
    nn.Linear(2, 1),
)

def draw_image2(f):
    device = torch.device('cpu')  # or use your GPU alternatively
    with torch.no_grad():  # disable building the evaluation graph if you don't need it
        x = torch.linspace(0, 1, 200)
        y = torch.linspace(0, 1, 200)
        X, Y = torch.meshgrid(x, y)
        # the data dimension should be the last one (2), as per the documentation
        inp = torch.stack([X, Y], dim=2).to(device)  # shape = (200, 200, 2)
        image = f(inp)  # shape = (200, 200, 1)
        image = image[..., 0].detach().cpu()  # shape = (200, 200)
        plt.imshow(image, extent=[0, 1, 0, 1], cmap='winter')
        plt.show()
    return image

draw_image2(net)
Note that the with torch.no_grad() is not necessary for this to work, but it will save you some time. Depending on your network architecture it might also be worth setting your network to eval mode (net.eval()) first. Finally, the .to(device)/.cpu() calls are also not necessary if you're not using your GPU.

GPflow 2 custom kernel construction: fine upon construction, but kernel of size None in optimization

I'm creating some GPflow models in which I need the observations before and after a threshold x0 to be independent a priori. I could achieve this with just GP models, or with a ChangePoints kernel with infinite steepness, but neither solution works well with the extensions I have in mind (MOGP in particular).
I figured I could easily construct what I want from scratch, so I made a new Combination kernel object that uses the appropriate child kernel pre- or post-x0. This works as intended when I evaluate the kernel on a set of input points: the expected correlations between points before and after the threshold are zero, and the rest is determined by the child kernels:
import numpy as np
import gpflow
from gpflow.kernels import Matern32, Combination
import matplotlib.pyplot as plt
import tensorflow as tf

class IndependentKernel(Combination):
    def __init__(self, kernels, x0, forcing_variable=0, name=None):
        self.x0 = x0
        self.forcing_variable = forcing_variable
        super().__init__(kernels, name=name)

    def K(self, X, X2=None):
        # threshold X, X2 based on self.x0, and construct a joint tensor
        if X2 is None:
            X2 = X
        fv = self.forcing_variable
        mask = tf.dtypes.cast(X[:, fv] >= self.x0, tf.int32)
        X_partitioned = tf.dynamic_partition(X, mask, 2)
        X2_partitioned = tf.dynamic_partition(X2, mask, 2)
        K_pre = self.kernels[0].K(X_partitioned[0], X2_partitioned[0])
        K_post = self.kernels[1].K(X_partitioned[1], X2_partitioned[1])
        zero_block_1 = tf.zeros([K_pre.shape[0], K_post.shape[1]], tf.float64)
        zero_block_2 = tf.zeros([K_post.shape[0], K_pre.shape[1]], tf.float64)
        upper_row = tf.concat([K_pre, zero_block_1], axis=1)
        lower_row = tf.concat([zero_block_2, K_post], axis=1)
        return tf.concat([upper_row, lower_row], axis=0)

    def K_diag(self, X):
        fv = self.forcing_variable
        mask = tf.dtypes.cast(X[:, fv] >= self.x0, tf.int32)
        X_partitioned = tf.dynamic_partition(X, mask, 2)
        return tf.concat([self.kernels[0].K_diag(X_partitioned[0]),
                          self.kernels[1].K_diag(X_partitioned[1])],
                         axis=1)


def f(x):
    return np.sin(6 * (x - 0.7))

x0 = 0.3
n = 100
x = np.linspace(0, 1, n)
sigma = 0.5
y = np.random.normal(loc=f(x), scale=sigma)
fv = 0
X = x[:, None]

kernel = IndependentKernel([Matern32(), Matern32()], x0=x0, name='indep')
x_pred = np.linspace(0, 1, 100)
K = kernel.K(x_pred[:, None])  # <- kernel is evaluated correctly here
However, when I want to train a GPflow model with this kernel, I receive the error message TypeError: Expected int32, got None of type 'NoneType' instead. This appears to result from the sub-kernel matrices K_pre and K_post having size (None, 1), instead of the expected square shapes (which they correctly have if I evaluate the kernel 'manually').
m = gpflow.models.GPR(data=(X, y[:, None]), kernel=kernel)
gpflow.optimizers.Scipy().minimize(m.training_loss,
                                   m.trainable_variables,
                                   options=dict(maxiter=10000),
                                   method="L-BFGS-B")  # <- K_pre & K_post are of size (None, 1) now?
What can I do to make the kernel properly trainable?
I am using GPflow 2.1.3 and TensorFlow 2.4.1.
This is not a GPflow issue but a subtlety of TensorFlow's eager vs. graph mode. In eager mode (the default behaviour when you interact with tensors "manually", as when calling the kernel yourself), K_pre.shape works just as expected. In graph mode (which is what happens when you wrap code in tf.function()), this does not always work (e.g. the shape might depend on tf.Variables with None shapes), and you have to use tf.shape(K_pre) instead to obtain the dynamic shape, which depends on the actual values inside the variables. GPflow's Scipy class by default wraps the loss-and-gradient computation inside tf.function() to speed up optimization. If you explicitly turn this off by passing compile=False to the minimize() call, your code example runs fine. If you instead replace the .shape attributes with tf.shape() calls to fix it properly, it will likewise run fine.
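A minimal sketch of the two fixes described above, assuming they are applied to the question's IndependentKernel.K method and to the minimize() call, respectively:
# Inside IndependentKernel.K, use dynamic shapes so the code also works in graph mode:
zero_block_1 = tf.zeros([tf.shape(K_pre)[0], tf.shape(K_post)[1]], tf.float64)
zero_block_2 = tf.zeros([tf.shape(K_post)[0], tf.shape(K_pre)[1]], tf.float64)

# Alternatively, keep the static .shape attributes but skip the tf.function() compilation:
gpflow.optimizers.Scipy().minimize(m.training_loss,
                                   m.trainable_variables,
                                   options=dict(maxiter=10000),
                                   method="L-BFGS-B",
                                   compile=False)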

How do I rotate a PyTorch image tensor around its center in a way that supports autograd?

I'd like to randomly rotate an image tensor (B, C, H, W) around its center (a 2D rotation, I think?). I would like to avoid using NumPy and Kornia, so that I basically only need to import from the torch module. I'm also not using torchvision.transforms, because I need it to be autograd compatible. Essentially I'm trying to create an autograd-compatible version of torchvision.transforms.RandomRotation() for visualization techniques like DeepDream (so I need to avoid artifacts as much as possible).
import torch
import math
import random
import torchvision.transforms as transforms
from PIL import Image

# Load image
def preprocess_simple(image_name, image_size):
    Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
    image = Image.open(image_name).convert('RGB')
    return Loader(image).unsqueeze(0)

# Save image
def deprocess_simple(output_tensor, output_name):
    output_tensor.clamp_(0, 1)
    Image2PIL = transforms.ToPILImage()
    image = Image2PIL(output_tensor.squeeze(0))
    image.save(output_name)

# Somehow rotate tensor around its center
def rotate_tensor(tensor, radians):
    ...
    return rotated_tensor

# Get a random angle within a specified range
r_degrees = 5
angle_range = list(range(-r_degrees, r_degrees))
n = random.randint(angle_range[0], angle_range[len(angle_range)-1])

# Convert the angle from degrees to radians
ang_rad = n * math.pi / 180

# test_tensor = preprocess_simple('path/to/file', (512, 512))
test_tensor = torch.randn(1, 3, 512, 512)

# Rotate input tensor somehow
output_tensor = rotate_tensor(test_tensor, ang_rad)

# Optionally use this to check the rotated image
# deprocess_simple(output_tensor, 'rotated_image.jpg')
Some example outputs of what I'm trying to accomplish:
So the grid generator and the sampler are sub-modules of the Spatial Transformer (Jaderberg, Max, et al.). These sub-modules are not trainable; they let you apply a learnable, as well as a non-learnable, spatial transformation.
Here I take these two sub-modules and use them to rotate an image by theta using PyTorch's functions torch.nn.functional.affine_grid and torch.nn.functional.grid_sample (these functions are implementations of the generator and the sampler, respectively):
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt

def get_rot_mat(theta):
    theta = torch.tensor(theta)
    return torch.tensor([[torch.cos(theta), -torch.sin(theta), 0],
                         [torch.sin(theta), torch.cos(theta), 0]])

def rot_img(x, theta, dtype):
    rot_mat = get_rot_mat(theta)[None, ...].type(dtype).repeat(x.shape[0], 1, 1)
    grid = F.affine_grid(rot_mat, x.size()).type(dtype)
    x = F.grid_sample(x, grid)
    return x
# Test:
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
# im should be a 4D tensor of shape B x C x H x W with type dtype, range [0, 255]:
plt.imshow(im.squeeze(0).permute(1, 2, 0) / 255)  # to plot it, im should be 1 x C x H x W
plt.figure()
# Rotation by np.pi/2 with autograd support:
rotated_im = rot_img(im, np.pi/2, dtype)  # rotate the image by 90 degrees
plt.imshow(rotated_im.squeeze(0).permute(1, 2, 0) / 255)
In the example above, assume we take our image, im, to be a dancing cat in a skirt:
rotated_im will be a 90-degree counter-clockwise rotated dancing cat in a skirt:
And this is what we get if we call rot_img with theta equal to np.pi/4:
And the best part is that it's differentiable w.r.t. the input and has autograd support! Hooray!
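To get the random rotation the question asked for, a small sketch (assuming rot_img and dtype from this answer and test_tensor from the question) could look like:
import math
import random

r_degrees = 5
theta = random.uniform(-r_degrees, r_degrees) * math.pi / 180   # random angle in radians
rotated = rot_img(test_tensor.type(dtype), theta, dtype)        # rotation stays autograd-compatible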
With torchvision it should be simple:
import torch
import torchvision.transforms.functional as TF

angle = 30
x = torch.randn(1, 3, 512, 512)
out = TF.rotate(x, angle)
For example if x is:
out with a 30 degree rotation is (NOTE: counterclockwise):
There is a pytorch function for that:
x = torch.tensor([[0, 1],
                  [2, 3]])
x = torch.rot90(x, 1, [0, 1])
>> tensor([[1, 3],
           [0, 2]])
Here are the docs: https://pytorch.org/docs/stable/generated/torch.rot90.html

How to calculate the 3x3 covariance matrix for RGB values across an image dataset?

I need to calculate the covariance matrix for RGB values across an image dataset, and then apply Cholesky decomposition to the final result.
The covariance matrix for RGB values is a 3x3 matrix M, where M_(i, i) is the variance of channel i and M_(i, j) is the covariance between channels i and j.
The end result should be something like this:
([[0.26,  0.09,  0.02],
  [0.27,  0.00, -0.05],
  [0.27, -0.09,  0.03]])
I'd prefer to stick to PyTorch functions even though Numpy has a Cov function.
I attempted to recreate the numpy Cov function in PyTorch here based on other cov implementations and clones:
def pytorch_cov(tensor, tensor2=None, rowvar=True):
    if tensor2 is not None:
        tensor = torch.cat((tensor, tensor2), dim=0)
    tensor = tensor.view(1, -1) if tensor.dim() < 2 else tensor
    tensor = tensor.t() if not rowvar and tensor.size(0) != 1 else tensor
    tensor = tensor - torch.mean(tensor, dim=1, keepdim=True)
    return 1 / (tensor.size(1) - 1) * tensor.mm(tensor.t())

def cov_vec(x):
    c = x.size(0)
    m1 = x - torch.sum(x, dim=[1], keepdim=True) / c
    out = torch.einsum('ijk,ilk->ijl', m1, m1) / (c - 1)
    return out
The dataset loading would be like this:
dataset = torchvision.datasets.ImageFolder(data_path)
loader = torch.utils.data.DataLoader(dataset)

for images, _ in loader:
    batch_size = images.size(0)
    ...
For the moment I'm just experimenting with images created with torch.randn(batch_size, 3, height, width).
Edit:
I'm attempting to replicate the matrix from TensorFlow's Lucid here, which is somewhat explained on distill.pub here.
Second Edit:
In order to make the output resemble the example one, you have to do this instead of using Cholesky:
rgb_cov_tensor = rgb_cov_tensor / len(loader.dataset)
U, S, V = torch.svd(rgb_cov_tensor)
epsilon = 1e-10
svd_sqrt = U @ torch.diag(torch.sqrt(S + epsilon))
The resulting matrix can then be used to perform color decorrelation, which is useful for visualizing features (DeepDream). I've implemented it in my project here.
Here is a function for computing the (unbiased) sample covariance matrix of a 3-channel image, named rgb_cov. Cholesky decomposition is then straightforward with torch.cholesky:
import torch

def rgb_cov(im):
    '''
    Assuming im is a torch.Tensor of shape (H, W, 3):
    '''
    im_re = im.reshape(-1, 3)
    im_re -= im_re.mean(0, keepdim=True)          # center each channel
    return 1 / (im_re.shape[0] - 1) * im_re.T @ im_re

# Test:
im = torch.randn(50, 50, 3)
cov = rgb_cov(im)
L_cholesky = torch.cholesky(cov)
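To aggregate this over a whole dataset, one possible sketch (my own assumption, reusing the question's loader and dividing by the dataset size as in the question's second edit; it assumes the loader yields float tensors of shape (batch, 3, H, W)):
rgb_cov_total = torch.zeros(3, 3)
for images, _ in loader:
    for im in images:                                   # im: (3, H, W)
        rgb_cov_total += rgb_cov(im.permute(1, 2, 0))   # rgb_cov expects (H, W, 3)
rgb_cov_total = rgb_cov_total / len(loader.dataset)
L_cholesky = torch.cholesky(rgb_cov_total)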

Need Tensorflow/Keras equivalent for scipy signal.fftconvolve

I want to use scipy.signal.fftconvolve in Tensorflow/Keras, is there any way to do that?
Right now I am using the following code :
window = np.tile(window, (1, 1, 1, 3))
tf.nn.conv2d(img1, window, strides=[1, 1, 1, 1], padding='VALID')
Are these lines equivalent to:
signal.fftconvolve(img1, window, mode='valid')
Implementation
FFT convolution can be implemented relatively easily in TensorFlow. The following follows scipy.signal.fftconvolve quite closely:
import tensorflow as tf

def _centered(arr, newshape):
    # Return the center newshape portion of the array.
    currshape = tf.shape(arr)[-2:]
    startind = (currshape - newshape) // 2
    endind = startind + newshape
    return arr[..., startind[0]:endind[0], startind[1]:endind[1]]

def fftconv(in1, in2, mode="full"):
    # Reorder channels to come second (needed for fft)
    in1 = tf.transpose(in1, perm=[0, 3, 1, 2])
    in2 = tf.transpose(in2, perm=[0, 3, 1, 2])

    # Extract shapes
    s1 = tf.convert_to_tensor(tf.shape(in1)[-2:])
    s2 = tf.convert_to_tensor(tf.shape(in2)[-2:])
    shape = s1 + s2 - 1

    # Compute convolution in fourier space
    sp1 = tf.spectral.rfft2d(in1, shape)
    sp2 = tf.spectral.rfft2d(in2, shape)
    ret = tf.spectral.irfft2d(sp1 * sp2, shape)

    # Crop according to mode
    if mode == "full":
        cropped = ret
    elif mode == "same":
        cropped = _centered(ret, s1)
    elif mode == "valid":
        cropped = _centered(ret, s1 - s2 + 1)
    else:
        raise ValueError("Acceptable mode flags are 'valid',"
                         " 'same', or 'full'.")

    # Reorder channels back to last
    result = tf.transpose(cropped, perm=[0, 2, 3, 1])
    return result
Example
A quick example of applying a gaussian smoothing with width 20 pixels to the standard "face" image is as follows:
if __name__ == '__main__':
    from scipy import misc
    import matplotlib.pyplot as plt
    from tensorflow.python.ops import array_ops, math_ops

    session = tf.InteractiveSession()

    # Create gaussian
    std = 20
    grid_x, grid_y = array_ops.meshgrid(math_ops.range(3 * std),
                                        math_ops.range(3 * std))
    grid_x = tf.cast(grid_x[None, ..., None], 'float32')
    grid_y = tf.cast(grid_y[None, ..., None], 'float32')
    gaussian = tf.exp(-((grid_x - 1.5 * std) ** 2 + (grid_y - 1.5 * std) ** 2) / std ** 2)
    gaussian = gaussian / tf.reduce_sum(gaussian)

    face = misc.face(gray=False)[None, ...].astype('float32')

    # Apply convolution
    result = fftconv(face, gaussian, 'same')
    result_r = session.run(result)

    # Show results
    plt.figure('face')
    plt.imshow(face[0, ...] / 256.0)
    plt.figure('convolved')
    plt.imshow(result_r[0, ...] / 256.0)
You want just a regular conv2d then...
If you want it somewhere in the model, add a Conv2D(...,name='myLayer') layer, and in the model use model.get_layer('myLayer').set_weights([filters,biases])
If you want it in a loss function, just create a loss function:
import keras.backend as K

def myLoss(y_true, y_pred):
    # where y_true is the true training data and y_pred is the model's output
    convResult = K.conv2d(y_pred, kernel=window, padding='same')
    anotherResult = K.depthwise_conv2d(y_pred, kernel=window, padding='same')
    # ... combine these into a scalar value and return it as the loss
The regular conv2D will assume each output channel in the filter will process and sum all input channels.
The depthwise convolution will keep input channels separate.
Beware of the window, though. I don't know the format in tensorflow or scipy, but the kernel in keras should have this shape: (height, width, numberOfInputChannels, numberOfOutputChannels)
I believe, if I understand it right, it should be window = np.reshape(_FSpecialGauss(size, sigma), (size, size, 1, 1)), considering that "size" is the size of the kernel and you have only 1 input channel and 1 output channel.
I used padding='same' to get the result image the same size of the input. If you use padding='valid', you will lose the borders (although in your case, your filter seems to have size (1,1), which won't remove borders).
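As a concrete illustration of that kernel layout, here is a hypothetical helper (gaussian_kernel is my own name, not something from scipy or Keras) that builds a single-channel Gaussian window in the (height, width, numberOfInputChannels, numberOfOutputChannels) shape described above:
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    g = g / g.sum()                                    # normalize so the weights sum to 1
    return g.reshape(size, size, 1, 1).astype('float32')

window = gaussian_kernel(size=11, sigma=1.5)           # shape (11, 11, 1, 1)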
You can use any tensorflow function inside the loss function as well:
def customLoss(yTrue, yPred):
    tf.anyFunction(yTrue)
    tf.anyFunction(yPred)
Using keras backend will let your code be portable to other backends later.
When compiling the model, give it your loss function:
model.compile(loss=myLoss, optimizer=...)
