Random erasing at multiple random regions of image - python

I want to implement a custom random erasing function.
This function would take an input image and a percentage to mask, but would then mask between 1 and 4 random rectangles whose total area adds up to the mask percentage.
For example, say my image is 100x100 pixels and my mask percentage is 15%. I might randomly choose to create 3 rectangles with random shapes such that their combined area sums up to 100*100*0.15 = 1500 pixels.
So far I have managed to write code that decides on the width, height, and number of rectangles, but I am struggling with the part that makes sure they don't mask the same spot.
img_c, img_h, img_w = img.shape[-3], img.shape[-2], img.shape[-1]
area = img_h * img_w
for _ in range(10):
    block_num = torch.randint(1, 4, (1,)).item()
    block_sizes = torch.rand((block_num))
    block_sizes = torch.round((block_sizes / block_sizes.sum()) * (area * mask_percent))
    h = torch.round((torch.rand(block_num) + 0.5) * block_sizes.sqrt())
    w = torch.round(block_sizes / h)
    xs = []
    ys = []
    if not (any(h < img_h) and any(w < img_w)):
        continue
    term = True
    while term:
        xs = [torch.randint(0, img_h - h_ + 1, size=(1,)).item() for h_ in h]
        ys = [torch.randint(0, img_w - w_ + 1, size=(1,)).item() for w_ in w]
        for iter, x in enumerate(xs):
            # if (x + h[iter] - xs) < 0:
            # here I get all confused. I should have a loop that goes over each point and checks that the location + axial size
            # doesn't run into another point. It's confusing because should I also check the vice-versa case? Maybe someone knows of a ready-made solution?
            return i, j, h, w, v
# Return original image
return 0, 0, img_h, img_w, img
The idea is that the while loop is exited once the random location generator produces locations that satisfy the constraints.
Edit:
My latest attempt seems to work, but it always exits the loop unsolved! Is it just not a very likely set of parameters?
img = torch.rand(1, 160, 1024)
img_c, img_h, img_w = img.shape[-3], img.shape[-2], img.shape[-1]
area = img_h * img_w
for _ in range(100):
    block_num = torch.randint(1, 3, (1,)).item()
    block_sizes = torch.rand((block_num))
    block_sizes = torch.round((block_sizes / block_sizes.sum()) * (area * 0.15))
    h = torch.round((torch.rand(block_num) + 0.5) * block_sizes.sqrt()).to(dtype=torch.long)
    w = torch.round(block_sizes / h).to(dtype=torch.long)
    if (h > img_h).any() or (w > img_w).any():
        continue
    overlap1 = torch.zeros(img.shape)
    xs = [torch.randint(0, img_h - h_.item() + 1, size=(1,)).item() for h_ in h]
    ys = [torch.randint(0, img_w - w_.item() + 1, size=(1,)).item() for w_ in w]
    for iter, (x, y) in enumerate(zip(xs, ys)):
        overlap1[0, x:x + h[iter], y:y + w[iter]] += 1
    if (overlap1 > 1).any():
        continue

When you are checking that rectangle2 doesn't overlap with rectangle1, just check their intersection area. If the intersection area is greater than 0, reject rectangle2 and try the next random rectangle. Repeat the process with a new rectangle.
For checking the intersection, use sklearn's Jaccard score (avoid reinventing the wheel). To be able to use Jaccard for comparison, the two arrays (images) must be the same size, so generate masks (mask1 and mask2) of the original image size from rectangle1 and rectangle2 respectively and then calculate the Jaccard score of mask1 and mask2.
import numpy as np
from sklearn.metrics import jaccard_score
mask1 = np.array([[0, 1, 1],
                  [1, 1, 0]])
mask2 = np.array([[1, 1, 1],
                  [1, 0, 0]])
jaccard_score(mask1, mask2, average='macro')
# 0.6666...
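To make the rejection idea concrete, here is a minimal sketch (mine, not the OP's or the answerer's code; names such as random_blocks are illustrative) that samples one rectangle at a time and rejects candidates whose intersection area with an already accepted rectangle is non-zero, which is the same condition as a Jaccard score greater than 0, without building full-size masks:
import torch

def random_blocks(img_h, img_w, mask_percent=0.15, max_blocks=4, max_tries=100):
    """Sample up to max_blocks non-overlapping rectangles covering ~mask_percent of the image."""
    area = img_h * img_w
    block_num = torch.randint(1, max_blocks + 1, (1,)).item()
    sizes = torch.rand(block_num)
    sizes = torch.round(sizes / sizes.sum() * area * mask_percent)
    hs = torch.round((torch.rand(block_num) + 0.5) * sizes.sqrt()).clamp(1, img_h).long()
    ws = torch.round(sizes / hs).clamp(1, img_w).long()

    def intersects(a, b):
        # a, b are (x, y, h, w); a positive intersection area means overlap
        ax, ay, ah, aw = a
        bx, by, bh, bw = b
        return max(0, min(ax + ah, bx + bh) - max(ax, bx)) * \
               max(0, min(ay + aw, by + bw) - max(ay, by)) > 0

    accepted = []
    for h, w in zip(hs.tolist(), ws.tolist()):
        for _ in range(max_tries):
            x = torch.randint(0, img_h - h + 1, (1,)).item()
            y = torch.randint(0, img_w - w + 1, (1,)).item()
            if not any(intersects((x, y, h, w), r) for r in accepted):
                accepted.append((x, y, h, w))
                break
    return accepted

# Example: erase the sampled rectangles from a random image
img = torch.rand(1, 160, 1024)
for x, y, h, w in random_blocks(img.shape[-2], img.shape[-1]):
    img[..., x:x + h, y:y + w] = 0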

Related

Padding scipy affine_transform output to show non-overlapping regions of transformed images

I have source (src) image(s) I wish to align to a destination (dst) image using an Affine Transformation whilst retaining the full extent of both images during alignment (even the non-overlapping areas).
I am already able to calculate the Affine Transformation rotation and offset matrix, which I feed to scipy.ndimage.interpolate.affine_transform to recover the dst-aligned src image.
The problem is that, when the images are not fully overlapping, the resultant image is cropped to only the common footprint of the two images. What I need is the full extent of both images, placed on the same pixel coordinate system. This question is almost a duplicate of this one - and the excellent answer and repository there provide this functionality for OpenCV transformations. I unfortunately need this for scipy's implementation.
Much too late, after repeatedly hitting a brick wall trying to translate the above question's answer to scipy, I came across this issue and subsequently followed it to this question. The latter question did give some insight into the wonderful world of scipy's affine transformation, but I have as yet been unable to crack my particular needs.
The transformations from src to dst can have translations and rotation. I can get translations only working (an example is shown below) and I can get rotations only working (largely hacking around the below and taking inspiration from the use of the reshape argument in scipy.ndimage.interpolation.rotate). However, I am getting thoroughly lost combining the two. I have tried to calculate what should be the correct offset (see this question's answers again), but I can't get it working in all scenarios.
A translation-only working example of the padded affine transformation, which largely follows this repo, explained in this answer:
from scipy.ndimage import rotate, affine_transform
import numpy as np
import matplotlib.pyplot as plt

nblob = 50
shape = (200, 100)
buffered_shape = (300, 200)  # buffer for rotation and translation

def affine_test(angle=0, translate=(0, 0)):
    np.random.seed(42)
    # Maximum translation allowed is half the difference between shape and buffered_shape
    # Generate a buffered_shape-sized base image with random blobs
    base = np.zeros(buffered_shape, dtype=np.float32)
    random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False)
    i = random_locs[:nblob]
    j = random_locs[nblob:]
    for k, (_i, _j) in enumerate(zip(i, j)):
        # Use different values, just to make it easier to distinguish blobs
        base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10
    # Impose a rotation and translation on source
    src = rotate(base, angle, reshape=False, order=1, mode="constant")
    bsc = (np.array(buffered_shape) / 2).astype(int)
    sc = (np.array(shape) / 2).astype(int)
    src = src[
        bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0],
        bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1],
    ]
    # Cut-out destination from the centre of the base image
    dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]]
    src_y, src_x = src.shape

    def get_matrix_offset(centre, angle, scale):
        """Follows OpenCV.getRotationMatrix2D"""
        angle = angle * np.pi / 180
        alpha = scale * np.cos(angle)
        beta = scale * np.sin(angle)
        return (
            np.array([[alpha, beta], [-beta, alpha]]),
            np.array(
                [
                    (1 - alpha) * centre[0] - beta * centre[1],
                    beta * centre[0] + (1 - alpha) * centre[1],
                ]
            ),
        )

    # Obtain the rotation matrix and offset that describes the transformation
    # between src and dst
    matrix, offset = get_matrix_offset(np.array([src_y / 2, src_x / 2]), angle, 1)
    offset = offset - translate
    # Determine the outer bounds of the new image
    lin_pts = np.array([[0, src_x, src_x, 0], [0, 0, src_y, src_y]])
    transf_lin_pts = np.dot(matrix.T, lin_pts) - offset[::-1].reshape(2, 1)
    # Find min and max bounds of the transformed image
    min_x = np.floor(np.min(transf_lin_pts[0])).astype(int)
    min_y = np.floor(np.min(transf_lin_pts[1])).astype(int)
    max_x = np.ceil(np.max(transf_lin_pts[0])).astype(int)
    max_y = np.ceil(np.max(transf_lin_pts[1])).astype(int)
    # Add translation to the transformation matrix to shift to positive values
    anchor_x, anchor_y = 0, 0
    if min_x < 0:
        anchor_x = -min_x
    if min_y < 0:
        anchor_y = -min_y
    shifted_offset = offset - np.dot(matrix, [anchor_y, anchor_x])
    # Create padded destination image
    dst_h, dst_w = dst.shape[:2]
    pad_widths = [anchor_y, max(max_y, dst_h) - dst_h, anchor_x, max(max_x, dst_w) - dst_w]
    dst_padded = np.pad(
        dst,
        ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])),
        "constant",
        constant_values=-1,
    )
    dst_pad_h, dst_pad_w = dst_padded.shape
    # Create the aligned and padded source image
    source_aligned = affine_transform(
        src,
        matrix.T,
        offset=shifted_offset,
        output_shape=(dst_pad_h, dst_pad_w),
        order=3,
        mode="constant",
        cval=-1,
    )
    # Plot the images
    fig, axes = plt.subplots(1, 4, figsize=(10, 5), sharex=True, sharey=True)
    axes[0].imshow(src, cmap="viridis", vmin=-1, vmax=nblob)
    axes[0].set_title("Source")
    axes[1].imshow(dst, cmap="viridis", vmin=-1, vmax=nblob)
    axes[1].set_title("Dest")
    axes[2].imshow(source_aligned, cmap="viridis", vmin=-1, vmax=nblob)
    axes[2].set_title("Source aligned to Dest padded")
    axes[3].imshow(dst_padded, cmap="viridis", vmin=-1, vmax=nblob)
    axes[3].set_title("Dest padded")
    plt.show()
e.g.:
affine_test(0, (-20, 40))
gives:
With a zoom-in showing the alignment in the padded images:
I require the full extent of the src and dst images aligned on the same pixel coordinates, with both rotations and translations.
Any help is greatly appreciated!
Complexity analysis
The problem is to determine three parameters: the rotation angle and the x and y displacements.
Let's suppose that you have a grid for the angle and the x and y displacements, each of size O(n), and that your images are of size O(n x n). Rotation, translation, and comparison of the images each take O(n^2), and since you have O(n^3) candidate transforms to try, you end up with complexity O(n^5); that is probably why you are asking the question.
However, the displacement part can be computed more efficiently by finding the maximum correlation using Fourier transforms. The Fourier transforms can be performed with complexity O(n log n) per axis, and we have to apply them along the two spatial dimensions, so the complete correlation matrix can be computed in O(n^2 log^2 n); we then find its maximum with complexity O(n^2), so the overall time complexity of determining the best alignment is O(n^2 log^2 n). You still want to search for the best angle, though, and since there are O(n) candidate angles the overall complexity of this search is O(n^3 log^2 n). Remember that we are using Python and may have significant overhead, so this complexity only gives us an idea of how difficult the problem will be; I have handled problems like this before, so I start out confident.
Preparing an example
I will start by downloading an image, applying a rotation, and centering the image by padding with zeros.
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

def centralized(a, width, height):
    '''
    Image centralized to the given width and height
    by padding with zeros (black)
    '''
    assert width >= a.shape[0] and height >= a.shape[1]
    ap = np.zeros((width, height) + a.shape[2:], a.dtype)
    ccx = (width - a.shape[0])//2
    ccy = (height - a.shape[1])//2
    ap[ccx:ccx+a.shape[0], ccy:ccy+a.shape[1], ...] = a
    return ap

def image_pair(im, width, height, displacement=(0,0), angle=0):
    '''
    Build a pair of images as numpy arrays from the input image.
    Both images will be padded with zeros (black) and roughly centralized,
    and will have the specified shape.
    Make sure that the width and height chosen are enough
    to fit the rotated image.
    '''
    a = np.array(im)
    a1 = centralized(a, width, height)
    a2 = centralized(ndimage.rotate(a, angle), width, height)
    a2 = np.roll(a2, displacement, axis=(0,1))
    return a1, a2

def random_transform():
    angle = np.random.rand() * 360
    displacement = np.random.randint(-100, 100, 2)
    return displacement, angle

# `im` is the image downloaded above
a1, a2 = image_pair(im, 512, 512, *random_transform())
plt.subplot(121)
plt.imshow(a1)
plt.subplot(122)
plt.imshow(a2)
The displacement search
The first thing is to compute the correlation between the two images:
def compute_correlation(a1, a2):
    A1 = np.fft.rfftn(a1, axes=(0,1))
    A2 = np.fft.rfftn(a2, axes=(0,1))
    C = np.fft.irfftn(np.sum(A1 * np.conj(A2), axis=2))
    return C
Then, let's create an example without rotation and confirm that, with the index of the maximum correlation, we can find the displacement that fits one image to the other.
displacement, _ = random_transform()
a1, a2 = image_pair(im, 521, 512, displacement, angle=0)
C = compute_correlation(a1, a2)
np.unravel_index(np.argmax(C), C.shape), displacement
a3 = np.roll(a2, np.unravel_index(np.argmax(C), C.shape), axis=(0,1))
assert np.all(a3 == a1)
With rotation or interpolation this result may not be exact but it gives the displacement that will give us the closest possible alignment.
Let's put this in a function for future use
def get_aligned(a1, a2, angle):
    a1_rotated = ndimage.rotate(a1, angle, reshape=False)
    C = compute_correlation(a2, a1_rotated)
    found_displacement = np.unravel_index(np.argmax(C), C.shape)
    a1_aligned = np.roll(a1_rotated, found_displacement, axis=(0,1))
    return a1_aligned
Searching for the angle
Now we can proceed in two steps: first we compute the correlation for each angle, then, with the angle that gives the maximum correlation, we find the alignment.
displacement, angle = random_transform()
a1, a2 = image_pair(im, 521, 512, displacement, angle)
C_max = []
C_argmax = []
angle_guesses = np.arange(0, 360, 5)
for angle_guess in angle_guesses:
    a1_rotated = ndimage.rotate(a1, angle_guess, reshape=False)
    C = compute_correlation(a1_rotated, a2)
    i = np.argmax(C)
    v = C.reshape(-1)[i]
    C_max.append(v)
    C_argmax.append(i)
Let's see what the correlation looks like:
plt.plot(angle_guesses, C_max);
Looking at this curve we have a clear winner, even though a sunflower has some rotational symmetry.
Let's apply the transformation to the original image and see how it looks:
a1_aligned = get_aligned(a1, a2, angle_guesses[np.argmax(C_max)])
plt.subplot(121)
plt.imshow(a2)
plt.subplot(122)
plt.imshow(a1_aligned)
Great, I wouldn't have done better than this manually.
I am using a sunflower image for aesthetic reasons, but the procedure is the same for any type of image. I use RGB to show that the image may have one additional dimension, i.e. that it uses a feature vector instead of a scalar feature; you can reshape your data to (width, height, 1) if your feature is a scalar.
Working code below, in case anyone else needs this for scipy's affine transformations:
def affine_test(angle=0, translate=(0, 0), shape=(200, 100), buffered_shape=(300, 200), nblob=50):
    # Maximum translation allowed is half the difference between shape and buffered_shape
    np.random.seed(42)
    # Generate a buffered_shape-sized base image
    base = np.zeros(buffered_shape, dtype=np.float32)
    random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False)
    i = random_locs[:nblob]
    j = random_locs[nblob:]
    for k, (_i, _j) in enumerate(zip(i, j)):
        base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10
    # Impose a rotation and translation on source
    src = rotate(base, angle, reshape=False, order=1, mode="constant")
    bsc = (np.array(buffered_shape) / 2).astype(int)
    sc = (np.array(shape) / 2).astype(int)
    src = src[
        bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0],
        bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1],
    ]
    # Cut-out destination from the centre of the base image
    dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]]
    src_y, src_x = src.shape

    def get_matrix_offset(centre, angle, scale):
        """Follows OpenCV.getRotationMatrix2D"""
        angle_rad = angle * np.pi / 180
        alpha = np.round(scale * np.cos(angle_rad), 8)
        beta = np.round(scale * np.sin(angle_rad), 8)
        return (
            np.array([[alpha, beta], [-beta, alpha]]),
            np.array(
                [
                    (1 - alpha) * centre[0] - beta * centre[1],
                    beta * centre[0] + (1 - alpha) * centre[1],
                ]
            ),
        )

    matrix, offset = get_matrix_offset(
        np.array([((src_y - 1) / 2) - translate[0], ((src_x - 1) / 2) - translate[1]]), angle, 1
    )
    offset += np.array(translate)
    M = np.column_stack((matrix, offset))
    M = np.vstack((M, [0, 0, 1]))
    iM = np.linalg.inv(M)
    imatrix = iM[:2, :2]
    ioffset = iM[:2, 2]
    # Determine the outer bounds of the new image
    lin_pts = np.array([[0, src_y-1, src_y-1, 0], [0, 0, src_x-1, src_x-1]])
    transf_lin_pts = np.dot(matrix, lin_pts) + offset.reshape(2, 1)  # - np.array(translate).reshape(2, 1) # both?
    # Find min and max bounds of the transformed image
    min_x = np.floor(np.min(transf_lin_pts[1])).astype(int)
    min_y = np.floor(np.min(transf_lin_pts[0])).astype(int)
    max_x = np.ceil(np.max(transf_lin_pts[1])).astype(int)
    max_y = np.ceil(np.max(transf_lin_pts[0])).astype(int)
    # Add translation to the transformation matrix to shift to positive values
    anchor_x, anchor_y = 0, 0
    if min_x < 0:
        anchor_x = -min_x
    if min_y < 0:
        anchor_y = -min_y
    dot_anchor = np.dot(imatrix, [anchor_y, anchor_x])
    shifted_offset = ioffset - dot_anchor
    # Create padded destination image
    dst_y, dst_x = dst.shape[:2]
    pad_widths = [anchor_y, max(max_y, dst_y) - dst_y, anchor_x, max(max_x, dst_x) - dst_x]
    dst_padded = np.pad(
        dst,
        ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])),
        "constant",
        constant_values=-10,
    )
    dst_pad_y, dst_pad_x = dst_padded.shape
    # Create the aligned and padded source image
    source_aligned = affine_transform(
        src,
        imatrix,
        offset=shifted_offset,
        output_shape=(dst_pad_y, dst_pad_x),
        order=3,
        mode="constant",
        cval=-10,
    )
E.g. running:
affine_test(angle=-25, translate=(10, -40))
will show:
and zoomed in:
Apologies that the code is not nicely written as-is.
Note that running this in the wild I notice it cannot handle any change in the scale of the images, but I am not certain this isn't something to do with how I calculate the transformation - so it's a caveat worth noting, and checking out, if you are aligning images with different scales.
If you have two images that are similar (or the same) and you want to align them, you can do it using both the rotate and shift functions:
from scipy.ndimage import rotate, shift
First you need to find the angular difference between the two images, angle_to_rotate; having that, you apply a rotation to src:
angle_to_rotate = 25
rotated_src = rotate(src, angle_to_rotate, reshape=True, order=1, mode="constant")
With reshape=True you avoid losing information from your original src matrix, and the result is padded so the image can be translated around the (0,0) indexes. You can calculate this translation as (x*cos(angle), y*sin(angle)), where x and y are the dimensions of the image, but it probably won't matter.
Now you need to translate the rotated image to match the destination; for that you can use the shift function:
rot_translated_src = shift(rotated_src, [distance_x, distance_y])
In this case there is no reshape (because otherwise you wouldn't have any real translation), so if the image was not previously padded some information will be lost.
But you can do some padding with
np.pad(src, number, mode='constant')
To calculate distance_x and distance_y you will need to find a point that serves as a reference between rotated_src and the destination, then just calculate the distance along the x and y axes.
Summary
Pad src and dst.
Find the angular distance between them.
Rotate src with scipy.ndimage.rotate using reshape=True.
Find the horizontal and vertical distances distance_x, distance_y between the rotated image and dst.
Translate your rotated_src with scipy.ndimage.shift.
Code
from scipy.ndimage import rotate, shift
import matplotlib.pyplot as plt
import numpy as np
First we make the destination image:
# make and plot dest
dst = np.ones([40,20])
dst = np.pad(dst,10)
dst[17,[14,24]]=4
dst[27,14:25]=4
dst[26,[14,25]]=4
rotated_dst = rotate(dst, 20, order=1)
plt.imshow(dst) # plot it
plt.imshow(rotated_dst)
plt.show()
We make the Source image:
# make_src image and plot it
src = np.zeros([40,20])
src = np.pad(src,10)
src[0:20,0:20]=1
src[7,[4,14]]=4
src[17,4:15]=4
src[16,[4,15]]=4
plt.imshow(src)
plt.show()
Then we align the src to the destination:
rotated_src = rotate(src, 20, order=1) # rotate by the found angle (20 here); reshape=True is the default
plt.imshow(rotated_src)
plt.show()
distance_y = 8 # find this distances from rotated_src and dst
distance_x = 12 # use any visual reference or even the corners
translated_src = shift(rotated_src, [distance_y,distance_x])
plt.imshow(translated_src)
plt.show()
PS: If you find it hard to determine the angle and the distances programmatically, please leave a comment providing a bit more insight into what could be used as a reference (that could be, for example, the frame of the image or some image features/data).
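As a rough sketch of how the distances could be found programmatically (this is my addition, not part of the answer above; it reuses the FFT cross-correlation idea from the earlier answer and assumes rotated_src and dst have been padded or cropped to the same 2-D shape):
import numpy as np

def estimate_shift(a, b):
    # Peak of the circular cross-correlation gives the roll that best maps a onto b.
    # Note: the result is modulo the array shape (a negative shift shows up as a large positive one).
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    return np.unravel_index(np.argmax(corr), corr.shape)

dy, dx = estimate_shift(rotated_src, dst)
# With enough zero padding this circular roll matches the hand-picked shift above
aligned_src = np.roll(rotated_src, (dy, dx), axis=(0, 1))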

Implementing a bilateral filter

I am trying to implement a bilateral filter from the paper Fast Bilateral Filtering for the Display of High-Dynamic-Range Images. The equation (from the paper) that implements the bilateral filter is given as:
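(The equation image is not reproduced here. For reference, the bilateral filter it describes is, roughly,

$$J_s = \frac{1}{k(s)} \sum_{p \in \Omega} f(p - s)\, g(I_p - I_s)\, I_p, \qquad k(s) = \sum_{p \in \Omega} f(p - s)\, g(I_p - I_s),$$

where the first expression is the filtered output and the second is the normalisation term.)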
According to what I understood,
f is a Gaussian filter
g is a Gaussian filter
p is a pixel in a given image window
s is the current pixel
Ip is the intensity at the current pixel
With this, I wrote the following code to implement these equations:
import cv2
import numpy as np

img = cv2.imread("fish.png")
# image of width 239 and height 200
bl_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
i = cv2.magnitude(
    cv2.Sobel(bl_img, cv2.CV_64F, 1, 0, ksize=3),
    cv2.Sobel(bl_img, cv2.CV_64F, 0, 1, ksize=3)
)
f = cv2.getGaussianKernel(5, 0.1, cv2.CV_64F)
g = cv2.getGaussianKernel(5, 0.1, cv2.CV_64F)
rows, cols, _ = img.shape
filtered = np.zeros(img.shape, dtype=img.dtype)
for r in range(rows):
    for c in range(cols):
        ks = []
        for index in [-2, -1, 1, 2]:
            if index + c > 0 and index + c < cols - 1:
                p = img[r][index + c]
                s = img[r][c]
                i_p = i[index + c]
                i_s = i[c]
                ks.append(
                    (f * (p - s)) * (g * (i_p * i_s))  # EQUATION 7
                )
        ks = np.sum(np.array(ks))
        js = []
        for index in [-2, -1, 1, 2]:
            if index + c > 0 and index + c < cols - 1:
                p = img[r][index + c]
                s = img[r][c]
                i_p = i[index + c]
                i_s = i[c]
                js.append((f * (p - s)) * (g * (i_p * i_s)) * i_p)  # EQUATION 6
        js = np.sum(np.asarray(js))
        js = js / ks
        filtered[r][c] = js
cv2.imwrite("f.png", filtered)
But as I run this code I get an error saying:
Traceback (most recent call last):
File "bft.py", line 33, in <module>
(f * (p-s)) * (g * (i_p * i_s))
ValueError: operands could not be broadcast together with shapes (5,3) (5,239)
Did I incorrectly implement the equations? What am I missing?
There are various issues with your code. Foremost, the equation is interpreted in the wrong way. f(p-s) means evaluating the function f at p-s; f is the Gaussian, and likewise g. That section of the code would look like this:
weight = gaussian(p - s, sigma_f) * gaussian(i_p - i_s, sigma_g)
ks.append(weight)
js.append(weight * i_p)
Note that the two loops can be merged, this way you avoid some duplicated computation. gaussian(x, sigma) would be a function that computes the Gaussian weight at x. You need to define two sigmas, sigma_f and sigma_g, the spatial and the tonal sigma respectively.
The second issue is in the definition of p and s. These are the coordinates of the pixel, not the value of the image at the pixel. i_p and i_s are the value of the image at those locations. p-s is basically the spatial distance between the pixel at (r,c) and the given neighbor.
The third issue is the loop over the neighborhood. The neighborhood is all pixels where gaussian(p - s, sigma_f) is not negligible. So how large the neighborhood is depends on the chosen sigma_f. You should take it at least to be ceil(2*sigma_f). Say sigma_f is 2, then you want the neighborhood to go from -4 to 4 (9 pixels). But this neighborhood is two dimensional, not one-dimensional as in your code. So you need two loops:
for ii in range(-ceil(2*sigma_f), ceil(2*sigma_f)+1):
    if ii + c > 0 and ii + c < cols-1:
        for jj in range(-ceil(2*sigma_f), ceil(2*sigma_f)+1):
            if jj + r > 0 and jj + r < rows-1:
                # compute weight here
Note that now, p-s is computed with math.sqrt(ii**2 + jj**2). But also note that the Gaussian uses x**2, so you could skip the computation of the square root by passing x**2 into your gaussian function.
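Putting these points together, here is a minimal sketch of what the corrected filter could look like (my illustration, not the answerer's finished code; sigma_f, sigma_g, the neighbourhood radius of ceil(2*sigma_f), and the file names are assumptions):
import math
import numpy as np
import cv2

def gaussian(x2, sigma):
    # x2 is already the squared distance, so no square root is needed
    return np.exp(-x2 / (2.0 * sigma ** 2))

def bilateral(img, sigma_f=2.0, sigma_g=25.0):
    img = img.astype(np.float64)
    rows, cols = img.shape
    radius = math.ceil(2 * sigma_f)
    out = np.zeros_like(img)
    for r in range(rows):
        for c in range(cols):
            ks = 0.0   # normalisation term (equation 7)
            js = 0.0   # weighted sum of intensities (equation 6)
            i_s = img[r, c]
            for jj in range(-radius, radius + 1):
                for ii in range(-radius, radius + 1):
                    rr, cc = r + jj, c + ii
                    if 0 <= rr < rows and 0 <= cc < cols:
                        i_p = img[rr, cc]
                        w = gaussian(ii**2 + jj**2, sigma_f) * gaussian((i_p - i_s)**2, sigma_g)
                        ks += w
                        js += w * i_p
            out[r, c] = js / ks
    return out

gray = cv2.cvtColor(cv2.imread("fish.png"), cv2.COLOR_BGR2GRAY)
cv2.imwrite("f.png", bilateral(gray).astype(np.uint8))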

Lucas-Kanade image alignment algorithm implementation not converging?

I've recently been attempting to implement the Lucas-Kanade algorithm for image alignment, as detailed in this paper here: https://www.ri.cmu.edu/pub_files/pub3/baker_simon_2004_1/baker_simon_2004_1.pdf
I've managed to implement the algorithm detailed in page 4 of the paper I linked, but the loss doesn't seem to converge. I've been looking over my code and my math, and can't seem to figure out where I might be going wrong.
What I've tried so far is implementing the entire algorithm, and re-doing my math for calculating the Jacobian of the warp, as well as just general checking of my code.
My code is below, as well as a more readable version on Pastebin: https://pastebin.com/j28mUV65
import cv2
import numpy as np
import matplotlib.pyplot as plt

def calculate_steepest_descent(grad_x_warped, grad_y_warped, h):
    rows, columns = grad_x_warped.shape
    steepest_descent = np.zeros((rows, columns, 8))
    warp_jacobian = np.zeros((2, 8))  # 2 x 8 because it's a homography, would be 2 x 6 if it was affine
    current_gradient = np.zeros((1, 2))
    # Convert homography matrix into parameter array for better readability with the math functions later
    p = h.flatten()
    for y in range(rows):
        for x in range(columns):
            # Calculate Jacobian of the warp at each pixel, which contains the partial derivatives of the
            # warp parameters with respect to x and y coordinates, evaluated at the current value
            # of parameters
            common_denominator = (p[6]*x + p[7]*y + 1)
            warp_jacobian[0, 0] = (x) / common_denominator
            warp_jacobian[0, 1] = (y) / common_denominator
            warp_jacobian[0, 2] = (1) / common_denominator
            warp_jacobian[0, 3] = 0
            warp_jacobian[0, 4] = 0
            warp_jacobian[0, 5] = 0
            warp_jacobian[0, 6] = (-(p[0]*(x**2) + p[1]*x*y + p[2]*x)) / (common_denominator ** 2)
            warp_jacobian[0, 7] = (-(p[1]*(y**2) + p[0]*x*y + p[2]*y)) / (common_denominator ** 2)
            warp_jacobian[1, 0] = 0
            warp_jacobian[1, 1] = 0
            warp_jacobian[1, 2] = 0
            warp_jacobian[1, 3] = (x) / common_denominator
            warp_jacobian[1, 4] = (y) / common_denominator
            warp_jacobian[1, 5] = (1) / common_denominator
            warp_jacobian[1, 6] = (-(p[3]*(x**2) + p[4]*x*y + p[5]*x)) / (common_denominator ** 2)
            warp_jacobian[1, 7] = (-(p[4]*(y**2) + p[3]*x*y + p[5]*y)) / (common_denominator ** 2)
            # Get the x and y gradient intensity values corresponding to the current pixel location
            current_gradient[0, 0] = grad_x_warped[y, x]
            current_gradient[0, 1] = grad_y_warped[y, x]
            # Calculate full Jacobian (aka steepest descent image) at current pixel value
            steepest_descent[y, x, :] = np.dot(current_gradient, warp_jacobian)
    return steepest_descent

def calculate_hessian(steepest_descent):
    rows, columns, channels = steepest_descent.shape
    hessian = np.zeros((channels, channels))
    for y in range(rows):
        for x in range(columns):
            steepest_descent_single = steepest_descent[y, x, :][np.newaxis, :]
            steepest_descent_single_transpose = np.transpose(steepest_descent_single)
            hessian_current = np.dot(steepest_descent_single_transpose, steepest_descent_single)
            hessian += hessian_current
    return hessian

def calculate_sd_param_updates(steepest_descent, img_error):
    rows, columns, channels = steepest_descent.shape
    sd_param_updates = np.zeros((8, 1))
    for y in range(rows):
        for x in range(columns):
            steepest_descent_single = steepest_descent[y, x, :][np.newaxis, :]
            steepest_descent_single_transpose = np.transpose(steepest_descent_single)
            img_error_single = img_error[y, x]
            sd_param_updates += np.dot(steepest_descent_single_transpose, img_error_single)
    return sd_param_updates

def calculate_final_param_updates(sd_param_updates, hessian):
    hessian_inverse = np.linalg.inv(hessian)
    final_param_updates = np.dot(hessian_inverse, sd_param_updates)
    return final_param_updates

if __name__ == "__main__":
    # Load image
    reference = cv2.imread('test.png')
    reference = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    # Generate template as small block from within reference image using homography
    # 'h' is the ground truth homography for warping reference image onto template image
    template_size = (100, 100)
    h = np.float32([[1, 0, -100], [0, 1, -100], [0, 0, 1]])
    h_ground_truth = h.copy()
    template = cv2.warpPerspective(reference, h, template_size)
    # Convert template corner points to reference image coordinate plane
    template_corners = np.array([[0, 0], [0, 100], [100, 100], [100, 0]])
    h_inverse = np.linalg.inv(h)
    reference_corners = cv2.perspectiveTransform(np.array([template_corners], dtype='float32'), h_inverse)
    # Small perturbation to ground truth homography
    h_mod = np.random.uniform(low=-1.0, high=1.0, size=(h.shape))
    h_mod = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
    h_mod[0, 0] = h_mod[0, 0] * 0
    h_mod[0, 1] = -h_mod[0, 1] * 0
    h_mod[0, 2] = h_mod[0, 2] * 10
    h_mod[1, 0] = h_mod[1, 0] * 0
    h_mod[1, 1] = h_mod[1, 1] * 0
    h_mod[1, 2] = h_mod[1, 2] * 10
    h_mod[2, 0] = h_mod[2, 0] * 0
    h_mod[2, 1] = h_mod[2, 1] * 0
    h_mod[2, 2] = h_mod[2, 1] * 0
    h = h + h_mod
    # Warp reference image to template image based on initial perturbed homography
    reference_transformed = cv2.warpPerspective(reference, h, template_size)
    # ##############################
    # Lucas-Kanade algorithm below
    # This is supposed to calculate the homography that undoes the small perturbation
    # and returns a homography as close as possible to the ground truth homography
    # ##############################
    # Precompute image gradients
    grad_x = cv2.Sobel(reference, cv2.CV_64F, 1, 0, ksize=1)
    grad_y = cv2.Sobel(reference, cv2.CV_64F, 0, 1, ksize=1)
    # Loop algorithm for given # of steps
    for i in range(1000):
        # Step 1
        # Warp reference image onto coordinate frame of template
        reference_transformed = cv2.warpPerspective(reference, h, template_size)
        # Step 2
        # Compute error image
        img_error = template - reference_transformed
        # fig_overlay = plt.figure()
        # ax1 = fig_overlay.add_subplot(1,3,1)
        # plt.imshow(img_warped)
        # ax2 = fig_overlay.add_subplot(1,3,2)
        # plt.imshow(template)
        # ax3 = fig_overlay.add_subplot(1,3,3)
        # plt.imshow(img_error)
        # plt.show()
        # Step 3
        # Warp the gradients
        grad_x_warped = cv2.warpPerspective(grad_x, h, template_size)
        grad_y_warped = cv2.warpPerspective(grad_y, h, template_size)
        # Step 4 & 5
        # Use Jacobian of warp to calculate steepest descent images
        steepest_descent = calculate_steepest_descent(grad_x_warped, grad_y_warped, h)
        # fig_overlay = plt.figure()
        # ax1 = fig_overlay.add_subplot(1,8,1)
        # plt.imshow(steepest_descent[:, :, 0])
        # ax2 = fig_overlay.add_subplot(1,8,2)
        # plt.imshow(steepest_descent[:, :, 1])
        # ax3 = fig_overlay.add_subplot(1,8,3)
        # plt.imshow(steepest_descent[:, :, 2])
        # ax4 = fig_overlay.add_subplot(1,8,4)
        # plt.imshow(steepest_descent[:, :, 3])
        # ax5 = fig_overlay.add_subplot(1,8,5)
        # plt.imshow(steepest_descent[:, :, 4])
        # ax6 = fig_overlay.add_subplot(1,8,6)
        # plt.imshow(steepest_descent[:, :, 5])
        # ax7 = fig_overlay.add_subplot(1,8,7)
        # plt.imshow(steepest_descent[:, :, 6])
        # ax8 = fig_overlay.add_subplot(1,8,8)
        # plt.imshow(steepest_descent[:, :, 7])
        # plt.show()
        # Step 6
        # Compute Hessian matrix
        hessian = calculate_hessian(steepest_descent)
        # Step 7
        # Compute steepest descent parameter updates by
        # dot producting error image with steepest descent images
        sd_param_updates = calculate_sd_param_updates(steepest_descent, img_error)
        # Step 8
        # Compute final parameter updates
        final_param_updates = calculate_final_param_updates(sd_param_updates, hessian)
        # Step 9
        # Update the parameters
        h = h.reshape(-1, 1)
        h[:-1] += final_param_updates
        h = h.reshape(3, 3)
        # Step 10
        # Calculate norm of parameter updates
        final_param_update_norm = np.linalg.norm(final_param_updates)
        print("Final Param Norm: {}".format(final_param_update_norm))
        reference_transformed = cv2.warpPerspective(reference, h, template_size)
        cv2.imwrite('warps/warp_{}.png'.format(i), reference_transformed)
    # Warp source image to destination based on homography
    reference_transformed = cv2.warpPerspective(reference, h, template_size)
    cv2.imwrite('final_warp.png', reference_transformed)
It should just need a reference image to test with.
The expected result is that the algorithm converges to a homography that matches the ground truth homography I calculate in the code, but the loss just seems to explode instead and I end up with a totally incorrect homography.
This should be a comment, since I am not certain it is the full cause of your problem, but it might be part of it.
To solve a system of linear equations, don't compute the inverse
hessian_inverse = np.linalg.inv(hessian)
and then multiply by it
final_param_updates = np.dot(hessian_inverse, sd_param_updates)
This is both wasteful and can cause more numerical instability than solving systems of linear equations normally has.
Instead, use np.linalg.solve.
Computing the inverse repeats, once per column of the identity matrix, some of the operations needed by solve; none of those extra operations is needed.
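Concretely, the replacement is a one-liner (a sketch using the variable names from the question's code):
# Solves hessian @ x = sd_param_updates without forming the inverse
final_param_updates = np.linalg.solve(hessian, sd_param_updates)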

Tensorflow - Get Neighborhood of Pixel

I'm trying to implement the loss function of the classic Image Colorization paper by Levin et al (2004) in Tensorflow/Keras:
This is the weights equation (correlation between intensities):
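(The equation image is not reproduced here. Based on the paper and the answer code below, the weight is roughly

$$w_{rs} \propto 1 + \frac{1}{\sigma_r^2}\,(Y(r) - \mu_r)(Y(s) - \mu_r),$$

where \mu_r and \sigma_r^2 are the mean and variance of the intensities in the window around r.)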
y is every neighboring pixel of x in a 3x3 window and w is the weight for each of these pixels.
The weights require computing the mean and variance for the neighborhood of every pixel.
I couldn't find a function that would allow me to write this loss function symbolically, and I'm thinking I should write it in a loop where I calculate w for each window.
How can I write this loss function in TensorFlow, either symbolically or with loops?
Thanks so much.
EDIT: Here's the code I've come up with for calculating the weights in NumPy:
import cv2
import numpy as np
im = cv2.resize(cv2.imread('./Image.jpg', 0), (256, 256)) / np.float32(255.0)
M = 3
N = 3
# Split the image into 3x3 windows
windows = [im[x:x + M, y:y + N] for x in range(0, im.shape[0], M) for y in range(0, im.shape[1], N)]
# Calculate the correlation for each window
weights = [1 + np.corrcoef(tile) for tile in windows]
I think this code computes the value in your formula:
import tensorflow as tf
from itertools import product

SIGMA = 1.0
dtype = tf.float32
# Input images batch
img = tf.placeholder(dtype, [None, None, None])
img_shape = tf.shape(img)
img_height = img_shape[1]
img_width = img_shape[2]
# Compute 3 x 3 block means
mean_filter = tf.ones((3, 3), dtype) / 9
img_mean = tf.nn.conv2d(img[:, :, :, tf.newaxis],
                        mean_filter[:, :, tf.newaxis, tf.newaxis],
                        [1, 1, 1, 1], 'VALID')[:, :, :, 0]
# Remove 1px border
img_clip = img[:, 1:-1, 1:-1]
# Difference between pixel intensity and its block mean
x_diff = img_clip - img_mean
# Compute neighboring pixel loss contributions
contributions = []
for i, j in product((-1, 0, 1), repeat=2):
    if i == j == 0: continue
    # Take "shifted" image
    displaced_img = img[:, 1 + i:img_width - 1 + i, 1 + j:img_height - 1 + j]
    # Compute difference with mean of corresponding pixel block
    y_diff = displaced_img - img_mean
    # Weights formula
    weight = 1 + x_diff * y_diff / (SIGMA ** 2)
    # Contribution of this displaced image to the loss of each pixel
    contribution = weight * displaced_img
    contributions.append(contribution)
contributions = tf.add_n(contributions)
# Compute loss value
loss = tf.reduce_sum(tf.squared_difference(img_clip, contributions))
The loss for the pixels along the image border is not computed, since in principle it is not well defined in the formula, although you could make a few changes to take them into account if you want (change the convolution padding to 'SAME', pad where necessary, etc.).
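For completeness, a minimal usage sketch (my addition, not part of the answer; it assumes TF 1.x graph mode, the img placeholder and loss tensor defined above, and a dummy batch of square grayscale images in [0, 1]):
import numpy as np

with tf.Session() as sess:
    # Dummy batch of two 64 x 64 grayscale images
    batch = np.random.rand(2, 64, 64).astype(np.float32)
    loss_value = sess.run(loss, feed_dict={img: batch})
    print(loss_value)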
This is a mean squared error over 3 x 3 windows, right?
Sounds like a GLCM matrix for texture analysis. Do you want to apply this loss function to every 3x3 window in the image?
I think it is better to first build a function that makes this calculation with random weights in NumPy, and afterwards try to build it with TF and optimize it.

Sliding window in Python for GLCM calculation

I am trying to do texture analysis on satellite imagery using the GLCM algorithm. The scikit-image documentation is very helpful for that, but the GLCM calculation needs a window looping over the image, and this is too slow in Python. I found many posts on Stack Overflow about sliding windows, but the computation takes forever. I have an example shown below; it works, but takes forever. I guess this must be a naive way of doing it:
image = np.pad(image, int(win/2), mode='reflect')
row, cols = image.shape
feature_map = np.zeros((M, N))
for m in xrange(0, row):
    for n in xrange(0, cols):
        window = image[m:m+win, n:n+win]
        glcm = greycomatrix(window, d, theta, levels)
        contrast = greycoprops(glcm, 'contrast')
        feature_map[m, n] = contrast
I came across with skimage.util.view_as_windows method which might be good solution for me. My problem is that, when I try to calculate the GLCM I get an error which says:
ValueError: The parameter image must be a 2-dimensional array
This is because the result of view_as_windows has 4 dimensions, while greycomatrix accepts only 2-dimensional arrays. Here is my attempt:
win_w=40
win_h=40
features = np.zeros(image.shape, dtype='uint8')
target = features[win_h//2:-win_h//2+1, win_w//2:-win_w//2+1]
windowed = view_as_windows(image, (win_h, win_w))
GLCM = greycomatrix(windowed, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4], symmetric=True, normed=True)
haralick = greycoprops(GLCM, 'ASM')
Does anyone have an idea on how I can calculate the GLCM using skimage.util.view_as_windows method?
The feature extraction you are trying to perform is a compute-intensive task. I have sped up your method by computing the co-occurrence map only once for the whole image, rather than computing it over and over on overlapping positions of the sliding window.
The co-occurrence map is a stack of images of the same size as the original image, in which - for each pixel - intensity levels are replaced by integer numbers that encode the co-occurrence of two intensities, namely Ii at that pixel and Ij at an offset pixel. The co-occurrence map has as many layers as offsets considered (i.e. all the possible distance-angle pairs). By retaining the co-occurrence map you don't need to compute the GLCM at each position of the sliding window from scratch, as you can reuse the previously computed co-occurrence maps to obtain the adjacency matrices (the GLCMs) for each distance-angle pair. This approach provides a significant speed gain.
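As a toy illustration of that encoding idea (my own sketch, not part of the solution code below): pairs (i, j) at offset (0, 1) are packed as i*levels + j, and counting the packed codes reproduces the (unnormalised, non-symmetric) GLCM.
import numpy as np
from skimage.feature import greycomatrix

levels = 4
img = np.array([[0, 0, 1],
                [1, 2, 3],
                [3, 2, 0]], dtype=np.uint8)

codes = img[:, :-1].astype(int) * levels + img[:, 1:]  # co-occurrence "map" for offset (0, 1)
glcm = np.zeros((levels, levels), dtype=int)
vals, counts = np.unique(codes, return_counts=True)
glcm[vals // levels, vals % levels] = counts

assert (glcm == greycomatrix(img, [1], [0], levels=levels)[:, :, 0, 0]).all()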
The solution I came up with relies on the functions below:
import numpy as np
from skimage import io
from scipy import stats
from skimage.feature import greycoprops

def offset(length, angle):
    """Return the offset in pixels for a given length and angle"""
    dv = length * np.sign(-np.sin(angle)).astype(np.int32)
    dh = length * np.sign(np.cos(angle)).astype(np.int32)
    return dv, dh

def crop(img, center, win):
    """Return a square crop of img centered at center (side = 2*win + 1)"""
    row, col = center
    side = 2*win + 1
    first_row = row - win
    first_col = col - win
    last_row = first_row + side
    last_col = first_col + side
    return img[first_row: last_row, first_col: last_col]

def cooc_maps(img, center, win, d=[1], theta=[0], levels=256):
    """
    Return a set of co-occurrence maps for different d and theta in a square
    crop centered at center (side = 2*w + 1)
    """
    shape = (2*win + 1, 2*win + 1, len(d), len(theta))
    cooc = np.zeros(shape=shape, dtype=np.int32)
    row, col = center
    Ii = crop(img, (row, col), win)
    for d_index, length in enumerate(d):
        for a_index, angle in enumerate(theta):
            dv, dh = offset(length, angle)
            Ij = crop(img, center=(row + dv, col + dh), win=win)
            cooc[:, :, d_index, a_index] = encode_cooccurrence(Ii, Ij, levels)
    return cooc

def encode_cooccurrence(x, y, levels=256):
    """Return the code corresponding to co-occurrence of intensities x and y"""
    return x*levels + y

def decode_cooccurrence(code, levels=256):
    """Return the intensities x, y corresponding to code"""
    return code//levels, np.mod(code, levels)

def compute_glcms(cooccurrence_maps, levels=256):
    """Compute the cooccurrence frequencies of the cooccurrence maps"""
    Nr, Na = cooccurrence_maps.shape[2:]
    glcms = np.zeros(shape=(levels, levels, Nr, Na), dtype=np.float64)
    for r in range(Nr):
        for a in range(Na):
            # NOTE: scipy.stats.itemfreq was removed in SciPy 1.3;
            # np.unique(..., return_counts=True) is the modern equivalent
            table = stats.itemfreq(cooccurrence_maps[:, :, r, a])
            codes = table[:, 0]
            freqs = table[:, 1]/float(table[:, 1].sum())
            i, j = decode_cooccurrence(codes, levels=levels)
            glcms[i, j, r, a] = freqs
    return glcms

def compute_props(glcms, props=('contrast',)):
    """Return a feature vector corresponding to a set of GLCM"""
    Nr, Na = glcms.shape[2:]
    features = np.zeros(shape=(Nr, Na, len(props)))
    for index, prop_name in enumerate(props):
        features[:, :, index] = greycoprops(glcms, prop_name)
    return features.ravel()

def haralick_features(img, win, d, theta, levels, props):
    """Return a map of Haralick features (one feature vector per pixel)"""
    rows, cols = img.shape
    margin = win + max(d)
    arr = np.pad(img, margin, mode='reflect')
    n_features = len(d) * len(theta) * len(props)
    feature_map = np.zeros(shape=(rows, cols, n_features), dtype=np.float64)
    for m in xrange(rows):
        for n in xrange(cols):
            coocs = cooc_maps(arr, (m + margin, n + margin), win, d, theta, levels)
            glcms = compute_glcms(coocs, levels)
            feature_map[m, n, :] = compute_props(glcms, props)
    return feature_map
DEMO
The following results correspond to a (250, 200)-pixel crop from a Landsat image. I have considered two distances, four angles, and two GLCM properties, which results in a 16-dimensional feature vector for each pixel. Notice that the sliding window is square and its side is 2*win + 1 pixels (in this test a value of win = 19 was used). This sample run took around 6 minutes, which is considerably shorter than "forever" ;-)
In [331]: img.shape
Out[331]: (250L, 200L)
In [332]: img.dtype
Out[332]: dtype('uint8')
In [333]: d = (1, 2)
In [334]: theta = (0, np.pi/4, np.pi/2, 3*np.pi/4)
In [335]: props = ('contrast', 'homogeneity')
In [336]: levels = 256
In [337]: win = 19
In [338]: %time feature_map = haralick_features(img, win, d, theta, levels, props)
Wall time: 5min 53s
In [339]: feature_map.shape
Out[339]: (250L, 200L, 16L)
In [340]: feature_map[0, 0, :]
Out[340]:
array([ 10.3314, 0.3477, 25.1499, 0.2738, 25.1499, 0.2738,
25.1499, 0.2738, 23.5043, 0.2755, 43.5523, 0.1882,
43.5523, 0.1882, 43.5523, 0.1882])
In [341]: io.imshow(img)
Out[341]: <matplotlib.image.AxesImage at 0xce4d160>
