Map corresponding points between Delaunay triangles - python

I'm trying to morph two images of faces using an inverse warp. I have the Delaunay triangles for both images as well as all transformation matrices for all pairs of corresponding triangles.
I have applied the matrix to every pixel inside the triangles, but the image I am getting is all messed up and some pixels aren't being filled in either.
I suspect the vertex lists are not in order, which means the triangles don't correspond. Or it could just be me messing up the row/column order.
Here's my code:
from scipy.spatial import Delaunay
from skimage.draw import polygon
import numpy as np
import cv2

def drawDelaunay(img, landmarks, color):
    tri = Delaunay(landmarks)
    vertices = []
    for t in landmarks[tri.simplices]:
        # t = [int(i) for i in t]
        pt1 = [t[0][0], t[0][1]]
        pt2 = [t[1][0], t[1][1]]
        pt3 = [t[2][0], t[2][1]]
        cv2.line(img, pt1, pt2, color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt2, pt3, color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt3, pt1, color, 1, cv2.LINE_AA, 0)
        vertices.append([pt1, pt2, pt3])
    return img, vertices

def getAffineMat(triangle1, triangle2):
    x = np.transpose(np.matrix([*triangle1]))
    y = np.transpose(np.matrix([*triangle2]))
    # Add ones to the bottom of x and y
    x = np.vstack((x, [1, 1, 1]))
    y = np.vstack((y, [1, 1, 1]))
    xInv = np.linalg.pinv(x)
    return np.dot(y, xInv)
srcImg = face2
srcRows, srcCols, srcDepth = face2.shape
destImg = np.zeros(face1.shape, dtype=np.uint8)

for triangle1, triangle2 in zip(vertices1, vertices2):
    transMat = getAffineMat(triangle1, triangle2)
    r, c = list(map(list, zip(*triangle2)))
    rr, cc = polygon(r, c)
    for row, col in zip(rr, cc):
        transformed = np.dot(transMat, [col, row, 1])
        srcX, srcY, *_ = np.array(transformed.T)
        # Check if pixel is within image boundaries
        if isWithinBounds(srcCols, srcRows, col, row):
            # Interpolate the color of the pixel from the four nearest pixels
            color = bilinearInterpolation(srcImg, srcX, srcY)
            # Set the color of the current pixel in the destination image
            destImg[row, col] = color
I wish to implement this without getAffineTransform or warpAffine. Any help would be much appreciated!
Sources:
Transfer coordinates from one triangle to another triangle
https://devendrapratapyadav.github.io/FaceMorphing/

But you don't have corresponding triangles! This looks like two separate Delaunay triangulations. They may have been built on matching points, but there are still no matching triangles. You can't do two Delaunay triangulations, one in each image, and expect them to match. You need one Delaunay triangulation, and then use the same edges on both sides (so, for at least one side, the triangulation will not be exactly Delaunay).
Look for example at the top-right corner of your images.
On one side you have 4 outgoing edges (counting those we can't see because they coincide with the image border, but they have to be there); on the other you have 6 outgoing edges.
The number of edges connected to two matching vertices is supposed to be the same (otherwise, how could you warp anything?).
So, clearly, I think you took two sets of matching points and then performed two Delaunay triangulations, one on each set, expecting to be able to match the triangles, even though they are not at all the same triangles. (You did not provide the code for that step, since you postulate that the triangulation is correct, while I am pretty sure it is the triangulation that is wrong, so I can only surmise.)
Edit: how to transform
(in reply to your question in comment)
It's the same triangulation. You have a list of points p₁, p₂, p₃, ..., pₙ in the first image and a matching list of points q₁, q₂, q₃, ..., qₙ in the second image. You perform a triangulation in the 1st image only. Its output should be a list of index triplets, such as (1,3,4), (1,2,3), ..., meaning that the optimal triangulation of the 1st image is the one made of triangles (p₁,p₃,p₄), (p₁,p₂,p₃), ...
And in the second image, you use the triangulation (q₁,q₃,q₄), (q₁,q₂,q₃), ...
Even if it is not the optimal triangulation of q₁,q₂,...,qₙ (the one that maximizes the smallest angle), it should not be far off, as long as q₁,q₂,...,qₙ are not that different from p₁,p₂,...,pₙ (which they are not supposed to be, if you matched both images consistently).
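As a minimal sketch of that idea (the helper name matching_triangles and the array shapes are my assumptions, not from the question): triangulate only the first set of landmarks, then index both landmark arrays with the same simplices.

import numpy as np
from scipy.spatial import Delaunay

def matching_triangles(points1, points2):
    # points1, points2: (n, 2) arrays of matching landmarks
    tri = Delaunay(points1)              # triangulate image 1 only
    triangles1 = points1[tri.simplices]  # (m, 3, 2) triangles in image 1
    triangles2 = points2[tri.simplices]  # same vertex indices reused in image 2
    return triangles1, triangles2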
So the transformation matrices are the ones mapping coordinates between matching triangles (there is one transformation for each pair of matching triangles).
To decide which point (x',y') of the second image matches point (x,y) of the first image, you need to:
1. identify in which triangle (i,j,k) (that is, (pᵢ,pⱼ,pₖ)) the point (x,y) lies;
2. find the barycentric coordinates of (x,y) inside this triangle: (x,y) = αpᵢ + βpⱼ + γpₖ;
3. assume that (x',y') has the same barycentric coordinates inside the matching triangle, that is, (x',y') = αqᵢ + βqⱼ + γqₖ.
The transformation matrix for triangle (i,j,k) is the one going from (x,y) to (x',y'); a Python sketch follows below.
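A rough sketch of that barycentric mapping (the function and variable names are mine, not from the answer; the triangles are assumed to be (3, 2) arrays):

import numpy as np

def map_point(p, tri_src, tri_dst):
    # p: (x, y) in the first image; tri_src, tri_dst: (3, 2) matching triangle vertices
    a, b, c = tri_src
    # solve p = alpha*a + beta*b + gamma*c with alpha + beta + gamma = 1
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    alpha, beta = np.linalg.solve(T, np.asarray(p, dtype=float) - c)
    gamma = 1.0 - alpha - beta
    # the matching point has the same barycentric coordinates in the second triangle
    return alpha * tri_dst[0] + beta * tri_dst[1] + gamma * tri_dst[2]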

Related

How to find the xmin, xmax, ymin, ymax of a mask

I have a mask drawn over an apple using segmentation. The mask layer has 1's where the pixel is part of the apple and 0's everywhere else. How do I find the extreme pixels in the mask to get the bounding box coordinates around this mask? I am using PyTorch and YOLACT Edge to perform the segmentation, as shown in Yolact
A relevant Stack Overflow answer with a nice explanation.
TL;DR
Proposed code snippets (second is faster):
import numpy as np

def bbox1(img):
    a = np.where(img != 0)
    bbox = np.min(a[0]), np.max(a[0]), np.min(a[1]), np.max(a[1])
    return bbox

def bbox2(img):
    rows = np.any(img, axis=1)
    cols = np.any(img, axis=0)
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return rmin, rmax, cmin, cmax
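A quick usage check (my own toy mask, not from the question):

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:8] = True              # a small rectangular "instance"
print(bbox2(mask))                 # -> (2, 4, 3, 7): rmin, rmax, cmin, cmax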
But in the more general case (e.g. if you have more than one "instance" in the image and each mask is separated from the others) it may be worth considering OpenCV.
Specifically cv2.connectedComponentsWithStats.
Some brilliant description of this function can be found in another relevant answer.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
Labels is a matrix the size of the input image where each element has a value equal to its label.
Stats is a matrix of the stats that the function calculates. It has a length equal to the number of labels and a width equal to the number of stats. From the OpenCV documentation:
Statistics output for each label, including the background label, see
below for available statistics. Statistics are accessed via
stats[label, COLUMN] where available columns are defined below.
cv2.CC_STAT_LEFT The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction.
cv2.CC_STAT_TOP The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction.
cv2.CC_STAT_WIDTH The horizontal size of the bounding box
cv2.CC_STAT_HEIGHT The vertical size of the bounding box
cv2.CC_STAT_AREA The total area (in pixels) of the connected component
Centroids is a matrix with the x and y locations of each centroid. The row in this matrix corresponds to the label number.
So, basically each item in stats (the first 4 values) determines the bounding box of one connected component (instance) in the mask.
Possible function that you can use to return just bounding boxes:
def get_bounding_boxes(mask, min_size=None):
    num_components, labeled_image, bboxes, centroids = cv2.connectedComponentsWithStats(mask)
    # return bboxes in cv2 format [x, y, w, h] without the background bbox and the component size
    return bboxes[1:, :-1]
# (x, y, x+w, y+h) are 4 points that you are looking for
And of course, in the case of a single instance this approach still works.
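If you want corner coordinates instead, here is a hedged, self-contained variant (the conversion and the function name are mine, not part of the answer above):

import cv2
import numpy as np

def get_corner_boxes(mask):
    # mask: uint8 image, 0 = background, non-zero = instance pixels
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    x = stats[1:, cv2.CC_STAT_LEFT]    # skip row 0, the background component
    y = stats[1:, cv2.CC_STAT_TOP]
    w = stats[1:, cv2.CC_STAT_WIDTH]
    h = stats[1:, cv2.CC_STAT_HEIGHT]
    # convert [x, y, w, h] to (xmin, ymin, xmax, ymax), one row per instance
    return np.stack([x, y, x + w, y + h], axis=1)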

Smooth values from skimage.measure.marching_cubes

I'm using skimage.measure.marching_cubes to extract a surface, defined as faces and vertices. marching_cubes also outputs values for each face.
How do I "smooth" these values (the actual smoothing could be a low-pass filter, median filter etc)? I thought that one way to achieve this would be to project, or to represent this surface in 2D, and then apply standard filters, but I can't think of how to do this from a list of faces and vertices.
The reason for this "smoothing" is because the values are not informative at the scale of a single face of the surface, but over larger areas of the surface represented by many faces.
Thanks in advance!
I eventually found a way to do this, based on MATLAB code from this paper:
Welf et al. "Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments" in Developmental Cell, 2016, Vol 36, Issue 4, p462-475
def median_filter_surface(faces, verts, measure, radius, p_norm=2):
    from scipy import spatial
    import numpy as np
    # INPUT:
    # faces: triangular surface faces - defined by 3 vertices
    # verts: the above vertices, defined by x,y,z coordinates
    # measure: the value related to each face that needs to be filtered
    # radius: the radius for median filtering (larger = more filtering)
    # p_norm: distance metric for the radius, default 2 (euclidean)
    # OUTPUT:
    # measure_med_filt: the "measure" after filtering
    num_faces = len(faces)
    face_centres = np.zeros((num_faces, 3))
    # get face centre positions in 3D space (from vert coordinates)
    for face in range(0, num_faces):
        face_centres[face, :] = np.mean(verts[faces[face, :], :], 0)
    # for each face centre, return all other face centres within a radius
    tree = spatial.KDTree(face_centres)
    faces_in_radius = tree.query_ball_point(face_centres, radius, p_norm)
    # median-filter the measure over each neighbourhood
    measure_med_filt = np.zeros(num_faces)
    for face in range(0, num_faces):
        measure_med_filt[face] = np.median(measure[faces_in_radius[face]])
    return measure_med_filt
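For reference, a hedged usage sketch with a synthetic volume. Recent scikit-image versions return per-vertex values from marching_cubes, so here they are averaged per face before filtering (that averaging step is my assumption, not part of the answer):

import numpy as np
from skimage import measure

volume = np.random.rand(30, 30, 30)
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

face_values = values[faces].mean(axis=1)   # one value per face
smoothed = median_filter_surface(faces, verts, face_values, radius=2.0)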

Correctly closing of polygons generated using skimage.measure.find_contours()

I'm currently using skimage.measure.find_contours() to find contours on a surface. Now that I've found the contours I need to able to find the area enclosed within them.
When all of the vertices are within the data set this is fine as a have a fully enclosed polygon.
However, how do I ensure the polygon is fully enclosed if the contour breaches the edge of the surface, either at an edge or at a corner? When this happens I would like to use the edge of the surface as additional vertices to close off the polygon. For example in the following image, with contours shown, you can see that the contours end at the edge of the image, how do I close them up? Also in the example of the brown contour, which is just a single line, I don't think I want an area returned, how would I single out this case?
I know I can check for enclosed contours/polygons by checking if the last vertex of the polygon is the same as the first.
I have code for calculating the area inside a polygon, taken from here
def find_area(array):
    # shoelace formula; assumes the polygon is closed (first point repeated at the end)
    a = 0
    ox, oy = array[0]
    for x, y in array[1:]:
        a += (x * oy - y * ox)
        ox, oy = x, y
    return -a / 2
I just need help in closing off the polygons. And checking for the different cases that might occur.
Thanks
Update:
After applying the solution suggested by @soupault I have this code:
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
# Construct some test data
x, y = np.ogrid[-np.pi:np.pi:100j, -np.pi:np.pi:100j]
r = np.sin(np.exp((np.sin(x)**3 + np.cos(y)**2)))
# Coordinates of point of interest
pt = [(49,75)]
# Apply thresholding to the surface
threshold = 0.8
blobs = r > threshold
# Make a labelled image based on the thresholding regions
blobs_labels = measure.label(blobs, background = 0)
# Show the thresholded regions
plt.figure()
plt.imshow(blobs_labels, cmap='spectral')
# Apply regionprops to charactersie each of the regions
props = measure.regionprops(blobs_labels, intensity_image = r)
# Loop through each region in regionprops and identify whether the point of interest
# is in that region. If so, plot the region and print its area.
plt.figure()
plt.imshow(r, cmap='Greys')
plt.plot(pt[0][0], pt[0][1],'rx')
for prop in props:
    coords = prop.coords
    if np.sum(np.all(coords[:, [1, 0]] == pt[0], axis=1)):
        plt.plot(coords[:, 1], coords[:, 0], 'r.')
        print(prop.area)
This solution assumes that each pixel is 1x1 in size. In my real data this isn't the case, so I have also applied the following function to apply linear interpolation to the data. I believe you can also apply a similar function to make the area of each pixel smaller and increase the resolution of the data.
import numpy as np
from scipy import interpolate
def interpolate_patch(x, y, patch):
    x_interp = np.arange(np.ceil(x[0]), x[-1], 1)
    y_interp = np.arange(np.ceil(y[0]), y[-1], 1)
    f = interpolate.interp2d(x, y, patch, kind='linear')
    patch_interp = f(x_interp, y_interp)
    return x_interp, y_interp, patch_interp
If you need to measure the properties of different regions, it is natural to start with finding the regions (not contours).
The algorithm will be the following, in this case:
1. Prepare a labeled image:
1.a either fill the areas between the different contour lines with different colors;
1.b or apply some image thresholding function, and then run skimage.measure.label (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label);
2. Execute regionprops using that labeled image as the input (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops);
3. Iterate over the regions in regionprops and calculate the desired parameters (area, perimeter, etc.).
Once you have identified the regions in your image via regionprops, you can call .coords for each of them to get the enclosed contour.
If someone needs to close open contours with the image edges (and make a polygon), here is one way:
import shapely.geometry as sgeo
import shapely.ops as sops

def close_contour_with_image_edge(contour, image_shape):
    """
    This function uses shapely because it is the easiest way to do that.
    :param contour: contour generated by skimage.measure.find_contours()
    :param image_shape: tuple (rows, cols), standard return of numpy shape
    :return: a shapely Polygon closed along the image edge
    """
    # make the contour a linestring
    contour_line = sgeo.LineString(contour)
    # make the image box a linestring
    box_rows, box_cols = image_shape[0], image_shape[1]
    img_box = sgeo.LineString(coordinates=(
        (0, 0),
        (0, box_cols - 1),
        (box_rows - 1, box_cols - 1),
        (box_rows - 1, 0),
        (0, 0)
    ))
    # intersect the box with the non-closed contour and take the shortest
    # box piece that touches both contour ends
    edge_points = img_box.intersection(contour_line)
    edge_parts = sops.split(img_box, edge_points)
    edge_parts = list(part for part in edge_parts.geoms
                      if part.touches(edge_points.geoms[0]) and part.touches(edge_points.geoms[1]))
    edge_parts.sort(reverse=False, key=lambda x: x.length)
    contour_edge = edge_parts[0]
    # weld it
    contour_line = contour_line.union(contour_edge)
    contour_line = sops.linemerge(contour_line)
    contour_polygon = sgeo.Polygon(contour_line.coords)
    return contour_polygon
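A hedged usage sketch, reusing the synthetic surface from the question's update (it assumes each open contour crosses the image border at exactly two points):

import numpy as np
from skimage import measure

x, y = np.ogrid[-np.pi:np.pi:100j, -np.pi:np.pi:100j]
r = np.sin(np.exp((np.sin(x)**3 + np.cos(y)**2)))

for contour in measure.find_contours(r, 0.8):
    # only close contours that do not already end where they start
    if not np.array_equal(contour[0], contour[-1]):
        poly = close_contour_with_image_edge(contour, r.shape)
        print(poly.area)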

OpenCV Python HoughCircles: Circles detected outside of image boundary

I am using the OpenCV HoughCircles method in Python as follows:
circles = cv2.HoughCircles(img,cv.CV_HOUGH_GRADIENT,1,20,
param1=50,param2=30,minRadius=0,maxRadius=0)
This seems to work quite well. However, one thing I noticed is that it detects circles which can extend outside of the image boundaries. Does anyone know how I can filter these results out?
Think of each circle as being bounded inside a square of dimensions 2r x 2r where r is the radius of the circle. Also, the centre of this box is located at (x,y) which also corresponds to where the centre of the circle is located in the image. To see if the circle is within the image boundaries, you simply need to make sure that the box that contains the circle does not go outside of the image. Mathematically speaking, you would need to ensure that:
r <= x <= cols-1-r
r <= y <= rows-1-r # Assuming 0-indexing
rows and cols are the rows and columns of your image. All you really have to do now is cycle through every circle in the detected result and filter out those that go outside the image boundaries, by checking whether the centre of each circle satisfies the two inequalities above. If it does, you keep the circle; if it doesn't, you leave it out of the final result.
To put this logic to code, do something like this:
import cv # Load in relevant packages
import cv2
import numpy as np
img = cv2.imread(...,0) # Load in image here - Ensure 8-bit grayscale
final_circles = [] # Stores the final circles that don't go out of bounds
circles = cv2.HoughCircles(img,cv.CV_HOUGH_GRADIENT,1,20,param1=50,param2=30,minRadius=0,maxRadius=0) # Your code
rows = img.shape[0] # Obtain rows and columns
cols = img.shape[1]
circles = np.round(circles[0, :]).astype("int") # Convert to integer
for (x, y, r) in circles: # For each circle we have detected...
    if (r <= x <= cols-1-r) and (r <= y <= rows-1-r): # Check if circle is within boundary
        final_circles.append([x, y, r]) # If it is, add this to our final list
final_circles = np.asarray(final_circles).astype("int") # Convert to numpy array for compatibility
The peculiar thing about cv2.HoughCircles is that it returns a 3D matrix where the first dimension is a singleton dimension. To eliminate this singleton dimension, I did circles[0, :] which will result in a 2D matrix. Each row of this new 2D matrix contains a tuple of (x, y, r) and characterizes where a circle is located in your image as well as its radius. I also converted the centres and radii to integers so that if you decide to draw them later on, you will be able to do it with cv2.circle.
You could also add a function which takes the centre and the radius of each circle, adds and subtracts the radius from the centre, and checks whether the result falls outside the boundaries of your image.
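A minimal sketch of that suggestion (the function name is mine):

def circle_in_bounds(x, y, r, rows, cols):
    # the circle fits if x +/- r and y +/- r stay inside the image
    return (x - r >= 0 and x + r <= cols - 1 and
            y - r >= 0 and y + r <= rows - 1)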

Using 3D perception in opencv2

Can anyone please explain if it is possible, and if so how, to work with cv2.getPerspectiveTransform().
I have 3D information about my image: I know the lengths of a and b, and also the different heights of c, d, e, f and g. I made the heights different to get more 3D information, but if that isn't needed it would be preferable.
Ultimately I need to know where the pink dot really is in the rectangle after applying the transform to the [x, y] position I get from the camera feed.
If you denote by C,D,E,F the positions of the four corners of the black polygon in the original image (each of them is a 2D point), and C',D',E',F' the positions of the corresponding points in your target image (probably (0,0), (a, 0), (a, b), (0, b)), M = cv2.getPerspectiveTransform({C,D,E,F}, {C',D',E',F'}) is the perspective transformation from one polygon to the other.
Given the position G of the vertical projection of g onto the black polygon in the original image, you can compute its position in the target image as cv2.transform(G, M). This will return a point (x,y,z), where the last coordinate z is a normalizing term. This z is zero when your point would be "at infinity" in the target image. If z is not zero, the point you are looking for is (x/z, y/z).
If z is zero, your point is at infinity, in the direction of the support of vector (x, y) (think of the case where G would be at the intersection of the supporting lines of two opposite sides of the black polygon in the source image).
If you know that the heights of c,d,e,f,g are equal, these points are also coplanar, and the exact same method applies to c,d,e,f,g instead of C,D,E,F,G.
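A hedged sketch of that procedure; the corner and dot coordinates below are placeholders, not values from the question, and cv2.perspectiveTransform is used because it performs the divide-by-z step for you (cv2.transform plus a manual division, as described above, gives the same result when z is non-zero):

import cv2
import numpy as np

a, b = 400, 300  # rectangle dimensions (placeholder values)

# corners C, D, E, F in the camera image and their targets in the a x b rectangle
src = np.float32([[120, 80], [520, 95], [540, 400], [100, 380]])
dst = np.float32([[0, 0], [a, 0], [a, b], [0, b]])
M = cv2.getPerspectiveTransform(src, dst)

# map the projection G of the pink dot into the rectangle
G = np.float32([[[230, 210]]])
gx, gy = cv2.perspectiveTransform(G, M)[0, 0]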
