Measure complex shapes using Voronoi centers - python

I am currently working on a project that uses OpenCV and Python to measure objects that are usually curved, for example the arrow shown below, as accurately as possible.
I thought one strategy might be to use scipy's Voronoi function to obtain the points along the center spine of the arrow, but I am having trouble right now. Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi

img = cv2.imread('example_rubystreak_2.PNG')
img.shape
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 190, 255, cv2.THRESH_BINARY)
# note: OpenCV 3.x returns three values here; OpenCV 4.x returns only contours and hierarchy
countimage, contours, hierarchy = cv2.findContours(thresh, 1, 2)
blank = np.zeros((img.shape[0], img.shape[1], 1), np.uint8)
# get the contour with the most points
max_contour = 0
contour_idx = None
for ii in range(len(contours)):
    if len(contours[ii]) > max_contour:
        contour_idx = ii
        max_contour = len(contours[ii])
cv2.drawContours(blank, contours, contour_idx, 255, cv2.FILLED, 8, hierarchy)
cnt = contours[contour_idx]
apdp = cv2.approxPolyDP(cnt, 1, True)
ap = [(a[0][0], a[0][1]) for a in apdp]
vor_ap = Voronoi(ap)
spined = []
for ridge in vor_ap.ridge_vertices:
    # skip ridges that extend to infinity (marked with a -1 vertex index)
    if -1 in ridge:
        continue
    if cv2.pointPolygonTest(cnt, tuple(vor_ap.vertices[ridge[0]]), True) <= 0.0 or cv2.pointPolygonTest(cnt, tuple(vor_ap.vertices[ridge[1]]), True) <= 0.0:
        continue
    else:
        if tuple(vor_ap.vertices[ridge[0]]) not in spined:
            spined.append([tuple(vor_ap.vertices[ridge[0]].tolist()), cv2.pointPolygonTest(cnt, tuple(vor_ap.vertices[ridge[0]]), True)])
        if tuple(vor_ap.vertices[ridge[1]]) not in spined:
            spined.append([tuple(vor_ap.vertices[ridge[1]].tolist()), cv2.pointPolygonTest(cnt, tuple(vor_ap.vertices[ridge[1]]), True)])
plt.figure(figsize=(12, 12))
plt.scatter([s[0][0] for s in spined], [s[0][1] for s in spined])
plt.plot([a[0] for a in ap], [a[1] for a in ap])
Which produces this picture:
Anyone have ideas on how I can then measure the length of the arrow using these center points? I've tried using np.polyfit and looked at the page here, but I can't figure out a way to consistently trace the curve through the most central points, as the arrows are sometimes curved like an S or have differently shaped points. Any help would be really appreciated. Thanks.

Here's an outline of what I'd try:
1) Find a parameterized representation of your central curve by interpolating the points using cubic B-splines. Use scipy.interpolate.splrep. You may need to remove the outlier points that don't follow the central curve to get a good cubic spline fit.
2) Once you have the cubic spline curve, you can find the arc length by using the arc-length integral formula from calculus and computing that integral numerically, with your curve's endpoints as the limits of integration. To do that you'll need the first derivatives of X and Y along your spline curve, which scipy.interpolate.splev or scipy.interpolate.spalde can give you. Use the scipy numerical integration routines for a function represented by Numpy arrays.
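Here is a minimal sketch of those two steps, assuming the spine points (the spined list from the question) have already been ordered along the arrow and converted to an (N, 2) array named pts. It uses scipy.interpolate.splprep, the parametric counterpart of splrep, so that S-shaped arrows (where y is not a function of x) are handled, and scipy.integrate.quad for the arc-length integral; the smoothing value is a placeholder to tune.
import numpy as np
from scipy import integrate, interpolate

def spine_length(pts, smooth=5.0):
    # Parametric cubic B-spline through the ordered spine points.
    tck, u = interpolate.splprep([pts[:, 0], pts[:, 1]], s=smooth, k=3)
    # Arc length = integral over u in [0, 1] of sqrt(x'(u)^2 + y'(u)^2).
    def speed(u_val):
        dx, dy = interpolate.splev(u_val, tck, der=1)
        return float(np.hypot(dx, dy))
    length, _ = integrate.quad(speed, 0.0, 1.0)
    return length

# e.g. pts = np.array(ordered_spine_points); print(spine_length(pts))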

How to stack a kernel above each point on a regular grid with Python?

The idea is to stack a kernel above each point scattered along one dimension. Specifically, the peak of the kernel is aligned/centered on the x-axis location of each point. This is akin to kernel density estimation, except that only half of the kernel is stacked at each point, as shown in the picture below.
Ultimately, the densities are summed to yield a single curve (i.e., the gray line) as shown below.
As a starting point, I dug around the kernel density estimation of the scikit-learn module for an idea. However, I failed to find any line on how/where they stack the kernel on top of each point.
I would really appreciate it if someone could provide good reading material for me to achieve this objective.
Does this do what you want? Sorry the graphs are not half as pretty as yours, but I did not think that was the point:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# define a half-kernel function; we normalize so that integral(half_kernel) = 1 if required
def half_kernel(x, center, width=1, normalize=True):
    kernel = (x >= center) * norm.pdf(x, center, width)
    if normalize:
        kernel *= 2
    return kernel

# these are the points where we center our kernels -- random for testing
centers = np.random.normal(0.0, 2.0, 7)
# grid on which we look at the results
x = np.linspace(-3.0, 3.0, 101)
# get the results here, each column is one of the kernels
discr_kernels = np.zeros((len(x), len(centers)))
for n in range(len(centers)):
    discr_kernels[:, n] = half_kernel(x, centers[n])
y = discr_kernels.sum(axis=1)
plt.plot(x, discr_kernels, '--')
plt.plot(x, y, '.-', label='total')
plt.legend(loc='best')

How to draw a polygon (parts was bent inward) edge on a group of points with Python?

I am trying to draw a concave polygon edge around each K-Means cluster, as shown below (fig_1).
With #ypnos's help, this piece of code plots everything except the edge:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import pairwise_distances_argmin
from scipy.spatial import ConvexHull

df = pd.read_csv('https://raw.githubusercontent.com/MachineIntellect/dataset.ml/master/watermelon/watermelon_4_0.csv')
X = df.iloc[:, 1:].to_numpy()
m0 = X[5]
m1 = X[11]
m2 = X[23]
centroids = np.array([m0, m1, m2])
labels = pairwise_distances_argmin(X, centroids)
m0 = X[labels == 0].mean(0)
m1 = X[labels == 1].mean(0)
m2 = X[labels == 2].mean(0)
new_centroids = np.array([m0, m1, m2])
plt.xlim(0.1, 0.9)
plt.ylim(0, 0.8)
plt.scatter(X[:, 0], X[:, 1])
plt.scatter(new_centroids[:, 0], new_centroids[:, 1], c='r', marker='+')
for i in range(3):
    points = X[labels == i]
    hull = ConvexHull(points)
    for simplex in hull.simplices:
        plt.plot(points[simplex, 0], points[simplex, 1], 'r-')
(fig_2)
The scikit-learn documentation seems to be a good source of inspiration.
The problem is that the edges pointed to by the arrow in fig_1 are different from their counterparts in fig_2.
The edge of the polygon pointed to by the arrow is bent inward (thanks to #dwilli).
Thanks to #ImportanceOfBeingErnest's reminder, scipy.spatial.ConvexHull may not be able to produce a concave hull.
Is there any other module/package that can produce a concave hull?
Any hint would be appreciated.
What your inspiration shows is a Voronoi diagram. The coloring shows, for any coordinate in the graph, which cluster it would be associated with.
The polygons you show in your first figure are a rough approximation of the convex hull of your cluster members. You could use scipy.spatial.ConvexHull or cv2.convexHull() (from OpenCV) to compute it. The documentation of the former also gives an example on how to plot it.
To generate the polygon you can try the steps below.
Generate a polygon around each cluster, treating each cluster as an individual part of the plot.
You can create a rough polygon using the convex hull method mentioned by #ypnos, but to get a better result, have a look at the Delaunay triangulation method.
This generates triangular regions between the points; filtering the triangles with a set threshold value ensures the best possible fit.
Using this data, you can plot a concave hull through the extreme points. As you don't want the extreme points themselves to be the vertices of the polygon, you can add a buffer that goes around the points by a set value.
Expected result on some sample data
There's quite a bit of code required to achieve the result; here is a link to a comprehensive guide for generating the sample plot, and a minimal sketch of the triangulation idea follows.
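The sketch below is one rough way to implement the Delaunay-with-threshold (alpha-shape style) idea described above; it is not the linked guide's code, and the edge-length threshold alpha is an assumed parameter you would tune per cluster.
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def concave_hull_edges(points, alpha):
    """Boundary edges (pairs of point indices) of an alpha-shape-like hull."""
    tri = Delaunay(points)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        # Discard long, skinny triangles that would bridge concave regions.
        if max(np.linalg.norm(pa - pb),
               np.linalg.norm(pb - pc),
               np.linalg.norm(pc - pa)) > alpha:
            continue
        for edge in ((ia, ib), (ib, ic), (ic, ia)):
            edge_count[tuple(sorted(edge))] += 1
    # Edges used by exactly one kept triangle form the concave boundary.
    return [e for e, n in edge_count.items() if n == 1]

# e.g., for each cluster from the question's code:
# for i in range(3):
#     pts = X[labels == i]
#     for ia, ib in concave_hull_edges(pts, alpha=0.15):  # alpha is data-dependent
#         plt.plot(pts[[ia, ib], 0], pts[[ia, ib], 1], 'r-')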

Computing diameter-lines of a 3D spherical mask

Background
For an algorithm I'm working on, I currently use a 3D sphere as a binary mask: an NxNxN array in which the voxels inside a sphere of radius N//2 are True. Further processing performs a computation for each voxel set to True.
This proved computationally intensive for my specific task as N grew large (the number of voxels scales as O(N^3)), so I now want to reduce my binary mask to a subsample of lines radiating from the array center, within the radius.
Objective
I want a 3D binary mask of the lines in gray in the image.
To have a bit of control over the number of voxels, I would have a parameter (say l) regulating the number of lines sampled in each 2D circle, and maybe a second one (say k) for the number of z-rotations.
What I tried
I am using numpy and scipy, and I thought that I could use the scipy.ndimage.interpolation.rotate method to rotate a single line around on a plane, then use that complete 2D mask to rotate around the z-axis.
This proved difficult, as the interpolation uses spline machinery that discards my True values on rotation.
I am thinking that I could compute mathematically which voxels should be set to True by following some line equations, but I'm at a loss to find them.
Any idea how to get there?
Update : Solution !
Thanks to jkalden, who helped me think this through and gave code samples, I have this:
rmax is the radius of the sphere; n_theta and n_phi are the numbers of polar and azimuthal lines to use.
import numpy as np

out_mask = np.zeros((rmax*2,) * 3, dtype=bool)
# for each phi = one circle in the azimuthal circles
for phi in np.linspace(0, np.deg2rad(360), n_phi, endpoint=False):
    # for all lines in the polar circle of this azimuthal circle
    for theta in np.linspace(0, np.deg2rad(360), n_theta, endpoint=False):
        # for all distances (0-rmax) along these lines
        for r in range(rmax):
            # coordinates are rounded to integers so they can index the mask
            coords = np.round(spherical_to_cartesian([r, theta, phi])).astype(int) + rmax
            out_mask[tuple(coords)] = True
With the spherical_to_cartesian from this code sample.
Which gives me this (with rmax = 50 and n_theta = n_phi = 8):
(The center area is left out by my function, by choice.)
I propose changing the coordinate system to spherical coordinates. You then choose your 2D circle by an azimuthal angle, and a line is defined by additionally choosing a polar angle. The variable along the line is then just the radius, and you can use numpy.linspace to discretize it. Doing so might also save time during the calculation.
You can switch your coordinate system at any time by using the bijective relation implemented e.g. here or here.
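For reference, here is one possible spherical_to_cartesian helper matching the update above (the linked code sample is not reproduced here, so this is an assumption about its behaviour); it uses the physics convention with theta as the polar angle and phi as the azimuthal angle, and returns floating-point offsets that the loop then rounds to voxel indices.
import numpy as np

def spherical_to_cartesian(rtp):
    # rtp = [r, theta, phi]; theta is the polar angle, phi the azimuthal angle
    r, theta, phi = rtp
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.array([x, y, z])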

Affine transformation between contours in OpenCV

I have a historical time sequence of seafloor images scanned from film that need registration.
from pylab import *
import math
import matplotlib.pyplot as plt
import cv2
from urllib.request import urlretrieve  # plain urllib.urlretrieve in the original Python 2 code

urlretrieve('http://geoport.whoi.edu/images/frame014.png', 'frame014.png')
urlretrieve('http://geoport.whoi.edu/images/frame015.png', 'frame015.png')
gray1 = cv2.imread('frame014.png', 0)
gray2 = cv2.imread('frame015.png', 0)
figure(figsize=(14, 6))
subplot(121); imshow(gray1, cmap=cm.gray);
subplot(122); imshow(gray2, cmap=cm.gray);
I want to use the black region on the left of each image to do the registration, since that region was inside the camera and should be fixed in time. So I just need to compute the affine transformation between the black regions.
I determined these regions by thresholding and finding the largest contour:
def find_biggest_contour(gray, threshold=40):
    # threshold a grayscale image
    ret, thresh = cv2.threshold(gray, threshold, 255, 1)
    # find the contours
    contours, h = cv2.findContours(thresh, mode=cv2.RETR_LIST, method=cv2.CHAIN_APPROX_NONE)
    # measure the perimeter
    perim = [cv2.arcLength(cnt, True) for cnt in contours]
    # find the contour with the largest perimeter
    i = perim.index(max(perim))
    return contours[i]

c1 = find_biggest_contour(gray1)
c2 = find_biggest_contour(gray2)
x1 = c1[:, 0, 0]; y1 = c1[:, 0, 1]
x2 = c2[:, 0, 0]; y2 = c2[:, 0, 1]
figure(figsize=(8, 8))
imshow(gray1, cmap=cm.gray, alpha=0.5); plot(x1, y1, 'b-')
imshow(gray2, cmap=cm.gray, alpha=0.5); plot(x2, y2, 'g-')
axis([0, 1500, 1000, 0]);
The blue is the longest contour from the 1st frame, and the green is the longest contour from the 2nd frame.
What is the best way to determine the rotation and offset between the blue and green contours?
I only want to use the right side of the contours in some region surrounding the step, something like the region between the arrows.
Of course, if there is a better way to register these images, I'd love to hear it. I already tried a standard feature matching approach on the raw images, and it didn't work well enough.
Following Shambool's suggested approach, here's what I've come up with. I used a Ramer-Douglas-Peucker algorithm to simplify the contour in the region of interest and identified the two turning points. I was going to use the two turning points to get my three unknowns (xoffset, yoffset and angle of rotation), but the 2nd turning point is a bit too far toward the right because RDP simplified away the smoother curve in this region. So instead I used the angle of the line segment leading up to the 1st turning point. Differencing this angle between image1 and image2 gives me the rotation angle. I'm still not completely happy with this solution. It worked well enough for these two images, but I'm not sure it will work well on the entire image sequence. We'll see.
It would really be better to fit the contour to the known shape of the black border.
# select region of interest from largest contour
ind1 = where((x1 > 190.) & (y1 > 200.) & (y1 < 900.))[0]
ind2 = where((x2 > 190.) & (y2 > 200.) & (y2 < 900.))[0]
figure(figsize=(10, 10))
imshow(gray1, cmap=cm.gray, alpha=0.5); plot(x1[ind1], y1[ind1], 'b-')
imshow(gray2, cmap=cm.gray, alpha=0.5); plot(x2[ind2], y2[ind2], 'g-')
axis([0, 1500, 1000, 0])

def angle(x1, y1):
    # Returns the angle of each segment along an (x,y) track
    return array([math.atan2(y, x) for (y, x) in zip(diff(y1), diff(x1))])

def simplify(x, y, tolerance=40, min_angle=60.*pi/180.):
    """
    Use the Ramer-Douglas-Peucker algorithm to simplify the path
    http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
    Python implementation: https://github.com/sebleier/RDP/
    """
    from RDP import rdp
    points = vstack((x, y)).T
    simplified = array(rdp(points.tolist(), tolerance))
    sx, sy = simplified.T
    theta = abs(diff(angle(sx, sy)))
    # Select the indices of the points with the greatest theta.
    # Large theta is associated with the greatest change in direction.
    idx = where(theta > min_angle)[0] + 1
    return sx, sy, idx

sx1, sy1, i1 = simplify(x1[ind1], y1[ind1])
sx2, sy2, i2 = simplify(x2[ind2], y2[ind2])
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.plot(x1, y1, 'b-', x2, y2, 'g-', label='original path')
ax.plot(sx1, sy1, 'ko-', sx2, sy2, 'ko-', lw=2, label='simplified path')
ax.plot(sx1[i1], sy1[i1], 'ro', sx2[i2], sy2[i2], 'ro',
        markersize=10, label='turning points')
ax.invert_yaxis()
plt.legend(loc='best')

# determine the x,y offset between the 1st turning points, and the angle
# from the difference in slopes of the line segments approaching the 1st turning point
xoff = sx2[i2[0]] - sx1[i1[0]]
yoff = sy2[i2[0]] - sy1[i1[0]]
iseg1 = [i1[0]-1, i1[0]]
iseg2 = [i2[0]-1, i2[0]]
ang1 = angle(sx1[iseg1], sy1[iseg1])
ang2 = angle(sx2[iseg2], sy2[iseg2])
ang = -(ang2[0] - ang1[0])
# note: conversion to degrees would be ang*180./pi; the expression below is kept
# from the original post, so the printed value is not actually in degrees
print(xoff, yoff, ang*180.*pi)
-28 14 5.07775871644
# 2x3 affine matrix M
M = array([cos(ang), sin(ang), xoff, -sin(ang), cos(ang), yoff]).reshape(2, 3)
print(M)
[[  9.99959685e-01   8.97932821e-03  -2.80000000e+01]
 [ -8.97932821e-03   9.99959685e-01   1.40000000e+01]]
# warp the 2nd image into the coordinate frame of the 1st
Minv = cv2.invertAffineTransform(M)
gray2b = cv2.warpAffine(gray2, Minv, shape(gray2.T))
figure(figsize=(10, 10))
imshow(gray1, cmap=cm.gray, alpha=0.5); plot(x1[ind1], y1[ind1], 'b-')
imshow(gray2b, cmap=cm.gray, alpha=0.5);
axis([0, 1500, 1000, 0]);
title('image1 and transformed image2 overlain with 50% transparency');
Good question.
One approach is to represent contours as 2d point clouds and then do registration.
There is simpler and clearer code in Matlab that can give you the affine transform.
And there is more complex C++ code (using the VXL lib) with Python and Matlab wrappers included.
Or you can use a modified ICP (iterative closest point) algorithm that is robust to noise and can handle an affine transform.
Also, your contours seem to be not very accurate, which could be a problem.
Another approach is to use some kind of registration that uses pixel values.
Matlab code (I think it uses some kind of minimizer + cross-correlation metric).
There may also be some kind of optical-flow registration (or some other kind) used in medical imaging.
You can also use point features such as SIFT (or SURF).
You can try it quickly in FIJI (ImageJ),
see also this link.
Open the 2 images.
Plugins -> Feature Extraction -> SIFT (or another).
Set the expected transformation to affine.
Look at the estimated transformation model (the [3,3] homography matrix) in the ImageJ log.
If it works well, you can implement it in Python using OpenCV, or maybe using Jython with ImageJ; a rough sketch follows below.
And it would be better if you posted the original images and described all conditions (it seems the image changes between frames).
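If the FIJI experiment looks promising, an OpenCV/Python sketch of the SIFT + robust affine estimation could look like the following (cv2.SIFT_create needs OpenCV >= 4.4 or the contrib build, and the 0.7 ratio-test threshold is a common default, not something from this thread):
import cv2
import numpy as np

gray1 = cv2.imread('frame014.png', 0)
gray2 = cv2.imread('frame015.png', 0)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

# Keep only unambiguous matches (Lowe ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)

# Robustly estimate rotation + translation (+ uniform scale) mapping image2 onto image1.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
gray2_registered = cv2.warpAffine(gray2, M, (gray2.shape[1], gray2.shape[0]))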
You can represent these contours with their respective ellipses. These ellipses are centered on the centroid of the contour and oriented along the main density axis. You can then compare the centroids and the orientation angles.
1) Fill the contours => drawContours with thickness=CV_FILLED
2) Find moments => cvMoments()
3) And use them.
Centroid: { x, y } = { M10/M00, M01/M00 }
Orientation (theta): theta = 0.5 * atan(2*XY / (XX - YY)), where XX, YY, and XY are the central second moments computed as in the snippet below.
EDIT: I customized the sample code from legacy (enteringblobdetection.cpp) for your case.
/* Image moments */
double M00,X,Y,XX,YY,XY;
CvMoments m;
CvRect r = ((CvContour*)cnt)->rect;
CvMat mat;
cvMoments( cvGetSubRect(pImgFG,&mat,r), &m, 0 );
M00 = cvGetSpatialMoment( &m, 0, 0 );
X = cvGetSpatialMoment( &m, 1, 0 )/M00;
Y = cvGetSpatialMoment( &m, 0, 1 )/M00;
XX = (cvGetSpatialMoment( &m, 2, 0 )/M00) - X*X;
YY = (cvGetSpatialMoment( &m, 0, 2 )/M00) - Y*Y;
XY = (cvGetSpatialMoment( &m, 1, 1 )/M00) - X*Y;
/* Contour description */
CvPoint myCentroid(r.x+(float)X,r.y+(float)Y);
double myTheta = atan( 2*XY/(XX-YY) );
Also, check this with OpenCV 2.0 examples.
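A Python sketch of the same moment computation with the modern cv2.moments API might look like this (it assumes cnt is a contour such as c1 from earlier, and it keeps the 1/2 factor of the standard orientation formula, which the legacy snippet above omits):
import math
import cv2
import numpy as np

def contour_centroid_orientation(cnt, image_shape):
    # Rasterize the filled contour and take image moments of the mask.
    mask = np.zeros(image_shape, np.uint8)
    cv2.drawContours(mask, [cnt], -1, 255, cv2.FILLED)
    m = cv2.moments(mask, binaryImage=True)
    x = m['m10'] / m['m00']
    y = m['m01'] / m['m00']
    # Normalized central second moments.
    xx = m['m20'] / m['m00'] - x * x
    yy = m['m02'] / m['m00'] - y * y
    xy = m['m11'] / m['m00'] - x * y
    theta = 0.5 * math.atan2(2.0 * xy, xx - yy)
    return (x, y), theta

# e.g. (cx1, cy1), theta1 = contour_centroid_orientation(c1, gray1.shape)
#      (cx2, cy2), theta2 = contour_centroid_orientation(c2, gray2.shape)
#      rotation = theta2 - theta1; offset = (cx2 - cx1, cy2 - cy1)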
If you don't want to find the homography between the two images and only want the affine transformation, you have three unknowns: the rotation angle (R) and the displacements in x and y (X, Y). Therefore a minimum of two points (each providing two known values) is needed to find the unknowns. You should match either two points between the two images, or two lines, each of which has two known values, the intercept and slope. If you go with the point-matching approach, the further the points are from each other, the more robust the estimated transform is to noise (this is very simple if you remember the error propagation rules).
In the two-point matching method (a code sketch follows after these steps):
find two points (A and B) in the first image I1 and their corresponding points (A', B') in the second image I2
find the midpoint between A and B: C, and the midpoint between A' and B': C'
the difference between C and C' (C - C') gives the translation between the images (X and Y)
using the dot product of C-A and C'-A' you can find the rotation angle (R)
To detect robust points, I would find the points along the side of the contour you found that have the highest absolute value of the second derivative (Hessian) and then try to match them. Since you mentioned this is video footage, you can safely assume the transformation between consecutive frames is small, which helps you reject outliers.
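A small sketch of that two-point computation (not from the thread; it uses atan2 of the 2D cross and dot products to get a signed angle, which is a bit more robust than the dot product alone):
import numpy as np

def two_point_rigid_transform(A, B, A2, B2):
    """Rotation angle and translation from two matched point pairs.
    A, B are points in image 1; A2, B2 are their matches in image 2."""
    A, B, A2, B2 = [np.asarray(p, float) for p in (A, B, A2, B2)]
    C = (A + B) / 2.0            # midpoint in image 1
    C2 = (A2 + B2) / 2.0         # midpoint in image 2
    translation = C2 - C         # offset of image 2 relative to image 1
    v, v2 = B - A, B2 - A2
    # Signed angle between the two segments.
    angle = np.arctan2(np.cross(v, v2), np.dot(v, v2))
    return angle, translation

# e.g. angle, (dx, dy) = two_point_rigid_transform((10, 5), (200, 40), (12, 9), (201, 47))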

How do I calculate a 3D centroid?

Is there even such a thing as a 3D centroid? Let me be perfectly clear: I've been reading and reading about centroids for the last 2 days, both on this site and across the web, so I'm perfectly aware of the existing posts on the topic, including Wikipedia.
That said, let me explain what I'm trying to do. Basically, I want to take a selection of edges and/or vertices, but NOT faces. Then, I want to place an object at the 3D centroid position.
I'll tell you what I don't want:
The average of the vertices, which would pull too far toward any region that has a more detailed mesh.
The bounding box center, because I already have something working for this scenario.
I'm open to suggestions about center of mass, but I don't see how this would work, because vertices or edges alone don't define any sort of mass, especially when I just have an edge loop selected.
For kicks, I'll show you some PyMEL that I worked up, using #Emile's code as reference, but I don't think it's working the way it should:
from pymel.core import ls, spaceLocator
from pymel.core.datatypes import Vector
from pymel.core.nodetypes import NurbsCurve

def get_centroid(node):
    if not isinstance(node, NurbsCurve):
        raise TypeError("Requires NurbsCurve.")
    centroid = Vector(0, 0, 0)
    signed_area = 0.0
    cvs = node.getCVs(space='world')
    v0 = cvs[len(cvs) - 1]
    for i, cv in enumerate(cvs[:-1]):
        v1 = cv
        a = v0.x * v1.y - v1.x * v0.y
        signed_area += a
        centroid += sum([v0, v1]) * a
        v0 = v1
    signed_area *= 0.5
    centroid /= 6 * signed_area
    return centroid

texas = ls(selection=True)[0]
centroid = get_centroid(texas)
print(centroid)
spaceLocator(position=centroid)
In theory, centroid = SUM(pos*volume) / SUM(volume) when you split the part into finite volumes, each with a location pos and a volume value volume.
This is precisely the calculation done for finding the center of gravity of a composite part.
There is not just a 3D centroid; there is an n-dimensional centroid, and the formula for it is given in the "By integral formula" section of the Wikipedia article you cite.
Perhaps you are having trouble setting up this integral? You have not defined your shape.
[Edit] I'll beef up this answer in response to your comment. Since you have described your shape in terms of edges and vertices, I'll assume it is a polyhedron. You can partition a polyhedron into pyramids, find the centroids of the pyramids, and then the centroid of your shape is the centroid of those centroids (this last calculation is done using ja72's formula).
I'll assume your shape is convex (no hollow parts; if this is not the case, break it into convex chunks). You can partition it into pyramids (triangulate it) by picking a point in the interior and drawing edges to the vertices. Then each face of your shape is the base of a pyramid. There are formulas for the centroid of a pyramid (you can look this up; it's 1/4 of the way from the centroid of the face to your interior point). Then, as was said, the centroid of your shape is the centroid of the centroids (ja72's finite calculation, not an integral) as given in the other answer.
This is the same algorithm as in Hugh Bothwell's answer; however, I believe that 1/4 is correct instead of 1/3. Perhaps you can find some code for it lurking around somewhere using the search terms in this description.
I like the question. Centre of mass sounds right, but the question then becomes, what mass for each vertex?
Why not use the average length of each edge that includes the vertex? This should compensate nicely for areas with a dense mesh.
You will have to recreate face information from the vertices (essentially a Delaunay triangulation).
If your vertices define a convex hull, you can pick any arbitrary point A inside the object. Treat your object as a collection of pyramidal prisms having apex A and each face as a base.
For each face, find the area Fa and the 2d centroid Fc; then the prism's mass is proportional to the volume (== 1/3 base * height (component of Fc-A perpendicular to the face)) and you can disregard the constant of proportionality so long as you do the same for all prisms; the center of mass is (2/3 A + 1/3 Fc), or a third of the way from the apex to the 2d centroid of the base.
You can then do a mass-weighted average of the center-of-mass points to find the 3d centroid of the object as a whole.
The same process should work for non-convex hulls - or even for A outside the hull - but the face-calculation may be a problem; you will need to be careful about the handedness of your faces.
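Here is a sketch of the pyramid/tetrahedron decomposition described in the last two answers, using scipy's ConvexHull to recover triangular faces from the vertex cloud (an assumption, since your selection may not carry face data):
import numpy as np
from scipy.spatial import ConvexHull

def convex_centroid(vertices):
    """Volume-weighted centroid of the convex hull of a 3D point set."""
    pts = np.asarray(vertices, float)
    hull = ConvexHull(pts)
    apex = pts.mean(axis=0)              # any interior point can serve as the apex
    total_volume = 0.0
    centroid = np.zeros(3)
    for ia, ib, ic in hull.simplices:    # each hull face is a triangle
        a, b, c = pts[ia], pts[ib], pts[ic]
        # Volume of the tetrahedron (apex, a, b, c); the absolute value avoids
        # worrying about face orientation since the apex is interior.
        volume = abs(np.dot(a - apex, np.cross(b - apex, c - apex))) / 6.0
        # The centroid of a tetrahedron is the mean of its four vertices, i.e.
        # 1/4 of the way from the base centroid toward the apex.
        centroid += volume * (apex + a + b + c) / 4.0
        total_volume += volume
    return centroid / total_volume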
