opencv align two images by keypoints with stretching - python

I have two images
image1 (object):
Original image without marked keypoints:
image2 is a white picture (500x500)
In image1 and in image2 I have marked keypoints.
I want to align image1 on image2 by the keypoints, so that both sets of keypoints overlap, stretching, scaling, and transforming image1 as needed.
These are my keypoints (csv file). The coordinates are x and y; the object1 columns are points in image1 and the image columns are points in image2.
object1_x,object1_y,image_x,image_y
0,0,80,137
286,0,409,42
286,198,416,390
174,198,331,384
158,116,291,119
0,97,111,311
How can I do this with opencv and python?
So the result image should look like this (without the red dots; the red dots are only there to demonstrate the keypoints):

The Concept
Extract sets of 3 indices from the first set of keypoints that will form triangles when indexed from both sets of keypoints. With the indices we can get corresponding triangles from both sets of keypoints, allowing us to build the warped image triangle by triangle (see Warp one triangle to another using OpenCV for more details):
image1.png (with added points):
image2.png (with added points):
Result (with added points)
The Code
import cv2
import numpy as np

def triangles(points):
    points = np.where(points, points, 1)
    subdiv = cv2.Subdiv2D((*points.min(0), *points.max(0)))
    subdiv.insert(list(points))
    for pts in subdiv.getTriangleList().reshape(-1, 3, 2):
        yield [np.where(np.all(points == pt, 1))[0][0] for pt in pts]

def crop(img, pts):
    x, y, w, h = cv2.boundingRect(pts)
    img_cropped = img[y: y + h, x: x + w]
    pts[:, 0] -= x
    pts[:, 1] -= y
    return img_cropped, pts

def warp(img1, img2, pts1, pts2):
    for indices in triangles(pts1):
        img1_cropped, triangle1 = crop(img1, pts1[indices])
        img2_cropped, triangle2 = crop(img2, pts2[indices])
        transform = cv2.getAffineTransform(np.float32(triangle1), np.float32(triangle2))
        img2_warped = cv2.warpAffine(img1_cropped, transform, img2_cropped.shape[:2][::-1], None, cv2.INTER_LINEAR, cv2.BORDER_REFLECT_101)
        mask = np.zeros_like(img2_cropped)
        cv2.fillConvexPoly(mask, np.int32(triangle2), (1, 1, 1), 16, 0)
        img2_cropped *= 1 - mask
        img2_cropped += img2_warped * mask

img1 = cv2.imread("image1.png")
img2 = cv2.imread("image2.png")

pts1 = np.array([[0, 0], [286, 0], [286, 198], [174, 198], [158, 116], [0, 97]])
pts2 = np.array([[80, 37], [409, 42], [416, 390], [331, 384], [291, 119], [111, 311]])

warp(img1, img2, pts1, pts2)

for pt in pts2:
    cv2.circle(img2, tuple(pt), 15, (0, 0, 255), -1)

cv2.imshow("Original", img1)
cv2.imshow("Transformed", img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
The Output
The Explanation
Import the necessary libraries:
import cv2
import numpy as np
Define a generator function, triangles, that will take in an array of coordinates, points, and yield lists of 3 indices into that array, one list per triangle, such that the triangles together cover the area spanned by the original coordinates:
def triangles(points):
    points = np.where(points, points, 1)
    subdiv = cv2.Subdiv2D((*points.min(0), *points.max(0)))
    subdiv.insert(list(points))
    for pts in subdiv.getTriangleList().reshape(-1, 3, 2):
        yield [np.where(np.all(points == pt, 1))[0][0] for pt in pts]
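For instance, a quick way to see what the generator yields, using the sample keypoints from further below (the exact triangulation may vary):

pts1 = np.array([[0, 0], [286, 0], [286, 198], [174, 198], [158, 116], [0, 97]])
for tri in triangles(pts1):
    print(tri)  # each item is a list of 3 indices into pts1, e.g. [0, 1, 4]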
Define a function, crop, that will take in an image array, img, and an array of three coordinates, pts. It will return a rectangular segment of the image just large enough to fit the triangle formed by the three points, along with the array of three coordinates shifted to be relative to the top-left corner of the cropped image:
def crop(img, pts):
    x, y, w, h = cv2.boundingRect(pts)
    img_cropped = img[y: y + h, x: x + w]
    pts[:, 0] -= x
    pts[:, 1] -= y
    return img_cropped, pts
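A tiny usage sketch (the image and triangle here are made up for demonstration); note that crop shifts pts in place, so pass a copy if the original coordinates are still needed:

img = np.zeros((400, 400, 3), np.uint8)  # hypothetical blank image
tri = np.array([[50, 60], [200, 80], [120, 220]], dtype=np.int32)
img_cropped, tri_shifted = crop(img, tri.copy())
print(img_cropped.shape)  # a bounding-box-sized view into img
print(tri_shifted)        # the same triangle, relative to the crop's top-left corner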
Define a function, warp, that will take in 2 image arrays, img1 and img2, and 2 arrays of coordinates, pts1 and pts2. It will utilize the triangles function defined before to iterate through the triangles from the first array of coordinates, use the crop function defined before to crop both images at the coordinates corresponding to the triangle indices, and use the cv2.warpAffine() method to warp the image at the current triangle of the iteration. Note that img2_cropped is a view into img2, so the in-place *= and += operations write each warped triangle directly into img2; that is why the function does not need to return anything:
def warp(img1, img2, pts1, pts2):
    for indices in triangles(pts1):
        img1_cropped, triangle1 = crop(img1, pts1[indices])
        img2_cropped, triangle2 = crop(img2, pts2[indices])
        transform = cv2.getAffineTransform(np.float32(triangle1), np.float32(triangle2))
        img2_warped = cv2.warpAffine(img1_cropped, transform, img2_cropped.shape[:2][::-1], None, cv2.INTER_LINEAR, cv2.BORDER_REFLECT_101)
        mask = np.zeros_like(img2_cropped)
        cv2.fillConvexPoly(mask, np.int32(triangle2), (1, 1, 1), 16, 0)
        img2_cropped *= 1 - mask
        img2_cropped += img2_warped * mask
Read in your images. In your case, img1 is the image we want to warp, and img2 is the blank 500 x 500 image. Also, define 2 arrays of coordinates to be the keypoints of the images:
img1 = cv2.imread("image1.png")
img2 = cv2.imread("image2.png")
pts1 = np.array([[0, 0], [286, 0], [286, 198], [174, 198], [158, 116], [0, 97]])
pts2 = np.array([[80, 37], [409, 42], [416, 390], [331, 384], [291, 119], [111, 311]])
Finally, use the warp function defined before to warp img1 to have its keypoints overlap with the keypoints of img2, and show the resulting image. I drew the points from the second array of coordinates onto the resulting warped image to make the warping process easier to visualize:
warp(img1, img2, pts1, pts2)
for pt in pts2:
    cv2.circle(img2, tuple(pt), 15, (0, 0, 255), -1)
cv2.imshow("Original", img1)
cv2.imshow("Transformed", img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
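An alternative worth knowing: since all 6 correspondences are available up front, you could also fit one global homography with cv2.findHomography and warp in a single call. This is a hedged sketch, not the triangle-by-triangle method above; it will generally not match that result exactly unless one perspective transform relates all the keypoints:

import cv2
import numpy as np

img1 = cv2.imread("image1.png")
img2 = cv2.imread("image2.png")
pts1 = np.array([[0, 0], [286, 0], [286, 198], [174, 198], [158, 116], [0, 97]])
pts2 = np.array([[80, 37], [409, 42], [416, 390], [331, 384], [291, 119], [111, 311]])

# Least-squares homography from all 6 point correspondences
H, _ = cv2.findHomography(np.float32(pts1), np.float32(pts2))
warped = cv2.warpPerspective(img1, H, img2.shape[1::-1])

cv2.imshow("Single-homography warp", warped)
cv2.waitKey(0)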

Related

Convert abstract mask to geometric shape and calculate all edges

I am very new to Python, so what I am trying to do is this:
I already made a mask of floor of given image:
https://i.ibb.co/0r17SnT/www.png
What I want to do is to connect all points by geometric straight lines and find the length of each edge, like I drew on this example:
https://i.ibb.co/tbgyYF5/www.png
I am using OpenCV, and so far I have tried to find some extreme points, but it is not what I need, I guess.
import cv2

def getCalculatedImg():
    # Load image
    img = cv2.imread('img.jpg')
    gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply cv2.threshold() to get a binary image
    ret, thresh = cv2.threshold(gray_image, 50, 255, cv2.THRESH_BINARY)
    # Find contours:
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cntrs = contours[0]
    c = max(contours, key=cv2.contourArea)
    # Obtain outer coordinates
    left_coordinates_separated = [(c[c[:, :, 0].argmin()][0])[0], (c[c[:, :, 0].argmin()][0])[1]]
    left = tuple(left_coordinates_separated)
    right_coordinates_separated = [(c[c[:, :, 0].argmax()][0])[0], (c[c[:, :, 0].argmax()][0])[1]]
    right = tuple(right_coordinates_separated)
    top_coordinates_separated = [(c[c[:, :, 1].argmin()][0])[0], (c[c[:, :, 1].argmin()][0])[1]]
    top = tuple(top_coordinates_separated)
    bottom_coordinates_separated = [(c[c[:, :, 1].argmax()][0])[0], (c[c[:, :, 1].argmax()][0])[1]]
    bottom = tuple(bottom_coordinates_separated)
    # Draw contours and dots to image:
    cv2.drawContours(img, [c], -1, (36, 255, 12), 2)
    cv2.circle(img, left, 8, (0, 50, 255), -1)
    cv2.circle(img, right, 8, (0, 255, 255), -1)
    cv2.circle(img, top, 8, (255, 50, 0), -1)
    cv2.circle(img, bottom, 8, (255, 255, 0), -1)
    print('Left point: {}'.format(left))
    print('Right point: {}'.format(right))
    print('Top point: {}'.format(top))
    print('Bottom point: {}'.format(bottom))
    # Output
    cv2.imwrite('outlined_boundered_image.jpg', img)
    return img
It seems what you want is a straight-line interpolation of the boundary and the lengths of the line segments. One way of getting one is to use splines of degree one. If you're more interested in the actual length of the boundary, it would be more appropriate to use degree-3 interpolation and an integral; there is a sketch of that just below.
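For reference, the degree-3 variant could look roughly like this (a sketch, reusing the x, idx, y arrays that are prepared further below):

from scipy.integrate import quad

# Degree-3 spline through the same boundary samples
f3 = UnivariateSpline(x[idx], y, k=3, s=10**4)
df3 = f3.derivative()
# Arc length of the graph y = f(x): integrate sqrt(1 + f'(x)^2) over x
arc_length, _ = quad(lambda t: np.sqrt(1 + df3(t)**2), x[idx].min(), x[idx].max())
print(arc_length)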
First I used blur and a sobel filter to obtain a clear picture of the boundary.
import cv2
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import UnivariateSpline
picture = cv2.imread('area.png', cv2.IMREAD_GRAYSCALE)
picture2 = cv2.GaussianBlur(picture, (3, 3), 0, 0, cv2.BORDER_DEFAULT)
# Combine horizontal and vertical Sobel gradients
picture3 = np.abs(cv2.Sobel(picture2, cv2.CV_16S, 1, 0)) + np.abs(cv2.Sobel(picture2, cv2.CV_16S, 0, 1))
picture4 = (picture3 > 0.3 * picture3.max()).T
plt.imshow(picture4.T, cmap='gray')
I can't give the image to UnivariateSpline to interpolate yet since there are multiple y pixels for each x. I solved that by taking the average of the y values for each x as the value of the function at that point.
x, y = np.where(picture4)
idx = np.unique(x, return_index=True)[1]
y = np.array(tuple(map(np.mean, np.split(y, idx)[1:])))
f = UnivariateSpline(x[idx], y, k=1, s=10**4)
knots = f.get_knots()
print(len(knots))
x2 = np.linspace(0, picture.shape[1])
plt.plot(x2, -f(x2))
And it gives me a straight-line interpolation of the original boundary with 33 line segments. If you think 33 lines is too many, you can just increase s; then it uses fewer lines.
And finally I calculate the length of the line segments
dx = knots[1:] - knots[:-1]
dy = f(knots[1:]) - f(knots[:-1])
lengths = (dx**2 + dy**2)**(1/2)
lengths
They are
array([ 48.00108553, 24.00049131, 48.59189432, 48.00784009,
24.00411983, 12.0353518 , 7.45847594, 76.56726501,
50.2062363 , 62.97882834, 55.66753909, 59.85823117,
192.18816002, 24.0756812 , 12.0380576 , 6.30271185,
68.6638399 , 8.91837947, 4.92632077, 11.31860656,
68.12574347, 18.55007016, 24.08123637, 48.59346072,
12.23820314, 18.36509318, 93.8749539 , 8.61755003,
15.5131074 , 43.85394011, 56.05155661, 9.3525071 ,
11.54485654])
Edit: I thought it would be interesting to see a few pictures with different numbers of lines. If you're not happy with the results, please try to describe what's wrong.

How to get coordinates of the *result* of perspective transform?

Tl;dr Given an image, patch_size and warp_mat, I would like to compute the coordinates of the patch "extracted" from the image, after applying cv2.warpPerspective.
In more details:
If my goal was to crop a specific area from the image, and squeeze it into a different-sized patch, then I know how to do it using getPerspectiveTransform:
import matplotlib.pyplot as plt
import cv2
import numpy as np
smiley = cv2.imread('icon.png', cv2.IMREAD_GRAYSCALE)
print(smiley.shape)
# (255, 255)
plt.imshow(smiley, cmap='gray');
left, upper, right, lower = 120, 50, 190, 100
plt.plot([left, right, right, left, left], [upper, upper, lower, lower, upper]);
w, h = 40, 50
src = np.array([[left, upper], [right, upper], [right, lower], [left, lower]], dtype=np.float32)
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
warp_mat = cv2.getPerspectiveTransform(src, dst)
print(warp_mat)
# [[ 0.571 0. -68.571]
# [ 0. 1. -50. ]
# [ -0. -0. 1. ]]
warp_patch = cv2.warpPerspective(smiley, warp_mat, (w, h))
plt.imshow(warp_patch, cmap='gray');
Finally, my question is:
My inputs are: image, warp_mat, w, h.
And my goal is to discover left, upper, right, lower.
How can I do that?
Following Micka's answer here, "you typically just invert the transformation matrix".
With respect to your input image's size, you reconstruct your patch on an empty image, such that you can then simply use cv2.boundingRect to reconstruct the original region of interest.
I prepared some code incorporating parts of your code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
smiley = cv2.imread('path/to/your/image.png', cv2.IMREAD_GRAYSCALE)
# Region of interest
left, upper, right, lower = 120, 50, 190, 100
print(left, upper, right, lower)
# Width and height
w, h = 40, 50
plt.figure(figsize=(16, 9))
plt.subplot(1, 3, 1), plt.imshow(smiley, cmap='gray')
plt.plot([left, right, right, left, left], [upper, upper, lower, lower, upper])
# Source and destination points
src = np.array([[left, upper], [right, upper], [right, lower], [left, lower]], dtype=np.float32)
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
# Perspective transform
warp_mat = cv2.getPerspectiveTransform(src, dst)
warp_patch = cv2.warpPerspective(smiley, warp_mat, (w, h))
plt.subplot(1, 3, 2), plt.imshow(warp_patch, cmap='gray')
# Inverse perspective transform
inv_warp_mat = np.linalg.pinv(warp_mat)
reverse = cv2.warpPerspective(warp_patch, inv_warp_mat, smiley.shape)  # dsize expects (width, height); this works here because the image is square
plt.subplot(1, 3, 3), plt.imshow(reverse, cmap='gray')
# Reconstruct region of interest
l, t, w, h = cv2.boundingRect(reverse)
left_rev = l
upper_rev = t
right_rev = l + w
lower_rev = t + h
print(left_rev, upper_rev, right_rev, lower_rev)
plt.tight_layout()
plt.show()
The print output looks like this:
120 50 190 100
119 50 190 100
And, the plot output looks like this:
Unfortunately, the reconstructed result is not perfect, but maybe you can investigate that and find further improvements.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
Matplotlib: 3.3.4
NumPy: 1.20.1
OpenCV: 4.5.1
----------------------------------------
The short answer:
inv_warp_mat = np.linalg.pinv(warp_mat)
left, upper, _ = inv_warp_mat @ np.array([0, 0, 1])
right, lower, _ = inv_warp_mat @ np.array([w, h, 1])
The longer answer:
Following @HansHirse's answer to this question, I understood that indeed "I only needed to multiply by the inverse"; more verbosely, multiplying the corners of the patch by the inverse warping matrix gives the corners of the patch in the original photo's coordinates, which is exactly what I was looking for.
BTW, the inaccuracies at the end of the calculation in @HansHirse's answer were due to the cv2.boundingRect operation.
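One more caveat: strictly speaking, the product inv_warp_mat @ [x, y, 1] is a homogeneous coordinate, so for a general homography you should divide by its third component before reading off x and y. It only happens to equal 1 here because the bottom row of warp_mat (and hence of its inverse) is [0, 0, 1], i.e. the transform is affine:

x, y, s = inv_warp_mat @ np.array([0, 0, 1])
left, upper = x / s, y / s  # s == 1 in this affine case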
The full correct calculation:
import matplotlib.pyplot as plt
import cv2
import numpy as np
smiley = cv2.imread('icon.png', cv2.IMREAD_GRAYSCALE)
print(smiley.shape)
# (255, 255)
plt.imshow(smiley, cmap='gray');
left, upper, right, lower = 120, 50, 190, 100
print(left, upper, right, lower)
w, h = 40, 50
src = np.array([[left, upper], [right, upper], [right, lower], [left, lower]], dtype=np.float32)
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
warp_mat = cv2.getPerspectiveTransform(src, dst)
Now assuming we only have the warp_mat and the patch size:
inv_warp_mat = np.linalg.pinv(warp_mat)
left_inv, upper_inv, _ = inv_warp_mat @ np.array([0, 0, 1])
right_inv, lower_inv, _ = inv_warp_mat @ np.array([w, h, 1])
print(left_inv, upper_inv, right_inv, lower_inv)
result:
120 50 190 100
120.00000000000011 50.000000000000014 190.00000000000037 100.00000000000007
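More generally, you can let OpenCV map any number of patch corners back in one call; a small sketch assuming the warp_mat, w, h from above (cv2.perspectiveTransform performs the homogeneous division internally):

corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
back = cv2.perspectiveTransform(corners, np.linalg.pinv(warp_mat))
print(back.reshape(-1, 2))  # the 4 patch corners in the original image's coordinates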

How to find and draw largest rectangle inside contour area?

I want to compare a specific region of an image using
(score, diff) = compare_ssim(grayA[y:y+h, x:x+w], grayB[y:y+h, x:x+w], full=True)
But that function only supports a rectangular ROI, and my ROI is a contour.
To compare them, I need the largest rectangle inside the contour. How do I find the largest rectangle inside the contour area?
Sample image
Based on your OP, I suggest using warpAffine to rotate the ROI to a rectangular shape, because the ROI is already rectangle-shaped, just rotated. Here is a simple sample:
import cv2
import numpy as np
img = cv2.imread("1.png")
(H,W,c) = img.shape
print("shape = {},{}".format(H,W))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_,thresh = cv2.threshold(gray,128,255,cv2.THRESH_BINARY_INV)
_,contours,_ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
res = np.zeros_like(img)
c = np.squeeze(contours[0])
# find rectangle's conner points
x = sorted(c, key=lambda a:a[0])
left = x[0]
right = x[-1]
y= sorted(c, key=lambda a:a[1])
top = y[0]
bottom = y[-1]
cv2.circle(img, (left[0],left[1]), 4, (0, 0, 255), -1)
cv2.circle(img, (right[0],right[1]), 4, (0, 0, 255), -1)
cv2.circle(img, (top[0],top[1]), 4, (0, 0, 255), -1)
cv2.circle(img, (bottom[0],bottom[1]), 4, (0, 0, 255), -1)
# calculate rectangle's shape
roi_w = int(np.sqrt((top[0]-right[0])*(top[0]-right[0])+(top[1]-right[1])*(top[1]-right[1])))
roi_h = int(np.sqrt((top[0]-left[0])*(top[0]-left[0])+(top[1]-left[1])*(top[1]-left[1])))
pts1 = np.float32([top, right, left])
# keep the top coords and calculate new coords for left and right
new_top = top
new_right = [top[0] + roi_w, top[1]]
new_left = [top[0], top[1] + roi_h]
pts2 = np.float32([new_top, new_right, new_left])
# rotate
matrix = cv2.getAffineTransform(pts1, pts2)
result = cv2.warpAffine(img, matrix, (W,H))
cv2.drawContours(res, [contours[0]], 0, (0,255,0), 3)
# extract roi
roi = result[new_top[1]:new_left[1],new_top[0]:new_right[0]]
cv2.imshow("img",img)
cv2.imshow("result",result)
cv2.waitKey(0)
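As an alternative to sorting the contour points for corners, cv2.minAreaRect returns the rotated rectangle directly; a hedged sketch using the squeezed contour c from above:

rect = cv2.minAreaRect(c)             # ((center_x, center_y), (width, height), angle)
box = np.int32(cv2.boxPoints(rect))   # the 4 corner points of the rotated rectangle
cv2.drawContours(img, [box], 0, (0, 0, 255), 2)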

How to fill a polygon in OpenCV?

The Python code given below draws a triangle, how can I fill it with a color inside? Or another easier way to draw a triangle in OpenCV?
pts = np.array([[100,350],[165,350],[165,240]], np.int32)
cv2.polylines(img,[pts],True,(0,255,255),2)
You have to use cv2.fillPoly().
Illustration for a single-channel (grayscale) image
Change the second line to:
cv2.fillPoly(img, [pts], 255)
Code:
import cv2
import numpy as np

img = np.zeros([400, 400], dtype=np.uint8)
pts = np.array([[100, 350], [165, 350], [165, 240]], np.int32)
cv2.fillPoly(img, [pts], 255)
cv2.imshow('Original', img)
cv2.waitKey(0)
Result:
Illustration for a 3-channel color image
img = cv2.imread('image_path')
pts = np.array([[170,50],[240, 40],[240, 150], [210, 100], [130, 130]], np.int32)
cv2.fillPoly(img, [pts], (255,150,255))
Result:
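As a side note, for convex polygons like the triangle in the question, cv2.fillConvexPoly also works and is documented to be faster than cv2.fillPoly (it is not suitable for self-intersecting polygons, though):

cv2.fillConvexPoly(img, pts, 255)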

How to find the distance between two concentric contours, for different angles?

I have an image with two contours, where one contour is always 'inside' another. I want to find the distance between the two contours for 90 different angles (meaning, distance at every 4 degrees). How do I go about doing it?
Here's an example image:
Thank you!
Take this image of two sets of two shapes:
We want to find the distance between the edges of each set of shapes, including where the edges overlap.
First things first, we import the necessary modules:
import cv2
import numpy as np
To do that, we will first need to retrieve every shape in the image as lists of contours. In the above particular example, there are 4 shapes that need to be detected. To retrieve each shape, we will need to use a mask to mask out every color besides the color of the shape of interest:
def get_masked(img, lower, upper):
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(img_hsv, np.array(lower), np.array(upper))
    img_mask = cv2.bitwise_and(img, img, mask=mask)
    return img_mask
The lower and upper parameters will determine the minimum and maximum HSV values that will not be masked out of the image. Given the right lower and upper parameters, you will be able to extract one image with only the green shapes, and one image with only the blue shapes:
With the masked images, you can then proceed to process them into more clean contours. Here is the preprocess function, with values that can be tweaked whenever necessary:
def get_processed(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (7, 7), 7)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((7, 7))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    img_erode = cv2.erode(img_dilate, kernel, iterations=2)
    return img_erode
Passing in the masked images will give you
With the images masked and processed, they will be ready for opencv to detect their contours:
def get_contours(img):
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [cnt for cnt in contours if cv2.contourArea(cnt) > 500]
The list comprehension at the return statement is there to filter out noise by specifying that every contour must have an area that is greater than 500.
Now, we will define some basic functions that we will later use:
def get_centeroid(cnt):
    length = len(cnt)
    sum_x = np.sum(cnt[..., 0])
    sum_y = np.sum(cnt[..., 1])
    return int(sum_x / length), int(sum_y / length)

def get_pt_at_angle(pts, pt, ang):
    angles = np.rad2deg(np.arctan2(*(pt - pts).T))
    angles = np.where(angles < -90, angles + 450, angles + 90)
    found = np.rint(angles) == ang
    if np.any(found):
        return pts[found][0]
The names of the functions are pretty self-explanatory; the first one returns the center point of a contour, and the second one returns a point in a given array of points, pts, that is at a given angle, ang, relative to a given point, pt. The np.where in the get_pt_at_angle function is there to shift the starting angle, 0, to the positive x axis, as it by default will be at the positive y axis.
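Incidentally, the centroid could equivalently be computed from image moments with cv2.moments, which is the approach the Shapely-based answers further below use; a small sketch of that alternative (get_centroid_moments is a hypothetical name):

def get_centroid_moments(cnt):
    # Area centroid via cv2.moments (close to, but not identical to, the mean of the points)
    M = cv2.moments(cnt)
    return int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])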
Time to define the function that will return the distances. First, define it so that these five parameters can be passed in:
def get_distances(img, cnt1, cnt2, center, step):
A brief explanation on each parameter:
img, the image array
cnt1, the first shape
cnt2, the second shape
center, the origin for the distance calculations
step, the number of degrees to be jumped per value
Define a dictionary to store the distances, with the angles as key and the distances as values:
    angles = dict()
Loop through each angle at which you want the distance between the edges of the two shapes, and find the coordinate on each of the two contours that lies at the current angle of the iteration, angle, relative to the origin point, center, using the get_pt_at_angle function we defined earlier:
    for angle in range(0, 360, step):
        pt1 = get_pt_at_angle(cnt1, center, angle)
        pt2 = get_pt_at_angle(cnt2, center, angle)
Check if a point exists in both contours that is at the specific angle relative to the origin:
        if np.any(pt1) and np.any(pt2):
You can use the np.linalg.norm method to get the distance between the two points. I also made it draw the text and connecting lines for visualization. Don't forget to add the angle and distance to the angles dictionary. At the end of the function, return the image that has the text and lines drawn on it:
            d = round(np.linalg.norm(pt1 - pt2))
            cv2.putText(img, str(d), tuple(pt1), cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 0))
            cv2.drawContours(img, np.array([[center, pt1]]), -1, (255, 0, 255), 1)
            angles[angle] = d
    return img, angles
Finally, you can utilize the function defined on an image:
img = cv2.imread("shapes1.png")
img_green = get_masked(img, [10, 0, 0], [70, 255, 255])
img_blue = get_masked(img, [70, 0, 0], [179, 255, 255])
img_green_processed = get_processed(img_green)
img_blue_processed = get_processed(img_blue)
img_green_contours = get_contours(img_green_processed)
img_blue_contours = get_contours(img_blue_processed)
Using the image of four shapes, you can tell that img_green_contours and img_blue_contours will each contain two contours. But you might be wondering: how did I choose the minimum and maximum HSV values? Well, I used a trackbar script. You can run the code below, adjusting the HSV values using the trackbars, until you find a range where everything in the image is masked out (in black) except for the shape you want to retrieve:
import cv2
import numpy as np

def empty(a):
    pass

cv2.namedWindow("TrackBars")
cv2.createTrackbar("Hue Min", "TrackBars", 0, 179, empty)
cv2.createTrackbar("Hue Max", "TrackBars", 179, 179, empty)
cv2.createTrackbar("Sat Min", "TrackBars", 0, 255, empty)
cv2.createTrackbar("Sat Max", "TrackBars", 255, 255, empty)
cv2.createTrackbar("Val Min", "TrackBars", 0, 255, empty)
cv2.createTrackbar("Val Max", "TrackBars", 255, 255, empty)

img = cv2.imread("shapes0.png")

while True:
    h_min = cv2.getTrackbarPos("Hue Min", "TrackBars")
    h_max = cv2.getTrackbarPos("Hue Max", "TrackBars")
    s_min = cv2.getTrackbarPos("Sat Min", "TrackBars")
    s_max = cv2.getTrackbarPos("Sat Max", "TrackBars")
    v_min = cv2.getTrackbarPos("Val Min", "TrackBars")
    v_max = cv2.getTrackbarPos("Val Max", "TrackBars")
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(img_hsv, lower, upper)
    img_masked = cv2.bitwise_and(img, img, mask=mask)
    cv2.imshow("Image", img_masked)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # if you press the q key
        break
With the values I chose, I got:
Loop through the blue shape contours and green shape contours in parallel, and depending on which color shape you want the origin to be at the center of, you can pass that color contour into the get_centeroid function we defined earlier:
for cnt_blue, cnt_green in zip(img_blue_contours, img_green_contours[::-1]):
    center = get_centeroid(cnt_blue)
    img, angles = get_distances(img, cnt_green.squeeze(), cnt_blue.squeeze(), center, 30)
    print(angles)
Notice that I used 30 as the step; that number can be changed to 4, I used 30 so the visualization would be more clear.
Finally, we can display the image:
cv2.imshow("Image", img)
cv2.waitKey(0)
Altogether:
import cv2
import numpy as np

def get_masked(img, lower, upper):
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(img_hsv, np.array(lower), np.array(upper))
    img_mask = cv2.bitwise_and(img, img, mask=mask)
    return img_mask

def get_processed(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray, (7, 7), 7)
    img_canny = cv2.Canny(img_blur, 50, 50)
    kernel = np.ones((7, 7))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    img_erode = cv2.erode(img_dilate, kernel, iterations=2)
    return img_erode

def get_contours(img):
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [cnt for cnt in contours if cv2.contourArea(cnt) > 500]

def get_centeroid(cnt):
    length = len(cnt)
    sum_x = np.sum(cnt[..., 0])
    sum_y = np.sum(cnt[..., 1])
    return int(sum_x / length), int(sum_y / length)

def get_pt_at_angle(pts, pt, ang):
    angles = np.rad2deg(np.arctan2(*(pt - pts).T))
    angles = np.where(angles < -90, angles + 450, angles + 90)
    found = np.rint(angles) == ang
    if np.any(found):
        return pts[found][0]

def get_distances(img, cnt1, cnt2, center, step):
    angles = dict()
    for angle in range(0, 360, step):
        pt1 = get_pt_at_angle(cnt1, center, angle)
        pt2 = get_pt_at_angle(cnt2, center, angle)
        if np.any(pt1) and np.any(pt2):
            d = round(np.linalg.norm(pt1 - pt2))
            cv2.putText(img, str(d), tuple(pt1), cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 0))
            cv2.drawContours(img, np.array([[center, pt1]]), -1, (255, 0, 255), 1)
            angles[angle] = d
    return img, angles

img = cv2.imread("shapes1.png")

img_green = get_masked(img, [10, 0, 0], [70, 255, 255])
img_blue = get_masked(img, [70, 0, 0], [179, 255, 255])

img_green_processed = get_processed(img_green)
img_blue_processed = get_processed(img_blue)

img_green_contours = get_contours(img_green_processed)
img_blue_contours = get_contours(img_blue_processed)

for cnt_blue, cnt_green in zip(img_blue_contours, img_green_contours[::-1]):
    center = get_centeroid(cnt_blue)
    img, angles = get_distances(img, cnt_green.squeeze(), cnt_blue.squeeze(), center, 30)
    print(angles)

cv2.imshow("Image", img)
cv2.waitKey(0)
Output:
{0: 5, 30: 4, 60: 29, 90: 25, 120: 31, 150: 8, 180: 5, 210: 7, 240: 14, 270: 12, 300: 14, 330: 21}
{0: 10, 30: 9, 60: 6, 90: 0, 120: 11, 150: 7, 180: 5, 210: 6, 240: 6, 270: 4, 300: 0, 330: 16}
Note: For certain shapes, some angles might be absent from the dictionary. That is caused by the get_processed function; you would get more accurate results if you turned down some of the values, like the blur sigma.
In the following code, I have only given the example for a vertical line; the rest can be obtained by rotating the line. The result looks like this; instead of drawing, you can use the coordinates for the distance calculation.
import shapely.geometry as shapgeo
import numpy as np
import cv2
img = cv2.imread('image.jpg', 0)
ret, img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Fit the ellipses
_, contours0, hierarchy = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 3.x signature
outer_ellipse = [cv2.approxPolyDP(contours0[0], 0.1, True)]
inner_ellipse = [cv2.approxPolyDP(contours0[2], 0.1, True)]

h, w = img.shape[:2]
vis = np.zeros((h, w, 3), np.uint8)
cv2.drawContours(vis, outer_ellipse, -1, (255, 0, 0), 1)
cv2.drawContours(vis, inner_ellipse, -1, (0, 0, 255), 1)

# Extract contour of ellipses
cnt_outer = np.vstack(outer_ellipse).squeeze()
cnt_inner = np.vstack(inner_ellipse).squeeze()

# Determine centroid
M = cv2.moments(cnt_inner)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
print(cx, cy)

# Draw full segment lines
cv2.line(vis, (cx, 0), (cx, w), (150, 0, 0), 1)

# Calculate intersections using Shapely
# http://toblerity.org/shapely/manual.html
PolygonEllipse_outer = shapgeo.asLineString(cnt_outer)
PolygonEllipse_inner = shapgeo.asLineString(cnt_inner)
PolygonVerticalLine = shapgeo.LineString([(cx, 0), (cx, w)])

insecouter = np.array(PolygonEllipse_outer.intersection(PolygonVerticalLine)).astype(int)
insecinner = np.array(PolygonEllipse_inner.intersection(PolygonVerticalLine)).astype(int)
cv2.line(vis, (insecouter[0, 0], insecinner[1, 1]), (insecouter[1, 0], insecouter[1, 1]), (0, 255, 0), 2)
cv2.line(vis, (insecouter[0, 0], insecinner[0, 1]), (insecouter[1, 0], insecouter[0, 1]), (0, 255, 0), 2)

cv2.imshow('contours', vis)
cv2.waitKey(0)
cv2.destroyAllWindows()
I borrowed the general idea using Shapely and the basic code from tfv's answer. Nevertheless, iterating the desired angles, calculating the needed end points for the correct lines to be intersected with the shapes, calculating and storing the distances, etc. were missing, so I added all that.
That'd be my full code:
import cv2
import numpy as np
import shapely.geometry as shapgeo
# Read image, and binarize
img = cv2.imread('G48xu.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)[1]
# Find (approximated) contours of inner and outer shape
cnts, hier = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
outer = [cv2.approxPolyDP(cnts[0], 0.1, True)]
inner = [cv2.approxPolyDP(cnts[2], 0.1, True)]
# Just for visualization purposes: Draw contours of inner and outer shape
h, w = img.shape[:2]
vis = np.zeros((h, w, 3), np.uint8)
cv2.drawContours(vis, outer, -1, (255, 0, 0), 1)
cv2.drawContours(vis, inner, -1, (0, 0, 255), 1)
# Squeeze contours for further processing
outer = np.vstack(outer).squeeze()
inner = np.vstack(inner).squeeze()
# Calculate centroid of inner contour
M = cv2.moments(inner)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
# Calculate maximum needed radius for later line intersections
r_max = np.min([cx, w - cx, cy, h - cy])
# Set up angles (in degrees)
angles = np.arange(0, 360, 4)
# Initialize distances
dists = np.zeros_like(angles)
# Prepare calculating the intersections using Shapely
poly_outer = shapgeo.asLineString(outer)
poly_inner = shapgeo.asLineString(inner)
# Iterate angles and calculate distances between inner and outer shape
for i, angle in enumerate(angles):

    # Convert angle from degrees to radians
    angle = angle / 180 * np.pi

    # Calculate end points of line from centroid in angle's direction
    x = np.cos(angle) * r_max + cx
    y = np.sin(angle) * r_max + cy
    points = [(cx, cy), (x, y)]

    # Calculate intersections using Shapely
    poly_line = shapgeo.LineString(points)
    insec_outer = np.array(poly_outer.intersection(poly_line))
    insec_inner = np.array(poly_inner.intersection(poly_line))

    # Calculate distance between intersections using L2 norm
    dists[i] = np.linalg.norm(insec_outer - insec_inner)

    # Just for visualization purposes: Draw lines for some examples
    if (i == 10) or (i == 40) or (i == 75):

        # Line from centroid to end points
        cv2.line(vis, (cx, cy), (int(x), int(y)), (128, 128, 128), 1)

        # Line between both shapes
        cv2.line(vis,
                 (int(insec_inner[0]), int(insec_inner[1])),
                 (int(insec_outer[0]), int(insec_outer[1])), (0, 255, 0), 2)

        # Distance
        cv2.putText(vis, str(dists[i]), (int(x), int(y)),
                    cv2.FONT_HERSHEY_COMPLEX, 0.75, (0, 255, 0), 2)
# Output angles and distances
print(np.vstack([angles, dists]).T)
# Just for visualization purposes: Output image
cv2.imshow('Output', vis)
cv2.waitKey(0)
cv2.destroyAllWindows()
I generated some exemplary output for visualization purposes:
And, here's an excerpt from the output, showing angle and the corresponding distance:
[[ 0 70]
[ 4 71]
[ 8 73]
[ 12 76]
[ 16 77]
...
[340 56]
[344 59]
[348 62]
[352 65]
[356 67]]
Hopefully, the code is self-explanatory. If not, please don't hesitate to ask questions. I'll gladly provide further information.
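If you want a quick visual check of the distance profile, the angle/distance pairs plot naturally on polar axes; a short sketch assuming matplotlib is installed:

import matplotlib.pyplot as plt

# angles (in degrees) and dists as computed above
plt.polar(np.deg2rad(angles), dists)
plt.show()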
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.2
OpenCV: 4.5.1
Shapely: 1.7.1
----------------------------------------
