TL;DR: Given an image, patch_size and warp_mat, I would like to compute the coordinates of the patch "extracted" from the image after applying cv2.warpPerspective.
In more detail:
If my goal was to crop a specific area from the image, and squeeze it into a different-sized patch, then I know how to do it using getPerspectiveTransform:
import matplotlib.pyplot as plt
import cv2
import numpy as np
smiley = cv2.imread('icon.png', cv2.IMREAD_GRAYSCALE)
print(smiley.shape)
# (255, 255)
plt.imshow(smiley, cmap='gray');
left, upper, right, lower = 120, 50, 190, 100
plt.plot([left, right, right, left, left], [upper, upper, lower, lower, upper]);
w, h = 40, 50
src = np.array([[left, upper], [right, upper], [right, lower], [left, lower]], dtype=np.float32)
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
warp_mat = cv2.getPerspectiveTransform(src, dst)
print(warp_mat)
# [[ 0.571 0. -68.571]
# [ 0. 1. -50. ]
# [ -0. -0. 1. ]]
warp_patch = cv2.warpPerspective(smiley, warp_mat, (w, h))
plt.imshow(warp_patch, cmap='gray');
Finally, my question is:
My inputs are: image, warp_mat, w, h.
And my goal is to discover left, upper, right, lower.
How can I do that?
Following Micka's answer here, "you typically just invert the transformation matrix".
With respect to your input image's size, you reconstruct your patch on an empty image, such that you can then simply use cv2.boundingRect to reconstruct the original region of interest.
I prepared some code incorporating parts of your code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
smiley = cv2.imread('path/to/your/image.png', cv2.IMREAD_GRAYSCALE)
# Region of interest
left, upper, right, lower = 120, 50, 190, 100
print(left, upper, right, lower)
# Width and height
w, h = 40, 50
plt.figure(figsize=(16, 9))
plt.subplot(1, 3, 1), plt.imshow(smiley, cmap='gray')
plt.plot([left, right, right, left, left], [upper, upper, lower, lower, upper])
# Source and destination points
src = np.array([[left, upper], [right, upper], [right, lower], [left, lower]], dtype=np.float32)
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
# Perspective transform
warp_mat = cv2.getPerspectiveTransform(src, dst)
warp_patch = cv2.warpPerspective(smiley, warp_mat, (w, h))
plt.subplot(1, 3, 2), plt.imshow(warp_patch, cmap='gray')
# Inverse perspective transform
inv_warp_mat = np.linalg.pinv(warp_mat)
reverse = cv2.warpPerspective(warp_patch, inv_warp_mat, smiley.shape[::-1])  # dsize expects (width, height)
plt.subplot(1, 3, 3), plt.imshow(reverse, cmap='gray')
# Reconstruct region of interest
l, t, w, h = cv2.boundingRect(reverse)
left_rev = l
upper_rev = t
right_rev = l + w
lower_rev = t + h
print(left_rev, upper_rev, right_rev, lower_rev)
plt.tight_layout()
plt.show()
The print output looks like this:
120 50 190 100
119 50 190 100
And, the plot output looks like this:
Unfortunately, the reconstructed result is not perfect, but maybe you can investigate that and find further improvements.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
Matplotlib: 3.3.4
NumPy: 1.20.1
OpenCV: 4.5.1
----------------------------------------
The short answer:
inv_warp_mat = np.linalg.pinv(warp_mat)
left, upper, _ = inv_warp_mat @ np.array([0, 0, 1])
right, lower, _ = inv_warp_mat @ np.array([w, h, 1])
The longer answer:
Following @HansHirse's answer to this question, I understood that indeed "I only needed to multiply by the inverse". More verbosely: multiplying the corners of the patch by the inverse warp matrix gives the corners of the patch in the original image's coordinates, which is exactly what I was looking for.
By the way, the inaccuracies at the end of the calculation in @HansHirse's answer were due to the cv2.boundingRect operation, which works on the integer pixel positions of the interpolated, re-warped patch rather than on the exact corner coordinates.
The full correct calculation:
import matplotlib.pyplot as plt
import cv2
import numpy as np
smiley = cv2.imread('icon.png', cv2.IMREAD_GRAYSCALE)
print(smiley.shape)
# (255, 255)
plt.imshow(smiley, cmap='gray');
left, upper, right, lower = 120, 50, 190, 100
print(left, upper, right, lower)
w, h = 40, 50
src = np.array([[left, upper], [right, upper], [right, lower], [left, lower]], dtype=np.float32)
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
warp_mat = cv2.getPerspectiveTransform(src, dst)
Now assuming we only have the warp_mat and the patch size:
inv_warp_mat = np.linalg.pinv(warp_mat)
left_inv, upper_inv, _ = inv_warp_mat @ np.array([0, 0, 1])
right_inv, lower_inv, _ = inv_warp_mat @ np.array([w, h, 1])
print(left_inv, upper_inv, right_inv, lower_inv)
result:
120 50 190 100
120.00000000000011 50.000000000000014 190.00000000000037 100.00000000000007
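One caveat: the products above give the corner coordinates directly only because this particular warp_mat is affine (its last row is [0, 0, 1]), so the homogeneous component stays 1. For a general homography you have to divide by the third (homogeneous) component; cv2.perspectiveTransform handles that for you. A minimal sketch, reusing inv_warp_mat, w and h from above:
# cv2.perspectiveTransform applies the matrix and the homogeneous divide,
# so this also works when warp_mat is a full perspective transform.
corners_patch = np.float32([[[0, 0], [w, h]]])  # top-left and bottom-right of the patch
corners_image = cv2.perspectiveTransform(corners_patch, inv_warp_mat)
(left_inv, upper_inv), (right_inv, lower_inv) = corners_image[0]
print(left_inv, upper_inv, right_inv, lower_inv)
# approximately 120.0 50.0 190.0 100.0 for the warp_mat above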
Related
I'm currently trying to implement a paper on reading pressure gauges. There is a step where I don't know what to do. It says: "The display contour is then rotated to vertically align the long axis of the ellipse and inscribed in a rectangle used to crop the gauge image." So I'm not sure how I can rotate this image vertically; for better understanding I will show an example of what the current image is and what it needs to be.
I'm currently at step C and need to get the image into the position of step D. The text detection in the example is currently not important.
At the moment I have the correct contours and ellipse for the display.
(cnts, boundingBoxes) = sort_contours(cnts)

# find correct contours and fitEllipse
if len(cnts) != 0:
    for i in range(len(cnts)):
        if len(cnts[i]) >= 5:  # if contour has more than 5 points
            # cv2.drawContours(image, cnts[0], -1, (150, 10, 255), 3)
            ellipse = cv2.fitEllipse(cnts[i])
            finalElps.append(ellipse)  # (centx, centy), (width, height), angle

for i in range(len(finalElps)):
    centx = finalElps[i][0][0]
    centy = finalElps[i][0][1]
    eWidth = finalElps[i][1][0]
    eHeight = finalElps[i][1][1]
    sfRes = Sf(eWidth, eHeight)
    cfRes = Cf(centx, imgCenterX, centy, imgCenterY)
    afRes = Af(eWidth, eHeight, imgWidth, imgHeight)
    print("SF: " + str(sfRes) + "| " + "CF: " + str(cfRes) + "| Af: " + str(afRes))
    if (sfRes < 0.4 and cfRes < 6 and afRes < 0.9):
        print(finalElps[i])
        cv2.ellipse(image, finalElps[i], (255, 0, 0), 2)

plt.imshow(image)
sfRes, cfRes and afRes are just calculations to find the right ellipse.
What should be my next step to reach the vertical rotation? I think the correct name for it is "image rectification", but I'm not 100% sure about it.
Here is how to get the image D from the image C:
Find the binary mask
Find an ellipse (center, width, height, angle)
Find 4 points on opposite sides of the ellipse
Use these 4 points to get a perspective transform that can be used to warp the rotated gauge image to a rectangular image. The final result:
Code:
import cv2
import numpy as np
image = cv2.imread("gauge.png")
# find the parameters of the ellipse
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)[1]
points = np.stack(np.nonzero(mask.T)).T
hull = cv2.convexHull(points)
(cx, cy), (width, height), angle = cv2.fitEllipse(hull)
# for visualization
# cv2.ellipse(image, (int(cx), int(cy)), (int(width/2), int(height/2)), angle, 0, 360, (0, 0, 255), 2)
# find the points on the opposite sides of the ellipse
# define rectangular homogeneous coordinates and rotate them using a rotation matrix
mat = cv2.getRotationMatrix2D((cx, cy), -angle, 1)
mat = np.vstack((mat, [0, 0, 1]))
coords = np.array(
    [
        [cx + width // 2, cy, 1],
        [cx - width // 2, cy, 1],
        [cx, cy + height // 2, 1],
        [cx, cy - height // 2, 1],
    ]
)
points = (mat @ coords.T)[:2].T  # drop the homogeneous part
# for visualization
# for px, py in points.astype(int)[:2]:
#     cv2.circle(image, (px, py), 10, (0, 0, 255), -1)
# for px, py in points.astype(int)[2:]:
#     cv2.circle(image, (px, py), 10, (255, 0, 0), -1)
# define points on the target image to which the ellipse points should be mapped
size = 300
target = np.float32(
    [
        [size, size // 2],
        [0, size // 2],
        [size // 2, size],
        [size // 2, 0],
    ]
)
mat = cv2.getPerspectiveTransform(points.astype(np.float32), target)
rect_image = cv2.warpPerspective(image, mat, (size, size))
cv2.imwrite("rect_gauge.png", rect_image)
I'm trying to get the coordinates of every end point of every line, but I couldn't come up with a solution. This is what I've currently got, but it's finding the outline of the lines, not the lines themselves:
import cv2
import numpy as np
img = cv2.imread('out copy.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)

for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (0, 255, 0), 5)
lines_edges = cv2.addWeighted(img, 0.8, line_image, 1, 0)
cv2.imshow('out copy.png', lines_edges)
cv2.waitKey(0)
The hit-or-miss transform can be used to find end points of a line after skeletonization.
Code:
import cv2
import numpy as np

img = cv2.imread('image.png')
img2 = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# inverse binary image, to make the lines in white
th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# obtain binary skeleton
sk = cv2.ximgproc.thinning(th, None, 1)
# kernels to find endpoints in all 4 directions
k1 = np.array(([0, 0, 0], [-1, 1, -1], [-1, -1, -1]), dtype="int")
k2 = np.array(([0, -1, -1], [0, 1, -1], [0, -1, -1]), dtype="int")
k3 = np.array(([-1, -1, 0], [-1, 1, 0], [-1, -1, 0]), dtype="int")
k4 = np.array(([-1, -1, -1], [-1, 1, -1], [0, 0, 0]), dtype="int")
# perform hit-miss transform for every kernel
o1 = cv2.morphologyEx(sk, cv2.MORPH_HITMISS, k1)
o2 = cv2.morphologyEx(sk, cv2.MORPH_HITMISS, k2)
o3 = cv2.morphologyEx(sk, cv2.MORPH_HITMISS, k3)
o4 = cv2.morphologyEx(sk, cv2.MORPH_HITMISS, k4)
# add results of all the above 4
out = o1 + o2 + o3 + o4
# find points in white (255) and draw them on original image
pts = np.argwhere(out == 255)
for pt in pts:
    img2 = cv2.circle(img2, (pt[1], pt[0]), 15, (0, 255, 0), -1)
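If the coordinates themselves are what you are after, a small addition (reusing the pts array from above) converts them to (x, y) tuples:
# np.argwhere returns (row, col) pairs, so swap them to get (x, y) image coordinates
endpoints = [(int(col), int(row)) for row, col in pts]
print(endpoints)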
I need to find the four corners of one (or more) paintings inside a picture. I got it working so far but only when the wall in the background is plain and there is only one painting in the picture.
This is the code I use currently:
def create_painting_mask(file_path):
    def order(corners):
        # Order points in the following order: top-left, top-right, bottom-left, bottom-right
        result = np.zeros((4, 2), dtype="float32")
        # Sum coordinate values: top left will have the smallest sum,
        # bottom right will have the biggest sum
        s = corners.sum(axis=1)
        result[0] = corners[np.argmin(s)]
        result[3] = corners[np.argmax(s)]
        # Subtract coordinate values: top right will be the minimum,
        # bottom left will be the maximum
        d = np.diff(corners, axis=1)
        result[1] = corners[np.argmin(d)]
        result[2] = corners[np.argmax(d)]
        return result

    # reading image and shape
    img = cv.imread(file_path)
    h, w, c = img.shape

    # converting to gray
    grayscale = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

    # blurring
    blurred = cv.medianBlur(grayscale, 25)

    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    edges = cv.Canny(blurred, 5, 10)
    kernel_dilate = cv.getStructuringElement(cv.MORPH_RECT, (10, 10))
    dilate = cv.dilate(edges, kernel_dilate)

    contours, hierarchy = cv.findContours(dilate, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
    result = np.zeros((h, w, 3), dtype=np.uint8)

    for i, cnt in enumerate(contours):
        area = cv.contourArea(cnt)
        perimeter = cv.arcLength(cnt, closed=True)
        if area > 1000000:
            border = cv.approxPolyDP(curve=cnt, epsilon=0.05 * perimeter, closed=True)
            cv.drawContours(mask, [border], 0, (255), -1)
            if len(border) == 4:
                border = np.reshape(border, (4, 2))
                arranged_points = order(border)
                new_points = np.float32([[0, 0],
                                         [w, 0],
                                         [0, h],
                                         [w, h]])
                transformation = cv.getPerspectiveTransform(arranged_points, new_points)
                result = cv.warpPerspective(img, transformation, (w, h))

    print_all(img, mask, result)
It needs to perform better, to reduce the errors like the ones below:
good result
bad result (background)
bad result (wires)
bad result (multiple frames)
If anyone can help, for example by tweaking some parameters or suggesting functions I don't know about, please reply! If you want to try and tweak a bit yourself, a copy of my code is available on GitHub, including all the test images I use: https://github.com/rubendebbaudt/painting-corner-detection
THANKS IN ADVANCE!
I am very new to Python, so what I am trying to do is this:
I already made a mask of floor of given image:
https://i.ibb.co/0r17SnT/www.png
What I want to do is to connect all points by geometric straight lines and find the length of each edge, like I drew on this example:
https://i.ibb.co/tbgyYF5/www.png
I am using OpenCV, and so far I have tried to find some extreme points, but that is not what I need, I guess.
def getCalculatedImg():
    # Load image
    img = cv2.imread('img.jpg')
    gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Apply cv2.threshold() to get a binary image
    ret, thresh = cv2.threshold(gray_image, 50, 255, cv2.THRESH_BINARY)

    # Find contours:
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cntrs = contours[0]
    c = max(contours, key=cv2.contourArea)

    # Obtain outer coordinates
    left_coordinates_separated = [(c[c[:, :, 0].argmin()][0])[0], (c[c[:, :, 0].argmin()][0])[1]]
    left = tuple(left_coordinates_separated)
    right_coordinates_separated = [(c[c[:, :, 0].argmax()][0])[0], (c[c[:, :, 0].argmax()][0])[1]]
    right = tuple(right_coordinates_separated)
    top_coordinates_separated = [(c[c[:, :, 1].argmin()][0])[0], (c[c[:, :, 1].argmin()][0])[1]]
    top = tuple(top_coordinates_separated)
    bottom_coordinates_separated = [(c[c[:, :, 1].argmax()][0])[0], (c[c[:, :, 1].argmax()][0])[1]]
    bottom = tuple(bottom_coordinates_separated)

    # Draw contours and dots to image:
    cv2.drawContours(img, [c], -1, (36, 255, 12), 2)
    cv2.circle(img, left, 8, (0, 50, 255), -1)
    cv2.circle(img, right, 8, (0, 255, 255), -1)
    cv2.circle(img, top, 8, (255, 50, 0), -1)
    cv2.circle(img, bottom, 8, (255, 255, 0), -1)
    print('Left point: {}'.format(left))
    print('Right point: {}'.format(right))
    print('Top point: {}'.format(top))
    print('Bottom point: {}'.format(bottom))

    # Output
    cv2.imwrite('outlined_boundered_image.jpg', img)
    return img
It seems what you want is a straight-line interpolation of the boundary and the length of the lines. One way of getting one is to use splines of degree one. If you're more interested in the actual length of the boundary, it would be more appropriate to use degree-3 interpolation and an integral; a rough sketch of that is included at the end of this answer, so let me know if you want more on it.
First I used blur and a sobel filter to obtain a clear picture of the boundary.
import cv2
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import UnivariateSpline
picture = cv2.imread('area.png', cv2.IMREAD_GRAYSCALE)
picture2 = cv2.GaussianBlur(picture, (3, 3), 0, 0, cv2.BORDER_DEFAULT)
picture3 = np.abs(cv2.Sobel(picture2, cv2.CV_16S,0,1))+np.abs(cv2.Sobel(picture2, cv2.CV_16S,0,1))
picture4 = (picture3>0.3*picture3.max()).T
plt.imshow(picture4.T, cmap='gray')
I can't give the image to UnivariateSpline to interpolate yet since there are multiple y pixels for each x. I solved that by taking the average of the y values for each x as the value of the function at that point.
x,y = np.where(picture4)
idx = np.unique(x, return_index = True)[1]
y = np.array(tuple(map(np.mean,np.split(y,idx)[1:])))
f = UnivariateSpline(x[idx],y,k=1,s=10**4)
knots = f.get_knots()
print(len(knots))
x2 = np.linspace(0,picture.shape[1])
plt.plot(x2, -f(x2))
And it gives me a straight-line interpolation of the original boundary with 33 line segments. If you think 33 lines is too much, you can just increase s; then it uses fewer lines.
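As a quick illustration of that knob (a sketch reusing x, idx and y from above; the exact counts depend on your image), sweeping the smoothing factor shows the segment count dropping:
# a larger smoothing factor s gives fewer knots, i.e. fewer line segments
for s in (10**4, 10**5, 10**6):
    f_s = UnivariateSpline(x[idx], y, k=1, s=s)
    print(s, len(f_s.get_knots()) - 1)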
And finally I calculate the length of the line segments
dx = knots[1:]-knots[:-1]
dy = f(knots[1:])-f(knots[:-1])
lengths = (dx**2+dy**2)**(1/2)
lengths
They are
array([ 48.00108553, 24.00049131, 48.59189432, 48.00784009,
24.00411983, 12.0353518 , 7.45847594, 76.56726501,
50.2062363 , 62.97882834, 55.66753909, 59.85823117,
192.18816002, 24.0756812 , 12.0380576 , 6.30271185,
68.6638399 , 8.91837947, 4.92632077, 11.31860656,
68.12574347, 18.55007016, 24.08123637, 48.59346072,
12.23820314, 18.36509318, 93.8749539 , 8.61755003,
15.5131074 , 43.85394011, 56.05155661, 9.3525071 ,
11.54485654])
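If you are after the actual length of the boundary rather than this piecewise-linear approximation, here is the degree-3 variant mentioned at the start, as a sketch reusing x, idx and y from above and assuming the same smoothing factor:
from scipy.integrate import quad

# cubic spline through the same averaged boundary points
f3 = UnivariateSpline(x[idx], y, k=3, s=10**4)
df3 = f3.derivative()

# arc length of y = f3(x): integrate sqrt(1 + f'(x)^2) over the data range
a, b = x[idx].min(), x[idx].max()
arc_length, _ = quad(lambda t: float(np.sqrt(1 + df3(t) ** 2)), a, b, limit=200)
print(arc_length)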
Edit: I thought it would be interesting to see a few pictures with different numbers of lines. If you're not happy with the results, please try to describe what's wrong.
I have two images
image1 (object):
Original image without marked keypoints:
image2 is a white picture (500x500)
In image1 and in image2 I have marked keypoints.
I want to align image1 on image2 by the keypoints. So the goal is that both sets of keypoints overlap, by stretching, scaling and transforming image1 onto image2.
These are my keypoints (CSV file). The coordinates are x and y, for image1 in image1's coordinates and for image2 in image2's coordinates.
object1_x,object1_y,image_x,image_y
0,0,80,137
286,0,409,42
286,198,416,390
174,198,331,384
158,116,291,119
0,97,111,311
How can I do this with OpenCV and Python?
So the result image should look like this (without the red dots; the red dots are only there to demonstrate the keypoints):
The Concept
Extract sets of 3 indices from the first set of keypoints that will form triangles when indexed from both sets of keypoints. With the indices we can get corresponding triangles from both sets of keypoints, allowing us to build the warped image triangle by triangle (see Warp one triangle to another using OpenCV for more details):
image1.png (with added points):
image2.png (with added points):
Result (with added points)
The Code
import cv2
import numpy as np
def triangles(points):
    points = np.where(points, points, 1)
    subdiv = cv2.Subdiv2D((*points.min(0), *points.max(0)))
    subdiv.insert(list(points))
    for pts in subdiv.getTriangleList().reshape(-1, 3, 2):
        yield [np.where(np.all(points == pt, 1))[0][0] for pt in pts]
def crop(img, pts):
    x, y, w, h = cv2.boundingRect(pts)
    img_cropped = img[y: y + h, x: x + w]
    pts[:, 0] -= x
    pts[:, 1] -= y
    return img_cropped, pts
def warp(img1, img2, pts1, pts2):
    for indices in triangles(pts1):
        img1_cropped, triangle1 = crop(img1, pts1[indices])
        img2_cropped, triangle2 = crop(img2, pts2[indices])
        transform = cv2.getAffineTransform(np.float32(triangle1), np.float32(triangle2))
        img2_warped = cv2.warpAffine(img1_cropped, transform, img2_cropped.shape[:2][::-1], None, cv2.INTER_LINEAR, cv2.BORDER_REFLECT_101)
        mask = np.zeros_like(img2_cropped)
        cv2.fillConvexPoly(mask, np.int32(triangle2), (1, 1, 1), 16, 0)
        img2_cropped *= 1 - mask
        img2_cropped += img2_warped * mask
img1 = cv2.imread("image1.png")
img2 = cv2.imread("image2.png")
pts1 = np.array([[0, 0], [286, 0], [286, 198], [174, 198], [158, 116], [0, 97]])
pts2 = np.array([[80, 37], [409, 42], [416, 390], [331, 384], [291, 119], [111, 311]])
warp(img1, img2, pts1, pts2)
for pt in pts2:
    cv2.circle(img2, tuple(pt), 15, (0, 0, 255), -1)
cv2.imshow("Original", img1)
cv2.imshow("Transformed", img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
The Output
The Explanation
Import the necessary libraries:
import cv2
import numpy as np
Define a function, triangles, that will take in an array of coordinates, points, and yield lists of 3 indices of the array for triangles that will cover the area of the original array of coordinates:
def triangles(points):
    points = np.where(points, points, 1)
    subdiv = cv2.Subdiv2D((*points.min(0), *points.max(0)))
    subdiv.insert(list(points))
    for pts in subdiv.getTriangleList().reshape(-1, 3, 2):
        yield [np.where(np.all(points == pt, 1))[0][0] for pt in pts]
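To see what these index triples refer to, here is a small illustrative loop (a sketch; it assumes the pts1 and pts2 arrays defined further down, and the actual triples depend on the Delaunay triangulation cv2.Subdiv2D computes):
for indices in triangles(pts1):
    # the same 3 indices pick matching corners from both keypoint arrays
    print(indices, pts1[indices].tolist(), pts2[indices].tolist())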
Define a function, crop, that will take in an image array, img, and an array of three coordinates, pts. It will return a rectangular segment of the image just large enough to fit the triangle formed by the three points, and return the array of three coordinates shifted to be relative to the top-left corner of that crop:
def crop(img, pts):
    x, y, w, h = cv2.boundingRect(pts)
    img_cropped = img[y: y + h, x: x + w]
    pts[:, 0] -= x
    pts[:, 1] -= y
    return img_cropped, pts
Define a function, warp, that will take in 2 image arrays, img1 and img2, and 2 arrays of coordinates, pts1 and pts2. It will utilize the triangles function defined before to iterate through the triangles from the first array of coordinates, the crop function defined before to crop both images at the coordinates corresponding to the triangle indices, and the cv2.warpAffine() method to warp the image at the current triangle of the iteration:
def warp(img1, img2, pts1, pts2):
    for indices in triangles(pts1):
        img1_cropped, triangle1 = crop(img1, pts1[indices])
        img2_cropped, triangle2 = crop(img2, pts2[indices])
        transform = cv2.getAffineTransform(np.float32(triangle1), np.float32(triangle2))
        img2_warped = cv2.warpAffine(img1_cropped, transform, img2_cropped.shape[:2][::-1], None, cv2.INTER_LINEAR, cv2.BORDER_REFLECT_101)
        mask = np.zeros_like(img2_cropped)
        cv2.fillConvexPoly(mask, np.int32(triangle2), (1, 1, 1), 16, 0)
        img2_cropped *= 1 - mask
        img2_cropped += img2_warped * mask
Read in your images. In your case, img1 is the image we want to warp, and img2 is the blank 500 x 500 image. Also, define 2 arrays of coordinates to be the keypoints of the images:
img1 = cv2.imread("image1.png")
img2 = cv2.imread("image2.png")
pts1 = np.array([[0, 0], [286, 0], [286, 198], [174, 198], [158, 116], [0, 97]])
pts2 = np.array([[80, 37], [409, 42], [416, 390], [331, 384], [291, 119], [111, 311]])
Finally, use the warp function defined before to warp img1 to have its keypoints overlap with the keypoints of img2, and show the resulting image. I drew the points from the second array of coordinates onto the resulting warped image to make the warping process easier to visualize:
warp(img1, img2, pts1, pts2)
for pt in pts2:
    cv2.circle(img2, tuple(pt), 15, (0, 0, 255), -1)
cv2.imshow("Original", img1)
cv2.imshow("Transformed", img2)
cv2.waitKey(0)
cv2.destroyAllWindows()