I have followed the OpenCV Feature Detection and Description tutorial and used SIFT and other algorithms in OpenCV to find matching feature points between 2 images. From what I understood, these algorithms can find the similar regions between 2 images. But I am interested in identifying the different or dissimilar regions. How can I draw all the NON-MATCHING feature points on both images? Furthermore, can I draw boundaries around these non-matching points to show which regions in the 2 images are different?
I am using Python code on Windows 7, with a build from the latest OpenCV source.
Draw all the NON-MATCHING feature points on both the images:
This task is pretty straightforward once you know the structure of the DMatch objects resulting from matching two sets of descriptors (matches = bf.match(des1,des2)). The two DMatch properties relevant to this problem are the following:
DMatch.trainIdx: Index of the descriptor (or keypoint from the train image) in train descriptors
DMatch.queryIdx: Index of the descriptor (or keypoint from the query image) in query descriptors
Then, knowing this information and as @uzair_syed said, this is just a simple list-operations task.
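For example (a minimal sketch, assuming matches comes from bf.match(des1, des2) and kp1, kp2, img1, img2 are the keypoints and images from the tutorial):
# indices of the keypoints that took part in a match
matched_idx1 = set(m.queryIdx for m in matches)    # indexes into kp1 (query image)
matched_idx2 = set(m.trainIdx for m in matches)    # indexes into kp2 (train image)
# keypoints that were never matched
kp1_unmatched = [kp for i, kp in enumerate(kp1) if i not in matched_idx1]
kp2_unmatched = [kp for i, kp in enumerate(kp2) if i not in matched_idx2]
# draw them on both images
img1_out = cv2.drawKeypoints(img1, kp1_unmatched, None, color=(255, 0, 0))
img2_out = cv2.drawKeypoints(img2, kp2_unmatched, None, color=(255, 0, 0))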
Draw boundaries around the non-matching points:
To achieve this, I would do something like this:
Create a black mask with a white pixel at each non-matching point's location.
Depending on the density of the non-matching points' clusters, dilate the mask with a big kernel (e.g. 15 x 15 px).
Erode the mask with a kernel of the same size.
Finally, apply the findContours function on the mask to get the boundaries of the non-matching points.
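A minimal sketch of those four steps (assuming import numpy as np, a grayscale img1, and kp1_unmatched holding its non-matching keypoints as in the snippet above; the 15 x 15 kernel is just a starting value to tune):
mask = np.zeros(img1.shape[:2], dtype=np.uint8)
for kp in kp1_unmatched:
    mask[int(kp.pt[1]), int(kp.pt[0])] = 255      # one white pixel per non-matching point
kernel = np.ones((15, 15), np.uint8)
mask = cv2.dilate(mask, kernel)                   # merge nearby points into blobs
mask = cv2.erode(mask, kernel)                    # shrink the blobs back to size
# boundaries of the non-matching regions ([-2] works with both OpenCV 3 and 4 return styles)
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
boxed = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
cv2.drawContours(boxed, contours, -1, (0, 0, 255), 2)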
For more information, you can have a look at this question and its answer.
Hope it gets you on the right track!
It turned out to be a simple list-operations task. Here is my Python code:
# code copied from
# http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html
import numpy as np
import cv2
from matplotlib import pyplot as plt
from scipy.spatial.distance import euclidean
MIN_MATCH_COUNT = 10
img1 = cv2.imread('Src.png',0) # queryImage
img2 = cv2.imread('Dest.png',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
    if m.distance < 0.7*n.distance:
        good.append(m)

if len(good)>MIN_MATCH_COUNT:
    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)

    kp1_matched = [ kp1[m.queryIdx] for m in good ]
    kp2_matched = [ kp2[m.trainIdx] for m in good ]

    kp1_miss_matched = [kp for kp in kp1 if kp not in kp1_matched]
    kp2_miss_matched = [kp for kp in kp2 if kp not in kp2_matched]

    # draw only miss matched or not matched keypoints location
    img1_miss_matched_kp = cv2.drawKeypoints(img1, kp1_miss_matched, None, color=(255,0,0), flags=0)
    plt.imshow(img1_miss_matched_kp), plt.show()

    img2_miss_matched_kp = cv2.drawKeypoints(img2, kp2_miss_matched, None, color=(255,0,0), flags=0)
    plt.imshow(img2_miss_matched_kp), plt.show()

    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    h,w = img1.shape
    pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
    dst = cv2.perspectiveTransform(pts, M)

else:
    print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
    matchesMask = None
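As a side note, the homography branch above already computes dst, the four corners of img1 mapped into img2. If you also want to outline that matched region on img2 (as the tutorial does), one more line is enough (a sketch; it only applies when the if branch ran):
img2_outlined = cv2.polylines(img2.copy(), [np.int32(dst)], True, 255, 3, cv2.LINE_AA)
plt.imshow(img2_outlined), plt.show()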
Related
I'm trying to write a scale- and rotation-insensitive pattern matcher using a keypoint approach, but I'm struggling to get good results even without changes in scale and rotation.
I'd like to avoid SIFT and SURF if possible, as I'd prefer the approach to be patent-free. I've experimented with FAST and ORB and have found that FAST seems to be picking up suitable keypoints. I've then used BRIEF to extract the descriptors for those points and fed those descriptors into a FLANN matcher (I also tried brute force). Despite the template image coming from the same image the search is conducted on, the results are terrible. Using matchTemplate works well.
This is the code I'm using:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
directory = "C:\\Users\\MickeyCrozier\\Documents\\Digital Microscope\\Default\\Picture\\"
image = "1.bmp"
tmplImage = "tmpl1.bmp"
img = cv.imread(directory+image,flags = cv.IMREAD_GRAYSCALE)
template = cv.imread(directory+tmplImage,flags = cv.IMREAD_GRAYSCALE)
fast = cv.FastFeatureDetector_create()
fast.setNonmaxSuppression(0)
kp1 = fast.detect(img,None)
kp2 = fast.detect(template,None)
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()
kp1, d1 = brief.compute(img,kp1)
kp2, d2 = brief.compute(img,kp2)
d1 = np.float32(d1)
d2 = np.float32(d2)
output = cv.drawKeypoints(img, kp1, -1, (255,0,0))
cv.imshow('Original',output)
cv.waitKey(0)
output = cv.drawKeypoints(template, kp2, -1, (255,0,0))
cv.imshow('Original',output)
cv.waitKey(0)
matcher = cv.FlannBasedMatcher()
knn_matches = matcher.knnMatch(d1, d2, 2)
#-- Filter matches using the Lowe's ratio test
ratio_thresh = 0.8
good_matches = []
for m,n in knn_matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)

print(good_matches)

output = cv.drawMatches(img, kp1,
                        template, kp2,
                        good_matches, None,
                        flags = cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imshow('Original',output)
cv.waitKey(0)
Here are the results of the key points and match:
Any idea how to improve the match results?
Hi, I'm trying to create an OCR system where the model should be able to read an uploaded document. However, a lot of the time the uploaded documents are skewed or tilted. I plan to straighten and/or resize the document based on a template.
To achieve this, I intend to use feature matching and homography. However, whenever I calculate my keypoints and descriptors (using ORB) and try to match them using brute-force matching, none of the features seem to match. Here's the code that I've used so far and the results with it. Can someone point me in the right direction if I'm missing something or doing something incorrectly?
def straighten_image(ORIG_IMG, IMG2):
    # read both the images:
    orig_image = cv2.imread(ORIG_IMG)
    img_input = cv2.imread(IMG2)

    orig_gray_scale = cv2.cvtColor(orig_image, cv2.COLOR_BGR2GRAY)
    gray_scale_img = cv2.cvtColor(img_input, cv2.COLOR_BGR2GRAY)

    #Detect ORB features and compute descriptors
    MAX_NUM_FEATURES = 100
    orb = cv2.ORB_create(MAX_NUM_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(orig_gray_scale, None)
    keypoints2, descriptors2 = orb.detectAndCompute(gray_scale_img, None)

    #display image with keypoints
    orig_wid_decriptors = cv2.drawKeypoints(orig_gray_scale, keypoints1, outImage = np.array([]), color= (255, 0, 0), flags= cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    inp_wid_decriptors = cv2.drawKeypoints(img_input, keypoints2, outImage = np.array([]), color= (255, 0, 0), flags= cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    #Match features
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)
    print(type(matches))

    #sort matches
    # matches.sort(key=lambda x: x.distance, reverse=False)

    #Remove not-so-good matches
    numGoodMatches = int(len(matches)*0.1)
    matches = matches[:numGoodMatches]

    #Draw Top matches
    im_matches = cv2.drawMatches(orig_gray_scale, keypoints1, gray_scale_img, keypoints2, matches, None)
    cv2.imshow("", im_matches)
    cv2.waitKey(0)

    #Homography
    points1 = np.zeros((len(matches), 2), dtype = np.float32)
    points2 = np.zeros((len(matches), 2), dtype = np.float32)

    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    #Find homography:
    h, mask = cv2.findHomography(points2, points1, cv2.RANSAC)

    #Warp image
    # Use homography to warp image
    height, width = orig_gray_scale.shape
    inp_reg = cv2.warpPerspective(gray_scale_img, h, (width, height), borderValue = 255)

    return inp_reg
import cv2
import matplotlib.pyplot as plt
import numpy as np
template = "template_aadhaar.jpg"
test = "test.jpeg"
str_img = straighten_image(template, test)
cv2.imshow("", str_img)
cv2.waitKey(0)
EDIT: If I use my own ID card (perfectly straight) as the template and try to align the same ID card when it is tilted, it matches the features and re-aligns the tilted image perfectly. However, I need the model to be able to re-align any other ID card based on the template. By any ID card, I mean the details could be different, but the location and font would be exactly the same.
EDIT #2: As suggested by @Olli, I tried using a template with only those features that are the same for all Aadhaar cards (image attached). But the feature matching is still somewhat arbitrary.
Feature matching tries to detect the most significant features on an image and tries to match them. This only works if the features really are the same; if they are merely similar but different, it will fail.
If you have some features that are always the same (e.g. the logo on the top left), you could try to create a template with only these features and blank out all other areas, i.e. remove the person, the name, the QR code and so on.
But because there are more differences ("Government of India" sits inside the green area on one image and above it on the other, ...) than similarities, I would try to find the rotation based on the corners and/or the edges of the card's shape.
For example:
convert to grayscale
perform canny edge detection
detect corners, e.g. using cv2.goodFeaturesToTrack. If some corners are hidden, try finding the sides using Hough lines instead.
undistort
If some images are rotated 90, 180 or 270 degrees after undistortion, you could use a filter to find the orange and green areas and rotate so that this area is at the top again.
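A rough sketch of that route (a sketch only, not tested on Aadhaar scans: it assumes the card is the largest closed contour after edge detection, finds its corners via approxPolyDP rather than goodFeaturesToTrack, and card_w/card_h are a made-up output size):
import cv2
import numpy as np

img = cv2.imread("test.jpeg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 50, 150)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))   # close small gaps in the outline

# largest external contour = (hopefully) the card; [-2] works for OpenCV 3 and 4
contours = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
card = max(contours, key=cv2.contourArea)
approx = cv2.approxPolyDP(card, 0.02 * cv2.arcLength(card, True), True)

if len(approx) == 4:
    card_w, card_h = 860, 540                          # assumed output size
    # NOTE: the 4 corners must be ordered consistently (top-left, top-right,
    # bottom-right, bottom-left) before this step, otherwise the warp flips
    src = np.float32(approx.reshape(4, 2))
    dst = np.float32([[0, 0], [card_w, 0], [card_w, card_h], [0, card_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    straightened = cv2.warpPerspective(img, M, (card_w, card_h))
If the warp comes out rotated by 90, 180 or 270 degrees, the colour-filter idea above (finding the orange and green areas) can then be used to pick the correct orientation.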
I'm chasing a little assistance with an idea I'm playing with. I want to take the features located in an image with code similar to the example here:
See the sample image at the bottom of that page; the last section/example is the one I'm talking about.
In particular, I want to use the matches indicated in that image to find the target in the scene image, with what seems like a simple addition: I want to draw a bounding box around the target once it is located in the scene frame.
Example of output I'm after
Rather than just putting a bounding box around the features, I would rather have a list of the four contour points that represent the transformed target in the scene frame, if that makes sense.
Big picture: I want to take the subsection of the scene image containing my target, crop it out of the scene image, mask the non-target areas out of the remaining image, and then use this as the source for a further process.
At this point I've managed to do everything I need with a hard-coded set of points representing the corners of the target image as rotated and transformed in the scene image, so everything works; I just need an example of how to determine the x,y coordinates of each corner of the target in that scene.
I didn't want to post the code as it's a bit clunky, and it's the concept I'm after, not a complete 'do it for me please' fix.
Any advice is much appreciated. If you could show me, using the example code attached, how to do this I'd be very grateful. Cheers.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in xrange(len(matches))]
# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]

draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3,),plt.show()
You need to find the perspective transform between the two images.
Create a set of corresponding coordinates according to the matched features.
For example, if you find that feature FtI1 in image 1 corresponds to FtJ1 in image 2, then you know that the coordinate of FtI1 (xi, yi) corresponds to the coordinate of FtJ1 (xj, yj), and you have this for all the corresponding features.
After you have a list of corresponding coordinates between the two images, you can calculate the perspective transform using OpenCV's getPerspectiveTransform (or findHomography, which copes better when you have many, possibly noisy, correspondences).
Finally, apply the transform you found to the 4 coordinates of the enclosing shape in the first image to get the coordinates of the enclosing shape in the second image. The OpenCV function for transforming points is perspectiveTransform (warpPerspective is the variant that warps a whole image).
An example of how to do this in OpenCV is in:
http://docs.opencv.org/3.1.0/da/d6e/tutorial_py_geometric_transformations.html
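A hedged sketch of those steps wired into the question's code above (assuming kp1, kp2, matches, img1 and img2 are exactly as in that script; with this many ratio-test correspondences, findHomography with RANSAC is the usual stand-in for getPerspectiveTransform):
# keep the ratio-test survivors as plain DMatch objects
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

if len(good) >= 4:   # a homography needs at least 4 correspondences
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

    # the four corners of the target (img1), mapped into the scene (img2)
    h, w = img1.shape
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    scene_corners = cv2.perspectiveTransform(corners, M)   # the x,y points you are after

    outlined = cv2.polylines(img2.copy(), [np.int32(scene_corners)], True, 255, 3)
scene_corners is the list of four contour points; a mask for cropping can then be built from the same points with cv2.fillPoly.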
I have two images that I want to align using OpenCV. One of the images is the green band of true-colour imagery; the other is a NIR image of almost the same area (the offset is about 180 pixels). For this alignment I want to use python-opencv 3.0 and the ORB algorithm. I use the following script to create the KNN matches:
img1 = cv2.imread('rgb.png',1)
img2 = cv2.imread('nir.png',0)
img1=img1[:,:,1]
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,flags=2,outImg=None)
However, when I do that with my images I get just a few matches with the following images:
Would anyone of you know how I could best align these images? Thank you in advance and apologies if this was posted in the wrong forum.
The next step is to extract the keypoint locations from your "good matches", and use these to calculate a 3x3 transformation matrix that will transform the corners of one image to the other.
For this case, let's say that we want to transform img2 to align with img1. First we extract the locations of the good matches:
pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
Then we find the transformation matrix:
M, mask = cv2.findHomography(pts2, pts1)
Finally, we can apply the transformation:
warpedImg2 = cv2.warpPerspective(img2, M, (img1.shape[1], img1.shape[0]))
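One wiring detail when using this with the question's code: there, good stores single-element lists (good.append([m])), so index with m[0] when extracting the points, e.g. (a sketch, assuming numpy is imported as np):
pts1 = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
A quick way to eyeball the alignment afterwards is to blend the two images:
overlay = cv2.addWeighted(img1, 0.5, warpedImg2, 0.5, 0)
cv2.imwrite('alignment_check.png', overlay)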
Here is a great resource on feature detection in OpenCV using Python.
I am trying to use OpenCV with Python. I wrote descriptor (SIFT, SURF, or ORB) matching code with the C++ version of OpenCV 2.4, and I want to convert this code to OpenCV with Python. I found some documents about how to use OpenCV functions in C++, but for many of the OpenCV functions in Python I could not find how to use them. My current problem is that I don't know how to use OpenCV C++'s "drawMatches" from Python. I found cv2.DRAW_MATCHES_FLAGS_DEFAULT, but I have no idea how to use it. Here is my Python code for matching using ORB descriptors:
im1 = cv2.imread(r'C:\boldt.jpg')
im2 = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
im3 = cv2.imread(r'C:\boldt_resize50.jpg')
im4 = cv2.cvtColor(im3, cv2.COLOR_BGR2GRAY)
orbDetector2 = cv2.FeatureDetector_create("ORB")
orbDescriptorExtractor2 = cv2.DescriptorExtractor_create("ORB")
orbDetector4 = cv2.FeatureDetector_create("ORB")
orbDescriptorExtractor4 = cv2.DescriptorExtractor_create("ORB")
keypoints2 = orbDetector2.detect(im2)
(keypoints2, descriptors2) = orbDescriptorExtractor2.compute(im2,keypoints2)
keypoints4 = orbDetector4.detect(im4)
(keypoints4, descriptors4) = orbDescriptorExtractor4.compute(im4,keypoints4)
matcher = cv2.DescriptorMatcher_create('BruteForce-Hamming')
raw_matches = matcher.match(descriptors2, descriptors4)
img_matches = cv2.DRAW_MATCHES_FLAGS_DEFAULT(im2, keypoints2, im4, keypoints4, raw_matches)
cv2.namedWindow("Match")
cv2.imshow( "Match", img_matches);
Error message of the line "img_matches = cv2.DRAW_MATCHES_FLAGS_DEFAULT(im2, keypoints2, im4, keypoints4, raw_matches)"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'long' object is not callable
I spent much time searching documentation and examples of using OpenCV functions with Python. However, I am very frustrated because there is very little information about using OpenCV functions in Python. It would be extremely helpful if anyone could point me to documentation that explains how to use every function of the OpenCV module in Python. I appreciate your time and help.
I've also written something myself that just uses the OpenCV Python interface and I didn't use scipy. drawMatches is part of OpenCV 3.0.0 and isn't part of OpenCV 2, which is what I'm currently using. Even though I'm late to the party, here's my own implementation that mimics drawMatches to the best of my ability.
I've provided my own images where one is of a camera man, and the other one is the same image but rotated by 55 degrees counter-clockwise.
The basic premise of what I wrote is that I allocate an output RGB image where the amount of rows is the maximum of the two images to accommodate for placing both of the images in the output image and the columns are simply the summation of both the columns together. I place each image in their corresponding spots, then run through a loop of all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) co-ordinates. I then draw circles at each of the detected locations, then draw a line connecting these circles together.
Bear in mind that the detected keypoint in the second image is with respect to its own co-ordinate system. If you want to place this in the final output image, you need to offset the column co-ordinate by the amount of columns from the first image so that the column co-ordinate is with respect to the co-ordinate system of the output image.
Without further ado:
import numpy as np
import cv2
def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structure (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1,:] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:cols1+cols2,:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
To illustrate that this works, here are the two images that I used:
I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity as this is a binary descriptor. As such:
import numpy as np
import cv2
img1 = cv2.imread('cameraman.png') # Original image
img2 = cv2.imread('cameraman_rot55.png') # Rotated image
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
# Show only the top 10 matches
drawMatches(img1, kp1, img2, kp2, matches[:10])
This is the image I get:
You can visualize the feature matching in Python as follows. Note the use of the scipy library.
# matching features of two images
import cv2
import sys
import scipy as sp
if len(sys.argv) < 3:
    print 'usage: %s img1 img2' % sys.argv[0]
    sys.exit(1)
img1_path = sys.argv[1]
img2_path = sys.argv[2]
img1 = cv2.imread(img1_path, cv2.CV_LOAD_IMAGE_GRAYSCALE)
img2 = cv2.imread(img2_path, cv2.CV_LOAD_IMAGE_GRAYSCALE)
detector = cv2.FeatureDetector_create("SURF")
descriptor = cv2.DescriptorExtractor_create("BRIEF")
matcher = cv2.DescriptorMatcher_create("BruteForce-Hamming")
# detect keypoints
kp1 = detector.detect(img1)
kp2 = detector.detect(img2)
print '#keypoints in image1: %d, image2: %d' % (len(kp1), len(kp2))
# descriptors
k1, d1 = descriptor.compute(img1, kp1)
k2, d2 = descriptor.compute(img2, kp2)
print '#keypoints in image1: %d, image2: %d' % (len(d1), len(d2))
# match the keypoints
matches = matcher.match(d1, d2)
# visualize the matches
print '#matches:', len(matches)
dist = [m.distance for m in matches]
print 'distance: min: %.3f' % min(dist)
print 'distance: mean: %.3f' % (sum(dist) / len(dist))
print 'distance: max: %.3f' % max(dist)
# threshold: half the mean
thres_dist = (sum(dist) / len(dist)) * 0.5
# keep only the reasonable matches
sel_matches = [m for m in matches if m.distance < thres_dist]
print '#selected matches:', len(sel_matches)
# #####################################
# visualization of the matches
h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]
view = sp.zeros((max(h1, h2), w1 + w2, 3), sp.uint8)
view[:h1, :w1, :] = img1
view[:h2, w1:, :] = img2
view[:, :, 1] = view[:, :, 0]
view[:, :, 2] = view[:, :, 0]
for m in sel_matches:
    # draw the keypoints
    # print m.queryIdx, m.trainIdx, m.distance
    color = tuple([sp.random.randint(0, 255) for _ in xrange(3)])
    cv2.line(view, (int(k1[m.queryIdx].pt[0]), int(k1[m.queryIdx].pt[1])), (int(k2[m.trainIdx].pt[0] + w1), int(k2[m.trainIdx].pt[1])), color)
cv2.imshow("view", view)
cv2.waitKey()
As the error message says, DRAW_MATCHES_FLAGS_DEFAULT is of type 'long': it is a constant defined by the cv2 module, not a function. Unfortunately, the function you want, drawMatches, only exists in OpenCV's C++ interface in the 2.x series.
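For completeness, in OpenCV 3.0 and later the Python bindings do expose drawMatches, and DRAW_MATCHES_FLAGS_DEFAULT is then just a value you pass in rather than something you call (a sketch using the variable names from the question):
# OpenCV 3.x Python: drawMatches is available; pass None for the output image
img_matches = cv2.drawMatches(im2, keypoints2, im4, keypoints4,
                              raw_matches[:20], None)   # default flags = DRAW_MATCHES_FLAGS_DEFAULT
cv2.imshow("Match", img_matches)
cv2.waitKey(0)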