For a project I need to detect rectangles of the same size and the same colour in an image. This is an example image.
I don't know how to go about it. I am using OpenCV and Python, both of which I am new to.
I tried the SIFT and SURF feature descriptors to find similar features. I also tried template matching, but it is not feasible here because the trainImage can change. The main idea is to extract those similar rectangles from the image provided.
I am using Python 3 and OpenCV 3.
I took this code from the OpenCV tutorial site.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('template.jpg',0) # queryImage
img2 = cv2.imread('input.jpg',0) # trainImage
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
# cv2.drawMatchesKnn expects a list of lists as matches.
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)
plt.imshow(img3), plt.show()
Here's a simple approach (a rough code sketch follows the outline):
generate a list of the unique colours in the image
for each unique colour:
    make everything of that colour in the image white and everything else black
    run findContours() and compare the shapes and sizes
end for
For increased fun, do each colour in a separate thread :-)
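A minimal sketch of the loop above, assuming a BGR image and a hypothetical filename:

import cv2
import numpy as np

img = cv2.imread('input.jpg')  # hypothetical filename

# Generate the list of unique colours in the image.
colours = np.unique(img.reshape(-1, 3), axis=0)

for colour in colours:
    # Everything of this colour becomes white, everything else black.
    mask = cv2.inRange(img, colour, colour)
    # [-2] works with both the OpenCV 3 and OpenCV 4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    # Compare shapes and sizes, e.g. by bounding box and contour area.
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        print(colour, (w, h), cv2.contourArea(c))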
I'm trying to write a scale- and rotation-insensitive pattern matcher using a keypoint approach, but I'm struggling to get good results even without changes in scale and rotation.
I'd like to avoid SIFT and SURF if possible, as I'd prefer the approach to be patent-free. I've experimented with FAST and ORB and found that FAST seems to pick up suitable keypoints. I've then used BRIEF to extract descriptors for those points and fed those descriptors into a FLANN matcher (I also tried brute force). Despite the template image coming from the same image the search is conducted on, the results are terrible. Using matchTemplate works well.
This is the code I'm using:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
directory = "C:\\Users\\MickeyCrozier\\Documents\\Digital Microscope\\Default\\Picture\\"
image = "1.bmp"
tmplImage = "tmpl1.bmp"
img = cv.imread(directory+image,flags = cv.IMREAD_GRAYSCALE)
template = cv.imread(directory+tmplImage,flags = cv.IMREAD_GRAYSCALE)
fast = cv.FastFeatureDetector_create()
fast.setNonmaxSuppression(0)
kp1 = fast.detect(img,None)
kp2 = fast.detect(template,None)
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()
kp1, d1 = brief.compute(img, kp1)
kp2, d2 = brief.compute(template, kp2)  # template descriptors must be computed on the template image
# BRIEF descriptors are binary; FLANN's default KD-tree index expects
# float vectors, hence the cast (see the LSH note below).
d1 = np.float32(d1)
d2 = np.float32(d2)
output = cv.drawKeypoints(img, kp1, None, (255, 0, 0))
cv.imshow('Original', output)
cv.waitKey(0)
output = cv.drawKeypoints(template, kp2, None, (255, 0, 0))
cv.imshow('Template', output)
cv.waitKey(0)
cv.waitKey(0)
matcher = cv.FlannBasedMatcher()
knn_matches = matcher.knnMatch(d1, d2, 2)
#-- Filter matches using the Lowe's ratio test
ratio_thresh = 0.8
good_matches = []
for m, n in knn_matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)
print(good_matches)
output = cv.drawMatches(img,
kp1,
template,
kp2,
good_matches,None,
flags = cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imshow('Original',output)
cv.waitKey(0)
Here are the keypoint and match results:
Any idea how to improve the match results?
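One avenue I've seen suggested but haven't verified: BRIEF descriptors are binary, and casting them to float32 makes FLANN treat them as ordinary L2 vectors. FLANN's LSH index matches the raw binary descriptors instead; the parameter values below are the ones from the OpenCV feature-matching tutorial:

FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,
                    key_size=12,
                    multi_probe_level=1)
search_params = dict(checks=50)
matcher = cv.FlannBasedMatcher(index_params, search_params)
knn_matches = matcher.knnMatch(d1, d2, k=2)  # d1, d2 left as uint8

(With LSH, knnMatch can return fewer than k neighbours per query, so the ratio-test loop needs a length check.)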
I'm trying to create a project in which we provide a template image and a scene, then use OpenCV Python feature matching to detect whether that object exists in the scene and draw a rectangle or square around the match found. This is the first step of the project; after this I'm going to integrate it.
import cv2 as cv
import numpy as np

# read images and convert to greyscale
template = cv.imread("dimitrescu1.jpg")
template = cv.cvtColor(template, cv.COLOR_BGR2GRAY)
w, h = template.shape[::-1]
template = cv.resize(template, (int(w/2), int(h/2)))

image = cv.imread("dimitrescu2.jpg")
image = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
w, h = image.shape[::-1]
image = cv.resize(image, (int(w/2), int(h/2)))

# initiate ORB detector
orb = cv.ORB_create(nfeatures=1500)

# create keypoints and descriptors
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(image, None)

# brute-force matcher with a k-nearest-neighbour match
# (NORM_HAMMING would suit ORB's binary descriptors better than the default)
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# filter matches with a ratio-test threshold
good = []
for m, n in matches:
    if m.distance < 0.8*n.distance:
        good.append([m])

# draw the matches
img3 = cv.drawMatchesKnn(template, kp1, image, kp2, good, None, flags=2)
cv.imshow('hey', img3)
cv.waitKey(0)
print(len(good))
This is my code so far. I just can't figure out how to draw the borders around the matches.
Any help will be appreciated.
I'm chasing a little assistance with an idea I'm playing with. I want to take the features located in an image, using code similar to the FLANN-based matching example in the OpenCV feature-matching tutorial (the last section/example on that page, with the box-in-scene sample image at the bottom).
In particular, I want to use the matches indicated in the image to find the target in the scene image and then, with a seemingly simple addition, draw a bounding box around the target once it is located in the scene frame.
(Example of the output I'm after.)
Rather than just putting a bounding box around the features, I would prefer a list of the four contour points that represent the transformed target in the scene frame, if that makes sense.
Big picture: I want to take the subsection of the scene image containing my target, crop it out of the scene image, mask the non-target areas out of what remains, and then use this as the source for a further process.
At this point I've managed to do everything I need with a hard-coded set of points representing the corners of the target as rotated and transformed in the scene image, so everything works; I just need an example of how to determine the x,y coordinates of each corner of the target in that scene.
I didn't want to post the code as it's a bit clunky, and it's the concept I'm after, not a complete 'do it for me please' fix.
Any advice is much appreciated. If you could show me, using the example code attached, how to do this, I'd be very grateful. Cheers.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()  # cv2.SIFT() is the pre-3.x API
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i] = [1,0]
draw_params = dict(matchColor = (0,255,0),
singlePointColor = (255,0,0),
matchesMask = matchesMask,
flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3), plt.show()
You need to find the perspective transform between the two images.
Create a set of corresponding coordinates according to the matched features.
For example, you find that the feature FtI1 in image 1 corresponds to FtJ1 in image 2, so you know that the coordinate of FtI1 (xi, yi) corresponds to the coordinate of FtJ1 (xj, yj), and you have this for all the corresponding features.
After you have a list of corresponding coordinates between the two images, you can calculate the perspective transform using OpenCV's getPerspectiveTransform (or findHomography, which fits robustly across many correspondences).
Finally, apply the transform you found to the 4 coordinates of the enclosing shape in the first image to get the coordinates of the enclosing shape in the second image. The OpenCV function for transforming points is perspectiveTransform; warpPerspective is the variant that warps a whole image.
An example of how to do that in OpenCV is in:
http://docs.opencv.org/3.1.0/da/d6e/tutorial_py_geometric_transformations.html
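A short sketch of those steps, building on the code in the question and assuming the ratio-test survivors are collected into a flat list good (rather than only setting matchesMask):

# Coordinates of the matched features in each image.
src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robustly fit the 3x3 perspective transform from query to scene.
M, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Project the four corners of the query image into the scene.
h, w = img1.shape
corners = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
scene_corners = cv2.perspectiveTransform(corners, M)

# Draw the projected outline on the scene image.
img2 = cv2.polylines(img2, [np.int32(scene_corners)], True, 255, 3, cv2.LINE_AA)

scene_corners then holds exactly the four x,y corner coordinates of the target in the scene.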
I've been trying to use ORB to find keypoints/descriptors, and I need to mask out part of the image because many features are very similar in two parts of it. However, I can't determine the correct format of the mask parameter to the detectAndCompute function, and the documentation is ambiguous to me. I tried looking at the source code, but I am not familiar enough with C++ to understand it. I thought the mask was just a binary array where 1 = use and 0 = ignore, but every mask I've tried returns no keypoints. Here is some example code:
img1_gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
#ignore the left half of the first image
mask1 = np.ones(img1_gray.shape)
mask1[:,:mask1.shape[1]/2] = 0
#ignore the right half of the second image
mask2 = np.ones(img2_gray.shape)
mask2[:,mask2.shape[1]/2:] = 0
kp1, des1 = orb.detectAndCompute(img1_gray, mask1)
kp2, des2 = orb.detectAndCompute(img2_gray, mask2)
The documentation is here: http://docs.opencv.org/3.0-beta/modules/features2d/doc/feature_detection_and_description.html
(The two test images, img1 and img2, were attached here.)
I've used ORB in OpenCV C++, and if I recall correctly the mask has to be of type CV_8UC1 with values 0 and 255 (at least that's what I used). Have you already tried that?
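In Python that would look something like this (a sketch using your variable names):

# CV_8UC1 mask: single channel, uint8; 255 = detect here, 0 = ignore.
mask = np.zeros(img1_gray.shape, dtype=np.uint8)
mask[:, img1_gray.shape[1] // 2:] = 255  # keep only the right half
kp1, des1 = orb.detectAndCompute(img1_gray, mask)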
ORB is great for detecting patterns in an image or finding image duplicates, but for this kind of task ORB may not be informative enough, because it's designed as a fast and lightweight binary descriptor. Try looking at SIFT and SURF.
The bug in the above code was that the masks had to be changed to uint8, i.e. to CV_8UC1 type with values from 0 to 255. Here is fully tested, working Python 3 code, using SIFT features rather than ORB features:
import cv2
import numpy as np
import matplotlib.pyplot as plt
def showing_features(img1, key_points):
    plt.imshow(cv2.drawKeypoints(img1, key_points, None))
    plt.show()
img1 = cv2.imread('img1.jpg')
img2 = cv2.imread('img2.jpg')
img1_gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
sift= cv2.xfeatures2d.SIFT_create(nfeatures=0,
nOctaveLayers=3,
contrastThreshold=0.05,
edgeThreshold=10.0,
sigma=1.6)
"*-------------- Create Masks --------------*"
mask1 = np.ones(img1_gray.shape)
#ignore the left half of the first image
mask1[:,:int(mask1.shape[1]/2)] = 0
#ignore the right half of the second image
mask2 = np.ones(img2_gray.shape)
mask2[:,int(mask2.shape[1]/2):] = 0
"*-------------- Change Masks to Uint8 --------------*"
mask1 = mask1.astype(np.uint8)
mask2 = mask2.astype(np.uint8)
"*-------------- Extract SIFT Features --------------*"
kp1m, des1m = sift.detectAndCompute(img1_gray, mask1)
kp2m, des2m = sift.detectAndCompute(img2_gray, mask2)
"*-------------- Display SIFT features after using MASK --------------*"
showing_features(img1, kp1m)
showing_features(img2, kp2m)
I have two images that I want to align using OpenCV. One image is the green band of true-colour imagery; the other is an NIR image of almost the same area (the offset is about 180 pixels). For this alignment I want to use python-opencv 3.0 and the ORB algorithm. I use the following script to create the KNN matches:
img1 = cv2.imread('rgb.png',1)
img2 = cv2.imread('nir.png',0)
img1=img1[:,:,1]
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
good = []
for m, n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, flags=2, outImg=None)
However, when I do that with my images I get only a few matches, as the following images show:
Would anyone know how I could best align these images? Thank you in advance, and apologies if this was posted in the wrong forum.
The next step is to extract the keypoint locations from your "good" matches and use them to calculate a 3x3 transformation matrix that will map the corners of one image onto the other.
For this case, let's say we want to transform img2 to align with img1. First we extract the locations of the good matches:
# `good` holds one-element lists from knnMatch, hence m[0];
# the query image is img1, the train image is img2.
pts1 = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
Then we find the transformation matrix:
M, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)  # returns the matrix and an inlier mask
Finally, we can apply the transformation:
warpedImg2 = cv2.warpPerspective(img2, M, (img1.shape[1], img1.shape[0]))  # dsize is (width, height)
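As a quick sanity check (a sketch, assuming both arrays have the same size and dtype), you can blend the warped image over the reference and inspect the overlay:

overlay = cv2.addWeighted(img1, 0.5, warpedImg2, 0.5, 0)
cv2.imwrite('alignment_check.png', overlay)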
Here is a great resource on feature detection in OpenCV using Python.