OpenCV doesn't come with "external" libraries - python

I tried this example from the OpenCV website:
import numpy as np
import cv2
from matplotlib import pyplot as plt
# changed the image names from box* since the sample images were not given on the site
img1 = cv2.imread('burger.jpg',0) # queryImage
img2 = cv2.imread('burger.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in xrange(len(matches))]
# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3,),plt.show()
Executing the example, viz. python test.py, gives the following error:
Traceback (most recent call last):
File "test.py", line 10, in <module>
sift = cv2.SIFT()
AttributeError: 'module' object has no attribute 'SIFT'
I had installed OpenCV from source, building manually. All modules were built by make, if I recall correctly.
This question suggested that I install opencv-contrib from its GitHub repository. I did, and I still get this error.
My system is Ubuntu 15.04 64-bit.

I'm not entirely sure if this applies to your build, but at some point SIFT stopped being shipped in the default modules of later OpenCV versions, I believe because it was patented (source?). An alternative is to use ORB, which will have a similar effect.
You could try something like this:
from cv2 import ORB as SIFT
However, in the event that you get an import error, this might also work for you:
SIFT = cv2.ORB_create
If you insert one of those near the top of your file, you can more or less leave "SIFT" as it is throughout the file: basically replace sift = cv2.SIFT() with sift = SIFT() and you should be in better shape.
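As a concrete sketch of that swap (my own illustration, not the original answer's code): ORB produces binary descriptors, so if you keep the FLANN matcher you would normally switch the index from KD-tree to LSH, as in the official feature-matching tutorial.
import cv2
img1 = cv2.imread('burger.jpg', 0)  # queryImage
img2 = cv2.imread('burger.jpg', 0)  # trainImage
# ORB in place of SIFT
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
# FLANN with an LSH index, since ORB descriptors are binary
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6, key_size=12, multi_probe_level=1)
flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
# note: with LSH, some entries may contain fewer than 2 matches
matches = flann.knnMatch(des1, des2, k=2)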

Related

Poor results from matching against key points and descriptors in OpenCV Python using FAST, BRIEF and FLANN

I'm trying to write a scale and rotation insensitive pattern matcher using a key point approach but I'm struggling to get good results even without changes in scale and rotation.
I'd like to avoid SIFT and SURF if possible as I'd prefer the approach to be patent free. I've experimented with FAST and ORB and have found that FAST seems to be picking up suitable key points. I've then used Brief to extract the descriptors for those points and fed those descriptors into a FLANN pattern matcher (I also tried Brute Force). Despite the template image coming from the same image the search is being conducted on, the results are terrible. Using matchTemplate works well.
This is the code I'm using:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
directory = "C:\\Users\\MickeyCrozier\\Documents\\Digital Microscope\\Default\\Picture\\"
image = "1.bmp"
tmplImage = "tmpl1.bmp"
img = cv.imread(directory+image,flags = cv.IMREAD_GRAYSCALE)
template = cv.imread(directory+tmplImage,flags = cv.IMREAD_GRAYSCALE)
fast = cv.FastFeatureDetector_create()
fast.setNonmaxSuppression(0)
kp1 = fast.detect(img,None)
kp2 = fast.detect(template,None)
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()
kp1, d1 = brief.compute(img,kp1)
kp2, d2 = brief.compute(img,kp2)
d1 = np.float32(d1)
d2 = np.float32(d2)
output = cv.drawKeypoints(img, kp1, -1, (255,0,0))
cv.imshow('Original',output)
cv.waitKey(0)
output = cv.drawKeypoints(template, kp2, -1, (255,0,0))
cv.imshow('Original',output)
cv.waitKey(0)
matcher = cv.FlannBasedMatcher()
knn_matches = matcher.knnMatch(d1, d2, 2)
#-- Filter matches using the Lowe's ratio test
ratio_thresh = 0.8
good_matches = []
for m,n in knn_matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)
print(good_matches)
output = cv.drawMatches(img,
                        kp1,
                        template,
                        kp2,
                        good_matches, None,
                        flags = cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imshow('Original',output)
cv.waitKey(0)
Here are the results of the key points and match:
Any idea how to improve the match results?

Detecting similar objects in an Image

I have to detect same sized and same colored rectangles with same areas in an image for a project. This is an example image.
I don't know how to go about it. I am using OpenCV and python which I am new to.
I tried SIFT and SURF feature descriptors to get the similar features. I also tried template matching, but it is not feasible in this case as the trainImage could change. But the main idea is to get those similar rectangles from the image provided.
I am using python3 and openCV3.
I took this code from the opencv tutorial site.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('template.jpg',0) # queryImage
img2 = cv2.imread('input.jpg',0) # trainImage
sift=cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
Here's a simple approach.
generate a list of the unique colours in the image
for each unique colour:
    make everything that colour in the image white and everything else black
    run findContours() and compare shapes and sizes
end for
For increased fun, do each colour in a separate thread :-)
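A minimal sketch of that idea in Python (single-threaded, my own illustration with a hypothetical input.jpg, not the answerer's actual code):
import cv2
import numpy as np
img = cv2.imread('input.jpg')  # hypothetical file name
unique_colours = np.unique(img.reshape(-1, 3), axis=0)
for colour in unique_colours:
    # everything of this colour becomes white, everything else black
    mask = cv2.inRange(img, colour, colour)
    # [-2] works on both OpenCV 3 (three return values) and OpenCV 4 (two)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        # compare shapes and sizes here, e.g. group contours with similar areas
        print(colour, area, (x, y, w, h))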

How to use surf and sift detector in OpenCV for Python

I was trying code for feature matching which uses the function SURF(). Upon execution it gives an error saying "AttributeError: 'module' object has no attribute 'SURF'".
How can I download this module for Python (Windows) and fix this error?
You can try ORB (Oriented FAST and Rotated BRIEF) as an alternative to SURF in OpenCV. It works almost as well as SURF and SIFT, and it's free, unlike SIFT and SURF, which are patented and can't be used commercially.
You can read more about it in the opencv-python documentation.
Here's the sample code for your ease:
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('text.png',cv2.IMREAD_GRAYSCALE) # queryImage
img2 = cv2.imread('original.png',cv2.IMREAD_GRAYSCALE) # trainImage
# Initiate ORB detector
orb = cv2.ORB_create()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10],None, flags=2)
plt.imshow(img3),plt.show()
First, install the contrib package:
pip install opencv-contrib-python
and then use this to create the SIFT object:
sift = cv2.xfeatures2d.SIFT_create()
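As a quick sanity check (my own addition, assuming the contrib package installed cleanly), you can confirm the module is exposed before running the full script:
import cv2
print(cv2.__version__)
# raises AttributeError if the contrib (xfeatures2d) modules are missing
sift = cv2.xfeatures2d.SIFT_create()
print(type(sift))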
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('simple.jpg',0)
# Initiate FAST object with default values
fast = cv.FastFeatureDetector_create()
# find and draw the keypoints
kp = fast.detect(img,None)
img2 = cv.drawKeypoints(img, kp, None, color=(255,0,0))
# Print all default params
print( "Threshold: {}".format(fast.getThreshold()) )
print( "nonmaxSuppression:{}".format(fast.getNonmaxSuppression()) )
print( "neighborhood: {}".format(fast.getType()) )
print( "Total Keypoints with nonmaxSuppression: {}".format(len(kp)) )
cv.imwrite('fast_true.png',img2)
#Disable nonmaxSuppression
fast.setNonmaxSuppression(0)
kp = fast.detect(img,None)
print( "Total Keypoints without nonmaxSuppression: {}".format(len(kp)) )
img3 = cv.drawKeypoints(img, kp, None, color=(255,0,0))
cv.imwrite('fast_false.png',img3)
According to this source, the SIFT patent has expired: https://github.com/opencv/opencv/issues/16736 ("SIFT to main directory"). SIFT should be implemented in the main opencv repository by now. At least cv.SIFT_create() is working for me; I have opencv version 4.5.5.
Example usage, with a possible combination of other detectors and descriptors (an alternative to the detectAndCompute() function):
import cv2 as cv

MAX_FEATURES = 5500
GOOD_MATCH_PERCENT = 0.03

def find_matches(im1, im2, detector_type: str, descriptor_type: str):
    # Convert images to grayscale
    im1Gray = cv.cvtColor(im1, cv.COLOR_BGR2GRAY)
    im2Gray = cv.cvtColor(im2, cv.COLOR_BGR2GRAY)
    # init detector
    if detector_type == 'sift':
        detector = cv.SIFT_create(MAX_FEATURES)
    elif detector_type == 'fast':
        detector = cv.FastFeatureDetector_create()
    # ... ORB etc.
    # init descriptor
    if descriptor_type == 'sift':
        descriptor = cv.SIFT_create(MAX_FEATURES)
    keypoints1 = detector.detect(im1Gray, None)
    keypoints2 = detector.detect(im2Gray, None)
    # find descriptors with descriptor SIFT
    keypoints1, descriptors1 = descriptor.compute(im1Gray, keypoints1)
    keypoints2, descriptors2 = descriptor.compute(im2Gray, keypoints2)
    # BFMatcher object
    matcher = cv.BFMatcher(cv.NORM_L1, crossCheck=True)
    matches = list(matcher.match(descriptors1, descriptors2))
    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)
    # Remove not so good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]
    # Draw top matches
    imMatches = cv.drawMatches(
        im1, keypoints1, im2, keypoints2, matches, None)
    cv.imwrite("matches.jpg", imMatches)

opencv 3 python feature detection error on function cv2.detectAndCompute and also on cv2.compute

I use OpenCV 3.0.0 and Python 2.7.5 (x32).
This is my code (ORB_feature_detection):
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('C:\\Python27\\madar1.jpg',0) # queryImage
img2 = cv2.imread('C:\\Python27\\madar2.jpg',0) # trainImage
# Initiate SIFT detector
orb = cv2.ORB_create()
# line 12
# find the keypoints and descriptors with SIFT
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
cv2.waitKey()
cv2.destroyAllWindows()
and this is the error message; it occurs on the
kp1, des1 = orb.detectAndCompute(img1,None) part.
It did work on OpenCV 2.4.11, but it doesn't work on OpenCV 3.0.0!
Traceback (most recent call last):
File "C:\Python27\orb_matcher.py", line 12, in <module>
kp1, des1 = orb.detectAndCompute(img1,None)
error: ..\..\..\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Please help me, what should I do to make this work?
You can try adding cv2.ocl.setUseOpenCL(False) right after import cv2 and the other imports; this solved my issue.
This is a known bug. You can find more information here https://github.com/opencv/opencv/issues/6081.
Someone is currently fixing this issue: "T-API python support implemented" #6847.
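For clarity, the workaround from the answer above sits at the top of the script, something like this (a minimal sketch; the rest of the ORB code stays unchanged):
import numpy as np
import cv2
from matplotlib import pyplot as plt
# work around the OpenCV 3.0 NumpyAllocator / T-API issue
cv2.ocl.setUseOpenCL(False)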

TypeError: Required argument 'outImg' (pos 6) not found

When I run my python code
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('/home/shar/home.jpg',0) # queryImage
img2 = cv2.imread('/home/shar/home2.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
if m.distance < 0.75*n.distance:
good.append([m])
# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,flags=2)
plt.imshow(img3),plt.show()
From this line
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,flags=2)
I get this error
TypeError: Required argument 'outImg' (pos 6) not found
I am using python3 and opencv3
You seem to be following this tutorial page (based on the code you've shown in this and your two related questions 1, 2).
The function documentation is here (although I note it is still labelled "beta") and implies that outImg is optional. However, the python error message is explicit: an argument is required in position 6, and it is named outImg in the function signature. I suspect the documentation may not exactly match the code requirements. It appears that the signature of the C++ code that the python binding is calling has no default value for outImg, so that argument needs to be supplied.
Note that you can inspect the doc string for the actual binding in the python3 interpreter (if it exists) by looking at <function_name>.__doc__. In this case, you can see that outImg is not shown as optional. Here is the output from my installation:
>>> cv2.drawMatchesKnn.__doc__
'drawMatchesKnn(img1, keypoints1, img2, keypoints2, matches1to2, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]) -> outImg'
Solution (note: verified on a Windows install, not Linux)
You might note the last example on that tutorial, which uses the following code - passing in None in the place of outImg. I think that will work for your case also.
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
You don't need to pass the whole draw_params dict; you could try just passing flags, i.e.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,flags=2)
I have verified this on a fresh install of OpenCV 3 (albeit on Windows, using a prebuilt binary)
Okay guys, I am a newbie too, and after hours of research online and learning a lot, this appears to be a bug known as the (-215) NumpyAllocator error. Many sites will suggest you open the cv2.cpp file and comment out line 163; my suggestion is that if you are using OpenCV 3.1, downgrade to OpenCV 3.0.0.
The bug seems to be within OpenCV 3.1. In addition, the code for using the ORB algorithm documented on OpenCV.org is a bit outdated, where it states:
# Initiate ORB detector
orb = cv2.ORB() # note: you will get an error, as this has now changed to:
orb = cv2.ORB_create()
Here is my example of the code using OpenCV 3.0.0 on Windows 10:
# Example of Brute Force matching base on ORB Algorithm
#Modify Author : Waheed Rafiq R&D student Birmingham City University UK
#Original author : OpenCV.org
#Date Updated : 21/04/2016 : 13:45
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('wr-pb.jpg',0) # queryImage
img2 = cv2.imread('Waheed.jpg',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB_create()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches ,None, flags=2)
plt.imshow(img3),plt.show()
I hope this helps. I love Stack Overflow; it's the best resource out there on the internet.
My code, img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, flags=2), worked after I added the None keyword parameter for outImg, following the pattern img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches, None, flags=2).
This is probably a bug. What you can do is pass the 6th argument, outImg, as None:
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
I was having a similar kind of problem when experimenting with SIFT. I was able to solve it when I used None as an argument.
