OpenCV Homography gives error: The input arrays should have at least - python

I'm following this https://www.youtube.com/watch?v=I8tHLZDDHr4&list=WL&index=1&ab_channel=Pysource tutorial on YouTube for my own project. The answers I found basically had the exact same code as mine, but my code gives me this error:
Traceback (most recent call last):
  File "c:\Users\NoName69\Desktop\ProJects\PyJects\Homography\test.py", line 31, in <module>
    matrix, mask = cv2.findHomography(query_p, train_p, cv2.RANSAC, 5.0)
cv2.error: OpenCV(4.5.1) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build 5rb_9df3\opencv\modules\calib3d\src\fundam.cpp:385: error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function "cv::findHomography"
Code:
from cv2 import cv2
import numpy as np

img1 = cv2.cvtColor(cv2.imread("Aim Circle.png"), cv2.COLOR_RGB2GRAY)
img2 = cv2.cvtColor(cv2.imread("Map.png"), cv2.COLOR_RGB2GRAY)

sift = cv2.xfeatures2d.SIFT_create()
kp_img1, desc_img1 = sift.detectAndCompute(img1, None)
kp_img2, desc_img2 = sift.detectAndCompute(img2, None)

img1 = cv2.drawKeypoints(img1, kp_img1, img1)
img2 = cv2.drawKeypoints(img2, kp_img2, img2)

index_params = dict(algorithm=0, trees=5)
search_params = dict()
flann = cv2.FlannBasedMatcher(index_params, search_params)

while True:
    matches = flann.knnMatch(desc_img1, desc_img2, k=2)
    good_kp = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good_kp.append(m)
    if len(good_kp) > 2:
        query_p = np.float32([kp_img1[m.queryIdx].pt for m in good_kp]).reshape(-1, 1, 2)
        train_p = np.float32([kp_img2[m.trainIdx].pt for m in good_kp]).reshape(-1, 1, 2)
        matrix, mask = cv2.findHomography(query_p, train_p, cv2.RANSAC, 5.0)
    if cv2.waitKey(25) & 0xFF == ord(" "):
        cv2.destroyAllWindows()
        break
OpenCV also gives this warning:
[ WARN:0] global c:\users\appveyor\appdata\local\temp\1\pip-req-build-5rb_9df3\opencv_contrib\modules\xfeatures2d\misc\python\shadow_sift.hpp (15) cv::xfeatures2d::SIFT_create DEPRECATED: cv.xfeatures2d.SIFT_create() is deprecated due SIFT tranfer to the main repository. https://github.com/opencv/opencv/issues/16736

I have found the problem myself: OpenCV cannot find enough features in the file "Aim Circle.png", and so it gives this error. If anyone knows another way to find where a similar picture is embedded in another picture, it would be greatly appreciated.

I think in
if len(good_kp) > 2:
you have to substitute the value 2 with a larger one (cv2.findHomography needs at least 4 corresponding point pairs), for example:
if len(good_kp) > 10:
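A minimal sketch of that guard, reusing the variable names from the code above (the threshold of 10 is just an assumed value; any value of at least 4 satisfies findHomography):

MIN_MATCH_COUNT = 10  # assumption: findHomography needs at least 4 point pairs

matches = flann.knnMatch(desc_img1, desc_img2, k=2)
good_kp = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good_kp) >= MIN_MATCH_COUNT:
    query_p = np.float32([kp_img1[m.queryIdx].pt for m in good_kp]).reshape(-1, 1, 2)
    train_p = np.float32([kp_img2[m.trainIdx].pt for m in good_kp]).reshape(-1, 1, 2)
    matrix, mask = cv2.findHomography(query_p, train_p, cv2.RANSAC, 5.0)
else:
    # not enough matches between the two images; skip instead of crashing
    print("Not enough good matches:", len(good_kp))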

Related

yunet.onnx reading issue in opencv

I would like to detect faces with masks; here is one example image. The classical side face detector

faceCascade = cv2.CascadeClassifier("haarcascade_profileface.xml")

does not detect faces well, therefore I searched a little more and found the following documentation:
yunet documentation
I have tried to implement this model in my code:
import numpy as np
import matplotlib.pyplot as plt
import cv2
import math

#faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
#faceCascade = cv2.CascadeClassifier("haarcascade_profileface.xml")
frame = cv2.imread("distance_Measure.jpg")
frame = cv2.resize(frame, (500, 500))
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

face_detector = cv2.FaceDetectorYN_create("yunet.onnx", "", (0, 0))
#faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(20, 20))
_, faces = face_detector.detect(gray)

l = []
lf = []
i = 1
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    s = str(i)
    cv2.putText(frame, s, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    i += 1
    l = []
    l.append(x)
    l.append(y)
    lf.append(l)
    print(l)
print(lf)

close_person = ""
for i in range(len(lf)):
    for j in range(i + 1, len(lf)):
        d = math.sqrt((lf[j][1] - lf[i][1]) ** 2 + ((lf[j][0] - lf[i][0]) ** 2))
        print("P", i + 1, "-P", j + 1, "=", d)
        if d < 150:
            close_person = "Person " + str(i + 1) + " and Person " + str(j + 1) + ";"
            cv2.line(frame, (lf[i][0], lf[i][1]), (lf[j][0], lf[j][1]), (0, 0, 255), 2)
            close_person += " are not following social distance"
            print(close_person)

cv2.imshow("Displayed_Persons", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
plt.show()
but it returns the following error:

Traceback (most recent call last):
  File "C:\Users\User\PycharmProjects\AI_Project\Social_Distance_Measurement_Example.py", line 10, in <module>
    face_detector = cv2.FaceDetectorYN_create("yunet.onnx", "", (0, 0))
cv2.error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:194: error: (-5:Bad argument) Can't read ONNX file: yunet.onnx in function 'cv::dnn::dnn4_v20211220::ONNXImporter::ONNXImporter'
I have searched again and found the following documentation:
searched solution
but I could not work out how to solve the problem from it. Please help me.
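A minimal sketch of how cv2.FaceDetectorYN is typically set up in OpenCV 4.5.4+ (the file-existence check, the setInputSize call and the output-row layout are assumptions about the API rather than something taken from the question; the "Can't read ONNX file" error usually means the yunet.onnx file is missing, incomplete or unreadable at the given path):

import os
import cv2

model_path = "yunet.onnx"  # must point at a complete, valid ONNX file
assert os.path.isfile(model_path), "model file not found: " + model_path

frame = cv2.imread("distance_Measure.jpg")
frame = cv2.resize(frame, (500, 500))

detector = cv2.FaceDetectorYN_create(model_path, "", (0, 0))
detector.setInputSize((frame.shape[1], frame.shape[0]))  # (width, height) of the image

_, faces = detector.detect(frame)  # expects a 3-channel BGR image, not grayscale
if faces is not None:
    for face in faces:
        # each detection row is [x, y, w, h, 5 landmark pairs, score]
        x, y, w, h = [int(v) for v in face[:4]]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)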

OpenCV Stereo Camera Calibration Error: Assertion failed

I know this question has been asked a few times, but the answers don't solve my problem.
I want to calibrate a pair of cameras to use as stereo input.
But when I run the code I get this error message:
OpenCV(3.4.1) Error: Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total())) in collectCalibrationData, file /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp, line 3133
Traceback (most recent call last):
  File "/Users/MyName/Pycharm/Project/calibration.py", line 342, in <module>
    TERMINATION_CRITERIA )
cv2.error: OpenCV(3.4.1) /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp:3133: error: (-215) nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total()) in function collectCalibrationData
My code is:
def distortion_matrix(path, objpoints, imgpoints):
    for item in os.listdir(path):
        if item.endswith(".jpg"):
            cap = cv2.VideoCapture(path + item, cv2.CAP_IMAGES)
            ret, img = cap.read()  # Capture frame-by-frame
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

            keypoints = blobDetector.detect(gray)  # Detect blobs.
            im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 255, 0),
                                                  cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
            im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)

            ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4, 11), None,
                                               flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
            if ret == True:
                objpoints.append(objp)
                corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11, 11), (-1, -1),
                                            criteria)
                imgpoints.append(corners2)
            cap.release()

_, leftCameraMatrix, leftDistortionCoefficients, _, _, objpoints0, imgpoints0 = distortion_matrix("./calibration/left/", objpoints0, imgpoints0)
_, rightCameraMatrix, rightDistortionCoefficients, _, _, objpoints1, imgpoints1 = distortion_matrix("./calibration/right/", objpoints1, imgpoints1)

(_, _, _, _, _, rotationMatrix, translationVector, _, _) = cv2.stereoCalibrate(
    objp, imgpoints0, imgpoints1,
    leftCameraMatrix, leftDistortionCoefficients,
    rightCameraMatrix, rightDistortionCoefficients,
    imageSize, None, None, None, None,
    cv2.CALIB_FIX_INTRINSIC, TERMINATION_CRITERIA)
Most of the time when this error gets thrown, it seems to refer to arrays (imgpoints and objpoints) which are empty or not evenly filled.
But at the end both have length 20 (I scan 20 images, so this seems right), and every cell of the array stores 44 points (the circle grid I use has 44 points, so this also seems right).
Edit:
My objp, imgpoints and objpoints are defined like this:
objp = np.zeros((np.prod(pattern_size), 3), np.float32)
objp[0] = (0, 0, 0)
objp[1] = (0, 2, 0)
objp[2] = (0, 4, 0)
objp[3] = (0, 6, 0)
...
objpoints0 = []
objpoints1 = []
imgpoints0 = []
imgpoints1 = []
Edit 2:
If NUM_IMAGES stands for the number of images, I think I've got it now, but only when I add the new axis after I call distortion_matrix().
Then the code is able to complete. I need to test the results, but at least this problem seems to be solved.
Thank you very much.
You said you are doing stereo calibration; is there any case where some of the points on your grid are not visible from the other camera? This error may appear when one of your views is unable to detect all points on the calibration pattern. Three points to consider:
1. Make sure your object points are 3D.
2. Make sure your left points, right points and object points have the same size (number of views).
3. Make sure your left points, right points and object points have the same number of points at each index of the list.
Edit: Your object points objp must contain a list/vector of 3D points per view. Currently its shape is something like (44, 3); it must be (NUM_IMAGES, 44, 3). You can achieve this with objp = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0)
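A minimal sketch of that repeat-per-view fix, reusing the names from the code above (NUM_IMAGES = 20 is an assumption based on the 20 scanned images mentioned in the question; objp, imgpoints0, imgpoints1 and the camera matrices come from the earlier calibration code):

import numpy as np
import cv2

NUM_IMAGES = 20  # assumed: one entry per scanned image pair

# objp has shape (44, 3): one 3D point per circle of the 4x11 asymmetric grid.
# stereoCalibrate expects one such array per view, i.e. shape (NUM_IMAGES, 44, 3).
objp_per_view = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0)

(_, _, _, _, _, rotationMatrix, translationVector, _, _) = cv2.stereoCalibrate(
    objp_per_view, imgpoints0, imgpoints1,
    leftCameraMatrix, leftDistortionCoefficients,
    rightCameraMatrix, rightDistortionCoefficients,
    imageSize, None, None, None, None,
    cv2.CALIB_FIX_INTRINSIC, TERMINATION_CRITERIA)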

OpenCV error while feature matching with FLANN

I have a piece of code for matching features between a movie clip and a reference image. It generally works well, but sometimes it throws an error in the middle of a clip. Since it's always in the same clips, and at the same time, I guess there is something wrong with the frame it tries to analyze.
My code:
cap = cv2.VideoCapture(clip_file)
img1 = cv2.imread(ref_image, 0)

while cap.isOpened():
    # read the frame and convert to gray-scale
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Initiate ORB detector
    orb = cv2.ORB_create()

    # find the keypoints and descriptors with ORB
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(gray, None)

    # FLANN parameters
    FLANN_INDEX_LSH = 6
    index_params = dict(algorithm=FLANN_INDEX_LSH,
                        table_number=6,       # 12
                        key_size=12,          # 20
                        multi_probe_level=1)  # 2
    search_params = dict(checks=50)  # or pass empty dictionary

    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    cv2.imshow('img3', frame)
The error it sometimes throws while the clip is playing:
Traceback (most recent call last):
  File "movie_test.py", line 81, in <module>
    flann_movie('data/movie.avi','data/ref.jpg')
  File "movie_test.py", line 35, in flann_movie
    matches = flann.knnMatch(des1,des2,k=1)
cv2.error: OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\flann\src\miniflann.cpp:317: error: (-5:Bad argument) Only continuous arrays are supported in function 'cv::flann::buildIndex_'
Any suggestions on what causes the error would be appreciated. Thanks.
I think the error is caused by frames of your video in which no trace of the original feature template is detected. Check the intermediate matching results for each frame, and if that turns out to be the cause, change the FLANN parameters or simply skip those frames before the error occurs.
You have the following:

matches = flann.knnMatch(des1, des2, k=2)

With k=2, each query element needs 2 nearest neighbors, so each list of descriptors needs to contain more than 2 elements:

if des1 is not None and len(des1) > 2 and des2 is not None and len(des2) > 2:
    matches = flann.knnMatch(des1, des2, k=2)

(k-nearest neighbors algorithm)
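Put together with the loop from the question, a minimal sketch (it reuses cap, img1, index_params and search_params from the code above; the ret check, the early continue, and moving the ORB/FLANN setup out of the loop are additions, not part of the original code):

orb = cv2.ORB_create()
flann = cv2.FlannBasedMatcher(index_params, search_params)
kp1, des1 = orb.detectAndCompute(img1, None)  # reference descriptors, computed once

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # end of the clip or an unreadable frame

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp2, des2 = orb.detectAndCompute(gray, None)

    # skip frames where ORB finds too few (or no) descriptors;
    # these are the frames that make knnMatch(k=2) fail
    if des1 is None or len(des1) <= 2 or des2 is None or len(des2) <= 2:
        continue

    matches = flann.knnMatch(des1, des2, k=2)
    cv2.imshow('img3', frame)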

Python - Writing your own function with opencv giving an error

import numpy as np
import cv2

def resize(image, percentage):
    img = image
    fy = percentage
    fx = percentage
    img2 = cv2.resize(img, (0,0), fx, fy)
    return cv2.img2

img = cv2.imread('test.png')
img2 = resize(img, 0.45)
cv2.imshow('image', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Traceback (most recent call last):
  File "C:\Users\Jay\Desktop\Portable Python\opencvprogram_ver1.py", line 14, in <module>
    img2 = resize(img, 0.45)
  File "C:\Users\Jay\Desktop\Portable Python\opencvprogram_ver1.py", line 10, in resize
    img2 = cv2.resize(img, (0,0), fx, fy)
error: C:\builds\master_PackSlaveAddon-win32-vc12-static\opencv\modules\imgproc\src\imgwarp.cpp:3209: error: (-215) dsize.area() > 0 || (inv_scale_x > 0 && inv_scale_y > 0) in function cv::resize
Dear Python Council members,
I've been learning Python and OpenCV and I ran into a problem here.
I'm trying to see if it's possible to wrap an OpenCV function in my own function, but it seems like I'm doing this wrong. The traceback says something about dsize.area in cv::resize, but this error message means very little to me because I do not know how this works in the bigger picture.
Can someone guide me in the right direction so the program works as I'd expect?
Thank you very much.
What you have looks almost right. Just change the last two lines of the function to:

img2 = cv2.resize(img, (0,0), fx=fx, fy=fy)  # pass fx and fy as keyword arguments
return img2  # cv2.img2 was just a typo

Since fx and fy are not the 3rd and 4th parameters of cv2.resize (those are dst and fx), passing them positionally leaves the fy scale factor at 0, which is exactly what the assertion in the traceback complains about. Specifying them as keyword arguments fixes this.
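For completeness, a minimal sketch of the corrected function, using the same 'test.png' input as in the question:

import cv2

def resize(image, percentage):
    # with dsize=(0, 0), OpenCV computes the output size from the fx/fy
    # scale factors, so both must be passed as keyword arguments and be > 0
    return cv2.resize(image, (0, 0), fx=percentage, fy=percentage)

img = cv2.imread('test.png')
img2 = resize(img, 0.45)
cv2.imshow('image', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()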

Python openCV: kmeans example not working

I am working my way through the OpenCV examples and sometimes the examples won't run. In many cases I just have to make small changes and then it works. In this case I have found no solution so far. If I run the following code I get an error on the kmeans line. I checked the data types and everything seems to be alright. Does anyone have an idea what is going wrong? Thanks!
Code example from https://github.com/Itseez/opencv:
'''
Keyboard shortcuts:
    ESC   - exit
    space - generate new distribution
'''
import numpy as np
import cv2
from gaussian_mix import make_gaussians

if __name__ == '__main__':
    cluster_n = 5
    img_size = 512
    print __doc__

    # generating bright palette
    colors = np.zeros((1, cluster_n, 3), np.uint8)
    colors[0,:] = 255
    colors[0,:,0] = np.arange(0, 180, 180.0/cluster_n)
    colors = cv2.cvtColor(colors, cv2.COLOR_HSV2BGR)[0]

    while True:
        print 'sampling distributions...'
        points, _ = make_gaussians(cluster_n, img_size)

        term_crit = (cv2.TERM_CRITERIA_EPS, 30, 0.1)
        ret, labels, centers = cv2.kmeans(points, cluster_n, None, term_crit, 10, 0)

        img = np.zeros((img_size, img_size, 3), np.uint8)
        for (x, y), label in zip(np.int32(points), labels.ravel()):
            c = map(int, colors[label])
            cv2.circle(img, (x, y), 1, c, -1)

        cv2.imshow('gaussian mixture', img)
        ch = 0xFF & cv2.waitKey(0)
        if ch == 27:
            break
    cv2.destroyAllWindows()
Error Message:
TypeError: an integer is required
I was using the example from the OpenCV 3.x sources while I am running 2.4.8. The cv2.kmeans syntax is different in 2.x:

ret, labels, centers = cv2.kmeans(points, cluster_n, term_crit, 10, 0)
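A minimal sketch of a wrapper that works on both branches (the helper name run_kmeans is hypothetical and the version check is just one way to dispatch; points is the Nx2 float32 array produced by make_gaussians):

import cv2

def run_kmeans(points, cluster_n, term_crit, attempts=10, flags=0):
    # OpenCV 2.x:  cv2.kmeans(data, K, criteria, attempts, flags)
    # OpenCV 3.x+: cv2.kmeans(data, K, bestLabels, criteria, attempts, flags)
    major = int(cv2.__version__.split('.')[0])
    if major >= 3:
        return cv2.kmeans(points, cluster_n, None, term_crit, attempts, flags)
    return cv2.kmeans(points, cluster_n, term_crit, attempts, flags)

# usage inside the loop above:
# ret, labels, centers = run_kmeans(points, cluster_n, term_crit, 10, 0)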
