Problem with using cv2.pointPolygonTest() and cv2.polylines() - python

I am trying to draw a shape and then check whether a point is inside it. I thought using cv2.polylines() to draw the shape and cv2.pointPolygonTest() to test the point should work, but I am getting an error that is not very informative.
Traceback (most recent call last):
File "C:\Users\XXX\Desktop\Heatmap\cvtest.py", line 32, in <module>
dist = cv2.pointPolygonTest(cv2.polylines(img,[pts],True,(0, 255 , 0), 2), (52,288), False)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\geometry.cpp:103: error: (-215:Assertion failed) total >= 0 && (depth == CV_32S || depth == CV_32F) in function 'cv::pointPolygonTest'
I am guessing the shape created with cv2.polylines() is not a contour. What would be the correct way to do it then? My current code:
import cv2
import numpy as np
img = cv2.imread('image.png')
pts = np.array([[18,306],[50,268],[79,294],[165,328],[253,294],[281,268],[313,306],[281,334],[270,341],[251,351],[230,360],[200,368],[165,371],[130,368],[100,360],[79,351],[50,334],[35,323]], np.int32)
pts = pts.reshape((-1,1,2))
dist = cv2.pointPolygonTest(cv2.polylines(img,[pts],True,(0, 255 , 0), 2), (52,288), False)
#print(dist)
cv2.imshow('test', img)
cv2.waitKey()
cv2.destroyAllWindows()

polylines is not the right input: it is used to draw a shape (docs).
pointPolygonTest instead needs the contour itself as input (docs).
dist = cv2.pointPolygonTest(pts, (52,288), False) will return 1.0, meaning the point is inside the contour.
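If you pass True as the third argument (measureDist) instead, pointPolygonTest should return the signed distance from the point to the nearest contour edge (positive inside, negative outside, zero on the edge):
dist = cv2.pointPolygonTest(pts, (52,288), True)  # a positive float here, since the point is inside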
Note that you can perform a pointPolygonTest without an image. But if you want to draw the results, you can use this code as a starter:
import cv2
import numpy as np
#create background
img = np.zeros((400,400),dtype=np.uint8)
# define shape
pts = np.array([[18,306],[50,268],[79,294],[165,328],[253,294],[281,268],[313,306],[281,334],[270,341],[251,351],[230,360],[200,368],[165,371],[130,368],[100,360],[79,351],[50,334],[35,323]], np.int32)
pts = pts.reshape((-1,1,2))
# draw shape
cv2.polylines(img,[pts],True,(255), 2)
# draw point of interest
cv2.circle(img,(52,288),1,(127),3)
# perform pointPolygonTest
dist = cv2.pointPolygonTest(pts, (52,288), False)
print(dist)
# show image
cv2.imshow('test', img)
cv2.waitKey()
cv2.destroyAllWindows()
Result: the window shows the drawn contour with the point of interest inside it, and dist prints 1.0.

Related

error: (-5:Bad argument) Wrong shapes for given matrices. Was size(src) = (1,41216), size(W) = (10304,100). in function 'cv::LDA::subspaceProject'

I get the error below when running the following code:
import os
import cv2 as cv
import numpy as np
import training_dataset_Eigenfaces as train

def FaceRecognition(path):
    # Load the trained data
    x, y, names = train.LoadImages(path)
    model = cv.face.EigenFaceRecognizer_create()
    model.train(x, y)
    # Turn on the camera
    # Create a cascade classifier
    while True:
        # Read a frame from the camera
        ret, frame = camera.read()
        # Determine whether the image has been read successfully
        if ret:
            # This part is important:
            # convert to grayscale
            gray_img = cv.cvtColor(frame, cv.COLOR_BGR2BGRA)
            # Face detection
            face = face_detector.detectMultiScale(gray_img, scaleFactor=1.3, minNeighbors=5, minSize=(70, 70))
            for x, y, w, h in face:
                # Draw a rectangle on the original image
                frame = cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                roi_gray = gray_img[y:y + h, x:x + w]
                # Convert the image to a size suitable for recognition
                roi_gray = cv.resize(roi_gray, (92, 112), interpolation=cv.INTER_LINEAR)
                params = model.predict(roi_gray)  # error comes here
                print("successful!")
                # Display the name above the rectangle
                cv.putText(frame, names[params[0]], (x, y - 20), cv.FONT_HERSHEY_SIMPLEX, 1, 255, 2)
            cv.imshow('Dynamic', frame)
            # exit
the error that pops up is:
File "D:\opencv\python\Face Recognition\face-recognition-Eigenfaces.py", line 54, in FaceRecognition
params = model.predict(roi_gray)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\lda.cpp:183: error: (-5:Bad argument) Wrong shapes for given matrices. Was size(src) = (1,41216), size(W) = (10304,100). in function 'cv::LDA::subspaceProject'
In this program, the error tells me that the matrix is not the right size, but which size is wrong? Is it the resize line, roi_gray = cv.resize(roi_gray, (92, 112), interpolation=cv.INTER_LINEAR)? The purpose of that statement is to reduce the size of the image to be processed. So may I ask how to modify my program? Thank you for your answer.
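For reference, the sizes in the error message decompose cleanly: the trained eigenface model expects 92 × 112 = 10304 pixels (the 10304 in size(W)), while the ROI passed to predict contains 92 × 112 × 4 = 41216 values, i.e. four channels. A minimal sketch of the likely fix, assuming the cv.COLOR_BGR2BGRA conversion is the culprit:
# Convert to grayscale (1 channel), not BGRA (4 channels):
gray_img = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)   # resized ROI -> 92*112 = 10304 values
# cv.COLOR_BGR2BGRA would give 92*112*4 = 41216 values, the size in the error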

Getting Error: (-215:Assertion failed) When Using seamlessClone

For clarity, I am a beginner in Python. I'm creating a script that replaces the eyes in a series of images in a folder with a robotic eye. I'm stuck on the seamlessClone function, which keeps throwing this error.
The code is incomplete as far as writing out the entire script (it only clones one eye so far), but everything should be working. I've been stuck on this one for 6 hours and thought I would ask on here.
I've tried checking my filenames and paths, checking whether the image file was corrupt by using print, and changing the image files to different dimensions and filetypes (png, jpg), and so on.
I've also tried converting every numpy array (cv2_image, Eye) into a 32-bit array to see if that was the issue, but nothing worked.
# Import
from PIL import Image, ImageDraw, ImageFilter
from statistics import mean
import face_recognition
import cv2
import glob
import numpy as np

# Open Eye Image
Eye = cv2.imread('eye.jpg')

# Loop Through Images
for filename in glob.glob('images/*.jpg'):
    cv2_image = cv2.imread(filename)
    image = face_recognition.load_image_file(filename)
    face_landmarks_list = face_recognition.face_landmarks(image)
    for facemarks in face_landmarks_list:
        # Get Eye Data
        eyeLPoints = facemarks['left_eye']
        eyeRPoints = facemarks['right_eye']
        npEyeL = np.array(eyeLPoints)
        npEyeR = np.array(eyeRPoints)

        # Create Mask
        mask = np.zeros(cv2_image.shape, cv2_image.dtype)
        mask.fill(0)
        poly = np.array([eyeLPoints])
        cv2.fillPoly(mask, [poly], (255, 255, 255))

        # Get Eye Image Centers
        eyeLCenter = npEyeL.mean(axis=0).astype("int")
        x1 = eyeLCenter[0]
        x2 = eyeLCenter[1]
        print(x1, x2)

        # Get Head Rotation (No code yet)

        # Apply Seamless Clone To Main Image
        saveImage = cv2.seamlessClone(Eye, cv2_image, mask, (x1, x2), cv2.NORMAL_CLONE)

        # Output and Save
        cv2.imwrite('output/output.png', saveImage)
Here is the full error:
Traceback (most recent call last):
File "stack.py", line 45, in <module>
saveImage = cv2.seamlessClone(Eye, cv2_image, mask ,(x1, x2) , cv2.NORMAL_CLONE)
cv2.error: OpenCV(4.1.1) /io/opencv/modules/core/src/matrix.cpp:466: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'Mat'
The result I'm expecting is for the eye image to be cloned onto the original image, but this error keeps being thrown, preventing me from completing the script. If there's any hint as to what's going on, I feel the culprit is the "Eye" file, but I could be wrong. Any help would be appreciated.
There are two sets of keypoints: one for the eye in the destination image, and one for the eye in the source image.
You are using the wrong set of keypoints to make the mask for the cv2.seamlessClone() function: the mask has to describe the eye in the SOURCE image, so (among the other problems in your code) you should be building it from the source image's keypoints, not the destination's.
This is the result. You can see there should also be a resize function to match the sizes of the eyes:
This is the code that I used:
import face_recognition
import cv2
import numpy as np

# Open Eye Image (source)
eye = cv2.imread('eye.jpg')
# Open Face Image (destination)
face = cv2.imread('face.jpeg')

# Get Keypoints
image = face_recognition.load_image_file('face.jpeg')
face_landmarks_list = face_recognition.face_landmarks(image)
for facemarks in face_landmarks_list:
    # Get Eye Data
    eyeLPoints = facemarks['left_eye']
    eyeRPoints = facemarks['right_eye']
    npEyeL = np.array(eyeLPoints)

    # These points define the contour of the eye in the EYE image
    poly_left = np.array([(51, 228), (100, 151), (233, 102), (338, 110), (426, 160), (373, 252), (246, 284), (134, 268)], np.int32)

    # Create a mask for the eye
    src_mask = np.zeros(face.shape, face.dtype)
    cv2.fillPoly(src_mask, [poly_left], (255, 255, 255))
    cv2.imwrite('src_mask.png', src_mask)

    # Find where the eye should go
    center, r = cv2.minEnclosingCircle(npEyeL)
    center = tuple(np.array(center, int))

    # Clone seamlessly
    output = cv2.seamlessClone(eye, face, src_mask, center, cv2.NORMAL_CLONE)
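As a sketch of the resize step mentioned above (hypothetical scaling: it assumes the eye spans the full width of eye.jpg and should be scaled to the detected eye's diameter, 2 * r from cv2.minEnclosingCircle; the mask must be resized by the same factor so it stays aligned with the source):
    scale = (2 * r) / eye.shape[1]
    eye_scaled = cv2.resize(eye, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    src_mask_scaled = cv2.resize(src_mask, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)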

OpenCV Stereo Camera Calibration Error: Assertion failed

I know this question has been asked a few times, but the answers don't solve my problem.
I want to calibrate a pair of cameras to use as stereo input.
But when I run the code I get this error message:
OpenCV(3.4.1) Error: Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total())) in collectCalibrationData, file /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp, line 3133
Traceback (most recent call last):
File "/Users/MyName/Pycharm/Project/calibration.py", line 342, in <module>
TERMINATION_CRITERIA )
cv2.error: OpenCV(3.4.1) /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp:3133: error: (-215) nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total()) in function collectCalibrationData
My code is:
def distortion_matrix(path, objpoints, imgpoints):
    for item in os.listdir(path):
        if item.endswith(".jpg"):
            cap = cv2.VideoCapture(path + item, cv2.CAP_IMAGES)
            ret, img = cap.read()  # Capture frame-by-frame
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            keypoints = blobDetector.detect(gray)  # Detect blobs.
            im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 255, 0),
                                                  cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
            im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
            ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4, 11), None,
                                               flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
            if ret == True:
                objpoints.append(objp)
                corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11, 11), (-1, -1),
                                            criteria)
                imgpoints.append(corners2)
            cap.release()

_, leftCameraMatrix, leftDistortionCoefficients, _, _, objpoints0, imgpoints0 = distortion_matrix("./calibration/left/", objpoints0, imgpoints0)
_, rightCameraMatrix, rightDistortionCoefficients, _, _, objpoints1, imgpoints1 = distortion_matrix("./calibration/right/", objpoints1, imgpoints1)

(_, _, _, _, _, rotationMatrix, translationVector, _, _) = cv2.stereoCalibrate(objp, imgpoints0, imgpoints1,
                                                                               leftCameraMatrix, leftDistortionCoefficients,
                                                                               rightCameraMatrix, rightDistortionCoefficients,
                                                                               imageSize, None, None, None, None,
                                                                               cv2.CALIB_FIX_INTRINSIC, TERMINATION_CRITERIA)
Most of the time when this error gets thrown, it seems the message refers to arrays (imgpoints and objpoints) that are empty or not evenly filled.
But at the end both have length 20 (I scan 20 images, so this seems right), and every cell of the array holds 44 entries (the circle grid I use has 44 points, so this also seems right).
Edit:
My objp, imgpoints and objpoints are defined like this:
objp = np.zeros((np.prod(pattern_size), 3), np.float32)
objp[0] = (0, 0, 0)
objp[1] = (0, 2, 0)
objp[2] = (0, 4, 0)
objp[3] = (0, 6, 0)
...
objpoints0 = []
objpoints1 = []
imgpoints0 = []
imgpoints1 = []
Edit 2:
If NUM_IMAGES stands for the number of images, I think I've got it now, but only when I add the new axis after I call distortion_matrix().
Then the code is able to complete. I still need to test the results, but at least this problem seems to be solved.
Thank you very much.
You said you are doing stereo calibration; is there any case where some of the points on your grid are not visible from the other camera? This error may appear when one of your views is unable to detect all the points on the calibration pattern. Three points to consider are:
1- Make sure your object points are 3D.
2- Make sure your left points, right points and object points have the same size (number of views).
3- Make sure your left points, right points and object points have the same number of points at each index of the list.
Edit: Your object points objp must contain a list/vector of 3D points per view: currently its shape is (44, 3), but it must be (NUM_IMAGES, 44, 3). You can achieve this with objp = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0), as sketched below.
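A minimal sketch of that reshaping, assuming 20 calibration views as in the question (NUM_IMAGES and the grid-filling step are placeholders):
import numpy as np

NUM_IMAGES = 20                           # number of calibration views
objp = np.zeros((44, 3), np.float32)      # fill with the 44 circle-grid coordinates as above
objp_all = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0)
print(objp_all.shape)                     # (20, 44, 3)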

OpenCV error: bitwise_and throws error that mask and image are not same size

I am trying to apply a mask I have made to an image using OpenCV (3.3.1) in Python (3.6.5) to extract all the skin. I am looping over a photo, checking windows and classifying them using two premade sklearn GMMs. If a window is skin, I change that area of the mask to 255 (True); otherwise I leave it as 0.
I initialized the numpy array that holds the mask before the loop with the same dimensions as the image, but OpenCV keeps saying that the image and mask do not have the same dimensions (output and error message are below). I have seen other somewhat similar problems on the site, but none with solutions that have worked for me.
Here is my code:
# convert the image to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
delta = 6
# create an empty np array to hold the mask
#mask = np.zeros((img.shape[0], img.shape[1], 1))
mask = np.zeros(img.shape[:2])
# loop through the image and classify each window
for i in range(0, hsv.shape[0], delta):
    for j in range(0, hsv.shape[1], delta):
        # get a copy of the window
        arr = np.copy(hsv[i:i+delta, j:j+delta, 0])
        # create a normalized hue histogram for the window
        if arr.sum() > 0:
            arr = np.histogram(np.ravel(arr / arr.sum()), bins=100, range=(0, 1))
        else:
            arr = np.histogram(np.ravel(arr), bins=100, range=(0, 1))
        # take the histogram counts and reshape them
        arr = arr[0].reshape(1, -1)
        # get the probabilities that the window is skin or not skin
        skin = skin_gmm.predict_proba(arr)
        not_skin = background_gmm.predict_proba(arr)
        if skin > not_skin:
            # because the window is more likely skin than not skin,
            # fill that window of the mask with 255
            mask[i:i+delta, j:j+delta].fill(255)
# apply the mask to the original image to extract the skin
print(mask.shape)
print(img.shape)
masked_img = cv2.bitwise_and(img, img, mask=mask)
The output is:
(2816, 2112)
(2816, 2112, 3)
OpenCV Error: Assertion failed ((mtype == 0 || mtype == 1) &&
_mask.sameSize(*psrc1)) in cv::binary_op, file C:\ci\opencv_1512688052760
\work\modules\core\src\arithm.cpp, line 241
Traceback (most recent call last):
File "skindetector_hist.py", line 183, in <module>
main()
File "skindetector_hist.py", line 173, in main
skin = classifier_mask(img, skin_gmm, background_gmm)
File "skindetector_hist.py", line 63, in classifier_mask
masked_img = cv2.bitwise_and(img, img, mask = mask)
cv2.error: C:\ci\opencv_1512688052760\work\modules\core\src
\arithm.cpp:241: error: (-215) (mtype == 0 || mtype == 1) &&
_mask.sameSize(*psrc1) in function cv::binary_op
As you can see in the output, the image and mask have the same width and height. I have also tried giving the mask a depth of one (line 5) but that didn't help. Thank you for any help!
It is not only complaining about the size of the mask; it is also complaining about its type. The error:
OpenCV Error: Assertion failed ((mtype == 0 || mtype == 1) &&
_mask.sameSize(*psrc1))
means that either the type or the size of the mask (which in your case is equal) does not match. In the documentation we see:
mask – optional operation mask, 8-bit single channel array, that
specifies elements of the output array to be changed.
This is consistent with the error asking for type 0 (CV_8U) or 1 (CV_8S).
Also, even though the docs do not say so, img should not be float either, since that will not give the desired result (it will probably run anyway).
The fix is probably as simple as changing:
mask = np.zeros(img.shape[:2])
to
mask = np.zeros(img.shape[:2], dtype=np.uint8)
A small test shows what type you get otherwise:
np.zeros((10,10)).dtype
gives you dtype('float64'), which means doubles, not 8-bit integers.
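A minimal repro sketch of the dtype issue (standalone toy arrays, not the question's data):
import cv2
import numpy as np

img = np.full((4, 4, 3), 200, dtype=np.uint8)
bad_mask = np.zeros((4, 4))                      # float64: triggers the (-215) assertion
good_mask = np.zeros((4, 4), dtype=np.uint8)     # CV_8U: accepted
good_mask[1:3, 1:3] = 255

# cv2.bitwise_and(img, img, mask=bad_mask)       # raises cv2.error
out = cv2.bitwise_and(img, img, mask=good_mask)  # keeps only the masked pixels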

OpenCV/Python: Error in cv2.add() to add a number to image

I'm using OpenCV/Python and I'm trying to add a number to an image.
My code is:
import cv2
import numpy as np
import math
from matplotlib import pyplot as plt
img = cv2.imread('messi.jpg',0)
img2 = img
img2 = cv2.add(img2, np.uint8([50]))
I got the following error:
OpenCV Error: Assertion failed (type2 == CV_64F && (sz2.height == 1 || sz2.heigh
t == 4)) in cv::arithm_op, file C:\builds\master_PackSlaveAddon-win64-vc12-stati
c\opencv\modules\core\src\arithm.cpp, line 1989
Traceback (most recent call last):
File "lab3_examples.py", line 27, in <module>
img2 = cv2.add(img, np.uint8([50]))
cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\core
\src\arithm.cpp:1989: error: (-215) type2 == CV_64F && (sz2.height == 1 || sz2.h
eight == 4) in function cv::arithm_op
The image I'm using is messi.jpg
Instead, if I use img2 = np.add(img2, np.uint8([50])), intensities that pass 255 wrap around modulo 256, e.g. 260 % 256 = 4, so such a pixel's value is set to 4 instead of 255. As a result, white pixels turn black!
Here is the faulty resulting image.
Any ideas please?
In C++, saturate_cast(...) is used for this purpose.
In Python, simply
img2 = cv2.add(img2, 50)
will do, if you want to increase the brightness of a grayscale image. If applied to a color image, the color balance will be shifted. For a color image, to preserve the balance, a good answer is the one by Alex, Bill Grates:
How to fast change image brightness with python + OpenCV?
The only remark: the following part of that code was not necessary in my case (Python 3, OpenCV 4):
v[v > 255] = 255
v[v < 0] = 0
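A quick check of the difference between OpenCV's saturating add and NumPy's wrapping add (a standalone sketch using a single-pixel array):
import cv2
import numpy as np

px = np.array([[250]], dtype=np.uint8)
print(cv2.add(px, 50))   # [[255]] -- cv2.add saturates at 255
print(px + 50)           # [[44]]  -- NumPy wraps: 300 % 256 = 44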
I would suggest converting the BGR image to HSV:
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
Then split the channels using:
h_channel, s_channel, v_channel = cv2.split(hsv)
Now adjust the v_channel, which holds the brightness:
v_channel = cv2.add(v_channel, 20)  # cv2.add saturates instead of wrapping; try other values as well
Now merge the channels back together again:
merged = cv2.merge((h_channel, s_channel, v_channel))
Finally convert the image back to BGR and display it:
Final_image = cv2.cvtColor(merged, cv2.COLOR_HSV2BGR)
cv2.imshow('Final output', Final_image)
You will see an enhanced or a dimmed image depending on the value you add.
Hope it helps.... :D
