Python OpenCV Hough Circles, fatal errors in storage access - python

I am trying to detect circles using a Hough transform with OpenCV.
When there is no circle in the scene I get a null pointer error, which I think I can handle with exceptions.
However, when there is a circle, trying to manipulate the storage object gives me errors.
For instance, I have tried to convert it to a numpy array, and most (but not all) of the time I get the following fatal error.
Sometimes frames do show correctly.
OpenCV Error: Bad argument (unrecognized or unsupported array type) in cvSetData,
file /build/buildd/opencv-2.3.1/modules/core/src/array.cpp,
My code:
while True:
    img = billy.get_frame()
    # Convert from BGR to grayscale
    grey = cv.CreateImage(cv.GetSize(img), 8, 1)
    cv.CvtColor(img, grey, cv.CV_BGR2GRAY)
    cv.Smooth(grey, grey, cv.CV_GAUSSIAN, 7, 7)
    circles = np.array([], dtype=np.float32)
    storage = cv.CreateMat(1, 2, cv.CV_32FC3)
    try:
        cv.HoughCircles(grey, storage, cv.CV_HOUGH_GRADIENT, 2, grey.height/4, 200, 100)
        for i in range(0, len(np.asarray(storage))):
            cv.Circle(img, (int(np.asarray(storage)[i][0][0]), int(np.asarray(storage)[i][0][1])), int(np.asarray(storage)[i][0][2]), cv.CV_RGB(255, 0, 0), 2, 8, 0)
    except:
        pass
    cv.ShowImage("threshholded", img)

Related

Getting Error: (-215:Assertion failed) When Using seamlessClone

For clarity, I am a beginner in Python. I'm writing a script that replaces the eyes in a series of images in a folder with a robotic eye. I'm stuck on the seamlessClone function, which keeps throwing this error.
The script is incomplete (it only clones one eye so far), but what is there should be working. I've been stuck on this for six hours and thought I would ask here.
I've tried checking my filenames and paths, checking whether the image file was corrupt by using print, swapping in image files with different dimensions and filetypes (png, jpg), and so on.
I've also tried converting every numpy array (cv2_image, Eye) into a 32-bit array to see if that was the issue, but nothing worked.
# Import
from PIL import Image, ImageDraw, ImageFilter
from statistics import mean
import face_recognition
import cv2
import glob
import numpy as np

# Open Eye Images
Eye = cv2.imread('eye.jpg')

# Loop Through Images
for filename in glob.glob('images/*.jpg'):
    cv2_image = cv2.imread(filename)
    image = face_recognition.load_image_file(filename)
    face_landmarks_list = face_recognition.face_landmarks(image)
    for facemarks in face_landmarks_list:
        # Get Eye Data
        eyeLPoints = facemarks['left_eye']
        eyeRPoints = facemarks['right_eye']
        npEyeL = np.array(eyeLPoints)
        npEyeR = np.array(eyeRPoints)

        # Create Mask
        mask = np.zeros(cv2_image.shape, cv2_image.dtype)
        mask.fill(0)
        poly = np.array([eyeLPoints])
        cv2.fillPoly(mask, [poly], (255, 255, 255))

        # Get Eye Image Centers
        npEyeL = np.array(eyeLPoints)
        eyeLCenter = npEyeL.mean(axis=0).astype("int")
        x1 = (eyeLCenter[0])
        x2 = (eyeLCenter[1])
        print(x1, x2)

        # Get Head Rotation (No code yet)

        # Apply Seamless Clone To Main Image
        saveImage = cv2.seamlessClone(Eye, cv2_image, mask, (x1, x2), cv2.NORMAL_CLONE)

        # Output and Save
        cv2.imwrite('output/output.png', saveImage)
Here is the full error:
Traceback (most recent call last):
File "stack.py", line 45, in <module>
saveImage = cv2.seamlessClone(Eye, cv2_image, mask ,(x1, x2) , cv2.NORMAL_CLONE)
cv2.error: OpenCV(4.1.1) /io/opencv/modules/core/src/matrix.cpp:466: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'Mat'
The result I'm expecting is for the eye image to be cloned onto the original image, but this error keeps being thrown, preventing me from completing the script. If there is any hint as to what's going on, my feeling is that the culprit is the "Eye" file, but I could be wrong. Any help would be appreciated.
There are two sets of keypoints:
There is a set of keypoints for the eye in the destination image:
There is a set of keypoints for the eye in the source image:
You are using the wrong set of keypoints to make the mask for the cv2.seamlessClone() function. You should be using the keypoints from the source image. The mask needs to be a TWO channel image; in your code (among other problems) you are using a THREE channel image.
This is the result. You can see there should also be a resize function to match the sizes of the eyes:
This is the code that I used:
import face_recognition
import cv2
import numpy as np

# Open Eye Images
eye = cv2.imread('eye.jpg')

# Open Face image
face = cv2.imread('face.jpeg')

# Get Keypoints
image = face_recognition.load_image_file('face.jpeg')
face_landmarks_list = face_recognition.face_landmarks(image)
for facemarks in face_landmarks_list:
    # Get Eye Data
    eyeLPoints = facemarks['left_eye']
    eyeRPoints = facemarks['right_eye']
    npEyeL = np.array(eyeLPoints)

    # These points define the contour of the eye in the EYE image
    poly_left = np.array([(51, 228), (100, 151), (233, 102), (338, 110), (426, 160), (373, 252), (246, 284), (134, 268)], np.int32)

    # Create a mask for the eye
    src_mask = np.zeros(face.shape, face.dtype)
    cv2.fillPoly(src_mask, [poly_left], (255, 255, 255))
    cv2.imwrite('src_mask.png', src_mask)

    # Find where the eye should go
    center, r = cv2.minEnclosingCircle(npEyeL)
    center = tuple(np.array(center, int))

    # Clone seamlessly.
    output = cv2.seamlessClone(eye, face, src_mask, center, cv2.NORMAL_CLONE)

Unsupported combination of input and output array formats in function cv::reduce

I am trying to run some line-segmentation code that I found on GitHub. When the program reaches the cv2.reduce() call in this code:
def invert(im):
    im = abs(255 - im)
    im = im / 255
    return im

def enhance(im):
    kernel = np.ones((5, 5), np.uint8)
    im = cv2.erode(im, kernel, iterations=1)
    kernel = np.ones((15, 15), np.uint8)
    im = cv2.dilate(im, kernel, iterations=1)
    return im

im = cv2.imread(filename, 0)
imbw = sauvola.binarize(im, [20, 20], 128, 0.3)
im = invert(im)
im = enhance(im)
hist = cv2.reduce(im, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32F)
I get this error:
error: OpenCV(3.4.1) C:\projects\opencv-python\opencv\modules\core\src\matrix_operations.cpp:1111: error: (-210) Unsupported combination of input and output array formats in function cv::reduce
I have tried all the dtype values available for the cv2.reduce() function. I have also tried to change the datatype of the image using numpy, but I am still getting the same error.
Full Code: https://github.com/smeucci/LineSegm/blob/master/python/linesegm/lib/linelocalization.py#L41
imread() returns uchar data, but you process it as if you get floats.
im = im / 255
For example, this line of code will make all pixels equal to zero if you apply it to a uchar OpenCV matrix.
Convert your im pixel format to floats after you read the image. This code works for me:
im = cv2.imread("im.jpg", 0)
im = np.float32(im)
im = invert(im)
im = enhance(im)
hist = cv2.reduce(im, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32F)

Can I blend or get rid of null pixels in my images?

Using OpenCV, I'm detecting circles in my images.
I get the "Unrecognized or unsupported array type in function cvGetMat" error. Other questions with this error mentioned null pixels, which some of my pictures do have. Can I get rid of the null pixels before the function where the error happens executes?
Additionally, I've been able to run this bit of code on the images with null pixels before. The error only occurs when I try to run it over a set of images with a for loop.
for i in range(1, 9):
    filename = 'motrans/motrans_images_june/AMR' + str(i) + '.png'
    img = cv2.imread(filename, 0)
    img = cv2.medianBlur(img, 1)
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(img, cv.CV_HOUGH_GRADIENT, 1, 300, param1=60, param2=20, minRadius=55, maxRadius=75) #3
The error occurs at cv2.HoughCircles.
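One thing worth checking with a loop like this (an assumption on my part, since only this snippet is shown): cv2.imread() returns None rather than raising an error when a file is missing or unreadable, and passing that None into the later calls can produce exactly this kind of unsupported array type error. A minimal guard might look like this:

import cv2

for i in range(1, 9):
    filename = 'motrans/motrans_images_june/AMR' + str(i) + '.png'
    img = cv2.imread(filename, 0)
    if img is None:
        # imread() silently returns None for missing or unreadable files
        print('skipping unreadable image: ' + filename)
        continue
    # ... rest of the loop as in the question ...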

OpenCV dot target detection not finding all targets, and found circles are offset

I'm trying to detect the center of black/white dot targets, like in this picture.
I've tried to use the cv2.HoughCircles method, but (1) I am only able to detect 2 to 3 targets, and (2) when I plot the found circles back onto the image, they're always offset slightly.
Am I using the wrong method? Should I be using findContours or something completely different?
Here is my code:
import cv2
from cv2 import cv
import os
import numpy as np

def showme(pic):
    cv2.imshow('window', pic)
    cv2.waitKey()
    cv2.destroyAllWindows()

im = cv2.imread('small_test.jpg')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

# I've tried blur, bw, tr... all give me poor results.
blur = cv2.GaussianBlur(gray, (3, 3), 0)
n, bw = cv2.threshold(blur, 120, 255, cv2.THRESH_BINARY)
tr = cv2.adaptiveThreshold(blur, 255, 0, 1, 11, 2)

circles = cv2.HoughCircles(gray, cv.CV_HOUGH_GRADIENT, 3, 100, None, 200, 100, 5, 16)

try:
    n = np.shape(circles)
    circles = np.reshape(circles, (n[1], n[2]))
    print circles
    for circle in circles:
        cv2.circle(im, (circle[0], circle[1]), circle[2], (0, 0, 255))
    showme(im)
except:
    print "no circles found"
And this is my current output:
Playing with the code I wrote in another post, I was able to achieve a slightly better result:
It's all about the parameters. It always is.
There are 3 important functions that are called in this program that you should experiment with: cvSmooth(), cvCanny(), and cvHoughCircles(). Each of them has the potential to change the result drastically.
And here is the C code:
IplImage* img = NULL;
if ((img = cvLoadImage(argv[1])) == 0)
{
    printf("cvLoadImage failed\n");
}

IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
CvMemStorage* storage = cvCreateMemStorage(0);
cvCvtColor(img, gray, CV_BGR2GRAY);

// This is done so as to prevent a lot of false circles from being detected
cvSmooth(gray, gray, CV_GAUSSIAN, 7, 9);

IplImage* canny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
IplImage* rgbcanny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvCanny(gray, canny, 40, 240, 3);

CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT, 2, gray->height/8, 120, 10, 2, 25);
cvCvtColor(canny, rgbcanny, CV_GRAY2BGR);

for (size_t i = 0; i < circles->total; i++)
{
    // round the floats to an int
    float* p = (float*)cvGetSeqElem(circles, i);
    cv::Point center(cvRound(p[0]), cvRound(p[1]));
    int radius = cvRound(p[2]);

    // draw the circle center
    cvCircle(rgbcanny, center, 3, CV_RGB(0,255,0), -1, 8, 0);

    // draw the circle outline
    cvCircle(rgbcanny, center, radius+1, CV_RGB(0,0,255), 2, 8, 0);

    printf("x: %d y: %d r: %d\n", center.x, center.y, radius);
}

cvNamedWindow("circles", 1);
cvShowImage("circles", rgbcanny);
cvSaveImage("out.png", rgbcanny);
cvWaitKey(0);
I trust you have the skills to port this to Python.
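For reference, here is a rough cv2 port of that pipeline (a sketch only: the parameters are copied straight from the C code above, 'small_test.jpg' is the questioner's file name, and in older Python bindings the constant is cv2.cv.CV_HOUGH_GRADIENT rather than cv2.HOUGH_GRADIENT):

import cv2
import numpy as np

img = cv2.imread('small_test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Smoothing first helps prevent a lot of false circles from being detected
gray = cv2.GaussianBlur(gray, (7, 9), 0)

canny = cv2.Canny(gray, 40, 240, apertureSize=3)
rgbcanny = cv2.cvtColor(canny, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 2, gray.shape[0] / 8,
                           param1=120, param2=10, minRadius=2, maxRadius=25)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(rgbcanny, (int(x), int(y)), 3, (0, 255, 0), -1)           # circle center (green)
        cv2.circle(rgbcanny, (int(x), int(y)), int(r) + 1, (255, 0, 0), 2)   # circle outline (blue)
        print("x: %d y: %d r: %d" % (x, y, r))

cv2.imshow('circles', rgbcanny)
cv2.imwrite('out.png', rgbcanny)
cv2.waitKey(0)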
Since that circle pattern is fixed and well distinguished from the object, simple template matching should work reasonably well; check out cvMatchTemplate. For more complex conditions (warping due to object shape or view geometry), you may try more robust features like SIFT or SURF (cvExtractSURF).
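A minimal cv2.matchTemplate sketch of that idea (the file names and the 0.8 threshold are placeholders to be tuned, and overlapping hits would still need some non-maximum suppression):

import cv2
import numpy as np

scene = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)          # image with the dot targets
template = cv2.imread('dot_target.jpg', cv2.IMREAD_GRAYSCALE)  # one cropped target
h, w = template.shape

# Normalised cross-correlation is fairly tolerant of lighting changes
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)

for x, y in zip(xs, ys):
    # matchTemplate reports top-left corners; the target centre is half a template away
    cv2.circle(scene, (int(x) + w // 2, int(y) + h // 2), 3, 255, -1)

cv2.imwrite('matches.png', scene)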
Detect most circles using Python code:
import cv2
import numpy as np

img = cv2.imread('coin.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7, 9), 6)
cimg = cv2.cvtColor(blur, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 50,
                           param1=120, param2=10, minRadius=2, maxRadius=30)
circles = np.uint16(np.around(circles))

for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)

cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()

OpenCV Python HoughCircles error

I'm working on a program that detects circular shapes in images. I decided a Hough transform would be best, and I found one in the OpenCV library. The problem is that when I try to use it, I get an error that I have no idea how to fix. Is OpenCV for Python not fully implemented? Is there a fix to the library that I need for the program to work?
Here's the code:
import cv

#cv.NamedWindow("camera", 1)
capture = cv.CaptureFromCAM(0)

while True:
    img = cv.QueryFrame(capture)
    gray = cv.CreateImage(cv.GetSize(img), 8, 1)
    edges = cv.CreateImage(cv.GetSize(img), 8, 1)

    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)
    cv.Canny(gray, edges, 50, 200, 3)
    cv.Smooth(gray, gray, cv.CV_GAUSSIAN, 9, 9)

    storage = cv.CreateMat(1, 2, cv.CV_32FC3)

    #This is the line that throws the error
    cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2, gray.height/4, 200, 100)

    #cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:
        break
And here is the error I'm getting:
OpenCV Error: Null pinter () in unknown function,
file ..\..\..\..\ocv\openc\src\cxcore\cxdatastructs.cpp, line 408
Traceback (most recent call last):
File "ellipse-detect-webcam.py", line 20, in
cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2, gray.height/4, 200, 100)
cv.error
Thanks in advance for the help.
For what it's worth, I've found that cv.HoughCircles aborts if it can't detect a circular shape in the image, instead of gracefully returning an empty list.
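If that is what is happening here, one workaround is to treat the exception as "no circles found" (a sketch using the same call and variables as in the question; cv.error is the exception type shown in the traceback above):

try:
    cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2,
                    gray.height / 4, 200, 100)
except cv.error:
    # treat an aborted call the same as finding no circles
    storage = None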
Are the images valid?
Can you display them (the original and the grayscaled)?
Otherwise, are you sure the args to the function are correct? Are you passing pointers or references correctly?
The storage must be bigger. I think the CvMat isn't dynamically allocated, so you have to, for example, change the line:
storage = cv.CreateMat(1, 2, cv.CV_32FC3)
to:
storage = cv.CreateMat(1, img.rows * img.cols, cv.CV_32FC3)
