OpenCV findContours only finds the border of the image - python

I'm working on a project to track a laser and a photodiode with a camera attached to a Raspberry Pi. The Pi will send instructions to an Arduino, which will reorient the laser until I get a response from the photodiode. Right now, I'm working on the camera aspect of the process.
I'm trying to find the contours of my image so that I can match them with the general contours of the objects I'll be using, but my findContours() only gives me the border of my image.
I wish I could post the images, but I don't have enough rep. The Canny edge image is black and white: white lines on a black background. The image with the contours on it is just the captured image with a border drawn around it and no other contours.
Here's my code:
import cv2
import picamera

def DED(grayImg):  # edge detection, returns an image array
    minInt, maxInt, minLoc, maxLoc = cv2.minMaxLoc(grayImg)  # grayscale min/max intensities and their locations
    beam = cv2.mean(grayImg)  # find the mean intensity in the image
    mean = float(beam[0])
    CannyOfTuna = cv2.Canny(grayImg, (mean + minInt) / 2, (mean + maxInt) / 2)  # Canny edges, with thresholds derived from the intensities
    return CannyOfTuna

def con2z(Gray, ogImage):  # find contours in the Canny edge image, draw them onto the original
    lines, pyramids = cv2.findContours(Gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    gimmeGimme = cv2.drawContours(ogImage, lines, -1, (128, 255, 0), 3)  # the -1 means ALL contours are drawn
    return lines

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    out = camera.capture('output.jpg')  # camera start

output = cv2.imread('output.jpg')
grayput = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY)  # grayscale
cv2.imwrite('gray.jpg', grayput)
cans = DED(grayput)  # Canny edge
cv2.imwrite('Canny.jpg', cans)
lines = con2z(grayput, output)  # contours please
print(lines)
cv2.imwrite('contours.jpg', output)
EDIT: Here are the two photos
http://imgur.com/EVeMVdm,QLoYa2o#0
http://imgur.com/EVeMVdm,QLoYa2o#1

In OpenCV 3.x, findContours returns the tuple (image, contours, hierarchy).
So in your case, try this as the left-hand side of your findContours call: _, lines, pyramids = cv2.findContours
EDIT:
Sorry, that was not the solution; the one below worked for me.
Replace grayput with cans in the con2z function call. findContours expects a binary image, and grayput is not one.
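A minimal sketch of the corrected call (cans is the Canny output, which is already the binary image findContours expects):

lines = con2z(cans, output)  # pass the Canny edge image, not the raw grayscale capture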

Related

How to draw outlines of objects in an image to a separate image

I am working on a puzzle; my final task here is to identify the edge type of a puzzle piece.
As shown in the above image, I have managed to rotate and crop out every edge of the piece at the same angle. My next step is to separate the edge line into a separate image, as shown in the image below,
and then fill up one side of the line with a color and process it to decide what type of edge it is.
I don't see a proper way to separate the edge line from the image for now.
My approach:
One way to do this is to scan pixel by pixel and find the black pixels that have a non-black pixel next to them. That is code I can implement, but it feels like a primitive and time-consuming approach.
So if you can offer any help or ideas, or any completely different way to detect the hollows and humps, I would appreciate it.
Thanks in advance.
First convert your color image to grayscale. Then apply a threshold, say zero, to obtain a binary image. You may have to use morphological operations to further process the binary image if there are holes. Then find the contours of this image and draw them to a new image.
Simple code is given below, using OpenCV 4.0.1 with Python 2.7.
import cv2
import numpy as np

bgr = cv2.imread('puzzle.png')
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
# threshold at zero: any non-black pixel becomes white
_, roi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
cv2.imwrite('/home/dhanushka/stack/roi.png', roi)

cont = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# draw the contours onto a blank image
output = np.zeros(gray.shape, dtype=np.uint8)
cv2.drawContours(output, cont[0], -1, (255, 255, 255))

# removing the image boundary: build a one-pixel-wide frame and erase it from the output
boundary = 255*np.ones(gray.shape, dtype=np.uint8)
boundary[1:boundary.shape[0]-1, 1:boundary.shape[1]-1] = 0
toremove = output & boundary
output = output ^ toremove
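If the binary image does have holes, the morphological processing mentioned above could be a closing applied before finding contours; a minimal sketch continuing the snippet above, assuming a 5x5 kernel is enough for the gaps:

# hypothetical hole-filling step, run on roi before findContours
kernel = np.ones((5, 5), np.uint8)
roi = cv2.morphologyEx(roi, cv2.MORPH_CLOSE, kernel)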

OpenCV Python: How to check the direction of the gear?

I'm checking the direction of the gear, whether it is up or down, and printing the result to the screen. The background color is always black.
This gear is in the "up" position:
https://imgur.com/a/DON8GJs
This gear is in the "down" position:
https://imgur.com/a/4ODZQAt
I tried binarizing the two images and running Canny edge detection, but all I got was the raw output of those algorithms, nothing more. I'm wondering what I need to do to check the direction of the gear. Your help would be greatly appreciated!
The down position has a distinct shape, so you can use shape matching (cv2.matchShapes) to detect the presence of this shape.
To do that, you'll need a reference shape. I created this by detecting the edges, saving that image, and using MS Paint to leave only the shape needed.
The code below shows you how to detect the shape. It prints the position of the gear to the terminal and draws the shape if it is in the down position.
Shape matching is able to handle rotation, but you might need to test and tweak some of the settings if you want to use this in some sort of automation.
Result:
Image of reference gear:
https://i.stack.imgur.com/s6E9C.jpg
Code:
import numpy as np
import cv2

# load the image of the reference shape
image_reference = cv2.imread("ReferenceGear.jpg", 0)
# threshold to remove artefacts
ret, img_ref = cv2.threshold(image_reference, 200, 255, 0)
# detect contours in the reference image
im, ref_cnts, hierarchy = cv2.findContours(img_ref, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# store the contour of the reference shape in a variable
ref_cnt = ref_cnts[0]

# load the image to test
image = cv2.imread("gear.png")
# detect edges in the image
edges = cv2.Canny(image, 50, 50)
# detect contours of the edges in the image
im, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# for each edge-contour in the image, try to match it with the reference;
# if the value is very small, it is a good match. store the result in a variable
found = False
for cnt in contours:
    ret = cv2.matchShapes(cnt, ref_cnt, 3, 0.0)
    if ret < 0.001:
        cv2.drawContours(image, [cnt], 0, (255), 2)
        found = True
        break
if found:
    print("Gear is in down position")
else:
    print("Gear is in up position")

# show the image and the reference
cv2.imshow("image", image)
cv2.imshow("image_reference", image_reference)
# release resources
cv2.waitKey(0)
cv2.destroyAllWindows()
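A side note on the literal 3 passed to cv2.matchShapes: in OpenCV 3.x and later it corresponds to the named constant cv2.CONTOURS_MATCH_I3, the third Hu-moment comparison metric; a lower return value means a closer match.

ret = cv2.matchShapes(cnt, ref_cnt, cv2.CONTOURS_MATCH_I3, 0.0)  # equivalent to passing 3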

Finding bright spots in an image using OpenCV

I want to find the bright spots in the above image and tag them using some symbol. For this, I have tried using the Hough circle transform algorithm that OpenCV already provides, but it gives some kind of assertion error when I run the code. I also tried the Canny edge detection algorithm, which is also provided in OpenCV, but it too gives some kind of assertion error. I would like to know whether there is some method to get this done, or how I can prevent those error messages.
I am new to OpenCV and any help would be really appreciated.
P.S. - I can also use scikit-image if necessary. So if this can be done using scikit-image, then please tell me how.
Below is my preprocessing code:
import cv2
import numpy as np

image = cv2.imread("image1.png")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
binary_image = np.where(gray_image > np.mean(gray_image), 1.0, 0.0)
binary_image = cv2.Laplacian(binary_image, cv2.CV_8UC1)
If you are just going to work with simple images like your example, where you have a black background, you can use the same basic preprocessing/thresholding and then find connected components. Use this example code to draw a circle inside all the circles in the image.
import cv2
import numpy as np

image = cv2.imread("image1.png")

# constants
BINARY_THRESHOLD = 20
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 4

# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# extract edges
binary_image = cv2.Laplacian(gray_image, cv2.CV_8UC1)
# fill in the holes between edges with dilation
dilated_image = cv2.dilate(binary_image, np.ones((5, 5)))
# threshold the black/non-black areas
_, thresh = cv2.threshold(dilated_image, BINARY_THRESHOLD, 255, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles around the center of each component
# (see the connectedComponentsWithStats docs for the attributes of the components variable)
centers = components[3]
for center in centers:
    cv2.circle(thresh, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (255), thickness=-1)

cv2.imwrite("res.png", thresh)
cv2.imshow("result", thresh)
cv2.waitKey(0)
Here is the resulting image:
Edit: connectedComponentsWithStats takes a binary image as input and returns the groups of connected pixels in that image. If you would like to implement that function yourself, the naive way would be (a sketch follows the list):
1- Scan the image pixels from top left to bottom right until you encounter a non-zero pixel that does not have a label (id).
2- When you encounter a non-zero pixel, search all of its neighbours recursively (with 4-connectivity you check UP-LEFT-DOWN-RIGHT; with 8-connectivity you also check the diagonals) until you finish that region. Assign each pixel a label, and increase your label counter.
3- Continue scanning from where you left off.
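A minimal sketch of that naive labeling, assuming a binary uint8 image, and using an explicit stack instead of recursion to stay clear of Python's recursion limit:

import numpy as np

def label_components(binary, connectivity=4):
    # neighbour offsets: 4-connectivity checks up/left/down/right; 8 adds the diagonals
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = np.zeros(binary.shape, dtype=np.int32)
    next_label = 1
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                # flood-fill this region with the current label
                stack = [(y, x)]
                labels[y, x] = next_label
                while stack:
                    cy, cx = stack.pop()
                    for dy, dx in offsets:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                next_label += 1
    return labels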

Edge detection on dim edges using Python

I want to find dim edges using Python.
Input images (100 x 100):
The image consists of several horizontal boards: top, middle, bottom.
I want to find the middle board's bounding box, like:
I used several edge detection methods (prewitt_x, sobel_x, cv2.findContours) but cannot detect it well,
because the edge between the black region and the board region is dim.
How can I find a bounding box like the red box?
The code below is an example using prewitt_x and cv2.findContours:
import cv2
import numpy as np

img = cv2.imread('my_dir/my_img.bmp', 0)  # read as grayscale

# prewitt_x
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
img_prewittx = cv2.filter2D(img, -1, kernelx)
cv2.imwrite('my_outdir/my_outimg.bmp', img_prewittx)

# cv2.findContours (the filtered image is already single-channel, so no cvtColor is needed)
image, contours, hierarchy = cv2.findContours(img_prewittx, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = [cv2.boundingRect(cnt) for cnt in contours]
print(rects)
In fact, I don't want to use a slower method like the Canny detector.
Help me :)
My suggestion (a rough sketch follows below):
use a simple edge detection filter such as Prewitt;
project horizontally (sum the pixels in every row);
analyze the resulting profile to detect the regions of low/high activity and delimit the desired slabs.
You can also try the maximum along each row instead of the sum.
But don't expect miracles; this is a hard problem.
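A minimal sketch of that pipeline, assuming a grayscale input and a hypothetical 0.5-of-peak activity threshold that would need tuning on real images:

import cv2
import numpy as np

img = cv2.imread('my_dir/my_img.bmp', 0)  # grayscale

# Prewitt filter for horizontal edges
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
edges = cv2.filter2D(img, -1, kernelx)

# horizontal projection: one activity value per row (np.max(edges, axis=1) is the alternative)
profile = edges.sum(axis=1).astype(np.float64)

# rows whose activity exceeds half the peak are treated as "high activity";
# consecutive runs of such rows delimit the candidate board boundaries
active = profile > 0.5 * profile.max()
print("candidate boundary rows:", np.where(active)[0])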

Fisheye calibration OpenCV Python

I am using a fisheye camera, and I would like to calibrate it and correct its barrel distortion using OpenCV or any other library in Python. I have tried different methods using the classic chessboard pattern across different images.
I have been following this approach:
# Prepare object points, like (0,0,0), (1,0,0), (2,0,0), ..., (7,5,0)
object_p = np.zeros((self.__ny*self.__nx, 3), np.float32)
object_p[:, :2] = np.mgrid[0:self.__nx, 0:self.__ny].T.reshape(-1, 2)  # x, y coordinates

object_points = []  # points in the world coordinate system
image_points = []   # points in the image plane

for image_filename in calibration_filenames:
    # Read in each image
    image = cv2.imread(image_filename)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # RGB is standard in matplotlib
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (self.__nx, self.__ny), None)
    # If found, draw the corners
    if ret == True:
        # Store the corners found in the current image
        object_points.append(object_p)  # how it should look
        image_points.append(corners)    # how it actually looks
        # Draw and display the corners
        cv2.drawChessboardCorners(image, (self.__nx, self.__ny), corners, ret)
        plt.figure()
        plt.imshow(image)
        plt.show()

# Do the calibration (shape is the (width, height) of the calibration images)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(object_points, image_points, shape, None, None)
And then this to correct my image:
cv2.undistort(image, mtx, dist, None, mtx)
The main problem is that, though the correction performs well with some images, it is very susceptible to camera location: a variation of a few cm in camera position results in very different corrections (see below).
Any idea how I could control this zoom effect, so that the grid area is always maintained in my corrected image?
I am aware that this question may be similar to the one in here.
Your calibration is poorly estimated: as you can see, straight lines in the scene grid are not transformed into straight lines in the (allegedly) undistorted image.
Once you have a better calibration, you can compute (using the inverse mapping) the location of the 4 sides of the undistorted image in the original one. These define your usable image boundaries and can guide the placement of the camera with respect to the scene.
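As a side note, since the lens is a fisheye, the pinhole model behind cv2.calibrateCamera often fits it poorly; OpenCV's dedicated cv2.fisheye module may give a better estimate. A minimal sketch, assuming the object_points and image_points lists collected in the loop above:

import cv2
import numpy as np

# the fisheye module is strict about shapes: one float32 array of
# shape (1, N, 3) / (1, N, 2) per calibration image
obj_pts = [p.reshape(1, -1, 3).astype(np.float32) for p in object_points]
img_pts = [c.reshape(1, -1, 2).astype(np.float32) for c in image_points]

K = np.zeros((3, 3))   # camera matrix, estimated in place
D = np.zeros((4, 1))   # fisheye distortion coefficients k1..k4
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    obj_pts, img_pts, gray.shape[::-1], K, D, flags=flags)

# undistort with the estimated parameters; Knew controls the zoom of
# the result, and reusing K keeps roughly the original scale
undistorted = cv2.fisheye.undistortImage(image, K, D, Knew=K)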
