SimpleCV and OpenKinect blob detection issues - Python

I am on my high school's robotics team, and my objective is to use our Kinect to detect the balls in this year's game using SimpleCV and OpenKinect. I am stuck on an issue that my mentors and I cannot seem to resolve. The code takes a live depth-map feed from the Kinect and binarizes it; once it has done that, it looks for blobs and tries to pick out the circular ones. Finally, it draws a circle over each circular blob. The problem is that the code finds blobs without trouble, but never identifies any of them as circular. I am at a dead end and hoping someone in the community can help. Thank you; here's the code and an image of what is seen:
import SimpleCV
from SimpleCV import *

cam = Kinect()
display = SimpleCV.Display()

while display.isNotDone():
    # Grab mirrored depth and colour frames from the Kinect
    depth = cam.getDepth().flipHorizontal()
    img2 = cam.getImage().flipHorizontal()
    # Stretch the depth range of interest, then binarize it
    filtered = depth.stretch(200, 255)
    segmented = filtered.binarize(200)
    blobs = segmented.findBlobs()
    if blobs:
        # Keep only the blobs that pass the circularity test
        circles = blobs.filter([b.isCircle() for b in blobs])
        if circles:
            img2.drawCircle((circles[-1].x, circles[-1].y), circles[-1].radius(), SimpleCV.Color.RED, 3)
            img2.drawText("FOUND A BALL", 50, 50, color=Color.RED, fontsize=48)
    img2.sideBySide(segmented).show()

I have solved the problem; the key changes were cleaning up the depth map with dilate/erode before blob detection and loosening the circularity test to isCircle(0.2). Here is the new version of the code that works. Sorry, no screen cap this time.
import SimpleCV
from SimpleCV import *

display = SimpleCV.Display()
cam = Kinect()
normaldisplay = True

while display.isNotDone():
    # Right-click toggles between the normal and segmented views
    if display.mouseRight:
        normaldisplay = not(normaldisplay)
        print "Display Mode:", "Normal" if normaldisplay else "Segmented"

    img = cam.getDepth().flipHorizontal()
    img2 = cam.getImage().flipHorizontal()
    # Dilate to close gaps in the depth map, then stretch, binarize,
    # and erode to clean up the segmentation before blob detection
    dist = img.colorDistance(SimpleCV.Color.BLACK).dilate(2)
    segmented = dist.stretch(200, 255)
    binar = segmented.binarize(200)
    erode = binar.erode(2)
    blobs = erode.findBlobs()
    if blobs:
        # The looser 0.2 tolerance lets isCircle() accept slightly imperfect circles
        circles = blobs.filter([b.isCircle(0.2) for b in blobs])
        if circles:
            img.drawCircle((circles[-1].x, circles[-1].y), circles[-1].radius(), SimpleCV.Color.BLUE, 3)

    if normaldisplay:
        img.show()
    else:
        segmented.show()

Related

OpenCV - Too many circles detected

Aim
Detect moon craters and draw circles on them using OpenCV and Tkinter
(This is for a school science competition - ideally, craters will later be highlighted red, and areas between them above a certain size will also be highlighted.)
Problem
Too many circles are detected
The Need
Could someone please suggest, as plainly as possible, what steps may be missing, or comment on actionable steps to take from here?
Additional challenge
This project is being undertaken by my 10-year-old son. He's working hard on it, but I am not able to properly support him - it's outside my expertise.
(I have asked about this previously on his behalf, but didn't provide enough information, and the code has changed somewhat since. I have had him rewrite this query a number of times so he learns that the better the information he provides, the better the help he can receive. So please bear in mind: the more simply answers can be given, the better.)
More Detail
Using OpenCV in Tkinter, too many circles are detected. Despite looking through many articles and tutorials, we have not been able to solve it. I'm wondering if a step is missing, such as thresholding or something similar. I'll be passing the replies on to him, and helping him work through them if need be.
Thanks in advance!
Images
Original Image
Processed
Notes
The aim (in the GUI):
Load an image
Find the circles in the image
What I am doing (in the code):
1. Defines "Open"
2. In "Open", finds an image/file, then loads it up
3. Defines "Find Craters"
4. In "Find Craters", finds circles with 'min radius = 20, max radius = 200'
5. Turns all the functions into buttons
What I am trying to do:
Blur the image
Identify circles
Code
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
import numpy as np
import cv2

root = tk.Tk()
root.title("AAMLS")
root.geometry("1100x600")

def open():  # note: this shadows the built-in open()
    global my_image
    filename = filedialog.askopenfilename(initialdir="images", title="Select A File",
                                          filetypes=(("jpg files", "*.jpg"), ("all files", "*.*")))
    my_label.config(text=filename)
    my_image = Image.open(filename)
    tkimg = ImageTk.PhotoImage(my_image)
    my_image_label.config(image=tkimg)
    my_image_label.image = tkimg  # save a reference of the image

def find_craters():
    global image
    global my_image_label
    # convert PIL image to OpenCV image (PIL gives RGB, so convert with COLOR_RGB2GRAY)
    circles_image = np.array(my_image.convert('RGB'))
    gray_img = cv2.cvtColor(circles_image, cv2.COLOR_RGB2GRAY)
    img = cv2.GaussianBlur(gray_img, (5, 5), 0)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                               param1=20, param2=60, minRadius=20, maxRadius=200)
    if circles is not None:
        circles = np.uint16(np.around(circles))
        for i in circles[0]:
            # draw the outer circle
            cv2.circle(circles_image, (i[0], i[1]), i[2], (0, 255, 0), 2)
            # draw the center of the circle
            cv2.circle(circles_image, (i[0], i[1]), 2, (0, 0, 255), 3)
    #circles_img = cv2.Laplacian(circles_image, cv2.CV_64F)
    # convert OpenCV image back to PIL image
    image = Image.fromarray(circles_image)
    # update the shown image
    my_image_label.image.paste(image)

btn1 = tk.Button(root, text="Load Terrain", command=open).pack()
btn2 = tk.Button(root, text="Find Craters", command=find_craters).pack()

# for the filename of the selected image
my_label = tk.Label(root)
my_label.pack()

# for showing the selected image
my_image_label = tk.Label(root)
my_image_label.pack()

root.mainloop()
To reduce the number of circles, there are essentially these parameters in HoughCircles (a parameter-sweep sketch follows below):
minDist: increase this value to enforce a larger minimum distance between circle centres
maxRadius: reduce this parameter to avoid circles that contain more than one crater
param1: higher values make the edge detector less sensitive, so only clear-cut edges are used (keep param2 at 3*param1)
You'll need to play around with these parameters to find a good setting. As a next step, I would do:
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 30,
                           param1=40, param2=120, minRadius=20, maxRadius=80)
If that finds too few circles, relax the conditions to get more.
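If hand-tuning feels fiddly, a quick way to narrow things down is to sweep one parameter and count the detections at each setting. Below is a minimal sketch of such a sweep (the file name 'crater.jpg', the sweep range, and the fixed minDist/radius values are illustrative assumptions, not from the original answer); it keeps param2 at 3*param1, as suggested above:

import cv2
import numpy as np

img = cv2.imread('crater.jpg')  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Sweep param2 (the accumulator threshold): higher values demand
# stronger evidence per circle, so fewer circles survive.
for param2 in range(60, 181, 30):
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 30,
                               param1=param2 // 3, param2=param2,
                               minRadius=20, maxRadius=80)
    count = 0 if circles is None else circles.shape[1]
    print("param2 = %d -> %d circles" % (param2, count))

Pick the smallest param2 that still finds every crater you can see by eye.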

Why is detection rate of Charucos in cv2.aruco.detectMarkers() so poor?

I am having trouble figuring out why cv2.aruco.detectMarkers() finds no more than a few markers on my calibration board. Playing around with the parameters didn't substantially improve the quality. The dictionary is correct, as I verified it with the digital template before printing.
Here is, what I do to detect CHAruco markers from a real image:
import cv2
from cv2 import aruco

# ChAruco board variables
CHARUCOBOARD_ROWCOUNT = 26
CHARUCOBOARD_COLCOUNT = 26
ARUCO_DICT = cv2.aruco.Dictionary_get(aruco.DICT_4X4_1000)

# Create constants to be passed into OpenCV and Aruco methods
CHARUCO_BOARD = aruco.CharucoBoard_create(
    squaresX=CHARUCOBOARD_COLCOUNT,
    squaresY=CHARUCOBOARD_ROWCOUNT,
    squareLength=5,  # mm
    markerLength=4,  # mm
    dictionary=ARUCO_DICT)

# load image
img = cv2.imread('imgs\\frame25_crop.png', 1)

[test image with CHAruco markers]
# initialize detector
parameters = aruco.DetectorParameters_create()
parameters.adaptiveThreshWinSizeMin = 150
parameters.adaptiveThreshWinSizeMax = 186

# Find aruco markers in the query image
corners, ids, _ = aruco.detectMarkers(
    image=img,
    dictionary=ARUCO_DICT,
    parameters=parameters)

# Outline the ChAruco markers found in our image
img = aruco.drawDetectedMarkers(
    image=img,
    corners=corners)
The result is the following: only 3 markers are found, which is bad.
[resulting image with found markers]
Does anyone have an idea how to considerably improve the detector's results?
Your image is flipped; ArUco markers are not mirror-symmetric, so a mirrored image will not match any entry in the dictionary. Fix it with this line of code:
img = cv2.flip(img, 0)
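In context (a minimal sketch reusing the names from the question's code), the flip goes right after loading and before detection:

# load the image, then un-mirror it so the 4x4 patterns match the dictionary
img = cv2.imread('imgs\\frame25_crop.png', 1)
img = cv2.flip(img, 0)

corners, ids, _ = aruco.detectMarkers(
    image=img,
    dictionary=ARUCO_DICT,
    parameters=parameters)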
Without looking at your code, I'd say the image quality and the perspective you selected are a bit poor. Try to get a clearer view of your markers. For instance, hang the board on a wall, take a step or two back, and photograph it in better light; avoid adding extra rotation if it isn't necessary, and keep the contrast high :). This will probably give better results.

How do I project an image on to another image with homography

I have a Python program that detects a rectangle in the captured video. Now I want to project another image into the detected square (just like in this video).
I have tried using warpPerspective, but it does not seem to be working, or maybe I'm using it the wrong way.
My present output looks like this. I want my output to look like this.
I tried to overlay images after using warpPerspective:
import cv2
import numpy as np

img = cv2.imread('cola.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
background = cv2.imread('stadium.jpg')
background = cv2.cvtColor(background, cv2.COLOR_BGR2RGB)
rows, cols, ch = background.shape

pts1 = np.float32([[0,0],[974,0],[0,974],[974,974]])         # cola corner coords
pts2 = np.float32([[560,383],[940,516],[5,527],[298,733]])   # stadium tile coords

M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(img, M, (cols, rows))
overlay = cv2.add(background, dst)
[Output image]
I used the OpenCV documentation.
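One thing worth checking: cv2.add sums pixel values, so wherever the warped logo and the stadium are both non-black, the two images blend instead of the logo replacing the background. A common remedy (a minimal sketch reusing img, M, dst, cols, rows and background from the snippet above; this is not from the original post) is to cut the destination region out of the background before adding:

import cv2
import numpy as np

# Warp a white image of the logo's size with the same homography;
# the result marks exactly the pixels that dst occupies.
white = np.full(img.shape[:2], 255, dtype=np.uint8)
mask = cv2.warpPerspective(white, M, (cols, rows))

# Black out that region in the background, then add the warp in.
background[mask > 0] = 0
overlay = cv2.add(background, dst)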

Program abruptly shutting off Raspberry Pi

So I have a Python program on my Raspberry Pi running an infinite while loop that takes an image from the camera every second.
On every iteration, the program creates a thread to process the image. The processing consists of two steps: one script extracts a phone screen from the image using OpenCV, and another script extracts a QR code from that screen.
The program processes pre-taken images without any problem; it only chokes when run in a continuous loop.
After a few iterations of the loop, while trying to process the images, the program unexpectedly breaks and my Raspberry Pi abruptly shuts off. Does anyone know why this is happening?
I have been looking around for an answer, but my suspicions are with the threads: either I'm overloading the CPU or RAM, there are memory leaks, or the Raspberry Pi is drawing too much power.
EDIT:
From the comments below, it seems the power supply to the Pi is the problem. I'm currently running on a phone charger (5.0 V, 1.0 A), which is (very) below the recommended 5.0 V, 2.5 A power supply, darn me. I will update this post when I get a new power supply and can test the code.
Also, running the program on my Windows laptop poses no problems at all.
This is my main script:
import picamera
import threading
import time

from processing.qr import get_qr_code
from processing.scan import scan_image

# Thread method
def extract_code(file):
    print 'Thread for: ' + file
    # Scans image to extract phone screen from image and then gets QR code from it
    scan_image(file)
    get_qr_code(file)
    return
# End methods

camera = picamera.PiCamera()

while True:
    time.sleep(1)
    # Make epoch time as file name
    time_epoch = str(int(time.time()))
    image_path = "images/" + time_epoch + ".jpg"
    print "Taking photo: " + str(image_path)
    camera.capture(image_path)
    # Create thread to start processing image
    t = threading.Thread(target=extract_code, args=[time_epoch])
    t.start()
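Separately from the power question, if the per-frame thread suspicion needs ruling out, one option is a single worker thread with a bounded queue, so a busy worker causes frames to be skipped rather than threads to pile up. This is only a sketch (the queue size of 2 is an arbitrary choice, and it assumes the same processing imports as the main script):

import threading
from processing.qr import get_qr_code
from processing.scan import scan_image
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

frames = queue.Queue(maxsize=2)  # bounded: the backlog is capped

def worker():
    # Single consumer: at most one image is processed at a time
    while True:
        time_epoch = frames.get()
        scan_image(time_epoch)
        get_qr_code(time_epoch)
        frames.task_done()

t = threading.Thread(target=worker)
t.daemon = True
t.start()

# Inside the capture loop, replace the per-frame Thread with:
#     try:
#         frames.put_nowait(time_epoch)
#     except queue.Full:
#         pass  # worker still busy; drop this frame instead of spawning a thread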
Below is the script that scans the image (scan.py).
In a nutshell, it takes an image, blurs it, finds edges, and draws contours; it then checks whether there is a rectangle (e.g. a phone screen) and, if so, transforms and warps it into a new image containing only the phone screen.
from transform import four_point_transform
import imutils
import numpy as np
import argparse
import cv2
import os

def scan_image(file):
    images_dir = "images/"
    scans_dir = "scans/"
    input_file = images_dir + file + ".jpg"
    print "Scanning image: " + input_file

    # load the image and compute the ratio of the old height
    # to the new height, clone it, and resize it
    image = cv2.imread(input_file)
    ratio = image.shape[0] / 500.0
    orig = image.copy()
    image = imutils.resize(image, height = 500)

    # convert the image to grayscale, blur it, and find edges
    # in the image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(gray, 75, 200)

    # show the original image and the edge detected image
    print "STEP 1: Edge Detection"

    # find the contours in the edged image, keeping only the
    # largest ones, and initialize the screen contour
    _, cnts, _ = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:5]
    screenCnt = None  # use None as the "not found" sentinel (comparing an array with > would fail)

    # loop over the contours
    for c in cnts:
        # approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        # if our approximated contour has four points, then we
        # can assume that we have found our screen
        if len(approx) == 4:
            screenCnt = approx
            break

    # show the contour (outline) of the piece of paper
    print "STEP 2: Find contours of paper"
    # if screenCnt is not None:
    #     cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)

    # apply the four point transform to obtain a top-down
    # view of the original image
    if screenCnt is not None:
        warped = four_point_transform(orig, screenCnt.reshape(4, 2) * ratio)
        print "STEP 3: Apply perspective transform"
        output = scans_dir + file + "-result.png"
        if not os.path.exists(scans_dir):
            os.makedirs(scans_dir)
        cv2.imwrite(output, imutils.resize(warped, height = 650))
    else:
        print "No screen detected"
This is the code to scan a QR code out of the image:
import os
from time import sleep

def get_qr_code(image_name):
    scans_dir = "scans/"
    codes_dir = "codes/"
    input_scan_path = scans_dir + image_name + "-result.png"
    output_qr_path = codes_dir + image_name + "-result.txt"
    if not os.path.exists(codes_dir):
        os.makedirs(codes_dir)
    if os.path.exists(input_scan_path):
        os.system("zbarimg -q " + input_scan_path + " > " + output_qr_path)
        if os.path.exists(output_qr_path):
            strqrcode = open(output_qr_path, 'r').read()
            # print strqrcode
            print "Results for " + image_name + ": " + strqrcode
    else:
        print "File does not exist"
After a few iterations of the loop and trying to process the images, the program unexpectedly breaks and my Raspberry Pi abruptly shuts off. Does anyone know why this is happening?
I had a similar experience. My program (C++, real-time processing of camera frames) crashed unexpectedly: sometimes with a segmentation fault inside OS code, sometimes by shutting off the whole board. Sometimes the program's executable file simply ended up zero bytes.
I spent a long time looking for memory leaks, CPU peaks, or some other kind of programming error. Then I suddenly realised that my power supply was not strong enough. I changed to a more robust one and the problems went away.
See the 2.5 A recommendation in this link and check whether your power supply actually meets that requirement.
Maybe not the answer you are looking for, but using Python with OpenCV on an embedded system is not really a good idea. I have experienced a lot of problems using Python on embedded systems (even with the comparatively powerful hardware the Pi gives you).
All the performance problems I experienced vanished when I used C++ instead of Python. OpenCV in C++ is pretty solid, and I highly recommend you give it a try.

opencv/python : motion detect weird thresholding

I am trying to make a motion-detection program using my webcam, but I'm getting these weird results when thresholding the difference of frames:
When I'm moving (seems okay, I guess):
![enter image description here][1]
When I'm not moving:
![enter image description here][2]
What can this be? I have already run a couple of programs that use exactly the same algorithm, and their thresholding works fine.
Here's my code:
import cv2
import random
import numpy as np

# Create windows to show the captured images
cv2.namedWindow("window_a", cv2.CV_WINDOW_AUTOSIZE)
cv2.namedWindow("window_b", cv2.CV_WINDOW_AUTOSIZE)

# Structuring element
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,4))

## Webcam Settings
capture = cv2.VideoCapture(0)

# dimensions
frameWidth = capture.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
frameHeight = capture.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)

while True:
    # Capture a frame
    flag, frame = capture.read()
    current = cv2.blur(frame, (5,5))
    difference = cv2.absdiff(current, previous)  # difference is taken of the current frame and the previous frame
    frame2 = cv2.cvtColor(difference, cv2.cv.CV_RGB2GRAY)
    retval, thresh = cv2.threshold(frame2, 10, 0xff, cv2.THRESH_OTSU)
    dilated1 = cv2.dilate(thresh, es)
    dilated2 = cv2.dilate(dilated1, es)
    dilated3 = cv2.dilate(dilated2, es)
    dilated4 = cv2.dilate(dilated3, es)
    cv2.imshow('window_a', dilated4)
    cv2.imshow('window_b', frame)
    previous = current
    key = cv2.waitKey(10)  # 20
    if key == 27:  # exit on ESC
        cv2.destroyAllWindows()
        break
Thanks in advance!
[1]: http://i.stack.imgur.com/hslOs.png
[2]: http://i.stack.imgur.com/7fB95.png
The first thing you need is a previous = cv2.blur(frame, (5,5)) to prime your previous sample, placed after a frame grab and before your while loop.
That will make the code you posted run, but it will not solve your problem.
I think the issue you are having is due to the type of thresholding algorithm you are using. Otsu's method chooses the threshold automatically from the image histogram, so on a frame with no motion it ends up splitting pure sensor noise into foreground and background. Try a fixed binary threshold, cv2.THRESH_BINARY, instead of Otsu's. It seemed to solve the problem for me. A condensed sketch with both changes applied follows below.
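Here is that sketch (assumptions: the window setup and dilation steps from the question are omitted for brevity, and the modern cv2.COLOR_BGR2GRAY constant replaces cv2.cv.CV_RGB2GRAY):

import cv2

capture = cv2.VideoCapture(0)

# Prime the previous sample once, before the loop
flag, frame = capture.read()
previous = cv2.blur(frame, (5, 5))

while True:
    flag, frame = capture.read()
    current = cv2.blur(frame, (5, 5))
    difference = cv2.absdiff(current, previous)
    gray = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY)
    # Fixed threshold instead of Otsu: pixels that changed by more
    # than 10 become white; everything else stays black
    retval, thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
    cv2.imshow('motion', thresh)
    previous = current
    if cv2.waitKey(10) == 27:  # exit on ESC
        cv2.destroyAllWindows()
        break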
