OpenCV - Adaptive thresholding / Trackbar manipulation - python

I am still new to OpenCV (Python) and am trying out cv2.adaptiveThreshold() to draw proper contours with a webcam running while the lighting is changing. The main problem is the amount of noise I get when drawing contours, so I tried setting a cv2.contourArea() threshold, but this doesn't seem like the best solution.
Later on I decided to try manipulating the values of cv2.adaptiveThreshold() with a simple trackbar,
specifically the blockSize and the C value. Everything works fine for the C value, but I really struggle with the blockSize, since it needs to be an odd number. I tried something along the lines of checking whether the trackbar value is even and adding 1, but that doesn't seem to work properly. Later on I will most likely use machine learning to set these values, but for now I'd like the trackbars to work for debugging purposes.
What is the best solution here to manipulate the blockSize with a trackbar?
Thank you in advance! :)
import cv2
import numpy as np

#####################################
winWidth = 640
winHeight = 840
brightness = 100
cap = cv2.VideoCapture(0)
cap.set(3, winWidth)
cap.set(4, winHeight)
cap.set(10, brightness)
kernel = (5, 5)  # Gaussian blur kernel size
morphKernel = np.ones((5, 5), np.uint8)  # dilate/erode expect an array kernel, not a tuple
bSize_default = 1
#######################################################################

def empty(a):
    pass

cv2.namedWindow("TrackBars")
cv2.resizeWindow("TrackBars", 640, 240)
cv2.createTrackbar("cVal", "TrackBars", 2, 20, empty)

def preprocessing(frame, cVal):
    imgGray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # mask = cv2.inRange(imgHsv, lower, upper)
    imgBlurred = cv2.GaussianBlur(imgGray, kernel, 3)
    gaussC = cv2.adaptiveThreshold(imgBlurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, cVal)
    imgDial = cv2.dilate(gaussC, morphKernel, iterations=3)
    imgErode = cv2.erode(imgDial, morphKernel, iterations=1)
    return imgDial

def getContours(imPrePro):
    contours, hierarchy = cv2.findContours(imPrePro, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 60:
            cv2.drawContours(imgCon, cnt, -1, (255, 0, 0), 3)

#######################################################################################################
while cap.isOpened():
    success, frame = cap.read()
    cVal = cv2.getTrackbarPos("cVal", "TrackBars")
    if success == True:
        frame = cv2.flip(frame, 1)
        imgCon = frame.copy()
        imPrePro = preprocessing(frame, cVal)
        getContours(imPrePro)
        cv2.imshow("Preprocessed", imPrePro)
        cv2.imshow("Original", imgCon)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        cv2.destroyAllWindows()
        break

The minimum value of blockSize must be 3, and blockSize must be odd, therefore:
value_BSize = cv2.getTrackbarPos("bSize", "TrackBars")
value_BSize = max(3, value_BSize)
if value_BSize % 2 == 0:
    value_BSize += 1

tbar = (cv2.getTrackbarPos('trackbar', 'window') & ~1) + 3  # smooth: clearing the low bit keeps the result odd, minimum 3
tbar = cv2.getTrackbarPos('trackbar', 'window') * 2 + 3     # fast: each trackbar step jumps two block sizes
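Putting the two pieces together, a minimal sketch (the "bSize" trackbar, its range, and the window names are illustrative, not from the original post) that always feeds an odd blockSize of at least 3 into cv2.adaptiveThreshold:

import cv2

def empty(a):
    pass

cv2.namedWindow("TrackBars")
# trackbar position 0..75 maps to blockSize 3..153, always odd
cv2.createTrackbar("bSize", "TrackBars", 4, 75, empty)
cv2.createTrackbar("cVal", "TrackBars", 2, 20, empty)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    bSize = cv2.getTrackbarPos("bSize", "TrackBars") * 2 + 3  # always odd, >= 3
    cVal = cv2.getTrackbarPos("cVal", "TrackBars")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, bSize, cVal)
    cv2.imshow("Preprocessed", thresh)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()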


Grayscaled image has dark lower border

I am using OpenCV to take images with my webcam.
cam = cv2.VideoCapture(0)
cv2.namedWindow("Handwritten Number Recognition")
img_counter = 0
while True:
    ret, frame = cam.read()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("Handwritten Number Recognition", frame)
    k = cv2.waitKey(1)
    if k % 256 == 27:
        # ESC pressed
        print("Prediction is underway...")
        break
    elif k % 256 == 32:
        # SPACE pressed
        img_name = "opencv_frame_{}.png".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("Image taken!")
        img_counter += 1
cam.release()
cv2.destroyAllWindows()
I then convert the image into grayscale and downsize it:
from PIL import Image
import matplotlib.pyplot as plt

user_test = img_name
col = Image.open(user_test)
gray = col.convert('L')
# global threshold: every pixel darker than 100 becomes black, the rest white
bw = gray.point(lambda x: 0 if x < 100 else 255, '1')
bw.save("bw_image.jpg")
img_array = cv2.imread("bw_image.jpg", cv2.IMREAD_GRAYSCALE)
img_array = cv2.bitwise_not(img_array)
plt.imshow(img_array, cmap=plt.cm.binary)
plt.show()
img_size = 28
new_array = cv2.resize(img_array, (img_size, img_size))
final_array = new_array.reshape(1, -1)
plt.imshow(new_array, cmap=plt.cm.binary)
plt.show()
But the images have a very dark patch at the bottom, which hampers the predictions I want to make with my data:
Original image:
What can I do to get past this problem? Interestingly, this only happens if I capture the image using OpenCV. If I use an image taken with the same webcam but through the camera application, the error is not visible (adding the path of the image for preprocessing).
You have two options:
Choosing a better global threshold for the gray values. This is the easier, less generic solution. Normally, people choose the Otsu method to automatically select the optimal threshold. Have a look at: OpenCV Thresholding Tutorial
threshold, dst_img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)
Using an adaptive threshold. Adaptive simply means that a separate threshold is calculated for each sliding-window location based on some criterion. Have a look at: Niblack's binarization methods
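A short sketch of what option two could look like with OpenCV's built-in adaptive threshold (the 11-pixel block size and the constant 2 are illustrative values, not tuned for this image):

import cv2

img = cv2.imread("thresh.jpg", cv2.IMREAD_GRAYSCALE)
# each pixel is compared to a Gaussian-weighted mean of its 11x11 neighborhood, minus 2
dst = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY, 11, 2)
cv2.imwrite("thresh_adaptive.jpg", dst)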
Using option one:
img = cv2.imread("thresh.jpg", cv2.IMREAD_GRAYSCALE)
threshold, img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)
cv2.imwrite("thresh_bin.jpg", img)
Output:
This problem is caused by the thresholding method used:
bw = gray.point(lambda x: 0 if x < 100 else 255, '1')
To solve it, you can lower the limit value (100) to, say, 75, or use OpenCV's automatic (Otsu) threshold:
thr, bw = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Here is one way to do that in Python/OpenCV using division normalization, thresholding and some morphology.
Input:
import cv2
import numpy as np
# read the image
img = cv2.imread('five.jpg')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur
smooth = cv2.GaussianBlur(gray, (555,555), 0)
# divide smooth by gray image
division = cv2.divide(smooth, gray, scale=255)
# invert
division = 255 - division
# threshold
thresh = cv2.threshold(division, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# add white border to help morphology close
border = cv2.copyMakeBorder(thresh, 60,60,60,60, cv2.BORDER_CONSTANT, value=(255,255,255))
hh, ww = border.shape
# morphology close
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (29,29))
result = cv2.morphologyEx(border, cv2.MORPH_CLOSE, kernel)
# remove border
result = result[60:hh-60, 60:ww-60]
# save results
cv2.imwrite('five_division_threshold.jpg',result)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

OpenCV - Draw contours on fingers using convex-hulls & adaptive thresholding

I am pretty new to OpenCV and am trying to draw simple contours along the outline of my hand using a webcam. I decided on cv2.adaptiveThreshold() to deal with the different light intensities when the camera adjusts to the moving hand. Everything seems to work fine, except that it struggles to find the fingers and to draw closed contours.
See here:
I thought about detecting a convex hull and then somehow detecting anything that deviates from it.
What is the best way to go about this? Presumably I first need to stop finding these weird closed contours, and then go on from there?
Here's the code, I fixed the trackbar values for you :)
import cv2
import numpy as np

#####################################
winWidth = 640
winHeight = 840
brightness = 100
cap = cv2.VideoCapture(0)
cap.set(3, winWidth)
cap.set(4, winHeight)
cap.set(10, brightness)
kernel = (7, 7)  # Gaussian blur kernel size
morphKernel = np.ones((7, 7), np.uint8)  # dilate/erode expect an array kernel, not a tuple
#######################################################################

def empty(a):
    pass

cv2.namedWindow("TrackBars")
cv2.resizeWindow("TrackBars", 640, 240)
cv2.createTrackbar("cVal", "TrackBars", 10, 40, empty)
cv2.createTrackbar("bSize", "TrackBars", 77, 154, empty)

def preprocessing(frame, value_BSize, cVal):
    imgGray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # mask = cv2.inRange(imgHsv, lower, upper)
    imgBlurred = cv2.GaussianBlur(imgGray, kernel, 4)
    gaussC = cv2.adaptiveThreshold(imgBlurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, value_BSize, cVal)
    imgDial = cv2.dilate(gaussC, morphKernel, iterations=3)
    imgErode = cv2.erode(imgDial, morphKernel, iterations=1)
    return imgDial

def getContours(imPrePro):
    contours, hierarchy = cv2.findContours(imPrePro, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 60:
            cv2.drawContours(imgCon, cnt, -1, (0, 255, 0), 2, cv2.LINE_AA)
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)

#######################################################################################################
while cap.isOpened():
    success, frame = cap.read()
    cVal = cv2.getTrackbarPos("cVal", "TrackBars")
    value_BSize = cv2.getTrackbarPos("bSize", "TrackBars")
    value_BSize = max(3, value_BSize)
    if value_BSize % 2 == 0:  # blockSize must be odd
        value_BSize += 1
    if success == True:
        frame = cv2.flip(frame, 1)
        imgCon = frame.copy()
        imPrePro = preprocessing(frame, value_BSize, cVal)
        getContours(imPrePro)
        cv2.imshow("Preprocessed", imPrePro)
        cv2.imshow("Original", imgCon)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        cv2.destroyAllWindows()
        break
The L*a*b color space can help find objects that are brighter than the background. One advantage is that this color space is hardware independent, so it should yield relatively similar results from any camera. Using the OTSU option to threshold the image can help it work in different lighting conditions, as it calculates the optimal threshold intensity to separate bright and dark areas in the image. Obviously it is not a silver bullet and will NOT work perfectly in every situation, especially in extreme cases, but as long as your hand's brightness is relatively different from the background, it should work.
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
tv, thresh = cv2.threshold(lab[:,:,0], 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
plt.imshow(thresh)
Once the hand is properly thresholded, you can proceed to find the contours and do your analysis as needed.
Note: the artifacts in the thresholded image are caused by removing the green contour lines from the original posted image.
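For instance, a minimal sketch of that follow-up step, assuming thresh from the snippet above and a frame to draw on (the area cutoff of 1000 is an arbitrary noise filter, and the two-value findContours return is the OpenCV 4.x signature):

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# keep only reasonably large blobs, then draw the biggest one (presumably the hand)
big = [c for c in contours if cv2.contourArea(c) > 1000]
if big:
    hand = max(big, key=cv2.contourArea)
    cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)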
I'm thresholding on lightness, so this might work differently depending on the image, but here's what works for this one.
import cv2
import numpy as np

# load image
img = cv2.imread("hand.jpg")

# convert to L*a*b and take the lightness channel
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

# threshold on lightness
thresh = cv2.inRange(l, 90, 255)

# contours (OpenCV 3.x returns three values; in 4.x drop the leading underscore)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# draw all contours
marked = img.copy()
cv2.drawContours(marked, contours, -1, (0, 255, 0), 3)

# show
cv2.imshow("marked", marked)
cv2.imshow("Thresh", thresh)
cv2.waitKey(0)

# save
cv2.imwrite("marked_hand.png", marked)
Have you looked into Google's MediaPipe as an alternative to OpenCV?
Also, I wonder if adding a thin black border along the bottom of the frame would help the contour close around the wrist.
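If you want to try the border idea, a one-line sketch (the 2-pixel thickness is an arbitrary choice):

# pad two black rows onto the bottom edge so contours can close across the wrist
framed = cv2.copyMakeBorder(frame, 0, 2, 0, 0, cv2.BORDER_CONSTANT, value=(0, 0, 0))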

Convert an opencv script

I found this code on this website (https://www.hackster.io/tinkernut/raspberry-pi-smart-car-8641ca) and tried to rewrite it so that it works with a USB webcam instead of the Pi camera.
I wanted to put the captured frames into a NumPy array, but I can't get any further. After a lot of fiddling around and searching the internet, I haven't found anything that solves the problem.
I don't have much experience in Python or OpenCV programming, so I hope someone can help me, because I am overwhelmed and my brain just feels like a crushed potato.
Generally, about my project: I want to build an infotainment system with a Raspberry Pi, and everything works except the rear-view camera. So navigation, music and video playback, and so on.
As the main program to tie all the other programs together, I use Kodi with the CarPC Carbon skin.
import time
import cv2
import numpy as np
from picamera.array import PiRGBArray
from picamera import PiCamera
import RPi.GPIO as GPIO

buzzer = 22
GPIO.setmode(GPIO.BCM)
GPIO.setup(buzzer, GPIO.OUT)

camera = PiCamera()
camera.resolution = (320, 240)  # a smaller resolution means faster processing
camera.framerate = 24
rawCapture = PiRGBArray(camera, size=(320, 240))
kernel = np.ones((2, 2), np.uint8)
time.sleep(0.1)

for still in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    GPIO.output(buzzer, False)
    image = still.array

    # create a detection area
    widthAlert = np.size(image, 1)   # get width of image
    heightAlert = np.size(image, 0)  # get height of image
    yAlert = heightAlert // 2 + 100  # y coordinate of the alert line (integer, as drawing calls require)
    cv2.line(image, (0, yAlert), (widthAlert, yAlert), (0, 0, 255), 2)  # draw a line to show the area

    # BGR color range for the obstacle
    lower = np.array([1, 0, 20], dtype="uint8")
    upper = np.array([60, 40, 200], dtype="uint8")

    # use the color range to create a mask for the image and apply it to the image
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)
    dilation = cv2.dilate(mask, kernel, iterations=3)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_GRADIENT, kernel)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_CLOSE, kernel)
    edge = cv2.Canny(closing, 175, 175)
    contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    threshold_area = 400
    centres = []
    if len(contours) != 0:
        for x in contours:
            # find the area of each contour
            area = cv2.contourArea(x)
            # find the center of each contour
            moments = cv2.moments(x)
            # weed out the contours that are smaller than our threshold
            if area > threshold_area:
                (x, y, w, h) = cv2.boundingRect(x)
                centerX = (x + x + w) // 2  # integer coordinates for drawing
                centerY = (y + y + h) // 2
                cv2.circle(image, (centerX, centerY), 7, (255, 255, 255), -1)
                if (y + h) > yAlert:
                    cv2.putText(image, "ALERT!", (centerX - 20, centerY - 20),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
                    GPIO.output(buzzer, True)

    cv2.imshow("Display", image)
    rawCapture.truncate(0)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        GPIO.output(buzzer, False)
        break
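Since the question is about swapping the Pi camera for a USB webcam, here is a sketch of what the capture part could look like with cv2.VideoCapture (the device index 0 is an assumption; note that each frame returned by cap.read() is already a NumPy array, so PiRGBArray is not needed):

import cv2

cap = cv2.VideoCapture(0)  # USB webcam; the device index may differ on your system
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

while cap.isOpened():
    ret, image = cap.read()  # image is a NumPy BGR array, like still.array above
    if not ret:
        break
    # ... the same per-frame processing as in the loop above ...
    cv2.imshow("Display", image)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()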

ball detection in python runs very slow, any tips on improving it?

I am working on a sensor for a PID controller: I have a camera recording a scene, and for the control to work I need to extract the tennis ball's position in each frame. The scene has a white background with an orange tennis ball, as you can see in the image:
When I run it on my Lenovo IdeaPad 100 (which is quite a slow PC), I get the ball's position about once every 1.2 seconds.
I think this could be done much faster, but I don't know what to do.
Any tips and suggestions are welcome.
I'm expecting a system that can produce at least 2 positions per second.
Here is my code:
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(0)
lower_red = np.array([0, 100, 100])
upper_red = np.array([20, 255, 255])

while True:
    start = time.time()
    # grab a color frame from the camera
    _, img = cap.read()
    print("read")

    blurred = cv2.GaussianBlur(img, (11, 11), 0)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv2 = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask2 = cv2.inRange(hsv2, lower_red, upper_red)
    mask2 = cv2.erode(mask2, None, iterations=2)
    mask2 = cv2.dilate(mask2, None, iterations=2)

    imageWidth = mask2.shape[1]
    imageHeight = mask2.shape[0]

    # average the coordinates of all white mask pixels (slow pixel-by-pixel loop)
    sum_i_white = 1
    num_white = 1
    sum_j_white = 1
    for i in range(0, imageHeight):
        for j in range(0, imageWidth):
            if mask2[i][j] == 255:
                sum_i_white += i
                sum_j_white += j
                num_white += 1

    cv2.circle(img, (int(sum_j_white / num_white), int(sum_i_white / num_white)), 3, (0, 0, 255), -1)
    cv2.putText(img, 'here', (int(sum_j_white / num_white), int(sum_i_white / num_white)),
                cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 1, cv2.LINE_AA)
    cv2.imshow("final", img)
    print(time.time() - start)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
Do you know which part is slow?
You can call a C program from Python, so if your double for loop is very slow, you could implement it in C.
If you have just an array and an int as input, and an int as output, I think it shouldn't be hard.
But I don't know if this would be useful.
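As an alternative to the C route (my addition, not part of the original answer): the double loop can be replaced by vectorized NumPy, which is typically orders of magnitude faster than a pure-Python pixel loop. A sketch, assuming mask2 from the question's code:

import numpy as np

# row/column indices of all white mask pixels, found in one vectorized pass
ys, xs = np.nonzero(mask2 == 255)
if len(xs) > 0:
    centerX = int(xs.mean())
    centerY = int(ys.mean())

cv2.moments(mask2) would give the same centroid via m10/m00 and m01/m00.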

Detect different color blobs with OpenCV

I'm new to OpenCV, and for a school project I need to detect a red and a green circle with a camera, so I've used blob detection, but it detects both colors. I think my mask is bad. Each color is linked to a specific action.
At the moment my code detects red and green circles on the same page well, but I want it to detect only the red circle on a white page.
Thanks for your help.
# Standard imports
import cv2
import numpy as np

# Open the camera
im = cv2.VideoCapture(0)

# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()

# Change thresholds
params.minThreshold = 100
params.maxThreshold = 200

# Filter by Area.
params.filterByArea = True
params.minArea = 200
params.maxArea = 20000

# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0.1

# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.1

# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.1

blueLower = (0, 85, 170)    # 100,130,50
blueUpper = (140, 110, 255)  # 200,200,130

while True:
    ret, frame = im.read()
    mask = cv2.inRange(frame, blueLower, blueUpper)
    mask = cv2.erode(mask, None, iterations=0)
    mask = cv2.dilate(mask, None, iterations=0)
    frame = cv2.bitwise_and(frame, frame, mask=mask)

    # Set up the detector with the parameters above.
    detector = cv2.SimpleBlobDetector_create(params)

    # Detect blobs.
    keypoints = detector.detect(mask)

    # Draw detected blobs as red circles.
    # cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of the blob
    im_with_keypoints = cv2.drawKeypoints(mask, keypoints, np.array([]), (0, 0, 255),
                                          cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    # Display the resulting frame
    frame = cv2.bitwise_and(frame, im_with_keypoints, mask=mask)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
im.release()
cv2.destroyAllWindows()
EDIT 1: Code update
Now I have an issue where my full circle isn't detected.
No Blob Detection
Second Version
# Standard imports
import cv2
import numpy as np

# Open the camera
im = cv2.VideoCapture(0)

while True:
    ret, frame = im.read()
    lower = (130, 150, 80)   # 130,150,80
    upper = (250, 250, 120)  # 250,250,120
    mask = cv2.inRange(frame, lower, upper)
    lower, contours, upper = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    blob = max(contours, key=lambda el: cv2.contourArea(el))
    M = cv2.moments(blob)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
    canvas = im.copy()
    cv2.circle(canvas, center, 2, (0, 0, 255), -1)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

im.release()
cv2.destroyAllWindows()
You need to work out the BGR numbers for your green (let's say, for argument's sake, [0, 255, 0]), then create a mask that ignores any colours outside a tolerance around your green:
mask = cv2.inRange(image, lower, upper)
Take a look at this tutorial for a step-by-step guide.
Play around with lower and upper to get the right behaviour. Then you can find the contours in the mask:
_, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
Then go through the contours list to find the biggest one (filtering out any possible noise):
blob = max(contours, key=lambda el: cv2.contourArea(el))
And that's your final 'blob'. You can find its center from the image moments (m00 is the blob's area, and m10 and m01 are the sums of the x and y pixel coordinates, so the ratios below give the centroid):
M = cv2.moments(blob)
center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
You can draw this center onto a copy of your image, for checking:
canvas = im.copy()
cv2.circle(canvas, center, 2, (0,0,255), -1)
Obviously, this makes the assumption that there's only one green ball and nothing else green in the image. But it's a start.
EDIT - RESPONSE TO SECOND POST
I think the following should work. I haven't tested it, but you should at least be able to do a bit more debugging with the canvas and mask displayed:
# Standard imports
import cv2
import numpy as np

# Open the camera
cam = cv2.VideoCapture(0)

while True:
    ret, frame = cam.read()
    if not ret:
        break
    canvas = frame.copy()
    lower = (130, 150, 80)   # 130,150,80
    upper = (250, 250, 120)  # 250,250,120
    mask = cv2.inRange(frame, lower, upper)
    try:
        # NB: using _ as the variable name for two of the outputs, as they're not used
        _, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        blob = max(contours, key=lambda el: cv2.contourArea(el))
        M = cv2.moments(blob)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
        cv2.circle(canvas, center, 2, (0, 0, 255), -1)
    except (ValueError, ZeroDivisionError):
        pass
    cv2.imshow('frame', frame)
    cv2.imshow('canvas', canvas)
    cv2.imshow('mask', mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()  # the capture is named cam here, not im
cv2.destroyAllWindows()
You should use the HSV color space for better results if you want to filter by color.
ret, frame = im.read()
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # add this to your code
mask = cv2.inRange(frame, blueLower, blueUpper)
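One caveat (my addition, not part of the original answer): once the frame is in HSV, blueLower and blueUpper must be HSV values too. A sketch for deriving such bounds from a known BGR color:

import cv2
import numpy as np

# convert a single reference BGR color to HSV to anchor the range
ref_bgr = np.uint8([[[255, 0, 0]]])  # pure blue in BGR (illustrative)
ref_hsv = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2HSV)[0][0]
h = int(ref_hsv[0])

# +/-10 around the hue, with generous saturation/value ranges
blueLower = (max(h - 10, 0), 100, 100)
blueUpper = (min(h + 10, 179), 255, 255)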
