I am currently making a program with OpenCV that detects two colours. My next step is to leave a "translucent" trail of where both of these colours have moved, the idea being that every time they cross over their own trail it gets a shade darker.
Here is my current code:
# required libraries
import cv2
import numpy as np

# main function
def main():
    # returns video from the camera -- cameras are indexed (0 is the first/default camera, 1 the next)
    cap = cv2.VideoCapture(0)
    # cap is opened if the PC is receiving camera data
    if cap.isOpened():
        ret, frame = cap.read()
    else:
        ret = False
    while ret:
        ret, frame = cap.read()
        # convert to HSV so colour ranges can be set
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # BLUE color range
        blue_low = np.array([100, 50, 50])
        blue_high = np.array([140, 255, 255])
        # GREEN color range
        green_low = np.array([40, 40, 40])
        green_high = np.array([80, 255, 255])
        # creating masks
        blue_mask = cv2.inRange(hsv, blue_low, blue_high)
        green_mask = cv2.inRange(hsv, green_low, green_high)
        # union of the masks (the blue and green ranges don't overlap, so OR is the right combiner)
        blue_green_mask = cv2.bitwise_or(blue_mask, green_mask)
        # create the masked version (background black, only the specified colours come through from the cam)
        output = cv2.bitwise_and(frame, frame, mask=blue_green_mask)
        # create/open windows
        cv2.imshow("image mask", blue_green_mask)
        cv2.imshow("orig webcam feed", frame)
        cv2.imshow("color tracking", output)
        # if q is pressed the loop breaks
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # once broken the program runs the remaining instructions (closing windows and stopping the cam)
    cv2.destroyAllWindows()
    cap.release()

if __name__ == "__main__":
    main()
My question now is: how would I add the trail of where both colours have gone? I have also read that I will run into a problem once the trail is implemented, as insignificant objects may be detected as one of the colours and leave an unwanted trail, meaning I will need to find a way to trail only the largest object of each specified colour.
EDIT:
For further clarification:
I am using 2 black highlighters (one with a blue cap and one with a green cap).
With regards to the trail, I am referring to something similar to this:
[image: trail clarification]
This guy did an okay job of explaining it, but I was still very confused, which is why I came to Stack Overflow for help.
As for the trails: I would like them to be 'translucent' and not solid like in the picture above, so that if the object crosses over its path again, that section of the path becomes a shade darker.
Hope this helps :)
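A minimal sketch of one way to get such a trail, assuming a float32 accumulator canvas that gains a little opacity on every frame an object covers a pixel; the per-visit opacity of 0.08, the min_area of 500, and the largest_blob_mask helper are all illustrative choices, not fixed values:

import cv2
import numpy as np

def largest_blob_mask(mask, min_area=500):
    # Keep only the largest contour of a binary mask; small specks never reach the trail.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    clean = np.zeros_like(mask)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(biggest) >= min_area:
            cv2.drawContours(clean, [biggest], -1, 255, cv2.FILLED)
    return clean

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
# float32 accumulator: each visit adds a little opacity, so re-crossing darkens
trail = np.zeros(frame.shape[:2], np.float32) if ret else None

while ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blue_mask = cv2.inRange(hsv, np.array([100, 50, 50]), np.array([140, 255, 255]))
    green_mask = cv2.inRange(hsv, np.array([40, 40, 40]), np.array([80, 255, 255]))
    covered = cv2.bitwise_or(largest_blob_mask(blue_mask), largest_blob_mask(green_mask))
    trail += (covered > 0) * 0.08            # 0.08 opacity gained per visit -- tune to taste
    alpha = np.clip(trail, 0.0, 1.0)[:, :, None]
    darkened = frame.astype(np.float32) * (1.0 - alpha)   # darker where the trail is denser
    cv2.imshow("trail", darkened.astype(np.uint8))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ret, frame = cap.read()

cap.release()
cv2.destroyAllWindows()

Because the accumulator also grows while an object sits still, you may want to gate the update on motion (e.g. only add where covered differs from the previous frame's mask) so a stationary pen doesn't blacken its spot.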
Related
I'm relatively new to scripting. (I know quite a bit but I also don't know quite a bit.)
I'm trying to have a simple script use OpenCV-Python to subtract two frames from a webcam and draw a bounding box around the changed pixels. The issue is that when I try to define the boundingRect (x,y,w,h = cv2.boundingRect(contours)) it gives the error:
Message=OpenCV(4.5.3) :-1: error: (-5:Bad argument) in function 'boundingRect'
> Overload resolution failed:
> - array is not a numpy array, neither a scalar
> - Expected Ptr<cv::UMat> for argument 'array'
I've been searching around for quite a while, but very few people seem to have had my issue, and pretty much none of them had solutions that worked.
Here's my code:
import cv2
from time import sleep as wait
import numpy as np

lastFrame = "foobaz"
i = 0

# My webcam is on index 1; this isn't (at least shouldn't be) the issue. Set it back to 0 if you are testing.
vid = cv2.VideoCapture(1)

# A placeholder black image for the 'subtract' imshow window
black = np.zeros((512, 512, 3), np.uint8)

while(True):
    wait(0.5)
    ret, frame = vid.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurframe = cv2.GaussianBlur(frame, (25, 25), 0)
    # Make sure lastFrame has been assigned; if not, use the placeholder black image.
    if lastFrame != "foobaz":
        # Subtract the current frame and the last frame to find the difference.
        subFrame = cv2.subtract(blurframe, lastFrame)
    else:
        subFrame = black
    # Assign the next lastFrame
    lastFrame = blurframe
    # Get the threshold of the subtracted image
    ret, thresh1 = cv2.threshold(subFrame, 40, 255, cv2.THRESH_BINARY)
    # Convert the thresholded image to grayscale if the loop ran for the first time.
    if i == 0:
        thresh1 = cv2.cvtColor(thresh1, cv2.COLOR_BGR2GRAY)
        i += 1
    # This is where issues arise. I'm trying to apply a bounding box using a contour, but it always errors on the boundingRect line.
    contours = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
    print(len(contours))
    x, y, w, h = cv2.boundingRect(contours)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    cv2.imshow('subtract', thresh1)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I saw that some other posts asked for the contours' type(), so here it is:
type(contours) = <class 'list'>
CLOSED: I found out the issue. You have to iterate over the contours for it to work.
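For reference, a sketch of that fix, assuming the same thresh1 and frame as above: cv2.boundingRect() expects a single contour (one point array), while cv2.findContours() returns a list of them, so you loop over the list:

# findContours returns (contours, hierarchy) in OpenCV 4.x; unpack the list
contours, _ = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for c in contours:
    # boundingRect takes ONE contour, not the whole list
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)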
I use the following function to obtain video frames. I either pass noise_type=None to obtain the original frames, or pass 'salt and pepper' to overlay the frames with salt-and-pepper noise (randomly replacing some RGB pixels with (0, 0, 0) or (255, 255, 255)). This is passed alongside some probability that a pixel will be replaced with a black or white pixel (e.g. prob=0.1 to replace 10% of pixels with either a black or white pixel).
Please note, I am using Python 3.7.9 and OpenCV 4.4.0. Also, as the videos are to be ultimately written alongside audio data using moviepy, they are in RGB space; so running this code and viewing the video will be in the wrong colourspace, but you should still see that the video hangs during playback.
def get_video_frames(filename, noise_type=None, prob=None):
    all_frames = []
    video_capture = cv2.VideoCapture()
    if not video_capture.open(filename):
        print('Error: Cannot open video file {}'.format(filename))
        return
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    print("fps: {}".format(fps))
    while True:
        has_frames, frame = video_capture.read()
        if not has_frames:
            video_capture.release()
            break
        if noise_type is None:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame = cv2.resize(frame, dsize=(224, 224))
        elif noise_type == 'salt and pepper':
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame = cv2.resize(frame, dsize=(224, 224))
            row, col, ch = frame.shape
            s_vs_p = 0.5
            salty_x_coords = np.random.choice(frame.shape[0], np.int(np.ceil(frame.shape[0]*frame.shape[1]*prob*s_vs_p)))
            salty_y_coords = np.random.choice(frame.shape[1], np.int(np.ceil(frame.shape[0]*frame.shape[1]*prob*s_vs_p)))
            frame[salty_x_coords, salty_y_coords] = 255, 255, 255
            peppery_x_coords = np.random.choice(frame.shape[0], np.int(np.ceil(frame.shape[0]*frame.shape[1]*prob*s_vs_p)))
            peppery_y_coords = np.random.choice(frame.shape[1], np.int(np.ceil(frame.shape[0]*frame.shape[1]*prob*s_vs_p)))
            frame[peppery_x_coords, peppery_y_coords] = 0, 0, 0
        all_frames.append(frame)
    return all_frames, fps
The issue seems to come with playback. I generate clean frames and display them using OpenCV:
frames_clean, fps = get_video_frames('C:/some_video_file.mp4')
for f in frames_clean:
    cv2.imshow('clean', f)
    cv2.waitKey(33)
cv2.destroyAllWindows()
Then I generate noisy frames and display them using OpenCV:
frames_noisy, fps = get_video_frames('C:/some_video_file.mp4', noise_type='salt and pepper', prob=0.1)
for f in frames_noisy:
    cv2.imshow('noisy', f)
    cv2.waitKey(33)
cv2.destroyAllWindows()
The noisy video hangs/pauses/stutters on some frames. It's really unusual, as both frames_clean and frames_noisy are lists of uint8 frames of the same shape; the only difference is that the noisy frames have some different pixel values. This behaviour is also present if I create a video clip from these frame lists using moviepy, write it to disk, and play it with VLC/Windows Media Player. After 2 days of scouring the internet, I can't find any explanation. I would like the noisy videos I generate to play with a stable display rate, just like the clean video without noise. Thanks for any help!
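One thing worth ruling out first (a diagnostic sketch, not a confirmed fix): the playback loops above hard-code waitKey(33) regardless of the source fps, and waitKey's delay is a minimum, not an exact frame time. Timing each iteration shows whether the display itself slows down on the noisy frames; frames_noisy and fps are assumed to come from the get_video_frames() call above:

import time
import cv2

delay_ms = max(1, int(round(1000.0 / fps)))   # derive the delay from the real fps
for f in frames_noisy:
    t0 = time.perf_counter()
    cv2.imshow('noisy', f)
    cv2.waitKey(delay_ms)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    if elapsed_ms > 2 * delay_ms:             # flag iterations far slower than expected
        print('slow frame: {:.1f} ms'.format(elapsed_ms))
cv2.destroyAllWindows()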
I want to run normal software actions using object detection. I can detect an object or colour with OpenCV, but after that I cannot take any action; for example, I want to push a button whenever the camera detects a certain colour or object.
With this code I can detect any yellow object, but I can't take any action after that.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # yellow color
    low_yellow = np.array([20, 60, 200])
    high_yellow = np.array([60, 255, 255])
    yellow_mask = cv2.inRange(hsv_frame, low_yellow, high_yellow)
    yellow = cv2.bitwise_and(frame, frame, mask=yellow_mask)
    cv2.imshow("OUR FRAME", frame)
    cv2.imshow("YELLOW FRAME", yellow)
    key = cv2.waitKey(1)
    if key == 27:
        break
It looks to me like you are not really detecting objects yet; you are taking whatever image the camera sees and applying a yellow filter to it. The screen would still display even if there were no yellow object.
To get what you are looking for, I suggest looking into "blob detection". This is probably the simplest form of object detection. Once you can detect "blobs", I recommend setting a threshold for size and deciding whether an object is worth reacting to based on that.
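A rough sketch of that suggestion, using contour area as the "blob" size test; the 2000-pixel threshold and the on_yellow_detected() callback are placeholders to replace with your real action:

import cv2
import numpy as np

def on_yellow_detected():
    # hypothetical action -- replace with whatever "pushing the button" means for you
    print("yellow object detected!")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    yellow_mask = cv2.inRange(hsv, np.array([20, 60, 200]), np.array([60, 255, 255]))
    contours, _ = cv2.findContours(yellow_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # react only if some blob is big enough to matter (2000 px^2 is an arbitrary threshold)
    if any(cv2.contourArea(c) > 2000 for c in contours):
        on_yellow_detected()
    cv2.imshow("OUR FRAME", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
cv2.destroyAllWindows()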
I wrote the following code to filter video feed from a camera to only show the bright spots from lights or reflections. When I run it, the window opens up but it won't respond and nothing is shown.
import cv2
import numpy as np
import keyboard

# Start capturing video in a VideoCapture object called cap
cap = cv2.VideoCapture(0)

# While the q button is not pressed, the while loop runs
while not keyboard.is_pressed('q'):
    # ret is a placeholder (not used)
    ret, frame = cap.read()
    # Set the range of accepted colors in HSV
    whiteRange = np.array([[0, 0, 200], [255, 40, 255]])
    # Blur the image to remove noise and smooth the details
    gaussianBlur = cv2.GaussianBlur(frame, (5, 5), 0)
    # Convert from BGR to HSV to filter colors in the next step
    hsvFrame = cv2.cvtColor(gaussianBlur, cv2.COLOR_BGR2HSV)
    # Filter for white only and turn other colors black
    whiteFilter = cv2.inRange(hsvFrame, whiteRange[0], whiteRange[1])
    # Display the final image
    cv2.imshow('Tape-Detection', whiteFilter)

# End the capture and destroy the windows
cap.release()
cv2.destroyAllWindows()
Any suggestions would be great too, I'm new to OpenCV Python.
You cannot use cv2.imshow() without cv2.waitKey(). The spare cycles while you are waiting are used to update the display.
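A sketch of the loop restructured around that point, dropping the keyboard module entirely since cv2.waitKey() can both repaint the window and read the quit key (same camera index and HSV range as the question):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([255, 40, 255]))
    cv2.imshow('Tape-Detection', white)
    # waitKey both paces the loop and lets the HighGUI window repaint
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()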
I am able to use the code below to find anything blue within a frame:
How To Detect Red Color In OpenCV Python?
However I want to modify the code to look for a very specific color within a video that I have. I read in a frame from the video file, and convert the frame to HSV, and then print the HSV values at a particular element (that contains the color I want to detect):
print("hsv[x][y]: {}".format(hsv[x][y]))
"hsv" is what I get after I convert the frame from BGR to HSV using cvtColor().
The print command above gives me:
hsv[x][y]: [108 27 207]
I then define my lower and upper HSV values, and pass that to the inRange() function:
lowerHSV = np.array([107,26,206])
upperHSV = np.array([109,28,208])
maskHSV = cv2.inRange(hsv, lowerHSV, upperHSV)
I display maskHSV, but it doesn't seem to identify the item that contains that color. I try to expand the lowerHSV and upperHSV bounds, but that doesn't seem to work either.
I tried something similar using BGR, but that doesn't appear to work either.
The thing I'm trying to identify can best be described as a lemon-lime sports drink bottle...
Any suggestions would be appreciated.
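One hedged suggestion before the full code below: a sampled value of [108, 27, 207] has very low saturation (nearly grey), and a ±1 band around a single pixel will almost never cover the whole bottle, because lighting varies across it. A sketch of building a wider range around the sampled value; the tolerances are illustrative, hsv is assumed to be the converted frame from the question, and note that OpenCV hue runs 0-179:

import cv2
import numpy as np

h, s, v = 108, 27, 207          # HSV sampled at the pixel of interest
dh, ds, dv = 10, 60, 60         # illustrative tolerances -- widen/narrow per scene
lowerHSV = np.array([max(h - dh, 0),   max(s - ds, 0),   max(v - dv, 0)])
upperHSV = np.array([min(h + dh, 179), min(s + ds, 255), min(v + dv, 255)])
maskHSV = cv2.inRange(hsv, lowerHSV, upperHSV)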
=====================================================
The complete python code I am running is shown below, along with some relevant images...
import cv2
import numpy as np
import time

video_capture = cv2.VideoCapture("conveyorBeltShort.wmv")

xloc = 460
yloc = 60
dCounter = 0

while(1):
    dCounter += 1
    grabbed, frame = video_capture.read()
    if grabbed == False:
        break
    time.sleep(2)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lowerBGR = np.array([200, 190, 180])
    upperBGR = np.array([210, 200, 190])
    # HSV [108 27 209] observation
    lowerHSV = np.array([105, 24, 206])
    upperHSV = np.array([111, 30, 212])
    maskBGR = cv2.inRange(frame, lowerBGR, upperBGR)
    maskHSV = cv2.inRange(hsv, lowerHSV, upperHSV)
    cv2.putText(hsv, "HSV:" + str(hsv[xloc][yloc]), (100, 280),
                cv2.FONT_HERSHEY_SIMPLEX, 3, (0, 0, 0), 8, cv2.LINE_AA)
    cv2.putText(frame, "BGR:" + str(frame[xloc][yloc]), (100, 480),
                cv2.FONT_HERSHEY_SIMPLEX, 3, (0, 0, 0), 8, cv2.LINE_AA)
    cv2.rectangle(frame, (0, 0), (xloc - 1, yloc - 1), (255, 0, 0), 2)
    cv2.rectangle(hsv, (0, 0), (xloc - 1, yloc - 1), (255, 0, 0), 2)
    cv2.imwrite("maskHSV-%d.jpg" % dCounter, maskHSV)
    cv2.imwrite("maskBGR-%d.jpg" % dCounter, maskBGR)
    cv2.imwrite("hsv-%d.jpg" % dCounter, hsv)
    cv2.imwrite("frame-%d.jpg" % dCounter, frame)
    cv2.imshow('frame', frame)
    cv2.imshow('hsv', hsv)
    cv2.imshow('maskHSV', maskHSV)
    cv2.imshow('maskBGR', maskBGR)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
video_capture.release()
=====================================================
first image is "frame-6.jpg"
second image is "hsv-6.jpg"
third image is "maskHSV-6.jpg"
fourth image is "maskBGR-6.jpg"
maskHSV-6.jpg and maskBGR-6.jpg do not appear to show the lemon-lime bottle on the conveyor belt. I believe I have the lower and upper HSV/BGR limits set correctly...
I know only C++ for OpenCV, but according to this post you should use img[y][x] or img[y, x] to access a pixel value.
Your RGB values are not correct. They should be something like [96, 160, 165].
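To make that concrete, a tiny sketch of the indexing fix, assuming hsv and the question's xloc/yloc: NumPy images are row-major, so the row (y) index comes first, and hsv[x][y] silently reads a different pixel whenever both indices happen to be in range.

x, y = 460, 60                  # the question's xloc, yloc
# row (y) first, then column (x)
print("HSV at (x={}, y={}): {}".format(x, y, hsv[y, x]))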