I wrote the following code to filter the video feed from a camera so that only the bright spots from lights or reflections are shown. When I run it, the window opens but it won't respond and nothing is displayed.
import cv2
import numpy as np
import keyboard

# Start capturing video in a VideoCapture object called cap
cap = cv2.VideoCapture(0)
# The while loop runs until the q key is pressed
while not keyboard.is_pressed('q'):
    # ret is a placeholder (not used)
    ret, frame = cap.read()
    # Set the range of accepted colors in HSV
    whiteRange = np.array([[0, 0, 200], [255, 40, 255]])
    # Blur the image to remove noise and smooth the details
    gaussianBlur = cv2.GaussianBlur(frame, (5, 5), 0)
    # Convert from BGR to HSV to filter colors in the next step
    hsvFrame = cv2.cvtColor(gaussianBlur, cv2.COLOR_BGR2HSV)
    # Filter for white only and turn other colors black
    whiteFilter = cv2.inRange(hsvFrame, whiteRange[0], whiteRange[1])
    # Display the final image
    cv2.imshow('Tape-Detection', whiteFilter)
# End the capture and destroy the windows
cap.release()
cv2.destroyAllWindows()
Any suggestions would be great too; I'm new to OpenCV in Python.
You cannot use cv2.imshow() without cv2.waitKey(). The spare cycles while you are waiting are used to update the display.
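A minimal sketch of the fixed loop, polling for the q key through cv2.waitKey() itself (which also removes the need for the keyboard module):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gaussianBlur = cv2.GaussianBlur(frame, (5, 5), 0)
    hsvFrame = cv2.cvtColor(gaussianBlur, cv2.COLOR_BGR2HSV)
    # Keep only near-white pixels (low saturation, high value)
    whiteFilter = cv2.inRange(hsvFrame, np.array([0, 0, 200]), np.array([255, 40, 255]))
    cv2.imshow('Tape-Detection', whiteFilter)
    # waitKey pumps the GUI event loop; without it the window never updates
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()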
I use the following function to obtain video frames. I either pass noise_type=None to obtain the original frames, or pass 'salt and pepper' to overlay the frames with salt-and-pepper noise (randomly replacing some RGB pixels with (0, 0, 0) or (255, 255, 255)). This is passed alongside some probability that a pixel will be replaced with a black or white pixel (e.g. prob=0.1 to replace 10% of pixels with either a black or white pixel).
Please note, I am using Python 3.7.9 and OpenCV 4.4.0. Also, as the videos are ultimately written alongside audio data using moviepy, they are kept in RGB space; running this code and viewing the video will therefore show the wrong colourspace, but you should still see that the video hangs during playback.
def get_video_frames(filename, noise_type=None, prob=None):
    all_frames = []
    video_capture = cv2.VideoCapture()
    if not video_capture.open(filename):
        print('Error: Cannot open video file {}'.format(filename))
        return
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    print("fps: {}".format(fps))
    while True:
        has_frames, frame = video_capture.read()
        if not has_frames:
            video_capture.release()
            break
        if noise_type is None:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame = cv2.resize(frame, dsize=(224, 224))
        elif noise_type == 'salt and pepper':
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame = cv2.resize(frame, dsize=(224, 224))
            row, col, ch = frame.shape
            s_vs_p = 0.5
            num_noisy = int(np.ceil(row * col * prob * s_vs_p))
            salty_x_coords = np.random.choice(row, num_noisy)
            salty_y_coords = np.random.choice(col, num_noisy)
            frame[salty_x_coords, salty_y_coords] = 255, 255, 255
            peppery_x_coords = np.random.choice(row, num_noisy)
            peppery_y_coords = np.random.choice(col, num_noisy)
            frame[peppery_x_coords, peppery_y_coords] = 0, 0, 0
        all_frames.append(frame)
    return all_frames, fps
The issue comes with playback, it seems. I generate clean frames and display them using opencv:
frames_clean, fps = get_video_frames('C:/some_video_file.mp4')
for f in frames_clean:
    cv2.imshow('clean', f)
    cv2.waitKey(33)
cv2.destroyAllWindows()
Then I generate noisy frames and display them using opencv:
frames_noisy, fps = get_video_frames('C:/some_video_file.mp4', noise_type='salt and pepper', prob=0.1)
for f in frames_noisy:
    cv2.imshow('noisy', f)
    cv2.waitKey(33)
cv2.destroyAllWindows()
The noisy video hangs/pauses/stutters on some frames. It's really unusual, as both frames_clean and frames_noisy are lists of uint8 frames of the same shape; the only difference is that the noisy frames have some different pixel values. This behaviour is also present if I create a video clip from these frame lists using moviepy, write it to disk, and play it with VLC or Windows Media Player. After 2 days of scouring the internet, I can't find any explanation. I would like the noisy videos I generate to play with a stable display rate, as the clean video does. Thanks for any help!
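EDIT:
For reference, here is a small timing sketch (function name and threshold are my own choices) that I can use to check whether individual imshow calls actually take longer on the noisy frames:

import time
import cv2

def show_frames_timed(frames, window_name, delay_ms=33):
    # Display frames and report any iteration that overruns the target delay
    for i, f in enumerate(frames):
        start = time.perf_counter()
        cv2.imshow(window_name, f)
        cv2.waitKey(delay_ms)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > delay_ms * 1.5:
            print('frame {} took {:.1f} ms'.format(i, elapsed_ms))
    cv2.destroyAllWindows()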
I am currently making a program with OpenCV that detects 2 colours. My next step is to leave a "translucent" trail showing where both of these colours have moved. The idea is that every time they cross over their trail, it gets a shade darker.
Here is my current code:
# required libraries
import cv2
import numpy as np

# main function
def main():
    # returns video from the camera -- cameras are indexed (0 is the front camera, 1 is the rear)
    cap = cv2.VideoCapture(0)
    # cap is opened if the pc is receiving cam data
    if cap.isOpened():
        ret, frame = cap.read()
    else:
        ret = False
    while ret:
        ret, frame = cap.read()
        # setting color range
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # BLUE color range
        blue_low = np.array([100, 50, 50])
        blue_high = np.array([140, 255, 255])
        # GREEN color range
        green_low = np.array([40, 40, 40])
        green_high = np.array([80, 255, 255])
        # creating masks
        blue_mask = cv2.inRange(hsv, blue_low, blue_high)
        green_mask = cv2.inRange(hsv, green_low, green_high)
        # combination of masks
        blue_green_mask = cv2.bitwise_xor(blue_mask, green_mask)
        blue_green_mask_colored = cv2.bitwise_and(blue_mask, green_mask, mask=blue_green_mask)
        # create the masked version (background black, the specified colors coming through from the cam)
        output = cv2.bitwise_and(frame, frame, mask=blue_green_mask)
        # create/open windows
        cv2.imshow("image mask", blue_green_mask)
        cv2.imshow("orig webcam feed", frame)
        cv2.imshow("color tracking", output)
        # if q is pressed the loop breaks
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # once broken the program runs the remaining instructions (closing windows and stopping the cam)
    cv2.destroyAllWindows()
    cap.release()

if __name__ == "__main__":
    main()
My question now is: how would I add the trail of where both colours have gone? I have also read that I will run into a problem once the trail is implemented, as insignificant objects may be detected as one of the colours and leave an unwanted trail, meaning I will need a way to only trail the largest object of each specified colour.
EDIT:
For further clarification:
I am using 2 black highlighters (one with a blue cap and one with a green cap).
With regard to the trail, I am referring to something similar to this:
trail clarification
This guy did an okay job of explaining it, but I was still very confused, which is why I came to Stack Overflow for help.
As for the trails: I would like them to be "translucent", not solid like in the picture above, so that if an object crosses over its path again, that section of the path becomes a shade darker.
Hope this helps :)
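To make the idea concrete, here is a rough sketch of the accumulator approach I have been imagining (untested; TRAIL_STEP and the blending weights are made-up values). It darkens a persistent greyscale image wherever the largest blue or green object currently is, then blends it over the live frame:

import cv2
import numpy as np

TRAIL_STEP = 15  # made-up value: how much darker each pass stamps the trail

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
# accumulator starts white; areas the objects pass over get darkened
trail = np.full(frame.shape[:2], 255, dtype=np.uint8)
while ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blue_mask = cv2.inRange(hsv, np.array([100, 50, 50]), np.array([140, 255, 255]))
    green_mask = cv2.inRange(hsv, np.array([40, 40, 40]), np.array([80, 255, 255]))
    for mask in (blue_mask, green_mask):
        # findContours returns 2 values in OpenCV 4.x (3 in 3.x)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        # only trail the largest object of this colour, ignoring stray detections
        largest = max(contours, key=cv2.contourArea)
        stamp = np.zeros_like(mask)
        cv2.drawContours(stamp, [largest], -1, TRAIL_STEP, -1)
        # saturating subtract: crossing the same spot repeatedly keeps darkening it
        trail = cv2.subtract(trail, stamp)
    # blend the greyscale trail over the live frame so it reads as translucent
    output = cv2.addWeighted(frame, 0.7, cv2.cvtColor(trail, cv2.COLOR_GRAY2BGR), 0.3, 0)
    cv2.imshow("trail", output)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ret, frame = cap.read()
cap.release()
cv2.destroyAllWindows()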
I want to run normal software actions from object detection. I can detect an object or colour with OpenCV, but after that I cannot take any action; for example, I want to push a button whenever the camera detects any colour or object.
With this code I can detect any yellow object, but I can't take any action after that.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # yellow color
    low_yellow = np.array([20, 60, 200])
    high_yellow = np.array([60, 255, 255])
    yellow_mask = cv2.inRange(hsv_frame, low_yellow, high_yellow)
    yellow = cv2.bitwise_and(frame, frame, mask=yellow_mask)
    cv2.imshow("OUR FRAME", frame)
    cv2.imshow("YELLOW FRAME", yellow)
    key = cv2.waitKey(1)
    if key == 27:
        break
It looks to me like you are not really detecting objects yet; you are taking whatever image the camera sees and applying a yellow filter to it. Even if there were no yellow object, the screen would still be displayed.
To get to what you are looking for, I suggest looking into "blob detection". This is probably the simplest form of object detection. Once you can detect blobs, I recommend setting a threshold for size and deciding whether an object is worth reacting to based on that.
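For example, here is a rough sketch of that idea using contours as a simple stand-in for blob detection (MIN_AREA and take_action() are made-up names for illustration):

import cv2
import numpy as np

MIN_AREA = 500  # made-up size threshold; tune for your camera and object

def take_action():
    # placeholder for the real reaction, e.g. pressing a button
    print("yellow object detected")

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    yellow_mask = cv2.inRange(hsv_frame, np.array([20, 60, 200]), np.array([60, 255, 255]))
    # findContours returns 2 values in OpenCV 4.x (3 in 3.x)
    contours, _ = cv2.findContours(yellow_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # react only when some yellow blob is big enough to be a real object
    if contours and cv2.contourArea(max(contours, key=cv2.contourArea)) > MIN_AREA:
        take_action()
    cv2.imshow("OUR FRAME", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
cv2.destroyAllWindows()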
I'm trying to use MOG background subtraction, but the "history" parameter doesn't seem to have any effect.
OpenCV 2.4.13
Python (2.7.6)
Observation: The program appears to use the very first frame it captures for all future background subtractions.
Expectation: The background should slowly evolve based on the "history" parameter, so that if the camera angle changes, or if a person/object leaves the field-of-view, the "background" image will change accordingly.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)
while True:
    ret, img1 = cap.read()
    if ret:
        cv2.imshow("original", img1)
        img_mog = mog.apply(img1)
        cv2.imshow("mog", img_mog)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
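One variation I have been experimenting with (I'm not sure it is the right knob) is passing an explicit learningRate to apply(), which the 2.4 API accepts as an optional argument; 0 freezes the model, and values in (0, 1] control how quickly new frames are absorbed:

while True:
    ret, img1 = cap.read()
    if not ret:
        break
    # 0.01 is an arbitrary trial value; higher absorbs changes faster
    img_mog = mog.apply(img1, learningRate=0.01)
    cv2.imshow("mog", img_mog)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break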
Thank you in advance for your help!
I am taking input from a video, and I want to take the median of the first 5 frames so that I can use it as the background image for motion detection using frame differencing.
Also, I want to use a time condition: say, if motion is not detected, calculate the background again; otherwise wait t seconds. I am new to OpenCV and I don't know how to do it. Please help.
Also, I want to capture my video at 1 fps, but this does not work. Here is the code I have:
import cv2

BLUR_SIZE = 3
NOISE_CUTOFF = 12

cam = cv2.VideoCapture('gh10fps.mp4')
cam.set(3, 640)   # property 3 = frame width
cam.set(4, 480)   # property 4 = frame height
cam.set(cv2.cv.CV_CAP_PROP_FPS, 1)
fps = cam.get(cv2.cv.CV_CAP_PROP_FPS)
print "Current FPS: ", fps
If you really want the median of the first 5 frames, then the following should do what you are looking for:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
frames = []
for _ in range(5):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)
median = np.median(frames, axis=0).astype(dtype=np.uint8)
cv2.imshow('frame', median)
cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()
Note, this is just taking the source from a webcam as an example.
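For the time condition mentioned in the question, here is one possible sketch: wrap the median step in a helper (called capture_background() here, a made-up name) and refresh the background once no motion has been seen for t seconds (STILL_SECONDS below is an illustrative value):

import time
import cv2
import numpy as np

BLUR_SIZE = 3
NOISE_CUTOFF = 12
STILL_SECONDS = 5  # illustrative value for "t"

def capture_background(cap, n=5):
    # median of the next n frames, as in the snippet above
    frames = []
    for _ in range(n):
        ret, frame = cap.read()
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    median = np.median(frames, axis=0).astype(np.uint8)
    return cv2.GaussianBlur(median, (BLUR_SIZE, BLUR_SIZE), 0)

cap = cv2.VideoCapture(0)
background = capture_background(cap)
last_motion = time.time()
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (BLUR_SIZE, BLUR_SIZE), 0)
    # frame differencing against the stored background
    delta = cv2.absdiff(gray, background)
    moving = cv2.threshold(delta, NOISE_CUTOFF, 255, cv2.THRESH_BINARY)[1].any()
    if moving:
        last_motion = time.time()
    elif time.time() - last_motion > STILL_SECONDS:
        # scene has been still for a while: refresh the background
        background = capture_background(cap)
        last_motion = time.time()
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()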