I'm new here.
I'm having some problems segmenting hands from video with OpenCV in Python. I looked over every topic I could find, but I can't see the mistake in the code I wrote.
Please note that I have worked with Python only once, and only with the basics - this is my first OpenCV application ever.
My problem is that the output video is black with some white squares. I assume I'm using the wrong threshold values. I took the colour from the middle of the hand, converted it to HSV, and set the lower bound to [99,100,100] and the upper bound to [119,255,255].
Can anyone help me?
Here is the code, if anyone would like to look at it:
import numpy as np
import cv2

cap = cv2.VideoCapture('vid2.avi')
# cv2.cv.CV_FOURCC(*'MP4V') in OpenCV 2; cv2.VideoWriter_fourcc in OpenCV 3+
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
out = cv2.VideoWriter('output.avi', fourcc, 29, (1920, 1080))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # convert to HSV and smooth before thresholding
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blur = cv2.GaussianBlur(hsv, (15, 15), 0)
    # keep only pixels whose HSV values fall inside [lower, upper]
    lower = np.array([99, 100, 100])
    upper = np.array([119, 255, 255])
    mask = cv2.inRange(blur, lower, upper)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    out.write(res)

cap.release()
out.release()
cv2.destroyAllWindows()
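As a sanity check on those bounds, it can help to print the actual HSV value at a known hand pixel and build the range around it. A minimal sketch, assuming the hand covers the centre of the first frame (the +/-10 hue margin is an illustrative guess):

import numpy as np
import cv2

cap = cv2.VideoCapture('vid2.avi')
ret, frame = cap.read()
if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    rows, cols = hsv.shape[:2]
    centre = hsv[rows // 2, cols // 2]   # HSV triple at the frame centre
    print('HSV at centre:', centre)
    # build a band around the sampled hue, clamped to OpenCV's 0-179 hue range
    lower = np.array([max(int(centre[0]) - 10, 0), 50, 50])
    upper = np.array([min(int(centre[0]) + 10, 179), 255, 255])
    print('suggested bounds:', lower, upper)
cap.release()

Note that OpenCV hue runs from 0 to 179, so a hue band of 99-119 sits around blue; skin tones usually have a much lower hue, which would explain a mostly black output.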
I made a program that applies a mask over an object, as described in this StackOverflow question. I did so with colour thresholding, making the mask select only the colour range of human skin (I don't know if it works for white people, as I am not white; it works well for me). The problem is that when I run it, some greys (a grey area on the wall, or a shadow) are also picked up by the mask, and it gets applied there too.
I wanted to know whether there is a way to remove the unnecessary bits in the background, and/or whether I could solve this with object detection. P.S. I tried createBackgroundSubtractorGMG/MOG/etc., but the result came out very weird and much worse.
Here is my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
image = cv2.imread('yesh1.jpg')  # loaded but not used below
bg = cv2.imread('kruger.jpg')
# converting bg to RGB makes its colours look swapped under cv2.imshow,
# which expects BGR; this is the likely cause of the colour change in the EDIT
bg = cv2.cvtColor(bg, cv2.COLOR_BGR2RGB)

kernel1 = np.ones((1, 1), np.uint8)    # a 1x1 kernel has no morphological effect
kernel2 = np.ones((10, 10), np.uint8)  # defined but never used below

while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lowerBound = np.array([1, 1, 1])
    upperBound = np.array([140, 255, 140])
    mask = cv2.inRange(hsv, lowerBound, upperBound)
    blur = cv2.GaussianBlur(mask, (5, 5), 0)
    ret1, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel1)
    # make the mask 3-channel so np.where can broadcast over colour images
    # (cv2.IMREAD_COLOR is an imread flag, not a colour-conversion code)
    contourthickness = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    crop_bg = bg[0:480, 0:640]  # assumes a 640x480 webcam frame
    final = frame + res
    final = np.where(contourthickness != 0, crop_bg, final)
    cv2.imshow('frame', frame)
    cv2.imshow('Final', final)
    key = cv2.waitKey(1) & 0xFF
    if key == 27:  # Esc to exit
        break

cap.release()
cv2.destroyAllWindows()
EDIT:
Following @fmw42's comment, I am adding the original image as well as a screenshot of how the different frames look. The masked image also changes colour; something to fix that would also be helpful.
@Jeremi: Your code works 100%. Use a white wall for the background, and avoid the door (mine is not white, it is cream) and the shadow around the edges, to prevent noise. A white bed sheet or white walls work as well. I am using a Raspberry Pi 4B/8GB with a 4K monitor, and I can't get the actual size of the window.
Here is the output:
What you see in my output: I placed my hand behind a white sheet, closer to the camera. I do not have a white wall in my room; my room is greener, which is why you see the logo in the background. By the way, I can move my hand no problem.
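On the grey pickup specifically, one thing worth trying is an opening with a kernel large enough to actually erode small speckle (the 1x1 kernel in the code above is a no-op). A minimal sketch on a single test frame, reusing the bounds from the question:

import cv2
import numpy as np

frame = cv2.imread('yesh1.jpg')  # the test image from the question
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([1, 1, 1]), np.array([140, 255, 140]))

# an opening with a kernel larger than the speckle removes isolated grey blobs;
# a closing afterwards fills small holes inside the skin region
kernel = np.ones((10, 10), np.uint8)
cleaned = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)

cv2.imshow('cleaned mask', cleaned)
cv2.waitKey(0)
cv2.destroyAllWindows()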
I'm trying to segment the moving propeller in this video. My approach is to detect all black and moving pixels, to separate the propeller from the rest.
Here is what I have tried so far:
import numpy as np
import cv2

x, y, h, w = 350, 100, 420, 500  # cropping values
cap = cv2.VideoCapture('Video Path')

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = frame[y:y+h, x:x+w]  # crop the frame
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # select dark pixels; note OpenCV hue only goes up to 179
    lower_black = np.array([0, 0, 0])
    upper_black = np.array([360, 255, 90])
    mask = cv2.inRange(hsv, lower_black, upper_black)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('Original', frame)
    cv2.imshow('Propeller Segmentation', mask)
    k = cv2.waitKey(30) & 0xff  # press Esc to exit
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
Screenshot from the video
Result of the segmentation
With the function cv.createBackgroundSubtractorMOG2()
I think you should have a look at background subtraction. It should be the right approach for your problem.
OpenCV provides a good tutorial on this: Link
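A minimal sketch of that approach, reusing the video path from the question (the history and varThreshold values are illustrative, not tuned):

import cv2

cap = cv2.VideoCapture('Video Path')
# detectShadows=False keeps the mask strictly binary instead of
# marking shadow pixels in grey
backsub = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16,
                                             detectShadows=False)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = backsub.apply(frame)  # moving pixels become white
    moving = cv2.bitwise_and(frame, frame, mask=fgmask)
    cv2.imshow('Moving pixels', moving)
    if cv2.waitKey(30) & 0xff == 27:  # Esc to exit
        break

cap.release()
cv2.destroyAllWindows()

Combining fgmask with the black-pixel mask from the question's inRange call (e.g. cv2.bitwise_and(fgmask, mask)) would then keep only pixels that are both dark and moving, which matches the approach described in the question.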
I'm trying to read a video into Python (whether it is live or pre-recorded is irrelevant) and then have each frame processed with a thresholding algorithm, converting the video to a two-colour format.
With a simple thresholding method, I get this error:
cv2.imshow('newFrame',newFrame)
TypeError: Expected Ptr<cv::UMat> for argument 'mat'
Thresholding images seems simple, but I can't seem to convert the data produced by the thresholding method into a format that anything further down the line recognises.
I have included the full code below.
import numpy as np
import cv2

cap = cv2.VideoCapture('Loop_1.mov')

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        threshed = cv2.threshold(frame, 50, 255, cv2.THRESH_BINARY)
        newFrame = np.array(threshed)
        cv2.imshow('newFrame', newFrame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
#out.release()
cv2.destroyAllWindows()
The threshold function returns two values, so you need to unpack both:
retval, threshold = cv2.threshold(frame, 50, 255, cv2.THRESH_BINARY)
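Put into the loop from the question, that would look something like the sketch below. Converting to greyscale first is optional, but it gives a true two-colour output rather than thresholding each BGR channel separately (a sketch, not the only possible fix):

import cv2

cap = cv2.VideoCapture('Loop_1.mov')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # threshold a single channel to get a genuine two-colour image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    retval, newFrame = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)
    cv2.imshow('newFrame', newFrame)  # newFrame is now a plain uint8 image
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()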
I'm trying to use OpenCV to analyze a 10-minute video for green in the top right corner and print out a timestamp every time green appears.
I've used meanshift and tried splitting the video into frames, but a 10-minute video would require a lot of frames, especially since I'm trying to be accurate.
I was thinking of masking over the green, but then there is no obvious way to print the timestamps... any tips on what packages or tools I should use?
Here's what I have. I've tried basically everything in OpenCV, but I've never used it before, so I'm lost...
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')

# Process the video frame by frame
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Convert from BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Range of green colour in HSV
    lower_green = np.array([121, 240, 9])    # darker green
    upper_green = np.array([254, 255, 255])  # white green
    # Create a mask so that only the green-coloured objects are highlighted
    mask = cv2.inRange(hsv, lower_green, upper_green)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:  # Esc to exit
        break

# Destroy all of the HighGUI windows and release the capture
cv2.destroyAllWindows()
cap.release()
Splitting the frames gives me thousands of images, and just reading the video doesn't give me timestamps, which is what I want it to print out. Thanks!
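On the timestamp question specifically: when reading from a file, cv2.VideoCapture reports the current position in milliseconds via cap.get(cv2.CAP_PROP_POS_MSEC), so you can print it whenever the green mask in the corner region is non-empty. A minimal sketch (the corner size and pixel-count threshold are illustrative guesses):

import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')
lower_green = np.array([121, 240, 9])
upper_green = np.array([254, 255, 255])

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # look only at the top-right corner (illustrative 100x100 region)
    corner = frame[0:100, -100:]
    hsv = cv2.cvtColor(corner, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_green, upper_green)
    if cv2.countNonZero(mask) > 50:  # illustrative "enough green" threshold
        msec = cap.get(cv2.CAP_PROP_POS_MSEC)  # position in the video, in ms
        print('green at %.2f s' % (msec / 1000.0))

cap.release()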
I'm new to OpenCV and I'm trying to extract myself from the background in a video from my webcam. So far I'm able to remove the background, but the foreground (the video of me) still appears in white. I want it to look exactly as it did in the original video. Below is my current code:
import cv2
cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()
while True:
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)  # white where the foreground is
    cv2.imshow('frame', frame)
    cv2.imshow('fgmask', fgmask)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to exit
        break
cap.release()
Any hint or some sense of direction will be much appreciated.
fgmask is a white mask of the foreground. You must mask your original frame with it. The frame is an RGB (or possibly BGR) image, while the mask is single-channel (you can check by printing frame.shape and fgmask.shape), so you must convert the mask to RGB and then apply it:
mask_rgb = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2RGB)
out_frame = cv2.bitwise_and(frame, mask_rgb)
cv2.imshow("FG", out_frame)