I am having some difficulty combining matplotlib and OpenCV. I have the following code, which shows a video stream and pauses when the space bar is hit. I then want to use matplotlib to draw some annotations on top of the paused image (the user can then hit space to unpause and repeat the process). However, whenever I add any matplotlib functions, a separate figure window opens, and I would ideally like to have my annotations directly on the image. I also looked at the drawing functions available in OpenCV, but there is not enough functionality there for me (I am looking to fit a complicated curve to the paused image). Any help would be greatly appreciated!
import numpy as np
import cv2
from matplotlib import pyplot as plt

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, im = cap.read()

    # Our operations on the frame come here
    im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    im_g = cv2.GaussianBlur(im_gray, (5, 5), 0)
    (thresh, im_bw) = cv2.threshold(im_g, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    cv2.imshow('Window', im)

    k = cv2.waitKey(1)
    if k > 0:
        print(k)
    if k == 32:
        ##########
        # Plot on top of image using matplotlib
        ##########
        cv2.waitKey(0)
        continue
    elif k == 27:
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
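One common workaround (my own suggestion, not something from the post above) is to render the matplotlib figure off-screen with the Agg backend and push the resulting pixel array back into the same OpenCV window, so no separate figure window ever opens. A minimal sketch, where the plotted curve is just placeholder data:

import numpy as np
import cv2
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg

def draw_annotations(im_bgr):
    """Render matplotlib annotations over a BGR frame and return the
    result as a BGR array that cv2.imshow can display directly."""
    h, w = im_bgr.shape[:2]
    fig = Figure(figsize=(w / 100.0, h / 100.0), dpi=100)
    canvas = FigureCanvasAgg(fig)
    ax = fig.add_axes([0, 0, 1, 1])   # full-bleed axes, no margins
    ax.axis('off')
    ax.imshow(cv2.cvtColor(im_bgr, cv2.COLOR_BGR2RGB))
    ax.plot([100, 200, 300], [50, 150, 100], 'r-')  # placeholder curve
    canvas.draw()
    rgba = np.asarray(canvas.buffer_rgba())          # (h, w, 4) uint8
    return cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGR)

The space-bar branch could then call cv2.imshow('Window', draw_annotations(im)) just before the blocking cv2.waitKey(0), so the annotations appear in the existing window while paused.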
I am using OpenCV and MediaPipe to create an application where multiple faces in the live feed are not desired. Hence I need a solution that either destroys the video feed and displays an image until there is only one face in the frame (and then displays the feed again, of course), or overlays an image on the entire video feed display window (hiding the feed). Here's the code so far:
import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('images.png')
mul = cv2.resize(mul, (640, 480))

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_detection.process(imgRGB)
    for count, detection in enumerate(results.detections):
        continue
    count += 1
    if count > 1:
        cv2.destroyAllWindows()
        cv2.imshow("output", mul)
        time.sleep(10)
        continue
    cv2.imshow("output", frame)

cap.release()
cv2.destroyAllWindows()
I'm trying to destroy the feed and display the image instead. The delay introduced with time.sleep(10) is there because without it the windows switch between the video feed and the image at a very high rate, making it hard to see what's happening.
The problem is that the image is not displayed; the window appears blank grey. After 10 seconds the video feed comes back up, and it takes very long to display the image again, even though the multiple faces never leave the frame.
Thank you
You observe grey frames because you are destroying the window every time the loop starts over. You also get stuck at the if count > 1: statement, since count is increased on every frame regardless of any condition and is never reinitialized (so count stays above 1 after two faces have been detected, even though the faces were in different frames). Here is my solution to the problem; I hope it helps.
import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('image.jpg')
mul = cv2.resize(mul, (640, 480))
count = 0

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    cv2.imshow("Original", frame)

    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_detection.process(imgRGB)
    if results.detections:
        for count, detection in enumerate(results.detections):
            count += 1
        print("Number of Faces: ", count)

    if count > 1:
        cv2.imshow('StackOverflow', mul)
    else:
        cv2.imshow('StackOverflow', frame)
        count = 0

    if cv2.waitKey(5) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
Here is the result with one face in the frame:
And the result with multiple faces:
I made a program that applies a mask over an object, as described in this StackOverflow question. I did so using colour thresholding, making the mask select only the colour range of human skin (I don't know if it works for white people, as I am not white; it works well for me). The problem is that when I run it, some greys (a grey area on the wall, or a shadow) are also picked up by the mask and it is applied there.
I wanted to know whether there is a way to remove the unnecessary bits in the background, and/or whether I could solve this using object detection. P.S. I tried createBackgroundSubtractorGMG/MOG/etc., but that came out very weird and much worse.
Here is my code:
import cv2
from cv2 import bitwise_and
from cv2 import COLOR_HSV2BGR
import numpy as np
from matplotlib import pyplot as plt

cap = cv2.VideoCapture(0)
image = cv2.imread('yesh1.jpg')
bg = cv2.imread('kruger.jpg')
bg = cv2.cvtColor(bg, cv2.COLOR_BGR2RGB)
kernel1 = np.ones((1, 1), np.uint8)
kernel2 = np.ones((10, 10), np.uint8)

while True:
    ret, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Threshold on the assumed skin-colour range
    lowerBound = np.array([1, 1, 1])
    upperBound = np.array([140, 255, 140])
    mask = cv2.inRange(hsv, lowerBound, upperBound)

    # Smooth the mask, re-threshold with Otsu, and open to remove specks
    blur = cv2.GaussianBlur(mask, (5, 5), 0)
    ret1, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel1)

    contourthickness = cv2.cvtColor(mask, cv2.IMREAD_COLOR)
    res = bitwise_and(frame, frame, mask=mask)

    # Paste the background image wherever the mask fired
    crop_bg = bg[0:480, 0:640]
    final = frame + res
    final = np.where(contourthickness != 0, crop_bg, final)

    cv2.imshow('frame', frame)
    cv2.imshow('Final', final)

    key = cv2.waitKey(1) & 0xFF
    if key == 27:
        break

cv2.destroyAllWindows()
EDIT:
Following @fmw42's comment, I am adding the original image as well as a screenshot of how the different frames look. The masked image also changes colour; something to fix that would also be helpful.
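Not part of the original post, but one cheap cleanup that often helps with stray grey blobs is to keep only the largest connected component of the mask, on the assumption that the person is the biggest skin-coloured region in the frame. A sketch:

import numpy as np
import cv2

def keep_largest_component(mask):
    """Zero out every blob in a binary mask except the largest one."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:          # label 0 is the background; nothing else was found
        return mask
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)

Calling mask = keep_largest_component(mask) right after the morphology step would be the natural place to try it.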
@Jeremi, your code works 100%. I used a white wall for the background. Avoid the door (it is not white, it is cream) and the shadow around its edge, to prevent noise; a white bed sheet or white walls work well. I am using a Raspberry Pi 4B/8GB with a 4K monitor, and I can't get the actual size of the window.
Here is the output:
What you see in my output: I placed my hand behind a white sheet, closer to the camera. I do not have a white wall in my room; my room is greener, which is why you see the logo in the background. By the way, I can move my hand with no problem.
I do a "Background Subtraction" with a VideoStream. Then I want to check inside the interior of a specified polygon, if there are white dots.
I thought about using https://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/point_polygon_test/point_polygon_test.html but I don't know how to do it, because the white points are existing after applying the filter. The original stream contains also white points which I also dont't want to count.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture()
cap.open("rtsp://LOGINNAME:PASSWORD#192.168.178.42:554")
#cap.open("C:\\Users\\001\\Desktop\\cam-20191025-220508-220530.mp4")

fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()

while True:
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)

    polygonInnenAutoErkennen_cnt = np.array([(24, 719), (714, 414), (1005, 429), (1084, 719)])
    cv2.drawContours(fgmask, [polygonInnenAutoErkennen_cnt], -1, (255, 128, 60))

    # How can I check here?

    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # exit on ESC
        break

cap.release()
cv2.destroyAllWindows()
The simplest way is to use a mask image: draw your polygon, filled, onto a black binary image and use that as a mask for the white dots. A per-pixel multiplication or a logical AND then keeps only the dots that lie inside the polygon.
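A minimal sketch of that idea, reusing fgmask and polygonInnenAutoErkennen_cnt from the question above (the "any non-zero count" check is an assumption; a higher threshold may suit a noisy mask better):

import numpy as np
import cv2

# Rasterize the polygon into a binary mask (once, outside the loop);
# the mask size is taken from the foreground mask itself.
poly_mask = np.zeros(fgmask.shape[:2], dtype=np.uint8)
cv2.fillPoly(poly_mask, [polygonInnenAutoErkennen_cnt], 255)

# Inside the loop: keep only the foreground pixels inside the polygon
inside = cv2.bitwise_and(fgmask, poly_mask)   # logical AND via mask
white_dots = cv2.countNonZero(inside)
if white_dots > 0:
    print("white pixels inside polygon:", white_dots)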
I'm trying to use OpenCV to analyze a 10-minute video for green in the top right corner and print out a time stamp every time green appears.
I've used meanshift and tried splitting the video into frames, but a 10-minute video requires a lot of frames, especially since I'm trying to be accurate.
I was thinking of masking over the green, but there isn't an obvious way to print the time stamps from that. Any tips on what packages or tools I should use?
Here's what I have. I've tried basically everything in OpenCV, but I've never used it before, so I'm lost...
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')

# This drives the program into an infinite loop.
while True:
    # Captures the live stream frame-by-frame
    _, frame = cap.read()

    # Converts images from BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Defining range of green color in HSV
    lower_red = np.array([121, 240, 9])    # darker green
    upper_red = np.array([254, 255, 255])  # white green

    # This creates a mask so that only the green coloured objects are highlighted
    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)

    k = cv2.waitKey(5) & 0xFF
    # if k == 27: break

# Destroys all of the HighGUI windows.
cv2.destroyAllWindows()

# release the captured frame
cap.release()
Splitting the video into frames gives me thousands of images, and just reading the video doesn't give me time stamps, which is what I want printed out. Thanks!
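One possible approach (a sketch only; the HSV range and the "top-right quarter" crop are assumptions to tune) is to skip meanshift entirely, scan each frame's corner for green, and ask the capture itself for the current position via cv2.CAP_PROP_POS_MSEC, so no frames ever need to be written to disk:

import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')
lower_green = np.array([40, 40, 40])     # assumed HSV range for green
upper_green = np.array([80, 255, 255])

while True:
    ret, frame = cap.read()
    if not ret:                          # stop at the end of the file
        break
    h, w = frame.shape[:2]
    corner = frame[0:h // 4, 3 * w // 4:w]   # top-right quarter of the frame
    hsv = cv2.cvtColor(corner, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_green, upper_green)
    if cv2.countNonZero(mask) > 0:
        ms = cap.get(cv2.CAP_PROP_POS_MSEC)  # timestamp of the current frame
        print("green at %.2f s" % (ms / 1000.0))

cap.release()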
I have been searching through the OpenCV docs, but I was unable to find the answer to my problem.
I want to remove all OpenCV prebuilt drawings from the image.
Here is an example, produced with the following generic code:
import numpy as np
import cv2

# Create a black image
img = np.zeros((512, 512, 3), np.uint8)

# Draw a diagonal blue line with thickness of 5 px
img = cv2.line(img, (0, 0), (511, 511), (255, 0, 0), 5)

while True:
    cv2.imshow('generic frame', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I want the output to look like the following image: the background stays the same, but without the OpenCV drawings.
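OpenCV's drawing functions write directly into the pixel array, so once the line is drawn the original values under it are gone; as far as I know they cannot be recovered from the drawn image alone. The usual workaround, sketched below (the window names are made up), is to keep the clean frame and draw only on a copy:

import numpy as np
import cv2

clean = np.zeros((512, 512, 3), np.uint8)  # the image to preserve
annotated = clean.copy()                   # all drawing happens on the copy
cv2.line(annotated, (0, 0), (511, 511), (255, 0, 0), 5)

while True:
    cv2.imshow('annotated frame', annotated)
    cv2.imshow('clean frame', clean)       # still free of any drawings
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()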