I am running this class method in a loop and plotting the returned images at the same time. However, with more than 2 cameras this does not seem to work.
I get this error continuously:
CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -2147483638
def grab_frame(self):
    image = []
    timestamp = []
    for i in range(len(self.devs)):
        cap = self.cams[i]
        ret, frame = cap.read()
        if ret:
            timestamp.append(time.time() - self.def_time)
            image.append(frame)  # append the whole frame; extend() would add its rows one by one
    return timestamp, image
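One thing worth trying (an assumption about the setup, not a confirmed fix): on Windows, opening the cameras explicitly with the DirectShow backend instead of the default MSMF backend often avoids this grab error, and requesting a lower resolution helps when several cameras share one USB controller. A minimal sketch, assuming the cameras sit on device indices 0..2:

import cv2

# assumption: three cameras on device indices 0, 1, 2
cams = []
for i in range(3):
    cap = cv2.VideoCapture(i, cv2.CAP_DSHOW)  # DirectShow instead of MSMF
    # optionally request a smaller frame size to reduce USB bandwidth per camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    if cap.isOpened():
        cams.append(cap)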
I am trying to use mediapipe to track hands. I am using Python 3.7.9 on Windows 10; my code is below:
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)

while True:
    success, img = cap.read()
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = mp_hands.Hands.process(imgRGB)
    print(results)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
I'm getting this error:
Traceback (most recent call last):
File "C:/Users/Tomáš/PycharmProjects/pythonProject/hand_detect.py", line 11, in <module>
results = mp_hands.Hands.process(imgRGB)
TypeError: process() missing 1 required positional argument: 'image'
[ WARN:1] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
The error says that I need to pass one more argument, 'self', before I pass the argument 'image'. I've browsed a lot, and none of the related code passes a first argument to the process() function. Could anyone help me solve this error?
The problem is that you do not create an object of mp_hands.Hands before calling process(). The following code solves it and prints some results. By the way, this is well documented in the documentation link I commented before:
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # i had problems before reading webcam feeds, so i added cv2.CAP_DSHOW here

while True:
    success, img = cap.read()
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # you have to create an object of mp_hands.Hands to get results
    # alternatively you could do: results = mp_hands.Hands().process(imgRGB)
    with mp_hands.Hands() as hands:
        results = hands.process(imgRGB)

    # continue loop if no results were found
    if not results.multi_hand_landmarks:
        continue

    # print some results
    for hand_landmarks in results.multi_hand_landmarks:
        print(
            f'Index finger tip coordinates: (',
            f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x}, '
            f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y})'
        )

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
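To see why the original call raised that TypeError: process() is an instance method, so calling it on the class itself makes Python treat imgRGB as self and leaves image missing. Here is the same mistake reproduced in plain Python (a hypothetical Greeter class, purely for illustration):

class Greeter:
    def greet(self, name):
        return 'hello ' + name

# Greeter.greet('world')          # TypeError: greet() missing 1 required positional argument: 'name'
print(Greeter().greet('world'))   # works: the instance supplies `self`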
Edit:
Note that creating mp_hands.Hands() inside the loop re-initializes the model on every frame, which is slow; the official example creates it once, outside the loop. This is more or less the same code from here:
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_hands = mp.solutions.hands

# initialize webcam
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

with mp_hands.Hands(model_complexity=0,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            print("Ignoring empty camera frame.")
            # If loading a video, use 'break' instead of 'continue'.
            continue

        # To improve performance, optionally mark the image as not writeable to
        # pass by reference.
        image.flags.writeable = False
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        results = hands.process(image)

        # Draw the hand annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(
                    image,
                    hand_landmarks,
                    mp_hands.HAND_CONNECTIONS,
                    mp_drawing_styles.get_default_hand_landmarks_style(),
                    mp_drawing_styles.get_default_hand_connections_style())

        # Flip the image horizontally for a selfie-view display.
        cv2.imshow('MediaPipe Hands', cv2.flip(image, 1))
        if cv2.waitKey(5) & 0xFF == 27:
            break

cap.release()
So, basically I'm writing a program in Google Colab that will detect faces through the webcam using Python and OpenCV.
"I have Ubuntu 19.10, if this helps"
import cv2

faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
At this point, an assertion error appears:
Traceback (most recent call last)
<ipython-input-94-ca2ba51b9064> in <module>()
7 ret, frame = video_capture.read()
8
----> 9 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Nothing else is using the webcam while I'm running this code.
!_src.empty() means you have an empty frame.
When cv2 can't get a frame from a camera/file/stream, it doesn't raise an error; it sets frame to None and ret to False, and you have to check one of these values:
if frame is not None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... other code ...
else:
    print("empty frame")
    exit(1)

or

if ret:  # if ret is True:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... other code ...
else:
    print("empty frame")
    exit(1)

or

if not ret:
    print("empty frame")
    exit(1)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# ... other code ...

or

if frame is None:
    print("empty frame")
    exit(1)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# ... other code ...
BTW: you can't use the shorter if frame:, because when a frame is read from the camera, frame is a numpy.array(), and its truth value is ambiguous - numpy raises an error asking you to use .all() or .any() - but .all() or .any() fail when frame is None.
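A quick demonstration of that behavior (a minimal standalone sketch, assuming only numpy is installed):

import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame

if frame is not None:  # identity check: works for both arrays and None
    print("got a frame")

try:
    if frame:  # truth value of a multi-element array is ambiguous
        pass
except ValueError as e:
    print(e)  # "The truth value of an array with more than one element is ambiguous..."

frame = None
# frame.any()  # would raise AttributeError: 'NoneType' object has no attribute 'any'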
BTW: Sometimes cv2 has problems finding the haarcascade files. There is a special variable with the path to the folder with the .xml files - cv2.data.haarcascades - and you may need:

import os

faceCascade = cv2.CascadeClassifier(os.path.join(cv2.data.haarcascades, "haarcascade_frontalface_default.xml"))
EDIT: (2022.05.10)
The main problem may be that Google Colab runs code on a server, and the server doesn't have access to your local webcam. Only the web browser has access to the local webcam, so you have to use JavaScript to access the webcam, take an image, and send it to the server.
Google Colab has an example of how to do this for a single screenshot. See Camera Capture.
I wrote code that also simulates VideoCapture() and cap.read() so they work with video from the local webcam:
https://colab.research.google.com/drive/1a2seyb864Aqpu13nBjGRJK0AIU7JOdJa?usp=sharing
#
# based on: https://colab.research.google.com/notebooks/snippets/advanced_outputs.ipynb#scrollTo=2viqYx97hPMi
#
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode, b64encode
import numpy as np
import cv2

def init_camera():
    """Create objects and functions in HTML/JavaScript to access the local web camera."""
    js = Javascript('''
        // global variables to use in both functions
        var div = null;
        var video = null;   // <video> to display stream from local webcam
        var stream = null;  // stream from local webcam
        var canvas = null;  // <canvas> for single frame from <video> and convert frame to JPG
        var img = null;     // <img> to display JPG after processing with `cv2`

        async function initCamera() {
            // place for video (and eventually buttons)
            div = document.createElement('div');
            document.body.appendChild(div);

            // <video> to display video
            video = document.createElement('video');
            video.style.display = 'block';
            div.appendChild(video);

            // get webcam stream and assign to <video>
            stream = await navigator.mediaDevices.getUserMedia({video: true});
            video.srcObject = stream;

            // start playing stream from webcam in <video>
            await video.play();

            // Resize the output to fit the video element.
            google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);

            // <canvas> for frame from <video>
            canvas = document.createElement('canvas');
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            //div.appendChild(canvas); // there is no need to display it to get the image (but you can display it for testing)

            // <img> for image after processing with `cv2`
            img = document.createElement('img');
            img.width = video.videoWidth;
            img.height = video.videoHeight;
            div.appendChild(img);
        }

        async function takeImage(quality) {
            // draw frame from <video> on <canvas>
            canvas.getContext('2d').drawImage(video, 0, 0);

            // stop webcam stream
            //stream.getVideoTracks()[0].stop();

            // get data from <canvas> as JPG image encoded base64 with header "data:image/jpg;base64,"
            return canvas.toDataURL('image/jpeg', quality);
            //return canvas.toDataURL('image/png', quality);
        }

        async function showImage(image) {
            // it needs string "data:image/jpg;base64,JPG-DATA-ENCODED-BASE64"
            // it will replace previous image in `<img src="">`
            img.src = image;
            // TODO: create <img> if it doesn't exist
            // TODO: use `id` to use a different `<img>` for each image - like `name` in `cv2.imshow(name, image)`
        }
    ''')
    display(js)
    eval_js('initCamera()')

def take_frame(quality=0.8):
    """Get a frame from the web camera."""
    data = eval_js('takeImage({})'.format(quality))  # run JavaScript code to get image (JPG as base64 string) from <canvas>
    header, data = data.split(',')                   # split header ("data:image/jpg;base64,") and base64 data (JPG)
    data = b64decode(data)                           # decode base64
    data = np.frombuffer(data, dtype=np.uint8)       # create numpy array with JPG data
    img = cv2.imdecode(data, cv2.IMREAD_UNCHANGED)   # uncompress JPG data to array of pixels
    return img

def show_frame(img, quality=0.8):
    """Put frame as <img src="data:image/jpg;base64,....">."""
    ret, data = cv2.imencode('.jpg', img)        # compress array of pixels to JPG data
    data = b64encode(data)                       # encode base64
    data = data.decode()                         # convert bytes to string
    data = 'data:image/jpg;base64,' + data       # join header ("data:image/jpg;base64,") and base64 data (JPG)
    eval_js('showImage("{}")'.format(data))      # run JavaScript code to put image (JPG as base64 string) in <img>
                                                 # the argument in `showImage` needs `" "`

class BrowserVideoCapture:
    def __init__(self, src=None):
        init_camera()

    def read(self):
        return True, take_frame()

cap = BrowserVideoCapture(0)

while True:
    ret, frame = cap.read()
    if ret:
        show_frame(frame)
Oh god... I just realized that I was working in the Google Colab virtual environment; that's why it can't connect to my local camera.
I have attached the source code for frame differencing, which stores the differenced frames in a specified place, but I am getting an indentation error. (I was asked to post this problem as a question on Stack Overflow and to upload the code too, as I am restricted for a particular period.)
filename.py
import cv2
import os
import glob

def extractFrames(pathIn, pathOut):
    os.mkdir(pathOut)
    cap = cv2.VideoCapture(pathIn)
    count = 0

    while (cap.isOpened()):
        # Capture frame-by-frame
        ret, frame = cap.read()
        current_frame_gray = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
        previous_frame_gray = cv2.cvtcolor(previous_frame, cv2.COLOR_BGR2GRAY)
        frame_diff = cv2.absdiff(current_frame_gray, previous_frame_gray)
        if ret == True:
            print('Read %d frame: ' % count, ret)
            cv2.imwrite(os.path.join(pathOut, "frame{:d}.jpg".format(count)), frame_diff)  # save frame as JPEG file
            count += 1
        else:
            break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()

def main():
    extractFrames('C:/Users/yaazmoha/Desktop/BE PROJECT/INPUT/Tiger in field(1080P_HD).mp4', 'fd3')

if __name__ == "__main__":
    main()
Fixed your code. You had some indentation errors. Since Python does not use braces like C++, it relies on proper indentation to separate code blocks.
import cv2
import os
import glob

def extractFrames(pathIn, pathOut):
    os.mkdir(pathOut)
    cap = cv2.VideoCapture(pathIn)
    count = 0

    while cap.isOpened():
        # Capture frame-by-frame
        ret, current_frame = cap.read()
        if ret == True:
            print('Read %d frame: ' % count, ret)
            current_frame_gray = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
            if count > 1:
                # cv2.cvtColor, not cv2.cvtcolor - Python names are case-sensitive
                previous_frame_gray = cv2.cvtColor(previous_frame, cv2.COLOR_BGR2GRAY)
                frame_diff = cv2.absdiff(current_frame_gray, previous_frame_gray)
                cv2.imwrite(os.path.join(pathOut, "frame{:d}.jpg".format(count)), frame_diff)  # save frame diff as JPEG file
            count += 1
        else:
            break
        previous_frame = current_frame

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()

def main():
    extractFrames(r"C:\Users\mathesn\Downloads\Wildlife.mp4", 'fd3')

if __name__ == "__main__":
    main()
I took liberties to fix other sections of your code. But there are some other fixes that this code needs, like creating the directory only if it doesn't exist, maintaining a colored version of the frame so that cv2.cvtColor() does not fail, etc., but I'll leave those to you.
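For instance, the directory fix alone could look like this (a sketch, not part of the original answer; os.makedirs with exist_ok=True is the usual idiom):

import os

# create the output directory only if it doesn't exist yet;
# plain os.mkdir() raises FileExistsError on a second run
os.makedirs('fd3', exist_ok=True)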
I am trying to pull individual frames at specified times from an RTSP feed.
This works fine for video streaming:
vcap = cv2.VideoCapture(RTSP_URL)

while(1):
    ret, frame = vcap.read()
    cv2.imshow('VIDEO', frame)
    cv2.waitKey(1)
But if I want to take an image every second and save it by doing something like this:
vcap = cv2.VideoCapture(RTSP_URL)

for t in range(60):
    ret, frame = vcap.read()
    if ret:
        cv2.imwrite("{}.jpg".format(t), frame)
    time.sleep(1)
Every image looks exactly the same as the first image, and in every instance ret == True.
(Also, this was working fine for me a week ago, and then ipython did something that required me to do a re-install.)
cv2.waitKey(1000) won't do anything if you don't show an image with cv2.imshow(). Try:
vcap = cv2.VideoCapture(RTSP_URL)

for t in range(60):
    ret, frame = vcap.read()
    cv2.imwrite('{}.jpg'.format(t), frame)
    # this will activate the waitKey function
    cv2.imshow('preview', frame)
    cv2.waitKey(1000)
On another note, iPython/Jupyter doesn't play well with cv2's imshow and the whole GUI functionality - for example, when you can't break the loop by keypress:

if (cv2.waitKey(1000) & 0xFF) == 27:
    break

(note the parentheses: the & 0xFF mask must apply to waitKey's return value before the comparison with 27).
Alright, after endlessly messing with it over the last few days: reading only one frame per second is not fast enough for this feed, for whatever reason. This will work:

vcap = cv2.VideoCapture(RTSP_URL)

for t in range(60 * 1000):  # keep reading continuously so the stream doesn't go stale
    ret, frame = vcap.read()
    if ret and t % 1000 == 0:  # save roughly one frame per second
        cv2.imwrite("{}.jpg".format(t), frame)
    time.sleep(0.001)
I tried to capture (stereo) images with Python's OpenCV and two cameras, so an image should be saved every 5 seconds. But the problem is that an old frame is saved.
The minified code is as follows:
cap = cv2.VideoCapture(0)

for i in range(20):
    time.sleep(5)
    print("Taking image %d:" % i)
    ret, frame = cap.read()
    cv2.imwrite("image_%d.jpg" % i, frame)  # imwrite needs a file extension to choose the format
    print(" image done." if ret else " Error while taking image...")

cap.release()
cv2.destroyAllWindows()
To check this, I changed the position of the camera after each image was taken. Nevertheless, an image from an old position is saved (not exactly the same one; I assume it is from a few frames after the last saved image). After 5 (or more) images, the captured location in the image finally does change.
So, is there any problem with time.sleep? I guess I'm not getting the actual frame but a buffered one. If that is the case, how can I fix it and capture the current frame?
VideoCapture is buffering.
If you always want the actual frame, do this:
while True:
    cap = cv2.VideoCapture(0)
    ret, frame = cap.read()
    cap.release()
    cv2.imshow(" ", frame)
    if cv2.waitKey(2000) != -1:
        break
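Re-opening the camera for every frame is slow, though. Two alternatives that may help, depending on the backend (these are suggestions to try, not part of the original answer): request a 1-frame internal buffer via CAP_PROP_BUFFERSIZE, or drain the stale buffered frames with grab() right before the read you keep:

import cv2

cap = cv2.VideoCapture(0)

# Option 1: request a minimal internal buffer (only honored by some backends)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

# Option 2: drain a few buffered frames right before the capture you keep;
# 4 is a guess at the buffer depth, tune it for your camera
for _ in range(4):
    cap.grab()
ret, frame = cap.read()  # this frame should now be (close to) current

cap.release()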
You need to keep reading frames without stopping and just count the elapsed time, like this:
import cv2
import time

cap = cv2.VideoCapture(0)
preframe_tm = time.time()
i = 0

while True:
    ret, frame = cap.read()
    elapsed_time = time.time() - preframe_tm
    if elapsed_time < 5:
        continue
    preframe_tm = time.time()
    i += 1
    print("Taking image %d:" % i)
    cv2.imwrite("image_%d.jpg" % i, frame)
    if i >= 20:
        break

cap.release()
cv2.destroyAllWindows()