import numpy as np
import cv2
cap = cv2.VideoCapture("rtsp://admin:admin123#10.0.51.110/h264/ch3/main/av_stream")
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        break
    # Process the frame here -
    # run your computer vision algorithm
    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
This code uses nearly 50% of the CPU. How can I reduce the CPU usage?
I have tried time.sleep(0.05), but it delays the video feed processing, so it no longer behaves in real time for me.
Use MJPEG as the codec and lower the FPS (frames per second) of the video streaming source.
Since MJPEG is less compressed than H.264, bandwidth usage will be higher, but your CPU usage will be lower.
I am writing a basic program to read video frames from a video file and display them on screen using OpenCV functions. I was able to do that. But when I ran the same file after a few days, I was unable to capture frames using the VideoCapture function in OpenCV.
Reading frames from the webcam still works, but I am unable to read frames from a video file.
import cv2

cap = cv2.VideoCapture('video1.mp4')
# print(cap.isOpened())  # just for debugging purposes; it prints False

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Title', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):  # or frame_count > 25:
        break

cap.release()
cv2.destroyAllWindows()
The above snippet is the one I wrote. What might be the reason it worked properly initially but now fails to do so?
I'm working on a project that will eventually have to process webcam images in real-time. I have some suitable test videos that I use to test my program. However, I don't know how to simulate real-time processing with a video file. I can read in each frame and process it, but this is not realistic since the algorithm is too heavy to run on every frame. I would like to 'stream' the video separately and pull in a frame each time the algorithm starts to test with a realistic fps, but I don't know how to do this.
Here is the basic layout of the code you can use:
import cv2

cap = cv2.VideoCapture('path to video file')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    ### YOUR CODE HERE ###
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()  # destroy all opened windows
I'm using opencv as part of a beam profiler software. For this I have a high resolution camera (5496x3672, Daheng Imaging MER-2000-19U3M). I'm now using a basic program to show the captured frames. The program works fine for a normal webcam; however, when I connect my high resolution camera (through USB 3.0) it becomes buggy. Most of the frame is black, and at the top there are three small instances of the recording (screenshot here). On the other hand, the camera software displays the image properly, so I assume there must be a problem in how opencv accesses the camera. Here is the code used to display the image:
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 5496)   # property id 3
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3672)  # property id 4

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame2 = cv2.resize(frame, (1280, 720))
    cv2.imshow('frame', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I'm using a Raspberry Pi and trying to create a video stream using Flask and the picamera library. My understanding is that I need to use continuous_capture to get the lowest latency within the system.
However, I can't find a way to preview the images that are supposedly being taken. Is there a way I can view them before I try to implement this in Flask, which has many issues of its own, since I will also be using the Flask website to control a robot?
Any suggestions on how to do this are appreciated, as is telling me there is an easier way to do it. Please note I am only an intermediate programmer, so nothing too complex for me to understand, as that is the whole idea of the project.
I believe you are looking for capture_continuous.
Here's the general process:
Import the necessary packages
Pause to allow the camera to warm up
Iterate through the camera frames
Capture and show the frame
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warm up
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
I want to play a video at its correct speed. When I play the current video it is very slow, and the higher the frame rate of the video, the slower it plays. I was wondering whether there is a way to get the frame rate and, based on the FPS, play it at the correct speed. Below is the code I have so far.
Another question: when I play the video it appears grainy. Is that because the video is not playing at the correct speed, or because of the inefficiency of the code?
Thanks for the help; it is much appreciated.
import cv2

def openVideo():
    cap = cv2.VideoCapture('./track.mp4')
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

openVideo()