How to play a video in real time using Python OpenCV

I want to play a video at its correct speed. When I play it with the code below it runs very slowly, and the higher the frame rate of the video, the slower it plays. Is there a way to get the frame rate of the file and use it to play the video at the correct speed? Below is the code I have so far.
Another question: when I play the video it appears grainy. Is that because the video is not playing at the correct speed, or because of an inefficiency in the code?
Thanks for the help, it is much appreciated.
import cv2

def openVideo():
    cap = cv2.VideoCapture('./track.mp4')
    while cap.isOpened():
        ret, frame = cap.read()
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

openVideo()
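One way to get the playback speed right (not from the original post, just a sketch) is to read the file's frame rate with cap.get(cv2.CAP_PROP_FPS) and derive the waitKey delay from it:

import cv2

def openVideoAtSpeed(path='./track.mp4'):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)                 # frame rate reported by the file
    delay = int(1000 / fps) if fps > 0 else 33      # milliseconds per frame, fall back to ~30 fps
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:                                 # stop cleanly at the end of the file
            break
        cv2.imshow('frame', frame)
        if cv2.waitKey(delay) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

openVideoAtSpeed()

Note that waitKey(delay) does not account for the time spent decoding and displaying each frame, so heavy per-frame work will still slow playback slightly; subtracting the measured processing time from the delay would tighten this up.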

Related

Python OpenCV VideoCapture() returns False, but worked initially

I am writing a basic program to read video frames from a video file and display them on screen using OpenCV functions. I was able to do that. But when I ran the same file after a few days, I was unable to capture frames using the VideoCapture function in OpenCV.
Reading frames from the webcam still works; it is only the video file that I can no longer read.
import cv2

cap = cv2.VideoCapture('video1.mp4')
# print(cap.isOpened())  # just for debugging; it prints False
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('Title', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):  # or frame_count > 25:
        break
cap.release()
cv2.destroyAllWindows()
The above snippet is the one I wrote. What might be the reason it worked properly initially but fails now?
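A quick way to narrow this down (my own diagnostic sketch, not from the post) is to check that the path resolves from the current working directory and that OpenCV was built with the FFMPEG backend, since isOpened() returning False on a file that used to work usually points at one of those two:

import os
import cv2

path = 'video1.mp4'
print(os.path.exists(path))        # False means the relative path is wrong for the current working directory
for line in cv2.getBuildInformation().splitlines():
    if 'FFMPEG' in line:
        print(line.strip())        # shows whether the FFMPEG backend used for video files is available
cap = cv2.VideoCapture(path)
print(cap.isOpened())              # True only if the backend can open and decode this file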

How to stream and grab frames from a video file to test real-time processing in Python

I'm working on a project that will eventually have to process webcam images in real time. I have some suitable test videos that I use to test my program. However, I don't know how to simulate real-time processing with a video file. I can read in each frame and process it, but this is not realistic, since the algorithm is too heavy to run on every frame. I would like to 'stream' the video separately and pull in a frame each time the algorithm starts, so I can test with a realistic fps, but I don't know how to do this.
Here is the basic layout of the code you can use:
import cv2

cap = cv2.VideoCapture('path to video file')
while cap.isOpened():
    ret, frame = cap.read()
    ### YOUR CODE HERE ###
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()  # destroy all opened windows
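The layout above reads frames as fast as the loop spins, so it does not by itself simulate a real-time source. One common pattern for that (a sketch under my own assumptions, not part of the original answer) is to read frames in a background thread paced at the file's frame rate and have the heavy algorithm grab only the most recent frame whenever it is ready:

import threading
import time
import cv2

class VideoStream:
    """Reads frames in a background thread so the consumer always sees the newest frame."""

    def __init__(self, path):
        self.cap = cv2.VideoCapture(path)
        fps = self.cap.get(cv2.CAP_PROP_FPS) or 30   # fall back to 30 fps if the file reports 0
        self.interval = 1.0 / fps
        self.frame = None
        self.stopped = False
        self.lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while not self.stopped:
            ret, frame = self.cap.read()
            if not ret:
                self.stopped = True
                break
            with self.lock:
                self.frame = frame
            time.sleep(self.interval)                # pace reading at the video's frame rate

    def latest(self):
        with self.lock:
            return self.frame

    def release(self):
        self.stopped = True
        self.cap.release()

stream = VideoStream('path to video file')
while not stream.stopped:
    frame = stream.latest()
    if frame is None:
        continue
    ### YOUR (slow) CODE HERE ###
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
stream.release()
cv2.destroyAllWindows()

Frames that arrive while the algorithm is busy simply get overwritten, which mimics the dropped frames you would see with a live webcam.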

How to read RTSP Video from OpenCV with Low CPU Usage?

import numpy as np
import cv2

cap = cv2.VideoCapture("rtsp://admin:admin123#10.0.51.110/h264/ch3/main/av_stream")
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Processing frame:
    # running the computer-vision algorithm

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
This code is using nearly 50% of the CPU. How can I reduce this CPU usage?
I have tried time.sleep(0.05), but it delays the video feed processing, so it won't work as real time for me.
Use MJPEG as the codec and lower the fps (frames per second) of the video streaming source.
Since MJPEG is less compressed than H.264, bandwidth usage will be higher, but your CPU usage will be lower.
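If you cannot change the camera's codec or frame rate, another option (my own sketch, not part of the answer above) is to keep displaying every frame but run the expensive computer-vision step only on every Nth frame:

import cv2

cap = cv2.VideoCapture("rtsp://user:password@host/stream")  # placeholder URL, substitute your own
PROCESS_EVERY = 5                                            # run the heavy algorithm on one frame in five
count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    if count % PROCESS_EVERY == 0:
        pass                                                 # heavy computer-vision processing goes here
    count += 1
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

Decoding still happens on every frame, so this mainly helps when the processing, not the decode, dominates the CPU usage.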

Suggestions for improving the webcam code

I have written basic code which captures an image from the webcam using OpenCV and Python 2.7.
The code is as follows:
import numpy
import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.imshow('image',frame)
cap.release()
cv2.waitKey(0)
cv2.destroyAllWindows()
This code gives the correct output, but my camera takes a few seconds to focus, so I get a black or dim image instead of a bright, properly focused one.
How can I solve this problem in a more mature way?
You need an "auto capture" algorithm. Auto-capture algorithms vary depending on your use case. For example, if you need to take a shot of a document that you want to OCR later, you have to check how OCR-able the text is before taking the image. In the general case there is something called no-reference image quality assessment that will help you rate how good an image is; if it is good enough, take the shot. However, implementing it is not an easy task.
If you need something fast and easy, just compute the sharpness of the image and use it to decide whether to take the photo. See this: http://answers.opencv.org/question/5395/how-to-calculate-blurriness-and-sharpness-of-a-given-image/
Another option is a face detector if you are taking photos of people. OpenCV ships a cascade classifier with a pre-trained model for human faces; take the shot once a face is detected.
You can also combine the last two approaches in a hybrid mode: detect the face, make sure it is sharp enough, and then take the photo.
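For the "compute the sharpness" route, a common heuristic (a sketch of mine, not from the answer) is the variance of the Laplacian: a low variance means few edges, i.e. a blurry or still-focusing frame:

import cv2

def is_sharp(frame, threshold=100.0):
    """Treat the frame as focused when the Laplacian variance exceeds a threshold (tune per camera)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if ret and is_sharp(frame):
        cv2.imwrite('capture.png', frame)   # take the shot only once the image is sharp enough
        break
cap.release()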
You could wait until video capture has been initialized by modifying the code as follows:
import cv2

cv2.namedWindow("output")
cap = cv2.VideoCapture(0)

if cap.isOpened():  # try to get the first frame
    ret, frame = cap.read()
else:
    ret = False

while ret:
    cv2.imshow("output", frame)
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    if key == 27:  # exit on Escape key
        break

cap.release()
cv2.destroyWindow("output")

Two windows at a time?

I am trying to use a Pi camera attached to a Raspberry Pi with Python. Here is my sample code. On running this code I get two windows showing the video. When I tried processing the video, I got one window with the processed video and one with the real video. I am just trying to get a single window. Can anyone suggest why this is happening?
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, th = cap.read()
    cv2.imshow('video', th)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
