I am trying to use a Pi camera attached to a Raspberry Pi with Python. Here is my sample code. On running this code I get two windows showing the video. When I tried processing the video, I got one window with the processed video and one with the real video. I am just trying to get a single window. Can anyone suggest why this is happening?
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:  # stop if the camera fails to deliver a frame
        break
    cv2.imshow('video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I am writing a basic program to read video frames from a video file and display them on screen using OpenCV functions. I was able to do that, but when I ran the same file after a few days, I was unable to capture frames using OpenCV's VideoCapture function.
Reading frames from the webcam still works; I just cannot read frames from a video file:
cap = cv2.VideoCapture('video1.mp4')
# print(cap.isOpened())  # just for debugging purposes; it prints False
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # avoid passing None to imshow at end of stream
        break
    cv2.imshow('Title', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):  # or frame_count > 25:
        break
cap.release()
cv2.destroyAllWindows()
The snippet above is the one I wrote. What might be the reason that it worked properly initially but now fails to do so?
I'm working on a project that will eventually have to process webcam images in real-time. I have some suitable test videos that I use to test my program. However, I don't know how to simulate real-time processing with a video file. I can read in each frame and process it, but this is not realistic since the algorithm is too heavy to run on every frame. I would like to 'stream' the video separately and pull in a frame each time the algorithm starts to test with a realistic fps, but I don't know how to do this.
Here is the basic layout of the code you can use:
import cv2

cap = cv2.VideoCapture('path to video file')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # end of file or read failure
        break
    ### YOUR CODE HERE ###
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()  # destroy all opened windows
I'm using OpenCV as part of beam profiler software. For this I have a high-resolution camera (5496x3672, Daheng Imaging MER-2000-19U3M). I'm now using a basic program to show the captured frames. The program works fine with a normal webcam; however, when I connect my high-resolution camera (through USB 3.0) it becomes buggy. Most of the frame is black, and at the top there are three small instances of the recording (screenshot here). On the other hand, the camera's own software displays the image properly, so I assume there must be a problem in how OpenCV accesses the camera. Here is the code used to display the image:
import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 5496)  # 3 == cv2.CAP_PROP_FRAME_WIDTH
cap.set(4, 3672)  # 4 == cv2.CAP_PROP_FRAME_HEIGHT
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame2 = cv2.resize(frame, (1280, 720))  # downscale for display
    cv2.imshow('frame', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I am using OpenCV for video processing. I read a video frame by frame, apply some processing to each frame, and then display the new, modified frame. My code looks like this:
video_capture = cv2.VideoCapture('video.mp4')
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    # Applying some processing to frame
    # ...
    # Displaying the new frame with the processing applied
    cv2.imshow('title', frame)  # imshow returns None, so assigning its result is pointless
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
This way I can display the processed video instantaneously. The problem is that the display lags a lot due to the presence of waitKey. Is there another way to display images in real time to form a video, but with a module other than cv2?
Thank you
One option is Tkinter; you can find some information here. Along with Tkinter, it uses python-gstreamer and python-gobject. It is much more complicated to set up, but it allows for more customization.
I want to play a video at its correct speed. When I play the current video it is very slow, and the higher the frame rate of the video, the slower it plays. I was wondering if there is a way to get the frame rate and, based on the fps, play the video at the correct speed. Below is the code I have so far.
Another question: when I play the video it appears grainy. Is that because the video is not currently playing at the correct speed, or because of inefficiency in the code?
Thanks for the help; it is much appreciated.
import cv2

def openVideo():
    cap = cv2.VideoCapture('./track.mp4')
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

openVideo()