I am new to object detection using a USB webcam.
I have a USB webcam capable of recording FHD at 30 fps. I've connected this camera, via a USB 3.0 port, to a Linux machine to capture video.
The ffmpeg command line is used to capture a minute-long, 15 fps, 640x720 video at a 5M bitrate.
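The capture command is roughly the following (the device path /dev/video0 is a placeholder for my camera):

ffmpeg -f v4l2 -framerate 15 -video_size 640x720 -i /dev/video0 -b:v 5M -t 60 capture.mp4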
A simple OpenCV-based Python program reads this video file frame by frame using cap.read(). However, I've noticed that when there is a moving object (e.g. a human) in the frame, it becomes very blurry. (Here is a link to an example.) I am wondering if this is normal or if some adjustments are missing.
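The reading side is essentially just this (a minimal version of my script; the file name is a placeholder):

import cv2

cap = cv2.VideoCapture("capture.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ... each frame is passed to the detector here ...
cap.release()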
I am asking this question because I would like to run an object detection algorithm (SSD + MobileNet v2) on the video I am capturing, but in many of the frames, if the object is moving, object detection fails to spot it. (Yes, of course there isn't a perfect detection algorithm for all video analytics, and there are various reasons for detection to fail.)
Could you give pointers on removing the blurriness from these video frames?
1) Is it because the video recording resolution is too low?
2) Is it because the Python program is reading at a different frame rate (approximately 13~14 fps)?
I am trying to build a sports analysis platform. I have a deep learning model which processes live video (RTMP/webcam) frames and applies overlays, scores, etc., and then I need to combine the result with microphone audio and rebroadcast with audio and video in sync. I think I need the presentation timestamps of the frames (since AI frame processing takes a variable amount of time) and must somehow provide ffmpeg with them, but I'm lost and could not find a similar example of doing this.
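To make the setup concrete, my pipeline is roughly the shape below. All names, sizes, and especially the timestamp option are guesses on my part, which is exactly where I'm stuck:

import subprocess
import cv2

cap = cv2.VideoCapture("rtmp://localhost/live/in")  # or 0 for a webcam

ffmpeg = subprocess.Popen([
    "ffmpeg",
    # processed frames arrive on stdin at an irregular rate
    "-f", "rawvideo", "-pix_fmt", "bgr24", "-s", "1280x720",
    "-use_wallclock_as_timestamps", "1",  # my guess at how to get PTS assigned
    "-i", "-",
    "-f", "pulse", "-i", "default",       # microphone audio (assuming Linux/PulseAudio)
    "-c:v", "libx264", "-c:a", "aac",
    "-f", "flv", "rtmp://localhost/live/out",
], stdin=subprocess.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... model inference and overlay/score drawing happen here (variable time) ...
    ffmpeg.stdin.write(frame.tobytes())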
I am trying to change the resolution of the PS5 camera in OpenCV with Python.
The problem is that the PS5 camera officially isn't supported on PC, and I have to use custom camera drivers from GitHub: https://github.com/Hackinside/PS5_camera_files
The default image resolution with this code is 640x376:
self.capture = cv2.VideoCapture(name)
I found out that the supported resolutions of this camera are 640x376 and 5148x1088, so I tried the following:
res = self.capture.set(cv2.CAP_PROP_FRAME_WIDTH, 5148)
res = self.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1088)
But in both cases res is False, and the resolution doesn't change. I can receive only the small-resolution frames.
The camera definitely can work at 5148x1088, because if I launch the Windows Camera application it shows me high-quality images.
Okay, the problem was that I had a piece of code where I read frames from the capture in a loop:
while True:
    self.capture.read()
This loop was running in a parallel thread, so the resolution change happened at the same time as frames were being read. That is why changing the resolution always failed.
So the code provided in the question should work if you set the resolution before you start reading images.
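For illustration, the working order is roughly this (a sketch with the device index as a placeholder and the surrounding class stripped out):

import cv2

capture = cv2.VideoCapture(0)  # placeholder device index

# set the resolution BEFORE any thread starts calling capture.read()
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 5148)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1088)

# only now start the reading loop (or the reader thread)
while True:
    ok, frame = capture.read()
    if not ok:
        break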
In OpenCV with Python, when the fps of the webcam and of a video file in the directory are the same, why does the video file play in fast forward while the webcam shows frames at a normal rate? What role does the cv2.waitKey() function play here?
The fps of a video file describes how it was encoded: as the name reveals, how many frames are contained within each second. For example, extracting 1 second of this video will produce exactly that number of frames (images).
The corresponding fps of a web camera means how many frames the camera can capture per second. If saved to a video file, that is how many frames would be contained within each 1-second span.
There is a third (probably hidden to you) concept here, though: how fast OpenCV can read a video file. Normally, for typical resolutions on a modern computer, this rate is higher than the video's actual fps. So your computer seems to play the video back in fast-forward mode because it reads (and displays) frames at a faster pace than the video file's fps.
Theoretically, you can calculate the delay you should introduce into the playback loop to force the video to display at its normal pace. I am not sure how easily you can accomplish that in a scientific way rather than by trial and error.
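A rough version looks like the sketch below (the file name is a placeholder). It ignores the time spent decoding and displaying each frame, so it is the back-of-the-envelope approach rather than an exact one:

import cv2

cap = cv2.VideoCapture("video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
delay = int(1000 / fps)  # milliseconds to wait between frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("playback", frame)
    if cv2.waitKey(delay) & 0xFF == ord("q"):
        break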
Hope this clarifies the issue.
I would like to take pictures using a USB webcam. When I use the VideoCapture method of OpenCV, it actually gives me frames from the video stream. In most cases, still images cover more area than the video does. Therefore, I am looking for a way to take pictures with the webcam that cover more of the available camera FOV.
I'm using OpenCV 2.4.5 with Python 2.7 to track people in video surveillance. In the beginning I used .avi and .mpeg videos to test my code; now I want to use an HCV-M100C camera. I am using a simple difference between frames (an initial frame compared with each subsequent frame) to identify objects in movement. It works very well with the .avi and .mpeg videos I have, but when I use the camera the results are bad because a lot of noise and stains appear in the video. I thought the problem was my camera, but I made an .avi video with the same camera, tested that video with my code, and it works fine.
Now I'm using cv2.BackgroundSubtractorMOG, but the problem is still there.
So I think I need to do some pre-processing when I use the camera.
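Something like the sketch below is the kind of pre-processing I have in mind: blur each frame to suppress sensor noise before differencing (the kernel size and threshold are guesses I would need to tune):

import cv2

cap = cv2.VideoCapture(0)

def preprocess(frame):
    # grayscale + blur to suppress camera sensor noise before differencing
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (21, 21), 0)

ok, first = cap.read()
base = preprocess(first)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(base, preprocess(frame))
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)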
Just for completeness:
Solution concept:
Possibly you could stream the camera's video with something like ffmpeg, which can transcode as well, and then use OpenCV to read the network stream. It might be easier to use VLC to stream instead.
Solution detail:
VLC Stream code (Shell):
vlc "http://192.168.180.60:82/videostream.cgi?user=admin&pwd=" --sout "#transcode{vcodec=mp2v,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=??44100}:duplicate{dst=rtp{sdp=rtsp://:8554/output.mpeg},dst=display}" --sout-keep
OpenCV Code (Python):
import cv2

cap = cv2.VideoCapture("rtsp://:8554/output.mpeg")