I am facing a problem recording video from a camera using Python and OpenCV. I have my images in QImage format and convert them to numpy arrays in order to display the stream while capturing from the camera (using OpenCV's VideoCapture); that part works fine.
When I try to record a video and save it to a folder (using OpenCV's VideoWriter with VideoWriter_fourcc), I get no errors, but the resulting video file is empty. I did a lot of searching for the problem but couldn't find it.
Here's the code that I use to record a video:
import cv2

fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
# img is a numpy array
videoWriter = cv2.VideoWriter('video.avi', fourcc, 20, (img.shape[0], img.shape[1]))
while True:
    videoWriter.write(img)
videoWriter.release()
I tried changing the frame rate, the frame size, the file extension, and the codec, but nothing worked.
I am quite desperate; I appreciate any help or suggestion I can get.
Thank you
The issue is that you mixed up width and height as you passed them to VideoWriter.
VideoWriter wants (width, height) for the frameSize argument.
Note that width = img.shape[1] and height = img.shape[0]
Giving VideoWriter a 1920x1080 frame to write, after having promised 1080x1920 frames in the constructor, will fail, but silently.
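A minimal corrected call, keeping the rest of the snippet above unchanged:

# frameSize must be (width, height), so the shape indices are swapped
videoWriter = cv2.VideoWriter('video.avi', fourcc, 20,
                              (img.shape[1], img.shape[0]))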
Using cv2, I make a bunch of jpgs out of an mp4 file:
vidcap = cv2.VideoCapture(inp_file)
count = 0
success, image = vidcap.read()
while success:
    cv2.imwrite("raw_frames/frame%d.jpg" % count, image)
    success, image = vidcap.read()
    count += 1
I then do a bunch of operations on all those jpgs and store them in another folder. Then I go through all of those and create an avi out of them:
img = Image.open(file)
width, height = img.size
frameSize = (width, height)
outfile = inp_file.split(".")[0] + '_asc'
out = cv2.VideoWriter(outfile + '.avi', cv2.VideoWriter_fourcc(*'DIVX'), ffps, frameSize)
path = "rendered_frames"
for img_i in range(len(all_images)):
    img = cv2.imread(path + "/rf_" + str(img_i) + ".jpg")
    out.write(img)
print("\tVideo complete!")
out.release()
Then I take that video and try to add the audio from the original mp4 file back to it, using moviepy:
def combine_audio(vidname, audname, outname, fps):
    from moviepy.video.io.VideoFileClip import VideoFileClip
    from moviepy.audio.io.AudioFileClip import AudioFileClip
    my_clip = VideoFileClip(vidname)
    audio_background = AudioFileClip(audname)
    final_clip = my_clip.set_audio(audio_background)
    final_clip.write_videofile(outname, fps=fps, codec='rawvideo')

combine_audio(outfile + '.avi', inp_file, outfile + "ii.avi", ffps)
However, there seems to be a problem with the codec. Under the current settings, skin tones turn purple, purple things turn red, white things stay fairly white, and so on.
Note that this only happens after I add the audio back. Before adding audio, the video from cv2 is perfect.
(Reference screenshots, original vs. after rendering, omitted.)
I've tried some other values in cv2 and moviepy but no success.
I tried creating an mp4 with MP4V instead of DIVX in cv2.VideoWriter(), together with an mp4 codec in the audio function, but there is far too much compression for how detailed the video needs to be.
moviepy's documentation says you can leave the codec field blank for an .avi, but that causes an error.
Some additional information:
After making the video with cv with fourcc 'DIVX' (pre audio) VLC media player says the codec of the video is MPEG-4 (FMP4).
When passing 'FMP4' as the codec for moviepy, it doesn't recognize it.
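One option that may be worth trying, based on moviepy's documented codec list rather than anything verified against this exact pipeline: moviepy lists 'png' as a lossless codec for .avi output, which sidesteps mpeg4-style compression while still being a codec moviepy recognizes:

# Hedged sketch: 'png' appears in moviepy's write_videofile docs
# as a lossless codec usable with .avi containers.
final_clip.write_videofile(outname, fps=fps, codec='png')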
I'm looking for a way to extract one single image from an RTSP stream. My current solution is based on OpenCV, but I want to optimize this, so I'm going with a solution I found here: Handle large number of rtsp cameras without real-time constraint
The drawback there is that FFmpeg opens a whole video stream, so it is no more efficient than OpenCV. I want to fetch one single image with FFmpeg, but currently I'm not able to: I get the error
"ValueError: cannot reshape array of size 0 into shape (160,240,3)", so there is no image in the frame. I'm not completely convinced that my ffmpeg_cmd is correct.
import numpy as np
import subprocess as sp
import cv2
# Use public RTSP Streaming for testing:
in_stream = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4"
# Use OpenCV for getting the video resolution.
cap = cv2.VideoCapture(in_stream)
# Get resolution of input video
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Release VideoCapture - it was used just for getting video resolution
cap.release()
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
FFMPEG_BIN = "ffmpeg"
ffmpeg_cmd = [FFMPEG_BIN,
'-vframes', '1',
'-rtsp_transport', 'tcp'
'-i', in_stream,'-c:v','mjpeg','-f','image2pipe', 'pipe:']
process = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE)
raw_frame = process.stdout.read(width * height * 3)
frame = np.frombuffer(raw_frame, np.uint8).reshape((height, width, 3))
process.stdout.close()
process.wait()
cv2.destroyAllWindows()
If someone has a completely different solution, that's OK too. I just need something more efficient than doing it with OpenCV.
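Two details in the command above stand out: the missing comma after 'tcp' makes Python concatenate it with '-i' into the single token 'tcp-i', and '-vframes' is placed before '-i', where ffmpeg treats it as an input option instead of an output option. A sketch of a corrected call, reusing FFMPEG_BIN, in_stream, width, and height from the snippet above and requesting raw BGR output so that one frame is exactly width * height * 3 bytes (an assumption-level sketch, not a verified drop-in):

import numpy as np
import subprocess as sp

ffmpeg_cmd = [
    FFMPEG_BIN,
    '-rtsp_transport', 'tcp',   # note the comma that was missing above
    '-i', in_stream,
    '-frames:v', '1',           # output option: emit a single frame
    '-f', 'rawvideo',           # raw frames have a predictable byte size
    '-pix_fmt', 'bgr24',
    'pipe:',
]
process = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE)
raw_frame = process.stdout.read(width * height * 3)
process.stdout.close()
process.wait()
frame = np.frombuffer(raw_frame, np.uint8).reshape((height, width, 3))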
I am trying to write a video file without any loss in OpenCV, but so far every codec I have selected from the fourcc codec lists results in some loss of data.
Regarding the recording parameters, I am using:
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
I have used these codecs so far, but they either compress the video or inflate the bit rate:
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
fourcc = cv2.VideoWriter_fourcc(*'RGBA')
fourcc = cv2.VideoWriter_fourcc(*'x265')
fourcc = cv2.VideoWriter_fourcc('H','2','6','4')
My VideoWriter call is:
writer= cv2.VideoWriter(out_dest, fourcc, fps, (width,height))
Just to be clear, I do not want any sort of compression for the output video.
I also use
vid_format = int(cap.get(cv2.CAP_PROP_FOURCC))
to get the output video's codec (FourCC) and compare it to the original video's.
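As an aside, the integer that CAP_PROP_FOURCC returns can be decoded back into the human-readable four-character code, since the characters are packed little-endian:

fourcc_int = int(cap.get(cv2.CAP_PROP_FOURCC))
codec = fourcc_int.to_bytes(4, 'little').decode('ascii')  # e.g. 'FMP4'
print(codec)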
I also found someone on GitHub using skvideo, but I wasn't able to get the same code working:
https://gist.github.com/docPhil99/a612c355cd31e69a0d3a6d2f87bfde8b
It kept raising an extension error, and I couldn't find proper documentation on how to use it!
Thank you in advance
An update on the topic:
The final output writer will be used for BGR-to-RGB conversion in OpenCV, so if you have any other ideas or suggestions that can do the job, I'm all ears!
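One avenue that may be worth a try, hedged because it depends on the FFmpeg backend of your OpenCV build shipping the encoder: FFV1 (and similarly HuffYUV, fourcc 'HFYU') is a genuinely lossless codec that OpenCV can often write into an .avi container:

# Hedged sketch: assumes this OpenCV build's FFmpeg backend
# includes the FFV1 lossless encoder.
fourcc = cv2.VideoWriter_fourcc(*'FFV1')
writer = cv2.VideoWriter('out_lossless.avi', fourcc, fps, (width, height))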
If all you need is to get RGB data out of a raw AVI file and feed the RGB frames to MediaPipe, there is no need for an intermediate file to store the RGB data, because FFmpeg can convert the pixel format on the fly via the output option -pix_fmt rgb24.
To do this, you can try my ffmpegio package to load the data. The package is designed for exactly this use case and removes the need to set up the FFmpeg call yourself. To install:
pip install ffmpegio
You also need FFmpeg available on the system.
Then video.read() loads the video data:
import ffmpegio
fs, I = ffmpegio.video.read('datafile.avi',pix_fmt='rgb24')
# fs = framerate
# I = 250x480x640x3 numpy array containing RGB video data
If you don't want to read all 250 frames at once, you can use the stream interface to work on a fixed number of frames at a time:
with ffmpegio.open('datafile.avi', 'rv', blocksize=25, pix_fmt='rgb24') as f:
    for I in f:  # loops 10 times, 25 frames at a time
        # I = 25x480x640x3 numpy array
        ...
I'm trying to capture photos and videos using cv2.VideoCapture and cameras with a 16:9 aspect ratio. All images returned by OpenCV have black sidebars, cropping the image: in my example, instead of returning a 1280 x 720 image, it returns a 960 x 720 one. The same thing happens with a C920 webcam (1920 x 1080).
What am I doing wrong?
import cv2

video = cv2.VideoCapture(0)
video.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
video.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
while True:
    connected, frame = video.read()
    cv2.imshow("Video", frame)
    if cv2.waitKey(1) == ord('s'):
        video.release()
        break
cv2.destroyAllWindows()
(Screenshots comparing the OpenCV capture with the Windows Camera app omitted.)
I had this exact issue with a Logitech wide-angle camera on Windows and suspected a driver problem.
I solved it by using the DirectShow backend instead of the native one:
cv2.VideoCapture(cv2.CAP_DSHOW)
If you have more than one camera, add the camera index to that value, like this:
cv2.VideoCapture(cv2.CAP_DSHOW + camera_index)
It will accept the desired resolution and apply the right aspect ratio, without the sidebars.
The answer from @luismesas is completely right and worked for me.
But for people as unskilled as I am: you need to keep the capture object returned by cv2.VideoCapture. It is not a setting you apply by calling cv2.VideoCapture(cv2.CAP_DSHOW) on its own; it is a constructor whose return value you then use.
camera_index = 0
cap = cv2.VideoCapture(camera_index, cv2.CAP_DSHOW)
ret, frame = cap.read()
Confirmed on Webcam device HD PRO WEBCAM C920.
I had the same issue too, but only on Windows 10 (OpenCV 3.4, Python 3.7).
On macOS I get the full resolution, without the black side bars.
I used PyGame to capture the webcam input and got the full resolution of 1920x1080 on Windows.
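For reference, a minimal sketch of the PyGame route, assuming the pygame.camera module (still marked experimental in pygame's docs) supports your platform's backend:

import pygame
import pygame.camera

pygame.camera.init()
cams = pygame.camera.list_cameras()                # available devices
cam = pygame.camera.Camera(cams[0], (1920, 1080))  # request full 16:9 size
cam.start()
surface = cam.get_image()                          # one frame as a pygame Surface
cam.stop()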
Just resize the received frame:
cv::Mat dst;
cv::resize(frame, dst, cv::Size(1280, 720));
cv::imshow("Video",dst);
Check it!
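Since the rest of this thread is Python, the equivalent there is a one-liner. Note that stretching a 960 x 720 frame to 1280 x 720 distorts the image rather than recovering the cropped field of view:

dst = cv2.resize(frame, (1280, 720))  # stretch the received frame
cv2.imshow("Video", dst)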
I have a series of images, and I want OpenCV to read all of them and create a video from the first image to the last. The images are just named 1, 2, 3, 4, ..., 151.
import cv2

img = []
for i in range(0, 151):
    img.append(cv2.imread(str(i) + '.png'))
height, width, layers = img[1].shape
video = cv2.VideoWriter('video.avi', -1, 1, (width, height))
for j in range(0, 151):
    video.write(img[j])
cv2.destroyAllWindows()
video.release()
and the following error was raised:
OpenCV: Frame size does not match video size
After that, a video was created, but only a few of the images were actually used to produce it.
Where is it incorrect?
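A hedged diagnosis based on two things visible in the snippet: the files are named 1...151 while the loop reads 0...150, so cv2.imread('0.png') returns None, and VideoWriter silently drops any frame whose size differs from the frameSize passed to the constructor (which matches the "Frame size does not match" message). A sketch that skips unreadable frames and resizes mismatched ones:

import cv2

images = []
for i in range(1, 152):                    # the names run 1..151, not 0..150
    frame = cv2.imread(str(i) + '.png')
    if frame is not None:                  # imread returns None on a miss
        images.append(frame)

height, width, layers = images[0].shape
video = cv2.VideoWriter('video.avi', -1, 1, (width, height))
for frame in images:
    if frame.shape[:2] != (height, width): # the writer drops mismatched sizes
        frame = cv2.resize(frame, (width, height))
    video.write(frame)
video.release()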
For "mMovieWriter.status: 3. Error: Cannot Save", you can try removing the video.avi file created during your tests.