I need to split a big video file into smaller pieces by time. Give me your suggestions, please, and, if you can, some tips for library usage. Thanks.
OpenCV has Python wrappers.
As you're interested in video IO, have a look at QueryFrame and its related functions there.
In the end, your code will look something like this (completely untested):
import cv

capture = cv.CaptureFromFile(filename)

while Condition1:
    # Need a frame to get the output video dimensions
    frame = cv.RetrieveFrame(capture)  # Will return None if there are no frames
    # New video file
    video_out = cv.CreateVideoWriter(output_filenameX,
                                     cv.CV_FOURCC('M', 'J', 'P', 'G'),
                                     cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FPS),
                                     cv.GetSize(frame), 1)
    # Write the frames
    cv.WriteFrame(video_out, frame)
    while Condition2:
        frame = cv.RetrieveFrame(capture)  # Will return None if there are no frames
        cv.WriteFrame(video_out, frame)
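If you are on a newer OpenCV, the same idea looks roughly like this with the cv2 API (an untested sketch; the chunk_seconds value and the output file naming are my own placeholders, not part of the original answer):

import cv2

def split_video(filename, chunk_seconds=60):
    capture = cv2.VideoCapture(filename)
    fps = capture.get(cv2.CAP_PROP_FPS)
    width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
    frames_per_chunk = max(1, int(fps * chunk_seconds))
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")

    writer = None
    chunk_index = 0
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Start a new output file every chunk_seconds worth of frames
        if frame_index % frames_per_chunk == 0:
            if writer is not None:
                writer.release()
            writer = cv2.VideoWriter("chunk_%03d.avi" % chunk_index, fourcc, fps, (width, height))
            chunk_index += 1
        writer.write(frame)
        frame_index += 1

    if writer is not None:
        writer.release()
    capture.release()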
By the way, there are also ways to do this without writing any code.
Check youtube-upload; it splits the videos using ffmpeg.
Youtube-upload is a command-line script that uploads videos to Youtube. If a video does not comply with Youtube limitations (<2Gb and <15'), it will be automatically split before uploading. Youtube-upload should work on any platform (GNU/Linux, BSD, OS X, Windows, ...) that runs Python and FFmpeg.
from moviepy.editor import *

clip = VideoFileClip("https://filelink/file.mp4")
clip.save_frame("frame.png", t=3)
I am able to load the video using MoviePy, but it loads the complete video and then saves the frame. Is it possible not to load the complete video but only the first four seconds, and then save the frame at 3 seconds?
Unless I missed something, it's not possible using MoviePy.
You may use ffmpeg-python instead.
Here is a code sample using ffmpeg-python:
import ffmpeg

stream_url = "https://file-examples-com.github.io/uploads/2017/04/file_example_MP4_480_1_5MG.mp4"

# Input seeking example: https://trac.ffmpeg.org/wiki/Seeking
(
    ffmpeg
    .input(stream_url, ss='00:00:03')                   # Seek to the third second
    .output("frame.png", pix_fmt='rgb24', frames='1')   # Select PNG codec in RGB color space and one frame
    .overwrite_output()
    .run()
)
Notes:
The solution may not work for all mp4 URL files, because the mp4 format is not very "web friendly": I think the moov atom must be located at the beginning of the file.
You may need to manually install the FFmpeg command-line tool (but it is supposed to be installed with MoviePy).
Result frame:
I have a video where the front view of a car was recorded. The file is an .mp4, and I want to process the single images so I can extract more information (objects, lane lines, etc.). The problem is that when I want to create a video out of the processed files, I get an error. Here is what I have done so far:
Opened the video with cv2.VideoCapture() - Works fine
Saved the single frames of the video with cv2.imwrite() - Works fine
Creating a video out of single frames with cv2.VideoWriter() - Works fine
Postprocessing the video with cv2.cvtColor(), cv2.GaussianBlur() and cv2.Canny() - Works fine
Creating a video out of the processed images - Does not work.
Here is the code I used:
def process_image(image):
    gray = functions.grayscale(image)
    blur_gray = functions.gaussian_blur(gray, 5)
    canny_blur = functions.canny(blur_gray, 100, 200)
    return canny_blur

process_on = 0
count = 0

video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (1600, 1200))
vidcap = cv2.VideoCapture('input.mp4')
success, image = vidcap.read()
success = True
while success:
    processed = process_image(image)
    video.write(processed)
This is the error I get:
OpenCV Error: Assertion failed (img.cols == width && img.rows == height*3) in cv::mjpeg::MotionJpegWriter::write, file D:\Build\OpenCV\opencv-3.2.0\modules\videoio\src\cap_mjpeg_encoder.cpp, line 834
Traceback (most recent call last):
File "W:/Roborace/03_Information/10_Data/01_Montreal/camera/right_front_camera/01_Processed/Roborace_CAMERA_create_video.py", line 30, in
video.write(processed)
cv2.error: D:\Build\OpenCV\opencv-3.2.0\modules\videoio\src\cap_mjpeg_encoder.cpp:834: error: (-215) img.cols == width && img.rows == height*3 in function cv::mjpeg::MotionJpegWriter::write
My suggestion is: the normal images have 3 color channels (RGB), while the processed images only have one. How can I adjust for this in the cv2.VideoWriter function?
Thanks for your help
The VideoWriter() class only writes color images, not grayscale images, unless you are on Windows (which it looks like you might be, judging from the paths in your output). On Windows, you can pass the optional argument isColor=0 or isColor=False to the VideoWriter() constructor to write single-channel images. Otherwise, the simple solution is to stack your grayscale frames into a three-channel image (you can use cv2.merge([gray, gray, gray])) and write that.
From the VideoWriter() docs:
Parameters:
isColor – If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).
So by default isColor=True, and the flag cannot be changed on a non-Windows system. Simply doing:
video.write(cv2.merge([processed, processed, processed]))
should patch you up. Even though the Windows variant allows writing grayscale frames, it may be better to use this second method for platform independence.
Also, as Zindarod mentions in the comments below, there are a number of other possible issues with your code. I'm assuming you've pasted modified code that you weren't actually running, or code that you would otherwise have modified; if that's the case, please only post minimal, complete, and verifiable examples.
First and foremost, your loop has no end condition, so it runs indefinitely. Secondly, you're hard-coding the frame size, but VideoWriter() does not simply resize the images to that size; you must provide the size of the frames you will actually pass in. Either resize each frame to that size before writing, or create your VideoWriter() using the frame size reported by your VideoCapture() device (via the .get() methods for the frame size properties).
Additionally, you're reading only the first frame outside the loop. Maybe that was intentional, but if you want to process each frame of the video, you'll of course need to read them in a loop, process them, and then write them.
Lastly, you should have better error catching in your code. For example, see the OpenCV "Getting Started with Video" Python tutorial. The "Saving a Video" section has the proper checks and balances: run the loop while the video capture device is opened, and process and write the frame only if it was read properly; once the capture runs out of frames, the .read() method will return False, which lets you break out of the loop and then close the capture and writer devices. Note the ordering here: the VideoCapture() device will still be "opened" even after you've read the last frame, so you need to exit the loop by checking the result of the read.
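Putting those points together, a minimal sketch could look like the following (untested; the cv2 calls inside process_image are my reading of the question's helper functions, and the file names are the ones from the question):

import cv2

def process_image(image):
    # Roughly what the question's functions module appears to do
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blur_gray = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(blur_gray, 100, 200)

vidcap = cv2.VideoCapture('input.mp4')
# Take the frame size from the capture device instead of hard-coding it
width = int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT))
video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (width, height))

while vidcap.isOpened():
    success, image = vidcap.read()
    if not success:
        break  # out of frames
    processed = process_image(image)                           # single-channel result
    video.write(cv2.merge([processed, processed, processed]))  # back to three channels

vidcap.release()
video.release()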
Add isColor=False argument to the VideoWriter.
Adjusting VideoWriter this way will solve the issue.
Code:
video= cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (1600, 1200), isColor=False)
First some background
I am trying to write my own set of tools for video analysis, mainly for detecting render errors like flashing frames and possibly some other stuff in the future.
The (obvious) goal is to write a script, that is faster and more accurate than me watching the file in real time.
Using OpenCV, I have something that looks like this:
import cv2

vid = cv2.VideoCapture("Video/OpenCV_Testfile.mov", cv2.CAP_FFMPEG)
width = 1024
height = 576
length = int(vid.get(cv2.CAP_PROP_FRAME_COUNT))

for f in range(length):
    blue_values = []
    vid.set(cv2.CAP_PROP_POS_FRAMES, f)
    is_read, frame = vid.read()
    if is_read:
        for row in range(height):
            for col in range(width):
                blue_values.append(frame[row][col][0])
        print(blue_values)

vid.release()
This just prints out a list of all the blue values of every frame - just for simplicity (my actual script compares a few values across each frame and only saves the frame number when all are equal).
Although this works, it is not a very fast operation (nested loops, but most importantly, the read() method has to be called for every frame, which is rather slow).
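(For what it's worth, since read() returns the frame as a NumPy array, the two inner loops can be collapsed into a single slice; a small sketch of that, using the same test file, is below. The per-frame read() cost discussed above remains either way.)

import cv2

vid = cv2.VideoCapture("Video/OpenCV_Testfile.mov", cv2.CAP_FFMPEG)
is_read, frame = vid.read()
if is_read:
    # One slice instead of the nested row/col loops: channel 0 is blue in BGR order
    blue_values = frame[:, :, 0].ravel().tolist()
vid.release()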
I tried to use multiprocessing but basically ended up having the same crashes as described here:
how to get frames from video in parallel using cv2 & multiprocessing in python
I have a 20s long 1024x576@25fps test file which performs as follows:
mov, ProRes: 15s
mp4, h.264: 30s (too slow)
My machine is capable of playing back h.264 at 1920x1080@50fps with mplayer (which uses ffmpeg to decode), so I should be able to get more out of this. Which leads me to
my Question
How can I decode a video and simply dump all pixel values into a list for further (possibly multithreaded) operations? Speed is really all that matters. Note: I'm not fixated on OpenCV. Whatever works best.
Thanks!
I have been using MoviePy to combine several shorter video files into hour-long files. Some of the small files are "broken": they contain video but were not finalized correctly (i.e. they play in VLC, but there is no duration and you cannot skip around in the video).
I noticed this issue when trying to create a clip using the VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code, it seems to be an issue with checking the duration of the file in ffmpeg_reader.py, where it looks for the duration parameter in the video file. However, since the file never finished recording properly, this information is missing. I'm not very familiar with the way video files are structured, so I am unsure how to proceed from here.
You're correct. This issue arises commonly when the video duration info is missing from the file.
Here's a thread on the issue: GitHub moviepy issue 116
One user proposed the solution of using MP4Box to convert the video using this guide: RASPIVID tutorial
The final solution that worked for me involved specifying the path to ImageMagick's binary file as WDBell mentioned in this post.
I had the path correctly set in my environment variables, but it wasn't until I specifically defined it in config_defaults.py that it started working.
I solved it in a simpler way: with the help of VLC I converted the file to the format MPEG4 xxx TV/device, and you can now use your new file with Python without any problem.
xxx = 720p or
xxx = 1080p
Everything depends on your choice of output format.
I already answered this question in this GitHub issue: https://github.com/Zulko/moviepy/issues/116
This issue appears when the VideoFileClip(file) function from moviepy looks for the duration parameter in the video file and it is missing. To avoid this (in the case of corrupted files), you should make sure that the total-frames parameter is not null before calling clip = mp.VideoFileClip("/home/test/"+file).
So, I handled it in a simpler way using cv2.
The idea:
find out the total frames
if frames is null, then call the cv2 writer and generate a temporary copy of the video clip
mix the audio from the original video with the copy
replace the original video with the copy and delete the copy
then call the function clip = mp.VideoFileClip("/home/test/"+file)
Clarification: since the OpenCV VideoWriter does not encode audio, the new copy will not contain audio, so it is necessary to extract the audio from the original video and mix it with the copy before replacing the original video.
You must import cv2
import cv2
And then add something like this in your code before the evaluation:
cap = cv2.VideoCapture("/home/test/"+file)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
print(f'Checking Video {count} Frames {frames} fps: {fps}')
This will surely return 0 frames but should return at least framerate (fps).
Now we can set the evaluation to avoid the error and handle it making a temp video:
if frames == 0:
    print(f'No frames data in video {file}, trying to convert this video..')
    writer = cv2.VideoWriter("/home/test/fixVideo.avi", cv2.VideoWriter_fourcc(*'DIVX'),
                             int(cap.get(cv2.CAP_PROP_FPS)),
                             (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                              int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    while True:
        ret, frame = cap.read()
        if ret is True:
            writer.write(frame)
        else:
            cap.release()
            print("Stopping video writer")
            writer.release()
            writer = None
            break
Mix the audio from the original video with the copy. I have created a function for this:
from moviepy.editor import VideoFileClip, CompositeAudioClip

def mix_audio_to_video(pathVideoInput, pathVideoNonAudio, pathVideoOutput):
    videoclip = VideoFileClip(pathVideoInput)
    audioclip = videoclip.audio
    new_audioclip = CompositeAudioClip([audioclip])
    videoclipNew = VideoFileClip(pathVideoNonAudio)
    videoclipNew.audio = new_audioclip
    videoclipNew.write_videofile(pathVideoOutput)

mix_audio_to_video("/home/test/"+file, "/home/test/fixVideo.avi", "/home/test/fixVideo.mp4")
Replace the original video and delete the copy:
import os
os.replace("/home/test/fixVideo.mp4", "/home/test/"+file)
I had the same problem and I have found the solution.
I don't know why, but if we pass the path as a raw string, path = r'<path>', instead of ("F:\\path"), we get no error.
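For illustration only (the file name below is a made-up placeholder), a raw-string path looks like this; the r prefix stops Python from treating the backslashes as escape sequences:

from moviepy.editor import VideoFileClip

clip = VideoFileClip(r"F:\videos\my_clip.mp4")  # placeholder path, note the raw-string prefix
print(clip.duration)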
Just open
C:\Users\gladi\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py
delete the code, and replace it with the one provided by me on GitHub: https://github.com/dudegladiator/Edited-ffmpeg-for-moviepy
clip1 = VideoFileClip('path')
c = clip1.duration
print(c)
I'm trying to grab the images from a video file, but I can't manage to open it and I don't know why.
Below is a code sample that prints False where I'm expecting to get True. I don't get why I can't open this simple video file; any lead would be very much appreciated!
I tried with a relative path first then moved to an absolute path to see if anything changed and it's still the same...
video = cv2.VideoCapture()
path = "C:\\Users\\Leo\\Dropbox\\Projet VISORD\\TP3\\video.mpg"
print video.open(path)
The codecs that cv2 supports out of the box are limited. A few of the formats can be found at the link below. I haven't tried them all yet.
http://opencv.willowgarage.com/wiki/documentation/cpp/highgui/VideoWriter
I've had some luck with the mp42 codec. I had to convert my camera's mp4 (h264) format to an avi in the correct format.
I'm using the ffmpeg tool at the moment:
ffmpeg -i input.mp4 -codec:v msmpeg4v2 output.avi
This still leaves something to be desired, as it loses resolution, so I am working toward a better solution; I only just started at this myself.
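If you would rather drive that conversion from Python than from the shell, a minimal sketch using subprocess could look like this (file names are placeholders, and it assumes the ffmpeg binary is on your PATH):

import subprocess

# Re-encode input.mp4 to an msmpeg4v2 (mp42) AVI, same as the command above
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-codec:v", "msmpeg4v2", "output.avi"],
    check=True,
)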
The following code works for me:
import cv2
Load the video file:
capture = cv2.VideoCapture('videos/my_video.avi')
Frame is the image you want, flag is success/failure:
flag, frame = capture.read()
Loop through the video's frames:
while True:
    flag, frame = capture.read()
    if flag == 0:
        break
    cv2.imshow("Video", frame)
    key_pressed = cv2.waitKey(10)  # Escape to exit
    if key_pressed == 27:
        break
However, MPEG is a compressed format, which means that you need the correct codecs installed and might have to do some more work to handle the conversion. You can read about the supported video formats in the OpenCV VideoCodec documentation.
(However, if you just want a simple working example, try using a .AVI file and see if it works for you.)
Had a similar problem. Try changing
path = "C:\\Users\\Leo\\Dropbox\\Projet VISORD\\TP3\\video.mpg"
to
path = "C:/Users/Leo/Dropbox/Projet VISORD/TP3/video.mpg"
and see if it works.