I don't have much experience with ffmpeg, and the somewhat not-so-beginner-friendly documentation of the ffmpeg-python package has left me confused.
I'm trying to trim a video from second s to second e and then grab one frame for every second (the video is 30 fps), then save all of these frames (around e-s of them) in a video file.
So far what I have is this:
(
    in_file
    .trim(start=s, end=e)
    .filter("select", "not(mod(n\,30))")
    .setpts("PTS-STARTPTS")
    .output("test.mp4", loglevel="quiet")
    .overwrite_output()
    .run()
)
Without the filter the script runs, but I'm still not getting one frame per second. So I looked into the select filter, and I believe you give it the expression not(mod(n\,30)) so that it evaluates to false for every frame whose index n is not a multiple of 30, and the frames that survive are stacked one after another.
This, however, gives me an error:
Traceback (most recent call last):
File "/home/***/Videos/test.py", line 26, in <module>
.run()
File "/home/***/.local/lib/python3.10/site-packages/ffmpeg/_run.py", line 325, in run
raise Error('ffmpeg', out, err)
ffmpeg._run.Error: ffmpeg error (see stderr output for detail)
Any thoughts?
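In case it helps anyone sketching a fix: the following is an untested guess, not a confirmed solution. ffmpeg-python escapes filter arguments itself, so the manual backslash before the comma may be what trips ffmpeg up, and loglevel="quiet" suppresses the stderr message that would say so. The input filename and the values of s and e are placeholders.

import ffmpeg

s, e = 10, 20  # example trim bounds
in_file = ffmpeg.input("input.mp4")  # hypothetical input file
(
    in_file
    .trim(start=s, end=e)
    .filter("select", "not(mod(n,30))")  # no backslash: ffmpeg-python escapes the comma itself
    .setpts("PTS-STARTPTS")
    .output("test.mp4")  # loglevel="quiet" dropped so ffmpeg can report errors
    .overwrite_output()
    .run()
)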
Dash is a great Python package for interactive visualization. My feeling about this library is that it is great for structured data analysis, but not so great for unstructured data like images and videos.
As a workaround, when I need to show images in a Dash app, I use the matplotlib library. The following example should make my point clear:
@app.callback([Output("id_badframe_video", "children")],
              [Input("id_generate_badframe_video_button", "n_clicks")],
              [State("id_dataset_name_list", "value"),
               State('id_video_name_list', 'value'),
               State('id_video_index_list', 'value')])
def generate_bad_video(nclick, dataset_name, video_name, frame_index):
    if nclick:
        img = read_image(dataset_name, video_name, frame_index)
        import matplotlib.pyplot as plt
        plt.close()
        fig, ax = plt.subplots(1)
        ax.imshow(img, cmap='gray')
In the above code, I want to display an image based on the user's inputs: dataset name, video name, and video index. When I press the Show Image button, the image is shown in a separate window (not within http://127.0.0.1:8050/). At first I thought this was a great idea, but then I found that after showing several images the program crashes with the following error messages:
Error on request:
Traceback (most recent call last):
File "/home/lib/python2.7/site-packages/werkzeug/serving.py", line 303, in run_wsgi
execute(self.server.app)
File "/home/lib/python2.7/site-packages/werkzeug/serving.py", line 294, in execute
write(data)
File "/home/lib/python2.7/site-packages/werkzeug/serving.py", line 257, in write
self.send_header(key, value)
File "/home/lib/python2.7/BaseHTTPServer.py", line 412, in send_header
self.wfile.write("%s: %s\r\n" % (keyword, value))
IOError: [Errno 32] Broken pipe
Tcl_AsyncDelete: cannot find async handler
Any ideas on how to solve this problem? Thanks.
You can't call matplotlib's imshow inside a Dash callback. Even if it didn't throw that error, it still wouldn't get the image where you need it. Remember that your callback needs to return some data that (in the case of your code) will be injected into the DOM element with ID id_badframe_video. So you need to somehow get the image data and return it from the callback.
The matplotlib docs have a recipe for base64-encoding the image data and creating an img element with the image data in the src attribute. I would suggest adapting this recipe so that your callback returns something like the following (assuming you have the base64-encoded image in a variable img_data):
html.Img(src=f"data:image/png;base64,{img_data}")
(It's possible you don't actually need matplotlib for any of this)
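If you do keep matplotlib, a rough sketch of that recipe could look like the following. The helper name fig_to_img and the choice of the non-interactive Agg backend are my own assumptions, not something the matplotlib recipe prescribes verbatim:

import base64
import io

import matplotlib
matplotlib.use("Agg")  # non-GUI backend; GUI backends tend to crash in server threads
import matplotlib.pyplot as plt
from dash import html  # in older Dash versions: import dash_html_components as html

def fig_to_img(fig):
    # Serialize the figure to PNG in memory and wrap it in a data URI.
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    img_data = base64.b64encode(buf.getvalue()).decode("ascii")
    return html.Img(src=f"data:image/png;base64,{img_data}")

Returning the html.Img from the callback instead of opening a window should also avoid the Tcl_AsyncDelete crash, which comes from driving a GUI backend outside the main thread.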
Basically, I followed this tutorial to stream processed video (not just retrieving frames and broadcasting), and it works for me (I'm new to HTML and Flask). But I want to save some computation here:
I wonder if it's possible to avoid saving the OpenCV image to a JPEG file and then reading it back? Isn't that a waste of computation?
Even better would be if the Flask/HTML template could render the image directly from its raw three RGB data channels.
Any ideas? Thanks!
P.S. I actually tried the following code:
_, encoded_img = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), 95])
But it gives the following error:
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File "/home/trungnb/virtual_envs/tf_cpu/lib/python3.5/site-packages/werkzeug/wsgi.py", line 704, in next
return self._next()
File "/home/trungnb/virtual_envs/tf_cpu/lib/python3.5/site-packages/werkzeug/wrappers.py", line 81, in _iter_encoded
for item in iterable:
File "/home/trungnb/workspace/coding/Mask_RCNN/web.py", line 25, in gen
if frame == None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
You would want to compress to JPEG anyway, as sending the raw RGB data would be slower due to its size.
You could try using cv2.imencode to compress the image in memory. Then you may be able to send the image in a similar way to flask return image created from database.
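A minimal sketch of that, assuming the multipart-streaming layout from the tutorial (the route name and camera source are placeholders). Note that the read flag, not the frame array, is what gets tested, which also sidesteps the ValueError in the traceback above:

import cv2
from flask import Flask, Response

app = Flask(__name__)

def gen():
    cap = cv2.VideoCapture(0)  # placeholder video source
    while True:
        ok, frame = cap.read()
        if not ok:  # test the read flag rather than `frame == None`
            break
        ok, jpg = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), 95])
        if not ok:
            continue
        # Serve the in-memory JPEG bytes directly; no temporary file on disk.
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg.tobytes() + b'\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame')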
I have a video recording the front view of a car. The file is an .mp4, and I want to process the individual frames so I can extract more information (objects, lane lines, etc.). The problem is that when I want to create a video out of the processed frames, I get an error. Here is what I have done so far:
Opened the video with cv2.VideoCapture() - Works fine
Saved the single frames of the video with cv2.imwrite() - Works fine
Creating a video out of single frames with cv2.VideoWriter() - Works fine
Postprocessing the video with cv2.cvtColor(), cv2.GaussianBlur() and cv2.Canny() - Works fine
Creating a video out of the processed images - Does not work.
Here is the code I used:
def process_image(image):
    gray = functions.grayscale(image)
    blur_gray = functions.gaussian_blur(gray, 5)
    canny_blur = functions.canny(blur_gray, 100, 200)
    return canny_blur

process_on = 0
count = 0

video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (1600, 1200))
vidcap = cv2.VideoCapture('input.mp4')
success, image = vidcap.read()
success = True
while success:
    processed = process_image(image)
    video.write(processed)
This is the error I get:
OpenCV Error: Assertion failed (img.cols == width && img.rows == height*3) in cv::mjpeg::MotionJpegWriter::write, file D:\Build\OpenCV\opencv-3.2.0\modules\videoio\src\cap_mjpeg_encoder.cpp, line 834
Traceback (most recent call last):
File "W:/Roborace/03_Information/10_Data/01_Montreal/camera/right_front_camera/01_Processed/Roborace_CAMERA_create_video.py", line 30, in
video.write(processed)
cv2.error: D:\Build\OpenCV\opencv-3.2.0\modules\videoio\src\cap_mjpeg_encoder.cpp:834: error: (-215) img.cols == width && img.rows == height*3 in function cv::mjpeg::MotionJpegWriter::write
My guess is: the normal images have three channels because of the RGB color space, while the processed images have only one. How can I adjust for this in the cv2.VideoWriter function?
Thanks for your help.
The VideoWriter() class only writes color images, not grayscale images, unless you are on Windows, which it looks like you might be, judging from the paths in your output. On Windows, you can pass the optional argument isColor=0 or isColor=False to the VideoWriter() constructor to write single-channel images. Otherwise, the simple solution is to stack your grayscale frames into a three-channel image (you can use cv2.merge([gray, gray, gray])) and write that.
From the VideoWriter() docs:
Parameters:
isColor – If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).
By default isColor=True, and the flag cannot be changed on a non-Windows system. So simply doing:
video.write(cv2.merge([processed, processed, processed]))
should patch you up. Even though the Windows variant allows writing grayscale frames, it may be better to use this second method for platform independence.
Also, as Zindarod mentions in the comments below, there are a number of other possible issues with your code. I'm assuming you've pasted modified code that you weren't actually running, or that you would have fixed otherwise; if that's the case, please only post minimal, complete, and verifiable code examples.
First and foremost, your loop has no end condition, so it never terminates. Secondly, you're hard-coding the frame size, but VideoWriter() does not resize images to that size for you: you must give it the exact size of the frames you will pass in. Either resize each frame before writing to be sure, or create your VideoWriter using the frame size reported by your VideoCapture() device (using the .get() methods for the frame-size properties).
Additionally, you're reading only the first frame, outside the loop. Maybe that was intentional, but if you want to process each frame of the video, you of course need to read them in a loop, process them, and then write them.
Lastly, you should have better error checking in your code. See, for example, the OpenCV "Getting Started with Video" Python tutorial. The "Saving a Video" section has the proper checks and balances: run the loop while the video capture device is opened, and process and write the frame only if it was read properly; once the video is out of frames, the .read() method will return False, which lets you break out of the loop and then close the capture and writer devices. Note the ordering here: the VideoCapture() device will still report as opened even after you've read the last frame, so you need to exit the loop by checking the result of each read.
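Putting those pieces together, a sketch of the corrected loop might look like the following (it reuses process_image from the question and takes the writer size from the capture device instead of hard-coding it):

import cv2

vidcap = cv2.VideoCapture('input.mp4')
width = int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT))
video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (width, height))

while vidcap.isOpened():
    success, image = vidcap.read()
    if not success:  # out of frames (or a read error): leave the loop
        break
    processed = process_image(image)
    # Stack the single grayscale channel into three channels for the writer.
    video.write(cv2.merge([processed, processed, processed]))

vidcap.release()
video.release()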
Add the isColor=False argument to the VideoWriter.
Adjusting the VideoWriter this way will solve the issue.
Code:
video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (1600, 1200), isColor=False)
I have been using MoviePy to combine several shorter video files into hour-long files. Some small files are "broken": they contain video but were not finalized correctly (i.e. they play in VLC, but there is no duration and you cannot skip around in the video).
I noticed this issue when trying to create a clip with the VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code, it seems to fail when checking the duration of the file in ffmpeg_reader.py, where it looks for the duration parameter in the video file. Since the file never finished recording properly, this information is missing. I'm not very familiar with how video files are structured, so I am unsure how to proceed from here.
You're correct. This issue commonly arises when the video duration info is missing from the file.
Here's a thread on the issue: GitHub moviepy issue 116
One user proposed the solution of using MP4Box to convert the video using this guide: RASPIVID tutorial
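For reference, a rough sketch of driving that conversion from Python; the filenames are placeholders, and this assumes MP4Box is installed and the broken file contains a raw H.264 stream, as in the guide:

import subprocess

# Re-wrap the stream in a fresh MP4 container so duration metadata gets written.
subprocess.run(["MP4Box", "-add", "broken_clip.h264", "fixed_clip.mp4"], check=True)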
The final solution that worked for me involved specifying the path to ImageMagick's binary file, as WDBell mentioned in this post.
I had the path correctly set in my environment variables, but it wasn't until I specifically defined it in config_defaults.py that it started working.
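The edit amounts to something like the following in moviepy's config_defaults.py (the binary path below is a made-up example; point it at your own ImageMagick executable):

# moviepy/config_defaults.py
IMAGEMAGICK_BINARY = r"C:\Program Files\ImageMagick-7.0.8-Q16\magick.exe"  # example path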
I solved it in a simpler way: with the help of VLC I converted the file to the format MPEG4 xxx TV/device, and you can now use the new file with Python without any problem.
xxx = 720p or
xxx = 1080p
It all depends on the output format you choose.
I already answered this question on GitHub: https://github.com/Zulko/moviepy/issues/116
This issue appears when the VideoFileClip(file) function from moviepy looks for the duration parameter in the video file and it is missing. To avoid this (in the case of corrupted files), you should make sure that the total-frames parameter is not null before calling clip = mp.VideoFileClip("/home/test/"+file).
So I handled it in a simpler way using cv2.
The idea:
find out the total number of frames
if the frame count is null, call cv2's VideoWriter and generate a temporary copy of the video clip
mix the audio from the original video into the copy
replace the original video with the copy and delete the copy
then call clip = mp.VideoFileClip("/home/test/"+file)
Clarification: since the OpenCV VideoWriter does not encode audio, the new copy will not contain audio, so it is necessary to extract the audio from the original video and mix it into the copy before the copy replaces the original video.
You must import cv2
import cv2
And then add something like this in your code before the evaluation:
cap = cv2.VideoCapture("/home/test/"+file)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
print(f'Checking Video {count} Frames {frames} fps: {fps}')
For a corrupted file this will return 0 frames, but it should at least return the frame rate (fps).
Now we can add the check that avoids the error and handles it by making a temp video:
if frames == 0:
    print(f'No frames data in video {file}, trying to convert this video..')
    writer = cv2.VideoWriter("/home/test/fixVideo.avi", cv2.VideoWriter_fourcc(*'DIVX'),
                             int(cap.get(cv2.CAP_PROP_FPS)),
                             (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                              int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    while True:
        ret, frame = cap.read()
        if ret is True:
            writer.write(frame)
        else:
            cap.release()
            print("Stopping video writer")
            writer.release()
            writer = None
            break
Mix the audio from the original video with the copy. I have created a function for this:
from moviepy.editor import VideoFileClip, CompositeAudioClip

def mix_audio_to_video(pathVideoInput, pathVideoNonAudio, pathVideoOutput):
    videoclip = VideoFileClip(pathVideoInput)
    audioclip = videoclip.audio
    new_audioclip = CompositeAudioClip([audioclip])
    videoclipNew = VideoFileClip(pathVideoNonAudio)
    videoclipNew.audio = new_audioclip
    videoclipNew.write_videofile(pathVideoOutput)

mix_audio_to_video("/home/test/"+file, "/home/test/fixVideo.avi", "/home/test/fixVideo.mp4")
Replace the original video and delete the copy (this uses os.replace, so import os first):
os.replace("/home/test/fixVideo.mp4", "/home/test/"+file)
I had the same problem and found a solution.
I don't know why, but if you enter the path as a raw string, path = r'<path>', instead of "F:\\path", you get no error.
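For illustration, with a made-up file path:

import moviepy.editor as mp

# A raw string leaves the backslashes untouched, so the Windows path survives intact.
clip = mp.VideoFileClip(r"F:\videos\my_clip.mp4")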
Just open
C:\Users\gladi\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py
delete the code in it, and add the version I provide on GitHub: https://github.com/dudegladiator/Edited-ffmpeg-for-moviepy
clip1 = VideoFileClip('path')
c = clip1.duration
print(c)
My file > https://drive.google.com/open?id=0BzmZiSDoM7l3U2poYWNTbUhBWVU
The problem is that I have this data from a piece of software known as Geomodeller and want to load it into another piece of software known as REDBACK.
In Geomodeller, I made a 3D cube and loaded it with data (this data has layers like a cake), but somehow in REDBACK the data shown is only the intersection of the layers in 2D.
I've read the post Python base64 data decode
and implemented the code by https://stackoverflow.com/users/194586/nick-t
and I got:
import base64
import struct

dtgt = base64.b64decode(target)
format = ">ff"
for i in range(100):
    print struct.unpack_from(format, dtgt, 8*i)
(2.350988701644575e-38, 1.1754943508222875e-38)
(1.7826336565709476e+29, 6.64613997892458e+35)
Traceback (most recent call last):
File "", line 2, in
error: unpack_from requires a buffer of at least 8 bytes
Can I please have some help with this problem?
My supervisor thinks the problem lies in the appended data, so he wants to extract everything from the appended data first and then analyze the problem further.
The AppendedData in your file does not appear to be valid Base64 data - there should be no equals signs except possibly at the very end. If it's really supposed to be composed of multiple individual chunks of encoded data like that, you would have to keep calling the decoder on successive chunks until the entire section of data had been processed. (You're only getting two data points because your decoder gave up at the first "==" in the file.)
Based on the compressor="vtkZLibDataCompressor" in the file header, the data may be in compressed format (this may explain why the two data points you managed to extract had such absurdly large/small values). Hopefully, Python's zlib module is compatible with this compression.
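A rough sketch of that chunk-by-chunk idea, with heavy caveats: the regex split is a guess that each chunk ends in '=' padding (unpadded chunks would need manual splitting), and treating undecompressible chunks as headers is an assumption, not something I can confirm from the VTK spec:

import base64
import re
import zlib

def decode_chunks(appended_text):
    # Split on '=' padding, which in this file seems to end each base64 chunk.
    for chunk in re.findall(r'[A-Za-z0-9+/]+={1,2}', appended_text):
        raw = base64.b64decode(chunk)
        try:
            yield zlib.decompress(raw)  # likely a vtkZLibDataCompressor block
        except zlib.error:
            yield raw  # likely a block-size header, not compressed data

From there, struct.unpack_from can be pointed at the inflated bytes instead of the raw base64 decode.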