I am trying to create a movie with the animation.FuncAnimation function in matplotlib. The movie looks fine interactively, but when I save it with the command
anim2.save('somefilm.mp4',codec='mpeg4', fps=15)
it starts out fine, but then becomes blurry (in both QuickTime and VLC, so I figure it's the movie, not the player).
I've played around with blitting, since I thought the canvas perhaps wasn't being redrawn, but to no avail. Increasing the bitrate doesn't help either.
Setting dpi=500 does improve the quality of the movie somewhat, though it then stutters repeatedly, which makes it difficult to watch.
I was just wondering whether this is the best one can do, or am I missing something?
To dig into this problem, it is important to understand that video files are usually stored with highly lossy compression, whereas the interactive display is uncompressed. The usual video codecs often do extremely badly on graphs, and it is largely a matter of compression parameters.
There are four things you can do:
set the image resolution (via dpi), though this may actually make the output look worse, since the problem is usually not a lack of pixels
set the bitrate (via bitrate); the higher the bitrate, the better your movie will look - one possibility is to set bitrate=-1 and let matplotlib choose a suitable bitrate
change the codec (e.g., to codec="libx264")
give extra arguments to the codec (e.g., extra_args=['-pix_fmt', 'yuv420p'])
Unfortunately, the right settings really depend on the video codec, which is implemented by a third-party program (usually ffmpeg), on the intended use of your video, and on your platform. I would start by adding the kwarg bitrate=-1 to see if it improves things.
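For example, with a FuncAnimation saved the same way as in the question (the animation itself is just a stand-in; only the save() arguments matter here):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
line, = ax.plot(x, np.sin(x))

def update(i):
    # Shift the sine wave a little on every frame.
    line.set_ydata(np.sin(x + i / 10.0))
    return line,

anim2 = animation.FuncAnimation(fig, update, frames=200)
# Let the encoder pick the bitrate, switch to H.264, and request a
# pixel format that QuickTime and most other players can handle.
anim2.save('somefilm.mp4', fps=15, bitrate=-1,
           codec='libx264', extra_args=['-pix_fmt', 'yuv420p'])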
If you cannot make it work, please add a full (but as simple as possible) example of how to create a bad file. Then it is easier to debug!
I was having the same problem while animating ~3500 frames of subsurface water-current vectors over a basemap, and I finally fixed it. I had been trying to set the bitrate in the anim.save call but was still getting the same blurriness later in the animation. What I had to do was set the bitrate when defining the writer:
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# anim is the animation created earlier; the key change is setting
# the bitrate on the writer itself.
plt.rcParams['animation.ffmpeg_path'] = 'C:/ffmpeg/bin/ffmpeg.exe'
writer = animation.FFMpegWriter(bitrate=500)
anim.save('T:/baysestuaries/USERS/TSansom/Tiltmeters/testdeployment/tilt2.mp4',
          writer=writer, fps=8)
If I set the bitrate to anything less than 500, the animation still got blurry. bitrate=-1 and codec='libx264' did nothing for me. Hope this helps!
I'm a self-taught Python programmer working on a hobby project, but I'm having some difficulty and would like to address what I see as a potential XY problem.
My app takes an audio file as input (converting it to wav) and produces visual representations of the audio (90x90 RGB frames) as numpy arrays. I used to save these frames to a video file using OpenCV, then use ffmpeg to scale the video and add the (original, non-wav) audio over the top, but this meant waiting until the app had finished before playing the file. I would like to play the audio and display the frames as they are generated, in sync. My generation code takes at most 8ms of a 16ms frame (60fps), so I have a reasonable number of cycles to play with.
From my research, I have found that SDL is the most appropriate tool for displaying frames at high speeds, and I have managed to make a simple system that displays frames 'in time' by brute-force pixel editing. I have also discovered that SDL can play audio, and it even seems that I could synchronize it with the video as I would like, via the callback function. However, being a decidedly non-C programmer, I am at a loss as to how best to display frames, since directly assigning pixels can hardly be the safest or fastest way, and I would like to scale the frames as they are displayed. I am also at a loss as to how best to convert numpy arrays to textures efficiently, and how best to control the synchronicity of my generation code, the audio, and the video frames.
I'm not specifically looking for an answer to any of those problems (though advice would be appreciated); I'm just making sure that this is a reasonable way forward. Is SDL/pysdl2 coupled with numpy appropriate in this scenario, or is this asking too much of Python overall?
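For concreteness, here is a minimal sketch of the streaming-texture route I have in mind (window title, sizes, and the random noise frames are all placeholder assumptions):

import ctypes
import numpy as np
import sdl2
import sdl2.ext

FRAME_W, FRAME_H = 90, 90    # source frame size
WIN_W, WIN_H = 540, 540      # hypothetical scaled display size

sdl2.ext.init()
window = sdl2.ext.Window("frames", size=(WIN_W, WIN_H))
window.show()
renderer = sdl2.SDL_CreateRenderer(window.window, -1,
                                   sdl2.SDL_RENDERER_ACCELERATED)
# A streaming texture at the source size; SDL_RenderCopy scales it to
# the window on the GPU, so no manual pixel editing is needed.
texture = sdl2.SDL_CreateTexture(renderer, sdl2.SDL_PIXELFORMAT_RGB24,
                                 sdl2.SDL_TEXTUREACCESS_STREAMING,
                                 FRAME_W, FRAME_H)

def show_frame(frame):
    # frame: contiguous (H, W, 3) uint8 numpy array.
    sdl2.SDL_UpdateTexture(texture, None,
                           frame.ctypes.data_as(ctypes.c_void_p),
                           FRAME_W * 3)          # pitch = bytes per row
    sdl2.SDL_RenderCopy(renderer, texture, None, None)
    sdl2.SDL_RenderPresent(renderer)

for _ in range(300):                             # ~5 s of noise at 60 fps
    sdl2.SDL_PumpEvents()                        # keep the window responsive
    show_frame(np.random.randint(0, 256, (FRAME_H, FRAME_W, 3),
                                 dtype=np.uint8))
    sdl2.SDL_Delay(16)

sdl2.ext.quit()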
I want to play a video using python-vlc. I have gotten everything to work, and the video plays without any technical issues. There is one aesthetic issue, though: I only want to play part of the video. That is, I want to crop out a fair bit on the bottom and a good bit on the right. I know I can do this with a call to MediaPlayer.video_set_crop_geometry(), and I've done so semi-successfully. However, the window that opens is sized for the entire video, with the part I want centered in the middle and black bars around it. (If I call MediaPlayer.video_set_scale(), the cropped portion stays the same size as it would be if I didn't crop. If I don't call video_set_scale(), the cropped portion is stretched, maintaining aspect ratio, until it reaches the edge of the window. Either way, there are black bars.)
Can I get the window to adjust to this new, smaller video? Preferably automatically, but if I have to pass in the size I want, that's fine too.
I have tried shuffling the order of the different calls, to no avail. Clearly python-vlc has the capacity somewhere to adjust the window it plays in, since it can open a window of the correct size for the uncropped video and adjusts automatically after a call to video_set_scale(), but only to fit the original video, not the cropped one.
You should probably share more details, such as your full code and platform used.
That being said, libvlc doesn't offer an API to resize the native window it draws on, but you can easily do it yourself (with Win32 APIs for the HWND on Windows, for example).
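One common way to get full control over the window size is to create the window yourself and hand it to libvlc. A minimal sketch with tkinter (the file name, crop geometry string, and cropped dimensions are placeholder assumptions):

import tkinter as tk
import vlc

CROP_W, CROP_H = 640, 360   # hypothetical size of the cropped region

root = tk.Tk()
root.geometry("%dx%d" % (CROP_W, CROP_H))
frame = tk.Frame(root, width=CROP_W, height=CROP_H, bg="black")
frame.pack(fill="both", expand=True)

player = vlc.MediaPlayer("myvideo.mp4")
player.video_set_crop_geometry("640x360+0+0")  # whatever geometry you already use
player.set_hwnd(frame.winfo_id())              # Windows; use set_xwindow() on X11
player.play()
root.mainloop()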
Using moviepy, I am trying to trim a section of a webm file like this:
my_file.write_videofile(name, codec = 'libvpx')
Of course, I have already defined the beginning and end of the clip, etc. The code returns the segment I want, but I have noticed a decrease in the quality of the file.
I am not resizing or constraining the file size anywhere, so I don't understand why the clip has inferior quality compared to the original.
There are some parameters I could play with, which I suspect are set as defaults in moviepy to speed up video manipulation, but the moviepy documentation does not say anything about them:
ffmpeg_params :
Any additional ffmpeg parameters you would like to pass, as a list of
terms, like ['-option1', 'value1', '-option2', 'value2']
Is anybody out there familiar with the right parameters to keep the quality of the original file? Alternatively, is anybody familiar with another library for trimming webm files?
Below are two pics showing the difference in quality. The first is a frame of the trimmed file; the second is approximately the same frame from the original file.
Thank you
The parameter you are looking for is bitrate (for some reason I omitted it from the docs; it will be fixed in the next version). If you don't provide it, ffmpeg uses a default value which is indeed very low.
myclip.write_videofile("test_1.webm", bitrate="50k") # low quality.
myclip.write_videofile("test_2.webm", bitrate="50000k") # high quality.
You can also tune the audio bitrate with audio_bitrate='50k', by the way. The bitrate gives ffmpeg an upper bound on what the bitrate can be; most of the time when you provide "50000k" the actual bitrate will come out below that. 50000k produces nice-quality videos, but keep in mind that webm is still a lossy format.
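Put together with the trimming use case from the question, a minimal sketch (file names and clip times are placeholders):

from moviepy.editor import VideoFileClip

# Trim 10s-20s and write it out with explicit, generous video and
# audio bitrates so ffmpeg doesn't fall back to its very low defaults.
clip = VideoFileClip("original.webm").subclip(10, 20)
clip.write_videofile("trimmed.webm", codec="libvpx",
                     bitrate="50000k", audio_bitrate="128k")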
I'm using matplotlib to generate some figures via savefig. These figures are black and white and need to be saved at a very high resolution (1000 dpi) in TIFF format. It would therefore be beneficial to save them with a reduced bit depth so as to use less memory.
To that end, my question: how does one specify the bit depth when saving figures with matplotlib?
Thanks!
So far I get the impression that matplotlib doesn't support a bit-depth option. I'm thus using ImageMagick to convert the image posthoc:
convert -monochrome +dither A.tiff B.tiff
Several things I'll mention in case someone else is trying to do something similar:
When I first changed the bit depth by running convert -monochrome A.tiff B.tiff, the fonts looked unacceptably ugly (even at 1000 DPI!). This comes from antialiasing, which matplotlib performs by default and which I couldn't find any option to turn off: when ImageMagick reduces the antialiased edges to one bit, its default dithering turns them into speckle. Its negative effects can be largely circumvented by disabling dithering, which is what the +dither flag in the command above does. Therefore, even if matplotlib had an option to set the bit depth of the output image, it wouldn't be useful for text unless the antialiasing could also be disabled.
In short, I would suggest that anyone in a similar situation do their monochrome conversion posthoc, as I have done.
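For completeness, the whole workflow can be scripted from Python (this assumes ImageMagick's convert is on your PATH; the figure is a stand-in):

import subprocess
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 0], color="black")
fig.savefig("A.tiff", dpi=1000)   # full bit depth, high resolution

# Reduce to 1-bit monochrome posthoc; +dither disables ImageMagick's
# default dithering so the thresholded text stays clean.
subprocess.check_call(["convert", "-monochrome", "+dither",
                       "A.tiff", "B.tiff"])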
I'm using Python 2.7, PyGTK 2.24, and PyGST (Gstreamer).
To ensure smooth playback from one clip to another (without a blink), I combined all the clips I needed into one larger video. This lets me seek to the exact place I need in code. One of the clips is like a "fill-in", which should loop whenever one of the other clips is not playing.
However, to make my code easier and more streamlined, I want to use segments to define the various clips within the larger video. Then, at the end of each segment (I know there is a segment end event), I seek to the fill-in clip. When I need another clip, I just seek to that segment.
My question is, how exactly do I create these segments? I'm guessing I would use event_new_new_segment(), but I am not sure. Can I create multiple clips to seek between with this function? Is there another I should use? Are there any gotchas to this method of seeking within my video that I should be aware of?
Second, how do I seek to a segment?
Thank you!
It looks like only GstElements can generate NEWSEGMENT events; you can't simply attach one to an existing element. The closest thing you could do, if you were not using Python, would be to create a single-shot or periodic GstClockID and use gst_clock_id_wait_async to wait until the clock hits the segment boundary. The problem is that GstClockID is not wrapped in PyGst.
I think I'm actually working on a similar problem. The solution I'm using now is gluing video streams together in real time with gnonlin. The good side: it seems to work, though I haven't had time to test it thoroughly yet. The bad side: it is poorly documented and buggy. These sources from the flumotion project (and the comments inside!) were very, very helpful to me in understanding how to make the whole thing work.
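As a reference point for the segment approach itself: in the 0.10 bindings, a segment seek (gst.SEEK_FLAG_SEGMENT) plays just the requested range and posts MESSAGE_SEGMENT_DONE when it ends, which is one way to implement the looping fill-in clip. A minimal sketch, with the file URI and clip boundaries as placeholder assumptions:

import gobject
import pygst
pygst.require("0.10")
import gst

FILL_START, FILL_END = 0 * gst.SECOND, 5 * gst.SECOND  # hypothetical bounds

def seek_segment(player, start_ns, stop_ns):
    # SEEK_FLAG_SEGMENT makes the pipeline post MESSAGE_SEGMENT_DONE
    # when stop_ns is reached, instead of stopping at EOS.
    player.seek(1.0, gst.FORMAT_TIME,
                gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_SEGMENT,
                gst.SEEK_TYPE_SET, start_ns,
                gst.SEEK_TYPE_SET, stop_ns)

def on_message(bus, message, player):
    if message.type == gst.MESSAGE_SEGMENT_DONE:
        # A clip just finished: jump back to the fill-in clip.
        seek_segment(player, FILL_START, FILL_END)

player = gst.element_factory_make("playbin2", "player")
player.set_property("uri", "file:///path/to/combined.mp4")
bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message, player)

player.set_state(gst.STATE_PAUSED)
player.get_state(gst.CLOCK_TIME_NONE)   # wait for preroll before seeking
seek_segment(player, FILL_START, FILL_END)
player.set_state(gst.STATE_PLAYING)
gobject.MainLoop().run()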