How to display a group of videos in a single frame - python

I'm fairly new to Python, so please bear with me. I am having trouble displaying videos inside a Tkinter frame. What I want to do is play a group of videos inside a frame in Tk.
For example: I have three videos named a.mp4, b.mp4, and c.mp4.
I want them to play inside one frame without it reloading (closing, then playing the next video).
I have tried OpenCV, but what it does is play a.mp4, close, then play b.mp4.
Any help would be much appreciated; I have been stuck here for days.

You can concatenate the videos horizontally, vertically, or in a grid into a single video file, either with ffmpeg or manually in Python.
With ffmpeg, concatenate the videos outside Python into one file and show that as a single video:
Vertically stack several videos using ffmpeg?
Or you can build the single video in Python itself: use scikit-video or OpenCV to load the videos into separate arrays, concatenate them horizontally, vertically, or in a grid, and save the result as one video.
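The Python route can be sketched roughly like this with OpenCV and NumPy: read one frame from each video per iteration, resize them to a common size, tile them into a grid, and write the grid out as one video. The file names a.mp4/b.mp4/c.mp4 come from the question; the panel size, the mp4v codec, and the `make_grid` helper are my assumptions, not a tested recipe.

```python
import numpy as np

def make_grid(frames, cols):
    """Tile equally-sized HxWx3 frames into a grid, padding with black."""
    h, w = frames[0].shape[:2]
    rows = -(-len(frames) // cols)  # ceiling division
    blank = np.zeros_like(frames[0])
    padded = list(frames) + [blank] * (rows * cols - len(frames))
    row_imgs = [np.hstack(padded[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(row_imgs)

def combine_videos(paths, out_path, panel=(320, 240), cols=2):
    import cv2  # imported here so make_grid needs only NumPy
    caps = [cv2.VideoCapture(p) for p in paths]
    fps = caps[0].get(cv2.CAP_PROP_FPS) or 25
    writer = None
    while True:
        frames = []
        for cap in caps:
            ok, frame = cap.read()
            if not ok:  # stop when the shortest video ends
                frames = None
                break
            frames.append(cv2.resize(frame, panel))
        if frames is None:
            break
        grid = make_grid(frames, cols)
        if writer is None:
            writer = cv2.VideoWriter(out_path,
                                     cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (grid.shape[1], grid.shape[0]))
        writer.write(grid)
    for cap in caps:
        cap.release()
    if writer is not None:
        writer.release()

# combine_videos(["a.mp4", "b.mp4", "c.mp4"], "combined.mp4")
```

The combined file can then be shown in the Tkinter frame as one video, which avoids the close-and-reopen behaviour described in the question.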

Related

Can I display specific frames of a video using a list of timestamps in opencv python?

I have a video and a list of timestamps. I'd like to display a video with only the frames that appear at these timestamps (or as close to them as possible).
I am doing this for debugging purposes, so it need not be an elegant solution, but having the outcome displayed in a video that I can control would make debugging much easier.
I am using OpenCV (Python).
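One possible sketch, offered as an assumption about the setup rather than a tested answer: map each timestamp to its nearest frame index, seek with `CAP_PROP_POS_FRAMES` (which can be approximate on some codecs), and step through the frames one keypress at a time.

```python
def nearest_frame_indices(timestamps_s, fps):
    """Map timestamps (in seconds) to their nearest frame numbers."""
    return [round(t * fps) for t in timestamps_s]

def show_frames_at(video_path, timestamps_s):
    import cv2  # imported here so the index helper stays dependency-free
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    for idx in nearest_frame_indices(timestamps_s, fps):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the frame
        ok, frame = cap.read()
        if not ok:
            continue
        cv2.imshow("debug", frame)
        if cv2.waitKey(0) & 0xFF == ord("q"):  # any key advances, q quits
            break
    cap.release()
    cv2.destroyAllWindows()
```

Blocking on `waitKey(0)` gives the "video I can control" behaviour: each keypress advances to the next listed timestamp.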

How can I dynamically stack gifs horizontally to create one gif in python?

What I need to do is combine two or more GIF files into one horizontally using Python. I thought about using PIL or ffmpeg, but I could never really get anywhere. I found partial solutions on the internet, but they aren't exactly what I'm looking for. I'm looking for a way to add a variable number of GIFs to one horizontal-strip GIF. The sizes of the individual GIFs differ, and the number of GIFs being stacked horizontally changes depending on the user's input. Essentially, a dynamic hstack filter for ffmpeg. Help would be appreciated!
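A possible sketch with Pillow rather than ffmpeg (the function names, the shared-height scaling, and the stop-at-the-shortest-GIF policy are all my assumptions): resize every input to a common height, paste the frames side by side, and save the strip as one animated GIF.

```python
def scaled_widths(sizes, target_h):
    """Widths after scaling each (w, h) size to the shared target height."""
    return [max(1, round(w * target_h / h)) for w, h in sizes]

def hstack_gifs(paths, out_path):
    from PIL import Image  # imported here so scaled_widths stays dependency-free
    gifs = [Image.open(p) for p in paths]
    target_h = min(g.height for g in gifs)
    widths = scaled_widths([g.size for g in gifs], target_h)
    n_frames = min(g.n_frames for g in gifs)  # stop at the shortest GIF
    frames = []
    for i in range(n_frames):
        strip = Image.new("RGB", (sum(widths), target_h))
        x = 0
        for g, w in zip(gifs, widths):
            g.seek(i)  # jump to frame i of this GIF
            strip.paste(g.convert("RGB").resize((w, target_h)), (x, 0))
            x += w
        frames.append(strip)
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=gifs[0].info.get("duration", 100), loop=0)
```

Because the widths are computed from however many paths are passed in, this handles the "uncertain amount of GIFs" requirement; differing frame counts and per-frame durations would need extra handling.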

ffmpeg drawtext with multiple colors

I'm using ffmpeg to work on some videos. I'm resizing each video down to a smaller size, padding it, and overlaying a static image on the padding. Right now the static image has one textbox that gives some information about the video. That was simple enough. I want to add more information to the static image, though: information about the order of the videos, with the current video in a different color. For example, I want text that looks like the following:
Video1→Video2→Video3→VIDEO4→Video5→Video6
Where VIDEO4 is a different color, so we can easily see where we are in the order. It doesn't seem to be that easily done with ffmpeg:
I don't believe I could do multiple colors using one "drawtext" filter.
If I have multiple textboxes (one for the previous videos in the list, one for the current video, and one for the videos to come), they don't line up very well horizontally, as the actual names of the videos have varying lengths. It's a bit of a nightmare.
I can't use ASS scripts/subtitles because the text is being put on a static image, not a video.
Is there any other solution to this other than just attempting to guess at the X value of these drawtext filters? Could I actually use some sort of subtitle script on an image? Am I able to reference other textboxes? If so, I could at least calculate the width of the textbox and position the next one accordingly. Everything I've seen so far has had some sort of timestamp beginning and end, and I just want it there for the whole video.
I'm using python and the ffmpeg-python library to interface with ffmpeg. This allows me to use a configuration file so I can dynamically add/remove videos to be created.
Just for more information, here's a snippet of how I'm making the videos:
overlay_input = ffmpeg.input(overlay_image)\
    .drawtext(text="blahblahblah",
              ... text options...)

video_input = ffmpeg.input(video, re=None)\
    .video\
    .filter("scale", ...scaling options...)\
    .filter("pad", ...padding options...)\
    .overlay(overlay_input)
Any information would be very appreciated!

Python Play two videos in one window

Is it possible to play two videos in a single window and then generate a new video using Python? I.e., I have video_1.avi and video_2.avi and want to create video_3.avi, which includes both 1 and 2. As this picture shows:
Any ideas and suggestions will be greatly appreciated.
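One way to sketch this with OpenCV (the function names, the XVID codec, and the fixed per-panel size are assumptions): read both videos frame by frame, resize each frame to a common panel size, join them left|right, and write the joined frames to video_3.avi.

```python
import numpy as np

def hstack_frames(left, right):
    """Join two equally-sized frames left|right into one wide frame."""
    return np.hstack([left, right])

def side_by_side(left_path, right_path, out_path, panel=(320, 240)):
    import cv2  # imported here so hstack_frames needs only NumPy
    a = cv2.VideoCapture(left_path)
    b = cv2.VideoCapture(right_path)
    fps = a.get(cv2.CAP_PROP_FPS) or 25
    w, h = panel
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"),
                             fps, (2 * w, h))
    while True:
        ok1, f1 = a.read()
        ok2, f2 = b.read()
        if not (ok1 and ok2):  # stop when the shorter video ends
            break
        writer.write(hstack_frames(cv2.resize(f1, panel),
                                   cv2.resize(f2, panel)))
    for v in (a, b, writer):
        v.release()

# side_by_side("video_1.avi", "video_2.avi", "video_3.avi")
```

The writer's output size must match the joined frame exactly (here 2×320 by 240), otherwise OpenCV silently drops the frames.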

Sampling video and making image cutoffs in python

I've got a video stream (for now I just use a video file). I need to get one frame per second (or per several seconds), and I need to cut out part of each of those pictures based on 8 coordinates (left-upper x/y, right-upper x/y, right-lower x/y, and left-lower x/y).
I think I'd be able to do the cutting in Java, but I would rather do it in Python, as the entire application is written in Python/Django.
Is it possible to do both of those things directly in Python?
Could you point me to some documentation or whatever?
You can start with some basic video handling in Python using OpenCV:
Python : Reading Video File and Saving Video File using OpenCV
It covers the basics, like reading from a file and from a camera, which gives an initial idea of how to process frames.
Once you have each frame in an OpenCV mat, you can form a bounding-box rect to extract the region of interest (ROI) from it.
Close to this question:
Cropping Live Video Feed
Cropping a single frame can be done as in:
Cropping a Single Image in OpenCV Python
This can be repeated for every frame, and you can even write the result to a video file, taking reference from the first link.
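Putting those pieces together as a rough sketch (the function names, the one-second default, and the axis-aligned crop are my assumptions): keep one frame per second and slice out the ROI given by the bounding box of the four corners.

```python
def bounding_box(points):
    """Axis-aligned (x, y, w, h) box around a set of (x, y) corners."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

def sample_and_crop(video_path, corners, every_s=1.0):
    import cv2  # imported here so bounding_box stays dependency-free
    x, y, w, h = bounding_box(corners)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    step = max(1, round(fps * every_s))  # frames between kept samples
    crops, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            crops.append(frame[y:y + h, x:x + w])  # ROI is a NumPy slice
        idx += 1
    cap.release()
    return crops
```

If the four corners form a non-rectangular quadrilateral, the bounding box keeps some surrounding pixels; a perspective warp (`cv2.getPerspectiveTransform`) would be the stricter alternative.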
