Play audio and video with gnonlin - python

I've been messing around with GStreamer and gnonlin lately, concatenating segments of video files. But when I dynamically connect the src pad on the composition, I can choose either the audio or the video portion of the files, producing silent playback or video-less audio. How can I attach my composition to an audio converter and a video sink at the same time? Do I have to make two compositions and add the files to both of them?

Yes: gnonlin compositions work on one media type at a time, so audio and video are treated separately and you need one composition per stream.
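A minimal sketch of the two-composition approach, assuming the GStreamer 0.10 Python bindings that gnonlin targets; the URIs, clip length, and sink elements are placeholders:

    import pygst
    pygst.require("0.10")
    import gst
    import gobject

    URIS = ["file:///path/a.ogg", "file:///path/b.ogg"]   # placeholder clips
    CLIP_LEN = 5 * gst.SECOND                             # assumed length per clip

    def make_composition(caps_str):
        """One gnlcomposition per media type; the caps pick audio OR video."""
        comp = gst.element_factory_make("gnlcomposition")
        for i, uri in enumerate(URIS):
            src = gst.element_factory_make("gnlfilesource")
            src.set_property("uri", uri)
            src.set_property("caps", gst.Caps(caps_str))
            src.set_property("start", i * CLIP_LEN)       # position on the timeline
            src.set_property("duration", CLIP_LEN)
            src.set_property("media-start", 0)            # offset inside the file
            src.set_property("media-duration", CLIP_LEN)
            comp.add(src)
        return comp

    pipeline = gst.Pipeline()
    audio_comp = make_composition("audio/x-raw-int;audio/x-raw-float")
    video_comp = make_composition("video/x-raw-yuv;video/x-raw-rgb")

    aconv = gst.element_factory_make("audioconvert")
    asink = gst.element_factory_make("autoaudiosink")
    vconv = gst.element_factory_make("ffmpegcolorspace")
    vsink = gst.element_factory_make("autovideosink")
    pipeline.add(audio_comp, video_comp, aconv, asink, vconv, vsink)
    aconv.link(asink)
    vconv.link(vsink)

    # Compositions expose their src pads dynamically, so link in a handler.
    def on_pad(comp, pad, target):
        pad.link(target.get_pad("sink"))

    audio_comp.connect("pad-added", on_pad, aconv)
    video_comp.connect("pad-added", on_pad, vconv)

    pipeline.set_state(gst.STATE_PLAYING)
    gobject.MainLoop().run()   # a real program would also watch the bus for errors/EOS

The key point is that both compositions reference the same files; each one selects only its own media type through the caps property on its gnlfilesources.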

Related

I have a problem that the output audio clip contains a delay

I use the moviepy library in Python to combine several mp3 audio files into one file.
    from moviepy.editor import AudioFileClip, concatenate_audioclips

    clips = [AudioFileClip(c) for c in audio_list]
    final_clip_audio = concatenate_audioclips(clips)
The problem is that the output audio clip contains a delay when it switches from one sound to the next. I want it to play as one unified sound without any interruption.
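If the gaps persist, one alternative worth trying is to concatenate the decoded samples with a different library such as pydub (a sketch, not a fix for the moviepy call itself; audio_list is the same list of mp3 paths as above):

    from pydub import AudioSegment

    combined = AudioSegment.empty()
    for path in audio_list:
        combined += AudioSegment.from_mp3(path)   # decode and append raw samples
    combined.export("combined.mp3", format="mp3")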

How to direct realtime-synthesized sound to individual channels in multichannel audio output in Python

I need to:
1. read in variable data from sensors,
2. use those data to generate audio, and
3. spit out the generated audio to individual audio output channels in real time.
My trouble is with item 3.
Parts 1 and 2 have a lot in common with a guitar effects pedal, I should think: take in some variable, then adjust the audio output in real time as the input variable changes, but don't ever stop sending a signal while doing it.
I have had no trouble using pyaudio to drive wav files to specific channels using the mapping[] parameter of pyaudio.play, nor have I had trouble generating sine waves dynamically and sending them out with pyaudio.stream.play.
I'm working with 8 audio output channels. My problem is that stream.play only lets you specify a count of channels; as far as I can tell, I can't say, for example, "stream generated_audio to channel 5".
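A minimal sketch of one way around that with plain pyaudio: open a single 8-channel stream and write interleaved frames that are silent on every channel except the target (channel index 4 below stands in for "channel 5"; whether the device accepts float32 is an assumption):

    import numpy as np
    import pyaudio

    RATE, CHANNELS, TARGET = 44100, 8, 4   # TARGET = zero-based index of "channel 5"
    FRAMES = 1024

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paFloat32, channels=CHANNELS,
                    rate=RATE, output=True)

    offset = 0
    for _ in range(100):                   # roughly 2.3 s of output
        t = (offset + np.arange(FRAMES)) / RATE
        tone = 0.2 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
        frames = np.zeros((FRAMES, CHANNELS), dtype=np.float32)  # silence everywhere
        frames[:, TARGET] = tone            # route the synthesized audio to one channel
        stream.write(frames.tobytes())      # C-order rows give interleaved frames
        offset += FRAMES

    stream.stop_stream()
    stream.close()
    p.terminate()

To mix sensor data in, you would recompute the tone inside the loop (or inside a stream callback) from the latest sensor reading instead of a fixed 440 Hz.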

I need help combining image and audio clips into 1 video

My goal is to make a video of a text-to-speech reading of images I made.
I have the images and audio as files; I want to combine them in slideshow fashion, where each image's duration lasts as long as its text-to-speech audio. It would also be nice to have a transition mp4 between the clips.
The problem is that I have no idea where to start. The moviepy documentation doesn't seem to cover this, from my understanding.
I need directions on where to go, what to use, and how to use it.
I am also creating the images in a for loop and planning to write a function that adds each image and audio pair to the file.
I have searched for 10-20 minutes now and didn't find anything to help me.
Keep in mind I am a newbie Python programmer.
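As a starting point, a minimal sketch of this kind of slideshow with moviepy 1.x; image_files, audio_files, and transition.mp4 stand in for the files your loop produces:

    from moviepy.editor import (AudioFileClip, ImageClip,
                                VideoFileClip, concatenate_videoclips)

    segments = []
    for img, speech in zip(image_files, audio_files):
        audio = AudioFileClip(speech)
        # Show each image exactly as long as its narration lasts.
        clip = ImageClip(img).set_duration(audio.duration).set_audio(audio)
        segments.append(clip)
        segments.append(VideoFileClip("transition.mp4"))  # transition between clips

    segments.pop()   # drop the transition trailing the last image
    video = concatenate_videoclips(segments, method="compose")
    video.write_videofile("slideshow.mp4", fps=24)

method="compose" lets clips of different sizes coexist, and fps must be given explicitly because still-image clips carry no frame rate of their own.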

Audio alignment (same sentence with different speakers)

I am super new to audio processing. I have one reference audio file and several other recordings (the same sentence spoken by different speakers, differing in dialect and duration), and I want to align all the audio files to the one reference file with the least warping. I tried extracting MFCC and chroma features (Python/librosa), but I don't know what to do next. I was reading about DTW (Dynamic Time Warping) for alignment; would that work? Is there an example, open-source project, or audio tool which already does this? It seems to be a solved problem, but I couldn't find it. Please help.
I was following this:
https://librosa.github.io/librosa_gallery/auto_examples/plot_music_sync.html but how do I save the aligned audio back in the time domain?
This seems related - Dynamic time warping with python (final mapping)
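A rough sketch of that librosa approach, with a deliberately crude frame-copy resynthesis to get aligned audio back into the time domain (a real system would use overlap-add or a phase vocoder; file names are placeholders):

    import librosa
    import numpy as np
    import soundfile as sf

    HOP = 512
    ref, sr = librosa.load("reference.wav")
    other, _ = librosa.load("speaker2.wav", sr=sr)

    # Frame-level features for both recordings (MFCCs suit speech).
    ref_feat = librosa.feature.mfcc(y=ref, sr=sr, hop_length=HOP)
    oth_feat = librosa.feature.mfcc(y=other, sr=sr, hop_length=HOP)

    # DTW over the feature sequences; wp is a list of matched frame pairs.
    D, wp = librosa.sequence.dtw(X=ref_feat, Y=oth_feat)
    wp = wp[::-1]                        # the path is returned end-to-start

    # For each reference frame, copy the matched frame of the other recording.
    aligned = np.zeros_like(ref)
    for ref_idx, oth_idx in wp:
        out, src = ref_idx * HOP, oth_idx * HOP
        n = min(HOP, len(aligned) - out, len(other) - src)
        if n > 0:
            aligned[out:out + n] = other[src:src + n]

    sf.write("speaker2_aligned.wav", aligned, sr)

The result has the reference file's timing with the other speaker's audio, which is exactly the "save back in the time domain" step the librosa gallery example stops short of.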

The simplest video streaming?

I have a camera that is taking pictures one by one (about 10 pictures per second) and sending them to a PC. I need to show this incoming sequence of images as live video on the PC.
Is it enough just to use some Python GUI framework, create a control that holds a single image, and change the image in the control very fast?
Or would that be just lame? Should I use some sort of video streaming library? If yes, what do you recommend?
Or would that be just lame?
No. It wouldn't work at all.
There's a trick to getting video to work. Apple's QuickTime implements that trick. So do a bunch of Microsoft products, plus some open-source video playback tools.
There are several closely-related tricks, all of which are a huge pain in the neck.
1. Compression. Full-sized video is huge: do the math on 640x480 pixels at 24-bit color and 30 frames per second, and you get roughly 28 MB every second. Without compression, you can't read it in fast enough.
2. Buffering and timing. Sometimes the data rates and frame rates don't align well. You need a buffer of ready-to-display frames, and you need a deadly accurate clock to get them to display at exactly the right intervals.
Making a sequence of JPEG images into a movie is what iPhoto and iMovie are for.
Usually, what we do is create the video file from the images and play that file through a standard video player. Making a QuickTime movie or Flash movie from images isn't that hard. There are a lot of tools to help make movies from images. Almost any photo management solution can create a slide show and save it as a movie in some standard format.
Indeed, I think that Graphic Converter can do this.
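For the make-a-movie-from-the-images route, a minimal sketch that shells out to ffmpeg (assumed to be installed; the frame-name pattern is a placeholder):

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-framerate", "10",              # the camera delivers ~10 images per second
        "-i", "frames/frame_%04d.jpg",   # frame_0001.jpg, frame_0002.jpg, ...
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",           # widest player compatibility
        "out.mp4",
    ], check=True)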
