I want to write a GStreamer pipeline in Python that converts a WebM video into an AVI one.
I already have a pipeline that displays the WebM video, and that works.
How can I perform the conversion?
I thought that just adding an "x264enc" element to the video queue and a "lame" element to the audio one would be sufficient.
I then noticed that a muxer is also necessary, so I added one.
What I have is:
gst.element_link_many(self.queuev, self.video_decoder, colorspace, x264enc)
gst.element_link_many(self.queuea, self.audio_decoder, audioconv, lame)
gst.element_link_many(avimux, filesink)
There is also a callback that connects the demuxer to the audio and video decoders:
def demuxer_callback(self, demuxer, pad):
    if pad.get_property("template").name_template == "video_%02d":
        qv_pad = self.queuev.get_pad("sink")
        pad.link(qv_pad)
    elif pad.get_property("template").name_template == "audio_%02d":
        qa_pad = self.queuea.get_pad("sink")
        pad.link(qa_pad)
I think I have to write something similar for avimux, and I've done this:
def avimux_callback(self, avimux, pad1):
    if pad1.get_property("template").name_template == "video_%02d":
        qv_pad1 = self.queuev.get_pad("sink")
        pad1.link(qv_pad1)
    elif pad1.get_property("template").name_template == "audio_%02d":
        qa_pad1 = self.queuea.get_pad("sink")
        pad1.link(qa_pad1)
but I get an error about the file source, and the script doesn't work.
Any suggestions?
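My best guess at the intended linking (a sketch, untested, assuming the GStreamer 0.10 bindings that gst.element_link_many comes from): avimux's sink pads are request pads, and since the encoders' src pads are always pads, element_link_many should be able to request and link them directly, so no pad-added callback is needed on the muxer:

# sketch: link the encoders straight into avimux, and avimux into the file sink
gst.element_link_many(self.queuev, self.video_decoder, colorspace, x264enc, avimux)
gst.element_link_many(self.queuea, self.audio_decoder, audioconv, lame, avimux)
gst.element_link_many(avimux, filesink)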
Thanks
FrankBr
I am doing some simple DSP and playing back both the original and the filtered audio. I am using librosa to load the mp3 files in as arrays.
Here is the code that loads the files in:
import os
from random import randint
import librosa
from mutagen.mp3 import MP3  # assumption: MP3 comes from mutagen, which exposes .info.length

PATH = os.getcwd() + "/audio files"
to_load_array = os.listdir(PATH)
random_mp3 = to_load_array[randint(0, len(to_load_array) - 1)]
random_mp3_path = "audio files/" + random_mp3
data, sr = librosa.load(random_mp3_path, duration=MP3(random_mp3_path).info.length, mono=False, sr=44100)
Elsewhere in my program I have a GUI with a button that lets you switch between playing the filtered version and the unfiltered version. I do this by playing both at the same time on different channels in pygame.mixer and alternating which one is muted. I am using pygame.mixer because it allows for this simultaneous playback.
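For reference, a minimal sketch of that mute-and-swap idea (the channel numbers and looping flag are illustrative, not my exact code):

# play both versions in sync, then flip the volumes to "switch" tracks
chan_orig = pygame.mixer.Channel(0)
chan_filt = pygame.mixer.Channel(1)
chan_orig.play(self.original, loops=-1)
chan_filt.play(self.filtered, loops=-1)

def toggle(play_filtered):
    chan_orig.set_volume(0.0 if play_filtered else 1.0)
    chan_filt.set_volume(1.0 if play_filtered else 0.0)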
Currently I filter the audio, export the filtered file as an mp3, and then play that mp3. This feels like too many I/O operations, and I would like to speed the program up by playing the filtered and unfiltered arrays directly from within Python.
I found the function pygame.sndarray.make_sound, which is supposed to turn an array into a Sound object that can be played through the mixer.
This is how I am using the function:
def setSoundObjects(self, original, filtered):
    print('original data shape: ', original.shape)
    print('format (From get_init): ', pygame.mixer.get_init()[2])
    print("this is the original data: ", original)
    # scale the float arrays up to the 16-bit integer range
    original = numpy.int16(original * (2 ** 15))
    filtered = numpy.int16(filtered * (2 ** 15))
    print("this is the data after numpy conversion: ", original)
    # set the sound objects
    print("Number of channels: ", pygame.mixer.get_num_channels())
    self.original = pygame.sndarray.make_sound(original)
    self.filtered = pygame.sndarray.make_sound(filtered)
However, whenever I try to use it I keep getting this error:
self.original = pygame.mixer.Sound(array = original)
ValueError: Array depth must match number of mixer channels
Here is the output I am printing to the terminal to try and debug this:
format (From get_init): 2
this is the original data: [[ 0. 0. 0. ... -0.00801174 -0.00784447
-0.01003712]
[ 0. 0. 0. ... -0.00695544 -0.00674583
-0.00865676]]
this is the data after numpy conversion: [[ 0 0 0 ... -262 -257 -328]
[ 0 0 0 ... -227 -221 -283]]
Number of channels: 8
I tried the numpy conversion because of this post, but it didn't help.
Any help would be greatly appreciated, thank you!
Let me know if I should add anything else to help clarify what I'm doing. I did not include the entire program because it is already quite large.
This question is different from that question because I am already changing the format of the array. I have tried that solution and it does not work in my code; I get the same error if I pass the size = 32 parameter into the init.
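A likely fix, sketched below (the diagnosis is an assumption: librosa.load with mono=False returns shape (channels, samples), while pygame.sndarray.make_sound expects (samples, channels) matching the two channels the mixer was initialized with; the "Number of channels: 8" line counts playback channels, which is a different thing):

import numpy
import pygame

pygame.mixer.init(frequency=44100, size=-16, channels=2)  # match the loaded audio

def to_sound(arr):
    # librosa gives float32 in [-1, 1] with shape (2, n); pygame wants
    # int16 with shape (n, 2) and a C-contiguous buffer
    samples = numpy.int16(arr * (2 ** 15))
    samples = numpy.ascontiguousarray(samples.T)
    return pygame.sndarray.make_sound(samples)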
I am using the Music21 Python module for a project. The output of my code generates a MIDI file. I want this MIDI file to sound like a guitar, but it sounds like a piano. I saw a similar question here which said I should add instrument.Guitar() before the notes in my output notes sequence.
But the notes still play like a keyboard.
This is the code which generates the output notes based on a sequence of input notes:
for pattern in in_notes:
    # pattern is a chord
    if len(self.intervals) < 1:
        self.set_intervals()
    if ('.' in pattern) or pattern.isdigit():
        notes_in_chord = pattern.split('.')
        notes = []
        for current_note in notes_in_chord:
            new_note = note.Note(int(current_note))
            notes.append(new_note)
        new_chord = chord.Chord(notes)
        new_chord.offset = offset
        output_notes.append(instrument.ElectricGuitar())
        output_notes.append(new_chord)
    # pattern is a note
    else:
        new_note = note.Note(pattern)
        new_note.offset = offset
        output_notes.append(instrument.ElectricGuitar())
        output_notes.append(new_note)
    # increase offset each iteration so that notes do not stack
    offset += self.intervals[0]
    self.intervals.pop(0)
And the part which generates the output MIDI file:
midi_stream = stream.Stream(output_notes)
key = midi_stream.analyze('key')
i = interval.Interval(key.tonic, pitch.Pitch(self.key))
midi = midi_stream.transpose(i)
midi.write('midi', fp=f'{self.file_name}.midi')
How do I make it sound like a guitar?
It seems to work if I change the instrument to a violin with output_notes.append(instrument.Violin()), but not for any of the guitars: Guitar(), AcousticGuitar(), ElectricGuitar().
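One workaround worth trying (this is an assumption on my part: some music21 versions ship the guitar classes without a usable General MIDI program, so players fall back to piano) is to set the MIDI program explicitly before appending the instrument:

from music21 import instrument

# hypothetical fix: force the General MIDI program instead of relying on the class default
guitar = instrument.ElectricGuitar()
guitar.midiProgram = 26  # GM program 27, "Electric Guitar (jazz)", zero-indexed
output_notes.append(guitar)

Appending the instrument once at the start of the stream should be enough; repeating it before every note is harmless but redundant.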
I have the following code:
import numpy as np

# Time integration
T = 28
AT = 5 / 1440
L = T / AT
tr = np.linspace(AT, T, AT)  # I set the minimum value to AT to avoid a
                             # DivisionByZero error (in the beta_cc computation)
np.savetxt('tiempo_real.csv', tr, delimiter=",")

# Parameters
fcm28 = 40
beta_cc = 0
fcm = 0
s = 0

# Hardening coefficient (s)
ct = input("Cement Type (1, 2 or 3): ")
print("Cement Type: " + str(ct))
if int(ct) == 1:
    s = 0.2
elif int(ct) == 2:
    s = 0.25
elif int(ct) == 3:
    s = 0.38
else:
    print("Invalid answer")

# fcm determination
iter = 1
maxiter = 8065
while iter < maxiter:
    iter += 1
    beta_cc = np.exp(s * (1 - (28 / tr)) ** 0.5)
    fcm = beta_cc * fcm28
    np.savetxt('Fcm_Results.csv', fcm, delimiter=",")
The code runs without errors and creates the two desired files, but no information is stored in either of them.
What I would like np.savetxt to do is create a .csv file with the result of fcm at every iteration (so a 1×8064 array).
Instead of the while loop I had previously tried using a for loop, but since the timestep is a float I had some problems with it.
Thank you very much.
PS. Not sure if I should mention it: I am using Python 3 on Ubuntu.
If anyone has the same issue: I solved this by changing the loop to a for loop, appending the iterative values of the functions (beta_cc & fcm) to arrays, and calling savetxt once at the end.
fcmM1 = []
tiemporeal = []

# define the functions once, outside the loop
def beta_cc(i):
    return np.exp(s * (1 - (28 / tr) ** 0.5))

def fcm(i):
    return beta_cc(i) * fcm28

iteration = 0
maxiteration = 8064
for i in range(iteration, maxiteration):
    tr = tr + AT
    fcmM1.append(fcm(i))
    tiemporeal.append(tr)

np.savetxt('M1_Resultados_fcm.csv', fcmM1, delimiter=",", header="Fcm", fmt="%s")
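For what it's worth, the loop can be avoided entirely, since numpy evaluates the formula elementwise over a whole array at once (a sketch, assuming the same hardening law and the s, AT, T, and fcm28 values set above):

tr = np.linspace(AT, T, 8064)                 # one entry per 5-minute step over 28 days
beta_cc = np.exp(s * (1 - (28 / tr) ** 0.5))  # vectorized: no Python loop needed
fcm = beta_cc * fcm28
np.savetxt('M1_Resultados_fcm.csv', fcm, delimiter=",", header="Fcm")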
I am making a program that should be able to extract the notes, rests, and chords from a certain MIDI file and write the respective pitches of the notes and chords (as MIDI note numbers, which go from 0 to 127) to a csv file for later use.
For this project, I am using the Python library Music21.
from music21 import *
import pandas as pd

# SETUP
path = r"Pirates_TheCarib_midi\1225766-Pirates_of_The_Caribbean_Medley.mid"

# create a function for parsing the file and extracting the notes
def extract_notes(path):
    stm = converter.parse(path)
    treble = stm[0]  # access the first part (if there is only one part)
    bass = stm[1]
    # note extraction
    notes_treble = []
    notes_bass = []
    for thisNote in treble.getElementsByClass("Note"):
        indiv_note = [thisNote.name, thisNote.pitch.midi, thisNote.offset]
        notes_treble.append(indiv_note)  # record the note, its MIDI pitch, and its offset
    for thisNote in bass.getElementsByClass("Note"):
        indiv_note = [thisNote.name, thisNote.pitch.midi, thisNote.offset]
        notes_bass.append(indiv_note)  # add the notes to the bass list
    return notes_treble, notes_bass

# write to csv
def to_csv(notes_array):
    df = pd.DataFrame(notes_array, index=None, columns=None)
    df.to_csv("attempt1_v1.csv")

# using the functions
notes_array = extract_notes(path)
#to_csv(notes_array)

# DEBUGGING
stm = converter.parse(path)
print(stm.parts)
Here is the link to the score I am using as a test.
https://musescore.com/user/1699036/scores/1225766
When I run the extract_notes function, it returns two empty arrays, and the line
print(stm.parts)
returns
<music21.stream.iterator.StreamIterator for Score:0x1b25dead550 #:0>
I am confused as to why it does this. The piece should have two parts, treble and bass. How can I get each note, chord, and rest into an array so I can put it in a csv file?
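One possible lead, sketched below (my assumption: the notes sit inside nested Measure containers, which getElementsByClass("Note") does not descend into, so recursing through each part should surface them):

from music21 import converter

stm = converter.parse(path)
for part in stm.parts:
    # .recurse() descends into measures and voices; .notes yields Note and Chord objects
    for element in part.recurse().notes:
        print(element.offset, element)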
Here is a small snippet showing how I did it. I needed to get all notes, chords, and rests for a specific instrument, so I first iterated through the parts to find that instrument, and afterwards checked each element's type and appended it accordingly.
You can call this method like
notes = get_notes_chords_rests(keyboard_instruments, "Pirates_of_The_Caribbean.mid")
where keyboard_instruments is a list of instrument names:
keyboard_instruments = ["KeyboardInstrument", "Piano", "Harpsichord", "Clavichord", "Celesta"]
def get_notes_chords_rests(instrument_type, path):
    try:
        midi = converter.parse(path)
        parts = instrument.partitionByInstrument(midi)
        note_list = []
        for music_instrument in range(len(parts)):
            if parts.parts[music_instrument].id in instrument_type:
                for element_by_offset in stream.iterator.OffsetIterator(parts[music_instrument]):
                    for entry in element_by_offset:
                        if isinstance(entry, note.Note):
                            note_list.append(str(entry.pitch))
                        elif isinstance(entry, chord.Chord):
                            note_list.append('.'.join(str(n) for n in entry.normalOrder))
                        elif isinstance(entry, note.Rest):
                            note_list.append('Rest')
        return note_list
    except Exception as e:
        print("failed on ", path)
        pass
P.S. It is important to use a try block because a lot of MIDI files on the web are corrupted.
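To get from the returned list to the CSV the question asks about, the list can be handed to pandas much like the question's to_csv helper (a sketch reusing the names from above):

import pandas as pd

notes = get_notes_chords_rests(keyboard_instruments, path)
pd.DataFrame(notes, columns=["note"]).to_csv("attempt1_v1.csv", index=False)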
I am trying to make a video from a large number of images using MoviePy. The approach works fine for small numbers of images, but for large numbers the process gets killed. At about 500 images added, the Python process is using about half of the available memory, and there are many more images than that.
How should I address this? I want the processing to complete, and I don't mind if it takes a bit longer, but it would be good if I could limit the memory and CPU usage in some way. With the current approach, the machine becomes almost unusable while processing.
The code is as follows:
import os
import time
from moviepy.editor import *

def ls_files(path="."):
    return [fileName for fileName in os.listdir(path)
            if os.path.isfile(os.path.join(path, fileName))]

def main():
    listOfFiles = ls_files()
    listOfTileImageFiles = [fileName for fileName in listOfFiles
                            if "_tile.png" in fileName]
    numberOfTiledImages = len(listOfTileImageFiles)

    # Create a video clip for each image.
    print("create video")
    videoClips = []
    imageDurations = []
    for imageNumber in range(0, numberOfTiledImages):
        imageFileName = str(imageNumber) + "_tile.png"
        print("add image {fileName}".format(fileName=imageFileName))
        imageClip = ImageClip(imageFileName)
        duration = 0.1
        videoClip = imageClip.set_duration(duration)
        # Determine the image start time by calculating the sum of the
        # durations of all previous images.
        if imageNumber != 0:
            videoStartTime = sum(imageDurations[0:imageNumber])
        else:
            videoStartTime = 0
        videoClip = videoClip.set_start(videoStartTime)
        videoClips.append(videoClip)
        imageDurations.append(duration)
    fullDuration = sum(imageDurations)
    video = concatenate(videoClips)
    video.write_videofile(
        "video.mp4",
        fps=30,
        codec="mpeg4",
        audio_codec="libvorbis"
    )

if __name__ == "__main__":
    main()
If I understood correctly, you want to use the different images as the frames of your video.
In this case you should use ImageSequenceClip() (it's in the library, but not yet in the web docs, see here for the doc).
Basically, you just write
clip = ImageSequenceClip("some/directory/", fps=10)
clip.write_videofile("result.mp4")
And it will read the images in the directory in alphanumerical order, while keeping only one frame at a time in memory.
While I am at it, you can also provide a list of filenames or a list of numpy arrays to ImageSequenceClip.
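For instance, with the numbered "_tile.png" files from the question, an explicit list avoids any ordering surprises (a sketch; numberOfTiledImages is the count from the original script):

from moviepy.editor import ImageSequenceClip

fileNames = [str(n) + "_tile.png" for n in range(numberOfTiledImages)]
clip = ImageSequenceClip(fileNames, fps=10)
clip.write_videofile("result.mp4")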
Note that if you just want to turn images into a video, without anything fancier like adding titles or compositing with another video, then you can do it directly with ffmpeg. From memory, the command should be something like:
ffmpeg -framerate 10 -i %d_tile.png result.mp4