Camera-based events: how many events belong to the light system? (Python)

I have a CSV file of events from an event camera, with the columns "timestamp", "x", "y", and "polarity".
For example:
timestamp,x,y,polarity
1673364315040516,34,477,1
1673364315040516,34,478,1
1673364315040516,36,473,1
1673364315040516,37,474,1
1673364315040516,38,469,1
1673364315040516,38,470,1
1673364315040516,38,472,1
1673364315040516,39,472,1
I need to take a window of a given size (e.g. 0.1 s) and determine how many events in that window belong to the light system and how many belong to other changes (such as sensor movement or changes in the scene).
Any advice or guidance on how to do this would be appreciated!
These results are needed for the final goal: from the pixels that belong to the light system, determine which of them lie on a straight line (no obstacle) and which do not (there is an obstacle).
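One possible starting point (a sketch, not a definitive method): bucket events into fixed windows by timestamp; the timestamps in the sample look like microseconds, so 0.1 s = 100,000 µs. Classifying a window's events as "light system" vs. "other" is the hard part; a common heuristic for event cameras is that flickering lights produce regular polarity alternation at a fixed pixel, so the sketch below groups events per window and per pixel and applies that heuristic. The `min_events_per_pixel` threshold and the alternation test are assumptions you would need to tune on your own data.

```python
import csv
from collections import defaultdict

WINDOW_US = 100_000  # 0.1 s, assuming timestamps are in microseconds

def window_events(rows, window_us=WINDOW_US):
    """Group (timestamp, x, y, polarity) tuples into fixed-size time windows."""
    windows = defaultdict(list)
    for ts, x, y, pol in rows:
        windows[ts // window_us].append((ts, x, y, pol))
    return windows

def count_light_like(events, min_events_per_pixel=4):
    """Heuristic (an assumption, needs tuning): a pixel that fires many times
    with frequently alternating polarity inside one window looks like a
    flickering light; everything else is counted as 'other'."""
    per_pixel = defaultdict(list)
    for ts, x, y, pol in events:
        per_pixel[(x, y)].append(pol)
    light = other = 0
    for pols in per_pixel.values():
        alternations = sum(a != b for a, b in zip(pols, pols[1:]))
        if len(pols) >= min_events_per_pixel and alternations >= len(pols) // 2:
            light += len(pols)
        else:
            other += len(pols)
    return light, other

# Usage with the CSV layout from the question (path is a placeholder):
# with open('events.csv') as f:
#     reader = csv.reader(f)
#     next(reader)  # skip the header row
#     rows = [tuple(map(int, row)) for row in reader]
# for window_id, events in window_events(rows).items():
#     light, other = count_light_like(events)
```

For the final goal (pixels on a straight line), a standard next step would be a Hough transform (e.g. `cv2.HoughLines`) over the accumulated light-system pixels of each window.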

Related

Output more than one instrument using Music 21 and Python

I am using Python and Music21 to code an algorithm that composes melodies from input music files of violin pieces accompanied by piano. My problem is that when I input a MIDI file that has two instruments, the output is only in one instrument. I can currently change the output instrument to a guitar, trumpet, etc., even though those instruments are not present in my original input files. I would like to know whether I could write some code that identifies the instruments in the input files and outputs those specific instruments. Alternatively, is there any way that I could code for two output instruments rather than one? I have tried to copy the existing code with another instrument, but the algorithm only outputs the last instrument detected in the code. Below is my current running code:
from music21 import instrument, note, chord

def convert_to_midi(prediction_output):
    offset = 0
    output_notes = []
    # Create note and chord objects based on the values generated by the model
    for pattern in prediction_output:
        # Pattern is a chord
        if ('.' in pattern) or pattern.isdigit():
            notes_in_chord = pattern.split('.')
            notes = []
            for current_note in notes_in_chord:
                output_notes.append(instrument.Guitar())
                cn = int(current_note)
                new_note = note.Note(cn)
                notes.append(new_note)
            new_chord = chord.Chord(notes)
            new_chord.offset = offset
            output_notes.append(new_chord)
        # Pattern is a note
        else:
            output_notes.append(instrument.Guitar())
            new_note = note.Note(pattern)
            new_note.offset = offset
            output_notes.append(new_note)
Instrument objects go directly into the Stream object, not onto a Note, and each Part can have only one Instrument object active at a time. So to get two output instruments, you need two Parts, each with its own Instrument.

Python WebRTC voice activity detection gives false positives

I need to do voice activity detection as a step to classify audio files.
Basically, I need to know with certainty if a given audio has spoken language.
I am using py-webrtcvad, which I found on GitHub and which is sparsely documented:
https://github.com/wiseman/py-webrtcvad
The thing is, when I try it on my own audio files, it works fine with the ones that contain speech, but it keeps yielding false positives when I feed it other types of audio (like music or bird sounds), even if I set aggressiveness to 3.
The audio files are sampled at 8000 Hz.
The only thing I changed in the source code was the way I pass the arguments to the main function (instead of using sys.argv).
# read_wave, frame_generator, vad_collector and write_wave are the helpers
# from py-webrtcvad's example.py
def main(file, agresividad):
    audio, sample_rate = read_wave(file)
    vad = webrtcvad.Vad(int(agresividad))
    frames = frame_generator(30, audio, sample_rate)
    frames = list(frames)
    segments = vad_collector(sample_rate, 30, 300, vad, frames)
    for i, segment in enumerate(segments):
        path = 'chunk-%002d.wav' % (i,)
        print(' Writing %s' % (path,))
        write_wave(path, segment, sample_rate)

if __name__ == '__main__':
    file = 'myfilename.wav'
    agresividad = 3  # aggressiveness
    main(file, agresividad)
I'm seeing the same thing. I'm afraid that's just the extent to which it works. Speech detection is a difficult task and webrtcvad wants to be light on resources so there's only so much you can do. If you need more accuracy then you would need different packages/methods that will necessarily take more computing power.
On aggressiveness, you're right that even on 3 there are still a lot of false positives. I'm also seeing false negatives however so one trick I'm using is running three instances of the detector, one for each aggressiveness setting. Then instead of classifying a frame 0 or 1 I give it the value of the highest aggressiveness that still said it was speech. In other words each sample now has a score of 0 to 3 with 0 meaning even the least strict detector said it wasn't speech and 3 meaning even the strictest setting said it was. I get a little bit more resolution like that and even with the false positives it is good enough for me.
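A minimal sketch of that scoring trick, with the detectors abstracted as callables. With py-webrtcvad these would be three `webrtcvad.Vad(n)` instances for n = 1, 2, 3, each wrapped so it takes just the frame bytes; the wrapping shown in the comment is an assumption about how you feed frames in.

```python
def speech_score(frame, detectors):
    """Score a frame from 0 to len(detectors): the highest aggressiveness
    level that still classified the frame as speech (0 = none did).

    `detectors` must be ordered from least to most aggressive, e.g.:
        vads = [webrtcvad.Vad(n) for n in (1, 2, 3)]
        detectors = [lambda f, v=v: v.is_speech(f, sample_rate) for v in vads]
    """
    score = 0
    for level, is_speech in enumerate(detectors, start=1):
        if is_speech(frame):
            score = level
    return score
```

Frames that score 3 were accepted even by the strictest detector, so thresholding on the score gives you the extra resolution described above.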

How to create a menu where the user can sort blocks of data on parameter values?

My program so far:
The user gets 3 questions:
Mortgage rate:
Downpayment:
Interest deduction:
The input from these questions is saved to variables.
Next, a text file containing data on property listings is opened in program.py. I use the data (for example the price and rent of each property) to calculate a monthly rate and a square-meter price for each property. Then I write this data into the text file (using the writelines function) so these values are saved for every block of data (for every property). Then I print the data for all property listings to the user.
Now the output-screen is like this (with example-input from user):
Mortgage rate(in percent): 2
Downpayment(in swedish SEK, i.e kr): 100000
Interest deduction(in percent): 20
All listed properties:
Size: 35
Price: 655000
Rent: 1200
Phonenr: 0716-257681
Adress: Blablastreet2
Monthly rate: 1940.0
Price/Squaremeter: 18714.285714285714
"emptyline"
Size: 80
Price: 11840950
Rent: 3550
Phonenr: 08-6601502
Adress: Yadayadastreet3
Monthly rate: 19204.6
Price/Squaremeter: 148011.875
etc for all properties..
What i want to do:
Above the "All listed properties:" I want there to be an option to go to a menu, like:
(1) Menu
Then pressing 1 and enter I want the user to come to a new screen with the following options
0 - Quit
1 - Go back to show all listed properties
2 - Change wanted monthly rate (<200000kr)
3 - Change wanted rent (<500000kr)
4 - Price/squaremeter (<100000)
5 - Change wanted size (>20kvm)
6 - Create selection
Where the values in parentheses are the current "standard choices".
When pressing for example (2), I want there to be a question such as:
The monthly rate should be not more than (in kr):
And after input I want the user to come back to the menu above, but now with the new value the user entered shown in parentheses.
When the user is done adjusting the parameters, he/she presses (6) to create the selection. And then I want all the property listings to be filtered and sorted according to the chosen parameters and printed for the user one property at a time, from lowest to highest value, where the user can go back and forth (or again choose to go to the menu).
Info about how the data from the txt file is set up:
All parameter values are read into the program as rows from a text file and then sorted into different lists, where every list element represents one row of data from the text file. The same data point is always the same number of rows apart (for example, there are 8 rows from the rent of property one to the rent of property two), and in this way I have hard-coded each data category from every property into a list of its own, for example prices[] or rents[].
Now, how do I make this happen?
(My thoughts have been: should I create the menu with if statements in program.py and then somehow link them to a class containing objects for every choice in the menu, where the methods do the sorting? How do I save the choices shown in parentheses in the menu? It seems wrong to do it in a text file like I have done with the property data(?). How do I create new screens to go back and forth between, to the menu and back and to changing values?) I am very new to programming and Python, so excuse me if I am not totally clear about what I want to do here; please ask questions if you want me to clarify something.
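One possible shape for this, as a sketch under assumptions: the properties are kept as a list of dicts rather than parallel lists, and the field names and example values are invented for illustration. The menu is a plain loop that mutates a dict of limits; no class is strictly needed.

```python
# Each property is one dict; in the real program these would be built
# from the parallel lists (prices[], rents[], ...) read from the text file.
properties = [
    {"price": 655000, "size": 35, "rent": 1200, "monthly_rate": 1940.0},
    {"price": 11840950, "size": 80, "rent": 3550, "monthly_rate": 19204.6},
]

# Current filter limits (the "standard choices" shown in the menu parentheses).
limits = {"monthly_rate": 200000, "rent": 500000, "size": 20}

def create_selection(props, limits):
    """Keep properties within the limits, sorted by monthly rate, lowest first."""
    selected = [p for p in props
                if p["monthly_rate"] <= limits["monthly_rate"]
                and p["rent"] <= limits["rent"]
                and p["size"] >= limits["size"]]
    return sorted(selected, key=lambda p: p["monthly_rate"])

def menu():
    """Simple text-menu loop; changing a limit just updates the dict,
    so the new value automatically shows up the next time the menu prints."""
    while True:
        choice = input(f"0-Quit 1-List all 2-Rate({limits['monthly_rate']}) "
                       f"3-Rent({limits['rent']}) 5-Size({limits['size']}) 6-Select: ")
        if choice == "0":
            break
        elif choice == "2":
            limits["monthly_rate"] = int(input("The monthly rate should be not more than (in kr): "))
        elif choice == "6":
            for p in create_selection(properties, limits):
                print(p)
```

Options 3 and 5 would follow the same pattern as option 2, and "go back and forth" between properties can be another small loop over the result of `create_selection`.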

cut parts of a video using gstreamer/Python (gnonlin?)

I have a video file and I'd like to cut out some scenes (identified either by a time position or by a frame). As far as I understand, that should be possible with gnonlin, but so far I wasn't able to find a sample of how to do that (ideally using Python). I don't want to modify the video/audio streams themselves if possible (but conversion to mp4/webm would be acceptable).
Am I correct that gnonlin is the right component in the gstreamer universe to do that? Also, I'd be glad for some pointers/recipes on how to approach the problem (gstreamer newbie).
Actually it turns out that gnonlin is too low-level and still requires a lot of gstreamer knowledge. Luckily there is "gstreamer-editing-services" (gst-editing-services), a library offering a higher-level API on top of gstreamer and gnonlin.
With a tiny bit of RTFM reading and a helpful blog post with a Python example I was able to solve my basic problem:
Load the asset (video)
Create a Timeline with a single layer
add the asset multiple times to the layer, adjusting start, inpoint and duration so only the relevant parts of a video are present in the output video
Most of my code is directly taken from the referenced blog post above so I don't want to dump all of that here. The relevant stuff is this:
asset = GES.UriClipAsset.request_sync(source_uri)
timeline = GES.Timeline.new_audio_video()
layer = timeline.append_layer()

start_on_timeline = 0
start_position_asset = 10 * 60 * Gst.SECOND
duration = 5 * Gst.SECOND
# GES.TrackType.UNKNOWN => add every kind of stream to the timeline
clip = layer.add_asset(asset, start_on_timeline, start_position_asset,
                       duration, GES.TrackType.UNKNOWN)

start_on_timeline = duration
start_position_asset = start_position_asset + 60 * Gst.SECOND
duration = 20 * Gst.SECOND
clip2 = layer.add_asset(asset, start_on_timeline, start_position_asset,
                        duration, GES.TrackType.UNKNOWN)
timeline.commit()
The resulting video includes the source segments 10:00–10:05 and 11:00–11:20, so essentially there are two cuts: one at the beginning and one in the middle.
From what I have seen this worked perfectly fine: audio and video in sync, no worries about key frames and whatnot. The only part left is to find out whether I can translate a frame number into a timing reference for gst-editing-services.
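For that last step, converting a frame number into a GStreamer timestamp is just arithmetic on the stream's frame rate. `Gst.SECOND` is one second in nanoseconds; the 25/1 rate below is an assumed example, since the real rate has to be read from the video's caps.

```python
GST_SECOND = 10**9  # Gst.SECOND: GStreamer timestamps are in nanoseconds

def frame_to_time(frame_number, fps_num, fps_den):
    """Convert a frame number to a nanosecond timestamp (frame 0 starts at 0).

    fps_num/fps_den is the frame rate as a fraction, e.g. 25/1 or 30000/1001,
    matching how GStreamer caps express it.
    """
    return frame_number * GST_SECOND * fps_den // fps_num

# e.g. frame 250 at 25/1 fps is 10 seconds into the stream
assert frame_to_time(250, 25, 1) == 10 * GST_SECOND
```

The result can then be passed wherever the code above uses `n * Gst.SECOND` for `start_position_asset` or `duration`.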

How do I read a midi file, change its instrument, and write it back?

I want to parse an existing .mid file, change its instrument (from 'acoustic grand piano' to 'violin', for example), and save it back or write it as another .mid file.
From what I saw in the documentation, the instrument gets altered with a program_change or patch_change directive, but I cannot find any library that does this on MIDI files that already exist. They all seem to support it only for MIDI files created from scratch.
The MIDI package will do this for you, but the exact approach depends on the original contents of the midi file.
A midi file consists of one or more tracks, and each track is a sequence of events on any of sixteen channels, such as Note Off, Note On, Program Change etc. The last of these will change the instrument assigned to a channel, and that is what you need to change or add.
Without any Program Change events at all, a channel will use program number (voice number) zero, which is an acoustic grand piano. If you want to change the instrument for such a channel then all you need to do is add a new Program Change event for this channel at the beginning of the track.
However if a channel already has a Program Change event then adding a new one at the beginning will have no effect because it is immediately overridden by the pre-existing one. In this case you will have to change the parameters of the existing event to use the instrument that you want.
Things could be even more complicated if there are originally several Program Change events for a channel, meaning that the instrument changes throughout the track. This is unusual, but if you come across a file like this you will have to decide how you want to change it.
Supposing you have a very simple midi file with a single track, one channel, and no existing Program Change events. This program creates a new MIDI::Opus object from the file, accesses the list of tracks (with only a single member), and takes a reference to the list of the first track's events. Then a new Program Change event (this module calls it patch_change) for channel 0 is unshifted onto the beginning of the event list. The new event has a program number of 40 - violin - so this channel will now be played with a violin instead of a piano.
With multiple tracks, multiple channels, and existing Program Change events the task becomes more complex, but the principle is the same - decide what needs to be done and alter the list of events as necessary.
use strict;
use warnings;
use MIDI;
my $opus = MIDI::Opus->new( { from_file => 'song.mid' } );
my $tracks = $opus->tracks_r;
my $track0_events = $tracks->[0]->events_r;
unshift @$track0_events, ['patch_change', 0, 0, 40];
$opus->write_to_file('newsong.mid');
Use the music21 library (plugging my own system, hope that's okay). If there are patches defined in the parts, do:
from music21 import converter, instrument  # or import *
s = converter.parse('/Users/cuthbert/Desktop/oldfilename.mid')
for el in s.recurse():
    if 'Instrument' in el.classes:  # or 'Piano'
        el.activeSite.replace(el, instrument.Violin())
s.write('midi', '/Users/cuthbert/Desktop/newfilename.mid')
or if there are no patch changes currently defined:
from music21 import converter, instrument  # or import *
s = converter.parse('/Users/cuthbert/Desktop/oldfilename.mid')
for p in s.parts:
    p.insert(0, instrument.Violin())
s.write('midi', '/Users/cuthbert/Desktop/newfilename.mid')
