How do I handle Amazon AudioPlayer events? - python

I'm writing a simple Alexa Skill that utilizes AudioPlayer to play a long audio file. This StackOverflow answer nicely demonstrates the use of directives to play (and stop) audio, but I'm not quite sure how to intercept AudioPlayer events like PlaybackStopped and PlaybackPaused. Basically I'm trying to let the user pause an audio stream and then resume playing where they last left off. Any examples in Python would be very welcome!

I'm not quite sure how to intercept AudioPlayer events like
PlaybackStopped and PlaybackPaused
Events such as PlaybackPaused are AudioPlayer requests that notify your skill about the player's state. So whenever a user pauses in an active session, you will get two events: one is Stop and the other is PlaybackPaused.
I'm trying to let the user pause an audio stream and then resume
playing where they last left off
So whenever you get PlaybackStopped, the request also includes an offset in milliseconds. You can take that offset and store it in DynamoDB or any other persistent storage. When the user returns, just check whether they have a stored offset and start from there.
Amazon Documentation
Example of a Python ASK SDK multistream audio player.

Related

Recording audio blocks for musical looper doesn't function correctly

First I want to explain what this little program is supposed to do. It is a musical looper: when not recording, it simply passes the incoming audio blocks through to the output. When in recording mode (which is triggered by a button in the tkinter GUI) it still passes the input through to the output, but it also records the incoming audio. When the recording is done, it loops the recording. That is basically it.
The program consists of 3 files: https://gist.github.com/ramazanemreosmanoglu/ec64f51f101d9324612bcbcc597fabba
So this is what happens when I run the program:
(Nothing plays automatically in the video. When I record something and then do nothing, nothing happens. At first I thought I should add a MIDI monitor, but we don't need it; just know that every sound is triggered by me.)
https://www.youtube.com/watch?v=PdBORfjHAgQ
What is wrong with this code? What is the correct way to record audio blocks in Python with JACK?
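The intended behaviour described above can be sketched block by block, independent of JACK (the class and state names are illustrative; in the real program this logic would live inside the JACK process callback, and note that incoming buffers must be copied, since JACK reuses them):

```python
# Block-by-block looper logic: always bypass, record when armed, then loop.

class Looper:
    def __init__(self):
        self.recording = False   # toggled by the tkinter button
        self.loop = []           # recorded blocks
        self.pos = 0             # playback position within the loop

    def process(self, block):
        out = list(block)        # always pass the input through
        if self.recording:
            self.loop.append(list(block))   # copy! JACK reuses its buffers
        elif self.loop:
            # Mix the looped recording into the output, wrapping around.
            out = [a + b for a, b in zip(out, self.loop[self.pos])]
            self.pos = (self.pos + 1) % len(self.loop)
        return out
```
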

Manipulate the volume of a constantly repeating audio in Pyglet

How can I manipulate the volume of the audio loaded via pyglet.media.load?
The reason is that I have to play a sound repeatedly (e.g. bullets), but if I use the Player, the sound has to be queued before it can be played, and .play() plays it only once (e.g. bullet = pyglet.media.load("bullet.wav", streaming=False); audioPlayer = pyglet.media.Player(); audioPlayer.queue(bullet); if audioPlayer.play() is then called several times, for example from a key press, it runs only once and that's it).
If I don't use the Player, I can play the sound repeatedly, but then I can no longer manipulate the volume of the audio (e.g. bullet = pyglet.media.load("bullet.wav", streaming=False); bullet.play()).
So how can I solve this? I don't have much experience with Pyglet audio, so I'm probably missing something.
The audio player is the correct approach as you can set volume on the player.
audioPlayer.volume = 0.5
Under the hood, all the media.play() command does is create a Player instance, queue the source, and play it. If you want to replay something, simply queue the source again and play the player again. If you need sounds to overlap, create a separate Player instance each time, just as media.play() would.

playing sound in Python with the ability to cut it off mid-play

I am writing a bit of python code that plays a sound file (MP3 or the equivalent) and should cut that sound off if the user strikes a (hardware) button that is wired into the system. This will be on a Raspberry Pi running Raspbian. The libraries I’ve used historically for playing sound all play to completion. The closest approach I can think of would be using an external sound player (OMXplayer perhaps) and then searching for and killing its process if the button is pressed, but this feels inelegant. Can anybody suggest a better approach?
Using PyAudio, you could simply check a flag before writing each buffer to the stream; if the audio should stop, fade out the current buffer and then either continue the stream with silence (write zeros) or stop the stream and exit the program. This can be achieved with a latency of roughly a tenth of a second after the button press.
You would also need to create a UI thread in the application to handle commands, since the stop flag is set in response to user input.
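The buffer-by-buffer flag check can be sketched like this (the write_chunk callback stands in for stream.write() on a real PyAudio output stream, and the names are illustrative):

```python
import threading

CHUNK = 1024  # frames per buffer

stop_flag = threading.Event()  # set from the UI/button thread

def play(samples, write_chunk):
    """Write `samples` chunk by chunk, checking the stop flag before
    each buffer. On stop, fade the current buffer to avoid a click."""
    for start in range(0, len(samples), CHUNK):
        chunk = samples[start:start + CHUNK]
        if stop_flag.is_set():
            n = len(chunk)
            # Linear fade from full amplitude down to silence.
            chunk = [int(s * (n - i) / n) for i, s in enumerate(chunk)]
            write_chunk(chunk)
            return
        write_chunk(chunk)
```

With a 44.1 kHz stream and 1024-frame buffers, the flag is checked roughly every 23 ms, comfortably within the ~0.1 s latency mentioned above.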

asciimatics - how to export to a GIF?

I'm new to asciimatics and would like to export animations I'm making to a GIF, at the command line. Note that I want to ONLY record the animation itself, not me starting some command in the terminal to record the gif as well.
I've looked at the docs, but I don't see an asciimatics way to do this.
Note that I'm aware of things like ttygif, but tried to use it and couldn't get it to work with asciimatics, probably due to me not understanding how to use it.
You can do this using toughtty: it lets you control when the recording starts, which works great for starting the recording manually.
My use case is generating fun gifs to paste into Slack that are branded with lol animations and text ;-) So I don't want anyone to actually see it was a terminal window at all, just a retro looking animation that I made super fast!
So in summary:
get your animation ready to record
start the recorder, toughtty, by running $ toughtty record frames.json
start your animation. Note that recording hasn't started yet.
once your animation is running, press Ctrl+T to start recording
when you think it's good, press Ctrl+C to stop the recording
generate your gif by running toughtty encode --delay 100 frames.json test.gif
open the test.gif in your browser to view the animation!
I found toughtty by browsing the active forks of ttystudio
https://techgaun.github.io/active-forks/index.html#chjj/ttystudio
and then installed it under Node v8
Example: https://imgur.com/a/0Su6pI6

How to implement triggers in Python script

I have a PsychoPy script for a psychophysical experiment in which participants see different shapes and have to react in a specific way. I would like to take physiological measures (ECG) on a different computer and add event markers to those measures. So whenever the participant is shown a stimulus, I would like this to show up on the ECG.
Basically, I would like to add commands for parallel port i/o.
I don't even know where to start, to be honest. I have read quite a bit about this, but I'm still lost.
Here is probably the shortest fully working code to show a shape and send a trigger:
from psychopy import visual, parallel, core
win = visual.Window()
shape = visual.ShapeStim(win)
# Show it
shape.draw()
win.flip()
# Immediately send trigger
parallel.setData(10) # Trigger code, here 10
core.wait(0.020) # Number of seconds to send this trigger code (enough for the hardware to send and the receiver to recognize it)
parallel.setData(0) # Stop sending trigger.
You could, of course, extend this by putting the stimulus presentation and trigger in a loop (running trials), and do various other things alongside it, e.g. collecting responses and saving data. This is just a minimal example of stimulus presentation and sending a trigger.
It is very much on purpose that the trigger code is located immediately after flipping the window. On most hardware, sending a trigger is very fast (the voltage on the port changes within 1 ms of the line being run in the script), whereas the monitor only updates its image around every 16.7 ms, and win.flip() waits for the latter. I've made some examples of good and bad practices for timing triggers here.
For parallel port I/O, you should use pyparallel. Make continuous ECG measurements, store those, and store the timestamp and whatever metadata you want per stimulus. Pseudo-code to get you started:
while True:
    store_ecg()
    if time_to_show():
        stimulus()
    time.sleep(0.1)  # 10 Hz
