Pure tones in Psychopy end with unwanted clicks - python

Pure tones in PsychoPy are ending with clicks. How can I remove these clicks?
Tones generated within PsychoPy and tones imported as .wav files both have the same problem. I tried adding 0.025ms of fade-out to the .wav tones I generated with Audacity, but when played in PsychoPy they still end with a click.
I am not sure how to proceed. I need to run a psychoacoustic experiment, and it cannot go ahead with tones presented like that.

Crackling sounds or clicks are, to my knowledge, often associated with buffering errors. Many years back, I experienced similar problems on Linux systems when an incorrect bitrate was set. So there are at least two possible culprits at work here: the bitrate and the buffer size.
You already applied both an onset and offset ramp to allow the membranes to swing in/out, so this should not be the issue. (By the way, I think you meant 0.025 seconds instead of ms? Otherwise, the ramps would be too short!)
PyGame initializes the sound system with the following settings:
initPygame(rate=22050, bits=16, stereo=True, buffer=1024)
Pyo, by contrast, initializes it as follows:
initPyo(rate=44100, stereo=True, buffer=128)
The documentation of psychopy.sound states:
For control of bitrate and buffer size you can call psychopy.sound.init before
creating your first Sound object:
from psychopy import sound
sound.init(rate=44100, stereo=True, buffer=128)
s1 = sound.Sound('ding.wav')
So, I would suggest you:
Try out both sound backends, Pyo and PyGame. You can change which one is used in the PsychoPy preferences under General / audio library: set the field to ['pyo'] to use only Pyo, or to ['pygame'] to use only PyGame.
Experiment with different bitrate and buffer-size settings for both backends, as in the sketch after this list.
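A minimal sketch of that experiment, assuming the pyo backend; the rate and buffer values are just starting points to vary, and setting the preference in code (rather than via the dialog) may differ between PsychoPy versions:

# Sketch: pick a backend and experiment with rate/buffer before the first
# Sound object is created. Values below are starting points, not known-good.
from psychopy import prefs
prefs.general['audioLib'] = ['pyo']  # or ['pygame']; location may vary by version

from psychopy import sound, core

sound.init(rate=44100, stereo=True, buffer=256)  # try 64, 128, 256, 512, ...
tone = sound.Sound(value=440, secs=0.5)          # 440 Hz pure tone
tone.play()
core.wait(1.0)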
If you want to get into serious psychoacoustics, however, I would suggest not relying on either of the proposed solutions. Instead, get a piece of professional sound hardware or a data-acquisition board with analog outputs, which will deliver undistorted sound with sub-millisecond precision, such as the devices produced by National Instruments or their competitors. The NI boards can be controlled from Python via PyLibNIDAQmx.

Clicks at the beginning and end of sounds often occur because playback stops mid-wave, so the waveform jumps abruptly from some value to zero. Such a step can only be represented by high-amplitude, high-frequency components superimposed on the signal, i.e. a click. The solution is to make the wave stop while it is at zero.
Are you using an old version of PsychoPy? If so, upgrade. Newer versions apply a Hamming window (fade in/out) to self-generated tones, which should avoid the click.
For the .wav files, try adding (extra) silence at the end, e.g. 50 ms; it may be that PsychoPy stops the sound prematurely.
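If upgrading is not an option, you can bake both fixes into the file yourself. A minimal sketch with numpy and scipy, assuming a mono 16-bit .wav; the filenames and the 25 ms ramp / 50 ms pad are placeholders:

# Sketch: apply raised-cosine (Hanning) on/off ramps and pad the end with
# silence so the waveform is at zero before playback stops. Assumes mono.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("tone.wav")
data = data.astype(np.float64)

ramp_len = int(0.025 * rate)            # 25 ms ramps
ramp = np.hanning(2 * ramp_len)
data[:ramp_len] *= ramp[:ramp_len]      # fade in
data[-ramp_len:] *= ramp[ramp_len:]     # fade out

pad = np.zeros(int(0.050 * rate))       # 50 ms of trailing silence
data = np.concatenate([data, pad])

wavfile.write("tone_ramped.wav", rate, data.astype(np.int16))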

Related

Playing audio in sync with video with frames generated on the fly, real time. Plausible?

I'm a self-taught Python programmer working on a hobby project, but I'm having some difficulty and would like to address what I see as a potential XY problem.
My app takes an audio file as input (converting it to .wav) and produces visual representations of the audio (90x90 RGB frames) as numpy arrays. I used to save these frames to a video file with OpenCV and then use ffmpeg to scale the video and lay the (original, non-wav) audio over the top, but this meant waiting until the app had finished before playing the file. I would like to play the audio and display the frames in sync as they are generated. My generation code takes at most 8 ms of a 16 ms frame (60 fps), so I have a reasonable number of cycles to play with.
From my research, I have found that SDL is the most appropriate tool for displaying frames at high speed, and I have managed to build a simple system that displays frames 'in time' by brute-force pixel editing. I have also discovered that SDL can play audio, and it even seems that I could synchronize it with the video as I would like, via the callback function. However, being a decidedly non-C programmer, I am at a loss as to how best to display frames, since directly assigning pixels can hardly be the safest or fastest way, and I would like to scale the frames as they are displayed. I am also unsure how best to convert numpy arrays to textures efficiently, and how best to keep my generation code, the audio, and the video frames synchronized.
I'm not specifically looking for an answer to any of those problems, though advice would be appreciated; I'm just making sure that this is a reasonable way forward. Is SDL/pysdl2 coupled with numpy appropriate in this scenario, or is this asking too much of Python overall?
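As a rough feasibility check rather than a full answer: pysdl2 can expose the window surface as a numpy array via sdl2.ext.pixels3d, which avoids per-pixel writes. A sketch with a random stub frame, leaving scaling and audio sync out; the surface's channel order and array layout may differ by platform:

# Sketch: copy generated RGB frames into an SDL window surface through a
# numpy view. Frame source is a stub; channel order (RGBA vs BGRA) may
# need adjusting for your surface format.
import numpy as np
import sdl2
import sdl2.ext

sdl2.ext.init()
window = sdl2.ext.Window("frames", size=(90, 90))
window.show()
surface = window.get_surface()
pixels = sdl2.ext.pixels3d(surface)   # (width, height, channels) view

running = True
while running:
    for event in sdl2.ext.get_events():
        if event.type == sdl2.SDL_QUIT:
            running = False
    frame = np.random.randint(0, 255, (90, 90, 3), dtype=np.uint8)  # stub
    pixels[:, :, :3] = frame.transpose(1, 0, 2)  # image (h, w) -> surface (w, h)
    window.refresh()

sdl2.ext.quit()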

How to create a soundfile from fft-spectrum with Python?

Let's say I have a sound file (file1.wav) which is 1s long.
I can read it in via
from scipy.io import wavfile
samplerate, data = wavfile.read("file1.wav")
I can then Fourier-transform it via:
from scipy.fft import fft
yf=fft(data)
Now let's say I have another file, file2, which also contains a sound but does not have the same duration as file1 (it might also have a different samplerate).
I would like to create a sound from the spectrum yf that is as long as file2, and then add the two.
How can I compute a sound from file1 with the samplerate and duration of file2, so that I can add both?
It sounds like the essential question here is "how do I stretch/compress audio to another duration". This is a nontrivial task; there is no silver-bullet method that works well in all cases. See Audio time stretching and pitch scaling on Wikipedia. It also matters what kind of audio you are operating on: speech, music, or something else.
A decent place to start is the waveform-similarity-based synchronized overlap-add (WSOLA) algorithm. One way to perform WSOLA is with the free SoX command line utility, using its "tempo" effect:
Change the audio playback speed but not its pitch. This effect uses the WSOLA algorithm. The audio is chopped up into segments which are then shifted in the time domain and overlapped (cross-faded) at points where their waveforms are most similar as determined by measurement of ‘least squares’.
Example use:
sox infile.wav outfile.wav tempo -s 1.1
where 1.1 means "speed up by 10%" and -s configures the effect for speech (the other options are -m for music and -l for generic "linear" processing). There are further options besides these; check the documentation for more detail.
(Side note: A related problem is pitch shifting audio without changing the duration. SoX can do that too; see the "pitch" and "bend" effects.)
If you want to perform time stretching in Python, there is a pysox library that wraps SoX. Another possibility in Python is audiotsm, which implements WSOLA and a couple other time stretching methods.
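Applied to the original question, a minimal sketch with audiotsm that stretches file1.wav to the duration of file2.wav; it assumes both files share a sample rate (resample first if they don't):

# Sketch: time-stretch file1.wav to file2.wav's duration with WSOLA.
from audiotsm import wsola
from audiotsm.io.wav import WavReader, WavWriter
from scipy.io import wavfile

_, data1 = wavfile.read("file1.wav")
_, data2 = wavfile.read("file2.wav")
speed = len(data1) / len(data2)   # >1 shortens file1, <1 lengthens it

with WavReader("file1.wav") as reader:
    with WavWriter("file1_stretched.wav", reader.channels, reader.samplerate) as writer:
        tsm = wsola(reader.channels, speed=speed)
        tsm.run(reader, writer)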

How to extract audio after particular sound?

Let's say I have a few very long audio files (for example, radio recordings). I need to extract the 5 seconds that follow a particular sound (for example, an ad start sound) from each file. Each file may contain 3-5 such sounds, so I should end up with 3-5 result files per source file.
I found the librosa and scipy Python libraries, but I'm not sure whether they can help. What should I start with?
You could start by calculating the correlation of the signal with your particular sound. I'm not sure whether librosa offers this; I'd start with scipy.signal.correlate or scipy.signal.convolve.
Not sure what your background is. Start here if you need some theory.
Basically, the correlation will be high wherever the audio matches your particular signal or is very similar to it. After identifying these positions, you can select a region around each of them.
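A minimal sketch of that approach with scipy, assuming mono .wav files at the same sample rate; the detection threshold is a placeholder to tune by inspecting the correlation:

# Sketch: locate a template sound in a long recording by cross-correlation,
# then cut out the 5 s that follow each occurrence.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate, recording = wavfile.read("radio.wav")
_, template = wavfile.read("ad_start.wav")
recording = recording.astype(np.float64)
template = template.astype(np.float64)

corr = correlate(recording, template, mode="valid")
corr /= np.abs(corr).max()                 # normalise for a relative threshold

hits = np.flatnonzero(corr > 0.5)          # placeholder threshold
starts = hits[np.insert(np.diff(hits) > rate, 0, True)]  # one hit per run

for i, start in enumerate(starts):
    begin = start + len(template)
    clip = recording[begin: begin + 5 * rate]
    wavfile.write(f"clip_{i}.wav", rate, clip.astype(np.int16))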

Get Treadmill Speed Using PyAudio

I'm trying to read the speed of a manual treadmill (the York Pacer 2120 - manual: http://www.yorkfitness.com.au/uploaded/pdf_40Pacer%202120%20Treadmill_5500.pdf) by intercepting the wire that comes out of its speed sensor. My understanding, garnered by taking apart as much of the treadmill as I can, is that the speed sensor is basically a magnet attached to a big disk attached to the belt, which generates a current every time it passes a coil of wire.
The wire that comes out of the speed sensor ends in a 3.5mm jack. I plugged this into my laptop's microphone port and recorded the "sound" of me walking at both high and low speeds. I've attached images of the waveform recorded in Audacity for low and high speed respectively.
My aim is to measure the speed of the treadmill in real time so that I can pass it as input to my game engine and control the speed of a character in-game. I'm not sure of the best method, but at the moment I'm trying to measure the distance between the "beats" in Python using PyAudio.
To do this I copied the beat-detection code from the answer to another question (Detect beat and play (wav) file in a syncronised manner), but that gave me an unusably high rate of false positives.
Does anyone have any ideas on how else I could get a usable speed out of this signal? If you do, a code example would be very much appreciated. Failing that, how else would people go about measuring the speed of a manual treadmill? I've tried everything from using a camera to measure the distance between pieces of tape stuck to the belt, to physically attaching a mouse to the treadmill to measure the belt's speed.
The sound files are here:
https://www.dropbox.com/s/jbyl8c3ajv9e6xg/Fast_Raw.wav?dl=0
https://www.dropbox.com/s/0fp1mzuixhf5uju/Slow_Raw.wav?dl=0
And the audacity projects here:
https://www.dropbox.com/s/3cjvo3m2ln2ldet/AudacityFiles.zip?dl=0
I might look here: Convert multi-channel PyAudio into NumPy array
From looking at the audio, you just need a simple trigger for when the signal goes below zero. You could likely modify the callback method to detect when the amplitude has been positive and then stays negative for N samples, and count the occurrences per second to retrieve the speed.
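A rough sketch of that trigger in a PyAudio callback; the threshold, debounce window, and averaging period are placeholders to tune against the recorded waveform:

# Sketch: count one "beat" whenever the signal dips below a negative
# threshold, with a debounce so one magnet pass counts only once.
import time
import numpy as np
import pyaudio

THRESHOLD = -2000     # int16 units; placeholder, tune to your recording
DEBOUNCE = 0.1        # ignore re-triggers within 100 ms

state = {"last_beat": 0.0, "beats": []}

def callback(in_data, frame_count, time_info, status):
    samples = np.frombuffer(in_data, dtype=np.int16)
    now = time.time()
    if samples.min() < THRESHOLD and now - state["last_beat"] > DEBOUNCE:
        state["last_beat"] = now
        state["beats"].append(now)
        recent = [t for t in state["beats"] if now - t < 3.0]
        if len(recent) > 1:   # beats per second ~ belt speed
            print((len(recent) - 1) / (recent[-1] - recent[0]), "rev/s")
    return (in_data, pyaudio.paContinue)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=44100,
                 input=True, frames_per_buffer=1024, stream_callback=callback)
stream.start_stream()
while stream.is_active():
    time.sleep(0.1)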
I did eventually solve this, but I gave up on PyAudio and used a Raspberry Pi instead. I open-sourced the code in case anyone is interested: https://bitbucket.org/grootteam/gpio-treadmill-speed/

How do I track an animated object in Python?

I want to automate playing a video game with Python. I want to write a script that can grab the screen image, diff it with the next frame and track an object to click on. What libraries would be useful for this other than PIL?
There are a few options here. The brute-force diffing approach will lead to a lot of frustration unless what you're tracking is very consistent. You could use any number of genetic approaches to train your program on what to follow; after enough generations it would do the right thing reliably. If the thing you want to track is visually obvious (like a red ball on a white screen), you can detect it yourself by brute-force scanning of the bitmap.
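For the brute-force case, a minimal sketch with PIL and numpy, assuming a solid, distinctive target colour; the thresholds are placeholders:

# Sketch: grab the screen, mask "mostly red" pixels, and take their
# centroid as the click position. Thresholds are placeholders.
import numpy as np
from PIL import ImageGrab

frame = np.asarray(ImageGrab.grab())    # H x W x 3 RGB screenshot
r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
mask = (r > 180) & (g < 80) & (b < 80)

ys, xs = np.nonzero(mask)
if len(xs):
    print("target at", int(xs.mean()), int(ys.mean()))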
Another approach would be to look at the memory of the running app and figure out what area controls the position of your object. For more info and ideas on this, see how Mumble got 3D positional audio working in various games:
http://mumble.sourceforge.net/HackPositionalAudio
The answer would also depend on the platform and the game. For example, I once did something similar for a helicopter Flash game, since it was a very simple 2D game with a well-defined coloured maze. It ran on Windows, using copy-to-clipboard screen capture and Win32 key events via the win32api bindings for Python.
