How to insert EEG triggers from one PsychoPy script into another? - python

I am working on adding EEG triggers to a PsychoPy script that I created through Builder mode, as I am new to coding. The experiment is a series of audio recordings of sentence stems paired with visual word endings; the recordings and words are called up from a spreadsheet. We are interested in participants' responses upon viewing the word endings.
Below is my current script without the EEG triggers, and beneath it is a script from someone else who uses the same system, which they have used to insert EEG triggers. I would like to record from the end of the "Sentences" stimulus, through the "target" and "response" components, and ending after the participant makes their response.
Thank you very much for any help!
Here is the script I already have:
# ------ Prepare to start Routine "trial1" ------
t = 0
trial1Clock.reset()  # clock
frameN = -1
continueRoutine = True
# update component parameters for each repeat
target.setColor([1.000, 1.000, 1.000], colorSpace='rgb')
target.setText(word)
response = event.BuilderKeyResponse()
Sentences.setSound(sounds, secs=6)
# keep track of which components have finished
trial1Components = [target, response, Sentences, text_2]
for thisComponent in trial1Components:
    if hasattr(thisComponent, 'status'):
        thisComponent.status = NOT_STARTED
And here is the code to insert EEG triggers I am trying to integrate:
# Send event marker to NetStation
if mode == 'eeg' and stage == 'expt':
    code = 'item'
    ns.sync()
    ns.send_event(code, label='item', timestamp=egi.ms_localtime(), table={'item': curr_item})

You say that you wrote the script through Builder. Did you change the generated code afterwards? If not, then it is best to keep working from Builder itself, so that you can change other aspects of the experiment while keeping your triggers. Assuming that you can work in Builder:
If you send triggers via the parallel port, there's a component for that under I/O --> Parallel port.
Otherwise, you can insert a Code Component to run your code at the desired times:
In the "begin experiment" tab, add import xxxx as ns or however you created the ns object.
In the "begin routine" tab, add your trigger code to mark stimulus onset.
To mark stimulus offset, go to the "each frame" tab and either (a) listen for the stimulus status, e.g. if stim.status == FINISHED:, or (b) send the trigger at the predicted offset by setting trigger_sent = False in "begin routine" and then checking if t > 2 and not trigger_sent: (if your stimulus is 2 seconds long). A rough sketch of all three tabs follows below.
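As a rough illustration (not a drop-in solution), the three tabs could look roughly like this. It assumes the same egi/NetStation setup as in the snippet you posted; the import, connection details, event codes and table contents are placeholders to adapt to your recording system, and word comes from your spreadsheet loop:

# "Begin Experiment" tab -- create the NetStation connection once
# (assumption: the same egi module that defines ns in the other script)
import egi.simple as egi
ns = egi.Netstation()
# ns.connect('10.10.10.42', 55513)  # placeholder address/port of the NetStation machine

# "Begin Routine" tab -- mark the end of the Sentences audio / onset of the target word
trigger_sent = False
ns.sync()
ns.send_event('targ', label='target onset', timestamp=egi.ms_localtime(),
              table={'word': word})

# "Each Frame" tab -- option (b): send an offset trigger at the predicted time
if t > 2 and not trigger_sent:   # if the target is on screen for 2 seconds
    ns.sync()
    ns.send_event('toff', label='target offset', timestamp=egi.ms_localtime(),
                  table={'word': word})
    trigger_sent = True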

Related

asyncpg connection was closed

I have built a scraper using Selenium in one Docker container, while the database lives on a small Linode server.
The scraped data is then inserted into a Postgres database on the Linode.
The scraped data is stored as a list of dicts (List[Dict]).
However, there are times when this error occurs while trying to insert the data.
Client log:
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
Server log:
LOG: could not receive data from client: Connection reset by peer
LOG: could not receive data from client: Operation timed out
I have tried numerous solutions from Stack Overflow, such as:
Connection was closed in the middle of operation when accessing database using Python
and also setting the TCP keepalive parameters on the Postgres side to:
# - TCP settings -
# see "man tcp" for details
tcp_keepalives_idle = 300 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
tcp_keepalives_interval = 60 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
tcp_keepalives_count = 100 # TCP_KEEPCNT;
but to no avail.
Additionally, I have tried to log any errors in postgres itself but there doesn't seem to be any.
These are my log settings
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
One theory I can come up with is that some data takes longer to scrape and format, which causes the connection to be reset. However, subsequent inserts are successful.
Any help would be appreciated! Thanks
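One way to test that theory, and to make the insert robust against a dropped connection, is to connect only once the data is ready and to retry once with a fresh connection if the server has closed the old one. A minimal sketch, assuming asyncpg and a List[Dict] of records as described above; the table and column names are made up:

import asyncio
import asyncpg

async def insert_records(dsn, records):
    # Insert the scraped List[Dict], reconnecting once if the link was dropped.
    for attempt in range(2):
        conn = await asyncpg.connect(dsn)          # connect only when the data is ready
        try:
            await conn.executemany(
                # hypothetical table/columns -- adapt to your schema
                "INSERT INTO scraped_items (title, price) VALUES ($1, $2)",
                [(r["title"], r["price"]) for r in records],
            )
            await conn.close()
            return
        except asyncpg.ConnectionDoesNotExistError:
            conn.terminate()                       # drop the dead connection
            if attempt == 1:
                raise
            await asyncio.sleep(1)                 # brief pause, then retry once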

Audio signal split at word level boundary

I am working with an audio file using webrtcvad and pydub. At the moment, any fragment is split at the silences between sentences.
Is there any way the split can be done at word-level boundaries (after each spoken word)?
If librosa/ffmpeg/pydub has any feature like this, is a split at each spoken word possible? After the split, I need the start and end time of each word, exactly as it is positioned in the original file.
One simple way to split with ffmpeg is also described here:
https://gist.github.com/vadimkantorov/00bf4fbe4323360722e3d2220cc2915e
but this also splits by silence, and the result changes with every padding value or frame size. I am trying to split by spoken word.
As an example, I have done this manually: the original file, the split words, and their time positions in JSON are in a folder provided here under the link:
www.mediafire.com/file/u4ojdjezmw4vocb/attached_problem.tar.gz
Simple audio segmentation problems can be handled with a Hidden Markov Model, after preprocessing the audio into suitable features. Typical features for speech would be sound level and vocal activity / voicedness. To get word-level segmentation (as opposed to sentence-level), these need rather high time resolution. Unfortunately, pyWebRTCVAD does not have adjustable time smoothing, so it might not be suited to the task.
In your audio sample there is a radio host speaking rather quickly in German.
Looking at the sound levels with respect to the word boundaries you have marked, it is clear that between some words the sound level doesn't really drop. That rules out a simple sound-level segmentation model.
All in all, getting good results for general speech signals can be quite hard. Fortunately this is very well researched, and off-the-shelf solutions are available.
These typically use an acoustic model (how words and phonemes sound) as well as a language model (likely orderings of words), learned over many hours of audio.
Word segmentation using Speech Recognition library
All these features are included in a speech recognition framework, and many of them can produce word-level outputs with timing. Below is some working code for this using Vosk.
Alternatives to Vosk would be PocketSphinx, or an online speech recognition service from Google Cloud, Amazon Web Services, Azure, etc.
import sys
import os
import subprocess
import json
import math

# tested with VOSK 0.3.15
import vosk
import librosa
import numpy
import pandas


def extract_words(res):
    jres = json.loads(res)
    if not 'result' in jres:
        return []
    words = jres['result']
    return words

def transcribe_words(recognizer, bytes):
    results = []

    chunk_size = 4000
    for chunk_no in range(math.ceil(len(bytes)/chunk_size)):
        start = chunk_no*chunk_size
        end = min(len(bytes), (chunk_no+1)*chunk_size)
        data = bytes[start:end]

        if recognizer.AcceptWaveform(data):
            words = extract_words(recognizer.Result())
            results += words
    results += extract_words(recognizer.FinalResult())

    return results

def main():
    vosk.SetLogLevel(-1)

    audio_path = sys.argv[1]
    out_path = sys.argv[2]

    model_path = 'vosk-model-small-de-0.15'
    sample_rate = 16000

    audio, sr = librosa.load(audio_path, sr=16000)

    # convert to 16bit signed PCM, as expected by VOSK
    int16 = numpy.int16(audio * 32768).tobytes()

    # XXX: Model must be downloaded from https://alphacephei.com/vosk/models
    # https://alphacephei.com/vosk/models/vosk-model-small-de-0.15.zip
    if not os.path.exists(model_path):
        raise ValueError(f"Could not find VOSK model at {model_path}")

    model = vosk.Model(model_path)
    recognizer = vosk.KaldiRecognizer(model, sample_rate)

    res = transcribe_words(recognizer, int16)
    df = pandas.DataFrame.from_records(res)
    df = df.sort_values('start')

    df.to_csv(out_path, index=False)
    print('Word segments saved to', out_path)

if __name__ == '__main__':
    main()
Run the program with the .WAV file and the path to an output file.
python vosk_words.py attached_problem/main.wav out.csv
The script outputs words and their times in the CSV. These timings can then be used to split the audio file. Here is example output:
conf,end,start,word
0.618949,1.11,0.84,also
1.0,1.32,1.116314,eine
1.0,1.59,1.32,woche
0.411941,1.77,1.59,des
Comparing the output (bottom) with the example file you provided (top), it looks pretty good.
It actually picked up a word that your annotations did not include, "und" at 42.25 seconds.
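If it helps, the timings in out.csv can then be used to cut the original file into word clips, for example with pydub (a small sketch; the output directory name is made up):

import os
import pandas
from pydub import AudioSegment

words = pandas.read_csv('out.csv')
audio = AudioSegment.from_wav('attached_problem/main.wav')
os.makedirs('words', exist_ok=True)

for i, row in words.iterrows():
    # VOSK reports start/end in seconds; pydub slices in milliseconds
    clip = audio[int(row['start'] * 1000):int(row['end'] * 1000)]
    clip.export(os.path.join('words', '%04d_%s.wav' % (i, row['word'])), format='wav')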
Delimiting words is beyond the audio domain and requires a kind of intelligence. Doing it manually is easy because we are intelligent and know exactly what we are looking for, but automating the process is hard because, as you already noticed, a silence is not (not only, not always) a word delimiter.
At the audio level, we can only approximate a solution, and this requires both analyzing the amplitude of the signal and adding some time mechanisms. As an example, Pro Tools provides a nice tool named Strip Silence that cuts audio regions automatically based on the amplitude of the signal. It always keeps the material at its original position in the timeline, and naturally each region knows its own duration. In addition to the threshold in dB, and to prevent creating too many regions, it provides several useful parameters in the time domain: a minimum length for the created regions, a delay before the cut (computed from the point where the amplitude drops below the threshold), and an inverted delay before reopening the gate (computed backward from the point where the amplitude rises above the threshold).
This could be a good starting point for you. Implementing such a system probably won't be 100% successful, but you could obtain quite a good ratio if the settings are well adjusted to the speaker. Even if it's not perfect, it will significantly reduce the need for manual work.
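For what it's worth, a rough sketch of that kind of amplitude gate in Python could look like the following (librosa is used only for loading; the threshold and minimum-length values are placeholders to tune per speaker, and the pre/post delays from Strip Silence are left out for brevity):

import librosa
import numpy

def active_regions(path, threshold_db=-35.0, min_region_s=0.15,
                   frame_length=1024, hop_length=256):
    # Return (start, end) times in seconds where the signal stays above the threshold.
    audio, sr = librosa.load(path, sr=None, mono=True)
    # frame-wise RMS level, converted to dB relative to the loudest frame
    rms = librosa.feature.rms(y=audio, frame_length=frame_length, hop_length=hop_length)[0]
    level_db = librosa.amplitude_to_db(rms, ref=numpy.max)
    active = level_db > threshold_db

    regions = []
    start = None
    for i, is_active in enumerate(active):
        t = i * hop_length / sr
        if is_active and start is None:
            start = t                              # gate opens
        elif not is_active and start is not None:
            if t - start >= min_region_s:          # drop regions that are too short
                regions.append((start, t))
            start = None                           # gate closes
    if start is not None:
        regions.append((start, len(audio) / sr))
    return regions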

GATT Characteristics with read-property not found by application

I am trying to develop an application that is communicating with an external device using BLE. I have decided to use pygatt (Python) with BGAPI (using a BlueGiga dongle).
The device I am communicating with has a custom primary service with a set of characteristics. According to their specs they have 2 READ characteristics, 8 NOTIFY chars and 1 WRITE char. Initially, I want to read one of the two READ chars, but I am unable to do so. Their UUIDs are not recognized as characteristics. How can this be? I am 100% certain that they are entered correctly.
import pygatt
import bleconnect
import blelib
import logging

logging.basicConfig()
logging.getLogger('pygatt').setLevel(logging.DEBUG)

adapter = pygatt.BGAPIBackend(serial_port='/dev/tty.usbmodem1')
adapter.start()

# Find the device
result = adapter.scan(timeout=5)
for item in result:
    scan_name = item['name']
    scan_rssi = item['rssi']
    scan_address = item['address']
    if scan_name == bleconnect.TARGET_NAME:
        break

# Connect
device = adapter.connect(address=scan_address)

device.char_read(blelib.CHARACTERISTIC_DEVICE_FEATURES)
I can see in the debug messages that all the NOTIFY and WRITE characteristics are found, but not the two READ characteristics.
What am I missing?
This appears to be some kind of shortcoming in the pygatt API. I managed to read the actual value by using the bgapi library directly.
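In case it helps someone else to narrow this down, pygatt can also list every characteristic the backend actually discovered, which shows whether the two READ UUIDs are absent from discovery or merely mismatched. A small sketch (the device address is a placeholder):

import pygatt

adapter = pygatt.BGAPIBackend(serial_port='/dev/tty.usbmodem1')
adapter.start()
try:
    device = adapter.connect(address='01:23:45:67:89:AB')  # placeholder address
    # print every characteristic UUID the dongle discovered, with its handle
    for uuid, characteristic in device.discover_characteristics().items():
        print(uuid, hex(characteristic.handle))
finally:
    adapter.stop()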

Capturing audio with python on the raspberry pi fails

I am currently working on an easy-to-use audio capturing device for digitizing old cassette tapes (i.e. low fidelity). This device is based on a Raspberry Pi with a USB sound card, and it does nothing other than start the Python script listed below on bootup.
import alsaaudio
import wave
import os.path
import RPi.GPIO as GPIO
import key
import usbstick
import time

try:
    # Define storage
    path = '/mnt/usb/'
    prefix = 'Titel '
    extension = '.wav'

    # Configure GPIOs
    GPIO.setmode(GPIO.BOARD)
    button_shutdown = key.key(7)
    button_record = key.key(11)
    GPIO.setup(12, GPIO.OUT)
    GPIO.setup(15, GPIO.OUT)
    GPIO.output(12, GPIO.HIGH)

    # Start thread to detect external memory
    usb = usbstick.usbstick(path, 13)

    # Configure volume
    m = alsaaudio.Mixer('Mic', 0, 1)
    m.setvolume(100, 0, 'capture')

    # Only run until shutdown button gets pressed
    while not (button_shutdown.pressed()):

        # Only record if record button is pressed and memory is mounted
        if (button_record.pressed() and usb.ismounted()):

            # Create object to read input
            inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL, 'sysdefault:CARD=Device')
            inp.setchannels(1)
            inp.setrate(44100)
            inp.setformat(alsaaudio.PCM_FORMAT_S16_LE)
            inp.setperiodsize(1024)

            # Find next file name
            i = 0
            filename = ''
            while (True):
                i += 1
                filename = path + prefix + str(i) + extension
                if not (os.path.exists(filename)):
                    break
            print 'Neue Aufnahme wird gespeichert unter ' + filename

            # Create wave file
            wavfile = wave.open(filename, 'w')
            wavfile.setnchannels(1)
            wavfile.setsampwidth(2)
            wavfile.setframerate(44100)

            # Record sound
            while (button_record.pressed()):
                l, data = inp.read()
                wavfile.writeframes(data)
                GPIO.output(15, GPIO.HIGH)

            # Stop record and save
            print 'Aufnahme beendet\n'
            inp.close()
            wavfile.close()
            GPIO.output(15, GPIO.LOW)

        # Record has been started but no memory is mounted
        elif (button_record.pressed() and not usb.ismounted()):
            print 'Massenspeichergeraet nicht gefunden'
            print 'Warte auf Massenspeichergeraet'
            # Restart after timeout
            timestamp = time.time()
            while not (usb.ismounted()):
                if ((time.time() - timestamp) > 120):
                    time.sleep(5)
                    print 'Timeout.'
                    #reboot()
                    #myexit()
            print 'Massenspeichergeraet gefunden'

    myexit()

except KeyboardInterrupt:
    myexit()
According to the pyalsaaudio documentation, the routine inp.read() (i.e. alsaaudio.PCM.read()) should usually wait until a full period of 1024 samples has been captured. It then returns the number of captured samples as well as the samples themselves. Most of the time it returns exactly one period of 1024 samples. I don't think I have a performance problem, since in that case I would expect it to return several periods at once.
THE VERY MYSTERIOUS BEHAVIOR: After 01:30 of recording, inp.read() takes a few milliseconds longer than normal to process (this is useful information in my ignorant opinion) and then returns -32 and faulty data. Then the stream continues. Half a minute later, at 02:00, it takes about a second (i.e. longer than the first time) to process and returns -32 and faulty data again. This procedure then repeats every minute (02:30-03:00, 03:30-04:00, 04:30-05:00). These timings were roughly taken by hand.
-32 seems to result from the following code line in /pyalsaaudio-0.7/alsaaudio.c
return -EPIPE;
Some words about how this manifests: if the data stream is written directly into the wave file, i.e. including the faulty period, the file contains sections of white noise. These sections last 30 seconds. This is because the samples usually consist of 2 bytes. When the faulty period (1 byte) is written, the byte order gets inverted. With the next faulty period it gets inverted again and is therefore correct. If faulty data is refused and only correct data is written into the wave file, the file 'jumps' every 30 seconds.
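(Concretely, "refusing faulty data" means a guard roughly like the following around the read, using the same variable names as in the script above; it keeps the noise out of the file but of course does not fix the overrun itself.)

# Record sound, skipping periods that report an overrun
while (button_record.pressed()):
    l, data = inp.read()
    if l > 0:
        # a positive return value is the number of frames actually captured
        wavfile.writeframes(data)
    elif l < 0:
        # negative values (here -32, i.e. -EPIPE) signal an overrun;
        # the buffer contents are not valid, so do not write them
        print 'Overrun, period skipped'
    GPIO.output(15, GPIO.HIGH)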
I think the problem can be found in either
1. the sound card (but I tested 2 different ones)
2. the computing performance of the Raspberry Pi
3. the pyalsaaudio library
Further note: I am pretty new to Linux and Python. If you need any logs or similar, please describe how I can find them.
To cut a long story short: Could someone please help me? How can I solve this error?
EDIT: I already did the USB firmware update, which is needed since the USB controller can be overwhelmed. BTW: what exactly is this EPIPE failure?
Upon further investigation, I found out that this error is not Python/pyalsaaudio specific. When I try to record a wave file with arecord, I get the following output. The overrun messages appear with the timing described above.
pi@raspberrypi ~ $ sudo arecord -D sysdefault:CARD=Device -B buffersize=192000 -f cd -t wav /mnt/usb/test.wav
Recording WAVE '/mnt/usb/test.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo
overrun!!! (at least -1871413807.430 ms long)
overrun!!! (at least -1871413807.433 ms long)
overrun!!! (at least -1871413807.341 ms long)
overrun!!! (at least -1871413807.442 ms long)
Referring to this thread at raspberrypi.org, the problem seems to be the (partly) limited write speed to the SD card or USB storage device on a Raspberry Pi. Recording to RAM (with tmpfs) or compressing the audio data (e.g. to MP3 with lame) before writing it somewhere else could be a good solution in some cases.
I can't say why the write speed is too low. In my opinion, the data stream is 192 kByte/s for 48 kSamples/s, 16 bit, stereo. Any SD card or USB mass storage device should be able to handle this. As seen above, buffering the audio stream doesn't help.
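For illustration, the tmpfs idea could look roughly like this: record into /dev/shm (a RAM-backed filesystem that exists by default on Raspbian) and only move the finished file to the USB stick afterwards, so the slow flash write can no longer stall the capture loop. A sketch, with the paths as assumptions:

import os.path
import shutil
import wave

TMP_DIR = '/dev/shm/'        # RAM-backed tmpfs, fast enough for the 192 kByte/s stream
USB_DIR = '/mnt/usb/'        # final destination, as in the question

def open_recording(filename):
    # record into RAM instead of directly onto the stick
    wavfile = wave.open(os.path.join(TMP_DIR, filename), 'w')
    wavfile.setnchannels(1)
    wavfile.setsampwidth(2)
    wavfile.setframerate(44100)
    return wavfile

def finish_recording(wavfile, filename):
    # the slow copy to the flash device happens only once, after recording,
    # so it can no longer cause overruns while the PCM device is being read
    wavfile.close()
    shutil.move(os.path.join(TMP_DIR, filename), os.path.join(USB_DIR, filename))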

How to organize a Python GIS-project with multiple analysis steps?

I just started to use ArcPy to analyse geo-data with ArcGIS. The analysis has different steps, which are to be executed one after the other.
Here is some pseudo-code:
import arcpy
# create a masking variable
mask1 = "mask.shp"
# create a list of raster files
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
# step 1 (e.g. clipping of each raster to study extent)
for index, item in enumerate(files_to_process):
    raster_i = "temp/ras_tem_" + str(index) + ".tif"
    arcpy.Clip_management(item, '#', raster_i, mask1)
# step 2 (e.g. change projection of raster files)
...
# step 3 (e.g. calculate some statistics for each raster)
...
etc.
This code works amazingly well so far. However, the raster files are big and some steps take quite long to execute (5-60 minutes). Therefore, I would like to execute those steps only if the input raster data changes. From the GIS-workflow point of view, this shouldn't be a problem, because each step saves a physical result on the hard disk which is then used as input by the next step.
I guess if I want to temporarily disable e.g. step 1, I could simply put a # in front of every line of this step. However, in the real analysis, each step might have a lot of lines of code, and I would therefore prefer to outsource the code of each step into a separate file (e.g. "step1.py", "step2.py",...), and then execute each file.
I experimented with execfile(step1.py), but received the error NameError: global name 'files_to_process' is not defined. It seems that the variables defined in the main script are not automatically passed to scripts called by execfile.
I also tried this, but I received the same error as above.
I'm a total Python newbie (as you might have figured out by the misuse of any Python-related expressions), and I would be very thankful for any advice on how to organize such a GIS project.
I think what you want to do is build each step into a function. These functions can be stored in the same script file or in their own module that gets loaded with the import statement (just like arcpy). The pseudo code would be something like this:
#file 1: steps.py
def step1(input_files):
    # step 1 code goes here
    print 'step 1 complete'
    return

def step2(input_files):
    # step 2 code goes here
    print 'step 2 complete'
    return output  # optionally return a derivative here

#...and so on
Then in a second file in the same directory, you can import and call the functions passing the rasters as your inputs.
#file 2: analyze.py
import steps
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
steps.step1(files_to_process)
#steps.step2(files_to_process) # uncomment this when you're ready for step 2
Now you can selectively call different steps of your code, and it only requires commenting out one line instead of a whole chunk of code. Hopefully I understood your question correctly.
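To also get the "only run a step if the input raster data changed" behaviour from the question, each call can be wrapped in a small modification-time check. A sketch, assuming every step writes its results to known output paths (as the clipping step above does):

import os
import steps  # the steps.py module from above

def is_up_to_date(inputs, outputs):
    # a step can be skipped if every output exists and is newer than all of its inputs
    if not all(os.path.exists(p) for p in outputs):
        return False
    newest_input = max(os.path.getmtime(p) for p in inputs)
    oldest_output = min(os.path.getmtime(p) for p in outputs)
    return oldest_output >= newest_input

files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
clipped = ["temp/ras_tem_%d.tif" % i for i in range(len(files_to_process))]

if not is_up_to_date(files_to_process, clipped):
    steps.step1(files_to_process)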
