I am attempting to read data from a load cell panel meter that is configured for the Modbus RTU protocol. The goal of the program is to log data; I will include the entire program below, but the setup is where I am having issues. I have gotten the module to respond with data, and it is wired in the only configuration that allows communication over USB, so I assume that part is done correctly.
The response being returned to me, which I am saving as 'load', is:
[[ 0* 2* 3245 0 -28 -1 1000 0]]
*The stars represent the parts of the response which look fully incorrect, based on the expected return values.
This seems to be incorrect; the response I expect is structured as:
[Slave Address, Function, Byte Count, Data Hi, Data Lo, Data Hi, Data Lo, Error Check Lo, Error Check Hi]
for a total of 9 bytes (72 bits). So I would expect a response to look more like this:
[1*, 4*, 4, 00, 06, 00, 05, DB, 86]
*The stars represent the parts of the response which look fully incorrect, based on the expected return values.
**Expected response taken from: https://www.modbustools.com/modbus.html#function04
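For reference, the last two bytes of such a frame are the standard Modbus CRC-16 of everything that precedes them. This is my own small sketch (not from the linked page) for checking that against a captured frame; the example bytes are the expected response above:
def modbus_crc(frame_bytes):
    # Standard Modbus CRC-16: init 0xFFFF, reflected polynomial 0xA001,
    # transmitted low byte first.
    crc = 0xFFFF
    for byte in frame_bytes:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return (crc & 0xFF, (crc >> 8) & 0xFF)   # (Error Check Lo, Error Check Hi)

crc_lo, crc_hi = modbus_crc([0x01, 0x04, 0x04, 0x00, 0x06, 0x00, 0x05])
print('%02X %02X' % (crc_lo, crc_hi))        # prints "DB 86" for the frame above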
I would also expect the values of the Data Bytes to change as I add or remove load from the cell, since the panel meter is reading correctly, but the response does not change as the program loops. Does anyone with MinimalModbus experience have any guesses as to what might be going wrong to produce this return?
This is the code of interest:
import minimalmodbus
import serial
import numpy as np

units = "lb."
comPort = "COM6"
baudRate = 9600
functionImplemented = 4

minimalmodbus.slaveaddress = 1
minimalmodbus.registeraddress = 3  # 1, 2, 3 for this application

instrument = minimalmodbus.Instrument(comPort, minimalmodbus.slaveaddress)
instrument.serial.port = comPort
instrument.serial.baudrate = baudRate
instrument.serial.parity = serial.PARITY_EVEN
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.mode = minimalmodbus.MODE_RTU
instrument.serial.timeout = 2

n = 0
run = True
while run is True:
    # record new temperature values WHEN UPDATED?
    load = np.array([[1, 1, 1, 1, 1, 1, 1, 1]])
    for x in range(8):
        i = x + 1
        minimalmodbus.registeraddress = i
        load[0, x] = instrument.read_register(minimalmodbus.registeraddress,
                                              number_of_decimals=0,
                                              functioncode=functionImplemented,
                                              signed=True)
    print("load (" + str(n) + "): " + str(load))
    n = n + 1
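As an aside on interpreting the output (this is my understanding of MinimalModbus, so treat it as an assumption to verify): read_register() does not hand back the raw 9-byte frame sketched above. It strips the slave address, function code, byte count and CRC and returns only the decoded register value as a Python int, so 'load' ends up holding eight plain register values rather than frame bytes. Two things that may help while debugging are turning on the library's debug output and reading all eight registers in a single request, roughly like this:
# Sketch only: MinimalModbus conveniences that I believe apply here.
instrument.debug = True                     # print the raw request/response bytes
values = instrument.read_registers(1, 8,    # start register, number of registers
                                   functioncode=4)
print(values)                               # decoded 16-bit register values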
This is the full program, in case anyone is interested; the malfunctioning part is what is listed above:
##author: Jack M
import time
import minimalmodbus
import serial
import numpy as np
import matplotlib.pyplot as plt

units = "lb."
comPort = "COM6"
baudRate = 9600
functionImplemented = 4

minimalmodbus.slaveaddress = 1
minimalmodbus.registeraddress = 3  # 1, 2, 3 for this application

instrument = minimalmodbus.Instrument(comPort, minimalmodbus.slaveaddress)
instrument.serial.port = comPort
instrument.serial.baudrate = baudRate
instrument.serial.parity = serial.PARITY_EVEN
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.mode = minimalmodbus.MODE_RTU
instrument.serial.timeout = 2

stime = np.array([[0]])
sload = np.array([[1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2]])
print("sload: " + str(sload))

n = 0
run = True
while run is True:
    # record new temperature values WHEN UPDATED?
    load = np.array([[1, 1, 1, 1, 1, 1, 1, 1]])
    for x in range(8):
        i = x + 1
        minimalmodbus.registeraddress = i
        load[0, x] = instrument.read_register(minimalmodbus.registeraddress,
                                              number_of_decimals=0,
                                              functioncode=functionImplemented,
                                              signed=True)
    print("load (" + str(n) + "): " + str(load))
    sload = np.append(sload, load, axis=0)
    n = n + 1
    newTime = [[time.time()]]
    # store the input values
    stime = np.append(stime, newTime, axis=0)

plt.plot(stime, sload, 'ro')
plt.title("Load (lbf) vs. Time (s)")
plt.xlabel("Time (s)")
plt.ylabel("Load (lbf)")
plt.show()
print("Max: " + str(np.max(sload)))
Related
I am collecting over 100k FastFrame images (100 frames, with 15k points each) in summary mode, retrieving them via Python pyvisa using NI-VISA.
The error is as follows:
SEVERE
The system is low on memory. Some results may be incomplete. To remedy, reduce record length or remove one or more analytical features such as math, measurements, bus decode or search.
After that, I can disconnect, connect again, and send commands that update the window, but I cannot query anything.
I suspect it has something to do with a memory leak in the MSO56's RAM, or with the communication queue.
Commands like *RST, CLEAR, LCS, and FACTORY do not fix the error.
import pyvisa
import time

if __name__ == '__main__':
    ## DEV Signal
    rm = pyvisa.ResourceManager()
    ll = rm.list_resources()
    print('\n\n\n----------------------\nAvailable Resources\n----------------------')
    for i in range(len(ll)):
        print(F'Resource ID {i}: {ll[i]}')
    # i = int(input(F"\n\nPlease select 'Resource ID' from above: "))
    i = 0
    inst = rm.open_resource(ll[i])
    inst.timeout = 10000
    reset = inst.write("*RST")
    ind = inst.query("*IDN?")
    print(F"\nResource {i}: {ind}")
    inst.write('C1:OUTP ON')
    inst.write('C2:OUTP ON')

    # Wave signal
    Ch = 1  # channel 1 || 2
    wave_name = 'UT1'
    Frq = 500000  # Hz
    Peri = 1 / Frq  # Length of waveform
    print(F"Period: {Peri}")
    # trigger on channel 2
    inst.write(F'C2:BSWV WVTP,SQUARE,FRQ,{Frq},AMP,1,OFST,0,DUTY,1')
    # signal on channel 1
    inst.write(F'C1:BSWV WVTP,SQUARE,FRQ,{Frq},AMP,1,OFST,0,DUTY,10')
    inst = []

    scope_ip = '192.168.0.10'
    rm = pyvisa.ResourceManager()
    ll = rm.list_resources()
    print(ll)
    for l in ll:
        if scope_ip in l:
            vScope = rm.open_resource(l)
    # vScope.clear()
    # vScope.close()
    vScope.timeout = 2000

    ## attempts to fix Memory Error
    vScope.write_raw('FPANEL:PRESS RESTART')
    vScope.write_raw('*PSC ON')
    vScope.write_raw('*CLS')
    vScope.write_raw('FACTORY\n')
    vScope.write_raw('*RST')
    vScope.write_raw('*CLEAR')
    vScope.write_raw('*ESE 1')
    vScope.write_raw('*SRE 0')
    vScope.write_raw('DESE 1')
    print('\nESR')
    print(vScope.query('*ESR?'))
    # print('\nEVMSG?')
    # print(vScope.query('*EVMsg?'))
    # print(vScope.query('*ESE ?'))

    # Display Wave Forms
    vScope.write_raw('DISPLAY:WAVEVIEW1:CH1:STATE 1')
    vScope.write_raw('DISPLAY:WAVEVIEW1:CH2:STATE 1')
    # Vertical Command Groups.
    vScope.write_raw('CH1:Coupling DC')
    vScope.write_raw('CH2:Coupling DC')
    vScope.write_raw('CH1:SCALE .5')  # *10 for the range
    vScope.write_raw('CH2:SCALE .5')
    vScope.write_raw('CH1:Position 0')
    vScope.write_raw('CH2:Position 0')
    vScope.write_raw('TRIGGER:A:TYPE EDGE')
    vScope.write_raw('TRIGGER:A:EDGE:SOURCE CH2')
    vScope.write_raw('TRIGger:A:LEVEL:CH2 0')
    vScope.write_raw('TRIGger:A:EDGE:SLOpe RISE')
    vScope.write_raw('Horizontal:Position 10')
    vScope.write_raw('Horizontal:MODE MANUAL')
    vScope.write_raw('Horizontal:Samplerate 25000000000')
    vScope.write_raw('HORIZONTAL:MODE:RECORDLENGTH 25000')
    vScope.write_raw('DATA:SOURCE CH1')
    vScope.write_raw('ACQUIRE:STOPAFTER SEQUENCE')  ## triggers re-read

    nframes = 100
    vScope.write_raw(F"HORIZONTAL:FASTFRAME:COUNT {nframes}")
    if int(1):
        vScope.write_raw(F"DATA:FRAMESTART {1+nframes}")
    else:
        vScope.write_raw('DATA:FRAMESTART 1')
    vScope.write_raw(F"DATA:FRAMESTOP {1+nframes}")
    vScope.write_raw('HORIZONTAL:FASTFRAME:STATE OFF')
    vScope.write_raw('HORIZONTAL:FASTFRAME:STATE ON')
    vScope.write_raw('HORIZONTAL:FASTFRAME:SUMFRAME:STATE ON')
    vScope.write_raw(F"HORIZONTAL:FASTFRAME:SELECTED {1+nframes}")

    t0 = time.time()
    for i in range(1000000):
        vScope.write_raw('ACQUIRE:STATE 1')  ## triggers re-read
        vScope.query('*opc?')
        vScope.write_raw(b'WFMOUTPRE?')
        wfmo = vScope.read_raw()
        vScope.write_raw('CURVE?')
        curve = vScope.read_raw()
        if i % 100 == 0:
            print(F"Iteration {i}")
            print(F"Time Delta: {time.time()-t0}")
            t0 = time.time()
Poor solution:
Restarting the scope with the power button works.
I should have added this in the question, but it takes ~2 minutes and is not an elegant solution.
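One stop-gap I have been trying (sketch only, and it assumes the low-memory condition is flagged in the scope's event/status registers, which I have not confirmed) is to poll *ESR? between acquisitions and bail out of the loop before the instrument stops answering queries:
# Hypothetical sketch: poll the status register already used in the script above
# and stop acquiring as soon as an event bit is set, instead of looping blindly.
def scope_healthy(scope):
    esr = int(scope.query('*ESR?'))       # standard event status register
    if esr == 0:
        return True
    print('ESR reported 0x%02x' % esr)    # some error/event bit is set
    return False

for i in range(1000000):
    vScope.write('ACQUIRE:STATE 1')
    vScope.query('*OPC?')
    if not scope_healthy(vScope):
        break                             # stop before queries start timing out
    vScope.write('CURVE?')
    curve = vScope.read_raw()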
I am running the code below for runHeartBreathRateKraskov and am facing the error shown below.
I want to run this code to calculate Transfer Entropy with the runHeartBreathRateKraskov program. I am new to this and do not have much knowledge about Transfer Entropy and Mutual Information. I have also attached my dataset for reference.
from jpype import *
^
IndentationError: unexpected indent
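That traceback just means the very first statement of the script, from jpype import *, has whitespace in front of it: Python rejects an indented module-level line before any of the JIDT code runs. Assuming the indentation crept in while copying the demo (I cannot see the exact file on disk), de-indenting the top-level lines so they start in column 0 should clear this particular error; the listing below shows the intended layout.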
# Run e.g. python runHeartBreathRateKraskov.py 2 2 1,2,3,4,5,6,7,8,9,10
from jpype import *
import sys
import os
import random
import math
import string
import numpy

# Import our readFloatsFile utility in the above directory:
sys.path.append(os.path.relpath(".."))
import readFloatsFile

# Change location of jar to match yours:
# jarLocation = "../../../infodynamics.jar"
jarLocation = "/home/humair/Documents/Transfer Entropy/infodynamics-dist-1.5/infodynamics.jar"
# Start the JVM (add the "-Xmx" option with say 1024M if you get crashes due to not enough memory space)
startJVM(getDefaultJVMPath(), "-ea", "-Djava.class.path=" + jarLocation)

# Read in the command line arguments and assign defaults if required.
# The first argument in argv is the filename, so program arguments start from index 1.
if len(sys.argv) < 2:
    kHistory = 1
else:
    kHistory = int(sys.argv[1])
if len(sys.argv) < 3:
    lHistory = 1
else:
    lHistory = int(sys.argv[2])
if len(sys.argv) < 4:
    knns = [4]
else:
    knnsStrings = sys.argv[3].split(",")
    knns = [int(i) for i in knnsStrings]
if len(sys.argv) < 5:
    numSurrogates = 0
else:
    numSurrogates = int(sys.argv[4])

# Read in the data
datafile = '/home/humair/Documents/Transfer Entropy/SFI-heartRate_breathVol_bloodOx.txt'
rawData = readFloatsFile.readFloatsFile(datafile)
# As numpy array:
data = numpy.array(rawData)
# Heart rate is the first column, and we restrict to the samples that Schreiber mentions (2350:3550)
heart = data[2349:3550, 0]  # Extracts what Matlab does with the 2350:3550 argument there.
# Chest vol is the second column
chestVol = data[2349:3550, 1]
# bloodOx = data[2349:3550, 2]
timeSteps = len(heart)
print("TE for heart rate <-> breath rate for Kraskov estimation with %d samples:" % timeSteps)

# Using a KSG estimator for TE is the least biased way to run this:
teCalcClass = JPackage("infodynamics.measures.continuous.kraskov").TransferEntropyCalculatorKraskov
teCalc = teCalcClass()
teHeartToBreath = []
teBreathToHeart = []

for knnIndex in range(len(knns)):
    knn = knns[knnIndex]
    # Compute a TE value for knn nearest neighbours
    # Perform calculation for heart -> breath (lag 1)
    teCalc.initialise(kHistory, 1, lHistory, 1, 1)
    teCalc.setProperty("k", str(knn))
    teCalc.setObservations(JArray(JDouble, 1)(heart),
                           JArray(JDouble, 1)(chestVol))
    teHeartToBreath.append(teCalc.computeAverageLocalOfObservations())
    if numSurrogates > 0:
        teHeartToBreathNullDist = teCalc.computeSignificance(numSurrogates)
        teHeartToBreathNullMean = teHeartToBreathNullDist.getMeanOfDistribution()
        teHeartToBreathNullStd = teHeartToBreathNullDist.getStdOfDistribution()
    # Perform calculation for breath -> heart (lag 1)
    teCalc.initialise(kHistory, 1, lHistory, 1, 1)
    teCalc.setProperty("k", str(knn))
    teCalc.setObservations(JArray(JDouble, 1)(chestVol),
                           JArray(JDouble, 1)(heart))
    teBreathToHeart.append(teCalc.computeAverageLocalOfObservations())
    if numSurrogates > 0:
        teBreathToHeartNullDist = teCalc.computeSignificance(numSurrogates)
        teBreathToHeartNullMean = teBreathToHeartNullDist.getMeanOfDistribution()
        teBreathToHeartNullStd = teBreathToHeartNullDist.getStdOfDistribution()
    print("TE(k=%d,l=%d,knn=%d): h->b = %.3f" % (kHistory, lHistory, knn, teHeartToBreath[knnIndex])),  # , for no newline
    if numSurrogates > 0:
        print(" (null = %.3f +/- %.3f)" % (teHeartToBreathNullMean, teHeartToBreathNullStd)),
    print(", b->h = %.3f nats" % teBreathToHeart[knnIndex]),
    if numSurrogates > 0:
        print("(null = %.3f +/- %.3f)" % (teBreathToHeartNullMean, teBreathToHeartNullStd)),
    print

# Exercise: plot the results
The dataset is:
The first column is heart rate, the second is chest volume, and the third is blood oxygen concentration.
76.53 8320 7771
76.53 8117 7774
76.15 7620 7788
75.39 6413 7787
75.51 7518 7767
76.67 1247 7773
78.55 -3525 7784
79.96 2388 7764
79.71 8296 7775
78.30 7190 7784
77.02 6024 7777
76.62 5825 7784
76.53 5154 7809
76.65 7464 7805
76.95 5345 7806
78.46 -993 7813
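For completeness, the same slice of the data that the script works on can also be loaded directly with numpy; this is just a sketch assuming the file is whitespace-separated exactly as shown above:
import numpy

# Sketch: load the three whitespace-separated columns and take the
# 2350:3550 sample range used in the script (0-based slice 2349:3550).
data = numpy.loadtxt('/home/humair/Documents/Transfer Entropy/SFI-heartRate_breathVol_bloodOx.txt')
heart = data[2349:3550, 0]     # heart rate
chestVol = data[2349:3550, 1]  # chest volume
bloodOx = data[2349:3550, 2]   # blood oxygen concentration
print(heart.shape, chestVol.shape, bloodOx.shape)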
I'm using pygatt on an RPi Zero W to access the HR notification stream from a Polar H10 chest belt. The goal is to let an LED blink with the heart rate. Notifications arrive for ca. 100 s, then none arrive anymore. No error message, or any hint recognizable to me, shows up in the debug log.
Any help is greatly appreciated.
The code used is:
import pygatt  # needed for GATTToolBackend and BLEAddressType below
from pygatt.util import uuid16_to_uuid
from pygatt.exceptions import NotConnectedError, NotificationTimeout
import binascii
import time
import logging
import RPi.GPIO as gpio

MAC = 'E7:17:FD:20:B1:AA'  # MAC address of the Polar H10 belt
HR = 0
RRi1 = 0
RRi2 = 0
LED_On_time = 0.15  # seconds
GPIO_port = 19

gpio.setmode(gpio.BCM)
gpio.setup(GPIO_port, gpio.OUT)

logging.basicConfig(filename='/home/pi/python/debug.log', filemode='w', level=logging.DEBUG)
logging.getLogger('pygatt').setLevel(logging.DEBUG)


def callback(handle, measure):
    global HR, RRi1, RRi2
    if handle == 16:
        for i in range(len(measure)):
            if i == 1:
                print('Heart rate = ', measure[1], ' bpm')
                HR = measure[1]
            if i == 2:
                RRi1 = round((measure[2] + 256 * measure[3]) / 1024, 2)
                print('RR intervall = %.2f' % RRi1, ' s')
            if i == 4:
                RRi2 = round((measure[4] + 256 * measure[5]) / 1024, 2)
                print('RR intervall = %.2f' % RRi2, ' s')


def Init():
    adapter = pygatt.GATTToolBackend()
    adapter.start()
    try:
        """ connect to bluetooth MAC address with 5 seconds timeout"""
        device = adapter.connect(MAC, address_type=pygatt.BLEAddressType.random)
        device.bond()
        """ generate characteristics uuid's """
        uuid_heart_service = uuid16_to_uuid(0x2A37)
        """ discover all characteristics uuid's"""
        device.discover_characteristics()
        device.subscribe(uuid_heart_service, callback, True)
    except NotConnectedError:
        print('No connection established ')
        quit()


Init()
t = time.time()  # Initialize with a reasonable value
while (1):
    gpio.output(GPIO_port, gpio.HIGH)
    time.sleep(LED_On_time)
    gpio.output(GPIO_port, gpio.LOW)
    time.sleep(max(0, 60 / max(HR, 30) - (time.time() - t)))
    t = time.time()
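A workaround I am considering is a watchdog that reconnects when the stream goes quiet. This is only a sketch: it reuses Init() and the imports from the program above, and both the 10 s threshold and the idea that re-running Init() re-establishes the subscription are assumptions on my part:
# Hypothetical watchdog sketch: track when the last notification arrived and
# re-initialise the connection if the stream goes silent.
last_seen = time.time()

def callback(handle, measure):
    global HR, last_seen
    if handle == 16:
        HR = measure[1]
        last_seen = time.time()   # remember when data last arrived

Init()
while True:
    if time.time() - last_seen > 10:          # no notification for 10 s
        print('Notification stream stalled, reconnecting...')
        Init()                                # re-create adapter and subscription
        last_seen = time.time()
    time.sleep(1)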
I am currently using 3 Raspberry Pis. Each of them should be able to collect x, y and z data with an accelerometer. However, a problem occurs when I run the following script on my newest Raspberry Pi:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Example on how to read the ADXL345 accelerometer.
# Kim H. Rasmussen, 2014
import sys, math, os, spidev, datetime, ftplib

# Setup SPI
spi = spidev.SpiDev()
#spi.mode = 3 <-- Important: Do not do this! Or SPI won't work as intended, or even at all.
spi.open(0,0)
spi.mode = 3

# Read the Device ID (should be xe5)
id = spi.xfer2([128, 0])
print 'Device ID (Should be 0xe5):\n' + str(hex(id[1])) + '\n'

# Read the offsets
xoffset = spi.xfer2([30 | 128, 0])
yoffset = spi.xfer2([31 | 128, 0])
zoffset = spi.xfer2([32 | 128, 0])
accres = 2
accrate = 13

print 'Offsets: '
print xoffset[1]
print yoffset[1]
# print str(zoffset[1]) + "\n\nRead the ADXL345 every half second:"

# Initialize the ADXL345
def initadxl345():
    # Enter power saving state
    spi.xfer2([45, 0])
    # Set data rate to 100 Hz. 15=3200, 14=1600, 13=800, 12=400, 11=200, 10=100 etc.
    spi.xfer2([44, accrate])
    # Enable full range (10 bits resolution) and +/- 16g 4 LSB
    spi.xfer2([49, accres])
    # Enable measurement
    spi.xfer2([45, 8])

# Read the ADXL x-y-z axia
def readadxl345():
    rx = spi.xfer2([242, 0, 0, 0, 0, 0, 0])
    #
    out = [rx[1] | (rx[2] << 8), rx[3] | (rx[4] << 8), rx[5] | (rx[6] << 8)]
    # Format x-axis
    if (out[0] & (1 << 16 - 1)):
        out[0] = out[0] - (1 << 16)
    # out[0] = out[0] * 0.004 * 9.82
    # Format y-axis
    if (out[1] & (1 << 16 - 1)):
        out[1] = out[1] - (1 << 16)
    # out[1] = out[1] * 0.004 * 9.82
    # Format z-axis
    if (out[2] & (1 << 16 - 1)):
        out[2] = out[2] - (1 << 16)
    # out[2] = out[2] * 0.004 * 9.82
    return out

# Initialize the ADXL345 accelerometer
initadxl345()

# Read the ADXL345 every half second
timetosend = 60
while(1):
    with open('/proc/uptime', 'r') as f:  # get uptime
        uptime_start = float(f.readline().split()[0])
    uptime_last = uptime_start
    active_file_first = "S3-" + str(pow(2,accrate)*25/256) + "hz10bit" + str(accres) + 'g' + str(datetime.datetime.utcnow().strftime('%y%m%d%H%M')) $
    active_file = active_file_first.replace(":", ".")
    wStream = open('/var/log/sensor/' + active_file, 'wb')
    finalcount = 0
    print "Creating " + active_file
    while uptime_last < uptime_start + timetosend:
        finalcount += 1
        time1 = str(datetime.datetime.now().strftime('%S.%f'))
        time2 = str(datetime.datetime.now().strftime('%M'))
        time3 = str(datetime.datetime.now().strftime('%H'))
        time4 = str(datetime.datetime.now().strftime('%d'))
        time5 = str(datetime.datetime.now().strftime('%m'))
        time6 = str(datetime.datetime.now().strftime('%Y'))
        axia = readadxl345()
        wStream.write(str(round(float(axia[0])/1024,3))+','+str(round(float(axia[1])/1024,3))+','+str(round(float(axia[2])/1024,3))+','+time1+','+ti$
        # Print the reading
        # print axia[0]
        # print axia[1]
        # print str(axia[2]) + '\n'
        # elapsed = time.clock()
        # current = 0
        # while(current < timeout):
        #     current = time.clock() - elapsed
        with open('/proc/uptime', 'r') as f:
            uptime_last = float(f.readline().split()[0])
    wStream.close()

def doftp(the_active_file):
    session = ftplib.FTP('192.0.3.6', 'sensor3', 'L!ghtSp33d')
    session.cwd("//datalogger//")
    file = open('/var/log/sensor/' + active_file, 'rb')  # file to send
    session.storbinary('STOR' + active_file, file)       # send the file
    file.close()
    session.quit
My two other Raspberry Pis show the following when I run the script:
Device ID (Should be 0xe5):
0xe5
Offsets:
0
0
This is supposed to be the same for my 3rd Raspberry Pi regardless of how the accelerometer is positioned before I run the script.
However for some reason I get an output like this with my new Raspberry Pi:
Device ID (Should be 0xe5):
0x1
Offsets:
1
1
Sometimes it shows a completely different Device ID and offsets.
All 3 Raspberry Pis have exactly the same contents in both /etc/modules and /boot/config.txt.
When I run ls /dev/*spi*, I get /dev/spidev0.0 /dev/spidev0.1 on all 3 Raspberry Pis.
After exchanging MicroSD cards between the Raspberry Pis, it became clear that the issue has nothing to do with the hardware; it comes down to the software.
Does anyone here have an idea of how to fix this issue? The fact that it isn't showing the proper Device ID and offsets makes the data I collect messed up and useless.
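To help isolate the problem, this is the stripped-down device-ID check I would compare across the Pis. It is only a sketch: pinning max_speed_hz to 500 kHz is my assumption (the original script never sets a clock speed), on the guess that the newer Pi might default to a faster SPI clock:
import spidev

# Minimal sketch: read only the ADXL345 DEVID register (0x00) with an
# explicitly low SPI clock, to rule out a too-fast default on the newer Pi.
spi = spidev.SpiDev()
spi.open(0, 0)
spi.mode = 3                  # ADXL345 uses SPI mode 3 (CPOL=1, CPHA=1)
spi.max_speed_hz = 500000     # assumption: conservative 500 kHz clock
resp = spi.xfer2([0x00 | 0x80, 0x00])   # set the read bit, then clock out one byte
print("Device ID: 0x%02x (expected 0xe5)" % resp[1])
spi.close()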
I am using the parallel programming module (pp) for Python. I have a function that returns an array, but when I print the variable that holds the parallelized function's return value, it shows "pp._Task object at 0x04696510" instead of the values of the matrix.
Here is the code:
from __future__ import print_function
import scipy, pylab
from scipy.io.wavfile import read
import sys
import peakpicker as pea
import pp
import fingerprint as fhash
import matplotlib
import numpy as np
import tdft
import subprocess
import time

if __name__ == '__main__':
    start = time.time()

    # Peak picking dimensions
    f_dim1 = 30
    t_dim1 = 80
    f_dim2 = 10
    t_dim2 = 20
    percentile = 80
    base = 100  # lowest frequency bin used (peaks below are too common/not as useful for identification)
    high_peak_threshold = 75
    low_peak_threshold = 60

    # TDFT parameters
    windowsize = 0.008  # set the window size (0.008s = 64 samples)
    windowshift = 0.004  # set the window shift (0.004s = 32 samples)
    fftsize = 1024  # set the fft size (if srate = 8000, 1024 --> 513 freq. bins separated by 7.797 Hz from 0 to 4000Hz)

    # Hash parameters
    delay_time = 250    # 250*0.004 = 1 second#200
    delta_time = 250*3  # 750*0.004 = 3 seconds#300
    delta_freq = 128    # 128*7.797Hz = approx 1000Hz#80

    # Time pair parameters
    TPdelta_freq = 4
    TPdelta_time = 2

    # Loading stored data
    database = np.loadtxt('database.dat')
    songnames = np.loadtxt('songnames.dat', dtype=str, delimiter='\t')
    separator = '.'

    print('Please enter an audio sample file to identify: ')
    userinput = raw_input('---> ')
    subprocess.call(['ffmpeg', '-y', '-i', userinput, '-ac', '1', '-ar', '8k', 'filesample.wav'])
    sample = read('filesample.wav')
    userinput = userinput.split(separator, 1)[0]
    print('Analyzing the audio sample: ' + str(userinput))

    srate = sample[0]  # sample rate in samples/second
    audio = sample[1]  # audio data
    spectrogram = tdft.tdft(audio, srate, windowsize, windowshift, fftsize)
    mytime = spectrogram.shape[0]
    freq = spectrogram.shape[1]
    print('The size of the spectrogram is time: ' + str(mytime) + ' and freq: ' + str(freq))

    threshold = pea.find_thres(spectrogram, percentile, base)
    peaks = pea.peak_pick(spectrogram, f_dim1, t_dim1, f_dim2, t_dim2, threshold, base)
    print('The initial number of peaks is:' + str(len(peaks)))
    peaks = pea.reduce_peaks(peaks, fftsize, high_peak_threshold, low_peak_threshold)
    print('The reduced number of peaks is:' + str(len(peaks)))

    # Store information for the spectrogram graph
    samplePeaks = peaks
    sampleSpectro = spectrogram

    hashSample = fhash.hashSamplePeaks(peaks, delay_time, delta_time, delta_freq)
    print('The dimensions of the hash matrix of the sample: ' + str(hashSample.shape))

    # tuple of all parallel python servers to connect with
    ppservers = ()
    # ppservers = ("10.0.0.1",)
    if len(sys.argv) > 1:
        ncpus = int(sys.argv[1])
        # Creates jobserver with ncpus workers
        job_server = pp.Server(ncpus, ppservers=ppservers)
    else:
        # Creates jobserver with automatically detected number of workers
        job_server = pp.Server(ppservers=ppservers)
    print("Starting pp with", job_server.get_ncpus(), "workers")

    print('Attempting to identify the sample audio clip.')
Here I call the function in fingerprint; the commented-out line worked, but when I try to parallelize it, it doesn't work:
    timepairs = job_server.submit(fhash.findTimePairs, (database, hashSample, TPdelta_freq, TPdelta_time, ))
    # timepairs = fhash.findTimePairs(database, hashSample, TPdelta_freq, TPdelta_time)
    print(timepairs)

    # Compute number of matches by song id to determine a match
    numSongs = len(songnames)
    songbins = np.zeros(numSongs)
    numOffsets = len(timepairs)
    offsets = np.zeros(numOffsets)
    index = 0
    for i in timepairs:
        offsets[index] = i[0] - i[1]
        index = index + 1
        songbins[i[2]] += 1

    # Identify the song
    # orderarray = np.column_stack((songbins, songnames))
    # orderarray = orderarray[np.lexsort((songnames, songbins))]
    q3 = np.percentile(songbins, 75)
    q1 = np.percentile(songbins, 25)
    j = 0
    for i in songbins:
        if i > (q3 + (3 * (q3 - q1))):
            print("Result-> " + str(i) + ":" + songnames[j])
        j += 1

    end = time.time()
    print('Tiempo: ' + str(end - start) + ' s')
    print("Time elapsed: ", +time.time() - start, "s")

    fig3 = pylab.figure(1003)
    ax = fig3.add_subplot(111)
    ind = np.arange(numSongs)
    width = 0.35
    rects1 = ax.bar(ind, songbins, width, color='blue', align='center')
    ax.set_ylabel('Number of Matches')
    ax.set_xticks(ind)
    xtickNames = ax.set_xticklabels(songnames)
    matplotlib.pyplot.setp(xtickNames)
    pylab.title('Song Identification')
    fig3.show()
    pylab.show()
    print('The sample song is: ' + str(songnames[np.argmax(songbins)]))
The function in fingerprint that I am trying to parallelize is:
def findTimePairs(hash_database, sample_hash, deltaTime, deltaFreq):
    "Find the matching pairs between sample audio file and the songs in the database"
    timePairs = []
    for i in sample_hash:
        for j in hash_database:
            if i[0] > (j[0] - deltaFreq) and i[0] < (j[0] + deltaFreq):
                if i[1] > (j[1] - deltaFreq) and i[1] < (j[1] + deltaFreq):
                    if i[2] > (j[2] - deltaTime) and i[2] < (j[2] + deltaTime):
                        timePairs.append((j[3], i[3], j[4]))
                    else:
                        continue
                else:
                    continue
            else:
                continue
    return timePairs
The complete error is:
Traceback (most recent call last):
File "analisisPrueba.py", line 93, in <module>
numOffsets = len(timepairs)
TypeError: object of type '_Task' has no len()
The submit() method submits a task to the server. What you get back is a reference to the task, not its result. (How could it return its result? submit() returns before any of that work has been done!) You should instead provide a callback function to receive the results. For example, timepairs.append is a function that will take the result and append it to the list timepairs.
timepairs = []
job_server.submit(fhash.findTimePairs, (database, hashSample, TPdelta_freq, TPdelta_time, ), callback=timepairs.append)
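A sketch of what that might look like here (the chunking with np.array_split is illustrative, and depending on your setup you may also need submit()'s depfuncs/modules arguments so the workers can see fingerprint):
# Sketch: split the database across the available workers, let each task
# search its own chunk, and gather the partial results through the callback.
timepairs = []
chunks = np.array_split(database, job_server.get_ncpus())
for chunk in chunks:
    job_server.submit(fhash.findTimePairs,
                      (chunk, hashSample, TPdelta_freq, TPdelta_time),
                      callback=timepairs.extend)
job_server.wait()   # block until every submitted task has finished
print(len(timepairs), 'time pairs found')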
(Each findTimePairs call should calculate one result, in case that isn't obvious, and you should submit multiple tasks. Otherwise you're invoking all the machinery of Parallel Python for no reason. And make sure you call job_server.wait() to wait for all the tasks to finish before trying to do anything with your results. In short, read the documentation and some example scripts and make sure you understand how it works.)