I am collecting over 100k FastFrame acquisitions (100 frames of 15k points each, with the summary frame enabled) from a Tektronix MSO56, using Python with PyVISA on top of NI-VISA.
The error is as follows:
SEVERE
The system is low on memory. Some results may be incomplete. To remedy, reduce record length or remove one or more analytical features such as math, measurements, bus decode or search.
After the error appears, I can disconnect, reconnect, and send commands that update the display, but I cannot query anything.
I suspect a memory leak in the MSO56's RAM, or a stuck communication queue.
Commands like *RST, CLEAR, LCS, and FACTORY do not clear the error.
import pyvisa
import time
if __name__ == '__main__':
    ## DEV Signal
    rm = pyvisa.ResourceManager()
    ll = rm.list_resources()
    print('\n\n\n----------------------\nAvailable Resources\n----------------------')
    for i in range(len(ll)):
        print(F'Resource ID {i}: {ll[i]}')
    #i = int(input(F"\n\nPlease select 'Resource ID' from above: "))
    i = 0
    inst = rm.open_resource(ll[i])
    inst.timeout = 10000
    reset = inst.write("*RST")
    ind = inst.query("*IDN?")
    print(F"\nResource {i}: {ind}")

    inst.write('C1:OUTP ON')
    inst.write('C2:OUTP ON')

    # Wave signal
    Ch = 1  # channel 1 || 2
    wave_name = 'UT1'
    Frq = 500000  # Hz
    Peri = 1/Frq  # Length of waveform
    print(F"Period: {Peri}")
    # trigger on channel 2
    inst.write(F'C2:BSWV WVTP,SQUARE,FRQ,{Frq},AMP,1,OFST,0,DUTY,1')
    # signal on channel 1
    inst.write(F'C1:BSWV WVTP,SQUARE,FRQ,{Frq},AMP,1,OFST,0,DUTY,10')
    inst = []

    scope_ip = '192.168.0.10'
    rm = pyvisa.ResourceManager()
    ll = rm.list_resources()
    print(ll)
    for l in ll:
        if scope_ip in l:
            vScope = rm.open_resource(l)

    #vScope.clear()
    #vScope.close()
    vScope.timeout = 2000

    ## attempts to fix Memory Error
    vScope.write_raw('FPANEL:PRESS RESTART')
    vScope.write_raw('*PSC ON')
    vScope.write_raw('*CLS')
    vScope.write_raw('FACTORY\n')
    vScope.write_raw('*RST')
    vScope.write_raw('*CLEAR')
    vScope.write_raw('*ESE 1')
    vScope.write_raw('*SRE 0')
    vScope.write_raw('DESE 1')
    print('\nESR')
    print(vScope.query('*ESR?'))
    #print('\nEVMSG?')
    #print(vScope.query('*EVMsg?'))
    #print(vScope.query('*ESE ?'))

    # Display Wave Forms
    vScope.write_raw('DISPLAY:WAVEVIEW1:CH1:STATE 1')
    vScope.write_raw('DISPLAY:WAVEVIEW1:CH2:STATE 1')

    # Vertical Command Groups.
    vScope.write_raw('CH1:Coupling DC')
    vScope.write_raw('CH2:Coupling DC')
    vScope.write_raw('CH1:SCALE .5')  # *10 for the range
    vScope.write_raw('CH2:SCALE .5')
    vScope.write_raw('CH1:Position 0')
    vScope.write_raw('CH2:Position 0')

    vScope.write_raw('TRIGGER:A:TYPE EDGE')
    vScope.write_raw('TRIGGER:A:EDGE:SOURCE CH2')
    vScope.write_raw('TRIGger:A:LEVEL:CH2 0')
    vScope.write_raw('TRIGger:A:EDGE:SLOpe RISE')

    vScope.write_raw('Horizontal:Position 10')
    vScope.write_raw('Horizontal:MODE MANUAL')
    vScope.write_raw('Horizontal:Samplerate 25000000000')
    vScope.write_raw('HORIZONTAL:MODE:RECORDLENGTH 25000')

    vScope.write_raw('DATA:SOURCE CH1')
    vScope.write_raw('ACQUIRE:STOPAFTER SEQUENCE')  ## triggers re-read

    nframes = 100
    vScope.write_raw(F"HORIZONTAL:FASTFRAME:COUNT {nframes}")
    if int(1):
        vScope.write_raw(F"DATA:FRAMESTART {1+nframes}")
    else:
        vScope.write_raw('DATA:FRAMESTART 1')
    vScope.write_raw(F"DATA:FRAMESTOP {1+nframes}")
    vScope.write_raw('HORIZONTAL:FASTFRAME:STATE OFF')
    vScope.write_raw('HORIZONTAL:FASTFRAME:STATE ON')
    vScope.write_raw('HORIZONTAL:FASTFRAME:SUMFRAME:STATE ON')
    vScope.write_raw(F"HORIZONTAL:FASTFRAME:SELECTED {1+nframes}")

    t0 = time.time()
    for i in range(1000000):
        vScope.write_raw('ACQUIRE:STATE 1')  ## triggers re-read
        vScope.query('*opc?')
        vScope.write_raw(b'WFMOUTPRE?')
        wfmo = vScope.read_raw()
        vScope.write_raw('CURVE?')
        curve = vScope.read_raw()
        if i % 100 == 0:
            print(F"Iteration {i}")
            print(F"Time Delta: {time.time()-t0}")
            t0 = time.time()
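For what it's worth, here is a minimal sketch of how the transfer loop could use PyVISA's binary-block helper instead of the paired write_raw/read_raw calls, so that each CURVE? response is consumed completely and nothing is left in the scope's output queue. The exact TCPIP resource string format, the RIBinary 1-byte encoding, and the loop count are assumptions rather than the setup above:
import time
import pyvisa

rm = pyvisa.ResourceManager()
vScope = rm.open_resource('TCPIP::192.168.0.10::INSTR')  # assumed resource string for the scope above
vScope.timeout = 10000

# Assumed transfer encoding: signed 1-byte binary points.
vScope.write('DATA:SOURCE CH1')
vScope.write('DATA:ENCDG RIBINARY')
vScope.write('WFMOUTPRE:BYT_NR 1')

t0 = time.time()
for i in range(1000):
    vScope.write('ACQUIRE:STATE 1')        # arm one FastFrame sequence
    vScope.query('*OPC?')                  # block until the acquisition completes
    wfmo = vScope.query('WFMOUTPRE?')      # scaling / preamble as ASCII
    # query_binary_values() reads the complete definite-length block,
    # so no partial CURVE? response is left behind in the output queue.
    curve = vScope.query_binary_values('CURVE?', datatype='b')
    if i % 100 == 0:
        print(f'Iteration {i}, dt = {time.time() - t0:.2f} s, ESR = {vScope.query("*ESR?").strip()}')
        t0 = time.time()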
Poor solution: restarting the scope with the power button works. I should have added that in the question, but it takes ~2 minutes and is not an elegant fix.
I am attempting to read data from a load cell panel meter which is configured for the Modbus RTU protocol. The goal of the program is to log data; I will include the entire program below, but the setup is where I am having issues. I have gotten the module to respond with data, and it is wired in the only configuration which allows communication over USB, so I assume that part is done correctly.
The response, which I am saving as 'load', is:
[[ 0* 2* 3245 0 -28 -1 1000 0]]
*The stars represent the parts of the response which look fully incorrect, based on the expected return values.
This seems to be incorrect; the response I expect is characterized by:
[Slave Address, Function, Byte Count, Data Hi, Data Lo, Data Hi, Data Lo, Error Check Lo, Error Check Hi]
a total of 9 bytes (72 bits). So, I would expect a response to look more like this:
[1*, 4*, 4, 00, 06, 00, 05, DB, 86]
*The stars represent the parts of the response which look fully incorrect, based on the expected return values.
**Expected response taken from: https://www.modbustools.com/modbus.html#function04
I would also expect the values of the data bytes to change as I add or remove load from the cell, since the panel meter is reading correctly, but the response does not change as the program loops. Does anyone with MinimalModbus experience have any guesses as to what might be going wrong to produce this return?
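For context on what those numbers are: minimalmodbus's read_register() returns decoded register values, not the raw Modbus frame, so a printed row of eight values is expected rather than the address/function/CRC bytes listed above. To inspect the actual bytes on the wire, the library's debug flag can be enabled. Below is a minimal sketch; the port COM6, slave address 1, and function code 4 are taken from the code further down, and note that the register address passed is the raw protocol address, which is often one less than the register number in the meter's manual:
import minimalmodbus
import serial

instrument = minimalmodbus.Instrument('COM6', 1)   # port and slave address as in the code below
instrument.serial.baudrate = 9600
instrument.serial.parity = serial.PARITY_EVEN
instrument.serial.timeout = 2
instrument.debug = True                            # print the raw request/response frames

# Read one input register (function code 4). The return value is the decoded
# 16-bit register content, not the 9-byte frame shown above.
value = instrument.read_register(3, number_of_decimals=0, functioncode=4, signed=True)
print(value)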
This is the code of interest:
import minimalmodbus
import serial
import numpy as np

units = "lb."
comPort = "COM6"
baudRate = 9600
functionImplemented = 4
minimalmodbus.slaveaddress = 1
minimalmodbus.registeraddress = 3  # 1, 2, 3 for this application

instrument = minimalmodbus.Instrument(comPort, minimalmodbus.slaveaddress)
instrument.serial.port = comPort
instrument.serial.baudrate = baudRate
instrument.serial.parity = serial.PARITY_EVEN
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.mode = minimalmodbus.MODE_RTU
instrument.serial.timeout = 2

n = 0
run = True
while run is True:
    # record new temperature values WHEN UPDATED?
    load = np.array([[1, 1, 1, 1, 1, 1, 1, 1]])
    for x in range(8):
        i = x + 1
        minimalmodbus.registeraddress = i
        load[0, x] = instrument.read_register(minimalmodbus.registeraddress,
                                              number_of_decimals=0,
                                              functioncode=functionImplemented,
                                              signed=True)
    print("load (" + str(n) + "): " + str(load))
    n = n + 1
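If the eight registers are consecutive, they could also be fetched in a single Modbus transaction with read_registers() instead of re-pointing minimalmodbus.registeraddress on every pass. This is only a sketch under the same port/slave assumptions, and read_registers() returns unsigned 16-bit values (it has no signed option):
import minimalmodbus
import serial

instrument = minimalmodbus.Instrument('COM6', 1)
instrument.serial.baudrate = 9600
instrument.serial.parity = serial.PARITY_EVEN
instrument.serial.timeout = 2

# Read 8 consecutive input registers starting at address 1 (function code 4);
# returns a list of 8 decoded register values.
load = instrument.read_registers(1, 8, functioncode=4)
print(load)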
This is the full program, in case anyone is interested; the part that is malfunctioning is what is listed above:
## author: Jack M
import time
import minimalmodbus
import serial
import numpy as np
import matplotlib.pyplot as plt

units = "lb."
comPort = "COM6"
baudRate = 9600
functionImplemented = 4
minimalmodbus.slaveaddress = 1
minimalmodbus.registeraddress = 3  # 1, 2, 3 for this application

instrument = minimalmodbus.Instrument(comPort, minimalmodbus.slaveaddress)
instrument.serial.port = comPort
instrument.serial.baudrate = baudRate
instrument.serial.parity = serial.PARITY_EVEN
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.mode = minimalmodbus.MODE_RTU
instrument.serial.timeout = 2

stime = np.array([[0]])
sload = np.array([[1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2]])
print("sload: " + str(sload))

n = 0
run = True
while run is True:
    # record new temperature values WHEN UPDATED?
    load = np.array([[1, 1, 1, 1, 1, 1, 1, 1]])
    for x in range(8):
        i = x + 1
        minimalmodbus.registeraddress = i
        load[0, x] = instrument.read_register(minimalmodbus.registeraddress,
                                              number_of_decimals=0,
                                              functioncode=functionImplemented,
                                              signed=True)
    print("load (" + str(n) + "): " + str(load))
    sload = np.append(sload, load, axis=0)
    n = n + 1
    newTime = [[time.time()]]
    # store the input values
    stime = np.append(stime, newTime, axis=0)

plt.plot(stime, sload, 'ro')
plt.title("Load (lbf) vs. Time (s)")
plt.xlabel("Time (s)")
plt.ylabel("Load (lbf)")
plt.show()
print("Max: " + str(np.max(sload)))
I am running the following code on my Raspberry Pi 4B, using Python 3.7.3:
from time import sleep
import RPi.GPIO as GPIO
import math
from watchgod import watch

g = open("/home/pi/Desktop/Int2/DesHeight.txt", "r")
DesHeight = g.readline()
DesHeight1 = float(DesHeight)
print(DesHeight1)

GPIO.cleanup()

DIR = 20
STEP = 21
CW = 0
CCW = 1
TX_ENC = 15
SPR = 200      # Steps per Rev [CONSTANT]
delay = .001   # Seconds per stepper pulse [CONSTANT]
ratio = 24     # gear ratio [CONSTANT]

f = open("height.txt", "r")
y0 = f.readline()
y0 = float(y0)
d = 1.25
r = d/2
theta0 = y0/(2*math.pi*r)

GPIO.setmode(GPIO.BCM)       # GPIO numbering
GPIO.setup(12, GPIO.OUT)
GPIO.output(12, 1)           # Turning on the "Enable Input"
GPIO.setup(DIR, GPIO.OUT)
GPIO.setup(STEP, GPIO.OUT)
GPIO.output(DIR, CW)         # Setting CW Direction
GPIO.setup(TX_ENC, GPIO.IN)  # Encoder input setup
GPIO.add_event_detect(TX_ENC, GPIO.BOTH)
Tx = 0

MODE = (16, 17, 18, 19)  # GPIO 16 is the standby input. It needs to be high for anything to move
GPIO.setup(MODE, GPIO.OUT)
RESOLUTION = {'Standby': (0, 0, 0, 0),
              'Full':    (1, 0, 0, 0),
              'Half':    (1, 1, 0, 0),
              '1/4':     (1, 0, 1, 0),
              '1/8':     (1, 1, 1, 0),
              '1/16':    (1, 0, 0, 1),
              '1/32':    (1, 1, 0, 1),
              '1/64':    (1, 0, 1, 1),
              '1/128':   (1, 1, 1, 1)}
GPIO.output(MODE, RESOLUTION['Full'])

ass = (0, 0, 0, 0)
pp = list(ass)
pp[0] = GPIO.input(MODE[0])
pp[1] = GPIO.input(MODE[1])
pp[2] = GPIO.input(MODE[2])
pp[3] = GPIO.input(MODE[3])
ass = tuple(pp)

if ass == RESOLUTION['Standby']:
    res = 0
elif ass == RESOLUTION['Full']:
    res = 200
elif ass == RESOLUTION['Half']:
    res = 400
elif ass == RESOLUTION['1/4']:
    res = 800
elif ass == RESOLUTION['1/8']:
    res = 1600
elif ass == RESOLUTION['1/16']:
    res = 3200
elif ass == RESOLUTION['1/32']:
    res = 6400
elif ass == RESOLUTION['1/64']:
    res = 12800
elif ass == RESOLUTION['1/128']:
    res = 25600
else:
    print("Whoops lol")

while True:
    for changes in watch('/home/pi/Desktop/Int2/DesHeight.txt'):
        g = open("/home/pi/Desktop/Int2/DesHeight.txt", "r")
        DesHeight = g.readline()
        DesHeight1 = float(DesHeight)
        f = open("height.txt", "r")
        y0 = f.readline()
        y0 = float(y0)
        while abs(y0 - DesHeight1) > .001:
            if y0 < DesHeight1:
                while y0 < DesHeight1:
                    GPIO.output(DIR, CCW)
                    GPIO.output(STEP, GPIO.HIGH)
                    sleep(delay)
                    GPIO.output(STEP, GPIO.LOW)
                    sleep(delay)
                    Tx = Tx + 1
                    theta0 = theta0 + 1/res*1/ratio  # *1/gearratio
                    y0 = y0 + 2*5/4*14/15*.9944*math.pi*(1/res*1/ratio)*r
            else:
                while y0 > DesHeight1:
                    if y0 > 0:
                        GPIO.output(DIR, CW)
                        GPIO.output(STEP, GPIO.HIGH)
                        sleep(delay)
                        GPIO.output(STEP, GPIO.LOW)
                        sleep(delay)
                        Tx = Tx - 1
                        theta0 = theta0 - 1/res*1/ratio  # *1/gearratio
                        y0 = y0 - 2*5/4*14/15*.9944*math.pi*(1/res*1/ratio)*r
        y0 = str(y0)
        print(y0)
        f.close()
        f = open('height.txt', 'w')
        f.write(y0)
        f.close()
Essentially, what I am trying to do is read the height of a machine from a text file, then compare it with the desired height, as written in a separate text file. When the code detects a change in the desired height, it checks to make sure that the actual height and the desired height are within 1/1000 of an inch of each other, and if not, it moves a NEMA-17 motor until this condition is met.
The problem I am encountering is that if this code is left to run for a little bit (usually around 40 seconds) the stepper motor ceases to run when I change the desired height. The code itself runs, taking as long as expected to "move" the motor and also calculating the height and returning to the top of the while loop, but the motor itself remains stagnant. This does not occur if new changes to the desired height file are implemented immediately. I am at a loss as to what this could be and could use some help.
Okay, so following the advice of RootTwo, and learning how to use an oscilloscope, I was able to isolate part of the problem and find a workaround. Either the Pi or the driver quits supplying voltage to the motor after about 90 seconds of inactivity. I was not able to find a way to continuously keep the motor at holding torque, but I was able to make the code reinitialize after a long break by moving the while(True): and the watchgod loop to the beginning of the code (a sketch of that restructuring is below). My problem is technically solved, but my question is still unanswered: why does my Pi stop giving a signal to the driver board?
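A minimal sketch of that restructuring, assuming hypothetical setup_gpio() and move_to() helpers that wrap the original initialisation and stepping loops:
from watchgod import watch
import RPi.GPIO as GPIO

def setup_gpio():
    # Re-runs the GPIO/driver initialisation so the driver's "Enable Input"
    # (GPIO 12) and the DIR/STEP pins are re-asserted after a long idle period.
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(12, GPIO.OUT)
    GPIO.output(12, 1)
    GPIO.setup(20, GPIO.OUT)   # DIR
    GPIO.setup(21, GPIO.OUT)   # STEP

def move_to(desired_height):
    # Placeholder for the original stepping loops that drive y0 toward DesHeight1.
    pass

while True:
    # Blocks until DesHeight.txt changes, then reinitialises before moving.
    for changes in watch('/home/pi/Desktop/Int2/DesHeight.txt'):
        setup_gpio()
        with open('/home/pi/Desktop/Int2/DesHeight.txt') as g:
            desired = float(g.readline())
        move_to(desired)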
I am running the code below (runHeartBreathRateKraskov) and it fails with the error shown underneath.
I want to use it to calculate transfer entropy. I am new to this and do not know much about transfer entropy and mutual information. I have also attached my data set for reference.
from jpype import *
^
IndentationError: unexpected indent
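As a side note, an IndentationError: unexpected indent that points at the very first import usually just means that line begins with stray whitespace; module-level statements must start in column 0, for example:
#    from jpype import *    <- leading spaces before the first statement raise
#                               "IndentationError: unexpected indent"
from jpype import *          # module-level statements start in column 0
The full script is: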
# Run e.g. python runHeartBreathRateKraskov.py 2 2 1,2,3,4,5,6,7,8,9,10
from jpype import *
import sys
import os
import random
import math
import string
import numpy

# Import our readFloatsFile utility in the above directory:
sys.path.append(os.path.relpath(".."))
import readFloatsFile

# Change location of jar to match yours:
#jarLocation = "../../../infodynamics.jar"
jarLocation = "/home/humair/Documents/Transfer Entropy/infodynamics-dist-1.5/infodynamics.jar"
# Start the JVM (add the "-Xmx" option with say 1024M if you get crashes due to not enough memory space)
startJVM(getDefaultJVMPath(), "-ea", "-Djava.class.path=" + jarLocation)

# Read in the command line arguments and assign default if required.
# first argument in argv is the filename, so program arguments start from index 1.
if len(sys.argv) < 2:
    kHistory = 1
else:
    kHistory = int(sys.argv[1])
if len(sys.argv) < 3:
    lHistory = 1
else:
    lHistory = int(sys.argv[2])
if len(sys.argv) < 4:
    knns = [4]
else:
    knnsStrings = sys.argv[3].split(",")
    knns = [int(i) for i in knnsStrings]
if len(sys.argv) < 5:
    numSurrogates = 0
else:
    numSurrogates = int(sys.argv[4])

# Read in the data
datafile = '/home/humair/Documents/Transfer Entropy/SFI-heartRate_breathVol_bloodOx.txt'
rawData = readFloatsFile.readFloatsFile(datafile)
# As numpy array:
data = numpy.array(rawData)
# Heart rate is first column, and we restrict to the samples that Schreiber mentions (2350:3550)
heart = data[2349:3550, 0]   # Extracts what Matlab does with the 2350:3550 argument there.
# Chest vol is second column
chestVol = data[2349:3550, 1]
# bloodOx = data[2349:3550, 2]
timeSteps = len(heart)

print("TE for heart rate <-> breath rate for Kraskov estimation with %d samples:" % timeSteps)

# Using a KSG estimator for TE is the least biased way to run this:
teCalcClass = JPackage("infodynamics.measures.continuous.kraskov").TransferEntropyCalculatorKraskov
teCalc = teCalcClass()

teHeartToBreath = []
teBreathToHeart = []

for knnIndex in range(len(knns)):
    knn = knns[knnIndex]
    # Compute a TE value for knn nearest neighbours
    # Perform calculation for heart -> breath (lag 1)
    teCalc.initialise(kHistory, 1, lHistory, 1, 1)
    teCalc.setProperty("k", str(knn))
    teCalc.setObservations(JArray(JDouble, 1)(heart),
                           JArray(JDouble, 1)(chestVol))
    teHeartToBreath.append(teCalc.computeAverageLocalOfObservations())
    if numSurrogates > 0:
        teHeartToBreathNullDist = teCalc.computeSignificance(numSurrogates)
        teHeartToBreathNullMean = teHeartToBreathNullDist.getMeanOfDistribution()
        teHeartToBreathNullStd = teHeartToBreathNullDist.getStdOfDistribution()
    # Perform calculation for breath -> heart (lag 1)
    teCalc.initialise(kHistory, 1, lHistory, 1, 1)
    teCalc.setProperty("k", str(knn))
    teCalc.setObservations(JArray(JDouble, 1)(chestVol),
                           JArray(JDouble, 1)(heart))
    teBreathToHeart.append(teCalc.computeAverageLocalOfObservations())
    if numSurrogates > 0:
        teBreathToHeartNullDist = teCalc.computeSignificance(numSurrogates)
        teBreathToHeartNullMean = teBreathToHeartNullDist.getMeanOfDistribution()
        teBreathToHeartNullStd = teBreathToHeartNullDist.getStdOfDistribution()
    print("TE(k=%d,l=%d,knn=%d): h->b = %.3f" % (kHistory, lHistory, knn, teHeartToBreath[knnIndex])),  # , for no newline
    if numSurrogates > 0:
        print(" (null = %.3f +/- %.3f)" % (teHeartToBreathNullMean, teHeartToBreathNullStd)),
    print(", b->h = %.3f nats" % teBreathToHeart[knnIndex]),
    if numSurrogates > 0:
        print("(null = %.3f +/- %.3f)" % (teBreathToHeartNullMean, teBreathToHeartNullStd)),
    print

# Exercise: plot the results
The dataset is:
The first column is heart rate, the second is chest volume, and the third is blood oxygen concentration.
76.53 8320 7771
76.53 8117 7774
76.15 7620 7788
75.39 6413 7787
75.51 7518 7767
76.67 1247 7773
78.55 -3525 7784
79.96 2388 7764
79.71 8296 7775
78.30 7190 7784
77.02 6024 7777
76.62 5825 7784
76.53 5154 7809
76.65 7464 7805
76.95 5345 7806
78.46 -993 7813
I have an application in Tkinter.
Part of this application is a method that takes long lists of random values and checks whether they fall inside a previously defined grid, then writes the matching values into another variable for export.
This is a rather long process, so I would like to multiprocess it.
I have read some material about how to do that; the resulting code is below.
I've also read around SO for anything that might be relevant. I am running an up-to-date Spyder with Python 3.7 as part of the Anaconda suite on both machines, all (at least the included) packages are up to date, and I've included the if __name__ == '__main__': line. I've also experimented with the indentation of p.start() and processes.append(p). I simply can't get it to work.
import multiprocessing
import numpy as np

def ParallelStuff(myIn1, myIn2, myIn3, myIn4, anotherIn1, anotherIn2, anotherIn3, return_dict, processIterator):
    tempOut1 = np.zeros(len(myIn1))  # myIn1, myIn2, myIn3 are of the same length
    tempOut2 = np.zeros(len(myIn1))
    tempOut3 = np.zeros(len(myIn1))
    bb = 0
    for i in range(len(myIn3)):
        xx = myIn3[i]
        yy = myIn4[i]
        hits = np.isin(anotherIn1, xx)
        goodY = anotherIn3[np.where(hits == 1)]
        if np.isin(yy, goodY):
            tempOut1[bb] = myIn1[i]
            tempOut2[bb] = myIn2[i]
            tempOut3[bb] = anotherIn3
            bb += 1
    return_dict[processIterator] = [tempOut1, tempOut2, tempOut3]

nCores = multiprocessing.cpu_count()
def export_Function(self):
    out1 = np.array([])
    out2 = np.array([])
    out3 = np.array([])
    for loop_one in range(0, N):
        # ...
        # stuff that works on both systems with only one core...
        # ... and on linux with all cores
        processes = []
        nTotal = int(len(xRand))
        if nTotal % nCores == 0:
            o = int(nTotal/nCores)
        else:
            o = int(nTotal/(nCores-1))
        manager = multiprocessing.Manager()
        return_dict = manager.dict()
        for processIterator in range(nCores):
            offset = o*i
            myIn1 = in1[offset : min(nTotal, offset + o)]
            myIn2 = in2[offset : min(nTotal, offset + o)]
            myIn3 = in3[offset : min(nTotal, offset + o)]
            myIn4 = in4[offset : min(nTotal, offset + o)]
            if __name__ == '__main__':
                p = multiprocessing.Process(target=ParallelStuff, args=(myIn1, myIn2, myIn3, myIn4, anotherIn1, anotherIn2, anotherIn3, return_dict, processIterator))
                p.start()
                processes.append(p)
        for p in range(len(processes)):
            processes[p].join()
            myOut1 = return_dict[p][0]
            myOut2 = return_dict[p][1]
            myOut3 = return_dict[p][2]
            out1 = np.concatenate((out1, myOut1[np.where(myOut1 != 0)]))
            out2 = np.concatenate((out2, myOut2[np.where(myOut2 != 0)]))
            out3 = np.concatenate((out3, myOut3[np.where(myOut3 != 0)]))
When I run my program on my Linux machine it does exactly what it's supposed to do: it distributes the work to all 8 cores, computes, concatenates the 3 results into the respective arrays, and exports.
When I run my program on my Windows machine, the application's window freezes, the process becomes inactive, a new kernel automatically opens, and a new window appears.
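For comparison, here is a minimal, self-contained sketch (not the original application) of the structure Windows expects: because Windows uses the spawn start method, the whole module is re-imported in every child process, so the worker must be defined at module level and all Process creation must sit under the if __name__ == '__main__': guard:
import multiprocessing
import numpy as np

def worker(chunk, return_dict, idx):
    # Toy stand-in for ParallelStuff: each process writes its result
    # into the shared manager dict under its own index.
    return_dict[idx] = np.asarray(chunk) * 2

if __name__ == '__main__':          # required on Windows (spawn start method)
    data = np.arange(16)
    n_procs = 4
    chunks = np.array_split(data, n_procs)

    manager = multiprocessing.Manager()
    return_dict = manager.dict()

    processes = []
    for idx, chunk in enumerate(chunks):
        p = multiprocessing.Process(target=worker, args=(chunk, return_dict, idx))
        p.start()
        processes.append(p)

    for p in processes:
        p.join()

    result = np.concatenate([return_dict[i] for i in range(n_procs)])
    print(result)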
I am using the Parallel Python (pp) module. I have a function that returns an array, but when I print the variable that should contain the result of the parallelized function, I get "pp._Task object at 0x04696510" instead of the value of the matrix.
Here is the code:
from __future__ import print_function
import scipy, pylab
from scipy.io.wavfile import read
import sys
import peakpicker as pea
import pp
import fingerprint as fhash
import matplotlib
import numpy as np
import tdft
import subprocess
import time

if __name__ == '__main__':
    start = time.time()

    #Peak picking dimensions
    f_dim1 = 30
    t_dim1 = 80
    f_dim2 = 10
    t_dim2 = 20
    percentile = 80
    base = 100  # lowest frequency bin used (peaks below are too common/not as useful for identification)
    high_peak_threshold = 75
    low_peak_threshold = 60

    #TDFT parameters
    windowsize = 0.008   #set the window size (0.008s = 64 samples)
    windowshift = 0.004  #set the window shift (0.004s = 32 samples)
    fftsize = 1024       #set the fft size (if srate = 8000, 1024 --> 513 freq. bins separated by 7.797 Hz from 0 to 4000Hz)

    #Hash parameters
    delay_time = 250    # 250*0.004 = 1 second#200
    delta_time = 250*3  # 750*0.004 = 3 seconds#300
    delta_freq = 128    # 128*7.797Hz = approx 1000Hz#80

    #Time pair parameters
    TPdelta_freq = 4
    TPdelta_time = 2

    #Loading stored data
    database = np.loadtxt('database.dat')
    songnames = np.loadtxt('songnames.dat', dtype=str, delimiter='\t')
    separator = '.'

    print('Please enter an audio sample file to identify: ')
    userinput = raw_input('---> ')
    subprocess.call(['ffmpeg', '-y', '-i', userinput, '-ac', '1', '-ar', '8k', 'filesample.wav'])
    sample = read('filesample.wav')
    userinput = userinput.split(separator, 1)[0]

    print('Analyzing the audio sample: '+str(userinput))
    srate = sample[0]  #sample rate in samples/second
    audio = sample[1]  #audio data
    spectrogram = tdft.tdft(audio, srate, windowsize, windowshift, fftsize)
    mytime = spectrogram.shape[0]
    freq = spectrogram.shape[1]
    print('The size of the spectrogram is time: '+str(mytime)+' and freq: '+str(freq))

    threshold = pea.find_thres(spectrogram, percentile, base)
    peaks = pea.peak_pick(spectrogram, f_dim1, t_dim1, f_dim2, t_dim2, threshold, base)
    print('The initial number of peaks is:'+str(len(peaks)))
    peaks = pea.reduce_peaks(peaks, fftsize, high_peak_threshold, low_peak_threshold)
    print('The reduced number of peaks is:'+str(len(peaks)))

    #Store information for the spectrogram graph
    samplePeaks = peaks
    sampleSpectro = spectrogram

    hashSample = fhash.hashSamplePeaks(peaks, delay_time, delta_time, delta_freq)
    print('The dimensions of the hash matrix of the sample: '+str(hashSample.shape))

    # tuple of all parallel python servers to connect with
    ppservers = ()
    #ppservers = ("10.0.0.1",)
    if len(sys.argv) > 1:
        ncpus = int(sys.argv[1])
        # Creates jobserver with ncpus workers
        job_server = pp.Server(ncpus, ppservers=ppservers)
    else:
        # Creates jobserver with automatically detected number of workers
        job_server = pp.Server(ppservers=ppservers)
    print("Starting pp with", job_server.get_ncpus(), "workers")

    print('Attempting to identify the sample audio clip.')
Here I call the function in fingerprint; the commented line worked, but when I try to parallelize it, it doesn't work:
    timepairs = job_server.submit(fhash.findTimePairs, (database, hashSample, TPdelta_freq, TPdelta_time, ))
    # timepairs = fhash.findTimePairs(database, hashSample, TPdelta_freq, TPdelta_time)
    print(timepairs)

    #Compute number of matches by song id to determine a match
    numSongs = len(songnames)
    songbins = np.zeros(numSongs)
    numOffsets = len(timepairs)
    offsets = np.zeros(numOffsets)
    index = 0
    for i in timepairs:
        offsets[index] = i[0]-i[1]
        index = index+1
        songbins[i[2]] += 1

    # Identify the song
    #orderarray=np.column_stack((songbins,songnames))
    #orderarray=orderarray[np.lexsort((songnames,songbins))]
    q3 = np.percentile(songbins, 75)
    q1 = np.percentile(songbins, 25)
    j = 0
    for i in songbins:
        if i > (q3+(3*(q3-q1))):
            print("Result-> "+str(i)+":"+songnames[j])
        j += 1

    end = time.time()
    print('Time: '+str(end-start)+' s')
    print("Time elapsed: ", +time.time() - start, "s")

    fig3 = pylab.figure(1003)
    ax = fig3.add_subplot(111)
    ind = np.arange(numSongs)
    width = 0.35
    rects1 = ax.bar(ind, songbins, width, color='blue', align='center')
    ax.set_ylabel('Number of Matches')
    ax.set_xticks(ind)
    xtickNames = ax.set_xticklabels(songnames)
    matplotlib.pyplot.setp(xtickNames)
    pylab.title('Song Identification')
    fig3.show()
    pylab.show()
    print('The sample song is: '+str(songnames[np.argmax(songbins)]))
The function in fingerprint that I try to parallelize is:
def findTimePairs(hash_database, sample_hash, deltaTime, deltaFreq):
    "Find the matching pairs between sample audio file and the songs in the database"
    timePairs = []
    for i in sample_hash:
        for j in hash_database:
            if i[0] > (j[0]-deltaFreq) and i[0] < (j[0] + deltaFreq):
                if i[1] > (j[1]-deltaFreq) and i[1] < (j[1] + deltaFreq):
                    if i[2] > (j[2]-deltaTime) and i[2] < (j[2] + deltaTime):
                        timePairs.append((j[3], i[3], j[4]))
                    else:
                        continue
                else:
                    continue
            else:
                continue
    return timePairs
The complete error is:
Traceback (most recent call last):
  File "analisisPrueba.py", line 93, in <module>
    numOffsets = len(timepairs)
TypeError: object of type '_Task' has no len()
The submit() method submits a task to the server. What you get back is a reference to the task, not its result. (How could it return its result? submit() returns before any of that work has been done!) You should instead provide a callback function to receive the results. For example, timepairs.append is a function that will take the result and append it to the list timepairs.
timepairs = []
job_server.submit(fhash.findTimePairs, (database, hashSample, TPdelta_freq, TPdelta_time, ), callback=timepairs.append)
(Each findTimePairs call should calculate one result, in case that isn't obvious, and you should submit multiple tasks. Otherwise you're invoking all the machinery of Parallel Python for no reason. And make sure you call job_server.wait() to wait for all the tasks to finish before trying to do anything with your results. In short, read the documentation and some example scripts and make sure you understand how it works.)