NoneType error while finding minima with Scipy Optimize - python

I am trying to find the global minimum of a function using scipy.optimize methods and keep running into NoneType issues. I have tried multiple algorithms, including differential_evolution, shgo, and brute, but they all fail with the same kind of error.
Here is the setup:
def sizing_trade_study(ranges, payload):
    with open("config.yml", "r") as yml:
        cfg = yaml.load(yml)

    first_int = True
    km = []
    for range in ranges:
        km.append(range * 1000)
    print(km)
    params = (km, payload)

    if first_int:
        x0 = [float(cfg['design_variables']['initial_guess']['prop_radius']),
              float(cfg['design_variables']['initial_guess']['speed']),
              float(cfg['design_variables']['initial_guess']['battery_mass']),
              float(cfg['design_variables']['initial_guess']['motor_mass']),
              float(cfg['design_variables']['initial_guess']['mtow'])]

        lb = [float(cfg['design_variables']['lower_bound']['prop_radius']),
              float(cfg['design_variables']['lower_bound']['speed']),
              float(cfg['design_variables']['lower_bound']['battery_mass']),
              float(cfg['design_variables']['lower_bound']['motor_mass']),
              float(cfg['design_variables']['lower_bound']['mtow'])]  # Min cruise at 1.3 * VStall

        ub = [float(cfg['design_variables']['upper_bound']['prop_radius']),
              float(cfg['design_variables']['upper_bound']['speed']),
              float(cfg['design_variables']['upper_bound']['battery_mass']),
              float(cfg['design_variables']['upper_bound']['motor_mass']),
              float(cfg['design_variables']['upper_bound']['mtow'])]

        # bounds = (slice(lb[0], ub[0]), slice(lb[1], ub[1]), slice(lb[2], ub[2]), slice(lb[3], ub[3]), slice(lb[4], ub[4]))
        # bounds = [(lb[0], ub[0]), (lb[1], ub[1]), (lb[2], ub[2]), (lb[3], ub[3]), (lb[4], ub[4])]
        bounds = optimize.Bounds(lb, ub)

        result = optimize.differential_evolution(objective_function, bounds, args=(params,))
        print(result)


def objective_function(x, *params):
    global trials
    trials = trials + 1
    print(trials)
    performance.compute_performance(x, params[0][0], params[0][1])
Here is the function I am trying to optimize:
import yaml

import simple_mission
import reserve_mission
import config_weight


def compute_performance(x, range, payload):
    rprop = x[0]
    speed = x[1]
    battery = x[2]
    motors = x[3]
    mtow = x[4]

    w = mtow * 9.8

    with open("config.yml", "r") as yml:
        cfg = yaml.load(yml)

    bat_energy_density = int(cfg['performance']['bat_energy_density'])
    motor_power_density = int(cfg['performance']['motor_power_density'])
    discharge_depth = float(cfg['performance']['discharge_depth'])

    e_nominal, flight_time, hover_output, cruise_output = simple_mission.run_simple_mission(rprop, speed, w, range)
    reserve_e = reserve_mission.reserve_mission(rprop, speed, w, range)
    mass = config_weight.config_weight(battery, motors, rprop, w, mtow, hover_output, cruise_output, payload)

    batt = reserve_e - battery * bat_energy_density * discharge_depth / 1000
    motor = hover_output.pow_hover / 1000 - motors * motor_power_density
    weight = mass - w

    return batt + motor + weight
The failure doesn't happen immediately, but only after a number of calls to the objective function. For example, with differential_evolution, it always happens after the 75th trial.
Here is the stacktrace:
69
70
71
72
73
74
75
76
Traceback (most recent call last):
  File "sizing_trade_study.py", line 62, in <module>
    sizing_trade_study(args.ranges, args.payload)
  File "sizing_trade_study.py", line 42, in sizing_trade_study
    result = optimize.differential_evolution(objective_function, bounds, args=(params,))
  File "/usr/local/anaconda3/envs/simple_mission/lib/python3.7/site-packages/scipy/optimize/_differentialevolution.py", line 308, in differential_evolution
    ret = solver.solve()
  File "/usr/local/anaconda3/envs/simple_mission/lib/python3.7/site-packages/scipy/optimize/_differentialevolution.py", line 759, in solve
    next(self)
  File "/usr/local/anaconda3/envs/simple_mission/lib/python3.7/site-packages/scipy/optimize/_differentialevolution.py", line 1082, in __next__
    self.constraint_violation[candidate]):
  File "/usr/local/anaconda3/envs/simple_mission/lib/python3.7/site-packages/scipy/optimize/_differentialevolution.py", line 1008, in _accept_trial
    return energy_trial <= energy_orig
TypeError: '>=' not supported between instances of 'float' and 'NoneType'
Any help is greatly appreciated!

The issue is with one of the values retrieved from your bounds or returned by your objective_function, which ends up being passed as None to energy_orig inside differential_evolution().
Source: https://github.com/scipy/scipy/blob/master/scipy/optimize/_differentialevolution.py
if feasible_orig and feasible_trial:
    return energy_trial <= energy_orig
You should make sure that none of the values read from your config.yml (or the other function parameters) are empty. It's hard to tell which one is the problem. In the meantime, you could wrap the call in a try/except so it doesn't stop at the 75th trial:
try:
    result = optimize.differential_evolution(
        objective_function,
        bounds,
        args=(params,),
    )
except TypeError:
    import pdb; pdb.set_trace()
I've set a pdb breakpoint there, which will let you inspect the values of each parameter; feel free to swap it out with a pass if you need execution to continue.
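Another quick check, since the TypeError means a float is being compared against None: wrap the objective so that a non-numeric return value fails immediately with the inputs that caused it, instead of surfacing later inside _accept_trial. This is only a sketch; checked_objective is an illustrative name, and it assumes the objective_function, bounds, and params from the question:
import numbers

def checked_objective(x, *args):
    # Delegate to the original objective, then verify it actually returned a number.
    value = objective_function(x, *args)
    if not isinstance(value, numbers.Real):
        # Fail fast with the design vector that produced the bad value.
        raise ValueError("objective_function returned %r for x=%s" % (value, x))
    return value

result = optimize.differential_evolution(checked_objective, bounds, args=(params,))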

Related

Fitting model to data using scipy differential evolution: "RuntimeError: The map-like callable must be of the form f(func, iterable)..."

I am trying to fit a model to data (extracted from an Excel file and imported using pandas), using a likelihood method. However, when running the code I get a "RuntimeError: The map-like callable must be of the form f(func, iterable), returning a sequence of numbers the same length as 'iterable'" error, which occurred at the "result_simul_G = minimize(negLogLike, params, method = 'differential_evolution', args=(x, y),)" line. Below I have my code; it's very integrated so I couldn't find a way to illustrate what's happening without showing most of it.
#================================================================================
import numpy as np
import pandas as pd
import os
from lmfit import minimize, Parameters, Parameter, report_fit

params = Parameters()
params.add('gamma', value=.45, min=0, max=1, vary=True)
params.add('n', value=1, min=0, max=3, vary=True)

filename = 'data.xlsx'

#================================================================================
def negLogLike(params, xData, yData):
    new_xData = []
    new_yData = []
    for i in range(len(yData)):
        if ((yData[i] != 0) and (xData[i] != 0)):
            new_xData.append(xData[i])
            new_yData.append(yData[i])
    model_result = model(new_xData, params)
    nll = 0
    epsilon = 10**-10
    for i in range(len(new_yData)):
        if (model_result[i] < epsilon):
            model_result[i] = epsilon
        if (model_result[i] > 1 - epsilon):
            model_result[i] = 1 - epsilon
        nll += new_yData[i] * np.log(model_result[i]) + (1 - new_yData[i]) * np.log(1 - model_result[i])
    return -nll

#================================================================================
def model(x, params):
    try:  # Get parameters
        g = params['gamma'].value
        n = params['n'].value
    except KeyError:
        g, n = params
    y = 1 - np.exp(-g * x**n)
    return y

#================================================================================
def GetFits(DataFrame):
    cell_count = 2300000
    GFP_GC_SIMUL = np.ones(DataFrame.shape[0], float)
    GFP_IC_SIMUL = np.ones(DataFrame.shape[0], float)

    # Data
    for i in range(DataFrame.shape[0]):
        GFP_GC_SIMUL[i] = DataFrame.loc[i, 'GFP genomes'] / cell_count
        GFP_IC_SIMUL[i] = DataFrame.loc[i, 'GFP IU'] / cell_count

    x = np.array(GFP_GC_SIMUL[10:-10])
    y = np.array(GFP_IC_SIMUL[10:-10])
    print('len=', len(x), x.dtype, ', x=', x)
    print('------------------------')
    print('len=', len(y), y.dtype, ', y=', y)

    result_simul_G = minimize(negLogLike, params, method='differential_evolution', args=(x, y),)

#================================================================================
DataFrame = pd.read_excel('data.xlsx', engine='openpyxl')
GetFits(DataFrame)
When debugging on my own I used print statements to see what x and y data was being supplied to the minimizer and this is what it showed:
len= 34 float64 , x= [0.14478261 0.28695652 0.28695652 0.28695652 0.57391304 0.57391304
0.57391304 0.8738913 0.8738913 0.8738913 1.16086957 1.16086957
1.16086957 1.44780435 1.44780435 1.44780435 1.73478261 1.73478261
1.73478261 2.03476087 2.03476087 2.03476087 2.32173913 2.32173913
2.32173913 2.60869565 2.60869565 2.60869565 2.86956522 2.86956522
2.86956522 7.17391304 7.17391304 7.17391304]
------------------------
len= 34 float64 , y= [0.005 0.01180435 0.01226087 0.01158696 0.036 0.03704348
0.03467391 0.07030435 0.06556522 0.07567391 0.1001087 0.09852174
0.0986087 0.13626087 0.13978261 0.13956522 0.16847826 0.16408696
0.19391304 0.1945 0.21319565 0.19052174 0.32204348 0.23330435
0.25028261 0.28136957 0.26293478 0.25893478 0.28273913 0.29717391
0.273 0.60826087 0.60834783 0.59482609]
I know this is quite a lot but I would appreciate any and all help.

Memory error - double recursion at fault?

I want to generate well-formed formulas in Python, but I am running into a memory error. I think I am accidentally doing some double recursion, but I am not certain. I am using Python 3.8.3 and am not really formally trained. Any tips are welcome. Here's my code:
from string import Template

vars = ['w', 'x', 'y', 'z', '$x', '$y', 's($x, $y)']

mxy = Template('m($x, $y)')

stage1 = []
for var1 in vars:
    for var2 in vars:
        stage1.append(mxy.substitute(x=var1, y=var2))

def extractFunctions(x):
    ans = []
    for formula in x:
        if '$' in formula:
            ans.append(formula)
    return ans

def stageSub(stageSet, iterations):
    currentStageSet = stageSet
    wffs = []
    newTemplates = extractFunctions(currentStageSet)
    for phormula in newTemplates:
        if ('$x' in phormula) and ('$y' not in phormula):
            for varx in currentStageSet:
                wffs.append(Template(phormula).substitute(x=varx))
        elif '$y' in phormula and '$x' not in phormula:
            for vary in currentStageSet:
                wffs.append(Template(phormula).substitute(y=vary))
        elif '$x' in phormula and '$y' in phormula:
            for varx in currentStageSet:
                for vary in currentStageSet:
                    wffs.append(Template(phormula).substitute(x=varx, y=vary))
    iterations = iterations - 1
    print(iterations)
    if iterations == 0:
        return wffs
    if iterations > 0:
        print('this happened', iterations)
        return stageSub(wffs, iterations)

stage2 = stageSub(stage1, 2)
print(len(stage2))
If you run stageSub(stage1, 1) (so just 1 iteration) it does actually halt.
Here is the error and traceback:
1
this happened 1
Traceback (most recent call last):
  File "d:\Python\ringSingleAxiom\generatingWffs.py", line 48, in <module>
    stage2 = stageSub(stage1, 2)
  File "d:\Python\ringSingleAxiom\generatingWffs.py", line 46, in stageSub
    return stageSub(wffs, iterations)
  File "d:\Python\ringSingleAxiom\generatingWffs.py", line 38, in stageSub
    wffs.append(Template(phormula).substitute(x = varx, y = vary))
MemoryError

TypeError: '<' not supported between instances of pyshipping.Package after running 2to3

Python newbie here, very good chance I am making a silly mistake..
After losing a good amount of hair and searching for many hours, I am still not able to convert a whole project to Python 3. I have a project made in the Django framework that uses Python 3.7, and I wanted to incorporate this library into my app. But because pyshipping uses Python 2.7, I thought it might cause compatibility issues. Following this answer, I converted the whole project and tried running the file binpack_simple.py, but it gives me an error I am not able to understand at all. When I run this file from my PyCharm terminal with the project interpreter set to Python 2.7 it runs perfectly, but when I set the interpreter to 3.7, it gives me the following error:
    return _pyprofile._Utils(Profile).run(statement, filename, sort)
  File "C:\Users\idadarklord\AppData\Local\Programs\Python\Python37\lib\profile.py", line 53, in run
    prof.run(statement)
  File "C:\Users\idadarklord\AppData\Local\Programs\Python\Python37\lib\cProfile.py", line 95, in run
    return self.runctx(cmd, dict, dict)
  File "C:\Users\idadarklord\AppData\Local\Programs\Python\Python37\lib\cProfile.py", line 100, in runctx
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "C:/Users/idadarklord/PycharmProjects/untitled/pyshipping/binpack_simple.py", line 230, in test
    bins, rest = binpack(packages)
  File "C:/Users/idadarklord/PycharmProjects/untitled/pyshipping/binpack_simple.py", line 218, in binpack
    return allpermutations(packages, bin, iterlimit)
  File "C:/Users/idadarklord/PycharmProjects/untitled/pyshipping/binpack_simple.py", line 203, in allpermutations
    trypack(bin, todo, bestpack)
  File "C:/Users/idadarklord/PycharmProjects/untitled/pyshipping/binpack_simple.py", line 187, in trypack
    bins, rest = packit(bin, packages)
  File "C:/Users/idadarklord/PycharmProjects/untitled/pyshipping/binpack_simple.py", line 131, in packit
    packages = sorted(originalpackages)
TypeError: '<' not supported between instances of 'Package' and 'Package'
Here is my file. Please let me know if I should upload the whole project for clarifications.
#!/usr/bin/env python
# encoding: utf-8
"""
binpack_simple.py
"""

from builtins import map
from builtins import range
from pyshipping.package import Package
from setuptools import setup, find_packages
from distutils.extension import Extension
import codecs
import time
import random


def packstrip(bin, p):
    """Creates a Strip which fits into bin.

    Returns the Packages to be used in the strip, the dimensions of the strip as a 3-tuple
    and a list of "left over" packages.
    """
    # This code is somewhat optimized and somewhat unreadable
    s = []              # strip
    r = []              # rest
    ss = sw = sl = 0    # stripsize
    bs = bin.heigth     # binsize
    sapp = s.append     # speedup
    rapp = r.append     # speedup
    ppop = p.pop        # speedup
    while p and (ss <= bs):
        n = ppop(0)
        nh, nw, nl = n.size
        if ss + nh <= bs:
            ss += nh
            sapp(n)
            if nw > sw:
                sw = nw
            if nl > sl:
                sl = nl
        else:
            rapp(n)
    return s, (ss, sw, sl), r + p


def packlayer(bin, packages):
    strips = []
    layersize = 0
    layerx = 0
    layery = 0
    binsize = bin.width
    while packages:
        strip, (sizex, stripsize, sizez), rest = packstrip(bin, packages)
        if layersize + stripsize <= binsize:
            packages = rest
            if not strip:
                # we were not able to pack anything
                break
            layersize += stripsize
            layerx = max([sizex, layerx])
            layery = max([sizez, layery])
            strips.extend(strip)
        else:
            # Next Layer please
            packages = strip + rest
            break
    return strips, (layerx, layersize, layery), packages


def packbin(bin, packages):
    packages.sort()
    layers = []
    contentheigth = 0
    contentx = 0
    contenty = 0
    binsize = bin.length
    while packages:
        layer, (sizex, sizey, layersize), rest = packlayer(bin, packages)
        if contentheigth + layersize <= binsize:
            packages = rest
            if not layer:
                # we were not able to pack anything
                break
            contentheigth += layersize
            contentx = max([contentx, sizex])
            contenty = max([contenty, sizey])
            layers.extend(layer)
        else:
            # Next Bin please
            packages = layer + rest
            break
    return layers, (contentx, contenty, contentheigth), packages


def packit(bin, originalpackages):
    packedbins = []
    packages = sorted(originalpackages)
    while packages:
        packagesinbin, (binx, biny, binz), rest = packbin(bin, packages)
        if not packagesinbin:
            # we were not able to pack anything
            break
        packedbins.append(packagesinbin)
        packages = rest
    # we now have a result, try to get a better result by rotating some bins
    return packedbins, rest


# In newer Python versions these van be imported:
# from itertools import permutations
def product(*args, **kwds):
    # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
    # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
    pools = list(map(tuple, args)) * kwds.get('repeat', 1)
    result = [[]]
    for pool in pools:
        result = [x + [y] for x in result for y in pool]
    for prod in result:
        yield tuple(prod)


def permutations(iterable, r=None):
    pool = tuple(iterable)
    n = len(pool)
    r = n if r is None else r
    for indices in product(list(range(n)), repeat=r):
        if len(set(indices)) == r:
            yield tuple(pool[i] for i in indices)


class Timeout(Exception):
    pass


def allpermutations_helper(permuted, todo, maxcounter, callback, bin, bestpack, counter):
    if not todo:
        return counter + callback(bin, permuted, bestpack)
    else:
        others = todo[1:]
        thispackage = todo[0]
        for dimensions in set(permutations((thispackage[0], thispackage[1], thispackage[2]))):
            thispackage = Package(dimensions, nosort=True)
            if thispackage in bin:
                counter = allpermutations_helper(permuted + [thispackage], others, maxcounter, callback,
                                                 bin, bestpack, counter)
            if counter > maxcounter:
                raise Timeout('more than %d iterations tries' % counter)
        return counter


def trypack(bin, packages, bestpack):
    bins, rest = packit(bin, packages)
    if len(bins) < bestpack['bincount']:
        bestpack['bincount'] = len(bins)
        bestpack['bins'] = bins
        bestpack['rest'] = rest
    if bestpack['bincount'] < 2:
        raise Timeout('optimal solution found')
    return len(packages)


def allpermutations(todo, bin, iterlimit=5000):
    random.seed(1)
    random.shuffle(todo)
    bestpack = dict(bincount=len(todo) + 1)
    try:
        # First try unpermuted
        trypack(bin, todo, bestpack)
        # now try permutations
        allpermutations_helper([], todo, iterlimit, trypack, bin, bestpack, 0)
    except Timeout:
        pass
    return bestpack['bins'], bestpack['rest']


def binpack(packages, bin=None, iterlimit=5000):
    """Packs a list of Package() objects into a number of equal-sized bins.

    Returns a list of bins listing the packages within the bins and a list of packages which can't be
    packed because they are to big."""
    if not bin:
        bin = Package("600x400x400")
    return allpermutations(packages, bin, iterlimit)


def test():
    fd = open('small.txt')
    vorher = 0
    nachher = 0
    start = time.time()
    for line in fd:
        packages = [Package(pack) for pack in line.strip().split()]
        if not packages:
            continue
        bins, rest = binpack(packages)
        if rest:
            print(("invalid data", rest, line))
        else:
            vorher += len(packages)
            nachher += len(bins)
    # print((time.time() - start))
    print((vorher, nachher, float(nachher) / vorher * 100))

#
if __name__ == '__main__':
    import cProfile
    cProfile.run('test()')
    # packlayer(bin, packages)
Here is an online link to the file inside the project.
In Python 3 support for the __cmp__ method has been removed. You need to provide a __lt__ method for the class instead if you want to compare two instances. The code for the original Package.__cmp__ is here.
The new method will probably look like:
def __lt__(self, other):
    return self.volume < other.volume
but obviously you should test this thoroughly.
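For context, this is the behaviour that defining __lt__ restores in Python 3. The Box class below is a minimal, self-contained stand-in, not the real pyshipping Package:
class Box:
    """Minimal stand-in for a package-like object with a volume."""

    def __init__(self, height, width, length):
        self.volume = height * width * length

    def __lt__(self, other):
        # Defining __lt__ is all that sorted() and list.sort() need in Python 3.
        return self.volume < other.volume


boxes = [Box(600, 400, 400), Box(100, 100, 100), Box(300, 200, 200)]
print([b.volume for b in sorted(boxes)])  # [1000000, 12000000, 96000000]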
You need to set "volume" as your sorting key, meaning:
Change:
packages.sort()
packages = sorted(originalpackages)
To:
packages.sort(key=lambda x: x.volume)
packages = sorted(originalpackages, key=lambda x: x.volume)
This way, you won't need to change any of the Package class's internal code manually.
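If you prefer to avoid the lambdas, operator.attrgetter expresses the same key; this is just an equivalent alternative, not something the pyshipping code itself uses:
from operator import attrgetter

packages.sort(key=attrgetter('volume'))
packages = sorted(originalpackages, key=attrgetter('volume'))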

object of type '_Task' has no len() error

I am using the Parallel Python (pp) module. I have a function that returns an array, but when I print the variable that holds the result of the parallelized function, it shows "pp._Task object at 0x04696510" and not the value of the matrix.
Here is the code:
from __future__ import print_function
import scipy, pylab
from scipy.io.wavfile import read
import sys
import peakpicker as pea
import pp
import fingerprint as fhash
import matplotlib
import numpy as np
import tdft
import subprocess
import time

if __name__ == '__main__':
    start = time.time()

    #Peak picking dimensions
    f_dim1 = 30
    t_dim1 = 80
    f_dim2 = 10
    t_dim2 = 20
    percentile = 80
    base = 100  # lowest frequency bin used (peaks below are too common/not as useful for identification)
    high_peak_threshold = 75
    low_peak_threshold = 60

    #TDFT parameters
    windowsize = 0.008   #set the window size (0.008s = 64 samples)
    windowshift = 0.004  #set the window shift (0.004s = 32 samples)
    fftsize = 1024       #set the fft size (if srate = 8000, 1024 --> 513 freq. bins separated by 7.797 Hz from 0 to 4000Hz)

    #Hash parameters
    delay_time = 250     # 250*0.004 = 1 second#200
    delta_time = 250*3   # 750*0.004 = 3 seconds#300
    delta_freq = 128     # 128*7.797Hz = approx 1000Hz#80

    #Time pair parameters
    TPdelta_freq = 4
    TPdelta_time = 2

    #Loading stored data
    database = np.loadtxt('database.dat')
    songnames = np.loadtxt('songnames.dat', dtype=str, delimiter='\t')
    separator = '.'

    print('Please enter an audio sample file to identify: ')
    userinput = raw_input('---> ')
    subprocess.call(['ffmpeg', '-y', '-i', userinput, '-ac', '1', '-ar', '8k', 'filesample.wav'])
    sample = read('filesample.wav')
    userinput = userinput.split(separator, 1)[0]

    print('Analyzing the audio sample: '+str(userinput))
    srate = sample[0]   #sample rate in samples/second
    audio = sample[1]   #audio data
    spectrogram = tdft.tdft(audio, srate, windowsize, windowshift, fftsize)
    mytime = spectrogram.shape[0]
    freq = spectrogram.shape[1]
    print('The size of the spectrogram is time: '+str(mytime)+' and freq: '+str(freq))

    threshold = pea.find_thres(spectrogram, percentile, base)
    peaks = pea.peak_pick(spectrogram, f_dim1, t_dim1, f_dim2, t_dim2, threshold, base)
    print('The initial number of peaks is:'+str(len(peaks)))
    peaks = pea.reduce_peaks(peaks, fftsize, high_peak_threshold, low_peak_threshold)
    print('The reduced number of peaks is:'+str(len(peaks)))

    #Store information for the spectrogram graph
    samplePeaks = peaks
    sampleSpectro = spectrogram

    hashSample = fhash.hashSamplePeaks(peaks, delay_time, delta_time, delta_freq)
    print('The dimensions of the hash matrix of the sample: '+str(hashSample.shape))

    # tuple of all parallel python servers to connect with
    ppservers = ()
    #ppservers = ("10.0.0.1",)

    if len(sys.argv) > 1:
        ncpus = int(sys.argv[1])
        # Creates jobserver with ncpus workers
        job_server = pp.Server(ncpus, ppservers=ppservers)
    else:
        # Creates jobserver with automatically detected number of workers
        job_server = pp.Server(ppservers=ppservers)

    print("Starting pp with", job_server.get_ncpus(), "workers")
    print('Attempting to identify the sample audio clip.')
Here I call the function in fingerprint. The commented line worked, but when I try to parallelize it, it doesn't work:
    timepairs = job_server.submit(fhash.findTimePairs, (database, hashSample, TPdelta_freq, TPdelta_time, ))
    # timepairs = fhash.findTimePairs(database, hashSample, TPdelta_freq, TPdelta_time)
    print(timepairs)

    #Compute number of matches by song id to determine a match
    numSongs = len(songnames)
    songbins = np.zeros(numSongs)
    numOffsets = len(timepairs)
    offsets = np.zeros(numOffsets)
    index = 0
    for i in timepairs:
        offsets[index] = i[0] - i[1]
        index = index + 1
        songbins[i[2]] += 1

    # Identify the song
    #orderarray=np.column_stack((songbins,songnames))
    #orderarray=orderarray[np.lexsort((songnames,songbins))]
    q3 = np.percentile(songbins, 75)
    q1 = np.percentile(songbins, 25)

    j = 0
    for i in songbins:
        if i > (q3 + (3 * (q3 - q1))):
            print("Result-> " + str(i) + ":" + songnames[j])
        j += 1

    end = time.time()
    print('Tiempo: ' + str(end - start) + ' s')
    print("Time elapsed: ", +time.time() - start, "s")

    fig3 = pylab.figure(1003)
    ax = fig3.add_subplot(111)
    ind = np.arange(numSongs)
    width = 0.35
    rects1 = ax.bar(ind, songbins, width, color='blue', align='center')
    ax.set_ylabel('Number of Matches')
    ax.set_xticks(ind)
    xtickNames = ax.set_xticklabels(songnames)
    matplotlib.pyplot.setp(xtickNames)
    pylab.title('Song Identification')
    fig3.show()
    pylab.show()

    print('The sample song is: ' + str(songnames[np.argmax(songbins)]))
The function in fingerprint that I am trying to parallelize is:
def findTimePairs(hash_database, sample_hash, deltaTime, deltaFreq):
    "Find the matching pairs between sample audio file and the songs in the database"
    timePairs = []
    for i in sample_hash:
        for j in hash_database:
            if (i[0] > (j[0] - deltaFreq) and i[0] < (j[0] + deltaFreq)):
                if (i[1] > (j[1] - deltaFreq) and i[1] < (j[1] + deltaFreq)):
                    if (i[2] > (j[2] - deltaTime) and i[2] < (j[2] + deltaTime)):
                        timePairs.append((j[3], i[3], j[4]))
                    else:
                        continue
                else:
                    continue
            else:
                continue
    return timePairs
The complete error is:
Traceback (most recent call last):
  File "analisisPrueba.py", line 93, in <module>
    numOffsets = len(timepairs)
TypeError: object of type '_Task' has no len()
The submit() method submits a task to the server. What you get back is a reference to the task, not its result. (How could it return its result? submit() returns before any of that work has been done!) You should instead provide a callback function to receive the results. For example, timepairs.append is a function that will take the result and append it to the list timepairs.
timepairs = []
job_server.submit(fhash.findTimePairs, (database, hashSample, TPdelta_freq, TPdelta_time, ), callback=timepairs.append)
(Each findTimePairs call should calculate one result, in case that isn't obvious, and you should submit multiple tasks. Otherwise you're invoking all the machinery of Parallel Python for no reason. And make sure you call job_server.wait() to wait for all the tasks to finish before trying to do anything with your results. In short, read the documentation and some example scripts and make sure you understand how it works.)
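As a rough sketch of what that can look like here, assuming findTimePairs works the same on a slice of database (the chunking below is illustrative and not part of the original code):
import numpy as np

timepairs = []

# One slice of the database per worker; each slice becomes its own task,
# and the callback collects each task's list of matches as it finishes.
for chunk in np.array_split(database, job_server.get_ncpus()):
    job_server.submit(fhash.findTimePairs,
                      (chunk, hashSample, TPdelta_freq, TPdelta_time),
                      callback=timepairs.extend)

# Block until every submitted task has finished before using the results.
job_server.wait()
print(len(timepairs))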

SciPy: TypeError when using scipy.optimize.minimize

I'm encountering a vague error when attempting to minimise a function using scipy.optimize.minimize. The error I get is,
Traceback (most recent call last):
  File "general_fd.py", line 103, in <module>:
    a_tmp[index] = minimize(iF.hamiltonian,0,(x,u_last,m_tmp,dt,dx,k*dt,index),tol=1e-3)
TypeError: float() argument must be a string or number
where
def hamiltonian(alphas, x_array, u_array, m_array, dt, dx, time, index):
    sigma2 = Sigma_local(time, x_array[index], alphas, m_array[index])**2
    movement = f_global(time, x_array[index], alphas)
    L_var = L_global(time, x_array[index], alphas, m_array[index])
    dx2 = dx**2
    if index == 0:
        sigma2R = Sigma_local(time, x_array[index+1], alphas, m_array[index])**2
        tmp = u_array[0]*(abs(movement)/dx - sigma2/dx2) + u_array[1]*(sigma2R/dx2 - abs(movement)/dx) + L_var
    elif index == x_array.size - 1:
        sigma2L = Sigma_local(time, x_array[index-1], alphas, m_array[index])**2
        tmp = u_array[-1]*(abs(movement)/dx - sigma2/dx2) + u_array[-2]*(sigma2L/dx2 - abs(movement)/dx) + L_var
    else:
        sigma2L = Sigma_local(time, x_array[index-1], alphas, m_array[index])**2
        sigma2R = Sigma_local(time, x_array[index+1], alphas, m_array[index])**2
        tmp = u_array[index]*(abs(movement)/dx - sigma2/dx2) + u_array[index+1]*(sigma2R/(2*dx2) + min(movement, 0)/dx) + u_array[index-1]*(sigma2L/(2*dx2) - max(movement, 0)/dx) + L_var
    return tmp[0]

def Sigma_local(time, x, a, m):
    return 0*x

def f_global(time, x_array, a_array):
    return 0.1*a_array*x_array

def L_global(time, x_array, a_array, m_array):  # general cost
    return a_array + np.sqrt(x_array) + a_array**2
The above code is pretty ugly, as I've modified it somewhat in order to try to find the error; my apologies. I've found that the minimize function evaluates hamiltonian for a few trial values before throwing the error on the return statement in hamiltonian. I've used minimize on other, similar functions without this error occurring, and I'm honestly quite stumped. Any help would be appreciated.
