Fully monotone interpolation in Python
I have some data such as this: [plot of noisy, roughly increasing y values]
I would like to fit a differentiable monotone curve to it. I tried scipy's PchipInterpolator class, but on a very similar graph it gave this: [plot of the PCHIP fit]
This is not monotone.
How can I fit a monotone curve to data like this?
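(For reference, the PCHIP attempt presumably looked something like the sketch below; the exact call isn't shown in the question. Note that PchipInterpolator is shape-preserving: it is monotone only where the input data already is, so noisy data gives a non-monotone curve.)

import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.arange( len(y) )            # y = the noisy sample values given below
pchip = PchipInterpolator( x, y )  # C1 and shape-preserving, but only as monotone as y itself
xs = np.linspace( 0, len(y) - 1, 1000 )
ys = pchip( xs )                   # not monotone when y is noisy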
Here is a sample set of y values for another similar graph:
[0.1109157119023644, 0.20187393816931934, 0.14466318670239758, 0.16535159414166822, 0.05452708697483864, 0.2153046237959556, 0.2200300476272603, 0.21012762463269324, 0.15947100322395022, 0.2819691842129948, 0.15567770052985092, 0.24850595803020692, 0.1329341593280457, 0.15595107081606913, 0.3232021121832229, 0.23707961921686588, 0.2415887076540357, 0.32363506549779797, 0.3584089204036798, 0.29232772580068433, 0.22145994836140775, 0.22797587985241133, 0.2717787840603025, 0.3245255944762287, 0.29301098282789195, 0.32417076823344143, 0.3450906550996232, 0.34272097408024904, 0.3868714875012437, 0.41876692320045755, 0.3544198724867363, 0.33073960954801895, 0.3921033666371904, 0.33349050060172974, 0.3608862044547096, 0.37375822841635425, 0.5396399750708429, 0.4209201143798284, 0.42004773793166883, 0.5217725632679073, 0.5911731474218788, 0.43389609315065386, 0.4287288396176006, 0.43007525393257007, 0.5687062142675405, 0.6030811498722173, 0.5292225577714743, 0.47710974351051355, 0.6182720730381119, 0.6241033581931327, 0.6236788197617511, 0.6643161356364049, 0.5577616524049582, 0.6888440258481371, 0.6867893120660341, 0.6685257606057502, 0.599481675493677, 0.7309075091448749, 0.7644365338580481, 0.6176797601816733, 0.6751467827192018, 0.6452178017908761, 0.6684778262246701, 0.7003380077556168, 0.667035916425416, 0.8434451759113093, 0.8419343615815968, 0.8657695361433773, 0.7392487161484605, 0.8773282098364621, 0.8265679895117846, 0.7246599961191632, 0.7251899061730714, 0.9271640780410231, 0.9180581424305536, 0.8099033021701689, 0.8268585329594615, 0.8519967080830176, 0.8711231413093845, 0.8689802343798663, 0.8299523829217353, 1.0057741699770046, 0.8538130788729608, 0.9662784297225102, 1.023419780920539, 0.913146849759822, 0.9900885996579213, 0.8740638988529978, 0.8900285618419457, 0.9065474574434158, 1.0749522597307315, 1.0319120938258166, 1.0051369663172995, 0.9893558841613622, 1.051384986916457, 1.0327996870915341, 1.0945543972861898, 0.9716604944496021, 1.1490370559566179, 1.1379231481207432, 1.6836433783615088, 1.8162068766097395, 2.072155286917785, 2.0395966998366, 2.191064589600466, 2.1581974932543617, 2.163403843819597, 2.133441151300847, 2.1726053994136922, 2.1157865673629526, 2.2249636455682866, 2.2313062166802147, 2.1731708496472764, 2.315203950110816, 2.1601242661726827, 2.174940281421225, 2.2653635413275945, 2.337227057574145, 2.3645767548381618, 2.3084919291392527, 2.314014515926446, 2.25166717296155, 2.2621157708115778, 2.2644578546265586, 2.313504860292943, 2.398969190357051, 2.309443951779675, 2.278946047410807, 2.4080802287121146, 2.353652872018618, 2.35527529074088, 2.4233001060410784, 2.428767198055608, 2.35677123091093, 2.497135132404064, 2.3978099128437282, 2.3970802609341972, 2.4967434818740024, 2.511209192435555, 2.541001050440798, 2.5760248002036525, 2.5960512284192245, 2.4778408861721037, 2.5757724103530046, 2.631148267999664, 2.538327346218921, 2.4878734713248507, 2.6133797275761066, 2.6282561527857395, 2.6150327104952447, 3.102757164382848, 3.3318503012160905, 3.3907776288198193, 3.6065313558941936, 3.601180295875859, 3.560491539319038, 3.650095006265445, 3.574812155815713, 3.686227315374108, 3.6338261415040867, 3.5661194785086288, 3.5747332336054645, 3.560674343726918, 3.5678550481603635, 3.5342848534390967, 3.4929538312485913, 3.564544653619436, 3.6861775399566126, 3.6390300636595216, 3.6656336332413666, 3.5731185631923945, 3.5965520044069854, 3.537434489989021, 3.5590937423870144, 3.5331656424410083, 3.640652819618705, 3.5971240740252126, 3.641793843012055, 
3.6064014089254295, 3.530378938786505, 3.613631139461306, 3.519542268056021, 3.5416251524576, 3.524789618934195, 3.5519951806099512, 3.6435695455293975, 3.6825670484650863, 3.5993379768209217, 3.628367553897596, 3.633290480934276, 3.5772841681579535, 3.602326323397947, 3.518180278272883, 3.531054006706696, 3.5566645495066167, 3.5410992153240985, 3.630762839301216, 3.5924649123201053, 3.646230633817883, 3.568290612034935, 3.638356129262967, 3.566083243271712, 3.6064978645771797, 3.4942864293427633, 3.595438454812999, 3.681726879126678, 3.6501308156903463, 3.5490717955938593, 3.598535359345363, 3.6328331698421654, 3.595159538698094, 3.556715819008055, 3.6292942886764554, 3.6362895697392856, 3.5965220100874093, 3.6103542985016266, 3.5715010140382493, 3.658769915445062, 3.5939686395400416, 3.4974461928859917, 3.5232691556732267, 3.6145687814416614, 3.5682054018341005, 3.648937250575395, 3.4912089018613384, 3.522426560340423, 3.6757968409374637, 3.651348691084845, 3.5395070091675973, 3.5306275536360383, 3.6153498246329883, 3.599762785949876, 3.5351931286962333, 3.6488316987683054, 3.5198301490992963, 3.5696570079786687, 3.561553836008927, 3.5659475947331423, 3.553147100256108, 3.5475591872743664, 3.6097226797553317, 3.6849600324757934, 3.5264731043844413, 3.506658609738451, 3.5535775980874114, 3.5487291053913554, 3.570651383823912, 3.552993371839188, 3.5054297764661846, 3.5723024888238792]
Here's a monotone curve fitter in essentially 5 lines of python, with numpy
and a lowpass filter from scipy.signal:
#!/usr/bin/env python
"""https://stackoverflow.com/questions/56551114/fully-monotone-interpolation-in-python """
# see also
# https://en.wikipedia.org/wiki/Monotone-spline aka I-spline
# https://scikit-learn.org/stable/modules/isotonic.html
# denis 2 March 2020

from __future__ import division, print_function
import numpy as np
from scipy import signal as sig
from matplotlib import pyplot as plt
import seaborn as sns

def butter_filtfilt( x, Wn=0.5, axis=0 ):
    """ butter( 2, Wn ), filtfilt
        axis 0 each col, -1 each row
    """
    b, a = sig.butter( N=2, Wn=Wn )
    return sig.filtfilt( b, a, x, axis=axis, method="gust" )  # twice, forward backward

def ints( x ):
    return x.round().astype(int)

def minavmax( x ):
    return "min av max %.3g %.3g %.3g" % (
            x.min(), x.mean(), x.max() )

def pvec( x ):
    n = len(x) // 25 * 25
    return "%s \n%s \n" % (
            minavmax( x ),
            ints( x[-n:] ).reshape( -1, 25 ))

#...............................................................................
def monofit( y, Wn=0.1 ):
    """ monotone-increasing curve fit """
    y = np.asarray(y).squeeze()
    print( "\n{ monofit: y %d %s Wn %.3g " % (
            len(y), minavmax( y ), Wn ))
    ygrad = np.gradient( y )
    print( "grad y:", pvec( ygrad ))

    # lowpass filter the gradient --
    gradsmooth = butter_filtfilt( ygrad, Wn=Wn )
    print( "gradsmooth:", pvec( gradsmooth ))

    ge0 = np.fmax( gradsmooth, 0 )  # clip negative slopes to 0

    ymono = np.cumsum( ge0 )  # integrate, sensitive to first few
    ymono += (y - ymono).mean()

    err = y - ymono
    print( "y - ymono:", pvec( err ))
    errstr = "average |y - monofit|: %.2g" % np.abs( err ).mean()
    print( errstr )
    print( "} \n" )
    return ymono, err, errstr

#...............................................................................
if __name__ == "__main__":
    import sys

    np.set_printoptions( threshold=20, edgeitems=15, linewidth=120,
            formatter = dict( float = lambda x: "%.2g" % x ))  # float arrays %.2g
    print( 80 * "=" )
    thispy = sys.argv[0]
    infile = sys.argv[1] if len(sys.argv) > 1 \
            else "so-mono.txt"
    Wn = 0.1  # filter cutoff: smaller is smoother
    params = "%s %s Wn %g " % (thispy, infile, Wn)
    print( params )

    y = np.loadtxt( infile ) * 100
    print( "y:", y )
    ymono, err, errstr = monofit( y, Wn=Wn )

    if 1:
        sns.set_style("whitegrid")
        fig, ax = plt.subplots( figsize=[10, 5] )
        plt.subplots_adjust( left=.05, right=.99, bottom=.05, top=.90 )
        fig.suptitle(
            "Easy monotone curve fit: np.gradient | lowpass filter | clip < 0 | integrate \n"
            + errstr, multialignment="left" )
        ax.plot( ymono, color="orangered" )
        j = np.where( ymono < y )[0]
        xax = np.arange( len(y) )
        plt.vlines( xax[j], ymono[j], y[j], color="blue", lw=1 )
        j = np.where( ymono > y )[0]
        plt.vlines( xax[j], y[j], ymono[j], color="blue", lw=1 )
        png = thispy.replace( ".py", ".png" )
        print( "writing", png )
        plt.savefig( png )
        plt.show()
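Stripped of the printing and plotting, the core of the method above is just four steps. A condensed sketch of the same pipeline (the function name here is mine):

import numpy as np
from scipy import signal as sig

def monofit_core( y, Wn=0.1 ):
    """ np.gradient -> lowpass filter -> clip < 0 -> integrate """
    y = np.asarray( y, dtype=float )
    b, a = sig.butter( N=2, Wn=Wn )  # 2nd-order Butterworth lowpass
    gradsmooth = sig.filtfilt( b, a, np.gradient( y ), method="gust" )
    ymono = np.cumsum( np.fmax( gradsmooth, 0 ))  # nondecreasing by construction
    return ymono + (y - ymono).mean()  # shift to match the overall level

Wn is the normalized cutoff frequency of the Butterworth filter, 0 < Wn < 1 with 1 the Nyquist frequency; smaller Wn smooths more.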
You can also preprocess: make your input data monotone first, say using isotonic regression, as described at
https://stats.stackexchange.com/questions/156327/fit-monotone-polynomial-to-data
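For example, with scikit-learn (a minimal sketch; isotonic regression gives a monotone but piecewise-constant fit, so run a shape-preserving interpolator through its output if you need a differentiable curve):

import numpy as np
from sklearn.isotonic import IsotonicRegression
from scipy.interpolate import PchipInterpolator

x = np.arange( len(y) )
y_iso = IsotonicRegression( increasing=True ).fit_transform( x, y )  # monotone step fit
curve = PchipInterpolator( x, y_iso )  # PCHIP preserves the monotonicity of its input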