I'm having two issues with attempting to define my own class. First, the most basic issue is that if I write a python script and try to import it into a second script, the import fails (the scripts are in the same directory). For example, I wrote a script called first.py:
def foo(): print("foo")
If I try to import this into a second script, I get a 'no module' error:
import first
first.foo()
ImportError: No module named first
Second, I wrote a script that defines a class for non-linear regression. The script imports the modules within the class. However, I'm also required to import the modules OUTSIDE of the class as well. The script won't work if the modules aren't imported both inside and outside of the class definition:
class NLS:
    ''' This provides a wrapper for scipy.optimize.leastsq to get the relevant output for nonlinear least squares.
    Although scipy provides curve_fit for that reason, curve_fit only returns parameter estimates and covariances.
    This wrapper returns numerous statistics and diagnostics.'''
    # IMPORT THE MODULES THE FIRST TIME - WILL NOT RUN WITHOUT THESE
    import numpy as np
    from scipy.optimize import leastsq
    import scipy.stats as spst

    def __init__(self, func, p0, xdata, ydata):
        # Check the data
        if len(xdata) != len(ydata):
            msg = 'The number of observations does not match the number of rows for the predictors'
            raise ValueError(msg)
        # Check parameter estimates
        if type(p0) != dict:
            msg = "Initial parameter estimates (p0) must be a dictionary of form p0={'a':1, 'b':2, etc}"
            raise ValueError(msg)
        self.func = func
        self.inits = p0.values()
        self.xdata = xdata
        self.ydata = ydata
        self.nobs = len( ydata )
        self.nparm = len( self.inits )
        self.parmNames = p0.keys()
        for i in range( len(self.parmNames) ):
            if len(self.parmNames[i]) > 5:
                self.parmNames[i] = self.parmNames[i][0:4]
        # Run the model
        self.mod1 = leastsq(self.func, self.inits, args=(self.xdata, self.ydata), full_output=1)
        # Get the parameters
        self.parmEsts = np.round( self.mod1[0], 4 )
        # Get the error variance and standard deviation
        self.RSS = np.sum( self.mod1[2]['fvec']**2 )
        self.df = self.nobs - self.nparm
        self.MSE = self.RSS / self.df
        self.RMSE = np.sqrt( self.MSE )
        # Get the covariance matrix
        self.cov = self.MSE * self.mod1[1]
        # Get parameter standard errors
        self.parmSE = np.diag( np.sqrt( self.cov ) )
        # Calculate the t-values
        self.tvals = self.parmEsts / self.parmSE
        # Get p-values
        self.pvals = (1 - spst.t.cdf( np.abs(self.tvals), self.df )) * 2
        # Get biased variance (MLE) and calculate log-likelihood
        # (use float division so nobs/2 is not truncated under Python 2)
        self.s2b = self.RSS / self.nobs
        self.logLik = -self.nobs/2.0 * np.log(2*np.pi) - self.nobs/2.0 * np.log(self.s2b) - 1/(2.0*self.s2b) * self.RSS
        del(self.mod1)
        del(self.s2b)
        del(self.inits)

    # Get AIC. Add 1 to the df to account for estimation of standard error
    def AIC(self, k=2):
        return -2*self.logLik + k*(self.nparm + 1)

    del(np)
    del(leastsq)

    # Print the summary
    def summary(self):
        print
        print 'Non-linear least squares'
        print 'Model: ' + self.func.func_name
        print 'Parameters:'
        print " Estimate Std. Error t-value P(>|t|)"
        for i in range( len(self.parmNames) ):
            print "% -5s % 5.4f % 5.4f % 5.4f % 5.4f" % tuple( [self.parmNames[i], self.parmEsts[i], self.parmSE[i], self.tvals[i], self.pvals[i]] )
        print
        print 'Residual Standard Error: % 5.4f' % self.RMSE
        print 'Df: %i' % self.df
## EXAMPLE USAGE
import pandas as pd
# IMPORT THE MODULES A SECOND TIME. WILL NOT RUN WITHOUT THESE
import numpy as np
from scipy.optimize import leastsq
import scipy.stats as spst
# Import data into dataframe
respData = pd.read_csv('/Users/Nate/Documents/FIU/Research/MTE_Urchins/Data/respiration.csv')
# Standardize to 24 h
respData['respDaily'] = respData['C_Resp_Mass'] * 24
# Create the Arrhenius temperature
respData['Ar'] = -1 / (8.617 * 10**-5 * (respData['Temp']+273))
# Define the likelihood null model
def nullMod(params, mass, yObs):
    a = params[0]
    c = params[1]
    yHat = a*mass**c
    err = yObs - yHat
    return(err)
p0 = {'a':1, 'b':1}
tMod = NLS(nullMod, p0, respData['UrchinMass'], respData['respDaily'] )
tMod.summary()
tMod.AIC()
tMod.logLik
These problems are related, because when I try to import this class into another script, the import fails (as in the first problem). Can anyone tell me what's going on?
Update
I just started being able to import scripts; whatever that funky startup path problem was, it appears to have finally gone away somehow. No idea what it was. However, I still don't understand why, if I import the necessary modules in my class definition, I MUST import them in my other scripts as well. It seems to me that if my class imports the modules, I shouldn't need to import them globally as well. Is this correct?
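To make the second question concrete, here is a minimal sketch that isolates the behaviour (the file and class names, scratch.py and Holder, are made up):

# scratch.py -- hypothetical file to isolate the behaviour
class Holder:
    import numpy as np          # this binding lives in the class namespace

    def use_it(self):
        return np.zeros(3)      # bare names resolve via module globals, not the class body

h = Holder()
print(Holder.np)                # works: the import became the class attribute Holder.np
h.use_it()                      # NameError: name 'np' is not defined

So an import placed in a class body is only reachable as Holder.np or self.np inside methods, never as the bare name np, which would explain why the module-level imports are still needed.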
I think the first comment from zhangxaochen is the best starting place for problem number 1. sys.path should contain all the paths that Python searches when you try to import a module. These are the steps I'd go through to solve this one:
Make sure os.getcwd() returns the same directory as os.path.dirname(sys.argv[0]).
Next I'd make sure that path is in the sys.path list (a quick diagnostic for both checks is sketched below).
If both those check out....
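A minimal diagnostic along those lines, assuming first.py sits in the same directory as the importing script:

import os
import sys

print(os.getcwd())                                    # current working directory
print(os.path.dirname(os.path.abspath(sys.argv[0])))  # directory of the running script
print(sys.path)                                       # should include the directory holding first.py

# If that directory is missing, appending it by hand is a workaround:
sys.path.append(os.path.dirname(os.path.abspath(sys.argv[0])))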
Related
I am fitting a linear model using maximum likelihood estimation based on the GenericLikelihoodModel class. The errors exhibit heteroskedasticity and serial correlation, so I want to estimate HAC standard errors and display them in the main output. Although this is straightforward to do for the OLS estimation, I am unable to implement it for the ML model.
I have spent a lot of time searching, but no answer has come up. My background is in econometrics (EViews, Stata, Matlab), so I am familiar with ML estimation and HAC standard errors; I am just struggling to implement them in Python. I understand that it could just be done manually after the estimation, but I would like to do it using the available statsmodels tools and have it presented in the main estimation results.
Is there a way to use the statsmodels cov_type parameter with the GenericLikelihoodModel class, or would we need to just code the errors from scratch afterwards?
The code is below.
# = = = = = = = = = = = = = = = = = = = #
# MLE with a linear model #
# = = = = = = = = = = = = = = = = = = = #
# Gives the (almost) exact same results as OLS when using normal errors
# https://www.statsmodels.org/dev/examples/notebooks/generated/generic_mle.html
# https://rlhick.people.wm.edu/posts/estimating-custom-mle.html
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
from statsmodels.base.model import GenericLikelihoodModel
from scipy.optimize import minimize
# --- Set up --- #
def _ll_ols(y, X, beta, sigma):  # This is a python function that calculates the log-likelihood value.
    mu = X.dot(beta)
    log_likelihood = norm(mu, sigma).logpdf(y).sum()  # log_likelihood = np.sum(norm.logpdf(y, mu, sigma)) is another way
    return log_likelihood

class linear_MLE(GenericLikelihoodModel):  # We are creating a class called 'linear_MLE' using 'GenericLikelihoodModel' as a template
    def __init__(self, endog, exog, **kwds):  # **kwds just carries any extra keyword arguments through
        super(linear_MLE, self).__init__(endog, exog, **kwds)  # This gives the 'linear_MLE' we are creating all of the same properties as the 'GenericLikelihoodModel' class
    def nloglikeobs(self, params):  # 'GenericLikelihoodModel' has a set of associated "methods" (basically functions), here we create a new one
        sigma = params[-1]  # Pull the sigma and the beta from the model, this is just anything that you need in the log likelihood function
        beta = params[:-1]
        ll = _ll_ols(self.endog, self.exog, beta, sigma)  # Calculate the log likelihood based on the function that we created above.
        return -ll  # Basically 'GenericLikelihoodModel' gives us the freedom/ability to set 'nloglikeobs' with our own likelihood value.
    def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):  # Update the fit for any other values we needed in the likelihood function and the starting values
        self.exog_names.append('*** sigma ***')  # we have one additional parameter and we need to add it for the summary
        if start_params is None:
            start_params = np.append(sm.OLS(y, X).fit().params.values, sm.OLS(y, X).fit().scale**.5)  # Set the starting values as the OLS estimates.
            #start_params = np.append(np.ones(self.exog.shape[1]), .5)  # Set some reasonable starting values. Play around with this if you have issues with the Hessian.
        return super(linear_MLE, self).fit(start_params=start_params, maxiter=maxiter, maxfun=maxfun, **kwds)
# --- Data --- #
n = 100
k = 2
error = np.random.randn(n)
heteroskedastic_error = np.append(error[:n//2], error[n//2:]*3)
HA_error = (np.append(heteroskedastic_error[-1:],heteroskedastic_error[:-1])+np.append(heteroskedastic_error[-2:],heteroskedastic_error[:-2]))/2
X = pd.DataFrame(np.append([[1]]*n,np.random.randn(k)*np.random.randn(n,k)+np.random.randn(k),axis=1))
y = pd.DataFrame(np.dot(X,np.random.randn(k+1))+HA_error)
# --- Models --- #
ols_results = sm.OLS(y,X).fit()
print(ols_results.summary())
ols_results = sm.OLS(y,X).fit(cov_type='HAC',cov_kwds={'maxlags':2})
print(ols_results.summary())
mle_results = linear_MLE(y,X).fit()
print(mle_results.summary())
mle_results = linear_MLE(y,X).fit(cov_type='HAC',cov_kwds={'maxlags':2})
print(mle_results.summary())
I am trying to deconvolve complex gas chromatogram signals into individual gaussian signals. Here is an example, where the dotted line represents the signal I am trying to deconvolve.
I was able to write the code to do this using scipy.optimize.curve_fit; however, once applied to real data the results were unreliable. I believe being able to set bounds to my parameters will improve my results, so I am attempting to use lmfit, which allows this. I am having a problem getting lmfit to work with a variable number of parameters. The signals I am working with may have an arbitrary number of underlying gaussian components, so the number of parameters I need will vary. I found some hints here, but still can't figure it out...
Creating a python lmfit Model with arbitrary number of parameters
Here is the code I am currently working with. The code will run, but the parameter estimates do not change when the model is fit. Does anyone know how I can get my model to work?
import numpy as np
from collections import OrderedDict
from scipy.stats import norm
from lmfit import Parameters, Model
def add_peaks(x_range, *pars):
    y = np.zeros(len(x_range))
    for i in np.arange(0, len(pars), 3):
        curve = norm.pdf(x_range, pars[i], pars[i+1]) * pars[i+2]
        y = y + curve
    return(y)
# generate some fake data
x_range = np.linspace(0, 100, 1000)
peaks = [50., 40., 60.]
a = norm.pdf(x_range, peaks[0], 5) * 2
b = norm.pdf(x_range, peaks[1], 1) * 0.1
c = norm.pdf(x_range, peaks[2], 1) * 0.1
fake = a + b + c
param_dict = OrderedDict()
for i in range(0, len(peaks)):
    param_dict['pk' + str(i)] = peaks[i]
    param_dict['wid' + str(i)] = 1.
    param_dict['mult' + str(i)] = 1.
# In case you'd like to see the plot of the fake data:
#import matplotlib.pyplot as plt
#y = add_peaks(x_range, *param_dict.values())
#plt.plot(x_range, y)
#plt.show()
# Initialize the model and fit
pmodel = Model(add_peaks)
params = pmodel.make_params()
for i in param_dict.keys():
    params.add(i, value=param_dict[i])
result = pmodel.fit(fake, params=params, x_range=x_range)
print(result.fit_report())
I think you would be better off using lmfit's ability to build a composite model.
That is, with a single peak defined with
from scipy.stats import norm
def peak(x, amp, center, sigma):
    return amp * norm.pdf(x, center, sigma)
(see also lmfit.models.GaussianModel), you can build a model with many peaks:
npeaks = 3
model = Model(peak, prefix='p1_')
for i in range(1, npeaks):
    model = model + Model(peak, prefix='p%d_' % (i+1))
params = model.make_params()
Now model will be a sum of 3 Gaussian functions, and the params created for that model will have names like p1_amp, p1_center, p2_amp, ..., to which you can attach sensible initial values and/or bounds and/or constraints.
Given your example data, you could pass in initial values to make_params like
params = model.make_params(p1_amp=2.0, p1_center=50., p1_sigma=2,
                           p2_amp=0.2, p2_center=40., p2_sigma=2,
                           p3_amp=0.2, p3_center=60., p3_sigma=2)
result = model.fit(fake, params, x=x_range)
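Since setting bounds was the original motivation for moving to lmfit, note that each of those parameters can also be constrained before the fit. A small sketch (the numeric limits here are just plausible guesses):

# Constrain selected parameters before fitting; the limits below are illustrative
params['p1_sigma'].set(value=2.0, min=0.1, max=10.0)    # keep the width positive and finite
params['p2_center'].set(value=40.0, min=35.0, max=45.0)
params['p3_amp'].set(value=0.2, min=0.0)                # amplitudes should not go negative

result = model.fit(fake, params, x=x_range)
print(result.fit_report())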
I was able to find a solution here:
https://lmfit.github.io/lmfit-py/builtin_models.html#example-3-fitting-multiple-peaks-and-using-prefixes
Building on the code above, the following accomplishes what I was trying to do...
import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel

gauss1 = GaussianModel(prefix='g1_')
gauss2 = GaussianModel(prefix='g2_')
gauss3 = GaussianModel(prefix='g3_')
gauss4 = GaussianModel(prefix='g4_')
gauss5 = GaussianModel(prefix='g5_')
gauss = [gauss1, gauss2, gauss3, gauss4, gauss5]
prefixes = ['g1_', 'g2_', 'g3_', 'g4_', 'g5_']
mod = np.sum(gauss[0:len(peaks)])
pars = mod.make_params()
for i, prefix in zip(range(0, len(peaks)), prefixes[0:len(peaks)]):
    pars[prefix + 'center'].set(peaks[i])
init = mod.eval(pars, x=x_range)
out = mod.fit(fake, pars, x=x_range)
print(out.fit_report(min_correl=0.5))
out.plot_fit()
plt.show()
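For a truly arbitrary number of peaks, the hard-coded gauss1...gauss5 list can be replaced with a loop; a sketch that assumes peaks, fake, and x_range from the code above:

from lmfit.models import GaussianModel

# Build one GaussianModel per expected peak and sum them into a composite model
mod = GaussianModel(prefix='g1_')
for i in range(1, len(peaks)):
    mod = mod + GaussianModel(prefix='g%d_' % (i + 1))

pars = mod.make_params()
for i, center in enumerate(peaks):
    pars['g%d_center' % (i + 1)].set(value=center)  # seed each center at a known peak

out = mod.fit(fake, pars, x=x_range)
print(out.fit_report(min_correl=0.5))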
I am using the PyGMO package for Python, for multi-objective optimisation. I am unable to fix the dimension of the fitness function in the constructor, and the documentation is not very descriptive either. I am wondering if anyone here has had experience with PyGMO in the past: this could be fairly simple.
I have tried to construct a minimal example below:
from PyGMO.problem import base
from PyGMO import algorithm, population
import numpy as np
import matplotlib.pyplot as plt
class my_problem(base):
    def __init__(self, fdim=2):
        NUM_PARAMS = 4
        super(my_problem, self).__init__(NUM_PARAMS)
        self.set_bounds(0.01, 100)

    def _objfun_impl(self, K):
        E1 = K[0] + K[2]
        E2 = K[1] + K[3]
        return (E1, E2, )
if __name__ == '__main__':
    prob = my_problem()  # Create the problem
    print(prob)
    algo = algorithm.sms_emoa(gen=100)
    pop = population(prob, 50)
    pop = algo.evolve(pop)
    F = np.array([ind.cur_f for ind in pop]).T
    plt.scatter(F[0], F[1])
    plt.xlabel("$E_1$")
    plt.ylabel("$E_2$")
    plt.show()
fdim=2 above is a failed attempt to set the fitness dimension. The code fails with the following error:
ValueError: ..\..\src\problem\base.cpp,584: fitness dimension was changed inside objfun_impl().
I'd be grateful if someone can help figure this out. Thanks!
Are you looking at the correct documentation?
There is no fdim (which anyway does nothing in your example since it is only a local variable and is not used). But there is n_obj:
n_obj: number of objectives. Defaults to 1
So, I think you want something like (corrected thanks to @Distopia):
#(...)
def __init__(self, fdim=2):
    NUM_PARAMS = 4
    super(my_problem, self).__init__(NUM_PARAMS, 0, fdim)
    self.set_bounds(0.01, 100)
#(...)
I modified their example and this seemed to work for me.
#(...)
def __init__(self, fdim=2):
    NUM_PARAMS = 4
    # We call the base constructor as a 'dim'-dimensional problem, with 0 integer parts and 2 objectives.
    super(my_problem, self).__init__(NUM_PARAMS, 0, fdim)
    self.set_bounds(0.01, 100)
#(...)
I am using rpy2 for regressions. The returned object is a list that includes coefficients, residuals, fitted values, the rank of the fitted model, etc.
However, I can't find the standard errors (nor the R^2) in the fit object. Running lm directly in R, the standard errors are displayed with the summary command, but I can't access them directly in the model's data frame.
How can I extract this info using rpy2?
Sample python code is
from scipy import random
from numpy import hstack, array, matrix
from rpy2 import robjects
from rpy2.robjects.packages import importr
def test_regress():
    stats = importr('stats')
    x = random.uniform(0, 1, 100).reshape([100, 1])
    y = 1 + x + random.uniform(0, 1, 100).reshape([100, 1])
    x_in_r = create_r_matrix(x, x.shape[1])
    y_in_r = create_r_matrix(y, y.shape[1])
    formula = robjects.Formula('y~x')
    env = formula.environment
    env['x'] = x_in_r
    env['y'] = y_in_r
    fit = stats.lm(formula)
    coeffs = array(fit[0])
    resids = array(fit[1])
    fitted_vals = array(fit[4])
    return (coeffs, resids, fitted_vals)

def create_r_matrix(py_array, ncols):
    if type(py_array) == type(matrix([1])) or type(py_array) == type(array([1])):
        py_array = py_array.tolist()
    r_vector = robjects.FloatVector(flatten_list(py_array))
    r_matrix = robjects.r['matrix'](r_vector, ncol=ncols)
    return r_matrix

def flatten_list(source):
    return([item for sublist in source for item in sublist])

test_regress()
So this seems to work for me:
base = importr('base')  # needed for base.summary() below

def test_regress():
    stats = importr('stats')
    x = random.uniform(0, 1, 100).reshape([100, 1])
    y = 1 + x + random.uniform(0, 1, 100).reshape([100, 1])
    x_in_r = create_r_matrix(x, x.shape[1])
    y_in_r = create_r_matrix(y, y.shape[1])
    formula = robjects.Formula('y~x')
    env = formula.environment
    env['x'] = x_in_r
    env['y'] = y_in_r
    fit = stats.lm(formula)
    coeffs = array(fit[0])
    resids = array(fit[1])
    fitted_vals = array(fit[4])
    modsum = base.summary(fit)
    rsquared = array(modsum[7])
    se = array(modsum.rx2('coefficients')[2:4])
    return (coeffs, resids, fitted_vals, rsquared, se)
Although, as I said, this is literally my first foray into RPy2, so there may be a better way to do that. But this version appears to output arrays containing the R-squared value along with the standard errors.
You can use print(modsum.names) to see the names of the components of the R object (kind of like names(modsum) in R) and then .rx and .rx2 are the equivalent of [ and [[ in R.
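For instance, continuing with the modsum object from the code above, the two accessors differ in what they wrap:

rsq = modsum.rx2('r.squared')      # like modsum[['r.squared']] in R: the component itself
rsq_list = modsum.rx('r.squared')  # like modsum['r.squared'] in R: a list holding the component
print(rsq[0])                      # a plain Python float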
@joran: Pretty good. I'd say that it is pretty much the way to do it.
from rpy2 import robjects
from rpy2.robjects.packages import importr
base = importr('base')
stats = importr('stats') # import only once !
def test_regress():
    x = base.matrix(stats.runif(100), nrow=100)
    y = (x.ro + base.matrix(stats.runif(100), nrow=100)).ro + 1  # not so nice
    formula = robjects.Formula('y~x')
    env = formula.environment
    env['x'] = x
    env['y'] = y
    fit = stats.lm(formula)
    coefs = stats.coef(fit)
    resids = stats.residuals(fit)
    fitted_vals = stats.fitted(fit)
    modsum = base.summary(fit)
    rsquared = modsum.rx2('r.squared')
    se = modsum.rx2('coefficients')[2:4]
    return (coefs, resids, fitted_vals, rsquared, se)
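If plain NumPy values are wanted on the Python side, the R vectors returned above convert directly; a short sketch using the objects from test_regress():

import numpy as np

coefs, resids, fitted_vals, rsquared, se = test_regress()
coefs_np = np.array(coefs)   # R numeric vector -> 1-D ndarray
rsq_val = rsquared[0]        # length-1 R vector -> plain Python float
se_np = np.array(se)         # extracted slice -> ndarray
print(coefs_np, rsq_val, se_np)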
I have a code in Python that draws wave functions and energy for different potentials:
# -*- coding: cp1250 -*-
from math import *
from scipy.special import *
from pylab import *
from scipy.linalg import *
firebrick=(178./255.,34./255.,34./255.)
indianred=(176./255.,23./255.,31./255.)
steelblue=(70./255.,130./255.,180./255.)
slategray1=(198./255.,226./255.,255./255.)
slategray4=(108./255.,123./255.,139./255.)
lavender=(230./255.,230./255.,230./255.)
cobalt=(61./255.,89./255.,171./255.)
midnightblue=(25./255.,25./255.,112./255.)
forestgreen=(34./255.,139./255.,34./255.)
#definiranje mreze
Nmesh=512
L=4.0
dx=L/Nmesh
Xmax=L
x=arange(-L,L+0.0001,dx)
Npts=len(x)
numwav=0 #redni broj valne funkcije koji se iscrtava
V=zeros([Npts],float)
for i in range(Npts):
V[i]=x[i]**50
a=zeros([2,Npts-2],float)
wave=zeros([Npts],float)
wave1=zeros([Npts],float)
encor=3.0/4*(3.0/4)**(1.0/3)
#numericko rjesenje
for i in range(1,Npts-1,1):
a[0,i-1]= 1.0/dx**2+V[i] #dijagonalni elementi
a[1,i-1]=-1.0/dx**2/2 #elementi ispod dijagonale
a[1,Npts-3]=-99.0 #element se ne koristi
eig,vec=eig_banded(a,lower=1) #rutina koja dijagonalizira tridijagonalnu matricu
for i in range(1,Npts-1,1):
wave[i]=vec[i-1,numwav]
wave[0]=0.0 #valna funkcija u prvoj tocki na mrezi ima vrijednost nula
wave[Npts-1]=0.0 #valna funkcija u zadnjoj tocki na mrezi ima vrijednost nula
for i in range(1,Npts-1,1):
wave1[i]=(2.0/pi*(3.0/4)**(1.0/3))**0.25*exp(-(3.0/4)**(1.0/3)*x[i]**2)
wave1[0]=0.0 #valna funkcija u prvoj tocki na mrezi ima vrijednost nula
#wave1[Npts-1]=0.0 #valna funkcija u zadnjoj tocki na mrezi ima vrijednost nula
#wave1=omjer*150*wave1+encor
wave=150*wave+eig[numwav]
#graf potencijala
line=plt.plot(x,V)
plt.setp(line,color='firebrick',linewidth=2)
#crtanje odabranog nivoa i odgovarajuce valne funkcije
plt.axhline(y=eig[numwav],linewidth=2,color='steelblue')
#plt.axhline(y=encor,linewidth=2,color='midnightblue')
#crtanje tocaka valne funkcije
plt.plot(x,wave,"b-",linewidth=2,color='forestgreen')
#plt.plot(x,wave1,"-",linewidth=2,color='indianred')
plt.xlabel(r'$x$',size=14)
plt.ylabel(r'$V(x)$',size=14)
plt.title(r'Valna funkcija i energija 3. pobuđenog stanja za $V(x)=x^{50}$')
plt.axis([-3.0,3.0,-8.0,100.0]) #raspon x i y osi
plt.grid(True)
plt.legend((r'$V(x)$',r'$E_0$',r'$\psi_0$'))
plt.show()
Ignore the commented-out lines; they are not important for this case.
Anyhow, I have a problem. If I draw the potentials (the V part) for powers up to, say, x^20, it draws nicely, like this for x^6:
If I use the potential x^50, it becomes this:
So what seems to be the problem? Why is it making such a big mistake? The curve should be smooth, and from theory, as I take V(x)=x^p for very large p (p → ∞), the potential should approach the famous infinite square well, which looks like this:
So I suspect that for steeper potentials I need more points to draw them over the given range. Should I just increase Nmesh (the grid size)? Npts=len(x) is the number of points being used. Am I right? That seems logical, but I want to be certain.
Thanks for any advice and help
EDIT: I tried increasing Nmesh, but at very large values I either get an error that the grid is too big, or I run into memory problems.
If I take, say, 2048, I get the same picture, just shifted a bit and narrower.
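One way to test the resolution suspicion without enlarging the eigenvalue mesh (and hitting those memory limits) is to redraw just the potential on a much finer grid; a quick sketch:

import numpy as np
import matplotlib.pyplot as plt

# Compare V(x) = x**50 sampled on the solver mesh (dx = L/Nmesh = 4/512)
# against a fine grid used only for plotting
coarse = np.arange(-4.0, 4.0001, 4.0/512)
fine = np.linspace(-4.0, 4.0, 20000)

plt.plot(coarse, coarse**50, label='solver mesh (dx = 4/512)')
plt.plot(fine, fine**50, label='20000-point grid (plot only)')
plt.axis([-3.0, 3.0, -8.0, 100.0])
plt.legend()
plt.show()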
Use the select argument of eig_banded():
#!/usr/bin/env python
from __future__ import division
import functools
import math
import sys
from timeit import default_timer as timer

import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as linalg

def report_time(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = timer()
        try:
            return func(*args, **kwargs)
        finally:
            print '%s takes %.2f seconds' % (func.__name__, timer()-start)
    return wrapper

@report_time
def calc(Nmesh, POWER, L, numwav=0):
    dx = L/Nmesh
    x = np.arange(-L, L+0.0001, dx)
    Npts = len(x)
    V = x**POWER

    ai = np.empty((2, Npts))  # ai[:,i] = a[:,i-1]
    ai[0, :] = 1/dx**2 + V
    ai[1, :] = -1.0/dx**2/2
    ai[1, Npts-2] = -99.0
    a = ai[:, 1:-1]
    f = report_time(linalg.eig_banded)
    eig, vec = f(a, lower=True, overwrite_a_band=True,
                 select='i', select_range=(0, numwav))

    wave = np.empty(Npts)
    wave[1:-1] = vec[:, numwav]
    wave[0] = 0
    wave[Npts-1] = 0
    wave = 150*wave + eig[numwav]
    return x, V, wave, eig[numwav]

def main():
    try:
        numwav = int(sys.argv[1])
    except (IndexError, ValueError):
        numwav = 0
    POWER = 100
    L = 4.0
    Nmesh = 512
    print 'Nmesh=%d' % Nmesh
    x, V, wave, y = calc(Nmesh, POWER, L, numwav)

    line = plt.plot(x, V)
    plt.setp(line, color='firebrick', linewidth=2)

    plt.plot(x, wave, "b-", linewidth=2, color='forestgreen')

    plt.axhline(y=y, linewidth=2, color='steelblue')
    plt.xlabel(r'$x$', size=14)
    plt.ylabel(r'$V(x)$', size=14)
    plt.title(r'$V(x)=x^{%d}$, ' % POWER)
    plt.axis([-(abs(L)-1), abs(L)-1, min(min(wave), y, min(V))-1, max(max(wave), y)+1])
    plt.grid(True)
    plt.legend((r'$V(x)$', r'$E_%d$' % numwav, r'$\psi_%d$' % numwav))
    plt.savefig('V_%d_%d_%d.png' % (Nmesh, POWER, numwav))
    plt.show()

if __name__ == "__main__":
    main()
Output
Nmesh=512
eig_banded takes 0.01 seconds
calc takes 0.01 seconds
Here's a variant with numwav=4: