I am using the PyGMO package for Python for multi-objective optimisation. I am unable to set the dimension of the fitness function in the constructor, and the documentation is not very descriptive either. I am wondering if anyone here has experience with PyGMO: this could be fairly simple.
I have tried to construct a minimal example below:
from PyGMO.problem import base
from PyGMO import algorithm, population
import numpy as np
import matplotlib.pyplot as plt
class my_problem(base):
    def __init__(self, fdim=2):
        NUM_PARAMS = 4
        super(my_problem, self).__init__(NUM_PARAMS)
        self.set_bounds(0.01, 100)

    def _objfun_impl(self, K):
        E1 = K[0] + K[2]
        E2 = K[1] + K[3]
        return (E1, E2, )
if __name__ == '__main__':
    prob = my_problem()  # Create the problem
    print(prob)
    algo = algorithm.sms_emoa(gen=100)
    pop = population(prob, 50)
    pop = algo.evolve(pop)

    F = np.array([ind.cur_f for ind in pop]).T
    plt.scatter(F[0], F[1])
    plt.xlabel("$E_1$")
    plt.ylabel("$E_2$")
    plt.show()
fdim=2 above is a failed attempt to set the fitness dimension. The code fails with the following error:
ValueError: ..\..\src\problem\base.cpp,584: fitness dimension was changed inside objfun_impl().
I'd be grateful if someone can help figure this out. Thanks!
Are you looking at the correct documentation?
There is no fdim (which in any case does nothing in your example, since the argument is never used). But there is n_obj:
n_obj: number of objectives. Defaults to 1
So, I think you want something like (corrected thanks to @Distopia):
#(...)
    def __init__(self, fdim=2):
        NUM_PARAMS = 4
        super(my_problem, self).__init__(NUM_PARAMS, 0, fdim)
        self.set_bounds(0.01, 100)
#(...)
I modified their example and this seemed to work for me.
#(...)
    def __init__(self, fdim=2):
        NUM_PARAMS = 4
        # Call the base constructor for a NUM_PARAMS-dimensional problem,
        # with 0 integer parts and fdim (= 2) objectives.
        super(my_problem, self).__init__(NUM_PARAMS, 0, fdim)
        self.set_bounds(0.01, 100)
#(...)
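With the three-argument call, the base constructor is told up front that the problem has two objectives, so returning a 2-tuple from _objfun_impl no longer conflicts with the declared fitness dimension. As a quick sanity check (assuming PyGMO's usual problem interface; f_dimension is the property name I believe the base problem exposes):

prob = my_problem()
print(prob.f_dimension)  # should now report 2 instead of the default 1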
Sorry to bother you with this, but I have a serious issue and I am now on the clock to solve it, so here is my question.
When I lambdify a quantity, the result differs from the .subs result, and sometimes it is way off, or it is a NaN where in reality there is a real number (found by subs).
Below is a small MWE where you can see the issue. Thanks in advance for your time!
import sympy as sy
import numpy as np
##STACK
# some quantities needed before you see the problem
r = sy.Symbol('r', real=True)
th = sy.Symbol('th', real=True)
e_c = 1e51
lf0 = 100
A = 1.6726e-24
# here are some quantities I define to get to the problem
lfac = lf0+2
rd = 4*3.14/4/sy.pi/A/lfac**2
xi = r/rd #rescaled r
#now to the problem:
#QUANTITY
lfxi = xi**(-3)*(lfac+1)/2*(sy.sqrt( 1 + 4*lfac/(lfac+1)*xi**(3) + (2*xi**(3)/(lfac+1))**2) -1)
#RESULT WITH SUBS
print(lfxi.subs({th:1.00,r:1.00}).evalf())
#RESULT WITH LAMBDIFY
lfxi_l = sy.lambdify((r,th),lfxi)
lfxi_l(0.01,1.00)
##gives 0
The issue is that your mpmath precision needs to be set higher!
By default mpmath uses prec=53 and dps=15, but your expression requires much higher resolution than this:
# print(lfxi)
3.0256512324559e+62*(sqrt(1.09235114769539e-125*pi**6*r**6 + 6.74235013645028e-61*pi**3*r**3 + 1) - 1)/(pi**3*r**3)
...
from mpmath import mp
lfxi_l = sy.lambdify((r,th),lfxi, modules=["mpmath"])
mp.dps = 125
print(lfxi_l(1.00,1.00))
# 101.999... result
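Why the default evaluation returns 0: without a modules argument, lambdify evaluates in double precision, and at r = 0.01 the r**3 terms under the square root are of order 1e-65 (reading the coefficients off the printed expression above), far below float64 machine epsilon (~2.2e-16). So 1 + eps rounds to exactly 1.0, the square root minus 1 gives exactly 0, and the huge prefactor multiplies zero. A minimal sketch of the cancellation:

import numpy as np
eps = 2e-65  # rough size of the terms under the sqrt at r = 0.01
print(np.sqrt(1 + eps) - 1)  # 0.0: 1 + eps rounds to exactly 1.0 in float64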
Changing a couple of the constants to "modest" values:
In [89]: e_c=1; A=1
The different methods produce essentially the same thing:
In [91]: lfxi.subs({th:1.00,r:1.00}).evalf()
Out[91]: 1.00000000461176
In [92]: lfxi_l = sy.lambdify((r,th),lfxi)
In [93]: lfxi_l(1.0,1.00)
Out[93]: 1.000000004611762
In [94]: lfxi_m = sy.lambdify((r,th),lfxi, modules=["mpmath"])
In [95]: lfxi_m(1.0,1.00)
Out[95]: mpf('1.0000000046117619')
I am new to machine learning, following the textbook Python Machine Learning and an online course on Coursera. I am trying to implement the single-perceptron algorithm on the standard iris dataset restricted to two classes ('setosa' and 'versicolor'), but the error function is not converging. Here is my code:
import numpy as np
from sklearn import datasets
import matplotlib.pyplot as plt
class perceptron(object):
    def __init__(self, a, iter):
        self.a = a
        self.iter = iter

    def fit(self, x, y):
        self.w_ = np.zeros(1 + x.shape[1])
        self.errors_ = []
        for i in range(self.iter):
            errors = 0
            for xi, target in zip(x, y):
                update = self.a * (target - self.predict(xi))
                self.w_[1:] = xi * update
                self.w_[0] = update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        print(self.errors_)
        return self

    def net_input(self, x):
        return np.dot(x, self.w_[1:])

    def predict(self, x):
        return np.where(self.net_input(x) >= 0.0, 1, -1)

iris = datasets.load_iris()
x = iris.data[:100, :2]
y = iris.target
y = np.where(y == 0, -1, 1)
ppn = perceptron(a=0.01, iter=10)
ppn.fit(x, y)
plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='_')
plt.xlabel('epochs')
plt.ylabel('number of classification')
plt.show()
The number of misclassifications (errors) remains the same in every iteration.
These lines are wrong:
self.w_[1:]=xi*update
self.w_[0]=update
Change them to:
self.w_[1:] += update * xi
self.w_[0] += update
It also looks like your net_input implementation is wrong:
def net_input(self, x):
    return np.dot(x, self.w_[1:])
Should be:
    return np.dot(x, self.w_[1:]) + self.w_[0]
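Putting both fixes together, the fit method would look like this (a sketch reusing the names from the question, with the rest of the class unchanged):

def fit(self, x, y):
    self.w_ = np.zeros(1 + x.shape[1])
    self.errors_ = []
    for _ in range(self.iter):
        errors = 0
        for xi, target in zip(x, y):
            update = self.a * (target - self.predict(xi))
            self.w_[1:] += update * xi  # accumulate instead of overwrite
            self.w_[0] += update        # the bias is updated, not replaced
            errors += int(update != 0.0)
        self.errors_.append(errors)
    return self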
You can see the full implementation on my github
Let me know if that doesn't solve your problem.
As far as I can tell, my code is written correctly, but it is not giving me the correct solution (plot). When I solved the same system of ODEs in Mathematica I got the correct solution, and the two solutions are totally different. I am writing a research project, so I need working Python code. Could you please point out the mistake in my code?
[Plot: Python solution]
[Plot: Mathematica solution]
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as si
## Three-equation system
def func(state, T):
    H = state[0]
    P = state[1]
    R = state[2]
    Hd = -(16./3.)*np.pi*P
    Pd = -4.*H*P
    Rd = H*R
    return Hd, Pd, Rd
T = np.linspace(0.1,0.9,50)
state0 = [1,0.0001, 0.1]
s = si.odeint(func, state0, T)
h = np.transpose(s)
plt.plot(T,h[0])
plt.show()
Mathematica code
Clear[H,\[Rho],a]
Eq1=(H'[t] == -16 \[Pi] \[Rho][t]/3)
Eq2= (\[Rho]'[t] == -4 H[t] \[Rho][t])
Eq3 = (a'[t] == H[t] a[t])
sol=NDSolve[{Eq1,Eq2, Eq3,
H[0.1]==0.1, \[Rho][0.1]==0.1, a[0.1]==0.1},
{H[t],\[Rho][t],a[t]}, {t,0.1, 0.9}]
Plot[Evaluate[{H[t]}/.sol],{t,0.1,0.9}]
Both codes are correct. I just turned my laptop off and on again, and now Python gives me the correct result (the same as Mathematica).
I'm trying to solve an overdetermined system of equations with three unknowns. I'm able to get a solution with fsolve and lsqnonlin in MATLAB by assembling the system of equations in a for loop.
But in Python using scipy, I'm getting the following error message:
fsolve: there is a mismatch between the input and output shape of the 'func' argument 'fnz'
The code is given below:
from xlrd import open_workbook
import numpy as np
from scipy import optimize
g = [0.5,1,1.5]
wb = open_workbook('EThetaValuesA.xlsx')
sheet=wb.sheet_by_index(0)
y=sheet.col_values(0,1)
t1=sheet.col_values(1,1)
t2=sheet.col_values(2,1)
t3=sheet.col_values(3,1)
def fnz(g):
    sol = [0 for i in range(len(t1))]
    x1 = g[0]
    x2 = g[1]
    x3 = g[2]
    print len(t1)
    for i in range(len(t1)):
        # various sets of t1, t2 and t3 give the various eqns
        print i
        sol[i] = x1 + t1[i]/(x2*t2[i] + x3*t3[i]) - y[i]
    return sol
Anz = optimize.fsolve(fnz,g)
print Anz
Could anyone please suggest where I'm wrong? Thank you in advance.
The exception means that the result of the fnz() function call does not have the same dimension as the input g, which is a list of 3 elements, i.e. an array of shape (3,).
To illustrate the problem, if we define:

def fnz(g):
    return [2, 3, 5]
Anz = optimize.fsolve(fnz, g)

there will be no such exception. But this will raise one:

def fnz(g):
    return [2, 3, 4, 5]
Anz = optimize.fsolve(fnz, g)
The result from fnz() should have the same length as t1, which I am sure is longer than 3 elements.
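Also, since the system is overdetermined (many more residuals than the three unknowns), the closer SciPy analogue of MATLAB's lsqnonlin is scipy.optimize.leastsq, which accepts a residual vector longer than the parameter vector. A minimal sketch reusing fnz and g from the question:

from scipy import optimize
# leastsq minimises the sum of squared residuals; unlike fsolve it does not
# require the residual vector to have the same shape as the parameter vector
Anz, ier = optimize.leastsq(fnz, g)
print Anz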
I'm having two issues with attempting to define my own class. First, the most basic issue is that if I write a python script and try to import it into a second script, the import fails (the scripts are in the same directory). For example, I wrote a script called first.py:
def foo(): print("foo")
If I try to import this into a second script, I get 'no module found'
import first
first.foo()
ImportError: No module named first
Second, I wrote a script that defines a class for non-linear regression. The script imports the modules within the class. However, I'm also required to import the modules OUTSIDE of the class as well. The script won't work if the modules aren't imported both inside and outside of the class definition:
class NLS:
    ''' This provides a wrapper for scipy.optimize.leastsq to get the relevant output for nonlinear least squares.
    Although scipy provides curve_fit for that reason, curve_fit only returns parameter estimates and covariances.
    This wrapper returns numerous statistics and diagnostics.'''

    # IMPORT THE MODULES THE FIRST TIME - WILL NOT RUN WITHOUT THESE
    import numpy as np
    from scipy.optimize import leastsq
    import scipy.stats as spst

    def __init__(self, func, p0, xdata, ydata):
        # Check the data
        if len(xdata) != len(ydata):
            msg = 'The number of observations does not match the number of rows for the predictors'
            raise ValueError(msg)
        # Check parameter estimates
        if type(p0) != dict:
            msg = "Initial parameter estimates (p0) must be a dictionary of form p0={'a':1, 'b':2, etc}"
            raise ValueError(msg)
        self.func = func
        self.inits = p0.values()
        self.xdata = xdata
        self.ydata = ydata
        self.nobs = len( ydata )
        self.nparm = len( self.inits )
        self.parmNames = p0.keys()
        for i in range( len(self.parmNames) ):
            if len(self.parmNames[i]) > 5:
                self.parmNames[i] = self.parmNames[i][0:4]
        # Run the model
        self.mod1 = leastsq(self.func, self.inits, args=(self.xdata, self.ydata), full_output=1)
        # Get the parameters
        self.parmEsts = np.round( self.mod1[0], 4 )
        # Get the error variance and standard deviation
        self.RSS = np.sum( self.mod1[2]['fvec']**2 )
        self.df = self.nobs - self.nparm
        self.MSE = self.RSS / self.df
        self.RMSE = np.sqrt( self.MSE )
        # Get the covariance matrix
        self.cov = self.MSE * self.mod1[1]
        # Get parameter standard errors
        self.parmSE = np.diag( np.sqrt( self.cov ) )
        # Calculate the t-values
        self.tvals = self.parmEsts/self.parmSE
        # Get p-values
        self.pvals = (1 - spst.t.cdf( np.abs(self.tvals), self.df))*2
        # Get biased variance (MLE) and calculate log-likelihood
        self.s2b = self.RSS / self.nobs
        self.logLik = -self.nobs/2 * np.log(2*np.pi) - self.nobs/2 * np.log(self.s2b) - 1/(2*self.s2b) * self.RSS
        del(self.mod1)
        del(self.s2b)
        del(self.inits)

    # Get AIC. Add 1 to the df to account for estimation of standard error
    def AIC(self, k=2):
        return -2*self.logLik + k*(self.nparm + 1)

    del(np)
    del(leastsq)

    # Print the summary
    def summary(self):
        print
        print 'Non-linear least squares'
        print 'Model: ' + self.func.func_name
        print 'Parameters:'
        print " Estimate Std. Error t-value P(>|t|)"
        for i in range( len(self.parmNames) ):
            print "% -5s % 5.4f % 5.4f % 5.4f % 5.4f" % tuple( [self.parmNames[i], self.parmEsts[i], self.parmSE[i], self.tvals[i], self.pvals[i]] )
        print
        print 'Residual Standard Error: % 5.4f' % self.RMSE
        print 'Df: %i' % self.df

## EXAMPLE USAGE
import pandas as pd
# IMPORT THE MODULES A SECOND TIME. WILL NOT RUN WITHOUT THESE
import numpy as np
from scipy.optimize import leastsq
import scipy.stats as spst

# Import data into dataframe
respData = pd.read_csv('/Users/Nate/Documents/FIU/Research/MTE_Urchins/Data/respiration.csv')
# Standardize to 24 h
respData['respDaily'] = respData['C_Resp_Mass'] * 24
# Create the Arrhenius temperature
respData['Ar'] = -1 / (8.617 * 10**-5 * (respData['Temp']+273))

# Define the likelihood null model
def nullMod(params, mass, yObs):
    a = params[0]
    c = params[1]
    yHat = a*mass**c
    err = yObs - yHat
    return(err)

p0 = {'a':1, 'b':1}
tMod = NLS(nullMod, p0, respData['UrchinMass'], respData['respDaily'])
tMod.summary()
tMod.AIC()
tMod.logLik
These problems are related because I try to import this class into another script and I can't (as in the first problem). Can anyone tell me what's going on?
Update
I just started being able to import scripts. Whatever that funky clean on start path was appears to have finally been deleted somehow. No idea what that was. However, I still don't understand why, if I import the necessary modules in my class definition, I MUST import them in my other scripts as well. It seems to me that if my class imports the modules, I shouldn't need to import them globally as well. Is this correct?
I think the first comment from zhangxaochen is the best starting place for problem number 1. sys.path should contain all the paths that Python searches when you try to import a module. These are the steps I'd go through to solve this one:
Make sure os.getcwd() returns the same directory as os.path.dirname(sys.argv[0])
Next I'd make sure that that path is in sys.path list.
If both those check out....
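For reference, both checks fit in a few lines (a minimal sketch):

import sys, os
print(os.getcwd())                                    # step 1: the working directory...
print(os.path.dirname(os.path.abspath(sys.argv[0])))  # ...should match the script's directory
print(sys.path)                                       # step 2: that path should be in this list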