I have recently started doing some OR and have been trying to use Pyomo and NEOS for some optimization problems. I have been following along with one of the UT Austin Pyomo lectures, and when GLPK was being difficult to install, I moved on to NEOS. I am now having some difficulty receiving a solved answer from NEOS.
What I have so far is this:
from pyomo import environ as pe
import os
os.environ['NEOS_EMAIL'] = 'my registered email'
model = pe.ConcreteModel()
model.x1 = pe.Var(domain=pe.Binary)
model.x2 = pe.Var(domain=pe.Binary)
model.x3 = pe.Var(domain=pe.Binary)
model.x4 = pe.Var(domain=pe.Binary)
model.x5 = pe.Var(domain=pe.Binary)
obj_expr = 3 * model.x1 + 4 * model.x2 + 5 * model.x3 + 8 * model.x4 + 9 * model.x5
model.obj = pe.Objective(sense=pe.maximize, expr=obj_expr)
con_expr = 2 * model.x1 + 3 * model.x2 + 4 * model.x3 + 5 * model.x4 + 9 * model.x5 <= 20
model.con = pe.Constraint(expr=con_expr)
solver_manager = pe.SolverManagerFactory('neos')
results = solver_manager.solve(model, solver = "minos")
print(results)
What I receive in return is number of solutions = 0, while I know for a fact that one exists. I also see that I don't have any bounds set, so how would I go about doing that? Once again, I am very new to this and have not been able to find any documentation about this elsewhere, or perhaps I just don't know how to look.
Thanks for any help!
This is a "problem" with the design of the current results object. For historical reasons, that field reports the number of solutions contained in the results object and is not the number of solutions generated by the solver. By default, Pyomo solvers directly load the solution returned by the solver into the original model (both for convenience and efficiency) and do not return it in the results object. You can change that behavior by providing load_solutions=False to the solve() call.
As for the bounds, which bounds are you referring to? Variable bounds are set using either the bounds= argument or the domain= argument to the Var() declaration. For your example, because the variables are declared to be Binary, they all have bounds of [0..1]. Bounds on the objective are gathered by parsing the solver output. This depends on both the solver that you are using (many do not report bounds information) and the interface used to parse the solver results.
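For example, a continuous variable with explicit bounds could be declared like this (the variable names here are just for illustration):
model.y = pe.Var(bounds=(0, 10))              # explicit lower/upper bounds via bounds=
model.z = pe.Var(domain=pe.NonNegativeReals)  # bounds implied by the domain, here [0, inf)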
As a final note, you are sending a MIP problem to an LP/NLP solver (minos). You will get fractional values for your binary variables back from the solver.
To retrieve the solution from the model, you can use something like:
print(model.x1.value, model.x2.value, model.x3.value, model.x4.value, model.x5.value)
And using solver="cbc" you can avoid fractional values in this example.
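Putting those pieces together, a minimal sketch of the full round trip (reusing the model and solver_manager from above, and assuming NEOS returns a solution) would be:
results = solver_manager.solve(model, solver="cbc", load_solutions=False)
if len(results.solution) > 0:
    # a solution came back from NEOS; copy it into the model before reading values
    model.solutions.load_from(results)
    print(pe.value(model.obj))
    print(model.x1.value, model.x2.value, model.x3.value, model.x4.value, model.x5.value)
else:
    print("NEOS returned no solution")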
My objective is to create a hydropower generation model and identify the optimal water-level dispatching process. However, I'm encountering errors when attempting to call the Gurobi solver through Pyomo. Although I have only recently learned Pyomo, I have made several attempts to resolve the errors without success. There are no syntax errors in my code, yet I'm still quite confused by the issue:
ValueError: More than one active objective defined for input model 'unknown'; Cannot write legal LP file
Objectives: obj[1] obj[2]
I think there may be an issue with this section of the code:
def object_rule(model, a):
    return K * model.Q[a] * (model.H[a] + model.H[a + 1]) / 2 * 8760 * 3600 / 1000
#K is the coefficient, Q is the flow rate, H is the head, and a represents the time period
and
model.obj = Objective(model.T, rule=object_rule, sense=maximize)
This is my second time editing this question. I have taken into account the suggestions offered by many experts and have come to the realization that I still need to learn and make improvements.
You're seeing the error message More than one active objective defined because you have an indexed objective function. Most solvers can only handle a single objective at a time. I'm guessing the fix in your case is to sum over model.T in your object_rule, something like:
def object_rule(model):
    return sum(K * model.Q[a] * (model.H[a] + model.H[a + 1]) / 2 * 8760 * 3600 / 1000 for a in model.T)
model.obj = Objective(rule=object_rule, sense=maximize)
This question is related to my previous question found here. I have managed to solve this problem (big thanks to #AirSquid!) My objective function is something like:
So the avgPrice_n variable is indexed by n. However, it is actually defined as
Meaning that it is indexed by n and i.
So at the moment my objective function is very messy, as I have three sums. It looks something like this (I expanded the brackets in the objective function and added each component separately, so the avgPrice_n*demand_n term looks like):
expr += sum(sum(sum((1/12)*model.c[i]*model.allocation[i,n] for i in model.MP[t]) for t in model.M)*model.demand_n[n] for n in model.N)
And while this works, debugging was quite difficult because the terms are very long. So instead of using the actual definition of avgPrice_n, I was wondering if it would be possible to create an avgPrice_n variable, use this in the objective function, and then create a constraint where I define avgPrice_n as I showed above.
The issue I am having is that I created my decision variable, x_{i,n}, as a variable, but apparently I can't create an avgPrice_n variable indexed by x_{i,n}, as this results in a TypeError: Cannot apply a Set operator to an indexed Var component (allocation) error.
So as of now my decision variable looks like:
model.x = Var(model.NP_flat, domain = NonNegativeReals)
And I tried to create:
model.avg_Price = Var(model.x, domain = NonNegativeReals)
Which resulted in the above error. Any ideas or suggestions would be much appreciated!
You have a couple of options. Realize that you do not need the model.avg_price variable, because you can construct its value from other variables; besides, you would have to add constraints to pin down its value, which pollutes your model.
The basic building blocks in the model are Pyomo expressions, so you could write a little "helper function" to build expressions (like the cost function shown below, which depends on n). Such a function is not defined within the model; it just pops out an expression, which is totally legal. You can also "break up" large expressions into smaller ones (like other_stuff below) and then combine them in the objective (or wherever needed); this gives you the opportunity to evaluate them independently. I've made several models whose objective has a "cost" component and a "penalty" component by dividing it into two expressions; then, when solved, you can inspect them independently.
My suggestion (if you don't like the triple sum in your current model) is to make an avg_cost(n) function to build the expression, similar to what is done in the nonsensical function below, and use that as a substitute for a new variable.
Note: the initialization of the variables here is generally unnecessary. I just did it to "simulate solving"; otherwise their values would be None...
Code:
import pyomo.environ as pyo
m = pyo.ConcreteModel()
m.N = pyo.Set(initialize=[0,1,2])
m.x = pyo.Var(m.N, initialize = 2.0)
def cost(n):
    return m.x[n] + 2*m.x[n+1]
m.other_stuff = 3 * m.x[1] + 4 * m.x[2]
m.costs = sum(cost(n) for n in {0,1})
m.obj_expr = m.costs + m.other_stuff
m.obj = pyo.Objective(expr= m.obj_expr)
# inspect cost at a particular value of n...
print(cost(1))
print(pyo.value(cost(1)))
# inspect the pyomo expressions "other_stuff" and total costs...
print(m.other_stuff)
print(pyo.value(m.other_stuff))
print(m.costs)
print(pyo.value(m.costs))
# inspect the objective... which can be accessed by pprint() and display()
m.obj.pprint()
m.obj.display()
Output:
x[1] + 2*x[2]
6.0
3*x[1] + 4*x[2]
14.0
12.0
obj : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : minimize : x[0] + 2*x[1] + x[1] + 2*x[2] + 3*x[1] + 4*x[2]
obj : Size=1, Index=None, Active=True
Key : Active : Value
None : True : 26.0
I have a cplex/docplex model with an "active risk" term. I believe I'm messing up the mix of Pandas and DocPlex, but I'm worried I'm trying to do something impossible.
The term should just be the quadratic form (Target - Optimal)^T Σ (Target - Optimal).
from docplex.mp.advmodel import AdvModel
from numpy import identity
from pandas import Series, DataFrame
model = AdvModel()
assets = ['AAA', 'BBB', 'CCC', 'DDD']
optimal = Series(1 / 4, assets)
covariances = DataFrame(identity(4) * 0.10, index=assets, columns=assets)
target = Series(model.continuous_var_list(assets, name='Target', lb=0, ub=1), index=assets)
active_risk = model.quad_matrix_sum(covariances, target - optimal) / 2
print(active_risk)
giving the error
AttributeError: 'LinearExpr' object has no attribute '_index'
Interestingly, something like the following works, so I could shift all the variables to be the difference, but I'm trying to avoid this if possible, as it would make other complicated terms in the optimization less clear.
# lb, ub are complicated now
difference = Series(model.continuous_var_list(assets, name='Target', lb=lb, ub=ub), index=assets)
model.quad_matrix_sum(covariances, difference) / 2
The problem comes from the conjunction of two things:
Model.quad_matrix_sum expects variables, not expressions, as indicated in the documentation.
For performance reasons, the AdvModel class disables type-checking of arguments, but this can be re-enabled.
Re-enabling type-checking for AdvModel (e.g. by calling AdvModel(checker='on')) yields the right error message:
docplex.mp.utils.DOcplexException: Expecting an iterable returning variables, docplex.mp.LinearExpr(Target_AAA-0.250) was passed at position 0
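For reference, turning the checker back on is just a constructor argument (same imports as above):
model = AdvModel(checker='on')  # re-enables argument type-checking, at some performance cost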
To compute a quadratic form over expressions, use Model.sum() as in:
#active_risk = model.quad_matrix_sum(covariances, target - optimal) / 2
size = len(assets)
active_risk = model.sum(covariances.iloc[i, j] * (target[i] - optimal[i]) * (target[j] - optimal[j])
                        for i in range(size) for j in range(size))
print(active_risk)
which yields
0.100Target_AAA^2+0.100Target_BBB^2+0.100Target_CCC^2+0.100Target_DDD^2-0.050Target_AAA-0.050Target_BBB-0.050Target_CCC-0.050Target_DDD+0.025
Hopefully someone can help me out. I am developing an optimization model where I minimize electricity costs over time (t) and over different transactions (s) (where standard power (p) * cost of electricity (c) = electricity costs).
Now I am trying to implement a cost component in the objective function that is based on the maximum power consumption that occurred (something like max(P[s,t])). However, np.max() returns an error because P[s,t] is an unsupported class for np.max(). The Gurobi function gp.max_(P[s,t]) also gives an unsupported-class error. Does anyone have a solution?
Code:
obj = gp.quicksum(p[s, t] * Cost_elect[t] for t in range(T) for s in range(S)) + gp.max_(p_batt_ch[s, t] * fixed_cost for t in range(T) for s in range(S))
You need to assign the max constraint to a new auxiliary variable and then put this variable into the objective instead of the actual constraint.
maxobj = model.addVar()
max_constr = model.addConstr(maxobj == gp.max_(p_batt_ch[s, t] * fixed_cost
                                               for t in range(T) for s in range(S)))
obj = gp.quicksum(p[s, t] * Cost_elect[t] for t in range(T) for s in range(S)) + maxobj
Gurobi documentation
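Since gp.max_() is a general-constraint helper that expects variables (or constants) rather than arbitrary expressions, an alternative sketch, assuming fixed_cost is a nonnegative constant and p_batt_ch holds Gurobi variables from the question's model, is to take the max over the variables themselves and scale afterwards:
import gurobipy as gp
from gurobipy import GRB

# auxiliary variable equal to the largest charging power over all s, t
peak = model.addVar(name="peak")
model.addConstr(peak == gp.max_([p_batt_ch[s, t] for t in range(T) for s in range(S)]))

# energy cost plus the peak term; max(c*p) == c*max(p) only because fixed_cost >= 0
obj = gp.quicksum(p[s, t] * Cost_elect[t] for t in range(T) for s in range(S)) + fixed_cost * peak
model.setObjective(obj, GRB.MINIMIZE)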
I'm trying to speed up my Python code by porting a bunch of my nested loops over to Fortran and calling them as subroutines.
But a lot of my loops call NumPy and special functions from SciPy, like Bessel functions.
Before I try to use Fortran, I was wondering whether it is possible to import SciPy and NumPy into my Fortran subroutine and call those modules for the Bessel functions?
Otherwise, would I have to implement the Bessel function in Fortran in order to use it?
Ideally, I would create some sort of subroutine that would optimize the code below. This is just a snippet of my entire project, to give you an idea of what I'm trying to accomplish.
I understand that there are other practices I should implement to improve the speed, but for now I am investigating the benefits of calling Fortran subroutines from my main Python program.
for m in range(self.MaxNum_Eigen):
    # looping through the eigenvalues for the given maximum number of eigenvalues allotted
    bm = self.beta[m]
    # not sure
    # *note: rprime = r. BUT tprime ~= t.
    # K is a list of 31 elements for this particular case
    K = (bm / math.sqrt((self.H2**2) + (bm**2))) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * self.r)) / (scipy.special.jv(0, bm * self.b)))  # Kernel, K0(bm, r).

    # initial condition
    F = [37] * (self.n1)

    # Integral transform of the initial condition
    # Fbar = (np.trapz(self.r, self.r*K*F))
    '''
    matlab syntax trapz(X,Y), X is either spacing or a vector
    matlab: trapz(r, r.*K.*F)              trapz(X,Y)
    python: np.trapz(self.r*K*F, self.r)   trapz(Y,X)
    '''
    Fbar = np.ones((self.n1, self.n2)) * (np.trapz(self.r * K * F, self.r))

    # steady state condition: integral is in steady state
    SS = np.zeros((sz[0], sz[1]))
    coeff = 5000000 * math.exp(-(10**3))  # defining value outside of loop with higher precision

    for i in range(sz[0]):
        for j in range(sz[1]):
            '''
            matlab reshape(Array, size1, size2) takes multiple arguments: the item it is resizing and the new desired shape
            create self variables so we are not re-initializing them over and over again?
            using generators? How to use generators
            '''
            s = np.reshape(tau[i, j, :], (1, n3))

            # will be used for rprime and tprime in Ozisik solution.
            [RR, TT] = np.meshgrid(self.r, s)

            '''
            ##### ERROR DUE TO ROUNDING OF HEAT SOURCE ####
            error in rounding: 5000000*math.exp(-(10**3)) becomes zero
            log10(e^-10000) = -10000*(0.4342944819) = -4342.944819
            e^-1000 = 10^-4342.944819 = 10^-4343 * 10^0.05518 = 1.13548386531e-4343
            '''
            # g = 5000000*math.exp(-(10**3))  # *(RR - self.c*TT)**2)  # [W / m^2] heat source.
            g = coeff * (RR - self.c * TT)**2

            K = (bm / math.sqrt(self.H2**2 + bm**2)) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * RR)) / (scipy.special.jv(0, bm * self.b)))

            # integral transform of heat source
            gbar = np.trapz(RR * K * g, self.r, 2)  # trapz(Y, X, dx (spacing))
            gbar = gbar.transpose()

            # boundary condition. BE SURE TO WRITE IN TERMS OF s!!!
            f2 = self.h2 * 37
            A = (self.alpha / self.k) * gbar + ((self.alpha * self.b) / self.k2) * ((bm / math.sqrt(self.H2**2 + bm**2)) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * self.b)) / (scipy.special.jv(0, bm * self.b)))) * f2
            # A is essentially a constant; is this correct all the time?
            # What does A represent?
            SS[i, j] = np.trapz(np.exp((-self.alpha * bm**2) * (T[i, j] - s)) * A, s)

    # INSIDE M loop
    K = (bm / math.sqrt((self.H2**2) + (bm**2))) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * R)) / (scipy.special.jv(0, bm * self.b)))
    U[:, :, m] = np.exp(-self.alpha * bm**2 * T) * K * Fbar + K * SS
    # print(['Eigenvalue ' num2str(m) ', found at time ' num2str(toc) ' seconds'])
Compilation of answers given in the comments
Answers specific to my code:
As vorticity mentioned, my code itself was not using the NumPy and SciPy packages to their fullest extent.
In regards to Bessel functions, 'royvib' mentions using .j0 from SciPy rather than .jv. Calling the general Bessel function jv is much more computationally expensive; since I knew I would be using a zeroth-order Bessel function for many of my declarations, the minor change from jv to j0 sped up the process.
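For illustration, the change amounts to something like this (bm and self.r are the names from the snippet above):
from scipy.special import jv, j0

K = jv(0, bm * self.r)   # general-order call, order resolved at runtime
K = j0(bm * self.r)      # dedicated zeroth-order routine, cheaper for this use case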
In addition, I declared the function outside the loop to prevent expensive lookups of the appropriate function on every iteration. Example below.
Before
for i in range(SomeLength):
    some_var = scipy.special.jv(1, coeff)
After
Bessel = scipy.special.jv
for i in range(SomeLength):
    some_var = Bessel(1, coeff)
Storing the function in a local name saved time by avoiding the dotted ('.') lookup through the library on every single loop iteration. However, keep in mind this does make the Python less readable, which is the main reason I chose to do this project in Python. I do not have an exact figure for how much time this step cut from my process.
Fortran specific:
Since I was able to improve my Python code, I did not go this route and so lack specifics, but the general answer, as stated by 'High Performance Mark', is that yes, there are libraries written to handle Bessel functions in Fortran.
If I do port my code over to Fortran or use f2py to mix Fortran and Python, I will update this answer accordingly.