I'm working on a MILP problem written in python (using pulp and gurobi).
The original developer used Windows, whereas I am running the code on Ubuntu. I am new to this, and there is a piece of the code that I really don't understand.
"FOR WINDOWS: Catch infeasible model and compute the IIS: to capture the infeasibility source"
if MILP.status == pulp.constants.LpStatusInfeasible:
    print 'Model Infeasible catched in i='+str(i)+''
    # remove all the existing .mps
    os.system('"del '+str('*.mps')+'"')
    # create the .mps being transformed into a .ilp
    MILP.solve(pulp.PULP_CBC_CMD())
    MPS_file_created = glob.glob("*.mps")
    # transform the .mps into a .ilp
    os.system('"C:\gurobi600\gurobi600\win64'+str(r'\bin')+'\gurobi_cl.exe ResultFile=violated_constraint.ilp '
              +MPS_file_created[0]+'"')
    print HEMS_MILP
I would like to know if this influences the solution and, if so, how to use this method on Ubuntu.
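As far as one can tell from the snippet, the block only runs after the solver has already reported the model infeasible, so it should not affect a feasible solve; it just dumps the model and asks Gurobi to compute an IIS (an irreducible set of conflicting constraints) for debugging. On Ubuntu the same idea can be reproduced without the Windows-specific del command and C:\gurobi600 path. A minimal sketch, assuming gurobi_cl is on the PATH (on Linux it is installed under e.g. /opt/gurobiXXX/linux64/bin) and that, as in the original snippet, re-solving with CBC leaves an .mps file in the working directory:
# Sketch only: a portable equivalent of the Windows-specific block above.
import glob
import os
import pulp

if MILP.status == pulp.constants.LpStatusInfeasible:
    # remove any leftover .mps files (portable equivalent of "del *.mps")
    for old in glob.glob("*.mps"):
        os.remove(old)
    # as in the original snippet, re-solve with CBC so an .mps file of the
    # model ends up in the working directory
    MILP.solve(pulp.PULP_CBC_CMD())
    mps_file = glob.glob("*.mps")[0]
    # gurobi_cl computes an IIS when the result file has the .ilp extension
    os.system('gurobi_cl ResultFile=violated_constraint.ilp ' + mps_file)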
This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the methods listed there, none of them works, and I cannot find any other solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what I try. I am using the most recent version as of this writing. Please let me know whether there is a solution.
For example, if you run the following program.
def myfunc(*args):
    # Do a lot of things.
    pass

if __name__ == '__main__':
    # Do something and call myfunc
    myfunc()
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui and do a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any view, really). Any advice is welcome.
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expect. Could you please check whether the function you are expecting to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote the matrix multiplication code shown below (don't worry about the accuracy of the code itself):
def matrix_mul(X, Y):
    result_matrix = [[1 for i in range(len(X))] for j in range(len(Y[0]))]
    # iterate through rows of X
    for i in range(len(X)):
        # iterate through columns of Y
        for j in range(len(Y[0])):
            # iterate through rows of Y
            for k in range(len(Y)):
                result_matrix[i][j] += X[i][k] * Y[k][j]
    return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with matrices large enough that the overall execution time was on the order of a few seconds.
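For reference, a driver along these lines is enough; the matrix size and the use of random matrices below are illustrative choices, not taken from the original mat_mul_method.py:
import random

# Hypothetical driver: build two largish random matrices so that matrix_mul
# runs long enough for VTune to collect a useful number of samples.
N = 300  # adjust so the run takes a few seconds on your machine
X = [[random.random() for _ in range(N)] for _ in range(N)]
Y = [[random.random() for _ in range(N)] for _ in range(N)]

if __name__ == '__main__':
    matrix_mul(X, Y)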
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and, under the bottom-up tab, group by "Module / Function / Call-stack" (or whatever grouping you prefer).
You should be able to see the module (mat_mul_method.py in my case) and the function "matrix_mul". If you double-click it, VTune should be able to load the sources too.
I'm getting used to VSCode in my daily Data Science remote workflow due to the LiveShare feature.
So, upon executing functions it just executes the first line of code; if I mark the whole region it does work, but that's a cumbersome way of dealing with the issue.
I tried a number of extensions, but none of them seems to solve the problem.
from sklearn.metrics import roc_auc_score

def gini_normalized(test, pred):
    """Simple normalized Gini based on Scikit-Learn's roc_auc_score"""
    gini = lambda a, p: 2 * roc_auc_score(a, p) - 1
    return gini(test, pred)
Executing the beginning of the function results in an error:
def gini_normalized(test, pred):...
File "", line 1
def gini_normalized(test, pred):
^
SyntaxError: unexpected EOF while parsing
There's a solution for PyCharm: Python Smart Execute - https://plugins.jetbrains.com/plugin/11945-python-smart-execute. Atom's Hydrogen doesn't have this issue either.
Any ideas regarding VSCode?
Thanks!
I'm a developer on the VSCode Data Science features. Just to make sure that I'm understanding correctly: you would like the shift-enter command to send the entire function to the Interactive Window if you run it on the definition of the function?
If so, then yes, we don't currently support that. Shift-enter can run line by line or run a section of code that you manually highlight. If you want, you can use #%% lines in your code to put functions into code cells. Then, when you are in a cell, shift-enter will run that entire cell; that might be the best current approach for you (see the sketch after the link below).
That smart execute does look interesting; if you would like to file that as a suggestion, you can use our GitHub here to get it onto our backlog to look at.
https://github.com/Microsoft/vscode-python
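For illustration, the function from the question split into #%% cells might look like this (the sample call at the end is just a made-up usage example):
#%%
# With the cursor anywhere in this cell, shift-enter sends the whole
# function definition to the Interactive Window in one go.
from sklearn.metrics import roc_auc_score

def gini_normalized(test, pred):
    """Simple normalized Gini based on Scikit-Learn's roc_auc_score"""
    return 2 * roc_auc_score(test, pred) - 1

#%%
# A separate cell to exercise the function.
print(gini_normalized([0, 1, 0, 1], [0.1, 0.9, 0.3, 0.8]))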
Hi, you could click the folding symbol before the line and turn it into > (the indented code of the function is then hidden). Then, if you select the folded line together with the next line, shift+enter can run them together.
@step(u'Child step')
def login_to_something(context):
    context.execute_steps(u'parent step 1')
    context.execute_steps(u'parent step 2')
It is unable to execute_steps as shown above for parent step 1, and it throws the following error:
"behave.parser.ParserError: Failed to parse "
This is probably the error you see when the Behave engine is not able to identify or parse the steps invoked within a step; something in the step text is probably not in the syntax the engine expects.
I see your point: yes, the preposition should not matter and the step text alone should be good enough. But something is missing from the expected syntax, hence the parser error; the steps passed to execute_steps need to include their keywords, for example:
def login_to_something(context):
    context.execute_steps('''
        when write the step 1 here
        then write the step 2 here
    ''')
I'm unable to tell more from the information shared in the problem statement.
Check the indentation of your feature file. We have also faced this issue multiple times.
I need to write an optimization file for Gurobi (Python) that is a modified version of a classic TSP. I tried to run the example file from their website:
examples.gurobi.com/traveling-salesman-problem/
I always get the following error:
TypeError: object of type 'NoneType' has no len()
What do I need to change?
Thx
Full code: https://www.dropbox.com/s/ewisx805b3o2wq5/beispiel_opt.py?dl=0
I can confirm the error with the example code from Gurobi's website. At first look, the problem seems to be the subtour function, which returns None if sum(lengths) == n, combined with the missing check for tour is None inside the subtourlim function.
Instead of providing a fix for the specific code, I first checked the examples that Gurobi installs inside its installation directory:
Mac: /Library/gurobi810/mac64/examples/python/
Linux: /opt/gurobi800/linux64/examples/python/
Windows: c:\gurobi800\win64\examples\python\
And, surprisingly, the tsp.py from there runs without any errors. Note also that the two functions mentioned above have been revised there. So I guess the example from the website is just an old version of the code.
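For illustration only, the essential difference is a None check before len() is called on what subtour returns. A rough sketch of such a guard inside the callback (names like subtour, model._vars and n are taken from the website example; the code that actually adds the lazy constraint is omitted):
from gurobipy import GRB

def subtourlim(model, where):
    if where == GRB.Callback.MIPSOL:
        vals = model.cbGetSolution(model._vars)
        selected = [(i, j) for i, j in model._vars.keys() if vals[i, j] > 0.5]
        tour = subtour(selected)
        # subtour() can return None when the solution is already one full
        # tour, so guard against it before calling len()
        if tour is not None and len(tour) < n:
            pass  # add the lazy subtour-elimination constraint here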
I am trying to run my own version of the baselines reinforcement learning source code from GitHub (https://github.com/openai/baselines/tree/master/baselines/ppo2).
Whatever I do, I keep getting the same display, which looks like this:
Where can I edit it? I know I should edit the "learn" method, but I don't know how.
Those prints are the result of the following block of code, which can be found at this link (for the latest revision at the time of writing this at least):
if update % log_interval == 0 or update == 1:
    ev = explained_variance(values, returns)
    logger.logkv("serial_timesteps", update*nsteps)
    logger.logkv("nupdates", update)
    logger.logkv("total_timesteps", update*nbatch)
    logger.logkv("fps", fps)
    logger.logkv("explained_variance", float(ev))
    logger.logkv('eprewmean', safemean([epinfo['r'] for epinfo in epinfobuf]))
    logger.logkv('eplenmean', safemean([epinfo['l'] for epinfo in epinfobuf]))
    logger.logkv('time_elapsed', tnow - tfirststart)
    for (lossval, lossname) in zip(lossvals, model.loss_names):
        logger.logkv(lossname, lossval)
    logger.dumpkvs()
If your goal is to still print some things here, but different things (or the same things in a different format), your only real option is to modify this source file (or copy the code you need into a new file and apply your changes there, if allowed by the code's license).
If your goal is just to suppress these messages, the easiest way to do so would probably be by running the following code before running this learn() function:
from baselines import logger
logger.set_level(logger.DISABLED)
That's using this function to disable the baselines logger. It might also disable other baselines-related output though.
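If the goal is to keep the numbers but send them somewhere other than the console, the baselines logger can also be reconfigured before learn() is called. A sketch, assuming logger.configure() accepts a directory and a list of output formats as it does in the repository at the time of writing:
from baselines import logger

# Hypothetical reconfiguration: write the same key/value logs as CSV plus a
# plain-text log file under ./ppo2_logs instead of the default console table.
logger.configure(dir='./ppo2_logs', format_strs=['csv', 'log'])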