I know that some examples from the Pyomo book can be run from the Anaconda command prompt, e.g. with the command "runef -m ReferenceModel.py" for the farmer example.
I would like to run the examples within the Spyder IDE, but Spyder doesn't recognise any of the code. For example, I get the following warning: 'from pyomo.core import *' used; unable to detect undefined names
How can I run the examples within Spyder? I am not sure whether adding the line
pyomo solve my_model.py my_data.dat --solver=glpk
at the end of the script would work.
Assuming you've set up your abstract model, you can instantiate it with data using:
data = DataPortal()
data.load(filename="my_data.dat", model=my_model)
Then you can solve in Spyder and present results with the following:
from pyomo.opt import SolverFactory
opt = SolverFactory('glpk')
instance = my_model.create_instance(data)
opt.solve(instance)
instance.display()
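Putting the pieces together, here is a minimal end-to-end sketch you can run as a plain script in Spyder. The toy one-parameter model and the my_data.dat contents are assumptions for illustration, and glpk must be installed and on your PATH:

from pyomo.environ import (AbstractModel, DataPortal, NonNegativeReals,
                           Objective, Param, SolverFactory, Var, maximize)

model = AbstractModel()
model.c = Param()  # value is read from the .dat file
model.x = Var(domain=NonNegativeReals, bounds=(0, 10))
model.obj = Objective(rule=lambda m: m.c * m.x, sense=maximize)

data = DataPortal()
data.load(filename='my_data.dat', model=model)  # e.g. a file containing: param c := 3;

instance = model.create_instance(data)
SolverFactory('glpk').solve(instance)
instance.display()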
References:
(1) https://www.osti.gov/servlets/purl/1376827
(2) https://pyomo.readthedocs.io/en/stable/working_models.html
I am trying to generate a feature set with the Essentia MusicExtractor from a yaml profile, as described in the documentation here and here, via Python.
My code snippet:
from essentia.standard import MusicExtractor
profile = "some_profile.yaml"
audio = "some_audio.mp3"
features, frames = MusicExtractor(profile=profile)(audio)
My yaml profile:
This produces the following error:
RuntimeError:
Error while configuring MusicExtractor:
Pool: Cannot set/add/merge value to the pool under the name 'rhythm.stats'
because that name already exists but contains a different data type than value.
It does not really look like I am doing something wrong.
I ran into the same problem and fixed it this way:
Downloaded a sample profile from the essentia repo's examples.
Ran the extractor with that profile.
Commented out the conflicting lines after each run; they are just a few, basically the stats and statsMFCC lines.
From this I could derive a working profile, along the lines of the sketch below.
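For illustration, here is a hypothetical version of that workaround driven from Python. The profile keys shown (outputFrames, frameSize, hopSize, stats, statsMFCC) are assumptions based on the sample profiles, and some_audio.mp3 is a placeholder:

# Write a trimmed profile with the conflicting override lines commented out,
# then run MusicExtractor against it.
profile_text = """\
outputFrames: 0
lowlevel:
    frameSize: 2048
    hopSize: 1024
#    stats: ["mean", "stdev"]      # conflicting line, commented out
#    statsMFCC: ["mean", "cov"]    # conflicting line, commented out
"""

with open("derived_profile.yaml", "w") as fh:
    fh.write(profile_text)

from essentia.standard import MusicExtractor
features, frames = MusicExtractor(profile="derived_profile.yaml")("some_audio.mp3")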
from slimit import minify

if __name__ == "__main__":
    print("start")
    # Normally, I pass real JavaScript. For this issue, an empty string reproduces the problem.
    minify("", mangle=True)
    print("exit")
This triggers the following console output.
start
WARNING: Couldn't write lextab module <module 'slimit.lextab' from '/Users/kurtostfeld/samba/wrapad/venv/lib/python2.7/site-packages/slimit/lextab.pyc'>. Won't overwrite existing lextab module
WARNING: yacc table file version is out of date
WARNING: Token 'IMPORT' defined, but not used
WARNING: Token 'BLOCK_COMMENT' defined, but not used
WARNING: Token 'ENUM' defined, but not used
WARNING: Token 'EXTENDS' defined, but not used
WARNING: Token 'LINE_COMMENT' defined, but not used
WARNING: Token 'LINE_TERMINATOR' defined, but not used
WARNING: Token 'CONST' defined, but not used
WARNING: Token 'EXPORT' defined, but not used
WARNING: Token 'CLASS' defined, but not used
WARNING: Token 'SUPER' defined, but not used
WARNING: There are 10 unused tokens
WARNING: Couldn't create <module 'slimit.yacctab' from '/Users/kurtostfeld/samba/wrapad/venv/lib/python2.7/site-packages/slimit/yacctab.pyc'>. Won't overwrite existing tabmodule
exit
These warnings are flooding my application console output. How can I use minify without generating warnings?
I'm using Python 2.7.12, and what are currently the latest library versions: slimit 0.8.1 and ply 3.10.
According to this issue on GitHub, slimit depends on the ply package. After a few tries, it seems that these warnings have appeared since version 3.8 of ply. You could downgrade ply to 3.6, which is the last version that doesn't produce these messages:
pip uninstall ply -y && pip install ply==3.6
It solved my problem.
UPDATE
Installing an older version of ply was really a bad workaround, since some of my tests started failing. The original slimit package does not seem well maintained, so I suggest moving to a newer fork; metatoaster did a good job improving it and fixed this warning problem. The solution for me was to uninstall slimit and then install his fork:
pip install git+https://github.com/metatoaster/slimit.git#egg=slimit
FINAL UPDATE: In fact, slimit seems not to be maintained anymore, and its successor is called calmjs. There are a few differences, but it is much more stable and doesn't show these annoying warning messages. See: https://github.com/calmjs/calmjs.parse
Switching versions did not change anything for me, so I found another workaround: simply delete (or move, if you want to be cautious) the files mentioned in the warnings (yourpython/site-packages/slimit/yacctab.py and yourpython/site-packages/slimit/lextab.py).
I believe the module will re-create these files and stop bothering you with warning messages.
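A minimal sketch of that cleanup (assuming slimit is installed in a writable site-packages; ply regenerates the tables on the next run):

import os
import slimit

# Locate the installed slimit package and remove its generated parser tables.
pkg_dir = os.path.dirname(slimit.__file__)
for name in ("lextab.py", "lextab.pyc", "yacctab.py", "yacctab.pyc"):
    path = os.path.join(pkg_dir, name)
    if os.path.exists(path):
        os.remove(path)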
Slimit uses ply under the hood, which uses logging from the stdlib. AFAICS slimit does not allow you to pass the logging parameters that ply's lex and yacc expect.
While you therefore can't (directly) access ply's logging, you should be able to suppress those messages by raising the global logging level:
import logging
...
logging.disable(logging.CRITICAL)
minify("", mangle=True)
logging.disable(logging.NOTSET)
You can use this parser, too: https://github.com/PiotrDabkowski/pyjsparser
It works and is easy to use. It does not handle comments though.
(Nor does calmjs seem to handle comments fully: its parse function has a parameter to indicate that you want the comments, but as of now, some comments seem to get lost.)
Here is the solution that I went with. I made custom variants of two slimit functions that pass an extra errorlog=ply.yacc.NullLogger() argument to the ply.yacc.yacc call.
import ply.yacc

from slimit import mangler
from slimit.lexer import Lexer
from slimit.parser import Parser
from slimit.visitors.minvisitor import ECMAMinifier

# Table-module names; slimit ships pregenerated tables under these names
# (an assumption based on the warning messages above).
lextab, yacctab = 'slimit.lextab', 'slimit.yacctab'


class SlimitNoLoggingParser(Parser):
    """
    This is a simple customized variant of slimit.parser.Parser.
    The only difference is that this passes errorlog=ply.yacc.NullLogger()
    to ply.yacc.yacc to suppress unwanted stderr logging output.
    """
    def __init__(self, lex_optimize=True, lextab=lextab,
                 yacc_optimize=True, yacctab=yacctab, yacc_debug=False):
        self.lex_optimize = lex_optimize
        self.lextab = lextab
        self.yacc_optimize = yacc_optimize
        self.yacctab = yacctab
        self.yacc_debug = yacc_debug
        self.lexer = Lexer()
        self.lexer.build(optimize=lex_optimize, lextab=lextab)
        self.tokens = self.lexer.tokens
        self.parser = ply.yacc.yacc(
            module=self, optimize=yacc_optimize,
            errorlog=ply.yacc.NullLogger(),
            debug=yacc_debug, tabmodule=yacctab, start='program')

        # https://github.com/rspivak/slimit/issues/29
        # lexer.auto_semi can cause a loop in a parser
        # when a parser error happens on a token right after
        # a newline.
        # We keep record of the tokens that caused p_error
        # and if the token has already been seen - we raise
        # a SyntaxError exception to avoid looping over and
        # over again.
        self._error_tokens = {}


# This is a simple variant of slimit.minify that suppresses unwanted noisy
# stderr logging output.
def warning_free_minify(text, mangle=False, mangle_toplevel=False):
    parser = SlimitNoLoggingParser(lex_optimize=False)
    tree = parser.parse(text)
    if mangle:
        mangler.mangle(tree, toplevel=mangle_toplevel)
    minified = ECMAMinifier().visit(tree)
    return minified
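A quick usage sketch (the JavaScript input is an arbitrary example):

print(warning_free_minify("var foo = 1 + 2;", mangle=True))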
I'm trying to add cross-references to external APIs in my documentation, but I'm facing three different behaviors.
I am using Sphinx (1.3.1) with Python (2.7.3), and my intersphinx mapping is configured as:
{
    'python': ('https://docs.python.org/2.7', None),
    'numpy': ('http://docs.scipy.org/doc/numpy/', None),
    'cv2': ('http://docs.opencv.org/2.4/', None),
    'h5py': ('http://docs.h5py.org/en/latest/', None),
}
I have no trouble writing a cross-reference to numpy API with :class:`numpy.ndarray` or :func:`numpy.array` which gives me, as expected, something like numpy.ndarray.
However, with h5py, the only way I can have a link generated is if I omit the module name. For example, :class:`Group` (or :class:`h5py:Group`) gives me Group but :class:`h5py.Group` fails to generate a link.
Finally, I cannot find a way to write a working cross-reference to OpenCV API, none of these seems to work:
:func:`cv2.convertScaleAbs`
:func:`cv2:cv2.convertScaleAbs`
:func:`cv2:convertScaleAbs`
:func:`convertScaleAbs`
How to properly write cross-references to external API, or configure intersphinx, to have a generated link as in the numpy case?
In addition to the detailed answer from #gall, I've discovered that intersphinx can also be run as a module:
python -m sphinx.ext.intersphinx 'http://python-eve.org/objects.inv'
This outputs nicely formatted info. For reference: https://github.com/sphinx-doc/sphinx/blob/master/sphinx/ext/intersphinx.py#L390
I gave another try at understanding the content of an objects.inv file, and this time I inspected numpy's and h5py's instead of only OpenCV's.
How to read an intersphinx inventory file
Although I couldn't find anything useful about reading the content of an objects.inv file, it is actually very simple with the intersphinx module.
from sphinx.ext import intersphinx
import warnings


def fetch_inventory(uri):
    """Read a Sphinx inventory file into a dictionary."""
    class MockConfig(object):
        intersphinx_timeout = None  # type: int
        tls_verify = False

    class MockApp(object):
        srcdir = ''
        config = MockConfig()

        def warn(self, msg):
            warnings.warn(msg)

    return intersphinx.fetch_inventory(MockApp(), '', uri)


uri = 'http://docs.python.org/2.7/objects.inv'

# Read inventory into a dictionary
inv = fetch_inventory(uri)

# Or just print it
intersphinx.debug(['', uri])
File structure (numpy)
After inspecting numpy's, you can see that the keys are domains:
[u'np-c:function',
u'std:label',
u'c:member',
u'np:classmethod',
u'np:data',
u'py:class',
u'np-c:member',
u'c:var',
u'np:class',
u'np:function',
u'py:module',
u'np-c:macro',
u'np:exception',
u'py:method',
u'np:method',
u'np-c:var',
u'py:exception',
u'np:staticmethod',
u'py:staticmethod',
u'c:type',
u'np-c:type',
u'c:macro',
u'c:function',
u'np:module',
u'py:data',
u'np:attribute',
u'std:term',
u'py:function',
u'py:classmethod',
u'py:attribute']
You can see how to write your cross-reference by looking at the content of a specific domain. For example, py:class:
{u'numpy.DataSource': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.DataSource.html#numpy.DataSource',
u'-'),
u'numpy.MachAr': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.MachAr.html#numpy.MachAr',
u'-'),
u'numpy.broadcast': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast.html#numpy.broadcast',
u'-'),
...}
So here, :class:`numpy.DataSource` will work as expected.
h5py
In the case of h5py, the domains are:
[u'py:attribute', u'std:label', u'py:method', u'py:function', u'py:class']
and if you look at the py:class domain:
{u'AttributeManager': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/attr.html#AttributeManager',
u'-'),
u'Dataset': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/dataset.html#Dataset',
u'-'),
u'ExternalLink': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/group.html#ExternalLink',
u'-'),
...}
That's why I couldn't make it work like the numpy references. So a good way to format them would be :class:`h5py:Dataset`.
OpenCV
OpenCV's inventory object seems malformed. Where I would expect to find domains, there are actually 902 function signatures:
[u':',
u'AdjusterAdapter::create(const',
u'AdjusterAdapter::good()',
u'AdjusterAdapter::tooFew(int',
u'AdjusterAdapter::tooMany(int',
u'Algorithm::create(const',
u'Algorithm::getList(vector<string>&',
u'Algorithm::name()',
u'Algorithm::read(const',
u'Algorithm::set(const'
...]
and if we take the first one's value:
{u'Ptr<AdjusterAdapter>': (u'OpenCV',
u'2.4',
u'http://docs.opencv.org/2.4/detectorType)',
u'ocv:function 1 modules/features2d/doc/common_interfaces_of_feature_detectors.html#$ -')}
I'm pretty sure it is then impossible to write OpenCV cross-references with this file...
Conclusion
I thought intersphinx generated the objects.inv based on the content of the documentation project in a standard way, which seems not to be the case.
As a result, it seems that the proper way to write cross-references is API dependent and one should inspect a specific inventory object to actually see what's available.
An additional way to inspect the objects.inv file is with the sphobjinv module.
You can search local or even remote inventory files (with fuzzy matching). For instance with scipy:
$ sphobjinv suggest -t 90 -u https://docs.scipy.org/doc/scipy/reference/objects.inv "signal.convolve2d"
Remote inventory found.
:py:function:`scipy.signal.convolve2d`
:std:doc:`generated/scipy.signal.convolve2d`
Note that you may need to use :py:func: and not :py:function: (presumably because the inventory stores full object types like py:function, while the Python domain's cross-reference roles use abbreviated names like py:func).
How to use OpenCV 2.4 (cv2) intersphinx
Inspired by #Gall's answer, I wanted to compare the contents of the OpenCV & numpy inventory files. I couldn't get sphinx.ext.intersphinx.fetch_inventory to work from IPython, but the following does work:
curl http://docs.opencv.org/2.4/objects.inv | tail -n +5 | zlib-flate -uncompress > cv2.inv
curl https://docs.scipy.org/doc/numpy/objects.inv | tail -n +5 | zlib-flate -uncompress > numpy.inv
numpy.inv has lines like this:
numpy.ndarray py:class 1 reference/generated/numpy.ndarray.html#$ -
whereas cv2.inv has lines like this:
cv2.imread ocv:pyfunction 1 modules/highgui/doc/reading_and_writing_images_and_video.html#$ -
So presumably you'd link to the OpenCV docs with :ocv:pyfunction:`cv2.imread` instead of :py:function:`cv2.imread`. Sphinx doesn't like it though:
WARNING: Unknown interpreted text role "ocv:pyfunction".
A bit of Googling revealed that the OpenCV project has its own "ocv" sphinx domain: https://github.com/opencv/opencv/blob/2.4/doc/ocv.py -- presumably because they need to document C, C++ and Python APIs all at the same time.
To use it, save ocv.py next to your Sphinx conf.py, and modify your conf.py:
sys.path.insert(0, os.path.abspath('.'))
import ocv

extensions = [
    'ocv',
]

intersphinx_mapping = {
    'cv2': ('http://docs.opencv.org/2.4/', None),
}
In your rst files you need to say :ocv:pyfunc:`cv2.imread` (not :ocv:pyfunction:).
Sphinx prints some warnings (unparseable C++ definition: u'cv2.imread') but the generated html documentation actually looks ok with a link to http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#cv2.imread. You can edit ocv.py and remove the line that prints that warning.
The accepted answer no longer works in the new version (1.5.x); the following does:
import requests
import posixpath
from sphinx.ext.intersphinx import read_inventory
uri = 'http://docs.python.org/2.7/'
r = requests.get(uri + 'objects.inv', stream=True)
r.raise_for_status()
inv = read_inventory(r.raw, uri, posixpath.join)
Stubborn fool that I am, I used 2to3 and the Sphinx deprecated APIs chart to revive #david-röthlisberger's ocv.py-based answer so it'll work with Sphinx 2.3 on Python 3.5.
The fixed-up version is here:
https://gist.github.com/ssokolow/a230b27b7ea4a31f7fb40621e6461f9a
...and the quick version of what I did was:
Run 2to3 -w ocv.py && rm ocv.py.bak
Cycle back and forth between running Sphinx and renaming functions to their replacements in the chart. I believe these were the only changes I had to make at this step (sketched after this list):
Directive now has to be imported from docutils.parsers.rst
Replace calls to l_(...) with calls to _(...) and remove the l_ import.
Replace calls to env.warn with calls to log.warn where log = sphinx.util.logging.getLogger(__name__).
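For illustration, a hedged before/after sketch of those renames; the surrounding ocv.py code is elided, and the old import locations are my assumption of what the original file used:

from docutils.parsers.rst import Directive  # was: sphinx.util.compat.Directive (assumed)
from sphinx.locale import _                 # was: from sphinx.locale import l_
from sphinx.util import logging

log = logging.getLogger(__name__)
log.warning('unparseable C++ definition')   # was: env.warn(...); warning is the non-deprecated spelling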
Then, you just pair it with this intersphinx definition and you get something still new enough to be relevant for most use cases:
'cv2': ('https://docs.opencv.org/3.0-last-rst/', None)
For convenience, I made a small extension for aliasing intersphinx cross-references. This is useful as sometimes the object inventory gets confused when an object from a submodule is imported from a package's __init__.py.
See also https://github.com/sphinx-doc/sphinx/issues/5603
###
# Workaround for
# "Intersphinx references to objects imported at package level can't be mapped".
#
# See https://github.com/sphinx-doc/sphinx/issues/5603
intersphinx_aliases = {
    ("py:class", "click.core.Group"):
        ("py:class", "click.Group"),
    ("py:class", "click.core.Command"):
        ("py:class", "click.Command"),
}


def add_intersphinx_aliases_to_inv(app):
    from sphinx.ext.intersphinx import InventoryAdapter
    inventories = InventoryAdapter(app.builder.env)

    for alias, target in app.config.intersphinx_aliases.items():
        alias_domain, alias_name = alias
        target_domain, target_name = target
        try:
            found = inventories.main_inventory[target_domain][target_name]
            try:
                inventories.main_inventory[alias_domain][alias_name] = found
            except KeyError:
                print("could not add to inv")
                continue
        except KeyError:
            print("missed :(")
            continue


def setup(app):
    app.add_config_value("intersphinx_aliases", {}, "env")
    app.connect("builder-inited", add_intersphinx_aliases_to_inv)
To use this, I paste the above code in my conf.py and add aliases to the intersphinx_aliases dictionary.
This question concerns Matlab 2014b, Python 3.4 and Mac OS 10.10.
I have the following Python file tmp.py:
from statsmodels.tsa.arima_process import ArmaProcess
import numpy as np

def generate_AR_time_series():
    arparams = np.array([-0.8])
    maparams = np.array([])
    ar = np.r_[1, -arparams]
    ma = np.r_[1, maparams]
    arma_process = ArmaProcess(ar, ma)
    return arma_process.generate_sample(100)
I want to call generate_AR_time_series from Matlab so I used:
py.tmp.generate_AR_time_series()
which gave a vague error message
Undefined variable "py" or class "py.tmp.generate_AR_time_series".
To look into the problem further, I tried
tmp = py.eval('__import__(''tmp'')', struct);
which gave me a detailed but still obscure error message:
Python Error:
dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so, 2): Symbol
not found: __gfortran_stop_numeric_f08
Referenced from: /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so
Expected in: /Applications/MATLAB_R2014b.app/sys/os/maci64/libgfortran.3.dylib
in /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so
I can call the function from within Python just fine, so I guess the problem is with Matlab. From the detailed message, it seems that the missing symbol is expected to be found in the Matlab installation path, but of course the Matlab installation path does not contain those things, since they are third-party libraries for Python.
How to solve this problem?
Edit 1:
libgfortran.3.dylib can be found in a lot of places:
/Applications/MATLAB_R2014a.app/sys/os/maci64/libgfortran.3.dylib
/Applications/MATLAB_R2014b.app/sys/os/maci64/libgfortran.3.dylib
/opt/local/lib/gcc48/libgfortran.3.dylib
/opt/local/lib/gcc49/libgfortran.3.dylib
/opt/local/lib/libgcc/libgfortran.3.dylib
/Users/wdg/Documents/MATLAB/mcode/nativelibs/macosx/bin/libgfortran.3.dylib
Try:
setenv('DYLD_LIBRARY_PATH', '/usr/local/bin/');
For me, using the setenv approach from within MATLAB did not work. Also, MATLAB modifies the DYLD_LIBRARY_PATH variable during startup to include necessary libraries.
First, you have to make sure which version of gfortran scipy was linked against: in Terminal.app, enter otool -L /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so and look for 'libgfortran' in the output.
It worked for me to copy $(MATLABROOT)/bin/.matlab7rc.sh to my home directory and change the line LDPATH_PREFIX='' in the mac section (around line 195 in my case) to LDPATH_PREFIX='/opt/local/lib/gcc49', or whatever path to libgfortran you found above.
This ensures that /opt/local/lib/gcc49/libgfortran.3.dylib is found before the MATLAB version, but leaves other paths intact.