I'm trying to add the Seaborn dependency to my module using Poetry.
I've tried it in different ways, but always without success; maybe I'm doing something wrong.
Here's my current TOML config file:
[tool.poetry]
name = "seaborn"
version = "0.1.0"
description = ""
authors = ["me"]
[tool.poetry.dependencies]
python = "3.9.6"
pandas = "^1.4.1"
jupyter = "^1.0.0"
scipy = "1.7.0"
numpy = "^1.22.3"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
I've tried on the CLI:
poetry add seaborn
But no success.
Here's the output:
poetry add seaborn
Using version ^0.11.2 for seaborn
Updating dependencies
Resolving dependencies... (0.0s)
AssertionError
at ~/.pyenv/versions/3.10.0/lib/python3.10/site-packages/poetry/mixology/incompatibility.py:111 in __str__
107│ )
108│
109│ def __str__(self):
110│ if isinstance(self._cause, DependencyCause):
→ 111│ assert len(self._terms) == 2
112│
113│ depender = self._terms[0]
114│ dependee = self._terms[1]
115│ assert depender.is_positive()
If I try to add it to the TOML config file like seaborn = "^0.0.1"
the output is very similar:
poetry update
Updating dependencies
Resolving dependencies... (0.0s)
AssertionError
at ~/.pyenv/versions/3.10.0/lib/python3.10/site-packages/poetry/mixology/incompatibility.py:111 in __str__
(same traceback as above)
Can anyone help me?
Thank you so much!
After a few hours of dropping modules, restarting PyCharm, and invalidating caches... my project is up to date without any issue!
For future note:
Do not name your modules/scripts after an already existing package (e.g. scipy, seaborn, and so on).
I cannot comment yet, so need to supply a new answer.
This issue has broad applicability beyond just the Seaborn module and should be renamed something like "Cannot add package using Poetry, AssertionError incompatibility.py:111".
The existing answer by @diguex, which I upvoted, is exactly the fix needed; it helped me with the same problem, attempting to import 'flask-restx' into a demo project named 'flask-restx'.
Long and short: Poetry cannot import a dependency into itself, and naming the module after an already existing package will confuse Poetry into thinking it is doing just that. For more discussion, see: https://github.com/python-poetry/poetry/issues/3491
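As a quick sanity check before debugging Poetry itself, something along these lines can flag the collision up front. This is a minimal sketch of my own (not part of Poetry); it assumes Python 3.11+ for tomllib and the standard pyproject.toml layout:

# Hypothetical sanity check: warn when the project's own name in
# pyproject.toml collides with one of its declared dependencies.
import tomllib  # stdlib in Python 3.11+

with open("pyproject.toml", "rb") as f:
    poetry = tomllib.load(f)["tool"]["poetry"]

deps = set(poetry.get("dependencies", {})) - {"python"}
if poetry["name"] in deps:
    print(f"Project is named after one of its own dependencies: {poetry['name']!r}")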
I'm using robotidy, but I haven't quite managed to wrap my head around how I'm supposed to tell robotidy not to wrap long lines.
I have tried the following in the robotidy.toml:
[tool.robotidy]
transform = [
"SplitTooLongLine:line_length=9999"
]
Unfortunately, even though this does disable long-line wrapping, it also disables all other transformations, which is obviously not the intended effect.
There are a few ways of configuring transformers in robotidy.
--transform - as you noticed - selects and runs only the transformers listed with the --transform option. You can optionally pass configuration through it, but it is not suitable for your case because you want to run the rest of the transformers as well.
--configure - pass configuration to your transformer:
[tool.robotidy]
configure = [
"SplitTooLongLine:line_length=9999"
]
It will run all default transformers and additionally configure SplitTooLongLine with a line_length of 9999.
However, I think it would be better to disable SplitTooLongLine altogether since you don't want to run it - you can use the enabled parameter for that:
[tool.robotidy]
configure = [
"SplitTooLongLine:enabled=False"
]
It's described in the docs (though I admit it should be linked better, for example by providing the URL to this page from every transformer's documentation): https://robotidy.readthedocs.io/en/latest/configuration/configuring_transformers.html#configuring-transformers
We've been using pipenv for dependency management for a while, and using micropipenv's protected functionality to check lock freshness - the idea here being that micropipenv is lightweight, so this is a cheap and cheerful way of ensuring that our dependencies haven't drifted during CI or during a docker build.
Alas, micropipenv has no such feature for poetry (it skips the hash check completely), and I am therefore left to "reverse-engineer" the feature on my own. Ostensibly this should be super easy - I've assembled the code posted later from what I traced through the poetry and poetry-core repos (Locker, Factory, core.Factory, and PyProjectTOML, primarily). This absolutely does not do the trick, and I'm at a loss as to why.
import json
from hashlib import sha256

_relevant_keys = ["dependencies", "group", "source", "extras"]

def _get_content_hash(pyproject):
    content = pyproject["tool"]["poetry"]
    print(content)
    relevant_content = {}
    for key in _relevant_keys:
        relevant_content[key] = content.get(key)
    print(json.dumps(relevant_content, sort_keys=True).encode())
    content_hash = sha256(
        json.dumps(relevant_content, sort_keys=True).encode()
    ).hexdigest()
    print(f"Calculated: {content_hash}")
    return content_hash

def is_fresh(lockfile, pyproject):
    metadata = lockfile.get("metadata", {})
    print(f"From file: {metadata.get('content-hash')}")
    if "content-hash" in metadata:
        return _get_content_hash(pyproject) == metadata["content-hash"]
    return False
I'd love to figure out what exactly I'm missing here - I'm guessing that the Poetry locker's _local_config gets changed at some point and I've failed to notice it.
References:
Locker: https://github.com/python-poetry/poetry/blob/a1a5bce96d85bdc0fdc60b8abf644615647f969e/poetry/packages/locker.py#L454
core.Factory: https://github.com/python-poetry/poetry-core/blob/afaa6903f654b695d9411fb548ad10630287c19f/poetry/core/factory.py#L24
Naturally, this ended up being a PEBKAC error. I was using the hash generation function from the master branch while running an earlier version of Poetry on the command line. Once I used the function from the correct code version, everything was hunky-dory.
I think this functionality actually exists in micropipenv now anyway.
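For anyone trying the same check, here is a minimal sketch of how the helpers above can be driven, assuming Python 3.11+ for tomllib (poetry.lock is plain TOML):

# Minimal usage sketch for _get_content_hash/is_fresh above
# (tomllib is stdlib in Python 3.11+).
import tomllib

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)
with open("poetry.lock", "rb") as f:
    lockfile = tomllib.load(f)

print("lock is fresh:", is_fresh(lockfile, pyproject))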
I'm trying to determine how the gen_io_ops module is generated by Bazel when building TensorFlow from source.
In tensorflow/python/ops/io_ops.py, there is this piece of code:
from tensorflow.python.ops import gen_io_ops
[...]
# used in the TextLineReader initialization
rr = gen_io_ops._text_line_reader_v2(...)
referring to the bazel-genfiles/tensorflow/python/ops/gen_io_ops.py module (generated by Bazel when building TensorFlow).
_text_line_reader_v2 is a wrapper around the TextLineReaderV2 kernel defined in tensorflow/tensorflow/core/kernels/text_line_reader_op.cc.
As far as I understand, the build steps are the following:
1) The kernel library for the text_line_reader_op is built in tensorflow/tensorflow/core/kernels/BUILD
tf_kernel_library(
    name = "text_line_reader_op",
    prefix = "text_line_reader_op",
    deps = IO_DEPS,
)
where tf_kernel_library basically looks for the text_line_reader_op.cc file and builds it.
2) The :text_line_reader_op kernel library is then used as a dependency by the io library defined in the same file:
cc_library(
    name = "io",
    deps = [
        ":text_line_reader_op", ...
    ],
)
I suppose the io library now contains the definition of the TextLineReaderV2 kernel.
From what I get from this answer, there should be a third step where the io library is used to generate the Python wrappers found in the bazel-genfiles/tensorflow/python/ops/gen_io_ops.py module. This file generation can be done by the tf_gen_op_wrapper_py rule in Bazel or by the tf.load_op_library() method, but neither of them seems to be involved.
Does someone know where this third step is defined in the build process?
I finally got it.
There is indeed a call to tf_gen_op_wrapper_py, but it's hidden in a call to tf_gen_op_wrapper_private_py:
def tf_gen_op_wrapper_private_py(name, out=None, deps=[],
                                 require_shape_functions=True,
                                 visibility=[]):
  if not name.endswith("_gen"):
    fail("name must end in _gen")
  [...]
  bare_op_name = name[:-4]
  tf_gen_op_wrapper_py(name=bare_op_name, ...
So the steps are the following.
In tensorflow/tensorflow/python/BUILD, there is this rule:
tf_gen_op_wrapper_private_py(
    name = "io_ops_gen",
    [...]
)
In this rule, the _gen suffix is removed (in tf_gen_op_wrapper_private_py) and a gen_ prefix is added (in tf_gen_op_wrapper_py), and therefore the gen_io_ops.py module is generated by this rule.
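To make the renaming concrete, here is a toy sketch in plain Python (not the actual Bazel macro, just the string handling it performs):

# Toy illustration of the renaming done by the two Bazel macros above.
def wrapper_module_name(rule_name):
    if not rule_name.endswith("_gen"):
        raise ValueError("name must end in _gen")
    bare_op_name = rule_name[:-4]         # "io_ops_gen" -> "io_ops"
    return "gen_" + bare_op_name          # "io_ops"     -> "gen_io_ops"

print(wrapper_module_name("io_ops_gen"))  # prints: gen_io_ops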
I'm trying to add cross-references to external APIs in my documentation, but I'm facing three different behaviors.
I am using Sphinx (1.3.1) with Python (2.7.3), and my intersphinx mapping is configured as:
{
    'python': ('https://docs.python.org/2.7', None),
    'numpy': ('http://docs.scipy.org/doc/numpy/', None),
    'cv2': ('http://docs.opencv.org/2.4/', None),
    'h5py': ('http://docs.h5py.org/en/latest/', None),
}
I have no trouble writing a cross-reference to numpy API with :class:`numpy.ndarray` or :func:`numpy.array` which gives me, as expected, something like numpy.ndarray.
However, with h5py, the only way I can have a link generated is if I omit the module name. For example, :class:`Group` (or :class:`h5py:Group`) gives me Group but :class:`h5py.Group` fails to generate a link.
Finally, I cannot find a way to write a working cross-reference to OpenCV API, none of these seems to work:
:func:`cv2.convertScaleAbs`
:func:`cv2:cv2.convertScaleAbs`
:func:`cv2:convertScaleAbs`
:func:`convertScaleAbs`
How do I properly write cross-references to an external API, or configure intersphinx, to get a generated link as in the numpy case?
In addition to the detailed answer from @Gall, I've discovered that intersphinx can also be run as a module:
python -m sphinx.ext.intersphinx 'http://python-eve.org/objects.inv'
This outputs nicely formatted info. For reference: https://github.com/sphinx-doc/sphinx/blob/master/sphinx/ext/intersphinx.py#L390
I gave understanding the content of an objects.inv file another try, and this time I inspected numpy's and h5py's inventories instead of only OpenCV's.
How to read an intersphinx inventory file
Despite the fact that I couldn't find anything useful about reading the content of an objects.inv file, it is actually very simple with the intersphinx module.
from sphinx.ext import intersphinx
import warnings


def fetch_inventory(uri):
    """Read a Sphinx inventory file into a dictionary."""
    class MockConfig(object):
        intersphinx_timeout = None  # type: int
        tls_verify = False

    class MockApp(object):
        srcdir = ''
        config = MockConfig()

        def warn(self, msg):
            warnings.warn(msg)

    return intersphinx.fetch_inventory(MockApp(), '', uri)


uri = 'http://docs.python.org/2.7/objects.inv'

# Read inventory into a dictionary
inv = fetch_inventory(uri)
# Or just print it
intersphinx.debug(['', uri])
File structure (numpy)
After inspecting numpy's, you can see that the keys are domains:
[u'np-c:function',
u'std:label',
u'c:member',
u'np:classmethod',
u'np:data',
u'py:class',
u'np-c:member',
u'c:var',
u'np:class',
u'np:function',
u'py:module',
u'np-c:macro',
u'np:exception',
u'py:method',
u'np:method',
u'np-c:var',
u'py:exception',
u'np:staticmethod',
u'py:staticmethod',
u'c:type',
u'np-c:type',
u'c:macro',
u'c:function',
u'np:module',
u'py:data',
u'np:attribute',
u'std:term',
u'py:function',
u'py:classmethod',
u'py:attribute']
You can see how to write your cross-reference by looking at the content of a specific domain. For example, py:class:
{u'numpy.DataSource': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.DataSource.html#numpy.DataSource',
u'-'),
u'numpy.MachAr': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.MachAr.html#numpy.MachAr',
u'-'),
u'numpy.broadcast': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast.html#numpy.broadcast',
u'-'),
...}
So here, :class:`numpy.DataSource` will work as expected.
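If you want to check programmatically which role a given name can be referenced under, a small helper over the dictionary returned by fetch_inventory above does the trick (find_roles is my own hypothetical name):

# Hypothetical helper: list the domain:role keys a name appears under
# in an inventory fetched with fetch_inventory() above.
def find_roles(inv, name):
    return [key for key, entries in inv.items() if name in entries]

numpy_inv = fetch_inventory('http://docs.scipy.org/doc/numpy/objects.inv')
print(find_roles(numpy_inv, 'numpy.DataSource'))  # expected: ['py:class']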
h5py
In the case of h5py, the domains are:
[u'py:attribute', u'std:label', u'py:method', u'py:function', u'py:class']
and if you look at the py:class domain:
{u'AttributeManager': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/attr.html#AttributeManager',
u'-'),
u'Dataset': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/dataset.html#Dataset',
u'-'),
u'ExternalLink': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/group.html#ExternalLink',
u'-'),
...}
That's why I couldn't make it work like the numpy references. A good way to format them is :class:`h5py:Dataset`.
OpenCV
OpenCV's inventory object seems malformed. Where I would expect to find domains, there are actually 902 function signatures:
[u':',
u'AdjusterAdapter::create(const',
u'AdjusterAdapter::good()',
u'AdjusterAdapter::tooFew(int',
u'AdjusterAdapter::tooMany(int',
u'Algorithm::create(const',
u'Algorithm::getList(vector<string>&',
u'Algorithm::name()',
u'Algorithm::read(const',
u'Algorithm::set(const'
...]
and if we take the first one's value:
{u'Ptr<AdjusterAdapter>': (u'OpenCV',
u'2.4',
u'http://docs.opencv.org/2.4/detectorType)',
u'ocv:function 1 modules/features2d/doc/common_interfaces_of_feature_detectors.html#$ -')}
I'm pretty sure it is then impossible to write OpenCV cross-references with this file...
Conclusion
I thought intersphinx generated the objects.inv in a standard way based on the content of the documentation project, which seems not to be the case.
As a result, it seems that the proper way to write cross-references is API dependent and one should inspect a specific inventory object to actually see what's available.
An additional way to inspect the objects.inv file is with the sphobjinv module.
You can search local or even remote inventory files (with fuzzy matching). For instance with scipy:
$ sphobjinv suggest -t 90 -u https://docs.scipy.org/doc/scipy/reference/objects.inv "signal.convolve2d"
Remote inventory found.
:py:function:`scipy.signal.convolve2d`
:std:doc:`generated/scipy.signal.convolve2d`
Note that you may need to use :py:func: and not :py:function: (I'd be happy to know why).
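sphobjinv also exposes a Python API if you prefer inspecting inventories from a script. A minimal sketch, with the attribute names taken from my reading of its docs (treat the details as an assumption):

# Sketch using sphobjinv's Python API: each object carries the name,
# domain and role that map onto the :domain:role:`name` syntax.
import sphobjinv as soi

inv = soi.Inventory(url='https://docs.scipy.org/doc/scipy/reference/objects.inv')
for obj in inv.objects[:5]:
    print(':{}:{}:`{}`'.format(obj.domain, obj.role, obj.name))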
How to use OpenCV 2.4 (cv2) intersphinx
Inspired by @Gall's answer, I wanted to compare the contents of the OpenCV and numpy inventory files. I couldn't get sphinx.ext.intersphinx.fetch_inventory to work from IPython, but the following does work:
curl http://docs.opencv.org/2.4/objects.inv | tail -n +5 | zlib-flate -uncompress > cv2.inv
curl https://docs.scipy.org/doc/numpy/objects.inv | tail -n +5 | zlib-flate -uncompress > numpy.inv
numpy.inv has lines like this:
numpy.ndarray py:class 1 reference/generated/numpy.ndarray.html#$ -
whereas cv2.inv has lines like this:
cv2.imread ocv:pyfunction 1 modules/highgui/doc/reading_and_writing_images_and_video.html#$ -
So presumably you'd link to the OpenCV docs with :ocv:pyfunction:`cv2.imread` instead of :py:function:`cv2.imread`. Sphinx doesn't like it though:
WARNING: Unknown interpreted text role "ocv:pyfunction".
A bit of Googling revealed that the OpenCV project has its own "ocv" sphinx domain: https://github.com/opencv/opencv/blob/2.4/doc/ocv.py -- presumably because they need to document C, C++ and Python APIs all at the same time.
To use it, save ocv.py next to your Sphinx conf.py, and modify your conf.py:
sys.path.insert(0, os.path.abspath('.'))
import ocv
extensions = [
    'ocv',
]
intersphinx_mapping = {
    'cv2': ('http://docs.opencv.org/2.4/', None),
}
In your rst files you need to say :ocv:pyfunc:`cv2.imread` (not :ocv:pyfunction:).
Sphinx prints some warnings (unparseable C++ definition: u'cv2.imread'), but the generated HTML documentation actually looks OK, with a link to http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#cv2.imread. You can edit ocv.py and remove the line that prints that warning.
The accepted answer no longer works in newer Sphinx versions (1.5.x):
import requests
import posixpath
from sphinx.ext.intersphinx import read_inventory
uri = 'http://docs.python.org/2.7/'
r = requests.get(uri + 'objects.inv', stream=True)
r.raise_for_status()
inv = read_inventory(r.raw, uri, posixpath.join)
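The returned inventory is a plain dictionary keyed by domain:role, matching the structure shown in the earlier answers, so it can be inspected directly:

# Inspect the fetched inventory: {'domain:role': {name: (project, version,
# url, display_name)}} -- structure assumed from the answers above.
for name, (project, version, url, display) in sorted(inv['py:class'].items())[:5]:
    print(name, '->', url)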
Stubborn fool that I am, I used 2to3 and the Sphinx deprecated APIs chart to revive @david-röthlisberger's ocv.py-based answer so it'll work with Sphinx 2.3 on Python 3.5.
The fixed-up version is here:
https://gist.github.com/ssokolow/a230b27b7ea4a31f7fb40621e6461f9a
...and the quick version of what I did was:
Run 2to3 -w ocv.py && rm ocv.py.bak
Cycle back and forth between running Sphinx and renaming functions to their replacements in the chart. I believe these were the only changes I had to make in this step:
Directive now has to be imported from docutils.parsers.rst
Replace calls to l_(...) with calls to _(...) and remove the l_ import.
Replace calls to env.warn with calls to log.warn, where log = sphinx.util.logging.getLogger(__name__) (see the sketch after this list).
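For that last item, this is roughly what the replacement looks like (a sketch against current Sphinx; the logger behaves like a standard library logger):

# Sketch of the env.warn -> logging replacement described above.
from sphinx.util import logging

log = logging.getLogger(__name__)
log.warning("unparseable C++ definition: %s", "cv2.imread")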
Then, you just pair it with this intersphinx definition and you get something still new enough to be relevant for most use cases:
'cv2': ('https://docs.opencv.org/3.0-last-rst/', None)
For convenience, I made a small extension for aliasing intersphinx cross references. This is useful as sometimes the object inventory gets confused when an object from a submodule is imported from a package's __init__.py.
See also https://github.com/sphinx-doc/sphinx/issues/5603
###
# Workaround for
# "Intersphinx references to objects imported at package level can't be mapped."
#
# See https://github.com/sphinx-doc/sphinx/issues/5603
intersphinx_aliases = {
    ("py:class", "click.core.Group"):
        ("py:class", "click.Group"),
    ("py:class", "click.core.Command"):
        ("py:class", "click.Command"),
}
def add_intersphinx_aliases_to_inv(app):
    from sphinx.ext.intersphinx import InventoryAdapter

    inventories = InventoryAdapter(app.builder.env)

    for alias, target in app.config.intersphinx_aliases.items():
        alias_domain, alias_name = alias
        target_domain, target_name = target
        try:
            found = inventories.main_inventory[target_domain][target_name]
            try:
                inventories.main_inventory[alias_domain][alias_name] = found
            except KeyError:
                print("could not add to inv")
                continue
        except KeyError:
            print("missed :(")
            continue


def setup(app):
    app.add_config_value("intersphinx_aliases", {}, "env")
    app.connect("builder-inited", add_intersphinx_aliases_to_inv)
To use this, I paste the above code in my conf.py and add aliases to the intersphinx_aliases dictionary.