I am trying to create a wheel package from Bazel using py_wheel. py_wheel has an option to provide the required Python dependencies via its requires param. However, I don't want to maintain the list of dependencies manually. Is there a way to read my dependencies from the requirements.txt file and provide them as that list in Bazel?
py_wheel(
    name = "dummy",
    distribution = "dummy",
    python_tag = "py3",
    version = "latest",
    entry_points = {"console_scripts": ["dummy = dummy.app:main"]},
    requires = [?],
    deps = [":dummy-dependencies"],
)
You can load the requirements in this way:
load("@deps_1//:requirements.bzl", deps_1_requirement = "requirement")
The requires attribute is for the "List of requirements for this package"; you can find it in the py_wheel documentation.
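A BUILD file cannot read an arbitrary file like requirements.txt directly, but a repository rule can. Below is a minimal sketch (the rule name requirements_repo and the repository name @wheel_requirements are made up for illustration) that converts requirements.txt into a Starlark list which py_wheel can consume; note that newer versions of rules_python's py_wheel may also offer a requires_file attribute, so check your version's docs first:

# requirements_repo.bzl (name illustrative): turn requirements.txt into a
# Starlark constant at repository-fetch time.
def _requirements_repo_impl(repository_ctx):
    content = repository_ctx.read(repository_ctx.attr.requirements)
    # Keep non-empty lines that are not comments.
    reqs = [
        line.strip()
        for line in content.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
    repository_ctx.file("BUILD", "")
    repository_ctx.file("requirements.bzl", "REQUIREMENTS = %r\n" % reqs)

requirements_repo = repository_rule(
    implementation = _requirements_repo_impl,
    attrs = {"requirements": attr.label(allow_single_file = True)},
)

Then the BUILD file can do:

load("@wheel_requirements//:requirements.bzl", "REQUIREMENTS")

py_wheel(
    name = "dummy",
    distribution = "dummy",
    python_tag = "py3",
    version = "latest",
    requires = REQUIREMENTS,
    deps = [":dummy-dependencies"],
)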
I am evaluating Poetry for packaging and building a desktop application.
The only roadblock is that Poetry doesn't seem to allow specifying the same package twice. For instance, I couldn't do the following:
[tool.poetry.dependencies]
python = "^3.9"
lru-dict = {path = "./packages/lru_dict-1.1.6-cp39-cp39-win_amd64.whl"}
lru-dict = {path = "./packages/lru_dict-1.1.6-cp39-cp39-win32.whl"}
Notice that the lru-dict package is specified twice with the only difference being the bitness (i.e. the CPU architecture) that the package is built for.
I know I can upload the package to PyPI and pip will choose the appropriate version dynamically. But what about private or local packages?
From the poetry documentation:
Poetry supports environment markers via the markers property.
One of those markers is platform_machine, which is the output of platform.machine().
So you should be able to do something like this:
[tool.poetry.dependencies]
python = "^3.9"
lru-dict = [
{path = "./packages/lru_dict-1.1.6-cp39-cp39-win_amd64.whl", markers = "platform_machine == 'amd64'"},
{path = "./packages/lru_dict-1.1.6-cp39-cp39-win32.whl", markers = "platform_machine == 'win32'"}
]
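You can verify which marker value applies on a given machine by checking platform.machine() directly; note that on 64-bit Windows it typically returns 'AMD64', so it is worth confirming the exact string before writing the marker:

import platform

# Environment markers compare platform_machine against this value:
print(platform.machine())  # e.g. 'AMD64' on 64-bit Windows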
Python's Poetry dependency manager allows specifying optional dependencies via the command:
$ poetry add --optional redis
Which results in this configuration:
[tool.poetry.dependencies]
python = "^3.8"
redis = {version="^3.4.1", optional=true}
However, how do you actually install them? The docs seem to hint at:
$ poetry install -E redis
but that just throws an error:
Installing dependencies from lock file
[ValueError]
Extra [redis] is not specified.
You need to add a tool.poetry.extras group to your pyproject.toml if you want to use the -E flag during install, as described in this section of the docs:
[tool.poetry.extras]
caching = ["redis"]
The key refers to the word that you use with poetry install -E, and the value is a list of packages that were marked as --optional when they were added. There currently is no support for making optional packages part of a specific group during their addition, so you have to maintain this section in your pyproject.toml file by hand.
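With the example configuration above, the matching command is poetry install -E caching (the extra's name, not the package's), which pulls in redis.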
The reason behind this additional layer of abstraction is that extra-installs usually refer to some optional functionality (in this case caching) that is enabled through the installation of one or more dependencies (in this case just redis). poetry simply mimics setuptools' definition of extra-installs here, which might explain why it's so sparingly documented.
I will add that not only do you have to add this extras section by hand, but your optional dependencies also cannot live in the dev section.
Example of code that won't work:
[tool.poetry]
name = "yolo"
version = "1.0.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "2.7"
Django = "*"
[tool.poetry.dev-dependencies]
pytest = "*"
ipdb = {version = "*", optional = true}
[tool.poetry.extras]
dev_tools = ["ipdb"]
But this WILL work:
[tool.poetry]
name = "yolo"
version = "1.0.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "2.7"
Django = "*"
ipdb = {version = "*", optional = true}
[tool.poetry.dev-dependencies]
pytest = "*"
[tool.poetry.extras]
dev_tools = ["ipdb"]
Up-voted Drachenfels's answer.
A dev dependency cannot be optional; otherwise, no matter how you tweak it with extras or retry with poetry install -E, it will simply never get installed.
This sounds like a bug, but it is somehow by design:
...this is not something I want to add. Extras will be referenced in the distributions metadata when packaging the project but development dependencies do not which will lead to a broken extras.
— concluded in a Poetry PR #606 comment by one of the maintainers. See here for the detailed context: https://github.com/python-poetry/poetry/pull/606#issuecomment-437943927
I would say that I can accept the fact that an optional dev dependency cannot be implemented. However, Poetry should at least warn me when I have such a config. Had it done so, I wouldn't have been confused for so long, reading every corner of the manual and finding nothing helpful.
I found other people trapped by this problem (Is poetry ignoring extras or pyproject.toml is misconfigured?), but their questions were closed, marked as duplicates, and re-linked to this question. Thus I decided to answer here and give more details about the problem.
This is now possible (with Poetry version 1.2; perhaps even an earlier version), using the "extras" group:
poetry add redis --group=extras
It will appear in the section
[tool.poetry.group.extras.dependencies]
which is also the newer style (compared to [tool.poetry.extras] or [tool.poetry.extras.dependencies]).
See the documentation. Interestingly, it still follows the older style, [tool.poetry.extras], and doesn't show the use of poetry add, but the above result is what I get.
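For concreteness, the resulting section in pyproject.toml looks roughly like this (the version constraint below is illustrative; poetry add will pin whatever it resolves):

[tool.poetry.group.extras.dependencies]
redis = "^4.5"  # constraint illustrative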
I have implemented a kernel for my custom Op, and put it into /tensorflow/core/user_ops as custom_op.cc. Inside the Op I do all the registering stuff, like REGISTER_OP and REGISTER_KERNEL_BUILDER.
Then I implemented the gradient for this Op in Python and put it in the same folder as custom_op_grad.py. I did all the registering there as well (@ops.RegisterGradient).
I have created the BUILD file, with the following content:
load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")
tf_custom_op_library(
name = "custom_op.so",
srcs = ["custom_op.cc"],
)
py_library(
name = "custom_op_grad",
srcs = ["custom_op_grad.py"],
srcs_version = "PY2",
deps = [
":custom_op_grad",
"//tensorflow:tensorflow_py",
],
)
After that, I rebuilt TensorFlow:
pip uninstall tensorflow
bazel clean
bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
cp -r bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/__main__/* bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-0.8.0-py2-none-any.whl
When I try to use my Op after all this, by calling tf.user_ops.custom_op, it tells me that the module doesn't have it.
Maybe there are some additional steps I have to do? Or am I doing something wrong in the BUILD file?
OK, I found the solution. I just removed the BUILD file, and my custom Op was successfully built and importable in Python using tensorflow.user_ops.custom_op().
To use the gradient I had to put its code directly inside tensorflow/python/user_ops/user_ops.py. Not the most elegant solution, but it works for now.
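As an alternative (not part of the original answer), a compiled op library can also be loaded at runtime with tf.load_op_library, so the op does not have to live in the user_ops module at all; the path below is illustrative and assumes the tf_custom_op_library target from the question was built:

import tensorflow as tf

# Load the shared object produced by the tf_custom_op_library target
# (path illustrative):
custom_module = tf.load_op_library(
    "bazel-bin/tensorflow/core/user_ops/custom_op.so")

# An op registered with REGISTER_OP("CustomOp") is exposed under its
# snake_case name, e.g.:
# output = custom_module.custom_op(input_tensor)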
In a Python virtualenv on Windows, I've installed a custom package that overrides setuptools.command.easy_install.get_script_args to create a new custom type of entry point, 'custom_entry'.
I have another package that I want to prepare with setuptools exposing a custom entry point.
If I prepare an egg distribution of this package and install it with my modified easy_install.exe, this creates the custom entry points correctly.
However, if I prepare a wheel distribution and install it with a modified pip.exe, the custom entry points do not get added.
Why does pip not follow the same install procedure as easy_install?
Reading the source for pip, it seems that the function get_entrypoints in wheel.py excludes all entry points other than console_scripts and gui_scripts. Is this correct?
If so, how should I install custom entry points for pip installations?
Edit:
It looks like I should provide more details.
In my first package, custom-installer, I'm overriding (monkey-patching, really) easy_install.get_script_args, in custom_install.__init__.py:
from setuptools.command import easy_install

_GET_SCRIPT_ARGS = easy_install.get_script_args

def get_script_args(dist, executable, wininst):
    for script_arg in _GET_SCRIPT_ARGS(dist, executable, wininst):
        yield script_arg  # replicate existing behaviour, handles console_scripts and other entry points
    for group in ['custom_entry']:
        for name, _ in dist.get_entry_map(group).items():
            script_text = (
                ## some custom stuff
            )
            ## do something else
            ## yield some other stuff here, to create adjunct files to the -custom.py script
            yield (name + '-custom.py', script_text, 't')

easy_install.get_script_args = get_script_args
main = easy_install.main
And in that package's setup.py, I provide a (console_script) entry point for my custom installer:
entry_points={
    'console_scripts': [
        'custom_install = custom_install.__init__:main'
    ]
}
Installing this package with pip correctly creates the installer script /venv/Scripts/custom_install.exe
With my second package, customized, I have both regular and custom entry points to install from setup.py, for two modules custom and console.
entry_points={
    'console_scripts': [
        'console = console.__main__:main'
    ],
    'custom_entry': [
        'custom = custom.__main__:main'
    ]
}
I would like to see both of these entry points installed regardless of the install procedure.
If I build the package customized as an egg distribution and install this with custom_install.exe created by custom-installer, then both entry points of customized are installed.
I would like to be able to install this package as a wheel file using pip, but from reading the source code, pip seems to explicitly skip any entry points other than 'console_scripts' and 'gui_scripts':
def get_entrypoints(filename):
    if not os.path.exists(filename):
        return {}, {}

    # This is done because you can pass a string to entry_points wrappers which
    # means that they may or may not be valid INI files. The attempt here is to
    # strip leading and trailing whitespace in order to make them valid INI
    # files.
    with open(filename) as fp:
        data = StringIO()
        for line in fp:
            data.write(line.strip())
            data.write("\n")
        data.seek(0)

    cp = configparser.RawConfigParser()
    cp.readfp(data)

    console = {}
    gui = {}
    if cp.has_section('console_scripts'):
        console = dict(cp.items('console_scripts'))
    if cp.has_section('gui_scripts'):
        gui = dict(cp.items('gui_scripts'))
    return console, gui
Subsequently, pip generates entry point scripts using a completely different set of code from easy_install. Presumably, I could override pip's implementations of these, as done with easy_install, to create my custom entry points, but I feel like I'm going the wrong way.
Can anyone suggest a simpler way of implementing my custom entry points that is compatible with pip? If not, I can override get_entrypoints and move_wheel_files.
You will probably need to use the keyword console_scripts in your setup.py file. See the following answer:
entry_points does not create custom scripts with pip or easy_install in Python?
It basically states that you need to do the following in your setup.py script:
entry_points = {
    'console_scripts': ['custom_entry_point = mypackage.mymod.test:foo']
}
See also: http://calvinx.com/2012/09/09/python-packaging-define-an-entry-point-for-console-commands/
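A pip-compatible alternative (a sketch, not from the linked answer; only the group name custom_entry comes from the question, the rest is illustrative) is to install one ordinary console_scripts dispatcher and resolve the custom group at runtime with pkg_resources, rather than generating extra scripts at install time:

# dispatcher.py (illustrative): an ordinary console_scripts entry point
# that discovers and runs 'custom_entry' entry points at runtime.
import sys

import pkg_resources

def main():
    requested = sys.argv[1] if len(sys.argv) > 1 else None
    for entry_point in pkg_resources.iter_entry_points('custom_entry'):
        if requested is None or entry_point.name == requested:
            func = entry_point.load()  # import the advertised callable
            return func()
    print("no matching 'custom_entry' entry point found")
    return 1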
When installing my Python package, I want to be able to tell the user about various optional dependencies. Ideally I would also like to print out a message about these optional requirements and what each of them does.
I haven't seen anything yet in the docs of either pip or distutils. Do these tools support optional dependencies?
These are called extras; here is how to use them in your setup.py, setup.cfg, or pyproject.toml.
The base support is in pkg_resources. You need to enable distribute in your setup.py. pip will also understand them:
pip install 'package[extras]'
Yes, as stated by @Tobu and explained here. In your setup.py file you can add a line like this:
extras_require = {
    'full': ['matplotlib', 'tensorflow', 'numpy', 'tikzplotlib']
}
I have an example of this line here.
Now you can either install the basic/vanilla package via pip, like pip install package_name, or the package with all the optional dependencies, like pip install 'package_name[full]'.
Here package_name is the name of your package, and full works because we put "full" as the key in the extras_require dictionary; it depends on what name you chose.
If someone is interested in how to code a library that can work with or without an optional package, I recommend this answer.
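The usual runtime pattern (a minimal sketch; redis mirrors the example above, and get_cache_client is a hypothetical helper) is to attempt the import and degrade gracefully when the extra was not installed:

# Optional import: available only when the 'full' extra was installed.
try:
    import redis
    HAS_REDIS = True
except ImportError:
    redis = None
    HAS_REDIS = False

def get_cache_client():
    # Hypothetical helper: expose caching only when redis is importable.
    if not HAS_REDIS:
        raise RuntimeError("caching requires: pip install 'package_name[full]'")
    return redis.Redis()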
Since PEP 621, this information is better placed in pyproject.toml rather than setup.py. Here's the relevant specification from PEP 621, and here's an example snippet from a pyproject.toml (credit to @GwynBleidD):
[project.optional-dependencies]
test = [
    "pytest < 5.0.0",
    "pytest-cov[all]"
]
lint = [
    "black",
    "flake8"
]
ci = [
    "pytest < 5.0.0",
    "pytest-cov[all]",
    "black",
    "flake8"
]
A more complete example can be found in the PEP.
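With that layout, pip install 'package_name[test]' (or [lint], [ci]) installs the corresponding group of optional dependencies.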