I have implemented a kernel for my custom Op, and put it into /tensorflow/core/user_ops as custom_op.cc. Inside the Op I do all the registering stuff, like REGISTER_OP and REGISTER_KERNEL_BUILDER.
Then I implemented the gradient for this Op in Python and put it into the same folder as custom_op_grad.py. I did the registration there as well (@ops.RegisterGradient).
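The registration itself is roughly the following (a minimal sketch; the op name and the pass-through gradient body are placeholders):
from tensorflow.python.framework import ops

@ops.RegisterGradient("CustomOp")  # must match the name passed to REGISTER_OP
def _custom_op_grad(op, grad):
    # Placeholder gradient: simply pass the incoming gradient through.
    return [grad]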
I have created the BUILD file, with the following content:
load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")
tf_custom_op_library(
name = "custom_op.so",
srcs = ["custom_op.cc"],
)
py_library(
name = "custom_op_grad",
srcs = ["custom_op_grad.py"],
srcs_version = "PY2",
deps = [
":custom_op_grad",
"//tensorflow:tensorflow_py",
],
)
After that, I rebuilt TensorFlow:
pip uninstall tensorflow
bazel clean
bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
cp -r bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/__main__/* bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-0.8.0-py2-none-any.whl
When I try to use my Op after all this by calling tf.user_ops.custom_op, it tells me that the module doesn't have it.
Maybe there are some additional steps I have to do? Or am I doing something wrong with the BUILD file?
Ok, I found the solution. I just removed the BUILD file, and my custom Op was successfully built and was importable in Python using tensorflow.user_ops.custom_op().
To use the gradient I had to put its code directly inside tensorflow/python/user_ops/user_ops.py. Not the most elegant solution, but it works for now.
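For completeness, calling the op afterwards looks roughly like this (a sketch; the actual signature is whatever your REGISTER_OP declares):
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.user_ops.custom_op(x)  # hypothetical single-input signature

with tf.Session() as sess:
    print(sess.run(y))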
I am trying to use a private Python package in a model built with mlflow.pyfunc.PythonModel.
My conda.yaml looks like
channels:
  - defaults
dependencies:
  - python=3.10.4
  - pip
  - pip:
      - mlflow==2.1.1
      - pandas
      - --extra-index-url <private-pypa-repo-link>
      - <private-package>
name: model_env
python_env.yaml
python: 3.10.4
build_dependencies:
  - pip==23.0
  - setuptools==58.1.0
  - wheel==0.38.4
dependencies:
  - -r requirements.txt
requirements.txt
mlflow==2.1.1
pandas
--extra-index-url <private-pypa-repo-link>
<private-package>
When running the following
import mlflow
model_uri = '<run_id>'
# Load model as a PyFuncModel.
loaded_model = mlflow.pyfunc.load_model(model_uri)
# Predict on a Pandas DataFrame.
import pandas as pd
t = loaded_model.predict(pd.read_json("test.json"))
print(t)
The result is
WARNING mlflow.pyfunc: Encountered an unexpected error (InvalidRequirement('Parse error at "\'--extra-\'": Expected W:(0-9A-Za-z)')) while detecting model dependency mismatches. Set logging level to DEBUG to see the full traceback.
Adding in the following before loading the model makes it work:
dep = mlflow.pyfunc.get_model_dependencies(model_uri)
print(dep)
import subprocess
import sys
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", dep])
Is there a way to automatically install these dependencies rather than doing it explicitly? What are my options to get MLflow to install the private package?
Answering my own question here. Turns out the issue is that I was trying to use the keyring library which needs to be pre-installed and is not supported when doing inference in a virtual environment.
There are ways to get around it though.
Add the authentication token to the extra-index-url itself. You can find it documented in this Stack Overflow question.
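For example, the relevant requirements.txt entries would then look something like this (all the bracketed values are placeholders):
--extra-index-url https://<username>:<token>@<private-pypa-repo-link>
<private-package>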
MLflow allows you to log any dependencies with the model itself using the code_path argument (link). Using this method, you can skip adding your private package as a requirement. This question also touches on the same topic. The code would look a bit like this:
mlflow.pyfunc.save_model(
    path=dest_path,
    python_model=MyModel(),
    artifacts=_get_artifact_dict(t_dir),
    conda_env=conda_env,
    # Adding the current script file as a dependency
    code_path=[os.path.realpath(__file__)],  # add any other scripts here as well
)
Opt for the first approach if saving the authentication token in requirements.txt is feasible; otherwise use the second approach. The downside of the code_path solution is that your package's code gets replicated with each model.
I am using Kivy to create an Android app. I need to install the DeepSpeech framework; however, for DeepSpeech to be installed it is necessary to create a recipe.
I created a recipe and built the APK; there were no errors in the build, it created the APK and, as far as I could see in the folders, DeepSpeech was built. However, after I install the app on the phone and try to run it, it crashes and says there is no module named deepspeech.
Does anyone know what I am doing wrong? I've been stuck on this for a while now and can't seem to find the end of it :/
from pythonforandroid.recipe import PythonRecipe
from pythonforandroid.toolchain import current_directory, shprint
import sh
class deepspeechRecipe(PythonRecipe):
    version = 'v0.9.2'
    url = 'https://github.com/mozilla/DeepSpeech/archive/{version}.tar.gz'
    depends = ['numpy', 'setuptools']
    call_hostpython_via_targetpython = False
    site_packages_name = 'deepspeech'

    def build_arch(self, arch):
        env = self.get_recipe_env(arch)
        with current_directory(self.get_build_dir(arch.arch)):
            # Build python bindings
            hostpython = sh.Command(self.hostpython_location)
            shprint(hostpython,
                    'setup.py',
                    'build_ext', _env=env)
        # Install python bindings
        super().build_arch(arch)

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        numpy_recipe = self.get_recipe('numpy', self.ctx)
        env['CFLAGS'] += ' -I' + numpy_recipe.get_build_dir(arch.arch)
        # env['LDFLAGS'] += ' -L' + sqlite_recipe.get_lib_dir(arch)
        env['LIBS'] = env.get('LIBS', '') + ' -lnumpy'
        return env


recipe = deepspeechRecipe()
Buildozer: 1.4.0
requirements = python3==3.7.14, hostpython3==3.7.14, kivy, kivymd, sqlite3, numpy==1.14.5, deepspeech, apsw
If you need any extra information, I can add it.
I have already tried using TensorFlow to run the model; however, the model gives an array as output and I don't know the right procedure to transform that into text.
I have already tried other recipes (like opencv) and all work fine.
Edit:
I found out that when I use the recipe it does run and it does build properly, but only the deepspeech_training part, because the setup.py only installs that. To install other parts like the model class, it is necessary to use another setup.py located in "native_client/python", but that requires the rest of the folders, so I still need to figure that out.
Edit 2: I was able to build the packages that I wanted (the inference part of DeepSpeech); however, when I run it, it gives the following error.
python : ImportError: dlopen failed: library "libc++_shared.so" not found: needed by /data/user/0/org.test.myapp/files/app/_python_bundle/site-packages/deepspeech/_impl.so in namespace classloader-namespace
python : Python for android ended.
Add pillow to your requirements and check if it works!
requirements = python3==3.7.14, hostpython3==3.7.14, kivy, kivymd, sqlite3, numpy==1.14.5, deepspeech, apsw, pillow
I am trying to create a wheel package from Bazel using py_wheel. py_wheel has an option to provide the required Python dependencies via the requires param. However, I don't want to provide the list of dependencies manually. Is there a way that I can read my dependencies from the requirements.txt file and provide them as that list in Bazel?
py_wheel(
    name = "dummy",
    distribution = "dummy",
    python_tag = "py3",
    version = "latest",
    entry_points = {"console_scripts": ["dummy = dummy.app:main"]},
    requires = [?],
    deps = [":dummy-dependencies"],
)
You can load the requirements in this way:
load("@deps_1//:requirements.bzl", deps_1_requirement = "requirement")
The requires attribute is for the "List of requirements for this package". You can find it documented here.
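A minimal sketch of how the pieces fit together (assuming the @deps_1 repository was generated from your requirements.txt with rules_python, and that pandas is one of the pinned requirements; the targets otherwise mirror your example):
load("@deps_1//:requirements.bzl", deps_1_requirement = "requirement")
load("@rules_python//python:packaging.bzl", "py_wheel")

py_library(
    name = "dummy-dependencies",
    # requirement() maps a name from requirements.txt to its Bazel target
    deps = [deps_1_requirement("pandas")],
)

py_wheel(
    name = "dummy",
    distribution = "dummy",
    python_tag = "py3",
    version = "latest",
    # requires still takes plain requirement strings for the wheel metadata
    requires = ["pandas"],
    deps = [":dummy-dependencies"],
)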
I am trying to create a Python package (deb & rpm) from CMake, ideally using CPack. I did read:
https://cmake.org/cmake/help/latest/cpack_gen/rpm.html and,
https://cmake.org/cmake/help/latest/cpack_gen/deb.html
The installation works just fine (using component install) for my shared library. However, I cannot make sense of the documentation for installing the Python binding (glue) code. Using the standard CMake install mechanism, I tried:
install(
    FILES __init__.py library.py
    DESTINATION ${ACME_PYTHON_PACKAGE_DIR}/project_name
    COMPONENT python)
And then, using a brute-force approach, I ended up with:
# debian based package (relative path)
set(ACME_PYTHON_PACKAGE_DIR lib/python3/dist-packages)
and
# rpm based package (full path required)
set(ACME_PYTHON_PACKAGE_DIR /var/lang/lib/python3.8/site-packages)
The above is derived from:
debian % python -c 'import site; print(site.getsitepackages())'
['/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.9/dist-packages']
while:
rpm % python -c 'import site; print(site.getsitepackages())'
['/var/lang/lib/python3.8/site-packages']
It is pretty clear that the brute-force approach is not portable and is doomed to fail on the next release of Python. The only possible solution that I can think of is generating a temporary setup.py script (using setuptools) that will do the install. Typically, CMake would call the following process:
% python setup.py install --root ${ACME_PYTHON_INSTALL_ROOT}
My questions are:
Did I understand the CMake/CPack documentation correctly for Python packaging? If so, this means I need to generate an intermediate setup.py script.
I have been searching through the CMake/CPack codebase (git grep setuptools) but did not find helper functions to handle generation of setup.py and pass the resulting files back to CPack. Is there an existing CMake module which I could re-use?
I did read some alternative solutions, such as:
How to build debian package with CPack to execute setup.py?
which seems overly complex and geared toward Debian-only systems. I need to handle RPM in my case.
As mentioned in my other solution, the ugly part is dealing with absolute paths in CMake install() commands. I was able to refactor the code to avoid the use of absolute paths in install(). I simply changed the installation to:
install(
    # trailing slash is important:
    DIRECTORY ${SETUP_OUTPUT}/
    # the "." syntax is a reliable mechanism, see:
    # https://gitlab.kitware.com/cmake/cmake/-/issues/22616
    DESTINATION "."
    COMPONENT python)
And then one simply needs to:
set(CMAKE_INSTALL_PREFIX "/")
set(CPACK_PACKAGING_INSTALL_PREFIX "/")
include(CPack)
At this point all install paths need to include /usr explicitly, since we've cleared the value of CMAKE_INSTALL_PREFIX.
The above has been tested for deb and rpm packages. CPACK_BINARY_TGZ does properly run with the above solution:
https://gitlab.kitware.com/cmake/cmake/-/issues/22925
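For reference, producing the packages then goes through the usual CPack flow, roughly (a sketch; directory names and generators are placeholders):
cmake -S . -B build
cmake --build build
cd build && cpack -G DEB   # or: cpack -G RPM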
I am going to post the temporary solution I am using at the moment, until someone provides something more robust.
So I eventually managed to stumble upon:
https://alioth-lists.debian.net/pipermail/libkdtree-devel/2012-October/000366.html and,
Using CMake with setup.py
Re-using the above to do an install step instead of a build step can be done as follows:
find_package(Python COMPONENTS Interpreter)
set(SETUP_PY_IN "${CMAKE_CURRENT_SOURCE_DIR}/setup.py.in")
set(SETUP_PY "${CMAKE_CURRENT_BINARY_DIR}/setup.py")
set(SETUP_DEPS "${CMAKE_CURRENT_SOURCE_DIR}/project_name/__init__.py")
set(SETUP_OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/build-python")
configure_file(${SETUP_PY_IN} ${SETUP_PY})
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/setup_timestamp
    COMMAND ${Python_EXECUTABLE} ARGS ${SETUP_PY} install --root ${SETUP_OUTPUT}
    COMMAND ${CMAKE_COMMAND} -E touch ${CMAKE_CURRENT_BINARY_DIR}/setup_timestamp
    DEPENDS ${SETUP_DEPS})
add_custom_target(target ALL DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/setup_timestamp)
And then the ugly part is:
install(
    # trailing slash is important:
    DIRECTORY ${SETUP_OUTPUT}/
    DESTINATION "/"  # FIXME: may cause issues with other CPack generators
    COMPONENT python)
Turns out that the documentation for install() is pretty clear about absolute paths:
https://cmake.org/cmake/help/latest/command/install.html#introduction
DESTINATION
[...]
As absolute paths are not supported by cpack installer generators,
it is preferable to use relative paths throughout.
For reference, here is my setup.py.in:
from setuptools import setup

if __name__ == '__main__':
    setup(name='project_name_python',
          version='${PROJECT_VERSION}',
          package_dir={'': '${CMAKE_CURRENT_SOURCE_DIR}'},
          packages=['project_name'])
You can be fancy and remove the __pycache__ folder using the -B flag:
COMMAND ${Python_EXECUTABLE} ARGS -B ${SETUP_PY} install --root ${SETUP_OUTPUT}
You can be extra fancy and add a Debian-specific option such as:
if(CPACK_BINARY_DEB)
    set(EXTRA_ARG "--install-layout" "deb")
endif()
and use it as:
COMMAND ${Python_EXECUTABLE} ARGS -B ${SETUP_PY} install --root ${SETUP_OUTPUT} ${EXTRA_ARG}
I'm looking for the most elegant way to notify users of my library that they need a specific Unix command to ensure that it will work...
When is the best time for my lib to raise an error:
At installation?
When my app calls the command?
At import of my lib?
Both?
And also, how should I detect that the command is missing (something like: if not commands.getoutput("which CommandIDependsOn"): raise Exception("you need CommandIDependsOn"))?
I need advice.
IMO, the best way is to check at install time whether the user has this specific *nix command.
If you're using distutils to distribute your package, in order to install it you have to do:
python setup.py build
python setup.py install
or simply
python setup.py install (in that case python setup.py build is implicit)
To check if the *nix command is installed, you can subclass the build command in your setup.py like this:
from distutils.core import setup
from distutils.command.build import build as _build


class build(_build):
    description = "Custom Build Process"
    user_options = _build.user_options[:]
    # You can also define extra options like this:
    # user_options.extend([('opt=', None, 'Name of an optional option')])

    def initialize_options(self):
        # Initialize your extra options here... not needed in your case
        # self.opt = None
        _build.initialize_options(self)

    def finalize_options(self):
        # Finalize your options; you can modify their values here
        # if self.opt is None:
        #     self.opt = "default value"
        _build.finalize_options(self)

    def run(self):
        # Extra check:
        # enter your code here to verify that the *nix command is present
        # (see the sketch after this listing)
        ...
        # Start the "classic" build command
        _build.run(self)


setup(
    # ...
    # Don't forget to register your custom build command
    cmdclass={'build': build},
    # ...
)
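For the extra check inside run(), one possibility is distutils' own helper (a sketch; the command name is the placeholder from the question):
from distutils.spawn import find_executable

# Inside run(), before calling _build.run(self):
if find_executable("CommandIDependsOn") is None:
    raise SystemExit("error: 'CommandIDependsOn' is required but was not found on PATH")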
But what if the user uninstalls the required command after the package installation? To solve this problem, the only "good" solution is to use a packaging system such as deb or rpm and declare a dependency between the command and your package.
Hope this helps
I wouldn't have any check at all. Document that your library requires this command, and if the user tries to use whatever part of your library needs it, an exception will be raised by whatever runs the command. It should still be possible to import your library and use it, even if only a subset of functionality is offered.
(PS: commands is old and broken and shouldn't be used in new code. subprocess is the hot new stuff.)
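For example, a lazy check at call time could look like this (a sketch; the command name is a placeholder):
import subprocess

def run_tool(*args):
    try:
        return subprocess.check_output(["CommandIDependsOn"] + list(args))
    except OSError:
        raise RuntimeError("CommandIDependsOn is not installed; see the documentation")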