tensorflow: "trying to mutate a frozen object", bazel - python

macOS High Sierra, MBP 2016, in Terminal.
I'm following the directions here:
https://github.com/tensorflow/models/tree/master/research/syntaxnet
All options for ./configure were chosen as default (and all Python directories double-checked). All steps completed cleanly until this:
bazel test ...
# On Mac, run the following:
bazel test --linkopt=-headerpad_max_install_names \
dragnn/... syntaxnet/... util/utf8/...
I assume I'm supposed to run the latter command (bazel test --linkopt=... etc.), but interestingly, I get the same result either way.
This throws about 10 errors, each of the same type ("trying to mutate a frozen object"), and concludes with tests not run, an error loading package dragnn/protos, and "couldn't start the build".
This is the general form of the errors:
syntaxnet>> bazel test --linkopt=-headerpad_max_install_names dragnn/... syntaxnet/... util/utf8/...
ERROR: /Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/dragnn/protos/BUILD:35:1: Traceback (most recent call last):
  File "/Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/dragnn/protos/BUILD", line 35
    tf_proto_library_py(name = "data_py_pb2", srcs = ["dat..."])
  File "/Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/syntaxnet/syntaxnet.bzl", line 53, in tf_proto_library_py
    py_proto_library(name = name, srcs = srcs, srcs_versi...", <5 more arguments>)
  File "/private/var/tmp/_bazel_XXX/f74e5a21c3ad09aeb110d9de15110035/external/protobuf_archive/protobuf.bzl", line 374, in py_proto_library
    py_libs += [default_runtime]
trying to mutate a frozen object
ERROR: package contains errors: dragnn/protos
... [same error for various 'name = "...pb2"' files] ...
INFO: Elapsed time: 0.709s
FAILED: Build did NOT complete successfully (17 packages loaded)
ERROR: Couldn't start the build. Unable to run tests
Any idea what could be causing this? Thanks.

This error indicates a bug in the py_proto_library rule implementation.
tf_proto_library_py is defined in syntaxnet.bzl. It is a wrapper around py_proto_library, which is defined by the tf_workspace macro's protobuf_archive rule.
"protobuf_archive" downloads Protobuf 3.3.0, which contains //:protobuf.bzl with the buggy py_proto_library rule implementation: in line #374 it tries to mutate an immutable object py_libs.
Make sure you use the latest Bazel version, currently that's 0.8.1.
If the problem still persists, I suggest filing a bug with:
Protobuf, to fix the py_proto_library rule
TensorFlow, to update their Protobuf version in tf_workspace, and
Syntaxnet, to update their TF submodule reference in //research/syntaxnet to the bugfixed version.
As a workaround, perhaps you can patch protobuf.bzl.
The patch is to change these lines:
373   if default_runtime and not default_runtime in py_libs + deps:
374     py_libs += [default_runtime]
375
376   native.py_library(
377       name=name,
378       srcs=outs+py_extra_srcs,
379       deps=py_libs+deps,
380       imports=includes,
381       **kargs)
to these:
373   if default_runtime and not default_runtime in py_libs + deps:
374     py_libs2 = py_libs + [default_runtime]
375   else:
376     py_libs2 = py_libs
377
378   native.py_library(
379       name=name,
380       srcs=outs+py_extra_srcs,
381       deps=py_libs2+deps,
382       imports=includes,
383       **kargs)
Disclaimer: this is a "blind" fix; I have not tried whether it works.

I tried the same patch pattern for cc_libs:
if default_runtime and not default_runtime in cc_libs:
  cc_libs2 = cc_libs + [default_runtime]
else:
  cc_libs2 = cc_libs
if use_grpc_plugin:
  # Rebind here as well: cc_libs += [...] would hit the same frozen-object
  # error, and the grpc lib would otherwise never reach the deps below.
  cc_libs2 = cc_libs2 + ["//external:grpc_lib"]
native.cc_library(
    name=name,
    srcs=gen_srcs,
    hdrs=gen_hdrs,
    deps=cc_libs2 + deps,
    includes=includes,
    **kargs)
This shows a new error, but keeps compiling. (Ubuntu 16 on Windows Subsystem for Linux -- don't ask; native TensorFlow 1.4 win-x64 works, but not SyntaxNet.)
greg@FX11:/mnt/c/code/models/research/syntaxnet$ bazel test ...
ERROR: /home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/core/kernels/BUILD:451:1: in _transitive_hdrs rule @org_tensorflow//tensorflow/core/kernels:bounds_check_lib_gather:
Traceback (most recent call last):
  File "/home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/core/kernels/BUILD", line 451
    _transitive_hdrs(name = 'bounds_check_lib_gather')
  File "/home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/tensorflow.bzl", line 869, in _transitive_hdrs_impl
    set()
I just changed set() to depset() there, and that seems to have avoided the error.
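For reference, the shape of that change (a hedged sketch; the real function body in tensorflow.bzl is longer, and the accumulation line here is assumed):
def _transitive_hdrs_impl(ctx):
    # The legacy Starlark built-in set() was removed in newer Bazel releases;
    # depset() is its replacement.
    outputs = depset()  # was: outputs = set()
    for dep in ctx.attr.deps:
        outputs = depset(transitive=[outputs, dep.files])  # assumed accumulation logic
    return struct(files=outputs)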

To make a long story short: I was inspired by sstrasburg's comment.
First, uninstall the current version of bazel:
brew uninstall bazel
Download bazel 0.5.4 from here.
chmod +x bazel-0.5.4-without-jdk-installer-darwin-x86_64.sh
./bazel-0.5.4-without-jdk-installer-darwin-x86_64.sh
After that, run again:
bazel test --linkopt=-headerpad_max_install_names dragnn/... syntaxnet/... util/utf8/...
Finally, I got
Executed 57 out of 57 tests: 57 tests pass.

Related

function serving deployment failed

Here I'm attaching the actual error shown. I'm using MLRun with Docker, specifically MLRun 1.2.0.
--------------------------------------------------------------------------
RunError Traceback (most recent call last)
<ipython-input-20-aab97e08b914> in <module>
1 serving_fn.with_code(body=" ") # adds the serving wrapper, not required with MLRun >= 1.0.3
----> 2 project.deploy_function(serving_fn)
/opt/conda/lib/python3.8/site-packages/mlrun/projects/project.py in deploy_function(self, function, dashboard, models, env, tag, verbose, builder_env, mock)
2307 :param mock: deploy mock server vs a real Nuclio function (for local simulations)
2308 """
-> 2309 return deploy_function(
2310 function,
2311 dashboard=dashboard,
/opt/conda/lib/python3.8/site-packages/mlrun/projects/operations.py in deploy_function(function, dashboard, models, env, tag, verbose, builder_env, project_object, mock)
344 )
345
--> 346 address = function.deploy(
347 dashboard=dashboard, tag=tag, verbose=verbose, builder_env=builder_env
348 )
/opt/conda/lib/python3.8/site-packages/mlrun/runtimes/serving.py in deploy(self, dashboard, project, tag, verbose, auth_info, builder_env)
621 logger.info(f"deploy root function {self.metadata.name} ...")
622
--> 623 return super().deploy(
624 dashboard, project, tag, verbose, auth_info, builder_env=builder_env
625 )
/opt/conda/lib/python3.8/site-packages/mlrun/runtimes/function.py in deploy(self, dashboard, project, tag, verbose, auth_info, builder_env)
550 self.status = data["data"].get("status")
551 self._update_credentials_from_remote_build(data["data"])
--> 552 self._wait_for_function_deployment(db, verbose=verbose)
553
554 # NOTE: on older mlrun versions & nuclio versions, function are exposed via NodePort
/opt/conda/lib/python3.8/site-packages/mlrun/runtimes/function.py in _wait_for_function_deployment(self, db, verbose)
620 if state != "ready":
621 logger.error("Nuclio function failed to deploy", function_state=state)
--> 622 raise RunError(f"function {self.metadata.name} deployment failed")
623
624 @min_nuclio_versions("1.5.20", "1.6.10")
RunError: function serving deployment failed
I don't have any idea what the reason behind this error is; I'm a newbie here, so could someone please help me resolve it?
I see two steps to solve the issue:
1. Relevant installation
The MLRun Community Edition in Desktop Docker has to be installed under a relevant HOST_IP (not localhost or 127.0.0.1, but a stable IP address; see ipconfig) and with a relevant SHARED_DIR. See the relevant command line (from Windows):
set HOST_IP=192.168.0.150
set SHARED_DIR=c:\Apps\mlrun-data
set TAG=1.2.0
mkdir %SHARED_DIR%
docker-compose -f "c:\Apps\Desktop Docker Tools\compose.with-jupyter.yaml" up
BTW: for the YAML file, see https://docs.mlrun.org/en/latest/install/local-docker.html
2. Access to the port
If you call serving_fn.invoke, you have to open the relevant port (reported by deploy_function) on your IP address (based on the HOST_IP setting; see the first point).
Typically this port can be blocked by your firewall policy or your local antivirus, which means you have to open access to this port before the invoke call; see the sketch below.
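A minimal sketch of the deploy-then-invoke flow under that setup (the project, function, and model names here are placeholders, not taken from the question):
import mlrun

# Placeholders: substitute your own project and function names.
project = mlrun.get_or_create_project("my-project", context="./")
serving_fn = project.get_function("serving")

# Deploy must reach the "ready" state before invoke can work.
project.deploy_function(serving_fn)

# invoke goes through the address/port reported by deploy_function;
# that port must be reachable past your firewall/antivirus.
resp = serving_fn.invoke("/v2/models/my-model/infer", body={"inputs": [[1, 2, 3]]})
print(resp)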
BTW: you can see more focus on the issue at https://github.com/mlrun/mlrun/issues/2102

I have a problem creating a gym_super_mario_bros env and get KeyError: 'render_modes'

I'm trying to follow the "Build an Mario AI Model with Python | Gaming Reinforcement Learning" tutorial by Nicholas Renotte and can't move on because of an error.
Here is my code:
!pip install gym_super_mario_bros==7.3.0 nes_py
# Import the game
import gym_super_mario_bros
# Import the Joypad wrapper
from nes_py.wrappers import JoypadSpace
# Import the simplified controls
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
# Setup game
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = JoypadSpace(env, SIMPLE_MOVEMENT)
and this line of code: env = gym_super_mario_bros.make('SuperMarioBros-v0') causes the following error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_16900\3897944130.py in <module>
1 # Setup game
----> 2 env = gym_super_mario_bros.make('SuperMarioBros-v0')
3 env = JoypadSpace(env, SIMPLE_MOVEMENT)
D:\Anaconda\envs\gamesAi\lib\site-packages\gym\envs\registration.py in make(id, max_episode_steps, autoreset, new_step_api, disable_env_checker, **kwargs)
623 # If we have access to metadata we check that "render_mode" is valid
624 if hasattr(env_creator, "metadata"):
--> 625 render_modes = env_creator.metadata["render_modes"]
626
627 # We might be able to fall back to the HumanRendering wrapper if 'human' rendering is not supported natively
KeyError: 'render_modes'
I've already tried using Python 3.7 instead of 3.9 and reinstalling the packages.
Most of the custom envs are not yet ready for gym version 0.25. Changing the version to 0.24.1 should solve the issue.
Try:
pip install gym==0.24.1
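After downgrading, a quick sanity check (a sketch reusing the imports from the question):
import gym
import gym_super_mario_bros
from nes_py.wrappers import JoypadSpace
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT

print(gym.__version__)  # expect 0.24.1 after the downgrade
env = gym_super_mario_bros.make('SuperMarioBros-v0')  # should no longer raise KeyError
env = JoypadSpace(env, SIMPLE_MOVEMENT)
state = env.reset()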
Kind regards

python: problem with "from nipype.interfaces.ants import N4BiasFieldCorrection"

OSError: No command "N4BiasFieldCorrection" found on host pc. Please check that the corresponding package is installed.
I have a problem using nipype. Can you please help?
from nipype.interfaces.ants import N4BiasFieldCorrection
n4 = N4BiasFieldCorrection()
n4.inputs.dimension = 3
n4.inputs.input_image =('/home/abhayadev/Desktop/project/Dataset/BRATS2013_CHALLENGE/Challenge/HG/0301/VSD.Brain.XX.O.MR_Flair/VSD.Brain.XX.O.MR_Flair.17572.mha')
n4.inputs.n_iterations = [20, 20, 10, 5]
n4.run()
res=n4.run()
print(res)
Output:
OSError Traceback (most recent call last)
<ipython-input-10-49ae4ec58583> in <module>
5 n4.inputs.input_image =('/home/abhayadev/Desktop/project/Dataset/BRATS2013_CHALLENGE/Challenge/HG/0301/VSD.Brain.XX.O.MR_Flair/VSD.Brain.XX.O.MR_Flair.17572.mha')
6 n4.inputs.n_iterations = [20, 20, 10, 5]
----> 7 n4.run()
8 res=n4.run()
9 print(res)
~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/core.py in run(self, cwd, ignore_exception, **inputs)
374 try:
375 runtime = self._pre_run_hook(runtime)
--> 376 runtime = self._run_interface(runtime)
377 runtime = self._post_run_hook(runtime)
378 outputs = self.aggregate_outputs(runtime)
~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/core.py in _run_interface(self, runtime, correct_return_codes)
750 'No command "%s" found on host %s. Please check that the '
751 'corresponding package is installed.' % (executable_name,
--> 752 runtime.hostname))
753
754 runtime.command_path = cmd_path
OSError: No command "N4BiasFieldCorrection" found on host abhayadev. Please check that the corresponding package is installed.
You need to have the ANTs install directory in your PATH variable.
From the install guide:
Assuming your install prefix was /opt/ANTs, there will now be a binary directory /opt/ANTs/bin, containing the ANTs executables and scripts. The scripts additionally require ANTSPATH to point to the bin directory including a trailing slash.
For the bash shell (the default on Mac and some Linux distributions), you need to set:
export ANTSPATH=/opt/ANTs/bin/
export PATH=${ANTSPATH}:$PATH
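To verify from Python that nipype will now find the binary (a sketch; /opt/ANTs is the install-guide example prefix, adjust it to your install):
import os
import shutil

# Mirror the shell exports above inside the current Python session.
os.environ["ANTSPATH"] = "/opt/ANTs/bin/"
os.environ["PATH"] = os.environ["ANTSPATH"] + os.pathsep + os.environ["PATH"]

# nipype resolves the command through PATH, so this should print the full
# path to the executable, not None.
print(shutil.which("N4BiasFieldCorrection"))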
If you use nipype within neurodocker (which I would recommend), you have to run source activate neuro to have the ANTs commands available.

"command 'bet' could not be found on host" error while using BET in FSL on Windows 10 via Python 3.5

I need to perform brain extraction on .nii images.
I am using Anaconda on Windows 10 and have an environment based on Python 3.5.4.
In Nipype I found BET from FSL, and I followed this code:
from nipype.interfaces import fsl  # import needed for the snippet to run
mybet = fsl.BET()
mybet.inputs.in_file = 'example.nii'
mybet.inputs.out_file = 'example_bet.nii'
result = mybet.run()
Please note that I expect the output file example_bet.nii to be created by fsl.BET, not to be an existing image that gets overwritten.
I can only find solutions for Unix systems, and it seems one needs FSL installed on a Unix-based OS, which is not possible on Windows without a virtual machine.
Well, this is the output I get:
171122-12:02:48,988 interface WARNING:
FSLOUTPUTTYPE environment variable is not set. Setting FSLOUTPUTTYPE=NIFTI
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-12-5b900fbd5263> in <module>()
2 mybet.inputs.in_file = 'prova.nii'
3 mybet.inputs.out_file = 'prova_bet.nii'
----> 4 result = mybet.run()
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in run(self, **inputs)
1079 version=self.version)
1080 try:
-> 1081 runtime = self._run_wrapper(runtime)
1082 outputs = self.aggregate_outputs(runtime)
1083 runtime.endTime = dt.isoformat(dt.utcnow())
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in _run_wrapper(self, runtime)
1722
1723 def _run_wrapper(self, runtime):
-> 1724 runtime = self._run_interface(runtime)
1725 return runtime
1726
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\fsl\preprocess.py in _run_interface(self, runtime)
142 # in stderr and if it's set, then update the returncode
143 # accordingly.
--> 144 runtime = super(BET, self)._run_interface(runtime)
145 if runtime.stderr:
146 self.raise_exception(runtime)
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in _run_interface(self, runtime, correct_return_codes)
1748 if not exist_val:
1749 raise IOError("command '%s' could not be found on host %s" %
-> 1750 (self.cmd.split()[0], runtime.hostname))
1751 setattr(runtime, 'command_path', cmd_path)
1752 setattr(runtime, 'dependencies', get_dependencies(executable_name,
OSError: command 'bet' could not be found on host DESKTOP-MYPC
Interface BET failed to run.
Do I need to switch to Linux or is there a way around it?
You can only use FSL on Windows via Docker, a virtual machine, or the Windows Subsystem for Linux; running it natively is not possible.
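A quick way to see what nipype is complaining about (a sketch): the bet executable simply isn't resolvable on a plain Windows host, whereas inside WSL or a Docker image with FSL installed it is.
import shutil

# Prints None on a plain Windows host, which is exactly why nipype raises
# "command 'bet' could not be found on host ...".
print(shutil.which("bet"))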

unable to install graphlab after running the graphlab.get_dependencies() function

The code shows the following errors:
ACTION REQUIRED: Dependencies libstdc++-6.dll and libgcc_s_seh-1.dll not found.
Ensure user account has write permission to C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab
Run graphlab.get_dependencies() to download and install them.
Restart Python and import graphlab again.
By running the above function, you agree to the following licenses.
When I try to run get_dependencies() afterwards, it shows the following errors:
ContentTooShortError Traceback (most recent call last)
<ipython-input-4-9e64085fb919> in <module>()
----> 1 graphlab.get_dependencies()
C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab\dependencies.pyc in get_dependencies()
39
40 print('Downloading gcc-libs.')
---> 41 (dllarchive_file, dllheaders) = urllib.urlretrieve('http://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz')
42 dllarchive_dir = tempfile.mkdtemp()
43
C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\urllib.pyc in urlretrieve(url, filename, reporthook, data, context)
96 else:
97 opener = _urlopener
---> 98 return opener.retrieve(url, filename, reporthook, data)
99 def urlcleanup():
100 if _urlopener:
C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\urllib.pyc in retrieve(self, url, filename, reporthook, data)
287 if size >= 0 and read < size:
288 raise ContentTooShortError("retrieval incomplete: got only %i out "
--> 289 "of %i bytes" % (read, size), result)
290
291 return result
ContentTooShortError: retrieval incomplete: got only 105704 out of 546800 bytes
Well, I faced the same question an hour ago, and I have fixed it now.
For the 2 .dll files, you can search the internet to download them, then copy them to your directory: C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab.
In the IPython notebook, run import graphlab and then run graphlab.get_dependencies(). Wait a minute; the base package will download.
After these 2 steps, you may restart your computer, and you will find everything back to normal.
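If get_dependencies() keeps failing with ContentTooShortError, a hedged workaround is to fetch the archive yourself (the URL is taken from the traceback above; Python 2 here, matching the gl-env environment) and then extract the two DLLs from it manually:
import urllib2

# Stream the gcc-libs archive to disk in chunks; a truncated transfer is then
# easier to spot and retry than urlretrieve's one-shot download.
url = ('http://repo.msys2.org/mingw/x86_64/'
       'mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz')
resp = urllib2.urlopen(url)
with open('gcc-libs.pkg.tar.xz', 'wb') as f:
    while True:
        chunk = resp.read(64 * 1024)
        if not chunk:
            break
        f.write(chunk)
print('done: extract libstdc++-6.dll and libgcc_s_seh-1.dll from the archive')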
The error existed for me as well after following the above steps. What I realised is that these two dependencies need to be extracted into the "cython" folder inside the "graphlab" folder. So I copied that folder from a different installation that had worked for me previously, and voila, "import graphlab" was successful. In case anyone needs it, below is the link to a zip of my "cython" folder. Just replace the "cython" folder inside graphlab (the usual location is '/Anaconda2/envs/gl-env/Lib/site-packages/graphlab'). I hope it helps someone.
https://drive.google.com/open?id=0B1voSQs3jo7Jc2l6RTBzWGhYUUU
