Python problem: from nipype.interfaces.ants import N4BiasFieldCorrection

OSError: No command "N4BiasFieldCorrection" found on host pc. Please check that the corresponding package is installed.
I have a problem using nipype. Can you please help?
from nipype.interfaces.ants import N4BiasFieldCorrection
n4 = N4BiasFieldCorrection()
n4.inputs.dimension = 3
n4.inputs.input_image = '/home/abhayadev/Desktop/project/Dataset/BRATS2013_CHALLENGE/Challenge/HG/0301/VSD.Brain.XX.O.MR_Flair/VSD.Brain.XX.O.MR_Flair.17572.mha'
n4.inputs.n_iterations = [20, 20, 10, 5]
n4.run()
res=n4.run()
print(res)
Output:
OSError Traceback (most recent call last)
<ipython-input-10-49ae4ec58583> in <module>
5 n4.inputs.input_image = ('/home/abhayadev/Desktop/project/Dataset/BRATS2013_CHALLENGE/Challenge/HG/0301/VSD.Brain.XX.O.MR_Flair/VSD.Brain.XX.O.MR_Flair.17572.mha')
6 n4.inputs.n_iterations = [20, 20, 10, 5]
----> 7 n4.run()
8 res=n4.run()
9 print(res)
~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/core.py in run(self, cwd, ignore_exception, **inputs)
374 try:
375 runtime = self._pre_run_hook(runtime)
--> 376 runtime = self._run_interface(runtime)
377 runtime = self._post_run_hook(runtime)
378 outputs = self.aggregate_outputs(runtime)
~/anaconda3/lib/python3.7/site-packages/nipype/interfaces/base/core.py in _run_interface(self, runtime, correct_return_codes)
750 'No command "%s" found on host %s. Please check that the '
751 'corresponding package is installed.' % (executable_name,
--> 752 runtime.hostname))
753
754 runtime.command_path = cmd_path
OSError: No command "N4BiasFieldCorrection" found on host abhayadev. Please check that the corresponding package is installed.

You need to have the ANTs install directory in your PATH variable.
From the install guide:
Assuming your install prefix was /opt/ANTs, there will now be a binary directory /opt/ANTs/bin, containing the ANTs executables and scripts. The scripts additionally require ANTSPATH to point to the bin directory including a trailing slash.
For the bash shell (default on Mac and some Linux), you need to set
export ANTSPATH=/opt/ANTs/bin/
export PATH=${ANTSPATH}:$PATH
If you use nipype within neurodocker (which I would recommend), you have to run
source activate neuro
to have the ANTs commands available.
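If you prefer to set this from inside Python (for example at the top of the notebook) instead of in your shell profile, here is a minimal sketch. It assumes the /opt/ANTs prefix from the install guide quote above; adjust the path to wherever ANTs actually lives on your machine.
import os

# Assumed install location; change this to your actual ANTs bin directory.
ants_bin = "/opt/ANTs/bin/"

os.environ["ANTSPATH"] = ants_bin
os.environ["PATH"] = ants_bin + os.pathsep + os.environ.get("PATH", "")

from nipype.interfaces.ants import N4BiasFieldCorrection
n4 = N4BiasFieldCorrection()
# ... set the inputs as in the question; n4.run() should now find the executable.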

Related

ModuleNotFoundError: No java install detected. Please install java to use language-tool-python

I would like to check the number of issues in a given sentence.
My code is:
import language_tool_python
tl = language_tool_python.LanguageTool('en-US')
txt = "good mooorning sirr and medam my namee anderen i am from amerecia !"
m = tl.check(txt)
len(m)
Instead of returning the number, I am getting the error message shown below.
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-1c4c9134d6f4> in <module>
1 import language_tool_python
----> 2 tool = language_tool_python.LanguageTool('en-US')
3
4 text = "Your the best but their are allso good !"
5 matches = tool.check(text)
E:\Anaconda\lib\site-packages\language_tool_python\server.py in __init__(self, language, motherTongue, remote_server, newSpellings, new_spellings_persist)
43 self._update_remote_server_config(self._url)
44 elif not self._server_is_alive():
---> 45 self._start_server_on_free_port()
46 if language is None:
47 try:
E:\Anaconda\lib\site-packages\language_tool_python\server.py in _start_server_on_free_port(self)
212 self._url = 'http://{}:{}/v2/'.format(self._HOST, self._port)
213 try:
--> 214 self._start_local_server()
215 break
216 except ServerError:
E:\Anaconda\lib\site-packages\language_tool_python\server.py in _start_local_server(self)
222 def _start_local_server(self):
223 # Before starting local server, download language tool if needed.
--> 224 download_lt()
225 err = None
226 try:
E:\Anaconda\lib\site-packages\language_tool_python\download_lt.py in download_lt(update)
142 ]
143
--> 144 confirm_java_compatibility()
145 version = LATEST_VERSION
146 filename = FILENAME.format(version=version)
E:\Anaconda\lib\site-packages\language_tool_python\download_lt.py in confirm_java_compatibility()
73 # found because of a PATHEXT-related issue
74 # (https://bugs.python.org/issue2200).
---> 75 raise ModuleNotFoundError('No java install detected. Please install java to use language-tool-python.')
76
77 output = subprocess.check_output([java_path, '-version'],
ModuleNotFoundError: No java install detected. Please install java to use language-tool-python.
When I run the code, I get "no java install detected".
How can I solve this issue?
I think this is not an issue with the code itself; when I run the code you provided
import language_tool_python
tl = language_tool_python.LanguageTool('en-US')
txt = "good mooorning sirr and medam my namee anderen i am from amerecia !"
m = tl.check(txt)
len(m)
I get a number as the result, in this case:
OUT: 8
The documentation of language-tool-python says:
By default, language_tool_python will download a LanguageTool server .jar and run that in the background to detect grammar errors locally. However, LanguageTool also offers a Public HTTP Proofreading API that is supported as well. Follow the link for rate-limiting details. (Running locally won't have the same restrictions.)
So you will need Java (JRE and JDK). It is also written in the requirements of the library:
Prerequisites
Python 3.5+
LanguageTool (Java 8.0 or higher)
The installation process should take care of downloading LanguageTool (it may take a few minutes). Otherwise, you can manually download LanguageTool-stable.zip and unzip it into where the language_tool_python package resides.
Source:
https://pypi.org/project/language-tool-python/
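Before creating the LanguageTool instance, you can also check from Python whether a java executable is actually visible on your PATH. A minimal sketch using only the standard library:
import shutil
import subprocess

java = shutil.which("java")
if java is None:
    print("No 'java' executable found on PATH; install a JRE/JDK (version 8 or higher) first.")
else:
    # 'java -version' prints the version string to stderr.
    info = subprocess.run([java, "-version"], stderr=subprocess.PIPE, universal_newlines=True)
    print(info.stderr)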
Python 2.7 - JavaError when using grammar-check 1.3.1 library
I hope I could help.

tensorflow: "trying to mutate a frozen object", bazel

macOS High Sierra, MBP 2016, in terminal.
I'm following the directions here:
https://github.com/tensorflow/models/tree/master/research/syntaxnet
All options for ./configure were chosen as default (and all Python directories double-checked). All steps completed cleanly until this:
bazel test ...
# On Mac, run the following:
bazel test --linkopt=-headerpad_max_install_names \
dragnn/... syntaxnet/... util/utf8/...
I assume I'm supposed to run the latter line ("bazel test --linkopt" etc.), but interestingly I get the same result either way.
This throws about 10 errors, each of the type "trying to mutate a frozen object", and concludes with tests not run, an error loading package dragnn/protos, and a build that couldn't start.
This is the general form of the errors:
syntaxnet>> bazel test --linkopt=-headerpad_max_install_names dragnn/... syntaxnet/... util/utf8/...
.
ERROR: /Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/dragnn/protos/BUILD:35:1:
Traceback (most recent call last):
File "/Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/dragnn/protos/BUILD", line 35
tf_proto_library_py(name = "data_py_pb2", srcs = ["dat..."])
File "/Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/syntaxnet/syntaxnet.bzl", line 53, in tf_proto_library_py
py_proto_library(name = name, srcs = srcs, srcs_versi...", <5 more arguments>)
File "/private/var/tmp/_bazel_XXX/f74e5a21c3ad09aeb110d9de15110035/external/protobuf_archive/protobuf.bzl", line 374, in py_proto_library
py_libs += [default_runtime]
trying to mutate a frozen object
ERROR: package contains errors: dragnn/protos
... [same error for various 'name = "...pb2"' files] ...
INFO: Elapsed time: 0.709s
FAILED: Build did NOT complete successfully (17 packages loaded)
ERROR: Couldn't start the build. Unable to run tests
Any idea what could be doing this? Thanks.
This error indicates a bug in the py_proto_library rule implementation.
tf_proto_library_py is defined in syntaxnet.bzl. It is a wrapper around py_proto_library, which is defined by the tf_workspace macro's protobuf_archive rule.
"protobuf_archive" downloads Protobuf 3.3.0, which contains //:protobuf.bzl with the buggy py_proto_library rule implementation: in line #374 it tries to mutate an immutable object py_libs.
Make sure you use the latest Bazel version, currently that's 0.8.1.
If the problem still persists, then:
I suggest filing a bug with:
Protobuf, to fix the py_proto_library rule
TensorFlow, to update their Protobuf version in tf_workspace, and
Syntaxnet to update their TF submodule reference in //research/syntaxnet to the bugfixed version.
As a workaround, perhaps you can patch protobuf.bzl.
The patch is to change these lines:
373 if default_runtime and not default_runtime in py_libs + deps:
374 py_libs += [default_runtime]
375
376 native.py_library(
377 name=name,
378 srcs=outs+py_extra_srcs,
379 deps=py_libs+deps,
380 imports=includes,
381 **kargs)
to these:
373 if default_runtime and not default_runtime in py_libs + deps:
374 py_libs2 = py_libs + [default_runtime]
375 else:
376 py_libs2 = py_libs
377
378 native.py_library(
379 name=name,
380 srcs=outs+py_extra_srcs,
381 deps=py_libs2+deps,
382 imports=includes,
383 **kargs)
Disclaimer: this is a "blind" fix; I have not tried whether it works.
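For context on why the patch avoids the error: in Python, and in Bazel's Python-like Starlark, lst += [...] mutates the existing list object in place, while lst2 = lst + [...] builds a brand-new list and leaves the original alone; Starlark rejects the in-place form when the list is frozen. A plain-Python illustration of the difference:
a = [1, 2]
alias = a          # a second name bound to the same list object
a += [3]           # in-place mutation: the shared object changes
print(alias)       # [1, 2, 3]

b = [1, 2]
alias_b = b
b = b + [3]        # builds a brand-new list; the original object is untouched
print(alias_b)     # [1, 2]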
I tried the same pattern of patch for cc_libs.
if default_runtime and not default_runtime in cc_libs:
cc_libs2 = cc_libs + [default_runtime]
else:
cc_libs2 = cc_libs
if use_grpc_plugin:
cc_libs += ["//external:grpc_lib"]
native.cc_library(
name=name,
srcs=gen_srcs,
hdrs=gen_hdrs,
deps=cc_libs2 + deps,
includes=includes,
**kargs)
It shows a new error, but keeps compiling. (Ubuntu 16 on Windows Subsystem for Linux; don't ask. Native TensorFlow 1.4 win x64 works, but not SyntaxNet.)
greg@FX11:/mnt/c/code/models/research/syntaxnet$ bazel test ...
ERROR: /home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/core/kernels/BUILD:451:1: in _transitive_hdrs rule @org_tensorflow//tensorflow/core/kernels:bounds_check_lib_gather:
Traceback (most recent call last):
File "/home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/core/kernels/BUILD", line 451
_transitive_hdrs(name = 'bounds_check_lib_gather')
File "/home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/tensorflow.bzl", line 869, in _transitive_hdrs_impl
set()
Just changed set() to depset() and that seems to have avoided the error.
To make a long story short, I was inspired by sstrasburg's comment.
First, uninstall the fresh version of Bazel:
brew uninstall bazel
Download bazel 0.5.4 from here.
chmod +x bazel-0.5.4-without-jdk-installer-darwin-x86_64.sh
./bazel-0.5.4-without-jdk-installer-darwin-x86_64.sh
After that, run again:
bazel test --linkopt=-headerpad_max_install_names dragnn/... syntaxnet/... util/utf8/...
Finally, I got
Executed 57 out of 57 tests: 57 tests pass.

"command 'bet' could not be found on host" error while using BET in FSL on Windows 10 via Python 3.5

I need to perform brain extraction on .nii images.
I am using Anaconda on Windows 10 and have an environment based on Python 3.5.4.
In Nipype I found BET from FSL and I followed this code:
from nipype.interfaces import fsl
mybet = fsl.BET()
mybet.inputs.in_file = 'example.nii'
mybet.inputs.out_file = 'example_bet.nii'
result = mybet.run()
Please note that I expect the output file example_bet.nii to be created by fsl.BET; it is not an existing image to be overwritten.
I can only find solutions based on Unix systems and it seems one needs to have FSL installed on a Unix-based OS, which is not possible without a Virtual Machine in Windows.
Well, this is the output I get:
171122-12:02:48,988 interface WARNING:
FSLOUTPUTTYPE environment variable is not set. Setting FSLOUTPUTTYPE=NIFTI
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-12-5b900fbd5263> in <module>()
2 mybet.inputs.in_file = 'prova.nii'
3 mybet.inputs.out_file = 'prova_bet.nii'
----> 4 result = mybet.run()
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in run(self, **inputs)
1079 version=self.version)
1080 try:
-> 1081 runtime = self._run_wrapper(runtime)
1082 outputs = self.aggregate_outputs(runtime)
1083 runtime.endTime = dt.isoformat(dt.utcnow())
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in _run_wrapper(self, runtime)
1722
1723 def _run_wrapper(self, runtime):
-> 1724 runtime = self._run_interface(runtime)
1725 return runtime
1726
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\fsl\preprocess.py in _run_interface(self, runtime)
142 # in stderr and if it's set, then update the returncode
143 # accordingly.
--> 144 runtime = super(BET, self)._run_interface(runtime)
145 if runtime.stderr:
146 self.raise_exception(runtime)
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in _run_interface(self, runtime, correct_return_codes)
1748 if not exist_val:
1749 raise IOError("command '%s' could not be found on host %s" %
-> 1750 (self.cmd.split()[0], runtime.hostname))
1751 setattr(runtime, 'command_path', cmd_path)
1752 setattr(runtime, 'dependencies', get_dependencies(executable_name,
OSError: command 'bet' could not be found on host DESKTOP-MYPC
Interface BET failed to run.
Do I need to switch to Linux or is there a way around it?
You can only use FSL on Windows via Docker, a virtual machine, or Windows Subsystem for Linux. Running it natively is not possible.
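For example, if FSL is installed inside a WSL distribution, one possible route is to call it from Windows-side Python through wsl.exe. This is only a rough sketch under that assumption (the file names are taken from the question, and it presumes the current Windows directory is one WSL can see):
import subprocess

# Assumes FSL is installed in the default WSL distribution and that the
# current Windows working directory (containing example.nii) is visible to
# WSL, which maps local drives under /mnt/<drive> and starts there when
# wsl.exe is launched from that directory.
proc = subprocess.run(
    ["wsl", "bet", "example.nii", "example_bet.nii"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    universal_newlines=True,
)
print(proc.returncode)
print(proc.stderr)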

Unable to install graphlab after running the graphlab.get_dependencies() function

The code shows the following errors:
ACTION REQUIRED: Dependencies libstdc++-6.dll and libgcc_s_seh-1.dll not found.
Ensure user account has write permission to C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab
Run graphlab.get_dependencies() to download and install them.
Restart Python and import graphlab again.
By running the above function, you agree to the following licenses.
When I try to run get_dependencies() afterwards, it shows the errors below:
ContentTooShortError Traceback (most recent call last)
<ipython-input-4-9e64085fb919> in <module>()
----> 1 graphlab.get_dependencies()
C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab\dependencies.pyc in get_dependencies()
39
40 print('Downloading gcc-libs.')
---> 41 (dllarchive_file, dllheaders) = urllib.urlretrieve('http://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz')
42 dllarchive_dir = tempfile.mkdtemp()
43
C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\urllib.pyc in urlretrieve(url, filename, reporthook, data, context)
96 else:
97 opener = _urlopener
---> 98 return opener.retrieve(url, filename, reporthook, data)
99 def urlcleanup():
100 if _urlopener:
C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\urllib.pyc in retrieve(self, url, filename, reporthook, data)
287 if size >= 0 and read < size:
288 raise ContentTooShortError("retrieval incomplete: got only %i out "
--> 289 "of %i bytes" % (read, size), result)
290
291 return result
ContentTooShortError: retrieval incomplete: got only 105704 out of 546800 bytes
Well, I faced the same problem an hour ago, and I have fixed it now.
For the two .dll files, you can search the internet to download them, then copy them to your directory: C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab.
In the IPython notebook, run import graphlab and then graphlab.get_dependencies(). Wait a minute and the base package will download.
After these two steps, you may restart your computer; then you will find everything back to normal.
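If get_dependencies() keeps failing with ContentTooShortError, you can also do the download by hand. The sketch below is one possible way, not the official procedure: it uses the archive URL that appears in the traceback, needs a Python 3 interpreter outside gl-env (because .tar.xz extraction requires the lzma module), and uses the destination path from the question. Note that the answer below suggests the DLLs may instead belong in the cython subfolder.
import tarfile
import urllib.request

# Archive that graphlab.get_dependencies() was trying to fetch (URL taken from the traceback).
URL = ("http://repo.msys2.org/mingw/x86_64/"
       "mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz")
# Destination taken from the question; adjust it to your own environment.
DEST = r"C:\Users\dungeon_master\Anaconda3\envs\gl-env\lib\site-packages\graphlab"

archive, _ = urllib.request.urlretrieve(URL)
with tarfile.open(archive, "r:xz") as tar:
    for member in tar.getmembers():
        if member.name.endswith(("libstdc++-6.dll", "libgcc_s_seh-1.dll")):
            member.name = member.name.split("/")[-1]  # drop the archive's directory prefix
            tar.extract(member, DEST)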
The error existed for me as well after following the above steps. What I realised is that these two dependencies need to be extracted into the "cython" folder inside the "graphlab" folder. So I copied the same folder from a different installation that had previously worked for me, and voila, "import graphlab" was successful. In case anyone needs it, below is the link to a zip of my "cython" folder. Just replace the "cython" folder inside graphlab (the usual location is '/Anaconda2/envs/gl-env/Lib/site-packages/graphlab'). I hope it helps someone.
https://drive.google.com/open?id=0B1voSQs3jo7Jc2l6RTBzWGhYUUU

Why can't my Python code load *.SO files when running in an IPython Notebook on Linux?

I have some Python code which loads a DLL/SO via the ctypes.CDLL() function.
It runs fine when run under python/ipython, from the command prompt on all 3 OSes:
Windows 7 Pro
Mac OS X
Linux (Ubuntu 14.04)
It also runs fine in an IPython Notebook under Windows and Mac OS X.
However, it does NOT work in an IPython Notebook under Linux. It gives this error:
OSError: example_tx_x86_amd64.so: cannot open shared object file: No such file or directory
The funny thing is: the *.SO file doesn't show up in the directory listing shown in the Home tab of the browser (i.e., the one from which you select/launch your desired notebook file), either! (All other files in the directory appear there.)
This seems awfully suspicious to me. It's almost as if *.SO files are made invisible to the IPython Notebook Server, for security reasons, but only under Linux! I checked all the Notebook Server options available, but didn't see anything apropos.
Does anyone know what's going on?
Thanks!
Code and backtrace, as requested:
%matplotlib inline
from matplotlib import pyplot as plt
from numpy import array
from pyibisami import amimodel as ami
gTxDLLName = "example_tx_x86_amd64.so"
gBitRate = 10.e9
gSampsPerBit = 32
gNumBits = 100
bit_time = 1. / gBitRate
sample_interval = bit_time / gSampsPerBit
row_size = gNumBits * gSampsPerBit
channel_response = array([0.0, 1.0 / bit_time,])
channel_response.resize(row_size)
my_tx = ami.AMIModel(gTxDLLName)
tx_init = ami.AMIModelInitializer({'root_name' : "example_tx",
'tx_tap_nm1' : 10,
})
tx_init.bit_time = bit_time
tx_init.sample_interval = sample_interval
tx_init.channel_response = channel_response
my_tx.initialize(tx_init)
print "Message from model:"
print "\t", my_tx.msg
print "Parameter string from model:"
print "\t", my_tx.ami_params_out
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
in ()
15 channel_response.resize(row_size)
16
---> 17 my_tx = ami.AMIModel(gTxDLLName)
18 tx_init = ami.AMIModelInitializer({'root_name' : "example_tx",
19 'tx_tap_nm1' : 10,
/home/dbanas/prj/PyAMI/pyibisami/amimodel.pyc in __init__(self, filename)
214 " Load the dll and bind the 3 AMI functions."
215
--> 216 my_dll = CDLL(filename)
217 self._amiInit = my_dll.AMI_Init
218 try:
/home/dbanas/anaconda/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
363
364 if handle is None:
--> 365 self._handle = _dlopen(self._name, mode)
366 else:
367 self._handle = handle
OSError: example_tx_x86_amd64.so: cannot open shared object file: No such file or directory
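A quick diagnostic you can run in a notebook cell, to see whether the kernel's working directory actually contains the file. The filename is the one from the question; the absolute-path load at the end is just a comparison point, since on Linux dlopen() does not search the current directory when given a bare filename:
import os
import ctypes

so_name = "example_tx_x86_amd64.so"
print(os.getcwd())                # directory the notebook kernel is actually running in
print(os.path.exists(so_name))    # is the .so visible from that directory?
# On Linux, dlopen() ignores the current directory for bare names, so try an
# absolute path as a comparison:
lib = ctypes.CDLL(os.path.abspath(so_name))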
