Here is the actual error shown. I'm using MLRun with Docker, specifically MLRun 1.2.0.
--------------------------------------------------------------------------
RunError Traceback (most recent call last)
<ipython-input-20-aab97e08b914> in <module>
1 serving_fn.with_code(body=" ") # adds the serving wrapper, not required with MLRun >= 1.0.3
----> 2 project.deploy_function(serving_fn)
/opt/conda/lib/python3.8/site-packages/mlrun/projects/project.py in deploy_function(self, function, dashboard, models, env, tag, verbose, builder_env, mock)
2307 :param mock: deploy mock server vs a real Nuclio function (for local simulations)
2308 """
-> 2309 return deploy_function(
2310 function,
2311 dashboard=dashboard,
/opt/conda/lib/python3.8/site-packages/mlrun/projects/operations.py in deploy_function(function, dashboard, models, env, tag, verbose, builder_env, project_object, mock)
344 )
345
--> 346 address = function.deploy(
347 dashboard=dashboard, tag=tag, verbose=verbose, builder_env=builder_env
348 )
/opt/conda/lib/python3.8/site-packages/mlrun/runtimes/serving.py in deploy(self, dashboard, project, tag, verbose, auth_info, builder_env)
621 logger.info(f"deploy root function {self.metadata.name} ...")
622
--> 623 return super().deploy(
624 dashboard, project, tag, verbose, auth_info, builder_env=builder_env
625 )
/opt/conda/lib/python3.8/site-packages/mlrun/runtimes/function.py in deploy(self, dashboard, project, tag, verbose, auth_info, builder_env)
550 self.status = data["data"].get("status")
551 self._update_credentials_from_remote_build(data["data"])
--> 552 self._wait_for_function_deployment(db, verbose=verbose)
553
554 # NOTE: on older mlrun versions & nuclio versions, function are exposed via NodePort
/opt/conda/lib/python3.8/site-packages/mlrun/runtimes/function.py in _wait_for_function_deployment(self, db, verbose)
620 if state != "ready":
621 logger.error("Nuclio function failed to deploy", function_state=state)
--> 622 raise RunError(f"function {self.metadata.name} deployment failed")
623
624 #min_nuclio_versions("1.5.20", "1.6.10")
RunError: function serving deployment failed
I have no idea what the reason behind this error is, as I'm a newbie here. Could someone please help me resolve it?
I see two steps to solve the issue:
1. Relevant installation
The MLRun Community Edition in Docker Desktop has to be installed with the relevant HOST_IP (not localhost or 127.0.0.1, but a stable IP address; see ipconfig) and with the relevant SHARED_DIR. See the relevant command line (on Windows):
set HOST_IP=192.168.0.150
set SHARED_DIR=c:\Apps\mlrun-data
set TAG=1.2.0
mkdir %SHARED_DIR%
docker-compose -f "c:\Apps\Desktop Docker Tools\compose.with-jupyter.yaml" up
BTW: for the YAML file, see https://docs.mlrun.org/en/latest/install/local-docker.html
2. Access to the port
When calling serving_fn.invoke, you have to open the relevant port (reported by deploy_function) on your IP address (based on the HOST_IP setting; see the first point).
Typically this port can be blocked by your firewall policy or your local antivirus, which means you have to open access to this port before the invoke call.
BTW: see the related discussion at https://github.com/mlrun/mlrun/issues/2102
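For reference, a quick way to verify the port is reachable before calling invoke (a minimal sketch; the IP matches the HOST_IP example above, and port 32768 is hypothetical, so use the port printed by deploy_function):
import socket

# minimal reachability check; 32768 is a hypothetical port, use the one
# printed by deploy_function for your serving function
with socket.create_connection(("192.168.0.150", 32768), timeout=5):
    print("port is reachable")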
Related
I have been trying to install PySpark on Windows since yesterday, but I am constantly getting this error. It's been more than 48 hours; I have tried everything to resolve the problem and reinstalled PySpark from scratch numerous times, but I still could not get it to work.
Whenever I run:
spark = SparkSession.builder.getOrCreate()
I get this error:
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_20592/2335384691.py in <module>
1 # create a spark session
----> 2 spark = SparkSession.builder.getOrCreate()
c:\users\bhola\appdata\local\programs\python\python38\lib\site-packages\pyspark\sql\session.py in getOrCreate(self)
226 sparkConf.set(key, value)
227 # This SparkContext may be an existing one.
--> 228 sc = SparkContext.getOrCreate(sparkConf)
229 # Do not update `SparkConf` for existing `SparkContext`, as it's shared
230 # by all sessions.
c:\users\bhola\appdata\local\programs\python\python38\lib\site-packages\pyspark\context.py in getOrCreate(cls, conf)
390 with SparkContext._lock:
391 if SparkContext._active_spark_context is None:
--> 392 SparkContext(conf=conf or SparkConf())
393 return SparkContext._active_spark_context
394
c:\users\bhola\appdata\local\programs\python\python38\lib\site-packages\pyspark\context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
142 " is not allowed as it is a security risk.")
143
--> 144 SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
145 try:
146 self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
c:\users\bhola\appdata\local\programs\python\python38\lib\site-packages\pyspark\context.py in _ensure_initialized(cls, instance, gateway, conf)
337 with SparkContext._lock:
338 if not SparkContext._gateway:
--> 339 SparkContext._gateway = gateway or launch_gateway(conf)
340 SparkContext._jvm = SparkContext._gateway.jvm
341
c:\users\bhola\appdata\local\programs\python\python38\lib\site-packages\pyspark\java_gateway.py in launch_gateway(conf, popen_kwargs)
106
107 if not os.path.isfile(conn_info_file):
--> 108 raise RuntimeError("Java gateway process exited before sending its port number")
109
110 with open(conn_info_file, "rb") as info:
RuntimeError: Java gateway process exited before sending its port number
I tried the solution given in this Stack Overflow post and in this second Stack Overflow post.
export PYSPARK_SUBMIT_ARGS="--master local[2] pyspark-shell"
On my Windows system I used variable name = PYSPARK_SUBMIT_ARGS and variable value = "--master local[2] pyspark-shell".
But it's not working.
Other system variables that were set on my machine during installation are:
SPARK_HOME = D:\spark\spark-3.2.0-bin-hadoop3.2
HADOOP_HOME = D:\spark\spark-3.2.0-bin-hadoop3.2
Path = D:\spark\spark-3.2.0-bin-hadoop3.2\bin
PYSPARK_DRIVER_PYTHON = jupyter
PYSPARK_DRIVER_PYTHON_OPTS = jupyter
JAVA_HOME = C:\Program Files\Java\jdk1.8.0_301
Can anyone help me with this?
Did you download the winutils.exe from https://github.com/kontext-tech/winutils? You'll need to put that in \Hadoop\bin and add paths, etc.
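A rough sketch of that setup, done in-process before creating the session (the D:\hadoop path is hypothetical; it stands for whatever folder contains bin\winutils.exe):
import os

# hypothetical layout: D:\hadoop\bin\winutils.exe
os.environ["HADOOP_HOME"] = r"D:\hadoop"
os.environ["PATH"] += os.pathsep + r"D:\hadoop\bin"

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()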
macOS High Sierra, MBP 2016, in the terminal.
I'm following the directions here:
https://github.com/tensorflow/models/tree/master/research/syntaxnet
All options for ./configure were left as default (and all Python directories double-checked). All steps completed cleanly until this:
bazel test ...
# On Mac, run the following:
bazel test --linkopt=-headerpad_max_install_names \
dragnn/... syntaxnet/... util/utf8/...
I assume I'm supposed to run the latter command ("bazel test --linkopt" etc.), but interestingly I get the same result either way.
This throws about 10 errors, each of the same type ("trying to mutate a frozen object"), and concludes with tests not run, an error loading package dragnn/protos, and a failure to start the build.
This is the general form of the errors:
syntaxnet>> bazel test --linkopt=-headerpad_max_install_names \
    dragnn/... syntaxnet/... util/utf8/...

ERROR: /Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/dragnn/protos/BUILD:35:1:
Traceback (most recent call last):
  File "/Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/dragnn/protos/BUILD", line 35
    tf_proto_library_py(name = "data_py_pb2", srcs = ["dat..."])
  File "/Users/XXX/Desktop/NLP/syntaxnet/models/research/syntaxnet/syntaxnet/syntaxnet.bzl", line 53, in tf_proto_library_py
    py_proto_library(name = name, srcs = srcs, srcs_versi...", <5 more arguments>)
  File "/private/var/tmp/_bazel_XXX/f74e5a21c3ad09aeb110d9de15110035/external/protobuf_archive/protobuf.bzl", line 374, in py_proto_library
    py_libs += [default_runtime]
trying to mutate a frozen object
ERROR: package contains errors: dragnn/protos
... [same error for various 'name = "...pb2"' files] ...
INFO: Elapsed time: 0.709s
FAILED: Build did NOT complete successfully (17 packages loaded)
ERROR: Couldn't start the build. Unable to run tests
Any idea what could be causing this? Thanks.
This error indicates a bug in the py_proto_library rule implementation.
tf_proto_library_py is defined in syntaxnet.bzl. It is a wrapper around py_proto_library, which is defined by the tf_workspace macro's protobuf_archive rule.
"protobuf_archive" downloads Protobuf 3.3.0, which contains //:protobuf.bzl with the buggy py_proto_library rule implementation: in line #374 it tries to mutate an immutable object py_libs.
Make sure you use the latest Bazel version, currently that's 0.8.1.
If the problem still persists, then:
I suggest filing a bug with:
Protobuf, to fix the py_proto_library rule
TensorFlow, to update their Protobuf version in tf_workspace, and
Syntaxnet to update their TF submodule reference in //research/syntaxnet to the bugfixed version.
As a workaround, perhaps you can patch protobuf.bzl.
The patch is to change these lines:
373 if default_runtime and not default_runtime in py_libs + deps:
374 py_libs += [default_runtime]
375
376 native.py_library(
377 name=name,
378 srcs=outs+py_extra_srcs,
379 deps=py_libs+deps,
380 imports=includes,
381 **kargs)
to these:
373 if default_runtime and not default_runtime in py_libs + deps:
374 py_libs2 = py_libs + [default_runtime]
375 else:
376 py_libs2 = py_libs
377
378 native.py_library(
379 name=name,
380 srcs=outs+py_extra_srcs,
381 deps=py_libs2+deps,
382 imports=includes,
383 **kargs)
Disclaimer: this is a "blind" fix; I have not tried whether it works.
I tried the same pattern of patch for cc_libs:
if default_runtime and not default_runtime in cc_libs:
    cc_libs2 = cc_libs + [default_runtime]
else:
    cc_libs2 = cc_libs
if use_grpc_plugin:
    # build a new list here too, to avoid mutating the frozen cc_libs
    cc_libs2 = cc_libs2 + ["//external:grpc_lib"]
native.cc_library(
    name=name,
    srcs=gen_srcs,
    hdrs=gen_hdrs,
    deps=cc_libs2 + deps,
    includes=includes,
    **kargs)
This shows a new error, but keeps compiling. (Ubuntu 16 on Windows Subsystem for Linux; don't ask. Native TensorFlow 1.4 win-x64 works, but SyntaxNet does not.)
greg@FX11:/mnt/c/code/models/research/syntaxnet$ bazel test ...
ERROR: /home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/core/kernels/BUILD:451:1: in _transitive_hdrs rule @org_tensorflow//tensorflow/core/kernels:bounds_check_lib_gather:
Traceback (most recent call last):
File "/home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/core/kernels/BUILD", line 451
_transitive_hdrs(name = 'bounds_check_lib_gather')
File "/home/greg/.cache/bazel/_bazel_greg/adb8eb0eab8b9680449366fbebe59ec2/external/org_tensorflow/tensorflow/tensorflow.bzl", line 869, in _transitive_hdrs_impl
set()
Just changed set() to depset() and that seems to have avoided the error.
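For reference, the edit follows the same change-these-lines pattern as the protobuf.bzl patch above. The traceback only shows the set() call on line 869 of tensorflow.bzl, so the variable name below is hypothetical:
# in tensorflow.bzl, _transitive_hdrs_impl: replace the removed Bazel builtin
# set() with depset(); 'result' is a hypothetical name for whatever is assigned
result = set()     # before
result = depset()  # after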
To make a long story short, I was inspired by sstrasburg's comment.
First, uninstall the fresh version of Bazel:
brew uninstall bazel
Download bazel 0.5.4 from here.
chmod +x bazel-0.5.4-without-jdk-installer-darwin-x86_64.sh
./bazel-0.5.4-without-jdk-installer-darwin-x86_64.sh
After that, run again:
bazel test --linkopt=-headerpad_max_install_names dragnn/... syntaxnet/... util/utf8/...
Finally, I got
Executed 57 out of 57 tests: 57 tests pass.
I need to perform brain extraction on .nii images.
I am using Anaconda on Windows 10 and have an environment based on Python 3.5.4.
In Nipype I found BET from FSL, and I followed this code:
mybet = fsl.BET()
mybet.inputs.in_file = 'example.nii'
mybet.inputs.out_file = 'example_bet.nii'
result = mybet.run()
Please note that I expect the output file example_bet.nii to be created by fsl.BET; it is not an existing image to be overwritten.
I can only find solutions for Unix systems, and it seems one needs FSL installed on a Unix-based OS, which is not possible on Windows without a virtual machine.
Well, this is the output I get:
171122-12:02:48,988 interface WARNING:
FSLOUTPUTTYPE environment variable is not set. Setting FSLOUTPUTTYPE=NIFTI
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-12-5b900fbd5263> in <module>()
2 mybet.inputs.in_file = 'prova.nii'
3 mybet.inputs.out_file = 'prova_bet.nii'
----> 4 result = mybet.run()
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in run(self, **inputs)
1079 version=self.version)
1080 try:
-> 1081 runtime = self._run_wrapper(runtime)
1082 outputs = self.aggregate_outputs(runtime)
1083 runtime.endTime = dt.isoformat(dt.utcnow())
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in _run_wrapper(self, runtime)
1722
1723 def _run_wrapper(self, runtime):
-> 1724 runtime = self._run_interface(runtime)
1725 return runtime
1726
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\fsl\preprocess.py in _run_interface(self, runtime)
142 # in stderr and if it's set, then update the returncode
143 # accordingly.
--> 144 runtime = super(BET, self)._run_interface(runtime)
145 if runtime.stderr:
146 self.raise_exception(runtime)
~\Anaconda3\envs\tensorflow\lib\site-packages\nipype\interfaces\base.py in _run_interface(self, runtime, correct_return_codes)
1748 if not exist_val:
1749 raise IOError("command '%s' could not be found on host %s" %
-> 1750 (self.cmd.split()[0], runtime.hostname))
1751 setattr(runtime, 'command_path', cmd_path)
1752 setattr(runtime, 'dependencies', get_dependencies(executable_name,
OSError: command 'bet' could not be found on host DESKTOP-MYPC
Interface BET failed to run.
Do I need to switch to Linux or is there a way around it?
You can only use FSL on Windows via Docker, a virtual machine, or the Windows Subsystem for Linux. Running it natively is not possible.
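If WSL is an option, a minimal sketch is to call bet inside WSL from Windows Python (this assumes FSL is already installed in the default WSL distribution and that the image paths are visible from Linux):
import subprocess

# run FSL's bet inside WSL; assumes FSL is installed there and the .nii file
# lives somewhere WSL can see (e.g. under /mnt/c/...)
subprocess.run(["wsl", "bet", "example.nii", "example_bet"], check=True)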
I am using NLTK (installed, and it works fine in IDLE) for a project in Python (2.7.4) that runs perfectly fine in IDLE, but when the same code runs under XAMPP or WAMP (cgi-bin), everything related to nltk fails. This is the error shown after adding these lines:
import cgitb
cgitb.enable()
The error lines are marked by '=>' and details are enclosed within ** and **. I've tried printing sys.path after importing os, which shows the site-packages directory in which nltk resides. I even copied this folder into C:\Python27\, but I still get the same errors. Things other than nltk work as desired.
<type 'exceptions.ValueError'> Python 2.7.4: C:\python27\python.exe
Wed May 14 16:13:34 2014
A problem occurred in a Python script. Here is the sequence of function calls
leading up to the error, in the order they occurred.
C:\xampp\cgi-bin\Major\project.py in ()
=> 73 import nltk
**nltk undefined**
C:\python27\nltk\__init__.py in ()
159 import cluster; from cluster import *
160
=> 161 from downloader import download, download_shell
162 try:
163 import Tkinter
**downloader undefined, download undefined, download_shell undefined**
C:\python27\nltk\downloader.py in ()
2199
2200 # Aliases
=> 2201 _downloader = Downloader()
2202 download = _downloader.download
2203 def download_shell(): DownloaderShell(_downloader).run()
**_downloader undefined, Downloader = None**
C:\python27\nltk\downloader.py in __init__(self=<nltk.downloader.Downloader object>, server_index_url=None, download_dir=None)
425 # decide where we're going to save things to.
426 if self._download_dir is None:
=> 427 self._download_dir = self.default_download_dir()
428
429 #/////////////////////////////////////////////////////////////////
**self = <nltk.downloader.Downloader object>, self._download_dir = None,
self.default_download_dir = <bound method Downloader.default_download_dir of
<nltk.downloader.Downloader object>>**
C:\python27\nltk\downloader.py in default_download_dir(self=<nltk.downloader.Downloader object>)
926 homedir = os.path.expanduser('~/')
927 if homedir == '~/':
=> 928 raise ValueError("Could not find a default download directory")
929
930 # append "nltk_data" to the home directory
**builtin ValueError = <type 'exceptions.ValueError'>**
<type 'exceptions.ValueError'>: Could not find a default download directory
args = ('Could not find a default download directory',)
message = 'Could not find a default download directory'
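Based on the traceback, default_download_dir gives up because os.path.expanduser('~/') cannot resolve a home directory for the CGI process. A speculative workaround (not from the thread; the directory is hypothetical) is to give the process a home before importing nltk:
import os

# CGI processes under Apache often have no HOME/USERPROFILE set, so
# expanduser('~/') stays '~/'; point HOME at any writable directory
os.environ.setdefault("HOME", r"C:\nltk_home")  # hypothetical path

import nltk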
I'm trying to talk to supervisor over XML-RPC. Based on supervisorctl (especially this line), I have the following, which seems like it should work, and indeed it works insofar as it connects enough to receive an error from the server:
#socketpath is the full path to the socket, which exists
# None and None are the default username and password in the supervisorctl options
In [12]: proxy = xmlrpclib.ServerProxy('http://127.0.0.1', transport=supervisor.xmlrpc.SupervisorTransport(None, None, serverurl='unix://'+socketpath))
In [13]: proxy.supervisor.getState()
Resulting in this error:
---------------------------------------------------------------------------
ProtocolError Traceback (most recent call last)
/home/marcintustin/webapps/django/oneclickcosvirt/oneclickcos/<ipython-input-13-646258924bc2> in <module>()
----> 1 proxy.supervisor.getState()
/usr/local/lib/python2.7/xmlrpclib.pyc in __call__(self, *args)
1222 return _Method(self.__send, "%s.%s" % (self.__name, name))
1223 def __call__(self, *args):
-> 1224 return self.__send(self.__name, args)
1225
1226 ##
/usr/local/lib/python2.7/xmlrpclib.pyc in __request(self, methodname, params)
1576 self.__handler,
1577 request,
-> 1578 verbose=self.__verbose
1579 )
1580
/home/marcintustin/webapps/django/oneclickcosvirt/lib/python2.7/site-packages/supervisor/xmlrpc.pyc in request(self, host, handler, request_body, verbose)
469 r.status,
470 r.reason,
--> 471 '' )
472 data = r.read()
473 p, u = self.getparser()
ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>
This is the unix_http_server section of supervisord.conf:
[unix_http_server]
file=/home/marcintustin/webapps/django/oneclickcosvirt/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))
So, there should be no authentication problems.
It seems like my code is in all material respects identical to the equivalent code from supervisorctl, but supervisorctl actually works. What am I doing wrong?
Your code looks substantially correct. I'm running Supervisor 3.0 with Python 2.7, and given the following:
import supervisor.xmlrpc
import xmlrpclib
p = xmlrpclib.ServerProxy('http://127.0.0.1',
transport=supervisor.xmlrpc.SupervisorTransport(
None, None,
'unix:///home/lars/lib/supervisor/tmp/supervisor.sock'))
print p.supervisor.getState()
I get:
{'statename': 'RUNNING', 'statecode': 1}
Are you certain that your running Supervisor instance is using the configuration file you think it is? If you run supervisord in debug mode, do you see the connection?
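One more thing worth ruling out, since the server answered 401 Unauthorized: if the running instance actually does have a username and password set in [unix_http_server], pass those credentials instead of None (a sketch; 'user' and '123' are placeholders that must match your supervisord.conf):
import supervisor.xmlrpc
import xmlrpclib

# 'user' and '123' are placeholders; use the username/password from the
# [unix_http_server] section of the config the daemon actually reads
p = xmlrpclib.ServerProxy(
    'http://127.0.0.1',
    transport=supervisor.xmlrpc.SupervisorTransport(
        'user', '123',
        serverurl='unix:///home/marcintustin/webapps/django/oneclickcosvirt/tmp/supervisor.sock'))
print p.supervisor.getState()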
I don't use ServerProxy from xmlrpclib; I use the Server class instead, and I don't have to define any transports or paths to sockets. I'm not sure whether your purposes require that, but here's a thin client I use fairly frequently. It's pretty much straight out of the docs.
python -c "import xmlrpclib;\
supervisor_client = xmlrpclib.Server('http://localhost:9001/RPC2');\
print( supervisor_client.supervisor.stopProcess(<some_proc_name>) )"
I faced the same issue; the problem was simple: supervisord was not running!
First:
supervisord
And then:
supervisorctl start all
Done! :)
If you've set nodaemon to true, you must keep the process running in another tab of your terminal.
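For reference, that flag lives in the [supervisord] section of supervisord.conf:
[supervisord]
nodaemon=true ; stay in the foreground; keep this terminal tab open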