PySpark RuntimeError: Set changed size during iteration - python

I'm running a PySpark script and encountered the error below. It says "RuntimeError: Set changed size during iteration" and appears to be triggered by my code "if len(rdd.take(1)) > 0:". I'm not sure whether that's the real cause and wonder what exactly went wrong. Any help will be greatly appreciated.
Thanks!
17/03/23 21:54:17 INFO DStreamGraph: Updated checkpoint data for time 1490320070000 ms
17/03/23 21:54:17 INFO JobScheduler: Finished job streaming job 1490320072000 ms.0 from job set of time 1490320072000 ms
17/03/23 21:54:17 INFO JobScheduler: Starting job streaming job 1490320072000 ms.1 from job set of time 1490320072000 ms
17/03/23 21:54:17 ERROR JobScheduler: Error running job streaming job 1490320072000 ms.0
org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/streaming/util.py",
line 65, in call
r = self.func(t, *rdds)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/streaming/dstream.py",
line 159, in <lambda>
func = lambda t, rdd: old_func(rdd)
File "/home/richard/Documents/spark_code/with_kafka/./mongo_kafka_spark_script.py",
line 96, in _compute_glb_max
if len(rdd.take(1)) > 0:
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1343, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/context.py", line 965, in runJob
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2439, in _jrdd
self._jrdd_deserializer, profiler)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2372, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2363, in _prepare_for_python_RDD
broadcast_vars = [x._jbroadcast for x in sc._pickled_broadcast_vars]
RuntimeError: Set changed size during iteration
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:95)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:254)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:254)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:254)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:253)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Traceback (most recent call last):
File "/home/richard/Documents/spark_code/with_kafka/./mongo_kafka_spark_script.py",
line 224, in <module>
ssc.awaitTermination();
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/streaming/context.py",
line 206, in awaitTermination
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py",
line 1133, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63,
in deco
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line
319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o38.awaitTermination.
: org.apache.spark.SparkException: An exception was raised by Python:
Traceback (most recent call last):
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/streaming/util.py",
line 65, in call
r = self.func(t, *rdds)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/streaming/dstream.py",
line 159, in <lambda>
func = lambda t, rdd: old_func(rdd)
File "/home/richard/Documents/spark_code/with_kafka/./mongo_kafka_spark_script.py",
line 96, in _compute_glb_max
if len(rdd.take(1)) > 0:
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1343, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/context.py", line 965, in runJob
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2439, in _jrdd
self._jrdd_deserializer, profiler)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2372, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2363, in _prepare_for_python_RDD
broadcast_vars = [x._jbroadcast for x in sc._pickled_broadcast_vars]
RuntimeError: Set changed size during iteration
at org.apache.spark.streaming.api.python.TransformFunction.callPythonTransformFunction(PythonDStream.scala:95)
at org.apache.spark.streaming.api.python.TransformFunction.apply(PythonDStream.scala:78)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.api.python.PythonDStream$$anonfun$callForeachRDD$1.apply(PythonDStream.scala:179)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:254)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:254)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:254)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:253)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Creating broadcast variables across streaming iterations does not seem to be best practice. Use updateStateByKey whenever possible when stateful data is required.
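For illustration, here is a minimal sketch of keeping a running maximum as streaming state with updateStateByKey instead of re-creating broadcast variables every batch. The names ssc and kafka_stream, and the assumption that the message payload is numeric, are hypothetical and not taken from the original script:

def update_glb_max(new_values, current_max):
    # new_values: values seen for this key in the current batch
    # current_max: previously stored state (None on the first batch)
    candidates = list(new_values)
    if current_max is not None:
        candidates.append(current_max)
    return max(candidates) if candidates else current_max

ssc.checkpoint("hdfs:///tmp/checkpoints")  # updateStateByKey requires checkpointing
pairs = kafka_stream.map(lambda msg: ("glb_max", float(msg[1])))
glb_max_stream = pairs.updateStateByKey(update_glb_max)
glb_max_stream.pprint()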

Try
if rdd.count() < 1:
instead. take() can raise exceptions, but if more details were available we could have pinpointed the error.
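For reference, a minimal sketch of how that check might look inside the foreachRDD callback. The function name _compute_glb_max is taken from the traceback; the body is assumed, not the asker's actual code:

def _compute_glb_max(rdd):
    # rdd.count() (or rdd.isEmpty(), which avoids scanning every partition)
    # sidesteps the take(1)-based emptiness test that failed above.
    if rdd.count() < 1:
        return
    first_record = rdd.take(1)[0]
    # ... process the batch ...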

Related

Papermill Run Errors in GitHub Actions with AttributeError

I am running my Python notebook as part of my GitHub Actions CI and it worked for quite some time, but today it stopped working, complaining about this error:
Input Notebook: 03_Pcap.ipynb
Output Notebook: /tmp/ipynb/03_Pcap.ipynb
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/ipython_genutils/ipstruct.py", line 132, in __getattr__
result = self[key]
KeyError: 'language'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.2/x64/bin/papermill", line 8, in <module>
sys.exit(papermill())
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/cli.py", line 242, in papermill
execute_notebook(
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/execute.py", line 81, in execute_notebook
nb = parameterize_notebook(nb, parameters, report_mode)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/parameterize.py", line 75, in parameterize_notebook
language = nb.metadata.kernelspec.language
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/ipython_genutils/ipstruct.py", line 134, in __getattr__
raise AttributeError(key)
AttributeError: language
I haven't changed anything in between:
https://github.com/DSPJ2021/syncmesh/compare/e97b1ced4ae90e27c7eb653dc27602edd9be81fa...1bdd32aefa621249a6d6223854d2c41f494da2d1
Still it just stopped working: Last Working Action / First Failing Action
I already compared the Workers version, both are 2.286.1
I compared the installed python version, both are 3.10.2
I compared the installed dependencies, but most are the same. (WORKING / FAILING)
papermill is the same version (2.3.3)
papermill-nb-runner is the same (1.1.16)
ipykernel is the same (6.7.0)
nbformat is the same (5.1.3)
nbconvert is the same (6.4.1)
I tracked down the language error and resolved it (Commit) (Action Run), but ran straight into the next one:
Input Notebook: 03_Pcap.ipynb
Output Notebook: /tmp/ipynb/03_Pcap.ipynb
Input notebook does not contain a cell with tag 'parameters'
Executing: 0%| | 0/27 [00:00<?, ?cell/s]Notebook JSON is invalid: Additional properties are not allowed ('id' was unexpected)
Failed validating 'additionalProperties' in code_cell:
On instance['cells'][0]:
{'cell_type': 'code',
'execution_count': None,
'id': '4e48dff5',
'metadata': {'papermill': {'duration': None,
'end_time': None,
'exception': None,
'start_time': None,
'status': 'pending'},
'tags': ['injected-parameters']},
'outputs': ['...0 outputs...'],
'source': '# Parameters\n'
'( = ["\'", "c", "i", "\'", ",", " ", "\'", "t", "r", "...'}
Executing: 0%| | 0/27 [00:00<?, ?cell/s]
Notebook JSON is invalid: Additional properties are not allowed ('id' was unexpected)
Failed validating 'additionalProperties' in code_cell:
On instance['cells'][0]:
{'cell_type': 'code',
'execution_count': None,
'id': '4e48dff5',
'metadata': {'papermill': {'duration': None,
'end_time': None,
'exception': None,
'start_time': None,
'status': 'completed'},
'tags': ['injected-parameters']},
'outputs': ['...0 outputs...'],
'source': '# Parameters\n'
'( = ["\'", "c", "i", "\'", ",", " ", "\'", "t", "r", "...'}
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.2/x64/bin/papermill", line 8, in <module>
sys.exit(papermill())
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/cli.py", line 242, in papermill
execute_notebook(
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/execute.py", line 91, in execute_notebook
nb = papermill_engines.execute_notebook_with_engine(
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/engines.py", line 49, in execute_notebook_with_engine
return self.get_engine(engine_name).execute_notebook(nb, kernel_name, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/engines.py", line 310, in execute_notebook
nb = cls.execute_managed_notebook(nb_man, kernel_name, log_output=log_output, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/engines.py", line 372, in execute_managed_notebook
preprocessor.preprocess(nb_man, safe_kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/preprocess.py", line 20, in preprocess
with self.setup_preprocessor(nb_man.nb, resources, km=km):
AttributeError: 'PapermillExecutePreprocessor' object has no attribute 'setup_preprocessor'
Error: Process completed with exit code 1.
Looked up the error and found:
this issue. But I am already using a more recent version than in the issue (nbformat>=5.1.0)
another issue. But I also use a more recent version of nbconvert>= 5.5
this unresolved issue
this also looks related
After updating my .ipynb nbformat version (from 4.2 to 4.5), I am down to only this error:
Input Notebook: 03_Pcap.ipynb
Output Notebook: /tmp/ipynb/03_Pcap.ipynb
Input notebook does not contain a cell with tag 'parameters'
Executing: 0%| | 0/27 [00:00<?, ?cell/s]
Executing: 0%| | 0/27 [00:00<?, ?cell/s]
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.2/x64/bin/papermill", line 8, in <module>
sys.exit(papermill())
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/cli.py", line 242, in papermill
execute_notebook(
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/execute.py", line 91, in execute_notebook
nb = papermill_engines.execute_notebook_with_engine(
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/engines.py", line 49, in execute_notebook_with_engine
return self.get_engine(engine_name).execute_notebook(nb, kernel_name, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/engines.py", line 310, in execute_notebook
nb = cls.execute_managed_notebook(nb_man, kernel_name, log_output=log_output, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/engines.py", line 372, in execute_managed_notebook
preprocessor.preprocess(nb_man, safe_kwargs)
File "/opt/hostedtoolcache/Python/3.10.2/x64/lib/python3.10/site-packages/papermill/preprocess.py", line 20, in preprocess
with self.setup_preprocessor(nb_man.nb, resources, km=km):
AttributeError: 'PapermillExecutePreprocessor' object has no attribute 'setup_preprocessor'
Does anybody know what is going wrong and how to make it work again?
BTW: It works fine on my machine.
Complex problem, easy solution:
My requirements.txt referenced an older package named papermill-nb-runner. When installing it (and papermill), pip somehow got confused and installed the old version of papermill while reporting only the newest version.
The solution was to remove papermill-nb-runner.
It only came to light because of a bug in pip: https://github.com/pypa/pip/issues/10861
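As a side note, one generic way to confirm which papermill the interpreter actually imports versus what the installed metadata claims (a sanity check of my own, not part of the original fix):

import papermill
from importlib.metadata import version  # Python 3.8+

# The module the interpreter actually imports...
print("imported from:", papermill.__file__)
print("module version:", papermill.__version__)
# ...versus what the installed distribution metadata reports.
print("metadata version:", version("papermill"))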

pymongo: Resolver configuration could not be read or specified no nameservers

It's my first time using MongoDB, but I can't seem to fix this one issue. My friend who uses MongoDB doesn't know Python, so he can't really help me.
Here's my code:
import pymongo

# Replace the uri string with your MongoDB deployment's connection string.
conn_str = "mongodb+srv://sqdnoises:{mypass}#sqd.d4kjb.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"

# set a 5-second connection timeout
client = pymongo.MongoClient(conn_str, serverSelectionTimeoutMS=5000)

try:
    print(client.server_info())
    print('\n\n\n aka connected')
except Exception:
    print("Unable to connect to the server.")
Where {mypass} is my MongoDB password.
I keep getting this error:
Traceback (most recent call last):
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/dns/resolver.py", line 782, in read_resolv_conf
f = stack.enter_context(open(f))
FileNotFoundError: [Errno 2] No such file or directory: '/etc/resolv.conf'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/srv_resolver.py", line 88, in _resolve_uri
results = _resolve('_' + self.__srv + '._tcp.' + self.__fqdn,
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/srv_resolver.py", line 41, in _resolve
return resolver.resolve(*args, **kwargs)
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/dns/resolver.py", line 1305, in resolve
return get_default_resolver().resolve(qname, rdtype, rdclass, tcp, source,
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/dns/resolver.py", line 1278, in get_default_resolver
reset_default_resolver()
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/dns/resolver.py", line 1290, in reset_default_resolver
default_resolver = Resolver()
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/dns/resolver.py", line 734, in __init__
self.read_resolv_conf(filename)
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/dns/resolver.py", line 785, in read_resolv_conf
raise NoResolverConfiguration
dns.resolver.NoResolverConfiguration: Resolver configuration could not be read or specified no nameservers.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/storage/emulated/0/! workspace/mongolearn/main.py", line 7, in <module>
client = pymongo.MongoClient(conn_str, serverSelectionTimeoutMS=5000)
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/mongo_client.py", line 677, in __init__
res = uri_parser.parse_uri(
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/uri_parser.py", line 532, in parse_uri
nodes = dns_resolver.get_hosts()
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/srv_resolver.py", line 119, in get_hosts
_, nodes = self._get_srv_response_and_hosts(True)
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/srv_resolver.py", line 99, in _get_srv_response_and_hosts
results = self._resolve_uri(encapsulate_errors)
File "/data/data/com.termux/files/usr/lib/python3.9/site-packages/pymongo/srv_resolver.py", line 95, in _resolve_uri
raise ConfigurationError(str(exc))
pymongo.errors.ConfigurationError: Resolver configuration could not be read or specified no nameservers.
How do I fix this?
I am following https://docs.mongodb.com/drivers/pymongo/
Indeed, the problem is that dnspython tries to open /etc/resolv.conf, which does not exist on Termux. Configure the resolver manually instead:
import dns.resolver

dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
dns.resolver.default_resolver.nameservers = ['8.8.8.8']
Just add this code to the top of your main script, and that should be sufficient to get you past this hurdle.
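Put together with the connection code from the question, a minimal sketch (conn_str stands for the same mongodb+srv connection string as above; the resolver override must run before MongoClient is created, since the SRV lookup happens while the URI is parsed):

import dns.resolver
import pymongo

# Point dnspython at a public DNS server instead of the missing /etc/resolv.conf.
dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
dns.resolver.default_resolver.nameservers = ['8.8.8.8']

conn_str = "mongodb+srv://..."  # the connection string from the question
client = pymongo.MongoClient(conn_str, serverSelectionTimeoutMS=5000)
print(client.server_info())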

Pysyft Federated learning, Error with Websockets

I am trying to run a federated learning example from PySyft (https://github.com/OpenMined/PySyft/blob/dev/examples/tutorials/advanced/websockets-example-MNIST-parallel/Asynchronous-federated-learning-on-MNIST.ipynb) that creates remote workers and connects to them via websockets. However, I am getting an error in the following evaluation step.
future: <Task finished coro=<WebsocketServerWorker._producer_handler() done, defined at C:\Users\Public\Anaconda\lib\site-packages\syft\workers\websocket_server.py:95> exception=AttributeError("'dict' object has no attribute 'owner'")>
Traceback (most recent call last):
File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 663, in register_response
register_response_function = register_response_functions[attr_id]
KeyError: 'evaluate'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\websocket_server.py", line 113, in _producer_handler
response = self._recv_msg(message)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\websocket_server.py", line 124, in _recv_msg
return self.recv_msg(message)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\base.py", line 310, in recv_msg
response = self._message_router[type(msg)](msg.contents)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\base.py", line 457, in execute_command
command_name, response, list(return_ids), self
File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 672, in register_response
new_response = register_response_function(response, response_ids=response_ids, owner=owner)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 766, in <lambda>
return lambda x, **kwargs: f(lambdas, x, **kwargs)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 522, in two_fold
return lambdas[0](args[0], **kwargs), lambdas[1](args[1], **kwargs)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 744, in <lambda>
else lambda i, **kwargs: register_tensor(i, **kwargs)
File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 712, in register_tensor
tensor.owner = owner
AttributeError: 'dict' object has no attribute 'owner'
There is no clear answer on their forum. Does anyone have any clue as to what the issue is in this script?
My syft version:
syft : 0.2.3a1
syft-proto : 0.1.1a1.post12
torch : 1.4.0
I came across this problem as well and pushed a fix in https://github.com/OpenMined/PySyft/pull/2948

OSError: [Errno 2] No such file or directory: '/home/parallels/chromium/_gclient_src_kz4Qr8'

When fetching the Chromium source code following the steps described in
https://chromium.googlesource.com/chromium/src/+/master/docs/android_build_instructions.md
I was stuck at the fourth step:
fetch --nohooks android
The exception log:
[0:35:17] Receiving objects: 51% (5417602/10525454), 1.24 GiB | 1.13 MiB/s
[0:35:18] Receiving objects: 51% (5458536/10525454), 1.24 GiB | 1.75 MiB/s
[0:35:19] Receiving objects: 51% (5463338/10525454), 1.24 GiB | 2.19 MiB/s
[0:35:30] Receiving objects: 51% (5471915/10525454), 1.24 GiB | 2.35 MiB/s
error: index-pack died of signal 90525454), 1.24 GiB | 202.00 KiB/s
[0:35:30] error: index-pack died of signal 9
fatal: index-pack failed
[0:35:30] fatal: index-pack failed
Traceback (most recent call last):
File "/home/parallels/depot/depot_tools/gclient_scm.py", line 906, in _Clone
print_stdout=print_stdout, stdout=stdout)
File "/home/parallels/depot/depot_tools/gclient_scm.py", line 1210, in _Run
gclient_utils.CheckCallAndFilterAndHeader(cmd, env=env, **kwargs)
File "/home/parallels/depot/depot_tools/gclient_utils.py", line 314, in CheckCallAndFilterAndHeader
return CheckCallAndFilter(args, **kwargs)
File "/home/parallels/depot/depot_tools/gclient_utils.py", line 576, in CheckCallAndFilter
rv, args, kwargs.get('cwd', None), None, None)
CalledProcessError: Command 'git -c core.deltaBaseCacheLimit=2g clone --no-checkout --progress https://chromium.googlesource.com/chromium/src.git /home/parallels/chromium/_gclient_src_kz4Qr8' returned non-zero exit status 128 in /home/parallels/chromium
Traceback (most recent call last):
File "/home/parallels/depot/depot_tools/gclient.py", line 2960, in <module>
sys.exit(main(sys.argv[1:]))
File "/home/parallels/depot/depot_tools/gclient.py", line 2946, in main
return dispatcher.execute(OptionParser(), argv)
File "/home/parallels/depot/depot_tools/subcommand.py", line 252, in execute
return command(parser, args[1:])
File "/home/parallels/depot/depot_tools/gclient.py", line 2692, in CMDsync
ret = client.RunOnDeps('update', args)
File "/home/parallels/depot/depot_tools/gclient.py", line 1635, in RunOnDeps
work_queue.flush(revision_overrides, command, args, options=self._options)
File "/home/parallels/depot/depot_tools/gclient_utils.py", line 1075, in run
self.item.run(*self.args, **self.kwargs)
File "/home/parallels/depot/depot_tools/gclient.py", line 977, in run
file_list)
File "/home/parallels/depot/depot_tools/gclient_scm.py", line 130, in RunCommand
return getattr(self, command)(options, args, file_list)
File "/home/parallels/depot/depot_tools/gclient_scm.py", line 419, in update
self._Clone(revision, url, options)
File "/home/parallels/depot/depot_tools/gclient_scm.py", line 914, in _Clone
if os.listdir(tmp_dir):
OSError: [Errno 2] No such file or directory: '/home/parallels/chromium/_gclient_src_kz4Qr8'
After searching for relevant information on the internet, I can't find anything helpful.
I hope someone can help identify the real problem when fetching the Chromium source code.
Thanks very much!
Yes, the reason is the same as others have said: the system memory is too small ("index-pack died of signal 9" indicates the process was killed, typically by the out-of-memory killer). After increasing the virtual machine's memory to 8 GB, the problem was fixed.

Simple *IDN? query resulted in "Timeout expired before operation completed"

I tried to do a simple query to my lab instrument:
>>> import visa
>>> rm = visa.ResourceManager()
>>> viavi = rm.open_resource("TCPIP0::10.0.2.76::5001::SOCKET")
>>> print(viavi.query("*IDN?"))
The result was:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python35\lib\site-packages\pyvisa\resources\messagebased.py", line 407, in query
return self.read()
File "C:\Program Files\Python35\lib\site-packages\pyvisa\resources\messagebased.py", line 332, in read
message = self.read_raw().decode(enco)
File "C:\Program Files\Python35\lib\site-packages\pyvisa\resources\messagebased.py", line 306, in read_raw
chunk, status = self.visalib.read(self.session, size)
File "C:\Program Files\Python35\lib\site-packages\pyvisa\ctwrapper\functions.py", line 1582, in read
ret = library.viRead(session, buffer, count, byref(return_count))
File "C:\Program Files\Python35\lib\site-packages\pyvisa\ctwrapper\highlevel.py", line 188, in _return_handler
raise errors.VisaIOError(ret_value)
pyvisa.errors.VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
According to what I have learned so far (from others' experience), this timeout error is somehow related to line termination ("\n"). How can I solve this problem?
I found out that it was all related to the read_termination. My lab instrument simply terminated its responses with '\n', while my script was looking for '\r' all that time.
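For reference, a minimal sketch of setting the termination characters when opening the socket resource (the resource string is copied from the question; '\n' is assumed to be what the instrument actually sends):

import visa  # newer PyVISA releases use `import pyvisa`

rm = visa.ResourceManager()
viavi = rm.open_resource("TCPIP0::10.0.2.76::5001::SOCKET")
# Raw socket resources do not know where a response ends, so tell PyVISA
# which character terminates reads and which to append to writes.
viavi.read_termination = '\n'
viavi.write_termination = '\n'
print(viavi.query("*IDN?"))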
