Error when running this command: sudo ./run-analysis.py COOJA.testlog

I'm using Contiki-NG and I want to visualize the results of my simulation "simTCP.csc", but I get an error when I run this command: sudo ./run-analysis.py COOJA.testlog.
The error is:
Traceback (most recent call last):
File "./run-analysis.py", line 389, in <module>
main()
File "./run-analysis.py", line 374, in main
results, ll_par, ll_queue_dropped, e2e_pdr = analyze_results(input_file, is_testbed)
File "./run-analysis.py", line 169, in analyze_results
fields2 = fields1[2].split()
IndexError: list index out of range
Can anyone help me, please?
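For context: the failing line in analyze_results() indexes the third field of a split log line, so the IndexError usually means COOJA.testlog contains lines that don't match the format the script expects (or the simulation wrote no output, or only partial output). Below is a defensive sketch of that parsing step; the tab delimiter is an assumption, so match whatever run-analysis.py actually splits on.

# Sketch: skip log lines that don't have the expected number of fields
# instead of indexing into them blindly.
with open("COOJA.testlog") as f:
    for line in f:
        fields1 = line.strip().split("\t")  # delimiter is an assumption
        if len(fields1) < 3:                # malformed or unrelated line
            continue
        fields2 = fields1[2].split()
        # ... rest of the per-line analysis ...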


Problems getting 'otree devserver' to run

I have Python code to conduct an experiment. To ensure that the code works smoothly, I wanted to run a trial round via 'otree devserver', but I always get an AttributeError and I am stuck. Can anyone help me out?
I downloaded the code from GitHub and installed all required packages. I am using macOS, otree-5.10.2 and django-4.1.7. Here is my full traceback / error message:
Traceback (most recent call last):
File "/Users/.otreevenv/bin/otree", line 8, in <module>
sys.exit(execute_from_command_line())
File "/Users/.otreevenv/lib/python3.10/site-packages/otree/main.py", line 108, in execute_from_command_line
setup()
File "/Users/.otreevenv/lib/python3.10/site-packages/otree/main.py", line 132, in setup
from otree import settings
File "/Users/.otreevenv/lib/python3.10/site-packages/otree/settings.py", line 50, in <module>
OTREE_APPS = get_OTREE_APPS(settings.SESSION_CONFIGS)
File "/Users/.otreevenv/lib/python3.10/site-packages/django/conf/__init__.py", line 94, in __getattr__
val = getattr(_wrapped, name)
File "/Users/.otreevenv/lib/python3.10/site-packages/django/conf/__init__.py", line 270, in __getattr__
return getattr(self.default_settings, name)
AttributeError: module 'django.conf.global_settings' has no attribute 'SESSION_CONFIGS'. Did you mean: 'SESSION_ENGINE'?
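The last two frames show the attribute lookup falling through to Django's global defaults, which means SESSION_CONFIGS was not found in the settings module oTree loaded, typically because the project's settings.py does not define it, or because the command is not run from the project root where settings.py lives. A minimal sketch of what an oTree settings.py is expected to define (the app and session names here are illustrative, not from the question):

# settings.py at the project root
SESSION_CONFIGS = [
    dict(
        name='my_experiment',          # illustrative name
        display_name='My Experiment',
        app_sequence=['my_app'],       # illustrative app
        num_demo_participants=1,
    ),
]
SESSION_CONFIG_DEFAULTS = dict(
    real_world_currency_per_point=1.00,
    participation_fee=0.00,
)
LANGUAGE_CODE = 'en'
REAL_WORLD_CURRENCY_CODE = 'USD'
USE_POINTS = True

Also worth confirming that 'otree devserver' is launched from the directory that contains settings.py, since oTree resolves it from the current working directory.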

py4j.protocol.Py4JError: An error occurred while calling o112.save

I'm submitting a PySpark job on a university server.
My configuration is:
--master yarn --deploy-mode cluster --num-executors 150 --executor-cores 4 --executor-memory 28g --driver-memory 28g
My first few steps run correctly:
# imports assumed by this snippet
from pyspark.sql.functions import udf, struct
from pyspark.sql.types import StringType

df = spark.read.format('csv') \
    .option('header', True) \
    .option('multiLine', True) \
    .load(data_file)
df.show()

# pass the whole row to the UDF as a struct
udf_function = udf(stamp, StringType())
new_df = df.withColumn("column_a", udf_function(struct([df[x] for x in df.columns])))
new_df.show()
When I try to run the following commands separately, I get two very similar errors:
Command 1:
new_df.select("column_a").distinct().show(100)
Error:
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1159, in send_command
raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 985, in send_command
response = connection.send_command(command)
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1164, in send_command
"Error while receiving", e, proto.ERROR_ON_RECEIVE)
py4j.protocol.Py4JNetworkError: Error while receiving
Traceback (most recent call last):
File "python_stamp.py", line 93, in <module>
main()
File "python_stamp.py", line 82, in main
new_df.select("planning_cluster_id").distinct().show(100)
File "/hadoop4/yarn/nm/usercache/apps/appcache/application_1593105789029_2249545/container_e01_1593105789029_2249545_02_000002/pyspark.zip/pyspark/sql/dataframe.py", line 380, in show
Command 2:
new_df.write.mode("overwrite").format("csv").option("delimiter", ",").option("header", "true").save(save_path)
Error:
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1159, in send_command
raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 985, in send_command
response = connection.send_command(command)
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1164, in send_command
"Error while receiving", e, proto.ERROR_ON_RECEIVE)
py4j.protocol.Py4JNetworkError: Error while receiving
Traceback (most recent call last):
File "python_stamp.py", line 91, in <module>
main()
File "python_stamp.py", line 83, in main
new_df.write.mode("overwrite").format("csv").option("delimiter", ",").option("header", "true").save(save_path)
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/pyspark.zip/pyspark/sql/readwriter.py", line 738, in save
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/hadoop1/yarn/nm/usercache/apps/appcache/application_1593105789029_2249417/container_e01_1593105789029_2249417_02_000002/py4j-0.10.7-src.zip/py4j/protocol.py", line 336, in get_return_value
py4j.protocol.Py4JError: An error occurred while calling o112.save
Does anyone know the reason behind it? I'm fairly confident it's not a memory error, since the earlier steps that load and show the table all run correctly.
Additional information: when I run all of these commands in the pyspark shell, they run perfectly well.
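Not a definitive answer, but "Answer from Java side is empty" generally means the JVM on an executor (or the driver) died while the full plan was being evaluated. df.show() only materializes a handful of rows, while distinct() and save() push every row through the Python UDF, so a row that makes stamp() throw (or blow up memory) is a likely suspect. A sketch of a null-safe wrapper, assuming stamp is the same function used in the job above:

from pyspark.sql.functions import udf, struct
from pyspark.sql.types import StringType

def safe_stamp(row):
    # Guard per-row failures so one bad record doesn't kill the executor.
    try:
        return stamp(row)
    except Exception:
        return None

udf_function = udf(safe_stamp, StringType())
new_df = df.withColumn("column_a", udf_function(struct([df[x] for x in df.columns])))

The real Java-side stack trace should be in the YARN logs (yarn logs -applicationId <application id>), which would confirm or rule this out.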

IndexError while trying to run a neural network

I am new to both Python and neural networks.
I am trying to run the AnyNet code (the traceback below is from AnyNet-master).
First I ran
sh ./create_dataset.sh
and gave the correct path to scenflow_data_path.
Now I am running, as given:
python main.py --maxdisp 192 --with_spn
and I get the following error. Please let me know how to correct it.
The error is:
Traceback (most recent call last):
File "main.py", line 194, in <module>
main()
File "main.py", line 51, in main
args.datapath)
File "/home/kbdp5524/Downloads/AnyNet-master/dataloader/listflowfile.py", line 23, in dataloader
monkaa_path = filepath + [x for x in image if 'monkaa' in x][0]
IndexError: list index out of range
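For what it's worth, that line in listflowfile.py builds monkaa_path from the first directory entry containing 'monkaa', so the IndexError means no such entry exists under the path the dataloader was given, which usually points to a wrong or incomplete scenflow_data_path / --datapath. A quick check, with an illustrative path:

import os

datapath = '/path/to/sceneflow'   # the value passed via --datapath (illustrative)
entries = os.listdir(datapath)
print(entries)
# the dataloader expects at least one entry containing 'monkaa'
assert any('monkaa' in x for x in entries), "no monkaa folder under datapath"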

Blender IndexError: bpy_prop_collection

I am trying to write a game with the GoranM/bdx plugin. When I create a plane with a texture and try to export it to code, I get a fatal error.
Traceback (most recent call last):
File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\ops\exprun.py", line 225, in execute
export(self, context, bpy.context.scene.bdx.multi_blend_export, bpy.context.scene.bdx.diff_export)
File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\ops\exprun.py", line 123, in export
bpy.ops.export_scene.bdx(filepath=file_path, scene_name=scene.name, exprun=True)
File "C:\Program Files\Blender Foundation\Blender\2.79\scripts\modules\bpy\ops.py", line 189, in call
ret = op_call(self.idname_py(), None, kw)
RuntimeError: Error: Traceback (most recent call last):
File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 903, in execute
return export(context, self.filepath, self.scene_name, self.exprun, self.apply_modifier)
File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 829, in export
"models": srl_models(objects, apply_modifier),
File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 117, in srl_models
verts = vertices(mesh)
File "C:\Users\Myuser\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\bdx\exporter.py", line 53, in vertices
vert_uv = list(uv_layer[li].uv)
IndexError: bpy_prop_collection[index]: index 0 out of range, size 0
location: C:\Program Files\Blender Foundation\Blender\2.79\scripts\modules\bpy\ops.py:189
location: <unknown location>:-1
Maybe someone has had the same problem and knows how to fix it?
Go into Object Mode before calling that function.
bpy.ops.object.mode_set(mode='OBJECT', toggle=False)
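Expanding on that answer, a minimal sketch of guarding the export this way (the mode check is generic Blender API, not specific to BDX):

import bpy

# In Edit Mode the exporter can see an empty UV collection, which is
# exactly the IndexError above; switch to Object Mode first.
if bpy.context.object and bpy.context.object.mode != 'OBJECT':
    bpy.ops.object.mode_set(mode='OBJECT', toggle=False)
# ...then run the BDX export as usual.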

OSError: Result too large

I'm playing around with Scapy, but I can't get it to work. I tried several different pieces of code, but all gave me the same output:
Traceback (most recent call last):
File "<module1>", line 7, in <module>
File "C:\Python26\lib\site-packages\scapy\sendrecv.py", line 357, in srp
s = conf.L2socket(iface=iface, filter=filter, nofilter=nofilter, type=type)
File "C:\Python26\lib\site-packages\scapy\arch\pcapdnet.py", line 313, in __init__
self.outs = dnet.eth(iface)
File "dnet.pyx", line 112, in dnet.eth.__init__
OSError: Result too large
I am using Python 2.6 with all dependencies for Scapy installed.
How can I fix this?
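One hedged guess: dnet.eth(iface) is failing to open the default interface on Windows, and "Result too large" is just the misleading errno it surfaces. Naming the interface explicitly is worth a try; the interface name below is illustrative, not a known-good value:

from scapy.all import *

show_interfaces()        # list the interfaces scapy/dnet can actually see
conf.iface = "eth0"      # hypothetical name; pick one from the list above
ans, unans = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24"),
                 timeout=2)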
