Modify VHDL generic value with ghdl in cocotb - python

I managed to read a generic module value with cocotb without problems, but I can't manage to write it.
My VHDL generic is:
...
generic (
...
C_M00_AXI_BURST_LEN : integer := 16;
...
)
I can read it in cocotb:
self.dut.log.info("C_M00_AXI_BURST_LEN 0x{:x}".format(
    int(self.dut.c_m00_axi_burst_len)))
But if I try to change it:
self.dut.c_m00_axi_burst_len = 32
I get this Python error:
Send raised exception: Not permissible to set values on object c_m00_axi_burst_len
File "/opt/cocotb/cocotb/decorators.py", line 197, in send
return self._coro.send(value)
File "/usr/local/projects/axi_pattern_tester/vivado_ip/axi_pattern_tester_1.0/cocotb/test_axi_pattern_tester_v1_0.py", line 165, in axi4_master_test
dutest.print_master_generics()
File "/usr/local/projects/axi_pattern_tester/vivado_ip/axi_pattern_tester_1.0/cocotb/test_axi_pattern_tester_v1_0.py", line 86, in print_master_generics
self.dut.c_m00_axi_burst_len = 32
File "/opt/cocotb/cocotb/handle.py", line 239, in __setattr__
return getattr(self, name)._setcachedvalue(value)
File "/opt/cocotb/cocotb/handle.py", line 378, in _setcachedvalue
raise TypeError("Not permissible to set values on object %s" % (self._name))
Is there a way to do it using GHDL as the simulator?

In fact, user1155120, Paebbels and scary_jeff answered the question: it's not possible.
However, the configuration can be approached differently to solve this problem. The VHDL generic value can be set in the Makefile by adding the "-g" option to the SIM_ARGS parameter:
SIM_ARGS+=-gC_M00_AXI_BURST_LEN=16
This value can then be read from the cocotb "dut" object like any other signal and used as a simulation parameter:
C_M00_AXI_BURST_LEN = int(dut.C_M00_AXI_BURST_LEN.value)
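For completeness, here is a minimal sketch of how the two pieces fit together on the cocotb side (assuming a recent cocotb with async tests; the generic name and the value 16 come from the SIM_ARGS line above):
import cocotb
from cocotb.triggers import Timer

@cocotb.test()
async def check_burst_len(dut):
    # The generic was fixed at elaboration time via SIM_ARGS+=-gC_M00_AXI_BURST_LEN=16,
    # so here it can only be read and used to parameterise the test.
    burst_len = int(dut.C_M00_AXI_BURST_LEN.value)
    dut._log.info("C_M00_AXI_BURST_LEN = %d", burst_len)
    assert burst_len == 16
    await Timer(1, "ns")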

Related

Unable to implement ICodePipelineActionFactory in Python

I am trying to create an arbitrary CodePipeline action as part of a CDK pipeline implemented in Python. Specifically in this case it's a step function invocation, but I would like to call other types as well. No matter what I do, I keep getting the same error saying it can't find the add_action attribute on the stage object.
jsii.errors.JSIIError: '' object has no attribute 'add_action'
I have tried different variations of the method name, checking the object with dir() (stage is a very opaque InterfaceDynamicProxy object), reading jsii documentation to see if they have a way to list available attributes, but got nowhere.
Does anyone have a working example of jsii interface implementation in Python? Or can you tell what's wrong with the code below?
I am using CDK 1.118.0 with Python 3.9.6 and node.js 16.6.2 on Mac OS X.
from aws_cdk import core, pipelines, aws_codepipeline_actions, aws_codepipeline, aws_stepfunctions
import jsii

@jsii.implements(pipelines.ICodePipelineActionFactory)
class SomeStep(pipelines.Step):
    def __init__(self, id_):
        super().__init__(id_)

    @jsii.member(jsii_name="produceAction")
    def produce_action(
        self, stage: aws_codepipeline.IStage,
        options: pipelines.ProduceActionOptions,
        # TODO why are these not passed?
        # *,
        # action_name, artifacts, pipeline, run_order, scope,
        # before_self_mutation=None,
        # code_build_defaults=None,
        # fallback_artifact=None
    ) -> pipelines.CodePipelineActionFactoryResult:
        stage.add_action(
            aws_codepipeline_actions.StepFunctionInvokeAction(
                state_machine=aws_stepfunctions.StateMachine.from_state_machine_arn("..."),
                action_name="foo",
                state_machine_input=aws_codepipeline_actions.StateMachineInput.literal({"foo": "bar"}),
                run_order=options["run_order"],
            )
        )
        return pipelines.CodePipelineActionFactoryResult(run_orders_consumed=1)

app = core.App()
stage = core.Stage(app, "stage")
stack = core.Stack(stage, "stack")
pipeline_stack = core.Stack(app, "pipeline-stack")
pipeline = pipelines.CodePipeline(
    pipeline_stack,
    "pipeline",
    synth=pipelines.ShellStep("synth", input=pipelines.CodePipelineSource.git_hub("foo/bar", "main"), commands=["cdk synth"])
)
pipeline.add_wave("wave").add_stage(stage, pre=[SomeStep("some")])
app.synth()
The complete error:
jsii.errors.JavaScriptError:
Error: '' object has no attribute 'add_action'
at KernelHost.completeCallback (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/tmpwwmvzicu/lib/program.js:9462:35)
at KernelHost.callbackHandler (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/tmpwwmvzicu/lib/program.js:9453:41)
at Step.value (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/tmpwwmvzicu/lib/program.js:8323:49)
at CodePipeline.pipelineStagesAndActionsFromGraph (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/codepipeline/codepipeline.js:154:48)
at CodePipeline.doBuildPipeline (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/codepipeline/codepipeline.js:116:14)
at CodePipeline.buildPipeline (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/main/pipeline-base.js:93:14)
at CodePipeline.buildJustInTime (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/main/pipeline-base.js:101:18)
at Object.visit (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/pipelines/lib/main/pipeline-base.js:42:57)
at recurse (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/core/lib/private/synthesis.js:86:20)
at recurse (/private/var/folders/ln/r1dlp_xj6t57ddclvh7zgl8m0000gp/T/jsii-kernel-x3iY7A/node_modules/@aws-cdk/core/lib/private/synthesis.js:98:17)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~/dev/cdk-playground/app.py", line 55, in <module>
app.synth()
File ".../lib/python3.9/site-packages/aws_cdk/core/__init__.py", line 16432, in synth
return typing.cast(aws_cdk.cx_api.CloudAssembly, jsii.invoke(self, "synth", [options]))
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 128, in wrapped
return _recursize_dereference(kernel, fn(kernel, *args, **kwargs))
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 348, in invoke
return _callback_till_result(self, response, InvokeResponse)
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 216, in _callback_till_result
response = kernel.sync_complete(
File ".../lib/python3.9/site-packages/jsii/_kernel/__init__.py", line 386, in sync_complete
return self.provider.sync_complete(
File ".../lib/python3.9/site-packages/jsii/_kernel/providers/process.py", line 382, in sync_complete
resp = self._process.send(_CompleteRequest(complete=request), response_type)
File ".../lib/python3.9/site-packages/jsii/_kernel/providers/process.py", line 326, in send
raise JSIIError(resp.error) from JavaScriptError(resp.stack)
jsii.errors.JSIIError: '' object has no attribute 'add_action'
Subprocess exited with error 1
AWS has resolved the issue in jsii:
https://github.com/aws/jsii/issues/2963
The fix should be available with CDK 1.121.0.

How to call an Odoo model's method with no parameter(except self) on a specific record through xmlrpc in Odoo 13?

I am developing a script to create a record in an Odoo model, and I need to run some of this model's methods on specific records. In my case, the method I need to run on a specific record doesn't take any parameter (just self). I want to know how I can run the method on a specific record of the model through an XML-RPC call from the client to the Odoo server. Below is the way I tried to call the method and pass the id of a specific record, regarding this question.
xmlrpc_object.execute('test_db', user, 'admin', 'test.test', 'action_check_constraint', [record_id])
action_check_constraint checks some constraints on each record of the model; if all the constraints pass, it changes the state of the record, otherwise it raises validation errors. But the above method call over xmlrpc raises the error below:
xmlrpc.client.Fault: <Fault cannot marshal None unless allow_none is enabled: 'Traceback (most recent call last):\n File "/home/ibrahim/workspace/odoo13/odoo/odoo/addons/base/controllers/rpc.py", line 60, in xmlrpc_1\n response = self._xmlrpc(service)\n File "/home/ibrahim/workspace/odoo13/odoo/odoo/addons/base/controllers/rpc.py", line 50, in _xmlrpc\n return dumps((result,), methodresponse=1, allow_none=False)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 968, in dumps\n data = m.dumps(params)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 501, in dumps\n dump(v, write)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 523, in __dump\n f(self, value, write)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 527, in dump_nil\n raise TypeError("cannot marshal None unless allow_none is enabled")\nTypeError: cannot marshal None unless allow_none is enabled\n'>
> /home/ibrahim/workspace/scripts/automate/automate_record_creation.py(328)create_record()
Can anyone help with the correct and best way of calling a model's method (with no parameter except self) on a specific record through an xmlrpc client to the Odoo server?
That error is raised because the xmlrpc library does not allow None as a return value by default. You can change that behaviour by simply allowing it.
The following line is from Odoo's external API documentation, extended to allow None as a return value:
models = xmlrpc.client.ServerProxy(
'{}/xmlrpc/2/object'.format(url), allow_none=True)
For more information about xmlrpc's ServerProxy, look into the Python documentation.
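Putting it together, a minimal sketch of the whole call (the URL, database name, credentials and record id are placeholders):
import xmlrpc.client

url, db, username, password = "http://localhost:8069", "test_db", "admin", "admin"  # hypothetical connection details
record_id = 42  # id of the record to check (hypothetical)

common = xmlrpc.client.ServerProxy("{}/xmlrpc/2/common".format(url), allow_none=True)
uid = common.authenticate(db, username, password, {})

# allow_none=True lets the proxy marshal the None returned by the method
models = xmlrpc.client.ServerProxy("{}/xmlrpc/2/object".format(url), allow_none=True)

# call the no-argument method on one specific record by passing its id in the ids list
models.execute_kw(db, uid, password, "test.test", "action_check_constraint", [[record_id]])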
You can get the error if action_check_constraint does not return anything (by default None).
Try to run the server with the log-level option set to debug_rpc_answer to get more details.
After a lot of searching and trying, I first used this fix to solve the error, but I think it is not best practice. Then I found OdooRPC, which does the same job but handles the above case, so there is no such error for model methods that return None. Using OdooRPC solved my problem, and I was able to do what I needed to do with XML-RPC in Odoo.
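For reference, a minimal OdooRPC sketch of the same call (assuming the odoorpc package and hypothetical connection details):
import odoorpc

odoo = odoorpc.ODOO('localhost', port=8069)  # hypothetical host/port
odoo.login('test_db', 'admin', 'admin')      # hypothetical database and credentials

record_id = 42  # id of the record to check (hypothetical)

# browse the specific record and call the no-argument method on it;
# OdooRPC handles methods that return None without the marshalling error
record = odoo.env['test.test'].browse(record_id)
record.action_check_constraint()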

Don't understand this ConfigParser.InterpolationSyntaxError

So I have tried to write a small config file for my script, which should specify an IP address, a port and a URL which should be created via interpolation using the former two variables. My config.ini looks like this:
[Client]
recv_url : http://%(recv_host):%(recv_port)/rpm_list/api/
recv_host = 172.28.128.5
recv_port = 5000
column_list = Name,Version,Build_Date,Host,Release,Architecture,Install_Date,Group,Size,License,Signature,Source_RPM,Build_Host,Relocations,Packager,Vendor,URL,Summary
In my script I parse this config file as follows:
from ConfigParser import SafeConfigParser

config = SafeConfigParser()
config.read('config.ini')
column_list = config.get('Client', 'column_list').split(',')
URL = config.get('Client', 'recv_url')
If I run my script, this results in:
Traceback (most recent call last):
File "server_side_agent.py", line 56, in <module>
URL = config.get('Client', 'recv_url')
File "/usr/lib64/python2.7/ConfigParser.py", line 623, in get
return self._interpolate(section, option, value, d)
File "/usr/lib64/python2.7/ConfigParser.py", line 691, in _interpolate
self._interpolate_some(option, L, rawval, section, vars, 1)
File "/usr/lib64/python2.7/ConfigParser.py", line 716, in _interpolate_some
"bad interpolation variable reference %r" % rest)
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
I have tried debugging, which resulted in giving me one more line of error code:
...
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
Exception AttributeError: "'NoneType' object has no attribute 'path'" in <function _remove at 0x7fc4d32c46e0> ignored
Here I am stuck. I don't know where this _remove function is supposed to be... I tried searching for what the message is supposed to tell me, but quite frankly I have no idea. So...
Is there something wrong with my code?
What does '< function _remove at ... >' mean?
There was indeed a mistake in my config.ini file. I did not regard the s at the end of %(...)s as a necessary syntax element. It is the string conversion type from Python's %-style formatting, and ConfigParser's interpolation requires it.
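For reference, a minimal sketch of the corrected interpolation (using Python 3's configparser here; the same %(name)s syntax applies to SafeConfigParser on Python 2):
import configparser

config = configparser.ConfigParser()
config.read_string("""
[Client]
recv_url : http://%(recv_host)s:%(recv_port)s/rpm_list/api/
recv_host = 172.28.128.5
recv_port = 5000
""")

print(config.get('Client', 'recv_url'))  # http://172.28.128.5:5000/rpm_list/api/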
My .ini file for starting the Python Pyramid server had a similar problem.
To use a variable from the .env file, I needed to write it as: %%(VARIABLE_FOR_EXAMPLE)s
But I ran into other problems, and I solved them with this: How can I use a system environment variable inside a pyramid ini file?

Passing binary data to a python logger

I want to log raw bytes. But if I change the file mode in FileHandler from "w" to "wb", the logger fails with an error whichever data I pass to it, string or bytes.
logging.getLogger("clientIn").error(b"bacd")
Traceback (most recent call last):
File "/usr/lib/python3.4/logging/__init__.py", line 980, in emit
stream.write(msg)
TypeError: 'str' does not support the buffer interface
Call stack:
File "<string>", line 1, in <module>
File "/usr/lib/python3.4/multiprocessing/spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "/usr/lib/python3.4/multiprocessing/spawn.py", line 119, in _main
return self._bootstrap()
File "/usr/lib/python3.4/multiprocessing/process.py", line 254, in _bootstrap
self.run()
File "/usr/lib/python3.4/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/serj/work/proxy_mult/proxy/connection_worker_process.py", line 70, in __call__
self._do_work(ipc_socket)
File "/home/serj/work/proxy_mult/proxy/connection_worker_process.py", line 76, in _do_work
logging.getLogger("clientIn").error("bacd")
Message: 'bacd'
I need a way to adapt the logging module to binary data.
The easiest solution would be to store the bytes in a bytestring.
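For example, a minimal sketch of that first idea (keeping the FileHandler in text mode and logging a textual representation of the bytes; the file name is hypothetical):
import logging

logging.basicConfig(filename="client.log", filemode="w", level=logging.DEBUG)
raw = b"\x00\x01bacd\xff"

# %r renders the bytes as their escaped literal, so the text-mode handler can write it
logging.getLogger("clientIn").error("raw payload: %r", raw)
# the log file now contains: ERROR:clientIn:raw payload: b'\x00\x01bacd\xff'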
The other possible way is to customize your logging. The documentation is a start, but you will need to look at examples of how people have done it. Personally, I have only gone as far as using a slightly customized record, handler and formatter to let my logger use a SQLite backend.
There are multiple things you need to modify (sorry for not being more specific, but I am also still a beginner when it comes to Python's logging module):
LogRecord - if you inherit from it, you will see that __init__(...) specifies an argument msg of type object. As the documentation states, msg is the event description message, possibly a format string with placeholders for variable data. Imho, if msg were supposed to be just a string, it would not have been of type object. This is a place where you can investigate further, including the use of args. Inheriting is not really necessary in many cases, and a simple namedtuple would do just fine.
LoggerAdapter - there is the contextual information of a message, which can contain arbitrary data (from what I understand). You will need a custom adapter to work with that.
In addition, you will probably have to use a custom Formatter and/or Handler. Worst case, you will have to use some arbitrary string message while passing the extra data (binary or otherwise) alongside it.
Here is a quick and dirty example, where I use a namedtuple to hold the extra data. Note that I was unable to just pass the extra data without an actual message, but you might be able to get around this issue if you implement your own custom LogRecord. Also note that I am omitting the rest of my code since this is just a demonstration of the customization:
from collections import namedtuple
import logging

from torch.utils.tensorboard import SummaryWriter  # PyTorch's TensorBoard writer used below

TensorBoardLogRecord = namedtuple('TensorBoardLogRecord', 'dtime lvl src msg tbdata')
TensorBoardLogRecordData = namedtuple('tbdata', 'image images scalar scalars custom_scalars')

class TensorBoardLoggerHandler(logging.Handler):
    def __init__(self, level=logging.INFO, tboard_dir='./runs') -> None:
        super().__init__(level)
        self.tblogger = SummaryWriter(tboard_dir)

    def emit(self, record: TensorBoardLogRecord) -> None:
        # For debugging, print record.__dict__ to see how the record is structured
        # If the record contains TensorBoard data, add it to TB and flush
        if hasattr(record, 'args'):
            # TODO Do something with the arguments
            ...

class TensorBoardLogger(logging.Logger):
    def __init__(self, name: str = 'TensorBoardLogger', level=logging.INFO, tboard_dir='./runs') -> None:
        super().__init__(name, level)
        self.handler = TensorBoardLoggerHandler(level, tboard_dir)
        self.addHandler(self.handler)

...

logging.setLoggerClass(TensorBoardLogger)
logger = logging.getLogger('TensorBoardLogger')
logger.info('Some message', TensorBoardLogRecordData(None, None, 10000, None, None))
What I am trying to do is give the logger the ability (still work in progress) to actually write a TensorBoard log entry (in my case via the PyTorch utilities module) that can be visualized with the tool inside the web browser. Yours doesn't need to be that complicated. This "solution" is mostly for the case where you can't find a way to override the msg handling.
I also found this repository - visual logging - which uses the logging facilities of the Python module to handle images. Following the code provided by the repo, I was able to get
<LogRecord: TensorBoardLogger, 20, D:\Projects\remote-sensing-pipeline\log.py, 86, "TensorBoardLogRecord(image=None, images=None, scalar=1, scalars=None, custom_scalars=None)">
{'name': 'TensorBoardLogger', 'msg': TensorBoardLogRecord(image=None, images=None, scalar=1, scalars=None, custom_scalars=None), 'args': (), 'levelname': 'INFO', 'levelno': 20, 'pathname': 'D:\\Projects\\remote-sensing-pipeline\\log.py', 'filename': 'log.py', 'module': 'log', 'exc_info': None, 'exc_text': None, 'stack_info': None, 'lineno': 86, 'funcName': '<module>', 'created': 1645193616.9026344, 'msecs': 902.6343822479248, 'relativeCreated': 834.2068195343018, 'thread': 6508, 'threadName': 'MainThread', 'processName': 'MainProcess', 'process': 16208}
by just calling
logger = TensorBoardLogger(tboard_dir='./LOG')
logger.info(TensorBoardLogRecord(image=None, images=None, scalar=1, scalars=None, custom_scalars=None))
where I changed TensorBoardLogRecord to be
TensorBoardLogRecord = namedtuple('TensorBoardLogRecord' , 'image images scalar scalars custom_scalars')
As you can see, the msg is my object TensorBoardLogRecord, which confirms both my statement above and the statement in the documentation - as long as you customize your logging properly, you can log whatever you want. In the case of the repo I've pointed to, the author is using images, which are numpy objects. However, those images are ultimately read from image files, so binary data is there as well.

Generating SSH keypair with paramiko in Python

I am trying to generate an SSH key pair with the Python module paramiko. There doesn't seem to be much info about key generation. I've read through the paramiko docs but can't figure out what's wrong. I can generate a private and public key without password encryption. However, when I try to encrypt the private key, I get the following error.
ValueError: IV must be 8 bytes long
I believe the above error is from pycrypto. I've looked through the relevant code in paramiko.pkey and pycrypto without any luck.
Here is a small example.
import paramiko

def keygen(filename, passwd=None, bits=1024):
    k = paramiko.RSAKey.generate(bits)
    # This line throws the error.
    k.write_private_key_file(filename, password='cleverpassword')
    o = open(filename + '.pub', "w").write(k.get_base64())
Traceback:
Traceback (most recent call last):
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Documents/test.py", line 14, in keygen
k.write_private_key_file(filename,password = 'cleverpassword')
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Pythonista.app/pylib/site-packages/paramiko/rsakey.py", line 127, in write_private_key_file
self._write_private_key_file('RSA', filename, self._encode_key(), password)
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Pythonista.app/pylib/site-packages/paramiko/pkey.py", line 323, in _write_private_key_file
self._write_private_key(tag, f, data, password)
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Pythonista.app/pylib/site-packages/paramiko/pkey.py", line 341, in _write_private_key
data = cipher.new(key, mode, salt).encrypt(data)
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Pythonista.app/pylib/site-packages/Crypto/Cipher/DES3.py", line 114, in new
return DES3Cipher(key, *args, **kwargs)
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Pythonista.app/pylib/site-packages/Crypto/Cipher/DES3.py", line 76, in __init__
blockalgo.BlockAlgo.__init__(self, _DES3, key, *args, **kwargs)
File "/var/mobile/Applications/149E4C21-2F92-4712-BAC6-151A171C6687/Pythonista.app/pylib/site-packages/Crypto/Cipher/blockalgo.py", line 141, in __init__
self._cipher = factory.new(key, *args, **kwargs)
ValueError: IV must be 8 bytes long
The Problem
This looks like a bug in paramiko.
If you look at the line that threw the error in pkey.py, it is the following line:
data = cipher.new(key, mode, salt).encrypt(data)
Let us now look at the lines before it, which set the mode by first selecting a cipher_name.
# since we only support one cipher here, use it
cipher_name = list(self._CIPHER_TABLE.keys())[0]
cipher = self._CIPHER_TABLE[cipher_name]['cipher']
keysize = self._CIPHER_TABLE[cipher_name]['keysize']
blocksize = self._CIPHER_TABLE[cipher_name]['blocksize']
mode = self._CIPHER_TABLE[cipher_name]['mode']
Here are the contents of _CIPHER_TABLE.
_CIPHER_TABLE = {
    'AES-128-CBC': {'cipher': AES, 'keysize': 16, 'blocksize': 16, 'mode': AES.MODE_CBC},
    'DES-EDE3-CBC': {'cipher': DES3, 'keysize': 24, 'blocksize': 8, 'mode': DES3.MODE_CBC},
}
Observe how the comment contradicts the code: two ciphers are available, yet the line above, which selects the cipher_name, assumes there is only one.
Based on the error, it appears that 'DES-EDE3-CBC' is selected. If we look at the comment in DES3.py, we see the following requirement for an IV.
IV : byte string
The initialization vector to use for encryption or decryption.
It is ignored for `MODE_ECB` and `MODE_CTR`.
For `MODE_OPENPGP`, IV must be `block_size` bytes long for encryption
and `block_size` +2 bytes for decryption (in the latter case, it is
actually the *encrypted* IV which was prefixed to the ciphertext).
It is mandatory.
From paramiko's source, we observe that no IV is passed, and hence the error we saw.
Workaround
Change the following line in pkey.py to hardcode the 'AES-128-CBC' cipher instead.
# cipher_name = list(self._CIPHER_TABLE.keys())[1]
cipher_name = 'AES-128-CBC'
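With that one-line change in place, the original example should go through. A quick sanity check (hypothetical file path and password):
import paramiko

k = paramiko.RSAKey.generate(1024)
k.write_private_key_file('/tmp/test_rsa', password='cleverpassword')  # now encrypted with AES-128-CBC
with open('/tmp/test_rsa.pub', 'w') as f:
    f.write(k.get_base64())

# reading the key back with the password should succeed and give the same public part
k2 = paramiko.RSAKey.from_private_key_file('/tmp/test_rsa', password='cleverpassword')
print(k2.get_base64() == k.get_base64())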
