How to know which simulator is used in cocotb testbench? - python

To test my Verilog design I'm using two different simulators: Icarus and Verilator. It works, but there are some variations between them.
For example, I can't read module parameters with Verilator, but it works with Icarus.
Is there a way to know which simulator is in use from the Python test file?
I would like to write something like this:
if SIM == 'icarus':
    self.PULSE_PER_NS = int(dut.PULSE_PER_NS)
    self.DEBOUNCE_PER_NS = int(dut.DEBOUNCE_PER_NS)
else:
    self.PULSE_PER_NS = 4096
    self.DEBOUNCE_PER_NS = 16777216
That way I can manage both simulators and compare them.

The running simulator's name (as a string) can be determined using cocotb.SIM_NAME. If cocotb was not loaded from a simulator, it is None.
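For example (a minimal sketch, not from the original answer: cocotb.SIM_NAME usually holds the simulator's product string, e.g. "Icarus Verilog", so a case-insensitive prefix check is safer than an exact comparison):

import cocotb

sim = (cocotb.SIM_NAME or '').lower()
if sim.startswith('icarus'):
    # Icarus exposes the module parameters, so read them from the DUT
    self.PULSE_PER_NS = int(dut.PULSE_PER_NS)
    self.DEBOUNCE_PER_NS = int(dut.DEBOUNCE_PER_NS)
else:
    # Verilator (in this setup) does not, so fall back to hard-coded values
    self.PULSE_PER_NS = 4096
    self.DEBOUNCE_PER_NS = 16777216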


Setting NSSpeechSynthesizer mode from Python

I am using PyObjC bindings to try to get a spoken sound file from phonemes.
I figured out that I can turn text into a sound file as follows:
import AppKit
ss = AppKit.NSSpeechSynthesizer.alloc().init()
ss.setVoice_('com.apple.speech.synthesis.voice.Alex')
ss.startSpeakingString_toURL_("Hello", AppKit.NSURL.fileURLWithPath_("hello.aiff"))
# then wait until ss.isSpeaking() returns False
Next, for greater control, I'd like to first turn the text into phonemes and then speak those.
phonemes = ss.phonemesFromText_("Hello")
But now I'm stuck, because I know from the docs that to get startSpeakingString to accept phonemes as input, you first need to set NSSpeechSynthesizer.SpeechPropertyKey.Mode to "phoneme". And I think I'm supposed to use setObject_forProperty_error_ to set that.
There are two things I don't understand:
Where is NSSpeechSynthesizer.SpeechPropertyKey.Mode in PyObjC? I grepped the entire PyObjC directory and SpeechPropertyKey is not mentioned anywhere.
How do I use setObject_forProperty_error_ to set it? I think based on the docs that the first argument is the value to set (although it's called just "an object", so True in this case?), and the second is the key (would be phoneme in this case?), and finally there is an error callback. But I'm not sure how I'd pass those arguments in Python.
Where is NSSpeechSynthesizer.SpeechPropertyKey.Mode in PyObjC?
Nowhere.
How do I use setObject_forProperty_error_ to set it?
ss.setObject_forProperty_error_("PHON", "inpt", None)
"PHON" is the same as NSSpeechSynthesizer.SpeechPropertyKey.Mode.phoneme
"inpt" is the same as NSSpeechSynthesizer.SpeechPropertyKey.inputMode
It seems these are not defined anywhere in PyObjC, but I found them by firing up Xcode and writing a short Swift snippet:
import Foundation
import AppKit
let synth = NSSpeechSynthesizer()
let x = NSSpeechSynthesizer.SpeechPropertyKey.Mode.phoneme
let y = NSSpeechSynthesizer.SpeechPropertyKey.inputMode
Now looking at x and y in the debugger shows that they are the strings mentioned above.
As for how to call setObject_forProperty_error_, I simply tried passing in those strings and None as the error handler, and that worked.
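Putting the pieces together, a sketch assembled from the calls above (not re-verified end to end; the two property keys are the raw strings found via the Swift snippet):

import AppKit

ss = AppKit.NSSpeechSynthesizer.alloc().init()
ss.setVoice_('com.apple.speech.synthesis.voice.Alex')

# switch the synthesizer to phoneme input mode:
# "PHON" is SpeechPropertyKey.Mode.phoneme, "inpt" is SpeechPropertyKey.inputMode
ss.setObject_forProperty_error_("PHON", "inpt", None)

phonemes = ss.phonemesFromText_("Hello")
ss.startSpeakingString_toURL_(phonemes, AppKit.NSURL.fileURLWithPath_("hello.aiff"))
# then wait until ss.isSpeaking() returns False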

Can you use mock_open to simulate serial connections?

Morning folks,
I'm trying to get a few unit tests going in Python to confirm my code is working, but I'm having a real hard time getting a Mock anything to fit into my test cases. I'm new to Python unit testing, so this has been a trying week thus far.
The summary of the program: I'm attempting to do serial control of a commercial monitor I got my hands on, and I thought I'd use it as a chance to finally use Python for something rather than just falling back on one of the other languages I know. I've got pyserial going, but before I start shoving a ton of commands out to the TV I'd like to learn the unittest part so I can write tests for my expected outputs and inputs.
I've tried using a library called dummyserial, but it didn't seem to be recognising the output I was sending. I thought I'd give mock_open a try, as I've seen it works like standard file I/O as well, but it just isn't picking up on the calls either. Samples of the code involved:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8')
    read_text = 'Stuff\r'
    mo = mock_open(read_data=read_text)
    mo.in_waiting = len(read_text)
    with patch('__main__.open', mo):
        with open('./serial', 'a+b') as com:
            tv = SharpTV(com=com, TVID=999, tvInput='DVI')
            tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
            com.write(b'some junk')
    print(mo.mock_calls)
    mo().write.assert_called_with('{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8'))
And in the SharpTV class, the function in question:
def sendCmd(self, type, msg):
    sent = self.com.write('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
    print('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
Obviously, I'm attempting to control a Sharp TV. I know the commands are correct; that isn't the issue. The issue is just the testing. According to the documentation on the mock_open page, mo.mock_calls should show that a call was made, but I'm getting just an empty list [] even in spite of the blatantly wrong com.write(b'some junk'), and mo().write.assert_called_with(...) fails with an assertion error because it isn't detecting the write from within sendCmd. What's really bothering me is that I can run the examples from the mock_open section in interactive mode and they work as expected.
I'm missing something, I just don't know what. I'd like help getting either dummyserial working, or mock_open.
To answer one part of my question, I figured out the functionality of dummyserial. The following works now:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK'])
    com = dummyserial.Serial(
        port='COM1',
        baudrate=9600,
        ds_responses={powerCheck: powerCheck}
    )
    tv = SharpTV(com=com, TVID=999, tvInput='DVI')
    tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
    self.assertEqual(tv.recv(), powerCheck)
Previously I was encoding the dictionary values as utf-8. The dummyserial library decodes whatever you write(...) to it so it's a straight string vs. string comparison. It also encodes whatever you're read()ing as latin1 on the way back out.
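If all you need is to check what sendCmd writes, another option (a minimal sketch, not from the original thread; it assumes SharpTV only uses the pyserial interface of the object it is given) is to skip mock_open entirely and hand the class a MagicMock constrained to serial.Serial:

from unittest.mock import MagicMock
import serial

com = MagicMock(spec=serial.Serial)  # mock restricted to pyserial's attributes
tv = SharpTV(com=com, TVID=999, tvInput='DVI')
tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])

# assert directly on the mocked port, no open()/file patching involved
com.write.assert_called_with(
    '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8'))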

IronPython: Message: expected c_double, got c_double_Array_3

I'm currently developing a script using the Python script editor in Rhino. Since I'm working on a Windows machine, the script editor uses IronPython as the language.
In the same script, I want to interact with an FE package (Straus7) which has a Python API. When doing so, I have run into some problems, as the ctypes module does not seem to work in IronPython the same way it does in regular Python. In particular, I'm having problems when initializing arrays using the command:
ctypes.c_double*3
For example, if I want to obtain the XYZ coordinates of node #1 in the FE model, in regular Python I would write the following:
XYZType = ctypes.c_double*3
XYZ = XYZType()
node_num = 1
st.St7GetNodeXYZ(1,node_num,XYZ)
And this returns a variable XYZ which is an array of 3 doubles such that:
XYZ -> <straus_userfunctions.c_double_Array_3 at 0xc5787b0>
XYZ[0] = -0.7xxxxx -> (X_coord)
XYZ[1] = -0.8xxxxx -> (Y_coord)
XYZ[2] = -0.9xxxxx -> (Z_coord)
On the other hand, if I copy the exact same script into IronPython, the following error message appears:
Message: expected c_double, got c_double_Array_3
Obviously, if I change the variable XYZ to c_double, it becomes a plain double containing only a single entry, which corresponds to the first element of the array (in this case, the X coordinate).
This situation is quite annoying since, in FEM software, matrices and arrays are used everywhere. Consequently, I wanted to ask if anyone knows a simple fix for this situation.
I was thinking of using the memory address of the first element of the array to obtain the rest, but I'm not sure how to do that.
Thanks a lot. Gerard
I've found when working with IronPython you need to explicitly cast the "Array of three doubles" to a "Pointer to double". So if you're using Grasshopper with the Strand7 / Straus7 API you will need to add an extra bit like this:
import St7API
import ctypes
# Make the pointer conversion functions
PI = ctypes.POINTER(ctypes.c_long)
PD = ctypes.POINTER(ctypes.c_double)
XYZType = ctypes.c_double*3
XYZ = XYZType()
node_num = 1
# Cast arrays whenever you pass them to St7API from IronPython
St7API.St7GetNodeXYZ(1, node_num, PD(XYZ))
I don't have access to IronPython or Strand7 / Straus7 at the moment, but from memory that will do it. If that doesn't work for you, you can email Strand7 Support - you would typically get feedback on something like this within a day or so.
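If constructing the pointer directly still complains, ctypes.cast is another way to get a POINTER(c_double) view of the same array (a sketch that follows CPython's ctypes semantics; I have not verified it under IronPython):

import ctypes
import St7API

PD = ctypes.POINTER(ctypes.c_double)

XYZType = ctypes.c_double * 3
XYZ = XYZType()
node_num = 1

# cast the 3-element array to a plain double pointer before the API call
St7API.St7GetNodeXYZ(1, node_num, ctypes.cast(XYZ, PD))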

What is the easiest way to generate a Control Flow-Graph for a method in Python?

I am writing a program that tries to compare two methods. I would like to generate control-flow graphs (CFGs) for all matched methods and use a topological sort to compare the two graphs.
RPython, the translation toolchain behind PyPy, offers a way of grabbing the flow graph (in the pypy/rpython/flowspace directory of the PyPy project) for type inference.
This works quite well in most cases but generators are not supported. The result will be in SSA form, which might be good or bad, depending on what you want.
There's a Python package called staticfg which does exactly this -- generation of control flow graphs from a piece of Python code.
For instance, putting the first quick sort Python snippet from Rosetta Code in qsort.py, the following code generates its control flow graph.
from staticfg import CFGBuilder
cfg = CFGBuilder().build_from_file('quick sort', 'qsort.py')
cfg.build_visual('qsort', 'png')
Note that it doesn't seem to understand more advanced control flow like comprehensions.
I found that py2cfg has a better representation of the control flow graph (CFG) than the one from staticfg.
https://gitlab.com/classroomcode/py2cfg
https://pypi.org/project/py2cfg/
Let's take this function in Python:
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_gen = fib()
for _ in range(10):
    next(fib_gen)
(The CFG images rendered by staticfg and py2cfg are not reproduced here.)
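A minimal usage sketch for py2cfg, assuming it keeps staticfg's CFGBuilder interface (it started as a fork; I have not checked the current release) and that the Fibonacci snippet above is saved as fib.py:

from py2cfg import CFGBuilder

cfg = CFGBuilder().build_from_file('fib', 'fib.py')
cfg.build_visual('fib_cfg', 'png')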
http://pycallgraph.slowchop.com/ looks like what you need.
The Python trace module also has an option --trackcalls that can be an entry point for the call-tracing machinery in the stdlib.

Loading a document on OpenOffice using an external Python program

I'm trying to create a Python program (using pyUNO) to make some changes to an OpenOffice Calc sheet.
I've previously launched OpenOffice in "accept" mode to be able to connect from an external program. Apparently, it should be as easy as:
import uno

# get the uno component context from the PyUNO runtime
localContext = uno.getComponentContext()
# create the UnoUrlResolver
resolver = localContext.ServiceManager.createInstanceWithContext(
    "com.sun.star.bridge.UnoUrlResolver", localContext)
# connect to the running office
ctx = resolver.resolve("uno:socket,host=localhost,port=2002;"
                       "urp;StarOffice.ComponentContext")
smgr = ctx.ServiceManager
# get the central desktop object
DESKTOP = smgr.createInstanceWithContext("com.sun.star.frame.Desktop", ctx)

# the call isn't exactly like this; simplified for the example
DESKTOP.loadComponentFromURL('file.ods')
But I get an AttributeError when I try to access loadComponentFromURL. If I do a dir(DESKTOP), I see only the following attributes/methods:
['ActiveFrame', 'DispatchRecorderSupplier', 'ImplementationId', 'ImplementationName',
'IsPlugged', 'PropertySetInfo', 'SupportedServiceNames', 'SuspendQuickstartVeto',
'Title', 'Types', 'addEventListener', 'addPropertyChangeListener',
'addVetoableChangeListener', 'dispose', 'disposing', 'getImplementationId',
'getImplementationName', 'getPropertySetInfo', 'getPropertyValue',
'getSupportedServiceNames', 'getTypes', 'handle', 'queryInterface',
'removeEventListener', 'removePropertyChangeListener', 'removeVetoableChangeListener',
'setPropertyValue', 'supportsService']
I've read that there was a bug with the same symptom, but on OpenOffice 3.0 (I'm using OpenOffice 3.1 on Red Hat 5.3). I've tried the workaround stated there, but it doesn't seem to work.
Any ideas?
It has been a long time since I did anything with PyUNO, but looking at the code that worked last time I ran it back in '06, I did my load document like this:
def urlify(path):
    return uno.systemPathToFileUrl(os.path.realpath(path))

desktop.loadComponentFromURL(
    urlify(tempfilename), "_blank", 0, ())
Your example is a simplified version, and I'm not sure if you've removed the extra arguments intentionally or not.
If loadComponentFromURL isn't there, then the API has changed or there's something else wrong. I've read through your code, and it looks like you're doing all the same things I have.
I don't believe the dir() of the methods on the desktop object will be useful, as I think a __getattr__ method is being used to proxy the requests, and all the methods you've printed out are utility methods of the stand-in object for com.sun.star.frame.Desktop.
I think perhaps the failure could be that there's no method named loadComponentFromURL that has exactly 1 argument. Perhaps giving the 4 argument version will result in the method being found and used. This could simply be an impedance mismatch between Python and Java, where Java has call-signature method overloading.
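For example, applied to the question's simplified snippet (a sketch; it assumes file.ods sits in the current working directory and DESKTOP is the desktop object created earlier):

import os
import uno

def urlify(path):
    # loadComponentFromURL expects a file:// URL, not a plain path
    return uno.systemPathToFileUrl(os.path.realpath(path))

# full four-argument form: URL, target frame name, search flags, properties
DESKTOP.loadComponentFromURL(urlify('file.ods'), "_blank", 0, ())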
This looks like issue 90701: http://www.openoffice.org/issues/show_bug.cgi?id=90701
See also http://piiis.blogspot.com/2008/10/pyuno-broken-in-ooo-30-with-system.html and http://udk.openoffice.org/python/python-bridge.html
