IronPython: Message: expected c_double, got c_double_Array_3

I’m currently developing a script using the Python script editor in Rhino. Since I’m working on a Windows machine, the script editor uses IronPython as its language.
In the same script, I want to interact with FE software (Straus7), which has a Python API. When doing so, I have run into problems because the ctypes module does not seem to work in IronPython the same way it does in regular Python. In particular, I'm having trouble initializing arrays with:
ctypes.c_double * 3
For example, if I want to obtain the XYZ coordinates of node #1 in the FE model, in regular Python I would write the following:
import ctypes

XYZType = ctypes.c_double * 3   # array type holding three doubles
XYZ = XYZType()
node_num = 1
st.St7GetNodeXYZ(1, node_num, XYZ)   # st is the Straus7 API module; the first argument is the model ID
And this returns a variable XYZ, an array of three doubles, such that:
XYZ -> <straus_userfunctions.c_double_Array_3 at 0xc5787b0>
XYZ[0] = -0.7xxxxx -> (X_coord)
XYZ[1] = -0.8xxxxx -> (Y_coord)
XYZ[2] = -0.9xxxxx -> (Z_coord)
On the other hand, if I copy the exact same script into IronPython, the following error message appears:
Message: expected c_double, got c_double_Array_3
Obviously, if I change the variable XYZ to a plain c_double, it becomes a scalar holding only a single value, which corresponds to the first element of the array (in this case, the X coordinate).
This situation is quite annoying because, as in all FEM software, matrices and arrays are used everywhere. So I wanted to ask if anyone knows a simple fix for this.
I was thinking of using the memory address of the first element of the array to obtain the rest, but I'm not sure how to do so.
Thanks a lot. Gerard

I've found that when working with IronPython you need to explicitly cast the "array of three doubles" to a "pointer to double". So if you're using Grasshopper with the Strand7 / Straus7 API, you will need to add an extra bit like this:
import St7API
import ctypes
# Make the pointer conversion functions
PI = ctypes.POINTER(ctypes.c_long)
PD = ctypes.POINTER(ctypes.c_double)
XYZType = ctypes.c_double*3
XYZ = XYZType()
node_num = 1
# Cast arrays whenever you pass them to St7API from IronPython
St7API.St7GetNodeXYZ(1, node_num, PD(XYZ))
I don't have access to IronPython or Strand7 / Straus7 at the moment, but from memory that will do it. If that doesn't work for you, you can email Strand7 Support - you would typically get feedback on something like this within a day or so.
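If the PD(XYZ) call is rejected by your ctypes build, ctypes.cast is another spelling of the same array-to-pointer conversion - a sketch with the same caveat, i.e. I can't test it against Strand7 right now:
import ctypes
import St7API

PD = ctypes.POINTER(ctypes.c_double)
XYZType = ctypes.c_double * 3
XYZ = XYZType()
node_num = 1
# ctypes.cast reinterprets the array's buffer as a plain double pointer
St7API.St7GetNodeXYZ(1, node_num, ctypes.cast(XYZ, PD))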

Related

Working Around the Windows-numpy astype(int) Bug in Pandas

I have a codebase I've been developing on a Mac (and running on Linux machines) based largely on pandas (and therefore numpy). Very commonly I type-cast with astype(int).
Recently a Windows-based developer joined our team. In an effort to make the code base more platform-independent, we're trying to gracefully handle the tricky issue whereby numpy on Windows maps int to a 32-bit type instead of the 64-bit type, which silently breaks longer integers.
On a Mac, we see:
ipdb> ids.astype(int)
id
1818726176 1818726176
1881879486 1881879486
2590366906 2590366906
284399109 284399109
299981685 299981685
370708200 370708200
387277023371 387277023371
387343898032 387343898032
406885699892 406885699892
5262665206 5262665206
544687374 544687374
6978317806 6978317806
Whereas on a Windows machine (in PowerShell), we see:
ipdb> ids.astype(int)
id
1818726176 1818726176
1881879486 1881879486
2590366906 -1704600390
284399109 284399109
299981685 299981685
370708200 370708200
387277023371 729966731
387343898032 796841392
406885699892 -1136193228
5262665206 967697910
544687374 544687374
6978317806 -1611616786
Other than using a sed call to change every astype(int) to astype(np.int64) (which would also require an import numpy as np at the top of every module where currently that doesn't exist), is there a way to do this?
In particular, I was hoping to map int to numpy.int64 somehow in a pandas option or something.
Thank you!
I'm not saying that this is a really good idea, but you can simply redefine int to whatever you want:
import numpy as np

x = 2384351503.0
print(np.array(x).astype(int))
# -2147483648 (on Windows, where int maps to a 32-bit C long)

old_int = int
int = np.int64
print(np.array(x).astype(int))
# 2384351503

int = old_int
print(np.array(x).astype(int))
# -2147483648
In the case you described I would, however, strongly prefer to fix the source code instead of redefining standard data types. It's a one-time effort and any IDE can do it easily.
Numpy is already implicitly imported by pandas, so it doesn't cost any additional time or resources. If you really want to avoid it (for whatever reason), you can use pd.Int64Dtype.type instead of np.int64 (see source).
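For example, any of these gives a stable 64-bit cast on every platform (a quick sketch, with a toy Series standing in for the ids from the question):
import numpy as np
import pandas as pd

ids = pd.Series([387277023371.0, 2590366906.0])  # toy stand-in
print(ids.astype(np.int64))            # explicit 64-bit cast
print(ids.astype(pd.Int64Dtype.type))  # the alternative mentioned above
print(ids.astype("int64"))             # dtype string alias, also avoids importing numpy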

How to write a custom debugging helper for nlohmann::basic_json?

I am faced with the task of writing a simple debugging helper for Qt Creator 4.13.1 (Qt 5.12.5, MSVC 2017 compiler) for the C++ JSON implementation nlohmann::basic_json (https://github.com/nlohmann/json).
An object of nlohmann::basic_json can contain the contents of a single JSON data type (null, boolean, number, string, array, object) at a time.
There's a dump() member function which can be used to output the current content formatted as a std::string regardless of the current data type. I always want to use this function.
What I've done so far:
I've looked at https://doc.qt.io/qtcreator/creator-debugging-helpers.html, as well as at the given example files (qttypes.py, stdtypes.py...).
I made a copy of the file personaltypes.py and told Qt Creator about its existence at
Tools / Options / Debugger / Locals & Expressions / Extra Debugging Helpers
The following code works and displays a "Hello World" in the debugger window for nlohmann::basic_json objects.
import dumper

def qdump__nlohmann__basic_json(d, value):
    d.putNumChild(0)
    d.putValue("Hello World")
Unfortunately, despite the documentation, I have no idea how to proceed from here.
I still have absolutely no clue how to correctly call basic_json's dump() function from the Python dumper (e.g. with d.putCallItem?).
I also have no starting point for how to format the returned std::string so that it is finally displayed in the debugger window.
I imagined something like this, but it doesn't work.
d.putValue("data")
d.putNumChild(1)
d.putCallItem('dump', '#std::string', value, 'dump')
I hope someone can give me a little clue so that I can continue thinking in the right direction.
For example, can I call qdump__std__string from stdtypes.py myself to interpret the std::string?
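For what it's worth, my current best guess, modeled on the MapNode example in personaltypes.py, looks like the sketch below - whether putCallItem has to sit inside a Children block is just an assumption on my part:
import dumper

def qdump__nlohmann__basic_json(d, value):
    # Placeholder for the value column; the dump() output should ideally go here.
    d.putValue("nlohmann::basic_json")
    d.putNumChild(1)
    if d.isExpanded():
        with dumper.Children(d):
            # Evaluate value.dump() in the debugged process and show the
            # returned std::string as a child item.
            d.putCallItem('dump', '#std::string', value, 'dump')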

How to know which simulator is used in cocotb testbench?

To test my Verilog design I'm using two different simulators: Icarus and Verilator. It works, but there are some variations between them.
For example, I can't read module parameters with Verilator, but it works with Icarus.
Is there a way to know which simulator is in use from the Python test file?
I would like to write something like this:
if SIM == 'icarus':
    self.PULSE_PER_NS = int(dut.PULSE_PER_NS)
    self.DEBOUNCE_PER_NS = int(dut.DEBOUNCE_PER_NS)
else:
    self.PULSE_PER_NS = 4096
    self.DEBOUNCE_PER_NS = 16777216
That way I could manage both simulators and compare them.
The running simulator's name (as a string) can be determined using cocotb.SIM_NAME. If cocotb was not loaded from a simulator, it is None.
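For example (a sketch; the exact SIM_NAME string is simulator-specific - Icarus reports something like "Icarus Verilog" - so match loosely):
import cocotb

@cocotb.test()
async def read_params(dut):
    # SIM_NAME is the product string reported by the running simulator
    if cocotb.SIM_NAME and cocotb.SIM_NAME.lower().startswith("icarus"):
        pulse_per_ns = int(dut.PULSE_PER_NS)
        debounce_per_ns = int(dut.DEBOUNCE_PER_NS)
    else:  # e.g. Verilator, where the parameters can't be read directly
        pulse_per_ns = 4096
        debounce_per_ns = 16777216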

Using Python modules in Swift and PythonKit

I was looking to get some help or clarification on the limitations of using PythonKit in Swift. Well, I say PythonKit - I actually installed the TensorFlow toolchain in Xcode, as I couldn't get PythonKit to work on its own as a single dependency (my MacBook would spin its wheels with fans blasting trying to import numpy).
Anyway, I wanted to say it's brilliant that I can use Python modules in Swift; it makes it much easier to start using Swift for more than just iOS apps.
My issue is that I have imported Python modules fine, but it's not clear how much functionality they will have. I assume ones like numpy will be pretty much the same, but as a scientist I use netCDF files a lot, so I have been trying to use netCDF4. It imports fine and I can load the data object and attributes etc., but I can't get the actual array out.
Here is an example:
import PythonKit
PythonLibrary.useVersion(3, 7)
let nc = Python.import("netCDF4")
var Data = nc.Dataset("ncfile path")
var lat_z = Data.variables["lat_z"][:]
The [:] causes an error that is picked up by Xcode; removing it allows the script to run, but it returns the variable object rather than the array. I can tack things onto the end to get the attributes, e.g. lat_z.long_name, but I'm not sure how to extract the array without using [:].
I am hoping this is just a syntax difference I need to learn with Swift (it's very much early days for me), or is it a limitation of PythonKit? I have not found anyone actually using netCDF4 (examples are mostly numpy and Matplotlib). If so, are there general limitations on using Python modules in Swift?
I am also trying to get Matplotlib to work, but I'm pretty sure the trouble there is due to using a command-line tool project in Xcode, which has no view, so it makes sense that it can't show me an image.
Any pointers, and maybe links to up-to-date documentation, would be great; there seem to have been some changes, e.g. import PythonKit rather than import Python.
Many Thanks
You can use the count property on a Python iterable, which is equivalent to len. You can index a numpy array in two ways: (i) with Swift range syntax and (ii) with numpy range objects:
import Foundation
import PythonKit
let np = Python.import("numpy")
let array = np.array([1, 2, 3, 4, 5])
print(array) // [1, 2, 3, 4, 5]
let subArray = array[0..<array.count]
print(subArray) // [1, 2, 3, 4, 5]
let subArray2 = array[np.arange(0, 2)]
print(subArray2) // [1, 2]
// Swift equivalent of Python ":"
let subArray3 = array[...]
You can also convert numpy arrays to Swift arrays and use Swift methods and subscripts:
let swiftArray = Array(array)
let swiftSubArray = swiftArray[0..<3]
print(swiftSubArray) // [1, 2, 3]
Note that you should prefer Python.len(...) over the count property when working with PythonObjects: because PythonKit's implementation does not conform PythonObject to RandomAccessCollection, count is O(n) and incurs a performance penalty.
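As for the original netCDF4 question: the [:] is plain Python subscription with a full slice, so the three reads below are equivalent in CPython, and the explicit __getitem__ spelling is the easiest one to reach from a bridge (a sketch, assuming a file containing a lat_z variable):
import netCDF4

ds = netCDF4.Dataset("ncfile path")   # placeholder path, as in the question
lat_z = ds.variables["lat_z"]
a = lat_z[:]                          # full read, slice syntax
b = lat_z[...]                        # same read, Ellipsis syntax
c = lat_z.__getitem__(slice(None))    # same read, explicit method call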

Python mmap ctypes - read only

I think I have the opposite of the problem described here. I have one process writing data to a log, and I want a second process to read it, but I don't want the 2nd process to be able to modify the contents. This is potentially a large file, and I need random access, so I'm using Python's mmap module.
If I create the mmap as read/write (for the 2nd process), I have no problem creating a ctypes object as a "view" of the mmap object using from_buffer. From a cursory look at the C code, it appears this is a cast, not a copy, which is what I want. However, this breaks if I make the mmap ACCESS_READ: it throws an exception saying that from_buffer requires write privileges.
I think I want to use the ctypes from_address() method instead, which doesn't appear to need write access. I'm probably missing something simple, but I'm not sure how to get the address of a location within an mmap. I know I can use ACCESS_COPY (so write operations show up in memory but aren't persisted to disk), but I'd rather keep things read-only.
Any suggestions?
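For concreteness, here is a minimal sketch of what I'm hitting (Record and data.log are made-up names). I know from_buffer_copy accepts a read-only buffer, but it copies, which defeats the purpose for a large file:
import ctypes
import mmap

class Record(ctypes.Structure):
    _fields_ = [("a", ctypes.c_double), ("b", ctypes.c_int)]

with open("data.log", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # rec = Record.from_buffer(mm)       # TypeError: buffer is not writable
    rec = Record.from_buffer_copy(mm, 0)  # works read-only, but copies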
I ran into a similar issue (unable to set up a read-only mmap), but I was using only the Python mmap module: Python mmap 'Permission denied' on Linux
I'm not sure it is of any help to you, since you don't want the mmap to be private?
Ok, from looking at the mmap C code, I don't believe it supports this use case. Also, I found that the performance pretty much sucks for my use case. I'd be curious what kind of performance others see, but I found that it took about 40 seconds to walk through a 500 MB binary file in Python. This was creating an mmap, turning a location into a ctypes object with from_buffer(), and using the ctypes object to determine the size of the record so I could step to the next one. I tried doing the same thing directly in C++ with MSVC. There, obviously, I could cast directly into an object of the correct type, and it was fast - less than a second (this is with a Core 2 Quad and an SSD).
I did find that I could get a pointer with the following:
from ctypes import pointer

firstHeader = CEL_HEADER.from_buffer(map, 0)  # CEL_HEADER is a ctypes Structure
pHeader = pointer(firstHeader)
# Now I can use pHeader[ind] to get a CEL_HEADER object
# at an arbitrary point in the file
This doesn't get around the original problem - the mmap isn't read-only, since I still need to use from_buffer for the first call. In this config, it still took around 40 sec to process the whole file, so it looks like the conversion from a pointer into ctypes structs is killing the performance. That's just a guess, but I don't see a lot of value in tracking it down further.
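For anyone curious, the size-based stepping looks roughly like this - a sketch, with a made-up two-field layout standing in for the real CEL_HEADER:
import ctypes

class CEL_HEADER(ctypes.Structure):  # stand-in layout, not the real one
    _fields_ = [("size", ctypes.c_uint32), ("kind", ctypes.c_uint32)]

def walk_records(mm):
    # Each header records the size of its payload, which tells us how far
    # to jump to reach the next header.
    offset = 0
    while offset + ctypes.sizeof(CEL_HEADER) <= len(mm):
        header = CEL_HEADER.from_buffer(mm, offset)  # still needs a writable mmap
        yield offset, header
        offset += ctypes.sizeof(CEL_HEADER) + header.size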
I'm not sure my plan will help anyone else, but I'm going to try to create a C module specific to my needs based on the mmap code. I think I can use the fast C handling to index the binary file, then expose only small parts of it at a time through calls into ctypes/Python objects. Wish me luck.
Also, as a side note: Python 2.7.2 was released today (6/12/11), and one of the changes is an update to the mmap code so that you can use a Python long to set the file offset. This lets you use mmap for files over 4 GB on 32-bit systems. See Issue #4681 here.
Ran into this same problem: we needed the from_buffer interface and wanted read-only access. From the Python docs (https://docs.python.org/3/library/mmap.html): "Assignment to an ACCESS_COPY memory map affects memory but does not update the underlying file."
If it's acceptable for you to use an anonymous file backing, you can use ACCESS_COPY.
An example: open two cmd.exe windows (the tagname argument is Windows-only). In the first one:
import ctypes
import mmap

mm_file_write = mmap.mmap(-1, 4096, access=mmap.ACCESS_WRITE, tagname="shmem")
mm_file_read = mmap.mmap(-1, 4096, access=mmap.ACCESS_COPY, tagname="shmem")
write = ctypes.c_int.from_buffer(mm_file_write)
read = ctypes.c_int.from_buffer(mm_file_read)
try:
    while True:
        value = int(input('enter an integer using mm_file_write: '))
        write.value = value
        print('updated value')
        value = int(input('enter an integer using mm_file_read: '))
        # read.value assignment doesn't update the anonymous-backed file
        read.value = value
        print('updated value')
except KeyboardInterrupt:
    print('got exit event')
In the other window, do:
import mmap
import struct
import time

mm_file = mmap.mmap(-1, 4096, access=mmap.ACCESS_WRITE, tagname="shmem")
i = None
try:
    while True:
        new_i = struct.unpack('i', mm_file[:4])
        if i != new_i:
            print('i: {} => {}'.format(i, new_i))
            i = new_i
        time.sleep(0.1)
except KeyboardInterrupt:
    print('Stopped . . .')
And you will see that the second process picks up values written through the ACCESS_WRITE map, but not those written through the ACCESS_COPY map.
