I'm using llvm-py to create a DIY compiler for an artificial language, and I need a virtual method table in the global scope. My idea is to have several arrays of function pointers (one for each class). Unfortunately there is no LLVM IR builder for the global scope, and I cannot use ptrtoint to get a uniform type for all array elements (otherwise I would store function addresses as 64-bit ints and cast them to the appropriate types before calling). Do you know of any reasonable solution? It can also be illustrated with the C++ LLVM API, because llvm-py is very similar.
Indeed, IRBuilder does not expose an interface for that, but you can create the global manually, e.g. by using the constructors of GlobalVariable. You can store all the pointers in one array by using conversion constant expressions, i.e. by generating:
@global = global [4 x i64*] [
    i64* bitcast (void()* @f to i64*),
    i64* bitcast (float(i32)* @g to i64*),
    ...
]
So, use ConstantExpr::getBitCast() to generate the casts from each Function to the array element type (which should preferably be a pointer; I don't see the advantage in storing an i64). Then create a new GlobalVariable in the module and initialize it with all the constant expressions you've created.
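In llvm-py terms, a rough sketch could look like the following (written from memory of the llvm-py API, so the exact method names, e.g. bitcast as a constant-expression helper on a function, may need double-checking; here the common element type is i8* instead of i64*):

from llvm.core import Module, Type, Constant

mod = Module.new('vtables')

# two example methods with different signatures
f = mod.add_function(Type.function(Type.void(), []), 'f')
g = mod.add_function(Type.function(Type.float(), [Type.int(32)]), 'g')

# cast each function to a common pointer type via constant expressions
elem_ty = Type.pointer(Type.int(8))
elems = [f.bitcast(elem_ty), g.bitcast(elem_ty)]

# create the global array and attach the constant initializer
vtable = mod.add_global_variable(Type.array(elem_ty, len(elems)), 'vtable')
vtable.initializer = Constant.array(elem_ty, elems)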
In the module source, to create new classes I have to use declarative-style constructions like this:
class_<MyClass>("MyClass")
.def("my_method", &MyClass::my_method)
.def("my_second_method", &MyClass::my_second_method);
But what if I need to create classes at run-time? For example, a module function should return a new class. What should I do?
In fact, I need to create new types on the fly; in my case these are typed fixed dictionaries and typed arrays. I need this to optimize my existing code, which has an overhead problem. My project uses data types that are transmitted over the network, which is the reason to create fixed-dict classes at runtime (every class stores individual fields with specified names, like a struct in C++), as well as typed arrays (each of which holds an element type and an array of data of that type).
In Python code it would look something like this:
from MyCPPLib import DeclareFixedDictonary, DeclareTypedArray
# new user-defined data type
NewClass = DeclareFixedDictonary([('field_1', int32), ('field_2', String)])
# instance of this type
new_instance = NewClass(4, "Hi")
new_instance['field_1'] = 6
new_instance['field_2'] = "qweqwe"
# ----------------------------------------------
NewArrayClass = DeclareTypedArray(NewClass)
new_array_instance = NewArrayClass([new_instance, NewClass()])
# ----------------------------------------------
NewComplexClass = DeclareFixedDictonary([('f1', float), ('f2', NewArrayClass), ('f3', NewClass)])
# ... etc ...
I think that if I implement these features in C++ using Boost.Python and/or the Python C API, I will get the maximum speed-up for my types.
My problem is creating new classes at runtime (from another function; in the example above these are DeclareFixedDictonary and DeclareTypedArray).
Following the docs, to declare a new Python class with Boost I have to do something like this:
BOOST_PYTHON_MODULE(DataTypes)
{
    class_<DataTypesClass>("DataTypesClass")
        .def("set", &DataTypesClass::set)
        .def("get", &DataTypesClass::get)
        .def("set_item", &DataTypesClass::SetItem)
        .def("get_item", &DataTypesClass::GetItem)
    ;
}
But these are module classes: they can only be created in my module, and I can't use class_ inside another module function; it gives an error. Maybe Boost has an API to create new classes at run-time? Maybe by creating type instances and dynamically filling in their attributes, as sketched below? Or maybe the best way to do this is code generation? But that is harder than using Python's reflection system.
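In pure Python, what I mean by that idea would look roughly like this (declare_fixed_dict and the field names are just placeholders, not an existing API); my question is how to achieve the equivalent from Boost.Python or the Python C API:

# Pure-Python illustration only: declare_fixed_dict and the field names are
# placeholders, not an existing API.
def declare_fixed_dict(fields):
    def __init__(self, *values):
        for (name, _field_type), value in zip(fields, values):
            setattr(self, name, value)
    return type('FixedDict', (object,), {'__init__': __init__, '_fields': fields})

NewClass = declare_fixed_dict([('field_1', int), ('field_2', str)])
obj = NewClass(4, "Hi")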
I have a dictionary and I would like to know whether it is possible to use it as a parameter of a kernel.
For instance, I have the CUDA kernel signature
__global__ void calTab(Tableaux)
Tableaux is a C structure corresponding to
typedef struct
{
    float *Tab1;
    float *Tab2;
} Tableaux;
In Python, Tableaux corresponds to the dictionary below:
Tableaux={}
Tableaux["Tab1"]=[]
Tableaux["Tab2"]=[]
Is it possible to use the dictionary as the C structure without using a C API?
Thank you in advance
None of what you are proposing is possible. In PyCUDA, you cannot:
1. Pass a dictionary to a kernel
2. Pass a list to a kernel
3. Directly translate a dictionary to a C++ structure in device code
4. Directly translate a list to a C++ linear array in device code
PyCUDA can use Python classes as C++ structures and it has a numpy-like array type for use on the GPU. So points 3 and 4 are possible, but not as you would like to do them. Both techniques are discussed in the documentation: here for gpuarray and here for structures.
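As a rough sketch of the pointer-packing approach applied to your example (assumptions: a 64-bit build so that np.uintp matches the device pointer size, and the kernel takes a Tableaux* rather than a Tableaux by value, which is easier to arrange from Python):

import numpy as np
import pycuda.autoinit                      # creates a CUDA context
import pycuda.driver as cuda
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

mod = SourceModule("""
typedef struct { float *Tab1; float *Tab2; } Tableaux;

__global__ void calTab(Tableaux *t)
{
    t->Tab1[threadIdx.x] += t->Tab2[threadIdx.x];
}
""")
calTab = mod.get_function("calTab")

# Device-side copies of the two "lists"
tab1 = gpuarray.to_gpu(np.arange(16, dtype=np.float32))
tab2 = gpuarray.to_gpu(np.ones(16, dtype=np.float32))

# Pack the two device pointers into a Tableaux struct on the host, then copy it over
struct_host = np.array([int(tab1.gpudata), int(tab2.gpudata)], dtype=np.uintp)
struct_gpu = cuda.mem_alloc(struct_host.nbytes)
cuda.memcpy_htod(struct_gpu, struct_host)

calTab(struct_gpu, block=(16, 1, 1), grid=(1, 1))
print(tab1.get())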
Q: Is there something like numpy.(C/C++)vectorize?
Suppose I have a numpy vector and a C++ DLL (with a C extern interface) containing a function that accepts a float and returns a float (no array!). With ctypes I can wrap the DLL function and apply numpy.vectorize to it, so I can write:
myfunc = numpy.vectorize(lambda x: ctypes_wrapped_DLL_fun(x))
x_vec = numpy.linspace(...) # a vector
print myfunc(x_vec)
Works like a charm. Of course this function call is slow, because every single value has to pass from the C-ish numpy context through Python, through the DLL, and back.
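For reference, the ctypes wrapping mentioned above looks roughly like this (mylib.dll and my_func are placeholder names for the DLL and its exported float-to-float function):

import ctypes
import numpy

lib = ctypes.CDLL("mylib.dll")            # placeholder DLL name
lib.my_func.argtypes = [ctypes.c_float]   # float in ...
lib.my_func.restype = ctypes.c_float      # ... float out

ctypes_wrapped_DLL_fun = lib.my_func
myfunc = numpy.vectorize(ctypes_wrapped_DLL_fun)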
I have found a mind-boggling array of ways to interface C/C++ with numpy. Essentially only the chapter about Cython, 2.8.5.2 "Numpy Support", gives an indication of how to deal with vectors without going through the numpy/Python/C interfaces for every single element of the vector. The downside of this is that I would have to change the interface of the DLL from simple floats to arrays (and it tended to crash, because I probably did things wrong).
numpy.ctypeslib also seems not to be what I need, because it would also require the DLL to have an array interface.
Are there any good examples of that? I do not mind using Cython or another solution that requires a compiler. I would strongly prefer not to change the interface of the DLL.
Let us say I have a Cython extension type named Point.
Then, I need to create a class called Points, which has as one of its attributes, a Python list of Point objects. Now, Python lists, as far as I understand, cannot be attributes of an extension type, as only C data types can be.
I want to use a Python list to hold the Cython extension type instances, because I have heard that it's the easiest way to do so, and access to Python lists from Cython is quite efficient.
Thus, Points has to be a normal Python/Cython class, not an extension type, correct? That way, I can do the following:
class Points:
    def __init__(self, num_points):
        self.points = [Point() for x in range(num_points)]
Do I understand correctly?
No, there is no such limitation: Cython extension types can have arbitrary Python objects as attributes. For example, you could declare your Points class in the following way:
cdef class Points:
    cdef list points

    def __init__(self, num_points):
        self.points = [Point() for x in range(num_points)]
Note that you need to declare all attributes of Cython extension types in advance. If you do not want to restrict the attribute to a specific Python object you can also use object instead of list. If you want to expose your points attribute to Python, i.e. allow direct access from Python, you need to declare it as public (i.e. cdef public list points).
Have a look at the documentation about attributes of extension types for more details; the second example in the properties section also shows how an extension type can wrap a list without providing direct access.
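For instance, assuming the attribute is declared as cdef public list points and the extension is compiled into a (hypothetical) module named points_mod, direct access from Python then works as usual:

from points_mod import Points, Point   # hypothetical compiled extension module

ps = Points(3)
print(len(ps.points))        # reading the public attribute from Python
ps.points.append(Point())    # it is an ordinary Python list, so normal list operations work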
I don't know about any such restriction with Cython extension types. But from the standard documentation on general extension types, it's clear that you can have arbitrary PyObject members in an extension type (see the Noddy2 example with unrestricted PyObject members first and last, which are later refined to be restricted to string types). If you went this route, you could expose a member as a PyObject and pass it a list by convention, or fully restrict it to be a list.
I am trying to create a Mesh object in Python. I am using the Python bindings, which are installed from the following web page. As far as the C++ code is concerned, we can do it as follows:
MeshType::Pointer mesh = MeshType::New();
I am very new even to ITK and have no idea how to create it. In the C++ documentation, the constructor of Mesh is said to require one argument, TPixelType, and I was unable to locate that as well.
Could anybody please help me with this?
Thanks
If I were you, I would take a look at the Python bindings that come with ITK 4.0. You can get access to them by turning on the WRAP_ITK_PYTHON option in CMake.
Once you compile ITK with the Python bindings turned on, you can create two mesh types out of the box:
import itk
meshType2D = itk.Mesh.D2Q.New()
meshType3D = itk.Mesh.D3Q.New()
Alternatively, you can explicitly instantiate your classes as follows:
import itk
meshType2D = itk.Mesh[itk.D, 2, itk.QuadEdgeMeshTraits.D2BBFF]
meshType3D = itk.Mesh[itk.D, 3, itk.QuadEdgeMeshTraits.D3BBFF]
This will give you 2- and 3-dimensional meshes with double-typed pixel values and default mesh traits. As far as pixel types in ITK go, these amount to the basic C++ variable types: double, float, unsigned int, etc. These basic types are wrapped in Python and can be found in the itk namespace: itk.D, itk.F, itk.UI, etc.
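As a rough usage sketch (written from memory of the wrapping, which mirrors the C++ itk::Mesh API, so the exact calls such as SetPoint may need checking against your ITK version):

import itk

mesh = itk.Mesh.D3Q.New()          # 3-dimensional mesh with double pixel values

p = itk.Point[itk.D, 3]()          # points mirror the C++ itk::Point API
p[0], p[1], p[2] = 0.0, 1.0, 2.0
mesh.SetPoint(0, p)

print(mesh.GetNumberOfPoints())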