In Python there is a built-in function called dir which lists all the properties and methods of a given object. Is there a function like that in Dart, and if so, what is it called?
Python also has a built-in function called type which returns the type of its argument. Does Dart have an equivalent?
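For reference, this is the Python behaviour being asked about (Point and norm are just illustrative names):

```python
class Point:
    """A minimal example class."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def norm(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)
print(type(p).__name__)   # 'Point' -- the runtime type of p
print('norm' in dir(p))   # True -- dir() lists attributes and methods
```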
Dart does not have this kind of feature at runtime unless you are running your code on the Dart VM, in which case you can use dart:mirrors: https://api.dart.dev/stable/2.13.4/dart-mirrors/dart-mirrors-library.html
One reason for this restriction is that Dart is a compiled language: when compiling to native code or JavaScript, the compiler identifies which code is actually used and includes only what is needed for your program to run. It also makes optimizations based on how the code is used. Both kinds of optimization are much harder if every method must remain accessible.
The Dart VM can do this since it has access to all the code while running the program.
There is ongoing discussion about adding metaprogramming to Dart, which might be the solution for most cases where dart:mirrors is used today:
https://github.com/dart-lang/language/issues/1518
https://github.com/dart-lang/language/issues/1565
An alternative is to use some kind of builder that statically analyzes your code and generates classes based on it, which you then include in your project, e.g. a class containing the name and type of every member of the classes you need that information from.
I'm trying to create something and I don't know if it's possible or "clean":
From Python, call function A of my C++ code to compute something complicated.
The C++ code returns just a pointer to the Python side.
Do other things in Python...
From Python, call function B of my C++ code, which takes the pointer and other things as arguments.
I really don't need to use my complicated C++ class in my Python algorithm, which is why I just want to save the pointer in Python.
Does anyone have advice on how to do that?
Edit: In the end I wrapped the C++ class in Python, thank you everyone.
A pointer is just data and can be marshaled and sent to anything. It is, however, a very bad idea, because you would have to ensure that the pointer remains valid for as long as the Python side holds it. There is no way to check whether the pointer is still valid, so dereferencing a pointer received from an external party could crash your program.
A better idea in a lot of situations is to send a key into a table. When that key is sent back, it can be used to look up the needed information in that table, and a key that is no longer in the table can be handled gracefully. It is easiest to use std::map for the table. Of course, you could store the pointer itself in a container and check for it, but a string or number is easier to debug.
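The key-table idea is easy to prototype. Here is a minimal sketch in Python (in the real setup the table would live on the C++ side, e.g. a std::map, but the mechanics are the same; store/fetch/release are illustrative names):

```python
import itertools

# Hypothetical handle table: the native side would own the real objects
# and hand out small integer keys instead of raw pointers.
_registry = {}
_next_key = itertools.count(1)

def store(obj):
    """Register an object and return an opaque integer handle."""
    key = next(_next_key)
    _registry[key] = obj
    return key

def fetch(key):
    """Look up a handle; a stale handle fails cleanly instead of crashing."""
    try:
        return _registry[key]
    except KeyError:
        raise ValueError("stale or unknown handle: %d" % key) from None

def release(key):
    """Drop the object; later fetches of this handle raise ValueError."""
    _registry.pop(key, None)

h = store({"result": 42})
print(fetch(h)["result"])  # → 42
release(h)
```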
It would be better to create a class in C++ that stores the pointer as a private member, with member functions to access it.
Once the class is implemented, build the .so file of your library and import it in Python. That way you can simply use your C++ code from Python and will not have to hold on to the raw pointer.
I would like to know if it is possible (and if so, how) to call a routine from a DLL by the Proc address instead of by name - in Python.
Case in point: I am analyzing a malicious DLL, and one of the routines I want to call is not exported (so there is no name to reference it by); however, I do know the base address of the routine.
This is possible in C/C++ by casting the function pointer to a typedef'ed function prototype.
Is there a similar way to do this in Python?
If not, are there any concerns with modifying the export table of the DLL to make a known exported name map to the address?
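For what it's worth, ctypes can express the same cast as the C/C++ approach: a function prototype created with CFUNCTYPE can be instantiated directly from an integer address. In this runnable sketch, libc's abs stands in for the unexported routine, and its address is recovered from the loaded library rather than known in advance:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Recover a raw address; in the real scenario this would instead be the
# known base address of the unexported routine inside the loaded DLL.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
addr = ctypes.cast(libc.abs, ctypes.c_void_p).value

# Cast the bare address back to a typed function pointer -- the ctypes
# equivalent of casting to a typedef'ed prototype in C.
proto = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
func_by_addr = proto(addr)
print(func_by_addr(-7))  # → 7
```

On Windows, use ctypes.WINFUNCTYPE instead of CFUNCTYPE for stdcall routines.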
I am also looking for a way to get a function by its address in Python.
I still don't know how to do that, but I found an alternative approach: getting an unexported function by its ordinal (https://docs.python.org/2/library/ctypes.html), like this:
dll = CDLL("mydll.dll")
myfunc = dll[32]  # the function at ordinal 32
I was able to modify the export table, changing the base address of an already exported routine to my own routine.
This allowed me to execute the subroutine I was interested in via Python by using the exported name.
My question is going to use examples from Python, but it seems like this could be a general question.
I've been using load-time dynamic linking, but for various reasons (it's recommended in the link below) I'd like to dynamically load the Python library:
HINSTANCE hModPython = LoadLibrary(_T("Python27.dll"));
I'm able to load Py_Initialize and other functions from the DLL, but it's a dirty process:
int (*pPy_Initialize)(void);
pPy_Initialize = (int (*)(void))GetProcAddress(hModPython, "Py_Initialize");
pPy_Initialize();
In this conversation it's said that:
Macros can make using these pointers transparent to any C code that calls routines in Python’s C API.
My question is really how to do what this writer suggests when I'm going to be importing a wide variety of functions, with various signatures. It would be nice to use the signatures that are already in Python.h (including that header somehow).
I would do what the system linker does: construct a symbol table containing all the function names, then just initialise the pointers in that table. Function names can either be fixed string constants, or they might be read from the DLL itself (i.e. Win32 API to enumerate dll export functions?).
A significant drawback of the table approach, though, is that it cannot be used with existing code that calls the functions by name (pPy_Initialize();) -- you'd have to go through the pointer in the table, perhaps indexed via an enum (pPy[Initialize]();).
Different signatures can be handled by using different tables (a table per signature). Signatures can also be stored along with the names in some symbolic form, and then you'd wrap it all in some accessor magic that could parse and check them -- but that could quickly become too complex, like inventing yet another programming language.
IMHO, the only significant advantage of all that weird machinery over macros is that you might be able to load arbitrary DLLs with it. Other than that, I wouldn't go that route.
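The per-signature table idea is easy to sketch with ctypes. In this runnable illustration, libm stands in for Python27.dll, and the name/signature entries are illustrative assumptions rather than the real Python C API table:

```python
import ctypes
import ctypes.util

# name -> (restype, argtypes): a tiny stand-in for a full symbol table.
SIGNATURES = {
    "cos":  (ctypes.c_double, [ctypes.c_double]),
    "fabs": (ctypes.c_double, [ctypes.c_double]),
}

lib = ctypes.CDLL(ctypes.util.find_library("m") or None)

table = {}
for name, (restype, argtypes) in SIGNATURES.items():
    fn = getattr(lib, name)          # the GetProcAddress step
    fn.restype, fn.argtypes = restype, argtypes
    table[name] = fn                 # callers go through the table, not the name

print(table["fabs"](-2.5))  # → 2.5
```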
I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually.
IIRC, one of Python's original remits was as a prototyping language. Python is pretty liberal in allowing functions, functors and objects to be passed to functions and methods, whereas I suspect the same is not true of, say, C or Fortran.
What should I know about designing functions/classes which I envisage will have to interface with the compiled language? And how many of these potential problems are dealt with by libraries such as ctypes, bgen, SWIG, Boost.Python, Cython or Python SIP?
For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian, etc.) as Python functions, which can then be passed to and interpreted by the compiled fitting library. Passing and returning arrays is also essential.
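As a small illustration of compiled code calling back into a user-defined Python function, ctypes can wrap a Python callable in a C function pointer. Here libc's qsort plays the role of the compiled fitting routine, and the Python comparison function plays the role of a user-supplied model function:

```python
import ctypes
import ctypes.util

# libc's qsort stands in for a compiled routine that needs a Python callback.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# C signature of the callback: int (*cmp)(const int *, const int *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    # a and b point at the two elements being compared.
    return a[0] - b[0]

libc.qsort.restype = None
libc.qsort.argtypes = [ctypes.POINTER(ctypes.c_int), ctypes.c_size_t,
                       ctypes.c_size_t, CMPFUNC]

data = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(data, len(data), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print(list(data))  # → [1, 2, 3, 4, 5]
```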
Finally, a question to which I can really give a valuable answer :).
I have investigated f2py, Boost.Python, SWIG, Cython and Pyrex for my work (PhD in optical measurement techniques). I used SWIG extensively, Boost.Python some, and Pyrex and Cython a lot. I also used ctypes. This is my breakdown:
Disclaimer: This is my personal experience. I am not involved with any of these projects.
SWIG:
does not play well with C++. It should, but name-mangling problems in the linking step were a major headache for me on Linux & Mac OS X. If you have C code and want it interfaced to Python, it is a good solution. I wrapped GTS for my needs, and basically had to write a C shared library which I could connect to. I would not recommend it.
ctypes:
I wrote a libdc1394 (IEEE camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code at https://launchpad.net/pydc1394. It is a lot of work to convert headers to Python code, but then everything works reliably. This is a good way to interface an external library. ctypes is also in Python's standard library, so everyone can use your code right away. It is also a good way to play around with a new lib in Python quickly. I can recommend it for interfacing to external libs.
Boost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in Python, go for this. It is very easy to translate C++ class structures into Python class structures this way. I recommend it if you have C++ code that you need in Python.
Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with Cython that I used to do with SWIG or ctypes. It is also the best way if you have Python code that runs too slowly. The process is absolutely fantastic: you convert your Python modules into Cython modules, build them, and keep profiling and optimizing as if it were still Python (no change of tools needed). You can then mix in as much (or as little) C code as you like with your Python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop.
Timings: ctypes has the highest call overhead (~700 ns), followed by Boost.Python (322 ns), then closely by SWIG (290 ns). Cython has the lowest call overhead (124 ns) and the best feedback on where time is spent (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead. It is therefore easiest and most productive to get Python code fast by profiling and using Cython.
Summary: For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question.
Edit: I forgot to mention: for numerical purposes (that is, connecting to NumPy), use Cython; it has support for that (because they basically develop Cython for this purpose). So this should be another +1 for your decision.
I haven't used SWIG or SIP, but I find writing Python wrappers with boost.python to be very powerful and relatively easy to use.
I'm not clear on what your requirements are for passing types between C/C++ and python, but you can do that easily by either exposing a C++ type to python, or by using a generic boost::python::object argument to your C++ API. You can also register converters to automatically convert python types to C++ types and vice versa.
If you plan to use boost.python, the tutorial is a good place to start.
I have implemented something somewhat similar to what you need. I have a C++ function that accepts a Python function and an image as arguments, and applies the Python function to each pixel in the image.
Image* unary(boost::python::object op, Image& im)
{
    Image* out = new Image(im.width(), im.height(), im.channels());
    for(unsigned int i = 0; i < im.size(); i++)
    {
        // Call the Python object on each pixel and convert the result to float.
        (*out)[i] = boost::python::extract<float>(op(im[i]));
    }
    return out;
}
In this case, Image is a C++ object exposed to Python (an image with float pixels), and op is a Python-defined function (or really any Python object with a __call__ attribute). You can then use this function as follows (assuming unary is located in a module called image that also contains Image and a load function):
import image
im = image.load('somefile.tiff')
double_im = image.unary(lambda x: 2.0*x, im)
As for using arrays with boost, I personally haven't done this, but I know the functionality to expose arrays to python using boost is available - this might be helpful.
The best way to plan for an eventual transition to compiled code is to write the performance sensitive portions as a module of simple functions in a functional style (stateless and without side effects), which accept and return basic data types.
This will provide a one-to-one mapping from your Python prototype code to the eventual compiled code, and will let you use ctypes easily and avoid a whole bunch of headaches.
For peak fitting, you'll almost certainly need to use arrays, which will complicate things a little, but is still very doable with ctypes.
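A minimal sketch of the array-passing part with ctypes (just the marshalling step, with no actual compiled routine on the other end):

```python
import ctypes

# Marshal a Python list into a contiguous C double array and back --
# the round trip a ctypes-wrapped fitting routine would rely on.
values = [1.5, 2.5, 3.5]
arr = (ctypes.c_double * len(values))(*values)

# A compiled routine would receive this as a double* plus a length;
# the memory layout is plain C, so reading it back is trivial.
print(list(arr))  # → [1.5, 2.5, 3.5]
print(ctypes.sizeof(arr) == len(values) * ctypes.sizeof(ctypes.c_double))  # → True
```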
If you really want to use more complicated data structures, or modify the passed arguments, SWIG or Python's standard C-extension interface will let you do what you want, but with some amount of hassle.
For what you're doing, you may also want to check out NumPy, which might do some of the work you would want to push to C, as well as offering some additional help in moving data back and forth between Python and C.
f2py (part of numpy) is a simpler alternative to SWIG and boost.python for wrapping C/Fortran number-crunching code.
In my experience, there are two easy ways to call into C code from Python code. There are other approaches, all of which are more annoying and/or verbose.
The first and easiest is to compile a bunch of C code as a separate shared library and then call functions in that library using ctypes. Unfortunately, passing anything other than basic data types is non-trivial.
The second easiest way is to write a Python module in C and then call functions in that module. You can pass anything you want to these C functions without having to jump through any hoops. And it's easy to call Python functions or methods from these C functions, as described here: https://docs.python.org/extending/extending.html#calling-python-functions-from-c
I don't have enough experience with SWIG to offer intelligent commentary. And while it is possible to do things like pass custom Python objects to C functions through ctypes, or to define new Python classes in C, these things are annoying and verbose and I recommend taking one of the two approaches described above.
Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran.
In C you cannot pass a function as an argument to a function, but you can pass a function pointer, which is just as good as a function.
I don't know how much that would help when you are trying to integrate C and Python code but I just wanted to clear up one misconception.
In addition to the tools above, I can recommend using Pyrex (for creating Python extension modules) or Psyco (a JIT compiler for Python).