My application embeds Python by loading the Python DLL dynamically. I need to obtain values from the dictionary of the script being executed.
pFnPyDict_GetItemString *pFGetItemString = NULL;
pFGetItemString = (pFnPyDict_GetItemString *)::GetProcAddress(hModulePython, "PyDict_GetItemString");
if (pFGetItemString)
{
    PyObject *pGet = pFGetItemString(pLocals, pVar);
    if (pGet)
    {
        // The following code will not work, as PyInt_Check is a macro
        // and therefore has no symbol for GetProcAddress to resolve:
        pFnPyInt_Check *pIsInt = (pFnPyInt_Check *)::GetProcAddress(hModulePython, "PyInt_Check");
        if (PyInt_Check(pGet))
        {
        }
        // Therefore I am using PyObject_IsInstance:
        pFnPyObject_IsInstance *pFIsInstance = (pFnPyObject_IsInstance*)::GetProcAddress(hModulePython, "PyObject_IsInstance");
        if (pFIsInstance)
        {
            int i = pFIsInstance(pGet, (PyObject*)&PyInt_Type); // <-- the problem is here; this call fails
        }
    }
}
How do I specify the second parameter to PyObject_IsInstance? Here I want to check whether the value in pGet is of type int.
Do you only want to check for ints? If so, you're better off using PyInt_Check instead.
Additional: some advice that you didn't ask for, but which might help you. :) Are you using C or C++? If it's the latter, consider using Boost.Python instead of the Python C API; it will make things a lot easier. Exposing functions and classes is trivial with Boost.
Surely the correct approach here is to include the header file and use PyInt_Check().
I assume that you have not included the Python C API header file because you don't want to use implicit linking. But you are making life hard for yourself by trying to work without the header file. Just because you include the header file doesn't mean that the DLL functions will be implicitly linked into your program. That will only happen if you actually call some of the functions in the DLL.
If you want to be 100% sure that you don't implicitly link to the DLL then simply ensure that you don't link the .lib file.
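If you do want to stay fully dynamic, note that PyInt_Type is exported from the Python DLL as data, so you can resolve its address with GetProcAddress just like the functions, rather than referencing &PyInt_Type through the import library. A minimal sketch (untested), reusing the pFIsInstance pointer from the question:

PyObject* pIntType = (PyObject*)::GetProcAddress(hModulePython, "PyInt_Type");
if (pIntType)
{
    // returns 1 if pGet is an int, 0 if it is not, -1 on error
    int i = pFIsInstance(pGet, pIntType);
}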
Is it possible to pass a python dict into a function expecting a nlohmann::json (nlohmann/json) object via cppyy? This question has to have come up by now, but I wasn't able to find anything on it.
Minimal example to reproduce (written without regard to performance/safety, please forgive):
test-json.h
#include <iostream>
#include <nlohmann/json.hpp>
using nlohmann::json;
void print_name_and_age(json j) {
    std::cout << j["name"] << "\n"
              << j["age"] << "\n";
}
test-cppyy.py
import cppyy
cppyy.include('test-json.h')
from cppyy.gbl import print_name_and_age
some_dict = {
    "name": "alfred",
    "age": 25
}
print_name_and_age(some_dict)
runs into
print_name_and_age(some_dict)
NotImplementedError: void ::print_name_and_age(nlohmann::json j) =>
NotImplementedError: could not convert argument 1 (this method cannot (yet) be called)
I would like to be able to pass a python dict into the C++ function, and receive it as a nlohmann::json object. I presume I would need to write some custom converter for this?
Design requirement/background (optional)
I have a reinforcement learning environment class (written in C++) that needs to accept some configuration to initialize it (in its constructor). Everything works fine passing a nlohmann::json object into the constructor while in the C++ domain, but I have a Python wrapper around the class too, written with cppyy, that provides similar functionality to the C++ interface.
Up till now, because of the aforementioned issue, I've been forced to accept a const std::map<std::string, float>& in the constructor instead of a nlohmann::json, since that is what a Python dict containing only str -> float mappings easily gets converted to by cppyy. But this obviously limits my input JSON files to containing only floats as values (my use case requires strings as keys, but strings, ints, floats and bools as values in the JSON file). I can of course write some pre-processing code to encode my heterogeneous Python dict into a homogeneous str -> float mapping on the Python side (and do the same in C++), but I'd like a cleaner solution, if possible.
Could anyone please help me achieve passing a Python dict into the cppyy-imported C++ function and have it converted into a nlohmann::json object in the C++ function? If this requires forking cppyy to add extra converter code, or is too much trouble, I presume I would need to use a std::map<std::string, std::any/std::variant> alternative? I haven't worked a lot with std::any/std::variant; would this even be possible (Python dict to map<string, any>), and is it the best alternative to a custom converter in terms of performance and clean, elegant code?
Environment:
Python 3.9.13
C++17 (I believe cppyy-2.4.0 doesn't support C++20 yet; I don't have any constraint on the C++ standard)
cppyy==2.3.1
cppyy-backend==1.14.8
cppyy-cling==6.25.3
cppyythonizations==1.2.1
MacOS Monterey, Apple M1
This has been answered at GitHub: Conversion from python dict into nlohmann::json
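A simple workaround until a proper converter is in place: serialize the dict on the Python side and parse it on the C++ side. A minimal sketch (the print_name_and_age_str helper is hypothetical, not taken from the linked answer):

// added to test-json.h: accept the JSON as a string and parse it
#include <string>
void print_name_and_age_str(const std::string& s) {
    print_name_and_age(json::parse(s)); // throws on malformed input
}

On the Python side you would then call print_name_and_age_str(json.dumps(some_dict)); cppyy converts the Python str to const std::string& automatically.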
New to C++ (Java guy).
I have a 3rd-party library that has a method sendMail(txt).
I don't want to test the library. I want to test my own method, so in order to do this, I need to mock the library calls.
My own method is looking like this:
#include "mailsender.h"
int run(txt){
analysis(txt);
...
...
int status = sendMail(txt);//sendMail is a 3rd party library call. i need to mock it.its not part of the unit test
return status;
}
In Java, the mail sender was an interface and it was injected into my class, so in a test I inject a mock.
What is good practice in C++ for mocking library calls?
I can wrap the 3rd-party library call in a class and inject this class, but I am looking for something simpler and for the common practice (maybe #ifndef).
I am familiar with googlemock.
googlemock allows me to mock classes; I am not aware of an option to mock a free-function call inside my tested method.
So I assume you have a 'global' function that is implemented in a library, for which you both include a header file (to get the declaration) and link the library (to get the implementation).
You obviously need to replace the implementation of the library with your own, one that does "nothing", and you can do this in 2 ways:
you replace the .dll (or .so) with your own implementation that has all the methods the 3rd-party library exposes. This is easy once you've written a new version of all the 3rd-party lib functions, but writing them all out can be a pain.
you remove the library temporarily and replace the calls you make to it with a .cpp source file that implements those functions. So you'd create your own sendMail() function in a .cpp file and include that in the program instead of the mailsender.h include.
The latter is easier, but you might also have to modify your program to not link with the 3rd-party lib. This can also require changing the #include, as some compilers (e.g. VC++) allow you to embed linker directives in the source. If yours does this, then you won't be able to stop the linker from including the 3rd-party lib.
The other option is to modify your code to use a different call instead of the sendMail call, e.g. test__sendMail(), that you implement yourself. Wrap this in a macro to conditionally include your function call, or the real one, depending on your build options; see the sketch below.
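A minimal sketch of that macro idea (UNIT_TEST is an illustrative build flag, not something the answer names):

#ifdef UNIT_TEST
#define SENDMAIL(txt) test__sendMail(txt) /* your stub, defined in test code */
#else
#define SENDMAIL(txt) sendMail(txt)       /* the real 3rd-party call */
#endif

run() would then do int status = SENDMAIL(txt); and the build options decide which implementation gets called.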
If this were a C++ library then you'd probably be able to use a mocking framework like the ones you're used to, but it sounds like it's a C library, and those simply provide a list of functions that you call directly in your code. You could wrap the library in your own class and use that instead of calling the 3rd-party lib functions directly.
There is a list of C mocking frameworks.
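For completeness, a minimal sketch of that wrapper approach (names are illustrative), which mirrors the Java interface-injection pattern the asker knows:

#include "mailsender.h" // for the real sendMail

struct IMailSender {
    virtual ~IMailSender() = default;
    virtual int send(const char* txt) = 0;
};

struct RealMailSender : IMailSender {
    int send(const char* txt) override { return sendMail(txt); } // the real library call
};

struct MockMailSender : IMailSender {
    int send(const char*) override { return 0; } // canned status; record calls if needed
};

int run(const char* txt, IMailSender& sender) {
    // analysis(txt); ... as in the question
    return sender.send(txt); // tests pass a MockMailSender, production a RealMailSender
}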
This is an old question with an already chosen answer, but maybe the following contribution can help someone else.
First solution
You still have to create a custom library to redefine the functions, but you do not need to change Makefiles to link to your "fake library": just run the test binary with LD_PRELOAD set to the path of the fake library, and that is the first place the dynamic loader will look when resolving the symbols.
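For example, a sketch of such a fake library (assumes a GNU/Linux toolchain and that sendMail has C linkage with this signature):

// fake_sendmail.cpp
// build: g++ -shared -fPIC -o libfakemail.so fake_sendmail.cpp
// run:   LD_PRELOAD=./libfakemail.so ./your_test_binary
extern "C" int sendMail(const char* txt)
{
    // mock implementation; the dynamic loader resolves sendMail here
    // before reaching the real library
    return 0;
}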
Second solution
The ld (GNU) linker has an option --wrap that lets you wrap a single function with another one provided by the user. This way you do not have to create a new library/class just to mock the behavior.
Here is the example from the man page
--wrap=symbol
Use a wrapper function for symbol. Any undefined reference to symbol will be resolved to "__wrap_symbol". Any undefined reference to "__real_symbol" will be resolved to symbol.
This can be used to provide a wrapper for a system function. The wrapper function should be called "__wrap_symbol". If it wishes to call the system function, it should call "__real_symbol".
Here is a trivial example:
void *
__wrap_malloc (size_t c)
{
    printf ("malloc called with %zu\n", c);
    return __real_malloc (c);
}
If you link other code with this file using --wrap malloc, then all calls to "malloc" will call the function "__wrap_malloc" instead. The call to "__real_malloc" in "__wrap_malloc" will call the real "malloc" function.
You may wish to provide a "__real_malloc" function as well, so that links without the --wrap option will succeed. If you do this, you should not put the definition of "__real_malloc" in the same file as "__wrap_malloc"; if you do, the assembler may resolve the call before the linker has a chance to wrap it to "malloc".
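Applied to the question's sendMail, a sketch (GNU ld only; assumes sendMail has C linkage and this signature):

// link with: g++ test.o -Wl,--wrap=sendMail -lmailsender
extern "C" int __real_sendMail(const char* txt); // resolved by ld to the real function

extern "C" int __wrap_sendMail(const char* txt)
{
    // mock behaviour for the test; call __real_sendMail(txt) to pass through
    return 0;
}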
Disclaimer: I wrote ELFspy.
Using ELFspy, the following code will allow you to fake/mock the sendMail function by replacing it with an alternative implementation.
void yourSendMail(const char* txt) // ensure same signature as sendMail
{
    // your mocking code
}

int main(int argc, char** argv)
{
    spy::initialise(argc, argv);
    auto sendMail_hook = SPY(&sendMail); // grab a hook to sendMail
    // use hook to reroute all program calls to sendMail to yourSendMail
    auto sendMail_fake = spy::fake(sendMail_hook, &yourSendMail);
    // call run here..
}
Your program must be compiled with position independent code (built with shared libraries) to achieve this.
Further examples are here:
https://github.com/mollismerx/elfspy/wiki
Though there is no interface keyword, you can use Abstract Base Classes for similar things in C++.
If the library you are using doesn't come with such abstractions, you can wrap it behind your own "interface". If your code separates construction of objects from usage (e.g. by IoC), you can either use this to inject a fake or use Mocks:
https://stackoverflow.com/questions/38493/are-there-any-good-c-mock-object-frameworks
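Combining this with the googlemock the asker already knows, a sketch (assumes an IMailSender abstract base class like the wrapper shown earlier, and gMock's newer MOCK_METHOD syntax):

#include <gmock/gmock.h>

struct GMockMailSender : IMailSender {
    MOCK_METHOD(int, send, (const char* txt), (override));
};

// in a test body:
// GMockMailSender mock;
// EXPECT_CALL(mock, send(testing::_)).WillOnce(testing::Return(0));
// run("hello", mock);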
I've been looking for a simple answer to this question, but it seems that I can't find one. I would prefer to stay away from any external libraries that aren't already included in Python 2.6/2.7.
I have 2 C header files that resemble the following:
//constants_a.h
const double constant1 = 2.25;
const double constant2 = -0.173;
const int constant3 = 13;
...
//constants_b.h
const double constant1 = 123.25;
const double constant2 = -0.12373;
const int constant3 = 14;
...
And I have a python class that I want to import these constants into:
#pythonclass.py
class MyObject(object):
    def __init__(self, mode):
        if mode == "a":
            # import from constants_a.h, like:
            # self.constant1 = constant1
            # self.constant2 = constant2
        elif mode == "b":
            # import from constants_b.h, like:
            # self.constant1 = constant1
            # self.constant2 = constant2
...
I have c code which uses the constants as well, and resembles this:
//computations.c
#include <stdio.h>
#include <math.h>
#include "constants_a.h"
// do some calculations, blah blah blah
How can I import the constants from the header file into the Python class?
The reason for the header files constants_a.h and constants_b.h is that I am using python to do most of the calculations using the constants, but at one point I need to use C to do more optimized calculations. At this point I am using ctypes to wrap the c code into Python. I want to keep the constants away from the code just in case I need to update or change them, and make my code much cleaner as well. I don't know if it helps to note I am also using NumPy, but other than that, no other non-standard Python extensions. I am also open to any suggestions regarding the design or architecture of this program.
In general, defining variables in a C header file is poor style. The header file should only declare objects, leaving their definition to the appropriate ".c" source file.
One thing you may want to do is declare the library-global constants like extern const whatever_type_t foo; and define (or "implement") them, i.e. assign values to them, somewhere in your C code (make sure you do this only once).
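For example, a sketch of that pattern using the question's own constants:

/* constants_a.h -- declaration only */
extern const double constant1;

/* constants_a.c -- the single definition */
const double constant1 = 2.25;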
Anyway, let's ignore how you do it. Just suppose you've already defined the constants and made their symbols visible in your shared object file "libfoo.so". Let us suppose you want to access the symbol pi, defined as extern const double pi = 3.1415926; in libfoo, from your Python code.
Now you typically load your object file in Python using ctypes like this:
>>> import ctypes
>>> libfoo = ctypes.CDLL("path/to/libfoo.so")
But then you'll see ctypes thinks libfoo.pi is a function, not a symbol for constant data!
>>> libfoo.pi
<_FuncPtr object at 0x1c9c6d0>
To access its value, you have to do something rather awkward -- casting what ctypes thinks is a function back to a number.
>>> pi = ctypes.cast(foo.pi, ctypes.POINTER(ctypes.c_double))
>>> pi.contents.value
3.1415926
In C jargon, this vaguely corresponds to the following thing happening: You have a const double pi, but someone forces you to use it only via a function pointer:
typedef int (*view_anything_as_a_function_t)(void);
view_anything_as_a_function_t pi_view = (view_anything_as_a_function_t)&pi;
What do you do with the pointer pi_view in order to use the value of pi? You cast it back to a const double * and dereference it: *(const double *)pi_view.
So this is all very awkward. Maybe I'm missing something, but I believe this is by design of the ctypes module: it's there chiefly for making foreign function calls, not for accessing "foreign" data. And exporting a pure data symbol from a loadable library is arguably rare.
And this will not work if the constants are only C macro definitions. There's in general no way you can access macro-defined data externally. They're macro-expanded at compile time, leaving no visible symbol in the generated library file, unless you also export their macro values in your C code.
I recommend using regular expressions (re module) to parse the information you want out of the files.
Building a full C parser would be huge, but if the files only contain variables and are reasonably simple/predictable/under your control, then what you need to write is straightforward.
Just watch out for 'gotcha' artifacts such as commented-out code!
I would recommend using some kind of configuration file readable by both the Python and C programs, rather than storing constant values in headers: e.g. a simple CSV or INI file, or even your own simple format of 'key:value' pairs. And then there will be no need to recompile the C program every time you'd like to change one of the values :)
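A sketch of the C-side reader for such a 'key:value' file (load_constant and the file layout are illustrative; the code is valid as both C and C++):

#include <stdio.h>
#include <string.h>

/* hypothetical helper: look up one named constant in a "name:value" file */
int load_constant(const char* path, const char* name, double* out)
{
    char key[64];
    double value;
    FILE* f = fopen(path, "r");
    if (!f) return 0;
    while (fscanf(f, " %63[^:]:%lf", key, &value) == 2) {
        if (strcmp(key, name) == 0) {
            *out = value;
            fclose(f);
            return 1; /* found */
        }
    }
    fclose(f);
    return 0; /* not found */
}

The Python side can parse the same file in a couple of lines, so both programs share one source of truth for the constants.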
I'd up-vote emilio, but I'm lacking rep!
Although you have requested to avoid other non-standard libraries, you may wish to take a look at Cython (Cython: C-Extensions for Python, www.cython.org), which offers the flexibility of Python coding with the raw execution speed of C/C++-compiled code.
This way you can use regular Python for everything, but handle the expensive parts of the code using its built-in C types. You can then convert your Python code into .c files too (or just wrap external C libraries themselves), which can then be compiled into a binary. I've achieved up to 10x speed-ups doing so for numerical routines. I also believe NumPy uses it.
I have statically declared a large structure in C, but I need to use this same data to do some analysis in Python. I'd rather not re-copy this data in to Python to avoid errors, is there a way to access (read only) this data directly in Python? I have looked at "ctypes" and SWIG, and neither one of them seems to provide what I'm looking for....
For example I have:
/* .h file */
typedef struct
{
    double data[10];
} NestedStruct;

typedef struct
{
    NestedStruct array[10];
} MyStruct;

/* .c file */
MyStruct the_data_i_want =
{
    {
        { {0} },          /* array[0]: all zeros */
        { {1, 2, 3, 4} }, /* array[1]: first four values set */
        { {0} },          /* array[2]: all zeros */
    }
};
Ideally, I'd like something that would allow me to get this into Python and access it via the_data_i_want.array[1].data[2] or something similar. Any thoughts? I got SWIG to "work" in the sense that I was able to compile/import a .so created from my .c file, but I couldn't access any of it through cvars. Maybe there's another way? It seems like this shouldn't be that hard....
Actually, I figured it out. I'm adding this because my reputation does not allow me to answer my own question within 8 hours, and since I don't want to have to remember it in 8 hours, I will add it now. I'm sure there's a good reason for this that I don't understand.
Figured it out.
First, I compiled my .c file into a shared library, with something like gcc -shared -fPIC -o my_lib.so my_lib.c.
Then, I used ctypes to define Python classes to hold the data:
from ctypes import *

class NestedStruct(Structure):
    _fields_ = [("data", c_double*10)]

class MyStruct(Structure):
    _fields_ = [("array", NestedStruct*10)]
Then, I loaded the shared library into python:
my_lib = cdll.LoadLibrary("my_lib.so")
Then, I used the "in_dll" method to get the data:
the_data_i_want = MyStruct.in_dll(my_lib, "the_data_i_want")
Then I could access it as if it were C: the_data_i_want.array[1].data[2]
Note I may have messed up the syntax slightly here because my actual data structure is nested 3 levels and I wanted to simplify for illustration purposes here.
You could also have read the data in C and written it to a JSON file, which you could then easily parse (usually there's a library that will even do that for you; in Python, import json) and access from any platform, with almost every language setup you could think of. And at the same time you could access your data in a way very similar to how you accessed it in your original C code.
Just a suggestion. This would also make your data more portable and versatile, I think, but you'll spend more time writing and parsing the JSON than if you just read the stream of data directly from your C code into Python.
I'm using the Python C API to call Python functions from my application. I'd like to present a list of functions that could be called and would like to be able to limit this list to just the ones with the expected number of parameters.
I'm happy that I can walk the dictionary to extract a list of functions and use PyCallable_Check to find out if they're callable, but I'm not sure how I can find out how many parameters each function is expecting?
I've found one technique involving Boost::Python, but would rather not add that for what (I hope!) will be a minor addition.
Thanks :)
Okay, so in the end I've discovered how to do it. User-defined Python functions have a member called func_code (in Python 3.0+ it's __code__), which itself has a member co_argcount, which is presumably what Boost::Python extracts in the example given by Christophe.
The code I'm using looks like this (it's heavily based on a documentation example of how to walk a Python dictionary):
PyObject *key, *value;
Py_ssize_t pos = 0;
while (PyDict_Next(pyDictionary, &pos, &key, &value)) {
    if (PyCallable_Check(value)) {
        PyObject* fc = PyObject_GetAttrString(value, "func_code");
        if (fc) {
            PyObject* ac = PyObject_GetAttrString(fc, "co_argcount");
            if (ac) {
                const int count = PyInt_AsLong(ac);
                // we now have the argument count; do something with this function
                Py_DECREF(ac);
            }
            Py_DECREF(fc);
        }
    }
}
Thanks anyway - that thread did indeed lead me in the right direction :)
Maybe this is helpful? (not tested, there could be relevant pieces of information along this thread)...
Your C code can call inspect.getargspec just like any Python code would (e.g. via PyObject_CallMethod or other equivalent ways) and get all the scoop about the signature of each function or other callable that it may care about.
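A sketch of that route (untested; Python 2 era API to match the func_code example above, with value being the callable from the dictionary walk):

PyObject* inspect = PyImport_ImportModule("inspect");
if (inspect) {
    PyObject* spec = PyObject_CallMethod(inspect, "getargspec", "O", value);
    if (spec) {
        // getargspec returns (args, varargs, keywords, defaults);
        // item 0 is the list of named argument names
        PyObject* args = PyTuple_GetItem(spec, 0); // borrowed reference
        if (args) {
            const Py_ssize_t count = PyList_Size(args);
            // use count, e.g. to filter the menu of callable functions
        }
        Py_DECREF(spec);
    }
    Py_DECREF(inspect);
}

Unlike co_argcount alone, this also exposes varargs/keywords, so you can tell whether a function accepts extra parameters.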