Objective:
Execute a string of C(++) code with some kind of function comparable to Python's exec().
Example in Python:
exec('print("hello world")')
#out:
#hello world
Question:
Is there a C++ equivalent of Python's exec()?
You want to execute C language statements from a string; that is not possible in plain C or C++.
Why?
Because C is a compiled language: the program is compiled ahead of time and only then executed, so there is no interpreter available at runtime. It is possible in Python because Python is an interpreted language: the source is compiled to bytecode and executed by the interpreter at runtime.
Hope this helps.
Well, technically, you (maybe) can. But it's hardly a justifiable effort; there are other scripting languages that can be integrated into C++, for example Lua. Just to think it through, the following could work if you have a method int executeCode(std::string code):
Copy that string into a template that wraps it in some function. The following is an idea of such a template:
int userFunc()
{
%code%
}
Write the template to a file
Build a dynamic library (e.g. a .dll on Windows) from that file (call the compiler and linker via system or OS-specific methods)
Load the dynamic library into your running program (again, OS-specific methods)
Load the required method userFunc and execute it.
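A minimal sketch of those steps for POSIX systems might look like the following; it assumes g++ is on the PATH and uses dlopen/dlsym (link with -ldl), and the file names are just placeholders:

#include <cstdlib>
#include <fstream>
#include <string>
#include <dlfcn.h>

int executeCode(const std::string& code) {
    // 1. Wrap the code in the userFunc template and write it to a file.
    std::ofstream src("user_code.cpp");
    src << "extern \"C\" int userFunc() {\n" << code << "\n}\n";
    src.close();

    // 2. Build a shared library from it.
    if (std::system("g++ -shared -fPIC user_code.cpp -o user_code.so") != 0)
        return -1;

    // 3. Load the library into the running program.
    void* lib = dlopen("./user_code.so", RTLD_NOW);
    if (!lib) return -1;

    // 4. Look up userFunc and execute it.
    auto func = reinterpret_cast<int (*)()>(dlsym(lib, "userFunc"));
    if (!func) { dlclose(lib); return -1; }
    int result = func();
    dlclose(lib);
    return result;
}

Note the extern "C" in the wrapper: it keeps the symbol name unmangled so dlsym can find it.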
#include <cstdlib>

int main(void) {
    std::system("python -c \"print('hello world')\"");
    return 0;
}
That only shells out to an external Python interpreter, of course; it doesn't execute C++ from a string.
Setup / Context:
A C++ executable (1) dynamically links a C++ shared library emb.so (2),
which in turn runs an embedded Python interpreter (3) that calls custom Python functions (4).
The Python interpreter (3) is embedded using pybind11.
A call to a Python function from C++ can be simplified as:
py::module::import("test").attr("my_func")();
The executable (1) has a main loop in which it does some other work, but it calls the Python function at regular intervals, roughly as sketched below.
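A minimal sketch of that structure, assuming pybind11's embed header (the module test and function my_func are from the question; everything else is illustrative):

#include <pybind11/embed.h>
namespace py = pybind11;

int main() {
    py::scoped_interpreter guard;  // (3) start the embedded interpreter
    while (true) {
        // ... other work in the main loop ...
        py::module::import("test").attr("my_func")();  // (4) periodic call
    }
}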
Observation:
Variant 1: If I block inside the Python function, the Python code executes smoothly and quickly, but the main executable loop is obviously blocked.
Variant 2: If I create a Python thread inside the Python function so that the call returns immediately, the main executable keeps running, but the Python code runs extremely slowly (I can watch the iterations of a for loop with a print, one by one).
Question:
Why is Variant 2 so slow and how can I fix it?
My guess is that this has something to do with the GIL. I tried to release the GIL inside the wrapper emb.so before returning to the main loop, but I wasn't able to do that without a segfault.
Any ideas?
It turned out that this is very much related to the following question:
Embedding python in multithreaded C application
(see answer https://stackoverflow.com/a/21365656/12490068)
I solved the issue by explicitly releasing the GIL after calling embedded Python code, like this:
PyGILState_STATE state = PyGILState_Ensure();
// Call Python/C API functions...
PyGILState_Release(state);
If you are doing this in a function or other C++ scope and you are creating Python objects, you have to make sure that the Python objects' destructors are not called after releasing the GIL.
So don't do:
int my_func() {
    PyGILState_STATE gil_state = PyGILState_Ensure();
    py::int_ ret = pymodule->attr("GiveMeAnInt")();
    PyGILState_Release(gil_state);
    return ret.cast<int>();  // ret is destroyed after the GIL was released!
}
but instead do
int my_func() {
    int ret_value;
    PyGILState_STATE gil_state = PyGILState_Ensure();
    {
        py::int_ ret = pymodule->attr("GiveMeAnInt")();
        ret_value = ret.cast<int>();
    }  // ret is destroyed here, while the GIL is still held
    PyGILState_Release(gil_state);
    return ret_value;
}
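If you are already on pybind11, as in this question, another possible sketch uses its RAII guard py::gil_scoped_acquire, which releases the GIL in its own destructor; since locals are destroyed in reverse order of construction, ret is destroyed while the GIL is still held (pymodule and GiveMeAnInt are the hypothetical names from above):

#include <pybind11/pybind11.h>
namespace py = pybind11;

int my_func(py::module& pymodule) {
    py::gil_scoped_acquire gil;  // acquire the GIL; released on scope exit
    py::int_ ret = pymodule.attr("GiveMeAnInt")();
    return ret.cast<int>();      // ret is destroyed before gil, i.e. under the GIL
}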
I am basically looking to see if it's possible to compile Python code into a C++ program such that a single binary is produced, and then call the (compiled) Python code as a function from within the C++ code.
Background: I have a C++ code that does some work and produces data that I want to plot. I then wrote a separate Python script using SciPy that reads in the output data, processes it, and plots it to files. This all works as it is.
Basically, I am picturing:
void plotstuff() {
    my_python_func();  // This is the Python script compiled into the C++ binary
}
I don't need to pass anything between the Python code and the C++ code other than being sure it's executed in the same directory. It may make things easier if I can pass string arguments, but again, not essential.
Can this be accomplished? Or is it generally a bad idea and I should just stick to having my C++ and python separate?
Yes: this is called embedding.
The best place to start is the official Python documentation on the topic. There's some sample code that shows how to call a Python function from C code.
Here's an extremely basic example, from the Python 3 docs:
#define PY_SSIZE_T_CLEAN
#include <Python.h>

int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    PyMem_RawFree(program);
    return 0;
}
Make a native extension (Python 2 or Python 3) out of your C++ code, and have your Python program import it. Then use py2exe or your favorite platform's counterpart to turn the Python program and your native extension into an executable.
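For reference, a minimal sketch of such a native extension for Python 3 (the module name work and the function do_work are illustrative, not an established API):

#include <Python.h>

static PyObject* do_work(PyObject* self, PyObject* args) {
    // ... call into the C++ code here ...
    Py_RETURN_NONE;
}

static PyMethodDef WorkMethods[] = {
    {"do_work", do_work, METH_NOARGS, "Run the C++ work."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef workmodule = {
    PyModuleDef_HEAD_INIT, "work", NULL, -1, WorkMethods
};

PyMODINIT_FUNC PyInit_work(void) {
    return PyModule_Create(&workmodule);
}

After building it, import work and call work.do_work() from the Python program before bundling everything with py2exe.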
I want to interface between Python and C++. I am initially trying to do it in the simplest case, e.g. finding a mean. My main is in Python:
1) Function getInput (Python)
2) Function CalculateMean (C++)
3) Function displayMean (Python)
My Python file (main.py) looks like this:

def getInput(x, y):
    # Here I want to call the function CalculateMean written in the .cpp file
    ...

def displayMean(m):
    ...

CalcMean.h:

int CalculateMean(int x, int y);

CalcMean.cpp:

int CalculateMean(int x, int y)
{
    int mean = (x + y) / 2;
    return mean;
}
I have tried using SWIG, but I am a beginner and unable to solve it. Any basic help will be highly appreciated.
You need a simple SWIG interface file that includes your CalcMean.h and produces a module; put something like this into a mymodule.i file:
%module mymodule
%{
#include "CalcMean.h"
%}
%include "CalcMean.h"
then you can run SWIG with something like

swig -python -c++ -o mymodule_wrap.cxx mymodule.i

this will produce a C++ wrapper source file to be compiled and linked into an _mymodule.so/.pyd, as well as the required .py file for importing the module.
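A possible build and quick test on Linux might look like this (flags and include paths vary by platform; python3-config is assumed to be available):

swig -python -c++ -o mymodule_wrap.cxx mymodule.i
g++ -shared -fPIC mymodule_wrap.cxx CalcMean.cpp $(python3-config --includes) -o _mymodule.so
python3 -c "import mymodule; print(mymodule.CalculateMean(4, 6))"

Note that the shared library must be named _mymodule.so (with the leading underscore) so that the generated mymodule.py can find it.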
I have used the NASM assembler to compile a simple assembly file (code below). I will then attempt to take the resulting .obj file and have Cython link it into a .pyd file, so Python can import it. Basically I need a way of telling Cython to include an .obj file for use with other Cython / Python code.
First, here is my assembly code:
myfunc.asm
;http://www.nasm.us/doc/nasmdoc9.html
global _myfunc
section .text
_myfunc:
push ebp
mov ebp,esp
sub esp,0x40 ; 64 bytes of local stack space
mov ebx,[ebp+8] ; first parameter to function
; some more code
leave
ret
I compile this code by using nasm -f win32 myfunc.asm
This gives me myfunc.obj, which is what I want to include into a Cython compiled .pyd.
I may be completely misled, and there may be a better method to do this entirely. Is there a simple one-liner extern that I can use to declare an external object from Cython?
P.S. The label _myfunc should be the entry point.
To call the _myfunc entry point from Cython, you need to declare it:
cdef extern:
    void _myfunc()
After that declaration, you may call _myfunc() in your Cython module as if it were a Python function. Of course, you will need to link myfunc.obj into your .pyd as explained in the answer to your other question.
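One way to do that linking is the extra_objects parameter of the Extension in setup.py; a hypothetical sketch (the module name myext and the file names are illustrative):

from setuptools import setup, Extension
from Cython.Build import cythonize

ext = Extension(
    "myext",
    sources=["myext.pyx"],
    extra_objects=["myfunc.obj"],  # the NASM output
)
setup(ext_modules=cythonize([ext]))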
I'm writing a Python class in C and I want to put assertions in my debug code. assert.h suits me fine. This only gets put in debug compiles so there's no chance of an assert failure impacting a user of the Python code*.
I'm trying to separate my 'library' code (which should be independent of the code linked against Python) so I can use it from other C code. My Python methods are therefore thinnish wrappers around my pure-C code.
So I can't do this in my 'library' code:
if (black == white)
{
PyErr_SetString(PyExc_RuntimeError, "Remap failed");
}
because this pollutes my pure-C code with Python. It's also far uglier than a simple
assert(black != white);
I believe that the Distutils compiler always sets NDEBUG, which means I can't use assert.h even in debug builds.
This is on Mac OS and Linux.
Help!
*one argument I've heard against asserting in C code called from Python.
Just use assert.h. It's a myth that distutils always defines NDEBUG; it only does so for Microsoft's msvc on Windows, and then only when invoked from a Python release build (not from a Python debug build).
To then define NDEBUG in your own release builds, pass a -D command line option to setup.py build_ext.
Edit: It seems that NDEBUG is defined by default through Python's Makefile's OPT setting. To reset this, run
OPT="-g -O3" python setup.py build
Create your own macro, such as myassert(), for different situations. Or create a macro that checks a global variable to see whether it is being used from Python code or "normal" C; the Python module entry point would have to set this variable to true. Alternatively, you could use function pointers: one for Python code, another, default one for C code.
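A sketch of such a custom macro (MYASSERT and the MY_DEBUG guard are illustrative names, not an established API):

#include <stdio.h>
#include <stdlib.h>

#ifdef MY_DEBUG
#define MYASSERT(cond) \
    do { \
        if (!(cond)) { \
            fprintf(stderr, "Assertion failed: %s (%s:%d)\n", \
                    #cond, __FILE__, __LINE__); \
            abort(); \
        } \
    } while (0)
#else
#define MYASSERT(cond) ((void)0)
#endif

Because it keys off MY_DEBUG rather than NDEBUG, it is unaffected by whatever distutils passes on the compiler command line.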
Undefine the NDEBUG macro in your setup.py:
ext_modules = [Extension(
...
undef_macros=['NDEBUG'],
)]
This will result in a command line like
gcc ... -DNDEBUG ... -UNDEBUG ...
which (while ugly) does the correct thing, i.e. it keeps assertions enabled.