Constellation / Context:
A C++ executable (1) which dynamically links a C++ shared library emb.so (2),
which in turn runs an embedded Python interpreter (3) that calls custom Python functions (4).
Embedding the Python interpreter (3) is happening by using pybind11.
A call to a Python function from C++ can be simplified as:
py::module::import("test").attr("my_func")();
The executable (1) has a main loop in which it can do some other work, but it will call the Python function at regular intervals.
Observation:
Variant 1: If I block inside the Python function, the Python code executes smoothly and quickly, but the main executable loop is obviously blocked.
Variant 2: If I create a Python thread inside the Python function so that the function returns immediately, the main executable keeps running, but the Python code runs extremely slowly (I can watch the iterations of a for-loop with a print one by one).
Question:
Why is Variant 2 so slow and how can I fix it?
My guess is that this has something to do with the GIL, and I tried to release the GIL inside the wrapper emb.so before returning to the main loop, but I wasn't able to do this without a segfault.
Any ideas?
It turned out that this is very much related to the following question:
Embedding python in multithreaded C application
(see answer https://stackoverflow.com/a/21365656/12490068)
I solved the issue by explicitly releasing the GIL after calling embedded Python code, like this:
PyGILState_STATE state = PyGILState_Ensure();
// Call Python/C API functions...
PyGILState_Release(state);
If you are doing this in a function or other C++ scope and you are creating Python objects, you have to make sure that the Python object's destructor is not called after releasing the GIL.
So don't do:
int my_func() {
    PyGILState_STATE gil_state = PyGILState_Ensure();
    py::int_ ret = pymodule->attr("GiveMeAnInt")();
    PyGILState_Release(gil_state);
    return ret.cast<int>();  // ret's destructor runs after the GIL was already released
}
but instead do
int my_func() {
    int ret_value;
    PyGILState_STATE gil_state = PyGILState_Ensure();
    {
        py::int_ ret = pymodule->attr("GiveMeAnInt")();
        ret_value = ret.cast<int>();
    }  // ret is destroyed here, while the GIL is still held
    PyGILState_Release(gil_state);
    return ret_value;
}
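An alternative, which is not part of the original answer but uses pybind11's own RAII guard, is py::gil_scoped_acquire: it acquires the GIL in its constructor and releases it in its destructor. Because local objects are destroyed in reverse order of construction, ret is destroyed while the GIL is still held, so the extra scoping trick above is not needed. A minimal sketch, assuming the same pymodule handle as above:

int my_func() {
    py::gil_scoped_acquire gil;                      // GIL is released again when gil goes out of scope
    py::int_ ret = pymodule->attr("GiveMeAnInt")();  // call into Python with the GIL held
    return ret.cast<int>();                          // ret is destroyed before gil, i.e. while the GIL is still held
}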
When working with Python's asyncio package, I've noticed that I can't step into any code of its tasks.Task class. For example, when the calling code invokes the class's constructor, my next 'step into' gets me into a get_debug() function outside the class. After that, I return to the calling code with an initialised Task object. I've observed similar behaviour with Task.__next_step(): I'll just step into code that gets called by this method.
All Python versions (3.9, 3.10), IDEs (PyCharm, Visual Studio Code) and OSs (macOS, Windows) that I tested showed the same issue.
Does anyone know the reason for the debugger’s strange behaviour and, possibly, how to overcome it?
The call_soon() in the last line of the screenshot is issued from within Task.__init__. However, as you can see, the debugger never stepped into the initializer.
Update: Surprisingly, with Python 3.6 (Pythonista on iPadOS) I can step into Task.__init__ from base_events.BaseEventLoop.create_task().
The implementation of Task is in C (where available). The debugger cannot step into C code.
You can see this in the asyncio.tasks module:
class Task(futures._PyFuture):
    ...

_PyTask = Task
try:
    import _asyncio
except ImportError:
    pass
else:
    # _CTask is needed for tests.
    Task = _CTask = _asyncio.Task
That last line shows the Python implementation being overridden by the C implementation.
You can verify which implementation you have by inspecting the __module__ attribute of Task. e.g.
import asyncio
print(asyncio.Task.__module__)
A pure Python implementation will print asyncio.tasks. The C implementation will print _asyncio.
Let's consider the following piece of code in Python:

def main():
    abc()

def abc():
    .
    .
    .
    <statements....>
    .
    .
    .

main()
# Why is the above line of code used in Python?
Let's consider other programming languages like C, C++, or Java, or maybe an interpreted language like BASIC. In VB.NET, if the name of the module is Module1, the code goes like this:

Module Module1
    Sub Main()
        Console.WriteLine("Hello World")
        Console.ReadKey()
    End Sub
End Module

This is VB.NET. No Main() is called explicitly.
The above code is just for illustration.
So why is main() called at the end of the program in Python?
TL;DR
Because the creator of Python decided that was the way to use the language.
Details
Very simply put, in those compiled languages you mentioned, like Visual Basic, C or C++ (and also Java, C#, and many others), the convention is that a program is started by launching the Main() function (with some variation, like a module that is defined as the startup module when you compile).
So, basically, when you compile, the compiler adds a call to this Main() function in the binary .exe, even if you haven't written that call in your code.
Whereas in a Python program, it's simply the file itself that is run, line by line. There is no convention that any particular function gets called. Python is not a compiled language, and not much happens "behind the scenes" (actually, that's not entirely true: some global variables are set by the interpreter, Python.exe).
So, in your Python program, if there wasn't a line main() at the end, you would just have defined two functions, abc and main, but nothing would actually be called; your program would stop after the definitions, and nothing inside the two functions would be executed.
In the end, it's just that the Python language has a different rule than the languages you mentioned. It's by design of the language.
Of course, it's not "random": this design is more "natural" depending on the type - compiled or interpreted - of the language. For instance, you have a similar behaviour in JavaScript: there is no main() function either, and JavaScript code is also executed starting from the top, line by line, with functions potentially defined in the main code.
(Simplified explanation, some special cases may apply.)
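To make the contrast concrete, here is a small C++ sketch (illustrative only): the runtime calls main() automatically, so no explicit call to it appears anywhere in the source, whereas in Python the main() line at the end of the file is exactly that explicit call.

#include <iostream>

// abc() only runs because main() calls it.
void abc() {
    std::cout << "inside abc()\n";
}

// The C++ runtime invokes main() when the program starts;
// there is no "main();" call written anywhere in the source.
int main() {
    abc();
    return 0;
}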
It's because you have to define the function before calling it in Python. If you call it before it has been defined, you will get a NameError. A VB program launches the Main() function automatically inside the startup module.
The general rule in Python is that a function should be defined before its usage, which does not necessarily mean it needs to be higher in the code.
lol()

def lol():
    print("Hello")

This should not work, but if you try

def test():
    lol()

def lol():
    print("Hello")

test()
It works.
Objective:
Executing a string of C(++) code with some kind of function comparable to the exec() function in Python.
Example in Python:
exec('print("hello world")')
# out:
# hello world
Question:
Is there a C++ version of Python's exec()?
But is there a C++ version of Python's exec()?
You want to execute C language statements from a string, and that is not possible with C.
Why?
Because C is a compiled language: the program is first compiled and then executed.
It's possible in Python because Python is an interpreted language, meaning the source is compiled to bytecode and executed at runtime.
Hope this will help.
Well, technically, you (maybe) can. But it's hardly a justifiable effort; there are other scripting languages that can be integrated into C++, for example Lua. Just to think about it, the following could work if you have a method int executeCode(std::string code):
Copy that string into a template that wraps it in some function. The following is an idea of such a template:
int userFunc()
{
    %code%
}
Write the template to a file
Build a dynamic library (e.g. a .dll on Windows) from that file (call compiler and linker via system or OS-specific methods)
Load the dynamic library into your running program (again, OS-specific methods)
Load the required method userFunc and execute it.
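The following is a rough POSIX-only sketch of those steps; the helper name executeCode comes from above, while the file names, the g++ invocation, and the minimal error handling are assumptions made for illustration (link with -ldl):

#include <cstdlib>
#include <dlfcn.h>
#include <fstream>
#include <string>

int executeCode(const std::string& code) {
    // 1. Wrap the code string in a function and write it to a source file.
    std::ofstream src("user_code.cpp");
    src << "extern \"C\" int userFunc() {\n" << code << "\n}\n";
    src.close();

    // 2. Build a shared library from it (assumes g++ is on the PATH).
    if (std::system("g++ -shared -fPIC user_code.cpp -o user_code.so") != 0)
        return -1;

    // 3. Load the library and look up the generated function.
    void* lib = dlopen("./user_code.so", RTLD_NOW);
    if (!lib) return -1;
    auto userFunc = reinterpret_cast<int (*)()>(dlsym(lib, "userFunc"));
    if (!userFunc) { dlclose(lib); return -1; }

    // 4. Execute it and clean up.
    int result = userFunc();
    dlclose(lib);
    return result;
}

With that in place, something like executeCode("return 2 + 3;") would compile, load, and run the snippet at runtime.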
You can also just shell out to the Python interpreter:

#include <cstdlib>   // for system()
#include <iostream>

int main(void) {
    system("python -c \"print('hello world')\"");
    return 0;
}
For system commands...?
I am basically looking to see if it's possible to compile Python code into a C++ program such that a single binary is produced, then call the (compiled) python code as/with a function from within the C++ code.
Background: I have a C++ code that does some work and produces data that I want to plot. I then wrote a separate Python script using SciPy that reads in the output data, processes it, and plots it to files. This all works as it is.
Basically, I am picturing:
void plotstuff() {
my_python_func(); // This is the python script compiled into the c++ binary
}
I don't need to pass anything between the python code and the C++ code other than being sure it's executed in the same directory. It may make things easier if I can pass string arguments, but again - not essential.
Can this be accomplished? Or is it generally a bad idea and I should just stick to having my C++ and python separate?
Yes: this is called embedding.
The best place to start is the official Python documentation on the topic. There's some sample code that shows how to call a Python function from C code.
Here's an extremely basic example, adapted from the Python docs (updated for Python 3):

#include <Python.h>

int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    PyMem_RawFree(program);
    return 0;
}
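Since the question is about calling a specific Python function rather than running a string of statements, here is a minimal sketch of that as well; the module name plotter is a placeholder for whatever script defines my_python_func, and Py_Initialize() is assumed to have been called already:

#include <Python.h>

/* Import the plotting script as a module and call my_python_func() with no
   arguments; error handling is kept to a bare minimum. */
void plotstuff(void) {
    PyObject *module = PyImport_ImportModule("plotter");
    if (!module) { PyErr_Print(); return; }

    PyObject *func = PyObject_GetAttrString(module, "my_python_func");
    if (func && PyCallable_Check(func)) {
        PyObject *result = PyObject_CallObject(func, NULL);  // no arguments
        if (!result) PyErr_Print();
        Py_XDECREF(result);
    }
    Py_XDECREF(func);
    Py_DECREF(module);
}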
Make a native extension (Python 2 or Python 3) out of your C++ code, and have your Python program import it. Then use py2exe or your favorite platform's counterpart to turn the Python program and your native extension into an executable.
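The skeleton of such a native extension (Python 3 style; the module name worker and the function name run are placeholders for your own) is roughly:

#include <Python.h>

/* One exported function; put the C++ work inside it. */
static PyObject *worker_run(PyObject *self, PyObject *args) {
    /* ... do the work here ... */
    Py_RETURN_NONE;
}

static PyMethodDef worker_methods[] = {
    {"run", worker_run, METH_NOARGS, "Run the C++ part of the program."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef worker_module = {
    PyModuleDef_HEAD_INIT, "worker", NULL, -1, worker_methods
};

PyMODINIT_FUNC PyInit_worker(void) {
    return PyModule_Create(&worker_module);
}

After building it, the Python program can simply import worker and call worker.run().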
I am porting a program (VMD, Visual Molecular Dynamics), which is written in C++ and has both Python and Tcl interpreters embedded, to Python 3.x. Most of its UI is hard-coded using the Tcl/Tk framework and OpenGL, so UI refreshes are done manually. When the Python interpreter is running, it is possible to dynamically create new windows and even add new menus to the main UI using Tkinter. In this case all Tk events are flushed by periodically calling some code on the Python side (see below). This ensures that all updates are thread-safe and don't break the interpreter.
int PythonTextInterp::doTkUpdate() {
  // Don't recursively call into dooneevent - it makes Tkinter crash for
  // some unfathomable reason.
  if (in_tk) return 0;
  if (have_tkinter) {
    in_tk = 1;
    int rc = evalString(
      "import Tkinter\n"
      "while Tkinter.tkinter.dooneevent(Tkinter.tkinter.DONT_WAIT):\n"
      " pass\n"
    );
    in_tk = 0;
    if (rc) {
      return 1; // success
    }
    // give up
    have_tkinter = 0;
  }
  return 0;
}
However, the function tkinter.dooneevent was removed in Python 3 and I cannot find a substitute for it. I tried calling the low-level Tcl_DoOneEvent(TCL_DONT_WAIT), but when I dynamically created a new window I ended up crashing the Python interpreter with the error Fatal Python error: PyEval_RestoreThread: NULL tstate.
The answers in tkinter woes when porting 2.x code to 3.x, 'tkinter' module attribute doesn't exist don't help, since I don't have a list of all the windows that may be created by the user.
Does anyone have any suggestion on how to flush the Tk events in this case? It could be either on the Python side or in C++.
Thanks in advance
It looks like this is equivalent:
root = tkinter.Tk()
# Here's your event handler. Put it in a loop somewhere.
root.tk.dooneevent(tkinter._tkinter.DONT_WAIT)
# I don't know if it's possible to access this method without a Tk object.
Now, I don't know how exactly to convert this into your code: do you have a root Tk object with which you can access dooneevent? I'm not at all familiar with Python 2 Tkinter, so I don't know exactly how evenly my code maps to yours. However, I discovered this when I was doing something very similar to you: trying to integrate the tkinter event loop into the asyncio event loop. I was able to create a coroutine that calls this method in a loop, yielding each time (and sleeping occasionally), so that the GUI remains responsive without blocking the asyncio event loop.
@asyncio.coroutine
def update_root(root):
    while root.tk.dooneevent(tkinter._tkinter.DONT_WAIT):
        yield
EDIT: I just read your comment about not having a widget. I know that the root.tk object is a tkinter._tkinter.TkappType instance created by calling tkinter._tkinter.create, and I don't think it's global. I'm pretty sure it's the core Tcl interpreter. You might be able to create your own by calling create. While it isn't documented, you can look at its usage in tkinter.Tk.__init__.
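On the C++ side, the original doTkUpdate could then drive this through evalString; a sketch (assuming a Tk root has already been created somewhere in the embedded interpreter, so that the semi-private tkinter._default_root attribute is set) might look like:

int rc = evalString(
    "import tkinter\n"
    "root = tkinter._default_root\n"
    "if root is not None:\n"
    "    while root.tk.dooneevent(tkinter._tkinter.DONT_WAIT):\n"
    "        pass\n"
);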