Printing a variable in an embedded Python interpreter - python

I have written a small C program that embeds Python. I'm setting it up correctly using Py_Initialize() and Py_Finalize(), and am able to run scripts either using PyRun_SimpleString or PyRun_SimpleFile. However, I don't know how to mimic the behavior of Python's own interpreter when printing variables.
Specifically:
a = (1, 2, 3)
print a
Works fine for me: it prints out (1, 2, 3)
However:
a = (1, 2, 3)
a
Prints out nothing at all. In Python's own interpreter, this would print out (1, 2, 3) as well. How can I make my code do what users would expect and print out the value?
Thanks in advance!

To run the interpreter's interactive loop, you should use the function PyRun_InteractiveLoop(). Otherwise, your code will behave as if it were written in a Python script file, not entered interactively.
Edit: Here's the full code of a simple interactive interpreter:
#include <Python.h>

int main()
{
    Py_Initialize();
    PyRun_InteractiveLoop(stdin, "<stdin>");
    Py_Finalize();
    return 0;
}
Edit2: Implementing a full interactive interpreter in a GUI is a bit of a project. Probably the easiest way to get it right is to write a basic terminal emulator connected to a pseudo-terminal device, and use the above code on that device. This will automatically get all subtleties right.
If your aim isn't a full-blown interactive interpreter, an option might be to use PyRun_String() with Py_single_input as the start symbol. This will allow you to run Python code as in the interactive interpreter: if that code happens to be a single expression that doesn't evaluate to None, a representation of its value is printed -- to stdout, of course. Here is some example code (without error checking for simplicity):
#include <Python.h>

int main()
{
    PyObject *main_module, *d;

    Py_Initialize();
    main_module = PyImport_AddModule("__main__");  /* borrowed reference */
    d = PyModule_GetDict(main_module);             /* the module's namespace dict */
    PyRun_String("a = (1, 2, 3)", Py_single_input, d, d);
    PyRun_String("a", Py_single_input, d, d);
    Py_Finalize();
    return 0;
}
This will print (1, 2, 3).
There are still a lot of problems:
No error handling and traceback printing.
No "incremental input" for block commands like in the interactive interpreter. The input needs to be complete.
Output to stdout.
If multiple lines of input are given, nothing is printed.
Really replicating the behaviour of the interactive interpreter is not easy. That's why my initial recommendation was to write a basic terminal emulator in your GUI, which shouldn't be too hard -- or maybe there's even one available.
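As for the first problem in the list (no error handling), here is a minimal sketch of how the PyRun_String() approach could at least report errors instead of failing silently. The helper name run_interactive_chunk is hypothetical; PyErr_Print() is the standard C API call for printing the pending traceback:
#include <Python.h>

/* Hypothetical helper: run one chunk of input and, on failure,
   print the traceback instead of swallowing it. */
static void run_interactive_chunk(const char *code, PyObject *globals)
{
    PyObject *result = PyRun_String(code, Py_single_input, globals, globals);
    if (result == NULL)
        PyErr_Print();          /* prints the traceback and clears the error */
    else
        Py_DECREF(result);
}

int main()
{
    Py_Initialize();
    PyObject *main_module = PyImport_AddModule("__main__");  /* borrowed reference */
    PyObject *globals = PyModule_GetDict(main_module);

    run_interactive_chunk("a = (1, 2, 3)", globals);
    run_interactive_chunk("a", globals);              /* prints (1, 2, 3) */
    run_interactive_chunk("undefined_name", globals); /* prints a NameError traceback */

    Py_Finalize();
    return 0;
}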

Related

Handling embedded Python interpreter calls with GIL and multi-threading

Setup / Context:
A C++ Executable (1) which dynamically links a C++ shared library emb.so (2)
which in turn is running an embedded python interpreter (3) that calls custom python functions (4).
Embedding the Python interpreter (3) is happening by using pybind11.
A call to a Python function from C++ can be simplified as:
py::module::import("test").attr("my_func")();
The executable (1) has a main loop in which it can do some other work, but it will call the python function at regular intervals.
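For reference, a minimal sketch of this setup, assuming pybind11's embed API (py::scoped_interpreter). The loop body, iteration count and sleep are placeholders; only the import/attr call mirrors the actual code:
// Sketch only: main loop (1) embedding Python via pybind11 (3) and calling my_func (4).
#include <pybind11/embed.h>
#include <chrono>
#include <thread>

namespace py = pybind11;

int main()
{
    py::scoped_interpreter guard{};   // start the embedded interpreter

    for (int i = 0; i < 10; ++i) {
        // ... other work of the main loop ...

        // Variant 1: this call blocks until the Python function returns.
        py::module::import("test").attr("my_func")();

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return 0;
}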
Observation:
Variant 1: If I block inside the python function, the python code executes smoothly and quickly, but the main executable loop is obviously blocked
Variant 2: If I create a python thread inside the python function to return from the function immediately, the main executable is running, but the python code is running extremely slow (I can watch the iterations of a for-loop with a print one by one)
Question:
Why is Variant 2 so slow and how can I fix it?
My guess is that this has something to do with the GIL, and I tried to release the GIL inside the wrapper emb.so before returning to the main loop, but I wasn't able to do this without a segfault.
Any ideas?
It turned out that this is very much related to the following question:
Embedding python in multithreaded C application
(see answer https://stackoverflow.com/a/21365656/12490068)
I solved the issue by explicitly releasing the GIL after calling embedded Python code, like this:
PyGILState_STATE state = PyGILState_Ensure();
// Call Python/C API functions...
PyGILState_Release(state);
If you are doing this in a function or other C++ scope and you are creating Python objects, you have to make sure that the Python objects' destructors are not called after releasing the GIL.
So don't do:
int my_func() {
    PyGILState_STATE gil_state = PyGILState_Ensure();
    py::int_ ret = pymodule->attr("GiveMeAnInt")();
    PyGILState_Release(gil_state);
    return ret.cast<int>();   // ret's destructor runs after the GIL was released -- crash risk
}
but instead do
int my_func() {
    int ret_value;
    PyGILState_STATE gil_state = PyGILState_Ensure();
    {
        py::int_ ret = pymodule->attr("GiveMeAnInt")();
        ret_value = ret.cast<int>();
    }   // ret is destroyed here, while the GIL is still held
    PyGILState_Release(gil_state);
    return ret_value;
}
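Since the embedding already goes through pybind11, the same pattern can also be written with pybind11's RAII helper py::gil_scoped_acquire, whose scope makes the destruction order explicit. This is a sketch only, reusing pymodule and GiveMeAnInt from the example above and assuming the pybind11 headers are already included:
int my_func() {
    int ret_value;
    {
        py::gil_scoped_acquire acquire;   // take the GIL for this scope only
        py::int_ ret = pymodule->attr("GiveMeAnInt")();
        ret_value = ret.cast<int>();
        // ret is destroyed at the end of this scope, while the GIL is still held
    }
    // the GIL is released by ~gil_scoped_acquire before returning to the C++ main loop
    return ret_value;
}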

Debugging Python and C++ exposed by boost together

I can debug Python code using ddd -pydb prog.py. All the Python command-line arguments can be passed after prog.py as well. In my case, many classes have been implemented in C++ and exposed to Python using Boost.Python. I wish I could debug the Python code and the C++ code together. For example I want to set breakpoints like this:
break my_python.py:123
break my_cpp.cpp:456
cont
Of course I am trying this after compiling the C++ code with debug options, but the debugger does not cross the Boost boundary. Is there any way?
EDIT:
I saw http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/faq/how_do_i_debug_my_python_extensi.html.
I followed it and I can debug both Python and C++. But I would prefer to do visual debugging with DDD, and I don't know how to give the 'target exec python' command inside DDD. Failing that (just using gdb as in the link), I would like to be able to debug a Python script rather than interactively entering Python commands as in the link.
I found out how to debug the C++ part while running Python (I realized it while reading about process ID detection in a Python book).
First you run the Python program which uses the C++ library. At the start of the Python program, use raw_input() to make the program wait for your input. But just before that, do print os.getpid() (of course you should have imported the os package). When you run the Python program, it will print the pid of the Python process and wait for your keyboard input.
python stop code :
import os
def w1(msg):
    print(msg)
    wait = raw_input()
    return
print os.getpid()
w1('starting main..press a key')
result :
27352
starting main..press a key
Or, you can use import pdb; pdb.set_trace() as mentioned in the comment below (thanks @AndyG), and see the EDIT for getting the pid using ps -aux.
Now, suppose the C++ shared library is _caffe.so (which is my case; this _caffe.so library contains all the C++ code and the Boost.Python wrapper functions), and 27352 is the pid. Then in another shell start gdb like
gdb caffe-fast-rcnn/python/caffe/_caffe.so 27352
or if you want to use graphical debugging using like DDD, do
ddd caffe-fast-rcnn/python/caffe/_caffe.so 27352
Then you'll see gdb start and wait with a prompt. The Python program is interrupted by gdb and sits in stopped mode (it was waiting for your key input, but now it is really stopped, and it needs a continue command from the second debugger before it can go back to waiting for the key).
Now you can give a breakpoint command in gdb like
br solver.cpp:225
and you can see message like
Breakpoint 1 at 0x7f2cccf70397: file src/caffe/solver.cpp, line 226. (2 locations)
When you give the continue command in the second gdb window (the one holding the program), the Python code runs again. Of course you should give a key input in the first window, where the Python program is waiting, to make it proceed.
Now at least you can debug the C++ code while running the Python program (that's what I wanted to do)!
I later checked that I can do Python and C++ debugging at the same time, and it works. You start the debugger (DDD) like ddd -pydb prog1.py options.. and attach another DDD using the method explained above. Now you can set breakpoints for Python and C++ and use the other debug functions in each window (I wish I had known this a couple of months earlier; it would have helped tons).
EDIT: to get the pid, you can do ps -aux | grep python instead. The pid you want is listed right after ddd's pid.
I had a similar problem, but failed to get the solutions in Chan's answer to work (on Mac OS X 10.12.4). Instead the following worked for me.
Write a Python script test.py that imports and uses the Boost.Python module.
Start Python in the debugger:
lldb python3 test.py
giving
> lldb python3 test.py
(lldb) target create "python3"
Current executable set to 'python3' (x86_64).
(lldb) settings set -- target.run-args "test.py"
(lldb) run
Process 46189 launched: '/Users/me/anaconda/bin/python3' (x86_64)
test.cpython-36m-darwin.so was compiled with optimization - stepping may behave oddly; variables may not be available.
Process 46189 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10d4b3000)
frame #0: 0x00000001019f49c2 test.cpython-36m-darwin.so`std::__1::enable_if<true, void>::type (anonymous namespace)::Render2D<double>::add_particle<true, 5ul>(float*, float, float, float, float) const [inlined] mylib::SSE::packed<8ul, float>::loadu(
944 { return {_mm256_load_ps(p)}; }
945 /// load from unaligned memory location
946 static __always__inline packed loadu(const element_type*p) noexcept
-> 947 { return {_mm256_loadu_ps(p)}; }
948 /// load from aligned memory location, using template arg for alignment
949 template<bool aligned>
950 static __always_inline enable_if_t< aligned, packed>
No need to obtain the pid and start the debugger from a separate window or set any breakpoints.

Executing Python Script from C++ program in Windows XP

I am attempting to execute a python script from a C++ program. The problem that I am having is that I am unable to execute my python script.
If I take out the lpParameters value by setting it equal to NULL, everything works fine: my program launches the Python terminal and then finishes when I exit the Python terminal.
I have a feeling that it has to do with the lpParameters field separating arguments with spaces, so I attempted to wrap the entire path to the python script in escaped quotation marks.
#include "windows.h"
#include "shellapi.h"
#include <iostream>
using namespace std;
int main()
{
cout<<"About to execute the shell command";
SHELLEXECUTEINFO shExecInfo;
shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
shExecInfo.fMask = NULL;
shExecInfo.hwnd = NULL;
shExecInfo.lpVerb = "runas";
shExecInfo.lpFile = "C:\\Python25\\python.exe";
shExecInfo.lpParameters = "\"C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts\\script.py\"";
shExecInfo.lpDirectory = NULL;
shExecInfo.nShow = SW_NORMAL;
shExecInfo.hInstApp = NULL;
ShellExecuteEx(&shExecInfo);
return 0;
}
What happens when I launch this code is my program runs, quickly pops up another terminal that is quickly gone and then my original terminal says the task is complete. In reality though the python script that I specified is never executed.
Not really an answer, but too long for a comment.
The problem with this kind of execution in a new window is that as soon as the program ends, the window is closed. And since a window was opened, from the point of view of the launching program everything looks fine.
My advice here would be to use cmd /k, which forces the window to stay open after the program ends:
shExecInfo.lpFile = "cmd";
shExecInfo.lpParameters = "/k C:\\Python25\\python.exe \"C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts\\script.py\"";
At least if there is an error anywhere, you will be given a chance to see it.
Turns out the issue was with permissions and setting this parameter:
shExecInfo.lpVerb = "runas";
Instead I left it
shExecInfo.lpVerb = NULL;
and also filled in the lpDirectory parameter, and it is working now.
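For completeness, a sketch of what the working variant described above could look like (lpVerb left NULL, lpDirectory filled in). The zero-initialization, SEE_MASK_NOCLOSEPROCESS and the WaitForSingleObject call are additions, not part of the original code, and the directory is assumed to be the folder containing the script:
#include <windows.h>
#include <shellapi.h>

int main()
{
    SHELLEXECUTEINFOA shExecInfo = {0};          // zero-init so unused fields are not garbage
    shExecInfo.cbSize       = sizeof(shExecInfo);
    shExecInfo.fMask        = SEE_MASK_NOCLOSEPROCESS;  // so we can wait for the script (optional)
    shExecInfo.hwnd         = NULL;
    shExecInfo.lpVerb       = NULL;              // plain "open", no elevation prompt
    shExecInfo.lpFile       = "C:\\Python25\\python.exe";
    shExecInfo.lpParameters = "\"C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts\\script.py\"";
    shExecInfo.lpDirectory  = "C:\\Documents and Settings\\John Williamson\\My Documents\\MyPrograms\\PythonScripts";
    shExecInfo.nShow        = SW_NORMAL;

    if (ShellExecuteExA(&shExecInfo) && shExecInfo.hProcess) {
        WaitForSingleObject(shExecInfo.hProcess, INFINITE);  // optional: wait for python.exe to exit
        CloseHandle(shExecInfo.hProcess);
    }
    return 0;
}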

Eclipse and Python 3: why does printf() from ctypes display in console output after subsequent print() statements

I am running Eclipse with PyDev and Python 3.2 on Windows Vista, and was working through a tutorial on Python and ctypes.
However, I found that when I call msvcrt.printf() to print a string, this is not displayed in the console output for Eclipse until all other print statements have displayed.
Here is the exact code I use:
from ctypes import *
msvcrt = cdll.msvcrt
message_string = "Hello Worlds!\n"
printf = msvcrt.printf
print(printf("Testing: %s".encode('ascii'),message_string.encode('ascii')))
print("foo")
print("why!?")
and here is the output:
23
foo
why!?
Testing: Hello Worlds!
The only explanations I have seen elsewhere (for C in general) mention how printf is buffered and needs a newline before displaying, but there is a newline in the string, and I also added one directly to the printf statement ('printf("Testing: %s\n",...') and it made no difference.
I want to work in Eclipse; I don't want to have to keep opening a command prompt every time I want to test scripts. So is there any way I can fix this ordering in the console output? And why does this happen?
If the C standard library thinks stdout is connected to a file or a pipe rather than a console, it will block-buffer its output. You can work around this by issuing a fflush after printf:
msvcrt.fflush(None)  # fflush(NULL) flushes all open C output streams
You may also be able to force stdout into non-buffered mode with setvbuf(stdout, NULL, _IONBF, 0), but getting hold of the C stdout FILE* (and the _IONBF constant) through ctypes is awkward, so the fflush workaround above is usually the simpler option.
Although this does not answer your question: printf is returning 23, which is the number of characters printed. You could replace it with sprintf (writing into a buffer), get the resulting string back, and print it from Python so that it is displayed in the console in the expected order.
However, I don't see a reason for using msvcrt's printf when you can do the same with Python.

Can I put a breakpoint in a running Python program that drops to the interactive terminal?

I'm not sure if what I'm asking is possible at all, but since Python is interpreted, it might be. I'm trying to make changes in an open-source project, but because there are no type declarations in Python it's difficult to know what data the variables hold and what they do. You can't just look up the documentation on the variable's type since you can't be sure what type it is. I want to drop to the terminal so I can quickly examine the types of the variables and what they do by typing help(var) or print(var). I could do this by changing the code and then re-running the program each time, but that would be much slower.
Let's say I have a program:
def foo():
    a = 5
    my_debug_shell()
    print a

foo()
my_debug_shell is the function I'm asking about. It would drop me to the '>>>' shell of the python interpreter where I can type help(a), and it would tell me that a is an integer. Then I type 'a=7', and some 'continue' command, and the program goes on to print 7, not 5, because I changed it.
http://docs.python.org/library/pdb.html
import pdb
pdb.set_trace()
Here is a solution that doesn't require code changes:
python -m pdb prog.py <prog_args>
(pdb) b 3
Breakpoint 1 at prog.py:3
(pdb) c
...
(pdb) p a
5
(pdb) a=7
(pdb) ...
In short:
start your program under debugger control
set a break point at a given line of code
let the program run up to that point
you get an interactive prompt that lets you do what you want (type 'help' for all options)
Python 3.7 has a new builtin way of setting breakpoints.
breakpoint()
The implementation of breakpoint() will import pdb and call pdb.set_trace().
Remember to include the parentheses (), since breakpoint is a function, not a keyword.
A one-line partial solution is simply to put 1/0 where you want the breakpoint: this will raise an exception, which will be caught by the debugger. Two advantages of this approach are:
Breakpoints set this way are robust against code modification (no dependence on a particular line number);
One does not need to import pdb in every program to be debugged; one can instead directly insert "breakpoints" where needed.
In order to catch the exception automatically, you can simply do python -m pdb prog.py… and then type c(ontinue) to start the program. When the 1/0 is reached, the program stops with a ZeroDivisionError and pdb drops into post-mortem mode, where variables can be inspected as usual (p my_var). This does not allow you to fix things and keep running the program, though; instead you can try to fix the bug and run the program again.
If you want to use the powerful IPython shell, ipython -pdb prog.py… does the same thing, but leads to IPython's better debugger interface. Alternatively, you can do everything from within the IPython shell:
In IPython, set up the "debug on exception" mode of IPython (%pdb).
Run the program from IPython with %run prog.py…. When an exception occurs, the debugger is automatically activated and you can inspect variables, etc.
The advantage of this latter approach is that (1) the IPython shell is almost a must; and (2) once it is installed, debugging can easily be done through it (instead of directly through the pdb module). The full documentation is available on the IPython pages.
You can run the program using pdb, and add breakpoints before starting execution.
In reality though, it's usually just as fast to edit the code and put in the set_trace() call, as another user stated.
Not sure what the real question is. Python gives you the 'pdb' debugger (google yourself) and in addition you can add logging and debug output as needed.
