I'm trying to execute a Python callback when a certain function is called. It works if the function is called by running the process, but it fails when I call the function with SBTarget.EvaluateExpression.
Here's my C code:
#include <stdio.h>

int foo(void) {
    printf("foo() called\n");
    return 42;
}

int main(int argc, char **argv) {
    foo();
    return 0;
}
And here's my Python script:
import lldb
import os
def breakpoint_cb(frame, bpno, err):
    print('breakpoint callback')
    return False
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
target = debugger.CreateTargetWithFileAndArch('foo', 'x86_64-pc-linux')
assert target
# Break at main and start the process.
main_bp = target.BreakpointCreateByName('main')
process = target.LaunchSimple(None, None, os.getcwd())
assert process.state == lldb.eStateStopped
foo_bp = target.BreakpointCreateByName('foo')
foo_bp.SetScriptCallbackFunction('breakpoint_cb')
# Callback is executed if foo() is called from the program
#process.Continue()
# This causes an error and the callback is never called.
opt = lldb.SBExpressionOptions()
opt.SetIgnoreBreakpoints(False)
v = target.EvaluateExpression('foo()', opt)
err = v.GetError()
if err.fail:
    print(err.GetCString())
else:
    print(v.value)
I get the following error:
error: Execution was interrupted, reason: breakpoint 2.1.
The process has been left at the point where it was interrupted, use "thread
return -x" to return to the state before expression evaluation
I get the same error when the breakpoint has no callback, so it's really the breakpoint that is causing problems, not the callback. The expression is evaluated when opt.SetIgnoreBreakpoints(True) is set, but that doesn't help in my case.
Is this something that can be fixed, or is it a bug or a missing feature?
Operating system is Arch Linux, LLDB version is 6.0.0 from the repository.
The IgnoreBreakpoints setting doesn't mean you won't hit breakpoints while running; for instance, you'll notice that the breakpoint hit count gets updated either way. Rather, it means:
True: if we hit a breakpoint, we will auto-resume
False: if we hit a breakpoint, we will stop regardless
The False setting is intended for the case where you call a function because you want to stop in it, or in some function it calls, in order to debug it. In that case, overriding the breakpoint's conditions and commands is the right thing to do.
For your purposes, I think you want IgnoreBreakpoints to be True, since you also want the expression evaluation to succeed.
OTOH, if I understand your intent, the thing that's causing you a problem is that when IgnoreBreakpoints is false, lldb doesn't call the breakpoint's commands. It should only skip that bit of work when we are forcing the stop.
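For reference, a minimal sketch of that suggestion, reusing the target and breakpoint set up in the question (whether the Python callback also fires during the evaluation depends on the lldb behaviour discussed above):

# Evaluate foo() with IgnoreBreakpoints set to True so the expression
# runs to completion instead of stopping at breakpoint 2.1.
opt = lldb.SBExpressionOptions()
opt.SetIgnoreBreakpoints(True)
v = target.EvaluateExpression('foo()', opt)
err = v.GetError()
if err.fail:
    print(err.GetCString())
else:
    print(v.value)  # expect '42' once the call completes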
Related
I have a Python wrapper holding some C++ code. In it is a function that I set up as a process from my Python code. It contains a while loop, and I need a condition for when it should shut down.
For this situation, the while statement is simple.
while(TERMINATE == 0)
I have data that is being sent back from within the while loop. I'm using Pipe() to create 'in' and 'out' connection objects. I send the 'out' object to the function when I create the process.
fxn = self.FG.do_videosequence
(self.inPipe, self.outPipe) = Pipe()
self.stream = Process(target=fxn, args=(self.outPipe,))
self.stream.start()
As I mentioned, while inside the wrapper I am able to send data back to the Python script with
PyObject *send = Py_BuildValue("s", "send_bytes");
PyObject_CallMethodObjArgs(pipe, send, temp, NULL);
This works just fine. However, I'm having issues with sending a message to the C++ code, in the wrapper, that tells the loop to stop.
What I figured I would do is just check poll(), as that is what I do on the Python script side. I want to keep it simple. When the system sees that there is an incoming signal from the Python script, it would set TERMINATE = 1, so I wrote this:
PyObject *poll = Py_BuildValue("p", "poll");
Since I'm expecting a true or false from the Python function poll(), I figured "p" would be ideal, as it would convert True to 1 and False to 0.
In the loop I have:
if (PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL))
    TERMINATE = 1;
I wanted to use poll() because it's non-blocking, unlike recv(). This way I could just go about my other work and check poll() once a cycle.
However, when I send a signal from the Python script, it never trips.
self.inPipe.send("Hello");
I'm not sure where the disconnect is. When I print the poll() result, I get 0 the entire time. Either I'm not calling it correctly and it's just defaulting to 0, or I'm not actually generating a signal to trip the poll() call, so it's always 0.
Does anyone have any insight as to what I am doing wrong?
*****UPDATE******
I found some other information.
PyObject *poll = Py_BuildValue("p", "poll");
should be
PyObject *poll = Py_BuildValue("s", "poll");
As I'm passing a string as a reference to the function I'm calling, it should be built as a string. It has nothing to do with the return type.
From there the return of
PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL)
is a PyObject, so it needs to be checked as a PyObject, such as by making a call to
PyObject_IsTrue
to determine if it's true or false. I'll make changes to my code, and if I have a solution I'll update the post with an answer.
So I've been able to find the solution. In the end I was making two mistakes.
The first mistake was in how I created the PyObject reference to the Python function I was calling. I misread the documentation and used "p" without reading the context. So
PyObject *poll = Py_BuildValue("p", "poll");
should be
PyObject *poll = Py_BuildValue("s", "poll");
The second mistake was how I was handling the return value of
PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL)
While it's true that it's calling a Python method, it does not return a simple true/false value but rather a Python object. So I specifically needed to handle the Python object by calling
PyObject_IsTrue(PyObject *o)
with the return value of the poll() call as the argument. I can now send/receive from both the Python script and the C API contained in the wrapper.
In a C++ program, I used "Python.h" to implement a C++ function which can be called from Python.
In Python, I want this C++ function to run within a limited time.
So I used a function decorator in Python to limit how long the C++ function can run. If the function exceeds the given time, a RuntimeError is raised, and I just let the function return.
But the result is not good: after the C++ function has been called many times, the program runs slower and slower, and finally crashes.
This is the decorator:
import signal

def set_timeout(num, callback):
    def wrap(func):
        def handle(signum, frame):
            raise RuntimeError()
        def to_do(*args, **kwargs):
            try:
                signal.signal(signal.SIGALRM, handle)
                signal.alarm(num)
                print('start alarm signal.')
                r = func(*args, **kwargs)
                print('close alarm signal.')
                signal.alarm(0)
                return r
            except RuntimeError as e:
                callback()
        return to_do
    return wrap

def after_timeout():
    return
This is the Python-invoked C++ function with the decorator applied:
@set_timeout(3, after_timeout)
def pytowr_run(gait, tgt, time_interval, tm, posture=None, init_dic={}):
    pos, cost, varDict = pytowr.run(gait, tgt[0], tgt[1], time_interval, tm, posture, init_dic)
    return pos
Is there any way to stop the Python-invoked C++ function from Python?
To stop the C++ function you'll need to send it a signal. But just sending a signal is not enough: in the C++ function you will probably need to add some interrupt points that check whether a signal was received. Example:
#include <csignal>
#include <atomic>
// On POSIX systems SIGALRM is provided by <signal.h> (value 14 on Linux).

std::atomic_bool signal_received;

void signal_handler(int signal) {
    // check 'signal' if necessary
    signal_received.store(true);
}

void config() { // call it at some start point...
    signal_received.store(false);
    std::signal(SIGALRM, signal_handler);
}

void your_function(/*args*/) {
    /* do some work */
    if (signal_received.load()) return; // interrupt point: exiting your C++ function...
    // the interrupt point must be placed in a strategic place
    /* maybe more work */
}
I think a better solution, if I'm understanding your goal, is to invoke the C/C++ function and pass the allotted time into it, instead of trying to kill the function from Python, which is doable but makes the code harder to read and debug.
#include <datetime.h>   /* PyDateTime C API; run PyDateTime_IMPORT once at module init */

bool your_function(passed_in_vars, PyDateTime_DateTime *start_time, int allotted_time)
{
    int kill_time = PyDateTime_DATE_GET_MICROSECOND(start_time) + allotted_time;
    /* do some work */
    if (current_microsecond > kill_time)   /* current_microsecond: the current time, obtained the same way */
        return false;  /* interrupt point: exiting your C++ function... */
    /* do more work... */
}
This makes the code easier: no loops or wait states on the Python side, and it makes it easier on the C++ side to know when to kill the function and fire off any cleanup code.
It also makes finding bottlenecks easier on the C++ side.
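A sketch of what the Python side might look like under that approach, assuming (hypothetically) that pytowr.run were extended to accept the start time and the allotted time as two extra arguments:

import datetime

def pytowr_run(gait, tgt, time_interval, tm, posture=None, init_dic={}):
    # Hypothetical extra arguments: the C++ side checks them at its own
    # interrupt points instead of being killed from Python.
    start = datetime.datetime.now()
    allotted_us = 3 * 1000 * 1000   # three-second budget, in microseconds
    pos, cost, varDict = pytowr.run(gait, tgt[0], tgt[1], time_interval, tm,
                                    posture, init_dic, start, allotted_us)
    return pos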
I am using the GDB Python interface to handle breakpoints:
import gdb

class MyBP(gdb.Breakpoint):
    def stop(self):
        print("stop called " + str(self.hit_count))
        return True

bp = MyBP("test.c:22")
This works as expected. The hit_count is increased after the "stop" method returns.
Now when I want to use a conditional breakpoint:
bp.condition="some_value==2"
it does not work as expected. The stop method is always executed, regardless of whether the condition is true or false. If the stop method returns True, the breakpoint will only halt the program if the condition is also true. The hit_count is increased after the stop method returns and the condition holds.
So it seems as if GDB only applies the condition check after the stop method has been called.
How can I ensure that the stop method is only called when the condition holds?
How can I ensure that the stop method is only called when the condition holds?
Currently, you can't. See bpstat_check_breakpoint_conditions() in gdb/breakpoint.c
Relevant portions:
/* Evaluate extension language breakpoints that have a "stop" method
   implemented.  */
bs->stop = breakpoint_ext_lang_cond_says_stop (b);
...
condition_result = breakpoint_cond_eval (cond);
...
if (cond && !condition_result)
  {
    bs->stop = 0;
  }
else if (b->ignore_count > 0)
  {
    ...
    ++(b->hit_count);
    ...
  }
So the Python stop method is always called before the condition is evaluated. You can implement your condition in Python, though, e.g. using gdb.parse_and_eval, if you want to write expressions in the source language.
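For example, a sketch of that workaround against the breakpoint from the question, evaluating the condition inside stop() itself instead of setting bp.condition:

import gdb

class MyBP(gdb.Breakpoint):
    def stop(self):
        # Evaluate the source-language condition here; the rest of the
        # method only runs when it holds.
        if gdb.parse_and_eval("some_value") != 2:
            return False   # condition not met: don't stop, don't do any work
        print("stop called " + str(self.hit_count))
        return True

bp = MyBP("test.c:22")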
I have a Python program, and the user shall be able to do some scripting of their own in the interface. The language here is Lua. So far I have Lua sandboxed and can execute the user code all at once like this:
lua_sandbox = lua.eval('''
    function()
        local function run(untrusted_code)
            local untrusted_function, message = load(untrusted_code, nil, 't', _ENV)
            if not untrusted_function then return nil, message end
            return xpcall(untrusted_function, errorHandler)
        end
        -------------------- Defining Python functions and objects in Lua -------------------
        local function PythonFunctionInLua(arg)
            return python.eval('PythonFunction(' .. arg ..')')
        end
        PythonObjectInLua = python.eval('PythonObject')
        -- Pass them to the local Lua sandbox
        _ENV = { print = print,
                 PythonFunction = PythonFunctionInLua,
                 PythonObject = PythonObjectInLua,
                 coroutine = coroutine }
        assert(run [[''' + code + ''' ]])
    end
''')
So far, so good. Now I want to add the possibility to execute the code stepwise. The best idea I've had so far is to use a debug hook, so I tried this:
... (from above until _ENV)
debug.sethook(scripterDebug, "l")
func = assert( run [[''' + code + ''' ]] )
co = coroutine.create(func)
coroutine.resume(co)
Here comes the problem: I can't call coroutine.yield to pause the routine until the user clicks again to process the next line. Calling it from the outside, e.g. in the scripterDebug hook, as well as calling it from the inside (inserting it in the code part), results in attempt to yield across C-call boundary (I suppose because I keep going in and out of the sandbox in the latter case).
I also tried creating the coroutine within the run [[ ]], and then yielding works, but I can't call coroutine.resume(co) on the outside because `co' is only known inside the sandbox. I have to call it from the outside because it is invoked when the user clicks to continue, and if I pass access to the UI etc. into the sandbox, the whole thing becomes useless.
Is there any nice and short way to execute this code stepwise in a sandbox?
I have some C code that calls a Python function. This Python function accepts an address and uses WINFUNCTYPE to eventually convert it to a function that Python can call. The C function passed as a parameter to the Python function will eventually call another Python function. It is this last step that causes a crash. So in short I go from C -> Python -> C -> Python. The last C -> Python causes a crash. I've been trying to understand the problem, but I have been unable to.
Can someone point out my problem?
C code compiled with Visual Studio 2010 and run with the args "c:\...\crash.py" and "func1":
#include <stdlib.h>
#include <stdio.h>
#include <Python.h>

PyObject* py_lib_mod_dict; //borrowed

void __stdcall cfunc1()
{
    PyObject* py_func;
    PyObject* py_ret;
    int size;
    PyGILState_STATE gil_state;

    gil_state = PyGILState_Ensure();
    printf("Hello from cfunc1!\n");
    size = PyDict_Size(py_lib_mod_dict);
    printf("The dictionary has %d items!\n", size);
    printf("Calling with GetItemString\n");
    py_func = PyDict_GetItemString(py_lib_mod_dict, "func2"); //fails here when cfunc1 is called via callback... will not even go to the next line!
    printf("Done with GetItemString\n");
    py_ret = PyObject_CallFunction(py_func, 0);
    if (py_ret)
    {
        printf("PyObject_CallFunction from cfunc1 was successful!\n");
        Py_DECREF(py_ret);
    }
    else
        printf("PyObject_CallFunction from cfunc1 failed!\n");
    printf("Goodbye from cfunc1!\n");
    PyGILState_Release(gil_state);
}

int wmain(int argc, wchar_t** argv)
{
    PyObject* py_imp_str;
    PyObject* py_imp_handle;
    PyObject* py_imp_dict; //borrowed
    PyObject* py_imp_load_source; //borrowed
    PyObject* py_dir; //stolen
    PyObject* py_lib_name; //stolen
    PyObject* py_args_tuple;
    PyObject* py_lib_mod;
    PyObject* py_func;
    PyObject* py_ret;

    Py_Initialize();
    //import our python script
    py_dir = PyUnicode_FromWideChar(argv[1], wcslen(argv[1]));
    py_imp_str = PyString_FromString("imp");
    py_imp_handle = PyImport_Import(py_imp_str);
    py_imp_dict = PyModule_GetDict(py_imp_handle); //borrowed
    py_imp_load_source = PyDict_GetItemString(py_imp_dict, "load_source"); //borrowed
    py_lib_name = PyUnicode_FromWideChar(argv[2], wcslen(argv[2]));
    py_args_tuple = PyTuple_New(2);
    PyTuple_SetItem(py_args_tuple, 0, py_lib_name); //stolen
    PyTuple_SetItem(py_args_tuple, 1, py_dir); //stolen
    py_lib_mod = PyObject_CallObject(py_imp_load_source, py_args_tuple);
    py_lib_mod_dict = PyModule_GetDict(py_lib_mod); //borrowed

    printf("Calling cfunc1 from main!\n");
    cfunc1();

    py_func = PyDict_GetItem(py_lib_mod_dict, py_lib_name);
    py_ret = PyObject_CallFunction(py_func, "(I)", &cfunc1);
    if (py_ret)
    {
        printf("PyObject_CallFunction from wmain was successful!\n");
        Py_DECREF(py_ret);
    }
    else
        printf("PyObject_CallFunction from wmain failed!\n");

    Py_DECREF(py_imp_str);
    Py_DECREF(py_imp_handle);
    Py_DECREF(py_args_tuple);
    Py_DECREF(py_lib_mod);
    Py_Finalize();
    fflush(stderr);
    fflush(stdout);
    return 0;
}
Python code:
from ctypes import *

def func1(cb):
    print "Hello from func1!"
    cb_proto = WINFUNCTYPE(None)
    print "C callback: " + hex(cb)
    call_me = cb_proto(cb)
    print "Calling callback from func1."
    call_me()
    print "Goodbye from func1!"

def func2():
    print "Hello and goodbye from func2!"
Output:
Calling cfunc1 from main!
Hello from cfunc1!
The dictionary has 88 items!
Calling with GetItemString
Done with GetItemString
Hello and goodbye from func2!
PyObject_CallFunction from cfunc1 was successful!
Goodbye from cfunc1!
Hello from func1!
C callback: 0x1051000
Calling callback from func1.
Hello from cfunc1!
The dictionary has 88 items!
Calling with GetItemString
PyObject_CallFunction from wmain failed!
I added a PyErr_Print() to the end and this was the result:
Traceback (most recent call last):
File "C:\Programming\crash.py", line 9, in func1
call_me()
WindowsError: exception: access violation writing 0x0000000C
EDIT: Fixed a bug that abarnert pointed out. Output is unaffected.
EDIT: Added in the code that resolved the bug (acquiring the GIL lock in cfunc1). Thanks again abarnert.
The problem is this code:
py_func = PyDict_GetItemString(py_lib_mod_dict, "func2"); //fails here when cfunc1 is called via callback... will not even go to the next line!
printf("Done with GetItemString\n");
py_ret = PyObject_CallFunction(py_func, 0);
Py_DECREF(py_func);
As the docs say, PyDict_GetItemString returns a borrowed reference. So, the first time you call here, you borrow the reference, and decref it, causing it to be destroyed. The next time you call, you get back garbage, and try to call it.
So, to fix it, just remove the Py_DECREF(py_func) (or add Py_INCREF(py_func) after the py_func = line).
Actually, you will usually get back a special "dead" object, so you can test this pretty easily: put a PyObject_Print(py_func, stdout) after the py_func = line and after the Py_DECREF line, and you'll probably see something like <function func2 at 0x10b9f1230> the first time, <refcnt 0 at 0x10b9f1230> the second and third times (and you won't see the fourth, because it'll crash before you get there).
I don't have a Windows box handy, but changing wmain, wchar_t, PyUnicode_FromWideChar, WINFUNCTYPE, etc. to main, char, PyString_FromString, CFUNCTYPE, etc., I was able to build and run your code, and I get a crash in the same place… and the fix works.
Also… shouldn't you be holding the GIL inside cfunc1? I don't often write code like this, so maybe I'm wrong. And I don't get a crash with the code as-is. Obviously, spawning a thread to run cfunc1 does crash, and PyGILState_Ensure/Release solves that crash… but that doesn't prove you need anything in the single-threaded case. So maybe this isn't relevant… but if you get another crash after fixing the first one (in the threaded case, mine looked like Fatal Python error: PyEval_SaveThread: NULL tstate), look into this.
By the way, if you're new to Python extending and embedding: A huge number of unexplained crashes are, like this one, caused by manual refcounting errors. That's the reason things like boost::python, etc. exist. It's not that it's impossible to get it right with the plain C API, just that it's so easy to get it wrong, and you will have to get used to debugging problems like this.
abarnert's answer provided the correct functions to call; however, the explanation bothered me, so I came home early and poked around some more.
Before I go into the explanation, I want to mention that when I say GIL, I strictly mean the mutex, semaphore, or whatever that the Global Interpreter Lock uses to do the thread synchronization. This does not include any other housekeeping that Python does before/after it acquires and releases the GIL.
Single-threaded programs do not initialize the GIL, because you never call PyEval_InitThreads(); thus there is no GIL. Even if there were locking going on, it shouldn't matter, because it's single-threaded. However, functions that acquire and release the GIL also do some funny stuff, like messing with the thread state, in addition to acquiring/releasing the GIL. The documentation on WINFUNCTYPE objects explicitly states that they release the GIL before making the jump to C. So when the C callback was called from Python, I suspect something like PyEval_SaveThread() is called (maybe in error, because it's only supposed to be called in threaded operation, at least from my understanding). This would release the GIL (if it existed) and set the thread state to NULL; however, there's no GIL in single-threaded Python programs, so all it really does is set the thread state to NULL. This causes the majority of Python functions in the C callback to fail hard.
Really the only benefit of calling PyGILState_Ensure/Release is to tell Python to set the thread state to something valid before running off and doing things. There's not a GIL to acquire (not initialized because I never called PyEval_InitThreads()).
To test my theory: in the main function I use PyThreadState_Swap(NULL) to grab a copy of the thread state object. I restore it during the callback and everything works fine. If I keep the thread state at NULL, I get pretty much the same access violation, even without doing a Python -> C callback. Inside cfunc1, I restore the thread state and there are no more problems in cfunc1 itself during the Python -> C callback.
There is an issue when cfunc1 returns into the Python code, but that's probably because I messed with the thread state and the WINFUNCTYPE object is expecting something totally different. If you keep the thread state without setting it back to NULL when returning, Python just sits there and does nothing. If you restore it back to NULL, it crashes. However, it does successfully execute cfunc1, so I'm not sure I care too much.
I may eventually go poke around in the Python source code to be 100% sure, but I'm sure enough to be satisfied.