I am using the GDB Python interface to handle breakpoints:
import gdb

class MyBP(gdb.Breakpoint):
    def stop(self):
        print("stop called " + str(self.hit_count))
        return True

bp = MyBP("test.c:22")
This works as expected. The hit_count is increased after the "stop" method returns.
Now when I want to use a conditional breakpoint:
bp.condition="some_value==2"
it is not working as expected. The stop method is always executed, regardless of whether the condition is true or false. If the stop method returns True, the breakpoint only halts the program if the condition is also true. The hit_count is increased after the stop method returns and the condition holds.
So it seems as if GDB only applies the condition check after the stop method has been called.
How can I ensure that the Stop method is only called when the condition holds?
How can I ensure that the Stop method is only called when the condition holds?
Currently, you can't. See bpstat_check_breakpoint_conditions() in gdb/breakpoint.c
Relevant portions:
/* Evaluate extension language breakpoints that have a "stop" method
   implemented.  */
bs->stop = breakpoint_ext_lang_cond_says_stop (b);
...
condition_result = breakpoint_cond_eval (cond);
...
if (cond && !condition_result)
  {
    bs->stop = 0;
  }
else if (b->ignore_count > 0)
  {
    ...
    ++(b->hit_count);
    ...
  }
So the Python stop method is always called before the condition is evaluated. You can implement your condition in Python though, e.g. using gdb.parse_and_eval if you want to write expressions in the source language.
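For example, a minimal sketch (not from the original answer) of folding the check into stop() itself, assuming some_value is a variable visible at test.c:22:

import gdb

class MyBP(gdb.Breakpoint):
    def stop(self):
        # Evaluate the condition in the debuggee first; bail out early
        # so the rest of the handler only runs when it holds.
        if int(gdb.parse_and_eval("some_value")) != 2:
            return False                    # condition failed: don't halt
        print("stop called " + str(self.hit_count))
        return True                         # condition held: halt the program

bp = MyBP("test.c:22")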
I have a Python wrapper holding some C++ code. In it is a function that I set up as a process from my Python code. It contains a while loop, and I need to set up a condition for when it should shut down.
For this situation, the while statement is simple.
while(TERMINATE == 0)
I have data that is being sent back from within the while loop. I'm using Pipe() to create 'in' and 'out' objects. I send the 'out' object to the function when I create the process.
fxn = self.FG.do_videosequence
(self.inPipe, self.outPipe) = Pipe()
self.stream = Process(target=fxn, args=(self.outPipe,))
self.stream.start()
As I mentioned, while inside the wrapper I am able to send data back to the Python script with
PyObject *send = Py_BuildValue("s", "send_bytes");
PyObject_CallMethodObjArgs(pipe, send, temp, NULL);
This works just fine. However, I'm having issues with sending a message to the C++ code, in the wrapper, that tells the loop to stop.
What I figured I would do is just check poll(), as that is what I do on the Python script side. I want to keep it simple: when the system sees that there is an incoming signal from the Python script, it sets TERMINATE = 1. So I wrote this:
PyObject *poll = Py_BuildValue("p", "poll");
Since I'm expecting a true or false from the Python function poll(), I figured "p" would be ideal, as it would convert true to 1 and false to 0.
In the loop I have:
if (PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL))
    TERMINATE = 1;
I wanted to use poll() as it's non-blocking, unlike recv(). This way I could just go about my other work and check poll() once a cycle.
However, when I send a signal from the Python script it never trips.
self.inPipe.send("Hello");
I'm not sure where the disconnect is. When I print the poll() request, I get 0 the entire time. Either I'm not calling it correctly and it's just defaulting to 0, or I'm not actually generating a signal to trip the poll() call, so it's always 0.
Does anyone have any insight as to what I am doing wrong?
*****UPDATE******
I found some other information.
PyObject *poll = Py_BuildValue("p", "poll");
should be
PyObject *poll = Py_BuildValue("s", "poll");
as I'm passing a string as a reference to the function I'm calling, so it should be built as a string. It has nothing to do with the return type.
From there the return of
PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL)
is a PyObject, so it needs to be checked as a PyObject, for example by making a call to
PyObject_IsTrue
to determine if it's true or false. I'll make changes to my code and if I have a solution I'll update the post with an answer.
So I've been able to find the solution. In the end I was making two mistakes.
The first mistake was when I created the PyObject reference to the Python function I was calling. I misread the documentation and inserted a "p" without reading the context. So
PyObject *poll = Py_BuildValue("p", "poll");
should be
PyObject *poll = Py_BuildValue("s", "poll");
The second mistake was how I was handling the return value of
PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL)
while it's true that it's calling a Python object, it does not return a simple true/false value, but rather a Python object. So I specifically needed to handle the Python object by calling
PyObject_IsTrue(PyObject *o)
with the return of the poll() request as the argument. I now have the ability to send/receive from both the Python script and the C API contained in the wrapper.
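For reference, here is what the same handshake looks like in pure Python (do_videosequence is stood in for by a hypothetical do_work; the C-API calls above are the embedded equivalents of pipe.poll() and pipe.send_bytes()):

import time
from multiprocessing import Pipe, Process

def do_work(pipe):
    terminate = 0
    while terminate == 0:
        pipe.send("frame data")      # counterpart of the send_bytes call in C
        time.sleep(0.01)             # stand-in for the real per-frame work
        if pipe.poll():              # non-blocking check for an incoming message
            pipe.recv()              # drain the message so the loop can exit
            terminate = 1

if __name__ == "__main__":
    in_pipe, out_pipe = Pipe()
    stream = Process(target=do_work, args=(out_pipe,))
    stream.start()
    print(in_pipe.recv())            # read one chunk of data from the worker
    in_pipe.send("Hello")            # tell the worker loop to stop
    stream.join()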
Given a function call and a try block that immediately follows it, is there any scenario where the call returns normally but an exception is raised and not caught by the try block?
For example:
# example 1
resource = acquire_a_resource()
try:
    resource.do_something()
    # some more code...
finally:
    resource.close()
Is it possible that acquire_a_resource() returns normally but resource.close() will not be called?
Or in other words, is there any scenario where:
# example 2
resource = None
try:
    resource = acquire_a_resource()
    resource.do_something()
    # some more code...
finally:
    if resource:
        resource.close()
would be safer than example #1?
Maybe because of something to do with KeyboardInterrupt/threads/signals?
Yes, at least in theory, though not in CPython (see footnote for details). Threading is not particularly relevant, but your KeyboardInterrupt scenario is just right:
resource = acquire_a_resource()
calls the function. The function acquires the resource and returns the handle, and then during the assignment to the variable,¹ the keyboard interrupt occurs. So:
try:
does not run—the KeyboardInterrupt exception happens instead, leaving the current function and unbinding the variable.
The second version passes through the finally clause, so assuming if resource finds it truthy, resource.close() does get called.
(Note that actually triggering this is often very difficult: you have to time the interrupt just right. You can increase the race window a lot by, e.g., adding a time.sleep(1) before the try.)
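A rough sketch of provoking it by hand; the resource class and acquire_a_resource() below are stand-ins for the question's hypothetical ones:

import time

class _FakeResource:                 # stand-in for whatever acquire_a_resource() returns
    def do_something(self):
        pass
    def close(self):
        print("closed")

def acquire_a_resource():
    return _FakeResource()

resource = acquire_a_resource()
time.sleep(1)        # widen the race window: hit Ctrl-C during this second
try:
    resource.do_something()
finally:
    resource.close() # skipped if the KeyboardInterrupt lands before the try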
For many cases, a with statement works well:
with acquire_a_resource() as resource:
    resource.do_something()
where the close is built into the __exit__ method. The method runs even if the block is escaped via exception.
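A minimal sketch (hypothetical names, not from the question) of what such a resource class might look like:

class Resource:
    def do_something(self):
        print("working")

    def close(self):
        print("closed")

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()        # runs even if the with block is left via an exception
        return False        # don't swallow the exception

With that, "with Resource() as resource:" gives the same guarantee as example #2 without the sentinel check.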
¹ In general, the implementation is obligated to complete the binding of the acquired resource to the variable, otherwise there's an irrecoverable race. In CPython this happens because the interpreter checks for interrupts between statements, and occasionally in strategic places in the source.
CPython actually adds another special case:
/* Do periodic things.  Doing this every time through
   the loop would add too much overhead, so we do it
   only every Nth instruction.  We also do it if
   ``pendingcalls_to_do'' is set, i.e. when an asynchronous
   event needs attention (e.g. a signal handler or
   async I/O handler); see Py_AddPendingCall() and
   Py_MakePendingCalls() above. */

if (_Py_atomic_load_relaxed(&_PyRuntime.ceval.eval_breaker)) {
    opcode = _Py_OPCODE(*next_instr);
    if (opcode == SETUP_FINALLY ||
        opcode == SETUP_WITH ||
        opcode == BEFORE_ASYNC_WITH ||
        opcode == YIELD_FROM) {
        /* Few cases where we skip running signal handlers and other
           pending calls:
           - If we're about to enter the 'with:'. It will prevent
             emitting a resource warning in the common idiom
             'with open(path) as file:'.
           - If we're about to enter the 'async with:'.
           - If we're about to enter the 'try:' of a try/finally (not
             *very* useful, but might help in some cases and it's
             traditional)
           - If we're resuming a chain of nested 'yield from' or
             'await' calls, then each frame is parked with YIELD_FROM
             as its next opcode. If the user hit control-C we want to
             wait until we've reached the innermost frame before
             running the signal handler and raising KeyboardInterrupt
             (see bpo-30039).
        */
        goto fast_next_opcode;
    }
(Python/ceval.c, near line 1000).
So actually the try line does run, in effect, because there's a SETUP_FINALLY here. It's not at all clear to me whether other Python implementations do the same thing.
I'm trying to execute a Python callback when a certain function is called. It works if the function is called by running the process, but it fails when I call the function with SBTarget.EvaluateExpression.
Here's my C code:
#include <stdio.h>

int foo(void) {
    printf("foo() called\n");
    return 42;
}

int main(int argc, char **argv) {
    foo();
    return 0;
}
And here's my Python script:
import lldb
import os

def breakpoint_cb(frame, bpno, err):
    print('breakpoint callback')
    return False

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
target = debugger.CreateTargetWithFileAndArch('foo', 'x86_64-pc-linux')
assert target

# Break at main and start the process.
main_bp = target.BreakpointCreateByName('main')
process = target.LaunchSimple(None, None, os.getcwd())
assert process.state == lldb.eStateStopped

foo_bp = target.BreakpointCreateByName('foo')
foo_bp.SetScriptCallbackFunction('breakpoint_cb')

# Callback is executed if foo() is called from the program
#process.Continue()

# This causes an error and the callback is never called.
opt = lldb.SBExpressionOptions()
opt.SetIgnoreBreakpoints(False)
v = target.EvaluateExpression('foo()', opt)

err = v.GetError()
if err.fail:
    print(err.GetCString())
else:
    print(v.value)
I get the following error:
error: Execution was interrupted, reason: breakpoint 2.1.
The process has been left at the point where it was interrupted, use "thread
return -x" to return to the state before expression evaluation
I get the same error when the breakpoint has no callback, so it's really the breakpoint that is causing problems, not the callback. The expression is evaluated when opt.SetIgnoreBreakpoints(True) is set, but that doesn't help in my case.
Is this something that can be fixed or is it a bug or missing feature?
Operating system is Arch Linux, LLDB version is 6.0.0 from the repository.
The IgnoreBreakpoints setting doesn't mean you don't hit breakpoints while running. For instance, you will notice that the breakpoint hit count will get updated either way. Rather it means:
True: if we hit a breakpoint, we will auto-resume.
False: if we hit a breakpoint, we will stop regardless.
The False setting is intended for calling a function because you want to stop in it, or in some function it calls, for the purposes of debugging that function. So overriding the breakpoint conditions and commands is the right thing to do.
For your purposes, I think you want IgnoreBreakpoints to be True, since you also want the expression evaluation to succeed.
OTOH, if I understand your intent, the thing that's causing you a problem is that when IgnoreBreakpoints is false, lldb doesn't call the breakpoint's commands. It should only skip that bit of work when we are forcing the stop.
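Based on that, a hedged sketch of the suggested change to the question's script (reusing its target variable):

opt = lldb.SBExpressionOptions()
opt.SetIgnoreBreakpoints(True)     # auto-resume if foo's breakpoint is hit
v = target.EvaluateExpression('foo()', opt)
err = v.GetError()
if err.fail:
    print(err.GetCString())
else:
    print(v.value)                 # expect 42 once the call runs to completion

Whether the breakpoint's Python callback fires during the hand-called expression is the separate issue described above.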
I have a Python program and the user shall be able to do some scripting of their own in the interface. The language here shall be Lua. So far I have Lua sandboxed and can execute the user code all at once like this:
lua_sandbox = lua.eval('''
    function()
        local function run(untrusted_code)
            local untrusted_function, message = load(untrusted_code, nil, 't', _ENV)
            if not untrusted_function then return nil, message end
            return xpcall(untrusted_function, errorHandler)
        end

        -------------------- Defining Python functions and objects in Lua -------------------
        local function PythonFunctionInLua(arg)
            return python.eval('PythonFunction(' .. arg .. ')')
        end
        PythonObjectInLua = python.eval('PythonObject')

        -- Pass them to the local Lua sandbox
        _ENV = { print = print,
                 PythonFunction = PythonFunctionInLua,
                 PythonObject = PythonObjectInLua,
                 coroutine = coroutine }

        assert(run [[''' + code + ''' ]])
    end
''')
So far, so good. Now I want to add the possibility to execute the code stepwise. The best idea I had so far is using a debug hook, so I tried this:
... (from above until _ENV)
debug.sethook(scripterDebug, "l")
func = assert( run [[''' + code + ''' ]] )
co = coroutine.create(func)
coroutine.resume(co)
Here comes the problem: I can't call coroutine.yield to pause the routine until the user clicks again to process the next line. Calling it from the outside, e.g. in the scripterDebug hook, as well as calling it from the inside (inserting it in the code part), results in attempt to yield across C-call boundary (I suppose because I keep going in and out of the sandbox in the latter case).
I also tried creating the coroutine within the run [[ ]], and then yielding works, but I can't call coroutine.resume(co) on the outside because `co' is only known inside the sandbox. I have to call it from the outside because it is called when the user clicks to continue, and if I pass access to the UI etc. into the sandbox, the whole thing becomes useless.
Is there a nice and short way to execute this code stepwise in a sandbox?
Any code after a while loop will execute when the condition in the while loop becomes False. The same holds for the code in the else clause of a while loop in Python. So what's the advantage of having else in the while loop?
The else suite will not execute if the loop is exited with a break statement. From the docs:
The while statement is used for repeated execution as long as an expression is true:

while_stmt ::= "while" expression ":" suite
               ["else" ":" suite]

This repeatedly tests the expression and, if it is true, executes the first suite; if the expression is false (which may be the first time it is tested) the suite of the else clause, if present, is executed and the loop terminates.

A break statement executed in the first suite terminates the loop without executing the else clause's suite. A continue statement executed in the first suite skips the rest of the suite and goes back to testing the expression.
(emphasis mine) This also works for for loops, by the way. It's not often useful, but usually very elegant when it is.
I believe the standard use case is when you are searching through a container to find a value:
for element in container:
    if cond(element):
        break
else:
    ...  # no such element
Notice also that after the loop, element stays bound in the enclosing scope (provided the container was not empty), which is convenient.
I found it counterintuitive until I heard a good explanation from some mailing list:
else suites always execute when a condition has been evaluated to False
So if the condition of a while loop is evaluated and found to be false, the loop will stop and the else suite will run. break is different because it exits the loop without testing the condition.
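A quick sketch of both exits:

# Normal exit: the condition becomes false, so the else suite runs.
n = 3
while n > 0:
    n -= 1
else:
    print("condition became false")      # printed

# Early exit: break leaves the loop and the else suite is skipped.
n = 3
while n > 0:
    if n == 2:
        break
    n -= 1
else:
    print("condition became false")      # not printed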
The else clause for the looping constructs was added to eliminate flags that distinguish between normal and "abnormal" loop exits. For example, in C you might have:
int found = 0;
for (int i = 0; i < BUFSIZ; i++) {
    if (...predicate...) {
        found++;
        break;
    }
}
if (found) {
    // I broke out of the for
} else {
    // the for loop hit BUFSIZ
}
Whereas with a loop-else you can eliminate the (somewhat contrived) found flag, as sketched below.
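A Python rendering of that C fragment, with no flag (BUFSIZ, buffer and predicate are hypothetical stand-ins defined here for illustration):

# Hypothetical stand-ins for the C example's BUFSIZ, buffer and predicate.
BUFSIZ = 8
buffer = list(range(BUFSIZ))

def predicate(x):
    return x == 3

for i in range(BUFSIZ):
    if predicate(buffer[i]):
        print("I broke out of the for")    # found a match
        break
else:
    print("the for loop hit BUFSIZ")       # ran to the end without a match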
quoting ars: "The else clause is only executed when your while condition becomes false. If you break out of the loop, or if an exception is raised, it won't be executed."
See Else clause on Python while statement.
The else suite on Python loops is best thought of for the case where the loop is performing a search. It's where you handle the case where your search was unsuccessful. (There may be other cases where you might use this, but this is the most common and most easily remembered use case.)
The alternative would be to use a sentinel value:
sentinel = object()
result = sentinel
for each_item in some_container:
    if matches_some_criteria(each_item):
        result = each_item
        break

if result is sentinel:
    do_something_about_failure()