I use Python with C (via SWIG): the main module in Python processes data using a C library. The program crashes somewhere in the C library, so I want to analyse the core dump and find the bug.
However, I do not have a regular executable to run under gdb. I have main.py, the shared library _library.so generated from my C code, and the .o object files from C. How should I feed gdb this mixed code so that it can read the core dump?
IIRC you can do this by running Python through gdb, i.e.
gdb python
(gdb) run main.py
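To analyse the core dump itself rather than re-running the program, point gdb at both the interpreter and the core file. A sketch of such a session, assuming the dump is named core and that _library.so was built with debug info (-g):

```
$ gdb python core
(gdb) bt            # backtrace; frames inside _library.so show the crash site
(gdb) frame 2       # select a frame in the C library
(gdb) info locals   # inspect C variables in that frame
```

If the C frames show only addresses, rebuild the SWIG library with -g and without optimization so gdb can map addresses to source lines.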
I need a C++ program to automate GDB while it debugs another C++ program. The driver has to be able to run gdb commands and capture their results, using them inside the script to save variable values for later use. My main questions are:
Is there a C++ library for driving GDB programmatically (running gdb commands such as continue or step) and reading variable values?
If there is no such library, how can I implement the C++ driver myself?
Can I use a Python script instead of a C++ program to debug the C++ program?
If Python is possible, which libraries can I use to drive GDB programmatically from Python?
An implementation example would be a good guideline and a great help.
Thanks!
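There is no official C++ client library, but GDB exposes a stable machine-readable protocol, GDB/MI, which any language can drive over a pipe. Below is a minimal sketch in Python (the program name and MI commands are placeholders for whatever you actually want to debug), together with a helper that classifies MI output records; a C++ driver would spawn gdb the same way and parse the same records:

```python
import subprocess

# GDB/MI prefixes each output line with a record-type marker.
MI_PREFIXES = {
    "^": "result",      # e.g. ^done, ^error
    "*": "exec-async",  # e.g. *stopped,reason="breakpoint-hit"
    "=": "notify",      # e.g. =breakpoint-created
    "~": "console",     # console stream output
    "&": "log",         # gdb's own log stream
}

def classify_mi_line(line):
    """Return the GDB/MI record type of one line of gdb output."""
    return MI_PREFIXES.get(line[:1], "other")

def run_gdb_mi(program, mi_commands):
    """Run gdb in batch MI mode on `program`, executing the given MI
    commands, and return gdb's raw output lines for parsing."""
    cmd = ["gdb", "--interpreter=mi2", "--batch"]
    for c in mi_commands:  # e.g. "-break-insert main", "-exec-run",
        cmd += ["-ex", c]  # "-data-evaluate-expression my_var"
    cmd.append(program)
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.stdout.splitlines()
```

For Python specifically, the third-party pygdbmi package wraps this same protocol, and gdb's own embedded Python API (gdb.execute, gdb.parse_and_eval) is available for scripts run inside gdb.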
I have a core dump of a running CPython program and would like to execute Python code in the dumped process's context.
I have loaded the core and the interpreter into gdb with gdb python core-dump-file.
I know about python-interactive, but it isn't able to see the context (e.g. import sys; sys.modules doesn't give me any of the process's modules).
How can I do this?
I don't mind calling CPython's C functions if that is the only possible way.
1) First, check whether your gdb was built with Python support.
You can do this at the gdb prompt:
(gdb) python print("Hi from python")
To check which Python version your gdb is linked against, try:
(gdb) python import sys
(gdb) python print(sys.version)
If these commands fail, it probably means that your gdb was never built with Python support in the first place.
In that case, build gdb from source and add --with-python="path to python" in the configure step, e.g.
./configure --with-python=/usr/bin/python36
Hope this helps!
I am debugging decode_raw_op_test from TensorFlow. The test file is written in Python, but it executes code from underlying C++ files.
Using pdb, I can debug the Python test file, but pdb doesn't recognize the C++ files. Is there a way to debug the underlying C++ code?
(I tried running gdb directly on decode_raw_op_test, but it gives "File not in executable format: File format not recognized", since the test is a Python script rather than a native executable.)
Debugging a mixed Python and C++ program is tricky. You can use gdb to debug the C++ parts of TensorFlow, however. There are two main ways to do this:
Run python under gdb, rather than the test script itself. Let's say that your test script is in bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test. You would run the following commands (--args makes gdb treat everything after python as the program's arguments, rather than as a core file):
$ gdb --args python bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test
(gdb) run
Note that gdb does not have great support for debugging the Python parts of the code. I'd recommend narrowing down the test case that you run to a single, simple test, and setting a breakpoint on a TensorFlow C API method, such as TF_Run, which is the main entry point from Python into C++ in TensorFlow.
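A session following that advice might look like this sketch (the breakpoint becomes pending until TensorFlow's shared library is loaded; answer yes when gdb asks):

```
(gdb) break TF_Run
(gdb) run
(gdb) bt        # once the breakpoint hits, inspect the C++ call stack
```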
Attach gdb to a running process. You can get the process ID of the Python test using ps and then run (where $PID is the process ID):
$ gdb -p $PID
You will probably need to arrange for your Python code to block so that there's time to attach. Calling the raw_input() function (input() in Python 3) is an easy way to do this.
You can also debug using the following steps:
gdb python
Then, at the gdb prompt, type:
(gdb) run bazel-bin/tensorflow/python/kernel_tests/decode_raw_op_test
Adding to mrry's answer: in today's TF2 environment, the main entry point is TFE_Execute; this is where you should add the breakpoint.
On a server I am working on, I need to run the following commands to ensure the xlsxwriter is available to import from python:
module load swdev
module load python/xlsxwriter_py3.4.2/0.7.2
However, I would like this to happen automatically when the Python script that needs it is run, from within the script itself. Running os.system or subprocess.call doesn't work, because those execute module in a child shell whose environment changes do not propagate back to the Python process. How do I do this?
You can call module from a Python script. The module command is provided by the environment-modules software, which also provides a python.py initialization script.
Evaluating this script in a Python script enables the module python function. If environment-modules is installed in /usr/share/Modules, you can find this script at /usr/share/Modules/init/python.py.
The following code enables the module python function:
import os
exec(open('/usr/share/Modules/init/python.py').read())
Thereafter you can load your modules:
module('load', 'swdev')
module('load', 'python/xlsxwriter_py3.4.2/0.7.2')
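A slightly more defensive variant of the same idea (the wrapper name init_modules is mine, and the default path assumes environment-modules is installed under /usr/share/Modules):

```python
import os

def init_modules(init_script="/usr/share/Modules/init/python.py"):
    """Evaluate the environment-modules Python init script and
    return the `module` function it defines."""
    if not os.path.exists(init_script):
        raise FileNotFoundError(
            "environment-modules init script not found: " + init_script)
    namespace = {}
    with open(init_script) as f:
        exec(f.read(), namespace)
    return namespace["module"]

# Usage (paths/module names as in the question):
# module = init_modules()
# module('load', 'swdev')
# module('load', 'python/xlsxwriter_py3.4.2/0.7.2')
```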
I am using pdb to debug a Python program, and the program uses a module written in C. I want to use the "step" command to enter a function in the C module, but this doesn't work. Is there any way to use pdb to debug a module written in C? Thanks in advance!
pdb won't allow you to debug modules written in C. You can however use gdb to debug errors you might be encountering in C code.
To launch a Python script using gdb you can use the following command:
gdb python
and then to execute your script:
(gdb) run <myscript>.py
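If you know which C function you want to inspect, you can set a breakpoint on it before running the script. A session sketch (my_c_function is a placeholder for a symbol in your C extension; gdb will offer to make the breakpoint pending until the extension is loaded):

```
$ gdb python
(gdb) break my_c_function
(gdb) run <myscript>.py
(gdb) bt        # when the breakpoint hits, inspect the C stack
```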