Python pdb for a background process

I have a Python process that spawns multiple background processes. I am currently seeing one or more of the background processes get stuck in an infinite loop or become unresponsive.
I want to attach a debugger to the background process so I can figure out what is going wrong. I have registered a signal handler (for SIGUSR1) in my background process which sets a pdb trace.
I send the signal from another console to the background process whenever it hangs. However, I don't see any terminal that would let me debug the code.
Am I doing something wrong? Or is there a better way to attach a debugger to a background Python process?
I am running on macOS, so using gdb is not straightforward.
import signal

def installHandlers():
    signal.signal(signal.SIGUSR1, debugHandle)

def debugHandle(sig, frame):
    global processLog
    processLog.info("got the SIGUSR1")
    import pdb
    pdb.Pdb().set_trace(frame)

I think this happens because Python closes stdin in the child process during multiprocessing.Process._bootstrap(). Therefore pdb.set_trace() doesn't work in child processes.
It usually fails with an error. Maybe you don't see the error because you redirect stdout somewhere?
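One common workaround (a sketch of my own, not from this answer; the RemotePdb name and port 4444 are arbitrary choices) is to subclass pdb.Pdb and bind its input and output to something the child process still owns, such as a TCP socket, and then connect to it after sending the signal:

import pdb
import signal
import socket

class RemotePdb(pdb.Pdb):
    """A pdb whose stdin/stdout are a TCP connection instead of the closed console."""
    def __init__(self, host="127.0.0.1", port=4444):
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._sock.bind((host, port))
        self._sock.listen(1)
        conn, _ = self._sock.accept()  # blocks until you connect, e.g.: nc 127.0.0.1 4444
        handle = conn.makefile("rw")
        pdb.Pdb.__init__(self, stdin=handle, stdout=handle)

def debugHandle(sig, frame):
    RemotePdb().set_trace(frame)

signal.signal(signal.SIGUSR1, debugHandle)

After sending SIGUSR1, connect from another console (for example with nc 127.0.0.1 4444) to get an interactive pdb prompt inside the hung child.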

There is a clone of pdb, imaginatively called pdb-clone, which allows for debugging of background processes.
You simply add from pdb_clone import pdbhandler; pdbhandler.register() to the code for the main process, and then you can start pdb with pdb-attach --kill --pid PID.
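In code form (pdb-clone must be installed first, e.g. with pip install pdb-clone; PID stays a placeholder for the hung process's id):

# in the main process
from pdb_clone import pdbhandler
pdbhandler.register()

# then, from another console
$ pdb-attach --kill --pid PID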

Related

Python subprocess kill is working for "notepad.exe" but not working for "calc.exe"

OS: Windows 10
Python: 3.5.2
I am trying to open calc.exe, do some actions, and then close it.
Here is my code sample
import subprocess
import time

p = subprocess.Popen('calc.exe')
# Some actions
time.sleep(2)
p.kill()
So this is not working for calc.exe: it just opens the calculator but does not close it. The same code works fine for "notepad.exe".
I am guessing there is a bug in the subprocess lib's kill method. The notepad.exe process is named notepad.exe in Task Manager, but the calc.exe process is named Calculator.exe, so I am guessing it tries to kill by name and does not find it.
There's no bug in subprocess's kill. If you're really worried about that, just check the source, which is linked from the docs. The kill method just calls send_signal, which just calls os.kill unless the process is already done, and you can see the Windows implementation of that function. In short: Popen.kill doesn't care what name the process has in the kernel's process table (or the Task Manager); it remembers the PID (process ID) of the process it started, and kills it that way.
The most likely problem is that, like many Windows apps, calc.exe has some special "single instance" code: when you launch it, if there's already a copy of calc.exe running in your session, it just tells that copy to come to the foreground (and open a window, if it doesn't have one), and then exits. So, by the time you try to kill it 2 seconds later, the process has already exited.
And if the actual running process is calculator.exe, that means calc.exe is just a launcher for the real program, so it always tells calculator.exe to come to the foreground, launching it if necessary, and then exits.
So, how can you kill the new calculator you started? Well, you can't, because you didn't start a new one. You can kill all calc.exe and/or calculator.exe processes (the easiest way is with a third-party library like psutil: see its examples on filtering, then kill each process once you've found it), but that will also kill any calculator you had open before running your program, not just the new one. Since calc.exe makes it impossible to tell whether you started a new process, there's really no way around that.
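For example, a short sketch using psutil (the process names assume Windows 10, where the UWP calculator runs as Calculator.exe):

import psutil

# kill every running calculator process, old or new
for proc in psutil.process_iter(['name']):
    if proc.info['name'] in ('calc.exe', 'Calculator.exe'):
        try:
            proc.kill()
        except psutil.NoSuchProcess:
            pass  # the process exited on its own in the meantime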
This is one way to kill it, but it will close every open calculator. It spawns a command prompt with no window and has it run taskkill to terminate the Calculator.exe process.
import subprocess
import time

p = subprocess.Popen('calc.exe')
print(p)
# Some actions
time.sleep(2)
# run taskkill without flashing a console window
CREATE_NO_WINDOW = 0x08000000
subprocess.call('taskkill /F /IM Calculator.exe', creationflags=CREATE_NO_WINDOW)

How to run a python process in the background continuously

I'm trying to build a todo manager in Python, where I want to continuously run a process in the background that will alert the user with a popup when the specified time comes. I'm wondering how I can achieve that.
I've looked at some of the answers on Stack Overflow and on other sites, but none of them really helped.
What I want is to start a background process once the user enters a task and keep it running in the background until its time comes. At the same time there might be other threads running for other tasks as well, each ending at its own end time.
So far, I've tried this:
from threading import Thread

t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
t.join()
With this the thread runs continuously, but it runs in the foreground and the script only exits when the execution is done.
If I add t.daemon = True to the above code, the main thread exits immediately after start() and it looks like the daemon thread is killed along with it.
Please let me know how this can be solved.
I'm guessing that you just don't want to see the terminal window after you launch the script. In this case, it is a matter of how you execute the script.
Try these things.
If you are using a windows computer you can try using pythonw.exe:
pythonw.exe example_script.py
If you are using linux (maybe OSx) you may want to use 'nohup' in the terminal.
nohup python example_script.py
More or less, the reason you have to do this comes down to how the operating system handles processes. I am not an expert on this subject, but generally, if you launch a script from a terminal, that script becomes a child process of the terminal. So if you exit that terminal, it will also terminate any child processes. The way to get around that is to detach the process from the terminal with something like nohup.
Now, if you add the #!/usr/bin/env python shebang line, your OS may run the script without a terminal window when you double-click it. YMMV (again, this depends on how your OS works).
The first thing you need to do is prevent your script from exiting, by keeping the main thread alive with a while loop instead of joining the worker thread:

import time
from threading import Thread

t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()

# Note: no t.join() here; join() would block until the worker finishes.
while True:
    time.sleep(1.0)
Then you need to put it in the background:
$ nohup python alert_popup.py >> /dev/null 2>&1 &
You can get more information on controlling a background process at this answer.
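As a concrete illustration of the one-timer-per-task idea (a minimal sketch; show_popup and the task names are hypothetical placeholders to swap for a real popup):

import time
from threading import Timer

def show_popup(task):
    # placeholder: replace with a real notification (GUI dialog, notify-send, ...)
    print("Time's up for task: " + task)

# schedule one timer per task; each fires once at its own end time
Timer(5.0, show_popup, args=("write report",)).start()
Timer(10.0, show_popup, args=("call dentist",)).start()

# keep the main thread alive while the timers wait in the background
while True:
    time.sleep(1.0)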

lldb python: handle a process crash or kill

I want to do something when a process crashes or is killed, from a Python script.
However, I can't find any way to know when a process is stopped by lldb.
I've tried to catch a SIGKILL signal, but to no avail.
import lldb
import signal

def debug(sig, frame):
    print("stop!\n")

def listen():
    signal.signal(signal.SIGKILL, debug)  # register handler (this actually raises an error: SIGKILL cannot be caught)
I've found that we can use this to handle a breakpoint hit, but it can't deal with my situation:
def breakpoint_function_wrapper(frame, bp_loc, dict):
Does anyone have a solution?
There's a little sample program in the lldb python examples that shows how to handle process events using the lldb library:
http://llvm.org/svn/llvm-project/lldb/trunk/examples/python/process_events.py
That might help get you started.
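Since SIGKILL can never be caught inside the target process itself, the monitoring has to happen on the debugger side, which is what that example does. A minimal sketch of its event loop (the executable path is a placeholder; treat this as an outline, not a drop-in script):

import os
import lldb

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(True)  # deliver process events asynchronously

target = debugger.CreateTarget("/path/to/your/program")  # placeholder path
process = target.LaunchSimple(None, None, os.getcwd())

listener = debugger.GetListener()
event = lldb.SBEvent()
while listener.WaitForEvent(5, event):  # wait up to 5 seconds per event
    state = lldb.SBProcess.GetStateFromEvent(event)
    if state == lldb.eStateStopped:
        process.Continue()  # e.g. stopped at a breakpoint or signal
    elif state in (lldb.eStateExited, lldb.eStateCrashed, lldb.eStateDetached):
        print("process ended with state", state)  # react to the crash/kill here
        break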

Execute a shell script with ampersand from a python program

I want to submit my long-running Python job using an ampersand. I'm going to kick this process off from an interactive Python program with a subprocess call.
How would I keep track of the submitted job programmatically, in case I want to end it from a menu option?
Example of interactive program:
Main Menu
1. Submit long running job &
2. End long running job
If you're using Python's subprocess module, you don't really need to background it again with &, do you? You can just keep your Popen object around to track the job, and it will run while the other Python process continues.
If your "outer" Python process is going to terminate, what sort of track do you need to keep? Would pgrep/pkill be suitable? Alternatively, you could have the long-running job log its PID, often under /var/run somewhere, and use that to check whether the process is still alive and/or to signal it.
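A minimal sketch of the Popen-based approach (the script name and menu hooks are illustrative assumptions):

import subprocess

job = None

def submit_job():
    global job
    # no trailing '&' needed: Popen already runs the job in the background
    job = subprocess.Popen(['/bin/sh', 'long_running_job.sh'])

def end_job():
    if job is not None and job.poll() is None:  # poll() returns None while running
        job.terminate()  # send SIGTERM; use job.kill() for SIGKILL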
You could use Unix signals. Here we capture SIGUSR1 to tell the process to communicate some info to STDOUT.
#!/usr/bin/env python
import signal

def get_job_status():
    return "still running"  # placeholder: report real progress here

def signal_handler(sig, frame):
    print('Caught SIGUSR1!')
    print("Current job status is " + get_job_status())

signal.signal(signal.SIGUSR1, signal_handler)
and then from the shell
kill -USR1 <pid>

How to check which line of a Python script is being executed?

I've got a Python script which is running on a Linux server for hours, crunching some numbers for me. I'd like to check its progress, so I'd like to see what line is being executed right now. If that was a C or C++ program then I would just attach to the process with gdb -p <pid> and examine the stacktrace with where. Of course, I can do the same with the Python interpreter process, but I can't see the Python script's line in the stacktrace.
So, how can I find out which line of the Python script is being executed currently?
You can add a signal handler to the Python script that sends this information to the terminal, or to a file, then hit ^C in the terminal to send the signal to the process.
import signal

def print_linenum(signum, frame):
    print("Currently at line", frame.f_lineno)

signal.signal(signal.SIGINT, print_linenum)
You could also use some other signal and use the kill command to send the signal, if you need ^C to be able to interrupt the script, or set a signal.alarm() to print the information periodically, e.g. once a second.
You could print out other things from the stack frame if you like; there's a lot there. See the attributes of frame objects in this table.
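For the periodic variant, a short sketch using SIGALRM and the traceback module (the one-second interval is an arbitrary choice):

import signal
import traceback

def dump_stack(signum, frame):
    traceback.print_stack(frame)  # print the whole stack, not just the current line
    signal.alarm(1)               # re-arm: alarm() only fires once per call

signal.signal(signal.SIGALRM, dump_stack)
signal.alarm(1)  # first report in one second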
