So I have a Python script which I run from the command prompt as you'd expect:
python myscript.py
Inside my script I have a line that intercepts Ctrl+C and is supposed to tidy up a few things and shut down the script.
import threading
import sys

def check_for():
    while True:
        pass  # Perform check and operation if needed

if __name__ == '__main__':
    try:
        threading.Thread(target=check_for).start()
        # Perform script duties
    except KeyboardInterrupt:
        print "Tidying up a few things"
        # Performs tidying duties
        sys.exit()
But it doesn't seem to actually exit the program. It prints the message, so I know it's receiving the interrupt, but I don't get control back in the command prompt, and when I look in Task Manager I can see the Python process still running. I only regain control once I end that process. In fact, even when the program runs to completion normally without being interrupted, the process remains.
My program does spawn child threads, which is why I use sys.exit() over other methods, which I've heard can't be caught and end the process more abruptly.
Am I doing something wrong? Is there a cross-platform way of doing this (which is what I thought sys.exit() was)?
Edit: I've isolated the issue. I created a small test script, and it seems that when I create a thread it simply never dies, so the program never exits. I've changed my initial code above to show the setup.
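Here's roughly what the test script looked like (a minimal reconstruction, not the exact file):
import threading, time

def worker():
    while True:
        time.sleep(1)  # stand-in for the real check

threading.Thread(target=worker).start()
print("main thread finished")  # this prints, but the process keeps running
The main thread finishes, but the process stays in Task Manager because the worker thread never ends.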
I want to create a program that runs until someone terminates the script by clicking the stop button in PyCharm. I tried
from signal import signal, SIGINT
from sys import exit

def handler(signal_received, frame):
    # Handle any cleanup here
    print('SIGINT or CTRL-C detected. Exiting gracefully')
    exit(0)

if __name__ == '__main__':
    signal(SIGINT, handler)
    print('Running. Press CTRL-C to exit.')
    while True:
        # Do nothing and hog CPU forever until SIGINT received.
        pass
from https://www.devdungeon.com/content/python-catch-sigint-ctrl-c.
I tried this on both Mac and Windows. On the Mac, PyCharm behaved as expected: when I click the stop button it catches the SIGINT. But on Windows I did exactly the same thing and it just immediately returns
Process finished with exit code -1
Is there something I can change to make Windows behave like it does on the Mac?
Any help is appreciated!
I don't think it's a strange question at all. On Unix systems, PyCharm sends a SIGTERM, waits one second, then sends a SIGKILL. On Windows, it does something else to end the process, something that seems untrappable. Even during development you need a way to cleanly shut down a process that uses native resources. In my case, there is a CAN controller that, if not shut down properly, can't ever be opened again. My workaround was to build a simple UI with a stop button that shuts the process down cleanly. The problem is that out of habit, from using PyCharm, GoLand, and IntelliJ, I just hit the red square button. Every time I do that I have to reboot the development system. So I think it is clearly also a development-time question.
This actually isn't a simple thing, because PyCharm sends SIGKILL with the stop button. Check the discussion here: https://youtrack.jetbrains.com/issue/PY-13316
There is a comment saying you can enable "kill windows process softly", but it didn't work for me. What did work was enabling "emulate terminal" in the run/debug configuration, then pressing Ctrl+C with the console window selected.
I am developing a program to control a machine. Frequently the movement needs to be stopped before it does harm. At the moment, the only immediate control is switching off the power, because the machine continues to respond to stored commands. This leaves a lot of loose ends. I need to send a command that stops the machine immediately and clears the queue before it can be started up again. This can happen at any time/place during the running of the program.
All the examples I have seen here appear to assume that the placement of the Ctrl+C and the KeyboardInterrupt response is predictable, e.g.
How do I capture SIGINT in Python?
How do I capture Ctrl+C at any (unpredicted) point in the program?
This question reveals that I don't really understand how the underlying processes work.
You could execute all of your code within a "main" function and catch KeyboardInterrupt around the call:
def Main():
    # Running code...
    pass

try:
    Main()
except KeyboardInterrupt:
    # Execute actions to stop immediately.
    pass
except Exception as e:
    print("An unexpected error occurred:\n" + str(e))
    # Execute actions to stop immediately.
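If the interrupt can arrive at literally any point (as in the machine-control case above), another option is to install a signal handler once at startup, so the cleanup runs no matter where execution happens to be. A minimal sketch, where stop_machine() and clear_queue() are hypothetical placeholders for your own shutdown commands:
import signal
import sys

def stop_machine():
    # hypothetical placeholder: send the immediate-stop command to the machine
    pass

def clear_queue():
    # hypothetical placeholder: discard any stored movement commands
    pass

def emergency_stop(signum, frame):
    stop_machine()
    clear_queue()
    sys.exit(1)

# Installed once; Ctrl+C triggers it wherever the program happens to be.
signal.signal(signal.SIGINT, emergency_stop)
Note that Python runs the handler in the main thread between bytecode instructions, so a long-running call into a C extension can still delay it.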
I've been ripping my hair out over this. I've searched the internet and can't seem to find a solution to my problem. I'm trying to auto-test some code using the gdb module from Python. I can do basic commands and things work, except for stopping a process that's running in the background. Currently I continue my program in the background after a breakpoint with this:
gdb.execute("c&")
I then interact with the running program, reading various constant values and getting responses from the program.
Next, I need to get a chunk of memory, so I run these commands:
gdb.execute("interrupt") #Pause execution
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)") #dump memory to file
But when I run the memory dump I get an error saying the command can't be run when the target is running. After the error, the interrupt command runs and the target is paused; then, from the gdb console window, I can run the memory dump.
I found a similar issue from a while ago that seems to have gone unanswered here.
I'm using Python 2.7.
I also found this link which seems to describe the issue, but there's no indication whether it's in my build of gdb (which seems unlikely).
I had the same problem. From what I can tell from googling, it is a current limitation of gdb: interrupt simply doesn't work in batch mode (when specifying commands with --ex, or -x file, or on stdin, or sourcing from a file); it runs the following commands before actually stopping the execution (inserting a delay doesn't help). Building on @dwjbosman's solution, here's a compact version suitable for feeding to gdb with --ex arguments, for example:
python import threading, gdb
python threading.Timer(1.0, lambda: gdb.post_event(lambda: gdb.execute("interrupt"))).start()
cont
thread apply all bt full # or whatever you wanted to do
It schedules an interrupt after 1 second and resumes the program; then you can do whatever you want after the pause, right in the main script.
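A full invocation might look something like this (assuming you're attaching to an already-running process whose pid is in $PID; adapt to taste):
gdb --batch -p "$PID" \
    --ex 'python import threading, gdb' \
    --ex 'python threading.Timer(1.0, lambda: gdb.post_event(lambda: gdb.execute("interrupt"))).start()' \
    --ex 'cont' \
    --ex 'thread apply all bt full'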
I had the same problem, but found that none of the other answers here really work if you are trying to script everything from Python. The issue I ran into was that when I called gdb.execute('continue'), no code in any other Python thread would execute. This appears to be because gdb does not release the Python GIL while the continue command is waiting for the program to be interrupted.
What I found that actually worked for me was this:
import time
import gdb

def delayed_interrupt():
    time.sleep(1)
    gdb.execute('interrupt')

gdb.post_event(delayed_interrupt)
gdb.execute('continue')
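Presumably this works because gdb.post_event queues the callable on gdb's own event loop rather than running it in the calling thread, so it still gets executed while gdb.execute('continue') is blocked holding the GIL.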
I just ran into this same issue while writing some automated testing scripts. What I've noticed is that the interrupt command doesn't stop the application until after the current script has exited.
Unfortunately, this means you would need to split your scripts at any point where you cause an interrupt.
Script 1:
gdb.execute('c&')
gdb.execute('interrupt')
Script 2:
gdb.execute("dump binary memory montiormem.bin 0x0 (&__etext + 4)")
I used multithreading to get around this issue:
import threading
import time
import gdb

def post(cmd):
    def _callable():
        print("exec " + cmd, flush=True)
        gdb.execute(cmd)
    print("schedule " + cmd, flush=True)
    gdb.post_event(_callable)

class ScriptThread(threading.Thread):
    def run(self):
        while True:
            post("echo hello\n")
            time.sleep(1)

x = ScriptThread()
x.start()
Save this as "test_script.py"
Use the script as follows:
gdb
> source test_script.py
Note that you can also pipe "source test_script.py" into gdb, but you need to keep the pipe open.
Once the thread is started, GDB will wait for the thread to end and will process any commands you send to it via the post_event function. Even "interrupt"!
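For instance, once the inferior has been resumed with c&, the same helper can pause it again from the script thread (hypothetical usage of the post() helper above):
post("interrupt")  # queued via post_event, executed on gdb's main loop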
I'm trying to build a todo manager in Python where I want to continuously run a process in the background that will alert the user with a popup when the specified time comes. I'm wondering how I can achieve that.
I've looked at some of the answers on Stack Overflow and on other sites, but none of them really helped.
So, what I want to achieve is to start a background process once the user enters a task and keep it running until its time comes. At the same time there might be other threads running for other tasks as well, which will end at their own end times.
So far, I've tried this:
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
t.join()
With this, the thread runs continuously, but it runs in the foreground and only exits when the execution is done.
If I add t.daemon = True in the above code, the main thread exits immediately after start() and it looks like the daemon thread is killed along with it.
Please let me know how this can be solved.
I'm guessing that you just don't want to see the terminal window after you launch the script. In that case, it's a matter of how you execute the script.
Try these things.
If you are using a Windows computer, you can try using pythonw.exe:
pythonw.exe example_script.py
If you are using Linux (maybe OS X), you may want to use nohup in the terminal:
nohup python example_script.py
More or less, the reason you have to do this comes down to how the operating system handles processes. I am not an expert on this subject, but generally, if you launch a script from a terminal, that script becomes a child process of the terminal. So if you exit that terminal, it will also terminate any child processes. The only way to get around that is to detach the process from the terminal with something like nohup.
Now, if you add the #!/usr/bin/env python shebang line, your OS could possibly run the script without a terminal window when you double-click it. YMMV (again, this depends on how your OS works).
The first thing you need to do is prevent your script from exiting by adding a while loop in the main thread:
import time
from threading import Thread
# bg_runner, task and lock_file_path are assumed to be defined as in the question
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
# no join() here; the loop below keeps the main thread alive instead of blocking on the worker

while True:
    time.sleep(1.0)
Then you need to put it in the background:
$ nohup python alert_popup.py >> /dev/null 2>&1 &
You can get more information on controlling a background process at this answer.
When debugging scripts in Python (2.7, running on Linux) I occasionally inject pdb.set_trace() (note that I'm actually using ipdb), e.g.:
import ipdb as pdb

try:
    do_something()
    # I'd like to look at some local variables before running do_something_dangerous()
    pdb.set_trace()
except:
    pass

do_something_dangerous()
I typically run my script from the shell, e.g.
python my_script.py
Sometimes during my debugging session I realize that I don't want to run do_something_dangerous(). What's the easiest way to halt program execution so that do_something_dangerous() is not run and I can quit back to the shell?
As I understand it, pressing Ctrl-D (or issuing the debugger's quit command) will simply exit ipdb and the program will continue running (in my example above). Pressing Ctrl-C seems to raise a KeyboardInterrupt, but I've never understood the context in which it is raised.
I'm hoping for something like Ctrl-Q to simply take down the entire process, but I haven't been able to find anything.
I understand that my example is highly contrived, but my question is about how to abort execution from pdb when the code being debugged is set up to catch exceptions. It's not about how to restructure the above code so it works!
I found that Ctrl-Z to suspend the python/ipdb process, followed by kill %1 to terminate it, works well and is reasonably quick to type (I have a bash alias k='kill %1'). I'm not sure there's anything cleaner/simpler though.
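Another option I believe works here (not part of the original answer): call os._exit() directly at the ipdb prompt. Unlike sys.exit(), it terminates the process immediately without raising an exception, so the surrounding bare except can't swallow it:
ipdb> import os; os._exit(1)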
From the module docs:
q(uit)
Quit from the debugger. The program being executed is aborted.
Specifically, this will cause the next debugger function that gets called to raise a BdbQuit exception.
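That last line is the key to the question above: BdbQuit is an ordinary exception, so the bare except: around set_trace() swallows it and execution falls through to do_something_dangerous() anyway. One fix is to catch only the exceptions you actually expect (ValueError here is just an illustrative stand-in):
try:
    do_something()
    pdb.set_trace()
except ValueError:  # catch something specific instead of a bare except
    pass
do_something_dangerous()
With a narrow except clause, q raises BdbQuit, which propagates out and aborts the program before the dangerous call runs.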