Stop KeyboardInterrupt reaching batch file processor

I'm writing some code in Python 3 on Windows that looks like this:
try:
    # do something that takes a long time
    # (training a neural network in TensorFlow, as it happens)
except KeyboardInterrupt:
    print('^C')
    # print a summary of results,
    # still useful even if the training was cut short early
This works perfectly if run directly from the console with python foo.py.
However, if Python is invoked from within a batch file, it does all of the above but then still spams the console with the 'Terminate batch job (Y/N)?' prompt.
Is there a way to stop that from happening, whether by fully eating the ^C within Python, jumping all the way out of the batch file, or otherwise?

Use the break command in the batch file, which disables Ctrl+C halting the file.
EDIT: According to the documentation of the break command:
Newer versions of Windows (Windows ME, Windows 2000, Windows XP, and higher) only include this command for backward compatibility, and turning the break off has no effect.
I personally tested this and can confirm; I will edit when I find a workaround.
EDIT #2: You could have a second batch script that runs start "" /b /wait cmd /c "yourfile.bat", although this is known to cause glitches with other nested batch files.
Note that the flag that disables Ctrl+C is inherited by child processes, so Python will no longer raise KeyboardInterrupt. Python also still has bugs when a console read is interrupted by Ctrl+C without the CRT getting a SIGINT. The Python script should therefore manually re-enable Ctrl+C via ctypes, as shown below.
EDIT #3: As pointed out by Eryksyn (in the comments), you can use ctypes to ENABLE it:

import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
# a NULL handler with Add=False restores normal Ctrl+C processing
success = kernel32.SetConsoleCtrlHandler(None, False)
EDIT #4: I think I found it; try this (although it may not work). Can you use the threading module?

import time
from threading import Thread

def noInterrupt():
    for i in range(4):
        print(i)
        time.sleep(1)

a = Thread(target=noInterrupt)
a.start()
a.join()
print("done")

Related

How to restart a Python script?

In a program I am writing in Python, I need to completely restart the program if a variable becomes true. After looking for a while, I found this call:

while True:
    if reboot == True:
        os.execv(sys.argv[0], sys.argv)

When executed, it returns the error [Errno 8] Exec format error. I searched for further documentation on os.execv but didn't find anything relevant, so my question is whether anyone knows what I did wrong or knows a better way to restart a script (by restarting I mean completely re-running the script, as if it were opened for the first time, with all variables unassigned and no threads running).
There are multiple ways to achieve the same thing. Start by modifying the program to exit whenever the flag turns True. Then there are various options, each one with its advantages and disadvantages.
Wrap it using a bash script.
The script should handle exits and restart your program. A really basic version could be:
#!/bin/bash
while :
do
    python program.py
    sleep 1
done
Start the program as a sub-process of another program.
Start by wrapping your program's code in a function. Then your __main__ could look like this:

from multiprocessing import Process

def program():
    # Here is the code of your program
    ...

if __name__ == '__main__':
    # the guard is required when multiprocessing uses the spawn start method (e.g. on Windows)
    while True:
        process = Process(target=program)
        process.start()
        process.join()
        print("Restarting...")
This code is relatively basic; error handling still needs to be implemented.
Use a process manager
There are a lot of tools available that can monitor the process, run multiple processes in parallel and automatically restart stopped processes. It's worth having a look at PM2 or similar.
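For illustration, starting a Python script under PM2 could look roughly like this (a sketch; my-app is a placeholder name, and the available flags may vary between pm2 versions):

pm2 start program.py --interpreter python3 --name my-app
pm2 logs my-app      # follow the app's output
pm2 restart my-app   # manual restart; crashed processes are restarted automatically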
IMHO the third option (a process manager) is the safest approach; the other approaches have edge cases that you would have to handle yourself.
This has worked for me. Add the shebang at the top of your code and call os.execv() as shown below:
#!/usr/bin/env python3
import os
import sys

if __name__ == '__main__':
    while True:
        reboot = input('Enter:')
        if reboot == '1':
            sys.stdout.flush()
            os.execv(sys.executable, [sys.executable, __file__] + [sys.argv[0]])
        else:
            print('OLD')
I got the same "Exec format error", and I believe it is basically the same error you get when you simply type a Python script name at the command prompt and expect it to execute. On Linux it won't work because a path is required, and the execv method is encountering the same problem.
You could add the pathname of your Python interpreter, and that error goes away, except that the name of your script then becomes a parameter and must be added to the argv list. To avoid that, make your script independently executable by adding "#!/usr/bin/python3" to the top of the script AND running chmod 755 on it.
This works for me:
#!/usr/bin/python3
# this script is called foo.py
import os
import sys
import time

if len(sys.argv) >= 2:
    Arg1 = int(sys.argv[1])
else:
    sys.argv.append(None)  # placeholder; replaced with a string below before execv
    Arg1 = 1

print(f"Arg1: {Arg1}")
sys.argv[1] = str(Arg1 + 1)
time.sleep(3)
os.execv("./foo.py", sys.argv)
Output:
Arg1: 1
Arg1: 2
Arg1: 3
...

Is there any library on Python 3 to relaunch the script?

I have a script in Python that does some work. I want to re-run this script automatically, and also relaunch it after any crashes/freezes.
I can do something like this:
while True:
    try:
        main()
    except Exception:
        os.execv(sys.executable, ['python'] + sys.argv)
But, for some unknown reason, this still crashes or freezes once every few days. When I see the crash, I type "python main.py" in cmd and it starts, so I don't know why os.execv doesn't do this work by itself. I guess it's because this code is part of the app itself. So I would prefer a script/app that controls the relaunch externally; I hope that will be more stable.
So this script should work in this way:
Start any script.
Check that the process of this script is working, for example by checking some file's modification time, and track it by process name/ID/etc.
When it disappears from the process list, launch it again.
When the file was last changed more than 5 minutes ago, stop the process, wait a few seconds, and launch it again.
In general: be cross-platform (Linux/Windows).
Not important: log all crashes.
I can do this myself (I'm working on it right now), but I'm pretty sure something like this must already have been done by somebody; I just can't find it on Google/GitHub.
UPDATE: I added the code from hansaplast's answer to GitHub, with some changes: relauncher. Feel free to copy/use it.
As it needs to work both on Windows and on Linux, and I don't know a way to do that with standard tools, here's a DIY solution:
from subprocess import Popen
import os
import time

# change into the script's directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)

while True:
    p = Popen(['python', 'my_script.py', 'arg1', 'arg2'])
    time.sleep(20)  # give the program some time to write into the logfile
    while True:
        if p.poll() is not None:
            print('crashed or regularly terminated')
            break
        file_age_in_s = time.time() - os.path.getmtime('output.log')
        if file_age_in_s > 60:
            print('frozen, killing process')
            p.kill()
            break
        time.sleep(1)
    print('restarting..')
Explanation:
time.sleep(20): gives the script 20 seconds to write into the log file
poll(): regularly checks if the script died (either crashed or terminated normally; you can check the return value of poll() to differentiate)
getmtime(): regularly checks whether output.log was changed within the past 60 seconds
time.sleep(1): waits 1 s between checks, as otherwise the loop would eat up too many system resources
The script assumes that the check-script and the run-script are in the same directory. If that is not the case, change the lines beneath "change into the script's directory".
I personally like the supervisor daemon, but it has two issues here:
It is only for Unix systems.
It restarts the app only on crashes, not on freezes.
But it has a simple XML-RPC API, which makes the job of writing a freeze-watchdog app simpler. You could just start your process under supervisor and restart it via the supervisor API when you see it freeze (see the sketch after the config below).
You could install it via apt install supervisor on Ubuntu and write a config like this:
[program:main]
user=vladimir
command=python3 /var/local/main/main.py
process_name=%(program_name)s
directory=/var/local/main
autostart=true
autorestart=true
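For the freeze-watchdog part, here is a minimal sketch (mine, not from the supervisor docs) that talks to supervisord over XML-RPC with the standard xmlrpc.client module; it assumes the [inet_http_server] section is enabled on port 9001, that the program name 'main' matches the config above, and that the healthy program touches /var/local/main/output.log regularly:

from xmlrpc.client import ServerProxy
import os
import time

server = ServerProxy('http://localhost:9001/RPC2')
while True:
    time.sleep(5)
    # a log file untouched for 5+ minutes counts as a freeze
    age = time.time() - os.path.getmtime('/var/local/main/output.log')
    if age > 300:
        server.supervisor.stopProcess('main')
        server.supervisor.startProcess('main')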

python2.7 using debug behaves differently than without debug

I have a bug in my program and want to check it out using the debugger. In my IDE (WingIDE) I have debug functionality, but I cannot use it when calling the program from the shell. So I use the Python module pdb. My application is single-threaded.
I have looked into Code is behaving differently in Release vs Debug Mode, but that seems to be something different from my issue.
I narrowed it down to the following code.
What I did:
I created a short method that will only be called when not running in the IDE.
def set_pdb_trace():
    run_in_ide = not sys.stdin.isatty()
    if not run_in_ide:
        import pdb; pdb.set_trace()  # use only in python interpreter
This works fine; I have used it in many situations.
I want to debug the following method:
import sys
import os
import subprocess32

def call_backported():
    command = 'lsb_release -r'
    timeout1 = 0.001  # make the value too short, so the timeout is enforced
    try:
        p = subprocess32.Popen(command, shell=True,
                               stdout=subprocess32.PIPE,
                               stderr=subprocess32.STDOUT)
        set_pdb_trace()
        tuple1 = p.communicate(input=b'exit %errorlevel%\r\n', timeout=timeout1)
        print('No time out')
        value = tuple1[0].decode('utf-8').strip()
        print('Value : ' + value)
    except subprocess32.TimeoutExpired, e:
        print('TimeoutExpired')
Explanation:
I want to call a subprocess with a timeout. For Python 3.3+ this is built in, but my application has to be able to run under Python 2.7 as well, so I used https://pypi.python.org/pypi/subprocess32/3.2.6 as a backport.
To read the returned value I used How to retrieve useful result from subprocess?
With a long enough timeout, e.g. 1 second, the method works as expected: the result value and 'No time out' are printed.
I want to enforce a timeout, so I set the timeout to a very short 0.001 s. Now only 'TimeoutExpired' should be printed.
I want to execute this in the shell.
When I first comment out the set_pdb_trace() line, 'TimeoutExpired' is printed, so that is the expected behaviour.
Now I uncomment set_pdb_trace() and execute in the shell.
The debugger comes up, I press 'c' (continue), and 'No time out' with the result is printed. This result is different than without the debugger. The generated output is:
bernard@bernard-vbox2:~/clones/it-should-work/unit_test$ python test_subprocess32.py
--Return--
> /home/bernard/clones/it-should-work/unit_test/test_subprocess32.py(22)set_pdb_trace()->None
-> import pdb; pdb.set_trace() # use only in python interpreter
(Pdb) c
No time out
Value : Release: 13.10
bernard@bernard-vbox2:~/clones/it-should-work/unit_test$
How is this possible? And how do I solve it?
You introduced a delay between opening the subprocess and writing to it.
When you create the Popen() object, the child process is started immediately. When you then call p.communicate() and try to write to it, the process is not quite ready yet to receive input, and that delay, together with the time it takes to read the process output, is longer than your 0.001 timeout.
When you insert the breakpoint, the process gets a chance to spin up; the lsb_release command doesn't wait for input and produces its output immediately. By the time p.communicate() is called there is no need to wait for the pipe anymore and the output is produced immediately.
If you put your breakpoint before the Popen() call, then hit c, you'll see the timeout trigger again.
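The effect is easy to reproduce without pdb. Here is a minimal sketch (mine, using Python 3's standard subprocess module, which has the same timeout API as subprocess32, and assuming a Linux box with lsb_release): the sleep stands in for the pause at the pdb prompt, giving the child time to finish, so the tiny timeout never triggers.

import subprocess
import time

p = subprocess.Popen('lsb_release -r', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
time.sleep(1)  # stand-in for sitting at the pdb prompt
try:
    out, _ = p.communicate(timeout=0.001)
    print('No time out:', out.decode('utf-8').strip())
except subprocess.TimeoutExpired:
    print('TimeoutExpired')  # the likely outcome if the sleep is removed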

why process doesn't join and doesn't run?

I have a simple problem to solve (more or less).
If I watch Python multiprocessing tutorials, I see that a process should be started more or less like this:
from multiprocessing import *

def u(m):
    print(m)
    return

A = Process(target=u, args=(0,))
A.start()
A.join()
It should print a 0 but nothing gets printed. Instead it hangs forever at the A.join().
If I manually run the function u by doing this:
A.run()
it actually prints 0 in the shell, but it doesn't run concurrently.
For example, the output of the following code:
from multiprocessing import *
from time import sleep

def u(m):
    sleep(1)
    print(m)
    return

A = Process(target=u, args=(1,))
A.start()
print(0)
should be
0
1
but actually is
0
and if I add before the last line
A.run()
then the output becomes
1
0
This seems confusing to me...
And if I try to join the process, it waits forever.
However, if it helps in giving me an answer:
my OS is Mac OS X 10.6.8
the Python versions used are 3.1 and 3.3
my computer has one Intel Core i3 processor
--Update--
I have noticed that this strange behaviour is present only when launching the program from IDLE. If I run the program from the terminal, everything works as it is supposed to, so this problem must be connected to some IDLE bug.
But running programs from the terminal is even weirder: using something like range(100000000) consumes all my computer's RAM until the end of the program; if I remember correctly, this shouldn't happen in Python 3, only in older Python versions.
I hope this new information will help you give an answer.
--Update 2--
The bug occurs even if I don't produce output from my process: setting this

def u():
    return

as the target of the process and then starting it, if I try to join the process, IDLE waits forever.
As suggested here and here, the problem is that IDLE overrides sys.stdin and sys.stdout in some weird ways, which do not propagate cleanly to processes you spawn from it (they are not real filehandles).
The first link also indicates it's unlikely to be fixed any time soon ("may be a 'cannot fix' issue", they say).
So unfortunately the only solution I can suggest is not to use IDLE for this script...
Have you tried adding A.join() to your program? I am guessing that your main process is exiting before the child process prints which is causing the output to be hidden. If you tell the main process to wait for the child process (A.join()), I bet you'll see the output you expect.
Given that it only happens with IDLE, I suspect the problem has to do with the stdout used by both processes. Perhaps it's some file-like object that's not safe to use from two different processes.
If you don't have the child process write to stdout, I suspect it will complete and join properly. For example, you could have it write to a file, instead. Or you could set up a pipe between the parent and child.
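For example, a minimal sketch of the pipe idea (my own illustration, reusing the question's u function): the child sends its message back over a multiprocessing.Pipe and the parent prints it, so the child never touches IDLE's non-standard stdout.

from multiprocessing import Process, Pipe

def u(conn, m):
    conn.send(m)   # write to the pipe instead of stdout
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    A = Process(target=u, args=(child_conn, 0))
    A.start()
    print(parent_conn.recv())  # the parent does the printing
    A.join()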
Have you tried unbuffered output? Try importing the sys module and changing the print call to write to stderr:
print(m, file=sys.stderr)
How does this affect the behavior? I'm with the others in suspecting that IDLE is mucking with the stdio...

Stopping python using ctrl+c

I have a Python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
On Windows, the only sure way is to use Ctrl+Break. It stops every Python script instantly!
(Note that on some keyboards, "Break" is labeled as "Pause".)
Pressing Ctrl + c while a python program is running will cause python to raise a KeyboardInterrupt exception. It's likely that a program that makes lots of HTTP requests will have lots of exception handling code. If the except part of the try-except block doesn't specify which exceptions it should catch, it will catch all exceptions including the KeyboardInterrupt that you just caused. A properly coded python program will make use of the python exception hierarchy and only catch exceptions that are derived from Exception.
# This is the wrong way to do things
try:
    ...  # some stuff that might raise an IO exception
except:
    pass  # catches everything, including KeyboardInterrupt

# This is the right way to do things
try:
    ...  # some stuff that might raise an IO exception
except Exception:
    pass  # this won't catch KeyboardInterrupt
If you can't change the code (or need to kill the program so that your changes will take effect) then you can try pressing Ctrl + c rapidly. The first of the KeyboardInterrupt exceptions will knock your program out of the try block and hopefully one of the later KeyboardInterrupt exceptions will be raised when the program is outside of a try block.
If it is running in the Python shell use Ctrl + Z, otherwise locate the python process and kill it.
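For the second case, a quick illustration from a POSIX shell (my_script.py is a placeholder name):

$ pgrep -f my_script.py     # find the PID of the running script
$ kill <PID>                # send SIGTERM; or in one step: pkill -f my_script.py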
The interrupt process is hardware and OS dependent. So you will have very different behavior depending on where you run your python script. For example, on Windows machines we have Ctrl+C (SIGINT) and Ctrl+Break (SIGBREAK).
So while SIGINT is present on all systems and can be handled and caught, the SIGBREAK signal is Windows specific (and can be disabled in CONFIG.SYS) and is really handled by the BIOS as an interrupt vector INT 1Bh, which is why this key is much more powerful than any other. So if you're using some *nix flavored OS, you will get different results depending on the implementation, since that signal is not present there, but others are. In Linux you can check what signals are available to you by:
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGEMT 8) SIGFPE 9) SIGKILL 10) SIGBUS
11) SIGSEGV 12) SIGSYS 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGURG 17) SIGSTOP 18) SIGTSTP 19) SIGCONT 20) SIGCHLD
21) SIGTTIN 22) SIGTTOU 23) SIGIO 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGPWR 30) SIGUSR1
31) SIGUSR2 32) SIGRTMAX
So if you want to catch CTRL+BREAK on a Linux system, you'll have to check which POSIX signal is mapped to that key. Popular mappings are:
CTRL+\ = SIGQUIT
CTRL+D = SIGQUIT
CTRL+C = SIGINT
CTRL+Z = SIGTSTP
CTRL+BREAK = SIGKILL or SIGTERM or SIGSTOP
In fact, many more functions are available under Linux, where the SysRq (System Request) key can take on a life of its own...
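For completeness, a small sketch (my own, not from this answer) showing how the standard signal module can install handlers for whichever of these signals exist on the current platform:

import signal
import sys

def bail(signum, frame):
    print('caught signal', signum)
    sys.exit(1)

signal.signal(signal.SIGINT, bail)        # Ctrl+C everywhere
if sys.platform == 'win32':
    signal.signal(signal.SIGBREAK, bail)  # Ctrl+Break, Windows only
else:
    signal.signal(signal.SIGQUIT, bail)   # usually Ctrl+\ on POSIX terminals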
Ctrl+C Difference for Windows and Linux
It turns out that as of Python 3.6, the Python interpreter handles Ctrl+C differently on Linux and Windows. On Linux, Ctrl+C works mostly as expected; on Windows, however, Ctrl+C mostly doesn't work, especially if Python is running a blocking call such as thread.join or waiting on a web response. It does work for time.sleep, however. Here's a nice explanation of what is going on in the Python interpreter. Note that Ctrl+C generates SIGINT.
Solution 1: Use Ctrl+Break or Equivalent
Use the keyboard shortcuts below in the terminal/console window; they generate SIGBREAK at a lower level in the OS and terminate the Python interpreter.
Mac OS and Linux
Ctrl+Shift+\ or Ctrl+\
Windows:
General: Ctrl+Break
Dell: Ctrl+Fn+F6 or Ctrl+Fn+S
Lenovo: Ctrl+Fn+F11 or Ctrl+Fn+B
HP: Ctrl+Fn+Shift
Samsung: Fn+Esc
Solution 2: Use Windows API
Below are handy functions that detect Windows and install a custom handler for Ctrl+C in the console:
# win_ctrl_c.py
import sys

def handler(a, b=None):
    sys.exit(1)

def install_handler():
    if sys.platform == "win32":
        import win32api
        win32api.SetConsoleCtrlHandler(handler, True)
You can use the above like this:

import threading
import time
import win_ctrl_c

# do something that will block
def work():
    time.sleep(10000)

t = threading.Thread(target=work)
t.daemon = True
t.start()

# install the handler
win_ctrl_c.install_handler()

# now block
t.join()
# Ctrl+C works now!
Solution 3: Polling method
I don't prefer or recommend this method, because it unnecessarily consumes processor time and power, negatively impacting performance.
import threading
import time

def work():
    time.sleep(10000)

t = threading.Thread(target=work)
t.daemon = True
t.start()

while True:
    t.join(0.1)  # 100 ms ~ typical human response time
    # you will get a KeyboardInterrupt exception here on Ctrl+C
This post is old but I recently ran into the same problem of Ctrl+C not terminating Python scripts on Linux. I used Ctrl+\ (SIGQUIT).
On Mac press Ctrl+\ to quit a python process attached to a terminal.
Catch the KeyboardInterrupt (which is raised by pressing Ctrl+C) and force the exit:
from sys import exit

try:
    # Your code
    command = input('Type your command: ')
except KeyboardInterrupt:
    # the user interrupted the program with Ctrl+C
    exit()
On a Mac / in Terminal:
Show Inspector (right-click within the terminal window, or Shell > Show Inspector)
Click the Settings icon above "running processes"
Choose from the list of options under "Signal Process Group" (Kill, Terminate, Interrupt, etc.)
Forcing the program to close using Alt+F4 (shuts down the current program)
Spamming the X button on CMD, for example
Task Manager (first Windows+R and then "taskmgr"), then ending the task
Those may help.
You can open your Task Manager (Ctrl+Alt+Delete, then go to Task Manager) and look through it for python; the server is called (in this example) _go_app (the naming convention is: _language_app).
If I end the _go_app task, it ends the server, so going there in the browser will say it "unexpectedly ended". I also use git bash, and when I start a server, I cannot break out of the server in bash's shell with Ctrl+C or Ctrl+Pause, but once I end the python task (the one using 63.7 MB), it breaks out of the server script in bash and lets me use the git bash shell again.
For the record, what killed the process on my Raspberry Pi 3B+ (running Raspbian) was Ctrl+'. On my French AZERTY keyboard, the ' key is also the 4 key.
