Python 3: Catch SIGTTOU from subprocess and print warning

Hello, I have a problem with a small debug GUI I have written (using PySimpleGUI). Part of the GUI is the capability to call a local Linux shell command.
One of the shell commands/programs I want to execute triggers a SIGTTOU signal if I start the GUI in the background (with &), which freezes the GUI until I bring it to the foreground with 'fg'.
Since it's fairly normal to start the GUI in the background, I just want to catch the SIGTTOU signal, print a warning and continue.
The following code snippet kind of works, but it leaves the shell commands as zombies (and I get no return value from them). What works even better is signal.signal(signal.SIGTTOU, signal.SIG_IGN), but I really want to print a warning. Is that possible? And what does signal.SIG_IGN do that removes the zombies?
cmd_output = collections.deque()
...
def _handle_sigttou(signum, frame):
    sys.__stdout__.write('WARNING: <SIGTTOU> received\n')

def _run_shell_command(cmd, timeout=None):
    # run command
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    # process output
    for line in p.stdout:
        line = line.decode(errors='backslashreplace').rstrip()
        cmd_output.appendleft(f'{line}')
    # wait for return code
    retval = p.wait(timeout)
    # print result
    cmd_output.appendleft(f'✘ ({retval})' if retval else '✅')

# send command
signal.signal(signal.SIGTTOU, _handle_sigttou)
# signal.signal(signal.SIGTTOU, signal.SIG_IGN)  # this works without zombies, but I want to print a warning
threading.Thread(target=_run_shell_command, args=(_line, 60), daemon=True).start()

Use window.write_event_value(event, value) to send an event to your window, then call sg.popup to show the value when that event arrives in your event loop. Don't try to update the GUI from your thread.
...
def _handle_sigttou(signum, frame):
    window.write_event_value("<SIGTTOU>", ("WARNING", "<SIGTTOU> received"))
...
while True:
    event, values = window.read()
    if event == sg.WINDOW_CLOSED:
        break
    elif event == "<SIGTTOU>":
        title, message = values[event]
        sg.popup(message, title=title, auto_close=True, auto_close_duration=2)

window.close()

Related

Is there a way to never exit on KeyboardInterrupt in python?

I'm creating sort of an interactive command line in Python. I have something like this:
def complete_menu():
    while True:
        cmd = input('cmd> ')
        if cmd == "help":
            print('help')
        elif cmd == "?":
            print('?')
When the user presses CTRL-C, instead of exiting the program I'm trying to make it so that it prints "please type exit to exit" and goes back to the while True. I have something like this at the moment:
if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print('Please use exit to exit')
        complete_menu()
Although this works, there are a number of issues. For one, when CTRL-C is pressed the first time, it prints out the text and works perfectly. However, the second time the user presses CTRL-C, it exits with a messy traceback like any other program interrupted by CTRL-C. Can this be fixed?
The better way to do this is to register a signal handler:
import signal

def handler(signum, frame):
    print("Please use exit to exit")
    # or: just call sys.exit("goodbye")

...

def main():
    signal.signal(signal.SIGINT, handler)  # prevent "crashing" with Ctrl+C
    ...

if __name__ == "__main__":
    main()
Now when a Ctrl+C is received in your code, instead of a KeyboardInterrupt exception being raised, the function handler will be executed. This is a basic example, customize the code within handler to do what you want.
Note: My recommendation is to actually let the user exit with Ctrl+C, i.e. execute any cleanup code that you might need to run and then call sys.exit here. Programs that require a stronger signal to kill are annoying.
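For example, a minimal sketch of that recommendation (the cleanup() helper here is hypothetical, standing in for whatever the program needs to do before exiting):
import signal
import sys

def cleanup():
    # hypothetical placeholder: close files, flush buffers, etc.
    pass

def handler(signum, frame):
    cleanup()              # run whatever cleanup the program needs
    sys.exit("goodbye")    # then exit; the message goes to stderr

signal.signal(signal.SIGINT, handler)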

SIGTERM signal not received by python on shutdown command

I have been programming in Python for the Raspberry Pi for several months now, and I am trying to make my scripts "well behaved" and wrap up (close files and make sure no writes to the SD card are being performed) upon reception of SIGTERM.
Following advice on SO (1, 2) I am able to handle SIGTERM if I kill the process manually (i.e. kill {process number}) but if I send the shutdown command (i.e. shutdown -t 30 now) my handler never gets called.
I also tried registering for all signals and checking which signal is being sent on the shutdown event, but I am not getting any.
Here's simple example code:
import time
import signal
import sys

def myHandler(signum, frame):
    print "Signal #, ", signum
    sys.exit()

for i in [x for x in dir(signal) if x.startswith("SIG")]:
    try:
        signum = getattr(signal, i)
        signal.signal(signum, myHandler)
        print "Handler added for {}".format(i)
    except RuntimeError, m:
        print "Skipping %s" % i
    except ValueError:
        break

while True:
    print "goo"
    time.sleep(1)
Any ideas will be greatly appreciated .. =)
This code works for me on the Raspberry Pi; I can see the correct output in the file output.log after the restart:
import logging
import signal
import subprocess
import sys

logging.basicConfig(level=logging.WARNING,
                    filename='output.log',
                    format='%(message)s')

def quit():
    # cleaning code here
    logging.warning('exit')
    sys.exit(0)

def handler(signum=None, frame=None):
    quit()

# note: SIGKILL cannot be caught, so it is not registered here
for sig in [signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

def restart():
    command = '/sbin/shutdown -r now'
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    output = process.communicate()[0]
    logging.warning('%s' % output)

restart()
Maybe your terminal handles the signal before the Python script does, so you can't actually see anything. Try writing the output to a file instead (with the logging module, or however you like).
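A minimal sketch of that idea, writing the handler's output to a file with the logging module (the file name and format are just examples):
import logging
import signal
import sys

# log to a file so the message survives even if the terminal is gone
logging.basicConfig(filename='shutdown.log', level=logging.INFO,
                    format='%(asctime)s %(message)s')

def handler(signum, frame):
    logging.info('received signal %s, cleaning up', signum)
    sys.exit(0)

signal.signal(signal.SIGTERM, handler)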

Window freezes after clicking of button in python GTK3

Hello, I have a command which runs for about 30 minutes on average. When I click on a button created with GTK3, Python starts executing the command, but my whole application freezes. My Python code for the button click is:
def on_next2_clicked(self, button):
    cmd = "My Command"
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.read(2)
        if not line:
            break
        self.fper = float(line)/100.0
        self.ui.progressbar1.set_fraction(self.fper)
    print "Done"
I also have to feed the output of the command into a progress bar in my window. Can anyone help me solve my problem? I also tried threading in Python, but that turned out to be useless as well...
Run a main loop iteration from within your loop:
def on_next2_clicked(self, button):
    cmd = "My Command"
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.read(2)
        if not line:
            break
        self.fper = float(line)/100.0
        self.ui.progressbar1.set_fraction(self.fper)
        while Gtk.events_pending():
            Gtk.main_iteration()  # runs the GTK main loop as needed
    print "Done"
You are busy-waiting, not letting the UI main event loop run. Put the loop in a separate thread so the main thread can continue its own event loop.
Edit: Adding example code
import threading

def on_next2_clicked(self, button):
    def my_thread(obj):
        cmd = "My Command"
        proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
        while True:
            line = proc.stdout.read(2)
            if not line:
                break
            obj.fper = float(line)/100.0
            obj.ui.progressbar1.set_fraction(obj.fper)
        print "Done"
    threading.Thread(target=my_thread, args=(self,)).start()
The above modification to your function will start a new thread that will run in parallel with your main thread. It will let the main event loop continue while the new thread does the busy waiting.
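One caveat not covered above: GTK widgets are generally not safe to touch from a worker thread, so a common variant is to hand the progress-bar update back to the main loop with GLib.idle_add. A minimal sketch of that variant, assuming PyGObject:
import subprocess
from gi.repository import GLib

def my_thread(obj):
    cmd = "My Command"
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.read(2)
        if not line:
            break
        fper = float(line) / 100.0
        # schedule the widget update on the GTK main loop instead of
        # calling set_fraction directly from this worker thread
        GLib.idle_add(obj.ui.progressbar1.set_fraction, fper)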

wxPython, capturing an output from subprocess in real-time

I'm working on an application in wxPython which is a GUI for a command-line utility. In the GUI there is a text control which should display the output from the application. I'm launching the shell command using subprocess, but I don't get any output from it until it has completed.
I have tried several solutions but none of them seems to work. Below is the code I'm using at the moment (updated):
def onOk(self, event):
    self.getControl('infotxt').Clear()
    try:
        thread = threading.Thread(target=self.run)
        thread.setDaemon(True)
        thread.start()
    except Exception:
        print 'Error starting thread'

def run(self):
    args = dict()
    # creating a command to execute...
    cmd = ["aplcorr", "-vvfile", args['vvfile'], "-navfile", args['navfile'], "-lev1file", args['lev1file'], "-dem", args['dem'], "-igmfile", args['outfile']]
    proc = subprocess.Popen(' '.join(cmd), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print
    while True:
        line = proc.stdout.readline()
        wx.Yield()
        if line.strip() == "":
            pass
        else:
            print line.strip()
        if not line: break
    proc.wait()

class RedirectInfoText:
    """ Class to redirect stdout text """
    def __init__(self, wxTextCtrl):
        self.out = wxTextCtrl
    def write(self, string):
        self.out.WriteText(string)

class RedirectErrorText:
    """ Class to redirect stderr text """
    def __init__(self, wxTextCtrl):
        self.out = wxTextCtrl
        self.out.SetDefaultStyle(wx.TextAttr())
    def write(self, string):
        self.out.SetDefaultStyle(wx.TextAttr(wx.RED))
        self.out.WriteText(string)
In particular I'm going to need the output in real-time to create a progress-bar.
Edit: I changed my code, based on Mike Driscoll's suggestion. It seems to work sometimes, but most of the time I'm getting one of the following errors:
(python:7698): Gtk-CRITICAL **: gtk_text_layout_real_invalidate:
assertion `layout->wrap_loop_count == 0' failed
or
(python:7893): Gtk-WARNING **: Invalid text buffer iterator: either
the iterator is uninitialized, or the characters/pixbufs/widgets in
the buffer have been modified since the iterator was created. You must
use marks, character numbers, or line numbers to preserve a position
across buffer modifications. You can apply tags and insert marks
without invalidating your iterators, but any mutation that affects
'indexable' buffer contents (contents that can be referred to by
character offset) will invalidate all outstanding iterators
Segmentation fault (core dumped)
Any clues?
The problem is that you are trying to call wx.Yield and update the output widgets from the context of the thread running the process, instead of doing the updates from the GUI thread.
Since you are running the process from a thread there should be no need to call wx.Yield, because you are not blocking the GUI thread, and so any pending UI events should be processed normally anyway.
Take a look at the wx.PyOnDemandOutputWindow class for an example of how to handle prints or other output that originate from a non-GUI thread.
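As an illustration of updating the widget only from the GUI thread, here is a minimal sketch that pushes each line back with wx.CallAfter instead of calling wx.Yield; the placeholder command string and the 'infotxt' control name are taken from the question's code, not from a real API:
import subprocess
import wx

def run(self):
    cmd = "your-command-here"   # placeholder: build the command as in the question
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in proc.stdout:
        # hand the text back to the GUI thread; never write to the
        # text control directly from this worker thread
        wx.CallAfter(self.getControl('infotxt').AppendText,
                     line.decode('utf-8', 'replace'))
    proc.wait()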
This can be a little tricky, but I figured out one way to do it which I wrote about here: http://www.blog.pythonlibrary.org/2010/06/05/python-running-ping-traceroute-and-more/
After you have set up the redirection of the text, you just need to do something like this:
def pingIP(self, ip):
    proc = subprocess.Popen("ping %s" % ip, shell=True,
                            stdout=subprocess.PIPE)
    print
    while True:
        line = proc.stdout.readline()
        wx.Yield()
        if line.strip() == "":
            pass
        else:
            print line.strip()
        if not line: break
    proc.wait()
The article shows how to redirect the text too. Hopefully that will help!

Kill or terminate subprocess when timeout?

I would like to repeatedly execute a subprocess as fast as possible. However, sometimes the process will take too long, so I want to kill it.
I use signal.signal(...) like below:
ppid = pipeexe.pid
signal.signal(signal.SIGALRM, stop_handler)
signal.alarm(1)
.....
def stop_handler(signal, frame):
    print 'Stop test'+testdir+'for time out'
    if(pipeexe.poll()==None and hasattr(signal, "SIGKILL")):
        os.kill(ppid, signal.SIGKILL)
        return False
but sometimes this code stops the next round from executing:
Stop test/home/lu/workspace/152/treefit/test2for time out
/bin/sh: /home/lu/workspace/153/squib_driver: not found ---this is the next execution; the program wrongly stops it.
Does anyone know how to solve this? I want to stop the subprocess as soon as it times out, not always wait a full second: time.sleep(n) always waits n seconds, and I do not want that; I want the command to be able to finish in less than 1 second.
You could do something like this:
import subprocess as sub
import threading

class RunCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = sub.Popen(self.cmd)
        self.p.wait()

    def Run(self):
        self.start()
        self.join(self.timeout)

        if self.is_alive():
            self.p.terminate()  # use self.p.kill() if the process needs a kill -9
            self.join()

RunCmd(["./someProg", "arg1"], 60).Run()
The idea is that you create a thread that runs the command, and kill the command if the timeout exceeds some suitable value, in this case 60 seconds.
Here is something I wrote as a watchdog for subprocess execution. I use it now a lot, but I'm not so experienced so maybe there are some flaws in it:
import subprocess
import time

def subprocess_execute(command, time_out=60):
    """executing the command with a watchdog"""

    # launching the command
    c = subprocess.Popen(command)

    # now waiting for the command to complete
    t = 0
    while t < time_out and c.poll() is None:
        time.sleep(1)  # (comment 1)
        t += 1

    # there are two possibilities for the while to have stopped:
    if c.poll() is None:
        # in the case the process did not complete, we kill it
        c.terminate()
        # and fill the return code with some error value
        returncode = -1  # (comment 2)
    else:
        # in the case the process completed normally
        returncode = c.poll()

    return returncode
Usage:
returncode = subprocess_execute(['java', '-jar', 'some.jar'])
Comments:
1. Here, the watchdog timeout is in seconds, but it's easy to change it to whatever is needed by changing the time.sleep() value. The time_out parameter will have to be documented accordingly.
2. Depending on what is needed, it may be more suitable here to raise some exception.
Documentation: I struggled a bit with the documentation of the subprocess module before understanding that subprocess.Popen is not blocking; the process is executed in parallel (maybe that is not the right word, but I think it's understandable).
But as what I wrote runs linearly, I really do have to wait for the command to complete, with a timeout so that bugs in the command cannot stall the nightly execution of the script.
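As a side note, on Python 3.3+ the same watchdog behaviour is available without polling, because Popen.wait() accepts a timeout and raises subprocess.TimeoutExpired. A minimal sketch of that variant:
import subprocess

def subprocess_execute(command, time_out=60):
    """Run command, killing it if it runs longer than time_out seconds."""
    c = subprocess.Popen(command)
    try:
        return c.wait(timeout=time_out)
    except subprocess.TimeoutExpired:
        c.terminate()   # or c.kill() for a harder stop
        c.wait()        # reap the terminated child
        return -1       # same error convention as the polling version above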
I guess this is a common synchronization problem in event-oriented programming with threads and processes.
If you should always have only one subprocess running, make sure the current subprocess is killed before running the next one. Otherwise the signal handler may get a reference to the last subprocess started and ignore the older one.
Suppose subprocess A is running. Before the alarm signal is handled, subprocess B is launched. Just after that, your alarm signal handler attempts to kill a subprocess. As the current PID (or the current subprocess pipe object) was set to B's when launching the subprocess, B gets killed and A keeps running.
Is my guess correct?
To make your code easier to understand, I would include the part that creates a new subprocess just after the part that kills the current subprocess. That would make it clear there is only one subprocess running at any time. The signal handler could do both the killing and the launching, as if it were the iteration block of a loop, in this case event-driven by the alarm signal every second.
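A rough sketch of that structure (not the original poster's code; the tests list is a placeholder for whatever commands are being run) could look like this:
import os
import signal
import subprocess

current = None  # the single subprocess allowed to run at any time

def stop_handler(signum, frame):
    # kill the current subprocess if it is still running; nothing else
    # has been launched yet, so the right process gets the signal
    if current is not None and current.poll() is None:
        os.kill(current.pid, signal.SIGKILL)

signal.signal(signal.SIGALRM, stop_handler)

for test in tests:            # placeholder list of commands
    current = subprocess.Popen(test)
    signal.alarm(1)           # arm the one-second watchdog
    current.wait()            # returns on exit or after the kill
    signal.alarm(0)           # disarm before launching the next round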
Here's what I use:
import os
import signal
import subprocess
import threading

class KillerThread(threading.Thread):
    def __init__(self, pid, timeout, event):
        threading.Thread.__init__(self)
        self.pid = pid
        self.timeout = timeout
        self.event = event
        self.setDaemon(True)

    def run(self):
        self.event.wait(self.timeout)
        if not self.event.isSet():
            try:
                os.kill(self.pid, signal.SIGKILL)
            except OSError, e:
                # This is raised if the process has already completed
                pass

def runTimed(dt, dir, args, kwargs):
    event = threading.Event()

    cwd = os.getcwd()
    os.chdir(dir)
    proc = subprocess.Popen(args, **kwargs)
    os.chdir(cwd)

    killer = KillerThread(proc.pid, dt, event)
    killer.start()

    (stdout, stderr) = proc.communicate()
    event.set()

    return (stdout, stderr, proc.returncode)
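Usage might look something like this (the command, directory, and pipe kwargs are just examples):
out, err, rc = runTimed(60, '.', ['./someProg', 'arg1'],
                        dict(stdout=subprocess.PIPE, stderr=subprocess.PIPE))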
A bit more complex, I added an answer to solve a similar problem: Capturing stdout, feeding stdin, and being able to terminate after some time of inactivity and/or after some overall runtime.
