Hi guys,
I am developing a GUI with Python 2.4.3 and wxPython. Everything works fine except when I exit the main program (close the main window of the GUI). The weird thing is that sometimes I get the error below, and sometimes there is no error at all. I found a similar error report on the Python bug tracker (http://bugs.python.org/issue1722344, though I am not sure my case is the same as that one), but I cannot tell how it was finally resolved or what I should do to overcome this problem.
The error message from the console is as follows.
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
File "/opt/company/workspace/username/application/src/mainwidget.py", line 1066, in run
File "/usr/lib/python2.4/Queue.py", line 89, in put
File "/usr/lib/python2.4/threading.py", line 237, in notify
exceptions.TypeError: exceptions must be classes, instances, or strings (deprecated), not NoneType
Unhandled exception in thread started by
Error in sys.excepthook:
Original exception was:
The following is part of my code (the thread-related code is complete; I have extracted only the main operations for the rest). When I use the GUI to launch an external subprocess, a wx.TextCtrl object is created at the same time. This wx.TextCtrl object is used to give input to and print output from the external subprocess.
class BashProcessThread(threading.Thread):
    def __init__(self, readlineFunc):
        threading.Thread.__init__(self)
        self.readlineFunc = readlineFunc
        self.lines = []
        self.outputQueue = Queue.Queue()
        self.setDaemon(True)

    def run(self):
        while True:
            line = self.readlineFunc()
            self.outputQueue.put(line)
            if line == "":
                break
        return ''.join(self.lines)

    def getOutput(self):
        """ called from other thread """
        while True:
            try:
                line = self.outputQueue.get_nowait()
                self.lines.append(line)
            except Queue.Empty:
                break
        return ''.join(self.lines)
class ExternalProcWindow(wx.Window):
    def __init__(self, parent, externapp):
        wx.Window.__init__(self, parent, -1, pos=wx.DefaultPosition, size=wx.Size(1200, 120))
        self.externapp = externapp
        self.prompt = externapp.name.lower() + '>>'
        self.textctrl = wx.TextCtrl(self, -1, '', size=wx.Size(1200, 120),
                                    style=wx.TE_PROCESS_ENTER | wx.TE_MULTILINE)
        self.default_txt = self.textctrl.GetDefaultStyle()
        self.textctrl.AppendText(self.prompt)
        self.outputThread = BashProcessThread(self.externapp.sub_process.stdout.readline)
        self.outputThread.start()
        self.textctrl.SetFocus()
        self.__bind_events()
        self.Fit()

    def __bind_events(self):
        self.Bind(wx.EVT_TEXT_ENTER, self.__enter)

    def __enter(self, e):
        nl = self.textctrl.GetNumberOfLines()
        ln = self.textctrl.GetLineText(nl - 1)
        ln = ln[len(self.prompt):]
        self.externapp.sub_process.stdin.write(ln + "\n")
        time.sleep(.3)
        self.textctrl.AppendText(self.outputThread.getOutput())

class ExternApp:
    def launch(self):
        self.sub_process = subprocess.Popen(launchcmd, stdin=subprocess.PIPE,
                                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
The problem is caused by the use of threading.Thread.setDaemon. Threads set daemonic don't prevent the Python interpreter from exiting, but they do keep running. Because Python cleans up the environment before the process is terminated, the threads can run into trouble when stuff is removed from under them. That raises an exception, which the thread class tries to print for your convenience -- but that, too, then fails because the process is exiting.
You could try to silence the exception, but that's tricky (and if the thread does anything substantial, it might hide a real problem; not the case here, though). Or you could ask the thread to stop before exiting, and not set it daemonic. Or you can simply avoid using threads altogether. I do not remember whether wxPython has a convenient mechanism for getting a process's output, or even for doing asynchronous I/O, but many GUI toolkits do. And there's always Twisted, which does it all for you.
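As a rough sketch of the "stop the thread before exiting" option (ReaderThread and the close-handler lines below are illustrative assumptions, not the asker's actual code; Popen.terminate() only exists on Python 2.6+, hence os.kill):

import os
import signal
import threading
import Queue

class ReaderThread(threading.Thread):
    """Non-daemonic reader: joined explicitly on shutdown instead of abandoned."""
    def __init__(self, readlineFunc):
        threading.Thread.__init__(self)
        self.readlineFunc = readlineFunc
        self.outputQueue = Queue.Queue()
        # note: no setDaemon(True) -- the GUI stops this thread explicitly

    def run(self):
        while True:
            line = self.readlineFunc()
            self.outputQueue.put(line)
            if line == "":   # EOF: the subprocess closed its stdout
                break

# in the window's wx.EVT_CLOSE handler, before destroying the frame:
#     os.kill(self.externapp.sub_process.pid, signal.SIGTERM)
#     self.outputThread.join()   # readline() hits EOF, run() returns cleanly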
Related
An exception is raised in threading._wait_for_tstate_lock when I transfer huge data between a Process and a Thread via multiprocessing.Queue.
My minimal working example looks a bit complex at first - sorry, I will explain. The original application loads a lot of (not so important) files into RAM. This is done in a separate process to save resources, and the main GUI thread shouldn't freeze.
The GUI starts a separate Thread to prevent the GUI event loop from freezing.
This separate Thread then starts one Process which does the work.
a) This Thread instantiates a multiprocessing.Queue (be aware that this is multiprocessing and not threading!)
b) This is given to the Process for sharing data from the Process back to the Thread.
The Process does some work (3 steps) and .put()s the result into the multiprocessing.Queue.
When the Process ends, the Thread takes over again, collects the data from the Queue, and stores it to its own attribute MyThread.result.
The Thread tells the GUI main loop/thread to call a callback function when it has time for it.
The callback function (MyWindow::callback_thread_finished()) gets the results from MyWindow.thread.result.
The problem: if the data put on the Queue is too big, something happens that I don't understand - the MyThread never ends. I have to cancel the application via Ctrl+C.
I got some hints from the docs, but I did not fully understand the documentation. Still, I have the feeling that the key to my problem can be found there.
Please see the two red boxes in "Pipes and Queues" (Python 3.5 docs).
This is the full output:
MyWindow::do_start()
Running MyThread...
Running MyProcess...
MyProcess stoppd.
^CProcess MyProcess-1:
Exception ignored in: <module 'threading' from '/usr/lib/python3.5/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 1288, in _shutdown
t.join()
File "/usr/lib/python3.5/threading.py", line 1054, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 252, in _bootstrap
util._exit_function()
File "/usr/lib/python3.5/multiprocessing/util.py", line 314, in _exit_function
_run_finalizers()
File "/usr/lib/python3.5/multiprocessing/util.py", line 254, in _run_finalizers
finalizer()
File "/usr/lib/python3.5/multiprocessing/util.py", line 186, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python3.5/multiprocessing/queues.py", line 198, in _finalize_join
thread.join()
File "/usr/lib/python3.5/threading.py", line 1054, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
This is the minimal working example:
#!/usr/bin/env python3
import multiprocessing
import threading
import time

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
from gi.repository import GLib

class MyThread(threading.Thread):
    """This thread just starts the process."""
    def __init__(self, callback):
        threading.Thread.__init__(self)
        self._callback = callback

    def run(self):
        print('Running MyThread...')
        self.result = []

        queue = multiprocessing.Queue()
        process = MyProcess(queue)
        process.start()
        process.join()

        while not queue.empty():
            process_result = queue.get()
            self.result.append(process_result)

        print('MyThread stoppd.')
        GLib.idle_add(self._callback)

class MyProcess(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        print('Running MyProcess...')
        for i in range(3):
            self.queue.put((i, 'x'*102048))
        print('MyProcess stoppd.')

class MyWindow(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self)
        self.connect('destroy', Gtk.main_quit)
        GLib.timeout_add(2000, self.do_start)

    def do_start(self):
        print('MyWindow::do_start()')
        # The process needs to be started from a separate thread
        # to prevent the main thread (which is the gui main loop)
        # from freezing while waiting for the process result.
        self.thread = MyThread(self.callback_thread_finished)
        self.thread.start()

    def callback_thread_finished(self):
        result = self.thread.result
        for r in result:
            print('{} {}...'.format(r[0], r[1][:10]))

if __name__ == '__main__':
    win = MyWindow()
    win.show_all()
    Gtk.main()
Possible duplicate but quite different and IMO without an answer for my situation: Thread._wait_for_tstate_lock() never returns.
Workaround
Using a Manager, i.e. modifying queue = multiprocessing.Queue() to queue = multiprocessing.Manager().Queue(), solves the problem. But I don't know why. The intention of this question is to understand the things behind it, not just to make my code work. I don't even really know what a Manager() is and whether it has other (problem-causing) implications.
According to the second warning box in the documentation you are linking to, you can get a deadlock when you join a process before processing all items in the queue. So starting the process, immediately joining it, and only then processing the items in the queue is the wrong order of steps. You have to start the process, then receive the items, and only when all items are received can you call the join method. Define some sentinel value to signal that the process is finished sending data through the queue - None, for example, if that can't be a regular value you expect from the process.
class MyThread(threading.Thread):
    """This thread just starts the process."""
    def __init__(self, callback):
        threading.Thread.__init__(self)
        self._callback = callback
        self.result = []

    def run(self):
        print('Running MyThread...')
        queue = multiprocessing.Queue()
        process = MyProcess(queue)
        process.start()
        while True:
            process_result = queue.get()
            if process_result is None:
                break
            self.result.append(process_result)
        process.join()
        print('MyThread stoppd.')
        GLib.idle_add(self._callback)

class MyProcess(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        print('Running MyProcess...')
        for i in range(3):
            self.queue.put((i, 'x' * 102048))
        self.queue.put(None)
        print('MyProcess stoppd.')
The documentation in question reads:
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
This is supplementary to the accepted answer, but the edit queue is full.
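For what it's worth, here is a sketch of the Manager-based workaround from the question, adapted from MyThread.run, with my understanding of why it avoids the deadlock: a managed queue is only a proxy, and put() hands the data to the manager's server process synchronously, so the child process has no internal feeder thread whose unflushed buffer join() would wait on.

    def run(self):
        print('Running MyThread...')
        manager = multiprocessing.Manager()
        queue = manager.Queue()   # proxy; the real queue lives in the manager's server process
        process = MyProcess(queue)
        process.start()
        process.join()            # safe here: the child has no feeder thread left to flush
        while not queue.empty():
            self.result.append(queue.get())
        print('MyThread stoppd.')
        GLib.idle_add(self._callback)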
I'm using a custom timeout exception to get around iter(subprocess.Popen.stdout.readline, '') blocking when there is no more output to read, but the exception isn't being caught properly. The code has both a main process and a separate process (implemented with multiprocessing.Process), and timeouts can happen in either. The relevant sections are:
class Timeout(Exception):
    def __init__(self, message):
        self.message = message

def handle_timeout(signal, frame):
    raise Timeout("Timed out")
This custom exception is caught just fine in the main process, but in the child process, whenever the Timeout is raised it is never caught, despite using (I believe) the appropriate standard syntax:
from subprocess import Popen, PIPE

subProc = Popen(('tail', '-f', fileName), stdout=PIPE, stderr=PIPE, shell=False, close_fds=True)
lines = iter(subProc.stdout.readline, '')
for line in lines:
    try:
        process_line(line)
    except Timeout as time_out:
        print(time_out.message)
        subProc.terminate()
        break
Instead of printing the timeout message and terminating subProc, I get the following output:
Traceback (most recent call last):
File "/home/username/anaconda2/envs/Py2.7/lib/python2.7/multiprocessing/process.py", line 267, in _bootstrap
self.run()
File "reader.py", line 50, in run
for line in lines:
File "reader.py", line 13, in handle_timeout
raise Timeout("Timed out")
Timeout
handle_timeout appears to be working fine, since the timeout is being raised, but the exception handling is being ignored or skipped. Am I doing anything wrong syntax-wise, or do I need to define a separate custom exception, presumably within the child process?
Edit:
The second code block above was incomplete. Here it is as it currently exists (with chepner's advice on the irrelevance of iter(stdout.readline, '') included):
from subprocess import Popen, PIPE

signal.signal(signal.SIGALRM, handle_timeout)

subProc = Popen(('tail', '-f', fileName), stdout=PIPE, stderr=PIPE, shell=False, close_fds=True)
for line in subProc.stdout:
    signal.alarm(CHILD_TIMEOUT)
    try:
        process_line(line)
    except Timeout as time_out:
        print(time_out.message)
        subProc.terminate()
        break
In the parent process (where the timeout exception works exactly as desired), the format is:
# signal masking as in last block
while True:
    try:
        signal.alarm(MASTER_TIMEOUT)  # different from CHILD_TIMEOUT
        other_processing()
    except Timeout:
        shutDown(children)  # method to shut down child processes
        break
SOLVED:
I've found a solution.
subProc = Popen(('tail', '-f', fileName), stdout=PIPE, stderr=PIPE, shell=False, close_fds=True)
while not exit.is_set():  # exit is defined by multiprocessing.Event()
    signal.alarm(3)
    try:
        for line in subProc.stdout:
            process_line(line)
    except Timeout:
        print("Process timed out while waiting for log output")
        subProc.terminate()
        exit.set()
Now when the alarm goes off, the timeout exception is raised and caught as it should be, ending the subprocess before triggering the exit condition, after which the child process shuts down gracefully.
You can't actually trap an error inside a subprocess the way your code is working. What you think of as error handling is actually a subprocess being spawned, executing your code, and managing the response on its own. Since you are using Popen to manually control the subprocess, you need to manually process its response.
When your subprocess ends, it should return 0. If it returns -1 or 1, that means an error has occurred and you need to read from stderr to capture it.
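As a rough sketch of that manual check (assuming a subprocess that eventually exits on its own - a tail -f will not, so this is the general pattern rather than the asker's exact command; communicate() waits for the process and collects both streams):

out, err = subProc.communicate()   # blocks until the process finishes
if subProc.returncode != 0:
    print('subprocess failed ({0}): {1}'.format(subProc.returncode, err))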
Edit1
I see your problem. The way you have it written, the handler handle_timeout will grab the error and re-raise it every time. You can't handle an exception in multiple places; as it is, you have two separate functions trying to handle the same error concurrently. This will always produce a conflict, and the first one that catches the error will cause your main process to exit. You can do a couple of different things here, but let me implore you - do not eat an error for no reason.
fix 1:
Remove your error handler
def handle_timeout(signal, frame):
    raise Timeout("Timed out")
fix 2:
try:
    process_line(line)
finally:
    subProc.terminate()
The above will guarantee the termination of the subprocess without eating an error. Also, catching an error with a custom handler like your handle_timeout is a technique used almost exclusively to deconstruct a complex run or object before re-raising the error. It's basically a last-ditch solution for when you have A LOT of cleanup after a particular error. If you want to do that, do not use an except block; see the sketch below.
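If you do want to clean up and still surface the error, a minimal sketch of that clean-up-then-re-raise pattern:

try:
    process_line(line)
except Timeout:
    subProc.terminate()   # deconstruct/clean up first...
    raise                 # ...then re-raise instead of eating the error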
Can someone tell me where I can put the lock inside a custom thread in Python?
import threading

lock = threading.Lock()

class WorkerThread(threading.Thread):
    def __init__(self, lock):
        super(WorkerThread, self).__init__()
        self.lock = lock

    def run(self):
        self.lock.acquire()
        print "Hello World"
        self.lock.release()

worker = WorkerThread(lock)
Error Traceback:
Traceback (most recent call last):
File "<buffer>", line 14, in <module>
File "<buffer>", line 11, in __init__
AssertionError: release() of un-acquire()d lock
You've mixed tabs and spaces. Most of the definition of run is actually nested inside __init__, and the self.lock.release() is actually outside run and inside __init__. Your thread ends up trying to release the unlocked lock on thread creation.
Don't mix tabs and spaces. Stick to spaces. Turn on "show whitespace" in your editor to make the problem more visible, and get a better editor if your editor can't do that. Running Python with the -tt flag can also help catch these errors.
Here's some slimmed down code that demonstrates my use of threading:
import threading
import Queue
import time

def example():
    """ used in MainThread as the example generator """
    while True:
        yield 'asd'

class ThreadSpace:
    """ A namespace to be shared among threads/functions """
    # set this to True to kill the threads
    exit_flag = False

class MainThread(threading.Thread):
    def __init__(self, output):
        super(MainThread, self).__init__()
        self.output = output

    def run(self):
        # this is a generator that contains a While True
        for blah in example():
            self.output.put(blah)
            if ThreadSpace.exit_flag:
                break
            time.sleep(0.1)

class LoggerThread(threading.Thread):
    def __init__(self, output):
        super(LoggerThread, self).__init__()
        self.output = output

    def run(self):
        while True:
            data = self.output.get()
            print data

def main():
    # start the logging thread
    logging_queue = Queue.Queue()
    logging_thread = LoggerThread(logging_queue)
    logging_thread.daemon = True
    logging_thread.start()

    # launch the main thread
    main_thread = MainThread(logging_queue)
    main_thread.start()

    try:
        while main_thread.isAlive():
            time.sleep(0.5)
    except KeyboardInterrupt:
        ThreadSpace.exit_flag = True

if __name__ == '__main__':
    main()
I have one main thread which gets data yielded to it from a blocking generator. In the real code, this generator yields network-related data it sniffs off a socket.
I then have a logging daemon thread which prints the data to the screen.
To exit the program cleanly, I'm catching a KeyboardInterrupt, which sets exit_flag to True - this tells the main thread to return.
9 times out of 10 this works fine and the program exits cleanly. However, on a few occasions I'll receive one of the following two errors:
Error 1:
^CTraceback (most recent call last):
File "demo.py", line 92, in <module>
main('')
File "demo.py", line 87, in main
time.sleep(0.5)
KeyboardInterrupt
Error 2:
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
I've run this exact sample code a few times and haven't been able to replicate the errors. The only difference between this and the real code is the example() generator, which, as I said, yields network data from the socket.
Can you see anything wrong with how I'm handling the threads?
KeyboardInterrupts are received by arbitrary threads. If the receiver isn't the main thread, it dies, the main thread is unaffected, ThreadSpace.exit_flag remains false, and the script keeps running.
If you want SIGINT to work, you can have each thread catch KeyboardInterrupt and call thread.interrupt_main() to get Python to exit, or use the signal module as the official documentation explains.
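A minimal sketch of the interrupt_main() approach (do_blocking_work is a hypothetical stand-in for the worker's real loop):

import thread      # Python 2 module; renamed _thread in Python 3
import threading

class Worker(threading.Thread):
    def run(self):
        try:
            do_blocking_work()        # hypothetical long-running call
        except KeyboardInterrupt:
            # re-deliver the interrupt to the main thread so the
            # whole script exits instead of just this worker dying
            thread.interrupt_main()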
How can you have a function or something else that will be executed before your program quits? I have a script that runs constantly in the background, and I need it to save some data to a file before it exits. Is there a standard way of doing this?
Check out the atexit module:
http://docs.python.org/library/atexit.html
For example, if I wanted to print a message when my application was terminating:
import atexit

def exit_handler():
    print 'My application is ending!'

atexit.register(exit_handler)
Just be aware that this works great for normal termination of the script, but it won't get called in all cases (e.g. fatal internal errors).
If you want something to always run, even on errors, use try: finally: like this -
def main():
    try:
        execute_app()
    finally:
        handle_cleanup()

if __name__ == '__main__':
    main()
If you also want to handle exceptions, you can insert an except: before the finally:
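For example, a minimal sketch (log_error is a hypothetical helper, not part of the answer above):

def main():
    try:
        execute_app()
    except Exception as exc:
        log_error(exc)        # hypothetical: record the failure somewhere
        raise                 # re-raise so the error stays visible
    finally:
        handle_cleanup()      # runs on success, on error, and on re-raise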
If you stop the script by raising a KeyboardInterrupt (e.g. by pressing Ctrl-C), you can catch that just as a standard exception. You can also catch SystemExit in the same way.
try:
    ...
except KeyboardInterrupt:
    # clean up
    raise
I mention this just so that you know about it; the 'right' way to do this is the atexit module mentioned above.
If you have class objects that exist during the whole lifetime of the program, you can also execute commands from the classes with the __del__(self) method:
from time import sleep  # needed for sleep() below

class x:
    def __init__(self):
        while True:
            print("running")
            sleep(1)

    def __del__(self):
        print("destructuring")

a = x()
This works on a normal program end, as well as if execution is aborted; admittedly there will be some exceptions:
running
running
running
running
running
Traceback (most recent call last):
File "x.py", line 14, in <module>
a = x()
File "x.py", line 8, in __init__
sleep(1)
KeyboardInterrupt
destructuring
This is a version adapted from other answers.
It should work (not fully tested) with graceful exits, kills, and the PyCharm stop button (the last one I can confirm).
import signal
import atexit

def handle_exit(*args):
    try:
        ... do computation ...
    except BaseException as exception:
        ... handle the exception ...

atexit.register(handle_exit)
signal.signal(signal.SIGTERM, handle_exit)
signal.signal(signal.SIGINT, handle_exit)
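One note on the signature: when handle_exit is invoked through signal.signal it receives (signum, frame), while atexit calls it with no arguments, so the *args signature is what lets the same function serve both registrations.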