Twisted/tkinter program crashes on exit - python

I am running an app using twisted and tkinter that sends the result to the server, waits for the server to send back a confirmation, and then exits. So, the function I use to exit is this:
def term():
    '''To end the program'''
    reactor.stop()
    root.quit()
    root.destroy()
This is then set in the factory and called in the dataReceived function of the protocol. I run it, and the program runs fine and even sends the necessary data and closes, but it also gives me the following error report:
Unhandled error in Deferred:
Traceback (most recent call last):
File "D:\Python25\Lib\site-packages\twisted\internet\base.py", line 1128, in run
self.mainLoop()
File "D:\Python25\Lib\site-packages\twisted\internet\base.py", line 1137, in mainLoop
self.runUntilCurrent()
File "D:\Python25\Lib\site-packages\twisted\internet\base.py", line 757, in runUntilCurrent
call.func(*call.args, **call.kw)
File "D:\Python25\Lib\site-packages\twisted\internet\task.py", line 114, in __call__
d = defer.maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
File "D:\Python25\Lib\site-packages\twisted\internet\defer.py", line 106, in maybeDeferred
result = f(*args, **kw)
File "D:\Python25\lib\lib-tk\Tkinter.py", line 917, in update
self.tk.call('update')
_tkinter.TclError: can't invoke "update" command: application has been destroyed
Does anyone know why?

You only need to call reactor.stop to exit: the root.quit() and root.destroy() calls are superfluous. Consider this short example which runs Twisted and Tk for three seconds and then exits:
import Tkinter
from twisted.internet import tksupport
root = Tkinter.Tk()
tksupport.install(root)
from twisted.internet import reactor
reactor.callLater(3, reactor.stop)
reactor.run()
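Applied to the question's term() helper, the fix reduces it to a single call. A minimal sketch, assuming tksupport.install(root) has been done as in the example above (destroying the Tk root by hand while the tksupport update loop is still scheduled is the likely source of the TclError):
from twisted.internet import reactor

def term():
    '''End the program once the server confirms receipt.'''
    # Stopping the reactor also stops the tksupport update loop;
    # root.quit()/root.destroy() are not needed and can race with it.
    reactor.stop()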

Twisted task.LoopingCall : Unhandled error in Deferred:

My Code
from twisted.internet import task, reactor
def stuff():
    print('Hello, world!')
scheduler = task.LoopingCall(stuff())
scheduler.start(10)
reactor.run()
This is the Error I am getting
Hello, world!
Unhandled error in Deferred:
Traceback (most recent call last):
  File "C:\Users\Usama fiaz\AppData\Local\Programs\Python\Python39\lib\site-packages\twisted\internet\base.py", line 1315, in run
    self.mainLoop()
  File "C:\Users\Usama fiaz\AppData\Local\Programs\Python\Python39\lib\site-packages\twisted\internet\base.py", line 1325, in mainLoop
    reactorBaseSelf.runUntilCurrent()
  File "C:\Users\Usama fiaz\AppData\Local\Programs\Python\Python39\lib\site-packages\twisted\internet\base.py", line 991, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "C:\Users\Usama fiaz\AppData\Local\Programs\Python\Python39\lib\site-packages\twisted\internet\task.py", line 251, in __call__
    d = maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
  File "C:\Users\Usama fiaz\AppData\Local\Programs\Python\Python39\lib\site-packages\twisted\internet\defer.py", line 190, in maybeDeferred
    result = f(*args, **kwargs)
builtins.TypeError: 'NoneType' object is not callable
Per the API documentation, LoopingCall.__init__ accepts a callable parameter f as its first argument.
In your example:
from twisted.internet import task, reactor
def stuff():
    print('Hello, world!')
scheduler = task.LoopingCall(stuff())
scheduler.start(10)
reactor.run()
the value being passed for f is the result of evaluating stuff() - in other words, the return value of the stuff function. stuff implicitly returns None, so your LoopingCall construction is equivalent to:
scheduler = task.LoopingCall(None)
Then, when you start the loop with LoopingCall.start you don't attach any error handlers. When LoopingCall tries to call None an exception is raised. Since there are no error handlers attached, the exception is reported as an "Unhandled error in Deferred" and logged for you.
If you want the LoopingCall to call stuff at the defined interval, pass stuff to it (instead of None). If you want to deal with errors in your loop, attach an error handler to the Deferred returned by LoopingCall.start.
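Putting both points together, a minimal corrected version of the script might look like this (the error-handler name is illustrative, not part of the original code):
from twisted.internet import task, reactor

def stuff():
    print('Hello, world!')

def report_error(failure):
    # Called if stuff() raises; without this, failures surface only as
    # "Unhandled error in Deferred" in the log.
    print('Loop failed:', failure)
    reactor.stop()

scheduler = task.LoopingCall(stuff)  # pass the function itself, do not call it
d = scheduler.start(10)
d.addErrback(report_error)
reactor.run()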

Flask: RuntimeError: Working outside of request context

I'm trying to control a robot with ROS and Flask. The problem is that when I kill ROS with Ctrl-C (SIGINT), Flask slows the shutdown because it does not close right away. I have implemented a signal_handler to handle the Ctrl-C and close Flask.
The problem is that when I run this and press Ctrl-C, it closes everything right away, but I get the following error:
RuntimeError: Working outside of request context.
How can I fix this error?
#!/usr/bin/env python
import rospy
from raspimouse_ros.msg import MotorFreqs
from time import sleep
from flask import Flask, request
from os.path import join, dirname
from signal import signal, SIGINT

cwd = dirname(__file__)
open(join(cwd, "file.html"))

app = Flask(__name__)
deltaX = 0
deltaY = 0

pub = rospy.Publisher('/motor_raw', MotorFreqs, queue_size=10)
rospy.init_node('control')
msg = MotorFreqs()

def signal_handler(signal_received, frame):
    msg.left = 0
    msg.right = 0
    pub.publish(msg)
    print("Quitting .......")
    func = request.environ.get('werkzeug.server.shutdown')
    func()

signal(SIGINT, signal_handler)

@app.route("/")
def main():
    with open(join(cwd, "file.html"), 'r') as f:
        program = f.read()
    return program

@app.route("/SetSpeed")
def SetSpeed():
    global deltaX
    global deltaY
    deltaX = int(float(request.args.get('x')) * 4)
    deltaY = int(float(request.args.get('y')) * 10)
    publisher()
    return ""

def publisher():
    msg.left = int(-deltaY+deltaX)
    msg.right = int(-deltaY-deltaX)
    rospy.loginfo(msg)
    pub.publish(msg)

app.run(host="0.0.0.0")
[control-1] killing on exit
Quitting .......
Traceback (most recent call last):
File "/home/pi/workspace/src/manual_control/scripts/control.py", line 54, in <module>
app.run(host="0.0.0.0")
File "/usr/lib/python2.7/dist-packages/flask/app.py", line 841, in run
run_simple(host, port, self, **options)
File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 708, in run_simple
inner()
File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 673, in inner
srv.serve_forever()
File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 511, in serve_forever
HTTPServer.serve_forever(self)
File "/usr/lib/python2.7/SocketServer.py", line 231, in serve_forever
poll_interval)
File "/usr/lib/python2.7/SocketServer.py", line 150, in _eintr_retry
return func(*args)
File "/home/pi/workspace/src/manual_control/scripts/control.py", line 28, in signal_handler
func = request.environ.get('werkzeug.server.shutdown')
File "/usr/lib/python2.7/dist-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/lib/python2.7/dist-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/usr/lib/python2.7/dist-packages/flask/globals.py", line 37, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
shutting down processing monitor...
... shutting down processing monitor complete
done
The request context is only active while a request is being handled and is torn down when the request finishes, so it is not available inside the signal_handler function. You can, however, access the application context (app_context).
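A hedged sketch of one way around this, keeping the handler from the question but avoiding anything request-bound: publish the stop command to the robot and then exit the process directly (sys.exit here is one illustrative shutdown path, not the only option):
import sys

def signal_handler(signal_received, frame):
    # No HTTP request is active when SIGINT arrives, so request.environ
    # (and the werkzeug.server.shutdown hook stored in it) is unavailable.
    msg.left = 0
    msg.right = 0
    pub.publish(msg)
    print("Quitting .......")
    sys.exit(0)  # raises SystemExit in the main thread and unwinds app.run()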

Program with flask-socketio and multiprocessing throws 'LoopExit: This operation would block forever'

First: I am an absolute beginner in Python; I used to write PHP before, so if I am getting something completely wrong, please let me know.
I am writing an app. It should serve its information via websockets. I chose flask-socketio for this. In the background I want to process the data. Because I would like to keep the app small, I decided against a solution like Celery.
I have shortened the code to:
# -*- coding: utf8 -*-
from flask import Flask, jsonify, abort, make_response, url_for, request, render_template
from flask.ext.socketio import SocketIO, emit
from multiprocessing import Pool
from multiprocessing.managers import BaseManager
import time
import os

def background_stuff(args):
    while True:
        try:
            print args
            time.sleep(1)
        except Exception as e:
            return e

thread = None
_pool = None

app = Flask(__name__)
app.debug = True
socketio = SocketIO(app)

@app.route('/', methods=['GET'])
def get_timers():
    return 'timer'

if __name__ == '__main__':
    _pool = Pool(1)
    if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
        workers = _pool.apply_async(
            func=background_stuff,
            args=('do background stuff',),
        )
    socketio.run(app)
    # app.run()
When starting this, I get the following messages:
python test/multitest.py
* Running on http://127.0.0.1:5000/
* Restarting with stat
do background stuff
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 336, in _handle_tasks
for taskseq, set_length in iter(taskqueue.get, None):
File "/usr/lib/python2.7/Queue.py", line 168, in get
self.not_empty.wait()
File "/usr/lib/python2.7/threading.py", line 340, in wait
waiter.acquire()
File "gevent/_semaphore.pyx", line 112, in gevent._semaphore.Semaphore.acquire (gevent/gevent._semaphore.c:3386)
File "/home/phil/work/ttimer/server/local/lib/python2.7/site-packages/gevent/hub.py", line 338, in switch
return greenlet.switch(self)
LoopExit: This operation would block forever
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
127.0.0.1 - - [2015-09-30 00:06:23] "GET / HTTP/1.1" 200 120 0.001860
do background stuff
do background stuff
do background stuff
do background stuff
^CProcess PoolWorker-1:
Process PoolWorker-1:
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 113, in worker
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
result = (True, func(*args, **kwds))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
File "test/multitest.py", line 14, in background_stuff
KeyboardInterrupt
time.sleep(1)
KeyboardInterrupt
return recv()
KeyboardInterrupt
So the background process is working and it answers HTTP requests (127.0.0.1 - - [2015-09-30 00:06:23] "GET / HTTP/1.1" 200 120 0.001860). But just ignoring an error because it seems to work does not seem like a solution to me. Can anyone tell me what I am doing wrong here?
If you say I can't do it that way, can you tell me why? I would like to learn and understand what I am doing wrong.
I read something about monkey patching, but everything suggested threw just more or other errors. I think it is better to work on the first error instead of blindly trying fixes.
python -V
Python 2.7.9
Greetings
Update
I added the two lines for monkey patching; this is what I got:
$python multitest2.py
^CProcess PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
return recv()
KeyboardInterrupt
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 380, in _handle_results
task = get()
KeyboardInterrupt
* Running on http://127.0.0.1:5000/
* Restarting with stat
^CProcess PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 380, in _handle_results
task = get()
KeyboardInterrupt
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
return recv()
KeyboardInterrupt
do background stuff
FAILED to start flash policy server: [Errno 98] Address already in use: ('127.0.0.1', 10843)
$do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
On start there is no output at all. After hitting Ctrl-C several times, I get the background stuff output. This continues until I kill the Python process via SIGKILL.
Update 2
What I expect to see is:
* Running on http://127.0.0.1:5000/
* Restarting with stat
do background stuff
do background stuff
do background stuff
right after the script starts. But before I press Ctrl-C, nothing happens.
First of all, you need to be aware that the version of Flask-SocketIO that you are using requires gevent, which is a coroutine framework. Using the asynchronous coroutines of gevent with a multiprocessing pool is a strange combination. You are using gevent, so what would make the most sense is to use the gevent pool functionality so that everything is consistent.
Now regarding the problem, I think it is likely due to not having the standard library monkey patched at an early stage. I recommend that you add the following lines at the very top of your script (above your imports, make them lines 1 and 2):
from gevent import monkey
monkey.patch_all()
These lines ensure that any calls into the standard library for things such as threads, semaphores, etc. go to the gevent implementations.
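For clarity, the top of the script would then look roughly like this; only the first two lines are new, the rest are the imports already in the question:
from gevent import monkey
monkey.patch_all()  # patch the standard library before anything else is imported

from flask import Flask, jsonify, abort, make_response, url_for, request, render_template
from flask.ext.socketio import SocketIO, emit
from multiprocessing import Pool
from multiprocessing.managers import BaseManager
import time
import os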
Update: I tried your example. The original version, without monkey patching, works fine for me; I do not see the LoopExit error that you reported. Adding the monkey patching prevents the background processes from running, as you reported.
In any case, I converted your script to use gevent.pool and that works reliably for me. Here is the edited script:
from flask import Flask, jsonify, abort, make_response, url_for, request, render_template
from flask.ext.socketio import SocketIO, emit
from gevent.pool import Pool
import time
import os

def background_stuff(args):
    while True:
        try:
            print args
            time.sleep(1)
        except Exception as e:
            return e

thread = None
_pool = None

app = Flask(__name__)
app.debug = True
socketio = SocketIO(app)

@app.route('/', methods=['GET'])
def get_timers():
    return 'timer'

if __name__ == '__main__':
    _pool = Pool(1)
    workers = _pool.apply_async(
        func=background_stuff,
        args=('do background stuff',),
    )
    socketio.run(app)
Hope this helps!
I read a tutorial about gevent and found a solution which is simple and clean for my needs:
# -*- coding: utf8 -*-
from flask import Flask
from flask.ext.socketio import SocketIO
import gevent
import os

def background_stuff():
    while True:
        try:
            print 'doing background work ... '
            gevent.sleep(1)
        except Exception as e:
            return e

app = Flask(__name__)
app.debug = True
socketio = SocketIO(app)

@app.route('/', methods=['GET'])
def get_timers():
    return 'timer'

if __name__ == '__main__':
    if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
        gevent.spawn(background_stuff)
    socketio.run(app)
The tutorial can be found here: http://sdiehl.github.io/gevent-tutorial/#long-polling
It even talks about problems with gevent and multiprocessing: http://sdiehl.github.io/gevent-tutorial/#subprocess , but because I found a simple solution that fits my needs, I didn't try anything else.

Method to make a new twisted reactor?

I am making an IRC log bot which saves the logs by date. I want the program to close the present reactor and make a new one (this is because it will save the logs in a new file). I wrote a sample program, but it does not work:
def event():
    if no date_change:
        do normal work that has to be done
    else:
        stop present reactor
        make a new reactor
Here is the actual code that I am using:
def irc_NICK(self, prefix, params):
    """Called when an IRC user changes their nickname."""
    old_nick = prefix.split('!')[0]
    new_nick = params[0]
    if self.factory.filename.find(file_name_gen()) != -1:
        self.logger.log("<em>%s is now known as %s</em>" % (old_nick, new_nick), 1)
    else:
        print "new itng"
        reactor.stop()
        irc.IRCClient.connectionLost(self, "Day Change")
        # earlier the LogBotFactory object is f
        f1 = LogBotFactory("meeting-test", file_name_gen())
        reactor.connectTCP("irc.freenode.net", 6667, f1)
        reactor.run()
The second LogBotFactory object gets created, but the program stops due to an unhandled error.
This is the traceback that I am getting:
1971-01-02 23:59:41+0530 [-] Log opened.
1971-01-02 23:59:41+0530 [-] Starting factory <__main__.LogBotFactory instance at 0x27318c0>
1971-01-03 00:00:10+0530 [LogBot,client] new itng
1971-01-03 00:00:10+0530 [LogBot,client] Starting factory <__main__.LogBotFactory instance at 0x2989cb0>
1971-01-03 00:00:10+0530 [LogBot,client] Unhandled Error
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 221, in _dataReceived
rval = self.protocol.dataReceived(data)
File "/usr/lib/python2.7/dist-packages/twisted/words/protocols/irc.py", line 2412, in dataReceived
basic.LineReceiver.dataReceived(self, data.replace('\r', ''))
File "/usr/lib/python2.7/dist-packages/twisted/protocols/basic.py", line 581, in dataReceived
why = self.lineReceived(line)
File "/usr/lib/python2.7/dist-packages/twisted/words/protocols/irc.py", line 2420, in lineReceived
self.handleCommand(command, prefix, params)
--- <exception caught here> ---
File "/usr/lib/python2.7/dist-packages/twisted/words/protocols/irc.py", line 2464, in handleCommand
method(prefix, params)
File "irc.py", line 141, in irc_NICK
reactor.run()
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1191, in run
self.startRunning(installSignalHandlers=installSignalHandlers)
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1171, in startRunning
ReactorBase.startRunning(self)
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 681, in startRunning
raise error.ReactorAlreadyRunning()
twisted.internet.error.ReactorAlreadyRunning:
1971-01-03 00:00:10+0530 [-] Main loop terminated.
I am new to Python and Twisted.
Please help, thanks.
print "new itng"
reactor.stop()
irc.IRCClient.connectionLost(self, "Day Change")
#earlier the LogBotFactory object is f
f1 = LogBotFactory("meeting-test", file_name_gen())
reactor.connectTCP("irc.freenode.net", 6667, f1)
reactor.run()
This problem is even easier to solve than you think. Delete the lines reactor.stop() and reactor.run() and you'll be all set. In other words, just leave the reactor running.
Separately, you also need to replace the line irc.IRCClient.connectionLost(self, "Day Change") with self.transport.loseConnection(). Calling connectionLost does not close the connection. It gets called when Twisted sees that the connection has been closed. If you call it yourself, your program might think the connection has been closed, but it won't really have been closed - and after this happens enough times you'll run out of resources and your program won't work anymore.
You should only stop the reactor when you're done using Twisted (usually right before your program exits).
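Applying both changes to the question's else branch gives roughly the following; everything else in irc_NICK stays as it was, and the reactor keeps running throughout:
else:
    print "new itng"
    # Actually close the connection instead of pretending it was lost:
    self.transport.loseConnection()
    # Reconnect with a factory pointed at the new day's log file.
    f1 = LogBotFactory("meeting-test", file_name_gen())
    reactor.connectTCP("irc.freenode.net", 6667, f1)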

Stopping Twisted from swallowing exceptions

Is there a way to stop the Twisted reactor from automatically swallowing exceptions (e.g. NameError)? I just want it to stop execution and give me a stack trace in the console.
There's even a FAQ question about it, but to say the least, it's not very helpful.
Currently, in every errback I do this:
def errback(value):
    import traceback
    trace = traceback.format_exc()
    # rest of the errback...
but that feels clunky, and there has to be a better way?
Update
In response to Jean-Paul's answer, I've tried running the following code (with Twisted 11.1 and 12.0):
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.internet import protocol, reactor

class Broken(protocol.Protocol):
    def connectionMade(self):
        buggy_user_code()

e = TCP4ClientEndpoint(reactor, "127.0.0.1", 22)
f = protocol.Factory()
f.protocol = Broken
e.connect(f)
reactor.run()
After running it, it just hangs there, so I have to Ctrl-C it:
> python2.7 tx-example.py
^CUnhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
Let's explore "swallow" a little bit. What does it mean to "swallow" an exception?
Here's the most direct and, I think, faithful interpretation:
try:
    user_code()
except:
    pass
Here any exceptions from the call to user code are caught and then discarded with no action taken. If you look through Twisted, I don't think you'll find this pattern anywhere. If you do, it's a terrible mistake and a bug, and you would be helping out the project by filing a bug pointing it out.
What else might lead to "swallowing exceptions"? One possibility is that the exception is coming from application code that isn't supposed to be raising exceptions at all. This is typically dealt with in Twisted by logging the exception and then moving on, perhaps after disconnecting the application code from whatever event source it was connected to. Consider this buggy application:
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.internet import protocol, reactor

class Broken(protocol.Protocol):
    def connectionMade(self):
        buggy_user_code()

e = TCP4ClientEndpoint(reactor, "127.0.0.1", 22)
f = protocol.Factory()
f.protocol = Broken
e.connect(f)
reactor.run()
When run (if you have a server running on localhost:22, so the connection succeeds and connectionMade actually gets called), the output produced is:
Unhandled Error
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/python/log.py", line 84, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/usr/lib/python2.7/dist-packages/twisted/python/log.py", line 69, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/usr/lib/python2.7/dist-packages/twisted/internet/selectreactor.py", line 146, in _doReadOrWrite
why = getattr(selectable, method)()
File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 674, in doConnect
self._connectDone()
File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 681, in _connectDone
self.protocol.makeConnection(self)
File "/usr/lib/python2.7/dist-packages/twisted/internet/protocol.py", line 461, in makeConnection
self.connectionMade()
File "/usr/lib/python2.7/dist-packages/twisted/internet/endpoints.py", line 64, in connectionMade
self._wrappedProtocol.makeConnection(self.transport)
File "/usr/lib/python2.7/dist-packages/twisted/internet/protocol.py", line 461, in makeConnection
self.connectionMade()
File "proderr.py", line 6, in connectionMade
buggy_user_code()
exceptions.NameError: global name 'buggy_user_code' is not defined
This error clearly isn't swallowed. Even though the logging system hasn't been initialized in any particular way by this application, the logged error still shows up. If the logging system had been initialized in a way that caused errors to go elsewhere - say some log file, or /dev/null - then the error might not be as apparent. You would have to go out of your way to cause this to happen though, and presumably if you direct your logging system at /dev/null then you won't be surprised if you don't see any errors logged.
In general there is no way to change this behavior in Twisted. Each exception handler is implemented separately, at the call site where application code is invoked, and each one is implemented separately to do the same thing - log the error.
One more case worth inspecting is how exceptions interact with the Deferred class. Since you mentioned errbacks I'm guessing this is the case that's biting you.
A Deferred can have a success result or a failure result. When it has any result at all and more callbacks or errbacks, it will try to pass the result to either the next callback or errback. The result of the Deferred then becomes the result of the call to one of those functions. As soon as the Deferred has gone through all of its callbacks and errbacks, it holds on to its result in case more callbacks or errbacks are added to it.
If the Deferred ends up with a failure result and no more errbacks, then it just sits on that failure. If it gets garbage collected before an errback which handles that failure is added to it, then it will log the exception. This is why you should always have errbacks on your Deferreds, at least so that you can log unexpected exceptions in a timely manner (rather than being subject to the whims of the garbage collector).
If we revisit the previous example and consider the behavior when there is no server listening on localhost:22 (or change the example to connect to a different address, where no server is listening), then what we get is exactly a Deferred with a failure result and no errback to handle it.
e.connect(f)
This call returns a Deferred, but the calling code just discards it. Hence, it has no callbacks or errbacks. When it gets its failure result, there's no code to handle it. The error is only logged when the Deferred is garbage collected, which happens at an unpredictable time. Often, particularly for very simple examples, the garbage collection won't happen until you try to shut down the program (eg via Control-C). The result is something like this:
$ python someprog.py
... wait ...
... wait ...
... wait ...
<Control C>
Unhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
If you've accidentally written a large program and fallen into this trap somewhere, but you're not exactly sure where, then twisted.internet.defer.setDebugging might be helpful. If the example is changed to use it to enable Deferred debugging:
from twisted.internet.defer import setDebugging
setDebugging(True)
Then the output is somewhat more informative:
exarkun@top:/tmp$ python proderr.py
... wait ...
... wait ...
... wait ...
<Control C>
Unhandled error in Deferred:
(debug: C: Deferred was created:
C: File "proderr.py", line 15, in <module>
C: e.connect(f)
C: File "/usr/lib/python2.7/dist-packages/twisted/internet/endpoints.py", line 240, in connect
C: wf = _WrappingFactory(protocolFactory, _canceller)
C: File "/usr/lib/python2.7/dist-packages/twisted/internet/endpoints.py", line 121, in __init__
C: self._onConnection = defer.Deferred(canceller=canceller)
I: First Invoker was:
I: File "proderr.py", line 16, in <module>
I: reactor.run()
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1162, in run
I: self.mainLoop()
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1174, in mainLoop
I: self.doIteration(t)
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/selectreactor.py", line 140, in doSelect
I: _logrun(selectable, _drdw, selectable, method, dict)
I: File "/usr/lib/python2.7/dist-packages/twisted/python/log.py", line 84, in callWithLogger
I: return callWithContext({"system": lp}, func, *args, **kw)
I: File "/usr/lib/python2.7/dist-packages/twisted/python/log.py", line 69, in callWithContext
I: return context.call({ILogContext: newCtx}, func, *args, **kw)
I: File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
I: return self.currentContext().callWithContext(ctx, func, *args, **kw)
I: File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
I: return func(*args,**kw)
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/selectreactor.py", line 146, in _doReadOrWrite
I: why = getattr(selectable, method)()
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 638, in doConnect
I: self.failIfNotConnected(error.getConnectError((err, strerror(err))))
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 592, in failIfNotConnected
I: self.connector.connectionFailed(failure.Failure(err))
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1048, in connectionFailed
I: self.factory.clientConnectionFailed(self, reason)
I: File "/usr/lib/python2.7/dist-packages/twisted/internet/endpoints.py", line 144, in clientConnectionFailed
I: self._onConnection.errback(reason)
)
Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
Notice near the top, where the e.connect(f) line is given as the origin of this Deferred - telling you a likely place where you should be adding an errback.
However, the code should have been written to add an errback to this Deferred in the first place, at least to log the error.
There are shorter (and more correct) ways to display exceptions than the one you've given, though. For example, consider:
d = e.connect(f)

def errback(reason):
    reason.printTraceback()

d.addErrback(errback)
Or, even more succinctly:
from twisted.python.log import err
d = e.connect(f)
d.addErrback(err, "Problem fetching the foo from the bar")
This error handling behavior is somewhat fundamental to the idea of Deferred and so also isn't very likely to change.
If you have a Deferred, errors from which really are fatal and must stop your application, then you can define a suitable errback and attach it to that Deferred:
d = e.connect(f)

def fatalError(reason):
    err(reason, "Absolutely needed the foo, could not get it")
    reactor.stop()

d.addErrback(fatalError)
What you could do as a workaround is register a log observer and stop the reactor whenever you see a critical error! This is a twisted (pun intended) approach, but luckily all "Unhandled error" messages are logged with LogLevel.critical.
from twisted.internet import reactor
from twisted.logger import globalLogPublisher
from twisted.logger._levels import LogLevel

def analyze(event):
    if event.get("log_level") == LogLevel.critical:
        print "Stopping for: ", event
        reactor.stop()

globalLogPublisher.addObserver(analyze)
