Error I'm receiving in Python (Tkinter)

I'm currently trying to construct a program with multiple windows (Main screen -> a Landlord / Customer section -> calendar/calculator etc.)
I am very much a beginner at this moment in time, and I keep coming across two errors:
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\HP\AppData\Local\Programs\Python\Python35-32\lib\idlelib\run.py", line 119, in main
    seq, request = rpc.request_queue.get(block=True, timeout=0.05)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python35-32\lib\queue.py", line 172, in get
    raise Empty
queue.Empty
Also, another query: an error I receive a lot is "self is not defined" - how do I define self?
EDIT
My code is very much dysfunctional - I think looking at it will probably give you a heart attack. When running the code I want there to be one screen at a time; currently three come up at the start. I'm assuming this is due to me using the wrong inheritance or something.
It was too big to place in here, so you can easily view the code here:
http://textuploader.com/525p5
To be honest, any help will really be appreciated. This is my first time doing something complex in Python, such as a working program with features like a calendar, calculator etc.
Cheers
Ross

File "C:\Users\HP\AppData\Local\Programs\Python\Python35-32\lib\idlelib\run.py", line 119, in main
    seq, request = rpc.request_queue.get(block=True, timeout=0.05)
File "C:\Users\HP\AppData\Local\Programs\Python\Python35-32\lib\queue.py", line 172, in get
    raise Empty
queue.Empty
These two lines:
    request = rpc.request_queue.get(block=True, timeout=0.05)
    raise Empty
suggest you are trying to get data from an empty queue, and an exception is raised. The proper way to deal with this is to put this code in a try..except block, catch the exception, and deal with it accordingly.
Here is a decent tutorial on this matter.
try:
    ...
    request = rpc.request_queue.get(block=True, timeout=0.05)
    ...
except Exception as e:
    # this will catch all exceptions derived from the Exception class
    ...
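For a self-contained illustration, here is a sketch that catches queue.Empty specifically (the same stdlib queue module that appears in the traceback; the queue q and the fallback value are just for the demo):

```python
import queue

q = queue.Queue()  # nothing is ever put into it
try:
    item = q.get(block=True, timeout=0.05)
except queue.Empty:
    item = None  # nothing arrived in time; fall back to a default

print(item)  # → None
```

Catching queue.Empty rather than a bare Exception keeps unrelated bugs from being silently swallowed.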
About your second query, I strongly advise you to post a new question instead of bundling them up.
But I'll try and give some advice: self is used to address an instance of a class; it's the current object in use. You can't 'define' self - you use it in a class implementation to tell Python that your method is to be used with a specific instance, not the global scope.
class demo:
    def __init__(self):
        self.a = 5

    def foo(self):
        self.a = 6

def global_foo():
    print('global_foo')
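To make the distinction concrete, here is the same demo restated with a short usage sketch (the class and names are purely illustrative):

```python
class Demo:
    def __init__(self):
        self.a = 5      # self is the instance being constructed

    def foo(self):
        self.a = 6      # self is the instance foo() was called on

def global_foo():
    print('global_foo')  # a plain function: no instance, no self

d = Demo()    # Python passes d as self automatically
print(d.a)    # → 5
d.foo()       # equivalent to Demo.foo(d)
print(d.a)    # → 6
```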


How to catch when *any* error occurs in a whole script? [duplicate]

How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar() # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like the exception to be thrown back up the call chain until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime will get it and die with the exception error message and a full traceback. IOW, you don't have to explicitly catch and reraise your exception everywhere - which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control flow tool allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up in the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
Then, you may (or not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log (using the logging module) all errors, and possibly free resources, do some cleanup etc. before your process dies.
As a side note, you may also want to learn and use proper string formatting, and - if performance is an issue at least - avoid duplicated constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
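A sketch of doing that caching by hand: the AnnotationUtil stub below is hypothetical, only there to make the example self-contained (the real one comes from the asker's code), and the rewrite assumes the two checks are side-effect free:

```python
class AnnotationUtil:
    # hypothetical stub standing in for the real AnnotationUtil
    @staticmethod
    def is_embeddable_table(table):
        return 'embeddable' in table

    @staticmethod
    def is_secondary_table(table):
        return 'secondary' in table

def classify(table):
    # call each predicate once, then reuse the results
    is_embeddable = AnnotationUtil.is_embeddable_table(table)
    is_secondary = AnnotationUtil.is_secondary_table(table)
    if is_embeddable and is_secondary:
        return 'both'
    elif is_embeddable:
        return 'embeddable'
    elif is_secondary:
        return 'secondary'
    return 'plain'

print(classify('embeddable table'))  # → embeddable
```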
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (I can't tell what happens higher up the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently swallow any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
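Restating the logged_get sketch together with such an outer caller (the config dict and the False fallback are illustrative choices):

```python
def logged_get(map: dict, key: str):
    result = map[key]  # may raise KeyError; nothing to clean up here
    print(map, '[%s]' % key, '=>', result)
    return result

config = {'debug': True}

try:
    value = logged_get(config, 'verbose')
except KeyError:
    value = False  # the outer caller decides what a missing key means

print(value)  # → False
```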
So, just call self.create_collection_embeddable_class_stub the way you do.
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use-case actually matches their meaning. This allows you to handle errors precisely if needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace

Can an exception handler write to the stream opened in an enclosing with-block?

I would like to do something roughly like this [1]:
with open(somepath, 'w') as writer:
    def handler(*args, **kwargs):
        print(format_exception_info(*args, **kwargs), file=writer)
        sys.__excepthook__(*args, **kwargs)
    sys.excepthook = handler
    return do_stuff()
In other words, if an exception gets raised during the execution of do_stuff, I would like the program to write some information about the exception to the write-stream writer.
The above won't work as written; as far as I can tell, if an exception is raised during the execution of do_stuff, the __exit__ method associated with the with-block will be executed (thereby closing writer) before sys.excepthook (i.e. handler) gets invoked.
Is there some way to modify this code (still preserving the with-block) so that writer is still open when handler runs?
NB: I am interested in the answer to this question for both Python 3.x and Python 2.7.x.
[1] The variable somepath and the functions format_exception_info and do_stuff mentioned in this snippet are supposed to be defined elsewhere. I hope that their names are suggestive enough to describe what they stand for.

Python shorthand exception handling

There's a certain problem I've been having with exception handling in Python. There have been many situations where there is an area of code where I want all exceptions to be ignored. Say I have 100 lines of code where I want this to happen.
This is what most would think would be the solution:
try:
    line 1
    line 2
    line 3
    ...
    line 99
    line 100
except:
    pass
This actually does not work in my situation (and many other situations). Assume line 3 has an exception. Once the exception is thrown, it goes straight to "pass", and skips lines 4-100. The only solution I've been able to come up with is this:
try:
    line 1
except:
    pass
try:
    line 2
except:
    pass
try:
    line 3
except:
    pass
...
try:
    line 99
except:
    pass
try:
    line 100
except:
    pass
But, as is obvious, this is extremely ugly, sloppy, and takes absolutely forever. How can I do the above code in a shorter, cleaner way? Bonus points if you give a method that allows "pass" to be replaced with other code.
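For what it's worth, the standard library has a per-statement shorthand, contextlib.suppress (Python 3.4+), which keeps one line's failure from skipping the rest; note it only replaces the bare try/except-pass pattern, not arbitrary replacement code:

```python
from contextlib import suppress

results = []
steps = [
    lambda: results.append(1),
    lambda: 1 / 0,               # raises ZeroDivisionError, suppressed
    lambda: results.append(3),
]
for step in steps:
    with suppress(Exception):
        step()

print(results)  # → [1, 3]
```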
As other answers have already stated, you should consider refactoring your code.
That said, I couldn't resist hacking something together to be able to execute your function without failing and breaking out in case an exception occurs.
import ast, _ast

def fail():
    print "Hello, World!"
    raise Exception
    x = [4, 5]
    print x

if __name__ == '__main__':
    with open(__file__, 'r') as source:
        tree = ast.parse(source.read(), __file__)
    for node in ast.iter_child_nodes(tree):
        if isinstance(node, _ast.FunctionDef):
            _locals = {}
            for line in node.body:
                mod = ast.Module()
                mod.body = [line]
                try:
                    exec(compile(mod, filename='<ast>', mode='exec'), _locals, globals())
                except:
                    import traceback
                    traceback.print_exc()
The code executes any function it finds in global scope, and prevents it from exiting in the event it fails. It does so by iterating over the AST of the file, and creating a new module to execute for each line of the function.
As you would expect, the output of the program is:
Hello, World!
Traceback (most recent call last):
  File "kek.py", line 18, in <module>
    exec(compile(mod, filename='<ast>', mode='exec'), _locals, globals())
  File "<ast>", line 3, in <module>
Exception
[4, 5]
I should emphasize that using this in any production code is a bad idea. But for the sake of argument, something like this would work. You could even wrap it in a nice decorator, though that wouldn't do anything to the fact that it's a bad idea.
Happy refactoring!
You could try breaking the code into smaller chunks so that it can properly handle errors instead of needing to abandon all progress and loop back through.
Another solution that can be used in addition to that is making checks: if you set flags for your code to check whether it can proceed or needs to repeat the last step, you can prevent extra iterations.
Example:
finished = False
while not finished:
    try:
        a = input('Enter data')
    except Exception:
        continue  # input failed; ask again
    if a:
        finished = True

Python callback handlers- better error messages?

I'm using the stomp.py library to get JSON messages from over a network. I've adapted the simple example they give here which uses a callback to provide message handling.
But I made a simple error when I modified that callback- for example, I called json.load() instead of json.loads() when trying to parse my JSON string.
class MyListener(object):
    def on_message(self, headers, message):
        data = json.load(message)  ## Should be .loads() for a string!
Usually that would be fine - it would raise an AttributeError and I'd see a traceback. But in this case Python prints:
No handlers could be found for logger "stomp.py"
... no traceback, no crash out, and that was all. Very confusing to debug and find out what I did wrong! I was expecting at least the normal traceback along the lines of:
Traceback (most recent call last):
File "./ncl/stomp.py-3.1.3/stompJSONParser.py", line 32, in <module>
[etc etc ...]
... rather than it borking the whole listener. I guess it's because that happens on a different thread?
Now that I've worked out it's like a kind of runtime error in the callback I at least know I've done something wrong when it errors- but if it just spews that error for every mistake I make rather than giving me some kind of useful message, it makes it a bit difficult to code.
What causes this? And what could I do to get the regular, more verbose traceback back?
Looks like it's expecting a log handler from the Python logging module to be set up in order to capture output. There are lots of possible configurations for logging. But for simple debugging I would use something along the lines of
import logging
logging.basicConfig(level=logging.DEBUG)
That should capture all output of log level DEBUG and above. Read the logging docs for more info :)
Instructions for getting a logger (which is what is being asked for directly) can be found here, but the verbose traceback is suppressed.
If you take a look at the code which is calling on_message, you'll notice that that block is in a try block without an except.
Line 703 is where the method is actually called:
notify_func = getattr(listener, 'on_%s' % frame_type)
notify_func(headers, body)
which is in method __notify (declared on line 639):
def __notify(self, frame_type, headers=None, body=None):
These are the times when it is not in a try block:
line 331, for the connected event
line 426, for the send event
line 743, for disconnected
But the time when message is called is line 727:
# line 719
try:
    # ....
    self.__notify(frame_type, headers, body)
In the end I grabbed the logger by name and set a StreamHandler on it:
import logging
log = logging.getLogger('stomp.py')
strh = logging.StreamHandler()
strh.setLevel(logging.ERROR)
log.addHandler(strh)

Twisted sometimes throws (seemingly incomplete) 'maximum recursion depth exceeded' RuntimeError

Because the Twisted getPage function doesn't give me access to headers, I had to write my own getPageWithHeaders function.
def getPageWithHeaders(url, contextFactory=None, *args, **kwargs):
    try:
        return _makeGetterFactory(url, HTTPClientFactory,
                                  contextFactory=contextFactory,
                                  *args, **kwargs)
    except:
        traceback.print_exc()
This is exactly the same as the normal getPage function, except that I added the try/except block and return the factory object instead of returning factory.deferred.
For some reason, I sometimes get a maximum recursion depth exceeded error here. It happens consistently a few times out of 700, usually on different sites each time. Can anyone shed any light on this? I'm not clear why or how this could be happening, and the Twisted codebase is large enough that I don't even know where to look.
EDIT: Here's the traceback I get, which seems bizarrely incomplete:
Traceback (most recent call last):
File "C:\keep-alive\utility\background.py", line 70, in getPageWithHeaders
factory = _makeGetterFactory(url, HTTPClientFactory, timeout=60 , contextFactory=context, *args, **kwargs)
File "c:\Python26\lib\site-packages\twisted\web\client.py", line 449, in _makeGetterFactory
factory = factoryFactory(url, *args, **kwargs)
File "c:\Python26\lib\site-packages\twisted\web\client.py", line 248, in __init__
self.headers = InsensitiveDict(headers)
RuntimeError: maximum recursion depth exceeded
This is the entire traceback, which clearly isn't long enough to have exceeded our max recursion depth. Is there something else I need to do in order to get the full stack? I've never had this problem before; typically when I do something like
def f(): return f()

try:
    f()
except:
    traceback.print_exc()
then I get the kind of "maximum recursion depth exceeded" stack that you'd expect, with a ton of references to f()
The specific traceback that you're looking at is a bit mystifying. You could try traceback.print_stack rather than traceback.print_exc to get a look at the entire stack above the problematic code, rather than just the stack going back to where the exception is caught.
Without seeing more of your traceback I can't be certain, but you may be running into the problem where Deferreds will raise a recursion limit exception if you chain too many of them together.
If you turn on Deferred debugging (from twisted.internet.defer import setDebugging; setDebugging(True)) you may get more useful tracebacks in some cases, but please be aware that this may also slow down your server quite a bit.
You should look at the traceback you're getting together with the exception -- that will tell you what function(s) is/are recursing too deeply, "below" _makeGetterFactory. Most likely you'll find that your own getPageWithHeaders is involved in the recursion, exactly because instead of properly returning a deferred it tries to return a factory that's not ready yet. What happens if you do go back to returning the deferred?
The URL opener is likely following an un-ending series of 301 or 302 redirects.
