Suppose I have a third-party library that I am not allowed to modify. Suppose it is called Fabric, but that matters only to explain the symptoms.
The script processes a list of existing files and downloads them using fabric.operations.get, which in turn calls fabric.sftp.SFTP.get. Using fabric.sftp.SFTP.get produced warnings like Warning: get() encountered an exception while downloading ... Underlying exception: Permission denied. I noticed the implementation was old, and swapped the implementation of that function for one that uses sudo to work around the Permission denied:
import fabric.sftp

def sftpget(....same args as in current implementation....):
    ...here I pasted fabric.sftp.SFTP.get from the Internet

# swapping the implementation
fabric.sftp.SFTP.get = sftpget
This worked in 99.999% of cases, but getting three particular files still results in the same error. I checked whether some other code path could be producing it, but the only place where that string is printed is the except: clause in fabric.operations.get (I grepped /usr/lib/python2.6/site-packages/ for get() encountered an exception while downloading). I tried to swap that function for an implementation that prints the stack trace of the exception, but I still get only the Permission denied message, and no stack trace.
It looks like the function does not get swapped in this case.
What could be the reasons that some invocations use the original fabric.operations.get (since I don't see the stack traces printed), and possibly the unpatched fabric.sftp.SFTP.get (since the sudo fix seems not to be used; I checked manually that those operations succeed on those files)?
During import, before you replace the get function, some other piece of code might save a reference to it, for example:
class a():
    def __init__(self, getter):
        self.getter = getter

b = a(sftp.SFTP.get)
Class a would then still hold a reference to the old code, despite your having replaced it in the namespace.
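A minimal, self-contained illustration of this effect (the class names here are made up, not Fabric's): a reference captured before the monkeypatch keeps pointing at the old function, while lookups through the class see the new one.

```python
class Transfer:
    def get(self):
        return "original"

class Worker:
    def __init__(self, getter):
        self.getter = getter   # reference saved at construction time

w = Worker(Transfer.get)       # captured BEFORE the patch

def patched_get(self):
    return "patched"

Transfer.get = patched_get     # the monkeypatch

t = Transfer()
print(t.get())       # looked up on the class now -> "patched"
print(w.getter(t))   # saved reference -> still "original"
```

This is why patching early (before anything else imports or instantiates code that captures the function) matters.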
Related
I am writing an interpreter for my simple programming language in Python. When there is an error during interpretation, a Python exception is thrown. The problem is that the stack trace is really ugly: it contains many recursive calls, because I navigate the program's structure while interpreting it, and from the stack trace you cannot really see where in my program's structure the code was when it failed.
I know that I could capture all exceptions myself and re-raise a new exception adding information about where it happened, but because the interpretation is recursive, I would have to do such re-raising again and again.
I wonder if there isn't an easier way: for example, next to the path, line numbers, and function names in the interpreter's code, I could also print information about where in the interpreted program's code that function is at that moment in the stack.
You can write a sys.excepthook that prints whatever traceback you want. I regularly use such a hook that uses the traceback module to print a normal traceback but with added lines that give the values of interesting local variables (via f_locals introspection).
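A sketch of such a hook (the helper name format_verbose is mine, not from any library): it produces the normal traceback text via the traceback module, then appends the local variables of each frame, read through f_locals.

```python
import sys
import traceback

def format_verbose(exc_type, exc_value, tb):
    """Normal traceback text plus the local variables of each frame."""
    lines = traceback.format_exception(exc_type, exc_value, tb)
    while tb is not None:
        frame = tb.tb_frame
        lines.append("  locals in %s:\n" % frame.f_code.co_name)
        for name, value in frame.f_locals.items():
            lines.append("    %s = %r\n" % (name, value))
        tb = tb.tb_next
    return "".join(lines)

def verbose_excepthook(exc_type, exc_value, tb):
    sys.stderr.write(format_verbose(exc_type, exc_value, tb))

# From now on, any uncaught exception is printed with frame locals:
sys.excepthook = verbose_excepthook
```

An interpreter could extend this idea by storing the interpreted program's current position in a local variable of each recursive call, so it shows up next to each frame.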
This is a novice question.
Consider the below code block :
try:
    import os
except ImportError as error:
    print "Unable to import builtin module os"
    raise error
Do we need to add an exception block while importing Python built-in modules (like above)? What would cause importing a built-in module to fail?
Can someone point to the Python documentation explaining this?
Short answer, no.
Longer answer: it doesn't help your program much to catch exceptions that you can't do anything about. Some file is missing -- you can report it, maybe ask the user again, or perhaps it is known that this sometimes happens and you can give a clear error message explaining why. Some API call fails -- maybe it can be retried, or someone needs to receive a message that a service is down.
But something as basic as this... First, it never happens (I've never seen import os fail in twenty years). Second, if that fails, there's nothing your program can usefully do (if this fails, chances are print also fails). And also, the library documentation doesn't say that this is something that can happen.
You have to rely on the basic system working. Only catch exceptions when it is known that they could be raised and you have a way to deal with them.
There are a couple of reasons that the code in the question is pretty much pointless.
First, it does not add any new information: the error is simply re-raised, and the printout contains nothing that isn't already in the error and stack trace.
Second, as @RemcoGerlich's answer points out, you are asking specifically about builtin modules. It would make sense to react to the absence of an optional module by either finding a replacement or turning off program features, but there's nothing much you can do in response to your platform being broken.
Failure of builtin imports is never addressed in the documentation explicitly to the best of my knowledge. Builtin module imports can fail for any of the reasons a normal import can fail. Builtins are a collection of Python files and C-extensions (in CPython at least). Modifying, replacing, deleting any of these files can lead to anything from import failures to the interpreter not starting up at all. Setting the wrong file permissions can have a similar effect.
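To make the contrast concrete, here is the one case where catching ImportError is genuinely useful: an optional third-party dependency with a stdlib fallback (the choice of ujson here is just an example of the pattern).

```python
# Optional dependency with a fallback: this is the sensible use of
# try/except around an import, unlike guarding a builtin like os.
try:
    import ujson as json   # faster third-party parser, if installed
except ImportError:
    import json            # stdlib fallback, always present

print(json.loads('{"ok": true}')["ok"])
```

Either branch leaves a working json name bound, so the rest of the program doesn't care which one was imported.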
I have a bug in my code, but finding the exact cause of it is difficult because of how theano works.
Following the tips in the exception details, I set theano.config.optimizer='None' and theano.config.exception_verbosity='high', but that doesn't tell me enough.
In my case, for example, there is a problem with the dot product between two tensors. The stack trace leads me through a lot of code to a particular function which seems to contain, somewhere, the problematic call to theano.tensor.dot, but I can't find exactly where that part of the code is; and since I'm implementing things through Keras, it gets even more complicated and tangled up.
Is there any way to get more details on an apply node? I've tried using StepMode, as it seems to be attached to the nodes, but if there is a way of making that tool print the exact lines from which the code in the node is executed, I don't know what it is. I tried using that to print a stacktrace when the problem occurs, but it prints just about the same stacktrace as the exception.
If you want to find the spots in your code that use theano.tensor.dot you can monkeypatch it with wrapper code that uses traceback.print_stack:
import traceback
import theano.tensor

original_dot = theano.tensor.dot

def debug_wrapper(*args, **kw):
    traceback.print_stack()  # show who is calling dot
    return original_dot(*args, **kw)

theano.tensor.dot = debug_wrapper
This way, any time theano.tensor.dot is called (after it is patched) it will show you the stack, like the one in a traceback message, and still do its job. Note that I am not very familiar with theano, so this is a general Python debugging technique; there may well be theano-specific ways to do something similar.
You should try using theano test_values. That way the exception will be raised exactly on the line where the error occurs and not after the compilation of the graph.
You need to set the theano.config.compute_test_value flag to 'raise', so that you get an error if any input tensor lacks a test_value; this ensures the test computation is propagated all the way to the point where your error occurs.
I am utterly confused by the unittest documentation: TestResult, TestLoader, testing framework, etc.
I just want to tweak the way the final results of a test run are printed out.
I have a specific thing I need to do: I am in fact using Jython, so when a bit of code raises an ExecutionException I need to dig down into the cause of this exception (ExecutionException.getCause()) to find the "real" exception which occurred, where it occurred, etc. At the moment I am just getting the location of the Future.get() which raises such an exception, and the message from the original exception (with no location). Useful, but could be improved.
Shouldn't it (in principle) be really simple to find out the object responsible for outputting the results of the testing and override some method like "print_result"...
There is another question here: Overriding Python Unit Test module for custom output? [code updated]
... this has no answers, and although the questioner said nine months ago that he had "solved" it, he hasn't provided an answer. In any event it looks horribly complicated for what is a not-unreasonable wish to tweak things mildly... isn't there a simple way to do this?
Later, in answer to MartinBroadhurst's question about documenting during the run:
In fact I could laboriously surround all sorts of bits of code with try...except followed by a documentation function... but if I don't do that, any unexpected exceptions obviously escape, ultimately being caught by the testing framework.
In fact I have a decorator which I've made, @vigil(is_EDT) (boolean param), which I use to decorate most methods and functions; its primary purpose is to check that the decorated method is being called in the "right kind of thread" (i.e. either the EDT or a non-EDT thread). This could be extended to trap any kind of exception, which is something I did previously as a solution to this problem of mine. It printed out the exception details there and then, which was fine: the output was obviously not printed at the same time as the results of the unittest run, but it was useful.
But in fact I shouldn't need to resort to my vigil function in this "make-and-mend" way! It really should be possible to tweak the unittest classes to override the way an exception is handled! Ultimately, unless some unittest guru can answer this question of mine, I'm going to have to examine the unittest source code and find out a way that way.
In a previous question of mine I asked about what appear to be a couple of non-functioning methods of unittest.TestResult, and it regretfully appears this is not implemented as the Python documentation claims. Similarly, a little additional experimentation just now suggests more misdocumentation: on the Python documentation page for unittest, TestResult.startTest(), stopTest(), etc. appear to be incorrectly documented: the "test" parameter should not be there (the convention in this documentation appears to be to omit the self param, and each of these methods takes only self).
In short, the whole unittest module is surprisingly unwieldy and dodgy... I'm surprised not least because I would have thought others in more influential positions than me would have got things changed...
E.g. if I am trying to open a file, can I not simply check os.path.exists(myfile) instead of using try/except? I think the answer to why I should not rely on os.path.exists(myfile) is that there may be a number of other reasons why the file may fail to open.
Is that the logic behind why error handling using try/except should be used?
Is there a general guideline on when to use exceptions in Python?
Race conditions.
In the time between checking whether a file exists and doing an operation on it, that file might have been deleted, edited, renamed, etc...
On top of that, an exception will give you an OS error code that tells you more precisely why the operation failed.
Finally, it's considered Pythonic to ask for forgiveness, rather than ask for permission.
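A small sketch of the "ask forgiveness" style described above (the function name read_config is just illustrative): instead of an os.path.exists() check that can be invalidated by a race, attempt the open and handle the specific failures.

```python
# EAFP: attempt the operation and handle failure, rather than
# checking os.path.exists() first (which can race with deletion).
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None   # the case an exists() check tries to predict
    except PermissionError:
        return None   # exists() would say True, yet open() still fails

print(read_config("surely_missing_12345.cfg"))
```

The PermissionError branch is exactly the kind of failure an existence check cannot anticipate.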
Generally you use try/except when you handle things that are outside of the parameters that you can influence.
Within your script you can check variables for type, lists for length, etc., and you can be sure the result is sufficient, since you are the only one handling these objects. As soon as you handle files in the file system, connect to remote hosts, etc., however, you can neither influence nor check all parameters any more, nor can you be sure that the result of a check stays valid.
As you said,
the file might exist but you don't have access rights
you might be able to ping a host address but the connection is declined
There are too many factors that could go wrong to check them all separately; and even if you do, they might still change before you actually perform your command.
With try/except you can generally catch every exception and handle the most important errors individually. You make sure the error is handled even if a check succeeds at first but the operation fails after you start running your commands.
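A sketch of that structure (the helper describe_open is made up for illustration): the most important errors get individual handlers, with a broad OSError handler last as the catch-all.

```python
# Specific, important failures first; a broad handler as the fallback.
def describe_open(path):
    try:
        with open(path) as f:
            return "ok: %d bytes" % len(f.read())
    except FileNotFoundError:
        return "missing"
    except PermissionError:
        return "no access rights"
    except OSError as e:
        return "other OS error: %s" % e

print(describe_open("no_such_file_xyz_987.txt"))
```

Ordering matters: Python takes the first matching except clause, so the specific subclasses must come before the general OSError.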