Ignore TypeError in Python

I am working on some soon-to-be obsolete scientific software written in Python (Enthought Python Distribution, Python 2.7.1, IPython 0.10.1). We need to check whether the older results are correct, but with the massive amount of data we have, the procedure normally driven through the GUI has to be made to work in non-GUI mode as well. The important piece of code is the one that saves the data:
def _save_button_fired(self):
    self.savedata()
In the GUI, when we click the button, the file is saved correctly. In non-GUI mode, when we do the following:
m.savedata()
the file is created, but a number of errors appear starting with:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
etc. etc. When I press CTRL+D and select Y to kill Python, the file is, quite surprisingly, written correctly. This is sufficient for our needs. However, is there any way to ignore the error and just proceed with the rest of the code? I looked at the forum for solutions, but none of them seems to fit my situation. Also, I am not well versed in Python, so I would be grateful for a working solution. With many thanks.

You could wrap it in a try/except and simply pass on the error :)
try:
    self.savedata()
except TypeError:
    pass
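If you want to silence the error but still keep a record of it, a minimal variation (assuming the non-GUI object is called m, as in the question) logs the exception instead of discarding it:

import logging

try:
    m.savedata()
except TypeError:
    # The file is still written (as observed in the question); this just
    # records the ignored error so it can be reviewed later.
    logging.exception("Ignored TypeError while saving data")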

An alternative to the try/pass solution is contextlib.suppress.
import contextlib

with contextlib.suppress(TypeError):
    self.savedata()
Why one over the other?
The context manager slightly shortens the code and significantly clarifies the author's intention to ignore the specific errors. The standard library feature was introduced following a discussion, where the consensus was that: "A key benefit here is in the priming effect for readers... The with statement form makes it clear before you start reading the code that certain exceptions won't propagate."
Source: sourcery.ai
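Note that contextlib.suppress was only added in Python 3.4, so it is not available on the Python 2.7 installation described in the question. A minimal sketch of an equivalent context manager for Python 2 (reusing m.savedata() from the question):

from contextlib import contextmanager

@contextmanager
def suppress(*exceptions):
    # An exception raised in the with-block propagates into the generator
    # at the yield point, where it is caught and discarded.
    try:
        yield
    except exceptions:
        pass

with suppress(TypeError):
    m.savedata()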

Related

Is there a way to get the entire C stack trace of a Python code block from Python?

I was trying to figure out the exact control flow and function calls, to better guide me while working in CPython on a fairly large and complicated codebase. It feels like this should be easily doable using pdb, but I can't seem to figure it out. Using pdb.set_trace() only reveals the Python files called during the execution. Is this the right way of going about it? Since a fair bit of codegen and dynamic dispatch of function calls is used, I can't precisely find the C++ method definitions of the functions called from the Python code.
It seems like this should be straightforward, but most SO threads don't focus precisely on this, just on the code flow sequence.
I was wondering if pdb.pm() would give me what I need, but it doesn't work unless an exception has occurred.
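For the Python-level portion of this (which calls happen, and in what order), a sketch using sys.settrace can log every call as it happens. This is an illustration only; it will not show the C/C++ frames the question is ultimately after:

import sys

def tracer(frame, event, arg):
    # Print every Python-level function call with its source location.
    # C/C++ frames from extension modules never pass through this hook.
    if event == "call":
        code = frame.f_code
        print("call %s (%s:%d)" % (code.co_name, code.co_filename, frame.f_lineno))
    return tracer

sys.settrace(tracer)
# ... run the code block you want to inspect ...
sys.settrace(None)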

Untangling the cause of an error in Theano

I have a bug in my code, but finding its exact cause is difficult because of how Theano works.
Following the tips in the exception details, I set theano.config.optimizer='None' and theano.config.exception_verbosity='high', but that doesn't tell me enough.
In my case, for example, there is a problem with the dot product between two tensors. The stacktrace leads me through a lot and to a particular function which seems to contain, somewhere, the problematic call to theano.tensor.dot, but I can't find where exactly that part of the code is, and since I'm trying to implement things through Keras, it gets even more complicated and tangled up.
Is there any way to get more details on an apply node? I've tried using StepMode, as it seems to be attached to the nodes, but if there is a way of making that tool print the exact lines from which the code in the node is executed, I don't know what it is. I tried using it to print a stacktrace when the problem occurs, but it prints just about the same stacktrace as the exception.
If you want to find the spots in your code that use theano.tensor.dot, you can monkeypatch it with wrapper code that uses traceback.print_stack:
import traceback
import theano

original_dot = theano.tensor.dot

def debug_wrapper(*args, **kw):
    traceback.print_stack()
    return original_dot(*args, **kw)

theano.tensor.dot = debug_wrapper
This way, any time theano.tensor.dot is called (after it is patched) it will show you the stack, like the one in a traceback message, and still do its job. Note that I am not very familiar with Theano, so this is a general Python debugging solution; there may well be Theano-specific ways to do something similar.
You should try using Theano test values. That way the exception will be raised exactly on the line where the error occurs, and not after the compilation of the graph.
You need to set the theano.config.compute_test_value flag to 'raise' so that you get an error if there is an input tensor without a test value; this makes sure that the test computation is propagated all the way to the point where your error occurs.
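A minimal sketch of the test-value approach (variable names and shapes here are arbitrary, chosen to provoke a shape mismatch in the dot product):

import numpy as np
import theano
import theano.tensor as T

# Compute test values eagerly and complain about any input that lacks one.
theano.config.compute_test_value = 'raise'

x = T.matrix('x')
x.tag.test_value = np.random.rand(3, 4).astype(theano.config.floatX)

W = T.matrix('W')
W.tag.test_value = np.random.rand(5, 6).astype(theano.config.floatX)

# The shapes (3, 4) and (5, 6) are incompatible, so the error is raised
# right here, at graph-construction time, instead of after compilation.
y = T.dot(x, W)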

pydev debugger not working

I am working with PyDev (latest version) and the debugger is not working anymore (specifically, breakpoints do not work). I get a strange error:
pydev debugger: starting
Traceback (most recent call last):
with no further text. ...
I am working with Stackless Python 2.7 and PySide (almost the latest version). The breakpoints that are not working are not within Stackless tasklets.
Anyone know the cause or a fix?
OK (slightly embarrassed), I have had a similar problem in the past, posted here and got extensive help here.
I used that post to pinpoint the problem to this method:
def __getattr__(self, name):
    if name in self._code_:
        func = self.getfunction(name)
        setattr(self, name, func)
        return func
    else:
        return super(AtomicProcess, self).__getattr__(name)
I would like to use this or a similar method to set the attribute at the latest possible time (when it is first accessed). I added the super call to possibly fix the problem, but no dice.
Does anyone know what causes the problem in this method?
Does anyone have a fix that achieves the late initialization but avoids the pydev problem?
Also, I should mention that my code runs without problems, but the debugger seems to go into some infinite recursion in the method above, recovers, and then ignores breakpoints after this method.
Cheers, Lars
PS: anyone? Are the PyDev developers following Stack Overflow, or is there another place I might try?
It seems it's something like the previous issue, although I'm not sure what (if you can pass me the code I can take a look at it, but without it, the only thing I can do is point to the last thread we had).
Keep in mind that if you have a recursion exception, this is something that breaks the Python debugging facility... What you can do as a workaround in the meantime is to use the remote debugger to improve on that.
I do have a hunch, though:
My guess is that you access something in 'self' which calls __getattr__ again... (which generates a recursion and breaks the debugger).
Another possible thing: instead of using the 'super' idiom in: super(AtomicProcess, self).__getattr__(name), use the superclass directly: Superclass.__getattr__(self, name)...
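For illustration, a minimal sketch of that recursion trap, reusing the class from the question (the guard shown is hypothetical). Note also that object defines no __getattr__, so the super(...) call in the original ends up raising AttributeError itself rather than delegating:

class AtomicProcess(object):
    def __getattr__(self, name):
        # __getattr__ only runs when normal attribute lookup fails. If
        # '_code_' itself is not set yet (e.g. the debugger inspects the
        # object before __init__ has run, or during unpickling), the
        # self._code_ lookup below fails as well, __getattr__ is re-entered,
        # and the lookup recurses until Python's recursion limit is hit.
        if name == '_code_':
            raise AttributeError(name)  # hypothetical guard to break the cycle
        if name in self._code_:
            func = self.getfunction(name)
            setattr(self, name, func)
            return func
        raise AttributeError(name)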
Cheers,
Fabio

Why does Python (WLST) tell me a documented function doesn't exist?

I'm using the WebLogic Scripting Tool, aka WLST, a Python-based shell environment, to programmatically edit variables in Plan.xml files for projects to be deployed to the WebLogic server. I can get as far as getting an instance of the WLSTPlan object, and can run getVariables and other methods to check that it is populated and view its contents. However, when I try to call the setVariable method, I get an AttributeError, which to my limited understanding means the method doesn't exist:
wls:/UoADevDomain/serverConfig> plan.setVariable("foo", "bar")
Traceback (innermost last):
File "<console>", line 1, in ?
AttributeError: setVariable
As the docs linked above (which I checked are the right version) show, this method definitely should exist, and other methods listed in the same doc work. I'm not sure if this is an issue with WebLogic or with my understanding of Python, but either way it's beyond me. I tried using the dir() function to list the plan object's attributes, but it returned an empty set, so I guess that trick doesn't work in this environment.
Can anyone suggest how to go about diagnosing this problem, or better still fixing it?
Using javap and looking for setters on the WLSTPlan bean shows only the following setter:
void setVariableValue(java.lang.String, java.lang.String);
It could be a typo in the docs. Can you try 'setVariableValue' instead?
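That is (untested, but mirroring the arguments of the failing call from the question):

# Same arguments as the failing setVariable call, using the setter name
# that javap actually reports on the bean.
plan.setVariableValue("foo", "bar")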
The documentation is rather unclear, but from reading between the lines, it looks like setVariable is a method that you invoke on a VariableBean.
I'd try using the following:
plan.createVariable("foo").setVariable("foo", "bar");
(that's without having tested the code, though)

Vim Python omni-completion failing to work on system modules

I'm noticing that even for system modules, code completion doesn't work too well.
For example, if I have a simple file that does:
import re
p = re.compile(pattern)
m = p.search(line)
If I type p., I don't get completion for the methods I'd expect to see (I don't see search(), for example, but I do see others, such as func_closure() and func_code()).
If I type m., I don't get any completion whatsoever (I'd expect .groups(), in this case).
This doesn't seem to affect all modules. Has anyone seen this behaviour and know how to correct it?
I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running python 2.6.2.
Completion for this kind of thing is tricky, because it would need to execute the actual code to work.
For example, p.search() could return None or a MatchObject, depending on the data that is passed to it.
This is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.
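A quick illustration of why the return type cannot be determined statically (the pattern and inputs here are arbitrary):

import re

p = re.compile(r"\d+")

print(type(p.search("abc123")))  # a match object -> .groups() etc. are available
print(type(p.search("abc")))     # NoneType -> nothing to complete on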
I never got the built-in omnicomplete to work for any language. I had the most success with pysmell (which seems to have been updated slightly more recently on GitHub than in the official repo). I still didn't find it reliable enough to use consistently, but I can't remember exactly why.
I've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement them.
