_shutdown AttributeError (ignored) when linting code that uses M2Crypto - python

I'm running lint as follows:
$ python -m pylint.lint m2test.py
with this code:
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
The lint output ends with:
Exception AttributeError: '_shutdown' in <module 'threading' from '/usr/lib/python2.7/site-packages/M2Crypto-0.21.1-py2.7-linux-x86_64.egg/M2Crypto/threading.pyc'> ignored
This code works fine when run (the above is a minimal test case; the full version also works). The exception is ignored, but Bitten considers it a failure, so it stops at this step.
I've tried adding 'M2Crypto.threading.init()'/'M2Crypto.threading.cleanup()' around the definition of the function, but that didn't fix the problem.
How can I prevent this problem from occurring?
I'm using M2Crypto 0.21.1, pylint 0.24 and Python 2.7 (also tried 2.7.2) on Debian Lenny x86_64.

The exception that you are seeing is caused by a bug in the astng package (presumably “Abstract Syntax Tree, Next Generation”?) which is a toolkit on which pylint depends, written by the same people. I should note in passing that I always encourage people to use pyflakes instead of pylint when possible, because it is quick, simple, fast, and predictable, whereas pylint tries to do several kinds of deep magic that are not only slow but that can get it into exactly this kind of trouble. :)
Here are the two packages on PyPI:
http://pypi.python.org/pypi/pylint
http://pypi.python.org/pypi/astng
And note that this problem had to be, necessarily, a bug in pylint and not in your code, because pylint does not run your code in order to produce its report — imagine the havoc that could be wreaked if it did (since code being linted might delete files, etcetera)! Since your code does not get run, no amount of caution, like protecting your call with threading init() or cleanup() functions, could possibly have prevented this error — unless the code snippets happened, for other reasons, to alter the behavior we are about to investigate.
So, on to your actual exception.
I had never actually heard of _shutdown before! A quick search of the Python standard library showed its definition in threading.py but not a call of the function from anywhere; only by searching the Python C source code did I discover where in pythonrun.c, during interpreter shutdown, the function is actually called:
static void
wait_for_thread_shutdown(void)
{
    ...
    PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
                                                  "threading");
    if (threading == NULL) {
        /* threading not imported */
        PyErr_Clear();
        return;
    }
    result = PyObject_CallMethod(threading, "_shutdown", "");
    if (result == NULL) {
        PyErr_WriteUnraisable(threading);
    }
    ...
}
Apparently it is some sort of cleanup function that the threading Standard Library module requires, and they have special-cased the Python interpreter itself to make sure that it gets called.
As you can see from the code above, Python quietly and without complaint handles the case where the threading module never gets imported during a program's run. But if threading does get imported, and still exists at shutdown time, then the interpreter looks inside for a _shutdown function and goes so far as to print an error message — and then return a non-zero exit status, the cause of your problems — if it cannot call it.
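If you want to see that "ignored" message on its own, here is a quick demonstration of my own (not part of pylint): plant an empty module under the name threading in sys.modules and let the interpreter exit.
import sys, types

# Plant a stand-in module that has no _shutdown attribute.
sys.modules['threading'] = types.ModuleType('threading')
# When this script exits, Python 2.7 prints an "Exception AttributeError: ... ignored"
# message much like the one in the question.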
So we have to discover why the threading module exists but has no _shutdown method at the moment when pylint is done examining your program and Python is exiting. Some instrumentation is called for. Can we print out what the module looks like as pylint exits? We can! The pylint/lint.py module, in its last few lines, runs its “main program” by instantiating a Run class it has defined:
if __name__ == '__main__':
    Run(sys.argv[1:])
So I opened lint.py in my editor — one of the magnificent things about having each little project installed in a Python Virtual Environment is that I can jump in and edit third-party code for quick experiments — and added the following print statement down at the bottom of the Run class's __init__() method:
sys.path.pop(0)
print "*****", sys.modules['threading'].__file__  # added by me!
if exit:
    sys.exit(self.linter.msg_status)
I re-ran the command:
python -m pylint.lint m2test.py
And out came the __file__ string of the threading module:
***** /home/brandon/venv/lib/python2.7/site-packages/M2Crypto/threading.pyc
Well, look at that.
This is the problem!
According to this path, there actually exists an M2Crypto/threading.py module that, under all normal circumstances, should just be called M2Crypto.threading, and therefore sit in the sys.modules dictionary under the name:
sys.modules['M2Crypto.threading']
But somehow that file is also getting loaded as the main Python threading module, shadowing the official threading module that sits in the Standard Library. Because of this, the Python exit logic is quite correctly complaining that the Standard Library _shutdown() function is missing.
How could this happen? Top-level modules can only appear in paths that are listed explicitly in sys.path, not in sub-directories beneath them. This leads to a new question: is there any point during the pylint run that the …/M2Crypto/ directory itself is getting put on sys.path as though it contained top-level modules? Let's see!
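Here is a tiny demonstration of that shadowing effect, using a made-up package (mypkg and its path are hypothetical, and it only works if the Standard Library threading has not already been imported in the process):
import sys
sys.path.insert(0, '/path/to/site-packages/mypkg')  # the package directory itself, not its parent
import threading  # now resolves to mypkg/threading.py instead of the Standard Library module
print threading.__file__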
We need more instrumentation: we need to have Python tell us the moment that a directory with M2Crypto in the name appears in sys.path. It will really slow things down, but let's add a trace function to pylint's __init__.py — because that is the first module that gets imported when you run -m pylint.lint — that will write an output file telling us, for every line of code executed, whether sys.path has any bad values in it:
def install_tracer():
    import sys
    output = open('mytracer.out', 'w')
    def mytracer(frame, event, arg):
        broken = any(p.endswith('M2Crypto') for p in sys.path)
        output.write('{} {}:{} {}\n'.format(
            broken, frame.f_code.co_filename, frame.f_lineno, event))
        return mytracer
    sys.settrace(mytracer)

install_tracer()
del install_tracer
Note how careful I am here: I define only one name in the module's namespace, and then carefully delete it to clean up after myself before I let pylint continue loading! And all of the resources that the trace function itself needs — namely, the sys module and the output open file — are available in the install_tracer() closure so that, from the outside, pylint looks exactly the same as always. Just in case anyone tries to introspect it, like pylint might!
This generates a file mytracer.out of about 800k lines, each of which looks something like this:
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
The False says that sys.path looks clean, the filename and line number are the line of code being executed, and call indicates what stage of execution the interpreter is in.
So does sys.path ever get poisoned? Let's look at just the first True or False on each line, and see how many successive lines start with each value:
$ awk '{print$1}' mytracer.out | uniq -c
607997 False
3173 True
4558 False
33217 True
4304 False
41699 True
2953 False
110503 True
52575 False
Wow! That's a problem! For runs of several thousand lines at a time, our test case is True, which means that the interpreter is running with …/M2Crypto/ — or some variant of a pathname with M2Crypto in it — on the path, where it should not be; only the directory that contains …/M2Crypto should ever be on the path. Looking for the first False to True transition in the file, I see this:
False /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:132 line
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
...
False /home/brandon/venv/lib/python2.7/posixpath.py:124 line
False /home/brandon/venv/lib/python2.7/posixpath.py:124 return
True /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:133 line
And looking at lines 132 and 133 in the builder.py file reveals our culprit:
130        # build astng representation
131        try:
132            sys.path.insert(0, dirname(path)) # XXX (syt) iirk
133            node = self.string_build(data, modname, path)
134        finally:
135            sys.path.pop(0)
Note the comment, which is part of the original code, not an addition of my own! Obviously, XXX (syt) iirk is an exclamation in this programmer's strange native language for the phrase, “put this module's parent directory on sys.path so that pylint will break mysteriously every time someone forces pylint to introspect a package with a threading sub-module.” It is, obviously, a very compact native language. :)
If you adjust the tracing module to watch sys.modules for the actual import of threading — an exercise I will leave to the reader — you will see that it happens when SocketServer, which is imported by some other Standard Library module during the analysis, in turn tries to innocently import threading.
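Here is a sketch of my own of one way to make that adjustment (it mirrors the tracer above and is, again, not part of pylint): record the file and line being executed the first time that 'threading' shows up in sys.modules.
def install_tracer():
    import sys
    output = open('mytracer.out', 'w')
    seen = [False]
    def mytracer(frame, event, arg):
        if not seen[0] and 'threading' in sys.modules:
            seen[0] = True
            output.write('threading imported near {}:{} -> {}\n'.format(
                frame.f_code.co_filename, frame.f_lineno,
                sys.modules['threading'].__file__))
            output.flush()
        return mytracer
    sys.settrace(mytracer)

install_tracer()
del install_tracer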
So let us review what is happening:
pylint is dangerous magic.
As part of its magic, if it sees you import foo, then it runs off trying to find foo.py on disk, to parse it, and to predict whether you are loading valid or invalid names from its namespace.
Because you call .split() on the return value of RSA.as_pem(), pylint tries to introspect the as_pem() method, which in turn uses the M2Crypto.BIO module, which in turn makes calls that induce pylint to import threading.
As part of loading any module foo.py, pylint throws the directory containing foo.py on sys.path, even if that directory is inside a package, and therefore gives modules in that directory the privilege of shadowing Standard Library modules of the same name during its analysis.
When Python exits, it is upset that the M2Crypto.threading library is sitting where threading belongs, because it wants to run the _shutdown() method of threading.
You should report this as a bug to the pylint / astng folks at logilab.org. Tell them I sent you.
If you decide to keep using pylint after it has done this to you, then there seem to be two solutions in this case: either don't inspect code that calls M2Crypto, or import threading during the pylint import process — by sticking import threading into the pylint/__init__.py, for example — so that the module gets the chance to grab the sys.modules['threading'] slot before pylint gets all excited and tries to let M2Crypto/threading.py grab the slot instead.
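For example, here is a small wrapper of my own (run_pylint.py is my name, not part of pylint) that applies the second workaround without editing pylint's source:
import sys
import threading  # let the Standard Library module claim sys.modules['threading'] first
from pylint import lint
lint.Run(sys.argv[1:])  # the same entry point that 'python -m pylint.lint' uses
Then run it as python run_pylint.py m2test.py.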
In conclusion, I think the author of astng says it best: XXX (syt) iirk. Indeed.

Many thanks to Brandon Craig Rhodes for tracing this down and for such a detailed post.
I've removed the offending line from astng; the code is available from the hg repository until logilab-astng 0.23.0 is out. And I can confirm this fixes the OP's problem.

This looks more like a hack, but I think it works: copy the result of "as_pem()" and split the copy.
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None)[:].split("\n")
I'm using Python 2.6.7, M2Crypto 0.21.1, pylint 0.23

I was unable to reproduce this (pylint 0.24 and M2Crypto 0.21.1 on Ubuntu 11.04 64-bit), but here are two suggestions:
Explicitly initialize threading:
import M2Crypto

def f():
    M2Crypto.threading.init()
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
    M2Crypto.threading.cleanup()
Or recompile M2Crypto without threading support (remove the -DTHREADING flag from the Extension definition in setup.py):
m2crypto = Extension(name = 'M2Crypto.__m2crypto',
                     sources = ['SWIG/_m2crypto.i'],
                     extra_compile_args = ['-DTHREADING'],
                     #extra_link_args = ['-Wl,-search_paths_first'], # Uncomment to build Universal Mac binaries
                     )

Related

Modifying/debugging frozen builtins like importlib/_bootstrap_external.py

Short version: How do I debug frozen libs? Can I tell Python not to use them (and use source files instead) or re-freeze them somehow?
I'm trying to learn more about the inner workings of Python's import mechanisms. Going down that rabbit hole, I want to debug /usr/lib/python37/importlib/_bootstrap_external.py.
Using python -m trace --trace myscript.py gave me indications of where in _bootstrap_external.py I am landing, but not values. So I turned to pdb and it gave me a bit of information but appears to be skipping _frozen objects. (?).
So I added a few print('moo') calls where I saw my script ending up in _bootstrap_external.py, but no prints are ever made, because Python internally uses <class '_frozen_importlib_external.PathFinder'>.
That led me to a desperate attempt at renaming/removing /usr/lib/python37/importlib/__pycache__ in the hope that Python would recompile my changes. But no such luck; Python still uses a frozen version.
So I modified /usr/lib/python37/importlib/__init__.py, where it imports the _bootstrap_external module, changing this:
try:
    import _frozen_importlib_external as _bootstrap_external
except ImportError:
    from . import _bootstrap_external
    _bootstrap_external._setup(_bootstrap)
    _bootstrap._bootstrap_external = _bootstrap_external
else:
    _bootstrap_external.__name__ = 'importlib._bootstrap_external'
    _bootstrap_external.__package__ = 'importlib'
    try:
        _bootstrap_external.__file__ = __file__.replace('__init__.py', '_bootstrap_external.py')
    except NameError:
        # __file__ is not guaranteed to be defined, e.g. if this code gets
        # frozen by a tool like cx_Freeze.
        pass
    sys.modules['importlib._bootstrap_external'] = _bootstrap_external
to this:
from . import _bootstrap_external
_bootstrap_external._setup(_bootstrap)
_bootstrap._bootstrap_external = _bootstrap_external
_bootstrap_external.__name__ = 'importlib._bootstrap_external'
_bootstrap_external.__package__ = 'importlib'
try:
    _bootstrap_external.__file__ = __file__.replace('__init__.py', '_bootstrap_external.py')
except NameError:
    # __file__ is not guaranteed to be defined, e.g. if this code gets
    # frozen by a tool like cx_Freeze.
    pass
sys.modules['importlib._bootstrap_external'] = _bootstrap_external
And that worked, ish. Obviously the problem will become more and more complex, and there has to be a more generic solution. But I continued my journey and found that _bootstrap.py is indeed loaded with my print('moo') changes. But then /usr/lib/python37/importlib/_bootstrap.py later calls the function _find_spec, which does:
for finder in meta_path:
    with _ImportLockContext():
        find_spec = finder.find_spec
        spec = find_spec(name, path, target)
find_spec will refer to the frozen version again. (Code snippet shortened to the relevant code in _bootstrap.py)
So my question finally ends up in, how do I debug these frozen files? or how do I make it so Python ignores frozen libraries (I can take the performance impact when debugging). Every attempt to find information on this (searching, docs, irc, etc) ends up on "why?" or "don't do this" or it points me to py2exe and how to freeze my libraries. I just want to be able to debug and understand more of how the inner mechanics work by trying things and looking at the variables.
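For reference, this is how I can see that it is the frozen classes that are in use, by printing the finders on sys.meta_path:
import sys
for finder in sys.meta_path:
    print(finder)
# <class '_frozen_importlib.BuiltinImporter'>
# <class '_frozen_importlib.FrozenImporter'>
# <class '_frozen_importlib_external.PathFinder'>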

Testing for presence of IPython

I have the same code, which I sometimes run from the command line as python source.py and sometimes copy/paste into interactive IPython. I would like to execute slightly different code in each case, and tried to embed the differences in the following code block:
try:
    __IPYTHON__
    # do IPython-mode code
except NameError:
    pass  # do script-mode code
I know that in Python this is a common technique, but it screws up automated flagging in PyCharm and I was hoping to find some more attractive way to test for the presence of IPython, perhaps by testing '__IPYTHON__' in ... or something similar. However, I saw that this symbol is neither in locals() nor in globals(). Is there any other place I can test for its presence, or any other more conventional test, not using exceptions directly?
I am using Python 2.7.11 and IPython 5.1.0.
Thank you very much.
UPDATE Related but unhelpful: How do I check if a variable exists?
I think you are looking for
if hasattr(__builtins__, '__IPYTHON__'):
    print('IPython')
else:
    print('Nope')
While the currently accepted answer works in the __main__ namespace, it won't work in a package, because __builtins__ is a dict in other namespaces. This should work in any namespace in Python 3:
import builtins
getattr(builtins, "__IPYTHON__", False)
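A small wrapper around the same idea (in_ipython is my own name, not a standard API), which should work in any namespace on Python 3:
import builtins

def in_ipython():
    # True when IPython has injected its __IPYTHON__ marker into builtins
    return bool(getattr(builtins, "__IPYTHON__", False))

if in_ipython():
    print('IPython')
else:
    print('Nope')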

Run python program from another python program (with certain requirements)

Let's say I have two python scripts A.py and B.py. I'm looking for a way to run B from within A in such a way that:
B believes it is __main__ (so that code in an if __name__=="__main__" block in B will run)
B is not actually __main__ (so that it does not, e.g., overwrite the "__main__" entry in sys.modules)
Exceptions raised within B propagate to A (i.e., could be caught with an except clause in A).
Those exceptions, if not caught, generate a correct traceback referencing line numbers within B.
I've tried various techniques, but none seem to satisfy all my requirements.
using tools from the subprocess module means exceptions in B do not propagate to A.
execfile("B.py", {}) runs B, but it doesn't think it's main.
execfile("B.py", {'__name__': '__main__'}) makes B.py think it's main, but it also seems to screw up the exception traceback printing, so that the tracebacks refer to lines within A (i.e., the real __main__).
using imp.load_source with __main__ as the name almost works, except that it actually modifies sys.modules, thus stomping on the existing value of __main__
Is there any way to get what I want?
(The reason I'm doing this is because I'm doing some cleanup on an existing library. This library has no real test suite, just a set of "example" scripts that produce certain output. I'm trying to leverage these as tests to ensure that my cleanup doesn't affect the library's ability to execute these examples, so I want to run each example script from within my test suite. I'd like to be able to see exceptions from these scripts within the test script so the test script can report the type of failure, instead of just reporting a generic SubprocessError whenever an example script raises some exception.)
Your use case makes sense, but I still think you'd be better off refactoring the tests such that they can be run externally.
Do your test scripts have something like this?
def test():
    pass

if __name__ == '__main__':
    test()
If not, perhaps you should convert your tests to calling a function such as test. Then, from your main test script, you can just:
import test1
test1.test()
import test2
test2.test()
Provide a common interface for running tests that the tests themselves use. Having a big block of code in a __main__ check is Not A Good Thing.
Sorry that I didn't answer the question you asked, but I feel this is the correct solution without deviating too far from the original test code.
Answering my own question because the result is kind of interesting and might be useful to others:
It turns out I was wrong: execfile("B.py", {'__name__': '__main__'}) is the way to go after all. It does correctly produce the tracebacks. What I was seeing with incorrect line numbers weren't exceptions but warnings. These warnings were produced using warnings.warn("blah", stacklevel=2). The stacklevel=2 argument is supposed to allow for things like deprecation warnings to be raised where the deprecated thing is used, rather than at the warning call (see the documentation).
However, it seems that the execfile-d file doesn't count as a "stack level" for this purpose, and is invisible for stacklevel purposes. So if code at the top level of an execfile-d module causes a warning with stacklevel 2, the warning is not raised at the right line number in the execfile-d source file; instead it is raised at the corresponding line number of the file which is running the execfile.
This is unfortunate, but I can live with it, since these are only warnings, so they won't impact the actual performance of the tests. (I didn't notice at first that it was only the warnings that were affected by the line-number mismatches, because there were lots of warnings and exceptions intermixed in the test output.)
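For completeness, here is a minimal sketch of the approach that worked (Python 2 only, since execfile was removed in Python 3), where A.py runs B.py and catches its exceptions:
try:
    execfile("B.py", {'__name__': '__main__'})
except Exception:
    import traceback
    traceback.print_exc()  # frames from inside B.py report B.py line numbers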

Disabling python's assert() without the -O flag

I'm running a python script from inside a different software (it provides a python interface to manipulate its data structures).
I'm optimizing my code for speed and would like to see what impact on performance my asserts have.
I'm unable to use python -O. What other options do I have to programmatically disable all asserts in Python code? The variable __debug__ (which is cleared by the -O flag) cannot be assigned to :(
The docs say,
The value for the built-in variable [__debug__] is determined when the interpreter starts.
So, if you cannot control how the Python interpreter is started, then it looks like you cannot disable assert.
Here then are some other options:
The safest way is to manually remove all the assert statements.
If all your assert statements occur on lines by themselves, then perhaps you could remove them with
sed -i 's/assert /pass #assert /g' script.py
Note that this will mangle your code if other code comes after the assert. For example, the sed command above would comment-out the return in a line like this:
assert x; return True
which would change the logic of your program.
If you have code like this, it would probably be best to manually remove the asserts.
There might be a way to remove them programmatically by parsing your script with the tokenize module, but writing such a program to remove asserts may take more time than it would take to manually remove the asserts, especially if this is a one-time job.
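For what it's worth, here is a rough sketch of removing asserts programmatically. It uses the ast module rather than tokenize (simpler, but it discards comments and formatting when executing the transformed tree), so treat it as illustrative only:
import ast
import sys

class StripAsserts(ast.NodeTransformer):
    def visit_Assert(self, node):
        # replace each assert with 'pass' so that no block is left empty
        return ast.copy_location(ast.Pass(), node)

filename = sys.argv[1]
tree = StripAsserts().visit(ast.parse(open(filename).read(), filename))
exec(compile(tree, filename, 'exec'))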
If the other piece of software accepts .pyc files, then there is a dirty trick which seems to work on my machine, though note a Python core developer warns against this (see Éric Araujo's comment on 2011-09-17). Suppose your script is called script.py.
Make a temporary script called, say, temp.py:
import script
Run python -O temp.py. This creates script.pyo.
Move script.py and script.pyc (if it exists) out of your PYTHONPATH or whatever directory the other software is reading to find your script.
Rename script.pyo --> script.pyc.
Now when the other software tries to import your script, it will only find the pyc file, which has the asserts removed.
For example, if script.py looks like this:
assert False
print('Got here')
then running python temp.py will now print Got here instead of raising an AssertionError.
You may be able to do this with an environment variable, as described in this other answer. Setting PYTHONOPTIMIZE=1 is equivalent to starting Python with the -O option. As an example, this works in Blender 2.78, which embeds Python 3.5:
blender --python-expr 'assert False; print("foo")'
PYTHONOPTIMIZE=1 blender --python-expr 'assert False; print("foo")'
The first command prints a traceback, while the second just prints "foo".
As @unutbu describes, there is no official way of doing this. However, a simple strategy is to define a flag like _test somewhere (for example, as a keyword argument to a function, or as a global variable in a module), then include this in your assert statements as follows:
def f(x, _test=True):
    assert not _test or x > 0
    ...
Then you can disable asserts in that function if needed.
f(x, _test=False)
